
Automating event log backup

Recently we received a question from a customer who asked about regularly backing up the System and Application event logs. He wanted to back up only local logs, but let’s extend the task to remote logs as well. So our task is to back up the System and Application event logs from a local computer and a remote machine (SERV1) into a folder twice a week.

Let’s try to do it using standard tools. Windows Event Viewer lets you back up an event log – there is a “Save All Events As” command, and you should save the events in evtx format. However, it doesn’t allow you to back up an event log from a remote server to a local computer or vice versa. You will get a message:

An error occurred while Event Viewer was saving a log file in .evtx format from SERVER to PATH.
Events from the remote computer cannot be saved into .evtx format files on the local computer. If you want to save the events to the local computer, select a different file format. To save the events in .evtx format, save them onto the remote computer.

Why does this happen? The Windows Event Log service (eventlog), which is responsible for all main event log functionality, runs under the LocalService account. LocalService presents anonymous credentials on the network, so it has no permission to back up an event log anywhere but on the computer where the service is running. The wevtutil.exe command has the same limitation – it works with local event logs only.

What about Event Log Explorer? We designed Event Log Explorer to back up remote event logs as easily as local ones. When backing up a remote log, it saves the log into a shared folder on the remote computer and then moves it into the target folder.

Just right-click the log you wish to back up in the tree and select Save Log As. Event Log Explorer will do the rest.

event log explorer backup

Note that you must have administrative permissions on the server Serv1, or you won’t be able to access files through the ADMIN$ share.

Of course, you can back up event logs manually, but our user asked us to automate the process. Event Log Explorer comes with the ELBACK utility, which is located in the application folder (the default location is “C:\Program Files (x86)\Event Log Explorer\elback.exe”). This utility is designed specifically for batch backup of event logs. Detailed information about it is available in the Event Log Explorer help.

Here is our command line to backup Application and System logs from a local PC and Serv1:

elback.exe D:\backup Application System \\Serv1\Application \\Serv1\System /clear

Remove the /clear option if you don’t want the logs to be emptied automatically after backup.

Run this command to test it. If it works correctly, you will see 4 new files in the D:\Backup folder with names like Serv1-System-2016-05-18-10-56-056.evt.

Now we can automate the process using Windows Task Scheduler. In the article Exporting event logs with Windows PowerShell I described how to create a scheduled task with PowerShell; here I will just use the Windows UI.

Click the Windows Start button, then All Programs->Accessories->System Tools->Task Scheduler.
Select Task Scheduler Library in the tree.
Create Basic Task from the menu.
Give any name to the task, e.g. “Logs backup”.
Click Next.
On Task Trigger page select Weekly.
Click Next.
Specify when you want to run the task.
Click Next.
Select “Start a program”.
Click Next.
In the Program/Script line type
“C:\Program Files (x86)\Event Log Explorer\elback.exe”
In the Add arguments field type
D:\backup Application System \\Serv1\Application \\Serv1\System /clear
Click Next.
Click Finish.
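
If you prefer to create the task from a console instead of clicking through the wizard, here is a minimal sketch using the ScheduledTasks cmdlets (available since Windows 8/Server 2012; run PowerShell elevated). The days and time below are examples only – adjust them to your own schedule:

# Minimal sketch, not from the Event Log Explorer documentation – adjust paths, servers and days
$exe       = 'C:\Program Files (x86)\Event Log Explorer\elback.exe'
$arguments = 'D:\backup Application System \\Serv1\Application \\Serv1\System /clear'
$action    = New-ScheduledTaskAction -Execute $exe -Argument $arguments
$trigger   = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Monday, Thursday -At '7:00AM'
# -RunLevel Highest is the equivalent of the "Run with highest privileges" checkbox
Register-ScheduledTask -TaskName 'Logs backup' -Action $action -Trigger $trigger -RunLevel Highest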

Now you can find your task in the list.
Right-click the task and select Run to test it.
Check the D:\Backup folder to see if new backup files appear.

Important note: if you decide to back up a local Security event log, you will probably need to elevate permissions to access the Security log. To do so, right-click the Logs backup task in Task Scheduler and select Properties. In the Properties dialog, enable the “Run with highest privileges” checkbox.

Scheduling event log backup

Don’t forget to test the task before going to production.


Tracking down who removed files

Let’s assume you have a shared folder on a server which is accessible to all employees in your company. Users commonly copy documents into this folder to let others work with these shared documents. One day you discover that some files have unexpectedly disappeared from the shared folder. Usually this means that someone deleted these files (consciously or not). Now we need to identify the person who removed the files.

First, you need to set up Windows security auditing to monitor file access (and optionally logon) events. Of course, you should do it right after creating a shared folder and granting access to it (setting it up after the fact won’t help you). This article describes how to set up security auditing and audit file access and logon events.

If you correctly set up file access auditing for your shared folder, “File system” events will appear in the Security log on every attempt to open a file inside the folder.

file system auditing

So make sure that the maximum log size for the Security log is set to a reasonable value (otherwise you risk losing old events). Microsoft recommends 4GB for most Windows versions, but this depends on different factors – I prefer much smaller sizes with the autobackup option.

Event 4660 occurs when someone removes a file or a folder. But its event description doesn’t contain the file name:

An object was deleted.

Subject:
            Security ID:                  S-1-5-21-3946697505-1589476648-2597793080-1114
            Account Name:                 mike
            Account Domain:               FSPRO
            Logon ID:                     0084C195

Object:
            Object Server:   Security
            Handle ID:        00000AC8

Process Information:
            Process ID:       00000004
            Process Name: 
            Transaction ID: {00000000-0000-0000-0000-000000000000}

In fact, when a user deletes a file, Windows registers several events: 4663 and then 4660 (it can also register event 4656 before 4663).

Here is a sample of 4663 event description:

An attempt was made to access an object.

Subject:
            Security ID:                  S-1-5-21-3946697505-1589476648-2597793080-1114
            Account Name:                 mike
            Account Domain:               FSPRO
            Logon ID:                     0084C195

Object:
            Object Server:   Security
            Object Type:     File
            Object Name:     C:\shared\Data\_DSC9978.JPG
            Handle ID:       00000AC8

Process Information:
            Process ID:      00000004
            Process Name: 

Access Request Information:
            Accesses:        DELETE
            Access Mask:     10000

Notice that the “Access Request Information” group contains the parameters Accesses: DELETE and Access Mask: 10000.

So we can simply filter the Security event log by Event ID = 4663 and Access Request Information\Accesses = DELETE (and if you enabled auditing for several folders but want to check a specific one, you should also add a filter by Object\Object Name):

event 4663 - filter deleted events

Now we can see all “file delete” events with file names.
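
By the way, if you ever need to do a quick check without Event Log Explorer, a rough PowerShell sketch of the same filter could look like this (my own example, not a feature of the product; the data field names are the standard 4663 parameters, and reading the Security log requires an elevated console):

# Sketch: list 4663 events whose Accesses field contains DELETE and show who tried to delete what
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4663 } |
  Where-Object { $_.Message -match 'Accesses:\s+DELETE' } |
  ForEach-Object {
    $evt = $_
    $x = [xml]$evt.ToXml()                    # the details live in EventData
    $d = @{}
    $x.Event.EventData.Data | ForEach-Object { $d[$_.GetAttribute('Name')] = $_.'#text' }
    [pscustomobject]@{
      Time    = $evt.TimeCreated
      Account = "$($d['SubjectDomainName'])\$($d['SubjectUserName'])"
      Object  = $d['ObjectName']
      LogonId = $d['SubjectLogonId']
    }
  }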

This method works most of the time, but I wouldn’t call it perfect. First, nobody guarantees that Accesses will always be exactly DELETE (although you can try Access Request Information\Accesses Contains DELETE). Second, the 4663 event occurs on an access attempt; in some cases, e.g. if your file is protected, event 4660 won’t appear. So to get a more accurate picture, we should rely on the 4660 events and take the details from the preceding 4663 events. Event Log Explorer features a Linked Filter, which allows you to link events in the Security log by a description parameter. Look again at the 4660 and 4663 event samples – you can link them by the Object\Handle ID parameter. Note that the Linked Filter scans events from top to bottom, so make sure you sorted events from new to old (our base event will be 4660).

linked filter - events 4660-4663

Here I got the same result as before.

Now we can also display who deleted the files. The event description keeps these details in the “Subject” group. I will use custom columns to show these details in the list:

custom columns-deleted files

Here is the result of adding custom columns:

showing-users who deleted files

You probably noticed that I added the Logon ID along with the user name. Using the Logon ID, we can detect from which machine user FSPRO\mike deleted the files.

Just set a new filter for event id = 4624 (An account was successfully logged on):

looking for machine - filter by event 4624

And we get the machine name and its IP address:

workstation name and ip address
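
If you prefer to do this correlation in a script, here is a sketch that reuses the XML-parsing approach from the previous example. The data field names (TargetLogonId, TargetUserName, WorkstationName, IpAddress) are the standard 4624 parameters; the Logon ID value below is just the one from our example, and the event XML stores it in hexadecimal form:

# Sketch: find the 4624 event matching a given Logon ID and read the workstation name and IP
$logonId = '0x84c195'
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4624 } |
  ForEach-Object {
    $x = [xml]$_.ToXml()
    $d = @{}
    $x.Event.EventData.Data | ForEach-Object { $d[$_.GetAttribute('Name')] = $_.'#text' }
    if ($d['TargetLogonId'] -eq $logonId) {
      '{0}\{1} logged on from {2} ({3})' -f $d['TargetDomainName'], $d['TargetUserName'],
                                            $d['WorkstationName'], $d['IpAddress']
    }
  }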

 


Windows boot performance diagnostics. Part 2

If you read the previous part of this article, you already know what the Microsoft-Windows-Diagnostics-Performance log is for, why similar events have different event types, and what information is “hidden” in the event details.

Now it’s time to practice. First, we need to decide what we will try to optimize. Let’s display only startup, shutdown and standby/resume events. Their event IDs are 100, 200 and 300 respectively, so let’s filter by Event ID = 100, 200, 300:
eventfilter-100-200-300

You can optionally switch warnings off in the filter to get only error and critical events, but I will leave them enabled.
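
If you want to double-check the same numbers outside Event Log Explorer, a rough Get-WinEvent equivalent of this on-load filter might look like this (my sketch; reading this log may require an elevated console):

# Levels: 1 = Critical, 2 = Error, 3 = Warning
Get-WinEvent -FilterHashtable @{
  LogName = 'Microsoft-Windows-Diagnostics-Performance/Operational'
  Id      = 100, 200, 300
  Level   = 1, 2, 3
} | Group-Object Id | Select-Object Name, Count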

Now we can view statistics of these events. Select Advanced from the main menu and click Analytical Reports. Then, in the Analytical Report window, click Report and select Event ID stats. I switched to the pivot chart for a better view:
boot performance analytical report - standby

As you can see, there are just a few Windows startup events in this log (and only 1 shutdown event). Most of the events here are Standby/Resume events. There are several reasons for this – I switch off or reboot the computer very rarely; most of the time I use hibernate or sleep to switch the computer off. My primary drive is an SSD, so I don’t have problems with prefetch at startup and my PC commonly starts up in about 10 seconds. And I don’t have too many autorun programs or services.

However, I really do have performance issues with hibernation and resume, so let’s check what the reason for this performance degradation can be. Event 300 contains lots of parameters (similar to event 100 described in the previous post), but they don’t describe the source of the problem. As you may remember, the details of the problems are listed in the events that follow the “basic” events; for standby/resume they are 3xx. So let’s filter by Event ID = 301-399 (note that if you troubleshoot boot-up events, it will be 101-110, or 101-199 to be safe).
eventfilter-301-399

We can build the Event ID stats diagram again (it is not necessary, but it may reveal the common reasons for the problems):
degradation-reasons-general

We can see that the typical reasons for the degradation are device driver issues (302, 351) and application issues (301).

So let’s concentrate only on these events: set the event filter to Event ID = 301, 302, 351.
eventfilter-301-302-351

Now we should somehow display the source of the problem. In this article, I showed how to use custom columns. Fortunately, for all these events the sources of the problem have the same sequence numbers:
3 – an application file name or a driver name
5 – friendly name of an application or a driver
11 – full path to an application or a driver.

You can get these numbers from the XML view of the events (double-click an event and switch to the XML tab).

I will use the friendly name (but be aware that some applications may have no friendly name):

Go to View->Custom columns and add a new column from PARAM[5]:
ProblemSource-CustomColumn

Now you can see the problem source as a column in the event list.
performance-diagnostics with custom column displaying faulty app

Let’s create a new pivot chart for this event log view to get statistics for the “Problem source” column. Go to Advanced->Analytical Reports, then select “CustomColumn1 stats” from the Report menu.
diagnostic performance - reasons of degradation

Now I can see that I should check if a new version of the ACPI Driver for NT is available (and I should also pay attention to the Default Hub Driver for USB and several other drivers and applications).

Although I could stop at this point, I will show one more way of detecting the sources of performance issues. When researching events 101+, 201+ and 301+, you can find two parameters under EventData: TotalTime and DegradationTime. TotalTime is the time in milliseconds the processing took (time to initialize a service, start an application, unload an application etc.). DegradationTime is how much longer than usual the processing took. So we can exclude events that did not take too much time but are still in the list. Let’s display only events that took 300 milliseconds or more. Do not close the pivot chart, just switch back to the Event Log Explorer window to set the filter. In the article “Advanced filtering. How to filter events by event description” I mentioned setting XML query filters using XPath syntax. Now we can use it to refine the events. Select View->XML query and type

*[EventData[Data[@Name='TotalTime']>'300']]

In my case, the number of displayed events was reduced 8 times. Switch back to the Analytical Report window and click the Reconcile button.
performance diagnostics - refined degradation reasons

And we see a different picture – BlueSoleilCS and ATAPI miniport contribute to the performance degradation.
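
For readers who prefer scripting, here is a hedged sketch that reproduces roughly the same picture with Get-WinEvent. The TotalTime field name comes from the event XML; the index used for the friendly name is an assumption based on the parameter numbers above (parameter 5 corresponds to index 4 in the 0-based .Properties collection) – verify it against the XML view on your system:

# Sketch: group standby/resume degradation events by friendly name, keeping only TotalTime >= 300 ms
Get-WinEvent -FilterHashtable @{
  LogName = 'Microsoft-Windows-Diagnostics-Performance/Operational'
  Id      = 301, 302, 351
} |
ForEach-Object {
  $evt   = $_
  $x     = [xml]$evt.ToXml()
  $total = [int](($x.Event.EventData.Data |
                  Where-Object { $_.GetAttribute('Name') -eq 'TotalTime' }).'#text')
  if ($total -ge 300) {
    [pscustomobject]@{ Source = $evt.Properties[4].Value; TotalTime = $total }   # index 4 is an assumption
  }
} | Group-Object Source | Sort-Object Count -Descending | Select-Object Count, Name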

I hope that you not only learned about Microsoft-Windows-Diagnostics-Performance events, but also picked up some helpful information about the advanced usage of Event Log Explorer. The article covered several topics that our users often miss: the XML view of an event, analytical reports, custom columns and XML query filters.


Windows boot performance diagnostics. Part 1

Have you ever noticed that your computer boots slowly? Or that it slowly restores its state from hibernation? Maybe you observe performance issues during the shutdown or hibernation process? In this article, I will show you how you can use Event Log Explorer to find performance problems linked to the startup/shutdown/hibernate/resume processes.

Starting from Windows Vista, Microsoft provides a bunch of event logs for different system purposes. In Windows Event Viewer, these logs are located in a special branch: “Applications and Services Logs”. Windows records performance diagnostics events into the Microsoft-Windows-Diagnostics-Performance/Operational event log. To open this log in Windows Event Viewer, open the Applications and Services Logs branch, then Microsoft, then Windows, then select Diagnostics-Performance and click Operational. In Event Log Explorer you can do it more easily: open your computer in the tree, then open the Microsoft-Windows folder and click Microsoft-Windows-Diagnostics-Performance/Operational.

Microsoft-Windows-Diagnostics-Performance

Now you can see a list of different diagnostic events. As you can see, events with the same event ID may have different types – Warning, Error and Critical. It looks like these event types depend on the duration of the startup/shutdown process. Another observation – these events contain many important parameters internally, but you can’t see them in the event description (Windows Event Viewer doesn’t display them either). The only way to display these parameters is to double-click the event to open the event properties and switch to the XML tab.

Compare:
diagnostics-performance event details

Unfortunately, I didn’t find detailed documentation about all these events, so I did some research. First, I worked out the meaning of the events in Microsoft-Windows-Diagnostics-Performance:

Event ID Event Description
Boot Performance Monitoring
100 Windows has started up
101 This application took longer than usual to start up, resulting in a performance degradation in the system startup process
102 This driver took longer to initialize, resulting in a performance degradation in the system start up process
103 This startup service took longer than expected to startup, resulting in a performance degradation in the system start up process
104 Core system took longer to initialize, resulting in a performance degradation in the system start up process
105 Foreground optimizations (prefetching) took longer to complete, resulting in a performance degradation in the system start up process
106 Background optimizations (prefetching) took longer to complete, resulting in a performance degradation in the system start up process
107 Application of machine policy caused a slow down in the system start up process
108 Application of user policy caused a slow down in the system start up process
109 This device took longer to initialize, resulting in a performance degradation in the system start up process
110 Session manager initialization caused a slow down in the startup process
Shutdown Performance Monitoring
200 Windows has shutdown
201 This application caused a delay in the system shutdown process
202 This device caused a delay in the system shutdown process
203 This service caused a delay in the system shutdown process
Standby Performance Monitoring
300 Windows has resumed from standby
301 This application caused a delay during standby
302 This driver caused a delay during standby while servicing a device
303 This service caused a delay during hybrid-sleep
304 Creation of the hiber-file was slower than expected
305 Persisting disk caches was slower than expected
306 Preparing the video subsystem for sleep was slower than expected
307 Preparing Winlogon for sleep was slower than expected
308 Preparing system memory for sleep was slower than expected
309 Preparing core system for sleep was slower than expected
310 Preparing system worker threads for sleep was slower than expected
350 Bios initialization time was greater than 250ms (logo requirement) during system resume
351 This driver responded slower than expected to the resume request while servicing this device
352 Reading the hiber-file was slower than expected
System Performance Monitoring
400 Information about the system performance monitoring event
401 This process is using up processor time and is impacting the performance of Windows
402 This process is doing excessive disk activities and is impacting the performance of Windows
403 This driver is using up too many resources and is impacting the performance of Windows
404 This driver is waiting longer than expected on a device
405 This file is fragmented and is impacting the performance of Windows
406 Disk IO to this file is taking longer than expected
407 This process is using up too much system memory
408 Many processes are using too much system memory
Desktop Window Manager Monitoring
500 The Desktop Window Manager is experiencing heavy resource contention
501 The Desktop Window Manager is experiencing heavy resource contention

Since we are researching boot/shutdown/standby/resume performance, we should pay attention to the 1xx, 2xx and 3xx events. Events 100, 200 and 300 are the basic events, followed by other 1xx, 2xx and 3xx events which give extra information about the problem.

E.g. I can see two events

Event 100 :

Windows has started up:
    Boot Duration        :              34857ms
    IsDegradation        :              0
    Incident Time (UTC)  :              18.11.2015 23:58:10

 

Event 102:

This driver took longer to initialize, resulting in a performance degradation in the system start up process:
    File Name             :              mssmbios
    Friendly Name         :              System Management BIOS Driver
    Version               :              6.1.7600.16385 (win7_rtm.090713-1255)
    Total Time            :              1027ms
    Degradation Time      :              1026ms
    Incident Time (UTC)   :              18.11.2015 23:58:10

So we can see that System Management BIOS Driver caused the delay in the system boot up.

Unfortunately, events 100, 200 and 300 sometimes occur alone without follow-up events. In this case it is impossible to detect the reason for the delay using the Diagnostics-Performance event log.

You should also pay attention to the parameters of event 100 (Microsoft-Windows-Diagnostics-Performance):

                 
 System
 ...
 EventData          
                 BootTsVersion 2
                 BootStartTime  2015-11-18T20:58:10.671600300Z
                 BootEndTime    2015-11-18T20:58:10.671600300Z
                 SystemBootInstance     89
                 UserBootInstance          84
                 BootTime           34857
                 MainPathBootTime       12657
                 BootKernelInitTime      21
                 BootDriverInitTime       3101
                 BootDevicesInitTime    18
                 BootPrefetchInitTime  0
                 BootPrefetchBytes        0
                 BootAutoChkTime         0
                 BootSmssInitTime          5084
                 BootCriticalServicesInitTime    436
                 BootUserProfileProcessingTime             2683
                 BootMachineProfileProcessingTime    0
                 BootExplorerInitTime   848
                 BootNumStartupApps  24
                 BootPostBootTime        22200
                 BootIsRebootAfterInstall           false
                 BootRootCauseStepImprovementBits 0
                 BootRootCauseGradualImprovementBits          0
                 BootRootCauseStepDegradationBits    4
                 BootRootCauseGradualDegradationBits             0
                 BootIsDegradation         False
                 BootIsStepDegradation               false
                 BootIsGradualDegradation        false
                 BootImprovementDelta              0
                 BootDegradationDelta 0
                 BootIsRootCauseIdentified       true
                 OSLoaderDuration         735
                 BootPNPInitStartTimeMS           21
                 BootPNPInitDuration    1342
                 OtherKernelInitDuration            332
                 SystemPNPInitStartTimeMS     1665
                 SystemPNPInitDuration              1776
                 SessionInitStartTimeMS             3451
                 Session0InitDuration    2088
                 Session1InitDuration    339
                 SessionInitOtherDuration          2656
                 WinLogonStartTimeMS               8536
                 OtherLogonInitActivityDuration             587
                 UserLogonWaitDuration             3551

Most of the parameters are self-descriptive.

Note that BootTime = MainPathBootTime + BootPostBootTime (in the sample above: 12657 + 22200 = 34857).

MainPathBootTime is the time (in milliseconds) from when the Windows logo first appears on screen until your desktop or logon prompt is presented.

BootPostBootTime is the time (in milliseconds) from the logon screen or desktop showing up until the system becomes actually usable (the system has reached 80% idle).
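
If you want to pull these numbers without opening a viewer, a small sketch along these lines should work (my example, not part of the article’s workflow; run it elevated if reading the log requires it):

# Sketch: read the three timings discussed above from the most recent event 100
$e = Get-WinEvent -MaxEvents 1 -FilterHashtable @{
       LogName = 'Microsoft-Windows-Diagnostics-Performance/Operational'; Id = 100 }
$x = [xml]$e.ToXml()
$d = @{}
$x.Event.EventData.Data | ForEach-Object { $d[$_.GetAttribute('Name')] = $_.'#text' }
'BootTime={0} ms, MainPathBootTime={1} ms, BootPostBootTime={2} ms' -f $d['BootTime'], $d['MainPathBootTime'], $d['BootPostBootTime']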

In many cases (but not in my sample), a long boot time is linked to the Windows Prefetcher – you will see that BootPrefetchInitTime is set to a large value, e.g. 60000 or more. In this case you can experiment with the Superfetch service and the Superfetch caching parameters (registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters). If you have just changed your HDD or SSD, you should re-run the assessment in the Performance Information and Tools applet (or just run the Winsat command as winsat.exe diskformal) to make Windows use the best prefetch strategy.

Now you have some background information about Diagnostics-Performance events. In the next part I will show how Event Log Explorer helps to detect boot performance issues.


Event Log Explorer Forensic Edition Preview

Several days ago, we published a preview of the new edition of Event Log Explorer – the Forensic Edition! Here I will show what benefits you can get from this edition. First, I should say that it is currently fully compatible with the standard edition of Event Log Explorer – it uses the same workspace, the same settings and, for now, the same registration key. By default, it installs itself into a different folder, so you can use both editions concurrently. By the way, the Forensic Edition preview is available at
http://eventlogxp.com/download/elex_fe_setup.exe

What should you pay attention to?

You can find a new button on the toolbar – Add imaged computer.

How does it work? When you have a forensic image, you can mount it as a disk on your computer, and this command lets you add the imaged computer to the tree and then view event logs from it.

Example:
I don’t currently have any forensic images, but I have an old hard drive which I connected to my computer for testing. It is drive E:. I click the “Add imaged computer” button and get this dialog:
Addimaged computer dialog
Event Log Explorer tries to detect a mounted system drive automatically – it found it in my case, but if it doesn’t, just type the correct path to the Windows folder on the mounted disk.

Type any name in the “Friendly name” field and press OK.

Now you can see that this “computer” has been added to the tree, and you can open it and view its event logs.
forensic edition - new entity in the tree
Let’s try to open any log from this imaged computer, e.g. Application. Double-click Application and the event log will open as a log file.
forensic edition - imaged log
Pay attention to the icon that appears in the log icons area. Commonly it appears when you set a description server, but now it notifies you that Event Log Explorer will try to get descriptions from the mounted disk if it fails to find them locally. Whenever Event Log Explorer tries to get an event description, it will try to get it locally first (as always), but if the description is not available, it will open the registry on the mounted disk, find the resource file that contains the event description and try to extract the description from this file. Note that if you research this disk not only with Event Log Explorer but also with, e.g., Registry Editor and have opened its registry files (loaded them as hives), you should unload the hives, or Event Log Explorer won’t be able to open them. If you want to disable getting descriptions from the imaged computer, just double-click that icon and uncheck the “If no description available, get it from imaged disk” option.

What else

We disabled the functionality that lets you modify event logs, such as the Clear Log command and modifying event log properties.

What’s next?

We are planning to add the following features to forensic edition:

  • Timeline creation
  • Opening heavily damaged evtx files
  • Opening evt and evtx fragments
  • Features you request!

Exporting event logs with Windows PowerShell

Do you need to automate error reporting based on recent events and don’t want to use third-party tools? This article describes how to collect events from different sources and unite them in one document using standard Windows instruments only.

Recently I described how to export events into Excel format using our Event Log Explorer software. However, in some cases using third-party software is impossible. This may happen if your company doesn’t have the budget to purchase event log utilities, or such utilities are restricted by the company’s rules. In any case, the task of regularly exporting recent events from different machines into one legible file is still crucial. That’s why I will show how you can get events from different Windows machines and export them into one file for further investigation.

Task

Let’s take the same task we solved previously. We have 4 Windows servers and we need to generate weekly reports of Error and Warning events in Application and System event logs. We should utilize only standard Windows instruments.

Instruments

Microsoft offers Windows PowerShell as a framework to automate different administrative tasks and perform configuration management in Windows. My scripts require at least PowerShell version 3.0. If your PowerShell is outdated, you can update it by downloading the Windows Management Framework from Microsoft’s site. To check the PowerShell version, simply type in the PowerShell console:

$PSVersionTable.PSVersion

Getting Powershell version

In my case, PowerShell version = 3 which is OK.

Research

To access event logs, Windows PowerShell comes with Get-EventLog cmdlet:

Parameter Set: LogName
Get-EventLog [-LogName] <String> [[-InstanceId] <Int64[]> ] 
[-After <DateTime> ] [-AsBaseObject] [-Before <DateTime> ] 
[-ComputerName <String[]> ] [-EntryType <String[]> ] 
[-Index <Int32[]> ] [-Message <String> ] [-Newest <Int32> ] 
[-Source <String[]> ] [-UserName <String[]> ] [<CommonParameters>]

First we need to define the start date (the date after which we will get events). This date is calculated as today minus 7 days:

$now=get-date
$startdate=$now.adddays(-7)

Now we can read warning and error events from a log for the last week:

$el = get-eventlog -ComputerName Serv1 -log System -After $startdate -EntryType Error, Warning

Let’s check the result. Just type $el in the console. Yes, we can see events from the event log.
But how do we export the event log? Windows PowerShell doesn’t have cmdlets to export to Excel, but it supports export to a CSV file. Let’s try it now:

$el | export-csv eventlog.csv

Yes, it works, but multi-line descriptions ruined the output file.
Maybe export to XML will help?

$el | export-clixml eventlog.xml

But how do we display it in a clear way? Excel understands XML files, but I have no idea how to interpret this one:

PowerShell Log to XML

I guess we could write an XSL transformation to convert this XML into a more readable file, but I’m not an XML guru, and I have a more or less workable solution anyway. We can solve our problem if we export to CSV only several event properties (without the event description):

$el |Select EntryType, TimeGenerated, Source, EventID | Export-CSV eventlog.csv -NoTypeInfo

Now we can read eventlog.csv in Excel without problems.

Putting all together

It’s time to write the PowerShell script.
Brief: we will read recent (7 days) error and warning events from Application and System event logs, join them, sort them by time and export to CSV format.

#
#  This script exports consolidated and filtered event logs to CSV
#  Author: Michael Karsyan, FSPro Labs, eventlogxp.com (c) 2016
#

Set-Variable -Name EventAgeDays -Value 7     #we will take events for the latest 7 days
Set-Variable -Name CompArr -Value @("SERV1", "SERV2", "SERV3", "SERV4")   # replace it with your server names
Set-Variable -Name LogNames -Value @("Application", "System")  # Checking app and system logs
Set-Variable -Name EventTypes -Value @("Error", "Warning")  # Loading only Errors and Warnings
Set-Variable -Name ExportFolder -Value "C:\TEST\"


$el_c = @()   #consolidated error log
$now=get-date
$startdate=$now.adddays(-$EventAgeDays)
$ExportFile=$ExportFolder + "el" + $now.ToString("yyyy-MM-dd---hh-mm-ss") + ".csv"  # we cannot use standard delimiters like ":" in a file name

foreach($comp in $CompArr)
{
  foreach($log in $LogNames)
  {
    Write-Host Processing $comp\$log
    $el = get-eventlog -ComputerName $comp -log $log -After $startdate -EntryType $EventTypes
    $el_c += $el  #consolidating
  }
}
$el_sorted = $el_c | Sort-Object TimeGenerated    #sort by time
Write-Host Exporting to $ExportFile
$el_sorted|Select EntryType, TimeGenerated, Source, EventID, MachineName | Export-CSV $ExportFile -NoTypeInfo  #EXPORT
Write-Host Done!

Scheduling the task

To run the script, we should run this command:

PowerShell.exe -ExecutionPolicy ByPass -File export-logs.ps1

We can open the Task Scheduler GUI to create this task, or use the PowerShell console.
Microsoft recommends this way to schedule a PowerShell script:

$Trigger=New-JobTrigger -Weekly -At "7:00AM" -DaysOfWeek "Monday"
Register-ScheduledJob -Name "Export Logs" -FilePath "C:\Test\export-logs.ps1" -Trigger $Trigger

But this may not work as expected, because it adds the following action to Windows Task Scheduler:

powershell.exe -NoLogo -NonInteractive -WindowStyle Hidden -Command "Import-Module PSScheduledJob; $jobDef = [Microsoft.PowerShell.ScheduledJob.ScheduledJobDefinition]::LoadFromStore('Export Logs', 'C:\Users\Michael\AppData\Local\Microsoft\Windows\PowerShell\ScheduledJobs'); $jobDef.Run()"

If your policy prevents running PowerShell scripts, our export script won’t run, because the powershell parameters lack the -ExecutionPolicy option.
That’s why I will use ScriptBlock instead of FilePath. This code does the trick:

$trigger=New-JobTrigger -Weekly -At "7:00AM" -DaysOfWeek "Monday"
$action="PowerShell.exe -ExecutionPolicy ByPass -File c:\test\export-logs.ps1"
$sb=[Scriptblock]::Create($action)
Register-ScheduledJob -Name "Export Logs" -ScriptBlock $sb -Trigger $trigger

Note that to run Register-ScheduledJob cmdlet, you need to start PowerShell elevated.
That’s all. Now you should have a task that runs every Monday at 7:00, collects events from your servers and exports them to CSV files.

Conclusion

As you can see, the problem of exporting events to Excel can be solved without third-party tools. This method is somewhat limited, but it works.


Case study – generating regular reports about the problems in your Windows network

Recently one of our clients asked us about the best way to organize passive monitoring of their servers. The client told us that they don’t need to monitor the servers actively, but they want to have weekly reports about the problems. They tried to gather events using Windows PowerShell and export them to CSV format (to view events in Excel), but finally they gave up.

Task

The customer reported that he is a system administrator of a network with 4 Windows 2008 Servers and he needs to check only the System and Application event logs of these servers. Ideally, these machines should generate only Information events (no errors or warnings). He would like to have reports of the problems at the beginning of every week.

So we can reformulate the task as follows:

Generate weekly report of all non-Information events in Application and System logs.

Our solution

First of all, we suggest starting a new copy of Event Log Explorer and creating a new workspace for this task (use the File->New workspace command). You can ignore this suggestion, but we recommend always separating long-running tasks (like active monitoring or scheduled tasks) from operative event log tasks.

Then we need to add the required servers to the tree. This can be done either with the help of the Add Computer Wizard or manually (by pressing the Add computer button).
Servers added

It’s time to create our log view. We will consolidate all the application and system logs from these servers in one view.

Open the Serv1 server in the tree and double-click the System log to open it.
Open system event log on server SRV1

We could subsequently add other event logs to the view right away, but it is better to set the on-load filter first. Go to View->Log Loading Options, select Load events from last 7 days (we need a report for the last week) and untick the Information type.
log loading filter - (no information, last week)

Now we can add other logs to the view and they will be filtered automatically:

Right-click the Application log of Serv1 and select Merge with the current view. Open Serv2, Serv3 and Serv4 and continue adding their Application and System logs to the view.

Click the Date column to sort all merged events by date and time.

Rename the unclear “Merger” name to something better: select View->Rename and change the name to “Weekly report“.

Now you should get something like this:
consolidated event logs

Let’s automate this.

Select Advanced->Scheduler from the main menu and create a new task. Name the task as “Problem Report” and click Next.

Set when we want to run the task, e.g. on Mondays at 7:00:
Event log scheduler

Click Next and select what we want to do: Refresh, then export:
event log - export to excel

We will export to Excel 2007 format with event descriptions.

Leave “Export path” at the default value “%USERPROFILE%\Documents”, which means that Event Log Explorer will save reports in the Documents folder of your user profile (note that in Export path you can enter any Windows path, including UNC paths, so you can store reports on remote computers).

Click Next, then Finish and then OK in Scheduler window.  Now you can save the workspace (File->Save workspace) and minimize the application (you can minimize it even into the notification area).

That’s all. On Monday at 7:00 AM, Event Log Explorer will load error and warning events for the last week from the servers and export these events into an XLSX file:
exported to excel eventlog

And even if you close the program or restart your PC, you can always run Event Log Explorer and open your workspace – this will load all your settings and restore the scheduler.

Conclusion

As you can see, tuning Event Log Explorer didn’t take a lot of time (I did it in just 4 minutes), and, what is more important, you will have regular reports about problems from different sources without extra work! Needless to say, you can easily modify the event filters to fulfill your specific requirements.


Forensics and Benford’s Law

As a producer of digital forensic software, we are regularly learning more about forensic methods and tasks. Recently I came across a curious article (and video) in Business Insider called “How forensic accountants use Benford’s Law to detect fraud”.

The video states that forensic guys can use Benford’s Law to analyze financial data and identify red flags.

This sounds interesting, because it is very easy to check whether a set of data obeys this law. Let’s do some checks.

Benford’s Law is a phenomenological law about the frequency distribution of leading digits in many real-life sets of numerical data. Mathematically it can be written as

P(d) = log10(1 + 1/d)

where
d – the leading digit
P(d) – the probability that a number begins with d.

For d from 1 to 9, this gives probabilities of approximately 30.1%, 17.6%, 12.5%, 9.7%, 7.9%, 6.7%, 5.8%, 5.1% and 4.6% respectively:
Bendord Law for forensics - original chart

The article states that Benford’s Law works well for values which are the results of multiplication, division and raising to a power – operations which are common in financial calculations.

Microsoft Excel helps us verify this with no effort. I will multiply 2 random values and divide the result by another random value: RANDBETWEEN(1,99)*RANDBETWEEN(1,99)/RANDBETWEEN(1,99)

and calculate the distribution for 500 values.

benford law for forensics - random values
Great! This really looks like the Benford’s Law distribution. And this may really help in a forensic investigation when analyzing calculated values. If you are interested in how I extracted the first digit from a number – it’s very easy:
NUMBERVALUE(LEFT(TEXT(A2,0),1))

But the article’s video starts with the statement that the population of every US county obeys this law as well. Although I understand that we can predict demographic values using mathematical formulas, this sounds weird. Since the populations of US counties obey this law, I guess it should work for the world population too. Let’s check it.

I was skeptical about the result, but it really does look like a Benford’s Law distribution!

benford law - world population
OK, if it works with population, maybe it will work with similar entities specific to digital forensics. First, I thought about event logs, e.g. the number of events in a log. But obviously it won’t work, because event logs are limited by size. In my example, when the event log grows, the first digit will always be ‘2’ until you clear the log.

Event logs limited by size
Moreover, Windows commonly uses only a few event logs, so we won’t get any statistical significance. And of course Benford’s Law won’t work if we test Event ID numbers – they are purely artificial and it’s up to the developer to choose an identifier for an event.

But what about files? What if we test it with file sizes – will it work? If it works for test samples, we will be able to suppose that if the files in an investigation case don’t obey the law, someone could have removed many files. I will test it with the C:\Windows folder and its subfolders on my PC (it contains more than 100,000 files). By the way, since Windows keeps event log files in C:\Windows\System32\winevt\Logs and these files are limited by size, it may affect the statistics. Fortunately, there are only about 150 files in the Logs folder, so this effect is not significant. We will take the size of every file in kilobytes.

benford law for forensics - files sizes in Windows folder
Well, it is really similar to the Benford’s Law distribution. But one sample is not enough to draw conclusions. It is interesting that in all my cases we can see a small hump at digit 7 or 8.
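
If you want to reproduce this test without Excel, here is a quick PowerShell sketch I would use (the folder and the kilobyte rounding are simply the choices made above):

# Sketch: leading-digit distribution of file sizes (in KB) under C:\Windows
# Inaccessible files are skipped silently; expect roughly 30% of sizes to start with "1"
Get-ChildItem C:\Windows -Recurse -File -ErrorAction SilentlyContinue |
  ForEach-Object { [math]::Ceiling($_.Length / 1KB) } |
  Where-Object { $_ -gt 0 } |
  ForEach-Object { "$_"[0] } |
  Group-Object |
  Sort-Object Name |
  Select-Object Name, Count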

Let us summarize what we found here. Benford’s Law works for large volumes of data that are the result of certain calculations, like financial results. It works with some statistical data. It may work for some specific digital data – although my test shows that it works for file sizes, this should be rechecked with other sets of data. It will probably work with other digital data which a digital forensic investigator may get as evidence. However, this method won’t work for neat frauds. If only a small part of the data is modified, the distribution won’t show any significant deviation. And in the case of financial fraud, if a malefactor modified the original data and recalculated all the results, I suppose that the new set of data will obey the law too.

Therefore, like any forensic instrument, Benford’s Law has its applications, but it is not a panacea.


Once again about custom columns – extracting details from Application or System event logs

In my article “Exploring who logged on the system“, I described how to add custom columns that display data from the event description to the event list. You might think that adding custom columns works for Security event logs only. Although we initially designed the custom columns feature specifically for Security logs (and referred to description parameters by name, e.g. Subject\Account name), we made it possible to use it with any logs.

Let’s take a typical system engineer or administrator task: discovering faulty applications on the system. We will search for application error events. Such events are stored in the Application event log, their event source is “Application Error” and they are commonly registered with Event ID = 1000. Therefore, we need to open the Application log and refine its events:
application error (event 1000)

Application error filtered

We can see that the description of event 1000 is not as well-structured as the descriptions in the Security log:
App error description

so it is impossible to refer to description parameters by name.

However, we can still refer to these parameters by number as {PARAM[x]}.

Look at the event description. The application name and its version are at the very beginning:

Faulting application name: googledrivesync.exe, version: 1.27.1227.2094, time stamp: 0x509418e4

Most likely, we can refer to the app name as PARAM[1] and its version as PARAM[2], but let’s make sure.

Double-click an event to open Event Properties and switch to the XML tab.
app_error_XML

Yes, we can see that googledrivesync.exe and 1.27.1227.2094 are the first and second parameters respectively.

In this example, I will display the app name and version number in one column, as googledrivesync.exe ver 1.27.1227.2094, but you can use 2 columns (for the name and the version number) or omit the version number altogether.
Open the Custom Columns dialog (View->Custom columns).
App error custom columns

Title the column as Faulty app
Leave Event source and Event ID(s) blank (your log is filtered).
Set Value to {PARAM[1]} ver {PARAM[2]}
Press OK and enjoy the result!
Application error with custom column - final
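
As a side note, if you ever need the same values outside the UI, a rough PowerShell sketch could pull them from the event parameters directly. The 0-based index mapping below is an assumption derived from the {PARAM[1]}/{PARAM[2]} numbering above – check it against the XML view of the event:

# Sketch: faulting application name and version from event 1000 of the "Application Error" source
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; ProviderName = 'Application Error'; Id = 1000 } |
  Select-Object TimeCreated,
    @{ n = 'FaultyApp'; e = { '{0} ver {1}' -f $_.Properties[0].Value, $_.Properties[1].Value } }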


Advanced filtering. How to filter events by event description

A key instrument of event log analysis is event filtering. All known event log analysis tools have a filtering feature, and I suppose it is the most demanded feature of these applications. Setting a filter for most event fields is easy. As a rule, all event log applications let you filter by timeframe, event level, source, event IDs, users or computers with a more or less friendly user interface.

However, sometimes you may need to filter events by extra details which you can see in the event description. Let’s take the task of displaying logon events from RDP users. Previously I described how to display all logon events, but now we need a more complex filter.

We know that logon events are 4624 and 4625 (successful logon and unsuccessful logon attempt). In this post, I showed how to interpret logon types, and you can see that the logon type for RDP access is 10. So we need to filter by Event ID = 4624 or 4625 and Logon Type = 10, and this filter should be applied to the Security log.

Although Windows Event Viewer does not provide a user interface to filter events by extra event details, you can query the event log using structured XML queries. This feature has been available since Windows Vista/2008. A structured XML query is a special language based on XPath 1.0 syntax intended to retrieve events from event logs. You can find information about structured XML queries and XPath expressions at https://msdn.microsoft.com/en-us/library/windows/desktop/dd996910(v=vs.85).aspx

Here is an XML filter query for our task:

<QueryList>
<Query Id="0" Path="Security">
<Select Path="Security">
*[System[(EventID=4624 or EventID=4625)]]
and
*[EventData[Data[@Name='LogonType'] and Data=10]]
</Select>
</Query>
</QueryList>

Using the power of XML queries, you may filter events by virtually any criteria. Our Event Log Explorer “understands” structured XML queries just like the built-in Event Viewer does. But unlike Event Viewer, you don’t need to use full XML queries – Event Log Explorer accepts short XPath expressions like:

*[System[(EventID=4624 or EventID=4625)]] and
*[EventData[Data[@Name='LogonType'] and Data=10]]
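
By the way, the same structured query can also be run from a script with Get-WinEvent – a minimal sketch for completeness (start PowerShell elevated, since it reads the Security log):

$query = @'
<QueryList>
  <Query Id="0" Path="Security">
    <Select Path="Security">
      *[System[(EventID=4624 or EventID=4625)]]
      and
      *[EventData[Data[@Name='LogonType'] and Data=10]]
    </Select>
  </Query>
</QueryList>
'@
# Feed the full QueryList from above to Get-WinEvent
Get-WinEvent -FilterXml ([xml]$query)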

However, despite the advantages of XML filtering, it is not easy to use, especially if you are not used to writing XML queries on a regular basis.

So now, I will show how we simplified this process with our software.

Select View->Filter from the Event Log Explorer main menu to display Filter dialog.

First of all, you should type 4624,4625 into the Event ID(s) field, because we need only logon events.

Event Log Explorer provides two basic ways of filtering events by description.

Filtering events by description text

You can just type any text in the “Text in description” field and Event Log Explorer will display all events that contain this text in the description. You may just try to type

Logon Type:                        10

And this MAY really work. However, it is not reliable at all. You should not rely on the number of tab characters inside the description; moreover, you cannot simply type tabulation symbols in the input line (I pasted the text from the event description, and you should not copy it from this article – it won’t work – there are three tabulation symbols between “:” and “10”). Therefore, I suggest using this simple approach only for simple filters, e.g. when you need to display events containing a specific file name in the description.

The same “Text in description” field has an extra feature when you enable the RegExp checkbox next to it. In this case, you can filter events with the power of regular expressions. When the RegExp checkbox is enabled, Event Log Explorer will search all event descriptions for occurrences of the regular expression you typed in the “Text in description” field. Our expression should look like

Logon Type:\s*10

or

Logon Type:[ \t]*10

You probably think that this way is not much better than using structured XML queries – instead of learning XPath syntax, you now have to learn regular expressions.

Although I don’t see an alternative to regular expression filters for some specific log analysis tasks, Event Log Explorer lets you accomplish this particular task (displaying RDP logon events) much more easily.

Filtering events by description parameters

Event Log Explorer gives you a user-friendly way to filter the Security log by description parameters. In fact, you just tell the program the following:

Logon type   equals   10

To make entering the filter condition easy, I recommend selecting an event in the event list that contains Logon Type in its description (you should do this before opening the Filter dialog).

Use the “Filter by description params” group to create your filter conditions:

event filter by description parameters

I believe this approach significantly reduces the time you spend on Security log analysis.

Note that this approach works only when you analyze Security event logs because the security log descriptions are well-structured. When you analyze other logs, you may need to use regular expressions or XPath filter queries.
