In this article I talk about Performance Monitor and how to use it with PowerShell.
You get lots of useful information from PerfMon, but it can be cumbersome to work with. Using the GUI is easy but time-consuming and not very portable. Here is a way to make it both easy and useful.
Performance Monitor
Windows Performance Monitor is a Microsoft Management Console (MMC) snap-in that provides tools for analyzing system performance. From a single console, you can monitor application and hardware performance in real time, customize what data you want to collect in logs, define thresholds for alerts and automatic actions, generate reports, and view past performance data in a variety of ways.
From Performance Monitor on TechNet.
The MMC
Sure, there's the MMC :-)
PS> perfmon
But, hey, this is the Power DBA, right?
With PowerShell
So, what do we need to work with Performance Monitor?
- A file containing the desired performance counters
- A command to turn this list into a User Defined Data Collector Set
- A directory to store the output
- A command to start the Data Collector Set
- A command to stop it
- A way to read the output
Define the counters we want
Maybe you found a list on the internet, copied some counters from the MMC, or just typed them in yourself. Allen White posted 15 SQL Server Performance Counters to Monitor on SentryOne's blog. Let's borrow his list.
What's there to note?
- Some counters have (_Total) in their name. This means there can be multiple instances of this counter, like a number of disks, but I only want to see the total of those similar counters.
- Some counters have (*) in their name. This means there can be multiple instances of this counter, like a number of network interfaces, and I want to see them all separately. We could also list such a counter multiple times ourselves and put the name of an instance between the parentheses. When monitoring databases, that may be the desirable way to go if we only want to see the 3 databases we find more interesting than the other 27; it reduces the output significantly. But in the case of network interfaces, (*) is a lot easier, no?
- Some counters have a longer path than others. Those belong to a different provider, like SQLServer:
We can have different lists of counters and keep them conveniently in simple .txt files, or .config files if you want to be fancy.
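To give an idea, such a file simply lists one counter path per line. A minimal sketch with a handful of example counters, mostly from the list above, not the complete set of 15:

\Memory\Available MBytes
\Memory\Pages/sec
\Processor(_Total)\% Processor Time
\System\Processor Queue Length
\PhysicalDisk(_Total)\Avg. Disk Queue Length
\Network Interface(*)\Bytes Total/sec
\SQLServer:Buffer Manager\Page life expectancy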
Provide a target
The output Performance Monitor gives us can be either a .blg or a .csv.
While .blg has the advantage that it is not strict about the output structure, meaning we can change the collector set on the fly while capturing, I prefer a .csv. First, its size usually is only 1/10th of the .blg; second, it's easy to import into MS Excel or, even better, with Power Query!
Remember that Performance Monitor should run on the server we wish to monitor, and so the preferred location for the output file would be on that same server. We can copy or move it afterwards.
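If that log directory doesn't exist yet, PowerShell can create it from our workstation over the administrative share, assuming we have the rights for it:

PS> New-Item -ItemType Directory -Path '\\SQLServer1\C$\path\to\log' -Force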
Create the Data Collector Set
Microsoft provides us with a nice command-line tool called logman.
PS> logman create counter SentryOne15 -f csv -max 200 -si 00:01:00 -v MMddHHmm -o "C:\path\to\log\SQLServer1-SentryOne15.csv" -cf "C:\scripts\SQLSentry-15-counters.txt" -b 25/11/2016 00:00:00 -e 25/11/2016 23:58:00 -r
Right, let's see if we can decipher that.
Obviously, we call logman,
tell it to create a counter Data Collector Set that we name 'SentryOne15'.
With -f csv we tell it to format the output as a .csv,
-max 200 says to keep the file size at maximum 200 MB
and -si is the sample interval, here set at 1 minute (a smaller interval may reveal more detail, but results in bigger files).
-v MMddHHmm adds a timestamp like '04121130' to the log file name
and -o "C:\path\to\log\SQLServer1-SentryOne15.csv" is the output path for the log files. The actual file names will be something like 'C:\path\to\log\SQLServer1-SentryOne15_04121130.csv'.
The configuration file that holds our counters list is found by -cf "C:\scripts\SQLSentry-15-counters.txt".
-b and -e are the begin and end times of the collection, respectively. I chose to start at midnight and stop a little before the next midnight. Yes, I lose some data, but I had trouble when setting it to a full day, so this is a small price for making it run smoothly. Remember, the begin time has to be in the future or it will not work. Also, the datetime format may be different on your system. Finally, when the collector set is already running, it won't start again, so be sure to stop it manually before your scheduled start should kick in.
We end with -r for repeat, so next midnight a new file will be created and the Data Collector Set starts again.
But with
PS> logman /?
you would have figured that out yourself :-)
NB: I once had trouble starting a Data Collector Set that seemed perfectly normal. After a long quest, I learned that Performance Monitor schedules are hidden in Windows Task Scheduler under the Microsoft > Windows > PLA path. There I discovered some 'queued' executions, and by stopping those I resolved my problem.
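On recent Windows versions you can peek at those hidden schedules from PowerShell as well; a quick check, assuming the ScheduledTasks module is available:

PS> Get-ScheduledTask -TaskPath '\Microsoft\Windows\PLA\'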
Start and Stop the Data Collector Set
With the command above, you don't really need to start and stop manually, but -b and -e are optional, so you can have Data Collector Sets defined that don't start automatically. Maybe you create some Sets in advance that you only start when troubleshooting or testing.
PS> logman start SentryOne15
PS> logman stop SentryOne15
It's as easy as that.
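And because Performance Monitor runs on the server we monitor, we can do this from our own desk too; a sketch, assuming PowerShell remoting is enabled on SQLServer1:

PS> Invoke-Command -ComputerName SQLServer1 -ScriptBlock { logman start SentryOne15 }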
Show the output
Well, since you now have one or more .csv files, it's up to you what to do with them. You can keep them for compliance reasons, read them in Notepad++ or in your PowerShell host with an Import-Csv -Path \\SQLServer1\C$\path\to\log\SQLServer1-SentryOne15_04121130.csv, or import the .csv into a database. You're a DBA after all, aren't you?
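For a quick look without even leaving the console, something like this works. A sketch, assuming the counter columns still carry the machine name prefix like in the raw .csv (the M query below strips that prefix); if the first sample happens to be empty, filter it out with Where-Object first:

PS> $log = Import-Csv -Path '\\SQLServer1\C$\path\to\log\SQLServer1-SentryOne15_04121130.csv'
PS> $log | Measure-Object -Property '\\SQLServer1\Memory\Available MBytes' -Average -Minimum -Maximum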
I use Power Query, for several reasons:
- I only need to define the query once, and can alter the days I wish to analyze just by moving the files in and out of a directory and hitting 'Refresh'.
- I can easily repeat this process per computer.
- Once in MS Excel, I can create Pivot Tables and Graphs really fast.
- ...
Here's an example in M:
let
Source = Folder.Files("\\SQLServer1\c$\path\to\log\"),
#"Filtered Rows" = Table.SelectRows(Source, each Text.StartsWith([Name], "SQLServer1-SentryOne15_")),
#"Combined Binaries" = Binary.Combine(#"Filtered Rows"[Content]),
#"Imported CSV" = Csv.Document(#"Combined Binaries",[Delimiter=",",Encoding=1252]),
#"Removed Duplicates" = Table.Distinct(#"Imported CSV", {"Column1"}),
#"Waarde vervangen" = Table.ReplaceValue(#"Removed Duplicates","\\SQLServer1\","",Replacer.ReplaceText,{"Column2", "Column3", "Column4", "Column5", "Column6",
"Column7", "Column8", "Column9", "Column10", "Column11", "Column12", "Column13", "Column14", "Column15", "Column16", "Column17", "Column18"}),
#"Kolommen verwijderd" = Table.RemoveColumns(#"Waarde vervangen",{"Column18"}),
#"Promoted Headers" = Table.PromoteHeaders(#"Kolommen verwijderd"),
#"Renamed Columns" = Table.RenameColumns(#"Promoted Headers",{{"(PDH-CSV 4.0) (Romance Daylight Time)(-120)", "Tijdstip"}}),
#"Type gewijzigd" = Table.TransformColumnTypes(#"Renamed Columns",{{"Tijdstip", type datetime}, {"Memory\Available MBytes", type number},
{"Memory\Pages Input/sec", type number}, {"Memory\Pages/sec", type number}, {"Memory\Free System Page Table Entries", type number},
{"Processor(0)\% Processor Time", type number}, {"Processor(1)\% Processor Time", type number}, {"Processor(2)\% Processor Time", type number},
{"Processor(3)\% Processor Time", type number}, {"Processor(_Total)\% Processor Time", type number}, {"Processor(0)\% Privileged Time", type number},
{"Processor(1)\% Privileged Time", type number}, {"Processor(2)\% Privileged Time", type number}, {"Processor(3)\% Privileged Time", type number},
{"Processor(_Total)\% Privileged Time", type number}, {"System\Context Switches/sec", type number}, {"System\Processor Queue Length", type number},
{"PhysicalDisk(0 C:)\Current Disk Queue Length", type number}, {"PhysicalDisk(1 D:)\Current Disk Queue Length", type number},
{"PhysicalDisk(2 E:)\Current Disk Queue Length", type number}, {"PhysicalDisk(3 F:)\Current Disk Queue Length", type number},
{"PhysicalDisk(_Total)\Current Disk Queue Length", type number}, {"PhysicalDisk(0 C:)\% Disk Time", type number},
{"PhysicalDisk(1 D:)\% Disk Time", type number}, {"PhysicalDisk(2 E:)\% Disk Time", type number}, {"PhysicalDisk(3 F:)\% Disk Time", type number},
{"PhysicalDisk(_Total)\% Disk Time", type number}, {"PhysicalDisk(0 C:)\Avg. Disk Queue Length", type number},
{"PhysicalDisk(1 D:)\Avg. Disk Queue Length", type number}, {"PhysicalDisk(2 E:)\Avg. Disk Queue Length", type number},
{"PhysicalDisk(3 F:)\Avg. Disk Queue Length", type number}, {"PhysicalDisk(_Total)\Avg. Disk Queue Length", type number}
{"PhysicalDisk(3 F:)\Avg. Disk Queue Length", type number}, {"PhysicalDisk(_Total)\Avg. Disk Queue Length", type number}
{"PhysicalDisk(3 F:)\Avg. Disk Queue Length", type number}, {"PhysicalDisk(_Total)\Avg. Disk Queue Length", type number}
in
#"Type gewijzigd"
This is not the exact query for our SentryOne set. The part where you transform the column types may require some work, because it depends on the number of disks and processors on your system.
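By the way, if you don't feel like enumerating every column, that whole Table.TransformColumnTypes step can be generated instead. A sketch, assuming the first column is the renamed timestamp and every other column is numeric:

#"Type gewijzigd" = Table.TransformColumnTypes(#"Renamed Columns",
    {{"Tijdstip", type datetime}} & List.Transform(List.Skip(Table.ColumnNames(#"Renamed Columns"), 1), each {_, type number}))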
But once it's complete, we perform magic: pivot tables and graphs of all those counters, just a 'Refresh' away.

Some logman extras
Here are some useful logman commands:
PS> logman
or
PS> logman query
PS> logman query SentryOne15
And there's a lot more you can do with logman.
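For instance, changing the sample interval of an existing set, or removing it altogether; check the exact switches with logman /? on your own system:

PS> logman update counter SentryOne15 -si 00:00:30
PS> logman delete SentryOne15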
To be continued
There are some neat examples of counter sets for different purposes, like monitoring SQL Server or Analysis Services. And PAL (Performance Analysis of Logs) is a complete solution for analyzing and visualizing performance data, for SQL Server amongst others. I'll talk about PAL in a dedicated article.