This will be the first part of a long series defining my custom SQL Server inventory system. I started the inventory as an exercise to learn my new environment after starting a new job, but it has grown to be very useful for reporting, for serving as a source of instance names for my PowerShell scripts to target, and for basic monitoring. There are many components to it, so I figured breaking them down piece by piece would make them easier to digest. In this part, I will go through the script I use to pull information about the instance.
In my ever-expanding mission to document and inventory my SQL environment, I wanted a way to get human-readable schedule information about my jobs. While this initially seemed like a fairly trivial exercise, it turned out to be a little more involved. It's possible there is an easier way to do this, and if you know of one, let me know. I will go through the interesting sections as I've done in the past, with a link to the full script at the end.
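For context on why the schedules are not human-readable to begin with: SQL Server Agent stores them in msdb.dbo.sysschedules as coded integers. As a rough illustration of the decoding involved (a Python sketch rather than the T-SQL/PowerShell the full script uses), the time columns are integers on an HHMMSS scale, and freq_type is a coded value:

```python
def decode_start_time(active_start_time):
    """sysschedules stores times as integers on an HHMMSS scale,
    so 83000 means 08:30:00."""
    hours, rest = divmod(active_start_time, 10000)
    minutes, seconds = divmod(rest, 100)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

# The documented sysschedules.freq_type codes.
FREQ_TYPES = {
    1: "Once",
    4: "Daily",
    8: "Weekly",
    16: "Monthly",
    32: "Monthly, relative to freq_interval",
    64: "When SQL Server Agent starts",
    128: "When the computer is idle",
}

print(decode_start_time(83000), "-", FREQ_TYPES[4])
```

Weekly and monthly schedules add further bitmask columns (freq_interval and friends) on top of this, which is where most of the "more involved" part comes from.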
This is one of those functions born out of not wanting to do a task more than once, which evolved into a more useful, reusable function once I added a little extra functionality. I originally wrote it for the humble task of finding all instances with xp_cmdshell enabled, but it evolved to retrieve the value of any configuration stored in sys.configurations and to check whether those values match the defaults.
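The defaults check boils down to comparing what sys.configurations reports against a table of known default values. A minimal Python sketch of that comparison (the real function is PowerShell and covers every configuration; here I seed only two well-known defaults, xp_cmdshell at 0 and max server memory (MB) at 2147483647):

```python
# Sample defaults for two well-known settings; the full function
# would carry defaults for everything in sys.configurations.
DEFAULTS = {
    "xp_cmdshell": 0,
    "max server memory (MB)": 2147483647,
}

def find_nondefault(current):
    """current: {name: value_in_use}, as queried from
    sys.configurations. Returns only the settings that
    differ from their known default."""
    return {
        name: {"current": value, "default": DEFAULTS[name]}
        for name, value in current.items()
        if name in DEFAULTS and value != DEFAULTS[name]
    }

# An instance with xp_cmdshell turned on gets flagged:
print(find_nondefault({"xp_cmdshell": 1,
                       "max server memory (MB)": 2147483647}))
```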
This is one of those somewhat simple functions that I end up using quite a bit, much like the Test-ADUser function from my last post. I find it particularly useful for parameter validation in my other SQL-related functions. It is nice to be able to offload the connectivity test right into the parameter declaration without cluttering up the body of the function.
I have been slowly moving my collection of scripts into more modular functions to make my code more reusable, and so far it seems to be working out well. It is much easier to update one function in a module than to try to track down every location where I used that functionality and update it as well.
I have recently been working on automating our new hire process with PowerShell (at least the Exchange/AD aspects). I found myself needing a simple function that would return true or false based on whether a specified username is in use. I needed this for validating username parameters in my other functions, as well as for my function that generates a valid username, not already in use, from a first, middle, and last name.
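The username generator builds on that true/false check by trying candidates until one comes back free. A Python sketch of that loop, with a set standing in for the AD lookup and a hypothetical first-initial-plus-surname naming scheme (the actual scheme in the post's function may differ):

```python
def generate_username(first, middle, last, taken):
    """Try candidates (jsmith, then jasmith if a middle name
    exists, then jsmith2, jsmith3, ...) until one is not in
    `taken`. `taken` stands in for the AD availability check."""
    candidates = [
        (first[0] + last).lower(),
        (first[0] + middle[0] + last).lower() if middle else None,
    ]
    for cand in filter(None, candidates):
        if cand not in taken:
            return cand
    # All name-based candidates taken: fall back to a numeric suffix.
    base = (first[0] + last).lower()
    n = 2
    while f"{base}{n}" in taken:
        n += 1
    return f"{base}{n}"

print(generate_username("John", "Adam", "Smith", {"jsmith"}))
```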
Most of the SQL instances at my current company were set up before I arrived, and they don't really follow any standard for the maximum server memory configuration. Since a couple of my instances were starting to starve the OS, I figured I would write a small PowerShell function to calculate what the max should be set to on a server running nothing but SQL. There are lots of good resources out there about how to calculate this, but I decided to use the recommended formula found here by Jonathan Kehayias. I also took some inspiration from the logic Chrissy LeMaire applies in her SMO recipe here, although I ended up modifying it quite a bit.
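If I am recalling the commonly cited version of that formula correctly, it reserves memory for the OS as follows: 1 GB as a base, plus 1 GB for every 4 GB of RAM between 4 GB and 16 GB, plus 1 GB for every 8 GB of RAM above 16 GB; whatever is left goes to SQL Server. A minimal Python sketch of the arithmetic (the actual function in the post is PowerShell):

```python
def sql_max_memory_mb(total_ram_gb):
    """Suggested SQL Server max server memory (MB), using the
    reserve-for-the-OS formula: 1 GB base, plus 1 GB per 4 GB
    of RAM between 4 and 16 GB, plus 1 GB per 8 GB above 16 GB."""
    os_reserve_gb = 1
    if total_ram_gb > 4:
        os_reserve_gb += (min(total_ram_gb, 16) - 4) / 4
    if total_ram_gb > 16:
        os_reserve_gb += (total_ram_gb - 16) / 8
    return int((total_ram_gb - os_reserve_gb) * 1024)

# A 32 GB server: reserve 1 + 3 + 2 = 6 GB, leaving 26 GB for SQL.
print(sql_max_memory_mb(32))  # 26624
```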
I have been working on automating my organization's Active Directory/Exchange user provisioning, and needed a method to easily log what actions are taken. To accomplish this, I decided to write a couple of PowerShell logging functions that I could integrate into my other functions for creating the user, creating the mailbox, generating approval emails, and assigning groups. The functions I came up with are Start-Log, Write-Log, and Stop-Log.
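The shape of that trio is a header on start, timestamped entries in the middle, and a footer on stop. A Python sketch of that structure, with the three function names mirroring the PowerShell ones (the implementation details here are my own assumptions, not the post's code):

```python
from datetime import datetime

def start_log(path):
    """Create/overwrite the log file and record a header line."""
    log = open(path, "w")
    log.write(f"Log started {datetime.now():%Y-%m-%d %H:%M:%S}\n")
    return log

def write_log(log, message, level="INFO"):
    """Append a timestamped, severity-tagged entry."""
    log.write(f"{datetime.now():%Y-%m-%d %H:%M:%S} [{level}] {message}\n")

def stop_log(log):
    """Record a footer line and close the file."""
    log.write(f"Log ended {datetime.now():%Y-%m-%d %H:%M:%S}\n")
    log.close()
```

Each provisioning function just takes the open log handle and calls write_log, so all the steps for one new hire land in a single file.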
Perhaps the most fundamental DBA task is to configure proper SQL maintenance on all of your instances. There are many ways to approach this, including SSIS-based GUI-driven maintenance plans, custom homebrewed T-SQL scripts, PowerShell scheduled jobs, or third-party applications. However, my (and many others') preferred solution is to use Ola Hallengren's fantastic maintenance scripts (ola.hallengren.com). This solution is recommended by many people for good reason: it provides a great deal of flexibility and compatibility, as well as solid logging. You can use it right out of the box and it will work great, but I like to tweak it a bit to better fit my needs and be a little easier to apply.
One of the first things I check when I get to the office in the morning is whether any SQL Agent jobs failed overnight. Our monitoring system will also pick these up, but I like to have a condensed report listing the failed production jobs, the job step(s) that failed, and the error messages, so that I can quickly diagnose what went wrong. We already had a script in place, based on this, which mostly worked, but I wanted to modify it a bit to return more useful information.
Quick script for today. I needed to come up with a way to split a text file (specifically, a SQL script with one INSERT statement per line) into smaller files if the base file exceeded a certain size. The requester of this functionality wanted the file split into 10,000-line chunks if it exceeded 5,000 KB.
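The logic amounts to: check the size, and if it is over the threshold, write out numbered chunk files of at most 10,000 lines each. A Python sketch of that requirement (the post's version is PowerShell; the chunk-naming scheme here is my own assumption):

```python
import os

def split_file(path, max_kb=5000, lines_per_chunk=10_000):
    """Split `path` into numbered chunk files of at most
    `lines_per_chunk` lines each, but only if the file is
    larger than `max_kb` kilobytes; otherwise do nothing.
    Returns the list of chunk files written."""
    if os.path.getsize(path) <= max_kb * 1024:
        return []
    base, ext = os.path.splitext(path)
    chunks, part, buf = [], 1, []

    def flush():
        nonlocal part
        chunk = f"{base}_{part}{ext}"   # e.g. inserts_1.sql
        with open(chunk, "w") as out:
            out.writelines(buf)
        chunks.append(chunk)
        part += 1
        buf.clear()

    with open(path) as src:
        for line in src:
            buf.append(line)
            if len(buf) == lines_per_chunk:
                flush()
        if buf:  # final partial chunk
            flush()
    return chunks
```

Reading line by line rather than slurping the whole file keeps memory flat even when the source script is large, which is the case that triggers the split in the first place.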