PowerShell Studio Review

Over the years I’ve noticed certain characteristics in the IT professionals I respect the most. These range from the rather obvious intellectual curiosity that extends beyond all matters IT to the more subtle religious devotion to particular code formatting styles. The attribute germane to this post is that they also recognize and adore good tools. They sense when their current toolset is deficient and spend time looking for better ones. When a good one is found, they become passionate about it and share it with the other people they respect.

When I first started out in PowerShell, I quickly determined that the ISE was less than ideal. I rarely spend time in the shell itself, but I write quite a lot of scripts, functions, and modules. A couple of years ago there wasn’t much in the way of alternatives. I used PowerGUI for a while, which wasn’t (and isn’t) bad. It provided Intellisense and a heap explorer (for lack of a better term; it’s a variable list), but at the time there weren’t all that many cmdlets to keep track of anyway. When I began exploring multithreading with PowerShell I ran into the big problem that the console PowerGUI wraps executes in an STA thread. That’s a deal-breaker in a PowerShell ISE for me now. The next tool I tried was Idera’s PowerShell Plus. It was a step ahead of PowerGUI, but a bit ‘clunky’. Idera makes some great tools and provides awesome support for the PowerShell community (they run powershell.com). Also, their former VP of Marketing gave a copy of PowerShell Plus to every attendee in one of my SQL Saturday sessions. I probably would have stuck with it if it weren’t for discovering Sapien’s PrimalScript.

PrimalScript is, so far as I can tell, the gold standard of scripting tools. It’s got more bells and whistles than I could ever explore. That was really my only problem with the tool. When it comes to scripting I’m a PowerShell-only kind of guy, and due to PrimalScript’s long history, PowerShell is really a ‘tacked on’ addition to the tool. It works remarkably well, but there is an awful lot of clutter if all you do is PoSH. A while back I discovered that they had made dramatic upgrades to the tool they once called PrimalForms, which I had tried and discarded as a one-trick pony for the rather esoteric task of writing PowerShell-driven GUI forms. Renamed PowerShell Studio, it looked like it took all the good elements of PrimalScript and left out all the non-PowerShell stuff that I really didn’t care about. For a couple of weeks I’ve been working with it exclusively and it’s quite good. There are quite a few features that separate it from the other ISEs out there:

Swap v2 and v3 engines

This is an inspired feature. Especially with some of the known compatibility issues experienced running v2 scripts in v3, having this easy switch is a great help.

Layouts

There are so many features and toolbars that you can go a little nuts trying to navigate them all. And when I want to code I just want a blank window to hammer out the text. Layouts give you the opportunity to configure the interface to your heart’s desire *for any given scenario.* When I’m just pounding the keyboard I want a particular set of toolbars, but when I’m debugging I want something different. Likewise, if I’m doing code modifications and regression testing, I have yet different needs. It’s quite easy to configure the interface and save the configuration for instant recall later. Here’s what I look at most of the time.

[Screenshot: layout]

Object Browser

This is a killer feature. The object browser is a toolbar that presents all of the components in PowerShell, the .NET Framework, WMI, and your filesystem in one (searchable) interface. I find myself spending a lot of time in BOL trying to figure out various esoteric SMO issues, hunt down overloaded constructor definitions, and handle umpteen other time chewers that an only-kind-of-developer like me has to deal with. It’s a great help to be able to quickly browse through what is available and, if I need more information, the right-click -> MSDN Help shortcut is just awesome.

[Screenshot: object browser]

Contextual help

As you type cmdlets, the Get-Help content opens automagically in a dock window. This is particularly helpful now that v3 has added all sorts of cmdlets and there’s no way you can be familiar with them all. And there’s always that ‘what was the parameter name’ moment that I have shamefully often. It’s very handy and shows that the developers of the tool really understand PowerShell scripting.

[Screenshot: contextual help]

Script packager

Bundle your script as an .exe? I didn’t get it at first. But, I tried it out and I’ll definitely have use for it. I’m constantly wanting to share something cool with people, but I can’t teach everyone PowerShell. Now I can just package it up and send them an .exe. As an added benefit, they can’t edit or see the code so it prevents changes that may break your code as well as perhaps protecting some secret sauce you don’t want to share (but, don’t be that guy!).

Compare files

I work in a scenario where I have to go through very clearly defined steps of testing and elevating PowerShell code between environments. Being able to click a button and view the differences between two different files is a big time saver.

There really are too many great features to list here and many of them I have yet to really dig into. Here’s a quick (but certainly not exhaustive) look at a few more.

  • Remoting – Set up your quick access list and open a remote PowerShell console without leaving the tool.
  • Embedded console that actually works – I don’t have to have a separate PowerShell window open. I can do everything within the tool.
  • Snippets – Other tools have snippets, but I find PowerShell Studio to have the superior implementation. They are easy to use with the shortcut + tab pattern and it’s simple to create your own.
  • Auto-module import – Try to use a function from a module you haven’t loaded and PowerShell Studio will insert the Import line.
  • Projects – Being able to create a project is very handy for working with modules or multiple scripts/functions that are generally modified together.
  • Function Explorer – When you’re working with modules it’s very helpful to be able to see a list of all the functions within and navigate to their place in the module file.

Wish List

  • Better configuration – PowerShell Studio is reasonably configurable. Most of the things that would annoy me I’ve been able to turn off or alter. But, as you’ve probably guessed, I’m rather picky about my development environments. One of the first things I do when I try out a new IDE is set the colors. I’ve spent hours perfecting what I consider to be, objectively speaking, the absolute best color scheme of all time, and I’m UNABLE TO USE it in PowerShell Studio because you can’t enter your own RGB values. Travesty! Also, I can’t configure line number coloring at all! The horror! As you can see here, what I came up with isn’t too bad, but not nearly as beautiful as my PrimalScript settings.

    [Screenshot: colors]

  • Disable tabs grabbing focus – I don’t care to know every time an auto-save occurs. I don’t need the output tab to pop up every time.
  • Code formatting – The more configurable the better!
  • License and embed Sublime Text – I greatly admire text editors. It’s an area most people ignore, but there are so many awesome things you can do with a good one. Sublime Text is my current fave (thanks to one of those guys I respect who shared it with me). Of course, a vi style editor would also be fantastic. Free me from the mouse!
  • Better integration with source control – I understand Sapien has their own source control product, but I can’t use it. I have TFS and I simply wasn’t able to get them to play well together.

Conclusion

I really wish I could return PrimalScript and go with PowerShell Studio. I’m a PrimalScript fan and will happily continue to use it, but when it comes time to upgrade I’m going with its PowerShell specific sister. If you spend a lot of time writing PowerShell code there’s just no better ISE out there.

psasync Module: Multithreaded PowerShell

The series I did on Concurrency in PowerShell contains by far the most popular posts on my blog (getting tens of views every day!), particularly the one on using runspaces. The code demonstrated in the post was an early attempt at using the method. Not long after I refactored the functions I was using into something more modular and flexible. Thus, the psasync module was born. It has proved to be a very simple way for me to kick off background work in runspaces in scripts or from the shell. It still has room for improvement (the functions don’t process pipeline input, for instance), but it’s a start. Since I hope to be updating it as suggestions come in or as I find ways to make it more robust, I’ve started a Codeplex project (my first, so be gentle). If you would like to be a contributor feel free to send me something and, if it’s good, I’ll give you credit on the project.

The module contains the following functions:

Get-RunspacePool

This should have, perhaps, been named Create-RunspacePool. As its name suggests, it returns a pool of runspaces that can be used (and reused).

Invoke-Async

This is a much improved version of the function introduced in the initial post. The big improvement was the addition of the AsyncPipeline class definition, which allows the creation of a simple object to keep track of both the pipeline and the AsyncResult handle which is returned by BeginInvoke(). This allows the process of looking at statuses of running processes and consuming results to be much simpler. The function also allows passing an array of parameters for script blocks with multiple arguments.
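
To make that concrete, here is a minimal sketch of the idea, not the module’s actual definition (the real psasync source may declare it differently): a tiny class that simply pairs a pipeline with its asynchronous handle.

# Hedged sketch only; the class shape mirrors the description above, not the module's source.
Add-Type -TypeDefinition @"
using System.Management.Automation;

public class AsyncPipeline
{
    public PowerShell Pipeline;
    public System.IAsyncResult AsyncResult;
}
"@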

Receive-AsyncResults

This function wraps the code for pulling results (or errors, as the case may be) off the pipelines in the runspace pool utilizing the AsyncPipeline objects output from Invoke-Async.
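
Conceptually it is just a loop over those AsyncPipeline objects. A hedged sketch of the idea (the shipped function is more robust and adds extras such as a progress display) might look like this:

# Illustrative sketch; not the module's actual source
function Receive-AsyncResults
{
    param($Pipelines)

    foreach($p in $Pipelines)
    {
        # EndInvoke blocks until this pipeline finishes and returns its output
        $p.Pipeline.EndInvoke($p.AsyncResult)

        # Surface any non-terminating errors the runspace collected
        foreach($e in $p.Pipeline.Streams.Error)
        {
            Write-Error -ErrorRecord $e
        }

        # The PowerShell object holds unmanaged resources; release them
        $p.Pipeline.Dispose()
    }
}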

Receive-AsyncStatus

A handy function for working with runspaces from the shell, Receive-AsyncStatus simply returns information about the status of the operations running in the pipelines you have invoked. Since Receive-AsyncResults is synchronous, this allows you to continue to work until your last process completes or selectively use Receive-AsyncResults on those that have completed.
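
Again, a hedged sketch of the idea rather than the module’s actual source: each AsyncPipeline object exposes the underlying pipeline, whose InvocationStateInfo reports its current state.

# Illustrative sketch; the real function's output shape may differ
function Receive-AsyncStatus
{
    param($Pipelines)

    foreach($p in $Pipelines)
    {
        New-Object PSObject -Property @{
            InstanceId = $p.Pipeline.InstanceId
            State      = $p.Pipeline.InvocationStateInfo.State
            Completed  = $p.AsyncResult.IsCompleted
        }
    }
}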

Example Code

To demonstrate the use of the module, consider the same scenario presented in the Concurrency series: You have a series of Excel documents that you need to load into an SQL Server database. As before, set up the script block that will execute the real work.

Import-Module psasync

$AsyncPipelines = @()

$ScriptBlock = `
{
    Param($File)
    
    . <your_path>\Import-ExcelToSQLServer.ps1
    
    Import-ExcelToSQLServer -ServerName 'localhost' -DatabaseName 'SQLSaturday' -SheetName "SQLSaturday_1" `
        -TableName $($File.BaseName) -FilePath $($File.FullName)
}

# Create a pool of 3 runspaces
$pool = Get-RunspacePool 3

$files = Get-ChildItem <path-to-files> 

foreach($file in $files)
{
	 $AsyncPipelines += Invoke-Async -RunspacePool $pool -ScriptBlock $ScriptBlock -Parameters $file
}

Receive-AsyncStatus -Pipelines $AsyncPipelines

Receive-AsyncResults -Pipelines $AsyncPipelines -ShowProgress

You’ll notice there is nothing particularly complex here in the code. But, all the warnings from the runspace post apply. Multithreading is awesome and powerful, but use it with care.

Concurrency in PowerShell: Multi-threading with Runspaces

This is part 2 in a series.

Update: The psasync module described in this post is a better version of the code implemented below.

Download the code for this post.

In my last post I looked at using background jobs to execute PowerShell code concurrently, concluding that for many tasks the large amount of overhead makes this method counterproductive. Fortunately, there is a better way. What I present here is an expansion on the work of Oisin Grehan (B | T), who deserves the credit for introducing this method. His blog post introduced me to the concepts upon which I expound here.

A quick note to begin. I am presenting something that is not well documented and outside the ‘normal’ operations of PowerShell. I don’t think PowerShell was designed to be used in this way, as evidenced by the lack of thread safety in cmdlets and no native synchronization mechanisms (that I can find). I’d love it if someone reading this blog could provide more color around PowerShell’s philosophy of multi-threading, but judging by the built-in mechanisms (jobs) the designers wanted to avoid the issue, and for good reasons! Use this at your own risk and only after testing EXTENSIVELY.

To review: the test scenario for this series involves a set of Excel documents that must be loaded into an SQL Server database. The goal is to speed up the process by loading more than one file at a time. So, I need to gather a collection of the file objects and run a PowerShell script block that executes the ETL (Extract, Transform, Load) code against each file. As you can see, this is very simple code, but it must be executed many times … the ideal (but not only) use case for this pattern.

$ScriptBlock = `
{
    Param($File)
    
    . <your_path>\Import-ExcelToSQLServer.ps1
    
    Import-ExcelToSQLServer -ServerName 'localhost' -DatabaseName 'SQLSaturday' -SheetName "SQLSaturday_1" `
        -TableName $($File.BaseName) -FilePath $($File.FullName)
}

What I need is some way for PowerShell to act as a dispatcher that generates other threads on which these import processes can operate. The key elements to this are the RunspacePool and PowerShell classes in the System.Management.Automation namespace. These classes are meant to enable applications to utilize PowerShell, but I am using them for a different purpose. Yep, it’s about to get very developery on this blog. But have no fear, non-developers (like my fellow DBAs): I’m working on making this easier for you.

Every PowerShell pipeline, defined by the PowerShell class, must have an environment of resources and session constructs (environment variables, loaded cmdlets, etc.) in which to run. In other words, every pipeline needs a runspace. A pipeline can only exist on one runspace. However, pipelines can also be queued onto runspace pools. It is this ability to create runspace pools that allows for (albeit clumsy) multi-threading capabilities.

RunspacePools are created through the CreateRunspacePool static method of the RunspaceFactory class. This method has 9 overloads, so there are plenty of options to explore. The simplest is:

$pool = [RunspaceFactory]::CreateRunspacePool(1, 3)

This simple line of code creates a pool of 3 runspaces upon which pipelines can run. You can do a lot more with runspace pools, such as establishing session state configurations that can be shared by all the runspaces in the pool. This is handy for, say, loading specific modules or establishing shared variables. But, it is beyond the scope of this post. Choosing the size of your runspace pool is very important. Too many and you will find diminishing (or worse) performance. Too few and you will not reap the full benefits. This is a decision that must be made per computer and per workload. More on this later.
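
As a brief, hedged illustration of that idea (an aside, not code from the original post; the module name below is just a placeholder), an InitialSessionState can preload a module and a shared variable into every runspace the pool hands out:

# Build a session state that every runspace in the pool will start with
$iss = [System.Management.Automation.Runspaces.InitialSessionState]::CreateDefault()
$iss.ImportPSModule('SQLPS')   # placeholder module name

# Add a variable that all runspaces can read
$entry = New-Object -TypeName System.Management.Automation.Runspaces.SessionStateVariableEntry -ArgumentList 'BatchId', ([Guid]::NewGuid()), 'Identifier shared by all runspaces'
$iss.Variables.Add($entry)

# One of the CreateRunspacePool overloads accepts the session state and a host
$issPool = [RunspaceFactory]::CreateRunspacePool(1, 3, $iss, $Host)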

Part of the configuration of the runspace pool is the apartment state value. With this code, I specify that all of the runspaces will be in single-threaded apartments and then open the pool for use.

$pool.ApartmentState = "STA"
$pool.Open()

Apartment states are a complicated topic and I’m not going to attempt to describe them here. I will only say that this is an attempt to force thread synchronization of COM objects. You also need to be aware of these since certain code will only work in a multi-threaded or single-threaded apartment. You should also be aware of what your IDE uses. For instance, the ISE uses STA, while the shell itself (in v2) is MTA. This can be confusing! Since this is a COM mechanism that doesn’t really ‘exist’ in Windows per se, it is not sufficient to solve your thread safety concerns. But, it is my attempt to provide what automatic synchronization I can. With that, a quick word on thread safety.

If you compare the script block above with the script block from my post on background jobs, you will see that the properties of the file objects are quite different. This is because the RunspacePool method does *not* serialize / deserialize objects, but passes the objects to the runspaces by reference. This means that an object on thread B that was created by thread A points to precisely the same memory location. So, if thread B calls a method of the object at the same time thread A is executing code in the same method, thread B’s call could be making modifications to local variables within the method’s memory space that change the outcome of, or break, thread A’s execution and vice versa. This is generally considered to be a bad thing. Be careful with this. You should take care in your code to ensure that the same object cannot be passed to more than one thread. Again, use at your own risk.

At this point, I can begin creating pipelines and assigning them to the runspace pool. In the code download you will see that I run this in a loop to add a script block for every file to the pool, but I’m keeping it simple here. There are a few other bits in the sample code that I don’t expound on in this post, too.

$pipeline  = [System.Management.Automation.PowerShell]::create()
$pipeline.RunspacePool = $pool 
$pipeline.AddScript($ScriptBlock).AddArgument($File)

Here the PowerShell pipeline object is captured and then assigned to the previously created runspace pool. The script is then added to the pipeline and a file object is passed as an argument. (Note that you can pass n parameters to a script block by appending additional AddArgument() calls. You can also queue n scripts or commands to a pipeline and they will be executed synchronously within the runspace.) The script is not executed immediately. Rather, two methods exist that cause the pipeline to begin executing. The Invoke() method is the synchronous version, which causes the dispatching thread to wait for the pipeline contents to process and return. BeginInvoke() is the asynchronous method that allows the pipeline to be started and control returned to the dispatching thread.

$AsyncHandle = $pipeline.BeginInvoke()

BeginInvoke() returns an asynchronous handle object whose properties include useful information such as the execution state of the pipeline. It’s also how you are able to hook into the pipeline at the appropriate time. To do so, the EndInvoke() method is used. EndInvoke() accepts the handle as its argument and will wait for the pipeline to complete before returning whatever output (errors, objects, etc.) was generated. In this code sample, the results are simply returned to the host pipeline. Also note that since the PowerShell class holds unmanaged resources, calling Dispose() is wise. Otherwise, garbage collection will not release the memory and your powershell.exe process will stay bloated until the object is disposed or the process is closed (just for fun you can test this using [GC]::Collect()). Closing the RunspacePool is also good practice.

$pipeline.EndInvoke($AsyncHandle)
$pipeline.Dispose()
$pool.Close()
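
As a hedged aside, you can watch the effect described above from the shell by checking the host process working set before and after cleanup (exact numbers will vary wildly by workload):

# Working set of the current PowerShell process, in MB
(Get-Process -Id $PID).WorkingSet64 / 1MB

# ... Dispose() the pipelines and close the pool, then force a collection:
[GC]::Collect()
(Get-Process -Id $PID).WorkingSet64 / 1MB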

Notes on usage

You shouldn’t use this method for every task, and when you do, every decision should be carefully considered. Take the size of your runspace pool, for instance. Think carefully about how and where your code will be executed. Where are the bottlenecks? Where is the resource usage occurring? And, of course, how many CPU cores are on the machine where the code will be executed (both host machine and remote)?

For example, I have used this method to perform index maintenance on SQL Servers. But, consider all of the pieces. If you didn’t know that index rebuilds (but not reorgs!) can be multi-threaded by SQL Server, you could get into some trouble. I came across a database tool that professes to multi-thread index rebuilds, but its method is simply to calculate the number of cores available to the server and kick off that number of rebuilds. Ignoring for a moment that you have not left any processors for Windows to use, you’ve also not considered the operations of the index rebuilds themselves. If the max degree of parallelism setting is 0 on the index definition (or any number other than 1), you could be looking at serious resource contention. Imagine an 8 core server running 8 rebuilds: that’s potentially 64 simultaneous threads! It will work, but the scheduler yields, CPU cache thrashing, context switches, and (perhaps) cross-NUMA node access costs may have a serious impact on your system.

So, be careful and think through the impact of the decisions you make when using this method.

Concurrency in PowerShell: Background jobs

This is part one of a series.
Download code for this post.

There are three methods to achieve asynchronous processing in PowerShell: background jobs, System.Management.Automation namespace classes, and (coming soon in PoSH v3!) workflows. Workflows actually share characteristics of the other two, but they are different enough to be considered a new method. I suppose you could also use the System.Threading .NET namespace, but then you’re writing straight .NET code and that’s just cheating. Over the next three weeks, I’ll be posting information about all three of these methods. There will be a lot of information, but it will all be useful and necessary. If you can avoid some of the problems I ran into it will be worthwhile. I’ll try not to be long-winded, but no promises there.

Before you get started you should know that there are a lot of ‘gotchas.’ Like a lot of PowerShell you’ll come across behavior that isn’t particularly intuitive. There are also a few ways you can get yourself into real trouble. You should familiarize yourself with the concept of thread safety, for example.

The scenario for this series is a common one: I have 50 Excel worksheets that need to be loaded into an SQL Server database as a staging phase for an ETL process. Of course, I could load these one at a time, but depending on how large they are it could take a significant amount of time. Most importantly, I am not fully utilizing the computing resources at my disposal. I want to be able to control the amount of resources I use so I can ramp it up for speed if I so desire. One way to attempt this is through the concurrent processing pattern. Background jobs are enabled through the use of the *-Job cmdlets and operate very similarly to the forking concurrent process model. Using these cmdlets is straightforward and generally protects you from doing anything particularly harmful, although it isn’t hard to make your computer unusable for a while, as I’ll demonstrate shortly.

To get started, I need a data structure for the Excel documents that I’ll be working with as well as a script block to define the work. For the data load task I’m using a version of some code I posted a while back, which I have wrapped into a function called Import-ExcelToSQLServer. Note also that this series is based on a presentation I gave at SQL Saturday #111, so you’ll see references to it throughout.

$files = Get-ChildItem '<path>' 

$ScriptBlock = `
{
    Param($File)
   
    Import-ExcelToSQLServer -ServerName 'localhost' -DatabaseName 'SQLSaturday' -SheetName "SQLSaturday_1" `
            -TableName $($File.Name.Replace('.xlsx','')) -FilePath $($File.FullName) -EA Stop
}

Note the properties of the $File object that are being used. Astute PoSHers will note that I’m having to jump through some hoops to get the parameters formatted, which shouldn’t be necessary with System.IO.FileInfo objects. The problem is that, though there are System.IO.FileInfo objects in the $Files array, the objects only exist in the memory space of the dispatcher process. This means that the new process created by the Start-Job cmdlet has no access to the object across the process context barrier. As a result, the only mechanism available to pass information to the spawned process is serialization. PowerShell converts the object to XML and then deserializes the XML into an object of type [Deserialized.System.IO.FileInfo] that has a modified set of properties and only the ToString() method. This is quite frustrating if you are hoping to utilize the methods of a FileInfo object, which are no longer available. Not only that, but the overhead of serialization and deserialization is non-trivial, not to mention the overhead of instantiating another FileInfo object if required. On the other hand, the upside is that issues of concurrency control are eliminated and thread safety is guaranteed.
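
If you want to see the effect for yourself, a quick (illustrative) experiment from the shell makes it obvious; notepad.exe is just a convenient file that exists on any Windows box:

# Return a FileInfo object from a background job
$job = Start-Job { Get-Item $env:windir\notepad.exe }
Wait-Job $job | Out-Null
$file = Receive-Job $job
Remove-Job $job

$file.PSTypeNames[0]                    # Deserialized.System.IO.FileInfo
$file | Get-Member -MemberType Method   # the FileInfo methods are gone; essentially only ToString() remains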

The intention is now to execute the script block for every file in the $files array and I do not want to wait for one to complete before the other begins. So, I can use a foreach loop to execute Start-Job.

Foreach($file in $files)
{
    Start-Job -ScriptBlock $ScriptBlock -ArgumentList $file `
         -InitializationScript {. Import-ExcelToSQLServer.ps1}
}

Note the need to include the file containing the function code as an initialization script, even if it is loaded in the host process, due to the process barrier. The new process will, however, execute the PowerShell profile if you have one defined. If you have Task Manager open after executing this, you’ll see something like this.

Start-Job Task Manager results

A process has been created for each file. The Start-Job cmdlet has also registered handles in the session. These handles are the connection from the host session to the pipeline running in the forked process.

Once the loop exits, control is returned to the host. This is the real intention of the *-Job cmdlets. They allow you to execute work and continue to utilize the shell. The jobs are available to check at the PoSHer’s leisure. If desired, you could use the Wait-Job cmdlet to enter a synchronous state and concede control until the specified job(s) is/are completed.

When the time comes, check to see if there are any jobs still running like so.

Get-Job -State "Running"

The Receive-Job cmdlet utilizes the handles registered by the Start-Job cmdlet to access the external process pipelines. Of course, you could provide a single job number or array to Receive-Job. To receive the results of all jobs at one time simply use the pipeline.

Get-Job | Receive-Job

When completed, remember to clear the job-handles.

Remove-Job *

Should I Use Background Jobs?

Fifty running PowerShell processes at ~20MB of memory each (or more, depending on the work), plus the required CPU time (compounded by the need to manage schedulers, etc.), make for a huge amount of overhead. This results in a run-time that is more than three times longer than executing the tasks synchronously. The individual unit of work (one script block execution) needs to be sufficiently long to overcome the overhead of the forked process model.
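
You can get a rough feel for this overhead on your own machine with a quick (hedged) comparison; the workload here is deliberately trivial, so the job-based version loses badly, and your numbers will differ:

# Synchronous: ten 100ms sleeps
Measure-Command {
    1..10 | ForEach-Object { Start-Sleep -Milliseconds 100 }
}

# Background jobs: one process per unit of work
Measure-Command {
    $jobs = 1..10 | ForEach-Object { Start-Job -ScriptBlock { Start-Sleep -Milliseconds 100 } }
    $jobs | Wait-Job | Receive-Job
    $jobs | Remove-Job
}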

Significant improvement can be made by throttling the number of processes that are allowed to run at one time. To do so, modify the Start-Job loop like so:

Foreach($file in $files)
{
    Start-Job -ScriptBlock $ScriptBlock -ArgumentList $file `
         -InitializationScript {. Import-ExcelToSQLServer.ps1}

    While((Get-Job -State 'Running').Count -ge 2)
    {
        Start-Sleep -Milliseconds 10
    }
} 

The addition of the while loop forces a wait if there are two or more running jobs. This dramatically improves the performance of the process by reducing the amount of overhead at the cost of causing the jobs to essentially run synchronously relative to the dispatching process. Unfortunately, it is still slower than running the jobs synchronously.

Next week I will look at a much more effective method. I am also currently working on a module that will (hopefully) provide a replacement for the Job cmdlets that will operate in essentially the same manner, just much more efficiently. However, it is still important to understand the Job cmdlets if for no other reason than they are on every computer with PowerShell. Additionally, they are simple and perfectly effective in some use cases, such as kicking off a long running process while maintaining control of the shell.

Speaking at SQL Saturday #111

I’m completely thrilled to be invited back to Atlanta on April 14th to speak at SQL Saturday #111. I’ll be throwing caution to the proverbial wind and tackling a rather difficult subject…concurrency in PowerShell. Concurrency is a fascinating and complicated subject that both solves and creates all kinds of interesting Computer Science problems. There’s no way I’ll be able to talk about everything in 60 minutes, but there should be plenty of time to make everyone in the room awfully dangerous with multiple processors.

I chose the word “concurrency” carefully. Many people think that running processes at the same time (asynchronously) is multi-threading. To me, however, true multi-threading requires shared memory space, and not all of PowerShell’s asynchronous capabilities use that model. In fact, you have to dig into the PowerShell .NET assemblies to start understanding how to actually do true multi-threading. It isn’t the most intuitive, but it’s incredibly powerful once you wrap your head around it.

I’m enjoying the topic so much, in fact, that I’m going to start my first series on this blog. Starting the Monday after SQL Saturday, I’ll be posting an in-depth look at the various methods of asynchronous processing in PowerShell. There is a small amount of information scattered around on various blog posts and forums out there, but I don’t know of anywhere that has a consolidated, more-or-less exhaustive look at the subject, so I think I’ll be bringing something good to the awesome PoSH community.

If you read this blog and come to SQL Saturday, make sure to say ‘hello!’ And make sure to follow the #sqlsat111 hash tag on Twitter.

Verifying SQL Server Database Backups with #PowerShell

Like all DBAs I’m a bit obsessive about database backups. And, of course, an untested backup isn’t much of a backup at all (my IO subsystems could be corrupting my backups RIGHT NOW). But testing your backups is time-consuming, dull work. So, to satisfy my need for validated backups and my highly tuned aversion to repetitive tasks, I’ve automated this process. This is a PowerShell script that, given an array of servers, will read the past 24 hours of backup history and restore the most recent full backup along with all subsequent log backups. Every day I get an email telling me (mostly) good things about my database backups, and that means the tiny flame of life still burning in this DBA’s jaded heart burns a little brighter. And if I do have a problem (I have issues with log backups from time to time), it tells me that, too.

Of course, this script is written with a certain environment in mind. For instance, I have the luxury of being able to take a full backup every night for my critical databases. You may have to tweak it to work for you. I also use Idera’s SQL Safe (but I don’t use it for log backups), so I’ve included code for both standard .bak and .safe files.

There are also a lot of ways to improve / expand this script. For instance, it only works on databases with one data file, which will be limiting to a lot of people. But, I hope this provides a foundation to build something that works in your environment.

<############################################################################################################

.SYNOPSIS
   This script will, for all databases on a list of servers (except for master), restore the most recent full
   backup and any log backups that it finds up to the current time. 
   
.Notes
    AUTHOR: Jon Boulineau
    LASTEDIT: 26 October 2011
.Link
     http://newsqlblog.com
     
############################################################################################################>

# Use Chad Miller's Invoke-Sqlcmd2 to avoid SQLPS
# www.sev17.com
function Invoke-Sqlcmd2
{
	param
	(
	[string]$ServerInstance,
	[string]$Database,
	[string]$Query,
    # IMPORTANT MODIFICATION: 
    # The default timeout of 30 seconds which Chad set is insufficient for the purpose.  If a restore takes longer, it will give up
    # on the process and kick off the next restore in the list (also creating errors in the report).  7200 seconds is 2 hours, but 
    # adjust as needed.  Of course, in any legitimate timeout situation you could end up with a very very long running script. This can
    # also be set to 0 for an infinite timeout.  More info: http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.commandtimeout.aspx
	[Int32]$QueryTimeout = 7200
	)

	$conn = new-object System.Data.SqlClient.SQLConnection
	$conn.ConnectionString="Server={0};Database={1};Integrated Security=True" -f $ServerInstance,$Database
	$conn.Open()
	$cmd = new-object system.Data.SqlClient.SqlCommand($Query,$conn)
	$cmd.CommandTimeout=$QueryTimeout
	$ds = New-Object system.Data.DataSet
	$da = New-Object system.Data.SqlClient.SqlDataAdapter($cmd)
	[void]$da.fill($ds)
	$conn.Close()
	$ds.Tables[0]
}

<####################### SET THESE VARIABLES TO CONFORM TO YOUR ENVIRONMENT ######################>

[Array]$serverNames = ""

# Specify the server to which the restores will be made.  BE CAREFUL WITH THIS !!!
[String]$testServer = ""

[String]$dataPath = "" 		# e:\SQLFiles\
[String]$logPath = "" 		# e:\SQLFiles\
[String]$SMTPServer = ""	# mail.mydomain.com
[String]$EmailTo = ""		# dba@mydomain.com
[String]$EmailFrom = ""		# admin@mydomain.com

# These are only necessary if you're using Idera SQLSafe
[String]$windowsUsername = "<Username>";
[String]$windowsPassword = "<EncryptedPassword>";

<####################### SET THESE VARIABLES TO CONFORM TO YOUR ENVIRONMENT ######################>


# There's probably a better way to format this using ConvertTo-HTML
[String]$resultHTMLTable = "<html><body><table>";

$historyQuery = 
    "SELECT    
        [bs].[database_name]
        , [bs].[backup_finish_date]
        , [bs].[server_name]
        , [bs].[type]
        , [mf].[physical_device_name]
    FROM
        [msdb].[dbo].[backupset] bs
        INNER JOIN [msdb].[dbo].[backupmediafamily] mf ON [bs].[media_set_id] = [mf].[media_set_id]
    WHERE
        DATEDIFF(hh , [bs].[backup_finish_date] , CURRENT_TIMESTAMP) <= 24
        AND [mf].[family_sequence_number] = 1"

foreach($name in $serverNames)
{

	# Get a list of the databases on the server
	$databases = Invoke-Sqlcmd2 -ServerInstance $name -Database master -Query "SELECT name FROM sys.databases WHERE name NOT IN ('tempdb','master')"

	# Get the entire backup history for the last 24 hours
	$backupSets = Invoke-Sqlcmd2 -ServerInstance $name -Database "master" -Query $historyQuery

	#For each of the databases, check for backups
	foreach($db in $databases)
	{
        clear;
		$resultHTMLTable = $resultHTMLTable + "<tr><td>$name</td><td>$($db.Name)</td>";
		
        # Note : This only works when one log or data file exists. 
		# Build per-database file paths from the configured base paths
		$dbDataPath = $dataPath + "$($db.Name)_data.mdf"
		$dbLogPath = $logPath + "$($db.Name)_log.ldf"

		# We will need to bring the database online when the restores are complete
		$recoverQuery = "RESTORE DATABASE $($db.Name)"; 
		
		$dropQuery = "DROP DATABASE $($db.Name)";

		# Capture the most recent full backup
		$fullBackup = $backupSets | Where-Object{$_.database_name -eq $db.name -and $_.type -eq "D" } | Sort-Object -Property @{Expression={$_.backup_finish_date}; Ascending=$false} | Select-Object -First 1 ;

		if($fullBackup -eq $null) 
		{
			$resultHTMLTable = $resultHTMLTable + "<tr><td>$($db.Name)</td><td>Backups do not exist</td></tr>";
		}
		else
		{
       
			# Store an ordered list of log backups for the database
            $logBackups = $backupSets | Where-Object{$_.database_name -eq $db.name -and $_.type.ToString() -eq "L" -and $_.backup_finish_date -ge $($fullBackup.backup_finish_date)} | Sort-Object -Property @{Expression={$_.backup_finish_date}; Ascending=$true} ;

            # Idera
			#$fileList = Invoke-Sqlcmd2 -ServerInstance $testServer -Query "EXEC [master].[dbo].[xp_ss_restorefilelistonly] @filename = '$($fullBackup.physical_device_name)', @WindowsUsername = '$windowsUsername', @EncryptedWindowsPassword = '$windowsPassword'"

            # SQL Server standard        
            $fileList = Invoke-Sqlcmd2 -ServerInstance $testServer -Query "RESTORE FILELISTONLY FROM DISK = '$($fullBackup.physical_device_name)'"
            
			
			$dataFile = $fileList | Where-Object{$_.Type -eq "D"} 
			$logFile = $fileList | Where-Object{$_.Type -eq "L"} 
            
            # For Idera backups, specify a formatted parameter for data and log paths
			$dataMove = $($dataFile.LogicalName) + ' ' + $dbDataPath
			$logMove = $($logFile.LogicalName) + ' ' + $dbLogPath

			# Idera
			# Note: This uses WITH REPLACE. Make sure this is what you want.
			<#
            $restoreQuery = "EXEC [master].[dbo].[xp_ss_restore]
                  @database = '$($fullBackup.database_name)'
    			, @filename = '$($fullBackup.physical_device_name)'
    			, @withmove = '$dataMove'
    			, @withmove = '$logMove'
    			, @Replace = 1 -- overwrite existing database 
    			, @DisconnectUsers = 1
    			, @WindowsUsername = '$windowsUsername'
    			, @EncryptedWindowsPassword = '$windowsPassword'
    			, @RecoveryMode = 'norecovery' -- may need to restore logs
    			, @nostatus = 0 -- send status to repository
    			, @ContinueAfterError = 0"
            #>
            
			# Note: This uses WITH REPLACE. Make sure this is what you want.
			$restoreQuery = "RESTORE DATABASE $($fullBackup.database_name) FROM DISK = '$($fullBackup.physical_device_name)' WITH REPLACE, NORECOVERY, CHECKSUM, MOVE '$($dataFile.LogicalName)' TO '$dataPath', MOVE '$($logFile.LogicalName)' TO '$logPath'"
            
			# Execute the restore query
			Write-Host "Restoring full backup for $($db.Name)";
			try
			{
				Invoke-Sqlcmd2 -ServerInstance $testserver -Database master -Query $restoreQuery -ErrorAction Stop

				if($logBackups -eq $null)
				{
                    $resultHTMLTable = $resultHTMLTable + "<td>No Log Backups</td>"
				} else
				{
                    $resultHTMLTable = $resultHTMLTable + "<td>Log Backups Exist</td>"
					foreach($log in $logBackups)
					{
						$restoreQuery = "RESTORE LOG $($db.Name) FROM DISK = '$($log.physical_device_name)' WITH NORECOVERY";

						Write-Host "Restoring log file $($log.physical_device_name) from $($log.backup_finish_date)";

						Invoke-Sqlcmd2 -ServerInstance $testserver -Database master -Query $restoreQuery -ErrorAction Stop
					}
				}

				$resultHTMLTable = $resultHTMLTable + "<td>Success</td>"
			}
			catch
			{
				$resultHTMLTable = $resultHTMLTable + "<td><font color=`"red`">$($Error[0])</font></td>"

				# Recovering Database
				Invoke-Sqlcmd2 -ServerInstance $testServer -Query $recoverQuery -ErrorAction continue
                
				$Error.Clear();
			} 

			# Recovering Database
			Invoke-Sqlcmd2 -ServerInstance $testServer -Query $recoverQuery -ErrorAction continue
            
            # Drop the database
            Invoke-Sqlcmd2 -ServerInstance $testServer -Query $dropQuery -ErrorAction continue
            

			$resultHTMLTable = $resultHTMLTable + "</tr>"
		} # end if
	}
} 

$resultHTMLTable = $resultHTMLTable + "</table></body></html>";

# Send email report
Write-Host "Sending email";
Send-MailMessage -Subject "Database Restore Report" -Body $resultHTMLTable -BodyAsHtml -SmtpServer $SMTPServer -To $EmailTo -From $EmailFrom

#sqlsat89 scripts

Here are the scripts I used during my session, Getting Started With PowerShell. I’ll have an after event post soon.

Starting Services:


#Start SQL Server
Get-Command
Get-Command -CommandType Cmdlet
Get-Command -CommandType Cmdlet | Format-Table name

# List all services
Get-Service 

# Attempt to filter on the "SQL" string
Get-Service | Select-String -Pattern "SQL"

# The filter doesn't work, why not?
Get-Help Select-String

# The output of Get-Service is an array of objects!
Get-Service | Get-Member ;

# Use the Where-Object cmdlet to filter the results
Get-Service | Where-Object{$_.Name -match "SQL"}

# You can start every service on the server at once if you want. 
# Get-Service | Where-Object{$_.Name -match "SQL"} | Start-Service

# Use the -Force parameter to ignore dependencies
# Get-Service | Where-Object{$_.Name -match "SQL"} | Stop-Service -Force

# There are several ways to start the services
# Start-Service "SQLSERVERAGENT" ;
# Start-Service "MSSQLSERVER" ;

Start-Service "MSSQLSERVER", "SQLSERVERAGENT" ; 
"MSSQLSERVER", "SQLSERVERAGENT" | Start-Service;

Renaming Agent Jobs:


# Declare a variable for the desired username
$job_owner = "DBAsRule" ;

# Even text is an object!
$job_owner |Get-Member

# Declare the intention to use the Microsoft.SqlServer.SMO .NET namespace
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | Out-Null

# The PowerShell 2.0 convention is to use Add-Type, but it is sometimes problematic.
# Add-Type -AssemblyName "Microsoft.SqlServer.SMO"
# Add-Type -Path "C:\Windows\assembly\GAC_MSIL\Microsoft.SqlServer.Smo\11.0.0.0__89845dcd8080cc91\Microsoft.SqlServer.Smo.dll"

# Store an instance of the local SQL in a variable
$sql_server_object = New-Object Microsoft.SqlServer.Management.SMO.Server('localhost')

# Get-Help Get-Member

# Get the SQL Agent SMO object
$sql_server_agent = $sql_server_object.JobServer;

# $sql_server_agent | Get-Member

# Get the JobCollection object
$jobs = $sql_server_agent.Jobs;



foreach($j in $jobs)
{
	$j.OwnerLoginName;
}


foreach($j in $jobs)
{
	if($j.OwnerLoginName -ne $job_owner )
	{
		$j.OwnerLoginName = $job_owner;
		# Alter() persists the property change back to the server
		$j.Alter();
	}
}