Feb 02 2011
 

As you are typing in any PowerShell console or in the PowerShell ISE, you can finish typing many things simply by hitting the "Tab" key when you are halfway through a word. This practice is referred to as "Tab Complete". If you aren't in the habit of using this all the time, then you really need to start using it heavily for your own sake. Not only does it help you type and create code faster, it is also useful for remembering cmdlet names or parameters as you type.

Things I know you can tab complete include cmdlet names, file and directory names, third-party programs, custom scripts, cmdlet parameters, variable names, object members, and functions, among many other things.

Another cool thing is that if you hit it too early in the word, you can simply hit it again to keep cycling through all the possible entries. If you type "Get-" and then start hitting tab, you will see the names of all the cmdlets until you get to the one you want. Or, after you've typed the name of a cmdlet, type "-" and then tab to see all the possible parameter names, and it even works with your own scripts!
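For example, something like this (the exact completion order depends on what is installed on your system):

Get-Pro<Tab>          # completes to Get-Process
Get-Process -<Tab>    # keep pressing Tab to cycle through the available parameters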

Skipped past the command you wanted? Shift-Tab will cycle back in the other direction.

Feb 02 2011
 

When you are trying to run a script across a very large environment, you often want to make sure that each machine you are trying to hit is accessible and that you have permission. Most importantly, however, you want to do this as quickly as you can. This got me thinking. I can use the Test-Connection cmdlet to see if the machine is online. It's a really good tool because it accepts the "-Quiet" parameter, which makes it return a simple true or false, and you can speed it up by passing "-Count 1" so it only tests once.

That's easy enough, but what is the best way to test for permissions quickly? I've given it a lot of thought, and I think the best way is to run a Test-Path against the admin$ or c$ shares. Using these built-in shares, you can quickly verify whether you have administrative privileges on the machine. Here's the catch: if you run the Test-Path cmdlet against a machine that isn't responding, it can run very slowly. So we want to first verify that the machine is on, and then test for privileges. To guarantee that the Test-Path wouldn't run, I had traditionally written two "IF" statements, one nested inside the other.

If (Test-Connection $Machine -quiet -count 1){
    If (Test-Path "\\$Machine\admin$"){
        #Code Here
    }
}

But I thought that made for some rather ugly code. Instead, I wanted to put both tests into a single "IF" statement, but I was worried that both commands would run and slow everything down. Logic says it would be rather pointless to test the second condition of an "AND" statement if the first condition is false, so I thought that maybe PowerShell would skip the second portion, and I wrote this quick script to test my theory.

Function A{
    write-host "Running A"
    Return $False
}
Function B{
    write-host "Running B"
    Return $True
}

If ((A) -AND (B)) {
    "Found True"
}Else{
    "Found False"
}
>>Running A
>>Found False

After playing with that code some more under various conditions, I found exactly what I was hoping for. If you are testing multiple conditions with an "AND" statement, PowerShell evaluates them in order until one is false, and then stops (short-circuit evaluation).

So now I have shortened my test code to just one “IF” statement.

If ((Test-Connection $Machine -quiet -count 1) -AND (Test-Path "\\$Machine\admin$")){
    #Code Here
}

As far as speed goes, it will of course depend on your environment, but I found that the following times were the averages in mine:

Machine doesn’t exist (No DNS): 2500 ms
Machine not online: 3800 ms
Permissions denied: 50 ms
Online and Accessible: 30 ms

Jan 28 2011
 

Recently, a challenge came across my desk which included comparing very large sets of data against one another. Specifically, a list of all computers in our domain compared to a list of all computers registered with a specific application. This posed an interesting question to me, “What would be the fastest way to accomplish this?”

I set out to look for different ways of comparing lists, and I can think of three. The first two are to load all of the items into an array and then search the array, item by item, for the value using either the -match operator or the -contains operator. The third is to load all the items into a hash table with empty values and then check whether each key exists. Since I know that loading a hash table should take more time than loading an array, I want to time the entire process, not just the searches.

To actually do the timing, I will use the Measure-Command cmdlet. If you haven't ever used this, you should really play with it. It's a great tool for figuring out how long any given code block takes to run. That can be useful for things like filling in a time estimate for your Write-Progress call, or reporting the execution time back to a user. Really, you can look at it as a way to avoid setting a variable to Get-Date and then creating a New-TimeSpan after the command completes; it rolls all of that into one.

So, it's a race between searching hash tables and searching arrays using both -match and -contains. Here is the code I used:

$Checks = Get-Content u:\script\workstations.txt

$ArrayContainsTime = Measure-Command {
    $Array = @(Get-Content u:\script\workstations.txt)
    $Found = 0
    foreach ($Name in $Checks){
        If ($Array -contains $Name){$found++}
    }
}
"Array Contains Count: t$($Array.Count)"
"Array Contains Found: t$($found)"

$ArrayMatchTime = Measure-Command {
    $Array = @(Get-Content u:\script\workstations.txt)
    $Found = 0
    foreach ($Name in $Checks){
        If ($Array -match $Name){$found++}
    }
}
"Array Matches Count: t$($Array.Count)"
"Array Matches found: t$($found)"

$HashTime = Measure-Command {
    $HashTable = @{}

    ForEach ($Line in Get-Content u:\script\workstations.txt){
        $HashTable.Add($Line,"1")
    }
    $Found = 0
    foreach ($Name in $Checks){
        If ($Hashtable.ContainsKey($Name)){$found++}
    }
}
"Hash Table Count: t$($HashTable.Count)"
"Hash Table Found: t$($found)"


"Milliseconds for Array Contains:t$($ArrayContainsTime.TotalMilliseconds)"
"Milliseconds for Array Matches:t$($ArrayMatchTime.TotalMilliseconds)"
"Milliseconds for Hast Table Contains:t$($HashTime.TotalMilliseconds)"

I have loaded up the text file with 2,000 entries, so we are basically comparing 2,000 items to 2,000 items. Every single one will be a match, so we can see that it’s working by making sure that the found and count values are the same. If you wanted to take this code and load two different lists, then you would see a difference there. So, without further delay, it’s off to the races!

Array Contains Count:   2000
Array Contains Found:   2000
Array Matches Count:    2000
Array Matches found:    2000
Hash Table Count:   2000
Hash Table Found:   2000
Milliseconds for Array Contains:    532.6136
Milliseconds for Array Matches: 9839.4498
Milliseconds for Hash Table Contains:   51.2049

So we have a winner! As you can see, all the methods work, but the hash table is substantially faster than the two array-based methods, searching through all 2,000 items in under one tenth of a second! The array with the -contains operator still posts a very reasonable time, and it is probably easier and more comfortable for the average scripter to use. It should also be said that the array with the -match operator isn't insanely slow by any means, and it is by far the most robust method for searching, since it can match any portion of a name in the array. Use it with caution, though, because it can create false positives. Let's say you are looking for "Computer" in a list that contains "Computer1". You may not expect that to be a match, but it will be.

So, there you have it. If you need to search a massive list for some reason and speed is top on your mind, use a hash table!

Jan 28 2011
 

A co-worker of mine asked how to get the same type of pop-up that you get from a simple wscript.echo in VBS. Since I am always telling him that PowerShell is typically easier than VBS, I was a bit annoyed that he had found one case where VBS is easier.

Searching online, you find that most people try to create a new object that is a VBScript session and then pass the wscript.echo command over to it. I don't like that solution because it's a bit kludgy. Instead, I figured there was a .NET class you could use to do this, and indeed there is.

Here is a link to the MSDN Article for System.Windows.Forms.MessageBox:
System.Windows.Forms.Messagebox

Alright, so we know there is a .NET class, so we can just make a new object, right? Well, normally, yes. But there is a bit of an annoying statement on that page: "You cannot create a new instance of the MessageBox class." That means I have to call its methods statically. Here's the basic gist:

[System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
[System.Windows.Forms.MessageBox]::Show("Hello, World.")

[Screenshot: the basic message box made with PowerShell and the .NET MessageBox class]

So that adds the assembly to our current shell session and then calls the Show method statically. As you can see, it gives a really ugly window. We can improve on that, though, by overloading the Show method with more arguments. Specifically, let's fill out the title, the type of buttons we'll use, and the icon it will show.

[System.Windows.Forms.MessageBox]::Show("This is the Text","This is the title","AbortRetryIgnore","Warning")

Alright, so how did I know what text to type to make those buttons and that icon appear? What others are available? You can refer to Microsoft's documentation to find out:
Message Box Buttons
Message Box Icons

So you can see that this can be fairly powerful, but it's not quite as "easy" as in VBS. Now, if you care about the input the user gives, it's really easy to get. All you need to do is grab the output.

$Testing = [System.Windows.Forms.MessageBox]::Show("Testing Output","","AbortRetryIgnore","Warning")
#Imaginary click of the "Retry button"
$Testing
>> Retry
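
Since the Show method returns a value you can compare against, you can act on the user's choice directly, for example with a switch. Here is a quick sketch (the action strings are just placeholders):

Switch ($Testing){
    "Abort"  { "Stopping the script" }
    "Retry"  { "Trying the operation again" }
    "Ignore" { "Skipping this item and moving on" }
}
>> Trying the operation again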

So we can work with the output in other ways now, but what if we want something more detailed, like we could get in VBS with the InputBox command? Well, that's a giant pain. I played with it for a while, but since there is no equivalent .NET class, it's not very easy. Basically, you have to create a blank form, add all your controls to it, set what all the buttons do, and then set what happens when the form closes. I found a good tutorial written by Microsoft, so I won't re-invent the wheel:
Creating a Custom Input Box

Jan 13 2011
 

Ever wanted to make your output a little cleaner? Try using the built-in way of inserting a tab or a new line into your string in PowerShell. It's an easy way to make your output look much better. All you need to do is add `n for a new line or `t for a tab.
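
For example (the values here are made up):

Write-Host "Name:`tRyan`nTitle:`tScripter"
>> Name:   Ryan
>> Title:  Scripter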

 

That's it. By the way, the backtick or backquote you are seeing is the character to the left of the "1" key on a US-formatted keyboard.

Jan 12 2011
 

Have you ever worked with arrays and felt you weren’t able to put in all the data you wanted? Have you tried making two arrays and syncing all the data between the two? Sometimes you just need to store more than one thing. The answer is to make your own custom object, and then put that into the array. It really is easier than you might think.

First, you need to make a new object. To do this we'll use the New-Object cmdlet. If you already know what type of .NET object you would like to create, you can specify it here, but the point is that we want to make our own object, so we make a generic, blank object by using System.Object.

$myObject = New-Object System.Object

This makes a new object, but a new object isn't really anything at all. Think of an object as just that: something that you can feel, that is tangible. If I were to tell you that you were holding an object, what would that mean to you? Probably nothing, until you had some idea of what it was. Let's pretend that what I just handed you is a user's workstation. Now this object has some meaning, but beyond giving it a name, we still haven't defined what the object really is. Objects have properties that define them. In the case of a computer, there are things like its manufacturer, how fast its processor is, and how much memory it has. So we want to track all of that in our object.

In PowerShell, what we need to do is add these properties to our object. We can do this with the Add-Member cmdlet. We need to tell it what type of member we are adding, what it's called, and what its value is.

$myObject | Add-Member -type NoteProperty -name Name -Value "Ryan_PC"
$myObject | Add-Member -type NoteProperty -name Manufacturer -Value "Dell"
$myObject | Add-Member -type NoteProperty -name ProcessorSpeed -Value "3 Ghz"
$myObject | Add-Member -type NoteProperty -name Memory -Value "6 GB"

Now we have a full object. It contains several properties that serve to define it and separate it from other objects, even other PCs. We can work with the object in PowerShell by calling it as a whole or by grabbing any single property.

> $myObject

Name     Manufacturer   ProcessorSpeed   Memory
----     ------------   --------------   ------
Ryan_PC  Dell           3 Ghz            6 GB

> $myObject.Memory

6 GB

> Test-Connection $myObject.Name -quiet

True

> $myObject.Manufacturer = "HP"
> $myObject

Name     Manufacturer   ProcessorSpeed   Memory
----     ------------   --------------   ------
Ryan_PC  HP             3 Ghz            6 GB

So now that you have this really cool object and can work with it, you will of course want to have many, many more, and keep them well organized and in one place. You can do this, and it too is very easy. Simply take your custom objects and put them into an array. To show you, let's first make two more objects.

$myObject2 = New-Object System.Object

$myObject2 | Add-Member -type NoteProperty -name Name -Value "Doug_PC"
$myObject2 | Add-Member -type NoteProperty -name Manufacturer -Value "Dell"
$myObject2 | Add-Member -type NoteProperty -name ProcessorSpeed -Value "2.6 Ghz"
$myObject2 | Add-Member -type NoteProperty -name Memory -Value "4 GB"

$myObject3 = New-Object System.Object

$myObject3 | Add-Member -type NoteProperty -name Name -Value "Julie_PC"
$myObject3 | Add-Member -type NoteProperty -name Manufacturer -Value "Compaq"
$myObject3 | Add-Member -type NoteProperty -name ProcessorSpeed -Value "2.0 Ghz"
$myObject3 | Add-Member -type NoteProperty -name Memory -Value "2.5 GB"

Now we have three different objects, so let's create an empty array to place them in.

$myArray = @()

Okay, so now it is simply a matter of adding the objects into the array. You can use the += operator to do it.

$myArray += $myObject
$myArray += $myobject2, $myObject3

Notice that you can add more than one object to an array at once, if you like. 

Now that this is complete, we have a list of all our custom objects. This can be displayed nicely on the screen or even sent to Out-GridView to give you a nice Excel-like view which can be searched, sorted, and filtered. You can also use the array in a ForEach statement to run code against each object. Keep in mind that you can also build the array of objects by adding them one at a time inside any given loop; this is exactly how I create the server lists for the companies I work for (see the sketch after the output below). It's also worth stating that the Add-Member cmdlet must be given a value, but you can simply give an empty string "" if you want to define it later.

> $myArray

Name         Manufacturer    ProcessorSpeed   Memory
----         ------------    --------------   ------
Ryan_PC      Dell            3 Ghz            6 GB
Doug_PC      Dell            2.6 Ghz          4 GB
Julie_PC     Compaq          2.0 Ghz          2.5 GB

> $myArray | Select-Object name

Name
----
Ryan_PC
Doug_PC
Julie_PC
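
And here is a rough sketch of the loop-based approach mentioned above; the file path and the placeholder Manufacturer value are just illustrative assumptions:

$myArray = @()
ForEach ($Computer in (Get-Content "C:\Scripts\Computers.txt")){
    $obj = New-Object System.Object
    $obj | Add-Member -type NoteProperty -name Name -Value $Computer
    $obj | Add-Member -type NoteProperty -name Manufacturer -Value ""
    $myArray += $obj
}
$myArray | Out-GridView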

Enjoy!

Jan 03 2011
 

While writing PowerShell scripts, you often want to know what types of errors are occurring within your scripts and deal with them appropriately. PowerShell gives you several ways to track errors, but one of the lesser known ones is the "Dollar Hook", $?. This is a variable that always exists and holds a Boolean stating whether the last command ended with an error or not. It is useful because, instead of setting up a generic trap, you can deal with a specific command that you know has a good chance of failing. Take this example:

Get-Process "ANonExistantProcess" -ErrorAction SilentlyContinue
$?
>>False

So how do you use it to deal with an error? That is easy: you can now run an "if" statement on it.

Get-Process "ANonExistantProcess" -ErrorAction SilentlyContinue
IF (!$?){ "There was an error!" }
>>There was an error!

Also, if you really want to look for a very specific error, there is another variable that holds the exact exit code: $LastExitCode. It works just like the errorlevel idea in DOS: when you run a command, the return code is stored in the variable, where 0 means success.

ping "ANonExistantComputer"
$LastExitCode
>>1

That said, there are a lot of commands that don't support this notation, so I would just use the Dollar Hook and then check for your specific error by looking at the $Error[0] variable at that point.
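
For instance, a minimal sketch of that pattern (the process name is made up):

Get-Process "ANonExistantProcess" -ErrorAction SilentlyContinue
IF (!$?){
    # $Error[0] always holds the most recent error record
    $Error[0].Exception.Message
}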

This gives us the tools, but know that there are some annoying restrictions. First, the Dollar Hook reflects the last COMMAND, so if you want to test for something that throws an error but isn't a command, tough luck. Try it with a divide-by-zero error; it won't work. Second, $LastExitCode has to be supported by the command itself. A lot of the time you'll know that the command threw an error, and which error, but the return code will still be 0 (success). This is especially true for non-terminating errors.

Jan 03 2011
 

It may turn out that you need to use Pi for some random reason. Instead of trying to make a variable in which you manually type it out, you can use the constant built into System.Math, which holds Pi to the full precision of a .NET double.

$Pi = [System.Math]::Pi
3 * $Pi
>> 9.42477796076938

Dec 21 2010
 

If you are writing a script that you might want to use again, it can be really useful to use script parameters. PowerShell does some really cool things with parameters that many folks don't know about or realize you can do. This post covers some of those items.

To start us off let’s look at how to use a script parameter.

### Test-Param.ps1 ###
Param($Variable1 = "Hello", $Variable2 = "World")
"$Variable1 $Variable2"

If we save this script and run it we will get an output like this:

>>./Test-Param.ps1
>>Hello World

This simple case shows that our variables are being assigned their default values. We can change these values by passing them to the script on the command line. Values are passed either positionally, in the order they are given, or by name using the parameter's name.

>>./Test-Param.ps1 "Goodbye"
>> Goodbye World

>> ./Test-Param.ps1 -Variable2 "Universe"
>> Hello Universe

>> ./Test-Param.ps1 "Universe" -Variable1 "Goodbye"
>> Goodbye Universe

As you can see, these are really pretty flexible. Another really cool part of params that you may have noticed by now, if you are following along, is that these parameter names will tab complete. So if you type "./Test-Param -" and then start hitting tab, you will cycle through all the names in the param block. If you haven't used a script in a long time, you can quickly see what values you might want to pass.

Now, sometimes you want to keep the param feature, but you need to require a value. You can place some logic after the param block, but I like to just put the code straight in there.

### Test-Param.ps1 ###
Param($Variable1 = (Read-Host "Please input a value for Variable1"),
      $Variable2 = (Get-Content $Variable1))
"$Variable1"
"$Variable2"

>> ./Test-Param.ps1
>> Please input a value for Variable1: c:\Script\Test.txt
>> c:\Script\Test.txt
>> Hello, World!

>> ./Test-Param.ps1 c:\Script\Test.txt
>> c:\Script\Test.txt
>> Hello, World!

>> ./Test-Param.ps1 c:\Script\Test.txt "Ignore my file"
>> c:\Script\Test.txt
>> Ignore my file

I think that just about covers it for this Quick Tip. Get started converting all your scripts to use a parameter block!

Dec 21 2010
 

NOTE: I have written a better script for generic multithreading, which I have covered in my post HERE. If you are looking for a script to cover your everyday needs, please read that article instead, as I believe it is a better script. This script, however, is easier to understand if you are looking to learn this for yourself!

This post is actually the reason I created the blog. When I first started looking into multithreading in PowerShell V2, I didn't find anyone on the web with a good explanation or how-to. So, why would you want to multithread your scripts? Well, if you have ever tried to run a script against every server, or even every workstation, in your org, you know that it can take a very long time, because it hits each machine one at a time, in sequence. Wouldn't it be great if you could run your script against 20 servers at a time? As it turns out, you can, and it's easier than you think.
All of the work comes in understanding the four main cmdlets surrounding multithreading in PowerShell V2: Start-Job, Wait-Job, Get-Job and Receive-Job. The basic flow is that you start all the jobs, then wait for all the jobs to finish, then see what jobs you have (get) and receive all the output.
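
In its simplest form, that flow looks something like this minimal sketch (the Get-Process payload is just an illustrative stand-in):

Start-Job -ScriptBlock {Get-Process}
Get-Job | Wait-Job
Get-Job | Receive-Job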

So, the above basic script starts a job, then waits for all jobs to finish, and then receives all the data that all jobs (only one in our case) contain. Now, this basic construct is limited, because if you try to pass a variable into the code block it won't work. That is because whatever code block you pass is a bit like opening a new PowerShell session, pasting it in, and hitting enter. Any variables that you have aren't in that new, pristine environment. The folks at Microsoft did give us a way to pass information in, though, with the "ArgumentList" parameter, which lets a variable from the host session be passed to the new session. This, however, only works when calling a script, not a code block, so we also have to use the "FilePath" parameter and provide a second PowerShell script. This is actually a good thing, because it means we can multithread any script we write, as long as it has consistent output and takes something as an argument. Take the following short script:
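
Here is a rough sketch of the kind of worker script that fits; the name Get-DiskSpace.ps1 and the WMI disk query are illustrative assumptions that the later snippets also use:

### Get-DiskSpace.ps1 ###
Param($Computer)
# Query the fixed disks on the remote machine and return the results as objects
Get-WmiObject -Class Win32_LogicalDisk -ComputerName $Computer -Filter "DriveType=3" |
    Select-Object @{Name="ComputerName";Expression={$Computer}}, DeviceID, FreeSpace, Size

From the host session, the worker is then launched as a background job like this:

Start-Job -FilePath .\Get-DiskSpace.ps1 -ArgumentList "Server01"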

Now, this script is perfect because it is going to return an object and take a computer name as an argument when executed. Presumably, we would normally multi-thread something that has a much longer execution time, but keep in mind that if the host is offline this script could take a long time to run.

So there it is. It's that easy to multithread any script that you've written, but this is the most basic construct. What happens if you want to control how many threads are open at once? How about letting the user know where the script is in its execution and how things are going? Let's start by adding a param block to the front of the script to gather a bunch of information.
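
A sketch of what that param block could look like (the parameter names and defaults here are assumptions):

Param($ComputerList = "C:\Scripts\Computers.txt",
      $WorkerScript = ".\Get-DiskSpace.ps1",
      $MaxThreads = 20,
      $SleepTimer = 500)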

Now that we have a way to get some basic settings from the user, let's read in our computer list from the file provided and kill any currently running jobs.
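
Something along these lines:

$Computers = Get-Content $ComputerList
# Remove any leftover jobs so Get-Job only reflects this run
Get-Job | Remove-Job -Force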

Now let's get our loop control going and start making some threads.
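
A rough sketch of that section, under the same assumptions as above:

$i = 0
ForEach ($Computer in $Computers){
    # Hold here until the number of running jobs drops below our declared maximum
    While (@(Get-Job -State Running).Count -ge $MaxThreads){
        Write-Progress -Activity "Creating Threads" -Status "Waiting for an open thread slot"
        Start-Sleep -Milliseconds $SleepTimer
    }
    # A slot is free: start the worker script as a new job, named after the computer
    $i++
    Start-Job -FilePath $WorkerScript -ArgumentList $Computer -Name $Computer | Out-Null
    Write-Progress -Activity "Creating Threads" -Status "Started job for $Computer" -PercentComplete ($i / $Computers.Count * 100)
}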

This section of code is pretty simple if you just break it down. First we use a while loop to hold there until the current number of running jobs is lower than the number we declared as our maximum. The Write-Progress command simply lets the user know what's going on. Once we clear the while loop, we are ready to add a job to the list of running jobs, so we start another job with the Start-Job command and then write our progress out to the user. Once this block of code is done, we want to wait for all the jobs to finish. In my short example above I used Get-Job | Wait-Job, which does the job, but it hides the progress from our user, so instead I developed this little tidbit.
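
Roughly like this (a sketch that relies on each job having been named after its computer):

While (@(Get-Job -State Running).Count -gt 0){
    # Build a list of the computers we are still waiting on and show it to the user
    $Remaining = (Get-Job -State Running | ForEach-Object {$_.Name}) -join ", "
    Write-Progress -Activity "Waiting for Jobs" -Status "Still waiting on: $Remaining"
    Start-Sleep -Milliseconds 500
}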

This block of code reads in the names of the computers that we are still waiting on and shows them in the Write-Progress command. Again I've used a while loop, which would run unchecked if not for the Start-Sleep I've placed in there, which is like saying "Hey, only check our progress every half second or so." If I didn't have the Start-Sleep, it would simply max out the processor.

Once all that is done, it is simply a matter of getting the output from all our workers. You can just use Get-Job | Receive-Job to spit it all out to the console, or you can push the objects out to PowerShell V2's grid view, which I love oh so much.
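
For example:

Get-Job | Receive-Job | Out-GridView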

So there is a script which allows you to multithread any script in your arsenal. Enjoy!

The full text for the script I use is as follows:
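
Assembled from the pieces above, a complete sketch (with the same assumed parameter names, defaults, and worker script) looks something like this:

Param($ComputerList = "C:\Scripts\Computers.txt",
      $WorkerScript = ".\Get-DiskSpace.ps1",
      $MaxThreads = 20,
      $SleepTimer = 500)

# Read in the computer list and clear out any jobs left over from a previous run
$Computers = Get-Content $ComputerList
Get-Job | Remove-Job -Force

# Start a job for each computer, never exceeding $MaxThreads at once
$i = 0
ForEach ($Computer in $Computers){
    While (@(Get-Job -State Running).Count -ge $MaxThreads){
        Write-Progress -Activity "Creating Threads" -Status "Waiting for an open thread slot"
        Start-Sleep -Milliseconds $SleepTimer
    }
    $i++
    Start-Job -FilePath $WorkerScript -ArgumentList $Computer -Name $Computer | Out-Null
    Write-Progress -Activity "Creating Threads" -Status "Started job for $Computer" -PercentComplete ($i / $Computers.Count * 100)
}

# Wait for the remaining jobs, showing the user which computers are still running
While (@(Get-Job -State Running).Count -gt 0){
    $Remaining = (Get-Job -State Running | ForEach-Object {$_.Name}) -join ", "
    Write-Progress -Activity "Waiting for Jobs" -Status "Still waiting on: $Remaining"
    Start-Sleep -Milliseconds 500
}

# Collect the output from every worker and show it in the grid view
Get-Job | Receive-Job | Out-GridView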