Saturday, January 30, 2010

Powershell Community Extensions

Given enough time and knowledge we could write a lot of the things we’ll use in Powershell ourselves, but it’s often not worth re-inventing the wheel. There’s a group of Powershell coders who have put together the Powershell Community Extensions – a set of Powershell aliases, functions, and other useful code to make working with Powershell a little easier.

For example, there’s a built-in function to split a string. Functions and DLLs are included to read and write ZIP files. There are even some Cmdlets written to allow easy reading and writing from the clipboard or MSMQ. While not all of this will be immediately useful, it’s definitely worth having around.

Installation should be pretty straightforward. Make sure all instances of Powershell are closed. The recommended install uses the MSI file. If you want the latest version, you’ll have to unzip the file into your Documents\WindowsPowerShell\Modules folder.  After that, you can import the module using “Import-Module Pscx”.  There is supposed to be a way to customize what you actually import as well so you don’t load items you’ll never use. The details should be on the Powershell Community Extensions site.
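As a rough sketch (assuming the module was unzipped into a folder named Pscx under the Modules folder described above), the check and import might look like this:

  Get-Module -ListAvailable   # confirm that Pscx appears in the list of available modules
  Import-Module Pscx          # load the extensions into the current session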

 

While I’m at it, I’d like to give props to John D. Cook who has written a short pamphlet called Day 1 With Powershell which lists a lot of little things that he wished he’d known before starting. There is some good, basic information in it, including configuration, some basics behind Powershell decisions (like = vs –eq), and some pointers to other Powershell sites. I’d recommend it if you’re just getting started with Powershell as it could give you some pointers on how to proceed past the basics.

Wednesday, January 27, 2010

Powershell – Comparison Operators

I’m writing this one so I remember these operators. Powershell doesn’t use the conventional comparison operators such as =, <, <>, and !=. Instead, it uses the operators listed below.

Operator       Conventional equivalent / example
-eq            =
-ne            <> or !=
-gt            >
-ge            >=
-lt            <
-le            <=
-contains      e.g., 1,2,3 -contains 1 is true
-notcontains   e.g., 1,2,3 -notcontains 1 is false
-not           ! (logical NOT), e.g., -not ($a -eq $b)
-and           logical AND, e.g., ($age -ge 21) -and ($gender -eq "M")
-or            logical OR, e.g., ($age -le 21) -or ($InSchool -eq $true)
 
I think this is one of the areas that is going to trip me up quite a bit. In Powershell, “=” is always used to assign.  “<” and “>” are used for redirection. The names make sense to me in a way, but I can tell I’ll be spending some time with these before I’m completely comfortable with this concept.
 
The above are the base comparisons.  If you prefix them with “i” the operator becomes case-insensitive.  If you prefix an operator with “c”, the operator will be case-sensitive.  Thus we would get the following:
  "johnson" -eq "Johnson" #True


  "johnson" -ieq "Johnson" #True


  "johnson" -ceq "Johnson" #False




Where-Object



This is a useful Cmdlet for filtering results in the pipeline, but it should be used with care. The command earlier in the pipeline still produces everything it started with; Where-Object only filters what comes through afterward. If you have a way to filter the results before passing them through the pipeline, you’ll often get better performance. The following will find all instances of Firefox running on the machine:



  Get-Process | where-object {$_.name -eq 'firefox'}


We could then pass this through the pipeline to operate on that specific object. Where-Object can be abbreviated with a “?”.
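For example, the same filter can be written with the alias, and, when the Cmdlet supports it, the filtering can be pushed to the command itself instead:

  Get-Process | ? {$_.name -eq 'firefox'}   # same filter using the ? alias
  Get-Process -Name firefox                 # filtered at the source rather than in the pipeline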



 



If, ElseIf, Else



This is a pretty basic construct in programming, and the way it was implemented in Powershell seems to make sense.



If (condition) {#do something}

ElseIf (next condition) {#do something else}

Else {#final action}



As in most uses of If, the process will work through until it finds a condition that evaluates to $true. Once it hits that, it will execute the command(s) inside the corresponding curly braces and exit out of the If.
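Here’s a minimal, runnable sketch of that flow (the values are just for illustration):

  $Value = 15
  If ($Value -lt 10) {
      "Less than 10"
  } ElseIf ($Value -lt 20) {
      "Between 10 and 19"    # the first condition that evaluates to $true wins
  } Else {
      "20 or more"
  }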



 



Switch



Sometimes it doesn’t make sense to write out a bunch of If/ElseIf lines to keep checking conditions. Switch can help shorten that. A simple example would be something like this.



$Value = 15
Switch($Value)
{
  1 {"Value is 1"}
  2 {"Value is 2"}
  15 {"Value is 15"}
  default {"Value is $_"}
}


Powershell interprets these in order and will evaluate each one in turn. The default line tells Powershell what to do if no match is found.  If you want Powershell to stop evaluating when it finds a match, add a break statement inside the {} for that condition (e.g., {"Value is 15"; break}). Otherwise, Powershell keeps going and will run the script block for every condition that matches. That may be desirable at times when you want to act on all matching statements. Other times you may want to short-circuit the process.



Another advantage of the Switch statement is that you can evaluate expressions instead of just values.  In the example above, the behavior behind the scenes is to evaluate {$_ -eq 1}, {$_ -eq 2}, and so on for the lines in the switch. You can substitute your own expressions here easily to change the evaluations.
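As a small sketch, the same kind of switch can test expressions rather than literal values:

  $Value = 15
  Switch ($Value)
  {
    {$_ -lt 10} {"Value is less than 10"; break}
    {$_ -ge 10} {"Value is 10 or more"; break}
    default     {"No match for $_"}
  }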



Switch defaults to case-insensitive equality comparisons on the values, but you can change the matching behavior with a parameter on the Switch statement (e.g., -regex, -case, -wildcard).



The biggest drawback for Switch is that it would be considered a “blocking” component in the pipeline. It needs to wait for all results before it operates. If you just want to do some filtering on the results, you would be better off using Where-Object or filtering up front.

Tuesday, January 26, 2010

Powershell – Quick note on Objects

I’m sure this comes as no surprise to anyone who’s dabbled in Powershell, but just about everything in Powershell has an Object underneath. The results we see on screen come from those objects and it’s only when those results are formatted, output, captured, etc that they cease to be an object. This drives much of the power in Powershell. You can set a variable to the contents of an object or part of an object and use that later.

Typically objects will have properties and methods. Properties define what an object “is” – e.g., Red, Small, Named, etc. Methods define what an object can do – e.g., Write, Read, Slice, Mix, etc. There are quite a few internal methods and properties that are common across the internal Powershell objects. You can find these by piping the object to the “Get-Member” cmdlet.

  $host | Get-Member


You can add your own properties and methods to objects easily by calling the Add-Member Cmdlet. The most direct way to do this seems to be by piping the object to the Add-Member Cmdlet.



$host | Add-Member -MemberType NoteProperty -Name PropertyName -Value PropertyValue



As Powershell is based on MS technologies, it makes sense that you can use the .NET Framework to work with objects. I will admit that I’m not as familiar with the wide variety of .NET objects, so I won’t try to go into detail.  For one example, here’s a way to call the .NET Framework to look up a DNS name by IP address.



  [system.Net.Dns]::GetHostByAddress("207.46.19.190")


 



Perhaps a more useful class would be System.Environment, first by querying its Static members:



  [System.Environment] | Get-Member -Static


You can see that there are a lot of useful members. If we choose one and don’t enter valid parameter data, we can even get useful error messages at times.  For example, the following will return an error that lists valid parameter values in its message:



  [system.environment]::GetFolderPath("z")


We can tell from the error message that some valid values would be Desktop, Programs, MyDocuments, Home, etc. Entering a valid value in this case will then return the path stored for that special folder.
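For example, using one of the values from that error message:

  [System.Environment]::GetFolderPath("MyDocuments")   # returns the path to the My Documents folder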



 



You can even define your own objects using the New-Object Cmdlet. With no parameters, this creates an empty object. You can create objects of various types by passing the type as the first parameter to the New-Object Cmdlet. If you set a variable to an object, you need to be careful about type casting.  Setting a variable to a string containing a date will not necessarily make that variable a System.DateTime object; it will most likely be a System.String.  You may need to explicitly cast the variable or the value to get the type you want.
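A quick sketch of the difference (the variable names are just for illustration):

  $date1 = "2009-12-25"             # no cast: this is a System.String
  [datetime]$date2 = "2009-12-25"   # explicit cast: this is a System.DateTime
  $date1.GetType().FullName
  $date2.GetType().FullName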



 



You can call on COM objects, .NET objects, Powershell objects, assemblies, and probably more than I’ve listed here, as long as you know the name or way to call those objects. I’m still learning this area of Powershell, but hope to expand on this in future posts.

Wednesday, January 13, 2010

Powershell – Piping

One of the most often-used features I’ve seen so far in Powershell has been the concept of piping the results of commands into other commands to ultimately return something formatted, limited, or otherwise morphed into the desired output. The easiest examples might be something like:
dir | more
dir | sort-object length -desc | format-table -autosize


The first command returns a listing of the files in the current folder one screen at a time, pausing for you to read each screen and tell the command when to advance.  The second is a string of commands to get a listing of files, sort by the file size from largest to smallest, then auto-size each column for a best fit based on the data. Because each result set is passed as an object through the pipeline, you don’t have to handle the format of the text or other conversions. Powershell handles the objects and consumes them until you reach the end. By default Powershell appends a hidden “| Out-Default” command to display the results on screen. If you want to see more options for output, you can run:

Get-Command Out*
Get-Command Export*


Obviously there are a lot of other commands that can be used in the pipeline, but these seem to be especially useful if you want to output the results to something other than the screen.  Also note that if you convert your results along the way using Format-Table or Out-String, you will change the results from an object into an array of text.  (Out-String is the only output command that you can insert into a pipeline and not stop the pipeline. All other "Out-" commands will terminate the pipeline.)


Blocking vs. Non-Blocking

As with other languages, some Cmdlets may be “blocking” other parts of the pipeline. For example, a sort cannot pass its results on until the entire result set has been sorted. The “more” Cmdlet in Powershell is also blocking, but the equivalent “Out-Host -Paging” will do the same thing without blocking. When to use various commands is an important consideration if you pipe multiple Cmdlets together, especially if you will be dealing with large objects or result sets.


Filtering

Filtering is accomplished by piping your resultset to Cmdlets such as Where-Object, Select-Object, ForEach-Object, or Get-Unique.
  • Where-Object allows you to examine all objects and return just those matching your criteria. This is very similar to a WHERE clause in SQL Server.
  • Select-Object acts in a manner similar to the SELECT clause of a SQL Statement and allows you to pick which properties are displayed. It can also handle things such as “TOP 5” or “Distinct” (not exact commands), but can also do some interesting handling of values in the array. For example, you can choose to display every other line of the result set or the first, last, and middle rows by examining the array.  (Note that this will create a new object in a lot of cases because you're filtering the columns.)
  • ForEach-Object operates on each result in the resultset to run commands against them.
  • Get-Unique eliminates duplicates in the resultset.
Filtering would be considered a Blocking operation in many cases and can sometimes be done more efficiently through the native commands or Cmdlets. For example, if you need to pull back just the files of type “.txt”, you may be better off filtering that out in your dir command rather than passing the entire resultset of all files to the Where-Object Cmdlet.
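A rough sketch of that difference:

  dir *.txt                                      # filtered at the source by the provider
  dir | Where-Object {$_.Extension -eq ".txt"}   # everything comes back first, then gets filtered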


Tee-Object
It’s worth mentioning this Cmdlet because it could be really useful in troubleshooting. A simple example would be:

dir | Tee-Object -Variable t1 | Select-Object Name,Length | Tee-Object -Variable t2 | Sort-Object Length -Descending


This would pipe the output of the “dir” command into a variable called $t1. It would then pass the output to Select-Object to limit the results down to Name and Length, then pass that resultset to a variable called $t2. Finally, it would output your results ordered by length descending.  You would now have two variables to examine results as you stepped through the pipeline. If you are getting unexpected results, this would be very valuable in troubleshooting.  You can also use “-filepath” instead of “-variable” to store results to a file instead of a variable.

Conclusion
There is a lot of information available about piping commands because this is one of the true strengths of Powershell. I'm not going to go into a lot of detail at this point. There will be more examples in future posts and there are lots of other posts detailing the process.  Piping takes a set of results and passes that to the next command, and to the next, and to the next, until it gets to the end of the pipeline. At that point, the results are returned, stored, or discarded (if you choose to output to Out-Null). You can store these results along the way for comparison or even store them in a file for later analysis.

I plan to spend a pretty significant amount of time familiarizing myself with the various uses for the pipeline. Because it's used so heavily, this is going to be a key component to understanding and getting the most benefit from Powershell.

Saturday, January 9, 2010

Powershell – Arrays & Hash Tables

I don’t have too much to say about Arrays at this point, but as I’m partly blogging this for my own education, I’ll put down what I’m learning.  At this point, I’m expecting the arrays to be useful when trying to build my example project. One piece will be to step through existing files to look for certain flags or traits to tell me to do further processing.

Arrays

First of all, any variable that stores the results of a command with multi-line output will result in an array. This includes commands such as dir and ipconfig. Arrays start with position [0]. You can create an array by using comma-separated values such as $Variable = 1,2,3,4 or by using $Variable = @(1).

The first cool thing I saw was that using negative positions in an array steps through the array in reverse.  That means if I have an array of $Variable = @(0,1,2,3,4), that $Variable[1] will be 1.  $Variable[-1] will be 4.  I can see this being very useful when you need to step backwards through the results. This is very intuitive to me and I wish this were easier to implement in other scenarios. However, stepping through the array backwards could be tricky because it could result in creating a copy of the array. For small arrays this wouldn’t necessarily be a problem, but with large arrays, I could see performance becoming an issue. If you need to reverse the entire array, you can simply use [array]::Reverse($Variable) to reverse the array elements in place. (This actually changes the array stored in the variable.)
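A short sketch of the indexing and in-place reversal described above:

  $Variable = @(0,1,2,3,4)
  $Variable[1]                  # 1
  $Variable[-1]                 # 4 (the last element)
  [array]::Reverse($Variable)   # reverses the elements in place
  $Variable                     # 4,3,2,1,0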

You can easily specify multiple elements to return from the array using $Variable[0,4,-12], which would return the first element (index 0), the fifth element (index 4), and the element 12th from the end.

Adding an element is pretty easy. You can type:
  $Variable += $NewElement
However, this will create a new copy of the array in order to add another element because arrays in Powershell cannot be resized. This may be something to consider when working with large arrays.

Hash Tables

Hash tables seem to be pretty much what you’d expect. They store key/value pairs such as “Name”, “SQL Server 2008 Enterprise Edition” stored together. You define these in a manner very similar to that of arrays:
  $Variable = @{Name="SQL Server"; Version="10.0"; Edition="Enterprise"}
That will define a new hash table containing three key/value pairs: Name, Version, and Edition.  Each will have a Name and Value property that will display when you display the variable. You can display just the value of a particular pair with $Variable.Name or $Variable.Version.

Adding a new keypair is easy as well.  Use:
  $Variable.NewName = "New Value"

Similarly, you can remove a keypair with:
  $Variable.Remove("Name")

Hash Tables are used with the Format-Table command as well. You can customize the columns returned by manipulating their respective hash table values: Expression, Width, Label, and Alignment. Use get-help Format-Table for more information.
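As a sketch (the column label and formatting values here are just illustrative), a hash table can define a custom column:

  dir | Format-Table Name, @{Label="Size (KB)"; Expression={$_.Length / 1KB}; Width=12; Alignment="Right"}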

 

By default, assigning one array or hash table to another variable with a normal assignment like $Var1 = $Var2 copies a reference (pointer) to the same data, not the data itself. Changes made to the contents through either variable will show up in the other. Using the .Clone() method on either will make an independent copy.
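A small sketch of the reference behavior and of .Clone():

  $a = @(1,2,3)
  $b = $a            # $b references the same array as $a
  $b[0] = 99
  $a[0]              # 99 (the change shows up through both variables)
  $c = $a.Clone()    # an independent copy
  $c[0] = 1
  $a[0]              # still 99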

Finally, if you need to strongly type an array, you can do so by putting the datatype before the array definition:
  [int[]]$MyArray = @(0,1,2,3)

This will ensure that all values stored in the array are of that datatype. Any attempt to add a value to the array that is not of (or cannot be converted to) that datatype will raise an error.

 

There are a lot of uses for arrays and hash tables. This just scratches the surface of what can be done, but it should be enough to get you started.

Powershell – Variables

I’ve just started to play around with variables in Powershell. I used them a little bit before this, but am in the process of figuring out basic commands around variables. First thing to note is that variables are not case-sensitive.  That means that a variable called “$MyVariable” is the same as “$MYVARIABLE” or “$mYvARIABLE”.  You can declare a variable and assign a value easily using something like:
$variable = value
That pretty much covers the majority of basic assignments. You can also use the Set-Variable command to not only set a variable, but also change some options, such as making a variable read-only or making it into a constant. Once a constant has been declared/set, you cannot delete it; it will be cleaned up when the Powershell session ends.  The only somewhat tricky point around variables is using special characters.  If a variable name contains special characters, the name needs to be wrapped in {}’s, e.g., ${my-variable}.
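A couple of hedged examples (the variable names are made up for illustration):

  Set-Variable -Name MaxRetries -Value 3 -Option ReadOnly    # read-only variable
  Set-Variable -Name Pi -Value 3.14159 -Option Constant      # constant; cannot be removed
  ${my-odd.name} = "braces allow special characters in the name"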
It’s easy to list all of the variables and their values using:
dir variable:
dir variable: | Format-Table Name, Value, Description -autosize



The first command lists all variables and their values. The second lists the name, value, and a Description property as well, and then auto-sizes everything accordingly. This is useful if you set other properties of your variables besides name and value. You can also add the -Wrap parameter to that latter command to see the full descriptions of the system-level variables.


If you want to use Windows Environment variables, you’ll want to reference the “env:” virtual drive within Powershell. These are the variables such as your Path or Windows folder that are used within Windows. Any changes you make to these variables are only set within the scope of your Powershell session.
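For example:

  dir env:                     # list all of the Windows environment variables
  $env:Path                    # read a single environment variable
  $env:MyTestValue = "hello"   # set one (only lasts for this Powershell session)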


By default, variables stay within the scope of the function or session in which they are created, but you can override this with scope modifiers: $private: limits a variable to the current scope, $local: uses default scoping (variables can be read by the child scopes the current scope creates), $script: allows the variable to be used anywhere in the current script, and $global: allows the variable to be used anywhere, even in external functions and scripts.
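A brief sketch of those scope modifiers (the variable names are illustrative):

  $global:BuildNumber = 42       # available everywhere in the session
  $script:Counter = 0            # available throughout the current script
  $private:Scratch = "hidden"    # not visible to child scopes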





Variable Types and Attributes


Working with variables is easy with the defaults because they are weakly or loosely typed. If I assign a string to a variable, that variable contains a string. If I assign a number, the variable contains a numeric type such as an Int32. That means that variables can change types easily as the script runs. However, that also means that if I’m expecting a variable to contain a date value and it somehow gets a string or floating point value, I could be very surprised.  Powershell allows for strong data typing of variables as well. If you define a variable in this manner:


[datetime]$myDate = "2009-12-25"


You will explicitly declare a variable of type DateTime. If you then attempt to set the variable to a string that cannot be converted to a date, you’ll get an error message. The variable types are standard .NET types along with a handful of Powershell-specific types.


You can set other properties using a variable’s Attributes, and you can even clear out the strong typing by calling Attributes.Clear() against the variable. Attributes can be used to set constraints on a variable, such as whether or not it can store $Null values, check constraints, or even RegEx patterns for the data stored in the variable.
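As a sketch, clearing the constraint on the variable declared above might look like this:

  (Get-Variable myDate).Attributes.Clear()   # removes the [datetime] constraint
  $myDate = "now this can hold any type again"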





This is by no means a definitive explanation of everything around variables in Powershell, but it’s a good start for most people. You can always run get-help about_Variables within Powershell for more details or get-help Set-Variable to read more about setting variables. I may update or post a follow-up if something else pertinent about variables comes up. I’d love to know a way to list out variable datatypes from within Powershell, but haven’t found a way yet.

Friday, January 8, 2010

Powershell – Modules and Profiles

Probably not technically correct, but pretty close. I’ve been trying to use SQLPSX for Powershell, recently updated to v2.0. For the longest time, I’ve tried various methods to import the modules and couldn’t get them to import successfully. I knew that this was likely due to lack of knowledge, so I spent a little time tracking down the root cause of my problems.
After quite a bit of searching, I finally figured out why I’ve been having such a hard time loading Powershell modules. By default, the Modules reside in
  %USERPROFILE%\Documents\WindowsPowershell\Modules
I verified that this path was set within Powershell using quite a few different code snippets.  For some reason, this folder was never created in my Documents folder.  I manually created a folder called “WindowsPowershell” and another inside it called “Modules”, extracted each of SQLPSX’s module folders into that newly created directory, and I could finally use external modules. I restarted Powershell and used
import-module SQLServer
I got a warning that Powershell could not load the modules until I set my execution policy to allow remotely signed scripts. That’s not too hard to do, but I couldn’t import new modules without doing that. I ran
Set-ExecutionPolicy remotesigned
and answered “Y” when prompted. (This is a local test machine so I’m not as concerned about changing this setting.) I re-ran my import-module command and the modules finally imported.

Of course, now I’d like to automatically load these commands whenever I run Powershell, as well as customize the default Powershell environment to pull in the various SQL Powershell modules used in the SQLPS environment that ships with SQL Server 2008.  I found this MSDN article that discusses profiles and how to work with them. You can create a default profile easily with:
new-item -path $profile -itemtype file -force
notepad $profile



This will create a new profile for the current user and the current shell (the default $profile path). The second command will open up that new file for editing. Place whatever commands you want to run on startup in this file. For example, this article discusses how you can get all of the behavior of the default SQLPS mini-shell. While not optimal, it looks like a great way to get the full power of Powershell with the new SQL functionality.
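If you want to see the other profile paths (per-user, all-users, and so on), this should list them:

  $profile | Format-List * -Force   # shows AllUsersAllHosts, CurrentUserCurrentHost, and the rest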

Thursday, January 7, 2010

Getting Started with Powershell

I’m in the process of learning PowerShell. This seems to be the first scripting language that we can use at the server/desktop level that has some serious backing from Microsoft. I remember VBScript, CScript, Batch files, and similar, but it seemed that if I wanted to run a script, I had to find workarounds or plug-ins to do what I wanted inside of some other language and then mix and match those together.  Powershell seems to drive a lot of processes behind the scenes. That being said, here’s a quick summary of some important things I’m learning in the process.

1. Powershell can call executables directly, but needs an explicit path to those executables if they aren’t in the system path. This means that you may need to use .\MyProgram.exe instead of just MyProgram.exe.

2. get-help command  - I can see myself spending a lot of time with this to figure out syntax and uses for various commands. As I’ve played around with it tonight, I’m really impressed by how much information is available without needing to leave PowerShell. Reminds me a lot of the “man” pages in *IX, but with more examples and in a language I can understand.

3. get-command -verb <verb> - Another useful command to list all Powershell Cmdlets that use that verb.

4. Parameters can be abbreviated as long as there are enough characters to uniquely ID the parameter. An error message will be thrown if there aren’t enough. Probably better to let auto-complete handle the parameter name in that case to avoid possible ambiguity.

5. Common Parameters. There are apparently several common parameters that should be available for most Cmdlets.  I can see a lot of use for –ErrorVariable and –OutVariable. These are designed to capture error or output details, respectively.
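A rough example of each (the file name is made up and intentionally missing):

   Get-ChildItem NoSuchFile.txt -ErrorAction SilentlyContinue -ErrorVariable err
   $err                                    # the captured error record(s)
   Get-Process -OutVariable procs | Out-Null
   $procs.Count                            # output captured even though the display was suppressed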

6. Aliases. I can see these being both helpful and a little painful. Helpful in that a lot of familiar commands such as dir and ls are aliases, but painful in forgetting that I’ve set one and seeing some underlying Cmdlet change. I guess I have some qualms from my days of using a pretty customized BASH shell and forgetting a lot of the basics that were used to create that shell in the first place. Just running get-help on Alias returned several screens of information. Here’s an example command to see each Cmdlet followed by its aliases:
   dir alias: | Group-Object definition

  And just as I figured, there’s a way to export the aliases and re-import them from a file later so you don’t need to define them each time you restart Powershell. Export-Alias and Import-Alias

7. Virtual Drives. PowerShell has a lot of different virtual drives set up to access Aliases, Registry settings, physical drives, and others. I’ll be exploring those more as I need them. I’m halfway assuming that one of these tied to a server would be related to AD structures, but that may just be a whole new set of Cmdlets.
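For a quick look at what’s available:

   Get-PSDrive                    # lists Alias:, Env:, Function:, HKCU:, HKLM:, Variable:, and the physical drives
   dir HKLM:\SOFTWARE\Microsoft   # browse the registry as if it were a file system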

 

I think this may be a good place to stop for the day. It seems the next section deals with functions, which remind me of DOS Batch file parameters at first glance. I hope that is not the case, but I’ll know more about that tomorrow. So far I’ve gotten a good handle on some of the basics and learned some ways to help myself during scripting. That’s a great start.