Anything you define in PowerShell – variables, functions, or settings – has a certain life span. Eventually, it expires and is automatically removed from memory. This chapter talks about “scope” and how you manage the life span of objects or scripts.
Understanding and correctly managing scope is important. You want to make sure that a production script is not negatively influenced by “left-overs” from a previous script. Or you want certain PowerShell settings to apply only within a script. Maybe you are also wondering why functions defined in a script you run won’t show up in your PowerShell console. These questions all touch on “scope”.
At the end of this chapter, we will also be looking at how PowerShell finds commands and how to manage and control commands if there are ambiguous command names.
What’s a Scope, Anyway
“Scope” represents the area a given object is visible in. You could also call it “territory”. When you define something in one territory, another territory may not see the object. There are important default territories or scopes in PowerShell:
- PowerShell Session: Your PowerShell session – the PowerShell console or a development environment like ISE – always opens the first scope which is called “global”. Anything you define in that scope persists until you close PowerShell.
- Script: When you run a PowerShell script, this script by default runs in its own scope. So any variables or functions a script declares will automatically be cleared when the script ends. This ensures that a script does not leave behind left-overs that may influence the global scope or other scripts you run later. Note that this default behavior can be changed both by the user and the programmer, enabling the script to store variables or functions in the caller’s scope. You’ll learn about that in a minute.
- Function: Every function runs in yet another scope, so variables and functions declared in a function are by default not visible to the outside. This guarantees that functions won’t interfere with each other and write to the same variables – unless that is what you want. To create “shared” variables that are accessible to all functions, you would manually change the scope. Again, that’ll be discussed in a minute.
- Script Block: Since functions really are named script blocks, what has been said about functions also applies to script blocks. They run in their own scope or territory too.
Working with Scopes
One important aspect of PowerShell’s implementation of scopes is the way objects from one scope or territory are passed to child scopes. In a nutshell, this works almost like “inheritance”: by default, child scopes can access objects created in parent scopes.
“Inheritance” is the wrong term, though, because in PowerShell this works more like a “cross-scope traversal”. Let’s check this out by looking at some real world examples.
Accessing Variables in Other Scopes
You learned that when you define a variable in your PowerShell console, it is stored in the global scope, which is the parent of all other scopes. Will this variable be available in child scopes, too? Let’s say you are calling a script or a function. Will the variable be accessible from within the script or function?
Yes, it will. By default, anything you define in a scope is visible to all child scopes. Although it looks a bit like “inheritance”, it really works differently.
Whenever PowerShell tries to access a variable or function, it first looks in the current scope. If it is not found there, PowerShell traverses the parent scopes and continues its search until it finds the object or ultimately reaches the global scope. So, what you get will always be the variable or function that was declared in closest possible proximity to your current scope or territory.
By default, unless a variable is declared in the current scope, there is no guarantee that you are accessing a specific variable in a specific scope. Let’s assume you created a variable $a in the PowerShell console. When you now call a script, and the script accesses the variable $a, two things can happen: if the script has defined $a itself, you get the script’s version of $a. If the script has not defined $a, you get the variable from the global scope that you defined in the console.
So here is the first golden rule that derives from this: in your scripts and functions, always declare variables and give them an initial value. If you don’t, you may get unexpected results. Here is a sample:
function Test {
  if ($true -eq $hasrun) {
    'This function was called before'
  }
  else {
    $hasrun = $true
    'This function runs for the first time'
  }
}
When you call the function Test for the first time, it states that it was called for the first time. When you call it a second time, it should notice that it was called before. In reality, it does not. Each time you call it, it reports that it is running for the first time. Now enter this line in the PowerShell console:
$hasrun = 'some value'
When you now run the function Test again, it suddenly reports that it ran before. So the function is not at all doing what it was supposed to do. All of the unexpected behaviors can be explained with scopes.
Since each function creates its own scope, all variables defined within only exist while the function executes. Once the function is done, the scope is discarded. That’s why the variable $hasrun cannot be used to remember a previous function call. Each time the function runs, a new $hasrun variable is created.
So why then does the function report that it has been called before once you define a variable $hasrun with arbitrary content in the console?
When the function runs, the if statement checks whether $hasrun is equal to $true. Since at that point there is no $hasrun variable in this scope, PowerShell starts to search for the variable in the parent scopes. Here, it finds the variable. And since the if statement compares a boolean value with the variable content, automatic type casting takes place: the content of the variable is automatically converted to a boolean value. Almost anything except $null results in $true (the exceptions are “empty” values such as 0, an empty string, or $false). Check it out: assign $null to the variable, then call the function again:
PS> $hasrun = $null
PS> test
This function runs for the first time
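If you want to see this automatic casting in isolation, you can convert values to Boolean explicitly (a quick illustration, not part of the sample function):

PS> [bool]'some value'
True
PS> [bool]$null
False
PS> [bool]0
False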
To solve this problem and make the function work, you have to use global variables. A global variable is basically what you created manually in the PowerShell console, and you can create and access global variables programmatically, too. Here is the revised function:
function Test {
  if ($global:hasrun -eq $true) {
    'This function was called before'
  }
  else {
    $global:hasrun = $true
    'This function runs for the first time'
  }
}
Now the function works as expected:
PS> test
This function runs for the first time
PS> test
This function was called before
There are two changes in the code that made this happen:
- Since all variables defined inside a function have a limited life span and are discarded once the function ends, information that needs to persist after the function has ended must be stored in the global scope. You do that by adding the prefix “global:” to your variable name.
- To avoid implicit type casting, reverse the order of the comparison. PowerShell always looks at the type to the left, so if that is a boolean value, the variable content will also be turned into a boolean value. As you have seen, this may result in unexpected cross-effects. By using your variable first and comparing it to $true, the variable type will not be changed.
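A quick illustration of the difference (not part of the revised function): with the boolean on the left, the string is cast to a boolean; with the string on the left, $true is cast to the string "True" instead:

PS> $true -eq 'some value'
True
PS> 'some value' -eq $true
False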
Note that in place of global:, you can also use script:. That’s another scope that may be useful. If you run the example in the console, both represent the same scope, but when you define your function in a script and then run that script, script: refers to the script’s scope, so it creates “shared variables” that are accessible from anywhere inside the script, as sketched below.
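Here is a minimal sketch of such a shared variable (the function name Count-Calls and the counter variable are made up for this example). Save it as a script and run it; both calls increment the same script-level counter:

$script:counter = 0

function Count-Calls {
  # script: addresses the script scope, so every function in this
  # script reads and writes the very same counter variable
  $script:counter++
  "Count-Calls has now been called $script:counter time(s)."
}

Count-Calls
Count-Calls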
Keeping Information Private
Often, you want to make sure variables or functions do not spill over and pollute the global environment, so you want to make sure they are kept private. PowerShell by default does that for you, because variables and functions can only be seen by child scopes. They do not change the parent scopes.
The same is true for most PowerShell settings because they too are defined by variables. Let’s take a look at the ErrorActionPreference setting. It determines what a cmdlet should do when it encounters a problem. By default, it is set to ‘Continue’, so PowerShell displays an error message but continues to run.
In a script, when you set $ErrorActionPreference to ‘Stop’, you can trap errors and handle them yourself. Here is a simple example. Type in the following code and save it as a script, and then run the script:
$ErrorActionPreference = 'Stop'

trap {
  "Something bad occurred: $_"
  continue
}

"Starting"
dir nonexisting:
Get-Process willsmith
"Done"
When you run this script, both errors are caught, and your script controls the error messages itself. Once the script is done, check the content of $ErrorActionPreference:
PS> $ErrorActionPreference
Continue
It is still set to ‘Continue’. By default, the change made to $ErrorActionPreference was limited to your script and did not change the setting in the parent scope. That’s good because it prevents unwanted side-effects and left-overs from previously running scripts.
Note: If the script did change the global setting, you may have called your script “dot-sourced”. We’ll discuss this shortly. To follow the example, call your script the default way: in the PowerShell console, enter the complete path to your script file. If you have to place the path in quotes because of spaces, prepend it with the call operator “&” (see the example below).
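For example (the paths are placeholders), a default call looks like one of these:

PS> C:\scripts\script1.ps1
PS> & 'C:\my scripts\script1.ps1'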
Using Private Scopes
In the previous script, the change to $ErrorActionPreference is automatically propagated to all child scopes. That’s the default behavior. While this does not seem to be a bad thing – and in most cases is what you need – it may become a problem in complex script solutions. Just assume your script calls another script.
Now, the second script becomes a child scope, and your initial script is the parent scope. Since your initial script has changed $ErrorActionPreference, this change is propagated to the second script, and error handling changes there as well.
Here is a little test scenario. Type in and save this code as script1.ps1:
$ErrorActionPreference = 'Stop'

trap {
  "Something bad occurred: $_"
  continue
}

$folder = Split-Path $MyInvocation.MyCommand.Definition

'Starting Script'
dir nonexisting:
'Starting Subscript'
& "$folder\script2.ps1"
'Done'
Now create a second script and call it script2.ps1. Save it in the same folder:
"script2 starting" dir nonexisting: Get-Process noprocess "script2 ending"
When you run script2.ps1, you get two error messages from PowerShell. As you can see, the entire script2.ps1 is executed. You can see both the start message and the end message:
PS> & 'C:\scripts\script2.ps1'
script2 starting
Get-ChildItem : Cannot find drive. A drive with the name 'nonexisting' does not exist.
At C:\scripts\script2.ps1:2 char:4
+ dir <<<< nonexisting:
    + CategoryInfo          : ObjectNotFound: (nonexisting:String) [Get-ChildItem], DriveNotFoundException
    + FullyQualifiedErrorId : DriveNotFound,Microsoft.PowerShell.Commands.GetChildItemCommand

Get-Process : Cannot find a process with the name "noprocess". Verify the process name and call the cmdlet again.
At C:\scripts\script2.ps1:3 char:12
+ Get-Process <<<< noprocess
    + CategoryInfo          : ObjectNotFound: (noprocess:String) [Get-Process], ProcessCommandException
    + FullyQualifiedErrorId : NoProcessFoundForGivenName,Microsoft.PowerShell.Commands.GetProcessCommand
script2 ending
That is expected behavior. By default, $ErrorActionPreference is set to “Continue”, so PowerShell outputs error messages and continues with the next statement.
Now call script1.ps1, which internally calls script2.ps1. The output suddenly looks completely different:
PS> & 'C:\scripts\script1.ps1'
Starting Script
Something bad occurred: Cannot find drive. A drive with the name 'nonexisting' does not exist.
Starting Subscript
script2 starting
Something bad occurred: Cannot find drive. A drive with the name 'nonexisting' does not exist.
Done
No PowerShell error messages anymore. script1.ps1 has propagated its ErrorActionPreference setting to the child script, so the child script now also uses the setting “Stop”. Any error in script2.ps1 now bubbles up to the next available error handler, which happens to be the trap in script1.ps1. That explains why the first error in script2.ps1 was output by the error handler in script1.ps1.
When you look closely at the result, you will notice, though, that script2.ps1 was aborted. It did not continue to run. Instead, when the first error occurred, all remaining calls were skipped.
That again is default behavior: the error handler in script1.ps1 uses the statement “continue”, so after an error was reported, the error handler continues. It just does not continue in script2.ps1. That’s because an error handler always continues with the next statement that resides in the same scope in which the error handler is defined, and script2.ps1 is a child scope.
Here are two rules that can correct the issues:
- If you want to call child scripts without propagating information or settings, make sure you mark those settings as private: (for example, $private:ErrorActionPreference). Note, though, that this will also prevent the changes from being visible in other child scopes such as functions you may have defined.
- If you do propagate $ErrorActionPreference = 'Stop' to child scripts, make sure you also implement an error handler in each of those scripts, or else the script will be aborted at the first error.
Here is the revised script1.ps1 that uses private:
$private:ErrorActionPreference = 'Stop'

trap {
  "Something bad occurred: $_"
  continue
}

$folder = Split-Path $MyInvocation.MyCommand.Definition

'Starting Script'
dir nonexisting:
'Starting Subscript'
& "$folder\script2.ps1"
'Done'
And this is the result:
PS> & 'C:\scripts\script1.ps1'
Starting Script
Something bad occurred: Cannot find drive. A drive with the name 'nonexisting' does not exist.
Starting Subscript
script2 starting
Get-ChildItem : Cannot find drive. A drive with the name 'nonexisting' does not exist.
At C:\scripts\script2.ps1:2 char:4
+ dir <<<< nonexisting:
    + CategoryInfo          : ObjectNotFound: (nonexisting:String) [Get-ChildItem], DriveNotFoundException
    + FullyQualifiedErrorId : DriveNotFound,Microsoft.PowerShell.Commands.GetChildItemCommand

Get-Process : Cannot find a process with the name "noprocess". Verify the process name and call the cmdlet again.
At C:\scripts\script2.ps1:3 char:12
+ Get-Process <<<< noprocess
    + CategoryInfo          : ObjectNotFound: (noprocess:String) [Get-Process], ProcessCommandException
    + FullyQualifiedErrorId : NoProcessFoundForGivenName,Microsoft.PowerShell.Commands.GetProcessCommand

script2 ending
Done
Now, errors in script1.ps1 are handled by the built-in error handler, and errors in script2.ps1 are handled by PowerShell.
And this is the revised script2.ps1 that uses its own error handler:
trap {
  "Something bad occurred: $_"
  continue
}

"script2 starting"
dir nonexisting:
Get-Process noprocess
"script2 ending"
Make sure you change script1.ps1 back to the original version by removing “private:” again before you run it:
PS> & 'C:\scripts\script1.ps1'
Starting Script
Something bad occurred: Cannot find drive. A drive with the name 'nonexisting' does not exist.
Starting Subscript
script2 starting
Something bad occurred: Cannot find drive. A drive with the name 'nonexisting' does not exist.
Something bad occurred: Cannot find a process with the name "noprocess". Verify the process name and call the cmdlet again.
script2 ending
Done
This time, all code in script2.ps1 was executed and each error was handled by the new error handler in script2.ps1.
Calling Scripts “Dot-Sourced”
In the previous sections you learned that a PowerShell developer can select the scope PowerShell should use to access a variable or function. The user also has control over how scoping works.
In Figure 12.1 you see that by default, the global scope (representing the PowerShell console or development environment) and the script scope (representing a script you called from global scope) are two different scopes. This guarantees that a script cannot change the caller’s scope (unless the script developer used the ‘global:’ prefix as described earlier).
If the caller calls the script “dot-sourced”, though, the script scope is omitted, and what would have been the script scope now is the global scope – or put differently, global scope and script scope become the same.
This is how you can make sure functions and variables defined in a script remain accessible even after the script is done. Here is a sample. Type in the code and save it as script3.ps1:
function test-function {
  'I am a test function!'
}

test-function
When you run this script the default way, the function test-function runs once because it is called from within the script. Once the script is done, the function is gone. You can no longer call test-function.
PS> & 'C:\script\script3.ps1'
I am a test function!
PS> test-function
The term 'test-function' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:14
+ test-function <<<<
    + CategoryInfo          : ObjectNotFound: (test-function:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException
Now, run the script dot-sourced! You do that by replacing the call operator “&” with a dot:
PS> . 'C:\script\script3.ps1'
I am a test function!
PS> test-function
I am a test function!
Since the script scope and the global scope are now identical, the script defined the function test-function in the global scope. That’s why the function is still there once the script has ended.
There are two primary reasons to use dot-sourcing:
- Library script: your script is not actually performing a task but rather works like a library. It defines functions for later use.
- Debugging: you want to explore variable content after a script has run.
The profile script that PowerShell runs automatically during startup ($profile) is an example of a script that is running dot-sourced, although you cannot see the actual dot-sourcing call.
Note: To make sure functions defined in a script remain accessible, a developer could also prepend the function name with “global:”. However, that may not be such a clever idea. The prefix “global:” always creates the function in the global context. Dot-sourcing is more flexible because it creates the function in the caller’s context. So if a script runs another script dot-sourced, all functions defined in the second script are also available in the first, but the global context (the console) remains unaffected and unpolluted.
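As a sketch (reusing the test-function example from above), a script would define such a global function like this:

# Creates the function directly in the global scope, so it survives
# even a normal (non-dot-sourced) script call:
function global:test-function {
  'I am a test function!'
}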
Managing Command Types
PowerShell supports a wide range of command types, and when you call a command, yet another kind of scope comes into play. Each command type lives in its own scope, and when you ask PowerShell to execute a command, it searches these command type scopes in a specific order.
This default behavior is completely transparent if there is no ambiguity. If however you have different command types with the same name, this may lead to surprising results:
# Run an external command:
ping -n 1 10.10.10.10

Pinging 10.10.10.10 with 32 bytes of data:
Reply from 10.10.10.10: bytes=32 time<1ms TTL=128

Ping statistics for 10.10.10.10:
    Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 2ms, Maximum = 2ms, Average = 2ms

# Create a function having the same name:
function Ping { "Ping is not allowed." }

# The function has priority over the external program and shadows the command:
ping -n 1 10.10.10.10

Ping is not allowed.
As you can see, your function was able to “overwrite” ping.exe. Actually, it did not overwrite anything: the scope functions live in simply has a higher priority than the scope applications live in. Aliases live in yet another scope, which has the highest priority of them all:
Set-Alias ping echo
ping -n 1 10.10.10.10

-n 1 10.10.10.10
Now, Ping calls the Echo command, which is an alias for Write-Output and simply outputs the parameters that you may have specified after Ping in the console.
CommandType | Description | Priority
----------- | ----------- | --------
Alias | An alias for another command, added by using Set-Alias | 1
Function | A PowerShell function, defined by using the function keyword | 2
Filter | A PowerShell filter, defined by using the filter keyword (a function with a process block) | 2
Cmdlet | A PowerShell cmdlet from a registered snap-in | 3
Application | An external Win32 application | 4
ExternalScript | An external script file with the file extension “.ps1” | 5
Script | A script block | –
Get-Command can tell you whether there are ambiguities:
Get-Command Ping

CommandType     Name            Definition
-----------     ----            ----------
Function        Ping            "Ping is not allowed."
Alias           ping            echo
Application     PING.EXE        C:\Windows\system32\PING.EXE
Invoking a Specific Command Type
To make sure you invoke the command type you are after, you can use Get-Command to retrieve the command of that specific type, and then execute it with the call operator “&”. So, in the example above, to explicitly call ping.exe, use this:
# Get the command named "Ping" with the command type "Application":
$command = Get-Command Ping -CommandType Application

# Call the command:
& $command -n 1 10.10.10.10

Pinging 10.10.10.10 with 32 bytes of data:
Reply from 10.10.10.10: bytes=32 time<1ms TTL=128

Ping statistics for 10.10.10.10:
    Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 2ms, Maximum = 2ms, Average = 2ms
Summary
PowerShell uses scopes to manage the life span and visibility of variables and functions. By default, the content of scopes is visible to all child scopes and does not change any parent scope.
There is always at least one scope, which is called the “global scope”. New scopes are created when you run scripts or call functions and script blocks.
The developer can control the scope to use by prepending variable and function names with one of these keywords: global:, script:, private: and local:. The prefix local: is the default and can be omitted.
The user can control scope by optionally dot-sourcing scripts, functions, or script blocks. With dot-sourcing, no new scope is created for the element you are calling. Instead, it runs in the caller’s scope.
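A tiny illustration with script blocks (the variable name is arbitrary): called with “&”, the block runs in its own scope and the variable is discarded; dot-sourced, it runs in the caller’s scope and the variable survives:

PS> & { $dotTest = 1 }
PS> $dotTest
PS> . { $dotTest = 1 }
PS> $dotTest
1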
A different flavor of scope is used to manage the different command types PowerShell supports. Here, PowerShell searches for commands in a specific order. If a command name is ambiguous, PowerShell uses the first command it finds. It searches the command type scopes in this order: alias, function (and filter), cmdlet, application, external script, and script. Use Get-Command to locate a command yourself based on name and command type if you need more control.