Optimization and standardization of PowerShell scripts
Beautiful Code
The use of scripts in the work environment has changed considerably over the past decade. Initially, they handled batch processing with rudimentary control structures, calling functions and executables in response to events and evaluating their return values. The functional scope of the scripting language itself was therefore strongly focused on processing character strings.
The languages of the last century (e.g., Perl, Awk, and Bash shell scripting) are excellent tools for analyzing logfiles or command output with regular expressions. PowerShell, on the other hand, focuses far more on the interfaces of server services, systems, and processes, without the need for a detour through text-based return values.
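A minimal sketch illustrates the difference; the Spooler service and the external sc.exe tool serve only as examples here:

# Classic text-based approach: run an external tool and extract the
# service state from its output with a regular expression.
$state = (sc.exe query Spooler | Select-String 'STATE') -replace '^.*:\s*', ''

# PowerShell approach: query the service interface directly and read
# a typed property; no string parsing is required.
$state = (Get-Service -Name Spooler).Status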
Another change in scripting relates to the relevance and design of a script app: Before PowerShell, scripts were typically developed by administrators to support their own work. Because they saw these tools as something fairly personal, this attitude also affected the applied standards: In fact, there weren't any. The usual principles back then were:
- Quick and dirty: Only the function is important.
- Documentation is superfluous: After all, I wrote the script.
The lack of documentation can have negative consequences even for the author, though, leading to cost-intensive delays in migration projects three years down the road if the admin no longer understands the code or its purpose.
Basic Principles for Business-Critical Scripts
The significance of PowerShell is best described as "enterprise scripting." On many Microsoft servers, PowerShell scripts are the only way to ensure comprehensive management – Exchange and Azure Active Directory being prime examples. The script thus gains business-critical relevance. When you create a script, you need to be aware of how its functionality is maintained in the server architecture when faced with staff changes, restructuring, and version changes.
The central principles are therefore ease of maintenance, outsourcing of components, and reusability, as well as detailed documentation. These points should be the focus of script creation:
- Standardization of the inner and outer structure of a script
- Modularization through outsourcing of components
- Naming conventions for variables and functions
- Exception handling
- Definition of uniform exit codes
- Templates for scripts and functions
- Standardized documentation of the code
- Rules for optimal flow control
Additionally, it is worth talking to your "scripting officer" to ensure compliance with corporate policy. Creating company-wide script repositories also helps prevent redundant development.
Building Stable Script Frameworks
A uniform, relative path structure allows an application to be ported to other systems. Absolute paths should be avoided, because adapting them means unnecessary overhead. Creating a subfolder structure as shown in Figure 1 has proven successful: Below the home folder for all script apps are the individual applications (e.g., Myapp1 and MyApp2). Each application folder contains only the main processing file, which uses the same name as the application folder (e.g., Myapp1.ps1). The application folder can be determined dynamically from within the PowerShell script:
$StrScriptFolder = ($MyInvocation.MyCommand).Path | Split-Path -Parent
The relative structure can then be represented easily in the code:
$StrOutputFolder = $StrScriptFolder + "\output"; [...]
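As an alternative to string concatenation, Join-Path composes the same relative paths without worrying about separators; a minimal sketch, with folder names following the structure described above:

$StrOutputFolder   = Join-Path -Path $StrScriptFolder -ChildPath "output"
$StrInputFolder    = Join-Path -Path $StrScriptFolder -ChildPath "input"
$StrErrorLogFolder = Join-Path -Path $StrScriptFolder -ChildPath "errorlogs"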
The subfolders are assigned to the main script components Logging, Libraries, Reports, and External Control. Each script should be traceable: If critical errors occur during processing, they should be written to an errorlogs subfolder. For later analysis, I recommend saving as a CSV file with unambiguous column names: the date and time of the error, the processing step that caused it, an error level such as error, warning, or info, and optionally the line in the source code are good choices. To standardize your error logs, it makes sense to use a real function (as opposed to calling Add-Content directly).
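What such a function could look like is sketched below; the name Write-ErrorLog, its parameters, and the CSV layout are illustrative rather than a fixed standard, and the sketch reuses the $StrScriptFolder variable determined earlier:

function Write-ErrorLog
{
    # Standardized error logging as a CSV file in the errorlogs subfolder;
    # adapt names, parameters, and columns to your own conventions.
    param(
        [Parameter(Mandatory)][string]$Message,
        [ValidateSet('error','warning','info')][string]$Level = 'error',
        [string]$Processing = 'main',
        [int]$Line = 0
    )
    $logFile = Join-Path $StrScriptFolder 'errorlogs\errors.csv'
    [pscustomobject]@{
        Timestamp  = Get-Date -Format 'yyyy-MM-dd HH:mm:ss'
        Processing = $Processing
        Level      = $Level
        Line       = $Line
        Message    = $Message
    } | Export-Csv -Path $logFile -Append -NoTypeInformation
}

# Example call:
Write-ErrorLog -Message 'Input file not found' -Level warning -Line 42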
In addition to errors or unexpected return values, you should always log script actions for creating, deleting, moving, and renaming objects. To distinguish these logs from the error log, they are stored in the functionlogs subfolder. When a script creates reports, the output folder is the storage location. This also corresponds to the structure given by comment-based help, which is explained in the Documentation section.
Control information (e.g., which objects should be monitored in which domain and how to monitor them) should not reside within the source code. For one thing, retrospective editing is difficult because the information has to be found in the programming logic; for another, transferring data maintenance to specialist personnel without programming skills becomes difficult. The principle of maintainability is thus violated. The right place for control information is the input folder.
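A minimal sketch of reading such control information from the input folder; the file name monitoring.csv and its columns are assumptions for illustration:

# Control data is maintained as a CSV file, not in the source code.
$controlFile = Join-Path $StrScriptFolder 'input\monitoring.csv'
foreach ($entry in (Import-Csv -Path $controlFile))
{
    # Assumed columns: Domain, ObjectName, Method
    Write-Output ("Monitoring {0} in {1} via {2}" -f $entry.ObjectName, $entry.Domain, $entry.Method)
}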
In addition to data, script fragments and constants can also be swapped out. A separate folder is recommended for these "scriptlets" with a view to reusability. In the history of software development, inc, short for "include," has established itself as the typical folder name for these components.
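Loading such a scriptlet is then a matter of dot-sourcing it from the inc folder; the file name here is illustrative:

# Dot-sourcing makes the functions and constants of the scriptlet
# available in the current scope.
. (Join-Path $StrScriptFolder 'inc\StringHelpers.ps1')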
Format Source Code Cleanly
A clear internal structure greatly simplifies troubleshooting and error elimination. Here, too, uniform specifications should be available, as Figure 2 shows: The #region keyword combines areas of the source code into a logical unit, but regions are irrelevant for processing by the interpreter. Regions can be nested (i.e., they can also be created along the parent-child axis). Besides the basic regions init, process, and clear described in the figure, a test region is recommended, in which you can check whether paths in the filesystem or external libraries exist. Further regions can be formed, for example, from units within the main sections that are related in terms of content.
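The following skeleton is a minimal sketch of this region layout; the contents are placeholders, and Write-ErrorLog refers to the logging function sketched earlier:

#region init
    # Load libraries, define constants and paths
    . (Join-Path $StrScriptFolder 'inc\StringHelpers.ps1')
#endregion

#region test
    # Verify that required paths and libraries exist
    if (-not (Test-Path (Join-Path $StrScriptFolder 'input')))
    {
        Write-ErrorLog -Message 'Input folder missing'
        exit 1
    }
#endregion

#region process
    # Main processing
#endregion

#region clear
    # Release resources, write closing log entries
#endregion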
Readable code also includes the delimitation of statement blocks. Wherever you nest foreach, if, and similar constructs, a standardized approach to indentation becomes important (e.g., two to four blanks or a tab stop). Some editors, such as Visual Studio Code, provide support for formatting the source code. Although the position of the opening and closing curly brackets is controversial among developers, placing the brackets on a separate line is a good idea (Figure 3).
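A minimal sketch of this bracket and indentation style; the server names are placeholders:

foreach ($server in @('srv01', 'srv02'))
{
    if (Test-Connection -ComputerName $server -Count 1 -Quiet)
    {
        Write-Output "$server is reachable"
    }
}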