Professional PowerShell environments
Ready for Greatness
WMI Events Trigger the Node
It is amazing that a technology from the era of Windows 95 and Windows NT can still provide up-to-date answers. The memory of the Windows Script Host is fading, but WMI remains at the core of many administrative solutions. Whether for hardware access or infrastructure queries in PS with Get-NetAdapter, WMI from the 1990s is the foundation.
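For example, adapter information can also be read straight from the underlying CIM/WMI layer; a minimal sketch of a similar query against the classic Win32_NetworkAdapter class:
> Get-CimInstance -ClassName Win32_NetworkAdapter | Select-Object Name, NetEnabled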
One feature is particularly worth considering: the often underestimated event interface. WMI can react asynchronously to almost every Windows operating system event. WMI distinguishes between two types of event registrations: local registrations, which are bound to the host process and are discarded when it ends, and persistent event listeners, which survive a restart of the system and are therefore best suited to support a service-based scripting environment.
Strictly speaking, the use of the WMI event classes amounts to registering watchers, which requires three things:
- An Event Filter, which defines what should be checked and at what intervals.
- An Event Consumer, which defines the action to be executed when the event occurs.
- A Binding, which correlates the first two elements, filter and action.
For PS, it looks like this:
> Register-WmiEvent -Query 'SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance ISA "Win32_Service" AND TargetInstance.State="Stopped" AND TargetInstance.Name="MyService"' -Action {Send-Message}
The cmdlet takes care of the binding, the filter is the argument to the -Query parameter, and the consumer is defined by -Action. Security is built in at the second level, script execution. WMI brings its own remoting architecture with the Distributed Component Object Model (DCOM), which differs from the Windows Remote Management (WinRM)-based remoting that is the standard with PS. DCOM uses port 135; access and extended configuration are handled with the dcomcnfg.exe tool.
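The persistent variant mentioned above registers these same three elements directly in the root\subscription namespace, where they survive a reboot. A minimal sketch, assuming the monitored service MyService and a hypothetical notification script; it must run in an elevated session:
$ns = 'root\subscription'
# 1. Event Filter: what to watch and how often to poll
$filter = Set-WmiInstance -Namespace $ns -Class __EventFilter -Arguments @{
    Name           = 'MyServiceStoppedFilter'
    QueryLanguage  = 'WQL'
    EventNamespace = 'root\cimv2'
    Query          = 'SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance ISA "Win32_Service" AND TargetInstance.State="Stopped" AND TargetInstance.Name="MyService"'
}
# 2. Event Consumer: the action to run when the event fires (script path is an assumption)
$consumer = Set-WmiInstance -Namespace $ns -Class CommandLineEventConsumer -Arguments @{
    Name                = 'MyServiceStoppedConsumer'
    CommandLineTemplate = 'powershell.exe -File C:\Scripts\Send-Message.ps1'
}
# 3. Binding: correlate filter and consumer
Set-WmiInstance -Namespace $ns -Class __FilterToConsumerBinding -Arguments @{
    Filter   = $filter
    Consumer = $consumer
}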
The preliminary conclusion for a secure script environment that monitors an application is that a combination of Windows service control programs and the event interface is conceivable (Figure 1).
PS Orchestration
Reliability is most difficult to implement at the third level, the system level. A script cluster would be an option, but it carries high overhead just to guarantee the permanent execution of a script. PS v3 already introduced a concept that addresses the problem of complex control and monitoring of a company's IT infrastructure: Workflows bring some development and operations (DevOps) techniques to PS scripts, including the option of executing code blocks either sequentially or in parallel:
workflow Test-SystemPresence {
    $servers = Get-Content -Path C:\srvlist
    foreach -parallel ($server in $servers) {
        Test-Connection -ComputerName $server -Quiet
    }
}
This statement enables true parallel processing on all available processors. The workflow concept also has something to offer in terms of reliability: A workflow process can be interrupted and, even after a restart, resumed at the point where it was interrupted.
The disadvantage is the immense overhead, because the entire session, including all variables and session state, must be saved before the Resume-Job command can pick it up again (Figure 2). Planned interruptions should be accompanied by conceptually meaningful and documented checkpoints. The restart of an interrupted workflow must be initiated manually (or by script), and the system does not consider whether resuming the processing is still meaningful. This logic also lives outside the workflow process itself. Designed as a solution for complex automation and orchestration of script applications, the workflow concept has not proved its worth.
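A minimal sketch of the interrupt-and-resume cycle; the workflow body and the paths are assumptions:
workflow Backup-WithCheckpoint {
    # Hypothetical long-running step
    Copy-Item -Path C:\data -Destination D:\backup -Recurse
    # Persist the complete session state; a resumed job continues after this point
    Checkpoint-Workflow
    Write-Output -InputObject 'Backup finished'
}
$job = Backup-WithCheckpoint -AsJob   # run as a suspendable workflow job
Suspend-Job -Job $job                 # planned interruption (takes effect at a checkpoint)
Resume-Job -Job $job                  # manual restart at the last checkpoint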
Desired State Configuration as a Leader
With PS v4, a new technology was introduced that is still undergoing further development: Desired State Configuration (DSC), part of a reactive and declarative strategy. The system configuration and the desired system state are stored in a documented file. The Local Configuration Manager (LCM) is the control unit. It consists of the MSFT_DSCLocalConfigurationManager Common Information Model (CIM) class, which runs under the NT AUTHORITY\SYSTEM computer account. The implementation comprises two scheduled tasks:
- A task runs in pull mode when the computer is started.
- The second task runs in pull mode every 30 minutes.
The scheduled tasks invoke the CIM class and get (pull) configurations from a share or web service, depending on the LCM settings. The advantages of "declarative" administration are not reserved just for a homogeneous Windows environment. The core of DSC is based on the Open Management Infrastructure (OMI) standard, and this platform-independent availability makes OMI ideal for tasks in heterogeneous networks. Microsoft provides the DSC implementation for the SUSE, Red Hat, Ubuntu, Oracle, Debian, and CentOS distributions [2].
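On a Windows node, the active LCM settings, including refresh mode and refresh interval, can be inspected directly:
> Get-DscLocalConfigurationManager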
PS scripts can play a central role with DSC in heterogeneous environments. At the core, a method of a CIM class is executed, and the execution always comprises two phases:
- Phase 1: Test. The stored configuration is checked against the system; two outcomes are possible: The system matches the desired state and the result is "do nothing," or the system configuration deviates from the specification and phase 2 is initiated.
- Phase 2: Enforcement of the configuration declared in DSC. This includes users and groups, software, processes, and services. A DSC configuration defines the context (the target system) as a node and the element to be administered as a resource.
Because the control option also extends to services, it is conceivable to ensure script execution by monitoring a service with a PS script as payload (Listing 2).
Listing 2
Monitoring a Service
$pc = $env:ComputerName
Configuration MyService {
    # One or many nodes possible
    Node $pc {
        Service ServiceExample {
            Name        = "TermService"
            StartupType = "Automatic"
            State       = "Running"
        }
    }
}
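To bring the declaration into effect, the configuration is compiled into a Managed Object Format (MOF) file and handed to the LCM; a short push-mode sketch, with an assumed output path:
MyService -OutputPath C:\DSC\MyService                         # compile the configuration into a MOF file
Start-DscConfiguration -Path C:\DSC\MyService -Wait -Verbose   # let the LCM apply it (push mode)
Test-DscConfiguration                                          # phase 1 on demand: does the system still match?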
For the overall design of a script structure, only DSC offers solutions that react to system states and are both expandable and well documented. Beyond workflows and DSC, a self-developed solution for reliability would also be possible. An interesting approach is based on programming a TCP/IP listener in PS; the detailed procedure is described in a blog post online [3], in which two servers, and the PS applications running on them, report to each other by sending data packets. After the listener has been created, it waits for incoming client requests.
The Pending() method reacts to a client's attempt to establish a connection; in this case, the return value is $true. An incoming connection can then be accepted with the AcceptTcpClient() method. Now the problem of PS's synchronous processing comes into play: The rest of the script execution is blocked until a client request is received. The only workaround is a loop-based call with a time delay. Of course, this is not good scripting practice and means a performance slump for the entire app.
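A minimal sketch of such a polling listener; the port number 9000 is an arbitrary assumption:
# Create and start a TCP listener on the assumed heartbeat port
$listener = New-Object System.Net.Sockets.TcpListener([System.Net.IPAddress]::Any, 9000)
$listener.Start()
try {
    while ($true) {
        # Pending() returns $true as soon as a client tries to connect
        if ($listener.Pending()) {
            # AcceptTcpClient() completes the incoming connection
            $client = $listener.AcceptTcpClient()
            Write-Output "Heartbeat received from $($client.Client.RemoteEndPoint)"
            $client.Close()
        }
        # Loop-based polling with a time delay: the performance drawback noted above
        Start-Sleep -Milliseconds 500
    }
}
finally {
    $listener.Stop()
}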
In the end, the evaluation of this TCP/IP-based communication would be the core of the idea of a secure PS environment: If such a packet cannot be received, the transmitter is not active, and the redundancy script starts working.
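The transmitting side is correspondingly short; host name and port are again assumptions:
# Heartbeat transmitter: briefly connect to the listener (assumed host and port)
try {
    $client = New-Object System.Net.Sockets.TcpClient('monitor01', 9000)
    $client.Close()
}
catch {
    # No connection: the receiving side will notice the missing heartbeat
    Write-Warning 'Heartbeat could not be delivered'
}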