Abstract
This article explains how to use JSON data with the new SQL Server 2016 constructs that integrate JSON into a relational schema.
It walks through examples of nested JSON elements, sample data, converting relational data to JSON and JSON back to relational data, extracting JSON elements into separate columns, and representing the same data in separate rows.
Table of Contents
- Abstract
- Introduction
- Background
- The JSON Syntax Rules
- The JSON Values
- Basic Structure
- Relational data presentation using FOR JSON
- The JSON Data Transformation
- Transform JSON to Relational data
- Built-in functions for JSON processing
- How to define Nested Objects in JSON
- Parsing Nested elements of JSON
- Reading JSON into separate rows
- Reading JSON elements into separate columns
- Conclusion
- References
Introduction
JSON stands for JavaScript Object Notation. JSON is the primary data representation for most NoSQL databases, and it is a natural fit for developers who use JSON as the data-interchange format in their applications. The flexibility of JSON (records are well structured yet easily extended) and its scalability have attracted developers looking at database migrations in agile environments. Data and schema, in volume, can be hard to change: rewriting a large dataset stored on disk while keeping the associated applications online can be time-consuming, and in moderate to large cases the upgrade can take days of background processing.
Background
Most traditional relational database engines now support JSON. With SQL Server 2016, it is easy to interchange JSON data between applications and the database engine. Microsoft has provided various functions and capabilities to parse JSON data and to bring JSON into relational storage. It is also possible to transform relational data into JSON and JSON data into denormalized data. Having these JSON features built into SQL Server makes it easier for applications to exchange JSON data with the database and provides flexibility when integrating JSON data into the relational engine, so developers can write complex queries at every stage of the development process.
Relational databases are characterized by traditional data storage, a constructive and intuitive SQL language, complex query design, and the ACID properties. NoSQL offers a different concept: complex structures are placed together into collections of entities, where you can fetch everything you need with one read operation or insert a complex structure with a single write, and consistency follows the CAP theorem.
Relational databases normalize the data to some degree; that is, rather than repeating a piece of data in multiple rows, a table that needs that information stores a foreign key pointing to another table that holds the data. On the other hand, this means the data is typically shredded from its original form to fit into tables and then reassembled at run time by joining the tables in response to a query. This becomes particularly expensive as the data set grows and the data needs to be partitioned among multiple database servers.
The JSON Syntax Rules
JSON syntax is derived from JavaScript object notation syntax.
- Data is in name/value pairs: {"key": "value"} – the most common format for objects
- Data is separated by commas: {"key": "value"}, {"key": "value"}
- Curly braces hold objects: {"key": {"key": "value"}}
- Square brackets hold arrays: {"key": [ {"key": "value"}, {"key": "value"} ]}
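These rules can be checked with any standards-compliant JSON parser. As an illustrative sketch (not part of SQL Server itself), Python's built-in json module accepts a document that follows all four rules and rejects one that drops the colon between a key and its value:

```python
import json

# A document exercising all four syntax rules: name/value pairs,
# comma separation, curly braces for objects, square brackets for arrays.
doc = '{"course": "Basic programming", "students": [{"id": 1}, {"id": 2}]}'
parsed = json.loads(doc)
print(parsed["students"][1]["id"])  # -> 2

# A missing colon violates the name/value rule and raises an error.
try:
    json.loads('{"key" "value"}')
except json.JSONDecodeError:
    print("invalid JSON")
```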
The JSON Values
In JSON, values must be one of the data types given below.
- A string
- A number
- An object (JSON object)
- An array
- A boolean
- null
Basic Structure
If you have parent/child (fact/dimension) relationships where the related child information does not change frequently and you need to read the child records together with the parent without additional JOINs, you can store the child records in the parent table as a JSON array. In a traditional database, the normalization process minimizes duplicated information, whereas in NoSQL you intentionally duplicate it to make it easier to use. Consider representing a number of students taking a class. A normalized way of representing the data is given below; the array holds the dimension data of the relational table.
{ "course": "Basic programming", "room": "1A", "students": [ { "id": 1, "name": "Prashanth" }, { "id": 2, "name": "Jayaram" } ] }

Here is the denormalized form:

[ { "course": "Basic programming", "room": "1A", "studentId": 1, "studentName": "Prashanth" },
  { "course": "Basic programming", "room": "1A", "studentId": 2, "studentName": "Jayaram" } ]

Let's discuss the different dimensions of the sample data given below and represent the data in tabular and JSON format. You will also learn how to query a JSON file with the various JSON constructs available in SQL Server 2016.
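The denormalization above is a simple cross of the parent fields with each element of the child array. A small Python sketch (illustrative only, not SQL Server code) of the same flattening:

```python
import json

normalized = json.loads('''{ "course": "Basic programming", "room": "1A",
  "students": [ { "id": 1, "name": "Prashanth" },
                { "id": 2, "name": "Jayaram" } ] }''')

# Repeat the parent attributes once per child array element.
denormalized = [
    {"course": normalized["course"], "room": normalized["room"],
     "studentId": s["id"], "studentName": s["name"]}
    for s in normalized["students"]
]
for row in denormalized:
    print(row)
```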
Native JSON support in SQL Server 2016 provides several functions to read and parse your JSON string into a relational format.
- OPENJSON() – a table-valued function that parses JSON text and returns a row-set view of the JSON.
- JSON_VALUE() – a scalar function that returns a value from JSON at the specified path.
The sample output given below demonstrates the different dimensions of representing the same data as JSON and as relational data. The example lists a parent and child relationship, represented as JSON arrays (batter and topping) and nested objects.
Relational data presentation using FOR JSON
The FOR JSON AUTO clause is similar to the FOR XML AUTO clause: it automatically formats the JSON output based on the column/table hierarchy defined in the SQL query. The FOR JSON PATH clause is similar to the FOR XML PATH clause: it gives more control over the structure, using column aliases with a dot separator.
For example,
Let's create sample tables EMP and DEPT and insert a few rows:

CREATE TABLE EMP
(EMPNO NUMERIC(4) NOT NULL,
 ENAME VARCHAR(10),
 JOB VARCHAR(9),
 MGR NUMERIC(4),
 HIREDATE DATETIME,
 SAL NUMERIC(7, 2),
 COMM NUMERIC(7, 2),
 DEPTNO NUMERIC(2))
INSERT INTO EMP VALUES (7782, 'CLARK', 'MANAGER', 7839, '9-JUN-1981', 2450, NULL, 10)
INSERT INTO EMP VALUES (7788, 'SCOTT', 'ANALYST', 7566, '09-DEC-1982', 3000, NULL, 20)
INSERT INTO EMP VALUES (7839, 'KING', 'PRESIDENT', NULL, '17-NOV-1981', 5000, NULL, 10)
INSERT INTO EMP VALUES (7844, 'TURNER', 'SALESMAN', 7698, '8-SEP-1981', 1500, 0, 30)
INSERT INTO EMP VALUES (7934, 'MILLER', 'CLERK', 7782, '23-JAN-1982', 1300, NULL, 10)

CREATE TABLE DEPT
(DEPTNO NUMERIC(2),
 DNAME VARCHAR(14),
 LOC VARCHAR(13))
INSERT INTO DEPT VALUES (10, 'ACCOUNTING', 'NEW YORK')
INSERT INTO DEPT VALUES (20, 'RESEARCH', 'DALLAS')
INSERT INTO DEPT VALUES (30, 'SALES', 'CHICAGO')
INSERT INTO DEPT VALUES (40, 'OPERATIONS', 'BOSTON')

The output of the FOR JSON AUTO option is given below.
SELECT *
FROM EMP E
INNER JOIN DEPT D ON E.DEPTNO = D.DEPTNO
WHERE D.DEPTNO = 10
FOR JSON AUTO

This option formats the JSON document automatically based on the columns provided in the query. With the FOR JSON PATH option, the dot syntax is used for nested output.

SELECT E.EMPNO, E.ENAME,
       D.DEPTNO AS "DEPARTMENT.DEPTNO",
       D.DNAME AS "DEPARTMENT.DNAME",
       D.LOC AS "DEPARTMENT.LOCATION"
FROM EMP E
INNER JOIN DEPT D ON E.DEPTNO = D.DEPTNO
WHERE D.DEPTNO = 10
FOR JSON PATH

SELECT E.EMPNO, E.ENAME,
       D.DEPTNO AS "DEPARTMENT.DEPTNO",
       D.DNAME AS "DEPARTMENT.DNAME",
       D.LOC AS "DEPARTMENT.LOCATION"
FROM EMP E
INNER JOIN DEPT D ON E.DEPTNO = D.DEPTNO
WHERE D.DEPTNO = 10
FOR JSON PATH, ROOT('EMPLOYEE')

As we can see, the PATH option creates a wrapper object DEPARTMENT and nests the properties DEPTNO, DNAME, and LOCATION.
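The dot-separated aliases behave like paths: each prefix becomes a wrapper object. Purely as an illustration (this is not how SQL Server implements FOR JSON PATH internally, and the row values here are sample data), a short Python sketch of folding such aliases into nested JSON:

```python
# A flat row whose column aliases use the FOR JSON PATH dot convention.
row = {"EMPNO": 7782, "ENAME": "CLARK",
       "DEPARTMENT.DEPTNO": 10, "DEPARTMENT.DNAME": "ACCOUNTING",
       "DEPARTMENT.LOCATION": "NEW YORK"}

def nest(flat):
    """Fold dot-separated keys into nested objects."""
    out = {}
    for alias, value in flat.items():
        parts = alias.split(".")
        target = out
        for part in parts[:-1]:          # walk/create wrapper objects
            target = target.setdefault(part, {})
        target[parts[-1]] = value
    return out

print(nest(row))
```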
The JSON Data Transformation
Transform the sample data below to JSON:
| ID | Type | Name | Batter | Topping |
| 1 | donut | Cake | Regular | None |
| 1 | donut | Cake | Regular | Glazed |
| 1 | donut | Cake | Regular | Sugar |
| 1 | donut | Cake | Regular | Powdered Sugar |
| 1 | donut | Cake | Regular | Chocolate with Sprinkles |
| 1 | donut | Cake | Regular | Chocolate |
| 1 | donut | Cake | Regular | Maple |
| 1 | donut | Cake | Chocolate | None |
| 1 | donut | Cake | Chocolate | Glazed |
| 1 | donut | Cake | Chocolate | Sugar |
| 1 | donut | Cake | Chocolate | Powdered Sugar |
| 1 | donut | Cake | Chocolate | Chocolate with Sprinkles |
| 1 | donut | Cake | Chocolate | Chocolate |
| 1 | donut | Cake | Chocolate | Maple |
| 1 | donut | Cake | Blueberry | None |
| 1 | donut | Cake | Blueberry | Glazed |
| 1 | donut | Cake | Blueberry | Sugar |
| 1 | donut | Cake | Blueberry | Powdered Sugar |
| 1 | donut | Cake | Blueberry | Chocolate with Sprinkles |
| 1 | donut | Cake | Blueberry | Chocolate |
| 1 | donut | Cake | Blueberry | Maple |
| 1 | donut | Cake | Devils Food | None |
| 1 | donut | Cake | Devils Food | Glazed |
| 1 | donut | Cake | Devils Food | Sugar |
| 1 | donut | Cake | Devils Food | Powdered Sugar |
| 1 | donut | Cake | Devils Food | Chocolate with Sprinkles |
| 1 | donut | Cake | Devils Food | Chocolate |
| 1 | donut | Cake | Devils Food | Maple |
The transformation below holds nested objects, and a few more entries are added for id 0002. In the sample data above, there are four batter types and seven toppings, producing 28 (1 × 4 × 7) different types of cake. Similarly, for id 0002, four batter types and three toppings produce 12 (1 × 4 × 3) types.
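The row counts are just the cross product of batters and toppings, which can be sanity-checked with a quick sketch (Python here purely for illustration):

```python
from itertools import product

batters = ["Regular", "Chocolate", "Blueberry", "Devils Food"]
toppings = ["None", "Glazed", "Sugar", "Powdered Sugar",
            "Chocolate with Sprinkles", "Chocolate", "Maple"]

rows_0001 = list(product(batters, toppings))      # every batter/topping pair
print(len(rows_0001))                             # -> 28

rows_0002 = list(product(batters, toppings[:3]))  # id 0002 uses 3 toppings
print(len(rows_0002))                             # -> 12
```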
[{
  "id": "0001", "type": "donut", "name": "Cake", "ppu": 0.55,
  "batters": {
    "batter": [{ "id1": "1001", "type1": "Regular" },
               { "id1": "1002", "type1": "Chocolate" },
               { "id1": "1003", "type1": "Blueberry" },
               { "id1": "1004", "type1": "Devils Food" }]
  },
  "topping": [{ "id2": "5001", "type2": "None" },
              { "id2": "5002", "type2": "Glazed" },
              { "id2": "5005", "type2": "Sugar" },
              { "id2": "5007", "type2": "Powdered Sugar" },
              { "id2": "5006", "type2": "Chocolate with Sprinkles" },
              { "id2": "5003", "type2": "Chocolate" },
              { "id2": "5004", "type2": "Maple" }]
},
{
  "id": "0002", "type": "donut", "name": "cup Cake", "ppu": 0.5,
  "batters": {
    "batter": [{ "id1": "1001", "type1": "Regular" },
               { "id1": "1002", "type1": "Chocolate" },
               { "id1": "1003", "type1": "Blueberry" },
               { "id1": "1004", "type1": "Devil's Food" }]
  },
  "topping": [{ "id2": "5001", "type2": "None" },
              { "id2": "5002", "type2": "Glazed" },
              { "id2": "5005", "type2": "Sugar" }]
}]

Transform JSON to Relational data
OPENJSON is a table-valued function (TVF) that parses JSON text, locates an array of JSON objects, iterates through the elements of the array, and returns one row in the output result for each element. To read JSON from a file, load the file into a variable using the OPENROWSET construct. The file stocks.json is used for the demonstration; derive the path as per your requirement and environment. The following SQL code reads the content of the JSON file using the OPENROWSET BULK function and passes the content of the file (BulkColumn) to the OPENJSON function.
The JSON file can be stored in the local file system or in global (cloud) storage.
SELECT ID, type, name, ppu, type1 batter, type2 topping
FROM OPENROWSET(BULK N'\\hq6021\c$\stocks.json', SINGLE_CLOB) AS json
CROSS APPLY OPENJSON(BulkColumn)
WITH (id nvarchar(40), type nvarchar(40), name NVARCHAR(MAX), ppu NVARCHAR(MAX),
      batters NVARCHAR(MAX) AS JSON, topping NVARCHAR(MAX) AS JSON) AS t
CROSS APPLY OPENJSON(batters, '$.batter')
WITH (id1 nvarchar(100), type1 nvarchar(20))
CROSS APPLY OPENJSON(topping)
WITH (id2 nvarchar(100), type2 nvarchar(20))

Built-in functions for JSON processing
SQL Server 2016 provides the functions for parsing and processing JSON text. JSON built-in functions, which are available in SQL Server 2016 are given below.
- ISJSON( jsonText ) checks whether the NVARCHAR text is properly formatted according to the JSON specification. You can use this function to create check constraints on NVARCHAR columns that contain JSON text.
- JSON_VALUE( jsonText, path ) parses jsonText and extracts a scalar value on the specified JavaScript-like path (see below for some JSON path examples).
- JSON_QUERY( jsonText, path ) parses jsonText and extracts an object or an array on the specified JavaScript-like path (see below for some JSON path examples).
These functions use JSON paths for referencing the values or objects in JSON text. JSON paths use JavaScript-like syntax for referencing the properties in JSON text. Some examples are given below.
- '$' – references the entire JSON object in the input text.
- '$.property1' – references property1 in the JSON object.
- '$[4]' – references the fifth element in a JSON array (indexes are counted from 0, as in JavaScript).
- '$.property1.property2.array1[5].property3.array2[15].property4' – references a complex nested property in the JSON object.
- '$.info."first name"' – references the "first name" property in the info object. If a key contains special characters such as a space or a dollar sign, it should be surrounded by double quotes.
The dollar sign ($) represents the input JSON object (similar to root “/” in XPath language). You can add any JavaScript-like property or an array after “$” to reference properties in JSON object. One simple example of a query, where these built-in functions are used is given below.
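As an illustration of how such paths resolve (a toy evaluator in Python, not SQL Server's implementation, and supporting only a subset of the syntax), a few lines can walk dotted properties, quoted keys, and [n] indexes:

```python
import json
import re

def json_path(doc, path):
    """Toy evaluator for paths like $.info.tags[1] or $.info."first name".
    Supports only dotted keys, double-quoted keys, and numeric indexes."""
    if not path.startswith("$"):
        raise ValueError("path must start with $")
    current = doc
    # Each token is a quoted key, a bare key, or an array index.
    for quoted, bare, index in re.findall(
            r'\."([^"]+)"|\.([A-Za-z_]\w*)|\[(\d+)\]', path):
        current = current[int(index)] if index else current[quoted or bare]
    return current

doc = json.loads('{"info": {"type": 1, "first name": "Prashanth",'
                 ' "tags": ["Snow", "Breweries"]}}')
print(json_path(doc, '$.info.tags[1]'))        # -> Breweries
print(json_path(doc, '$.info."first name"'))   # -> Prashanth
```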
DECLARE @MyJSON NVARCHAR(4000) = N'{
  "info": {
    "type": 1,
    "address": { "town": "Louisville", "county": "Boulder", "country": "USA" },
    "tags": ["Snow", "Breweries"]
  },
  "LocationType": "East",
  "WeatherType": "Cold"
}'
SELECT *
FROM OPENJSON(@MyJSON)
WITH (type int '$.info.type',
      LocationType varchar(20) '$.LocationType',
      WeatherType varchar(20) '$.WeatherType',
      town varchar(200) '$.info.address.town',
      county varchar(200) '$.info.address.county',
      country varchar(200) '$.info.address.country') AS t
CROSS APPLY OPENJSON(@MyJSON, '$.info.tags')

How to define Nested Objects in JSON
The examples given above also contain sample data that represents nested objects.

DECLARE @json NVARCHAR(1000)
SELECT @json = N'{ "Employee": [{ "Element": 1 }, { "Element": 2 }, { "Element": "n" }] }'

Parsing Nested elements of JSON
In the employee example given below, EmployeeDepartment is the root of the JSON. The array element DEPT holds the dimension data, which represents the department details of each employee. The employee JSON structure has three objects.
DECLARE @MyJSON NVARCHAR(4000) = N'{
  "EmployeeDepartment": [
    { "EmployeeID": "E0001", "FirstName": "Prashanth", "LastName": "Jayaram", "DOB": "1983-02-03",
      "DEPT": [{ "EmployeeDepartment": "Ducks" }, { "EmployeeDepartment": "Blues" }] },
    { "EmployeeID": "E0002", "FirstName": "Prarthana", "LastName": "Prashanth", "DOB": "2015-07-06",
      "DEPT": [{ "EmployeeDepartment": "Red Wings" }] },
    { "EmployeeID": "E0003", "FirstName": "Pravitha", "LastName": "Prashanth", "DOB": "2015-07-06",
      "DEPT": [{ "EmployeeDepartment": "Red Wings" }, { "EmployeeDepartment": "Green Bird" }] }
  ]
}'
--SELECT * FROM OPENJSON(@MyJSON)
SELECT EmployeeID, FirstName, LastName, DOB, DEPT, EmployeeDepartment
FROM OPENJSON(@MyJSON, '$.EmployeeDepartment')
WITH (EmployeeID varchar(10), FirstName varchar(25), LastName varchar(25), DOB varchar(25),
      DEPT NVARCHAR(MAX) AS JSON) AS E
CROSS APPLY OPENJSON(DEPT)
WITH (EmployeeDepartment nvarchar(100))

Reading JSON into separate rows
DECLARE @MyJSON NVARCHAR(4000) = N'{
  "EmployeeDepartment": [
    { "EmployeeID": "E0001", "FirstName": "Prashanth", "LastName": "Jayaram", "DOB": "1983-02-03",
      "DEPT": [{ "DeptID": "D1", "DName": "Ducks" }, { "DeptID": "D2", "DName": "Blues" }] },
    { "EmployeeID": "E0002", "FirstName": "Prarthana", "LastName": "Prashanth", "DOB": "2015-07-06",
      "DEPT": [{ "DeptID": "D3", "DName": "Red Wings" }] },
    { "EmployeeID": "E0003", "FirstName": "Pravitha", "LastName": "Prashanth", "DOB": "2015-07-06",
      "DEPT": [{ "DeptID": "D3", "DName": "Red Wings" }, { "DeptID": "D4", "DName": "Green Bird" }] }
  ]
}'
SELECT JSON_VALUE(c.value, '$.EmployeeID') AS EmployeeID,
       JSON_VALUE(c.value, '$.FirstName') AS FirstName,
       JSON_VALUE(c.value, '$.DOB') AS DOB,
       JSON_VALUE(p.value, '$.DeptID') AS DEPTID,
       JSON_VALUE(p.value, '$.DName') AS DName
FROM OPENJSON(@MyJSON, '$.EmployeeDepartment') AS c
CROSS APPLY OPENJSON(c.value, '$.DEPT') AS p

Reading JSON elements into separate columns
You can specify the child elements with the full path by using the dollar sign “$” inside the WITH() clause to segregate the data into the separate columns.
DECLARE @MyJSON NVARCHAR(4000) = N'{
  "EmployeeDepartment": [
    { "EmployeeID": "E0001", "FirstName": "Prashanth", "LastName": "Jayaram", "DOB": "1983-02-03",
      "DEPT": [{ "DeptID": "D1", "DName": "Ducks" }, { "DeptID": "D2", "DName": "Blues" }] },
    { "EmployeeID": "E0002", "FirstName": "Prarthana", "LastName": "Prashanth", "DOB": "2015-07-06",
      "DEPT": [{ "DeptID": "D3", "DName": "Red Wings" }] },
    { "EmployeeID": "E0003", "FirstName": "Pravitha", "LastName": "Prashanth", "DOB": "2015-07-06",
      "DEPT": [{ "DeptID": "D3", "DName": "Red Wings" }, { "DeptID": "D4", "DName": "Green Bird" }] }
  ]
}'
SELECT EmployeeID, FirstName, DOB, Dept1, DName1, Dept2, DName2
FROM OPENJSON(@MyJSON, '$.EmployeeDepartment')
WITH (EmployeeID varchar(20) '$.EmployeeID',
      FirstName varchar(20) '$.FirstName',
      DOB varchar(20) '$.DOB',
      Dept1 varchar(20) '$.DEPT[0].DeptID',
      DName1 varchar(20) '$.DEPT[0].DName',
      Dept2 varchar(20) '$.DEPT[1].DeptID',
      DName2 varchar(20) '$.DEPT[1].DName') AS EMP

Conclusion
SQL Server 2016 contains some very powerful JSON constructs. Mixing the power of relational databases with the flexibility of JSON offers many benefits for migration, integration, and deployment. It is flexible because of the simple syntax and the low overhead of maintaining and managing JSON data. The powerful JSON SQL constructs enable you to query and analyze JSON data as well as to transform JSON to relational data and relational data to JSON.
There are plenty of examples and resources available under various links. This article is an effort to combine real-world scenarios and detail the various ways of manipulating JSON data using the SQL Server 2016 JSON constructs. NoSQL offers a different concept: complex structures are placed together into collections of entities, where you can fetch everything you need with one read operation or insert a complex structure with a single write. The downside is that when you want to organize your information in different collections, it is very hard to JOIN entities from two collections.
With the new SQL Server, you can choose between these two concepts and use the best of both worlds: in your data models, you can decide when to use traditional relational structures and when to introduce NoSQL concepts.
References
- https://blogs.technet.microsoft.com/dataplatforminsider/2016/01/06/json-in-sql-server-2016-part-2-of-4/
- https://adobe.github.io/Spry/samples/data_region/JSONDataSetSample.html
- https://www.codeproject.com/Articles/1046120/Friday-the-th-JSON-is-coming-to-SQL-Server
PowerShell : Monitoring and Notification – Disk Space Alerts
“Microsoft Technet Guru – Gold Medal Winning Article- Nov, 2016”
Abstract
Monitoring disk space utilization of servers is a critical and important job for any administrator, and keeping things organized can improve application and server availability. This article takes us through the detailed steps to read each drive and report on every drive based on threshold values and the output data. You basically feed it a list of servers to watch over, and it reports back on them, meaning you can also use it as a more general daily server disk-space report. A DBA doesn't want to run out of space on their servers, even in their labs! To avoid that, I wrote a PowerShell script to provide alerts by email.
Introduction
This article discusses the use of credentials. Credentials can be used to query external servers that have a trust relationship between the domains. It also lists various methods to secure the password. The process iterates through a list of servers and drives that you have listed in a CSV file, checking the disk space status of every listed drive; each drive's status falls under one of four values, defined as Critical, Warning, Low, and Good. If the disk in question is below a threshold, the corresponding status is updated and a notification is sent to the respective teams.
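The four statuses come from comparing a drive's rounded percent-free value against the three thresholds in the CSV. The decision logic mirrors the script's threshold comparisons and can be sketched outside PowerShell as follows (Python, illustrative only; the thresholds shown are made-up sample values):

```python
def drive_status(free_bytes, size_bytes, low_th, warn_th, crit_th):
    """Classify a drive: percent free is rounded, then compared against
    the critical, warning, and low thresholds, in that order."""
    percent_free = round(free_bytes / size_bytes * 100) if size_bytes > 0 else 0
    if percent_free <= crit_th:
        return "Critical"
    if percent_free <= warn_th:
        return "Warning"
    if percent_free < low_th:
        return "Low"
    return "Good"

# Sample thresholds: low 25%, warning 15%, critical 10%.
print(drive_status(5 * 2**30, 100 * 2**30, 25, 15, 10))   # 5% free -> Critical
print(drive_status(40 * 2**30, 100 * 2**30, 25, 15, 10))  # 40% free -> Good
```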
Highlights
Querying – Win32_LogicalDisk
Using Credentials
Get-Credential always pops up a dialog box for entering a password; however, you can save your secure-string password to a file or feed the password directly. The problem with this is that the password will be exposed to anyone with access to the file.
Using Get-Credential cmdlet – Pop dialog box
Get-Credential displays a window to enter credential details, and it appears every time you run the script. The $credential variable stores the username and password; it is then fed to the respective queries for further processing.

clear
$credential = Get-Credential
foreach ($server in Get-Content c:\server.txt) {
    Get-WmiObject win32_logicaldisk -Credential $credential -ComputerName $server -Filter "Drivetype=3" |
        ft SystemName, DeviceID, VolumeName,
           @{Label="Total Size"; Expression={$_.Size / 1gb -as [int]}},
           @{Label="Free Size"; Expression={$_.freespace / 1gb -as [int]}} -AutoSize
}

Hard code the credentials
The password is hard coded in the script. Of course, the problem with this is that your password will be exposed to anyone with access to the script file.
$User = 'hqnt\abcd'
$Pass = ConvertTo-SecureString 'abcd@2015' -AsPlainText -Force
$Credentials = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $User, $Pass
foreach ($server in Get-Content c:\server.txt) {
    Get-WmiObject win32_logicaldisk -ComputerName $server -Credential $Credentials -Filter "Drivetype=3" |
        ft SystemName, DeviceID, VolumeName,
           @{Label="Total Size"; Expression={$_.Size / 1gb -as [int]}},
           @{Label="Free Size"; Expression={$_.freespace / 1gb -as [int]}} -AutoSize
}

Using Secured file
First, the password has to be written to a file:

PS:\> read-host -AsSecureString | ConvertFrom-SecureString | Out-File C:\SecurePassword.txt

Second, the credentials are read from the file using the PSCredential class. You don't need to re-enter the password over and over again.

clear
$User = 'hqnt\abcdv'
$Pass = cat C:\SecurePassword.txt | ConvertTo-SecureString
$Credentials = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $User, $Pass
foreach ($server in Get-Content c:\server.txt) {
    Get-WmiObject win32_logicaldisk -ComputerName $server -Credential $Credentials -Filter "Drivetype=3" |
        ft SystemName, DeviceID, VolumeName,
           @{Label="Total Size"; Expression={$_.Size / 1gb -as [int]}},
           @{Label="Free Size"; Expression={$_.freespace / 1gb -as [int]}} -AutoSize
}

Without using Credentials
clear
foreach ($server in Get-Content c:\server.txt) {
    Get-WmiObject win32_logicaldisk -ComputerName $server -Filter "Drivetype=3" |
        ft SystemName, DeviceID, VolumeName,
           @{Label="Total Size"; Expression={$_.Size / 1gb -as [int]}},
           @{Label="Free Size"; Expression={$_.freespace / 1gb -as [int]}} -AutoSize
}

Download
PowerShell : Monitoring and Notification – Disk Space Alerts
Code in detail
Input File
The template of InputServer.csv is given below. Change the content per your requirement and environment.

import-csv C:\InputServer_1.csv | Format-Table -AutoSize

The comma-delimited text (.csv) file InputServer_1.csv is parsed below. You'll notice at the top that there are headings for each column: ServerName, Drive, LowTh, WarnTh, CritTh, Email, LowPri, WarnPri, CritPri, and EscInst. The PowerShell Import-Csv cmdlet understands this default format and expects to see the column headers before parsing the main data.
The above columns are defined just to illustrate how the process works. This can vary for every environment by considering the underlying infrastructure
Write-Log
Write-Log writes a message to a specified log file along with the current time stamp, and also records the state of the message (Information, Warning, or Error).
For example, the first example below writes a simple message with the default state to the log file abc.log. In the second example, a message with the "Error" state is entered into the log file.

EXAMPLE 1
PS:\> Write-Log -Message "Server is reachable and starting the process" -Logfile c:\PowerSQL\abc.log

EXAMPLE 2
PS:\> Write-Log -Level ERROR -Message "Server is not reachable" -Logfile c:\PowerSQL\abc.log

The function below can be reused in any PoSH code. The output file is also useful for troubleshooting and tracking activity progress.
Function Write-Log {
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$False)]
        [ValidateSet("INFO","WARN","ERROR")]
        [String]
        $Level = "INFO",

        [Parameter(Mandatory=$True)]
        [string]
        $Message,

        [Parameter(Mandatory=$False)]
        [string]
        $logfile
    )

    $Stamp = (Get-Date).toString("yyyy/MM/dd HH:mm:ss")
    $Line = "$Stamp $Level $Message"
    If ($logfile) {
        Add-Content $logfile -Value $Line
    }
    Else {
        Write-Output $Line
    }
}

Password
This portion of code decides whether to pass credentials or not. Get-Credential always pops up a dialog box for entering a password; however, you can save your secure-string password to a file or feed the password directly. The problem with this is that the password will be exposed to anyone with access to the file. If you want to use the default login credentials, you don't need to mention anything in the code and can comment out these lines.

$User = 'abcd'
$Pass = ConvertTo-SecureString 'abcd@#2016' -AsPlainText -Force
$Credentials = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $User, $Pass

Import Server and Drive details
#Import the file to get the drives status and other usage details
#The Import-Csv cmdlet provides a way for you to read in data from a comma-separated values file (CSV)
Import-Csv $InputServer | % {
    $cserver = $_.Server
    $cdrivelt = $_.Drive
    $clowth = $_.LowTh
    $cwarnth = $_.WarnTh
    $ccritth = $_.CritTh
    $cemail = $_.Email
    $clowpri = $_.LowPri
    $cwarnpri = $_.WarnPri
    $ccritpri = $_.CritPri
    $cescinst = $_.EscInst
    If (!(Test-Connection $_.Server -Count 1 -Quiet)) {
        #Write the message to the log file
        Write-Log -Level ERROR -Message "$($_.Server) is not reachable" -Logfile $Logfile
    }
    else {
        #Write the progress to the log file
        Write-Log -Message "$($_.Server) is reachable and starting the process" -Logfile $Logfile

        $diskinfo = Get-WmiObject -Class Win32_LogicalDisk -ComputerName $cserver -Filter "DeviceID='$cdrivelt'"
        ForEach ($disk in $diskinfo) {
            #Calculate the % free. This value is compared with the various thresholds to derive the status of the drive
            If ($diskinfo.Size -gt 0) { $percentFree = [Math]::Round((($diskinfo.freespace / $diskinfo.size) * 100)) }
            Else { $percentFree = 0 }
            #Determine if the disk needs to be flagged for a warning or critical alert
            If ($percentFree -le $ccritth) {
                $status = "Critical"
                $priority = $ccritpri
                $body = @"
Notification that a disk drive is reporting an alert for low disk space!
$cserver $cdrivelt has $percentFree % free space.
Please assign an $priority priority ticket to the $cescinst team.
-This is an automated email being generated by the script DiskMonCheck.ps1, as a scheduled task on HQMONP09.
"@
                Send-MailMessage -To $cemail -From "HQMONP09@appvion.com" -Subject "Disk Alert - $cserver $cdrivelt out of disk space!" -Body $body -SmtpServer $SMTPServer
                Write-Log -Message "$($_.Server) Critical alert logged for the drive $cdrivelt" -Logfile $Logfile
            }

Output
This section describes various options available in the script to validate the disk space data
Log filename
The code below defines the output log file location and the directory to save the output. The $date variable holds the formatted date; it is appended to $LogFileName to generate a more meaningful filename, for example DiskSpaceLog_2016-10-10.

$date = Get-Date -Format "yyyy-MM-d"
#Prepare log file and output CSV file
$LogFileName = "DiskSpaceLog_$($date)"

Email
$body = @"
Notification that a disk drive is reporting an alert for low disk space!
$cserver $cdrivelt has $percentFree % free space.
Please assign an $priority priority ticket to the $cescinst team.
-This is an automated email being generated by the script DiskMonCheck.ps1, as a scheduled task on HQMONP09.
"@
Send-MailMessage -To $cemail -From "HQMONP09@appvion.com" -Subject "Disk Alert - $cserver $cdrivelt out of disk space!" -Body $body -SmtpServer $SMTPServer

Console
$mydisk += New-Object PSObject -Property @{
    Server     = $_.Server
    DeviceID   = $disk.DeviceID
    VolumeName = $disk.VolumeName
    Size       = [math]::Round(($disk.Size / 1GB), 2)
    Usedspace  = [math]::Round((($disk.Size - $disk.FreeSpace) / 1GB), 2)
    Percentage = ("{0:P}" -f ($disk.FreeSpace / $disk.Size))
    Status     = $status
}

$mydisk | Select-Object @{Name="Server"; Expression={$_.Server}},
    @{Name="DeviceID"; Expression={$_.DeviceID}},
    @{Name="VolumeName"; Expression={$_.VolumeName}},
    @{Name="Size"; Expression={$_.Size}},
    @{Name="Used Space"; Expression={$_.Usedspace}},
    @{Name="% Free"; Expression={$_.Percentage}},
    @{Name="Status"; Expression={$_.Status}} |
    Format-Table -AutoSize

Code
<#
.SYNOPSIS
    The objective of the script is to make use of .csv files as sources for various parts of the script.
.DESCRIPTION
    Reads drive details from the input file, evaluates the free-space thresholds, and writes the
    results to the console and a log file. Log entries in the log file are time-stamped. By default,
    messages are logged under the INFO category. This can be changed to another category, such as
    "WARN" or "ERROR", using the -Level parameter of Write-Log.
.PARAMETER InputServer
    Path to the file where the input details are saved. Example: c:\InputServer.csv
.PARAMETER DirectoryToSave
    Directory in which the log file is created.
.PARAMETER SMTPServer
    The SMTP server name used to send email to the respective recipients. Example: ancd.gmail.com
.EXAMPLE
    Write-Log -Message "$($_.Server) is reachable and starting the process" -Logfile $Logfile
.EXAMPLE
    Write-Log -Message "$($_.Server) Critical alert logged for the drive $cdrivelt" -Logfile $Logfile
.LINK
    https://powershellsql.wordpress.com/
.NOTES
    The CSV file holds all the metadata for each drive that you intend to monitor and send out
    notifications for.
#>
[CmdletBinding(SupportsShouldProcess=$true, ConfirmImpact='Low')]
Param(
    [Parameter(Mandatory=$true, Position=0)][String]$InputServer,
    [Parameter(Mandatory=$true, Position=1)][String]$DirectoryToSave,
    [Parameter(Mandatory=$true, Position=2)][String]$SMTPServer
)

# Format the date used in the log file name
$date = Get-Date -Format "yyyy-MM-d"

# Prepare the log file name
$LogFileName = "DiskSpaceLog_$($date)"

# Before we do anything else, are we likely to be able to save the file?
# If the directory doesn't exist, then create it
if (!(Test-Path -Path "$DirectoryToSave")) {
    New-Item "$DirectoryToSave" -Type Directory | Out-Null
}

# Log file creation (the directory is expected to end with a backslash,
# since the log file name is appended directly to it)
$logfile = "$DirectoryToSave$LogFileName.log"
if (!(Test-Path -Path "$logfile")) {
    New-Item -ItemType File $logfile -Force
}

# Prepare headers for the log file for each execution of the script
Add-Content $logfile "#################################################################"
Add-Content $logfile "Disk Space Details"
Add-Content $logfile "Generated $(Get-Date)"
Add-Content $logfile "Generated from $(gc env:computername)"
Add-Content $logfile "#################################################################"

Function Write-Log {
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$False)]
        [ValidateSet("INFO","WARN","ERROR")]
        [String]$Level = "INFO",

        [Parameter(Mandatory=$True)]
        [string]$Message,

        [Parameter(Mandatory=$False)]
        [string]$Logfile
    )
    $Stamp = (Get-Date).ToString("yyyy/MM/dd HH:mm:ss")
    $Line  = "$Stamp $Level $Message"
    If ($Logfile) {
        Add-Content $Logfile -Value $Line
    }
    Else {
        Write-Output $Line
    }
}

# Array to collect the PowerShell custom objects, one per drive
$mydisk = @()

# Import the file to get the drive status and other usage details.
# The Import-Csv cmdlet provides a way for you to read in data from a
# comma-separated values (CSV) file.
Import-Csv $InputServer | ForEach-Object {
    $cserver  = $_.Server
    $cdrivelt = $_.Drive
    $clowth   = $_.LowTh
    $cwarnth  = $_.WarnTh
    $ccritth  = $_.CritTh
    $cemail   = $_.Email
    $clowpri  = $_.LowPri
    $cwarnpri = $_.WarnPri
    $ccritpri = $_.CritPri
    $cescinst = $_.EscInst

    If (!(Test-Connection $_.Server -Count 1 -Quiet)) {
        # Write the message to the log file
        Write-Log -Level ERROR -Message "$($_.Server) is not reachable" -Logfile $logfile
    }
    else {
        # Write the progress to the log file
        Write-Log -Message "$($_.Server) is reachable and starting the process" -Logfile $logfile

        $diskinfo = Get-WmiObject -Class Win32_LogicalDisk -ComputerName $cserver -Filter "DeviceID='$cdrivelt'"
        ForEach ($disk in $diskinfo) {
            # Calculate the % free. This value is compared with the various
            # thresholds to derive the status of the drive.
            If ($disk.Size -gt 0) {
                $percentFree = [Math]::Round((($disk.FreeSpace / $disk.Size) * 100))
            }
            Else {
                $percentFree = 0
            }

            # Determine if the disk needs to be flagged for a low, warning, or critical alert
            If ($percentFree -le $ccritth) {
                $status   = "Critical"
                $priority = $ccritpri
                $body = @"
Notification that a disk drive is reporting an alert for low disk space!
$cserver $cdrivelt has $percentFree % free space. Please assign an $priority priority ticket to the $cescinst team.
-This is an automated email being generated by the script DiskMonCheck.ps1, as a scheduled task on HQMONP09.
"@
                Send-MailMessage -To $cemail -From "HQMONP09@appvion.com" -Subject "Disk Alert - $cserver $cdrivelt out of disk space!" -Body $body -SmtpServer $SMTPServer
                Write-Log -Message "$($_.Server) Critical alert logged for the drive $cdrivelt" -Logfile $logfile
            }
            ElseIf ($percentFree -gt $ccritth -AND $percentFree -le $cwarnth) {
                $status   = "Warning"
                $priority = $cwarnpri
                $body = @"
Notification that a disk drive is reporting an alert for low disk space!
$cserver $cdrivelt has $percentFree % free space. Please assign a $priority priority ticket to the $cescinst team.
-This is an automated email being generated by the script DiskMonCheck.ps1, as a scheduled task on HQMONP09.
"@
                Send-MailMessage -To $cemail -From "HQMONP09@appvion.com" -Subject "Disk Alert - $cserver $cdrivelt disk space warning!" -Body $body -SmtpServer $SMTPServer
                Write-Log -Message "$($_.Server) Warning alert logged for the drive $cdrivelt" -Logfile $logfile
            }
            ElseIf ($percentFree -gt $cwarnth -AND $percentFree -lt $clowth) {
                $status   = "Low"
                $priority = $clowpri
                $body = @"
Notification that a disk drive is reporting an alert for low disk space!
$cserver $cdrivelt has $percentFree % free space. Please assign a $priority priority ticket to the $cescinst team.
-This is an automated email being generated by the script DiskMonCheck.ps1, as a scheduled task on HQMONP09.
"@
                Send-MailMessage -To $cemail -From "HQMONP09@appvion.com" -Subject "Disk Alert - $cserver $cdrivelt disk space warning!" -Body $body -SmtpServer $SMTPServer
                Write-Log -Message "$($_.Server) Low alert logged for the drive $cdrivelt" -Logfile $logfile
            }
            Else {
                $status = "Good"
            }

            $mydisk += New-Object PSObject -Property @{
                Server     = $_.Server
                DeviceID   = $disk.DeviceID
                VolumeName = $disk.VolumeName
                Size       = [math]::Round(($disk.Size / 1GB), 2)
                Usedspace  = [math]::Round((($disk.Size - $disk.FreeSpace) / 1GB), 2)
                Percentage = ("{0:P}" -f ($disk.FreeSpace / $disk.Size))
                Status     = $status
            }
        }
    }
}

$mydisk | Select-Object @{Name="Server";Expression={$_.Server}},
    @{Name="DeviceID";Expression={$_.DeviceID}},
    @{Name="VolumeName";Expression={$_.VolumeName}},
    @{Name="Size";Expression={$_.Size}},
    @{Name="Used Space";Expression={$_.Usedspace}},
    @{Name="% Free";Expression={$_.Percentage}},
    @{Name="Status";Expression={$_.Status}} |
    Format-Table -AutoSize
Output
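For reference, the input CSV is expected to supply one row per monitored drive, using the column names the script reads inside the Import-Csv loop (Server, Drive, LowTh, WarnTh, CritTh, Email, LowPri, WarnPri, CritPri, EscInst). A minimal illustrative InputServer.csv might look like the following; the server names, thresholds, email address, and priority codes are placeholders, not values from the original article:

```csv
Server,Drive,LowTh,WarnTh,CritTh,Email,LowPri,WarnPri,CritPri,EscInst
HQSQLP01,C:,25,15,10,winops@example.com,P4,P3,P1,Windows
HQSQLP02,E:,25,15,10,winops@example.com,P4,P3,P1,Storage
```

With these sample thresholds, a drive is flagged Critical at 10 % free or below, Warning above 10 % up to 15 %, and Low above 15 % but under 25 %; anything else is reported as Good.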
Compare the mail inbox view with the PowerShell output
Log file details are shown below
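Given the header block written by Add-Content and the timestamp format used in Write-Log ("yyyy/MM/dd HH:mm:ss"), the log file contents take the following general shape. The server names, dates, and drive letters shown here are illustrative only (note that the alert confirmations are written at the default INFO level, since the script only passes -Level for the unreachable-server case):

```text
#################################################################
Disk Space Details
Generated 08/20/2017 10:15:42
Generated from HQMONP09
#################################################################
2017/08/20 10:15:43 INFO HQSQLP01 is reachable and starting the process
2017/08/20 10:15:44 ERROR HQSQLP02 is not reachable
2017/08/20 10:15:45 INFO HQSQLP01 Critical alert logged for the drive C:
```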
Running the DiskSpace.ps1 file with parameters
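All three parameters are mandatory, so the script can be called positionally or by name. A typical invocation from a PowerShell prompt might look like this; the paths and SMTP host are placeholders, and the trailing backslash on the save directory matters because the log file name is appended directly to it:

```powershell
.\DiskSpace.ps1 -InputServer "C:\Scripts\InputServer.csv" `
                -DirectoryToSave "C:\Scripts\Logs\" `
                -SMTPServer "smtp.example.com"
```

The same command can be registered as a Windows scheduled task so the check runs periodically without manual intervention.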
Conclusion
As usual, any feedback is welcome, and I hope that this article was helpful to you!