SQL: Different ways to generate sequence

There was a question from the original poster about adding a new column to a query output by generating a cyclic sequence of numbers from 1 to 3.

Select A =identity(int,1,1),B,C from table_abs

1,A,41

2,B,13

3,C,90

4,D,91

5,E,98

6,F,12

7,G,54

8,H,16

For this output, a 4th column should cycle through the numbers 1 to 3, as shown below.

1,A,41,1

2,B,13,2

3,C,90,3

4,D,91,1

5,E,98,2

6,F,12,3

7,G,54,1

8,H,16,2

If you are using SQL Server 2012 or later, then SEQUENCE is the natural choice for any such operation.

Solution 1

Using Sequence

CREATE SEQUENCE Seq
 AS INTEGER
 START WITH 1
 INCREMENT BY 1
 MINVALUE 1
 MAXVALUE 3
 CYCLE;

SELECT table_name, NEXT VALUE FOR Seq AS New_column
FROM information_schema.tables
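
Note that NEXT VALUE FOR does not guarantee which row gets which number unless you pin the order. A minimal sketch, assuming the question's table_abs with columns B and C, uses the OVER clause to make the assignment order explicit:

SELECT B, C, NEXT VALUE FOR Seq OVER (ORDER BY B) AS New_column
FROM table_abs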

 

Solution 2

Using CTE and Modulus operator

;with q as
(
  select row_number() over (order by (select null)) A, *
  from sys.objects
)
select A, 1 + (A - 1) % 3 B, *
from q

Note that the expression is 1 + (A - 1) % 3 rather than 1 + A % 3, so that row 1 maps to 1, row 2 to 2, row 3 to 3, row 4 back to 1, and so on.

Solution 3

Loops and Temp table

create table dummyTest
(
id int, 
col1 char(1),
col2 int
)

insert into dummyTest values(1,'A',410),(2,'B',411),(3,'c',4111),(4,'d',421),(5,'e',441),(6,'f',451),(7,'g',481),(8,'h',401)

create table #dummy
(
id int, 
col1 char(1),
col2 int,
NewColumn int
)
declare @n int, @i int, @limit int
set @i = 1
select @n = count(*) from dummyTest

while @i <= @n
begin
    set @limit = 1
    while @limit <= 3
    begin
        print @limit   -- debug: show the cycle value being assigned
        insert into #dummy
        select *, NewColumn = @limit from dummyTest where id = @i

        set @i = @i + 1
        set @limit = @limit + 1
    end
end

 select * from #dummy

 

Conclusion

The same result can be produced using a cursor, and there may be other solutions as well. In many instances, we pick a solution without considering the data volume, and performance degrades as the data grows. This is a prime example of why upgrading to a newer version pays off.

 

 

 


SQL Server 2016: JSON integration

Abstract

This article explains the application of JSON data and the use of the new SQL Server 2016 constructs that enable integration of JSON data into a relational schema.

The idea behind this article is to list as many examples of JSON nested elements as possible, including sample data, conversion of relational data to JSON, JSON to relational data, conversion of JSON elements into separate columns, and representation of the same data in separate rows.

 



Introduction

JSON stands for JavaScript Object Notation. JSON is the primary data representation for most NoSQL databases, and it is a natural fit for developers who use JSON as the data-interchange format in their applications. JSON records are well structured yet easily extended, and this scalability has attracted developers looking at database migrations in agile environments. Data and schema, in volume, can be hard to change: rewriting a large dataset stored on disk while keeping the associated applications online can be time-consuming, taking days of background processing in moderate to large deployments.



Background

Most traditional relational database engines now support JSON. With SQL Server 2016, it is easy to interchange JSON data between applications and the database engine. Microsoft has provided various functions and capabilities to parse JSON data and bring it into relational storage, and to transform relational data into JSON and JSON data into denormalized data. Having these JSON features built into SQL Server makes it easier for applications to exchange JSON data with SQL Server, provides flexibility when integrating JSON data into the relational engine, and lets developers write complex queries at the various stages of the development process.

Relational databases stand for traditional data storage, a constructive and intuitive SQL language, complex query design, and ACID properties. NoSQL offers different concepts: complex structures are placed together in collections of entities, where you can take everything you need with one read operation, or insert a complex structure with a single write, following the CAP theorem.

Relational databases normalize data to some degree; rather than repeating a piece of data in multiple rows, a table that needs that information stores a foreign key pointing to another table that holds the data. On the other hand, this means the data is typically shredded from its original form to fit into tables and then reassembled at run time by joining the tables in response to a query. This becomes particularly expensive as the data set grows and the data needs to be partitioned among multiple database servers.

 



The JSON Syntax Rules

JSON syntax is derived from JavaScript object notation syntax.

  • Data is in name/value pairs. {"key": "value"} – the most common format for objects
  • Data is separated by commas. {"key": "value"}, {"key": "value"}
  • Curly braces hold objects. {"key": {"key": "value"}}
  • Square brackets hold arrays. {"key": [{"key": "value"}, {"key": "value"}]}
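
Putting these rules together, a minimal sketch below builds a small document using an object, an array, and name/value pairs, and validates it with the ISJSON() function introduced in SQL Server 2016 (the data values are illustrative):

DECLARE @doc NVARCHAR(MAX) = N'{
    "name": "donut",
    "available": true,
    "toppings": ["Glazed", "Sugar"],
    "batter": { "id": "1001", "type": "Regular" }
}'

SELECT ISJSON(@doc) AS IsValidJson  -- returns 1 for well-formed JSON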



The JSON Values

In JSON, values must be one of the data types given below.

  • A string
  • A number
  • An object (JSON object)
  • An array
  • A boolean
  • null



Basic Structure

If you have parent/child (fact/dimension) relationships, where related child information does not change frequently and you need to read the child records together with the parent without additional JOINs, you can store the child records in the parent table as a JSON array. In a traditional database, the normalization process minimizes duplication of information, whereas in NoSQL you intentionally duplicate it to make it easier to use. Say we are representing a number of students taking a class. A normalized way of representing the data is given below; the array holds what would be the dimension data of a relational table.

    course "Basic programming", 
        room "1A", 
        students[{ 
            id 1, 
            name "Prashanth"
        }, { 
            id 2, 
            name "Jayaram"
        }] 
}

Here is the same data in denormalized form.

[{
    "course": "Basic programming",
    "room": "1A",
    "studentId": 1,
    "studentName": "Prashanth"
}, {
    "course": "Basic programming",
    "room": "1A",
    "studentId": 2,
    "studentName": "Jayaram"
}]
When you parse the JSON container, you end up fetching the denormalized data as one table.

Let's discuss the different dimensions of the sample data given below and represent the data in tabular and JSON file formats. You will also learn how to query a JSON file with the various JSON constructs available in SQL Server 2016.

Native JSON support in SQL Server 2016 provides a few functions to read and parse a JSON string into a relational format.

  • OPENJSON() – a table-valued function that parses JSON text and returns a row-set view of the JSON.
  • JSON_VALUE() – a scalar function that returns a value from JSON on the specified path.
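
A quick sketch of both functions against an inline JSON fragment (the document and column names here are illustrative):

DECLARE @j NVARCHAR(MAX) = N'{ "room": "1A", "students": [{ "id": 1, "name": "Prashanth" }] }'

-- Scalar extraction with JSON_VALUE
SELECT JSON_VALUE(@j, '$.room') AS Room

-- Row-set view of the array with OPENJSON
SELECT * FROM OPENJSON(@j, '$.students')
WITH (id int '$.id', name nvarchar(50) '$.name')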

The sample output given below demonstrates the different dimensions of representing the same data as JSON and as relational data. The example lists a parent/child relationship, represented as JSON arrays (batter and topping) and nested objects.


Relational data presentation using FOR JSON

The FOR JSON AUTO clause is similar to the FOR XML AUTO clause. It automatically formats the JSON output based on the column/table hierarchy defined in the SQL query. The FOR JSON PATH clause is similar to the FOR XML PATH clause. It gives more control to define the structure, using column aliases with a dot separator.

For example,

Let's create sample tables 'EMP' and 'DEPT' and insert a few rows into them.

CREATE TABLE EMP
(EMPNO NUMERIC(4) NOT NULL,
ENAME VARCHAR(10),
JOB VARCHAR(9),
MGR NUMERIC(4),
HIREDATE DATETIME,
SAL NUMERIC(7, 2),
COMM NUMERIC(7, 2),
DEPTNO NUMERIC(2))
INSERT INTO EMP VALUES
(7782, 'CLARK', 'MANAGER', 7839, '9-JUN-1981', 2450, NULL, 10)
INSERT INTO EMP VALUES
(7788, 'SCOTT', 'ANALYST', 7566, '09-DEC-1982', 3000, NULL, 20)
INSERT INTO EMP VALUES
(7839, 'KING', 'PRESIDENT', NULL, '17-NOV-1981', 5000, NULL, 10)
INSERT INTO EMP VALUES
(7844, 'TURNER', 'SALESMAN', 7698, '8-SEP-1981', 1500, 0, 30)
INSERT INTO EMP VALUES
(7934, 'MILLER', 'CLERK', 7782, '23-JAN-1982', 1300, NULL, 10)
CREATE TABLE DEPT

(DEPTNO NUMERIC(2),
DNAME VARCHAR(14),
LOC VARCHAR(13) )
INSERT INTO DEPT VALUES (10, 'ACCOUNTING', 'NEW YORK')
INSERT INTO DEPT VALUES (20, 'RESEARCH', 'DALLAS')
INSERT INTO DEPT VALUES (30, 'SALES', 'CHICAGO')
INSERT INTO DEPT VALUES (40, 'OPERATIONS', 'BOSTON')

The output of the "FOR JSON AUTO" option is given below.

SELECT * FROM EMP E INNER JOIN DEPT D ON E.DEPTNO=D.DEPTNO
WHERE D.DEPTNO=10
FOR JSON AUTO

This option formats the JSON document automatically based upon the columns provided in the query. With the "FOR JSON PATH" option, dot syntax is used for nested output.

SELECT E.EMPNO,E.ENAME,D.DEPTNO  AS "DEPARTMENT.DEPTNO" ,D.DNAME AS "DEPARTMENT.DNAME",D.LOC AS "DEPARTMENT.LOCATION"
FROM EMP E INNER JOIN DEPT D ON E.DEPTNO=D.DEPTNO
WHERE D.DEPTNO=10
FOR JSON PATH

SELECT E.EMPNO,E.ENAME,D.DEPTNO  AS "DEPARTMENT.DEPTNO" ,D.DNAME AS "DEPARTMENT.DNAME",D.LOC AS "DEPARTMENT.LOCATION"
FROM EMP E INNER JOIN DEPT D ON E.DEPTNO=D.DEPTNO
WHERE D.DEPTNO=10
FOR JSON PATH,ROOT('EMPLOYEE')

As we can see, the PATH option creates a wrapper object 'DEPARTMENT' and nests the properties DEPTNO, DNAME, and LOCATION under it.
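
For illustration, the PATH query above returns an array shaped roughly as below, with one element per employee in department 10 (CLARK shown; KING and MILLER follow the same shape):

[{
    "EMPNO": 7782,
    "ENAME": "CLARK",
    "DEPARTMENT": {
        "DEPTNO": 10,
        "DNAME": "ACCOUNTING",
        "LOCATION": "NEW YORK"
    }
}]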



The JSON Data Transformation

Transform the sample data given below to JSON.

ID Type Name Batter Topping
1 donut Cake Regular None
1 donut Cake Regular Glazed
1 donut Cake Regular Sugar
1 donut Cake Regular Powdered Sugar
1 donut Cake Regular Chocolate with Sprinkles
1 donut Cake Regular Chocolate
1 donut Cake Regular Maple
1 donut Cake Chocolate None
1 donut Cake Chocolate Glazed
1 donut Cake Chocolate Sugar
1 donut Cake Chocolate Powdered Sugar
1 donut Cake Chocolate Chocolate with Sprinkles
1 donut Cake Chocolate Chocolate
1 donut Cake Chocolate Maple
1 donut Cake Blueberry None
1 donut Cake Blueberry Glazed
1 donut Cake Blueberry Sugar
1 donut Cake Blueberry Powdered Sugar
1 donut Cake Blueberry Chocolate with Sprinkles
1 donut Cake Blueberry Chocolate
1 donut Cake Blueberry Maple
1 donut Cake Devils Food None
1 donut Cake Devils Food Glazed
1 donut Cake Devils Food Sugar
1 donut Cake Devils Food Powdered Sugar
1 donut Cake Devils Food Chocolate with Sprinkles
1 donut Cake Devils Food Chocolate
1 donut Cake Devils Food Maple

The transformation below holds nested objects; note that a few more entries are added for id 0002. In the sample data above, there are four batter types and seven toppings, giving 28 (1*4*7) different types of cake for id 0001. Similarly, for id 0002, four batter types and three toppings give 12 (1*4*3) types of cake.

[{ 
    "id": "0001", 
    "type": "donut", 
    "name": "Cake", 
    "ppu": 0.55, 
    "batters": { 
        "batter": [{ 
            "id1": "1001", 
            "type1": "Regular"
        }, { 
            "id1": "1002", 
            "type1": "Chocolate"
        }, { 
            "id1": "1003", 
            "type1": "Blueberry"
        }, { 
            "id1": "1004", 
            "type1": "Devils Food"
        }] 
    }, 
    "topping": [{ 
        "id2": "5001", 
        "type2": "None"
    }, { 
        "id2": "5002", 
        "type2": "Glazed"
    }, { 
        "id2": "5005", 
        "type2": "Sugar"
    }, { 
        "id2": "5007", 
        "type2": "Powdered Sugar"
    }, { 
        "id2": "5006", 
        "type2": "Chocolate with Sprinkles"
    }, { 
        "id2": "5003", 
        "type2": "Chocolate"
    }, { 
        "id2": "5004", 
        "type2": "Maple"
    }] 
}, { 
    "id": "0002", 
    "type": "donut", 
    "name": "cup Cake", 
    "ppu": 0.5, 
    "batters": { 
        "batter": [{ 
            "id1": "1001", 
            "type1": "Regular"
        }, { 
            "id1": "1002", 
            "type1": "Chocolate"
        }, { 
            "id1": "1003", 
            "type1": "Blueberry"
        }, { 
            "id1": "1004", 
            "type1": "Devil's Food"
        }] 
    }, 
    "topping": [{ 
        "id2": "5001", 
        "type2": "None"
    }, { 
        "id2": "5002", 
        "type2": "Glazed"
    }, { 
        "id2": "5005", 
        "type2": "Sugar"
    }] 
}]



Transform JSON to Relational data

OPENJSON is a table-valued function (TVF) that looks into JSON text, locates an array of JSON objects, iterates through the elements of the array, and for each element returns one row in the output result. To read JSON from a file, load the file into a variable using the OPENROWSET construct. The stocks.json file is used for the demonstration; derive the path as per your requirement and environment. The SQL code below reads the content of the JSON file using the OPENROWSET BULK function and passes the content (BulkColumn) to the OPENJSON function.

The JSON file can be stored in a local file system or in global (cloud) storage.

SELECT id, type, name, ppu, type1 AS batter, type2 AS topping
FROM OPENROWSET(BULK N'\\hq6021\c$\stocks.json', SINGLE_CLOB) AS json
CROSS APPLY OPENJSON(BulkColumn)
WITH (id NVARCHAR(40), type NVARCHAR(40), name NVARCHAR(MAX), ppu NVARCHAR(MAX),
      batters NVARCHAR(MAX) AS JSON, topping NVARCHAR(MAX) AS JSON)
CROSS APPLY OPENJSON(batters, '$.batter')
WITH (id1 NVARCHAR(100), type1 NVARCHAR(20))
CROSS APPLY OPENJSON(topping)
WITH (id2 NVARCHAR(100), type2 NVARCHAR(20))



Built-in functions for JSON processing

Source: https://blogs.technet.microsoft.com/dataplatforminsider/2016/01/06/json-in-sql-server-2016-part-2-of-4

 

SQL Server 2016 provides the functions for parsing and processing JSON text. JSON built-in functions, which are available in SQL Server 2016 are given below.

  • ISJSON( jsonText ) checks if the NVARCHAR text is properly formatted according to the JSON specification. You can use this function to create check constraints on NVARCHAR columns that contain JSON text.
  • JSON_VALUE( jsonText, path ) parses jsonText and extracts scalar values on the specified JavaScript-like path (see below for some JSON path examples).
  • JSON_QUERY( jsonText, path ) parses jsonText and extracts objects or arrays on the specified JavaScript-like path (see below for some JSON path examples).

These functions use JSON paths for referencing the values or objects in JSON text. JSON paths use JavaScript-like syntax for referencing the properties in JSON text. Some examples are given below.

  • '$' – references the entire JSON object in the input text.
  • '$.property1' – references property1 in the JSON object.
  • '$[4]' – references the 5th element in a JSON array (indexes are counted from 0, as in JavaScript).
  • '$.property1.property2.array1[5].property3.array2[15].property4' – references a complex nested property in the JSON object.
  • '$.info."first name"' – references the "first name" property in the info object. If a key contains special characters such as space, dollar, etc., it should be surrounded by double quotes.

The dollar sign ($) represents the input JSON object (similar to root “/” in XPath language). You can add any JavaScript-like property or an array after “$” to reference properties in JSON object. One simple example of a query, where these built-in functions are used is given below.

DECLARE @MyJSON NVARCHAR(4000) = N'{
    "info": {
        "type": 1,
        "address": {
            "town": "Louisville",
            "county": "Boulder",
            "country": "USA"
        },
        "tags": ["Snow", "Breweries"]
    },
    "LocationType": "East",
    "WeatherType": "Cold"
}'
Select * from OPENJSON(@MyJSON)
WITH(type int '$.info.type', LocationType varchar(20) '$.LocationType',
    WeatherType varchar(20) '$.WeatherType', town varchar(200) '$.info.address.town',
    county varchar(200) '$.info.address.county', country varchar(200) '$.info.address.country')
CROSS APPLY
OPENJSON(@MyJSON, '$.info.tags')



How to define Nested Objects in JSON

The examples given above also contain sample data that represents nested objects.

DECLARE @json NVARCHAR(1000)
SELECT @json = N'{
    "Employee": [{
        "Element": 1
    }, {
        "Element": 2
    }, {
        "Element": "n"
    }]
}'



Parsing Nested elements of JSON

In the employee example given below, EmployeeDepartment is the root of the JSON. The array element DEPT holds the dimension data, which represents the department details of each employee. The employee JSON structure has three objects.

DECLARE @MyJSON NVARCHAR(4000) = N '{   
"EmployeeDepartment": [{ 
    "EmployeeID": "E0001", 
    "FirstName": "Prashanth", 
    "LastName": "Jayaram", 
    "DOB": "1983-02-03", 
    "DEPT": [{ 
        "EmployeeDepartment": "Ducks" 
    }, { 
        "EmployeeDepartment": "Blues" 
    }] 
}, { 
    "EmployeeID": "E0002", 
    "FirstName": "Prarthana", 
    "LastName": "Prashanth", 
    "DOB": "2015-07-06", 
    "DEPT": [{ 
        "EmployeeDepartment": "Red Wings" 
    }] 
}, { 
    "EmployeeID": "E0003", 
    "FirstName": "Pravitha", 
    "LastName": "Prashanth", 
    "DOB": "2015-07-06", 
    "DEPT": [{ 
        "EmployeeDepartment": "Red Wings" 
    }, { 
        "EmployeeDepartment": "Green Bird" 
    }] 
}] 
'
--SELECT * FROM OPENJSON(@MyJSON) 
SELECT
EmployeeID, 
FirstName, 
LastName, 
DOB, 
DEPT, 
EmployeeDepartment 
FROM OPENJSON(@MyJSON, '$.EmployeeDepartment')
WITH(EmployeeID varchar(10), FirstName varchar(25), LastName varchar(25), DOB varchar(25), DEPT NVARCHAR(MAX) AS JSON)
CROSS APPLY
OPENJSON(DEPT)
WITH(EmployeeDepartment nvarchar(100))



Reading JSON into separate rows

How about pulling them into separate rows, using JSON_VALUE() with the OPENJSON() function? The query given below gives an overview of applying the JSON constructs to the nested elements.
DECLARE @MyJSON NVARCHAR(4000) = N '{   
"EmployeeDepartment": [{ 
    "EmployeeID": "E0001", 
    "FirstName": "Prashanth", 
    "LastName": "Jayaram", 
    "DOB": "1983-02-03", 
    "DEPT": [{ 
        "DeptID": "D1", 
        "DName": "Ducks" 
    }, { 
        "DeptID": "D2", 
        "DName": "Blues" 
    }] 
}, { 
    "EmployeeID": "E0002", 
    "FirstName": "Prarthana", 
    "LastName": "Prashanth", 
    "DOB": "2015-07-06", 
    "DEPT": [{ 
        "DeptID": "D3", 
        "DName": "Red Wings" 
    }] 
}, { 
    "EmployeeID": "E0003", 
    "FirstName": "Pravitha", 
    "LastName": "Prashanth", 
    "DOB": "2015-07-06", 
    "DEPT": [{ 
        "DeptID": "D3", 
        "DName": "Red Wings" 
    }, { 
        "DeptID": "D4", 
        "DName": "Green Bird" 
    }] 
}] 
'
SELECT
JSON_Value(c.value, '$.EmployeeID') as EmployeeID, 
    JSON_Value(c.value, '$.FirstName') as FirstName, 
    JSON_Value(C.value, '$.DOB') as DOB, 
    JSON_Value(p.value, '$.DeptID') as DEPTID, 
    JSON_Value(p.value, '$.DName') as DName 
FROM OPENJSON(@MyJSON, '$.EmployeeDepartment') as c
CROSS APPLY OPENJSON(c.value, '$.DEPT') as p
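
Run against the sample document, this query returns one row per employee/department pair, roughly:

EmployeeID  FirstName  DOB         DEPTID  DName
E0001       Prashanth  1983-02-03  D1      Ducks
E0001       Prashanth  1983-02-03  D2      Blues
E0002       Prarthana  2015-07-06  D3      Red Wings
E0003       Pravitha   2015-07-06  D3      Red Wings
E0003       Pravitha   2015-07-06  D4      Green Bird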



Reading JSON elements into separate columns

You can specify the child elements with a full path by using the dollar sign "$" inside the WITH() clause to segregate the data into separate columns.

DECLARE @MyJSON NVARCHAR(4000) = N '{   
"EmployeeDepartment": [{ 
    "EmployeeID": "E0001", 
    "FirstName": "Prashanth", 
    "LastName": "Jayaram", 
    "DOB": "1983-02-03", 
    "DEPT": [{ 
        "DeptID": "D1", 
        "DName": "Ducks" 
    }, { 
        "DeptID": "D2", 
        "DName": "Blues" 
    }] 
}, { 
    "EmployeeID": "E0002", 
    "FirstName": "Prarthana", 
    "LastName": "Prashanth", 
    "DOB": "2015-07-06", 
    "DEPT": [{ 
        "DeptID": "D3", 
        "DName": "Red Wings" 
    }] 
}, { 
    "EmployeeID": "E0003", 
    "FirstName": "Pravitha", 
    "LastName": "Prashanth", 
    "DOB": "2015-07-06", 
    "DEPT": [{ 
        "DeptID": "D3", 
        "DName": "Red Wings" 
    }, { 
        "DeptID": "D4", 
        "DName": "Green Bird" 
    }] 
}] 
'
SELECT
EmployeeID, 
FirstName, 
DOB, 
Dept1, DName1, 
Dept2, DName2 
FROM OPENJSON(@MyJSON, '$.EmployeeDepartment')
WITH(EmployeeID varchar(20) '$.EmployeeID',
     FirstName varchar(20) '$.FirstName',
     DOB varchar(20) '$.DOB',
     Dept1 varchar(20) '$.DEPT[0].DeptID',
     DName1 varchar(20) '$.DEPT[0].DName',
     Dept2 varchar(20) '$.DEPT[1].DeptID',
     DName2 varchar(20) '$.DEPT[1].DName') AS EMP



Conclusion

SQL Server 2016 contains some very powerful JSON constructs. Mixing the power of relational databases with the flexibility of JSON offers many benefits for migration, integration, and deployment. It is flexible because of the simple syntax and the low overhead of maintaining and managing JSON data. The JSON SQL constructs enable you to query and analyze JSON data as well as transform JSON to relational data and relational data to JSON.

There are plenty of examples and resources available under various links. This article is an effort to combine real-world scenarios and detail the various ways of manipulating JSON data using SQL Server 2016 JSON constructs. NoSQL offers different concepts: complex structures are placed together in collections of entities, where you can take everything you need with one read operation, or insert a complex structure with a single write. The downside is that sometimes you want to organize your information in different collections, and then you will find it very hard to JOIN entities from two collections.

With the new SQL Server, you have the option to choose between these two concepts and use the best of both worlds. In your data models, you can choose when to use traditional relational structures and when to introduce NoSQL concepts.



References

For JSON AUTO/PATH


SQL Server: How to backup and replace text in all search string matching Stored Procedure(s)


Problem Statement:-

After a SQL Server migration, database rename, or object rename, dependent objects such as stored procedures and linked servers require modification. Is there a better way to update names in all the stored procedures instead of manually checking and updating each one? How do we handle linked servers; by creating an alias? And how do we validate the SPs, and take a backup of just the procedures that need changes out of the 'n' SPs in the database?

Solution:-

Yes, we can take a backup of just the SPs that require modification, and for linked servers we can create an alias.
The step-by-step details are given below.

  • Generate a script of all stored procedures – You can use the scripting wizard to generate the script: right-click the database –> Tasks –> Generate Scripts –> go through the wizard. The requirement is to generate the script only for the specific SPs that match the search string.
  • Generate an updated script – The same script is used to update all the eligible SPs with the REPLACE function.
  • Create an alias for linked servers

Generate script for matching search string of all SP’s

The T-SQL below generates the script for the SPs that satisfy the search criteria.

Using sp_helptext

Replace the @searchFor parameter in the SQL below and execute the code.

-- set "Result to Text" mode by pressing Ctrl+T
SET NOCOUNT ON
DECLARE @sqlToRun VARCHAR(1000), @searchFor VARCHAR(100), @replaceWith VARCHAR(100)
-- text to search for
SET @searchFor = 'line'
-- this will hold stored procedures text
DECLARE @temp TABLE (spText VARCHAR(MAX))
DECLARE curHelp CURSOR FAST_FORWARD
FOR
-- get text of all stored procedures that contain search string
-- I am using a custom escape character here since I need to escape [ and ] in the search string
SELECT DISTINCT 'sp_helptext '''+OBJECT_SCHEMA_NAME(id)+'.'+OBJECT_NAME(id)+''' '
FROM syscomments WHERE TEXT LIKE '%' + REPLACE(REPLACE(@searchFor,']','\]'),'[','\[') + '%' ESCAPE '\'
ORDER BY 'sp_helptext '''+OBJECT_SCHEMA_NAME(id)+'.'+OBJECT_NAME(id)+''' '
 
OPEN curHelp
 
FETCH next FROM curHelp INTO @sqlToRun
WHILE @@FETCH_STATUS = 0
BEGIN
   --insert stored procedure text into a temporary table
   INSERT INTO @temp
   EXEC (@sqlToRun)
    
   -- add GO after each stored procedure
   INSERT INTO @temp
   VALUES ('GO')
    
   FETCH next FROM curHelp INTO @sqlToRun
END
 
CLOSE curHelp
DEALLOCATE curHelp
 
 
SELECT spText FROM @temp
-- now copy and paste result into new window
-- then make sure everything looks good and run
GO

Using system view sys.procedures

Replace the @searchFor parameter in the SQL below and execute the code.

SET NOCOUNT ON
 
DECLARE @Test TABLE (Id INT IDENTITY(1,1), Code varchar(max))
DECLARE @searchFor VARCHAR(100)
 
SET @searchFor = 'Line'
 
INSERT INTO @Test (Code)
SELECT 'IF object_ID(N''[' + schema_name(schema_id) + '].[' + Name + ']'') IS NOT NULL
           DROP PROCEDURE [' + schema_name(schema_id) + '].[' + Name + ']' + char(13) + char(10) + 'GO' + char(13) + char(10) +
           OBJECT_DEFINITION(OBJECT_ID) + char(13) + char(10) + 'GO' + char(13) + char(10)
            from sys.procedures
            where is_ms_shipped = 0 and OBJECT_DEFINITION(OBJECT_ID)  LIKE '%'+@searchFor+'%'
             
  
DECLARE @lnCurrent int, @lnMax int
DECLARE @LongName varchar(max)
  
SELECT @lnMax = MAX(Id) FROM @Test
SET @lnCurrent = 1
WHILE @lnCurrent <= @lnMax
      BEGIN
            SELECT @LongName = Code FROM @Test WHERE Id = @lnCurrent
            WHILE @LongName <> ''
               BEGIN
                   print LEFT(@LongName,8000)
                   SET @LongName = SUBSTRING(@LongName, 8001, LEN(@LongName))
               END
            SET @lnCurrent = @lnCurrent + 1
      END

Generate modified SP’s script

Replace the @searchFor and @replaceWith parameters in the SQL below and execute the code. Copy the output into an SSMS window, review it, and execute it to update all the SPs.

-- set "Result to Text" mode by pressing Ctrl+T
SET NOCOUNT ON
DECLARE @sqlToRun VARCHAR(1000), @searchFor VARCHAR(100), @replaceWith VARCHAR(100)
-- text to search for
SET @searchFor = '[MY-SERVER]'
-- text to replace with
SET @replaceWith = '[MY-SERVER2]'
-- this will hold stored procedures text
DECLARE @temp TABLE (spText VARCHAR(MAX))
DECLARE curHelp CURSOR FAST_FORWARD
FOR
-- get text of all stored procedures that contain search string
-- I am using a custom escape character here since I need to escape [ and ] in the search string
SELECT DISTINCT 'sp_helptext '''+OBJECT_SCHEMA_NAME(id)+'.'+OBJECT_NAME(id)+''' '
FROM syscomments WHERE TEXT LIKE '%' + REPLACE(REPLACE(@searchFor,']','\]'),'[','\[') + '%' ESCAPE '\'
ORDER BY 'sp_helptext '''+OBJECT_SCHEMA_NAME(id)+'.'+OBJECT_NAME(id)+''' '
 
OPEN curHelp
FETCH next FROM curHelp INTO @sqlToRun
WHILE @@FETCH_STATUS = 0
BEGIN
   --insert stored procedure text into a temporary table
   INSERT INTO @temp
   EXEC (@sqlToRun)
   -- add GO after each stored procedure
   INSERT INTO @temp
   VALUES ('GO')
   FETCH next FROM curHelp INTO @sqlToRun
END
 
CLOSE curHelp
DEALLOCATE curHelp
 
-- find and replace search string in stored procedures
-- also replace CREATE PROCEDURE with ALTER PROCEDURE
UPDATE @temp
SET spText = REPLACE(REPLACE(spText,'CREATE PROCEDURE', 'ALTER PROCEDURE'),@searchFor,@replaceWith)
 
SELECT spText FROM @temp
-- now copy and paste result into new window
-- then make sure everything looks good and run
GO

Create Linked Server Alias

Step 1:

  • In SQL Server Management Studio, open Linked Servers and then 'New Linked Server'.
  • In the wizard that appears, select the General tab.
  • Specify the alias name in the "Linked server" field.
  • Select SQL Native Client as the provider.
  • Add sql_server in the "Product Name" field (that's the magic).
  • In "Data Source", specify the name of the host to be used as the linked server.

Step 2: In the Security tab, specify the proper security options (e.g. security context).

Step 3: In the Server Options tab, set "Data Access", "RPC", "RPC Out" and "Use Remote Collation" to true.
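
For reference, a minimal T-SQL sketch of the same alias setup (MyAlias and RealHostName are placeholders):

EXEC sp_addlinkedserver
     @server = N'MyAlias',         -- the alias name
     @srvproduct = N'sql_server',  -- product name, as described above
     @provider = N'SQLNCLI',       -- SQL Native Client
     @datasrc = N'RealHostName'    -- the actual host to connect to

EXEC sp_serveroption N'MyAlias', 'data access', 'true'
EXEC sp_serveroption N'MyAlias', 'rpc', 'true'
EXEC sp_serveroption N'MyAlias', 'rpc out', 'true'
EXEC sp_serveroption N'MyAlias', 'use remote collation', 'true'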

Conclusion

  • Time saving – Identifying and modifying many objects is a tedious job; the script makes life easier. Migration is part of evolution, but when updating SPs it's really important to have a backup, and it's easy if some automation like this can modify what is needed
  • Easy to run and generate scripts for SPs based on a search string
  • Easy to keep track of modified SPs, and an efficient way to roll back, as it generates a script for all eligible SPs



My Interview published on Microsoft Technet Blog

Wishing a Happy New Year to all the readers and technical enthusiasts!

https://blogs.technet.microsoft.com/wikininjas/2017/01/02/interview-with-a-sql-server-expert-and-wiki-ninja-prashanth-jayaram/

My interview was published on the Microsoft TechNet blog yesterday.


 

 


PowerShell : Monitoring and Notification – Disk Space Alerts

“Microsoft Technet Guru – Gold Medal Winning Article- Nov, 2016”

Abstract

Monitoring disk space utilization of server(s) is a critical and important job for any administrator, and keeping things organized might improve application and server availability. This article takes us through the detailed steps to read each drive and send a notification with every drive's details based on threshold values and the output data. You basically feed it a list of servers to watch over, and it will report back on these for you, meaning you could also use it as a more general "daily server disk space report". A DBA doesn't want to run out of space on their servers, even in their labs! To avoid this happening, I wrote a PowerShell script to provide some alerts by email.


Introduction

This article talks about the use of credentials. The credentials can be used to query external servers which have a trust relationship between the domains. It also lists various methods to secure the password. The process iterates through a list of servers and drives that you have listed in a CSV file, checking the disk space status of every listed drive; the status falls under one of four values defined as Critical, Warning, Low, and Good. If the disk in question is below a threshold, the corresponding status is updated and a notification is sent to the respective teams.


Highlights

  • The CSV input  – The file contains server, disk information and threshold values
  • The WMI class, Win32_LogicalDisk querying with credentials and without credentials
  • Activity logging in a log file
  • The output has status columns which give a clear indication of status of each drive
  • Email notification


Querying – Win32_LogicalDisks

  • Using Credentials
  • Without using Credentials

Using Credentials

Get-Credential always pops up a dialog box for entering a password; however, you can save your securestring password to a file or directly feed the password. The problem with this is that the password will be exposed to anyone with access to the file.

  • Using Get-Credential cmdlet – Pop up dialog box
  • Directly using password
  • Using secured file

Using Get-Credential cmdlet – Pop-up dialog box

The Get-Credential cmdlet displays a window to enter credential details, which appears every time you run the script. The $credential variable stores the username and password; it's then fed to the respective queries for further processing.

clear
$credential = Get-Credential
foreach ( $args in get-Content c:\server.txt ) {
get-WmiObject win32_logicaldisk -Credential $credential -ComputerName $args -Filter "Drivetype=3"  |
ft SystemName,DeviceID,VolumeName,@{Label="Total Size";Expression={$_.Size / 1gb -as [int] }},@{Label="Free Size";Expression={$_.freespace / 1gb -as [int] }} -autosize
}

Hard code the credentials

The password is hard coded in the script. Of course, the problem with this is that your password will be exposed to anyone with access to the script file.

$User = 'hqnt\abcd'
 $Pass = ConvertTo-SecureString 'abcd@2015' -AsPlainText -Force
 $Credentials = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $User,$Pass
 foreach ( $args in get-Content c:\server.txt ) {
get-WmiObject win32_logicaldisk -ComputerName $args -Credential $Credentials -Filter "Drivetype=3"  |
ft SystemName,DeviceID,VolumeName,@{Label="Total Size";Expression={$_.Size / 1gb -as [int] }},@{Label="Free Size";Expression={$_.freespace / 1gb -as [int] }} -autosize
}

Using Secured file

First, the password has to be written to a file:

ps:\>read-host -AsSecureString |ConvertFrom-SecureString |Out-File C:\SecurePassword.txt

Second, the credentials are read from the file using the PSCredential class, so you don't need to re-enter the password over and over again.

clear
$User = 'hqnt\abcdv'
$pass = cat C:\SecurePassword.txt | ConvertTo-SecureString
$Credentials = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $User, $pass
foreach ( $args in get-Content c:\server.txt ) {
get-WmiObject win32_logicaldisk -ComputerName $args -Credential $Credentials -Filter "Drivetype=3"  |
ft SystemName,DeviceID,VolumeName,@{Label="Total Size";Expression={$_.Size / 1gb -as [int] }},@{Label="Free Size";Expression={$_.freespace / 1gb -as [int] }} -autosize
}

 


Without using Credentials

You don't need to use the credential parameter in any of the cmdlet executions.

clear
foreach ( $args in get-Content c:\server.txt ) {
get-WmiObject win32_logicaldisk -ComputerName $args -Filter "Drivetype=3"  |
ft SystemName,DeviceID,VolumeName,@{Label="Total Size";Expression={$_.Size / 1gb -as [int] }},@{Label="Free Size";Expression={$_.freespace / 1gb -as [int] }} -autosize
}

 


Download

Download the source code from the Technet Gallery

PowerShell : Monitoring and Notification – Disk Space Alerts

 


Code in detail

This section describes the coding and other details of the PoSH script – diskSpace.ps1.

Input File

The template of InputServer_1.csv is given below. Change the content as per your requirement/environment.

import-csv C:\InputServer_1.csv |Format-Table -AutoSize

The comma-delimited (.csv) file InputServer_1.csv is parsed below. You'll notice at the top that there are headings for each column: ServerName, Drive, LowTh, WarnTh, CritTh, Email, LowPri, WarnPri, CritPri, and EscInst. The PowerShell Import-Csv cmdlet understands this default format and expects to see the column headers before parsing the main data.

  • ServerName – Name of the server – acts as a source for cmdlet execution
  • Drive – The drive letter is an input for querying Win32_LogicalDisk class library
  • LowTh – The low limit for sending the alert notification to intended recipients
  • WarnTh – The Middle limit for the drive
  • CritTh – Higher limit for the drive and it requires urgent attention from the technician
  • Email – The list where the notification is sent
  • LowPri – The priority decides the attention and importance. In this case, the technician will have enough time to respond to the issue
  • WarnPri – The priority decides the attention and importance. In this case, the technician will have a fixed turnaround time to react to any such issues
  • CritPri – Requires urgent and immediate action
  • EscInst – Defines the escalation team.

The above columns are defined just to illustrate how the process works; this can vary for every environment depending on the underlying infrastructure. A sample input file under these assumptions is sketched below.
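
A hypothetical InputServer_1.csv matching those headers might look like this (server names, thresholds, addresses, and priorities are illustrative only):

ServerName,Drive,LowTh,WarnTh,CritTh,Email,LowPri,WarnPri,CritPri,EscInst
HQDBSP008,E:,15,10,5,dba-team@example.com,P3,P2,P1,Wintel
HQDBSP008,F:,15,10,5,dba-team@example.com,P3,P2,P1,Wintel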

Write-Log

Write-Log writes a message to a specified log file along with the current time stamp, and also records the state of the message (Information, Warning, or Error).

For example, the first call below writes a simple message with the default state to the log file abc.log; the second writes a message with the "Error" state.

EXAMPLE 1
PS:\> Write-Log -Message "Server is reachable and starting the process " -Logfile c:\PowerSQL\abc.log
EXAMPLE 2
PS:\> Write-Log -level Error -Message "Server is not reachable " -Logfile c:\PowerSQL\abc.log

The function below can be reused in any PoSH code. The output file will also be used for troubleshooting and tracking activity progress.

Function Write-Log {
    [CmdletBinding()]
    Param(
    [Parameter(Mandatory=$False)]
    [ValidateSet("INFO","WARN","ERROR")]
    [String]
    $Level = "INFO",

    [Parameter(Mandatory=$True)]
    [string]
    $Message,

    [Parameter(Mandatory=$False)]
    [string]
    $logfile
    )

    $Stamp = (Get-Date).toString("yyyy/MM/dd HH:mm:ss")
    $Line = "$Stamp $Level $Message"
    If($logfile) {
    Add-Content $logfile -Value $Line
    }
    Else {
        Write-Output $Line
    }
}

Password

This portion of code decides whether to pass credentials or not. Get-Credential always pops up a dialog box for entering a password; however, you can save your securestring password to a file or directly feed the password. The problem with this is that the password will be exposed to anyone with access to the file. If you want to use the default login credentials, then you don't need to mention anything in the code; you can comment out this line of code.

$User = 'abcd'
$Pass = ConvertTo-SecureString 'abcd@#2016' -AsPlainText -Force
$Credentials = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $User,$Pass

Import Server and Drive details

This part of the code reads values from the CSV file. The CSV file has well-defined disk space threshold values for each drive that needs monitoring and alerting. The code imports the server names and loops through each server. After reading, the values are assigned to local variables, which are then used in the WMI query, filtering on -ComputerName and on DeviceID. The credentials parameter is helpful when you are querying different domains that have a trust relationship; if you are querying servers under the same domain and want to use the default login credentials, you can ignore the credential parameter.

The next part calculates the "free %". The status column is populated based on the input threshold values: $percentFree is compared with the three input parameters $clowth, $cwarnth, and $ccritth to yield one of the four drive statuses, Critical, Warning, Low, and Good.

#Import the file to get the drives status and other usage details

#The Import-Csv cmdlet provides a way for you to read in data from a comma-separated values file (CSV)

Import-Csv $InputServer|%{
$cserver = $_.Server
$cdrivelt = $_.Drive
$clowth = $_.LowTh
$cwarnth = $_.WarnTh
$ccritth = $_.CritTh
$cemail = $_.Email
$clowpri = $_.LowPri
$cwarnpri = $_.WarnPri
$ccritpri = $_.CritPri
$cescinst = $_.EscInst
If (!(Test-Connection $_.Server -count 1 -quiet)) {
#Write the message to the log file
Write-Log -level ERROR -Message "$($_.Server) is not reachable" -Logfile $Logfile
}
else
{
#Write the progress to the log file
Write-Log -Message "$($_.Server) is reachable and starting the process " -Logfile $Logfile

$diskinfo = Get-WmiObject -Class Win32_LogicalDisk -ComputerName $cserver -Filter "DeviceID='$cdrivelt'"
ForEach ($disk in $diskinfo)
{
#Calculate the % free. This value is compared with the various thresholds to derive the status of the drive
If ($diskinfo.Size -gt 0) {$percentFree = [Math]::round((($diskinfo.freespace/$diskinfo.size) * 100))}
Else {$percentFree = 0}
#Determine if the disk needs to be flagged for a warning or critical alert
If ($percentFree -le $ccritth) {

$status = "Critical"
$priority = $ccritpri
$body = @"
Notification that a disk drive is reporting an alert for low disk space!
$cserver $cdrivelt has $percentFree % free space. Please assign an $priority priority ticket to the $cescinst team.
-This is an automated email being generated by the script DiskMonCheck.ps1, as a scheduled task on HQMONP09.
"@
Send-MailMessage -to $cemail -from "HQMONP09@appvion.com" -Subject "Disk Alert - $cserver $cdrivelt out of disk space!" -body $body -smtpserver $SMTPServer
Write-Log -Message "$($_.Server) Critical alert logged for the drive $cdrivelt " -Logfile $Logfile

}

Output

This section describes various options available in the script to validate the disk space data

Log filename

The code below defines the output log file location and the directory to save the output in. The $date variable holds the date formatting; it's then appended to the log file name to generate a more meaningful filename, for example DiskSpaceLog_2016-10-10.

$date=Get-Date -format "yyyy-MM-d"
#Prepare log file and output CSV file
$LogFileName="DiskSpaceLog_$($date)"

Email

This section defines the body of the email; the content is prepared using a here-string. The email gives detailed information about the severity of the space shortage and notifies the technician to act to remediate the issue. Here-strings are a great technique to use if you want to have a lot of text that covers several lines. Below is the here-string created for the body of the email auto-notification.

$body = @"
Notification that a disk drive is reporting an alert for low disk space!
$cserver $cdrivelt has $percentFree % free space. Please assign an $priority priority ticket to the $cescinst team.
-This is an automated email being generated by the script DiskMonCheck.ps1, as a scheduled task on HQMONP09.
"@
Send-MailMessage -to $cemail -from "HQMONP09@appvion.com" -Subject "Disk Alert - $cserver $cdrivelt out of disk space!" -body $body -smtpserver $SMTPServer

Console

The output is customized and written to the console, so you can validate the data against the email. In console output, the columns might not be displayed in the order in which the script constructs them, because PoSH rearranges the output based on the available metadata; the code below shows one way of forcing the column order.

$mydisk +=New-Object PSObject -Property @{
    Server=$_.Server
    DeviceID= $disk.DeviceID
    VolumeName= $disk.VolumeName
    Size= [math]::Round(($disk.Size /1GB),2)
    Usedspace= [math]::Round((($disk.Size - $disk.FreeSpace)/1GB),2)
    Percentage= ("{0:P}" -f ($disk.FreeSpace / $disk.Size))
    status=$status
    }
}
}
}
$mydisk |Select-Object @{Name="Server";Expression={$_.Server}},@{Name="DeviceID";Expression={$_.DeviceID}},
 @{Name="VolumeName";Expression={$_.VolumeName}},
 @{Name="Size";Expression={$_.Size}},
 @{Name="Used Space";Expression={$_.Usedspace}},
 @{Name="% Free";Expression={$_.Percentage}},
 @{Name="Status";Expression={$_.status}}|Format-Table  -AutoSize
 

 


Code

<#
.Synopsis
The objective of the script is to make use of .csv files as sources for various parts of the script.
 .Description
  Reads the input CSV, checks disk space for the listed drives, and logs progress. Log entries in the log file are time stamped. By default messages are logged under the INFO category; this can be changed to another category such as "WARN" or "ERROR" using the -level parameter of Write-Log.
 .Parameter InputServer
  Path to the file where the input details are saved.
  Example: c:\InputServer.csv
 .Parameter DirectorytoSave
  Directory where the log file is written.
 .Parameter SMTPServer
  The SMTP server name used to send email to the respective recipients.
  Example: smtp.example.com
 .Example
   Write-Log  -Message "$($_.Server) is reachable and starting the process " -Logfile $Logfile
 .Example
  Write-Log  -Message "$($_.Server) Critical alert logged for the drive $cdrivelt " -Logfile $Logfile
 .Link
 .Notes
 The CSV file holds all the metadata for each drive that you intend to monitor and send out notifications for.
#>
    [CmdletBinding(SupportsShouldProcess=$true,ConfirmImpact='Low')]
    Param(
        [Parameter(Mandatory=$true,
                  Position=0)]
            [String]$InputServer,
        [Parameter(Mandatory=$true,
                  Position=1)]
            [String]$DirectorytoSave,
        [Parameter(Mandatory=$true,
                   Position=2)]
            [String]$SMTPServer
    )
 
# formatting the date
 
 $date=Get-Date -format "yyyy-MM-d"
  
 #Prepare log file and output CSV file
  
 $LogFileName="DiskSpaceLog_$($date)"
   
# before we do anything else, are we likely to be able to save the file?
# if the directory doesn't exist, then create it

if (!(Test-Path -path "$DirectorytoSave")) #create it if not existing
  {
  New-Item "$DirectorytoSave" -type directory | out-null
  }
#log file creation

$logfile = Join-Path $DirectorytoSave "$LogFileName.log"

if (!(Test-Path -path "$logfile")) #create it if not existing
  {
   New-Item -ItemType file $logfile -Force
  }
 
# Prepare headers for the log file for each execution of script
 
Add-Content $logfile "#################################################################"
Add-Content $logfile "Disk Space Details"
Add-Content $logfile "Generated $(get-date)"
Add-Content $logfile "Generated from $(gc env:computername)"
Add-Content $logfile "#################################################################"
Function Write-Log {
    [CmdletBinding()]
    Param(
    [Parameter(Mandatory=$False)]
    [ValidateSet("INFO","WARN","ERROR")]
    [String]
    $Level = "INFO",
 
    [Parameter(Mandatory=$True)]
    [string]
    $Message,
 
    [Parameter(Mandatory=$False)]
    [string]
    $logfile
    )
 
    $Stamp = (Get-Date).toString("yyyy/MM/dd HH:mm:ss")
    $Line = "$Stamp $Level $Message"
    If($logfile) {
    Add-Content $logfile -Value $Line
    }
    Else {
        Write-Output $Line
    }
}
 
   
#Creating PowerShell custom objects
 
$Mydisk=@() 
 
#Import the file to get the drives status and other usage details
#The Import-Csv cmdlet provides a way for you to read in data from a comma-separated values file (CSV)
   
Import-Csv $InputServer|%{
$cserver = $_.Server
$cdrivelt = $_.Drive
$clowth = $_.LowTh
$cwarnth = $_.WarnTh
$ccritth = $_.CritTh
$cemail = $_.Email
$clowpri = $_.LowPri
$cwarnpri = $_.WarnPri
$ccritpri = $_.CritPri
$cescinst = $_.EscInst
If (!(Test-Connection $_.Server -count 1 -quiet)) {
#Write the message to the log file
Write-Log  -level ERROR -Message "$($_.Server) is not reachable" -Logfile $Logfile
}
else
{
#Write the Progress to log file
Write-Log  -Message "$($_.Server) is reachable and starting the process " -Logfile $Logfile
 
$diskinfo= Get-WmiObject -Class Win32_LogicalDisk -ComputerName $cserver  -Filter "DeviceID='$cdrivelt'"
ForEach ($disk in $diskinfo)
{
#Calculate the % free. This parameter will be compared with various thresholds to derive the status of the drive
If ($diskinfo.Size -gt 0) {$percentFree = [Math]::round((($diskinfo.freespace/$diskinfo.size) * 100))}
Else {$percentFree = 0}
    #Determine if the disk needs to be flagged for a warning or critical alert
 If ($percentFree -le  $ccritth) {
 
 $status = "Critical"
 $priority = $ccritpri
 $body = @"
Notification that a disk drive is reporting an alert for low disk space!
$cserver $cdrivelt has $percentFree % free space. Please assign an $priority priority ticket to the $cescinst team.
-This is an automated email being generated by the script DiskMonCheck.ps1, as a scheduled task on HQMONP09.
"@
Send-MailMessage -to $cemail -from "HQMONP09@appvion.com" -Subject "Disk Alert - $cserver $cdrivelt out of disk space!" -body $body -smtpserver $SMTPServer
Write-Log  -Message "$($_.Server) Critical alert logged for the drive $cdrivelt " -Logfile $Logfile
}
 
 ElseIf ($percentFree -gt $ccritth -AND $percentFree -le $cwarnth) {
 $status = "Warning"
 $priority = $cwarnpri
$body = @"
Notification that a disk drive is reporting an alert for low disk space!
$cserver $cdrivelt has $percentFree % free space. Please assign a $priority priority ticket to the $cescinst team.
-This is an automated email being generated by the script DiskMonCheck.ps1, as a scheduled task on HQMONP09.
"@
Send-MailMessage -to $cemail -from "HQMONP09@appvion.com" -Subject "Disk Alert - $cserver $cdrivelt disk space warning!" -body $body -smtpserver $SMTPServer
Write-Log  -Message "$($_.Server) Warning alert logged for the drive $cdrivelt " -Logfile $Logfile
}
ElseIf ($percentFree -gt $cwarnth -AND $percentFree -le $clowth) { $status = "Low"
$priority = $clowpri
$body = @"
Notification that a disk drive is reporting an alert for low disk space!
$cserver $cdrivelt has $percentFree % free space. Please assign a $priority priority ticket to the $cescinst team.
-This is an automated email being generated by the script DiskMonCheck.ps1, as a scheduled task on HQMONP09.
"@
Send-MailMessage -to $cemail -from "HQMONP09@appvion.com" -Subject "Disk Alert - $cserver $cdrivelt disk space warning!" -body $body -smtpserver $SMTPServer
Write-Log  -Message "$($_.Server) low alert logged for the drive $cdrivelt " -Logfile $Logfile
}
              
Else { $status = "Good" }
 
$mydisk +=New-Object PSObject -Property @{
    Server=$_.Server
    DeviceID= $disk.DeviceID
    VolumeName= $disk.VolumeName
    Size= [math]::Round(($disk.Size /1GB),2)
    Usedspace= [math]::Round((($disk.Size - $disk.FreeSpace)/1GB),2)
    Percentage= ("{0:P}" -f ($disk.FreeSpace / $disk.Size))
    status=$status
    }
}
}
}
$mydisk |Select-Object @{Name="Server";Expression={$_.Server}},@{Name="DeviceID";Expression={$_.DeviceID}},
 @{Name="VolumeName";Expression={$_.VolumeName}},
 @{Name="Size";Expression={$_.Size}},
 @{Name="Used Space";Expression={$_.Usedspace}},
 @{Name="% Free";Expression={$_.Percentage}},
 @{Name="Status";Expression={$_.status}}|Format-Table  -AutoSize
  

Output

Compare Mail Inbox view with PoSH output

Log file details are shown below

Running DiskSpace.ps1 file with parameters

  • InputServer – The source CSV file for the entire script
  • DirectoryToSave – Directory where the activity/progress log file is saved
  • SMTPServer – SMTP server used for email notification

Conclusion

This article has illustrated how to set up and configure email notification for when a server is running out of hard disk space. Setting up a simple alerting system like this (e.g. via a scheduled SQL Server Agent job or via Task Scheduler) is a great way to help ensure that you don't run into any surprises.

As usual, any feedback is welcome, and I hope that this article was helpful to you!

 


References

Technet

  1. PoSH : CSV-DiskSpace- CSV 
  2. PoSH : Disk Space Utilization Report
  3. PoSH : CSV – Disk Space Report – HTML
  4. PoSH : CSV – Disk Space Report – Excel
  5. PoSH : DiskSpace GUI Tool
  6. PoSH : MultiServer(s) Disk Space GUI Tool
  7. PoSH & SQL : Monitoring Disk Space with SQL Server and PowerShell via SQL Agent





SQL Server: Discuss Execute(SQL) At LinkedServer

Abstract

This article details the use of the "EXECUTE AT LinkedServer" clause. It's best in some ways when you are trying to run dynamic, complex queries across heterogeneous data sources. There are many instances where OPENQUERY/OPENROWSET and four-part-name calls do not work in a complex SQL design; the limitations of linked servers become apparent when you try to manipulate data and write complex queries across heterogeneous data sources.
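
As a quick illustration, a minimal sketch of the clause is shown below (MyLinkedServer is a placeholder; RPC Out must be enabled on the linked server):

-- The batch runs on the remote server itself; ? is the parameter placeholder
EXEC ('SELECT name FROM sys.databases WHERE database_id > ?', 4) AT MyLinkedServer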



SQL : How to Find Unused Indexes details

The Dynamic Management View (DMV) sys.dm_db_index_usage_stats tracks index usage details for the database. This DMV gives information about indexes that are being updated but not used in any seek, scan, or lookup operations.

The query below lists:

  1. Table name
  2. Index name
  3. No of Rows
  4. Size of Index
  5. Type of Index
  6. Drop SQL statement

Dropping an index is done at your own risk; validate the data before dropping anything from the database.

select object_name(i.object_id) as ObjectName, i.name as [Unused Index], MAX(p.rows) as Rows
, 8 * SUM(a.used_pages) as 'IndexSize(KB)', case
    when i.type = 0 then 'Heap'
    when i.type = 1 then 'Clustered'
    when i.type = 2 then 'Non-clustered'
    when i.type = 3 then 'XML'
    when i.type = 4 then 'Spatial'
    when i.type = 5 then 'Clustered xVelocity memory optimized columnstore index'
    when i.type = 6 then 'Nonclustered columnstore index'
end as index_type, 'DROP INDEX ' + i.name + ' ON ' + object_name(i.object_id) as 'Drop Statement'
from sys.indexes i
left join sys.dm_db_index_usage_stats s on s.object_id = i.object_id
     and i.index_id = s.index_id
     and s.database_id = db_id()
join sys.partitions as p on p.object_id = i.object_id and p.index_id = i.index_id
join sys.allocation_units as a on a.container_id = p.partition_id
where objectproperty(i.object_id, 'IsIndexable') = 1
and objectproperty(i.object_id, 'IsIndexed') = 1
and (s.index_id is null -- dm_db_index_usage_stats has no reference to this index
     or (s.user_updates > 0 and s.user_seeks = 0 and s.user_scans = 0 and s.user_lookups = 0)) -- index is being updated, but not used by seeks/scans/lookups
group by object_name(i.object_id), i.name, i.type
order by object_name(i.object_id) asc
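
One caveat worth noting: the counters in sys.dm_db_index_usage_stats are cleared when the instance restarts, so a recently rebooted server can make every index look unused. A quick check of how long the counters have been accumulating:

-- Usage stats only cover the window since the last instance start
select sqlserver_start_time from sys.dm_os_sys_info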

 


PowerShell : CSV – Disk Space Report – Excel

Abstract

Monitoring disk space utilization of server(s) is a critical and important job for any administrator, and keeping things organized might improve application and server availability. Having been a database administrator for 10 years, I have faced and managed a lot of issues with disk space. This article takes us through the detailed steps to read each drive and report every drive's details based on threshold values. The output is integrated with Excel charts. The step-by-step process quickly takes us through the disk space utilization details of the server(s). You basically feed it a list of servers to watch over, and it will report back on these for you, meaning you could also use it as a more general "daily server disk space report".



Introduction

This article talks about the use of credentials. The credentials can be used to query external servers which have a trust relationship between the domains. It also lists various methods to secure the password. The process iterates through a list of servers and drives that you have listed in a CSV file, checking the disk space status of every listed drive; the status falls under one of four values defined as Critical, Warning, Low, and Good. If the disk in question is below the threshold, the corresponding status counter is incremented, and the counters act as the data source for the Excel charts. The nice thing about this script is that it consolidates the health status of each listed disk and gives a summary that needs your attention (you set the thresholds as per requirement, because the size of the drives may vary from server to server).



Querying WMI objects – Win32_LogicalDisks

  • Using Credentials
  • Without using Credentials

Using Credentials

Get-Credential always pops up a dialog box for entering a password; however, you can save your securestring password to a file or directly feed the password. The problem with this is that the password will be exposed to anyone with access to the file.

  • Using Get-Credential cmdlet – Pop up dialog box
  • Directly using password
  • Using secured file

Using Get-Credential cmdlet – Pop-up dialog box

The Get-Credential cmdlet displays a window to enter credential details, which appears every time you run the script. The $credential variable stores the username and password; it's then fed to the respective queries for further processing.

clear
$credential = Get-Credential 
foreach ( $args in get-Content c:\server.txt ) {
get-WmiObject win32_logicaldisk -Credential $credential -ComputerName $args -Filter "Drivetype=3"  |
ft SystemName,DeviceID,VolumeName,@{Label="Total Size";Expression={$_.Size / 1gb -as [int] }},@{Label="Free Size";Expression={$_.freespace / 1gb -as [int] }} -autosize
}

Hard code the credentials

The password is hard coded in the script. Of course, the problem with this is that your password will be exposed to anyone with access to the script file.

$User = 'hqnt\abcd'
$Pass = ConvertTo-SecureString 'abcd@2015' -AsPlainText -Force
$Credentials = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $User,$Pass
foreach ( $server in Get-Content c:\server.txt ) {
Get-WmiObject win32_logicaldisk -ComputerName $server -Credential $Credentials -Filter "DriveType=3" |
Format-Table SystemName,DeviceID,VolumeName,@{Label="Total Size";Expression={$_.Size / 1gb -as [int] }},@{Label="Free Size";Expression={$_.freespace / 1gb -as [int] }} -AutoSize
}

Using Secured file

First, the password has to be written to a file:

ps:\>read-host -AsSecureString |ConvertFrom-SecureString |Out-File C:\SecurePassword.txt

Second, the credentials are read from the file using the PSCredential class, so you don’t need to re-enter the password over and over again.

Clear-Host
$User = 'hqnt\abcdv'
$Pass = Get-Content C:\SecurePassword.txt | ConvertTo-SecureString
$Credentials = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $User,$Pass
foreach ( $server in Get-Content c:\server.txt ) {
Get-WmiObject win32_logicaldisk -ComputerName $server -Credential $Credentials -Filter "DriveType=3" |
Format-Table SystemName,DeviceID,VolumeName,@{Label="Total Size";Expression={$_.Size / 1gb -as [int] }},@{Label="Free Size";Expression={$_.freespace / 1gb -as [int] }} -AutoSize
}

↑ Return to Top


Without using Credentials

You don’t need to pass the credential parameter in any of the cmdlet executions.

Clear-Host
foreach ( $server in Get-Content c:\server.txt ) {
Get-WmiObject win32_logicaldisk -ComputerName $server -Filter "DriveType=3" |
Format-Table SystemName,DeviceID,VolumeName,@{Label="Total Size";Expression={$_.Size / 1gb -as [int] }},@{Label="Free Size";Expression={$_.freespace / 1gb -as [int] }} -AutoSize
}

↑ Return to Top


Download – Gallery link

↑ Return to Top


Code in detail

This section describes the code and related details in depth.

Input File

The template of server.csv is given below. Change the content as per your requirement/environment

Server,Drive,LowTh,WarnTh,CritTh
HQDBSP008,E:,8,5,3
HQDBSP008,F:,8,20,3
HQDBSP0018,G:,8,5,3
HQSPDB9901,E:,8,5,3
HQSPDB0901,F:,20,5,3
HQSPDB8001,G:,8,5,3

Output and filename

The below code defines the output file location and filename. The $date variable holds the formatted date, which is then appended to $Filename to generate a meaningful filename, for example, c:\DMZ_Server_Space_Utilization_2016-09-16.

$DirectoryToSaveTo = "c:\"
$date=Get-Date -format "yyyy-MM-d"
$Filename="DMZ_Server_Space_Utilization_$($date)"

Password

This portion of the code decides whether to pass credentials or not. Get-Credential always pops up a dialog box for entering a password; however, you can save your secure-string password to a file or feed the password directly. The problem with this is that the password will be exposed to anyone with access to the file. If you want to use the default login credentials, then you don’t need to mention anything in the code and can comment out these lines.

$User = 'abcd'
$Pass = ConvertTo-SecureString 'abcd@#2016' -AsPlainText -Force
$Credentials = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $User,$Pass

Invoking Excel components

PowerShell can use the Excel component by invoking the Excel.Application COM object, which allows us to work with Excel to add and format data. New-Object -ComObject creates a new Excel object using COM components, and it is made visible for troubleshooting by setting the Visible property to true. Once everything is defined, we can call the respective methods and properties to create a workbook and pick up its first worksheet.

#Create a new Excel object using COM
$Excel = New-Object -ComObject Excel.Application
#you can see the Excel sheet and helpful in troubleshooting
$Excel.visible = $True
#After instantiating the Excel object, it's time to use its respective methods and properties
$Excel = $Excel.Workbooks.Add()
$Sheet = $Excel.Worksheets.Item(1)

Workbook number and details

FileFormat numbers determine the format in which the workbook is saved. These are the main file formats in Windows Excel 2007-2016:

51 = xlOpenXMLWorkbook (without macros in 2007-2013, xlsx)
52 = xlOpenXMLWorkbookMacroEnabled (with or without macros in 2007-2013, xlsm)
50 = xlExcel12 (Excel Binary Workbook in 2007-2013 with or without macros, xlsb)
56 = xlExcel8 (97-2003 format in Excel 2007-2013, xls)

$xlOpenXMLWorkbook=[int]51
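This constant is used later when saving the workbook, as in the sketch below (the file name here is just an illustration; the full script builds it from $DirectoryToSaveTo and $Filename):

#Save the workbook as an Open XML workbook (.xlsx) using the constant defined above
$Excel.SaveAs("C:\DiskSpaceReport.xlsx", $xlOpenXMLWorkbook)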

Header and Coloring

Create a header for the Disk Space Report, setting each cell to bold and adding a background color. The sheet gets Computername, DeviceID, VolumeName, TotalSizeGB, UsedSpaceGB, FreeSpaceGB, %Free, and Status columns. The variable $column is incremented by 1 each time to create a new column.

$Sheet.Cells.Item($row,$column)= 'Computername'
$Sheet.Cells.Item($row,$column).Interior.ColorIndex =48
$Sheet.Cells.Item($row,$column).Font.Bold=$True
$Column++
$Sheet.Cells.Item($row,$column)= 'DeviceID'
#Making the font Bold and adjusting the background color (using the Interior.ColorIndex property of each cell)
$Sheet.Cells.Item($row,$column).Interior.ColorIndex =48
$Sheet.Cells.Item($row,$column).Font.Bold=$True
$Column++
$Sheet.Cells.Item($row,$column)= 'VolumeName'
$Sheet.Cells.Item($row,$column).Interior.ColorIndex =48
$Sheet.Cells.Item($row,$column).Font.Bold=$True
$Column++
$Sheet.Cells.Item($row,$column)= 'TotalSizeGB'
$Sheet.Cells.Item($row,$column).Interior.ColorIndex =48
$Sheet.Cells.Item($row,$column).Font.Bold=$True
$Column++
$Sheet.Cells.Item($row,$column)= 'UsedSpaceGB'
$Sheet.Cells.Item($row,$column).Interior.ColorIndex =48
$Sheet.Cells.Item($row,$column).Font.Bold=$True
$Column++
$Sheet.Cells.Item($row,$column)= 'FreeSpaceGB'
$Sheet.Cells.Item($row,$column).Interior.ColorIndex =48
$Sheet.Cells.Item($row,$column).Font.Bold=$True
$Column++
$Sheet.Cells.Item($row,$column)= '%Free'
$Sheet.Cells.Item($row,$column).Interior.ColorIndex =48
$Sheet.Cells.Item($row,$column).Font.Bold=$True
$Column++
$Sheet.Cells.Item($row,$column)= 'Status'
$Sheet.Cells.Item($row,$column).Interior.ColorIndex =48
$Sheet.Cells.Item($row,$column).Font.Bold=$True
$Column++
#Set up a header filter
$headerRange = $Sheet.Range("a3","o3")
$headerRange.AutoFilter() | Out-Null

Import Server and Drive details

This part of the code reads values from the CSV file. The CSV file has five columns: Server, Drive, LowTh, WarnTh, and CritTh. The server.csv file (shown earlier) has a specific set of values for each server as well as for each drive. The code reads the values and loops through every server. After reading from the CSV file, the values are assigned to local variables, which are then used in the WMI query by filtering on -ComputerName and DeviceID. The credentials parameter is really helpful when you are querying different domains that have a trust relationship between them. If you are querying servers under the same domain and want to use the default logon credentials, then you can ignore the credential parameter.
The next part calculates the free percentage and writes the data to the sheet. The Status column is populated based on the input threshold values: $percentFree identifies the status of the drive, which is one of Critical, Warning, Low, or Good. The three input parameters $clowth, $cwarnth, and $ccritth are compared with the $percentFree variable to yield the result for the Status column.
Also, the code uses four counters to keep track of the Status column. Each counter is incremented whenever its condition is met, and the final values are used to derive the overall health of the drives of the listed servers.

Import-Csv C:\server.csv|%{
$cserver = $_.Server
$cdrivelt = $_.Drive
$clowth = $_.LowTh
$cwarnth = $_.WarnTh
$ccritth = $_.CritTh
$diskinfo= Get-WmiObject -Class Win32_LogicalDisk -ComputerName $cserver  -Filter "DeviceID='$cdrivelt'" -Credential $Credentials
ForEach ($disk in $diskinfo)
{
If ($disk.Size -gt 0) {$percentFree = [Math]::round((($disk.FreeSpace/$disk.Size) * 100))}
Else {$percentFree = 0}
#Process each disk in the collection and write to spreadsheet
    $Sheet.Cells.Item($row,1)= $disk.__Server
    $Sheet.Cells.Item($row,2)= $disk.DeviceID
    $Sheet.Cells.Item($row,3)= $disk.VolumeName
    $Sheet.Cells.Item($row,4)= [math]::Round(($disk.Size /1GB),2)
    $Sheet.Cells.Item($row,5)= [math]::Round((($disk.Size - $disk.FreeSpace)/1GB),2)
    $Sheet.Cells.Item($row,6)= [math]::Round(($disk.FreeSpace / 1GB),2)
    $Sheet.Cells.Item($row,7)= ("{0:P}" -f ($disk.FreeSpace / $disk.Size))
    
    #Determine if disk needs to be flagged for warning or critical alert
    If ($percentFree -le  $ccritth) {
        $Sheet.Cells.Item($row,8) = "Critical"
        $critical++
        #Check to see if space is near empty and use appropriate background colors
        $range = $Sheet.Range(("A{0}"  -f $row),("H{0}"  -f $row))
        $range.Select() | Out-Null   
        #Critical threshold        
        $range.Interior.ColorIndex = 3
    } ElseIf ($percentFree -gt $ccritth -AND $percentFree -le $cwarnth) {
        $Sheet.Cells.Item($row,8) = "Warning"
        $range = $Sheet.Range(("A{0}"  -f $row),("H{0}"  -f $row))
        $range.Select() | Out-Null       
        $warning++
        $range.Interior.ColorIndex = 6
        }
     ElseIf ($percentFree -ge $cwarnth -AND $percentFree -lt $clowth) {
        $Sheet.Cells.Item($row,8) = "Low"
        $range = $Sheet.Range(("A{0}"  -f $row),("H{0}"  -f $row))
        $range.Select() | Out-Null       
        $low++
        $range.Interior.ColorIndex = 12
        
    } Else {
        $Sheet.Cells.Item($row,8) = "Good"
        $good++
    }
     $row++
}
}

Charting

The final piece to add is a chart, which shows the number of Critical, Warning, Low, and Good states across the listed drives. The four status counters populated earlier act as the data source for the chart.

The first part of the code below prepares the headings, and the second part assigns the corresponding counter values under their respective column headers.

$Sheet.Cells.Item($row,$Column) = 'Critical'
$Column++
$Sheet.Cells.Item($row,$Column) = 'Warning'
$Column++
$Sheet.Cells.Item($row,$Column) = 'Low'
$Column++
$Sheet.Cells.Item($row,$Column) = 'Good'
$Column = 1
$row++
#Critical formula
$Sheet.Cells.Item($row,$Column)=$critical
$Column++
#Warning formula
$Sheet.Cells.Item($row,$Column)=$warning
$Column++
#low formula
$Sheet.Cells.Item($row,$Column)=$low
$Column++
#Good formula
$Sheet.Cells.Item($row,$Column)= $good

For example,

Critical Warning Low Good
1 1 1 3

The ChartType value for an exploded 3-D pie chart is 70.

#Configure the chart
##Use a 3D Pie Chart
$chart.ChartType = 70
$chart.Elevation = 40


The below code positions the chart at the top of the sheet, 600 points in from the left edge.

#Set the location of the chart
$sheet.Shapes.Item("Chart 1").Placement = 3
$sheet.Shapes.Item("Chart 1").Top = 30
$sheet.Shapes.Item("Chart 1").Left = 600

↑ Return to Top


Code

<#

.SYNOPSIS
Name :  Disk Space Utilization Report (Get-DiskSpaceExcel.ps1)
Description : Get disk space usage information from remote server(s) with WMI and output an Excel file

Author : Prashanth Jayaram

* Some ideas adapted from Boe Prox's Excel charting. Refer to the reference section for more information
* Select list of servers from a CSV file
* Get remote server information with WMI and PowerShell :
* Disk (Disk type, letter, capacity in GB, free space in GB, % free, Status + display an Excel output)

.INPUTS
.csv file with servers to activate

.OUTPUTS
Console outputs : You can alter the code to write the data to file or console

.NOTES
Version:        1.0
Author:         Prashanth Jayaram
Creation Date:  2016-09-26
Purpose/Change: Initial script development

.EXAMPLE
.\Get-DiskSpaceExcel.ps1
#>

#########################################################################################
#DirectoryPath - Make sure you have enough rights to write to that path
#########################################################################################
#### Spreadsheet Location - Make sure you change as per your requirement

 $DirectoryToSaveTo = "c:\"
 $date=Get-Date -format "yyyy-MM-d"
 $Filename="DMZ_Server_Space_Utilication_$($date)"
  
 ##Get-Credential always pops up a dialog box for entering a password; however, you can save your secure-string password to a file or feed the password directly.
 ##The problem with this is that the password will be exposed to anyone with access to the file.
  
####User Credentials to access servers
 
 $User = "ccov648"
 $Pass = ConvertTo-SecureString "thanVitha@2015" -AsPlainText -Force
 $Credentials = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $User,$Pass
 
 
# before we do anything else, are we likely to be able to save the file?
# if the directory doesn't exist, then create it
if (!(Test-Path -path "$DirectoryToSaveTo")) #create it if not existing
  {
  New-Item "$DirectoryToSaveTo" -type directory | out-null
  }
   
 
#PowerShell can use the Excel component by invoking the Excel.Application COM object that allows us to work with Excel to add data and format that data.
#Create a new Excel object using COM
$Excel = New-Object -ComObject Excel.Application
#you can see the Excel sheet and helpful in troubleshooting
$Excel.visible = $True
#After instantiating the Excel object, it's time to use its respective methods and properties
$Excel = $Excel.Workbooks.Add()
$Sheet = $Excel.Worksheets.Item(1)
 
 
#FileFormat numbers determine the format in which the workbook is saved
 
#These are the main file formats in Windows Excel 2007-2016:
 
#51 = xlOpenXMLWorkbook (without macros in 2007-2013, xlsx)
#52 = xlOpenXMLWorkbookMacroEnabled (with or without macros in 2007-2013, xlsm)
#50 = xlExcel12 (Excel Binary Workbook in 2007-2013 with or without macros, xlsb)
#56 = xlExcel8 (97-2003 format in Excel 2007-2013, xls)
 
$xlOpenXMLWorkbook=[int]51
 
#define the sheet name
 
$sheet.Name = 'DiskSpace'
 
$Sheet.Activate() | Out-Null
 
#Create a Title for the first worksheet
$row = 1
$Column = 1
$Sheet.Cells.Item($row,$column)= 'Disk Space Information'
 
$range = $Sheet.Range("a1","h2")
$range.Merge() | Out-Null
 
# [Enum]::getvalues([Microsoft.Office.Interop.Excel.XLVAlign]) | select @{n="Name";e={"$_"}},value__
#To fetch the list of alignment values using the above the enumerator.
 
$range.VerticalAlignment = -4160
 
#Give it a nice Style so it stands out
$range.Style = 'Title'
 
#Increment row for next set of data
$row++;$row++
 
#Save the initial row so it can be used later to create a border
$initialRow = $row
 
#Create a header for Disk Space Report; set each cell to Bold and add a background color
$Sheet.Cells.Item($row,$column)= 'Computername'
$Sheet.Cells.Item($row,$column).Interior.ColorIndex =48
$Sheet.Cells.Item($row,$column).Font.Bold=$True
$Column++
$Sheet.Cells.Item($row,$column)= 'DeviceID'
#Making the font Bold and adjusting the background color (using the Interior.ColorIndex property of each cell)
$Sheet.Cells.Item($row,$column).Interior.ColorIndex =48
$Sheet.Cells.Item($row,$column).Font.Bold=$True
$Column++
$Sheet.Cells.Item($row,$column)= 'VolumeName'
$Sheet.Cells.Item($row,$column).Interior.ColorIndex =48
$Sheet.Cells.Item($row,$column).Font.Bold=$True
$Column++
$Sheet.Cells.Item($row,$column)= 'TotalSizeGB'
$Sheet.Cells.Item($row,$column).Interior.ColorIndex =48
$Sheet.Cells.Item($row,$column).Font.Bold=$True
$Column++
$Sheet.Cells.Item($row,$column)= 'UsedSpaceGB'
$Sheet.Cells.Item($row,$column).Interior.ColorIndex =48
$Sheet.Cells.Item($row,$column).Font.Bold=$True
$Column++
$Sheet.Cells.Item($row,$column)= 'FreeSpaceGB'
$Sheet.Cells.Item($row,$column).Interior.ColorIndex =48
$Sheet.Cells.Item($row,$column).Font.Bold=$True
$Column++
$Sheet.Cells.Item($row,$column)= '%Free'
$Sheet.Cells.Item($row,$column).Interior.ColorIndex =48
$Sheet.Cells.Item($row,$column).Font.Bold=$True
$Column++
$Sheet.Cells.Item($row,$column)= 'Status'
$Sheet.Cells.Item($row,$column).Interior.ColorIndex =48
$Sheet.Cells.Item($row,$column).Font.Bold=$True
$Column++
#Set up a header filter
$headerRange = $Sheet.Range("a3","o3")
$headerRange.AutoFilter() | Out-Null
 
#Increment Row and reset Column back to first column
$row++
$Column = 1
$critical=0
$warning=0
$low=0
$good=0
 
#Get the drives and filter out CD/DVD drives
 
Import-Csv C:\server.csv|%{
$cserver = $_.Server
$cdrivelt = $_.Drive
$clowth = $_.LowTh
$cwarnth = $_.WarnTh
$ccritth = $_.CritTh
 
$diskinfo= Get-WmiObject -Class Win32_LogicalDisk -ComputerName $cserver  -Filter "DeviceID='$cdrivelt'" -Credential $Credentials
ForEach ($disk in $diskinfo)
{
If ($disk.Size -gt 0) {$percentFree = [Math]::round((($disk.FreeSpace/$disk.Size) * 100))}
Else {$percentFree = 0}
#Process each disk in the collection and write to spreadsheet
    $Sheet.Cells.Item($row,1)= $disk.__Server
    $Sheet.Cells.Item($row,2)= $disk.DeviceID
    $Sheet.Cells.Item($row,3)= $disk.VolumeName
    $Sheet.Cells.Item($row,4)= [math]::Round(($disk.Size /1GB),2)
    $Sheet.Cells.Item($row,5)= [math]::Round((($disk.Size - $disk.FreeSpace)/1GB),2)
    $Sheet.Cells.Item($row,6)= [math]::Round(($disk.FreeSpace / 1GB),2)
    $Sheet.Cells.Item($row,7)= ("{0:P}" -f ($disk.FreeSpace / $disk.Size))
     
    #Determine if disk needs to be flagged for warning or critical alert
    If ($percentFree -le  $ccritth) {
        $Sheet.Cells.Item($row,8) = "Critical"
        $critical++
        #Check to see if space is near empty and use appropriate background colors
        $range = $Sheet.Range(("A{0}"  -f $row),("H{0}"  -f $row))
        $range.Select() | Out-Null  
        #Critical threshold       
        $range.Interior.ColorIndex = 3
    } ElseIf ($percentFree -gt $ccritth -AND $percentFree -le $cwarnth) {
        $Sheet.Cells.Item($row,8) = "Warning"
        $range = $Sheet.Range(("A{0}"  -f $row),("H{0}"  -f $row))
        $range.Select() | Out-Null      
        $warning++
        $range.Interior.ColorIndex = 6
        }
     ElseIf ($percentFree -ge $cwarnth -AND $percentFree -lt $clowth) {
        $Sheet.Cells.Item($row,8) = "Low"
        $range = $Sheet.Range(("A{0}"  -f $row),("H{0}"  -f $row))
        $range.Select() | Out-Null      
        $low++
        $range.Interior.ColorIndex = 12
         
    } Else {
        $Sheet.Cells.Item($row,8) = "Good"
        $good++
    }
 
     $row++
}
}
 
 
#Add a border to the data cells; the border index values used below come from the XlBordersIndex enumeration:
#[Enum]::getvalues([Microsoft.Office.Interop.Excel.XlBordersIndex]) | select @{n="Name";e={"$_"}},value__
$row--
$dataRange = $Sheet.Range(("A{0}"  -f $initialRow),("H{0}"  -f $row))
7..12 | ForEach {
    $dataRange.Borders.Item($_).LineStyle = 1
    $dataRange.Borders.Item($_).Weight = 2
}
 
#Auto fit everything so it looks better
 
$usedRange = $Sheet.UsedRange                                                         
$usedRange.EntireColumn.AutoFit() | Out-Null
 
 
$sheet = $excel.Worksheets.Item(1)
  
$row++;$row++
 
$beginChartRow = $Row
 
$Sheet.Cells.Item($row,$Column) = 'Critical'
$Column++
$Sheet.Cells.Item($row,$Column) = 'Warning'
$Column++
$Sheet.Cells.Item($row,$Column) = 'Low'
$Column++
$Sheet.Cells.Item($row,$Column) = 'Good'
$Column = 1
$row++
#Critical formula
$Sheet.Cells.Item($row,$Column)=$critical
$Column++
#Warning formula
$Sheet.Cells.Item($row,$Column)=$warning
$Column++
#low formula
$Sheet.Cells.Item($row,$Column)=$low
$Column++
#Good formula
$Sheet.Cells.Item($row,$Column)= $good
 
$endChartRow = $row
 
$chartRange = $Sheet.Range(("A{0}" -f $beginChartRow),("d{0}" -f $endChartRow))
 
 
$chart = $sheet.Shapes.AddChart().Chart
 
 
#Configure the chart
##Use a 3D Pie Chart
$chart.ChartType = 70
$chart.Elevation = 40
#Give it some color
$sheet.Shapes.Item("Chart 1").Fill.ForeColor.TintAndShade = .34
$sheet.Shapes.Item("Chart 1").Fill.ForeColor.ObjectThemeColor = 5
$sheet.Shapes.Item("Chart 1").Fill.BackColor.TintAndShade = .765
$sheet.Shapes.Item("Chart 1").Fill.ForeColor.ObjectThemeColor = 5
 
$sheet.Shapes.Item("Chart 1").Fill.TwoColorGradient(1,1)
 
#Set the location of the chart
$sheet.Shapes.Item("Chart 1").Placement = 3
$sheet.Shapes.Item("Chart 1").Top = 30
$sheet.Shapes.Item("Chart 1").Left = 600
 
$chart.SetSourceData($chartRange)
$chart.HasTitle = $True
 
$chart.ApplyLayout(6,69)
$chart.ChartTitle.Text = "Disk Space Report"
$chart.ChartStyle = 26
$chart.PlotVisibleOnly = $False
$chart.SeriesCollection(1).DataLabels().ShowValue = $True
$chart.SeriesCollection(1).DataLabels().Separator = ("{0}" -f [char]10)
 
$chart.SeriesCollection(1).DataLabels().Position = 2
#Critical
$chart.SeriesCollection(1).Points(1).Format.Fill.ForeColor.RGB = 255
#Warning
$chart.SeriesCollection(1).Points(2).Format.Fill.ForeColor.RGB = 65535
#Low
$chart.SeriesCollection(1).Points(3).Format.Fill.ForeColor.RGB = 265535
#Good
$chart.SeriesCollection(1).Points(4).Format.Fill.ForeColor.RGB = 5287936
 
#Hide the data
#$chartRange.EntireRow.Hidden = $True
 
$sheet.Name = 'DiskInformation'
 
$filename = "$DirectoryToSaveTo$filename.xlsx"
if (test-path $filename ) { rm $filename } #delete the file if it already exists
$Sheet.UsedRange.EntireColumn.AutoFit()
$Excel.SaveAs($filename, $xlOpenXMLWorkbook) #save as an XML Workbook (xslx)
$Excel.Saved = $True
$Excel.Close()

↑ Return to Top


Output

↑ Return to Top


Conclusion

  1. CSV input – Easy to maintain and manage
  2. Customization can be done at the individual drive level, as the threshold value may vary from server to server and from drive to drive depending on the size of each drive
  3. Graphical representation of the Disk Space Usage Utilization report
  4. A simple way to keep a cap on every drive by setting threshold values
  5. Proactive monitoring and alerting the respective teams may avoid unforeseen disk space issues

Note: I would recommend reading the blog references listed under the References section for more in-depth information about charting with PoSH.

↑ Return to Top


References

Technet

Blogs

Posted in PowerShell, Uncategorized | Tagged , , , | 1 Comment

PowerShell: Extending Modules – System Parameter measurement

The Microsoft Technet Guru Award  (August 2016) winning article  – Silver

PowerShell: Extending Modules – System Parameter measurement


Abstract

The objective of this post is to illustrate the various methods of creating PoSH modules. To explain the different methods, we consider PoSH GUI functions that measure system metrics such as disk, memory, and top resource-consuming processes. It need not be a function; any valid PowerShell script is an eligible object for creating a module. The three functions are encapsulated in a PowerShell module.

When Do I Create A PowerShell Module?

You can easily decide whether to create a module by answering the following questions while writing a script

  • Will the code I’m writing need to be used more than once?
  • Does this code essentially manage a single object or entity?
  • As I write this code, do I find that I’m breaking it apart into functions because it’s getting too complex to be in a single script?
  • Do I need to share the code with others?

If you answered yes to at least three of these four questions, it’s a good bet you should be writing a module instead of a PS1 script.

Modules and Snap-Ins

In PowerShell there are 2 main ways to extend the shell, these are:

Modules – A package that contains Windows PowerShell commands in the form of functions, cmdlets, and workflows; in addition, it may contain variables, aliases, and providers. Modules can be written in PowerShell and/or compiled as DLLs.

For example,

Import-Module SQLPS

You don’t need to register a module; you just import it. On the other hand, modules can additionally load functions, variables, and aliases into your session.

 

 List loaded modules: Get-Module
 List installed modules: Get-Module -ListAvailable
 Show commands in a module: Get-Command -Module "ModuleName"
 Load a specific module: Import-Module -Name "ModuleName"

Snap-Ins – Cmdlets compiled into a DLL written in a .NET language; they need to be registered before they can be used. Snap-ins can add cmdlets and providers to your session.

For example,

Add-PSSnapin SqlServerCmdletSnapin100

 List loaded snap-ins: Get-PSSnapin
 List installed snap-ins: Get-PSSnapin -Registered
 Show commands in a snap-in: Get-Command -Module "SnapinName"
 Load a specific snap-in: Add-PSSnapin "SnapinName"

PoSH Modules

At its simplest, a module is a collection of PowerShell script contained in a file with a .psm1 extension. A module is a chunk of code that you can import into PowerShell; once imported, its cmdlets, functions, and providers can be accessed.
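For example, a minimal script module could look like the sketch below (MyTools.psm1 and Get-Greeting are hypothetical names used only for illustration):

# MyTools.psm1 - a minimal script module
function Get-Greeting {
    param([string]$Name = 'World')
    "Hello, $Name"
}
# Only exported members are visible to callers of the module
Export-ModuleMember -Function Get-Greeting

Once saved, the module can be imported and its function called:

PS:\> Import-Module .\MyTools.psm1
PS:\> Get-Greeting -Name 'PoSH'
Hello, PoSH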

Types of PowerShell modules:

  • Script Modules
  • Binary Modules
  • Manifest Modules
  • Dynamic Modules

Script Modules

PSM1 files typically contain mostly functions, but can contain any valid PowerShell code. In this case, simply take any valid PowerShell script (*.ps1), rename it to a *.psm1 file, and place it in a folder. The paths where you can install your module are listed in the $ENV:PSModulePath global variable. You can save the file in a default module path that can be identified using the environment variable.

PS:\>$ENV:PSModulePath

For example, a common path to save a module on a system would be

  • %SystemDrive%\Users\<user>\Documents\WindowsPowerShell\Modules\<modName> 
  • %windir%\System32\WindowsPowerShell\v1.0\Modules
  • %UserProfile%\Documents\WindowsPowerShell\Modules     (preferred).

Be sure to create a folder for your module. If you did not save your module to one of these paths, you would have to pass in the location of your module in the call to Import-Module. Starting with PowerShell 3.0, if you have placed your module on one of the PowerShell module paths, you do not need to explicitly import it: simply having a user call your function will automatically load it.
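For example, on PowerShell 3.0 and above, assuming the PowerShellGUI module described below is already under one of the module paths:

PS:\> Get-DiskSpaceGUI          # auto-loads the PowerShellGUI module on the first call
PS:\> Get-Module PowerShellGUI  # confirms the module is now loaded in the session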

Pre-requisites

  • Powershell 2.0 and Above
  • Microsoft Chart Controls for Microsoft .NET Framework 3.5 Setup
  • .Net framework 3.0 or 3.5
  • MS charter for GUI modules which displays Graph

Download

Installation

Installation of a module is very simple. Download and save the content of the file as PowerShellGUI.psm1 to any of the above-mentioned paths, or save it to a specific folder. This PowerShellGUI.psm1 module consists of three functions:

  • Get-DiskSpaceGUI
  • Get-MemoryGUI
  • Get-ProcessGUI

Import-Module

There are many ways to load a PowerShell module. The preferred way is to create a folder and save the module in that folder. A module can be imported and loaded in the following ways:

  • Using Relative Path
  • Using Absolute Path

Using Relative Path

Follow the below steps to load the module from a system path

  • Download the module code
  • Create a folder PowerShellGUI (the same name as the module) at any of the below locations: %windir%\System32\WindowsPowerShell\v1.0\Modules OR %UserProfile%\Documents\WindowsPowerShell\Modules (preferred)
  • Save the PowerShellGUI.psm1 to %UserProfile%\Documents\WindowsPowerShell\Modules\PowerShellGUI\ (preferred)
  • Once done with steps 1, 2, and 3, open a PowerShell window and run the below commands.
PS:\>Import-Module -Name PowerShellGUI

This will import the module and functions into PowerShell

PS:\>Get-Module  -ListAvailable

This will list all the available modules in PowerShell

PS:\>Get-Command -Module PowerShellGUI

Using Absolute Path

Save the module to C:\PowerShellGUI.psm1

If we want to add another path for PowerShell to look at we just add that path to the current environment variable:

$env:psmodulepath = $env:psmodulepath + ";c:\PowerShellGUI;"

PS:\> Import-Module -Name C:\PowerShellGUI.psm1
PS:\>Get-Module PowerShellGUI

PS:\>Get-Command -Module PowershellGUI

Output

Binary Modules

Compiled DLL files are typically not created by IT pros; these are usually left up to developers. A binary module is a .NET Framework assembly (.dll) that contains compiled code. Cmdlet developers can use this type of module to create modules that contain cmdlets, providers, and more. (Existing snap-ins can also be used as binary modules.)

Compiling binary cmdlets

$code = @'
using System;
using System.Collections.Generic;
using System.Management.Automation;
namespace CustomCmdlet
{
    [Cmdlet("Get", "Magic", SupportsTransactions = false)]
    public class test : PSCmdlet
    {
        private int _Age;
        [Alias(new string[]
        {
            "HowOld", "YourAge"
        }), Parameter(Position = 0,ValueFromPipeline = true)]
        
        public int Age
        {
            get { return _Age; }
            set { _Age = value; }
        }
        private string _Name;
        [Parameter(Position = 1)]
        public string Name
        {
            get { return _Name; }
            set { _Name = value; }
        }
        protected override void BeginProcessing()
        {
            this.WriteObject("Good morning...");
            base.BeginProcessing();
        }
        protected override void ProcessRecord()
        {
            this.WriteObject("Your name is " + Name + " and your age is " + Age);
            base.ProcessRecord();
        }
        protected override void EndProcessing()
        {
            this.WriteObject("That's it for now.");
            base.EndProcessing();
        }
    }
}
'@
# compile C# code to DLL
# use a timestamp to create unique file names
# while testing, when a DLL was imported before, it is in use until PowerShell closes
# so to do repeated tests, use different DLL file names
$datetime = Get-Date -Format yyyyMMddHHmmssffff
$DLLPath = "$env:temp\myCmdlet($datetime).dll"
Add-Type -TypeDefinition $code -OutputAssembly $DLLPath
# import a module
Import-Module -Name $DLLPath -Verbose
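Once imported, the compiled cmdlet can be called like any other; the expected output follows from the WriteObject calls in the C# code above:

PS:\> Get-Magic -Age 30 -Name 'Prashanth'
Good morning...
Your name is Prashanth and your age is 30
That's it for now.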

Manifest Modules

Script modules that contain a manifest.

How to load Multiple Modules

You can load multiple modules (*.psm1) or *.ps1 files using a single .psd1 file. You can edit the .psd1 file and add the modules or files under the NestedModules key. The New-ModuleManifest cmdlet is used to create a module manifest. As shown below, in PowerShell version 3, the only field that you’re required to provide a value for is the path.

  • Launch the Powershell ISE
  • Use the New-ModuleManifest command
  • Follow the instructions here – How to Write a Module Manifest. When asked for nested modules, key in the module as Modulepath\Modulename.psm1
  • Finally, once the .psd1 file is created, load / import it using Import-Module <<module-name>>
New-ModuleManifest -Path .\PowerShellGUI\PowershellGUI.psd1 -Author 'Prashanth Jayaram' -CompanyName 'CTS' -Copyright '(c)2016 Prashanth Jayaram' -ModuleVersion 1.0 -PowerShellVersion 2.0 -NestedModules '.\PowerShellGUI\PowerShellGUI.PSM1','.\PowerShellGUI\Get-DiskUsage.psm1'
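Once the manifest is created, it can be validated before it is imported. Test-ModuleManifest verifies that the manifest is well formed and that the files it references exist:

PS:\> Test-ModuleManifest -Path .\PowerShellGUI\PowershellGUI.psd1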

Dynamic Modules

Modules that are never written to disk and are only available in memory. Dynamic modules are basically an extension of the script block concept. For example, let’s create a script block with two functions: one to retrieve disk space and another to get the uptime of a given machine.

$scriptblock={
Function Get-DiskSpace([string[]]$server)
{
     Get-WmiObject -Class win32_volume -cn $server |Select-Object @{LABEL='Computer';EXPRESSION={$server}},driveletter, label,@{LABEL='GBfreespace';EXPRESSION={"{0:N2}" -f ($_.freespace/1GB)}}
} #end function Get-DiskSpace
Function Get-Uptime([string[]]$server) {
   $os = Get-WmiObject win32_operatingsystem -ComputerName $server
   $uptime = (Get-Date) - ($os.ConvertToDateTime($os.lastbootuptime))
   $Display = "Uptime: " + $Uptime.Days + " days, " + $Uptime.Hours + " hours, " + $Uptime.Minutes + " minutes"
   return $Display
}
}

Create a module using New-Module cmdlet

PS P:\> $server=New-Module -ScriptBlock $scriptblock -AsCustomObject

The New-Module cmdlet creates a module in memory using the script block. The -AsCustomObject parameter tells PowerShell to return the module as a custom object whose script methods map to the module’s functions.

To retrieve the members of the new module:

$server|Get-Member

Call the module

PS P:\> $SERVER.'Get-DiskSpace'("HQDBSP18")

How to import custom PowerShell module into the remote session?

Enable remoting on the source system by running Enable-PSRemoting on it. Create and load the module on SERVER1, then call the module from SERVER2.

PS C:\> $Session = New-PSSession -ComputerName SERVER1
PS C:\> Import-Module -PSSession $Session -Name Get-DiskUsage
PS C:\> Get-DiskUsage

The above opens a PowerShell session on the server SERVER1 and imports the Get-DiskUsage module from the remote computer. The command runs on the remote computer while appearing to execute locally.
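Implicit remoting can also be persisted to disk. As a sketch under the same assumptions (the Get-DiskUsage module exists on SERVER1), Export-PSSession writes the remote commands into a local module that re-creates the session on demand:

PS C:\> $Session = New-PSSession -ComputerName SERVER1
PS C:\> Export-PSSession -Session $Session -Module Get-DiskUsage -OutputModule RemoteDiskUsage
PS C:\> Import-Module RemoteDiskUsage   # later sessions re-connect automatically when a command is used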

Profile

An imported module’s scope is limited to the session. If you want it loaded automatically, you need to create a profile. Rather than typing "Import-Module -Name PowerShellGUI" every time you start Windows PowerShell, you can create a Windows PowerShell profile and add this cmdlet to it via the $profile variable. After you create the profile, the PowerShell module is loaded automatically each time you start Windows PowerShell.

Follow the below steps to create or edit the $profile.

PS:\> test-path $profile
True
PS:\> notepad $profile
PS:\> $profile
C:\Users\ccov648\Documents\WindowsPowerShell\Microsoft.PowerShellISE_profile.ps1
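If Test-Path returns False, the profile can be created first and the import line appended; a minimal sketch:

PS:\> if (!(Test-Path $profile)) { New-Item -ItemType File -Path $profile -Force | Out-Null }
PS:\> Add-Content -Path $profile -Value 'Import-Module -Name PowerShellGUI'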

 

Conclusion

  • Discussed various types of modules, which helps in understanding build and package creation in PoSH
  • The seamless approach to using remote modules helps in easy management and administration
  • Day-to-day activities can be packaged into modules
  • Automatic loading of modules is made easier with the help of profile creation
  • Discussed the various methods and the usage of each module type, which helps in deciding when and how to create a module


Posted in PowerShell | Tagged | 2 Comments

SQL Server: Monitoring Disk Space with FSUTIL command line utility

Introduction

This article talks about one SQL method to fetch disk usage details: using T-SQL in conjunction with the FSUTIL utility, invoked through the SQL Windows shell interface xp_cmdshell. FSUTIL FSINFO lists all drives, queries the drive type, queries volume information, queries NTFS-specific volume information, or queries file system statistics. Other SQL methods to get disk space details are OLE and WMI.

Pre-requisites

  •   The FSUTIL utility requires that you have administrative privileges
  •   sysadmin rights on the sql server
  •   Enable xp_cmdshell

Data flow

The below diagram depicts the flow of the code

Enable xp_cmdshell 

The xp_cmdshell option is a SQL Server configuration option that enables system administrators to control whether the extended stored procedure can be executed on a system. This procedure allows you to issue operating system commands directly to the Windows command shell via T-SQL code.

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'xp_cmdShell', 1;
GO
RECONFIGURE;
GO

The below SQL lists the drives attached to the file system.

EXEC [master].[dbo].[xp_cmdshell] N'FSUTIL FSINFO DRIVES'

The FSUTIL VOLUME DISKFREE command lists the drive usage statistics of the file system. The below example gathers the statistics of the C drive.

declare @driveName varchar(100)
declare @query varchar(1000)
declare @fsinfo varchar(1000)
set @driveName = 'C'
set @query ='FSUTIL VOLUME DISKFREE '+@driveName+':\'
exec xp_cmdshell @query

The below screen shot shows the output of FSUTIL DRIVES and VOLUME.

In the code, the WHERE clause is used to get the non-null value from the FSUTIL FSINFO DRIVES output, which is then manipulated using string functions to get only the drive details: A\C\D\E\F\G\

  /* Inserting the non-null values to temp table */
   SELECT @workstring = [output]
      FROM @xp_cmdshell_output
      WHERE [output] LIKE 'Drives:%'
   DELETE FROM @xp_cmdshell_output
 /* Prepare string for XML parsing*/
 -----------------------------------------------
 -- Replace string "Drives",":" and " " by ''
 ---------------------------------------------
   SELECT @workstring = REPLACE(REPLACE(REPLACE(@workstring, 'Drives', ''),':',''),' ','')
--PRINT @workstring

Output : A\C\D\E\F\G\

The code uses XML parsing to split the string. The splitting of delimited strings using XML is shown below. The parsed values are stored in a temporary table for further processing and will be processed row by row to get the usage statistics of each drive.

SELECT @XML = CAST(
                             ('<X>'
                              + REPLACE(@workstring
                                       ,'\'
                                       ,'</X><X>')
                              + '</X>')
                              AS XML)
INSERT INTO @drives ([Drive])
   SELECT LTRIM(N.value('.', 'VARCHAR(4000)'))
      FROM @XML.nodes('X') AS T(N)
      WHERE ASCII(LTRIM(N.value('.', 'VARCHAR(4000)'))) != 0
 
select * from @drives
 

The next part of the script loops through each drive and manipulates the data using string functions. Each drive letter is fed to the FSUTIL VOLUME DISKFREE command, and its output is stored in a temp table variable. The output is then filtered with a WHERE clause and manipulated with string functions to fetch the numeric value from the string output, which updates the table variable’s Size column using a byte-to-GB conversion formula. The same steps are repeated to update the Free column using a similar conversion formula.

SELECT @recid = 1
    WHILE @recid <= (SELECT MAX([RecID]) FROM @drives)
    BEGIN
       SELECT @workstring = ''
             ,@vexec_str = 'EXEC [master].[dbo].[xp_cmdshell] '
                         + QUOTENAME('FSUTIL VOLUME DISKFREE '
                              + [Drive]
                              + ':'
                           ,CHAR(39))
          FROM @drives
          WHERE [RecID] = @recid
       INSERT INTO @xp_cmdshell_output ([output])
          EXEC (@vexec_str)
       SELECT @workstring = [output]
          FROM @xp_cmdshell_output
          WHERE [output] LIKE '%Total # of bytes%'
       IF @workstring IS NOT NULL AND LEN(@workstring) > 0
       BEGIN
          SELECT @workstring = LTRIM(
                                     SUBSTRING(@workstring
                                              ,CHARINDEX(':'
                                                        ,@workstring
                                                        ) + 1
                                              ,LEN(@workstring)
                                              )
                                     )
          SELECT @workstring = LEFT(@workstring, LEN(@workstring))
          /* update the free field and convert its value to GB */
          UPDATE @drives
             SET [Size] = (CONVERT(numeric, @workstring))/1024/1024/1024.00
             WHERE [RecID] = @recid
       END
       ELSE
       DELETE
          FROM @drives
          WHERE [RecID] = @recid

The last part of the code fetches the data from the table variable

SELECT @@ServerName server,Drive,Size,Free, cast(Free/Size * 100.00  as decimal(5,2)) '%Free' FROM @drives

Download

The code is uploaded in the gallery TSQL_DiskSpace_FSUTIL

SQL Code

The complete code is given below. The code was broken into pieces and explained above with a few screenshots.

/* Variable declaration*/
DECLARE @recid         INT
       ,@workstring    VARCHAR(8000)
       ,@XML XML
       ,@vexec_str     VARCHAR(8000)
-- Create table variable to hold drive size info
DECLARE @drives TABLE (
   [RecID]             TINYINT IDENTITY(1,1)             -- Record ID
  ,[Drive]             VARCHAR(10)                       -- Drive letter
  ,[Size]              NUMERIC NULL                       -- Drive size
  ,[Free]              NUMERIC NULL
   )
-- Create table variable for xp_cmdshell output
DECLARE @xp_cmdshell_output TABLE (
   [output]            VARCHAR(8000) NULL                -- Raw text returned from xp_cmdshell execution
   )
INSERT INTO @xp_cmdshell_output ([output])
   EXEC [master].[dbo].[xp_cmdshell] N'FSUTIL FSINFO DRIVES'
   
/* Error handling*/
-----------------------------------------------------------------
--Check for sql server privilge to execute the FSUTIL utility to gather disk status
-----------------------------------------------------------------
IF (SELECT COUNT(1)
       FROM @xp_cmdshell_output
       WHERE [output] = 'The FSUTIL utility requires that you have administrative privileges.') > 0
  RAISERROR ('SQL Server Service account not an admin on this computer.', 11, 1);
ELSE
BEGIN
   /* Inserting the non-null values to temp table */
   SELECT @workstring = [output]
      FROM @xp_cmdshell_output
      WHERE [output] LIKE 'Drives:%'
   DELETE FROM @xp_cmdshell_output
 /* Prepare string for XML parsing*/
 -----------------------------------------------
 -- Replace string "Drives",":" and " " by ''
 ---------------------------------------------
   SELECT @workstring = REPLACE(REPLACE(REPLACE(@workstring, 'Drives', ''),':',''),' ','')
--PRINT @workstring
END
/* XML Parsing - Spilting the delimited string using XML*/
-----------------------------------------------------
-- the string is parsed for the delimiter '\'
-----------------------------------------------------
SELECT @XML = CAST(
                             ('<X>'
                              + REPLACE(@workstring
                                       ,'\'
                                       ,'</X><X>')
                              + '</X>')
                              AS XML)
                              
/* Store the parsed value into table variable */
                             
INSERT INTO @drives ([Drive])
   SELECT LTRIM(N.value('.', 'VARCHAR(4000)'))
      FROM @XML.nodes('X') AS T(N)
      WHERE ASCII(LTRIM(N.value('.', 'VARCHAR(4000)'))) != 0
----
--Display the results
---
select * from @drives
      -- Get size for each drive
      SELECT @recid = 1
      WHILE @recid <= (SELECT MAX([RecID]) FROM @drives)
      BEGIN
         SELECT @workstring = ''
               ,@vexec_str = 'EXEC [master].[dbo].[xp_cmdshell] '
                           + QUOTENAME('FSUTIL VOLUME DISKFREE '
                                + [Drive]
                                + ':'
                             ,CHAR(39))
            FROM @drives
            WHERE [RecID] = @recid
         INSERT INTO @xp_cmdshell_output ([output])
            EXEC (@vexec_str)
         SELECT @workstring = [output]
            FROM @xp_cmdshell_output
            WHERE [output] LIKE '%Total # of bytes%'
         IF @workstring IS NOT NULL AND LEN(@workstring) > 0
         BEGIN
            SELECT @workstring = LTRIM(
                                       SUBSTRING(@workstring
                                                ,CHARINDEX(':'
                                                          ,@workstring
                                                          ) + 1
                                                ,LEN(@workstring)
                                                )
                                       )
            SELECT @workstring = LEFT(@workstring, LEN(@workstring))
            /* update the free field and convert its value to GB */
            UPDATE @drives 
               SET [Size] = (CONVERT(numeric, @workstring))/1024/1024/1024.00
               WHERE [RecID] = @recid
         END
         ELSE
         DELETE
            FROM @drives
            WHERE [RecID] = @recid
             SELECT @workstring = [output]
            FROM @xp_cmdshell_output
            WHERE [output] LIKE '%Total # of free bytes%'
         IF @workstring IS NOT NULL AND LEN(@workstring) > 0
         BEGIN
            SELECT @workstring = LTRIM(
                                       SUBSTRING(@workstring
                                                ,CHARINDEX(':'
                                                          ,@workstring
                                                          ) + 1
                                                ,LEN(@workstring)
                                                )
                                       )
            SELECT @workstring = LEFT(@workstring, LEN(@workstring))
            /* update the free field and convert its value to GB */
            UPDATE @drives
               SET [free] = (convert(numeric, @workstring))/1024/1024/1024.00
               WHERE [RecID] = @recid
         END
         ELSE
         DELETE
            FROM @drives
            WHERE [RecID] = @recid
         DELETE FROM @xp_cmdshell_output
         SELECT @recid = @recid + 1
      END
      
SELECT @@ServerName server,Drive,Size,Free, cast(Free/Size * 100.00  as decimal(5,2)) '%Free' FROM @drives

Output

Conclusion

There are many ways to gather disk space details. It’s up to the individual to gather the metrics using the available tools and utilities. The above steps simply describe another way of capturing these details.

References

FSUTIL

See Also

The below articles describe other approaches to disk space monitoring:

SQL – Disk Space Monitoring using OLE and WMI 

http://social.technet.microsoft.com/wiki/contents/articles/35609.sql-server-operations-monitoring-disk-space-with-wmi-and-ole-automation-objects.aspx

PoSH – DiskSpaceGUITool 

http://social.technet.microsoft.com/wiki/contents/articles/34900.disk-space-gui-tool-multi-server-s-search-grid-and-email-output-powershell.aspx

Posted in SQL, SQLCMD, Uncategorized | Tagged , , | Leave a comment