Migrate an SQL database from Windows to Linux using backup-and-restore and SMB Protocol

In an attempt to decouple systems, we may modularize them. In most cases, we need to move databases from one server to another; sometimes, we even need to move them from Windows to Linux.

In the case of moving a database to Linux, SQL Server’s Backup and Restore feature is the recommended mode of migration; we would migrate from SQL Server on Windows to SQL Server vNext CTP 1.4 on Linux.

This topic provides step-by-step instructions for this procedure. In this tutorial, you will:

  • Create a database, SQLShack
  • Back up SQLShack on a Windows machine
  • Transfer the backup to your Linux machine using a file-sharing protocol. Use Samba to set up an SMB network share between the Windows and Linux machines.
  • Restore the database using sqlcmd or the SSMS GUI

Demonstration

Let’s create a database, SQLShack. I’m going to switch to the SQLShack database and create a sample table to hold the dummy data needed to demonstrate the restoration process.
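A minimal sketch of that setup (object names and values are illustrative, not from the original demo):

CREATE DATABASE SQLShack;
GO
USE SQLShack;
GO
-- Sample table with a few dummy rows to verify after the restore
CREATE TABLE dbo.DummyData
(
    ID   INT IDENTITY(1, 1) PRIMARY KEY,
    Name VARCHAR(50)
);
INSERT INTO dbo.DummyData (Name) VALUES ('Alpha'), ('Beta'), ('Gamma');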

Database Backup on Windows Machine

Back up the database, SQLShack, on the Windows machine.

The following simple command creates a backup of SQLShack at a given file location.
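A hedged example (the target path is illustrative):

BACKUP DATABASE SQLShack
TO DISK = N'C:\Backup\SQLShack.bak'
WITH INIT, STATS = 10;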

Transfer backup file to the Linux machine

You can use Samba to create a shared folder on the Linux system and transfer files from the Windows computer to the Linux (CentOS) computer, or vice versa. This shares the files on the network through the SMB protocol, and also allows Windows hosts to access the files remotely through File Explorer.

Let us go through a step-by-step installation and configuration process of Samba so that we can easily share the files between computers running different operating systems.
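Once the share is in place and the backup file has been copied over, the restore itself is plain T-SQL run through sqlcmd on the Linux machine. A sketch, assuming the default Linux data directory (/var/opt/mssql/data) and the default logical file names; verify yours with RESTORE FILELISTONLY:

RESTORE DATABASE SQLShack
FROM DISK = N'/var/opt/mssql/backup/SQLShack.bak'
WITH MOVE 'SQLShack'     TO '/var/opt/mssql/data/SQLShack.mdf',
     MOVE 'SQLShack_log' TO '/var/opt/mssql/data/SQLShack_log.ldf';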

Further reading…

https://www.sqlshack.com/migrate-an-sql-database-from-windows-to-linux-using-backup-and-restore-and-smb-protocol/

Happy Learning!!!

 


Installation of SQL Server vNext CTP on Linux Distribution CentOS 7

December 2016 marked a paradigm shift: Microsoft made SQL Server available for Linux, the first time in history that Microsoft designed SQL Server to run on a non-Windows operating system. SQL Server vNext was released for public preview so the user community could test and deploy SQL Server on a Linux operating system; it was a major step in diversifying the database technology onto a non-Windows platform.

Introduction

The close integration of various data sources under one umbrella, Hadoop/Big Data, has enabled DBAs to handle data and its structures on the available platforms. It’s a great opportunity for SQL community members to try MS SQL on a non-Windows platform.

SQL Server on Linux can potentially provide customers with even more flexibility in their data solutions. SQL Server’s proven enterprise experience and capabilities are a valuable asset to Enterprise Linux customers around the world. It expands the database market to meet changing needs and keep pace with trends in technology.

This article details the instructions for installing SQL Server vNext on Linux. It also includes several basic Linux commands, which help in understanding the installation and configuration process.

This article is a comprehensive guide for the installation and configuration of MSSQL; it includes:

  • Introduction and understanding of how SQL Server runs on Linux
  • Challenges and complications in building and managing SQL Server on Linux
  • Curating the MSSQL Server installation using Linux commands
  • Detailed installation procedure

Prerequisites

  • Basic understanding of Linux
  • Access to CentOS/RHEL Software Libraries
  • At least 4 GB RAM
  • At least 8 GB of hard disk space

How Does SQL Server Run on Linux?

Microsoft has provided great insight into how they were able to port SQL Server to run natively on Linux by introducing what is known as the Platform Abstraction Layer (“PAL”). The Microsoft Research team set out to bring the full functionality, performance, and scale of the SQL Server RDBMS to Linux. Microsoft Research’s Drawbridge acts as an abstraction layer between the OS and the application layers; it explored a new approach to process virtualization and isolation, a form of virtualization aimed specifically at application sandboxing. Because it would have been very hard to build a capable SQL Server outside of Windows within the stipulated time, the MSR (Microsoft Research) team decided to integrate SQL Server’s existing platform layer, SOS (SQL Server Operating System), with Drawbridge to create what is called SQLPAL. The SOS provides robust memory management, thread scheduling, and IO services. Creating SQLPAL enabled the fully functional SQL Server to run on Linux. Please read the reference section for more information.

Further reading…

https://www.sqlshack.com/installation-of-sql-server-vnext-ctp-on-linux-distribution-centos-7/

Thanks for reading my space. Happy Learning!!

 


Understanding Database Backup Encryption in SQL Server

Terabytes of data, millions of rows; the entire business depends on this — a database administrator’s responsibility is as massive as it sounds. Apart from keeping the data secure, a database administrator also has to keep the system up and running, and restore the data as needed, in case of a failure, with minimal impact to the business.

While this is less of a challenge in an all-on-premises environment, database backups stored off-site or on the cloud require some more precaution. The data literally resides on someone else’s infrastructure. Data security has to be thought of from a different perspective now. If someone gets unauthorized access to the site, they could simply restore a copy of your database from a backup, onto their own hardware. What good, then, is it to lock the doors of your own equipment, when the soul has already departed?

Security best-practices are in place in order to secure the metaphorical soul. They need to be implemented not just in the production environment, but within our backup solution as well.

Earlier versions of SQL Server had a limitation here; we had to use third-party solutions to encrypt and compress the backup files. With SQL Server 2014, Microsoft introduced database backup encryption within the native backup capability; SQL Server can now encrypt the data while creating a backup, using various encryption algorithms!
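A hedged sketch of what this looks like (database, certificate, and path names are illustrative):

-- One-time setup: a master key and a certificate to protect the backup
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>';
CREATE CERTIFICATE BackupCert WITH SUBJECT = 'Backup Encryption Certificate';

-- Create an encrypted backup of the database
BACKUP DATABASE MyDB
TO DISK = N'C:\Backup\MyDB_Encrypted.bak'
WITH ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);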

Continue reading…

www.sqlshack.com/understanding-database-backup-encryption-sql-server/

Happy Learning!!

 


SQL Server 2016 enhancements – Truncate Table and Table Partitioning

The idea behind this article is to discuss the importance and implications of SQL partitioning and to understand the truncate command partitioning enhancements in SQL 2016.

One of the biggest challenges for a DBA is to identify the right candidate for table partitioning, as it requires expertise in design and implementation.

The following are the truncate options available in SQL 2016 (a hedged sketch follows the list):

  • Truncate individual partitions
  • Truncate multiple individual partitions
  • Truncate a Range of partitions
  • Truncate a Range with multiple individual partitions
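As referenced above, a minimal sketch of the new syntax (table name and partition numbers are illustrative):

-- Truncate individual partitions and a range in one statement
TRUNCATE TABLE dbo.SalesHistory
WITH (PARTITIONS (1, 3, 5 TO 8));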

Article Highlights

  • Define the importance and the implication of SQL table partitioning
  • Identify the right candidate for table partitioning
  • Provide inline comparison of features available in the different editions of SQL 2016
  • Discuss the truncate partition enhancements in SQL 2016
  • Demonstrate the truncate table partition use cases

SQL Partition

Let’s understand the objective of SQL partitioning, why we need partitioning and the factors that are vital to deciding on a Table Partitioning Strategy.

Partitions are a logical mapping of the physical data. A well-designed partition gives us an option to scale out the data; it optimizes performance and simplifies management by splitting a table into multiple separate partitions. However, not all tables are good candidates for partitioning. If the answer is ‘yes’ to all or most of the following questions, table partitioning may be a viable database design strategy; if the answer is ‘no’ to most of them, table partitioning may not be the right solution for that table.

Continue reading…

https://www.sqlshack.com/sql-server-2016-enhancements-truncate-table-table-partitioning/

 

Happy Learning!!!

 


SQL Server Database Recovery Process Internals – database STARTUP Command

A database recovery process is an essential requirement for database systems. It can be a tedious job, and the process of recovery varies across scenarios. The desire to improve recovery results has produced various procedures, but they are understood by few and prone to errors. In this article, I’ll illustrate the impact of stopping a database instance in the middle of a large running transaction, and discuss several techniques and tools that are available for a faster, successful recovery.

Introduction

Let’s discuss a scenario where a SAN disk issue drives latency very high, and many jobs on the data warehouse system run indefinitely without success.

The owner of the server felt this might be a server issue and decided to restart the server, and from that point he invited trouble. Everything came back up, but he was unable to query the database because of a block the system process placed on user sessions during the recovery process. This led to the involvement of a senior DBA to troubleshoot the problem.
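For context, a hedged sketch of one way to watch such a recovery from the outside, using standard DMVs:

-- Recovery runs as a system session with the command 'DB STARTUP';
-- percent_complete shows its progress
SELECT session_id, command, percent_complete,
       estimated_completion_time / 60000.0 AS est_minutes_remaining
FROM sys.dm_exec_requests
WHERE command IN ('DB STARTUP', 'KILLED/ROLLBACK');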

Further reading…

https://www.sqlshack.com/sql-server-database-recovery-process-internals-database-startup-command/

 

Happy Learning!!

 


Custom keyboard shortcuts in SQL Server Management Studio (SSMS)

People love taking shortcuts because it’s the easiest way to do things faster; computers are no exception. Shortcuts, particularly keyboard shortcuts, can save you hours of time once applied properly. Learning keyboard shortcuts saves you a lot of time; you’ll definitely notice a boost in productivity because you’re not unnecessarily reaching for the mouse all the time.

I’ve rounded up some of the most common DBA-customized keyboard shortcuts in SSMS. The same set of rules can be applied to define the ones you feel are most important in your day-to-day activities. Stay on board and you’ll be mastering these tricks in no time!
https://www.sqlshack.com/custom-keyboard-shortcuts-in-sql-server-management-studio-ssms/

Happy Learning!!


Illustration of an example availability monitoring service using PowerShell and SQL

This article discusses a simple solution for monitoring SQL service availability across multiple servers and reporting on it. To build it, I’ll use SQL Server with a simple PowerShell script and cmdlets that generate JSON data and display the results in HTML.

Article highlights

  • Defines the objective of JSON
  • Explains the PoSH JSON cmdlets’ internals and other details of their usage (a minimal sketch follows this list)
  • Proactive Monitoring of Service(s) over multiple servers – Covers fundamentals of how to Monitor SQL service(s). This sample code can be implemented to monitor any services on a Windows environment
  • Automated Process – Easy to implement If you don’t have a monitoring tool
  • Easier customization of services list and input servers list
  • Seamless Integration of PowerShell and SQL using SQL 2016 JSON constructs
  • HTML Report – shows how to do effective reporting using XML parsing and Database Mail
  • Summary of data
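As flagged above, a minimal sketch of the JSON cmdlets in action (the service name is a placeholder and assumed to exist locally):

# Serialize a service's status to JSON, as the monitoring script would
$payload = Get-Service -Name 'MSSQLSERVER' |
    Select-Object Name, DisplayName, @{ Name = 'Status'; Expression = { $_.Status.ToString() } }
$json = $payload | ConvertTo-Json
$json                       # JSON text, ready to store or send
$json | ConvertFrom-Json    # round-trip back into an object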

Continued…

http://www.sqlshack.com/illustration-of-an-example-availability-monitoring-service-using-powershell-and-sql/

Happy Reading!!

 


PowerShell: Managing Service(s) Using WMI, CIM and Get-Service Cmdlet

Abstract

A common task for any administrator is to manage and maintain application availability by checking the status of various services, configuring services, and restarting a service on a local/remote system. Several components are available to accomplish these day-to-day tasks: WMI (Win32_Service, or Get-CimInstance from PowerShell 3.0 onwards) and Get-Service along with the Stop/Start/Restart-Service cmdlets. There are many instances where we opt for one over the other available GUI tools or commands. Consider a requirement to check only those services whose StartMode is set to Auto, or services that are stopped, or services that are disabled. The Get-Service cmdlet does not carry any such information, so querying Win32_Service or Get-CimInstance is appropriate and serves the purpose: the Win32_Service WMI class has properties such as StartMode, State, StartName, and so on.
Introduction
This article talks about how services are administered using the available components in our day-to-day activities. It also covers how to monitor service(s) on local/remote systems over a group of servers, both with and without credentials. The script presented here reads a list of servers from an input file and uses credentials to connect to remote/DMZ servers to pull service status information for a set of specific services. The output forms included in this article are Excel, CSV, JSON, and PowerShell console output.
First, we have to collect all services available on the computer. You can use three methods for this purpose (a side-by-side sketch follows the list):
  • Get-WMIObject
  • Get-CimInstance (supported in PowerShell 3.0 onwards)
  • Get-Service
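The three approaches side by side, as a minimal local sketch (no remote parameters assumed):

# WMI (legacy, DCOM/RPC underneath)
Get-WmiObject -Class Win32_Service | Select-Object -First 5 Name, State, StartMode
# CIM (PowerShell 3.0+, WS-Man underneath)
Get-CimInstance -ClassName Win32_Service | Select-Object -First 5 Name, State, StartMode
# .NET ServiceController objects; note there is no StartMode property here
Get-Service | Select-Object -First 5 Name, Status, DisplayName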

Querying WMI

WMI is the Microsoft implementation of Web-Based Enterprise Management (WBEM), with some enhancements over the initial version. WBEM is an industry initiative to develop a standard technology for accessing management information in an enterprise environment, covering not only Windows but also many other types of devices such as routers, switches, and storage arrays. WMI uses the Common Information Model (CIM) industry standard to represent systems, applications, networks, devices, and other managed components. CIM is developed and maintained by the Distributed Management Task Force (DMTF).
All of that sounds good, but when Microsoft developed the first versions of WMI they used DCOM (Distributed Component Object Model), which is a proprietary Microsoft technology, so standards and cross-compatibility took a back seat at the time. In more recent versions of the operating system, with Windows Management Framework 2.0 and above, Microsoft started to include more of the standards and shifted to the WS-Management SOAP-based protocol through WinRM (Windows Remote Management).
We can look at WMI as a collection of objects that provide access to different parts of the operating system; just like PowerShell objects, each has properties, methods, and events. Each of these objects is defined by MOF (Managed Object Format) files saved in %windir%\System32\wbem with the extension .mof. The MOF files get loaded by a Provider; when the Provider is registered, it loads the definitions of the objects into the current WMI Namespace. The Namespace can be seen as a file-system-like structure that organizes the objects by function. Inside each namespace, the objects live in what are called Class Instances, each populated with OS and application information as the system runs, so we always have the latest information in these classes.
Namespaces are organized hierarchically, where \root is the top level for all other namespaces. The default namespace, where most of the other namespaces and classes are located, is root\CIMv2 on Windows kernel 6.x (Default\CIMv2 on kernel 5.x). Some are installed by default, and others are only available when specific applications are installed.
In summary, each Namespace contains Classes; these have:
  • Methods – actions that can be taken.
  • Properties – information that can be retrieved.
  • Instances of the class objects (services, processes, disks), each instance with its own methods and properties.
  • Events – actions that WMI can monitor for and act on when they happen.

Caveats of WMI
Now, WMI is great, do not get me wrong, but sadly it does have some caveats, the main ones being:

  • Not all classes are available on all versions of Windows; check the documentation for your target OS.
  • Some applications, even from Microsoft, have Providers in one version and not in another (Office), and in some cases they even change the properties and methods.

In other words, if you plan on running WMI queries against a series of hosts that may run different versions of Windows or of a specific application, do make sure you test and validate your results. The same goes for architecture if you are targeting x64 or x86.
Display all service names matching a string pattern. The examples below list all the services starting with the character "A", and those starting with "M" or "SQ":
get-service -ComputerName hqdbsp18|  Where-Object {$_.Name -like 'A*'} | select -expand name 
get-service -ComputerName hqdbsp18|  Where-Object {$_.Name -match '^M' -or $_.Name -match '^SQ' } | select -expand name
[] – range matching. For example, Name LIKE ‘[M-S]%’ will list all services starting with any letter from M to S.
Get-WMIObject Win32_Service -Filter "Name LIKE '[M-S]%'" -ComputerName hqdbsp18| Select *|format-table -AutoSize
^ – negation. For example, Name LIKE ‘[^M-S]%’ will list services that do not start with any letter from M to S.
Get-WMIObject Win32_Service -Filter "Name LIKE '[^M-S]%'" -ComputerName hqdbsp18| Select *|format-table -AutoSize
_ – matches exactly one character. For example, Name LIKE ‘_QL%’ will list services whose name starts with any single character followed by ‘QL’ (such as SQL…). The example below uses ‘_Q_S%’:
Get-WMIObject Win32_Service -Filter "Name LIKE '_Q_S%'" -ComputerName hqdbsp18| Select *|format-table -AutoSize
Group the services based on their startup service account:
Get-wmiobject win32_service -ComputerName HQDBSP18 | group startname |select * |Format-Table -AutoSize

Querying Get-CimInstance

PowerShell 3.0, shipping with Windows Server 2012 and Windows 8, brings a new set of cmdlets to manage any server or device that complies with the CIM and WS-Man standards defined by the DMTF. In this blog post we will explore these new cmdlets and how they can help IT pros manage a data center. WMI is derived from CIM, and it provides a query interface called WMI Query Language (WQL) for accessing CIM object data.

The Get-CimInstance cmdlet, available in PowerShell v3, can be used to retrieve WMI information from a remote computer using the WSMAN protocol instead of the legacy WMI service that uses DCOM and RPC. However, the remote computer must be running PowerShell 3 and WSMAN protocol version 3. When querying a remote computer, Get-CimInstance sets up a temporary CIM session. If the remote computer is only running PowerShell 2.0, this will fail; you have to manually create a CIM session with a CIMSessionOption that uses the DCOM protocol. The script presented in this article does this for you and creates a CimSession depending on the remote computer’s capabilities.
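A minimal sketch of the DCOM fallback (the computer name is a placeholder):

# Build a CIM session over DCOM for hosts without WSMAN v3
$opt = New-CimSessionOption -Protocol Dcom
$session = New-CimSession -ComputerName 'Server01' -SessionOption $opt
Get-CimInstance -ClassName Win32_Service -CimSession $session |
    Select-Object Name, State, StartMode
Remove-CimSession -CimSession $session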

Get-cimInstance win32_service -computer hqdbsp18 -filter "startname like '%localsystem%' and name like '%App%'"| Select Name,startmode,state,startname,systemname |Format-Table -AutoSize
Get-wmiobject win32_service -computer hqdbsp18 -filter "startname like '%localsystem%' and name like '%App%'"| Select Name,startmode,state,startname,systemname |Format-Table -AutoSize

Get-Service

The Get-Service cmdlet retrieves information about all services running on the local/remote system. It returns .NET objects (of type System.ServiceProcess.ServiceController), which makes it possible to apply advanced filters and operations to administer the services.
First, let’s list the available cmdlets to manage services:
Get-Command | findstr Service

Querying Get-Service

The Get-Service cmdlet doesn’t support supplying credentials; this is one of its main limitations in non-domain or cross-domain scenarios. The workaround is to make use of Invoke-Command.

Credentials

Most of the time, it’s not good practice to hardcode credentials. Instead, you can encrypt the password and use the secured string in the script. Run the PowerShell script below to write the encrypted password to c:\SecurePassword.txt, then copy its content into the script.
$password = read-host -prompt "Enter your Password"
$secure = ConvertTo-SecureString $password -force -asPlainText 
ConvertFrom-SecureString $secure |Out-File c:\SecurePassword.txt
In the example below, the password Test@2013## is encrypted and written out. There are many instances where it’s acceptable to keep the password with the script; in that case, the encrypted password is used in the script, which serves the purpose.
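To complete the round trip, a hedged sketch of reading the encrypted password back and building a PSCredential (the account name is a placeholder):

# DPAPI-protected string: only decryptable by the same user on the same machine
$User = "DOMAIN\ServiceUser"
$secure = Get-Content c:\SecurePassword.txt | ConvertTo-SecureString
$Credentials = New-Object System.Management.Automation.PSCredential($User, $secure)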
The Get-Service and Get-CimInstance cmdlets don’t accept a -Credential parameter. The workaround is to use the Invoke-Command cmdlet to create a session on the remote server.
Invoke-Command { Get-Service "*SQL*" } -ComputerName HQDBSP18 -Credential $Credentials |select Name,displayname,Status |Format-Table -AutoSize
This is the advantage of the WMI cmdlet: Get-WmiObject accepts a -Credential parameter.

# Hard-code the credential details
$User = "CDW\DDOV810"
$Pass = ConvertTo-SecureString "thanVitha@2016" -AsPlainText -Force
#contain the username and password in a variable
$Credentials = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $User,$Pass
Get-WMIObject Win32_Service -computer hqdbsp18 -credential $Credentials |where-object {$_.name -like "*SQL*"} |select Name,startmode,state,startname,systemname |Format-Table -AutoSize



The CIM cmdlets do not have a -Credential parameter. The only way to specify alternate credentials is to manually build a new CIM session object and pass it to the -CimSession parameter on the other cmdlets. The details are given below.

$User = "hadt\sdsdfs"
$Pass = ConvertTo-SecureString "sdfsdf@2016" -AsPlainText -Force
#contain the username and password in a variable
$Credentials = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $User,$Pass
$CimSession = New-CimSession -ComputerName abcd -Credential $Credentials
$service = Get-CimInstance -ClassName Win32_service -CimSession $CimSession
$service
Remove-CimSession -CimSession $CimSession




A simple way to get the service details in a CSV file

The simplest and quickest way to retrieve the service details is by exporting the data to CSV. The two Windows PowerShell cmdlets that work with comma-separated values are ConvertTo-Csv and Export-Csv. The two cmdlets are basically the same; the difference is that Export-Csv saves to a text file, while ConvertTo-Csv does not.
# The output CSV file
$CSVOutputfile = "c:\Services.csv"
# Input file with the server details ($input is a reserved automatic variable, so use another name)
$serverFile = "c:\server.txt"
# Query the win32_service class
Get-WmiObject Win32_Service -ComputerName (Get-Content $serverFile) |
    Select-Object -Property __Server, Name, StartMode, State,
        @{ Name = 'ServiceAccount'; Expression = { $_.StartName } }, DisplayName |
    Export-Csv -Path $CSVOutputfile -NoTypeInformation
# The Invoke-Item cmdlet provides a way to open a file from within Windows PowerShell
Invoke-Item $CSVOutputfile

Service details in Excel

PowerShell can use the Excel COM component by invoking the Excel.Application COM object, which allows us to work with Excel to add data and format it. New-Object -ComObject creates a new Excel object using COM, and setting its Visible property to $true makes the sheet visible, which helps in troubleshooting. Once everything is defined, we can call the respective methods and properties to create a workbook and add a sheet as its item.

# Create a new Excel object using COM
$Excel = New-Object -ComObject Excel.Application
# Make Excel visible; helpful in troubleshooting
$Excel.Visible = $True
# After instantiating the Excel object, use its methods and properties to add a workbook
$Workbook = $Excel.Workbooks.Add()
$Sheet = $Workbook.Worksheets.Item(1)

Workbook number and details

These are the main FileFormat numbers in Windows Excel 2007-2016 (a usage sketch follows the list):

51 = xlOpenXMLWorkbook (without macros in 2007-2013, xlsx)
52 = xlOpenXMLWorkbookMacroEnabled (with or without macros in 2007-2013, xlsm)
50 = xlExcel12 (Excel Binary Workbook in 2007-2013, with or without macros, xlsb)
56 = xlExcel8 (97-2003 format in Excel 2007-2013, xls)
$xlOpenXMLWorkbook=[int]51
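For instance, a hedged sketch of saving the workbook created above using the xlsx constant (the path is illustrative):

# Save as .xlsx using the constant defined above, then close Excel
$Workbook.SaveAs("C:\Services.xlsx", $xlOpenXMLWorkbook)
$Excel.Quit()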

<#  
.SYNOPSIS     
    Name :  Service Details Excel Report (Get-ServiceExcelReports.ps1)
    Description : Get service status information from remote server(s) with WMI and output an Excel report
    
    Author : Prashanth Jayaram
       
    * Select a list of servers from a text file
    * Get remote server information with WMI and PowerShell
    * Service (Servername, Name, startmode, state, serviceaccount, displayname) + display an Excel output
      
       
.INPUT
    Input Server text file
    Service list
   
.OUTPUTS
    Excel output
   
.NOTES
    Version:        1.0
    Author:         Prashanth Jayaram
    Creation Date:  2017-02-02
    Purpose/Change: Initial script development
     
.EXAMPLE
    .\Get-ServiceExcelReports.ps1 -ServerList C:\server.txt -includeService "VM","Dhcp","SQL"
#>
  
  
#########################################################################################
  
 
  
param (
    [Parameter(Mandatory=$true)][string]$ServerList,
    [Parameter(Mandatory=$true)][string[]]$includeService
 )
  
 
$excel = New-Object -ComObject Excel.Application
 
$workbook = $excel.Workbooks.Add()
$worksheet = $workbook.Worksheets.Item(1)
 
$worksheet.Cells.Item(1,1) = "Server Name"
$worksheet.Cells.Item(1,2) = "Service Name"
$worksheet.Cells.Item(1,3) = "State"
$worksheet.Cells.Item(1,4) = "StartUp Type"
$worksheet.Cells.Item(1,5) = "Service Account"
$worksheet.Cells.Item(1,6) = "DisplayName"
 
  
$range = $worksheet.UsedRange
$range.Interior.ColorIndex = 19
$range.Font.ColorIndex = 11
$range.Font.Bold = $true
  
$row = 2
  
$IncludeArray=@()
  
#List of programs to exclude
#$IncludeArray = $includeService
  
Foreach($ServerName in (Get-Content $ServerList))
{
$service = Get-WmiObject Win32_Service -ComputerName $servername
if ($Service -ne $NULL)
{
foreach ($item in $service)
 {
 #$item.DisplayName
 Foreach($include in $includeService) 
     {                       
 #write-host $include                                    
 if(($item.name).Contains($include) -eq $TRUE)
    {
        $worksheet.Cells.Item($row, 1) = $servername
        $worksheet.Cells.Item($row, 2) = $item.name
        $worksheet.Cells.Item($row, 3) = $item.State
        $worksheet.Cells.Item($row, 4) = $item.startmode
        $worksheet.Cells.Item($row, 5) = $item.startname
        $worksheet.Cells.Item($row, 6) = $item.displayname          
        $row++
        
     }
    }
 }
}
}
   
$range.EntireColumn.AutoFit()
$excel.Visible = $True

Excel Output



Service Details in CSV file

The main part of the code takes input details from a text file, and the output columns are ordered and customized per the requirement. Six columns are written to the CSV output file: Servername, Name, startmode, state, serviceaccount, and displayname. The three input parameters are the input text file, the service list used to compare against the listed services, and the CSV output file that captures the six output columns.
<#
 
.SYNOPSIS      
    Name :  Service Details CSV Report (Get-Servicecsv.ps1)
    Description : Get service status information from remote server(s) with WMI and output a CSV file
   
    Author : Prashanth Jayaram
      
    * Select a list of servers from a text file
    * Get remote server information with WMI and PowerShell
    * Service (Servername, Name, startmode, state, serviceaccount, displayname) + display a CSV output
     
      
.INPUT
    Input Server text file
    Service list
  
.OUTPUTS
    CSV output, console and Grid output
  
.NOTES
    Version:        1.0
    Author:         Prashanth Jayaram
    Creation Date:  2017-02-02
    Purpose/Change: Initial script development
    
.EXAMPLE
    .\Get-Servicecsv.ps1 -ServerList C:\server.txt -includeService "VM","Dhcp","SQL"
#>
 
 
#########################################################################################
 
 
param (
    [Parameter(Mandatory=$true)][string]$ServerList,
    [Parameter(Mandatory=$true)][string[]]$includeService,
    [Parameter(Mandatory=$true)][string]$CSVoutputFile
 )
 
# Check if the output CSV file exists; if it exists, delete it.
if (test-path $CSVoutputFile ) { Remove-Item $CSVoutputFile }
# Custom object to maintain the order of the output columns
$props=@()
Foreach($ServerName in (Get-Content $ServerList))
{
    $service = Get-WmiObject Win32_Service -ComputerName $servername
    if ($Service -ne $NULL)
    {
        foreach ($item in $service)
        {
            Foreach($include in $includeService)
            {
                if(($item.name).Contains($include) -eq $TRUE)
                {
                    $props += [pscustomobject]@{
                        Servername     = $servername
                        name           = $item.name
                        Status         = $item.Status
                        startmode      = $item.startmode
                        state          = $item.state
                        serviceaccount = $item.startname
                        DisplayName    = $item.displayname }
                }
            }
        }
    }
}

$props | Format-Table Servername,Name,startmode,state,serviceaccount,displayname -AutoSize
$props | Select-Object Servername,Name,startmode,state,serviceaccount,displayname | Export-Csv -Path $CSVoutputFile -NoTypeInformation
Invoke-Item $CSVoutputFile

Service Details in HTML file

The section below outlines the use of the ConvertTo-Html cmdlet. Without digging much into the detail, we style the <table> element, the <th> (table heading) element, and the <td> (table cell) element. These styles put a nice border around our table and around each of the cells in that table. The See Also section has several pieces of in-depth information about formatting.
<#
 
.SYNOPSIS      
    Name :  Service Details HTML Report (Get-ServiceHTML.ps1)
    Description : Get service status information from remote server(s) with WMI and output an HTML file
   
    Author : Prashanth Jayaram
      
    * Select a list of servers from a text file
    * Get remote server information with WMI and PowerShell
    * Service (Servername, Name, startmode, state, serviceaccount, displayname) + display an HTML output
     
      
.INPUT
    Input Server text file
    Service list
    Output HTML file
  
.OUTPUTS
    HTML output
    Console Output
  
.NOTES
    Version:        1.0
    Author:         Prashanth Jayaram
    Creation Date:  2017-02-02
    Purpose/Change: Initial script development
    
.EXAMPLE
   
   Get-ServiceHTMLReport.ps1 -ServerList "\\hq6021\c$\server.txt" -includeService "VM","Dhcp","SQL" -OutputHTML "e:\CU2\ServericeReport.htm"
#>
 
 
#########################################################################################
param (
    [Parameter(Mandatory=$true)][string]$ServerList,
    [Parameter(Mandatory=$true)][String[]]$includeService,
    [Parameter(Mandatory=$true)][string]$OutputHTML
 )
<#
[String]$ServerList="\\hq6021\c$\server.txt"
[String[]]$includeService= "VM","Dhcp","SQL"
#>
$props=@()
 
 
Foreach($ServerName in (Get-Content $ServerList))
{
    $service = Get-WmiObject Win32_Service -ComputerName $servername
    if ($Service -ne $NULL)
    {
        foreach ($item in $service)
        {
            Foreach($include in $includeService)
            {
                if(($item.name).Contains($include) -eq $TRUE)
                {
                    $props += [pscustomobject]@{
                        Servername     = $servername
                        name           = $item.name
                        Status         = $item.Status
                        startmode      = $item.startmode
                        state          = $item.state
                        serviceaccount = $item.startname
                        DisplayName    = $item.displayname }
                }
            }
        }
    }
}

$props |format-table Servername,Name,startmode,state,serviceaccount,displayname  -autosize
$a = "<style>"
$a = $a + "BODY{background-color:peachpuff;}"
$a = $a + "TABLE{border-width: 1px;border-style: solid;border-color: black;border-collapse: collapse;}"
$a = $a + "TH{border-width: 1px;padding: 0px;border-style: solid;border-color: black;}"
$a = $a + "TD{border-width: 1px;padding: 0px;border-style: solid;border-color: black;}"
$a = $a + "</style>"
$props |Select-Object Servername,Name,startmode,state,serviceaccount,displayname| ConvertTo-HTML -head $a |  Out-File $OutputHTML
invoke-item $OutputHTML

HTML Output

Conclusion

There are plenty of tools available to find and manage Windows services. This article illustrated how to access and query Windows services on a local/remote machine using the available PowerShell techniques (WMI, CIM, and the Get-Service cmdlet) and explained the various ways to identify service status. The service detail output is reported via CSV, console, HTML, and Excel. The reference links also cover the different protocols in use; the CIM cmdlets give better control over the WMI class libraries in terms of design and performance. Even when Windows remoting is disabled, the CIM cmdlets can still query the remote computer (over DCOM), and CIM can also be used to access non-Windows systems.

References


SQL: Different ways to generate sequence

There was a question from the OP about adding a new column to a query’s output by generating a cyclic sequence of numbers from 1 to 3.

Select A =identity(int,1,1),B,C from table_abs

1,A,41

2,B,13

3,C,90

4,D,91

5,E,98

6,F,12

7,G,54

8,H,16

For this output, the 4th column generates the sequence of numbers from 1 to 3, as shown below:

1,A,41,1

2,B,13,2

3,C,90,3

4,D,91,1

5,E,98,2

6,F,12,3

7,G,54,1

8,H,16,2

If you are using SQL 2012 or later, then SEQUENCE would be the natural choice for any such operation.

Solution 1

Using Sequence

CREATE SEQUENCE Seq
 AS INTEGER
 START WITH 1
 INCREMENT BY 1
 MINVALUE 1
 MAXVALUE 3
 CYCLE;

SELECT table_name,NEXT VALUE FOR Seq New_column
FROM information_schema.tables

 

Solution 2

Using CTE and Modulus operator

;with q as
(
  select row_number() over (order by (select null)) A, *
  from sys.objects
)
select A, 1+(A-1)%3 B, *
from q

Solution 3

Loops and Temp table

create table dummyTest
(
id int, 
col1 char(1),
col2 int
)

insert into dummyTest values(1,'A',410),(2,'B',411),(3,'c',4111),(4,'d',421),(5,'e',441),(6,'f',451),(7,'g',481),(8,'h',401)

create table #dummy
(
id int, 
col1 char(1),
col2 int,
NewColumn int
)
 declare @n int,@i int,@limit int
 set @limit=1
 set @i=1
 select @n=count(*) from dummyTest
 while @i<=@n
 begin
 set @limit=1
 while @limit<=3
 begin
 print @limit
 insert into #dummy
 select *,NewColumn=@limit from dummyTest where id=@i
 set @i=@i+1
 set @limit=@limit+1
 end
  end

 select * from #dummy

The simplest of all the above methods

create table dummyTest
(
id int,
col1 char(1),
col2 int
)

insert into dummyTest values(1,'A',410),(2,'B',411),(3,'c',4111),(4,'d',421),(5,'e',441),(6,'f',451),(7,'g',481),(8,'h',401)

select id,col1,col2, (row_number() over (order by id)-1) %3+1 as cycle from dummyTest

 

 

Conclusion

The same solution can be derived using a cursor, and there may be other solutions as well. In many instances, we opt for a solution without thinking of the data volume, which may degrade performance. This is one of the prime examples of why we need to upgrade to a newer version.

 

 

 


SQL Server 2016: JSON integration

Abstract

This article explains the application of JSON data and the use of the new SQL 2016 constructs that enable integration of JSON data into a relational schema.

The idea behind this article is to list as many examples as possible of JSON nested elements: inclusion of sample data, relational-to-JSON conversion, JSON-to-relational conversion, conversion of JSON elements into separate columns, and representation of the same data as separate rows.

 



Introduction

JSON stands for JavaScript Object Notation. JSON is the primary data representation for all NoSQL databases, and it is a natural fit for developers who use JSON as the data-interchange format in their applications. The relative flexibility of JSON (JSON records are well structured but easily extended) and its scalability have attracted developers looking at DB migrations in agile environments. Data and schema, in volume, can be hard to change: rewriting a large dataset stored on disk while keeping the associated applications online can be time-consuming, and in moderate to large cases it can take days of background processing to upgrade the data.



Background

Most traditional relational database engines now support JSON. With SQL Server 2016, it’s easy to interchange JSON data between applications and the database engine. Microsoft has provided various functions and capabilities to parse JSON data, bringing JSON into relational storage; it also provides the ability to transform relational data into JSON and JSON data into denormalized data. Having these JSON features built into SQL Server makes it easier for applications to exchange JSON data with SQL Server. This functionality provides flexibility in integrating JSON data into the relational database engine, and developers can write complex queries throughout the stages of the development process.

Relational databases stand for traditional data storage, the constructive and intuitive SQL language, complex query design, and the ACID properties. NoSQL offers different concepts: complex structures are placed together into collections of entities, where you can take everything you need with one read operation, or insert a complex structure with a single write, and it follows the CAP properties.

Relational databases normalize the data to some degree; rather than repeating a piece of data in multiple rows, a table that needs that information stores a foreign key pointing to another table that holds the data. On the other hand, this means the data is typically shredded from its original form to fit into tables and then reassembled at run time by joining the tables in response to a query. This becomes particularly expensive as the data set grows and the data needs to be partitioned among multiple database servers.

 



The JSON Syntax Rules

JSON syntax is derived from JavaScript object notation syntax.

  • Data is in name/value pairs. {"key": "value"} – the most common format for objects
  • Data is separated by commas. {"key": "value"}, {"key": "value"}
  • Curly braces hold objects. {"key": {"key": "value"}}
  • Square brackets hold arrays. {"key": [ {"key": "value"}, {"key": "value"} ]}



The JSON Values

In JSON, values must be one of the data types given below.

  • A string
  • A number
  • An object (JSON object)
  • An array
  • A boolean
  • null



Basic Structure

If you have parent/child (fact/dimension) relationships where related child information does not change frequently, and you need to read the child records together with the parent without additional JOINs, you can store the child records in the parent table as a JSON array. In a traditional database, the normalization process minimizes the amount of duplicated information, whereas in NoSQL we intentionally duplicate it to make it easier to use. Say we represent a number of students taking a class; a normalized way of representing the data is given below. The array holds the dimension data of the relational table.

    course "Basic programming", 
        room "1A", 
        students[{ 
            id 1, 
            name "Prashanth"
        }, { 
            id 2, 
            name "Jayaram"
        }] 
}

Here’s the denormalized data.

[{
    "course": "Basic programming",
    "room": "1A",
    "studentId": 1,
    "studentName": "Prashanth"
}, {
    "course": "Basic programming",
    "room": "1A",
    "studentId": 2,
    "studentName": "Jayaram"
}]
When you parse the JSON container, you end up fetching the denormalized data in one table.

Let’s discuss the different dimensions of the sample data given below and represent the data in tabular and JSON formats. You will also learn how to query a JSON file with the various JSON constructs available in SQL 2016.

Native JSON support in SQL Server 2016 provides a few functions to read and parse your JSON string into a relational format.

  • OPENJSON() – table-valued function that parses JSON text and returns a rowset view of the JSON.
  • JSON_VALUE() – scalar function that returns a value from JSON at the specified path.

The sample below demonstrates the different dimensions of representing data as JSON versus relational data. The example includes a parent-child relationship, represented with JSON arrays (batter and topping) as well as nested objects.


Relational data presentation using FOR JSON

The FOR JSON AUTO clause is similar to the FOR XML AUTO clause: it automatically formats the JSON output based on the column/table hierarchy defined in the SQL query. The FOR JSON PATH clause is similar to the FOR XML PATH clause: it gives more control over the structure, using column aliases with a dot separator.

For example,

Let’s create sample tables ‘EMP’ and ‘DEPT’ and insert a few rows into them.

CREATE TABLE EMP
(EMPNO NUMERIC(4) NOT NULL,
ENAME VARCHAR(10),
JOB VARCHAR(9),
MGR NUMERIC(4),
HIREDATE DATETIME,
SAL NUMERIC(7, 2),
COMM NUMERIC(7, 2),
DEPTNO NUMERIC(2))
INSERT INTO EMP VALUES
(7782, 'CLARK', 'MANAGER', 7839, '9-JUN-1981', 2450, NULL, 10)
INSERT INTO EMP VALUES
(7788, 'SCOTT', 'ANALYST', 7566, '09-DEC-1982', 3000, NULL, 20)
INSERT INTO EMP VALUES
(7839, 'KING', 'PRESIDENT', NULL, '17-NOV-1981', 5000, NULL, 10)
INSERT INTO EMP VALUES
(7844, 'TURNER', 'SALESMAN', 7698, '8-SEP-1981', 1500, 0, 30)
INSERT INTO EMP VALUES
(7934, 'MILLER', 'CLERK', 7782, '23-JAN-1982', 1300, NULL, 10)
CREATE TABLE DEPT
(DEPTNO NUMERIC(2),
DNAME VARCHAR(14),
LOC VARCHAR(13) )
INSERT INTO DEPT VALUES (10, 'ACCOUNTING', 'NEW YORK')
INSERT INTO DEPT VALUES (20, 'RESEARCH', 'DALLAS')
INSERT INTO DEPT VALUES (30, 'SALES', 'CHICAGO')
INSERT INTO DEPT VALUES (40, 'OPERATIONS', 'BOSTON')

The output of the “FOR JSON AUTO” option is given below.

SELECT * FROM EMP E INNER JOIN DEPT D ON E.DEPTNO=D.DEPTNO
WHERE D.DEPTNO=10
FOR JSON AUTO

This option formats the JSON document automatically based upon the columns provided in the query. With the “FOR JSON PATH” option, the dot syntax is used for nested output.

SELECT E.EMPNO,E.ENAME,D.DEPTNO  AS "DEPARTMENT.DEPTNO" ,D.DNAME AS "DEPARTMENT.DNAME",D.LOC AS "DEPARTMENT.LOCATION"
FROM EMP E INNER JOIN DEPT D ON E.DEPTNO=D.DEPTNO
WHERE D.DEPTNO=10
FOR JSON PATH

SELECT E.EMPNO,E.ENAME,D.DEPTNO  AS "DEPARTMENT.DEPTNO" ,D.DNAME AS "DEPARTMENT.DNAME",D.LOC AS "DEPARTMENT.LOCATION"
FROM EMP E INNER JOIN DEPT D ON E.DEPTNO=D.DEPTNO
WHERE D.DEPTNO=10
FOR JSON PATH,ROOT('EMPLOYEE')

As we can see, the PATH option creates a wrapper object, ‘DEPARTMENT’, and nests the properties DEPTNO, DNAME, and LOCATION under it.



The JSON Data Transformation

Transform the below sample data to JSON

ID Type Name Batter Topping
1 donut Cake Regular None
1 donut Cake Regular Glazed
1 donut Cake Regular Sugar
1 donut Cake Regular Powdered Sugar
1 donut Cake Regular Chocolate with Sprinkles
1 donut Cake Regular Chocolate
1 donut Cake Regular Maple
1 donut Cake Chocolate None
1 donut Cake Chocolate Glazed
1 donut Cake Chocolate Sugar
1 donut Cake Chocolate Powdered Sugar
1 donut Cake Chocolate Chocolate with Sprinkles
1 donut Cake Chocolate Chocolate
1 donut Cake Chocolate Maple
1 donut Cake Blueberry None
1 donut Cake Blueberry Glazed
1 donut Cake Blueberry Sugar
1 donut Cake Blueberry Powdered Sugar
1 donut Cake Blueberry Chocolate with Sprinkles
1 donut Cake Blueberry Chocolate
1 donut Cake Blueberry Maple
1 donut Cake Devils Food None
1 donut Cake Devils Food Glazed
1 donut Cake Devils Food Sugar
1 donut Cake Devils Food Powdered Sugar
1 donut Cake Devils Food Chocolate with Sprinkles
1 donut Cake Devils Food Chocolate
1 donut Cake Devils Food Maple

The transformation below holds nested objects; as we can see, a few more entries are added for id 0002. In the sample data above, there are four batter types and seven toppings, used to prepare 28 (4*7=28) different types of cake. Similarly, for id 0002, four batter types and three toppings are used to prepare 12 (4*3=12) types of cake.

[{ 
    "id": "0001", 
    "type": "donut", 
    "name": "Cake", 
    "ppu": 0.55, 
    "batters": { 
        "batter": [{ 
            "id1": "1001", 
            "type1": "Regular"
        }, { 
            "id1": "1002", 
            "type1": "Chocolate"
        }, { 
            "id1": "1003", 
            "type1": "Blueberry"
        }, { 
            "id1": "1004", 
            "type1": "Devils Food"
        }] 
    }, 
    "topping": [{ 
        "id2": "5001", 
        "type2": "None"
    }, { 
        "id2": "5002", 
        "type2": "Glazed"
    }, { 
        "id2": "5005", 
        "type2": "Sugar"
    }, { 
        "id2": "5007", 
        "type2": "Powdered Sugar"
    }, { 
        "id2": "5006", 
        "type2": "Chocolate with Sprinkles"
    }, { 
        "id2": "5003", 
        "type2": "Chocolate"
    }, { 
        "id2": "5004", 
        "type2": "Maple"
    }] 
}, { 
    "id": "0002", 
    "type": "donut", 
    "name": "cup Cake", 
    "ppu": 0.5, 
    "batters": { 
        "batter": [{ 
            "id1": "1001", 
            "type1": "Regular"
        }, { 
            "id1": "1002", 
            "type1": "Chocolate"
        }, { 
            "id1": "1003", 
            "type1": "Blueberry"
        }, { 
            "id1": "1004", 
            "type1": "Devil's Food"
        }] 
    }, 
    "topping": [{ 
        "id2": "5001", 
        "type2": "None"
    }, { 
        "id2": "5002", 
        "type2": "Glazed"
    }, { 
        "id2": "5005", 
        "type2": "Sugar"
    }] 
}]



Transform JSON to Relational data

OPENJSON is a table-valued function (TVF) that parses JSON text, locates an array of JSON objects, iterates through the elements of the array, and for each element returns one row in the output result. To read JSON from a file, load the file using the OPENROWSET construct. The stocks.json file is used for this demonstration; you can derive the path as per your requirement and environment. The following SQL code reads the content of the JSON file using the OPENROWSET BULK function and passes the content (BulkColumn) to the OPENJSON function.

The JSON file can be stored in the local file system or in global (cloud) storage.

SELECT ID, type, name, ppu, type1 batter, type2 topping FROM
OPENROWSET(BULK N'\\hq6021\c$\stocks.json', SINGLE_CLOB) AS json
CROSS APPLY OPENJSON(BulkColumn)
WITH(id nvarchar(40), type nvarchar(40), name NVARCHAR(MAX), ppu NVARCHAR(MAX), batters NVARCHAR(MAX) AS JSON, topping NVARCHAR(MAX) AS JSON) AS data
CROSS APPLY
OPENJSON(batters, '$.batter')
WITH(id1 nvarchar(100), type1 nvarchar(20))
CROSS APPLY
OPENJSON(topping)
WITH(id2 nvarchar(100), type2 nvarchar(20))



Built-in functions for JSON processing

Source: https://blogs.technet.microsoft.com/dataplatforminsider/2016/01/06/json-in-sql-server-2016-part-2-of-4

 

SQL Server 2016 provides functions for parsing and processing JSON text. The JSON built-in functions available in SQL Server 2016 are given below.

  • ISJSON( jsonText ) checks if the NVARCHAR text is properly formatted according to the JSON specification. You can use this function to create check constraints on NVARCHAR columns that contain JSON text.
  • JSON_VALUE( jsonText, path ) parses jsonText and extracts scalar values on the specified JavaScript-like path (see below for some JSON path examples).
  • JSON_QUERY( jsonText, path ) parses jsonText and extracts objects or arrays on the specified JavaScript-like path (see below for some JSON path examples).

These functions use JSON paths for referencing the values or objects in JSON text. JSON paths use JavaScript-like syntax for referencing the properties in JSON text. Some examples are given below.

  • ‘$’ – references entire JSON object in the input text.
  • ‘$.property1’ – references property1 in JSON object.
  • ‘$[4]’ – references the fifth element in a JSON array (indexes are counted from 0, as in JavaScript).
  • ‘$.property1.property2.array1[5].property3.array2[15].property4’ – references complex nested property in the JSON object.
  • ‘$.info."first name"’ – references the "first name" property in the info object. If a key contains special characters such as space, dollar, etc., it should be surrounded by double quotes.

The dollar sign ($) represents the input JSON object (similar to root “/” in XPath language). You can add any JavaScript-like property or an array after “$” to reference properties in JSON object. One simple example of a query, where these built-in functions are used is given below.

DECLARE @MyJSON NVARCHAR(4000) = N'{
    "info": {
        "type": 1,
        "address": {
            "town": "Louisville",
            "county": "Boulder",
            "country": "USA"
        },
        "tags": ["Snow", "Breweries"]
    },
    "LocationType": "East",
    "WeatherType": "Cold"
}'
Select * from OPENJSON(@MyJSON)
WITH(type int '$.info.type',
    LocationType varchar(20) '$.LocationType',
    WeatherType varchar(20) '$.WeatherType',
    town varchar(200) '$.info.address.town',
    county varchar(200) '$.info.address.county',
    country varchar(200) '$.info.address.country') AS info
CROSS APPLY
OPENJSON(@MyJSON, '$.info.tags')



How to define Nested Objects in JSON

The examples given above also contain sample data that represents a nested object.

DECLARE @json NVARCHAR(1000)
SELECT @json = N'{
    "Employee": [{
        "Element": 1
    }, {
        "Element": 2
    }, {
        "Element": "n"
    }]
}'



Parsing Nested elements of JSON

In the employee example given below, EmployeeDepartment is the root of the JSON. The array element DEPT holds the dimension data, which represents the department details of each employee. The employee JSON structure has three objects.

DECLARE @MyJSON NVARCHAR(4000) = N '{   
"EmployeeDepartment": [{ 
    "EmployeeID": "E0001", 
    "FirstName": "Prashanth", 
    "LastName": "Jayaram", 
    "DOB": "1983-02-03", 
    "DEPT": [{ 
        "EmployeeDepartment": "Ducks" 
    }, { 
        "EmployeeDepartment": "Blues" 
    }] 
}, { 
    "EmployeeID": "E0002", 
    "FirstName": "Prarthana", 
    "LastName": "Prashanth", 
    "DOB": "2015-07-06", 
    "DEPT": [{ 
        "EmployeeDepartment": "Red Wings" 
    }] 
}, { 
    "EmployeeID": "E0003", 
    "FirstName": "Pravitha", 
    "LastName": "Prashanth", 
    "DOB": "2015-07-06", 
    "DEPT": [{ 
        "EmployeeDepartment": "Red Wings" 
    }, { 
        "EmployeeDepartment": "Green Bird" 
    }] 
}] 
'
--SELECT * FROM OPENJSON(@MyJSON) 
SELECT
EmployeeID, 
FirstName, 
LastName, 
DOB, 
DEPT, 
EmployeeDepartment 
FROM OPENJSON(@MyJSON, '$.EmployeeDepartment') 
WITH(EmployeeID varchar(10), FirstName varchar(25), LastName varchar(25), DOB varchar(25), DEPT NVARCHAR(MAX) AS JSON) AS EMP
CROSS APPLY 
OPENJSON(DEPT) 
WITH(EmployeeDepartment nvarchar(100))



Reading JSON into separate rows

How about pulling them into separate rows, using JSON_VALUE() with the OPENJSON() function? The query given below gives an overview of applying the JSON constructs to the nested elements.
DECLARE @MyJSON NVARCHAR(4000) = N '{   
"EmployeeDepartment": [{ 
    "EmployeeID": "E0001", 
    "FirstName": "Prashanth", 
    "LastName": "Jayaram", 
    "DOB": "1983-02-03", 
    "DEPT": [{ 
        "DeptID": "D1", 
        "DName": "Ducks" 
    }, { 
        "DeptID": "D2", 
        "DName": "Blues" 
    }] 
}, { 
    "EmployeeID": "E0002", 
    "FirstName": "Prarthana", 
    "LastName": "Prashanth", 
    "DOB": "2015-07-06", 
    "DEPT": [{ 
        "DeptID": "D3", 
        "DName": "Red Wings" 
    }] 
}, { 
    "EmployeeID": "E0003", 
    "FirstName": "Pravitha", 
    "LastName": "Prashanth", 
    "DOB": "2015-07-06", 
    "DEPT": [{ 
        "DeptID": "D3", 
        "DName": "Red Wings" 
    }, { 
        "DeptID": "D4", 
        "DName": "Green Bird" 
    }] 
}] 
'
SELECT
JSON_Value(c.value, '$.EmployeeID') as EmployeeID, 
    JSON_Value(c.value, '$.FirstName') as FirstName, 
    JSON_Value(C.value, '$.DOB') as DOB, 
    JSON_Value(p.value, '$.DeptID') as DEPTID, 
    JSON_Value(p.value, '$.DName') as DName 
FROM OPENJSON(@MyJSON, '$.EmployeeDepartment') as c
CROSS APPLY OPENJSON(c.value, '$.DEPT') as p



Reading JSON elements into separate columns

You can specify the child elements with a full path by using the dollar sign "$" inside the WITH() clause to segregate the data into separate columns.

DECLARE @MyJSON NVARCHAR(4000) = N '{   
"EmployeeDepartment": [{ 
    "EmployeeID": "E0001", 
    "FirstName": "Prashanth", 
    "LastName": "Jayaram", 
    "DOB": "1983-02-03", 
    "DEPT": [{ 
        "DeptID": "D1", 
        "DName": "Ducks" 
    }, { 
        "DeptID": "D2", 
        "DName": "Blues" 
    }] 
}, { 
    "EmployeeID": "E0002", 
    "FirstName": "Prarthana", 
    "LastName": "Prashanth", 
    "DOB": "2015-07-06", 
    "DEPT": [{ 
        "DeptID": "D3", 
        "DName": "Red Wings" 
    }] 
}, { 
    "EmployeeID": "E0003", 
    "FirstName": "Pravitha", 
    "LastName": "Prashanth", 
    "DOB": "2015-07-06", 
    "DEPT": [{ 
        "DeptID": "D3", 
        "DName": "Red Wings" 
    }, { 
        "DeptID": "D4", 
        "DName": "Green Bird" 
    }] 
}] 
'
SELECT
EmployeeID, 
FirstName, 
DOB, 
Dept1, DName1, 
Dept2, DName2 
FROM OPENJSON(@MyJSON, '$.EmployeeDepartment') 
WITH(EmployeeID varchar(20) 
    '$.EmployeeID', FirstName varchar(20) 
    '$.FirstName', DOB varchar(20) 
    '$.DOB', Dept1 varchar(20) 
    '$.DEPT[0].DeptID', Dname1 varchar(20) 
    '$.DEPT[0].DName', Dept2 varchar(20) 
    '$.DEPT[1].DeptID', Dname2 varchar(20) 
    '$.DEPT[1].DName') AS EMP



Conclusion

SQL 2016 contains some very powerful JSON constructs. Mixing the power of relational databases with the flexibility of JSON offers many benefits for migration, integration, and deployment. It is flexible because of the simple syntax and the low overhead of maintaining and managing JSON data. The powerful JSON SQL constructs enable us to query and analyze JSON data, as well as transform JSON to relational data and relational data to JSON.

There are plenty of examples and resources available under various links. This is an effort to combine real-world scenarios and detail the various ways of manipulating JSON data using the SQL 2016 JSON constructs. NoSQL offers different concepts: complex structures are placed together into collections of entities, where you can take everything you need with one read operation, or insert a complex structure with a single write. The downside is that when you organize your information into different collections, you will find it very hard to JOIN entities from two collections.

With the new SQL Server, you have the option to choose between these two concepts and use the best of both worlds. In your data models, you can choose when to use traditional relational structures and when to introduce NoSQL concepts.



