Status of Statistics : Last Updated.

March 20, 2018

To find out when statistics were last updated on your tables, along with the table name and statistics name, use the query below.

SELECT t.name AS TableName, s.name AS StatName, STATS_DATE(t.object_id, s.stats_id) AS LastUpdated
FROM sys.stats AS s
JOIN sys.tables AS t
ON s.object_id = t.object_id
WHERE t.type = 'U'
ORDER BY LastUpdated DESC


MS SQL Server 2016 : Features.

March 13, 2017

1. Always Encrypted : As the name suggests, the 'Always Encrypted' feature of SQL Server 2016 keeps your sensitive data encrypted at all times, whether at rest (local environment) or remote (Cloud/Azure). It helps protect data from people who might otherwise browse it, such as DBAs, cloud operators, and high-privileged but unauthorized users, while still allowing for ad-hoc queries, reporting, and exporting of the data.

Always Encrypted is designed to protect data at rest and in motion. With Always Encrypted, SQL Server can perform operations on encrypted data while the encryption key resides with the application. Encryption and decryption of data happen transparently inside the application.

How It Works
You can apply Always Encrypted to individual columns (where your sensitive data resides). While configuring a column, you specify the encryption algorithm and the cryptographic keys used for data protection. There are basically two keys you need to define:

  1. Column Encryption Key: used to encrypt the data in a specific column
  2. Column Master Key: used to encrypt the column encryption keys

So basically it is double encryption protection: only the program can access the data, and the client application automatically encrypts and decrypts it before sending it to or reading it from the database.
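
For illustration, here is a minimal sketch of a table with one protected column. It assumes a column master key and a column encryption key named CEK_Demo have already been created (for example, through the SSMS Always Encrypted wizard); deterministic encryption also requires a BIN2 collation on character columns:

CREATE TABLE dbo.Patients (
    PatientID INT IDENTITY(1,1) PRIMARY KEY,
    SSN CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = CEK_Demo,      -- assumed to exist already
            ENCRYPTION_TYPE = DETERMINISTIC,       -- allows equality lookups
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        ) NOT NULL
);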

2. Enhancements to AlwaysOn : SQL Server 2016 also continues to advance high availability and disaster recovery with several enhancements to AlwaysOn. The release enhances AlwaysOn with the ability to have up to three synchronous replicas. Additionally, it includes DTC (Distributed Transaction Coordinator) support as well as support for round-robin load balancing of the secondary replicas. There is also support for automatic failover based on database health.

3. Enhanced In-Memory OLTP : Introduced in SQL Server 2014, In-Memory OLTP continues to mature in SQL Server 2016. Microsoft has enhanced In-Memory OLTP by extending the functionality to more applications while also improving concurrency. This means an expanded T-SQL surface area, an increase in the total amount of memory supported into the terabyte range, and support for a greater number of parallel CPUs.

In-Memory OLTP is optimally designed for high-speed loading of data with no locking issues and for high-volume session-state scenarios. Previously there were a lot of limitations, particularly around constraints and procedures. In SQL Server 2016 this feature is vastly improved, supporting foreign keys, check and unique constraints, and parallelism. Additionally, tables up to 2 TB are now supported (up from 256 GB).
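
As a sketch, creating a memory-optimized table looks like the following; it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup, and the table and column names are illustrative:

CREATE TABLE dbo.SessionState (
    SessionID NVARCHAR(64) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserID INT NOT NULL,
    LastAccess DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);  -- data survives restarts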

Another part of the in-memory story is columnstore indexes, which are commonly used in data warehouse workloads. In SQL Server 2016 they receive enhancements around sorting and better support for AlwaysOn Availability Groups.

4. Management Studio

  1. SQL Server Management Studio supports the Active Directory Authentication Library (ADAL) which is under development for connecting to Microsoft Azure. This replaces the certificate-based authentication used in SQL Server 2014 Management Studio.
  2. SQL Server Management Studio installation requires installing .NET 4.6 as a pre-requisite. .NET 4.6 will be automatically installed by setup when SQL Server Management Studio is installed.
  3. A new query result grid option supports keeping Carriage Return/Line Feed (newline characters) when copying or saving text from the results grid. Set this from the Tools/Options menu.
  4. SQL Server Management Tools is no longer installed from the main feature tree.

5. New Default Database Size and Autogrowth Values : There are new values for the model database and new defaults for databases created from it (which are based on model): the initial size of the data and log files is now 8 MB, and the default auto-growth of data and log files is now 64 MB.

6. Stretch Database : This enables your frequently accessed (hot) data to stay on-premises while your infrequently accessed (cold) data is moved to the cloud. This lets you take advantage of low-cost Azure storage and still have high-performance applications.

The basics of Stretch Database are that some part of your tables (configurable or automated) will be moved into an Azure SQL Database in the cloud in a secure fashion.

When you query those tables, the query optimizer knows which rows are on your server and which rows are in Azure, and divides the workload accordingly. The query processing on the Azure rows takes place in Azure so the only latency is for the return of the rows over the network. As an additional enhancement, you are only charged for the SQL Database in Azure when it is used for queries.

So we can say it is a cost-effective solution for COLD data storage: your data is available for query and management at any time, and you can access it without changing your queries, whether it sits locally or in the stretch database.

To configure it, you need an Azure account and the database you want to stretch.
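
At a high level, configuration can also be done in T-SQL. The sketch below assumes the database has already been linked to an Azure server through the Enable Database for Stretch wizard, and dbo.OrderHistory is an illustrative table name:

-- Enable the instance for Stretch Database
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;

-- Start migrating the table's rows to Azure
ALTER TABLE dbo.OrderHistory
    SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));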

7. PolyBase into SQL Server : Microsoft introduced PolyBase, a SQL Server connector to Hadoop (and Azure Blob Storage), in its Analytics Platform System data warehouse appliance in 2015. Now Microsoft has incorporated that functionality into the regular on-premises product. We can connect to relational and non-relational data from a single point; it helps you connect to Hadoop and Azure Blob storage.

This feature will benefit you if your regular data processing involves dealing with a lot of large text files: they can be stored in Azure Blob Storage or Hadoop and queried as if they were database tables.

A common scenario where you might use this is an extract, transform, and load (ETL) process in which you take a subset of a text file to load into your database. Basically, PolyBase creates a bridge to data that lives outside SQL Server's scope; no additional knowledge or installation is needed to query Hadoop or Azure storage. Simply put, you can import and export data to and from Hadoop or Azure storage, and you can integrate it with Microsoft's business intelligence tools.
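
A hedged sketch of the moving parts (all names here are illustrative, and it assumes PolyBase is installed and any required database-scoped credential exists):

-- External data source pointing at an Azure Blob Storage container
CREATE EXTERNAL DATA SOURCE AzureBlobStore
WITH (TYPE = HADOOP,
      LOCATION = 'wasbs://logs@mystorageaccount.blob.core.windows.net');

-- File format describing the delimited text files
CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

-- External table that can then be queried like any other table
CREATE EXTERNAL TABLE dbo.WebLogs (
    LogDate DATETIME2,
    Url NVARCHAR(400),
    DurationMs INT
)
WITH (LOCATION = '/weblogs/',
      DATA_SOURCE = AzureBlobStore,
      FILE_FORMAT = CsvFormat);

SELECT TOP (10) * FROM dbo.WebLogs;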

8. Native JSON Support : JSON (JavaScript Object Notation) is a standardized data exchange format that previously was not supported natively by SQL Server; to perform JSON imports and exports you had to hand-code complex T-SQL, SQLCLR, or JavaScript. SQL Server 2016 incorporates JSON support directly into the engine, much like XML: it natively parses and stores JSON as relational data and supports exporting relational data to JSON.

Several other large databases have added this support in recent years as the trend toward Web APIs using JSON has increased. The way this is implemented in SQL Server 2016 is very similar to the built-in XML support, with FOR JSON and OPENJSON providing the ability to quickly move JSON data into and out of tables. SQL Server also provides built-in functions for JSON.

SQL Server can read JSON-formatted data, load it into tables, and support indexed properties in JSON columns. JSON data is stored in the NVARCHAR type, which gives an application the following benefits:

JSON data already stored as text can be easily migrated to the new feature.
Since NVARCHAR is supported by all SQL Server components, so is JSON.

You can easily fetch data as JSON from SQL Server with the syntax below:

SELECT column, expression, column as alias
FROM table1, table2, table3
FOR JSON [AUTO | PATH]

It is a SELECT command, so when we run the above query, SQL Server formats each row and returns the result as JSON objects.
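
Going the other way, OPENJSON shreds a JSON string into rows; here is a small self-contained example (the sample data is made up for illustration):

DECLARE @json NVARCHAR(MAX) =
    N'[{"id":1,"name":"Anna"},{"id":2,"name":"Ben"}]';

SELECT j.id, j.name
FROM OPENJSON(@json)
     WITH (id INT '$.id', name NVARCHAR(50) '$.name') AS j;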

9. Query Store : One common issue many organizations face when upgrading SQL Server versions is changes in the query optimizer that negatively impact performance. Without comprehensive testing, this has traditionally been a hard problem to identify and resolve. The Query Store feature maintains a history of query execution plans along with their performance data, and quickly identifies queries that have recently gotten slower, allowing administrators or developers to force the use of an older, better plan if needed. The Query Store is configured at the individual database level.

Until now, to check query plans and execution statistics we needed dynamic management views, but they neither show the plans of past queries nor store them anywhere for later review. SQL Server 2016 provides the Query Store, which captures query plans, statistics, and execution history for both current and past queries.

To enable it, right-click the database and go to Properties. You will see 'Query Store' in the left pane; select it and enable it there, or use the following query syntax:

ALTER DATABASE [Database1] SET QUERY_STORE = ON
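
Once enabled, the Query Store catalog views expose the captured history. Here is a sketch of finding slow queries and pinning a better plan (the query_id and plan_id values are placeholders you would read from the first result):

-- Top 5 queries by average duration captured in the Query Store
SELECT TOP (5) qt.query_sql_text, rs.avg_duration, p.plan_id
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
ORDER BY rs.avg_duration DESC;

-- Force a known good plan for a regressed query
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 73;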

10. Row Level Security : This is another of the security features of SQL Server 2016. It restricts which users can view which data in a table, based on a function. This feature is very useful in multi-tenant environments where you may want to limit data access based on customer ID. The implementation of RLS in SQL Server 2016 still has its limits (updates and inserts are not fully covered). It allows you to secure your data row-wise: in short, you can define which rows will be visible to a particular SQL user, so depending on the user's access permissions, we can restrict data at the row level.

To implement Row level security, you need to define Security policy with a predicate and function.

Security policy: We need to create a policy for security; here is the basic syntax:

CREATE SECURITY POLICY fn_security
ADD [FILTER | BLOCK] PREDICATE FunctionName ON TableName

In the above syntax, FILTER and BLOCK are the predicate types: a FILTER predicate silently filters the rows available to read operations, while a BLOCK predicate explicitly blocks write operations that violate it.

Function: The predicate function is a simple user-defined (inline table-valued) function, but there are some restrictions on functions used in Row Level Security (a complete example follows the list below):

  1. Database modification operations are not allowed
  2. The OUTPUT INTO clause is not allowed in the function
  3. Multiple result sets must not be returned from the function
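
Putting the pieces together, here is a hedged multi-tenant sketch (table, column, and function names are illustrative, and it assumes the application sets a TenantID value in SESSION_CONTEXT on each connection):

CREATE FUNCTION dbo.fn_TenantPredicate (@TenantID INT)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_result
       WHERE @TenantID = CAST(SESSION_CONTEXT(N'TenantID') AS INT);
GO

CREATE SECURITY POLICY TenantFilterPolicy
ADD FILTER PREDICATE dbo.fn_TenantPredicate(TenantID) ON dbo.Orders
WITH (STATE = ON);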

11. Dynamic Data Masking : This is another of the security features of SQL Server 2016. It simply wraps a MASK around your data; in short, it hides confidential data that you don't want to display, avoiding disclosure of sensitive values. After masking, a SQL user with limited rights will not see the original text, only the masked text. SQL Server has pre-defined masking functions that you just need to apply to the desired columns.

To apply this to specific columns, you just need to ALTER the column with ADD MASKED WITH and a function name; see the syntax below:

ALTER TABLE tablename ALTER COLUMN columnname ADD MASKED WITH (FUNCTION = 'default()')
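
A few of the built-in masking functions in one illustrative table definition (names are hypothetical; users holding the UNMASK permission still see the real values):

CREATE TABLE dbo.Contacts (
    ContactID INT IDENTITY(1,1) PRIMARY KEY,
    Email     VARCHAR(100) MASKED WITH (FUNCTION = 'email()'),
    Phone     VARCHAR(20)  MASKED WITH (FUNCTION = 'partial(0,"XXX-XXX-",4)'),
    Salary    MONEY        MASKED WITH (FUNCTION = 'default()')
);

-- A user without UNMASK sees masked values in SELECT results
GRANT SELECT ON dbo.Contacts TO LimitedUser;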

12. Multiple TempDB : It is always a good practice to have multiple tempdb data files if you are working with big, crucial data. Until SQL Server 2014 you had to add tempdb files manually, but SQL Server 2016 provides tempdb configuration settings that let you configure the number of tempdb files at installation time. By default, 8 files with a default size of 64 MB each are created, so you no longer need to configure or create them manually.

13. Temporal Table : When you want to store the history of your SQL tables so you can review old records after they have been updated, you can use this feature. SQL Server 2016 provides record versioning, in which it keeps a history of changed records and maintains it for point-in-time analysis. This feature is useful for auditing, checking data trends, recovering from accidental updates or deletes, and much more.

How It Works

Basically, the system keeps a paired history table and adds two additional columns named 'SysStartTime' and 'SysEndTime', holding the start and end time of each row version. The live table contains the current version of each row, whereas the history table contains the previous versions. We can fetch data from the history table with the following query syntax:

SELECT * FROM table1 FOR SYSTEM_TIME
BETWEEN date1 AND date2
WHERE condition;
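
Creating a system-versioned table looks like the sketch below (table and column names are illustrative):

CREATE TABLE dbo.Employee (
    EmployeeID   INT PRIMARY KEY,
    Salary       DECIMAL(10,2) NOT NULL,
    SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime   DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));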

14. Revamped SQL Server Data Tools : Another change in SQL Server 2016 is the reconsolidation of SQL Server Data Tools (SSDT). As Microsoft worked to replace Business Intelligence Development Studio (BIDS) with SQL Server Data Tools, they wound up confusing almost everyone by creating not one but two versions of SQL Server Data Tools, both of which needed to be downloaded separately from SQL Server itself. With the SQL Server 2016 release, Microsoft has indicated that they intend to reconsolidate SQL Server Data Tools.

15. Real-time Operational Analytics : This feature uses the dynamic duo of SQL Server's in-memory technologies: it combines In-Memory OLTP with the in-memory columnstore for real-time operational analytics. Its purpose is to tune your system for optimal transactional performance as well as increase workload concurrency. This is a great combination, and running analytics directly against operational data is something a lot of customers have asked for for a long time, but you will certainly need the memory to take advantage of it.

16. R Introduction : Have you stored statistical data in SQL Server? Want to use R to analyze it? Did you have to export data from SQL Server to R every time? That headache is now cured in SQL Server 2016, because R is built in: you can run R scripts inside SQL Server. You need to install this feature at SQL Server setup time.
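
A minimal sketch of running R in-database with sp_execute_external_script (it assumes the R Services feature is installed and the 'external scripts enabled' server option has been turned on):

EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- InputDataSet',          -- pass rows straight through
    @input_data_1 = N'SELECT TOP (5) name FROM sys.objects'
WITH RESULT SETS ((ObjectName NVARCHAR(128)));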


MS SQL Server 2014 : Features.

July 25, 2014

1. New In-Memory OLTP Engine : The most important new feature in SQL Server 2014 is the In-Memory OLTP engine. By moving selected tables and stored procedures into memory, you can drastically reduce I/O and improve the performance of your OLTP applications. The In-Memory OLTP engine is designed for high concurrency and uses a new optimistic concurrency control mechanism to eliminate locking delays. An all-new lock-free engine processes the transactions for memory-resident tables. Stored procedure performance is improved by compiling the stored procedures into native-code DLLs. This feature is supported in the SQL Server 2014 Enterprise, Developer, and Evaluation editions. Memory-optimized tables can be defined as durable, so data persists on server restart, or as schema-only, preserving the table definition alone, which is useful for ETL transformations and data staging.

2. Enhanced Windows Server 2012 Integration : SQL Server 2014 provides improved integration with Windows Server 2012 R2 and Windows Server 2012. It has the ability to scale up to 640 logical processors and 4 TB of memory in a physical environment, and up to 64 virtual processors and 1 TB of memory when running on a virtual machine (VM).

3. Enhancements to AlwaysOn Availability Groups : AlwaysOn is enhanced with support for additional secondary replicas and Windows Azure integration. In SQL Server 2012, AlwaysOn Availability Groups protected multiple databases with up to four secondary replicas. In SQL Server 2014, Microsoft has expanded the maximum number of secondary replicas from four to eight. Readable secondary replicas now also remain available for read workloads even when the primary replica is unavailable. The new Windows Azure integration feature enables you to create asynchronous availability group replicas in Windows Azure for disaster recovery. In the event of a local database outage, you can run your SQL Server databases from Windows Azure VMs. The new Windows Azure AlwaysOn availability options are fully integrated into SQL Server Management Studio (SSMS).

4. Enhancements to Backups : Database backups in SQL Server now support built-in database encryption. Previous releases all required a third-party product to encrypt database backups. The backup encryption process uses either a certificate or an asymmetric key to encrypt the data. The supported backup encryption algorithms are Advanced Encryption Standard (AES) 128, AES 192, AES 256, and Triple DES (3DES). SQL Server 2014 also provides Windows Azure integration to SQL Server’s backup capabilities. We can specify a Windows Azure URL as the target for your SQL Server 2014 database backups. This new Windows Azure backup feature is fully integrated into SSMS.
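
An illustrative encrypted, compressed backup (it assumes a certificate named BackupCert has already been created in the master database):

BACKUP DATABASE Sales
TO DISK = N'D:\Backups\Sales.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);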

5. Updateable Columnstore Indexes : Columnstore indexes are another of Microsoft’s high performance in-memory technologies. Microsoft introduced the columnstore index in SQL Server 2012 to provide significantly improved performance for data warehousing types of queries. Microsoft states that for some types of queries, columnstore indexes can provide up to 10x performance improvements. However, in the original implementation of the columnstore indexes, the underlying table had to be read-only. SQL Server 2014 eliminates this restriction. The new updateable columnstore index enables updates to be performed to the underlying table without first needing to drop the columnstore index. A SQL Server 2014 columnstore index must use all of the columns in the table, and it can’t be combined with other indexes.
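
For illustration, converting a hypothetical fact table to a clustered columnstore index, after which it remains fully writable:

CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;

-- Unlike SQL Server 2012, inserts and updates now work without dropping the index
INSERT INTO dbo.FactSales (SaleDate, ProductID, Amount)
VALUES ('2014-07-01', 101, 25.00);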

6. SQL Server Data Tools for Business Intelligence : Previously known as Business Intelligence Development Studio (BIDS) and SQL Server Data Tools (SSDT), the new SQL Server Data Tools for BI (SSDT BI) is used to create SQL Server Analysis Services (SSAS) models, SQL Server Reporting Services (SSRS) reports, and SQL Server Integration Services (SSIS) packages. SSDT BI is based on Microsoft Visual Studio 2012 and supports SSAS and SSRS for SQL Server 2014 and earlier, but SSIS projects are limited to SQL Server 2014. In the pre-release version of SQL Server 2014 Community Technology Preview 2 (CTP2), SQL Server Setup doesn’t install SSDT BI. Instead, you must download it separately from the Microsoft Download Center.

7. Power BI for Office 365 Integration : Power BI for Office 365 is Microsoft's cloud-based BI solution that leverages familiar Office 365 and Excel tools. It provides business insights through data visualization and navigation capabilities. Power Pivot is an Excel 2010 and Excel 2013 ProPlus add-in that enables Excel to perform data analysis on large volumes of data. Power View is an Excel 2013 ProPlus add-in that provides a Silverlight-based data visualization and navigation tool. Microsoft has extended Power View so that you can now use it with multidimensional models (OLAP cubes). Power View now supports a number of data visualizations, including tables, matrixes, bubble charts, and geographical maps. Power Query is an Excel 2013 add-in that lets you discover and integrate data into Excel. It supports SQL Server data sources as well as external sources such as Windows Azure, text files, XML files, Open Data Protocol (OData) feeds, web pages, Hadoop data sets, and public data sets. Power Map is an Excel 2013 ProPlus add-in that provides 3D mapping visualizations.

8. Buffer Pool Extension : SQL Server 2014 provides a new solid state disk (SSD) integration capability that lets you use SSDs to expand the SQL Server 2014 Buffer Pool as nonvolatile RAM (NvRAM). With the new Buffer Pool Extensions feature, you can use SSD drives to expand the buffer pool in systems that have maxed out their memory. Buffer Pool Extensions can provide performance gains for read-heavy OLTP workloads.
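
Configuring the extension is a single statement; the file path and size below are illustrative:

ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = N'F:\SSDCACHE\BufferPool.bpe', SIZE = 32 GB);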

9. Storage I/O control : The Resource Governor lets you limit the amount of CPU and memory that a given workload can consume. SQL Server 2014 extends the reach of the Resource Governor to manage storage I/O usage as well. The SQL Server 2014 Resource Governor can limit the physical I/Os issued for user threads in a given resource pool.
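
A sketch of an I/O-capped resource pool (pool name and limits are illustrative):

CREATE RESOURCE POOL ReportingPool
WITH (MIN_IOPS_PER_VOLUME = 50, MAX_IOPS_PER_VOLUME = 500);

ALTER RESOURCE GOVERNOR RECONFIGURE;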

10. Transactions with Delayed Durability : In order to reduce latency, transactions can be defined as delayed durable, which means the transaction returns control to the client before the transaction log record is written to disk. This can be defined at the database level, COMMIT level, or ATOMIC block level in natively compiled stored procedures. The stored procedure sys.sp_flush_log is also included to flush the transaction log to disk, making previously committed delayed-durable transactions durable.
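
A hedged sketch of the database- and commit-level options (database and table names are illustrative):

ALTER DATABASE Sales SET DELAYED_DURABILITY = ALLOWED;

BEGIN TRANSACTION;
    INSERT INTO dbo.AuditTrail (EventText) VALUES (N'low-value event');
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);

-- Flush any committed but not yet hardened log records to disk
EXEC sys.sp_flush_log;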

11. New Security Features : SQL Server 2014 introduces some new security permissions that allow you to set up server-level roles providing only the access administrators need to perform their duties, helping establish separation of duties. These new server-level permissions are:
1. CONNECT ANY DATABASE
2. IMPERSONATE ANY LOGIN
3. SELECT ALL USER SECURABLES
These new permissions allow you to set things up so database administrators can perform their duties without seeing the underlying user data. Additionally, you can use these server permissions to allow your middle-tier applications to impersonate any login except those with high-level permissions such as sa or sysadmin.

12. Query Handling Enhancements : SQL Server 2014 adds a new cardinality estimator. The new estimator should provide more consistent and predictable query performance, as well as more accurate cardinality estimates, so better query plans can be chosen.
For a while SQL Server has had a mechanism to automatically update statistics when the database engine decides they are out of date. SQL Server 2014 enhances the algorithm that determines when statistics are out of date, which means statistics are invalidated sooner, triggering the update-statistics process more often. By updating statistics more often, SQL Server can create better-optimized query plans.
SQL Server 2014 also increases the performance of the SELECT INTO statement by allowing it to perform parallel operations when inserting rows into a table.


MS SQL Server 2012 : Features.

July 24, 2014

1. Always On : A feature that enables you to set up both high availability and disaster recovery for databases. The new AlwaysOn feature combines the powers of clustering and mirroring into one high-availability option, while also allowing you to interact with the secondary databases, something that clustering and mirroring do not allow. There are two types of Always On:
     1. Always On Availability Groups : Availability groups provide an integrated set of options including automatic and manual failover of a logical group of databases, support for up to four secondary replicas, fast application failover, and automatic page repair.
     2. Always On Failover Cluster Instances : Enhances the SQL Server failover clustering feature and supports multisite clustering across subnets, which enables cross-data-center failover of SQL Server instances. Faster and more predictable instance failover is a key benefit that enables faster application recovery.

2. Contained Database : The Contained Database feature intends to reduce or eliminate the dependencies that a database has on the SQL Server instance, making it easier to migrate a database to a new instance with less of the work involved in reproducing and validating those dependencies. A contained database basically includes all database settings and metadata within itself, resulting in no configuration dependencies on the instance of the SQL Server Database Engine where the database is installed. Users are able to connect to a contained database without authenticating a login at the Database Engine level. Errors related to orphaned users are no longer an issue, since a contained database user can be created without an associated login; authentication can occur at the database level. Contained database users can be Windows or SQL Server authentication users. A contained database user can access only contained database objects; they cannot access system databases or server objects. Metadata is stored in the contained database, not in system databases. The containment setting of a database can be NONE, PARTIAL, or FULL, but only NONE and PARTIAL are supported in SQL Server 2012:
     1. None : The default setting, where all objects inside or outside the database boundary are managed by the SQL Server instance.
     2. Partial : Some objects are managed by the database while other, uncontained objects are managed by the SQL Server instance. Use the DMV sys.dm_db_uncontained_entities to check the uncontained objects or features of your database (a partial-containment sketch follows below).
     3. Full : Full containment is the condition where the database handles everything by itself and does not require the SQL Server instance at all. This setting is not yet supported by SQL Server.
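
A hedged sketch of enabling partial containment and creating a contained user (names and password are illustrative):

EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;

CREATE DATABASE AppDB CONTAINMENT = PARTIAL;
GO
USE AppDB;
CREATE USER AppUser WITH PASSWORD = N'Str0ng!Passw0rd';  -- contained user, no server login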

3. Database Audit : Auditing an instance of the SQL Server Database Engine or an individual database involves tracking and logging events that occur on the Database Engine. Server-level auditing is now available in all SQL Server 2012 editions; database-level auditing is available only in the Enterprise, Developer, and Evaluation editions. Auditing is more resistant to failures of the audit destination and can recover in most cases once the destination becomes available again. This is advantageous for several reasons:
1. You can create multiple audits and targets, and auditing performance is better than traces.
2. An audit restarts with the server instance, so it persists its state.
3. An audit is resilient: authorized users can turn off a trace, but auditing will record who stopped an audit, preventing some kinds of insider sabotage.

4. Windows Server Core Support : Core is the GUI-less version of Windows that uses the command prompt and PowerShell for user interaction. It has a much lower footprint (50% less memory and disk space utilization), requires fewer patches, and is more secure than the full install. Starting with SQL Server 2012, it is supported for SQL Server. Windows Server Core is a stripped-down, minimal-configuration installation option.
     1. More secure – Because of the limited feature set, there is a reduced surface area for potential attacks, especially for servers that are constantly connected to the Internet.
     2. Relatively better performance – With Windows Server Core, the only services available are those that you turn on.
     3. Reduced management – With fewer components come fewer patches to install, and fewer patches to install mean reduced downtime.

5. Column Store Indexes : This helps improve the performance of data warehouse queries. It stores data in a per-column manner rather than classical row storage: the columnstore index uses a columnar data format, whereas traditional indexes, which use a B-tree structure, store rows of data together on a page. A columnstore stores all the data in a column on the same page. These columns are much easier to search: instead of a query searching all the data in an entire row, columnstore queries need only search a much smaller number of columns. This means major increases in search speed and savings in hard drive use. Additionally, columnstore indexes are heavily compressed, which translates to even greater memory savings and faster searches. Once you add a columnstore index to a table, though, you cannot delete, insert, or update the data, as it is READ ONLY (a restriction lifted in SQL Server 2014, as noted above).

6. User Defined Roles : This simplifies instance-wide administration and helps increase the security of the instance by letting you define different groups of people with different sets of permissions per their roles and responsibilities. DBAs have always had the ability to create custom database roles, but never server-wide ones. Now the DBA can create a role that has, say, read/write access on every database on the server, or any other custom server-wide role. Previous versions of SQL Server did not allow user-defined server-level roles; in SQL Server 2012 we can create user-defined server roles and add server-level permissions to them. We can create a user-defined server role either using SSMS (SQL Server Management Studio) or by writing T-SQL commands (CREATE SERVER ROLE, ALTER SERVER ROLE, and DROP SERVER ROLE). SQL Server also allows you to create these user-defined roles in a hierarchical manner, which means one user-defined server role can belong to another user-defined server role, and so on.
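
A brief sketch (role, permission, and login names are illustrative):

CREATE SERVER ROLE MonitoringRole;
GRANT VIEW SERVER STATE TO MonitoringRole;
ALTER SERVER ROLE MonitoringRole ADD MEMBER [DOMAIN\MonitorLogin];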

7. Sequence Objects : A sequence is a user-defined, schema-bound object that generates a sequence of numeric values (in ascending or descending order) according to its specification. Unlike identity columns, a sequence is created independent of any table; it can be referenced by applications directly and can be shared by many rows or many tables. The difference between sequences and identity lies in the way they are stored: identity relies on the existence of a table, so it is stored along with the properties of that table, while sequences are stored independently of tables. Sequences, unlike identity, also have the advantage of generating new increment values during an UPDATE statement. Finally, because sequence values can be generated outside an insert statement, they can be used in multiple columns within a given table.
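
A sketch of creating and consuming a sequence (names are illustrative):

CREATE SEQUENCE dbo.OrderNumberSeq AS INT START WITH 1000 INCREMENT BY 1;

SELECT NEXT VALUE FOR dbo.OrderNumberSeq AS NextOrderNumber;

-- The same sequence can also feed a column default
ALTER TABLE dbo.Orders
    ADD CONSTRAINT DF_Orders_OrderNumber
    DEFAULT (NEXT VALUE FOR dbo.OrderNumberSeq) FOR OrderNumber;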

8. Pagination : Pagination is the best way to display a large result set by splitting it up. Pagination is possible using the OFFSET and FETCH clauses. This capability is very intuitive and is applicable for returning a data set one "page" (batch of rows) at a time. The OFFSET value can be expressed as an integer variable, and so can the FETCH NEXT value, which allows easy and configurable data-set paging for web/reporting applications. It also improves performance (because it picks up only certain records from the database) and reduces code and effort. Now developers can do data paging not only from the front end (.NET) but also from the back end.
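
A hedged sketch of fetching page 3 with 10 rows per page (table and column names are illustrative):

DECLARE @PageNumber INT = 3, @PageSize INT = 10;

SELECT OrderID, OrderDate
FROM dbo.Orders
ORDER BY OrderDate DESC
OFFSET (@PageNumber - 1) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY;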

9. Error Handling : SQL Server 2012 provides a new and flexible way of doing error handling with the additional THROW keyword, making it effective and efficient. RAISERROR cannot always report the error properly: it displays the message text correctly, but details such as the error number and line number can be incorrect. The THROW keyword returns more accurate results, such as the original error number and line number, to the caller, and it also reduces code. You can use the new THROW statement in two ways:
1. As an alternative to RAISERROR,
2. As a way to re-throw the original error that occurred.
It is important to note that all exceptions raised using the THROW statement will have a severity of 16, and the statement before the THROW statement must be terminated with a semicolon (;).
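
A minimal sketch of the re-throw pattern:

BEGIN TRY
    SELECT 1 / 0;  -- raises a divide-by-zero error
END TRY
BEGIN CATCH
    PRINT 'Logging the failure...';  -- note the semicolon before THROW
    THROW;  -- re-throws the original error with its true number and line
END CATCH;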

10. SSDT Environment (SQL Server Data Tools) : An integrated environment that allows developers to work with Visual Studio and BI projects and also access many of the capabilities previously found in SQL Server Management Studio. SSDT provides functionality for both data-tier and application-tier developers to perform all their development work against any SQL Server platform within Visual Studio. The goal is to take the tasks and features routinely used by developers in SSMS and bring them over into SSDT. SSDT is a replacement for BIDS that covers the Database Services, Analysis Services, Reporting Services, and Integration Services project types. SSDT includes offline projects, object-creation interfaces (for tables, views, etc.), schema snapshot and compare, database version targeting (for SQL Server 2005, 2008, 2012, and SQL Azure), and refactoring with preview.

11. Troubleshooting in SSIS : More improvements to SSIS Catalog based logging. You can now set a server wide default logging level, capture data flow component timing information, and row counts for all paths within a data flow. In this release, standard reports are available in SQL Server Management Studio to help you troubleshoot Integration Services packages that have been deployed to the SSISDB catalog. These operations reports help you to view package execution status and identify the cause of execution failures.

12. SSIS CDC Support : SSIS 2012 introduces new control flow and data flow tasks for managing CDC processes and for reading data from CDC tables. The control flow task is the CDC Control task, which allows an ETL developer to control the lifecycle of CDC processes. There are two CDC data flow components. The first is the CDC source, which allows you to open a CDC change tracking log table and read the rows into the pipeline. The second is the CDC splitter, which separates the rows in a change tracking table into three distinct pipelines: new rows from the change tracking table, updated rows (with the before and after values from the updates), and deleted rows.

13. SSIS parameter support : You can create project parameters at the project level and package parameters at the package level. Project parameters are used to supply any external input the project receives to one or more packages in the project. Package parameters allow you to modify package execution without having to edit and redeploy the package.

14. Data Quality Services (DQS) : DQS helps you build a knowledge base for your data, which you can then use to do data cleansing. This new feature is used for improving and maintaining the quality of your data across the enterprise. It consists of two components: Data Quality Server and Data Quality Client. DQS is a knowledge-based solution that provides both interactive and automated/batch approaches to managing the integrity and quality of data. DQS also supports using reference data from Windows Azure Marketplace, enabling reference data providers to offer reference data services through the Marketplace. Data Quality Services lets you resolve data quality issues before the data is presented to Master Data Services. Data Quality Services is part of the Enterprise and Business Intelligence editions.

15. Tabular Model (SSAS) : The tabular model is a new type of Analysis Services database structure that SQL Server 2012 supports. When we create a project based on the tabular model, SQL Server Data Tools (SSDT) adds a Model.bim file to the project and creates a workspace database on the Analysis Services instance (installed in tabular mode) that we specify. It uses this workspace database as temporary storage for data while you develop the model by importing data from the underlying data sources and designing objects that organize, enhance, and secure the data. It is faster to develop and less expensive to use in terms of time, resources, and skill requirements.

16. Power View : An interactive, web-based data exploration, visualization, and presentation technology delivered as a Silverlight browser-based reporting tool. The core objective of Power View is to present the modeled data in a fun and meaningful way, using highly interactive visualizations, animations, smart querying, and rich storyboarding presentation capabilities. To use Power View, you need to have SharePoint 2010 installed and configured, and SQL Server 2012 Reporting Services configured in SharePoint integrated mode. Power View is available in the Enterprise Edition and the new Business Intelligence edition.

17. Enhanced PowerShell Support : SQL Server 2012 also requires that PowerShell 2.0 and both Microsoft .NET Framework 3.5 SP1 and Microsoft .NET Framework 4.0 are installed. Unlike SQL Server 2008 R2, SQL Server 2012 won't necessarily install the required software for you. Support for Windows PowerShell has been extended in SQL Server 2012: the sqlps utility now snaps into the PowerShell 2.0 environment rather than being its own PowerShell 1.0 mini-shell. This change is a big win for integrating Windows and SQL Server management actions. SSAS and PowerPivot now offer PowerShell cmdlets, and configuration of SSRS's SharePoint integration is possible with SharePoint utilities or with PowerShell.

18. SQL Azure Enhancements : The ability to back up SQL Azure databases has been added to SQL Azure: you can use the new copy feature to make SQL Azure-based database backups. Being copies of the database, they do count toward the SQL Azure limit of 150 databases. As part of SQL Server 2008 R2, SQL Server Management Studio was modified to be able to connect to SQL Azure. SQL Azure Data Sync is a cloud-based data synchronization service built on the Microsoft Sync Framework. It can synchronize data between on-premises SQL Server systems and SQL Azure in the cloud, replicate data to remote offices, and support scheduled synchronization and conflict handling for duplicate data. With SQL Azure Reporting Services, reports can be created using BIDS, published to SQL Azure, and managed using the cloud-based Windows Azure Developer Portal.

19. Big Data Support : Microsoft is working with HortonWorks to develop bi-directional connectors for Hadoop and SQL Server. The SQL Server connector for Apache Hadoop lets customers move large volumes of data between Hadoop and SQL Server 2008 R2 or SQL Server 2012. There will also be a SQL Server Parallel Data Warehouse (PDW) connector for Hadoop that transfers data between Hadoop and SQL Server PDW. These new connectors will enable customers to work with both structured SQL Server data and unstructured data from Hadoop. Hadoop will be used in conjunction with SQL Server's relational and analytic capabilities, enabling enterprises to deploy Hadoop implementations alongside their existing IT systems. This will extend the types of data that you can use in your applications, similarly to how SQL Server Analysis Services (SSAS) does with the SQL Server relational database engine.

20. File Table : The FILESTREAM feature, introduced in 2008, integrates the relational database engine with the Windows file system to provide efficient storage and management of unstructured data while maintaining transactional consistency between structured and unstructured data. FileTable is a new feature introduced in 2012 that builds on the existing FILESTREAM functionality. FileTable enables non-transactional access to file data: the data can be accessed through SQL in a transactional way, as well as by the Windows API as if it were an ordinary file share. FileTable basically exposes SQL tables as folders that can be accessed through Windows Explorer. The directory structure and file attributes are stored in the table as columns. Files can be bulk-loaded, updated, and managed in T-SQL like any other column, and SQL Server supports backup and restore for them as well.


MS SQL Server 2008 R2 : Features.

July 24, 2014

1. Multi Server Dashboards : While SQL Server Management Studio could always connect to multiple servers, each was managed independently with no central view of all of them. Now with SQL Server 2008 R2, dashboards showing combined server data can be created. The release includes a new Utility Explorer, part of SSMS, to meet this need. The Utility Explorer lets you create a SQL Server Utility Control Point where you can enlist multiple SQL Server instances to be managed; it can manage as many as 25 SQL Server instances and displays consolidated performance, capacity, and asset information for all the registered servers. Enhancements to Policy-Based Management further boost SQL Server 2008 R2's multi-server management capabilities.

2. SQL Server 2008 R2 Datacenter : The new edition parallels Windows Server Datacenter Edition, with support for up to 256 logical processors, high numbers of instances, and as much memory as the operating system will support. This enables greater scalability in the x64 line than ever before. The Datacenter edition has no memory limitation and offers support for more than 25 instances.

3. SQL Server 2008 R2 Parallel Data Warehouse : Parallel Data Warehouse specializes in handling extremely large amounts of data. This new version uses massively parallel processing to spread large tables over multiple SQL nodes. SQL Server 2008 R2 Parallel Data Warehouse is essentially the SQL Server equivalent of the Windows Server High Performance Computing (HPC) edition. Scalability in PDW means handling tens of terabytes of data and then moving to hundreds of terabytes.

4. StreamInsight : StreamInsight is a near real-time event monitoring and processing framework. It’s designed to process thousands of events per second, selecting and writing out pertinent data to a SQL Server database. StreamInsight is a programming framework and doesn’t have a graphical interface. This technology is ideal for applications that need to process high volumes of event stream data with no latency. There are essentially five sections of the Microsoft StreamInsight technology: event sources, input adapters, the StreamInsight engine, output adapters, and event targets. It’s available only in SQL Server 2008 R2 Datacenter Edition.

5. Master Data Services : Master Data Services (MDS) is both a concept and a product. The concept of a master data service is that there is a central data gatekeeper for core business data; a master data service application would ensure, for example, that all tables hold only one correct address. Master Data Services gives you the ability to create a master data definition for the enterprise, mapping and converting data from all the different data sources into that central repository. You can use Master Data Services as a corporate data hub, where it serves as the authoritative source for enterprise data. MDS provides a central portal from which administrators can create and update master data members and hierarchies, with the goal of ensuring data consistency across the enterprise. Master Data Services is available in SQL Server 2008 R2's Enterprise Edition and Datacenter Edition.

6. PowerPivot for SharePoint & Excel : PowerPivot is an end-user tool that works in conjunction with SharePoint, SQL Server 2008 R2, and Excel 2010 to process large amounts of data in seconds. PowerPivot works like an Excel PivotTable and includes analytical capabilities, adding powerful data analysis features to Excel. Built-in data compression enables PowerPivot for Excel to work with millions of rows and still deliver subsecond response time. PowerPivot for Excel can connect to SQL Server 2008 databases, but it can also connect to previous versions of SQL Server as well as other data sources, including Oracle and Teradata, and even SSRS reports. It also includes a new cube-oriented calculation language called Data Analysis Expressions (DAX), which extends Excel's data analysis capabilities with the multidimensional capabilities of the MDX language.
PowerPivot for SharePoint enables the sharing, collaboration, and management of PowerPivot worksheets. PowerPivot for SharePoint is supported by SQL Server 2008 R2 Enterprise Edition and higher.

7. Data-Tier Application : A Data-Tier Application is an object that stores all the needed database information for a project, such as logins, tables, and procedures, in one package that can be consumed by Visual Studio. By creating a Data-Tier Application, a SQL Server package version can be saved with each Visual Studio build of your application, allowing application code builds to be married to a database build in an easily managed way. At a high level, the Data-Tier Application provides development, deployment, and update capabilities for an application database. The unit of deployment is a DAC package, a file that contains everything needed to deploy (or update) the application database. The DAC package can be created from a Visual Studio 2010 Data-Tier project or from SQL Server 2008 R2 Management Studio (SSMS).

8. Unicode Compression : SQL Server 2008 R2 uses a new algorithm, the Standard Compression Scheme for Unicode, for Unicode storage. This reduces the amount of disk space used by Unicode characters. The new format is applied automatically and is managed by the SQL Server engine. ASCII characters will occupy only a single byte even in NCHAR and NVARCHAR columns; only those characters that require two bytes will actually use them. It is enabled as part of Page or Row data compression. If you have an existing table or index that is using data compression in SQL Server 2008, you will have to rebuild the index(es) after you upgrade to SQL Server 2008 R2 to get Unicode compression on the existing data. If you insert a new row into a table or update a Unicode column in an existing row, that column will get Unicode compression.

9. SQL Server Utility Control Point (UCP) : The new SQL Server Utility is a repository object for centrally controlling multiple SQL Server instances; performance data and configuration policies can be stored in a single Utility. The Utility also includes an Explorer tool where multi-server dashboards can be created. At a high level, the Utility Control Point provides a dashboard view of the overall health and resource utilization of the SQL Server instances and data-tier applications in your organization. The processing unit of the SQL Server Utility is the Utility Control Point (UCP), which collects resource utilization data (instance CPU utilization, computer CPU utilization, data and log file space utilization, and storage volume utilization) using the defined policy evaluation from all enrolled SQL Server instances and stores it in consolidated form in the Utility Management Data Warehouse (UMDW) database (called sysutility_mdw and created when you create the UCP). Data is collected from all enrolled instances every 15 minutes, and you can set the retention policy for the collected data: by default it is retained for one year, though you can change that to 1 month, 3 months, 6 months, or 2 years per your needs. You can create a UCP on SQL Server 2008 R2 or higher versions only, and it is available with the Datacenter, Enterprise, Developer, or Evaluation editions only.

10. Report Builder 3.0 : Report Builder is a tool set for developing rich reports that can be delivered over the web. Its features include the ability to create reports containing graphs, charts, tables, and printing controls; in addition, Report Builder supports drill-downs and sorting. Report Builder 3.0 adds map layers, which can hold spatial and analytical data and integrate with Microsoft Virtual Earth; support for adding sparklines and data bars to your reports; and shared datasets so that queries can be reused in multiple reports.


MS SQL Server 2008 : Features.

July 14, 2014

1. IntelliSense for Query Editing : DBAs and developers no longer have to remember all the syntax or browse online references. IntelliSense offers a few additional features besides just completing the word.

2. Multi Server Query : The ability to submit a batch of queries against multiple servers simultaneously. It allows you to administer multiple servers and consolidate the SQL Server registrations for all SQL Server users such as developers, DBAs, etc.

3. Activity Monitor : Used by database developers and administrators to get a quick overview of SQL Server 2008 system performance. There are many enhancements to Activity Monitor in SQL Server 2008, including a graphical display of Processor Time, Waiting Tasks, Database I/O, Batch Requests, Processes, Resource Waits, and Data File I/O, and also information about the most expensive queries.

4. Transparent Database Encryption : The ability to encrypt an entire database without having to change any code in your application, adding an additional layer to your data security. Transparent Data Encryption is the process of encrypting your databases at rest, transparently to the application. This feature is only available in the Enterprise and Developer editions of SQL Server 2008.

5. Policy Based Management: The ability to manage multiple servers from a central set of policies. You can write policies that will control things like naming standards, server configuration, import/export requirements, etc. By using a standard set of policies across all your SQL Server instances, you can consistently manage your servers and minimize your administrative costs to enforce those policies. Policy Management is available in all editions of SQL Server 2008.

6. Ability to Hot-Add CPUs : SQL Server 2005 introduced the ability to hot-add memory, and SQL Server 2008 extends this with the ability to hot-add CPUs on supported platforms with no downtime, enhancing scalability. This ability will also be an important foundation feature for the dynamic data center.

7. Data Compression : Data compression first became available in SQL Server 2005 with Service Pack 2, where a new storage format for storing decimal and numeric data was introduced. This concept has been extended in SQL Server 2008 to all fixed-length data types such as integer, char, and float. Data compression reduces storage costs and increases query performance by reducing I/O and increasing buffer-hit rates. SQL Server 2008's data compression enables you to compress data stored in the database; this reduces storage requirements and can actually improve the performance of workloads that have high I/O requirements. SQL Server 2008 also supports compressing backups, and it supports both row and page compression for both tables and indexes.

8. Log Stream Compression : SQL Server 2008’s new support for log stream compression enhances the performance of database mirroring by automatically compressing the log stream that’s sent between the database mirroring participants. This minimizes network bandwidth use.

9. Transparent Failover : Database mirroring in SQL Server 2005 enables clients to automatically redirect to a mirrored server in the event of a failure in the principal database. However, it requires special client configuration to specify the mirror server. SQL Server 2008 will allow clients to automatically fail over to a mirrored server without requiring any manual configuration on the client.

10. Language-Integrated Query : LINQ is Microsoft’s latest application data-access technology. It enables Visual Basic and C# applications to use set-oriented queries that are developed in the native language, rather than requiring that the queries be written in T-SQL. LINQ uses the native SqlClient to connect to SQL Server, and its development-time connection lets developers create strongly typed queries.

11. Enhanced Office Integration : SQL Server 2008’s enhanced integration with Microsoft Office 2007 enables users to create database-enabled reports directly from either Microsoft Office Word 2007 or Excel 2007. These reports can then be published and shared by using Microsoft Office SharePoint Server 2007.

12. Resource Governor : When you are running many different kinds of processes on your SQL Server machine, you need a way to control the resource-intensive ones so they don't consume all the resources of the machine. With the Resource Governor, you can limit CPU and memory resources for those sessions identified as using excessive resources. By using the Resource Governor available with SQL Server 2008, a DBA can better control resource usage to provide balanced throughput across all sessions. The Resource Governor is only available with the Enterprise and Developer editions of SQL Server 2008.

13. Table Compression : Table compression helps conserve the amount of disk space consumed by a table and its indexes. This feature is only available in the Enterprise and Developer editions of SQL Server 2008. There are two different types of compression that can be used: row and page. Row compression compresses the fixed-length fields to save additional space in a row. Page compression first does row compression and then performs prefix compression and dictionary compression. For table compression to improve the elapsed time of your T-SQL statement, the cost savings associated with performing fewer I/Os needs to be greater than the additional CPU cost it incurs.
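
A hedged sketch of estimating and then applying page compression to an illustrative table:

-- Estimate the space savings before committing to the rebuild
EXEC sp_estimate_data_compression_savings
     @schema_name = N'dbo', @object_name = N'Sales',
     @index_id = NULL, @partition_number = NULL,
     @data_compression = N'PAGE';

ALTER TABLE dbo.Sales REBUILD WITH (DATA_COMPRESSION = PAGE);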

14. Backup Compression : New in the SQL Server 2008 Enterprise and Developer editions is database backup compression. With backup compression, database backups are compressed as the backup file is being written, which can save valuable disk space if a disk device is the target for your backups. While the backup compression feature is only available in the Enterprise Edition, restoring a compressed database backup can be done on all editions of SQL Server 2008. Backup compression also reduces the duration of both backup and restore operations, which can be very helpful with very large databases.

15. Change Data Capture : You can track changes to your database over time using a new feature called Change Data Capture. This new change-tracking feature is only available in the Enterprise and Developer editions. Once a table in a database is enabled for change data capture, all changes to that table are tracked by storing them in a change table. The change table will contain one record for every INSERT that can be used to identify column values for the inserted records. Each time a DELETE is performed, the change table will contain one record for each DELETE showing the values in each column prior to the DELETE. When an UPDATE is performed against a change-data-capture-enabled table, two records are created in the change table: one with the updated column values and one with the original column values. By using change data capture, you can track changes that have occurred over time to your table. This kind of functionality is useful for applications, like a data warehouse load process, that need to identify changes so they can correctly apply updates and track historical changes over time.
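
Enabling it is a pair of stored-procedure calls; the schema and table names below are illustrative:

EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Orders',
     @role_name     = NULL;  -- NULL means access is not gated by a role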

16. MERGE SQL Statement : With the introduction of the MERGE statement, developers can more effectively handle common data warehousing scenarios, such as checking whether a row exists and then executing an insert or update. Using a single statement, we can add or update records in a database table without explicitly checking for their existence first. MERGE is an atomic statement combining INSERT, UPDATE, and DELETE operations based on conditional logic. It is useful in both OLTP and data warehouse environments: in OLTP, for merging recent information from an external source, and in a DW, for incremental updates of facts and slowly changing dimensions.
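
A short sketch of upserting from a staging table (table and column names are illustrative):

MERGE dbo.Customers AS target
USING dbo.CustomerStaging AS source
    ON target.CustomerID = source.CustomerID
WHEN MATCHED THEN
    UPDATE SET target.Name = source.Name
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerID, Name) VALUES (source.CustomerID, source.Name);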

17. Filtered Indexes : A filtered index is an optimized non-clustered index that lets us define a filter predicate, a WHERE clause, while creating the index. Filtered index support lets you more efficiently index columns containing sparse data. The functionality lets SQL Server 2008 indexes ignore rows with no data and include only those rows with valid data values, which can improve access time for sparsely populated indexes by an order of magnitude. A filtered index indexes only a portion of the rows in a table; applying a filter to the index improves query performance and reduces index maintenance and storage costs compared with full-table indexes.
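
For example, a sketch that indexes only open orders (names are illustrative):

CREATE NONCLUSTERED INDEX IX_Orders_Open
ON dbo.Orders (CustomerID, OrderDate)
WHERE Status = N'Open';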

18. Debugging : The integrated debugging feature lets you set breakpoints, single-step through T-SQL code, and view the Locals, Call Stack, and Quick Watch windows. With this feature you can navigate through your code line by line, watch the current state of variables in the program, observe what the code is doing in an output window, view error windows, navigate backward and forward through each line, and observe system objects such as the call stack and threads. You can even place breakpoints and tell the debugger to run the code up to that point and then stop.

19. Database Mirroring : Log Performance Enhancements:
1. Compression of stream data for which at least a 12.5 percent compression ratio can be achieved.
2. Write-ahead on the incoming log stream on the mirror server.
3. Improved use of log send buffers.
4. Page read-ahead during the undo phase.
5. Automatic Recovery from Corrupted Pages: A database mirroring partner running on SQL Server 2008 or later versions automatically tries to resolve certain types of errors that prevent reading a data page.

20. Data Collector : SQL Server 2008 introduces a data collector that you can use to obtain and save data gathered from several sources. The Performance Data Collector is fully supported by the Enterprise and Standard editions of SQL Server 2008. It automatically collects the important performance data DBAs need to identify and fix performance issues, stores the data in a single format in a single location, and includes reporting that allows DBAs to easily interpret the data so they can identify relevant solutions.
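
Assuming the data collector has been configured, the system collection sets can be listed from msdb:

SELECT collection_set_id, name, is_running
FROM msdb.dbo.syscollector_collection_sets;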



MS SQL Server 2005 : Features.

July 14, 2014

1. Database Mirroring : A high-availability feature that users can employ to achieve fault tolerance and to prevent server outages and data loss. The principal database continuously sends transaction log records to a mirror database on a separate SQL Server instance. A third SQL Server instance can then be used as a witness to monitor the interaction between the principal and the mirror databases. SQL Server 2005 enables us to configure database mirroring under mixed mode authentication using SQL Server logins, with the added security of using certificates.
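
A minimal sketch of establishing the partnership; the server names, port, and database name are hypothetical, and mirroring endpoints must already exist on each instance:

-- on the mirror server first
ALTER DATABASE SalesDB SET PARTNER = 'TCP://principal.corp.local:5022';
-- then on the principal server
ALTER DATABASE SalesDB SET PARTNER = 'TCP://mirror.corp.local:5022';
-- optionally, on the principal, add a witness to enable automatic failover
ALTER DATABASE SalesDB SET WITNESS = 'TCP://witness.corp.local:5022';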

2. Database Snapshots : A database snapshot is essentially an instant read-only copy of a database, and it is a good option for many reporting solutions. A snapshot can be considered a point-in-time image of a database. At any later time, this snapshot can be used to revert the database to the point at which the snapshot was taken. Snapshots can also be used to preserve data for financial statistics and analysis.
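
A minimal sketch; the database name, logical file name, and path are hypothetical:

CREATE DATABASE SalesDB_Snap
ON (NAME = SalesDB_Data,   -- logical name of the source data file
    FILENAME = 'D:\Snapshots\SalesDB_Snap.ss')
AS SNAPSHOT OF SalesDB;

-- revert the database to the snapshot, discarding later changes
RESTORE DATABASE SalesDB FROM DATABASE_SNAPSHOT = 'SalesDB_Snap';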

3. Database Mail : An enhanced e-mail solution available in the database engine. With Database Mail, there is no longer a dependency on Microsoft Outlook or MAPI e-mail clients; Database Mail uses standard SMTP to send e-mail messages. These messages may contain query results and attachments. In addition, the e-mail process runs outside of the database engine space, which means that messages can continue to be queued even when the database engine has stopped.
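
A minimal sketch of sending a message with query results; the profile name, recipient, and query are hypothetical, and a mail profile must already be configured:

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'OpsProfile',
    @recipients   = N'dba@corp.local',
    @subject      = N'Row count report',
    @query        = N'SELECT COUNT(*) FROM SalesDB.dbo.Orders;';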

4. Index Enhancements : INCLUDED columns in indexes allow adding non-key columns to an index, which lets more queries be satisfied entirely from the index. These enhancements also add the ability to disable an index. Disabling an index stops it from being maintained and used by the SQL Server engine; the database engine de-allocates the storage space used by the index but keeps its metadata. The ALTER INDEX statement is used to rebuild an index, and only after a rebuild can a disabled index be used again. SQL Server 2005 also allows online index operations, which enable applications to access the table and perform insert, update and delete operations while the index is rebuilding.
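
A minimal sketch of all three features; the table, column and index names are hypothetical, and online rebuilds require Enterprise Edition:

-- non-key columns ride along in the leaf level of the index
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, TotalDue);

ALTER INDEX IX_Orders_CustomerID ON dbo.Orders DISABLE;   -- stop maintaining it
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;   -- re-enables the index
-- an online rebuild keeps the table available for DML during the operation
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD WITH (ONLINE = ON);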

5. Generate Script Wizard : The wizard can script out all of the objects in a database, including permissions, collation, and constraints. Use this handy feature to save a database schema for versioning or to create duplicate databases. The wizard is very flexible and works in schema-only, data-only, or schema-and-data modes. It generates a single SQL script file which can be used to recreate the contents of a database by manually executing the script on a target server.

6. Object Explorer : A dropdown menu lets you connect to and manage the Database Engine, Analysis Services, Reporting Services, SQL Server Mobile, and SSIS. Selecting an option adds a new set of management nodes to the Object Explorer window.

7. Peer-to-Peer Transactional Replication : The new peer-to-peer model provides bidirectional replication. Peer-to-peer transactional replication is designed to let changes made in different databases be replicated back and forth between multiple servers. If one server is unavailable, you can set up the system to route transactions to the remaining servers.

8. .NET Framework CLR Integration : T-SQL is a procedural language, not an object-oriented language. The integration of the .NET CLR with SQL Server 2005 enables the development of stored procedures, user-defined functions, triggers, aggregates and user-defined types using any of the modern object-oriented .NET languages such as VB.NET and C#. .NET languages support complex logic, have better computation capabilities, provide easier access to external resources and facilitate code reuse.
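
A minimal sketch of registering a CLR function; the assembly, its path, and the class and method names are hypothetical, and CLR integration must be enabled first:

EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;

CREATE ASSEMBLY TextHelpers
FROM 'D:\Assemblies\TextHelpers.dll'
WITH PERMISSION_SET = SAFE;

-- expose a .NET method as a T-SQL function
CREATE FUNCTION dbo.RegexIsMatch (@input NVARCHAR(4000), @pattern NVARCHAR(200))
RETURNS BIT
AS EXTERNAL NAME TextHelpers.[TextHelpers.Functions].RegexIsMatch;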

9. Hyper-Threading : Hyper-threading is a CPU technology from Intel in which each physical processor in the system presents two logical processors. Each logical processor is capable of simultaneously executing separate threads. The goal of hyper-threading is to provide better resource utilization for multithreaded applications running on a single machine.

10. XML integration : Introduces the new XML data type. You can store full XML documents in this data type, and you can enforce that the documents stored in the database are well-formed. Additional enhancements include the ability to query XML documents and create indexes on the XML data type. The integrated bi-directional mapping between XML and relational data provides support for triggers on XML, replication and bulk load of XML data, and data access support via SOAP.
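
A minimal sketch of the xml data type and an XQuery method call; the document shape and names are hypothetical:

DECLARE @doc XML =
N'<order id="42"><item sku="A1" qty="3"/></order>';

SELECT @doc.value('(/order/@id)[1]', 'INT')       AS OrderID,
       @doc.value('(/order/item/@qty)[1]', 'INT') AS Qty;

-- xml columns can also be indexed, e.g.
-- CREATE PRIMARY XML INDEX PXML_Orders_Doc ON dbo.Orders (OrderDoc);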

11. Service Broker : This feature gives you the ability to create asynchronous, message-based applications in the database entirely through T-SQL. The database engine guarantees message delivery and message order consistency, and handles message grouping. In addition, Service Broker gives you the ability to send messages between different SQL Server instances. Service Broker is also used by several other features in SQL Server 2005. For example, you can define Event Notifications in the database to send a message to a queue when someone attempts to alter a table structure, or if there is a string of login failures.
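
A minimal single-database conversation sketch; all object names and the payload are hypothetical, and the database must have the ENABLE_BROKER option set:

CREATE MESSAGE TYPE OrderMsg VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT OrderContract (OrderMsg SENT BY INITIATOR);
CREATE QUEUE dbo.TargetQueue;
CREATE QUEUE dbo.InitiatorQueue;
CREATE SERVICE TargetService ON QUEUE dbo.TargetQueue (OrderContract);
CREATE SERVICE InitiatorService ON QUEUE dbo.InitiatorQueue (OrderContract);

DECLARE @handle UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE InitiatorService
    TO SERVICE 'TargetService'
    ON CONTRACT OrderContract
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @handle
    MESSAGE TYPE OrderMsg (N'<order id="42"/>');

-- the target side then picks the message up from its queue
RECEIVE TOP (1) conversation_handle, CAST(message_body AS XML)
FROM dbo.TargetQueue;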

12. TRY…CATCH : TRY/CATCH helps you separate the action code from the error-handling code. The code meant for the action is enclosed in the TRY block and the code for error handling is enclosed in the CATCH block. In case the code within the TRY block fails, control automatically jumps to the CATCH block, letting you roll back the transaction and resume execution. In addition, the CATCH block captures and provides error information such as the number, message text, state, severity and transaction state of an error. Deadlocks, which were virtually impossible to handle at the database level in SQL Server 2000, can now be handled with ease.
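
A minimal sketch; the table is hypothetical:

BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF XACT_STATE() <> 0
        ROLLBACK TRANSACTION;
    SELECT ERROR_NUMBER()   AS ErrNum,
           ERROR_SEVERITY() AS Severity,
           ERROR_STATE()    AS ErrState,
           ERROR_MESSAGE()  AS ErrMsg;
END CATCH;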

13. PIVOT and UNPIVOT : PIVOT and UNPIVOT are used to summarise report data. PIVOT rotates rows into columns, while UNPIVOT rotates columns into rows. PIVOTing and UNPIVOTing allow a user to shred the values of a table-valued expression into separate columns. PIVOT provides syntax that is simpler and more readable than what would otherwise be specified in a complex series of SELECT…CASE statements.
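
A minimal sketch turning quarterly rows into columns; the table and columns are hypothetical:

SELECT SalesYear, [1] AS Q1, [2] AS Q2, [3] AS Q3, [4] AS Q4
FROM (SELECT SalesYear, SalesQuarter, Amount FROM dbo.Sales) AS src
PIVOT (SUM(Amount) FOR SalesQuarter IN ([1], [2], [3], [4])) AS pvt;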

14. CTE (Common Table Expressions) : A CTE returns a result set and works like a view; it can be joined with tables, views, etc., just like a table. Its scope is limited to the single statement that immediately follows it, and it can be referenced only within that statement. It simplifies complex queries and, most importantly, enables recursion. CTEs are not a replacement for temp tables or table variables. It is good practice to terminate the preceding statement with a semicolon before a CTE. The execution plan of a CTE is usually like that of a sub-query, though there are cases where it can differ. Use MAXRECURSION to prevent an infinite loop in a recursive CTE.
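
A minimal recursive sketch counting from 1 to 10:

;WITH Numbers (n) AS
(
    SELECT 1
    UNION ALL
    SELECT n + 1 FROM Numbers WHERE n < 10
)
SELECT n FROM Numbers
OPTION (MAXRECURSION 100);  -- safety cap against runaway recursion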

15. DDL Triggers : DDL triggers are defined at the server or database level and fire when DDL statements occur. This gives you the ability to audit when new tables, stored procedures, or logins are created. DDL triggers can be associated with creating, altering and dropping database objects such as tables and views, allowing database developers and administrators to control the use of CREATE, ALTER and DROP statements over database objects.
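
A minimal sketch using the EVENTDATA() function; the trigger name is hypothetical:

CREATE TRIGGER trg_BlockTableDrop
ON DATABASE
FOR DROP_TABLE
AS
BEGIN
    PRINT EVENTDATA().value(
        '(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(2000)');
    RAISERROR ('Dropping tables is not allowed here.', 16, 1);
    ROLLBACK;  -- undo the DROP TABLE
END;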

16. SQL Server Integration Services (SSIS) : SSIS is an ETL (Extract, Transform and Load) tool, which is very much needed for data warehousing applications. SSIS is also used to perform operations such as loading data based on need, performing transformations on the data such as calculations (sum, average, etc.), defining a workflow for how a process should flow, and performing day-to-day tasks. Usability of the Import/Export Wizard has been improved in multiple-table scenarios. A new data processing extension provides a graphical query designer that you can use to access InfoCubes and queries in Reporting Services reports for SAP NetWeaver Business Intelligence. To develop an SSIS package, you need to install the SQL Server Business Intelligence Development Studio.

17. SQL Server Reporting Services with Report Builder : SSRS includes all the development and management pieces necessary to publish end-user reports in HTML, PDF, Excel, and CSV formats. With Reporting Services built into SQL Server, expect to see product adoption rise quickly. In addition to the drag-and-drop report designer, SSRS includes an application called Report Builder that lets power users generate their own ad hoc reports. Another unique feature of SSRS is the ability to create report subscriptions, where users can request reports to be emailed to them automatically at various intervals. You can also add static images, such as company logos or other graphics, to reports in Report Builder.


MS SQL Server 2000 : Features.

July 14, 2014

1. Log Shipping : Used to copy and load transaction log backups from one database to one or more databases on a constant basis. This allows you to have a primary read/write database with one or more read-only copies of the database that are kept synchronized by restoring the logs from the primary database.
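
The underlying mechanism, as a hedged sketch; the database name and file paths are hypothetical:

-- on the primary: back up the transaction log on a schedule
BACKUP LOG SalesDB TO DISK = N'\\share\logs\SalesDB_1200.trn';

-- on the secondary: restore it, leaving the database readable
RESTORE LOG SalesDB
FROM DISK = N'\\share\logs\SalesDB_1200.trn'
WITH STANDBY = N'D:\Standby\SalesDB_undo.dat';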

2. XML Support : Return query result sets directly in XML format, and retrieve data from an XML document as if it were a SQL Server table.
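
Two minimal sketches of each direction; the table and document are hypothetical:

-- relational rows out as XML
SELECT CustomerID, CompanyName FROM Customers FOR XML AUTO;

-- an XML document queried like a table via OPENXML
DECLARE @h INT, @doc VARCHAR(1000);
SET @doc = '<root><cust id="1" name="Acme"/></root>';
EXEC sp_xml_preparedocument @h OUTPUT, @doc;
SELECT * FROM OPENXML(@h, '/root/cust', 1) WITH (id INT, name VARCHAR(40));
EXEC sp_xml_removedocument @h;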

3. Multiple SQL Server Instances : Support for running multiple instances of SQL Server on the same system. This allows you to simultaneously run one instance of SQL Server 6.5 or 7.0 along with one or more instances of SQL Server 2000. Each SQL Server instance runs independently of the others and has its own set of system and user databases, security configuration, and so on.

4. Cascading RI Constraints : Provides the ability to specify the action to take when a column referenced by a foreign key constraint is updated or deleted. You can still abort the update or delete if related foreign key records exist by specifying the NO ACTION option, or you can specify the new CASCADE option, which will cascade the update or delete operation to the related foreign key records.
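
A minimal sketch; the tables are hypothetical:

CREATE TABLE dbo.Customers (CustomerID INT PRIMARY KEY, Name VARCHAR(40));
CREATE TABLE dbo.Orders (
    OrderID INT PRIMARY KEY,
    CustomerID INT NOT NULL
        REFERENCES dbo.Customers (CustomerID)
        ON DELETE CASCADE   -- deleting a customer removes their orders
        ON UPDATE CASCADE   -- key updates propagate to the orders
);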

5. Text in Row Data : Provides a new text in row table option that allows small text and image data values to be placed directly in the data row, instead of requiring a separate data page. This can reduce the amount of space required to store small text and image data values, as well as reduce the amount of I/O required to retrieve rows containing small text and image data values.
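
A minimal sketch using the table option; the table name and the 256-byte limit are hypothetical choices:

-- store text values of up to 256 bytes directly in the data row
EXEC sp_tableoption N'dbo.Documents', 'text in row', '256';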

6. INSTEAD OF and AFTER Triggers : Provides the ability to define which AFTER trigger fires first and which fires last. INSTEAD OF triggers can be specified on both tables and views (AFTER triggers can still only be specified on tables). If an INSTEAD OF trigger is defined on a table or view, the trigger is executed in place of the data modification action for which it is defined.
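
A minimal sketch of setting trigger order; the trigger name is hypothetical:

-- make trgAudit fire first among the AFTER INSERT triggers on its table
EXEC sp_settriggerorder
    @triggername = 'trgAudit',
    @order       = 'First',
    @stmttype    = 'INSERT';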

7. Indexed Views : Also called a permanent or materialized view, an indexed view actually stores its result set permanently. The view must be created with the SCHEMABINDING option before a unique clustered index can be built on it.
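
A minimal sketch; the table and view names are hypothetical:

CREATE VIEW dbo.vActiveOrders
WITH SCHEMABINDING
AS
SELECT OrderID, CustomerID, Amount
FROM dbo.Orders
WHERE Status = 'A';
GO
-- the unique clustered index is what materializes the view
CREATE UNIQUE CLUSTERED INDEX IX_vActiveOrders ON dbo.vActiveOrders (OrderID);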

8. User-Defined Functions : Introduces the long-awaited support for user-defined functions. User-defined functions can take zero or more input parameters and return a single value—either a scalar value like the system-defined functions, or a table result. Table-valued functions can be used anywhere table or view expressions can be used in queries, and they can perform more complex logic than is allowed in a view.
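
Minimal sketches of both kinds; the names are hypothetical:

-- scalar function
CREATE FUNCTION dbo.fnNetPrice (@price MONEY, @taxRate FLOAT)
RETURNS MONEY
AS
BEGIN
    RETURN @price * (1 + @taxRate);
END;
GO
-- inline table-valued function, usable wherever a table is allowed
CREATE FUNCTION dbo.fnOrdersByCustomer (@custID INT)
RETURNS TABLE
AS
RETURN (SELECT OrderID, Amount FROM dbo.Orders WHERE CustomerID = @custID);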

9. Federated Database Servers : Introduces support for updatable, distributed partitioned views. These views can be used to partition subsets of the rows in a table across a set of instances of SQL Server. These partitioned views can be used to spread the processing of one table across multiple instances of SQL Server, each on a separate server.
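
A hedged sketch of a distributed partitioned view; the linked servers, databases, and table names are hypothetical, and each member table constrains its key range with a CHECK constraint so the optimizer can route queries to the right server:

CREATE VIEW dbo.AllOrders
AS
SELECT * FROM Server1.Sales.dbo.Orders_1To99999
UNION ALL
SELECT * FROM Server2.Sales.dbo.Orders_100000To199999;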


NAS vs. SAN and DAS

July 8, 2014

NAS is often contrasted with SANs, but NAS is actually under the “storage network” umbrella. The major difference is that the SAN is channel attached, and the NAS is network attached.

SAN technology is similar to but distinct from network attached storage (NAS) technology. While SANs traditionally employ low-level network protocols for transferring disk blocks, a NAS device typically works over TCP/IP and can be integrated fairly easily into home computer networks.

A NAS is a single storage device that operates on data files, while a SAN is a local network of multiple devices that operate on disk blocks.

A SAN commonly utilizes iSCSI and Fibre Channel interconnects. A NAS typically uses Ethernet and TCP/IP connections.

At a high level, Storage Area Networks (SANs) serve the same purpose as a NAS system. A SAN supplies data storage capability to other network devices. Traditional SANs differed from traditional NAS in several ways. Specifically, SANs often utilized Fibre Channel rather than Ethernet, and a SAN often incorporated multiple network devices or “endpoints” on a self-contained or “private” LAN, whereas NAS relied on individual devices connected directly to the existing public LAN.

The traditional NAS system is a simpler network storage solution, effectively a subset of a full SAN implementation. The distinction between NAS and SAN has grown fuzzy in recent times, as technology companies continue to invent and market new network storage products.


Today’s SANs sometimes use Ethernet, NAS systems sometimes use Fibre Channel, and NAS systems sometimes incorporate private networks with multiple endpoints. The primary differentiator between NAS and SAN products now boils down to the choice of network protocol. SAN systems transfer data over the network in the form of disk blocks (fixed-sized file chunks, using low-level storage protocols like SCSI) whereas NAS systems operate at a higher level with the file itself.

[Image: SAN vs. NAS vs. DAS comparison diagram]


Advantages & Disadvantages:

Although SAN solutions are more expensive and more complex to maintain compared to NAS, they have far more advantages. Let's mention some of them:

The main and undeniable advantage of SAN is its high performance. Traffic optimization is provided as early as the design stage of the enterprise network infrastructure, since a SAN is usually implemented in a separate network segment. This not only offloads the main communications network, but also secures data against unapproved access. Moreover, block-based data access substantially optimizes the performance of many resource-intensive applications, including databases, video processing, and so on.

Scalability. A SAN allows creating a huge, geographically distributed pool of storage devices, and reallocating storage within the network is transparent to users. Several SANs located in different parts of the world can easily be consolidated into a common network, so that servers can access storage volumes without being aware of the location of a specific resource. Thanks to that, SANs are often used in virtualization solutions.

Reliability. A SAN structure usually allows the use of several backup options, such as mirroring and replication of data between remote servers. The network can be organized so that if one or more nodes are disrupted, operation immediately moves to a reserve node, eliminating the possibility of data loss. For this reason, SAN storage is used when building high-availability (HA) networks.

Security. The SAN structure practically eliminates the possibility of unapproved access to the data by an unauthorized user.

The only, and very conditional, disadvantage of SAN technology is the relatively high (in comparison with NAS) cost of implementation and maintenance: highly qualified personnel are required to administer such a complex structure. However, when estimating the expenses of building storage, it is necessary to take into account that as the system scales up, the relative cost of maintaining a NAS will increase, while for a SAN it will decrease.

Thus, on the basis of the features listed, the following conclusions can be made:

NAS-based solutions match the needs and requirements of small, homogeneous enterprise networks where high-performance software is not used. In this case, the key factors for the choice are low cost and ease of maintenance.

SAN solutions are used when the key factors are reliability, fault tolerance, and high performance. The most critical functions are difficult to implement with NAS devices, which is why SAN is becoming the standard for HA applications.

The only thing that cannot be implemented by means of a SAN is a file server that can be accessed by clients using various platforms and operating systems. However, that is easy to achieve by means of a NAS. For such specific cases, hybrid systems exist that give access to data both at the block level and at the file level.


NAS: Network Attached Storage.

July 8, 2014


The advantages of NAS over conventional server-attached storage can be summed up in three words: economy, speed, and ease. The NAS device sits on the network and is optimized for a single purpose: to pump data to users efficiently without the overhead and complexity of general-purpose servers.

NAS devices can produce improved file access performance at a substantially lower cost than general-purpose network servers. When you factor in the additional cost savings from plug-and-play installation that literally takes just minutes, plus reduced ongoing management costs, NAS is a true bargain for network storage needs.

NAS challenges the traditional file server approach by creating systems designed specifically for data storage. Instead of starting with a general-purpose computer and configuring or removing features from that base, NAS designs begin with the bare-bones components necessary to support file transfers and add features “from the bottom up.”

Like traditional file servers, NAS follows a client/server design. A single hardware device, often called the NAS box or NAS head, acts as the interface between the NAS and network clients. These NAS devices require no monitor, keyboard or mouse. They generally run an embedded operating system rather than a full-featured NOS. One or more disk (and possibly tape) drives can be attached to many NAS systems to increase total capacity.

Clients always connect to the NAS head rather than to the individual storage devices. Clients generally access a NAS over an Ethernet connection. The NAS appears on the network as a single "node" that is the IP address of the head device. A NAS can store any data that appears in the form of files, such as email boxes, Web content, remote system backups, and so on.

Overall, the uses of a NAS parallel those of traditional file servers. NAS systems strive for reliable operation and easy administration. They often include built-in features such as disk space quotas, secure authentication, or the automatic sending of email alerts should an error be detected.


Proponents of NAS claim that NAS technology provides these advantages over traditional file servers:

  1. lower cost
  2. better security
  3. higher availability (less downtime)
  4. easier to use and administer

[Image: NAS architecture diagram]


Fibre Channel

July 8, 2014


Fibre Channel is a set of related physical layer networking standards. Fibre Channel technology handles high-performance disk storage for applications on many corporate networks, and it supports data backups, clustering and replication. Like Ethernet, its main competitor, Fibre Channel can utilize copper wiring. However, copper limits Fibre Channel to a maximum recommended reach of 30 meters. When using more expensive fiber optic cables, Fibre Channel reaches to 10 kilometers. The original version of Fibre Channel operated at a maximum data rate of 1 Gbps. Newer versions of the standard increased this rate up to 16 Gbps, with 8 Gbps and 10 Gbps versions also used.

Fibre Channel networks have a historical reputation for being expensive to build, difficult to manage, and inflexible to upgrade due to incompatibilities between vendor products.


SAN: Storage Area Network

July 8, 2014


In addition to offering any-to-any connections, a SAN creates a scalable environment. Since storage and servers are independent from each other, storage devices or servers can be added or removed from their respective networks without affecting each other.

Storage devices can be added to the SAN without any worry about a server’s configurations. Isolating the potential disruptions of the servers from those of the storage reduces potential for interruptions.

A SAN is designed around the encapsulated SCSI protocol. A SAN is a dedicated network, similar in concept to a LAN, designed to handle large data transfers. A SAN typically supports data storage, retrieval and replication on business networks using high-end servers, multiple disk arrays and Fibre Channel interconnection technology.

The term SAN can sometimes refer to system area networks instead of a storage area network. System area networks are clusters of high-performance computers used for distributed processing applications requiring fast local network performance. Storage area networks, on the other hand, are designed specifically for data management.

A Storage Area Network (SAN) is an enterprise-level storage network of several (often hundreds) of hard drives managed by an intelligent device. This allows for maximally efficient use of all available storage space, lowering total cost of ownership (TCO) and increasing return on investment (ROI). Most importantly, if a drive crashes, your data is not lost!

Servers that host applications communicate with a SAN through Fibre Channel network equipment (iSCSI and F/C over IP are also supported), and the SAN determines where data is stored. If a hard drive fails for some reason, the other hard drives in the storage network essentially rebuild the data through a series of algorithms so no information is lost.

The data on the hard drives is backed up every night to tape and stored off site as well.  This protects against a catastrophic systems compromise.

[Image: SAN architecture diagram]


Query Processing Life Cycle

July 8, 2014


Parsing and Binding

The query is parsed and bound. Assuming the query is valid, the output of this phase is a logical tree, with each node in the tree representing a logical operation that the query must perform, such as reading a particular table, or performing an inner join. This logical tree is then used to run the query optimization process, which roughly consists of the following two steps:

Generate possible execution plans 

Using the logical tree, the Query Optimizer devises a number of possible ways to execute the query, i.e. a number of possible execution plans. An execution plan is, in essence, a set of physical operations (an index seek, a nested loop join, and so on) that can be performed to produce the required result, as described by the logical tree.

Cost-assessment of each plan 

While the Query Optimizer does not generate every possible execution plan, it assesses the resource and time cost of each plan it does generate. The plan that the Query Optimizer deems to have the lowest cost of those it's assessed is selected, and passed along to the Execution Engine.

Query execution, plan caching

The query is executed by the Execution Engine, according to the selected plan. The plan may be stored in memory, in the plan cache.
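
One way to observe the results of plan caching is to query the plan cache DMVs, available in SQL Server 2005 and later:

SELECT cp.usecounts, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;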


Dates for different Languages.

February 3, 2012

Use the following scripts to display date formats in different languages.

DECLARE @Date DATETIME
SET @Date = GETDATE()

-- under SET LANGUAGE Japanese, DATENAME(MONTH, GETDATE()) returns the
-- numeric value of the month as a string, e.g. '09'
SET LANGUAGE Japanese

SELECT @Date AS [DATE],
DATENAME(YEAR, CONVERT(DATETIME,CONVERT(VARCHAR,@Date,112))) AS [YEAR]
,DATENAME(MONTH, CONVERT(DATETIME,CONVERT(VARCHAR,@Date,112))) AS [MONTH]
,DATENAME(DAY, CONVERT(DATETIME,CONVERT(VARCHAR,@Date,112))) AS [DAY]
,DATENAME(dayofyear, CONVERT(DATETIME,CONVERT(VARCHAR,@Date,112))) AS [dayofyear]
,DATENAME(weekday, CONVERT(DATETIME,CONVERT(VARCHAR,@Date,112))) AS [weekday]
,DATENAME(Quarter, CONVERT(DATETIME,CONVERT(VARCHAR,@Date,112))) AS [Quarter]
,CONVERT(TINYINT,DATENAME(MONTH, CONVERT(DATETIME,CONVERT(VARCHAR,@Date,112)))) AS [MONTH]
,'Q'+DATENAME(Quarter, CONVERT(DATETIME,CONVERT(VARCHAR,@Date,112))) AS [Quarter]

-----------

DECLARE @Date DATETIME
SET @Date = GETDATE()

SET LANGUAGE us_english

SELECT @Date AS [DATE],
DATENAME(YEAR, CONVERT(DATETIME,CONVERT(VARCHAR,@Date,112))) AS [YEAR]
,DATENAME(MONTH, CONVERT(DATETIME,CONVERT(VARCHAR,@Date,112))) AS [MONTH]
,DATENAME(DAY, CONVERT(DATETIME,CONVERT(VARCHAR,@Date,112))) AS [DAY]
,DATENAME(dayofyear, CONVERT(DATETIME,CONVERT(VARCHAR,@Date,112))) AS [dayofyear]
,DATENAME(weekday, CONVERT(DATETIME,CONVERT(VARCHAR,@Date,112))) AS [weekday]
,DATENAME(Quarter, CONVERT(DATETIME,CONVERT(VARCHAR,@Date,112))) AS [Quarter]


Nested Convert() Function.

February 3, 2012

Following is the syntax for a nested CONVERT statement.

Syntax:

SELECT Column_Name, CONVERT(BIGINT,CONVERT(FLOAT, Column_Name)) FROM Table_Name

Example:

SELECT Emp_Salary, CONVERT(BIGINT,CONVERT(FLOAT, Emp_Salary)) FROM Employee