| A community of more than 1,600,000 database professionals and growing |
| | SQL Server is Getting Smarter There's been a lot of press and media coverage of the AI and machine learning work Microsoft is heavily investing in. From Cognitive Services to Cortana to Bots, Microsoft is really investing in developers and applications that will perform detailed analysis and make more complex decisions in any environment. Even SQL Server has gotten Python added to R Services, which will change the way we run queries for analysis and reporting. The effect this will have on data professionals is not limited to SQL Server's machine learning capabilities. This week I saw a couple of announcements from Microsoft on changes in SQL Server. Automatic plan correction is coming in SQL Server 2017 and automatic index management is in SQL Azure. I expect the latter to make an appearance in the on-premises product, perhaps in 2018 or 2019, but I would certainly count on it coming to your local installations at some point. Currently, plan correction still requires a DBA to decide which plans need correcting, but once those decisions are made, SQL Server can handle things. With automatic plan correction, however, we can allow SQL Server to force the last good plan when it detects a regression. This isn't a huge change, but it can dramatically reduce some of the random calls that DBAs get when plans regress and performance tanks. Over time, we might find that many of those nuisance calls go away. I hope that most of you have other work you can do instead of tracking down plan regressions, and you certainly should. There are other changes, such as threat detection, that I expect will allow a single person, perhaps the accidental DBA or developer, to manage a lot of the trivial, but important, administrative items for a SQL Server instance. This means we need fewer data professionals who focus on the infrastructure side of databases. There will be more and more ways that SQL Server improves to handle the mundane tasks, amplifying the power of a single human, reducing the management burden, and allowing fewer people to manage more and more. I expect additional capabilities in SQL Server to simplify most tasks over time, and I'm hoping there's one in particular that gets built soon. I'd like to see automatic backups (full and log) as a part of the database creation process: some helpful defaults, perhaps at the instance level, applied to ensure that all databases are being backed up and we never see the "transaction log full" error unless we run out of disk space. It will come, as we move to the quick, button-click, automatic method of managing all the cattle in our infrastructure. At least, I'm planning on this happening. Steve Jones from SQLServerCentral.com Join the debate, and respond to today's editorial on the forums |
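For those who want to experiment with automatic plan correction, a minimal sketch (assuming SQL Server 2017 or Azure SQL Database; run it in the database you want tuned):

-- Enable automatic plan correction for the current database
ALTER DATABASE CURRENT
    SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- Review the plan regressions the engine has detected and any actions taken
SELECT reason, score, state, details
FROM sys.dm_db_tuning_recommendations;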
| The Voice of the DBA Podcast Listen to the MP3 Audio (3.8MB) podcast or subscribe to the feed at iTunes and Libsyn. The Voice of the DBA podcast features music by Everyday Jones. No relation, but I stumbled onto them and really like the music. | |
|
|
| ADVERTISEMENT | | Could your SQL coding be more efficient? Data Platform MVPs and SQL Server experts share their tips on how to standardize formatting whilst stripping out the repetition of coding with SQL Prompt. Over these 20 short videos we will help you write better, shareable SQL faster. Check out the tips |
| | The industry standard for comparing and deploying SQL Server database schemas Trusted by 71% of Fortune 100 companies, SQL Compare is the fastest way to compare changes, and create and deploy error-free scripts in minutes. Plus you can easily find and fix errors caused by database differences. Download your free trial |
|
|
|
| | | Amarendra Reddy Thummeti from SQLServerCentral.com In this article, I will provide a set of examples to showcase the use of the OUTPUT clause within the MERGE statement and how to capture the OUTPUT clause results in an archive table. More » |
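The pattern the article describes can be sketched briefly; the table and column names below are hypothetical, but the OUTPUT ... INTO shape is standard T-SQL:

-- Hypothetical tables: dbo.Customer (target), dbo.CustomerStaging (source),
-- and dbo.CustomerArchive to hold the audit rows captured by OUTPUT
MERGE dbo.Customer AS tgt
USING dbo.CustomerStaging AS src
    ON tgt.CustomerID = src.CustomerID
WHEN MATCHED THEN
    UPDATE SET tgt.CustomerName = src.CustomerName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerID, CustomerName)
    VALUES (src.CustomerID, src.CustomerName)
OUTPUT $action, src.CustomerID,
       deleted.CustomerName, inserted.CustomerName
    INTO dbo.CustomerArchive (MergeAction, CustomerID, OldName, NewName);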
| Additional Articles from MSSQLTips.com Partitioning data is a standard SQL Server administration practice. Partitions enable independent administration of different slices of data. When a SQL Server Analysis Services (SSAS) tabular data model is developed and processed, data is read from the source system and loaded into the tabular data model configured in In-Memory processing mode. Not every processing run requires the entire data set to be re-processed; often only the slices of data containing changes need re-processing, which can be achieved by partitioning the data into logical slices. In this post, Siddharth Mehta looks at how to partition tables in Tabular SSAS. More » |
| Learn how Microsoft adopted DevOps at scale in this free Redgate webinar on June 12. Donovan Brown, Senior DevOps Program Manager in Microsoft’s US Developer Division, will share how Microsoft embarked on its own DevOps journey reducing a three-year release cycle down to three weeks. Sign up now. More » |
| matthew.mcgiffen 73574 from SQLServerCentral Blogs When running workshops on the subject of SQL Query Performance the first question I always ask is how do we measure... More » |
| Kenneth Fisher from SQLServerCentral Blogs I read an interesting question today. Someone wanted to be able to run the same stored procedure multiple times, at... More » |
|
|
| | Today's Question (by Stanley Kapfunde): What does the following command do? DBCC SQLPREF |
Think you know the answer? Click here, and find out if you are right. We keep track of your score to give you bragging rights against your peers. This question is worth 1 point in this category: DBCC. We'd love to give you credit for your own question and answer. To submit a QOTD, simply log in to the Contribution Center. |
|
|
| |
ADVERTISEMENT | Exam Ref 70-761 Querying Data with Transact-SQL Prepare for Microsoft Exam 70-761–and help demonstrate your real-world mastery of SQL Server 2016 Transact-SQL data management, queries, and database programming. Designed for experienced IT professionals ready to advance their status, Exam Ref focuses on the critical-thinking and decision-making acumen needed for success at the MCSA level. Get your copy from Amazon today. | | |
|
|
|
|
|
| Yesterday's Question of the Day |
| Yesterday's Question (by Steve Jones): What happens if I create a vector in R with disparate values, such as this: x <- c(42, "Towel", TRUE) Answer: The vector is created with all values converted to strings Explanation: When you create a vector with disparate types, they are all converted to strings. Ref: c - click here » Discuss this question and answer on the forums |
|
|
| | Wayne Findley from SQLServerCentral.com ETL is truly a continuing journey of exploration. It is often the case that source databases are, for whatever reason, inaccessible as SQL databases, so surrogates have to be used. These can be files of various sorts, scheduled and exported. Today's tale relates to that most verbose of formats - XML. As with so many ETL projects, the scope and complexity of the ETL phase has been understated, and so quick-and-dirty - that is to say, Spackle - is the order of the day. The XML in question is not a pure database extract: it is an XML representation of a cross-tab statistical report from a hotel reservation system. What is required is a simple list of the various statistics, their values, and the grouping to which each belongs. These rows can then be further shaped and finally inserted into a standard star-schema warehouse.

Attempt 1 used SSIS for the XML file read, via an XML Data Source. This resulted in a formidable XSD file to describe the source file contents and assign datatypes. But as should be obvious from the xml, most of the document object model is quite irrelevant for our purposes. Prime example - Column Numbers, which don't pass the 'So What?' test for the warehouse destination. In another of those real-world sidebars, the entire reservation system is under review. This extract may have a life only of months, so making this whole exercise as simple as possible is an important consideration. It may have to be rejigged for quite another data source - very possibly non-XML - once a replacement system is decided upon. Recalling that this is an xml rendition of a report, one has to ask - what if a user in the source application tweaks the report definition? The XSD is likely to be rendered instantly obsolete, and as it is embedded deep within an SSIS Control Flow, this would cause the task to fail. Failed tasks mean SSIS changes and re-deployment - not a very maintainable proposition. So XML Data Source plus XSD definitions won't do.

Attempt 2 results from a decision to use pure SQL and the old standby, OPENROWSET - which has been well covered in these very forums (or, for the purist, fora) - for two reasons. 1 - as pure SQL, the script can be wrapped up as a stored procedure on the BI server, and thus quickly amended to suit any application report definition changes. As many such reports need to be ETL'ed, the procedure can be invoked by a simple SSIS Filewatcher task (Konesans or similar) and the relevant filename passed to it. 2 - as SSIS is so monstrously picky about datatypes, a 'With Result Sets' invocation of the procedure will allow explicit datatyping at the point of extraction - always the best in my experience. Attempt 2 looks 'down' the XML tree, so the initial Nodes definition in the Cross Apply is '//RES_STATISTICS2/LIST_G_LAST_YEAR/G_LAST_YEAR/LIST_G_CROSS/G_CROSS/LIST_G_SUBGROUP/G_SUBGROUP/LIST_G_DETAIL/G_DETAIL/LIST_G_HEADING1/G_HEADING1/LIST_G_AMOUNT/G_AMOUNT'. This slavishly follows the tree top-down but will, clearly, suffer if that report definition is changed (number of layers, names of elements), plus it is a rather formidable string. The procedure does have the advantage of running, but a little optimising seems possible.

Attempt 3 simplifies the Cross Apply nodes to the rather simpler './/G_AMOUNT' and, mirabile dictu, the script still works. Job done. On to the next challenge. I explain this as follows.
It is actually wonderfully simple, and I hope it is for readers too: Find the lowest/innermost element in the document tree at which the required data resides. In this case, it is SUM_AMOUNT, which has a parent G_AMOUNT. Point the Cross Apply nodes at the G_AMOUNT parent - this determines the number of rows in the output. './/G_AMOUNT' means 'ignore all of the parents, just get all G_AMOUNTs'. The Select from the PackageSource CTE can then acquire the other values, by stepping up or down a known number of parents/children from G_AMOUNT - 1 click down for SUM_AMOUNT, 2 clicks up for HEADING_1, 6 clicks up for MASTER_VALUE. This bottom-up 'layer counting' approach completely ignores intermediate element names: it avoids the need to quote them at all. If the source application report definition changes, the procedure can be easily maintained, either as to element names needed for the output columns, or as to the number of parents back up (or children down) the document tree to climb for each required value. This can be achieved by just looking at the XML. The procedure is executed 'with result sets' to be absolutely sure of datatypes. The procedure is itself a dynamic SQL execution because, for reasons best known to the authors, a BULK command won't take a parameter as its file name. SSIS is thus likely not to be able to 'sniff out' datatypes for its dataflows otherwise. And we have all had experience of letting SSIS decide these for itself... best not to. Caveat: the source files, being renditions of summary reports, are inherently small. This method may not scale well as file sizes increase. YMMV, as always. Images below: the xml (top of doc only) and the result set - exactly as required. More » |
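A condensed sketch of the approach described above - the procedure name, column datatypes, and exact parent-step counts are illustrative and would need adjusting to the real report:

-- Hypothetical wrapper: read the report file as a blob, shred it from the
-- innermost G_AMOUNT elements, and climb a fixed number of levels for context
CREATE PROCEDURE dbo.LoadReservationStats
    @FilePath nvarchar(260)
AS
BEGIN
    SET NOCOUNT ON;

    -- Dynamic SQL because OPENROWSET(BULK ...) will not accept a variable file name
    DECLARE @sql nvarchar(max) = N'
    WITH PackageSource AS
    (
        SELECT CAST(BulkColumn AS xml) AS ReportXml
        FROM OPENROWSET(BULK ''' + REPLACE(@FilePath, N'''', N'''''') + N''', SINGLE_BLOB) AS src
    )
    SELECT
        amt.node.value(''(SUM_AMOUNT/text())[1]'', ''decimal(18,2)'') AS StatValue,               -- 1 click down
        amt.node.value(''(../../HEADING_1/text())[1]'', ''nvarchar(100)'') AS Heading,            -- 2 clicks up
        amt.node.value(''(../../../../../../MASTER_VALUE/text())[1]'', ''nvarchar(100)'') AS MasterValue  -- 6 clicks up
    FROM PackageSource AS ps
    CROSS APPLY ps.ReportXml.nodes(''.//G_AMOUNT'') AS amt(node);';

    EXEC sys.sp_executesql @sql;
END

The caller (an SSIS Execute SQL task, say) can then pin the datatypes explicitly:

EXEC dbo.LoadReservationStats @FilePath = N'\\server\share\stats.xml'
WITH RESULT SETS
(
    (StatValue decimal(18,2), Heading nvarchar(100), MasterValue nvarchar(100))
);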
|
|
| Database Pros Who Need Your Help |
| Here's a few of the new posts today on the forums. To see more, visit the forums. SQL Server Multiple Databases Management Solution - Hi Senior DBA's. Do you know of a way to automate the below requirement to create a daily report ( on a... Best way to handle a table-based queue? - NOTE: This is NOT a Service Broker question. The table defined below is accessed by concurrent jobs that select the next... And/OR logic in SQL - Guys, Maybe most of your guys it is Simple question and it is simple question but I am a little... Why do some DBAs avoid DATETIME2? - I've been considering using DATETIME2(3) over DATETIME for new work due to the 1-byte saving of equivalent values (DATETIME2(3) vs... The server principal is not able to access the database under the current security context. URGENT - This is an error which seems to happen a lot but I cannot really get an answer We have a user... Wrong Alert? - Hello I have created a job which send me an alert when D drive(dedicated drive for tempdb) free space is less than 2048... Simulate filling Transaction log - It may sound extremely simple and might be it is.... but I am trying to fill the transaction log of... Stopping Service in Cluster - Stopped SQL service from Configuration manager in a cluster, is the failover supposed to happen? Will stopping SQL Server resource or... Special Character Being Inserted into Data - This is a strange issue, but I'm not sure how to troubleshoot this. Users are using an ERP system which... Info Required on ALTER SCHEMA - Hi Experts, I noticed that in one of the sp they have user ALTER SCHEMA schemaname TRANSFER schemaname.tabe name.i googled it... Need Help Getting Max Date - Hello Everyone, I am trying to work with this query to return the latest date (in this case it is userDate2): [code... Find user connections to SQL server tables - I want to check, how many users are connected to any particular SQL Table and cube (SSAS). This will be... Heap vs Clustered Wildcard Search - I am working on converting a heap table to a clustered table by adding a primary key to the identity... Unable to connect SSMS - In a 2 node cluster, I can able to connect SSMS in Node 1 only. When i tried to connect... Things to consider while creating new table for an application - Hi All, I am going to design a new table for one functionality which we are going to add in our... Converting User SID Binary to String TSQL - Hello All, Basically I'm trying to use TSQL to convert user SIDS from Binary to String and visa versa but I'm... SPID of top CPU consuming queries - Good Morning Experts, I am finding top CPU queries using SSMS(Reports->Standard Reports->Performance-Top Que Question about Mesure groups/partitions - Hello, I passed my exam 70-467 and I had this question, please do you know the reply ? Thanks. You are designing... Open SSIS packages in BIDS - SHort simple article that shows how to open a package from any store (SQL Server, File system, etc.) in BIDS Database in Recovery Pending State - One of my databases is in Recovery pending state. I tried to run an Alter command on the database to... |
|
| This email has been sent to newsletter@newslettercollector.com. To be removed from this list, please click here. If you have any problems leaving the list, please contact the webmaster@sqlservercentral.com. | This newsletter was sent to you because you signed up at SQLServerCentral.com. Feel free to forward this to any colleagues that you think might be interested. If you have received this email from a colleague, you can register to receive it here. | This transmission is ©2017 Redgate Software Ltd, Newnham House, Cambridge Business Park, Cambridge, CB4 0WZ, United Kingdom. All rights reserved. Contact: webmaster@sqlservercentral.com |
|
|