T-SQL Tuesday #156: Ready for Production?

Welcome to another edition of T-SQL Tuesday! This month’s edition is hosted by Tom Zika (b|t), and Tom wants to know: what makes code “production-ready”?

Measure Twice, Cut Once…

I still remember learning, early in my career, the mantra of “Measure Twice, Cut Once.” It’s an important lesson that I’ve had to be re-taught periodically over my career, sometimes in a rather stressful or painful manner. But in the unforgiving realm of code, where that mantra is quite literal if you think about it, accuracy is paramount. We should always thoroughly test our code before it gets to Production.

No One Develops in Production, Right? RIGHT?

On the other hand, I’ve also had more than a handful of occasions where Production is broken in a BAD WAY, and you have to FIX IT NOW. I remember when I worked in the financial commercial credit card industry, where our post-deployment QA smoke testing would take anywhere from 8-12 hours, and that’s AFTER a minimum four-hour deployment. We rolled out a big release on Friday night, QA started their work in the wee hours of Saturday morning, and we found a nasty breaking bug that only manifested with Production data in play. Our app devs were trying to figure out how they could fix the code, but even the effort of committing, integrating, and cutting a new build was a multi-hour affair. In the meantime, we database developers were trying to figure out if there was a viable SQL Server/stored procedure workaround that could be implemented to allow us to not have to roll back everything.

I was the db dev lead on this particular release, and my gut told me that there had to be a workaround – I just had to iron it out. I requested 90 minutes of focused, “do not bother me NO MATTER WHAT” time, and 90 minutes (and one interruption) later, I had successfully coded a workaround fix. But I also committed a cardinal sin – I developed in Production.

Was that Production Ready? Well, QA did spend a few hours testing it and validated it. But was it subject to the full battery of integration and performance tests that our application would typically go through? No… but we had little choice, and in this case, it worked. Funny thing is, that workaround fix, like many things that were never meant to be permanent, remained permanent.

Is There a Lesson to be Learned Here?

If I had to share one key takeaway, it’s that I believe it is critical that every business have a true “Prod + 1” environment. This means having a second FULL COPY of Production that’s refreshed on a regular basis, into which “next builds” are installed for testing. Unfortunately, this is not an easy or inexpensive task. Fortunately, there are many more solutions available these days (like a certain orange organization I happen to know) that make it far more feasible.

Thanks for reading.


T-SQL Tuesday #153: Don’t Forget the Little Things

Welcome to another edition of T-SQL Tuesday!

This month’s edition is hosted by my good friend and former SentryOne colleague Kevin Kline (b|t). For his T-SQL Tuesday topic, he asks bloggers to “Tell us the story of how attending an IT conference or event resulted in an amazing career or life opportunity.”

Been There, Done That

The thing is, I’ve already written about exactly that, when I first kicked off this blog after PASS Summit 2013: SQL Family Got Me Here. And nine years later, I’ve been blessed with a wealth of speaking opportunities, and two amazing jobs at SentryOne and Pure Storage.

Don’t Forget the Little Things

I’m not going to write about something that resulted in “an amazing” whatever. Instead, I’ll share this story.

One evening, early in the week of my second PASS Summit, a friend was organizing a group to go out to dinner. I asked to tag along and they said sure, figuring the restaurant could accommodate one more. As we departed, we wound up picking up two more folks, who were also newcomers to PASS Summit and the SQL community.

When we got to the restaurant, we learned that they could not accommodate beyond the original reservation headcount at a single table, but could seat us three “tag-alongs” at our own table. Okay, that’s cool. Two of the three of us were relatively new friends, and the third, I don’t quite recall if we were just Twitter friends at that point or what… but the point is that we had a great time getting to know one another. We referred to ourselves as the “kids’ table.” And because it was just the three of us and not a larger group, we managed to order, eat, and pay before the other table had even ordered their dinner!


That story is not an amazing, life-changing whatever… but it is one that has stuck with me, despite being insignificant in and of itself. And the truly amazing thing is, I have many of these that I could share… mundane but joyous gatherings of good people, who I am happy to have in my life in one way or another… and never would have met had I not gone to one of these conferences and pushed beyond my anti-social shell.

Thanks for reading.

T-SQL Tuesday #152: Consequences

Welcome to another edition of T-SQL Tuesday!

This month’s edition is hosted by Deborah Melkin (b|t). Deb asks each of us to share our own infamous “rant,” as we database professionals all have strong opinions about many things!

What’s Your Rant Andy?

Those of you who know me might think this post will wind up being about…

  • NCHAR/NVARCHAR vs CHAR/VARCHAR – aka Unicode or non-Unicode
  • Leading or trailing commas in SELECT statements?
  • All flash or spinning rust?
  • Coffee – with creamer or black?

But it won’t be about any of those! Instead, as I often like to do with T-SQL Tuesday themes, I’m going to take a slight twist and talk about what happens after you make a potentially controversial decision.

When X, then Y…

Consequences… ramifications… or afterclaps (my new favorite thesaurus-supplied alternate)… we’ve all had to pay the price of decisions that were made long ago. But sometimes we find ourselves having to make a less-than-desirable decision with only losing options to choose from.

I once worked for an Entity Framework SaaS company that was having horrific T-SQL performance issues. We put out a major release one weekend and spent the subsequent week buried in horrible performance hell. Because all T-SQL code was generated from Entity Framework (stored procedures were banned by a development exec prior to me joining the company), there were no easy fixes that could be implemented (and a Production release typically took 12 – 36 hours, I wish I was joking).

The manager of infrastructure had heard about SQL Server’s missing index recommendations:

  • Infra Mgr: Hey Ops DBA (I was a DB Dev). Are there any missing index recommendations in Production?
  • Ops DBA: Yeah, over 200.
  • Infra Mgr: Creating indexes doesn’t require us to do a Production Release. We can apply these overnight when client use is minimal.
  • Me: NOOOOOOO!!!!!

I did my best to explain why this was an absolutely horrific idea. But the Infrastructure Manager was desperate for any kind of relief from their performance woes and did not care to listen about how this could make things even worse. I was sidelined and the Ops DBA was ordered to script and implement each and every one.
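For context, those 200+ recommendations come from SQL Server’s missing index DMVs. Here’s a minimal sketch of how one might list them, sorted by a rough estimated-impact score:

```sql
-- List SQL Server's missing index recommendations, highest estimated impact first.
-- NOTE: these are raw, per-query suggestions. They overlap heavily and should
-- never be applied blindly (as this story illustrates!).
SELECT
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY migs.avg_user_impact * migs.user_seeks DESC;
```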

And what was the afterclap? <giggle>

A couple of our key, gigantic core tables now had over 30 non-clustered indexes on them, many of which were redundant and/or duplicates, as one might expect. A few queries improved, but backup times went through the roof, nightly ETL execution times doubled, and the Production database became horrifically bloated. A month later, I was finally given authorization to undertake an index consolidation and clean-up effort, which took me another month of workload analysis and index prioritization/consolidation.
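As a rough illustration of that clean-up analysis, here’s a sketch for spotting nonclustered indexes that cost writes but deliver few reads, using the standard index usage DMV. (Always validate against a full business cycle of workload before dropping anything – these counters reset on restart.)

```sql
-- Nonclustered indexes with heavy write cost and little read benefit:
-- candidates for consolidation or removal, pending workload analysis.
SELECT
    OBJECT_NAME(ius.object_id) AS table_name,
    i.name AS index_name,
    ius.user_seeks + ius.user_scans + ius.user_lookups AS total_reads,
    ius.user_updates AS total_writes
FROM sys.dm_db_index_usage_stats AS ius
JOIN sys.indexes AS i
    ON i.object_id = ius.object_id
   AND i.index_id = ius.index_id
WHERE ius.database_id = DB_ID()
  AND i.type_desc = 'NONCLUSTERED'
  AND ius.user_updates > (ius.user_seeks + ius.user_scans + ius.user_lookups)
ORDER BY ius.user_updates DESC;
```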

… Then Y… but Why?

In the end, the Infrastructure Manager made a less-than-ideal decision, yes. But that is not what I have a problem with. What I have a problem with is that he refused to listen and understand the ramifications of the decision. He acted in a willfully ignorant manner, choosing not to listen to his subject matter experts and instead grasping at straws in the hopes of resolving the issues. And the cost, in the end, was high in terms of cleaning up the bigger mess that was left behind.

Moral of the Story

So what’s the moral of the story here?

I want to encourage you to listen to and understand the consequences of choosing X over Y. If you really want to save a few bucks and buy spinning disk instead of an all-flash SAN, understand and acknowledge the short-term and long-term consequences. And sometimes you do need to choose the less-than-ideal option. But if you must, I say do so with a full understanding of the afterclap.

Thanks for reading!

T-SQL Tuesday #150: My First Tech Job

Welcome to another edition of T-SQL Tuesday! This month’s host is Kenneth Fisher, and Kenneth asks bloggers to write about “Your first technical job.”

Something A Little Different

I suspect that most will write about their first jobs after college. But for me, I’m going to go further back to senior year of high school.

Took Economics as an elective and it was a really awesome class. Got along really well with my teacher, who knew I was a computer nerd. At some point, she told me that her (adult) daughter needed a copy of WordPerfect and a CD-ROM drive installed in her home desktop, and asked if I’d be willing to do the job for her. Sure, why not? I was already acing that class, so it’s not like I needed extra credit or brownie points. She said she’d pay me but never said how much. I didn’t really care; I was eager to help.

Drove over to her daughter’s apartment, installed the CD-ROM drive and WordPerfect, gave them a tour of the software, chatted about Sierra adventure video games with her boyfriend/husband, and left. They handed me an envelope and it had $150 in it! To high-schooler Andy, that was a TON of money for an hour of my time. And I suppose that makes that my “first technical job.”

What’s So Significant About That?

Why am I writing about that story? Because that first job was something I enjoyed and I was eager to help. I knew I was getting paid, but I didn’t really care how much.

Flashing forward more years than I care to admit, and I can say that I’ve been blessed with a career where I’ve been able to continue to work with technology that I love, and help people along the way, all while getting paid to do it. It’s taken a great deal of work and there’s certainly been some rough patches, but I appreciate and am grateful that I continue to be able to help people day in and day out.

What About You?

If you’re reading this, then I would challenge you to think over your career. Do you have an earlier story that helped to shape or foretold who you have become today?

Thanks for reading.

T-SQL Tuesday #149: Less Is More… Or Is It?

Welcome to another edition of T-SQL Tuesday! This month’s host is Camila Henrique and Camila asks for bloggers to write “T-SQL Advice you’d give to your younger self.”

Less Is More

I studied Computer Science in college, then my first job out of school I was a combination internet developer and sysadmin. I was taught “less is more” when it came to writing code, as many of us were.

So when I started working with T-SQL, I started crafting gigantic, do-it-all T-SQL statements. While the statement/query itself might have been many lines of code, it was just one single discrete command. So less is more, right? Better to send over only one query rather than a bunch, right?

… Or Is It?

The “less is more” bias was turned on its head when I started to learn about storage engine and query optimizer internals. As I talk about in many of my presentations, I’d rather send a bunch of simple queries to the optimizer than one single gigantic one (generally speaking).

There are many reasons for this, but one key reason is that the query optimizer’s end goal is NOT to come up with the best execution plan possible for a given query, but to create one that is good enough, as quickly as possible. The query optimizer also has a limited amount of time before it stops exploring permutations and just goes with the best “draft” it has at that point.

When It Comes to T-SQL

My TL;DR advice… “Less Is More… work for the query optimizer.” Keep your queries simple, even if it means sending more discrete queries to the optimizer. Break the work down into intermediate steps. It may run counter to other programming languages, but in the world of T-SQL, Less is NOT More.
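To make that concrete, here’s a hypothetical sketch of the “intermediate steps” approach (all table and column names are made up):

```sql
-- Instead of one giant do-it-all statement joining and aggregating everything
-- at once, break the work into simple steps the optimizer can plan easily.

-- Step 1: isolate the small slice of data you actually need
SELECT OrderID, CustomerID, OrderTotal
INTO #RecentOrders
FROM dbo.Orders
WHERE OrderDate >= DATEADD(DAY, -30, GETDATE());

-- Step 2: aggregate the intermediate result
SELECT CustomerID, SUM(OrderTotal) AS Recent30DayTotal
INTO #CustomerTotals
FROM #RecentOrders
GROUP BY CustomerID;

-- Step 3: a final, simple join for presentation
SELECT c.CustomerName, t.Recent30DayTotal
FROM #CustomerTotals AS t
JOIN dbo.Customers AS c
    ON c.CustomerID = t.CustomerID
ORDER BY t.Recent30DayTotal DESC;
```

Each of those three statements is trivial for the optimizer to plan well; the single mega-query equivalent forces it to gamble on one enormous plan.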

T-SQL Tuesday #148 – Building Community

This month, Rie Merritt is asking us to share advice about User Groups. I had the honor of running the Chicago Suburban User Group for over 6 years.

Build Community

My advice is something I stumbled upon by complete accident. Prior to our first meeting, we sent out a SurveyMonkey survey asking attendees what they wanted to get out of these meetings. Networking was a top answer. And on the drive over to our first meeting, I came up with an idea.

When we kicked off, I explained that the survey results showed networking was a top request, so I simply asked everyone to introduce themselves to whomever they were seated by. I said just share your name, your role, and what you do. I expected this to last maybe 2-3 minutes tops. Instead it transformed into 15-20 minutes of chatter!

As our group evolved, we offered food but had to relocate to a common area to eat (food wasn’t allowed in our classroom). So I’d repeat the “introduce yourself” bit, remind people that this was their time to chat, etc. I always made the joke that “if you’re an introvert, you’re in luck… almost all of us in the room are!” And I said that if you cannot think of anything to talk about, just vent about something you’ve been struggling with at work lately – we ALL have work headaches and challenges.

And it worked. It worked really… really… well! I’m very proud of the community that we built with the Chicago Suburban User Group.

T-SQL Tuesday #146 Round-up! Upending Preconceived Notions

Hello everyone!

First of all, I want to thank everyone who took some time to participate in this month’s T-SQL Tuesday: Upending Preconceived Notions. There’s a fantastic line-up of contributions! I was tickled that many of the topics chosen were things I once held one viewpoint on, later learned more, and evolved that viewpoint.

That was a lot of fun. And I’m happy to say that I learned some stuff.

Thank you again to everyone who took some time to contribute a blog post!

Dear readers, I hope you take the time to read each and every contribution, and come away learning something new too!

T-SQL Tuesday #146: Disk Storage – More Than Meets The Eye

Welcome to another edition of T-SQL Tuesday. This month, I’m the host and I’ve asked participants to blog about “Upending Preconceived Notions.”

What’s Your Preconceived Notion Andy?

Just over 3 months ago, I joined Pure Storage, which is fundamentally a storage infrastructure company. Confession: I’ve never been an infrastructure OR storage guru. I’ve spent the vast majority of my career focused on slinging code, and not the underlying hardware. But like many things in life, these things can be taught and I’ve been learning a LOT these past few months!

What I Used To Think About Disk Storage

For the vast majority of my career, I just thought of storage as a place to read and write your data from. Storage was merely a receptacle… a place for your data to live. Of course, as data professionals, we want to move tons of data quickly, so things like latency, throughput, and capacity still mattered to me. But beyond that, disk was disk.

So What Changed?

Frankly, I learned about how Pure Storage’s FlashArray worked underneath the covers. (DISCLAIMER: I may slightly butcher what I explain next, as I’m still getting to the point of being able to eloquently articulate the stuff I’ve been learning.) When writing data blocks, FlashArray essentially maintains “metadata” about what data tidbits live where in the storage array. I’m going to over-simplify and say let’s just think of this metadata as a glorified index or table of contents, okay?

So why is that special? FlashArray uses this metadata not only to find where your data lives but to MANAGE it as well. We do things like make copies of our data, and I know we can ALL relate to staring at progress meters moving at a glacier’s pace. Well, what if all we had to do was make an adjustment to the metadata instead? I’d rather copy a small little “table of contents” than an entire book of information – and it’s a heck of a lot FASTER!

If you’re curious to learn more about this, here’s a cool blog about ODX and FlashArray. https://blog.purestorage.com/purely-technical/odx-flasharray-engineers-perspective/

Why does all of this matter?

As data professionals, we have to move data around a lot. And it is often a hassle. Refresh dev/QA/staging with a copy of Prod. Reseed an Availability Group. What has blown my mind is that the clever decisions Pure has made regarding how data is stored in FlashArray now enable data professionals to attack those copy/data-movement headaches in ways they could not before! I’m amazed at some of the things that can be done almost instantaneously that would take hours on another vendor’s storage array. And that is extremely exciting to me!


As I’m writing this very blog post, I just had a personal revelation.

Years ago, I attended SQLskills’ IE1 class. One of the first topics was SQL Server storage engine internals, and it was another one of those eye-opening moments in my career journey. And I just realized that my new understanding of FlashArray and how it works underneath the covers has really opened my eyes and changed how I view storage SOLUTIONS! This bit is totally just a personal notation, but I figured I’d write and share it here anyway!

Think Back

Anyway, I hope you all enjoyed this story. Sorry if it came across as a Pure Storage commercial – it wasn’t meant to be, but damn, they’re doing some really wicked awesome stuff underneath the covers!

Regardless, if you’re reading now, think back over the course of your career and ask yourself if there’s ever been a time when you’ve learned something that’s totally changed your outlook or view on something bigger.

Thanks for reading!

T-SQL Tuesday #146: Upending Preconceived Notions

Welcome back to another edition of T-SQL Tuesday! I’m honored to be your host once again!

Theme to Kick off 2022

This month, I’d like to ask everyone to think about something you’ve learned, that subsequently changed your opinion/viewpoint/etc. on something. Maybe you’ve had a certain opinion, belief, or bias? Perhaps you’ve always thought something worked a certain way? Or you’ve always thought that a certain something (called “X”) was only good for “A”, only to later learn that it can help with “B”, and “C” as well. Regardless, you learned something and it totally upended and changed that preconceived notion you held previously.

When has this happened to you Andy?

Let me share an example. In my past as a T-SQL developer, I remember when I first learned about CTEs. I thought the world of them and started using them everywhere! However, there was one slight problem. I was under the mistaken impression that they pre-materialize each sub-query. And they do… in OTHER RDBMSs. Whoops! After a few years, I learned that they don’t behave that way in SQL Server. Instead, the query optimizer inlines the query in the CTE, making them functionally no different than a subquery. And well, let’s just say that made me regret some of the coding decisions I’d made during my “CTEs-are-awesome” phase.
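If you want to see that inlining behavior for yourself, here’s a quick demo. If the CTE were materialized once, both references below would have to return the same value; because SQL Server expands the CTE body at each reference, they (almost always) differ:

```sql
-- If CTEs were pre-materialized, FirstReference and SecondReference would
-- always match. In SQL Server, the CTE body is inlined and evaluated
-- once per reference, so the two GUIDs (almost always) differ.
WITH Guid_CTE AS (
    SELECT NEWID() AS ID
)
SELECT g1.ID AS FirstReference, g2.ID AS SecondReference
FROM Guid_CTE AS g1
CROSS JOIN Guid_CTE AS g2;
```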

Rule Refresher

To remind you of the rules, please write a blog post corresponding to this month’s theme, include the T-SQL Tuesday logo, link back to this announcement post for the trackback, and leave a comment here with a link back to your contribution. Deadline is end of day Jan 11, 2022 (don’t worry about time zones, I’m not picky about late entries).

The other thing to remember is, your blog is your blog. If you want to gently twist the theme, to write a blog that better suits you, by all means do so! Just want to encourage you all to participate!

Go Forth and Blog!

So take some time to think about something you’ve learned that subsequently upended a preconceived notion you held before! I look forward to reading your contributions.

Happy Blogging!

T-SQL Tuesday #143 – Random Fun

This month’s T-SQL Tuesday is hosted by John McCormack, who asked participants to blog about Short Code Examples.

A few years ago, I wrote a similar blog about T-SQL Tools that I could not live without. Those were my snippet files, which I maintain in Google Drive. Not much has changed – I spent the last four years working for SentryOne, but I still maintain the entire toolset and used it periodically.

For today’s post, I’m going to dive into my 1_Scripts.sql file and share some of my favorites, all involving “random” values!

Random Fun

First, there are times where you need multiple random numbers. Simply using multiple iterations of RAND() in a single statement won’t cut it, as you need to vary the seed. So I keep this snippet handy for when I need a bunch of random values in a single statement:

/* Random Number between 0-99 (NEWID() varies the seed on every call) */
SELECT CAST(RAND(CAST(NEWID() AS VARBINARY)) * 100 AS INT) AS RandomNumber;

Wait For Me Randomly

During my time at SentryOne, I found that I had to create mock workloads on a regular basis. I wanted to introduce some unpredictability to make demos more interesting, so I wrote this chunk of code to introduce random WAITFOR DELAY intervals.

/* Shell Agent Job with randomized runtime */
DECLARE
    @DelayMin INT = 5,
    @DelayLength CHAR(8) = '00:00:00'; -- hh:mm:ss

-- Create randomized 5-15 MINUTE WAITFOR DELAY
-- (% 11 yields 0-10, so @DelayMin + 0-10 = 5-15 with the default @DelayMin)
SELECT @DelayLength = '00:'
    + RIGHT('0' + CAST((@DelayMin + CAST((RAND(CAST(NEWID() AS VARBINARY)) * 100) AS INT) % 11) AS VARCHAR(2)), 2)
    + ':00';

-- Create randomized 5-15 SECOND WAITFOR DELAY
--SELECT @DelayLength = '00:00:'
--    + RIGHT('0' + CAST((@DelayMin + CAST((RAND(CAST(NEWID() AS VARBINARY)) * 100) AS INT) % 11) AS VARCHAR(2)), 2);

SELECT @DelayLength AS '@DelayLength';

PRINT 'START - Do Something';
WAITFOR DELAY @DelayLength;
PRINT 'END - All Done With Something';

For My Final Trick, Random Strings

In much of my AutoDealershipDemo database, I have random strings for various things, so I needed code to generate them. I’m using a Tally Table approach, which combines CHAR() with modulus operators to generate only alphabetic values. But this can easily be adapted for other values available in CHAR().

; WITH Ten_CTE (N) AS (
   SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) AS v(N)
), Thousand_CTE (N) AS (
   SELECT 1 FROM Ten_CTE t1 CROSS JOIN Ten_CTE t2 CROSS JOIN Ten_CTE t3
), Tally_CTE (N) AS (
   -- This provides the "base" CTE and limits the number of rows right up front
   -- for both a performance gain and prevention of accidental "overruns"
   SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
   FROM Thousand_CTE
   -- CROSS APPLY Thousand_CTE t2 -- UNCOMMENT IF YOU WANT 1,000,000 results instead of 1,000
)
SELECT N,
   -- Increase concat below to increase length of string
   CHAR(ABS(CHECKSUM(NEWID())) % 26 + 97)
   + CHAR(ABS(CHECKSUM(NEWID())) % 26 + 97)
   + CHAR(ABS(CHECKSUM(NEWID())) % 26 + 97)
   + CHAR(ABS(CHECKSUM(NEWID())) % 26 + 97)
   + CHAR(ABS(CHECKSUM(NEWID())) % 26 + 97)
   + CHAR(ABS(CHECKSUM(NEWID())) % 26 + 97)
   AS RandomString
FROM Tally_CTE;
Hope you’ve enjoyed reading. These will most likely not be useful to the vast majority of you. But if you ever do have to write some mock workload, demo scripts, etc., then hopefully you’ll remember this blog post and make use of the above!