Tuesday, March 13, 2012

With an Iron Fist (SQL PBM)

Maybe you were a system admin who got sucked into the exotic land of database administration because there was no one else to do it. Maybe you are a DBA on purpose. It might even be possible that one day you were writing web applications in your remote and intentionally isolated cube at the far end of the “trailer park,” and faster than you can say DBCC CHECKDB REPAIR_ALLOW_DATA_LOSS you found yourself in charge of an unwieldy, quickly expanding, and barely governed SQL Server environment.

In any case, you may have wondered if there were some way to control your SQL Server environment proactively, some automated way to enforce all of those naming conventions and object settings that you TALKED about, documented, and trained on. Have I got news for you!

In SQL Server 2008, a new feature called Policy Based Management (PBM) was introduced. The idea is that if you have a setting or convention that you wish to check for and/or ENFORCE (get your control freak on), you just create a policy that defines a target (maybe a database), a facet (the property that can be checked or enforced, like the recovery model of a database), and a condition (the value that property must have, like Full for the recovery model). Systems Administrators (yes, the “A” is capitalized!) will recognize this concept as having some similarity to Group Policy in Active Directory Domain Services, although the application and verification of the policy is done quite differently.
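Just to give you a feel for the kind of check a condition encodes, here is a plain ad hoc query (not the PBM API itself, just the catalog views) that finds databases which would fail a “recovery model must be Full” policy:

-- Databases that would be flagged by a "recovery model = FULL" policy
SELECT name, recovery_model_desc
FROM sys.databases
WHERE recovery_model_desc <> 'FULL'
  AND database_id > 4;  -- skip the system databases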

The really fun part is that, using Central Management Servers in SSMS, you can enlist one or more instances and then check or enforce policies against them, manually or on a schedule, all from a single instance. You can do ad hoc policy checks, bring objects that are not in compliance back into line, and either include all objects of the type within an instance or exclude individual objects.

Some examples of the policies you might create and assign: controlling role membership and preventing future modification by rolling back the change when someone is added (sounds like Restricted Groups in AD); making sure that every instance in your organization uses Windows Authentication only; or, instead of asking nicely and resending that paper policy that governs object naming conventions, simply making those ornery developers name all of their user stored procedures with a “usp_” prefix (I love you guys, really). Whatever the standardization need, PBM probably has the property you need to get a handle on it. And standardizing is the whole point. It is an interesting exercise to document standards. It is a whole lot more satisfying to be able to inflict -- um -- enforce them.
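And just to show how simple the check behind that naming policy really is, here is a hand-rolled query (again, not the policy itself; in PBM you would build a condition on the stored procedure facet) that finds the offenders:

-- User stored procedures that break the usp_ prefix convention
SELECT SCHEMA_NAME(schema_id) AS schema_name, name
FROM sys.procedures
WHERE is_ms_shipped = 0
  AND name NOT LIKE 'usp[_]%';  -- [_] escapes the underscore wildcard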

Now go forth and standardize!


- Peter Trast, SQL Expert; MCITP DBA, MCITP EA, MCT
LinkIn with Peter

Thursday, March 8, 2012

You DO need an SSD, forget the cost

For those of you who, like me, have to run labs on a portable device, like a laptop, an SSD is an absolute MUST.

I recently mourned (well, I didn't really cry... actually, I danced a jig) the passing of my old HP DV7-1135 desktop-replacement notebook, or as we referred to ole' Bessie in my house, "the portable lap scorcher". It was OK in its time, if you can overlook the premature death of the optical drive one week after the warranty expired, the two times I had to resolder the power connector, and the need to remove EVERY internal component just to replace the fan. But with the second drive bay, which I used to add a 60GB SSD, it ran extremely fast... and hot enough to actually blister my leg. Burn injuries aside, the SSD allowed me to run virtual servers in VMware Workstation with performance that rivaled production-grade hardware. The main problem was the capacity of the drive, which only allowed me to run 4 or 5 servers with about 1GB of RAM each.

After Bessie was shipped off to the recycler, I set about looking for another constant silicon companion (not the cube). The number one priority was to find something with two hard drive bays. But business-class laptops had apparently moved away from this configuration, so I ended up looking at gamer rigs.

What do you know, I stumbled onto the ASUS G53S, a very nice i7 machine with 8 cores and 16GB of memory. It also has a very nice full HD screen that, at 15.6 inches, has plenty of visual real estate. Of course, the second drive bay was the main factor in my search, and the first thing I did within an hour of taking home my belated Christmas present to myself was to open the case and slap in the new 240GB OCZ drive that I bought at the same time.

Folks, I don't know if your experience with PCs goes back nearly as far as mine, but I remember waiting anywhere from 30 seconds to 2 minutes for a machine to boot. Boot time for the laptop on the new drive is 8 seconds. Login time is ONE SECOND. Applications like Word and Excel open so fast you can barely see the program name on the splash screen. And the 12 virtual servers? Well, my domain controllers reboot in less than one minute and login takes 3 seconds. I have a 3-node SQL cluster that runs like Forrest Gump. This thing is a beast, and mostly due to the SSD. Task Manager typically shows 100% on CPU and RAM with everything running and never misses a beat. I always knew that disk I/O was a big killer. Now I have solid proof.

So on that next laptop purchase, you might want to take a look at the new SSDs. The one I have "only" has a 525MB/s read rating.

And there is now one rated at 1500MB/s.

Still sitting there??

- Peter Trast, MCITP DBA, MCITP EA, MCT LinkIn with Peter

Tired of slow SQL queries?

Getting tired of those poorly performing queries or stored procedures? Getting even more tired of the phone calls that result from those poorly performing queries or stored procedures? Obviously, disconnecting your phone and huddling in the corner crying is not the answer (take it from me). Maybe what you need to do is look at the structure of your database and consider a little modification. First, a few questions.

Is at least one of your tables the size of (insert favorite Hollywood actor’s name)’s ego? And does that titanically (not a movie reference) humongous table have at least one column that could be used to divide the data into smaller chunks, like a date column in a Sales table holding many months or years of data and hundreds of thousands, maybe even millions, of rows? Can you add (would your budget allow) more physical disks to your SQL Server solution? And the biggest question: can you afford, or do you already have, the Enterprise edition of SQL Server?
Well, if you were able to answer yes to all of those questions, it is possible that you might be able to tweak that lumbering hulk into exhibiting a few more miles per hour by making a few simple, if not inexpensive, changes.
The short version (level 000) is that you create new physical drives, define ranges of data and assign those ranges of data to different disks (or RAID 5 arrays). This is called table partitioning. Read on only if you really want to know how it is done (level 100)!
First, you must decide where to divide your data. For example, if you have about 10 million rows of Sales data for the last 5 years, you need to choose how to break that data down into smaller pieces. This decision is really based on how many physical disks you can add to your system. If you like keeping your data on RAID 5s and you can get your hands on 5 more RAID 5 arrays, you can divide your data into 5 parts (which just happens to nicely match 5 years of data).
So you create 5 new RAID 5 arrays. And in your database properties you create 5 new filegroups with at least one file each, one filegroup per RAID 5 array. Then, we are going to use these 5 different filegroups residing on 5 different arrays to create our partitioning strategy.
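If you have never added filegroups and files by script instead of through the database properties dialog, a rough sketch looks like this (the database name and file path are made up; repeat for test2fg through test6fg on their own arrays)
ALTER DATABASE SalesDB ADD FILEGROUP test1fg;
ALTER DATABASE SalesDB
ADD FILE (
    NAME = test1dat,
    FILENAME = 'E:\SQLData\test1dat.ndf',  -- this path would live on the first new array
    SIZE = 5GB,
    FILEGROWTH = 1GB
) TO FILEGROUP test1fg;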
Now, we use a Transact-SQL statement like this one
CREATE PARTITION FUNCTION [myRangePF1] (datetime)
AS RANGE RIGHT FOR VALUES ('20030101', '20040101', '20050101',
'20060101', '20070101');
to define the boundary values for each portion of our table that will be stored separately from the rest. In this case, all sales for the year 2003, dates 20030101 thru 20031231, land together in one partition (RANGE RIGHT means a boundary value belongs to the range on its right, so the 2003 range starts at 20030101 and ends just before the next boundary, 20040101). The 2004 sales land in the next partition, and so on. Dates from 20070101 onward, that being the last boundary listed, all fall into the final partition, unless the function is later modified, which it can be.
Next, each range is assigned to a filegroup with a statement that looks like this
CREATE PARTITION SCHEME myRangePS1
AS PARTITION myRangePF1 --the name of the function we just created
TO (test1fg, test2fg, test3fg, test4fg, test5fg, test6fg); --these are our filegroups
Now the data for 2003 will be stored in a filegroup on one of the new RAID 5 arrays, the data for 2004 on the next, and so on. This will give you more actual disks supporting queries against a single table, reducing (theoretically) disk I/O and improving (keep your fingers crossed) query response times. Yeah, I know I listed 6 filegroups in my scheme for only 5 boundary values, but I need to leave at least one mystery hanging out there for you to explore on your own or read about on http://msdn.microsoft.com.
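One step this post skips is actually putting a table on the new scheme. A minimal sketch, assuming a made-up Sales table with SaleDate as the partitioning column (names invented for the example), might look like this
-- The partitioning column must match the datetime type of the function
CREATE TABLE dbo.Sales (
    SaleID   int IDENTITY(1,1) NOT NULL,
    SaleDate datetime NOT NULL,
    Amount   money NOT NULL
) ON myRangePS1 (SaleDate);

-- Sanity check: how many rows landed in each partition?
SELECT $PARTITION.myRangePF1(SaleDate) AS partition_number, COUNT(*) AS row_count
FROM dbo.Sales
GROUP BY $PARTITION.myRangePF1(SaleDate);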
“Seeya at tha pahty, Richtah…”

- Peter Trast,  MCITP DBA, MCITP EA, MCT
LinkIn with Peter

DNS Matters... A LOT

DNS reminder for Active Directory experts.

I recently worked with a client who had spent 10 hours on the phone with a well-known consulting company trying to resolve GPO, COM+, WinRM SPN creation, ip6.arpa and replication issues. After thousands of dollars spent looking at errors and warnings in event logs and running internet searches for possible fixes, the call ended with nothing resolved.

So, I got the call to give it a shot. Obviously, my client was very skeptical about my ability to help after witnessing the previous consultant's strikeout. I spent a few hours reviewing all of the AD health tests and best practices. I discovered, as I often do, that the client had configured both domain controllers with the loopback address as the primary DNS client setting and the other domain controller as the secondary.

We reconfigured this to follow the Microsoft best practice: point the primary DNS client setting of all (both) of the domain controllers at the PDC and the secondary at the other domain controller. But before we got around to rerunning the Active Directory health checks, which were scheduled for 2 days later, he changed one of the servers BACK... after the call. He apparently did not agree with my opinion (that is to say, MICROSOFT's opinion) on DNS best practices.

After spending a little time going through health checks at the next appointment, I discovered the change. After a little coaxing, I got him to point both DCs to the PDC for the primary DNS client setting and to the other DC for the secondary. And guess what?

Within 15 minutes every health check was clean and group policy was working perfectly.

Check and VERIFY DNS first… and check it again if you have to call back. It really is a best practice. It could save thousands of dollars and keep you from being on the phone all night, like my unlucky client.

The last thing I asked him before finishing the call was, “What is the most important configuration in your environment?”

What do you think he said? :)

- Peter Trast, MCITP DBA, MCITP EA, MCT LinkIn with Peter 

Monday, March 5, 2012

She's a little runaway (query)

Stop that runaway query!
Have you ever noticed how SQL Server likes to just take over the CPU and memory of the box it is installed on? It just says, in its little SQL brain, “Gee, I think 90% of the CPU should be sufficient to run this query, and maybe I will just keep this memory in case I need it for something else.” Meanwhile, all other queries and/or applications start turning blue from lack of resources and die on the way to the query optimizer.
Are you ready to take back control of the CPU and memory and force applications or users to use only their fair share of the server’s resources? Well, another great new feature of SQL Server 2008 is called the Resource Governor. You set limits on how much CPU and memory can be used, and sessions are sorted into those limits based on things like application name, user name, host name, server role, and so on.
You start by creating Resource Pools and assigning minimum and maximum CPU and memory percentages to each pool. For example, you create a pool called Pool1 and assign it a minimum of 20% and a maximum of 30% CPU. This means that anything assigned to the pool will always have some CPU available (20%) but will never exceed 30%.
You also create Workload Groups, which are assigned to specific pools. A workload group can be given a high, medium, or low importance within its pool. At this point, it would be fair to mention that all unclassified work is dumped into the default workload group, which sits in the default pool, and that pool has effectively UNLIMITED access to system resources, which is the reason so many of us have seen runaway applications.
The third part is to create a Classifier Function. This defines which users, applications, roles, etc. (workloads) go into which workload groups. Now, we have control!
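For the script-minded, the whole contraption from the last three paragraphs looks roughly like this. The pool numbers come from the example above; the workload group name, the application name, and the classifier name are all invented for this sketch, so adjust them to your own environment.
-- 1. A pool with a guaranteed minimum and a hard cap on CPU
CREATE RESOURCE POOL Pool1
    WITH (MIN_CPU_PERCENT = 20, MAX_CPU_PERCENT = 30);

-- 2. A workload group that lives in that pool
CREATE WORKLOAD GROUP ReportGroup
    WITH (IMPORTANCE = LOW)
    USING Pool1;
GO

-- 3. A classifier function (created in master, schema-bound) that routes sessions
CREATE FUNCTION dbo.rgClassifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp sysname = N'default';
    IF APP_NAME() LIKE N'ReportTool%'  -- hypothetical application name
        SET @grp = N'ReportGroup';
    RETURN @grp;
END;
GO

-- Wire up the classifier and make it all live
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rgClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;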
Just so you know, SQL Server 2008 reserves some CPU and memory for itself that is never given to any user or application. Didn’t you ever wonder how you could get into a locked-up server using “SQLCMD -A” (the dedicated administrator connection)? There is always CPU and memory on reserve just to keep the motor running like the well-oiled machine that SQL Server is.
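If you want to see that reservation with your own eyes, the internal pool shows up right alongside default and any pools you create in this DMV:
-- 'internal' is SQL Server's own pool; 'default' is where unclassified sessions land
SELECT name, min_cpu_percent, max_cpu_percent,
       min_memory_percent, max_memory_percent
FROM sys.dm_resource_governor_resource_pools;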
And if you like, I can show you how to build one of these Resource Governor contraptions right here in my MS 6231 class which runs from October 18-22 (it’s actually covered on Friday, usually). And if all else fails…
“Just hit it with a hammer!”

- Peter Trast,  MCITP DBA, MCITP EA, MCT LinkIn with Peter