Sunday, August 26, 2012

Active Directory MCM: Going to Seattle

It is official. I am definitely attending the 9/9 rotation, just two weeks to orientation day. Time to book my flight and choose some lodging. But that is just logistics.

What I really need to do is get some reading done. For the uninitiated, there is a "pre-reading" list to go through to ensure familiarity with all of the topics that will be covered. If you have not taken the time to look at the list, I can safely say that there are probably over a thousand pages of TechNet articles (some of them 30 to 60 pages all by themselves). If that sounds like a lot, let me assure you, it IS. But if you have a passion for the technology that you are certifying in, it is actually pretty fun finding out the details of so many things that you know "just work"... well, work most of the time, because if they never broke, we wouldn't have jobs, would we?

In the Active Directory Master certification, the list of general topics includes things like directory concepts, name resolution, RODC, Windows Server 2008 features, authentication, AD replication, sites, migration, FRS, DFS, Group Policy, disaster recovery, certificates, AD LDS, AD FS and RMS. Most of these are topics that any AD administrator knows something about (or should), but the great thing about the reading is that you get pulled down into details that really come in handy if you are an actual engineer in the field. Even if you are NOT looking for a Master certification, this reading could make you a much better technician.

So after some weeks of part-time reading, I am a little over halfway through the reading material, and I have the next two weeks to read full time. Fortunately, I have not run across many new concepts, mostly technical details that I was not previously aware of. I have no doubt that I have only seen the tip of the iceberg in that regard. Hopefully the rest of the iceberg does not rip a hole in my skull and make my brain start leaking...

If anything of note comes up before I get to Seattle, I will be sure to add an update. And I would like to hear your MCM tale if you have one.

"The problem with internet quotes is that no one has verified the source" -- Abraham Lincoln

- Peter Trast, MCITP EA, MCITP DBA, MCT LinkedIn with Peter

Tuesday, August 14, 2012

Microsoft Certified Master for Directory. Let the adventure begin!

Excellence takes effort. Time. Resources. Desire.

Sometimes, it may seem to make sense to work only as hard as it takes to stay employed, or maybe get that next raise -- or maybe just avoid that next reduction in force. I worked for 16 years in a profession that I just fell into. Like many 19-year-olds, I had dreams that weren't terribly realistic (and those ARE important), so I needed a "job" to tide me over. So I joined the Army, learned how to "fix" helicopters, and then worked as an "aviation technician" (a top-rated one, it is fair to note) while I waited for my dreams of fame as a musician to materialize.

This may shock some of you, but that just didn't pan out like I expected. Luckily, I had a great work ethic and had continued to develop in other areas, like computer technology. I had started becoming an IT professional without even knowing it. Many of my coworkers saw it much more clearly than I did and, fortunately, told me so.

So 10 years ago I decided to become a full time IT professional. I had a friend who was trying to help me get a job and quoted what sounded like a very nice salary. Fun job making good money? What's not to love??

The last two years have been crazy ones, full of amazing projects and difficult service calls, preceded by four great years of consulting part-time while passing on my knowledge of Microsoft-based network infrastructure to career changers, and it has all been so worth it! It has culminated in my acceptance into the Microsoft Certified Master Windows Server 2008 R2 Directory program, and I could not be more grateful and excited, especially since my company, Alexander Open Systems, just became a National Service Integrator partner. Yes, Johnny, that is a big deal.

I do not know where this process will take me exactly, or whether I will attain the shiny title of "Master" at the end (I would really like to think so), but I do know, from all of the other experiences I have read about, that I will have been exposed to deep product knowledge presented by the best of the best. I will undoubtedly become a much better technician, maybe even a better consultant, just by association with those who will run the gauntlet with me. The certification is certainly difficult to get and only goes to those who truly have the skill level of a Master, but even if it eludes me, hopefully only temporarily, I will have gained many things that make the two weeks of 16-hour days worth it. Heck, Army basic training was 13 weeks of 16-hour days (and some nights of interrupted sleep for guard duty) full of loud, angry men shouting at me while I tried to manage some fairly difficult tasks, and that went pretty well. I think I might rather enjoy this.

If you are not familiar with the program, it does take a bit of doing to get it all put together. First you have to determine if YOU think you might be qualified. The program description and other related information can be found here:

You can follow a link from there to find out more about the specific certification that you are interested in. There will be some prerequisites that you must meet, which usually include a minimum number of years of experience with the subject matter, the parts of the subject matter you should know best, plus a minimum set of certifications already attained in your chosen field.

If you meet those criteria, you can follow another link to apply. This usually entails a fee similar to the one you pay for taking a certification exam, plus submitting your transcript of certifications for review and approval. This is only the first part of the application process and usually only takes about a day. If you are approved, it gets a little more intense.

Next, you will likely need to submit some project docs that you have authored to show your detailed expertise. These will be reviewed to see if they support your claim of expertise. If the program manager approves of these, you are in the program. Now you have to pay your fee, the big one, which I am about to do, and get scheduled. Getting on the schedule is the final confirmation for attending which I hope to complete in the next day or so.

The rest of it is all hearsay for me, but I expect to fly out to Redmond in September and spend two fun-filled weeks of Active Directory adventure, and I plan on sharing as much of the experience here as they will allow. Good, bad or ugly.

More to come shortly. Wish me luck. I hear it comes in really handy...

"The problem with internet quotes is that no one has verified the source" -- Abraham Lincoln

- Peter Trast, MCITP DBA, MCITP EA, MCT LinkedIn with Peter

Monday, July 2, 2012

Is your domain migration data valid?

For many of us who spend a lot of time in customer environments, fear of customer-supplied data can cause the sandman to detour around our sleeping quarters more than one night in a row. It is difficult for those of us who practice so much self-reliance to depend on lists given to us by the people we are trying to help. And failure to validate the data they supply can lead to results worthy of that fear.

So please, for the sake of your own sleep quota, take a moment to consider checking the data.

I do lots of domain migrations, and of all the things I do in those migrations, merging users is one of the most potentially tedious, simply because of the negative and far-reaching effects that the misspelling of a login or samAccountName can have. If a user is migrated with the intent of merging with another account in a target domain, a misspelling has an avalanche effect. First, instead of merging the accounts, a "new" account is generated in the target domain containing the login name and sidHistory that were intended to be merged. Finding these duplicates is not as easy as simply running a "Find" on the directory; it may require running a CSVDE export to locate the offending object. Worse, running a computer migration after this mistake can create a second profile on the workstation instead of merging the source and target profiles, so the user loses sight of the original profile settings and it appears that the user's data is lost, requiring a backout process that CAN lose data if done incorrectly.

So why not make sure it is just done right in the first place? I modified a couple of simple scripts to help validate the data before using it in the ADMT tool.

The first one is a PowerShell .ps1 script that verifies whether each user name submitted for migration exists and is spelled correctly. It requires one input .csv file with a single column of data, with UserName as the header followed by a login name on each subsequent line, like this:

UserName
user1
user2
user3

The body of the script looks like this:

$struser = Import-Csv C:\users\res_migrator\desktop\userinput.csv
foreach ($user in $struser)
{
    $dom = [System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain()
    $root = $dom.GetDirectoryEntry()
    $search = [System.DirectoryServices.DirectorySearcher]$root
    $search.Filter = "(samAccountName=" + $user.UserName + ")"
    $result = $search.FindOne()
    if ($result -ne $null)
    {
        # Name found: append to the verified list
        Out-File -Append c:\userexists.txt -InputObject $user.UserName
    }
    else
    {
        # Name not found: append to the unverified list
        Out-File -Append c:\userdoesnotexist.txt -InputObject $user.UserName
    }
}

There are two output files: verified names and unverified names. The second one, "userdoesnotexist.txt", will contain the unverified user names that must be corrected. This script can be run in both the source and target domains.

Once the names are validated, user migrations can be done with a high degree of certainty that a merge will occur and sidHistory copied over, if that option is picked.

But that's not all you get, folks! I have a second script, a .bat file, to verify that sidHistory exists on each of the target accounts, using the same list of target login names used in the migration. This one requires an input .txt file, users.txt, containing a single column of names without a header, i.e.:

user1
user2
user3

and the text of the batch file is:

For /f %%i in (C:\Users.txt) Do (
dsquery * -Filter "(samaccountname=%%i)" -Attr samAccountName ObjectSID sidHistory >> C:\User_Info.txt)

The resulting output file should have a sidHistory attribute listed for each name. If the attribute is missing, search for duplicate accounts by running csvde -f c:\users.csv and searching the resulting .csv for instances of the login name to find potential duplicates.

Hopefully, this will make your migrations go a little smoother. It sure helped me a lot on my last one!

Happy Migrating!

"That's no moon...it's a space station".

- Peter Trast, SQL Expert; MCITP DBA, MCITP EA, MCT LinkedIn with Peter

Tuesday, May 1, 2012

Does Microsoft certification still have any value?

You know, when I was helping former mechanics, cops and railroad engineers learn the basics of an Active Directory infrastructure, I heard many versions of this question. I asked the same question myself when I finally decided to take the plunge into full-time IT. I quit a perfectly good job (which I really didn't enjoy or desire), went to school and started knocking off the certifications. One of the reasons I was motivated to do the certification part on top of the training was the promise from one recruiter of an extra $11,000 a year if I came out of school with my MCSE. I got the cert and got the job.

I never looked back.

Year after year, version after version, I keep renewing my certifications on the latest offering from the folks at Uncle Bill's software factory (Mr. Gates has refused to adopt me... well, answer my emails... so it's Uncle). And with only one exception (everyone had a bad year that year), I have seen generous compensation increases each year that are not tied strictly to my experience. I know many IT professionals with comparable experience who struggle at much lower salaries, all the while scoffing at the lack of value that a certification brings. Well, the employers I have picked seem to think there is something to the whole certification "thing".

Maybe I just picked the right businesses? Maybe being willing to change companies from time to time has made more opportunity? Do certifications really have any value other than to show you can pick the right answer out of four with the advantage of having two obviously wrong answers?

I believe certifications tell a story, a story about people who care enough about the technology to spend their own personal time learning more about it. They weave a tale about people who don't want to just get by, but want to move ahead and excel. Studying for a certification is hard work, regardless of the method.

And now, there is the Microsoft Certified Master program. A written test and a lab, both with high failure rates I might add, and both testing in a way that goes way beyond simple multiple-choice questions and gets down to the business of finding out what you really know. A Master cannot be doubted. They have done the real thing, and they know their business. Training by the top people in each technology, and real-world problems with real-world answers. And the compensation for most Masters is very impressive.

Yes, I am going after the Active Directory Master certification first chance I get, hopefully this year. It will be exciting (and scary) to see how much I can learn and how good I really am.

And, you know, funny thing, recruiters keep searching on those keywords like MCSE, MCITP and MCM. That alone makes the certifications relevant, regardless of the validity some IT people place on them.

Get some. Certification matters.

- Peter Trast, SQL Expert; MCITP DBA, MCITP EA, MCT LinkIn with Peter

Tuesday, March 13, 2012

With an Iron Fist (SQL PBM)

Maybe you were a system admin that got sucked into the exotic land of database administration because there was no one else to do it. Maybe you are a DBA on purpose. It might even be possible that one day you were writing web applications in your remote and intentionally isolated cube at the far end of the “trailer park” and faster than you can say DBCC CHECKDB REPAIR_ALLOW_DATA_LOSS you found yourself in charge of an unwieldy, quickly expanding and barely governed SQL Server environment.

In any case, you may have wondered if there is some way to control the SQL Server environment proactively, an automated way to enforce all of those naming-convention standards and object settings that you TALKED about, documented and trained on. Have I got news for you!

In SQL Server 2008, a new feature called Policy Based Management (PBM) was introduced. The idea is that if you have a setting or convention that you wish to check for and/or ENFORCE (get your control freak on), you just create a policy that defines the object (maybe a database), a facet (the property that can be checked for or enforced, like the recovery model of a database), and a condition (the value of the property, like Full for the recovery model). Systems Administrators (yes the “A” is capitalized!) will recognize this concept as having some similarity with Group Policy in Active Directory Domain Services, although the application and verification of the policy is done quite differently.

The really fun part is that using Central Management Servers in SSMS, you can enlist one or more instances to monitor or enforce, manually or on a schedule, from one instance. You can do ad-hoc policy checks and enforce the policy on objects that are not in compliance and you can include all objects of the type within an instance or you can un-enlist individual objects.

Some examples of the policies you might create include controlling role membership and preventing future modification by doing a ROLLBACK when someone is added (sounds like Restricted Groups in AD), or making sure that every instance in your organization uses Windows Authentication only. Maybe, instead of asking nicely and re-sending that paper policy that governs object naming conventions, you just need to make those ornery developers name all of their user stored procedures with a "usp_" prefix (I love you guys, really). Whatever the standardization need, PBM probably has the property you need to get a handle on. And standardizing is the whole point. It is an interesting exercise to document standards. It is a whole lot more satisfying to be able to inflict -- um, enforce -- them.
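Policies themselves are usually assembled in the SSMS Policy Management node rather than written by hand, but it helps to see what a condition on the Database facet is actually checking under the hood. Here is my own ad-hoc T-SQL sketch (not PBM syntax) of the recovery model example from above:

```sql
-- Hand-rolled equivalent of a PBM condition on the Database facet:
-- list every user database whose recovery model is not FULL.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE recovery_model_desc <> 'FULL'
  AND database_id > 4;   -- skip master, tempdb, model, msdb
```

In PBM you would express the same idea declaratively -- facet Database, property RecoveryModel, condition value Full -- and then evaluate it on demand or on a schedule against every enlisted instance instead of running a query yourself.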

Now go forth and standardize!

- Peter Trast, SQL Expert; MCITP DBA, MCITP EA, MCT

Thursday, March 8, 2012

You DO need an SSD, forget the cost

For those of you who, like me, have to run labs on a portable device, like a laptop, an SSD is an absolute MUST.

I recently mourned (well, I didn't really cry... actually, I danced a jig) the passing of my old HP DV7-1135 desktop-replacement notebook, or as we referred to ole' Bessie in my house, "the portable lap scorcher". It was OK in its time, if you can overlook the premature death of the optical drive one week after warranty expiration, the two times I had to re-solder the power connector, and the need to remove EVERY internal component just to replace the fan. But with the second drive bay that I used to add a 60GB SSD, it ran extremely fast... and hot enough to actually blister my leg. Burn injuries aside, the SSD allowed me to run virtual servers in VMware Workstation with performance that rivals production-grade hardware. The main problem was the capacity of the drive, which only allowed me to run 4 or 5 servers with about 1GB of RAM each.

After Bessie was shipped off to the recycler, I set about looking for another constant silicon companion (not the cube). The number one priority was to find something with 2 hard drive bays. But, apparently, business class laptops had moved away from this configuration somewhat, so I ended up looking at gamer rigs.

What do you know, I stumbled onto the ASUS G53S, a very nice i7 machine with 8 cores and 16GB of memory. It also has a very nice full HD screen that, at 15.6 inches, has plenty of visual real estate. Of course, the second drive bay was the main factor in my search, and the first thing I did within an hour of taking home my belated Christmas present to myself was to open the case and slap in the new 240GB OCZ drive that I bought at the same time.

Folks, I don't know if your experience with PCs goes back nearly as far as mine, but I remember waiting anywhere from 30 seconds to 2 minutes for a machine to boot. Boot time for the laptop on the new drive is 8 seconds. Login time is ONE SECOND. Applications like Word and Excel open so fast you can barely see the program name on the splash screen. And the 12 virtual servers? Well, my domain controllers reboot in less than one minute and login takes 3 seconds. I have a 3-node SQL cluster that runs like Forrest Gump. This thing is a beast, mostly due to the SSD. The task manager typically shows 100% on CPU and RAM with everything running and never misses a beat. I always knew that disk I/O was a big killer. I now have solid proof.

So on that next laptop purchase, you might want to take a look at the new SSDs. The one I have "only" has a 525MB/s read rating.

And there is now one rated at 1500.

Still sitting there??

- Peter Trast, MCITP DBA, MCITP EA, MCT LinkedIn with Peter

Tired of slow SQL queries?

Getting tired of those poorly performing queries or stored procedures? Getting even more tired of the phone calls that result from those poorly performing queries or stored procedures? Obviously, disconnecting your phone and huddling in the corner crying is not the answer (take it from me). Maybe what you need to do is look at the structure of your database and consider a little modification. First, a few questions.

Is at least one of your tables the size of (insert favorite Hollywood actor’s name)’s ego? And does that titanically (not a movie reference) humongous table have at least one column that could be used to divide the data into smaller chunks, like a date column in a Sales table with many months or years of data with hundreds of thousands, maybe even millions of rows? Can you add (would your budget allow) more physical disks to your SQL Server solution? And the biggest question, can you afford, or do you already have, the Enterprise edition of SQL server?
Well, if you were able to answer yes to all of those questions, it is possible that you might be able to tweak that lumbering hulk into exhibiting a few more miles per hour by making a few simple, if not inexpensive, changes.
The short version (level 000) is that you create new physical drives, define ranges of data and assign those ranges of data to different disks (or RAID 5 arrays). This is called table partitioning. Read on only if you really want to know how it is done (level 100)!
First, you must decide where to divide your data. For example, if you have about 10 million rows of Sales data for the last 5 years, you need to choose how to break that data down into smaller pieces. This decision is really based on how many physical disks you can add to your system. If you like keeping your data on RAID 5's and you can get your hands on 5 more RAID 5 arrays, you can divide your data into 5 parts (which just happens to nicely match 5 years of data).
So you create 5 new RAID 5 arrays. And in your database properties you create 5 new filegroups with at least one file each, one filegroup per RAID 5 array. Then, we are going to use these 5 different filegroups residing on 5 different arrays to create our partitioning strategy.
Now, we use a Transact SQL statement like this one
CREATE PARTITION FUNCTION [myDateRangePF1] (datetime)
AS RANGE RIGHT FOR VALUES ('20030101', '20040101', '20050101',
'20060101', '20070101');
to define the range of values for each portion of our table that will be stored separately from the rest. In this case, all sales for the year 2003, dates 20030101 through 20031231 (RANGE RIGHT starts with date 20030101 and ends before the next boundary, 20040101), will be assigned to one partition in our function. Then 2004 is assigned to the next, and so on. The last boundary, 20070101, includes all subsequent dates, unless the function is later modified, which it can be.
Next, each range is assigned to a filegroup with a statement that looks like this
CREATE PARTITION SCHEME myRangePS1
AS PARTITION myDateRangePF1 --the name of the function we just created
TO (test1fg, test2fg, test3fg, test4fg, test5fg, test6fg); --these are our filegroups
Now, all of the data for 2003 will be stored in the first filegroup on the first RAID 5 array, the data for 2004 will be stored on the second new array, and so on. This will give you more actual disks supporting queries for a single table, reducing (theoretically) disk I/O and increasing (keep your fingers crossed) query response times. Yeah, I know I mentioned 6 filegroups in my scheme but I need to leave at least one mystery hanging out there for you to explore on your own or read on http://msdn.microsoft.com.
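The last step is to create the table itself on the partition scheme instead of on a single filegroup, naming the partitioning column. A minimal sketch, with hypothetical table and column names:

```sql
-- Create the table on the partition scheme; each row is routed to a
-- filegroup (and its RAID 5 array) based on its OrderDate value.
CREATE TABLE dbo.SalesHistory
(
    OrderID   int      NOT NULL,
    OrderDate datetime NOT NULL,  -- the partitioning column
    Amount    money    NOT NULL
)
ON myRangePS1 (OrderDate);
```

From then on, inserts land in the right filegroup automatically, and queries filtered on OrderDate can touch only the partitions they need.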
“Seeya at tha pahty, Richtah…”

- Peter Trast,  MCITP DBA, MCITP EA, MCT

DNS Matters... A LOT

DNS reminder for Active Directory experts.

I recently worked with a client who had spent 10 hours on the phone with a well-known consulting company trying to resolve GPO, COM+, WinRM SPN creation, ip6.arpa and replication issues. After spending thousands of dollars looking at errors and warnings in event logs and performing internet searches for possible fixes, the call ended with nothing resolved.

So I got the call to give it a shot. Obviously, my client was very skeptical about my ability to help after witnessing the previous consultant's strikeout. I spent a few hours reviewing all AD health tests and best practices. I discovered, as I often do, that the client had configured both domain controllers to point at loopback for the primary DNS client setting and at the other domain controller for the secondary DNS client setting.

We reconfigured this using the Microsoft best practice: each (there were only two) domain controller's primary DNS client setting points at the PDC, and the secondary points at the other domain controller. But before we got around to going through the Active Directory health checks again, which were scheduled for 2 days later, he changed one of the servers BACK... after the call... He apparently did not agree with my opinion (that is to say, MICROSOFT's opinion) on DNS best practices.

After spending a little time going through health checks on the next appointment, I discovered the change. After a little coaxing, I got him to point both DCs at the PDC for the primary DNS client setting and at the other DC for the secondary. And guess what?

Within 15 minutes every health check was clean and group policy was working perfectly.

Check and VERIFY DNS first… and check it again if you have to call back. It really is a best practice. It could save thousands of dollars and keep you from being on the phone all night, like my unlucky client.

The last thing I asked him before finishing the call was, “What is the most important configuration in your environment?”

What do you think he said? :)

- Peter Trast, MCITP DBA, MCITP EA, MCT LinkedIn with Peter

Monday, March 5, 2012

She's a little runaway (query)

Stop that runaway query!
Have you ever noticed how SQL Server likes to just take over the CPU and memory of the box it is installed on? It just says, in its little SQL brain, "Gee, I think 90% of the CPU should be sufficient to run this query, and maybe I will just keep this memory in case I need it for something else." Meanwhile, all other queries and/or applications start turning blue from lack of resources and die on the way to the query optimizer.
Are you ready to take back control of the CPU and memory and force applications or users to only use their fair share of the server’s resources? Well, another great, new feature of SQL Server 2008 is called the Resource Governor. You can set limits on how much CPU and memory can be used by creating functions that define an application name, a user name, a host name, a server role name, and so on.
You start by creating Resource Pools and assign minimum and maximum CPU and memory percentages to each pool. For example, you create a pool called Pool1 and assign it a minimum of 20% and a maximum of 30% CPU. This means that anything assigned to the pool will always have some CPU available (20%) but will never exceed 30%.
You also create Workload Groups, which are assigned to specific pools. These workload groups can be given a high, medium or low importance within that pool. At this point, it is fair to mention that all unclassified work is dumped into the default workload group, which runs in the default pool with effectively UNLIMITED access to system resources -- the reason that so many of us have seen runaway applications.
The third part is to create Classifier Functions. This defines which users, applications, roles, etc. (workloads), go into which workload groups. Now, we have control!
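Here is what those three pieces look like in T-SQL, as a minimal sketch. The pool and group names, and the 'ReportRunner' application name inside the classifier, are hypothetical stand-ins for whatever you want to throttle:

```sql
-- 1. Resource pool: guarantee 20% CPU, cap at 30%
CREATE RESOURCE POOL Pool1
WITH (MIN_CPU_PERCENT = 20, MAX_CPU_PERCENT = 30);
GO
-- 2. Workload group routed into that pool, at low importance
CREATE WORKLOAD GROUP ReportingGroup
WITH (IMPORTANCE = LOW)
USING Pool1;
GO
-- 3. Classifier function (created in master): send the hypothetical
--    'ReportRunner' application to ReportingGroup; everything else
--    falls through to the default group.
CREATE FUNCTION dbo.fnClassifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF APP_NAME() = 'ReportRunner'
        RETURN 'ReportingGroup';
    RETURN 'default';
END;
GO
-- Register the classifier and put it all into effect
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

The classifier runs once per login, so keep it fast and simple; a slow classifier slows down every new connection to the instance.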
Just so you know, SQL Server 2008 reserves some CPU and memory for itself that is never given to any user or application. Didn't you ever wonder how you could access a locked-up server using the "SQLCMD -A" utility? There is always CPU and memory on reserve just to keep the motor running like the well-oiled machine that SQL Server is.
And if you like, I can show you how to build one of these Resource Governor contraptions right here in my MS 6231 class which runs from October 18-22 (it’s actually covered on Friday, usually). And if all else fails…
“Just hit it with a hammer!”

- Peter Trast, MCITP DBA, MCITP EA, MCT LinkedIn with Peter