Saturday, May 13, 2017

Why the NHS ransomware shambles was an accident waiting to happen


I watched the news unfold on Friday afternoon.  A handful of hospitals had been hit by the WannaCry Decryptor ransomware, transmitted by email in a massive, international wave that began a few hours earlier.  As more and more people opened their emails, the malware spread exponentially, eventually affecting dozens of hospital trusts and systems in more than 70 countries worldwide.

Of course, I deplore any attack, especially one on such critical infrastructure, and I think the authors of this one should be found and brought to justice.  But there's no denying this is partly the fault of the NHS.

Many top security specialists have been warning for years that a critical IT incident is an accident waiting to happen, with the latest report published just two days prior to the attack.

The NHS has known for years that running an insecure operating system (Windows XP) was a massive security risk, and yet its money was spent on maintaining PFI contracts and paying overblown management consultants rather than investing in infrastructure.  As the NHS started to creak alarmingly in the latter half of 2016, even less money was available and the ethos became 'make do and mend'.

Meanwhile, some genius thought up the Rate Caps, the answer to all their prayers.  If we pay staff less, they reasoned, especially external staff, we will save money!  Unfortunately what they failed to see, and what was clear to everyone else, was that paying staff less would drive them away, leaving gaps in service - including IT, especially when the new rates were around 60% of the old ones.

Unfortunately, the NHS in particular has a habit of recruiting staff from its existing pool, meaning many IT personnel came from clinical backgrounds.  Nothing wrong with that, you might say, but when your PC is Cryptolockered, who do you want to fix it - an IT specialist with years of training and qualifications, or a nurse-turned-password-resetter?

The final nail in the coffin was the recent IR35 changes, and more particularly the decision by senior NHS managers to classify all contractors as caught by the legislation.  This ensured that any contractors remaining couldn't afford to stay, as they'd automatically become liable for 40% tax on not only their earnings but their past earnings too, irrespective of how much corporation tax their limited company had already paid.  This meant an effective tax of 60%+ on earnings, and the effect was immediate.  Contractors didn't renew and staff left in waves.  One estimate is that the MoD lost over 98% of contractors in this way, and there have been ramifications throughout all public sector areas due to staff shortages.  This left the burden on permanent staff, and they failed spectacularly.

At the time of writing our Home Secretary is chairing an emergency meeting of COBRA to work out an appropriate response.  The real work will be done by hundreds of under-equipped NHS IT workers over the weekend.  But the appropriate response would be to identify and publicly shame the management idiots who prioritised cost saving over patient lives, and who have no ability to see the consequences of their actions.

Tuesday, April 5, 2016

Is the multi-stakeholder model a viable way forward for Internet governance?


Email:                    derek@derekcolley.co.uk
Twitter:                 @dcolleySQL

 
An essay written for the International Institute of Strategic Studies (www.iiss.org), February 2016 


Is the multi-stakeholder model a viable way forward for Internet governance?

 

 
 
"We pay too little attention to the reserve power of the people to take care of themselves. We are too solicitous for government intervention, on the theory, first, that the people themselves are helpless, and second, that the government has superior capacity for action. Oftentimes, both of these conclusions are wrong."
 
- Calvin Coolidge, President of the United States (1923-1929)
 


The Internet as we know it today has roots in the technological developments of the Industrial Revolution and in the inventions that emerged from research and development into computational technologies by a diverse range of commercial, academic and governmental groups.  This essay will address the role that combined technological contributions and shared governance have played in the development of the Internet since its inception, and speculate on how Internet governance may best develop in the future.

 
PAST

Arguably, modern-day computation started with the Babbage Difference Engine, proposed by Charles Babbage in his 1822 paper to the Royal Astronomical Society, which took the often tedious and painstaking process of calculation by hand into the realm of automation.  Over the next hundred years, computing developed from simple arithmetic into complex machinery with real-life applications, growth driven by the twin engines of commerce and war through hundreds, if not thousands, of contributors across a multitude of sectors.  By the time World War II began, computation had reached the point where these applications culminated in the Bombe and, subsequently, the Colossus series of computers at Bletchley Park, machines used to decrypt German communications which made a huge impact on the war effort.
 

Bletchley Park's Government Code and Cypher School (successor to the Admiralty's wartime 'Room 40') was not the only organisation to make significant advances in computation around that time.  Commercial organisations, and those affiliated to but standing independent from government, also drove progress.  For example, after the war ended Tommy Flowers, the inventor of Colossus, returned to the Post Office Research Station, where through the 1950s he helped to develop ERNIE, one of the world's first hardware random number generators, a successor of which is used today to pick Premium Bond winners; he and his team also developed hitherto unknown pulse modulation techniques for telephone exchanges.  Across the Atlantic, IBM was busy with a multitude of projects, from the Automatic Sequence Controlled Calculator (the 'Harvard Mark I') of 1944 to the pioneering disk drives developed at its San Jose laboratory in 1956.  The transistor was invented in 1947 by a team at Bell Laboratories; EDSAC, a stored-program computer fed by paper tape, was built in 1949 at Cambridge University.  From hand-held mechanical calculators to trackballs, the acceleration of progress in computing was marked, driven by both commercial and government interests, with significant contributions from academic institutions and individuals too.

 
Throughout this period of early computing, the evidence shows that individuals, government, academic and commercial organisations all contributed towards technological progress, and that premise holds true today.  The technology 'gold rush', which has culminated in Facebook in the modern era, continued through the 1950s to the 1970s with an ever-accelerating stream of new insights, inventions and ideas.  After ARPA (later DARPA) brought ARPANET online at the end of the 1960s, networked computing was opened to universities and, eventually, to the general public.  Open, collaborative computing in the form of the UNIX operating system began as early as 1969, with its source circulating among universities that were free to develop and improve upon it.  The miniaturisation of silicon chips throughout the 1970s and 1980s bore out Moore's Law, named after Gordon Moore, co-founder of Intel, which states that the number of components on an integrated circuit doubles approximately every two years (a rate now thought to have reached a plateau).

 
Frameworks for these new technologies came not exclusively from central government, but from gifted individuals such as Alan Turing, John von Neumann, Clive Sinclair and Jack Kilby.  Sinclair, an entrepreneur, brought affordable consumer electronics to the masses; a James Dyson of his day, his inventions ranged from the ill-fated Sinclair C5 electric trike to the wildly successful Sinclair ZX Spectrum.  Kilby invented the integrated circuit as part of his work at Texas Instruments, an integral component of every modern computer system.  Famously, Bill Gates co-founded Microsoft as a tiny start-up in 1975, a company which has grown to be a market-leading force today.

 
We can deduce, then, that the impetus for research and development into early computing came not from one controlling entity, but from a vast array of experts working for a variety of causes.  This understandably led to a plethora of competing standards and protocols in the field.  For example, TCP/IP was specified in 1974 and UDP independently in 1980, yet both are able to carry Internet traffic (with key differences).  Early home computers such as the Acorn Electron, ZX Spectrum and Commodore 64 competed in the commercial marketplace, and the rough-and-tumble of commerce drove manufacturers to produce bigger and better machines.

 
Technological progress up to the 1970s and 1980s demonstrated that governments, corporations and individuals could work alongside each other developing new and exciting changes to the computing landscape.  With each organisation having its own motivations - whether these were financial, philanthropic, militaristic or educational - progress in the field was at an apogee, with new developments arriving faster than the general public could consume them, and, tellingly, faster than the government could regulate them.

 
This lack of regulation arguably contributed to the explosive arrival of the consumer Internet in the late 1980s.  Connecting via dial-up modem to a world of shared, unregulated, free material for the cost of a phone call was an attractive consumer proposition.  Leading the way was AOL, whose corporate ancestor launched the 'GameLine' product - a service that linked an Atari 2600 via phone line to a server, allowing consumers to rent games for as little as $1.00.  AOL transformed quickly into a generalist consumer Internet Service Provider, and alongside the likes of Yahoo it cornered the market in user-friendly Internet connectivity.  Corporations, driven by the survivalist need for growth and the requirement to please their shareholders with financial success, were technologically outpacing national governments.

 
Nevertheless, governments tried hard to catch up, both in research and development and in introducing regulation, ostensibly to protect Internet users and to safeguard national security.  The appearance in 1988 of the Morris worm, the first worm to spread across the Internet, written by an author who admitted he had created it to 'see how big the Internet is', frightened the US Government and led to the very first conviction under the newly-minted US Computer Fraud and Abuse Act.  It also provided a demonstration of how the Internet was never under governmental control, despite much of the funding for it originating from federal budgets.

 
IANA, the Internet Assigned Numbers Authority, which administered top-level DNS servers for the Internet, came into being after calls for regulation from key players in the development of the new World Wide Web such as Vint Cerf and Jon Postel.  IANA was an American not-for-profit body, though it was financed by DARPA funds that originated from the US Government.  When the U.S. National Science Foundation tried to hand over control to a private US corporation, Network Solutions, there was a backlash from Internet users who felt that the Internet was in jeopardy of losing its freedom and independence, a period referred to as the 'DNS Wars' which lasted until the creation of ICANN.

 
 PRESENT

Today, there exists a tension and ongoing argument surrounding ICANN, which administers top-level domains on the Internet.  This tension arises from the American nature and location of the ICANN organisation - with the Internet today being truly global, many nations feel that top-level Internet administration should rest with a global organisation not so closely tied to one particular country.  Following the expiration of the original 11-year agreement created at the transition between IANA and ICANN, as of October 1, 2009, ICANN became a private (albeit American) organisation, not directly controlled by the US Department of Commerce.

 
This power struggle at the top of the Internet hierarchy is a reflection of the nature of the true globalisation of the Internet.  With regulatory oversight undertaken by a variety of stakeholders that are not exclusively governmental, the same kind of innovative fast-paced development of technology established in the late 1800s can continue into the future.  With control of the upper echelons of the Internet held by a diverse board of members rather than a single state with specific aims, the Internet can be governed by a broader democracy.
 

Let us look at the alternative.  Governments have a track record of interfering with or prohibiting new technology to advance their own goals or ideals.  This has early historical precedents: after the invention of the printing press in the mid-15th century, religious authorities in the Ottoman Empire banned printing in Arabic script, and the first press in the region was not established until 1729.  More recently, the United Arab Emirates threatened to ban Blackberry mobile phones unless the manufacturer provided the encryption keys to the government.  Today, Google Street View remains banned in Greece and Austria on privacy and national security grounds, and there is an ongoing argument playing out in the media between Apple and the U.S. Government about providing technology to break into the previously uncrackable iPhone running iOS 9.1, in the name of fighting terror.

 
Historically, resistance to this kind of government instruction has often led not only to violence but also to capitulation, or to the governmental organisation reforming its stance to bend to the will of society.  As an example, consider the recent row in the United Kingdom about the 'Snooper's Charter' - the Investigatory Powers Bill - initially quashed by Parliament due to strong opposition from both MPs and the general public, and now being put through Parliament a second time by the Home Secretary under cover of the ongoing EU referendum debate.  Suspiciously attracting near-zero coverage in mainstream media, this privacy-invading, Orwellian bill will likely result in legislation, which will have a profound effect on the ability of the UK government to spy on its citizens.

 
One example of a nation where this level of control, and more, has been successfully imposed on the population is North Korea.  In this totalitarian state, the opposite of a multi-stakeholder model has been developed - with the government in full control of North Koreans' Internet access, the dominant network is an intranet-like closed domestic system called Kwangmyong.  Wider Internet access is prohibited except by special permission, granted only to a small group under close government audit.  This reflects the culture of the state, and is essential to allow the government to control and limit the knowledge and views of its people.  Freely available Internet websites are not allowed on Kwangmyong unless pre-reviewed, approved and, when necessary, censored by the government.  Other states such as Cuba and Myanmar have similar systems.

 
Is this the kind of Internet we would like to strive towards in the West?  The obvious answer is no.  Freedom remains a founding principle of the constitution of the United States; while not enshrined in a UK bill of rights, it remains a foundation of UK culture; in the rest of Europe, freedom is a given in most, if not all member states.  Single ownership evidently leads to unitary control; unitary control leads to censorship and limitations on individual freedom.  The current multi-stakeholder governance and development models must be kept and nurtured then, both to allow people the freedom to express and share their opinions without fear of reprisal and to keep up the pace of technological progress.

 
Let us imagine an Internet governed entirely by states.  How would such a model be funded?  In the past, websites would be created on a non-profit basis by enthusiasts and experts, and many 'webmasters' would keep their websites operational from love of their specialism rather than the prospect of monetary gain.  Even now, there are examples across the web of free services funded by donation, but these are dwindling in favour of 'freemium' services driven by commercial interests.  At present, advertising revenue provides the momentum that keeps the Internet moving. Under a governmental mandate, such an Internet could not exist. 

 
Anyone who has worked for local or national government can imagine how such a quick-thinking, dynamic community would fit into bureaucratic, Kafkaesque models of governance.  What if the controlling government forced users to fill out forms to seek approval for new websites?  Or imposed caps on the number of posts a user could make, citing resource constraints?  Or, more seriously, imposed further restrictions on the freedom of expression of users?  Censorship of views, however moderate or extreme?  What if a paranoid government, desperate to dispel anti-government protests, decided to prohibit criticism of the ruling party?

 
Such an Internet would be unthinkable, yet as described earlier, this model exists already in totalitarian countries.  The West must be extremely careful not to allow the freedom of the Internet to be further compromised in the name of ideological causes, whether these are driven by paranoid dictatorships or the ever-increasing wish of so-called democratic countries to ceaselessly surveil their citizens.

 
Let's now look at one company that is dominant today, and try to understand whether we have preserved the multi-stakeholder model that began with a handful of entrepreneurs, scientists and engineers two hundred years ago, or whether the commercial sphere is moving towards single-handed control.  Facebook is a large, well-known social media platform with dominant market share.  With the effective death of MySpace by 2011 (despite a relaunch in 2013 with mixed results) and the closure of Friends Reunited in February 2016, Facebook now dominates the social media landscape with over a billion monthly active users.  Facebook is an example of where the multi-stakeholder model has converged into a single-stakeholder model.

 
Does this mean Facebook is an example of a successful single-stakeholder model for Internet use?  Its dubious recent history casts long shadows.  Facebook spent over US$10m on lobbying efforts in the United States in 2015, exerting influence on lawmakers and other political entities over Internet-related legislation.  To emphasise this point - this is a non-elected commercial entity exerting pressure on elected political leaders to shape certain aspects of the Internet to suit the interests of the company - not the interests of the general public, nor altruistic or philanthropic groups, but the company, and their shareholders.

 
Up until 2013, it was impossible to permanently delete a Facebook account.  Information posted on Facebook such as political views, personal communications, identity information and more would remain in the possession and ownership of Facebook with no right for the user to demand deletion.  After 2013, permanent deletion of accounts became possible but it is doubtful whether the data is actually removed from Facebook servers - it is far more likely that data is simply rendered inaccessible, since this presents the most cost-effective option.  From a societal perspective, Facebook has also been blamed for high divorce rates - in a survey in 2009, it was estimated that approximately 20 percent of all divorces included some reference to the social platform. 

 
There are many more criticisms that the company has drawn in the last decade, many of which have striking parallels to North Korea - issues relating to censorship, influence, control, privacy and surveillance.  With increasing polarisation of the Internet around large, flagship corporations such as Facebook and consequently the reduction of the number of stakeholders in Internet development and governance, basic human rights are at risk of violation.  This underlines the need to have a diverse range of controlling entities in charge of the planet's most prolific network instead of encouraging migration to a system with a unified controlling party.

 
FUTURE

Effective governance of the Internet in the future is a doubtful proposition.  While social media is thriving, greater tectonic shifts are underway, and the Internet is moving at an accelerated pace not just towards mobile but towards wearable and embedded devices - the so-called 'Internet of Things' (IoT).  With devices such as the Fitbit monitoring heart rate, exercise, sleep patterns, food intake and even sexual activity, and with this data sharable to a mobile app, a traditional website and, via social media, to friends and family, how does a government effectively regulate the use of such a device?

Current tools such as the UK Computer Misuse Act 1990 (with revisions) are beginning to look hopelessly traditional and outdated.  Section 3A of the Act, for example, prohibits the supply of articles that may be used to commit offences as defined in the other sections of the Act.  Does this mean computer manufacturers may be committing an offence?  What if a wearable device such as a smartwatch was used to record video that was later used to commit an offence - does this render the manufacturer and retailer liable to prosecution?  The Computer Misuse Act has no reference to IoT concepts such as wearables, or embedded domestic environmental controls such as the NestCam. 

With encryption now commonplace in e-commerce and a move towards encryption as a standard protocol, governments are left without means to enact surveillance or control of Internet users.  Even with the advanced tools available to specialist government departments (such as GCHQ and MI5 in the UK and the NSA in the USA), as the current Apple debate shows, governments are a long way off from maintaining control of Internet activity.

This increasing inability to govern the Internet is a welcome development.  Historical examples have shown that hobbling free-market technological progress, or imposing excessive governance driven by state paranoia and megalomania, simply doesn't work.  Printing presses came about despite the government; the early Internet, funded by DARPA, exploded into the public domain; encryption spread despite the USA banning its export; and progress in technologies such as the IoT will happen despite, not because of, state intervention.  Regulatory and governance powers as they exist today cannot be effectively exercised when governments fail to keep up with the pace of technology.

 

CONCLUSIONS


Through examining the history of computing, the present state of governance and the attractiveness of the multi-stakeholder model, it is clear that the only viable path forwards is to maintain the multi-stakeholder model.

As it has always done, commerce will provide the infrastructure.  End users will continue to provide the advertising revenue that powers the engine of growth.  Although the content of the Internet may be tainted by commercial interests, it should remain a domain where ordinary people are free to speak, create, criticise, trial, explore and imagine.  It should remain an invention that we can use to communicate with each other, building bridges, doing business, exploring new places.  Ceding control of Internet governance to single organisations, whether governmental or commercial, would be disastrous - North Korea provides the example of poor state governance, and the negative impact that Facebook has had on Internet impartiality provides the commercial example.

Government does, however, have a role.  State governance should guide development, in much the same way that the banks of a river guide its flow.  Governments should step in only when necessary to curb the worst excesses of complete freedom; where existing laws are broken online, the government should support the prosecution of offenders under existing legislation.  If commercial interests cause significant disruption with negative consequences, governments could introduce rules or caps to limit the influence that such commercial giants have on the rest of the online population.  If governments want to encourage technological progress, then from a governance perspective they would be well advised to stand aside and let today's entrepreneurs, scientists, tinkerers, academics, hobbyists, experimenters and ordinary users provide the driving force.

We are some way from the Internet transforming from a free-market commercial Utopia into a philanthropic reflection of the best aspects of humanity, driven by altruism, enthusiasm, compassion and a wish to improve the human condition.  It may never happen.  But over-regulation through consolidation and seizure of Internet control is not only the wrong answer to a question that should never have been asked; it would be disastrous, severely limiting economic, social and cultural outlooks globally.
 
 

 

Tuesday, December 2, 2014

The Importance of Being Social



I'm not a very social person.  In my day job as a DBA, I used to think this didn't matter too much.  I was able to configure replication; create clustered SQL Server installations; diagnose performance issues; fix failed backup jobs and perform most of the other miscellany that comprises the daily tasks of a typical DBA.  Hard at work outside the office, I was writing steadily for websites like http://www.mssqltips.com, passing Microsoft certifications and generally doing the database thing.  These activities aren't noted for their levels of social interaction.


One part of my job that I couldn't stand, however, was ... The Meeting.  For some reason, everyone in my organisation seemed to believe that every meeting required a DBA, whether that meeting was technical or not.  As a result, I spent as many as 4 hours of *every day* sat in a meeting room, listening to mostly non-technical people carry on about issues ranging from new project launches to 'elf and safety initiatives.  It was very tiresome, and my typical riposte to that rage-inducing disclaimer certain people use when they engage me in conversation ('I'm not technical!', they say sheepishly, raising their hands in a defensive posture) is, 'It's okay - I'm no good with people!'.


Personally, I hate the format of the meeting.  I subscribe to the Pratchettism which I'll paraphrase as ' the IQ of a mob is the IQ of the stupidest member of that mob, divided by the number of persons within it.'  I find the most productive way of working is alone, with access to reference information.  Meetings provide a forum for many people to communicate, but the efficiency of that communication is low and leaves (IMHO) a lot to be desired.


I've always had time for those who *don't* understand - but never time for those who *won't* understand.  You know the type I mean.  The type who ask you for a technical report on some issue - perhaps a root cause analysis for some recent downtime - and when you present your short(-ish) document detailing uptime statistics, log analyses, cross-references to Microsoft white papers, they either don't read it or decorate their bin with it.  Or ask - infuriatingly - for an 'executive summary'.  My typical riposte to this was, 'I would, but I've run out of crayons'.  Those who make zero effort to understand underlying technical concepts.


And yet.  As I've matured as a DBA over the last decade, and grown as a person, I've found my attitude changing.  I've awoken to the realisation - obvious perhaps - that many folks have very different skill sets.  Some have no skills at all.  But, they're all employed at the same firm (for better or worse) and that means one has to take a more relaxed attitude to conveying information. 
 
Everyone's just people, and people are all different.  Some people have the same aversion to reading a technical report or looking through a log file as I do to examining business flowcharts.  And it's taken me a very long time to realise this.  I've found that to get the best out of the team I'm working with, I do have to modify my behaviour and expectations to match the abilities of the team - and this is a lesson that, for me at least, has been a long time coming. 


There's plenty of IT guys out there (perhaps some reading this) who will identify with my technological roots, espouse all things Tech and profess hatred for all things User.  But what we all seem to forget is that the reason an IT department (or a DBA team) exists is to serve the business.  Without the business, there is no need for a database estate.  And sometimes, this means we have to engage with the people who run that business.


So next time you're in a meeting, be strategic about how you handle it.  Set some clear expectations early on about the desired outcomes of the meeting - this will help focus people and keep conversation on-track.  Use an agenda to help you do this.  Make sure everyone in the room is aware of the names and responsibilities of other attendees. 


When asked to explain technical concepts, there's a technique, which in the world of journalism is called pyramid writing.  Start with a high-level sentence that explains the outcome - 'We performed a root-cause analysis, and found the server malfunctioned due to a faulty hard drive.'  Then assuming interest doesn't wane, expand.  'During our investigation it came to light that the drive had been warning-flagged by the software management system a month ago.  This was missed as we don't use automatic alerting.'  Then further.  'We found that another drive in the RAID 5 cluster had failed two months ago, and hadn't been replaced.  This second drive failure meant data loss, since three-member RAID 5 clusters aren't tolerant to more than one drive loss.'


Remember that there are those in your organisation who are more receptive to different communication styles, too.  For example, I like dense technical material such as books, white papers, academic papers even.  I don't particularly like learning from video, nor looking at diagrams, to the extent that I prefer the XML version of an execution plan.  But others will 'grok' your points if you explain them visually.  Occasionally, then, illustrate your explanations with pen and paper.  Pick up Visio and put together some flowcharts.  In e-mails, use visual aids to classify and characterise your information - from bullet points to different size and colour fonts.  I find using the highlighting tool to highlight specific points within my text to be very effective with non-technical people.


In conclusion - think about, respond to, and care about, your audience.  Examine your attitude to communication, and see if you can make any improvements.  And remember that without business people - we're all out of a job!



Thursday, October 30, 2014

Notes on Nested Transactions


Last week I was approached by one of our developers, with a troublesome piece of code.  Essentially, the T-SQL he was attempting to run was inside a transaction, and using an INTO clause to put the result set into a target table.  Unfortunately, it wasn't working as anticipated - he was able to open the transaction and run the code, but when it came to checking the row count output he noted 0 rows were inserted into the table before he committed - and he was curious why.

My immediate suspicion was that because the transaction wasn't committed, the rows weren't being inserted, so an alternative method of getting the row count would be to print the value of the @@ROWCOUNT variable at the appropriate time.  However, in the course of testing this, I made a bit of a mistake - I decided to try a nested transaction, to get a better understanding of the problem.

So here's some example code of a straightforward use of the INTO clause.
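Something along these lines, using sys.objects as a convenient source and a made-up target table (the object names here are illustrative, not the developer's actual code):

SELECT TOP 10 name, object_id
INTO dbo.DemoTarget
FROM sys.objects;
-- (10 row(s) affected)

SELECT @@ROWCOUNT AS rows_inserted;   -- reports 10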




Working as expected.  So now let's open up a transaction and do the same.  
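Again a sketch with illustrative names - the same INTO statement, this time inside an explicit transaction:

BEGIN TRAN a;

SELECT TOP 10 name, object_id
INTO dbo.DemoTarget2
FROM sys.objects;

SELECT @@ROWCOUNT AS rows_inserted;   -- reports 10, even though TRAN a is still open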




We have ten rows reported.  Let's query the table and see if the rows actually exist.
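From the same session, roughly:

SELECT * FROM dbo.DemoTarget2;   -- the ten rows come back, even though TRAN a is uncommitted
SELECT @@TRANCOUNT;              -- 1: the transaction is still open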




Querying the table returns the rows.  Yet the transaction remains open.  This behaviour is not what I was seeing from the developer's code, so I decided to play with it a little and see if we could force the expected behaviour by using a nested transaction - that is, to introduce ANOTHER transaction that deals explicitly with this insert, then leave the outer transaction open so everything else could be rolled back.  So here's what I did.
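Something like the following (the '-- some other stuff here' comment stands in for the rest of the batch; names are illustrative):

BEGIN TRAN a;

-- some other stuff here

BEGIN TRAN b;

SELECT TOP 10 name, object_id
INTO dbo.DemoTarget3
FROM sys.objects;

COMMIT;   -- ends the inner transaction b; @@TRANCOUNT drops from 2 to 1, and TRAN a remains open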




So far, so good.  The COMMIT was hit, the inner transaction committed, and DBCC OPENTRAN confirms this (not shown).  So now I needed to test whether this behaviour would hold if the ROLLBACK was hit instead of the COMMIT.  So I modified the code as follows:
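Again illustrative, with ROLLBACK in place of COMMIT:

BEGIN TRAN a;

-- some other stuff here

BEGIN TRAN b;

SELECT TOP 10 name, object_id
INTO dbo.DemoTarget4
FROM sys.objects;

ROLLBACK;   -- intended to undo only the inner transaction b...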




This has worked, right?  The inner transaction would have rolled back, leaving one open transaction?  Right?  Wrong.
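Checking afterwards tells a different story (again, a sketch):

SELECT @@TRANCOUNT AS open_transactions;   -- 0: nothing is left open
DBCC OPENTRAN;                             -- reports no active open transactions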




In fact BOTH transactions were rolled back.  It didn't selectively roll back one transaction, even though a COMMIT only commits one transaction at a time.  And here's the lesson: when using a plain ROLLBACK, all open transactions are rolled back.  This is actually documented in BOL, and I kicked myself for not checking before messing with it.  Here's the quote from BOL:

"When nesting transactions, this [ROLLBACK] statement rolls back all inner transactions to the outermost BEGIN TRANSACTION statement." 

Here's an interesting thing.  I could have attempted to avoid this by using a transaction name, e.g. BEGIN TRAN b and ROLLBACK TRAN b.  It would have errored, though - it appears (through playing with this) that transaction b is subsumed by transaction a - @@TRANCOUNT is 2 as expected, but I cannot roll back b, and DBCC OPENTRAN only shows the oldest active transaction, a:
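A rough sketch of that experiment:

BEGIN TRAN a;
BEGIN TRAN b;

SELECT @@TRANCOUNT;   -- 2, as expected

DBCC OPENTRAN;        -- shows only the oldest active transaction, a

ROLLBACK TRAN b;      -- errors: only the outermost transaction name (or a savepoint) can be rolled back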




So why was this such a bad thing?  Well, in the code above, -- some other stuff here was actually a couple of DELETE statements - that weren't supposed to be run without an explicit COMMIT from the developer.  Because the nested transaction did NOT commit and a ROLLBACK was issued, the rollback rolled back BOTH the open transactions, and the batch proceeded as normal and executed the DELETE statements, outside of a transaction context - leaving us no way of getting them back, bar recovery from a backup.

Luckily this wasn't a disaster in this case as the data wasn't of huge importance.  But it was a lesson for me on using nested transactions.  Many other sources on the web call them 'evil' and 'not to be trusted' - and now I know why.

Wednesday, October 15, 2014

The Symbiosis of Managers and Engineers


I was struck by a blog entry recently.  A senior executive from a company in the same field as a recent client of mine (am I being suitably anonymous yet?) wrote about his recent experiences trying to find more information on becoming an effective product manager.  Having first self-identified his deficiencies in this area, he was frustrated to find, upon Googling the term, visiting a website about becoming a better product manager and downloading the recommended e-book, that the text in question was around 200 pages long with over 40 chapters.  In the remaining couple of paragraphs, he vented his disgust that there wasn't a five-minute summary or a short synopsis (beyond the blurb), or a summary chapter which he could read to distil the essence, perhaps returning to the finer detail of the chapters at a later date, as and when required.  He wrote that he wasn't 'the kind of person' to whom that format would appeal, and went further in slating the publishers for failing to consider the end user (him) - cleverly relating this back to the very principles of good product management that the publishers had failed to meet.

Now I got a little suspicious at this point - I tend to now, having gotten that bit older.  I'm a lot more suspicious of unsubstantiated claims.  So I tracked through, found the link, found the book.  In three short clicks I found the table of contents.  And the final chapter in the contents?  A summary of the key points from the entire book.  Three pages.  And it took me under a minute to find.  Why, oh why I opined, could this manager not take the trouble to read just SOME of the detail?  He would have discovered, had he scrolled to the end of the contents pages, this summary chapter.  Alternatively, from Googling the title and author I was able to find detailed Scribd notes with bullet-point synopses of each chapter.  Additionally, customer reviews also gave subjective opinions from readers who had already bought the book on Amazon.  None of what I did was very technical, just a case of looking for details.  As you might have gathered by the evident negative bias I'm already placing on this account, I'm a technician, or an engineer, and this kind of attention-deficit behaviour I find incomprehensible.  Alas, this lack of focus on detail is symptomatic of the management class, the big-picture people.

Now before you think, 'here we go, another rant against managers', I'd like to explain why I'm writing this -  I want to lay out some of the fundamental differences between the management and engineering camps as I see them, and provide an argument about why co-operation is vital between these two warring factions.  Feel free to substitute your own terms as you read - manager can be replaced with leader or non-technician, and the terms engineer and technician are used interchangeably anyway.  I'm interested in exploring my own thoughts on the dynamic between these two groups in a semi-structured way, yet I won't apologise for any tangents or rambling; although I'll try and keep the grammatical errors to a minimum. If you're still reading, hopefully you'll come along to the end.  

Sometimes, the worlds of management and the worlds of engineers seem miles apart.  Indeed, if you'll permit my generalisations, almost every trait varies between the two groups.  Take workplace fashion as a surface example; with managers dressed for business, smart shirts, ties and trousers, and engineers (or technicians, if you prefer the term), typically dressed for an evening in front of the XBox. It's not just physical characteristics that vary - the manager tends to be an extrovert, comfortable in front of large audiences, excited and energetic in the face of crisis, and the driving force behind many a team.  Whereas in contrast the typical engineer will be quiet, introverted, even withdrawn; some may say sullen, opinionated, comfortable in their own intellect but a poor team player and a worse leader.

Now before I get a hundred comments flaming me, accusing me of making crass generalisations, let me say that I'm working with just the stereotypes of the manager and engineer here - I fully recognise that each type has edge cases, and the condition of 'manager' or 'engineer' actually encompasses two broad spectra, on which can be observed many of the characteristics of people.  However for the sake of argument (and because you all probably know at least one manager and one engineer that conform to my definitions) let's accept these definitions and roll on. 

So let's look at the differences in time management between the manager and the engineer.  The manager fills their day rushing from meeting to meeting, with occasional stops at their desk to read their emails and dash off brief replies.  In transit, they stride along broken-necked, staring at their iThings.  They may carry around pieces of paper with fragments of project plans, or keep a notebook with minutes of meetings.  These meetings can happen in dedicated rooms, at desks, in corridors or on the phone.  These are people that 'do lunch'.  To an outsider observing management behaviour, it may seem that no real work is being done, that the manager is simply an outmoded and inefficient paradigm left over from the days of Filofaxes, Rolodexes and dinosaurs, rushing around talking to people about nothing at all, when everyone knows the REAL work these days is in technical roles, don't they?  Don't they?

Let's try and describe the job of a manager.  Managers have a very specific role, and the clue's in the title.  They manage (well, duh).  So what does this mean?  They communicate, empathise, associate, organise, motivate, prioritise.  They drive others, assess others, recruit others and sometimes fire others.  These are qualitative verbs, 'doing' words that don't immediately conjure tangible results.  What I mean by this is that organisation, communication, understanding - these don't seem 'real' in many ways, they don't present you with a finished product, a piece of code, a software module, a completed project - they simply seem too intangible to matter.  What IS it that managers actually DO?  Why do they have to spend the day communicating with others instead of producing quantitative, measurable output?  Compare these verbs with some engineering verbs.  Program.  Test.  Build.  Measure.  Analyse.  Maintain.  Diagnose.  The engineering verbs immediately suggest measurable, tangible, quantifiable output.  

But this isn't entirely fair.  This argument implies that the work of managers is somehow worth less than the work of engineers.  That the manager is there to make the bustle and the engineer is there in the background to deliver the product.  But counter to this line of argument, there is arguably a case that without the business bullshit, there'd be no work for engineers at all.

The skills of, for example, a project manager must include the ability to communicate complex ideas to different groups with different skill sets.  The PM must learn to 'talk tech' with the developers, the DBAs, the architects.  They need to understand the business data too, able to converse with the analysts and understand their needs.  The successful project manager must be able to take a thousand pieces of information and coalesce the ideas into meaningful, aggregated communications for different audiences, from the boardroom to the development team.  Can you imagine the terror of a new manager standing in front of his new technical team, their average IQ 148, with 3 PhDs in the room, introducing himself and subliminally trying to convince them how his addition to the team is a net benefit?  Then switching focus to the boardroom and doing it all over again?  Not many engineers could deal with the stress.  Can any management-resenting developer or engineer honestly claim to possess these skills?  

Indeed, I'll go further and argue that the stereotypical engineer is a self-confessed sociopath; anti-social and misanthropic, self-absorbed and arrogant.  The typical engineer isn't remotely equipped to use qualitative skills.  If you're a non-technician, stand up right now, find your nearest software developer and ask them to explain why TRUE isn't equal to NULL in simple terms.  Watch them squirm as they try and fail to find non-technical terms, to analogise, to explain.  They likely know the answer.  But they may laugh, or stutter, or struggle with embarrassment.  This works especially well if you're female and attractive.  Now repeat the experiment, but find a project manager and ask them to explain S.M.A.R.T. objectives.  They'll likely deliver a comprehensible, smooth, balanced answer and engage you in conversation about it afterwards.  Now do you see the difference between the two camps, and the benefits that management can bring to facilitating communication?  Given the right preparation, a non-technical manager is also capable of delivering complex TECHNICAL ideas too,  thus acting as an effective translation medium between the board and the shop floor. All he needs are the right facts.

Let's lighten the mood.  Here's one of my favourite quotes from the film Armageddon.  Billy Bob Thornton plays the hardcore military leader looking for answers from his geeky technical team on the approaching asteroid.  The technician is bearded, pale, unhealthily plump and terrified.  He is summoned to the war room and everyone's looking at him.  The dialogue goes something like this:

<General>
So give me a summary. 
<Technician><splutters - is shaking>
Uh.. well... (panics) ... We've been..uh.. looking at... uh...
<General><purposefully, not unkind>
Okay, I need someone who's had a little less caffeine this morning.

This is flippant, I know, but in this fictional example, the technician crumbles under pressure - lots of valuable technical information crammed into his head but without the ability to summarise and express it when it matters.  Billy Bob is playing the leader, the manager, the organiser.  There's nothing he personally, practically, using his hands, can do about the asteroid except marshal and drive the team that WILL do something about it, but he's confident in his actions and clear on his objectives.  And conversely, without his guidance, his orders, the drilling team who do the work would never get the chance to go to space, to get up there and sort out the problem in the first place as they don't have the necessary qualitative skills to arrange it.  Look at the misfits sent to space in Armageddon - of one, 'our toxicology results revealed ketamine'.  Co-operation between technicians and managers has to be a symbiotic relationship to work.

Without management providing other qualities such as the goals, the motivation, the team spirit and the drive, technicians like to think they would be coding to save the world, working on cutting-edge projects and developing new and exciting technologies, creating the new Facebook or inventing the hoverboard.  In reality, engineers would sit around on Reddit all day, or head home for an afternoon's nap.  Devoid of the ability to work in larger teams with disparate groups of people, devoid of the ability to plan for an event longer than the next guild raid, devoid of the ability to organise even their own wardrobe, let alone a complex juggernaut of a critical business project, technicians quickly lose business value.  Without external motivation, an engineer quickly becomes bored, restless, even depressed.  Anecdotally, I've seen a lot more developers quit their jobs than management.  Managers tend to stick it out, fix what's broken.  Engineers will cut and run when they reach some threshold of alienation from the business, from the environment around them.  I know, I've been one of them myself.  Because we lack longer-term focus, and can get frustrated by complex social situations (such as endless, endless meetings), we're more inclined to rage-quit, to get our coats on, shout fuck it and go home to bed.  Managers have a longer-term focus, and are better at juggling these longer-term priorities and goals.

Both camps have a lot to learn about each other in order to maximise the benefit and minimise the cost of association.  Management must realise that engineers normally enjoy their job, often to the point of obsession.  They enjoy wandering into work in an Atari t-shirt and trainers clutching a coffee-stained mug.  They enjoy the complexity of long, difficult problems to solve.  They will often work 12-hour days, then go home and spend the evening behind the computer.  They get job satisfaction from untangling knotty problems, rewriting software, designing hardware, creating TANGIBLES.  There's nothing less appealing for an engineer than having the impression that his whole day has been for nought.  And if you as a manager strive to create the ideal conditions for engineers to work (with yourself as a supporting actor and a large quantity of free coffee available), you'll get far more productivity, lower staff turnover and a higher level of morale from these teams than otherwise.

And us technicians, in turn, must realise that the job of management isn't to code up our data import module, nor advise us on code reuse or answer whether a function or a procedure is preferred.  Their job is to guide, provide an escalation point, help and support us into achieving the goals that we've been assigned.  It's to translate instructions between different groups, facilitate open communication, observe business procedures, obtain approvals and a hundred other intangibles that assist us in doing our jobs.

Tuesday, July 22, 2014

Back in the game...

Well, that was a long hiatus.  In the last 5 months I've completed 3 short-term DBA contracts and been everywhere from Chester to Manchester, Stoke-on-Trent to Coventry, Trafford to Milton Keynes.  What with my busy day-to-day role and being Dad to my three Lords of Misrule - my kids - I haven't found time to actually sit down and write.  So here I am, in a charming(!) hotel room in a town called Binley, finally finding time to re-acquaint myself with my keyboard.

Luckily, my American employers over at http://www.mssqltips.com have been very understanding about my complete lack of output for the last 6 months.  I'm making amends: I've just sent over a fresh new article on using OLE automation extended stored procedures in SQL Server - procedures which allow you to access OLE objects such as FileSystemObject, enabling you to call methods and get properties from these objects back into SQL Server.  A little easier than using PowerShell, and hopefully of use to some data hacks out there.  Watch this space.
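For a flavour of the idea - and this is only an illustrative sketch with a made-up file path, not the article itself - it looks something like this, assuming the 'Ole Automation Procedures' option has been enabled:

-- enable OLE automation (sysadmin only; consider the security implications first)
EXEC sp_configure 'show advanced options', 1;  RECONFIGURE;
EXEC sp_configure 'Ole Automation Procedures', 1;  RECONFIGURE;

DECLARE @hr int, @fso int, @file_exists int;

-- instantiate a FileSystemObject and call one of its methods from T-SQL
EXEC @hr = sp_OACreate 'Scripting.FileSystemObject', @fso OUTPUT;
EXEC @hr = sp_OAMethod @fso, 'FileExists', @file_exists OUTPUT, 'C:\temp\somefile.txt';

SELECT @file_exists AS file_exists;   -- non-zero if the file exists on the server

EXEC @hr = sp_OADestroy @fso;         -- always release the object handle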

What else this week?  Once again, I find myself supporting unsupported SQL Server installations - my primary database servers this time are 2008, which isn't too bad, but there are a few linked servers including SQL Server 2000.  I've just unlearned how to use DATABASEPROPERTYEX; I've no wish to go rooting through archived TechNet articles again, but that's the way it goes sometimes.

I was shocked to find out Monsters Inc. is 13 years old!  The actress who played Boo is now approaching 15, a far cry from the 2-year-old who had to be chased around the set with a mic as she wouldn't stand in one place long enough to say her lines.  This film is more than three times older than my youngest child ... I can feel the grey hairs growing already.  I'll soon be buying one of those horrible grey hair cardigans and perfecting the comb-over, then there's no turning back.

I was sad that I couldn't attend SQLBits XII (http://www.sqlbits.com/) this year, despite registering to attend months ago.  What with the timing of a new contract and my recent wedding, it simply wasn't possible to go.  I'm hoping to catch up with some live sessions over on the website and see if I can get hold of last year's, too.

Recently I read a great book by Isaiah Hankel PhD, called 'Black Hole Focus'.  I'm a sucker for self-help books, and recognise my weakness, but this truly is a great volume.  One key lesson I took from it was how to motivate yourself - Dr. Hankel suggests creating a motivation board, a board that you hang on your wall with your goals in a brainstorm around your name, or the words 'My Goals'.  A little cheesy, yes, but I'm giving it a try - I got the Pritt-Stick and scissors out on Sunday night and created my first collage since art lessons at school.  And ... it works.  I've hung it in front of my office desk (at home) where I'm now spending three days a week.  When you're looking at it every day, it really helps you to focus, and I've got a clear plan for the next few months.

Next up, identifying those skills which are going rusty and getting a plan together to toughen up my training in them.  I'm thinking data science, particularly with respect to statistics and using R.  I have a strong suspicion the new 'BI', the new 'Big Data' is going to be 'Data Science', and analysts who can interpret data, analyse it and draw conclusions (MI, in other words, and exactly what I'm lined up to do for the next three months) are going to be in high demand.

Tuesday, February 25, 2014

Getting Around Strict GROUP BY Restrictions in SQL Server

SQL Server adheres to the ANSI SQL standard, in that any column named in a query with a GROUP BY clause (an aggregate query) must appear either in the GROUP BY clause or inside an aggregate function.

However, sometimes you don't want to group by a certain column.  Why would that be so?  Say, for example, you want to query the msdb.dbo.backupset table for information on the last successful backup of each database, and you also want to return the backup size for that backup, too.  Grouping by database name and taking MAX(backup_finish_date) is fine, but throw in the backup size and you'll end up with a separate row for each combination of database name and distinct backup size, with its own maximum backup date.  I'll show you what I mean.

Let's try writing a query to return the database name, backup size and the last backup date, grouped by database name.

SELECT   database_name, backup_size, MAX(backup_finish_date) 
FROM   msdb.dbo.backupset 
GROUP BY  database_name, backup_size



All very well ... except we have multiple rows per database.  So, let's try the next logical step - putting a MAX(backup_size) into the mix:

SELECT   database_name, MAX(backup_size), MAX(backup_finish_date) 
FROM   msdb.dbo.backupset 
GROUP BY  database_name



Did this work?  On the face of it, yes.  But let's not be so hasty - let's query the whole dataset on our interesting columns to see if the information presented is, in fact, true:

SELECT database_name, backup_size, backup_finish_date 
FROM msdb.dbo.backupset 


No!  As you can see, the former query with two MAXes returned a row saying that SANDBOX was last backed up on 2014-02-25 20:22:28.000 with a size of 9325877248 bytes.  This is not true.  The last backup date was 2014-02-25 20:22:28.000 with a backup size of 2727645184 bytes, as seen below.  The double MAXes have mangled the result set, displaying incorrect data!  Why?  Because the two MAX values are calculated independently of each other - in the former query they are decoupled, and we need them to be coupled.



Why is this query so difficult to write?  It is because T-SQL adheres strictly to the GROUP BY rules defined in ANSI SQL, unlike some other database engines, such as MySQL.  The way to get around this is to query the data in a different way - a self-join built on a slightly altered GROUP BY.  This effectively allows us to JOIN one aggregated query (the latest backup_finish_date per database) back onto the ungrouped rows, so we can pick up the backup_size belonging to that latest backup, as follows:


SELECT b1.database_name, b2.backup_size, b1.last_backup_date
FROM (
    SELECT b.database_name,
           MAX(b.backup_finish_date) [last_backup_date]
    FROM msdb.dbo.backupset b
    GROUP BY b.database_name ) b1
INNER JOIN (
    SELECT b.database_name, b.backup_size,
           b.backup_finish_date
    FROM msdb.dbo.backupset b ) b2
  ON b1.database_name = b2.database_name
 AND b1.last_backup_date = b2.backup_finish_date




Why does this work?  Basically, the first sub-query contains the last backup date per database.  By inner joining on both the database name and the last backup date to the ungrouped result set from msdb.dbo.backupset, we can bring the backup size for that database name and backup date (the maximum per database) into the result set, resulting in a listing of database name, backup size and last backup date (with the backup size correct for the backup date).

This is one way of getting around the GROUP BY restrictions in SQL Server - I'd be keen to hear more.  If you have any alternatives, please feel free to leave a comment below.