Why cybersecurity requires hardware, software and meatware to work together

August 25, 2015

Unless you are inherently fearful, danger tends to live in the realm of abstraction until something bad happens in reality. Recently a couple we know insisted my wife and I go out and try tandem bicycling with them. My wife regularly goes for 60-, 70-, 80-, even 100-mile rides on her own bike. I’m more of an occasional rider, but I’ve owned and ridden multi-geared bikes of one sort or another since about 1970.

The $10,000 bike this couple let us borrow didn’t feel right to either one of us. Custom-made, titanium beauty that it was, it felt hard to tame, even when I tried it myself in a parking lot. Uneasily, we climbed on and plunged out onto Rock Creek Parkway in Montgomery County — a narrow road with plenty of car traffic. I wasn’t comfortable with the shifters. The thing felt wobbly and too tall. We didn’t make it a half mile before crashing, one of us landing on either side of this elongated contraption. Cars stopped, people jumped out to help. Other cyclists stopped to see if we were alive. The biggest cost was pride. But my left hand still hurts nearly a month later, as does my wife’s tailbone. And the episode set us back $310 for a new shifter.

Lessons learned: Practice where there’s no traffic and you can weave a lot. Learn to use foreign shifters beforehand. Get your road legs on a cheap, low-slung bike (you can buy a whole new tandem bike for $310). Don’t ignore your misgivings.

If we were a government agency, I’d say we didn’t do a good risk assessment, and we didn’t integrate our software with the hardware very well. We had what could have been a doomsday scenario, literally.

Until now, it seems as if federal cybersecurity has been operating on a wing and a prayer, too. The OPM data breach shattered whatever complacency anyone might have had. Even as it recedes into the past, the 30-day cyber sprint has left a lasting legacy. Not simply that federal systems are more thoroughly protected than they were. They may well be, but success in cybersecurity is ephemeral. Like a sand castle, it needs constant shoring up. In one sense, every month should be a 30-day sprint.

And not simply that the sprint got everyone to realize at once how basic cybersecurity is to everything else the government has to do. And how poor the government is at it. That also may have happened.

Read this summary of the Office of Management and Budget’s after-action report from the sprint. Not the one for public consumption, but the internal one, which Federal News Radio’s Jason Miller got to see. It showed:

  • Some 75 open vulnerabilities identified, two thirds of them festering for more than 30 days. Only 60 percent of them patched, and new ones keep popping up. At least agencies know to look for them now.
  • Old software running past the end of vendor support, no longer receiving new patches.
  • The weakness of two-factor authentication in the face of super-realistic phishing e-mails.
  • Privileged access rights to networks given out willy-nilly.

I think the most important effect of the near-doomsday breach and subsequent sprint was driving home the need for an architectural approach to cybersecurity, taking it down to the storage hardware level. Here’s one example. The White House called this week for ideas pursuant to its Precision Medicine Initiative. The idea is to eventually gather health information on millions of people so it can be mined for trends leading to more personalized medical treatments than people have now.  Among the areas for which it seeks suggestions: “Technology to support the storage and analysis of large amounts of data, with strong security safeguards.” Cybersecurity is embedded throughout the call for comments. That’s a good sign.

Industry is starting to offer new approaches. The other week I was talking to people from Seagate, a disk drive and storage subsystem OEM. It’s part of a coalition of network equipment and software companies that contribute to what they call a Multi-Level Security Ecosystem. In the federal market, Lockheed Martin and ViON offer it as a secure storage and file system for high-performance simulation and modeling applications that fuse together large, disparate data sets.

As Seagate Federal’s Henry Newman explains, the company built a set of services on top of SELinux to accommodate functions such as network communications, database access and data sharing across parallel file systems. So, for example, a large store of video surveillance footage could be engineered such that access to individual files is restricted to certain individuals based on their authorities. Personally identifiable information, compliance information or intellectual property within a system can be made subject to access controls and auditing, while limiting the need for expensive hardware redundancy.
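To make the idea a bit more concrete, here is a minimal sketch — not Seagate’s actual implementation — of how file-level compartments can be expressed with the multi-category labels a stock SELinux-enabled Linux host already supports. The compartment names, category assignments and file path below are hypothetical, and it assumes the standard chcon and ls utilities are on the PATH.

```python
# Minimal sketch (not Seagate's implementation): tag files with SELinux
# MCS/MLS categories so only users whose clearance range dominates a file's
# categories can read it. Assumes a stock SELinux-enabled host with chcon/ls.
import subprocess

# Hypothetical mapping of data compartments to MCS categories.
COMPARTMENTS = {
    "pii": "s0:c10",      # personally identifiable information
    "ip": "s0:c20",       # intellectual property
    "video": "s0:c30",    # surveillance footage
}

def label_file(path: str, compartment: str) -> None:
    """Stamp a file with the SELinux range for its compartment."""
    level = COMPARTMENTS[compartment]
    subprocess.run(["chcon", "--range", level, path], check=True)

def show_label(path: str) -> str:
    """Return the file's current security context (user:role:type:range)."""
    out = subprocess.run(["ls", "-Z", path], capture_output=True, text=True, check=True)
    return out.stdout.strip()

if __name__ == "__main__":
    label_file("/srv/data/camera_0042.mp4", "video")  # hypothetical path
    print(show_label("/srv/data/camera_0042.mp4"))
```

The point is the architectural one Newman makes: the enforcement lives in the operating system’s labeling, not in application code bolted on afterward.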

Other contributors to the MLS ecosystem include supercomputer makers Cray and SGI, log analytics vendor Splunk, and Altair, a maker of job scheduling and management software.

Government practitioners like to say security should be built in, not bolted on. But they usually bolt it on. The multilevel security coalition is just one example, but it shows where systems deployment is heading: toward security that’s baked in.

Want fries with that treatment, soldier?

August 14, 2015

Customer service is all the rage in the federal government.

Again.

A series of lapses that includes the healthcare.gov rollout and the well-documented problems with service provided by the Veterans Affairs Department has alerted the administration to the need for better customer experiences, whether in person, on the phone or online. The digital strategy is supposed to take care of improving the online part. It is one in a series of initiatives dating back to the Clinton administration’s E-gov project. That in turn had an antecedent in the “Service to the Citizen” movement of the George H.W. Bush administration in the pre-Web days. E-gov’s offspring was the Quicksilver series of projects of the George W. Bush administration.

It’s good that these efforts are revisited periodically. Technology and expectations change. Too bad the government has to lurch from crisis to crisis to get with it, though.

I had to chuckle when discovering that VA Secretary Bob McDonald brought in Tom Allin, a former executive of McDonald’s, the fast food chain, as the chief veterans experience manager. As a habitué of McDonald’s for its coffee and occasional Egg McMuffin, I’ve seen customer service there up close. Don’t tell me you don’t go to McDonald’s. Nobody goes to McDonald’s like nobody watches television or listens to the radio unless it’s NPR. Yeah, sure.

At McDonald’s, I noticed the other day that counter employees work in an incessant cacophony of beeping food preparation apparatus, back-shop employees shouting at one another, and piped-in Muzak. They have to scurry to and fro for all of the detritus — bags, napkins, cups, ketchup packets, and the food itself — that makes up an order. Something’s always broken, like the receipt printer, the credit card reader, the machine that squirts out “ice cream,” … something. When the young lady finally collected herself and met my eyes, I couldn’t help but ask, “Are you still taking orders?” To myself, I thought, if this is fast food, what the heck is slow food? As one of only two people in line, I wondered, how do they cope when it’s crowded?

I’d walked over from my car dealer, where I’d left my car for an oil change. It was quieter there, but the customer service representatives had all of this elaborate paperwork, had to dart back to a bank of printers, and out of their booths to the rear. It felt like it took as long to check in a car for an oil change as to actually change the oil.

These service employees face the same bureaucratically induced barriers of process complexity and unreliable systems as their counterparts in the government. It’s a fine step for VA to have metrics for appointment wait times, or the IRS for phone answering times. But unless the systems are geared to enable people to reach these goals, they won’t happen. Insufficient staff, crappy software, an overly complex process — these can all get in the way of even the most dedicated humans who are trying to do a good job.

I spoke about customer service the other day with Deloitte principal Greg Pellegrino, who headed up a survey on the state of customer service in the federal government. The survey’s basic finding, to not put maple syrup on a pickle, is that the government thinks it gives better service than the public thinks it does.

Pellegrino points out three data points. One, the latest American Customer Satisfaction Index shows federal service getting worse, at the bottom of the heap. Two, Gallup polls show a slippage in public confidence in the government. Three, the most recent Viewpoint survey of federal employees shows a decline in job satisfaction. The third point is related to the first two, Pellegrino says. Basically, a combination of stingy budgets, lack of focus on customer service and unhappiness on the job has weighed down the experience people have with federal services.

All that plus a mismatch of intent and the technology to carry it out.

A new way to think about this, or perhaps it’s an old way dusted off at a time of great technological change, is outlined in a Harvard Business Review article by Jon Kolko, a vice president of Blackboard. He describes an approach called design-centric thinking. It’s a “set of principles [encompassing] empathy with users, a discipline of prototyping, and a tolerance for failure” all aimed at creating a customer-centric culture. Translation: You combine clear thinking with agile development principles.

Kolko says design-centric thinking applied originally to physical objects. Now organizations are applying it to services. And get this: There’s a great example at the Veterans Affairs Department, of all places. VA’s Center for Innovation used this kind of thinking to envision a “customer journey map to understand veterans’ emotional highs and lows in their interaction with the VA.” A map like that can point the way to better customer service by aligning systems, processes and what the customer wants.

Imagine that.

The big cybersecurity challenge: Time-to-detection

July 29, 2015

Do you sunbathe? You shouldn’t in this day of hypersensitivity about skin cancer. But if you do, the sunlight falling on your liver-spotted, lizard-like skin has been traveling through space for about nine minutes. When you gaze at the night sky and see Alpha Centauri, you probably remember from grade school that light from that nearby star takes about 4.3 years to get to Earth.

If something like a Burning Man festival were held on Alpha Centauri, you wouldn’t know about it until 4.3 years after it was over. Too late to load up your Airstream and get there in time for the fun. Some stars are so far away that they may have collapsed into black holes long ago, yet all we see is merry twinkling millennium after millennium.

Not to over-dramatize, but this is how things are in cybersecurity — specifically intrusion detection. When the Office of Personnel Management was patching its systems, it discovered its great breach, months after the break-in had occurred. It might have been still more months before anyone noticed the anomaly. It reminds me of a corny roadside display in Pennsylvania when I was a kid. A sign on a little barn said, “World’s Biggest Steer Inside.” When you pulled over and peered in the window, you saw a big jagged hole in the back of the barn, a chain lying in the dirt, and another sign: “Too bad, guess he got away!” There must’ve been a gift shop or goat’s milk fudge stand nearby.

This is one of the big problems with modern-day cyber attacks. Too often, IT and security staffs only find out about them long after the damage has been done and the hackers moved on to other soft targets. If it takes seconds or minutes to exfiltrate data, what good does discovering it do next year?

I recently spoke with John Stewart, one of the top security guys at Cisco. The topic was Cisco’s Midyear Security Report. Here’s my summary: Federal IT and security people, like everyone else, have plenty to worry about. Like the fact that a thousand new security product vendors have started up in the last five years, yet most of them sell non-interoperable software. Or that the white-hat, good-guys side of the cybersecurity equation is literally about a million qualified people short.

Yet among the most seemingly intractable problems lies time-to-detection, or how long on average it takes for organizations to find out they’ve been hacked. This makes it likely that many more successful attacks have occurred than systems administrators are aware of. Stewart says most of the data show that IT staffs routinely take months to detect breaches. A major goal of the products industry and practitioners’ skill sets must therefore be getting time-to-detection down to seconds. At this point, I’ll bet many federal agencies would be happy with days or hours.
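To see what the metric actually measures, here is a back-of-the-envelope sketch. The incident records are invented; the calculation simply pairs each intrusion’s estimated start with the moment it was discovered and averages the lag.

```python
# Sketch of the time-to-detection metric Stewart describes, computed from
# hypothetical incident records (intrusion began, intrusion detected).
from datetime import datetime
from statistics import mean

incidents = [
    (datetime(2015, 1, 3, 9, 15), datetime(2015, 5, 20, 14, 0)),
    (datetime(2015, 2, 11, 22, 40), datetime(2015, 3, 2, 8, 30)),
    (datetime(2015, 4, 7, 6, 5), datetime(2015, 4, 7, 6, 9)),
]

def time_to_detection_days(records):
    """Return each incident's detection lag in days, plus the average."""
    lags = [(found - began).total_seconds() / 86400 for began, found in records]
    return lags, mean(lags)

lags, avg = time_to_detection_days(incidents)
print(f"Per-incident lag (days): {[round(l, 2) for l in lags]}")
print(f"Mean time-to-detection: {avg:.1f} days")
```

Run against something like the OPM timeline, the lag would come out in months rather than days — which is exactly Stewart’s point.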

Malicious hackers aren’t standing still, the Cisco report points out. They’re switching vectors and modalities at lightning speed. They’re using wealth transfer techniques that stretch law enforcement’s ability to detect them. Stewart says systems like Bitcoin and the murky avenues of the dark web don’t include or even require the typical middlemen of the surface financial transaction world — such as banks, transfer networks, mules. He describes the bad-hacker industry using a term the government likes to use for itself: innovative.

Embedded IP domains and fungible URLs, jacking up data write-rewrite cycles to dizzying speeds, or quietly turning trusted systems into automated spies in the time it takes someone to go for coffee — that kind of thing. You might call it agility. They’re dancing circles around systems owners. The hacking community has become wickedly innovative at evading detection, Stewart says, exploiting the common systems and software everyone uses routinely.

He adds that the motivations of bad hackers have blossomed into a veritable bouquet. They go after systems for espionage, theft of money or intellectual property, terrorism, political activism, service disruption and even outright destruction. That’s a good case for the so-called risk-based approach to cybersecurity planning. If you’re a utility, disruption or destruction is more likely to be the hackers’ goal. If you’re a database of people with clearance, espionage and theft are good bets.

Answers? As cybersecurity people like to say, there is no silver bullet. Stewart says nations will have to cooperate more, tools will have to improve, people will have to get smarter. Cisco hopes to build some sort of architecture framework into which the polyglot of cyber tools can plug, reducing what he calls the friction of integration.

For now, a good strategy for everyone connected to cybersecurity is to bore in on the essential question: How soon can we know what’s going on?

Thoughts on bloated web sites, complex software

July 21, 2015

With my wife at the wheel, we swing off Route 21 in New Jersey onto Route 46 east. The GPS in the dash of our new Subaru is guiding us to Saratoga Springs, N.Y., for the weekend. Kitty-corner from the exit is a big billboard that reads, “WHO IS JESUS? CALL 855-FOR-TRUTH.” Nice and succinct. I admired the certitude, but didn’t try the number.

The car is filled with slightly more mystifying tech. Somewhere I read the average modern car has 200 microprocessors. How many lines of code do they run, I wonder? No matter, the car does what it’s supposed to. Anyone who ever dealt with distributor caps, points and engine timing lights appreciates the way today’s cars work.

The GPS-Bluetooth-navigation complex in the dash is another matter. It’s a mishmash of hard-to-follow menus. No matter what we do, every time we turn on the car, the podcasts on my wife’s phone start up. As for navigation, no two systems I’ve ever seen work quite the same way, at least their user interfaces don’t. Voice commands can be ambiguous, and the system occasionally directs you off the highway only to direct you right back on again.

This same overload is ruining many web sites, as it has many once-simple applications. No wonder people love apps, in the sense of applications designed or adapted to work easily and quickly on the small touch screens of mobile devices. Standards like Word, Outlook, iTunes and many others have become so choked with features and choices, I’ve practically given up on them. I can figure out what they do, but it’s all too much, too fussy and time-consuming to manage.

The major media sites are so choked with links — most of them for ads, sponsor content, and unrelated junk such as 24 celebrity face-lifts gone horribly wrong — that you can barely navigate them without constant, unwanted and frustrating detours.

The drive to make software more and more functional may be behind what seems to be a disturbing trend toward failures in critical systems. They’ve happened a lot lately. In fact, it happened first rather close to home. Literally a minute before going on the air one recent morning, the system that delivers scripts and audio segments failed. At Federal News Radio, we’d gone paperless for a year, reading scripts online and saving a package of printing paper every day. Talking, trying to sound calm, ad-libbing while gesticulating wildly to my producer — that’s what a software crash causes. Controlled panic. Panic, anyhow. It took the engineers an hour to fix. It turned out a buffer overflow crashed the Active Directory on which the broadcast content environment depends for user privileges. So the whole environment went down with the ship.

It was the same day United Airlines’ passenger boarding system failed, apparently the result of lingering incompatibility from the merger with Continental. And the same day that the New York Stock Exchange famously experienced an hours-long crash, reportedly because of a network connectivity issue. Earlier in the month, a hardware-software interaction interrupted for two weeks the State Department’s globally distributed system for issuing visas.

Successive program managers for the F-35 fighter have all complained they can’t get the software development for this fussy and delicate airplane onto any sort of predictable schedule. Yet the plane is unflyable and unmaintainable without its software.

In short, two problems linger with software-controlled systems. They can be difficult to interact with. And in their complexity they produce effects even expert operators can’t foresee. I believe this is the basis for the spreading appeal of agile development. It forces people to develop in pieces small enough that people can keep track of what is going on. And in ways that the users can assimilate easily.

Complexity, or the desire to avoid it, is why people like apps on mobile devices. I confess to checking Buzzfeed on my phone when I’m bored. The content is inane, but it’s such a fast, simple app, like eating gumdrops. I recently checked out the regular Web site of Buzzfeed, and sure enough, it’s a confusing kaleidoscope. Although, an ice cream cone swaddled in Cocoa Krispies does sound good.

Archuleta departs. Now what? Some ideas

July 10, 2015

OPM director Katherine Archuleta, as I predicted three weeks ago, has resigned. Problem solved, let’s move on.

Fat chance.

In reality, Archuleta’s departure solves nothing fundamental. But she had to go, as I’m sure she understood probably from the moment she peered over the edge and realized — long before most everyone else — the size of the abyss caused by The Data Breach. Talk about big data. As primarily a politician, Archuleta must have realized that she would eventually take the fall for the administration, which of course is ultimately responsible. That’s the way of Washington; always has been. Katherine Archuleta isn’t a horrible person, nor do we have any reason to think she didn’t have the best interests of federal employees at heart. But as President Obama’s campaign manager who secured a visible, plum job, she would get it: This goes with the territory.

And the more the White House spokesman, a sort of latter-day Ron Ziegler, pushed culpability away from the administration in the aftermath of Thursday afternoon’s revelation, the clearer it became that the White House itself knows it is somehow responsible for potentially messing up 22 million lives, compromising national security, and making the government look totally incompetent.

“There are significant challenges that are faced not just by the federal government, but by private-sector entities as well. This is a priority of the president,” the spokesman said. Yeah, well, the vulnerabilities of OPM’s systems and the Interior Department facility that houses them existed seven years ago and before that. The incoming just happened to land and explode now. Now we can presume they really, really are a priority.

So now what? It will fall to Beth Cobert, the deputy director for management at the Office of Management and Budget, to salve the wounds.

And Obama himself ought to voice his personal concern over this. Some things that occur externally do get to presidents personally. Johnson and Nixon waded into crowds of Vietnam protesters. But more than that, some concrete things should happen:

  • The White House should convene a meeting of the CIO Council to make it clear the 30-day cyber sprint ordered by Federal CIO Tony Scott is now a year-long effort.
  • Pressure test every important system in the government. Hire the top corporate cybersecurity experts — a group populated in part by some famous formerly malevolent hackers — and have them bang away until they find all the weaknesses. Then give agency heads one working week to prove their vulnerabilities are plugged. Two-factor authentication, encryption of data at rest — for heaven’s sake do it already.
  • Hire a tiger team to install Einstein 3A in every agency by July 31st, never mind December 31st. Require the internet service providers to do whatever it takes to make their inbound traffic compatible with this system. If Einstein 3A is so good, how come it’s taken so long?

I know what you’re saying. Yes, it does sound naive. I wasn’t born last night either. This is one of those times, though, that requires an all-out effort. For years we’ve heard warnings of a cyber 9/11. Well, we just had one.

This data loss was no third-rate burglary. Mr. President, America is under attack.

Have you visited the deep, dark Web recently?

July 2, 2015

At the end of a long cul-de-sac at the bottom of a steep hill, our house sat near a storm sewer opening that in my memory is a couple of yards or so wide. If you poked your head down the opening and looked in the right direction, you could see daylight where the culvert emptied into a sort of open catch basin. None of us ever had the nerve to slip into the drain and walk through the dark pipe to come out in the catch basin, maybe 300 yards away. But that is where, at the age of maybe 5 or 6 years old, I realized a vast network of drainage pipes existed under our street, beneath our houses. That culvert fascinated me and my friends endlessly. We’d try to peer at one another from each end, or shout to see if our voices would carry. Or, after a rain, we’d drop a paper boat down the opening and see how long it would take to flow into the catch basin.

That sewer is like the Internet. Underneath the manifest “streets” that are thoroughly used and mapped lies a vast subterranean zone with its own stores of data. Some experts say the surface or easily accessed Internet holds only 4 percent of what’s out there. Much of the out-of-view, deep Internet consists of intellectual property that people — like academics or scientists — want to keep to themselves or share only with people they choose. But other areas lie within the deep Internet where criminal and terrorist elements  gather and communicate. That’s called the dark Internet. It’s also where dissidents who might be targeted by their own country communicate with one another. To people using regular browsers and search engines, this vast online zone is like a broadcast occurring at a frequency you need a special antenna to detect.

At the recent GEOINT conference, held for the first time in Washington, I heard a theme from several companies: Agencies will need to exploit the deep web and its subset dark web to keep up with these unsavory elements. The trend in geographical intelligence is mashing up multiple, non-geographic data sources with geographic data. In this and a subsequent post I’ll describe some of the work going on. In this post, I’ll describe work at two companies, one large and one small. They have in common some serious chops in GEOINT.

Mashup is the idea behind a Lockheed Martin service called Halogen. Clients are intelligence community and Defense agencies, but it’s easy to see how many civilian agencies could benefit from it. Matt Nieland, a former Marine Corps intelligence officer and the program manager for the product, says the Halogen team, from its operations center somewhere in Lockheed, responds to requests from clients for unconventional intel. This requires data from the deep internet. It may be inaccessible to ordinary tools, but it still falls into publicly available data. Nieland draws a crude sketch in my notebook like a potato standing on end. The upper thin slice is the ordinary Internet. A tiny slice on the bottom is the dark element. The bulk of the potato represents the deep.

Halogen uses the surface, searchable Internet in the unclassified realm. Analysts ingest material like news feeds, social media, Twitter. They mix in material that is inaccessible to standard browsers and search engines, but is neither secret nor requires hacking to reach. It does take skill with the anonymizing Tor browser and knowledge of how to find the lookup tables that resolve URLs that otherwise look like gibberish. Beyond that, Nieland says Lockheed has contacts with people around the world who can verify what it finds online. Halogen’s secret sauce is the proprietary algorithms and tradecraft its analysts use to create intel products.
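Lockheed doesn’t publish Halogen’s tooling, but the basic plumbing for reaching a Tor hidden service is simple enough to sketch. The snippet below is an illustration under stated assumptions, not anyone’s production code: it presumes a Tor client running locally on its default SOCKS port (9050) and the requests library installed with SOCKS support (pip install "requests[socks]"); the .onion address is a placeholder, not a real site.

```python
# Minimal sketch: fetch a hidden-service page through a locally running Tor
# client. Assumes Tor on 127.0.0.1:9050 and requests installed with SOCKS
# support. The .onion address below is a placeholder.
import requests

TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h: let Tor resolve .onion names
    "https": "socks5h://127.0.0.1:9050",
}

def fetch_hidden_service(url: str, timeout: int = 60) -> str:
    """Fetch a page over Tor and return its raw HTML."""
    resp = requests.get(url, proxies=TOR_PROXIES, timeout=timeout)
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    html = fetch_hidden_service("http://exampleonionaddress.onion/")
    print(html[:500])
```

The socks5h scheme matters: it tells the proxy, rather than the local machine, to resolve the .onion name, which is the only way such addresses resolve at all.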

At the opposite end of the size spectrum from Lockheed Martin, OGSystems assembles teams of non-traditional, mostly West Coast companies to help federal agencies solve unusual problems in cybersecurity and intel, or problems for which they can’t find solutions among the standard federal contractors. CEO Omar Balkissoon says the company specializes in getting non-traditional people to think about traditional questions. A typical project is the Jivango community, where agencies can source answers to GEOINT questions.

OGSystems’ R&D section, VIPER Labs, crafts services, techniques and data products for national security. At GEOINT I walked up to a big monitor staffed by Jessica Thomas, a data analyst and team leader at VIPER Labs. She’s working on an OSINT (open source intelligence) product for finding and stopping human traffickers and people who exploit minors. It’s a good example of mashing up non-GEO data with GEO. The product uses an ontology, used by law enforcement and national security types, of words found on shady Web sites and in postings to them that may be markers for this type of activity. Thomas pulled two weeks’ worth of posting traffic and used a geo-coding algorithm to map it to the rough locations of the IP addresses. Posters tend to be careless about how easy it is to reverse-lookup an IP address and get the general area a post originated from. In many cases, posts included phone numbers. It wasn’t long before clusters of locations emerged, indicating a possible network of human trafficking.
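VIPER Labs hasn’t published its pipeline, so the sketch below is only a guess at the shape of the geo-coding step Thomas describes: look up each posting’s IP address in a geolocation database, snap the result to a coarse grid, and count how many postings land in the same cell. It assumes the geoip2 package and a local GeoLite2-City.mmdb file; the posting data is made up.

```python
# Rough sketch (not VIPER Labs' pipeline): geolocate the IPs attached to
# scraped postings and tally how many fall in the same coarse grid cell.
from collections import Counter
import geoip2.database
import geoip2.errors

# Hypothetical scraped postings: (posting id, source IP address)
postings = [
    ("post-001", "203.0.113.17"),
    ("post-002", "203.0.113.44"),
    ("post-003", "198.51.100.9"),
]

def cluster_by_area(records, mmdb_path="GeoLite2-City.mmdb"):
    """Map each posting's IP to a rough lat/lon cell and tally the cells."""
    cells = Counter()
    reader = geoip2.database.Reader(mmdb_path)
    try:
        for _post_id, ip in records:
            try:
                loc = reader.city(ip).location
            except geoip2.errors.AddressNotFoundError:
                continue  # unroutable or unknown addresses simply drop out
            # Round to one decimal degree (roughly 10 km) to form a coarse cell.
            cells[(round(loc.latitude, 1), round(loc.longitude, 1))] += 1
    finally:
        reader.close()
    return cells

if __name__ == "__main__":
    for cell, count in cluster_by_area(postings).most_common():
        print(f"{count} postings near lat/lon {cell}")
```

Real postings, of course, would come out of the scraping and ontology-matching steps upstream, not a hard-coded list.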

An enthusiastic Tor user, Thomas wants to add dark Internet material to her trafficking data mashup. She also hopes to incorporate photo recognition, and sentiment analysis that can detect emotion within language found on a web site. She says OGSystems has applied for a grant to develop its trafficking detection technology into a tool useful for wildlife trafficking — a major source of funding for terror-criminal groups like al-Shabab.

Next week, some amazing things text documents can add to GEOINT.

Numbers tell the tale of the #OPMbreach

June 25, 2015

Some big stories become defined by the people and the emotions connected to them. The earliest news memory burned into my gray cells occurred on November 22, 1963. Emerging from my third grade classroom, I recall the emotions of the adults clustering in the hallway. Walking home, I remember my mother in tears before our black-and-white RCA console TV, declaring to me and, I guess, heaven, “Someday there will be another Kennedy in the White House!” I’m not certain I have a direct memory of Walter Cronkite pulling off his heavy glasses, having seen the kinescopes of that moment replayed periodically through the years.

We remember other big stories for their remarkable numbers and statistics. Not that it lacks emotional content, but the Bernard Madoff crime is notable for its sheer audacity of size, expressible in lurid statistics: 11 federal felonies, $65 billion in fraudulently stated gains, 150 years in the slammer. Who knows, the word “Madoff” could well replace the word “Ponzi” in the vernacular reference to really big frauds against innocent individuals.

OPM’s data breach, which has spawned its own hashtag — the new form of vernacular among what might kindly be called the vulgate — is a numbers story, leavened by the justifiable frustration and anger of the employees involved. Mike Causey likens this story to The Blob (one of my all-time favorite movies). Here the horror is underscored by numbers:

  • 4.2 million feds and retirees affected
  • 14 million more in another breach that claimed data from SF-86 forms
  • 18 million Social Security numbers may have been purloined, according to testimony from OPM Director Katherine Archuleta.
  • Two hour waits to reach someone on the telephone at contractor CSID
  • 36 hours between OPM hanging out the notice of the credit monitoring services requirement and awarding a contract.
  • $21 million for the credit monitoring services so far.
  • 15 points in the plan OPM came up with for fixing its cybersecurity vulnerabilities
  • 1 direct report cybersecurity advisor working for Archuleta

This is one of those drip-drip stories in which details come out serially, although not all that clearly. From one of the congressional hearings we got a sense of how much more money OPM thinks it will need. In a kind of symmetry, Archuleta says OPM may need still another $21 million in 2016 to button up its systems. The agency has asked for a total of $32 million more for 2016, but is also saying the total cost of the breach could be as high as $80 million. That figure won’t buy a wing assembly for an F-35, but it’s a significant figure against OPM’s roughly $400 million spending authority.

In business, bad results tend to bring on one of two outcomes. Either your budget gets cut. Or the money becomes available to fix the problem, but you don’t get to spend it because you’re gone.

So where are all the numbers and this story headed? It’s not over yet. We still don’t know the full extent of how many names, Social Security numbers and SF-86 forms were taken. When that many people are affected, it’s hard to make it disappear. We still have yet to learn the motivation of the data thieves, which means millions will be holding their breath for a long time.

As an unseen benefit, every other agency is scrambling to make sure it’s not the next OPM. One departmental CIO told me as much just the other day. Software vendors will have Christmas in the summer as agencies get serious about two-factor authentication, continuous diagnostics and mitigation and the tools that go with them. Homeland Security is scrambling to get Einstein 3A into place. So, some silver linings.

No TV news anchor pulled off his glasses and put them on again to mask emotional turmoil over the OPM breach. But at least the lessons learned, as the government likes to say, will stick this time.
