Posts Tagged ‘Lockheed’

Why cybersecurity requires hardware, software and meatware to work together

August 25, 2015

Unless you are inherently fearful, danger tends to live in the realm of abstraction until something bad happens in reality. Recently a couple we know insisted my wife and I go out and try tandem bicycling with them. My wife regularly goes for 60-, 70-, 80-, even 100-mile rides on her own bike. I’m more of an occasional rider, but I’ve owned and ridden multi-geared bikes of one sort or another since about 1970.

The $10,000 bike this couple let us borrow didn’t feel right to either one of us. Custom-made, titanium beauty that it was, it felt hard to tame, even when I tried it myself in a parking lot. Uneasily, we climbed on and plunged out onto Rock Creek Parkway in Montgomery County — a narrow road with plenty of car traffic. I wasn’t comfortable with the shifters. The thing felt wobbly and too tall. We didn’t make it a half mile before crashing, one of us landing on either side of this elongated contraption. Cars stopped, people jumped out to help. Other cyclists stopped to see if we were alive. The biggest cost was pride. But my left hand still hurts nearly a month later, as does my wife’s tailbone. And the episode set us back $310 for a new shifter.

Lessons learned: Practice where there’s no traffic and you can weave a lot. Learn to use foreign shifters beforehand. Get your road legs on a cheap, low-slung bike (you can buy a whole new tandem bike for $310). Don’t ignore your misgivings.

If we were a government agency, I’d say we didn’t do a good risk assessment, and we didn’t integrate our software with the hardware very well. We had what could have been a doomsday scenario, literally.

Until now, it seems as if federal cybersecurity has been operating on a wing and a prayer, too. The OPM data breach shattered whatever complacency anyone might have had. As it recedes into the past, the 30-day cyber sprint has left a lasting legacy. Not simply that federal systems are more thoroughly protected than they were. They may well be, but success in cybersecurity is ephemeral. Like a sand castle, it needs constant shoring up. In one sense, every month should be a 30-day sprint.

And not simply that the sprint got everyone to realize at once how basic cybersecurity is to everything else the government has to do. And how poor the government is at it. That also may have happened.

Read this summary of the Office of Management and Budget’s after-action report from the sprint. Not the one for public consumption, but the internal one, which Federal News Radio’s Jason Miller got to see. It showed:

  • Some 75 open vulnerabilities identified, two-thirds of them festering for more than 30 days. Only 60 percent of them patched, and new ones keep popping up. At least agencies know to look for them now.
  • Old software running past the end of vendor support, meaning no new patches.
  • The weakness of two-factor authentication in the face of super-realistic phishing e-mails.
  • Privileged access rights to networks handed out willy-nilly.

I think the most important effect of the near-doomsday breach and subsequent sprint was driving home the need for an architectural approach to cybersecurity, taking it down to the storage hardware level. Here’s one example. The White House called this week for ideas pursuant to its Precision Medicine Initiative. The idea is to eventually gather health information on millions of people so it can be mined for trends leading to more personalized medical treatments than people have now.  Among the areas for which it seeks suggestions: “Technology to support the storage and analysis of large amounts of data, with strong security safeguards.” Cybersecurity is embedded throughout the call for comments. That’s a good sign.

Industry is starting to offer new approaches. The other week I was talking to people from Seagate, a disk drive and storage subsystem OEM. It’s part of a coalition of network equipment and software companies that contribute to what they call a Multi-Level Security Ecosystem. In the federal market, Lockheed Martin and ViON offer it as a secure storage and file system for high-performance simulation and modeling applications that fuse together large, disparate data sets.

As Seagate Federal’s Henry Newman explains, the company built a set of services on top of SELinux to accommodate functions such as network communications, database access and data sharing across parallel file systems. So, for example, a large store of video surveillance footage could be engineered such that access to individual files is restricted to certain individuals based on their authorities. Personally identifiable information, compliance information or intellectual property within a system can be made subject to access controls and auditing, while limiting the need for expensive hardware redundancy.
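Seagate hasn’t published the internals of those services, but the SELinux plumbing underneath is standard Linux. As a rough illustration only (not Seagate’s code), here is a minimal Python sketch that reads the SELinux label attached to a file on a labeled filesystem; under a multi-level security (MLS) policy, the level portion of that label, such as s0:c12,c45, is what determines which users and processes may touch the file.

    import os

    def selinux_label(path):
        """Read the SELinux security context stored as an extended attribute.

        On an SELinux-enabled Linux host, every file carries a label of the
        form user:role:type:level; under an MLS policy the level field is
        what gates which users and processes may read the file.
        """
        try:
            raw = os.getxattr(path, "security.selinux")
            return raw.decode().rstrip("\x00")
        except OSError:
            return None  # SELinux not enabled, or filesystem not labeled

    if __name__ == "__main__":
        for f in ("/etc/passwd", "/etc/shadow"):
            print(f, "->", selinux_label(f))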

Other contributors to the MLS ecosystem include supercomputer makers Cray and SGI, log analytics vendor Splunk, and Altair, a maker of job scheduling and management software.

Government practitioners like to say security should be built in, not bolted on. But they usually bolt it on. The multi-level security group is just one example, but it shows where systems deployment is heading: toward security that is baked in.

Have you visited the deep, dark Web recently?

July 2, 2015

At the end of a long cul-de-sac at the bottom of a steep hill, our house sat near a storm sewer opening that in my memory is a couple of yards or so wide. If you poked your head down the opening and looked in the right direction, you could see daylight where the culvert emptied into a sort of open catch basin. None of us ever had the nerve to slip into the drain and walk through the dark pipe to come out in the catch basin, maybe 300 yards away. But that is where, at the age of maybe 5 or 6, I realized a vast network of drainage pipes existed under our street, beneath our houses. That culvert fascinated me and my friends endlessly. We’d try to peer at one another from each end, or shout to see if our voices would carry. Or, after a rain, we’d drop a paper boat down the opening and see how long it would take to flow into the catch basin.

That sewer is like the Internet. Underneath the manifest “streets” that are thoroughly used and mapped lies a vast subterranean zone with its own stores of data. Some experts say the surface or easily accessed Internet holds only 4 percent of what’s out there. Much of the out-of-view, deep Internet consists of intellectual property that people — like academics or scientists — want to keep to themselves or share only with people they choose. But other areas lie within the deep Internet where criminal and terrorist elements gather and communicate. That’s called the dark Internet. It’s also where dissidents who might be targeted by their own country communicate with one another. To people using regular browsers and search engines, this vast online zone is like a broadcast occurring at a frequency you need a special antenna to detect.

At the recent GEOINT conference, held for the first time in Washington, I heard a theme from several companies: Agencies will need to exploit the deep web and its subset, the dark web, to keep up with these unsavory elements. The trend in geospatial intelligence is mashing up multiple, non-geographic data sources with geographic data. In this and a subsequent post I’ll describe some of the work going on. In this post, I’ll describe work at two companies, one large and one small. They have in common some serious chops in GEOINT.

Mashup is the idea behind a Lockheed Martin service called Halogen. Clients are intelligence community and Defense agencies, but it’s easy to see how many civilian agencies could benefit from it. Matt Nieland, a former Marine Corps intelligence officer and the program manager for the product, says the Halogen team, from its operations center somewhere in Lockheed, responds to requests from clients for unconventional intel. This requires data from the deep internet. It may be inaccessible to ordinary tools, but it still counts as publicly available data. Nieland draws a crude sketch in my notebook like a potato standing on end. The upper thin slice is the ordinary Internet. A tiny slice on the bottom is the dark element. The bulk of the potato represents the deep.

Halogen uses the surface, searchable Internet in the unclassified realm. Analysts ingest material like news feeds, social media, Twitter. They mix in material that is inaccessible to standard browsers and search engines, but is neither secret nor obtained by hacking. It does take skill with the anonymizing Tor browser and knowledge of how to find the lookup tables that give URLs that otherwise look like gibberish. Beyond that, Nieland says Lockheed has contacts with people around the world who can verify what it finds online. Halogen’s secret sauce is the proprietary algorithms and tradecraft its analysts use to create intel products.
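For a sense of the mechanics, here is a minimal, hypothetical Python sketch of the kind of plumbing involved (not Lockheed’s tooling): it routes a request through a locally running Tor client so that hidden-service (.onion) addresses resolve. It assumes Tor is listening on its default SOCKS port, 9050, and that the requests library was installed with SOCKS support; the onion address is a placeholder, not a real site.

    import requests

    # Assumes a local Tor client on its default SOCKS port (9050) and that
    # requests was installed with SOCKS support: pip install "requests[socks]"
    TOR_PROXIES = {
        "http": "socks5h://127.0.0.1:9050",   # socks5h lets Tor resolve .onion names
        "https": "socks5h://127.0.0.1:9050",
    }

    def fetch_via_tor(url, timeout=60):
        """Fetch a page through Tor so hidden-service (.onion) addresses resolve."""
        resp = requests.get(url, proxies=TOR_PROXIES, timeout=timeout)
        resp.raise_for_status()
        return resp.text

    if __name__ == "__main__":
        # Placeholder address; real hidden-service URLs come from the kinds of
        # lookup tables the analysts maintain.
        print(fetch_via_tor("http://exampleonionaddress.onion/")[:500])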

At the opposite end of the size spectrum from Lockheed Martin, OGSystems assembles teams of non-traditional, mostly West Coast companies to help federal agencies solve unusual problems in cybersecurity and intel, or problems they can’t find solutions for among the standard federal contractors. CEO Omar Balkissoon says the company specializes in getting non-traditional people to think about traditional questions. A typical project is the Jivango community, where agencies can source answers to GEOINT questions.

OGSystems’ R&D section, VIPER Labs, crafts services, techniques and data products for national security. At GEOINT I walked up to a big monitor staffed by Jessica Thomas, a data analyst and team leader at VIPER Labs. She’s working on an OSINT (open source intelligence) product for finding and stopping human traffickers and people who exploit minors. It’s a good example of mashing up non-geo data with geo data. The product uses an ontology, used by law enforcement and national security types, of words found on shady Web sites and postings to them that may be markers for this type of activity. Thomas pulled two weeks’ worth of posting traffic and used a geo-coding algorithm to map it to the rough locations of the IP addresses. Posters tend to be careless about how easy it is to reverse-lookup an IP address and get the general area a post originated from. In many cases, posts included phone numbers. It wasn’t long before clusters of locations emerged indicating a possible network of human trafficking.
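To make the mashup concrete, here is a hypothetical Python sketch of that kind of pipeline, not VIPER Labs’ actual product: it pulls IP addresses and phone numbers out of post text, maps each IP to an approximate location (a real system would query a GeoIP database; the lookup table below is invented for illustration), and buckets the results to surface geographic clusters.

    import re
    from collections import Counter, defaultdict

    IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
    PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

    # Invented lookup for illustration; a real pipeline would query a GeoIP database.
    FAKE_GEOIP = {
        "203.0.113.17": (38.90, -77.03),
        "203.0.113.44": (38.91, -77.02),
        "198.51.100.9": (34.05, -118.24),
    }

    def cluster_posts(posts, precision=1):
        """Bucket posts by the rounded coordinates of their source IPs,
        collecting any phone numbers seen in each geographic bucket."""
        counts = Counter()
        phones = defaultdict(set)
        for text in posts:
            for ip in IP_RE.findall(text):
                if ip not in FAKE_GEOIP:
                    continue
                lat, lon = FAKE_GEOIP[ip]
                key = (round(lat, precision), round(lon, precision))
                counts[key] += 1
                phones[key].update(PHONE_RE.findall(text))
        return [(key, n, sorted(phones[key])) for key, n in counts.most_common()]

    if __name__ == "__main__":
        posts = [
            "meet tonight, call 202-555-0164, posted from 203.0.113.17",
            "new arrivals this week, via 203.0.113.44",
            "ask for Lena 310-555-0112, posted from 198.51.100.9",
        ]
        for key, n, nums in cluster_posts(posts):
            print(key, n, nums)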

An enthusiastic Tor user, Thomas wants to add dark Internet material to her trafficking data mashup. She also hopes to incorporate photo recognition, and sentiment analysis that can detect emotion within language found on a web site. She says OGSystems has applied for a grant to develop its trafficking detection technology into a tool useful against wildlife trafficking — a major source of funding for terror-criminal groups like al-Shabab.

Next week: some amazing things text documents can add to GEOINT.

Fails happen. It’s how agencies react that matters

June 9, 2015

An old, familiar shibboleth came up again this week: “Washington is a city of second chances.” That’s what a Washington Post article said about a popular millennial writer who was fired from a well-known web site for plagiarism. He popped up at another web site a year later, where he’s boosting its traffic. Dennis Hastert, the former House speaker now enmeshed in a really bad scandal, probably is too old to have a second chance.

Organizations can have second chances, often because they have the wherewithal to buy their way back. I remember the Ford Pinto gas tank scandal (1977), the time Lockheed nearly went bankrupt (1971), saved only by a federally backed loan, and the Tylenol poisoning scare (1982), which was a problem not of the company’s making. Today, Ford, Lockheed Martin and Johnson & Johnson prosper quite nicely.

I’ve been wondering: Can federal agencies have a second chance? Technically no, since they can’t go out of business unless Congress decrees it, which it never does. So when they goof up, there might be temporary hell to pay, but not the threat of going out of business. In fact, serious failures are often rewarded with big budget increases, as in the case of the Veterans Affairs Department. Congress can readily replace money. Reputation and perceived legitimacy — harder to recover.

Yet agencies are obligated to react when things go wrong. Two recent examples stand out as case studies of the right way to react and retain the public’s confidence.

A whistleblower, still anonymous, complained to the FDA about poor practices and fungus contamination at the National Institutes of Health. Specifically, in the Pharmaceutical Development Section of NIH’s Clinical Center. This is where doctors and technicians whip up experimental drugs for small groups of patients. Two vials of albumin, a medium for injecting drugs into patients, were found to have the fungus. Patients had been given injections from different vials in the same batch. The FDA investigated the lab, and the NIH suspended sterile production. It won’t resume until at least June 19th.

The NIH went public with the episode, including a mea culpa from the director, Dr. Francis Collins. When I spotted the release, I asked for an interview the next morning with Collins. NIH public affairs people — they are among the best in the government — got me the principal deputy director, Dr. Lawrence Tabak. He said the NIH welcomed the highly irregular incursion by another federal agency. We don’t know what personnel changes will happen with the troubled section, but the speed and forthrightness of the NIH response seemed refreshing and, well, grown up.

Another agency, the relatively small National Highway Traffic Safety Administration, published last week the results of a study of how it can function more effectively. The agency launched the review in response to how sluggishly it responded to the General Motors fiasco of faulty ignition switches and non-deploying airbags. The defects caused at least 100 deaths when people’s cars turned off at highway speeds. Last year’s recall is still in the news, overshadowed though it may be by the explosive Takata airbag situation that’s affected millions and millions of cars from many makers.

Somehow the GM ignition switch-airbag issue went on for a dozen years before the 2014 recall, and NHTSA blames itself in part. It says it was pushed around by GM, and it lacked staff with the technical understanding needed to stay on top of these issues. The NHTSA report says the agency “failed to identify and follow up on trends in its own data sources and investigations.” The upshot: The agency has produced a detailed internal improvement plan, and appointed three outside experts to guide the improvement effort, including a former astronaut.

And what of the Office of Personnel Management, from which vast amounts of personal data on current and former federal employees were stolen? The lag between discovery and disclosure is troubling. More disturbing is the frequency of similar attacks and the seeming ease with which whoever is behind them — China, some lunatic insider, maybe a combination of both — is getting into federal databases. As Jason Miller reported this week, the government has experienced nine incidents in less than a year in which hackers attempted or succeeded in stealing personal information on government and contractor employees.

How did the agency react? OPM made the obligatory offers of credit monitoring. It worked with US-CERT and the FBI, but the US-CERT report is incomplete, and in any case isn’t available at its web site. The agencies still don’t know how much data was taken, or else they haven’t said. The stain is still spreading. As pointed out in my interview with cyber expert Rodney Joffe of Neustar, the loss of SF-86 data exposes not only employees, but friends, neighbors, and any foreigner they’ve ever done any sort of business with. Plus travel records and passport information. That lost data could pester people for the rest of their lives.

OPM says its techies secured remote server access and installed new cyber tools. The White House ordered the acceleration of Einstein 3A monitoring tools, not that the current version worked so well. Lots of Sturm und Drang, but no clear sense that the government is doing much more than improvising against something it only dimly understands and can only feebly deal with.

My hope is that when the scope of the OPM breach is known, the same unflinching, critical and public self-analysis exhibited by NIH and NHTSA will occur in the federal cybersecurity apparatus.

One of the Big Guys Contributes to Open Source

November 25, 2013

Over the years I have regularly been reminded of how much software permeates industrial activity far beyond the software industry itself. Thirty years ago the procurement director at McDonnell Douglas told me that software was the most challenging thing to predict when planning and executing aircraft design and production schedules.

A lot of software expertise resides in companies that don’t publish software. In the national defense domain, software development is diffused among Defense Department components and its many contractors.

That’s prelude to why I was intrigued by a press release last month from Lockheed Martin. Engineers in the company’s Information Systems and Global Solutions unit donated a data search engine/discovery tool to the open source community Codice Foundation. Codice is a 2012 offshoot of the Mil-OSS project. Both work to move defense-related software from the proprietary world into the open source world. Codice members modeled the effort after the granddaddy of open source foundations, Apache.

By the way, Apache last month released a new version of the widely-used Hadoop framework for distributed, big-data applications. It was a very big deal in the open source and big data application development world. For the uninitiated, here is a good explanation of Hadoop from InformationWeek.

Lockheed donated to Codice what the company describes as the core software in the Distributed Common Ground System (DCGS) Integration Backbone, or DIB. It’s a mouthful, but DOD uses it to share ISR (intelligence, surveillance and reconnaissance) data. The donated core is dubbed the Distributed Data Framework (DDF).

Andy Goodson, a Lockheed program manager, explained that the DIB started as an Air Force-led joint program a decade ago. Its basic function is to make data discoverable. As an open source tool, the DDF can be adapted to any domain where multiple data sources must be rendered discoverable by an application. It makes data available where it resides, both to application algorithms and to the processes that present the data to users.

In effect, the DDF furthers the fast-emerging computing model that treats data and applications as separate and distinct resources. The federal government has been trying to adapt to this model for some time, as manifest in (among other things) the Digital Government Strategy. In practice, to be useful, data even from disparate sources within a domain must adhere to that domain’s standards. But it need not be connected to a particular algorithm or application, so the data is reusable.
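As a purely conceptual illustration of that decoupling (not DDF’s actual API), here is a small Python sketch in which one shared catalog of metadata records, conforming to a common schema, is consumed by two independent “applications”: one discovers records by keyword, the other by bounding box. Neither application owns the data, so the data stays reusable.

    # Hypothetical catalog records that follow a common, domain-agreed schema.
    CATALOG = [
        {"id": "img-001", "keywords": ["harbor", "ship"], "lat": 36.85, "lon": -76.29},
        {"id": "img-002", "keywords": ["airfield"], "lat": 38.27, "lon": -77.45},
        {"id": "rpt-117", "keywords": ["ship", "manifest"], "lat": 36.95, "lon": -76.33},
    ]

    def search_by_keyword(records, term):
        """One 'application': free-text discovery against the shared records."""
        return [r["id"] for r in records if term in r["keywords"]]

    def search_by_bbox(records, min_lat, max_lat, min_lon, max_lon):
        """A second, unrelated 'application': spatial discovery against the same records."""
        return [
            r["id"] for r in records
            if min_lat <= r["lat"] <= max_lat and min_lon <= r["lon"] <= max_lon
        ]

    if __name__ == "__main__":
        print(search_by_keyword(CATALOG, "ship"))                  # ['img-001', 'rpt-117']
        print(search_by_bbox(CATALOG, 36.0, 37.0, -77.0, -76.0))   # ['img-001', 'rpt-117']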

Goodson said Lockheed found that the code, because it is independent of any particular data source, was not subject to export control even though it was used in a sensitive military environment. It had advice and counsel from its federal client in this determination. Now, the software is available through Codice to other agencies, systems integrators and developers dealing with the issue of big data applications.