
Why cybersecurity requires hardware, software and meatware to work together

August 25, 2015

Unless you are inherently fearful, danger tends to live in the realm of abstraction until something bad happens in reality. Recently a couple we know insisted my wife and I go out and try tandem bicycling with them. My wife regularly goes for 60-, 70-, 80-, even 100-mile rides on her own bike. I’m more of an occasional rider, but I’ve owned and ridden multi-geared bikes of one sort or another since about 1970.

The $10,000 bike this couple let us borrow didn’t feel right to either one of us. Custom-made, titanium beauty that it was, it felt hard to tame, even when I tried it myself in a parking lot. Uneasily, we climbed on and plunged out onto Rock Creek Parkway in Montgomery County — a narrow road with plenty of car traffic. I wasn’t comfortable with the shifters. The thing felt wobbly and too tall. We didn’t make it a half mile before crashing, one of us landing on either side of this elongated contraption. Cars stopped, people jumped out to help. Other bikes stopped to see if we were alive. The biggest cost was pride. But my left hand still hurts nearly a month later, as does my wife’s tailbone. And the episode set us back $310 for a new shifter.

Lessons learned: Practice where there’s no traffic and you can weave a lot. Learn to use foreign shifters beforehand. Get your road legs on a cheap, low-slung bike (you can buy a whole new tandem bike for $310). Don’t ignore your misgivings.

If we were a government agency, I’d say we didn’t do a good risk assessment, and we didn’t integrate our software with the hardware very well. We had what could have been a doomsday scenario, literally.

Until now, it seems as if federal cybersecurity has been operating on a wing and a prayer, too. The OPM data breach shattered whatever complacency anyone might have had. As it recedes into the past, the 30-day cyber sprint has left a lasting legacy. Not simply that federal systems are more thoroughly protected than they were. They may well be, but success in cybersecurity is ephemeral. It’s like a sand castle: you can never stop shoring it up. In one sense, every month should be a 30-day sprint.

And not simply that the sprint got everyone to realize at once how basic cybersecurity is to everything else the government has to do. And how poor the government is at it. That also may have happened.

Read this summary of the Office of Management and Budget’s after-action report from the sprint. Not the one for public consumption, but the internal one, which Federal News Radio’s Jason Miller got to see. It showed:

  • Some 75 open vulnerabilities identified, two-thirds of them festering for more than 30 days. Only 60 percent of them patched, and new ones keep popping up. At least agencies know to look for them now.
  • Old software running past the end of vendor support, no longer receiving patches.
  • The weakness of two-factor authentication in the face of super-realistic phishing e-mails.
  • Privileged access rights to networks given out willy-nilly.

I think the most important effect of the near-doomsday breach and subsequent sprint was driving home the need for an architectural approach to cybersecurity, taking it down to the storage hardware level. Here’s one example. The White House called this week for ideas pursuant to its Precision Medicine Initiative. The idea is to eventually gather health information on millions of people so it can be mined for trends leading to more personalized medical treatments than people have now.  Among the areas for which it seeks suggestions: “Technology to support the storage and analysis of large amounts of data, with strong security safeguards.” Cybersecurity is embedded throughout the call for comments. That’s a good sign.

Industry is starting to offer new approaches. The other week I was talking to people from Seagate, a disk drive and storage subsystem OEM. It’s part of a coalition of network equipment and software companies that contribute to what they call a Multi-Level Security Ecosystem. In the federal market, Lockheed-Martin and Vion offer it as a secure storage and file system for high-performance simulation and modeling applications that fuse together large, disparate data sets.

As Seagate Federal’s Henry Newman explains, the company built a set of services on top of SELinux to accommodate functions such as network communications, database access and data sharing across parallel file systems. So, for example, a large set of video surveillance footage could be engineered so that access to individual files is restricted to certain individuals based on their authorities. Personally identifiable information, compliance information or intellectual property within a system can be made subject to access controls and auditing, while limiting the need for expensive hardware redundancy.
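For readers who want the flavor of how multi-level security works, here is a minimal sketch of the classic Bell-LaPadula rules that underlie SELinux’s MLS policy. The level names and functions are illustrative, not Seagate’s actual implementation:

```python
# Minimal sketch of multi-level security (MLS) access checks, in the spirit
# of SELinux's MLS policy (the Bell-LaPadula model). Level names are
# illustrative only.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """'No read up': a subject may read only objects at or below its own level."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """'No write down': a subject may write only to objects at or above its level."""
    return LEVELS[subject_level] <= LEVELS[object_level]
```

The point of the two rules together is that data can flow up the classification ladder but never leak down it, which is exactly the property a shared surveillance or intelligence file system needs.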

Other contributors to the MLS ecosystem include supercomputer makers Cray and SGI, log analytics vendor Splunk, and Altair, a maker of job scheduling and management software.

Government practitioners like to say security should be built in, not bolted on. But they usually bolt it on. The Multilevel Secure group is just one example, but it shows where systems deployment is heading: toward security that’s baked in.

Want fries with that treatment, soldier?

August 14, 2015

Customer service is all the rage in the federal government.

Again.

A series of lapses that includes the healthcare.gov rollout and the well-documented problems with service provided by the Veterans Affairs Department has alerted the administration to the need for better customer experiences, whether in person, on the phone or online. The digital strategy is supposed to take care of improving the online part. It is one in a series of initiatives dating back to the Clinton administration’s E-gov project. That in turn had an antecedent in the “Service to the Citizen” movement of the George H.W. Bush administration in the pre-Web days. E-gov’s offspring was the Quicksilver series of projects of the George W. Bush administration.

It’s good that these efforts are revisited periodically. Technology and expectations change. Too bad the government has to lurch from crisis to crisis to get with it, though.

I had to chuckle when discovering that VA Secretary Bob McDonald brought in Tom Allin, a former executive of the fast food chain McDonald’s, as the chief veterans experience manager. As a habitue of McDonald’s for its coffee and occasional Egg McMuffin, I’ve seen customer service there up close. Don’t tell me you don’t go to McDonald’s. Nobody goes to McDonald’s like nobody watches television or listens to the radio unless it’s NPR. Yeah, sure.

At McDonald’s, I noticed the other day that counter employees work in an incessant cacophony of beeping food preparation apparatus, back-shop employees shouting at one another, and piped-in Muzak. They have to scurry to and fro for all of the detritus — bags, napkins, cups, ketchup packets, and the food itself — that make up an order. Something’s always broken, like the receipt printer, the credit card reader, the machine that squirts out “ice cream,” … something. When the young lady finally collected herself and met my eyes, I couldn’t help but ask, “Are you still taking orders?” To myself, I thought, if this is fast food, what the heck is slow food? As one of only two people in line, I wondered, how do they cope when it’s crowded?

I’d walked over from my car dealer, where I’d left my car for an oil change. It was quieter there, but the customer service representatives had all of this elaborate paperwork, had to dart back to a bank of printers, and out of their booths to the rear. It felt like it took as long to check in a car for an oil change as to actually change the oil.

These service employees face the same bureaucratically-induced barrier of process complexity and unreliable systems as their counterparts in the government. It’s a fine step for VA to have metrics for appointment wait times, or the IRS for phone answering times. But unless the systems are geared to enable people to reach these goals, they won’t happen. Insufficient staff, crappy software, an overly complex process — these can all get in the way of even the most dedicated humans who are trying to do a good job.

I spoke about customer service the other day with Deloitte principal Greg Pellegrino, who headed up a survey on the state of customer service in the federal government. The survey’s basic finding, to not put maple syrup on a pickle, is that the government thinks it gives better service than the public thinks it does.

Pellegrino points out three data points. One, the latest American Customer Satisfaction Index shows federal service getting worse, at the bottom of the heap. Two, Gallup polls show a slippage in public confidence in the government. Three, the most recent Viewpoint survey of federal employees shows a decline in job satisfaction. The third point is related to the first two, Pellegrino says. Basically, a combination of stingy budgets, lack of focus on customer service and unhappiness on the job has weighed down the experience people have with federal services.

All that plus a mismatch of intent and the technology to carry it out.

A new way to think about this, or perhaps it’s an old way dusted off at a time of great technological change, is outlined in a Harvard Business Review article by Jon Kolko, a vice president at Blackboard. He describes an approach called design-centric thinking. It’s a “set of principles [encompassing] empathy with users, a discipline of prototyping, and a tolerance for failure” all aimed at creating a customer-centric culture. Translation: You combine clear thinking with agile development principles.

Kolko says design-centric thinking applied originally to physical objects. Now organizations are applying it to services. And get this: There’s a great example at the Veterans Affairs Department, of all places. VA’s Center for Innovation used this kind of thinking to envision a “customer journey map to understand veterans’ emotional highs and lows in their interaction with the VA.” A map like that can point the way to better customer service by aligning systems, processes and what the customer wants.

Imagine that.

The big cybersecurity challenge: Time-to-detection

July 29, 2015

Do you sunbathe? You shouldn’t in this day of hypersensitivity about skin cancer. But if you do, the sunlight falling on your liver-spotted, lizard-like skin has been traveling through space for about nine minutes. When you gaze at the night sky and see Alpha Centauri, you probably remember from grade school that light from that nearby star system takes about 4.3 years to get to earth.

If something like a Burning Man festival were held on Alpha Centauri, you wouldn’t know about it until 4.3 years after it was over. Too late to load up your Airstream and get there in time for the fun. Most stars are so far away, they probably collapsed into black holes a billion years ago, yet all we see is merry twinkling millennium after millennium.

Not to over-dramatize, but this is how things are in cybersecurity — specifically intrusion detection. When the Office of Personnel Management was patching its systems, it discovered its great breach, months after the break-in had occurred. It might have been still more months before anyone noticed the anomaly. It reminds me of a corny roadside display in Pennsylvania when I was a kid. A sign on a little barn said, “World’s Biggest Steer Inside.” When you pulled over and peered in the window, you saw a big jagged hole in the back of the barn, a chain lying in the dirt, and another sign, “Too bad, guess he got away!” There must’ve been a gift shop or goat’s milk fudge stand nearby.

This is one of the big problems with modern-day cyber attacks. Too often, IT and security staffs only find out about them long after the damage has been done and the hackers moved on to other soft targets. If it takes seconds or minutes to exfiltrate data, what good does discovering it do next year?

I recently spoke with John Stewart, one of the top security guys at Cisco. The topic was Cisco’s Midyear Security Report. Here’s my summary: Federal IT and security people, like everyone else, have plenty to worry about. Like the fact that a thousand new security product vendors have started up in the last five years, yet most of them sell non-interoperable software. Or that the white-hat, good-guys side of the cybersecurity equation is literally about a million qualified people short.

Yet among the most seemingly intractable problems lies time-to-detection, or how long on average it takes for organizations to find out they’ve been hacked. This makes it likely that many more successful attacks have occurred than systems administrators are aware of. Stewart says most of the data show that IT staffs routinely take months to detect breaches. A major goal of the products industry and practitioners’ skill sets must therefore be getting time-to-detection down to seconds. At this point, I’ll bet many federal agencies would be happy with days or hours.
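Time-to-detection is an easy metric to define, even if it’s hard to shrink. A minimal sketch, with entirely hypothetical incident records (real data would come from forensic timelines, which vary by tooling):

```python
# Hypothetical sketch: computing mean time-to-detection (TTD) from incident
# records. The compromise and detection dates below are invented for
# illustration, not drawn from any real breach.
from datetime import datetime, timedelta

incidents = [
    {"compromised": datetime(2015, 1, 10), "detected": datetime(2015, 4, 22)},
    {"compromised": datetime(2015, 3, 2),  "detected": datetime(2015, 3, 9)},
]

def mean_ttd(records) -> timedelta:
    """Average gap between compromise and detection across incidents."""
    gaps = [r["detected"] - r["compromised"] for r in records]
    return sum(gaps, timedelta()) / len(gaps)
```

On these made-up numbers the average works out to weeks, which is roughly the uncomfortable reality Stewart describes; the industry goal is to drive that figure toward seconds.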

Malicious hackers aren’t standing still, the Cisco report points out. They’re switching vectors and modalities at lightning speed. They’re using wealth transfer techniques that stretch law enforcement’s ability to detect. Stewart says systems like Bitcoin and the murky avenues of the dark web don’t include or even require the typical middlemen of the surface financial transaction world — such as banks, transfer networks, mules. He describes the bad-hacker industry using a term the government likes to use for itself: innovative.

Embedded IP domains and fungible URLs, jacking up data write-rewrite cycles to dizzying speeds, or quietly turning trusted systems into automated spies in the time it takes someone to go for coffee — that kind of thing. You might call it agility. They’re dancing circles around systems owners. The hacking community has become wickedly innovative at evading detection, Stewart says, exploiting the common systems and software everyone uses routinely.

He adds that the motivations of bad hackers have blossomed into a veritable bouquet. They go after systems for espionage, theft of money or intellectual property, terrorism, political activism, service disruption and even outright destruction. That’s a good case for the so-called risk-based approach to cybersecurity planning. If you’re a utility, disruption or destruction is more likely to be the hackers’ goal. If you’re a database of people with clearance, espionage and theft are good bets.

Answers? As cybersecurity people like to say, there is no silver bullet. Stewart says nations will have to cooperate more, tools will have to improve, people will have to get smarter. Cisco hopes to build some sort of architecture framework into which the polyglot of cyber tools can plug, reducing what he calls the friction of integration.

For now, a good strategy for everyone connected to cybersecurity is to bore in on the essential question: How soon can we know what’s going on?

Thoughts on bloated web sites, complex software

July 21, 2015

With my wife at the wheel, we swing off Route 21 in New Jersey onto E 46. The GPS in the dash of our new Subaru is guiding us to Saratoga Springs, NY for the weekend. Kitty-corner from the exit is a big billboard that reads, “WHO IS JESUS? CALL 855-FOR-TRUTH.” Nice and succinct. I admired the certitude, but didn’t try the number.

The car is filled with slightly more mystifying tech. Somewhere I read the average modern car has 200 microprocessors. How many lines of code do they run, I wonder? No matter, the car does what it’s supposed to. Anyone who ever dealt with distributor caps, points and engine timing lights appreciates the way today’s cars work.

The GPS-bluetooth-navigation complex in the dash is another matter. It’s a mishmash of hard-to-follow menus. No matter what we do, every time we turn on the car, the podcasts on my wife’s phone start up. As for navigation, no two systems I’ve ever seen work quite the same way, at least their user interfaces don’t. Voice commands can be ambiguous, and it occasionally directs you off the highway only to direct you right back on again.

This same overload is ruining many web sites, as it has many once-simple applications. No wonder people love apps, in the sense of applications designed or adapted to work easily and quickly on the small touch screens of mobile devices. Standards like Word, Outlook, iTunes and many others have become so choked with features and choices, I’ve practically given up on them. I can figure out what they do, but it’s all too much, too fussy and time-consuming to manage.

The major media sites are so choked with links — most of them for ads, sponsor content, and unrelated junk such as 24 celebrity face-lifts gone horribly wrong — that you can barely navigate them without constant, unwanted and frustrating detours.

The drive to make software more and more functional may be behind what seems to be a disturbing trend toward failures in critical systems. They’ve happened a lot lately. In fact, it happened first rather close to home. Literally a minute before going on the air one recent morning, the system that delivers scripts and audio segments failed. At Federal News Radio, we’d gone paperless for a year, reading scripts online and saving a package of printing paper every day. Talking, trying to sound calm, ad-libbing while gesticulating wildly to my producer — that’s what a software crash causes. Controlled panic. Panic, anyhow. It took the engineers an hour to fix. It turned out, a buffer overflow crashed the Active Directory on which the broadcast content environment depends for user privileges. So down it went with the ship.

It was the same day United Airlines’ passenger boarding system failed, apparently the result of lingering incompatibility from the merger with Continental. And the same day that the New York Stock Exchange famously experienced an hours-long crash, reportedly because of a network connectivity issue. Earlier in the month, a hardware-software interaction interrupted for two weeks the State Department’s globally-distributed system for issuing visas.

Successive program managers for the F-35 fighter have all complained they can’t get the software development for this fussy and delicate airplane onto any sort of predictable schedule. Yet the plane is unflyable and unmaintainable without its software.

In short, two problems linger with software controlled systems. They can be difficult to interact with. And in their complexity they produce effects even expert operators can’t foresee. I believe this is the basis for the spreading appeal of agile development. It forces people to develop in pieces small enough that people can keep track of what is going on. And in ways that the users can assimilate easily.

Complexity, or the desire to avoid it, is why people like apps on mobile devices. I confess to checking Buzzfeed on my phone when I’m bored. The content is inane, but it’s such a fast, simple app, like eating gumdrops. I recently checked out the regular Web site of Buzzfeed, and sure enough, it’s a confusing kaleidoscope. Although, an ice cream cone swaddled in Cocoa Krispies does sound good.

OPM left a sizzling burger on the counter. The dog ate it. Who do you blame?

June 16, 2015

Dog trainers like to say there are no bad dogs, only bad owners. I know. We have a now-elderly greyhound. She rules the roost, mostly. But because of her mild personality, she’s never out of control, never pulls on the leash, and has never so much as made a growl at anyone. Mostly she saunters into the middle of the room and lies on her back, her tummy available for anyone who cares to rub it.

But leave a hamburger on the counter, a cold drink on a side table, or an unattended dinner plate of food, and oh boy. Don’t turn your back. She’ll pretty much have it devoured before you can turn around and say, “No!” One time the extended family retired to the living room and family room after Thanksgiving dinner. After putting away some dishes I went into the dining room to pull the tablecloth. There was Lizzie, atop the dining room table, licking up crumbs and tidbits.

Unlike China, which denies everything when it is caught stealing data, a dog caught stealing food looks at you and says through her eyes, “What did I do? You left it there.”

A young Lizzie cleaning up after Thanksgiving.

This is what I thought of when reading comments former CIA Director Michael Hayden made to a Wall Street Journal conference regarding the awful database breach. The U.S. personnel records were “a legitimate foreign intelligence target,” Hayden said. He added that our intelligence apparatus would do the same thing if it had half a chance. Hayden said he wouldn’t have thought twice about grabbing any Chinese government database the CIA could.

“This is not ‘shame on China.’ This is ‘shame on us’ for not protecting that kind of information,” Hayden said.

OPM left a juicy, sizzling hamburger on the counter. The dog snatched it.

Perhaps the U.S. government does do the same thing to rival nations. We don’t know for sure. Let’s hope so, because at the least it would leave things in a rough state of Spy vs Spy equilibrium. Because it is justifiably embarrassed, and because it can’t really do anything about Chinese cyber behavior, the accusations from the administration have been mild and sporadic.

Unfortunately, I see no recourse other than for OPM Director Katherine Archuleta to resign. I don’t say this with any satisfaction. Not that she was personally responsible for the breach. Not that she’s a bad person. But the warnings were there. She knew the hacked systems were behind on their FISMA certifications, and about the string of attacks going back a year. It all happened on her watch and it potentially harmed enough people to fill New York City, Chicago, Baltimore and Dallas. It’s not that she was personally malfeasant; it just goes with the territory. Had a rocket landed on the OPM building, that would have been one thing. But an egregious organizational performance lapse of this scale claims the person ultimately responsible.

Recall what happened back in 2012 at the General Services Administration. A conference 18 months earlier on which regional officials spent indiscreetly and contracted criminally came to light. Administrator Martha Johnson resigned before the reason why became known. Veterans Affairs Secretary Eric Shinseki toughed it out for a while, but ultimately had to step down after the drip-drip-drip of bad news from the patient scheduling scandal of last year.

OPM, as Francis Rose points out, has lost its credibility. Now it needs new leadership to restore it.

Fails happen. It’s how agencies react that matters

June 9, 2015

An old, familiar shibboleth came up again this week. “Washington is a city of second chances.” That’s what a Washington Post article said about a popular millennial writer who was fired from a popular web site for plagiarism. He popped up at another web site a year later, where he’s boosting its traffic. Dennis Hastert, the former House speaker now enmeshed in a really bad scandal, probably is too old to have a second chance.

Organizations can have second chances, often because they have the wherewithal to buy their way back. I remember the Ford Pinto gas tank scandal (1977), the time Lockheed nearly went bankrupt (1971) save for a federally-backed loan, and the Tylenol poisoning scare (1982), which was a problem not of the company’s making. Today, Ford, Lockheed-Martin and Johnson and Johnson prosper quite nicely.

Can federal agencies have a second chance, I’ve been wondering? Technically no, since they can’t go out of business unless Congress decrees it, which it never does. So when they goof up, there might be temporary hell to pay, but not the threat of going out of business. In fact, serious failures are often rewarded with big budget increases, as in the case of the Veterans Affairs Department. Congress can readily replace money. Reputation and perceived legitimacy — harder to recover.

Yet agencies are obligated to react when things go wrong. Recently two examples occurred that I point out as case studies of the right way to react and retain the confidence of the public.

A whistleblower, still anonymous, complained to the FDA about poor practices and fungus contamination at the National Institutes of Health. Specifically, in the Pharmaceutical Development Section of NIH’s Clinical Center. This is where doctors and technicians whip up experimental drugs for small groups of patients. Two vials of albumin, a medium for injecting drugs into patients, were found to have the fungus. Patients had been given injections from different vials in the same batch. The FDA investigated the lab, and the NIH suspended sterile production. It won’t resume until at least June 19th.

The NIH went public with the episode, including a mea culpa from the director, Dr. Francis Collins. When I spotted the release, I asked for an interview the next morning with Collins. NIH public affairs people — they are among the best in the government — got me the principal deputy director, Dr. Lawrence Tabak. He said the NIH welcomed the highly irregular incursion by another federal agency. We don’t know what personnel changes will happen with the troubled section, but the speed and forthrightness of the NIH response seemed refreshing and, well, grown up.

Another agency, the relatively small National Highway Traffic Safety Administration, published last week the results of a study of how it can function more effectively. The agency launched the review in response to how sluggishly it responded to the General Motors fiasco of the malignant ignition switches and non-deploying airbags. The defects caused at least 100 deaths when people’s cars turned off at highway speeds. Last year’s incident is still in the news, overshadowed though it may be by the explosive Takata airbag situation that’s affected millions and millions of cars by many makers.

Somehow the GM ignition switch-airbag issue went on for a dozen years before the 2014 recall, and the NHTSA blames itself in part. It says it was pushed around by GM, and it lacked the technical understanding its staff needed to stay on top of these issues. The NHTSA report says the agency “failed to identify and follow up on trends in its own data sources and investigations.” The upshot: The agency has produced a detailed internal improvement plan, and appointed three outside experts to guide the improvement effort, including a former astronaut.

And what of the Office of Personnel Management, from which vast amounts of personal data on current and former federal employees were stolen? The lag between discovery and disclosure is troubling. More disturbing is the frequency of similar attacks and the seeming ease with which whoever — China, some lunatic insider, maybe a combination of both — is getting into federal databases. As Jason Miller reported this week, the government has experienced nine incidents in less than a year in which hackers attempted or succeeded in stealing personal information on government and contractor employees.

How did the agency react? OPM made the obligatory offers of credit monitoring. It worked with US-CERT and the FBI, but the US-CERT report is incomplete, and in any case isn’t available at its web site. The agencies still don’t know how much data was taken, or else they haven’t said. The stain is still spreading. As pointed out in my interview with cyber expert Rodney Joffe of Neustar, the loss of SF-86 data exposes not only employees, but friends, neighbors, and any foreigner they’ve ever done any sort of business with. Plus travel records and passport information. That lost data could pester people for the rest of their lives.

OPM says its techies secured remote server access and installed new cyber tools. The White House ordered the acceleration of Einstein 3A monitoring tools, not that the current version worked so well. Lots of Sturm und Drang, but no clear sense that the government is doing much more than improvising against something it only dimly understands and can only feebly deal with.

My hope is that when the scope of the OPM breach is known, the same unflinching, critical and public self analysis exhibited by NIH and NHTSA will occur in the federal cybersecurity apparatus.

Five Steps For The Government To Regain Trust

March 30, 2015

Last month the Obama administration rolled out something called the federal feedback button. Officials describe it as a Yelp-like way for people to give feedback on the online service they get. That is all well and good. People visiting federal websites should have a good experience, easy to navigate and returning the results they seek. I think for the most part they do. Still, you can never have too much feedback. Sites vary. Some are still tough to navigate, others are right up there with the best of them. Some adapt perfectly to mobile devices, others have yet to be redone with responsive, mobile-aware coding. But on the whole, people responsible for federal web sites care a lot about their work.

One goal of the federal feedback button puts a little too much on the shoulders of web managers. Specifically, the notion that better digital service and gimmicks like a website button can help restore faith in government. A lousy web experience might reinforce the notion that government is incompetent if a visitor is inclined to think that way. Most people take a poor web experience for what it is — a poor web experience. To make an analogy, I’m highly loyal to the brand of car I drive. The company’s website is over-engineered and precious to the point of being annoying and hard to figure out. But that shakes my faith in its web people, not in the car.

Distrust of government stems from problems way deeper than digital service. All you have to do is scan the last few weeks’ headlines to see examples of what makes government sink in citizens’ estimation. None of these sources of mistrust will be remedied with the federal feedback button.

Nor will they be fixed with simple-minded assertions about the efficiency or motivation of the federal workforce. Good people working in bad systems will produce bad results. The way to better, more trustworthy government runs through fixing the systems and processes, and funding them adequately. Then you’ve got the tools necessary to hold people accountable.

Here are my five picks for systems that need to be fixed to restore faith in government.

1. Fulfill FOIA requests. How many more decades must pass before federal agencies figure out a way to answer Freedom of Information Act requests within days or hours, and then fulfill most of them? A default to secrecy and withholding clings stubbornly. Just a month ago the Center for Effective Government came out with another dreary accounting of agency FOIA performance. The open data movement, exemplified by data.gov and the hiring of a chief data officer at the Commerce Department, is a fine move toward unlocking the government’s vast stores of data. But FOIA performance is a powerful indicator of how open the government is with respect to information people demonstrably want.

2. Get serious about not wasting money. $124 billion in improper payments for fiscal 2014. That’s two years worth of Overseas Contingency Operations budgets. Three years of operating the Homeland Security Department. Four years of the Energy Department. It’s around $350 for every American. The administration deserves credit for diligent efforts over the last few years to push improper payments down. But it’s like trying to suppress in your hands a balloon that’s connected to an air source.

3. Remind high (and low) officials to think before they act. A secretary of state used a rigged-up server to do four years of federal business, then erased the whole thing. The deputy DHS secretary is found by the inspector general to have improperly intervened in staff work regarding visa clearances, on behalf of politically connected individuals. A member of Congress spends $40,000 of somebody else’s money decorating his office. The Justice IG can hardly keep up with all of the misbehavior at law enforcement agencies. Not all the people in these episodes are bad or evil. Alejandro Mayorkas contends that, in the case of the visas, he was expediting stalled applications. He has a distinguished record of public service, but golly, I wish he’d stopped for just a sec and looked at the expediting from a poor taxpaying schlub’s point of view.

4. Stop writing badly-worded laws. Like the VA overhaul bill that gives veterans living more than 40 miles from a VA facility the option of using private health care. Congress wrote in a provision telling VA to use geodesic measurement, meaning a 40-mile radius drawn by compass around each VA facility. But people don’t drive as the crow flies, as Deputy Secretary Sloan Gibson pointed out at a hearing. The whole thing made VA look goofy. It bewildered veterans. And it limited the utility of an expensive program. Now they’ll use online maps to calculate 40 miles even though that’s not really what the law says. Sloppy.

5. End backlogs. Good service means speedy service. Veterans Affairs has a first-time claims backlog of about 245,000. That’s a sharp reduction from its peak, but it’s not likely to disappear, even though the department has promised a zero backlog by the end of the year. Social Security’s disability claims backlog runs close to 1 million. At the Patent and Trademark Office, the backlog runs to more than 600,000. The people handling all of these claims aren’t lazy or incompetent. But they’re working in a system that makes them look that way.
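The geodesic measurement complained about in point 4 is worth a quick illustration. “As the crow flies” distance is the great-circle distance, computed with the haversine formula; the coordinates below are illustrative, not actual VA facility locations:

```python
# Great-circle ("as the crow flies") distance via the haversine formula --
# the measure the 40-mile rule originally implied, as opposed to driving
# distance. Coordinates below are illustrative.
import math

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Distance in miles between two (latitude, longitude) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))
```

A veteran could easily sit just inside 40 miles by this straight-line measure yet face a far longer trip by road, which is exactly why the protractor-and-map reading of the law bewildered people.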

The administration favors challenges and crowd-sourcing of ideas. Here are five persistent problems that, if rectified, would significantly increase faith in the competence of the government, and by extension, the people who work for it. These conditions persist not because government employees are bad or don’t care. It’s because they work in a culture that avoids risk and makes it easier to say no to an idea than to push it through to completion.