The Problems With the FIU Bridge Began Before It Collapsed

Because paying attention to actual engineering wouldn’t be Woke. Shocking photos show there were large cracks in Florida International University bridge five days BEFORE six were crushed to death when it collapsed onto them.

To remind folks, back in March (“Beware the Ides of March”) a pedestrian bridge collapsed, crushing several people; six died. My take on the issue was that the groups responsible were spending a lot of money to build a monument to diversity. (A $14 or $15 million grant from the federal .gov was involved.) They just couldn’t be bothered to spend time on the actual engineering. (Math is hard.)

Now it turns out they couldn’t be bothered to pay attention to the fact that their bridge was collapsing before their eyes.

Large cracks appeared in the Florida International University pedestrian bridge just days before it collapsed, killing six.

An update from the National Transportation Safety Board says their investigation has revealed significant breaks in the concrete that emerged after the bridge was moved into place above the roadway on March 10.

That is five days before the collapse, which is plenty of time to have closed the road or shored the bridge up with timbers. Whatever. But all that is nasty engineering and construction. If you click thru to The Daily Mail, the photos are not of minor cracks from concrete curing too fast. It is hard to gauge size in the photo, because there is no reference, but one crack would appear to be about 1/2 inch or more, right adjacent to where a vertical support meets the deck. (If I saw that kind of crack, I think I would avoid that bridge for a while.) Maybe I’m mistaken, but the result, five days later, would seem to imply that paranoia would have been appropriate.

Let’s not forget that this pedestrian bridge weighed in at 950 tons. Because beauty. Or something. And those cracks started showing up even before the bridge was lifted into place on the roadway supports.

Here are some previous posts on the bridge collapse.

Hat tip to Irons in the Fire.


Medical Equipment Maker Ignores Security

I’m sure that this isn’t the only company with a problem like this. This story is from Black Hat. Hack causes pacemakers to deliver life-threatening shocks.

Life-saving pacemakers manufactured by Medtronic don’t rely on encryption to safeguard firmware updates, a failing that makes it possible for hackers to remotely install malicious wares that threaten patients’ lives, security researchers said Thursday.

Basic security is ignored, like digital signatures for the code in question. And there is a proof-of-concept exploit. After a year, the company has done nothing. (Well, maybe not nothing, but the exploit still works.) Ditto for at least one of Medtronic’s insulin pumps. (Maybe they only make one; I don’t know.)
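To make the missing check concrete, here is a minimal sketch of what verified firmware installation looks like. All names here are hypothetical, and the HMAC shared secret is a stand-in for the asymmetric signature (e.g., Ed25519) a real device would verify with a baked-in public key, so the signing key never leaves the vendor:

```python
import hmac
import hashlib

# Hypothetical vendor key. With a real asymmetric scheme, the device
# would hold only the public half and could never sign anything itself.
VENDOR_KEY = b"example-vendor-signing-key"

def sign_firmware(image: bytes) -> bytes:
    """Vendor side: produce an integrity tag for a firmware image."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

def install_firmware(image: bytes, tag: bytes) -> bool:
    """Device side: refuse any image whose tag does not verify."""
    expected = hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False  # tampered or unsigned update: reject it
    # ... only now flash the image ...
    return True

firmware = b"legitimate update v2.1"
tag = sign_firmware(firmware)
print(install_firmware(firmware, tag))             # True
print(install_firmware(b"attacker payload", tag))  # False
```

Without that `install_firmware` check, anything that can reach the programmer can push arbitrary code to the implant, which is exactly what the researchers demonstrated.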

The company released a statement that has several platitudes, calls on the high gods of the FDA, and has a bit of boilerplate added. It is mostly meaningless. (Having good physical security over the programmer won’t stop a man-in-the-middle attack.)

Hat tip to A Geek with Guns. Both DEFCON and Black Hat are running, so there should be some interesting security stories this week, as the talks are made known.

What About When You Don’t Have Cell Coverage?

If you ever venture out of your urban enclave, you might come to a place where you don’t have cellphone service. Any service. Mother, son-in-law killed in fiery crash in Jasper National Park.

“We were trying to give all our efforts to save the injured and control the fire,” he explained. “The challenging part was there was no cell phone coverage. People were shouting to call 911.”

Because that is what they are trained to do. When that doesn’t work, they are at a complete loss.

Now I’m not saying that you need to become a ham radio operator and install a 2-meter radio in your vehicle, or go the Citizens’ Band route. But screaming at people who have no cellphone service that they should call 911 is not helping anyone.

And yes, Virginia, there exist places without cellphone service, not even 3G.

Privacy Is Such a 20th Century Concept

“What Could Possibly Go Wrong?” Free Facial Recognition Tool Can Track People Across Social Media Sites

This tool was developed for ethical hackers, penetration testing, etc. The fact that it will facilitate spear phishing by anyone is just a side benefit.

I’m sure that black-hat hackers or stalkers won’t use this at all. Or will they?

However, since the tool is now available in open-source, anyone including bad actors or intelligence agencies can reuse facial recognition tech to build their own surveillance tools to search against already collected trove of data.

The press release about this tool states the following “benefits.”

  • Create fake social media profiles to “Friend” targets and then send them links to downloadable malware or credential capturing landing web pages.
  • Trick targets into disclosing their emails and phone numbers with vouchers and offers to pivot into “phishing, vishing or smishing.”
  • Create custom phishing campaigns for each social media platform, making sure that the target has an account, and make these more realistic by including their profile picture in the email. Then capture the passwords for password reuse.
  • View target’s photos looking for employee access card badges and familiarise yourself with building interiors.

I’m sure no bad actors are interested in any OPEN SOURCE tool with those capabilities! (The internet was fun while it lasted.)

Because There Isn’t Enough To Worry About

Researchers Developed Artificial Intelligence-Powered Stealthy Malware. Yeah, that’s just what we need.

AI has been marketed as a cure for malware. It can detect the signs of viruses, Trojans, and the like, and save the day. But the reverse is also true.

However, the same technology can also be weaponized by threat actors to power a new generation of malware that can evade even the best cyber-security defenses and infects a computer network or launch an attack only when the target’s face is detected by the camera.

To demonstrate this scenario, security researchers at IBM Research came up with DeepLocker—a new breed of “highly targeted and evasive” attack tool powered by AI, which conceals its malicious intent until it reaches a specific victim.
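The clever part of the design, as IBM described it, is that the payload’s decryption key is derived from the AI model recognizing the target, so until the right face shows up, the key simply doesn’t exist anywhere on the infected machine. A benign, much-simplified sketch of that idea, with a string standing in for a face embedding and a harmless message standing in for the payload (all names here are illustrative, not IBM’s code):

```python
import hashlib

def derive_key(attribute: bytes) -> bytes:
    # In DeepLocker the "attribute" is the output of a neural net
    # recognizing the target (face, voice, location). Here it is
    # just a string, and the payload is a harmless message.
    return hashlib.sha256(attribute).digest()

def xor(data: bytes, key: bytes) -> bytes:
    # Toy XOR cipher, purely for illustration.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Attacker side: lock the payload under the target's attribute.
target = b"target-face-embedding"
locked = xor(b"benign stand-in for a payload", derive_key(target))

# Victim side: the payload stays opaque until the target appears.
print(xor(locked, derive_key(b"someone else")))  # gibberish
print(xor(locked, derive_key(target)))           # the original message
```

The point is that scanning the binary gets a defender nothing but encrypted bytes; the trigger condition and the key are buried in the model.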

Coming to a state-sponsored hacking team interested in you soon. (The internet was fun while it lasted!)

The Cost of Ignoring Computer Security

TSMC (Taiwan Semiconductor Manufacturing Co.) got hit by a variant of WannaCry (again?) that stopped their manufacturing dead. Taiwan Semiconductor faces revenue hit after computer virus closes factories.

So they installed a new set of tools or updated software on some existing tools (it isn’t quite clear). In the process they infected their internal network with a variant of WannaCry. Manufacturing ground to a halt. That was on Friday. By Sunday they were apparently back in business.

This article says 3% of revenue. Steve Gibson, on Security Now, listed the cost as $256 million. (Links are to video and show-notes respectively.)

However you slice it, that is a large amount of money. TSMC promises that procedures will get better, to avoid a replay.

So will this encourage people to take security a little more seriously? Somehow I doubt it. Maersk Line lost a similar amount of money, and it didn’t change anything. Ditto for the European subsidiary of FedEx. The UK’s NHS was hit. Other medical facilities too. Now this.

So Calling It “Autopilot” Seems to Be Marketing Nonsense

But then what did we expect? Car assistance systems only boost safety if drivers pay attention, tests find.

During the test drive, Autopilot generally performed well on a busy stretch of New Jersey highway, but the car nearly drove into another when two lanes merged together.

“Not quite a perfect system. The car was not aware that there was another car that was about to steer into us,” Stevens said.

The net safety benefit isn’t clear. If it mostly encourages drivers to not pay attention, then they might not be paying attention at a crucial moment.

On the magazine’s test track, Fisher demonstrated how Tesla’s Autopilot struggled to navigate turns when the road lines faded and relied on the driver to hit the brakes as the car approached the end of the track. He said Autopilot can’t monitor how the technology is used or a driver’s attention to the road.