Yeah, you’d think if they have the codes those should be more reliable than free text descriptions. I don’t see a scenario where you’d want to do natural language processing instead of just relying on the code to figure out what to do.
AI reading logs is actually a lot more common than people in smaller industries realize. It’s pretty much the modern way to tell the health of major systems, like AWS sub-components. Usually they call it “Log heuristics” or something.
What’s scary is that services like Wise (for international banking) use it to determine when money goes missing… That’s how they find out that some random bank in Sri Lanka with no API updated its website and broke Wise’s RPA.
For sure, but you can do structured logging. Every large project I’ve worked on logs JSON and that saves you a lot of time down the road. But yeah, a lot of systems out there just barf out random text in logs and then you’re stuck parsing that.
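Something like this is all it takes (a rough Python sketch using the standard logging module; the "station" logger name and error_code field are made up for illustration), and then nobody downstream has to guess at the format:

```python
import json
import logging

# Emit one JSON object per log line instead of free text, so downstream
# tooling can filter on fields rather than parse prose.
class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "msg": record.getMessage(),
            # fields passed via `extra=` show up as record attributes
            "error_code": getattr(record, "error_code", None),  # hypothetical field
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("station")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.error("Turbine overspeed detected", extra={"error_code": "E-1042"})
# -> {"ts": "...", "level": "ERROR", "logger": "station", "msg": "Turbine overspeed detected", "error_code": "E-1042"}
```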
Oh, I can see that scenario. Mine was a rhetorical question, as I’ve been working in the data-shipping field for the past few years.
Thing is, if there’s a thousand power stations, there may well be a thousand different implementations + error codes, because for decades there was no need for a common method of error reporting.
The only common interface was humans. That’s why all of these implementations describe errors in human-readable text. And I would bet a lot of money that they’ve already had to extract those error codes from text logs.
Writing them out in, e.g., a standardized JSON format requires standardization efforts, which no one is going to push for while individually building these power stations.
That’s how you end up with a huge mess of different errors and differently described+formatted error codes, which only a human or human-imitating AI can attempt to read.
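That extraction step is easy to picture. A rough sketch of what per-vendor code scraping tends to look like (the vendor names and log formats below are invented for illustration):

```python
import re

# Hypothetical example: three vendors report the "same" fault in three different
# free-text formats, so you end up maintaining one extraction pattern per vendor.
VENDOR_PATTERNS = {
    "vendor_a": re.compile(r"ERR\s+(?P<code>\d{4}):"),
    "vendor_b": re.compile(r"\[fault (?P<code>[A-Z]-\d+)\]"),
    "vendor_c": re.compile(r"Fehlercode (?P<code>\d+)"),  # German-language logs
}

def extract_code(vendor, line):
    """Pull an error code out of a free-text log line, if the vendor pattern matches."""
    pattern = VENDOR_PATTERNS.get(vendor)
    match = pattern.search(line) if pattern else None
    return match.group("code") if match else None

print(extract_code("vendor_a", "2024-05-01 03:12 ERR 1042: turbine overspeed"))  # 1042
print(extract_code("vendor_b", "03:12:07 [fault T-9] cooling pump offline"))     # T-9
print(extract_code("vendor_c", "Fehlercode 77: Drucksensor ausgefallen"))        # 77
```

And that’s the optimistic case where the formats are at least regular enough for a regex.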
I mean, there are definitely things they could have done that are less artificially intelligent, like keyword matching or even just counting how many error codes a power station produces. And I’m not sure you necessarily want a black-box AI deciding what gets power and what doesn’t. But realistically, companies around the planet will adopt similar approaches.
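The dumb baseline really is that simple, something along these lines (the threshold and station IDs are made-up numbers, not anything real):

```python
from collections import Counter

# Non-AI baseline: count error lines per station over a window and flag
# anything above a threshold for human review.
ERROR_THRESHOLD = 50  # errors per hour, arbitrary illustrative number

def flag_stations(events):
    """events: iterable of (station_id, is_error) tuples from the last hour."""
    counts = Counter(station for station, is_error in events if is_error)
    return [station for station, n in counts.items() if n >= ERROR_THRESHOLD]

events = [("plant_17", True)] * 63 + [("plant_04", True)] * 12 + [("plant_04", False)] * 5
print(flag_stations(events))  # ['plant_17']
```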
Ah ha, that explains a lot actually. I just realized how ignorant I am about how power plants (and many other factories) built before I was born are still running well today, and how costly it could be to upgrade them all.
Yeah, that’s a good point: if you have lots of disparate systems that don’t share standard codes, then the codes wouldn’t be of much use. I can see how standardizing that sort of thing would be a huge effort, so in that context the approach makes sense.
I also assume that the humans have the final say, but in most cases I imagine having the computer do the initial routing will get better results than doing nothing at all while humans figure out what the overall picture is.
I mean, I’m glad it’s not just some dumb if-else chain, or even just basic circuitry, that’s being sold as “AI” here.
But at the same time: How did we get to a point where this is the best solution?