iOS 18.1 and Apple Intelligence

If I remember correctly, Lenovo got its hand caught in the cookie jar with a BIOS backdoor. Since it was/is a Chinese company, the DoD banned it from any part of its operations. It was a while ago, and maybe they have proven themselves?
Was that the Bloomberg saga where they claimed Apple had been compromised and refused to back down once proven wrong? I didn’t realize Lenovo was a Chinese company; weren’t they spun off from IBM?
 
The Bloomberg thing seems to have been entirely made up.
 
And weren't they building them for IBM before they took over?
I think you might be right? Don't remember it clearly, but it would make some sense for IBM to outsource manufacturing, then decide to get out of the business altogether, and end up selling to their contract manufacturer.
 
iOS 18.2 and Image Playground
Lemme put it this way: if it were a stand-alone app, I wouldn’t pay for it. What it does to hair is a travesty. My daughter’s picture came out great.
IMG_0117.jpeg

Mine, not so much. But not too far off the mark.

IMG_0118.jpeg
 
I’ve been using it and keep wondering: Is summarizing messages so advanced that it took AI to accomplish it? Doesn’t seem like a heavy lift.
macOS has included a "Summarize" tool since Mac OS X 10.3 or something like it. I don't think it was very good though.

I think it speaks to a bigger societal issue where we must comprehend everything within a few seconds, rather than take a little time to simply read and take it in. All I can say is I'm glad to have grown up reading books and taking life at a slower pace; we are becoming systematically dumber as we let smart computing do everything for us.
Some people just can't read. I remember a particularly frustrating issue at a previous job where the project manager just couldn't understand my reports of issues when I wrote more than a handful of lines. It's hard to explain: they were able to identify the "key words" of my messages well, but they completely missed any information that required parsing the "connectors" between words, even after multiple back-and-forths. Yet when speaking in person there were no such issues.

Sadly, I think this kind of thing will get worse with AI. I think a lot of people see long messages as some sort of needless formality, writing long texts because it's "expected" (particularly in a work setting) and not because they need to convey a lot of information.
 
Well, you're describing nearly every PM I've ever worked with, most of whom are overworked and overtasked, so I guess I can see it from their perspective as well, particularly when they have to get into the minutiae of a project. I completely agree that a F2F meeting gets so much more accomplished in that sense; it seems easier to communicate key points that way.

So do we entrust what we're trying to communicate to AI to properly summarize it? I have a feeling we'll have to train ourselves to properly steer it toward the algorithm once we understand it better, just as we do social media now. An interesting turn of events really.
 
I’ve noticed a problem I have recently - certain associates like to send me “walls of text,” with color coded portions, bold, italics, etc., all over the place. In the old days I’d have no problem. Now, because of my vision, I can’t handle it. Where I used to be able to look at a page of single-spaced 10-pt text and my brain would see all the words at once, now I have to go word by word just to be able to make out what it says, and by the time I’ve read three lines, my brain has had to work so hard to figure out what it says, that I have lost the context. So I get on the phone and just talk through whatever they had to say. Purely a vision problem - I have no problem with 12-point double-spaced black-on-white stuff.

Also, associates write too much. I don’t need paragraphs - just give me bullet points at the top, and put the details down below if I need them.

Argh. Get off my lawn.
 
I know that happens, but this had to be something else. I remember a truly trivial thing: an app whose sole purpose was to show charts of moving averages wasn't computing the moving averages correctly. When averaging over sparse data, empty points were counted as zeros, making all the averages (and charts) wrong. This took like an entire day to explain, in a thread with dozens of back-and-forth messages. It was really bizarre, because only the words "average" and "chart" seemed to register. The thread took too long for it to be a lack-of-time issue.
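For anyone curious, the bug is easy to demonstrate in a few lines. This is a minimal sketch (all names are hypothetical; the original app's code isn't shown anywhere in this thread), assuming missing points are represented as None:

```python
def buggy_moving_average(series, window):
    """The bug described above: missing points are counted as zeros,
    dragging every average over sparse data down."""
    filled = [x if x is not None else 0 for x in series]
    return [sum(filled[i:i + window]) / window
            for i in range(len(filled) - window + 1)]

def fixed_moving_average(series, window):
    """Averages only the points that actually exist in each window."""
    out = []
    for i in range(len(series) - window + 1):
        present = [x for x in series[i:i + window] if x is not None]
        out.append(sum(present) / len(present) if present else None)
    return out

data = [10, None, 20, 30]
print(buggy_moving_average(data, 2))  # [5.0, 10.0, 25.0]
print(fixed_moving_average(data, 2))  # [10.0, 20.0, 25.0]
```

With dense data the two functions agree; only sparse data exposes the difference, which is exactly why the charts were wrong only for series with gaps.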

You're probably right, but it saddens me. I do spend time thinking about the intent behind the words I write and the connotations they have. Not for everything, obviously, but I find it existentially sad that there's a possibility that not long from now we will be basically shouting into the void most of the time. Getting rid of all the subtle intent the words of the author may have had, in exchange for a marginally more convenient-to-read version. Meh.

I'm sorry about that. For what it's worth, I try to spend the extra time ensuring things like Dynamic Type work on any new UIs I write. Even though, to date, I have found little evidence of anyone else caring (within the companies I've worked at). But hopefully it's useful to people out there.
 
  • How do we properly redefine zero as a null value?
(how did I do compared to AI?) ;)

In the social media world the algorithm trains you, or you simply drop off their radar.
 
Turns out, much better than ChatGPT, because ChatGPT spewed this out:
An app designed to display moving average charts was incorrectly computing the averages by treating empty data points as zeros. This led to inaccurate averages and flawed charts, especially when dealing with sparse data.
First things first: it's not a summary. And it's also wrong! The app was not flawed "especially" when dealing with sparse data; it was flawed only when dealing with sparse data. This is exactly the kind of thing I was talking about earlier. Argh.

Hey, there's a reason I'm still on forums in the age of social media :)
 