The AI thread

Yeah, I took it as some sort of mathematical workaround to extract private keys, the theory being that the Russians were still (in the movie universe) using symmetric keys (and we weren't? but we were.)

Anyway, suspend disbelief.

RSA and its ilk were under an export ban at the time (I remember actually reading about it as a teen learning about PGP). Looking it up, GOST R 34.11-94's public keys share more with elliptic-curve public-key cryptography, but documentation on the older encryption is pretty poor on the English-speaking web. Janek also explicitly mentions the number field sieve, which makes sense in the context of trying to break RSA, but much less so for ECC. So I think Greg's claim holds up, in context. Which is enough.

Behind-the-scenes material confirms it was discussions with experts about RSA, and what it would take to break it completely, that inspired the black box. And it wasn't until 1994 that Shor's paper demonstrated the same algorithm could attack both RSA and ECC. So sure, it's not perfect, but it's pretty damn good for something aimed at a general audience: it's digestible, but if you do pick it apart, it's not completely hand-wavy. I like it.
 
I think it's even worse... this is an Overseer in all but name. Because this will get used for those minimum wage jobs to determine who to cut.

Why aren't you wearing more pieces of flair?

I worked at McDonald’s during one summer in high school. I worked the cash register and fry station when needed. Never flipped a burger.

One day I had been working for 8 hours and the assistant manager wouldn't let me take lunch. When I finally got done with my shift, I drove over to Burger King, in uniform. I walked in, bought a sack of Whopper Jr.'s, and drove back to the McDonald's. I stood just outside the entrance (it was a stand-alone restaurant inside a bigger building, in Woodbury Commons, a giant outlet mall that had just opened, so the entrance was indoors and everyone working in the McDonald's could see me; the entrance was a big wide opening, like the gate at the front of a store in the mall).

Anyway, I started asking every customer coming in if they wanted free Whopper Jr’s, and I handed all of them out.

Two things came of this.

First, the following Monday, the owner started to give me a stern talking-to, but immediately shut up when I mentioned NY child labor laws. The assistant manager was fired.

Second, I became a legend in my high school, since everyone who worked there was in the same grade as me.
 
I worked at McDonald’s during one summer in high school. I worked the cash register and fry station when needed. Never flipped a burger.

Mine was an Arby's. I couldn't even work the back line because you had to be 18 to operate the deli slicer. Not sure if that's still true 30 years later. Absolutely here for a story of someone leveraging the law for good.

But HR teams already go looking for small things to fire people over when drama shows up, and tools like this make that even easier. "Oh, I didn't fire this gal because she reported sexual harassment, but because the AI said she said 'thank you' 10% less often than any other employee."
 
Cliff being a lawyer even back in high school.
I guess we are lucky that you took a detour into chip design, otherwise I'd be running an IA-64 machine at work now.
I didn’t know much about the law other than that after 4 hours they had to let me take a lunch break.

The only reason I was working there was because you needed an employer to sign the form that entitled you to a parking spot at the high school. So my plan all along was to quit as soon as senior year started, and drive my 1973 Duster in style to school instead of having to take the bus. I didn’t really care if I got fired.

I’ve had a lot of jobs in my life - substitute paperboy, 13-year-old computer programmer for an electrical contractor, IBEW Local 3 apprentice, NY Dept. of Health programmer for the epidemiology department - converting code from Fortran to C and writing an app to predict cancer from smokestack emissions, intern at WITCO, a company that owns oil refineries, free rent from my landlord in grad school for helping him with his harebrained invention (involving me writing dBase stuff), teaching assistant, web designer for a local college, research assistant on a DARPA contract, the CPU design jobs, now law. But I’ll always appreciate that job at McDonald’s for teaching me to respect service workers.

And you’re welcome for not having Itaniums in your laptop.
 


Super interesting thread on a pre-publication paper (not peer reviewed) comparing LLM and human belief systems - basically, have LLMs construct belief systems for synthetic people (i.e., defined by demographics or a belief trait) and compare their answers to survey questions against those of actual people matching said characteristics.

Large language models (LLMs) are increasingly used to stand in for survey respondents, often by conditioning on demographic “personas” to simulate diverse publics. That is, practitioners propose prompting an LLM to answer a survey question as if it were the description of the person it is given. Comparing responses from a large number of models to a high-quality representative sample of the US population, we show that persona-based prompting produces political belief systems that are unrealistically coherent. Real people commonly hold cross-cutting views that do not fit neatly on a single ideological scale like left or right; LLM personas instead generate answers that are tightly aligned, strongly one-dimensional, and more predictable from demographics. This matters because many applications treat synthetic surveys as substitutes for human samples. Our results suggest that persona-conditioned LLM data can systematically distort the messy character of mass opinion and overstate how consistently demographics map onto ideology.

Note this is different from using an LLM to summarize or categorize written survey responses from actual humans, which I have also seen. I was not aware of people attempting to use LLMs to simulate responses in lieu of asking actual humans in surveys, however.
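For anyone who hasn't seen the practice in the wild, here is a minimal sketch of what persona-conditioned prompting looks like. The demographic fields and the survey question are hypothetical examples, and the actual model call is left out - this only shows how a persona gets rendered into the prompt the paper is critiquing.

```python
# Minimal sketch of persona-conditioned survey prompting.
# The persona fields and question are made-up examples; a real
# pipeline would send the resulting prompt to an LLM API.

def build_persona_prompt(persona: dict, question: str) -> str:
    """Render a demographic persona into a survey-style prompt."""
    description = ", ".join(f"{k}: {v}" for k, v in persona.items())
    return (
        f"You are answering a survey as the following person "
        f"({description}). Answer in character.\n\n"
        f"Question: {question}"
    )

persona = {"age": 45, "education": "high school", "region": "Midwest"}
prompt = build_persona_prompt(
    persona, "Do you support raising the minimum wage?"
)
print(prompt)
```

The paper's point, as I read it, is that answers generated this way end up far more ideologically consistent than the answers of real respondents who match the same demographic fields.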
 


Note this is different from using an LLM to summarize or categorize written survey responses from actual humans, which I have also seen. I was not aware of people attempting to use LLMs to simulate responses in lieu of asking actual humans in surveys, however.
It's a bad trend even though I understand the motivation. In my field ethical constraints hobble most experiments. One has to use ingenious and indirect methods to get at the phenomena of interest, and deception - so that participants don't know what you're really measuring - must be well justified and used sparingly (and of course the deception must always be revealed to participants afterward).

Hence the temptation to use 'participants' that can't be harmed.

But - seriously?!

I look forward to seeing the finished paper.
 
It's a bad trend even though I understand the motivation. In my field ethical constraints hobble most experiments. One has to use ingenious and indirect methods to get at the phenomena of interest, and deception - so that participants don't know what you're really measuring - must be well justified and used sparingly (and of course the deception must always be revealed to participants afterward).

Hence the temptation to use 'participants' that can't be harmed.

But - seriously?!

I look forward to seeing the finished paper.
He links to the preprint paper at the end if you want to get a head start:

 