The AI thread

MS is going to spend big on "AI" after their investments pay off.

There are other ways to build "AI" systems in which lasers flow through non-linear media, instantiating dynamic logic gates.

These systems seem to be somewhat (or a lot) more efficient than compute-card SCs, but they are kind of nascent, requiring more development before they can be implemented at whichever levels they would be most useful. We obviously need "AI" right now because it is vital for surplussing white collar workers and using complex inference to get us to buy more stuff.
 
Because people are frickin' morons (the common clay of the new West) who simply do not comprehend lens effects. Also, "AI" has become a convenient whipping-boy. If it looks a bit off, AI!

Some people are also trying to appear smart by "being able to pick AI" that other people can't.
 
So.... I went down a rabbit hole...



I've had 3 fairly large glasses of rum on the rocks (it's now midnight) and am on the internet talking to a bot about hooking up Siri Shortcuts to ChatGPT via shell scripts and API calls to triage level 1 helpdesk calls.

The scary thing is.... I built a Teams bot today. I built a level 1 helpdesk triage bot in, like.... an hour.

This AI shit doesn't actually take much effort to get SIGNIFICANT results.


Anyone on the fence:

  1. Get a paid account
  2. Do a couple of hours of R&D into prompts
  3. Profit is actually step 3?
 
They hook you into the paid account, which I opted to do. As much as I despise AI for things like automatically creating art or taking the humanity out of reading/writing "summaries", it has its uses.

I initially tested it out after writing my own script for a YouTube video. It came back, recognised what I was trying to do, and broke it all down into several constituent parts that I had no idea I needed: things like "at this point insert a video with ethereal music and end it at such and such point". It was an eye-opening experience that has made the content so much better.

In the end I'm still using 95% of my own written words, but the structuring and filling in gaps with music/video types is invaluable. I ran out of free credits and bought it for 1 year, and it's totally worth it for this to me.
 
Yeah, I was a little shocked last semester when I decided to go back to school to finish a Master's. I have a learning disability that was diagnosed long ago (so I don't have the documentation), and I am not willing to drop the cash to get retested for any special consideration. So in comes AI to help me digest these absolutely boring and dense Harvard Business Review articles. Knowing me, I could read something for 30 minutes and not be able to tell you a single thing I just read. Not only will it summarize an article for me before I read it (because I do not trust it), but making it into a podcast makes it so much more digestible and entertaining for me. I was shocked at how quickly it took a 10-page HBR article and made a somewhat interesting 13-minute podcast between two people.
 

I wonder how much of this type of thing is going to lead to creative stagnation: the end product is great, but everybody's output is going to be samey. At the same time, over the past decades it's become expected that creative people do everything themselves that normally took a team of people, and that is well outside the interest or expertise of the artist. You're expected to take your drone footage and release it like an HBO documentary team was behind it. So I can see where this can help with that expectation.

I heard somebody talk about attempting to use AI as a movie script writing assistant, and he said it started out great but then ran into frustrating memory limitations. I don't know if I am explaining this correctly, but it sounded like writing software code where the model can only retain something like the last 20 lines: once you get to the 21st line, it only references back to the 2nd line and not the 1st. Without that 1st line everything just breaks. By his estimate he got about 40% done and then it just became useless. It sounded like it wasn't a pay-tier limitation; it was a tech limitation.
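What he hit is the model's context window. A toy way to picture it in shell (the numbers are made up for illustration; real limits are measured in tokens, not lines):

```shell
#!/bin/sh
# Toy picture of a fixed context window: the model only "sees" the
# last N items of the conversation, so early material silently drops off.
N=20
history=$(mktemp)
for i in $(seq 1 25); do
  echo "line $i" >> "$history"
done
# What actually reaches the model once you're past line N:
tail -n "$N" "$history"   # starts at "line 6" -- lines 1 to 5 are gone
rm -f "$history"
```

Anything that scrolls out of that window (the opening scenes of his script, say) is simply not part of the input any more, which matches his "40% done and then useless" experience.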
 
The scary thing is.... I built a Teams bot today.

How long would it take you to harvest video from the department manager's laptop camera and train a LLM to generate a real-time animation of the manager that you could drop into a video conference, in order to find out how long it would take the team to figure out it was a fake?
 
Legitimately, I believe you can overlay somebody's face on your own in real time with an RTX 3090-class card now, using only a photo of their face. No webcam shenanigans required.

Not using ChatGPT, but other video tools. It's been this way for well over 12 months.
 
So what did I do today?

Started experimenting with the ChatGPT API.

Essentially you can do it via shell script, or from within Shortcuts with a URL handler.

Made a shortcut to send the contents of the clipboard to ChatGPT... and display a response.

Screenshot 2025-08-01 at 2.41.52 pm.png


(The response talking about a shortcut error is because I just randomly copied something into it from when I was trying to figure out how to export shortcut files to something I can edit in a text editor rather than drag/drop. The Shortcuts app really sucks, but hey... I was wanting to see how to hook it up to GPT.)


Why? Can't you just use Siri or the ChatGPT app?

As a proof of concept for talking to ChatGPT via the API. I also used a shell script to ask ChatGPT questions via the CLI. It's as simple as this:


Code:
#!/bin/zsh
# Don't hard-code a real key; "xxxx" is a placeholder.
OPENAI_API_KEY="xxxx"

prompt="$1"
if [ -z "$prompt" ]; then
  echo "No query?"
  exit 1
fi

# Build the JSON with jq so quotes or newlines in the prompt can't break the payload
payload=$(jq -n --arg p "$prompt" \
  '{model: "gpt-4o-mini", messages: [{role: "user", content: $p}], temperature: 0.2}')

response=$(curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$payload" | jq -r '.choices[0].message.content')

echo "$response"


If I can send ChatGPT shit via a script, I can do shell logic using ChatGPT, log analysis using ChatGPT, turn error codes into human-readable errors, etc.
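The log-analysis idea can be as simple as stuffing the tail of a log into the prompt. A sketch, assuming the script above is saved as ./ask.sh (that filename is my invention):

```shell
#!/bin/zsh
# Sketch: build a log-analysis prompt from the last few lines of a log.
# A temp file is used here so the example is self-contained;
# point tail at a real log instead.
log=$(mktemp)
printf 'ERROR: disk full on /dev/sda1\nWARN: retrying write\n' > "$log"

excerpt=$(tail -n 5 "$log")
prompt="Summarise these log lines in one sentence: $excerpt"
echo "$prompt"
# ./ask.sh "$prompt"   # hypothetical: send it off via the script above
rm -f "$log"
```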

Also, OpenAI's API format has become a de facto standard. Same method to send to an on-prem model, etc. So anywhere I mention ChatGPT above could be a local host running Ollama (I think) or whatever. It's just that I know ChatGPT works and I don't need to set up a local LLM to play yet :)
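For what it's worth, Ollama does expose an OpenAI-compatible endpoint, so in principle the only things that change in the script above are the base URL, the model name, and the key (which Ollama ignores). A sketch of the swapped-in values, assuming a default local install on port 11434 and a pulled model called llama3:

```shell
#!/bin/zsh
# Same script, different target: an OpenAI-compatible local server.
BASE_URL="http://localhost:11434/v1"  # Ollama's default, instead of https://api.openai.com/v1
MODEL="llama3"                        # whatever model you've pulled locally
OPENAI_API_KEY="ollama"               # ignored by Ollama, but the header still needs a value

# The curl line then becomes:
#   curl -s "$BASE_URL/chat/completions" -H "Authorization: Bearer $OPENAI_API_KEY" ...
echo "$BASE_URL/chat/completions"
```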
 
Before I left work for the day I got this working with a locally hosted Ollama instance, sending queries off to a locally hosted model. :)
 