Swift / Apple Development Chat

Andropov
^ Interesting. Not related to this thread but i checked out ChatGPT late last year (under my social sciences hat). Initial enthusiasm - ’gee, this is like having a quite good intern/research assistant on tap’ - rapidly drained away when i encountered errors and repetitive answers that missed important differences in questions. I found exactly what you did in your last paragraph
It's more like having a super-confident doctor that never actually went to med school and faked its degree instead.

  • Then it decided it was okay to recreate all child view models (which tends to break things in SwiftUI through over-invalidation)
Yep, I've asked ChatGPT a few SwiftUI questions and saw the same things all over the place. Probably a symptom that many people are doing it, though.

Swift:
func controllerDidChangeContent(_ controller: NSFetchedResultsController<NSFetchRequestResult>) {
    children = controller.fetchedObjects?.compactMap { ChildViewModel(child: $0) } ?? []
}

(As an aside, I kinda hate this code as it uses trailing closures in a way that makes the code harder to read.)
It is in fact quite hard to mentally parse that code.
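For illustration, a hedged rewrite of the same handler without the trailing closure (same behavior; just naming the closure parameter and pulling the fallback out):
Swift:
func controllerDidChangeContent(_ controller: NSFetchedResultsController<NSFetchRequestResult>) {
    // Same logic, but the transformation now reads left to right.
    let fetchedObjects = controller.fetchedObjects ?? []
    children = fetchedObjects.compactMap({ object in ChildViewModel(child: object) })
}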

Uh... what? No? That's not how initializers work?

This is an interesting tool for probing existing knowledge, and it's faster/easier than swimming through the SEO gray goo that search engines are quickly becoming. But it can just as easily lead you into a different kind of gray goo: there's this sort of air of authority to everything ChatGPT puts out, yet it's about as useful as a new hire out of college working in an unfamiliar language, trawling through Stack Overflow.
Yeah, a lot of the answers are like that. I actually had to re-read the sentence to figure out what was wrong.

I tried to ask ChatGPT how to programmatically scroll to the bottom of a List in SwiftUI (this just came up at work today). ChatGPT proposed:
Swift:
struct ContentView: View {
    @State private var messages: [String] = []
    
    var body: some View {
        VStack {
            List(messages, id: \.self) { message in
                Text(message)
            }
            HStack {
                TextField("Enter a message", text: $newMessage)
                Button(action: {
                    messages.append(newMessage)
                    newMessage = ""
                    scrollToBottom()
                }) {
                    Text("Send")
                }
            }
        }
        .onAppear {
            scrollToBottom() // Scroll to bottom when the view appears
        }
    }
    
    private var newMessage = ""
    
    private func scrollToBottom() {
        // Use ScrollViewReader to scroll to the bottom of the list
        ScrollViewReader { scrollView in
            scrollView.scrollTo(messages.last, anchor: .bottom)
        }
    }
}
ScrollViewReaders do not work like that. At all. It's comically bad. Also, the List probably won't work as-is either, since String doesn't conform to Identifiable and you absolutely need that to use scrollTo... The explanation of why anchor: .bottom is used was also wrong.
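For contrast, a minimal sketch of how ScrollViewReader is actually meant to be used (the view and property names are illustrative): it wraps the scrollable content, and you call scrollTo on the proxy it hands you, targeting a Hashable id attached to the rows.
Swift:
import SwiftUI

struct MessagesView: View {
    @State private var messages: [String] = []
    @State private var newMessage = ""

    var body: some View {
        ScrollViewReader { proxy in
            VStack {
                List(messages.indices, id: \.self) { index in
                    Text(messages[index])
                        .id(index) // Stable id the proxy can scroll to
                }
                HStack {
                    TextField("Enter a message", text: $newMessage)
                    Button("Send") {
                        messages.append(newMessage)
                        newMessage = ""
                        // scrollTo is called on the proxy, inside an action
                        if let lastIndex = messages.indices.last {
                            withAnimation {
                                proxy.scrollTo(lastIndex, anchor: .bottom)
                            }
                        }
                    }
                }
            }
        }
    }
}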

I tried a couple more examples but it's clear that it's not useful at all for SwiftUI.

It will improve, but the worry I have is that with the big recent publicity, there are folks (Microsoft being one of them) pushing to adapt this into their workflows. But if you aren't able to recognize that its output is inherently untrustworthy, you can find yourself going down the wrong path and having to start over once you realize the answer is wrong. Even worse, as people become dependent on tools like ChatGPT, the issue of citing sources gets harder, not easier, and so checking things gets more difficult. We make ourselves dependent on a tool that doesn't reason the way we do, in order to reason about topics and issues. Woof.
I'm still very skeptical that Large Language Models will surpass their many pitfalls any time soon. I think we're at the limit of what can be achieved without higher order thinking, and ChatGPT clearly doesn't have that.

Main actor-isolated property 'id' cannot be used to satisfy nonisolated protocol requirement

Because Identifiable is a nonisolated protocol, but it's being used in a main actor context, the compiler cannot resolve the conflict. This makes sense, since Identifiable shouldn't be an isolated protocol in practice (it's a very generic concept), but it does mean I can't provide an implementation of Identifiable and mark the class as @MainActor at the same time. This is the sort of pain I'm thinking of when it comes to fixing this stuff.
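A minimal sketch of the conflict, assuming a strict-concurrency build (the class name is illustrative):
Swift:
import Foundation

@MainActor
final class ItemViewModel: Identifiable {
    // Under strict concurrency checking this property can trigger:
    // "Main actor-isolated property 'id' cannot be used to satisfy
    //  nonisolated protocol requirement"
    let id = UUID()

    // One workaround is opting the property out of isolation:
    // nonisolated let id = UUID()
}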

So realistically, you probably want to mark the async functions in an Observable Object as MainActor, rather than the whole type. Meanwhile things like UIView/View/etc should absolutely inherit MainActor for the whole type where possible.
Hm. I don't use protocols very much in the View Models themselves though. I think. Maybe after turning on the strict concurrency checks I'll discover that I actually do. Haven't had much time to play with it yet. But yeah, I can see that in some scenarios you'd only annotate some functions/properties as @MainActor in there.
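A hedged sketch of what that per-function annotation might look like (names are illustrative): only the member that publishes to the UI gets isolated.
Swift:
import SwiftUI

final class SearchViewModel: ObservableObject {
    @Published private(set) var results: [String] = []

    // Isolated to the main actor, so mutating the published
    // property is safe for SwiftUI.
    @MainActor
    func refresh() async {
        results = await fetchResults()
    }

    // Free to run off the main actor.
    private func fetchResults() async -> [String] {
        ["result"] // Illustrative placeholder for real work
    }
}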
 

Andropov
BTW, a lot of people were talking about this blog post on Twitter/Mastodon today. It was an interesting read. I wonder if some programming patterns have a measurable impact on performance in real apps that are not compute-heavy (like increased power consumption).

For what it's worth, I think he takes the optimization too far, but I like the initial switch-based version more than the polymorphism-based code.
 

Nycturne
Yep, I've asked ChatGPT a few SwiftUI questions and saw the same things all over the place. Probably a symptom that many people are doing it, though.

Very likely. That scroll view example is something else though. Wow.

It is in fact quite hard to mentally parse that code.

Yup, you’ll frequently see me avoid trailing closures in my map/filter type calls in my code, just to make it clearer what is going on.

I'm still very skeptical that Large Language Models will surpass their many pitfalls any time soon. I think we're at the limit of what can be achieved without higher order thinking, and ChatGPT clearly doesn't have that.

What I think I mean is that there will be a point where something like ChatGPT will be “good enough” that people start to trust it, even though, fundamentally, you’re probably not wrong about its pitfalls. So we’ll get folks who settle into the mindset of “this chatbot told me this was the truth, so this other person is probably wrong.” I also worry that, much like SEO, someone could learn how to mess with the training data to get a desired result.

I suppose we could try to brute force things to the point where we start building something that looks more like general AI in size and scope. Just look at the sheer growth of the network ChatGPT-4 is poised to bring, and I assume their goals are even more grandiose.

Hm. I don't use protocols very much in the View Models themselves though. I think. Maybe after turning on the strict concurrency checks I'll discover that I actually do. Haven't had much time to play with it yet. But yeah, I can see that in some scenarios you'd only annotate some functions/properties as @MainActor in there.

This is a specific response I had to the issue of regenerating view models from a CoreData fetch. There’s the clearly bad approach I took which is to return the CoreData objects as the children and make views wrap them into ViewModels, which created all sorts of problems for me that I’ve now got to fix. There’s also the approach that you should return view models instead of the model objects. But in the latter case, if you naively re-create ViewModels on a fresh fetch like ChatGPT suggested, then things get messy as the auto-generated Id of the view model changes on each fetch. So SwiftUI thinks the entire list has been replaced. No bueno. So either you write a bunch of code to track the changes, or you can make the view model identifiable, and have it pass along the CoreData Id to SwiftUI (or some other deterministic Id from the underlying model) so that when you do recreate a ViewModel fresh after the fetch, SwiftUI sees it as the same entity as it should.
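A hedged sketch of that deterministic-id idea (type and attribute names are illustrative):
Swift:
import CoreData

final class ChildViewModel: Identifiable {
    // Reuse CoreData's stable object id so a view model re-created
    // after a fresh fetch is seen by SwiftUI as the same row.
    let id: NSManagedObjectID
    let name: String

    init(child: NSManagedObject) {
        self.id = child.objectID
        self.name = child.value(forKey: "name") as? String ?? ""
    }
}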

It’s also the case that naively mapping the NSArray you get from the NSFetchedResultsController to a Swift array steals some performance for large fetch results (NSFetchedResultsController uses a special array that can page in results). And since my fetches are for things like “all items in your library”, that paging matters.

Maybe I’m just missing something? My ViewModels tend to be very low state, but I suspect I’m probably still giving away some performance by not caching ViewModels so I am not resubscribing to publishers every time the list is iterated. This update is still in progress, so I haven’t had a chance to address the need for caching yet. It’s possible I still need to go down that road of tracking what gets added/removed during a new fetch.
 

ArgoDuck
BTW, a lot of people were talking about this blog post on Twitter/Mastodon today. It was an interesting read. I wonder if some programming patterns have a measurable impact on performance in real apps that are not compute-heavy (like increased power consumption).

For what it's worth, I think he takes the optimization too far, but I like the initial switch-based version more than the polymorphism-based code.
Yes an interesting read. Because my code tends to be compute heavy i have sometimes unrolled it for performance reasons, but mostly - in my OOP/C++ days - i liked the ease of extending code offered by inheritance and polymorphism (with protocols i‘m starting to think about this differently now). Maintainability, in other words

But coming back to ChatGPT and Nycturne’s "something like" it (but better)…i guess maintainability, extensibility and even readability would no longer be our concern?! Or its. And so, i wonder what our ChatGPT+-written code would optimize for and look like?
 

Nycturne
But coming back to ChatGPT and Nycturne’s "something like" it (but better)…i guess maintainability, extensibility and even readability would no longer be our concern?! Or its. And so, i wonder what our ChatGPT+-written code would optimize for and look like?

I think the fundamental issue is that for certain types of development, performance isn’t even favored. It’s the whole “bang for buck”. React Native, Electron and the like.

So while I wouldn’t expect machine-written code to be performant, as it’s just going to do what it “learns” from the training data, it will be fast to write, and just as bad as what a baseline developer is writing. And if a lot of people are, say, writing JavaScript for everything, that’s ironically where the best results are going to be.

But fundamentally, you still need people around creating in order to push the state of the art.
 

ArgoDuck
^ i agree, i think. Sensible, moderate views of ChatGPT’s (CG) impact are it’s just another office tool ultimately, for example offloading the more boring and straightforward parts of writing a scientific paper. As usual, right now hype exceeds reality.

OTOH I don’t recall whether CG is adversarial or just a pure LLM? If adversarial, we could presumably optimize some better, later version of it for desired outcomes, such as performance, ‘quality’ however we define that and so on.

Completely agree with your last sentence. Things like CG won’t generate meaningful questions or novel ideas anytime soon. Although, i recall reading somewhere (since forgotten) a few years ago of some who would like to replace scientific work in precisely this way!
 

Andropov
Very likely. That scroll view example is something else though. Wow.
Something funny happened today, quite relevant to all of this. After the ScrollView example I tried a few more prompts, and one of them sent me down a concurrency-related rabbit hole. ChatGPT seems to lean in favor of Grand Central Dispatch over async/await, probably because there's more of it in the training data. I tried to force ChatGPT to use async/await only, but it kept wrapping async/await code with internal calls to DispatchQueues. I then tried to get it to rewrite everything using only NSLock, thinking the opposite problem might apply (NSLock being too old for enough examples to be in the training data). I got a good idea of which flavors of bad concurrency code ChatGPT writes.

And then, just today, I was looking through one of my coworker's Pull Requests. And I saw something that looked *exactly* like the kind of bad concurrency code ChatGPT was outputting. I was 100% sure at first, but I'm wondering now if I'm imagining things because I had just tried ChatGPT. The code was something like this (some variable names have been changed):
Swift:
func clearData(data: SomeData) {
    let dispatchSemaphore = DispatchSemaphore(value: 0)
    let backgroundQueue = DispatchQueue(label: "background_queue", qos: .background)

    backgroundQueue.async { [weak self] in
        try? self?.deleteSomeStuff(data)
        dispatchSemaphore.signal()
        _ = dispatchSemaphore.wait(timeout: DispatchTime.now())
    }
}
It's just wrong on so many levels. The scope of some variables doesn't make any sense, the semaphore isn't needed at all when there's a serial queue there... and even without the queue, it still wouldn't make sense to do nothing with the semaphore until *after* the resource has been used. It certainly doesn't make any sense to call signal() and immediately wait(). I can't fathom how a human could have written this. Again, I might be imagining things, but... IDK, it felt suspicious. Those kinds of meaningless/unused variables were quite common when I tried ChatGPT for SwiftUI, and this looks exactly like that.
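For contrast, a hedged sketch of what the snippet presumably intends (SomeData and deleteSomeStuff are from the PR snippet above); the serial queue alone already orders the work, so the semaphore can simply go:
Swift:
func clearData(data: SomeData) {
    let backgroundQueue = DispatchQueue(label: "background_queue", qos: .background)

    backgroundQueue.async { [weak self] in
        // Nothing waits on this work, so no semaphore is needed.
        try? self?.deleteSomeStuff(data)
    }
}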

What I think I mean is that there will be a point where something like ChatGPT will be “good enough” that people start to trust it, even though, fundamentally, you’re probably not wrong about its pitfalls. So we’ll get folks who settle into the mindset of “this chatbot told me this was the truth, so this other person is probably wrong.” I also worry that, much like SEO, someone could learn how to mess with the training data to get a desired result.
Oh yeah, that's bound to happen, and it's probably not too far away.

^ i agree, i think. Sensible, moderate views of ChatGPT’s (CG) impact are it’s just another office tool ultimately, for example offloading the more boring and straightforward parts of writing a scientific paper. As usual, right now hype exceeds reality.
There's already a lot of people on Twitter claiming to be using ChatGPT to compose emails and things like that. Personally, I'm horrified by the examples I've seen. The whole idea of formal language, and even jargon, is to be succinct: to communicate complex topics without having to explain every concept. These people seem to be doing the opposite: explaining a simple idea (the prompt) using a lot of words, often devoid of meaning.
 

Nycturne
I finally got enough time to read through the post.

BTW, a lot of people were talking about this blog post on Twitter/Mastodon today. It was an interesting read. I wonder if some programming patterns have a measurable impact in performance on real apps that are not compute-heavy (like increased power consumption).
I believe they do have a measurable impact. But the question is: what is the cost of reducing that impact?

One YouTube channel talking about software engineering concepts puts forward the idea that what engineers will do is maintain a balance between maintainability, performance, and velocity. Skewing towards one necessarily pulls away from the others to a degree, and the key is finding out when and where to strike one balance vs another in a project.

Some projects demand a better focus on raw performance than others, but in general I think what has been done with more compute power is to enable more abstractions that let us build more complicated systems. Machine-generated code is just another abstraction in that sense. But it also needs to be able to produce useful code first.

Yes an interesting read. Because my code tends to be compute heavy i have sometimes unrolled it for performance reasons, but mostly - in my OOP/C++ days - i liked the ease of extending code offered by inheritance and polymorphism (with protocols i‘m starting to think about this differently now). Maintainability, in other words

I think you can pry POP from my cold dead hands at this point, especially when paired with generics and extensions. The sort of nonsense I’ve had to shoehorn into projects that used inheritance for polymorphism is something I’ve never had to do with Swift. Inheriting from a base class is not something I miss, while still maintaining DRY principles. I’m not a follower of all the principles of “clean code” myself, but instead loosely follow them where they make sense. Functions getting long are harder to reason about, and watching someone repeat the same 3 lines of code that could easily go into an inline function isn’t great either.

In the author’s example, the vtable has a cost, which of course it does. Function calls can be expensive to set up, and table dispatch adds a level of indirection on every call, just making it worse. POP unfortunately doesn’t improve much on this, since protocols still rely on table dispatch, but my issue is less with polymorphism and more with inheritance creating some wonky scenarios that conformances just handle more cleanly.
 

Andropov
I like to write code in a way that the compiler doesn't let me forget basic things.
Another improvement point I have noticed: throwing functions. I've found try? to be overused in the codebases I work with, so a lot of errors were handled correctly at first but ultimately converted to optionals, immediately causing an early exit in a guard without ever showing visible feedback to the user. Using throwing functions when a function can fail avoids a lot of these optionals, which in turn avoids a lot of the guard let fooVar else { return } lines that are written almost mindlessly and rarely handle errors in a way that lets the user notice something has gone wrong.
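A hedged before/after sketch of that pattern (all names illustrative):
Swift:
enum ProfileError: Error {
    case notFound
}

func fetchProfile() throws -> String {
    throw ProfileError.notFound
}

// The try? style: the error is flattened into an optional and the
// function silently bails out, so the user never learns what failed.
func loadProfileQuietly() {
    guard let profile = try? fetchProfile() else { return }
    print(profile)
}

// The throwing style: the failure propagates up the call stack until
// some layer can actually present it.
func loadProfile() throws {
    let profile = try fetchProfile()
    print(profile)
}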

This is a specific response I had to the issue of regenerating view models from a CoreData fetch. There’s the clearly bad approach I took which is to return the CoreData objects as the children and make views wrap them into ViewModels, which created all sorts of problems for me that I’ve now got to fix. There’s also the approach that you should return view models instead of the model objects. But in the latter case, if you naively re-create ViewModels on a fresh fetch like ChatGPT suggested, then things get messy as the auto-generated Id of the view model changes on each fetch. So SwiftUI thinks the entire list has been replaced. No bueno. So either you write a bunch of code to track the changes, or you can make the view model identifiable, and have it pass along the CoreData Id to SwiftUI (or some other deterministic Id from the underlying model) so that when you do recreate a ViewModel fresh after the fetch, SwiftUI sees it as the same entity as it should.

It’s also the case that naively mapping the NSArray you get from the NSFetchedResultsController to a Swift array steals some performance for large fetch results (NSFetchedResultsController uses a special array that can page in results). And since my fetches are for things like “all items in your library”, that paging matters.

Maybe I’m just missing something? My ViewModels tend to be very low state, but I suspect I’m probably still giving away some performance by not caching ViewModels so I am not resubscribing to publishers every time the list is iterated. This update is still in progress, so I haven’t had a chance to address the need for caching yet. It’s possible I still need to go down that road of tracking what gets added/removed during a new fetch.
Oh, I mostly got around these problems because I had most of the views that display data from a CoreData object use the model directly (no View Models). The CoreData objects themselves can trivially conform to Identifiable, so doing:
Swift:
List(coreDataObjects) { coreDataObject in
    RowView(coreDataObject)
}
Avoids initializing anything other than the changed rows. I can get away with this because most of these views are final UI components (like a single image), so I don't lose much by switching from MVVM to MV.

I believe they do have a measurable impact. But the question is: what is the cost of reducing that impact?

One YouTube channel talking about software engineering concepts puts forward the idea that what engineers will do is maintain a balance between maintainability, performance, and velocity. Skewing towards one necessarily pulls away from the others to a degree, and the key is finding out when and where to strike one balance vs another in a project.

Some projects demand a better focus on raw performance than others, but in general I think what has been done with more compute power is to enable more abstractions that let us build more complicated systems. Machine-generated code is just another abstraction in that sense. But it also needs to be able to produce useful code first.
Yup, it's all about balance. And I do agree that skewing towards one necessarily pulls away from the others. That's why I believe it's so important to identify the right abstraction for a given task and avoid either abstracting too little (and needing to copy-paste code all around) or abstracting too much (and adding a ton of unused flexibility at the cost of performance and readability). No one argues with the former, but the latter...

I’m not a follower of all the principles of “clean code” myself, but instead loosely follow them where they make sense. Functions getting long are harder to reason about, and watching someone repeat the same 3 lines of code that could easily go into an inline function isn’t great either.
Well that's the right approach :) Nothing worse than blindly following "clean code", "SOLID" or any of that without understanding why those principles were created and what they are intended to avoid.

In the author’s example, the vtable has a cost, which of course it does. Function calls can be expensive to set up, and table dispatch adds a level of indirection on every call, just making it worse. POP unfortunately doesn’t improve much on this, since protocols still rely on table dispatch, but my issue is less with polymorphism and more with inheritance creating some wonky scenarios that conformances just handle more cleanly.
For non-existential types (anything *not* tagged with any, basically) Swift uses static dispatch. If I remember correctly, in one of this year's WWDC talks Apple engineers put some emphasis on how it's preferred to use non-existential types unless you need the added flexibility, to avoid paying for dynamic dispatch where static dispatch would do. And those cases are, in my experience, not all that common. I'd say the vast majority of the code I've written that conforms to protocols is using non-existential types, so the vtable can be skipped.

The example in the blog post (a list of different shapes) would still require dynamic dispatch in Swift, though. In most cases, anyway. If you call totalArea(shapes: [...]) with a list of shapes all of the same concrete type (known at compile time), the compiler would be able to optimize the vtable out.
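A hedged sketch of both situations (the Shape protocol and structs are illustrative stand-ins for the blog post's types):
Swift:
protocol Shape {
    func area() -> Double
}

struct Square: Shape {
    let side: Double
    func area() -> Double { side * side }
}

struct Circle: Shape {
    let radius: Double
    func area() -> Double { Double.pi * radius * radius }
}

// Heterogeneous list: the existential [any Shape] is required, and
// each area() call is dispatched dynamically via the witness table.
func totalArea(shapes: [any Shape]) -> Double {
    shapes.reduce(0) { $0 + $1.area() }
}

// Homogeneous list: the concrete type is known at compile time, so
// the compiler can specialize this and dispatch statically.
func totalArea<S: Shape>(shapes: [S]) -> Double {
    shapes.reduce(0) { $0 + $1.area() }
}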

I think it's the perfect balance of flexibility vs performance. The compiler can make lots of optimizations, unless you actively use any. And then you can easily see where you're compromising performance.
 

Nycturne
Another improvement point I have noticed: throwing functions. I've found try? to be overused in the codebases I work with, so a lot of errors were handled correctly at first but ultimately converted to optionals, immediately causing an early exit in a guard without ever showing visible feedback to the user. Using throwing functions when a function can fail avoids a lot of these optionals, which in turn avoids a lot of the guard let fooVar else { return } lines that are written almost mindlessly and rarely handle errors in a way that lets the user notice something has gone wrong.

I think Swift’s error handling, while more correct/explicit than C++, is tedious to implement. The end result is that people will try to cut down on the work they have to do when they are in a hurry.

That said, I use it because it’s the most natural way to do many things like you describe.

Oh, I mostly got around these problems because I had most of the views that display data from a CoreData object use the model directly (no View Models). The CoreData objects themselves can trivially conform to Identifiable, so doing:
Swift:
List(coreDataObjects) { coreDataObject in
    RowView(coreDataObject)
}
Avoids initializing anything other than the changed rows. I can get away with this because most of these views are final UI components (like a single image), so I don't lose much by switching from MVVM to MV.

Yeah, I just got tired of repeating myself or having to extend the model with view-oriented protocol extensions for certain types to enable a generic view.

Yup, it's all about balance. And I do agree that skewing towards one necessarily pulls away from the others. That's why I believe it's so important to identify the right abstraction for a given task and avoid either abstracting too little (and needing to copy-paste code all around) or abstracting too much (and adding a ton of unused flexibility at the cost of performance and readability). No one argues with the former, but the latter...

I am a fan of the “clean as you go” approach because of this. Focus on what you need the code to do, and let the code tell you what it needs as it comes together. That said, I tend to be the one giving “We do this 5 times in the same function“ and “it’s getting hard to read this function, can we break it up” feedback in code reviews.

For non-existential types (anything *not* tagged with any, basically) Swift uses static dispatch. If I remember correctly, in one of this year's WWDC talks Apple engineers put some emphasis on how it's preferred to use non-existential types unless you need the added flexibility, to avoid paying for dynamic dispatch where static dispatch would do. And those cases are, in my experience, not all that common. I'd say the vast majority of the code I've written that conforms to protocols is using non-existential types, so the vtable can be skipped.

You wouldn’t happen to remember the name of the talk? My understanding was that protocol-as-type was existential, so it would be good to understand why that isn’t the case in the examples you describe.
 

Andropov
I think Swift’s error handling, while more correct/explicit than C++, is tedious to implement. The end result is that people will try to cut down on the work they have to do when they are in a hurry.

That said, I use it because it’s the most natural way to do many things like you describe.
I agree that it's tedious to implement, hence why I hadn't bothered to go all the way with it. However, after bothering to let throwing functions propagate the throws through the entire call stack (instead of using Optionals)... it works really well!

I am a fan of the “clean as you go” approach because of this. Focus on what you need the code to do, and let the code tell you what it needs as it comes together. That said, I tend to be the one giving “We do this 5 times in the same function“ and “it’s getting hard to read this function, can we break it up” feedback in code reviews.
Well, ideally that's what you want in a code review. Another thing I've found to be overlooked in code reviews is API semantics, variable names... it's super important to get right, and yet I don't think I've EVER seen anyone arguing over a variable/type name. I'm now tasked at my day job with adding a few new features to a codebase where a lot of the variables and types had super generic names like "Variant", "Item", "Model", "Type"... all of that should have been at least discussed in code reviews.

You wouldn’t happen to remember the name of the talk? My understanding was that protocol-as-type was existential, so it would be good to understand why that isn’t the case in the examples you describe.
I believe it was Embrace Swift Generics, although the encouragement to use Generics instead of AnyView actually comes from Demystify SwiftUI. It actually depends on whether you implement it using any or generics/some:

Swift:
protocol Animal {
    var name: String { get }
}

func feedAnimalA(_ animal: any Animal) {
    // This uses an existential type.
    print(animal.name)
}

func feedAnimalB<T: Animal>(_ animal: T) {
    // This uses a concrete type.
    print(animal.name)
}

func feedAnimalC(_ animal: some Animal) {
    // This is the same as the function above, so it
    // should be a concrete type too.
    print(animal.name)
}
In the first function, the any keyword is not required yet, but it'll be required eventually. Either way, it *is* existential by default now. The second and third functions should resolve to concrete types and avoid dispatch through a vtable. In the example from the blog post, since you have an array of different shapes, you'd need a variable of type [Shape] (moving forward: [any Shape]). Once you're required to write any, it'll be more obvious that you're trading performance for flexibility there. It's kinda cool that, if you try to remove any from the code, you'll likely end up with the data-oriented programming approach (keeping separate arrays for each Shape type).

Also, you can apparently do this since Swift 5.7 (the compiler automatically unboxes the existential type of animal to the specific concrete type feedAnimalB() requires):
Swift:
struct Chicken: Animal { let name = "Chicken" }
struct Duck: Animal { let name = "Duck" }

let animals: [any Animal] = [Chicken(), Duck(), Duck()]

for animal in animals {
    // `animal` is `any Animal` here; the compiler opens the
    // existential to pass it to feedAnimalB.
    feedAnimalB(animal)
}
This probably involves dynamic dispatch when switching from existential to concrete types. However, I wonder if the compiler is able to do further optimizations in this particular scenario since animals is a let and its contents are known at compile time.
 

Nycturne
I agree that it's tedious to implement, hence why I hadn't bothered to go all the way with it. However, after bothering to let throwing functions propagate the throws through the entire call stack (instead of using Optionals)... it works really well!

Yeah, that’s the same reason I use it. It forces a “handle or propagate” approach so long as you avoid try? or try!. Optionals are strictly for “it’s expected that this might not exist”.

Well, ideally that's what you want in a code review. Another thing I've found to be overlooked in code reviews is API semantics, variable names... it's super important to get right, and yet I don't think I've EVER seen anyone arguing over a variable/type name. I'm now tasked at my day job with adding a few new features to a codebase where a lot of the variables and types had super generic names like "Variant", "Item", "Model", "Type"... all of that should have been at least discussed in code reviews.

What I mean is if I’m not the one bringing it up, it doesn’t happen. And it isn’t a small team.

Generally, API design in the type of projects I work on should be front-loaded, with consensus reached before the code to be checked in is written, after we’ve done some prototyping. But as teams I work on adopt the “move fast and break ****” philosophy, documentation has gotten worse rather than better.

Naming stuff I do agree with, I’ve seen some rather unhelpful names come across my screen. Usually because they had some sort of name collision and tried to think of a new name to describe a similar thing. (This is also usually where I ask if maybe this complex bit of logic with internal variables should be a utility function instead)

I believe it was Embrace Swift Generics, although the encouragement to use Generics instead of AnyView actually comes from Demystify SwiftUI. It actually depends on whether you implement it using any or generics/some:

I’ll take a look, this area of Swift is something I’ve been ignoring for a bit because of all the concurrency stuff. Interesting that you can use “some” as a way to make a generic function now. Seems cleaner for simple cases.
 

Andropov
Yeah, that’s the same reason I use it. It forces a “handle or propagate” approach so long as you avoid try? or try!. Optionals are strictly for “it’s expected that this might not exist”.
Exactly. It's kinda funny, because it's literally the first thing any Swift tutorial for beginners teaches. And yet it has taken me quite a few years to actually do the very first thing I learned about the language.

What I mean is if I’m not the one bringing it up, it doesn’t happen. And it isn’t a small team.

Generally, API design in the type of projects I work on should be front-loaded, with consensus reached before the code to be checked in is written, after we’ve done some prototyping. But as teams I work on adopt the “move fast and break ****” philosophy, documentation has gotten worse rather than better.

Naming stuff I do agree with, I’ve seen some rather unhelpful names come across my screen. Usually because they had some sort of name collision and tried to think of a new name to describe a similar thing. (This is also usually where I ask if maybe this complex bit of logic with internal variables should be a utility function instead)
Code review culture is very different between different companies. I've seen too many projects with hundreds of PRs merged without a single comment. Let's say that I think it's unlikely that all contributors are that good.

Sometimes it doesn't even need to be something fixed in that same pull request, if time is of the essence. It can be a heads up to fix or refactor something the next time someone makes changes in that part of the codebase. I usually approve PRs even if I leave a few comments of things I believe can be improved, as long as there's nothing inherently broken.

But even when there are comments in the code reviews, people often discuss the little things instead of the big ones: is this the right level of abstraction? Why do we need this data structure? I believe that happens because the little things can be objectively wrong (forgetting to call some function at the end, or making a typo) while the big picture is very rarely just objectively wrong, so it feels more nit-picky.

I’ll take a look, this area of Swift is something I’ve been ignoring for a bit because of all the concurrency stuff. Interesting that you can use “some” as a way to make a generic function now. Seems cleaner for simple cases.
Yeah, some is much cleaner, and less scary for newbies. I actually didn't try any of the Generics stuff for a long time just because the angle-bracket notation was intimidating. Turns out it wasn't all that difficult, and the results are super cool. I first had to use it to implement a SwiftUI View that could take another View as a parameter and use specific properties of that view, without using AnyView, because I wanted to make the initializer of my view similar to those Apple provides for components like Button or Section.

I'm a bit torn about being forced to use any or some when dealing with protocols as types from now on. On one hand, it's super useful to quickly tell whether the compiler is aware of the concrete type of a variable within that scope, and I like that a lot. On the other hand, I worry that forcing the choice between some and any is going to expose people new to the language to a very nuanced topic (existential vs concrete types) early on.
 

Nycturne
Yeah, some is much cleaner, and less scary for newbies. I actually didn't try any of the Generics stuff for a long time just because the angle-bracket notation was intimidating. Turns out it wasn't all that difficult, and the results are super cool. I first had to use it to implement a SwiftUI View that could take another View as a parameter and use specific properties of that view, without using AnyView, because I wanted to make the initializer of my view similar to those Apple provides for components like Button or Section.

Yup, generics are pretty much required to build clean reusable components in SwiftUI. It’s a common pattern in my code. Mostly what I’m trying to learn is how to create clean reuse of components across platforms when styles and the like don’t exist on one platform or another. Or when they need different default spacings but are fundamentally similar constructs (say, a gallery that is always 2 wide in compact mode on iOS, and sized based on available width in regular mode, on macOS, and on tvOS).

I'm used to C++ templates, so it wasn't too gnarly to get my head wrapped around it initially. The big hurdle for me was just how extensible/specific you can get with a generic type and extensions. "Here's the core type that always works with T", "Here's the extended type that works when T conforms to P", etc. It's very powerful stuff.
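A hedged sketch of that layering (Box and describe are illustrative names):
Swift:
struct Box<T> {
    let value: T
}

// The core behavior that always works, whatever T is.
extension Box {
    func describe() -> String {
        "a box"
    }
}

// Extra behavior that only exists when T conforms to a protocol.
extension Box where T: CustomStringConvertible {
    func describe() -> String {
        "a box containing \(value.description)"
    }
}

// Box(value: 42).describe()  -> "a box containing 42"
// Box(value: ()).describe()  -> "a box"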

I'm a bit torn about being forced to use any or some when dealing with protocols as types from now on. On one hand, it's super useful to quickly tell whether the compiler is aware of the concrete type of a variable within that scope, and I like that a lot. On the other hand, I worry that forcing the choice between some and any is going to expose people new to the language to a very nuanced topic (existential vs concrete types) early on.

That last thought is very reminiscent of debates I've had with folks over Apps Hungarian, when I last worked on a project that used it. Part of the argument given was that by using symbols for types that aren't human words (even though a Window Rect usually is something like RCW in this context), a developer needs to internalize what the type actually represents. It front loads the cognitive work of understanding code before editing code. For my part, I didn't like the velocity hit it required to understand a function full of a dozen of these types, and I still don't. Large complex functions with types that could be eldritch runes take a long time to understand and makes learning from code reviews harder. That said, the other side of the argument wasn't wrong about the fact that it makes an engineer act more intentionally and with understanding once they get over the hurdle.

There is probably an argument that could be made about making semantics explicit for very similar reasons. Yes, it's a hurdle for a new developer, but odds are they will try something and see how it goes or ask and learn. A seasoned developer reviewing can catch mistakes quickly based on the context, so at least when the developer fails to ask/learn before making changes, it's easier.

Code review culture is very different between different companies. I've seen too many projects with hundreds of PRs merged without a single comment. Let's say that I think it's unlikely that all contributors are that good.

Very much so. It's part of that velocity vs performance vs maintainability. Too much emphasis on velocity means people are less inclined to review. Some personality types are also less likely to criticize someone else's work. I fall into that latter bucket a bit. It's taken me years to get to a point where I can reliably bring up issues on someone's PR, and some of it is finding a tone that does the job for both sides.
 

Andropov
Or when they need different default spacings but are fundamentally similar constructs (say, a gallery that is always 2 wide in compact mode on iOS, and sized based on available width in regular mode, on macOS, and on tvOS).
I'm curious about tvOS. I've heard that it's a pain to deal with due to the lack of some basic UI components that are available on every other platform. But I never actually tried it myself.

That last thought is very reminiscent of debates I've had with folks over Apps Hungarian, when I last worked on a project that used it. Part of the argument given was that by using symbols for types that aren't human words (even though a Window Rect usually is something like RCW in this context), a developer needs to internalize what the type actually represents. It front loads the cognitive work of understanding code before editing code. For my part, I didn't like the velocity hit it required to understand a function full of a dozen of these types, and I still don't. Large complex functions with types that could be eldritch runes take a long time to understand and makes learning from code reviews harder. That said, the other side of the argument wasn't wrong about the fact that it makes an engineer act more intentionally and with understanding once they get over the hurdle.

There is probably an argument that could be made about making semantics explicit for very similar reasons. Yes, it's a hurdle for a new developer, but odds are they will try something and see how it goes or ask and learn. A seasoned developer reviewing can catch mistakes quickly based on the context, so at least when the developer fails to ask/learn before making changes, it's easier.
Hadn't heard about Apps Hungarian. I don't use any particular scheme for naming variables, although I try to mimic what Apple is doing. So I don't even use abbreviations for the most part (except things like ID for identifier and such). Anyway, my variable names usually only contain info about the logical use of the variable, never about the type. The last project I worked on oddly had some type info in the variable names, but sometimes the type ended up being wrong (probably because the code had been contributed by many people?) and it was *super* confusing to discover after 20 mins of debugging that "categoryList" was, in fact, not a list. Things like that. So I'd say I'm not generally in favor of type indications in variable names anyway.

Very much so. It's part of that velocity vs performance vs maintainability. Too much emphasis on velocity means people are less inclined to review. Some personality types are also less likely to criticize someone else's work. I fall into that latter bucket a bit. It's taken me years to get to a point where I can reliably bring up issues on someone's PR, and some of it is finding a tone that does the job for both sides.
My employer isn't even that focused on velocity. But people just don't bother commenting anything on PRs for some reason. And it's having quite an impact on some projects. I haven't said anything to management or to my tech lead because it's happening in projects I'm not a part of (so it's not my place to say anything), but I have checked out other repos out of curiosity (there's a lot to be learned from reading other people's code) and I've seen some huge red flags in a few of them. The kind of red flags that signal that a complete rewrite will be required for significant portions of the code. And it could've been avoided entirely at the code review stage; it only would've needed a nudge in the right direction. Oh well.
 

Nycturne
I'm curious about tvOS. I've heard that it's a pain to deal with due to the lack of some basic UI components that are available on every other platform. But I never actually tried it myself.
Much like macOS, SwiftUI didn't seem to fully expose everything you need when I was targeting tvOS 15. It might be getting better. So trying to recreate things that Apple does in other apps, or that Apple outlines in the UI guidelines, is surprisingly annoying.

It's one reason I've put the tvOS version of my app on hold and focused on Mac/iOS for the moment. It works, but it's not a great experience at the moment. Haven't quite gotten a UX I like.

Hadn't heard about Apps Hungarian. I don't use any particular scheme for naming variables, although I try to mimic what Apple is doing. So I don't even use abbreviations for the most part (except things like ID for identifier and such). Anyway, my variable names usually only contain info about the logical use of the variable, never about the type. The last project I worked on oddly had some type info in the variable names, but sometimes the type ended up being wrong (probably because the code had been contributed by many people?) and it was *super* confusing to discover after 20 mins of debugging that "categoryList" was, in fact, not a list. Things like that. So I'd say I'm not generally in favor of type indications in variable names anyway.

Hungarian is also about naming types. As an example, see wordwin.h, line 324. You've got this struct called RC, a rectangle specified by the X/Y of the four borders (or the two corners as points). There is also DRC which is specified by the top left and a width and height. So you'll get stuff like this in actual code:

C:
prcw->xwRight - prcw->xwLeft // Width of an RCW (window rect).
pdrcl->dxl // Width of a DRC

Now to see how this can slow comprehension of a data type, look at something like WWD in disp.h at line 301. To understand what a "WWD" really is, there's all these other types you need to understand. IDRs, HPLDRs, PLs, KMPs, RCs, IPGDs. Oof. One thing I notice in this code though is that for basic types, the variable name carries the full semantic typing, rather than going through the effort of a huge bunch of typedefs for every semantic purpose. But any struct uses hungarian naming conventions for the type.

My employer isn't even that focused on velocity. But people just don't bother commenting anything on PRs for some reason. And it's having quite an impact on some projects. I haven't said anything to management or to my tech lead because it's happening in projects I'm not a part of (so it's not my place to say anything), but I have checked out other repos out of curiosity (there's a lot to be learned from reading other people's code) and I've seen some huge red flags in a few of them. The kind of red flags that signal that a complete rewrite will be required for significant portions of the code. And it could've been avoided entirely at the code review stage; it only would've needed a nudge in the right direction. Oh well.

No, I hear ya. Large projects suffer from this as well. And even more insidious is the “yeah, this is a problem, but let’s fix it later”. Of course, later never comes.
 

Andropov
Hungarian is also about naming types. As an example, see wordwin.h, line 324. You've got this struct called RC, a rectangle specified by the X/Y of the four borders (or the two corners as points). There is also DRC which is specified by the top left and a width and height. So you'll get stuff like this in actual code:

C:
prcw->xwRight - prcw->xwLeft // Width of an RCW (window rect).
pdrcl->dxl // Width of a DRC

Now to see how this can slow comprehension of a data type, look at something like WWD in disp.h at line 301. To understand what a "WWD" really is, there's all these other types you need to understand. IDRs, HPLDRs, PLs, KMPs, RCs, IPGDs. Oof. One thing I notice in this code though is that for basic types, the variable name carries the full semantic typing, rather than going through the effort of a huge bunch of typedefs for every semantic purpose. But any struct uses hungarian naming conventions for the type.
I see. The amount of discipline required to reliably write code like that is mind-blowing, though. I've grown too used to Xcode's ability to show variable types and documentation :LOL: It's already extremely annoying when it fails. Can't imagine having to parse and internalize all that from variable names alone.

On an unrelated note, I found out yesterday that you can automatically map (using Codable) a String or Int value from the backend to a custom Swift enum with that raw type. Like, I didn't know this worked until yesterday:
Swift:
enum BackendOption: String, Codable {
    case optionA = "OPTION_A"
    case optionB = "OPTION_B"
    case optionC = "OPTION_C"
}

struct FooNetworkType: Codable {
    let id: Int
    let fooVar: String
    let fooOption: BackendOption
}
But it's so cool! I was trying to keep variables with a limited number of possible values (fetched from a REST API) as strongly typed as possible (instead of using fooOption: String or something like that), and finding out that it also saved a bunch of lines of code (from switches all throughout the app and stuff) was a nice unexpected bonus.
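A hedged usage sketch with the types above (the JSON payload is illustrative):
Swift:
import Foundation

let json = """
{ "id": 1, "fooVar": "bar", "fooOption": "OPTION_B" }
""".data(using: .utf8)!

do {
    // "OPTION_B" decodes straight into the enum case; an unknown raw
    // value throws a DecodingError instead of slipping through.
    let decoded = try JSONDecoder().decode(FooNetworkType.self, from: json)
    print(decoded.fooOption) // optionB
} catch {
    print("Decoding failed: \(error)")
}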
 

Nycturne
I see. The amount of discipline required to reliably write code like that is mind-blowing, though. I've grown too used to Xcode's ability to show variable types and documentation :LOL: It's already extremely annoying when it fails. Can't imagine having to parse and internalize all that from variable names alone.

It dates back to an era where the “IDE” didn’t really exist. So having local context about typing while looking at code helps a lot. That said, despite being nearly a decade out of college, joining a team that used Hungarian took a while to get up to speed on. And it was frustrating that it took so long. I felt like a new hire straight from college.

But I find myself missing Hungarian sometimes in code reviews, where you can’t just jump around looking at type definitions most of the time without bringing up an IDE to do so. So instead I annoy folks whenever I have to think too hard about something in context. :)

But it's so cool! I was trying to keep variables with a limited number of possible values (fetched from a REST API) as strongly typed as possible (instead of using fooOption: String or something like that), and finding out that it also saved a bunch of lines of code (from switches all throughout the app and stuff) was a nice unexpected bonus.

Yup, it is a great little bit of functionality. RawRepresentable is a common tool in my enums because of this (declaring an enum based on String or Int automatically makes it RawRepresentable, which is what enables the automatic Codable conformance). It’s useful for YAML config files too, as Yams uses Codable to do (de)serialization.
 

Andropov
It dates back to an era where the “IDE” didn’t really exist. So having local context about typing while looking at code helps a lot. That said, despite being nearly a decade out of college, joining a team that used Hungarian took a while to get up to speed on. And it was frustrating that it took so long. I felt like a new hire straight from college.

But I find myself missing Hungarian sometimes in code reviews, where you can’t just jump around looking at type definitions most of the time without bringing up an IDE to do so. So instead I annoy folks whenever I have to think too hard about something in context. :)
Nowadays, unless it's a very minor change, I typically have the IDE open, checked out at the pull request branch, side by side with the pull request itself. So I can use the full abilities of the IDE to understand what I'm reviewing. Otherwise, I could see the benefits of naming schemes like Hungarian. Still don't think it's worth needing such deep knowledge upfront to read something that could very well be a trivial piece of code.

Of course, if the project is large or has too many dependencies, switching branches can take too long. I often tried to avoid doing that at my first job because switching branches took ~half an hour.

Yup, it is a great little bit of functionality. RawRepresentable is a common tool in my enums because of this (declaring an enum based on String or Int automatically makes it RawRepresentable, which is what enables the automatic Codable conformance). It’s useful for YAML config files too, as Yams uses Codable to do (de)serialization.
Unfortunately, I'm not going to be able to use it for most projects at my day job, since most projects use Apollo to do the network (GraphQL) calls. The idea of Apollo is that the network (Swift) types are autogenerated (from the backend's schema, downloaded automatically), and there's no need to use Codable; Apollo takes care of decoding the JSON into the autogenerated types.

I'm... not sold on the benefits of that over using Codable and rolling our own types. A colleague argues that it's a much better approach "because it enforces compile-time detection of decoding errors". I'm calling BS on that. It only checks that the Swift types match the backend's types at the moment the app is compiled. That's a far cry from a real strong type system. It's brittle as hell: the moment the backend changes anything, or you point the app to a different environment, it can all break down in exactly the same way as a Codable approach. If I wanted to ensure that the API calls work at the moment the app is compiled, I'd rather add tests. That adds no extra libraries or extra compilation time (you don't need to run tests on every compilation), and it's equally brittle (in the sense that as soon as the backend makes changes that affect the API, it can break down).
 

Nycturne
Still don't think it's worth needing such deep knowledge upfront to read something that could very well be a trivial piece of code.
That was the side of the argument I took back in the day.

These days, I’m realizing that simpler reviews are a benefit when everyone already agrees to use the same “dialect”, so you don’t need to do the cross-referencing in another tool. Is it worth adopting Hungarian in projects today? Probably not. I don’t. But I still take the good things I learned and try to apply them in some way so that local context is preserved when possible.

Of course, if the project is large or has too many dependencies, switching branches can take too long. I often tried to avoid doing that at my first job because switching branches took ~half an hour.
Sounds like a project that could benefit from either Perforce or Git LFS?

Unfortunately, I'm not going to be able to use it for most projects at my day job, since most projects use Apollo to do the network (GraphQL) calls. The idea of Apollo is that the network (Swift) types are autogenerated (from the backend's schema, downloaded automatically), and there's no need to use Codable; Apollo takes care of decoding the JSON into the autogenerated types.

I'm... not sold on the benefits of that over using Codable and rolling our own types. A colleague argues that it's a much better approach "because it enforces compile-time detection of decoding errors". I'm calling BS on that. It only checks that the Swift types match the backend's types at the moment the app is compiled. That's a far cry from a real strong type system. It's brittle as hell: the moment the backend changes anything, or you point the app to a different environment, it can all break down in exactly the same way as a Codable approach. If I wanted to ensure that the API calls work at the moment the app is compiled, I'd rather add tests. That adds no extra libraries or extra compilation time (you don't need to run tests on every compilation), and it's equally brittle (in the sense that as soon as the backend makes changes that affect the API, it can break down).

Especially if the back end isn’t yours and doesn’t offer a schema or GraphQL.

That said, I’d probably seriously consider something like Apollo for a project where I could. Maintaining a hierarchy of types that can be generated from a schema is overhead I’d rather spend elsewhere. Make the mobile team co-owners of the schema file so PRs against it require both sides to be aware and sign off; that at least catches some of the brittleness. Even better would be proper API versioning.

Doesn’t mean you can’t autogenerate Codables from a schema as a build step, though. I’m just not sure I’d want to write one myself.
 