• 1 Post
  • 12 Comments
Joined 1 year ago
Cake day: June 24th, 2023


  • Containers can be based on operating systems that are different to your computer’s.

    Containers utilise the host’s kernel - which is why you have to jump through some hoops to run Linux containers on Windows (a VM or WSL).

    That’s one of the key differences between VMs and containers. VMs virtualise all the hardware, so the guest and host operating systems can be totally different; whereas a container uses the host kernel, so it must use the same kind of operating system, and it accesses the host’s hardware through that kernel.

    The big advantage of that approach over VMs is that containers are much more lightweight and performant, because they don’t have a virtual kernel/hardware/etc. I find it’s best to think of them as a process wrapper, kind of like chroot for a specific application - you’re just giving the application you’re running a box to run in - but the host OS is still doing the heavy lifting.
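
    A quick way to see the shared kernel for yourself - a minimal sketch assuming Node and Docker are installed (the alpine image is just for illustration):

    import { execSync } from "node:child_process";

    // The kernel release reported inside the container is the host's,
    // because there is no guest kernel (on Windows/macOS you'll see the
    // WSL/VM kernel instead - the hoops mentioned above).
    const host = execSync("uname -r").toString().trim();
    const inContainer = execSync("docker run --rm alpine uname -r").toString().trim();
    console.log(host, inContainer, host === inContainer); // true on a Linux host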



  • I was recently helping someone working on a mini-project to do a bit of parsing of docker compose files, when I discovered that the docker compose spec is published as JSON Schema here.

    I converted that into TypeScript types using JSON Schema to TypeScript. So I can create docker compose config in code and then just export it as yaml - I have a build/deploy script that does this at the end.
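
    Roughly, the idea looks like this (the generated root type name and path will vary - I’m assuming ComposeSpecification here - and I’m using the yaml package to serialise):

    import { writeFileSync } from "node:fs";
    import { stringify } from "yaml";
    // Generated by json-schema-to-typescript from the compose spec schema.
    import type { ComposeSpecification } from "./compose-spec";

    // Compose config as a typed, checked TypeScript value.
    const compose: ComposeSpecification = {
        services: {
            app: {
                image: "node:20-alpine",
                ports: ["8080:80"],
            },
        },
    };

    // The build/deploy script ends by exporting it as YAML.
    writeFileSync("docker-compose.yaml", stringify(compose));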

    But now the great thing is that I can export/import that config, share it between projects, extend configs, mix in, and so on. I’ve just started doing it and it’s been really nice so far; when I get a chance and it’s stabilised a bit, I’m going to tidy it up and share it. But there’s not much I’ve added beyond the above at the moment (just some bits to mix in arrays, which was what set me off on this whole thing!)



  • With regards to education, one of the things I’ve come to understand goes entirely counter to the way I was taught at University - for me, programming is a creative activity. It’s an iterative process, and the fewer constraints I have on how I achieve something (not what I achieve), the more I enjoy it, the more productive I am, and the better, by many measures, the end solution will be.

    I think that’s a key part of what’s missing from CS education: understanding that, and leaning into it, both to increase engagement and to get people thinking outside the box for solutions to their problems. Students seem to be taught so much, but very little about “Here’s a high-level problem, provide a solution” - which is the “core loop” of software development (outside of being a code monkey implementing other people’s designs). You go over requirements and specifications, but you don’t actually DO it… you don’t speak to people, ask the questions, realise they don’t know much about software, then later go “Oh shit, I made this assumption and built the wrong thing!”

    One of the things that I used to like more than anything was achieving things despite constraints. For example, back in the 90s, before AJAX was even a thing, I created a site for a betting company that was a SPA and pulled in data and live betting odds. I did this by having a message queue in JavaScript, a hidden frame from which to send messages from the queue to the server using a form, and then the server returned JavaScript code which executed, put the data where needed, and updated the page. I absolutely loved that project, and most people on the team just couldn’t believe it was even possible.
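
    For the curious, the trick looked roughly like this (illustrative names and modern syntax, not the original code):

    // Messages wait in a queue; a form targeting a hidden frame POSTs them
    // one at a time; the server replies with a <script> that calls back
    // into the parent page to deliver the data.
    const queue: string[] = [];
    let busy = false;

    function send(message: string): void {
        queue.push(message);
        flush();
    }

    function flush(): void {
        if (busy) return;
        const next = queue.shift();
        if (next === undefined) return;
        busy = true;
        // <form name="transport" method="post" target="hiddenFrame" action="/api">
        const form = document.forms.namedItem("transport")!;
        (form.elements.namedItem("payload") as HTMLInputElement).value = next;
        form.submit(); // the hidden frame loads the server's <script> response
    }

    // The script the server returns calls parent.receive(...) to deliver
    // the data, then we move on to the next queued message.
    (window as any).receive = (data: unknown) => {
        // ...put the data where it's needed and update the page...
        busy = false;
        flush();
    };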

    But I didn’t solve it through engineering; I solved it through playing - trying things, seeing what worked and what didn’t, adapting the idea, etc. until I found something that worked - and it was based on some of the things I’d been messing about with in my own time (somewhat bizarrely, creating a sort of online aquarium of Dr. Seuss fish, where each fish was a person viewing the site!)

    I think if we can inject more of that creative, tinkering, iterative, playful side into our education it’ll make a huge difference.


  • I left University in the late 90s and got my first job based on the things I’d been messing about with in my spare time, using the University’s facilities and at home (Unix, internet protocols, client/server architecture, distributed computing, etc.), rather than anything I’d been taught. I learnt more in my first 3 months at work than in 3 years of education.

    Then the dot-com boom hit, and the number of applicants for any position surged - everyone was going into software development for the money. The whole team became involved in selecting candidates and being part of the interviewing process - it was a nightmare trying to give every person a fair chance. We had some good hires and some bad hires, but the bad hires became such a problem because we had to go through the recruitment mill again.

    But we realised that the number one factor in whether someone would be a good hire was not education, but their own personal projects. That’s what mattered. Doing this for fun was the key indicator of being good, and it became the ONLY thing we looked for on CVs in the first pass. It doesn’t matter if you have a 1st from Cambridge: if you don’t demonstrate a passion for the subject, you don’t get an interview. It was a huge success - we built an amazing team and saved ourselves a ton of time during recruitment.

    Those people still exist though - I see them all the time! But the “industry” has grown so much that any given field now attracts (relatively) fewer people. For example, back in the 80s I was drawn to the personal computer, and in the 90s to the internet - things that are staples of everyday life now. But I can see today’s young people being attracted to things like AI, drones, quantum computing, 3D printing, and so on as well.




  • vampatori@feddit.uk to Programming@beehaw.org · Email is Dead

    From a personal perspective, I absolutely agree - I only check my email when I’m specifically expecting something, which is rarely. But at work emails are still incredibly important.

    Are there any protocols/services designed specifically for one-time codes? Receipts? Something dedicated to those kinds of tasks would be great from an ease-of-use perspective - no more messing about waiting for delivery, searching through hordes of emails, checking the spam folder, etc.

    Another problem we have is the rise of OAuth - the core idea is great, but the reality is that it ties a lot of people to these Big Tech services.



  • vampatori@feddit.uk to Selfhosted@lemmy.world · Defeated by NGINX

    Assume nothing! Test every little assumption and you’ll find the problem. Some things to get you started:

    • Does the “app” domain resolve to the app container’s IP from within the nginx container?
    • Can you proxy_pass to the host:port directly rather than using an upstream definition? If not, what about IP:port?
    • Can you connect to the app container from outside (if exposed)? What about from inside the nginx container? What about inside the app container?
    • Is the http(s) connection to the server (demo.example.com) actually going to your nginx instance? Shut it down and see if it changes.
    • If it works locally on 80, can you get it to work on the VPS on 80?
    • Are you using the exact same docker-compose.yaml file for this as locally? If not, what’s different?
    • Are you building the image? If so, are you incrementing the version number of the build so it gets updated?
    • Is there a firewall running on the host OS? If so, is it somehow interfering? Disable it and see.

    While not a direct solution to your problem: I no longer manually configure my reverse proxies at all, and use auto-configuring ones instead. The nginx-proxy image is great, along with its ACME companion image for automatic SSL cert generation via Let’s Encrypt - you’ll be up and running in under 30 mins. I used that for a long time and it was great.

    I’ve since moved to using Traefik as it’s more flexible and offers more features, but it’s a bit more involved to configure (simple, but the additional flexibility means everything requires more config).

    That way you just bring up your container, the reverse proxy pulls the meta-data from it (e.g. the host to map, the email for certs), and off it goes.
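
    To give a flavour of that meta-data: with nginx-proxy it’s just environment variables on the container being proxied. Shown here as a typed-object sketch in the spirit of compose-in-code - in a plain docker-compose.yaml they’re the same keys; the image name and email are placeholders:

    // nginx-proxy routes based on VIRTUAL_HOST; its ACME companion picks
    // up the LETSENCRYPT_* variables to issue and renew the certificate.
    const appService = {
        image: "myapp:latest", // placeholder
        environment: {
            VIRTUAL_HOST: "demo.example.com",
            LETSENCRYPT_HOST: "demo.example.com",
            LETSENCRYPT_EMAIL: "admin@example.com", // placeholder
        },
    };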


  • The issues with LLMs for coding are numerous - they don’t produce good results in my experience, and there are plenty of articles on their flaws.

    But… they do highlight something very important that I think we as developers have been guilty of for decades… a large chunk of what we do is busy work: the model definitions, the API to wrap the model, the endpoint to expose the model, the client to connect to the endpoint, the UI that links to the client, the server-side validation, the client-side validation, etc. On and on… so much of it is just busy work. No wonder LLMs can offer up solutions to these things so easily - we’ve all been re-inventing the wheel over and over and over again.

    Busy work is the worst and it played a big part in why I took a decade-long break from professional software development. But now I’m back running my own business and I’m spending significant time reducing busy work - for profit but also for my own personal enjoyment of doing the work.

    I have two primary high-level goals:

    1. Maximise reuse - As much as possible should be re-usable both within and between projects.
    2. Minimise definition - I should write only the minimum definition needed to produce the desired solution.

    When you look at projects with these in mind, you realise that so many “fundamentals” of software development are terrible and inherently lead to busy work.

    I’ll give a simple example… let’s say I have the following definition for a model of a simple blog:

    User:
      id: int generate primary-key
      name: string
    
    Post:
      id: int generate primary-key
      user_id: int foreign-key(User.id)
      title: string
      body: string
    

    Seems fairly straightforward - we’ve all done this before, whether in SQL, Prisma, etc. But there are some fundamental flaws right here:

    1. We’ve tightly coupled Post to User through the user_id field. That means Post is instantly far less reusable.
    2. We’ve forced an id scheme that might not be appropriate for different solutions - for example a blogging site with millions of bloggers with a distributed database backend may prefer bigint or even some form of UUID.
    3. This isn’t true for everything, but is for things like SQL, Prisma, etc. - we’ve defined the model in a data-definition language that doesn’t support many reusability features like importing, extending, mixins, overriding, etc.
    4. We’re going to have to define this model again in multiple places: our API that wraps the database, any clients that consume that API, any endpoints that serve that API up, the UI, the validation, and so on (sketched just below).
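
    To make flaw 4 concrete, here’s the kind of hand-written duplicate that ends up in an API client or UI layer, free to drift out of sync with the database definition:

    // The same Post shape, re-declared by hand for the API client - one of
    // several copies that all have to be kept in sync manually.
    interface Post {
        id: number;
        user_id: number;
        title: string;
        body: string;
    }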

    Now this is just a really simple, almost superficial example - but even then it highlights these problems.

    So I’m working on a “pattern” to help solve these kinds of problems, but with a reference implementation in TypeScript. Let’s look at the same example above in my reference implementation:

    export const user = new Entity({
        name: "User",
        fields: [
            new NameField(),
        ],
    });
    
    export const post = new Entity({
        name: "Post",
        fields: [
            new NameField("title", { maxLength: 100 }),
            new TextField("body"),
        ],
    });
    
    export const userPosts = new ContentCreator({
        name: "UserPosts",
        author: user,
        content: post,
    });
    
    export const blogSchema = new Schema({
        relationships: [
            userPosts,
        ],
    });
    

    So there’s several things to note:

    1. Entities are defined in isolation without coupling to each other.
    2. We have sane defaults, no need to specify an id field for each entity (though you can).
    3. You can’t see it here because of the above, but there are abstract id field definitions: IDField and AutoIDField. It’s in the specific implementation of this schema that you specify the type of ID you want to use, e.g. IntField, BigIntField, UUIDField, etc. (see the sketch after this list).
    4. Relationships are defined separately and used to link together entities.
    5. Relationships can bestow meaning - the ContentCreator relationship just extends OneToMany, but adds meta-data from which we can infer things in our UI, authorization, etc.
    6. Fields can be extended to provide meaning and to abstract implementations - for example the NameField extends TextField, but adds meta-data so we know it’s the name of this entity, and that it’s unique, so we can therefore have UI that uses that for links to this entity, or use it for a slug, etc.
    7. Everything is a separately exported variable which can be imported into any project, extended, overridden, mixed in, etc.
    8. When defining the relationship, sane defaults are used so we don’t need to explicitly define the entity fields we’re using to make the link, though we can if we want.
    9. We don’t need to explicitly add both our entities and relationships to our schema (though we can) as we can infer the entities from the relationships.
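
    As a purely hypothetical sketch of point 3 (names here are illustrative of the pattern, not the final API), binding the abstract IDs to a concrete scheme happens only when you create a specific implementation of the schema:

    // The schema itself only knows about the abstract IDField/AutoIDField;
    // each implementation decides what they actually are.
    const blogDatabase = new SchemaImplementation({
        schema: blogSchema,
        idField: UUIDField, // or IntField / BigIntField for other deployments
    });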

    There is another layer beyond this, where you define an Application, which then lets you specify code-generation components to do all the busy work for you, settings like the ID scheme you want to use, etc.

    It’s early days, I’m still refining things, and there is a ton of work yet to do - but I am now using it in anger on commercial projects and it’s saving me time: generating types/interfaces/classes, database definitions, APIs, endpoints, UI components, etc.

    But it’s less about this specific implementation and more about the core idea - can we maximise reuse and minimise what we need to define for a given solution?

    There’s so many things that come off the back of it - so much config that isn’t reusable (e.g. docker compose files), so many things that can be automatically determined based on data (e.g. database optimisations), so many things that can be abstracted (e.g. deployment/scaling strategies).

    So much busy work that needs to be eliminated - allowing us to give LLMs a run for their money!