Generally, the problem with the type of search VSCode does is:
1. It returns zero results because you made a typo or searched for a term that doesn’t appear in any document. Codebased uses semantic search, so it’s more robust here, but if you type in something random, e.g. “argle bargle”, it still won’t return results.
2. It returns too many results, and they’re not guaranteed to be in order of relevance. This can happen if you search for a commonly used function or class name. Codebased ranks code blocks using BM25 and L2 distance before sending them to the reranker for even better results.
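As an illustration of that ranking step, here is one way lexical BM25 scores and L2 embedding distances can be blended into a single ordering before reranking. This is a hedged sketch, not Codebased’s actual implementation: the pre-tokenized inputs, the min-max normalization, and the `alpha` blending weight are all assumptions.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each tokenized doc against the query with classic Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()  # document frequency per term
    for d in docs:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

def l2(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def hybrid_rank(query_terms, query_vec, docs, doc_vecs, alpha=0.5):
    """Blend lexical (BM25) and semantic (negated L2 distance) signals,
    min-max normalized so they're comparable, and return doc indices
    sorted best-first."""
    lexical = bm25_scores(query_terms, docs)
    semantic = [-l2(query_vec, v) for v in doc_vecs]  # closer => higher

    def norm(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]

    combined = [alpha * x + (1 - alpha) * y
                for x, y in zip(norm(lexical), norm(semantic))]
    return sorted(range(len(docs)), key=lambda i: combined[i], reverse=True)
```

Only the top few indices from `hybrid_rank` would then go to the (more expensive) reranker, which is the usual reason for a cheap first-stage ranking like this.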
Vision Pro + AI lets me create a decent-looking, functional user interface just by talking about it, while feeling like I'm on a mountaintop, immersed in nature.
Midjourney magazine has been described as "soulless and embodying the threat AI poses to journalism", merely "pages upon pages of large images of varying quality in varying genres... captioned with the prompt used to generate them".
The author doesn't seem to be a big fan of Midjourney magazine and what it represents.
It's true that Midjourney magazine doesn't compete as an art publication in a traditional sense.
But it does something far more interesting: it makes me, a mere mortal, feel like an artist.
Every artwork is accompanied by its prompt, and the prompts work. It's fun to type these prompts into the Midjourney Discord and see something amazing created.
I can play with certain words, add phrases, modify descriptions and quickly iterate towards something that feels unique and, curiously, "my own".
Sometimes the prompts are surprising, like this one:
"don’t look at the eloquent red circle, surreal, glistening highlights --ar 2:3 --s 33": https://shorturl.at/aBMRS
It creates dramatic red-toned images featuring a haunting, surreal robed humanoid beneath an ominous orb.
I iterated on this a bit, to explore the impact of different words.
Midjourney magazine is a manual for visual creation with AI that even I, a programmer with zero artistic ability, can use to feel like an artist. The prompts are amazing, surprising, and, unlike any other art publication, interactive.
Rendition creates a two-way sync between design and code. It’s a Figma + GitHub plugin that writes UI code for you from designs: it compiles Figma designs into React code and opens a pull request in your repo.
It also syncs designs from code back into Figma, which lets designers work off the latest designs more easily (“pull latest designs from code”). And because Rendition syncs design and code in both directions, iterating on UI works well: it generates clean diffs for PR review and lets you edit the generated code.
We think of it as an extra pair of hands to implement HTML/CSS for us. It’s best for relatively small product teams that need to get a lot done quickly.
Let us know what you think, and yell at us if you think we’re wrong.
Interesting! Do you have experience with web agency work? I've never thought about the challenges agencies face, but they must run into this all the time.
Yes, I've worked with some of them, and I have close friends running them.
They definitely have this problem all the time.
Imagine having n PMs from big brands who need to build websites or landing pages and don't want to commit their own development resources (either they don't have any, or they'd rather not spend them).
Agencies generally have internal project managers to match the clients and extract requirements.
A common pattern is to have some developers and designers on the payroll (though the pay isn't great, so it's hard to attract good talent) plus a few contractors they've worked with before for more exotic projects (e.g. a client who wants a 3D website).
The designers come up with the designs. Sometimes the client will have their own design.
Often, the design plus some notes on the side can replace a standard requirements document.
After that, the designs are sent to the developers, who convert them into code and deploy, usually with some sort of CMS (WordPress is still pretty popular, as is Django).
The project gets demoed, and adjustments and bugfixing start.
Wiring up a frontend to a backend isn't a trivial task, but it is pretty typical software engineering. If part of the work (the UI) could be done for you, wouldn't that be a clear value-add?
Yes and no. For the literal first prototype you get out the door, yes, having some baseline UI work done for you would be nice. For the sake of argument, let's assume the code generated by said tool is at least somewhat reasonably structured, so refactoring it isn't a nightmare either. You wire the UI up to some data and ship it to get feedback.

The problem comes after that initial step: you show the first prototype to some test users or stakeholders and figure out what needs to change. Integrating the now-changed output of the updated UI with the code you already have will no longer be trivial. As long as the prototype is lean, this will be manageable for a while, but sooner or later it won't be.

The deeper issue is that for most projects, the initial prototype stage is a minuscule part of the whole product lifetime. Getting one prototype for cheap can be good, but there are UI prototyping tools that can already do this. Using those tools instead of trying to reapply the updated output to your now-existing code base will most likely be less work.
In general I do think that tools in this niche will mature over time, but in my very subjective opinion, they won't be the holy grail of cutting down development time. Most of the time, the hard part of software development is the long haul, not the initial kickoff.
Ah, so what you're saying is that a big problem that dev+design teams will encounter is that they need to be able to iterate on designs together, so a real solution here would need to be able to integrate updates to the design into the codebase (and possibly vice versa). Am I understanding your point correctly?
Yes, exactly. This is a problem that many teams already struggle with when doing this manually, e.g. which design constraints are shared and which are unique. Context is hard to put into machine language and I see the above issue as one of the main hurdles in this space.
learns about your brand and creates custom email graphics for headers etc
pretty cool what gpt-image-1 can do
if curious, can check out https://graphic-design.email