no, the trigger needs to be coded, because CSS animations don't start on their own without either an interaction like hover or js manipulation
would prob just add a class with js and write a keyframes animation with css, just normal stuff, gpt can probably do this
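something roughly like this is what I mean, just a sketch with made-up class and keyframe names (with an animation-delay, keyframes can also kick in on page load by themselves, no js needed):
<style>
/* rough sketch, names are placeholders */
@keyframes darken {
  from { filter: brightness(1); }
  to   { filter: brightness(0.6); }
}
.hero-image.is-darkened {
  animation: darken 500ms ease forwards;
}
/* a single line of js would add the class, e.g.
   document.querySelector('.hero-image').classList.add('is-darkened'); */
</style>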
I asked ChatGPT to create an overlay in code, to adjust to the dimensions of "hero-image" and to create a keyframe animation. Yet, the code doesn't work. Can anyone see what's wrong?
so you just wanted to change opacity on the image?
I'm not sure I understand the goal exactly, but this code cannot work for multiple reasons, it's just doing the wrong things
so you just wanted to change opacity on the image?
I want to darken the image - this could also work with transparency (image sits on top of a black section background).
Page loads: image shows normally.
After e.g. 1 second: a transition starts darkening the image, so that text overlays become more readable.
I tried Transitions, Filters and Background Filters, but I found no way to set the start state.
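To illustrate, my attempts looked roughly like this (simplified):
<style>
/* simplified version of what I tried: the darkened value just applies
   immediately on load, there is no earlier state for the transition to
   start from, so the 1s delay and the 500ms fade never actually play */
.hero-image {
  filter: brightness(0.55);
  transition: filter 500ms ease 1s;
}
</style>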
Yes, that's it. I still wonder what's the best way to prompt ChatGPT for something like this ...
I can share how I did that if you want.
<style>
@keyframes fadeIn {
  from {
    opacity: 0;
  }
  to {
    opacity: .8;
  }
}
</style>
I've done lots of those image overlay things before and my personal favorite is to have the image and then an absolutely positioned solid color on top of it (z-index). Then I can control it by itself.
I felt too lazy to write the CSS, so I used an AI prompt:
"I have this structure:
.wrapper>.img-wrapper+.img-overlay
Make a css animation so the .img-overlay starts at 0 opacity and after 1 second the opacity starts to darken at 500ms rate."
Then I converted all those items to tokens.
I knew there was a delay in there, so in a follow-up prompt I asked it to separate out all the CSS properties into long form.
This is the "overlay" layer. I didn't even need to add any classes.
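Put together, the overlay's CSS ends up roughly like this in long form (a rough reconstruction, the exact selectors and values are assumed):
<style>
/* rough reconstruction, selectors and values assumed;
   .wrapper (or .img-wrapper) would need position: relative */
.img-overlay {
  position: absolute;
  inset: 0;                       /* cover the image */
  background-color: #000;
  opacity: 0;                     /* start state */
  animation-name: fadeIn;
  animation-duration: 500ms;
  animation-timing-function: ease;
  animation-delay: 1s;
  animation-fill-mode: forwards;  /* keep the .8 end state afterwards */
}
</style>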
Thanks again, Jeremy, also for explaining your prompting flow!
There's still something underlying that ChatGPT can't fill in: your knowledge of how things need to be structured to work. With that missing, there are countless ways to mess up.
***
I recently started using a 3D modeling app again after a break of several years, and was surprised how many details I had forgotten. Yet I still benefit from understanding the fundamental concepts, and from knowing that a feature exists, even if I can't recall its name.
Super, glad it worked!
I used to do 3D modeling as my career... like 20 years ago. It's still interesting, but I haven't kept up. What software is it?
Rhino.
For prompting... you can prompt the AI to help you prompt. Tell it that you need to learn X, then ask it to write a basic prompt for how to learn the first step. I wrote GPT chat customizations to write my code DRY, accessible, performant, secure, and with best practices in mind. I'll need to question it on occasion or ask for proof, but having that defined by default is helpful.
I often found it limiting that ChatGPT doesn't have context. Even if you tell it that you're using a visual builder similar to Webflow, it may output code that you'd use in an IDE. I even uploaded screenshots, but that wasn't particularly helpful either. Do you use a custom instruction that explains how Webstudio handles things?
Yeah, it chokes on that quite often. I'll often remind it that I use the "Webstudio Cloud" version and don't have access to code in that sense. I'll have that in my initial prompt and it will go smoothly for a while; then, when it spits out some React code, I'll remind it again. (I recently learned that Claude gets confused and drops parts of your conversation after you carry it on for a long time; I'm assuming GPT does the same, since it saves them tokens/cost.)
When I prompt, I tell it exactly what I'm doing and what I expect. I'll tell it to take 1 step at a time, and confirm with me when it's time to move on. Being too verbose seems to confuse it. I take one item at a time and it seems to understand that better.
I will try slicing things into smaller pieces next time. Yet starting from Oleg's initial note, I still wasn't sure how exactly the darkening should happen. There are quite a few ways to approach this... creating a "physical" overlay as you did, creating the overlay in code, decreasing the opacity of the image (which sits on a dark background), and there are likely a few more.
As the AI doesn't know your working environment, it will likely not be good at recommending the most straightforward approach. I hope the team can at some point create a custom GPT that knows where it is operating.
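The image-opacity variant, for example, would look roughly like this (selectors and values assumed):
<style>
/* rough sketch of the image-opacity variant, selectors assumed */
.hero-section {
  background-color: #000;         /* dark background behind the image */
}
.hero-image {
  animation: dimImage 500ms ease 1s forwards;
}
@keyframes dimImage {
  from { opacity: 1; }
  to   { opacity: 0.6; }
}
</style>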
It would be cool if a model was trained on the Discord channel, Webstudio's blog, YouTube channel & docs. Maybe in time?
Here's an example prompt I would write if I didn't know the direction I wanted to take. Once I see something that fits my requirements, I drill down more on the options.
Prompt: "I'm building in Webstudio.is (cloud version).
I have a full-height background image that is lighter in color. I have buttons and text on top of it. I want the buttons & text to pass accessibility contrast standards. Create a table of different ways I can fix this, including columns for effort, user experience, and website performance, plus notes on best practices that I'm missing."
I try to be direct and give it the relevant context.
Excuse my late reply, Jeremy - I barely touched my computer yesterday. Letting GPT suggest options is a great approach. Thanks a lot for sharing!