Apple’s AI capabilities have been less than impressive to date, but there’s one new feature coming with iOS 26 that’s actually really handy: adding stuff to your calendar with a screenshot.
I’ve been testing this feature out for the past few weeks in the developer beta, and I’m pleased to report that it works, easily making it my favorite Apple Intelligence feature to date. That’s admittedly a low bar to clear — and it’s not quite as capable as Android’s version — but boy is it a nice change of pace to use an AI feature on a phone that feels like it’s actually saving me time.
Maybe adding things to your calendar doesn’t sound all that exciting, but I am a person who is Bad At Calendars. I will confidently add events to the wrong day, put them on the wrong calendar, or forget to add them at all. Not my finest quality.
The iOS version of “use AI to add things to your calendar” taps into Visual Intelligence. iOS 18 introduced the ability to create calendar events based on photos, and iOS 26 extends that to anything on your screen. You just take a screenshot, and a prompt appears with the words “Add to calendar.” Tap it, and after a few moments you’ll see a preview of the event with the top-level details filled in. You can tap to edit the event, or just create it if everything looks good and you’re ready to move on with your life.
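Apple hasn’t said exactly how Visual Intelligence does its extraction, but the final step, actually saving the event, is ordinary EventKit territory. For the curious, here’s a minimal sketch of what that hand-off could look like; the title, date, and location are hypothetical stand-ins for whatever the model pulls from a screenshot, not Apple’s actual implementation.

```swift
import EventKit

// A minimal sketch: once details have been extracted from a screenshot,
// saving them is plain EventKit. Requires the
// NSCalendarsFullAccessUsageDescription key in Info.plist (iOS 17+).
let store = EKEventStore()

store.requestFullAccessToEvents { granted, error in
    guard granted, error == nil else { return }

    let event = EKEvent(eventStore: store)
    event.title = "Preschool Open House"      // extracted title (assumed)
    event.startDate = DateComponents(
        calendar: Calendar.current,
        year: 2025, month: 7, day: 18, hour: 10
    ).date!
    event.endDate = event.startDate.addingTimeInterval(60 * 60) // one hour
    event.location = "123 Main St."           // extracted location (assumed)
    event.calendar = store.defaultCalendarForNewEvents

    do {
        try store.save(event, span: .thisEvent)
        print("Event saved to \(event.calendar.title)")
    } catch {
        print("Failed to save event: \(error)")
    }
}
```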
None of this would be useful if it didn’t work consistently; thankfully, it does. I’ve yet to see it hallucinate the wrong day, time, or location for an event — though it did fail to account for a timezone difference in one case. For the most part, though, everything goes on my calendar as it should, and I rejoice a little every time it saves me a trip to the calendar app. The only limitation I’ve come across is that it can’t create multiple events from a single screenshot; it just latches onto the first one it sees and suggests an event based on that. If you want that kind of functionality from your AI, you’ll need an Android phone.
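That timezone slip is easy to reproduce by hand, for what it’s worth. If a parser treats a time like “2:00 PM ET” as local time, the event lands hours off for anyone outside the East Coast. A toy Swift illustration, with made-up strings and formats:

```swift
import Foundation

// Toy illustration of the timezone pitfall: the same wall-clock string maps
// to different instants depending on which zone the parser assumes.
let formatter = DateFormatter()
formatter.locale = Locale(identifier: "en_US_POSIX")
formatter.dateFormat = "yyyy-MM-dd h:mm a"

// A naive parse assumes the device's local timezone.
formatter.timeZone = TimeZone.current
let naive = formatter.date(from: "2025-07-18 2:00 PM")!

// The source text actually said "2:00 PM ET", so Eastern is correct.
formatter.timeZone = TimeZone(identifier: "America/New_York")
let correct = formatter.date(from: "2025-07-18 2:00 PM")!

// On a Pacific-time device this prints 10800.0: the naive event
// lands three hours late.
print(naive.timeIntervalSince(correct))
```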
Gemini Assistant has been able to add events based on what’s on your screen since August of last year, and in January it added support for Samsung Calendar. To access it, you summon Gemini and tap the icon that says “Ask about screen.” Gemini takes a screenshot to reference, and then you type or speak your prompt to have it add the event to your calendar. It failed to work for me as recently as a couple of months ago, but it’s miles better now.
I gave Gemini Assistant on the Pixel 9 Pro the task of adding a bunch of preschool events, all listed at the end of an email, to my calendar — and it created an event for every one of them on the correct day. In a separate case, it also clocked that the events I was adding were listed in Eastern Time and accounted for the difference. In some instances it even filled in a description for the event based on text on the screen. I also used Gemini in Google Calendar on my laptop, because Gemini is always lurking around the corner when you use literally any Google product, and it turned a list of school closure dates into calendar events.
This is all great, but is it just an AI rebranding of an existing feature? As far as I can tell, not exactly. Versions of this feature already existed on both platforms, but in a much more basic form. On my Apple Intelligence-less iPhone 13 Mini, you can tap a date in an email for an option to add it to your calendar. But it uses the email subject line as the event title: a decent starting point, but adding five events to my calendar all titled “Preschool July Newsletter” isn’t ideal. Android will also prompt you to add an event to your calendar from a screenshot, but it frequently gets dates and times wrong. AI does seem better suited to this particular task, and I’m ready to embrace it.
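That older, non-AI behavior is presumably plain pattern matching; Apple ships a date detector in Foundation, and a few lines of it show both why the old feature works and why it has nothing better than a subject line to use as a title. A hedged sketch, with a made-up email body:

```swift
import Foundation

// Roughly the pre-AI approach: scan text for date patterns with Foundation's
// NSDataDetector. It finds *when*, but has no idea what the event is about,
// which is why the old feature falls back to the email subject line.
let emailBody = """
Preschool July Newsletter
Picture day is July 18 at 10:00 AM.
We are closed Friday, July 25.
"""

let detector = try NSDataDetector(
    types: NSTextCheckingResult.CheckingType.date.rawValue
)
let range = NSRange(emailBody.startIndex..., in: emailBody)

for match in detector.matches(in: emailBody, options: [], range: range) {
    if let date = match.date {
        // A date with no title, location, or context attached.
        print("Found date:", date)
    }
}
```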