
Sirui announces an 85mm F1.4 full-frame autofocus lens for Sony, Nikon and Fujifilm




Sirui, a company best known for its relatively inexpensive tripods and cinema lenses, has announced a lens that may appeal to photographers and videographers looking for a budget-friendly portrait option: a full-frame 85mm F1.4 with autofocus for Nikon Z-mount, Sony E-mount, and Fujifilm X-mount. On the latter’s APS-C sensors, it’ll provide a roughly 128mm full-frame equivalent focal length.
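For readers who want the arithmetic behind that figure: Fujifilm’s X-mount cameras use APS-C sensors with a crop factor of roughly 1.5×, so equivalence is a single multiplication. A minimal sketch (the function name is our own):

```python
# Full-frame equivalent focal length on a crop-sensor body.
# Fujifilm's APS-C sensors have a crop factor of roughly 1.5x.
def full_frame_equivalent(focal_length_mm: float, crop_factor: float = 1.5) -> float:
    return focal_length_mm * crop_factor

print(full_frame_equivalent(85))  # 127.5, i.e. roughly 128mm
```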

The company says the lens is part of its Aurora series, though it’s currently the only one bearing that nameplate, which suggests we can expect more lenses like it in the future.

The lens features an AF/MF switch, an autofocus lock button, and a switch to control whether the aperture ring is clicked or clickless. The company also claims it has “dustproof and waterproof construction” and a fluorine coating on the front element to help repel oils and water.


The lens has 14 elements in 9 groups: 1 aspherical lens, 2 ED elements, and 3 HRI elements. It has a 15-blade aperture and a minimum focusing distance of 0.85m (2 ft., 9.5″). The E-mount version weighs 540g (1.2lb), while the X and Z-mount versions are 10 and 30g heavier, respectively.

The Aurora 85mm F1.4 has a 67mm filter thread and a USB-C port on the lens mount for updating its firmware.

While fast 85mm lenses with autofocus aren’t exactly rare – there’s already another third-party 85mm F1.4 from Meike for the notoriously locked-down Nikon Z-mount – it’s nice to see another option hit the market, especially one with so many features. It’ll be interesting to see what the image quality is like, especially given the relatively affordable price Sirui is asking: the lens will normally retail for $599, but the company is running an ‘early bird’ promotion until December 31st that knocks the price down to $499 if you buy directly from Sirui.

Sirui Aurora 85mm F1.4 Specifications

Principal specifications
Lens type Prime lens
Max Format size 35mm FF
Focal length 85 mm
Image stabilization No
Lens mount Fujifilm X, Nikon Z, Sony E
Aperture
Maximum aperture F1.4
Minimum aperture F16
Aperture ring Yes
Number of diaphragm blades 15
Optics
Elements 14
Groups 9
Special elements / coatings 1 aspherical, 2 ED, 3 HRI
Focus
Minimum focus 0.85 m (33.46″)
Maximum magnification 0.1152×
Autofocus Yes
Motor type Stepper motor
Distance scale No
DoF scale No
Physical
Weight 540 g (1.19 lb)
Diameter 80 mm (3.15″)
Length 102 mm (4.02″)
Sealing Yes
Colour Black
Filter thread 67 mm
Hood supplied Yes
Tripod collar No




Behind the Scenes: the story behind new features in Adobe Photoshop & Lightroom




This year, at Adobe’s Max conference, the company announced several new AI features coming to Photoshop, Lightroom, and Adobe Camera Raw. We talked to some of the managers and engineers behind these products to get an idea of how those features came about and to try to get a sense of what the future holds for Adobe’s photo editing suite.

Lightroom

The Quick Actions UI gives you easy access to a variety of subject-specific edits.

One of the major new features for Lightroom Web and Mobile is called Quick Actions. It’s a panel that lets you easily adjust various parts of your image, giving you different sliders and suggestions based on what type of subject it detects.

“It really started with a multi-year investment into masking,” said Rob Christensen, director of product management for Adobe Lightroom and Adobe Camera Raw. “We had to make sure that masking was amazing. And so for multiple years, our R&D teams and our design teams came up with an experience that was outstanding. So once we had masking in place, and you could identify a subject, hair, lips, teeth, all of that, we realized, well, let’s pair that up now with edits, and we’ll call them Adaptive Presets.”

Quick Actions essentially serve to make that work more visible and accessible. “With Quick Actions, what you’re selecting in many cases are just Adaptive Presets that are relevant to that specific photo,” Christensen said. “We’re building from masking, Adaptive Presets, now Quick Actions. And it’s all coming together now into a unified experience – that was our vision years ago, and now it’s coming to life.”

Christensen said that Adobe actually quietly launched the feature on the web a few months ago. “We didn’t make a lot of noise around it, but customers have been using it on the web. Part of the reason why we brought it to Web first is it’s just easier. We could get some additional feedback, we could do more experimentation; the web is very easy to iterate on.”

“Part of the reason we also brought it to mobile is it’s really designed for the mobile user, where they want to get to a quick result,” Christensen said. “They don’t necessarily want to go through all the different panels. In a mobile UI, a lot of things are hidden – but what if we could surface all of these advanced capabilities for mobile users to get to an edit? A bit of a goal over the last six months that’s connected with Quick Actions is how do we help users capture, import, and share an amazing photo in under 60 seconds?”

At the moment, it’s unclear if the feature will be coming to the dedicated desktop apps. “We’re definitely looking at and listening to customer feedback. And so far, I think there’s a lot of excitement, especially from desktop users. But we’re not making any official announcements at this time.”

The selection tool for Generative Remove has also been improved.

Image: Adobe

Generative Remove, which lets you use AI to erase objects from a scene, is also now generally available across all versions of Lightroom. It’s the type of thing you could easily do if you opened an image in Photoshop, but now you don’t have to leave Lightroom.

“The way we think about what we’re building with Lightroom is it’s purpose-built for photographers,” said Christensen. “So if they have a specific use case that is important for photography, we will look at bringing that into Lightroom. Distraction removal is a great example of an area that makes sense for photography. That’s how a lot of customers are using generative remove today.”

Finally, for Lightroom Classic devotees worried about any plans to completely replace the older-school version of the app with the new cloud-based Lightroom, Christensen seemed to offer some reassurance. “As it stands right now, we’re continuing to innovate on both surfaces. We have a lot of customers on both that love the unique benefits.”

Adobe Camera Raw

Left: Adobe Color. Right: Adobe Adaptive

Image: Adobe

One of the most compelling photography-related features announced at Max is the new Adobe Adaptive profile for ACR. It’s meant to give you a better starting point for your own edits than older profiles like Adobe Color.

“One of the things that makes Adobe Adaptive unique is the fact that it’s a lot more image content aware,” said Eric Chan, a Senior Principal Scientist on the Adobe Camera Raw team. “In the past we would look at basic properties in the histogram and other attributes of the image. But with AI models now, we have a lot more semantic information about whether there’s a person in it, whether there’s a sky in it, etc.”

That awareness helps it make base-level adjustments, giving you a better starting point to put your own edits on top. “It can do things like fix skies, fix backlit portraits, it can do things nicely with faces, and it can control a lot more attributes of the image than our previous profiles,” Chan said.

You can control how intense the Adobe Adaptive look is using the ‘Amount’ slider.

Unlike pressing the ‘Auto’ button on other profiles, Adobe Adaptive doesn’t change the sliders for parameters like exposure, contrast, highlights, etc.; those are still set to 0, allowing you room to do your own edits. “I think the other unique aspect is that there’s an Amount slider underneath the profile itself,” said Chan. “You can do a quick edit. Like, I like what it’s doing, but maybe it’s too much, let’s go to 80%, or maybe you want to go beyond, like 150%. But then there’s the finer-granularity control, things like color panels that you can combine with that.”

The company’s also bringing its Generative AI features to ACR, including Generative Remove and Generative Expand, which lets you “go beyond the boundaries of your photo using the power of AI.” In other words, you ask it to make your picture wider or taller, and it will try to fill in the space in a reasonable way. Any changes you make in ACR will also apply to the AI-generated portion of your picture, and the program will add a Content Credentials tag to the image, marking it as containing AI-generated content.

Generative Expand essentially lets you ‘crop outward,’ filling the new space with AI imagery.

Those are interesting features to see in Adobe Camera Raw since, as the name implies, the program has previously been dedicated to adjusting the data your camera captured. Editing content using AI or other tools has been the domain of Photoshop and, to a lesser extent, Lightroom, which has had the Generative Remove feature for a while.

We asked what the thinking was behind adding Generative AI to ACR and Christensen said: “With Lightroom and ACR we’re trying to ensure that photographers can observe that moment as best they can. When we talk to customers, they feel it’s unfortunate if they have 90% of an amazing photo, but it’s just that 10% that is not how they remember the scene. Maybe because they couldn’t get the camera at the right spot at the right time.” He also reiterated that using the generative AI features was completely optional.

The line about making images according to people’s memories isn’t new; in fact, it’s very similar to how phone manufacturers like Samsung and Google are talking about their generative AI features – it’s just a bit odd to hear it in reference to an app dedicated to Raw photography. However, Christensen says there’s a line between what you can do in ACR, and what you can do in Photoshop. “We are not introducing capabilities like Generative Fill, where you can say ‘add an elephant flying from the sky with an umbrella.’ That doesn’t capture the moment; that’s creativity.”

Photoshop

This year, Adobe made several of its generative AI tools in Photoshop generally available and added a new “Distraction Removal” tool that can automatically remove wires, cables, and people from images. Removing wires can be done with a single click, while the people mode gives you the chance to refine the selection in case it selected people you still want in the picture, or didn’t select people you want to get rid of.

The ‘People’ mode of the Find Distractions feature lets you decide which subjects to keep and add more subjects to remove.

According to Stephen Nielson, Senior Product Manager for Photoshop, Adobe plans to add an additional mode for the Distraction Removal tool to handle non-human or cable distractions. “The way we’ve approached this is, first, the most popular thing that people want to remove from a photo is people. So tourists or people in the background or whatever,” he said. “And so the categories that we’re working on are first: people. Second: cables and wires because they’re a pretty specific thing. And then there’s a category of basically everything else.”

Nielson says the everything else category will be like the people one, where Photoshop will select what it thinks are distractions but let you add to or remove from the selection before hitting the remove button.

It’s quite challenging to come up with a single model that can detect all sorts of distractions

Adobe’s not currently announcing when that feature will roll out, as it’s still in the process of building the model. “It’s quite challenging to come up with a single model that can detect all sorts of distractions, whether it’s somebody’s shoulder that’s in the image, or a garbage can, or a pile of leaves, or a random bicycle. It could be anything, right?”

According to Nielson, the training process involved a lot of human work. “We actually give pictures to people and say, ‘which objects are distracting?’ You do that enough times, and you can train a model to say, ‘Hey, this is what people usually say is distracting,’” he said. “That’s not the only kind of data that’s included in our training data set, but a lot of it is, like, hey, somebody’s gone through and annotated data to suggest which objects are distracting.”

If you want to use the Remove tool without generative AI, you can.

Like many features in Photoshop, Distraction Removal can take advantage of Adobe’s generative AI, though it’s not 100% reliant on it. “It actually can either use Content-Aware fill or generative fill technology,” said Nielson. “We’ve built an automatic selector that will, based on what you’ve selected and you’re trying to remove, automatically choose either Content-Aware fill or generative fill, depending on which one’s best.”

Adobe has also added a drop-down menu that lets you manually select whether you want any part of the Remove tool, including the Distraction Removal feature, to use Generative AI or Content-Aware Fill. Nielson, however, recommended leaving it on auto. “Content-Aware Fill is better for areas with similar textures, areas where there’s lots of noise, or higher resolution images. Whereas Generative Fill is really good at details, which Content-Aware Fill just isn’t good at. So there’s a good use case for both, and the auto selector we have allows the algorithm to choose which one’s going to be best.”
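Adobe hasn’t published how its auto selector works, but as a purely illustrative sketch, a dispatcher weighing the factors Nielson mentions (noise or repeating texture and high resolution favoring Content-Aware Fill, fine detail favoring Generative Fill) might look something like this. Every name and threshold below is invented:

```python
import numpy as np

# Speculative toy dispatcher, NOT Adobe's actual logic; it only illustrates
# the trade-offs Nielson describes. Thresholds are invented for the sketch.
def choose_fill_method(image: np.ndarray, mask: np.ndarray) -> str:
    """Pick a fill method for a masked hole in a grayscale image in [0, 1].

    mask is a boolean array the same shape as image; True marks the
    pixels being removed.
    """
    surround = image[~mask]  # context pixels the fill will sample from
    noisy_or_textured = surround.std() > 0.15  # patch-based fill copes well here
    high_resolution = image.size > 12_000_000  # ~12MP or larger
    if noisy_or_textured or high_resolution:
        return "content_aware_fill"
    return "generative_fill"
```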

We think generative technology is huge, but it’s not the answer for everything

Nielson thinks Generative AI will play a big part in future Photoshop features, but it won’t be the only way the company improves the program. “There’s still a lot of areas where we think generative technology is going to dramatically simplify things that were previously tedious and time-consuming inside Photoshop and give you more time to be creative.”

The company showed off one such example at its Sneaks presentation, which showcases tech demos that may or may not actually make it into Adobe products in the future. The demo, nicknamed ‘Perfect Blend,’ automatically matches lighting and color between a background and the objects you’re photoshopping into it.

“But there’s also going to be a lot of other non-gen AI improvements that we want to put into Photoshop,” Nielson said. “Just making the application run smoother, faster, be more efficient, speed up workflows with non-generative technology. We think generative technology is huge, but it’s not the answer for everything. So, there’s a lot of other things that we are planning just to make the app better.”




Apple's gearing up for a week of new Mac announcements




Apple will be announcing some Mac news next week, according to SVP of marketing Greg Joswiak. In a post on X (formerly Twitter) he says to “Mac” your calendars, as the company has “an exciting week of announcements ahead, starting on Monday morning.” The post is accompanied by a video of the Finder logo turning into an Apple logo, further reinforcing the Mac theme.

While Apple never comments on its future plans, the rumors point to the company announcing updates to its MacBook Pro lineup, as well as a new iMac and a potentially redesigned Mac Mini. It also seems likely that the company will try to link the new hardware with its ‘Apple Intelligence’ AI, as the colors in the video Joswiak posted match the ones for the redesigned Siri interface.

“Mac (😉) your calendars! We have an exciting week of announcements ahead, starting on Monday morning. Stay tuned…”

It’s currently unclear what form these announcements will take. While Apple has previously announced updates to the iMac via press release, it would be odd if the company updated a product as important as the MacBook Pro without some sort of pre-recorded presentation, especially since the computer will almost certainly have new variations of the M4 chip that debuted in the iPad Pro earlier this year. So far, we’ve only seen the base M4, while most models of the MacBook Pro have traditionally used ‘Pro’ and ‘Max’ variants of the chip with more cores and capabilities.

There’s a fair amount of excitement around these potential releases, as benchmarks of the M4 show a decent uplift in single-core performance compared to the previous-generation M3. That should result in an overall snappier experience when editing and exporting photos. Those who bought the first generation of M1-powered MacBook Pros – especially the awful 13″ touch bar model that the author of this article still uses as their personal computer – can expect an even larger jump in performance from a theoretical M4-powered Mac.

Of course, there’s a possibility that’s not what we’ll be hearing about; Apple could surprise us all by leaving the MacBook Pro completely unchanged and instead reviving the tiny, single-port MacBook. It almost certainly won’t, but we won’t know for sure until Monday. We’ll be sure to cover the news as it happens, so stay tuned.




Adobe Content Credentials check-in: the quest to verify images, video, and more on the web




It’s been a few years since Adobe started testing Content Credentials in Creative Cloud apps, and a year since the company announced it’d use them to mark images generated by its Firefly AI. If you’re unfamiliar, Content Credentials aren’t just about AI; they’re also pitched as a secure way to track how images were created and edited in the hopes of slowing down the spread of misinformation. Adobe bills the system as a “nutrition label” for digital content.

At Adobe’s MAX conference, we got to sit down with Andy Parsons, Senior Director of the Content Authenticity Initiative at Adobe, and ask him some questions about Content Credentials. It also seemed like a good time to check in on the state of the system.

Content Credentials on the Web

Earlier this year, Adobe began rolling out support for adding Content Credentials to your photos in Adobe Camera Raw, Lightroom, and Photoshop. These features are still in Early Access or Beta. There’s also a Content Credentials verification site that anyone can use to inspect image, video, and audio files to see if they have Content Credentials attached or if they’ve been watermarked with a link to Content Credentials.

However, the company is also looking to make the tech available even to people who don’t use its products. This month, it announced a private beta for a Content Authenticity web app. The site lets people who have joined via waitlist upload a JPEG or PNG and attach their name and social media accounts to it after verifying ownership of those accounts by logging in to them. After the person attests that they own the image or have permission to apply credentials to it – there’s currently no way to verify that’s actually true – it lets them download the image with Content Credentials attached. The tool also lets you attach a piece of metadata, asking companies not to use your image for training AI.
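That “do not train” metadata corresponds, as far as we understand the public C2PA spec, to a training-and-data-mining assertion in the content’s manifest. Roughly, the assertion carries entries like the following, sketched here as a Python dict; the exact field names come from our reading of the spec and may differ in Adobe’s implementation:

```python
# Rough shape of a C2PA "do not train" assertion, based on our reading of
# the public C2PA spec; Adobe's actual implementation may differ.
do_not_train_assertion = {
    "label": "c2pa.training-mining",
    "data": {
        "entries": {
            "c2pa.ai_training": {"use": "notAllowed"},
            "c2pa.ai_generative_training": {"use": "notAllowed"},
            "c2pa.data_mining": {"use": "notAllowed"},
        }
    },
}
```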

Adobe doesn’t aspire to store every content credential in the universe

“From the beginning, before we wrote the first line of code for this tool, we asked creators in the Adobe ecosystem and outside the Adobe ecosystem what they wanted to see in it,” said Parsons. “We got a lot of feedback, but we haven’t finished this. So the private beta is meant to last a few months, during which we’ll collect more feedback.”

The system also adds an invisible watermark to the image that links to the credentials stored on Adobe’s servers. If someone tries to strip that information out of the image or takes a screenshot of it, it should be recoverable. If someone alters the image, the credentials will theoretically disappear, and the image will no longer be verified as authentic.

“Photoshop users don’t want a watermark that somehow changes the look or adds noise to an image that has it. So we did a lot of work to make sure that this was noise-free, that it works with images of very different resolutions and different kinds of color content,” Parsons said.

The site is an example of how Content Credentials can work, but if the technology becomes widespread, there’ll likely be many more like it. “Adobe doesn’t aspire to store every content credential in the universe,” Parsons said. “That’s why an interoperable standard is so critical. Getty Images could host its own content credential store. Adobe has ours. Someone else could do this on the blockchain; it’s really up to the specific platform.”

Storing content credentials doesn’t require as much storage as it may seem. “We don’t store your image; we’re not building a massive registry of everyone’s content. We store just that 1KB or so of cryptographically signed metadata. And anyone can do that.”
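To make “cryptographically signed metadata” concrete, here is a minimal sketch of the underlying idea using an Ed25519 signature from the Python cryptography package. This is not Adobe’s pipeline (C2PA manifests are signed with COSE and X.509 certificates); it only shows that a registry needs to keep just a small signed record binding a claim to the image’s hash, and the record’s field names here are hypothetical:

```python
import json
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Sketch only: C2PA actually signs manifests with COSE/X.509 certificates,
# but the core idea is the same. Field names below are hypothetical.
private_key = Ed25519PrivateKey.generate()

image_bytes = b"<raw image bytes would go here>"
record = json.dumps({
    "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    "creator": "example_user",
    "edits": ["crop", "exposure"],
}).encode()

signature = private_key.sign(record)  # 64-byte Ed25519 signature

# Anyone holding the public key can check the record hasn't been altered;
# verify() raises InvalidSignature if it has.
private_key.public_key().verify(signature, record)
```

The record plus its signature comes to well under the “1KB or so” Parsons mentions, which is why hosting a credential store is cheap relative to hosting the images themselves.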

Attached Content Credentials are one of the signals Meta looks for when generating its ‘AI Info’ labels on Facebook, Instagram, and Threads.

Image: Meta

Some websites have also started using Content Credentials to provide additional context for images and videos. According to Parsons, Meta uses Content Credentials as a signal when applying the “AI Info” label it uses for Instagram, Facebook, and Threads.

YouTube has also begun using Content Credentials to label videos posted on its site. If someone uses a camera or app that attaches credentials to a video and doesn’t make any edits to it, the video will receive a “Captured with a camera” label meant to certify that what you’re seeing is an unaltered version of what the camera captured.

Adobe also recently released the Adobe Content Authenticity extension for the Chrome browser, which surfaces Content Credentials on any site if it detects images that have them attached. “I think of it as sort of a decoder ring,” said Parsons. “Once you install the decoder ring, you can see all the invisible stuff on the web.”

The Chrome extension can pick out images with Content Credentials, even if the site they’re hosted on doesn’t natively tag them.

He anticipates that, someday, the extension won’t be necessary and that the information it provides will be more broadly available. “Of course, it really belongs in web browsers and operating systems,” he said. “I do anticipate a fair amount of work in the next 12 months going into browser support from folks like Microsoft and Google and others. That’s really the big next step.”

A not-so-seamless experience

We ran into some strange behavior when testing these tools, though the issues were limited to how they were being displayed – or rather, not displayed – on the web. We added an AI-generated element to two images using Photoshop, then exported and uploaded them to Instagram.

The Content Credentials inspection site properly identified the images as having been edited and showed the changes we’d made. Instagram, however, only added the “AI Info” option to one of them, despite both having gone through the same editing chain. The label never showed up when the same images were posted to Threads. When we opened the images on Instagram, Adobe’s Chrome extension said there were no images on the page with Credentials attached, though it’s worth noting that the tool is still in beta.

We were eventually able to see a history of the edits made to this image after screenshotting it from Instagram, re-uploading it to the verification site, and clicking the “Search for possible matches” button, but that’s not exactly a seamless experience.

Adobe’s verification site successfully recovered the credentials after we hit the “Search for possible matches” button. However, there’s clearly still a long way to go before sites can reliably use Content Credentials to provide information about an image’s provenance or to identify images that were made or altered using AI image generation. That’s certainly a bit disappointing, as photographers and artists hoping to use the system to watermark images uploaded to social media as their own can’t necessarily rely on it yet.

It’s also worth noting that our test was essentially the best-case scenario; we made no effort to hide that AI was used or to remove the Content Credentials. But while it does show cracks in the ecosystem, Content Credentials not showing up on an image that should have them is a much better outcome than if they had shown up on an image that shouldn’t.

New Cameras with Content Credentials

During Adobe MAX, Nikon announced that it’s bringing Content Credentials to the Z6III at some point next year. In a demo at the show, images taken with the Z6III had credentials attached verifying the time and date they were taken, along with the ISO, shutter speed, and aperture used.

Currently, it seems like the function will be limited to professional users, such as photojournalists.

What’s left to do?

Despite the ecosystem improvements, there’s absolutely still work to be done on Content Credentials. When we tested the system in July, we found a surprising lack of interoperability between Lightroom / ACR and Photoshop, and the issue still persists today. If you make edits to an image in Lightroom or ACR, then open it in Photoshop and save the file with Content Credentials, there won’t be any information about what you did in ACR or Lightroom. You can work around this by saving the file from Lightroom or ACR as a PNG or JPEG and then opening that in Photoshop, but obviously, that’s not an ideal workflow.

That watermarking durability guarantee is important

The tools for incorporating Content Credentials into video are even less mature. Parsons says there are some third-party tools starting to support the metadata, such as streaming video players, and that Adobe is working on applying the invisible watermarks to videos as well. “For us, that watermarking durability guarantee is important. And we’ll have video with that – I can’t put a date on it, but that’s something that we’re very focused on. Same for audio.”

Then there’s the issue of cameras. Even if you have a camera that theoretically supports Content Credentials, such as several of Sony’s flagships or the Nikon Z6III, you almost certainly can’t use them. Both companies currently treat it as a feature exclusively for businesses, governments and journalists, requiring special firmware and licenses to enable it.

To be fair, those entities are generally the ones producing images where Content Credentials will be the most important. Most photographers’ work doesn’t require the same level of transparency and scrutiny as images released by law enforcement agencies or photojournalism wire services. However, in an age where news is increasingly documented by regular people using their cell phones, the feature will have to become available to average consumers at some point to have any hope of gaining traction.

I don’t think anybody cares how secure a picture of my cat is.

One camera manufacturer is letting people use Content Credentials out of the box: Leica. Its implementation also uses special hardware, similar to Apple’s Secure Enclave or Google’s Titan chips, which are used to store biometrics and other sensitive data, instead of relying on software. Nikon’s Z6III also features hardware support for Content Credentials, unlike the Z8 and Z9. In reference to the information stored on Apple’s chip, Parsons said, “Three-letter agencies in the U.S. government don’t have access to that, neither does Apple in this case. So that’s the vision that we have for cameras.” According to him, “If you want ultimate security and a testament to the fact that the camera made a particular image, we’d prefer to see that as a hardware implementation.”

He did, however, reiterate that there are times when that level of security isn’t necessary. “If you are the NSA or a government or somebody working in a sensitive area… Maybe somewhere where your identity could be compromised, or you’d be put in harm’s way as a photojournalist, you probably do want that level of security. And certain devices need to provide it. Think about a body-cam image versus my picture of my cat. In the former case, it’s probably very important because that’s likely to see the scrutiny of a court of law, but I don’t think anybody cares how secure a picture of my cat is.”


Content Credentials and other authenticity systems are only part of building trust in an age of generative AI and widespread misinformation campaigns. “This is not a silver bullet,” Parsons said. “It’s not solving the totality of the problem. We know from many studies that many organizations have done in many parts of the world that people tend to share what fits their worldview on social media. In many cases, even if they know it’s fake. If it fits your point of view and you want to further your point of view, it almost doesn’t matter.”

“This is not a silver bullet”

Instead, Parsons views Content Credentials as one of the tools people can use when deciding to trust certain sources or pieces of content. “If somebody receives an image that someone has deliberately shared, you know, misinformation or deliberate disinformation, and can tap on that nutrition label and find out for themselves what it is, we think that fulfills a basic right that we all have.”


