From June 3-7, our software developers Steve Richey and Daniel Pfeiffer attended WWDC in San Jose, CA. WWDC hosted over 100 Apple-presented technical and design-focused sessions, along with hands-on labs, expert-led consultations, special events, and exciting guest speakers.
Our team had a full schedule and thoroughly enjoyed their time at the conference. Upon their return, our office was eager to hear about Steve and Dan’s experience. Here are their Top 8 favorite things about WWDC 2019.
iPadOS

The iPad gets its own dedicated OS this year in iPadOS. While iPadOS is still iOS under the hood, this is more than just a marketing tactic. A dedicated OS signifies a renewed commitment to the iPad and comes with a host of new features that dramatically enhance users’ productivity.
Safari on iPadOS now boasts a full desktop-class browsing experience. This is more than just simulating the user agent of desktop Safari on the iPad. Safari has optimized web page rendering and interactions so that even complex Web apps (think Google Docs) now work on the iPad. Safari on iPadOS also includes a download manager and new keyboard shortcuts.
iPadOS also introduces a unique multitasking experience that allows a single app to open multiple windows, which can be placed in a split view (like Safari in iOS 12) or side-by-side with other apps to create “spaces” (so you could, for example, have your notes next to Safari and Mail at the same time). Developers of third-party applications will need to make a few changes to their apps to support this feature.
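For third-party apps, supporting multiple windows means adopting the new scene-based lifecycle alongside the familiar app delegate. Here is a minimal sketch, under the assumption of a plain UIKit app that declares a scene manifest in its Info.plist; the class and controller names are illustrative, not from an Apple sample:

```swift
import UIKit

// Sketch: adopting the scene lifecycle so the system can create
// multiple windows ("scenes") of this app. This class is referenced
// from the Info.plist scene manifest (UIApplicationSceneManifest,
// with UIApplicationSupportsMultipleScenes set to true).
class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?

    func scene(_ scene: UIScene,
               willConnectTo session: UISceneSession,
               options connectionOptions: UIScene.ConnectionOptions) {
        guard let windowScene = scene as? UIWindowScene else { return }
        // Each scene gets its own UIWindow and root view controller,
        // so two windows of the same app can show different content.
        let window = UIWindow(windowScene: windowScene)
        window.rootViewController = UIViewController()
        window.makeKeyAndVisible()
        self.window = window
    }
}
```

The key shift is that per-window state moves out of the app delegate and into each scene delegate, since the system may now have several windows of the app alive at once.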
In addition to Safari and multitasking, iPadOS also includes updates to the Files app (adding network share and USB drive support), the home screen (allows widgets to be pinned to the first screen), and more.
SwiftUI

SwiftUI was a massive announcement for developers. It is a new framework that offers a declarative, unified API for implementing user interfaces across all of Apple’s platforms (watchOS, iOS, iPadOS, tvOS, and macOS).
A declarative syntax allows developers to create an application’s UI by describing what the interface should do, as opposed to the current imperative approach, which requires developers to describe how the UI works. This shift in strategy promises to give developers increased productivity with fewer bugs resulting from disagreements between the UI and the state of the app’s data. This approach to creating UIs is popular with other frameworks, such as React.
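To make the declarative idea concrete, here is a minimal sketch of a SwiftUI view (our own illustration, not code from an Apple session): we declare what the UI shows for a given piece of state, and the framework keeps the screen in sync when that state changes.

```swift
import SwiftUI

// A minimal declarative view: the body describes *what* to show
// for the current value of `count`; we never manually update labels.
struct CounterView: View {
    @State var count = 0  // the source of truth for this view

    var body: some View {
        VStack(spacing: 12) {
            Text("Count: \(count)")   // re-rendered whenever count changes
            Button("Increment") {
                self.count += 1       // mutating state triggers a UI refresh
            }
        }
    }
}
```

Compare this with the imperative UIKit equivalent, where you would create a label, register a button target, and remember to update the label text in the handler; in SwiftUI the UI cannot drift out of agreement with the data, because it is derived from it.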
SwiftUI is paired with an impressive set of design tools in Xcode 11 that let developers preview their UIs in real time as they write them. This will further increase developer productivity by making it easier to catch layout issues that may depend on specific screen sizes or environments.
While SwiftUI may not yet be ready for complex applications, it does promise to be a future replacement (perhaps in 2-3 years) for the current UIKit/AppKit tooling (including Interface Builder and storyboards).
Marzipan is Project Catalyst
With this new addition, Apple will finally allow more apps to be available on your Mac. Project Catalyst will enable developers to bring iOS app experiences to the upcoming version of macOS. Although several iOS apps have migrated to macOS (News, Stocks, Voice Memos, and Home), there’s a huge opportunity to expand the app catalog.
Apple’s Senior VP of Software Engineering Craig Federighi announced exciting news at 2019 WWDC. “One development team, for the first time, can create a single app that spans from the iPhone to the iPad to the Mac,” said Federighi.
SF Symbols

SF Symbols introduces a comprehensive library of vector-based symbols that you can incorporate into your apps. Symbols align automatically with text and support a range of sizes and weights, which simplifies the layout of UI elements. With SF Symbols, Apple hopes to make it easier for developers to adapt to different screen sizes and to improve apps’ overall accessibility.
RealityKit and Reality Composer
Apple announced updates to ARKit that make it easier for AR apps to capture body motion and to place virtual content realistically in front of and behind people in the scene, a feature known as people occlusion. Also at WWDC, Apple announced RealityKit, a new framework that provides photorealistic rendering and physics simulation with a native Swift API, along with Reality Composer, a companion tool for building AR experiences on Mac, iPad, and iPhone.
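As a flavor of the RealityKit API, here is a small sketch (our own, not from an Apple sample) that anchors a simple box to a detected horizontal surface; the sizes and material are arbitrary illustration values:

```swift
import RealityKit

// Sketch: place a small metallic box on a real-world horizontal plane.
let arView = ARView(frame: .zero)

// AnchorEntity ties virtual content to a detected plane in the world.
let anchor = AnchorEntity(plane: .horizontal)

// ModelEntity pairs a mesh with materials; RealityKit handles the
// physically based rendering described in the session.
let box = ModelEntity(mesh: .generateBox(size: 0.1),
                      materials: [SimpleMaterial(color: .gray,
                                                 isMetallic: true)])
anchor.addChild(box)
arView.scene.addAnchor(anchor)
```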
Check out this video for more info.
Core ML 3

Core ML 3 has been expanded to provide even more mobile machine learning capabilities to your app. With on-device learning, developers can ensure that a user’s instance of a neural network is specialized for them, without requiring the user to share personal data with the cloud. Apple even revealed that Face ID uses this continual learning process to adapt as a user’s face changes over time.

Also, a new macOS Create ML app makes it easy to build Core ML models for a variety of tasks, from image classification to sentiment analysis, making it easier than ever for novice users to develop machine learning experiences. Models created in Create ML can be exported immediately for use in an iOS or iPadOS application.
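On-device personalization is driven by Core ML 3’s update API. A hedged sketch of the shape of that flow, assuming you already have a compiled, updatable model on disk and a batch of the user’s local training examples (the function name and persistence choice are ours):

```swift
import CoreML

// Sketch of Core ML 3 on-device personalization: retrain an updatable
// model with the user's own examples, entirely on the device, so no
// personal data ever leaves it.
func personalize(modelURL: URL, trainingData: MLBatchProvider) throws {
    let task = try MLUpdateTask(
        forModelAt: modelURL,          // compiled, updatable .mlmodelc
        trainingData: trainingData,    // the user's local examples
        configuration: MLModelConfiguration(),
        completionHandler: { context in
            // Persist the personalized model for future predictions.
            try? context.model.write(to: modelURL)
        })
    task.resume()  // training runs asynchronously on device
}
```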
For more info, click here.
Sign in with Apple (Privacy Emphasis)
Apple unveiled its new “Sign in with Apple” login feature, which is billed as more private and secure than rival offerings from Facebook and Google. Although the competitors’ sign-in features are secure, they can compromise your privacy and track you. Apple’s sign-in, by contrast, allows you to sign in to third-party applications “without revealing any personal information.”
In addition, Sign in with Apple offers you the option of a randomly generated email address if you don’t want to use your own with a third-party app. Messages sent to that address are forwarded to your real account, and you can shut the address down whenever you choose. Finally, Face ID lets you sign in to your accounts with your face.
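For developers, the flow is driven by the new AuthenticationServices API. A minimal sketch of kicking it off (the function name and delegate wiring are illustrative; a real app would also present the controller and handle the returned credential):

```swift
import AuthenticationServices

// Sketch: start the "Sign in with Apple" flow. We request only the
// scopes we need; the user can still choose to hide their real email,
// in which case Apple supplies a private relay address instead.
func startSignInWithApple(delegate: ASAuthorizationControllerDelegate) {
    let request = ASAuthorizationAppleIDProvider().createRequest()
    request.requestedScopes = [.fullName, .email]

    let controller = ASAuthorizationController(
        authorizationRequests: [request])
    controller.delegate = delegate   // receives the credential on success
    controller.performRequests()
}
```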
Natural Language Framework
Natural Language is a redesigned framework that provides high-performance, on-device APIs for fundamental NLP tasks across Apple platforms. Through integration with Core ML and Create ML, it gives you the ability to train custom NLP models to perform various inferences and leverage the power of NLP in your apps. Combined with the existing suite of Core ML and Vision framework tools, Apple has provided a clear path for developers to perform real-time text detection, recognition, and analysis. The entire pipeline can run on-device, ensuring that user privacy and security are maintained.
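As a taste of the built-in tagging API, here is a short sketch (our own example sentence) that finds person and place names in a string using NLTagger; everything runs locally, so no text leaves the device:

```swift
import NaturalLanguage

// Sketch: on-device named-entity tagging with the Natural Language
// framework. The exact tags found depend on the built-in model.
let text = "Tim Cook introduced SwiftUI at WWDC in San Jose."
let tagger = NLTagger(tagSchemes: [.nameType])
tagger.string = text

var names: [String] = []
tagger.enumerateTags(in: text.startIndex..<text.endIndex,
                     unit: .word,
                     scheme: .nameType,
                     options: [.omitPunctuation, .omitWhitespace]) { tag, range in
    if tag == .personalName || tag == .placeName {
        names.append(String(text[range]))  // collect detected entities
    }
    return true  // keep enumerating the remaining words
}
```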
Watch this video for more information.
Did you attend WWDC this year? What stood out to you? Interested in what was revealed last year? Check it out here.
Latest posts by Alexis Benson