Feed aggregator

Recommendation: Hollywood Africans by Jon Batiste

iPhone J.D. - Wed, 10/17/2018 - 01:12

I don't talk about music very much on iPhone J.D., but if you are looking for something truly amazing to listen to on your iPhone and you enjoy the piano, I strongly recommend that you check out the newest album by Jon Batiste called Hollywood Africans.  Although Jon Batiste has been playing music his entire life — he comes from a big music family in New Orleans — I suspect that most folks simply know him as the bandleader on The Late Show with Stephen Colbert.  But he is far from simply a TV personality; he is a seriously talented musician, and I often find my jaw dropping as I watch him play the piano. 

Before listening to the album, I recommend that you listen to the first 20 minutes of a great recent episode of NPR's Fresh Air podcast, in which Batiste sits down at a piano with Terry Gross, plays parts of some of the songs on the album, and explains what motivated him to create this album.  Click here to listen on the NPR website, or if you use the Overcast app to listen to podcasts, here is a direct link. Using just my Apple Watch Series 4 and my AirPods, I enjoyed listening to that episode last night during an outdoor walk.  As I used my Apple Watch to listen to Jon Batiste, I remembered that he was actually featured in a 15 second ad for the Apple Watch in early 2016; the link in my In the news post from back then no longer works, but you can still watch the video on YouTube at this link.

As for the album itself, every song is great, but I'll just mention the first two.  The first song is Kenner Boogie (Apple Music link), an original piano song that will make you tap your toes and smile, all the while wondering how one person can play all of those piano keys so quickly with just two hands.  The second song is What a Wonderful World (Apple Music link), a song first recorded by Louis Armstrong in 1967.  That song has been performed and interpreted countless times, but I've never heard an arrangement anything like this.  Incredibly beautiful and moving.

I've seen Jon Batiste perform several times, and the first time I saw him was on May 1, 2005 at Jazz Fest in New Orleans, back when he was a teenager studying at Juilliard.  I only know the date because I was so impressed by his performance that I bought his first album, Times in New Orleans (Apple Music link), and my wife took the picture at the right of me doing so.  He was good back then; he is fantastic today.

Click here to listen to Hollywood Africans on Apple Music

Click here to get Hollywood Africans on Amazon

Categories: iPhone Web Sites

Injecting Code into Windows Protected Processes using COM - Part 1

Google Project Zero - Tue, 10/16/2018 - 12:34
Posted by James Forshaw, Google Project Zero
At Recon Montreal 2018 I presented “Unknown Known DLLs and other Code Integrity Trust Violations” with Alex Ionescu. We described the implementation of Microsoft Windows’ Code Integrity mechanisms and how Microsoft implemented Protected Processes (PP). As part of that I demonstrated various ways of bypassing Protected Process Light (PPL), some requiring administrator privileges, others not.
In this blog I’m going to describe the process I went through to discover a way of injecting code into a PPL on Windows 10 1803. As the only issue Microsoft considered to be violating a defended security boundary has now been fixed, I can discuss the exploit in more detail.

Background on Windows Protected Processes

The origins of the Windows Protected Process (PP) model stretch back to Vista, where it was introduced to protect DRM processes. The protected process model was heavily restricted, limiting loaded DLLs to a subset of code installed with the operating system. Also, for an executable to be eligible to start protected, it must be signed with a specific Microsoft certificate which is embedded in the binary. One protection the kernel enforced is that a non-protected process couldn’t open a handle to a protected process with enough rights to inject arbitrary code or read memory.
In Windows 8.1 a new mechanism was introduced, Protected Process Light (PPL), which made the protection more generalized. PPL loosened some of the restrictions on what DLLs were considered valid for loading into a protected process and introduced different signing requirements for the main executable. Another big change was the introduction of a set of signing levels to separate out different types of protected processes. A PPL in one level can open for full access any process at the same signing level or below, with a restricted set of access granted to levels above. These signing levels were extended to the old PP model, a PP at one level can open all PP and PPL at the same signing level or below, however the reverse was not true, a PPL can never open a PP at any signing level for full access. Some of the levels and this relationship are shown below:
Signing levels allow Microsoft to open up protected processes to third parties, although at the current time the only type of protected process that a third party can create is an Anti-Malware PPL. The Anti-Malware level is special as it allows the third party to add additional permitted signing keys by registering an Early Launch Anti-Malware (ELAM) certificate. There is also Microsoft’s TruePlay, an Anti-Cheat technology for games which uses components of PPL, but it isn’t really important for this discussion.
I could spend a lot of this blog post describing how PP and PPL work under the hood, but I recommend reading the blog post series by Alex Ionescu instead (Parts 1, 2 and 3) which will do a better job. While the blog posts are primarily based on Windows 8.1, most of the concepts haven’t changed substantially in Windows 10.
I’ve written about Protected Processes before [link], in the form of the custom implementation by Oracle in their VirtualBox virtualization platform on Windows. The blog showed how I bypassed the process protection using multiple different techniques. What I didn’t mention at the time was that the first technique I described, injecting JScript code into the process, also worked against Microsoft's PPL implementation. I reported to Microsoft that I could inject arbitrary code into a PPL (see Issue 1336) from an abundance of caution in case Microsoft wanted to fix it. In this case Microsoft decided it wouldn’t be fixed in a security bulletin. However, Microsoft did fix the issue in the next major release of Windows (version 1803) by adding the following code to CI.DLL, the Kernel’s Code Integrity library:
UNICODE_STRING g_BlockedDllsForPPL[] = {
  // Blacklist of 5 DLL names, including JSCRIPT.DLL and SCROBJ.DLL.
  // ...
};

NTSTATUS CipMitigatePPLBypassThroughInterpreters(PEPROCESS Process,
                                                LPBYTE Image,
                                                SIZE_T ImageSize) {
  if (!PsIsProtectedProcess(Process))
    return STATUS_SUCCESS;

  UNICODE_STRING OriginalImageName;
  // Get the original filename from the image resources.
  SIPolicyGetOriginalFilenameAndVersionFromImageBase(
      Image, ImageSize, &OriginalImageName);
  for (int i = 0; i < _countof(g_BlockedDllsForPPL); ++i) {
    if (RtlEqualUnicodeString(&g_BlockedDllsForPPL[i],
                              &OriginalImageName, TRUE)) {
      // The name matches the blacklist; reject the image load.
      return STATUS_DYNAMIC_CODE_BLOCKED;
    }
  }
  return STATUS_SUCCESS;
}
The fix checks the original file name in the resource section of the image being loaded against a blacklist of 5 DLLs. The blacklist includes DLLs such as JSCRIPT.DLL, which implements the original JScript scripting engine, and SCROBJ.DLL, which implements scriptlet objects. If the kernel detects a PP or PPL loading one of these DLLs the image load is rejected with STATUS_DYNAMIC_CODE_BLOCKED. This kills my exploit: if you modify the resource section of one of the listed DLLs, the signature of the image is invalidated, and the image load fails due to a cryptographic hash mismatch. It’s actually the same fix that Oracle used to block the attack in VirtualBox, although that was implemented in user mode.

Finding New Targets

The previous injection technique using script code was generic and worked on any PPL which loaded a COM object. With the technique fixed I decided to go back and look at which executables will load as a PPL, to see if they have any obvious vulnerabilities I could exploit to get arbitrary code execution. I could have chosen to go after a full PP, but PPL seemed the easier of the two and I’ve got to start somewhere. There are many ways to inject into a PPL if we can get administrator privileges, the simplest being just loading a kernel driver. For that reason any vulnerability I discover must work from a normal user account. I also wanted the highest signing level I could get, which means PPL at the Windows TCB signing level.
The first step was to identify executables which run as a protected process, as this gives us the maximum attack surface to analyze for vulnerabilities. Based on the blog posts from Alex it seemed that in order to be loaded as PP or PPL the signing certificate needs a special Object Identifier (OID) in the certificate’s Enhanced Key Usage (EKU) extension. There are separate OIDs for PP and PPL; we can see this below in a comparison between WERFAULTSECURE.EXE, which can run as PP/PPL, and CSRSS.EXE, which can only run as PPL.

I decided to look for executables which have an embedded signature with these EKU OIDs, giving me a list of all executables to examine for exploitable behavior. I wrote the Get-EmbeddedAuthenticodeSignature cmdlet for my NtObjectManager PowerShell module to extract this information.
At this point I realized there was a problem with relying on the signing certificate: a lot of binaries I expected to be allowed to run as PP or PPL were missing from the list I generated. As PP was originally designed for DRM, the list contained no obvious executable to handle the Protected Media Path, such as AUDIODG.EXE. Also, based on my previous research into Device Guard and Windows 10S, I knew there must be an executable in the .NET framework which could run as PPL to add cached signing level information to NGEN-generated binaries (NGEN is an Ahead-of-Time compiler which converts a .NET assembly into native code). The criteria for PP/PPL were more fluid than I expected. Instead of doing static analysis I decided to perform dynamic analysis: start every executable I could enumerate as protected and query the protection level granted. I wrote the following script to test a single executable:
Import-Module NtObjectManager

function Test-ProtectedProcess {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory, ValueFromPipelineByPropertyName)]
        [string]$FullName,
        [NtApiDotNet.PsProtectedType]$ProtectedType = 0,
        [NtApiDotNet.PsProtectedSigner]$ProtectedSigner = 0
        )
    BEGIN {
        $config = New-NtProcessConfig abc -ProcessFlags ProtectedProcess `
            -ThreadFlags Suspended -TerminateOnDispose `
            -ProtectedType $ProtectedType `
            -ProtectedSigner $ProtectedSigner
    }

    PROCESS {
        $path = Get-NtFilePath $FullName
        Write-Host $path
        try {
            Use-NtObject($p = New-NtProcess $path -Config $config) {
                $prot = $p.Process.Protection
                $props = @{
                    Path=$path;
                    Type=$prot.Type;
                    Signer=$prot.Signer;
                    Level=$prot.Level.ToString("X");
                }
                $obj = New-Object -TypeName PSObject -Prop $props
                Write-Output $obj
            }
        } catch {
        }
    }
}
When this script is executed a function is defined, Test-ProtectedProcess. The function takes a path to an executable, starts that executable with a specified protection level and checks whether it was successful. If the ProtectedType and ProtectedSigner parameters are 0 then the kernel decides the “best” process level. This leads to some annoying quirks: for example, SVCHOST.EXE is explicitly marked as PPL and will run at PPL-Windows level, however as it’s also a signed OS component the kernel will determine its maximum level is PP-Authenticode. Another interesting quirk is that using the native process creation APIs it’s possible to start a DLL as the main executable image. As a significant number of system DLLs have embedded Microsoft signatures they can also be started as PP-Authenticode, even though this isn’t necessarily that useful. The list of binaries that will run at PPL is shown below along with their maximum signing level.
Path                                                           Signing Level
C:\windows\Microsoft.Net\Framework\v4.0.30319\mscorsvw.exe     CodeGen
C:\windows\Microsoft.Net\Framework64\v4.0.30319\mscorsvw.exe   CodeGen
C:\windows\system32\SecurityHealthService.exe                  Windows
C:\windows\system32\svchost.exe                                Windows
C:\windows\system32\xbgmsvc.exe                                Windows
C:\windows\system32\csrss.exe                                  Windows TCB
C:\windows\system32\services.exe                               Windows TCB
C:\windows\system32\smss.exe                                   Windows TCB
C:\windows\system32\werfaultsecure.exe                         Windows TCB
C:\windows\system32\wininit.exe                                Windows TCB

Injecting Arbitrary Code Into NGEN

After carefully reviewing the list of executables which run as PPL I settled on trying to attack the previously mentioned .NET NGEN binary, MSCORSVW.EXE. My rationale for choosing the NGEN binary was:
  • Most of the other binaries are service binaries which might need administrator privileges to start correctly.
  • The binary is likely to be loading complex functionality such as the .NET framework as well as having multiple COM interactions (my go-to technology for weird behavior).
  • In the worst case it might still yield a Device Guard bypass as the reason it runs as PPL is to give it access to the kernel APIs to apply a cached signing level. Any bug in the operation of this binary might be exploitable even if we can’t get arbitrary code running in a PPL.

But there is an issue with the NGEN binary: it doesn’t meet my own criterion of getting the top signing level, Windows TCB. However, I knew that when Microsoft fixed Issue 1332 they left in a back door where a writable handle could be maintained during the signing process if the calling process is a PPL, as shown below:
NTSTATUS CiSetFileCache(HANDLE Handle, ...) {
  PFILE_OBJECT FileObject;
  ObReferenceObjectByHandle(Handle, &FileObject);

  if (FileObject->SharedWrite ||
      (FileObject->WriteAccess &&
       PsGetProcessProtection().Type != PROTECTED_LIGHT)) {
    // Reject the caching request: the file is writable and the
    // caller is not a PPL.
    return STATUS_SHARING_VIOLATION;
  }

  // Continue setting file cache.
}
If I could get code execution inside the NGEN binary I could reuse this backdoor to cache sign an arbitrary file which will load into any PPL. I could then DLL hijack a full PPL-WindowsTCB process to reach my goal.
To begin the investigation we need to determine how the MSCORSVW executable is used. It is not documented anywhere by Microsoft, so we’ll have to do a bit of digging. First off, this binary is not supposed to be run directly; instead it’s invoked by NGEN when creating an NGEN’ed binary. Therefore, we can run the NGEN binary and use a tool such as Process Monitor to capture what command line is being used for the MSCORSVW process. Executing the command:
C:\> NGEN install c:\some\binary.dll
Results in the following command line being executed:
MSCORSVW -StartupEvent A -InterruptEvent B -NGENProcess C -Pipe D
A, B, C and D are handles which NGEN ensures are inherited into the new process before it starts. As we don’t see any of the original NGEN command line parameters it seems likely they’re being passed over an IPC mechanism. The “Pipe” parameter indicates that named pipes are used for IPC. Digging into the code in MSCORSVW, we find the method NGenWorkerEmbedding, which looks like the following:
void NGenWorkerEmbedding(HANDLE hPipe) {
  CorSvcBindToWorkerClassFactory factory;

  // Marshal class factory.
  IStream* pStm;
  CreateStreamOnHGlobal(nullptr, TRUE, &pStm);
  CoMarshalInterface(pStm, &IID_IClassFactory, &factory,
                     MSHCTX_LOCAL, nullptr, MSHLFLAGS_NORMAL);

  // Read marshaled object and write to pipe.
  DWORD length;
  char* buffer = ReadEntireIStream(pStm, &length);
  WriteFile(hPipe, &length, sizeof(length));
  WriteFile(hPipe, buffer, length);

  // Set event to synchronize with parent.

  // Pump message loop to handle COM calls.

  // ...
}
This code is not quite what I expected. Rather than using the named pipe for the entire communication channel, it’s only used to transfer a marshaled COM object back to the calling process. The COM object is a class factory instance. Normally you’d register the factory using CoRegisterClassObject, but that would make it accessible to all processes at the same security level; by using marshaling instead, the connection can be kept private to the NGEN binary which spawned MSCORSVW. A .NET-related process using COM gets me interested, as I’ve previously described in another blog post how you can exploit COM objects implemented in .NET. If we’re lucky this COM object is implemented in .NET. We can determine whether it is by querying for its interfaces, for example with the Get-ComInterface command in my OleViewDotNet PowerShell module, as shown in the following screenshot.

We’re out of luck: this object is not implemented in .NET, as you’d at least expect to see an instance of the _Object interface. There’s only one interface implemented, ICorSvcBindToWorker, so let’s dig into that interface to see if there’s anything we can exploit.
Something caught my eye: in the screenshot there’s a HasTypeLib column, and for ICorSvcBindToWorker that column is set to True. HasTypeLib indicates that rather than the interface’s proxy code being implemented using a predefined NDR byte stream, it’s generated on the fly from a type library. I’ve abused this auto-generating proxy mechanism before to elevate to SYSTEM, reported as Issue 1112. In that issue I used some interesting behavior of the system’s Running Object Table (ROT) to force a type confusion in a system COM service. While Microsoft has fixed the issue for User to SYSTEM, there’s nothing stopping us using the type confusion trick to exploit the MSCORSVW process running as PPL at the same privilege level and get arbitrary code execution. Another advantage of using a type library is that a normal proxy would be loaded as a DLL, which means it must meet the PPL signing level requirements; a type library, however, is just data, so it can be loaded into a PPL without any signing level violations.
How does the type confusion work? Looking at the ICorSvcBindToWorker interface from the type library:
interface ICorSvcBindToWorker : IUnknown {
   HRESULT BindToRuntimeWorker(
             [in] BSTR pRuntimeVersion,
             [in] unsigned long ParentProcessID,
             [in] BSTR pInterruptEventName,
             [in] ICorSvcLogger* pCorSvcLogger,
             [out] ICorSvcWorker** pCorSvcWorker);
};
The single method, BindToRuntimeWorker, takes 5 parameters: 4 inbound and 1 outbound. When trying to access the method over DCOM from our untrusted process, the system will automatically generate the proxy and stub for the call. This includes marshaling COM interface parameters into a buffer, sending the buffer to the remote process, and then unmarshaling to a pointer before calling the real function. For example, imagine a simpler function, DoSomething, which takes a single IUnknown pointer. The marshaling process looks like the following:
The operation of the method call is as follows:
  1. The untrusted process calls DoSomething on the interface which is actually a pointer to DoSomethingProxy which was auto-generated from the type library passing an IUnknown pointer parameter.
  2. DoSomethingProxy marshals the IUnknown pointer parameter into the buffer and calls over RPC to the Stub in the protected process.
  3. The COM runtime calls the DoSomethingStub method to handle the call. This method will unmarshal the interface pointer from the buffer. Note that this pointer is not the original pointer from step 1, it’s likely to be a new proxy which calls back to the untrusted process.
  4. The stub invokes the real implemented method inside the server, passing the unmarshaled interface pointer.
  5. DoSomething uses the interface pointer, for example by calling AddRef on it via the object’s VTable.

How would we exploit this? All we need to do is modify the type library so that instead of passing an interface pointer we pass almost anything else. While the type library file is in a system location which we can’t modify, we can replace its registration in the current user’s registry hive, or use the same ROT trick from Issue 1112. For example, if we modify the type library to pass an integer instead of an interface pointer we get the following:
The operation of the marshal now changes as follows:
  1. The untrusted process calls DoSomething on the interface which is actually a pointer to DoSomethingProxy which was auto-generated from the type library passing an arbitrary integer parameter.
  2. DoSomethingProxy marshals the integer parameter into the buffer and calls over RPC to the Stub in the protected process.
  3. The COM runtime calls the DoSomethingStub method to handle the call. This method will unmarshal the integer from the buffer.
  4. The stub invokes the real implemented method inside the server, passing the integer as the parameter. However, DoSomething hasn’t changed; it’s still the same method which accepts an interface pointer. As the COM runtime has no more type information at this point, the integer is type-confused with the interface pointer.
  5. DoSomething uses the interface pointer, for example by calling AddRef on it via the object’s VTable. As this pointer is completely under control of the untrusted process this likely results in arbitrary code execution.

By changing the type of parameter from an interface pointer to an integer we induce a type confusion which allows us to get an arbitrary pointer dereferenced, resulting in arbitrary code execution. We could even simplify the attack by adding to the type library the following structure:
struct FakeObject {
   BSTR FakeVTable;
};
If we pass a pointer to a FakeObject instead of the interface pointer the auto-generated proxy will marshal the structure and its BSTR, recreating it on the other side in the stub. As a BSTR is a counted string it can contain NULLs so this will create a pointer to an object, which contains a pointer to an arbitrary byte array which can act as a VTable. Place known function pointers in that BSTR and you can easily redirect execution without having to guess the location of a suitable VTable buffer.
To fully exploit this we’d need to call a suitable method, probably running a ROP chain, and we might also have to bypass CFG. That all sounds too much like hard work, so instead I’ll take a different approach to get arbitrary code running in the PPL binary: by abusing KnownDlls.

KnownDlls and Protected Processes

In my previous blog post I described a technique to elevate privileges from an arbitrary object directory creation vulnerability to SYSTEM by adding an entry into the KnownDlls directory and getting an arbitrary DLL loaded into a privileged process. I noted that this was also an administrator-to-PPL code injection, as a PPL will also load DLLs from the system’s KnownDlls location. As the code signing check is performed during section creation, not section mapping, as long as you can place an entry into KnownDlls you can load anything into a PPL, even unsigned code.
This doesn’t immediately seem that useful: we can’t write to KnownDlls without being an administrator, and even then not without some clever tricks. However, it’s worth looking at how a Known DLL is loaded to understand how it can be abused. Inside NTDLL’s loader (LDR) code is the following function to determine if there’s a preexisting Known DLL:
NTSTATUS LdrpFindKnownDll(PUNICODE_STRING DllName, HANDLE *SectionHandle) {
  // If KnownDll directory handle not open then return error.
  if (!LdrpKnownDllDirectoryHandle)
    return STATUS_DLL_NOT_FOUND;

  OBJECT_ATTRIBUTES ObjectAttributes;
  InitializeObjectAttributes(&ObjectAttributes, DllName,
                             OBJ_CASE_INSENSITIVE,
                             LdrpKnownDllDirectoryHandle, nullptr);

  return NtOpenSection(SectionHandle,
                       SECTION_MAP_READ | SECTION_MAP_EXECUTE,
                       &ObjectAttributes);
}
The LdrpFindKnownDll function calls NtOpenSection to open the named section object for the Known DLL. It doesn’t open an absolute path; instead it uses the feature of the native system calls to specify a root directory for the object name lookup in the OBJECT_ATTRIBUTES structure. This root directory comes from the global variable LdrpKnownDllDirectoryHandle. Implementing the call this way allows the loader to specify only the filename (e.g. EXAMPLE.DLL) and not have to reconstruct the absolute path, as the lookup will be relative to an existing directory. Chasing references to LdrpKnownDllDirectoryHandle, we can find it’s initialized in LdrpInitializeProcess as follows:
NTSTATUS LdrpInitializeProcess() {
  // ...
  PPEB peb = // ...
  // If a full protected process don't use KnownDlls.
  if (peb->IsProtectedProcess && !peb->IsProtectedProcessLight) {
    LdrpKnownDllDirectoryHandle = nullptr;
  } else {
    OBJECT_ATTRIBUTES ObjectAttributes;
    UNICODE_STRING DirName;
    RtlInitUnicodeString(&DirName, L"\\KnownDlls");
    InitializeObjectAttributes(&ObjectAttributes, &DirName,
                               OBJ_CASE_INSENSITIVE,
                               nullptr, nullptr);
    // Open KnownDlls directory.
    NtOpenDirectoryObject(&LdrpKnownDllDirectoryHandle,
                          DIRECTORY_QUERY | DIRECTORY_TRAVERSE,
                          &ObjectAttributes);
  }
  // ...
}
This code shouldn’t be that unexpected, the implementation calls NtOpenDirectoryObject, passing the absolute path to the KnownDlls directory as the object name. The opened handle is stored in the LdrpKnownDllDirectoryHandle global variable for later use. It’s worth noting that this code checks the PEB to determine if the current process is a full protected process. Support for loading Known DLLs is disabled in full protected process mode, which is why even with administrator privileges and the clever trick I outlined in the last blog post we could only compromise PPL, not PP.
How does this knowledge help us? We can use our COM type confusion trick to write values into arbitrary memory locations instead of trying to hijack code execution, resulting in a data-only attack. As we can inherit any handles we like into the new PPL process, we can set up an object directory with a named section, then use the type confusion to change the value of LdrpKnownDllDirectoryHandle to the value of the inherited handle. If we induce a DLL load from System32 with a known name, the LDR will check our fake directory for the named section and map our unsigned code into memory, even calling DllMain for us. No need for injecting threads, ROP or bypassing CFG.
All we need is a suitable primitive to write an arbitrary value. Unfortunately, while I could find methods which would cause an arbitrary write, I couldn’t sufficiently control the value being written. In the end I used the following interface and method, implemented on the object returned by ICorSvcBindToWorker::BindToRuntimeWorker:
interface ICorSvcPooledWorker : IUnknown {
   HRESULT CanReuseProcess(
           [in] OptimizationScenario scenario,
           [in] ICorSvcLogger* pCorSvcLogger,
           [out] long* pCanContinue);
};
In the implementation of CanReuseProcess, the target of pCanContinue is always initialized to 0. Therefore, by replacing the [out] long* in the type library definition with [in] long, we can get 0 written to any memory location we specify. By prefilling the lower 16 bits of the new process’ handle table with handles to a fake KnownDlls directory, we can guarantee an alias between the real KnownDlls handle, which is opened once the process starts, and one of our fake ones, just by setting the top 16 bits of the handle to 0. This is shown in the following diagram:

Once we’ve overwritten the top 16 bits with 0 (the write is 32 bits but handles are 64 bits in 64-bit mode, so we won’t overwrite anything important), LdrpKnownDllDirectoryHandle now points to one of our fake KnownDlls handles. We can then easily induce a DLL load by sending a custom marshaled object to the same method, and we’ll get arbitrary code execution inside the PPL.

Elevating to PPL-Windows TCB

We can’t stop here: attacking MSCORSVW only gets us PPL at the CodeGen signing level, not Windows TCB. Knowing that a fake cached-signed DLL should load into a PPL, and that Microsoft left a backdoor for PPL processes at any signing level, I converted my C# code from Issue 1332 to C++ to generate a fake cached-signed DLL. By abusing a DLL hijack in WERFAULTSECURE.EXE, which can run as PPL at Windows TCB, we should get code execution at the desired signing level. This worked on Windows 10 1709 and earlier; however, it didn’t work on 1803. Clearly Microsoft had changed the behavior of cached signing levels in some way, perhaps removing their trust in PPL entirely. That seemed unlikely, as it would have a negative performance impact.
After discussing this a bit with Alex Ionescu I decided to put together a quick parser with information from Alex for the cached signing data on a file. This is exposed in NtObjectManager as the Get-NtCachedSigningLevel command. I ran this command against a fake signed binary and a system binary which was also cached signed and immediately noticed a difference:

For the fake signed file the Flags are set to TrustedSignature (0x02); however, for the system binary PowerShell couldn’t decode the enumeration and so just outputs the integer value 66, which is 0x42 in hex. The value 0x40 is an extra flag on top of the original TrustedSignature flag. It seemed likely that without this flag set the DLL wouldn’t be loaded into a PPL process. Something must be setting this flag, so I decided to check what happens when a valid cached-signed DLL without the extra flag is loaded into a PPL process. Monitoring it in Process Monitor I got my answer:

The Process Monitor trace shows that first the kernel queries the DLL for its Extended Attributes (EA). The cached signing level data is stored in the file’s EA, so this is almost certainly an indication of the cached signing level being read. The full trace also shows artifacts of checking the full signature, such as enumerating catalog files; I’ve removed those artifacts from the screenshot for brevity. Finally the EA is set, and if I check the cached signing level of the file it now includes the extra flag. So setting the cached signing level is done automatically; the question is how. By pulling up the stack trace we can see how it happens:

Looking at the middle of the stack trace we can see that the call to CipSetFileCache originates from the call to NtCreateSection. The kernel is automatically caching the signature when it makes sense to do so, e.g. in a PPL, so that subsequent image mappings don’t need to recheck the signature. It’s possible to map an image section from a file with write access, so we can reuse the same attack from Issue 1332, replacing the call to NtSetCachedSigningLevel with NtCreateSection, and fake-sign any DLL. It turned out that the call to set the file cache happened after the write check introduced to fix Issue 1332, so it was possible to use this to bypass Device Guard again. For that reason I reported the bypass as Issue 1597, which was fixed in September 2018 as CVE-2018-8449. However, as with Issue 1332, the back door for PPL is still in place, so even though the fix eliminated the Device Guard bypass it can still be used to get us from PPL-CodeGen to PPL-WindowsTCB.

Conclusions

This blog showed how I was able to inject arbitrary code into a PPL without requiring administrator privileges. What could you do with this newfound power? Actually not a great deal as a normal user, but there are some parts of the OS, such as the Windows Store, which rely on PPL to secure files and resources that you can’t modify as a normal user. If you elevate to administrator and then inject into a PPL you’ll get many more things to attack, such as CSRSS (through which you can certainly get kernel code execution), or Windows Defender, which runs as PPL Anti-Malware. Over time I’m sure the majority of the use cases for PPL will be replaced with Virtual Secure Mode (VSM) and Isolated User Mode (IUM) applications, which have greater security guarantees and are also considered security boundaries that Microsoft will defend and fix.
Did I report these issues to Microsoft? Microsoft has made it clear that it will not fix issues affecting only PP and PPL in a security bulletin. Without a security bulletin the researcher receives no acknowledgement for the find, such as a CVE, and the issue will not be fixed in current versions of Windows, although it might be fixed in the next major version. Previously, confirming Microsoft’s policy on fixing a particular security issue was a matter of precedent; however, they’ve recently published a list of Windows technologies that will or will not be fixed in the Windows Security Service Criteria, which shows (as excerpted below for Protected Process Light) that Microsoft will not fix or pay a bounty for issues relating to the feature. Therefore, from now on I will not be engaging Microsoft if I discover issues which I believe only affect PP or PPL.

The one bug I reported to Microsoft was fixed only because it could be used to bypass Device Guard. When you think about it, fixing only the Device Guard angle is somewhat odd: I can still bypass Device Guard by injecting into a PPL and setting a cached signing level, and yet Microsoft won’t fix PPL issues but will fix Device Guard issues. Much as the Windows Security Service Criteria document helps to clarify what Microsoft will and won’t fix, it’s still somewhat arbitrary. A secure feature is rarely secure in isolation; it is almost certainly secure because other features enable it to be so.
In part 2 of this blog we’ll go into how I was also able to break into Full PP-WindowsTCB processes using another interesting feature of COM.
Categories: Security

IBM DS8880 Architecture and Implementation (Release 8.5)

IBM Redbooks Site - Mon, 10/15/2018 - 09:30
Draft Redbook, last updated: Mon, 15 Oct 2018

This IBM® Redbooks® publication describes the concepts, architecture, and implementation of the IBM DS8880 family.

Categories: Technology

IBM DS8880 High-Performance Flash Enclosure Gen2

IBM Redbooks Site - Fri, 10/12/2018 - 09:30
Redpaper, published: Fri, 12 Oct 2018

This IBM® Redpaper™ publication describes the IBM DS8880 High-Performance Flash Enclosure (HPFE) Gen2 architecture and configuration.

Categories: Technology

In the news

iPhone J.D. - Fri, 10/12/2018 - 00:50

I was talking to an attorney this week about buying a new iPad, and I'll tell you the same thing I told him:  don't.  At least, not right now.  All signs are that Apple will introduce two new models of the iPad Pro in the next few weeks, and perhaps a second generation version of the Apple Pencil — which part of me hopes Apple will call the "No. 2 Pencil."  The speculation is that it will support Face ID, have smaller bezels, and perhaps even use USB-C instead of Lightning.  We'll see.  And now, the news of note from the past week:

  • Virginia attorney Sharon Nelson discusses a recent incident in which the FBI compelled an iPhone owner (via a warrant) to unlock his iPhone using Face ID.
  • Nelson also discusses an incident in which police arrested someone for murder based on data from the victim's Fitbit — and it could have just as easily been an Apple Watch.  Her heart rate spiked, and then ceased to register at all, during the time that video surveillance showed that her stepfather was in her house.
  • California attorney David Sparks discusses Apple's announcement yesterday that 53% of users of iOS devices sold in the last four years have already updated to iOS 12.  Once iOS 12.1 comes out with the new Emoji I discussed earlier this year, I'm sure even more folks will rush to upgrade.
  • Speaking of Sparks, yesterday I recommended his video field guide on using the Shortcuts app, and I also see that this week he experimented with replacing all of the icons on his iPhone's home screen with Siri shortcuts.  Interesting.
  • If you are looking for a place to find and download some interesting iPhone shortcuts, check out Sharecuts.app.
  • Don't be like Kanye West.  There are probably many ways one could apply that rule, but right now I'm referring to his Oval Office meeting with President Trump yesterday morning in which Kanye entered his iPhone passcode while a camera was filming him from behind — his first no-no — and then the entire world saw that Kanye's password is 000000, i.e. just six zeros.  Chance Miller of 9to5Mac has the details including a video clip.  Seriously, don't do that.
  • Speaking of iPhone security, Glenn Fleishman of TidBITS explains how two-factor authentication is improved in iOS 12, and also explains why you should try not to use SMS (text messaging) as a second factor.
  • As I noted above, the next version of the iPad Pro might have USB-C.  In an article for Macworld, Jason Snell analyzes what that could mean for users.
  • Zac Hall of 9to5Mac wrote a great overview of the types of HomeKit accessories that you can use to control your home with Siri, and he even recommends some of the best specific brands.  I continue to be a huge fan of the Lutron switches in my house, which I reviewed in 2015.
  • Bryan Wolfe of the iDownloadBlog explains how to use the Live Listen feature of iOS 12.  Place your iPhone close to a source of sound, put on your AirPods, and then your iPhone will act as a remote microphone for your AirPods.  Useful if you need to hear something or someone but you are too far away to do so.
  • One of my favorite features of Apple Music is the ability to request a song by part of a lyric — Hey Siri, play the song that goes [say a few words in the lyrics].  Benjamin Mayo of 9to5Mac reports that this function will improve because Apple is now incorporating more lyrics from a company called Genius.
  • There was a horrible story in the news this week about a reporter who wrote for the Washington Post being killed while in the Saudi consulate in Turkey.  Reuters reports that information gained from the Apple Watch he was wearing might help the investigators to figure out what happened.
  • Here is a useful page on the Apple website which describes each of the status icons and symbols on the Apple Watch.
  • Security expert Rich Mogull happens to also be a paramedic, and in an article for TidBITS, he describes how the Apple Watch Series 4 may (and may not) help to save lives.
  • Matthew Cassinelli of The Sweet Setup explains why the 1Password app is so useful on an Apple Watch.  I agree.
  • Jesse Hollington of iLounge reports that today Apple is debuting Season 2 of Carpool Karaoke, including one episode featuring the Muppets.  It's time to play the music, it's time to light the lights...
  • And finally, here is a video from Apple showing off some of the new features of the iPhone XS and XR.  That's one reason to watch the video, but another reason is that it does a great job of showing off Apple's new Apple Park campus:

Categories: iPhone Web Sites

Review: Siri Shortcuts Field Guide by David Sparks -- learn how to create useful shortcuts

iPhone J.D. - Thu, 10/11/2018 - 01:12

One of my favorite features in iOS 12 is the new Shortcuts app and its deep integration with iOS, allowing you to create all sorts of useful automations to be more productive on your iPhone and iPad.  There is a learning curve, and thus I'm sure that lots of iPhone users won't even bother to pay much attention to shortcuts.  But if you are smart enough to have made it through Con Law I and the Rule Against Perpetuities part of your 1L Property class, you are more than smart enough to use the Shortcuts app.  Even so, it helps to have a guide hold your hand while you get started.

California attorney David Sparks created what he calls a video field guide — a series of short video lessons, a total of 3 hours and 15 minutes — to walk you through the Shortcuts app.  The course is called the Siri Shortcuts Field Guide and costs $29, although it is currently discounted to $24 during the introductory period.  David gave me a free pass to the course so that I could check it out, and I'm super impressed.  Whether you are starting from square 1 or you have a general sense of how shortcuts work but want to learn more (which describes me), this is a fantastic resource.

You access the course in any web browser.  It was perfect to watch it on my iPad Pro, but you could also watch it on an iPhone or a computer if you prefer.

On the iPad, there is a list of chapters on the left.  I'm sure that David designed the course to go through each one in order, but instead I jumped around, skipping the chapters devoted to topics that I thought I already knew.  Sometimes I went back to watch that chapter anyway because I realized that I didn't know as much as I thought I knew.

The course does a great job of walking you through the Shortcuts app itself, and then it shows you how to do things with the app, including working with different types of information.  In each lesson, you see David's iPad screen as he is describing to you what he is doing.  There is a great interface for the videos; you can scroll your finger across the bottom to jump ahead or go back.

I particularly enjoyed the lesson in the Advanced Siri Shortcuts Tools section on creating and using variables.  Before this course, I had no idea what a Magic Variable was, but after watching David describe what they do and actually create a shortcut using Magic Variables, now it all makes perfect sense to me.

I think that the best part of the course is the last main section, called Useful Shortcuts.  David walks you through 12 shortcuts that you might actually use, explaining how he created each one and why he did what he did.  You can create the shortcuts on your own by following along with David, or you can just download the complete shortcut.

One such shortcut useful to lawyers is a date calculator.  The shortcut David created lets you count a certain number of days after a date or before a date, or even the number of days between dates.  For me, this is so useful that I even added a Siri command to it so that I can just say "Hey Siri, date calculator" to bring it up.  And now that I understand how the shortcut works, I can modify it to meet my particular needs.  Here is a very short video showing me using the date calculator shortcut that David describes and provides in the lesson:


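Outside of Shortcuts, the logic of such a date calculator is straightforward. Here is a short Python sketch of the three calculations (my own illustration, not David's shortcut):

```python
from datetime import date, timedelta

def days_after(start: date, n: int) -> date:
    """Return the date n days after start (e.g. a filing deadline)."""
    return start + timedelta(days=n)

def days_before(start: date, n: int) -> date:
    """Return the date n days before start."""
    return start - timedelta(days=n)

def days_between(a: date, b: date) -> int:
    """Return the number of days between two dates."""
    return abs((b - a).days)
```

For example, days_after(date(2018, 10, 11), 30) gives November 10, 2018.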
If you have any interest in creating shortcuts to increase your efficiency and accomplish tasks, I highly recommend this video course.  And I especially recommend getting into this now.  What Apple has already done with the Shortcuts app is amazing, but I know that it will get more useful in future updates.  By getting your arms around this stuff now, you will be well-positioned to take advantage of the improvements to the Shortcuts app over the coming months and years.

Click here to get more information and to sign up for the Siri Shortcuts Field Guide.

Categories: iPhone Web Sites

Apple Watch tip: switch from grid view to list view

iPhone J.D. - Mon, 10/08/2018 - 23:49

The Apple Watch has supported third party apps since it first went on sale on April 24, 2015.  Unfortunately, because of the limitations of the hardware and the software, usability has been limited.  Graham Bower of Cult of Mac wasn't very far off the mark when he wrote an article in 2016 titled "Apple Watch apps kinda suck, but Cupertino hopes you won’t notice."

Fortunately, with the new Apple Watch Series 4 and watchOS 5, I think those days are over.  Third party apps which have complications on my watch face or which are stored in my dock now launch pretty much instantly.  And just as impressively, even third party apps which I use less often and need to access by pressing the Digital Crown to see all of my apps now launch almost instantly, often in under a second.  Moreover, with the speed of the Apple Watch Series 4, performance is high enough that apps are much more responsive.  As a result, Apple Watch apps no longer "suck," and I'm sure that Cupertino is happy for you to notice that.

All of this means that I'm starting to download more apps for my Apple Watch.  Some are more useful than others, but at least now all third party apps have the ability to be really good. Just to name one example, PCalc is a great calculator on the iPhone, but it is also a very usable calculator on the Apple Watch — much better than the Casio Calculator Watch I wore back in the 1980s.

As I have downloaded more apps to my Apple Watch, there are more apps to choose from when I press the Digital Crown on the side of my watch.  To make it easier to find the app that I want, I'm now taking advantage of a feature that was introduced in watchOS 4 last year:  the ability to switch from grid view to list view.  Grid view with its honeycomb layout is pretty, but unless you remember exactly where you placed an app, you will waste time searching around the screen to find it.  In list view, everything is alphabetical, and it is quick and easy to spin the Digital Crown to scroll to the name of the app that you want — something which is made even easier with the haptic feedback added to the Digital Crown in the Apple Watch Series 4.  You can now feel it as you scroll past every app in the list.

To switch from one view to another, simply press the Digital Crown, and then regardless of whether you are currently in grid view or list view, force press on the center of the screen.  This brings up a screen with the option to select either grid or list view.

If you own an Apple Watch Series 4, I encourage you to enable the list view so that it is easier for you to take advantage of third party apps, even if you don't use them very often.

Categories: iPhone Web Sites

IBM DS8880 Product Guide (Release 8.5)

IBM Redbooks Site - Mon, 10/08/2018 - 09:30
Draft Redpaper, last updated: Mon, 8 Oct 2018

This IBM Redbooks® Product Guide gives an overview of the features and functions that are available with the IBM DS8880 models running microcode Release 8.5 (DS8000 License Machine Code 8.8.50.xx.xx).

Categories: Technology

IBM DS8880 High-Performance Flash Enclosure Gen2

IBM Redbooks Site - Mon, 10/08/2018 - 09:30
Draft Redpaper, last updated: Mon, 8 Oct 2018

This IBM® Redpaper™ publication describes the High-Performance Flash Enclosure (HPFE) Gen2 architecture and configuration.

Categories: Technology

In the news

iPhone J.D. - Fri, 10/05/2018 - 00:15

I posted my review of the Apple Watch Series 4 earlier this week, and so did many others.  I particularly enjoyed the reviews by Jason Snell of Six Colors and Zac Hall of 9to5Mac.  Michael Steeber of 9to5Mac writes about the new-and-improved Digital Crown on the Apple Watch Series 4.  Also notable was the review by Joanna Stern of the Wall Street Journal because of the video which accompanies that review; she hired a stunt woman to test the fall detection feature.  Even if you don't read the review, you should watch the fun video so that you can see how fall detection works without having to fall down yourself.  And now, the news of note from the past week:

Categories: iPhone Web Sites

365 Days Later: Finding and Exploiting Safari Bugs using Publicly Available Tools

Google Project Zero - Thu, 10/04/2018 - 12:40
Posted by Ivan Fratric, Google Project Zero
Around a year ago, we published the results of research about the resilience of modern browsers against DOM fuzzing, a well-known technique for finding browser bugs. Together with the bug statistics we also published Domato, our DOM fuzzing tool that was used to find those bugs.
Given that in the previous research, Apple Safari, or more specifically, WebKit (its DOM engine) did noticeably worse than other browsers, we decided to revisit it after a year using exactly the same methodology and exactly the same tools to see whether anything changed.
Test Setup
As in the original research, the fuzzing was initially done against WebKitGTK+ and then all the crashes were tested against Apple Safari running on a Mac. This makes the fuzzing setup easier as WebKitGTK+ uses the same DOM engine as Safari, but allows for fuzzing on a regular Linux machine. In this research, WebKitGTK+ version 2.20.2 was used which can be downloaded here.
To improve the fuzzing process, a couple of custom changes were made to WebKitGTK+:
  • Made fixes to be able to build WebKitGTK+ with ASan (Address Sanitizer).

  • Changed window.alert() implementation to immediately call the garbage collector instead of displaying a message window. This works well because window.alert() is not something we would normally call during fuzzing.

  • Normally, when a DOM bug causes a crash, due to the multi-process nature of WebKit, only the web process would crash, but the main process would continue running. Code was added that monitors a web process and, if it crashes, the code would “crash” the main process with the same status.

  • Created a custom target binary.

After the previous research was published, we got a lot of questions about the details of our fuzzing setup. This is why, this time, we are publishing the changes made to the WebKitGTK+ code as well as the detailed build instructions below. A patch file can be found here. Note that the patch was made with WebKitGTK+ 2.20.2 and might not work as is on other versions.
Once WebKitGTK+ code was prepared, it was built with ASan by running the following commands from the WebKitGTK+ directory:
export CC=/usr/bin/clang
export CXX=/usr/bin/clang++
export CFLAGS="-fsanitize=address"
export CXXFLAGS="-fsanitize=address"
export LDFLAGS="-fsanitize=address"
export ASAN_OPTIONS="detect_leaks=0"
mkdir build
cd build
make -j 4
mkdir -p libexec/webkit2gtk-4.0
cp bin/WebKit*Process libexec/webkit2gtk-4.0/
If you are doing this for the first time, the cmake/make step will likely complain about missing dependencies, which you will then have to install. You might note that a lot of features deemed not overly important for DOM fuzzing were disabled via -DENABLE flags. This was mainly to save us from having to install the corresponding dependencies but in some cases also to create a build that was more “portable”.
After the build completes, the fuzzing is as simple as creating a sample with Domato, running the target binary as
ASAN_OPTIONS=detect_leaks=0,exitcode=42 ASAN_SYMBOLIZER_PATH=/path/to/llvm-symbolizer LD_LIBRARY_PATH=./lib ./bin/webkitfuzz /path/to/sample <timeout>
and waiting for the exit code 42 (which, if you take a look at the command line above as well as the changes we made to the WebKitGTK+ code, indicates an ASan crash).
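The collection loop around that command can be sketched in a few lines of Python (the harness below is my own; webkitfuzz and the environment variables are as above):

```python
import subprocess

CRASH_EXIT_CODE = 42  # set via ASAN_OPTIONS exitcode=42, as described above

def is_crash(argv, timeout_s=30):
    """Run the target on one sample; report whether ASan flagged a crash."""
    try:
        result = subprocess.run(argv, timeout=timeout_s)
    except subprocess.TimeoutExpired:
        return False  # hangs are handled separately, not treated as crashes
    return result.returncode == CRASH_EXIT_CODE
```

Each sample for which is_crash() returns True gets saved for later triage against the Mac build.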
After collecting crashes, an ASan build of the most recent WebKit source code was created on the actual Mac hardware. This is as simple as running
./Tools/Scripts/set-webkit-configuration --release --asan
./Tools/Scripts/build-webkit
Each crash obtained on WebKitGTK+ was tested against the Mac build before reporting to Apple.
The Results
After running the fuzzer for 100,000,000 iterations (the same as a year ago) I ended up with 9 unique bugs that were reported to Apple. Last year, I estimated that the computational power to perform this number of iterations could be purchased for about $1,000, and this probably hasn’t changed: an amount well within the reach of a wide range of attackers with varying motivation.
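As a back-of-the-envelope check on that estimate (the throughput and price below are my own assumptions, not figures from the post):

```python
iterations = 100_000_000
samples_per_core_second = 0.5   # assumption: ~2 seconds per sample per core
price_per_core_hour = 0.02      # assumption: preemptible cloud core pricing, USD

core_hours = iterations / samples_per_core_second / 3600
cost_usd = core_hours * price_per_core_hour
# ~55,556 core-hours, i.e. roughly $1,100 at these assumed rates
```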
The bugs are summarized in the table below. Please note that all of the bugs have been fixed at the time of release of this blog post.
Project Zero bug ID  CVE            Type      Affected Safari 11.1.2  Older than 6 months  Older than 1 year
1593                 CVE-2018-4197  UAF       YES                     YES                  NO
1594                 CVE-2018-4318  UAF       NO                      NO                   NO
1595                 CVE-2018-4317  UAF       NO                      YES                  NO
1596                 CVE-2018-4314  UAF       YES                     YES                  NO
1602                 CVE-2018-4306  UAF       YES                     YES                  NO
1603                 CVE-2018-4312  UAF       NO                      NO                   NO
1604                 CVE-2018-4315  UAF       YES                     YES                  NO
1609                 CVE-2018-4323  UAF       YES                     YES                  NO
1610                 CVE-2018-4328  OOB read  YES                     YES                  YES

UAF = use-after-free. OOB = out-of-bounds.
As can be seen in the table, out of the 9 bugs found, 6 affected the release version of Apple Safari, directly affecting Safari users.
While 9 or 6 bugs (depending on how you count) is significantly fewer than the 17 found a year ago, it is still a respectable number of bugs, especially if we take into account that the fuzzer has been public for a long time now.
After the results were in, I looked into how long these bugs have been in the WebKit codebase. To check this, all the bugs were tested against a version of WebKitGTK+ that was more than 6 months old (WebKitGTK+ 2.19.6) as well as a version that was more than a year old (WebKitGTK+ 2.16.6).
The results are interesting: most of the bugs were sitting in the WebKit codebase for longer than 6 months; however, only 1 of them is older than 1 year. Here, it might be important to note that throughout the past year (between the previous and this blog post) I also did fuzzing runs using the same approach and reported 14 bugs. Unfortunately, it is impossible to know how many of those 14 bugs would have survived until now and how many would have been found in this fuzz run. It is also possible that some of the newly found bugs are actually older, but don’t trigger with the provided PoCs in the older versions due to unrelated code changes in the DOM. I didn’t investigate this possibility.
However, even if we assume that all of the previously reported bugs would not have survived until now, the results still indicate that (a) the security vulnerabilities keep getting introduced in the WebKit codebase and (b) many of those bugs get incorporated into the release products before they are caught by internal security efforts.
While (a) is not unusual for any piece of software that changes as rapidly as a DOM engine, (b) might indicate the need to put more computational resources into fuzzing and/or review before release.
The Exploit
To prove that bugs like this can indeed lead to a browser compromise, I decided to write an exploit for one of them. The goal was not to write a very reliable or sophisticated exploit: highly advanced attackers would likely not choose to use bugs found by public tools, whose lifetime is expected to be relatively short. However, if someone with exploit-writing skills were to use such a bug in, for example, a malware-spreading campaign, they could potentially do a lot of damage even with an unreliable exploit.
Out of the 6 issues affecting the release version of Safari, I selected what I believed to be the easiest one to exploit—a use-after-free where, unlike in the other use-after-free issues found, the freed object is not on the isolated heap—a mitigation recently introduced in WebKit to make use-after-free exploitation harder.
Let us first start by examining the bug we’re going to exploit. The issue is a use-after-free in the SVGAnimateElementBase::resetAnimatedType() function. If you look at the code of the function, you are going to see that, first, the function gets a raw pointer to the SVGAnimatedTypeAnimator object on the line
   SVGAnimatedTypeAnimator* animator = ensureAnimator();
and, towards the end of the function, the animator object is used to obtain a pointer to a SVGAnimatedType object (unless one already exists) on the line
   m_animatedType = animator->constructFromString(baseValue);
The problem is that, in between these two lines, attacker-controlled JavaScript code could run. Specifically, this could happen during a call to computeCSSPropertyValue(). The JavaScript code could then cause SVGAnimateElementBase::resetAnimatedPropertyType() to be called, which would delete the animator object. Thus, the constructFromString() function would be called on the freed animator object: a typical use-after-free scenario, at least at first glance. There is a bit more to this bug, though; we’ll get to that later.
The vulnerability has been fixed in the latest Safari by no longer triggering JavaScript callbacks through computeCSSPropertyValue(). Instead, the event handler is going to be processed at some later time. The patch can be seen here.
A simple proof of concept for the vulnerability is:
<body onload="setTimeout(go, 100)">
  <svg id="svg">
    <animate id="animate" attributeName="fill" />
  </svg>
  <div id="inputParent" onfocusin="handler()">
    <input id="input">
  </div>
  <script>
    function handler() {
      animate.setAttribute('attributeName','fill');
    }
    function go() {
      input.autofocus = true;
      inputParent.after(inputParent);
      svg.setCurrentTime(1);
    }
  </script>
</body>
Here, svg.setCurrentTime() results in resetAnimatedType() being called, which in turn, due to DOM mutations made previously, causes a JavaScript event handler to be called. In the event handler, the animator object is deleted by resetting the attributeName attribute of the animate element.
Since constructFromString() is a virtual method of the SVGAnimatedType class, the primitive the vulnerability gives us is a virtual method call on a freed object.
In the days before ASLR, such a vulnerability would be immediately exploitable by replacing the freed object with data we control and faking the virtual method table of the freed object, so that when the virtual method is called, execution is redirected to the attacker’s ROP chain. But due to ASLR we won’t know the addresses of any executable modules in the process.
A classic way to overcome this is to combine such a use-after-free bug with an infoleak bug that can leak an address of one of the executable modules. But, there is a problem: In our crop of bugs, there wasn’t a good infoleak we could use for this purpose. A less masochistic vulnerability researcher would simply continue to run the fuzzer until a good infoleak bug would pop up. However, instead of finding better bugs, I deliberately wanted to limit myself to just the bugs found in the same number of iterations as in the previous research. As a consequence, the majority of time spent working on this exploit was to turn the bug into an infoleak.
As stated before, the primitive we have is a virtual method call on the freed object. Without an ASLR bypass, the only thing we can do with it that would not cause an immediate crash is to replace the freed object with another object that also has a vtable, so that when a virtual method is called, it is called on the other object. Most of the time, this would mean calling a valid virtual method on a valid object and nothing interesting would happen. However, there are several scenarios where doing this could lead to interesting results:
  1. The virtual method could be something dangerous to call out of context. For example, if we can call a destructor of some object, its members could get freed while the object itself continues to live. With this, we could turn the original use-after-free issue into another use-after-free issue, but possibly one that gives us a better exploitation primitive.

  2. Since constructFromString() takes a single parameter of the type String, we could potentially cause a type confusion on the input parameter if the other virtual method expects a parameter of another type. Additionally, if the other virtual method takes more parameters than constructFromString(), these would be uninitialized, which could also lead to exploitable behavior.

  3. As constructFromString() is expected to return a pointer of type SVGAnimatedType, if the other virtual method returns some other type, this will lead to a type confusion on the return value. Additionally, if the other virtual method does not return anything, then the return value remains uninitialized.

  4. If the vtables of the freed object and the object we replaced it with are of different sizes, calling a virtual method on the freed object could result in an out-of-bounds read on the vtable of the other object, resulting in calling a virtual function of some third class.

In this exploit we used option 3, but with a twist. To understand what the twist is, let’s examine the SVGAnimateElementBase class more closely: It implements (most of) the functionality of the SVG <animate> element. The SVG <animate> element is used to, as the name suggests, animate a property of another element. For example, having the following element in an SVG image
<animate attributeName="x" from="0" to="100" dur="10s" />
will cause the x coordinate of the target element (by default, the parent element) to grow from 0 to 100 over the duration of 10 seconds. We can use an <animate> element to animate various CSS or XML properties, which is controlled by the attributeName property of the <animate> element.
Here’s the interesting part: These properties can have different types. For example, we might use an <animate> element to animate the x coordinate of an element, which is of type SVGLengthValue (number + unit), or we might use it to animate the fill attribute, which is of type Color.
In an SVGAnimateElementBase class, the type of animated property is tracked via a member variable declared as
   AnimatedPropertyType m_animatedPropertyType;
Where AnimatedPropertyType is the enumeration of possible types. Two other member variables of note are
   std::unique_ptr<SVGAnimatedTypeAnimator> m_animator;
   std::unique_ptr<SVGAnimatedType> m_animatedType;
The m_animator here is the use-after-free object, while m_animatedType is the object created from the (possibly freed) m_animator.
SVGAnimatedTypeAnimator (type of m_animator) is a superclass which has subclasses for all possible values of AnimatedPropertyType, such as SVGAnimatedBooleanAnimator, SVGAnimatedColorAnimator etc. SVGAnimatedType (type of m_animatedType) is a variant that contains a type and a union of possible values depending on the type.
The important thing to note is that normally, both the subclass of m_animator and the type of m_animatedType are supposed to match m_animatedPropertyType. For example, if m_animatedPropertyType is AnimatedBoolean, then the type of m_animatedType variant should be the same, and m_animator should be an instance of SVGAnimatedBooleanAnimator.
After all, why shouldn’t all these types match, since m_animator is created based on m_animatedPropertyType here and m_animatedType is created by m_animator here? Oh wait, that’s exactly where the vulnerability occurs!
So instead of replacing a freed animator with something completely different and causing a type confusion between SVGAnimatedType and another class, we can instead replace the freed animator with another animator subclass and confuse SVGAnimatedType with type = A to another SVGAnimatedType with type = B.
But one interesting thing about this bug is that it would still be a bug even if the animator object did not get freed. In that case, the bug turns into a type confusion: to trigger it, one would simply change the m_animatedPropertyType of the <animate> element to a different type in the JavaScript callback (we’ll examine how this happens in detail later). This led to some discussion in the office about whether the bug should be called a use-after-free at all, or whether this is really a different type of bug where the use-after-free is merely a symptom.
Note that the animator object is always going to get freed as soon as the type of the <animate> element changes, which leads to an interesting scenario: to exploit the bug (whatever you choose to call it), instead of replacing the freed object with an object of another type, we could either replace it with an object of the same type or make sure it doesn’t get replaced at all. Due to how memory allocation in WebKit works, the latter is actually going to happen on its own most of the time anyway: objects allocated in a memory page only start getting replaced once the whole page becomes full. Additionally, freeing an object in WebKit doesn’t corrupt it, as would be the case with some other allocators, which allows us to still use it normally even after it has been freed.
Let’s now examine how this type confusion works and what effects it has:
  1. We start with an <animate> element for type A. m_animatedPropertyType, m_animator and m_animatedType all match type A.

  2. resetAnimatedType() gets called and it retrieves an animator pointer of type A here.

  3. resetAnimatedType() calls computeCSSPropertyValue() here, which triggers a JavaScript callback.

  4. In the JavaScript callback, we change the type of the <animate> element to B by changing its attributeName attribute. This causes SVGAnimateElementBase::resetAnimatedPropertyType() to be called. In it, m_animatedType and m_animator get deleted, while m_animatedPropertyType gets set to B according to the new attributeName here. Now, m_animatedType and m_animator are null, while m_animatedPropertyType is B.

  5. We return into resetAnimatedType(), where we still have a local variable animator which still points to the (freed but still functional) animator for type A.

  6. m_animatedType gets created based on the freed animator here. Now, m_animatedType is of type A, m_animatedPropertyType is B and m_animator is null.

  7. resetAnimatedType() returns, and the animator local variable pointing to the freed animator of type A gets lost, never to be seen again.

  8. Eventually, resetAnimatedType() gets called again. Since m_animator is still null, but m_animatedPropertyType is B, it creates an m_animator of type B here.

  9. Since m_animatedType is non-null, instead of creating it anew, we just initialize it by calling m_animatedType->setValueAsString() here. We now have m_animatedPropertyType for type B, m_animator for type B and m_animatedType for type A.

  10. At some point, the value of the animated property gets calculated. That happens in SVGAnimateElementBase::calculateAnimatedValue() on this line by calling m_animator->calculateAnimatedValue(..., m_animatedType). Here, there is a mismatch between m_animator (type B) and m_animatedType (type A). However, because the mismatch wouldn’t normally occur, the animator won’t check the type of the argument (there might be some debug asserts, but nothing in the release build) and will attempt to write the calculated animated value of type B into the SVGAnimatedType with type A.

  11. After the animated value has been computed, it is read out as a string and set to the corresponding CSS property. This happens here.

The actual type confusion only happens in step 10: there, we write to the SVGAnimatedType of type A as if it were actually type B. The rest of the interactions with m_animatedType are not dangerous, since they simply get and set the value as a string, an operation that is safe to do regardless of the actual type.
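The state transitions above can be condensed into a toy model. This is plain Python with types tracked as strings; the field names mirror the WebKit members, but nothing else about WebKit is modeled:

```python
# Toy model of the three members during the confusion; types are tracked
# as strings, and None stands for a null pointer.
state = {'m_animatedPropertyType': 'A', 'm_animator': 'A', 'm_animatedType': 'A'}

# Step 4: the JavaScript callback changes attributeName, so
# resetAnimatedPropertyType() deletes the animator and animated type.
state.update(m_animatedPropertyType='B', m_animator=None, m_animatedType=None)

# Step 6: the stale local animator (type A) recreates m_animatedType.
state['m_animatedType'] = 'A'

# Step 8: the next resetAnimatedType() sees a null m_animator and creates
# one matching m_animatedPropertyType, i.e. type B.
state['m_animator'] = 'B'

# Step 10 then calls a type-B animator on a type-A animated value.
assert state == {'m_animatedPropertyType': 'B',
                 'm_animator': 'B',
                 'm_animatedType': 'A'}
```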
Note that, although the <animate> element supports animating XML properties as well as CSS properties, we can only do the above dance with CSS properties as the code for handling XML properties is different. The list of CSS properties we can work with can be found here.
So, how do we exploit this type confusion for an infoleak? The initial idea was to use A = <some numeric type> and B = String. This way, when the type confusion occurs on the write, a string pointer is written over a number, and we would then be able to read it in step 11 above. But there is a problem with this (as with a large number of other type combinations): the value read in step 11 must be a valid CSS property value in the context of the current animated property, otherwise it won’t be set correctly and we would not be able to read it out. For example, we were unable to find a string CSS property (from the list above) that would accept a value like 1.4e-45 or similar.
A more promising approach, given the limitations of step 11, is to replace a numeric type with another numeric type. We had some success with A = FloatRect and B = SVGLengthListValues, which is a vector of SVGLengthValue values. As above, this results in a vector pointer being written over a FloatRect, which sometimes leads to successfully disclosing a heap address. Why only sometimes? Because the only CSS property with type SVGLengthListValues we can use is stroke-dasharray, and stroke-dasharray accepts only positive values. Thus, if the lower 32 bits of the heap address we want to disclose look like a negative floating point number (i.e. the highest bit is set), we would not be able to disclose that address. This problem can be overcome by spraying the heap with 2GB of data so that the lower 32 bits of heap addresses start becoming positive. But since we need heap spraying anyway, there is another approach we can take.
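The sign-bit restriction is easy to check numerically. The sketch below (addresses are made up for illustration) reinterprets the low 32 bits of a candidate heap address as an IEEE-754 float, the same way the confused read would present them:

```python
import struct

def low32_as_float(addr):
    """Reinterpret the low 32 bits of a heap address as an IEEE-754 float,
    which is how the confused read exposes them through stroke-dasharray."""
    low = addr & 0xFFFFFFFF
    return struct.unpack('<f', struct.pack('<I', low))[0]

# stroke-dasharray only accepts positive values, so an address whose low
# dword has the sign bit set cannot be disclosed directly:
disclosable = low32_as_float(0x000000a012345678)  # sign bit clear -> positive float
blocked     = low32_as_float(0x000000a092345678)  # sign bit set   -> negative float
```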
The approach we actually ended up using is A = SVGLengthListValues (the stroke-dasharray CSS property) and B = float (the stroke-miterlimit CSS property). This type confusion overwrites the lowest 32 bits of a pointer to a vector of SVGLengthValue elements with a floating point number.
Before we trigger this type confusion we need to spray the heap with approximately 4GB of data (doable on modern computers), which gives us a good probability that when we change an original heap address 0x000000XXXXXXXXXX to 0x000000XXYYYYYYYY, the resulting address is still going to be a valid heap address, especially if YYYYYYYY is high. This way, we can disclose not-quite-arbitrary data at 0x000000XX00000000 + arbitrary offset.
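The bit-level effect of that address rewrite can be sketched as simple arithmetic (the concrete addresses here are hypothetical):

```python
def corrupt_pointer(heap_addr, float_bits):
    """Model the confused write: the low 32 bits of the vector pointer are
    replaced with attacker-chosen float bits, while the high bytes
    (0x000000XX) survive intact."""
    return (heap_addr & ~0xFFFFFFFF) | (float_bits & 0xFFFFFFFF)

# After a ~4GB spray, addresses of the form 0x000000XX00000000 + offset
# are likely to still be mapped, so the corrupted pointer stays readable:
new_ptr = corrupt_pointer(0x0000001234567890, 0xDEADBEEF)
# new_ptr == 0x00000012DEADBEEF
```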
Why not-quite-arbitrary? Because there are still some limitations:
  1. As stroke-miterlimit must be positive, once again we can only disclose heap data that can be interpreted as a positive 32-bit float.

  2. SVGLengthValue is a type consisting of a 32-bit float followed by an enumeration describing the units used. When an SVGLengthValue is read out as a string in step 11 above, a valid unit value is appended to the number (e.g. ‘100px’). If we attempt to set a string like that on the stroke-miterlimit property, it will fail. Thus, the byte immediately after the heap value we want to read must decode as an invalid unit (in which case no unit is appended when the SVGLengthValue is read out as a string).

Note that both of these limitations can often be worked around by doing non-aligned reads.
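To make the second limitation concrete, here is a small model of the read-out. The unit-to-suffix table is purely illustrative; WebKit's real SVGLengthType enumeration differs:

```python
import struct

# Illustrative unit-suffix table; only the mechanism is modeled here,
# not WebKit's actual constants.
UNIT_SUFFIX = {2: 'px', 3: '%', 4: 'em'}

def read_length_as_string(raw8):
    """Model reading 8 heap bytes as an SVGLengthValue: a 32-bit float
    followed by a unit byte. A recognized unit gets its suffix appended
    (e.g. '100.0px'), which stroke-miterlimit would then reject; an
    unrecognized unit byte yields a bare number that round-trips."""
    value = struct.unpack('<f', raw8[:4])[0]
    suffix = UNIT_SUFFIX.get(raw8[4])
    return repr(value) if suffix is None else repr(value) + suffix

leak_ok  = read_length_as_string(struct.pack('<f', 100.0) + b'\x7f\x00\x00\x00')
leak_bad = read_length_as_string(struct.pack('<f', 100.0) + b'\x02\x00\x00\x00')
```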
Now that we have our more-or-less usable read, what do we read out? As the whole point is to defeat ASLR, we should read a pointer into an executable module. Often in exploitation, one would do that by reading out the vtable pointer of some object on the heap. However, on MacOS it appears that vtable pointers point to a memory region separate from the one containing the executable code of the corresponding module. So instead of reading out a vtable pointer, we need to read a function pointer.
What we ended up doing is using VTTRegion objects in our heap spray. A VTTRegion object contains a Timer, which contains a pointer to a Function object, which (in this case) contains a function pointer to VTTRegion::scrollTimerFired(). Thus, we can spray with VTTRegion objects (which takes about 10 seconds on a far-from-state-of-the-art Mac Mini) and then scan the resulting memory for a function pointer.
This gives us the ASLR bypass, but one other thing useful to have for the next phase is the address of the payload (ROP chain and shellcode). We disclose it by the following steps:
  1. Find a VTTRegion object in the heap spray.

  2. By setting the VTTRegion.height property during the heap spray to an index in the spray array, we can identify exactly which of the millions of sprayed VTTRegion objects we just read.

  3. Set the VTTRegion.id property of that VTTRegion object to the payload.

  4. Read out the VTTRegion.id pointer.

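The height-tagging trick amounts to storing each object's own spray index inside the object, so the leaked copy identifies itself. A sketch, with plain dictionaries standing in for the sprayed VTTRegion objects:

```python
# Each sprayed "VTTRegion" carries its spray-array index in `height`.
spray = [{'height': i, 'id': None} for i in range(100_000)]

# Suppose the memory scan found an object whose height field reads:
leaked_height = 77_777

# The tag leads straight back to the JavaScript-side object, whose `id`
# we can then point at the payload and read back out.
found = spray[leaked_height]
found['id'] = 'payload address would go here'
```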
We are now ready to trigger the vulnerability a second time, now for code execution. This is the classic use-after-free exploitation scenario: we overwrite the freed SVGAnimatedTypeAnimator object with data we control.
Because Apple recently introduced the Gigacage (a separate large region of memory) for a lot of attacker-controlled datatypes (strings, arrays, etc.), this is no longer trivial. However, one thing still allocated on the main heap is Vector content. By finding a vector whose content we fully control, we can overcome the heap limitations.
What I ended up using is a temporary vector used when TypedArray.set() is called to copy values from one JavaScript typed array into another. This vector is temporary, meaning it will be deleted immediately after use, but again, due to how memory allocation works in WebKit, this is not too horrible. Like other stability improvements, the task of finding a more permanent controllable allocation is left as an exercise to the reader. :-)
This time, in the JavaScript event handler, we can replace the freed SVGAnimatedTypeAnimator with a vector whose first 8 bytes are set to point to the ROP chain + shellcode payload.
The ROP chain is pretty straightforward, but one thing that is perhaps more interesting is the stack pivot gadget (or, in this case, gadgets) used. In the scenario we have, the virtual function on the freed object is called as
call qword ptr [rax+10h]
where rax points to our payload. Additionally, rsi points to the freed object (that we now also control). The first thing we want to do for ROP is control the stack, but I was unable to find any “classic” gadgets that accomplish this such as
mov rsp, rax; ret;
push rax; pop rsp; ret;
xchg rax, rsp; ret;
What I ended up doing is breaking the stack pivot into two gadgets:
push rax; mov rax, [rsi]; call [rax + offset];
This first gadget pushes the payload address on the stack and is very common because, after all, that’s exactly how the original virtual function was called (apart from push rax that can be an epilogue of some other instruction). The second gadget can then be
pop whatever; pop rsp; ret;
where the first pop pops the return address from the stack and the second pop finally gets the controlled value into rsp. This gadget is less common, but still appears to be way more common than the stack pivot mentioned previously, at least in our binary.
The final ROP chain is (remember to start reading from offset 0x10):
[address of pop; pop; pop; ret]
0
[address of push rax; mov rax, [rsi]; call [rax+0x28]]
0
[address of pop; ret]
[address of pop rbp; pop rsp; ret]
[address of pop rdi; ret]
0
[address of pop rsi; ret]
shellcode length
[address of pop rdx; ret]
PROT_EXEC + PROT_READ + PROT_WRITE
[address of pop rcx; ret]
MAP_ANON + MAP_PRIVATE
[address of pop r8; pop rbp; ret]
-1
0
[address of pop r9; ret]
0
[address of mmap]
[address of push rax; pop rdi; ret]
[address of push rsp; pop rbp; ret]
[address of push rbp; pop rax; ret]
[address of add rax, 0x50; pop rbp; ret]
0
[address of push rax; pop rsi; pop rbp; ret]
0
[address of pop rdx; ret]
shellcode length
[address of memcpy]
[address of jmp rax]
0
shellcode
The ROP chain calls
mmap(0, shellcode_length, PROT_EXEC | PROT_READ | PROT_WRITE, MAP_ANON | MAP_PRIVATE, -1, 0)
It then calculates the shellcode address and copies the shellcode to the address returned by mmap(), after which the shellcode is called.
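The same mmap-then-copy staging can be sketched outside the ROP context with Python's mmap module (we only map the page read-write; actually executing the int3 bytes would just crash the interpreter):

```python
import mmap

shellcode = b'\xcc' * 4            # int3 breakpoints, as used in the post

# Anonymous private mapping: the moral equivalent of
# mmap(0, len, PROT_EXEC|PROT_READ|PROT_WRITE, MAP_ANON|MAP_PRIVATE, -1, 0)
# in the ROP chain, minus PROT_EXEC.
region = mmap.mmap(-1, len(shellcode))
region.write(shellcode)            # the memcpy() step of the chain
```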
In our case, the shellcode is just a sequence of ‘int 3’ instructions, so when it is reached, Safari will crash. If a debugger is attached, we can see that the shellcode was successfully reached because the debugger stops on the breakpoint:
Process 5833 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BREAKPOINT (code=EXC_I386_BPT, subcode=0x0)
    frame #0: 0x00000001b1b83001
->  0x1b1b83001: int3
    0x1b1b83002: int3
    0x1b1b83003: int3
    0x1b1b83004: int3
Target 0: (com.apple.WebKit.WebContent) stopped.
In a real-world scenario, the shellcode could either be a second-stage exploit to break out of the Safari sandbox or, alternatively, a payload that would turn the issue into a universal XSS, stealing cross-domain data.
The exploit was successfully tested on Mac OS 10.13.6 (build version 17G65). If you are still using this version, you might want to update. The full exploit can be seen here.
The impact of recent iOS mitigations
An interesting aspect of this exploit is that, on Safari for Mac OS, it could be written in a very “old-school” way (infoleak + ROP) due to the lack of control flow mitigations on the platform.
On the latest mobile hardware and in iOS 12, which was released after the exploit was already written, Apple introduced control flow mitigations in the form of Pointer Authentication Codes (PAC). While there are no plans to write another version of the exploit at this time, it is interesting to discuss how the exploit could be modified so as not to be affected by the recent mitigations.
The exploit, as presented here, consists of two parts: the infoleak and getting code execution. PAC would not affect the infoleak part in any way; however, it would prevent jumping to the ROP chain in the second part of the exploit, because we could not forge a correct signature for the vtable pointer.
Instead of jumping to the ROP code, the next stage of the exploit would likely need to be getting an arbitrary read-write primitive. This could potentially be accomplished by exploiting a similar type confusion that was used for the infoleak, but with a different object combination. I did notice that there are some type combinations that could result in a write (especially if the attacker already has an infoleak), but I didn’t investigate those in detail.
In the WebKit process, after the attacker has an arbitrary read-write primitive, they could find a way to overwrite JIT code (or, failing that, other data that would cause fully or partially controlled JIT code to be emitted) and achieve code execution that way.
So while the exploit could still be written, admittedly it would be somewhat more difficult to write.
On publishing the advisories
Before concluding this blog post, we want to draw some attention to how the patches for the issues listed here were announced, and to the corresponding timeline. The issues were reported to Apple between June 15 and July 2, 2018. On September 17, 2018, Apple published security advisories for iOS 12, tvOS 12 and Safari 12 which fixed all of the issues. However, although the bugs were fixed at that time, the corresponding advisories did not initially mention them. The issues described in this blog post were only added to the advisories one week later, on September 24, 2018, when the security advisories for macOS Mojave 10.14 were also published.
To demonstrate the discrepancy between originally published advisories and the updated advisories, compare the archived version of Safari 12 advisories from September 18 here and the current version of the same advisories here (note that you might need to refresh the page if you still have the old version in your browser’s cache).
The original advisories most likely didn’t include all the issues because Apple wanted to wait for the issues to also be fixed on MacOS before adding them. However, this practice is misleading, because customers interested in the Apple security advisories would most likely read them only once, when they are first released, and the impression they would get is that the product updates fix fewer, and less severe, vulnerabilities than is actually the case.
Furthermore, the practice of not publishing fixes for mobile or desktop operating systems at the same time can put the desktop customers at unnecessary risk, because attackers could reverse-engineer the patches from the mobile updates and develop exploits against desktop products, while the desktop customers would have no way to update and protect themselves.
While there were clearly improvements in WebKit DOM when tested with Domato, the now public fuzzer was still able to find a large number of interesting bugs in a non-overly-prohibitive number of iterations. And if a public tool was able to find that many bugs, it is expected that private ones might be even more successful.
And while it is easy to brush away such bugs as something we haven’t seen actual attackers use, that doesn’t mean it’s not happening or that it couldn’t happen, as the provided exploit demonstrates. The exploit doesn’t include a sandbox escape so it can’t be considered a full chain, however reports from other security researchers indicate that this other aspect of browser security, too, cracks under fuzzing (Note from Apple Security: this sandbox escape relies on attacking the WindowServer, access to which has been removed from the sandbox in Safari 12 on macOS Mojave 10.14). Additionally, a DOM exploit could be used to steal cross-domain data such as cookies even without a sandbox escape.
The fuzzing results might indicate that WebKit is getting fuzzed, but perhaps not with sufficient computing power to find all fuzzable, newly introduced bugs before they make it into the release version of the browser. We are hoping that this research will lead to improved user security by providing an incentive for Apple to allocate more resources into this area of browser security.
Categories: Security

IBM Z Connectivity Handbook

IBM Redbooks Site - Tue, 10/02/2018 - 09:30
Redbook, published: Tue, 2 Oct 2018

This IBM® Redbooks® publication describes the connectivity options that are available for use within and beyond the data center for the IBM Z family of mainframes, which includes these systems:

  • IBM z14®
  • IBM z14 Model ZR1
  • IBM z13®
  • IBM z13s™
  • IBM zEnterprise® EC12 (zEC12)
  • IBM zEnterprise BC12 (zBC12)
This book highlights the hardware and software components, functions, typical uses, coexistence, and relative merits of these connectivity features.
Categories: Technology

IBM Z Functional Matrix

IBM Redbooks Site - Tue, 10/02/2018 - 09:30
Redpaper, published: Tue, 2 Oct 2018

This IBM® Redpaper™ publication provides a list of features and functions that are supported on IBM Z, including the IBM z14™ (z14) - Machine types 3906 and 3907, IBM z13™ (z13®), IBM z13s™ (z13s), IBM zEnterprise® EC12 (zEC12), and IBM zEnterprise® BC12 (zBC12).

Categories: Technology

IBM z14 Model ZR1 Technical Introduction

IBM Redbooks Site - Tue, 10/02/2018 - 09:30
Redbook, published: Tue, 2 Oct 2018

This IBM® Redbooks® publication introduces the latest member of the IBM Z platform, the IBM z14 Model ZR1 (Machine Type 3907).

Categories: Technology

IBM z14 Technical Introduction

IBM Redbooks Site - Tue, 10/02/2018 - 09:30
Redbook, published: Tue, 2 Oct 2018

This IBM® Redbooks® publication introduces the latest IBM z platform, the IBM z14™.

Categories: Technology

Presidential Alert coming tomorrow, October 3.

iPhone J.D. - Tue, 10/02/2018 - 01:41

A few years ago, I wrote about wireless emergency alerts on the iPhone, and I explained that there are three kinds:  (1) emergency alerts issued because of an imminent threat to public safety or life, such as evacuation orders or shelter in place orders due to severe weather, a terrorist threat, or a chemical spill; (2) AMBER alerts, issued when a child is abducted; and (3) Presidential Alerts.  All three alerts arise out of the Warning, Alert, and Response Network Act, sometimes called the WARN Act, 47 U.S.C. § 1201, and more specifically the Wireless Emergency Alerts (WEA) program, which was created pursuant to the WARN Act by the FCC working with FEMA.

When I wrote that post in 2013, I noted that no president had ever issued a Presidential Alert under WEA or similar prior systems.  I also noted that while the WARN Act provides in 47 U.S.C. § 1201(b)(2)(E) that cell phone users may opt out of emergency alerts and AMBER Alerts, a user may not opt out of Presidential Alerts.  Thus, if you open the Settings app on your iPhone, tap Notifications, and then scroll to the bottom, you will see that you only have on/off switches for the first two types of alerts:

Tomorrow, October 3, 2018 at 2:18 p.m. Eastern / 1:18 p.m. Central / 12:18 p.m. Mountain / 11:18 a.m. Pacific, FEMA and the FCC will conduct the first-ever test of a Presidential Alert.  Note that while the test will start at 2:18 p.m. Eastern, it will continue for 30 minutes, so if your iPhone doesn't get the alert right away, it may come at some other time during that 30 minute window.  (This test was originally planned for September 20, but it was delayed because of Hurricane Florence.)  The message will have a header that reads "Presidential Alert" and the body of the message will say:  "THIS IS A TEST of the National Wireless Emergency Alert System.  No action is needed."

If you will be in court or somewhere else where it would be inappropriate for your iPhone to make a loud noise, TURN OFF YOUR IPHONE BEFORE THAT TIME.  And if you are around other cellphones that make a loud noise tomorrow, now you know what is going on.

Hopefully the test will be deemed a success and we won't have to go through this again for a long time.  And also, my understanding is that the rumors are false, and President Trump will not begin using the Presidential Alert system to send all of his tweets to each of us.  At least, I hope those rumors are false.

Categories: iPhone Web Sites

Review: Apple Watch Series 4 -- see more, do more

iPhone J.D. - Mon, 10/01/2018 - 00:47

When I reviewed the new 2018 versions of the iPhone, I noted that this is just an "s" year.  There are definitely some nice new features in all of the new iPhones, especially for taking pictures, and if you want a larger screen or a cheaper iPhone X, it is great that Apple has three new models.  Nevertheless, this is not as big of an iPhone upgrade as we saw a year ago.

The opposite is true with the Apple Watch.  The Apple Watch Series 4 is the first significant upgrade to the Apple Watch hardware since the Apple Watch was first previewed in 2014 and started selling in early 2015.  Unlike 2014, when Apple wasn't really sure how the Apple Watch would be used, Apple now has years of experience and knows what people like most about an Apple Watch.  And those are precisely the parts of the Apple Watch that Apple improved.  I've been using the Apple Watch Series 4 for a week, and I am blown away by how amazing this device is.  I use it daily in my law practice, outside of the office for messages and entertainment purposes, when exercising, and pretty much all day long no matter what I'm doing from when I wake up until I go to sleep.  This is an incredibly useful device that I recommend highly to any attorney who uses an iPhone.

The iPhone X version of the Apple Watch

Last year, the iPhone X was a huge leap forward in the iPhone world because Apple figured out a way to make the screen go virtually edge-to-edge.  Thus, the physical size of the iPhone remained familiar, but the usable screen was larger.  You saw much more in the same amount of space.  Apple has applied the same design magic to the Series 4 Apple Watch. There are some changes to the physical size of the watch.  First, the new watch is slightly thinner.  It's not a big change, but it is welcome nevertheless.  In these pictures, my Series 2 is on the left and my Series 4 is on the right:

Another physical change is that the face is slightly larger, with 40 mm and 44 mm sizes instead of the former 42 mm and 38 mm sizes.  This increase is so minor that you probably won't ever notice it unless you put the new Apple Watch next to an old one.  In this picture, my old Apple Watch Series 2 is on the left, and the new Apple Watch Series 4 is on the right:

I've heard some people wonder if the increase from 42 to 44 mm means that a person who previously used a 42 mm should instead get the smaller 40 mm model.  Maybe for some folks this makes sense, but I suspect that most folks who have previously used a 42 mm will be perfectly happy with the 44 mm.  It's really not a big difference in physical size.

The real change to the size of the Apple Watch is that, much like the iPhone X, Apple has brought the usable screen closer to the edges of the watch.  As a result of the improvements, the new screen is now 30% larger.

This is a huge, noticeable improvement.  The additional information that you can see is fantastic.  For example, I've always been able to look at an email on my Apple Watch, but the size of the watch face severely limits how many words you can see at one time.  With the larger screen on the Series 4, I typically see one additional sentence on the screen as compared to the older models.  For longer emails, I'll have to use the scroll wheel to scroll down on either watch, but less scrolling is necessary on the Series 4.  The same is true for text messages and any other app which puts lots of information on the screen.  You see more, and thus you can obtain, and can act upon, the information more quickly.

Other apps simply expand to fill the larger screen so you get a larger watch face, larger controls for music and podcasts, etc.  For these tasks, the larger face makes the Apple Watch much easier and more enjoyable to use.  Here's a simple example, but one which matters because I do it every day:  typing in my passcode to unlock my Apple Watch is far easier on the Series 4 thanks to the larger screen and larger buttons.

The larger screen also makes it possible to have new watch faces with many more complications.  The following picture uses the new Infograph watch face that Apple keeps showing off in its press pictures.  It has eight complications in addition to the time:

I am not sure if I am going to use the Infograph as it seems a little too busy to me, plus I prefer digital time over hands on a watch face, but I love that this is an option.

Note that even with the different screen and sizes, you can still use your old Apple Watch bands with the new Series 4.  That's good news for me because I love my Milanese Loop watch band, but it is $150 so I'm glad that I didn't have to buy a new one.


Early models of the Apple Watch were rather slow, which had a negative impact on usability.  But with each new generation, the Apple Watch gets faster.  The Series 4 is the first Apple Watch to feature a 64-bit processor, which Apple says is twice as fast as the Series 3 — which was 70% faster than the Series 2, and the Series 2 was 50% faster than the original Apple Watch.  Thus, if you are upgrading from an earlier version of the Apple Watch, this speed increase should be quite noticeable — especially if you are using something older than a Series 3.

At this point, you may be thinking "ho hum, it's faster, but every new model is faster."  Fair enough, but this time, the speed increase has real consequences.  With the Series 4, the Apple Watch has crossed over from being a device that operates so slowly that sometimes I just don't bother to use it into a device which operates so quickly that I have no hesitation to use the device to perform tasks.

Let's go back to that email example.  On my Series 2, working with emails works fine, but it is somewhat slow.  On the Series 4, working with email is lightning fast, just as fast as working with emails on my iPhone.  Because of this speed increase, along with the larger screen, I am working with emails on my Apple Watch far more than I ever have before.  I can very quickly triage my inbox by deleting the junk mail and mail that doesn't really interest me.  I can quickly read emails that do matter to me and then act upon them.  Responding to emails is still easier on an iPhone or iPad if I need to type something of substance, but if I just want to send a quick reply, the watch works fine.  And of course I can dictate or scribble out the words of a longer reply if I need to do so.

If your law practice is anything like mine, this is huge.  I get tons of email every day.  When new emails come in, with the Series 4 I can often deal with them faster on my Apple Watch than on my iPhone, in large part because the watch is right there on my wrist, whereas I need to dig out the iPhone and then put it away when I'm finished.  Plus, when I pick up my iPhone, there is a greater risk that I will be distracted by some other app on the iPhone.  When working with emails on my Apple Watch, I get in and out more quickly and then get back to my work.  I had no idea before using the Series 4 a week ago that working with emails would be so dramatically improved thanks to the larger screen and the faster watch.

Here's another example where the speed has a direct effect on usability.  I have lots of lights in my house which are controlled by HomeKit.  It is handy to use my Apple Watch to turn lights on and off, sometimes by speaking to Siri, other times by tapping a button in the Home app on the watch.  On my Series 2, sometimes this feature worked OK, and other times it was so slow that it was painful.  With my Series 4 watch, HomeKit devices respond to my Apple Watch commands right away — as quickly as commands coming from an iPhone.  The speed increase means that I no longer hesitate to use my Apple Watch with HomeKit devices, and thus it is almost like HomeKit performance is an additional feature of the Series 4.


Apple added cellular support to the Apple Watch Series 3, but I never owned a Series 3, so I've been using cellular on my Apple Watch for the first time this week.  Thanks to a new ceramic back, which reduces interference with radio waves, Apple says that cellular works even better on the Series 4.

Before last week, I didn't think that this would be that significant for me.  After all, don't I carry my iPhone pretty much all the time?  But it has been a nicer feature than I expected, especially when I've walked or jogged in a park to try to close my activity circles.  There often isn't really a good place to put an iPhone in exercise clothes, and with the Series 4, I don't have to.  I pair my AirPods with my Apple Watch, and then I'm off.  I've tested receiving and sending emails, receiving and sending text messages, and placing and picking up phone calls when my Apple Watch is using cellular.  It all just works.  It is so nice to know that I'm connected to the outside world in case someone needs me or I need to contact someone else – even though I'm not carrying around a heavy iPhone.  Indeed, I don't even feel the weight of an Apple Watch on my arm or AirPods in my ears, so I get all of this without feeling ANY extra weight at all.

Digital Crown

As part of the redesign, Apple made the Digital Crown on the side smaller.  I don't notice the difference in normal usage.  Apple also added haptic feedback when spinning the Digital Crown, and the clicks make a big difference.  It makes spinning the crown feel far more precise because you feel a click as each item is passed on the scrolling list.  If you haven't tried a Series 4 yet this might not sound like a very big deal, but in normal usage it is really nice. 


In addition to monitoring your heart beats, the Series 4 adds the ability to check your heart activity by running a simple EKG test (sometimes called an ECG).  Just put your finger on the digital crown, start the test, and you'll get results in 30 seconds.  I'm a lawyer not a doctor, but from what I've been reading for the last few days, this feature can help to save lives.

For example, here is a post on Reddit by a doctor explaining that the new Apple Watch can help to detect Atrial Fibrillation, which is the most common cardiac arrhythmia, and something that is experienced by up to 25% of people over 40 years old.

Note that this EKG feature requires a special app, which Apple says it will release later this year.  And for many folks, this feature will be unimportant.  But for some folks, especially those working with a heart doctor, this feature could be literally life changing.


The new speaker in the Apple Watch is 50% louder.  And the microphone was moved to the right side of the Apple Watch (the opposite side as the speaker) to reduce interference.  If you are using your Apple Watch to make phone calls or to use the new Walkie-Talkie feature, the improved speaker should help.  I usually keep sounds turned off on my Apple Watch, so this feature doesn't matter so much to me.

Fall detection

The Series 4 Apple Watch includes a more advanced accelerometer and gyroscope which can detect if you fall.  And if you fall down and then don't move for 60 seconds, the Apple Watch can even call 911 and your emergency contacts.  For folks above a certain age — or for anyone who can be clumsy — this looks like a feature that you hope to never use, but that you will very much appreciate if you need it.


There is a lot more that is packed into the Apple Watch Series 4, including new watch faces, Bluetooth 5.0 (which I hope will improve communications between the Apple Watch and the iPhone), increased battery life for outdoor workouts when you are using GPS, and more.

Models

This is the first version of the Apple Watch that does not come in a more expensive Edition model made of high-end materials (gold for the first Apple Watch, ceramic for later models).  However, there is a new gold stainless steel version of the watch.  You can also select the Nike+ version or the Hermès versions, which include different watch bands and a special watch face.

Apple no longer calls the aluminum version of the Apple Watch the "sport" model.  You just get an Apple Watch, and you choose whether you want aluminum or stainless steel, with stainless steel costing $300 more.  I prefer the look and feel of the stainless steel over aluminum, and I also like that the stainless steel version has a more durable screen — a sapphire crystal face, instead of Ion-X glass.  Even though I have hit the face of my stainless steel Series 2 Apple Watch on countless objects over the years, I have never gotten a scratch.  My wife is far more poised and less clumsy than I am, but her aluminum Series 2 Apple Watch does have some small scratches.

Conclusion

I was really excited about the iPhone X when it came out a year ago, and I have absolutely loved using it for the past year.  I feel the same way about the Apple Watch Series 4.  The larger screen and the increase in speed make everything better.  Indeed, some features are so much better that I am using them far more than ever before.  The Apple Watch Series 4 is a huge leap forward.  If you have been thinking about getting an Apple Watch but were waiting for the right time, that time is now.  If you have an older Apple Watch and you already know that it is a useful device for you, upgrading to a Series 4 will be a huge improvement to what you already love.

Categories: iPhone Web Sites

Introducing the IBM DS8882F Rack Mounted Storage system

IBM Redbooks Site - Fri, 09/28/2018 - 09:30
Draft Redpaper, last updated: Fri, 28 Sep 2018

This IBM® Redpaper™ presents and positions the DS8882F.

Categories: Technology

DS8880 Safeguarded Copy

IBM Redbooks Site - Fri, 09/28/2018 - 09:30
Draft Redpaper, last updated: Fri, 28 Sep 2018

This IBM Redpaper™ publication explains the DS8880 Safeguarded Copy functionality.

Categories: Technology

In the news

iPhone J.D. - Fri, 09/28/2018 - 00:27

This is the best time of the year for the iPhone.  We have new devices, and the iPhone XS has been a real champ during a crazy busy week for me both at work and after work.  We are also seeing more apps being updated to work with iOS 12 and watchOS 5.  And CarPlay has been seeing some nice improvements thanks to more third-party apps.  Here is the news of note from the past week:

Categories: iPhone Web Sites
