Feed aggregator

IBM Tape Library Guide for Open Systems

IBM Redbooks Site - 8 hours 14 min ago
Redbook, published: Fri, 14 Dec 2018

This IBM® Redbooks® publication presents a general introduction to the latest IBM tape and tape library technologies.

Categories: Technology

In the news

iPhone J.D. - Thu, 12/13/2018 - 23:23

There is an interesting article in the New York Times this week by Jennifer Valentino-DeVries, Natasha Singer, Michael H. Keller and Aaron Krolik about how an iPhone can track, and unfortunately sometimes share, your current location.  The article is interesting, but the way that it is presented on the page is also very interesting with lots of graphics that change as you scroll through the article.  Virginia attorney Sharon Nelson discusses the article on her Ride The Lightning blog, noting that while the companies collecting location data claim to keep the data anonymous, she has her doubts.  And now, the news of note from the past week:

  • Illinois attorney John Voorhees of MacStories reports on the latest update to the fantastic CARROT Weather app (my review). In the new version, you can select your weather source — Dark Sky, The Weather Channel, AccuWeather and Aeris Weather — plus there are new Apple Watch complications, support for certain personal weather station data, and more.
  • I use my Apple Pencil with my iPad Pro pretty much every day that I am at work, but I realize that some folks have not yet realized for themselves how useful this device is.  In an article for Macworld, Jason Snell explains how the second generation version has finally turned him into a believer in the Apple Pencil.
  • Amie Tsang and Adam Satariano of the New York Times report that Apple is going to build a $1 billion campus in Austin, Texas.
  • Christina Farr of CNBC reports that Apple has hired dozens of doctors — sometimes secretly — to work with Apple to improve the Apple Watch and other health technology.
  • If you use Philips Hue lights, you already know that if you lose power in your home, the lights come back on at full brightness when power is restored — which can be rather alarming.  Ben Lovejoy of 9to5Mac reports that the latest version of the Philips Hue app fixes this so that lights can be configured to return to their previous states when the power comes back.
  • If you use HomeKit-compatible smarthome devices, HomeRun is a great Apple Watch app for controlling your devices.  Ryan Christoffel of MacStories reports that the app can now create custom complications.
  • If you want a HomeKit-compatible outdoor outlet, I'm still enjoying the iHome iSP100 which I reviewed earlier this year.  Christopher Null of TechHive reviews a more expensive competitor, the iDevices Outdoor Switch.
  • The Apple Watch Series 4 now supports the ECG/EKG function.  But it can also do a better job of checking your heart rate.  Apple recently updated a support page to explain:  "To use the electrical heart sensor to measure your heart rate, open the Heart Rate app and place your finger on the Digital Crown. You will get a faster reading with higher fidelity — getting a measurement every second instead of every 5 seconds."
  • Andrew Orr of The Mac Observer lists all of the shortcuts you can do with a keyboard connected to an iPad using Apple's apps.
  • Starting next week, you will be able to use an Amazon Echo with Apple Music, as reported by Federico Viticci of MacStories.
  • David Griner of AdWeek runs down the 25 best ads of 2018.  Three of them are Apple ads, including #2 on the list.
  • And finally, here is a video Apple released a few weeks ago to show off many of the features of the iPad Pro called Five Reasons iPad Pro Can Be Your Next Computer:

Categories: iPhone Web Sites

Adventures in Video Conferencing Part 5: Where Do We Go from Here?

Google Project Zero - Thu, 12/13/2018 - 13:55
Posted by Natalie Silvanovich, Project Zero
Overall, our video conferencing research found a total of 11 bugs in WebRTC, FaceTime and WhatsApp. The majority of these were found through less than 15 minutes of mutation fuzzing RTP. We were surprised to find remote bugs so easily in code that is so widely distributed. There are several properties of video conferencing that likely led to the frequency and shallowness of these issues.

WebRTC Bug Reporting

When we started looking at WebRTC, we were surprised to discover that their website did not describe how to report vulnerabilities to the project. They had an open bug tracker, but no specific guidance on how to flag or report vulnerabilities. They also provided no security guidance for integrators, and there was no clear way for integrators to determine when they needed to update their source for security fixes. Many integrators seem to have branched WebRTC without consideration for applying security updates. The combination of these factors makes it more likely that vulnerabilities did not get reported, that vulnerabilities or fixes got ‘lost’ in the tracker, that fixes regressed, or that fixes did not get applied to implementations that use the source in part.
We worked with the WebRTC team to add this guidance to the site, and to clarify their vulnerability reporting process. Despite these changes, several large software vendors reached out to our team with questions about how to fix the vulnerabilities we reported. This shows there is still a lack of clarity on how to fix vulnerabilities in WebRTC.

Video Conferencing Test Tools

We also discovered that most video conferencing solutions lack adequate test tools. In most implementations, there is no way to collect data that allows problems with an RTP stream to be diagnosed. The vendors we asked did not have such a tool, even internally. WebRTC had a mostly complete tool that allows streams to be recorded in the browser and replayed, but it did not work with streams that used non-default settings. This tool has now been updated to collect enough data to be able to replay any stream. The lack of tooling available to test RTP implementations likely contributed to the ease of finding vulnerabilities, and certainly made reproducing and reporting vulnerabilities more difficult.

Video Conferencing Standards

The standards that comprise video conferencing, such as RTP, RTCP and FEC, introduce a lot of complexity in achieving their goal of enabling reliable audio and video streams across any type of connection. While the majority of this complexity provides value to the end user, it also means that video conferencing is inherently difficult to implement securely.

The Scope of Video Conferencing

WebRTC has billions of users. While it was originally created for use in the Chrome browser, it is now integrated by at least two Android applications that eclipse Chrome in terms of users: Facebook and WhatsApp (which only uses part of WebRTC). It is also used by Firefox and Safari. It is likely that most mobile devices run multiple copies of the WebRTC library. The ubiquity of WebRTC coupled with the lack of a clear patch strategy makes it an especially concerning target for attackers.

Recommendations for Developers

This section contains recommendations for developers who are implementing video conferencing, based on our observations from this research.
First, it is a good idea to use an existing solution for video conferencing (either WebRTC or PJSIP) as opposed to implementing a new one. Video conferencing is very complex, and every implementation we looked at had vulnerabilities, so it is unlikely a new implementation would avoid these problems. Existing solutions have undergone at least some security testing and would likely have fewer problems.
It is also advisable to avoid branching existing video conferencing code. We have received questions from vendors who have branched WebRTC, and it is clear that this makes patching vulnerabilities more difficult. While branching can solve problems in the short term, integrators often regret it in the long term.
It is important to have a patch strategy when implementing video conferencing, as there will inevitably be vulnerabilities found in any implementation that is used. Developers should understand how security patches are distributed for any third-party library they integrate, and have a plan for applying them as soon as they are available.
It is also important to have adequate test tools for a video conferencing application, even if a third-party implementation is used. It is a good idea to have a way to reproduce a call from end to end. This is useful in diagnosing crashes, which could have a security impact, as well as functional problems.
Several mobile applications we looked at had unnecessary attack surface. Specifically, codecs and other features of the video conferencing implementation were enabled and accessible via RTP even though no legitimate call would ever use them. WebRTC and PJSIP support disabling specific features such as codecs and FEC. It is a good idea to disable the features that are not being used, as sketched below.
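To illustrate the idea, here is a generic sketch (in Python) of stripping unused payload types out of the SDP exchanged during signalling. This is not WebRTC's or PJSIP's actual API; both libraries expose their own codec-configuration options, which should be preferred over string munging, and the payload type numbers here are illustrative.

# Generic sketch: drop unused codecs from an SDP blob.
# Real integrations should use the libraries' codec-preference APIs.
def strip_codecs(sdp, allowed_pts=("111", "96")):
    out = []
    for line in sdp.splitlines():
        if line.startswith("m="):
            # e.g. "m=audio 9 UDP/TLS/RTP/SAVPF 111 103 104"
            parts = line.split()
            line = " ".join(parts[:3] + [pt for pt in parts[3:] if pt in allowed_pts])
        elif line.startswith(("a=rtpmap:", "a=fmtp:", "a=rtcp-fb:")):
            pt = line.split(":", 1)[1].split()[0]
            if pt not in allowed_pts:
                continue  # drop attribute lines for disabled codecs
        out.append(line)
    return "\r\n".join(out)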
Finally, video conferencing vulnerabilities can generally be split into those that require the target to answer the incoming call, and those that do not. Vulnerabilities that do not require the call to be answered are more dangerous. We observed that some video conferencing applications perform much more parsing of untrusted data before a call is answered than others. We recommend that developers put as much functionality after the call is answered as possible.

Tools
In order to open up the most popular video conferencing implementations to more security research, we are releasing the tools we developed to do this research. Street Party is a suite of tools that allows the RTP streams of video conferencing implementations to be viewed and modified. It includes:
  • WebRTC: instructions for recording and replaying RTP packets using WebRTC’s existing tools
  • FaceTime: hooks for recording and replaying FaceTime calls
  • WhatsApp: hooks for recording and replaying WhatsApp calls on Android

We hope these tools encourage even more investigation into the security properties of video conferencing. Contributions are welcome.

Conclusion
We reviewed WebRTC, FaceTime and WhatsApp and found 11 serious vulnerabilities in their video conferencing implementations. Accessing and altering their encrypted content streams required substantial tooling. We are releasing this tooling to enable additional security research on these targets. There are many properties of video conferencing that make it susceptible to vulnerabilities. Adequate testing, conservative design and frequent patching can reduce the security risk of video conferencing implementations.
Categories: Security

Adventures in Video Conferencing Part 4: What Didn't Work Out with WhatsApp

Google Project Zero - Wed, 12/12/2018 - 13:54
Posted by Natalie Silvanovich, Project Zero
Not every attempt to find bugs is successful. When looking at WhatsApp, we spent a lot of time reviewing call signalling hoping to find a remote, interaction-less vulnerability. No such bugs were found. We are sharing our work with the hopes of saving other researchers the time it took to go down this very long road. Or maybe it will give others ideas for vulnerabilities we didn’t find.
As discussed in Part 1, signalling is the process through which video conferencing peers initiate a call. Usually, at least part of signalling occurs before the receiving peer answers the call. This means that if there is a vulnerability in the code that processes incoming signals before the call is answered, it does not require any user interaction.
WhatsApp implements signalling using a series of WhatsApp messages. Opening libwhatsapp.so in IDA, there are several native calls that handle incoming signalling messages.
Java_com_whatsapp_voipcalling_Voip_nativeHandleCallOffer
Java_com_whatsapp_voipcalling_Voip_nativeHandleCallOfferAck
Java_com_whatsapp_voipcalling_Voip_nativeHandleCallGroupInfo
Java_com_whatsapp_voipcalling_Voip_nativeHandleCallRekeyRequest
Java_com_whatsapp_voipcalling_Voip_nativeHandleCallFlowControl
Java_com_whatsapp_voipcalling_Voip_nativeHandleCallOfferReceipt
Java_com_whatsapp_voipcalling_Voip_nativeHandleCallAcceptReceipt
Java_com_whatsapp_voipcalling_Voip_nativeHandleCallOfferAccept
Java_com_whatsapp_voipcalling_Voip_nativeHandleCallOfferPreAccept
Java_com_whatsapp_voipcalling_Voip_nativeHandleCallVideoChanged
Java_com_whatsapp_voipcalling_Voip_nativeHandleCallVideoChangedAck
Java_com_whatsapp_voipcalling_Voip_nativeHandleCallOfferReject
Java_com_whatsapp_voipcalling_Voip_nativeHandleCallTerminate
Java_com_whatsapp_voipcalling_Voip_nativeHandleCallTransport
Java_com_whatsapp_voipcalling_Voip_nativeHandleCallRelayLatency
Java_com_whatsapp_voipcalling_Voip_nativeHandleCallRelayElection
Java_com_whatsapp_voipcalling_Voip_nativeHandleCallInterrupted
Java_com_whatsapp_voipcalling_Voip_nativeHandleCallMuted
Java_com_whatsapp_voipcalling_Voip_nativeHandleWebClientMessage
Using apktool to extract the WhatsApp APK, it appears these natives are called from a loop in the com.whatsapp.voipcalling.Voip class. Looking at the smali, it looks like signalling messages are sent as WhatsApp messages via the WhatsApp server, and this loop handles the incoming messages.
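For reference, pulling the smali out of an APK is a single command (file and output directory names are illustrative): apktool d WhatsApp.apk -o whatsapp-smali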
Immediately, I noticed that there was a peer-to-peer encrypted portion of the message (the rest of the message is only encrypted peer-to-server). I thought this had the highest potential of reaching bugs, as the server would not be able to sanitize the data. In order to be able to read and alter encrypted packets, I set up a remote server with a Python script that opens a socket. Whenever this socket receives data, the data is displayed on the screen, and I have the option of either sending the unaltered packet or altering the packet before it is sent. I then looked for the point in the WhatsApp smali where messages are peer-to-peer encrypted.
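A minimal sketch of such a server follows; the port, framing and hex-based editing here are assumptions, as the original script is not published.

# Minimal sketch of a view-and-alter packet server.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 9999))
srv.listen(1)
while True:
    conn, addr = srv.accept()
    data = conn.recv(65536)
    print("%d bytes from %s: %s" % (len(data), addr, data.hex()))
    altered = input("hex to send back (blank = unaltered): ").strip()
    conn.sendall(bytes.fromhex(altered) if altered else data)
    conn.close()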
Since WhatsApp uses libsignal for peer-to-peer encryption, I was able to find where messages are encrypted by matching log entries. I then added smali code that sends a packet with the bytes of the message to the server I set up, and then replaces it with the bytes the server returns (changing the size of the byte array if necessary). This allowed me to view and alter the peer-to-peer encrypted message. Making a call using this modified APK, I discovered that the peer-to-peer message was always exactly 24 bytes long, and appeared to be random. I suspected that this was the encryption key used by the call, and confirmed this by looking at the smali.
A single encryption key doesn’t have a lot of potential for malformed data to lead to bugs (I tried lengthening and shortening it to be safe, but got nothing but unexploitable null pointer issues), so I moved on to looking at the peer-to-server encrypted messages. Looking at the Voip loop in smali, it looked like the general flow is that the device receives an incoming message, it is deserialized and if it is of the right type, it is forwarded to the messaging loop. Then certain properties are read from the message, and it is forwarded to a processing function based on its type. Then the processing function reads even more properties, and calls one of the above native methods with the properties as its parameters. Most of these functions have more than 20 parameters.
Many of these functions perform logging when they are called, so by making a test call, I could figure out which functions get called before a call is picked up. It turns out that during a normal incoming call, the device only receives an offer and calls Java_com_whatsapp_voipcalling_Voip_nativeHandleCallOffer, and then spawns the incoming call screen in WhatsApp. The other signal types are not used until the call is picked up.
An immediate question I had was whether other signal types are processed if they are received before a call is picked up. Just because the initiating device never sends these signal types before the call is picked up doesn’t mean the receiving device wouldn’t process them if it received them.
Looking through the APK smali, I found the class com.whatsapp.voipcalling.VoiceService$DefaultSignalingCallback that has several methods like sendOffer and sendAccept that appeared to send the messages that are processed by these native calls. I changed sendOffer to call other send methods, like sendAccept, instead of its normal messaging functionality. Trying this, I discovered that the Voip loop will process any signal type regardless of whether the call has been answered. The native methods will then parse the parameters, process them and put the results in a buffer, and then call a single method to process the buffer. It is only at that point that processing will stop if the message is of the wrong type.

I then reviewed all of the above methods in IDA. The code was very conservatively written, and most needed checks were performed. However, there were a few areas that potentially had bugs that I wanted to investigate more. I decided that changing the parameters to calls in com.whatsapp.voipcalling.VoiceService$DefaultSignalingCallback was too slow to test the number of cases I wanted to test, and went looking for another way to alter the messages.
Ideally, I wanted a way to pass peer-to-server encrypted messages to my server before they were sent, so I could view and alter them. I went through the WhatsApp APK smali looking for a point after serialization but before encryption where I could add my smali function that sends and alters the packets. This was fairly difficult and time consuming, and I eventually put my smali in every method that wrote to a non-file ByteArrayOutputStream in the com.whatsapp.protocol and com.whatsapp.messaging packages (about 10 total) and looked for where it got called. I figured out where it got called, and fixed the class so that anywhere a byte array was written out from a stream, it got sent to my server, and removed the other calls. (If you’re following along at home, the smali file I changed included the string “Double byte dictionary token out of range”, and the two methods I changed contained calls to toByteArray, and ended with invoking a protocol interface.) Looking at what got sent to my server, it seemed like a reasonably comprehensive collection of WhatsApp messages, and the signalling messages contained what I thought they would.
WhatsApp messages are in a compressed XMPP format. A lot of parsers have been written for reverse engineering this protocol, but I found the whatsapp-reveng parser worked the best. I did have to replace the tokens in whatsapp_defines.py with a list extracted from the APK for it to work correctly though. This made it easier to figure out what was in each packet sent to the server.
Playing with this a bit, I discovered that there are three types of checks on WhatsApp signalling messages. First, the server validates and modifies incoming signalling messages. Second, the messages are deserialized; this can cause errors if the format is incorrect, and it generally limits the contents of the Java message object that is passed on. Finally, the native methods perform checks on their parameters.
These additional checks prevented several of the areas I thought were problems from actually being problems. For example, there is a function called by Java_com_whatsapp_voipcalling_Voip_nativeHandleCallOffer that takes in an array of byte arrays, an array of integers and an array of booleans. It uses these values to construct candidates for the call. It checks that the array of byte arrays and the array of integers are of the same length before it loops through them, using values from each, but it does not perform the same check on the boolean array. I thought that this could go out of bounds, but it turns out that the integers and booleans are serialized as a vector of <int,bool> pairs, and the arrays are then copied from the vector, so it is not actually possible to send arrays with different lengths.
One area of the signalling messages that looked especially concerning was the voip_options field of the message. This field is never sent from the sending device, but is added to the message by the server before it is forwarded to the receiving device. It is a buffer in JSON format that is processed by the receiving device and contains dozens of configuration parameters.
{"aec":{"offset":"0","mode":"2","echo_detector_mode":"4","echo_detector_impl":"2","ec_threshold":"50","ec_off_threshold":"40","disable_agc":"1","algorithm":{"use_audio_packet_rate":"1","delay_based_bwe_trendline_filter_enabled":"1","delay_based_bwe_bitrate_estimator_enabled":"1","bwe_impl":"5"},"aecm_adapt_step_size":"2"},"agc":{"mode":"0","limiterenable":"1","compressiongain":"9","targetlevel":"1"},"bwe":{"use_audio_packet_rate":"1","delay_based_bwe_trendline_filter_enabled":"1","delay_based_bwe_bitrate_estimator_enabled":"1","bwe_impl":"5"},"encode":{"complexity":"5","cbr":"0"},"init_bwe":{"use_local_probing_rx_bitrate":"1","test_flags":"982188032","max_tx_rott_based_bitrate":"128000","max_bytes":"8000","max_bitrate":"350000"},"ns":{"mode":"1"},"options":{"connecting_tone_desc": "test","video_codec_priority":"2","transport_stats_p2p_threshold":"0.5","spam_call_threshold_seconds":"55","mtu_size":"1200","media_pipeline_setup_wait_threshold_in_msec":"1500","low_battery_notify_threshold":"5","ip_config":"1","enc_fps_over_capture_fps_threshold":"1","enable_ssrc_demux":"1","enable_preaccept_received_update":"1","enable_periodical_aud_rr_processing":"1","enable_new_transport_stats":"1","enable_group_call":"1","enable_camera_abtest_texture_preview":"1","enable_audio_video_switch":"1","caller_end_call_threshold":"1500","call_start_delay":"1200","audio_encode_offload":"1","android_call_connected_toast":"1"}Sample voip_options (truncated)
If a peer could send a voip_options parameter to another peer, it would open up a lot of attack surface, including a JSON parser and the processing of these parameters. Since this parameter almost always appears in an offer, I tried modifying an offer to contain one, but the offer was rejected by the WhatsApp server with error 403. Looking at the binary, there were three other signal types in the incoming call flow that could accept a voip_options parameter. Java_com_whatsapp_voipcalling_Voip_nativeHandleCallOfferAccept and Java_com_whatsapp_voipcalling_Voip_nativeHandleCallVideoChanged were accepted by the server if a voip_options parameter was included, but it was stripped before the message was sent to the peer. However, if a voip_options parameter was attached to a Java_com_whatsapp_voipcalling_Voip_nativeHandleCallGroupInfo message, it would be forwarded to the peer device. I confirmed this by sending malformed JSON and looking at the log of the receiving device for an error.
The voip_options parameter is processed by WhatsApp in three stages. First, the JSON is parsed into a tree. Then the tree is transformed to a map, so JSON object properties can be looked up efficiently even though there are dozens of them. Finally, WhatsApp goes through the map, looking for specific parameters and processes them, usually copying them to an area in memory where they will set a value relevant to the call being made.
Starting off with the JSON parser, it was clear that it was the PJSIP JSON parser. I compiled that code and fuzzed it, and found only one minor out-of-bounds read issue.
I then looked at the conversion of the JSON tree output from the parser into the map. The map is a very efficient structure. It is a hash map that uses FarmHash as its hashing algorithm, and it is designed so that the entire map is stored in a single slab of memory, even if the JSON objects are deeply nested. I looked at many open source projects that contain similar structures, but could not find a match. I looked through the creation of this structure in great detail, looking especially for type confusion bugs as well as errors when the memory slab is expanded, but did not find any issues.
I also looked at the functions that go through the map and handle specific parameters. These functions are extremely long, and I suspect they are generated using a code generation tool such as bison. They mostly copy parameters into static areas of memory, at which point they become difficult to trace. I did not find any bugs in this area either. Other than going through parameter names and looking for values that seemed likely to cause problems, I did not do any analysis of how the values fetched from JSON are actually used. One parameter that seemed especially promising was an A/B test parameter called setup_video_stream_before_accept. I hoped that setting this would allow the device to accept RTP before the call is answered, which would make RTP bugs interaction-less, but I was unable to get this to work.
In the process of looking at this code, it became difficult to verify its functionality without the ability to debug it. Since WhatsApp ships an x86 library for Android, I wondered if it would be possible to run the JSON parser on Linux.
Tavis Ormandy created a tool that can load the libwhatsapp.so library on Linux and run native functions, so long as they do not have a dependency on the JVM. It works by patching the .dynamic ELF section to remove unnecessary dependencies by replacing DT_NEEDED tags with DT_DEBUG tags. We also needed to remove constructors and destructors by changing the DT_FINI_ARRAYSZ and DT_INIT_ARRAYSZ entries to zero. With these changes in place, we could load the library using dlopen() and use dlsym() and dlclose() as normal.
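A rough sketch of this patching approach using pyelftools is below; tag values are from the ELF specification. This assumes a 64-bit little-endian library and blindly drops every DT_NEEDED entry, whereas the real tool is more selective, and error handling is omitted.

# Sketch: patch .dynamic so the library loads without its Android deps.
# DT_NEEDED=1, DT_DEBUG=21, DT_INIT_ARRAYSZ=27, DT_FINI_ARRAYSZ=28.
import struct
from elftools.elf.elffile import ELFFile

blob = bytearray(open("libwhatsapp.so", "rb").read())
with open("libwhatsapp.so", "rb") as f:
    dyn = ELFFile(f).get_section_by_name(".dynamic")
    off, end = dyn["sh_offset"], dyn["sh_offset"] + dyn["sh_size"]

while off < end:
    tag, val = struct.unpack_from("<qQ", blob, off)  # one Elf64_Dyn entry
    if tag == 0:  # DT_NULL terminates the table
        break
    if tag == 1:  # DT_NEEDED -> DT_DEBUG removes the dependency
        struct.pack_into("<qQ", blob, off, 21, 0)
    elif tag in (27, 28):  # zero DT_INIT_ARRAYSZ / DT_FINI_ARRAYSZ
        struct.pack_into("<qQ", blob, off, tag, 0)
    off += 16

open("libwhatsapp-patched.so", "wb").write(blob)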
Using this tool, I was able to look at the JSON parsing in more detail. I also set up distributed fuzzing of the JSON binary. Unfortunately, it did not uncover any bugs either.
Overall, WhatsApp signalling seemed like a promising attack surface, but we did not find any vulnerabilities in it. There were two areas where we were able to extend the attack surface beyond what is used in the basic call flow. First, it was possible to send signalling messages that should only be sent after a call is answered before the call is answered, and they were processed by the receiving device. Second, it was possible for a peer to send voip_options JSON to another device. WhatsApp could reduce the attack surface of signalling by removing these capabilities.
I made these suggestions to WhatsApp, and they responded that they were already aware of the first issue as well as variants of the second issue. They said they were in the process of limiting what signalling messages can be processed by the device before a call is answered. They had already fixed other issues where a peer can send voip_options JSON to another peer, and fixed the method I reported as well. They said they are also considering adding cryptographic signing to the voip_options parameter so a device can verify it came from the server to further avoid issues like this. We appreciate their quick resolution of the voip_options issue and strong interest in implementing defense-in-depth measures.
In Part 5, we will discuss the conclusions of our research and make recommendations for better securing video conferencing.
Categories: Security

Adventures in Video Conferencing Part 3: The Even Wilder World of WhatsApp

Google Project Zero - Tue, 12/11/2018 - 12:42
Posted by Natalie Silvanovich, Project Zero
WhatsApp is another application that supports video conferencing that does not use WebRTC as its core implementation. Instead, it uses PJSIP, which contains some WebRTC code, but also contains a substantial amount of other code, and predates the WebRTC project. I fuzzed this implementation to see if it had similar results to WebRTC and FaceTime.

Fuzzing Set-up

PJSIP is open source, so it was easy to identify the PJSIP code in the Android WhatsApp binary (libwhatsapp.so). Since PJSIP uses the open source library libsrtp, I started off by opening the binary in IDA and searching for the string srtp_protect, the name of the function libsrtp uses for encryption. This led to a log entry emitted by a function that looked like srtp_protect. There was only one function in the binary that called this function, and it called memcpy soon before the call. Some log entries before the call contained the file name srtp_transport.c, which exists in the PJSIP repository. The log entries in the WhatsApp binary say that the function being called is transport_send_rtp2, and the PJSIP source only has a function called transport_send_rtp, but it looks similar to the function calling srtp_protect in WhatsApp, in that it has the same number of calls before and after the memcpy. Assuming that the code in WhatsApp is some variation of that code, the memcpy copies the entire unencrypted packet right before it is encrypted.
Hooking this memcpy seemed like a possible way to fuzz WhatsApp video calling. I started off by hooking memcpy for the entire app using a tool called Frida. This tool can easily hook native functions in Android applications, and I was able to see calls to memcpy from WhatsApp within minutes. Unfortunately though, video conferencing is very performance sensitive, and a delay sending video packets actually influences the contents of the next packet, so hooking every memcpy call didn’t seem practical. Instead, I decided to change the single memcpy to point to a function I wrote.
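For readers unfamiliar with Frida, a minimal hook of this shape looks something like the following; the process name and logging are illustrative, and this version only observes calls rather than altering them.

# Minimal Frida sketch: log every memcpy in WhatsApp on a USB device.
import frida

session = frida.get_usb_device().attach("com.whatsapp")
script = session.create_script("""
Interceptor.attach(Module.findExportByName(null, "memcpy"), {
    onEnter: function (args) {
        // args[0] = dst, args[1] = src, args[2] = length
        send("memcpy of " + args[2].toInt32() + " bytes");
    }
});
""")
script.on("message", lambda message, data: print(message))
script.load()
input()  # keep the hook alive until enter is pressed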
I started off by writing a function in assembly that loaded a library from the filesystem using dlopen, retrieved a symbol by calling dlsym and then called into the library. Frida was very useful in debugging this, as it could hook calls to dlopen and dlsym to make sure they were being called correctly. I overwrote a function in the WhatsApp GIF transcoder with this function, as it is only used in sending text messages, which I didn’t plan to do with this altered version. I then set the memcpy call to point to this function instead of memcpy, using this online ARM branch finder.
sub_2F8CC
MOV             X21, X30
MOV             X22, X0
MOV             X23, X1
MOV             X20, X2
MOV             X1, #1
ADRP            X0, #aDataDataCom_wh@PAGE ; "/data/data/com.whatsapp/libn.so"
ADD             X0, X0, #aDataDataCom_wh@PAGEOFF ; "/data/data/com.whatsapp/libn.so"
BL              .dlopen
ADRP            X1, #aApthread@PAGE ; "apthread"
ADD             X1, X1, #aApthread@PAGEOFF ; "apthread"
BL              .dlsym
MOV             X8, X0
MOV             X0, X22
MOV             X1, X23
MOV             X2, X20
NOP
BLR             X8
MOV             X30, X21
RET

The library loading function
I then wrote a library for Android which had the same parameters as memcpy, but fuzzed and copied the buffer instead of just copying it, and put it on the filesystem where it would be loaded by dlopen. I then tried making a WhatsApp call with this setup. The video call looked like it was being fuzzed and crashed in roughly fifteen minutes.
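The mutation itself can be very simple. A sketch of one strategy, shown in Python for clarity (the native library's exact strategy isn't published, so the rate and byte-flipping details are assumptions):

# Simple mutation: xor a small fraction of bytes to random values.
import random

def mutate(packet, rate=0.005):
    out = bytearray(packet)
    for i in range(len(out)):
        if random.random() < rate:
            out[i] ^= random.randrange(1, 256)  # guaranteed to change the byte
    return bytes(out)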
Replay Set-up

To replay the packets, I added logging to the library, so that each buffer that was altered would also be saved to a file. Then I created a second library that copied the logged packets into the buffer being copied instead of altering it. This required modifying the WhatsApp binary slightly, because the logged packet will usually not be the same size as the packet currently being sent. I changed the length of the hooked memcpy to be passed by reference instead of by value, and then had the library change the length to the length of the logged packet. This changed the value of the length so that it would be correct for the call to srtp_protect. Luckily, the buffer that the packet is copied into is a fixed length, so there is no concern that a valid packet will overflow the buffer length. This is a common design pattern in RTP processing that improves performance by reducing length checks. It was also helpful in modifying FaceTime to replay packets of varying length, as described in the previous post.
This initial replay setup did not work, and looking at the logged packets, it turned out that WhatsApp uses four streams with different SSRCs for video conferencing (possibly one for video, one for audio, one for synchronization and one for good luck). The streams each had only one payload type, and they were all different, so it was fairly easy to map each SSRC to its stream. So I modified the replay library to determine the current SSRC for each stream based on the payload types of incoming packets, and then to replace the SSRC of the replayed packets with the correct one based on their payload type. This reliably replayed a WhatsApp call. I was then able to fuzz and reproduce crashes on WhatsApp.

Results

Using this setup, I reported one heap corruption issue in WhatsApp, CVE-2018-6344. This issue has since been fixed. After this issue was resolved, fuzzing did not yield any additional crashes with security impact, and we moved on to other methodologies. Part 4 will describe our other (unsuccessful) attempts to find vulnerabilities in WhatsApp.
Categories: Security

2018 ABA Tech Survey shows over two-thirds of attorneys use iPhone, over one-quarter use Android

iPhone J.D. - Tue, 12/11/2018 - 02:19

The iPhone remains, by far, the most popular smartphone for attorneys.  Nevertheless, in 2018 an all-time high of one-quarter of all attorneys reported using an Android phone, and that increase is mostly attributable to sole practitioners, where iPhone-to-Android use is a 2-to-1 ratio.

Every year, the ABA's Legal Technology Resource Center conducts a survey to gauge the use of legal technology by attorneys in private practice in the United States.  The 2018 report (edited by Gabriella Mihm) was recently released, and as always, I was particularly interested in Volume VI, titled Mobile Lawyers.  No survey is perfect, but the ABA tries hard to ensure that its survey has statistical significance, and every year this is one of the best sources of information on how attorneys use technology.  Note that the survey was conducted from June to October, 2018, so these numbers don't reflect any changes in what attorneys started using when Apple introduced the 2018 versions of the iPhone or iPad Pro. This is the ninth year that I have reported on this survey, and with multiple years of data we can see some interesting trends.  (My reports on prior ABA surveys are located here: 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010.)

Over two-thirds of all U.S. attorneys use an iPhone, and Android use hits all-time high

The 2018 survey revealed that around 95% of all attorneys use a smartphone to get work done out of the office.  For attorneys using a smartphone, over two-thirds use an iPhone, and for the first time ever more than 25% report using an Android, with the highest Android use among solo attorneys.

The survey asks each attorney "Do you use a smartphone (e.g. iPhone, Android) for law-related tasks while away from your primary workplace?"  Back in 2010, the number of attorneys answering "no" was around 12%. That number decreased over the years to 2017 when it hit an all-time low of only 4.4%.  This year, the number increased only slightly to 4.9%.  We can still say that over 95% of all attorneys use a smartphone to get work done.

In 2013, the big news was that, for the first time, over half of all attorneys were using an iPhone.  In 2014 and 2015 the percentage was around 60%.  In 2016, there was a big increase up to 68.4%.  In 2017, the number was up to 74.9%.  In 2018, the number is down slightly to 72%.  Taking into account that 4.9% of all attorneys are not using a smartphone, we can say that 68.4% of all attorneys in private practice in the U.S. are using an iPhone in their law practice, which is the same percentage as 2016.  According to the ABA 2018 National Lawyer Population Survey, there are 1,338,678 attorneys in the U.S., which suggests that there could be over 916,000 attorneys in the U.S. using an iPhone.

If 68.4% of all attorneys are using an iPhone, and 4.9% of attorneys are not using any smartphone, what are the others using?  Most of them are using an Android smartphone, around 25.4%.  That is an all-time high for Android, so 2018 marks the first year in which more than one-quarter of all attorneys are using an Android phone.

Back in 2011, 40% of all attorneys used a BlackBerry, and there was a time when it was incredibly common to see another lawyer with a BlackBerry.  However, BlackBerry use by attorneys has dropped sharply since 2011.  In 2018, the number reached a new low of only 1.5%.  According to the survey, the most significant use of BlackBerry devices this year is in law firms with 50-99 lawyers; in those firms, 100% of the attorneys are using a smartphone, and while iPhone use is a little higher than the national average at 72.7%, Android use is down to 18.2% and BlackBerry use is at 9.1%.  If you are looking for an attorney who is still using a BlackBerry phone, your best bet is to look at a law firm with 50-99 attorneys.

If you are looking for an attorney who is using an Android phone, your best bet is to look for a sole practitioner.  Only 91.4% of solo attorneys use a smartphone, a lower rate than any other firm size.  60.1% of solo attorneys use an iPhone, and 30.4% use Android.  So among solo attorneys, almost one in ten is not using a smartphone at all, and among those who are, Android is half as popular as the iPhone.  That's still a large number of solo attorneys using an iPhone, but it is interesting that Android phones are more popular with solo attorneys than with attorneys who work with other attorneys at a law firm.  I just did a quick search and couldn't find recent numbers, but historically almost half of all attorneys have been sole practitioners, so that is a big market.

Finally, almost 1% of attorneys used some sort of Microsoft Windows operating system on their smartphone in 2018, and almost another 0.7% say that they don't know what kind of smartphone they are using.

If you add the numbers, you'll notice that they total slightly over 100%.  But that makes sense, because a small number of attorneys use multiple smartphones.

The following pie chart is somewhat imprecise because, as I just noted, the actual numbers add up to just over 100%, but it gives you a general, graphical sense of the relative use:

To place these numbers in historical context, the following chart shows lawyer smartphone use over recent years.  The two dramatic changes in this chart are of course the plunge in BlackBerry use and the surge in iPhone use.  There has been a more gradual, but noticeable, decrease in the number of attorneys not using a smartphone at all.  As for Android use, there was a slight increase from 2011 to 2015, then a slight decrease for two years, and then the all-time high this year.  The "Other" category in this chart includes Windows, something else, and those who don't know what smartphone they are using.

Almost 40% of U.S attorneys use an iPad

Apple introduced the original iPad in 2010, and for the first few years it resulted in a surge in lawyer tablet use.  In 2011, only 15% of all attorneys responded that they use a tablet.  That number more than doubled to 33% in 2012, and rose to 48% in 2013.  Since 2013, the number has stayed between 48% and 50%; in 2018, it was 48.5%.  Suffice it to say that about half of all U.S. attorneys in private practice currently use a tablet, and that has remained true for the last five years.

It used to be that around 90% of attorneys using a tablet were using an iPad.  It was 89% in 2011, 91% in 2012, and 91% in 2013.  From 2014 to 2016, that number stayed around 84%.  In 2017, that number dropped to 81.3%, and in 2018 it is at 78.1%. 

It looks like the very slight drop in attorneys using iPads is mostly attributable to slightly fewer attorneys using tablet devices overall.  Android and Windows tablet use by attorneys hasn't really changed very much.  That surprises me on the Windows side because I do seem to hear more attorneys talking about using a Windows Surface device.

Here is a historical chart of attorney tablet use:

Popular apps

The survey also asked attorneys to identify apps that they use.  I want to start by making the same objection that I have been making for many years now:  I don't like how the ABA asks the question.  The ABA first asks "Have you ever downloaded a legal-specific app for your smartphone?"  In 2018, 49.4% said yes.  When I see the word "smartphone" in this question, I think of my iPhone, not my iPad.  Then the next question asks:  "What legal specific app(s) did you download?"  When I read the questions in that order, I'm thinking of the apps that I downloaded on my iPhone, not my iPad.  But others must be reading the question differently because I see TrialPad and TranscriptPad in the answers, and those apps exist only on the iPad, not on the iPhone.  I would have never mentioned those apps when answering the question, even though I use them on my iPad, and TranscriptPad is one of my favorite legal specific apps.

So while I question how much value you can put in these answers, for what it is worth, the top 13 apps listed are, in order of the percentage of attorneys mentioning them:

  1. Fastcase
  2. Westlaw
  3. Lexis Advance
  4. A legal dictionary app
  5. TrialPad
  6. TranscriptPad
  7. Clio
  8. LexisNexis Get Cases & Shepardize
  9. LexisNexis Legal News
  10. Courtlink
  11. Casemaker
  12. Westlaw News
  13. HeinOnline

Congrats to Ed Walters and the team at Fastcase for moving up to the #1 spot this year. 

The ABA then asked about general business apps, and the questions have the same ambiguity:  the ABA first asked if the attorney ever downloaded a general business app to a smartphone (50.2% said yes in 2018), and then the ABA asked which apps were downloaded, without making it clear whether the question was asking about the iPhone and iPad.  The answers provided were, in this order:

  1. Dropbox
  2. LinkedIn
  3. Evernote
  4. LogMeIn
  5. Documents to Go
  6. GoodReader
  7. Box
  8. QuickOffice
  9. MS Office/Word
  10. Notability
  11. QuickBooks

It amazes me that Microsoft Word is so low on this list (only 4.5% report using it).  I consider Word an essential app for attorneys using an iPhone or an iPad.

Categories: iPhone Web Sites

In the news

iPhone J.D. - Fri, 12/07/2018 - 00:55

If you are using a Series 4 Apple Watch in the U.S., Apple has now turned on the ability to use your Apple Watch to do an EKG/ECG.  Just update to the latest version of watchOS, 5.1.2, to start using the feature.  When you first configure the ECG app, you are also given the option to turn on having the Apple Watch do additional periodic checks on your heart.  Apple points out that this feature can only do so much, and it is certainly no substitute for talking to your doctor if you are not feeling good.  Nevertheless, it is fascinating to see how far Apple has extended the health capabilities of the Apple Watch in the short amount of time that the product has been available.  I'm sure that Apple has much more planned in this area, and Alex Fitzpatrick of TIME magazine interviewed Apple CEO Tim Cook and others to discuss this brave new world.  And now, the news of note from the past week:

  • Michael Payne of Legaltech news discusses the end of paper as attorneys move from a legal pad to an iPad.
  • Nazia Parveen of The Guardian reports on the trial of a pharmacist in the UK who was convicted of murdering his wife, in part due to evidence obtained from his iPhone and his wife's iPhone providing evidence of heart rates and moving around at specific times.
  • Zac Hall of 9to5Mac has some good suggestions for using HomeKit to automate your holiday lights.  My advice:  if you do nothing more than this, adding a smart plug to a Christmas Tree is a huge improvement.  It is much less awkward than reaching behind a tree to plug it in, may give you the ability to dim your tree, allows you to have the tree turn off automatically at a certain time, etc.  And the ability to tell Siri to turn on your tree lights is really useful.
  • Joanna Stern of the Wall Street Journal recommends the best mesh Wi-Fi systems.  And as usual, her article has a great video to accompany it.
  • Jonny Evans of Computerworld has 12 Siri tips that you might not know about.
  • Dave Mark of The Loop notes a few new iPad Pro hardware tricks (such as the ability to spin your Apple Pencil -- I figured out that one too) based on a video from DailyTekk.
  • Active military personnel and veterans can now get a 10% discount on Apple products, as noted by Michael Potuck of 9to5Mac.
  • iOS 12.1.1 was released this week.  It improves RTT/TTY support, which is a form of texting used by individuals who have difficulty making audio phone calls.  As each letter is typed on one screen, it appears on the other person's screen.  Chance Miller of 9to5Mac explains this feature and how RTT/TTY is improved in iOS 12.1.1.
  • And finally, in this video, which Apple calls Real Stories, four people share stories of how an Apple Watch helped to save their life.

Categories: iPhone Web Sites

IBM TS4500 R5 Tape Library Guide

IBM Redbooks Site - Thu, 12/06/2018 - 08:30
Redbook, published: Thu, 6 Dec 2018

The IBM® TS4500 (TS4500) tape library is a next-generation tape solution that offers higher storage density and more integrated management than previous solutions.

Categories: Technology

Adventures in Video Conferencing Part 2: Fun with FaceTime

Google Project Zero - Wed, 12/05/2018 - 13:43
Posted by Natalie Silvanovich, Project Zero
FaceTime is Apple’s video conferencing application for iOS and Mac. It is closed source, and does not appear to use any third-party libraries for its core functionality. I wondered whether fuzzing the contents of FaceTime’s audio and video streams would lead to results similar to WebRTC.

Fuzzing Set-up
Philipp Hancke performed an excellent analysis of FaceTime’s architecture in 2015. It is similar to WebRTC, in that it exchanges signalling information in SDP format and then uses RTP for audio and video streams. Looking at the FaceTime implementation on a Mac, it seemed the bulk of the calling functionality of FaceTime is in a daemon called avconferenced. Opening the binary that supports its functionality, AVConference, in IDA, I found that it contains a function called SRTPEncryptData. This function calls CCCryptorUpdate, which appeared to encrypt RTP packets below the header.
To do a quick test of whether fuzzing was likely to be effective, I hooked this function and altered the underlying encrypted data. Normally, this can be done by setting the DYLD_INSERT_LIBRARIES environment variable, but since avconferenced is a daemon that restarts automatically when it dies, there wasn’t an easy way to set an environment variable. I eventually used insert_dylib to alter the AVConference binary to load a library on startup, and restarted the process. The loaded library used DYLD_INTERPOSE to replace CCCryptorUpdate with a version that fuzzed every input buffer (using fuzzer q from Part 1) before it was processed. This implementation had a lot of problems: it fuzzed both encryption and decryption, it affected every call to CCCryptorUpdate from avconferenced, not just ones involved in SRTP, and there was no way to reproduce a crash. But using the modified FaceTime to call an iPhone led to video output that looked corrupted, and the phone crashed in a few minutes. This confirmed that this function was indeed where FaceTime calls are encrypted, and that fuzzing was likely to find bugs.
I made a few changes to the function that hooked CCCryptorUpdate to attempt to solve these problems. I limited fuzzing of the input buffer to the two threads that write audio and video output to RTP, which also solved the problem of decrypted packets being fuzzed, as these threads only ever encrypt. I then added functionality that wrote the encrypted, fuzzed contents of each packet to a series of log files, so that test cases could be replayed. This required altering the sandbox of avconferenced so that it could write files to the log location, and adding spinlocks to the hook, as calling CCCryptorUpdate is thread safe, but logging packets isn’t.

Call Replay
I then wrote a second library that hooks CCCryptorUpdate and replays packets logged by the first library by copying the logged packets in sequence into the packet buffers passed into the function. Unfortunately, this required a small modification to the AVConference binary, as the SRTPEncryptData function does not respect the length returned by CCCryptorUpdate; instead, it assumes that the length of the encrypted data is the same as the length of the plaintext data, which is reasonable when CCCryptorUpdate isn’t being hooked. Since SRTPEncryptData always uses a large fixed-size buffer for encryption, and encryption is in-place, I changed the function to retrieve the length of the encrypted buffer from the very end of the buffer, which was set in the hooked CCCryptorUpdate call. This memory is unlikely to be used for other purposes due to the typically shorter length of RTP packets. Unfortunately though, even though the same encrypted data was being replayed to the target, it wasn’t being processed correctly by the receiving device.
Understanding why requires an explanation of how RTP works. An RTP packet has the following format:

[Diagram: the fixed RTP header — version (V), padding (P), extension (X), CSRC count (CC), marker (M), payload type (PT), sequence number, timestamp and SSRC]
It contains several fields that impact how its payload is interpreted. The SSRC is a random identifier that identifies a stream. For example, in FaceTime the audio and video streams have different SSRCs. SSRCs can also help differentiate between streams in a situation where a user could potentially have an unlimited number of streams, for example, multiple participants in a video call. RTP packets also have a payload type (PT in the diagram) which is used to differentiate different types of data in the payload. The payload type for a certain data type is consistent across calls. In FaceTime, the video stream has a single payload type for video data, but the audio stream has two payload types, likely one for audio data and the other for synchronization. The marker (M in the diagram) field of RTP is also used by FaceTime to represent when a packet is fragmented, and needs to be reassembled.
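To make the layout concrete, here is a minimal parse of the fixed 12-byte RTP header as defined in RFC 3550 (a sketch; extension headers and CSRC entries, when present, follow these 12 bytes):

# Parse the fixed 12-byte RTP header (RFC 3550).
import struct

def parse_rtp_header(packet):
    b0, b1, seq, ts, ssrc = struct.unpack_from(">BBHII", packet)
    return {
        "version": b0 >> 6,
        "padding": (b0 >> 5) & 1,
        "extension": (b0 >> 4) & 1,  # an extension header follows if set
        "csrc_count": b0 & 0x0F,
        "marker": b1 >> 7,           # used by FaceTime for fragmentation
        "payload_type": b1 & 0x7F,
        "sequence_number": seq,
        "timestamp": ts,
        "ssrc": ssrc,
    }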
From this it is clear that simply copying logged data into the current encrypted packet won’t function correctly, because the data needs to have the correct SSRC, payload type and marker, or it won’t be interpreted correctly. This wasn’t necessary in WebRTC, because I had enough control over WebRTC that I could create a connection with a single SSRC and payload type for fuzzing purposes. But there is no way to do this in FaceTime, even muting a video call leads to silent audio packets being sent as opposed to the audio stream shutting down. So these values needed to be manually corrected.
An RTP feature called extensions made correcting these fields difficult. An extension is an optional header that can be added to an RTP packet. Extensions are not supposed to depend on the RTP payload to be interpreted, and extensions are often used to transmit network or display features. Some examples of supported extensions include the orientation extension, which tells the endpoint the orientation of the receiving device, and the mute extension, which tells the endpoint whether the receiving device is muted.
Extensions mean that even if it is possible to determine the payload type, marker and SSRC of data, this is not sufficient to replay the exact packet that was sent. Moreover, FaceTime creates extensions after the packet is encrypted, so it is not possible to create the complete RTP packet by hooking CCCryptorUpdate, because extensions could be added later.
At this point, it seemed necessary to hook sendmsg as well as CCCryptorUpdate. This would allow the outgoing RTP header to be modified once it is complete. There were a few challenges in doing this. To start, audio and video packets are sent by different threads in FaceTime, and can be reordered between the time they are encrypted and the time they are sent by sendmsg. So I couldn’t assume that an RTP packet received by sendmsg was necessarily the last one that was encrypted. There was also the problem that SSRCs are dynamic, so replaying an RTP packet with the same SSRC it was recorded with won’t work; it needs to have the new SSRC for the audio or video stream.
Note that in macOS Mojave, FaceTime can call sendmsg via either the AVConference binary or the IDSFoundation binary, depending on the network configuration. So to capture and replay unencrypted RTP traffic on newer systems, it is necessary to hook CCCryptorUpdate in AVConference and sendmsg in IDSFoundation (AVConference calls into IDSFoundation when it calls sendmsg). Otherwise, the process is the same as on older systems.
I ended up implementing a solution that records each packet’s unencrypted payload and its RTP header separately, using a snippet of the encrypted payload to pair each header with the correct unencrypted payload. Then to replay packets, the packets encrypted in CCCryptorUpdate were replaced with the logged packets, and once the encrypted payload came through to sendmsg, the header was replaced with the logged one for that payload. Fortunately, the two streams with unique SSRCs used by FaceTime do not share any payload types, so it was possible to determine the new SSRC for each stream by waiting for an incoming packet with the correct payload type. Then in each subsequent packet, the SSRC was replaced with the correct one.
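A sketch of that SSRC fix-up, in Python for clarity (the actual hook operates on raw buffers inside sendmsg; the byte offsets come from the standard RTP header layout shown earlier):

# Learn the live SSRC for each payload type from incoming packets,
# then stamp it into replayed packets with the same payload type.
import struct

pt_to_live_ssrc = {}

def learn_ssrc(incoming):
    pt = incoming[1] & 0x7F
    pt_to_live_ssrc[pt] = struct.unpack_from(">I", incoming, 8)[0]

def fix_ssrc(logged):
    pkt = bytearray(logged)
    pkt[8:12] = struct.pack(">I", pt_to_live_ssrc[pkt[1] & 0x7F])
    return bytes(pkt)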
Unfortunately, this still did not replay a FaceTime call correctly, and calls often experienced decryption failures. I eventually determined that audio and video on FaceTime are encrypted with different keys, and updated the replay script to queue the CCCryptors used by CCCryptorUpdate based on whether they were for audio or video content. Then in sendmsg, the entire logged RTP packet, including the unencrypted payload, was copied into the outgoing packet, the SSRC was fixed, and the payload was encrypted with the next CCCryptor out of the appropriate queue. If a CCCryptor wasn’t available, outgoing packets were dropped until a new one was created. At this point, it was possible to stop using the modified AVConference binary, as all the packet modification was now happening in sendmsg. This implementation still had reliability problems.
Digging more deeply into how FaceTime encryption works, packets are encrypted in CTR mode, which requires a counter. The counter is initialized to a unique value for each packet that is sent. During the initialization of the RTP stream, the peers exchange two 16-byte random tokens, one for audio and one for video. The counter value for each packet is then calculated by exclusive or-ing the token with several values found in the packet, including the SSRC and the sequence number. Only one value in this calculation, the sequence number, changes between each packet. So it is possible to calculate the counter value for each packet by knowing the initial counter value and sequence number, which can be retrieved by hooking CCCryptorCreateWithMode. The sequence number is xor-ed with the random token at index 0x12 when FaceTime constructs a counter, so by xor-ing this location with the initial sequence number and then a packet’s sequence number, the counter value for that packet can be calculated. The key can also be retrieved by hooking CCCryptorCreateWithMode. This allowed me to dispense with queuing cryptors, as I now had all the information I needed to construct a cryptor for any packet. This allowed packets to be encrypted faster and more accurately.
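The xor trick can be expressed compactly: since the initial counter already has the initial sequence number xor-ed in at that location, xor-ing it back out and xor-ing the new sequence number in yields the counter for any packet. A sketch (the width and byte order of the field at offset 0x12 are assumptions):

# Derive the counter for any packet from the first packet's counter.
import struct

def counter_for_packet(initial_counter, initial_seq, seq):
    ctr = bytearray(initial_counter)
    old = struct.pack(">H", initial_seq & 0xFFFF)
    new = struct.pack(">H", seq & 0xFFFF)
    for i in (0, 1):
        ctr[0x12 + i] ^= old[i] ^ new[i]  # xor old seq out, new seq in
    return bytes(ctr)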
Sequence numbers still posed a problem though, as the initial sequence number of an RTP stream is randomly generated at the beginning of the call, and is different between subsequent calls. Also, sequence numbers are used to reconstruct video streams in order, so they need to be correct. I altered the replay tool to determine the starting sequence number of each live stream, and then to compute each replayed packet’s sequence number by taking its offset from the logged stream’s starting sequence number and adding it to the live stream’s starting sequence number. These two changes finally made the replay tool work, though replay gets slower and slower as a stream is replayed due to dropped packets.
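The rebasing amounts to one line of modular arithmetic (RTP sequence numbers are 16-bit and wrap):

def rebase_seq(live_start, logged_start, logged_seq):
    return (live_start + (logged_seq - logged_start)) & 0xFFFF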
Results
Using this setup, I was able to fuzz FaceTime calls and reproduce the crashes. I reported three bugs in FaceTime based on this work. All these issues have been fixed in recent updates.
CVE-2018-4366 is an out-of-bounds read in video processing that occurs on Macs only.
CVE-2018-4367 is a stack corruption vulnerability that affects iOS and Mac. There are a fair number of variables on the stack of the affected function before the stack cookie, and several fuzz crashes due to this issue caused segmentation faults as opposed to stack_chk crashes, so it is likely exploitable.
CVE-2018-4384 is a kernel heap corruption issue in video processing that affects iOS. It is likely similar to this vulnerability found by Adam Donenfeld of Zimperium.
All of these issues took less than 15 minutes of fuzzing to find on a live device. Unfortunately, this was the limit of the fuzzing that could be performed on FaceTime: because it is closed source, it would have been difficult to create a command-line fuzzing tool with coverage guidance like the one we used for WebRTC.
In Part 3, we will look at video calling in WhatsApp.
Categories: Security

Adventures in Video Conferencing Part 1: The Wild World of WebRTC

Google Project Zero - Tue, 12/04/2018 - 14:40
Posted by Natalie Silvanovich, Project Zero
Over the past five years, video conferencing support in websites and applications has exploded. Facebook, WhatsApp, FaceTime and Signal are just a few of the many ways that users can make audio and video calls across networks. While a lot of research has been done into the cryptographic and privacy properties of video conferencing, there is limited information available about the attack surface of these platforms and their susceptibility to vulnerabilities. We reviewed the three most widely used video conferencing implementations. In this series of blog posts, we describe what we found.
This part will discuss our analysis of WebRTC. Part 2 will cover our analysis of FaceTime. Part 3 will discuss how we fuzzed WhatsApp. Part 4 will describe some attacks against WhatsApp that didn’t work out. And finally, Part 5 will discuss the future of video conferencing and steps that developers can take to improve the security of their implementations.
Typical Video Conferencing Architecture
All the video conferencing implementations we investigated allow at least two peers anywhere on the Internet to communicate through audiovisual streams. Implementing this capability so that it is reliable and has good audio and video quality presents several challenges. First, the peers need to be able to find each other and establish a connection regardless of NATs or other network infrastructure. Then they need to be able to communicate, even though they could be on different platforms, application versions or browsers. Finally, they need to maintain audio and video quality, even if the connection is low-bandwidth or noisy.
Almost all video conferencing solutions have converged on a single architecture. It assumes that two peers can communicate via a secure, integrity-checked channel, which may have low bandwidth or involve an intermediary server, and it allows them to create a faster, higher-bandwidth peer-to-peer channel.
The first stage in creating a connection is called signalling. It is the process through which the two peers exchange the information they will need to create a connection, including network addresses, supported codecs and cryptographic keys. Usually, the calling peer sends a call request including information about itself to the receiving peer, and the receiving peer responds with similar information. SDP is a common protocol for exchanging this information, but it is not always used, and most implementations do not conform to the specification. It is common for mobile messaging apps to send this information in a specially formatted message sent through the same channel that text messages use. Websites that support video conferencing often use WebSockets to exchange signalling information, or exchange it via HTTPS using the webserver as an intermediary.
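As an illustration, a minimal SDP offer for an audio-only call might look something like the following; every value here is invented for the example.

v=0
o=- 4611731400430051336 2 IN IP4 192.0.2.1
s=-
c=IN IP4 192.0.2.1
t=0 0
m=audio 49170 UDP/TLS/RTP/SAVPF 111
a=rtpmap:111 opus/48000/2
a=ice-ufrag:F7gI
a=ice-pwd:x9cml4YzichV2piMu8gq
a=fingerprint:sha-256 19:E2:1C:3B:4B:9F:81:E6:B8:5C:F4:A5:A8:D8:73:04:BB:05:2F:70:9F:04:A9:0E:05:E9:26:33:E8:70:88:A2

The c= line carries a network address, the m= and a=rtpmap lines advertise a supported codec, and the ICE and fingerprint attributes carry the credentials and key material used to set up the connection.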
Once signalling is complete, the peers find a way to route traffic to each other using the STUN, TURN and ICE protocols. Based on what these protocols determine, the peers create UDP, UDP-over-STUN, or occasionally TCP connections, whichever is most favorable for the network conditions.
Once the connection has been made, the peers communicate using the Real-time Transport Protocol (RTP). Though this protocol is standardized, most implementations deviate somewhat from the standard. RTP can be encrypted using a protocol called Secure RTP (SRTP), and some implementations also encrypt streams using DTLS. Under the encryption envelope, RTP supports features that allow multiple streams and formats of data to be exchanged simultaneously. Then, based on how RTP classifies the data, it is passed on to other processing, such as video codecs. Stream Control Transmission Protocol (SCTP) is also sometimes used to exchange small amounts of data (for example, a text message on top of a call) during video conferencing, but it is less commonly used than RTP.
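For reference, all of this rides on a small fixed header; the following is an illustrative sketch of its layout per RFC 3550 (the struct and helper are mine, not from any particular implementation).

#include <cstdint>

// Illustrative sketch of the 12-byte RTP fixed header (RFC 3550).
struct RtpHeader {
    uint8_t  vpxcc;      // version (2 bits), padding, extension, CSRC count
    uint8_t  mpt;        // marker bit plus 7-bit payload type
    uint16_t seq;        // sequence number (network byte order)
    uint32_t timestamp;  // media sampling timestamp (network byte order)
    uint32_t ssrc;       // synchronization source: identifies the stream
    // optional CSRC list and header extension follow, then the payload
};

// The payload type is how RTP classifies data and routes it to further
// processing, such as a particular video codec.
inline uint8_t payload_type(const RtpHeader& h) { return h.mpt & 0x7f; }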
Even when it is encrypted, RTP often doesn’t include integrity protection, and if it does, it usually doesn’t discard malformed packets. Instead, it attempts to recover them using strategies such as Forward Error Correction (FEC). Most video conferencing solutions also detect when a channel is noisy or low-bandwidth and attempt to handle the situation in a way that leads to the best audio and video quality, for example, sending fewer frames or changing codecs. Real Time Control Protocol (RTCP) is used to exchange statistics on network quality and coordinate adjustments to the properties of RTP streams to adapt to network conditions.
WebRTC
WebRTC is an open source project that enables video conferencing. It is by far the most commonly used implementation. Chrome, Safari, Firefox, Facebook Messenger, Signal and many other mobile applications use WebRTC. WebRTC seemed like a good starting point for looking at video conferencing as it is heavily used, open source and reasonably well-documented.
WebRTC Signalling
I started by looking at WebRTC signalling, because it is an attack surface that does not require any user interaction. Protocols like RTP usually start being processed after a user has picked up the video call, but signalling is performed before the user is notified of the call. WebRTC uses SDP for signalling.
I reviewed the WebRTC SDP parser code, but did not find any bugs. I also compiled it so that it would accept an SDP file on the command line and fuzzed it, but did not find any bugs through fuzzing either. I later discovered that WebRTC signalling is not implemented consistently across browsers anyway: Chrome uses the main WebRTC implementation, Safari has branched slightly, and Firefox uses its own implementation. Most mobile applications that use WebRTC likewise implement their own signalling in a protocol other than SDP. So a bug in WebRTC signalling would be unlikely to affect a wide variety of targets.
RTP Fuzzing
I then decided to look at how RTP is processed in WebRTC. While RTP is not an interaction-less attack surface because the user usually has to answer the call before RTP traffic is processed, picking up a call is a reasonable action to expect a user to take. I started by looking at the WebRTC source, but it is very large and complex, so I decided fuzzing would be a better approach.
The WebRTC repository contains fuzzers written for OSS-Fuzz for every protocol and codec supported by WebRTC, but they do not simulate the interactions between the various parsers, and do not maintain state between test cases, so it seemed likely that end-to-end fuzzing would provide additional coverage.
Setting up end-to-end fuzzing was fairly time intensive, so to see if it was likely to find many bugs, I altered Chrome to send malformed RTP packets. I changed the srtp_protect function in libsrtp so that it ran the following fuzzer on every packet:
void fuzz(char* buf, int len){
    int q = rand() % 10;
    if (q == 7){ // roughly 1 in 10 packets: corrupt a single random byte
        int ind = rand() % len; buf[ind] = rand();
    }
    if (q == 5){ // roughly 1 in 10 packets: randomize the whole payload
        for(int i = 0; i < len; i++) buf[i] = rand();
    }
}

RTP fuzzer (fuzzer q)
When this version was used to make a WebRTC call to an unmodified instance of Chrome, it crashed roughly every 30 seconds.
Most of the crashes were due to divide-by-zero exceptions, which I submitted patches for, but there were three interesting crashes. I reproduced them by altering the WebRTC source in Chrome so that it would generate the packets that caused the same crashes, and then set up a standalone build of WebRTC to reproduce them, so that it was not necessary to rebuild Chrome to reproduce the issues.
The first issue, CVE-2018-6130, is an out-of-bounds memory issue related to the use of std::map::find in processing VP9 (a video codec). In the following code, the value tl0_pic_idx is pulled out of an RTP packet unverified (GOF stands for group of frames).
if (frame->frame_type() == kVideoFrameKey) {
  ...
  GofInfo info = gof_info_.find(codec_header.tl0_pic_idx)->second;
  FrameReceivedVp9(frame->id.picture_id, &info);
  UnwrapPictureIds(frame);
  return kHandOff;
}

If this value does not exist in the gof_info_ map, std::map::find returns the map’s end iterator, which points one element past the allocated values of the map. Depending on the memory layout, dereferencing this iterator will either crash or return the contents of unallocated memory.
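In sketch form, a checked version of the lookup would look like the following; this is illustrative code, not WebRTC's actual fix.

#include <map>

struct GofInfo { /* group-of-frames metadata */ };  // illustrative stand-in

static std::map<int, GofInfo> gof_info_;

// Reject the packet when the untrusted index is absent, instead of
// dereferencing the end iterator that find() returns for missing keys.
bool lookup_gof(int tl0_pic_idx, GofInfo* out) {
    auto it = gof_info_.find(tl0_pic_idx);
    if (it == gof_info_.end())
        return false;
    *out = it->second;
    return true;
}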
The second issue, CVE-2018-6129, is a more typical out-of-bounds read issue, in which the index of a field is read out of an RTP packet and not verified before it is used to index a vector.
The third issue, CVE-2018-6157, is a type confusion issue that occurs when a packet that looks like a VP8 packet is sent to the H264 parser. The packet will eventually be treated like an H264 packet even though it hasn’t gone through the necessary checks for H264. The impact of this issue is also limited to reading out of bounds.
There are a lot of limitations to the approach of fuzzing in a browser. It is very slow, the issues are difficult to reproduce, and it is difficult to fuzz a variety of test cases, because each call needs to be started manually, and certain properties, such as the default codec, can’t be changed in the middle of a call. After I reported these issues, the WebRTC team suggested that I use the video_replay tool, which replays RTP streams recorded in a patched browser. The tool was not able to reproduce many of my issues because they relied on non-default WebRTC settings configured through signalling, so I added to the tool the ability to load a configuration file alongside the RTP dump. This made it possible to reproduce vulnerabilities in WebRTC quickly.
This tool also had the benefit of enabling much faster fuzzing, as it was possible to fuzz RTP by fuzzing the RTP dump file and loading it into video_replay. There were some false positives, as it was also possible that fuzzing caused bugs in parsing the RTP dump file format, but most of the bugs were actually in RTP processing.
Fuzzing with the video_replay tool with code coverage and ASAN enabled led to four more bugs. We ran the fuzzer on 50 cores for about two weeks to find these issues.
CVE-2018-6156 is probably the most exploitable bug uncovered. It is a large overflow in FEC. The buffer WebRTC uses to process FEC packets is 1500 bytes, but it does no size checking of these packets once they are extracted from RTP. Practically, they can be up to about 2000 bytes long.
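The bug class can be sketched as follows; this is illustrative code, not WebRTC's.

#include <cstdint>
#include <cstring>

static const size_t kFecBufferSize = 1500;  // size of the processing buffer

// Illustrative: the payload length comes from the RTP packet and is never
// checked against the buffer, so a payload near its practical maximum of
// about 2000 bytes writes roughly 500 bytes past the end.
void process_fec_packet(const uint8_t* payload, size_t payload_len) {
    uint8_t buffer[kFecBufferSize];
    // Missing: if (payload_len > kFecBufferSize) return;
    std::memcpy(buffer, payload, payload_len);  // overflows when payload_len > 1500
    // ... recovery logic would then run over buffer ...
}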
CVE-2018-6155 is a use-after-free in a video codec called VP8. It is interesting because it affects the VP8 library, libvpx, as opposed to code in WebRTC, so it has the potential to affect software other than WebRTC that uses this library. A generic fix for libvpx was released as a result of this bug.
CVE-2018-16071 is a use-after-free in VP9 processing that is somewhat similar to CVE-2018-6130. Once again, an untrusted index is pulled out of a packet, but this time it is used as the upper bounds of a vector erase operation, so it is possible to delete all the elements of the vector before it is used.
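The erase pattern can be sketched like this (illustrative code, not WebRTC's):

#include <cstddef>
#include <vector>

// Illustrative: an index pulled from the packet bounds a vector erase. A
// value as large as the vector empties it entirely, and later code that
// assumed surviving elements then operates on freed or stale entries.
void erase_frames_up_to(std::vector<int>& frames, size_t untrusted_index) {
    if (untrusted_index > frames.size())  // the check that was missing
        return;                           // reject the malformed packet
    frames.erase(frames.begin(), frames.begin() + untrusted_index);
}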
CVE-2018-16083 is an out-of-bounds read in FEC that occurs due to a lack of bounds checking.
Overall, end-to-end fuzzing found a lot of bugs in WebRTC, and a few were fairly serious. They have all now been fixed. This shows that end-to-end fuzzing is an effective approach for finding vulnerabilities in this type of video conferencing solution. In Part 2, we will try a similar approach on FaceTime. Stay tuned!
Categories: Security

IBM DS8000 Easy Tier (for DS8880 R8.5 or later)

IBM Redbooks Site - Tue, 12/04/2018 - 08:30
Redpaper, published: Tue, 4 Dec 2018

This IBM® Redpaper™ publication describes the concepts and functions of IBM System Storage® Easy Tier®, and explains its practical use with the IBM DS8000® series and License Machine Code 8.8.50.xx.xx or later.

Categories: Technology

IBM Storage Networking SAN768C-6 Product Guide

IBM Redbooks Site - Tue, 12/04/2018 - 08:30
Redpaper, published: Tue, 4 Dec 2018

This IBM® Redbooks® Product Guide describes the IBM Storage Networking SAN768C-6.

Categories: Technology

IBM DS8880 Encryption for data at rest and Transparent Cloud Tiering (DS8000 Release 8.5)

IBM Redbooks Site - Tue, 12/04/2018 - 08:30
Draft Redpaper, last updated: Tue, 4 Dec 2018

Updated for Release 8.5. IBM experts recognize the need for data protection, both from hardware or software failures and from physical relocation of hardware, theft, and retasking of existing hardware.

Categories: Technology

IBM Storage Networking SAN192C-6 Product Guide

IBM Redbooks Site - Tue, 12/04/2018 - 08:30
Web Doc, published: Tue, 4 Dec 2018

This IBM® Redbooks® Product Guide describes the IBM Storage Networking SAN192C-6.

Categories: Technology

IBM Storage Networking SAN32C-6 Product Guide

IBM Redbooks Site - Tue, 12/04/2018 - 08:30
Web Doc, published: Tue, 4 Dec 2018

The IBM Storage Networking SAN32C-6 provides high-speed Fibre Channel (FC) connectivity from the server rack to the SAN core.

Categories: Technology

IBM Storage Networking SAN384C-6 Product Guide

IBM Redbooks Site - Tue, 12/04/2018 - 08:30
Web Doc, published: Tue, 4 Dec 2018

This IBM® Redbooks® Product Guide introduces IBM Storage Networking SAN384C-6.

Categories: Technology

IBM Storage Networking SAN50C-R Product Guide

IBM Redbooks Site - Tue, 12/04/2018 - 08:30
Web Doc, published: Tue, 4 Dec 2018

This IBM® Redbooks® Product Guide describes the IBM Storage Networking SAN50C-R.

Categories: Technology

IBM Storage Networking SAN768C-6 Product Guide

IBM Redbooks Site - Tue, 12/04/2018 - 08:30
Draft Redpaper, last updated: Tue, 4 Dec 2018

This IBM® Redbooks® Product Guide describes the IBM Storage Networking SAN768C-6.

Categories: Technology

Gen Why Lawyer #169 -- Putting Your iPhones, iPads and Tech Tools to Good Use in Your Law Firm with iPhone J.D. Jeff Richardson

iPhone J.D. - Tue, 12/04/2018 - 00:10

This week, I was the guest on the Gen Why Lawyer podcast, a podcast hosted by California patent attorney, and millennial, Karima Gulick.  I talked about why I started iPhone J.D., and I also provided some general tips for attorneys, especially younger millennial attorneys, about using an iPhone and iPad in a law practice.  Karima does a great job with this podcast, and as enjoyable as it was to be a guest, I have also enjoyed listening to — and learning a lot from — the other episodes of this podcast.

Click here for the page on the Gen Why Lawyer webpage for this podcast.  Or you can use these links to listen in your podcast player of choice:

Categories: iPhone Web Sites

IBM DS8880 and IBM Z Synergy

IBM Redbooks Site - Mon, 12/03/2018 - 08:30
Draft Redpaper, last updated: Mon, 3 Dec 2018

From the beginning, what is known today as IBM® Z always had a close and unique relationship to its storage.

Categories: Technology
