7 Things Introverts Wish Extraverts Knew

As I pointed out a few columns ago, introverts tend to be more creative, more reliable, and more trustworthy than extraverts, and to work harder. Not surprisingly, the great inventors and innovators of history have been markedly introverted: Einstein, Bill Gates, Mark Zuckerberg, Elon Musk, Isaac Newton, Nikola Tesla, Archimedes, and Charles Darwin.

Insanely, though, contemporary business culture values extraversion. Companies hire people based on first impressions (extraverts are good at those), goal them on “collaborating” (which extraverts love), and then spend billions of dollars creating open-plan playgrounds perfectly suited to extraverts.

As I said, insane. Or maybe “inane” is the mot juste.  

Anyway, because of this egregious management boneheadedness, most workplaces are dominated by extraverts… to the great detriment of both productivity and innovation. Since, alas, that’s not likely to change any time soon, introverts and extraverts will need to learn how to get along.

Unfortunately, while introverts can see right through extraverts, extraverts simply don’t seem to grok introverts at all. So, since I’m off-the-scale introverted, I’ll take it on myself to speak for my fellow introverts and tell the extraverts what we wish they already knew. Here goes:

1. You’re talking too much.

Introverts are good listeners, but the fact that we’re listening to you and not saying anything doesn’t mean we’re enthralled by everything you’re saying. Quite the contrary. If you’ve been talking for more than a couple of minutes without pause, we’ve mentally proceeded from “What a bore!” to “OMG, will he never stop talking!?” to “For God’s sake, STFU!!” We’re not going to say anything, though, because if we did, we would never hear the end of it.

2. We don’t want to change.

Even though it’s abundantly clear that society (in general) and workplaces (in particular) tend to value outgoing “people-people,” we introverts neither want nor feel the need to change to fit other people’s ideas of how we ought to act and feel. We’re perfectly fine the way we are, thank you very much. What’s more, we’d greatly appreciate it if you stopped assuming we envy you. We don’t. Believe me. We don’t want to be like you.

3. Give us private offices or let us work from home.

Today’s open plan offices are productivity toilets and health hazards for everyone. For introverts, though, they’re particularly hellish because there’s no way to get away from other people. Forcing an introvert to work in an open plan office is like forcing an extravert to spend all day in solitary confinement. We need privacy. Please have the common sense and common decency to give it to us.

4. We resent doing more than our share.

Because extraverts spend so much time collaborating, sharing, and gossiping, the burden of actually getting real work accomplished falls to the introverts. After a while–no, scratch that–from day one, we resent that you waste time and money socializing while we’re working our asses off. And we really resent it when you pipe up to steal the credit.

5. Leave us alone to recharge.

Introverts feel physically, emotionally, mentally and spiritually drained after being forced to interact with other people. The only way that we can recharge is by disconnecting and being by ourselves. Yes, we know that you draw energy from other people. Like a vampire. But we’re the opposite. So when you see us sitting by ourselves, don’t think you’re doing us a favor by pestering us. You’re not.

6. We are not shy loners.

Quite the contrary. Introverts are often talented at public speaking. We often have a small circle of close friends and family with whom we enjoy spending time. We’re not bashful about our accomplishments; we just don’t feel the need to toot our own horns. We don’t talk about ourselves because we’d rather talk about something more interesting than stuff we already know.

7. Go away, please.

Google staff discussed tweaking search results to counter travel ban: WSJ

(Reuters) – Google employees brainstormed ways to alter search functions to counter the Trump administration’s controversial 2017 travel ban, the Wall Street Journal reported on Thursday, citing internal emails.

Google employees discussed how they could tweak the company’s search-related functions to show users how to contribute to pro-immigration organizations and contact lawmakers and government agencies, the WSJ said. The ideas were not implemented. (on.wsj.com/2DePzWh)

President Donald Trump’s travel ban temporarily barred visitors and immigrants from seven majority Muslim countries. It spurred public outcry and was revised several times. Trump said the travel ban was needed to protect the United States against attacks by Islamist militants, and the Supreme Court upheld the measure in June.

The Google employees proposed ways to “leverage” search functions and take steps to counter what they considered to be “islamophobic, algorithmically biased results from search terms ‘Islam’, ‘Muslim’, ‘Iran’, etc.” and “prejudiced, algorithmically biased search results from search terms ‘Mexico’, ‘Hispanic’, ‘Latino’, etc,” the Journal added, quoting from the emails.

A Google spokesperson said the emails represented brainstorming and none of the ideas were implemented. She said the company does not manipulate search results or modify products to promote political views.

“Our processes and policies would not have allowed for any manipulation of search results to promote political ideologies,” the spokesperson said in a statement.

Reporting by Rama Venkat in Bengaluru; Editing by Cynthia Osterman

American Airlines Just Raised Its Baggage Fee and Offered an Incredible, Maddening Explanation

Absurdly Driven looks at the world of business with a skeptical eye and a firmly rooted tongue in cheek. 

You knew it was going to happen.

I knew it was going to happen.

American Airlines knew it was going to happen too. 

The only question was how many hours the populace would be waiting before American followed Delta and United Airlines (and JetBlue) in raising baggage fees to $30.

When the announcement was made, I sat and pondered the meaning of life for a while.

Then I did the only thing my Yoda could suggest. I contacted American to ask for its logic in making this unpopular move.

An American spokesman told me: 

Like fares, baggage fees are set by the supply and demand for the product in the marketplace, and today’s changes are in line with what other U.S. competitors are charging. 

I stared at this for quite some time, tried to absorb it thoroughly and only then did I consider its fine logic.

I fear some might ask: if baggage fees are set by supply and demand, does that mean American will raise them for every flight that happens to have a lot of cargo in the hold?

After all, there might be less space. Ergo, the price should go up.

Please consider arriving at the ticket counter, to be told:

Yeah, sorry, we’ve got a big shipment of golf equipment in the hold today. So your baggage fee will be $175.

And when baggage fees didn’t exist, did this mean there was simply far too much space in the hold, none of it was precious, so it could be just given away?

I fear what American might actually mean by supply and demand is that when four airlines hold more than 80 percent of all available seats, they have most of the supply.

They therefore have the power to set the price of anything to a considerable extent.

The only thing that might hold them back even a little is the existence of a budget airline on a specific route or, in this case, Southwest’s insistence that its customers’ bags fly free.

There’s a little more logical consistency, I fear, in the second part of American’s statement: United and Delta have done it, so we will too. What did you expect?

Of course, it’ll be fascinating to see whether the more baggage fees go up, the more people try and haul all their belongings onto the plane, hence delaying departure.

That’s something airlines really don’t like.

The baggage fee hike is merely a fare hike by other means. It also comes with a lower tax rate for the airline, as fees are taxed differently from fares.
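
To make the tax point concrete, here is a rough back-of-the-envelope sketch. It assumes the 7.5 percent U.S. domestic transportation excise tax applies to the base fare but not to optional baggage fees; the dollar figures are invented.

```python
# Back-of-the-envelope sketch: why shifting revenue from fares to
# fees can lower an airline's tax bill. Assumes the 7.5% U.S.
# domestic transportation excise tax applies to the base fare but
# not to optional baggage fees; dollar figures are invented.

EXCISE_TAX = 0.075

def airline_net(fare: float, bag_fee: float) -> float:
    """Revenue kept after excise tax is charged on the fare only."""
    return fare * (1 - EXCISE_TAX) + bag_fee

# The customer pays $330 either way; the airline keeps more when
# $30 of it is labeled a bag fee instead of fare.
print(airline_net(fare=330.0, bag_fee=0.0))   # 305.25
print(airline_net(fare=300.0, bag_fee=30.0))  # 307.50
```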

I wonder if, for even a nanosecond over a third cocktail, an American executive or two might have considered that not raising the baggage fee might have given the airline a little point of difference.

Ach, but what’s the point of difference when your only true distinction is your network and you can just keep on scooping up (what you think is) your fair share?

Cboe exchange turns to machines to police its 'fear gauge'

NEW YORK (Reuters) – Hard pressed to quash allegations that its popular “fear gauge” is being manipulated, Cboe Global Markets (CBOE.Z) is turning to artificial intelligence to help put those concerns to rest.

The exchange, which owns the lucrative volatility index the VIX .VIX, has taken several steps to confront manipulation claims that have helped drive the Cboe’s stock down about 15 percent this year, putting it on pace for its worst year ever.

In its latest effort to police trading tied to the index, the Cboe is working with FINRA, its regulatory services provider, to develop machine learning techniques to tell whether market conditions surrounding the VIX settlement are potentially anomalous, the exchange told Reuters.

“Incorporating the use of machine learning and AI (Artificial Intelligence) is a logical part of the ongoing enhancement of our overall regulatory program,” Greg Hoogasian, Cboe chief regulatory officer, said in an emailed statement.

Cboe declined to elaborate on when it began using machine learning techniques to monitor VIX settlements.

Any steps, however, may take a while to change investors’ minds on the stock.

“Any time you see controversy over manipulating markets and it involves a company, there are people who will walk away from the stock,” said Peter Tuz, president of Chase Investment Counsel in Charlottesville, Virginia.

“It ends up tarnishing the company and subjecting them to legal risk that is very hard to quantify,” he said.

Tuz said Chase Investment Counsel, which owned nearly 19,000 Cboe shares in mid-2017, began selling its stake early this year, shedding the last of it on May 21.

Cboe’s stock performance this year has lagged that of other major exchange operators. Shares of Nasdaq Inc (NDAQ.O) are up about 17 percent, Intercontinental Exchange Inc (ICE.N) is up about 10 percent, and CME Group Inc (CME.O) shares have risen 18 percent.

Concerns the index was being manipulated surfaced last year after John Griffin and Amin Shams of the McCombs School of Business at the University of Texas, Austin wrote an academic paper that noted significant spikes in trading volume in S&P 500 index options right at the time of settlement.

The paper also compared the value of the VIX at settlement with its value as calculated from S&P 500 options right after the settlement, and showed the two tend to diverge.

Instances of big deviations are taken as evidence by some that unscrupulous traders have been deliberately moving the settlement price.
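
As a hedged illustration of the comparison the paper describes, the sketch below computes the settlement-versus-recomputed deviation series and flags unusually large values; all of the numbers are invented.

```python
import numpy as np

# Sketch of the comparison described above: settlement prints vs.
# values recomputed from S&P 500 options just after settlement.
# All numbers are invented for illustration.
settlement = np.array([14.2, 15.1, 13.8, 16.4, 12.9, 18.7])
recomputed = np.array([14.1, 15.0, 13.9, 15.6, 12.9, 17.5])

deviation = settlement - recomputed
cutoff = 2 * deviation.std()   # crude flag for outsized deviations

for month, d in enumerate(deviation):
    flag = "  <-- outsized deviation" if abs(d) > cutoff else ""
    print(f"month {month}: {d:+.2f}{flag}")
```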

A stock market fall on Feb. 5 that caused the VIX to surge the most in its 25-year history brought further scrutiny to the index, and led to dozens of lawsuits and ongoing probes into the matter by the U.S. Securities and Exchange Commission and the Commodity Futures Trading Commission.

The regulators have yet to comment on the matter and Cboe has denied the manipulation accusations, citing liquidity problems and legitimate hedging activity as reasons for unusual moves on settlement days.

“Only a forensic analysis of those episodes can confirm or refute such a claim,” said Kambiz Kazemi, partner at Canadian investment management firm La Financière Constance.

Meanwhile, the steps Cboe has taken to address the claims of manipulation are going in the right direction, said Kazemi.

The exchange operator recently overhauled the technology behind the auctions, improved the speed with which it sends alerts about auction imbalances, and sought to increase the number of market makers that provide buy and sell quotes for the auction.

POLICING THE FEAR GAUGE

Orderly VIX settlement auctions over the last few months have helped take some of the pressure off the Chicago-based exchange operator.

“I think we all will be observing the effects of the Cboe measures in the next few months,” Kazemi said.

VIX and associated products accounted for roughly a quarter of Cboe’s 2017 earnings, analysts estimate, and the controversy around the product has spooked some stockholders.

While financial firms have been using artificial intelligence software for everything from compliance to stock-picking, a growing number of firms have started to use it for market oversight.

Given the huge amount of data involved in market surveillance, machine learning algorithms can be far more efficient than humans in rooting out potential market manipulation, said Richard Johnson, a market structure and technology consultant at Greenwich Associates.
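
Neither Cboe nor FINRA has published details of the system. As a rough sketch of the general technique, here is how an off-the-shelf anomaly detector such as scikit-learn’s IsolationForest might flag a suspicious settlement window; the features and data below are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented feature rows describing each settlement window, e.g.
# [volume vs. 30-day average, settlement-vs-recomputed deviation,
# order-book imbalance]. Real systems would use richer features.
rng = np.random.default_rng(0)
normal_windows = rng.normal([1.0, 0.0, 0.0], [0.2, 0.1, 0.1], size=(200, 3))
suspect_window = np.array([[3.5, 0.9, 0.8]])  # spike plus big deviation
X = np.vstack([normal_windows, suspect_window])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)  # -1 = anomalous, 1 = normal

print("flagged windows:", np.where(labels == -1)[0])  # should include 200
```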

“It’s going to be a must have,” he said.

FINRA, which already monitors Cboe’s market on the company’s behalf, confirmed it was working on machine learning to enhance surveillance of the VIX settlement auctions, but would not offer specifics.

More generally, the Wall Street watchdog is working to use artificial intelligence to catch nefarious activities more quickly, including schemes that may have previously been unknown to regulators, said Tom Gira, who oversees FINRA’s market regulation department.

He said FINRA has begun using machine learning to scan for illegal activities across stock and options exchanges and is in the process of adding a feedback loop to the software that would regularly incorporate analysts’ data and allow the machines to detect ever-changing manipulation patterns.

Reporting by John McCrank and Saqib Iqbal Ahmed in NEW YORK; Additional reporting by Michelle Price in WASHINGTON; Editing by Megan Davies and Tomasz Janowski

IBM Debuts Tools to Help Prevent Bias In Artificial Intelligence

IBM wants to help companies mitigate the chances that their artificial intelligence technologies unintentionally discriminate against certain groups like women and minorities.

The technology giant’s tool, announced on Wednesday, can inspect AI-powered software for unintentional bias when it makes decisions, like when a loan might be denied to a particular person, explained Ruchir Puri, the chief technology officer and chief architect of IBM Watson.
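
IBM hasn’t detailed the tool’s internals here, but a minimal sketch of the kind of check such tools automate is the disparate-impact ratio over loan decisions, the “four-fifths rule” heuristic; the data below is invented and this is not IBM’s actual code.

```python
# Invented loan decisions, tagged by demographic group; this is a
# generic fairness metric, not IBM's actual tool.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", True), ("B", False), ("B", False),
]

def approval_rate(group: str) -> float:
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Ratio below ~0.8 is the traditional "four-fifths rule" red flag.
ratio = approval_rate("B") / approval_rate("A")
print(f"disparate-impact ratio: {ratio:.2f}")  # 0.33 -> review for bias
```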

The technology industry is increasingly combating the problem of bias in machine learning systems, used to power software that can automatically recognize images in pictures or translate languages. A number of companies have suffered a public relations black eye when their technologies failed to work as well for minority groups as for white users.

For instance, researchers discovered that Microsoft and IBM’s facial-recognition technology could more accurately identify the faces of lighter-skin males than darker-skin females. Both companies said they have since improved their technologies and have reduced error rates.

Researchers have pointed out that some of the problems may be related to the use of datasets that contain a lack of diverse images. Joy Buolamwini, the MIT researcher who probed Microsoft and IBM’s facial-recognition tech (along with China’s Megvii), recently told Fortune‘s Aaron Pressman that a lack of diversity within development teams could also contribute to bias because more diverse teams could be more aware of bias slipping into the algorithms.

In addition to IBM, a number of companies have introduced or plan to debut tools for vetting AI technologies. Google, for instance, revealed a similar tool last week while Microsoft said in May that it planned to release similar technology in the future.

Data crunching startup Diveplan said at Fortune’s recent Brainstorm Tech conference that it would release an AI-auditing tool later this year while consulting firm Accenture unveiled its own AI “fairness tool” over the summer.

It’s unclear how each of these AI bias tools compare with one another because no outside organization has done a formal review.

Puri said IBM’s tool, which is built on the company’s cloud computing service, is differentiated partly because it was created for business people and is easier to work with than similar tools from others, which are intended only for developers.

Despite the flood of new AI-auditing tools, the problem of bias in AI will likely persist because the practice of rooting it out is still in its infancy.

Linux's Creator Is Sorry. But Will He Change?

It’s been more than 25 years since Linus Torvalds created Linux, the open source operating system kernel that now powers much of the web, the world’s most popular smartphone operating system, and a fleet of other gadgets, including cars. During that time Torvalds has developed a reputation for behavior and harsh language that critics said crossed the line into emotional abuse.

Torvalds’ uncompromising style has often been praised, including by WIRED. But his tendency to berate other Linux contributors, calling them names or hurling profanities, has also drawn criticism for creating a toxic environment and making the project unwelcoming to women, minorities, or other underrepresented groups.

On Sunday, he apologized for years of improper behavior. “My flippant attacks in emails have been both unprofessional and uncalled for,” Torvalds wrote in an email to the Linux kernel mailing list. “I know now this was not OK and I am truly sorry.”

He also announced that the Linux kernel project will finally adopt a code of conduct and that he will take a break from the project to learn more about “how to understand people’s emotions and respond appropriately.”

“I’m not feeling like I don’t want to continue maintaining Linux. Quite the reverse,” Torvalds wrote. “I very much do want to continue to do this project that I’ve been working on for almost three decades.”

The code of conduct replaces an older “code of conflict” that encouraged anyone who felt “personally abused, threatened, or otherwise uncomfortable” to contact the technical advisory board of the Linux Foundation, the organization that stewards the Linux kernel and employs Torvalds, but didn’t list specific behaviors that were unacceptable. The new code specifies sexualized language and “trolling, insulting/derogatory comments, and personal or political attacks,” among other unacceptable behaviors.

But it wasn’t any of those things that prompted Torvalds to apologize after all these years. Instead, it was an apparently minor issue. Torvalds scheduled a vacation to Scotland that conflicted with a planned Linux developer summit in Vancouver, British Columbia, in November. The summit organizers announced earlier this month that the summit will relocate to Edinburgh, Scotland, rather than proceed without Torvalds. The decision rubbed many the wrong way.

Torvalds wrote that the incident led members of the Linux community to confront him about his “lifetime of not understanding emotions.” It’s hardly the first time. In 2013, former Linux kernel developer Sage Sharp, then using a different name, openly criticized Torvalds’ communication style and called for a code of conduct for the project. “Linus, you’re one of the worst offenders when it comes to verbally abusing people and publicly tearing their emotions apart,” Sharp wrote at the time.

Sharp later told WIRED about receiving thanks from developers on other open source projects, who said Torvalds’ behavior influenced the way people behaved in those other projects. Sharp also shared some of the intense hate mail they received after speaking up.

Torvalds agreed to talk things out with Sharp, but it didn’t amount to much. He panned the idea of a code of conduct in an email interview with WIRED, saying “venting of frustrations and anger is actually necessary, and trying to come up with some ‘code of conduct’ that says that people should be ‘respectful’ and ‘polite’ is just so much crap and bullshit.” He doubled down on his position at a conference in New Zealand in 2015, where, according to Ars Technica, he said that diversity is “not really important.”

That’s why Torvalds’ apology comes as a surprise—and why some people remain skeptical.

Many greeted the apology and planned code of conduct as good steps toward making the Linux community more welcoming, including Sarah Drasner, an open source developer, and April Wensel, founder of the software development company Compassionate Coding.

But others, including a developer who runs a YouTube channel under the name “Amy Codes” and software engineer Sarah Mei, lamented the praise that Torvalds received for his apology even though he had decades to correct his behavior.

Others criticized Torvalds’ explanation that he didn’t understand other people’s emotions as a reason for his behavior.

The Linux Foundation did not respond to a request for comment.

Sharp couldn’t be reached for comment but wrote on Twitter that the real test is whether the Linux kernel community changes.

The big hope is that by admitting that his behavior is wrong, Torvalds will make it harder for other open source developers to justify their own negative behaviors.


Data Firms Team up to Prevent the Next Cambridge Analytica Scandal

A bipartisan group of political data firms is drafting a set of industry standards that its members hope will prevent voter data from being misused as it was in 2016. The guidelines cover transparency, foreign influence in elections, responsible data sourcing and storage, and other measures meant to root out bad actors in the industry and help fend off security threats.

The conversations, which are being organized by Georgetown University’s Institute of Politics and Public Service, come at a time when data collection more broadly faces increased scrutiny from lawmakers and consumers. Ever since news broke this spring that the political firm Cambridge Analytica used an app to hoover up data on tens of millions of Americans and use it for political purposes, Facebook and other Silicon Valley tech giants have had to answer to Congress and their customers about their mass data collection operations. But the Georgetown group focuses specifically on the responsibilities of the companies that undergird some of the country’s biggest political campaigns. Among the firms participating in these discussions are Republican shops like DeepRoot Analytics, WPA Intelligence, and Targeted Victory, as well as Democratic firms, including Bully Pulpit Interactive, NGP VAN, and DSPolitical.

“These are the firms that power all of the elections in America, and so my hope was if you can get them in a room and get them to understand the importance of the data they’re using and to self-regulate, you could achieve a dramatic improvement on behalf of voters,” says Tim Sparapani, a fellow at the Georgetown Institute who is overseeing the group.

Sparapani served as Facebook’s first director of public policy from 2009 until 2011, after spending several years at the American Civil Liberties Union. A self-proclaimed privacy advocate, he has warned about the need for stricter oversight of data brokers for years. These are companies that collect, store, and analyze data about consumers for a variety of purposes. In the political world, that data can include basic information about how many times a person has voted, their party registration, and their donation record, but it can also include social media and commercial data that can help campaigns better understand who a given person is and target them with political advertising.

The data broker industry remains largely unregulated, both inside and outside politics. The Federal Trade Commission has urged Congress to regulate data brokers since at least 2012, but nothing has come of it so far. In June, Vermont became the first state to pass a data broker law, which goes into effect in January.

The Georgetown group first met last fall, months before Cambridge Analytica began making headlines. At the time, the industry’s primary concern was the risk of a data breach or a hack at the hands of a foreign threat: In the summer of 2017, a cybersecurity firm discovered DeepRoot Analytics’ entire trove of 198 million voter records was exposed in a misconfigured database, constituting the largest known voter data leak in history. Brent McGoldrick, CEO of DeepRoot, says the leak was a shock to the system.

“You just have a different mindset coming out of something like that, where you start to think differently about everything from security to privacy to the data you have and the perceptions of it,” he says.

Coupled with the intelligence community warnings about Russia and other foreign actors’ continued attacks on the American electoral system, McGoldrick says, it seemed well past time for his company and its competitors on both sides of the aisle to talk about protecting themselves and the people whose data they hold.

McGoldrick brought up the idea with Mo Elleithee, a former Democratic National Committee spokesperson who founded Georgetown’s Institute of Politics and Public Service in 2015. Together, they tapped Sparapani to oversee the effort. “We understand that in order to move the ball forward on privacy and security issues, we’re going to have to hear from people who, maybe we don’t like hearing what they have to say,” McGoldrick says. When the Cambridge Analytica story broke months later, he says, it only underscored the need for this kind of work.

The group, which has yet to be named, has begun circulating a set of guiding principles among data privacy advocates and the companies themselves to see what the participants are willing to agree to. While the final list is still being ironed out, Sparapani described a number of commitments for which there is broad-based support. One proposal would require the companies involved to alert one another and the proper government officials of any attempts by a foreign actor to influence the election. Another would have the companies vow to only use their tools to support people’s right to vote, not to suppress it. The group is working on a standard that would guarantee some transparency for consumers and educate them about how their data is being used. They’re also working on security standards around data storage, as well as language that they would commit to include in any contract with a potential client.

“It would make contractually binding not only their practices, but their clients’,” Sparapani says.

The hope is that these guidelines would act as a sort of seal of approval for political campaigns. “If firms have publicly stated they’re following these guidelines, hopefully candidates, committees, and causes will look for this when they’re trying to hire someone,” says Mark Jablonowski, DSPolitical’s chief technology officer, who has been involved in the initiative since its early days.

Of course, getting dozens of political opponents and business competitors who have never been regulated before to agree to any set of standard practices is no easy task. “Everyone’s got to have everything vetted through their lawyers,” McGoldrick says. “The last thing a lawyer likes is you voluntarily saying something you don’t have to say.”

“Sadly over the last few cycles there have been bad actors on both sides working in multiple campaigns,” says Chris Wilson, CEO of WPAIntelligence, which worked briefly with Cambridge Analytica during senator Ted Cruz’s 2016 presidential campaign. “I believe all in our industry, WPAi included, are hopeful that a set of standards will allow us, and the public, to be cognizant of the origins of data and its ultimate use.”

Until the details are finalized, it’s impossible to assess the effectiveness of this collaborative effort. As with any discussion around data privacy, it’s the fine print that matters. In California, where the governor recently signed a landmark privacy bill, lobbying groups have already begun picking apart nearly every sentence to better align with their interests.

Still, it is worth asking how much good this kind of work can ever do. These are well-known, well-regarded players in the industry committing themselves to a certain set of values. But what about everyone else? What about the people who are intending to deceive? Without substantive regulation, there’s nothing stopping anyone from harvesting data for nefarious purposes with impunity.

Then there’s the fact that these proposed guidelines don’t give consumers any real power. While other data privacy laws like the one that passed in California or Europe’s General Data Protection Regulation give people the ability to control what data is collected and see who it’s shared with, these proposed guidelines can’t promise the same.

Elleithee stresses that this is just the first step. Once the companies have all agreed to a set of standards, the Institute plans to convene a larger group from the broader tech and privacy communities. “As the conversation progresses, we want to bring more voices in,” he says.

Whatever the group eventually proposes, Sparapani says he fully expects pushback from privacy advocates. Even he has concerns. “If it were me, and I was critiquing this document, I could point out a dozen things I’d have the companies commit to,” he says. “In the room, they get an earful from me every time we meet, where I find this to be insufficient.”

But he also believes that waiting on the perfect solution that satisfies all parties will take more time than the country can afford. “Is it a fulsome commitment that I have been pushing for as an advocate? No. But does it begin to push companies to raise their standards to meet government and consumer expectations? Yes. And that’s a good thing.”


BMW’s Vision iNEXT SUV Concept Sets a New, Electric Course

Changing notions of what customers want from cars have pushed automakers to do plenty of weird things. They’ve unmoored the driver’s seat from the left side of the car, revived the rotary engine, and turned windshields into screens. BMW, though, is most likely the first to put down carpeting in the cabin of a cargo jet.

Sitting on the tarmac at San Francisco International Airport’s cargo facility, the Lufthansa Boeing 777 has been converted into an unusually human-friendly vehicle. Along with the soft blue stuff underfoot, BMW has installed swanky, modern furniture throughout the cabin. The walls, floor, and ceiling of one bit are decked out in screens. And near the front sits the centerpiece of this flying showroom, which has already passed through Munich and New York and will soon set off for Beijing: the Vision iNEXT, a concept car BMW created to lay out its bet on the future.

“This is not just a show car,” says Klaus Fröhlich, a member of BMW’s board and the company’s soothsaying spokesperson, with the SUV, an early view at BMW’s next generation of forward-looking vehicles, behind him. “It’s a promise. In 2021, you will get it.”

Something like it, anyway.

Along with the rest of the auto industry, BMW has been scrambling to prepare itself for a future in which fewer people own and drive their own vehicles. It has dabbled in car sharing and launched an Uber competitor in Seattle. It’s researching autonomous technology and even talks about wild ways to encourage (electric) cycling. And because there’s no doubt that building and selling cars will remain the core of its business for decades to come, it’s adjusting there, too.

BMW has put some muscle into developing hybrid, plug-in hybrid, and fully electric models and is on track to sell 140,000 of them in 2018 (it moves about 2 million cars a year). But putting gasoline and diesel in the rearview is an early step on a long and perilous journey. “For us, electromobility is the new normal, it’s understood,” Fröhlich says. “Fine. Therefore, we take up the next challenge.”

So the iNEXT is fully electric, of course, and quite capable of driving itself, though a human can still take the wheel. (We’re far enough into concept-land that you shouldn’t expect any numbers on performance, range, or price.) From the outside, the baby SUV, about the size of BMW’s popular X5, looks roughly believable. The design team swapped the sideview mirrors for cameras that improve aerodynamics (a standard move on concepts, which European regulators have started allowing on production cars). The narrow headlights resemble sideways apostrophes, and the bow-tie-shaped grille is clearly connected to BMW’s signature kidney design. The 24-inch wheels are likely a bit much for production, Fröhlich says, but otherwise “you will get what you see here.”

Tap the button to swing open the car’s suicide doors, though, and you get the hazier bit of the Vision iNEXT. The microsuede front seats don’t swivel around (an idea that has gone from daring to expected to trope in a few years’ worth of autonomous concepts), but they don’t look like conventional car seats, either. They seem to flow out of the wall, sit on aluminum legs, and don’t force you into looking straight ahead. (They will be a challenge for crash testing, Fröhlich admits, since airbags and crumple zones are designed to work for people sitting in particular positions.)

The backseat marks a new level of bönkers: The bench stretches from door to door, a sloping thing that invites you to curl up with a book, or whatever will entertain future you (your latest phone, presumably). It’s wrapped in blue-green jacquard cloth, a color BMW calls Enlightened Cloudburst and the sort of thing you could see the Little Mermaid hanging out on.

The controls for the two large screens mounted on the dashboard and the ceiling-mounted Intelligent Beam projector are not so much tucked away as hidden in plain sight: Thanks to integrated optic fibers and LEDs, the backseat and the walnut, coffee-table-like center console are themselves control pads. Want to hear your jams? Draw a little music note with your finger. Pinch to adjust the volume, swipe to change the song, and tap with three fingers to return to silence. BMW calls this “shy tech”—advanced but unobtrusive. “You decide where you want to have the interaction,” says UX design lead Olivier Pitrat. It’s a neat, if unnecessary, idea but far from ready for the real world. When someone asks Pitrat how it might handle spilled soda or a dog, he says, “That’s not a use case we have in the concept car.”

So the freeform seats and butt-side controls are unlikely bets for a car you’ll see on the dealer lots, not in 2021, anyway. But Fröhlich insists that the principles of an interior focused on lounging in luxury will make their way onto the production line. And while he also insists that performance will remain important, the iNEXT’s little steering wheel, programmed to slide back and away from the human when the computer’s in charge, looks rather vestigial. This just might be the ultimate riding machine.


Here's Why Valuation Determines Total Dividend Payments For Overvalued Stocks: Johnson & Johnson

Introduction

In my most recent article, found here, a reader left a comment asking a question that I believe deserves a good answer. The following excerpt really struck me because this person claims to have been asking the question for five years without receiving a good answer. Here is the excerpt:

“Again, someone please explain to me how valuation determines the future direction for dividend growth and total dividend payments for overvalued stocks. I’ve been asking this question for five years here on SA w/o a good answer. Nothing theoretical please – I want to see actual data.”

Consequently, I felt compelled to write this article because this question is highly representative of what I consider my current life’s work. I have been in the investment business since 1970, and over those many decades, I have always followed a strict valuation investment strategy. However, when I was younger, I applied valuation to growth stocks because my objective was to build as much wealth as possible. As I have matured, my objective has become more focused on protecting my wealth while simultaneously letting the money I worked so hard for start working for me. In simple terms, I evolved from a growth investor into a more conservative dividend growth investor.

Examining the Theoretical In Real-World Conditions

The comment cited above asked to see actual data and nothing theoretical. Personally, I think that is a fair request because for theoretical to have any real value, it must apply under real-world circumstances. On the other hand, for a hypothesis (theory) to be proven, it must first be clearly articulated and laid out. Therefore, what follows is the theory, or perhaps more appropriately, the rationale as to why valuation has a material impact on total dividend payments.

The first and most important point about dividends is that they are paid on the number of shares owned. Consequently, regardless of what happens to the share price of a dividend stock once it’s purchased, the dividend amount is calculated on the number of shares owned. Therefore, even if the stock price falls dramatically, your dividend income will remain the same; the same holds if it rises.

Consistent with this first point is the reality that lower valuation is associated with a lower stock price, ceteris paribus. Therefore, when you buy a given stock at a lower valuation, you initially purchase more shares than you would have at a higher valuation. In this regard, price and valuation are related, but they are not the same. To be clear, a higher price-to-earnings ratio applied to a given level of earnings results in a higher price than a lower price-to-earnings ratio applied to the same earnings.

Moreover, the specific growth rate of the dividend itself (its rate of change from one year to the next) will be identical regardless of valuation. However, the starting yield, which becomes your yield on cost, will be higher at a lower valuation than at a higher valuation. Consequently, your future yield will be higher, and, more importantly, so will your future level of cumulative dividends received.
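
The arithmetic is easiest to see with numbers. The sketch below uses the two JNJ purchase multiples from the examples later in this article (P/E 38.4 versus 13.1) with invented earnings, dividend, and growth figures, to show how the same dividend stream paid on more shares compounds into more income.

```python
# Same dividend per share and growth rate in both cases; only the
# P/E paid at purchase differs. All figures are invented except the
# two P/E multiples, which come from the JNJ examples below.
investment = 12_000.0
eps = 4.00                 # earnings per share at purchase
dividend_per_share = 2.00  # year-one dividend
growth = 0.06              # identical dividend growth either way
years = 10

for label, pe in [("overvalued", 38.4), ("undervalued", 13.1)]:
    price = eps * pe
    shares = investment / price
    total = sum(shares * dividend_per_share * (1 + growth) ** t
                for t in range(years))
    print(f"{label}: P/E {pe}, {shares:.1f} shares, "
          f"${total:,.0f} in dividends over {years} years")
```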

The comment referenced in my introduction was presented and asked in two different yet similar ways. Here is a second excerpt, which was originally stated in the first two paragraphs of the comment:

“*** re: NEE “…there are some who think they can redeploy the net $12,000 into a better investment that will grow even more over the next 5 years. ***

In order to define “better”, the question that needs to be answered is what is the goal of the investment? If the goal is dividend growth and total dividend payments over a defined time period, someone needs to explain how the “elevated” valuation will affect future dividend growth and total dividend payments.”

I offer this second excerpt in order to establish a clarification. If the question relates to the current investment, the “elevated valuation” will not change the future dividend growth nor the total payments of that investment. However, if the “$12,000” is invested into a lower valued company that offers a higher current yield than the original investment is currently offering, then the future dividend payments will be significantly higher. On the other hand, the dividend growth of either the original or the new investment will be directly proportionate to the amount of operating growth and subsequently dividend growth rate that each individual investment would achieve.
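
A quick sketch of that swap, with invented yields and an assumed identical growth rate, shows why the redeployed $12,000 produces more cumulative income:

```python
# Stay in a holding yielding 1.5% on today's value, or move the
# same $12,000 into a similar-quality stock yielding 3.0%. The
# yields are invented; growth rates are assumed identical.
capital = 12_000.0
growth = 0.06
years = 10

def cumulative_income(current_yield: float) -> float:
    year_one = capital * current_yield
    return sum(year_one * (1 + growth) ** t for t in range(years))

print(f"stay: ${cumulative_income(0.015):,.0f}")  # ~$2,373
print(f"swap: ${cumulative_income(0.030):,.0f}")  # ~$4,745, double
```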

The following screenshots cover purchasing Johnson & Johnson (NYSE:JNJ) over two historical 10-year time frames: in the first, Johnson & Johnson is purchased when valuation was excessive; in the second, when valuation made sense. The historical earnings and price correlated graphs, as well as the associated performance graphs, tell the story. However, for a clear explanation of how and why this works, I suggest the reader watch the analyze-out-loud video covering the same time frames that follows.

Johnson & Johnson: Purchased December 31, 1998 (Overvalued, P/E Ratio 38.4)

Johnson & Johnson: Purchased December 31, 2008 (Undervalued, P/E Ratio 13.1)

FAST Graphs Analyze-Out-Loud Video: Johnson & Johnson 10-Year Results, Overvalued Versus Undervalued

In the following analyze-out-loud video, I clearly illustrate how valuation has a material impact on dividend income. As an aside, valuation not only has a material impact on dividend income, it also has a major impact on total return. As a clue to what you will see in the video: the overvalued purchase of Johnson & Johnson was made during a time when its earnings growth was over 13%, while the undervalued purchase was made during a time when its earnings growth rate was only 6%, half as fast. Nevertheless, the undervalued purchase delivered more dividend income and a higher total return thanks to attractive valuation, even though the growth rate was much lower.

Summary and Conclusions

In summary, and with all things remaining equal, you can increase your income and your total return by selling an overvalued dividend growth stock and reinvesting in a similar-quality undervalued dividend growth stock. It’s important to remember that in both cases the forecast of future growth is equally tenuous. In other words, this will not work out if the original investment continues to grow while the new investment falters. Therefore, the concept of similar quality is essential, as is the predictability of future growth.

Consequently, I do not suggest that investors act impetuously or frivolously when making buy and sell decisions for their portfolios. I’m a fervent believer in the old adage that “a portfolio is like a bar of soap: the more you handle it, the smaller it gets.” Therefore, and to be clear, this kind of strategy only works over a complete business cycle. Investors need to recognize that out-of-favor stocks tend to stay out of favor for a period of time, and in-favor stocks likewise. In other words, these decisions make sense as long-term decisions, not as short-term trades.

Disclosure: I am/we are long JNJ.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it. I have no business relationship with any company whose stock is mentioned in this article.

How the big five storage array makers tier data to the cloud

In recent articles we have looked at the range of cloud storage products and services available. These have included the file, block and object storage available from the main cloud providers, and virtual storage appliances available in the cloud from the big storage array makers.

In this article we take a snapshot of integration between on-premise storage arrays and the cloud.

Methods used tend to break down into three main categories.

First, there are features and functionality that offer actual tiering to the cloud with various degrees of automation, mostly aimed at migrating inactive data off to cheaper storage.

Second, there are products and features that offer some form of backup and archiving to the cloud, through software or hardware appliances.

Finally, some suppliers – notably IBM and Hitachi Vantara – focus their cloud tiering efforts around a product that provides some kind of on-ramp to the cloud, as a facilitator of hybrid- or multi-cloud storage.

Dell EMC

Dell EMC’s midrange/enterprise Unity storage arrays offer file and block tiering to the cloud, “seamlessly” according to its publicity materials, using its Cloud Tiering Appliance (CTA).

This sits between the on-prem Unity deployment and the cloud. Files are migrated to the cloud according to user-defined policies, and an 8KB stub is left on the on-prem hardware. For block storage, snapshots are taken and can be migrated to the cloud while the originals are erased. The snapshots can be restored to the original system or any other.
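
Dell EMC hasn’t published CTA’s internals. As a generic sketch of policy-based tiering with stubbing (not CTA’s actual code), the following assumes an S3-compatible target; the bucket name and policy threshold are invented.

```python
import os
import time
import boto3

# Generic sketch of policy-driven tiering with stubbing, in the
# spirit of what is described above; this is not Dell EMC CTA's
# actual code. Bucket name and policy threshold are invented.
s3 = boto3.client("s3")
BUCKET = "cold-tier-example"   # hypothetical target bucket
MAX_AGE_DAYS = 180             # user-defined "inactive" policy

def tier_if_cold(path: str) -> None:
    """Upload a long-unused file to the cloud and leave a stub behind."""
    age_days = (time.time() - os.path.getatime(path)) / 86400
    if age_days < MAX_AGE_DAYS:
        return
    key = path.lstrip("/")
    s3.upload_file(path, BUCKET, key)
    # Stand-in for the 8KB stub the appliance leaves on-prem: a tiny
    # file recording where the real data now lives.
    with open(path, "w") as stub:
        stub.write(f"TIERED: s3://{BUCKET}/{key}\n")
```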

Cloud tiering from CTA is supported for Microsoft Azure, Amazon S3, and IBM Cloud Object Storage as well as Dell EMC’s Virtustream and Dell EMC Elastic Cloud Storage.

Dell EMC also offers CloudArray, which is a cloud tiering tool available as hardware or a software virtual appliance. CloudArray – gobbled up from TwinStrata in 2014 – can work with any SAN or NAS on-prem hardware, and can tier data to public cloud. It also offers snapshots, data deduplication and encryption functionality.

In addition, Unity arrays can be managed from the cloud and also come with Cloud IQ, a free cloud-based software-as-a-service suite with predictive analytics, alerts and remediation suggestions. Cloud IQ is supported in Unity, SC, XtremIO, VMAX, and PowerMax storage hardware.

Dell EMC’s scale-out NAS product, Isilon, has CloudPools, which allows policy-based automated tiering of data to the three key cloud providers as well as to private clouds.

XtremIO all-flash systems can tier data off to Dell EMC’s Virtustream, as can Unity and VMAX. No such option is available for PowerMax NVMe-equipped arrays, but if data is on those it’s not likely to be cold anyway. Dell EMC doesn’t seem to provide any cloud tiering for its SC series storage arrays.

HPE

HPE’s StoreOnce data protection appliances have a feature called HPE Cloud Bank Storage. This offers use of the cloud as a target for backup and archiving, with change block tracking and data deduplication.

Cloud Bank Storage works with AWS and Microsoft Azure as well as private clouds built with Scality (see below) and can restore to any – presumably HPE – system in case of recovery from a disaster.

HPE 3Par publicity refers to use of Cloud Bank Storage as a “cloud tier” but it’s pretty clear this is backup/DR capability rather than a storage tier as such.

With HPE’s acquisition of Nimble Storage, it gained that company’s Cloud Volumes offering. This sees customers able to set up and provision Nimble flash-driven cloud storage instances in the Azure and AWS clouds. HPE calls it a tier, but there doesn’t seem to be any automated tiering functionality between on-premises deployments and Cloud Volumes.

HPE’s Scalable Object Storage – based on Scality’s RING architecture – presumably comes with the Zenko multi-cloud controller, announced in March.

IBM

IBM’s link between on-premises storage and the public cloud is IBM Spectrum Virtualize for Public Cloud.

This is the public cloud-capable update of IBM’s venerable SAN Volume Controller, formerly a hardware storage virtualisation box, but now runnable as a software appliance in the cloud and on-premise.

IBM Spectrum Virtualize for Public Cloud allows access to the public cloud – only IBM’s own, for now – from IBM storage, to move data between on-premises datacentres and the cloud, to use the cloud for disaster recovery and devops, and to provide asynchronous and synchronous remote replication.

Hitachi Vantara

Hitachi Vantara’s VSP all-flash F and hybrid flash G series arrays offer automated tiering via the company’s own Hitachi Content Platform to the Amazon, Microsoft Azure and IBM clouds. The focus is on reduction of storage costs by movement of inactive data to the cloud.

Hitachi Content Platform is an object storage platform that can run as hardware or software and operate as private cloud storage with access to the Azure, Amazon and Google clouds. Accessed from existing storage infrastructure, it acts as an on-ramp to public cloud storage.

NetApp

For its FAS all-flash and hybrid flash hardware, NetApp offers FabricPool. This allows tiering of inactive data off to public cloud storage, with Amazon S3 and Microsoft Azure Blob Storage supported, as well as private clouds. Tiering is automated, and policies for data movement can be set on a per-volume basis.

NetApp’s E-Series all-flash arrays can use NetApp SANtricity Cloud Connector for block-based backup, copy, and restore of E-Series volumes to an Amazon S3 account, with RESTful API job management of backup and restore tasks.
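
SANtricity Cloud Connector’s actual endpoints aren’t reproduced here; the sketch below shows the general REST job-management pattern (submit a backup job, then poll its status) against a hypothetical endpoint.

```python
import time
import requests

# Generic REST job-management pattern (submit, then poll) for a
# backup task. The base URL, paths and payload are hypothetical,
# not SANtricity Cloud Connector's actual API.
BASE = "https://connector.example.local/api/v1"

def run_backup(volume_id: str, s3_bucket: str) -> None:
    resp = requests.post(f"{BASE}/backup-jobs",
                         json={"volume": volume_id, "target": s3_bucket})
    resp.raise_for_status()
    job_id = resp.json()["id"]

    while True:  # poll until the job reaches a terminal state
        status = requests.get(f"{BASE}/backup-jobs/{job_id}").json()["status"]
        if status in ("complete", "failed"):
            print(f"job {job_id}: {status}")
            return
        time.sleep(10)
```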

NetApp doesn’t appear to provide any connection to the cloud for its SolidFire all-flash storage. But that may be because SolidFire is targeted at those that want to provide cloud storage themselves.
