Young management consultants may be novices, but they’re sold as experts. Conversely, even experienced consultants, who legitimately present themselves as experts, still feel like novices when they embark on a new project.
The challenge with effective consulting is that it depends on in-depth situational knowledge that consultants simply can’t have when they start an assignment. What’s more, they may not yet be completely clear on what the client — who’s paying top dollar and expects results immediately — really wants. So consultants must rapidly and discreetly gain knowledge of the client’s business while simultaneously giving an impression of competence and self-confidence. We call this challenge learning-credibility tension.
How do consultants overcome it?
Consultancy Work Is a Performance
For consultants, work is largely a performance. Like skillful actors, they use a combination of “backstage” preparation and “front stage” performance to make the audience (that is, the client) believe the story they want to tell.
Consultants are sometimes accused of trying to hoodwink their clients with smoke and mirrors, using management fashions or buzzwords for their own benefit. But our research suggests that they are far from being arch manipulators who control every interaction. Instead, they are doing everything they can to learn and deliver value at the same time, under the constant risk of failure.
The obvious way to gain knowledge is to ask direct questions. But if consultants try that, they risk looking uninformed or just useless. Clients might reason, “We shouldn’t have to train you!”
Experimentation could be another fruitful approach. But when leaders hire an expert to take on a challenging task, they don’t expect the person to try things out. They expect the expert to just know what to do.
Consultants can also attempt to display knowledge right away. But if they make a mistake that reveals their ignorance, they could look incompetent. If things go wrong later, the client might lose faith in the consultant’s expertise, making it even harder for them to deliver.
In other words, consultants really are faking it ’til they make it — or, more precisely, faking it so that they can make it. Their fakery is not cynical, but sincere.
We studied management consulting projects for almost two years and interviewed 79 consultants to understand how learning-credibility tension manifests in practice and how consultants deal with it. What we found is that consultants use a range of verbal and nonverbal tactics that help them manage perceptions and neutralize threats to their professional image.
Consultants deal with three types of threats to their self-image: competence threats, acceptance threats, and productivity threats. To neutralize them, they use three closely related tactics: crafting relevance, crafting resonance, and crafting substance. Let’s look at them in turn.
Crafting Relevance to Seem Competent While Learning
Consultants are usually hired to advise on business transformation, project management, or strategy. However, they must also show that they adequately understand the technical side of their assignments. In other words, they face competence threats, which they deal with by crafting relevance.
Crafting relevance is about having the maximum impact in the minimum time by leveraging all the bits of knowledge that are available. Consultants don’t have to know it all — just enough to be taken seriously and appear competent while they seek more information.
One way to do this is to collect nuggets of information and selectively present them back to clients. The information might come from written material on past consulting assignments, the client’s internal documents, or information in the public domain. By preparing thoroughly and using these nuggets to create a mental map, consultants start to build a high-level view of the client’s situation.
The other way consultants craft relevance is by approximating past experiences — that is, by telling stories from past assignments that have some parallel with the problem at hand. Backstage, they search their track record (or their colleagues’) for experiences that echo the current assignment. Then they bring them up in conversation with the client, perhaps pointing to their own contribution. This preserves face while encouraging the client to share more details.
Of course, clients know very well that their consultant hasn’t really learned an entire technical field in a matter of days. But they still appreciate that the consultant has done their homework. For their part, consultants use crafting relevance to develop just enough expertise to interact with clients, with or without the ability to execute.
Crafting Resonance by Recycling Insider Knowledge
Clients must accept consultants as fellow professionals before they will follow their advice. But it’s hard for a newcomer to fit in straight away, because it takes time to appreciate “how we do things around here.” This exposes consultants to acceptance threats, which they deal with by crafting resonance: recycling insider knowledge to gain acceptance while acquiring new information.
Clever Hans was a horse who tapped his hoof to signal the answer to arithmetic questions. Of course, Hans couldn’t really do math. He simply watched his trainer for cues that he’d given the right answer.
Similarly, consultants monitor their clients for physical approval cues (such as facial expression or body posture) or the words and phrases they use, which often have special resonance. For instance, lawyers from a top firm responded positively to Latin expressions, as they were part of legal work culture and showed intellectual sophistication. So consultants would rehearse these expressions backstage, and then use them in conversations to show that they knew their Latin too, fostering acceptance. Having picked up these expressions, consultants can say the things that clients want to hear, allowing them to fit in despite being outsiders and triggering more engagement during their exchanges.
Second, consultants borrow internal insights from client staff, and then recycle them by presenting them as their own when they’re with other insiders. Some might say this is the sort of thing that gives consultants a bad name — people who “borrow your watch to tell you the time, then walk off with the watch.” But it’s more than just a confidence trick. By watching how people react to their borrowed judgments, consultants can discover which ideas (and people) have support within the organization and choose to amplify them. This can help them tackle “wicked” problems where there are no simple or clear-cut answers.
Crafting Substance by Creating Knowledge Objects
Consultancy services are usually expensive, so clients are concerned with getting value for money in the short term. But it usually takes consultants a while to get up to speed and deliver their highest-value output. In the meantime, the client may question their value add, exposing them to productivity threats. They deal with this using the third and final tactic: crafting substance. This is about creating knowledge objects to display productivity while seeking information at the same time.
The first way to craft substance is by manufacturing PowerPoint figures. While PowerPoint has a mixed reputation, it’s an indispensable tool for consultants to impress their clients with clear thinking, deep understanding, and task progress. Furthermore, PowerPoint figures also serve as prompts that elicit feedback on technical points — with the added bonus that any criticism is directed toward the figure rather than the consultants themselves.
Consultants often use ideographs, combinations of text and images, to express important ideas, and many consulting firms maintain a library of readymade templates to help consultants create their figures quickly and easily. These provide them with a sort of plug-and-play thinking, allowing them to quickly make sense of a situation, boil it down to its essentials, and communicate it.
Sometimes, client organizations already know the answers to their problems, but still can’t articulate them — which means they can’t act on them. By providing powerful ideographs that clients can’t create for themselves due to lack of time or resources, consultants can make a telling and visible contribution.
The second method of crafting substance is by tendering activity proofs such as timesheets and workload schedules. As well as giving an impression of control and professionalism, they can help draw out what the client expects, which can be a movable feast. They can also function as protective amulets to ward off clients’ anger at a perceived lack of progress.
Putting your ideas out there in a tangible, stable form is a risk. But it’s a risk that consultants must take, however little they know about the business context, because it shows clients that consultants are committed to the project and are providing value for the money. However, it also helps consultants build their understanding of the new setting, and creates a formal space for feedback on the assignment.
Many People Manage Learning-Credibility Tension
We studied how consultants manage learning-credibility tension. But many others must deal with it too, including temporary staff, project team members, analysts, professional advisers, and freelancers. These workers are not just optional extras; they make a crucial contribution to many organizations. No wonder global executives believe they will be in high demand for years to come. Besides, managers in general can also be included in this group — they are sometimes thought to be a kind of “consultant” themselves.
Like consultants, all of these types of workers have to adapt to a different setting with each new client or project and grapple with dynamic, hard-to-grasp problems from day one. They have to prepare carefully, establish their competence, understand the environment, and cultivate acceptance from new colleagues or clients, often by producing deliverables. And they may have to do all this without any backup from a consultancy firm.
Fortunately, anyone can use the tactics we’ve described, not just consultants. Learn to use them successfully, and you can build confidence, feel better about your work, and maintain your face.
However, managing learning-credibility tension is something much deeper than “personal PR” or acting out a role. It will also help you to gain new insights, share information, and work toward longer-term goals. After all, without belief and acceptance from those around you, your important new project will never get off the ground.
Given how chaotic and unpredictable working life can be, it’s not surprising that more and more people are falling prey to impostor syndrome, the fear that you’re not up to the task and will be found out. For most workers today, that feeling is ever-present.
However, when you reframe impostor syndrome as managing learning-credibility tension, you turn it from a psychological flaw into a vital skill. In our research, we found that consultants don’t just have impostor syndrome; they actively embrace it, because it keeps them sharp and on the edge, where they need to be.
There is no shortage of advice for those who aspire to be effective leaders. One piece of advice may be particularly enticing: if you want to be a successful leader, ensure that you are seen as a leader and not a follower. To do this, goes the usual advice, you should seek out opportunities to lead, adopt behaviors that people associate with leaders rather than followers (e.g., dominance and confidence), and — above all else — show your exceptionalism relative to your peers.
But there is a problem here. It is not just that there is limited evidence that leaders really are exceptional individuals. More importantly, it is that by seeking to demonstrate their specialness and exceptionalism, aspiring leaders may compromise their very ability to lead.
The simple reason for this is that, as Warren Bennis has observed, leaders are only ever as effective as their ability to engage followers. Without followership, leadership is nothing. As one of us (Haslam) observed in a 2011 book coauthored with Stephen Reicher and Michael Platow, The New Psychology of Leadership, this means that the key to success in leadership lies in the collective “we,” not the individual “I.”
In other words, leadership is a process that emerges from a relationship between leaders and followers who are bound together by their understanding that they are members of the same social group. People will be more effective leaders when their behaviors indicate that they are one of us, because they share our values, concerns, and experiences, and are doing it for us, by looking to advance the interests of the group rather than their own personal interests.
This perspective identifies a major flaw in the usual advice for aspiring leaders. Instead of seeking to stand out from their peers, they may be better served by ensuring that they are seen to be a good follower — as someone who is willing to work within the group and on its behalf. In short, leaders need to be seen as “one of us” (not “one of them”) and as “doing it for us” (not only for themselves or, worse, for “them”).
In a recent paper, we set out to test these ideas through a longitudinal analysis of emergent leadership among 218 male Royal Marines recruits who embarked on the elite training program after passing a series of tests of psychological aptitude and physical fitness. More specifically, we examined whether the capacity for recruits to be seen as displaying leadership by their peers was associated with their tendency to see themselves as natural leaders (with the skills and abilities to lead) or as followers (who were more concerned with getting things done than getting their own way).
For this purpose, we tracked recruits’ self-identification as leaders and followers across the course of a physically arduous 32-week infantry training program that prepared them for warfare in a range of extreme environments. This culminated in the recruits and the commanders who oversaw their training casting votes for the award of the Commando Medal to the recruit who showed the most leadership ability. So who gets the votes? Marines who set themselves up as leaders, or those who cast themselves as followers?
In line with the analysis that we present above, we found that recruits who considered themselves to be natural leaders were not able to convince their peers that this was the case. Instead, it was the recruits who saw themselves (and were seen by commanders) as followers who ultimately emerged as leaders. In other words, it seems that those who want to lead are well served by first endeavoring to follow.
Interestingly, though, alongside these results, we also found that recruits who saw themselves as natural leaders were seen by their commanders as having more leadership potential than recruits who saw themselves as followers. This suggests that what good leadership looks like is highly dependent on where evaluators are standing. Evaluators who are situated within the group, and able to personally experience the capacity of group members to influence one another, appear to recognize the leadership of those who see themselves as followers. In contrast, those who stand outside the group appear to be most attuned to a candidate’s correspondence to generic ideas of what a leader should look like.
This latter pattern tells us a lot about the dynamics of leadership selection and helps to explain why the people who are chosen as leaders by independent selection panels often fail to deliver when they are in the thick of the group that they actually need to lead. It also has the potential to complicate the picture for aspiring leaders. The reason for this is that in organizations that eschew democratic processes in their selection of leaders, employees who are seen as leaders (by themselves and by those who have the power to raise them up) may be more likely to be appointed to leadership positions than those who see themselves as followers.
However, as our Marines data suggest, this elevation of those who seek to distance themselves from their group may actually be a recipe for failure, not success. It encourages leaders to fall in love with their own image and to place themselves above and apart from followers. And that is the best way to get followers to fall out of love with the leader. Not only will this then undermine the leader’s capacity to lead but, more importantly, it will also stifle followers’ willingness to follow. And that can only ever be a path to organizational mediocrity.
Innovation is famously difficult — many projects end up losing money, frustrating employees, and going nowhere. And yet corporations and governments spend billions of dollars annually pursuing innovation. This huge spending would generate more value for businesses and societies if the innovation success rate were just a little higher. Is there a way to increase the success rate without spending more?
We think there is. Innovation projects often fail because the resources are spent on the wrong kind of innovation. Too much money is spent on attention-grabbing activities that are straightforward to do, like hiring new people, procuring new technologies, and buying more facilities. It is much less obvious, and usually harder, to change the design of a current service system, introduce new customer experiences, or build a better business model — but the return on those investments may be much higher.
Innovation needs to be considered in two ways: innovation capacity and innovation ability.
Innovation capacity is the organization’s potential for innovation. This is the stuff that’s easy to buy, and that organizations tend to spend too much on: assets and resources. This includes technology and people, as well as tangible, intangible, and financial assets. Most innovation investments, such as product improvement, technological innovation, and research and development (R&D), traditionally aim at strengthening the innovation capacity of the organization. Today, every company, whether small or multinational, new or incumbent, can obtain innovation capacity. People can be hired through the sharing economy; technology can be rented by the hour; finance can be sought for any prototype, and assets bought. But capacity alone is insufficient to create new, significant, sustainable value for customers — no matter how huge the capacity.
That’s where innovation ability comes in. This term describes the more difficult aspects of creating value, like new customer experiences, a revised service system, or new business models. An organization may have many people providing innovation capacity, but may still struggle to increase innovation ability, because capacity by itself does not invent or implement a new business model or a better customer experience. Yes, an organization requires a certain amount of innovation capacity, but increasing capacity alone does not increase value creation.
We’ve come to these conclusions after completing case study analyses of a range of companies, including Nokia, Kodak, Borders, Amazon, Apple, and Xerox. Together, these companies have spent billions on innovation. Although the latter three (Amazon, Apple, and Xerox) spent relatively less on innovation, they spent their innovation budgets more wisely, choosing to invest in innovation ability rather than capacity.
During 2007–2010, Nokia was an example of a corporation with great innovation capacity. Nokia always offered technologically feature-rich mobile phones — in fact, Nokia invented the smartphone, and it offered a touchscreen smartphone two years before Apple’s iPhone. Yet Nokia hung on to the Symbian operating system despite knowing its weaknesses in the eyes of the consumer. Nokia had the resources to develop a new operating system, but chose to stick with Symbian. As a result, Nokia became less and less able to create new value. At one point Nokia manufactured 90 different mobile phones. Their functionality was developed slightly from one model to the next, but most phones were examples of innovation driven by the company’s innovation capacity. In short, technology was a strength for both Nokia and Apple, but Apple did a much better job connecting its technology to a service system delivering new customer experiences through a relevant business model. Developers outside of Apple were allowed to sell apps through iTunes and the App Store, with Apple keeping 30% of the sales made by outside developers. The huge number of apps created provided customers with a very wide selection of new customer experiences.
Nokia launched the Ovi Store globally in May 2009. The company was, however, unable to match the service system provided by the iPhone in combination with iTunes and the thousands of applications that had already been developed. The then Nokia CEO Stephen Elop was quoted in Wired in February 2011 as saying: “The first iPhone shipped in 2007, and we still don’t have a product that is close to their experience.” The Ovi Store was discontinued in 2015.
Kodak is another example of a company that spent most of its resources on drivers of innovation capacity. The company famously spent over four billion dollars developing the digital camera, but chose not to develop a new business model to convert that innovation capacity into innovation ability — and as a result, failed to capture the value of what it had invented. By contrast, Xerox invested in customer experiences, creating increased value for customers by expanding its platform, resulting in increased revenues. As Xerox’s CEO Anne Mulcahy said in the Dean’s Innovative Leader Series at MIT in 2006: “In trying to rebound, we spent the vast majority of our time talking to customers.” By 2011, two-thirds of Xerox’s revenues came from products or services it had introduced within the last two years. Put simply, Xerox embraced the digital era and developed a host of technologies enabling the firm to transform into a services business. Kodak, in contrast, tried to delay that transformation as long as possible, avoiding developing its service system, customer experiences, and business model.
Three lessons for value creation emerge here.
First, organizations should spend less on building the capacity for innovation. In other words, even if your organization increases the number of people working on innovation initiatives by 10% or even 20% — while at the same time no other changes are made internally — there is simply no legitimate reason to believe that the organization will create greater value.
Second, to succeed with innovation initiatives, corporations need to consider the value drivers that change through innovation ability — the business model, customer experiences, and the service system. Even if an organization has a new idea, a new technology, a new product, or a new service, none of these will necessarily increase the organization’s innovation success rate unless innovation ability changes one or more of the value drivers.
Finally, the thinking and practice of innovation should start from the premise that successful innovation is driven by the shared value created. Innovation should be value-driven; corporations, and governments, need to create value for a network of stakeholders: customers, suppliers, and the firm — maximizing value solely for the owners is not enough.
A corporation can have the same idea, product, service or technology as its main competitor, but to win in the marketplace it must develop a new business model, customer experience, or service system that will put that new idea, product, or technology to use.
In January of 2018, Annette Zimmermann, vice president of research at Gartner, proclaimed: “By 2022, your personal device will know more about your emotional state than your own family.” Just two months later, a landmark study from Ohio State University claimed that its algorithm was now better at detecting emotions than people are.
Emotional inputs will create a shift from data-driven, IQ-heavy interactions to deep, EQ-guided experiences, giving brands the opportunity to connect to customers on a much deeper, more personal level. But reading people’s emotions is a delicate business. Emotions are highly personal, and users will fear privacy invasion and manipulation. Before companies dive in, leaders should consider questions like:
What are you offering? Does your value proposition naturally lend itself to the involvement of emotions? And can you credibly justify the inclusion of emotional clues for the betterment of the user experience?
What are your customers’ emotional intentions when interacting with your brand? What is the nature of the interaction?
Has the user given you explicit permission to analyze their emotions? Does the user stay in control of their data, and can they revoke their permission at any given time?
Is your system smart enough to accurately read and react to a user’s emotions?
What is the danger in any given situation if the system should fail — danger for the user, and/or danger for the brand?
Keeping those concerns in mind, business leaders should be aware of current applications for Emotional AI. These fall roughly into three categories:
Systems that use emotional analysis to adjust their response.
In this application, the AI service acknowledges emotions and factors them into its decision making process. However, the service’s output is completely emotion-free.
Conversational IVRs (interactive voice response) and chatbots promise to route customers to the right service flow faster and more accurately when factoring in emotions. For example, when the system detects that a user is angry, the call is routed to a different escalation flow, or to a human.
AutoEmotive, Affectiva’s Automotive AI, and Ford are racing to get emotional car software market-ready to detect human emotions such as anger or lack of attention, and then take control over or stop the vehicle, preventing accidents or acts of road rage.
The security sector also dabbles in Emotion AI to detect stressed or angry people. The British government, for instance, monitors its citizens’ sentiments on certain topics over social media.
In this category, emotions play a part in the machine’s decision-making process. However, the machine still reacts like a machine — essentially, as a giant switchboard routing people in the right direction.
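The “switchboard” idea above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor’s API: the `detect_emotion` stub stands in for a real emotion-analysis service, and the labels, thresholds, and flow names are assumptions for the sake of the example.

```python
def detect_emotion(transcript: str) -> tuple[str, float]:
    # Stand-in for a real emotion-analysis service: a trivial keyword
    # check that returns an (emotion label, confidence) pair.
    text = transcript.lower()
    if "unacceptable" in text or "furious" in text:
        return ("angry", 0.9)
    if "don't understand" in text:
        return ("confused", 0.7)
    return ("neutral", 0.6)

def route_call(transcript: str) -> str:
    """Pick a service flow from the caller's detected emotion.

    The output is emotion-free: the system only uses emotion to decide
    where to send the caller, like a switchboard.
    """
    emotion, confidence = detect_emotion(transcript)
    if emotion == "angry" and confidence > 0.8:
        return "human_agent"        # escalate directly to a person
    if emotion in ("frustrated", "confused"):
        return "simplified_flow"    # shorter menu, slower prompts
    return "standard_flow"          # the machine reacts like a machine
```

Note that the emotional analysis only steers the routing decision; the response the caller hears contains no emotional content of its own, which is exactly what distinguishes this first category.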
Systems that provide a targeted emotional analysis for learning purposes.
In 2009, Philips teamed up with a Dutch bank to develop the idea of a “rationalizer” bracelet to stop traders from making irrational decisions by monitoring their stress levels, measured via the wearer’s pulse. Making traders aware of their heightened emotional states made them pause and think before making impulse decisions.
Brain Power’s smart glasses help people with autism better understand emotions and social cues. The wearer of this Google Glass type device sees and hears special feedback geared to the situation — for example coaching on facial expressions of emotions, when to look at people, and even feedback on the user’s own emotional state.
These targeted emotional analysis systems acknowledge and interpret emotions. The insights are communicated to the user for learning purposes. On a personal level, these targeted applications will act like a Fitbit for the heart and mind, aiding in mindfulness, self-awareness, and ultimately self-improvement, while maintaining a machine-person relationship that keeps the user in charge.
Targeted emotional learning systems are also being tested for group settings, such as by analyzing the emotions of students for teachers, or workers for managers. Scaling to group settings can have an Orwellian feeling: Concerns about privacy, creativity, and individuality have these experiments playing on the edge of ethical acceptance. More importantly, adequate psychological training for the people in power is required to interpret the emotional results, and to make adequate adjustments.
Systems that mimic and ultimately replace human-to-human interactions.
When smart speakers entered the American living room in 2014, we started to get used to hearing computers refer to themselves as “I.” Call it a human error or an evolutionary shortcut, but when machines talk, people assume relationships.
There are now products and services that use conversational UIs and the concept of “computers as social actors” to try to alleviate mental-health concerns. These applications aim to coach users through crises using techniques from behavioral therapy. Ellie helps treat soldiers with PTSD. Karim helps Syrian refugees overcome trauma. Digital assistants are even tasked with helping alleviate loneliness among the elderly.
Casual applications like Microsoft’s XiaoIce, Google Assistant, or Amazon’s Alexa use social and emotional cues for a less altruistic purpose — their aim is to secure users’ loyalty by acting like new AI BFFs. Futurist Richard van Hooijdonk quips: “If a marketer can get you to cry, he can get you to buy.”
The discussion around addictive technology is starting to examine the intentions behind voice assistants. What does it mean for users if personal assistants are hooked up to advertisers? In a leaked Facebook memo, for example, the social media company boasted to advertisers that it could detect, and subsequently target, teens’ feelings of “worthlessness” and “insecurity,” among other emotions.
Judith Masthoff of the University of Aberdeen says, “I would like people to have their own guardian angel that could support them emotionally throughout the day.” But in order to get to that ideal, a series of (collectively agreed upon) experiments will need to guide designers and brands toward the appropriate level of intimacy, and a series of failures will determine the rules for maintaining trust, privacy, and emotional boundaries.
The biggest hurdle to finding the right balance might not be achieving more effective forms of emotional AI, but finding emotionally intelligent humans to build them.
Productive: “Achieving or producing a significant amount or result.” Enough: “As much or as many as required.”
As a time management coach, I’m keenly aware that you could answer the question “Am I productive enough?” using a variety of methods. I’m also familiar with the fact that individuals fall on a productivity spectrum. One person’s maximum productivity for a certain role in a particular environment could look vastly different from another person’s. These variations result from a combination of intrinsic ability, experience level, overall capacity, and desire.
For the purposes of this discussion, I’m narrowing the definition of “productive enough” to whether you are meeting the requirements of your job when operating at your personal peak performance. This reasoning process is outlined in the flowchart below, and we’ll walk through it step by step by answering a series of questions. By the end, you should have a clearer sense of whether you can wrap up for the day knowing you were productive enough or whether you have room for improvement.
Question 1: Am I meeting expectations?
If “enough” is defined as “as much or as many as required,” then the initial essential question is whether you meet the requirements of your job. For people who have a well-defined job scope, answering this question may be easy: Did you meet the project milestones? Did you reply to customers within the specified times? Did you hit your sales targets? If you have a less clear job scope, this question may be a little harder to answer, but the answer should be evident from whether your manager has noted areas where you need improvement.
If the answer is yes in regard to your key job responsibilities, then you’re productive enough. You could do more, but you don’t have to do more to meet expectations. If the answer is no, proceed to question two.
Question 2: Are these expectations my own, and not required by others?
Having high expectations of yourself can be a positive quality. But if you find yourself getting extremely stressed or working longer hours than you would prefer in order to meet expectations that aren’t significant to anyone else, your positive quality may have turned negative.
In these situations, you need to seriously ask yourself: Are these expectations my own, and not required — or potentially even noticed — by others? If the answer is yes, most likely you are productive enough. Instead of beating yourself up about what you’re not doing, it’s time to lower your expectations of yourself to a manageable level, aligned with everyone else’s. If the answer is no, if other people really do care about these expectations, then proceed to the next question.
Question 3: Am I owning my time management and using productivity resources?
Once you’ve clarified that you’re not meeting expectations that truly are important to fulfilling your job function, you need to evaluate whether you are owning your time management and using productivity resources.
Let’s dive a bit deeper into the two parts of this question.
Part one is: “Am I owning my time management?” From my perspective as a time management coach, this is asking whether you are proactive in how you allocate your time and effort. That includes clarifying priorities, planning your time, setting boundaries, and being focused when you are working. (Hint: If you obsessively check email, social media, or your phone and have little to no focused work time, you’re probably not meeting expectations in this area.) This is the strategic portion of your relationship with time.
Part two is: “Am I using productivity resources?” From my perspective, this entails utilizing the tools available to help you achieve efficiency. That could include having a written to-do list instead of keeping everything in your head, using tools like SaneBox or other email filtering systems, delegating more, or learning how to use your existing tools more efficiently. This is the tactical portion of your time management.
If you can confidently answer yes to both of the above, then within your current skill set, I would say you’re likely productive enough — you are doing the best you can within the circumstances. If you answer no to one or more of the above, then you’re likely not productive enough, meaning you are not producing the most you can within the circumstances.
How to Become Productive Enough
If you come to the end of the flowchart and recognize that you likely aren’t productive enough, then it’s time to evaluate your results and determine next steps.
One potential next step involves negotiating expectations. If you feel that you are owning your time management and using your productivity resources (so in a personal sense you’re productive enough), but you still worry you’re not meeting expectations, have a discussion with your manager. Lay out your different projects and deadlines as well as your work estimates and time capacity. Then see if you can get adjustments to your responsibilities. If your manager wants to consider a simple system for overall resource planning, tools such as float.com can help.
Another potential next step involves honing your time-management skills. If you’re not planning, prioritizing, and focusing at certain times throughout the day, and your job requires any type of proactive work, I’m 98.2% positive you’re leaving productivity on the table. It’s your responsibility to get the help you need to improve these skills.
The same is true for productivity resources. If you’re not utilizing any tools — even paper ones — that can help you stay organized, you’re very likely missing out and wasting time. I would work on improving in these areas before asking for significant adjustments to expectations.
If you’ve been wondering whether you’re productive enough, this is one way to answer that question from a time management point of view. I hope the answer frees you to breathe a little easier or to get motivated to do what you can to improve your situation.
Machine learning is increasingly being used to predict individuals’ attitudes, behaviors, and preferences across an array of applications — from personalized marketing to precision medicine. Unsurprisingly, given the speed of change and ever-increasing complexity, there have been several recent high-profile examples of “machine learning gone wrong.”
When models don’t perform as intended, people and process are normally to blame. Bias can manifest itself in many forms across various stages of the machine learning process, including data collection, data preparation, modeling, evaluation, and deployment. Sampling bias may produce models trained on data that is not fully representative of future cases. Performance bias can exaggerate perceptions of predictive power, generalizability, and performance homogeneity across data segments. Confirmation bias can cause information to be sought, interpreted, emphasized, and remembered in a way that confirms preconceptions. Anchoring bias may lead to over-reliance on the first piece of information examined. So how can we mitigate bias in machine learning?
In our federally funded project (with Rick Netemeyer, David Dobolyi, and Indranil Bardhan), we are developing a patient-centric mobile/IoT platform for those at early risk of cardiovascular disease in the Stroke Belt — a region spanning the southeastern United States, where the incidence rates for stroke are 25% to 40% higher than the national average. As part of the project, we built machine learning models based on various types of unstructured inputs including user-generated text and telemetric and sensor-based data. One critical component of the project involved developing deep learning text analytics models to infer psychometric dimensions — such as measures of numeracy, literacy, trust, and anxiety — which have been shown to have a profound impact on health outcomes including wellness, future doctor visits, and adherence to treatment regimens. The idea is that if a doctor could know that a patient was, for example, skeptical of the health profession, they could tailor their care to overcome that lack of trust. Our models predict these psychometric dimensions based on the data we collected.
Given that cardiovascular disease is disproportionately more likely to affect the health of disparate populations, we knew alleviating racial, gender, and socio-economic biases from our text analytics models would be vitally important. Borrowing from the concept of “privacy by design” popularized by the European Union’s General Data Protection Regulation (GDPR), we employed a “fairness by design” strategy encompassing a few key facets. Companies and data scientists looking to similarly design for fairness can take the following steps:
1. Pair data scientists with a social scientist. Data scientists and social scientists speak somewhat different languages. To a data scientist, “bias” has a particular technical meaning — it refers to the level of segmentation in a classification model. Similarly, the term “discriminatory potential” refers to the extent to which a model can accurately differentiate classes of data (e.g., patients at high versus low risk of cardiovascular disease). In data science, greater “discriminatory potential” is a primary goal. By contrast, when social scientists talk about bias or discrimination, they’re more likely to be referring to questions of equity. Social scientists are generally better equipped to provide a humanistic perspective on fairness and bias.
In our Stroke Belt project, from the start, we made sure to include psychologists, psychometricians, epidemiologists, and folks specialized in dealing with health-disparate populations. This allowed us to have a better awareness of demographic biases that might creep into the machine learning process.
2. Annotate with caution. Unstructured data such as text and images often is generated by human annotators who provide structured category labels that are then used to train machine learning models. For instance, annotators can label images containing people, or mark which texts contain positive versus negative sentiments.
Human annotation services have become a major business model, with numerous platforms emerging at the intersection of crowd-sourcing and the gig economy. Although the quality of annotation is adequate for many tasks, human annotation is inherently prone to a plethora of culturally ingrained biases.
In our project, we anticipated that this might introduce bias into our models. For example, given two individuals with similar levels of health numeracy, the one whose writing contains misspellings or grammatical mistakes is much more likely to be scored lower by annotators. This can cause biases to seep into the trained models, such as overemphasizing the importance of misspellings relative to more substantive cues when predicting health numeracy.
One effective approach we have found is to include potential bias cases in annotator training modules to increase awareness. However, in the Stroke Belt project, we circumvented annotation entirely, instead relying on self-reported data. While this approach is not always feasible, and may come with its share of issues, it allowed us to avoid annotation-related racial biases.
3. Combine traditional machine learning metrics with fairness measures. The performance of machine learning classification models is typically measured using a small set of well-established metrics that focus on overall performance, class-level performance, and all-around model generalizability. However, these can be augmented with fairness measures designed to quantify machine learning bias. Such key performance indicators are essential for garnering situational awareness — as the saying goes, “if it cannot be measured, it cannot be improved.” By utilizing fairness measures, in the recidivism prediction study mentioned earlier, researchers noted that existing models were heavily skewed in their risk assessments for certain groups.
In our project, we examined model performance within various demographic segments, as well as underlying model assumptions, to identify demographic segments with higher susceptibility to bias in our context. The fairness measures we incorporated included within- and across-segment true and false positive/negative rates, along with the level of reliance on demographic variables. Segments with disproportionately higher false positive or false negative rates might be prone to over-generalizations. For segments with seemingly fair outcomes at present, if demographic variables are weighed heavily relative to others and act as primary drivers of predictions, there might be potential for susceptibility to bias in future data.
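The kind of segment-level auditing described above can be sketched in a few lines. The following is a minimal illustration, not the project’s actual tooling: it computes true/false positive rates within each demographic segment and reports the largest across-segment gap as a simple red flag. All names are illustrative.

```python
def segment_rates(y_true, y_pred, segments):
    """Return {segment: (tpr, fpr)} computed within each segment.

    y_true / y_pred are binary labels; segments gives each case's
    demographic segment. Empty denominators yield 0.0.
    """
    rates = {}
    for seg in set(segments):
        idx = [i for i, s in enumerate(segments) if s == seg]
        tp = sum(1 for i in idx if y_true[i] == 1 and y_pred[i] == 1)
        fn = sum(1 for i in idx if y_true[i] == 1 and y_pred[i] == 0)
        fp = sum(1 for i in idx if y_true[i] == 0 and y_pred[i] == 1)
        tn = sum(1 for i in idx if y_true[i] == 0 and y_pred[i] == 0)
        tpr = tp / (tp + fn) if tp + fn else 0.0
        fpr = fp / (fp + tn) if fp + tn else 0.0
        rates[seg] = (tpr, fpr)
    return rates

def max_disparity(rates):
    """Largest across-segment gap in TPR and FPR -- a crude fairness flag."""
    tprs = [r[0] for r in rates.values()]
    fprs = [r[1] for r in rates.values()]
    return max(tprs) - min(tprs), max(fprs) - min(fprs)
```

In practice one would track these gaps alongside overall accuracy, so that a model that performs well on average but poorly for one segment is caught before deployment.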
4. When sampling, balance representativeness with critical mass constraints. For data sampling, the age-old mantra has been to ensure that samples are statistically representative of the future cases that a given model is likely to encounter. This is generally a good practice. The one issue with representativeness is that it undervalues minority cases — those that are statistically less common. While at the surface this seems intuitive and acceptable — there are always going to be more- and less-common cases — issues arise when certain demographic groups are statistical minorities in your dataset. Essentially, machine learning models are incentivized to learn patterns that apply to large groups, in order to become more accurate, meaning that if a particular group isn’t well represented in your data, the model will not prioritize learning about it. In our project, we had to significantly oversample cases related to certain demographic groups in order to ensure that we had a critical mass of training samples necessary to meet our fairness measures.
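One simple way to enforce a critical mass, assuming resampling with replacement is acceptable for the task, is to top up any segment that falls below a chosen floor. This sketch is illustrative only; the function and parameter names are not from the project.

```python
import random

def oversample_to_floor(rows, segment_of, min_count, seed=0):
    """Resample (with replacement) any demographic segment that has
    fewer than `min_count` cases, leaving larger segments untouched.

    rows       -- list of training records
    segment_of -- function mapping a record to its segment label
    min_count  -- critical mass required per segment
    """
    rng = random.Random(seed)  # seeded for reproducibility
    by_seg = {}
    for r in rows:
        by_seg.setdefault(segment_of(r), []).append(r)
    out = []
    for seg, members in by_seg.items():
        out.extend(members)
        deficit = min_count - len(members)
        if deficit > 0:
            out.extend(rng.choice(members) for _ in range(deficit))
    return out
```

Duplicating minority cases is the bluntest instrument available; weighting the loss function or generating synthetic cases are common alternatives, but the goal is the same: give the model enough signal from each group to learn its patterns.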
5. When building a model, keep de-biasing in mind. Even with the aforementioned steps, de-biasing during the model building and training phase is often necessary. Several tactics have been proposed. One approach is to completely strip the training data of any demographic cues, explicit and implicit. In the recidivism prediction study discussed earlier, the novice human predictors weren’t provided with any race information. Another approach is to build fairness measures into the model’s training objectives, for instance, by “boosting” the importance of certain minority or edge cases.
In our project, we found that it was helpful to train our models within demographic segments algorithmically identified as being highly susceptible to bias. For example, if segments A and B are prone to superfluous generalizations (as quantified by our fairness measures), learning patterns within these segments provides some semblance of demographic homogeneity and alleviates majority/minority sampling issues, thereby forcing the models to learn alternative patterns. In our case, this approach not only enhanced fairness measures markedly (by 5% to 10% for some segments), but also boosted overall accuracy by a couple of percentage points.
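The routing logic behind segment-wise training can be sketched as follows. This is a toy illustration, not the project’s implementation: the `MajorityClass` estimator stands in for any model exposing `fit`/`predict`, and the segment names are hypothetical. The idea is simply to fit a dedicated model inside each bias-susceptible segment and fall back to a shared model everywhere else.

```python
class MajorityClass:
    """Toy estimator: always predicts the most common training label.
    A stand-in for any fit/predict model (e.g., a scikit-learn classifier)."""
    def fit(self, X, y):
        self.label = max(set(y), key=list(y).count)
        return self
    def predict(self, X):
        return [self.label for _ in X]

def fit_per_segment(X, y, segments, susceptible, make_model=MajorityClass):
    """Train one model per bias-susceptible segment plus a shared model
    on the remaining data, and route predictions by segment."""
    models = {}
    for seg in susceptible:
        idx = [i for i, s in enumerate(segments) if s == seg]
        models[seg] = make_model().fit([X[i] for i in idx],
                                       [y[i] for i in idx])
    rest = [i for i, s in enumerate(segments) if s not in susceptible]
    shared = make_model().fit([X[i] for i in rest], [y[i] for i in rest])

    def predict(x, seg):
        # Non-susceptible (or unseen) segments fall back to the shared model.
        return models.get(seg, shared).predict([x])[0]
    return predict
```

Because each susceptible segment is modeled on its own data, patterns learned there cannot be drowned out by the statistical majority — at the cost of smaller training sets per model, which is why the critical-mass sampling step matters.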
A few months back, we were at a conference where the CEO of a major multinational lamented that “the principle of precaution is overshadowing the principle of innovation.” This is a concern voiced within C-suites and machine learning groups worldwide — with regard to both privacy and bias. But fairness by design isn’t about prioritizing political correctness above model accuracy. With careful consideration, it can allow us to develop high-performing models that are accurate and conscionable. Buying in to the idea of fairness by design entails examining different parts of the machine learning process from alternative vantage points, using competing theoretical lenses. In our Stroke Belt project, we were able to develop models with higher overall performance, greater generalizability across various demographic segments, and enhanced model stability — potentially making it easier for the health care system to match the right person with the right intervention in a timely manner.
By making fairness a guiding principle in machine learning projects, we didn’t just build fairer models — we built better ones, too.