The Research Colloquia provide a forum for interaction among faculty, students, and visitors interested in research on business and management. The colloquia include presentations by faculty from UC Irvine and from other universities and research institutes. Colloquia events are open to the public unless otherwise noted; please see the event description for more details.
We study two-player zero-sum stochastic games and propose a two-timescale Q-learning algorithm with function approximation that is payoff-based, convergent, rational, and symmetric between the two players. In two-timescale Q-learning, the fast-timescale iterates are updated in the spirit of stochastic gradient descent, and the slow-timescale iterates (which we use to compute the policies) are updated by taking a convex combination of the previous slow iterate and the latest fast-timescale iterate. Introducing the slow timescale and its update equation marks our main algorithmic novelty. In the special case of linear function approximation, we establish, to the best of our knowledge, the first last-iterate finite-sample bound for payoff-based independent learning dynamics of this type. The result implies a polynomial sample complexity to find a Nash equilibrium in such stochastic games. To establish the results, we model our proposed algorithm as a two-timescale stochastic approximation and derive the finite-sample bound through a Lyapunov-based approach. The key novelty lies in constructing a valid Lyapunov function to capture the evolution of the slow-timescale iterates. Specifically, through a change of variable, we show that the update equation of the slow-timescale iterates resembles the classical smoothed best-response dynamics, where the regularized Nash gap serves as a valid Lyapunov function. This insight enables us to construct a valid Lyapunov function via a generalized variant of the Moreau envelope of the regularized Nash gap. The construction of our Lyapunov function may be of broad independent interest in studying the behavior of stochastic approximation algorithms.
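The two-timescale structure described above admits a compact illustration. The following is a minimal sketch under our own assumptions: the update direction, step sizes, and dimensions are placeholders rather than the authors' exact specification.

```python
import numpy as np

# Hedged sketch of a two-timescale update: a fast iterate follows a noisy
# stochastic-gradient-style direction, and a slow iterate tracks it via a
# convex combination. All names and step-size choices are illustrative.

rng = np.random.default_rng(0)
d = 10                      # dimension of the linear function approximation
theta = np.zeros(d)         # fast-timescale iterate (value estimates)
z = np.zeros(d)             # slow-timescale iterate (used to compute policies)

def noisy_direction(theta, z, rng):
    # Placeholder for the payoff-based stochastic update direction; in the
    # paper this would be built from observed payoffs only.
    return -(theta - z) + 0.1 * rng.standard_normal(theta.shape)

for k in range(1, 10_000):
    alpha = 1.0 / k          # fast step size
    beta = 1.0 / k**1.5      # slow step size, with beta/alpha -> 0

    # Fast timescale: stochastic-gradient-descent-style update.
    theta = theta + alpha * noisy_direction(theta, z, rng)

    # Slow timescale: convex combination of the previous slow iterate and the
    # latest fast iterate -- the paper's main algorithmic novelty.
    z = (1 - beta) * z + beta * theta
```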
Do new CEOs experience a honeymoon period following their appointment? The concept of a honeymoon – a period during which new organizational members are initially shielded from negative outcomes – has been considered a common underlying factor in many new appointments (Boswell et al., 2009; Fichman et al., 1991). Surprisingly, however, little systematic empirical research has investigated honeymoons in the most critical and complex appointments, namely those of CEOs. Instead, prior research has often relied on anecdotal evidence or assumed the presence or absence of a honeymoon period. Along these lines, some scholars have suggested that “new CEOs may have a short ‘honeymoon’ right after the succession, especially in the first year of their tenure” (Zhang, 2008: 870), due to the initial commitments from the organization (Fichman et al., 1991), which enable them to assimilate to the new job (Shen, 2003). In contrast, other scholars have argued against the logic of a honeymoon for CEOs, citing the liability of newness that affects CEOs and makes them vulnerable, as their knowledge and power are limited early in their tenure (Fredrickson et al., 1988; Hambrick et al., 1991).
Recent press coverage of piracy and digital goods touts the practice of subscription (as opposed to selling) as a “piracy killer.” However, the effectiveness of digital goods subscriptions remains controversial in terms of the profitability for different supply chain members, including content providers and retailers. Specifically, the dearth of existing studies concerning business model choices in distribution channel structures indicates that the literature has yet to provide a comprehensive answer to this question. Therefore, we develop an analytical model to investigate the optimal business model choices for digital goods firms in the presence of digital piracy in a centralized supply chain (CSC) or a decentralized supply chain (DSC), explicitly considering heterogeneous consumer usage rates (both heavy and light). In a finding new to the literature, we show that illegal copies serve as a substitute for a different set of consumers, depending on the business model used by the firms. In particular, there are circumstances under which the firms optimally allow heavy-usage consumers to adopt illegal copies in the subscription model. In contrast, illegal copies can also serve as a substitute for light-usage consumers in the selling-ownership model. Unlike the existing literature, we identify situations in a CSC whereby the selling-ownership model (a) is more profitable and (b) has fewer illegal goods adopted than the subscription model when piracy is present in the market. When analyzing a DSC, we find that the existence of piracy can actually aid in the coordination of the supply chain because digital piracy serves as a shadow competitor, which effectively decreases the double marginalization between the two supply chain partners. More specifically, there are situations where both players prefer the subscription model, and there are other situations where they both prefer the selling-ownership model. This study bridges the literature gap between business models for digital goods and the impact of digital piracy. Our findings provide a possible explanation for the coexistence of various business models within digital goods markets, particularly when piracy is prevalent. Furthermore, we introduce actionable plans for practitioners, such as providing incentives to retailers that may be apathetic to eradicating piracy and enhancing supplementary subscription-based services for better coordination.
This article studies the impact of political polarization on knowledge production. We construct a panel of 271,215 academic researchers linked with longitudinal voter registration records containing political party affiliation. Using this data, we first show that political polarization affects research collaboration. Following the divisive 2016 US presidential election, we find that Democrat and Republican researchers are less likely to collaborate as measured by co-authored publications. We then explore the impact of reduced inter-party collaboration on knowledge production outcomes. We find that Democrat (Republican) researchers who collaborated with Republicans (Democrats) prior to the 2016 election subsequently experience a decrease in publications, a decrease in citation-weighted publications, become less diverse in research topics, and are less likely to publish in new topic areas. Taken together, our results suggest that political polarization disrupts co-authorship collaboration networks, and that the disruption of such networks negatively impacts the quantity and quality of knowledge produced.
Livestream shopping is an alternative online shopping channel with unique features and great future potential. In some sense, it is the early 2000s all over again, when online retail was taking shape. I explore this new channel in multiple projects and hope to present two papers. The first study is an analytical model of pricing in the livestream channel when competing brands hire a celebrity third-party livestreamer, who requires that brands provide a lowest price guarantee (LPG) in the livestream channel. Due to this constraint, brands cannot set a price in other sales channels lower than in the livestream channel. We show that the livestream channel has unique pricing incentives that may enhance or mitigate price competition. Due to these countervailing incentives, brands may sometimes provide a discount in the livestream channel, but may sometimes want to set a price higher than in other channels. In the latter case, the LPG becomes binding and provides a commitment device enabling brands to raise prices in other channels. As a result, brands may increase their profits compared to the case where no LPG is required. Finally, we show that consumers may lose surplus in the presence of an LPG. To conclude, the requirement of an LPG by livestreamers in the livestream channel may counterintuitively benefit the brands and hurt consumers. The second study is an empirical analysis of the effect of virtual livestreamers (VLS), based on generative AI and animation technology, on sales in the livestream channel. These VLS assist primary human livestreamers in presenting products, entertaining audiences, and answering their questions. We use livestream session-level data from over 500 livestreaming channels on a major livestream shopping platform in China to examine the effect of VLS adoption on sales. We find that adopting virtual livestreamers positively impacts session sales, but this positive impact declines over time. We find evidence that the quality of VLS responses to viewers' questions improves over time, yet viewers spend less time watching the sessions, which may reduce sales. This paradox is likely due to consumers getting more detailed and quicker answers to their questions as the VLS improves. We examine some approaches to mitigate this declining effect on sales and find, for instance, that using cute stimuli like a mascot (as opposed to a humanoid) as the VLS interface attenuates the decline. These results suggest that generative AI should be implemented carefully to maintain its effectiveness in driving sales.
Motivated by applications in Reinforcement Learning (RL), this talk focuses on the Stochastic Approximation (SA) method to find fixed points of a contractive operator. First proposed by Robbins and Monro, SA is a popular approach for solving fixed point equations when the information is corrupted by noise. We consider the SA algorithm for operators that are contractive under arbitrary norms (especially the l-infinity norm). We present finite-sample bounds on the mean square error, which are established using a Lyapunov framework based on infimal convolution and the generalized Moreau envelope. We then present our more recent result on concentration of the tail error, even when the iterates are not bounded by a constant. These tail bounds are obtained using exponential supermartingales in conjunction with the Moreau envelope and a novel bootstrapping approach. Our results immediately imply state-of-the-art sample complexity results for a large class of RL algorithms.
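As a concrete illustration of SA for a contractive operator, here is a minimal sketch under our own assumptions; the toy operator F (a Bellman-style affine map that is an l-infinity contraction with modulus gamma) and the step-size schedule are illustrative choices, not the speaker's.

```python
import numpy as np

# Robbins-Monro stochastic approximation for a fixed point x* = F(x*), where
# F(x) = r + gamma * P @ x is an l-infinity contraction (P row-stochastic).
rng = np.random.default_rng(1)
n, gamma = 5, 0.9
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)
r = rng.random(n)

def F(x):
    return r + gamma * P @ x   # ||F(x) - F(y)||_inf <= gamma * ||x - y||_inf

x = np.zeros(n)
for k in range(50_000):
    alpha = 1.0 / (k + 100)                # diminishing step size
    noise = 0.1 * rng.standard_normal(n)   # only noisy evaluations of F
    x = x + alpha * (F(x) + noise - x)     # SA update toward the fixed point

x_star = np.linalg.solve(np.eye(n) - gamma * P, r)  # exact fixed point
print("l-infinity error:", np.max(np.abs(x - x_star)))
```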
Third parties that refer clients to expert service providers help clients navigate market uncertainty by: (i) curating well-tailored matches between clients and experts, and (ii) facilitating post-match trust. We argue that these two roles often conflict with one another because they require referrers to activate network relationships with different experts. While strong ties between referrers and experts promote trust between clients and experts, such ties reduce the likelihood that intermediaries consider referrals to more distal experts that may be better suited to serve a client’s needs. We examine this central and unexplored tension using full population medical claims data for the state of Massachusetts. We find that when primary care physicians (PCPs) refer patients to specialists with whom the PCP has a strong tie, patients demonstrate more confidence in the recommendations of the specialist. However, a strong tie between the PCP and specialist also reduces the expertise match between a patient’s diagnosis and a specialist’s clinical experience. These findings suggest that the two central means by which referrers add value may be at odds with one another because they are maximized by the activation of different network relationships.
With the development of shared mobility (e.g., ride-sourcing systems such as Uber and Lyft), there has been a growing interest in pricing and empty vehicle relocation to maximize system performance. Although customers exhibit some patience while waiting for an available driver, this patience has been neglected in most studies due to the complexities it introduces. In this work, we develop a provably near-optimal dynamic pricing and empty vehicle relocation mechanism for a ride-sourcing system with limited customer patience. We model the ride-sourcing system as a network of double-ended queues. To derive a near-optimal control policy, we first establish a fluid limit for the network in a large market regime and show that the fluid-based optimal solution provides an upper bound on the performance of the original ride-sourcing system under any dynamic policy. Then, we develop a simple dynamic policy for the original problem based on the fluid solution and show that its performance nearly achieves that upper bound. Among our results, we answer two open questions raised in the literature: (i) the performance of our policy converges to the true optimal value exponentially fast in time when the market size is large; (ii) the customer loss of our proposed policy decreases to zero exponentially fast as the market size increases. This is joint work with M. Abdolmaleki, T. Radvand, and Y. Yin of The University of Michigan.
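To make the double-ended-queue abstraction concrete, here is a toy single-node simulation with impatient customers, under our own illustrative rates; the paper's network model, pricing, and relocation controls are far richer.

```python
import numpy as np

# Toy slotted simulation of one double-ended queue: customers and empty cars
# arrive at given rates, waiting customers abandon at rate theta, and matches
# occur whenever both sides are present. All parameters are illustrative.
rng = np.random.default_rng(2)
lam, mu, theta = 1.8, 2.0, 0.5        # customer rate, car rate, patience rate
dt, steps = 0.01, 200_000
customers_q = cars_q = lost = matched = 0

for _ in range(steps):
    if rng.random() < lam * dt:       # customer arrival
        customers_q += 1
    if rng.random() < mu * dt:        # empty-car arrival / relocation
        cars_q += 1
    abandons = rng.binomial(customers_q, theta * dt)  # impatient customers
    customers_q -= abandons
    lost += abandons
    m = min(customers_q, cars_q)      # double-ended matching
    customers_q -= m; cars_q -= m; matched += m

print(f"matched {matched}, lost {lost}, "
      f"loss fraction {lost / (matched + lost):.3f}")
```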
Despite the great volume of conflict management research across the social sciences, most theories and empirical work focus exclusively on those who are directly involved in conflict (e.g., negotiations). That focus means that most of the conflict literature is of limited use for understanding conflict in groups, because every dyadic conflict within a group means there are teammates who are not directly involved but likely to be affected by how the dyad deals with the conflict (Humphrey, Aime, Cushenbery, Hill, & Fairchild, 2017; Shah, Peterson, Jones, & Ferguson, 2021). Past research has largely dealt with this concern by assuming that conflict that originates between two individuals will quickly escalate to the whole group, thereby allowing the researcher to offer theory and data aggregated to the group level. However, recent research confirms that not all members of a given team perceive conflict in the same way (Jehn, Rispens, & Thatcher, 2010); that actual team-level conflict occurs in less than 20% of reported intragroup conflicts; and that in longitudinal studies conflict persists wherever it begins (i.e., at the individual, dyadic, subgroup, or whole-team level) (Shah et al., 2021). We consider these findings and explore the behaviors of different individuals playing different conflict roles within the team, including instigator, respondent, and bystander. We find, for example, that the presence of bystanders is essential to achieving positive outcomes for groups, and that dyadic task conflict within a group predicts positive outcomes for teams.
We assess measures of long-horizon investment outcomes and clarify the trading strategy interpretation of each. We introduce the notion of a “sustainable return,” defined as the rate of periodic withdrawal for consumption consistent with the preservation of real capital. We illustrate this and several other long-horizon measures in a global stock sample, showing that the geometric mean return applies only in a special case, and that it is necessary in many contexts to consider the reinvestment of interim cash flows. Long-horizon measures based on the widely studied arithmetic mean of short-horizon returns have relatively low correlation with other, more applicable, measures.
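One stylized special case makes the link between a sustainable return and the geometric mean concrete: with proportional withdrawals and no interim flows, preserving real capital pins down the withdrawal rate in closed form. The sketch below works through this case under our own assumptions; it is not the paper's general definition.

```python
import numpy as np

# With withdrawal rate w each period and no interim flows, terminal capital is
#   W_T = W_0 * prod_t(1 + r_t) * (1 - w)**T.
# Preserving real capital (W_T = W_0) gives 1 - w = 1 / (1 + g), i.e.
#   w = g / (1 + g),  where g is the geometric mean real return.
real_returns = np.array([0.08, -0.12, 0.05, 0.11, 0.02])  # hypothetical data
T = len(real_returns)

g = np.prod(1 + real_returns) ** (1 / T) - 1   # geometric mean return
a = real_returns.mean()                        # arithmetic mean return
w = g / (1 + g)                                # sustainable withdrawal rate

print(f"arithmetic {a:.4f}, geometric {g:.4f}, sustainable rate {w:.4f}")
```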
Using a new rubric-based approach on tasks from the O*NET database, we quantify the labor market impact potential of LLMs. Using our rubric, both human annotators and GPT-4 assess tasks based on their alignment with LLM capabilities and with the capabilities of complementary software that may be built on top of these models. Our findings reveal that between 61 and 86 percent of workers (for LLMs alone versus LLMs fully integrated with additional software) have at least 10 percent of their tasks exposed to LLMs. Additional software systems have the potential to increase the percentage of the U.S. workforce that has at least 10 percent of their work tasks exposed to the capabilities of LLMs by nearly 25 percent. We find that LLM impact potential is pervasive, that LLMs improve over time, and that complementary investments will be necessary to unlock their full potential. This suggests LLMs are general-purpose technologies (1). As such, LLMs could have considerable economic, societal, and policy implications, and their overall impacts are likely to be significantly amplified by complementary software.
The power distribution system, where most smart grid innovations will happen, is not well modeled, with the topology and line parameters poorly documented, inaccurate, or missing. This makes maintaining voltage stability challenging as renewable generation continues to proliferate. We present three results to address this challenge. The first result is a method to identify the topology and line admittances of a radial network from voltage and current measurements, even when measurements are available only at a subset of the nodes. The second result is a learning-augmented feedback controller that can leverage real-time measurements to stabilize voltages without explicit knowledge of the network model. We provide a convergence guarantee for the proposed method. Finally, we describe the design and deployment of a large-scale EV charging system and an open-source research facility built upon it.
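The identification problem in the first result can be illustrated in the fully observed case: with nodal measurements obeying I = YV for an admittance matrix Y, least squares recovers Y from enough snapshots. The sketch below is our simplification; the talk's method additionally recovers the radial topology when only a subset of nodes is measured.

```python
import numpy as np

# Build a random radial (tree) network's symmetric admittance matrix Y, then
# recover it from noisy snapshots via least squares on I = V Y. Illustrative.
rng = np.random.default_rng(3)
n, T = 4, 200
Y = np.zeros((n, n))
for child in range(1, n):
    parent = int(rng.integers(0, child))       # tree structure => radial
    y = rng.uniform(0.5, 2.0)                  # line admittance
    Y[child, parent] -= y; Y[parent, child] -= y
    Y[child, child] += y; Y[parent, parent] += y

V = rng.standard_normal((T, n))                   # voltage snapshots
I = V @ Y + 0.01 * rng.standard_normal((T, n))    # noisy current readings

Y_hat, *_ = np.linalg.lstsq(V, I, rcond=None)     # solves V @ Y_hat ~= I
print("max admittance error:", np.max(np.abs(Y_hat - Y)))
```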
We leverage comprehensive data on firm-level AI investments to examine how firms' systematic risk changes with the advent of artificial intelligence (AI) during the 2010s. Firms that invest more in AI see increases in their systematic risk, measured by equity market beta. A one-standard-deviation increase in firm-level AI investments translates into a 0.05 increase in market beta. This result is unique to AI: robotics, IT, and general R&D investments do not display similar results during the sample period. We show that the increased market beta of AI-investing firms is not explained by leverage, asynchronous trading, increased correlation with the tech sector, within-industry concentration, or correlated investor flows. Instead, our results are consistent with AI investments creating new growth options for firms: AI-investing firms become more growth-firm-like, and the effect on market betas is twice as large on the upside as on the downside. Overall, our findings provide direct evidence that firms' investments in new technologies such as AI create growth options and affect the composition of the firms' risk profiles.
Despite the emerging prominence of generative artificial intelligence (Gen AI) in the business community, its nascent nature presents uncertainties that impede its broader adoption as a disruptive technology. We address critical research questions that elucidate the impact of Gen AI integration within a firm’s existing services, its interaction with a firm’s proprietary assets, and its financial viability. First, we investigate the potential of Gen AI, customized through domain-expertise-driven prompt engineering, to complement or substitute for existing services, with a specific focus on its impact on user engagement and its contribution to a firm’s profit. Second, we explore the implications of repurposing a firm’s proprietary assets by integrating them with Gen AI through the retrieval-augmented generation (RAG) technique. Our findings reveal that RAG serves as a complementary asset to Gen AI, enhancing user engagement with existing services. Specifically, Gen AI not only acts as a safety net, providing solutions when a firm’s existing assets cannot address users’ requests, but also amplifies user engagement and profit contribution through the integration of proprietary assets via prompt engineering and RAG. Our study contributes to the understanding of Gen AI’s economic and strategic implications in business, offering actionable insights for how practitioners can leverage this technology for effective proprietary asset utilization and enhanced firm performance.
High potential programs offer a swift path up the corporate ladder for those who secure a place on them. However, the evaluation of “potential” occurs under considerable uncertainty, creating fertile ground for gender bias. In this paper, we argue that the process through which evaluators identify high-potentials is particularly biased for reasonably high-performing but not exceptional employees. We unpack a male advantage in this evaluation process by showing how evaluators make gendered inferences about passion, a widely used criterion for selection into high-potential programs. Drawing on the shifting standards model, we posit that passion increases perceptions of diligence among reasonably high-performing male (but not female) employees because observers expect women (but not men) to be diligent at baseline. This disparity, in turn, underlies men’s increased likelihood of attaining placement into high-potential programs. We provide supporting evidence across two studies examining high-potential program placement in a real talent review setting (N=796) and a pre-registered experiment that uses videos featuring trained actors (N=1,366). Our theory and findings extend our understanding of gender bias beyond gendered reactions that penalize women (i.e., backlash) to unveil a novel, additional, and pernicious form of gender bias that stems from gendered inferences from criteria that favor men.
To increase market demand for innovations that are unfamiliar and unconventional to consumers, firms frequently present the providers behind these innovative products to consumers. An important yet unexplored question is when and how presenting a team of product providers affects consumer intentions to adopt the team’s products. In this paper, we investigated the diversity of the product provider teams that are presented to consumers. Utilizing crowdfunding platform crawler data (N = 2338) and results from six experiments (five pre-registered; total N = 1781), we found that consumers perceived a team of high (vs. low) provider diversity and its products as more creative, leading to more favorable purchase intentions toward the team’s innovative products. Creativity perceptions explained innovation adoption above and beyond other firm perceptions such as warmth, competence, and morality. Furthermore, the effect of provider diversity on innovation adoption diminished for consumers less attracted by creativity (i.e., low openness to experience) and, more importantly, reversed for conventional (vs. innovative) products. The current research documents the benefits, limitations, and caveats of diversity representations in marketing messages and adds to the extant work on innovation adoption, diversity, and creativity perceptions.
When people want to conduct a transaction, but doing so would be morally disreputable, they can obfuscate the fact that they are engaging in an exchange while still arranging for a set of transfers that are effectively equivalent to an exchange. Obfuscation through structures such as gift-giving and brokerage is pervasive across a wide range of disreputable exchanges, such as bribery and sex work. In this article, we develop a theoretical account that sheds light on when actors are more versus less likely to obfuscate. Specifically, we report a series of experiments addressing the effect of trust on the decision to engage in obfuscated disreputable exchange. We find that actors obfuscate more often with exchange partners high in loyalty-based trustworthiness, with expected reciprocity and moral discomfort mediating this effect. However, the effect is highly contingent on the type of trust; trust facilitates obfuscation when it is loyalty-based, but this effect flips when trust is ethics-based. Our findings not only offer insights into the important role of relational context in shaping moral understandings and choices about disreputable exchange, but they also contribute to scholarship on trust by demonstrating that distinct forms of trust can have diametrically opposed effects.
Consumers turn to pets when distressed, often equating the role of their pet to that of a close friend or family member. However, we know little about whether pets mitigate psychological pain more effectively than humans. A social media field experiment (n = 174,624) reveals that consumers are more likely to turn to pets than to other humans in times of distress. Importantly, two subsequent online consumer panel experiments (n = 1,182) show that thinking of beloved animals decreases psychological pain more than thinking of beloved humans, an effect that is mediated by perceived unconditional love.
We examine the potential of ChatGPT and other large language models (LLMs) to predict stock market returns using news. Categorizing headlines with ChatGPT as positive, negative, or neutral for companies’ stock prices, we document a significant correlation between ChatGPT scores and subsequent daily stock returns, outperforming traditional methods. Basic models like GPT-1 and BERT cannot accurately forecast returns, indicating return forecasting is an emerging capacity of more complex LLMs, which deliver higher Sharpe ratios. We explain these puzzling return predictability patterns by testing implications from economic theories involving information diffusion frictions, limits to arbitrage, and investor sophistication. Predictability strengthens among smaller stocks and following negative news, consistent with these theories. Only advanced LLMs maintain accuracy when interpreting complex news and press releases. Finally, we present an interpretability technique to evaluate LLMs’ reasoning. Overall, incorporating advanced language models into investment decisions can improve prediction accuracy and trading performance.
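The headline-scoring step can be sketched as follows. The prompt wording, model name, and score mapping here are our illustrative assumptions, not the authors' exact protocol; only the overall recipe (classify each headline with an LLM, then relate scores to subsequent returns) follows the abstract.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Is this headline good news, bad news, or neutral for the stock price of "
    "{company}? Answer with one word: POSITIVE, NEGATIVE, or NEUTRAL.\n"
    "Headline: {headline}"
)
SCORE = {"POSITIVE": 1, "NEGATIVE": -1, "NEUTRAL": 0}

def headline_score(company: str, headline: str) -> int:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        temperature=0,
        messages=[{"role": "user",
                   "content": PROMPT.format(company=company,
                                            headline=headline)}],
    )
    label = resp.choices[0].message.content.strip().upper()
    return SCORE.get(label, 0)  # treat unexpected output as neutral

# Firm-day scores are then averaged and related to subsequent daily returns,
# e.g., by going long high-score stocks and short low-score stocks.
```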
Technological advancements in consumer media have notably expanded the influence of social movements that originated on social media, such as #NeverAgain, #BLM, and #MeToo. These movements often arise from significant sociopolitical events (e.g., the Parkland school shooting, George Floyd's murder, Harvey Weinstein's sexual harassment scandal) and engage with major industries and institutions (e.g., the NRA, law enforcement, the entertainment industry). They provoke widespread media coverage, emotional responses, and polarizing public debates. Despite their goal of societal reform, public opinion on the legitimacy, definition, and expectations of these reforms remains divided, leading to disparate demands for change from the implicated sectors. This raises critical questions about the real impact of social media-driven movements on societal norms and behaviors, particularly in relation to the industries at the heart of these events. Our research focuses on the #MeToo movement's impact within the US film industry, analyzing 1,326 movies from 2010 to 2020. We investigate how #MeToo has altered consumer perceptions of gender roles and norms, as reflected in the box office performance of films with stereotypical versus counter-stereotypical gender portrayals, before and after the movement's rise. Drawing on social role theory (Eagly, 1987; Eagly, Wood, and Diekman, 2000) and cultivation theory (Potter, 2014), we categorize these portrayals based on descriptive and injunctive norms (Cialdini and Trost, 1998). Our findings indicate a significant shift in consumer preferences regarding movies that contain female sexual objectification and traditional or non-traditional courtship and relational gender norms. This shift suggests a broader change in societal attitudes toward gender roles, highlighting the transformative power of social movements like #MeToo. The implications of our study are twofold: it offers insights for academics and policymakers interested in the cultural and consumer impact of social movements and provides a roadmap for the $600 billion entertainment industry on responding to societal calls for reform.
This talk presents a novel generative probabilistic forecasting approach derived from the Wiener-Kallianpur innovation representation of nonparametric time series. Under the paradigm of generative artificial intelligence, we propose a weak innovation autoencoder architecture that transforms nonparametric multivariate random processes into canonical innovation sequences, from which future time series samples are generated according to the conditional probability distribution given past samples. A novel deep-learning algorithm is proposed that constrains the latent process to be an independent and identically distributed sequence with matching input-output probability distributions of the autoencoder. Three applications involving highly dynamic and volatile time series in electricity markets are considered: (i) locational marginal price forecasting for merchant storage participants, (ii) price spread forecasting for virtual bidding in interchange markets, and (iii) area control error forecasting for frequency regulation. We compare the proposed innovation-based forecasting with classic and leading machine-learning techniques.
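A heavily simplified sketch of the autoencoder idea follows; it assumes windowed inputs and substitutes a crude moment-matching penalty for the talk's training criterion that enforces an i.i.d. latent innovation sequence. Layer sizes and loss weights are our assumptions.

```python
import torch
import torch.nn as nn

WINDOW = 24  # past samples per training window (illustrative)

encoder = nn.Sequential(nn.Linear(WINDOW, 64), nn.ReLU(), nn.Linear(64, WINDOW))
decoder = nn.Sequential(nn.Linear(WINDOW, 64), nn.ReLU(), nn.Linear(64, WINDOW))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

def train_step(x: torch.Tensor) -> float:
    """x: (batch, WINDOW) windows of the observed time series."""
    nu = encoder(x)                     # candidate innovation sequence
    x_hat = decoder(nu)                 # reconstruction from innovations
    recon = (x - x_hat).pow(2).mean()
    # Crude stand-in for the i.i.d. constraint on the latent process: match
    # the first two moments of a standard normal and penalize within-window
    # lag-1 autocorrelation. The talk instead matches input-output
    # distributions of the autoencoder.
    moments = nu.mean().pow(2) + (nu.var() - 1).pow(2)
    autocorr = (nu[:, :-1] * nu[:, 1:]).mean().pow(2)
    loss = recon + moments + autocorr
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Forecasting: encode the observed history into innovations, append fresh
# i.i.d. draws for future periods, and decode to generate sample paths.
```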
We explore the role of market feedback in navigating emerging corporate policies on AI/green technologies. By assembling and analyzing a comprehensive sample of corporate disclosures in which managers discuss their forward-looking investment plans on AI/green technologies, we find that firms adjust such investments upward (downward) in response to favorable (unfavorable) market reactions to the corresponding disclosures. This association is more likely due to managerial learning from the market than other alternative explanations, as it gets stronger when market reactions are unfavorable, when outside market participants are more knowledgeable about emerging technologies, and when managers have stronger incentives to promote investments in such fields. Such learning is absent for non-emerging-technology investment plans where managers have domain knowledge. Further, we find that following the market feedback on emerging corporate policies is rewarded by superior long-run operating and stock performance, especially when the feedback is unfavorable. We also find different learning patterns for AI and green technologies. Overall, our paper illustrates the usefulness of tapping the wisdom of the crowd when venturing into uncharted areas and sheds new light on what type of information managers learn from the stock market in different contexts of corporate policies.
A researcher enters your world and starts asking questions you would prefer not to answer. What do you do? Mostly, when an interloper appears, communities find ways to resist; they obstruct investigations and hide evidence, shelve complaints, silence dissent, and even forget about their own past. Such resistance—that is, the mechanisms deployed by social groups to maintain the status quo—is the bane of field researchers, for it often seems to slam the door in our face. How can we learn about a community when it resists so very strongly? The answer is that, sometimes, the resistance is itself the key. By closing ranks and creating obstacles, community members can disclose more than they mean to. This talk will discuss how such resistance manifests itself and what it reveals about a given field and a particular researcher. Insights will be drawn from resistance in diverse field settings (including ones involving Nazi scientists and TSA officers). I will argue that field resistance contains far more analytical possibilities than we imagine. Every form of resistance is retrospectively telling: it helps us see what matters most to participants and how we are uniquely positioned to uncover these dynamics. Overall, resistance needs to be understood as a routine product (not by-product) of the field. That means that resistance is not only indicative of something else happening; it can also provide rich data for our inquiries.
In this paper, we explore how to uncover an adverse issue that may occur in organizations with the capability to evade detection. To that end, we formalize the problem of designing efficient auditing and remedial strategies as a dynamic mechanism design model. In this set-up, a principal seeks to uncover and remedy an issue that occurs to an agent at a random point in time, and that harms the principal if not addressed promptly. Only the agent observes the issue’s occurrence, but the principal may uncover it by auditing the agent at a cost. The agent, however, can exert effort to reduce the audit’s effectiveness in discovering the issue. We first establish that this set-up reduces to the optimal stochastic control of a piecewise deterministic Markov process. The analysis of this process reveals that the principal should implement a dynamic cyclic auditing and remedial cost-sharing mechanism, which we characterize in closed form. Importantly, we find that the principal should randomly audit the agent unless the agent’s evasion capacity is not very effective and the agent cannot afford to self-correct the issue. In this latter case, the principal should follow predetermined audit schedules.
Millions of employees are victims of violent crimes at work every year, particularly those in the retail industry, who are frequent targets of robbery. Why are some employees injured while others escape from these incidents physically unharmed? Departing from prevailing models of workplace violence, which focus on the static characteristics of perpetrators, victims, and work environments, we examine why and when injuries during robberies occur. Our multimethod investigation of convenience-store robberies sought evidence from detailed coding of surveillance videos and matched archival data, preregistered experiments with formerly incarcerated individuals and customer service personnel, and a 3-y longitudinal intervention study in the field. While standard retail industry safety protocols encourage employees to be out from behind the cash register area to be safer, we find that robbers are significantly more likely to injure or kill employees who are located outside that area (versus behind the cash register area) when a robbery begins. A 3-y field study demonstrates that changing the safety training protocol—through providing employees with a behavioral script to follow should a robbery begin when they are on the sales floor—was associated with a significantly lower rate of injury during these robberies. Our research establishes the importance of understanding the interactive dynamics of workplace violence, crime, and conflict.
Given the hazard posed by defective products, we know surprisingly little about how firms determine when to recall products once they are known to have defects. Drawing on two perspectives, threat rigidity and stealing thunder, this study examines the effects of recall severity and scale on the time it takes firms to announce a product recall after becoming aware of the defect, and the contingencies surrounding them. Our study contributes insights to the literature on crisis management by clarifying the conditions under which different perspectives are more likely to dominate in predicting the timing of firm response to crisis. We contribute to practice by illuminating key drivers of the timing of product recall decisions, a type of firm action with significant implications for public health and policy.
Using new data on mutual funds’ equity lending positions, we find that short sellers borrow shares from a small set of repeated lenders and that the composition of lenders differs from stock to stock. We argue that this fragmented, persistent lender base is driven by investors’ inelastic lending supply, which contributes to limits to arbitrage. When existing lenders sell their shares, short sellers struggle to find replacement lenders and get squeezed, even when conventional measures suggest lending supply is slack. Consequently, lending fees spike, and stocks become more likely to be overpriced. Ex ante, risks implied by the lender base are priced into equity prices.
This paper examines whether and how soft information acquisition affects commercial lending. Using proprietary data from a global bank, we measure soft information acquisition using loan officers’ detailed interaction records with borrowers during due diligence. We find that interactions are mostly driven by borrowers’ business prospects and affect credit decisions in at least two ways. First, we show evidence consistent with the acquired soft information being incorporated into loan prices, improving pricing efficiency. Second, our results also indicate that loan officers rely less on the bank’s internal risk ratings when approving credit to borrowers with whom they interact, especially when ratings are less favorable. We further document that soft information is more useful when loan officers cannot rely on alternative information sources. Finally, consistent with loan officers being better able to collect soft information in synchronous interactions with borrowers, we find larger beneficial effects for these interactions than for asynchronous ones. Taken together, our study provides novel evidence on whether and how soft information acquisition helps lenders mitigate information asymmetry in loan pricing and credit approval decisions.
This study explores the role of institutional and retail (informed and uninformed) attention in the context of the sentiment of media news releases. We find that retail attention destabilizes the market when retail investors appear to struggle to digest complex business information, particularly when these retail investors are uninformed. The attention of informed retail investors is stabilizing, as is institutional attention. We also find that many news events receive no attention, even within our sample of S&P 500 firms, and with inattention comes drift if the news is of positive sentiment. With negative or mixed sentiment news and investor inattention, there is little evidence of reversals or drift. We also find that when news events do receive attention, consistent sentiment across contemporaneous news stories is important for anticipating the price reaction.
Cultural norms about gender play a critical role in scholarly accounts of gender inequality. In addition to descriptive beliefs about the ways men and women typically are, normative expectations about the ways men and women ostensibly should and should not be are critically important for understanding gendered patterns of decision-making and behavior in both organizations and in families. But knowledge about the specific content of prescriptive gender stereotypes in the contemporary US—especially on dimensions that relate to STEM segregation, entrepreneurship, and family divisions of labor—is surprisingly lacking. In particular, it remains unknown: 1) how prescriptive stereotypes map onto key gender-typed roles across work and family domains and 2) the degree to which these stereotypes are consensual (i.e. widely shared and understood) across key demographic groups. In this talk, I present findings from an original, population representative survey experiment that measures the content of prescriptive and descriptive stereotypes about men and women along 100 characteristics, over half of which have not been evaluated in previous research. A second study replicates and extends initial findings. Results reveal that prescriptive stereotypes pertaining to cultural ideals of parenting, homemaking, and care-intensive work are only modestly gendered; women’s advantage in such domains is smaller than one would expect on the basis of previous studies. Descriptive stereotypes on numerous agentic, dominance, and competence characteristics are also modestly gendered or not significant, suggesting an overall weakening of gender essentialist beliefs. However, we find large prescriptive male advantages on characteristics pertaining to the ideal worker and breadwinner, the ideal STEM worker, and the ideal entrepreneur. Moreover, beliefs prescribing men's status advantages (i.e. power and deference) are also quite large in magnitude. Findings indicate that normative gender expectations are highly uneven and contradictory, and are not necessarily universally shared across societal groups. More broadly, by mapping the content of prescriptive stereotypes along novel character dimensions, this study offers a fresh basis upon which to refine and specify our theoretical accounts of the forces driving gender inequalities and changes therein.
Big data allows active asset managers to find new trading signals but doing so requires new skills. Thus, it can reduce the ability of asset managers lacking these skills to produce superior returns. Consistent with this possibility, we find that the release of satellite imagery data tracking firms’ parking lots reduces active mutual funds’ stock picking abilities in stocks covered by this data. This decline is stronger for funds that are more likely to rely on traditional sources of expertise (e.g., specialized industry knowledge) to generate their signals, leading them to divest from covered stocks. These results suggest that big data has the potential to displace high-skill workers in finance.
This paper uses ChatGPT, a large language model, to extract managerial expectations of corporate policies from disclosures. We create a firm-level ChatGPT investment score, based on conference call texts, that measures managers’ anticipated changes in capital expenditures. We validate the ChatGPT investment score with interpretable textual content and its strong correlation with CFO survey responses. The investment score predicts future capital expenditure for up to nine quarters, controlling for Tobin’s q, other predictors, and fixed effects, implying the investment score provides incremental information about firms’ future investment opportunities. The investment score also separately forecasts future total, intangible, and R&D investments. High-investment-score firms experience significant future abnormal returns adjusted for factors, including the investment factor. We demonstrate ChatGPT’s applicability to measure other policies, such as dividends and employment. ChatGPT revolutionizes our comprehension of corporate policies, enabling the construction of managerial expectations cost-effectively for a large sample of firms over an extended period.
The United States has seen a precipitous rise in drug overdose deaths in the past two decades, fueled by physicians’ high-risk prescribing. To combat the opioid crisis, states introduced regulations that limit the initial supply of opioids prescribed. What drives physicians’ variation in response to this regulation? I turn to social networks to investigate this puzzle. Drawing from a patient-sharing network consisting of 269,542 physicians and 10.6 million initial opioid prescriptions, I find striking distinctions in the social networks supporting the cessation and persistence of high-risk prescribing. Despite a lack of formal legal sanctioning in this context, physicians centrally embedded in the network (i.e., those with many connections) were most responsive to the regulation and curtailed their prescribing. Importantly, this effect was only realized when the focal physician stood out among their peers as an over-prescriber. At the same time, high-risk prescribing continued to persist in networks consisting solely of high-risk prescribers and among isolated physicians. The results are consistent with peer sanctioning concerns driving the cessation of deviance, and this concern is particularly salient for central actors. The findings challenge the prevailing notion that central actors have a higher capacity to deviate from norms and contribute to a deeper understanding of the role social networks play in the abandonment of contentious practices. The results inform network-based interventions to combat the prescription drug crisis.
With algorithmic targeting becoming increasingly common, the issue of fairness is quickly coming to the fore. For example, financial firms have been accused of targeting minorities with pricey loans. Likewise, online job advertisements have disproportionately been targeted to male candidates, disadvantaging female candidates. In order to examine such potentially unfair practices and their possible remedies, we develop an economic model in which a firm sells two products and targets each potential consumer with one of them. Targeting is personalized; that is, not all consumers are recommended the same product. Each consumer decides whether to accept the firm's recommendation, to reject it and look for alternatives, or to reject it altogether and not do business with the firm at all. We explain why, if the firm has different levels of information on various demographic groups, it might have an incentive to target them in a differential manner, resulting in uneven welfare across consumer groups. Such unequal treatment is often viewed as discrimination in policy parlance. We also explain why enforcing equal treatment might not yield the intended effect and in what circumstances it could even become counterproductive. Our results have important implications for the ethical deployment of algorithmic targeting.
Textbook theory assumes that firm managers maximize the net present value of future cash flows. But when you ask them, the people running large public corporations say that they are maximizing something else entirely: earnings per share (EPS). Perhaps this is a mistake. No matter. We take managers at their word and show that EPS maximization provides a single unified explanation for a wide range of corporate policies such as leverage, share issuance and repurchases, M&A payment method, cash accumulation, and capital budgeting.
What are the effects of recent advances in Generative AI on the value of firms? Our study offers a quantitative answer to this question for U.S. publicly traded companies based on the exposures of their workforce to Generative AI. Our novel firm-level measure of workforce exposure to Generative AI is validated by data from earnings calls, and has intuitive relationships with firm and industry-level characteristics. Using Artificial Minus Human portfolios that are long firms with higher exposures and short firms with lower exposures, we show that higher exposure firms earned excess returns that are 0.4% higher on a daily basis than returns of firms with lower exposures following the release of ChatGPT. Moreover, we show that hiring activity by more exposed firms decreases after the ChatGPT release and shifts away from more exposed occupations, which, in turn, see relative wage declines at the national level, consistent with the substantive disruptive potential of Generative AI technologies.
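A minimal sketch of the long-short construction, under our own simplifying assumptions (tercile sorts on the exposure measure, equal weights, daily rebalancing; the paper's exact weighting scheme may differ):

```python
import pandas as pd

def amh_return(day: pd.DataFrame) -> float:
    """Artificial Minus Human daily return; `day` holds all firms on one date
    with columns ['exposure', 'ret'] (hypothetical schema)."""
    hi = day["exposure"] >= day["exposure"].quantile(2 / 3)  # long leg
    lo = day["exposure"] <= day["exposure"].quantile(1 / 3)  # short leg
    return day.loc[hi, "ret"].mean() - day.loc[lo, "ret"].mean()

# Given a firm-day panel with columns ['date', 'exposure', 'ret']:
# amh_series = panel.groupby("date").apply(amh_return)
```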
We argue that digital collaboration technologies (DCTs) reduce the supervisory burden on managers and should therefore lower managerial intensity. To test our argument, we apply a difference-in-differences design to a novel dataset built from firms’ job listings (Lightcast) and employees’ social profiles (Revelio), which comprises 3,017 US public firms that we track over the period 2010-2019. We find that, over the observation window, DCT adopters indeed show a 0.8% reduction in managerial intensity on average. Consistent with our argument that DCTs reduce supervisory burden, adopters also show a 5-7% increase in decentralization-related skills in their job postings in the years following adoption. We also find that DCT adopters scaled up with fewer and less skilled managers.
We theorize that investors will respond positively to workforce gender diversity in major U.S. firms because they may believe that diversity has large upsides for such firms (e.g., reduced legal risks and increased creativity), whereas diversity’s potential downsides (e.g., increased conflict) can be mitigated if effectively managed. To test our predictions, we examine how investors respond to news about workforce gender diversity numbers, using financial event studies and randomized experiments. We find that U.S. technology firms and U.S. financial firms experience more positive stock price reactions when it is revealed that they have relatively higher (vs. lower) workforce gender diversity numbers. These stock price reactions are both economically and statistically significant. For example, we estimate that if a technology firm had revealed gender diversity numbers that were one standard deviation higher, its market valuation would have increased by $1.11 billion. Furthermore, we find parallel investor reactions in randomized experiments with investors, and these reactions seem to be mediated by investors’ beliefs about the potential upsides of diversity for the firm (e.g., reduced legal risks; creativity) but not by investors’ beliefs about the potential downsides of diversity for the firm (e.g., conflict). Our results point towards a new type of business case for diversity, driven by investors: if firms had more workforce gender diversity, then investors would likely “reward” them with substantially higher valuations.
High-dimensional weakly coupled Markov decision processes (WDPs) arise in dynamic decision making and reinforcement learning, decomposing into smaller MDPs when linking constraints are relaxed. The Lagrangian relaxation of WDPs (LAG) exploits this property to compute policies and (optimistic) bounds efficiently; however, dualizing the linking constraints averages away combinatorial information. We introduce feasibility network relaxations (FNRs), a new class of linear programming relaxations that exactly represents the linking constraints. We develop a procedure to obtain the unique minimally sized relaxation, which we refer to as the self-adapting FNR, as its size automatically adjusts to the structure of the linking constraints. Our analysis informs model selection: (i) the self-adapting FNR provides (weakly) stronger bounds than LAG, is polynomially sized when linking constraints admit a tractable network representation, and can even be smaller than LAG; and (ii) the self-adapting FNR provides bounds and policies that match the approximate linear programming (ALP) approach but is substantially smaller in size than the ALP formulation and a recent alternative Lagrangian that is equivalent to ALP. We perform numerical experiments on constrained dynamic assortment and preemptive maintenance applications. Our results show that the self-adapting FNR significantly improves upon LAG in terms of policy performance and/or bounds, while being an order of magnitude faster than the alternative Lagrangian and ALP approaches, which are unsolvable in several instances.
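For reference, one standard way to write the Lagrangian relaxation (LAG) against which the feasibility network relaxations are compared is sketched below, in our notation rather than necessarily the paper's: dualizing linking constraints of the form sum_i A_i(s_i, a_i) <= b with multipliers lambda >= 0 decomposes the problem into component-wise Bellman equations.

```latex
% Lagrangian relaxation of a weakly coupled MDP with components i = 1,...,N,
% linking constraints \sum_i A_i(s_i,a_i) \le b, and discount factor \gamma.
% Notation is ours, not necessarily the paper's.
\begin{aligned}
  V^{\mathrm{LAG}}(\lambda)
    &= \frac{\lambda^{\top} b}{1-\gamma}
       + \sum_{i=1}^{N} V_i^{\lambda}(s_i), \qquad \lambda \ge 0,\\
  V_i^{\lambda}(s_i)
    &= \max_{a_i}\Big\{ r_i(s_i,a_i) - \lambda^{\top} A_i(s_i,a_i)
       + \gamma\, \mathbb{E}\big[ V_i^{\lambda}(s_i') \big] \Big\},\\
  \mathrm{LAG}
    &= \min_{\lambda \ge 0} V^{\mathrm{LAG}}(\lambda) \;\ge\; V^{*}(s).
\end{aligned}
```

In this form, the linking structure enters the bound only through the averaged penalty term involving lambda and b, which illustrates the combinatorial information that the FNRs represent exactly instead.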
We propose a model in which arbitrageurs act strategically in markets with entry costs. In a repeated game, arbitrageurs choose to specialize in some markets, which leads to the highest combined profits. We present evidence consistent with our theory from the options market, in which suboptimally unexercised options create arbitrage opportunities for intermediaries. Using transaction-level data, we identify the corresponding arbitrage trades. Consistent with the model, only 57% of these opportunities attract entry by arbitrageurs. Of those that do, 50% attract only one arbitrageur. Finally, our paper details how market participants circumvent a regulation devised to curtail this arbitrage strategy.
Much of the M&A literature has focused on the negative consequences for target employees, who may anticipate layoffs and cuts in pay, benefits, and training. While acquiring valuable human capital may be a central acquisition objective, post-acquisition turnover is often high, and the best people may be at the greatest risk of exiting. One might therefore expect post-acquisition investments in human capital to be rare or unwise. We draw on a unique sample of Belgian M&A targets that includes social balance sheet accounts before and after mergers (formal and informal training, workforce composition, turnover, compensation, etc.). Findings indicate that post-acquisition investment in human capital (compensation and training) increases when the deal rationale involves revenue growth or when buyers are in unrelated industries. We find increased compensation but reduced training in cross-border deals and for targets with highly educated workforces. Notably, in our sample, the vast majority of buyers pursue growth-oriented objectives, so human capital investment occurred most of the time. We also offer a more nuanced discussion breaking out different kinds of revenue growth strategies.
The rise of political polarization has reshaped the landscape of the U.S. real asset market. Mergers between politically divergent firms became less common over time, and those between firms from politically divergent states have virtually disappeared in recent years. We analyze deal-level data to account for confounding factors and explore the mechanisms underlying these dramatic trends. We find that the likelihood of merger announcement or completion, announcement returns, and post-merger operating performance are all lower for politically divergent firms. The effects are stronger when political polarization is greater, when firms plan to integrate operations, and during economic expansions. These findings hold after controlling for geographical distance, product similarity, and existing measures of corporate culture.
For digital platform orchestrators, using algorithms to intermediate between complementors and users is a key way to generate revenue. Yet they often allow third-party human intermediaries, e.g., influencers, to co-exist and perform similar tasks on their platforms. Why do they do so? How do human intermediaries benefit platforms? Using Instagram data and sentiment analysis, we demonstrate that human intermediaries offer complementors a different value-add than the algorithm does: while the algorithm enables wider user-reach breadth, influencers provide greater user-reach depth, which generates positive sentiment towards complementors. The latter do more than trigger indirect network effects benefitting platforms. We show that the algorithm directly benefits from influencers: it learns from them over time and closes the gap in positive user sentiment generated towards complementors by finding appropriate new users. Our findings join recent research in highlighting how platforms exploit participants for their own benefit and how value-capture concerns drive platforms' strategies.
We find widespread evidence of firms appearing to avoid paying overtime wages by exploiting a federal law that allows them to do so for employees termed “managers” and paid a salary above a pre-defined dollar threshold. We show that listings for salaried positions with managerial titles exhibit an almost five-fold increase around the federal regulatory threshold, including listings for managerial positions such as “Directors of First Impression,” whose jobs are otherwise equivalent to those of non-managerial employees (in this case, a front desk assistant). Overtime avoidance is more pronounced when firms have stronger bargaining power and employees have weaker rights. Moreover, it is more pronounced for firms with financial constraints and when there are weaker labor outside options in the region. We find stronger results for occupations in low-wage industries that are penalized more often for overtime violations. Our results suggest broad usage of overtime avoidance using job titles across locations and over time, persisting through the present day. Moreover, the wages avoided are substantial: we estimate that firms avoid roughly 13.5% in overtime expenses for each strategic “manager” hired during our sample period.
We propose a theory of corporate social responsibility (CSR) by linking it to a firm’s product market. In our model, the firm's product exhibits network effects whereby its value increases with the number of consumers who purchase it. Moreover, with advancements in technology and big data, the firm can adopt personalized pricing for each consumer. We show that such a firm could use CSR as a commitment device for low product prices, which helps overcome the coordination problem among consumers and increases firm profits, thus supporting the notion of “doing well by doing good.”
We develop and test a theory to address when and why political elites may build cross-partisan ties in times of polarization. Key to the theory are two points: (1) insofar as cross-partisan ties provide information and reputational benefits for individual politicians that are not attainable from co-partisan ties, politicians are more likely to cooperate with opposite-partisan members who share common “foci” (e.g., legislative priorities) than with co-partisan members who do the same; and (2) such tendencies are more likely when polarization is higher (and thus when information and coordination from opposite-partisans are overall harder to achieve) and when politicians are shielded from the consequences of co-partisan disapproval. We test these ideas by examining the US Senate from 1973 to 2017. Results show that when two senators’ speeches express similar legislative priorities, their tendency to co-sponsor a bill is stronger when the two are in different parties than when they are in the same party, but only in times of polarization and among re-elected senators. The upshot is an endogenous “brake” to structural divisions, where cross-cutting ties emerge due to—and not despite—polarization.
Problem definition: The challenge of equitably allocating a divisible resource and its associated costs or savings among consumers with heterogeneous incomes and private levels of resource utility arises in many situations. The challenge lies in the dual dimensions of consumer characteristics and the coupled allocation problems. We devise and analyze various resource allocation schemes, using utility-led community solar as a focal application in this paper. Methodology/results: Our model incorporates consumer heterogeneity in both income levels and utility for the resource. We formulate the problem of allocating the resource and its associated cost or savings, with the objective of maximizing the aggregate welfare. We present lower and upper bounds for this problem, and study various alternative allocation schemes. The most sophisticated scheme offers consumers income-dependent menus (IDM) of quantity and cost options. We uncover various structural properties of these menus and analytically show their advantages over simpler alternatives. Using numerical studies calibrated by real-world community solar program data, we find that the IDM approach has remarkable performance, nearly achieving the first-best. Moreover, by endogenizing the size of the community solar program, we find that our IDM approach also promotes larger solar projects, enhancing both environmental and social benefits of solar energy. Managerial implications: In the realm of resource allocation where both income and resource utility levels are diverse, significant welfare gains can be realized by judiciously leveraging the dual dimensions of heterogeneity. Implementing our proposed IDM approach in the context of community solar allows environmentally conscious consumers to opt for greater capacity and contribute more, contingent on individual financial capacities. In simpler terms: “care more, get more, and give more—when your wallet agrees.”
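To make the allocation problem above concrete, here is a minimal sketch, under assumed per-unit utilities and income-based caps, of welfare-maximizing allocation of a divisible resource. It is a toy linear program, not the paper's income-dependent-menu (IDM) mechanism.

```python
# Toy welfare maximization: allocate a divisible capacity across consumers
# to maximize aggregate utility, capping each consumer's share by an
# assumed income-based affordability limit.
from scipy.optimize import linprog

utility = [4.0, 3.0, 2.5, 1.0]   # per-unit resource utility (assumed)
afford = [2.0, 5.0, 3.0, 6.0]    # income-dependent caps (assumed)
capacity = 8.0                   # total divisible resource

# linprog minimizes, so negate utilities to maximize welfare.
res = linprog(
    c=[-u for u in utility],
    A_ub=[[1.0] * len(utility)], b_ub=[capacity],
    bounds=list(zip([0.0] * len(utility), afford)),
)
print("allocation:", res.x, "aggregate welfare:", -res.fun)
```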
Prosocial behaviors play a critical role in human groups, organizations, and societies, especially in response to harms like tragedies or crises. However, it remains controversial whether tragedies enervate or energize prosocial behaviors towards unrelated targets (i.e., targets unrelated to the original tragedies). We present large-scale quasi-experimental field evidence from the United States that fatal police shootings demotivate prosocial behaviors by volunteer crisis counselors towards targets experiencing personal crises (e.g., suicidal ideation) unrelated to the original shootings. Specifically, each fatal police shooting reduces potentially lifesaving prosocial behaviors by about 1%. This enervating impact of fatal police shootings is disproportionately driven by fatal police shootings of Black victims, which in turn is driven by shootings of Black victims in areas with high levels of racism. In contrast, fatal police shootings of non-Black victims do not reduce prosocial behaviors. Our results suggest prosocial behaviors are not demotivated by fatal police shootings in general, but specifically by fatal police shootings that can be construed as manifestations of racism – or racially motivated killings. Our findings illustrate how fatal police shootings of Black victims not only harm direct victims, their families, and their communities, but may also create spillover effects that put in motion a much broader set of harms – by widely sapping the prosocial motivation of critical volunteers across the country, and (in turn) ultimately harming many vulnerable people who suffer from unrelated personal crises. Although fatal police shootings are widely viewed as tragedies, they may be even more harmful than previously believed.
Disputes over transactions on two-sided platforms are common and usually arbitrated through platforms’ customer service departments or third-party service providers. This paper studies crowd-judging, a novel crowdsourcing mechanism whereby users (buyers and sellers) volunteer as jurors to decide disputes arising from the platform. Using a rich data set from the dispute resolution center at Taobao, a leading Chinese e-commerce platform, we aim to understand this innovation and propose and analyze potential operational improvements with a focus on in-group bias (buyer jurors favor the buyer, likewise for sellers). Platform users, especially sellers, share the perception that in-group bias is prevalent and systematically sways case outcomes as the majority of users on such platforms are buyers, undermining the legitimacy of crowd-judging. Our empirical findings suggest that such concern is not completely unfounded: on average, a seller juror is approximately 10% likelier (than a buyer juror) to vote for a seller. Such bias is aggravated among cases that are decided by a thin margin and when jurors perceive that their in-group’s interests are threatened. However, the bias diminishes as jurors gain experience: a user’s bias reduces by nearly 95% as experience grows from zero to the sample median level. Incorporating these findings and juror participation dynamics in a simulation study, the paper delivers three managerial insights. First, under the existing voting policy, in-group bias influences the outcomes of no more than 2% of cases. Second, simply increasing crowd size through either a larger case panel or aggressively recruiting new jurors may not be efficient in reducing the adverse effect of in-group bias. Finally, policies that allocate cases dynamically could simultaneously mitigate the impact of in-group bias and nurture a more sustainable juror pool.
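A minimal Monte Carlo sketch of the voting mechanics discussed above: majority panels with an assumed in-group tilt in vote probabilities. All parameters are illustrative, not the paper's calibrated values.

```python
# Toy simulation of majority voting with in-group bias on a buyer-heavy panel.
import numpy as np

rng = np.random.default_rng(0)
n_cases, panel_size = 100_000, 15
bias = 0.10                  # assumed in-group tilt in vote probability
share_buyer_jurors = 0.8     # buyer-heavy juror pool (assumed)

merit_seller = rng.random(n_cases) < 0.5            # which side "should" win
is_buyer = rng.random((n_cases, panel_size)) < share_buyer_jurors

# Base vote probability tracks case merit; jurors tilt toward their own side.
p_vote_seller = 0.3 + 0.4 * merit_seller[:, None] + np.where(is_buyer, -bias, bias)
votes_seller = rng.random((n_cases, panel_size)) < p_vote_seller.clip(0, 1)
seller_wins = votes_seller.sum(axis=1) > panel_size // 2

flipped = seller_wins != merit_seller
print(f"outcomes differing from merit: {flipped.mean():.2%}")
```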
Firms can both acquire new resources from external markets and optimize incumbent resources by redeploying them from one use to another. This study explores how firms can achieve balance between internal redeployment and external recruitment of resources by focusing on scientific and engineering human resources, i.e., inventors, in the global semiconductor industry. We find that externally recruited inventors tend to work in fewer technology areas but are more likely to enter new technology areas compared to incumbents. This is attributable to the fact that the former incur greater adjustment costs but lower opportunity costs compared to the latter. Furthermore, an inventor’s productivity and knowledge similarity with targets facilitate redeployment, but a central position in a firm’s knowledge base discourages it; this tendency is stronger for incumbents than for externally recruited inventors, again due to lower opportunity costs. We further find that the lower short-term performance of externally recruited inventors improves to become comparable with that of incumbents in the long run.
Consumers are increasingly subject to fees, often without knowing why they are charged. This fee growth is due partly to an increasingly complex and underregulated marketplace. In addition to annoying consumers, fees transfer wealth from consumers’ wallets to wealthy corporations and individuals. Aware of rising public concerns, many industries have adopted a la carte pricing, where consumers can choose options and associated fees. For example, at many hotels, guests can pay fees for early check-in or late check-out, for using the pool, Wi-Fi, and gym, and for breakfast. Other companies instead use all-inclusive pricing or assess mandatory fees (e.g., “resort fees”). In this talk, I first review prior research on how consumers react to different pricing models that involve fees (e.g., partitioned pricing, drip pricing). I will then present some research in progress that investigates consumers’ preference for the freedom to choose options and associated fees. We show, in a series of studies involving different industries, options, and fee/additional-charge structures, that consumers mispredict how much they like optional fees. While they prefer optional fees when comparing different pricing structures (all-inclusive, mandatory fees), as they might be asked to do in a marketing research pricing survey, they dislike optional fees in consumption contexts that involve paying for optional items with associated fees. We provide some initial evidence that this happens because consumers mispredict the pain of payment that comes with the freedom of choice.
Ransomware, a digital form of extortion, has emerged as one of the biggest threats to cybersecurity. Faced with business disruptions, many organizations accede to ransom demands and, in doing so, they incentivize attackers to launch more attacks, elevating the chance of a future breach not just for themselves but for others as well. We study this externality using a multiperiod game among multiple firms, each of which has a choice to pay or not pay if breached in a particular period, with its choice having implications for all of them in the future. How should a policymaker intervene to mitigate this externality, and is prohibition really necessary? Our study raises several important questions and provides practical insights. Specifically, what might work as a policy, and how it might work, depends critically on the behavior of a third party—the ransomware attacker—an economic agent absent from a typical externality setup. Our model of “extortionality”—externality due to extortion—provides a framework for comparing different types of policy interventions and raises concerns for policymakers to pause and ponder.
Platform giants typically possess strong power over other participants on their platforms. Such power asymmetry gives platform owners an edge in setting platform fees to capture the surplus created on their platforms. While there is a heated debate about regulating these powerful platforms, the lack of empirical studies hinders progress toward evidence-based policymaking. This research empirically investigates this regulatory issue in the context of on-demand delivery. Delivery platforms (e.g., DoorDash) charge restaurants a commission fee, which can be as high as 30% per order. To support small businesses, regulators have recently begun to cap the commission fees charged to independent restaurants. This research empirically evaluates the effectiveness of platform fee regulation by investigating recent regulations across 14 cities and states in the United States. Our analyses show that independent restaurants in regulated cities (i.e., those paying reduced commission fees) experience a decline in orders and revenue, whereas chain restaurants (i.e., those paying the original fees) see an increase in orders and revenue. This intriguing finding suggests that chain restaurants, not independent restaurants, benefit from regulations that were intended to support independent restaurants. We find that platforms’ discriminatory responses to the regulation may explain the negative effects on independent restaurants. That is, after cities enact commission fee caps, delivery platforms become less likely to recommend independent restaurants to consumers and instead turn to promoting chain restaurants. Moreover, delivery platforms increase their delivery fees for consumers in regulated cities, suggesting that these platforms attempt to cover the loss of commission revenue by charging customers more.
Researchers have studied how employers express a commitment to racial diversity in hiring practices. However, ensuring practices increase the proportion of workers of color remains challenging. Job seekers' understanding of the cultural norms that govern the hiring process may be crucial for successful implementation. This study examines how employers rely on job seekers' knowledge of the hiring process to achieve a diverse workforce. I argue that job seekers facilitate hiring outcomes by performing culturally laden job search practices that reduce search frictions. To develop this theoretical argument, I observe the hiring process for entry-level workers at two national nonprofit employers over 18 months. Through interviews and direct observations of candidate deliberations, I identify how employers consider job seekers' cultural knowledge and behavior in evaluation and selection decisions. Findings indicate that job seekers possessing high job search cultural capital — a strong understanding of normative job search rules and labor market knowledge — elicit positive hiring preferences from employers. Conversely, qualified candidates with low job search cultural capital garner less support during selection decisions. This study also reveals how employers' preference for candidates familiar with job search norms can weaken the effectiveness of diversity-minded hiring practices. More broadly, the results illuminate how cultural capital influences well-intentioned hiring processes and organizational outcomes at the microlevel.
Although failures and other experiences can capture attention and motivate organizations to learn and improve, this knowledge is not always retained – leaving some organizations dangerously prone to repeat the same mistakes over time. We adapt theory on the Attention-Based View (ABV), and specifically on attentional engagement and vigilance, to shed new light on this process. While prior research has examined how competing events may draw attention away, our theory leads us to consider the circumstances that will motivate employees to push their attention right back, preserving or enhancing the learning that has already occurred. Our framework examines the conditions that turn attention back toward failures by raising the chances that related issues exist elsewhere, serving as continuing reminders or cues about the failure when attention begins to drift away. We find support for related hypotheses involving a failure’s complexity, the firm’s culpability, and the use of related routines elsewhere in the firm. Our findings contribute to ABV by developing theory about attentional engagement and vigilance, and by emphasizing the conditions that can keep attention focused rather than drawing it away from a focal domain. We also contribute to efforts to examine depreciation and forgetting in the organizational learning literature.
Problem definition: Last-mile delivery is a critical component of logistics networks, accounting for approximately 30-35% of costs. As delivery volumes have increased, truck route times have become unsustainably long. To address this issue, many logistics companies, including FedEx and UPS, have resorted to using a “Driver-Aide” to assist with deliveries. The aide can assist the driver in two ways. As a “Jumper”, the aide works with the driver in preparing and delivering packages, thus reducing the service time at a given stop. As a “Helper”, the aide can independently work at a location delivering packages, while the driver leaves to deliver packages at other locations and then returns. Given a set of delivery locations, travel times, service times, the jumper’s savings, and the helper’s service times, the goal is to determine both the delivery route and the most effective way to use the aide (e.g., sometimes as a jumper and sometimes as a helper) to minimize the total delivery time. Methodology/results: We model this problem as an integer program with an exponential number of variables and an exponential number of constraints, and propose a branch-cut-and-price approach for solving it. Our computational experiments are based on simulated instances built on real-world data provided by an industrial partner and a dataset released by Amazon. More importantly, our results characterize the conditions under which this novel operation mode can lead to significant savings in terms of both the routing time and cost. Managerial implications: Our computational results show that the driver-aide with both jumper and helper modes is most effective when there are denser service regions and when the truck’s speed is higher (≥ 10 MPH). Coupled with an economic analysis, we come up with rules of thumb (with close to 100% accuracy) to predict whether to use the aide, and in which mode. Empirically, we find that delivery routes with greater than 50% of the time devoted to delivery (as opposed to driving) are the ones that provide the greatest benefit. These routes are characterized by a high density of delivery locations.
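For intuition about the two aide modes, the toy calculation below compares the elapsed time at a single stop when the aide acts as a jumper versus a helper. The numbers and the max-of-parallel-tasks abstraction are assumptions for illustration, not the paper's branch-cut-and-price model over full routes.

```python
# Toy comparison of the two aide modes at one stop (assumed numbers).
def jumper_stop_time(service_min: float, jumper_savings_min: float) -> float:
    # Aide rides along and shortens the service time at the stop.
    return max(service_min - jumper_savings_min, 0.0)

def helper_stop_time(helper_service_min: float, driver_loop_min: float) -> float:
    # Aide delivers alone while the driver serves other stops and returns,
    # so the elapsed time is whichever parallel task finishes last.
    return max(helper_service_min, driver_loop_min)

print(jumper_stop_time(service_min=10.0, jumper_savings_min=4.0))      # 6.0
print(helper_stop_time(helper_service_min=12.0, driver_loop_min=9.0))  # 12.0
```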
This paper studies the costs and benefits of adding factors to empirical asset-pricing models. I argue that, for many purposes, the literature’s preference for models with fewer factors is misplaced. Including extra factors in a model, even redundant ones, can improve estimates of individual alphas and increase the power of asset-pricing tests. I provide empirical examples to illustrate these results.
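A simulated sketch of the estimation point above: adding a second factor that absorbs residual return variance can tighten the alpha estimate even when it adds little pricing information. The data and loadings below are made up, not from the paper.

```python
# Compare alpha estimates from a one-factor and a two-factor regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 600
mkt = rng.normal(0.005, 0.04, T)            # market factor returns
extra = 0.8 * mkt + rng.normal(0, 0.02, T)  # second, highly correlated factor
r = 0.002 + 1.0 * mkt + 0.5 * extra + rng.normal(0, 0.03, T)

for X in (mkt.reshape(-1, 1), np.column_stack([mkt, extra])):
    fit = sm.OLS(r, sm.add_constant(X)).fit()
    # Adding the extra factor soaks up residual variance, shrinking se(alpha).
    print(f"alpha = {fit.params[0]:.4f}, se(alpha) = {fit.bse[0]:.4f}")
```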
Firms increasingly leverage external data with an aim to unlock improvements in their offerings, but it is challenging to measure the value of external data. Collaborating with a large Chinese technology company, we analyze a randomized field experiment in which we manipulated access to the market leader’s application programming interface (API) to measure the causal impact of external data on the click-through rate (CTR) for the focal company's nascent search product. We report three main findings. First, compared to the baseline with access to the market leader’s API, API removal leads to a 4.6% decrease in CTR on average for search suggestions. Second, the negative effect of API removal is more prevalent among heavy users, and it is driven by both mainstream and niche content. Third, the magnitude of this negative effect in the longer term is half as large as what we would have obtained with a short-term experiment. We provide suggestive evidence on the mechanism behind the longer-term effect: the focal company's reliance on the market leader's data limits the development of its algorithmic system based on its internal data. This research informs managers about whether and how the market leader’s data affects a smaller player's product performance. It further sheds light on policies such as the Digital Markets Act, which proposes data sharing by large digital platforms, and on the recent debate over whether big data undermines market competition.
Using a novel setting in which retailers receive bonuses when selling jackpot winning lottery tickets, we show that large windfalls not only increase the revenue and employment of existing businesses but also spur serial entrepreneurship. Serial ventures occur mainly in nonretail industries. We document a pecking order in entrepreneurs’ responses: small windfalls increase revenue, whereas windfalls larger than $100,000 trigger business creation and employment growth. Consistent with wealth effects as an indispensable mechanism, the effects become larger still when cash windfalls far surpass the amount required to start new businesses. Finally, cash windfalls do not lead to financial distress.
Firms increasingly engage in online communities to source external knowledge from voluntary contributors. Although prior literature has examined how to incentivize the crowd’s participation, limited research has focused on tensions between continued participation and contribution quality. We address this gap by studying how organizational gatekeepers interact with external contributors to shape contributors’ continued participation and subsequent contribution quality. We formalize our predictions on how input acceptance and knowledge sharing affect contributor behaviors in an analytical model with a belief updating framework. Utilizing a large dataset on newcomers’ contributions to firm-owned open source software products, we find that gatekeepers’ input acceptance and sharing of general knowledge increase continued participation, but decrease subsequent contribution quality. Only by sharing product-specific knowledge with newcomers can gatekeepers both motivate continued participation and improve contribution quality. We discuss the broader implications of our model and findings for the governance of online communities where participation and contribution are voluntary.
We analyze the run risk of USD-backed stablecoins and uncover a dilemma between stablecoins’ price stability and financial stability. Stablecoin runs bear important financial stability implications through the fire sale of US dollar assets like bank deposits, Treasuries, and corporate bonds. We show that panic runs exist even though general investors only trade stablecoins in secondary markets with flexible prices. Run incentives are reinstated by stablecoin issuers’ liquidity transformation and the fixed $1 at which arbitrageurs redeem stablecoins for cash in the primary market. We discover that more efficient arbitrage amplifies run risk. This explains why stablecoin issuers only authorize a small set of arbitrageurs, even though doing so comes at the expense of a stable secondary-market price. In other words, the centralization of arbitrage embeds an inherent tradeoff between run risk and price stability. Our findings are based on a model and a novel dataset on stablecoin redemptions, trading, and reserve assets. Calibrating our model, we find a higher run risk for USDT, the largest stablecoin, compared to USDC, the second-largest stablecoin. However, even USDC bears significant run risk due to its less concentrated arbitrage and more concentrated deposit holdings.
There is a widespread belief that some employees exhibit the attribute of “clutch” or “anti-clutch” performance, consistently raising or lowering their performance in pressure-filled periods. We subject this lay theory to the first empirical test in typical firms, using over one million new automobile sales by 21,896 salespeople at 1,034 franchised dealerships. Salespeople in these dealerships regularly face high month-end performance pressure due to lucrative manufacturer sales targets. We first establish common belief in a lay theory of clutch performers using an online study, then employ multiple analytical techniques to show clutch and anti-clutch performers to be rare and of limited economic importance in our setting. Employees' average performance under pressure closely mirrors their low-pressure performance, with the few clutch performers that do exist having little economic importance to the firm and being unidentifiable to management. We conclude that the ability to respond to pressure is not a meaningful source of employee heterogeneity in our setting. Star salespeople are consistently stars, while average employees are consistently average. We caution researchers and managers against categorizing employees as “clutch” or “anti-clutch” performers, given the risk that anecdotal or small-sample performance differences under pressure might reflect random chance and not underlying employee contribution or value. Doing so can not only hurt organizational performance, but can also increase inequity if the categorization is based on stereotypes and other cognitive biases.
We document strong and unique inflation forecastability using the relative pricing between stocks with high and low inflation exposures. We construct stock-level headline and core-focused inflation betas by taking advantage of the fact that stock returns exhibit persistent sensitivity to headline-CPI shocks during the CPI measurement month, and to core-CPI news on CPI announcement days. Above and beyond existing forecasting methods, our stock-based portfolios contain fresh and non-redundant predictive information, indicating active price discovery on inflation in the cross-section of stocks. The core-focused forecasting portfolio emerges as a unique and unparalleled predictor of core inflation, whose predictive power and economic significance increased dramatically during the inflation surges of 1973 and 2021. Moreover, our stock-based information is not incorporated by economists into their inflation forecasts, which leave especially large room for improvement during 2021-22. We also find stronger predictability under the Fed’s QE and when the Fed is behind the curve in fighting inflation.
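Schematically, a stock-level inflation beta can be estimated as a rolling regression coefficient of returns on CPI shocks, as in the sketch below. The inputs are simulated, and the paper's actual headline/core timing construction is more involved.

```python
# Rolling inflation beta: cov(return, CPI shock) / var(CPI shock).
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
T = 240
cpi_shock = pd.Series(rng.normal(0, 0.3, T))               # surprise component
ret = 0.5 * cpi_shock + pd.Series(rng.normal(0, 4.0, T))   # one stock's returns

window = 60  # months in the rolling estimation window (assumed)
beta = ret.rolling(window).cov(cpi_shock) / cpi_shock.rolling(window).var()
print(beta.dropna().tail())
```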
The Lean Startup has brought a sea change in conventional wisdom to the practice of entrepreneurship: rather than commit and persevere, the advice is now that experimenting and pivoting are the key to success. Emerging scholarship suggests an entrepreneur should experiment, and examines the implications of pivoting; however, this literature has yet to fully articulate the conceptual logic underlying how much to experiment and its implications for how frequently to pivot. We focus on the design of what we call the program of experimentation — a sequentially interdependent set of experiments and pivot decisions undertaken as an entrepreneur seeks to develop a viable business idea. We conceptualize the program along two design dimensions: the number of experiments to run and the pivot threshold for evaluating experimental outcomes. We address two critical issues. First, how much should an entrepreneur experiment and what are the implications for when to pivot? Second, how is the design of the program of experimentation conditioned by the nature of an entrepreneur’s behavioral biases? Our computational model suggests that while experimenting and pivoting can improve new venture performance, they can also be taken too far. Programs of experimentation that generate frequent and early pivots may impede learning and underperform more conservative programs that generate fewer pivots. We also show that an effectively designed program of experiments can partially remedy entrepreneurs’ behavioral biases. Overconfidence (specifically, over-estimation bias) favors a program design with a more aggressive pivot threshold, though this may not necessitate an increase in the number of experiments.
Recent research has demonstrated that executives’ motivational orientations, as reflected in organizational communication (e.g., letters to shareholders), are strong predictors of important firm outcomes. Specifically, firms whose executives communicate a focus on growth and achievement (a promotion focus) pursue distinctly different strategies compared to firms whose executives communicate a focus on security and the avoidance of failure (a prevention focus). In this paper, we explore whether external stakeholders are sensitive to executives’ promotion focus and prevention focus communication and the degree to which these foci match the situation by examining investors’ reactions to communication during quarterly earnings calls. We find that external stakeholders appear responsive to executive communication such that stock market returns are higher when executives communicate a promotion focus. This relationship is stronger when past performance is positive. Additionally, we find evidence that prevention focus communication can ameliorate negative investor reactions following poor past financial performance. Our study has several theoretical implications for the study of regulatory focus and executive communication.
This paper explores how firms' sourcing and customer acquisition decisions shape the structure of the production network. We propose a measure of fragmentation based on the notion of communities in the production network. A community represents a set of firms that trade mostly with each other. Using the history of buyer-supplier relationships between firms, we build a production network that evolves over time and identify communities in this time-dependent network. We find that while firms in the network become more connected over time, i.e., have more customers and suppliers, the network also becomes more fragmented, i.e., the number of communities increases and the dominance of large communities decreases. We explore a plausible mechanism that reconciles the increased connectivity with fragmentation. Furthermore, we identify firms that link communities in the production network and demonstrate the importance of these firms for improving visibility into supplier and customer networks.
We hypothesize that when price correction requires more capital than any one investor can provide, institutions coordinate trading via crowd-sourcing in the media. When the crowd reaches a consensus, synchronized trading occurs, prices are corrected, and anomaly returns result. We use over one million Wall Street Journal articles from 1980 to 2020 to develop a novel textual measure of institutional investors making predictions in the media (InstPred). We show that (i) both value and momentum anomaly returns are 34% to 63% larger when InstPred is higher, and (ii) institutional investors collectively trade the anomalies more aggressively when InstPred is higher. Our results are reinforced by tests using quasi-exogenous variation in temporal investor-WSJ connections and cannot be explained by existing measures such as document tone.
Research on learning from failure has found that industry accidents can inspire organizations to learn, or improve performance, vicariously from other firms’ failures, but also that they soon forget what they have learned, regressing back to old patterns. This research, conducted at the organizational level, obscures the fact that individuals inside organizations might approach these opportunities to learn differently. We argue that an important difference between individual workers that can affect learning patterns is their level of professionalism, or the extent to which one is trained in and/or identifies with one’s profession. This distinction allows us to explain why those more threatened by an accident caused by negligence (those with less professionalism) react more strongly to the accident, driving the observed organizational patterns. What is more, we argue that the patterns that look like learning at the organizational level are not actual learning, because these less-professional workers (a) cannot sustain the change in behaviors after the accident and (b) tend to engage in more superficial learning behaviors induced by institutional pressures reacting to the large-scale accident. As a result, when institutional pressures wane, the positive change in behavior drops, explaining the forgetting patterns found at the organizational level. Through analyses of behavior in the context of a large-scale accident in the maritime industry, we find support for this argument and highlight the value of understanding learning patterns at the microfoundational level. By extending theory to the individual level, we can explain organizational-level patterns in more detail and highlight how professionalism shapes learning behaviors for individuals within firms, ultimately shaping organizational performance.
Forming entrepreneurial strategy is difficult, as the future value of strategy alternatives is uncertain. To create and capture value, firms are advised to consider and test multiple alternative strategy elements. Yet how firms generate and test alternatives remains understudied. As entrepreneurial firms lack resources for broad search, they often draw upon advisory resources from outside the firm. However, advice can be difficult to extract, absorb, and apply. While scholars have examined static attributes of the entrepreneur or advisor to explain whether advice is used, a dynamic explanation of how advice is produced and informs strategy testing and formation is missing. In an 11-month field study, we observed 25 founders of 12 food and agriculture firms interacting with a common pool of 34 advisors in an entrepreneurship training program. Leveraging the program’s structured design, we observed 165 advice interactions over three phases. No firm took advice and applied it directly to firm strategy. When entrepreneurs engaged literally with advice, they later discounted it – distancing advice from strategy. In contrast, entrepreneurs who coproduced advice challenged advisors to craft novel advice relevant to their strategy, translated it to make it actionable, and tested it – integrating advice into strategy. Firms that distanced advice from strategy did not test strategy alternatives, while firms that integrated advice into strategy tested multiple alternatives, explored broader markets, and adapted their strategies. We contribute a grounded process model that explains how coproducing advice opens firms’ apertures to consider strategy alternatives, while testing informs the strategy elements chosen.
We examine the roles of cognitive and experiential learning in a less explored, multi-stage problem context where actions and outcomes are separated across time and decision makers face the challenge of temporal myopia (Levinthal and March 1993). We combine two bases of learning – one guided by an external, cognitive template and the other guided by experiential learning from feedback. We find a U-shaped relationship between the fidelity of cognitive representations and organizational performance. In particular, even when it consists of correct clues, a partial cognitive representation may bias experiential learning, resulting in a negative impact on organizational performance. Only when cognition is sufficiently complete does it reinforce experiential learning, leading to an overall positive impact on organizational performance. Our finding suggests that the benefits of cognitive representation may be contingent on the environment in which experiential learning takes place, as well as on the fidelity of the representation.
We examine the consequences of principal-versus-agent (PA) considerations and the new revenue standard (ASC 606). Using a data set compiled through textual analysis of SEC filings and manual collection, we provide evidence indicating that (i) firms with PA exposures face heightened compliance risk and audit fees; (ii) the effect of PA considerations on revenue quality is negligible; and (iii) investors attach greater weight to revenue surprises of agents and, with a delay, smaller weight to revenue surprises of principals. Evaluating the impact of the adoption of ASC 606, we find evidence suggesting that the adoption reduces compliance risk and audit fees for firms with PA considerations and alleviates the disparity in investors’ processing of revenue information based on firms’ PA classifications.
We develop a model of the longitudinal unfolding of entrepreneurs’ experience of distress and their subsequent mobilization of relevant coping strategies, which we test with a five-wave survey of 574 entrepreneurs at the onset of the COVID-19 pandemic. We theorize and show that the more emotionally-exhausted entrepreneurs are at the crisis’ beginning, the more uncertainty they later perceive about their resources, and the more this hinders their subsequent mobilization of relevant coping strategies – namely, environmental scanning and reflexivity. In turn, we theorize and show that for environmental scanning to reap benefits in terms of reduced perceived uncertainty and emotional exhaustion, it must be accompanied by deliberate efforts in reflexivity. All in all, our work contributes new insights about the underlying psychological dynamics that explain the mobilization of relevant coping strategies – and of the effects these can have for becoming resilient.
The rise of innovation-conscious consumers has led to record demand for products with innovative attributes, such as low-sugar foods and sun-protective clothing. This market trend presents a profit-growth opportunity for established companies, which have dominated the market based on traditional attributes, such as the taste of food and the appearance of clothing. Yet taking advantage of this opportunity is challenging due to the lack of information on consumers’ valuation of innovation and the increased operational costs associated with delivering products with innovative attributes. We present a model of a monopolist developing and producing conventional and innovative products to serve a two-segment market consisting of innovation-conscious and innovation-neutral consumers. We use a two-dimensional differentiation-contingency framework to depict the rich set of the firm’s possible optimal strategies for segmenting the market and explore how the market environment and the firm’s operational environment affect the firm’s choice of the optimal product strategy. We find that while high innovation valuation drives the optimal strategy to be differentiated, variability in the innovation valuation drives contingency. The firm’s operational cost structure further leads to different prioritization within the innovative product’s quality dimensions: high development cost (resp. coupling cost between the two quality dimensions) induces prioritization of the traditional (resp. innovative) quality of an innovative product. We show the robustness of the developed framework to generalized valuation distributions and cost structures.
Professional accountability is considered important to the sustenance of a profession. Prior research has examined the role that scrutiny by constituents, such as supervisors, regulators, auditors, and certification bodies, plays in improving professional accountability. With the advent of social media, a dispersed, diverse, and pseudonymous public can now scrutinize the actions of professionals, especially those at the frontline. In this research, I examine how social media scrutiny from the public impacts the professional accountability of frontline professionals and the consequences to the work of downstream professionals in the ecosystem. Based on an ethnography of 911 emergency management, I find that social media scrutiny of 911 call-takers—the frontline professionals in this setting—can obscure rather than improve professional accountability. I elaborate on the processes that produce these paradoxical outcomes and discuss their theoretical significance. Specifically, I unpack how and why social media scrutiny pushes frontline professionals to deviate from their professional mandate, which, in turn, obscures their sense of professional accountability. Beyond the frontline professionals, these processes also negatively affect the everyday work of downstream professionals (e.g., 911 dispatchers, police officers) in the professional ecosystem, thereby producing a cascading set of unintended consequences for multiple actors across the ecosystem.
This paper studies how introducing a central bank digital currency (CBDC) can affect the banking system. We show that CBDC need not reduce bank lending unless frictions and synergies bind deposits and lending together. We then estimate a dynamic banking model to quantify the importance of these frictions and synergies for the impact of a CBDC on the banking system. Our counterfactual analysis shows that a CBDC can replace a significant fraction of bank deposits, especially when it pays interest. However, CBDC has a much smaller impact on bank lending because banks can replace a large fraction of any lost deposits with wholesale funding. Substitution to wholesale funding makes banks' funding costs more sensitive to changes in short-term rates, increasing their exposure to interest rate risk. We also show that a CBDC amplifies the impact of monetary policy shocks on bank lending.
External search allows organizations to source distant ideas from people outside the organization. We theorize that external search hinges upon the interplay between an organization’s selection of ideas and external contributors’ generation of ideas that, counterintuitively, narrows the ideas organizations gain access to. Specifically, an organization selects a subset of ideas generated by external contributors, who themselves strive to see their ideas implemented, and thus use this selection as a signal for the kinds of ideas the organization is looking for. Our hypothesis is that this results in a “co-evolutionary lock-in” where organizations with more selection consistency receive less future idea variety, which in turn limits the organizations’ future selection decisions. We find empirical support in an analysis of the crowdsourcing initiatives of 1,160 organizations. We leverage large-scale network analysis and natural language processing to examine the underlying mechanisms and contingencies. These findings have broader implications for the literatures on search, co-evolution, and crowdsourcing by demonstrating how selection consistency can result in co-evolution, and the underlying mechanisms for why this occurs.
This paper examines the role of social media in informing corporate decision-making by studying the decision of firm management to withdraw an announced merger. A one-standard-deviation decline in abnormal social media sentiment following a merger announcement predicts a 0.73-percentage-point increase in the likelihood of merger withdrawal (18.9% of the baseline rate). The informativeness of social media for merger withdrawals is not explained by abnormal price reactions or news sentiment; in fact, it is stronger when these other signals disagree. Consistent with learning from external information, we find that the social media signal is most informative for complex mergers in which analyst conference calls take a negative tone, driven by the Q&A portion of the call. Overall, these findings imply that social media is not a sideshow, but an important aspect of the firm’s information environment.
Most ETFs replicate indexes licensed by index providers. We show that index providers wield strong market power and charge large markups to ETFs that are passed on to investors. We document three stylized facts: (i) the index provider market is highly concentrated; (ii) investors care about the identities of index providers, although they explain little variation in ETF returns; and (iii) over one-third of ETF management fees are paid as licensing fees to index providers. A structural decomposition attributes 60% of licensing fees to index providers’ markups. Counterfactual analyses show that improving competition among index providers reduces ETF fees by up to 30%.
Disparities in accruing social capital contribute to persistent gender gaps in career trajectories. Processes like sponsorship, or when senior colleagues (sponsors) lend their social capital to facilitate the career advancement of junior colleagues (proteges), are critical to bypassing barriers to women's advancement. But how and why do sponsors decide to use their social capital, especially considering it is a valuable resource for facilitating their own advancement? Drawing from an inductive qualitative investigation of equity partners at a multinational consulting and accounting firm, I find men and women both recognize a potential cost of providing sponsorship but make decisions about using their social capital through different, gendered mental models of sponsorship. Men are more likely than women to display a transactional mental model: focusing on self-interested reciprocal exchanges, men treat social capital as a resource to be invested in high-potential proteges who "fit" consulting work, with the goal of proteges' future high performance yielding reputational benefits for sponsors. Women are more likely than men to display a relational mental model: driven by an intrinsic motivation to reciprocate their prior experiences receiving sponsorship, women view social capital as a valuable resource they have the responsibility to spend to help proteges perceived to be highly committed to the work. Drawing from this evidence, I introduce a process model for understanding how gendered mental models for sponsorship function as vehicles for the unequal reproduction of sponsors' social capital.
We propose that differences between overnight and daytime returns are the result of return extrapolation. After high daytime returns, order imbalances are high in the first 15 minutes of regular trading the next day, which is consistent with higher overnight returns. The effect is asymmetric, with positive returns having a larger response than negative returns, and it is stronger in more overpriced stocks. At the portfolio level, extrapolative effects can explain most of the cross-sectional variation in the “tug of war” between overnight and daytime returns. Extrapolative trading is also consistent with the upward-sloping relation between market beta and average overnight returns.
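The decomposition this abstract relies on is straightforward to compute from open and close prices, as in this small sketch with made-up data.

```python
# Overnight return = prior close to today's open; daytime return = open to close.
import pandas as pd

px = pd.DataFrame(
    {"open": [100.0, 102.0, 101.0], "close": [101.5, 100.5, 103.0]},
    index=pd.to_datetime(["2024-01-02", "2024-01-03", "2024-01-04"]),
)
px["overnight"] = px["open"] / px["close"].shift(1) - 1
px["daytime"] = px["close"] / px["open"] - 1
print(px[["overnight", "daytime"]])
```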
This study analyzes the effects of increased exposure to anti-corruption laws on firms’ geographic segment reporting. Using the 2010 adoption of the U.K. Bribery Act (UKBA) and its significant extraterritorial reach for identification, we conduct difference-in-differences analyses comparing changes in the segment reporting of U.S. multinational firms with and without a material business presence in the U.K. We find that exposure to the UKBA leads to less transparent geographic segment reporting with respect to a firm’s perceived corruption exposure. Unlike prior studies that focus on firms’ explicit changes in reported segments (i.e., re-segmenting), we find that these results are mostly attributable to a more subtle mechanism—specifically, without re-segmenting, firms change the mix of their revenues among existing segments. Our findings have implications for segment reporting research and the ongoing debate regarding the efficacy of the current management approach to segment reporting under ASC 280 and IFRS 8.
Online reviews are crucial for consumer decision-making, but there is no canonical, widely accepted measure of review quality. This absence hinders efforts to promote high-quality reviews and results in over-reliance, in both practice and academic research, on proxies such as the number of “helpfulness” votes a review receives. Our study addresses this gap by developing a measure of online review quality using the Delphi method. Our Delphi study results in a measure of online review quality as an aggregation of five underlying aspects – relevant, trustworthy, comprehensive, well-written, and timely. Our empirical evaluation demonstrates that the measure has good inter-rater reliability and is substantially different from helpfulness votes. Interestingly, review quality is highly correlated with helpfulness, suggesting the divergence between helpfulness votes and review quality is operational rather than conceptual. We demonstrate that consumers overwhelmingly prefer review quality to helpfulness votes. Furthermore, the review quality measure can be accurately predicted using textual features extracted with BERT, suggesting the potential for large-scale deployment of the measure.
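A hedged sketch of the prediction step described above: embed review text with a pretrained BERT encoder and fit a simple regressor to quality labels. The labels, the mean-pooling choice, and the ridge regressor here are assumptions; the paper's feature set and model may differ.

```python
# Embed reviews with BERT, then regress (hypothetical) quality scores on them.
import numpy as np
import torch
from sklearn.linear_model import Ridge
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

reviews = [
    "Battery lasts two days; camera struggles in low light but is fine outdoors.",
    "Great value for the price. Shipping was quick and packaging was secure.",
    "bad. do not buy!!!",
]
quality = np.array([4.5, 3.8, 1.2])  # made-up expert quality scores

with torch.no_grad():
    batch = tok(reviews, padding=True, truncation=True, return_tensors="pt")
    # Mean-pool the last hidden states into one feature vector per review.
    X = enc(**batch).last_hidden_state.mean(dim=1).numpy()

model = Ridge(alpha=1.0).fit(X, quality)
print(model.predict(X))
```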
More than 65 years after the "Brown v. Board of Education" ruling that school segregation is unconstitutional, public schools across the U.S. are resegregating. In attempts to disentangle school segregation from neighborhood segregation, many cities have adopted policies for city-wide school choice. However, these policies have largely not improved patterns of segregation. From 2018 to 2020, we worked with the San Francisco Unified School District (SFUSD) to design a new student assignment policy that meets the district’s goals of diversity, predictability, and proximity. To develop potential policies, we used optimization techniques to augment and operationalize the district’s proposal of restricting choice to zones, and we compared these to the district-wide choice approaches typically suggested by the school choice literature. We find that appropriately designed zones with minority reserves can achieve all the district’s goals at the expense of choice, and that choice can resegregate diverse zones. Using predictive choice models developed from historical choice data, we show that a zone-based policy can decrease the percentage of racial minorities in high-poverty schools from 29% to 11%, decrease the average travel distance from 1.39 miles to 1.29 miles, and improve predictability, but reduce the percentage of students assigned to one of their top 3 programs from 80% to 59%. Traditional district-wide choice approaches can improve diversity and choice at the expense of proximity. Our work informed the design and approval of a zone-based policy for use starting in the 2024-25 school year.
Corporate America is increasingly taking public stances on divisive sociopolitical issues via social media. We investigate the consequences of such disclosure due to its revelation of information about a company’s political ideology. Examining S&P 1500 firms’ responses to the Black Lives Matter (BLM) movement, we first document a positive association between firm-level liberal-leaning proxies and the likelihood that a firm supports BLM. We then examine the consequences and show that liberal-leaning mutual funds and hedge funds exhibit abnormal purchases of responding firms’ shares, whereas conservative-leaning funds exhibit relative abnormal sales. The share turnover of responding firms increases, but share prices remain unchanged due to simultaneous increases in both investor purchases and sales. Furthermore, subsample evidence based on banks’ depositors shows that customers in liberal-leaning counties significantly increase deposits in local branches of responding banks. Overall, our results suggest that firm disclosures on sociopolitical issues lead to a more ideologically aligned investor and customer base.
We estimate an equilibrium demand-based corporate bond pricing model linking institutional holdings to bond characteristics. Our estimates reveal heterogeneity in demand elasticities across institutions: mutual funds are elastic and demand liquidity, akin to reaching for yield, whereas insurance companies are inelastic. Moreover, we document stark differences in preferences for maturity, credit risk, and liquidity across institutions. In counterfactuals, we evaluate the pricing implications of credit quality migration, mutual fund fragility, monetary policy tightening, and a tapering of the Fed's corporate credit facility. Our model predicts substantial disruptions in bond prices through shifts in institutional demand and identifies the composition of institutional demand as an important state variable for corporate bond pricing.
In Misconceiving Merit, sociologists Mary Blair-Loy and Erin A. Cech uncover the cultural foundations of a paradox. On one hand, academic science, engineering, and math revere meritocracy, a system that recognizes and rewards those with the greatest talent and dedication. At the same time, women and some racial and sexual minorities remain underrepresented and often feel unwelcome and devalued in STEM. How can academic science, which so highly values meritocracy and objectivity, produce these unequal outcomes?
Blair-Loy and Cech studied more than five hundred STEM professors at a top research university to reveal how unequal and unfair outcomes can emerge alongside commitments to objectivity and excellence. The authors find that academic STEM harbors dominant cultural beliefs that not only perpetuate the mistreatment of scientists from underrepresented groups but also hinder innovation. They show how two sets of cultural schemas – cognitive and moral preconceptions about what work devotion is and what scientific excellence should look like – function quietly in the background to shape interactions, downplay the contributions of underrepresented faculty, and legitimize this unfair treatment. Underrepresented groups – including women from all racial/ethnic backgrounds, Black and Latinx men, and LGBTQ-identifying faculty – are often seen as less fully embodying merit compared with equally productive white and Asian heterosexual men, and the resulting negative career consequences persist regardless of professors’ actual academic productivity. These judgments also help undermine scientific innovation.
This book advances the state of play in social science research on inequality in STEM, and in professional occupations more broadly, by taking seriously cultural beliefs and practices within the profession as mechanisms of inequality. The book is filled with insights for higher education administrators working toward greater equity as well as for scientists and engineers striving to understand and change entrenched patterns of inequality in STEM.
How does a platform firm’s diversification influence its existing business? We conjecture that a diversifying platform firm faces a unique challenge in allocating complementors’ resources between businesses due to its lack of ownership over them. At the same time, the potential synergy from serving multiple businesses in a diversifying platform firm can divert ownership-free complementors away from competing platform firms. We analyze changes in the rideshare business in Manhattan, New York City after Uber launched Uber Eats in the city. We find that the launch of Uber Eats was associated with a reduction in trip numbers for both Uber and Lyft. Both effects were weakened during rush hours, when the opportunity costs of resource redeployment to Uber Eats were higher for the rideshare drivers.
We construct a unique firm-level dataset to study the effect of robot adoption on productivity and employment in China. We find that robot adoption leads to higher levels of productivity and employment, on average. However, Chinese state-owned enterprises (SOEs) do not exhibit the same productivity boost as private firms when adopting robots. We also find some evidence that: (1) Chinese SOEs don't appear to hire the appropriate human capital necessary to take advantage of investment in robots and (2) Chinese SOEs don't appear to make the investments in complementary assets needed to obtain productivity improvements. Moreover, these effects appear to be mitigated in conditions where market pressures prevail. To explain these results, we propose that SOEs lack the market-based incentives needed to identify and invest in the complementary assets necessary to take full advantage of robots. Our findings highlight the role that organizational forms and institutional settings can play in enabling and constraining the use of new technologies.
The post-pandemic world requires a renewed focus from service providers on ensuring that all customer segments receive the essential services (food, healthcare, housing, education, etc.) they need. Philanthropic service providers are unable to cope with the increased demands caused by the social, economic, and operational challenges induced by the pandemic. Service strategies in which customers self-select into not paying are becoming popular in various settings. Obtaining insights into how such strategies can efficiently balance societal and financial goals is critical for a for-profit service provider. We develop and analyze a quantitative model of customer utilities, vertically-differentiated product assortment, pricing, and market size to understand how service providers can effectively use customer segmentation and serve the poor at the bottom of the pyramid. We identify conditions under which designing the service delivery to be accessible to the poor can simultaneously benefit the for-profit service provider, customers, and society as a whole. Our work provides a framework to obtain operational, economic, and strategic insights into socially responsible service delivery strategies.
All content-sharing sites, including social media platforms, face the creation and spread of misinformation, which leads to wrong beliefs, a hyper-partisan atmosphere, and public harm. Given the dire consequences of misinformation for society, government agencies, academic researchers, and industry players are working to address misinformation creation and distribution on social media platforms. Experts have suggested leveraging the "wisdom of crowds" to identify misinformation in order to address the scalability issues of other solutions, such as professional fact-checking. However, the implications of such crowdsourcing programs for their participants have not been carefully studied in the field. We take a first step by leveraging the quasi-field experiment of Twitter's Birdwatch program to investigate the causal effect of participating in the crowdsourcing program on participants' subsequent activities, especially their propensity to generate content and create misinformation. Absent a direct measure of misinformation, we use cognition in writing as a proxy, given the well-documented strong correlation between cognition and the lessened generation or spread of misinformation. Our results show a positive treatment effect on such cognition, suggesting the program succeeds in dampening the generation of content containing misinformation. However, we find that users decrease the volume of content creation and the diversity of their content. More importantly, we also find lower average content engagement from other users, suggesting that the reduction in misinformation comes at the cost of lower volume, diversity, and interestingness of user-generated content. All of these results should concern platform owners. Our empirical research contributes to the literature on crowdsourcing and misinformation and provides significant implications for social media platform managers.
We investigate how the continuity of organizational participants’ careers is affected by their own, their peers’, and their subordinates’ detected misconduct. We also investigate the extent to which the effects of detected misconduct on organizational participants’ careers operate via the impact that detected misconduct has on the fate of their organizations. We explore these two questions using a comprehensive data set on performance-enhancing drug (PED) use in the global professional cycling industry between 1999 and 2011. During this period, the industry consisted of 7,193 workers (competitive cyclists in different performance categories) and 1,751 managers (both senior and assistant managers) who were citizens of 25 nations and employed by 420 organizations (teams) based in 11 countries. Our analyses focus on career interruptions experienced by riders, their teammates, and their managers following riders’ more or less definitive linkages to PED use (i.e., linkages ranging from suspicion of PED use to conviction for PED use). We conclude by discussing how our results contribute to a more comprehensive theoretical understanding of the effects of detected organizational misconduct.
The COVID-19 pandemic has created new opportunities to develop and deploy high-impact analytics to combat severe resource shortages in a rapidly evolving environment. Nursing organizations suffered both during and in the aftermath of the pandemic from excess demand for and a diminishing supply of nurses. Staffing inadequacy leads to high nurse burnout and turnover, decreased quality of care, worse patient outcomes, and widened disparities in health access. At the core of solving these issues are comprehensive, data-based analytics and predictions to understand: (i) the patient workload in real time; (ii) how to most efficiently allocate resources to all patients; and (iii) how to effectively create surge capacity in response to resource shortages. In this research, we leverage a suite of analytics tools to develop an integrated, comprehensive solution to support decisions on all these aspects. Specifically, we develop novel machine-learning-based occupancy forecasting models that account for different patient acuity levels. Using distributional information from this forecast, we generate workload scenarios for the hospital network, which are then fed into a two-stage stochastic program to support nurse deployment and surge planning decisions. Based on a close partnership with IU Health System, the largest health system in Indiana with 16 hospitals, we launched an academia-industry venture to implement and deploy our data-driven solution. The tool went live as a pilot in October 2021, and we logged the performance of its recommendations from October 2021 to March 2022 as proof of value. The analysis indicates system-wide improvements in all metrics: reductions of 5% in understaffing, 3% in misallocation of resource nurses, and 1% in overstaffing, with estimated annual savings of over $300K.
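To illustrate the scenario-to-deployment step, here is a minimal sketch, not the authors' implementation, of the deterministic equivalent of a two-stage stochastic staffing program in Python with PuLP; the units, scenario probabilities, censuses, nurse-to-patient ratios, and cost parameters are all hypothetical.

```python
# A minimal sketch of a two-stage stochastic nurse-staffing program
# (deterministic equivalent). All parameters below are hypothetical.
import pulp

units = ["ICU", "MedSurg"]
# scenario -> (probability, forecast patient census per unit)
scenarios = {
    "low":  (0.3, {"ICU": 8,  "MedSurg": 20}),
    "base": (0.5, {"ICU": 10, "MedSurg": 25}),
    "high": (0.2, {"ICU": 14, "MedSurg": 32}),
}
ratio = {"ICU": 2, "MedSurg": 5}     # patients one nurse can safely cover
wage, surge_premium = 1.0, 1.6       # relative cost of scheduled vs. surge nurses

prob = pulp.LpProblem("nurse_staffing", pulp.LpMinimize)
# First stage: nurses scheduled before the census is realized.
staff = {u: pulp.LpVariable(f"staff_{u}", lowBound=0) for u in units}
# Second stage: surge (float/agency) nurses added in each scenario.
surge = {(u, s): pulp.LpVariable(f"surge_{u}_{s}", lowBound=0)
         for u in units for s in scenarios}

# Objective: scheduled-staff cost plus expected surge cost.
prob += (pulp.lpSum(wage * staff[u] for u in units)
         + pulp.lpSum(p * surge_premium * surge[u, s]
                      for u in units for s, (p, _) in scenarios.items()))

# Coverage: in every scenario, scheduled plus surge nurses meet the workload.
for s, (_, census) in scenarios.items():
    for u in units:
        prob += ratio[u] * (staff[u] + surge[u, s]) >= census[u]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for u in units:
    print(u, "baseline nurses:", staff[u].value())
```

The first-stage staffing variables hedge across the forecast scenarios, while the recourse variables price the surge capacity invoked when a high-census scenario materializes.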
In this talk I will provide an overview of LinkedIn, its enterprise products, and where Data Science and AI provide business value. The talk will then go into more detail on a number of specific use cases where AI is used in these products, discussing not only the modeling details but also how AI is situated within these products and how different integration points in the system provide different types of business value.
We propose Active ESG Share, a novel metric that evaluates how a fund’s ESG strategy differs from that of its benchmark. Rather than focus on a fund’s Directional ESG—i.e., how does the fund’s average ESG rating compare to its benchmark’s average?—our metric compares the full distribution of a fund’s ESG rating to that of its benchmark. This approach allows us to capture the extent of a fund manager’s use of ESG information in portfolio construction. A relation between Active ESG Share and performance exists only for ESG funds, which we attribute to the effects of managerial specialization. Our results suggest that, while ESG ratings are financially material, that materiality is too complex to be operationalized by simply purchasing stocks with relatively high or low ESG ratings. Investors, nonetheless, disfavor high Active ESG Share when allocating capital.
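The abstract above does not spell out the metric's formula. Purely as a reading aid, and by analogy with Cremers and Petajisto's Active Share, a distribution-based comparison of this kind could take the form below, where $w_f(b)$ and $w_B(b)$ denote the fund's and the benchmark's portfolio weights falling in ESG-rating bucket $b$; this notation is introduced here for illustration and is not taken from the paper.

```latex
\[
\text{Active ESG Share} \;=\; \tfrac{1}{2}\sum_{b} \bigl|\, w_f(b) - w_B(b) \,\bigr|
\]
```

A Directional ESG comparison, by contrast, would collapse each distribution to its weighted-average rating before differencing, discarding exactly the distributional information the metric is designed to capture.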
In 2012, the Chinese Ministry of Finance issued a rule mandating that at least 80% of the Big 4 firms’ engagement partners must have CICPA qualifications by 2017. This rule required a reorganization of the firms’ human capital. We examine fourteen years around the implementation of the rule and rely on a difference-in-differences research design, comparing several outcomes between Big 4 and top-10 audit firms in China. We demonstrate that the Big 4 met the rule by promoting local talent, increasing the number of incoming partners with CICPA qualifications occupying junior roles, and diluting each partner’s share of the total firm’s clients. However, we do not find evidence that the rule influenced audit quality or had negative externalities for top-10 audit firms. Our findings suggest that the regulation achieved its intended objectives, primarily developing local human capital, without impairing audit quality.
Measured as yield spreads against AAA corporate bonds, the convenience premium of agency MBS averages 47 basis points over 1995–2021, about half of the long-term Treasury convenience premium. Both the MBS convenience premium and the issuance amount depend negatively on the mortgage rate, consistent with a prepayment-driven demand channel. This negative dependence contrasts strikingly with the positive dependence of the MBS-repo convenience premium on the level of interest rates implied by the “opportunity cost of money” hypothesis. The placement of the agencies into conservatorship in 2008 and the introduction of the liquidity coverage ratio in 2013 both significantly affect the convenience premium, consistent with the safety and regulatory-constraint channels of MBS demand. Based on “structural” restrictions in standard models, the ratio of MBS to Treasury convenience premia empirically pinpoints the time-varying MBS-specific safe-asset demand.
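For concreteness, the premium as measured above is a yield spread; restating the abstract's definition in symbols (notation introduced here for illustration):

```latex
\[
\mathrm{CP}^{\mathrm{MBS}}_t \;=\; y^{\mathrm{AAA}}_t - y^{\mathrm{MBS}}_t,
\qquad
\frac{1}{T}\sum_{t=1}^{T} \mathrm{CP}^{\mathrm{MBS}}_t \;\approx\; 47~\text{bps over 1995--2021}.
\]
```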
This article examines a major historical change in employers’ pay-setting practices. In the postwar decades, most U.S. employers used bureaucratic tools to measure the worth of each job. Starting in the 1980s, employers abandoned these practices and relied instead on external market data to assess the price of a candidate. In doing so, organizations tied employee pay more tightly to the external labor market. This presents a puzzle for organizational theories, which propose that organizations aim to buffer internal functions from the environment. To describe this shift, I use a new database of 1,059 publications from the Society of Human Resources Management and 83 interviews with compensation professionals. These data highlight the role of law. When the U.S. courts rejected comparable worth lawsuits in the 1980s, their decisions created an opportunity for employers to reduce liability for discrimination by relying on external, market data. Those legal decisions encouraged employers to abandon bureaucratic methods. The analysis identifies market coupling—using the market to distance organizations from discriminatory outcomes—as a response to the law and highlights how the comparable worth movement backfired by facilitating a change in organizational practices that entrenched inequalities.
Combining machine learning with econometric analysis is becoming increasingly prevalent in both research and practice. A common empirical strategy involves the application of predictive modeling techniques to "mine" variables of interest from available data, followed by the inclusion of those variables in an econometric framework, with the objective of estimating causal effects. Recent work highlights that, because the predictions from machine learning models are inevitably imperfect, econometric analyses based on the predicted variables are likely to suffer from bias due to measurement error. We propose a novel approach to mitigate these biases, leveraging the ensemble learning technique known as the random forest: we employ the random forest not just for prediction, but also for generating instrumental variables to address the measurement error embedded in the prediction. The random forest algorithm performs best when composed of a set of trees that are individually accurate in their predictions, yet which also make "different" mistakes, i.e., have weakly correlated prediction errors. A key observation is that these properties are closely related to the relevance and exclusion requirements of valid instrumental variables. We design a data-driven procedure to select tuples of individual trees from a random forest, in which one tree serves as the endogenous covariate and the other trees serve as its instruments. Simulation experiments demonstrate the efficacy of the proposed approach in mitigating estimation biases, and its superior performance over an alternative method (simulation-extrapolation) suggested by prior work as a reasonable way to address the measurement error problem.
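To make the mechanics concrete, here is a minimal Python sketch under strong simplifications: the data are synthetic, and the tree tuple is chosen naively (first tree as the endogenous covariate, the next few as instruments) rather than via the paper's data-driven selection procedure; sklearn and linearmodels are assumed to be available.

```python
# Sketch: random-forest trees as instruments for a mined (mismeasured) variable.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from linearmodels.iv import IV2SLS

rng = np.random.default_rng(0)
n = 10_000
X = rng.normal(size=(n, 5))                   # raw features used for "mining"
construct = X[:, 0] + 0.5 * X[:, 1] ** 2      # true variable of interest
y = 2.0 * construct + rng.normal(size=n)      # outcome; true slope = 2

# Fit the forest on one half; run the econometric model on the other half,
# so tree predictions there carry genuine (not memorized) prediction error.
X_tr, X_te, c_tr, _, _, y_te = train_test_split(
    X, construct, y, test_size=0.5, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, c_tr)

# Each individual tree yields an error-laden measurement of the construct;
# different trees make weakly correlated mistakes.
tree_preds = np.column_stack([t.predict(X_te) for t in rf.estimators_])

endog = tree_preds[:, [0]]        # one tree's prediction: the mismeasured covariate
instruments = tree_preds[:, 1:4]  # other trees' predictions: its instruments
exog = np.ones((len(y_te), 1))    # intercept

# 2SLS removes the attenuation bias that OLS on the noisy proxy suffers.
res = IV2SLS(y_te, exog, endog, instruments).fit()
print(res.params)                 # slope estimate should be near 2.0
```

Because the remaining trees are correlated with the construct but (ideally) make different mistakes from the first tree, they approximately satisfy the relevance and exclusion requirements, which is exactly the observation the abstract builds on.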
Host(s): Assistant Professor Tingting Nian
Speaker(s): Lynn Wu, Associate Professor of Operations, Information and Decisions
University: University of Pennsylvania, Wharton School of Business
Time: Friday, April 1, 2022; 11:00 AM – 12:30 PM PDT
Location: SB1 2321 (Judy Rosener Flexible Classroom)
We examine how data analytics can facilitate innovation in firms that have gone through an initial public offering (IPO). It has been documented that an IPO is associated with a decline in innovation, despite the infusion of capital from the IPO that should have spurred innovation. By assembling and analyzing multi-year panel data at the firm level, we find that firms that possess or acquire data analytics capability have sustained their rate of innovation relative to similar firms that have not acquired that capability. This effect is even greater when only machine learning capabilities are considered. Moreover, we find that this sustained rate of innovation is driven principally by the continued development of innovations that combine existing technologies into new ones, the form of innovation that is especially well supported by analytics. By examining the three main mechanisms that inhibit firm innovation after an IPO (short-term financial pressure, the cost of disclosure requirements, and managerial incentives), we find that data analytics can ameliorate the pressure of meeting short-term financial goals and disclosure requirements, but is limited in addressing managerial incentives. Overall, our results suggest that the increased deployment of analytics may reduce some of the innovative penalties of IPOs, and that investors and managers can potentially mitigate post-IPO reductions in innovative output by directing newly acquired capital to the acquisition of analytics capabilities.
How can firms protect new technological knowledge, and for how long? Although a considerable body of strategy research has explored mechanisms that support knowledge appropriation, this has typically focused on exogenous institutional factors such as the effectiveness of patent and contract enforcement, or on the characteristics of the technology itself. In this paper we call attention to a potential knowledge-protection mechanism that has received scant attention: a highly connected (or cohesive) intrafirm inventor network structure. Drawing on social network theory, we propose that organizations whose inventor networks are more connected enjoy greater appropriation through faster follow-on innovation relative to their rivals. Using patent data on nearly 1,400 large corporations over 33 years, we find evidence consistent with our hypotheses. We discuss implications for future research.
Mutual funds’ switches to monthly holdings disclosures reduce the efficiency of corporate investments. Consistent with a crowding-out mechanism, the evidence suggests that monthly portfolio disclosures discourage information production activities by other market participants and, consequently, reduce corporate managers’ ability to learn from prices. This effect increases with managers’ incentives to learn from prices and investors’ potential use of monthly fund disclosures. The study sheds light on the regulatory debate on the efficacy of making monthly holdings disclosures available to the public.
Host(s): Associate Professor Vibhanshu Abhishek
Speaker(s): Navdeep Sahni, Associate Professor of Marketing
University: Stanford University, Stanford Graduate School of Business
Time: Friday, February 25, 2022; 10:00 AM – 11:30 AM PST
Location: Zoom - https://uci.zoom.us/j/93417550211
Consumer inertia is a well-documented phenomenon that effectively creates market power for firms over their existing customer base. However, little is known about how self-aware consumers are of their inertia and how they preemptively respond to their future inertia. We quantify actual inertia, consumers' anticipated inertia, and their responses to it using a large-scale field experiment with a leading European newspaper. We vary the price, the duration, and whether a contract is automatically canceled or renews after a promotional period. We document higher subscription rates after the promo among those offered an auto-renewal contract, but at the same time find 24% fewer takers of any contract during the promotional period and 9% fewer subscribers at any point in the two years that follow. Leveraging the price and duration treatments, we estimate that the average consumer predicts a 13% chance of being inert within a month, versus an actual inertia of 36%. In a complementary approach, we classify potential subscribers into types: more than a third are sophisticates who avoid subscribing, a third are time-consistent consumers who cancel immediately after the promo, and only a tenth are naive enough to remain subscribed for more than three months due to inertia. Our results imply that more consumers avoid these mildly exploitative contracts than fall prey to them.
Host(s): Associate Professor Tonya Bradford
Speaker(s): André Martin, Marketing PhD Student
University: University of North Carolina, Kenan-Flagler Business School
Time: Friday, February 25, 2022; 10:00 AM – 11:30 AM PST
Location: SB1 5200 (Lyman Porter Colloquia Room)
As agile marketing firms increasingly invest in MarTech (Marketing Technology) infrastructure to gather and analyze customer data to spot opportunities and trends, the responsibility to protect these customer data becomes all the more important. Inadequately protecting data can have long-lasting negative consequences, not only for the affected customers but also for the affected firms. In 2021, the overall number of data compromises went up by more than 68 percent compared to the previous year (2021 Annual Data Breach Report). On average, each of these data breaches cost firms $8.64 million in damages (Varonis.com 2021). Factoring in the cost of reputational damage, these losses can swell considerably, as exemplified by Citi Group’s losses of $836 million following a data breach (Martin, Borah, and Palmatier 2017). The problem is so pervasive that former FBI Director Robert Mueller (2012) stated: “I am convinced that there are only two types of companies: those that have been hacked and those that will be.”
Overall, the negative consequences of data breaches go well beyond immediate fines and legal action, extending to loss of customer trust, damage to brand equity, and ultimately detrimental financial consequences. Prior research has taken first steps toward capturing and measuring the extent of these losses. The extant literature has established the negative impact of data breaches on the firm’s market value (Malhotra and Malhotra 2011), on risk perceptions (Aivazpour, Valecha, and Chakraborty 2018), on word-of-mouth and customer trust (Martin, Borah, and Palmatier 2017), and on customer spending (Janakiraman, Lim, and Rishika 2018).
Given the potential for negative implications of data breaches, the natural question is what firms can do to mitigate the impact and recover from the crisis. Martin, Borah, and Palmatier (2017) find that the transparency of firms’ data use and customers’ ability to control information flows affect trust and consumer behavior, emphasizing the role of a firm’s pre-crisis efforts and policy investments. Providing first insights into effective actions during and after breaches, Rasoulian et al. (2017) examine the role of compensation, process improvements, and apologies in minimizing the negative impact of data breaches on firms’ idiosyncratic risk. In our study, we dig deeper into data breach recovery options by systematically analyzing the short- and long-term impact of multiple crisis recovery communication options. Specifically, we use the typology proposed by Diesterhöft et al. (2020), which combines elements of blame attribution theory (Coombs 2007) into eight response categories: Offering Compensation, Offering Apology, Engaging in Whitewash, Taking Objective Actions, Reinforcing Value Commitment, Highlighting Customer Relationship, Transparently Informing on Type of Information Disclosed, and Offering Customer Advice. We study the extent to which each of these crisis communication elements mitigates data breach harm.
To gauge data breach harm, we examine its impact on a wide set of customer mind-set metrics. Customer mind-set metrics track brand health and allow firms to follow consumers along the brand funnel toward brand advocacy. As such, they measure ‘what a customer thinks’ and are leading indicators of future behavior (Srinivasan, Vanhuele, and Pauwels 2010). This gives firms advance notice to act appropriately to minimize the impact of MarTech crises. To this effect, we examine the impact on seven mind-set metrics: two top-of-the-funnel recall metrics (Buzz and Impression); four middle-of-the-funnel evaluation metrics (Quality, Value, Reputation, and Satisfaction); and one bottom-of-the-funnel commitment metric (Recommendation).
Empirically, we study the impact of data breaches using high-frequency (daily) data covering a variety of product and service categories and a broad set of brands. Specifically, we use daily brand-level customer mind-set metric data from YouGov between 2012 and 2021. Our data allow us to track over 2,000 brands in the US across a wide variety of industries (Financial Services, Hospitality, Travel, Retail, Healthcare, Consumables, and Durables). This permits us to examine brand- and industry-level heterogeneity and thus offer empirically generalizable implications. The high frequency and long time series allow us to model not just the short-term impact of MarTech crises but also the long-term impact, which, to the best of our knowledge, no research has examined to date. The real-time nature of the data gives managers instant access to decision dashboards for fast mitigation decisions.
To assess firms’ crisis communication styles, we focus on the direct communication a firm provides to its consumers following a data breach. Specifically, in the US market, state laws regulate privacy breaches and require affected firms to inform victims via a letter. This creates a direct communication channel between the firm and the customer that captures the firm’s intentions without the interpretation and framing of the news media. In our ten-year observation window, 198 brands were embroiled in a data breach; of these, 130 brands sent a letter to their consumers. We text-analyze these letters to measure the different communication recovery strategies. Next, we use dynamic panel data models, which control for firm-level and temporal unobserved heterogeneity, potential endogeneity biases, and several other sources of observed heterogeneity, to isolate the direct impact of data breaches and the moderating effect of firm communication strategies on customer mind-set metrics.
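As a stylized illustration of this moderation analysis, the following Python sketch regresses a daily mind-set metric on a breach indicator and a breach-with-apology indicator with brand and day fixed effects. The data, variable names, and the apology category are placeholders, and the authors' dynamic panel estimator additionally handles lagged outcomes and endogeneity, which this simplified sketch does not.

```python
# Sketch: fixed-effects panel regression of a mind-set metric on breach events
# moderated by a communication-strategy indicator (synthetic data throughout).
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(1)
brands, days = 50, 200
idx = pd.MultiIndex.from_product(
    [range(brands), pd.date_range("2020-01-01", periods=days)],
    names=["brand", "day"])
df = pd.DataFrame(index=idx)
df["breach"] = (rng.random(len(df)) < 0.02).astype(float)   # breach disclosed that day
# Breach letters that include an apology (one of the eight response categories):
df["apology"] = df["breach"] * (rng.random(len(df)) < 0.5)
df["metric"] = (-0.8 * df["breach"] + 0.3 * df["apology"]   # e.g., daily Buzz score
                + rng.normal(size=len(df)))

# Brand and day fixed effects absorb firm-level and temporal heterogeneity;
# the apology coefficient captures the moderating effect of the response style.
res = PanelOLS.from_formula(
    "metric ~ breach + apology + EntityEffects + TimeEffects",
    data=df).fit(cov_type="clustered", cluster_entity=True)
print(res.params)
```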
We find that privacy breaches negatively affect all seven dimensions of our customer mind-set metrics and, surprisingly, that some firm recovery strategies exacerbate the adverse effects of the breach. Moreover, using our parameter estimates, we assess to what extent and for how long the negative effects linger, and to what extent communication strategies speed up the path to recovery.
Host(s): Assistant Professor Luke Rhee
Speaker(s): Laura Poppo, Professor of Management
University: University of Nebraska-Lincoln, Nebraska College of Business
Time: Friday, February 18, 2022; 2:00 PM – 3:30 PM PST
Location: SB1 5200 (Lyman Porter Colloquia Room)
Over the decades, strategic management has evolved from an emphasis on simple adaptation (modifying the organization to better fit, or close the gap between, the organization and changes in its environment) to the finding and seizing of opportunities that have the potential to create value. How organizations go about this remains undertheorized. The most rigorously theorized account is problem-solving through problematic search. Left largely unaddressed is how managers go about ‘formulating’ a problem when the external environment is changing in novel and unsettled ways and the decision-making process is both unstructured and ambiguous. To explore this gap, we ask: what process supports the discovery and formulation of problems and enables the generation of useful and novel (i.e., creative) solutions? To ground this research, one co-author spent several years exploring, first broadly and then narrowly, through interviews and several organizational sites, the practice of strategic planning and corporate entrepreneurship. Based on themes identified in this qualitative work, we developed a multi-level perspective, strategic problem engagement. A service organization volunteered to participate in our empirical study. Critical to its selection was that top management (the CEO and corporate staff) was undertaking a system-wide strategic planning initiative focused on adapting the organization to novel, unsettled changes in the external environment and generating novel solutions.
Our results follow. First, we illustrate a multi-level approach to strategic problem engagement: both the TMT and teams of knowledge-diverse lower-level employees can be integral to the strategy formulation process and the generation of creative solutions. This multi-level approach helps overcome the cognitive limitations of bounded rationality that impede decision makers’ ability to identify and construct the right problem.
Second, we empirically demonstrate formulation as a process of strategic problem engagement involving two simultaneous activities: (1) problem engagement, the process of discovering the problem through exploring, identifying, defining, and reconstructing it; and (2) strategic engagement, the process of recognizing the factors that create organizational value and using them to further explore the problem. This extends prior conceptualizations of formulation as a sensing or awareness of a potential problem followed by a second stage of formulating a causal logic for how the issue in the environment relates to the organization.
Third, our results show that strategic engagement, not problem engagement, leads to the generation of more novel and useful solutions. This finding helps open the black box of creative synthesis. Finally, we examine additional factors that affect the cognitive and motivational challenges associated with complex problem solving.
Host(s): Assistant Professor Chenqi Zhu
Speaker(s): Brant Christensen, Associate Professor of Accounting, Glen McLaughlin Chair in Business Ethics
University: University of Oklahoma, Price College of Business
Time: Friday, February 18, 2022; 11:00 AM – 12:15 PM PST
Location: Zoom - https://uci.zoom.us/j/96996574801
Host(s): Associate Professor John Turner
Speaker(s): Luyi Yang, Assistant Professor of Operations & IT Management
University: University of California, Berkeley, Haas School of Business
Time: Friday, January 28, 2022; 10:00 AM - 11:30 AM PST
Location: Zoom - https://uci.zoom.us/j/95146558263?pwd=TUY2Zy9VaVlVV0RlR2pRalNUaHdHUT09
Host(s): Assistant Professor Tingting Nian
Speaker(s): John Horton, Associate Professor of Information Technologies
University: Massachusetts Institute of Technology, Sloan School of Management
Time: Friday, January 14, 2022; 11:00 AM - 12:30 PM PST
Location: Zoom - https://uci.zoom.us/j/95502043865