Economic/Political Economy
Friday, September 8
 

9:00am EDT

A Typology of Information Distribution Organizations
Over the past several decades, information distribution organizations (IDOs) have increasingly become the subject of law and policy considerations. IDOs are those organizations that play a significant role in the communication of information to news/information-seeking audiences. These may include traditional news organizations, but also internet-based entities like Google and Facebook, and those agencies for whom the internet and digital communication technologies have now become indispensable tools. This paper investigates the ways in which IDOs create, use, distribute, and store information in order to construct a taxonomy of these organizations and examine the many different categories into which they fall.

The purpose of this taxonomy is twofold. Definitions are important to considerations of privileges and responsibilities under certain laws. For example, many states have created so-called “reporter's privilege” or shield laws. Key to many of these statutes is a requirement that the individual claiming the privilege be working for some kind of “news” organization. But the definition of news and that of information can be decidedly different, and conflicts about who may claim the privilege have arisen.

Definitional issues also arise with responsibilities required of IDOs by law. In the United States, internet service providers, as many IDOs are, must be circumspect about how they handle the information they allow to be posted on their sites in order to enjoy a safe harbor in libel law. Similarly, ISPs must comply with immediate requests to remove information alleged to violate copyright. These policies, and those like them, reveal how central an organization's interaction with information is to the rights and responsibilities it is afforded. Of course, the consideration of how organizations exploit information is not solely a US policy phenomenon. The European Union conceptualization of the right to be forgotten, for instance, considers whether an organization is a data collector, controller, and/or processor, demonstrating, again, the importance of examining how these agencies use information.

This study, then, is useful for considering the kinds of IDOs that are most subject to policy decisions and requirements. It also provides a deeper understanding of the many ways in which these organizations are and may be regulated.

Moderators

Martin B. H. Weiss

University of Pittsburgh

Presenter

Jasmine E. McNealy

University of Florida


Friday September 8, 2017 9:00am - 9:33am EDT
ASLS Hazel Hall - Room 332

9:34am EDT

Technological Diversification into 'Blue Oceans'? A Patent-Based Analysis of Patent Profiles of ICT Firms
Over the last couple of years, considerable attention has been focused on the Internet of Things (IOT). Through combining a range of technologies with reductions in the cost and size of components, the IOT has begun to grow – not only is the number of connections growing rapidly, but the IOT can now be found across an ever wider array of sectors. Vodafone alone, for example, now claims to have more than 50 million IOT connections (Roberts, 2017). While IOT technologies are produced in industries such as aviation/automotive, electronics, medical equipment, software and services, telecommunications, and computer hardware (Sadowski, Nomaler et al. 2016), they are applied in a large variety of sectors such as smart cities (Baccarne, Mechant et al. 2014; Anthopoulos 2015), smart energy (Gans, Alberini, & Longo, 2013) or smart industries (Da Silveira, Borenstein et al. 2001; Fogliatto, Da Silveira et al. 2012).

The economic literature suggests that patent analysis can be used to examine the knowledge base and the technological diversification of companies (Kogut and Zander 1992; Teece, Pisano et al. 1997; Zack 1999). Although the existing knowledge of a firm provides a critical ingredient of competitive advantage and corporate success, the extent to which companies utilize technological diversification as a strategy to enter new technological areas has only recently begun to be investigated (Kodama 1986; Granstrand 2001; Breschi, Lissoni et al. 2003; Garcia-Vega 2006; Lin, Chen et al. 2006). Technological diversification has been defined as the extent to which firms use their knowledge base to diversify into related or unrelated technological fields (Kodama 1986; Lin, Chen et al. 2006). In this respect, technological diversification allows firms to enhance their competitive advantages in the market (Garcia-Vega 2006). In this context, Sadowski et al. (2016) have shown that a higher degree of technological diversification can lead to valuable technological specialization in new emerging technological fields such as the Internet of things (IoT) (Sadowski, Nomaler et al. 2016).

Research has shown that the entry decisions of incumbent companies into new markets are affected by convergence (i.e., the blurring of boundaries between hitherto separate sectors) and increased competition in existing markets (Katz 1996). More recently, it has been demonstrated that firms prepare for a possible entry into these markets by anticipating and monitoring processes of convergence across different sectors (Curran, Bröring et al. 2010; Curran and Leker 2011). As a response to convergence, companies diversify into new markets based on their existing competencies and resources, since these change at a much slower pace than technologies and market conditions in converging sectors. Within resource-based view theory (Wernerfelt 1984; Barney 2006), diversification into new emerging markets has been conceptualized as a “Blue Ocean” strategy (Kim and Mauborgne 2005; Kim and Mauborgne 2014) aimed at discovering (and benefiting from) pioneering innovations in these markets (van de Vrande, Vanhaverbeke et al. 2011). In exploring new technological opportunities in emerging markets, incumbent companies are able to enter “blue oceans” of uncontested market space instead of battling competitors in traditional “red oceans”. In entering a new “blue ocean” market, incumbent companies are able to unlock new demand, as competition is irrelevant in these markets (Kim and Mauborgne 2014). In this tradition, research has rarely addressed the extent to which technological diversification into new markets has improved the knowledge position of incumbent companies. Although technological diversification into IoT has been a common strategy of ICT companies over at least the past twenty years, large differences persist with respect to their positioning in these new emerging markets (Sadowski, Nomaler et al. 2016).

We follow Sadowski, Nomaler & Whalley (2016) in defining the IOT. This definition enables us to identify relevant patents, which are then allocated to a specific company. Our study identifies 1,322 ICT companies involved in IoT technologies, which we classify according to the similarity of their patent profiles. We group companies together on the basis of their patenting activity, thereby identifying a series of clusters. Given the volume of IOT patents and the number of companies involved, we then focus our analysis on healthcare and energy. Both sectors are often discussed as being characterised by a series of challenges that the IOT can, at least partially, help to resolve through collecting more data, facilitating its analysis, etc.
Not only does our analysis identify the leading actors present in the healthcare and energy areas, as determined by the number of patents and technological diversification, but it also demonstrates that previous experience of ICT patenting does not necessarily result in a substantial presence in these two areas. One way that this can be conceptualised is in terms of “red oceans” and “blue oceans” noted above (Kim & Mauborgne, 2015). We explore this distinction within healthcare and energy by investigating the extent to which the IOT patent portfolios of companies overlap with one another. We find that there is considerable variation in the overlap that exists across our sample.
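To illustrate the kind of analysis involved, the following is a minimal sketch (in Python) of grouping firms by the similarity of their patent profiles and measuring pairwise portfolio overlap. The firm-by-technology-class count matrix, the number of clusters, and all values are hypothetical, not drawn from the study's data.

```python
# Minimal sketch: cluster firms by the similarity of their patent profiles and
# measure pairwise portfolio overlap. The firm-by-class count matrix is hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Rows: firms; columns: counts of IoT-related patents per technology class.
patent_counts = np.array([
    [120, 30,  5,  0],   # firm A
    [110, 25, 10,  2],   # firm B
    [  3,  8, 90, 40],   # firm C
    [  1,  5, 85, 50],   # firm D
])

# Normalise to profile shares so grouping reflects portfolio mix, not size.
profiles = patent_counts / patent_counts.sum(axis=1, keepdims=True)

# Group firms with similar profiles (the number of clusters is a modelling choice).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)

# Pairwise portfolio overlap, proxied here by cosine similarity.
overlap = cosine_similarity(profiles)
print(labels)
print(np.round(overlap, 2))
```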

Moderators

Martin B. H. Weiss

University of Pittsburgh

Presenter
Author

Jason Whalley

Northumbria University

Friday September 8, 2017 9:34am - 10:07am EDT
ASLS Hazel Hall - Room 332

10:07am EDT

Using Aggregate Market Data to Estimate Patent Value
Intellectual property and its protection is one of the most valuable assets for entrepreneurs and firms in the information economy. This article describes a relatively straightforward method for measuring patent value with aggregate market data and the BLP model. We apply the method to United States smartphones. The demand estimates and recovered marginal costs produce sensible simulations of equilibria prices and shares from several hypothetical patent infringements. In one simulation, the presence of near field communication on the dominant firm’s flagship smartphone results in a 26 percent increase in profits per phone. This estimate provides a starting point for establishing a reasonable royalty between the patent holder and the dominant firm in a hypothetical negotiation.
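As a rough illustration of the counterfactual logic (not the BLP estimator the paper uses), the sketch below computes equilibrium prices and per-phone margins in a simple logit duopoly with and without a patented feature on the dominant firm's flagship; all coefficients, costs, and the resulting percentage are hypothetical.

```python
# Much-simplified logit illustration of the counterfactual described above.
# Not the paper's BLP model; every number here is hypothetical.
import numpy as np

alpha = 0.01                          # price coefficient (per dollar)
beta0, beta_nfc = 2.0, 0.4            # baseline appeal, value of the feature
costs = np.array([350.0, 330.0])      # recovered marginal costs
x_nfc = np.array([1.0, 0.0])          # flagship has the feature, rival does not

def logit_shares(prices, feature_on):
    delta = beta0 + beta_nfc * x_nfc * feature_on - alpha * prices
    expd = np.exp(delta)
    return expd / (1.0 + expd.sum())   # outside good utility normalised to 0

def equilibrium(feature_on, iters=500):
    # Bertrand-Nash fixed point for single-product logit firms:
    # p_j = c_j + 1 / (alpha * (1 - s_j))
    p = costs + 100.0
    for _ in range(iters):
        s = logit_shares(p, feature_on)
        p = costs + 1.0 / (alpha * (1.0 - s))
    return p

p_on, p_off = equilibrium(1.0), equilibrium(0.0)
margin_on, margin_off = p_on[0] - costs[0], p_off[0] - costs[0]
print("flagship margin with feature: %.1f, without: %.1f (+%.1f%%)"
      % (margin_on, margin_off, 100 * (margin_on / margin_off - 1)))
```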

Moderators

Martin B. H. Weiss

University of Pittsburgh

Presenter

Scott Hiller

Fairfield University


Friday September 8, 2017 10:07am - 10:40am EDT
ASLS Hazel Hall - Room 332

4:10pm EDT

Communications Act 2021
The Communications Act of 1934, as amended by the Telecommunications Act of 1996, is showing its age. Like an old New England house to which drafty additions were tacked on over the years to house a growing extended family, the Act is poorly suited to meet today's challenges. Much of what is included in the Act relates to earlier technologies, market structures, and regulatory constructs that address issues that are either no longer relevant or that cause confusion when one tries to map them to current circumstances. The legacy Act was crafted in a world of circuit-switched POTS telephony provided by public utilities, and even when substantially revised in 1996, barely mentions broadband or the Internet.

Moreover, the FCC has struggled in recent years to establish its authority to regulate broadband services and to craft a framework to protect an Open Internet (sometimes referred to as Network Neutrality). While many of the fundamental concerns that the legacy Act addressed remain core concerns for public policy, the technology, market, and policy environment has changed substantially. For example, we believe that universal access to broadband and Internet services are important policy goals, but we do not believe that the current framework enshrined in Title II of the legacy Act does a good job of advancing those goals.

In this paper, we identify the key concerns that a new Act should address and those issues in the legacy Act that may be of diminished importance. We propose a list of the key Titles that a new Communications Act of 2021 might include and identify their critical provisions. Our straw man proposal includes six titles: Title I establishes the basic goals of the Act and sets forth the scope and authority for the FCC; Title II provides the basic framework for regulating potential bottlenecks; Title III establishes a framework for monitoring the performance of communications markets, for addressing market failures, and for promoting industrial policy goals; Title IV focuses on managing radio-frequency spectrum; Title V focuses on public safety and critical infrastructure; and Title VI addresses the transition plan.

Our goal is to provoke a discussion about what a new Act might look like in an ideal, clean-slate world; not to address the political, procedural, or legal challenges that necessarily would confront any attempt at major reform. That such challenges are daunting we take as given and as a partial explanation for why the legacy Act has survived so long. Nevertheless, it is worthwhile having a clear picture of what a new Communications Act should include and the benefits that having a new Act might offer so we can better judge what our priorities ought to be and what reforms might best be attempted.

Moderators

Olga Ukhaneva

Navigant and Georgetown University

Presenter

William Lehr

Massachusetts Institute of Technology

Author

Douglas Sicker

College of Engineering, Design and Computing

Friday September 8, 2017 4:10pm - 4:43pm EDT
ASLS Hazel Hall - Room 332

4:10pm EDT

Sensitive-by-Distance: Quasi-Health Data in the Algorithmic Era
“Quantified Self” apps and wearable devices collect and process an enormous amount of “quasi-health” data — information that does not fit within the legal definition of “health data”, but that is otherwise revelatory of individuals’ past, present, and future health statuses (like information about sleep-wake schedules or eating habits).

This article offers a new perspective on the boundaries between health and non-health data: the “data-sensitiveness-by-(computational)-distance” approach — or, more simply, the “sensitive-by-distance” approach. This approach takes into account two variables: the intrinsic sensitiveness of personal data (a static variable) and the computational distance (a dynamic variable) between some kinds of personal data and pure health (or sensitive) data, which depends upon the computational capacity available in a given historical period of technological (and scientific) development.

Computational distance should be considered both objectively and subjectively. From an objective perspective, it depends on at least three factors: (1) the level of development of data retrieval technologies at a certain moment; (2) the availability of “accessory data” (personal or non-personal information), and (3) the applicable legal restraints on processing (or re-processing) data. From a subjective perspective, computational capacity depends on the specific data mining efforts (or ability to invest in them) taken by a given data controller: economic resources, human resources, and the utilization of accessory data.

A direct consequence of the expansion of augmented humanity in collecting and inferring personal data is the increasing loss of health data processing “legibility” for data subjects. Consequently, the first challenge to be addressed when searching for a balancing test between individual interests and other (public or commercial) interests is the achievement of a higher level of health data processing legibility, and thereby the empowerment of individuals’ roles in that processing. This is already possible by exploiting existing legal tools to empower data subjects — for instance, by supporting the full exercise of the right to access (i.e. awareness about the purposes of processing and the logic involved in automated profiling), the right to data portability, and the right not to be subject to automated profiling.

Moderators

Tim Brennan

Professor Emeritus, UMBC

Presenter

Gianclaudio Malgieri

Vrije Universiteit Brussel


Friday September 8, 2017 4:10pm - 4:43pm EDT
Founders Hall - Auditorium

4:43pm EDT

Identifying Market Power in Times of Constant Change
We show that traditional approaches to defining markets to investigate market power fail in times of rapid technological change because demand and supply are in constant flux. Currently, empirical analyses of market power rely upon historical data, the value of which degrades over time, possibly resulting in harmful regulatory decisions. This points to a need for a different approach to determining when regulation is an appropriate response to market power. We present an approach that relies upon essential factors leading to monopoly (EFs), such as control of essential resources, which persist across generations of products. Market power analyses should be a search for EFs and policy responses should focus on diffusing market power without destroying value.

This issue is particularly important for the broad category of telecommunications, as telecommunications continues to evolve from services provided via specialized networks to services provided by apps residing on generalized networks designed primarily to accommodate data. This transition in services and networks is disruptive to business and regulatory models that are based on the traditional network paradigm.

One failing of the traditional regulatory approach is the problem of analysis decay. While reliance on historical data is appropriate under stable market conditions because it grounds the analyses in real experiences, it provides invalid results when demand characteristics are unstable or unknown, such as in rapidly changing markets and emerging products.

We use over-the-top (OTT) services as an example. Three issues may arise with OTT providers. One issue is whether the OTT provider should be considered a telecommunications provider. Per our analysis, the OTT provider is not a provider of a physical communications channel and so is not a telecommunications carrier but rather a software interface for customers. The OTT provider does not compete with telecommunications channels and is indeed dependent on them.

Another issue is how an OTT provider competes with telecommunications providers. We assert it is futile to base policy or regulation on a product rivalry when product definitions evolve rapidly. Even if one could conduct a valid analysis, its relevance would quickly decay. Instead, decisions on whether to regulate should be based on analyzing whether any operator possesses EFs. Service operators that do not should not be subjected to economic regulation, except to address consumer protection issues and perhaps network interconnection. Operators that do possess EFs will possess market power over time and over generations of products. How this market power should be addressed would depend upon the specifics of the situation.

A less prominent issue is the regulator’s role in the evolution of traditional telecommunications providers’ business models. Sometimes telecommunications providers seek to have regulations imposed on OTT providers. In our analysis this is an issue of how traditional operators will evolve their business models to an NGN world.

This theoretical analysis currently is complete. The next step is to provide a strong grounding in actual cases of factors that created market power to determine possibilities of impacts on antitrust policy. Some practitioners might resist this research as it calls into question the usefulness of a cottage industry of economists, lawyers, and policy-makers; however, applicable empirical work will inform the value of the EFs approach. TPRC’s combination of policy-makers, lawyers, economists, and industry leaders will lend itself well to this issue.

Moderators

Tim Brennan

Professor Emeritus, UMBC

Presenter

Janice Hauge

Professor, University of North Texas


Friday September 8, 2017 4:43pm - 5:15pm EDT
Founders Hall - Auditorium

4:43pm EDT

Price and Quality Competition in U.S. Broadband Service Markets
Official government price indexes show both residential and business wired internet access prices essentially flat or increasing in the United States since 2007. In stark contrast, prices for wireless telecommunications services have been falling at a consistent rate of about 2 to 4 percent per year over this period, while mobile broadband data prices appear to have been falling at rates a full order of magnitude greater. Can the sluggish pace of price decline in official data on wireline broadband service prices be explained by unmeasured quality improvement?

In the first part of this paper, we construct direct measures of changes in wired broadband service quality over time, utilizing a relatively large sample of US households. The results show positive, statistically and economically significant rates of improvement in delivered broadband speed within given service quality tiers for most U.S. internet service providers in recent years, as well as a general shift within households toward higher service quality tiers. Improvements in performance within speed tiers appear to be comparable in magnitude to rates of improvement in quality-adjusted price indexes that have been estimated in econometric studies of broadband service prices. These statistical results are then used to construct within-tier service quality indexes, based on delivered vs. advertised data rates, for individual US broadband service providers.
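A minimal sketch of the kind of within-tier quality index described here — the ratio of delivered to advertised speed, averaged by ISP, tier, and period — is given below; the column names and sample records are hypothetical, not the paper's data.

```python
# Minimal sketch of a within-tier service-quality index: delivered / advertised
# download speed, averaged by ISP, tier, and period. Sample data are hypothetical.
import pandas as pd

tests = pd.DataFrame({
    "isp":       ["A", "A", "A", "B", "B", "B"],
    "tier_mbps": [100, 100, 100, 100, 100, 100],
    "period":    ["2014Q1", "2016Q4", "2016Q4", "2014Q1", "2016Q4", "2016Q4"],
    "delivered": [82.0, 97.0, 99.0, 70.0, 88.0, 90.0],   # measured Mbps
})

tests["quality"] = tests["delivered"] / tests["tier_mbps"]
index = (tests.groupby(["isp", "tier_mbps", "period"])["quality"]
              .mean()
              .unstack("period"))
# Within-tier quality change, expressed as an index relative to the base period.
print(index.div(index["2014Q1"], axis=0).round(3))
```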

In the second part of this paper, the effects of competition on within speed-tier quality improvement within US broadband markets are analyzed. We construct a dataset that allows us to model broadband quality improvement within core-based statistical areas (CBSAs). Our identification strategy for teasing out the short-run impact of increased competition on broadband quality allows for household-specific fixed effects, as well as controls for a variety of socio-economic characteristics of households within geographic areas, and hinges on the assumption that ISP-specific upgrades to capacity within a market (CBSA) potentially affect all households served by that ISP. Our results show that in census tracts with large numbers of wireless mobile ISPs, quality improvement (measured by delivered speed) is greater than in census tracts that lack large numbers of wireless mobile competitors. Inter-modal broadband competition (wireless mobile vs. wireline and fixed wireless), at least in the short run, appears to have statistically and economically significant impacts on delivered service quality.

In the final section of this paper, the ISP-level quality indexes previously constructed are combined with data on broadband service prices, and measured characteristics of broadband service, from smaller, random samples of U.S. urban census tracts. A hedonic price index for U.S. residential broadband service is constructed, using a hedonic regression model estimated over pairs of adjacent time periods. This quality-adjusted price index spans the period from January 2014 to October 2016. Our hedonic price index shows quality-adjusted prices declining at annualized rates of approximately 3 to 4 percent. These magnitudes are only a little larger than our previous direct estimates of quality improvement. We conclude that quality of delivered service, both within and across service tiers, is the primary dimension of competition amongst US broadband providers, and that the benefits of within-tier delivered speed improvement are substantial in magnitude when compared to quality-adjusted price declines estimated using hedonic methods.
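For readers unfamiliar with adjacent-period hedonic indexes, the following is a minimal sketch of the idea: regress log price on service characteristics plus a period dummy, whose coefficient approximates the quality-adjusted price change. The data are simulated and the coefficients hypothetical; this is not the paper's specification.

```python
# Minimal sketch of an adjacent-period hedonic regression on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
speed = rng.choice([25, 50, 100, 300], size=n)        # advertised Mbps
period = rng.integers(0, 2, size=n)                   # 0 = period t, 1 = period t+1
log_price = 2.5 + 0.4 * np.log(speed) - 0.035 * period + rng.normal(0, 0.05, n)

X = sm.add_constant(np.column_stack([np.log(speed), period]))
fit = sm.OLS(log_price, X).fit()
# The period-dummy coefficient approximates the quality-adjusted log price change.
print("quality-adjusted price change: %.1f%%" % (100 * fit.params[2]))
```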

Moderators

Olga Ukhaneva

Navigant and Georgetown University

Presenter

Kenneth Flamm

The University of Texas at Austin


Friday September 8, 2017 4:43pm - 5:15pm EDT
ASLS Hazel Hall - Room 332

5:15pm EDT

Beyond the Mogul: From Media Conglomerates to Portfolio Media
Media ownership and market concentration are important topics of public debate and policy analysis. Today we are witnessing a new chapter in that discussion. It is important for the policy analysis community to look ahead and provide thought leadership.

For a long time, critics of powerful private media focused on the classic moguls of the Murdoch and Redstone kind. More recent trends raise concerns of a different nature, about media being increasingly controlled by large interests outside the media sector. An example is Jeff Bezos of Amazon buying the Washington Post. Similar acquisitions can be observed around the world. This has become known as “media capture.” This paper will take the discussion one step further by quantifying the development and identifying the dynamics of such outside ownership.

The analysis is based on a quantitative study of media companies and ownerships, using a large and unique global database of ownership and market share information covering 30 countries, 13 media industries, and 20 years. A wide-ranging analysis across countries, industries, and time periods permits us to identify general trends and avoid a discussion that is usually mostly anecdotal.

The analysis shows, so far, that entry into media by non-media firms follows three phases, each with a different priority:
Stage 1: Seeking influence
Stage 2: Seeking business synergies
Stage 3: Seeking portfolio diversification

The analysis, so far, shows that the ownership of media by industrial companies as a way to create direct personal and corporate political influence has been declining in rich countries. The second phase for such a non-media/media cross-ownership is based on more direct business factors of economic synergies. It, too, has been declining in many rich countries.

On the other hand, there has been a significant growth of outside-ownership of an indirect type, through financial intermediaries of private equity finance and institutional investment funds.

In contrast, the media systems of emerging and developing countries are still operating in the first two phases of outside-ownership, centered on projection of influence, and seeking conglomerate business synergies.

Will these divergent trends in media control lead to fundamentally divergent media systems? It is likely that these dynamics will lead to a “capture gap” in the media of emerging and rich societies. Media in the former would be significantly controlled by the seekers of personal influence – “crony capitalists” – and conglomerateurs, while media in the latter would be subject to professional investor imperatives of profitability, growth, and portfolio diversification. The same financial institutions from rich countries are also likely to seek acquisitions in the emerging markets by leapfrogging the two other stages and investing directly. If this were to play itself out freely, a global media system might emerge whose ownership is centered not on individual moguls or conglomerates but on international financial institutions based in a few financial centers.

The responses are then predictable. Countries will impose restrictions on foreign ownership of media. And domestic conglomerates that step in and assume control will wrap themselves in the flag as protectors of national sovereignty. Media control by industrial firms will become patriotic.

Thus, the emerging challenges to diverse and pluralistic media come less from inside the media and its large media companies, and more from the outside, through ownership by non-media organizations: financial institutions in rich countries, and a combination of domestic industrial and foreign financial firms in poor and emerging countries.

The paper will conclude with an analysis of the policy issues and regulatory responses.

Moderators

Tim Brennan

Professor Emeritus, UMBC

Presenter

Eli M. Noam

Columbia University


Friday September 8, 2017 5:15pm - 5:50pm EDT
Founders Hall - Auditorium

5:15pm EDT

Complementary Realities: Public Domain Internet Measurements in the Development of Canada's Universal Access Policies
Internet measurement has become a hot topic in Canada after the Canadian Radio-television and Telecommunications Commission (CRTC) reclassified both fixed and mobile broadband Internet as a “basic service”. It has set a goal that all Canadians should have access to 50 Mbps download speeds by 2020. The CRTC, in cooperation with the federal government, intends to reach that goal through new programs to fund broadband development. Given the newfound opportunity for broadband performance metrics to inform public policy, this paper evaluates the potentials and pitfalls of Internet measurement in Canada.

Effective usage of Internet measurement for broadband policy is threatened by:

1. Lack of comparative understanding of Internet measurement platforms: Demand for information about the service quality operators deliver has led to the development of a wide variety of methodologies and testbeds that purport to offer a realistic picture of the speeds and quality of what is now an essential service. Due to their distinctive methodologies and approaches to aggregating individual connection diagnosis tests, different sources of broadband speed measurements can generate inconsistent results both in terms of absolute performance metrics and in relative terms (e.g. across jurisdictions, operators, etc.). These inconsistencies can lead to confusion for both consumers and policymakers, leading to sub-optimal decisions in terms of operator selection and public policy development.

2. Reliance on marketing and advertising to evaluate performance: Advertised, “up to” speeds do not reflect the realities of Internet use and digital divides in Canada. For example, the CRTC has concluded that connections with speeds higher than 50 Mbps are already available to more than 80% of Canadians who live in urban areas of the country. This creates the perception that the problem of universal access is only a rural one. While the CRTC has determined that minimum speeds should reflect actual performance and Quality of Service (QoS) indicators (e.g. latency, packet loss, jitter, etc.), these targets have yet to be adopted in public policy.

To address these concerns, this paper adopts an analytical approach that emphasizes how multiple and potentially inconsistent Internet measurements can be combined to complement each other in developing a richer picture of broadband performance. Drawing on prior comparative research, much of it presented at TPRC, we provide an overview of different approaches to broadband speed measurement and the perspectives they offer on Internet infrastructure quality in Canada. Through a review of comparative metrics, we illustrate how latency functions as an effective measure of performance. Finally, through computational analysis of the Measurement Lab data set, we evaluate the level of broadband inequity in Canada and recommend minimum service quality standards in terms of latency.
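As a minimal illustration of the latency-based approach, the sketch below summarises latency percentiles by region from Measurement-Lab-style test records; the field names and sample values are hypothetical, not drawn from the actual M-Lab data set.

```python
# Minimal sketch: summarise latency as a performance measure by region.
# Field names and sample records are hypothetical.
import pandas as pd

tests = pd.DataFrame({
    "region":        ["Urban ON", "Urban ON", "Rural MB", "Rural MB", "Rural MB"],
    "latency_ms":    [18.0, 22.0, 95.0, 120.0, 88.0],
    "download_mbps": [92.0, 88.0, 9.0, 6.5, 11.0],
})

summary = tests.groupby("region").agg(
    median_latency_ms=("latency_ms", "median"),
    p90_latency_ms=("latency_ms", lambda s: s.quantile(0.9)),
    median_down_mbps=("download_mbps", "median"),
)
# Gaps between regions in the latency columns are one view of broadband inequity.
print(summary.round(1))
```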

Presenter

Fenwick McKelvey

Concordia University

Author

Reza Rajbiun

Ryerson University

Friday September 8, 2017 5:15pm - 5:50pm EDT
ASLS Hazel Hall - Room 332
 
Saturday, September 9
 

9:00am EDT

Policy Alternatives for Better and Wiser Use of the NGN: Competition, Functional Separation, or What?
Background
To cope with increasing traffic and to meet the needs of various ISP services, NTT East and West (the NTT locals) launched an NGN (Next Generation Network) service, which enables rapid and large-volume data transmission, in 2008. The NGN is different from FTTH: not only is it a bandwidth-guaranteed network, but QoS can also be controlled, making it the most advanced network. The number of NTT local NGN subscribers amounted to 18 million as of March 2016.
Content has become richer and richer, the migration of the PSTN to the IP network has started to be discussed, and the NTT locals are proposing that the PSTN be accommodated by the NGN. Although FTTH is a best-effort service, the NGN guarantees bandwidth and can handle voice traffic as well. Regulations of NTT's FTTH related to unbundling and connection charges have already been fully implemented, which has driven its rapid diffusion in Japan. The NGN has been less utilized by competitors and thus, except for the unbundling of minor services, these issues have received little attention.

NGN Policy issues
Because of the increase in demand for the NGN, competitors including carriers, broadband providers, and ISPs have been asking for the same regulations as FTTH, arguing that the NGN is an essential facility. Not all countries implement unbundling, since it kills carriers' incentive to deploy FTTH networks. FTTH network coverage currently amounts to 95% of Japan, and the rate of growth of FTTH subscribers has been declining, implying that it is approaching saturation. NTT's share of subscribers is 70%, and its share of facilities is 78%. If the PSTN is accommodated by the NGN, NTT's total share will surely increase, because the number of legacy subscribers is 23 million and NTT's share of them is 99.8%. As migration proceeds, NTT's share is expected to rise. In this sense, applying competition policy to the NGN is one possible alternative.
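A back-of-the-envelope illustration (in Python) of why migration raises NTT's overall share is given below; treating the NGN pool's NTT share as roughly the 70% subscriber share quoted above is an assumption made only for this sketch.

```python
# Back-of-the-envelope: blended NTT share if the PSTN pool migrates onto the NGN.
# Assumes (for illustration only) that NTT's share of the NGN pool is ~70%.
ngn_subs, ngn_ntt_share = 18e6, 0.70      # NGN pool, assumed NTT share
pstn_subs, pstn_ntt_share = 23e6, 0.998   # legacy PSTN pool, NTT share

blended = (ngn_subs * ngn_ntt_share + pstn_subs * pstn_ntt_share) / (ngn_subs + pstn_subs)
print("blended NTT share after migration: %.1f%%" % (100 * blended))  # ~86.7%
```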

Another alternative?
In framing NGN policy, the key considerations are NTT's share and the essentiality of the NGN. However, there is one more option: functional separation. Accounting separation and functional separation have already been implemented for NTT's FTTH, and operational separation and ownership separation are further alternatives. Regarding functional separation, the crucial issues include incentives for deployment, the promotion of competition among firms for access to the NGN, and the efficiency of vertically separated networks.
The policy alternatives mentioned are two extremes – unbundling and functional separation – but other options exist between these two. This study aims to identify the appropriate policy, particularly by considering policy goals and policy evaluation. Traditional market-share measures are no longer sufficient in the current telecommunications environment. The NGN should be accessible to all entities and utilized for various applications leading to new economies such as Industry 4.0, Telecommunications 4.0, or Telemedicine 2.0. This study thus focuses on how the NGN can be utilized fully and wisely as we move toward the age of IoT and 5G.

Presenter

Masatsugu Tsuji

Professor, Kobe International University

Author

Bronwyn E. Howell

Victoria University of Wellington

Sobee Shinohara

KDDI Institute, Inc.

Saturday September 9, 2017 9:00am - 9:33am EDT
ASLS Hazel Hall - Room 332

9:34am EDT

Degrees of Ignorance About the Costs of Data Breaches: What Policymakers Can and Can't Do About the Lack of Good Empirical Data

Estimates of the costs incurred by a data breach can vary enormously. For instance, a 2015 Congressional Research Service report titled “The Target and Other Financial Data Breaches: Frequently Asked Questions” compiled seven different sources’ estimates of the total losses resulting from the 2013 Target breach, ranging from $11 million to $4.9 billion. The high degree of uncertainty and variability surrounding cost estimates for cybersecurity incidents has serious policy consequences, including making it more difficult to foster robust insurance markets for these risks as well as to make decisions about the appropriate level of investment in security controls and defensive interventions. Multiple factors contribute to the poor data quality, including that cybercrime is continuously evolving, cyber criminals succeed by covering their tracks and victims often see more risk than benefit in sharing information. Moreover, the data that does exist is often criticized for an over-reliance on self-reported survey data and the tendency of many security firms to overestimate the costs associated with security breaches in an effort to further promote their own products and services. 

While the general lack of good cost data presents a significant impediment to informed decision-making, ignorance of the economic impacts of data breaches varies across categories of costs, events, and stakeholders. Moreover, the need for precision, accuracy, or concurrence in data estimates varies depending on the specific decisions the data is intended to inform. Our overarching goals in this paper are to clarify which types of cybersecurity cost data are more easily collected than others; how policymakers might improve data access and why previous policy-based efforts to do so have largely failed; and what differential ignorance implies for cybersecurity policy and investment in cyber defenses and mitigation.

To address these questions, we examine several common presumptions about the relative magnitudes of cybercrime cost effects for which generally accepted and reasonably precise quantitative estimates are lacking. For example, we review the evidence supporting the commonly accepted and often cited claims that the aggregate investments in defending against and remediating cybercrimes significantly exceed the aggregate investments by attackers; and that the aggregate harm suffered by victims of cybercrimes exceeds the benefits realized by attackers. There are other such statements that are more contentious. For example, it is unclear whether the aggregate expenditures on cyber defense and remediation exceed the aggregate harms from cybercrimes; or whether a significant change in expenditures on cyber defense and remediation would result in proportionately larger changes in the harms resulting from cybercrimes. For each of these presumptions, we consider the existing evidence, what additional evidence might be needed to develop more precise quantitative estimates, and what better estimates might imply for cyber policy and investment. 

We argue that the persistent inability to accurately estimate certain types of costs associated with data breaches—especially reputational and loss-of-future-business costs—has played an outsize and detrimental role in dissuading policy-makers from pursuing the collection of cost data related to other, much less fundamentally uncertain costs, including legal fees, ex-ante defense investments, and credit monitoring and notification. Finally, we propose steps for policy-makers to take towards aggregating more reliable, consistently collected cost data associated with data breaches for the categories of costs that are most susceptible to rigorous measurement, without getting too bogged down in discussions of the costs that are most difficult to measure, and which are therefore, by necessity, likely to remain most uncertain. We argue that the high degree of ignorance and uncertainty surrounding this subset of data breach costs should not be used as a reason to abandon measurement of other types of losses incurred by these incidents, and that explicit consideration of our differential ignorance of breach cost elements can help us better understand which questions about the economic impacts of data breaches can and cannot be meaningfully answered. 


 


Presenter

Josephine Wolff

Rochester Institute of Technology

Author

William Lehr

Massachusetts Institute of Technology

Saturday September 9, 2017 9:34am - 10:07am EDT
ASLS Hazel Hall - Room 332

10:07am EDT

Content Analysis of Cyber Insurance Policies: How Do Carriers Write Policies and Price Cyber Risk?
Cyber insurance is a broad term for insurance policies that address first and third party losses as a result of a computer-based attack or malfunction of a firm’s information technology systems. For example, one carrier’s policy defines computer attacks as, “A hacking event or other instance of an unauthorized person gaining access to the computer system, [an] attack against the system by a virus or other malware, or [a] denial of service attack against the insured’s system.”

Despite the strong growth of the cyber insurance market over the past decade, insurance carriers still face a number of key challenges: how to develop competitive policies that cover common losses but also exclude risky events; how to assess the variation in risk across potential insureds; and how to translate this variation into an appropriate pricing schedule.

In this research paper, we seek to answer fundamental questions concerning the current state of the cyber insurance market. Specifically, by collecting over 100 full insurance policies, we examine the composition and variation across three primary components: the coverage and exclusions of first and third party losses, which define what is and is not covered; the security application questionnaires, which are used to help assess an applicant's security posture; and the rate schedules, which define the algorithms used to compute premiums.

Overall, our research shows a much greater consistency among loss coverage and exclusions of insurance policies than is often assumed. For example, after examining only 5 policies, all coverage topics were identified, while it took only 13 policies to capture all exclusion topics. However, while each policy may include commonly covered losses or exclusions, there was often additional language further describing exceptions, conditions, or limits to the coverage. The application questionnaires provide insights into the security technologies and management practices that are (and are not) examined by carriers. For example, our analysis identified four main topic areas: Organizational, Technical, Policies and Procedures, and Legal and Compliance. Despite these sometimes lengthy questionnaires, however, there still appeared to be relevant gaps. For instance, information about the security posture of third-party service and supply chain providers is often missing, and such risks are notoriously difficult to assess properly (despite numerous breaches occurring from such compromise).

In regard to the rate schedules, we found a surprising variation in the sophistication of the equations and metrics used to price premiums. Many policies examined used very simple, flat-rate pricing (based simply on expected loss), while others incorporated more parameters, such as the firm's asset value (or firm revenue), standard insurance metrics (e.g. limits, retention, coinsurance), and industry type. More sophisticated policies also incorporated information on specific security controls and practices as collected from the security questionnaires. By examining these components of insurance contracts, we hope to provide the first-ever insights into how insurance carriers understand and price cyber risks.
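To make the contrast concrete, the sketch below compares the two pricing styles described above: a flat loading on expected loss versus a schedule that also uses revenue, limits, retention, industry, and a security score. Every factor, exponent, and number is hypothetical and is not drawn from any actual rate filing.

```python
# Minimal sketch contrasting flat-rate and parameterised cyber premiums.
# All factors and numbers are hypothetical.
def flat_premium(expected_loss, loading=1.25):
    return expected_loss * loading

def parameterised_premium(base_rate, revenue_musd, limit, retention,
                          industry_factor, security_score):
    # Higher limits raise the premium; retention and better security lower it.
    premium = base_rate * revenue_musd * industry_factor
    premium *= (limit / 1e6) ** 0.5
    premium *= max(0.5, 1.0 - retention / limit)
    premium *= 1.0 - 0.3 * security_score        # security score in [0, 1]
    return premium

print(round(flat_premium(40_000), 2))
print(round(parameterised_premium(base_rate=2_000, revenue_musd=50,
                                  limit=5e6, retention=250_000,
                                  industry_factor=1.4, security_score=0.6), 2))
```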

Presenter

Sasha Romanosky

RAND
On Twitter at @SashaRomanosky


Saturday September 9, 2017 10:07am - 10:40am EDT
ASLS Hazel Hall - Room 332

11:05am EDT

An Analysis of Job and Wage Growth in the Telecom/Tech Sector
This paper reports on a systematic study of the quantity, wage level, and location of domestic jobs being created by the telecom/tech sector. In recent years, the leading telecom/tech companies have been repeatedly criticized for not producing enough jobs; for not producing enough middle skill jobs; and for not producing enough geographically diverse jobs. In this paper we bring together data from the Current Employment Statistics (CES), the Occupational Employment Statistics (OES), the Quarterly Census of Employment and Wages (QCEW), and organic job posting data to systematically address all three of these questions.

The first step was to identify several appropriate technology aggregates, including the broader digital sector, the telecom/tech sector, and the e-commerce sector. We show that for each aggregate, both job and establishment growth have significantly outpaced the overall private sector. Moreover, we estimate domestic employment for the top ten telecom/tech companies (measured by market cap) and show that their domestic workforces have grown by 31% since 2007, compared to 5% for the private sector as a whole.

We then calculate the average real wage in each aggregate. Not surprisingly, we find that real wages in the technology aggregates are higher and rising faster than for the private sector as a whole. To correct for composition effects, we examine detailed occupational categories, and find that for middle-skill occupations such as sales and office support, the tech aggregates have significantly higher wages compared to the private sector.

Next, we examine the geography of telecom/tech job and payroll growth. We find that in recent years the telecom/tech sector has “escaped” the coasts and is now propelling growth in states such as Kentucky, Ohio, and Indiana. We estimate the income gains to these states from telecom/tech expansion.

Finally, we project the impact on overall real wages if the current telecom/tech growth continues. We decompose the impact into a composition effect and a real wage effect.
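A minimal sketch of such a decomposition is shown below: the change in the overall average wage is split into a composition effect (employment shifting toward higher-wage tech jobs) and a within-sector real wage effect. The two-sector shares and wages are hypothetical, not the paper's estimates.

```python
# Minimal shift-share sketch: total wage change = composition + within-sector effect.
# The two-sector numbers are hypothetical.
import numpy as np

emp_0  = np.array([0.10, 0.90])            # employment shares, base year
emp_1  = np.array([0.14, 0.86])            # employment shares, later year
wage_0 = np.array([95_000.0, 52_000.0])    # telecom/tech, rest of private sector
wage_1 = np.array([101_000.0, 53_000.0])

total_change = emp_1 @ wage_1 - emp_0 @ wage_0
composition  = (emp_1 - emp_0) @ wage_0    # jobs shifting across sectors
real_wage    = emp_1 @ (wage_1 - wage_0)   # wage growth within sectors
print(total_change, composition, real_wage, composition + real_wage)
```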

Presenter

Michael Mandel

Progressive Policy Institute


Saturday September 9, 2017 11:05am - 11:38am EDT
ASLS Hazel Hall - Room 332

11:38am EDT

Crowd Sourcing Internet Governance: The Case of ICANN’s Strategy Panel on Multistakeholder Innovation
e-Participation platforms for policy deliberation have been sought as a means to facilitate more inclusive discourse, consensus building, and effective engagement of the public. Since many internet governance deliberations are global, distributed, multistakeholder, and often not formally binding, the promise of e-participation platforms is multiplied. Yet the effectiveness of implementing such platforms, in both traditional and multistakeholder policy deliberation, is up for debate. The results of such initiatives tend to be mixed, and literature in the field has criticized excessive focus on technical solutions, highlighting the tension between expectations and actual outcomes. Previous research suggests that the utility and effectiveness of these platforms depend not only on their technical design features, but also on the dynamic interactions of technical choices with community or organizational practices, including the “politics of participation” (i.e., the power relations among stakeholders and the dynamics of their interactions). We argue for the importance of unpacking the interactions between technical capacities, organizational practices, and politics in emergent e-participation tactics for internet governance deliberations.

To better understand the tension between the expectations and outcomes of e-participation tools in internet governance deliberations, and to unpack the practices and politics of participation, we offer a case study of ICANN’s use of the IdeaScale platform to crowdsource multistakeholder strategies between November 2013 and January 2014. To the best of our knowledge, this is one of the first empirical investigations of e-participation in internet governance. This is an ongoing project, building on our own previous work presented at CHI 2016 on the impacts of crowdsourcing platforms on the inclusiveness, authority, and legitimacy of global internet governance multistakeholder processes.

Empirically, we draw on interviews with organizers and users of the ICANN IdeaScale implementation (currently underway), coupled with analysis of their activity on the platform. Conceptually, we draw on crowdsourcing and e-participation literature and apply Aitamurto and Landemore’s (2015) five design principles for crowdsourced policymaking processes and platforms to evaluate ICANN’s system-level processes and impacts of the IdeaScale platform design on participant engagement, deliberative dynamics, and process outcomes. Our paper will conclude with design recommendations for crowdsourcing processes and technical recommendations for e-participation platforms used within non-binding, multistakeholder policy deliberation forums.


Saturday September 9, 2017 11:38am - 12:12pm EDT
ASLS Hazel Hall - Room 332

12:12pm EDT

Changing Markets in Operating Systems: A Socio-Economic Analysis
This paper explores the character of the market for operating systems in order to reach a better understanding of its characteristics, the consequences of fragmentation, and its impact on the overall development of the internet and the digital economy. For this purpose we consider the effects on trade and innovation, and the significance for network architectures in the digital economy.
The article includes a review of the various forms of market definition of software operating systems to understand their economic characteristics from a socio-technological view. From the early dominance of IBM’s OS/360 to UNIX-related systems and the disk operating systems [DOS] of IBM and Microsoft through to Apple’s Mac OS and Google systems [Chrome OS and Android], there has been a succession of dominant players.
We address the economic theory behind markets in platforms and its relationship with operating systems. Arguments are presented to describe three major areas where the operating systems market requires further analysis: a) the boundaries between the standard roles of consumption and production are blurred when considering operating systems; b) novel concepts of ownership arise; and c) the decoupling between services and physical supports raises issues of control rather than ownership.
In the digital mobile environment, consumers do produce valuable services, or add value to the standard services sold to them. These generate information and data. These data become necessary for the actors operating in other layers of the production chain to add value to their services and products, and to generate brand-new services and applications. Thus, a novel situation occurs: the overall welfare of the system cannot be subdivided into consumer surplus and producer surplus; producers might appropriate some part of the overall welfare by becoming “consumers” themselves of the information and data generated by the (previously-labelled) consumers. We suggest that policy guidelines based on the standard industrial organization analysis are no longer quite so valid and legitimate. The concept of surplus changes meaning when the “consumption” side of the ecosystem can add value and generate new surplus to the “production side”.

In the digital mobile industry, most of the inputs used by consumers are not really owned by them. Most of the inputs (intended in terms of both services and goods) utilized by the end user cannot be employed by the latter at will, according to their own “utility function”. The concept of ownership in law and economics is defined by the condition that the “owner” has the right to exclude others from the use of their property and can control the way in which others can restrain their use.

The problem of control retraces standard issues covered by “vertical analysis” in competition policy (in terms of foreclosure, discrimination, and fair usage). The literature dealing with the problem of vertical restraints addresses how ownership in one layer of the chain affects the control of elements or modules in other layers of the vertical production chain, and therefore their usage. The way in which vertical restraints shift the rights of actors along the chain is a problem much less developed in the literature, and we are not aware of any work explicitly modelling and developing this issue.

Moderators

Scott Wallsten

President and Sr Fellow, Technology Policy Institute

Presenter

Silvia Monica Elaluf-Calderwood

Florida International University
Dr. Silvia Elaluf-Calderwood is a professional consultant and researcher with academic and industrial experience in mobile technology research, business models analysis, telecommunications strategy, mobile payments, big data in the UK, the Netherlands and in multidisciplinary projects...

Author

Jonathan Liebenau

London School of Economics

Saturday September 9, 2017 12:12pm - 12:45pm EDT
ASLS Hazel Hall - Room 332

2:00pm EDT

Uncertainty in the National Infrastructure Assessment of Mobile Telecommunications Infrastructure
The UK’s National Infrastructure Commission is undertaking the first ever National Infrastructure Assessment, of which telecommunications is a key component. The aim of this task is to ensure efficient and effective digital infrastructure delivery over the long term, the results of which will be used to direct both industry and government over coming decades. However, taking a strategic long-term approach to the assessment of telecommunications infrastructure is a challenging endeavor due to rapid technological innovation in both the supply of, and demand for, digital services.

In this paper, the uncertainty associated with the National Infrastructure Assessment of digital communications infrastructure is explored in the UK, focusing specifically on issues pertaining to:

(i) uncertainty in future demand, and

(ii) ongoing convergence between sub-sectors (fixed, mobile, wireless and satellite).

These were the two key issues identified at The Future of Digital Communications workshop held at The University of Cambridge (UK) in February 2017. Currently, industry and government have very little information to direct them as to how these issues will affect the long-term performance of digital infrastructure. This paper not only quantifies the uncertainty in different national telecommunications strategies, but also quantifies the spatio-temporal dynamics of infrastructure roll-out under each scenario. This is vital information for policy makers seeking to understand disparities in the capacity and coverage of digital services over the long term (e.g. in broadband markets), and helps in the early identification of areas of potential market failure (for which policy has traditionally been reactive, not proactive).

The methodology applies the Cambridge Communications Assessment Model, which has been developed exclusively for the evaluation of national digital infrastructure strategies, over 2017-2030. The approach taken is to treat digital communications infrastructure as a system-of-systems which therefore includes the fixed, mobile, wireless and satellite sectors (hence enabling the impact of convergence to be assessed). Demographic and economic forecast data indicate the total number of households and businesses annually, and an estimate of the penetration rate is calculated using this information. Network infrastructure data is then collated to indicate current capacity and coverage, with cost information then being applied to estimate the viability of incremental infrastructure improvement. Existing annual capital investment is used to constrain the roll-out of new infrastructure.
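The capital-constrained roll-out logic can be illustrated with the minimal sketch below: candidate areas are ranked by cost per premises passed and upgraded in order until the annual budget is exhausted. The area names, costs, and budget are hypothetical and are not outputs of the Cambridge model.

```python
# Minimal sketch of budget-constrained roll-out ordered by viability.
# Area names, costs, and the budget are hypothetical.
areas = [
    {"name": "dense urban",  "premises": 500_000, "upgrade_cost": 40e6},
    {"name": "suburban",     "premises": 300_000, "upgrade_cost": 45e6},
    {"name": "market towns", "premises": 150_000, "upgrade_cost": 38e6},
    {"name": "rural",        "premises": 80_000,  "upgrade_cost": 60e6},
]
annual_budget = 100e6

# Most viable first: lowest cost per premises passed.
areas.sort(key=lambda a: a["upgrade_cost"] / a["premises"])

spent, upgraded = 0.0, []
for area in areas:
    if spent + area["upgrade_cost"] <= annual_budget:
        spent += area["upgrade_cost"]
        upgraded.append(area["name"])

print(upgraded, "remaining budget:", annual_budget - spent)
```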

The results of this analysis quantify, for policy-makers at the National Infrastructure Commission, the uncertainty associated with:

(i) future demand, and

(ii) ongoing convergence in digital services.

It finds that more emphasis should be placed on how the demand for digital infrastructure affects the spatio-temporal roll-out of digital infrastructure due to viability issues. The results conclude that while national infrastructure assessment is a valid method for thinking more strategically about our long-term infrastructure needs, we must recognize the inherent uncertainty associated with this particular sector, as this has not been adequately addressed to date at the policy level in the UK. Rapid technological innovation affects our ability to accurately forecast long-term roll-out, making it essential that rigorous examination of this uncertainty is both quantified and visualized to support policy decision-making.

Moderators

Trey Hanbury

Partner, Hogan Lovells

Presenter

Edward Oughton

University of Cambridge


Saturday September 9, 2017 2:00pm - 2:33pm EDT
ASLS Hazel Hall - Room 332

2:33pm EDT

Limiting the Market for Information as a Tool of Governance: Evidence from Russia
This paper presents a novel measure of subtle government intervention in the news market achieved by throttling the Internet. In countries where the news media is highly regulated and censored, the free distribution of information (including audio and visual imagery) over the Internet is often seen as a threat to the legitimacy of the ruling regime. This study compares electoral outcomes at the polling station level between the Russian presidential election at the beginning of March 2012 and the parliamentary election held three months earlier in December 2011. Two sets of electoral regions are compared: regions that experienced internet censorship during the presidential election but not the parliamentary election, versus regions that maintained a good internet connection without interference during both elections. Internet censorship is identified using randomised internet probing data with accuracies down to 15-minute intervals for up to a year before the election. Using a difference-in-differences design, internet throttling is found to increase the government candidate's vote share by an average of 3.2 percentage points. Results are robust to different specifications, and electoral controls are used to account for the possibility of vote rigging.
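The difference-in-differences logic can be sketched in a few lines: the change in government vote share in throttled regions minus the change in regions with uninterrupted connectivity. The polling-station figures below are made up for illustration and do not come from the paper's data.

```python
# Minimal difference-in-differences sketch with hypothetical polling-station data.
import pandas as pd

polls = pd.DataFrame({
    "throttled": [1, 1, 1, 0, 0, 0] * 2,
    "election":  ["2011-12"] * 6 + ["2012-03"] * 6,   # parliamentary, presidential
    "gov_share": [0.44, 0.47, 0.43, 0.45, 0.48, 0.44,
                  0.49, 0.51, 0.47, 0.46, 0.49, 0.45],
})

means = polls.groupby(["throttled", "election"])["gov_share"].mean().unstack()
did = ((means.loc[1, "2012-03"] - means.loc[1, "2011-12"])
       - (means.loc[0, "2012-03"] - means.loc[0, "2011-12"]))
print("difference-in-differences estimate: %.1f pp" % (100 * did))
```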

Moderators

Trey Hanbury

Partner, Hogan Lovells

Presenter

Klaus Ackermann

University of Chicago


Saturday September 9, 2017 2:33pm - 3:05pm EDT
ASLS Hazel Hall - Room 332

3:05pm EDT

Business Data Services after the 1996 Act: Structure, Conduct, Performance in the Core of the Digital Communications Network
Business data services (BDS) have been growing at almost 15% per year for a decade and a half, driven by the fact that high-capacity, high-quality, always-on connections are vital to a wide range of businesses and economic activities. Affected services include more than communications – like mobile, broadband and digital – extending to all forms of high-capacity connections, ubiquitous networks such as ATMs and gas stations, and the evolving internet of things.

The ocean of data coursing through the digital network must become a stream directed to each individual consumer. The point at which this takes place is the new chokepoint in the communications network.

This paper reviews the data gathered by the FCC, which show that the BDS market is one of the most concentrated markets in the entire digital communications sector (with CR4 values close to 100% and HHI indices in the range of 6000 to 7000). The structure-conduct-performance paradigm frames the origins, extent and implications of the current performance of a near-monopoly and the future prospects for competition in the BDS market. The paper shows that the anticompetitive behaviors that economic theory predicts for firms with this much market power are well supported by the FCC data. The problem is clear; the solution is difficult and complex. The paper reviews the proposed remedies, ranging from the deregulatory proposals of the incumbents, to the partial reregulation scheme negotiated by some incumbents and competitors, to the full reregulation approach supported by others.
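For reference, the concentration measures quoted above are computed as follows; the market shares in this sketch are hypothetical, chosen only so that the results land in the ranges the paper reports.

```python
# How CR4 and HHI are computed; the shares below are hypothetical,
# chosen only to fall in the ranges quoted above.

def cr4(shares_pct):
    """Four-firm concentration ratio: sum of the four largest market shares (percent)."""
    return sum(sorted(shares_pct, reverse=True)[:4])

def hhi(shares_pct):
    """Herfindahl-Hirschman Index: sum of squared market shares (percent)."""
    return sum(s ** 2 for s in shares_pct)

shares = [78, 12, 6, 3, 1]          # hypothetical BDS shares in one geographic market
print(f"CR4 = {cr4(shares)}%")      # 99%: the top four firms hold nearly the whole market
print(f"HHI = {hhi(shares):.0f}")   # 6274: well inside the 6000-7000 near-monopoly range
```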

Moderators
avatar for Trey Hanbury

Trey Hanbury

Partner, Hogan Lovells

Presenter
MC

Mark Cooper

Consumer Federation of America


Saturday September 9, 2017 3:05pm - 3:40pm EDT
ASLS Hazel Hall - Room 332

4:05pm EDT

How Compatible are the DOJ and FCC's Approaches to Identifying Harms to Telecommunications Innovation?
The Department of Justice and Federal Trade Commission focus on innovation markets to identify and restrict transactions with potential to harm innovation within narrowly-defined R&D and intellectual property licensing markets. However, within the media and telecommunications industries, the Federal Communications Commission rarely uses this legal concept in practice. Instead, the FCC’s broader review standard protects innovation by identifying potential post-transaction reductions of the incentive to innovate, ability to innovate, or rate of innovation efforts using a broader conceptual definition of innovation. This approach often produces controversy related to its differences from the more traditional competition regulation framework. However, with the diversity of different business contexts in which these issues appear, these orders are a valuable source of insights into different possible types of harms to innovation.
Drawing upon comprehensive research into the uses of innovation across the FCC’s major transaction orders between 1997 and 2015, this work seeks to: (1) identify instances where potential harms to innovation were discussed within these transactions, (2) categorize different types of harms to innovation, and (3) consider the extent to which each category corresponds with the DOJ’s approach or may benefit from more clarity and formalization.

Moderators
avatar for Fernando Laguarda

Fernando Laguarda

Professorial Lecturer and Faculty Director, Program on Law and Government, American University Washington College of Law

Presenter
avatar for Ryland Sherman

Ryland Sherman

Benton Foundation


Saturday September 9, 2017 4:05pm - 4:38pm EDT
ASLS Hazel Hall - Room 332

4:38pm EDT

Emerging Business Models in the OTT Service Sector: A Global Inventory
This paper is an empirical analysis of emerging business models in the Over The Top (OTT) video content distribution sector. From an industrial organization perspective, it identifies six critical attributes of an OTT content distribution platform: ownership, programming source, vertical integration with content producers, platform/multiplatform compatibility, service type, and revenue model. Using SNL Kagan’s global database of 800 OTT distribution networks, it finds that certain combinations of these characteristics are more prevalent than others, and are more competitively sustainable within specific types of media ecosystems. The paper concludes that these ‘archetypes’ are likely to be the survivors within specific ecosystems as the OTT content distribution system continues to converge onto dominant business models.

The increasing bandwidth of broadband networks has created opportunities for OTT services to enter and erode traditional broadcasting markets. A full 25% of U.S. homes no longer subscribe to a pay-TV service (GfK, 2016). This trend is especially strong among the younger generation (18-34), who are much more likely to opt for alternative video delivery services. As the cord-cutting/cord-never phenomenon accelerates, the future seems bright for OTT video providers. Yet Netflix is largely an exception in the global OTT market, where most new entrants are struggling to find traction (Agnese, 2016); even for Netflix, subscriber growth may have already plateaued (Newman, 2016), and revenues are no longer growing exponentially (Kim, 2016).

In the absence of a truly outstanding business model, providers are experimenting with a wide variety of platforms, content sources, revenue models and multiscreen strategies. Ad-supported models like YouTube deliver far more "audience enjoyment minutes" than any OTT provider; Amazon Prime Video has invested aggressively in original content (Castillo, 2016). Facebook's video distribution is growing, and Apple is planning to add video to its music service. Traditional video providers have added more features to their service packages to compete with OTT: CBS, for example, is now distributing original TV series exclusively over SVOD networks. Satellite TV operators are transforming themselves into internet MVPDs (Viasat into Viaplay, DISH into Sling). Platforms for the distribution of content are proliferating (personal video recorders, transactional VoD, subscription VoD), and set-top boxes such as Roku TV are deploying new capabilities, including the ability to pause and catch up on live broadcasts.

Thus, despite the promise of OTT services and the threat they apparently pose to traditional broadcasters, there is no single dominant business model in the OTT video distribution sector. Instead, a wide diversity of options exists within each attribute of an OTT platform: for example, platform capability (PC/Mac, smartphone, tablet, connected TV, game console, internet streaming players, pay-TV set-top box), revenue model (free/ad-supported, transactional, subscription, app fee, premium content cost), etc.

Therefore, the objective of this paper is to investigate the OTT market from an industrial organization perspective, following the lead of Qin and Wei (2014). Unlike Qin and Wei, however, the paper examines not the performance of the OTT sector as an outcome of its structure and conduct, but the attributes of OTT business models. It seeks to identify whether there are correlations between the six attributes of an OTT content distribution platform (ownership, programming source, vertical integration with content producers, platform/multiplatform compatibility, service type, and revenue model) and, if so, whether dominant combinations of attributes constitute emerging archetypes or models for the OTT video business. Furthermore, the paper examines whether certain combinations of attributes (OTT models) are more prevalent in specific types of media systems (such as public-TV-dominated versus private competitive markets).
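A sketch of how such attribute combinations ("archetypes") might be tallied from a provider-level database follows; the column names and categories are hypothetical and do not reflect SNL Kagan's actual schema.

```python
# Sketch of identifying prevalent attribute combinations ("archetypes") in a
# provider-level database. Column names and categories are hypothetical and do
# not reflect SNL Kagan's actual schema.
import pandas as pd

providers = pd.DataFrame({
    "revenue_model": ["subscription", "ad_supported", "subscription", "transactional", "subscription"],
    "programming":   ["original+licensed", "user_generated", "licensed", "licensed", "original+licensed"],
    "vertically_integrated": [True, False, False, False, True],
    "service_type":  ["SVOD", "AVOD", "SVOD", "TVOD", "SVOD"],
})

attributes = ["revenue_model", "programming", "vertically_integrated", "service_type"]

# Frequency of each full combination of attributes: the most common combinations
# are candidate archetypes.
archetypes = (
    providers.groupby(attributes).size().sort_values(ascending=False).rename("n_providers")
)
print(archetypes.head())

# Pairwise association between two attributes via a simple contingency table.
print(pd.crosstab(providers["revenue_model"], providers["service_type"]))
```

With the full 800-provider database, the same tallies could be split by media-system type to test whether particular archetypes dominate in, say, public-TV-dominated versus privately competitive markets.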

The primary data for this analysis is SNL Kagan's Global OTT provider database, which lists 800 OTT providers from 71 countries along with several of their attributes, including platform capabilities, revenue models and programming sources. Additional information is drawn from industry reports and trade press articles, and country-level information is coded from sources such as the European Audiovisual Observatory, the OECD, and United Nations agencies. The conclusions of the paper will be of interest not only to OTT providers, but also to media regulators and legislators.


References

Agnese, S. (2016, Nov. 22). Netflix’s Current Business Model is Not Sustainable. Ovum. Accessed at https://www.ovum.com/research/netflixs-current-business-model-is-not-sustainable/

Castillo, M. (2016, Oct. 17). Netflix plans to spend $6 billion on new shows, blowing away all but one of its rivals. Accessed at http://www.cnbc.com/2016/10/17/netflixs-6-billion-content-budget-in-2017-makes-it-one-of-the-top-spenders.html

GfK (2016). One-Quarter of US Households Live Without Cable, Satellite TV Reception – New GfK Study. Press release. Accessed at http://www.gfk.com/en-us/insights/press-release/one-quarter-of-us-households-live-without-cable-satellite-tv-reception-new-gfk-study/

Kim, D. (2016). The future outlook of Netflix -- the financial perspective. Information and Communication Policy, 28(22), 1-19. Accessed at http://www.kisdi.re.kr/kisdi/fp/kr/publication/selectResearch.do?cmd=fpSelectResearch&sMenuType=2&curPage=5&searchKey=TITLE&searchValue=&sSDate=&sEDate=&controlNo=14011&langdiv=1 (Korean)

Newman, L. H. (2016, July 18). Wall Street is worried that Netflix has reached its saturation point. Slate.com. Accessed at http://www.slate.com/blogs/moneybox/2016/07/18/netflix_earnings_beat_expectations_but_the_stock_is_still_tanking.html

Qin, Q., & Wei, P. (2014). The Structure-Conduct-Performance Analysis of OTT Media. Advances in Management and Applied Economics, 4(5), 29.

Moderators
avatar for Fernando Laguarda

Fernando Laguarda

Professorial Lecturer and Faculty Director, Program on Law and Government, American University Washington College of Law

Presenter
EP

Eun-A Park

Western State Colorado University


Saturday September 9, 2017 4:38pm - 5:11pm EDT
ASLS Hazel Hall - Room 332

5:12pm EDT

Global Governance of the Embedded Internet: The Urgency and the Policy Response
This paper addresses the need to bring the Internet of Things and associated technologies under a global policy regime, built on the newly independent ICANN, acting in an expanded capacity as a recognized non-territorial, multi-stakeholder-based, sovereign entity under agreed and transparent normative standards.

The phrase “Internet governance” is highly contested over its technical, security, and sociopolitical aspects. Until recently, however, it had not been imagined to include networked devices with embedded intelligence, such as smart cars, smart watches, smart refrigerators, and a myriad of other devices. A rapidly emerging issue is how, if at all, the current global Internet governance regime relates to the emerging array of ubiquitous embedded information technologies which collect, store, process, learn from, and exploit information about all aspects of our lives. Does this call for a policy response?

This is a non-trivial issue, as it binds together the Internet of Things, big data analytics, cloud computing, and machine learning/artificial intelligence into a single, integrated system. Each component raises policy issues, but the bigger challenge may be unintended adverse consequences arising from their synchronous operation. Because of the inherently global nature of the underlying network, which seeks to connect “everything to everything else,” it is important to give consideration to whether these developments should have a central point of global policy development, coordination, and oversight.

This paper answers that question in the affirmative and, after reviewing multiple candidates which have been proposed, concludes that the emerging post-U.S. ICANN is most fit for that role. The authors believe it is important to keep the centers of technical and policy expertise together and efficiently available. The authors recognize that such governance is not a “singular system,” and that some issues, such as cybersecurity, may find other homes, perhaps even treaty-based.

The paper further argues that “new” ICANN, largely formally severed from the U.S., and with a revised and expanded role for governments in its management, has a very strong claim for legitimacy and non-territorial sovereignty. On that basis, it may feel more secure in expanding the scope of its mandate – indeed, there will likely be considerable pressure to do so.

Another critical factor is the uncertainty about the normative values that underpin, or in some cases undermine, global Internet governance. These values will continue to be contested, but there is already some broad acquiescence to general principles from the United Nations, which can form the basis for a transparent discussion in a multi-stakeholder venue about which norms and values are most appropriate to guide policy actions. Some of these policy alternatives are presented and discussed.

This topic, and the approach to it, are novel in that very little work has been done in this area; the paper builds on, and considerably extends, the work that does exist.

Moderators
avatar for Fernando Laguarda

Fernando Laguarda

Professorial Lecturer and Faculty Director, Program on Law and Government, American University Washington College of Law

Presenter
avatar for Jenifer Sunrise Winter

Jenifer Sunrise Winter

Assoc. Prof., University of Hawaii at Manoa
The Pacific ICTD Collaborative - http://pictdc.socialsciences.hawaii.edu/. Research interests: privacy; digital inequalities; algorithmic discrimination in the context of big data and the Internet of Things; data governance and stewardship, including use of big data for the public good; broadband access…


Saturday September 9, 2017 5:12pm - 5:45pm EDT
ASLS Hazel Hall - Room 332
 