TPRC45 has ended


Friday, September 8
 

8:00am

Registration and Coffee
Friday September 8, 2017 8:00am - 9:00am
Founders Hall - Multipurpose Room

9:00am

You Get What You Measure: Internet Performance Measurement as a Policy Tool
The research literature on Internet performance measurement is quite rich. Surveys of measurement tools such as “A Study of Traffic Management Detection Methods & Tools”[1] and “A Survey on Internet Performance Measurement Platforms and Related Standardization Efforts”[2] describe a multitude of tools such as NetPolice, NANO, DiffProbe, Glasnost, ShaperProbe, Chkdiff, SamKnows, BISmark, Dasu, Netradar, Portolan, RIPE Atlas, and perfSONAR.

In addition to tools developed for academic research and policy enforcement, internet users rely on Speedtest and OpenSignal for troubleshooting. Finally, proprietary systems such as those developed by Akamai,[3] Sandvine,[4] and Cisco[5] are used to compile “State of the Internet” analyses aggregating several views of the Internet.

While current tools are quite useful for measuring the performance of Internet Service Provider networks, they’re much less useful for examining how well the Internet operates as a whole. The Internet is an “end-to-end network of networks” in which performance depends on an entire series of cooperating networks.

From the user perspective, it’s important to know whether websites are slow to load because of ISP network impairment, server overload or code bloat, or factors under the user’s direct control such as Wi-Fi issues or personal computer factors. In addition, users run multiple applications such as video streaming and conferencing that are subject to different performance goals than web browsing.

The emphasis on one facet of Internet performance, such as last mile networks or hot exchange point interfaces, tends to minimize other factors that may be more important to the user, such as web server capacity. In addition, a reliance on active measurement tools creates opportunities for gaming the system that are not possible in passive systems that merely observe application and network events in real time. Passive systems have privacy issues, however.
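As a minimal illustration of an active, end-to-end probe of the kind discussed here (the hostname is a placeholder), the sketch below separates DNS resolution time from TCP connect time, the sort of decomposition that helps attribute slowness to name resolution, the network path, or the remote server:

```python
import socket
import time

def probe(host: str, port: int = 443) -> dict:
    """Time DNS resolution and the TCP handshake separately."""
    t0 = time.perf_counter()
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    t1 = time.perf_counter()          # DNS resolution finished
    ip = infos[0][4][0]
    with socket.create_connection((ip, port), timeout=5):
        t2 = time.perf_counter()      # TCP handshake finished
    return {"dns_ms": (t1 - t0) * 1e3, "connect_ms": (t2 - t1) * 1e3}

if __name__ == "__main__":
    print(probe("example.com"))
```

Passive systems would instead observe such timings from real application traffic, which is what raises the privacy concerns noted above.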

This paper explores the opportunities for developing additional performance tools more responsive to the broader social goal of better end-to-end Internet performance and reliability across the broad span of applications. It assumes that policy can only be successful when supported by measurement tools that are trustworthy, reproducible, and meaningful.

Moderators

Patrick Sun

Industry Economist, FCC

Presenter

Friday September 8, 2017 9:00am - 9:33am
ASLS Hazel Hall - Room 329

9:00am

Common Sense: An Examination of Three Los Angeles Community WiFi Projects that Privileged Public Funding Over Commons-Based Infrastructure Management
Public funding for community WiFi initiatives in the United States is rare, even though these networks are comparatively low-cost to deploy and a peer-to-peer model of connectivity may foster community and boost civic engagement. However, in 2015 the New York City Economic Development Corporation awarded the Red Hook Initiative several million dollars to expand its community WiFi network in Brooklyn. This development suggests a potential shift in attitude toward government support for grassroots WiFi networks. Therefore, it is critical to understand the successes and failures of projects that previously operated with government grants and subsidies. Using both a public goods framework and theory of the commons, this study examines three community WiFi networks in geographically and ethnically diverse L.A. communities subsidized by the city of Los Angeles or by California state agencies.

Specifically, this research examines whether Little Tokyo Unplugged, Open Mar Vista and a cluster of networks sponsored by Manchester Community Technologies relinquished the ability to function as commons by accepting, or simply pursuing, grants and resources from public agencies. Each of these initiatives faltered, despite a combined $700,000 in government funding. The analysis is based on interviews with 11 key stakeholders, as well as a comprehensive review of relevant grant reports, archived website pages and media coverage.

In exchange for government subsidies, these three community WiFi projects prioritized public good goals articulated by policymakers—closing the digital divide in Los Angeles through infrastructure deployment and encouraging computer usage. In order to fulfill promises made to granting agencies, these community WiFi networks treated wireless internet access as a commodity, rather than as a tool for community empowerment. Significantly, none of the networks developed a strategy to remain sustainable after public subsidies expired, or after government agencies rejected requests for additional funding. Had these three L.A.-based community WiFi projects privileged a commons-based approach, characterized by inclusivity and a flat governance structure, they may have thrived. In a commons, communication systems are truly democratic, in the sense that community members themselves determine how the network is designed and deployed. Neither corporations nor policymakers get to influence those decisions.

The study concludes that, ultimately, money and resources provided by government agencies are inadequate substitutes for volunteers who traditionally share skills and passion to sustain community WiFi networks. However, the findings recognize that it is certainly possible for grassroots initiatives to partner with government agencies, while continuing to manage infrastructure as a commons. In 2010, the Detroit Digital Justice Coalition allocated a portion of its $1.8 million grant from the federal Broadband Technology Opportunities Program to launch community wireless networks in several neighborhoods. A guiding principle of this project, which continues to expand, is to enable community members to create their own technologies and to help shape communications infrastructure. The research stresses that both policymakers and community broadband groups must agree to balance potentially competing goals.

Moderators

Chris McGovern

Connected Nation Inc.

Friday September 8, 2017 9:00am - 9:33am
ASLS Hazel - Room 120

9:00am

A Typology of Information Distribution Organizations
Over the past several decades information distribution organizations (IDOs) have increasingly become the subject of law and policy considerations. IDOs are those organizations that play a significant role in the communication of information to news/information seeking audiences. These may include traditional news organizations, but also internet-based entities like Google and Facebook, and those agencies for whom the internet and digital communication technologies have now become indispensable tools. This paper investigates the ways in which IDOs create, use, distribute, and store information to create a taxonomy of these organizations and examine the many different categories of bodies.

The purpose of this taxonomy is two-fold. Definitions are important to considerations of privileges and responsibilities under certain laws. For example, many states have created so-called “reporter’s privilege” or shield laws. Key to many of these statutes is a requirement that the individual claiming the privilege be working for some kind of “news” organization. But the definition of news and that of information can be decidedly different. And conflicts about who may claim the privilege have arisen.

Definitional issues also arise with responsibilities required of IDOs by law. In the United States internet service providers, as many IDOs are, must be circumspect with how they handle the information they allow to be posted on their sites to have a safe harbor in libel law. Similarly, ISPs must comply with immediate requests to remove information alleged to violate copyright. These policies, and those like them, reveal the importance of how the organization interacts with information to the rights and responsibilities afforded. Of course, the consideration of how organizations exploit information is not solely a US policy phenomenon. The European Union conceptualization of the right to be forgotten, for instance, considers whether an organization is a data collector, controller, and/or processor, demonstrating, again, the importance of examining how these agencies use information.

This study, then, is useful for considering the kinds of IDOs that are most subject to policy decisions and requirements. It also provides a deeper understanding of the many ways in which these organizations are and may be regulated.

Moderators

Martin B. H. Weiss

University of Pittsburgh

Presenter

Jasmine E. McNealy

University of Florida


Friday September 8, 2017 9:00am - 9:33am
ASLS Hazel Hall - Room 332

9:00am

How the GDPR Stacks Up to Best Practices for Privacy, Accountability and Trust

This paper assesses to what degree new European online privacy regulation addresses official European Union government research on best practices for online privacy. Specifically, it investigates the European Commission’s General Data Protection Regulation (GDPR) and to what degree it coheres with the official EU government report “Privacy, Accountability and Trust - Challenges and Opportunities” by the European Union Agency for Network and Information Security (ENISA). The paper maps the provisions of the GDPR to the model proposed by ENISA and finds that the GDPR appears to overemphasize some inputs while under-emphasizing or even ignoring others. We propose a behavioral model to explain the discrepancy. We assume that the ENISA inputs represent the best official approximation of how to achieve long-term welfare outcomes, while the GDPR is optimized to maximize short-run outputs for political support.

The paper presents the provisions of the GDPR as a function of the four inputs for privacy as defined by ENISA: (1) the user’s knowledge of online privacy, (2) the technology design, (3) the practices of providers, and (4) the institutions governing the system. With regard to institutions, the GDPR applies to any entity, regardless of location, that processes EU resident data; it requires a data protection authority in each member state; and, if GDPR provisions are violated, it allows fines of up to 4 percent of annual revenue or €20 million, whichever is greater. For practices of providers, the GDPR has provisions for consent, breach notification, right to access, right to be forgotten, data portability, and data protection officers. The GDPR has some privacy by design requirements, which map to ENISA’s technological design parameter. Finally, there are no specific GDPR provisions that would remedy users’ knowledge, such as an education campaign. From the preliminary analysis, we find that the GDPR tends to overemphasize compliance and punishment while underemphasizing technology design and user education.
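As a compact restatement of this mapping (a sketch of the paper's qualitative coding, with the provision lists paraphrased from the abstract rather than taken from a published codebook), one can tally how many GDPR provisions fall under each ENISA input, which makes the over- and under-emphasis pattern visible at a glance:

```python
# Provisions grouped under the four ENISA inputs, paraphrased from the abstract.
mapping = {
    "institutions": [
        "extraterritorial scope", "data protection authority per member state",
        "fines up to 4% of revenue or EUR 20 million",
    ],
    "provider practices": [
        "consent", "breach notification", "right to access",
        "right to be forgotten", "data portability", "data protection officers",
    ],
    "technology design": ["privacy by design requirements"],
    "user knowledge": [],  # no specific provisions identified in the abstract
}

for enisa_input, provisions in mapping.items():
    print(f"{enisa_input:18s} {len(provisions)} provision(s)")
```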

The GDPR encompasses perhaps the most monumental pan-European regulation in the last decade. It has the potential to impose significant penalties, add significant bureaucracy through the creation of national data protection authorities, and require the appointment of chief privacy officers in enterprises, a requirement that many medium size organizations find onerous. As the European Commission did not provide empirical justification for its rules other than a factsheet, this research attempts to quantify the choice of instruments and model political behavior and expectation based upon a selection of policy options.

The model is reviewed with e-privacy panel data for the 28 EU member states based on Eurostat and Eurobarometer. It investigates whether there are any relationships between the ENISA inputs and the GDPR provisions by measuring the level of privacy awareness and skills, the degree of deployment of privacy by design technologies, the presence of a data protection authority in the member state, and other variables. In addition to general online privacy research, the findings may shed light on the assumptions of the policy making process and to what degree evidence informs regulation.

Keywords: GDPR, online privacy, regulation, data protection, European Union, privacy by design, regulatory behavior and performance


Moderators
Presenter

Roslyn Layton

Visiting Scholar, American Enterprise Institute/Aalborg University
Roslyn works internationally promoting evidence-based tech policy and helping policymakers use data to make decisions. She earned her PhD from Aalborg University's Doctoral School of Science and Engineering in Denmark with a study comparing the impact of different net neutrality instruments...


Friday September 8, 2017 9:00am - 9:33am
Founders Hall - Auditorium

9:00am

To Whom the Revenue Goes: A Network Economic Analysis of the Price War in the Wireless Telecommunication Industry
We analyze the ongoing price war among major mobile data carriers. A better understanding of the underlying mechanisms of this price war in the wireless ecosystem is imperative for developing better techno-economic models for future technologies and for planning any necessary regulatory actions, e.g. in the case of mergers and acquisitions.

The detailed analysis requires us to consider the goals and requirements of the key stakeholders, including mobile service providers (MSPs) and end-users. In this paper, we analyze this situation by using game-theoretical models under general techno-economical conditions. We consider an oligopoly market where the MSPs aim to attract a pool of undecided users.

The revenue generation of MSPs is usually based on the monthly data plans. Three dominant pricing models are:

(i) flat-rate pricing where each user pays a fixed fee for unlimited data usage,

(ii) volume-based pricing where the fee increases with the data consumption,

(iii) cap-based pricing where the MSP charges a fixed fee up to a point and if the user exceeds this cap, there is an additional charge per unit of volume.

We quantify the costs of MSPs, including marginal costs associated with each user, variable costs based on the data consumption per user, and fixed costs generated by CAPEX and OPEX. We note that fixed costs include debt payments that may make an MSP risk-averse to high revenue volatility.

The key novelties of our model are the examination of these dominant pricing schemes within the same framework, the quantification of the variable cost per user, and the introduction of a weighted metric for users’ MSP selection that considers not only price but also brand attraction. We analyze the outcome of these interactions by introducing a two-stage sequential game instead of the simultaneous game that, for simplicity, is adopted in earlier studies. In the first stage, each MSP announces its data pricing plans, and, in the second stage, each user chooses one of them.
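For intuition, the sketch below implements the three pricing rules and a simple weighted selection step in the spirit of the second stage of the game. All parameter values (fees, caps, brand scores, the price weight) are hypothetical and chosen purely for illustration.

```python
# Hypothetical tariffs and brand scores; not calibrated to any real MSP.
def flat_rate(usage_gb, fee=50.0):
    return fee

def volume_based(usage_gb, rate_per_gb=8.0):
    return rate_per_gb * usage_gb

def cap_based(usage_gb, fee=30.0, cap_gb=5.0, overage_per_gb=10.0):
    return fee + max(0.0, usage_gb - cap_gb) * overage_per_gb

def selection_metric(price, brand_attraction, price_weight=0.7):
    # Weighted metric: lower price and stronger brand attraction are better.
    return -price_weight * price + (1 - price_weight) * brand_attraction

usage = 7.0  # GB per month for one representative undecided user
offers = {
    "MSP_A (flat)":   (flat_rate(usage),    60.0),  # (monthly price, brand score)
    "MSP_B (volume)": (volume_based(usage), 40.0),
    "MSP_C (cap)":    (cap_based(usage),    50.0),
}
choice = max(offers, key=lambda m: selection_metric(*offers[m]))
print({m: round(p, 2) for m, (p, _) in offers.items()}, "->", choice)
```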

In a technical sense, we explore the division of market shares and the respective revenues under different selections of parameters. In the regulatory domain this is an interesting question, as it shows how much scope for true differentiation exists between MSPs. In this paper we are particularly interested in studying whether a lower market share implies lower profits, as is often claimed. Another important regulatory and techno-economic question is whether MSPs have an incentive to unilaterally update their data pricing schemes. We also touch on the question of whether it is rational for MSPs to offer all three pricing plans simultaneously. Apart from studying these issues, our work aims to provide formal tools and related discussion to understand the motivations for different pricing schemes, and asks whether there is any motivation for MSPs to collude – and if so, how to design markets to be robust against collusion.

Moderators

Debra Berlyn

Consumer Policy Solutions

Presenter

Vaggelis Douros

RWTH Aachen University

Author

Friday September 8, 2017 9:00am - 9:33am
ASLS Hazel Hall - Room 221

9:34am

An Empirical Evaluation of Deployed DPI Middleboxes and Their Implications for Policymakers
Middleboxes are commonly deployed to implement policies (e.g., shaping, transcoding, etc.) governing traffic traversing ISPs. While middleboxes may be used for network management to limit the impact of bandwidth-intensive applications, they may also be applied opaquely to limit access to (or degrade) services that compete with those offered by the network provider. Without regulation or accountability, such practices could be used to raise the barrier to entry for new technologies, or block them entirely. Further, by breaking end-to-end system design principles, these practices can have negative side-effects on reachability, reliability and performance.

This paper presents evidence of deployed middlebox-enabled policies that provide differential service to network applications affecting subscribers of T-Mobile US, Boost Mobile, and others. We used rigorous controlled experiments and statistical analysis of the performance of popular online services to identify traffic differentiation. The observed policies include throttling bandwidth available to video streaming and VPN traffic, transcoding video, and selectively zero-rating traffic such as video and music streaming. Such policies appear to violate the “No Throttling” and/or “No Unreasonable Interference” provisions of the Open Internet Order (OIO), and potentially violate rules in different jurisdictions. Some of these policies were not transparent to consumers and/or were presented in misleading ways, violating the transparency requirement of the OIO. We recommend that providers concerned about traffic loads use application-agnostic techniques to throttle, thus meeting the “reasonable network management” clause of the OIO. Such policies are also easy for consumers to understand, thus providing better transparency.

We find that the observed policies are implemented using deep packet inspection (DPI) and simple text matching on contents of network traffic, potentially leading to misclassification. We validate that misclassification occurs, causing unintentional zero-rating or throttling. For example, video-specific policies can arbitrarily apply to non-video traffic, providing another example of “Unreasonable Interference” barred in the OIO. In fact, we show that current approaches to implementing network management policies are fundamentally vulnerable to unintentional behavior; i.e., the DPI-based approach to network management cannot guarantee 100% accuracy. We recommend that policymakers and network operators adopt alternative rules and approaches to network management that avoid such flaws and vulnerabilities.
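As a toy illustration of why payload text matching misclassifies (this is not the authors' measurement system, just a sketch of the failure mode), consider a naive classifier that flags any flow containing video-related strings:

```python
# Toy DPI-style classifier: flags a flow as "video" if the payload contains
# any of these byte patterns. Real middleboxes are more sophisticated, but
# the failure mode is the same: matching on text rather than content type.
VIDEO_PATTERNS = (b"video/", b"youtube", b"netflix")

def classify(payload: bytes) -> str:
    data = payload.lower()
    return "video" if any(p in data for p in VIDEO_PATTERNS) else "other"

# A genuine streaming request, and a non-video request that merely mentions video.
streaming_req = b"GET /watch HTTP/1.1\r\nHost: youtube.com\r\nAccept: video/mp4\r\n\r\n"
blog_req = b"GET /post-about-netflix-pricing HTTP/1.1\r\nHost: example.org\r\n\r\n"

print(classify(streaming_req))  # "video"  (intended match)
print(classify(blog_req))       # "video"  (misclassified; would be throttled or zero-rated)
```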

Last, network management policies currently lack auditing provisions, and we argue that this hinders enforcement and compliance with rules. Further, network providers’ policies evolve over time, requiring constant vigilance. We recommend that regulators incorporate auditing technologies such as those presented in this work as part of future policies.

Moderators

Patrick Sun

Industry Economist, FCC

Presenter

David Choffnes

Assistant Professor, Northeastern University
Net neutrality, network measurement, QUIC, privacy Find me on Twitter: @proffnes

Author

Alan Mislove

Northeastern University

Friday September 8, 2017 9:34am - 10:07am
ASLS Hazel Hall - Room 329

9:34am

Connecting the Unconnected: The Case of Mexico's Wholesale Shared Network
The Wholesale Shared Network (WSN) is the Mexican project intended to provide broadband access to 92.2% of the population. While most of the territory (94%) is already covered by the incumbent provider, the Mexican Government has recently launched a call for tenders to roll out a nation-wide mobile wireless wholesale network using the digital dividend frequencies (700 MHz). The successful bidder will not be able to provide retail services directly, nor offer services to the incumbent.

The Mexican Wholesale Shared Network is a novel policy worldwide, as to date this sort of initiative has only been tested in fixed fiber-based networks. In addition to mobile service providers, other players of the Internet value chain might be interested in offering connectivity services relying on the WSN. This will possibly allow for new business models so far unexplored. Among others, the WSN may be an opportunity for vertically integrated online platforms (such as big OTTs) to bridge the gap in the connectivity link that they currently face.

For this reason, although similar policies have often proven unsuccessful in fixed networks, this one might perform better in wireless. While the problem in fixed networks has been a lack of demand from both retail providers and subscribers, the multi-sided nature of online platforms may help stimulate demand by (partially) subsidizing connectivity through the revenues generated by the services provided on top.

The aim of this paper is to explore different and novel business strategies to provide internet access to low-income and rural municipalities based on the WSN. Through a value chain analysis, we classify providers by their core business and define the strategies that might be most relevant to each of them, according to, for example, the characterization of the reference population. We expect the implications of this research to provide valuable insights for policy-makers.

Moderators

Chris McGovern

Connected Nation Inc.

Friday September 8, 2017 9:34am - 10:07am
ASLS Hazel - Room 120

9:34am

Technological Diversification into 'Blue Oceans'? A Patent-Based Analysis of Patent Profiles of ICT Firms
Over the last couple of years, considerable attention has been focused on the Internet of Things (IOT). Through combining a range of technologies with reductions in the cost and size of the components, the IOT has begun to grow – not only is the number of connections rapidly growing, but it can now be found across an ever wider array of sectors. Vodafone alone, for example, now claims to have more than 50 million IOT connections (Roberts, 2017). While IOT technologies are produced in industries such as aviation/automotive, electronics, medical equipment, software and services, telecommunications and computer hardware (Sadowski, Nomaler et al. 2016), they are applied in a large variety of sectors such as smart cities (Baccarne, Mechant et al. 2014; Anthopoulos 2015), smart energy (Gans, Alberini, & Longo, 2013) or smart industries (Da Silveira, Borenstein et al. 2001; Fogliatto, Da Silveira et al. 2012).

The economic literature suggests that patent analysis can be used to examine the knowledge base and the technological diversification of companies (Kogut and Zander 1992; Teece, Pisano et al. 1997; Zack 1999). As the existing knowledge of a firm provides a critical ingredient of competitive advantage and corporate success, the extent to which companies utilize technological diversification as a strategy to enter into new technological areas has only recently begun to be investigated (Kodama 1986; Granstrand 2001; Breschi, Lissoni et al. 2003; Garcia-Vega 2006; Lin, Chen et al. 2006). Technological diversification has been defined as the extent to which firms use their knowledge base to diversify into relevant or irrelevant technological fields (Kodama 1986; Lin, Chen et al. 2006). In this respect technological diversification allows firms to enhance their competitive advantages in the market (Garcia-Vega 2006). In this context, Sadowski, et al (2016) have shown that a higher degree of technological diversification can lead to valuable technological specialization in new emerging technological fields such as the Internet of things (IoT) (Sadowski, Nomaler et al. 2016).

Research has shown that the entry decisions of incumbent companies into new markets are affected by convergence (i.e., the blurring of boundaries between hitherto separate sectors) and increased competition in existing markets (Katz 1996). More recently, it has been demonstrated that firms prepare for a possible entry into these markets by anticipating and monitoring processes of convergence of different sectors (Curran, Bröring et al. 2010; Curran and Leker 2011). As a response to convergence, companies diversify into new markets based on their existing competencies and resources since they change at a much slower pace than technologies and market conditions in converging sectors. Within the resource-based view theory (Wernerfelt 1984; Barney 2006), diversification into new emerging markets has been conceptualized as a “Blue Ocean” strategy (Kim and Mauborgne 2005; Kim and Mauborgne 2014) aimed at discovering (and benefiting from) pioneering innovations in these markets (van de Vrande, Vanhaverbeke et al. 2011). In exploring new technological opportunities in emerging markets, incumbent companies are able to enter into “blue oceans” of uncontested market space instead of battling competitors in traditional “red oceans”. In entering a new “blue ocean” market incumbent companies are able to unlock new demand as competition is irrelevant in these markets (Kim and Mauborgne 2014). In this tradition, research has rarely addressed the extent to which technological diversification into new markets has improved the knowledge position of incumbent companies. As technological diversification into IoT has been a common strategy of ICT companies over at least the past twenty years, large differences persist with respect to their positioning in these new emerging markets (Sadowski, Nomaler et al. 2016).

We follow Sadowski, Nomaler & Whalley (2016) in terms of defining the IOT. This definition enables us to identify relevant patents, which are then allocated to a specific company. Our study identifies 1322 ICT companies involved in IoT technologies which we classified according to the similarity of their patent profile. We group companies together on the basis of their patenting activity, thereby identifying a series of clusters. Given the volume of IOT patents and the number of companies involved, we then focus our analysis on healthcare and energy. Both sectors are often discussed in terms of being characterised by a series of challenges that the IOT can, at least partially, help to resolve through collecting more data, facilitating its analysis etc.
Not only does our analysis identify the leading actors present in the healthcare and energy areas, as determined by the number of patents and technological diversification, but it also demonstrates that previous experience of ICT patenting does not necessarily result in a substantial presence in these two areas. One way that this can be conceptualised is in terms of “red oceans” and “blue oceans” noted above (Kim & Mauborgne, 2015). We explore this distinction within healthcare and energy by investigating the extent to which the IOT patent portfolios of companies overlap with one another. We find that there is considerable variation in the overlap that exists across our sample.
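As a small illustration of how patent profiles can be compared (hypothetical counts and class labels, not data from the study), portfolio overlap can be measured with the cosine similarity of class-share vectors, and diversification with a simple one-minus-Herfindahl index over those shares:

```python
# Hypothetical IoT patent counts per technology class for three illustrative firms.
import numpy as np

portfolios = {
    "Firm_A": {"healthcare": 40, "energy": 5,  "networking": 55},
    "Firm_B": {"healthcare": 35, "energy": 10, "networking": 60},
    "Firm_C": {"healthcare": 2,  "energy": 70, "networking": 20},
}
classes = sorted({c for p in portfolios.values() for c in p})

def share_vector(p):
    v = np.array([p.get(c, 0) for c in classes], dtype=float)
    return v / v.sum()

def overlap(p, q):  # cosine similarity of two class-share vectors
    u, v = share_vector(p), share_vector(q)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def diversification(p):  # 1 - Herfindahl over class shares
    s = share_vector(p)
    return 1.0 - float((s ** 2).sum())

print("overlap A-B:", round(overlap(portfolios["Firm_A"], portfolios["Firm_B"]), 3))
print("overlap A-C:", round(overlap(portfolios["Firm_A"], portfolios["Firm_C"]), 3))
print({f: round(diversification(p), 3) for f, p in portfolios.items()})
```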

Moderators

Martin B. H. Weiss

University of Pittsburgh

Presenter
Author

Jason Whalley

Northumbria University

Friday September 8, 2017 9:34am - 10:07am
ASLS Hazel Hall - Room 332

9:34am

Balancing Security and Other Requirements in Hastily Formed Networks: The Case of the Syrian Refugee Response
The need for connectivity and communication during emergencies has spawned research and development of Hastily Formed Networks (HFNs) (Denning 2006; Tornqvist et al. 2009; Nelson et al. 2011; Lundberg et al 2014). These networks, both organizational and technical, are crucial to response effectiveness but are difficult to manage and deploy.

Research to improve HFNs has focused on organizational and interpersonal aspects, including HFNs’ ‘conversation spaces’ (Denning 2006), as well as the technical aspects of designing and deploying network infrastructure. These infrastructures typically focus on connecting emergency response personnel. However, recent efforts are expanding to include connections for affected communities. This expansion of the user base creates new requirements, ranging from different models of network management to far greater diversity in end-user equipment, knowledge and skills.

This paper examines organizational and contextual factors of network design and deployment for affected communities, focusing on (1) identifying the design requirements, (2) decision making concerning the appropriate balance of requirements, and (3) implementing designs. Our analysis also examines the conversation spaces through which decisions, particularly trade-offs, are made.

Through a case study of networks deployed in response to the Syrian refugee crisis, we examine cooperation between an NGO and two tech industry Corporate Social Responsibility (CSR) teams. Together, these partners deployed networks in refugee camps across several European countries. As of March 2017, the system has been deployed at 75 locations, supporting 600,000 users.

As detailed in our case study, design requirements fell into three categories: replicability, limited management resources, and managing a diverse user base. Replicability generated requirements related to portability of equipment, while limited management resources required networks be largely self-managing. The diverse user base required bandwidth limits to ensure equity by preventing the emergence of ‘super users.’ This was necessary due to network access being provided free of charge. Further, and more importantly, the need for network security affected the balance for each requirement, as the increased threat of cyberattacks related to the Syrian war created an imperative to protect civilians against electronic exploitation.
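To illustrate the kind of per-user bandwidth cap described above (a generic sketch, not the deployed configuration), a token-bucket limiter bounds how much traffic any single user can push through the shared link:

```python
import time

class TokenBucket:
    """Minimal per-user rate limiter: tokens refill at rate_bps bytes/second
    up to burst_bytes; a packet is admitted only if enough tokens remain."""
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # packet would be delayed or dropped

# One bucket per user/device keeps any single user within the shared budget.
buckets = {"user_1": TokenBucket(rate_bps=250_000, burst_bytes=500_000)}
print(buckets["user_1"].allow(1500))  # a typical packet fits comfortably
```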

Preliminary results suggest three outcomes. First, the conversation spaces for network design are positively influenced by ongoing commitments, trust and specialization. The NGO plays a coordinating role as well as serving as a ‘long term player in the market.’ The tech companies trust the work of the NGO’s staff due to their long term relationship. This, in turn, allowed one of the tech companies to focus on the network’s high level design, reflecting specialization between the team participants. Second, the accumulated expertise and an assessment of the context made from a distance by the tech company designers drove decision making concerning the integration of security features into the network’s design. Third, the design task of these hastily formed networks is a multi-stage and ongoing one, with initial decisions made independently prior to deployment, and subsequent decisions made jointly both during network deployment and during ongoing operations.

Moderators
Presenter
Author

Rakesh Bharania

Network Consulting Engineer, Cisco Tactical Operations
Rakesh Bharania is the West Coast lead for Cisco Tactical Operations (TACOPS), Cisco’s primary technology response team for disaster relief and humanitarian assistance. Additionally, he serves as the chairman for the Global VSAT Forum (GVF) Cybersecurity Task Force, and is a recognized...

Friday September 8, 2017 9:34am - 10:07am
Founders Hall - Auditorium

9:34am

The Functionalities of Success- A Psychological Exploration of Mobile Messenger Apps' Success
Mobile Messenger Apps (MMAs) such as WhatsApp, Facebook Messenger, LINE, Signal or Snapchat enjoy impressive success worldwide. Since many of these applications are offered at no monetary cost, telecommunications providers have argued that consumers choose MMAs predominantly to save costs. In light of consumers’ complementary and multi-homing use of MMAs, it seems unlikely that saving money can fully explain their success. Our paper draws on more than 60 semi-structured qualitative interviews with consumers to explore why they opt for MMAs, why they use them as complements to Electronic Communications Services (ECS), and why they use multiple MMAs at a time.

Specifically, our paper follows a grounded theory approach. It uses three rounds of interviews. The first round (20 interviews) established an initial understanding of relevant success factors of MMAs from a consumer perspective. The second round (24 interviews) focused on the role of technological seams enabling consumers to negotiate their social sphere. The third round of interviews (20 interviews) emphasized how different functionalities may fulfill or thwart basic psychological needs (competence, relatedness and autonomy) as established in Self-Determination Theory (SDT).

Throughout the three rounds of interviews, it emerges that saving costs is indeed not a central motive for consumers to use MMAs. Additional functionalities enabling richer communication are the strongest drivers of MMAs’ success. In particular, group chat, awareness and notification, and presentation-of-self functionalities were found to fulfill consumers’ psychological needs of competence, relatedness and autonomy better than texting via Short Message Service (SMS). Furthermore, our results show that both the complementary use of ECS and MMAs and MMA multi-homing can be explained by consumers enacting subtle social codes along the stages of stage models of relationship development. During the Orientation Stage, consumers either use Instagram, Tinder or Lovoo to make new contacts or rely on SMS and email for weak ties. Facebook (Messenger) echoes consumers’ requirement for a balanced presentation of selected intimate information required in the Exploratory Affective Stage. At the Affective Stage, Snapchat and WhatsApp are favored by consumers due to their various functionalities for crafting social messages that start to disclose actually intimate information. Once the Stable Stage is reached, rich communication features offered by Skype and similar apps are most relevant to enact even the most subtle social codes in rich interactions.

In sum, our results highlight the relevance of additional and innovative functionalities of MMAs for their success. These functionalities generally help to fulfill consumers’ basic psychological needs better than SMS. Furthermore, consumers’ need fulfillment is supported by the opportunity to use various MMAs in parallel to adhere to a finely grained set of social codes associated with interpersonal communication. Hence, policymakers and regulators should not interfere with the innovation paths of these applications or with the technological seams that exist between them. For marketers, our results add further insights regarding potential targeting strategies for developing platform business models for MMAs.

Moderators

Debra Berlyn

Consumer Policy Solutions

Presenter

René Arnold

WIK and Bruegel

Author

Friday September 8, 2017 9:34am - 10:07am
ASLS Hazel Hall - Room 221

10:07am

Digital and Economic Inclusion: How Internet Adoption Impacts Banking Status
Access to banking and other traditional financial services is critical to economic security and stability in the 21st century. The ability to build and access credit and savings is essential to daily tasks, as well as buying a home, planning for retirement, and growing a small business. According to a 2015 Federal Deposit Insurance Corporation (FDIC) survey, however, 7 percent of U.S. households were unbanked and 20 percent were considered underbanked. Financial technology (FinTech), including mobile money services, peer-to-peer lending, and mobile insurance, is promised as a means to economic inclusion for the un(der)banked. Nonetheless, these online financial services require Internet access, adoption, and digital literacy. As the National Telecommunications and Information Administration’s (NTIA) 2015 survey shows, the digital divide persists. In 2015, 27 percent of U.S. households did not access the Internet at home and 21 percent of households did not access the Internet anywhere.

In order to explore the interdependencies between banking and Internet adoption in the United States, this paper merges datasets from FDIC’s June 2015 Unbanked and Underbanked survey and NTIA’s July 2015 Computer and Internet Use survey to study the issue more closely. Both surveys are supplements to the U.S. Census’s monthly Current Population Survey (CPS) survey from consecutive months, which enables us to take advantage of the longitudinal aspects of the CPS panel of households. At the household level, approximately 35,000 households completed both the NTIA and FDIC surveys.

While the process of merging CPS supplements at the household level is fairly straightforward, treating sampling weights and variance raises some complications. Census uses weights to account for factors such as under or over sampling to ensure that survey results accurately represent the U.S. population. These weights change over time with variations in the sample, so we must create new weights to account for the distribution of the longitudinally merged sample. Our paper provides a methodology for merging CPS supplements and addressing these issues. This approach will hopefully assist future researchers in exploring similar intersectional issues.
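A minimal sketch of the merge-and-reweight step is shown below. The file and column names are hypothetical placeholders; the actual CPS household identifiers and supplement weights differ and would need to be mapped to the real variable names.

```python
import pandas as pd

# Hypothetical file and column names standing in for the two CPS supplements;
# both files are assumed to carry a household-level weight column "hh_weight".
ntia = pd.read_csv("ntia_july2015.csv")  # Computer and Internet Use supplement
fdic = pd.read_csv("fdic_june2015.csv")  # Unbanked and Underbanked supplement

merged = ntia.merge(fdic, on="household_id", how="inner", suffixes=("_ntia", "_fdic"))

# Re-scale the supplement weight so the longitudinally merged subsample still
# sums to the household total that the full sample is designed to represent.
full_total = ntia["hh_weight"].sum()
merged["merged_weight"] = merged["hh_weight_ntia"] * (full_total / merged["hh_weight_ntia"].sum())

print(len(merged), round(merged["merged_weight"].sum()))
```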

Our results show that there is a relationship between households’ Internet connected device use and their banking status. Our preliminary results suggest: (1) there is a strong correlation between the un(der)banked and un(der)connected; (2) even when holding demographic factors constant, un(der)banked households also use fewer device types; and (3) households using a single device type, particularly smartphone-only, are less likely to engage in certain financial activities, such as direct deposit.

FinTech promises to bring economic inclusion to un(der)banked populations. Our research suggests, however, that these populations are also often less digitally connected than other groups. Reaching these underserved communities could require a combination of financial and digital literacy services.

Moderators

Chris McGovern

Connected Nation Inc.

Friday September 8, 2017 10:07am - 10:39am
ASLS Hazel - Room 120

10:07am

Disclosures on Network Management Practices and Performance of Broadband Service
This research addresses the following question: “Do customers understand the network management practices and performance characteristics disclosed by broadband Internet access service (BIAS) providers?” Disclosures are required by the Federal Communications Commission (FCC) in its Open Internet ruling. Court support for the transparency requirements was not based on reclassification of BIAS providers as common carriers, thus these requirements are likely to remain even if the new FCC administration rolls back reclassification.

The research is based on four test surveys answered by 3,024 Amazon Mechanical Turk (M-Turk) participants in the US. Surveys included questions related to the disclosures of the following four fixed and mobile BIAS providers, one survey for each provider: AT&T, Comcast, Cox and T-Mobile. M-Turk participants were randomly assigned one of the four surveys. (Footnote: The numbers in this abstract are preliminary and will change as more responses come in.)

80% of the participants in the survey understand the purpose of the BIAS disclosures, i.e., to provide information of the BIAS provider’s network management practices and performance characteristics so customers can make informed decisions about the BIAS offers.

However, only half of the participants understand what network congestion is. Participants tend to associate network congestion with how they individually use their BIAS rather than how a group of users is collectively using their provider’s local infrastructure. Network congestion is generally associated with specific types of traffic, like video streaming, and not with periods of peak usage.

For mobile BIAS, 50% and 60% of participants in T-Mobile’s and AT&T’s surveys, respectively, associate network congestion with crowded places.

Participants only partially comprehend the network management practices implemented by BIAS providers. Less than 12% of the participants were able to recognize all the network management practices described in the disclosure.

Most participants do not understand the characteristics of network management practices such as buffer tuning or Binge On™, used by the mobile BIAS providers AT&T and T-Mobile, respectively. Only 6% for AT&T and 16% for T-Mobile recognize all the characteristics explained in the disclosures as network management practices.

A non-negligible percentage of participants, between 15% and 25%, depending on the BIAS provider, except for AT&T, answered that latency has nothing to do with Internet performance, and some answered that latency is critical for e-mail service quality. 90% of participants in AT&T’s survey understand the concept of latency to some extent, relating it with the performance of voice and video conferencing services, or periods of congestion.

With regards to speed, most participants, above 80%, understand its relationship with Internet performance, mainly when engaged in video conferencing or online gaming.

Less than 50% of the participants, after reading a disclosure, understand the factors that can lead to poor connection performance, such as WiFi connections, the server hosting the content or application, a network interconnected with the customer BIAS provider’s network, and technical specifications of the device used to access the Internet. However, most participants, above 75%, do understand from the disclosures that the BIAS provider does not have complete end-to-end control of the service it provides.

Regarding mobile BIAS, 80% of the participants understand that unlimited plans do not mean that there are no data caps. However, only half of the participants understand the consequences of exceeding such data cap, e.g., reduced speed, higher latencies, etc.

In conclusion, based on this research, the disclosures fail to achieve the goal of informing the consumer so that he/she can make better choices. Very few participants comprehend all the network management practices implemented by BIAS providers. Consequences of complex practices such as buffer tuning and Binge On™ are not understood by more than 80% of the participants in the survey.

Based on the above results and conclusions, disclosures should be re-designed to achieve their objective, i.e., inform the customer of the provider’s network management practices and performance characteristics in language that customers can understand so as to make informed decisions.

Moderators

Patrick Sun

Industry Economist, FCC

Presenter

Juan Manuel Roldan

Carnegie Mellon University


Friday September 8, 2017 10:07am - 10:40am
ASLS Hazel Hall - Room 329

10:07am

Using Aggregate Market Data to Estimate Patent Value
Intellectual property and its protection is one of the most valuable assets for entrepreneurs and firms in the information economy. This article describes a relatively straightforward method for measuring patent value with aggregate market data and the BLP model. We apply the method to United States smartphones. The demand estimates and recovered marginal costs produce sensible simulations of equilibrium prices and shares from several hypothetical patent infringements. In one simulation, the presence of near field communication on the dominant firm’s flagship smartphone results in a 26 percent increase in profits per phone. This estimate provides a starting point for establishing a reasonable royalty between the patent holder and the dominant firm in a hypothetical negotiation.
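The paper's method rests on the BLP random-coefficients demand model; the sketch below is a deliberately simplified plain-logit stand-in, with made-up coefficients and prices held fixed rather than re-equilibrated, showing only the counterfactual step: compare the flagship's per-phone profit with and without the patented feature and use the difference as a starting point for a royalty.

```python
import numpy as np

# Hypothetical logit demand: utility = quality - alpha*price + beta_nfc*has_nfc
alpha, beta_nfc = 0.8, 0.5
phones = {                   # (price in $100s, quality index, has NFC?)
    "flagship":   (7.0, 5.0, True),
    "competitor": (6.0, 4.5, True),
    "budget":     (3.0, 3.0, False),
}
marginal_cost = {"flagship": 4.0, "competitor": 3.8, "budget": 2.0}

def shares(flagship_has_nfc: bool):
    u = {}
    for name, (price, quality, nfc) in phones.items():
        if name == "flagship":
            nfc = flagship_has_nfc
        u[name] = quality - alpha * price + (beta_nfc if nfc else 0.0)
    expu = {k: np.exp(v) for k, v in u.items()}
    denom = 1.0 + sum(expu.values())  # outside option normalized to zero
    return {k: v / denom for k, v in expu.items()}

def flagship_profit(flagship_has_nfc: bool):
    s = shares(flagship_has_nfc)["flagship"]
    return s * (phones["flagship"][0] - marginal_cost["flagship"])

lift = flagship_profit(True) / flagship_profit(False) - 1.0
print(f"profit lift from NFC on the flagship: {lift:.1%}")
```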

Moderators

Martin B. H. Weiss

University of Pittsburgh

Presenter

Scott Hiller

Fairfield University

Author

Friday September 8, 2017 10:07am - 10:40am
ASLS Hazel Hall - Room 332

10:07am

Cyber Security Capacity: Does It Matter?
It is assumed that the benefits of building national cyber security capacity are widespread, largely based on common sense, limited case studies, anecdotal evidence, and expert opinion. This paper reports on the early phase of a systematic effort to bring together cross-national data from multiple sources to examine indicators related to the cyber security capacity of a nation. We use these to determine whether capacity matters – that is, whether it translates into conditions affecting end users of the Internet. Using data from approximately 120 countries, a multivariate analysis shows that indicators related to national cyber security capacity have had a strong impact in lowering end-user security problems. The sources for this analysis include data collected for the World Economic Forum Network Readiness Index, the World Bank, Internet World Statistics, and Microsoft, which are openly available. The results of this study reinforce the case that building cyber security capacity is a worthwhile investment, while also raising important issues around global inequalities in the ability to build greater cyber security capacity. The analysis also points to ways in which the data and analyses can be refined, such as through gathering more direct indicators of capacity-building efforts, like those being developed by the GCSCC at Oxford University.
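The sketch below illustrates the kind of cross-national multivariate model described; the file and column names are hypothetical stand-ins for the WEF, World Bank, Internet World Stats, and Microsoft-derived indicators, not the authors' specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical country-level dataset and indicator names, for illustration only.
df = pd.read_csv("cyber_capacity_countries.csv")

model = smf.ols(
    "malware_encounter_rate ~ network_readiness + secure_servers_per_capita"
    " + gdp_per_capita + internet_penetration",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(model.summary())
```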

Moderators
Presenter

Ruth Shillair, Ph.D.

Assistant Professor, Director of Graduate Studies, Department of Media & Information Studies, Michigan State University
Interests include: improving protections for individuals by improving cybersecurity, reducing digital divides, making cybersecurity/privacy usable.

Author

Friday September 8, 2017 10:07am - 10:40am
Founders Hall - Auditorium

10:07am

User-Generated Content: An Examination of Users and the Commodification of Instagram Posts
The goal of the experiment was to examine the impact of sponsored content on users’ perceptions of Instagram. Specifically, it analyzed if and how the trust and credibility associated with electronic word-of-mouth are affected by cues on Instagram posts that indicate they are sponsored content. The project contributes to theoretical models of persuasion knowledge in digital contexts, especially when the line between sponsored and user-generated content is ambiguous. The Federal Trade Commission holds that only certain terms effectively identify content as paid advertising; specifically, it argues that “#promoted” is not sufficiently clear. In 2016, Lord & Taylor settled a lawsuit with the FTC. The FTC’s grievance was that Lord & Taylor did not require the influencers to disclose that the company had compensated them to post the photo, and none of the posts included such a disclosure, even though the company’s Instagram handle was present.

Thus, this experiment tested two different images with five conditions to assess whether people recognize text-based sponsorship cues on an Instagram post and how that recognition affects their perceptions of the post’s credibility and trust in the message. The experiment was implemented using an online survey tool (Qualtrics) with 358 participants recruited from students at a university in Colorado; the final sample comprised 274 participants. The study examines one research question with four different variables and tests three hypotheses using ANOVAs and other statistical tests. The results showed that users recognized the @company_handle as an advertisement more than #ad, #sponsored, and #promotion. The research also showed no significant difference between users’ perceptions of the hashtags #ad, #sponsored, and #promotion. This leads to the recommendation that a company handle should be present along with one of these hashtags in order to improve the persuasion knowledge of Instagram users.
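For readers unfamiliar with the analysis step, the sketch below runs a one-way ANOVA across five disclosure-cue conditions on simulated recognition scores; the scores, group sizes, and condition means are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated 7-point "this is an ad" recognition scores per condition.
conditions = {
    "#ad":             rng.normal(4.1, 1.2, 55),
    "#sponsored":      rng.normal(4.2, 1.2, 55),
    "#promotion":      rng.normal(4.0, 1.2, 55),
    "@company_handle": rng.normal(5.1, 1.2, 55),
    "no_cue":          rng.normal(3.2, 1.2, 54),
}

f_stat, p_value = stats.f_oneway(*conditions.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```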

Moderators

Debra Berlyn

Consumer Policy Solutions

Presenter

June Macon

PhD Student, University of Illinois at Chicago


Friday September 8, 2017 10:07am - 10:40am
ASLS Hazel Hall - Room 221

10:40am

Coffee Break
Friday September 8, 2017 10:40am - 11:15am
Founders Hall - Multipurpose Room

11:15am

A Tale of Two Agencies: Privacy at the FCC and FTC

Should there be different regulators and approaches for broadband companies versus other Internet companies? The Congressional Review Act (CRA) resolution enacted earlier this year repealed the FCC's rule and barred similar rules in the future, but would it affect the FCC's ability to enforce Title II directly? What options will the CRA leave the FCC in the future? And what will happen if/when the FCC cedes jurisdiction over broadband privacy back to the FTC? Will the FTC have adequate authority? An appellate panel decision called the FTC's jurisdiction into question, but the full Ninth Circuit has since vacated that decision and will rehear the case in September. Does Congress need to address the common carrier exception? What other changes should be made to the FTC's or FCC's authority or approaches? What authority will the states and private parties have?



Friday September 8, 2017 11:15am - 12:45pm
ASLS Hazel - Room 120

11:15am

The Next Stages of the Network Neutrality Debate

The network neutrality debate has proceeded on two levels: 1) the typical dynamic of public interest regulation versus the private industry preference for regulatory restraint and 2) the battle between two media industry segments – the ISP platforms versus the heavy commercial users. Panelists will review how these dynamics are likely to play out in a new political context.


Moderators

Russ Neuman

New York University

Eli M. Noam

Columbia Institute for Tele-Information; Columbia Business School

Presenter

Friday September 8, 2017 11:15am - 12:45pm
Founders Hall - Auditorium

12:45pm

Lunch
Friday September 8, 2017 12:45pm - 2:15pm
Founders Hall - Multipurpose Room

2:15pm

Mobile Broadband Strategies: Comparing Policy Issues and Research Challenges in Developed and Developing Countries

This international panel will focus on the impact of the widespread penetration and use of intelligent mobile devices, in both developing and developed countries. The Panelists, whose expertise covers various countries and regions, will discuss and compare strategies being used in developed countries like the US, Australia and the EU, and developing countries like Mexico, Brazil and India, among others. [We wish to find out what has worked, what did not, the problems encountered and whether there are lessons to be learned that are of general applicability, as well as for particular countries.]


Moderators

Prabir Neogi

Visiting Fellow, Carleton University
I am a retired Canadian public servant and a TPRC "old hand", having attended the Conference regularly since 1992. My broad areas of interest are: Broadband communications (both mobile and wireline), universality issues including urban-rural gaps, and the transformative uses of ICTs...

Presenter
Author

Jason Whalley

Northumbria University

Friday September 8, 2017 2:15pm - 3:45pm
ASLS Hazel Hall - Room 329

2:15pm

Regulation for Internet Platforms

Debates about Internet policy frequently proceed from the premise that the Internet owes its success to the presence of key platform technologies. Unfortunately, the concept of platforms remains badly undertheorized and understudied empirically. The result is that policymakers and enforcement authorities must often make key decisions without a clear idea of what aspects of platform design are essential and what practices are potentially problematic. The panel would include a discussion of the theoretical and empirical literature surrounding platforms. Key topics would include the EU antitrust case against Google, the role of standard-setting organizations, and the decisions not to include mobility and identity verification in IPv6.


Moderators
Presenter
Author

Jonathan Liebenau

London School of Economics & Political Science (LSE) - Department of Management

Douglas Sicker

Carnegie Mellon University

Friday September 8, 2017 2:15pm - 3:45pm
Founders Hall - Auditorium

2:15pm

The National Broadband Research Agenda: Next Steps

Key personnel from the NTIA, NSF and the Office of Educational Technology, Department of Education who led the development of the National Broadband Research Agenda (NBRA) will brief the TPRC community about the NBRA, and discuss potential areas of cooperation between government stakeholders and the academic community to further research and policy-making on broadband access. Speakers will discuss the research, data collection and funding priorities for their respective agencies.


Moderators

Krishna Jayakar

Penn State University

Presenter

Friday September 8, 2017 2:15pm - 3:45pm
ASLS Hazel - Room 120

3:45pm

Coffee Break
Friday September 8, 2017 3:45pm - 4:10pm
Founders Hall - Multipurpose Room

4:10pm

Tomorrow's Backhaul: Comparative Analysis of Backhaul Cost for Next Generation Mobile Broadband
Despite forecasted quadratic growth of cellular traffic over the next several years, no in-depth analysis of brownfield and greenfield Long Term Evolution (LTE) backhaul deployment over different local access network options has been published. The models in this paper integrate engineering and economic analysis of the backhaul needed to support an LTE mobile broadband network using digital subscriber line (DSL), cable (DOCSIS), fiber, or microwave networks for backhaul. The results of the model allow a cellular provider to determine when to upgrade a network and which backhaul solution is the lowest cost, depending upon traffic load and spectrum allocation. The model results indicate that the Net Present Value (NPV) favors fiber passive optical network (PON) backhaul for high population densities, while for low population densities brownfield DOCSIS is favored. The model demonstrates that demand for the next ten years can be met with Long Term Evolution Unlicensed (LTE-U).
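The comparison ultimately reduces to a discounted cash flow for each backhaul option. The sketch below shows the structure of such an NPV comparison with entirely hypothetical per-site cost and revenue figures; the paper's calibrated inputs would replace them.

```python
# Hypothetical per-cell-site figures, for illustration of the NPV comparison only.
def npv(capex, annual_opex, annual_revenue, years=10, discount=0.08):
    return -capex + sum(
        (annual_revenue - annual_opex) / (1 + discount) ** t
        for t in range(1, years + 1)
    )

backhaul_options = {              # (up-front capex, annual opex) per site
    "fiber PON (greenfield)": (120_000, 4_000),
    "DOCSIS (brownfield)":    (30_000, 12_000),
    "microwave":              (45_000, 9_000),
}
annual_revenue = 28_000

for name, (capex, opex) in backhaul_options.items():
    print(f"{name:24s} NPV = ${npv(capex, opex, annual_revenue):,.0f}")
```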

Dr. David Reed at University of Colorado at Boulder endorsed the submission of this paper to TPRC.

Moderators

Marvin Sirbu

Carnegie Mellon University

Presenter

Friday September 8, 2017 4:10pm - 4:43pm
ASLS Hazel Hall - Room 329

4:10pm

Communications Act 2021
The Communications Act of 1934, as amended by the Telecommunications Act of 1996, is showing its age. Like an old New England house that added drafty new additions over the years to house a growing extended family, the Act is poorly suited to meet today's challenges. Much of what is included in the Act relates to earlier technologies, market structures, and regulatory constructs that address issues that are either no longer relevant or that cause confusion when one tries to map them to current circumstances. The legacy Act was crafted in a world of circuit-switched POTS telephony provided by public utilities, and even when substantially revised in 1996, barely mentions broadband or the Internet.

Moreover, the FCC has struggled in recent years to establish its authority to regulate broadband services and in its effort to craft a framework to protect an Open Internet (sometimes, referred to as Network Neutrality). While many of the fundamental concerns that the legacy Act addressed remain core concerns for public policy, the technology, market, and policy environment are substantially changed. For example, we believe that universal access to broadband and Internet services are important policy goals, but do not believe that the current framework enshrined in Title II of the legacy Act does a good job of advancing those goals.

In this paper, we identify the key concerns that a new Act should address and those issues in the legacy Act that may be of diminished importance. We propose a list of the key Titles that a new Communications Act of 2021 might include and identify their critical provisions. Our straw man proposal includes six titles: Title I establishes the basic goals of the Act and sets forth the scope and authority for the FCC; Title II provides the basic framework for regulating potential bottlenecks; Title III establishes a framework for monitoring the performance of communications markets, for addressing market failures, and for promoting industrial policy goals; Title IV focuses on managing radio-frequency spectrum; Title V focuses on public safety and critical infrastructure; and Title VI addresses the transition plan.

Our goal is to provoke a discussion about what a new Act might look like in an ideal, clean-slate world; not to address the political, procedural, or legal challenges that necessarily would confront any attempt at major reform. That such challenges are daunting we take as given and as a partial explanation for why the legacy Act has survived so long. Nevertheless, it is worthwhile having a clear picture of what a new Communications Act should include and the benefits that having a new Act might offer so we can better judge what our priorities ought to be and what reforms might best be attempted.

Moderators
OU

Olga Ukhaneva

Navigant and Georgetown University

Presenter
Author
DS

Douglas Sicker

Carnegie Mellon University

Friday September 8, 2017 4:10pm - 4:43pm
ASLS Hazel Hall - Room 332

4:10pm

Sensitive-by-Distance: Quasi-Health Data in the Algorithmic Era
“Quantified Self” apps and wearable devices collect and process an enormous amount of “quasi-health” data — information that does not fit within the legal definition of “health data”, but that is otherwise revelatory of individuals’ past, present, and future health statuses (like information about sleep-wake schedules or eating habits).

This article offers a new perspective on the boundaries between health and non-health data: the “data-sensitiveness-by-(computational)-distance” approach — or, more simply, the “sensitive-by-distance” approach. This approach takes into account two variables: the intrinsic sensitiveness (static variable) of personal data and the computational distance (a dynamic variable) between some kinds of personal data and pure health (or sensitive) data, which depends upon the computational capacity available in a given historical period of technological (and scientific) development.

Computational distance should be considered both objectively and subjectively. From an objective perspective, it depends on at least three factors: (1) the level of development of data retrieval technologies at a certain moment; (2) the availability of “accessory data” (personal or non-personal information), and (3) the applicable legal restraints on processing (or re-processing) data. From a subjective perspective, computational capacity depends on the specific data mining efforts (or ability to invest in them) taken by a given data controller: economic resources, human resources, and the utilization of accessory data.

A direct consequence of the expansion of augmented humanity in collecting and inferring personal data is the increasing loss of health data processing “legibility” for data subjects. Consequently, the first challenge to be addressed when searching for a balancing test between individual interests and other (public or commercial) interests is the achievement of a higher level of health data processing legibility, and thereby the empowerment of individuals’ roles in that processing. This is already possible by exploiting existing legal tools to empower data subjects — for instance, by supporting the full exercise of the right to access (i.e. awareness about the finality of processing and the logic involved in automated profiling), the right to data portability, and the right not to be subject to automated profiling.

Moderators
TB

Tim Brennan

University of Maryland Baltimore Campus

Presenter
GM

Gianclaudio Malgieri

Vrije Universiteit Brussel


Friday September 8, 2017 4:10pm - 4:43pm
Founders Hall - Auditorium

4:10pm

TV White Space as a Feasible Solution to Spread Mobile Broadband
In recent years the consumption of mobile data services has increased greatly, as more users rely every day on smartphones, tablets, laptops, and other devices to access the Internet. In addition, the significant increase in video traffic represents a challenge, as increased usage strains the capacity of the airwaves. Many experts agree that, despite continuous investment in networks and advances in wireless efficiency, the growing demand for mobile broadband service is likely to surpass the available spectrum capacity in the short term.

This has led mobile operators to compete for access to a share of the spectrum commonly referred to as “TV White Spaces” (TVWS), which can be defined as the VHF/UHF frequencies left idle by television broadcasting. These bands have excellent characteristics: favorable propagation, low noise levels, large macro-cell sizes, and low subscriber density. These idle frequencies also offer an exceptional opportunity to connect the sparse communities found in extensive rural areas and to bridge digital gaps, especially because there are no plans to extend fiber to these communities and TVWS provides a far more cost-effective solution than fiber.

Starting in 2012, several pilots and experiments were launched around the globe. Most of them were endorsed by non-telco organizations such as Microsoft and Google, with slightly different purposes, angles, and perspectives. Uruguay, Colombia, and Jamaica, as well as Ghana, Kenya, and many other countries, including the US and several European countries, participated in proofs of concept, pilots, or initial deployments to benchmark TVWS as a cost-effective mobile broadband solution for a variety of scenarios.

The main purpose of this analysis is to survey those TVWS pilots and assess their current status worldwide. The first TVWS pilots should provide useful experience for defining the roadmap and future applications in the development and uptake of the mobile broadband market.

The analysis provides an exhaustive review and compilation of general data from the studies reporting the status of TVWS pilots, including each pilot's main purpose, trial status, stakeholders involved, timeline, applicable regulations, viability, and the political will to follow up with this technology. The data provided by those projects and pilots are analyzed to assess the reliability this technology can offer in each area. The focus is multidisciplinary: technology plays a key role, but regulations and the market must also be included in the final equation to obtain a complete picture of the whole TVWS solution.

The results allow us to summarize a set of case studies and to draw out the lessons learned from the initial TVWS deployments, including good and best practices, the role played by TVWS in national digital agendas, and whether this technology remains a viable way to deploy mobile broadband in certain markets.

Moderators
Presenter
MO

Miquel Oliver

Universitat Pompeu Fabra

Author
FS

Francisco Salas

Universitat Pompeu Fabra

Friday September 8, 2017 4:10pm - 4:43pm
ASLS Hazel Hall - Room 221

4:10pm

Commodifying Trust: Trusted Commerce Policy Intersecting Blockchain and IoT
Blockchain, or distributed ledger technology, is the key innovation inside Bitcoin, the virtual currency and distributed database commodity. Regulators in different states and nations have viewed, and now regulate, Bitcoin in various ways. For example, Bitcoin is property (IRS), a virtual currency (New York State Department of Financial Services and its BitLicense), and an unregulated technology (California, Texas). This regulatory divergence has not prevented the emergence of an $18 billion global Bitcoin market. It has, however, led some of its early enthusiasts to prison for crossing the line from trusted blockchain anonymity to money laundering. Distributed ledger applications are presently in experimental and early commercial use in applications and industry sectors extending far beyond Bitcoin, and far beyond fintech (financial technology).

This paper evaluates blockchain technology and the role of regulators and policymakers in shaping the evolution and commercialization of this disruptive innovation, particularly for the Internet of Things. As blockchain is increasingly used to establish a secure trust relationship and permanent record in a wide array of networked markets, will the diverse regulatory treatments of essentially the same innovation create new policy barriers to its wide application? Are there information policy measures that can help industry and users avoid the inevitable pitfalls of a novel technology? If so, will these innovations spark a new alignment of authority among regulators, for example between the Securities and Exchange Commission and the IRS?

This original research will be among the first to deconstruct blockchain for a wide array of industrial sectors and Internet of Things markets. Most prior work has focused on blockchain applications for financial markets, specifically the cybercurrency Bitcoin, and in particular its cryptographically driven consensus process to establish and maintain trust. While it is important to understand how blockchain technology can increase technical efficiency and reduce transaction costs with an immutable, auditable record of all transactions, that only partly explains why this technological innovation has sparked such interest. Most important is the ability of blockchain to combine trust and privacy with transparency in a new way.

The research methods for evaluating blockchaining of the Internet of Things include socio-technical field tests and multi-method pilot studies currently being planned. Preliminary results and insights from industry and policymaker interviews will be shared in this paper. Suggestions for further blockchain Internet of Things policy research conclude the paper.

Moderators
Presenter
RE

Richie Etwaru

QuintilesIMS

Author

Lee W. McKnight

Associate Professor, Syracuse University
Lee W. McKnight is an Associate Professor in the iSchool (The School of Information Studies), Syracuse University. Lee was Principal Investigator of the National Science Foundation Partnerships for Innovation Wireless Grids Innovation Testbed (WiGiT) project, and recipient of the 2011...

Friday September 8, 2017 4:10pm - 4:43pm
ASLS Hazel - Room 120

4:43pm

Investigating End-To-End Integrity Violations in Internet Traffic
Internet applications are commonly implemented with the implicit assumption that network traffic is transported across the internet without modification; we refer to this as end-to-end integrity. Put simply, most applications assume that the data they send will be received intact by the host they are communicating with (barring transient errors and normal packet loss). This expectation is encoded in the Federal Communications Commission (FCC) Open Internet Order, which states that Internet Service Providers (ISPs) should not impose “unreasonable interference” on customers' network traffic. However, it is increasingly common to find ISPs that deploy middleboxes that silently manipulate customers’ traffic in ways that impact security, privacy, and integrity.

Additionally, in late 2016, the FCC adopted a set of regulations with the goal of protecting consumer privacy (FCC 16-148). In brief, these regulations required Internet Service Providers (ISPs) to provide transparency and customer choice over how customers' “personally identifiable information” and “content of communications” are shared with third parties. In March 2017, both houses of Congress passed a bill that nullified these protections; it is expected that the President will sign this bill into law shortly. As a result, the issues of privacy and integrity of users' Internet traffic is of immediate importance to policymakers.

This paper presents evidence of multiple ISPs that modify customers’ traffic in flight. We use an HTTP/S proxy service with millions of end hosts in residential networks to study the behavior of over 14,000 networks worldwide. Using this system, we route benign traffic via over 1.2 million hosts in these networks to test for end-to-end integrity. We find end-to-end integrity violations including hijacking of certain DNS responses — often sending users to pages with advertisements — by AT&T, Verizon, and Cox Communications (as well as a number of foreign ISPs). We also find content injection in web pages — often adding trackers or advertisements to web pages or censoring content — by a number of foreign ISPs.
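
The measurement platform above is the authors' own; as a minimal sketch of the underlying idea (fetching a benign test object directly and through a residential vantage point, then comparing hashes), the Python fragment below is illustrative only. The test URL, proxy port, and fetch_via_proxy() helper are assumptions, not details from the paper.

# Sketch of an end-to-end integrity check: compare a direct fetch of a benign
# test object against the same fetch routed through a residential proxy host.
import hashlib
import urllib.request

TEST_URL = "http://example.com/integrity-test-object"  # server the testers control

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def fetch_direct(url: str) -> bytes:
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read()

def fetch_via_proxy(url: str, proxy_host: str) -> bytes:
    # Hypothetical stand-in for routing the request through an end host.
    handler = urllib.request.ProxyHandler({"http": f"http://{proxy_host}:8080"})
    opener = urllib.request.build_opener(handler)
    with opener.open(url, timeout=10) as resp:
        return resp.read()

def check_integrity(proxy_host: str) -> bool:
    baseline = sha256_of(fetch_direct(TEST_URL))
    observed = sha256_of(fetch_via_proxy(TEST_URL, proxy_host))
    if observed != baseline:
        print(f"possible in-flight modification via {proxy_host}")
        return False
    return True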

Worse, we find that a number of hosts show evidence that their web requests are being monitored, suggesting that customer browsing data may be being sent to third parties. We find that a number of foreign ISPs have large numbers of users whose traffic appears to be “duplicated”: when we ask a host to fetch a webpage on a server we control, we observe multiple web requests coming in from different locations on the internet. This observation indicates that users' browsing behavior is being transmitted to third parties, potentially without their knowledge or consent.

Given the increasing amounts of critical and privacy-sensitive information that is exchanged online, we recommend that regulators leverage active auditing technologies to inform and enforce current and future policies. Our methodology in particular can be deployed with low overhead and is scalable to millions of hosts and thousands of networks.

Moderators
MS

Marvin Sirbu

Carnegie Mellon University

Presenter
TC

Taejoong Chung

Postdoctoral Researcher, Northeastern University

Author
DC

David Choffnes

Assistant Professor, Northeastern University
AM

Alan Mislove

Northeastern University

Friday September 8, 2017 4:43pm - 5:15pm
ASLS Hazel Hall - Room 329

4:43pm

Identifying Market Power in Times of Constant Change
We show that traditional approaches to defining markets to investigate market power fail in times of rapid technological change because demand and supply are in constant flux. Currently, empirical analyses of market power rely upon historical data, the value of which degrades over time, possibly resulting in harmful regulatory decisions. This points to a need for a different approach to determining when regulation is an appropriate response to market power. We present an approach that relies upon essential factors leading to monopoly (EFs), such as control of essential resources, which persist across generations of products. Market power analyses should be a search for EFs and policy responses should focus on diffusing market power without destroying value.

This issue is particularly important for the broad category of telecommunications, as telecommunications continues to evolve from services provided via specialized networks to services provided by apps residing on generalized networks designed primarily to accommodate data. This transition in services and networks is disruptive to business and regulatory models that are based on the traditional network paradigm.

One failing of the traditional regulatory approach is the problem of analysis decay. While reliance on historical data is appropriate under stable market conditions because it grounds the analysis in real experience, it provides invalid results when demand characteristics are unstable or unknown, such as in rapidly changing markets and emerging products.

We use as an example over the top (OTT) services. Three issues may arise with OTT providers. One issue is whether the OTT provider should be considered a telecommunications provider. Per our analysis, the OTT provider is not a provider of a physical communications channel and so is not a telecommunications carrier but rather a software interface for customers. The OTT provider does not compete with telecommunications channels and is indeed dependent on them.

Another issue is how an OTT provider competes with telecommunications providers. We assert it is futile to base policy or regulation on a product rivalry when product definitions evolve rapidly. Even if one could conduct a valid analysis, its relevance would quickly decay. Instead, decisions on whether to regulate should be based on analyzing whether any operator possesses EFs. Service operators that do not should not be subjected to economic regulation, except to address consumer protection issues and perhaps network interconnection. Operators that do possess EFs will possess market power over time and over generations of products. How this market power should be addressed would depend upon the specifics of the situation.

A less prominent issue is the regulator’s role in the evolution of traditional telecommunications providers’ business models. Sometimes telecommunications providers seek to have regulations imposed on OTT providers. In our analysis this is an issue of how traditional operators will evolve their business models to an NGN world.

This theoretical analysis is currently complete. The next step is to ground it in actual cases of factors that created market power, to determine possible impacts on antitrust policy. Some practitioners might resist this research, as it calls into question the usefulness of a cottage industry of economists, lawyers, and policy-makers; however, applicable empirical work will inform the value of the EFs approach. TPRC’s combination of policy-makers, lawyers, economists, and industry leaders will lend itself well to this issue.

Moderators
TB

Tim Brennan

University of Maryland Baltimore Campus

Presenter

Janice Alane Hauge

Professor, University of North Texas


Friday September 8, 2017 4:43pm - 5:15pm
Founders Hall - Auditorium

4:43pm

Price and Quality Competition in U.S. Broadband Service Markets
Official government price indexes show both residential and business wired internet access prices essentially flat or increasing in the United States since 2007. In stark contrast, prices for wireless telecommunications services have been falling at a consistent rate of about 2 to 4 percent per year over this period, while mobile broadband data prices appear to have been falling at rates a full order of magnitude greater. Can the sluggish pace of price decline in official data on wireline broadband service prices be explained by unmeasured quality improvement?

In the first part of this paper, we construct direct measures of changes in wired broadband service quality over time, utilizing a relatively large sample of US households. The results show positive, statistically and economically significant rates of improvement in delivered broadband speed within given service quality tiers for most U.S. internet service providers in recent years, as well as a general shift within households toward higher service quality tiers. Improvements in performance within speed tiers appear to be comparable in magnitude to the rates of improvement in quality-adjusted price indexes that have been estimated in econometric studies of broadband service prices. These statistical results are then used to construct within-tier service quality indexes, based on delivered vs. advertised data rates, for individual US broadband service providers.

In the second part of this paper, the effects of competition on within-speed-tier quality improvement in US broadband markets are analyzed. We construct a dataset that allows us to model broadband quality improvement within core-based statistical areas (CBSAs). Our identification strategy for teasing out the short-run impact of increased competition on broadband quality allows for household-specific fixed effects, as well as controls for a variety of socio-economic characteristics of households within geographic areas, and hinges on the assumption that ISP-specific upgrades to capacity within a market (CBSA) potentially affect all households served by that ISP. Our results show that in census tracts with large numbers of wireless mobile ISPs, quality improvement (measured by delivered speed) is greater than in census tracts that lack large numbers of wireless mobile competitors. Inter-modal broadband competition (wireless mobile vs. wireline and fixed wireless), at least in the short run, appears to have statistically and economically significant impacts on delivered service quality.

In the final section of this paper, the ISP-level quality indexes previously constructed are combined with data on broadband service prices, and measured characteristics of broadband service, from smaller, random samples of U.S. urban census tracts. A hedonic price index for U.S. residential broadband service is constructed, using a hedonic regression model estimated over pairs of adjacent time periods. This quality-adjusted price index spans the period from January 2014 to October 2016. Our hedonic price index shows quality-adjusted prices declining at annualized rates of approximately 3 to 4 percent. These magnitudes are only a little larger than our previous direct estimates of quality improvement. We conclude that quality of delivered service, both within and across service tiers, is the primary dimension of competition amongst US broadband providers, and that the benefits of within-tier delivered speed improvement are substantial in magnitude when compared to quality-adjusted price declines estimated using hedonic methods.
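
As a hedged illustration of the adjacent-period hedonic step described above, the Python sketch below regresses log price on service characteristics plus a period dummy; the coefficient on the dummy gives the quality-adjusted price change. The data file, column names, and controls are placeholders, not the authors' actual specification.

# Illustrative adjacent-period hedonic regression for broadband plans.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

plans = pd.read_csv("broadband_plans.csv")        # hypothetical plan-period observations
plans["log_price"] = np.log(plans["monthly_price"])

# 'period' is 0 for the earlier month and 1 for the later month of the adjacent pair.
model = smf.ols(
    "log_price ~ np.log(download_mbps) + np.log(upload_mbps)"
    " + data_cap_gb + C(technology) + C(cbsa) + period",
    data=plans,
).fit()

# exp(coefficient on period) - 1 approximates the quality-adjusted price change.
qa_change = np.exp(model.params["period"]) - 1
print(f"quality-adjusted price change between periods: {qa_change:.1%}")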

Moderators
OU

Olga Ukhaneva

Navigant and Georgetown University

Presenter
KF

Kenneth Flamm

Univ of Texas at Austin

Author

Friday September 8, 2017 4:43pm - 5:15pm
ASLS Hazel Hall - Room 332

4:43pm

Web Accessibility Standards as Organizational Innovations: An Empirical Analysis in a Developing Country Context
This paper is an investigation of the factors influencing the adoption of web accessibility (WCAG 2.0) guidelines by local government websites in China. Using Berry’s (1994) theoretical model of organizational innovation, the paper examines whether factors such as slack time and resource availability, organizational size, leadership, and the presence of external stakeholders influence the adoption of web accessibility guidelines, utilizing an econometric methodology. The study has implications not only for e-government and information access for persons with disabilities in a developing country context, but also more generally for organizational innovation and learning.

The WCAG 2.0 Guidelines are a globally recognized standard for the information accessibility of websites. They have been adopted by governments around the world, including the United States, to improve information access for persons with disabilities (Li, Yen, Lu & Lin, 2012). Scholars have evaluated the compliance of government websites with the standard (Hanson & Richards, 2013), barriers to adoption (Velleman, Nahuis & van der Geest, 2017; Youngblood, 2014), and the consequences of adoption for factors such as trust in government (Singh, Naz & Belwal, 2010; OECD, 2013).

However, few studies have examined web accessibility standards adoption in a developing country; the few that do are largely exploratory studies that investigate the status of standards adoption by local governments, with less attention to the factors that promote or hinder adoption (Rau, Zhou, Sun, & Zhong, 2016; Shi, 2007; Zhao, Marghitu, & Mou, 2017). But studying these factors is all the more important in developing countries, where local governments have access to fewer resources, the legal standards for web accessibility are less well-defined, and external stakeholders such as disability advocates are less well-organized. Adoption rates are therefore likely to be more variable, and the influence of predictors more uncertain.

In this context, this paper evaluates the accessibility of a randomly selected sample of municipal government websites using AChecker, an automated testing tool widely adopted by researchers in e-government and e-commerce (Fuglerud & Røssvoll, 2012; Gilbertson & Machin, 2012), supplemented by manual checking. Specifically, the number of violations of the WCAG 2.0 standard is compiled for each website. Econometric analysis is used to identify the factors from Berry’s (1994) model of organizational innovation that predict the adoption of web accessibility standards — the managerial characteristics of the local government (budgets, leadership, number of e-services offered) and the demographic and economic characteristics of the local jurisdiction. In conclusion, the study offers recommendations to promote successful adoption of web accessibility standards by local governments.
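
As a sketch of the econometric step, a simple count regression of violation totals on organizational and demographic predictors could look like the following. The data file, variable names, and the Poisson specification are assumptions for illustration; the paper's actual model may differ.

# Illustrative Poisson count regression: WCAG 2.0 violations per municipal website
# regressed on predictors suggested by Berry's (1994) framework.
import pandas as pd
import statsmodels.formula.api as smf

sites = pd.read_csv("municipal_wcag_audit.csv")   # hypothetical audit results

model = smf.poisson(
    "violations ~ log_budget + e_services_count + leadership_score"
    " + log_population + gdp_per_capita",
    data=sites,
).fit()
print(model.summary())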

References

Berry, F. S. (1994). Innovation in public management: The adoption of strategic planning. Public Administration Review, 54(4): 322- 330.

Fuglerud, K. S., & Røssvoll, T. H. (2012). An evaluation of web-based voting usability and accessibility. Universal Access in the Information Society, 11(4), 359–373.

Gilbertson, T. D., & Machin, C. H. C. (2012). Guidelines, icons and marketable skills: An accessibility evaluation of 100 web development company homepages. Proceedings of the International Cross-Disciplinary Conference on Web Accessibility (W4A '12) (pp. 4). New York, NY, USA: ACM.

Hanson, V. L., & Richards, J. T. (2013). Progress on website accessibility? ACM Transactions on the Web (TWEB), 7(1), 1-30.

Li, S., Yen, D. C., Lu, W., & Lin, T. (2012). Migrating from WCAG 1.0 to WCAG 2.0 – A comparative study based on web content accessibility guidelines in Taiwan. Computers in Human Behavior, 28(1), 87-96.

Organisation for Economic Co-operation and Development (2015). Government at a glance. OECD iLibrary. Retrieved from http://www.oecd-ilibrary.org/governance/government-at-a-glance-2015_gov_glance-2015-en

Rau, P. P., Zhou, L., Sun, N., & Zhong, R. (2016). Evaluation of web accessibility in China: Changes from 2009 to 2013. Universal Access in the Information Society, 15(2), 297.

Shi, Y. (2007). The accessibility of Chinese local government Web sites: An exploratory study. Government Information Quarterly 24(2):377–403.

Singh, G., Pathak, R. D., Naz, R., & Belwal, R. (2010). E-governance for improved public sector service delivery in India, Ethiopia and Fiji. International Journal of Public Sector Management, 23(3), 254-275.

Velleman, E. M., Nahuis, I., & van der Geest, T. (2017). Factors explaining adoption and implementation processes for web accessibility standards within eGovernment systems and organizations. Universal Access in the Information Society, 16(1), 173-190.

Youngblood, N. E. (2014). Revisiting Alabama state website accessibility. Government Information Quarterly, 31(3), 476 - 487.

Zhao, C., Marghitu, D., & Mou, L. (2017). An exploratory study of the accessibility of Chinese provincial government and postsecondary institution websites. Seattle, WA: DO-IT Program, University of Washington. Available at http://www.washington.edu/doit/exploratory-study-accessibility-chinese-provincial-government-and-postsecondary-institution-websites

Moderators
Presenter
KJ

Krishna Jayakar

Penn State University

Author
YB

Yang Bai

The Pennsylvania State University

Friday September 8, 2017 4:43pm - 5:15pm
ASLS Hazel Hall - Room 221

4:43pm

Information Policy Dimension of Emerging Technologies
During the past decades, Information and Communication Technology (ICT) has changed the patterns in which humans interact with and use machines. In more recent years, ICT has enabled connecting more and more devices, even very small ones, to the Internet and to the Cloud, commonly referred to as the Internet of Things (IoT) [2, 3]. According to Gartner Inc. [4], there will be nearly 20.8 billion devices or sensors connected in the IoT by 2020. These devices, along with smartphones, tablets, and computers, already generate twice as much data as they did two years ago, and the trend is expected to continue. Hence, the world is at the cusp of a Big Data evolution.

On one hand, Big Data analytics will continue to discover hidden patterns, predictions, and correlations in large datasets, which will in turn influence human activities and decisions in a plethora of fields, such as infrastructure and energy management, transportation systems, medical research, and home automation. On the other hand, it raises visible concerns in terms of privacy, data security, and consumer protection in general. Some of the specific challenges in this context include (a) storage, processing, and deletion of the data itself, (b) personal information and identity protection of the individual, and (c) the inclusion and impact of initially unknown or unintended metadata arising from data analysis.

Concepts, technologies, security schemes, and applications of trust are essential for the IoT, especially for offered services, and have been addressed during the past years [5]. The security, privacy, and trust solutions developed in the research community so far can be categorized as follows, with each category having different consequences for users: (1) high security, trust, and privacy are supported by the architecture and network structure of the solution, but the resulting IoT services are user-unfriendly and have technical drawbacks (in terms of performance, energy and memory consumption, and computational capacity) for the IoT devices (e.g., smartphones, smart-watches) and users; or (2) the desired security, trust, and privacy levels are only supported to a limited extent, or not realized at all, which contradicts the user’s request for controlling information disclosure in a secure and trustworthy manner.

The European Union published the General Data Protection Regulation (EU-DSGVO) [1] in 2016 (with its implementation due in May 2018), and the Federal Trade Commission (FTC) of the United States released a report in 2015 that impacts the way device manufacturers, application developers, and other entities involved in the IoT design, devise, and use the data generated from IoT-based devices, systems, and applications. The EU-DSGVO will be applicable if the data controller or processor (organization) or the data subject (person) is based in the EU. The EU-DSGVO, however, already conflicts with other non-European laws and regulations (e.g., the EU-US Privacy Shield) and practices (e.g., surveillance by governments). Organizations in such countries can no longer be considered acceptable for processing EU personal data.

Therefore, the main contribution of this paper is to show how today IoT and Big Data are influenced by security, privacy, and trust aspects from the national, regional, and international legal and regulatory perspective. The scope of studying the major subset of these laws, acts, and policies is restricted to Switzerland (CH), the European Union (EU), and the United States of America (USA). Finally, by taking a detailed look into possible next steps, a set of recommendations is provided for organizations planning to invest in the development of IoT and Big Data analytics from the technical and information policy perspective.


[1] REGULATION (EU) 2016/679 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Official Journal of the European Union, L 119, Apr. 27, 2016, http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679&from=DE, last access March 30, 2017.
[2] European Parliament: The Internet of Things – Opportunities and Challenges, May 2015, http://www.europarl.europa.eu/RegData/etudes/BRIE/2015/557012/EPRS_BRI(2015)557012_EN.pdf.
[3] ITU-T Recommendation Y.2060: Overview of the Internet of Things, June 2012, http://www.itu.int/itu-t/recommendations/rec.aspx?rec=Y.2060.
[4] R. van der Meulen: Gartner Says 6.4 Billion Connected “Things” Will Be in Use in 2016, Up 30 Percent From 2015, Gartner Inc., http://www.gartner.com/newsroom/id/3165317, November 10, 2015.
[5] O. Vermesan, P. Friess: Internet-of-Things: Converging Technologies for Smart Environments and Integrated Ecosystems. River Publishers, Aalborg, Denmark, 2013.

Moderators
Presenter
RG

Radhika Garg

Syracuse University

Author

Friday September 8, 2017 4:43pm - 5:15pm
ASLS Hazel - Room 120

5:15pm

Understanding Mobile Service Substitution and the Urban-Rural Digital Divide in Nigeria

Across Sub-Saharan Africa (SSA), Internet penetration has lagged behind developed countries. Within countries in SSA, this divide exists between urban and rural areas, with the offline population concentrated in rural areas. Mobile technologies have been identified as a means of leapfrogging relatively expensive fixed Internet access and bridging the gap between the connected and unconnected populations. Furthermore, over-the-top services – which allow users to make calls and send messages over the Internet – and social networks have been a driver of Internet traffic in SSA. Using panel data from January 2016 to July 2017 of the billing records of 2 million unique customers retrieved from a mobile carrier in Nigeria, this study seeks to understand the urban-rural digital divide and how the relationship between cellular voice and mobile Internet varies across this divide. The results show that the increase in total minutes of voice calls and total volume of data used by the sample over time is largely driven by an increase in the average volume used per person. Urban users have significantly higher use of mobile Internet than rural users. The results also show that mobile Internet is both a substitute for and a complement to voice calls. The substitution was weaker for males, older users, those living in the South West region, and those with a longer tenure on the network. Urban users also had weaker substitution than rural users, while urban female users had higher substitution than rural females.
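
A simplified sketch of the kind of panel estimate that could underlie such substitution results is shown below; a negative coefficient on data use indicates substitution, a positive one complementarity, and the interaction with an urban dummy lets the effect differ across the divide. The file and column names are hypothetical, and dummy-variable fixed effects only scale to modest samples, not the full two million subscribers.

# Illustrative fixed-effects panel regression of monthly voice minutes on data use.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("billing_panel.csv")     # hypothetical subscriber-month records

# Subscriber and month fixed effects absorb individual habits and seasonal shocks.
model = smf.ols(
    "voice_minutes ~ data_mb * urban + C(subscriber_id) + C(month)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["subscriber_id"]})

print(model.params[["data_mb", "data_mb:urban"]])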



Moderators
Presenter
Author
DS

Douglas Sicker

Carnegie Mellon University

Friday September 8, 2017 5:15pm - 5:48pm
ASLS Hazel Hall - Room 221

5:15pm

The Effects of Broadband Data Caps: A Critical Survey
The use of monthly broadband data caps has been an issue of public policy debate ever since their introduction. Proponents and opponents of data caps make apparently conflicting claims about almost every aspect of data caps, including:

• whether the purpose is to manage congestion, to increase fairness, and to recover the cost associated with heavy users, or to increase profit and to protect incumbent pay-television services;

• whether data caps result in lower service prices or monetize scarcity;

• whether their use results in greater network capacity and higher download and upload speeds;

• whether data caps increase broadband Internet subscription;

• whether data caps reduce congestion; and

• whether they increase or decrease consumer surplus.

It is the goal of this paper to provide a critical survey of both the claims and the academic literature on the use of broadband data caps. The literature includes papers that apply the economics literature to predict the impact of data caps; that present empirical results from the use of data caps; and that propose analytical models of data caps to predict the effects of data caps on broadband service plans, subscription, congestion, and/or welfare.

We first review conflicting claims regarding the purposes of data caps, the relationship between monthly data usage and network congestion, and conflicting claims regarding competition.

In the next five sections, we evaluate the conflicting claims regarding the effect of data caps on service prices, speeds, capacity, subscriptions, usage, congestion, and consumer welfare. The progression of the analysis - from aspects of service plans, through effects on the network, to effects on welfare - allows for empirical and analytical results about some effects to be applied to the analysis of other effects.

In each section, we critically survey the academic literature, which often seems to produce conflicting conclusions. For each such apparent conflict, we examine the basis for the conclusions, including the assumptions of analytical models and the settings for empirical data. We consider the following economics and engineering aspects of the model or data:

• fixed or mobile broadband;

• monopoly, duopoly, or competitive markets;

• the characteristics of the data cap;

• whether multiple service tiers are considered;

• the network capacity model;

• the network congestion model; and

• the utility function.

Given these aspects of the model or data, we analyze the limitations inherent in each paper’s conclusions. With these limitations in mind, we find the aggregation of apparently conflicting conclusions paints a more consistent and comprehensive picture of the effects of data caps. We summarize what we believe the literature concludes about the effects of various types of data caps in both fixed and mobile broadband.

In the concluding section, we discuss how data caps may be evaluated under the FCC’s 2015 Open Internet Order. We analyze whether various types of data caps would qualify as reasonable network management. For those that don’t, we analyze the pertinent factors to be used in assessing whether a data cap satisfies the Order’s general conduct rule: competitive effects; effect on innovation, investment, or broadband deployment; and end-user control.

Moderators
MS

Marvin Sirbu

Carnegie Mellon University

Presenter
avatar for Scott Jordan

Scott Jordan

University of California


Friday September 8, 2017 5:15pm - 5:50pm
ASLS Hazel Hall - Room 329

5:15pm

Beyond the Mogul: From Media Conglomerates to Portfolio Media
Media ownership and market concentration are important topics of public debate and policy analysis. Today we are witnessing a new chapter in that discussion. It is important for the policy analysis community to look ahead and provide thought leadership.

For a long time, critics of powerful private media focused on the classic moguls of the Murdoch and Redstone kind. More recent trends raise concerns of a different nature, about media being increasingly controlled by large interests outside the media sector. An example is Jeff Bezos of Amazon buying the Washington Post. Similar acquisitions can be observed around the world. This has become known as “media capture.” This paper takes the discussion one step further by quantifying the development and identifying the dynamics of such outside ownership.

The analysis is based on a quantitative study of media companies and ownerships, using a large and unique global database of ownership and market share information covering 30 countries, 13 media industries, and 20 years. A wide-ranging analysis across countries, industries, and time periods permits us to identify general trends and avoid a discussion that is usually mostly anecdotal.

The analysis shows, so far, that entry into media by non-media firms follows three phases, each with a different priority:
Stage 1: Seeking influence
Stage 2: Seeking business synergies
Stage 3: Seeking portfolio diversification

The analysis, so far, shows that the ownership of media by industrial companies as a way to create direct personal and corporate political influence has been declining in rich countries. The second phase for such a non-media/media cross-ownership is based on more direct business factors of economic synergies. It, too, has been declining in many rich countries.

On the other hand, there has been a significant growth of outside-ownership of an indirect type, through financial intermediaries of private equity finance and institutional investment funds.

In contrast, the media systems of emerging and developing countries are still operating in the first two phases of outside-ownership, centered on projection of influence, and seeking conglomerate business synergies.

Will these divergent trends in media control lead to fundamentally divergent media systems? It is likely that these dynamics will lead to a “capture gap” in the media of emerging and rich societies. Media in the former would be significantly controlled by the seekers of personal influence – “crony capitalists” – and conglomerateurs, while media in the latter are subject to professional investor imperatives of profitability, growth, and portfolio diversification. The same financial institutions from rich countries are also likely to seek acquisitions in the emerging markets by leapfrogging the two other stages and investing directly. If this were to play itself out freely, a global media system might emerge whose ownership is not centered on individual moguls or conglomerates but on international financial institutions based in a few financial centers.

The responses are then predictable. Countries will impose restrictions on foreign ownership of media. And domestic conglomerates that step in and assume control will wrap themselves in the flag as protectors of national sovereignty. Media control by industrial firms will become patriotic.

Thus, the emerging challenges to diverse and pluralistic media come less from inside the media and its large media companies, and more from the outside, through ownership by non-media organizations: financial institutions in rich countries, and a combination of domestic industrial and foreign financial firms in poor and emerging countries.

The paper will conclude with an analysis of the policy issues and regulatory responses.

Moderators
TB

Tim Brennan

University of Maryland Baltimore Campus

Presenter
EM

Eli M. Noam

Columbia Institute for Tele-Information; Columbia Business School


Friday September 8, 2017 5:15pm - 5:50pm
Founders Hall - Auditorium

5:15pm

Complementary Realities: Public Domain Internet Measurements in the Development of Canada's Universal Access Policies
Internet measurement has become a hot topic in Canada after the Canadian Radio-television and Telecommunications Commission (CRTC) reclassified both fixed and mobile broadband Internet as a “basic service”. It has set a goal that all Canadians should have access to 50 Mbps download speed by 2020. The CRTC in cooperation with the Federal government intends to reach that goal through new programs to fund broadband development. Given the newfound opportunity for broadband performance metrics to inform public policy, this paper evaluates the potentials and pitfalls of Internet measurement in Canada.

Effective usage of Internet measurement for broadband policy is threatened by:

1. Lack of comparative understanding of Internet measurement platforms: Demand for information about the service quality operators deliver has led to the development of a wide variety of methodologies and testbeds that purport to offer a realistic picture of the speeds and quality of what is now an essential service. Due to their distinctive methodologies and approaches to aggregating individual connection diagnostic tests, different sources of broadband speed measurements can generate inconsistent results, both in terms of absolute performance metrics and in relative terms (e.g. across jurisdictions, operators, etc.). These inconsistencies can lead to confusion for both consumers and policymakers, resulting in sub-optimal decisions in terms of operator selection and public policy development.

2. Reliance on marketing and advertising to evaluate performance: Advertised “up-to” speeds do not reflect the realities of Internet use and digital divides in Canada. For example, the CRTC has concluded that connections with speeds higher than 50 Mbps are already available to more than 80% of Canadians who live in urban areas of the country. This creates the perception that the problem of universal access is only a rural one. While the CRTC has determined that minimum speeds should reflect actual performance and Quality of Service (QoS) indicators (e.g. latency, packet loss, jitter, etc.), these targets have yet to be adopted in public policy.

To address these concerns, this paper adopts an analytical approach that emphasizes how multiple and potentially inconsistent Internet measurements can be combined to complement each other and help develop a richer picture of broadband performance. Drawing on prior comparative research, much of it presented at TPRC, we provide an overview of different approaches to broadband speed measurement and the perspectives they offer on Internet infrastructure quality in Canada. Through a review of comparative metrics, we illustrate how latency functions as an effective measure of performance. Finally, through computational analysis of the Measurement Lab data set, we evaluate the level of broadband inequity in Canada and recommend minimum service quality standards in terms of latency.
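
As a hedged sketch of the computational step, per-region latency distributions from a speed-test extract can be summarized and compared against a candidate threshold as below. The input file, column names, and the 50 ms figure are placeholders, not Measurement Lab's actual schema or a CRTC target.

# Illustrative per-region latency summary from a (hypothetical) NDT-style extract.
import pandas as pd

tests = pd.read_csv("mlab_ndt_canada.csv")    # one row per speed test

LATENCY_TARGET_MS = 50    # example candidate threshold only

summary = (
    tests.groupby("region")["min_rtt_ms"]
         .agg(median="median",
              p90=lambda s: s.quantile(0.90),
              share_meeting_target=lambda s: (s <= LATENCY_TARGET_MS).mean())
         .sort_values("median")
)
print(summary)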

Presenter
FM

Fenwick McKelvey

Concordia University

Author

Reza Rajbiun

Ryerson University

Friday September 8, 2017 5:15pm - 5:50pm
ASLS Hazel Hall - Room 332

5:15pm

Mitigating Risk: Insurance for the Internet of Unexpected Things
The Internet of Things will provide 50 billion new opportunities for interdependent devices to malfunction. The recent Mirai botnet attack demonstrated that the Internet of Things is already capable of creating widespread interruptions in the Internet and the activities that depend on it.

Realizing the promise and positive benefits of the IoT will require not only technical innovation, but changes in how regulation, business, security, and risk in the system are handled. The current IoT ecosystem suffers from flaws that include vulnerability to cyberattack, technical system failures, and the problem of free riders who depend upon security and other safeguards in the network to compensate for their own insecure devices, protocols, and software. Cyberattacks and failures undermine confidence in the IoT and serve as a reminder that regardless of how much IoT security is improved, there will always be vulnerabilities and exploits. Networked failures can have significant socioeconomic consequences.

This paper proposes an insurance system for the Internet of Things. The intent is to address technical and market failures in the IoT ecosystem, propose a method of distributing risk more equitably, and examine ways to fund necessary responses to large scale incidents. Making insurance mandatory, or at least available and desirable, would promote security audits and formal internal procedures for the insured, leading to improved security, prevention, incident response, and recovery planning in the IoT ecosystem. An insurance model has not been widely adopted in the traditional Internet, but the increasing number and reach of IoT devices increases the risk and consequences of a network failure, and suggests the need for a risk management solution.

This paper applies the concept of insurance as an accepted method of risk management to the Internet of Things ecosystem. We take a constructivist approach to creating a new insurance business model framework, and a policy planning approach to creating policy and regulatory guidelines for IoT insurance. The proposed insurance business models would permit insurance to be offered by interested companies beyond traditional insurers, such as Internet service providers, telephone companies, cloud providers, or others with experience in assessing and managing security technology. Necessary regulation includes constructing a better-defined liability framework to avoid the current "shell game" of responsibility, as well as disclosure requirements for companies that know of vulnerabilities or experience security incidents, to assist in building the actuarial data that would help insurance companies determine actual risks, appropriate insurance products, and pricing structures. Regulation would also streamline the legal and procedural difficulties that currently exist when trying to make a claim, and assist in defining the rights and roles of insurers and claimants. Regulation would help establish what is currently an immature market, and could encourage standardization in products and procedures.

A regulatory framework for an IoT insurance system would help align the objectives of device manufacturers, network operators, services, and end users. With a proper insurance framework for the IoT, market solutions could develop that foster greater security, trust, and confidence in the IoT ecosystem.

Moderators
Presenter
Author

Friday September 8, 2017 5:15pm - 5:50pm
ASLS Hazel - Room 120

5:30pm

A Statistical Framework to Monitor the Quality of Service in Mobile Networks
In Mexico, mobile network operators have reached penetration rates of about ninety percent of the population. However, the offered Quality of Service (QoS) still varies drastically between geographical areas due to, among other factors, the fact that the infrastructure deployed, for instance in rural areas, is not the newest access technology. Moreover, there is still a need to enhance the capacity of current mobile networks to satisfy the service demands of current and future users and applications.

In countries with no effective competition, this can be fostered by the telecommunications regulator, whose role would be to define a set of metrics to help enforce minimum standards for the quality of telecommunication services. Furthermore, the information produced by continuously monitoring QoS provides a valuable source of data that can be used to empower end users by keeping them informed about the QoS offered by operators, enabling them to make decisions accordingly.

In this context, the recent Resolution 95 (2016) of the World Telecommunication Standardization Assembly resolves that the ITU Telecommunication Standardization Sector "provide references that assist developing and least developed countries in establishing a national quality measurement framework suitable to perform QoS and QoE measurement". Furthermore, it instructs study groups of the ITU Telecommunication Standardization Sector, among others, "to elaborate Recommendations providing guidance to regulators in regard to defining strategies and testing methodologies to monitor and measure QoS and QoE" and "to study scenarios, measurement strategies and testing tools to be adopted by regulators and operators".

Given this and the fact that there are not yet normalised technical specifications or recommendations targeted for regulators, in this paper, we propose a system of metrics to assess the mobile telecommunication services offered in Mexico (voice, short message service (SMS) and data transfer) as well as a methodology to monitor the QoS at a national level and to measure the proposed metrics.

We develop a two-step statistical modeling approach, using stratified random sampling in the first step to select the geographical locations to be measured, and simple random sampling in the second step to determine the sample size for each service to be tested. We describe the procedure for constructing the strata by selecting non-overlapping groups from the geographical regions of the country. The idea behind using stratification is to produce a smaller bound on the error of estimation than would be produced by a simple random sample of the same size alone. We therefore use stratification combined with simple random sampling within each stratum to estimate national values for each QoS metric defined.
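
A minimal sketch of the two-step design is given below, assuming proportional allocation across illustrative strata; the strata, their sizes, and the total sample size are invented for illustration, and the paper's actual allocation rule may differ.

# Step one: allocate a national sample across geographic strata (proportionally here);
# step two: draw a simple random sample of measurement locations within each stratum.
import random

strata_sizes = {        # e.g., number of localities per stratum (hypothetical)
    "urban_north": 1200,
    "urban_south": 1500,
    "suburban": 900,
    "rural": 2400,
}
TOTAL_SAMPLE = 400

total = sum(strata_sizes.values())
allocation = {name: max(1, round(TOTAL_SAMPLE * size / total))
              for name, size in strata_sizes.items()}

def draw_locations(stratum_localities, n, seed=0):
    """Simple random sample of n localities within one stratum."""
    rng = random.Random(seed)
    return rng.sample(stratum_localities, min(n, len(stratum_localities)))

print(allocation)   # with these numbers: {'urban_north': 80, 'urban_south': 100, 'suburban': 60, 'rural': 160}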

A theoretical example of implementation of the testing methodology at national level in Mexico is also presented to show its feasibility considering factors such as access technology, current service coverage and working days needed to perform the measurements.

Finally, we outline a set of recommendations that can be customized by any regulator to measure the performance and QoS at local and national level.

Presenter
TV

Tania Villa

Federal Institute of Telecommunications

Author
NE

Nimbe Ewald

Federal Institute of Telecommunications

Friday September 8, 2017 5:30pm - 6:30pm
Founders Hall - Multipurpose Room

5:30pm

Auctions for Essential Inputs

We study the design of auctions for the allocation of essential inputs, such as spectrum rights, transmission capacity, or airport landing slots, to firms using these inputs to compete in a downstream market. When welfare matters in addition to auction revenues, there is a tradeoff: provisions aimed at fostering post-auction competition in the downstream market typically result in lower prices for consumers, but also in lower auction proceeds. We first characterize the optimal auction design from the standpoints of consumer and total welfare. We then examine how various regulatory instruments can be used to implement the desired allocation.

Presenter
DS

David Salant

Toulouse School of Economics

Author

Friday September 8, 2017 5:30pm - 6:30pm
Founders Hall - Multipurpose Room

5:30pm

Externalities in Digital Markets: The Role of Standards in Promoting IoT Security
The internet of things (IoT) is ushering in a new era of online devices, with the potential to revolutionize the way consumers automate and interact with the world, both inside and outside the home. An unintended consequence of this connected device proliferation is a plethora of cost-effective attack vectors susceptible to malicious activity, made available largely by the IoT market’s lack of adoption and implementation of sufficient security. Recent examples of such malicious behavior illustrate how this insecurity can lead to large-scale network shutdowns and unwarranted access to sensitive consumer data. In this techno-economic analysis of the IoT security problem, we first describe why the IoT market fails to produce the socially optimal level of secure IoT devices, and why traditional solutions (e.g., state governance and privatization) to such market failures are infeasible. We then propose that standards, developed by a broad group of industry stakeholders, while not without drawbacks, can promote more optimal production of secure IoT devices by reducing information asymmetry between sellers and buyers, reducing the cost of security, and creating network effects that promote secure devices. The paper also explores the ideal attributes of such standards development, while recognizing the drawbacks of a standards approach. We also explore how theoretical frameworks for economic goods may apply to the IoT, and by extension to the Internet more broadly. We note that traditional frameworks (e.g., common pool resource models or club good models) are instructive in certain respects, but do not align perfectly. This exploration allows us to extrapolate from the case of IoT security to techno-economic policy frameworks for the Internet.

Presenter
JM

Jacob Malone

CableLabs


Friday September 8, 2017 5:30pm - 6:30pm
Founders Hall - Multipurpose Room

5:30pm

Freeriding in Shared Spectrum Markets
Cellular spectrum is a limited natural resource that is becoming scarcer at a worrisome rate. To satisfy users’ expectations of wireless data services, researchers and practitioners have recognized the necessity of greater utilization and more pervasive sharing of the spectrum. Though scarce, spectrum is underutilized in some areas or during certain operating hours due to the lack of appropriate regulatory policies, static allocation, and emerging business challenges. Thus, finding ways to improve the utilization of this resource and make sharing more pervasive is of great importance. A number of solutions to increase spectrum utilization via increased sharing already exist. Dynamic Spectrum Access (DSA) enables a cellular operator to participate in spectrum sharing in many ways, such as through geolocation databases and cognitive radios, but these systems perform spectrum sharing at the secondary level (i.e., the bands are shared if and only if the primary/licensed user is idle), and it is questionable whether they will be sufficient to meet future expectations of spectral efficiency. Alongside secondary sharing, spectrum sharing among primary users is emerging as a new domain for a future mode of pervasive sharing. We call this type of spectrum sharing among primary users “pervasive spectrum sharing” (PSS). However, such spectrum sharing among primary users requires strong incentives to share and a way of ensuring a freeriding-free cellular market.

Freeriding in pervasively shared spectrum markets (whether sharing arises via government incentives, subsidies, and regulations or via self-motivated coalitions among cellular operators) is a real techno-economic challenge to be addressed. In a PSS market, operators will share their resources with the primary users of other operators and may sometimes have to block their own primary users in order to attain sharing goals. Small operators with lower-quality service may freeride on large operators’ infrastructure in such pervasively shared markets. Even worse, since small operators’ users may perceive higher-than-expected service quality for a lower fee, this can cause customer loss for the large operators and motivate small operators to continue freeriding with additional earnings from the poached customers. Thus, freeriding can drive a shared spectrum market to an unhealthy and unstable equilibrium. In this work, we model freeriding by small operators in shared spectrum markets via a game-theoretic framework. We focus on a performance-based government incentive scheme and aim to minimize the freeriding problem emerging in such PSS markets. We will present insights from the model and discuss policy and regulatory challenges.
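
A toy two-operator game, with payoffs invented purely for illustration rather than taken from the paper's model, conveys the incentive problem: with the numbers below, the only Nash equilibrium has both operators freeriding, which is why a performance-based incentive is needed to sustain sharing.

# Toy 2x2 game between a large and a small operator in a shared-spectrum market.
# Strategies: "share" (contribute capacity honestly) or "freeride".
payoffs = {  # (large_strategy, small_strategy): (large_payoff, small_payoff)
    ("share", "share"):       (10, 6),
    ("share", "freeride"):    (4, 9),    # the small operator gains by freeriding
    ("freeride", "share"):    (12, 1),
    ("freeride", "freeride"): (5, 2),
}

def best_response(player, opponent_strategy):
    """Strategy maximizing the player's payoff against a fixed opponent move."""
    idx = 0 if player == "large" else 1
    def payoff(own):
        key = (own, opponent_strategy) if player == "large" else (opponent_strategy, own)
        return payoffs[key][idx]
    return max(("share", "freeride"), key=payoff)

def nash_equilibria():
    eqs = []
    for big in ("share", "freeride"):
        for small in ("share", "freeride"):
            if (best_response("large", small) == big and
                    best_response("small", big) == small):
                eqs.append((big, small))
    return eqs

print(nash_equilibria())   # with these payoffs: [('freeride', 'freeride')]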

Presenter
MR

Mostafizur Rahman

University of Central Florida

Author

Friday September 8, 2017 5:30pm - 6:30pm
Founders Hall - Multipurpose Room

5:30pm

Live Streaming of Terrestrial TV Programs in Japan (or lack thereof): public welfare through weak competition?
This study investigates the efforts of Japan’s broadcast TV (television) stations to distribute their programs on the Internet simultaneously with broadcast. Although such services have not yet become a reality, partly because Japanese regulation does not permit them, the MIC (Ministry of Internal Affairs and Communications of Japan) and TV broadcasters are discussing trials to start before the Tokyo Olympic Games in 2020.

The expectation that TV programming will move online is a natural one. The Internet has already absorbed many other media, such as magazines, newspapers, and radio. The migration of TV programming online also appears to be a global phenomenon, because the Internet crosses national borders. However, the pace of this migration varies considerably from country to country, because each country has different business environments and regulations.

This study focuses on the case of Japan, and also compares it with the situation of virtual MVPDs (Multichannel Video Programming Distributors) in the U.S.

The business ecosystems of TV programming have developed differently in Japan and the U.S. Japan’s TV business relies heavily on terrestrial TV programming, while much of U.S. TV programming is embedded in MVPDs. Similarly, the level of Internet access services differs between the two countries: Japan has fairly high-speed Internet access at low prices in almost every corner of the country, whereas the U.S. still needs to develop broadband access, especially in many rural areas. As a consequence, Japan’s trials of distributing broadcast TV programming on the Internet present a different picture from the virtual MVPDs in the U.S. Examining these differences and similarities brings both cases into sharper focus.

Research Questions:

1) How can we describe the current state of Japanese broadcast TV stations’ trials of simultaneous online distribution?

2) What business or regulatory environments account for the differences between Japan and the U.S.?

3) What similarities exist between Japan and the U.S. despite these differences in business and regulatory environments?

Methodology:

In order to answer the research questions, this study conducts interviews with experts in the media and Internet business and policy fields in Japan. The study also collects and analyzes documents and data from government reports, journals, and newspapers.

Expected Results:

The expected results are as follows.

1) Japan and the U.S. differ in their business environments. The biggest difference is that Japan’s broadcasting companies are completely independent from broadband providers and telecommunications carriers because of Japanese broadcast regulation, whereas many MVPDs and virtual MVPDs in the U.S. are embedded in or merged with cable providers or telecommunications carriers, forming vertically integrated businesses. These differences in business circumstances produce differences in the speed of deploying online program distribution. Japan has moved more slowly toward online TV program distribution because broadcasters stick to the status quo.

2) Japan and the U.S. are similar in consumers’ media preferences, which are shifting from broadcast TV to broadband and mobile Internet. In both countries, although the majority of older consumers still prefer broadcast TV programs, younger generations are moving from TV programs to broadband content and mobile services. These similarities create the potential for TV broadcasting companies and MVPDs to take on Internet distribution of their programs.

3) Although Japan has been slower than the U.S. to distribute TV programs on the Internet, almost all of Japan’s TV broadcasting companies will have to tackle online program distribution because consumers, the source of their business, are moving quickly to the online environment. The high penetration of broadband and mobile Internet access in Japan also supports the broadcasters’ trials.

Presenter
avatar for Shinichiro Terada

Shinichiro Terada

Visiting Scholar, UC Berkeley

Author

Friday September 8, 2017 5:30pm - 6:30pm
Founders Hall - Multipurpose Room

5:30pm

Strengthening the Internet for Global, Ubiquitous and Secure Commercial Use: Perspectives, Lessons, Issues and Challenges
Today we have near-universal availability of the Internet, with over 3 billion users in some 200 countries worldwide. Simultaneously, the intelligent mobile phone, with some 7.1 billion subscriptions globally, has become the most widely used communications device in the world and the access device of choice in developing countries, where it is often the only device available for accessing the Internet and its associated services.

As broadband mobile Internet access becomes more readily available, affordable and the norm, intelligent mobile devices are being used widely for business applications and financial transactions, as well as for personal and social purposes. This expansion will create more billions of vulnerable new mobile Internet users worldwide, with the bulk located in developing countries, who are likely to become an additional target for malware, identity theft, cyber-fraud and cyber-crime.

Although the Internet has transformed our economy and society, it was never designed and built for global, ubiquitous and secure commercial use. The positive socio-economic benefits derived from the Internet and the World Wide Web are enormous and have been appropriated. However, the negative developments (e.g. malware, hacking, identity theft, organized cyber-fraud and cyber-crime) are increasing in magnitude and cost. They need to be studied more carefully by the TPRC community and the trade-offs with the benefits understood better, so that practical and workable remedial measures can be proposed, both nationally and globally.

In this paper the authors will show that the building of an appropriate institutional and legal infrastructure for the global digital marketplace and its underlying Internet infrastructure, as well as the creation of commonly accepted, understandable and internationally enforceable marketplace rules which provide trust and confidence for all those who operate in that marketplace or are affected by it, is a necessary condition for the efficient functioning of a global, digital economy. The paper makes the case that the status quo is untenable in the medium term. The increasing quantitative load put on the Internet by billions of new users (e.g. mobile users) and new uses (e.g. the Internet of Things), combined with increasing net threats, will eventually degrade the public Internet unless new institutional and governance arrangements can be created.

The task of building an environment of trust and confidence in the digital economy is complex: there is no magic or silver bullet. It will require a multi-stakeholder global approach and will involve concerted actions among many stakeholders: to create the requisite legal and regulatory environment; to develop voluntary codes of practice; to educate businesses, consumers and public service providers; and to create tools that are easy to use. Drawing upon the lessons of history and historical analogies, as well as examining some ideas proposed in various fora such as the OECD, ITU, IGF, ISOC, GCIG and by various experts, we shall explore some scenarios to strengthen the Internet that could lead to more security for all users. At the same time we shall review what sorts of “rules of the road” will be necessary to achieve that outcome.

Presenter
PN

Prabir Neogi

Visiting Fellow, Carleton University
I am a retired Canadian public servant and a TPRC "old hand", having attended the Conference regularly since 1992. My broad areas of interest are: Broadband communications (both mobile and wireline), universality issues including urban-rural gaps, and the transformative uses of ICTs... Read More →


Friday September 8, 2017 5:30pm - 6:30pm
Founders Hall - Multipurpose Room

5:30pm

Zero Rating and the Adoption of Virtual and Augmented Reality
By exempting the charges of using particular apps or websites from a user’s mobile bill, zero rating frees up resources (and, where applicable, data under a user’s data cap) enabling users to, among other things, experiment with and adopt new or less-widely used apps and content. Evidenced by its explosive adoption among mobile operators, zero rating is positioning itself as a business model for users’ early stage experimentation with and adoption of augmented reality, virtual reality and other cutting edge technologies that represent the internet’s next wave — but that also use vast amounts of data.

Currently, proponents of zero rating assert that it is a tool to increase internet usage in areas where mobile coverage exists, but the number of mobile internet users remains comparatively low, generally because of budget constraints or lack of familiarity with the internet. But in reality it may be even more significant: given the data demands of the newest interfaces and platforms, zero rating may be essential to ensuring that large segments of the population aren't completely excluded from the internet’s next iteration. Opponents, meanwhile, argue that it unduly discriminates against non-zero rated apps and content, which subscribers may consume only by paying for data. Whether couched in antitrust terms or not, the fundamental argument is one of anticompetitive foreclosure.

Zero rating has been characterized by activists as the “bleeding edge of net neutrality,” and regulators around the world are grappling with whether to prohibit the practice, allow it, or allow it with various restrictions. The EU’s 2016 net neutrality guidelines, for example, created an uncertain patchwork of net neutrality regulations in the region. Some regulators - in Hungary, Sweden, and the Netherlands - have banned zero rating practices, while others - in Denmark, Germany, Spain, Poland, the United Kingdom, and Ukraine - have not. And whether or not they allow the practice, regulators (e.g., Norway’s Nkom) have lamented the lack of regulatory certainty surrounding zero rating, a problem compounded by a lack of data on the subject.

The objective of this paper is to paint a clearer picture for regulators of the costs and benefits associated with zero rating so that they can better tailor regulations to the particular realities faced by both consumers and providers in their countries. We plan to accomplish this by:
• Providing more precise data points to regulators. We will identify different dynamics of zero rating offers depending on factors such as economic conditions, mobile internet penetration, market share of operators, and the type of app(s) or content zero-rated. At the same time we will clarify how the practice is currently being regulated in commonly misunderstood frameworks, including India, Brazil, and Chile; and
• Outlining an ex post, effects-based analysis of zero rating consistent with modern antitrust law and economics. Instead of foreclosing or mandating specific conduct, our aim is to provide an evidence-based approach that permits and fosters experimentation, innovation, and technological development, intervening only where actual competitive harms arise.

The goal of any well-designed zero rating regulatory regime is to preserve technical and commercial flexibility in order to ensure that the opportunity costs imposed by regulation do not outweigh the benefits of zero rating. Zero rating can be a tool for digital inclusion and experimentation to promote the adoption of both basic and innovative apps or content, such as data hungry virtual reality and augmented reality apps consumed over the mobile network. Our goal is to provide a detailed, flexible and readily applicable framework in order to help ensure that regulatory decisions regarding zero rating are undertaken in a manner that confers maximum consumer welfare around the world.

Presenter
AG

Allen Gibby

International Center for Law & Economics

Author
avatar for Geoffrey A. Manne

Geoffrey A. Manne

Executive Director, International Center for Law & Economics

Friday September 8, 2017 5:30pm - 6:30pm
Founders Hall - Multipurpose Room

5:45pm

Reception and Poster Session
Friday September 8, 2017 5:45pm - 6:45pm
Founders Hall - Multipurpose Room
 
Saturday, September 9
 

8:15am

9:00am

Regulating the Open Internet: Past Developments and Emerging Challenges
On June 14, 2016, in perhaps one of the most important rulings supporting Federal Communications Commission (FCC) policy, the D.C. Circuit Court of Appeals upheld the FCC’s 2015 Open Internet Order laying out network neutrality rules that govern how Internet service providers may price to edge platform users at the point of termination. Since that time, a number of political developments within the FCC and in the U.S. more broadly have led to speculation that the present rules would be, not for the first time, overturned.

In this manuscript, we provide a concise historical perspective of the FCC policies, proceedings and court decisions that led to this point and characterize emerging economic policy challenges that the current rules leave unresolved. We then frame these challenges in ongoing economic and political developments.

In particular, using an economic model of interconnection, we show that by refraining from regulating interconnection agreements and by allowing Internet service providers to engage in “zero rating,” the FCC has left the door open for certain anti-competitive practices to arise. In the case of interconnection, we find that unregulated competitors will optimally agree to an interconnection arrangement that permits them to earn monopoly profits. Regulatory agencies often justify relaxing regulation in the presence of a greater number of competitors. Our result indicates that a prohibition against price discrimination may be insufficient to prevent uncompetitive interconnection pricing even when competition intensifies.

We also find that firms will not engage in zero rating unless forbidden to set termination fees and that the latter practice can leave both consumers and firms worse off than if firms had been permitted to discriminate via termination fees. To the extent that practitioners continue to view zero rating as a potential anti-competitive concern — an issue that appears to have been more important to the previous administration than to the current one — our results indicate that net neutrality may have spurred firm rent seeking through zero rating.

Moderators
JC

Jane Coffin

Director, Development Strategy, Internet Society
IXPs, connectivity, access, connecting the next billion, development

Presenter
AY

Aleksandr Yankelevich

Michigan State University

Author
avatar for Kendall Koning

Kendall Koning

Ph.D Candidate, Michigan State University

Saturday September 9, 2017 9:00am - 9:33am
ASLS Hazel Hall - Room 329

9:00am

CANCELLED - Learning from or Leaning On? How Children Affect Internet Use by Adults'
Hernan’s flights were cancelled due to Hurricane Irma.

Scholars have observed that children and teenagers can promote Internet adoption among adults by increasing exposure and positively influencing skills acquisition. However, it is also possible that the presence of children in the household discourages online engagement by adults, who may lean on children to act as proxy users. Both processes have been theorized, but the net result of these seemingly opposite effects has yet to be empirically tested. This study seeks to provide such a test by examining how the presence of children in the household affects Internet use by adults. It draws on data from large-scale household surveys in six countries in Latin America (Bolivia, Colombia, Ecuador, Mexico, Peru and Uruguay). These countries were selected based on the availability of comparable data, and are representative of the different contexts found in the region.

The study makes several unique contributions to the extant scholarship on Internet adoption and family dynamics. First, the data is sourced from government-administered, nationally-representative household surveys. This allows for more precise statistical estimations and the use of causal inference techniques that are unfeasible in studies with small or non-representative samples. Second, the study advances our understanding of the role that children play in online engagement by adults, as well as of the factors that affect the tension between learning from and leaning on. Third, the study hypotheses are tested separately in different country contexts, thus strengthening the validity of results. Fourth, we employ a matching technique that significantly mitigates self-selection problems found in conventional regression analysis. This allows for a more robust estimation of causal effects than found in the existing literature.
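As a purely illustrative sketch of the matching approach described above, the snippet below runs nearest-neighbor propensity-score matching on synthetic household data. The covariates, the data-generating process, and the estimator details are hypothetical and are not taken from the study’s surveys or its actual specification.

```python
# Minimal sketch of propensity-score matching on synthetic data, illustrating
# the kind of matching estimator described above. Variable names and the data
# generating process are hypothetical, not the study's actual specification.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 5000

# Household covariates: income (standardized), adult education (years), urban dummy
income = rng.normal(size=n)
education = rng.normal(12, 3, size=n)
urban = rng.integers(0, 2, size=n)
X = np.column_stack([income, education, urban])

# "Treatment": presence of children in the household (correlated with covariates)
p_child = 1 / (1 + np.exp(-(-0.5 + 0.3 * income - 0.05 * (education - 12))))
children = rng.binomial(1, p_child)

# Outcome: adult Internet use, with a hypothetical negative "leaning-on" effect
p_use = 1 / (1 + np.exp(-(0.2 + 0.8 * income + 0.1 * (education - 12)
                          + 0.3 * urban - 0.4 * children)))
internet_use = rng.binomial(1, p_use)

# 1) Estimate propensity scores
ps = LogisticRegression().fit(X, children).predict_proba(X)[:, 1]

# 2) Match each treated household to its nearest control on the propensity score
treated, control = np.where(children == 1)[0], np.where(children == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_control = control[idx.ravel()]

# 3) Average treatment effect on the treated (ATT)
att = internet_use[treated].mean() - internet_use[matched_control].mean()
print(f"Estimated ATT of children on adult Internet use: {att:.3f}")
```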

Our results corroborate that children are a key factor in households’ decision to adopt Internet services, and that their role increases with children’s age and when children use the Internet at school. However, we also find that the presence of children is negatively correlated with Internet use by adults. Further, our matching estimates indicate that this effect is likely causal. This suggests that the intergenerational transfer of ICT knowledge and skills from children to adults is outweighed by leaning effects whereby parents rely on children to perform online tasks for them, ultimately discouraging engagement. We find this result to be consistent across countries and robust to different specifications. The study concludes with policy implications for digital inclusion initiatives and suggestions for further research in this area.

Moderators
MJ

Mark Jamison

PURC, University of Florida

Presenter

Saturday September 9, 2017 9:00am - 9:33am
ASLS Hazel - Room 120

9:00am

Policy Alternatives for Better and Wiser Use of the NGN: Competition, Functional Separation, or What?
Background
To cope with increasing traffic and to meet the needs of various ISP services, NTT East and West (the NTT locals) launched an NGN (Next Generation Network) service, which enables rapid, large-volume data transmission, in 2008. The NGN is different from FTTH: not only is it a bandwidth-guaranteed network, but QoS can also be controlled, making it the most advanced network available. The number of NTT local NGN subscribers amounted to 18 million as of March 2016.
Content is becoming richer and richer, the migration of the PSTN to the IP network has begun to be discussed, and the NTT locals are proposing that the PSTN be accommodated on the NGN. Whereas FTTH is a best-effort service, the NGN guarantees bandwidth and can handle voice traffic as well. Regulations on NTT’s FTTH related to unbundling and connection charges have already been fully implemented, which has driven its rapid diffusion in Japan. The NGN has been less utilized by competitors, and thus, apart from the unbundling of minor services, the issues above have received little attention.

NGN Policy issues
Because of increasing demand for the NGN, competitors including carriers, broadband providers, and ISPs have been asking for the same regulations as FTTH, arguing that the NGN is an essential facility. Not all countries implement unbundling, since it undermines carriers’ incentives to deploy FTTH networks. FTTH networks currently cover 95% of Japan, and the growth rate of FTTH subscribers has been declining, implying that the market is approaching saturation. NTT’s share of subscribers is 70%, and its share of facilities is 78%. If the PSTN is accommodated on the NGN, NTT’s total share will surely increase, because the number of legacy subscribers is 23 million and NTT’s share of them is 99.8%. As migration proceeds, NTT’s share is expected to rise. In this sense, applying competition policy to the NGN is one possible alternative.

Another alternative?
In shaping policies for the NGN, the key considerations are NTT’s market share and the essentiality of the NGN. However, there is one more option: functional separation. Accounting separation and functional separation have already been implemented for NTT’s FTTH, and further operational separation and ownership separation are alternatives. Regarding functional separation, the crucial issues include incentives for deployment, promotion of competition among firms for access to the NGN, and the efficiency of vertically separated networks.
The policy alternatives mentioned are two extremes, unbundling and functional separation, but other options exist between them. This study aims to identify an appropriate policy, particularly by considering policy goals and policy evaluation. Traditional market-share measures are no longer sufficient in the current telecommunications environment. The NGN should be accessible to all entities and utilized for various applications leading to new economies such as Industry 4.0, Telecommunications 4.0, or Telemedicine 2.0. This study thus focuses on how the NGN can be utilized fully and wisely as we move toward the age of IoT and 5G.

Moderators
Presenter
MT

Masatsugu Tsuji

Professor, Kobe International University

Author
BE

Bronwyn E. Howell

Victoria University of Wellington
SS

Sobee Shinohara

KDDI Institute, Inc.

Saturday September 9, 2017 9:00am - 9:33am
ASLS Hazel Hall - Room 332

9:00am

A Socio-Technical Analysis of China's Cyber Security Policy: Towards Delivering Trusted E-Government Services
On November 7, 2016, the Chinese government released a comprehensive new National Cybersecurity Law, with broad coverage of industrial sectors such as energy, transportation and information networks, and implications for disparate areas including data protection, privacy, and state surveillance, besides the security of information networks. On February 4, 2017, the Cyberspace Administration of China (CAC), charged with enforcing the Cybersecurity Law, produced a consultation draft for new administrative rules for online products and services, in preparation for June 2017 when the law goes into effect.

This paper examines the potential implications of the cybersecurity law and its operational rules on one of the many areas impacted by the law, namely the provision of e-government services. Specifically, it investigates whether (and how) the provisions of the law and its operational rules are likely to impact trust in e-government services, and consequently on the utilization rates of these services by consumers. Research has established that trust is a key predictor of the adoption of e-government services, and that security is a prime component of trust (Belanger & Carter, 2008; Borgman, Mubarak & Choo, 2015; Hung, Chang & Kuo, 2013).  

To investigate the possible impact of the new cybersecurity law on trust in e-government services, we apply the organizing framework of socio-technical systems (STS) theory, which holds that human and technical elements interact and are reciprocally shaped within complex systems (Walker, Stanton, Salmon & Jenkins, 2008). Neither technical capabilities nor human behaviors are “given”; both are iteratively modified and jointly optimized. STS theory has been frequently used to evaluate ICT policies (Kim, Shin & Lee, 2015). In line with the STS framework, we ask the following questions: 1) How have the current technical aspects and organizational practices of e-government in China affected citizens’ trust in and utilization of e-government services? 2) What technical and organizational aspects of e-government services are affected by the new cybersecurity law and its operational rules, and in what manner? And therefore, 3) What is the likely impact of the new cybersecurity law on delivering trusted e-government services, and on increasing their utilization rates?  

To answer these questions, we utilize prior survey research on citizen attitudes towards e-government in China, and a variety of sources on the e-government specific provisions of the cybersecurity law including the text of the legislation and the consultation draft rules, other government publications, industry reports, and academic articles. In addition, we conduct interviews with provincial and local government officials and technical staff directly responsible for providing e-government services. 

After examining the potential impacts of the cybersecurity law on e-government, we conclude with recommendations on the further steps the government may need to take to promote the uptake of e-government services. Of special relevance is better coordination and information-sharing between local governments in charge of implementing e-government services and the central government that determines the technical and operational characteristics of information infrastructures. We also comment on the role of ICT vendors, and on the importance of personal data rights protections among stakeholders. China’s experiences with its cybersecurity law and other ICT policies will also be of interest to other countries as they embark on their information infrastructure initiatives.

Moderators
JK

Jack Karsten

Brookings

Presenter
KJ

Krishna Jayakar

Penn State University

Author
YB

Yang Bai

The Pennsylvania State University

Saturday September 9, 2017 9:00am - 9:33am
ASLS Hazel Hall - Room 225

9:00am

The Evolution of U.S. Spectrum Values Over Time
Using data on all FCC auctions of spectrum related to cellular services from 1997 to 2015 we attempt to identify intrinsic spectrum values from winning auction bids. Our data set includes 17 auctions and close to 7,500 observations. We add two components to previous literature on this topic. First, we control for license and block specific auction rules that Connolly, Salisbury, Trivedi and Zaman (2017) catalogue. Second, we introduce two technological measures to separate out technological progress that effectively reduces spectrum scarcity from technological progress that increases demand for mobile applications. Previous papers have included simple time trends to reflect technological changes. Time trends are unable to distinguish between markets within the United States and conflate the effects of these two types of technological progress. Our results confirm previous theoretical and empirical findings for basic measures of demand such as population, population density, income levels, frequency levels, bandwidth, paired bands, and national licenses. Astonishingly, 49 percent of all cellular licenses since 1997 have been won by small bidders: 44 percent were won using small bidder credits, 14 percent were won in set-aside/closed licenses, and 9.5 percent were won in closed licenses using bidding credits. Our results further quantify the negative impact on headline winning bids when won using bidding credits, in closed licenses, and with the imposition of the open access requirement in the C block of Auction 73. Increased spectral efficiency appears to be reducing spectrum scarcity as evidenced by its lowering of winning bids, while market level communications infrastructure has a significant positive impact on the demand for and price of spectrum. Additionally, auction results confirm that the relative value of higher frequency spectrum is increasing over time as new technologies develop.
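The stylized regression below illustrates the general form of such a hedonic specification: the log of the winning bid regressed on market demand measures, spectrum characteristics, and auction-rule dummies. The data are synthetic and the variable list and coefficients are hypothetical; this is not the authors’ dataset or exact model.

```python
# Stylized hedonic regression on synthetic license-level data, illustrating the
# kind of specification described above (log winning bid on market demand
# measures, spectrum characteristics, and auction-rule dummies). The data,
# variable names, and coefficients are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000

df = pd.DataFrame({
    "log_pop": rng.normal(12, 1.5, n),            # log market population
    "log_income": rng.normal(10.5, 0.3, n),       # log median income
    "bandwidth_mhz": rng.choice([10, 20, 30], n), # license bandwidth
    "freq_ghz": rng.choice([0.7, 1.9, 2.5], n),   # band frequency
    "bidding_credit": rng.integers(0, 2, n),      # won with small-bidder credit
    "closed_license": rng.integers(0, 2, n),      # set-aside / closed license
    "year": rng.integers(1997, 2016, n),          # auction year (time trend proxy)
})

# Synthetic winning bids, loosely mirroring the signs of the effects discussed above
df["log_bid"] = (0.9 * df.log_pop + 0.5 * df.log_income
                 + 0.04 * df.bandwidth_mhz - 0.3 * df.freq_ghz
                 - 0.25 * df.bidding_credit - 0.20 * df.closed_license
                 + 0.02 * (df.year - 1997) + rng.normal(0, 0.5, n))

model = smf.ols(
    "log_bid ~ log_pop + log_income + bandwidth_mhz + freq_ghz"
    " + bidding_credit + closed_license + year", data=df).fit()
print(model.summary().tables[1])
```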


Saturday September 9, 2017 9:00am - 9:33am
ASLS Hazel Hall - Room 221

9:34am

The Internet of Platforms and Two-sided Markets: Implications for Competition and Consumers
This paper will examine developments in the Internet marketplace that limit and condition access in light of growing incentives to offer prioritized management and delivery at a premium price. Commercially-driven interconnection and compensation arrangements support biased networks rather than open ones. Single ventures, such as Amazon, Facebook, Google and Netflix, have exploited “winner take all” networking externalities, resulting in the creation of dominant platforms and walled gardens. Advocates for network neutrality claim that limited and “pay to play” access will threaten a competitive marketplace of ideas by imposing higher costs on unaffiliated, disfavored and cash-poor content providers. Opponents argue that Internet Service Providers (“ISPs”) should have the flexibility to customize services and accrue marketplace rewards for superior products and services.

The paper identifies four types of government responses to price and quality of service discrimination that can exploit, or remedy choke points within the Internet ecosystem where large volumes of traffic have to traverse a single ISP network, or service provider platform. Governments can refrain from regulating access and accept aspects of market concentration as proper rewards to ventures offering desirable content and carriage services. Alternatively, they can impose access neutrality requirements to offset harmful discrimination and market dominance. Between these poles, governments can apply antitrust/competition policy remedies, or rely on expert regulatory agencies to respond to complaints.  

The paper examines case studies where ventures have created walled gardens and platforms as an intermediary between consumers and content providers. In some instances, government-imposed remedies offered a solution to a transitory, or possibly nonexistent problem. In other instances, governments have created an administrative process for responding to complaints and resolving valid disputes.

The paper concludes that governments should have a duty to remedy marketplace distortions generated by firms operating platforms in an anticompetitive manner. Rather than anticipate such an outcome, governments should offer remedies when and if they receive valid complaints documenting harm to consumers and competitors. Additionally, regulators should implement robust transparency and truth in billing safeguards that possibly can prevent most conflicts, or help identify practices that harm consumers and competitors.

Moderators
JC

Jane Coffin

Director, Development Strategy, Internet Society
IXPs, connectivity, access, connecting the next billion, development

Presenter
avatar for rob frieden

rob frieden

Professor, Bellisario College


Saturday September 9, 2017 9:34am - 10:07am
ASLS Hazel Hall - Room 329

9:34am

Libraries, the National Digital Platform, and Inclusion
Wireless hotspot lending programs are gaining popularity through library systems in several major cities in the U.S. Portable hotspot devices allow a patron to “take home” the Internet from the library, and are premised on providing free, cellular-based mobile access for Internet-ready devices in the home, usually to people who indicate they lack home-based broadband. In 2015, the New York Public Library partnered with the Maine State Library and the Kansas State Library to fund rural hotspot lending programs in small rural community libraries.

Extending the reach of Internet-based services in this fashion is a new addition to rural libraries’ functions, and this research seeks to understand how these programs impact the users and small communities where they operate. Extremely rural areas typically have less robust Internet services available commercially and lower home broadband adoption levels; research suggests the prices for fixed broadband services are sometimes much higher than the local populations can afford. Local libraries are typically the only site where people in these communities can access the Internet for free and/or at reasonably fast speeds. As more educational, health, government and commercial services migrate online and assume user Internet access, libraries stand out as particularly prized sites for these purposes in rural towns.  

Under a grant from the Institute of Museum and Library Services, our research assesses hotspot lending initiatives in 6 rural libraries in Maine and 18 libraries in Kansas. Most of the communities in the sample face several economic challenges. The hotspot programs themselves are fairly small (as are the communities in which they operate), but they provide insights into the role and operations of information seeking in areas bereft of many alternative sources while also providing a way to examine how libraries extend Internet access into underserved areas.  

The research investigates 1) how rural libraries implement and operate a hotspot-lending program; 2) their potential economic impacts in the community; 3) and larger community outcomes that might be associated with increased connectivity in rural areas. Our team of researchers investigated these outcomes through site visits to the libraries and their counties and towns, where librarians and local stakeholders - elected officials, school personnel, local telecommunications providers – provided qualitative data regarding Internet access, the hotspot program, and local information needs. In our current research phase, we are conducting focus groups with users in several sites, and also developing a quantitative database by surveying a broader population of users.  

In this paper we share the results of focus groups with patrons who utilized the device and characterize how this particular program may or may not influence broader information seeking and the use of various Internet-delivered services. Although our research project will not conclude until January of 2018, we will have sufficient data by fall 2017 to characterize the Internet environment of our rural sites. We will address where libraries in rural America “fit” in the circulation and retrieval of information, and more broadly in the national picture of digital inclusion.  

Data collection for this study is still ongoing, but preliminary findings from qualitative interviews with library staff have resulted in clear definitions of the challenges and opportunities unique to remote areas implementing a hotspot-lending program. Our qualitative data from focus group meetings and personal interviews detail myriad creative ways that rural hotspot users and local institutions find and utilize connectivity that affects the civic, and sometimes economic, affairs of their households and their communities.

Moderators
MJ

Mark Jamison

PURC, University of Florida

Presenter
Author
avatar for Colin Rhinesmith

Colin Rhinesmith

Assistant Professor of Library and Information Science, Simmons College
Colin Rhinesmith is an assistant professor in the School of Library and Information Science at Simmons College and a faculty associate with the Berkman Klein Center for Internet & Society at Harvard University.
BW

Brian Whitacre

Oklahoma State University, Oklahoma State University

Saturday September 9, 2017 9:34am - 10:07am
ASLS Hazel - Room 120

9:34am

Degrees of Ignorance About the Costs of Data Breaches: What Policymakers Can and Can't Do About the Lack of Good Empirical Data

Estimates of the costs incurred by a data breach can vary enormously. For instance, a 2015 Congressional Research Service report titled “The Target and Other Financial Data Breaches: Frequently Asked Questions” compiled seven different sources’ estimates of the total losses resulting from the 2013 Target breach, ranging from $11 million to $4.9 billion. The high degree of uncertainty and variability surrounding cost estimates for cybersecurity incidents has serious policy consequences, including making it more difficult to foster robust insurance markets for these risks as well as to make decisions about the appropriate level of investment in security controls and defensive interventions. Multiple factors contribute to the poor data quality, including that cybercrime is continuously evolving, cyber criminals succeed by covering their tracks and victims often see more risk than benefit in sharing information. Moreover, the data that does exist is often criticized for an over-reliance on self-reported survey data and the tendency of many security firms to overestimate the costs associated with security breaches in an effort to further promote their own products and services. 

While the general lack of good cost data presents a significant impediment to informed decision-making, ignorance of the economic impacts of data breaches varies across categories of costs, events, and stakeholders. Moreover, the need for precision, accuracy, or concurrence in data estimates varies depending on the specific decisions the data is intended to inform. Our overarching goals in this paper are to clarify which types of cybersecurity cost data are more easily collected than others; how policymakers might improve data access and why previous policy-based efforts to do so have largely failed; and what differential ignorance implies for cybersecurity policy and investment in cyber defenses and mitigation.

To address these questions, we examine several common presumptions about the relative magnitudes of cybercrime cost effects for which generally accepted and reasonably precise quantitative estimates are lacking. For example, we review the evidence supporting the commonly accepted and often cited claims that the aggregate investments in defending against and remediating cybercrimes significantly exceed the aggregate investments by attackers; and that the aggregate harm suffered by victims of cybercrimes exceeds the benefits realized by attackers. There are other such statements that are more contentious. For example, it is unclear whether the aggregate expenditures on cyber defense and remediation exceed the aggregate harms from cybercrimes; or whether a significant change in expenditures on cyber defense and remediation would result in proportionately larger changes in the harms resulting from cybercrimes. For each of these presumptions, we consider the existing evidence, what additional evidence might be needed to develop more precise quantitative estimates, and what better estimates might imply for cyber policy and investment. 

We argue that the persistent inability to accurately estimate certain types of costs associated with data breaches—especially reputational and loss-of-future-business costs—has played an outsize and detrimental role in dissuading policy-makers from pursuing the collection of cost data related to other, much less fundamentally uncertain costs, including legal fees, ex-ante defense investments, and credit monitoring and notification. Finally, we propose steps for policy-makers to take towards aggregating more reliable, consistently collected cost data associated with data breaches for the categories of costs that are most susceptible to rigorous measurement, without getting too bogged down in discussions of the costs that are most difficult to measure, and which are therefore, by necessity, likely to remain most uncertain. We argue that the high degree of ignorance and uncertainty surrounding this subset of data breach costs should not be used as a reason to abandon measurement of other types of losses incurred by these incidents, and that explicit consideration of our differential ignorance of breach cost elements can help us better understand which questions about the economic impacts of data breaches can and cannot be meaningfully answered. 


 


Moderators
Presenter
JW

Josephine Wolff

Rochester Institute of Technology

Author

Saturday September 9, 2017 9:34am - 10:07am
ASLS Hazel Hall - Room 332

9:34am

Spectrum Policies for Intelligent Transportation Systems
The FCC has allocated 75 MHz of spectrum for Intelligent Transportation Systems (ITS). This spectrum will support connected vehicles by allowing automobiles to communicate with each other and with roadside infrastructure using a technology called Dedicated Short-Range Communications (DSRC). The question of whether ITS should have an exclusive allocation of 75 MHz is hotly debated, and there are two competing proposals that would make some or all of this spectrum available for unlicensed devices through either primary-secondary sharing or sharing on a co-equal basis. This paper investigates how much spectrum should be allocated to ITS, whether ITS spectrum should be shared with unlicensed devices, and if so under what rules.

One motivation for allocating spectrum to ITS is to support highway safety applications. While safety communications have priority, ITS spectrum can be used for other purposes when it is not needed for safety. As a result, much of the spectrum may be used for non-safety purposes. Previous work has shown that once spectrum is allocated to ITS and DSRC technology is widely deployed, it can be more cost-effective to provide Internet access using mesh networks of these devices than using today’s cellular networks. Thus, allocating more spectrum for ITS can reduce Internet costs. There are also disadvantages, because spectrum allocated for ITS is unavailable for other purposes.

This paper will assess various spectrum policies. We vary the amount of spectrum allocated to ITS. We also consider four possible policies regarding sharing the portions of ITS spectrum that do not contain safety-critical communications: (i) spectrum is allocated only for connected vehicles, (ii) spectrum is shared with unlicensed devices on a primary-secondary basis, such that unlicensed-devices can operate only if they are sufficiently far from any DSRC device, (iii) spectrum is shared with unlicensed devices on a co-equal basis, where DSRC and unlicensed devices must coexist but are not expected to cooperate, and (iv) spectrum is shared with unlicensed devices on a co-equal basis, where regulations require DSRC and unlicensed devices to cooperate.  

For each spectrum policy, this paper quantifies three possible impacts: (i) the cost savings obtained by providing Internet access over DSRC-based mesh networks in the ITS band instead of a cellular network, thereby reducing the number of cell towers needed, (ii) the value of spectrum allocated to ITS and therefore made unavailable for other purposes, and (iii) the extent to which ITS spectrum can be used by unlicensed devices that are sharing the band.  

Our method involves multiple inter-related models, and data collected from a citywide connected vehicle deployment. We estimate costs and cost savings by developing a detailed and realistic engineering-economic model of the networks used to provide Internet access. Some costs and savings depend on the throughput achieved in the DSRC-based mesh network, and the throughput achieved by unlicensed devices. To estimate throughputs, we have built a packet-level simulation of both DSRC and unlicensed devices. Finally, to make this simulation realistic, we have obtained information about vehicle location and movement, signal propagation, and other characteristics from a citywide deployment in Portugal.
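The back-of-the-envelope calculation below illustrates, in a highly simplified way, how an engineering-economic comparison of this kind can be set up: cellular cost savings from offloading a share of demand onto a DSRC mesh under different allocations and sharing regimes. All inputs (cell-site costs, capacities, offload fractions) are hypothetical placeholders, not results from the paper’s models or the Portugal deployment.

```python
# Back-of-the-envelope sketch (not the paper's model): cellular cost savings
# from offloading a fraction of mobile Internet traffic onto a DSRC-based mesh,
# under different ITS spectrum allocations and sharing regimes.
# All inputs below are hypothetical.

CELL_SITE_ANNUAL_COST = 100_000.0   # $/yr per macro cell (amortized capex + opex)
CELL_SITE_CAPACITY_TB = 600.0       # TB/yr of demand one cell site can serve
CITY_DEMAND_TB = 120_000.0          # TB/yr of mobile demand in the study area

def mesh_offload_fraction(its_mhz, sharing_policy):
    """Hypothetical fraction of demand a DSRC mesh can absorb, given the ITS
    allocation (MHz) and the sharing regime for the non-safety portion."""
    usable = {"exclusive": 1.0,           # all non-safety capacity available to DSRC
              "primary_secondary": 0.9,
              "co_equal": 0.6,
              "co_equal_coop": 0.75}[sharing_policy]
    # crude linearization: more ITS spectrum -> more mesh capacity, capped at 40%
    return min(0.40, 0.004 * its_mhz * usable)

def cellular_cost(its_mhz, sharing_policy):
    served_by_cellular = CITY_DEMAND_TB * (1 - mesh_offload_fraction(its_mhz, sharing_policy))
    sites_needed = served_by_cellular / CELL_SITE_CAPACITY_TB
    return sites_needed * CELL_SITE_ANNUAL_COST

baseline = cellular_cost(0, "exclusive")   # no DSRC offload at all
for mhz in (30, 75):
    for policy in ("exclusive", "primary_secondary", "co_equal", "co_equal_coop"):
        saving = baseline - cellular_cost(mhz, policy)
        print(f"{mhz:3d} MHz, {policy:17s}: annual cellular cost saving ${saving:,.0f}")
```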

Moderators
Presenter
avatar for Alexandre Ligo

Alexandre Ligo

PhD Candidate, Carnegie Mellon University

Author
avatar for Jon Peha

Jon Peha

Carnegie Mellon University

Saturday September 9, 2017 9:34am - 10:07am
ASLS Hazel Hall - Room 221

9:34am

Getting There from Here: 30 in 2020 in the Democratic Republic of Congo
Internet penetration in the Democratic Republic of Congo (DRC) is 4%. According to Radio Okapi, the high costs of services and the lack of infrastructure block most people from accessing the Internet. The high Internet costs are due to the high cost of satellite bandwidth use, which raises Internet providers’ operating costs. Low Internet penetration in the D.R.C. is a brake on the development of this country, experts in the sector confirm. This damages both the economy and society, and is especially dangerous in case of emergency. The objective of the Interagency Task Force and Advisory Group is to support the D.R.C. in achieving 30% penetration as soon as 2020.

This paper assesses an Interagency Task Force and Advisory Group (IATAG) helping catalyze interagency and civil society cooperation with industry investment to enable D.R.C. Internet penetration to exceed 30% in 2020. If successful, the economic and social benefits are obvious as a more connected and wealthier D.R.C. creates more economic opportunities for citizens and investors, improves health and well-being, and improves the functioning of the public sector.
Fieldwork to identify key inhibitors and barriers, which must be removed or reformed if the DRC Broadband Vision is to be realized, is among the research methods to be used in this paper. Experiments with government ministries, governors, civil society actors, and firms will be evaluated. Sociopolitical challenges will vary both between and within the Provinces of the Democratic Republic of Congo. To obtain a realistic view of current conditions and critical barriers, infrastructure providers of telecommunications and energy, and their business, residential and public sector customers must be encouraged to contribute, along with potential new market entrants. Investors and industry partners, who affirm they would contribute if identified barriers were removed, are participating in this multi-stakeholder process.  

The conclusions of this paper may have significant application to realizing efforts of international agencies and the technology industry to facilitate the next billion and a half people accessing the Internet. Already, the nascent efforts in the DRC are attracting growing interest from several other African nations whose geography, economy, and political instability may also have left large numbers of people excluded from Internet access to date. Are there lessons to be learned from the DRC on what policy approaches and technology innovations may help rural areas, education and emergency services leap ahead? 

Aggregating demand by supporting inter-city hybrid heterogeneous (fiber, wireless, and satellite, as well as off-grid solutions) networks will be key to enabling sustainable, rapid growth in connectivity and Internet access, this research indicates.

Establishment of an annual series of DRC Internet Forum meetings, to facilitate continuous multistakeholder dialogue is planned. Several Provinces whose Governors and citizens may be prepared to commit to supporting the Vision will prove their commitment by formally volunteering their Provinces as Innovation Zones. Both government and civil society clearly must be prepared to take practical, and difficult, actions to make the plan implementable, and change possible.  
The IATAG is developing a checklist and brief questionnaire for submission by the Governor of interested Provinces, with indications of broad support by local officials, university and school leaders, businesses, and community organizations. Results of this questionnaire may be shared with TPRC participants for their feedback and suggestions. Incumbent telecommunications operators identifying what they perceive to be the key challenges in the region is also critical input for the Task Force to consider. For example, it may be that mobile backhaul is the top obstacle, as it was in Ugandan regions where Facebook co-invested in 2017 with two operators to remove that barrier. This paper will also report on findings from the point of view of incumbent and new entrant operators.

In a virtuous circle, private sector actors encouraged by reformed government innovation policies and engaged community groups inviting change will take on the risk of investing in advanced wireless, mobile and fiber infrastructure in the DRC as soon as possible. This can increase access and lower costs for everyone while improving the quality and variety of services far beyond 4G. These include innovative hybrid, heterogeneous, software-defined and virtualized networks offering cloud services across wireless grids for the Internet of Things with edgeware.

Moderators
JK

Jack Karsten

Brookings

Saturday September 9, 2017 9:34am - 10:08am
ASLS Hazel Hall - Room 225

10:07am

Mobile Communications Among the Bedouin in Israel: The Digital Exclusion of an Indigenous People
The Bedouin are the indigenous people of the southern half of Israel – the Negev (in Arabic: Naqab) desert. The 230K population constitutes 3.5% of the Israeli population and close to 30% of the population of the Negev. Yet, despite their centuries-old ancestral roots in the region, since the founding of the state they have been systematically discriminated against and deprived of basic freedoms and rights, among them partaking in egalitarian policy discourse regarding their own livelihood, including the equal opportunity to utilize network communication technologies in their towns and villages. More than 60,000 of them live in “unrecognized” villages, in what the State Comptroller has described as “insufferable conditions.”

This first-of-its-kind inventory of wireless services available to the Bedouin community demonstrates empirically the combined effect of discriminating state policies and industry neglect of a poverty-stricken and systematically marginalized community. Incorporating critical analyses of policy documents, systematic mapping of infrastructure and facilities, and industry responses, this study paints a picture of exclusionary practices and the way they are implemented and justified in the digital wireless media industry.

The empirical data consists of:

1. Official universal service and mobile deployment standards as dictated by law, regulations and licenses.

2. Levels of connectivity to wireline services in Bedouin towns, both “recognized” and “unrecognized,” compared with each other, with neighboring Jewish towns and with national averages and standards.

3. Levels of connectivity to wireless services in Bedouin towns as compared to neighboring Jewish towns, taking into account the number of towers/transmitters in each locality and the density of the population.

4. Mapping of the Bedouin “diaspora” and measures of the distance between towers/ transmitters and villages. These measures, using official location maps provided by the ministry of environmental protection, are divided by different service providers.

5. Quality of service, determined by fieldwork in which transmission and reception of signals were measured, identifying deployment of the different “generations” of mobile services.

6. Official positions and reactions of industry and operators regarding service provision to Bedouin towns and villages.

Initial findings indicate:

1. None of the Bedouin towns are served by the cable industry. Landline penetration among the Bedouin is significantly lower than among Jewish towns.

2. There is a large variation in connectivity levels to broadband among Bedouin towns. It ranges from 10% in Tel-Sheva to 45% in Rahat. The national level of broadband penetration in 2014 was over 71%.

3. There is large variation in number of cellular towers/transmitters per capita among the Bedouin “recognized” towns, ranging from 1/3,000 residents in Kseife to 1/9,400 in Hura.

4. There is a dramatic difference in the number of towers/transmitters between Jewish suburbs and Bedouin towns. Some Jewish settlements have as many as 1 tower per 157 residents (Shoval and Nevatim), with the lowest rate being 1 per 1,775 (Meitar).

5. In the “unrecognized” Bedouin diaspora, the distance of the closest tower to a village can be as much as 7 kilometers. Of the 52 villages only 2(!) are less than a kilometer away from the closest tower.

Moderators
JK

Jack Karsten

Brookings

Presenter
avatar for Amit M. Schejter

Amit M. Schejter

Ben-Gurion University of the Negev

Author

Saturday September 9, 2017 10:07am - 10:35am
ASLS Hazel Hall - Room 225

10:07am

Smartphones and Urban Transportation Mode Choice
In the last decade the United States has experienced an explosive advancement of mobile information technology. Consumers can now access traffic information, transit schedules, directions, and any other data on the internet on the go using smartphones. The arrival of this new good has profoundly impacted how we interact and how we make decisions about what activities to consume and how to travel to them. Using travel diary data covering Portland, OR in 2011, we estimate a mode choice model with smartphone ownership as an exogenous variable as well as a model that treats smartphone ownership as an endogenous variable. In order to address potential correlation in unobservables for smartphone ownership and mode choice, we exploit the timing of the release of the iPhone 4s (the first iPhone featuring Siri), after which smartphone penetration increased significantly. We find that smartphone ownership does, in fact, increase the utility of riding public transit over other mode options and that 28% of public transit commutes can be attributed to smartphone ownership.
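As a minimal illustration of a mode choice model of this kind, the sketch below estimates a multinomial logit of commute mode on smartphone ownership and trip characteristics using synthetic data. It treats smartphone ownership as exogenous; the paper’s instrumental strategy based on the iPhone 4s release is not reproduced here, and all variables and coefficients are hypothetical.

```python
# Minimal sketch of a mode-choice model on synthetic data: a multinomial logit
# of commute mode (drive / transit / walk-bike) on smartphone ownership and
# trip characteristics. Smartphone ownership is treated as exogenous here;
# the paper's instrumental strategy is not shown. All values are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 4000

smartphone = rng.integers(0, 2, n)
dist_miles = rng.gamma(2.0, 3.0, n)          # commute distance
income_10k = rng.normal(6, 2, n)             # household income, $10k units

# Latent utilities (drive is the base alternative), with Gumbel shocks
u_drive = np.zeros(n)
u_transit = -0.5 + 0.6 * smartphone - 0.05 * dist_miles - 0.05 * income_10k
u_walk = 0.5 - 0.30 * dist_miles
utils = np.column_stack([u_drive, u_transit, u_walk]) + rng.gumbel(size=(n, 3))
mode = utils.argmax(axis=1)                  # 0=drive, 1=transit, 2=walk/bike

X = sm.add_constant(pd.DataFrame(
    {"smartphone": smartphone, "dist_miles": dist_miles, "income_10k": income_10k}))
fit = sm.MNLogit(mode, X).fit(disp=False)
print(fit.summary())

# Crude counterfactual: transit share with smartphone ownership set to zero
X0 = X.copy()
X0["smartphone"] = 0
probs = np.asarray(fit.predict(X))
probs0 = np.asarray(fit.predict(X0))
share_attributable = 1 - probs0[:, 1].mean() / probs[:, 1].mean()
print(f"Share of transit trips attributable to smartphones (toy data): {share_attributable:.1%}")
```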

Moderators
JC

Jane Coffin

Director, Development Strategy, Internet Society
IXPs, connectivity, access, connecting the next billion, development

Presenter

Saturday September 9, 2017 10:07am - 10:40am
ASLS Hazel Hall - Room 329

10:07am

What is the Impact of Broadband Bandwidth Variability on Quality of Life?: Lessons from Sweden
Connectivity opens economic possibilities, and broadband makes it possible to connect millions via the Internet. The economic impact of broadband has been studied by various policy organizations and scholars. It is widely argued that broadband technology can directly and indirectly engender economic activity in a region, and many of these arguments are based on theories of change and innovation. Some scholars have used data at various levels to empirically ascertain the impact; however, data-driven research in this area remains limited. Many researchers have used aggregate macroeconomic indicators to ascertain the impact of broadband use, but indicators such as GDP may not fully capture various aspects of individuals’ quality of life in different societies. In addition, policymakers often debate the importance of variability in broadband bandwidth: higher bandwidth should give a better experience. An important question, therefore, is whether variability in broadband bandwidth is correlated with different aspects of quality of life (QoL). We endeavor to answer this question via econometric estimations using a unique dataset from Sweden. Our hypothesis is based on the notion that higher-speed and more reliable communication via broadband, in both mobile and fixed form, would engender new economic activities which in turn may shape various aspects of life.  

The OECD describes quality of life as consisting of various indicators such as health, education, leisure, social connections, civic engagement and governance, environmental quality, and personal security. Although access to high-speed networks may indirectly affect these variables, it is not yet obvious how much direct influence broadband access might have on most of these indicators. Lehr, Osorio, Gillet and Sirbu (2005) attempted to ascertain the economic impact of broadband availability on American society, using wage, rent, employment, and industry mix. We posit that in the era of the Internet of Things, connectivity without reliable broadband of sufficient bandwidth may fail to have an impact on society. We use indicators such as education, economic growth, wage, rent, sector-wise employment and industry mix, people’s commuting patterns, and public participation as a collective measure of economic well-being. Our analysis also distinguishes between wired and wireless communications devices. 

The observations in our econometric models form a panel, combining a time series dimension (years after the introduction of the technology) with a cross-sectional dimension (municipalities). The independent variables are as follows: Broadband, indicating the number of people using the Internet at the ‘broadband level’ defined by a specific country; Technology, a binary variable indicating mobile or wired connectivity; and Speed, indicating the minimum broadband bandwidth (download rate at the last mile). As control variables, we introduce area-specific fixed effects and various time-specific demographic, socio-political and economic variables. A number of fixed-effects regressions were employed to ascertain the impact of (a) access to broadband at various speeds and (b) mobile versus wired broadband on the aforementioned economic activity variables. A stylized version of this specification is sketched below. 
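The sketch below shows a stylized version of such a fixed-effects specification on a synthetic municipality-year panel, with municipality and year fixed effects and standard errors clustered by municipality. The variable names, the outcome, and the data-generating process are hypothetical and are not drawn from the Swedish dataset.

```python
# Minimal sketch of the kind of fixed-effects specification described above,
# on a synthetic municipality-year panel: an outcome (e.g., an economic
# activity measure) regressed on broadband speed and coverage with
# municipality and year fixed effects. Variable names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
munis, years = 240, range(2009, 2016)   # roughly 1,700 municipality-year observations

rows = []
for m in range(munis):
    muni_effect = rng.normal(0, 1)
    for t in years:
        speed_mbps = 10 + 5 * (t - 2009) + rng.normal(0, 5)            # download speed
        coverage = min(1.0, 0.4 + 0.08 * (t - 2009) + rng.normal(0, 0.05))
        outcome = (2.0 + 0.02 * speed_mbps + 1.5 * coverage
                   + muni_effect + 0.1 * (t - 2009) + rng.normal(0, 0.5))
        rows.append((m, t, speed_mbps, coverage, outcome))

panel = pd.DataFrame(rows, columns=["muni", "year", "speed_mbps", "coverage", "outcome"])

# Two-way fixed effects via municipality and year dummies, clustered by municipality
fe = smf.ols("outcome ~ speed_mbps + coverage + C(muni) + C(year)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["muni"]})
print(fe.params[["speed_mbps", "coverage"]])
print(fe.bse[["speed_mbps", "coverage"]])
```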

The dataset has around 1,700 data points at the municipality level for the years 2009 to 2015. The data are obtained from various sources; economic activity, demography, and various other control variables are taken from the Swedish Statistical Agency. 

Our initial findings indicate that broadband bandwidth variability has a mixed impact on various aspects of quality of life.  

Mobile broadband speed has a positive impact on English and Mathematics results and a negative impact on the native language (in this case Swedish). However, the results show that reliable broadband coverage has a positive impact on Mathematics and Swedish. This may indicate a scarcity of educational materials in Swedish compared with English and Mathematics. It might also indicate that the content consumed at high speeds is focused on entertainment rather than education. 

We find that fixed broadband coverage has a positive impact on the total number of firms. However, mobile download speed does not have a significant impact on the number of start-ups or the total number of firms. This is quite intuitive, as firms rely on fixed connectivity.

The estimations show that mobile broadband speed, on average, does not have a significantly positive impact on job creation in either the service or the manufacturing industry. The estimations indicate that mobile data speed may have a negative impact on the service sector once time-specific idiosyncrasies are controlled for. We also see that coverage at 10 Mbps has a negative impact on job creation in the service sector. Our estimations categorize jobs into service and manufacturing sub-groups; however, most jobs in both sub-groups require both skilled and unskilled labor. The decreasing trend may indicate that broadband speed and coverage have a negative effect on jobs in the unskilled parts of these sectors. The negative impact of coverage is more prominent in small cities than in metropolitan and large cities. Overall, this might mean that broadband is shifting demand toward high-skill labor across job sectors and replacing low-skill jobs.

Municipalities with higher mobile download speeds have higher housing prices. This might indirectly indicate that better mobile speed is a proxy for better infrastructure more generally: dwelling expenses rise in places where infrastructure is better. However, we do not see any significant impact of the coverage or reliability variables.

Mobile broadband speed does not have any significant impact on salary. This is intuitive, as people probably do not rely on mobile data for work-related tasks. We see that municipalities with better coverage at 10 Mbps have lower average salaries. This may be an aggregate effect, as municipalities fall into various categories with different types of job requirements. In large cities, however, broadband at 100 Mbps has a positive impact on salary. As large cities are mostly innovation hubs and job incubators, the results indicate that better and more reliable coverage of high-speed fixed broadband may increase high-salaried jobs, which in turn points to a scarcity of high-skill labor and to opportunities for creating more jobs for skilled workers.

Mobile broadband speed has enabled people to work at a distance, and we see a positive impact of download speed on the number of workplaces and on people's commuting. This may indicate that firms can now build workplaces in remote areas and still communicate. It may also mean that, as mobile technology improves, people increasingly use data on the go for video and audio communication, which encourages them to travel further as long as they can communicate via their phones. We also see that better broadband coverage increases the number of workplaces, indicating that firms can create facilities in places far apart and still communicate and manage operations using broadband infrastructure. The most significant finding, however, concerns commuting patterns, which decrease as reliable fixed connectivity increases. This indicates that broadband may be associated with a decrease in physical travel for work, which can be a positive sign for the environment.

Our findings show that both mobile broadband speed and reliable connectivity have a positive impact on public participation. People vote more in municipalities where mobile broadband speed is higher and fixed communication coverage is better. This may indicate the positive impact of high connectivity and the availability of information.

The findings of this extensive research should aid policy makers contemplating the proliferation of high-bandwidth broadband around the world.

Moderators
MJ

Mark Jamison

PURC, University of Florida

Presenter

Moinul Zaber

University of Dhaka/LIRNEasia


Saturday September 9, 2017 10:07am - 10:40am
ASLS Hazel - Room 120

10:07am

Content Analysis of Cyber Insurance Policies: How Do Carriers Write Policies and Price Cyber Risk?
Cyber insurance is a broad term for insurance policies that address first and third party losses as a result of a computer-based attack or malfunction of a firm’s information technology systems. For example, one carrier’s policy defines computer attacks as, “A hacking event or other instance of an unauthorized person gaining access to the computer system, [an] attack against the system by a virus or other malware, or [a] denial of service attack against the insured’s system.”

Despite the strong growth of the cyber insurance market over the past decade, insurance carriers are still faced with a number of key challenges: how to develop competitive policies that cover common losses but also exclude risky events; how to assess the variation in risks across potential insureds; and how to translate this variation into an appropriate pricing schedule.

In this research paper, we seek to answer fundamental questions concerning the current state of the cyber insurance market. Specifically, by collecting over 100 full insurance policies, we examine the composition and variation across three primary components: the coverage and exclusions of first- and third-party losses, which define what is and is not covered; the security application questionnaires, which are used to help assess an applicant's security posture; and the rate schedules, which define the algorithms used to compute premiums.

Overall, our research shows a much greater consistency among the loss coverage and exclusions of insurance policies than is often assumed. For example, after examining only 5 policies, all coverage topics were identified, while it took only 13 policies to capture all exclusion topics. However, while each policy may include commonly covered losses or exclusions, there was often additional language further describing exceptions, conditions, or limits to the coverage. The application questionnaires provide insights into the security technologies and management practices that are (and are not) examined by carriers. For example, our analysis identified four main topic areas: Organizational, Technical, Policies and Procedures, and Legal and Compliance. Despite these sometimes lengthy questionnaires, however, there still appeared to be relevant gaps. For instance, the questionnaires say little about the security posture of third-party service and supply chain providers, which is notoriously difficult to assess properly (despite numerous breaches originating from such compromises).

In regard to the rate schedules, we found a surprising variation in the sophistication of the equations and metrics used to price premiums. Many of the policies examined used very simple, flat-rate pricing (based simply on expected loss), while others incorporated more parameters such as the firm's asset value (or firm revenue), standard insurance metrics (e.g., limits, retention, coinsurance), and industry type. More sophisticated policies also incorporated information on specific information security controls and practices as collected from the security questionnaires. By examining these components of insurance contracts, we hope to provide the first-ever insights into how insurance carriers understand and price cyber risks.
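
To make the contrast concrete, the sketch below juxtaposes a flat expected-loss premium with a factor-based schedule of the kind described; every rate and factor value is invented for illustration and does not come from any filed policy.

```python
# Illustrative-only premium calculations; the parameters below are invented,
# not taken from any carrier's filed rate schedule.

def flat_premium(expected_loss: float, loading: float = 1.2) -> float:
    """Simplest observed style: price = expected loss times a loading factor."""
    return expected_loss * loading

def factor_based_premium(base_rate: float,
                         revenue_musd: float,
                         limit_factor: float,
                         retention_factor: float,
                         industry_factor: float,
                         controls_factor: float) -> float:
    """More elaborate style: a base rate scaled by firm size (revenue),
    standard insurance metrics (limits, retention), industry hazard class,
    and a credit/debit for security controls reported in the questionnaire."""
    return (base_rate * revenue_musd
            * limit_factor * retention_factor
            * industry_factor * controls_factor)

if __name__ == "__main__":
    print(flat_premium(expected_loss=40_000))  # 40,000 * 1.2 = 48,000
    print(factor_based_premium(base_rate=500, revenue_musd=120,
                               limit_factor=1.5, retention_factor=0.85,
                               industry_factor=1.25, controls_factor=0.9))
```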

Moderators
Presenter
SR

Sasha Romanosky

RAND Corporation
On twitter at @SashaRomanosky

Author

Saturday September 9, 2017 10:07am - 10:40am
ASLS Hazel Hall - Room 332

10:07am

A New Spectrum License for Old Circumstances: A Retrospective Look at the Nextel Interference Proceedings
In this paper we apply an idea few are familiar with to a situation that was resolved over a decade ago. Why?

We consider the idea of a spectrum license with certain traditional features, but where the regulator retains an option to modify parameters of the license in pre-specified ways over time. For example, the regulator may guarantee access to a certain spectrum bandwidth, but retain the option to change the center frequency within a specified band with appropriate notice. The intent of this license structure is to provide flexibility for the regulator and certainty for the licensee that does not always exist under current licensing approaches.

In this paper we examine, retrospectively, the application of this license-with-an-option feature to the case of Nextel Communications interfering with public safety communications in the 800 MHz band. This issue initially surfaced in 1999, and it took until 2004 to reach agreement on a resolution. Implementation of the agreement took nearly four years more. Resolution required several proposals by the FCC and others, and over 2,200 filings by interested parties. The duration and rich variety of concerns covered by the proceedings provide the opportunity to examine a new idea in the context of an old problem.

Despite the historical perspective, the underlying spectrum management issues remain as relevant as ever, and we examine application of the new license concept looking forward as well as backward. Our license proposal is intended to provide flexibility and certainty in a variety of situations, including (1) changes in technology, demand, or use; (2) coexistence between multiple services; and (3) efficient use of spectrum over time. These were central issues in the Nextel interference proceedings, but they resurface in recent cases such as the Lightsquared proposal, and even in current debates over spectrum sharing with public safety.

Spectrum allocation and allotment, assignment, service rules, and compliance and enforcement continue to be contentious management issues, even as the situations and applications evolve. We suggest that existing fixed licensing models are sub-optimal and in some situations are themselves the source of inflexibility and artificial scarcity. In this paper we contribute, via a case study, to the further development of a license model that augments existing approaches across a wide range of governance models and assignment approaches. This license capitalizes on the present trends toward spectrum sharing and more efficient use of existing spectrum, and it advances a model that assists with these goals.

Moderators
Presenter
Author


Saturday September 9, 2017 10:07am - 10:40am
ASLS Hazel Hall - Room 221

10:40am

Coffee Break
Saturday September 9, 2017 10:40am - 11:10am
Founders Hall - Multipurpose Room

10:40am

Discussion Tables about TPRC Submission Process - Full Paper vs. Abstract
An opportunity to discuss submissions. Have you taken the survey?  https://www.surveymonkey.com/r/TSD8X83

Saturday September 9, 2017 10:40am - 11:10am
Founders Hall - Multipurpose Room

11:05am

Municipal Fiber in the United States: An Empirical Assessment
The apparent success of municipal fiber projects in places such as Chattanooga has led to widespread calls for other cities to undertake similar initiatives. Unfortunately, empirical assessments of municipal fiber projects' performance are few and far between, with most of the literature consisting of advocacy pieces that are long on rhetoric and short on data. The anecdotal nature of these analyses has led them to focus almost exclusively on supposed success stories instead of analyzing the universe of municipal fiber projects systematically.

To fill this gap, we present an empirical evaluation based on a unique dataset derived from the audited financial statements from 2010 to 2014 and the public bond filings of every municipal fiber project in the U.S., combined with U.S. Census data. We conduct a discounted cash flow analysis on the data to assess whether particular projects will remain solvent or whether they will have to default on their bond obligations. We also present detailed case studies of particular projects identified as potential success stories by our analysis and by media reports. The net result is a fairly sobering picture of how likely it is that municipal fiber projects will be able to generate sufficient returns to recover the investments needed to construct these networks, and of the implications such shortfalls can have.
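
As a rough sketch of the solvency test this kind of analysis performs, the following toy discounted cash flow calculation uses hypothetical cash flows and a placeholder discount rate, not figures from the study.

```python
# Toy discounted-cash-flow check for a hypothetical municipal fiber project.
# All figures are placeholders for illustration, not values from the paper.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value of cash flows, where cash_flows[t] occurs in year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Projected free cash flows (year 0 = construction outlay, later years = operations), $ millions.
projected = [-60.0, 2.0, 4.0, 6.0, 7.0, 7.5, 8.0, 8.0, 8.0, 8.0, 8.0]
discount_rate = 0.05  # roughly a municipal bond yield, purely illustrative

value = npv(discount_rate, projected)
print(f"NPV at {discount_rate:.0%}: ${value:.1f}M")
# A negative NPV over the bond horizon suggests revenues will not cover the
# construction investment and debt obligations -- the solvency question posed above.
```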

Moderators
Presenter
CS

Christopher S. Yoo

University of Pennsylvania Law School


Saturday September 9, 2017 11:05am - 11:38am
ASLS Hazel - Room 120

11:05am

The 'Innovation Radar': A New Policy Tool to Support Innovation Management
Introduction: In this paper we describe a new policy tool to support innovation management and increase the innovation impact of research and innovation programs. With nearly 80 billion Euro over a period of 7 years (2014-2020), Horizon 2020 is the largest ever publicly funded research and innovation program in the European Union. Cutting-edge technologies are being developed within the program, and a significant part of these technologies could be commercialized. But not all technologies and innovations with commercial potential actually reach the market. The questions are why, and what additional actions are needed on the part of policy makers to address this problem. To this end the European Commission developed and successfully tested an "Innovation Radar" in the part of the program that focuses on Information and Communication Technologies (ICT) and their applications. The Innovation Radar focuses on the identification of high-potential innovations and of the key organizations developing these innovations in the program. Our paper presents the Innovation Radar (IR) methodology, which was developed by our team jointly with the team from the European Commission's Directorate General for Communications Networks, Content & Technology that manages the ICT part of Horizon 2020. The paper then presents the results of its pilot application.

Approach and Data: The IR methodology is based on the assessment of innovation and new technology ventures. The IR uses two composite indicators aimed at capturing the heterogeneity in innovation activities and innovators across projects. First, the "Innovation Potential Indicator" provides a holistic view of the innovation potential of projects. Second, the "Innovator Capacity Indicator" captures the innovator's capacity for conducting innovation activities. Each of these two indicators includes several sub-indicators. The IR characterizes innovations with respect to their technical readiness, innovation management, and market potential. For innovators - usually researchers - it delivers information on their ability to innovate and on their environment. We applied the OECD/JRC methodology to construct the composite indicators.
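
The abstract does not spell out the aggregation formula; the sketch below shows one common OECD/JRC-style construction (min-max normalization of sub-indicators followed by a weighted average), with sub-indicator values and weights invented for illustration.

```python
# Illustrative composite-indicator construction in the spirit of the OECD/JRC
# handbook: min-max normalization of sub-indicators, then a weighted average.
# Sub-indicator values and weights are invented for this sketch.
import numpy as np

# Rows = projects, columns = sub-indicators (e.g. technical readiness,
# innovation management, market potential).
raw = np.array([
    [6.0, 3.0, 8.0],
    [9.0, 7.0, 5.0],
    [4.0, 8.0, 6.0],
])
weights = np.array([0.4, 0.3, 0.3])  # must sum to 1

# Min-max normalize each sub-indicator to [0, 1] across projects.
normalized = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))

# "Innovation Potential"-style score per project as a weighted average.
scores = normalized @ weights
print(scores)
```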

The data used in the paper was collected via a structured questionnaire during the IR pilot phase from May 2014 to January 2015 on the occasion of annual projects reviews conducted by external experts (projects typically run for 3 years).

Results overview: Out of the 2600 projects running at the time, the IR scanned 280 projects. Over 500 innovations with market potential were identified, or on average nearly two new or substantially improved products or services per project. The IR pilot phase provided evidence, in particular, that while innovators demonstrate high levels of technological expertise, they usually pay less attention to business-related dimensions. The most common needs expressed by innovators are partnerships with other companies, business plan development, and expanding to more markets. They less frequently mention needs for incubation, investment training, or participation in accelerators. More than 40% of the innovators mention lack of financing as a major external bottleneck to innovation exploitation, although interestingly only 5% have sought or are planning to seek private or public funding. Regulation and IPR issues are also considered important bottlenecks.

The impact of the IR covers three main dimensions:

1. Providing structured and quantified intelligence on innovations and innovators in publicly funded projects and using this intelligence to design customized support for those projects and organizations.

2. Creating bridges between innovative organizations/projects and external stakeholders such as venture capitalists.

3. Providing feedback on the effectiveness of the program in funding innovation, thereby enabling the policy makers to improve its impact.

Having completed the IR pilot phase and analyzed the collected information, we conclude that, for the first time, policy makers and project participants can obtain up-to-date structured information on the innovative output of the projects, and that the IR has demonstrated its real potential as a policy tool to support innovation management.

Presenter

Paul Desruelle

Digital Economy Unit, European Commission - Joint Research Centre
Growth & Innovation Directorate Digital Economy Unit

Author

Saturday September 9, 2017 11:05am - 11:38am
ASLS Hazel Hall - Room 329

11:05am

An Analysis of Job and Wage Growth in the Telecom/Tech Sector
This paper reports on a systematic study of the quantity, wage level, and location of domestic jobs being created by the telecom/tech sector. In recent years, the leading telecom/tech companies have been repeatedly criticized for not producing enough jobs; for not producing enough middle skill jobs; and for not producing enough geographically diverse jobs. In this paper we bring together data from the Current Employment Statistics (CES), the Occupational Employment Statistics (OES), the Quarterly Census of Employment and Wages (QCEW), and organic job posting data to systematically address all three of these questions.

The first step was to identify several appropriate technology aggregates, including the broader digital sector, the telecom/tech sector, and the e-commerce sector. We show that, for each aggregate, both job and establishment growth have significantly outpaced the overall private sector. Moreover, we estimate domestic employment for the top ten telecom/tech companies (measured by market cap) and show that their domestic workforce has grown by 31% since 2007, compared to 5% for the private sector as a whole.

We then calculate the average real wage in each aggregate. Not surprisingly, we find that real wages in the technology aggregates are higher and rising faster than for the private sector as a whole. To correct for composition effects, we examine detailed occupational categories, and find that for middle-skill occupations such as sales and office support, the tech aggregates have significantly higher wages compared to the private sector.

Next, we examine the geography of telecom/tech job and payroll growth. We find that in recent years the telecom/tech sector has “escaped” the coasts and is now propelling growth in states such as Kentucky, Ohio, and Indiana. We estimate the income gains to these states from telecom/tech expansion.

Finally, we project the impact on overall real wages if the current telecom/tech growth continues. We decompose the impact into a composition effect and a real wage effect.
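
One common way to carry out such a decomposition is a shift-share style split into a within-sector wage term and an employment-composition term; the sketch below uses invented sector shares and wages, and the authors' exact variant is not stated in the abstract.

```python
# Illustrative shift-share style decomposition of a change in the aggregate
# average wage into a within-sector ("real wage") effect and a composition effect.
# Sector shares and wage levels are invented for this sketch.
sectors = ["telecom/tech", "rest of private sector"]
share_0, share_1 = [0.10, 0.90], [0.14, 0.86]   # employment shares, start/end
wage_0,  wage_1  = [90.0, 50.0], [96.0, 51.0]   # average real wages ($ thousands)

avg_0 = sum(s * w for s, w in zip(share_0, wage_0))
avg_1 = sum(s * w for s, w in zip(share_1, wage_1))

# Within-sector wage growth, holding initial employment shares fixed.
wage_effect = sum(s0 * (w1 - w0) for s0, w0, w1 in zip(share_0, wage_0, wage_1))
# Shift of employment toward higher-wage sectors, valued at end-period wages.
composition_effect = sum((s1 - s0) * w1 for s0, s1, w1 in zip(share_0, share_1, wage_1))

print(avg_1 - avg_0, wage_effect + composition_effect)  # the two terms sum exactly
```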

Presenter
MM

Michael Mandel

Progressive Policy Institute


Saturday September 9, 2017 11:05am - 11:38am
ASLS Hazel Hall - Room 332

11:05am

Spectrum Management Issues for the Operation of Commercial Services with Drones
Commercial operations of Unmanned Air Vehicles (a.k.a. drones) are envisioned to allow new services over civilian airspace and neighborhoods. However, a communications infrastructure that supports the control and air traffic management interactions of these devices beyond line-of-sight operations is required. This infrastructure will rely on wireless communication links and their associated radio frequency spectrum resources for its implementation. This paper presents an analysis of current and upcoming spectrum policy issues that need to be taken into account and that will affect UAV operations and traffic management in the near future. The analysis incorporates a discussion of potential frequency bands being vacated by the FAA and DoD, among others, which could be leveraged for commercial UAV applications.

We also analyze why traditional cellular bands and equipment that were designed for terrestrial services might not be adequate for supporting UAV operations, as a drone-based user terminal can be in range of several base stations at the same time and cause interference over a much wider area than a terminal on the ground. We provide a regulatory and technical context for discussions on whether commercial UAV operations can and should be supported by spectrum resources managed and operated by commercial wireless service providers (LTE, 5G) or whether assigning a specific frequency band for these operations would be better.

We contrast the pros and cons of each approach under current and expected technological advances related to spectrum management (dynamic spectrum access, MIMO, etc.) in order to provide a spectrum-policy view of how UAV operations could take place in the near future, with supporting technical information on feasibility. Our analysis is further complemented by considering scenarios where drone flight paths are mostly unconstrained (except for FAA rules) or are limited most of the time to specific air traffic paths. Each scenario has different spectrum management and agility requirements that provide context on how FAA, FCC, and NTIA regulations should be coordinated if commercial UAV operations are to take off in the near future.

Presenter
CC

Carlos Caicedo

Syracuse University, School of Information Studies


Saturday September 9, 2017 11:05am - 11:38am
ASLS Hazel Hall - Room 225

11:05am

Cellular Economies of Scale and Why Disparities in Spectrum Holdings are Detrimental
What has been driving consolidation of the cellular industry in recent years? Now that traffic volumes are increasing rapidly, the cost of expanding capacity has become a large portion of expenditures for cellular carriers. This paper develops a realistic engineering-economic model of the principal factors that determine both cost and capacity of cellular infrastructure. It then uses analysis of that model to show that there are strong economies of scale when cellular capacity is rapidly expanding, assuming that each carrier makes design and resource decisions that minimize cost for any given capacity. This occurs in part because a carrier with more spectrum benefits more from every new cell tower, and a carrier with more towers benefits more from every new MHz of spectrum. While it is technically possible to expand capacity by increasing either towers or spectrum holdings, we find that the cost-effective approach for carriers is to increase both types of assets at a similar rate, which contradicts the publicly-stated assumptions of some spectrum regulatory agencies. This makes access to spectrum important for all carriers. However, our model shows that large carriers should be willing to pay more for spectrum in auctions and other markets. In the absence of countervailing policies, the big carriers will get bigger, in terms of spectrum holdings, towers, capacity, and ultimately market share.
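
A toy numerical illustration of this scale economy, under the simplifying assumption (ours, not necessarily the paper's) that capacity scales with the product of spectrum holdings and cell sites while cost is linear in both, is sketched below.

```python
# Toy illustration of economies of scale in cellular capacity expansion.
# Assumption (for this sketch only): capacity ~ spectrum * sites, and
# cost = c_site * sites + c_mhz * spectrum. Minimizing cost for a target
# capacity then gives total cost proportional to sqrt(capacity), so average
# cost per unit of capacity falls as the carrier grows.
import math

def min_cost(capacity: float, c_site: float, c_mhz: float) -> tuple[float, float, float]:
    """Cost-minimizing sites, spectrum (MHz), and total cost for a capacity target."""
    sites = math.sqrt(capacity * c_mhz / c_site)
    spectrum = math.sqrt(capacity * c_site / c_mhz)
    return sites, spectrum, c_site * sites + c_mhz * spectrum

for capacity in (100, 400, 1600):  # arbitrary capacity units
    sites, mhz, cost = min_cost(capacity, c_site=1.0, c_mhz=4.0)
    print(f"capacity={capacity:5d}  sites={sites:6.1f}  MHz={mhz:5.1f}  "
          f"cost={cost:7.1f}  avg cost={cost / capacity:.3f}")
```

In this toy setting, quadrupling the capacity target only doubles the minimized cost, and the cost-minimizing carrier expands sites and spectrum at the same rate, which is the flavor of the scale economy and joint asset expansion the abstract describes.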

For policymakers, this economy of scale creates a trade-off between two important objectives: reducing the cost of cellular capacity, and increasing competition. This paper derives the Pareto optimal division of spectrum with respect to these two competing objectives, and shows that any Pareto optimal assignment will split the spectrum fairly evenly (although not exactly equally) among competing carriers. (More specifically, in any Pareto optimal division with k competitors, k-1 would have the same amount of spectrum, and the kth would have less.) This is not simply a method of ensuring that there are many competitors; spectrum should be divided fairly evenly regardless of whether the number of competitors is large or small. A large disparity in spectrum holdings among competitors may yield poor results with respect to both objectives, i.e. the lower cost-effectiveness of a larger number of carriers, and the lower competitive pressure of a smaller number of carriers. One effective way to achieve a division of spectrum that is close to Pareto optimal is a spectrum cap, provided that this cap is set at a level consistent with other regulations and policy objectives. We show that in some countries, such as the U.K., this is not the case today. Thus, there is reason to change current policy.

Moderators
Presenter

Jon Peha

Carnegie Mellon University


Saturday September 9, 2017 11:05am - 11:38am
ASLS Hazel Hall - Room 221

11:38am

A Practical Guide to Applying the Law of the Sea into the Internet: Where the Internet Root Zone and the High Seas Find Each Other
In today's world there is no treaty that regulates the Internet. Although the multi-stakeholder model has been successful in keeping the Internet free of any one stakeholder group's dominance, there are still nation-states that advocate for a government-based model (Moyer, 2016; Schaller, 2014). There are various reasons for championing a treaty-based Internet governance model. Some nation-states intend to assert sovereignty over the Internet, and some do not want the laws of one nation to apply to what they assume to be their Internet territory, for example to their country code top level domain names.

Marrying the concept of a multistakeholder governance system with sovereigntists' ideas is not our intention. However, looking at international laws that might be applied to Internet governance has been the focus of scholars' and professionals' work. That being the case, in this paper we explore whether and how international law can be applied to the Internet infrastructure, considering the multistakeholder dimension of its governance. Our focus is mainly on the comparison between the Law of the Sea and the governance of the Internet. Some scholars argue that the provisions of the United Nations Convention on the Law of the Sea (UNCLOS) can be used as a model law for an international agreement between nation-states on how to govern the Internet (Barcomb, 2013; Kalpokienė & Kalpokas, 2012; Schmidt, 2017; Steven, 2001).

This comparison between the sea and the Internet comes from the fact that, in the past, the sea was considered a space beyond nation-states' national jurisdictions that was constantly subject to sovereignty claims, a space for war, communications, and economic production, much as the Internet is today. Nevertheless, although this comparison exists and academics have provided the rationale for why it is appropriate to look at the law of the sea for governing the Internet, they did not discuss how or why it might work in practical legal terms. Moreover, such studies did not substantiate how the application of UNCLOS provisions, in technical terms, could be made possible or what its legal implications would be.

Recently, there have been some attempts to customize the law of the sea to apply to certain aspects of Internet governance, such as the root zone, rather than to all of Internet governance (Kurbalija, 2011). While we value such an approach, we argue that it is time to analyze the argument that UNCLOS provisions can inspire a governance model applicable at least to some aspects of Internet governance (namely the root zone), while upholding the multistakeholder governance of the Internet.

With this purpose in mind, our aim in this paper is to illustrate the consequences of applying international practices and UNCLOS provisions from the "high seas" to a very narrow but crucial function of the Internet called the "root zone". The root zone is a file that contains the names and the numeric Internet Protocol (IP) addresses for all the Top Level Domains (TLDs), including the Generic Top Level Domains (gTLDs), like .COM, .NET, or .ORG, and all the Country Code Top Level Domains (ccTLDs), such as .PE (country code for Peru) (Clark, Berson, & Lin, 2014; Mueller, 2002). According to the contract signed between Verisign and the Internet Corporation for Assigned Names and Numbers (ICANN), Verisign is in charge of managing the root zone. Any change to the root zone has to be approved by ICANN and the Internet Assigned Numbers Authority (IANA). Currently no international principles apply to the root zone.

On the other hand, the "high seas" is the name for a maritime space recognized by nearly 2000 years of nation-state practice. The high seas, as UNCLOS establishes, start beyond 200 nautical miles from shore and are open and free to everyone. It is a space governed by the principle of equal rights for all, because the resources within the high seas belong to humankind and not to a specific nation-state. When ratifying UNCLOS, all members acknowledge that: 1) no nation-state can act or interfere with the justified and equal interests of the rest of humankind, 2) there is freedom of navigation for all nation-states, and 3) maritime security activities can be considered part of navigational activities as they protect vessels from interference by third parties (Messeguer Sanchez, 1999; Williams, 2017).

The root zone and the high seas have many identifiable similarities. As we will argue in this paper, they are both globally shared resources that can be subject to conflict over assertions of sovereignty by states, and they can be used or abused by states or private parties. There are also similarities between the zone of the Internet where the individual top level domains are operated and the space known as the "territorial sea," a space where coastal nation-states can apply their national laws as if it were part of their own territory. For example, just as each nation-state can operate its own ccTLD the way it desires and according to its national laws, each coastal nation-state may decide to forbid the passage of other nation-states' vessels within its territorial sea.

Considering the similarities and differences between the root zone and the high seas, this paper addresses the following questions: in a hypothetical world, 1) what are the legal implications of applying provisions similar to those of UNCLOS on the high seas to the root zone? 2) what are the benefits of applying the UNCLOS high seas provisions to the root zone? 3) how would the high seas provisions be applied to the root zone?

Finally, we want to clarify that this paper does not advocate for a governance model for the root zone based on the high seas provisions. The findings are expected to clarify the diverse opinions this matter has generated in the academic literature and to provide a practical view of whether such a comparison is even viable and, if it is, what Internet governance can learn from such international laws.

Presenter
FB

Farzaneh Badiei

Internet Governance Project/Georgia Tech

Author

Patricia Adriana Vargas Leon

PhD Candidate, Syracuse University
Patricia is currently a PhD candidate at the Syracuse University School of Information Studies. Her research and teaching interests focus on information policy and Internet governance with emphasis on issues such as national security, control over the Internet infrastructure, net... Read More →

Saturday September 9, 2017 11:38am - 12:11pm
ASLS Hazel Hall - Room 329

11:38am

The Devil is in the Details: Lessons from Indian Spectrum Auctions
Indian telecom growth, as in other countries largely driven by mobile, saw teledensity reach nearly 87% by 2016 over a population base of nearly 1.2 billion. As is the trend globally, spectrum has become a critical resource for further growth in the sector, especially with greater demand for data. In India, the initial slow growth was largely attributed to the Department of Telecommunications' (DoT) design of auctions and the mismatch of the auction outcomes with the market reality. The limited amount of spectrum made available for mobile services and high population densities created a spectrum crunch. Further, the potential for growth in the market and telecom policies that encouraged competition led to about 10-14 service providers per state, creating further pressure on spectrum. In addition, difficult exit rules and M&A guidelines reduced the scope for consolidation, creating operational and financial pressure for operators. These factors influenced the bidding for and acquisition of spectrum.

From 1995 to 2010, India largely used spectrum auctions, other than in 2007-08 when it adopted a first-come-first-served (FCFS) policy for 2G. In 2010, it adopted a sophisticated simultaneous multiple round ascending (SMRA) auction for 3G. Subsequently, a Supreme Court judgment in 2012 canceled all licenses awarded through FCFS and mandated that all spectrum had to be allocated through auctions. Subsequent auction designs have used SMRA for different bands.

The last auction was in 2016. It was a multi-band auction covering the 700 MHz, 800 MHz, 900 MHz, 1800 MHz, 2300 MHz, and 2500 MHz bands. The 700 MHz band, which is a valued band for 3G and 4G, saw no buyers, while the 2500 MHz band, which did not have a well-developed ecosystem, saw significant competition and was allocated.

The above raises issues about the efficacy of auction design. While the DoT accepted SMRA as an efficient and effective auction mechanism, it recognized that specific design elements such as the reserve price, withdrawal rules, stopping rules, and minimum bid increments have influenced bid outcomes. These parameters need to incorporate the context of the auction, such as the spectrum requirements of different bidders, expectations about future availability, competitors' bidding strategies, and whether a single band or multiple bands are on offer.

We plan to document the spectrum auction design, bidder strategies, and outcomes for the auction held in 2016. This auction provides a rich backdrop for the study, as it was held in an environment in which several prior auctions had produced poor outcomes in terms of participation, bid amounts, and the amount of spectrum made available, and in which new regulations regarding spectrum sharing and trading had become operational. The competitive environment had also changed significantly, with possibilities of future M&A.

We shall identify lessons from the analysis to help with subsequent auction designs. Our recommendations will also take into account developments in this space in other countries.

Methodology: We follow a case-based approach and document the auction using secondary sources of data.

Moderators
Presenter
RJ

Rekha Jain

IIM Ahmedabad


Saturday September 9, 2017 11:38am - 12:11pm
ASLS Hazel Hall - Room 221

11:38am

The Empirical Economics of Online Attention
In several markets, firms compete not for consumer expenditure but instead for consumer attention. We model and characterize how households allocate their scarce attention in arguably the largest market for attention: the Internet. Our characterization of household attention allocation operates along three dimensions: how much attention is allocated, where that attention is allocated, and how that attention is allocated. Using click-stream data for thousands of U.S. households, we assess if and how attention allocation on each dimension changed between 2008 and 2013, a time of large increases in online offerings. We identify vast and expected changes in where households allocate their attention (away from chat and news towards video and social media), and yet we simultaneously identify remarkable stability in how much attention is allocated and how it is allocated. Specifically, we identify (i) persistence in the elasticity of attention according to income and (ii) complete stability in the dispersion of attention across sites and in the intensity of attention within sites. We note that these findings may be more consistent with a standard model of optimal attention, appended with time slots and constrained minima, compared to a model without such constraints. We conclude that increasingly valuable offerings change where households go online, but not their general online attention patterns. This conclusion has important implications for competition and welfare in other markets for attention.
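
One simple way the "dispersion of attention across sites" could be quantified from clickstream data is a concentration index of time shares; the sketch below uses invented data and is not necessarily the measure the authors employ.

```python
# Hypothetical illustration: measuring how dispersed a household's online
# attention is across sites, using a Herfindahl-style concentration index of
# time shares. The data and the choice of index are assumptions of this sketch.
from collections import defaultdict

# (site, minutes) pairs for one household-month, invented for illustration.
clickstream = [("video.example", 620), ("social.example", 540),
               ("news.example", 90), ("mail.example", 150),
               ("shopping.example", 100)]

minutes_by_site = defaultdict(float)
for site, minutes in clickstream:
    minutes_by_site[site] += minutes

total = sum(minutes_by_site.values())
shares = [m / total for m in minutes_by_site.values()]

hhi = sum(s ** 2 for s in shares)   # 1/N (evenly spread) up to 1 (all time on one site)
effective_sites = 1.0 / hhi         # "numbers equivalent" of sites attended to
print(f"HHI={hhi:.3f}, effective number of sites={effective_sites:.1f}")
```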

Moderators
Presenter
JP

Jeffrey Prince

Indiana University

Author

Saturday September 9, 2017 11:38am - 12:12pm
ASLS Hazel - Room 120

11:38am

Crowd Sourcing Internet Governance: The Case of ICANN’s Strategy Panel on Multistakeholder Innovation
e-Participation platforms for policy deliberation have been sought to facilitate more inclusive discourse, consensus building, and effective engagement of the public. Since many internet governance deliberations are global, distributed, multistakeholder and often not formally binding, the promise of e-participation platforms is multiplied. Yet, the effectiveness of implementation of such platforms, both in traditional and multistakeholder policy deliberation, is up for debate. The results of such initiatives tend to be mixed and literature in the field has criticized excessive focus on technical solutions, highlighting the tension between expectations and actual outcomes. Previous research suggests that the utility and effectiveness of these platforms depends not only on their technical design features, but also on the dynamic interactions of technical choices with community or organizational practices, including “politics of participation” (i.e., the power relations among stakeholders and the dynamics of their interactions). We argue the importance of unpacking the interactions between technical capacities, and organizational practices and politics in emergent e-participation tactics for internet governance deliberations.

To better understand the tension between expectations and outcomes of e-participation tools in internet governance deliberations, and to unpack the practices and politics of participation, we offer a case study of ICANN’s use of the IdeaScale platform to crowdsource multistakeholder strategies between November 2013 and January 2014. To the best of our understanding this is one of the first empirical investigations of e-participation in internet governance. This is an ongoing project, building on our own previous work presented at CHI 2016 on the impacts of crowdsourcing platforms on inclusiveness, authority, and legitimacy of global internet governance multistakeholder processes.

Empirically, we draw on interviews with organizers and users of the ICANN IdeaScale implementation (currently underway), coupled with analysis of their activity on the platform. Conceptually, we draw on crowdsourcing and e-participation literature and apply Aitamurto and Landemore’s (2015) five design principles for crowdsourced policymaking processes and platforms to evaluate ICANN’s system-level processes and impacts of the IdeaScale platform design on participant engagement, deliberative dynamics, and process outcomes. Our paper will conclude with design recommendations for crowdsourcing processes and technical recommendations for e-participation platforms used within non-binding, multistakeholder policy deliberation forums.


Saturday September 9, 2017 11:38am - 12:12pm
ASLS Hazel Hall - Room 332

11:38am

Price-Cap Regulation of Firms that Supply Their Rivals
Motivated by the Federal Communications Commission’s Business Data Services proceeding, we study the effects of price-cap regulation in a market in which a vertically integrated upstream monopolist sells a requisite input to a downstream competitor. Business data services lines are dedicated high-capacity connections used by businesses and institutions to transmit voice and data traffic. Markets for business data services are often dominated by an incumbent local exchange carrier that sells wholesale service to a downstream rival. Because competition in the provision of business data services is limited in the United States, these services have a history of regulation by the FCC.

Using a theoretical model of firms that supply their rivals, we find that, in the absence of regulation, entry benefits both firms but may be detrimental to (downstream) consumers because the upstream monopolist can set a high input price that pushes downstream prices above the monopoly level. However, if a regulator imposes price caps that constrain the incumbent's upstream and downstream prices, consumers—as well as both firms—benefit from entry. Moreover, price cap regulation does not encourage the incumbent to attempt to foreclose potential entry; to the contrary, entry serves as the primary means by which the regulated incumbent earns above-zero profit.

Using dynamic extensions of the model, we also explore the concerns that price caps may induce incumbents to forgo cost-reducing investments or dampen entrants' incentives to self-provision the essential input. In particular, a regulated incumbent that relies on entry for its main source of profit may be less interested in marginal cost reductions for its own services than if the incumbent remained unregulated. As we show, this intuition turns out to be incomplete: under regulation, less of the gain from increased efficiency is passed on to consumers than without regulation, motivating a regulated incumbent to invest more.

In contrast, we find that, under most parameter values in our model, entrants are more likely to self-provision the essential input without regulation. On the one hand, entrants seeking to self-provision face a lower priced rival incumbent under regulation than without, and hence less of a rival response when they lower their own price. On the other hand, when the input price is not regulated, self-provision can dramatically reduce entrant costs. This latter effect usually dominates.

Thus, regulators considering the impact of price cap regulation in markets where firms supply their rivals must weigh both static and dynamic considerations. Specifically, regulators like the FCC must consider whether cost-reducing investment by incumbents or upstream market entry are more important for long term competition when determining whether or not to rely on price caps in new markets.

Presenter
AY

Aleksandr Yankelevich

Michigan State University

Author

Saturday September 9, 2017 11:38am - 12:12pm
ASLS Hazel Hall - Room 225

12:12pm

When Regulation Fills a Policy Gap: Toward Universal Broadband in the Remote North
Access to broadband is necessary to participate in the digital economy – for services such as online banking, ecommerce, government programs, education and training, telehealth, community and small business entrepreneurship. These services are particularly important for people in remote regions, where there may be no banks, physicians, colleges, or government offices. However, connectivity in these areas is generally much more limited than in urban and suburban areas, and prices for internet access including monthly fees and usage charges are typically significantly higher. While these conditions are commonplace in many parts of the developing world, they are also found in indigenous communities of the far North.

This paper analyzes a recent proceeding and decision by the Canadian regulator, the CRTC, which concluded that broadband is a universal service, and therefore broadband must be available to all Canadians, including those living in the remote North. This was a landmark proceeding not only in its outcome but also in the approach to participation and engagement with indigenous organizations and consumer representatives. The proceeding was also unusual in that it took deliberate steps to fill policy gaps that in Canada and many other countries would usually be the responsibility of a government ministry, rather than a regulator.

By expanding the definition of universal service from voice to include broadband, the Commission mandated that all residents, including those in the remote North, must have access to broadband. It also set performance requirements including speed and quality of service, and target dates to cover unserved and underserved populations.

However, as the Commission pointed out: “A country the size of Canada, with its varying geography and climate, faces unique challenges in providing similar broadband Internet access services for all Canadians.” It therefore established a new fund to extend and upgrade broadband infrastructure in rural and remote areas. In contrast to previous funds open only to incumbents, this new resource is to be open to all providers including indigenous and community organizations.

Unlike many countries, Canada has no formal national broadband plan, which would generally be the responsibility of its communications ministry. Recognizing this gap, the CRTC used the proceeding to put forward its own blueprint, reflected in the title change from “Review of basic telecommunications services” for the proceeding to “The path forward for Canada’s digital economy” for its decision following the proceeding.

The paper analyzes each of the major components of its decision, and the process through which it was determined. It also examines issues that remain to be addressed, including additional funding required for infrastructure investment and the issue of affordability, which was raised repeatedly by consumer representatives. However, the Commission rejected proposals including low income user subsidies (similar to the U.S. Lifeline program), ceilings on prices for data overages, and operational (as opposed to infrastructure construction) subsidies.

Several indigenous organizations made written submissions and testified at the in-person hearings. The paper examines the impact of these participants including recommendations by indigenous organizations that were adopted by the Commission and specific references to indigenous testimony in the decision.

The paper concludes with an analysis of lessons from this recent Canadian regulatory experience that are relevant for other countries attempting to expand broadband to remote, indigenous or developing regions.

Moderators
Presenter
HH

Heather Hudson

ISER, University of Alaska Anchorage


Saturday September 9, 2017 12:12pm - 12:45pm
ASLS Hazel - Room 120

12:12pm

CANCELLED - Compatibility and Interoperability in Mobile Phone-Based Banking Networks
Nick is ill. 

In many developing countries of Africa and Asia, cell phones are used (i) to transfer money across individuals, (ii) to securely self-transport money, and (iii) to save/store money. These banking networks ride on top of wireless telecommunications networks. Traditionally each banking network was tied to the network of a telecom carrier and transfers were available only within that carrier's network, making it incompatible with the banking networks of other carriers. In Tanzania, mobile banking under incompatibility was well established for a decade until the summer of 2016, when the second, third, and fourth largest carriers established full compatibility of their banking networks. Analyzing a comprehensive dataset of banking transactions provided by a large telecom carrier in Tanzania, this paper discusses pricing under compatibility and contrasts it with pricing under incompatibility. We analyze transaction termination fees in this environment of practically no regulation and assess the individual and collective incentives for compatibility, noting that the largest carrier has remained incompatible.


Saturday September 9, 2017 12:12pm - 12:45pm
ASLS Hazel Hall - Room 329

12:12pm

Changing Markets in Operating Systems; a Socio-Economic Analysis
This paper explores the character of the market for operating systems in order to reach a better understanding of its characteristics, the consequences of fragmentation, and the impact on the overall development of the internet and the digital economy. For this purpose we consider the effects on trade and innovation, and the significance for the architectures of networks in the digital economy.
The article includes a review of the various forms of market definition of software operating systems to understand their economic characteristics from a socio-technological view. From the early dominance of IBM’s OS/360 to UNIX-related systems and the disk operating systems [DOS] of IBM and Microsoft through to Apple’s Mac OS and Google systems [Chrome OS and Android], there has been a succession of dominant players.
We address the economic theory behind markets in platforms and its relationship with operating systems. Arguments are presented to describe three major areas where the operating systems market requires further analysis: a) the boundaries between the standard roles of consumption and production are blurred in the consideration of operating systems; b) novel concepts of ownership arise; and c) the decoupling between services and physical supports raises issues of control rather than ownership.
In the digital mobile environment, consumers do produce valuable services, or add value to the standard services sold to them. These generate information and data. These data become necessary for the actors operating in other layers of the production chain to add value to their services and products, and to generate brand-new services and applications. Thus, a novel situation occurs: the overall welfare of the system cannot be subdivided into consumer surplus and producer surplus; producers might appropriate some part of the overall welfare by becoming “consumers” themselves of the information and data generated by the (previously-labelled) consumers. We suggest that policy guidelines based on the standard industrial organization analysis are no longer quite so valid and legitimate. The concept of surplus changes meaning when the “consumption” side of the ecosystem can add value and generate new surplus to the “production side”.

In the digital mobile industry, most of the inputs used by consumers are not really owned by them. Most of the inputs (intended in terms of both services and goods) utilized by the end user cannot be employed by the latter at will, according to their own “utility function”. The concept of ownership in law and economics is defined by the condition that the “owner” has the right to exclude others from the use of their property and can control the way in which others can restrain their use.

The problem of control retraces standard issues covered by "vertical analysis" in competition policy (in terms of foreclosure, discrimination, and fair usage). The literature dealing with the problem of vertical restraints addresses how ownership in one layer of the chain affects the control of elements or modules in other layers of the vertical production chain, and therefore their usage. The way in which vertical restraints shift the rights of actors along the chain is a problem much less developed in the literature, and we are not aware of any work explicitly modelling and developing this issue.

Moderators
SW

Scott Wallsten

Technology Policy Institute

Presenter
SM

Silvia Monica Elaluf-Calderwood

Florida International University

Author
JL

Jonathan Liebenau

London School of Economics & Political Science (LSE) - Department of Management

Saturday September 9, 2017 12:12pm - 12:45pm
ASLS Hazel Hall - Room 332

12:12pm

An Economic Welfare Analysis of Shared Spectrum Use: A Case Study of LTE-U/LAA
LTE-U is an interesting case of spectrum sharing in that it is asymmetric, since the wireless service provider can utilize licensed and unlicensed spectrum, while non-subscribers can only utilize unlicensed spectrum. Furthermore, the use of spectrum by one party can impose a cost upon another party and, because the service provider can at times benefit from imposing such a cost, it can have the wrong incentive to access spectrum in the unlicensed band. In addition, unlicensed spectrum may currently be over-consumed in certain situations and, thus, may be inefficiently used. Finally, prices are not fully relied upon to guide the assignment of users to licensed and unlicensed spectrum since the unlicensed band serves as a common pool resource. Together these unique features of the wireless market contribute to the lively debate among interested parties regarding the welfare effects of a wireless service provider’s decision to adopt LTE-U. Using standard economic techniques, this paper presents a set of economic models designed to assess the welfare effects of LTE-U under different economic and technical conditions.

Presenter
Author

Saturday September 9, 2017 12:12pm - 12:45pm
ASLS Hazel Hall - Room 225

12:45pm

2:00pm

Towards the Successful Deployment of 5G in Europe: Two Contrasting Scenarios
This paper reports on research into policy and regulatory scenarios for the successful development and deployment of 5G in Europe, and possibly beyond. The starting point for the research is the combination of two overarching policy objectives:

(1) European leadership in the development and deployment of 5G; and

(2) moving beyond the mass consumer market to serve the specific needs of business users, the so-called vertical industries, such as automotive, health, agriculture, etc.

The research explores how these two objectives might be enabled through policy and regulatory action, using the scenario approach.

Since the success of GSM, introduced in 1991 and reaching its deployment peak in 2015 with 3.83 billion subscribers and 700 operators in 219 countries and territories, the question of European leadership in the development and deployment of mobile communications has been raised with each successive generation. European policy makers have a keen interest in the success of the next generation because ubiquitous and high-capacity electronic communication infrastructure is recognized as a cornerstone of economic development. This also applies to 5G, scheduled for introduction around 2020.

To develop the policy and regulatory options, the first part of this paper identifies, using historical analysis, the attributes that led to the success of GSM. The findings are compared with the developments around 3G and 4G. Subsequently, the paper investigates how the lessons learned can be transposed to the political and industrial context of 5G. This leads to the "Evolution" scenario as the baseline.

The second part of the paper questions whether the path towards the future is predetermined by previous generations and by the prevailing industry structure. It posits that there is indeed a fork in the road ahead that gives way to an alternative future, which is captured in the "Revolution" scenario. This fork in the road needs to be navigated by policy makers and regulators, as it will lead to different possible futures, whereby one outcome may be more desirable than the other.

The "Revolution" scenario represents a clear break with the trends underpinning the "Evolution" scenario. It exploits the opportunities of standardized APIs for service creation, enabled by network virtualization as part of 5G. These open APIs allow the market entry of a multitude of virtual mobile network operators (VMNOs), dedicated to serving particular industry verticals or economic sectors with tailored feature sets and qualities of service. As firms compete for end-users, they will compete to provide the best virtual mobile services as well. This is expected to result in a very dynamic wholesale market, one that unlocks a higher willingness to pay, which, through differentiation of network services, will flow through to incentivize 5G network investments.

The paper will describe the various dimensions of the two scenarios and elaborate the policy and regulatory actions required to enable each of the scenarios, addressing topics such as: retail market access; open and common APIs; net neutrality; liberalization of SIM usage; and multiple VMNOs on a single device.

Moderators
DG

David Gabel

Queens College

Presenter

Wolter Lemstra

Associate Professor, Nyenrode Business University
I am Associate Professor in Digital Strategy and Transformation at Nyenrode Business University, the Netherlands; Senior Research Fellow at the Department Technology, Policy & Management of the Delft University of Technology, The Netherlands; Research Fellow at CERRE, the Centre on... Read More →


Saturday September 9, 2017 2:00pm - 2:33pm
ASLS Hazel Hall - Room 329

2:00pm

Geographic Patterns and Socio-Economic Influences on Internet Use in U.S. States: A Spatial and Multivariate Analysis
Discourse and interest in the digital divide research community are steadily shifting beyond access and adoption to the utilization, impact, and outcomes of information and communications technologies (ICTs), particularly the internet. In the United States, studies and surveys conducted by the National Telecommunications and Information Administration (NTIA) indicate increases in internet use in every corner of the country over the last two decades. However, recent surveys on ICT use indicate significant disparities in dimensions of internet use. For example, Americans' use of the internet to pursue e-education, e-health, e-commerce, e-entertainment, and telecommuting has varied significantly, longitudinally as well as geographically. Additionally, internet use habits are rapidly expanding, providing new insights into the emerging internet of things, wearable technologies, and new forms of social media usage. As novel technologies and lifestyles emerge, analysis of new disparities and dimensions of the "usage digital divide" stemming from social, economic, societal, and environmental factors becomes important.

This research examines spatial clusters, geographic disparities, and socio-economic dimensions of existing and emerging dimensions of internet use among the 50 U.S. states. We adapt the Spatially Aware Technology Utilization Model (SATUM) for internet use by positing associations of 20 independent demographic, economic, infrastructural, affordability, innovation, societal openness, and social capital variables with 17 indicators of internet use spanning e-education, e-commerce, e-health, telecommuting, and emerging forms of internet use. Data on the 17 indicators of internet use are sourced from the July 2015 CPS Supplement on internet use from the U.S. Census. Data on traditional independent correlates are sourced from the same Supplement, the U.S. Census of Population, and the U.S. Economic Census, while data on societal openness, social capital, and infrastructure correlates are collected from George Mason University’s Mercatus Center, the FCC’s National Broadband Map initiative, and noted political scientist Robert Putnam’s publicly available data on civic engagement.

First, descriptive mapping provides important visual cues about patterns of internet use in U.S. states. Subsequently, K-means cluster analysis of multiple internet use-related factors is conducted to determine agglomerations of states that are most similar in patterns of internet use and outcomes. Next, statistically significant “hotspots” and “coldspots” of internet use and outcomes among U.S. states are identified, followed by spatial autocorrelation analysis of various dimensions of internet usage. A priori diagnosis of spatial autocorrelation is critical to understand, and possibly account for, the presence of spatial bias while examining social, economic, societal, and environmental underpinnings of internet usage. Regression residuals are mapped and examined for spatial autocorrelation.
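
The clustering and spatial-diagnostic steps described above can be illustrated with a minimal sketch (not the authors' code): random placeholder data stand in for the 17 state-level usage indicators and for a contiguity matrix, K-means groups similar states, and a hand-rolled global Moran's I checks one indicator for spatial autocorrelation.

```python
# Illustrative sketch only: placeholder data, not the study's indicators.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_states = 50
X = rng.random((n_states, 17))                               # stand-in for 17 internet-use indicators
W = (rng.random((n_states, n_states)) < 0.1).astype(float)   # stand-in contiguity matrix
np.fill_diagonal(W, 0)
W = np.maximum(W, W.T)                                       # symmetric neighbour relation

# K-means agglomeration of states with similar usage profiles
Xz = StandardScaler().fit_transform(X)
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Xz)

def morans_i(y, W):
    """Global Moran's I for one usage indicator y given spatial weights W."""
    y = np.asarray(y, dtype=float)
    z = y - y.mean()
    return (len(y) / W.sum()) * (W * np.outer(z, z)).sum() / (z @ z)

print("cluster sizes:", np.bincount(clusters))
print("Moran's I for first indicator:", round(morans_i(X[:, 0], W), 3))
```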

Systematic examination of rapidly evolving dimensions of internet use among U.S. states distinguishes this work. Novelties include thorough analysis of disparities stemming from geography, results showing socio-economic, infrastructural, affordability, civic engagement, and societal openness determinants of the internet “usage digital divide,” and longitudinal analysis of change dimensions. A methodological novelty is the diagnosis of spatial autocorrelation in internet use, largely ignored in the digital divide literature. Left undiagnosed, spatial autocorrelation can bias regression-based estimates of the associations between independent variables and internet usage. Finally, the findings of this work have critical policy implications at a time when expanding and stimulating greater variety and intensity of internet use and impacts are well recognized as aspirations of state and federal policies.

Moderators
Presenter
AS

Avijit Sarkar

University of Redlands

Author

Saturday September 9, 2017 2:00pm - 2:33pm
ASLS Hazel - Room 120

2:00pm

Uncertainty in the National Infrastructure Assessment of Mobile Telecommunications Infrastructure
The UK’s National Infrastructure Commission is undertaking the first ever National Infrastructure Assessment, of which telecommunications is a key component. The aim of this task is to ensure efficient and effective digital infrastructure delivery over the long term, the results of which will be used to direct both industry and government over the coming decades. However, taking a strategic long-term approach to the assessment of telecommunications infrastructure is a challenging endeavor due to rapid technological innovation in both the supply of, and demand for, digital services.

In this paper, the uncertainty associated with the National Infrastructure Assessment of digital communications infrastructure is explored in the UK, focusing specifically on issues pertaining to:

(i) uncertainty in future demand, and

(ii) ongoing convergence between sub-sectors (fixed, mobile, wireless and satellite).

These were the two key issues identified at The Future of Digital Communications workshop held at the University of Cambridge (UK) in February 2017. Currently, industry and government have very little information to direct them as to how these issues will affect the long-term performance of digital infrastructure. This paper not only quantifies the uncertainty in different national telecommunications strategies, but also quantifies the spatio-temporal dynamics of infrastructure roll-out under each scenario. This is vital information for policy makers seeking to understand disparities in the capacity and coverage of digital services over the long term (e.g. in broadband markets), and it helps in the early identification of areas of potential market failure (for which policy has traditionally been reactive rather than proactive).

The methodology applies the Cambridge Communications Assessment Model, which has been developed exclusively for the evaluation of national digital infrastructure strategies, over 2017-2030. The approach taken is to treat digital communications infrastructure as a system-of-systems which therefore includes the fixed, mobile, wireless and satellite sectors (hence, enabling the impact of convergence to be assessed). Demographic and economic forecast data indicate the total number of households and businesses annually, and an estimate of the penetration rate is calculated using this information. Network infrastructure data is then collated to indicate current capacity and coverage, with cost information then being applied to estimate viability of incremental infrastructure improvement. Existing annual capital investment is used to constrain roll-out of new infrastructure.
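
The roll-out logic described in this paragraph (estimate demand, score candidate areas by viability, and build until the annual capital budget is exhausted) can be illustrated with a toy greedy model; the area names and figures below are assumptions, not outputs of the Cambridge Communications Assessment Model.

```python
# Illustrative sketch: rank candidate area upgrades by viability and roll out
# until the annual capital budget is exhausted.  All numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Area:
    name: str
    premises: int          # forecast households + businesses
    penetration: float     # estimated take-up rate
    arpu: float            # annual revenue per connected premise
    upgrade_cost: float    # capex to upgrade this area

def annual_revenue(a: Area) -> float:
    return a.premises * a.penetration * a.arpu

def plan_rollout(areas, budget):
    """Greedy roll-out: the most viable areas (revenue per unit of capex) go first."""
    ranked = sorted(areas, key=lambda a: annual_revenue(a) / a.upgrade_cost, reverse=True)
    built, spent = [], 0.0
    for a in ranked:
        if spent + a.upgrade_cost <= budget:
            built.append(a.name)
            spent += a.upgrade_cost
    return built, spent

areas = [
    Area("dense urban", 50_000, 0.8, 240.0, 4_000_000),
    Area("suburban",    20_000, 0.7, 240.0, 3_000_000),
    Area("rural",        5_000, 0.6, 240.0, 6_000_000),
]
print(plan_rollout(areas, budget=8_000_000))
```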

The results of this analysis quantify for policy-makers at the National Infrastructure Commission the uncertainty associated with:

(i) future demand, and

(ii) ongoing convergence in digital services.

It finds that more emphasis should be placed on how demand for digital infrastructure affects its spatio-temporal roll-out, given viability constraints. The results conclude that while national infrastructure assessment is a valid method for thinking more strategically about long-term infrastructure needs, the inherent uncertainty associated with this particular sector must be recognized, as it has not been adequately addressed to date at the policy level in the UK. Rapid technological innovation affects our ability to accurately forecast long-term roll-out, making it essential that this uncertainty is rigorously quantified and visualized to support policy decision-making.

Moderators

Trey Hanbury

Partner, Hogan Lovells

Presenter
EO

Edward Oughton

University of Cambridge


Saturday September 9, 2017 2:00pm - 2:33pm
ASLS Hazel Hall - Room 332

2:00pm

Networked Privacy and Security
Much of the existing analysis of privacy has sought to clarify the difference between data protection and more fundamental privacy concepts, in particular by incorporating explicit ethical aspects such as: voluntary participation; clear and optimised value; meaningful and informed consent; respect for privacy, identity and confidentiality preferences; ‘ethics by design’ to maintain integrity, quality and transparency; and clarity regarding specific interests. These considerations are all relational in nature, so attention has naturally begun to shift from data protection to data governance and from individual privacy to relational privity. This approach is already bearing concrete fruit in contexts such as data science and the design and assessment of cyberphysical systems (including the IoT). But it is still relatively insensitive to the structure of these relationships.

The objective of this research is to apply methods drawn from network game theory to the understanding of information access and utilisation structures, with an eye ultimately to replacing today’s crude privacy, data protection and cybersecurity rules – which tend to pay attention only to the individual level (e.g. European concepts of data protection as a fundamental right of individuals), pairwise notions (confidentiality rules) and entire groups (security rules and ‘public information’ concepts) – with something that more accurately reflects the importance and dynamics of structures as they have emerged in practice. The reason for using network game theory is that it replaces: i) the ‘big group’ of non-cooperative games (where all the players ‘play together’) with explicit structures that determine who plays with whom; and ii) the ‘coalitions’ of cooperative game theory (where membership in a coalition is reflexive, symmetric, and transitive) with a specific geometry of (binary or higher) interactions.

To apply these tools to privacy and security, it is necessary to clarify the nodes and links that make up the network. It is already clear from the work on relational privacy that informational or data privacy can be straightforwardly represented; people are the nodes, and access to or flows of their personal information determine the links. One contribution of the proposed paper is that privacy in various senses is also given an explicit topological structure. Access and permitted actions define proximity, and explicit contacts and contracts are supplemented by shared norms. Thus people may be ‘close’ either in the sense that they are less private or secure from each other than from others, or by having similar views of privacy and security – and thus similar responses to unexpected developments, willingness to support changes in law and availability to enter new relationships.

This allows: i) a characterisation of outcomes and the impact of rules and norms for different structures; ii) the analysis of models of the evolution of privacy, privity and security conventions along the lines of behavioural conventions (in particular that ‘slow-growth’ topologies favour rapid convergence to risk-dominant outcomes); and iii) modelling the evolution of networks along pairwise stability lines. The original wrinkle is that information shared (or withheld) changes the payoffs and alters higher-order beliefs embodying reputations or trust relations.
While standard network game models have fixed strategies and payoffs (the evolution-of-conventions model) or fixed notions of what each player gets in each network structure (the structural evolution model), the network privacy model allows these to change as information is shared and used.
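
As one concrete reading of the pairwise-stability modelling mentioned above, the following minimal sketch (not the authors' model) checks whether an information-sharing network is pairwise stable under an assumed payoff in which each link brings a benefit but leaks some privacy; the payoff function and names are purely illustrative.

```python
# Illustrative sketch: pairwise stability of an information-sharing network
# under a simple, assumed payoff (benefit per link minus a privacy cost per link).
from itertools import combinations

def payoff(node, links, value=1.0, privacy_cost=0.6):
    """Assumed payoff: each link yields `value` but leaks `privacy_cost`."""
    degree = sum(1 for l in links if node in l)
    return degree * (value - privacy_cost)

def pairwise_stable(nodes, links):
    """Jackson-Wolinsky style check: no one gains by cutting a link, and no
    unlinked pair can add a link that helps one side without hurting the other."""
    for link in links:
        i, j = link
        without = links - {link}
        if payoff(i, without) > payoff(i, links) or payoff(j, without) > payoff(j, links):
            return False
    for i, j in combinations(nodes, 2):
        link = frozenset((i, j))
        if link in links:
            continue
        with_link = links | {link}
        gain_i = payoff(i, with_link) - payoff(i, links)
        gain_j = payoff(j, with_link) - payoff(j, links)
        if (gain_i > 0 and gain_j >= 0) or (gain_j > 0 and gain_i >= 0):
            return False
    return True

nodes = ["alice", "bob", "carol"]
links = {frozenset(("alice", "bob"))}
print(pairwise_stable(nodes, links))   # False: unlinked pairs would both gain a link
```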

Moderators
JS

Jesse Sowell

Senior Advisor, Vice Chair of GDC Directing Outreach, Cybersecurity Fellow, M3AAWG / Stanford

Presenter

Jonathan Cave

University of Warwick
Economist working on regulation, policy impact, privacy, cybersecurity, etc. Turing Fellow working on digital ethics, deep learning, and algorithmic bias/collusion. Economist member of the UK Regulatory Policy Committee, scrutinising impact assessments and working on Better Regulation.


Saturday September 9, 2017 2:00pm - 2:33pm
ASLS Hazel Hall - Room 225

2:00pm

The Challenge of Internet and Social Media on Shield Law Legislation: Four Dimensions of Reporter's Privilege
In spite of a commonly shared understanding of the benefits of shield laws as a vital part of journalism’s watchdog function in democratic societies, there is a great deal of variation in how shield laws are regulated in different countries, states, and territories. The scope of shield law, the persons covered, the interpretive power of judges, and the exceptions to the main rule vary considerably among jurisdictions. The differences reflect not only details of stipulation but also fundamental principles behind the objectives of the legislature. One of the main dividing factors is the outlook on Internet-based content such as blogs, podcasts, or websites like WikiLeaks. It is no longer at all clear who is a journalist and what kind of activity can be defined as journalism.

The objective of this paper is to present an analytic classification of existing shield laws based on international comparison. The classification is based on an analysis of the contents of shield law/reporter’s privilege legislation in Australia and its territories, Finland, Germany, Norway, Sweden, and the USA (states).

There are plenty of analyses of shield laws at the national level, but international comparisons – especially across language barriers – are rare. The research frame and the results of this study are unique. The classification helps to identify the key differences in the legislation and hopefully promotes discussion about the limits and possibilities of the different approaches.

As a result of the comparison, the approaches of different jurisdictions to shield law are divided here into four categories: the affiliation approach, the function approach, the intention approach, and the universal approach. The main characteristics of the categories are as follows.

The affiliation approach limits the realm of privilege to professional journalists working in traditional news organizations (print, radio, television). In the function approach, the focus is not on one’s affiliation with a media organization but on whether the person functions as a journalist. This extends the reporter’s privilege to freelancers and non-traditional media such as websites. Like the function approach, the intent approach has been applied in cases where the realm of shield law has been extended beyond journalists of the traditional news media. The intent approach asks whether the person had the intent to disseminate to the public the information she/he obtained through investigation. Finally, the universal approach refers to shield law legislation which guarantees the right to protect the confidentiality of information sources to everyone who has drawn up a message or delivered it to the public, personal blogs included. There are examples of all these approaches in the legislation analyzed for this study.

Saturday September 9, 2017 2:00pm - 2:33pm
ASLS Hazel Hall - Room 221

2:33pm

Understanding the Trend to Mobile-Only for Internet Connections: A Decomposition Analysis
Household internet access via a mobile-only connection increased from 8.86% in 2011 to 20.00% in 2015, more than doubling in only four years. Understanding the driving factors behind this trend will be important for future iterations of broadband policy. The drivers of the diffusion of broadband access are well documented, but little has been done on the topic of the mobile-only connection. Many demographics have embraced mobile-only connections over this period, including Hispanics, Asians, older Americans, and Americans in non-metro areas. An open question, however, is which relationships are driving this shift to mobile. For example, is the shifting relationship between age and a mobile-only connection a more important driver than that for race or non-metro status?

Data for this paper comes from the Current Population Survey (CPS) supplemental survey on Computer and Internet Use for July 2011, 2013, and 2015. Beginning in July 2011, the CPS began including a mobile-only connection as an explicit option when asking how the household connects to the internet. Combining this information with demographic variables from the survey allows for examination of how the relationships have changed over this four-year period.

To answer this question, the propensity of an individual to adopt the internet through a mobile-only connection is modeled using a logistic regression for each of the three years. Following these regressions, a non-linear Blinder-Oaxaca decomposition is used to determine which of the shifting relationships are most responsible for the trend to a mobile-only connection over the 2011 to 2015 period. Changes due to shifts in the demographic characteristics of the population themselves are not likely to be responsible, since the nationally representative CPS data did not change very significantly across the years. The results bear this out, with changing characteristics accounting for less than 1% of the full 11.14% increase in mobile-only adoption. However, behavioral relationships across demographics did change dramatically, as specific groups were much more likely to be in the mobile-only category in 2015. The leading behavioral relationships impacting the trend are those associated with age (50.55%), race/ethnic background (4.75%), and non-metro status (1.88%), indicating that these demographic groups are becoming more willing to adopt the internet via a mobile-only connection. By understanding which demographics are driving the shift to mobile-only, programs focused on bridging the digital divide (such as Lifeline) can be more focused in their efforts.
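
The decomposition logic can be illustrated with a simplified two-fold version of the non-linear Blinder-Oaxaca approach: fit a logit for each year, then split the change in mean predicted mobile-only adoption into a characteristics (composition) part and a coefficients (behavioral) part. The data and variable names below are simulated placeholders, not CPS records.

```python
# Illustrative sketch of the decomposition logic on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def fake_year(n, beta):
    """Simulate CPS-style data: standardized age, non-metro flag, race/ethnicity flag."""
    X = sm.add_constant(np.column_stack([
        rng.normal(0, 1, n),        # age (standardized)
        rng.integers(0, 2, n),      # non-metro
        rng.integers(0, 2, n),      # race/ethnicity indicator
    ]))
    p = 1 / (1 + np.exp(-X @ beta))
    return X, rng.binomial(1, p)

X11, y11 = fake_year(5000, np.array([-2.3, 0.1, 0.2, 0.3]))   # 2011-like behavior
X15, y15 = fake_year(5000, np.array([-1.4, 0.5, 0.3, 0.4]))   # 2015-like behavior

b11 = sm.Logit(y11, X11).fit(disp=0).params
b15 = sm.Logit(y15, X15).fit(disp=0).params

def mean_p(X, b):
    return (1 / (1 + np.exp(-X @ b))).mean()

total = mean_p(X15, b15) - mean_p(X11, b11)
characteristics = mean_p(X15, b11) - mean_p(X11, b11)   # composition effect
coefficients = mean_p(X15, b15) - mean_p(X15, b11)      # behavioral effect
print(f"total {total:.3f} = characteristics {characteristics:.3f} + coefficients {coefficients:.3f}")
```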

Moderators
DG

David Gabel

Queens College

Presenter
BW

Brian Whitacre

Oklahoma State University


Saturday September 9, 2017 2:33pm - 3:05pm
ASLS Hazel Hall - Room 329

2:33pm

Distinguishing Bandwidth and Latency in Household’s Willingness-To-Pay for Broadband Internet Speed
We measure households’ willingness-to-pay for increases in home broadband Internet connection speed using data from a nationally administered discrete choice survey. We characterize Internet speed along two dimensions – bandwidth and latency. We administered two different surveys; both have variation in price, data caps, and download and upload bandwidth, but only one describes, and has variation in, latency. These surveys allow us to measure tradeoffs between bandwidth and other connectivity features such as price and data caps, and perhaps most notably, provide the only empirical evidence to date of tradeoffs between bandwidth and latency. The presence/absence of latency in the two survey versions also allows us to assess if and how valuation of bandwidth changes when latency is considered. The information this research generates is necessary for making informed broadband policy decisions. For example, in designing its reverse auction for subsidizing broadband coverage, the Federal Communications Commission should know how much consumers value incremental improvements in bandwidth, latency, and usage allowance, as well as tradeoffs among the three.

To estimate consumer valuation of these connectivity characteristics, we conduct discrete choice experiments within our survey. Specifically, we give intuitive and detailed explanations of connectivity features and ask respondents to choose among alternative home Internet connection options. Respondents also compare alternatives we construct with their current home connection. The choices we generate employ an efficient survey design that elicits realistic responses and follows established practice in the literature. As of this writing, we have a finalized survey design based on market research and meetings with several focus groups assembled by a professional focus-group firm in Indianapolis. We are coding the survey for online dissemination by a nationally recognized online survey firm (ResearchNow), and expect to distribute the survey within the next month. The results and initial draft will be assembled no later than June 2017.
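
Although estimation is still forthcoming, the willingness-to-pay calculation this kind of survey supports can be sketched as follows: with two alternatives per choice task, a conditional logit reduces to a binary logit on attribute differences, and WTP for an attribute is the negative ratio of its coefficient to the price coefficient. All numbers below are simulated assumptions, not survey results.

```python
# Illustrative sketch of WTP recovery from simulated discrete-choice data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 4000
true_beta = np.array([-0.05, 0.02, -0.8])   # price ($), bandwidth (Mbps), latency (assumed units)

# Attribute differences (option A minus option B) for each choice task
dX = np.column_stack([
    rng.uniform(-40, 40, n),    # price difference
    rng.uniform(-100, 100, n),  # bandwidth difference
    rng.uniform(-1, 1, n),      # latency difference
])
p_choose_A = 1 / (1 + np.exp(-dX @ true_beta))
chose_A = rng.binomial(1, p_choose_A)

beta = sm.Logit(chose_A, dX).fit(disp=0).params   # no constant: pure attribute trade-offs
wtp_bandwidth = -beta[1] / beta[0]                # $ per extra Mbps
wtp_latency_cut = beta[2] / beta[0]               # $ a household would pay to cut latency by one unit
print(f"WTP per Mbps: ${wtp_bandwidth:.2f}; WTP to reduce latency one unit: ${wtp_latency_cut:.2f}")
```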

Moderators
Presenter
YL

Yu-Hsin Liu

Kelley School of Business, Indiana University

Author
JP

Jeffrey Prince

Indiana University
SW

Scott Wallsten

Technology Policy Institute

Saturday September 9, 2017 2:33pm - 3:05pm
ASLS Hazel - Room 120

2:33pm

Limiting the Market for Information as a Tool of Governance: Evidence from Russia
This paper presents a novel measure of subtle government intervention in the news market achieved by throttling the Internet. In countries where the news media is highly regulated and censored, the free distribution of information (including audio and visual imagery) over the Internet is often seen as a threat to the legitimacy of the ruling regime. This study compares electoral outcomes at the polling-station level between the Russian presidential election of March 2012 and the parliamentary election held three months earlier in December 2011. Two groups of electoral regions are compared: regions that experienced internet censorship during the presidential election but not the parliamentary election, and regions that maintained a good internet connection without interference during both elections. Internet censorship is identified using randomised internet probing data at a resolution of 15-minute intervals for up to a year before the election. Using a difference-in-differences design, internet throttling is found to increase the government candidate’s vote share by an average of 3.2 percentage points. Results are robust to different specifications, and electoral controls are used to account for the possibility of vote rigging.
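
The identification strategy can be illustrated with a minimal difference-in-differences sketch on simulated data: vote share is regressed on a throttled-region indicator, a presidential-election indicator, and their interaction, whose coefficient is the DiD estimate. None of the numbers below come from the paper's data.

```python
# Illustrative DiD sketch on simulated polling-station observations.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "throttled": rng.integers(0, 2, n),       # region censored at the presidential election
    "presidential": rng.integers(0, 2, n),    # 1 = March 2012, 0 = December 2011
})
df["vote_share"] = (
    45 + 2 * df["throttled"] + 1.5 * df["presidential"]
    + 3.2 * df["throttled"] * df["presidential"]   # assumed treatment effect
    + rng.normal(0, 5, n)
)

did = smf.ols("vote_share ~ throttled * presidential", data=df).fit()
print(did.params["throttled:presidential"])        # recovers roughly 3.2
```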

Moderators

Trey Hanbury

Partner, Hogan Lovells

Presenter
KA

Klaus Ackermann

University of Chicago


Saturday September 9, 2017 2:33pm - 3:05pm
ASLS Hazel Hall - Room 332

2:33pm

Privacy, Information Acquisition, and Market Competition
This paper analyzes how personal information affects market outcomes in a two-sided market where sellers target advertisements to individuals who have varying privacy concerns. I focus on how a market entrant that has worse targeting technology than an incumbent is disproportionately affected by a lack of information. I show that an entrant always purchases data to overcome its initial targeting disadvantage, whereas the incumbent only does so when consumers are relatively privacy-sensitive. When the incumbent also buys data, the entrant suffers from lower market share. With regard to data-driven vertical integration between the platform and a seller, the platform and incumbent will merge if consumers become more privacy-sensitive, and they will always prevent the unaffiliated entrant from obtaining data access. Overall, an entrant is disproportionately affected by consumers' privacy concerns. The welfare analysis shows that privacy concerns and the resulting market outcomes lower consumer surplus and social welfare. Therefore, individually optimal decisions on data disclosure might not be socially optimal when aggregated.

Moderators
JS

Jesse Sowell

Senior Advisor, Vice Chair of GDC Directing Outreach, Cybersecurity Fellow, M3AAWG / Stanford

Presenter
SJ

Soo Jin Kim

Michigan State University


Saturday September 9, 2017 2:33pm - 3:05pm
ASLS Hazel Hall - Room 225

2:33pm

Siri, Who's the Boss? For Whom Do Intelligent Agents Work?
Intelligent agents are already here. They exist not only in our smartphones (Apple’s Siri), but increasingly, in our homes (Amazon’s Alexa) and vehicles – consider the workings of the semi-self-driving Tesla. Not too long ago, the view that computers and computer programs were mere tools of the person using them was seen as incontestable. But increasingly, policymakers are confronting the possibility that computer programs may develop the capacity to act as agents in the legal sense.

This Article confronts three different established paradigms by which to answer a potentially very important legal question: For whom do intelligent agents work? The answer to this question may affect liability under a variety of legal regimes, including contract, tort, and competition law – and looking forward, as the capacity for agency increases, labor and anti-discrimination law. First, contract law might answer that intelligent agents’ true masters are defined by their terms of service – but a review of these agreements reveals key contract formation problems. Second, agency law might suggest that users – who chiefly direct these intelligent agents – are their masters, though current agency law’s fixation with software as a “mere tool” of its user oversimplifies the relationship. Finally, competition law and its “single entity” doctrine focus less on form than on function; however, intelligent agents may perform both economic and noneconomic functions that might require more varied treatment than competition law provides. As a result, this Article concludes that, initially, a sliding scale based on the degree of economic versus non-economic function should be used to determine the standard to be applied to decide for whom intelligent agents work.

Presenter
GS

Gigi Sohn

Georgetown University



Saturday September 9, 2017 2:33pm - 3:05pm
ASLS Hazel Hall - Room 221

3:05pm

The Evolution of the Internet Interconnection Ecosystem: A Peering Policy Survey
In 2009, William Norton (cofounder of the IXP Equinix and author of the Internet Peering Playbook) conducted a survey of peering policies. A “peering policy” is an expression by a network of inclination to enter into settlement free peering arrangements. Norton’s objective was to foster peering by demonstrating common benchmarks used by industry participants and encouraging newer entrants to expand the value of their networks through peering. Increasing interest in interconnection is good, after all, for the business plans of those who facilitate interconnection: Internet exchange points (IXPs) such as Equinix.

Peering policies historically were proxy measurements for when networks were “peers” (i.e., roughly equal networks in terms of geographic reach, traffic, service, or size). When networks were peers and peering policies were met, networks would enter into settlement free peering arrangements in which each paid its own costs to reach the interconnection point, but neither network charged the other based on the exchange of traffic. Peering increased the reach of both networks and increased their value through network effects (peering also improved network performance, increased resiliency, and has even been reported to result in increased levels of happiness among network engineers).

The interconnection market evolved. No longer is traffic mostly exchanged between two tier 1 backbone networks meeting in the core of the Internet. Now, content has moved closer to eyeballs, with CDNs locating servers at the gateways of broadband Internet access service providers (at regional IXPs), and backbone networks becoming feeder networks for CDN and cloud services. Broadband Internet access service providers have evolved from paying transit to receive Internet traffic to receiving paid peering fees from content providers for access to eyeballs. Large broadband Internet access service providers, through mergers and buildout, have horizontally diversified their business plans to include backbone services and CDNs (they have also vertically diversified). Backbone providers have diversified to provide CDN and enterprise access services. It is increasingly difficult to identify two networks which would meet traditional proxy benchmarks of being peer “like networks.”

This paper engages in a new survey of online published peering policies (and similar online data published in peeringDB) to identify how the market evolution has impacted peering policies. This work will generate a baseline of normative behavior and contrast outlier policies. Based on initial work, the following picture emerges: content networks have the most open peering policies while broadband access services have the most restrictive; peering policies generally require geographic diversity of interconnection at 3 to 4 major peering cities; broadband access services require balanced traffic ratios within a 2:1 range whereas content networks have no traffic ratio requirements; broadband access service policies have localization requirements; and bit-mile routing (a proposal of Level 3) is required only by Level 3’s peering policy.
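
The kinds of benchmarks these policies encode can be illustrated with a small sketch. The thresholds (presence in at least three major peering cities, traffic ratio within 2:1) follow the text above, but the candidate network and its figures are hypothetical.

```python
# Illustrative sketch of a peering-policy benchmark check; thresholds and the
# candidate network below are examples, not any specific network's policy.
def meets_policy(candidate, min_cities=3, max_ratio=2.0):
    cities_ok = len(candidate["peering_cities"]) >= min_cities
    inbound, outbound = candidate["traffic_in_gbps"], candidate["traffic_out_gbps"]
    ratio = max(inbound, outbound) / max(min(inbound, outbound), 1e-9)
    return cities_ok and ratio <= max_ratio

content_network = {
    "peering_cities": {"Ashburn", "Chicago", "Dallas", "San Jose"},
    "traffic_in_gbps": 20,
    "traffic_out_gbps": 180,     # content-heavy: highly asymmetric traffic
}
print(meets_policy(content_network))   # False under a 2:1 ratio requirement
```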

Moderators
DG

David Gabel

Queens College

Presenter
RC

Robert Cannon

Senior Counsel, FCC


Saturday September 9, 2017 3:05pm - 3:40pm
ASLS Hazel Hall - Room 329

3:05pm

Race and Digital Inequalities: Policy Implications
Objectives
As Internet use rises in the United States, Pew Research Center reports that only 12% of Americans were not online in 2016 [1]. Those offline are disproportionately from groups with lower incomes, lower educational qualifications, as well as from rural communities and communities of color. Distressed urban neighborhoods are disproportionately digitally excluded. In Detroit, for example, 63% of households with incomes below $35,000 did not have Internet access at home in 2015 [2]. And while any Internet access can be deemed better than none, only 73% of Americans had access to broadband at home. The proportion of households with broadband is considerably higher among White Americans (78%) than Hispanics (58%) or African Americans (65%), as well as among households with higher incomes and higher educational qualifications. In this paper, we will examine digital inequalities and intersectionality using secondary survey data and primary qualitative data. While race has previously been investigated as one of many factors that have an impact on digital inequalities, intersectionality has rarely been the focus of digital inequality studies.

Methods
To shed light on this topic, we will analyze secondary survey data from the National Telecommunications and Information Administration (NTIA [3]) and the Pew Research Center [4] to provide a broad overview of digital inequalities in the US. We will also conduct an analysis of county-level Form 477 data [5] for a number of US cities. In addition, we will use primary qualitative data from a recent study of community-based organizations, across these same cities, that are working to promote digital inclusion, as case studies that highlight the intersectionality of digital divides. In addition to socio-economic factors — such as age, education, and income — digital inequalities are also highly impacted by race. This characteristic is significantly more pronounced in the US than in other highly developed countries and should be investigated to further understand how race intersects with other identities.

Novelty
According to a recent report from the Free Press, race is a strong factor explaining digital inequalities, even when controlling for income and education. At the same time, systemic discrimination in the US means that non-White communities, particularly in urban areas, are more likely to have lower incomes and lower educational qualifications, which exacerbates inequalities, both online and offline. The combination of quantitative and qualitative data will enable a deeper analysis of the intersectionality of these factors.

Relevance
As digital inequalities persist — especially in distressed urban areas — the intersectionality of race, gender, income, education, and other factors is under-researched in digital inequality studies. Results from this study will have an impact on potential policies that aim to tackle these inequalities. Policies that work for more affluent and predominantly white rural areas, for example, are unlikely to have a positive impact in distressed urban areas. Analyzing the intersectionality of digital inequalities will enable the formulation of policy recommendations that are tailored to the specific contexts in which these inequalities occur.

[1] See http://www.pewinternet.org/fact-sheet/internet-broadband/.
[2] See http://www.digitalinclusionalliance.org/blog/2015/9/20/worst-connected-cities-2014.
[3] See https://www.ntia.doc.gov/page/download-digital-nation-datasets.
[4] See http://www.pewinternet.org/datasets/.
[5] See https://www.fcc.gov/general/form-477-county-data-internet-access-services.

Moderators
Presenter
Author

Colin Rhinesmith

Assistant Professor of Library and Information Science, Simmons College
Colin Rhinesmith is an assistant professor in the School of Library and Information Science at Simmons College and a faculty associate with the Berkman Klein Center for Internet & Society at Harvard University.

Saturday September 9, 2017 3:05pm - 3:40pm
ASLS Hazel - Room 120

3:05pm

Business Data Services after the 1996 Act: Structure, Conduct, Performance in the Core of the Digital Communications Network
Business data services (BDS) have been growing at almost 15% per year for a decade and a half, driven by the fact that high capacity, high quality, always-on connections are vital to a wide range of businesses and economic activities. The affected services include not only communications services – like mobile, broadband and digital – but all forms of high capacity connections, ubiquitous networks such as ATMs or gas stations, and the evolving internet of things.

The ocean of data coursing through the digital network must become a stream directed to each individual consumer. The point at which this takes place is the new chokepoint in the communications network.

This paper reviews the data gathered by the FCC, which shows that the BDS market is one of the most concentrated markets in the entire digital communications sector (with CR4 values close to 100% and HHI indices in the range of 6000 to 7000). The structure-conduct-performance paradigm frames the origins, extent, and implications of the current performance of a near-monopoly and the future prospects for competition in the BDS market. It shows that the anticompetitive behaviors that economic theory expects of firms with this much market power are well supported by the FCC data. The problem is clear; the solution is difficult and complex. The paper reviews the proposed remedies, ranging from the deregulatory proposals of the incumbents, to the partial reregulation scheme negotiated by some incumbents and competitors, to the full reregulation approach supported by others.
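
For reference, the concentration measures cited above can be reproduced with simple arithmetic; the revenue shares below are hypothetical, chosen only to show how CR4 values near 100% and an HHI in the 6000-7000 range arise.

```python
# Illustrative arithmetic for CR4 and HHI with hypothetical market shares.
def cr4(shares):
    return sum(sorted(shares, reverse=True)[:4])

def hhi(shares):
    return sum(s ** 2 for s in shares)   # shares in percentage points

shares = [80, 12, 5, 2, 1]               # hypothetical BDS revenue shares (%)
print("CR4:", cr4(shares))               # 99
print("HHI:", hhi(shares))               # 6400 + 144 + 25 + 4 + 1 = 6574
```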

Moderators

Trey Hanbury

Partner, Hogan Lovells

Presenter
MC

Mark Cooper

Consumer Federation of America


Saturday September 9, 2017 3:05pm - 3:40pm
ASLS Hazel Hall - Room 332

3:05pm

Smile for the Camera: Privacy and Policy Implications of Emotion AI
We are biologically programmed to publicly display emotions as social cues and involuntary physiological reflexes: grimaces of disgust alert others to poisonous food, pursed lips and furrowed brows warn of mounting aggression, and spontaneous smiles relay our joy and friendship. Though designed to be public under evolutionary pressure, these signals were only seen within a few feet of our compatriots — purposefully fleeting, fuzzy in definition, and rooted within the immediate and proximate social context.

The application of artificial intelligence (AI) to visual images for emotional analysis obliterates the natural subjectivity and contextual dependence of our facial displays. This technology may be easily deployed in numerous contexts by diverse actors for purposes ranging from nefarious to socially assistive — like proposed autism therapies. Emotion AI places itself as an algorithmic lens on our digital artifacts and real-time interactions, creating the illusion of a new, objective class of data: our emotional and mental states. Building upon a rich network of existing public photographs — as well as fresh feeds from surveillance footage or smart phone cameras — these emotion algorithms require no additional infrastructure or improvements in image quality.

Privacy and security implications stemming from the collection of emotional surveillance are unprecedented — especially when taken alongside physiological biosignals (e.g., heart rate or body temperature). Emotion AI also presents new methods to manipulate individuals, whether by targeting political propaganda or by fishing for passwords based on micro-reactions. The lack of transparency or notice about these practices makes public inquiry unlikely, if not impossible.

To better understand the risks and threat scenarios associated with emotional AI, we examine three distinct technology scenarios: 1) retroactive use on public social media photos; 2) real-time use on adaptive advertisements, including political ads; 3) mass surveillance on people in public.

Based on these three technically plausible scenarios, we illustrate how the data collection and use of emotional AI data fall outside of existing privacy legal frameworks in the U.S. and in the E.U. For instance, the comprehensive EU General Data Protection Regulation only restricts data that are identifying and thus considered biometric. Many risks associated with emotional AI do not require individual identification, such as adaptive marketing or screening at an international border. Emotional data are also not currently considered health information, but they could relay sensitive information about the internal mental state of an individual — especially when recorded over time.

Our research points to the unique privacy and security implications of emotion AI technology, and the impact it may have on both communities and individuals. Based on our assessment of analogous privacy laws and regulations, we illustrate the ways emotional data could cause harm even when conducted in accordance with EU and US laws. We then highlight possible elements of these laws that could be restructured to cover these threat scenarios. Given the challenges in controlling the flow of these data, we call for the development of policy remedies in response to outlined emotional intelligence threat models.

Moderators
JS

Jesse Sowell

Senior Advisor, Vice Chair of GDC Directing Outreach, Cybersecurity Fellow, M3AAWG / Stanford

Presenter
ES

Elaine Sedenberg

UC Berkeley

Author

Saturday September 9, 2017 3:05pm - 3:40pm
ASLS Hazel Hall - Room 225

3:05pm

What If More Speech Is No Longer the Solution? First Amendment Theory Meets Fake News and the Filter Bubble
A central tenet of First Amendment theory is that more speech is an effective remedy against false speech. This counterspeech doctrine was first explicitly articulated by Justice Louis Brandeis in Whitney v. California (1927), in which he wrote, “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.” Since then, the effectiveness of counterspeech has become integral to most conceptualizations of a functioning marketplace of ideas, in which direct government regulation of speech is minimized in favor of an open and competitive speech environment in which ideas are free to circulate, and in which truthful speech is presumed to be inherently capable of winning out over false speech.

This paper seeks to unpack the assumptions about the dynamics of the production, dissemination, and consumption of news that are embedded in the counterspeech doctrine. This paper then questions whether these assumptions remain viable in the face of the realities of the contemporary media ecosystem; and if not, what this means for contemporary media policy.

In addressing this issue, this paper will first review the counterspeech doctrine; the ways it has been put into practice in legal and policy decision-making; and the critiques that have been leveled against it. This section will illustrate that critiques thus far have focused on questioning whether counterspeech provides adequate protections against types of speech such as pornography and hate speech. Missing, at this point, has been a broader inquiry into whether the media ecosystem has evolved in ways that undermine the validity of the counterspeech doctrine.

This paper will then detail the technological changes that have affected the media ecosystem and media users over the past two decades that bear directly on the continued validity of the counterspeech doctrine. Specifically, technological changes have:

a) affected the relative prominence of the production of true versus false news;

b) diminished the gatekeeping barriers that have traditionally curtailed the dissemination of false news;

c) increased the ability of those producing false news to target those most likely to be affected by false news;

d) enhanced the speed at which false news travels;

e) diminished the likelihood of being exposed to accurate news that counteracts false news.

Thus, just as it has been argued that the assumptions underlying the Second Amendment right to bear arms (written in the era of muskets and flintlocks) may not be transferrable to today’s technological environment of automatic assault weapons, it may be time to reconsider whether fundamental aspects of First Amendment theory are effectively transferrable to today’s radically different media environment.

Finally, in considering the media law and policy implications of this argument, this paper will consider the implications of the seldom discussed qualification in Brandeis’ statement (“if there be time…”), its possible relevance to contemporary media policymaking, and whether other qualifications are now in order.

Presenter
GS

Gigi Sohn

Georgetown University


Saturday September 9, 2017 3:05pm - 3:40pm
ASLS Hazel Hall - Room 221

3:45pm

Coffee Break
Saturday September 9, 2017 3:45pm - 4:05pm
Founders Hall - Multipurpose Room

4:05pm

Police Perceptions of Body-Worn Cameras
There has been growing interest in the emerging technology of police body-worn cameras (BWCs). Some proponents see them as a tool for police to identify and collect evidence against criminals, others see them as a tool to hold police accountable, while detractors are concerned about the high cost of BWCs or the risk of undermining the privacy of police officers, the general public, or both. Much of this is conjecture, and more research is needed to determine the actual impact of BWCs.

In this paper, we investigate how police officers view BWCs, including the potential benefits and harms from wide-scale adoption of BWC technology. While different groups may have different opinions about BWCs, the views of police are important for two reasons. First, those police officers who have used BWCs have important first-hand knowledge about what does and does not work. Some even have concrete ideas of how to improve the technology or the policies regarding its use. Second, police are important stakeholders in decisions about whether to adopt BWCs and what usage policies to employ. Police unions have strenuously objected to their deployment in major cities like Boston, New Orleans, and New York City. Resistance from police may reduce the potential benefits of deploying BWCs, or even prevent deployment in the first place.

We ascertained police views using surveys and interviews. First, we conducted a written survey of police officers in Pittsburgh, where BWCs had been used for several years, but not throughout the department. 179 officers filled out surveys, some of whom had experience with BWCs and some of whom did not. Then, we supplemented surveys with interviews of Pittsburgh police officers so that some officers could respond to more open-ended questions.  

We found that overall, officers strongly believe in the ability of BWCs to reduce citizen complaints and maintain police-community relations, but support for deploying BWCs throughout the city is low (31%). However, that support dramatically increases among officers with hands-on BWC experience (57%). Officers who oppose city-wide adoption were far more likely to believe that BWCs would erode trust between officers and their superiors. We also found that officer age did not significantly affect their perceptions toward BWCs.

Our interviews revealed officer concerns that BWCs can inhibit discretion when interacting with citizens, and that the wire that attaches the battery to the camera presents a safety hazard in certain situations. Officers also strongly supported the idea of using positive footage of citizen interactions to improve general police training and increase skill diffusion among newer recruits.

In addition, we conducted the first study that incorporates survey data on police perceptions of BWCs from the Midwest, and compared the results with previous studies to determine whether national trends are emerging. We found that support for expanding the technology throughout departments has increased substantially over time, especially among officers who gain hands-on experience with the cameras. However, there are still high levels of disagreement on whether BWCs are easy to use, decrease levels of paperwork, or increase officer safety.

These and other results suggest that changes in BWC technology, police policy and procedure, and police training may lead to better police BWC programs.

Moderators
JM

Jim McConnaughey

George Mason University

Presenter

Max Goetschel

Analyst, Carnegie Mellon University

Author

Jon Peha

Carnegie Mellon University

Saturday September 9, 2017 4:05pm - 4:38pm
ASLS Hazel - Room 120

4:05pm

Social Shaping of the Politics of Internet Search and Net-working: Moving Beyond Filter Bubbles, Echo Chambers, and Fake News
Global debate over the impact that algorithms and search can have on shaping political opinions has been increasing in the aftermath of 2016 election results in Europe and the US. Powerful images of the Internet enabling access to a global treasure trove of information have shifted to worries over the degree to which those who use social media, and online tools such as search engines, are being fed inaccurate, fake, or politically-targeted information that distorts public opinion and political change. There are serious questions over whether biases embedded in the algorithms that drive search engines and social media will have major political consequences, such as creating filter bubbles or echo chambers. For example, are search engines and social media providing people with information that aligns with their beliefs and opinions or challenging them to consider countervailing perspectives? Most generally, the predominant concern is whether or not these media have a major impact on the political opinions and viewpoints of the public, and if so, for the better or worse.

This study addresses these issues by asking Internet users how they use search, social media, and other important media, for political information, and what difference it makes for them. We conducted an online survey of stratified random samples of Internet users in seven nations, including France, Germany, Italy, Poland, Spain, the United Kingdom, and the US.

The descriptive and multivariate findings cast doubt on deterministic perspectives on search. We find that technology matters – search indeed plays a major role in shaping opinion – but it is not deterministic. For example, the thesis of a filter bubble is overstated, as our pattern of findings counters this expectation. For instance, search engines are among an array of media consulted by those interested in politics. Another, more sociotechnically deterministic narrative concerns the concept of echo chambers, in which users, enabled by increased media choice and social media, tend to surround themselves with the viewpoints of like-minded people. Our evidence contradicts this view as well. Most of those who search for political information expose themselves to a variety of viewpoints.

Results also demonstrate that research has tended to underestimate the social shaping of technology. National cultures and media systems play an unrecognized role, as do individual differences in political and Internet orientations. This study shows how overestimating technical determinants while underestimating social influences has led to disproportionate levels of concern over the bias of search.

The findings suggest that misinformation can fool some search engine users some of the time, indicating that a sizeable group of users could benefit from more support and training in the use of search engines. The findings should also caution governments, business, and industry against over-reacting to panic over the potential bias of search in shaping political information and opinion.

Moderators
Presenter
Author

Saturday September 9, 2017 4:05pm - 4:38pm
ASLS Hazel Hall - Room 329

4:05pm

How Compatible are the DOJ and FCC's Approaches to Identifying Harms to Telecommunications Innovation?
The Department of Justice and Federal Trade Commission focus on innovation markets to identify and restrict transactions with potential to harm innovation within narrowly-defined R&D and intellectual property licensing markets. However, within the media and telecommunications industries, the Federal Communications Commission rarely uses this legal concept in practice. Instead, the FCC’s broader review standard protects innovation by identifying potential post-transaction reductions of the incentive to innovate, ability to innovate, or rate of innovation efforts using a broader conceptual definition of innovation. This approach often produces controversy related to its differences from the more traditional competition regulation framework. However, with the diversity of different business contexts in which these issues appear, these orders are a valuable source of insights into different possible types of harms to innovation.
Drawing upon comprehensive research into the uses of innovation across the FCC’s major transaction orders between 1997 and 2015, this work seeks to: (1) identify instances where potential harms to innovation were discussed within these transactions, (2) categorize different types of harms to innovation, and (3) consider the extent to which each category corresponds with the DOJ’s approach or may benefit from more clarity and formalization.

Moderators

Fernando Laguarda

Professorial Lecturer and Faculty Director, Program on Law and Government, American University Washington College of Law

Presenter

Ryland Sherman

Benton Foundation


Saturday September 9, 2017 4:05pm - 4:38pm
ASLS Hazel Hall - Room 332

4:05pm

Fake News, Fake Problem? An Analysis of the Fake News Audience in the Lead Up to the 2016 Presidential Election
In light of the recent U.S. election, many fear that “fake news” has become a powerful and sinister force in the news media environment. These fears stem from the idea that as news consumption increasingly takes place via social media sites, news audiences are more likely to find themselves drawn in by sensational headlines to sources that lack accuracy or legitimacy, with troubling consequences for democracy. However, we know little about the extent to which online audiences are exposed to fake news, and how these outlets factor into the average digital news diet. In this paper, I argue that fears about fake news consumption echo fears about partisan selective exposure, in that both stem from concerns that more media choice leads audiences to consume news that aligns with their beliefs and to ignore news that does not. Yet recent studies have concluded that the partisan media audience (1) is small and (2) also consumes news from popular, centrist outlets. I use online news audience data to show that a similar phenomenon plays out when it comes to fake news. Findings reveal that social media does indeed play an outsized role in generating traffic to fake news sites; however, the actual fake news audience is small, and a large portion of it also visits more popular, “real” news sites. I conclude by discussing the implications of a news media landscape in which the audience is exposed to contradictory sources of public affairs information.

Moderators
GL

Gavin Logan

National Urban League

Presenter

Saturday September 9, 2017 4:05pm - 4:38pm
ASLS Hazel Hall - Room 225

4:05pm

Matching Markets for Spectrum Sharing
Next generation networks aim at improving connectivity and capacity, adding to the current range of available services and expanding their reachability. For these systems to work, they need to be compatible with legacy technologies in addition to making use of (limited) available spectrum resources. This is one of the reasons why spectrum sharing has been at the forefront of the list of enablers for such systems. From federal-commercial sharing to finding opportunities in millimeter-wave spectrum, we have witnessed the formulation of multiple approaches to making spectrum sharing happen.

Existing work on spectrum sharing is wide-ranging and includes technological as well as market-based approaches. Spectrum market settings have called for different definitions of spectrum-related resources as a means to increase market thickness and thus improve the opportunities for finding market liquidity. In a similar way, we find proposals of network models which aim at adapting technical definitions of spectrum resources, such as those that are the product of virtualization. In this work, we adopt a market perspective on spectrum sharing within the context of more comprehensive network definitions such as those envisioned for next generation networks.

The particular network model we build upon is that introduced by Doyle et al. in [1]. This framework envisions heterogeneous physical networks that collaborate through virtualization to provide a consistent service to end users. Such an approach suggests three main participating entities: Resource Providers, Virtual Network Builders and Service Providers. The Resource Providers (RPs) are current resource owners or incumbents who can make their excess resources available in a common pool. The Service Providers (SPs) are new market entrants or existing providers who need additional resources in order to meet the demand of their customers (i.e., end users). The Virtual Network Builders (VNBs) are intermediaries (or “middlemen”) whose function is to obtain resources from the pool according to the demand of a subset of SPs whom they consider their customers. These entities and their definitions aim at outlining what a next-generation network would look like and what it would require.

Our objective in this work is to build a model representing a market setting that would be compatible with this kind of network. The theoretical framework for this model is both “middleman” theory [2] and “matching markets” [3, 4, 5]. We model VNB-SP interactions based on real-world middlemen or brokers; in fact, we envision a partnership-forming process to take place between these two sets of entities. The modeled matching process takes into account parameters that are relevant to SPs and VNBs in order to define their preferences and thus form acceptable matches. This method further allows us to observe how the preference parameters can influence the matching outcome. In other words, by applying different weights to these criteria we can observe whether there are changes in the resulting number of partnerships formed; the percentage of geographical demand that becomes market demand; and the range of lump sum fees requested by and paid to VNBs, among other factors.

We employ an agent-based model written in MATLAB to explore the function and performance of this model. MATLAB offers the necessary tools for handling the data we expect our agents to utilize and the results generated by the model. For result analysis, we rely on Python and R, due to the additional tools provided by these platforms for data processing and analysis. This modeling approach allows us to examine the benefits and constraints that novel network configurations entail. Particularly, our results can be useful to determine the drivers for resource sharing under the proposed configuration. In addition, we can formulate recommendations that could be extrapolated to other proposed sharing schemes, which include similar network participants and settings.
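
One way to picture the partnership-forming step is a deferred-acceptance (Gale-Shapley style) match between SPs and VNBs, sketched below in Python (the platform used here for analysis). The preference lists and capacities are arbitrary placeholders, not outputs of the agent-based model.

```python
# Illustrative sketch: deferred-acceptance matching of Service Providers (SPs)
# to Virtual Network Builders (VNBs) with capacities.  All preferences are
# placeholders, not results from the agent-based model.
def deferred_acceptance(sp_prefs, vnb_prefs, vnb_capacity):
    """SPs propose to VNBs in order of preference; each VNB keeps its most
    preferred proposers up to its capacity and rejects the rest."""
    rank = {v: {s: i for i, s in enumerate(prefs)} for v, prefs in vnb_prefs.items()}
    next_choice = {s: 0 for s in sp_prefs}
    matches = {v: [] for v in vnb_prefs}
    free = list(sp_prefs)
    while free:
        s = free.pop()
        if next_choice[s] >= len(sp_prefs[s]):
            continue                      # s exhausted its list and stays unmatched
        v = sp_prefs[s][next_choice[s]]
        next_choice[s] += 1
        matches[v].append(s)
        matches[v].sort(key=lambda x: rank[v][x])
        if len(matches[v]) > vnb_capacity[v]:
            free.append(matches[v].pop()) # reject the least preferred proposer
    return matches

sp_prefs = {"SP1": ["VNB-A", "VNB-B"], "SP2": ["VNB-A", "VNB-B"], "SP3": ["VNB-B", "VNB-A"]}
vnb_prefs = {"VNB-A": ["SP2", "SP1", "SP3"], "VNB-B": ["SP1", "SP3", "SP2"]}
print(deferred_acceptance(sp_prefs, vnb_prefs, {"VNB-A": 1, "VNB-B": 2}))
```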

References
[1] L. Doyle, J. Kibilda, T. K. Forde, and L. DaSilva, "Spectrum Without Bounds, Networks Without Borders," Proceedings of the IEEE, vol. 102, no. 3, pp. 351-365, Mar. 2014.
[2] M. Krakovsky, The Middleman Economy. Palgrave Macmillan, 2015.
[3] A. E. Roth and M. A. O. Sotomayor, Two-Sided Matching: A Study in Game-Theoretic Modeling and Analysis. Cambridge University Press, 1992, no. 18.
[4] A. E. Roth, "What Have We Learned from Market Design?" The Economic Journal, vol. 118, no. 527, pp. 285-310, 2008.
[5] A. E. Roth, Who Gets What and Why. Houghton Mifflin Harcourt Publishing Company, 2015.

Moderators

Peter Tenhula

Deputy Associate Administrator, NTIA

Presenter

Marcela Gomez

Visiting Research Assistant Professor, University of Pittsburgh

Author

Saturday September 9, 2017 4:05pm - 4:38pm
ASLS Hazel Hall - Room 221

4:38pm

How News Organizations Paraphrase Their Stories on Social Media? Computational Text Analysis Approach
Social media has become one of the major sources of news. As information overload prevails, news organizations need to form social media strategies to capture news readers’ limited attention (Lanham, 2006; Anderson & de Palma, 2013). This study investigates one of news organizations’ potential strategies – paraphrasing a news story in a social media post.

Previous literature on the choice of news headlines has found that commercial news media relying on advertising for their revenue tend to frame their news stories sensationally in headlines (Reah, 1998; Molek-Kozakowska, 2013). Similarly, recent studies on search engine optimization (SEO) show that news media carefully choose the titles and keywords tagged in URLs and HTML, while monitoring their competitors, to maximize the chances that their stories are found through online search (Dick, 2011). If these strategies are effective, news organizations are likely to adopt a similar strategy on social media. In particular, they may paraphrase their news information to make it:

(a) concise enough to fit into a text limit on a social media platform, 

(b) informative enough to signal news content, 

(c) and appealing to the news demand. 

This strategy can influence news readers’ perception of a news topic because news consumption via social media tends to be relatively instantaneous and short-lived (Mitchell, Jurkowitz & Olmstead, 2014). Many news readers may learn about a news topic from a social media post rather than from the original news story, just as they once did from the headlines and leads of traditional news (Andrew, 2007). This implies that how news organizations paraphrase news for social media may frame news readers’ perception of a public issue.

To reveal news organizations’ paraphrasing strategy, I apply computational information retrieval and text analysis methods. Previous studies based on hand-coding approaches often analyze only the social media posts (Newman, 2011), because of the large amount of data involved in covering the posts themselves, the original news articles, and the relationships between the two. That approach targets only the information that remains after paraphrasing and does not allow for examining how the paraphrase relates to the original text. Instead, I crawled news articles from 117 news organizations’ websites and their official Twitter accounts for a week (Feb 27, 2017 – Mar 5, 2017), amounting to 13,773 news stories and 61,219 tweets. I then identified the news article shared in each social media post by matching the URLs in articles and tweets.

I analyze how news organizations paraphrase their news articles by examining which words in an original text are likely to make it onto a social media post. This task can be carried out with discriminating-word algorithms such as logistic lasso regression (Mitra & Gilbert, 2014) or multinomial inverse regression (Taddy, 2013), recently developed in statistics and machine learning. Unlike popular dictionary methods such as LIWC, these algorithms allow the words likely to appear in a social media post to emerge as an outcome of the empirical analysis, without pre-assigning psychological meanings to dictionary words.
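
A minimal sketch of the word-level setup this implies, with invented article–tweet pairs standing in for the crawled corpus: each article word becomes a training example labeled by whether it survives into the matched tweet, and an L1-penalized ("lasso") logistic regression, one of the discriminating-word approaches cited above, surfaces the words most predictive of being retained.

```python
# Illustrative sketch, not the paper's pipeline. Pairs below are invented.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

pairs = [  # (article text, tweet text) -- hypothetical stand-ins
    ("senate passes broadband infrastructure bill after long debate",
     "senate passes broadband bill"),
    ("study finds streaming video now dominates evening internet traffic",
     "streaming video dominates internet traffic, study finds"),
]

tokenize = lambda s: re.findall(r"[a-z']+", s.lower())

rows, labels = [], []
for article, tweet in pairs:
    tweet_words = set(tokenize(tweet))
    for w in set(tokenize(article)):
        rows.append(w)                        # one example per distinct article word
        labels.append(int(w in tweet_words))  # 1 if the word made it into the tweet

# One-hot word identity; richer features (tf-idf, position, headline flag) could be added.
vec = CountVectorizer(analyzer=lambda w: [w])
X = vec.fit_transform(rows)
clf = LogisticRegression(penalty="l1", solver="liblinear").fit(X, labels)

# Words with the largest positive weights are disproportionately retained in tweets.
top = sorted(zip(vec.get_feature_names_out(), clf.coef_[0]), key=lambda t: -t[1])[:5]
print(top)
```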

Moderators
JM

Jim McConnaughey

George Mason University

Saturday September 9, 2017 4:38pm - 5:11pm
ASLS Hazel - Room 120

4:38pm

Number Effects and Tacit Collusion in Oligopolistic Markets
Concerns about market power and coordinated behavior frequently confront competition and regulatory authorities with the question: How many competitors are enough to ensure competition? For example, several high-profile merger control proceedings in the European Union as well as in the US have dealt with cases that would reduce the remaining number of competitors from four to three major mobile telecommunications operators. In the US airline industry, the Department of Justice had initially filed a lawsuit to block the merger between American Airlines and US Airways that reduced the number of legacy carriers from four to three, explicitly referring to the low number of competitors as a critical threat to effective competition. Even in a high-tech commodity industry like the hard disk drive industry, consolidation among manufacturers raises the question whether there is a magical number to reconcile scale synergies and pro-competitive effects.

In general, the number of firms in a specific market is determined endogenously by the competitive process and particularly by firms’ entry and exit decisions. However, in merger cases and regulatory proceedings, authorities are often required to determine a specific number of competitors exogenously. This makes it necessary to estimate the impact of number effects on the competitiveness of a market. It is well known that equilibrium predictions for market prices are generally decreasing with a higher number of competitors. However, the impact on the degree of tacit collusion, i.e., the ability of firms to sustain a supra-competitive price above the equilibrium price, is not as clear.
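
For reference, the direction of that equilibrium prediction can be read off the textbook symmetric Cournot benchmark (an illustration only; the experiments implement their own price and quantity competition models):

```latex
% Symmetric Cournot benchmark: inverse demand P = a - bQ, constant marginal
% cost c < a, and n firms. Each firm produces
\[
  q^{*} = \frac{a - c}{b\,(n + 1)}, \qquad
  p^{*}(n) = a - b\,n\,q^{*} = \frac{a + n\,c}{n + 1},
\]
% so the equilibrium price falls monotonically toward c as n grows;
% e.g. with a = 10, c = 2:  p^*(2) = 14/3 ~ 4.67,  p^*(3) = 4,  p^*(4) = 3.6.
```

The open question the experiments address is whether the collusive mark-up above such benchmark prices also shrinks step by step from two to three to four firms.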

In this article, we investigate the research hypothesis that tacit collusion in oligopolistic markets with two, three, and four competitors decreases strictly monotonically with the number of competing firms. From a methodological point of view, laboratory experiments are well suited to address this question because they allow researchers to observe out-of-equilibrium behavior while controlling for environmental conditions. Whereas a meta-analysis of the extant literature supports the notion that duopolies are significantly more prone to tacit collusion than quadropolies, i.e., that “two are few and four are many”, there is no empirical support for a significant effect when moving from four to three firms. However, the lack of statistical power across and within existing studies precludes a conclusive evaluation. Moreover, the review of the literature reveals a lack of systematic evaluation of number effects under different competition models with symmetric and asymmetric firms, and under consideration of different theoretical equilibrium predictions. Therefore, we conduct two laboratory experiments explicitly designed to test for number effects on tacit collusion under price and quantity competition, as well as with symmetric and asymmetric firms. We do find a significant increase in tacit collusion from four to three firms as well as from three to two firms. In fact, the empirically observed increase in tacit collusion is almost identical when moving from four to three firms as when moving from three to two, suggesting a linear number effect in highly concentrated oligopolies with regard to the (in)ability to coordinate on a price level above the theoretical Nash prediction.

Moderators
Presenter

Jan Kraemer

Full Professor, University of Passau

Author

Saturday September 9, 2017 4:38pm - 5:11pm
ASLS Hazel Hall - Room 329

4:38pm

Emerging Business Models in the OTT Service Sector: A Global Inventory
This paper is an empirical analysis of emerging business models in the Over The Top (OTT) video content distribution sector. From an industrial organization perspective, it identifies six critical attributes of an OTT content distribution platform: ownership, programming source, vertical integration with content producers, platform/multiplatform compatibility, service type, and revenue model. Using SNL Kagan’s global database of 800 OTT distribution networks, it finds that certain combinations of these characteristics are more prevalent than others, and are more competitively sustainable within specific types of media ecosystems. The paper concludes that these ‘archetypes’ are likely to be the survivors within specific ecosystems as the OTT content distribution system continues to converge onto dominant business models.

The increasing bandwidth of broadband networks has created opportunities for OTT services to enter and erode traditional broadcasting markets. A full 25% of U.S. homes no longer subscribe to a pay-TV service (GfK, 2016). This trend is especially strong among the younger generation (18-34), who are much more likely to opt for alternative video delivery services. As the cord-cutting/cord-never phenomenon accelerates, the future seems bright for OTT video providers. Yet Netflix is largely an exception in the global OTT market, where most new entrants are struggling to find traction (Agnese, 2016); even for Netflix, subscriber growth may have already plateaued (Newman, 2016) and revenues are no longer growing exponentially (Kim, 2016).

In the absence of a clearly dominant business model, providers are experimenting with a wide variety of platforms, content sources, revenue models and multiscreen strategies. Ad-supported models like YouTube have far greater “audience enjoyment minutes” than any other OTT provider; Amazon Prime Video has invested aggressively in original content (Castillo, 2016). Facebook’s video distribution is growing, and Apple is planning to add video to its music service. Traditional video providers have added more features to their service packages to compete with OTT: for example, CBS is now distributing original TV series exclusively over SVOD networks. Satellite TV operators are transforming themselves into internet MVPDs, as with Viasat’s Viaplay and DISH’s Sling. Platforms for the distribution of content are proliferating: personal video recorders, transactional VoD, and subscription VoD, and set-top boxes are deploying new capabilities, such as Roku TV’s ability to pause and catch up on live broadcasts.

Thus, despite the promise of OTT services and the threat they apparently pose to traditional broadcasters, there is no single dominant business model in the OTT video distribution sector. Instead, a wide diversity of options exists within each attribute of an OTT platform: for example, platform capability (PC/Mac, smartphone, tablet, connected TV, game console, Internet streaming players, pay TV set-top box), revenue models (free/ad-supported, transactional, subscription, app fee, premium content cost), etc.

Therefore, the objective of this paper is to investigate the OTT market from an industrial organization perspective, following the lead of Qin and Wei (2014). But unlike Qin and Wei, the paper examines not the performance of the OTT sector as an outcome of its structure and conduct, but the attributes of OTT business models. It seeks to identify whether there are correlations between the six attributes of an OTT content distribution platform — ownership, programming source, vertical integration with content producers, platform/multiplatform compatibility, service type, and revenue model — and, if so, whether dominant combinations of attributes constitute emerging archetypes or models for the OTT video business. Furthermore, the paper examines whether certain combinations of attributes (OTT models) are more prevalent in specific types of media systems (such as public-TV-dominated versus private competitive markets).

The primary data for this analysis is SNL Kagan’s global OTT provider database, which lists 800 OTT providers from 71 different countries along with several of their attributes, including platform capabilities, revenue models and programming sources. Additional information is drawn from industry reports and trade press articles. In addition, country information is coded in from sources such as the European Audiovisual Observatory, the OECD, and United Nations agencies. The conclusions of the paper will be of interest not only to OTT providers, but also to media regulators and legislators.
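
A minimal sketch of the archetype-counting step this implies, using invented rows and column names rather than the actual SNL Kagan schema: tabulate the prevalence of attribute combinations and cross-tabulate an attribute against media-system type.

```python
# Illustrative only; rows and field names are hypothetical, not the SNL Kagan schema.
import pandas as pd

otts = pd.DataFrame([
    {"ownership": "independent", "programming": "licensed", "revenue_model": "subscription", "media_system": "private-competitive"},
    {"ownership": "broadcaster", "programming": "original", "revenue_model": "ad-supported", "media_system": "public-TV-dominated"},
    {"ownership": "independent", "programming": "licensed", "revenue_model": "subscription", "media_system": "private-competitive"},
])

attrs = ["ownership", "programming", "revenue_model"]

# Prevalence of attribute combinations ("candidate archetypes")
print(otts.groupby(attrs).size().sort_values(ascending=False))

# Association between one attribute and the surrounding media system
print(pd.crosstab(otts["revenue_model"], otts["media_system"], normalize="index"))
```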


References

Agnese, S. (2016, Nov. 22). Netflix’s Current Business Model is Not Sustainable. Ovum. Accessed at https://www.ovum.com/research/netflixs-current-business-model-is-not-sustainable/

Castillo, M. (2016, Oct. 17). Netflix plans to spend $6 billion on new shows, blowing away all but one of its rivals. Accessed at http://www.cnbc.com/2016/10/17/netflixs-6-billion-content-budget-in-2017-makes-it-one-of-the-top-spenders.html

GfK (2016). One-Quarter of US Households Live Without Cable, Satellite TV Reception – New GfK Study. Press release. Accessed at http://www.gfk.com/en-us/insights/press-release/one-quarter-of-us-households-live-without-cable-satellite-tv-reception-new-gfk-study/

Kim, D. (2016). The future outlook of Netflix -- the financial perspective. Information and Communication Policy, 28(22), 1-19. Accessed at http://www.kisdi.re.kr/kisdi/fp/kr/publication/selectResearch.do?cmd=fpSelectResearch&sMenuType=2&curPage=5&searchKey=TITLE&searchValue=&sSDate=&sEDate=&controlNo=14011&langdiv=1 (Korean)

Newman, L. H. (2016, July 18). Wall Street is worried that Netflix has reached its saturation point. Slate.com. Accessed at http://www.slate.com/blogs/moneybox/2016/07/18/netflix_earnings_beat_expectations_but_the_stock_is_still_tanking.html

Qin, Q., & Wei, P. (2014). The Structure-Conduct-Performance Analysis of OTT Media. Advances in Management and Applied Economics, 4(5), 29.

Moderators

Fernando Laguarda

Professorial Lecturer and Faculty Director, Program on Law and Government, American University Washington College of Law

Presenter
EP

EUN-A PARK

Western State Colorado University


Saturday September 9, 2017 4:38pm - 5:11pm
ASLS Hazel Hall - Room 332

4:38pm

Exploration of the Federal Communications Commission's Experimental Radio Service (ERS): Understanding of Ten Years of Experimental Spectrum Licenses
The primary purpose of the ERS is to provide for experimental uses of spectrum resources that are not otherwise permitted under other existing rules of the FCC. Further, these licenses provide opportunities to experiment with new radio technologies, equipment designs, propagation methods or new concepts related to the use of spectrum. While experimentation and development are also allowed in well-defined existing services(1), we find that experimental licenses also serve purposes other than experimentation. For instance, the ERS is used to enable temporary access to spectrum for broadcasting and to support the communication equipment of televised events (sports, political debates, etc.)(2). This makes experimental licenses a key component in guaranteeing access to spectrum resources that are otherwise restricted, while supporting innovation and the development of wireless technologies.

Experimental licenses have been awarded by the FCC for more than thirty years, and since 1987 alone more than 20,000 licenses have been granted. Nevertheless, little research has been published on this topic, although it appears to be directly tied to the development of wireless technologies in the U.S. We believe that in order to comprehend the relationship between experimental licenses and innovation, the first step is to evaluate and understand how they have been used. To this end, we propose a comprehensive analysis of the assignment of these licenses over the past ten years (2007-2016).

Utilizing publicly available information on the website of the FCC’s Office of Engineering and Technology (3), we have built a single repository (database) of all the technical and non-technical details of these licenses. This has permitted us to differentiate among the existing types of experimental licenses and, subsequently, analyze their details. We pay particular attention to the evolution over time of various parameters, such as the number and duration of licenses, the frequency of assignment, processing times, operational parameters (mainly authorized frequencies and transmission power levels), the purpose of operation, and others.

For a broader understanding and analysis of the experimental licenses, this work also aims to mine additional details within the experimental license framework. For instance, we explore current processing time trends to better understand whether obtaining an experimental license imposes a significant time burden. Additionally, we look at factors that may influence the license-granting process. In this manner, we can assess whether any factor carries more significant weight and thus influences the likelihood of obtaining a license. Furthermore, we explore the similarities among the members of the different applicant categories defined by the FCC.
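
The sketch below illustrates two of these analyses on a hypothetical extract of the repository (field names and values are assumptions, not the OET schema): median processing time by filing year, and a simple logistic regression probing which factors move the likelihood of a grant.

```python
# Toy data standing in for the license repository; illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

apps = pd.DataFrame({
    "filed":   pd.to_datetime(["2012-03-01", "2012-06-15", "2013-01-20",
                               "2014-09-05", "2015-02-11", "2016-07-30"]),
    "decided": pd.to_datetime(["2012-05-10", "2012-07-01", "2013-04-02",
                               "2014-10-01", "2015-04-20", "2016-09-15"]),
    "granted":  [1, 0, 1, 0, 1, 0],
    "category": ["university", "university", "university",
                 "manufacturer", "manufacturer", "manufacturer"],
    "ghz":      [2.4, 5.8, 28.0, 2.4, 28.0, 71.0],
})

# (1) Processing time from filing to decision, summarized by filing year
apps["processing_days"] = (apps["decided"] - apps["filed"]).dt.days
print(apps.groupby(apps["filed"].dt.year)["processing_days"].median())

# (2) Which factors correlate with a grant? (logistic regression on toy data)
model = smf.logit("granted ~ C(category) + ghz", data=apps).fit(disp=0)
print(model.params)
```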

This analysis, together with our exploratory work, allows us to shed light on important aspects of the experimental licensing system. From a regulatory point of view, we can assess whether the experimental licensing outcome is consistent with the original purposes of this type of license. From an applicant perspective, we can provide guidelines on which license parameters and services are more likely to result in a grant and on the time constraints in the application process. Finally, from a research point of view, it allows us to evaluate the impact that these licenses may have on the advancement of the testing and development of new technologies.

Finally, if time allows, we will explore how experimental spectrum licenses are correlated with well-known wireless developments. In this manner, we aim first to study how the ERS is related to spectrum sharing methods such as the utilization of TV White Spaces. Moreover, we would like to analyze the relationship between experimental licenses and new wireless techniques such as LTE-U or 5G.

(1) For instance, developmental rules for broadcast stations. Nevertheless, these activities are restricted to applicants that are eligible to apply for a license in this particular service and on frequencies allocated for it.

(2) In the initial exploration, we discovered that around 70% of Special Temporary Authorization of experimental licenses are issued for this kind of events.

(3) The OET is the branch within the FCC responsible for the management and assignment of licenses under the ERS.

Moderators
GL

Gavin Logan

National Urban League

Presenter

Pedro Bustamante

University of Pittsburgh

Author
DS

Douglas Sicker

Carnegie Mellon University

Martin B. H. Weiss

University of Pittsburgh

Saturday September 9, 2017 4:38pm - 5:11pm
ASLS Hazel Hall - Room 225

4:38pm

Streamlining Permitting Processes for Small Cells in the Right of Way
Increasing demand for wireless capacity and throughput necessitates more widespread and variegated deployment of wireless infrastructure. Along with traditional cell towers, the proliferation of small cells, Distributed Antenna Systems (DAS), outdoor Wi-Fi, and Internet of Things nodes is challenging municipalities and other government entities to streamline their processes for siting and modifying these facilities on public land, buildings, and the Right-of-Way. For new developments, permitting entities request well-integrated, context-appropriate, and secure facilities, while wireless representatives respond that onerous requirements create a significant barrier to deployment. This paper explores, analyzes, and prescribes concrete best practices and areas of compromise for municipalities and the wireless industry to streamline permitting processes, diminish costs, and reduce delays in the siting of wireless infrastructure facilities.

The research methodology identifies a rigorous and methodical approach to understanding and improving existing permitting processes, especially as they relate to fee structures, access to facilities, design concerns, and unanticipated issues. This effort involves four subproblems: first, identifying, describing, and characterizing issues that cause cost overruns and delays; second, surveying existing best practices and recommendations to improve negotiation and communication between municipalities and the wireless industry; third, assessing the extent to which improvements result in cost savings and diminished delays; and fourth, establishing the process by which best practices can be implemented and disseminated. Research and evidence will be gathered from several sources, including submissions to the ongoing Federal Communications Commission proceeding titled “Streamlining Deployment of Small Cell Infrastructure by Improving Wireless Siting Policies” (docket 16-421), a survey of several municipal code regulations that appropriately address common problems in siting wireless facilities, and numerous interviews with municipal and wireless industry representatives.

This methodology will enable the creation of materials that expedite permitting processes. These materials, which will include forms and checklists that comprehensively cover the issues typically encountered in the permitting process, will be methodical and applicable to any municipality. The checklists will cover engineering and other plans for new sites, modifications, maintenance, contingencies, and exit strategies. They will also address requirements and best practices in site features, aesthetic design, environmental factors, and safety practices. These recommendations and best practices are intended to improve communication and negotiation procedures between municipalities and the wireless industry and to result in cost savings and diminished delays.

Moderators

Peter Tenhula

Deputy Associate Administrator, NTIA

Presenter
IS

Irena Stevens

University of Colorado Boulder


Saturday September 9, 2017 4:38pm - 5:11pm
ASLS Hazel Hall - Room 221

5:12pm

An Analysis of Diffusion of Universal Basic Income Policy Over Twitter
INTRODUCTION: Technological advances have increasingly automated tasks that have hitherto been done by humans. Society would benefit from the open discussion of alternative policy approaches, such as Universal Basic Income (UBI), that could alleviate social tensions related to increasing wealth inequality and potential joblessness that have become more problematic due to automation and technological advance.

OBJECTIVE: In this study, we examine the discussion of UBI on Twitter in an effort to understand the types of messages most likely to spread information about policy innovations and most likely to bring new voices into the discussion. One such policy prescription is UBI, which involves giving each citizen above a certain age a set income, regardless of work status or wealth.

We have seen the potential for social media sites such as Twitter to draw attention to the grievances of activists in recent social movements. Policy advocates, like political activists, could benefit from research that provides insights into how ideas and messages spread, and ways to grow their audiences. Much work has examined factors related to message diffusion on Twitter. 

The diffusion of ideas and information can instigate social change. This paper uses structuration theory to illustrate how this happens, suggesting that as information flows through our networks it can both re-instantiate and transform social norms. It re-instantiates norms by showing us what we expect to see: what we would think of as normal behavior in a given situation. This framework informs our research questions:
1. Are more retweet events (RTEs) labeled interesting than other types, and specifically, more than resonance?
2. Are tweet messages coded as resonance retweeted more than those coded as interesting?
3. Do RTEs labeled as resonance tend to bring more new users than other types, particularly those labeled interesting?
DATA: We collect and analyze tweets related to Universal Basic Income using both content analysis and inferential statistics. We collected tweets from Twitter’s streaming API using an open source toolkit developed by a co-author. Our collection period spanned from July 25, 2016 to December 12, 2016, yielding 157,832 tweets posted by 35,102 users.
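
As an illustration of how the collection can be organized for the research questions above, the sketch below (with simplified, hypothetical fields rather than the raw Twitter payload) groups retweets into retweet events and counts, per RTE, the retweets and the users appearing in the dataset for the first time.

```python
# Illustrative sketch; rows and field names are simplified stand-ins.
import pandas as pd

tweets = pd.DataFrame([
    {"tweet_id": 1, "user": "a", "source_id": None, "ts": "2016-07-25 10:00"},
    {"tweet_id": 2, "user": "b", "source_id": 1,    "ts": "2016-07-25 10:05"},
    {"tweet_id": 3, "user": "c", "source_id": 1,    "ts": "2016-07-25 11:00"},
    {"tweet_id": 4, "user": "b", "source_id": None, "ts": "2016-07-26 09:00"},
]).assign(ts=lambda d: pd.to_datetime(d["ts"])).sort_values("ts")

tweets["is_new_user"] = ~tweets["user"].duplicated()            # first appearance in the data
tweets["rte"] = tweets["source_id"].fillna(tweets["tweet_id"])  # RTE = id of the source tweet

summary = tweets[tweets["source_id"].notna()].groupby("rte").agg(
    retweets=("tweet_id", "count"),
    new_users_reached=("is_new_user", "sum"),
)
print(summary)  # per-RTE retweet counts and newly appearing users (cf. RQ2, RQ3)
```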

METHOD: The researchers independently coded a sample of 1,000 tweets and successfully tested for intercoder reliability.

RESULTS: The results imply that part of building activist networks is the long, hard work of trying to get content to spread over weak-tie bridges in ways that build one’s audience and reduce the distance that information has to travel to reach new potential audiences.

RELEVANCE: We apply Rogers’ theory of the diffusion of innovations from the field of communication to understand the diffusion of policy ideas on Twitter. Given the recent discussion of the diffusion of fake news, this work is timely, as it sheds light on the kinds of messages that diffuse in networks as well as which types of messages tend to bring more new users into a discussion space. We further discuss the impact of Twitter on the policy discussion of Universal Basic Income. This leads to prescriptions for policy advocates wishing to grow their audience and network.

Moderators
JM

Jim McConnaughey

George Mason University

Presenter
Author

Saturday September 9, 2017 5:12pm - 5:45pm
ASLS Hazel - Room 120

5:12pm

Dropping the Bundle? Welfare Effects of Content and Internet
Bundling of broadband access with other services has been a defining characteristic of internet access markets for as long as broadband technologies have been available. Initially, cable television competitors entered telecommunications markets by bundling first voice telephony, and subsequently (broadband) internet access, with their television products. Telecommunications firms rapidly followed suit by reselling access to pay television (either via third-party infrastructures or their own), leading to the now-ubiquitous ‘triple play’ offering coming to dominate residential purchasing. Initially, such bundling likely led to higher levels of broadband uptake than would have occurred under mandatory unbundling, as those with low willingness-to-pay for broadband but higher willingness-to-pay for the other products (i.e. negative correlation) might buy broadband in a bundle, but not at stand-alone prices (Heatley & Howell, 2010).

From the outset, concerns have been voiced that bundling access to content (television) and infrastructure (broadband) by a telecommunications provider with market power could result in foreclosure of competition in content (television) markets (e.g. Papandrea, Stoeckl & Daly, 2003; Krämer, 2009; Maruyama & Minamikawa, 2008). Such fears led to mandatory separation of cable television and telecommunications providers in Australia, and some other OECD countries (OECD, 2000) and is a recurring theme of political discussion around the topic.

Theoretical models, however, suggest that even though such foreclosure may occur under some circumstances, under others bundling may yield both higher profits and higher total surplus than mandatory unbundling (à la carte sales). These include products with very low marginal costs (Bakos & Brynjolfsson, 1999) and that are nonrivalrous in consumption (Liebowitz & Margolis, 2008), certain relative demand elasticities for the products in the bundle (Papandrea, Stoeckl & Daly, 2003) and where economies of scope increase consumer surplus (Arlandis, 2008). Indeed, regulations to cap market share or impose à la carte on cable operators may reduce total surplus, and absent offsetting increases in consumer welfare, such policy measures may reduce total welfare (Adilov, Alexander & Cunningham, 2012).

Despite the highly nuanced circumstances influencing the welfare effects of bundling internet content with internet access, the potential to reduce competition by offering premium content with broadband access poses questions about a potential substantial lessening of competition in mergers such as that proposed between AT&T and Time-Warner. The bundling of broadband and premium content in particular may increase the risk of foreclosure, but focusing unduly on the supply side arguably risks giving too little weight to the interplay with consumer-specific factors on the demand side (Howell & Potgieter, 2016). This risk is exacerbated by the paucity of empirical evidence to inform decision-making. Theoretical models provide some insights, but are limited by their stylized assumptions, which do not necessarily map neatly to the actual decisions made by participants in the relevant exchanges. Simulation analysis, however, offers insights to inform such complex merger decisions, as the scenarios examined can take account of the multiple nuances in both observed and hypothesized demand-side interactions.

To this end, we apply simulation analyses to investigate the situation where a basic content package, a premium content package and broadband are offered by a firm and analyze the firm's price-setting behavior when customers react to a given set of prices by maximizing their individual consumer surplus.

The model:

The model assumes that there are consumers, each with an a priori known willingness-to-pay (WTP) for a basic content package, a premium content package and unbundled broadband. Each customer then has an imputed willingness to pay for the four bundles under consideration: basic plus premium content (index 2), basic content plus broadband (index 4), premium content plus broadband (index 5) and basic as well as premium content plus broadband (index 6).

Given a tuple of prices chosen by the monopoly provider of the services, each customer selects which (if any) of the products or product bundles to purchase so as to maximize its consumer surplus, subject to any restrictions imposed by a regulator. The producer chooses prices so as to maximize its revenue; for information goods this can be treated as identical to the producer’s profit.

We randomly assign WTP values for each consumer. WTP values and prices are assumed to be integers, with prices set by the producer. This calculation requires a very large number of iterations and considerable computing resources.
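
A stripped-down sketch of this search follows (assumptions: additive imputed bundle WTP, a coarse integer price grid, and, for brevity, only the three stand-alone products plus the triple-play bundle are priced; the paper's model prices every bundle and runs far larger searches).

```python
# Illustrative brute-force version of the pricing simulation described above.
from itertools import product
import random

random.seed(42)
N = 200
# Integer WTP draws per consumer for (basic content, premium content, broadband)
wtp = [(random.randint(0, 10), random.randint(0, 10), random.randint(0, 15)) for _ in range(N)]

offers = ["basic", "premium", "broadband", "triple"]

def value(c, offer):
    b, p, bb = c
    return {"basic": b, "premium": p, "broadband": bb, "triple": b + p + bb}[offer]

def revenue(prices):
    """Each consumer buys the offer with the highest non-negative surplus (ties favor buying)."""
    total = 0
    for c in wtp:
        best = max(offers, key=lambda o: value(c, o) - prices[o])
        if value(c, best) - prices[best] >= 0:
            total += prices[best]
    return total  # for information goods, revenue is treated as profit

grid = range(0, 31, 5)   # coarse integer price grid keeps the search small
best = max(
    ({"basic": pb, "premium": pp, "broadband": pbb, "triple": pt}
     for pb, pp, pbb, pt in product(grid, repeat=4)),
    key=revenue,
)
print(best, revenue(best))
```

Welfare comparisons across regulatory restrictions can then be made by recomputing consumer surplus at the revenue-maximizing prices with and without particular bundles on offer.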

The analysis and implications:

We analyze a large number of instances of the problem, subject to assumptions about the WTP distributions that are, in our view, realistic, in order to characterize the market outcomes and to indicate where and how often regulatory intervention will positively affect total (respectively, consumer) welfare. Early indications are that welfare is often maximized when a triple play bundle can be offered. Furthermore, where social welfare is maximal depends in a complex way on the underlying willingness to pay of heterogeneous groups of customers. The implications for merger investigations are discussed.

References

Adilov, N., Alexander, P., & Cunningham, B. (2012). Smaller pie, larger slice: how bargaining power affects the decision to bundle. The B.E. Journal of Economic Analysis and Policy 12(1), Article 12.

Arlandis, A. (2008). Bundling and economies of scope. Communications and Strategies Special Issue, November 2008, 117-129.

Bakos, Y., & Brynjolfsson, E. (1999). Bundling information goods. Management Science 45(12), 1613-1630.

Heatley, D., & Howell, B. (2009). The brand is the bundle: strategies for the mobile ecosystem. Communications and Strategies 75, 79-100.

Howell, B., & Potgieter, P. (2016). Submission on Letter of Unresolved Issues. http://www.comcom.govt.nz/dmsdocument/14963

Krämer, J. (2009). Bundling vertically differentiated communications services to leverage market power. Info 11(3), 64 – 74.

Liebowitz, S., & Margolis, S. (2008). Bundles of joy: the ubiquity and efficiency of bundles in new technology markets. Journal of Competition Law & Economics 5(1), 1–47.

Maruyama, M., & Minamikawa, K. (2009). Vertical integration, bundling and discounts. Information Economics and Policy 21, 62–71.

Papandrea, F., Stoeckl, N. & Daly, A. (2003). Bundling in the Australian telecommunications industry. The Australian Economic Review 36(1), 41-54.

Moderators
Presenter
Author
BE

Bronwyn E. Howell

Victoria University of Wellington

Saturday September 9, 2017 5:12pm - 5:45pm
ASLS Hazel Hall - Room 329

5:12pm

Global Governance of the Embedded Internet: The Urgency and the Policy Response
This paper addresses the need to bring the Internet of Things and associated technologies under a global policy regime, built on the newly independent ICANN, acting in an expanded capacity as a recognized non-territorial, multi-stakeholder-based, sovereign entity under agreed and transparent normative standards.

The phrase “Internet governance” is highly contested over its technical, security, and sociopolitical aspects. Until recently, however, it had not been imagined to include networked devices with embedded intelligence, such as smart cars, smart watches, smart refrigerators, and a myriad of other devices. A rapidly emerging issue is how, if at all, the current global Internet governance regime relates to the emerging array of ubiquitous embedded information technologies which collect, store, process, learn from, and exploit information about all aspects of our lives. Does this call for a policy response?

This is a non-trivial issue, as it binds together the Internet of Things, big data analytics, cloud computing, and machine learning/artificial intelligence into a single, integrated system. Each component raises policy issues, but the bigger challenge may be unintended adverse consequences arising from their synchronous operation. Because of the inherently global nature of the underlying network, which seeks to connect “everything to everything else,” it is important to give consideration to whether these developments should have a central point of global policy development, coordination, and oversight.

This paper answers that question in the affirmative and, after reviewing multiple candidates which have been proposed, concludes that the emerging post-U.S. ICANN is most fit for that role. The authors believe it is important to keep the centers of technical and policy expertise together and efficiently available. The authors recognize that such governance is not a “singular system,” and that some issues, such as cybersecurity, may find other homes, perhaps even treaty-based.

The paper further argues that “new” ICANN, largely formally severed from the U.S., and with a revised and expanded role for governments in its management, has a very strong claim for legitimacy and non-territorial sovereignty. On that basis, it may feel more secure in expanding the scope of its mandate – indeed, there will likely be considerable pressure to do so.

Another critical factor is the uncertainty about the normative values that underpin, or in some cases undermine, global Internet governance. These values will continue to be contested, but there is already some broad acquiescence to general principles from the United Nations, which can form the basis for a transparent discussion in a multi-stakeholder venue about which norms and values are most appropriate to guide policy actions. Some of these policy alternatives are presented and discussed.

This topic, and the approach to it, are novel in that very little work has been done in this area; the paper builds on, and considerably extends, the work that does exist.

Moderators

Fernando Laguarda

Professorial Lecturer and Faculty Director, Program on Law and Government, American University Washington College of Law

Presenter

Jenifer Sunrise Winter

Assoc. Prof., University of Hawaii at Manoa

Author

Saturday September 9, 2017 5:12pm - 5:45pm
ASLS Hazel Hall - Room 332

5:12pm

From Net Neutrality to Application Store Neutrality? The Impact of Application Stores' Ranking Policies on Application Quality and Welfare
We consider the impact of different ranking regimes in application stores, like Apple’s App Store or Alphabet’s Play Store, on the quality of applications and welfare. Application stores are essential gatekeepers between application developers and consumers. In particular, previous empirical research has shown that the ranking position of an application in an application store has a tremendous effect on the application’s demand, and thus its success. Both Apple and Alphabet have recently introduced sponsored search results in their respective application stores, allowing application developers to be listed higher in return for a ‘ranking fee’. This setting has some similarities to the net neutrality debate, where network providers are the essential gatekeepers between content providers and consumers, and where paid prioritization for content providers was heavily scrutinized. Similarly, we investigate, in the context of dominant application stores, whether it is reasonable to prohibit sponsored ranking in favor of a “neutral” ranking policy that is based solely on the applications’ quality.

Specifically, we develop a game theoretic model with a monopoly application store and two competing, symmetric application developers, where each developer can invest in quality improvements of its application. The application store sets an entry fee for consumers (e.g., through the price of the device needed to access the store) and a maximal price that developers may charge consumers. The consumer demand for an application that is not in the top position is normalized to zero. We then compare the sponsored ranking scenario, where the application store can, but does not have to, levy a ranking fee on the application developers, to the neutrality scenario, where the application store cannot levy an additional fee on application developers and ranks them according to their quality. We find that, in the sponsored ranking scenario, the application store always has an incentive to introduce sponsored ranking and to rank the applications irrespective of quality. Nevertheless, we show that a neutrality regulation can be detrimental to the application store, to total welfare and, surprisingly, even to consumers. This is because application stores allow developers to charge higher prices to consumers under the sponsored ranking scenario, which in turn gives them a stronger incentive to increase application quality. At the same time, the application store lowers the entry fee for consumers, which increases consumer surplus. Overall, we show that neutrality regulation can only be beneficial for welfare if the application store’s bargaining power with respect to application developers is low. This is because, under the sponsored ranking scenario with low bargaining power, the application store has little incentive to increase the maximal price, so developers do not increase the quality of their applications enough to offset the possible loss in consumer welfare from excessive consumption of the lower-quality application. We additionally show that these results are robust when two application stores compete.

Moderators
GL

Gavin Logan

National Urban League

Presenter
OZ

Oliver Zierke

University of Passau

Author

Jan Kraemer

Full Professor, University of Passau

Saturday September 9, 2017 5:12pm - 5:45pm
ASLS Hazel Hall - Room 225

5:12pm

Not a Scarce Natural Resource: Alternatives to Spectrum-Think
Spectrum policy is in a rut, and so is spectrum policy research. To get out, one needs to dig into the language that underlies the practice.

“Spectrum” is one of the most common words used to describe the wireless ecosystem. However, the term has never had a single, clear meaning, and other concepts used to describe wireless – such as radio stations or radio services – have waxed and waned. We will investigate how the terms used in wireless policy have changed from the early 20th century to the present day by examining congressional testimony and regulatory proceedings. Preliminary results suggest that early discussion focused on radio operations rather than spectrum, and that spectrum – when used – denoted frequencies rather than a resource that operators used to provide radio services.
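
A small sketch of the kind of term-frequency comparison this implies, run over an invented, dated corpus standing in for the congressional and regulatory records:

```python
# Illustrative only; the snippets below are invented stand-ins for archival documents.
import re
from collections import Counter, defaultdict

corpus = {  # year -> text
    1927: "the commission shall license radio stations and assign frequencies to each station",
    1959: "allocation of frequencies for the radio services must serve the public interest",
    1993: "auctions will assign licenses to use the electromagnetic spectrum efficiently",
    2012: "spectrum is a scarce natural resource and spectrum sharing can ease the shortage",
}

terms = ["spectrum", "radio station", "radio service", "frequencies"]

counts = defaultdict(Counter)
for year, text in corpus.items():
    for term in terms:
        counts[year][term] = len(re.findall(term, text))

for year in sorted(counts):          # how the vocabulary shifts over time
    print(year, dict(counts[year]))
```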

Today, spectrum is commonly said to be a scarce natural resource. Metaphors are pervasive in complex legal and regulatory issues, and shape the way stakeholders and policymakers think. We will show that the scarce natural resource analogy – while providing some insight into radio policy questions – is difficult to support, fails to adequately convey the dynamics of radio operation, and leads to radio policy increasingly focused on management concepts based on physical resources like land. When viewing spectrum this way, the regulator or operator may fail to see future conflicts, and be blindsided by harms that could have otherwise been prevented.

We will use textual analysis to examine alternative views of spectrum, and explore the applicability of these divergent perspectives in a modern understanding of spectrum. We will consider spectrum resources as frequency bands or radio operating rights, non-physical resource analogies, and radio station operation as the object of regulation.

This work proposes that assumptions about the objects of wireless policy affect regulatory oversight. The hypothesis will be tested by examining case studies such as the unexpected interference to public safety in 800 MHz when cellular operation was introduced; the shift from out-of-band emissions to adjacent band interference concerns in the GPS/LightSquared case; and interference with SiriusXM reception due to T-Mobile transmissions in other bands. For example, rules that focus on operating rights in individual bands may underestimate intermodulation interference such as that experienced by SiriusXM receivers. Framing regulatory decisions in terms of radio operations rather than bands might have forestalled this problem.

Our preliminary conclusion is that focusing on radio station operations gives a more reliable picture of how a rights grant will affect existing users and neighboring licenses. By shifting the focus towards a view of spectrum as radio operation, regulators can craft better rules and anticipate future harms that might otherwise be missed.

Moderators

Peter Tenhula

Deputy Associate Administrator, NTIA

Presenter
Author

Saturday September 9, 2017 5:12pm - 5:45pm
ASLS Hazel Hall - Room 221

5:45pm

Closing Reception
Saturday September 9, 2017 5:45pm - 6:45pm
Founders Hall - Multipurpose Room