High-speed internet access has become a necessity for working and learning from home, especially during the COVID-19 pandemic. However, many American households lack a decent broadband connection. To tackle this problem, U.S. researchers have developed a new tool that smooths the collection of federal broadband access data and helps pinpoint coverage gaps across the country. The research reveals that nearly 21% of students in urban areas are without at-home broadband, while 25% of suburban students and 37% of rural students lack it.
As more day-to-day activities move online, including education, commerce and health care, understanding the gaps in digital infrastructure is essential to addressing disparities in access related to demographics, socioeconomic status and educational attainment.
When the U.S. Congress passed the Telecommunications Act of 1996, the goal was to encourage competition in the telecommunications industry while improving the quality of service and lowering customer prices. To determine the act’s effectiveness, the Federal Communications Commission created a standardised form (Form 477) on which internet service providers must report, twice a year, where they provide service to residential and business customers.
Form 477 data remains the best publicly available source on broadband deployment. Unfortunately, the data has nuances that have so far prevented researchers from conducting useful analyses over time. One is that the data collected from 2008 to 2018 spans the 2000 and 2010 census reporting periods, making it difficult to analyse the data as a whole and align it with census geographies, which shift with each census.
Several other U.S. researchers worked together to produce a new dataset that resolves some of these issues by bridging the breaks in the Form 477 data into a continuous timeline and aligning the data to the 2010 census. They developed a procedure for turning the raw data into an integrated broadband time series, a dataset the team has labelled the Broadband Integrated Time Series (BITS).
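The core of such a realignment can be illustrated with a weighted crosswalk that reallocates counts reported against old census geographies to the new ones. This is a minimal sketch, not the team's actual procedure: the block IDs, weights and provider counts below are invented for illustration.

```python
# Sketch: reallocating provider counts reported against 2000 census blocks
# to 2010 census geographies using a weighted crosswalk.
# All IDs, weights and counts are illustrative, not real Form 477 values.

# crosswalk: 2000 block ID -> list of (2010 block ID, allocation weight)
crosswalk = {
    "B2000_A": [("B2010_X", 0.6), ("B2010_Y", 0.4)],
    "B2000_B": [("B2010_Y", 1.0)],
}

# provider counts reported against 2000 geographies
counts_2000 = {"B2000_A": 5, "B2000_B": 3}

def realign(counts, crosswalk):
    """Allocate counts from old to new geographies by crosswalk weight."""
    out = {}
    for old_id, n in counts.items():
        for new_id, w in crosswalk[old_id]:
            out[new_id] = out.get(new_id, 0.0) + n * w
    return out

counts_2010 = realign(counts_2000, crosswalk)
print(counts_2010)  # {'B2010_X': 3.0, 'B2010_Y': 5.0}
```

Applied year by year, the same weights keep each reporting period comparable on a single, fixed set of census units, which is what allows the separate Form 477 releases to be read as one time series.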
The researchers hope that the BITS data will serve as a tool for diagnosing gaps in broadband availability, helping to close the digital divide and enhance everyone’s participation in online activities. With shrinking public budgets and a need to pinpoint locations suffering from a chronic shortage of broadband, policymakers must be able to efficiently allocate the human, infrastructural and policy resources required to improve local conditions.
As digital transformation is inevitable, data has become critical for the public sector. As reported by OpenGov Asia, federal agencies must follow the Federal Data Strategy framework, which focuses on building a culture that values data; governing, managing and protecting data; and promoting efficient and appropriate data use. Similarly, many states have enacted, or are currently enacting, data privacy laws. To adhere to these policies, agencies must examine whether the data they gather and store is at risk of exposure. Backing up SaaS data can help them meet data governance and privacy regulations.
The vast majority of organisations back up their on-premises application data. They know how crippling it would be if the data they rely on to run their missions and perform their services were lost or corrupted. However, the same is not true of SaaS application data: the fact that a vendor keeps an agency’s SaaS app running does not automatically mean it is protecting the data.
By centralising backed-up SaaS data in a cloud data lake that they own, agencies can create pools of data for authorised users. IT teams can then use cloud-native tools that plug into the lake, automatically streaming data into applications and systems where it can be tracked. By capturing data at high frequencies in a cloud data lake they own, federal, state and local governments can better protect their data while maximising the value they get from it.
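The pattern described above can be sketched in a few lines: export the SaaS application's records on a schedule and write each export as a timestamped object under an agency-owned lake prefix. This is a hypothetical sketch, not any vendor's API: `fetch_records()` stands in for a SaaS export call, and a local directory stands in for a cloud object store such as S3.

```python
# Sketch: snapshotting SaaS records into an agency-owned "data lake".
# fetch_records() is a hypothetical stand-in for a SaaS export API,
# and a local directory stands in for a cloud object store.
import json
import pathlib
from datetime import datetime, timezone

LAKE = pathlib.Path("lake/saas_app")  # agency-owned lake prefix (illustrative)

def fetch_records():
    # Hypothetical SaaS export call; returns the app's current records.
    return [{"id": 1, "status": "open"}, {"id": 2, "status": "closed"}]

def snapshot():
    """Write one timestamped JSON snapshot under the lake prefix."""
    ts = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = LAKE / f"snapshot-{ts}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(fetch_records()))
    return path

saved = snapshot()
```

Because each snapshot is an ordinary object the agency controls, downstream cloud-native tools can read from the lake directly, independent of the SaaS vendor's availability or retention policies.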