US Census

Maps that examine US Census data from the macro perspective of all counties in the United States.

US Counties – Square Miles

Which counties are the largest in an area? This interactive map, which pulls data from the Census TIGER files, is colored using Jenks natural breaks classification to compare the sizes of counties across America. Blue counties are smaller; red counties are larger.

Data Source: 2016 US Census population estimates, American FactFinder. https://factfinder.census.gov/faces/tableservices/jsf/pages/productview.xhtml?src=bkmk
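Jenks natural breaks places class boundaries so that the total within-class variance is minimized. Here is a brute-force sketch of the idea in Python, with made-up square-mile values; a real map would use a library such as mapclassify, since this exhaustive search is only practical for small inputs:

```python
from itertools import combinations

def jenks_breaks(values, k):
    """Exhaustive Jenks natural breaks: pick the k-1 cut points that
    minimize the summed within-class squared deviation from each
    class mean. Only practical for small inputs."""
    vals = sorted(values)
    n = len(vals)

    def ssd(group):  # sum of squared deviations from the group mean
        m = sum(group) / len(group)
        return sum((v - m) ** 2 for v in group)

    best_cost, best_cuts = float('inf'), None
    # k classes need k-1 cut points between elements
    for cuts in combinations(range(1, n), k - 1):
        bounds = (0,) + cuts + (n,)
        cost = sum(ssd(vals[a:b]) for a, b in zip(bounds, bounds[1:]))
        if cost < best_cost:
            best_cost, best_cuts = cost, cuts

    # report the upper value of each class
    return [vals[c - 1] for c in best_cuts] + [vals[-1]]

# made-up county areas in square miles, purely illustrative
areas = [47, 52, 60, 180, 210, 950, 1020, 4000]
print(jenks_breaks(areas, 3))  # -> [210, 1020, 4000]
```

The three classes group the small, medium, and very large counties together, which is why natural breaks tends to read better on a map of county sizes than equal intervals would.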

Households That Make Less than $20k in America 💵

If Puerto Rico were a state, it would be by far the poorest in America. Significant poverty persists in the South: the most impoverished states are Mississippi, West Virginia, Louisiana, New Mexico, and Alabama, according to the 2020 American Community Survey 5-year averages.

State  Households Making Under $20k (%)
Puerto Rico 48.2
Mississippi 22.6
West Virginia 20.6
Louisiana 20.1
New Mexico 19.6
Alabama 19.3
Arkansas 18.9
Kentucky 18.8
South Carolina 17.0
Tennessee 16.6
Oklahoma 16.2
North Carolina 15.8
Ohio 15.6
Georgia 15.2
Missouri 15.2
District of Columbia 15.2
Montana 15.1
New York 15.0
Maine 15.0
Michigan 14.9
Florida 14.9
Rhode Island 14.6
Indiana 14.6
Pennsylvania 14.2
Texas 13.9
Arizona 13.7
Vermont 13.7
Illinois 13.7
Kansas 13.6
North Dakota 13.5
South Dakota 13.5
Iowa 13.4
Nevada 13.4
Idaho 13.3
Oregon 13.1
Wyoming 13.0
Wisconsin 12.9
Nebraska 12.7
Massachusetts 12.5
California 11.9
Delaware 11.7
Connecticut 11.6
Virginia 11.3
New Jersey 11.0
Minnesota 10.9
Colorado 10.8
Washington 10.6
Hawaii 10.0
Maryland 10.0
New Hampshire 9.9
Alaska 9.9
Utah 9.2

Here is the R code for making these statistics:

library(tidycensus)
library(dplyr)
library(readr)

income <- get_acs(
  geography = "state",
  table = "B19001",
  year = 2020,
  output = "wide",
  survey = "acs5",
  geometry = FALSE)

perincome <- income %>%
  select(ends_with("E"), -ends_with("001E")) %>%
  rowwise() %>%
  mutate(total = sum(across(matches("\\dE")))) %>%
  mutate(across(matches("\\dE"), ~ (. / total) * 100)) %>%
  select(-total)

perincome %>%
  rowwise() %>%
  mutate(under20k = sum(across(c(B19001_002E, B19001_003E, B19001_004E)))) %>%
  select(NAME, under20k) %>%
  arrange(-under20k) %>%
  write_csv("/tmp/hhunder20k.csv")
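The same under-$20k percentage translates to pandas if you prefer Python. Here is a minimal sketch with made-up B19001-style bin counts (in the real table, _002E through _004E are the under-$10k, $10k-$15k, and $15k-$20k bins, and there are sixteen income bins in total):

```python
import pandas as pd

# toy B19001-style counts (made-up numbers, only a few of the real bins)
df = pd.DataFrame({
    'NAME': ['Somewhere', 'Elsewhere'],
    'B19001_002E': [50, 10],
    'B19001_003E': [30, 10],
    'B19001_004E': [20, 10],
    'B19001_005E': [100, 70],
    'B19001_006E': [300, 400],
})

# keep only the estimate bins, then express the three lowest bins
# as a percentage of all households
bins = df.filter(regex=r'^B19001_\d{3}E$')
df['under20k'] = bins[['B19001_002E', 'B19001_003E', 'B19001_004E']].sum(axis=1) \
                 / bins.sum(axis=1) * 100

print(df[['NAME', 'under20k']].sort_values('under20k', ascending=False))
```

With real data you would first drop the B19001_001E total column, exactly as the R code does with -ends_with('001E').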

Median Home Value

The Eastern Seaboard from Washington, DC to Boston, MA has some of the United States' highest home values, along with coastal California from San Francisco down to Los Angeles. Seattle, Colorado, and parts of South Florida are also expensive places to live. Surprisingly, Chicago, IL is a relatively inexpensive place to buy a home. Areas shown in blue and green are less expensive places to buy a home, yellows are about average, while oranges and reds are the most expensive places to own a home.

Data Source: Median Home Value, 2011-2015 American Community Survey 5-Year Estimates. https://factfinder.census.gov/faces/nav/jsf/pages/searchresults.xhtml?refresh=t

Working with PANDAS and the American Community Survey Summary File

Want to be able to work with American Community Survey data offline using your own local copy of the ACS 5-year Summary File? It’s pretty easy to do with PANDAS. If you are planning a lot of Census queries, this can be a very fast way to extract data.

Before you can use this script, you will need to download some data from the Census Bureau's ACS Summary File release: the mini geography workbook (5_year_Mini_Geo.xlsx), the sequence/table lookup file (ACS_5yr_Seq_Table_Number_Lookup.xlsx), and the sequence (segment) files for your state:

import pandas as pd

path = '/home/andy/Desktop/acs-summary-file/'

# list of geography
geo = pd.read_excel(path+'5_year_Mini_Geo.xlsx', sheet_name='ny',index_col='Logical Record Number')

# load headers
header = pd.read_excel(path+'ACS_5yr_Seq_Table_Number_Lookup.xlsx')

# create a column with census variable headers
header['COL_NAME'] = header['Table ID'] + '_' + header['Line Number'].apply(lambda a: "{0:.0f}".format(a).zfill(3))

# segment id, along with ACS year and state
segId = 135
year = 2019
state = 'ny'

# create a list of headers for segment file
segHead = ['FILEID','FILETYPE','STUSAB','CHARITER','SEQUENCE','LOGRECNO'] \
    + header.query('`Sequence Number` == '+str(segId)).dropna(subset=['Line Number'])['COL_NAME'].to_list()

# read the segment file, including column names above    
seg = pd.read_csv(path+'e'+str(year)+'5'+state+(str(segId).zfill(4))+'000.txt',header=None, names=segHead, index_col=5)

# join the segment file to geography using Logical Record number
seg = geo.join(seg)

# calculate percentage of households with internet subscriptions -- codes from ACS_5yr_Seq_Table_Number_Lookup.xlsx
seg['Internet Subscription']=seg['B28011_002']/seg['B28011_001']*100

# output the percentage of households by county with internet subscriptions
seg[seg['Geography ID'].str.startswith('050')][['Geography Name','Internet Subscription']]

                      Geography Name                Internet Subscription
Logical Record Number
13                    Albany County, New York       83.888889
14                    Allegany County, New York     76.248050
15                    Bronx County, New York        75.917821
16                    Broome County, New York       82.222562
17                    Cattaraugus County, New York  72.431480
...                   ...                           ...
70                    Washington County, New York   80.224036
71                    Wayne County, New York        81.508715
72                    Westchester County, New York  86.371288
73                    Wyoming County, New York      78.387887
74                    Yates County, New York        75.916583
# alternatively, you can rename the columns to their human-readable table titles
seg.rename(dict(zip(header['COL_NAME'],header['Table Title'])),axis=1)
       State  Geography ID     Geography Name                                   FILEID  FILETYPE     STUSAB  CHARITER  SEQUENCE  Total:     Has one or more types of computing devices:
Logical Record Number
1      NY     04000US36        New York                                         ACSSF   201900000.0  ny      0.0       135.0     7343234.0  6581493.0
2      NY     04001US36        New York — Urban                                 ACSSF   201900000.0  ny      0.0       135.0     6433524.0  5771681.0
3      NY     04043US36        New York — Rural                                 ACSSF   201900000.0  ny      0.0       135.0     909710.0   809812.0
4      NY     040A0US36        New York — In metropolitan or micropolitan st…   ACSSF   201900000.0  ny      0.0       135.0     7189902.0  6449723.0
5      NY     040C0US36        New York — In metropolitan statistical area      ACSSF   201900000.0  ny      0.0       135.0     6796057.0  6109882.0
...    ...    ...              ...                                              ...     ...          ...     ...       ...       ...        ...
28400  NY     97000US3631920   Yonkers City School District, New York           ACSSF   201900000.0  ny      0.0       135.0     74897.0    65767.0
28401  NY     97000US3631950   York Central School District, New York           ACSSF   201900000.0  ny      0.0       135.0     2116.0     1964.0
28402  NY     97000US3631980   Yorktown Central School District, New York       ACSSF   201900000.0  ny      0.0       135.0     7068.0     6751.0
28403  NY     97000US3632010   Cuba-Rushford Central School District, New York  ACSSF   201900000.0  ny      0.0       135.0     2629.0     2186.0
28404  NY     97000US3699999   Remainder of New York, New York                  ACSSF   201900000.0  ny      0.0       135.0     79779.0    75425.0

Too much work, or don't want to download the summary file yourself? You can query the Census API directly using the censusdata library from PyPI. For infrequent queries made while you are online, you are much better off just querying the API directly.

import pandas as pd
import censusdata as cd

# attributes to load
cdcol=['B28011_001','B28011_002']

cdf = cd.download('acs5', 2019,
           cd.censusgeo([('state', '36'),
                         ('county','*')]),
          cdcol)


# separate out the geoid and geography name
geoid=[]
geoname=[]

for index in cdf.index.tolist():
    geopart=''
    for part in index.geo:
        geopart = geopart + part[1]
    geoid.append(geopart)
    geoname.append(index.name)

cdf['geoid']=geoid
cdf['geoname']=geoname

# calculate percentage with internet subscriptions
cdf['Internet Subscription']=cdf['B28011_002']/cdf['B28011_001']*100

# output a similar table as above
cdf

Learn how to load the PL 94-171 2020 Redistricting Data into PANDAS, a process that is similar to, but different from, working with ACS data.
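As a preview of that process: the 2020 PL 94-171 files are pipe-delimited text with no header row, so the pandas load boils down to read_csv with sep='|' plus a join on the Logical Record Number, much like the ACS segment join above. The rows and column positions below are made-up, heavily truncated stand-ins so the sketch is self-contained; the real header lists come from the Census Bureau's published documentation:

```python
import io
import pandas as pd

# stand-ins for a state's geo file and segment 1 -- made-up, truncated
# rows (real records have many more pipe-delimited fields)
geo_text = "PLST|NY|050|00|00|000|00|0000013|0500000US36001|36001\n" \
           "PLST|NY|050|00|00|000|00|0000014|0500000US36003|36003\n"
seg_text = "PLST|NY|000|01|0000013|1000\n" \
           "PLST|NY|000|01|0000014|2000\n"

# PL files have no header row; name only the columns used here
geo = pd.read_csv(io.StringIO(geo_text), sep='|', header=None, dtype=str) \
        .rename(columns={7: 'LOGRECNO', 9: 'GEOCODE'})
seg = pd.read_csv(io.StringIO(seg_text), sep='|', header=None, dtype={4: str}) \
        .rename(columns={4: 'LOGRECNO', 5: 'P0010001'})  # P1 total population

# join geography to the segment on Logical Record Number, as with ACS
pl = geo.merge(seg, on='LOGRECNO')
print(pl[['GEOCODE', 'P0010001']])
```

With the real files you would point read_csv at the .pl files from the state zip instead of the StringIO stand-ins, and pull the full column lists from the Census documentation.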

Also, learn how to calculate the population of an area and its average demographics, including areas that don't have Census demographics of their own, such as Election Districts or County Legislative Districts.
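The usual technique for that last task is area-weighted interpolation: intersect the census units with the target district (typically with a GIS overlay) and apportion each unit's population by the share of its area that falls inside. With made-up numbers, the arithmetic reduces to:

```python
# area-weighted interpolation sketch: apportion each census unit's
# population to a district by the fraction of the unit's area inside it
# (made-up populations and areas, purely illustrative)
blocks = [
    {'pop': 1200, 'area': 10.0, 'area_in_district': 10.0},  # fully inside
    {'pop': 800,  'area': 8.0,  'area_in_district': 2.0},   # one quarter inside
    {'pop': 500,  'area': 5.0,  'area_in_district': 0.0},   # entirely outside
]

district_pop = sum(b['pop'] * b['area_in_district'] / b['area'] for b in blocks)
print(district_pop)  # -> 1400.0
```

The implicit assumption is that population is spread evenly within each census unit, which is why working from the smallest available units (blocks rather than tracts) gives better estimates.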