2021 Speakers

Bryan Blanc

Pronouns: he/him
Nelson\Nygaard Consulting Associates, Portland, OR

Session: Shiny and R

Using Shiny Dashboards to Understand Bus Transit Delay and Sketch Solutions for King County Metro Transit

Buses are the backbone of most urban transit systems – they are where transit agencies invest the most service hours and have the most flexibility, but they are also the most susceptible to the constraints and congestion of urban surface streets. Using R, we analyzed high-resolution GPS and passenger count data to understand bus transit delay, and aggregated this information to a custom geometric representation of King County Metro’s bus network. We then used Shiny to develop an interactive dashboard that planners and analysts can use to view the data at both systemwide and location-specific levels. Additionally, we developed a module of the application that enables the development of projects and scenarios with an associated forecast of delay reduction. Using these features, the agency can both understand existing sources and locations of transit delay and quickly sketch solutions for addressing it. We will present an overview of the data aggregation process and then demonstrate the dashboard, highlighting which packages were used in development and which components had to be custom-built for the application.
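A scenario-builder like the one described is the kind of feature usually packaged as a Shiny module, so it can be reused across dashboards. The sketch below is a hypothetical, greatly simplified module — the input names and the delay arithmetic are illustrative assumptions, not BDAT's actual code:

```r
library(shiny)

# Hypothetical scenario module: user enters an observed delay and an
# assumed percent reduction; the module forecasts the delay saved.
scenarioUI <- function(id) {
  ns <- NS(id)
  tagList(
    numericInput(ns("delay_sec"), "Observed delay (sec/trip)", value = 60),
    sliderInput(ns("reduction"), "Assumed % delay reduction", 0, 100, 25),
    textOutput(ns("forecast"))
  )
}

scenarioServer <- function(id) {
  moduleServer(id, function(input, output, session) {
    output$forecast <- renderText({
      saved <- input$delay_sec * input$reduction / 100
      sprintf("Forecast delay saved: %.0f sec/trip", saved)
    })
  })
}

# To run: shinyApp(fluidPage(scenarioUI("s1")),
#                  function(input, output) scenarioServer("s1"))
```

Because the module namespaces its own inputs, several independent scenario panels can coexist in one app.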

Bio: Bryan is a transportation planner/data scientist at Nelson\Nygaard Consulting Associates, a transportation planning consulting firm. He has been an R user since graduate school at Portland State University, where he began using R in 2013 for both coursework and academic research. Shortly after graduating with an M.S. in 2015, Bryan joined Nelson\Nygaard, and has since supported a wide variety of transportation planning efforts, including bus transit plans, parking supply management analyses, active transportation safety analyses, commuter travel surveys, and others. Throughout that time he has used R to support his transportation planning work, and he is now focused on using R and other data science tools and skills to support his colleagues and their clients. In consultation with TriMet (Portland’s transit agency) and Metro (the Portland area’s regional government), Bryan developed the analysis framework underlying the initial iteration of the Bus Delay Analysis Tool (BDAT, the Shiny dashboard tool to be presented), developed to support their bus transit project prioritization in 2019-2020. Since then, Bryan has worked with Esther Needham (his co-presenter) and other members of the Nelson\Nygaard team to develop further iterations of BDAT for other transit agencies, including King County Metro (Seattle, WA) and the Regional Transportation District (Denver, CO).

Chanté Davis

Pronouns: she/her
NOAA Fisheries’ West Coast Regional Center, Oregon

Session: Sharing R Love

Different Strategies for Teaching Your Colleagues R: Lessons Learned and Recommendations

Authors: Chanté Davis, Emily Markowitz & Diana Dishman. There is a clear and present need across all science fields to make workflows more efficient, improve data management, and make our results accessible and digestible to general audiences. The first step toward this is to embrace analysis-to-product workflows in programs like R. However, this can exclude professionals in an organization who do not know R. Without a skill set in a common format and language, colleagues unfamiliar with R may feel left out of a growing scientific R community or no longer feel able to constructively contribute to the workload. Without support, many are resigned to the daunting task of learning R outside of work or changing how they contribute to their field. Instead, we can take it upon ourselves to build an inclusive community for teams to collaborate with and learn from. This internally-driven investment in professional development extends the impact of a team member and takes the weight off the organization’s R experts as new users are able to take on more tasks in R. Combined, at NOAA Fisheries we have prepared and taught four very different trainings/workshops for our colleagues, each with unique needs, goals, and time constraints. Trainings included 1) a two-day agency-wide workshop on how R can be used to support our needs, 2) a 90-minute workshop to “jump start” new coders, 3) a five-week course with office hours, homework, and department support aimed at broadening the internal R community and providing an introduction to R, and 4) a three-day facilitated course of self-paced tutorials helping new users learn basic skills and develop confidence working with their own data. Each training received positive feedback, we have seen our colleagues progress post-training, and many lessons were learned.
At the core of each training, we aimed to introduce attendees to the various capabilities the R platform provides, build an inclusive community for skill sharing, and infuse our colleagues with enthusiasm for R. In summary, supporting R culture and internally investing time in our organizations has empowered our colleagues to invest in their own R journeys, has made our local scientific communities more inclusive, and allowed our colleagues to work more efficiently, deliberately, and impactfully.

Bio: Chanté Davis is a Natural Resource Management Specialist in the Sustainable Fisheries Division of NOAA Fisheries’ West Coast Region. She has a PhD in Fisheries and Wildlife from Oregon State University, a Master’s in marine science from California State University Monterey Bay, and a Bachelor’s in Earth Systems Science and Policy from California State University Monterey Bay. Chanté was a Knauss Marine Policy fellow, and prior to her fellowship she was a Graduate Research Fellow with OSU, where she completed independent research that brought together three fields of research: spatial statistics, ecology, and population genetics. In her current role with NOAA Fisheries Chanté is helping her branch modernize how they analyze and track the impacts of hatchery programs using innovative approaches in R, and has spearheaded efforts to start an internal R training program for the Region. Chanté lives on the central Oregon coast, and in her free time enjoys reading, knitting, and playing with her dog on the beach.

Cordero Ortiz

Pronouns: he/him/his
Portland State University, Portland, OR

Session: Using R

PurpleAir PM2.5 Modeling in Portland, Oregon

The Portland, Oregon airshed poses a high public health risk from air toxics due to the city’s population density, development activities, and role as a global freight corridor. PM2.5 is a particularly harmful air toxic composed of a complex mixture of droplets and particles 2.5 micrometers or less in diameter. Because PM2.5 is consistently among the many environmental issues affecting urban areas, relatively low-cost consumer-grade PM2.5 sensors have been developed in recent years that allow users to know more about their local air quality. PurpleAir’s consumer-grade air quality sensors are unique in that each sensor in its network also records PM2.5 data to a ThingSpeak API. The AirSensor R package was written specifically for querying real-time and historical PurpleAir data from ThingSpeak; however, there are limitations that inhibit simultaneously querying the historical time series data of multiple sensors. This talk will explore parallelizing API queries of PurpleAir time series data for over 150 sensors in the Portland metro area, utilizing the caret package to train predictive random forest machine learning models of monthly mean PM2.5 concentrations, and packaging the monthly models into a Shiny dashboard that also incorporates spatial analysis of real-time PurpleAir data.
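The parallelization idea can be sketched with base R's parallel package. Everything here is an assumption for illustration — `get_sensor_history()` is a stand-in for an AirSensor-style query function, the channel IDs are made up, and the placeholder data frame replaces a real HTTP call:

```r
library(parallel)

# Stand-in for a per-sensor ThingSpeak query; a real version might wrap
# httr::GET() on the channel's feeds endpoint.
get_sensor_history <- function(channel_id) {
  data.frame(channel = channel_id, pm25 = runif(10, 2, 35))  # placeholder data
}

# Fan the per-sensor queries out across worker processes.
cl <- makeCluster(max(1, detectCores() - 1))
clusterExport(cl, "get_sensor_history")
sensor_data <- do.call(rbind, parLapply(cl, 101:110, get_sensor_history))
stopCluster(cl)

# Monthly mean concentrations could then feed a random forest, e.g.
# caret::train(pm25 ~ ., data = monthly_means, method = "rf")
```

With real network-bound queries, each worker spends most of its time waiting on the API, so the speedup scales roughly with the worker count.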

Bio: Cordero Ortiz is an adjunct research assistant in the Sustaining Urban Places Research (SUPR) Lab at Portland State University. His 2019 thesis at Reed College, Mapping the NOx Plumes of Transportation Infrastructure in Portland, Oregon, sparked his interest in GIS and motivated his enrollment and subsequent completion of the Graduate GIS Certificate Program at Portland State University in the winter of 2021. He currently has a special interest in remote sensing, APIs, machine learning, and dynamic web mapping for Shiny dashboards, all of which he tied together in an aptly titled project, PurpleAir PM2.5 Modeling in Portland, OR.

David Keyes

Pronouns: he/him
R for the Rest of Us, Portland, OR

Session: Reporting & Sharing of R

Making Beautiful Reports that Communicate Effectively with pagedown and pagedreport

Most reports made in R look like … reports made in R. Starkly minimal and supremely dedicated to content over aesthetics, these reports work fine for internal reporting, but would never be fit for wide-scale public exposure. Even among users who produce high-quality data visualization, reports that incorporate graphs made in R are often laid out by a professional graphic designer in a tool like InDesign. But having access to a professional designer is rare. Might it be possible to make reports using R that look good and communicate well? Over the last several years, I’ve worked with organizations that want to improve the quality of their reporting. One of these was Connecticut-based Partnership for Strong Communities. In 2020, I worked with them, as well as partners the Connecticut Data Collaborative and Thomas Vroylandt of Tillac Data, to produce a set of reports on housing and population data for each of the 169 towns in the state. In this talk, I’ll provide a case study of developing attractive reports without ever leaving R. I’ll explain how we used the pagedown package to design and create these reports. And I’ll show a package, pagedreport, that Thomas Vroylandt and I developed to help others make attractive reports from within R.
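For readers who have not used pagedown-based workflows: a report like these is an ordinary R Markdown file whose output format comes from the template package. The header below is a sketch from memory — the `paged_windmill` template name and YAML fields may differ by pagedreport version, so check the package documentation:

```r
# report.Rmd -- illustrative YAML header for a pagedreport-templated document:
#
# ---
# title: "Town Housing Profile"
# output:
#   pagedreport::paged_windmill:
#     logo: "logo.png"
#     front_img: "cover.jpg"
# ---
#
# Then, from R:
# rmarkdown::render("report.Rmd")          # paged HTML document
# pagedown::chrome_print("report.html")    # PDF via headless Chrome
```

The key design point is that branding (logos, cover images, fonts) lives in the template, so each of the 169 town reports only needed new data.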

Bio: David Keyes is the founder of R for the Rest of Us. Through online courses and custom trainings, he helps people and organizations learn R. In addition to training folks to use R, David does consulting work, helping organizations to use R to create compelling data visualization, improve their workflow, and much more.

Emily Markowitz

Pronouns: she/her
NOAA Fisheries/Alaska Fisheries Science Center, Seattle, WA

Session: Reporting & Sharing of R

Reproducible Reports in R Markdown: Perspectives and {NMFSReports}

Scientists in government and beyond are often tasked with preparing analysis-driven reports that inform policy and are crucial for documenting the state of their programs at regular intervals (e.g., annually, quarterly, monthly). Although these reports typically follow the same format for each iteration, updating the previous document line by line with new data can lead to inefficient writing and introduce errors. Alternatively, R and R Markdown can be used to systematically modernize report creation. To address this need within our agency, I have developed a new R package, {NMFSReports}, which uses R and R Markdown to provide an analysis-to-product approach to report writing by centralizing back-end data analyses and efficiently streamlining copy-edit and design sub-processes. The {NMFSReports} R package first creates the basic report outline and folder architecture to create reproducible reports, and then provides users with grammar and organization helper functions that assist in report writing. To aid in the final publication process, this workflow can also be used to produce intermediate output files for subject matter experts and collaborators to review and use. {NMFSReports} can produce copy-edit ready and accessibility-compliant documents for editors, style guide-formatted and flow-in ready text (including bibliography, footnote, and figure and table caption management) for authors, tables and figures for graphic designers, and web-ready data files for web tool developers. Though this package is in the early stages of development, it is already clear that it has the potential to save colleagues across our agency countless hours and improve efficiency and consistency among our teams and offices. Though developed for reports produced by scientists at NOAA Fisheries, the concepts and structures behind {NMFSReports} have utility for anyone seeking to streamline reports, graphics, and web tools.
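The underlying pattern — independent of the {NMFSReports} helper functions, whose API is not shown here — is a parameterized R Markdown template rendered once per reporting period. A minimal sketch, with an assumed template file that declares `params: year:` in its YAML header:

```r
# One template, many reports: rmarkdown's parameterized-report feature
# re-runs the same analysis document with new data each period.
library(rmarkdown)

for (yr in 2018:2021) {
  render(
    input = "report_template.Rmd",       # assumed template file name
    params = list(year = yr),            # available inside the Rmd as params$year
    output_file = sprintf("report_%s.docx", yr)
  )
}
```

Because every number in the output is computed at render time, the "update the previous document line by line" step — and its copy-paste errors — disappears.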

Bio: Emily Markowitz is a Research Fisheries Biologist in the Eastern Bering Sea Survey Team in the Groundfish Assessment Group at NOAA’s Alaska Fisheries Science Center (AFSC) in Seattle, WA. Before AFSC, Em worked in Silver Spring, MD where she was a contractor for the Office of Science and Technology (OST) in the Economics and Social Analysis Division providing statistical and data visualization expertise for the national annual Fisheries Economics of the US report. Before that, Em was a John A. Knauss Marine Policy Fellow working in OST’s Assessment and Monitoring Division’s Protected Species Science Branch working on sea turtle issues and marine mammal acoustics. Em obtained her BS and MS degrees in quantitative fisheries ecology from Stony Brook University. Her thesis research focused on the development of species distribution models that combined fisheries-independent bottom trawl survey data with oceanographic models to predict suitable habitat and distributional shifts.

Ericka Smith

Pronouns: she/hers
Oregon State University, Corvallis, OR

Session: Reporting & Sharing of R

Addressing Gaps in Data Accessibility with Shiny Dashboards

The concept of publicly available data stems from ideals surrounding open science, reproducibility, and integrity. It is underscored by federal laws that codify the societal benefits, while simultaneously laying out rules to ensure this concept is practiced. Unfortunately, there are clear gaps between these ideas and their execution. This project elucidates a way to fill those gaps. Specifically, we created a Shiny Dashboard to address the difficulties and hurdles that exist for the public in approaching and understanding large climate model data. There are three primary challenges that we focused on: obscure file and data types, complex models, and the unwieldy size of these datasets. The first is managed by using a Shiny Dashboard as a tool. Since the user is not interacting with the code or data themselves, they cannot be hindered at this point. Making models approachable to a general audience was a more significant undertaking. We solved this issue via the design of the dashboard, by giving careful consideration to navigation and how we led users to higher levels of complexity. The size was addressed in a multifaceted manner which, in summary, amounted to being very intentional about which things are calculated and when. Ultimately, these data went from an initial state of requiring significant knowledge and resource investment to even look at, to being approachable to anyone with a link to the website. Our methodology is available on GitHub and could be scaled to other public datasets. There is a clear benefit to the initial investment required to make a tool like this. This project acts as a proof of concept, showing that Shiny Dashboards are a viable tool for creating truly accessible data.
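The "intentional about which things are calculated and when" point maps naturally onto Shiny's reactive graph. As an illustrative sketch (the input names and precomputed-summary file layout are hypothetical, not this project's actual code), expensive work can be made lazy and cached per input combination:

```r
library(shiny)

server <- function(input, output, session) {
  # Lazy: nothing is read until an output actually needs it, and the
  # result is cached until the inputs change.
  subset_data <- reactive({
    req(input$model, input$year)
    readRDS(sprintf("summaries/%s_%s.rds", input$model, input$year))
  })

  # bindCache() persists rendered plots across sessions for repeated
  # input combinations, so large climate subsets are summarized once.
  output$plot <- renderPlot({
    plot(subset_data())
  }) |> bindCache(input$model, input$year)
}
```

Precomputing summaries offline and loading only the slice a user asks for is one common way to keep a dashboard over very large model output responsive.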

Bio: Ericka Smith recently graduated with a master's in statistics from Oregon State University. She has a strong passion for data accessibility. With a background in natural resources, she has firsthand experience with everything that happens to data before it gets into R and after it leaves. This expertise gives her a unique insight into the pitfalls and traps we can fall into as R users working with data that is not our own, as well as knowledge about how to avoid them.

Esther Needham

Pronouns: she/her
Nelson\Nygaard Consulting Associates, Portland, OR

Session: Shiny and R

Using Shiny Dashboards to Understand Bus Transit Delay and Sketch Solutions for King County Metro Transit

Buses are the backbone of most urban transit systems – they are where transit agencies invest the most service hours, have the most flexibility in their systems, but are also the most susceptible to the constraints and congestion on urban surface streets. Using R, we analyzed high resolution GPS and passenger count data to understand bus transit delay, and aggregated this information to a custom geometric representation of King County Metro’s bus network. We then used Shiny to develop an interactive dashboard that planners and analysts could use to view the data at both systemwide and location specific levels. Additionally, we developed a module of the application that enables the development of projects and scenarios with an associated forecast of delay reduction. Using these features, the agency could both understand existing sources and locations of transit delay and sketch solutions for addressing delay very quickly. We will present an overview of the data aggregation process, and then demonstrate the dashboard, highlighting what packages were used in development and what components had to be developed custom for the application.

Bio: Esther is a transportation planner at Nelson\Nygaard Consulting Associates, working out of the Portland, Oregon office. Prior to joining Nelson\Nygaard in 2019, Esther worked as a data analyst and project manager at Azavea, a geospatial software development company in Philadelphia, PA. Esther was introduced to R and programming while attending graduate school for city and regional planning at the University of Pennsylvania in 2014. Since then she has been using R for data analysis with a focus on geospatial analysis, visualizations, and building interactive tools. Since joining Nelson\Nygaard, Esther has focused primarily on transit and active transportation safety analyses and building dashboards using Shiny. She has been working with Bryan Blanc (co-presenter) to further develop the Bus Delay Analysis Tool (BDAT) and implement features that allow users to experiment with treatments to improve bus performance in order to plan for future real-world investments.

Jacqueline Nolis

Pronouns: she/her
Seattle, WA

Session: Using R

I made an entire e-commerce platform on Shiny

E-commerce has many components that must be securely handled: managing a user's shopping cart, checking out and taking payment, and fulfilling orders. I am excited to say that I've successfully created an e-commerce platform entirely in a single Shiny app for my side project: {ggirl}. Using the experimental {brochure} package by Colin Fay, I was able to make a complex Shiny web service that lets R users order physical postcards of ggplots. I integrated Stripe for payments and used webhooks to know when to fulfill orders. I even used {httr} to make API calls to order the physical products from suppliers after customer payments are received. In this talk I'll go through the architecture I devised and how you can make an e-commerce platform yourself!
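What {brochure} adds over plain Shiny is natively multi-page apps, each page with its own URL. The sketch below is a guess at the shape, not {ggirl}'s real code — the hrefs are invented and the payment wiring is only indicated in comments:

```r
# Hypothetical two-page storefront using the experimental {brochure} package.
library(shiny)
library(brochure)

app <- brochureApp(
  page(
    href = "/",
    ui = fluidPage(h1("Shop"), actionButton("buy", "Buy a postcard"))
  ),
  page(
    href = "/checkout",
    ui = fluidPage(h1("Checkout"))
    # Server logic here would hand the user off to a Stripe-hosted payment
    # page; a separate webhook endpoint would confirm payment before any
    # supplier API call ({httr}) places the physical order.
  )
)
# runApp(app)
```

Separate URLs per page are what make external redirects (to Stripe and back) workable in a Shiny-based flow.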

Bio: Dr. Jacqueline Nolis is a data science leader with over 15 years of experience in managing data science teams and projects at companies ranging from DSW to Airbnb. She currently is the Head of Data Science at Saturn Cloud where she helps design products for data scientists. Jacqueline has a PhD in Industrial Engineering and coauthored the book Build a Career in Data Science. For fun she likes to use data science for humor—like using deep learning to generate offensive license plates.

Joe Cheng

Pronouns: he/him
RStudio, PBC / Redmond, WA

Session: Shiny and R

Extending Tableau with R and Shiny

Many organizations rely on Tableau to provide day-to-day insights from their data. Thanks to Tableau’s point-and-click interface and focused feature set, almost anyone can produce attractive and useful visualizations and dashboards. On the other hand, tasks that are routine to R users can sometimes be difficult or impossible to achieve with Tableau. So it’s common to use R to preprocess data that is then fed to Tableau, or use Tableau’s R integration features to fortify their data tables with columns calculated by R. But until now, there hasn’t been an obvious way to let Tableau take advantage of R’s powerful visualization and reporting capabilities. This talk will introduce {shinytableau}, an experimental new package that lets R users create reusable Tableau dashboard extensions, using the power of R and Shiny to generate visualizations that are not achievable with Tableau alone.

Bio: Joe Cheng is RStudio's Chief Technology Officer, and the original creator of Shiny. He was the first employee at RStudio, joining founder J.J. Allaire in 2009 to help build the RStudio IDE. He continues to work on packages at the intersection of R and the web.

Johann Windt

Pronouns: he/him
Vancouver Whitecaps Football Club

Session: Using R

The PlayerMakeR Pipeline - Using R across the data pipeline with the Vancouver Whitecaps Football Club, an example using wearable technology.

Professional football players fulfill various roles within their respective teams, like a reliable central defender or an elite goal scorer. If the tools used by the Vancouver Whitecaps’ Data Science Department formed a team, R would be the ultimate ‘utility player’, the generalist that could be asked to play multiple roles depending on what the team needs. One clear example of how R fulfills these different roles within our sporting environment can be seen in how we collect, integrate, aggregate, and communicate data collected from PlayerMaker devices (two small accelerometers worn in rubber straps attached around a player’s left and right boots). These devices are worn by all of our academy players, across 5 teams, during every training session and match. In this talk, I will describe how R is deployed across our data pipeline daily in this specific use case with the Vancouver Whitecaps Football Club, so that the data we collect every day with dozens of academy players can be communicated effectively and reliably to coaches, performance department practitioners, and the players themselves. From a data processing standpoint, we download 3 separate Excel files from each training session (15 files a day), with different levels of information and varying levels of cleaning/processing required. I will describe which packages we use to process these files and to ensure that raw velocity traces and possession files are linked to summary data from each drill/session. From a data communication standpoint, we believe that context is key, but privacy is preeminent. Therefore, while we want to give data back to our players so they can better understand and learn from it, we must ensure that they receive only their own information in an identifiable way.
I will describe how we accomplish these two objectives by generating player-specific reports as ggplot2 images and sending individual emails to each player on a weekly basis with these reports attached inline. Technology, however fancy it may be, is useless if it does not alter practice or inform decision making. Therefore, ensuring that the data provided by any given technology can be integrated within an organization’s systems and communicated clearly to the individuals who need it is vital for success. Accomplishing this process at the Vancouver Whitecaps FC – as demonstrated in our PlayerMaker example – is made possible by one of our favourite data science department tools – R.
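The per-player reporting step described above could be sketched as follows. File paths, column names, and the plot itself are placeholder assumptions, not the club's actual pipeline:

```r
# Read all session exports, then write one privacy-preserving image per player.
library(readxl)
library(ggplot2)

files <- list.files("sessions/2021-06-01", pattern = "\\.xlsx$", full.names = TRUE)
session <- do.call(rbind, lapply(files, read_excel))

for (player in unique(session$player_id)) {
  p <- ggplot(subset(session, player_id == player),
              aes(drill, distance_m)) +
    geom_col()
  ggsave(sprintf("reports/%s.png", player), p, width = 7, height = 4)
  # Each image could then be embedded in an individual email, e.g. with
  # blastula::compose_email() followed by blastula::smtp_send().
}
```

Looping per player, rather than building one team-wide report, is what enforces the "only their own information" privacy constraint by construction.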

Bio: Johann Windt is the Head of Data Science – Performance with the Vancouver Whitecaps FC of Major League Soccer (MLS). His stated objective is to make everyone else's job easier and more efficient by overseeing data collection, integration and reporting across football operations - including Scouting & Recruitment, Sport Science & Medicine, Player Development, and First Team Football. Prior to his time with the Whitecaps, Johann worked as a sports medicine data analyst in Colorado Springs with the United States Olympic and Paralympic Committee. Academically, his PhD work involved intensive longitudinal data from player tracking technology, and how to conceptually and analytically examine how training data relates to athletes’ health and performance. His research and professional interests include how technology and data science can inform organizational decision-making. An avid R user in his academic and applied worlds, he has rarely gone through a major project without spinning up a fresh RStudio session.

Kate Hertweck

Pronouns: perceived pronouns
Seattle, WA

Session: Sharing R Love

Coordination and collaboration within teams using R

The continuous and rapid development of new packages and tools in the R ecosystem is one of the most exciting parts of working in the R community. However, this rate of change can also make it challenging to work with a team of R coders: everyone has different levels of excitement about learning and trying out new things, as well as different preferences about “the best” way to perform certain coding tasks. This variation across a team, and a lack of clarity about how to prioritize accommodating new approaches, can cause confusion and friction among team members. This talk will overview impediments to effective R collaboration among teams. I'll then discuss two main approaches to alleviate these challenges: 1) technology to improve team workflows, and 2) behaviors to encourage collaborative culture. Whether you're cultivating a new team of R coders while attempting to determine the best ways to work together, or belong to an established team that is considering transitioning to new workflows, let's think together about how we make decisions to work together more effectively.

Bio: Kate Hertweck is a scientist and educator with seven years of experience as an R educator, including certification as an instructor (and instructor trainer) for The Carpentries. Kate has taught R to hundreds of people with diverse backgrounds and interests: from high school students to experts with Ph.D.s, researchers and medical professionals to librarians and social scientists, and for people interested in applying R to an enormous array of problems in coding and data science. Kate specializes in training biomedical scientists to use coding and reproducible computational methods to improve the reproducibility, robustness, and openness of their science.

Kim Dill-McFarland

Pronouns: she/her
University of Washington, Seattle, WA

Session: Sharing R Love

First of her name: Fostering R usage as the first bioinformatician in my department

I was taught to leave a place better than I found it. In my science career, this has taken the form of teaching R. Whether you are a new student actively seeking help or a career scientist not looking to change your ways, I believe R can improve research efficiency, reproducibility, and accessibility. The catch is convincing everyone else. In this talk, I share my experiences spreading R as the first bioinformatician in my group at the University of Washington’s School of Medicine. When I arrived, the extent of the code was a couple of Stata scripts that few knew how to run. Now, about two years later, all analyses have reproducible scripts, publications have accompanying GitHub repositories, and (nearly) everyone uses R. I describe a range of methods from passive to active, structured to unstructured, that I have found helpful in introducing non-coders to R and convincing the inconvincible to change their ways. I highlight things that worked, what I learned from things that didn’t, and how the pandemic helped and hurt my efforts.

Bio: Dr. Kim Dill-McFarland is a bioinformatician at the U. of Washington. She works at the intersection of microbiology and computer science, applying computational approaches to biological problems. Using sequencing and other high-throughput techniques, she works with several UW labs to research how the human immune system responds to disease.

Kristin Bott

Pronouns: she/her
Reed College, Portland, Oregon

Session: Education & Community

Supporting R across disciplines + building community

You have extolled the virtues of R to your [colleagues, students, peers, higher-ups], and they’re ready to use this popular and powerful tool in their work. Fantastic. Challenge: learning takes time and practice, mismatched parentheses can be problematic (for code, for egos), and solving problems alone … doesn’t really work. How can you build resources and support for users with diverse needs, while subject to constraints of staffing, time, and budget? In this talk, I’ll tell you how my team of student workers and I work to meet the needs of faculty and students across the curriculum, with an approach built on a foundation of collaboration, independent learning, kind communication, and custom Slack emojis. While this story comes from academia, I believe these approaches transfer beyond the ivory tower – relevant to anyone who has users to support and is interested in building community in the process.

Bio: As part of the instructional technology team at Reed, Kristin supports quantitative data across the curriculum, working with faculty to integrate analytical tools (e.g. R) in their teaching and scholarship through workshops, guest lectures, and class/research projects. She also supports students in working with data for coursework and independent research, and is Reed's point person for spatial analysis and mapping. All of this work is made possible by collaborating with a talented and kind team of student workers, who provide front-line support for users and help develop materials to make data work accessible and fun.

Nick Paterno

Pronouns: he/him/his
OpenIntro & Pacific Lutheran University, Tacoma, Washington

Session: Education & Community

Teaching Non-STEM Students to Code

Learning to write code under ‘normal’ circumstances can be daunting for many students, especially those in non-STEM majors. I will discuss the evolution of my approach to teaching R to non-STEM majors as a part of an introductory statistics course. The journey begins a few years ago with an attempt to string together a set of script files (and the inevitable crash and burn), and ends with rmarkdown templates in a custom package.
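For instructors wanting to try the end state described here: an R Markdown template can be shipped inside a course package, after which students find it under File > New File > R Markdown > From Template in RStudio. A sketch with an illustrative template name:

```r
# From within the course package's project, scaffold a new template:
usethis::use_rmarkdown_template(template_name = "Lab Report")

# This creates the layout the rmarkdown template spec expects, roughly:
#   inst/rmarkdown/templates/lab-report/
#     template.yaml           # template name and description
#     skeleton/skeleton.Rmd   # the pre-filled document students start from
```

Pre-loading the skeleton with setup chunks and prose prompts spares new coders the blank-file problem that sinks many first attempts.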

Bio: https://npaterno.github.io/left_coast_stats/about.html

Nicole Kauer

Pronouns: she/they
Sage Bionetworks

Session: Shiny and R

Hey, I want that app! Designing contagious Shiny apps

The AD Knowledge Portal has data on over 25,000 specimens from nearly 8,000 subjects, with more data coming in monthly, from Alzheimer’s disease researchers spread across the globe. In order to collate this data into usable, coherent datasets suitable for secondary research, our team needed a flexible, adaptable, long-term solution to common data curation problems. We developed a set of customizable, reusable Shiny applications, dccvalidator and dccmonitor. dccvalidator is capable of finding recurring curation problems and puts the initial curation process in the hands of those who know the data best: data contributors. The sister application, dccmonitor, gives curators a dashboard to view contributor progress and find the challenging problems that cannot be detected automatically. Together, these applications have streamlined the curation process so much so that other teams have launched their own versions of these applications, customized to their data needs. Check out this talk to find out how the design of these applications has led them, and their underlying ideas, to spread through our organization.

Bio: Nicole originally earned an undergrad degree in Mechanical Engineering from the University of Washington, with the intent to design and build medical devices. However, Nicole realized that her dream career was at the intersection of engineering, medical research, and software – not hardware. Nicole went on to earn a Master’s in Computer Science and Systems, with a focus on Bioinformatics, at the University of Washington - Tacoma. Nicole now works as a Bioinformatics Engineer in the Systems Biology Infrastructure team at Sage Bionetworks, a health research nonprofit. During business hours, she spends her time building software solutions that help researchers share their data in a way that makes it easy for others to find, learn about, download, and reuse. When Nicole isn't working, she is living life; she's a maker, a hiker, a reader, and a gamer.

Njesa Totty

Oregon State University, Corvallis, Oregon

Session: Education & Community

Validating the assumptions of bootstrap intervals for responsible implementation using the R package bootEd

As statistical computing has become an increasingly prevalent component of introductory statistics courses, so too has the use of bootstrapping. While bootstrapping is a powerful tool, it requires that users validate a set of often overlooked, but important, assumptions before the results can be considered valid and trustworthy. We hope to discuss why these overlooked assumptions are so important for valid inferences to be made and introduce a new R package, bootEd. This software package has been designed to help students and teachers of introductory statistics courses implement bootstrap methods easily while also emphasizing the process of assessing the assumptions that are foundational to the efficacy of these methods. We hope that participants will understand that effectively communicating the assumptions behind such methods is a necessary step in the teaching process and become aware of the bootEd package as a tool for accomplishing that goal. As the use of statistics and statistical computing becomes increasingly pertinent to an improved society, more introductory statistics courses are teaching methods such as bootstrapping and more students are taking these courses. Observers of industry and academia can see that statistical computing is catching on in many fields. In order to ensure that popular methods are applied correctly in industry, we must take a good look at how these methods are taught in the classroom. Instructors can select tools like bootEd to ensure that students leave the classroom with the capability to correctly apply these methods.
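As a concrete illustration of the kind of check being advocated — this is a generic base-R percentile bootstrap, not bootEd's own API — the percentile interval for a mean is only trustworthy when the bootstrap distribution is roughly symmetric about the sample statistic, which students can inspect before reporting the interval:

```r
# Percentile bootstrap for the mean, with a visual symmetry check.
set.seed(42)
x <- rexp(40, rate = 1)  # deliberately skewed sample data

# Resample with replacement and recompute the mean many times.
boot_means <- replicate(5000, mean(sample(x, replace = TRUE)))

hist(boot_means)  # assumption check: is the distribution roughly symmetric?
ci <- quantile(boot_means, c(0.025, 0.975))
```

If the histogram is clearly skewed, the percentile interval can be misleading, and a different interval (or a transformation) is warranted — exactly the judgment step that tends to be skipped in introductory courses.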

Bio: Njesa Totty is passionate about all things related to higher education. Upon completing her undergraduate studies in mathematics, with a minor in education, she spent time giving back to her community and teaching math at an inner-city charter school before heading off to graduate school. Currently, she is wrapping up her PhD program in Statistics at Oregon State University (OSU) where she holds a graduate teaching assistant (GTA) position and conducts research pertaining to statistical applications in higher education and statistics education. In her time as a GTA at OSU she has also worked on curriculum development and course reconstruction projects within her department and served as a graduate research assistant for projects pertaining to university student success initiatives. She was awarded the Outstanding GTA Award and is a Southern Regional Education Board Doctoral Scholar. As a current member of the College of Science Student Board of Advisors, she has gained insight into the many issues that higher education institutions may face and is formulating ideas on how these can be solved using statistics and data science. Upon completing her PhD, Njesa desires to work in a faculty position at a university where she can continue to teach and perform her research.