
Biostatistics Consultation Core

2024 Events

How to Use Propensity Score Adjustments

Tuesday, Mar. 26 at 10 a.m.
Zoom password: 235427
Zoom meeting ID: 820 6300 9804

Most social science and public health research is based on observational studies, as randomized controlled trials are often not a feasible study design. However, observational studies are typically subject to confounding, where treatment status is confounded with measured or unmeasured respondent characteristics, which limits one's ability to draw causal inferences. Propensity score adjustments are an increasingly popular tool for analyzing observational data and drawing causal inferences from it. In this presentation, we will show a step-by-step application of the propensity score method, covering the main assumptions, different approaches, and diagnostics.
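
For illustration, here is a minimal sketch of one such approach (propensity score matching) in R using the MatchIt package; the data frame dat, the treatment indicator treat, the outcome, and the covariates are hypothetical stand-ins:

    # Minimal sketch, assuming a data frame `dat` with a binary treatment
    # indicator `treat`, an outcome, and measured confounders.
    library(MatchIt)

    # Step 1: estimate propensity scores and match (nearest neighbor on a
    # logistic-regression propensity score)
    m.out <- matchit(treat ~ age + sex + income, data = dat,
                     method = "nearest", distance = "glm")

    # Step 2: diagnostics -- compare covariate balance before and after matching
    summary(m.out)

    # Step 3: estimate the treatment effect on the matched sample
    matched <- match.data(m.out)
    fit <- lm(outcome ~ treat, data = matched, weights = weights)
    summary(fit)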

Join on Zoom

How to Spatially & Temporally Preprocess Brain Imaging Data and How to Statistically Analyze Brain Imaging Data

Tuesday, Apr. 23 at 10 a.m.
Zoom password: 822258
Zoom meeting ID: 896 2748 9828

A special type of data our CHS biostatistics core can handle is high-dimensional neuroimaging (or brain imaging) data, which itself comes in different types, often referred to as modalities, such as magnetic resonance imaging (MRI) with different acquisition sequences or positron emission tomography (PET) with different radioactive tracers. Neuroimaging techniques have been used in numerous medical research studies, especially in investigations of various brain diseases, including Alzheimer's disease and related dementias (ADRD). Multiple steps are needed to analyze neuroimaging data. These steps can be categorized into preprocessing, model construction, parameter estimation, and statistical inference (or model generalization).

This presentation will cover the basic steps for preprocessing neuroimaging data spatially (and, depending on the modality, temporally). A key preprocessing step is spatial normalization of the imaging data from each individual research participant to a common template coordinate system via linear and nonlinear transformations. In the common template coordinate space, location-by-location statistical analysis can be carried out. Though the subsequent statistical analysis can be univariate, multivariate, or machine learning (ML) based, we will focus on the simple univariate case to illustrate the basic steps.
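
As a rough illustration of the univariate case, here is a sketch in R of a mass-univariate (location-by-location) analysis; the participants-by-voxels matrix img (assumed already spatially normalized to the common template) and the 0/1 group indicator group are hypothetical:

    # Minimal sketch: fit the same simple univariate model at every
    # template location, then correct for the many tests.
    n_voxels <- ncol(img)                 # img: participants x voxels matrix
    t_stats  <- numeric(n_voxels)
    p_vals   <- numeric(n_voxels)

    for (v in seq_len(n_voxels)) {
      fit <- summary(lm(img[, v] ~ group))   # group: numeric 0/1 indicator
      t_stats[v] <- fit$coefficients["group", "t value"]
      p_vals[v]  <- fit$coefficients["group", "Pr(>|t|)"]
    }

    # Multiple-comparison correction across locations (here, FDR)
    p_adj <- p.adjust(p_vals, method = "fdr")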

Join on Zoom


Past 2024 Events

Tuesday, Jan. 30

Systematic review and meta-analysis is a common approach in intervention research for combining data from diverse studies to reach a more reliable and efficient conclusion, using fixed-effects and random-effects meta-analysis methods. This presentation aims to give an overview of systematic review and meta-analysis with a real example.
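
For illustration, a minimal sketch in R using the metafor package; the study effect sizes yi and sampling variances vi below are made-up values:

    library(metafor)

    dat <- data.frame(yi = c(0.21, 0.35, 0.10, 0.42),     # study effect sizes
                      vi = c(0.020, 0.015, 0.031, 0.025)) # sampling variances

    fe <- rma(yi, vi, data = dat, method = "FE")    # fixed-effects model
    re <- rma(yi, vi, data = dat, method = "REML")  # random-effects model

    summary(re)   # pooled estimate plus heterogeneity (tau^2, I^2)
    forest(re)    # forest plot of the combined studies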

Tuesday, Feb. 20

Often a research question will call for the prediction of an outcome based on a certain measure of risk. This measure, however, may not be unidimensional, and may therefore best be created by compiling various risk factors into a single score. This presentation will detail the multi-step process of calculating a composite risk factor score and then using this score in a predictive model. The intention behind this method is to generate a comprehensive, multidimensional measure that is more stable and contains less variability than a group of predictor variables entered individually into a statistical model.
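
A minimal sketch of the idea in R; the data frame dat, its risk-factor columns, and the covariates are hypothetical:

    # Step 1: put each risk factor on a common scale
    risk_vars <- c("smoking", "bmi", "blood_pressure", "family_history")
    z <- scale(dat[, risk_vars])

    # Step 2: compile the standardized factors into one composite score
    dat$risk_score <- rowMeans(z)

    # Step 3: use the composite score in a predictive model
    fit <- glm(outcome ~ risk_score + age + sex, data = dat, family = binomial)
    summary(fit)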

2023 Events

Monday, Apr. 24 at 10 a.m.

Drawing on theory and prior research, evidence-based research can generate data from different sources and phases of a study. A central feature of evidence-based research is the sequential and longitudinal implementation of data analysis, which allows research participants to be removed at various study-related end points throughout the research timeline. This How-to seminar gives an overview of pitfalls in longitudinal data analysis and further discusses integrative data harmonization through joint modeling of longitudinal data and time-to-event data (such as dropout and censoring) simultaneously, using real data from an HIV/AIDS clinical trial. We demonstrate that integrative data harmonization has the potential to produce a more efficient and more powerful statistical analysis.
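
One way to fit such a joint model is sketched below in R with the nlme, survival, and JM packages; the data frames long_dat (repeated biomarker measurements) and surv_dat (one row per participant with event/dropout times), and all variable names, are hypothetical:

    library(nlme); library(survival); library(JM)

    # Longitudinal submodel: biomarker trajectory over time
    lme_fit <- lme(biomarker ~ obstime, random = ~ obstime | id,
                   data = long_dat)

    # Time-to-event submodel for dropout (x = TRUE keeps the design matrix)
    cox_fit <- coxph(Surv(event_time, status) ~ treatment,
                     data = surv_dat, x = TRUE)

    # Joint model linking the two processes
    joint_fit <- jointModel(lme_fit, cox_fit, timeVar = "obstime")
    summary(joint_fit)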

Wednesday, Mar. 29 at 11 a.m.

Missing data is a common issue for researchers carrying out any type of quantitative analysis, whether based on primary or secondary data. While the amount of missing data can be minimized in the study design phase, it cannot be completely eliminated. This How-to webinar will introduce the issue of missing data, distinguishing between unit non-response and item non-response, and the multiple imputation approach, a practical solution for dealing with missing data. The webinar will show how to implement multiple imputation using different software (Stata, R, SAS, and SPSS).
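
In R, the mice package is one common implementation; a minimal sketch, assuming a hypothetical data frame dat with item non-response:

    library(mice)

    imp  <- mice(dat, m = 5, seed = 123)            # 5 imputed datasets
    fits <- with(imp, lm(outcome ~ age + income))   # analyze each one
    pool(fits)                                      # pool (Rubin's rules)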

Wednesday, Feb. 22 at 11 a.m.

Structural Equation Modeling (SEM) is a statistical technique capable of modeling complex relationships among multiple independent and dependent variables. This approach is especially useful for describing the interrelationships, such as recursive and mediating relationships, among variables that comprise ecosystem-like phenomena. This How-to webinar will introduce the basic principles of SEM and review approaches to coding SEM in various statistical programs.
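
For a taste of such coding, a minimal mediation sketch in R with the lavaan package; the variables x, m, and y and the data frame dat are hypothetical:

    library(lavaan)

    model <- '
      # direct effect
      y ~ c*x
      # mediator
      m ~ a*x
      y ~ b*m
      # indirect and total effects
      ab := a*b
      total := c + a*b
    '

    fit <- sem(model, data = dat)
    summary(fit, fit.measures = TRUE, standardized = TRUE)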

Monday, Jan. 30 at 10 a.m.

Reporting empirical evidence that an instrument measures what it purports to measure gives us confidence that results are valid. Thus, developing and validating survey measurements using exploratory and/or confirmatory factor analysis reduces bias in the interpretation of results. This webinar will focus on conducting factor analysis to validate survey measurement models using four different statistical software packages (R, Stata, SAS, and SPSS).
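
A minimal R sketch of both steps; the survey items q1-q9, the three-factor structure, and the data frame dat are hypothetical:

    # Exploratory factor analysis (base R)
    efa <- factanal(~ q1 + q2 + q3 + q4 + q5 + q6 + q7 + q8 + q9,
                    factors = 3, data = dat, rotation = "promax")
    print(efa$loadings, cutoff = 0.3)

    # Confirmatory factor analysis (lavaan)
    library(lavaan)
    cfa_model <- '
      f1 =~ q1 + q2 + q3
      f2 =~ q4 + q5 + q6
      f3 =~ q7 + q8 + q9
    '
    cfa_fit <- cfa(cfa_model, data = dat)
    summary(cfa_fit, fit.measures = TRUE)   # CFI, TLI, RMSEA, etc.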

2022 Events

Wednesday, Nov. 30 at 11 a.m.

Longitudinal analysis is a powerful tool for studying individual changes over time. This webinar will cover different types of longitudinal analysis (e.g., fixed-effects, random-effects, and mixed-effects models), with examples from five different statistical and data management software packages.
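
For illustration, a minimal sketch in R with the lme4 package; the long-format data frame long_dat with columns id, time, and y is hypothetical:

    library(lme4)

    # Random-intercept model: each participant has their own baseline
    m1 <- lmer(y ~ time + (1 | id), data = long_dat)

    # Adding a random slope: participants also differ in their rate of change
    m2 <- lmer(y ~ time + (1 + time | id), data = long_dat)

    anova(m1, m2)   # compare the two specifications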

Wednesday, Oct. 26 at 11 a.m.

Longitudinal data are increasingly common in health research, as they are necessary for studying individual changes over time. This webinar will show the steps and best practices for preparing your dataset for a longitudinal analysis using five different statistical and data management software packages.
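
A core step is reshaping the data from wide (one row per participant) to the long format most longitudinal models expect (one row per participant per wave); a minimal base-R sketch, assuming a hypothetical data frame wide_dat with an id column and wave columns y_1 to y_3:

    long_dat <- reshape(wide_dat,
                        varying   = c("y_1", "y_2", "y_3"),  # wave columns
                        v.names   = "y",      # name of the stacked outcome
                        timevar   = "time",   # wave indicator in long form
                        idvar     = "id",
                        direction = "long")
    long_dat <- long_dat[order(long_dat$id, long_dat$time), ]  # sort by id, wave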

Wednesday, Sept. 28 at 11 a.m.

An adequate sample size is crucial for ensuring that statistical tests have the power to detect the relationships outlined in research aims. This session will detail what information sample size calculations require and how to perform these calculations in various statistical software.
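
As a small example of such a calculation, base R's power.t.test for a two-sample t-test; the effect size, power, and significance level are hypothetical inputs:

    # Solve for n given effect size (Cohen's d = delta/sd = 0.5),
    # alpha = 0.05, and 80% power
    power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.80,
                 type = "two.sample", alternative = "two.sided")
    # Returns n per group (about 64); round up to whole participants.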

Tuesday, Aug. 30 at 10 a.m.

REDCap is a secure, free software platform designed for robust research data collection. Survey data, case report forms, and operational data can all be collected and managed within one database, enhancing the efficiency of any project. The Biostatistics Consultation Core administers REDCap for Arizona State University and will present best practices and guidance on how to design your database to maximize REDCap's functionality. Topics covered will include e-consent, survey distribution, data quality verification, survey scoring, and reports/exports to commonly used analysis packages such as SPSS, SAS, and R.