In today’s fast-paced business environment, effective project management is crucial for the successful completion of tasks and achieving organizational goals. As technology evolves, so do the tools we use to facilitate project management. One such tool…
Category: SAS
Is SAS a programming language?
SAS has a programming language, but is that all it is? Nope, but it still ranks among the most marketable programming skills.
The post Is SAS a programming language? appeared first on The SAS Dummy.
**StudySAS Blog: Mastering Clinical Data Management with SAS** 2024-09-10 13:23:00
Understanding EC vs. EX Domains in SDTM: When to Use Each
In SDTM, the EC (Exposure as Collected) and EX (Exposure) domains are both used to capture data related to drug or therapy exposure, but they serve different purposes depending on how the exposure data is collected.
EC (Exposure as Collected) Domain
The EC domain is intended to capture exposure data exactly as it is collected in the study. This is useful when the collected exposure data is complex or variable, such as when doses or regimens vary between subjects or over time.
Use EC when:
- The collected data cannot be easily derived into planned doses or regimens.
- Exposure is captured in a format that includes variations such as adjustments, interruptions, or titrations.
- The study involves complex exposure schemes (e.g., titration or dose adjustments based on lab values).
SDTM IG 3.3 and 3.4 Guidance for EC:
- EC is meant for direct representation of collected exposure data.
- EC should be used when the exposure data cannot be directly mapped into the planned dosing regimen captured in EX.
- EC also allows capturing the exact timestamps or administration details, even if those details change frequently.
EX (Exposure) Domain
The EX domain is designed to represent planned exposure data—what was intended or scheduled in the study protocol. EX typically includes drug administration data that reflects the planned dosage, route, and frequency, regardless of minor deviations in administration.
Use EX when:
- The exposure data can be described using the planned protocol-specified doses.
- The collected data aligns well with planned dosing schedules (e.g., consistent dose across subjects and visits).
SDTM IG 3.3 and 3.4 Guidance for EX:
- EX captures planned exposure (e.g., what the protocol intended to administer).
- It’s used for studies where exposure data is straightforward and corresponds directly to what was scheduled.
- Even when exposure data is collected in variable amounts, EX can still be used if the planned dose was administered as intended.
Key Differences Between EC and EX in SDTM IG 3.3 and 3.4:
- EC: Focuses on capturing the exact details of exposure as they were collected, reflecting actual administration, including variability.
- EX: Reflects the planned or intended exposure based on the study design, following the study’s dosing protocol.
SDTM IG 3.3 vs. SDTM IG 3.4:
The SDTM IG 3.4 version introduces more clarifications on when to use EC vs. EX, emphasizing the importance of using EC when there is variability in exposure that cannot be easily captured in EX. SDTM IG 3.4 also provides additional examples and details on mapping complex exposure data, particularly for biologics or therapies with varying administration schedules.
In summary, EC is used for more complex, collected exposure data, while EX is used for planned, consistent exposures based on the protocol. The SDTM IG 3.3 and 3.4 versions emphasize using EC when there is significant variation in the collected data.
When to Use the EC Domain
The EC domain captures the exact exposure data as it is collected in the study. This is especially useful when exposure data varies between subjects, such as in cases of dose titrations, interruptions, or other adjustments. The key feature of the EC domain is its ability to reflect actual data, making it indispensable in complex trials where the administration schedule doesn’t always follow the protocol exactly.
For instance, if subjects are receiving doses that are adjusted based on their responses or lab results, or if subjects experience dose interruptions, the EC domain should be used to capture this variability. It provides an accurate picture of what really happened, even if the data does not align with the protocol-specified dose.
Example: Titration or Adjusted Dosing Scenario
In a trial where Drug B’s dose is titrated based on a subject’s response, one subject might start at 25 mg and increase to 50 mg after 10 days. Another subject could remain at 25 mg due to adverse events, and a third subject might increase to 75 mg. These variations should be captured in the EC domain.
| STUDYID | USUBJID | ECDOSE | ECDOSU | ECDOSFRM | ECSTDTC | ECENDTC | ECREASND |
|---------|---------|--------|--------|----------|------------|------------|---------------|
| ABC123 | 001 | 25 | mg | Tablet | 2024-01-01 | 2024-01-10 | Titration |
| ABC123 | 001 | 50 | mg | Tablet | 2024-01-11 | 2024-01-14 | |
| ABC123 | 002 | 25 | mg | Tablet | 2024-01-01 | 2024-01-15 | Adverse Event |
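For illustration only, here is a minimal SAS sketch that builds the hypothetical EC records above as a WORK dataset. The dataset name ec_example and the DATA step itself are illustrative assumptions, not part of the SDTM guidance; in a real study these records would be derived from the collected dosing data.

data ec_example;
    /* Character lengths chosen for this illustration */
    length studyid usubjid $8 ecdosu $8 ecdosfrm $20 ecstdtc ecendtc $10 ecreasnd $40;
    infile datalines dlm='|' dsd truncover;
    input studyid $ usubjid $ ecdose ecdosu $ ecdosfrm $ ecstdtc $ ecendtc $ ecreasnd $;
    datalines;
ABC123|001|25|mg|Tablet|2024-01-01|2024-01-10|Titration
ABC123|001|50|mg|Tablet|2024-01-11|2024-01-14|
ABC123|002|25|mg|Tablet|2024-01-01|2024-01-15|Adverse Event
;
run;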
When to Use the EX Domain
The EX domain captures the planned exposure based on the study protocol. It is used when the actual exposure follows the protocol as intended. The EX domain should be used for trials where the dosing regimen is straightforward and subjects receive the planned doses at scheduled times.
For example, if a trial protocol specifies that subjects receive 50 mg of Drug A daily for 30 days, and all subjects follow this schedule without any variations, the EX domain can capture this data.
Example: Simple Dosing Scenario
In a study where Drug A is administered in a fixed dose of 50 mg daily, the EX domain captures the planned exposure:
| STUDYID | USUBJID | EXTRT | EXDOSE | EXDOSU | EXROUTE | EXSTDTC |
|---------|---------|--------|--------|--------|---------|------------|
| XYZ456 | 001 | Drug A | 50 | mg | Oral | 2024-02-01 |
| XYZ456 | 002 | Drug A | 50 | mg | Oral | 2024-02-01 |
Using Both EC and EX Domains Together
In some cases, both domains can be used together to represent the planned vs. actual exposure. For instance, the EX domain captures the protocol-specified dose (e.g., 50 mg daily), while the EC domain captures deviations, such as dose interruptions or adjustments. This approach provides a complete picture of the exposure.
Example: Combined Use of EC and EX Domains
In a study where Drug D is administered as 50 mg daily but a subject misses doses due to personal reasons, the EX domain would capture the planned regimen, while the EC domain would record the missed doses.
EX Domain (Planned Dose):
| STUDYID | USUBJID | EXTRT | EXDOSE | EXDOSU | EXROUTE | EXSTDTC |
|---------|---------|--------|--------|--------|---------|------------|
| DEF789 | 001 | Drug D | 50 | mg | Oral | 2024-03-01 |
EC Domain (Actual Doses with Missed Doses):
| STUDYID | USUBJID | ECDOSE | ECDOSU | ECDOSFRM | ECSTDTC | ECENDTC | ECREASND |
|---------|---------|--------|--------|----------|------------|------------|-------------|
| DEF789 | 001 | 50 | mg | Tablet | 2024-03-01 | 2024-03-05 | |
| DEF789 | 001 | 50 | mg | Tablet | 2024-03-07 | 2024-03-30 | Missed Dose |
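To see how the two domains complement each other programmatically, the following sketch lists collected EC records whose dose differs from the planned EX dose or that carry a reason for deviation. The dataset names ex and ec, the output name exposure_check, and the one-planned-record-per-subject assumption are all illustrative, not prescribed by the SDTM IG.

proc sql;
   /* Sketch: compare collected (EC) against planned (EX) exposure per subject */
   create table exposure_check as
   select ex.studyid,
          ex.usubjid,
          ex.exdose as planned_dose,
          ec.ecdose as collected_dose,
          ec.ecstdtc,
          ec.ecendtc,
          ec.ecreasnd
   from ex
        inner join ec
        on ex.studyid = ec.studyid and ex.usubjid = ec.usubjid
   where ec.ecdose ne ex.exdose
      or ec.ecreasnd is not missing;
quit;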
Key Takeaways from SDTM IG 3.3 and 3.4
The SDTM IG (Implementation Guide) versions 3.3 and 3.4 provide specific guidance on when to use the EC and EX domains:
- EC Domain should be used when the collected exposure data includes dose adjustments, interruptions, or variations from the planned exposure.
- EX Domain is suitable for straightforward, consistent administration as per the protocol.
- SDTM IG 3.4 provides further clarity on the importance of capturing deviations in the EC domain when complex administration schedules are involved.
Conclusion
The EC and EX domains both play important roles in capturing exposure data in clinical trials. By understanding when to use each domain, you can ensure that your study data accurately reflects both the planned and actual administration of investigational treatments. As the SDTM guidelines evolve, leveraging both domains appropriately helps ensure that data is captured comprehensively and consistently.
Getting Started with Python Integration to SAS Viya for Predictive Modeling – Fitting a Gradient Boosting Model
Fitting a Gradient Boosting Model – Learn how to fit a gradient boosting model and use your model to score new data. In Part 6, Part 7, and Part 9 of this series, we fit a logistic regression, decision tree, and random forest model to the Home Equity data we […]
Getting Started with Python Integration to SAS Viya for Predictive Modeling – Fitting a Gradient Boosting Model was published on SAS Users.
**StudySAS Blog: Mastering Clinical Data Management with SAS** 2024-09-09 18:07:00
Study Start Date in SDTM – Why Getting It Right Matters
The Study Start Date (SSTDTC) is a crucial element in the submission of clinical trial data, especially in meeti…
**StudySAS Blog: Mastering Clinical Data Management with SAS** 2024-09-09 14:59:00
Best Practices for Joining Additional Columns into an Existing Table Using PROC SQL
When working with large datasets, it’s common to add new columns from another table to an existing table using SQL. However, many programmers encounter the challenge of recursive referencing in PROC SQL when attempting to create a new table that references itself. This blog post discusses the best practices for adding columns to an existing table using PROC SQL and provides alternative methods that avoid inefficiencies.
1. The Common Approach and Its Pitfall
Here’s a simplified example of a common approach to adding columns via a LEFT JOIN:
PROC SQL;
CREATE TABLE WORK.main_table AS
SELECT main.*, a.newcol1, a.newcol2
FROM WORK.main_table main
LEFT JOIN WORK.addl_data a
ON main.id = a.id;
QUIT;
While this approach might seem straightforward, it leads to a warning: “CREATE TABLE statement recursively references the target table”. This happens because you’re trying to reference the main_table both as the source and the target table in the same query. Furthermore, if you’re dealing with large datasets, creating a new table might take up too much server space.
2. Best Practice 1: Use a Temporary Table
A better approach is to use a temporary table to store the joined result and then replace the original table. Here’s how you can implement this:
PROC SQL;
   /* Step 1: Build a temporary table containing the joined result */
   CREATE TABLE work.temp_table AS
   SELECT main.*, a.newcol1, a.newcol2
   FROM WORK.main_table main
   LEFT JOIN WORK.addl_data a
   ON main.id = a.id;
QUIT;

PROC SQL;
   /* Step 2: Replace the original table with the joined result */
   DROP TABLE work.main_table;
   CREATE TABLE work.main_table AS
   SELECT * FROM work.temp_table;
QUIT;

PROC SQL;
   /* Step 3: Clean up the temporary table */
   DROP TABLE work.temp_table;
QUIT;
This ensures that the original table is updated with the new columns without violating the recursive referencing rule. It also minimizes space usage, since the temp_table will be dropped after the operation.
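If re-copying large tables is a concern, one possible refinement (a sketch, not part of the original approach above) is to rename the temporary table into place with PROC DATASETS instead of re-creating main_table via a second CREATE TABLE:

proc datasets library=work nolist;
   delete main_table;                /* remove the original table */
run;
   change temp_table = main_table;   /* rename the joined result into its place */
quit;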
3. Best Practice 2: Use ALTER TABLE for Adding Columns
If you’re simply adding new columns without updating existing ones, you can use the ALTER TABLE statement in combination with UPDATE to populate the new columns:
PROC SQL;
   /* Add the new columns to the existing table */
   ALTER TABLE WORK.main_table
   ADD newcol1 NUM,
       newcol2 NUM;

   /* Populate them from addl_data; rows with no match are left missing */
   UPDATE WORK.main_table main
   SET newcol1 = (SELECT a.newcol1
                  FROM WORK.addl_data a
                  WHERE main.id = a.id),
       newcol2 = (SELECT a.newcol2
                  FROM WORK.addl_data a
                  WHERE main.id = a.id);
QUIT;
This approach avoids creating a new table altogether, and instead modifies the structure of the existing table.
4. Best Practice 3: Consider the DATA Step MERGE for Large Datasets
For very large datasets, a DATA step MERGE can sometimes be more efficient than PROC SQL. The MERGE statement allows you to combine datasets based on a common key variable, as shown below:
PROC SORT DATA=WORK.main_table; BY id; RUN;
PROC SORT DATA=WORK.addl_data; BY id; RUN;
DATA WORK.main_table;
MERGE WORK.main_table (IN=in1)
WORK.addl_data (IN=in2);
BY id;
IF in1; /* Keep only records from main_table */
RUN;
While some might find the MERGE approach less intuitive, it can be a powerful tool for handling large tables when combined with proper sorting of the datasets.
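Another option worth mentioning, purely as a sketch and not covered in the original post, is a DATA step hash lookup; it avoids sorting either table, provided addl_data fits in memory:

data work.main_table;
   /* Bring newcol1/newcol2 into the PDV without reading any rows */
   if 0 then set work.addl_data(keep=newcol1 newcol2);

   if _n_ = 1 then do;
      declare hash h(dataset: 'work.addl_data');
      h.defineKey('id');
      h.defineData('newcol1', 'newcol2');
      h.defineDone();
   end;

   set work.main_table;

   /* Leave the new columns missing when no match is found */
   if h.find() ne 0 then call missing(newcol1, newcol2);
run;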
Conclusion
The best method for joining additional columns into an existing table depends on your specific needs, including dataset size and available server space. Using a temporary table or the ALTER TABLE method can be more efficient in certain situations, while the DATA step MERGE is a reliable fallback for large datasets.
By following these best practices, you can avoid common pitfalls and improve the performance of your SQL queries in SAS.
**StudySAS Blog: Mastering Clinical Data Management with SAS** 2024-09-07 21:29:00
Mastering the SDTM Review Process: Comprehensive Insights with Real-World Examples
The process of ensuring compliance with Study Data Tabulation Model (SDTM) standards can be challenging due to the diverse requirements and guidelines that span across multiple sources. These include the SDTM Implementation Guide (SDTMIG), the domain-specific assumptions sections, and the FDA Study Data Technical Conformance Guide. While automated tools like Pinnacle 21 play a critical role in detecting many issues, they have limitations. This article provides an in-depth guide to conducting a thorough SDTM review, enhanced by real-world examples that highlight commonly observed pitfalls and solutions.
1. Understanding the Complexity of SDTM Review
One of the first challenges in SDTM review is recognizing that SDTM requirements are spread across different guidelines and manuals. Each source offers a unique perspective on compliance:
- SDTMIG domain specifications: Provide detailed variable-level specifications.
- SDTMIG domain assumptions: Offer clarifications for how variables should be populated.
- FDA Study Data Technical Conformance Guide: Adds regulatory requirements for submitting SDTM data to health authorities.
Real-World Example: Misinterpreting Domain Assumptions
In a multi-site oncology trial, a programmer misunderstood the domain assumptions for the “Events” domains (such as AE – Adverse Events). The SDTMIG advises that adverse events should be reported based on their actual date of occurrence, but the programmer initially used the visit date, leading to incorrect representation of events.
2. Leveraging Pinnacle 21: What It Catches and What It Misses
Pinnacle 21 is a powerful tool for validating SDTM datasets, but it has limitations:
- What it catches: Missing mandatory variables, incorrect metadata, and value-level issues (non-conformant values).
- What it misses: Study-specific variables that should be excluded and domain-specific assumptions that must be reviewed manually.
Real-World Example: Inapplicable Variables Passing Pinnacle 21
In a dermatology study, the variable ARM (Treatment Arm) was populated for all subjects, including those in an observational cohort. Since observational subjects did not receive a treatment, this variable should have been blank. Pinnacle 21 didn’t flag this, but a manual review revealed the issue.
3. Key Findings in the Review Process
3.1 General Findings
- Incorrect Population of Date Variables: Properly populating start and end dates (--STDTC, --ENDTC) is challenging.
- Missing SUPPQUAL Links: Incomplete or incorrect links between parent domains and SUPPQUAL can lead to misinterpretation.
Real-World Example: Incorrect Dates in a Global Trial
In a global cardiology trial, visit start dates were incorrectly populated due to time zone differences between sites in the U.S. and Europe. A manual review of the date variables identified these inconsistencies and corrected them.
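Automated cross-checks can surface such problems before (or alongside) manual review. The sketch below is hypothetical: it assumes a WORK copy of the SV (Subject Visits) domain and relies on complete, equally precise ISO 8601 values, which sort chronologically as character strings.

proc sql;
   /* Flag visit records whose start date/time sorts after the end date/time */
   create table sv_date_issues as
   select usubjid, visit, svstdtc, svendtc
   from sv
   where svstdtc is not missing
         and svendtc is not missing
         and svstdtc > svendtc;
quit;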
3.2 Domain-Specific Findings
- Incorrect Usage of Age Units (AGEU): Misuse of AGEU in pediatric studies can lead to incorrect data representation.
- Inconsistent Use of Controlled Terminology: Discrepancies in controlled terminology like MedDRA or WHO Drug Dictionary can cause significant issues.
Real-World Example: Incorrect AGEU in a Pediatric Study
In a pediatric vaccine trial, the AGEU variable was incorrectly populated with “YEARS” for infants under one year old, when it should have been “MONTHS.” This was not flagged by Pinnacle 21 but was discovered during manual review.
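A quick companion check along these lines might look like the sketch below (hypothetical, assuming a WORK copy of the DM domain with a numeric AGE variable):

data ageu_check;
   set dm;
   /* Subjects under one year recorded in YEARS are suspicious; MONTHS or DAYS is usually expected */
   if ageu = 'YEARS' and age < 1;
run;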
4. Optimizing the SDTM Review Process
To conduct an effective SDTM review, follow these steps:
- Review SDTM Specifications Early: Identify potential issues before SDTM datasets are created.
- Analyze Pinnacle 21 Reports Critically: Don’t rely solely on automated checks—investigate warnings and study-specific variables manually.
- Manual Domain Review: Ensure assumptions are met and variables are used correctly in specific domains.
5. Conclusion: Building a Holistic SDTM Review Process
By combining early manual review, critical analysis of automated checks, and a detailed review of domain-specific assumptions, programmers can significantly enhance the accuracy and compliance of SDTM datasets. The real-world examples provided highlight how even small errors can lead to significant downstream problems. A holistic SDTM review process not only saves time but also ensures higher data quality and compliance during regulatory submission.
**StudySAS Blog: Mastering Clinical Data Management with SAS** 2024-09-07 19:22:00
Revolutionizing SDTM Programming in Pharma with ChatGPT
By Sarath
Introduction
In the pharmaceutical industry, standardizing…