Troubleshooting

This page is being developed based on user feedback.


General

Error

Conflict. The container name XXX is already in use by container “XXX”. You have to remove (or rename) that container to be able to reuse that name.

Reason: The process stopped unexpectedly and the Docker container was not closed.

Fix: Remove the Docker container (not the image!) that is causing the conflict.
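
For example, the stopped container can be removed from a terminal; a minimal sketch, where the container name ‘pipecraft’ is a stand-in for whatever name the error message reports:

    # List all containers, including stopped ones, to find the conflicting one
    docker ps -a

    # Remove only the container (the image is kept and can be reused)
    docker rm pipecraft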

Error

No files in the output folder, but PipeCraft said “Workflow finished”.

Reason: Unknown.

Fix: Check whether a README.txt file was written to the output folder and read it. Please report unexpected errors.
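
To locate any README files the workflow wrote, a quick search from the working directory can help (a minimal sketch; the output folder names depend on the workflow that was run):

    # Find README.txt files under the current working directory
    find . -name "README.txt"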

ASVs workflow

Error

“Workflow stopped”


Possible reason: The computer’s memory (RAM) is full, so the analysis cannot finish.

Fix: Analyse a smaller number of samples or increase the RAM size.
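
To check whether memory is the bottleneck, usage can be watched while the workflow runs; a sketch (free is Linux-only):

    # Live memory usage of the running Docker containers
    docker stats

    # Total and available system memory (Linux)
    free -h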


Error

“Error in derepFastq(fls[[i]], qualityType = qualityType) : Not all provided files exist. Calls: learnErrors -> derepFastq. Execution halted”


Possible reason: Some samples were completely discarded by the quality filtering process.

Fix: Examine the seq_count_summary.txt file in the qualFiltered_out folder and discard the samples that had 0 quality-filtered sequences (poor-quality samples), or adjust the quality filtering settings.
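
Samples that lost all of their reads can be spotted quickly from the summary file; this sketch assumes a tab-separated table with the filtered read count in the last column, which may not match the actual file layout:

    # Print rows where the post-filtering read count is 0
    awk -F'\t' '$NF == 0' qualFiltered_out/seq_count_summary.txt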


Error

Error in filterAndTrim. Every input file must have a corresponding output file.


Possible reason: Wrong read identifiers for reads R1 and R2 in the QUALITY FILTERING panel.

Fix: Check the input fastq file names and edit the identifiers. Specify an identifier string that is common to all R1 reads (e.g. when all R1 files contain the string ‘.R1’, enter ‘\.R1’; the backslash is needed only to escape the dot in regex. When all R1 files contain the string ‘_R1’, enter ‘_R1’). When demultiplexing data during the ASV (DADA2) workflow, specify ‘\.R1’.
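
A quick look at the file names usually settles which identifier to enter; the file names below are hypothetical examples:

    # List the input files to see how the R1/R2 reads are labelled
    ls *.fastq.gz
    # sampleA_R1.fastq.gz, sampleA_R2.fastq.gz  -> enter '_R1' and '_R2'
    # sampleA.R1.fastq.gz, sampleA.R2.fastq.gz  -> enter '\.R1' and '\.R2'

    # Verify that the identifier matches every R1 file (and no R2 files)
    ls *.fastq.gz | grep -E '_R1'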

Error

“Error rates could not be estimated (this is usually because of very few reads). Error in getErrors(err, enforce = TRUE) : Error matrix is null.”


Possible reason: The data set is too small; the samples contain too few reads for DADA2 denoising.

Fix: Use the OTU workflow instead.
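
Per-sample read counts can be checked beforehand to confirm this; a minimal sketch assuming gzipped fastq input files:

    # Count reads per file (one fastq record = 4 lines)
    for f in *.fastq.gz; do
      echo "$f: $(( $(zcat "$f" | wc -l) / 4 )) reads"
    done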