bdgenomics.adam.rdd.VariantDataset

class bdgenomics.adam.rdd.VariantDataset(jvmRdd, sc)[source]

Wraps a GenomicDataset with Variant metadata and functions.

__init__(jvmRdd, sc)[source]

Constructs a Python VariantDataset from a JVM VariantDataset. Should not be called from user code; instead, go through bdgenomics.adam.adamContext.ADAMContext.

Parameters:
  • jvmRdd – Py4j handle to the underlying JVM VariantDataset.
  • sc (pyspark.context.SparkContext) – Active Spark Context.

Methods

__init__(jvmRdd, sc) Constructs a Python VariantDataset from a JVM VariantDataset.
addAllAlleleArrayFormatHeaderLine(name, …) Adds a VCF header line describing an ‘R’ array format field.
addAllAlleleArrayInfoHeaderLine(name, …) Adds a VCF header line describing an ‘R’ array info field.
addAlternateAlleleArrayFormatHeaderLine(…) Adds a VCF header line describing an ‘A’ array format field.
addAlternateAlleleArrayInfoHeaderLine(name, …) Adds a VCF header line describing an ‘A’ array info field.
addFilterHeaderLine(name, description) Adds a VCF header line describing a variant/genotype filter.
addFixedArrayFormatHeaderLine(name, count, …) Adds a VCF header line describing an array format field, with fixed count.
addFixedArrayInfoHeaderLine(name, count, …) Adds a VCF header line describing an array info field, with fixed count.
addGenotypeArrayFormatHeaderLine(name, …) Adds a VCF header line describing a ‘G’ array format field.
addScalarFormatHeaderLine(name, description, …) Adds a VCF header line describing a scalar format field.
addScalarInfoHeaderLine(name, description, …) Adds a VCF header line describing a scalar info field.
broadcastRegionJoin(genomicDataset[, flankSize]) Performs a broadcast inner join between this genomic dataset and another genomic dataset.
broadcastRegionJoinAndGroupByRight(…[, …]) Performs a broadcast inner join between this genomic dataset and another genomic dataset, followed by a groupBy on the right value.
cache() Caches underlying RDD in memory.
filterByOverlappingRegion(query) Runs a filter that selects data in the underlying RDD that overlaps a single genomic region.
filterByOverlappingRegions(querys) Runs a filter that selects data in the underlying RDD that overlaps several genomic regions.
fullOuterShuffleRegionJoin(genomicDataset[, …]) Performs a sort-merge full outer join between this genomic dataset and another genomic dataset.
leftOuterShuffleRegionJoin(genomicDataset[, …]) Performs a sort-merge left outer join between this genomic dataset and another genomic dataset.
leftOuterShuffleRegionJoinAndGroupByLeft(…) Performs a sort-merge left outer join between this genomic dataset and another genomic dataset, followed by a groupBy on the left value.
persist(sl) Persists underlying RDD in memory or on disk.
pipe(cmd, tFormatter, xFormatter, convFn[, …]) Pipes genomic data to a subprocess that runs in parallel using Spark.
rightOuterBroadcastRegionJoin(genomicDataset) Performs a broadcast right outer join between this genomic dataset and another genomic dataset.
rightOuterBroadcastRegionJoinAndGroupByRight(…) Performs a broadcast right outer join between this genomic dataset and another genomic dataset, followed by a groupBy on the right value.
rightOuterShuffleRegionJoin(genomicDataset) Performs a sort-merge right outer join between this genomic dataset and another genomic dataset.
rightOuterShuffleRegionJoinAndGroupByLeft(…) Performs a sort-merge right outer join between this genomic dataset and another genomic dataset, followed by a groupBy on the left value, if not null.
saveAsParquet(filePath) Saves this genomic dataset of variants to disk as Parquet.
shuffleRegionJoin(genomicDataset[, flankSize]) Performs a sort-merge inner join between this genomic dataset and another genomic dataset.
shuffleRegionJoinAndGroupByLeft(genomicDataset) Performs a sort-merge inner join between this genomic dataset and another genomic dataset, followed by a groupBy on the left value.
sort() Sorts our genome aligned data by reference positions, with contigs ordered by index.
sortLexicographically() Sorts our genome aligned data by reference positions, with contigs ordered lexicographically.
toDF() Converts this GenomicDataset into a DataFrame.
toVariantContexts() Returns these variants, converted to variant contexts.
transform(tFn) Applies a function that transforms the underlying DataFrame into a new DataFrame using the Spark SQL API.
transmute(tFn, destClass[, convFn]) Applies a function that transmutes the underlying DataFrame into a new genomic dataset of a different type.
union(datasets) Unions together multiple genomic datasets.
unpersist() Unpersists underlying RDD from memory or disk.
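The region-based methods above (filterByOverlappingRegion, filterByOverlappingRegions, and the broadcast/shuffle region joins) all hinge on the same overlap test between genomic regions. The pure-Python sketch below illustrates that semantics only; it is not part of the ADAM API, and the function names here are hypothetical. It assumes 0-based, half-open [start, end) coordinates.

```python
# Illustrative sketch (not the ADAM implementation): the per-record overlap
# test conceptually applied by filterByOverlappingRegion(s) and the region
# joins.  Regions are (contig, start, end) with half-open [start, end).

def overlaps(region_a, region_b):
    """True if two regions are on the same contig and share >= 1 base."""
    contig_a, start_a, end_a = region_a
    contig_b, start_b, end_b = region_b
    return contig_a == contig_b and start_a < end_b and start_b < end_a

def filter_by_overlapping_regions(records, queries):
    """Keep records that overlap any of the query regions."""
    return [r for r in records if any(overlaps(r, q) for q in queries)]

variants = [("chr1", 100, 101), ("chr1", 500, 501), ("chr2", 100, 101)]
print(filter_by_overlapping_regions(variants, [("chr1", 0, 200)]))
# [('chr1', 100, 101)]
```

Note that with half-open intervals, two adjacent regions (one ending where the other starts) do not overlap.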
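The various addXXXHeaderLine methods correspond to the cardinality codes that VCF uses in its `Number=` header field: ‘A’ (one value per alternate allele), ‘R’ (one per allele, reference included), ‘G’ (one per possible genotype), or a fixed integer count. The helper below is a pure-Python illustration of those counting rules under the VCF 4.x specification; it is not part of the ADAM API.

```python
from math import comb

# Illustrative sketch of the VCF `Number=` cardinality codes behind the
# add*ArrayFormatHeaderLine / add*ArrayInfoHeaderLine methods; this helper
# is hypothetical and not part of the ADAM API.

def expected_value_count(number, num_alt_alleles, ploidy=2):
    """Number of values expected for a field, per VCF 4.x `Number=` codes."""
    n_alleles = num_alt_alleles + 1        # alternate alleles plus reference
    if number == "A":                      # addAlternateAlleleArray…HeaderLine
        return num_alt_alleles
    if number == "R":                      # addAllAlleleArray…HeaderLine
        return n_alleles
    if number == "G":                      # addGenotypeArray…HeaderLine
        return comb(n_alleles + ploidy - 1, ploidy)
    return int(number)                     # addFixedArray…HeaderLine

# A biallelic site (one alternate allele), diploid sample:
print([expected_value_count(n, 1) for n in ("A", "R", "G", "2")])
# [1, 2, 3, 2]
```

For a diploid sample the ‘G’ count reduces to n(n+1)/2 for n total alleles, e.g. the three genotypes 0/0, 0/1, and 1/1 at a biallelic site.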