This posting is recruiting for 2 positions.
The Department of Genetics at Stanford University has an excellent opportunity for a motivated and experienced scientist who is interested in big data management and wrangling. In this position you will join a team of professionals, including Ph.D. biologists and geneticists, at the Data Administrative Coordination Center (DACC) of the Impact of Genomic Variation on Function (IGVF) Consortium, an ongoing international collaboration of research groups funded by the National Human Genome Research Institute (NHGRI). The IGVF Consortium aims to understand how genomic variation affects genome function, which in turn shapes phenotype. The IGVF DACC's primary task is to curate, uniformly process, and validate the data generated and submitted by IGVF Consortium members in preparation for release to the scientific community. As a Senior Data Wrangler you will join a team that serves as the primary liaison with international research laboratories, facilitates the acquisition of the high-throughput data produced by consortium labs, and is responsible for data modeling and curation.
The ideal candidate will have strong expertise in high-throughput experimental methods along with a deep understanding of large genomic datasets and the computational analyses applied to them; this knowledge is essential for accurately interpreting and describing the complex data at the heart of this role. Previous laboratory experience generating genomic datasets is beneficial. A specialization in bioinformatics, with the ability to recognize RNA transcripts, transcription factor binding sites, epigenomic patterns, histone modifications, single nucleotide polymorphisms (SNPs), and other variants within such datasets, is highly desirable. Competence with genomic perturbation techniques such as CRISPR screening, and with single-cell data handling and perturbation analyses, is also advantageous. Experience using controlled vocabularies and ontologies to describe biological concepts, as sketched below, will be viewed favorably. An advanced degree, specifically a Ph.D. in biology, genetics, or bioinformatics, is highly desirable for effectively assessing intricate experimental details and critically examining novel experimental data for quality control.
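To illustrate what describing biological concepts with controlled vocabularies can look like in practice, here is a minimal, hypothetical Python sketch that normalizes free-text tissue labels against a small vocabulary. The term list and function names are illustrative assumptions, not the consortium's actual vocabularies or tooling.

```python
# Illustrative sketch only: the vocabulary below is a tiny example, not an
# IGVF data model. Real curation would draw on full ontologies such as UBERON.
TISSUE_TERMS = {
    "liver": "UBERON:0002107",
    "heart": "UBERON:0000948",
}

def normalize_tissue(free_text: str) -> str:
    """Map a lab-submitted tissue label to an ontology ID, or flag it for review."""
    key = free_text.strip().lower()
    if key not in TISSUE_TERMS:
        raise ValueError(f"Unrecognized tissue term {free_text!r}: needs curator review")
    return TISSUE_TERMS[key]

print(normalize_tissue("Liver"))  # -> UBERON:0002107
```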
Familiarity with Agile project management principles and Scrum methodologies will be beneficial. The role requires close collaboration with software engineering, data curation, and bioinformatics staff to design, refine, and implement tools for integrating, searching, and displaying experimental data.
Excellent verbal and written communication skills are essential: to provide instruction and feedback to members of the scientific community, to facilitate the acquisition and description of high-throughput data, to provide effective documentation and specifications to co-workers during tool development, and to contribute to FAQs, tutorials, and publications.
Previous experience working in an academic environment is a plus.
Experience with scripting languages is a plus.
Compliance with and support of University and government health & safety regulations & policies is required.
- Define novel computational approaches and analytical tools as required by research goals.
- Run uniform bioinformatic processing pipelines in a cloud environment and review the results (a launch-and-review sketch follows this list).
- Propose new hypotheses and design experiments to test them; develop new data models.
- Compile and create informed summaries of the literature, discovering new facts through analysis of the collected data.
- Participate in collaborations with other labs.
- Work with users to answer questions about the contents of the database and provide assistance for submission of information.
- Participate in presentations and demonstrations of the database at conferences or other institutions.
- Create a variety of reports and user demonstrations; collaborate with users in the discovery of new knowledge.
- Collect and analyze information from peer-reviewed scientific journals and through direct submissions; abstract data into the required format and verify it for accuracy (a metadata-validation sketch also follows this list).
- Supervise or guide staff as needed; schedule and assign workload; set appropriate deadlines; review work for quality and timeliness.
- Other duties may also be assigned.
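For the pipeline duty above, here is a minimal sketch of launching a containerized workflow and sanity-checking that it completed before review. It assumes a Nextflow-based pipeline with nf-core-style --input/--outdir parameters; the pipeline name, profile, and output layout are placeholders, not the DACC's actual configuration.

```python
"""Minimal sketch: launch a uniform processing pipeline and check the run.
Assumes a Nextflow workflow with nf-core-style parameters; names are placeholders."""
import subprocess
import sys
from pathlib import Path

def run_pipeline(samplesheet: Path, outdir: Path) -> None:
    cmd = [
        "nextflow", "run", "example-org/uniform-pipeline",  # placeholder pipeline repo
        "-profile", "docker",           # run tasks in containers for reproducibility
        "--input", str(samplesheet),    # sample sheet describing the submitted data
        "--outdir", str(outdir),        # destination for processed outputs and QC
    ]
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(f"Pipeline failed with exit code {result.returncode}")
    # Review step: confirm the QC report landed before signing off on the run.
    if not (outdir / "multiqc").exists():
        print("Warning: QC report directory missing; inspect the run logs.")

if __name__ == "__main__":
    run_pipeline(Path("samplesheet.csv"), Path("results"))
```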
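And for the submission and curation duties, a minimal sketch of validating an incoming metadata record against a JSON Schema before acceptance. The schema fields and the record are invented examples; the consortium's actual data models are far richer.

```python
"""Minimal sketch: validate a submitted metadata record against a JSON Schema.
The schema and record below are invented examples, not IGVF's data model."""
from jsonschema import ValidationError, validate  # pip install jsonschema

# Hypothetical schema for one experiment record.
EXPERIMENT_SCHEMA = {
    "type": "object",
    "required": ["accession", "assay_term_name", "lab"],
    "properties": {
        "accession": {"type": "string", "pattern": "^[A-Z0-9]+$"},
        "assay_term_name": {"type": "string"},
        "lab": {"type": "string"},
    },
}

def check_submission(record: dict) -> bool:
    """Return True if the record conforms; report the first problem otherwise."""
    try:
        validate(instance=record, schema=EXPERIMENT_SCHEMA)
        return True
    except ValidationError as err:
        print(f"Submission needs correction: {err.message}")
        return False

check_submission({"accession": "TSTEX001", "assay_term_name": "RNA-seq", "lab": "example-lab"})
```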