Game-changing brain-inspired computing for market innovation and disruption. The Consortium. Application areas: Health; Climate (AI weather & climate modelling); Accident prevention; Energy; Financial (real-time fraud detection). The HYBRAIN project has received funding from the European Union's Innovation Council Pathfinder programme under Grant Agreement no. 101046878. Sustainable AI solutions: hybrid technology for ultra-low latency, energy-efficient Artificial Intelligence applications. Join the community: @HYBRAIN_EU | hybrain.eu | info@hybrain.eu | HYBRAIN Project. PROJECT COORDINATOR
poster
Utrecht, Netherlands | Riga, Latvia | Cascais, Portugal
Together with residents from four green-deprived neighbourhoods, we are co-creating community-based actions to collect data on temperature, relative humidity and heat stress. Through the use of wearables and mobile applications, women with a migration background and young people will contribute to citizen-powered monitoring to help reduce the urban heat island effect and shape effective climate change adaptation strategies. Riga is engaging diverse audiences to address concerns about air pollution and greenspace usage, to ensure better-informed policies. The Riga Planning Region, the city of Riga's digital agency and local communities are joining forces to deploy 20 low-cost air quality monitors to collect temporal, hyperlocal, geospatial data on pollutants such as particulate matter (PM2.5). This network, which covers both high-traffic areas and urban greenspaces, offers an efficient means of acquiring real-time data on PM2.5, to identify pollution hotspots and subsequently implement targeted interventions. The Cascais Climate Change Adaptation Action Plan (2017) was the first of its kind in Portugal. Despite progress, assessing the value of urban green infrastructure remains a major challenge. In Urban ReLeaf, through surveys conducted across 7 unique greenspaces in the region, citizens are sharing perceptions and thermal comfort levels to validate the effectiveness of its parks. Dundee, UK | Athens, Greece | Mannheim, Germany
In Dundee, a city facing increasing grey infrastructure in deprived areas, actions to enhance the accessibility of greenspaces are co-developed with citizens and stakeholders. Bespoke surveys delivered through a mobile application will not only help residents discover and connect with local greenspaces, but also contribute valuable insights to help design the city's Open Space Strategy in 2025.
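As a toy illustration of how readings from a monitor network like Riga's could be screened for pollution hotspots, here is a minimal sketch; the site names, the threshold factor, and the function itself are invented for this example, not part of the project:

```python
from statistics import mean

def find_hotspots(readings, factor=1.5):
    """Flag monitors whose mean PM2.5 exceeds `factor` times the
    network-wide mean of per-site averages (threshold is an assumption)."""
    site_means = {site: mean(vals) for site, vals in readings.items()}
    network_mean = mean(site_means.values())
    return sorted(site for site, m in site_means.items()
                  if m > factor * network_mean)

# hypothetical readings in µg/m³ from three monitors
readings = {"park": [5, 6], "junction": [40, 44], "square": [10, 12]}
print(find_hotspots(readings))  # ['junction']
```

A real deployment would also need temporal aggregation and sensor calibration; this only shows the hotspot-screening idea.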
The Greek capital faces severe heatwaves increasingly often, while air pollution reaches high levels in both winter and summer. To improve the spatial and temporal coverage of existing data, a network of low-cost air quality monitors will be deployed across the 7 districts of the municipality. In addition, Athens is undergoing a greening transformation with a new tree registry that combines Earth Observation and crowdsourcing methods to provide critical data for better management of greenspaces. Known as one of Germany's hottest cities, Mannheim has become a pioneer by approving a heat action plan to safeguard its most vulnerable residents. We are co-developing a comprehensive tree registry with the help of citizen observations on mature trees and trees on public land to support urban climate protection and adaptation measures. The aim of the registry is to help city departments streamline management processes and highlight future tree planting locations.

A look at the project in numbers: 15 Partners | 6 Pilot Cities | 10 Accelerator Cities | 4 Years | €5.2m Budget | 9 Countries

Our Mission: Urban ReLeaf promotes collaboration between local communities and public authorities to address urgent climate issues related to urban greenspace planning, heat stress, and air pollution.

Themes and Technologies: Urban Trees, Greenspace Perception, Wearable Sensors, Heat Stress, Air Quality, Mobile Applications, Air Quality Monitors, PM2.5

Find out more: www.urbanreleaf.eu #releafcities @releafcities

Our Objectives
1. Understand current urban greening policies in six pilot cities and co-develop opportunities for citizen participation to complement existing practices.
2. Support the long-term inclusion of active and passive data from citizens within official data streams for urban monitoring, planning and innovation in public policy.
3. Mobilize and empower local communities in issues of public interest surrounding urban green infrastructure and environmental stewardship.
4. Establish a community of practice and an accelerator programme to encourage knowledge exchange and tra
poster
Reconstruction of bacterial genomes from a Northern Patagonian fjord (42° S) with potential biotechnological applicability. Valentín Berrios Farias1, Eduardo Castro-Nallar1. (1) University of Talca, Department of Microbiology, Faculty of Biological Sciences. Created with BioRender Poster Builder.

BACKGROUND
Microorganisms can produce non-ribosomal peptides (NRPs) associated with their secondary metabolism. Previous work has demonstrated the biotechnological applicability of these peptides. Bioinformatic strategies have been developed to detect these peptides, along with other metabolites of industrial interest, from sequencing data by detecting biosynthetic gene clusters (BGCs). Soil and marine metagenomes have been broadly studied regarding the biosynthetic potential of their microbial communities to synthesize these peptides. Nevertheless, estuarine microbial communities have been poorly studied with respect to their natural products.

The Comau Fjord
93 water samples were collected from the Comau Fjord from 2016 to 2019. The prokaryotic DNA was captured and sequenced with Illumina technologies. Prokaryotic eDNA was sequenced, genomes were reconstructed and metabolic predictions were made. We predicted secondary metabolites from 677 reconstructed environmental bacterial genomes. Biosynthetic gene clusters producing non-ribosomal peptides were detected in almost all phyla, including Verrucomicrobiota, Bacteroidetes, Actinobacteria, Proteobacteria, Planctomycetes, Desulfobacterota and Myxococcota. Besides NRPS biosynthetic gene clusters, other secondary metabolites were inferred using the antiSMASH software.

How can non-ribosomal peptides be predicted from sequencing data?
Non-ribosomal peptides are synthesized by enzymatic complexes called non-ribosomal peptide synthetases (NRPS).
The domains that constitute these enzymes can be localized in BGCs along bacterial genomes; thus, robust probabilistic models have been developed to identify these BGCs.

Future directions
It has been hard to synthesize and isolate the natural products encoded by BGCs, as most of these genetic clusters remain cryptic in their hosts. Heterologous expression strategies for these clusters have therefore emerged as an ingenious way of producing the cryptic secondary metabolites stored by their original hosts.

CONCLUSION
The bacterial communities of the Comau Fjord constitute a relevant reservoir of biosynthetic gene clusters that produce several types of natural products. In particular, two bacterial genomes stand out for their number of predicted BGCs: the largest number of NRPs was predicted from a Gammaproteobacteria genome, and the largest number of BGCs of any class from a Planctomycetes genome. The inferred chemical novelty of the BGCs predicted in the Planctomycetes genome establishes this bacterium as a promising candidate from which novel therapeutic compounds, such as antivirals, can be discovered.
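The BGC-identification step mentioned above can be caricatured as grouping co-located biosynthetic genes along the genome. The sketch below is only a cartoon of that clustering step (antiSMASH actually uses trained profile-HMM rule sets); the gap size, the minimum domain count, and the domain labels are invented for illustration:

```python
def call_nrps_clusters(genes, core_domains=("A", "C", "PCP"),
                       max_gap=10_000, min_core=3):
    """Toy BGC caller. `genes` is a list of (start, end, domain) tuples.
    Genes separated by less than `max_gap` bp are grouped, and a group is
    reported as an NRPS-like cluster if it contains at least `min_core`
    core-domain genes. Returns (cluster_start, cluster_end, n_core) hits."""
    clusters, current = [], []
    for gene in sorted(genes):
        if current and gene[0] - current[-1][1] > max_gap:
            clusters.append(current)   # gap too large: close current group
            current = []
        current.append(gene)
    if current:
        clusters.append(current)
    hits = []
    for cluster in clusters:
        n_core = sum(1 for _, _, dom in cluster if dom in core_domains)
        if n_core >= min_core:
            hits.append((cluster[0][0], cluster[-1][1], n_core))
    return hits

# hypothetical annotations: three clustered core domains plus a lone gene
genes = [(3000, 4000, "PCP"), (0, 1000, "A"),
         (1500, 2500, "C"), (50_000, 51_000, "A")]
print(call_nrps_clusters(genes))  # [(0, 4000, 3)]
```

Real callers score each gene against domain HMMs first; here the domain labels are taken as given.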
poster
Universidade Federal de Uberlândia – Faculty of Electrical Engineering. SIGNAL-TO-NOISE EVALUATION IN MAMMOGRAPHIC IMAGES USING THE UNSHARP MASK TECHNIQUE. Ana Laura Shimamoto1, Guilherme Orpheu1, Pedro Cunha Carneiro1, Ana Claudia Patrocinio1

Breast cancer in women
• Breast cancer is the leading cause of cancer death among Brazilian women;
• It is one of the cancers most feared by women, owing to its high frequency and its psychological effects, such as changes in sexuality and body image, fear of recurrence, anxiety, pain and low self-esteem;

The mammography exam
• It is an examination of the soft tissues of the breasts, in women aged 35 or older, that allows the identification of changes not perceptible by clinical breast examination (CBE);
• 60% of mammography units are not subjected to quality-control tests;

Image and digital processing
• Contrast resolution and spatial resolution influence mammographic image quality;
• Digital processing highlights a region of interest, so that the resulting image is more suitable for a specific application than the original image;

Objective of the article
Using the Unsharp Mask contrast-enhancement technique, quantify the enhancement in terms of the signal-to-noise ratio and the contrast of the mammograms;

Digital image database
• INbreast: 76 anonymized images, of Pattern 3 and Pattern 4 (per BI-RADS), in the categories: nodules only, microcalcifications only, nodules + microcalcifications, and normal;
• Hologic: 10 2D images;
• All 86 images are in DICOM format, with 12-bit contrast resolution;

Digital processing steps
1. Wiener filter (3x3): a smoothing filter that uses least-squares theory to remove noise and restore the image;
2. Unsharp Mask (UM): consists of subtracting a blurred (unsharp) copy of the original image from the original itself, to emphasize lesion edges and improve the visual quality of the image.
• While enhancing detail, UM also amplifies noise, and the enhancement is greater in higher-contrast areas of the image, which can give rise to artifacts in the output image. Within UM there are 3 criteria that define the intensity of the enhancement:
Radius: this criterion has no typical values; its value affects the size of the edges to be enhanced. A smaller radius enhances smaller-scale detail.
Amount: this criterion ranges from 0 to 2 and specifies, as a scalar number, the degree of "sharpening". A high value implies an increase in pixel contrast.
Threshold: this criterion ranges from 0 to 1. High values allow "sharpening" only in high-contrast regions, such as coarser edges, while lower values allow sharpening in smoother regions of the image.
• The images were processed by a script in the Matlab R2017b (9.3.0.713579) software, varied within the typical values of the criteria and organized according to Table 1. Finally, the mean values and standard deviation (SD) of the comparison metrics, PSNR and SNR, were obtained.
Table 1: Values set for the UM criteria.
SNR: the ratio between the signal amplitude and the noise amplitude, given by Equation (1):
SNR = 20·log10(A(s)/A(r)), with A(x) = sqrt((1/n)·Σ xi²)  (1)
where A is the root-mean-square (RMS) amplitude, s is the vector representing the original signal, r represents the noise vector and n is the length of the signal vector.
PSNR: the peak value of the signal-to-noise ratio, i.e., the ratio between the maximum energy of a signal and the noise, given by Equation (3). PSNR is obtained from the MSE (Mean Square Error), which represents the squared error between the processed image and the original image, given by Equation (2):
MSE = (1/(m·n))·Σi Σj [I(i,j) − K(i,j)]²  (2)
where I and K are the original and processed images, and m and n are the number of rows and columns of the matrix, respectively.
PSNR = 10·log10(MAX²/MSE)  (3)
where MAX is the largest pixel value that can be assigned in the image.
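The poster's processing was done in Matlab; purely as an illustration, the unsharp-mask step and the two metrics can be sketched in NumPy as follows. The box blur standing in for the blurring step, the threshold handling, and the function names are assumptions of this sketch, not the authors' exact code:

```python
import numpy as np

def unsharp_mask(img, radius=1, amount=1.0, threshold=0.0):
    """Sharpen by adding back the difference between the image and a
    blurred copy (a simple box blur stands in for the usual Gaussian)."""
    img = np.asarray(img, dtype=float)
    k = 2 * radius + 1
    m, n = img.shape
    pad = np.pad(img, radius, mode='edge')
    # box blur: average of the k*k shifted copies of the padded image
    blurred = sum(pad[i:i + m, j:j + n] for i in range(k) for j in range(k)) / k**2
    mask = img - blurred
    # sharpen only where the detail signal exceeds the threshold
    return img + amount * np.where(np.abs(mask) >= threshold, mask, 0.0)

def snr_db(s, r):
    """Equation (1): 20*log10 of the RMS-amplitude ratio of signal to noise."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20 * np.log10(rms(s) / rms(r))

def psnr_db(I, K, max_val=4095):
    """Equations (2)-(3); max_val = 4095 for 12-bit DICOM images."""
    mse = np.mean((np.asarray(I, float) - np.asarray(K, float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(max_val**2 / mse)
```

A flat image passes through `unsharp_mask` unchanged, since its detail mask is zero everywhere; identical images give an infinite PSNR.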
Faculty of Electrical Engineering1, Universidade Federal de Uberlândia, Minas Gerais, Brazil. Introduction • Ao fin
poster
Building a Community of Practice for “Big Science” Organizations Mahedi Hasan1, Kerk F Kee1, and Ewa Deelman2 Overview This work explores the development and implementation of Communities of Practice (CoP) within “Big Science” organizations. A CoP is a group of professionals who share a common interest, profession, or passion and actively engage in collaborative learning and knowledge sharing (Wenger, 1998). In the context of “big science” (e.g., large-scale observatories), a CoP plays a crucial role in fostering collaboration, enhancing knowledge exchange, and driving innovation across multiple facilities, especially in the domain of cyberinfrastructure (CI). Based on 23 interviews conducted with directors, project staff, technical staff, and scientists across various “big science” organizations funded by the National Science Foundation (NSF), called Major and Mid-Scale Facilities (M&MFs), we used the grounded theory approach (Corbin & Strauss, 1990) as our method to analyze the data. Strategies for Building a CoP What strategies can help build a CoP for M&MF professionals? Identifying Common Interests/Challenges: Common interests or challenges, such as cloud computing, cybersecurity, and long-term data storage, can bring the community together. Promoting Knowledge Sharing: Encouraging the exchange of information, best practices, and experiences among community members. Regular virtual meetings and collaborative platforms can facilitate knowledge sharing. Organizing Guest Sessions: Inviting experts and experienced community members to share their insights and knowledge through webinars, workshops, and guest lectures. For example, CI Compass has organized guest sessions featuring experts from various M&MFs to discuss topics like data management and cybersecurity (Baldin et al., 2024). 
Providing Networking Opportunities: Organizing conferences and workshops can facilitate networking among MF professionals, allowing them to connect, collaborate, and build relationships. Creating an Inclusive Environment: Ensuring that all members of the community feel valued, respected, and included. Efforts have been made to include diverse voices and perspectives in community activities, fostering an inclusive environment. Facilitating Collaborative Projects/Initiatives: Encouraging joint projects that address common challenges and leverage collective expertise is crucial. Identifying projects that can benefit from collaboration and providing resources to support these initiatives is essential. Exploring New Partnerships: Seeking partnerships with organizations beyond the NSF-funded MFs to expand resources and knowledge is essential for building a CoP. It helps to proactively look for opportunities to collaborate with institutions that have complementary strengths and goals. Having a Community Facilitator: Organizing a community is not an easy task. An enduring structure does not always emerge by chance. Having a community facilitator or a facilitator team can be helpful and make this effort more intentional. References: Baldin, I., Brower, D., Butcher, D., Casey, R., Clark, C., Deelman, E., Flynn, B., Hasan, M., Kee, K., Livny, M., Mandal, A., Murillo, A., Nabrzyski, J., Pascucci, V., Petruzza, S., Romsos, C., Stanzione, D., Vahi, K., & Virdone, N. (2024). 2024 Cyberinfrastructure for U.S. NSF Major Facilities Workshop Report (1.0). Zenodo. https://doi.org/10.5281/zenodo.11372561 Corbin, J. M., & Strauss, A. (1990). Grounded theory research: Procedures, canons, and evaluative criteria. Qualitative Sociology, 13(1), 3–21. Wenger, E. (1998). Communities of practice: Learning as a social system. Systems Thinker, 9, 2–3. This material is based upon work supported by the U.S. National Science Foundation under Grant No. 2127548. 
1 College of Media and Communication, Texas Tech University 2 Information Sciences Institute, University of Southern California
poster
Rotation-activity relations and flares of M dwarfs with K2 long- and short-cadence data
St. Raetz1, B. Stelzer1,2, M. Damasso3, and A. Scholz4
1IAAT Tübingen, Germany; 2INAF/OA Palermo, Italy; 3INAF/OA Torino, Italy; 4SUPA St. Andrews, Scotland

Abstract
Studies of the rotation-activity relation of late-type stars are essential to enhance our understanding of stellar dynamos and angular momentum evolution. Photometric observations with space telescopes provide rotation periods, even at low amplitudes, as well as a wealth of activity diagnostics. Our previous study of the rotation-activity relation, based on photometric activity indicators from long-cadence K2 data (Stelzer et al. 2016), revealed that, at a critical rotation period of ~10 d, the activity level changes abruptly. This phenomenon represents an open problem within the framework of dynamo theory. We have now extended our work to K2 short-cadence data to examine a possible influence of the data sampling on the shape of the rotation-activity relation, in particular with respect to the different sensitivity to the detection of stellar flares.

The sample
The sample was selected from the Superblink proper motion catalog by Lépine & Gaidos (2011), which includes ∼9000 bright M dwarfs (J < 10 mag) with spectral types from K7 to M7. Our K2 observing project (PI: Scholz) comprised all Lépine & Gaidos (2011) M dwarfs in the K2 field of view. During 20 K2 campaigns (C0–C19), 485 light curves of 430 targets were obtained. For a subsample of these 430 M dwarfs, short-cadence light curves are available. Our final sample comprises 56 bright and nearby M dwarfs observed by K2 during campaigns C0–C18 in long- and short-cadence mode.
Fig. 1: Distribution of the spectral types for our K2 M dwarf sample. Negative values denote spectral types earlier than M.

Starspot modulation
We extracted the raw light curves from the target pixel files with the KeplerGO/lightkurve code. We developed a procedure that allows us to use the detrending by Vanderburg & Johnson (2014) and apply it to the original long- and short-cadence light curves. Stellar rotation rates are derived from the periodic brightness variations that are caused by cool spots on the stellar surface. We used three standard time series analysis techniques to search for the rotation period.

Flare detection
We searched for flares in the K2 light curves. Our flare analysis procedure is based on the routine used in Stelzer et al. (2016): an iterative process of
● boxcar smoothing
● removing 2σ outliers
➔ final smoothed LC (interpolated to all points of the original input LC)
➔ points that lie 3σ above the final smoothed LC were flagged
➔ groups of at least two (long cadence) / five (short cadence) consecutive flagged points were assigned as potential flares
Flare validation: the following criteria have to be fulfilled: (1) the flare do

Results (Raetz et al. 2020a)
1. Comparison of flares in long-cadence and short-cadence data
2. Flare durations:
● durations are more strongly overestimated for shorter flares, a consequence of the strong artificial quantization due to the ~30 min cadence
● the bimodality is only barely visible in long-cadence data
3. Flare frequencies:
● fast-rotator regime (Prot < 10 d): the flare rate is ~5 times higher in short-cadence data
● slow-rotator regime (Prot > 10 d): the flare rate is ~3 times higher in short-cadence data
● whole sample: (Nflares/day)SC = 4.6 x (Nflares/day)LC
● the highest flare rates are not found among the fastest rotators (also seen by Mondrik et al. 2019 in the MEarth data set)
4. Flare energies
● stars with the highest flare rates do not show the most energetic flares
● average power-law slope for the fit of the flare energy frequency distributions for all targets: α = 1.84 ± 0.14 (consistent with previous M dwarf studies and with the value found for the Sun)
● the superflare frequency (E ≥ 5 × 10^34 erg) for the fast-rotating M stars is twice as high as for solar-like stars in the same period range (Maehara et al. 2012)

Summary
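The iterative detection scheme described in the flare-detection section (boxcar smoothing, 2σ clipping, 3σ flagging, runs of consecutive points) can be sketched as follows. The box width, the number of clipping iterations and the exact σ estimator are assumptions of this illustration, not the authors' exact pipeline:

```python
import numpy as np

def boxcar(y, box):
    """Boxcar smoothing with edge padding (box must be odd)."""
    pad = box // 2
    ypad = np.pad(y, pad, mode='edge')
    return np.convolve(ypad, np.ones(box) / box, mode='valid')

def detect_flares(flux, box=25, n_iter=3, min_points=5):
    """Return (start, end) index pairs of candidate flares:
    1) iteratively boxcar-smooth the kept points and drop 2-sigma outliers,
    2) interpolate the final smoothed LC onto all points,
    3) flag points 3 sigma above it,
    4) keep runs of >= min_points consecutive flagged points."""
    flux = np.asarray(flux, dtype=float)
    idx = np.arange(flux.size)
    keep = np.ones(flux.size, dtype=bool)
    for _ in range(n_iter):
        resid = flux[keep] - boxcar(flux[keep], box)
        good = np.abs(resid) < 2 * resid.std()       # clip 2-sigma outliers
        new_keep = np.zeros(flux.size, dtype=bool)
        new_keep[idx[keep][good]] = True
        keep = new_keep
    smooth = np.interp(idx, idx[keep], boxcar(flux[keep], box))
    resid_all = flux - smooth
    sigma = resid_all[keep].std()
    flagged = resid_all > 3 * sigma                  # candidate flare points
    flares, run = [], []
    for i in idx:                                    # group consecutive points
        if flagged[i]:
            run.append(i)
            continue
        if len(run) >= min_points:
            flares.append((run[0], run[-1]))
        run = []
    if len(run) >= min_points:
        flares.append((run[0], run[-1]))
    return flares

# synthetic light curve with one injected flare
rng = np.random.default_rng(0)
flux = 1.0 + 0.001 * rng.standard_normal(500)
flux[200:208] += 0.05
print(detect_flares(flux))
```

The published routine additionally validates each candidate (e.g. decay shape); the sketch stops at the flagging stage.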
poster
VITO NV | Boeretang 200 | BE-2400 Mol | +32 14 33 55 11 | www.vito.be SUSTAINABLE - ENTREPRENEURIAL - INSPIRING - CREATIVE THE NATURE OF THE POLYMERIC GLYCO-LINKER CONTROLS THE SIGNAL OUTPUTS FOR PLASMONIC GOLD NANOROD BIOSENSORS IN COMPLEX MEDIA DUE TO BIOCORONA FORMATION Alessia Pancaro,a-b Michal Szymonik,a Panagiotis G. Georgiou,c Alexander N. Baker,c Marc Walker,e Peter Adriaensens,f Jelle Hendrix,b Matthew I. Gibsonc,d and Inge Nelissena a Health Unit, Flemish Institute for Technological Research (VITO), Mol, Belgium; b Dynamic Bioimaging Lab, Advanced Optical Microscopy Centre and Biomedical Research Institute, Hasselt University, Diepenbeek, Belgium; c Department of Chemistry, University of Warwick, Coventry, UK; d Warwick Medical School, University of Warwick, Coventry, UK; e Department of Physics, University of Warwick, Coventry, UK; f Applied and Analytical Chemistry, Institute for Materials Research, Hasselt University, Diepenbeek, Belgium. Aim: Evaluate the influence of the nature of RAFT polymer ligands on plasmonic lectin sensing in human serum. https://doi.org/10.1039/D1NR01548F BACKGROUND Poly(N-(2-hydroxypropyl) methacrylamide) (PHPMA) and poly(N-hydroxyethyl acrylamide) (PHEA) were synthesised by photo-initiated RAFT polymerisation with different degrees of polymerisation (DP = 40, 50, 55, 68 for PHPMA and DP = 26, 35, 50, 60 for PHEA) by tuning the feed ratio. Conjugation of glycopolymers onto gold nanorods (A) UV-Vis (B) DLS (C) XPS (D) TEM High colloidal stability. Successful attachment of the glycopolymers to the gold particle surface. Higher surface grafting density for the PHEA polymer. This work is supported by: Marie Curie European Training Network (MC-ETN) NanoCarb No. 814236; VITO; Hasselt University; the Research Foundation Flanders (FWO Vlaanderen; Hercules project AUHL/15/2 - GOH3816N); the ERC (Grant 866056); the BBSRC-funded MIBTP program (BB/M01116X/1) and Iceni Diagnostics Ltd. 
ACKNOWLEDGEMENTS The Warwick Polymer Research Technology Platform is acknowledged for SEC analysis, and the Warwick Electron Microscopy Research Technology Platform is acknowledged for TEM.

Lectin binding studies in buffer and human serum
LSPR peak following the addition of glyco-GNRs to a dilution series of SBA in serum: A) Gal-PHPMA GNRs showed no response to SBA in the concentration range tested (in contrast to the strong LSPR shift observed in buffer alone). B) Gal-PHEA nanorods showed dose-dependent LSPR shifts upon SBA addition (in contrast to the aggregative behaviour observed for the same assay performed in buffer).

Contribution and impact of serum proteins on sensing performance
Evolution of the LSPR peak wavelength location over time (5 hours) after addition of Gal-PHEA35 GNRs to serum containing different SBA concentrations. The LSPR peak wavelength of Gal-PHEA GNRs in serum decreases upon SBA addition, consistent with an overall loss of mass from the particle surface rather than a net mass gain due to lectin binding.

CONCLUSIONS
• The chemistry of the polymer linker used for nanoparticle-based nanoplasmonic assays can impact the nature of the sensor response.
• Initial screening of the sensor response in buffer is not a reliable strategy for identifying the best-performing surface modifications for biosensing in complex biofluids.
• The nature of the polymeric tethers and their grafting density directs the formation of biocoronas in serum, and this has a huge impact on the biosensing of lectins.

The localised surface plasmon resonance (LSPR) peak of gold nanorods (GNRs) is located in the near-infrared optical window and is sensitive to local binding events, enabling label-free detection of biomarkers in complex biological fluids. Glycan-binding proteins, called lectins, play a key role in many biological processes. To enable LSPR-based biosensing, the nanoparticles must be decorated with recognition units tethered to the surface by a ligand. 
We studied the sensing performance of GNRs coated with galactosamine-modified polymeric linkers fo
poster
Nathalie Huck and Carmen Schwietzer | info@landesarchiv.berlin.de | www.landesarchiv-berlin.de | Landesarchiv Berlin
Open Research Data at the Landesarchiv Berlin

Name indexes of the Berlin civil registration offices
With the introduction of the so-called 'Standesämter' (civil registration offices) in 1874, Berlin acquired more than twenty such institutions, and with the formation of 'Greater Berlin' in 1920, dozens more were added. There is no overarching name index covering these holdings. Instead, each office compiled its own name indexes, separately for each record type (birth, marriage, death) and each year. The name indexes are ordered by the first letter of the surname. By publishing the digitized name indexes of the civil registration offices online, the Landesarchiv Berlin provides the starting point for research into family history.
Data type: primary data. Infrastructure: online database with digitized records. Users: low-threshold access, 'citizen science'; teachers and students; historians, cultural scholars, sociologists, journalists, heir researchers, provenance researchers. Purposes of use: local history research, Stolperstein research, statistical analyses, genealogical research, evidence for legal claims. Data: approx. 44,650 visitors and approx. 339,000 page views per year. License: Creative Commons CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)

HistoMap Berlin
The online application HistoMap Berlin was developed in cooperation with the Beuth Hochschule für Technik Berlin, Büro für Geomedien. On the basis of georeferenced maps from 1910 to 2012, it enables searches accurate to the individual plot of land. The individual map sheets can be layered on top of one another, making it possible to compare plots across the decades.
Data type: primary data. Infrastructure: geographic information system. Users: low-threshold access, 'citizen science'; people interested in history and geography; teachers and students; historians, administrative scientists, cultural scholars, sociologists, journalists, urban planners, architects. Purposes of use: local history research, street renamings, shifts in plot boundaries, urban planning and construction. License: Creative Commons CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/). Images: Landesarchiv Berlin
poster
Multi-scale simulation of epitaxial processes — Linda Jäckel1,2,3, Andreas Zienert1,2,3, Erik E. Lorenz1,2,3, Max Huber1,2,3, Jörg Schuster1,2,3 1 Fraunhofer Institute for Electronic Nano Systems ENAS, Chemnitz, Germany ² Center for Materials, Architectures and Integration of Nanomembranes (MAIN), Chemnitz University of Technology, Chemnitz, Germany ³ Center for Microtechnologies, Chemnitz University of Technology, Chemnitz, Germany

Introduction
Epitaxial growth processes are competitive processes in the semiconductor industry. Modeling and simulation strategies can provide helpful insights for process optimization and transfer. Multi-scale simulations are needed to investigate growth on small features.

Reactor-scale modeling
Example: single-wafer reactor for Si epitaxy. Goal: optimization of single-wafer-reactor Si epitaxial growth from DCS (SiH2Cl2). Newly developed chemistry model for low-temperature Si epitaxial growth. Process optimization using reactor-scale simulation to identify relevant parameters. Result: a reduced wafer rotation speed ω is the most promising option for variability reduction.
Figures: Si growth rates by wafer rotation speed ω; SiGe growth rate over 100 horizontally oriented wafers; simplified single-wafer reactor with gas flow velocity magnitudes (color) and velocity vectors (black arrows); multi-wafer reactor with wafer boat (blue), inlet pipes (orange), inlets (red), outlet (green) and the outer reactor hull (gray).
Example: multi-wafer reactor for SiGe epitaxy. Goal: optimization of a complex multi-wafer batch reactor with multiple gas inlets. Result: wafer-to-wafer variability of the growth rate due to local cooling near the gas inlets; more inlets, less variability.

Feature-scale modeling
Si film growth with 10x10 nm² size constraint: atomistic representation (left) and extracted facets (right). 
Goal: investigation of epitaxial growth on the feature scale. Approach: kinetic Monte Carlo-based effective attachment model2. Effective growth surface, dependent on temperature, composition and surrounding features. Simulation of pattern- and time-dependent local growth rates. Facet extraction: Delaunay-based alpha-shape extraction with simplicial decomposition3.

Multi-scale modeling
Multi-scale coupling approach combining CFD and KMC. For deposition processes on structured substrates, the feature-scale model alone is not sufficient. A multi-scale model combining the feature-scale and reactor-scale approaches is possible4. Implementation in CFD-ACE+ with user-defined functions; local effects of the global gas flow are considered. Result: a model which fully captures the deposition characteristics of epitaxial growth on small features. Figure: Si growth rate as a function of trench width.

1 C. Werner and M. Hierlemann, “Applications of PHOENICS-CVD to epitaxial SiGe polysilicon and silicon dioxide deposition in range of CVD reactors,” PHOENICS Journal, vol. 8, no. 4, pp. 538–552, 1995.
2 R. Chen, W. Choi, A. Schmidt, K.-H. Lee, and Y. Park, “A new kinetic lattice Monte Carlo modeling framework for the source-drain selective epitaxial growth process,” in 2013 International Conference on Simulation of Semiconductor Processes and Devices (SISPAD). IEEE, Sep. 2013. DOI: 10.1109/sispad.2013.6650561
3 P. G. Goerss and J. F. Jardine, Simplicial Homotopy Theory. Springer Science & Business Media, 2009.
4 N. Cheimarios, G. Kokkoris, and A. G. Boudouvis, “Multiscale modeling in chemical vapor deposition processes: Coupling reactor scale with feature scale computations,” Chemical Engineering Science, vol. 65, no. 17, pp. 5018–5028, 2010. DOI: 10.1016/j.ces.2010.06.004

This work was funded by the EFRE fund of the European Commission and by funding of the Free State of Saxony of the Federal Republic of Germany (project MOMENTUM, grant number 100358750). 
Methods Reactor-scale: Computational fluid dynamics using CFD-ACE+ Feature-scale: Inhouse-Code Multi-scale coupling with user defined functions in CFD-ACE+
poster
Abstract: The role of magne-c field in the AGN jet physics is s-ll not fully determined. At pc scale, it is known that it is important in the accelera-on and collima-on processes while at arcsecond scale it could reveal fundamental pieces of the jet dynamics and energe-cs and its surrounding environment. At intermediate scales, the scenario is more debated. To contribute in this framework, we need to resolve polarized emission even in the low surface brightness extended structures (e.g. lobes). This absolutely requires high sensi-vity observa-ons. With the advent of ALMA, now it is possible also in the millimeter, a band which was unexplored by previous facili-es. Here I present the impressive images in polariza-on obtained using ALMA archival mul- band data of an ALMA calibrator PKS 0521-365 which represents a prototype of BL Lac object with extended resolved structures (jet and hotspot) at all frequencies from op-cal to X-rays. Polarized emission in the mm band of PKS0521-365: ALMA observaAons. E.Liuzzo, R. Paladino, V. Galluzzi & IT ARC node Next steps and perspecAves: ALMA observaAons and results The peculiar case of PKS 0521-365 References: This is a nearby (z = 0.0554) radio-loud object and bright FERMI source, exhibi-ng a variety of nuclear and extranuclear phenomena (Falomo et al. 2009). It is one of the most remarkable object in the sourthern sky: It is one of the three known BL Lac objects showing a kiloparsec-scale jet well resolved at all bands (Liuzzo et al. 2011). As showed in Fig.1, a one-side radio jet extends in N-W side up to 7 arcsec , with the presence of many knots that are also detected from op-cal to X–rays (Falomo et al. 2009). An hotspot is also detected in all bands at 8 arcsec from the nucleus in the southeast direc-on. At low frequency, the arcsecond-scale radio structure is dominated by an extended lobe. 
The overall energy distribution of PKS 0521-365 is consistent with a jet oriented at about 30 degrees to the line of sight. This also agrees with the absence of superluminal motion in the parsec-scale jet (Falomo et al. 2009). In the millimeter bands, the extended structures (hotspot and jet) of this object are detected up to 320 GHz, with morphologies similar to those seen from optical to X-rays (Liuzzo et al. 2015, Leon et al. 2016). An estimate of the molecular gas content is also given, together with an analysis of the SED of each source component (Liuzzo et al. 2015). We analysed polarization data in: • Band 3 from our proprietary data (PI: V. Galluzzi); • Band 5 from science verification observations; • Band 7 from public archival data. We found that the source is well resolved up to 350 GHz, with detection of the core, inner jet and hotspot. Polarized emission is revealed: • in the core and hotspot up to Band 7; • also in the extended jet and lobe in Band 3, with position angle parallel to the jet direction and to the shock front in the lobe. References: Brentjens & de Bruyn 2005, A&A, 441, 1217; Falomo et al. 2009, A&A, 501, 907; Leon et al. 2016, A&A, 586, A70; Liuzzo et al. 2011, arXiv:1105.5226; Liuzzo et al. 2015, Revolution in Astronomy with ALMA: the 3rd year, 499, 129. In the following I discuss what is now possible thanks to high-quality ALMA polarization data like those presented here: • Magnetic field study in radio-loud AGN: Nearly 10-20% of AGN are radio loud, and the shape of their spectra as a function of frequency implies that the radio emission is non-thermal (synchrotron) in origin, due to relativistic plasma moving in strong, ordered magnetic fields. Since the polarised signal in these objects is typically a few percent of the total intensity, collecting information on the magnetic field of radio-loud AGN requires telescopes with high sensitivity (<<1 mJy/beam) such as ALMA.
The results obtained for PKS 0521-365 show that in only 10 minutes on source, polarized emission is revealed even in the lobes at angular resolution < 0.5 arcsec, demonstrating that with ALMA impressive results can now be reached also for faint source components.
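The polarized intensity, polarization fraction and position angle discussed above follow from the Stokes parameters in the standard way; a minimal sketch with made-up Stokes values for illustration (not the PKS 0521-365 measurements):

```python
import numpy as np

def polarization(I, Q, U):
    """Linear polarization from Stokes I, Q, U:
    polarized intensity P, polarization fraction p, and EVPA chi (radians)."""
    P = np.hypot(Q, U)            # polarized intensity sqrt(Q^2 + U^2)
    p = P / I                     # fractional polarization
    chi = 0.5 * np.arctan2(U, Q)  # electric-vector position angle
    return P, p, chi

# Illustrative values only:
P, p, chi = polarization(I=100.0, Q=3.0, U=4.0)
```

The same relations hold pixel-by-pixel on Stokes image cubes, which is how polarization maps such as those described here are usually produced.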
poster
Evaluating State-of-the-Art Handwritten Text Recognition (HTR) Engines with Large Language Models (LLMs) for Historical Document Digitisation. CRediT (Contributor Roles Taxonomy): Conceptualization: AR; GL; CAR. Data curation: AR; CAR; REPUBLIC. Formal analysis: GL; CAR. Methodology: GL. Software: GL; PS; AR; CAR. Writing – original draft: GL; CAR. Writing – review & editing: all. Conclusions: - All engines used can handle the challenges in the given data (small training set / difficult handwriting / abbreviations / stenography) and can be trained to improve performance. - HTR+ outperforms Pylaia on Glagolitic, whereas Pylaia is better on shorthand. A reason could be that HTR+ subsamples the image too strongly, so that the transcription does not fit into the probability matrix. - There is a trade-off between inference time and quality: HTR+ and Pylaia are very fast, whereas IDA and especially TrOCR are slower. The number of trainable parameters also varies, from 3M (HTR+, Pylaia) to 558M (TrOCR). - While HTR+, Pylaia and IDA need good surrounding polygons, TrOCR is pre-trained and fine-tuned on unmasked text lines and does not suffer from defective surrounding polygons. - LMs always improve the three CTC-based models; this does not apply to TrOCR because it always incorporates an LM. - HTR+, Pylaia and IDA are trained from scratch. For IDA, it took about 48 hours on one GPU to reach the reported performance on the Republic dataset; training Pylaia takes only 9 hours 22 minutes. - TrOCR reaches very good performance even without fine-tuning the Vision Transformer. Moreover, fine-tuning on only several thousand lines is fast (e.g., it took about 10 h to fine-tune on the roughly 40k lines of the Republic dataset). - The considerable CER improvement on the Glagolitic dataset when comparing base and alnum can be explained by the fact that the most difficult task for the models is the expansion of abbreviations containing parentheses “(” and “)”.
Since they are filtered out in alnum, the CER goes down. Since TrOCR has a model structure that can connect multiple characters to one optical symbol, the TrOCR improvement is smaller. - Pylaia is easy to train via Transkribus (no programming skills needed); TrOCR is freely available but requires programming skills; IDA is the closed-source product of a company. TrOCR and IDA were trained manually by IT experts, but no hyperparameter optimization was applied. Methods: Engines: The engines can be separated into three technologies: - Convolutional+LSTM architecture (approx. 3M trainable weights): HTR+, Pylaia. - Convolutional+Conformer architecture (approx. 46M trainable weights): IDA. - Vision-Transformer+LLM-Transformer architecture (approx. 558M trainable weights): TrOCR. Since TrOCR uses a Transformer decoder, it directly generates a token/character sequence. Being pre-trained on hundreds of millions of (synthetic) line images containing English text, it incorporates a strong English language model (LM). All engines except TrOCR are trained using CTC.1 This results in a probability matrix, which makes it possible to apply an additional/external LM. The LM is trained on pure text, which in our case is the transcription in the training set. This mainly improves the word error rate, but also the character error rate (CER). Comparison: To compare the different HTR engines, we decided to focus mainly on the CER for now. For each text line we calculate the Levenshtein distance between hypothesis and ground truth and sum over all lines. Dividing these errors by the total number of ground-truth characters results in the CER. Due to their different approaches, the HTR engines can have different strengths and weaknesses regarding the characters they can recognize. We therefore evaluate the CER after filtering the ground-truth and hypothesis characters to expose these strengths and weaknesses. For simplicity we filter by Unicode character categories.2
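The CER defined above (summed Levenshtein distance over all lines, divided by the total number of ground-truth characters) can be sketched as follows; the function names are ours, not from the poster:

```python
def cer(hypotheses, references):
    """Character error rate over a corpus of (hypothesis, reference) line pairs."""
    def levenshtein(a, b):
        # Classic dynamic-programming edit distance (insert/delete/substitute = 1)
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,          # deletion
                               cur[j - 1] + 1,       # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]

    errors = sum(levenshtein(h, r) for h, r in zip(hypotheses, references))
    total = sum(len(r) for r in references)
    return errors / total
```

Filtering (e.g. to alphanumeric characters for the alnum variant) would be applied to both hypothesis and reference strings before calling `cer`.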
As general normalization we map “
poster
FOLLOW US @ImAFUSA_EU @ImAFUSA The framework and its tools will collect insights influencing societal acceptance of UAM and produce indicators for: Data on citizen noise perceptions, perceptions of visual pollution, safety perceptions and overall UAM acceptance will be collected during 3 immersive citizen experiences of UAM applications in the city of Athens, Greece. In each area, innovative performance indicators will be described, while mathematical formulas and algorithms will be developed to quantify them. Impact & Capacity Assessment Framework for U-space Societal Acceptance. This project is co-funded by the European Union under Grant Agreement No. 101114776 and supported by the SESAR 3 Joint Undertaking and its founding members. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or SESAR 3 JU. Neither the European Union nor the granting authority can be held responsible for them. ENVIRONMENTAL IMPACT, SAFETY IMPACT, SOCIOECONOMIC IMPACT: Different dimensions for measuring Environmental Impact will be defined, like emissions, noise, air quality, visual pollution, etc. New indicators for the wide deployment of U-space services will be developed, assessing the Socioeconomic Impact on everyday life. Existing air and ground safety indicators are summarised and combined with user feedback to define UAM Safety Indicators.
poster
I. Ruffa1,2, C. Vignali1,3, A. Mignano2. 1University of Bologna, IT; 2INAF-IRA, IT; 3INAF-OABO, IT. ALMA: spherical symmetry of the CO line emitting region; median radius: 6.4 kpc. The HI mass is unknown. Following Gilli et al. (2014): NH = (8.0 ± 0.9) × 10^21 cm^-2. X-RAY: flat observed photon index: intrinsic neutral absorber. Using a simple, phenomenological model: NH = (8.3 ± 2.0) × 10^22 cm^-2. The molecular gas (traced by the CO) may contribute only a fraction of the obscuration observed in X-rays (given the assumptions adopted here). REFERENCES: Carilli & Walter 2013, ARA&A, 51, 105; Downes & Solomon 1998, ApJ, 507, 615; Garcia et al. 2010, ApJ, 718, 695; Garcia et al. 2013, ApJ, 768, 146; Gilli et al. 2014, A&A, 562, 67; Mao et al. 2014, MNRAS, 440, L31; Murphy & Yaqoob 2009, MNRAS, 397, 1549; Norris et al. 2012, MNRAS, 422, 1453; Rigopoulou et al. 1999, AJ, 118, 2625; Spoon et al. 2007, ApJ, 654, L49; Spoon et al. 2009, ApJ, 693, 1223. IRAS 00183-7111 (I00183) is an Ultra-Luminous Infrared Galaxy (ULIRG) at z = 0.327; most ULIRGs are known to host a heavily obscured AGN in their nuclear regions. The detection of the most heavily obscured sources is crucial to shed light on the obscured accretion phase of black hole growth and on AGN/host-galaxy co-evolution, and eventually to estimate the contribution of these sources to the X-ray cosmic background. We present a study of the multi-frequency properties of I00183, connecting ALMA mm/sub-mm observations with those at high energies (Ruffa et al. in prep.); one of the purposes is to verify at what level the gas traced by the CO may be responsible for the obscuration observed in X-rays (NH ≈ 2 × 10^23 cm^-2, Nandra & Iwasawa 2007). I00183 was selected from the top-left region of the so-called Spoon diagnostic diagram (Spoon et al. 2007, black circle in fig.
1); it compares the strength of the 9.7 μm silicate absorption feature with the equivalent width of the 6.2 μm PAH emission feature to investigate the AGN/starburst components in ULIRGs: the top-left region is populated by the absorption-dominated sources. I00183 is one of the most powerful ULIRGs known: Lbol = 9 × 10^12 Lsun, mostly emitted in the far-IR (Spoon et al. 2009). Its K-band image (Rigopoulou et al. 1999) shows a disturbed morphology and a single nucleus, interpreted as signs of a recent merger. This property is typical of ULIRGs: their high IR luminosity is attributed to the merger of two gas-rich spirals, which triggers both AGN activity and a powerful nuclear starburst. The large amount of dust accompanying the vigorous starburst activity in I00183 (≈ 220 Msun yr^-1, Mao et al. 2014) causes significant extinction towards its nucleus (AV ≥ 90, Spoon et al. 2009); for this reason the mid-IR spectrum of I00183 lacks the typical AGN tracers (e.g. the 7.65 μm [Ne IV] and the 14.3 and 24.3 μm [Ne V] lines). However, the presence of a powerful AGN in its nucleus was already established by the X-ray study of Nandra & Iwasawa (2007). Table 1. Observation properties. ALMA: Band 3: νobs 87 GHz, 79 min on target, angular resolution 1.8", spectral resolution 90 km/s; Band 6: νobs 270 GHz, 118 min, 0.8", 2 km/s; Band 6: νobs 260 GHz, 98 min, 0.8", 2 km/s. X-ray: Chandra (ACIS-S): 22 ks, 13/02/2013; XMM-Newton (EPIC PN, MOS 1/2): 22.2 ks, 16/04/2003; NuSTAR (FPMA, FPMB): 115 ks, 21/12/2015-26/04/2016. To link the sub-mm to the X-ray properties of I00183, ALMA archival Cycle 0 data in Band 3 and Band 6 were calibrated and analyzed. The X-ray analysis was carried out using Chandra, XMM-Newton, and NuSTAR data (courtesy of the PI, K. Iwasawa), allowing broad-band coverage of the X-ray spectrum (0.5 − 30 keV). The multi-frequency observation properties are summarized in Table 1. Fig.
1: Spoon diagnostic plot of the equivalent width of the 6.2 μm PAH emission feature versus the strength of the 9.7 μm silicate absorption feature, adapted from Spoon et al. (2007). The galaxy spectra are classified into nine classes, identified by t
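The CO-versus-X-ray comparison above rests on translating a molecular gas mass within the measured median radius into a hydrogen column. A minimal sketch for a uniform sphere; the gas mass used below is an illustrative placeholder, not a value from the poster:

```python
import math

MSUN_G = 1.989e33   # solar mass in grams
KPC_CM = 3.086e21   # kiloparsec in centimetres
M_H = 1.673e-24     # hydrogen atom mass in grams

def hydrogen_column(m_gas_msun, radius_kpc):
    """Mean hydrogen column toward the centre of a uniform sphere:
    N_H = 3 M / (4 pi R^2 m_H), returned in cm^-2."""
    R = radius_kpc * KPC_CM
    return 3.0 * m_gas_msun * MSUN_G / (4.0 * math.pi * R**2 * M_H)

# An assumed 1e10 Msun of gas inside the 6.4 kpc median radius gives a column
# of order 10^21-10^22 cm^-2, the same order as the Gilli et al. (2014) estimate.
nh = hydrogen_column(1e10, 6.4)
```

This kind of order-of-magnitude check makes the poster's point concrete: a CO-traced column of ~10^21-10^22 cm^-2 falls short of the X-ray-derived NH ≈ 2 × 10^23 cm^-2, so the molecular gas can account for only part of the obscuration.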
poster
Activity of long equatorial waves during the 2018-2019 El Niño event. G. Rivera*1, K. Mosquera1, V. Mayta2. 1Subdirección de Ciencias de la Atmósfera e Hidrósfera, Instituto Geofísico del Perú, Lima, Perú; 2Department of Climate and Space Sciences and Engineering, University of Michigan, MI, USA. Introduction: Between October 2018 and July 2019, sea surface temperature remained above normal in the Central and Eastern Pacific. This period was classified as an El Niño event which, according to the Oceanic Niño Index (ONI), reached weak magnitude. Assuming wave propagation in the first baroclinic mode, the individual contributions of the Kelvin mode and the first three Rossby modes to sea level were isolated by means of meridional structures (Boulanger and Menkes, 1995 and 1999); this identified the propagation of a wave packet in September 2018 that apparently favored the onset of warm conditions in the central Pacific. Using the meridional projection coefficients, this study focuses on describing the dynamics and role of long equatorial waves in the 2018/2019 El Niño event. Data and method: L4 sea level data were obtained from CMEMS on a regular 0.25° grid. Anomalies were computed against a 1993-2010 seasonal climatology; a 20-180 day Lanczos band-pass filter was then applied to isolate the intraseasonal signal. The decomposition of sea level into meridional structures uses expansions of the nondimensional solutions of the governing equations under the long-wave, low-frequency approximation. The resulting meridional structures represent the meridional behavior of sea level and of the currents / wind stress.
Multiplying the projection coefficients by these structures returns the reconstructed field for the Kelvin mode (m=0) and the Rossby modes (m=1, 2, 3, ..., n). Results: Figure 1. Meridional structures for (a) sea level and (b) currents / wind stress. References: Boulanger, J.-P., Menkes, C., 1995. Propagation and reflection of long equatorial waves in the Pacific Ocean during the 1992-1993 El Niño. Journal of Geophysical Research 100, 25041. https://doi.org/10.1029/95JC02956. Boulanger, J.-P., Menkes, C., 1999. Long equatorial wave reflection in the Pacific Ocean from TOPEX/POSEIDON data during the 1992-1998 period. Climate Dynamics 15, 205–225. https://doi.org/10.1007/s003820050277. Boulanger, J.-P., 2003. Reflected and locally wind-forced interannual equatorial Kelvin waves in the western Pacific Ocean. Journal of Geophysical Research 108. https://doi.org/10.1029/2002JC001760. Conclusions: During the 2018-2019 event, the propagation of three warm perturbations could be distinguished within the Kelvin wave packet. Apparently, Rossby wave reflection did not play a major role in the generation of Kelvin waves. The end of the event was related more to the absence of westerly wind pulses than to delayed-oscillator activity. Acknowledgements: This work used the computational resources (HPC-Linux-Cluster) of the Laboratorio de Dinámica de Fluidos Geofísicos Computacionales of the Instituto Geofísico del Perú (grants 101-2014-FONDECYT, SPIRALES2012 IRD-IGP, Manglares IGP-IDRC, PpR068). Figure 3. Longitude-time diagrams of observed anomalies of wind stress, 20°C isotherm depth, sea level, and sea surface temperature. All diagrams are overlaid with the observed (solid line) and climatological (dashed line) 28.5°C surface isotherm. Figure 2.
Maps of linear regression coefficients for the 1993-2019 period between the filtered time series at 140°W and the observed sea level anomaly. The characteristic propagation of a Kelvin wave and its re
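The 20-180 day Lanczos band-pass filter mentioned in the methods can be built as the difference of two low-pass Lanczos filters; a minimal sketch (the window length and function names are our choices, not specified in the poster):

```python
import numpy as np

def lanczos_lowpass_weights(window, cutoff):
    """Low-pass Lanczos weights: ideal sinc response tapered by the Lanczos window.
    window: number of weights on each side; cutoff: frequency in cycles per sample."""
    n = np.arange(-window, window + 1)
    ideal = 2.0 * cutoff * np.sinc(2.0 * cutoff * n)  # ideal low-pass impulse response
    taper = np.sinc(n / window)                       # Lanczos (sigma) taper
    return ideal * taper

def lanczos_bandpass_weights(window, short_period, long_period):
    """Band-pass weights keeping periods between short_period and long_period (in samples)."""
    return (lanczos_lowpass_weights(window, 1.0 / short_period)
            - lanczos_lowpass_weights(window, 1.0 / long_period))

# Daily data, 20-180 day pass band, 120 weights on each side (illustrative choice)
w = lanczos_bandpass_weights(120, 20.0, 180.0)
# filtered = np.convolve(sea_level_anomaly, w, mode="same")  # apply to a 1-D series
```

The response is near one for intraseasonal periods (e.g. 60 days) and near zero outside the band; points within one window length of the series ends should be discarded after convolution.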
poster
Contacts. Investigating Scientific Misinformation Originating from Retracted Publications and Their Perception. Juliane Stiller1, Senta Terner2, Violeta Trkulja1. 1Grenzenlos Digital e.V., 2Uppsala University. 86th Annual Meeting of the Association for Information Science and Technology, 27-31 October 2023, London, UK. Dr. Juliane Stiller, juliane@grenzenlos-digital.org, @stillinsky@mastodon.social. Introduction: The scientific community is built on the principles of transparency and accountability, where published research is subject to scrutiny and open to re-evaluation. One of the mechanisms by which the scientific record is corrected is retraction (Wray & Anderson, 2018), which signals to the scientific community that a published article contains significant flaws or errors and that its results cannot be relied upon (COPE, 2019). As retracted papers are often related to erroneous research, we use them to learn more about the relationship between scientific publication and false information, and about the spread of false information in the media. We distinguish between four different causes of scientific misinformation: 1) information that originally met scientific criteria but is now considered outdated, 2) information produced by scientists either intentionally or due to unintentional errors, 3) information that appears scientific but lacks a scientific basis (pseudoscience), and 4) information that meets scientific criteria but is distorted or falsified in its reception. Different Media Perceptions. “Masks do not help”: use the statement of the original paper to strengthen the claim that masks do not help; use the retraction to strengthen claims that masks do help, as the paper stating otherwise was retracted; use the statement of the original paper to argue that more measures than masks are needed. “Smokers are less likely to get Covid”: the original study is referenced to promote conspiracy myths.
The study is used as evidence that the COVID-19 vaccine is dangerous, connected with the call not to get vaccinated. The retraction is used as proof that people get silenced if they are critical of vaccinations. Dr. Violeta Trkulja, violeta@grenzenlos-digital.org, @viokeka@mastodon.social. Use Case Analysis: Different Contexts Shape the Message. DOI publication: 10.7326/M20-1342, publication date: 06.04.2020; DOI retraction: 10.7326/L20-0745, retraction date: 07.07.2020. Authors’ Intentions are Questioned. Communities with Closed World Views. • The original paper is used to back up different claims. • The retraction is used as evidence that claims opposing the claim made in the retracted paper are true. • A strong conclusion can be a red flag for sensitive topics. DOI publication: 10.1183/13993003.02144-2020, publication date: 07.06.2020; DOI retraction: 10.1183/13993003.02144-2020, retraction date: 04.03.2021. Discussion: The three identified use cases demonstrate the dissemination of scientific misinformation in the public domain. Our investigation specifically focused on scientific misinformation originating from article retractions and how these retractions are perceived and discussed in news outlet articles. We identified four key factors that influence the portrayal of retracted articles and the information presented in these news outlets: 1) the Object of Reception determines whether the article itself or its retraction is discussed within the news outlets, 2) the Characteristics of News Articles determine the depth and extent to which information about the retraction is integrated into the news articles, 3) the Community of Reception pertains to the intended readership or audience of the news article, and 4) the Characteristics of the Publication deal with the potential motives or intentions of the authors behind the publication.
Our investigation provides a foundation for further analysis and advances our understanding of how dubious scientific publications are perceived and how scientific misinformation spreads. ALLEA (Eds.). (2021). Fact or Fake? Tackling Science Disinformation (ALLEA
poster
EFFECT OF PTFE PERCENTAGE ON THE ENERGY ABSORBED BY CaSO4:Dy DOSIMETERS. Isabella Pereira Tobias1, Lucas Wilian Gonçalves de Souza1, Samara Pavan Souza1, William Souza Santos1, Ana Paula Perini1,2, Lucio Pereira Neves1,2. 1Programa de Pós-Graduação em Engenharia Biomédica, Universidade Federal de Uberlândia, Uberlândia, Brazil. 2Instituto de Física, Universidade Federal de Uberlândia, Uberlândia, Brazil. 1. INTRODUCTION: A radiation dosimeter is a device, instrument or system that measures, directly or indirectly, doses of ionizing radiation [1]. Calcium sulfate doped with dysprosium (CaSO4:Dy) has several applications in ionizing radiation dosimetry owing to its ease of preparation compared with many other thermoluminescent (TL) dosimeters [2]. 2. MATERIALS AND METHODS: Monte Carlo N-Particle 6 (MCNP6) is a Fortran code developed by the Los Alamos National Laboratory (LANL) for simulating the transport and interaction of radiation with matter [3]. The energy deposited in each dosimeter, in MeV/g/particle, was scored using Tally F6 of the MCNP6 code. 1×10^9 histories were simulated to guarantee uncertainties below 1%. The X-ray spectrum was generated with the SRS 78 program [4] using the following parameters: energy of 90 keV, anode angle of 12°, filtration of 2.5 mmAl, and a circular field with a 15 cm radius. Table 1. Dosimeter density as a function of PTFE percentage. 3. RESULTS AND DISCUSSION: The energy absorbed by each CaSO4:Dy+PTFE dosimeter, normalized by the energy absorbed by the air dosimeter, is presented in Fig. 2. Between 10% and 20% PTFE the absorbed energy increases; between 20% and 100% it decreases. 4. CONCLUSION: The relative response of the dosimeter for the partial concentrations under photon irradiation showed the lowest absorption for the sample with the highest PTFE fraction.
Although the best dosimeter response occurred for PTFE concentrations between 10% and 20%, additional studies are still needed to assess the use of this dosimeter in radiation dosimetry. Figure 1. Simulated geometry: 1 is the source, 2 the particle beam, and 3 the irradiated dosimeter. Table 1. PTFE percentage (%) / density (g/cm³): 10 / 2.89; 20 / 2.80; 30 / 2.72; 40 / 2.64; 50 / 2.57; 60 / 2.50; 70 / 2.43; 80 / 2.37; 90 / 2.31; 100 / 2.25. REFERENCES: [1] W. L. McLaughlin et al., “Dosimetry for radiation processing”, 1989. [2] S. M. Kamal, A. Gerges, M. Al-Said, “Thermoluminescence properties of home-made CaSO4:Dy for dosimetry purposes”, 2004. [3] R. J. McConn, J. G. Gesh, R. T. Pagh, R. A. Rucker, and R. Williams III, “Compendium of material composition data for radiation transport modeling”, No. PNNL-15870 Rev. 1, Pacific Northwest National Laboratory (PNNL), Richland, WA (United States), 2011. [4] J.-K. Chang, Y.-M. Nam, J.-L. Kim, S.-Y. Chang, and B.-H. Kim, “Calculated energy dependence of CaSO4:Dy TL phosphor and phosphor embedded Teflon for X and gamma rays”, Rad. Meas., vol. 33, pp. 675-678, 2001.
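The densities in Table 1 are consistent with a simple mass-fraction mixture rule for a two-component CaSO4:Dy/PTFE composite; a quick check (the component densities below are fitted assumptions for illustration, not values stated in the poster):

```python
def mixture_density(w_ptfe, rho_ptfe=2.25, rho_caso4=2.98):
    """Density of a two-component mixture from mass fractions:
    1/rho = w1/rho1 + w2/rho2 (component densities in g/cm^3, assumed)."""
    return 1.0 / (w_ptfe / rho_ptfe + (1.0 - w_ptfe) / rho_caso4)

# Compare with Table 1: 10% PTFE -> 2.89, 50% -> 2.57, 100% -> 2.25
rho_10 = mixture_density(0.10)
rho_50 = mixture_density(0.50)
rho_100 = mixture_density(1.00)
```

This is why the density in Table 1 decreases monotonically with the PTFE fraction: PTFE is the less dense component of the mixture.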
poster
Rebooting the Research Data Management Program: The First Year and Looking Ahead. Objective and Introduction: The University of Connecticut Library had an inactive program for research data management education until it hired two new librarians in June 2017 to fulfill this role. Our goal in this position is to work as science librarian liaisons and to provide programming, education, and outreach around research data management. Our poster outlines the three areas of activity that we undertook as a team to meet our objective of relaunching the RDM program. These three areas are: • Needs assessment • Skill building • Workshop and event planning. Our goal in presenting this poster is to elicit feedback and to start a conversation about the role of research data management librarians in developing academic library workshops and programming. Researcher Interviews: We interviewed 8 researchers from the UConn science community. What were the takeaways? • We learned that researchers need help with storage options for their research. • Researchers are unsure of what resources are available to help them with DMPs and storage. • Professors want workshops about research data management and version control for their students. Our questionnaire was based upon: Read, K. B., Surkis, A., Larson, C., McCrillis, A., Graff, A., Nicholson, J., & Xu, J. (2015). Starting the data conversation: informing data services at an academic health sciences library. Journal of the Medical Library Association: JMLA, 103(3), 131–135.
http://doi.org/10.3163/1536- 5050.103.3.005 Needs Assessment: Environmental Scan • Survey of 16 peer and aspirant institutions: • Assessed services, classes, and events offered • Phone interviews with librarians from the peers/aspirants list • Selected literature review • Site visits to other data librarians What we learned: • the state of RDM at libraries varies greatly • seeing that others are in the same position of starting a program is helpful and reassuring • seeing what others are achieving is inspirational • To be successful we need strong advocacy and financial support from our institution • We also need help with advocacy and connections with other departments/campus units • We need to expand our services to build the program and to be competitive with other institutions • We also need to be mindful of what works for this library and this institution • Some institutions have librarians do programming, and others have researcher/PHDs do data topics instruction - we found this more often than expected. • We need to consider what training we need or what partnerships to develop to bring the needed programming to our institution Community Building We established relationships with community partners such as: • Connecticut Data Collaborative • New England Library Carpentries (NESCLIC) • City of Hartford Skill Building • NNLM/NIH Biomedical and Health Research Data Management for Librarians course • Carpentries' Instructor certification • New England Research Data Management Roundtables • Renee attended August 2018 Carpentries training at the University of Calgary • Force11 Scholarly Communications Institute: Summer 2019 Conclusion We have found that establishing a vibrant research data program requires being proactive and taking a scaffolded, multi-pronged approach. It is important to conduct outreach to different communities (graduate students, faculty, student groups, non-profits, city departments, etc.) in order to maximize your effectiveness. 
Past Events: Open Data Salon, Hartford Public Library. For this event, we invited ten people (UConn researchers, librarians, data journalists, city employees, and non-profit educators, all of whom use open municipal data in their work) to present at the UConn location in downtown Hartford. UConn Hartford's library is located inside the Hartford Public Library, which provides opportunities for outreach to Hartford residents. Tyler Kleycamp, the Chief Data Officer
poster
Domain Researchers. Software Engineers. JUPYTER-IDE [ALSO] HAS: a short primer and example of literate programming for research contexts; a self-paced tutorial on the Git extension and version control; a curated list of extensions with tutorials giving users access to features like language server processing. BACKGROUND: Research software engineers exist on a continuum of backgrounds and skill sets, often using different tools and approaches for programming. Each preferred development platform has its own advantages catering to particular users, but when programmers collaborate closely it can be beneficial to use a shared platform to encourage the use of shared practices. JupyterLab, an extensible platform, is capable of supporting a wide variety of development approaches and practices. We propose JupyterLab as an ideal platform for collaboration among programmers whose preferred platforms range from Jupyter Notebooks to more traditional IDEs. We introduce JupyterIDE, a set of tutorials to assist interdisciplinary teams in converging on a set of shared best practices centered around JupyterLab. [Screenshot: the JupyterLab Git panel alongside a notebook cell in which the diagnostics extension flags a misspelled variable (misspelt_var vs. wellspelt_var).] SOLUTION: JupyterIDE provides short modules explaining core software development practices, like Git version control, implemented in JupyterLab. For individuals who want more features in JupyterLab, there are independent modules explaining other features available through curated extensions. Lastly, there is a short primer on the intersection of iterative and literate programming approaches that is designed with research contexts in mind.
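The Git version-control module described above covers the usual stage-and-commit cycle; the equivalent command-line steps look like the following (the repository location and file name are illustrative):

```shell
# Create a throwaway repository and make a first commit
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"   # an identity is required to commit
git config user.name  "Example User"

echo "print('hello')" > analysis.py        # hypothetical analysis script
git add analysis.py                        # stage the change
git commit -q -m "Add initial analysis script"

git log --oneline                          # the commit history, one line per commit
```

The JupyterLab Git extension performs exactly these operations through its graphical panel, which is what makes it a gentle on-ramp for researchers new to version control.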
may benefit from a solid understanding of basic version control; can benefit from a shared coding paradigm for research software; may want a development tool with more features. [Notebook excerpt:] Now we will import a test image from our data folder. In this case, we will use an image which is not too far from our other manuscripts. The page will not have any significant blemishes, just to see if we can get a standard preprocessing method going. We will also convert it to a numpy array to test some specific methods. With this image, we will try a couple of different kernel sizes for the convolutions we will be using for tasks like edge extraction, binarization, and so on. We will also test a number of different boundary conditions to see whether one of them serves us better than another. First, we will use the cv2 library for edge extraction; several methods for this sort of purpose are already built into the library. ACKNOWLEDGMENTS: This work was supported by the Better Scientific Software Fellowship Program, funded by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy (DOE) Office of Science and the National Nuclear Security Administration, and by the National Science Foundation (NSF) under Grant No. 2154495. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the DOE or NSF.
JupyterIDE: Promoting JupyterLab features and extensions that facilitate collaboration among researchers and RSEs. David Costello (dcostel5@asu.edu), Nicole Brewer (nbrewer6@asu.edu), Namita Shah (nshah50@asu.edu). POTENTIAL NEEDS: [Screenshot: JupyterLab interface showing the Git extension (repository jupyterlab-ide, branch main, changed/staged/untracked files) and the Diagnostics Panel reporting pyflakes and mypy messages for test.ipynb.]
poster
Towards a Green Detergent: Discovery, by cell-free protein synthesis and biochemical characterization, of thermostable glycerol oxidases. O2 → H2O2. Lars L. Santema, Laura Rotilio, Ruite Xiang, Gwen Tjallinks, Victor Guallar, Andrea Mattevi & Marco W. Fraaije. A sticky situation: The market size of detergents in 2020 was roughly 7 billion USD [1]. A substantial portion of these detergents ultimately enters wastewater streams and, owing to their limited biodegradability, contributes to the decline of natural water quality [2]. Chemicals used for their antibacterial properties are especially harmful to the environment. To avoid the use of these chemicals, we set out to find new glycerol oxidases that generate hydrogen peroxide as a bactericidal agent in detergents. By integrating in silico bioprospecting, cell-free protein synthesis and activity screening, an effective pipeline was developed to rapidly identify glycerol oxidases. Characterization. [1] Liquid Laundry Detergent Market Size, Report ID: FBI102962. [2] Mousavi, S. A. & Khodadoost, F. (2019). Effects of detergents on natural ecosystems and wastewater treatment processes: a review. Environmental Science and Pollution Research 26, 26439–26448. [3] Heuts, D. P. H. M., van Hellemond, E. W., Janssen, D. B., & Fraaije, M. W. (2007). Discovery, characterization, and kinetic analysis of an alditol oxidase from Streptomyces coelicolor. J Biol Chem 282:20283–20291. https://doi.org/10.1074/jbc.M610849200 [4] Winter, R. T., Heuts, D. P. H. M., Rijpkema, E. M. A., van Bloois, E., Wijma, H. J., & Fraaije, M. W. (2012). Hot or not? Discovery and characterization of a thermostable alditol oxidase from Acidothermus cellulolyticus 11B. Appl Microbiol Biotechnol 95:389–403. https://doi.org/10.1007/s00253-011-3750-0 Steady-state kinetics are everything. Pipeline: BLAST search → substrate simulations → in vitro expression coupled with an HRP assay → activity ‘proven’ in a few hours!
[Figure: gel lanes (Neg, Tf, St, Ab, Ch) and melting temperatures (65-90 °C) versus pH (6-9) for AldOAb, AldOCh, AldOSt, AldOTf and the V258L_P259I variant]

Enzyme | Organism of origin | Yield (mg/L culture) | KM xylitol (mM) | KM glycerol (mM) | kcat xylitol (s-1) | kcat glycerol (s-1) | kcat/KM xylitol (M-1 s-1) | kcat/KM glycerol (M-1 s-1) | Reference
AldOSc | S. coelicolor | 350 (but a low thermostability) | 0.32 | 350 | 13 | 1.6 | 4.1 x 10^4 | 4.6 | Heuts et al. 2007 [3]
AldOAc | A. cellulolyticus | 2.5 | 0.07 | 270 | 1.9 | 1.3 | 2.7 x 10^4 | 4.8 | Winter et al. 2012 [4]
AldOAb | A. bacterium | 75 | 0.03 | 184 | 4.2 | 2.6 | 14 x 10^4 | 14 | this study
AldOCh | T. chromogena | 30 | 0.04 | 143 | 3.5 | 2.0 | 12 x 10^4 | 14 | this study
AldOSt | S. thermoviolaceus | 71 | 0.02 | 523 | 1.9 | 4.2 | 9.5 x 10^4 | 8 | this study
AldOTf | T. flexuosa | 300 | 0.03 | 50 | 3.1 | 1.6 | 10 x 10^4 | 32 | this study
V258L_P259I AldOTf | T. flexuosa | 300 | 0.04 | 41 | 4.3 | 4.0 | 11 x 10^4 | 98 | this study
V257L_P258I AldOAb | A. bacterium | 72 | n.d. | 157 | n.d. | 1.4 | n.d. | 9 | this study

1.5 Å resolution structure of AldOAb. Room for improvement: Protein Energy Landscape Exploration, V258L and P259I.

Conclusion:
• A new pipeline was developed by integrating computational bioprospecting, in vitro expression and an oxidase assay to quickly discover novel glycerol oxidases.
• Three thermostable alditol oxidases active on glycerol were found (from Actinobacteria bacterium, Streptomyces thermoviolaceus, and Thermostaphylospora chromogena).
• A high-resolution crystal structure of the alditol oxidase from the Actinobacteria isolate was obtained.
• A structure-inspired double mutant of the alditol oxidase from Thermopolyspora flexuosa was engineered and found to be the most efficient glycerol oxidase known.

Our Lab. About me. More! Read details soon: manuscript submitted.
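As a quick arithmetic check on the kinetics table, catalytic efficiency is just kcat divided by KM, with KM converted from mM to M. A minimal sketch (the helper name is ours, not the authors'):

```python
# Catalytic efficiency k_cat / K_M, with K_M given in mM as in the table above.
def catalytic_efficiency(kcat_per_s, km_mM):
    return kcat_per_s / (km_mM * 1e-3)  # convert mM -> M; result in M^-1 s^-1

# AldOTf on glycerol: kcat = 1.6 s^-1, KM = 50 mM -> 32 M^-1 s^-1,
# and the V258L_P259I variant: kcat = 4.0 s^-1, KM = 41 mM -> ~98 M^-1 s^-1
print(catalytic_efficiency(1.6, 50))
print(catalytic_efficiency(4.0, 41))
```

These values reproduce the kcat/KM columns of the table, which is a useful sanity check that the columns were read in the right order.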
poster
@FACEITArctic @face_it_arctic The FACE-IT Project www.face-it-project.eu. FACE-IT has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 869154.

Historical changes in biomass, total abundance, and species composition of seaweed-associated fauna in Kongsfjorden, Svalbard

Simon Jungblut1, Jessica Niklass2, Markus Brand3, Christian Buschbaum4, Martin Paar5, Inka Bartsch6, Markus Molis2 (jungblut@uni-bremen.de)
1 Marine Botany, University of Bremen, Germany; 2 Department of Arctic and Marine Biology, The Arctic University of Norway, Tromsø, Norway; 3 Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, Helgoland, Germany; 4 Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, List/Sylt, Germany; 5 University of Rostock, Germany; 6 Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, Bremerhaven, Germany

Ice loss is changing zonation patterns: The shift from sea- to land-terminating glaciers causes changes in sediment loads (i.e., the underwater light climate) and less ice-scouring (i.e., disturbance) in shallow-water benthic communities. Sea urchins are the dominant grazers. At Hansneset, Kongsfjorden (Svalbard), profound changes in kelp (brown algae) biomass and species composition have been found (Düsedau et al., in prep.).

Seaweeds provide habitat for many associated animals. Hypothesis: Seaweed-associated fauna is changing concomitantly with the changes observed in seaweeds in 1) biomass, 2) abundance and 3) taxon composition. (Abundance in individuals m-2; biomass as ash-free dry weight in g m-2.)

Results:
Fauna taxonomic composition: 1996/98 ≠ 2012/13 ≈ 2021; depth dependent, year independent.
Total fauna biomass & abundance: 1996/98 ≈ 2021; in 2012/13 only about 50%.
Fauna biomass by depth: 1996/98: 2.5 m < 15 m; 2012/13: 2.5 m > 15 m; 2021: peak at 5 m.
Biomass 2012/13 -> 2021: cirripeds increased, bryozoans decreased.
poster
2017 SIAM Conference on Computational Science and Engineering (CSE17) This work is supported by the U.S. Department of Energy Office of Science, ASCR and BER programs. CSE Complete: R&D for Productivity Improvement M. Heroux, L.C. McInnes, D. Bernholdt, J. D. Moulton Goal: Improve CSE application developer productivity and software sustainability, as key aspects of increasing overall scientific productivity www.ideas-productivity.org We are actively soliciting contributions to the BetterScientificSoftware site, see https://github.com/betterscientificsoftware/betterscientificsoftware.github.io • Software Challenges: Exploit massive on-node concurrency and handle disruptive architectural changes while working toward predictive simulations that couple physics, scales, analytics, and more • Strategy: In collaboration with CSE community • Customize and curate methodologies for ECP application productivity and sustainability • Create an Application Development Kit of customizable resources for improving scientific software development • Partner with application teams on software improvements • Training and outreach in partnership with computing facilities IDEAS-ECP team: Catalysts for engaging CSE community on issues in productivity and sustainability • Partnerships with ECP applications teams • Understand productivity bottlenecks and improve software practices • Collaborate to curate, create, & disseminate software methodologies, processes, and tools that lead to improved scientific software • Software Carpentry-type approach to training for extreme-scale software productivity topics • Web-based hub for collaborative content development & delivery • Community-driven collection of resources to improve scientific software productivity, quality, and sustainability History and Accessibility IDEAS: Interoperable Design of Extreme-scale Application Software • Project began in Sept 2014 as ASCR/BER partnership to improve application software productivity, quality, and 
sustainability Resources: https://ideas-productivity.org/resources Highlights: • WhatIs and HowTo docs: concise characterizations & best practices • What is CSE Software Testing? • What is Version Control? • What is Good Documentation? • How to Write Good Documentation • How to Add and Improve Testing in a CSE Software Project • How to do Version Control with Git in your CSE Project • Webinar series, 2016: Best Practices for HPC Software Developers (slides and videos) • What All Codes Should Do: Overview of Best Practices in HPC Software Development • Developing, Configuring, Building, and Deploying HPC Software • Distributed Version Control and Continuous Integration Testing • Testing and Documenting your Code • more topics … IDEAS-ECP team: Michael Heroux (SNL), Co-Lead PI; Lois Curfman McInnes (ANL), Co-Lead PI; David Bernholdt (ORNL), Co-PI, Outreach Lead; Todd Gamblin (LLNL), Co-PI; Osni Marques (LBNL), Co-PI; David Moulton (LANL), Co-PI; Boyana Norris (Univ of Oregon), Co-PI; Satish Balay (ANL); Roscoe Bartlett (SNL); Anshu Dubey (ANL); Rinku Gupta (ANL); Christoph Junghans (LANL); Alicia Klinvex (SNL); Reed Milewicz (SNL); Mark Miller (LLNL); Elaine Raybourn (SNL); Barry Smith (ANL); Louis Vernon (LANL); Greg Watson (ORNL); James Willenbring (SNL). Facilities liaisons: Lisa Childers (ALCF); Rebecca Hartman-Baker (NERSC); Judy Hill (OLCF); Hai Ah Nam (LANL); Jean Shuler (LLNL). Approach. Phase-1 release: Spring 2017. Productivity and Sustainability Improvement Plans and Progress Tracking Cards: a framework for software teams to identify, plan, and track improvements in productivity and sustainability. Progress Tracking Card: Test Coverage. Other practices: source management system, documentation, software distribution, issue tracking, developer training, etc. • "What Is" doc: defines terms and concepts in a particular topic area. • "How To" doc: describes a process for improving productivity & sustainability.
• Original experience: original article for informing the CSE community about how to impro
poster
Exoplanets detection by direct imaging using statistical learning
Romain MAYER*, David MARY*, Élodie CHOQUET** (* Lagrange (UCA, OCA, CNRS), Nice; ** LAM, Marseille)
romain.mayer@oca.eu, +33688448046, www.linkedin.com/in/romain-mayer

TOPIC AND CONTEXT OF THE PROJECT
Objective: We build a new exoplanet detection algorithm based on statistical methods applied to spatial imagery data (from JWST and VLT/SPHERE). This field has already been explored through different methods, such as the PACO¹ algorithm. The goal of our work is to compare and combine statistical learning approaches and machine learning methods, to set up an innovative hybrid method.

STATISTICAL LEARNING
Patch construction: We use Angular Differential Imaging observations to obtain F frames over time, centered on a studied star. Owing to the Earth's rotation, a potential planet will rotate around the star. We consider patches (squares of size K*K) centered on the studied pixel; consequently, each pixel and its associated patch rotate frame by frame. Projecting each patch onto the frames (1, 2, ..., F), we obtain F patches per studied pixel. We also note that the patches overlap each other over time.

Modelling: Each patch is modelled under both hypotheses. Under H₀ (no planet), a patch consists of the mean of the speckles μ plus the noise ε, assumed Gaussian with covariance C. Under H₁, the signature of a potential planet is added: the PSF multiplied by the flux α.

H₀: r_f = μ⁰ + ε_f⁰;  H₁: r_f = μ¹ + α·PSF + ε_f¹,  with ε_fⁱ ~ N(0, Cᵢ)

Statistical method: We apply hypothesis tests based on maximum-likelihood ratios to each pixel of an image, comparing the hypotheses that there is a planet (H₁) or not (H₀).

Generalized Likelihood Ratio (GLR): As the parameters are unknown, we estimate them by maximum likelihood under both hypotheses. Under H₀, we estimate the mean of the speckles and the covariance matrix; under H₁, we estimate the mean of the speckles, the covariance matrix and the flux of the planet. We then compute the likelihood ratio, which leads to the following test statistic:

R_GLR = det(Ĉ₀) / det(Ĉ₁)

When there is a planet, the determinant of the covariance matrix under H₁ is smaller than the one under H₀, as the signature of the planet is not captured under H₀. The statistic is therefore larger when a planet is present.

Neyman-Pearson test: To obtain the maximum theoretical performance, we build a Neyman-Pearson test, i.e. the hypothesis test in which all parameters are known. It leads to the following test statistic:

R_NP = Σ_{f=1}^{F} h_fᵀ C⁻¹ (r_f − μ)

This statistic is thus a matched filter applied to the patches.

RESULTS ON 1D SIMULATIONS
Simulation: As a first step, we simulate 1D patches to test our methods. We compare three GLR methods (with more or less simplified assumptions) and the Neyman-Pearson test:
● the Neyman-Pearson test (dotted curves);
● the Global GLR (green curves), which does not assume spatial independence between the patches;
● the Spatial GLR (yellow curves), which assumes spatial independence;
● the Simplified GLR (blue curves), which does not distinguish the two hypotheses.
We distinguish a case without overlap and a case with strong overlap to see the impact overlap has on the performance of the methods. [Figures: ROC curves in the case of no overlap and in the case of strong overlap; sketch of the planet PSF (parallactic angle θ) over the speckles; projection of the first patch on frames 2 and F.]

Results: We first observe that the Simplified GLR's performance deteriorates as the rate of overlap increases. Indeed, this method does not estimate the mean of the speckles under each hypothesis separately, so the signature of a potential planet strongly contaminates the estimate of μ when the overlap increases.
The Global GLR is less efficient when there is no overlap, as it requires estimating a very large covariance matrix. The Spatial GLR appears to be a good compromise as it assumes the
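The GLR construction above can be sketched on simulated 1D patches. Everything here (patch length, noise level, the rolled-PSF stand-in for field rotation, the least-squares flux estimate) is an illustrative assumption of ours, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
F, K = 60, 8                                       # F frames, 1D patches of length K
mu = 0.5 * np.ones(K)                              # toy mean speckle pattern
psf = np.exp(-0.5 * (np.arange(K) - K // 2) ** 2)  # toy 1D planet PSF

# Field rotation stand-in: the planet signature h_f is a shifted PSF per frame
h = np.stack([np.roll(psf, f % K) for f in range(F)])

def glr(r, h):
    d0 = r - r.mean(axis=0)                 # residuals under H0 (mean estimated)
    a_hat = (d0 * h).sum() / (h * h).sum()  # least-squares flux estimate under H1
    d1 = d0 - a_hat * h                     # residuals under H1
    c0 = np.cov(d0.T, bias=True)            # covariance estimate under H0
    c1 = np.cov(d1.T, bias=True)            # covariance estimate under H1
    return np.linalg.det(c0) / np.linalg.det(c1)

noise = rng.normal(0.0, 0.3, size=(F, K))
r_h0 = mu + noise                           # no planet
r_h1 = mu + 2.0 * h + noise                 # planet with flux alpha = 2
# det(C0)/det(C1) stays close to 1 without a planet and grows when one is present
```

This reproduces the qualitative behaviour described above: fitting the planet flux under H₁ shrinks the residual covariance only when a planet is actually present.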
poster
Exploring CP sensitivity in the presence of Lorentz invariance violations at T2HK/T2HKK
Supriya Pan1, Srubabati Goswami1, Kaustav Chakraborty1
1 Physical Research Laboratory, Ahmedabad, Gujarat, India

Abstract: Breaking of Lorentz invariance can lead to CPT violation. Conservation or violation of CP symmetry in the leptonic sector is essential to understanding the universe's evolution. We examine how the CPT-violating LIV parameters $a_{e\mu}$, $a_{e\tau}$, $a_{\mu\tau}$ impact CP violation within two proposed setups for the forthcoming T2HK experiment: (i) individual detectors placed at 295 km and 1100 km [T2HKK], and (ii) a detector with double capacity positioned at 295 km [T2HK]. We explore [1] the role of the different beam channels and baselines, and their synergy, in the context of CP sensitivity.

Lorentz Invariance Violation (LIV): The total Hamiltonian in the presence of LIV is

$H = \frac{1}{2E}\left[\,U \begin{pmatrix} 0 & 0 & 0 \\ 0 & \Delta_{21} & 0 \\ 0 & 0 & \Delta_{31} \end{pmatrix} U^\dagger + \begin{pmatrix} A & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}\right] + H_{\rm LIV}$,  (1)

where $A = 2E\sqrt{2}G_F N_e = 7.6\times 10^{-5}\,\rho(\mathrm{g/cc})\,E(\mathrm{GeV})~\mathrm{eV}^2$; $G_F$ is the Fermi constant, $N_e$ the number density of $e^-$ in matter, and $\rho$ the matter density.

$H_{\rm LIV} = \begin{pmatrix} a_{ee} & a_{e\mu} & a_{e\tau} \\ a^\star_{e\mu} & a_{\mu\mu} & a_{\mu\tau} \\ a^\star_{e\tau} & a^\star_{\mu\tau} & a_{\tau\tau} \end{pmatrix} - \frac{4}{3}E \begin{pmatrix} c_{ee} & c_{e\mu} & c_{e\tau} \\ c^\star_{e\mu} & c_{\mu\mu} & c_{\mu\tau} \\ c^\star_{e\tau} & c^\star_{\mu\tau} & c_{\tau\tau} \end{pmatrix}$  (2)

$a_{\alpha\beta}$: CPT violating; $c_{\alpha\beta}$: CPT conserving.

Oscillation probability:

$P_{\mu e} = 2\alpha s_{13}\sin 2\theta_{12}\sin 2\theta_{23}\,\frac{\sin[\hat A\Delta]}{\hat A}\,\frac{\sin[(\hat A-1)\Delta]}{\hat A-1}\cos(\Delta+\delta_{13}) + 4 s_{13}^2 s_{23}^2\,\frac{\sin^2[(\hat A-1)\Delta]}{(\hat A-1)^2} + P^{a_{e\mu}}_{\mu e} + P^{a_{e\tau}}_{\mu e}$  (3)

$P_{\mu\mu} = 1 - \sin^2 2\theta_{23}\sin^2\Delta + P^{a_{\mu\tau}}_{\mu\mu}$  (4)

$P^{a_{e\mu}}_{\mu e} \simeq \frac{4|a_{e\mu}|\hat A\Delta\, s_{13}\sin 2\theta_{23}\sin\Delta}{\sqrt{2}G_F N_e}\,[Z_{e\mu}\sin\psi_{e\mu} + W_{e\mu}\cos\psi_{e\mu}]$  (5)

$P^{a_{e\tau}}_{\mu e} \simeq \frac{4|a_{e\tau}|\hat A\Delta\, s_{13}\sin 2\theta_{23}\sin\Delta}{\sqrt{2}G_F N_e}\,[Z_{e\tau}\sin\psi_{e\tau} + W_{e\tau}\cos\psi_{e\tau}]$  (6)

$P^{a_{\mu\tau}}_{\mu\mu} = \frac{4|a_{\mu\tau}|\hat A\Delta\,\sin 2\theta_{23}\sin\Delta}{\sqrt{2}G_F N_e}\,[Z_{\mu\tau}\cos\varphi_{\mu\tau} + W_{\mu\tau}\cos\varphi_{\mu\tau}]$  (7)

where $\Delta = \Delta_{31}L/4E$, $\alpha = \Delta_{21}/\Delta_{31}$, $\hat A = 2\sqrt{2}G_F N_e E/\Delta_{31}$, $s_{ij} = \sin\theta_{ij}$, $c_{ij} = \cos\theta_{ij}$, $\psi_{e\mu} = \delta_{13} + \varphi_{e\mu}$, $\psi_{e\tau} = \delta_{13} + \varphi_{e\tau}$, and

$Z_{e\mu} = -\cos\theta_{23}\sin\Delta, \quad W_{e\mu} = c_{23}\left(\frac{s_{23}^2}{c_{23}^2}\frac{\sin\Delta}{\Delta} + \cos\Delta\right)$  (8)
$Z_{e\tau} = \sin\theta_{23}\sin\Delta, \quad W_{e\tau} = s_{23}\left(\frac{\sin\Delta}{\Delta} - \cos\Delta\right)$  (9)
$Z_{\mu\tau} = -\sin^2 2\theta_{23}\cos\Delta, \quad W_{\mu\tau} = -\cos^2 2\theta_{23}\,\frac{\sin\Delta}{\Delta}$  (10)

$P_{\mu e}$ [2] depends on $a_{e\mu}$, $a_{e\tau}$ and their respective phases. $P_{\mu\mu}$ [2, 3] depends on $a_{\mu\tau}$ and $\varphi_{\mu\tau}$ at leading order.

Tokai to Hyper-Kamiokande (T2HK) and Tokai to Hyper-Kamiokande and Korea (T2HKK)
Figure 1. Schematic of the T2HK and T2HKK experiments [neutrino.skku.edu]
T2HK [4]: a 374 kt detector at 295 km (2.5° off-axis beam) at the Hyper-K site; the 1st oscillation maximum is at 0.6 GeV.
T2HKK [5]: one 187 kt detector at 295 km at the Hyper-K site and another, similar detector in Korea at 1100 km (1.5° off-axis beam); the 2nd oscillation maximum is at 0.6 GeV.
A 1.3 MW beam with an exposure of 27 × 10²¹ POT.

Table 1. Signal (background) normalization uncertainties
Channel | 295 km | 1100 km
νe appearance | 3.2% (5%) | 3.8% (5%)
νμ disappearance | 3.6% (5%) | 3.8% (5%)
ν̄e appearance | 3.9% (5%) | 4.1% (5%)
ν̄μ disappearance | 3.6% (5%) | 3.8% (5%)

Numerical analysis: GLoBES [6, 7] has been used with the values in Tables 1 and 2.

Table 2. True values and marginalization ranges of all parameters
Parameter | True value | Marginalization range
θ12 | 33.4° | N.A.
θ13 | 8.62° | N.A.
θ23 | 49° | (39°, 51°)
δ13 | (−180°, 180°) | 0°, 180°
Δ21 | 7.4 × 10⁻⁵ eV² | N.A.
|Δ31| | 2.5 × 10⁻³ eV² | (2.4, 2.6) × 10⁻³ eV²
aαβ | 10⁻²³ GeV | (10⁻²², 10⁻²⁴) GeV
φαβ | (−180°, 180°) | 0°, 180°

Single-detector analysis:
[Figure 2. Sensitivity of CP conservation (Δχ²) versus θ23-test (left) and a_eμ-test (right), at 295 km and 1100 km, showing the total and per-channel (νe, ν̄e, νμ, ν̄μ) contributions; true values θ23 = 49°, δ13 = −90°, φeμ = 180°, a_eμ = 10⁻²³ GeV; test values (δ13, φeμ) fixed at 0° or 180°.]
The marginalization over θ23 and aαβ enhances the sensitivity. The major contribution to the χ² comes from the νe and ν̄e channels and ν
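At leading order, and ignoring the matter and LIV terms, the disappearance probability of Eq. (4) is easy to evaluate. The helper below is a toy sketch of ours (not the GLoBES treatment used for the actual sensitivities), using the standard vacuum formula:

```python
import math

# Vacuum leading order of Eq. (4): P(numu -> numu) = 1 - sin^2(2*theta23) * sin^2(Delta),
# with Delta = 1.267 * dm31[eV^2] * L[km] / E[GeV] (the 1.267 absorbs unit conversions).
def p_mumu(E_GeV, L_km, theta23_deg, dm31_eV2=2.5e-3):
    delta = 1.267 * dm31_eV2 * L_km / E_GeV
    return 1.0 - math.sin(2.0 * math.radians(theta23_deg)) ** 2 * math.sin(delta) ** 2

# T2HK: L = 295 km at E = 0.6 GeV sits near the first oscillation maximum,
# so muon-neutrino survival is almost fully suppressed
print(p_mumu(0.6, 295.0, 49.0))
```

With the Table 2 true values, Delta is close to π/2 at 0.6 GeV, which is why the 295 km detector sits at the first oscillation maximum quoted above.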
poster
Levy femtoscopy with PHENIX at RHIC
QM2019 - XXVIIIth International Conference on Ultra-relativistic Nucleus-Nucleus Collisions
Brett Fadem for the PHENIX Collaboration

The PHENIX experiment at RHIC [Beam view of the 2010 PHENIX detector, West and East arms: MPC, RxNP, HBD, PbSc and PbGl calorimeters, TOF-E, TOF-W, PC1-PC3, Central Magnet, TEC, BB, RICH, DC, Aerogel; 7.9 m = 26 ft]
• Collisions of various nuclei, 7.7-200 GeV/nucleon
• Charged-pion ID from ∼0.2 to 2 GeV/c
• Pair reconstruction (in q = p1 − p2) from ∼10 MeV/c

Bose-Einstein correlations map out the femtometer source
• N_1(p), N_2(p_1, p_2), N_3(p_1, p_2, p_3), ...: invariant momentum distributions; the definition of the correlation function:
$C_n(p_1,\dots,p_n) = \frac{N_n(p_1,\dots,p_n)}{N_1(p_1)\cdots N_1(p_n)}$, where $N_n(p_1,\dots,p_n) = \int S(x_1,p_1)\cdots S(x_n,p_n)\,|\Psi^{\rm symm}_n(x_1,\dots,x_n)|^2\,d^4x_1\cdots d^4x_n$
• S(x, p): source function (usually assumed Gaussian; Levy if more general, cf. anomalous diffusion)
• Ψ₂: two-particle wave function; in the interaction-free case |Ψ₂|² = 1 + cos(qx)
• Leads to S̃, the Fourier transform of S; if S is normalized:
$C_2(q,K) \simeq 1 + |\tilde S(q,K)|^2, \quad \tilde S(q,K) = \int S(x,K)e^{iqx}d^4x, \quad q = k_1 - k_2,\ K = (k_1+k_2)/2,\ q \ll K$

Final-state interactions and resonances to take into account
• Identical charged pions: the Coulomb interaction distorts the simple picture
• Different methods of handling it, e.g. a Coulomb-correction factor: $C_{\rm B\text{-}E}(q) = K(q)\cdot C_{\rm meas.}(q)$
• Two-component pion source: S = S_core + S_halo
– Primordial pions: core ≲ 10 fm
– Resonance pions, from very far regions: halo
– Observed C_n(q→0) = 1 + λ_n ≠ 2
– Measure: √λ₂ = f_c = core/(core+halo)
– Resonances reduce the correlation function [1, 2]
– Cross-check: λ₃ = 3f_c² + 2f_c³ [3]
– Introduce κ₃ = 0.5(λ₃ − 3λ₂)/√λ₂³ ≡ 1
– If partial coherence: possible deviations, κ₃ ≠ 1?
[Sketch: C(q) rising from 1 + (core/(core+halo))² toward the full intercept of 2 inside the unresolvable low-q region]

The Levy distribution as source function
• Expanding hadron-resonance gas, increasing mean free path → Levy flight
• Anomalous diffusion, generalized central limit theorem → Levy distribution
$L(\alpha, R, r) = (2\pi)^{-3} \int d^3q\, e^{iqr}\, e^{-\frac{1}{2}|qR|^\alpha}$, with α = 1: Cauchy, α = 2: Gauss
• Shape of the correlation function [4]: $C_2(|q|) = 1 + \lambda\, e^{-(R|q|)^\alpha}$, with |k| = |q|/2
• Critical behaviour is described by critical exponents
• Spatial correlation ∝ r^{−(d−2+η)} → defines the η exponent
• Symmetric stable (Levy) distributions lead to a source ∝ r^{−1−α}, so α ≡ critical exponent η
• QCD universality class ↔ 3D Ising [5, 6]: η(CEP) = 0.03631(3) (3D Ising) [7]; η(CEP) = 0.5 ± 0.05 (random-field 3D Ising) [8]

Levy exponent α vs. mT
• Levy source: statistically valid description in 1D & 3D
• Far from Gaussian (α = 2) and exponential (α = 1)
• Far from the random-field 3D Ising value at the CEP (α = 0.5)
• See more details in Refs. [9, 10]

Levy exponent α vs. centrality
• Clear decrease of α for central collisions
• All far from Gauss (α = 2) and the CEP (α = 0.5)
• See more details in Ref. [11]

Correlation strength λ vs. mT
• Core-halo model: √λ = core/(core + halo)
• Small-mT decrease: in-medium η′ mass reduction?
• See more details in Ref. [9]

Levy exponent α vs. √sNN [GeV]
[Figure: α (1.0-1.5) vs. √sNN for PHENIX 0-30% Au+Au, π⁺π⁺ and π⁻π⁻ pairs at ⟨mT⟩ = 0.420 GeV/c²; data at √sNN = 14.6, 19.6, 27, 39, 62.4 and 200 GeV with ΔmT bins of 300, 300, 200, 150, 50 and 12 MeV/c²; PHENIX preliminary]
• No clear collision-energy dependence
• All far from Gauss (α = 2) and the CEP (α = 0.5)
• See more details in Ref. [12]

Chaoticity analysis via κ₃
• In the pure core-halo model: κ₃ ≡ 1
• Deviations: coherence or other phenomena [3]
• If partially coherent production: κ₃ < 1
• Results so far compatible with chaotic π production
• See more details in Ref.
[13] Summary • Levy femtoscopy at √sNN = 200 GeV – Statistically acceptable description, far from Gaussian – 1D and 3D fits yield similar results – α far from CEP value – 3π analysis: chaoticity & core-halo assumptions valid – Decrease of λ at small mT : resonance effects – In-medium η′ mass modification? • Centrality and collision energy dependence – Clear centrality dependence of α – No stro
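The Levy correlation shape quoted above is simple to evaluate. A small sketch (the parameter values are illustrative, and R|q| is treated as dimensionless):

```python
import numpy as np

def c2(q, lam, R, alpha):
    """Levy-type correlation function C2(q) = 1 + lambda * exp(-(R*|q|)^alpha)."""
    return 1.0 + lam * np.exp(-np.abs(R * q) ** alpha)

q = np.linspace(0.0, 2.0, 201)               # relative momentum, R*q dimensionless
gauss = c2(q, lam=0.7, R=6.0, alpha=2.0)     # alpha = 2: Gaussian source
cauchy = c2(q, lam=0.7, R=6.0, alpha=1.0)    # alpha = 1: Cauchy source
# Both curves intercept at 1 + lambda as q -> 0; the Cauchy curve keeps
# the heavier tail at large R*q, which is what the fits discriminate on
```

Fitting this functional form to measured C2(q) is how α, R and λ are extracted; α interpolates continuously between the exponential (α = 1) and Gaussian (α = 2) limits discussed in the summary.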
poster
Innovative Research for a Sustainable Future www.epa.gov/research

Transcriptome profiling to identify ATRA-responsive genes in human iPSC-derived endoderm for high-throughput point-of-departure analysis
Katerine S. Saili, Todd J. Zurlinden, Todor Antonijevic, Imran Shah, Thomas B. Knudsen
National Center for Computational Toxicology, U.S. Environmental Protection Agency, Office of Research and Development, Research Triangle Park, NC, USA
saili.katerine@epa.gov | 919-541-3871

Background
• Toxicological tipping points occur at chemical concentrations that overwhelm a cell's adaptive response [Shah et al. 2016]
• Human iPSC-derived endodermal differentiation (endogenesis) is an in vitro platform for probing the developmental impacts of a toxicological tipping point
• Endogenesis is critical for organs such as the stomach, intestine, colon, pancreas, liver, urinary bladder, trachea, lung, pharynx, thyroid and parathyroid glands, and the visceral yolk sac
• Retinoid signaling is critical for early development and directs morphogenesis, growth, and differentiation of the embryo, including endogenesis via retinoic acid gradients
• Epigenetic changes mediate retinoid-induced stem cell differentiation [Gudas 2013]

OBJECTIVE: Identify biomarkers for high-content analysis of tipping points in a developmental context

Methods
Workflow: human fibroblast-derived iPSCs (Allele Biotech) -> low serum + Activin -> differentiating endoderm; exposures and sample collection (Vala Sciences) at 0 d, 6 h, 1 d, 2 d, 3 d, 4 d, 5 d, 6 d, 7 d and 8 d, with 0.1% DMSO or 0.001, 0.01, 0.1, 1 or 10 µM ATRA; total RNA for each treatment (x2); RNA sequencing (Cofactor Genomics; >30 million single-end reads; 26011 genes, 75339 transcripts); data analysis (US EPA). Vitamin A -> all-trans retinoic acid (ATRA) -> RAR/RXR.
Log2(RPKM+1) values were used for differential gene expression analysis (ANOVA) and filtered (p < 0.00001; fold-change < −2 or > 2).

Results (pathway analysis)
• KEGG pathway analysis of the differentially expressed genes (6033) returned 8 highly enriched signaling pathways (p < 0.0001) [table of pathway name, concentration, time; tipping point indicated]
• Endodermal differentiation based on FOXA2 expression occurred between 6 h and 4 d [FOXA2 fold-change versus time-matched DMSO control; un: undifferentiating control]
• ATRA reduced FOXA2 expression (purple staining at 4 d; 0.1% DMSO vs 10 µM ATRA) and other endoderm-specific biomarkers in a concentration-dependent manner at 4 d
• ATRA first suppressed differentiation at 0.1 µM (by 4 d), indicated by multiple pathways, such as protein digestion and absorption, that characterize the trajectories for tipping-point analysis

Results (epigenetics)
• KEGG pathway analysis: two highly enriched pathways (p < 0.0001)
• Enriched pathways uniquely associated with 4 d ATRA exposure were driven by differential expression of 29 histone-encoding genes
• ATRA increased the expression of these 29 histone-encoding genes by 4 d, suggesting DNA remodeling is a target process for tipping-point analysis [heatmaps: ATRA-responsive genes (p < 0.00001, fold-change < −2 or > 2); histones; histone-modifying enzymes; ectodermal and endodermal markers (fold-change versus 6 h DMSO control); list of differentiation markers provided by Sid Hunter (EPA/NHEERL)]

Summary and Conclusions
• Endogenesis may be a suitable ToxCast platform for molecular-level characterization of toxicological tipping points during cell differentiation

References
• Shah et al. 2016. Using ToxCast™ Data to Reconstruct Dynamic Cell State Trajectories and Estimate Toxicological Points of Departure. Environmental Health Perspectives 124(7).
• Gudas. 2013. Retinoids induce stem cell differentiation via epigenetic changes. Semin Cell Dev Biol 24.
• The views expressed in this poster are those of the authors and may not reflect U.S. EPA policy.
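The filtering step above can be sketched with stand-in data. The matrix, p-values and fold-changes below are synthetic; only the transform and the thresholds come from the poster:

```python
import numpy as np

rng = np.random.default_rng(1)
rpkm = rng.gamma(2.0, 50.0, size=(1000, 6))  # synthetic expression matrix: genes x samples
expr = np.log2(rpkm + 1.0)                   # the Log2(RPKM+1) transform used on the poster

p_values = rng.uniform(size=1000)            # stand-ins for per-gene ANOVA p-values
log2_fc = expr[:, 3:].mean(axis=1) - expr[:, :3].mean(axis=1)  # treated minus control

# Poster filter: p < 0.00001 and linear fold-change outside (-2, 2),
# which on the log2 scale means |log2 FC| > 1
significant = (p_values < 1e-5) & (np.abs(log2_fc) > 1.0)
```

The log2(RPKM+1) transform keeps zero-expression genes at 0 and compresses the dynamic range, which is why the fold-change cut of 2 becomes a cut of 1 in log2 units.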
poster
Where do pale and yellow signals go in the Drosophila brain?
Michael B. Reiser, Aljoscha Nern, Arthur Zhao, Kit Longden, Gerry Rubin, Miriam Flynn, Connor Laughland, Henrique Ludwig, and Ruchi Parekh
Janelia Research Campus, HHMI

Drosophila eye background: [Figure adapted from Behnia & Desplan (2015), Current Opinion in Neurobiology: (a) pale, yellow and DRA ommatidia across the eye, lamina, medulla, lobula and lobula plate; (b) spectral sensitivity of rhodopsins Rh1, Rh3-Rh6 in R1-6, R7 and R8 over 300-650 nm; (c) rhodopsin expression patterns; (d) pathways for motion vision, phototaxis, color vision and polarized-light vision]

Recently characterized aMe12 cells extend along pale columns in the medulla. [Figure: aMe12 cells reconstructed in the FAFB SS-TEM volume; Rh6-R8 (yellow ommatidia), Rh3-R7 (pale ommatidia); pR7, pR8, yR7, yR8 columns.] aMe12 cells identify pale and yellow columns for targeted reconstruction.

Example R7 manually traced through ~2200 sections in the full adult fly brain (Zheng et al. 2018; temca2data.org) using the CATMAID environment (Saalfeld, Cardona, et al. 2009). [Image: lamina, medulla and central brain; 50 µm scale bar.]

Examples of major R7 and R8 target neurons reconstructed to identification: Dm8, Dm2, Dm9, Dm11, TmOTu, Tm5c, Tm20, Mi4, Mi15, ML1, Mi1, L3, Tm5a, Tm5b, L1, Mi9, C2, C3, ML2a/b.

Connectivity table (the original separates pre-synaptic and post-synaptic cell types):

Cell type (n) | pR7 | yR7 | pR8 | yR8 | total | % of total | % of R7 | % of R8 | % of pale | % of yellow
Dm9 (6) | 70 | 61 | 98 | 170 | 399 | 14.90 | 12.19 | 18.97 | 42.11 | 57.89
Dm8 (14) | 226 | 151 | 0 | 0 | 377 | 14.08 | 35.07 | 0.00 | 59.95 | 40.05
Tm_unc (14) | 59 | 17 | 50 | 46 | 172 | 6.43 | 7.07 | 6.79 | 63.37 | 36.63
Tm_OTu (6) | 75 | 90 | 0 | 0 | 165 | 6.16 | 15.35 | 0.00 | 45.45 | 54.55
Tm5c (6) | 8 | 4 | 20 | 110 | 142 | 5.30 | 1.12 | 9.20 | 19.72 | 80.28
Tm20 (4) | 5 | 2 | 65 | 62 | 134 | 5.01 | 0.65 | 8.99 | 52.24 | 47.76
Mi4 (4) | 0 | 0 | 56 | 57 | 113 | 4.22 | 0.00 | 8.00 | 49.56 | 50.44
Mi15 (4) | 6 | 6 | 27 | 73 | 112 | 4.18 | 1.12 | 7.08 | 29.46 | 70.54
Dm2 (4) | 21 | 3 | 55 | 15 | 94 | 3.51 | 2.23 | 4.95 | 80.85 | 19.15
ML1 (4) | 0 | 0 | 56 | 38 | 94 | 3.51 | 0.00 | 6.65 | 59.57 | 40.43
yR7 (3) | 0 | 3 | 0 | 80 | 83 | 3.10 | 0.28 | 5.66 | 0.00 | 100.00
pR7 (2) | 0 | 0 | 68 | 0 | 68 | 2.54 | 0.00 | 4.81 | 100.00 | 0.00
Mi1 (4) | 0 | 0 | 30 | 29 | 59 | 2.20 | 0.00 | 4.18 | 50.85 | 49.15
L3 (4) | 18 | 15 | 11 | 12 | 56 | 2.09 | 3.07 | 1.63 | 51.79 | 48.21
Dm11 (2) | 23 | 26 | 3 | 2 | 54 | 2.02 | 4.56 | 0.35 | 48.15 | 51.85
Tm5a (2) | 0 | 52 | 0 | 0 | 52 | 1.94 | 4.84 | 0.00 | 0.00 | 100.00
Mt_unc (7) | 2 | 11 | 11 | 18 | 42 | 1.57 | 1.21 | 2.05 | 30.95 | 69.05
Tm5b (3) | 20 | 5 | 2 | 11 | 38 | 1.42 | 2.33 | 0.92 | 57.89 | 42.11
L1 (4) | 2 | 1 | 19 | 14 | 36 | 1.34 | 0.28 | 2.34 | 58.33 | 41.67
Mi9 (4) | 3 | 8 | 24 | 0 | 35 | 1.31 | 1.02 | 1.70 | 77.14 | 22.86
aMe12 (2) | 4 | 0 | 24 | 0 | 28 | 1.05 | 0.37 | 1.70 | 100.00 | 0.00
yR8 (2) | 0 | 27 | 0 | 0 | 27 | 1.01 | 2.51 | 0.00 | 0.00 | 100.00
C2 (3) | 7 | 13 | 6 | 0 | 26 | 0.97 | 1.86 | 0.42 | 50.00 | 50.00
pR8 (2) | 25 | 0 | 1 | 0 | 26 | 0.97 | 2.33 | 0.07 | 100.00 | 0.00
ML2b (3) | 0 | 0 | 25 | 0 | 25 | 0.93 | 0.00 | 1.77 | 100.00 | 0.00
Dm_unc (3) | 5 | 1 | 9 | 5 | 20 | 0.75 | 0.56 | 0.99 | 70.00 | 30.00
Mi10 (1) | 0 | 0 | 0 | 6 | 6 | 0.22 | 0.00 | 0.42 | 0.00 | 100.00
C3 (1) | 0 | 0 | 0 | 3 | 3 | 0.11 | 0.00 | 0.21 | 0.00 | 100.00
ML2a (1) | 0 | 0 | 0 | 2 | 2 | 0.07 | 0.00 | 0.14 | 0.00 | 100.00
unknown (143) | 30 | 51 | 43 | 65 | 189 | 7.06 | 7.53 | 7.64 | 38.62 | 61.38
total | 579 | 496 | 660 | 753 | 2677 | 100.00 | 100.00 | 100.00 | 46.28 | 46.66

Comparing the synapse counts from our 4-column reconstruction to those of Janelia FlyEM's 7-column FIBSEM (Takemura et al. 2015), we surprisingly find a similar number of synapses. A partial explanation is that the TEM volume contains the lamina and its connections: 15.4% of input and 7.4% of output synapses are found outside of the medulla. In the 7-column reconstruction, only 57% of R8's pre-synaptic partners could be identified, and 48% of R7's. [Bar chart: input (post-) and output (pre-) synapse counts for R7 and R8 in the 7-column FIBSEM, 2-column pale TEM and 2-column yellow TEM volumes.]

Connectivity summary
• The 'greedy' approach to a targeted connectome of R7/R8 is shown to be practical; we propose to use it as a test case for new analysis methods.
• A unique identity for many Tm cells is proving difficult to establish using morphology alone; more extensive reconstruction and connectivity are required.
• Several candidate pale or yellow 'specialist' target neurons have been identified. Work is ongoing to verify these and to search for evidence of a potential opponency mechanism.
• What would you do next?
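The percentage columns of the connectivity table follow directly from the raw counts; for example, for the Dm9 row (a sketch of our own arithmetic check, not the authors' code):

```python
# Dm9 synapse counts onto pR7, yR7, pR8, yR8, taken from the table above
pR7, yR7, pR8, yR8 = 70, 61, 98, 170
total = pR7 + yR7 + pR8 + yR8               # 399 synapses for Dm9
grand_total = 2677                          # all counted synapses in the table

pct_of_total = 100.0 * total / grand_total  # matches the "% of total" column
pct_pale = 100.0 * (pR7 + pR8) / total      # matches "% of pale"
pct_yellow = 100.0 * (yR7 + yR8) / total    # matches "% of yellow"
```

Running the same arithmetic down every row is a quick way to verify that a flattened table like this one was re-assembled with its columns in the right order.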
poster
tub. University Library of TU Hamburg (Universitätsbibliothek TU Hamburg), Denickestr. 22, 21073 Hamburg, www.tub.tuhh.de

Open Research at tub. [Word cloud of recurring themes, translated from German:] open research, open access, open science, open data, open infrastructure, openness (Offenheit), transparency (Transparenz), sustainability (Nachhaltigkeit), knowledge (Wissen), research data management (Forschungsdatenmanagement, FDM), research data (Forschungsdaten), research data repository, Research Data Management Organizer (RDMO), OA publication fund, open licenses (Offene Lizenzen), Creative Commons, CC licenses, copyright (Urheberrecht), open educational resources (OER), eLearning, teaching and learning (Lehren und Lernen), research and publishing (Forschen und Publizieren), scholarly working practices (Wissenschaftliches Arbeiten), information literacy (Informationskompetenz), research information (Forschungsinformation), digitalisation (Digitalisierung), repository, Hamburg Open Science (HOS), Hamburg Open Online University (HOOU), openTUHH, tub.services, information and advice (Information und Beratung), Collect, Write, Publish.
poster
Citizen collectives for healthcare spread to nearby neighborhoods
Kevin Wittenberg, Rense Corten
The Contagion of Collective Action: The Spread of Citizen Collectives for Care in The Netherlands (work in progress)

Background: Citizens increasingly organize (in)formal care services among themselves in residential communities. We investigate their emergence based on a theoretical frame of diffusion of behavior. To do so, we test whether these care collectives are geographically clustered, while accounting for spurious causes of clustering.

Methods: administrative data, machine learning, spatial statistics. Using the residuals of a data-driven model, we estimate spatial correlation that cannot be attributed to underlying correlates of collective action.

Hypotheses:
H1: Care collectives are positively spatially correlated.
H2: Residuals of care collectives are positively spatially correlated.
H3: Local spatial correlation is weaker across municipal borders.

Model performance: cross-validated performance of various methods; test-set performance of the final model: overall accuracy 0.83, recall 0.28, precision 0.75.

Main result 1: Care collectives are spatially clustered, even while accounting for regional attributes (results for H1 & H2: Moran's I and join-count statistics, interpreted on a scale from -1 to 1, on the presence/absence of collectives and on the residuals).

Main result 2: Care collectives are not much better predicted than baseline. Political climate may be important.

Conclusion: This is among the first studies to quantify the geographic dispersion of citizen collectives for care. We find support for the notion that citizen collectives can influence collective action in neighboring regions. We combined data-driven techniques and inferential statistics.
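Moran's I, the spatial-correlation statistic referenced above, can be sketched as follows (the toy adjacency structure and values are ours, not the study's data):

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I of values x under a symmetric spatial weight matrix W."""
    z = x - x.mean()
    n = len(x)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# Six regions in a chain, with collectives present only in the first three
x = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
W = np.zeros((6, 6))
for i in range(5):                  # chain adjacency 0-1-2-3-4-5
    W[i, i + 1] = W[i + 1, i] = 1.0
# Clustered presence gives I = 0.6 (positive spatial correlation);
# a perfectly alternating pattern over the same chain would give I = -1.0
```

Applied to model residuals instead of raw presence/absence, the same statistic captures the H2 idea: clustering that remains after regional attributes have been accounted for.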
poster
Patterns and Anti-Patterns when Measuring Diversity in Open Source
amanda casari (amcasari@google.com), Google Open Source Programs Office, opensource.google

Self-Identifying Factors: Diversity is often measured in open source communities through surveys, by factors such as:
● geographic region (i.e. city, state, country, continent)
● role in an organization (i.e. developers, managers, executives)
● types of problems someone works on (i.e. design, front-end engineering)
● experience level (i.e. number of months or years working with a project)
● project community (i.e. Gophers are members of the Go community)
Diversity has also been measured in open source communities by factors such as gender, age, sexual orientation, race, and ethnicity.

Known Measurement Problems:
● survey design¹
● biased sampling
● oversimplification
● overstating results
Representation and counting people matter, but whom do these kinds of statistics serve? Who does this data belong to? We must also consistently ask of our demographic collection and goals:
● Who does this serve?
● Who does this leave out?
● Who does this harm?
● Who does this expose?

References
1. "Writing Survey Questions", Pew Research Center, https://www.pewresearch.org/our-methods/u-s-surveys/writing-survey-questions/
2. Data Feminism, Catherine D'Ignazio and Lauren F. Klein (2020)
3. Golang Developer Survey, Go Community (2021)
4. Blaise Agüera y Arcas, Margaret Mitchell and Alexander Todorov, Physiognomy's New Clothes (2017)
5. Lockhart, J.W., King, M.M. & Munsch, C. Name-based demographic inference and the unequal distribution of misrecognition. Nat Hum Behav (2023)

About this Project: Google Open Source's mission is to bring the value of open source to Google and all of the resources of Google to open source. We believe that open source solves real-world problems for everyone. Because when we say everyone, we mean everyone. So, how do we know we are
- enabling everyone?
- encouraging everyone?
- creating space for everyone?
So...how do you measure "everyone"? We set specific goals. We measure against them. We aim to do even better. Pattern: Understand Your Intent + Design with Purpose 1. Diversity, equity, and inclusion programs may be centralized or decentralized, but DEI work is integrated into all Google human-centered programs 2. If your mission statement includes words like "everyone", "all", or "community", you will want to know how your work is meeting those goals Anti-Pattern: Tacking demographic questions onto any available mechanism 1. Relying on demographic questions from ad-hoc surveys can complicate survey design 2. Regularly scheduled data collection and transparent analysis demonstrate commitment to community goals Pattern: Collecting Data for Accountability 1. Be clear + transparent with your communities on your diversity goals 2. Explain why you ask specific questions + what you find in plain language a. "These questions allow us to measure diversity in the community and highlight opportunities for outreach and growth." (Golang Developer Survey 2021) Anti-Pattern: Every Single Inferential Method assigning identity At Google, we don't use, recommend, or release products which algorithmically infer gender, race, age, sexual orientation, ethnicity … These inferential algorithmic methods continue to be debunked as pseudoscience, even when dressed in technology: 1. Physiognomy’s New Clothes, 20174 2. Name-based demographic inference and the unequal distribution of misrecognition, 20235 Measuring Diversity "Measuring diversity" specifically refers here to measuring differences in self-identifying factors that people use to confirm themselves in relation to others. Using principles from biodiversity + ecosystem diversity, we focus on more than just individuals or flattened profiles. ●"Nature abhors a vacuum" or "Nature ...umm...finds a way." We are not measuring in a vacuum, designing for monocultures, or assuming spherical cows. 
Pattern: Ask Well-Designed Questions (build with!) 1. Designing
poster
Gravity measurements in the Moscow gravity network
I. A. Oshchepkov1,3, R. A. Sermyagin1, A. A. Spesivtsev1, V. D. Yushkin1, A. V. Pozdnyakov1, A. A. Kovrov2, P. A. Yuzefovich2,4
1Center of Geodesy, Cartography and SDI (ex. TsNIIGAiK) 2Moscow State University of Geodesy and Cartography (MIIGAiK) 3ilya@geod.ru, 4pauzhas@gmail.com

Introduction: Metrological local precise gravity networks are created to calibrate relative gravimeters, to test absolute gravimeters (AG) and to improve methods of performing and processing gravity measurements. We report on the gravity measurement campaign in the Moscow Gravity Network in Spring 2015. The major goals of this campaign were to determine:
• linear scale factors for relative gravimeters,
• gravity at all stations w.r.t. the International Comparisons of Absolute Gravimeters (ICAG2013),
• offsets of the AGs which did not participate in the Russian–Finnish Comparisons of Absolute Gravimeters (RFCAG2013).

1. Network. The Moscow Gravity Network consists of six stations (see Fig. 1): TsNIIGAiK, Zvenigorod, Krasnaya Presnya, Ledovo, Troitse-Seltso, Mendeleevo. The largest gravity difference is about 52 mGal. All stations are permanent and have gravity values from absolute measurements which are usually repeated annually.

[Figure 1: Moscow gravity network (scale bar 20 km); stations TsNIIGAiK, Troitse-Seltso, Krasnaya Presnya, Mendeleevo, Zvenigorod, Ledovo, with line gravity differences in mGal]

The absolute gravity measurements at TsNIIGAiK and Zvenigorod were performed by FG5x #221 (Finnish Geospatial Research Institute) in 2013, and gravity values there are known w.r.t. ICAG2013 in Walferdange (Luxembourg). The relative gravity measurements were planned so that each point was connected to the nearest three neighbors. There are 10 lines in the network. 2.
Instruments and measurements

Figure 2: All CG-5 gravimeters (left) and absolute gravimeter GBL-M (right)

2.1 Absolute gravity measurements. The absolute gravity measurements were made with the GBL-M #001 and #003 field ballistic gravimeters [3] (see Fig. 2). Neither gravimeter participated in RFCAG2013, and they have no correction w.r.t. ICAG2013. The measurements were performed at three points with each gravimeter, with overlaps (see table below).
S/N: GBL-M #001 | Stations: TsNIIGAiK, Krasnaya Presnya, Mendeleevo
S/N: GBL-M #003 | Stations: TsNIIGAiK, Krasnaya Presnya, Ledovo
All processing methods of the absolute gravity measurements were standard, except that:
• no self-attraction and diffraction corrections were applied,
• Atlantida3.1 2014 [2] was used to calculate the tide correction.

2.2 Relative gravity measurements. The relative gravity measurement campaign with the five Scintrex CG-5 spring gravimeters
• #41262, 41265, 40443 (of TsNIIGAiK) and
• #41077, 4173 (of MIIGAiK)
took place in March–May 2015 at all stations of the network for the first time. Some highlights:
• difference method (from two to four runs per line);
• 30–60 minutes of measurements per station, thus no more than one line per day;
• very careful transportation in the hands of the operators to minimize errors due to tilts, shocks, weather conditions, etc.

3. Data processing. All measurements were processed and adjusted with newly and actively developed gravity-processing software (not yet public) written in the Python programming language.

3.1 Vertical gravity gradients. The gravity changes above the mark were approximated by a second-degree polynomial (see Table 2), as discussed in [1]; unadjusted measurements at three or four vertical levels were used as input data.
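The polynomial approximation of the vertical gravity gradient can be sketched as a least-squares fit of Δg(h) = a·h + b·h² to gravity differences measured at several heights above the mark. The heights and gravity values below are illustrative stand-ins, not the campaign data; the coefficient roles mirror a and b in Table 2.

```python
import numpy as np

# Hypothetical gravity differences (uGal) relative to the mark,
# observed at four vertical levels h (m) -- illustrative numbers only,
# generated from a = -320 uGal/m, b = 10 uGal/m^2.
h  = np.array([0.25, 0.70, 1.00, 1.30])
dg = np.array([-79.375, -219.1, -310.0, -399.1])

# Fit dg(h) = a*h + b*h**2 by least squares. There is no constant term,
# since dg(0) = 0 at the mark by construction.
A = np.column_stack([h, h ** 2])
(a, b), *_ = np.linalg.lstsq(A, dg, rcond=None)
print(f"a = {a:.2f} uGal/m, b = {b:.2f} uGal/m^2")

# Gravity change from the mark up to an instrument height of, e.g., 1.2 m:
dg_12 = a * 1.2 + b * 1.2 ** 2
```

With the synthetic data above the fit recovers a = -320 and b = 10 exactly; with real observations the normal-equation covariance would also give the σa, σb and σab values reported in Table 2.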
Table 2: Polynomial approximation results: a and σa in µGal, b and σb in µGal m−1

Station            a        b      σa     σb     σab
TsNIIGAiK        -349.59  15.59   3.50   2.48   -8.49
Zvenigorod       -321.86   4.00   4.20   2.93  -12.13
Krasnaya Presnya -297.49   3.34   6.03   4.20  -24.82
Ledovo           -320.91   7.83   6.06   4.29  -25.50
Troitse-Seltso   -355.84  12.44   4.89   3.27  -15.56
Mendeleevo       -255.04  -3.43   8.69   5.41  -46.24

3.2 Network adjustment. The least-squares adjustment solutions were computed for three main cases: 1. RG+ICAG2013: relative measurem
poster
Towards a Scalable Informatics Platform for Enhancing Accrual into Clinical Research Studies
Utah CCTS
Ramkiran Gouripeddi1,2, Mollie Cummins1,2,3, Elizabeth Lane4, Randy Madsen2, Ryan Butcher2, Jianyin Shao1, Katherine Sward1,2,3, Bernie LaSalle1,2, Robinson Singleton2,5, Julie Fritz3, Julio Facelli1,2
1Department of Biomedical Informatics, 2Center for Clinical and Translational Science, 3College of Nursing, 4College of Health, 5Department of Internal Medicine, University of Utah, Salt Lake City, Utah, USA

Acknowledgements: This project is supported by NCATS UL1TR001067, NLM 5T15LM007124 and 1R18 HS022641-01A1. Computer resources were provided by the University of Utah Center for High Performance Computing. Contact: Ram Gouripeddi (ram.gouripeddi@utah.edu)

Introduction
• Failure to recruit the targeted number of participants in a timely manner often results in underpowered studies.
• More than 60% of clinical studies fail to complete or require extensions due to enrollment issues.
• The objective of this study is to develop and implement a scalable, organization-wide platform to enhance accrual into clinical research studies.

Methods
• We are developing and evaluating an informatics platform, the Utah Utility for Research Recruitment (U2R2), consisting of:
  • Semantic Matcher: an automated trial-criterion-to-patient matching component that reports the uncertainties associated with each match;
  • Match Delivery: mechanisms to deliver lists of matched patients for different research and clinical settings;
  • Structured Trial Criteria Capture.
• As a first step, we limited the Semantic Matcher to utilize only structured data elements from the patient record and trial criteria.
• We evaluated this first phase of U2R2 for an ongoing randomized trial with a target enrollment of 220 participants that compares two treatment strategies for managing back pain (physical therapy and usual care) for individuals consulting a non-surgical provider and symptomatic for less than 90 days.
• Study team notified of matching patients on a biweekly basis. Recruitment Processes within Utah Utility for Research Recruitment Current recruitment results for Back Pain study Discussion • Recruitment platforms can enhance potential participant identification. • Development and operationalization of such platforms requires attention to multiple issues involved with clinical research studies. • Unstructured data • Clinical eligibility criteria are usually unstructured and require human mediation and abstraction into discrete data elements for matching against patient records. • Key eligibility data are often embedded within text in the patient record. • Distributional semantic approaches, by leveraging this content, can identify potential participants for screening with more specificity. (See poster titled “Semantic Characterization of Clinical Trial Descriptions from ClinicalTrials.gov and Patient Notes from MIMIC-III”) • Structured capture of eligibility criteria could improve match performance. • Match Delivery: Workflows and thresholds for delivery of matched patients should consider characteristics of : (1) Research study, (2) Population, (3) Targeted enrollment, (4) Organizational and socio-technical issues surrounding clinical practice and research, (5) Standards for messaging (FHIR). • Utilizing user-centered design approaches and including clinicians, clinics, and patients in recruitment workflows could yield higher accrual indices. Example eligibility criteria and REDCap-based discrete capture form with embedded branching logic for metadata and modifiers
poster
[Panels: inside-out vs. outside-in quenching relative to the PS ridge; radial profiles of log EW(Hα), L′(Hα) and D4000]

● Rings are twice as frequent in globally passive systems as in globally active systems → in line with simulations of ring formation via captures of small satellites ([1], [3]).
● Hα emission in the rings of the globally active S0s is 4 times higher than in those globally passive, implying: → star formation feeds on the gas of the host, → in-plane rings.

Methods: PCA of the spectral profiles of ~500 S0 MaNGA galaxies. Radial binning and stacking of spaxels to generate the spectral profile of each galaxy, i.e., the mean spectrum as a function of galactocentric distance. Projection of the spectral profiles onto their first 2 principal components (~90% of the sample variance [4]).

José L. Tous1, Jaime D. Perea2, José M. Solanes1, Helena Domínguez Sánchez3
¹Institut de Ciències del Cosmos, Universitat de Barcelona (ICCUB), ²Instituto de Astrofísica de Andalucía (IAA-CSIC), ³Instituto de Ciencias del Espacio (ICE-CSIC)

The examination of the spatially resolved IFS maps in a sample of more than 500 lenticular (S0) galaxies drawn from the MaNGA survey has unveiled the existence of transient inner annular structures (⟨R⟩ ∼ 1 Re) betraying ongoing star formation in a good number of these objects. Activity gradients in these galaxies have been measured through a novel methodology based on the principal component analysis of their optical spectra averaged over bins of galactocentric radius. We find that the sign of these gradients is closely linked to the presence of rings in the spectral maps, which are especially conspicuous in the equivalent width of the Hα emission, EW(Hα), with a fractional abundance (22–37%, depending on the strictness of ring identification) larger than that inferred from optical imaging.
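The PCA step can be sketched with NumPy's SVD: stack the radially binned mean spectra into a matrix, centre it, and project onto the first two principal components. The spectra below are synthetic stand-ins built from two latent components; the real MaNGA pipeline and data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 500 "mean spectra" of 200 wavelength bins,
# built from two latent components plus small noise (illustrative only).
n_spec, n_wave = 500, 200
comp = rng.normal(size=(2, n_wave))         # latent spectral shapes
coeff = rng.normal(size=(n_spec, 2))        # per-galaxy mixing weights
spectra = coeff @ comp + 0.05 * rng.normal(size=(n_spec, n_wave))

# PCA via SVD of the mean-centred data matrix.
centred = spectra - spectra.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
explained = s ** 2 / (s ** 2).sum()         # fractional variance per PC

# Project each spectrum onto the first two principal components
# (the PC1-PC2 plane used on the poster to classify activity).
projection = centred @ vt[:2].T             # shape (500, 2)
print(f"variance in first 2 PCs: {explained[:2].sum():.2%}")
```

Because the toy spectra have only two latent components, the first two PCs capture nearly all of the variance, mirroring the ~90% figure quoted for the real sample.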
While the numbers of S0s with globally positive, negative, and flat activity gradients are comparable, star-forming rings are found almost exclusively in objects with a positive slope, for which quenching proceeds inside-out, in good agreement with predictions from cosmological simulations studying S0 buildup. The assessment of these structures and the properties of the galaxies harboring them also indicates that the frequency of such rings increases with the mass of their hosts, that they feed mainly on the gas from the disks, that they are more short-lived in galaxies with ongoing star formation, and that the local environment does not play a relevant role in their formation. From the present analysis, we conclude that the presence of rings in the EW of the Hα emission line is a common phenomenon among fully formed S0s, possibly associated with annular disk resonances driven by weakly disruptive mergers preferentially involving the more massive representatives of these galaxies and their smaller and closer satellites.

ACKNOWLEDGMENTS: Financial support from the Spanish state agency MCIN/AEI/10.13039/501100011033 and ERDF through the grants PID2019-106027GB-C41, PID2019-106027GB-C43, PRE2020-091838 and PID2020-115098RJ-I00.

1. Put a ring on it: the origin of star-forming rings in S0 galaxies. The closer to the PS in the PC1–PC2 plane, the weaker the activity (SF/AGN) [2]. Vectorization and quenching classification: profiles are converted into vectors; galaxies are then classified according to the orientation of these vectors w.r.t. the PS ridge. This orientation establishes how quenching proceeds in galaxies.

2. Uniform SF. Star-forming rings are common in S0 galaxies (20% to 40%) and are almost exclusively found in galaxies with inside-out profiles.

3. Implications: → the gas hinders the formation of rings, → the captured satellites must be tiny galaxies. The ring fraction increases with stellar mass. No dependence on environment.
4 REFERENCES: [1] Deeley, S., Drinkwater, M. J., Sweet, S. M., et al. 2021, MNRAS, 508, 895, 2021MNRAS.508..895D; [2]
poster
Electron Shower Energy Reconstruction for the ProtoDUNE experiment
Linhui Gu, for the DUNE collaboration

Correlation between electron shower energy and beam momentum. The correlation between electron shower energy and beam momentum can be fitted as a straight line, E_shower = p0 + p1 · E_beam, which can be influenced by the missing energy. *Preliminary result (MC): p0 = -0.0164 ± 0.001547, p1 = 0.9416 ± 0.0008942. The calibration data with 7 GeV beam momentum are removed due to their poor electron lifetime. [Plot legend: runs with 7 GeV beam vs. runs with other beam energies]

2. Analysis goal. We are interested in the determination of the electron shower energy and its resolution. [Figure: a typical electron shower event in the ProtoDUNE detector]

References: [1] B. Abi, R. Acciarri, M. A. Acero, M. Adamowski, C. Adams, et al., The Single-Phase ProtoDUNE Technical Design Report, 2017. [2] R. Acciarri, C. Adams, R. An, A. Aparicio, S. Aponte, J. Asaadi, et al., Design and Construction of the MicroBooNE Detector, 2016.

Detection principle. A neutrino interacts with the argon in the TPC, creating secondary particles. Charged particles in the TPC excite and ionize the argon along their travel through the detector; about 6000 electrons per mm are released by the ionization of argon. These electrons are attracted to the anode plane, where they are detected, while the much heavier positive ions move relatively slowly in this field. Together with the scintillation light detected by a photon detection system, a 3D reconstruction of the interaction can be performed to identify the particles. [Figure: operating principle of liquid argon time projection chambers [2]] The beam of tertiary particles was designed to cover the expected spectrum of particles from neutrino interactions in the DUNE detectors.
3. Charge calculation. Get shower charge → electron lifetime correction → missing energy correction → energy-charge conversion → get shower energy. Calibration runs with beam energies 0.3, 0.5, 1, 2, 3 and 6 GeV. A shift of ∼0.1 GeV is caused by missing energy for 1 GeV beam momentum events. *Preliminary result. Beam momentum: the beam momentum resolution was measured as a function of momentum; it directly contributes to the shower charge resolution as the constant parameter. *Preliminary result.

4. Resolution calculation. Energy resolution vs beam momentum:
σ_E / E_shower = sqrt( a² + (b / √E_beam)² + (c / E_beam)² )
• a = 0.066 ± 0.0097: constant term, from the beam momentum.
• b = 0.023 ± 0.0639 GeV: stochastic term (∝ 1/√E_shower), contributed by missing energy; represents the fluctuation in the ionization process.
• c = 0.050 ± 0.0099 GeV: noise term; fluctuation in upstream energy loss.
*Preliminary result.

Electron lifetime correction. Impurities in liquid argon can capture ionization electrons, reducing the signal read on the wires, Q_coll. This charge loss can be corrected as a function of the hit drift time t and the free electron lifetime τ_e of the TPC: Q_corr = Q_coll · exp(t / τ_e). A larger electron lifetime means better purity, so we impose a goodness requirement: τ_e > 10 ms.

Missing energy correction. Missing energy is caused by the ADC charge threshold (around 100 keV/tick), which directly affects the charge calculation of the electron shower. In this work we estimate the missing energy based on MC simulated data.

1. Introduction: the ProtoDUNE experiment. [Figure: structure of the ProtoDUNE-SP detector [1]; ProtoDUNE detector at CERN] There are two ProtoDUNE detectors, ProtoDUNE-SP and ProtoDUNE-DP (now VD), which are horizontal- and vertical-drift TPCs respectively. • The ProtoDUNE experiment is a full engineering prototype of the DUNE far detector and took test-beam data at CERN. • ProtoDUNE is the largest liquid argon time projection chamber (LArTPC); it contains about 770 tons of liquid argon, with 420 tons in the active volume.
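The lifetime correction and the resolution model can be tied together in a short sketch. The coefficient values a, b, c come from the poster's preliminary fits; the drift time, lifetime and collected charge below are illustrative.

```python
import math

def lifetime_correction(q_coll, t_drift_ms, tau_e_ms):
    """Correct collected charge for electron capture by impurities:
    Q_corr = Q_coll * exp(t / tau_e)."""
    return q_coll * math.exp(t_drift_ms / tau_e_ms)

def energy_resolution(e_beam, a=0.066, b=0.023, c=0.050):
    """sigma_E / E_shower = sqrt(a^2 + (b/sqrt(E))^2 + (c/E)^2),
    using the preliminary a, b, c values quoted on the poster."""
    return math.sqrt(a ** 2 + (b / math.sqrt(e_beam)) ** 2 + (c / e_beam) ** 2)

# Purity requirement from the poster: electron lifetime > 10 ms.
tau_e = 12.0                              # ms, passes the goodness cut
q_corr = lifetime_correction(1000.0, t_drift_ms=2.0, tau_e_ms=tau_e)

# Resolution improves with beam energy as the stochastic and noise
# terms shrink, approaching the constant term a at high energy:
for e in (0.3, 1.0, 6.0):
    print(f"E = {e} GeV -> sigma/E = {energy_resolution(e):.3f}")
```

At 1 GeV the three terms contribute comparably; by 6 GeV the resolution is dominated by the constant term from the beam momentum.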
• These engineering prototypes will test DUNE detector components and help establish construction procedures. ProtoDUNE-SP ProtoDUNE-DP(now VD) Average missing energy per event ∼56 MeV/event (1 GeV) Beam momentum 1 GeV
poster
Mariano Lazzeri1; Ana Clara Ventura2,3 (zzmariano17@gmail.com)
1 Facultad de Ciencias de la Educación (FACE), Universidad Nacional del Comahue (UNCo)
2 "Estudios Culturales y Cognitivos", GV al IPEHCS (CONICET-UNCo: sede CRUB)
3 Centro Regional Universitario Bariloche, Universidad Nacional del Comahue (UNCo)

OBJECTIVE: I. To analyze the metacognitive processes deployed by first-grade children while producing a text at the start of the school year and while revising it six months later, in order to compare the processes deployed by the same child in both instances (production and revision).

Sixty first-grade children participated (SD = 3 months; min = 5.3; max = 6.9 years). Together with the teachers, 16 children showing different levels of academic performance were selected.

Instrument: Identity Card (Dockrell & Teubal, 2007).

Procedure: Mixed analyses: fine-grained, detailed analysis of the items of all 32 identity cards; calculation of frequency distributions and a chi-square test.

METACOGNITIVE PROCESSES DEPLOYED IN THE PRODUCTION AND REVISION PHASES
*Note: the abbreviations stand for CP (knowledge of persons); CT (knowledge of the task); CEs (knowledge of strategies); Pl (planning); M (monitoring); C (control); E (evaluation); ME (emotional-motivational monitoring); CEm (emotional-motivational control).

Our results show that first-grade children can revise a text of their own authorship, given that we found: I. The full range of metacognitive processes in both the production phase (FP) and the revision phase (FR). II. Significant differences in the deployment of the following processes in FR relative to FP: planning, monitoring, control, evaluation, and emotional/motivational monitoring and control. In line with the above, we highlight the relevance of incorporating revision activities into the first years of schooling.
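The chi-square comparison of process frequencies between phases can be sketched with a Pearson test on a contingency table. The counts below are entirely hypothetical; the poster's real frequencies are not reproduced.

```python
import numpy as np

def chi_square(table):
    """Pearson chi-square statistic for an observed contingency table."""
    obs = np.asarray(table, dtype=float)
    # Expected counts under independence: outer product of margins / total.
    expected = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / obs.sum()
    return ((obs - expected) ** 2 / expected).sum()

# Hypothetical counts: how often "planning" was vs. was not observed
# in the production (FP) and revision (FR) phases (illustrative only).
observed = [[4, 12],   # FP: planning present / absent
            [12, 4]]   # FR: planning present / absent
stat = chi_square(observed)
# Critical value for df = 1 at alpha = 0.05 is 3.841.
print(f"chi2 = {stat:.2f}, significant: {stat > 3.841}")
```

With these toy counts the statistic is 8.0, above the 3.841 critical value, i.e. the kind of significant FP-vs-FR difference the poster reports.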
Future directions: we plan to analyze how first-grade children revise a text before starting to write it (Calil & Myhill, 2020).

References:
Calil, E., & Myhill, D. (2020). Dialogue, erasure and spontaneous comments during textual composition: What students' metalinguistic talk reveals about newly-literate writers' understanding of revision. Linguistics and Education, 60, 1-15. https://doi.org/10.1016/j.linged.2020.100875
Dockrell, J. E., & Teubal, E. (2007). Distinguishing numeracy from literacy: evidence from children's early notations. In E. Teubal, J. E. Dockrell, & L. Tolchinsky (Eds.), Notational knowledge: Developmental and historical perspectives (pp. 113-134). Rotterdam: Sense Publishers.
Whitebread, D., Coltman, P., Pino Pasternak, D., Sangster, C., Grau, V., Bingham, S., Almeqdad, Q., & Demetriou, D. (2009). The development of two observational tools for assessing metacognition and self-regulated learning in young children. Metacognition and Learning, 4, 63-85. https://doi.org/10.1007/s11409-008-9033-1

[Figure labels: Production Phase (FP) / Revision Phase (FR)]

METACOGNITIVE PROCESSES OF FIRST-GRADE CHILDREN DURING THE PRODUCTION AND REVISION OF A TEXT ABOUT THEMSELVES

Study carried out under an EVC-CIN scholarship (Res. CE No. 1518/20) held by the first author, supervised by the second author.

PROBLEM: Do first-grade children deploy different metacognitive processes when revising a text than when producing it? What differences can be identified?

METACOGNITION (Whitebread et al., 2009). Areas / processes / examples (from this corpus):
METACOGNITIVE KNOWLEDGE
- PERSONS. FP: "I don't know how to read much, but I'm learning at home" (Kayla) / FR: "I know that Castek starts with 'A'" (Lorenzo).
- TASKS. FP: "Like Mom's [refers to the letter M]… it's very hard" (Rocío M.) / FR: "The G in gato [cat] and the one in Gian Luca" (Gian Luca).
- STRATEGIES. FP: "If you don't remember, another day you tell Mom and then you write it" (Jassiel) / FR: "The head, I have to erase it because I made it too small" (Jassiel).
METACOGNITIVE REGULATION
- PLANNING. FP: "You tell me how I'm going to do the whole Bariloche... You can help me by making
poster
Phase change insulation for energy efficiency based on wax-halloysite composites
Yafei Zhao, Suvhashis Thapa, Leland Weiss, Yuri Lvov
Institute for Micromanufacturing, Louisiana Tech University, Ruston, LA 71270

Abstract: Wax can be used as a phase change material in solar energy storage but has low thermal conductivity and cannot sustain its shape at higher temperatures (above 55 °C). Introducing 50% halloysite clay nanotubes into wax yields a stable and homogeneous phase change composite with a thermal conductivity of 0.36 W m-1 K-1 and no leaking until 70 °C (preserving layer shape above the original wax melting point). The wax/halloysite/graphite (45/45/10%) composite showed a six-fold conductivity increase to 1.4 W m-1 K-1 compared to pure wax and had no liquid wax leakage until 81 °C. The wax/halloysite/graphite/carbon nanotube (45/45/5/10%) composite shows a thermal conductivity of 0.85 W m-1 K-1 while maintaining its original shape until 91 °C. Vectorial thermal energy transfer in double layers of different phase change materials was demonstrated: heat fluxes in the opposite directions differed by 25%. This variance in layer conductivity allows for smart building-roof insulators with increased absorption during hot weather but limited thermal losses during periods of cooler temperatures. The new wax-nanoclay composite is a promising heat storage material due to its good heat capacity, high thermal conductivity and ability to preserve its shape during wax melting.

Conclusions
(1) Halloysite can serve as a support to maintain the wax shape during heat absorption and phase change, even when the temperature of the PCM exceeds the melting point.
(2) Halloysite can also improve the thermal conductivity of wax. The shape-stabilized temperature and thermal conductivity can be further enhanced by adding graphite or carbon nanotubes to the wax/halloysite composite.
(3) More than one phase change material, with different thermal conductivities and melting points, can be combined to improve the performance of phase change storage; this design has a potential application in building roofs.

Acknowledgements: Support from the NSF-1029147 and NSF-EPS1003897 grants is acknowledged. The authors are thankful to Applied Minerals Inc., USA, for providing halloysite samples.

Wax and halloysite nanotubes. Wax as a phase change material (PCM). Advantages: high heat of fusion (150-250 J/g), appropriate melting temperature (51-53 °C), chemically inert, stable, noncorrosive and nontoxic. Disadvantage: low thermal conductivity (~0.25 W/mK).

For further information contact ylvov@latech.edu

Experimental setup: The composites were prepared by a reflux blending and impregnation method. Wax was melted at 80 °C and dissolved in 80 mL of isooctane to form a homogeneous solution under continuous stirring for 10 min. Halloysite (and graphite and carbon nanotubes) was added to the above solution, and the resulting mixture was then stirred and refluxed at 80 °C for 3 hours. Finally, the composites were sealed in a vacuum drier at room temperature for 3 days to evaporate the isooctane solvent from the mixture.

Results and discussion. Table 2: Thermal conductivity of composites. Figure 3: Images of pure wax (a), wax/halloysite (b), wax/graphite (c), wax/halloysite/graphite (d) and wax/halloysite/graphite/carbon nanotubes at different temperatures.

References: [1] Y.M. Lvov, D.G. Shchukin, H. Möhwald, R.R. Price, ACS Nano 2 (2008), 814-820. [2] E. Abdullayev, K. Sakakibara, K. Okamoto, W. Wei, K. Ariga, Y. Lvov, ACS Appl. Mater. Interfaces 3 (2011), 4040-4046. [3] B. Zhang, J. Zhang, et al., Solar Energy 86 (2012), 1142-1148. [4] M. Xiao, B. Feng, K. Gong, Energy Convers. Manag. 43 (2002), 103-108. [5] H. Kao, M. Li, X. Lv, J. Tan, J. Therm. Anal. Calorim. 107 (2012), 299-303.
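The practical effect of the conductivity gains can be illustrated with Fourier's law for steady-state conduction through a flat layer, q = k·ΔT/d, using the conductivities reported on the poster. The layer thickness and temperature difference below are hypothetical.

```python
# Steady-state heat flux through a flat layer: q = k * dT / d  (W/m^2).
def heat_flux(k, d_t, thickness):
    return k * d_t / thickness

# Thermal conductivities from the poster, in W/(m K).
layers = {
    "pure wax":                     0.25,
    "wax/halloysite (50/50)":       0.36,
    "wax/halloysite/graphite":      1.40,
    "wax/halloysite/graphite/CNT":  0.85,
}

# Hypothetical roof layer: 20 mm thick with 15 K across it.
for name, k in layers.items():
    print(f"{name}: {heat_flux(k, 15.0, 0.020):.0f} W/m^2")
```

For the same geometry, the wax/halloysite/graphite layer transfers 1.40/0.25 = 5.6 times the heat of pure wax, which is the "six-fold" improvement the abstract refers to.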
W:HNT:G:CNT (w%) | Melting transition T (°C) | Melting point T (°C) | Melting latent heat ΔH (J/g) | Total latent heat ΔH (J/g)
100:0:0:0  | 47.5 | 52.3 | 140.5 | 171.0
50:50:0:0  | 47.4 | 53.3 |  89.9 | 107.7
45:45:10:0 | 47.1 | 52.9 |  57.4 |  68.4
45:45:5:
poster
Evidence for Early Formation of the Most Compact Quiescent Galaxies at High Redshift Using Deep Hubble Grism Data Vince Estrada-Carpenter1 NASA FINESST Future Investigator Casey Papovich1, Iva Momcheva2, Gabe Brammer3, Raymond Simons2 and the CLEAR Collaboration The origin of the correlations between mass, morphology, and formation history in galaxies is difficult to define, primarily due to uncertainties in galaxy star-formation histories, which are better constrained for higher redshift galaxies. Here we use a forward modeling technique on deep HST grism data (a method which can be applied to grism data from the Nancy Grace Roman Space Telescope) from the CLEAR (CANDELS Lyα Emission at Reionization) survey. We derive constraints on the formation and quenching timescales of quiescent galaxies at 0.7 < z < 2.5 using “non-parametric” star-formation histories. The galaxy formation redshifts, z50 (defined as the point where they formed 50% of their stellar mass), range from z50 ~ 2 (shortly prior to the observed epoch) up to z50 ~ 5-8. We find that early formation redshifts are correlated with high stellar-mass surface densities, log(Σ1) > 10.25, where Σ1 is the stellar mass within 1 proper kpc. Quiescent galaxies with the highest stellar-mass surface density, log(Σ1) > 10.25, show a minimum formation redshift: all such objects in our sample have z50 > 2.9. Quiescent galaxies with lower surface density show a range of formation epochs (z50 ~ 1.5 - 8), implying these galaxies experienced a range of formation and assembly histories. We argue that the surface density threshold log(Σ1) > 10.25 uniquely identifies galaxies that formed in the first few Gyr after the big bang, and discuss the implications this has for galaxy formation models. 
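The formation measure z50 (the epoch at which half the stellar mass was in place) can be sketched from a non-parametric SFH as the time where the cumulative formed mass crosses 50%. The SFH below is a toy early-peaked history in arbitrary units; converting the resulting t50 to a redshift z50 additionally requires a cosmology, which is omitted here.

```python
import numpy as np

def t50(t_gyr, sfr):
    """Time at which the cumulative mass formed by a star-formation
    history SFR(t) reaches 50% of the final stellar mass (trapezoid
    integration, linear interpolation between grid points)."""
    t = np.asarray(t_gyr, dtype=float)
    s = np.asarray(sfr, dtype=float)
    mass = np.concatenate([[0.0],
                           np.cumsum(0.5 * (s[1:] + s[:-1]) * np.diff(t))])
    return float(np.interp(0.5 * mass[-1], mass, t))

# Toy early-peaked SFH (arbitrary units) on a grid of cosmic time (Gyr):
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
sfr = np.array([0.0, 80.0, 40.0, 15.0, 5.0, 1.0, 0.0])
print(f"t50 = {t50(t, sfr):.2f} Gyr after the big bang")
```

An SFH that peaks this early reaches half its final mass well before 1 Gyr of cosmic time, which in a standard cosmology would correspond to a high formation redshift, the regime the most compact galaxies in the poster occupy.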
Abstract / Acknowledgments / References

VEC acknowledges support from NASA Headquarters under the Future Investigators in NASA Earth and Space Science and Technology (FINESST) award 19-ASTRO19-0122, as well as support from the Hagler Institute for Advanced Study at Texas A&M University. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program #14227. Support for program #14227 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.

We Select Quiescent Galaxies From CLEAR
● UVJ-selected quiescent galaxies at 0.6 < z < 3.5
● Selection criteria from Whitaker et al. 2011 prevent the selection of dusty star-forming galaxies
● Selected ~100 massive quiescent galaxies

The Most Compact Galaxies Formed at High Redshift
● The main result of this work shows that galaxies with higher Σ1 measurements have higher formation redshifts
● The most compact galaxies all have z50 > 2.9, creating an envelope at the highest Σ1 and suggesting that the most compact galaxies cannot form at lower redshifts
● The figures on the right show that this relationship rises monotonically while the standard deviation drops
● These results suggest that the most compact galaxies get their compact morphology from having formed in the early universe, when the universe was more dense

Our SFHs Span Many Shapes (see also: Extended Galaxies Have Extended Star-Formation)
● Data: WFC3 G102 (CLEAR), WFC3 G141 (3D-HST, Momcheva et al. 2016), photometry (3D-HST catalog, Skelton 2014)
● Fit all data simultaneously using the forward-modeling technique outlined in Estrada-Carpenter et al. 2019 and non-parametric star-formation histories (SFHs) from Leja et al. 2019
● SFHs (bottom panel in each subplot) span many shapes
○ Use SFHs to measure z50 (formation redshift: when the galaxy formed half its mass)

We Quantify "Compactness" with Σ1
● Use Σ1 (stellar-mass surface density within 1 pkpc) to quantify how compact a galaxy is
● Being a in
poster
Building an ontology of logic definitions for groups of biological organisms to enable data integration Gaurav Vaidya1, Hilmar Lapp2, Nico Cellinese1 1 University of Florida and Florida Museum of Natural History 2 Duke University All organisms are related to each other: an example with alligators Biologists visualize hypotheses of how organisms are related to each other using phylogenetic trees. For example, the following phylogenetic tree includes the hypothesis that alligators (Alligatoroidea) and crocodiles (Crocodylidae) are more closely related to each other than either are to gavials (Gavialis gangeticus). Fig 3. Lineages on the Open Tree of Life synthetic phylogenetic tree as visualized in [6]. Acknowledgments The Phyloreferencing project is funded by the US NSF through collaborative grants DBI- 1458484 (HL) and DBI-1458604 (NC). GV’s attendance of US2TS was funded by the US2TS Travel Grant. References 1.Brochu (2003) Ann Rev Earth Plan Sci 2003 31:1, 357-397. https://doi.org/10.1146/annurev.earth.31.100901.141308 2.Prosdocimi et al. (2009) Evol Bioinform Online. 5:47–66. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2747124/ 3.de Queiroz and Gauthier (1990) Syst Biol 39(4):307–322. https://doi.org/10.2307/2992353 4.Carral et al. (2017) Arxiv 1710.05096 https://arxiv.org/abs/1710.05096 5.Musen, M.A.ACM SIG AI 1(4) http://dx.doi.org/10.1145/2557001.25757003 6.Hinchliff et al. (2015) Proc Nat Acad Sci 112.41 (2015): 12764-12769. https://doi.org/10.1073/pnas.1423041112 Finding a clade on the Open Tree of Life The Open Tree of Life [6] synthesizes information from 987 phylogenies and several taxonomic resources to provide a draft hypothesis of evolutionary relationships between 2,640,941 taxa. Resolving clade definitions on the Open Tree of Life would facilitate navigation. 
Phylogenetic clade definitions identify groups of organisms unambiguously Phylogenetic clade definitions [3] define groups in terms of their ancestral relationships by specifying biological groups that must be either included in or excluded from the clade on a particular phylogenetic tree: Fig 1. Phylogenetic tree of alligators, crocodiles and caimans from Brochu 2003 [1]. Alligatorinae Alligator mississippiensis and all crocodylians closer to it than to Caiman crocodilus. Brevirostres Last common ancestor of Alligator mississippiensis and Crocodylus niloticus and all of its descendents. How can we translate a definition based on ancestral relationships into an OWL restriction expression? We can use OWL Property Chains as used in [4] to create clade definitions in OWL to traverse phylogenetic trees: - includes TU: has Descendant o represents TU - excludes TU: has Sibling o has Descendant o represents TU Node Alligator mississippiensis represents TU some (Has Name some (Zoological name and Has Scientific Name value "Alligator mississippiensis")) Node Crocodylus niloticus represents TU some (Has Name some (Zoological name and Has Scientific Name value "Crocodylus niloticus")) Phylogenetic trees can be represented in OWL using CDAO The Comparative Data Analysis Ontology (CDAO) [2] allows phylogenetic trees to be represented as an ontology in the Web Ontology Language (OWL). Alligator mississippiensis Caiman crocodilus Crocodylus niloticus Gavialis gangeticus Alligatoridae Brevirostres Crocodylia has Child Transitive as has Descendant has Sibling Symmetrical Note that nodes on this phylogenetic tree represent biological groups (taxa). We can relate these nodes to taxonomic names using the represents TU property from CDAO [2]. 
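The property-chain semantics can be prototyped directly in Python before committing to an OWL reasoner: "includes TU" becomes a descendant-or-self check, and "excludes TU" walks through a sibling before that check. The sketch below encodes the Fig. 1 tree and resolves the Alligatorinae definition; it assumes, as the poster's reported matches imply, that a node also "includes" the TU it itself represents, and it conflates node names with TU names for brevity.

```python
# Minimal prototype of resolving a phyloreference on the Fig. 1 tree
# (Brochu 2003), encoded as a parent -> children map.
children = {
    "Crocodylia":    ["Brevirostres", "Gavialis gangeticus"],
    "Brevirostres":  ["Alligatoridae", "Crocodylus niloticus"],
    "Alligatoridae": ["Alligator mississippiensis", "Caiman crocodilus"],
}

def descendants_or_self(node):
    out = {node}
    for child in children.get(node, []):
        out |= descendants_or_self(child)
    return out

def includes_tu(node, tu):
    # includes TU: the node itself or some descendant represents the TU
    return tu in descendants_or_self(node)

def excludes_tu(node, tu):
    # excludes TU: has Sibling o has Descendant o represents TU
    siblings = [c for kids in children.values() if node in kids
                for c in kids if c != node]
    return any(includes_tu(s, tu) for s in siblings)

# Alligatorinae: includes Alligator mississippiensis
#                and excludes Caiman crocodilus.
all_nodes = descendants_or_self("Crocodylia")
alligatorinae = {n for n in all_nodes
                 if includes_tu(n, "Alligator mississippiensis")
                 and excludes_tu(n, "Caiman crocodilus")}
print(alligatorinae)  # -> {'Alligator mississippiensis'}
```

On this tree the intersection of the two restrictions resolves to the Alligator mississippiensis node alone, matching the definition "Alligator mississippiensis and all crocodylians closer to it than to Caiman crocodilus".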
Name: Alligatorinae
OWL restriction: (includes TU some Alligator mississippiensis) and (excludes TU some Caiman crocodilus)
Nodes matched: includes TU some Alligator mississippiensis matches Alligator mississippiensis, Alligatoridae, Brevirostres, Crocodylia; excludes TU some Caiman crocodilus matches Alligator mississippiensis, Crocodylus niloticus, Gavialis gangeticus
Name: Brevirostres
OWL restriction: has Child some (includes TU some Alligator mississippiensis
Nodes matched: Alligator mississippiensis, Alligatoridae, Breviro
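The property-chain semantics above can be sketched procedurally (a hypothetical illustration, not the authors' OWL tooling): a node satisfies `includes TU x` if some descendant represents x, and `excludes TU x` if a sibling's descendant represents x. The topology below follows Fig 1; treating "descendant" as descendant-or-self reproduces the matched-node sets in the table.

```python
# Sketch of phyloreference resolution over the Fig 1 tree (hypothetical helper code).
TREE = {
    "Crocodylia": ["Brevirostres", "Gavialis gangeticus"],
    "Brevirostres": ["Alligatoridae", "Crocodylus niloticus"],
    "Alligatoridae": ["Alligator mississippiensis", "Caiman crocodilus"],
}

def descendants_or_self(node):
    """All nodes reachable via has Child, plus the node itself."""
    out = {node}
    for child in TREE.get(node, []):
        out |= descendants_or_self(child)
    return out

def includes_tu(taxon):
    """Nodes with a descendant(-or-self) representing taxon (has Descendant o represents TU)."""
    all_nodes = set(TREE) | {c for cs in TREE.values() for c in cs}
    return {n for n in all_nodes if taxon in descendants_or_self(n)}

def excludes_tu(taxon):
    """Nodes with a sibling whose descendants include taxon (has Sibling o has Descendant o represents TU)."""
    out = set()
    for _parent, children in TREE.items():
        for c in children:
            if any(taxon in descendants_or_self(s) for s in children if s != c):
                out.add(c)
    return out

# Alligatorinae: Alligator mississippiensis and all crocodylians closer to it
# than to Caiman crocodilus = includes TU ∩ excludes TU.
alligatorinae = includes_tu("Alligator mississippiensis") & excludes_tu("Caiman crocodilus")
print(alligatorinae)  # {'Alligator mississippiensis'}
```

On this small tree the intersection resolves to the single node representing Alligator mississippiensis, matching the table's two component node sets.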
poster
Mycoparasitism does not correlate with changes in the plastid genome in the orchid Epipactis helleborine
Janice Valencia-D.¹, Kierstin Lipe¹, Kurt Neubig¹, W. Mark Whitten²
¹Plant Biology Department, Southern Illinois University, Carbondale, Illinois, 62901; ²Florida Museum of Natural History, University of Florida, P.O. Box 117800, Gainesville, FL 32611
[Figure: Maximum Likelihood phylogeny of tribe Neottieae with bootstrap support values (68–98; scale 0.004), including species of Epipactis, Neottia, Cephalanthera, Limodorum, Tropidia, Palmorchis and Diceratostele; the E. helleborine species complex is highlighted.]
Introduction
Epipactis helleborine (L.) Crantz is a mixotrophic orchid that can obtain its carbon supplies in different ways, ranging from autotrophy to mycoheterotrophy (Fig. 1). Full mycoheterotrophy (= mycoparasitism) is associated with a drastic reduction in the chloroplast (plastid) genome due to the loss of genes related to photosynthetic pathways. However, this has not been shown in mixotrophic species. To determine if achlorophyllous individuals of Epipactis helleborine have alterations in their photosynthesis-related genes, we sequenced and assembled plastid genomes for green and albino plants.
Figure 1. Jacquemyn & Merckx's continuum trophic model (2019). A. E. helleborine albino form, a full mycoheterotroph (Photo: Damon Tighe). B. E. helleborine green form, a partial mycoheterotroph (Photo: Roh Curtis).
1. Green and albino forms of E. helleborine were collected in Virginia, Giles County, Mountain Lake Biological Station.
DNA was extracted using a CTAB method (Doyle & Doyle, 1987), followed by a silica purification column step (Neubig et al., 2014). Sequencing was performed on an Illumina HiSeqX. The poorly assembled region at the trnC-ACA intron was improved by adding sequences from PCR.
2. Plastid genomes were assembled in Geneious 10.2.3. Annotations were made using CHLOROBOX and were manually curated.
3. A Maximum Likelihood analysis of the tribe Neottieae, to which Epipactis belongs, was performed using coding regions and rRNAs from our two samples, three unpublished orchid plastid genomes, and 23 more downloaded from GenBank. For the E. helleborine species complex, the full plastid genomes were compared to assess substitutions per site and hotspots.
• The plastid genome of both forms of E. helleborine consisted of a single, circular DNA sequence with the typical quadripartite structure (Fig. 2).
• The plastid genome of E. helleborine contains all gene regions expected in an autotrophic orchid, including complete and functional ndh genes, which are the first group of genes that tend to be lost in taxa that exhibit some transition to partial or full mycoheterotrophy.
• There was no evidence of mutations in coding regions in the plastid genome of the albino plant, which suggests that this condition is regulated by nuclear or mitochondrial genes in this species.
• Among the widespread E. helleborine, numerous varieties, subspecies and forms have been described due to the high plasticity in vegetative morphology and floral size and color. A broad taxonomic revision is needed to fully understand the morphological breadth of the species complex, considering the genetic variation.
• We identified 4 hotspots in the plastid genome and 141 sites that can be useful to identify phylogenetic groups inside the E. helleborine complex.
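The plastome comparison step (substitutions per site, hotspot detection) can be sketched as follows. This is a hypothetical toy illustration with made-up sequences, not the actual assemblies: slide a window along an alignment, count mismatches per ungapped site, and flag windows above a threshold as hotspots.

```python
def substitution_hotspots(seq_a, seq_b, window=10, threshold=0.2):
    """Flag alignment windows whose per-site substitution rate exceeds threshold.

    Toy sketch for aligned, equal-length sequences; gap sites ('-') are skipped.
    """
    assert len(seq_a) == len(seq_b)
    hotspots = []
    for start in range(0, len(seq_a) - window + 1, window):
        a, b = seq_a[start:start + window], seq_b[start:start + window]
        pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
        if not pairs:
            continue
        rate = sum(x != y for x, y in pairs) / len(pairs)
        if rate > threshold:
            hotspots.append((start, round(rate, 2)))
    return hotspots

# Hypothetical coding-region fragments: identical between the green and albino
# forms, mirroring the poster's finding of no mutations in coding regions.
green  = "ATGGCCTTAACGGATCCAATGCGTACGTTAG"
albino = "ATGGCCTTAACGGATCCAATGCGTACGTTAG"
print(substitution_hotspots(green, albino))  # [] -> no hotspot between the two forms
```

A divergent pair of sequences would instead yield `(window_start, rate)` tuples marking the hotspots.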
These results are relevant for population genetic studies and for determining evolutionary dynamics such as founder effects in the North American individuals, inbreeding, and fluct
poster
Development of the Self-Archiving System in the Social Science Japan Data Archive
Megumi Ikeda1, Nobutada Yokouchi1 & Satoshi Miwa1
1 Institute for Social Sciences, The University of Tokyo; ssjda@iss.u-tokyo.ac.jp
Visit our website!
Introduction
[Chart: Cumulative Accessible Data Sets (left scale), New Accessible Data Sets (left scale), and Usage Applications (right scale), 1998–2022.]
- The University of Tokyo, Institute of Social Science, Center for Social Research and Data Archives (CSRDA) created the Social Science Japan Data Archive (SSJDA) to support empirical research in social sciences in Japan and has been disseminating raw data since April 1998.
- Collection and storage have gone smoothly: over 2,400 data sets have been made publicly available.
- Our recent challenge is that we have been unable to keep up with the increasing volume of deposits. Therefore, SSJDA built a self-archiving system in February 2023.
Statistics of SSJDA
- Number of applications: 1,410 (75 applications from international researchers and graduate students around the world)
- The total number of data provisions is 23,800.
Self-Archiving System of the SSJDA
Data depositors → SSJDA staff (management of depositors' personal information; preparation of data for public use) → data users, via SSJDA Direct (online search system, online download system, online data analysis system).
User Account Page: registration and updating of user information, searching for data, applying for data usage, downloading data, reporting data usage; usage approval, usage counts, and messages from SSJDA.
Depositors Account Page (the shift to self-management): information on deposited data; workflow: search/apply → approve/provide → use data for one year → reminder to submit a data usage report → submit publications → link data and publication.
What can depositors do with the Self-Archiving System?
- Register raw data
- Select conditions for data use
- Create metadata
- Check the list of applications for use, etc.
Status of Self-Archiving System utilization
- Of the 32 data sets deposited, 21 have been self-archived (as of April 2023).
- Depositors commented on the "convenience" of the system.
Challenges of our Self-Archiving System
- The lack of depositor-created metadata is a serious problem: many depositors need more experience creating FAIR metadata, and metadata creation is time-consuming; only one metadata record has been created so far.
- A manual is already available, but we still need to show how to create metadata.
- Create incentives, such as shortening the time between deposit and release when depositors register metadata.
- Use AI to help create metadata.
- Many people still deposit data the previous way.
- More promotion is needed to make the self-archiving system known.
- We need to inform depositors about the benefits of the self-archiving system (e.g., no need to contact CSRDA, no need to submit a deposit form).
[Figures: image of the Self-Archiving System; overview of SSJDA]
poster
Synchrotron signatures from the cosmic-ray-driven dynamo
Dominik Wóltański, Mateusz Ogrodnik & Michał Hanasz
Centre for Astronomy, Nicolaus Copernicus University, Toruń, Poland
Abstract
We investigate synchrotron emission of the galactic interstellar medium with the aid of a new Cosmic Ray Energy Spectrum Module (CRESP) for modeling energy-dependent transport of CR electrons. The module solves the Fokker-Planck equation for a piece-wise power-law distribution function, with synchrotron and adiabatic cooling effects taken into account together with diffusive and advective propagation of CR electrons on an Eulerian grid. To demonstrate capabilities of the new module we perform numerical simulations of the CR electron spectrum in a galaxy similar to the Milky Way. The simulation results allow us to construct synthetic radio-maps of synchrotron radio-emission.
Motivation
Numerical studies of dynamical effects of cosmic rays in galaxies, including magnetic field amplification and galactic winds relying on a single-component propagation of CRs (Schlickeiser & Lerche, 1985), demonstrated formation of X-shaped magnetic fields in galactic halos (Hanasz, Wóltański & Kowalik, 2009) as well as magnetic arms (Beck, 2016; Kulpa-Dybeł et al., 2011). This simplified treatment of CRs does not, however, enable modelling of galactic synchrotron emission with precision sufficient to confront CR propagation models with the wealth of data coming from radio observations. In this contribution we demonstrate that coupling the MHD system of equations with the CRESP solver of the Fokker-Planck equation, within the PIERNIK code (see poster by Mateusz Ogrodnik et al.), makes it possible to model evolution of the CR electron spectrum, and thereby to construct synthetic radio-maps of polarized synchrotron emission from disk galaxies.
ISM model with CR protons and electrons
We assume that the magnetized galactic interstellar medium (ISM) is stratified by vertical gravity of the stellar and dark matter components. The initial distribution of interstellar gas and galactic gravity follow the model of the Milky Way by Ferriere (1998), while the regular initial magnetic field is assumed to be purely toroidal and its strength corresponds to pressure equilibrium with thermal gas. The basic characteristics of the model are similar to the global model of the CR-driven dynamo by Hanasz, Wóltański & Kowalik (2009), except that the initial magnetic field is currently set near energy equipartition with thermal gas. The thermal gas component is described by the standard set of MHD equations. We assume that supernovae (SNe) explode randomly in the disk and that 10% of SN explosion energy is converted to energy of CRs consisting of CR protons and electrons, while the thermal energy output from SNe is neglected. We assume that CR protons diffuse anisotropically along magnetic field lines and are dynamically coupled to the thermal gas. We assume an Eulerian grid of 384x384x192 cells spanning Lx × Ly × Lz = 76 kpc × 76 kpc × 38 kpc in the x, y, z directions, extended with the CR electron spectrum distributed over 16 bins spanning the energy range of 10–10^5 m_e c^2.
Acknowledgements
This work has been supported by the (Polish) National Science Centre through the grant No. 2015/19/ST9/02959. Calculations were carried out at the Academic Computer Centre in Gdańsk.
CR electron spectrum and synchrotron emission
We assume also that CR electrons are represented by a piece-wise power-law distribution function and are described by the Fokker-Planck equation (Miniati, 2001; see poster by Mateusz Ogrodnik et al.). We take into account advection, energy-dependent (D ∝ p^0.5), magnetic-field-aligned diffusion as well as adiabatic and synchrotron cooling effects in the dynamical evolution of CR electrons in the wind driven by CR protons.
We construct synthetic radio-maps of polarized synchrotron emission of CR electrons by integration of the Stokes parameters I, U and Q along the line of sight, with Faraday rotation taken into account
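The energy-loss side of this treatment can be illustrated with a toy sketch (hypothetical cooling coefficient and units, not the CRESP module itself): synchrotron losses scale as dE/dt = -b E², so high-energy bin edges cool much faster than low-energy ones and the spectrum steepens at its upper end.

```python
# Toy synchrotron-cooling sketch (illustrative only; CRESP solves the full
# Fokker-Planck equation for a piece-wise power-law over 16 energy bins).
B = 1.0e-5  # hypothetical cooling coefficient in dE/dt = -B * E**2

def cooled_energy(e0, t):
    """Analytic solution of dE/dt = -B E^2: E(t) = E0 / (1 + B*E0*t)."""
    return e0 / (1.0 + B * e0 * t)

# 16 bins spanning 10^0 .. 10^5 (energies in units of m_e c^2), as in the model setup.
edges = [10 ** (5 * i / 16) for i in range(17)]
t = 1.0e3
new_edges = [cooled_energy(e, t) for e in edges]

# High-energy edges cool far more than low-energy ones, steepening the spectrum:
print(f"{edges[-1]:.3g} -> {new_edges[-1]:.3g}")  # 1e+05 -> 99.9
print(f"{edges[0]:.3g} -> {new_edges[0]:.3g}")    # 1 -> 0.99
```

With these numbers the top bin edge loses three orders of magnitude in energy while the bottom edge is nearly unchanged, which is the qualitative behaviour the cooling terms impose on the piece-wise power-law spectrum.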
poster
Presented at the Eighth International Congress on Peer Review and Scientific Publication (September 10–12, 2017; Chicago, USA)
Abstract
Objective: The Guidance for Developers of Health Research Reporting Guidelines recommends multidisciplinary stakeholder involvement, transparent and complete reporting, and updating guidelines based on feedback. Developers are accountable for stakeholder engagement, but how broad and meaningful is such engagement? Our objective was to provide empirical feedback to developers by investigating (1) the involvement of those ultimately affected by guidelines (eg, patients and carers) and regular end users of guidelines (eg, publication professionals), and (2) the transparency and completeness of reporting stakeholder involvement.
Design: For this prospective study, conducted from September 2016 to January 2017, we included every reporting guideline for the main study types, as listed on the EQUATOR Network website. We pilot-tested a standardized data collection spreadsheet to extract data from the corresponding guideline publications. We quantified patient, carer, and publication professional involvement and used statisticians (listed as stakeholders in the Guidelines) as a control group. We assessed reporting transparency and completeness using the AGREE Reporting Checklist for documenting stakeholder involvement. For qualitative insights, we interviewed leaders from nonprofit, international, patient advocacy (International Alliance of Patients' Organizations [IAPO]) and publication professional (Global Alliance of Publication Professionals [GAPP]) organizations.
Results: Of the 33 guideline publications, the mean number of authors was 9 (SD 5.7; min 3, max 30; median 7; IQR 5-11) and the mean number of working group members was 45 (SD 38.4; min 5, max 147; median 30; IQR 23-43). Statisticians were authors for 24% (8/33) of the publications and were working group members for 15% (5/33).
Patients, carers, and publication professionals were rarely identified, either as authors (0, 0, and 0, respectively) or working group members (0, 1 [3%], and 0, respectively). Reporting stakeholder involvement was deficient (eg, for statistician involvement, only 25% of publications met AGREE Recommendations). Leaders from IAPO and GAPP were not aware of having been invited to participate in developing guidelines, but thought that their stakeholders could provide unique and important insights. They encourage guideline developers to contact them to facilitate meaningful involvement. Conclusions: Guideline developers have rarely involved stakeholders affected by guidelines (patients, carers) or those regularly using guidelines (publication professionals) in the development process. The involvement of these key stakeholders could enhance the credibility, dissemination, and use of guidelines. If patients, carers, and publication professionals were represented by other stakeholders (which is not ideal given potential conflicts of interest), this was not documented; readers do not know who represented whom. The transparency and completeness of reporting of stakeholder involvement should be improved. Transparency and Completeness in the Reporting of Stakeholder Involvement in the Development and Reporting of Research Reporting Guidelines Karen L. Woolley PhD CMPP,1-3 Serina Stretton PhD CMPP,4 Lauri Arnstein MBBS5 1ProScribe KK Envision Pharma Group, Tokyo, Japan; 2University of Queensland, Brisbane, Queensland, Australia; 3University of the Sunshine Coast, Maroochydore DC, Queensland, Australia (karen.woolley@envisionpharmagroup.com); 4ProScribe Envision Pharma Group, Sydney, New South Wales, Australia; 5Evidence Envision Pharma Group, London, UK Acknowledgments and disclosures • No external funds were used for this research study. 
All authors are employees and shareholders of Envision Pharma Group and members of not-for-profit associations supporting ethical publication practices. KLW is a shareholder of
poster
Parthenocissus quinquefolia ("Viță de Canada", Virginia creeper)
Family Vitaceae, Class Magnoliopsida
[Calendar figure: ANNUAL BIOLOGICAL CYCLE, months I–XII (Jan–Dec)]
HOW DO WE RECOGNIZE IT?
Flowers: arranged in compound inflorescences, on pedicels 3–7 cm long.
Leaves: palmately compound, with 5 elliptic to oblong leaflets 5–12 cm long, acuminate, with a cuneate base, remotely and sharply serrate (toothed margins), glossy dark green on the upper surface and lighter and glossy on the lower one.
Fruit: spherical bluish-black berries (0.8 cm in diameter), each containing 3–4 cordate (heart-shaped) seeds.
Distinguishing features:
- has aerial roots;
- tendrils with 5–12 branches ending in adhesive discs;
- leaves glaucous (bluish-green) on the lower surface;
- flowers grouped in a terminal panicle.
SIMILAR-LOOKING SPECIES
Parthenocissus inserta: a liana with tendrils having 3–5 twisted, purplish branches, usually without adhesive discs.
Height: <20 m. [Figure labels: Habitat, Fruits, Leaf (6–18 cm, 5 leaflets), Flower, Inflorescences]
poster
The Perseverance rover in the research-track baccalaureate ("Bachillerato de Investigación")
Ordóñez-Etxeberria, I., Planetario de Pamplona (i.ordonez@pamplonetario.org)
1. Project Objectives
In June 2021, a collaboration agreement was formalized between the Pamplona Planetarium and several secondary schools (IES) in Navarre with the aim of promoting the "Bachillerato de Investigación" (research-track baccalaureate). This educational initiative involved the IES of Barañáin, Plaza de la Cruz, Ribera del Arga and Valle del Ebro, offering students an educational option oriented toward deepening their command of research methods and the analysis of scientific problems. The project focused on building a replica of the Perseverance rover and on investigating the Martian environment, following a working methodology inspired by NASA.
2. Project Development
During the 2021/2022 school year, twelve students from the four schools took part in building the rover and in Mars-related investigations. These students, evenly split between boys and girls, met monthly to share progress and tackle obstacles in fabricating the parts assigned to each school. They also explored topics such as meteorology, geology, impact craters and dust storms on Mars, using data from space missions and public repositories of the Planetary Data System (PDS).
3. Continuation and New Phases of the Project
The project continued during the 2022/2023 school year, with the students advancing their investigations and defending their projects at the start of the year. A new phase of the project was launched, centred on implementing a robotic arm for the rover. This phase involved four students from IES Plaza de la Cruz and IES Barañáin, who will complete the technical and scientific part of the project in 2024.
For the 2023/2024 school year, the focus shifted to Martian meteorology, with a proposal to build weather stations that will allow data from Earth and Mars to be compared.
4. Impact and Recognition
The impact of the research-track baccalaureate has been significant, expanding with the participation of three new schools interested in developing projects together with the Pamplona Planetarium. This expansion has increased the number of students involved and fostered a collaborative working dynamic based on monthly meetings. These sessions allow students to share the rover construction process and the progress of their scientific projects, promoting the exchange of ideas and mutual learning. Beyond these internal achievements, the project has been recognized at various congresses and competitions, notably with awards and participations at Tecnociencia 2023 and the Certamen Jóvenes Investigadores 2023. The project has been honoured for its innovative work, demonstrating the programme's success and its potential to foster scientific thinking and research skills among young people.
Figure 1. Construction of the Perseverance rover replica.
Figure 2. Rendering of the rover model on the Martian surface.
Figure 3. Replica of the Perseverance rover built in the project, on display at the Pamplona Planetarium.
poster
Ruprecht-Karls-Universität Heidelberg — Heidelberg Graduate School of Mathematical and Computational Methods for the Sciences — Graduiertenkolleg 1114 — Heidelberg Collaboratory for Image Processing — Institute of Environmental Physics
Novel Technique for 3D-Space Visualization of Concentration Fields of Air-Water Gas Transfer
Darya Trofimova, Christine Kräuter and Bernd Jähne
Heidelberg Collaboratory for Image Processing, Institute of Environmental Physics, University of Heidelberg, Germany
Measuring Technique
[Figure: sketch of the air and water phases with NH3/NH4+ concentration profiles, fluorescence layer and reference depths z*, z_nl; reaction: NH3(aq) + H3O+ <-> NH4+ + H2O]
1D Measurements: Experimental Set-up
Wind-wave facility: length = 1.75 m, width = 25 cm, depth = 20 cm, water volume ≈ 22 L, air volume ≈ 220 L.
Optical set-up: diode laser (wavelength 445 nm) with Galileo beam expander; air-sided and water-sided Basler acA2500-14gm cameras viewing the fluorescent light from Pyranine through the glass window of the facility (2592×16 AOI, 2.2×2.2 µm pixel size, 350 Hz frame rate); Scheimpflug optics with magnification 0.43 on the air side and 0.32 on the water side.
Fig. 2. Sketch of the LIF experimental set-up for 1D measurements.
1D Measurements: Results
Fig. 4. Experimental profiles for different Pyranine concentrations.
Fig. 5. Experimental profiles for different ammonia concentrations with fitted functions that comply with the small-eddy model.
3D Measurements: Experimental Set-up
Three sCMOS pco.edge cameras (left, middle, right) view the LED-illuminated fluorescent water surface over an area of 1.4 m × 1.0 m.
3D Measurements: Results
Conditions: free stream velocity 1.8 m/s with friction velocity in water 3.7 cm/s; 2.2 m/s with 5.5 cm/s; 2.6 m/s with 6.4 cm/s.
Fig. 3. Time-depth images of the temporal fluctuations of ammonia in the water. The size of the vertical coordinate is 9.25 mm and of the horizontal coordinate 13.75 s.
Fig. 1. Sketch of NH3 invasion (ammonia flux) into the acid water for low and high concentrations of NH3.
The main objective of the modified LIF technique is to obtain a simplified shape of the concentration fields of the observed molecules. This can be achieved by demanding that the vertical concentration profiles have a steep decay at a certain depth, resulting in binary fields.
Fig. 6. Sketch of the LIF experimental set-up for 3D measurements.
Fig. 7. Example of estimated disparity maps. Lower disparity values correspond to larger distances. The images are 0.6 s apart. A wave-breaking event is visible.
Large annular Aeolotron facility: height = 2.41 m, width = 60 cm, water volume ≈ 18000 L, air volume ≈ 25000 L.
Optical set-up: blue high-power LEDs (wavelength 455 nm); sCMOS pco.edge cameras, 2160×2560 resolution, 6.5×6.5 µm pixel size, 100 Hz frame rate.
Full camera calibration: estimation of intrinsic and extrinsic parameters for each camera.
Dense reconstruction: every image was rectified to compute the disparity map using images from the left and right cameras. A block matching algorithm is used to find correspondences between the images.
Conclusions
- The experimental technique for mass boundary layer visualization was verified with measurements of concentration profiles at the linear wind-wave facility with high spatial and temporal resolution.
- The binary representation of concentration fields can be achieved with a Pyranine concentration of 10^-5 mol/L and an initial pH value of 4. By varying ammonia concentrations in the air, the fraction of the mass boundary layer can be investigated.
- The technique allows observation of binary concentration fields at larger spatial scales. The invasion of ammonia was viewed with a multiple-camera set-up from underneath the facility. The third dimension was reconstructed using a stereo algorithm, completing the picture of gas exchange through the air-water interface in four dimensions.
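The dense-reconstruction step can be illustrated with a minimal sketch (a hypothetical 1-D block-matching toy, not the production pipeline): after rectification, corresponding points lie on the same image row, so for each block in the left image we search along the row of the right image for the offset with the smallest sum of absolute differences (SAD). That offset is the disparity, and lower disparity means larger distance.

```python
# Minimal 1-D SAD block matching on a single rectified row (illustrative only;
# the poster's pipeline rectifies full images and matches 2-D blocks).
def disparity_for_row(left_row, right_row, block=3, max_disp=8):
    """Return one disparity per block position: the offset minimizing the SAD."""
    disparities = []
    for x in range(len(left_row) - block + 1):
        patch = left_row[x:x + block]
        best_d, best_cost = 0, float("inf")
        # In a standard stereo geometry the right-image match sits at x - d, d >= 0.
        for d in range(min(max_disp, x) + 1):
            cand = right_row[x - d:x - d + block]
            cost = sum(abs(a - b) for a, b in zip(patch, cand))
            if cost < best_cost:
                best_d, best_cost = d, cost
        disparities.append(best_d)
    return disparities

# Synthetic test row: the right row is the left row shifted by 2 pixels,
# so interior positions recover a disparity of 2.
left = [0, 0, 10, 50, 90, 50, 10, 0, 0, 0]
right = left[2:] + [0, 0]
print(disparity_for_row(left, right))  # [0, 1, 2, 2, 2, 2, 2, 0]
```

Edge positions deviate because the search range is clipped there or the patch is featureless; real implementations also add subpixel refinement and uniqueness checks.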
7th International Symposium on Gas Transfer at Water Surfaces. doi: 10.5281/zenodo.17672
poster
Chemical Curation of Lantana camara Emissions across Native and Invaded Habitats
Vipin Kumara, Sonali Chauhanb, Manish Kumara, Renu Kumaria, Suresh Babub, and Gitanjali Yadava
a. Computational Biology Laboratory, National Institute of Plant Genome Research, Aruna Asaf Ali Marg, New Delhi 110067, India
b. School of Human Ecology, Lothian Road, Kashmere Gate 110006, Ambedkar University Delhi
INTRODUCTION
Lantana camara is a major threat to biodiversity in many parts of the globe. Chemical profiling is an effective tool to decipher this complex genus and to support ecology. This study compares chemical profiles of Lantana in native and invaded habitats.
METHODOLOGY
Foliar chemical profiles: leaf samples of the Lantana camara complex were collected in March–April 2019 in Delhi (India).
Hydro-distillation: extraction of essential oil profiles using a Clevenger assembly for 3–4 hours.
Recovery: pure essential oil yield via anhydrous sodium sulphate and diethyl ether.
Chemical profiling via GC-FID and GC-MS on a Shimadzu QP2010 instrument.
Bibliometry and comparative analysis: the chemical profiles obtained above were compared with native-range Venezuelan datasets (Tesch et al. 2011). Statistics in R.
[Figure: chemical profile (% area per compound) of Lantana camara var. in the native range (Venezuela); compounds: (Z)-3-hexenol, n-hexanol, alpha-pinene, sabinene, beta-pinene, myrcene, alpha-phellandrene, delta-3-carene, o-cymene, limonene, 1,8-cineole, (E)-beta-ocimene, terpinolene, linalool, camphor, alpha-copaene, beta-bourbonene, beta-cubebene, beta-elemene, beta-caryophyllene, beta-copaene, beta-guaiene, alpha-humulene, (E)-beta-farnesene, alloaromadendrene, t-muurolene, germacrene D, alpha-chamigrene, alpha-farnesene, delta-cadinene, germacrene B, spathulenol, caryophyllene oxide.]
[Figure: chemical profile (% area per compound) of the Lantana camara complex in the invaded range (India); compounds: (Z)-3-hexen-1-ol, α-pinene, camphene, sabinene, β-pinene, 1-octen-3-ol, myrcene, ethyl hexanol, α-phellandrene, δ-3-carene, α-terpinene, p-cymene, eucalyptol, (E)-ocimene, (Z)-ocimene, γ-terpinene, 4-thujanol, terpinolene, linalool, geranyl nitrile, 2-p-menthen-1-ol, camphor, L-borneol, 4-terpineol, methyl salicylate, α-terpineol, 1-verbenone, isobutenyl methyl ketone, cis-3-hexenyl pentanoate, bicyclogermacrene, bicycloelemene, δ-elemene, α-cubebene, cyclosativene, α-copaene, β-elemene, elemene, caryophyllene, γ-cadinene, α-humulene, neoalloocimene, γ-muurolene, germacrene D, germacrene A, cubebol, (Z)-sesquilavandulol, davanone B, davanone D, (E)-nerolidol, denderalasin, spathulenol, humulene oxide, β-atlantone, isoaromadendrene epoxide, germacrene D-4-ol, geranyl linalool isomer B, cis-guaia-3,9-dien-11-ol, cedreanol, cis-lanceol, 6-epi-shyobunol, (2E,6Z)-farnesol, β-davanone-2-ol, β-chamigrene, aromadendrene-4,10-diol, thujopsan-2-α-ol, longipinanol.]
[Pie charts: % area of metabolome in essential oil. One profile: Alcohol 11.98, Monoterpene 25.10, Monoterpene alcohol 0.59, Sesquiterpene 53.60, Azulenic sesquiterpenol 3.96, Not classified 3.57. The other: Alcohol 1.4, Monoterpene 30.2, Monoterpene alcohol 1, Sesquiterpene 62.8, Azulenic sesquiterpenol 0.8, Sesquiterpene oxide 0.9.]
SUMMARY & PERSPECTIVES
- Chemical curation led to accurate identification of compounds
- Distinct chemical profiles in native and invaded habitats
- Greater chemical diversity in invaded habitats
This work has been funded by NIPGR core funds, the Hamied Award from the University of Cambridge, and the TIGR2ESS grant. The authors thank UCam & NIPGR for support and infrastructure.
REFERENCES
1. Love, A., Naik, D., Basak, S. K., Babu, S., Pathak, N., & Babu, C. R. (2009). Variability in foliar essential oils among different morphotypes of Lantana species complexes, and its taxonomic and ecological significance. Chemistry & Biodiversity, 6(12), 2263–2274.
2. Tesch, N. R., Mora, F., Rojas, L., Díaz, T., Velasco, J., Yánez, C., ... & Pasquale, S. (2011). Chemical composition and antibacterial activity of the essential oil of Lantana
- Difficulties to compare com
poster
ENVIRONMENTAL STUDIES PROGRAM INFORMATION SYSTEM
Configured, Not Coded: Rethinking How the General Public Discovers Environmental Science
To better serve the public, the Bureau of Ocean Energy Management's Environmental Studies Program (ESP) is rebuilding its science dissemination tool, the Environmental Studies Program Information System (ESPIS). The target audience for this tool is users with little knowledge of ocean science, along with engaged government agencies. Built on the ArcGIS Online (AGOL) hub, the tool uses large-format infographic cards that let users browse among four science topics: Physical, Chemical, Biological, and Social Sciences. Under each science topic, the user can select from a dozen or more themes. Under each theme, the user will find a templated theme page that uses website widgets to search, sort, and filter curated lists of research projects, products, and applications. This buildable, sustainable interface enhances the public's discovery of environmental science information.
Title: Theme pages have catchy titles that help users navigate from science topics to themes.
Introduction: When users first navigate to the theme page, the header section presents them with this text element, which helps them glean the contents of the theme page.
Region Bookmark: The region widget enables users to quickly navigate to a predefined map extent and reduces the number of cards in the map section to the ones from the bookmarked region.
Map: The user interacts with tools to browse studies and related content, and can select a map feature to highlight the corresponding card in a map section list.
Studies List: These are all the studies that contribute to the theme, and users can interact with all the map section tools to search, sort, and filter this list.
Related Content List: These derived products and webpages use a study product and/or combine parts of multiple study products to create value-added scientific information.
When the user selects a related content card, the user actions function highlights the related study product(s).
By: Jonathan Blythe (BOEM ESP) and Emily Sandrowicz (NV5, on contract to BOEM)
Study Products: The user can discover all the products resulting from ESP studies that contribute to a theme. If the user first selected a card in the map section, then the related study product(s) will automatically highlight.
Documents List: The user can search, filter, and sort the list of documents published by the ESP.
Publications List: The user interacts with the list of journal publications using the search, filter, and sort tools.
Recommender: The recommender section includes one or more button widgets that link to similar theme pages, in case users find this theme page useful, so they do not have to navigate back to the science topic theme list to find other relevant theme pages.
Bureau of Ocean Energy Management
Study: Study cards include the study title and program number elements. When the user selects a study card, the user actions function highlights the related study product.
poster
Effects of urbanization on intra- and interspecific variability of frog metacommunities in two tropical cities
Leandro B. C. Menezes1, Marcos R. Severgnini1, Diogo B. Provete1,2,3
leandrobc.menezes@gmail.com, marcosrafaelsevergnini@gmail.com, diogo.provete@ufms.br
1 - Institute of Biosciences, Federal University of Mato Grosso do Sul, Brazil; 2 - German Centre for Integrative Biodiversity Research, Leipzig, Germany; 3 - Gothenburg Global Biodiversity Centre, Göteborg, Sweden
Introduction & Question / Material & Methods
Figure 1. Maps of Campo Grande (Mato Grosso do Sul, Brazil) and São José dos Campos (São Paulo, Brazil) with the 20 sampling sites (ponds) per city, showing the change in land use between 1985 and 2022. Data from MapBiomas v. 8.0. Changes in land use drive changes in eco-evolutionary dynamics.
Sampling: 40 ponds in total; for each species at each pond, 5 adult males were measured on the left side (mm): body size, relative leg length (residuals of a linear model), and relative head length and width (log).
Figure 2. Urbanization gradient in Campo Grande (MS) and São José dos Campos (SP) with data from 2022 (impervious surface, pervious surface, urban ponds, 500 m buffers).
Environmental variables at the landscape (buffer) scale, computed with the R package landscapemetrics: urbanization in 2022 (%); rate of change in urbanization (% 2022 - % 1985, 37 years); rate of change in green areas (% 2022 - % 1985, 37 years); rate of change in radiance (% 2022 - % 2012, 10 years); urban heat island in 2022 (°C); building density in 2022 (number of buildings).
Results & Discussion
1,051 individuals were measured, from 44 species and 5 families across both cities.
Figure 6. Relative contribution of environmental variables to the components of phenotypic variation (intraspecific and turnover). The vertical black lines represent each environmental variable.
In Campo Grande, the urbanization and radiance rates of change and building density had a significant effect on leg size at the interspecific scale, and the rate of change in green areas on body size at the intraspecific scale. In São José dos Campos, radiance and building density had a significant effect on head length and width at the intraspecific scale. São José dos Campos, SP, Brazil Campo Grande, MS, Brazil Figure 3. Standardized effect size (SES) of the T statistics for phenotypic traits at distinct hierarchical levels in each city. Dots represent the observed data; boxes indicate the standard deviation of the SES obtained from the null model. In both cities, the population variation relative to the community variation (TIP/IC) for all traits had a lower mean and variation than expected under the null model, unlike TIC/IR and TPC/PR. This demonstrates that interspecific and regional-scale variability were greater along the urban gradient. Figure 4. Partitioning of the phenotypic variability into hierarchical levels. All phenotypic traits had little variation between ponds (< 0.001) and appear as 0% in the figure. The variability for genus, species, and region is greater for all phenotypic traits. Relative leg length had high variability at the population scale (within) along the urban gradient. Long-time urbanized Recently urbanized Habitat fragmentation Low effect High effect Changes in phenotypic variability High genetic drift Decrease in population size Low gene flow Do urban environments restrict the phenotypic variability of traits related to the impact and requirement niches?
At the intraspecific and interspecific scales Phenotypic adaptive changes Genetic adaptive changes to urban environments São José dos Campos, SP, Brazil Campo Grande, MS, Brazil Environmental variables at the landscape scale raster sf gtools terra tidyr 37 years 2022 1985 T statistics IP = Intra-population variability IC = Intra-community variability IR = Intra-regional pool variability PC = Variability in population means PR = Variability in community means Null models (local and regional pool) Hierarchical levels (population, species, community, regional)
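The T statistics listed above partition trait variance across nested levels. As a rough illustration only (the study's analysis was run in R; this toy Python sketch is not the authors' pipeline or data), a TIP/IC-style ratio can be computed as the mean within-population variance divided by the pooled within-community variance:

```python
from statistics import pvariance

def t_ip_ic(populations):
    """Toy TIP/IC-style ratio: mean within-population trait variance
    divided by the pooled within-community (pond) variance.
    `populations` is a list of trait-value lists, one per species
    population sampled in the same pond."""
    within = sum(pvariance(p) for p in populations) / len(populations)
    community = pvariance([x for p in populations for x in p])
    return within / community

# Two species whose trait means differ strongly: most variance in the
# pond is interspecific, so the ratio falls well below 1.
pond = [[10.0, 11.0, 10.5], [20.0, 21.0, 20.5]]
print(t_ip_ic(pond))
```

A ratio well below 1, as here, means most trait variance in the pond lies between species rather than within populations, the pattern the poster reports for TIP/IC.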
poster
The author acknowledges the General Director of the Department of Fisheries Malaysia and the Economic Planning Unit (EPU) for the funding support, the Director of the Fisheries Research Institute, the Director of FRI Bintawa, the Director of JPLS and the Director of FRI Kampung Acheh, and the officers and staff of FRI for their hard work and cooperation. Acknowledgement © 2020 Copyright Reserved Prepared by: Jamil Musel Mohammad Hafiz Hassan Arfazieda Anuar Time of Fishing Operation The operation may be conducted either during the day or at night. It is conducted 4 times in 24 hours, with at least 20 magazine units in total. Each operation takes about 3 hours. Bear in mind that the fishing operation depends on the weather conditions, operating safety and landings. Fishing Depth Usually, 95-180 metres is the preferred depth for longline fishing. The fishing ground depends on the seabed condition; rocky, muddy, sandy and seagrass areas are desired. As a reminder, artificial reef areas shall be avoided as they are conservation areas. Shooting The shooting operation is conducted in 1 hour over 5 kilometres per operation. The fishing depth increases with the number of magazines used. Boat Speed/Direction Boat speed shall remain consistent during shooting at 1-2 knots. During hauling, the speed shall be decelerated to 1.5 knots. Type of Bait Bait can be minced or used whole. The bait is usually barracuda 20-22 cm long, or squid, cut into pieces, hooked and placed in boxes. The bait-hooking process is performed automatically by the ABLL itself as the boat moves forward. As the boat arrives at a desired fishing area, the flagpole is tossed into the sea as a starting-position indicator. A sinker is then thrown into the sea, followed by a set of hooks until the end of the magazine, where another sinker is attached. Recording of Start Shooting Time The time when any part of the gear reaches the sea.
Data Recording for Location Surveys The data and location are both recorded as below: • Record the position of each station using the Global Positioning System • Record the position and start and stop times for shooting • Record the position and start and stop times for hauling Target Species The target species are demersal fishes from the families Carangidae, Serranidae, Lutjanidae and Nemipteridae, and a small amount of pelagic fishes. Recording of Finish Hauling Time The time when the operator has hauled all parts of the gear on board. The setting distance shall be calculated from the positions of start shooting and finish shooting and compared with the length of mainline deployed. Recording of Finish Shooting Time The time when the last part of the gear is shot overboard. Recording of Start Hauling Time The time when the operator hauls any part of the gear on board. The water inlet system will be set off, flowing water into the baiter to prevent line entangling. ABLL fully assembled and ready for fishing operation. Recording of Fishing Position The fishing position shall be recorded using the GPS or an equally accurate navigation system. Positions are recorded in latitude and longitude format. Gear malfunction Malfunctioning of the fishing gear is usually due to a disconnected rope or entangling of the mainline under water during the hauling operation. Entangling of the longline shall be detected using the floats connected to the mainline. The position and station numbers shall be recorded. Recording of Start Fishing Position The position where any part of the gear reaches the sea. Recording of Finish Fishing Position The position where the last part of the gear is shot overboard. ABLL is a stationary and long fishing gear, so information on the start hauling position and finish hauling position is required. The hooks then move one after another as the boat moves forward and the fishing operation starts. The baits are deposited into the empty baiter together with water.
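The setting distance described above, calculated from the start and finish shooting positions and compared with the length of mainline deployed, amounts to a great-circle distance between two GPS fixes. A minimal sketch (the coordinates below are illustrative, not actual fishing positions):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two GPS fixes (decimal degrees)."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Illustrative start/finish shooting fixes roughly 5 km apart, to be
# compared against the ~5 km of mainline deployed per operation.
setting = haversine_km(5.000, 103.000, 5.045, 103.000)
print(f"setting distance: {setting:.2f} km")
```

A setting distance much shorter than the mainline length would indicate slack or entanglement in the set.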
Bait Indicator of Abundance Other dat
poster
Two main experiments were conducted for this study: Hot stage microscopy −Mixture: raw material : slag = 1:1 weight ratio −Shape: small cube of 3 mm side −Heating: 10°C/min up to 1500°C −Computer treatment: measurement of the sample height Heat treatment of pellets −Mixture: raw material : slag = 3:1 weight ratio −Pellets: Ø20 mm, pressed at 80 MPa −Heat treatment: 5°C/min, dwell 1 h at 1400°C −Observation by SEM with EDS ESR Position 04 CHARACTERISTIC TEMPERATURE DETERMINATION OF REFRACTORY RAW MATERIALS AND SLAG MIXTURES Camille REYNAERT Early Stage Researcher mail: reynaert@agh.edu.pl Advanced THermomechanical multiscale mOdelling of Refractory Linings Supervisors & University Consortium of 15 European Partners Edyta ŚNIEŻEK Authors: Camille REYNAERT, Edyta ŚNIEŻEK, Jacek SZCZERBA Address: AGH University of Science and Technology, Faculty of Materials Science and Ceramics, Krakow, Poland Materials: Introduction: Conclusions: The steel industry is the most important consumer of refractory bricks. Refractories are commonly used as the linings of steel equipment because they can sustain very high temperatures and corrosive attacks in the working environment. However, due to the harsh working conditions, their life span is limited. Corrosion, mainly by slag, is the principal phenomenon responsible for the degradation and wear of the bricks. Acknowledgments This project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement no. 764987 The materials used are typical powders that can be found in refractories for steelmaking applications and slags picked up from a steel ladle. The powder is calcium zirconate with the composition given in table 1. The slags are silica-rich slags. They were taken at different moments of the treatment of a batch of steel in the ladle. Their composition is given in table 2 and their equilibria in a phase diagram in figure 2. Table 2: Composition of the slags obtained by XRF.
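Hot stage microscopy reduces to reading characteristic temperatures off the height-versus-temperature curve of the heated cube. A toy sketch of that readout, using a synthetic curve and made-up thresholds (not the evaluation procedure or thresholds used in the study):

```python
import math

def characteristic_temps(temps, heights, softening_frac=0.95, flowing_frac=0.33):
    """Return (softening T, flowing T): the first temperatures at which the
    sample height, normalized to its initial value, drops below each
    threshold. The threshold fractions here are illustrative only."""
    h0 = heights[0]
    soft = next((t for t, h in zip(temps, heights) if h / h0 < softening_frac), None)
    flow = next((t for t, h in zip(temps, heights) if h / h0 < flowing_frac), None)
    return soft, flow

# Synthetic curve: stable height up to ~1380 °C, then a rapid collapse.
temps = list(range(1000, 1501, 10))
heights = [3.0 if t < 1380 else 3.0 * math.exp(-(t - 1380) / 40.0) for t in temps]
soft, flow = characteristic_temps(temps, heights)
print(soft, flow)
```

Real instruments also track shape descriptors (corner rounding, hemisphere formation), but the height-threshold idea is the core of the computer treatment mentioned above.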
Table 1: Composition of the powder obtained by XRD. Figure 2: Curves obtained by hot stage microscopy for calcium zirconate. Jacek SZCZERBA Calcium zirconate corroded by different slags exhibits different behaviors. This is particularly noticeable between the first mixture (CaZrO3 + SiS1) and the others. This is due to the major difference in composition between SiS1 and the other slags (SiS2/3/4). When using the phase equilibria diagram, one can notice that SiS1 is in another equilibrium zone compared to SiS2/3/4 (figure 1). MgO SiO2 SiS2/3/4 SiS1 Results: • For SiS1, the matrix melted and the grains were partly dissolved, as they appear rounded. • For SiS2/3/4, only part of the slag melted but the grains were not attacked: almost no rounded grains. The characteristic temperatures for each mixture are given in table 3.

Table 3: Characteristic temperatures (°C) from hot stage microscopy.
                 Expansion        Shrinkage        Initial     Flowing
                 Start    End     Start    End     softening
CaZrO3 + SiS1    100      1080    1080     1380    1400        1460
CaZrO3 + SiS2    100      960     960      1410    1410        /
CaZrO3 + SiS3    100      920     1080     1400    1420        /
CaZrO3 + SiS4    100      940     940      1400    1400        /

The observed differences can be explained by the formation of different phases in the mixtures and by differences in microstructure, as observed in Fig. 3a and b. Phases identified in each mixture: CaZrO3 + SiS1: calcium zirconate - CaZrO3, zirconium oxide - ZrO2, spinel - Mg8Al16O32, Mn3O4, gehlenite - Ca4Al4Si2O14, braunite - Mn56Si8O96, grossular - Ca24Al16Si24O96. CaZrO3 + SiS2: calcium zirconate - CaZrO3, zirconium oxide - ZrO2, periclase - MgO, β-Ca2SiO4 (monoclinic), oldhamite - CaS, Ca16Si8F8O28. CaZrO3 + SiS3: calcium zirconate - CaZrO3, zirconium oxide - ZrO2, periclase - MgO, β-Ca2SiO4 (monoclinic), oldhamite - CaS, Ca16Si8F8O28. CaZrO3 + SiS4: calcium zirconate - CaZrO3, zirconium oxide - ZrO2, periclase - MgO, β-Ca2SiO4 (monoclinic), oldhamite - CaS, Ca16Si8F8O28, ideal pyroxene - MgSiO3. The phase compositions help to understand the phenomena that took place.
For the first mixture, gehlenite melts around 1250°C, which explains the sudden drop in volume; the drop is then sustained by the dissolution of zirconate grains in the slag, as can be observed
poster
Identifying new functions of the EGF/EGFR pathway through tissue-specific recombination Silvan Spiri, Louisa Mereu, Matthias Morf, Alex Hajnal Institute of Molecular Life Sciences, University of Zurich LIN-3 EGF LET-23 EGFR AC P4.p P5.p P6.p P7.p P8.p P3.p Basement membrane 1° LIN-3 EGF LET-23 EGFR AC P4.p P5.p P6.p P7.p P8.p P3.p Basement membrane A) EGF/LIN-3 dependent induction of the vulval precursor cells. VulE VulE VulF VulF AC VulF VulF VulE VulE C) The AC directly promotes dorsal lumen formation during vulva morphogenesis. Ventral nerve cord Basement membrane 2° 2° 1° 1° 1° 1° UNC-6 FOS-1 ? ? B) The anchor cell (AC), guided by Netrin/UNC-6 and unknown VPC guidance cues, breaches the basement membrane after the VPCs have divided the second time. FRT GFP let-23::gfp let-23 let-23::gfp FRT Protein kinase domain FRT GFP E) Two FRT sites flanking the sole kinase domain of egfr/let-23, along with a GFP sequence, are inserted into the endogenous egfr/let-23 locus. Upon Flp expression the kinase domain is excised, resulting in an inactive receptor and loss of the GFP signal. Tool to induce a temporally or spatially specific knock-out of egfr/let-23: ! " VulE VulE VulF VulF AC VulF VulF VulE VulE utse utse utse utse utse LET-23 EGFR LIN-3 EGF uv1 cell fate uv1 utse uv1 D) EGF/LIN-3 is expressed in the 1° vulval cell lineage after vulva induction and is necessary to specify the uterine uv1 cell fate.
Introduction Model Summary VulE VulE VulF VulF VulF VulF VulE VulE AC LET-23 EGFR LIN-3 EGF EGFR is expressed in the AC from the onset of vulval morphogenesis, securing dorsal lumen formation by stabilizing cytoskeleton components during lumen opening 1) EGFR is expressed in the AC during vulval morphogenesis 2) AC mispositioned in AC EGFR-KO animals during L4.1-2 3) F-actin in the AC is disorganized during L4.2-3 in AC EGFR-KOs 4) In a sensitized background, BM breaching during morphogenesis is affected in AC EGFR-KO animals 5) Narrow dorsal lumen upon AC EGFR-KO during L4.4 6) KO of lin-3 in the VPCs or let-23 in the AC leads to egg-laying defective adults let-23 egf X egfrX 3) KO of egfr/let-23 in the AC leads to more variability in AC alignment during L4.1 and L4.2 4) KO of egfr/let-23 in the AC leads to a disorganized F-actin network in L4.2 6) KO of egfr/let-23 in the AC leads to a decreased dorsal lumen diameter during L4.4 7) KO of lin-3 in the VPCs or let-23 in the AC leads to egg-laying defective animals Ventral nerve cord Basement membrane 2° 2° 1° 1° 1° 1° UNC-6 FOS-1 ? ? Vulval development and morphogenesis: let-23::gfp Ac::Flp; Nomarski Ctrl; Nomarski let-23::gfp Anchor cell alignment L4.1 ns p=0.0086 p=0.0005 0 1µm 2µm 3µm 4µm Ctrl Ac::Flp Ctrl Ac::Flp KO Anchor cell alignment L4.2 p=0.0024 ns p=0.0120 Ctrl Ac::Flp Ctrl Ac::Flp KO 0 1µm 2µm 3µm 4µm The LET-23::GFP signal is lost upon AC-specific KO of egfr/let-23, resulting in a more variable position of the AC during L4.1 and L4.2: (A) Schematic of the AC-specific KO of egfr/let-23. (B) Example images of AC-specific KO animals during L4.1 on the left and control animals on the right. Upper row, Nomarski images; bottom row, LET-23::GFP. Scale bar represents 5 µm; arrowhead indicates the AC. (C) Quantification of AC alignment relative to the vulval midline during L4.1 and L4.2.
A) B) C) Ac::Flp; Nomarski Ctrl; Nomarski AJM-1::GFP AJM-1::GFP AC-specific KO of egfr/let-23 leads to a smaller dorsal lumen diameter during L4.4: (A) Example images of AC-specific KO animals during L4.4 on the left and control animals on the right. Upper row, Nomarski images; bottom row, AJM-1::GFP to mark the apical junctions. (B) The dorsal lumen diameter was determined by measuring the inner diameter of the vulF toroid using the vulF/vulE apical junctions. A) B) L4.4 Dorsal lumen diameter p=0.0002 Ctrl n=22 Ac::Flp n=22 9µm 8µm 7µm 6µm 5µm 4µm 3µm 2µm 1µm 5) KO of egfr/let-23 in the AC leads to decreased basement membrane breaching during morphogenesis in a sensitized background A) B) egfr/let-23 KO increases failure
poster
X-chromosome Inactivation In humans, one of the two X chromosomes in females is inactivated, but some genes on the inactive X escape inactivation (Carrel and Willard, 2005; Figure 4). If X chromosome inactivation (XCI) evolves in response to loss of gene content on the Y chromosome, then XCI status should be a strong predictor of the status of Y-linked homologs. Comparing Classes For each gene, data were collected on several possible predictive features, including expression intensity and expression breadth across 79 human tissues (Su et al., 2004), GO functional categories, association with human disease in Online Mendelian Inheritance in Man, and selective constraint at nucleotide sites. For each feature, the classes of genes were compared using a permutation test with 1000 iterations to detect differences between the classes. There were no significant differences between any classes with respect to function, expression or selective constraint. Learning from genetic fossils on the Y chromosome Acknowledgments Many thanks to the Mohnkern Scholarship and the Miller Institute for Basic Research in Science for supporting this research. M.A. Wilson Sayres and K.D. Makova Sex Chromosome Evolution The mammalian X and Y evolved from a pair of homologous autosomes (Fig. 1). In the absence of X–Y recombination, caused by a series of inversions on the Y chromosome, the Y lost much of its gene content. Are some types of genes more likely to be lost? Is it possible to predict the fate of Y-linked genes? Figure 2. Existence of a gene in homologous regions across species was used as evidence for the presence of that gene on the ancestral X. Figure 1. Some strata (labeled with numbers) are shared between mammals while some are species-specific. This means that Y-chromosome evolution is unique to each species and highlights the need for a general understanding of how Y chromosomes evolve. Figure 3.
Proportion of X-linked genes with functional (black), pseudogenized (grey), or lost (white) Y homologs along chrX in 5 Mb windows. Table 1. Results of the LDA. Predicting Y retention and loss Using a linear discriminant analysis (LDA) with XCI status as the sole predictor, the model successfully predicts whether an X-linked gene has a functional Y homolog or has lost its Y homolog, with intermediate success predicting whether the Y homolog is a pseudogene (Figure 5 and Table 1). Figure 5. Significant differences in X-inactivation status between classes of X-linked genes. Conclusions • Current function, expression and selective constraint of X-linked genes were not significantly different between any of the classes. It may be that the signals have diminished over time. • X-chromosome inactivation results from gene loss on the Y. • XCI status can correctly predict the status of Y-linked homologs References Carrel and Willard (2005) Nature, 434(7031): 400-4. Lahn and Page (1999) Science, 286(5441): 964-7. Su et al. (2004) PNAS, 101(10): 6062-7. Classifying X-linked genes Genes present on the ancestral X chromosome (Figure 2) were categorized into three classes (Figure 3). We identified hundreds of previously undescribed Y-linked pseudogene sequences using LASTZ. Figure 4. X-inactivation in females mirrors gene loss on the Y chromosome in males.

X-linked gene class       Correct LDA classification
Functional Y homolog      85%
Pseudogenized Y homolog   56%
Lost Y homolog            92%
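As a stand-in for the LDA described above (this is a toy nearest-class-mean classifier on invented data, not the authors' model, feature encoding, or gene set), predicting Y-homolog fate from a single binary XCI feature looks like:

```python
def fit_centroids(features, labels):
    """Per-class mean of the single predictor (1 = escapes XCI, 0 = inactivated)."""
    sums = {}
    for x, y in zip(features, labels):
        s, n = sums.get(y, (0.0, 0))
        sums[y] = (s + x, n + 1)
    return {label: s / n for label, (s, n) in sums.items()}

def predict(centroids, features):
    """Assign each gene to the class whose centroid is nearest."""
    return [min(centroids, key=lambda c: abs(x - centroids[c])) for x in features]

# Invented training set: escape from XCI co-occurs with a retained
# functional Y homolog, inactivation with Y loss.
xci = [1, 1, 1, 0, 0, 0, 0]
fate = ["functional"] * 3 + ["lost"] * 4
model = fit_centroids(xci, fate)
print(predict(model, [1, 0]))
```

With one binary predictor the class means fully determine the decision rule, which is why a single well-separated feature like XCI status can classify so well.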
poster
KIT – The Research University in the Helmholtz Association Expected angular distribution. D. Hinz, poster #352 scintillation photons detected electron transmitted electron scintillator magnetic field lines cell size ca. 150 μm x 150 μm x 300 μm Design of a Scintillating Active Transverse Energy Filter (scint-aTEF) for Background Suppression at the KATRIN Experiment Nathanael Gutknecht for the KATRIN Collaboration Institute of Experimental Particle Physics (ETP) We acknowledge the support of Helmholtz Association (HGF), Ministry for Education and Research BMBF, the doctoral school KSETA at KIT, Helmholtz Initiative and Networking Fund, Max Planck Research Group, and DFG in Germany; Ministry of Education, Youth and Sport in the Czech Republic; INFN in Italy; the National Science, Research and Innovation Fund via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation in Thailand; and the DOE Office of Science, Nuclear Physics in the United States. This project has received funding from the ERC under the European Union Horizon 2020 research and innovation programme. We thank the computing cluster support at the Institute for Astroparticle Physics at KIT, Max Planck Computing and Data Facility (MPCDF), and the National Energy Research Scientific Computing Center (NERSC) at LBNL. calibration remove tritium e– from T2 decay filter electrons by their energy count events KATRIN Collaboration, adapted The Karlsruhe Tritium Neutrino Experiment (KATRIN) Measurement principle for  Measure kinetic energy of e–  Look for distortion of energy spectrum near endpoint caused by neutrino mass  Goal: < 0.3 eV / c² (90 % C.L.) 
Concept of the scint-aTEF Simulations for Optimal Geometry Angular detection efficiency  Purely based on geometry ROC curve for background suppression  assuming a 30 % Rydberg fraction Expected improvement on  KNM2-like scenario  assumes scint-aTEF from day zero Conclusion  Valid concept in case of large angular separation between β- and background electrons (Rydberg electrons)  A good scintillator light yield is essential  Implementation of the scint-aTEF in KATRIN is unlikely due to the long development time Development of the scint-aTEF 3D-printed plastic scintillator  Two-photon polymerization at µm scale at KIT-APH Readout: SPAD arrays  "IDP4" from ZITI (Heidelberg) for single-electron event detection J. Lauer P. Kiefer 3D-printed microstructure, non-scintillating. From https://doi.org/10.5445/IR/1000167180 Single-electron event (top left), captured with a SPAD array and a commercial scintillator. Discriminate keV electrons by their cyclotron motion  β-electrons: large angles (isotropic)  background: small angles
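The angular discrimination rests on the pitch-angle dependence of the cyclotron radius, r = p·sin θ / (eB). A non-relativistic sketch with illustrative numbers (an 18.6 keV electron and a 2.5 T field; neither value is taken from this poster) shows why channels of order 150 μm can transmit small-angle electrons while large-angle ones spiral into the scintillator walls:

```python
import math

E_KIN_EV = 18.6e3   # illustrative kinetic energy near the tritium endpoint, eV
B_TESLA = 2.5       # illustrative magnetic field at the filter, T
M_E = 9.109e-31     # electron mass, kg
Q_E = 1.602e-19     # elementary charge, C

def cyclotron_radius_um(pitch_deg):
    """Non-relativistic cyclotron radius r = m*v*sin(theta) / (e*B), in µm.
    (18.6 keV electrons are mildly relativistic; this ignores that.)"""
    v = math.sqrt(2 * E_KIN_EV * Q_E / M_E)
    r = M_E * v * math.sin(math.radians(pitch_deg)) / (Q_E * B_TESLA)
    return r * 1e6

small, large = cyclotron_radius_um(10), cyclotron_radius_um(60)
print(f"10 deg: {small:.0f} um, 60 deg: {large:.0f} um")
```

Under these assumptions the radius at 60° is several times the radius at 10° and comparable to the quoted ~150 μm cell size, which is the geometric basis of the filter.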
poster
Abstract The GAPDH locus is attractive for recombinant protein expression according to knockdown and transcriptomic data, as it is the 3rd most expressed gene in HEK293. DHFR KO cells grow exclusively in media supplemented with hypoxanthine and thymidine (HT). Integration of exogenous DHFR alongside the gene of interest in DHFR KO cells effectively rescues the cells, allowing selection simply by using media without HT supplementation. CRISPR-Cas9 is an efficient gene-editing technology that allows scientists to effectively remove, insert and modify DNA in cells. Introduction This proposed methodology aims to optimize recombinant protein production in HEK293 cells to improve efficacy, lower the potential for immunogenicity and increase the industrial potential of human-derived cell hosts. Significance Methods Results Conclusions Fig. 1 Bienzymatic (EcoRV & BbsI) digestion of cloned PX330 CRISPR plasmids confirms successful guide integration. Successful integration results in one band, whereas unsuccessful integration results in 2 bands. For each guide, 2 clones were tested via this method and sequenced. At least one clone for each guide was successfully integrated (C, control (undigested) & D, digested). Fig. 2 PCR screening confirms homozygous DHFR1 knockout. Genomic DNA from single cell clones was isolated and a PCR reaction was run using primers designed outside of the determined target sites to test for a reduction in the produced fragment. All clones except clone 20 exhibit successful DHFR1 homozygous knockout. Fig. 3 Fluorescent microscopy confirms stable integration of eGFP and the puromycin resistance gene in HEK293 cells. HEK293 WT cells were transfected with the Cas9-AAVS1 construct and the HDR-AAVS1.
Cells underwent selection for 10 days using 1.5 µg/ml puromycin, followed by single cell cloning to confirm stable integration of the HDR template into the AAVS1 site. All CRISPR-Cas9 constructs were successfully cloned into the PX330 plasmid, as confirmed by digestion using EcoRV and BbsI restriction enzymes and by sequencing. PCR screening shows successful isolation of monoclonal populations with the desired homozygous DHFR1 knockout, as evidenced by the reduction in the product size. The eGFP-Puro-AAVS1 cell line was successfully created, as cells were selectable using puromycin and maintained eGFP expression for over 18 days post transfection. Generate DHFR2 KO cells using successful DHFR1 KO clones, followed by partial integration of DHFR and finally eGFP and the remaining portion of DHFR, to create the complete experimental cell line. eGFP expression will be compared to that of the eGFP-Puro-AAVS1 control cell line using Western blot and flow cytometry. Confirm site-specific integration of the eGFP and puromycin genes in the eGFP-Puro-AAVS1 cell line by screening single cell clones using PCR and sequencing. Adapt this system to produce a therapeutic recombinant protein (e.g. EPO) by cloning the gene into the position of eGFP in the HDR-GAPDH2 plasmid. Future Directions Acknowledgments I would like to thank Dr. Ihab Younis, Dr. Ravichandra Bachu and Dr. Mazen Sidani for their mentorship throughout this project. I would also like to thank Dr. Simon Faulkner and Dr. Mohamed Bouaouina for their useful insight. Finally, I would like to thank Maya Kamaldean, Bernadette Bernales and Maria Navarro for their support in the lab. References Establishing a Stable Recombinant Protein Production Cell Line Using CRISPR-Cas9 Mediated Genome Editing of HEK293 Cells Martin Sikhondze | Dr. Ravichandra Bachu | Dr. Ihab Younis | Dr.
Mazen Sidani Biological Sciences Program | Carnegie Mellon University Qatar Recombinant proteins are biopharmaceuticals used in the treatment of various diseases including cancer and diabetes. Current methods to produce these therapeutics depend on bacterial & CHO cells; however, these cell hosts have non-identical post-transcriptional and translational modifications, increasing the likelihood of ineffective protein products in addition to possible immunoge
poster
European Conference on Crop Diversification 18-21 September 2019, Budapest, Hungary Evaluation of the Land Equivalent Ratio index in barley-pea and bread wheat-faba bean intercropping Tavoletti S.*±1, Straccia F.1, Merletti A.1, Iommarini L.1 1 Dipartimento di Scienze Agrarie, Alimentari ed Ambientali, Università Politecnica delle Marche, Via Brecce Bianche, 60131 Ancona – Italy ∗ Speaker ± Corresponding author: s.tavoletti@univpm.it 1 Introduction To improve agricultural sustainability, a valuable strategy could be to increase the biodiversity of cultivated fields through intercropping (Bedoussac et al., 2015; Brooker et al., 2015; Yu et al., 2016). Cereal-legume intercropping exploits the complementarity between these two groups of crops. In 2017, a field trial showed that Mix1(50-50) was not effective because barley was too competitive against pea, and that the faba bean plant density could have been increased. In the present research, new barley-pea and bread wheat-faba bean combinations were evaluated to increase the Land Equivalent Ratio (LER) and to reduce weed competition in mixed vs sole legume crops. 2 Materials and Methods 2.1 Trial 1 - WP2 barley-pea trial: conventional farming – no herbicide treatments. At the UNIVPM Experimental Station, a split-plot design (4 replicates, sub-plot size 9x1.2 m2) was used, including one barley (Tea) and two pea (Hardy, Astronaute) varieties as sole crops and the mixed crops Mix1(50-50), Mix2(33-67), Mix3(25-75) and Mix4(20-80). Nitrogen fertilization (whole plots) was applied as shown in Table 1, the amount of N in mixed crops being proportional to the barley seed density. The trial was sown on February 1st, 2018, and harvested on July 3rd, 2018. Table 1. Trial 1: Nitrogen fertilization plan (N kg/ha).
        Pea sole  Barley sole  Mix1(50-50)  Mix2(33-67)  Mix3(25-75)  Mix4(20-80)
High N  20.0      80.0         40.0         26.4         20.0         16.8
Low N   0.0       40.0         20.0         13.2         10.0         8.4

2.2 Trials 2 and 3 – WP4 barley-pea trials: organic farming – no nitrogen fertilization. Two trials were performed with participatory farmers at 2 locations: Monte San Martino – MC (Trial 2) and Sterpeti – PU (Trial 3). At each site, a randomized complete block design (4 replicates) was applied (plot size: 50x4.5 m2 for Trial 2 and 50x4.2 m2 for Trial 3). One barley variety (Tea) and one pea variety (Hardy) were included as sole crops and as Mix2(33-67) and Mix3(25-75). The trials were sown on January 26th, 2018 (Trial 2) and March 28th, 2018 (Trial 3), and harvested on July 3rd, 2018 (Trial 2) and July 9th, 2018 (Trial 3). 2.3 Trial 4 – WP4 bread wheat-faba bean trial: organic farming. The trial was performed at Rocca Priora – AN as a randomized complete block design (3 replicates, plot size 10x1.2 m2). Three bread wheat (ACA320, Bologna, Marcopolo) and two faba bean (Chiaro di Torre Lama, Prothabat69) varieties were evaluated as sole crops and as Mix1(50-50), Mix2(50-65), Mix3(50-80), Mix4(33-75) and Mix5(33-80). The trial was sown on January 31st, 2018 and harvested on July 19th, 2018. 3 Results 3.1 Trial 1: Barley-Pea In mixed crops, LERbarley was always much higher and LERpea always much lower than expected (Table 2). Even though LERtotal was always higher than 1, Mix1 showed the highest values because of the high yielding ability of barley in mixed crop combinations. As frequently observed in mixed cropping trials, the LERtotal values were higher at the low than at the high nitrogen fertilization level.
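The LER used throughout is the sum, over the component crops, of the intercrop yield divided by the corresponding sole-crop yield. A minimal sketch with made-up yields (not the trial data):

```python
def ler(intercrop_yields, sole_yields):
    """Land Equivalent Ratio: sum of partial LERs, one per component crop.
    LER > 1 means the mixture out-yields the same land area split
    between the sole crops."""
    return sum(yi / ys for yi, ys in zip(intercrop_yields, sole_yields))

# Hypothetical t/ha yields for a barley-pea mixture vs the sole crops:
# barley 3.0 in mixture vs 4.0 sole, pea 1.0 in mixture vs 2.5 sole.
partial = ler(intercrop_yields=[3.0, 1.0], sole_yields=[4.0, 2.5])
print(round(partial, 2))  # 1.15
```

Here LERbarley = 0.75 and LERpea = 0.40, the kind of asymmetry the results above describe when barley dominates the mixture.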
poster
Study on technology and how you see yourself In this study, we were interested in how the use of social media and technology can influence how you see yourself. We conducted an interview with questions about which technology and social media you use, and we found some very interesting answers. Thank you for talking with us during our time in Namqom! Technology …of people say they use these devices. • 49% of people use a cell phone one or more times a day. • Television is very popular. TV programs such as Elif and Lazos de Sangre and movies such as Rápido y Furioso are watched frequently. 69% of the people of Namqom watch television one or more times a day. • However, many people spoke about cell phone addiction, especially among young people. • Cell phone addiction is a major worldwide problem that has been associated with sleep disturbances, depression and anxiety. Social media • Many people in Namqom enjoy social media, especially Facebook and WhatsApp. In fact, 80% of people use social media. • The main reason for using social media is to communicate with family and friends. • However, it is important to be careful with excessive use of these applications. Applications like Facebook are designed to encourage repetitive use. …of people say they use these applications. Body ideals and how you see yourself Repeated exposure to images of certain body types on social media can change not only what you see, but also how you see yourself. We asked you how you see yourself, which body is the healthiest and which is the most attractive, to see whether your social media use changed your body-type preference in any way. • Most men say that body No. 6 is how they see themselves and that body No. 4 is the healthiest. • In 2009, men said that body No. 5 was the healthiest. There has been a cultural shift.
• Most women say that body No. 5 is how they see themselves and that body No. 4 is the healthiest. • In 2009, women said that body No. 6 was the healthiest. There has been a cultural shift. The connection: social media and how you see yourself • 52% of people have tried to lose weight at some point in their lives. • 12% of people have tried to gain weight at some point in their lives. • 27% of people do not feel good about their bodies. • Most women (55%) would like to lose weight. • A plurality of men (38%) would like to lose weight. • However, 71% of people say they like their bodies. • Social media sites were introduced to the world only very recently. • Using them and technology can influence how you see yourself and how you feel about your body. • So be careful with social media. It can affect you more than you think! • Religion also strongly influences how people see themselves. Some people say they like their bodies because God gave them to them. International data Why we measured your weight and height Measuring your weight and height helped us calculate your body mass index (BMI). The BMI indicates nutritional status: underweight, normal weight, overweight or obese. When you chose a silhouette, one that looks like you or one that seems the healthiest to you, we observed that people preferred silhouettes corresponding to different nutritional statuses.
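The BMI described above is simply weight divided by height squared; the sketch below uses the standard adult BMI cut-offs and a made-up example person (the poster itself reports no individual measurements):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight divided by height squared (kg/m^2)."""
    return weight_kg / height_m ** 2

def nutritional_status(b):
    """Standard adult BMI categories (WHO cut-offs)."""
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal weight"
    if b < 30:
        return "overweight"
    return "obese"

# Made-up example: 70 kg at 1.75 m.
b = bmi(70, 1.75)
print(round(b, 1), nutritional_status(b))  # 22.9 normal weight
```

Comparing the silhouette each person chose against the category their measured BMI falls into is what allows the mismatch between perceived and measured nutritional status to be quantified.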
poster
Towards HPC-Friendly Quantum Programming Models Mateusz Meller Supervisors: Dr. Oliver Thomson Brown1, Dr. Joseph Lee1, Vendel Szeremi2 1. EPCC, University of Edinburgh 2. Hartree Centre, STFC, UKRI Introduction Even though quantum computing promises an advantage on some applications, such as quantum Hamiltonian simulation, current devices are prone to errors and have short coherence times. Most research focuses on the circuit-oriented programming model. While this model provides universal computation, it is inefficient. On the other hand, analog quantum computers can solve some problems by leveraging native hardware controls, providing a shorter time-to-solution and thus enabling us to study larger problems. However, this is under-explored territory, especially from the point of view of HPC. No satisfactory tools exist for the integration of classical HPC with analog QC. The goal of this project is to develop such tools. Figure 1. There are two general schemes for implementing quantum simulation on a quantum computer. In a circuit-oriented scheme we apply a sequence of unitaries, known as gates. In a Hamiltonian-oriented scheme we synthesise Hamiltonian terms which evolve in time. Figures taken from [1]. Analog Quantum Simulation On-machine instructions Programming Model Figure 2. Gate-based quantum circuit code written using CUDA Quantum [2]. The user specifies the quantum kernel as a sequence of gates applied to the quantum register. Figure 3. Hamiltonian-oriented code written using SimuQ, a framework for programming quantum simulation on analog devices [1]. Figure 6. Resulting pulse schedule generated by the circuit-based Qiskit compiler [3]. Figure taken from [1]. Figure 7. Resulting pulse schedule generated by the SimuQ compiler targeting analog machines. Figure taken from [1]. Figure 8. General outline of the project. Having stated the problem, the project will consist of 4 phases: Setup, Prototyping, Development and Validation.
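The contrast between the two schemes in Figure 1 can be made concrete with plain data structures (this is an illustrative sketch in Python, not the CUDA Quantum or SimuQ APIs): a circuit-oriented program is an ordered gate list handed to a gate compiler, while a Hamiltonian-oriented program is a set of weighted terms the analog compiler synthesises as continuous time evolution:

```python
# Circuit-oriented view: an ordered sequence of gates applied to qubits.
circuit = [
    ("h", 0),
    ("cx", 0, 1),
    ("rz", 1, 0.25),
]

# Hamiltonian-oriented view: coefficients on Pauli strings; the analog
# compiler schedules pulses realising exp(-i * H * t) rather than gates.
# (A made-up 2-site transverse-field Ising term, for illustration.)
hamiltonian = {"ZZ": 1.0, "XI": 0.5, "IX": 0.5}
evolution_time = 1.0

def count_two_qubit_gates(circ):
    """Tiny cost proxy for the circuit picture: number of CX gates."""
    return sum(1 for op in circ if op[0] == "cx")

print(count_two_qubit_gates(circuit))
```

Trotterising the Hamiltonian back into a gate list is exactly the overhead the circuit-oriented model pays and the analog model avoids.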
Project goals Problem: design an HPC-friendly programming model for analog quantum computing Port the Quake MLIR dialect to XDSL Develop a dialect for analog QC in XDSL Explore mappings between dialects and underlying constructs Understand the target ISA by collaborating with Strathclyde Develop hardware-aware QIR lowering, with optimal instruction selection Validate solutions on a QC simulator (Tensor Networks / Statevector) Identify higher-level abstractions that map well to dialects Benchmark on a real quantum device Setup Prototyping Validation Develop a DSL which leverages these findings Development Results Compilation process C++ Clang AST to LLVM MLIR Quake Dialect QIR (quantum-extended LLVM IR) Device code registration Link-time override (CUDAQ func to MLIR func) Link Opt Exe Figure 4. Compilation process for the CUDA Quantum framework. CUDAQ kernel constructs are mapped to the Quake MLIR dialect, which is further lowered to the Quantum Intermediate Representation (QIR). Figure 5. Compilation process for the SimuQ quantum analog framework. The steps involved focus on correct resource and instruction scheduling. py Instruction Schedule Conflict Resolver Hamiltonian Synthesizer Block Schedule Scheduler Signal Line Schedule Pulse Translator Pulse Schedule Exe References [1] Peng, Yuxiang, Jacob Young, Pengyu Liu, and Xiaodi Wu. ‘SimuQ: A Framework for Programming Quantum Hamiltonian Simulation with Analog Compilation’. Proceedings of the ACM on Programming Languages 8, no. POPL (5 January 2024): 2425–55. [2] Kim, Jin-Sung, Alex McCaskey, Bettina Heim, Manish Modani, Sam Stanwyck, and Timothy Costa. ‘CUDA Quantum: The Platform for Integrated Quantum-Classical Computing’. In 2023 60th ACM/IEEE Design Automation Conference (DAC), 1–4, 2023. [3] Aleksandrowicz, Gadi, Thomas Alexander, Panagiotis Barkoutsos, Luciano Bello, Yael Ben-Haim, David Bucher, Francisco Jose Cabrera-Hernández, et al. ‘Qiskit: An Open-Source Framework for Quantum Computing’. Zenodo, 23 January 2019. Quantum Circuit
poster
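The Hamiltonian-oriented scheme contrasted with the gate-based one in the poster above can be illustrated with a toy example: the program specifies Hamiltonian terms directly, and the state evolves under exp(-iHt). A minimal NumPy sketch for a two-qubit Ising-type Hamiltonian; this is illustrative only, not the SimuQ or CUDA Quantum API:

```python
import numpy as np

# Pauli operators
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    """Tensor product of a sequence of operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Hamiltonian-oriented specification: H = Z(x)Z + 0.5 * (X(x)I + I(x)X)
H = kron(Z, Z) + 0.5 * (kron(X, I2) + kron(I2, X))

# Evolve |00> under U = exp(-iHt), built by exact diagonalization
t = 1.0
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0
psi_t = U @ psi0

print(np.round(np.abs(psi_t) ** 2, 3))  # measurement probabilities after evolution
```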
FREEZE! A manifesto for safeguarding and preserving born-digital heritage Aims. Finding ways to preserve born-digital heritage has become a matter of urgency and growing concern. Websites, games and interactive documentaries each bring specific challenges that need to be addressed. It takes three to tango: ensuring that our digital lives and digital creativity are not lost to future generations requires a joint effort by the principal players: creators, heritage professionals and policy makers. This manifesto lays out the actions they need to take today to safeguard born-digital heritage. Creators. Digital products are at risk of being lost from the moment they are created. Creators are therefore part of the preservation process - whether by writing code, editing digital content or creating some other form of digital expression. We encourage creators as follows: • Invest time in describing your work carefully, whatever platform you use to store and manage your work. Provide at least a minimal set of metadata (who, what, where, when). Always include versioning data and information about the rights status of the work. • Document your work as copiously as possible. Documentation enables future users to understand and reuse your work more easily. Describe the technical specifications of your work, for example the hardware and software used to create it. • If possible, assign open licenses (such as Creative Commons) to your work. This enables content to be reused. Reuse will help to ensure the longevity of your work. • Where possible, use open-source software and open-source hardware. Your work will withstand the test of time better, since open means independence from proprietary technology and vendor lock-in, and transparent availability of the source code and building blocks of your work. Heritage Professionals. Digital material presents several challenges for heritage professionals. 
For instance, the sheer amount of material created, dispersed among diverse platforms, hardware and domains, makes selection a daunting task. There is little standardization of file formats and of the environments that support them. Norms for describing and managing this complexity are inadequately developed. The tasks involved in collecting, preserving and making digital materials accessible fall into three categories. We encourage heritage professionals as follows: Policy • Identify vulnerable digital heritage in your area of activity and find out which forms of digital heritage your organisation develops, manages or intends to manage (in line with collection policy plans). Create a convergent digital landscape by harmonising collection policies with other institutions. To ensure success, avoid overlaps and gaps in the combined collections. • Develop policies for acquiring born-digital material and keeping it accessible sustainably. Use existing models, as described in the ‘DIY Handbook of Web Archaeology’. • Obtain legal advice regarding storage and reuse. Act responsibly when using, managing and making accessible personal data or information. Implementation • Where possible, cooperate with (fellow) institutions and industrial partners to find collective solutions. Choose robust, preferably open, technical infrastructures and operating systems. • Assume that your current technology will need to be updated regularly, so prepare your exit strategy: can you move data from system A to system B easily? • Use well-documented, open standards, e.g. for storage formats and exchange protocols. Non-dependence on suppliers ensures your archive material remains interchangeable in the future. • Agree clear guidelines for the delivery of acquired and transferred born-digital material: when, why and under what terms. Outline the rights and obligations before and after material is transferred. 
If accessibility is an objective, organise this when the acquisition is realised: lay down terms for accessing the collections. • Ensure copious metadata records are kept of digi
poster
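The "minimal set of metadata (who, what, where, when)" plus versioning and rights status that the manifesto above asks creators to record can be sketched as a simple structured record; the field names and the example values below are illustrative assumptions, not a prescribed schema:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class WorkMetadata:
    creator: str        # who
    title: str          # what
    location: str       # where (repository, URL, or storage platform)
    created: str        # when (ISO 8601 date)
    version: str        # versioning data the manifesto requires
    rights: str         # rights status, e.g. a Creative Commons license
    hardware: str = ""  # technical documentation: creation hardware
    software: str = ""  # technical documentation: creation software

# Fictitious example record for an interactive work
record = WorkMetadata(
    creator="Jane Maker",
    title="Interactive documentary 'City Voices'",
    location="https://example.org/archive/city-voices",
    created="2024-03-01",
    version="1.2.0",
    rights="CC BY 4.0",
    software="Unity 2022.3",
)
print(json.dumps(asdict(record), indent=2))
```

Serializing the record alongside the work itself keeps the description portable across platforms, which is the point of the manifesto's first recommendation.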
A cross-institutional, FAIR VIVO for Metabolomics Michael Conlon, Kevin S. Hanson, Taeber Rapczak, Naomi Braun, Christopher P. Barnes, University of Florida, Gainesville, Florida, USA Metabolomics Data The Metabolomics Workbench (MWB) (https://metabolomicsworkbench.org) is the National Metabolomics Data Repository of the US National Institutes of Health (NIH). Anyone can deposit data to the MWB. As of August 28, 2019, the workbench provides data from 982 publicly available studies. Another 205 studies are currently embargoed and will become available on their embargo dates. MWB develops and uses RefMet (http://bit.ly/2PkxY5p), a nomenclature for representing metabolites found using mass spectrometry (MS) and nuclear magnetic resonance (NMR) techniques. Investigators upload study data to the workbench and provide metadata about themselves and their work. The MWB provides an API that can be used to access metadata about studies and investigators. MWB metadata has been mapped to an ontology developed by the authors so that it can be represented as RDF and loaded into VIVO. PubMed (http://pubmed.gov) is an open-access index to biomedical literature, including metabolomics. Its API can be used to find and retrieve publication data regarding metabolomics investigators. Many groups use PubMed data in VIVO. PubMed data has been mapped to the VIVO Ontology (http://vivoweb.org/ontology/core). Data regarding software used in metabolomics studies has been difficult to find. An index of such software will be created and curated by a group at the University of Colorado Anschutz Medical Campus funded by the NIH. This data will be in the form of a spreadsheet. The authors will use the Software Ontology (SWO) (http://www.obofoundry.org/ontology/swo.html) to represent the data as RDF and load it into VIVO. Metabolomics Metabolomics is the scientific study of metabolites present within an organism, cell, or tissue. 
Human metabolites are small molecules found in human tissue that occur naturally as a result of human metabolism, or that are present as a result of drugs, food components, or exposure to environmental conditions. Along with genomics (the study of DNA), transcriptomics (RNA), and proteomics (proteins), metabolomics provides information regarding biochemical compounds and processes in cells, leading to a better understanding of cellular biology. “Metabolic profiling can give an instantaneous snapshot of the physiology of a cell, and thus, metabolomics provides a direct "functional readout of the physiological state" of an organism.” (Wikipedia http://bit.ly/2Pg06X3) Such profiles can be used to distinguish tissue types, disease states, and the health status of individuals. Understanding changes in profiles over time can lead to an improved understanding of diseases such as cancer and diabetes, and in turn to potential treatments. Over 450,000 human metabolites have been identified to date. Metabolomics is considered an emerging field. Technical challenges in compound identification and data analysis are significant. The NIH Common Fund Metabolomics Consortium originated in 2013 to help address these and other challenges. Fourteen investigators across the US are supported to advance metabolomics. Five are engaged in compound identification, seven in software tools and data analysis. The group at UCSD supports the Metabolomics Workbench, a data repository. The group at the University of Florida provides overall coordination for the consortium. FAIR Data Principles The FAIR Data Principles (Wilkinson et al., https://www.nature.com/articles/sdata201618) provide a framework for creating data that can be used by groups beyond the one that created it. The principles are difficult to achieve in practice, and much has been written about implementation. VIVO supports the principles naturally; it is designed to share data. 
• Findable – data should be found using search engines on the Internet • Accessible – the data can be accessed without technical, legal or operational barr
poster
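The mapping of workbench metadata to RDF described in the poster above can be sketched with plain N-Triples generation; the `example.org` namespace, the property choice, and the study values below are illustrative stand-ins, not the authors' actual ontology or MWB data:

```python
# Minimal sketch: turning Metabolomics Workbench-style study metadata into
# RDF triples (N-Triples serialization), the kind of record VIVO can load.

def ntriple(subject: str, predicate: str, obj: str, literal: bool = False) -> str:
    """Format one N-Triples statement; obj is a URI unless literal=True."""
    o = f'"{obj}"' if literal else f"<{obj}>"
    return f"<{subject}> <{predicate}> {o} ."

EX = "http://example.org/metabolomics/"      # illustrative namespace
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
RDFS = "http://www.w3.org/2000/01/rdf-schema#"

# Hypothetical study metadata as it might come back from an API call
study = {"id": "ST000001", "title": "Plasma metabolite profiling", "pi": "person42"}

triples = [
    ntriple(EX + study["id"], RDF + "type", EX + "Study"),
    ntriple(EX + study["id"], RDFS + "label", study["title"], literal=True),
    ntriple(EX + study["id"], EX + "hasInvestigator", EX + study["pi"]),
]
print("\n".join(triples))
```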
Humulus scandens (Japanese hop) Family Cannabaceae, Class Magnoliopsida An annual, twining, climbing plant. The leaves are palmate, opposite, 5-13 cm long, with 5-9 lobes and toothed margins. The flowers are pale green to yellowish, with five petals. Male and female flowers appear on separate plants (dioecious). The fruit is an ovoid, yellowish-brown achene. The small seeds are dispersed by wind and water. SIMILAR-LOOKING SPECIES Humulus lupulus: larger leaves with 3-5 lobes; the fruits resemble a cone. Parthenocissus quinquefolia: compound leaves of 5-7 smaller, serrated leaflets that turn an intense red in autumn; the fruits are small, dark-violet berries. Photo credit: https://www.discoverlife.org/20/q?search=Humulus+japonicus [Figure labels: flower with fruit; seeds; 5-13 cm; flower] HOW DO WE RECOGNIZE IT? Species of EU concern
poster
INTRODUCTION Our research is focused on the study of FIR images (1.5°, 100 px) of the region around WD GD 61, located at R.A. (J2000) 04h 38m 39.37s and Dec. (J2000) 41° 09′ 32.34″, using the Improved Reprocessing of the IRAS Survey (IRIS) and AKARI surveys from the SkyView Virtual Observatory, along with the SIMBAD Astronomical Database and the Gaia Archive of ESA. a) b) Figure 1: Isolated Dust Structure Contour Around WD GD 61 a) in IRIS 100 μm (major axis = 1.093°, minor axis = 33.62′) and b) in AKARI 90 μm, Pre-processed Using Aladin v10.0 (Tag 0 Represents the Pixel with Minimum Relative Flux Density). METHOD AND TECHNICAL DETAILS From various white dwarf candidate catalogues, clearly isolated dust structures around white dwarfs are selected in the IRIS and AKARI surveys using the SkyView Virtual Observatory, followed by cross-validation with the SIMBAD Astronomical Database. The .fits files of the selected region are then downloaded and pre-processed in Aladin v10.0. After contouring the isolated structure, the major and minor axes are determined to calculate the inclination angle (Holmberg, 1946). Pixel extraction is performed to access the relative flux density value of each pixel. The data are then processed to calculate dust properties such as the flux distribution, dust color map and temperature distribution (Schnee et al., 2005), subsequently visualized using Python. For the distance calculation, parallax data are accessed from the Gaia Archive of ESA. The calculated distance is then used to estimate the mass of the structure (Young et al., 1993). a) b) Figure 2: Dust Color Map of IRIS Relative Flux Density Distribution at a) 60 μm and b) 100 μm a) b) Figure 3: Dust Color Map of AKARI Relative Flux Density Distribution at a) 90 μm and b) 140 μm RESULTS The size of the cavity within the region is 23.18 pc × 11.97 pc, with an inclination angle of 61.53°. 
With the IRIS data, the temperature of the whole region was found to range from a maximum of 24.39 ± 0.61 K to a minimum of 22.53 ± 0.31 K, with an offset of 1.86 K and an average temperature of 23.16 ± 18.55×10⁻³ K. Similarly, using AKARI data, the temperature was found to range from a maximum of 14.48 ± 0.37 K to a minimum of 13.38 ± 0.18 K, with an offset of 1.12 K and an average temperature of 13.75 ± 1.10×10⁻³ K. The temperature distribution in both surveys was found to be Gaussian. a) b) Figure 5: Gaussian Fit of the Temperature Distribution in a) IRIS and b) AKARI Data STUDY OF DUST PROPERTIES AROUND THE WHITE DWARF GD 61 IN IRIS AND AKARI MAPS Sanjay Rijal1, Madhu Sudan Paudel2 1,2Tri-Chandra Multiple Campus, Institute of Science and Technology, Tribhuvan University, Nepal sanjay1.745401@trc.tu.edu.np1, mspaudel27@gmail.com2 ANALYSIS The inclination angle of the cavity indicates that it is neither a face-on (i→0°) nor an edge-on (i→90°) object. Very low offset temperatures (< 5 K) suggest that the cavity might be evolving independently, with little disruption from background radiation sources. The Gaussian distribution of the temperature in both surveys also implies that the region might be in local thermodynamic equilibrium. REFERENCES Holmberg, E. (1946). Investigations of the systematic errors in the apparent diameters of the nebulae. 117. Schnee, S. L., Ridge, N. A., Goodman, A. A., & Li, J. G. (2005). A complete look at the use of IRAS emission maps to estimate extinction and dust temperature. The Astrophysical Journal, 634(1), 442. Young, K., Phillips, T., & Knapp, G. (1993). Circumstellar shells resolved in IRAS survey data. II. Analysis. The Astrophysical Journal, 409, 725–738.
poster
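The two-band temperature estimate in the poster above (following the Schnee et al. 2005 approach of inverting a modified-blackbody flux ratio) can be sketched numerically. The emissivity index β = 2 and the band pair 60/100 μm are assumptions for illustration, not necessarily the exact calibration the authors used:

```python
import math

H = 6.626e-34   # Planck constant (J s)
K = 1.381e-23   # Boltzmann constant (J/K)
C = 2.998e8     # speed of light (m/s)

def flux_ratio(T, lam1_um=60.0, lam2_um=100.0, beta=2.0):
    """Modified-blackbody flux ratio S(lam1)/S(lam2) for dust at temperature T:
    S_nu ~ nu^beta * B_nu(T), so the ratio is (nu1/nu2)^(3+beta) times the
    ratio of the Planck exponential factors."""
    nu1 = C / (lam1_um * 1e-6)
    nu2 = C / (lam2_um * 1e-6)
    return ((nu1 / nu2) ** (3 + beta)
            * math.expm1(H * nu2 / (K * T)) / math.expm1(H * nu1 / (K * T)))

def dust_temperature(ratio, lo=5.0, hi=100.0, beta=2.0):
    """Invert the ratio for T by bisection; the ratio grows monotonically with T."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if flux_ratio(mid, beta=beta) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: a 60/100 micron flux ratio of 0.3 corresponds to cold ISM-like dust
print(round(dust_temperature(0.3), 2), "K")
```

Applying this per pixel over the extracted flux-density maps yields a temperature map like the Gaussian-distributed ones reported for IRIS and AKARI.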
Data enrichment & indexation Links to Simbad, VizieR, the data producer,... Positional indexation Photometry Data description Catalogs/tables/columns description Abstract Acknowledgments, copyrights... fits fits fits images Tabular data SQL-like script Tabular data ReadMe Awk script METAdata Origin: author, date... Keywords: mission, wavelength, astronomical words Acronym or usual name Media content: images, spectra, cubes meta-data (tex) FTP Spectrum output An intuitive web application using « widgets » to display the result of an SQL-like query script, including capabilities such as zoom and scaling. The SED output The VizieR web application A big data collection ●~14,000 catalogs coming from the most popular astronomical journals: A&A, ApJ, AJ, MNRAS,... ●Large survey catalogs: ESA, NASA,... ➔2MASS (~450,000,000 records) ➔SDSS (~800,000,000 records) ➔GAIA simulation catalog (2,000,000,000 records) ➔... ●~400 catalogs containing spectra ●~1,150 catalogs containing time series (CoRoT: ~300,000 time series) ●~130 catalogs containing images A close collaboration between documentalists, astronomers and computer scientists The VizieR ingestion pipeline is operated by documentalists and astronomers for the catalogs coming from journals. The big catalogs coming from large surveys are ingested by computer scientists and astronomers. Added value The documentalists add relevant information such as URL links to external applications, links to the Simbad database or to objects in other VizieR catalogs. The documentalists add visualization output for associated data such as spectra, or incorporate images in the VizieR result page (e.g. CoRoT, Herschel). Catalog management at CDS ~ project support ~ VizieR provides access to the most complete library of published astronomical catalogs (data tables and associated data) available online and organized in a self-documented database (12,359 catalogs in May 2014). 
The VizieR search engine uses indexation based on several criteria, which requires the expertise of scientists and documentalists to ingest each catalog. These meta-data are backed by an efficient positional search engine adapted to big data. G. Landais, T. Boch, M. Brouty, S. Guehenneux, F. Genova, F. Ochsenbein, P. Ocvirk, E. Perret, FX. Pineau, AC. Simon, P. Vannier, S. Lesteven Observatoire Astronomique de Strasbourg, Université de Strasbourg/CNRS, UMR 7550 Catalog entry Catalog archived A data storage adapted to big surveys VizieR is involved in the dissemination of data coming from missions. The VizieR positional indexation is adapted to large data volumes: VizieR currently provides data from missions such as 2MASS, SDSS and WISE, which produce large volumes. This efficient data access, including cross-matching capabilities with other VizieR catalogs or with SIMBAD, makes CDS a partner of major projects such as GAIA. VO dissemination The VOTable is a standard output available in VizieR and is recognized by VO software such as the Aladin image viewer (CDS) or the spreadsheet TOPCAT (University of Bristol). The VOTable output is an XML format including standardized definitions (UCDs) which are part of the VizieR meta-data. The VizieR data and VizieR search capabilities (such as searching by position across all of VizieR) are fully accessible through the VO. VizieR collaborations with space missions: VizieR is present at different stages of a mission: ●Collaboration with the Gaia project, providing the GAIA simulation catalog with all CDS capabilities. ●Collaboration with CoRoT to publish a catalog containing >300,000 time series of exoplanets. ●Providing a set of catalogs used by the XMM pipeline. Taking advantage of meta-data to compute photometry plots among ~2,400 catalogs and ~1,000 filters This widget integrates three linked views: a zoomable plot with photometry points, a sky chart and the VizieR tabular data. 
The VizieR web application offers facilities such as the all-sky viewer « Aladin Lite », links to external applications, and simple tools (SAMP) to send data to a VO application.
poster
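The positional indexation that makes cone searches over hundreds of millions of rows feasible, as described in the poster above, can be sketched with a toy cell index: sources are bucketed into fixed-size sky cells, so a query only scans nearby cells before exact filtering. Real systems such as VizieR use hierarchical schemes; this flat grid, which ignores RA wrap-around at 0°/360°, is only illustrative:

```python
import math
from collections import defaultdict

CELL_DEG = 1.0  # cell size in degrees (illustrative choice)

def cell_of(ra, dec):
    """Map (RA, Dec) in degrees to an integer grid cell."""
    return (int(ra // CELL_DEG), int((dec + 90) // CELL_DEG))

def build_index(sources):
    """Bucket (ra, dec, name) tuples by sky cell."""
    index = defaultdict(list)
    for ra, dec, name in sources:
        index[cell_of(ra, dec)].append((ra, dec, name))
    return index

def angular_sep(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (spherical law of cosines)."""
    r = math.radians
    cos_sep = (math.sin(r(dec1)) * math.sin(r(dec2)) +
               math.cos(r(dec1)) * math.cos(r(dec2)) * math.cos(r(ra1 - ra2)))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

def cone_search(index, ra, dec, radius):
    """Scan only the cells overlapping the cone, then filter exactly."""
    ncells = int(radius // CELL_DEG) + 1
    ci, cj = cell_of(ra, dec)
    hits = []
    for di in range(-ncells, ncells + 1):
        for dj in range(-ncells, ncells + 1):
            for sra, sdec, name in index.get((ci + di, cj + dj), []):
                if angular_sep(ra, dec, sra, sdec) <= radius:
                    hits.append(name)
    return hits

catalog = [(10.68, 41.27, "M31"), (83.82, -5.39, "M42"), (10.9, 41.5, "nearby")]
idx = build_index(catalog)
print(cone_search(idx, 10.7, 41.3, 0.5))  # ['M31', 'nearby']
```

The payoff is that query cost scales with the number of sources in a few cells rather than the whole catalog, which is what makes billion-row tables searchable by position.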
Developing Metrics for Assessing the Impact of Open Science Services Huajin Wang, Melanie Gainey, Patrick Campbell, Katie Behrman, Sarah Young University Libraries, Carnegie Mellon University, Pittsburgh, PA, USA Summary Preliminary results Methods Logic Model Metrics framework development Program overview Future assessment development Research Data Management DMP Tool Open Science Symposium AIDR Collaborative Bioinformatics Hackathon dataCoLab protocols.io Open Science Framework KiltHub R Python OpenRefine Citizen Science LabArchives APC Fund Open Educational Resources ●Usage and attendance data were collected from vendors and dashboards for the following tools and platforms: KiltHub (Institutional Repository), Open Science Framework, LabArchives, protocols.io, OSDC Newsletter, open science-themed workshops, Carpentries workshops, Open Science Symposium, and AIDR. ●The KiltHub dashboard includes departmental affiliation for all staff, faculty, and graduate students at CMU. We used that dataset to determine the departmental affiliation for users and attendees of all other platforms and events. Since undergraduates are not included in the KiltHub dashboard, we did not include their departmental affiliation and they are represented in our data as “undergraduates”. ●Data collected between January and April 2021 for each platform or event was used to generate summary data on departmental and institutional affiliation of users. ●Platform usage over time data for KiltHub, LabArchives, and protocols.io were collected in December 2021. University Libraries’ Open Science and Data Collaborations (OSDC) Program at Carnegie Mellon University (CMU) was created in 2018 to meet the growing needs and interest from researchers in making data and other research products publicly available and reproducible. The program offers five categories of support from librarians and staff with expertise in research and data sharing: tools, trainings, events, assessment, and collaboration. 
Our services provide end-to-end support throughout the research lifecycle. After three years of building the program, we are currently assessing the success of our initiatives with the development of a logic model and usage metrics. Development of metrics will allow us to take the scattered usage or attendance data from each platform and event and integrate them into a single dataset. We can then answer questions about who is using our services, how they are using them, and what impact the services are having on research workflows. Preliminary data suggest that users come from a broad swath of disciplines, with researchers from the Heinz College of Information Systems and Public Policy being the heaviest users of program services. The department with the highest number of public items on KiltHub was the Software Engineering Institute, but these deposits represented many users who each have a low number of uploads. Superusers (users with a high number of public items) came from other departments such as Psychology, Computer Science, and Materials Science Engineering. Platform usage for KiltHub, LabArchives, and protocols.io has increased each year since 2019, when we licensed protocols.io and LabArchives. Metrics data and feedback from the Advisory Group will be used to design targeted surveys. Advisory group from top users (2021) User data Targeted surveys & interviews ●How much time / money are we saving researchers? ●How many grant awards / publications / career opportunities are we helping researchers to obtain? ●How many communities are we helping to adopt open science? ●How much are we contributing to the cultural shift? ●Who uses our tools and participates in our activities? ●Who are our top users? ●Which disciplines are the most engaged? ●How do people use our tools or activities? ●Why do people use our tools or activities? ●What impact are we making? 
Integrated user data Scattered data from each service Questions 5 Ws and 1 H Metric Variable(s) Source of Data Who User affilia
poster
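The integration step described in the poster above, joining scattered per-service usage records against a single affiliation lookup (with users missing from the lookup treated as undergraduates, as the methods note), can be sketched as follows; the names and counts are fabricated illustrations, not program data:

```python
from collections import Counter

# Stand-in for the KiltHub departmental dashboard lookup (illustrative values)
affiliation = {
    "alice": "Psychology",
    "bob": "Computer Science",
    "carol": "Heinz College",
}

# Scattered usage logs from each platform (illustrative values)
service_logs = {
    "protocols.io": ["alice", "carol"],
    "LabArchives": ["bob", "carol", "dana"],  # dana: not in lookup -> undergraduate
}

# Integrate into one dataset, resolving each user's department
integrated = []
for service, users in service_logs.items():
    for user in users:
        dept = affiliation.get(user, "undergraduate")
        integrated.append({"service": service, "user": user, "department": dept})

# Now the "which disciplines are most engaged?" question is a simple count
by_department = Counter(row["department"] for row in integrated)
print(by_department.most_common())
```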
© 2022 California Institute of Technology. Government sponsorship acknowledged. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not constitute or imply its endorsements by the United States Government or the Jet Propulsion Laboratory, California Institute of Technology. National Aeronautics and Space Administration Jet Propulsion Laboratory California Institute of Technology Pasadena, California Federated Digital Twins for Flood Prediction and Analysis NASA AIST IDEAS: Thomas Huang1, Cedric David1, Gary Doran1, Jason Kang1, Grace Llewellyn1, Kevin Marlis1, Miles Milosevich (intern)1, Stepheny Perez1, Wai (William) Phyo1, Joe T. Roberts1, Catalina M. Oaida Taglialatela1 | Sujay V. Kumar2, Nishan Biswas2 | Paul Stackhouse3, David Borges3, Madison P. Broddle3, Bradley MacPhserson3 SCO FloodDAM DT: Raquel Rodriguez Suquet4, Simon Baillarin4, Frederic Bretar4,5, Gwendoline Blanchet4, Peter Kettig4 | Sophie Ricci6, Andrea Piacentini6, Thanh-Huy Nguyen6 | Guillaume Valladeau7, Jean-Christophe Poisson7 | Alice Froidevaux8, Antoine Guiot8, Romane Raynal8, Huynh Thanh-long8, Christophe Fatras9, Sylvain Brunato9, Eric Guzzonato9 [1] NASA Jet Propulsion Laboratory, Caltech Institute of Technology; [2] NASA Goddard Space Flight Center; [3] NASA Langley Research Center; [4] Centre National d’Etudes Spatiales; [5] Space Climate Observatory; [6] CECI, CERFACS/UMR5318-CNRS; [7] Vortex.IO; [8] QuantCube; [9] Collecte Localization Satellites Abstract Water resource science is multidisciplinary in nature, and it not only assesses the impact from our changing climate using measurements and modeling, but it also offers science-guided, data-driven decision support. An Earth System Digital Twin (ESDT) is a dynamic, interactive, digital replica of the state and temporal evolution of Earth systems. 
It integrates multiple models along with observational data, and connects them with analysis, AI, and visualization tools. Together, these enable users to explore the current state of the Earth system, predict future conditions, and run hypothetical scenarios to understand how the system would evolve under various assumptions. NASA's Advanced Information Systems Technology (AIST) Integrated Digital Earth Analysis System (IDEAS) project has partnered with France's Space Climate Observatory (SCO) FloodDAM Digital Twin effort to establish an extensible architectural solution for developing digital twins of our physical environment for Earth science. The joint effort delivers a formal system architecture with mechanisms for the outputs of one model to feed into others; for driving models with observation data; and for harmonizing observation data and model outputs for analysis. The work presents a multi-agency joint effort to define and develop a digital twin for the Earth system that includes continuous integration and harmonization of the latest measurements, assimilated model simulations, AI-driven scenario-based actionable predictions, and dynamic integration of the most recent, relevant observations, with an initial focus on water resources and flood analysis case studies as a proof of concept. Automate Access to Many Repositories and Services Acquire Observation and Analysis Decision Support and Science Planning Forecast and Prediction Assimilation and Numerical Models AI-based Analysis Harmonize Observation and Model Data Access and Analysis FloodDAM FloodDAM NASA Advanced Information Systems Technology IDEAS: Integrated Digital Earth Analysis System got ideas? 
Data-Driven Community-Driven Professional Open-Source Analysis-Driven New Observing Strategies Analytic Collaborative Frameworks Latest Observations Numerical Models Actionable Predictions Federated Architecture Multi-Agency Multi-Center Multi-Computing On-demand Product Generation Clearance: CL#23-3441 Mississippi River 2019-03 - 2019-08 Garonne River 2022–01-01 - 2022-01-31 Observations & Analysis Apache
poster
A User-centric Approach to Trust Management in Wi-Fi Networks Carlos Ballester (University of Geneva), Jean-Marc Seigneur (University of Geneva), Paolo di Francesco (Level7), Valentin Moreno (FON), Rute Sofia (SITILabs, University Lusófona), Waldir Moreira (SITILabs, University Lusófona), Alessandro Bogliolo (University of Urbino), Nuno Martins (Caixa Mágica Software) Abstract—This technological demonstration presents a new trust management framework which can assist in achieving secure connections in wireless networks, without necessarily implying the use of strong security associations. The demonstration is based on open-source software, and has been developed to run on OpenWRT as well as on Android. I. INTRODUCTION The EU ULOOP project [1] is investigating and implementing technology to overcome the limitations of today's broadband access technologies, expanding the backbone infrastructure by means of low-cost wireless technologies that embody a multi-operator model, i.e., a local loop based upon what a specific community of individuals (end-users) is willing to share, backed up by specific cooperation incentives and “good behaviour” rules. This represents a paradigm shift in the evolution of the Internet, as the user may be in control of parts of the network, in a way that is acknowledged (or not) by Internet stakeholders. In such scenarios, where several strangers are expected to interact for the sake of robust data transmission, trust is of vital importance, as it establishes a way for the nodes involved in the system to communicate with each other in a safe manner, to share services and information, and above all, to form communities that assist in sustaining robust connectivity models. This technological demonstration goes over the trust management framework provided in ULOOP, which has currently been developed to run both on open-source Access Points (OpenWRT) and on Android. 
The demo shall assist in explaining how ULOOP envisions trust management to achieve security in a flexible way, without necessarily implying the use of strong security associations. II. TRUST MANAGEMENT IN ULOOP In ULOOP, trust management and cooperation incentives are related to understanding how to define and build circles of trust on-the-fly. Such circles of trust aim at sustaining an environment that allows devices to share resources to support the dynamic behavior of user-centric networks. Trust management is based on reputation mechanisms able to identify end-user misbehavior and to address social aspects, e.g., the different levels of trust users may have in different communities (e.g., family, affiliation). In situations where the created network of trust is not enough to allow resources to be shared, ULOOP devices are able to use a cooperation incentive scheme that allows a node to gain credits in an amount directly proportional to the amount of shared resources: such credits can then be used to gain access to other resources. Hence, trust management and cooperation incentive aspects are split into three main blocks: i) Trust management; ii) Cooperation Incentives; iii) Identity management. From a ULOOP software suite perspective, and as has been explained in [2], there are modules which are activated depending on whether a ULOOP element plays the role of a regular node or of a gateway. A ULOOP gateway is a role (software functionality) that reflects an operational behavior making a ULOOP node capable of acting as a mediator between ULOOP systems and non-ULOOP systems – the outside world. This gateway role may or may not be owned and controlled by a ULOOP user; it may also be controlled by an access operator. The key differentiating factor of the gateway role, in contrast to a regular ULOOP node, is its operational intelligence and mediation capability. 
Similarly to ULOOP nodes, the ULOOP gateway functionality may reside in the user-equipment, in Access Points, or even in the access network. Hence, they exhibit a feature that is key in user-centric envi
poster
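The cooperation-incentive scheme in the poster above, credits earned in proportion to shared resources and spent to consume others', can be sketched as a small accounting model; the rate constant and the class shape are illustrative assumptions, not the ULOOP implementation:

```python
CREDIT_PER_MB = 0.1  # credits earned per MB shared (assumed rate, for illustration)

class Node:
    """Toy ULOOP-style node tracking its cooperation credits."""

    def __init__(self, name):
        self.name = name
        self.credits = 0.0

    def share(self, megabytes):
        """Sharing resources earns credits in direct proportion."""
        self.credits += CREDIT_PER_MB * megabytes

    def consume(self, megabytes):
        """Consuming others' resources spends credits; refuse if insufficient."""
        cost = CREDIT_PER_MB * megabytes
        if cost > self.credits:
            return False
        self.credits -= cost
        return True

n = Node("node-1")
n.share(500)                 # earns 50 credits
print(n.consume(200))        # True: costs 20 credits
print(round(n.credits, 1))   # 30.0
```

Coupling such credits with the reputation scores from the trust block is what lets a node with no prior trust relationships still bootstrap access by contributing resources first.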
BETWEEN THE POLITICAL AND THE FAMILIAL: HATE SPEECH AND CONJUNCTURAL CROSSINGS Victória Rosa da Silva. Universidade Federal Fluminense. Niterói - Rio de Janeiro. Waldenilson Teixeira Ramos. Fundação de Empreendimentos Científicos e Tecnológicos (FINATEC). Niterói - Rio de Janeiro. Objective: This work is a research report that sets out to reflect on the current political conjuncture in Brazil and the dynamics of illness it can provoke within the family. Methodology: We start from a Critical Social Psychology, instrumentalized through the contributions of Michel Foucault (2021) and of Gilles Deleuze and Félix Guattari (2011), with the parallel support of Brazilian news reports on incidents of violence in the family context and their repercussions on mental health, especially during the country's electoral periods. Results and Discussion: On 9 July 2022, Marcelo Arruda was killed in the middle of his family birthday celebration because of political disagreements. A shootout took place in which, at the end, Marcelo's killer was kicked by the family members present while he lay shot on the ground. The scene shocks not only for the violence involved but for the deep psychic wounds left in the family members who witnessed the event unfold. This scenario illustrates a certain allegory of the reality lived by some Brazilian families, revealing a symptomatic picture in times of extreme polarization of the country's political scene, in which disagreement between political parties has the potential to instigate aggressive behavior, unprecedented violence, and mental illness. Moreover, the current Brazilian conjuncture is marked by the insurgent spread of hate speech, which acts as a potentiator of this intra-family illness. References: DELEUZE, Gilles & GUATTARI, Félix. Mil platôs: capitalismo e esquizofrenia, 2nd ed. 
Rio de Janeiro: Editora 34, 2011. FOUCAULT, Michel. Microfísica do Poder. 13. ed. São Paulo: Editora Paz e Terra; 2021. INFOGRÁFICO: ENTENDA ORDEM DOS ACONTECIMENTOS NO DIA DO ASSASSINATO DE PETISTA EM FESTA DE ANIVERSÁRIO, SEGUNDO A POLÍCIA. Site G1 - Oeste e Sudoeste. Disponível em: <https://g1.globo.com/pr/oeste-sudoeste/noticia/2022/07/16/infografico-entenda-ordem-dos-acontecimentos-no-dia-d o-assassinato-de-petista-em-festa-de-aniversario-segundo-a-policia.ghtml> . Último acesso em 24 de maio de 2023. MENDES, Sandy. PETISTA É ASSASSINADO POR BOLSONARISTA EM FESTA DE ANIVERSÁRIO NA QUAL HOMENAGEAVA LULA. Site do OUL: Congresso em Foco. Disponível em: <https://congressoemfoco.uol.com.br/area/pais/homem-e-morto-por-bolsonarista-em-aniversario-com-tematica-do- pt/>. Último acesso em 24 de maio de 2023.
poster
Structural and Effective Pathways to the Amygdala involved in Spatial Frequency and Emotion Processing Jessica McFadyen1,6, Martial Mermillod2,3, Veronika Halász1,4, Jason B. Mattingley4,5,6, & Marta Garrido1,4,6 1 Centre for Advanced Imaging, University of Queensland, St Lucia, QLD, Australia | 2 University Grenoble Alpes, Grenoble, France | 3 Institut Universitaire de France, Paris, France | 4 Queensland Brain Institute, University of Queensland, St Lucia, QLD, Australia | 5 School of Psychology, University of Queensland, St Lucia, QLD, Australia | 6 Australian Research Council Centre of Excellence for Integrative Brain Function, Australia
Research Question: Is there a functional subcortical route to the amygdala that bypasses the visual cortex and, if so, what spatial frequency and emotional content does it carry?
Method
Participants: N = 26, 50% female, 18-32 years (M = 22.69).
Design: duration = 200 ms, ISI = 750-1500 ms, viewing angle = 22.8°.
Technique: Magnetoencephalography (MEG). Task: report the gender of the face.
Regions of interest: Lateral Geniculate Nucleus (LGN), Pulvinar (PUL), Primary Visual Cortex (V1), Amygdala (AMY); subcortical, medial and cortical routes.
[Figures: spatiotemporal results, including the latency advantage for low spatial frequency (up to ~15 ms) and a cross-correlation lag of 13 ms between conditions (BSF/LSF/HSF; fearful vs. neutral).]
References
1 Garvert, M. M., Friston, K. J., Dolan, R. J., & Garrido, M. I. (2014). Subcortical amygdala pathways enable rapid face processing. NeuroImage, 102, 309-316.
2 Tamietto, M., & De Gelder, B. (2010). Neural bases of the non-conscious perception of emotional signals. Nature Reviews Neuroscience, 11(10), 697-709.
3 Pessoa, L., & Adolphs, R. (2010). Emotion processing and the amygdala: from a 'low road' to 'many roads' of evaluating biological significance. Nature Reviews Neuroscience, 11(11), 773-783.
Background
Dual Hypothesis2: there is a colliculo-pulvinar pathway to the amygdala: fast, direct, and used for low spatial frequencies and emotional information.
Cortical Hypothesis3: transmission of visual information to the amygdala is via the cortex: just as fast, used for all information, and any pulvinar influence is due to cortical input.
Conclusions
Using statistical parametric mapping (SPM) and dynamic causal modelling (DCM) we found:
1. A dual model explained the data better than other plausible anatomical models1.
2. Simulated amygdala activity was influenced by the subcortical route as early as 70 ms.
3. Low spatial frequency modulated neural activity earlier than high spatial frequency.
4. The subcortical route was not modulated by spatial frequency, while the cortical route conveyed primarily high spatial frequencies.
[Figures: latency cross-correlation analyses (lags = 12-13 ms); statistical parametric maps (p < .05 FWE) of low vs. high spatial frequency and of neutral vs. fearful emotional expressions; dynamic causal modelling model space (Cortical, Dual, Medial and All variants of LGN-V1-PUL-AMY connectivity) with exceedance probabilities, including models modulated by spatial frequency (low vs. high), emotional expression (neutral vs. fearful), or both.]
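The lag values quoted in the cross-correlation panels come from locating the peak of the cross-correlation between two condition time courses. A minimal sketch of that step on synthetic data (the Gaussian "evoked responses" and the 1 ms sampling are invented for illustration; the real analysis used MEG sensor data):

```python
import numpy as np

def peak_lag_ms(x, y, dt_ms):
    """Return the lag (in ms) at which the cross-correlation of two
    time courses peaks; positive means y lags behind x."""
    x = x - x.mean()
    y = y - y.mean()
    xcorr = np.correlate(y, x, mode="full")
    lags = np.arange(-len(x) + 1, len(x))
    return lags[np.argmax(xcorr)] * dt_ms

# Synthetic responses: a Gaussian peak and a copy delayed by 13 ms
t = np.arange(0, 600)                       # 1 ms sampling grid
low_sf = np.exp(-((t - 150) ** 2) / 200.0)
high_sf = np.exp(-((t - 163) ** 2) / 200.0)
print(peak_lag_ms(low_sf, high_sf, dt_ms=1))   # -> 13
```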
poster
Convection in M-type dwarfs: predictions from 3D hydrodynamical model atmospheres J. Klevas1, A. Kučinskas1, H.-G. Ludwig2 1 Astronomical Observatory of Vilnius University, Saulėtekio al. 3, Vilnius 10257, Lithuania (email: jonas.klevas@tfai.vu.lt) 2 Zentrum für Astronomie der Universität Heidelberg, Landessternwarte, Königstuhl 12, 69117 Heidelberg, Germany
Although convection plays an important role both in the atmospheres and interiors of M-type dwarfs, its impact on the atmospheric structures and observable properties of these stars is still relatively poorly understood. In particular, the standard 1D model atmospheres used in the analysis of M-type dwarfs rely on a calibration of convection via the mixing length parameter, αMLT. As a rule, 1D model atmospheres of M-type dwarfs are computed using a fixed αMLT value, while in reality it may vary considerably across the stellar parameter space (Magic 2015; Sonoi 2019). With the aid of 3D hydrodynamical model atmospheres, we investigate how the mixing length parameter αMLT changes with effective temperature, gravity and metallicity over the ranges typical of Galactic M-type dwarfs. We also provide a grid of αMLT values that can be used with classical 1D model atmospheres.
To study the convective properties of M-type dwarfs we used a new grid of 3D hydrodynamic model atmospheres (Klevas et al., in prep.) computed with the CO5BOLD code (Freytag et al. 2012). The models were calculated using opacities grouped into 14 opacity bins based on continuum formation depth. All models had the same resolution of 140 x 140 x 160 grid points, which roughly corresponded to 1000 x 1000 x 400 km, and covered atmospheric layers in the range -6 < log τRoss < 4. To investigate how convection works in M-type dwarfs, we used for comparison 1D hydrostatic models computed with the LHD code, using the same equation of state and opacities as the 3D hydrodynamical CO5BOLD models. This allows a direct differential comparison with the CO5BOLD models, leaving the 1D hydrostatic nature and the parametrization of convection as the only sources of differences from the 3D hydrodynamic model atmospheres. The LHD model atmospheres were computed using mixing length theory (Mihalas 1978) for a set of different values of αMLT.
Fig. 1. The mixing length parameter, αMLT, obtained as the best fit of the 1D model atmosphere entropy profile to that of the corresponding mean 3D model. Models in each column have identical Teff and log g values but differ in their metallicities ([M/H]) and best-fitted αMLT. Grey dots are M-dwarfs observed by the Kepler and TESS space missions.
We present new results aimed at calibrating the αMLT parameter for use with 1D hydrostatic model atmospheres of M-type dwarfs. In the range of atmospheric parameters typical of M-type dwarfs, the obtained αMLT estimates vary between 1.6 and 2.2 depending on the model effective temperature, gravity and metallicity, with [M/H] having the largest impact on αMLT. The effects of Teff and log g are smaller and lead to changes of ΔαMLT < 0.3 and < 0.2 over intervals of ΔTeff = 1000 K and Δlog g = 0.5, respectively. While the 1D MLT models may represent the optically thick layers of M-type dwarfs reasonably well, there are significant deviations between the entropy profiles predicted by the 1D MLT models and the averaged 3D hydrodynamical model atmospheres in the outer atmospheric layers. This happens because the overshooting of convective flows into layers above the optical surface, and the subsequent adiabatic cooling, are not accounted for in the 1D models. These differences cannot be neglected, especially in more metal-poor stars where the cooling of these layers is more efficient (Fig. 3). Extra caution is advised when layers outside the overshooting zone (log τRoss < -2) are of interest, because of this cooling. As shown in Fig. 1, αMLT does vary significantly across the atmospheric parameter space. In particular, it is very sensitive to metallicity, where the difference in αMLT in model
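The Fig. 1 calibration, fitting the 1D entropy profile to the mean 3D one, amounts to a one-parameter search over a grid of 1D models. A minimal sketch of that selection step (the tanh "entropy profiles" and the alpha grid are invented stand-ins for real LHD/CO5BOLD output):

```python
import numpy as np

def best_fit_alpha(entropy_3d, profiles_1d):
    """Return the alpha_MLT whose 1D entropy profile best matches
    the mean 3D profile (minimum RMS difference)."""
    best_alpha, best_rms = None, np.inf
    for alpha, profile in profiles_1d.items():
        rms = np.sqrt(np.mean((profile - entropy_3d) ** 2))
        if rms < best_rms:
            best_alpha, best_rms = alpha, rms
    return best_alpha

# Synthetic illustration: pretend the "true" profile matches alpha = 2.0
depth = np.linspace(-6, 4, 50)            # log tau_Ross grid
def toy_profile(alpha):                   # hypothetical 1D entropy run
    return 1.0 + alpha * np.tanh(depth)

grid = {a: toy_profile(a) for a in (1.6, 1.8, 2.0, 2.2)}
print(best_fit_alpha(toy_profile(2.0), grid))   # -> 2.0
```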
poster
Isophote Shapes of Early-Type Galaxies in Massive Clusters at z ~ 1 and 0 Kazuma Mitsuda(1), Mamoru Doi(1), Tomoki Morokuma(1), Nao Suzuki(1), Naoki Yasuda(1), Saul Perlmutter(2), Greg Aldering(2), Josh Meyers(3) GALAXY EVOLUTION ACROSS TIME, Paris, June 12-16, 2017
① Introduction
Dynamics of early-type galaxies (ETGs), whether they are supported by rotation or by dispersion, is a clue to understanding their assembly history. We compare the isophote shape parameter a4 between z ∼ 1 and 0 as a proxy for dynamics, to investigate the epoch at which the dynamical properties are established. We create cluster ETG samples with stellar masses of log(M✽/M⦿) ≥ 10.5 and spectroscopic redshifts: 130 ETGs from the Hubble Space Telescope Cluster Supernova Survey at z ∼ 1 and 355 ETGs from the Sloan Digital Sky Survey at z ∼ 0. We find a similar dependence of the a4 parameter on mass at z ∼ 1 and 0: the main population changes from disky (a4 > 0) to boxy (a4 ≤ 0) at a critical mass of log(M✽/M⦿) ~ 11.5, with the massive end dominated by boxy ETGs. The disky ETG fraction is consistent between these redshifts. Although uncertainties are large, the results suggest that the isophote shapes, and probably the dynamical properties, of cluster ETGs are already in place at z > 1 and do not evolve significantly at z < 1, despite significant size evolution. The constant disky fraction implies that the processes responsible for the size evolution are not violent enough to transform the dynamical properties of ETGs.
● Dynamics and shapes of early-type galaxies (ETGs)
Slow rotators (SRs) are massive (log(M✽/M⦿) ≳ 11.3), dispersion-supported, and tend to be boxy (a4 < 0). Fast rotators (FRs) are less massive (log(M✽/M⦿) ≲ 11.3), rotation-supported, and tend to be disky (a4 > 0); among early-type galaxies, disky Es and S0s have a4 > 0 and boxy Es a4 < 0 (late types: Sab, Scd). The Illustris simulation (Penoyre+17) predicts an increase of the fast-rotator fraction fFR from z ~ 1 to 0, with FRs at z ~ 1 turning into SRs at z ~ 0 through major mergers etc. We investigate the evolution of the disky (a4 > 0) fraction as a proxy for fFR.
● z ~ 1 and 0 cluster ETG samples
z ~ 1: HST Cluster Supernova Survey (z850 images); z ~ 0: Sloan Digital Sky Survey (g images).
● Measuring the a4 parameter
① Determine isophote contours; ② fit ellipses with Fourier deviations; ③ take the luminosity-weighted mean a4 within 2 rPSF < r < 2 rh of the radial profile (right figures: examples of disky and boxy z = 1.2 galaxies). Radial profiles of the a4 parameter etc. are derived.
● Dependence of the a4 parameter on mass and size
The massive end (log(M✽/M⦿) ≳ 11.5) is dominated by boxy ETGs at both z ~ 0 and 1. For less massive ETGs, smaller galaxies tend to be more disky, with larger a4 values, at both z ~ 0 and 1 (averaged a4 colour-coded on the size-mass plane).
● Disky ETG fraction (fdisky) at z ~ 0 and 1
▹ The z ~ 0 sample has galaxies with large sizes (σ ~ 50-100 km/s): recently (z < 1) quenched galaxies?
▹ The disky fraction is consistent between z ~ 0 and 1 within the uncertainty, taking account of the Eddington bias (no significant evolution at z < 1). Boxy shapes of massive ETGs are already in place at z > 1.
▹ Isophote shapes (boxy/disky) may not reflect dynamical states (SR/FR), given the discrepancy between fdisky in observation and fFR in simulation at z = 0.
● Size evolution of the ETG samples
Size-mass relation and normalized size distribution of boxy/disky ETGs, with normalized size log(re,M11/kpc) = log(re/kpc) - 0.57 · {log(M✽/M⦿) - 11}: significant size evolution between z ~ 1 and 0, for both boxy and disky ETGs.
References
K. Mitsuda et al., 2017, ApJ, 834, 109; E. Emsellem et al., 2011, MNRAS, 414, 888; Cappellari+13; Kormendy & Bender 1996; Penoyre+17.
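The a4 coefficient measured in step ② is the amplitude of the cos(4θ) Fourier deviation of an isophote from a pure ellipse (a4 > 0: disky, a4 < 0: boxy). A minimal sketch of that fit on a synthetic isophote (the radii and the 0.05 amplitude are invented; real measurements fit deviations from the best-fitting ellipse of each contour):

```python
import numpy as np

def fit_a4(theta, radii):
    """Least-squares fit of the cos(4*theta) Fourier coefficient of an
    isophote's radial profile (a4 > 0: disky, a4 < 0: boxy)."""
    # Design matrix: mean radius plus 3rd- and 4th-order Fourier terms
    A = np.column_stack([
        np.ones_like(theta),
        np.cos(3 * theta), np.sin(3 * theta),
        np.cos(4 * theta), np.sin(4 * theta),
    ])
    coeffs, *_ = np.linalg.lstsq(A, radii, rcond=None)
    return coeffs[3]              # coefficient of cos(4*theta)

# Synthetic disky isophote: r(theta) = 1 + 0.05 cos(4 theta)
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
r = 1.0 + 0.05 * np.cos(4 * theta)
print(round(fit_a4(theta, r), 3))   # -> 0.05
```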
poster
Development of an economic model to assess the cost-effectiveness of biosecurity measures to reduce the burden of Salmonella and hepatitis E virus in the pork production chain
Bester C.1, Marschik T.1, Schmoll F.2, Käsbohrer A.1,3 1 Unit of Veterinary Public Health and Epidemiology, University of Veterinary Medicine Vienna; 2 Division for Animal Health, Austrian Agency for Health and Food Safety (AGES), Moedling, Austria; 3 Unit Epidemiology, Zoonoses and Antimicrobial Resistance, German Federal Institute for Risk Assessment, Berlin, Germany
1 Introduction
In Europe, salmonellosis remains the second most common foodborne disease1, while hepatitis E virus (HEV) prevalence in humans has increased significantly during recent decades.2 Studies have shown that pigs are common hosts of Salmonella and HEV; hence, pork products qualify as a risk factor for these zoonotic diseases. Disease-associated economic effects play an important role in decision-making processes: significant economic consequences have been identified as a result of pork-attributable human salmonellosis cases (approximately € 90 million annually in the European Union)3, whereas the monetary impact of HEV cases has not yet been determined. Effective biosecurity measures (BSM) along the production chain can reduce this impact. Therefore, the cost-effectiveness of applicable BSMs must be evaluated to support the establishment of national disease control programmes.
► The present analysis evaluates the cost-effectiveness of specific BSMs proven to reduce the prevalence of Salmonella or HEV at specific stages along the pork production chain (PPC). The following outlines the concept behind the analysis.
2 Concept and methods applied
► Specific effective BSMs collated within BIOPIGEE will be evaluated in pathogen-specific quantitative microbial risk assessment (QMRA) models, parameterized for selected countries. The QMRA output, namely the change in the number of human cases and the prevalence reduction in pigs due to selected BSMs along the PPC, will be translated into monetary benefits. The costs of the selected biosecurity measures (11) along the PPC are evaluated with BSM-specific calculations per sow, slaughter pig or farm, as conditioned by the BSM, using country-specific values where available, drawn from published data, national guidelines and experts. In a cost-benefit analysis, conducted for each of the pathogens, the disease-specific associated costs and the costs of BSMs will be weighed against the monetary benefits of the reduced incidence and prevalence. The calculations are based on country-specific data on disease rates, cost of disease and cost of implementing specific BSMs, as well as information on national pig-farming structures and pork-production statistics. Reductions of human cases, prevalence reductions in pigs and the associated monetary benefits will be considered provided sufficient data are available.
3 Cost of illness
Cost per human case (2019), averaged across pathogen-specific severity outcomes:

Country   Salmonella3*   Hepatitis E4,5
AT        € 980.4        € 408.2
UK        € 1,217.2      € 424.3
DE        € 959.6        € 414.1
FR        € 771.9        € 410.2

* Considering an underreporting factor (7.3)1 and a region-specific source attribution factor6.
References: (1) EFSA, Zoonosis Report, 2020; (2) Adlhoch et al., 2016; (3) FCC Consortium, 2010; (4) Anckorn et al., 2020; (5) Mangen et al., 2014; (6) Pires et al., 2011.
This project has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 773830.
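The cost-benefit step reduces to a simple identity: averted cases (baseline incidence times one minus the BSM's risk ratio) valued at the cost of illness, minus the measure's implementation cost. A minimal sketch; the Austrian Salmonella cost per case (€ 980.4) is taken from the table above, while the baseline incidence, risk ratio and BSM cost below are invented for illustration:

```python
def net_benefit(baseline_cases, risk_ratio, cost_per_case, bsm_cost):
    """Monetary net benefit of a biosecurity measure: averted human
    cases (baseline * (1 - risk ratio)) valued at the cost of illness,
    minus the cost of implementing the measure."""
    cases_averted = baseline_cases * (1.0 - risk_ratio)
    return cases_averted * cost_per_case - bsm_cost

# Hypothetical example: 10,000 baseline cases, a BSM with risk ratio
# 0.8 (20% reduction), and EUR 1M implementation cost
print(round(net_benefit(10_000, 0.8, 980.4, 1_000_000), 2))  # -> 960800.0
```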
poster
The SwE Toolbox: a Toolbox for the Analysis of Longitudinal and Repeated Measures Neuroimaging Data Bryan Guillaume1,2, Xue Hua3, Paul M. Thompson3, Lourens Waldorp4, Thomas E. Nichols1 and the Alzheimer's Disease Neuroimaging Initiative 1University of Warwick, Coventry, United Kingdom, 2University of Liège, Liège, Belgium, 3University of California, Los Angeles, USA, 4University of Amsterdam, Amsterdam, Netherlands. Introduction Neuroimaging software packages like SPM and FSL currently model longitudinal and repeated measures neuroimaging data using restrictive assumptions. In particular, SPM assumes a common covariance structure for all the voxels in the brain, and FSL assumes Compound Symmetry (CS), the state of all equal variances and all equal covariances. While more accurate methods have recently been proposed to analyse such data [1,2,4,5,7,8], there remain few easy-to-use implementations of these methods. Here, we present an SPM toolbox allowing the use of the Sandwich Estimator (SwE) method, a fast, non-iterative tool for longitudinal and repeated measures data [5], and illustrate its use on data from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Methods The SwE toolbox estimates parameters of interest using an Ordinary Least Squares (OLS) model and their variances/covariances using the so-called Sandwich Estimator [5]. The toolbox consists of a set of Matlab scripts designed to work in conjunction with the most recent versions of SPM (i.e. SPM8 or SPM12). The toolbox offers a user interface (Figure 1) allowing an easy specification of the design and data, and a display of results similar to standard SPM analyses. The use of the toolbox can be divided into 3 stages: the model setup, the model estimation and the display of results. The setup stage calls the Matlab batch system (Figure 1, top left) with a dedicated module for the SwE toolbox, which can be used to easily specify the data and design of the analysis.
The second stage estimates the model. Finally, the third stage allows the specification of contrasts of interest to make inferences and display results (Figure 1, middle and right) in a similar way to standard SPM analyses. Regarding the inferences, the current release version of the toolbox only allows for the use of parametric uncorrected and voxel-wise False Discovery Rate (FDR) inferences based on results in [5]; methods for non-parametric uncorrected, FDR, and Family-Wise Error (FWE) inferences based on a Wild Bootstrap [9] resampling method are forthcoming. Here, we demonstrate the toolbox on a highly unbalanced longitudinal dataset (i.e. with many missing visits) consisting of Tensor-Based Morphometry images obtained from the ADNI project, in which 229 healthy elderly Normal control, 400 Mild Cognitive Impairment (MCI) and 188 Alzheimer's Disease (AD) subjects were scanned up to 6 times over a period of 3 years [6]. As a comparison, we also analyse the same dataset using 2 alternative methods: the Naive-OLS (N-OLS) method, which includes subject dummy variables in an OLS model and assumes, by construction, Compound Symmetry (CS), and the Summary Statistic OLS (SS-OLS) method, which first fits a regression model for each subject to obtain subject-specific estimates of the parameters of interest, and then computes a simple model on these summary measures. In addition, we conducted a Box's test of CS [3] on the largest subset of the ADNI dataset without missing data to check the validity of the CS assumption. Results Figure 2 shows the Box's test F-score image (centred at the anterior commissure) thresholded at 5% after using a Bonferroni correction. 56% of the in-mask voxels survived the thresholding, indicating strong evidence of non-Compound Symmetry in the brain and challenging the validity of the N-OLS method.
Figures 3 and 4 show a comparison of thresholded t-score images (centred at the anterior commissure) obtained with the N-OLS, SS-OLS and SwE methods on the difference in (lon
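The core idea of the SwE method, OLS point estimates paired with a subject-level (cluster-robust) sandwich covariance, can be sketched in a few lines. This is a simplified illustration on an invented toy design, not the toolbox's voxel-wise implementation:

```python
import numpy as np

def ols_sandwich(X, y, subject):
    """OLS estimates with a subject-level sandwich estimator of their
    covariance: bread = (X'X)^-1, meat = sum of per-subject outer
    products of score contributions X_i' e_i."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for s in np.unique(subject):
        i = subject == s
        g = X[i].T @ resid[i]          # per-subject score contribution
        meat += np.outer(g, g)
    cov = XtX_inv @ meat @ XtX_inv     # the "sandwich"
    return beta, cov

# Toy longitudinal data: 3 subjects, 4 visits each, linear visit effect
rng = np.random.default_rng(0)
subject = np.repeat([0, 1, 2], 4)
X = np.column_stack([np.ones(12), np.tile(np.arange(4), 3)])
y = 2.0 + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=12)
beta, cov = ols_sandwich(X, y, subject)
```

Because the meat is accumulated per subject, within-subject correlation is accommodated without assuming Compound Symmetry, which is exactly the restriction the poster criticizes in the N-OLS approach.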
poster
THE STATE OF THE ART OF KNOWLEDGE-BASED URBAN DEVELOPMENT (KBUD): AN ANALYSIS OF CONCEPTUAL FRAMEWORKS Bruno Borelli Liza, Silvia Stuchi, Marcela Noronha, Milena Pavan Serafim. Fapesp Grant no. 23/01212-3
Timeline of frameworks (1995-2013), including the "Three pillars of KBUD" (Velibeyoğlu and Yigitcanlar):
• [Urban] development based on knowledge, aligned with the "Triple Bottom Line" model;
• "Knowledge Cities": a management strategy for urban areas based on the Triple Helix: economic, social and environmental development;
• Economic, social and spatial-urban development; analysis of the attraction and management of knowledge industries and knowledge workers, and of high-quality urban spaces;
• Quadruple and Quintuple Helix: participation of society and environmental sustainability; KBUD allows cities to promote development, innovation and quality of life through university-industry-government-society collaboration.
Dimensions of KBUD:
• Economic: knowledge-based economy; service industry; creative industry; green industry; innovation capacity; entrepreneurship; competitiveness; financial capital.
• Sociocultural: urban diversity; tolerance; social cohesion; transparency and accountability; quality of life; human and social capital; intellectual capital.
• Urban: sustainable urban development; urban infrastructure; intelligent transport systems; ecological sustainability; social housing; quality of spaces; urban identity.
• Institutional: strategic planning; infrastructure management; management of capital systems; public-sector management; community engagement; public participation.
INTRODUCTION
In the 21st century, socioeconomic, environmental and technological transformations have had a strong impact on urban life. Consequently, new mechanisms have become necessary to promote sustainable urban development and make cities resilient to these transformations. Knowledge-Based Urban Development (KBUD) is a crucial strategy in this context, starting from the principle that knowledge generation is vital for sustainable urban development and economic competitiveness. This work aims to identify the state of the art of the KBUD literature, focusing on the analysis of conceptual frameworks.
MATERIALS AND METHODS
The methodology involves a systematic mapping of the literature, a systematized review and a chronological analysis of conceptual frameworks to identify the state of the art of KBUD. The steps of the research protocol include: searches of academic databases; literature snowballing; cataloguing, filtering and final selection of texts; note-taking, review and critical analysis. The mapping covered the Web of Science, Periódicos CAPES, Scopus and Google Scholar databases. The frameworks most relevant to the topic of the publications were extracted, making it possible to review the main approaches, note changes in theoretical and practical perspectives, and identify concepts in the KBUD field. Finally, a chronological study was produced from the main frameworks and approaches identified in the KBUD literature.
The chronological review of the publications made it possible to trace the evolution of KBUD research and practice and of the frameworks, the structures that organize its main dimensions and indicators. The interconnection between technology, innovation, economy, environment, governance and community participation points to the need for unified, collaborative approaches to sustainable urban development. Initially, KBUD was seen as part of an economic and sustainable development strategy (aligned with the "Triple Bottom Line" model). With the inclusion of new concepts and actors (in line with the Quadruple and Quintuple Helix innovation models), it expanded into an integrated, multidimensional approach. Finally, its objectives include economic, sociocultural, spatial, environmental and institutional development, generating prosperity, equity and environmental sustainability in a city designed
poster
Knowing Unknowns in an Age of Incomplete Information Saurabh Khanna 1,  @saurabhk2701  saurabh.khanna@uva.nl Olga Eisele, Chei Billedo, Britta Brugman, Sandra Jacobs, Lauren Taylor, Marina Tulin, Marieke van Hoof 1 Amsterdam School of Communication Research, University of Amsterdam 🕯️ [In]visible The Internet has revolutionized our lives but also brought the challenge of invisible information: essential knowledge we miss out on. As algorithms prioritize content to maximize consumption, we engage with only a fraction of relevant information. We intend to understand, quantify, and boost this hidden knowledge. 📸 PictoPercept PictoPercept is a visual survey tool that uncovers hidden preferences through rapid forced-choice tasks, revealing subconscious attitudes often masked by social desirability. 🗣️ Lost Without Translation The Internet is a very unfair representation of human linguistic diversity. Lost Without Translation is an attempt to make these stark inequalities visible by quantifying and boosting online visibility for all living languages scripted by humans. 👁️‍🗨️ Shadowbans Shadowbans examines how shadow banning, the practice of hiding or de-emphasizing content without users' knowledge, affects online information visibility and shapes digital interactions. 🦋 Beyond Words A majority of the knowledge visible to us is built on communication among humans. Could we make the invisible visible by developing ways to communicate information effectively beyond our species? Beyond Words is an attempt to answer this challenging question. 🌍 32.5% of humans still lack internet access 🤖 Information is curated for us to maximize consumption, not understanding 🙎‍♂️ We are consistently consuming only the tip of the information iceberg What can we do? 👉
poster
Gut microbiota composition in Iberian pigs fed with olive oil by-products during the growing period M. Muñoz1,2, J.M. García-Casco1,2, G. Lemonnier3, D. Jardet3, O. Bouchez4, M.A. Fernández- Barroso1,2, F.R. Massacci3, A.I. Fernández1, A. López-García1,2, C. Caraballo1,2, E. González- Sánchez5, C. Óvilo1 and J. Estellé3 1INIA, Mejora Genética Animal, Crta. de la Coruña, km 7,5, 28040 Madrid, Spain, 2Centro de I+D en Cerdo Ibérico INIA-Zafra, Ctra Ex-101, km 4.7, 06300 Zafra, Spain, 3INRA, UMR1313 GABI, AgroParisTech, Université Paris-Saclay, Jouy-en-Josas, France, 4INRA, US 1426 GeT-PlaGe, Genotoul, Castanet-Tolosan, France, 5Universidad de Extremadura, Escuela de Ingenierías Agrarias, Avda. Adolfo Suárez, s/n, 06007 Badajoz, Spain; mariamm@inia.es The traditional Iberian pig production system is characterized by a final open-air fattening period (montanera), in which the animals are fed acorns and grass, preceded by a growing period where the feeding is restricted to avoid undesirable weight gain. New growing diets based on olive agro-industrial by-products could be an alternative to avoid this restriction. The objective of the current study (of TREASURE project) was to analyse the effect of two growing alternative diets on the gut microbiota composition from faecal samples collected before and after montanera. Three diets, one incorporating dry olive pulp in the feed (DD), one incorporating olive cake in wet form (WD) and a control diet (CD) were supplied to 45 animals (15 per diet) during the growing period (45 to 95 kg of body weight). The gut microbiota composition of each individual was evaluated at two time points: before transition to montanera (95 kg) and at slaughter (160 kg). Microbiota analyses were performed by re-sequencing the bacterial 16S gene (V3-V4) in an Illumina MiSeq. Bioinformatics analyses were performed by using Qiime’s open-reference subsampled OTU calling approach. 
The effect of diets on microbiota composition and diversity was evaluated using the Vegan package in R. Bray-Curtis distances, NMDS and PERMANOVA analyses showed significant effects on microbiota composition. WD caused an increase in microbiota diversity. At the second sampling point, differences in composition and diversity could also be observed after acorn supplementation. Funded by the European Union's H2020 RIA programme (Grant agreement no. 634476).
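The Bray-Curtis distances underlying the NMDS and PERMANOVA step have a simple closed form: the summed absolute abundance differences divided by the total abundance. A minimal sketch on invented OTU counts (the actual analysis used the Vegan package in R on the full OTU table):

```python
import numpy as np

def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two OTU abundance vectors:
    sum(|u - v|) / sum(u + v); 0 = identical, 1 = no shared taxa."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return np.abs(u - v).sum() / (u + v).sum()

# Invented OTU counts for two faecal samples
sample_a = [10, 5, 0, 3]
sample_b = [10, 0, 4, 3]
print(round(bray_curtis(sample_a, sample_b), 3))   # -> 0.257
```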
poster
The HOOP project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement N°101000836.
Technology #4: Insect larvae farmed on biowaste or agri-food by-products
Description
Some insects can be safely included in feed and food formulations. Species like Tenebrio molitor (yellow mealworm) and Hermetia illucens (black soldier fly, BSF) are farmed on different substrates, including residual streams of human food. Their bodies provide high-quality nutrients and chitin with a much lower environmental impact than traditional value-chain processes. Moreover, their excreta (frass) is a valuable and marketable fertiliser.
Innovation keys for the environment
• Combination of larval rearing and waste management (potentially lucrative business).
• Protein production with up to 10 times lower land use than meat proteins.(1)
• Lauric acid produced from an alternative source to coconut and palm seeds.
• Alternative fertiliser as a side-stream.
Process flowchart
Biowaste feedstocks (animal-free agri-food by-products; HoReCa food waste(2); OFMSW(2) after sieving, fractioning and grinding; AD digestate(2) after dewatering) → pre-treatment → larvae fattening (biowaste reduction) → larvae sieving → hygienization → drying → dried larvae. Bioproducts: larval meal (feed), protein hydrolysate, lipids, chitin/chitosan; side-streams: frass and solid digestate. Legend: biowaste feedstock, process input, process step, bioproduct.
Biowaste feedstocks
• Agri-food by-products
• Brewery by-products
• Market leftovers (animal-free)
• Household or HoReCa food waste (2)
• Digestate from anaerobic digestion (2)
Market prices [EUR], business-to-business (B2B) or business-to-customer (B2C)

Bioproduct                    Market sector                          Market price
Live larvae                   feed for domestic and exotic animals   7-8 €/kg [B2C]
Larval meal (ground larvae)   feed for domestic and exotic animals   10 €/kg for BSF [B2C]
                              livestock feed (cattle, aquaculture)   1.5-2 €/kg [B2B]
Frass (larvae excreta)        agriculture                            30 €/ton [B2B]
Hydrolysed insect protein     livestock and pet feed                 6,500 €/ton dry matter [B2B]
Chitosan (from chitin)        agrochemicals, water treatment         40,000-50,000 €/ton [B2B]
Fats / oils                                                          1,600 €/ton [B2B]
Lauric acid                   cosmetics                              1,900-2,100 €/ton [B2B]
poster
Echoes of Yesterday: An Exploration of Vintage Postcards
Visit the project website
The data
Metadata: named entity tagging; geographic coordinates retrieval. Image processing: image clustering; visual semantic search. Website: visual semantic search engine; interactive map; exploratory analysis; tourism analysis.
The Poststars team worked on the Postcards dataset, a collection of 19th and 20th century Belgian postcards. We analyzed 35,000 of the postcards, selected based on copyright restrictions. The postcards were collected in Belgium and depict various aspects of daily life, including places, monuments, and villages from that era. Echoes of Yesterday: An Exploration of Vintage Postcards is the webpage created for this project. It features an interactive map that showcases the various locations represented on the postcards, along with an exploratory analysis of their publication and distribution. Additionally, there is a tourism analysis based on image clustering.
Future research
Constrained by time, this project also acknowledges that the dataset has a lot to offer for future research. The analysis of dates on the postcards indicated that the majority date to a period between 1889 and 1922, but what would a detailed analysis of those dates bring? What would it say of the locations as they stand today? Were the monuments and architecture on them destroyed in the World Wars? Are they protected as heritage today? An analysis of stamps on the postcards could offer insight into changes in postal stamps over time; it could also track tourism trends, locations, and people as they (postcards and persons) travelled through Belgium. The existing images could also be used for the creation of new images through AI and machine learning models.
The team
Tim Van de Cruys, Benoît Crucifix, Ina Lamllari, Lucia Allende Garrido, Iva Stojevic, Ivania Nadine Donoso Guzmán, Yining Zhu, Peiyang Huo, Dawn Zhuang
What's particularly intriguing is that users can search the postcard collection by keyword via the search engine on the website. Methodology The dataset consists of image files and metadata for each individual postcard. Uniform titles in the metadata were tagged with the WikiNEuRal named-entity-tagging model, and the tagged entities served as the index for retrieving geographic coordinates from OpenStreetMap data. CLIP (Contrastive Language-Image Pre-Training) is a pre-trained neural model that learns to recognize the relationship between language and images. By encoding our dataset as CLIP embeddings, we can use the model for visual semantic search.
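Once images are encoded as CLIP embeddings, the visual semantic search described above reduces to ranking images by the similarity between each image embedding and the embedding of the text query. The following is a minimal sketch of that ranking step only; the CLIP model itself is omitted, and the image names, embeddings, and dimensionality are invented for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def semantic_search(query_emb, image_embs, top_k=3):
    """Return the names of the top_k images whose embeddings are
    most similar to the query embedding."""
    ranked = sorted(image_embs,
                    key=lambda name: cosine(query_emb, image_embs[name]),
                    reverse=True)
    return ranked[:top_k]

# Hypothetical 2-D embeddings standing in for real CLIP vectors:
embs = {"church_postcard": [0.9, 0.1], "beach_postcard": [0.1, 0.9]}
print(semantic_search([1.0, 0.0], embs, top_k=1))
```

In the real system the query embedding would come from CLIP's text encoder and the image embeddings from its image encoder; only the similarity ranking above is generic.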
poster
Design and Construction of a Remote Monitoring and Control System for a Dehumidifier Combined with a Heating Module
Young-Woo Seo1, Won-Jae Lee1, Young-Joo Kim1, Sun-Ok Chung1*, Young-Kyun Jang2, Seung-Ho Jang2, Im-Sung Bae3
1 College of Agriculture and Life Sciences, Chungnam National University, Republic of Korea; 2 Shinan Green-Tech Co., Ltd, Suncheon, Cheonnam 58027, Republic of Korea; 3 Green Control System Co., Ltd, Gwangju 61027, Republic of Korea
Introduction: temperature/humidity in the greenhouse; automatic or manual control at a set value; real-time, visible at a glance; low error rate, reliable.
Materials and Methods: system block diagram • Sensor node: sensors + ZigBee • Control node: ATmega128 + ZigBee + relay. Block diagram of the monitoring and control system; developed sensor and control interface monitoring program; developed sensor and control interface module (left), location of the sensor nodes used in the experiment (right).
Results and discussion: review of the results • average error = 0.28 ˚C in the target temperature • average error = 0.99% in the target humidity. Temperature (left) and humidity (right) control experiment graphs. Average and standard deviation in the temperature control experiment: target 30 ˚C, measured 29.9±0.28 ˚C and 30.51±1.05 ˚C. Average and standard deviation in the humidity control experiment: target 70%, measured 70.7±0.99% and 69.9±1.25%.
Acknowledgement: This work was supported by the Korea Institute of Planning and Evaluation for Technology in Food, Agriculture, Forestry and Fisheries (IPET) through the Advanced Production Technology Development Program, funded by the Ministry of Agriculture, Food and Rural Affairs (MAFRA) (Project No. 
316082-03). Control system: receive data in real time; auto/manual control; save files in .xls or .txt format. Dehumidifier control: schematic of the main control system. Improvement factors • add more and more varied sensor inputs as needed • further tests under crop-growing conditions • refinement of the control algorithm
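The "automatic control at a set value" behaviour described above can be illustrated with a simple deadband (hysteresis) rule: switch the actuator on only when the reading drifts outside a band around the set value, so the relay does not chatter near the setpoint. This is a hypothetical sketch, not the firmware actually running on the ATmega128 control node; the setpoint and deadband values are invented.

```python
def control_action(measured, setpoint, deadband=0.5):
    """Bang-bang control with a deadband around the set value.

    Returns 'on' when the reading exceeds the upper bound of the band
    (e.g. humidity too high, so the dehumidifier should run), 'off'
    below the lower bound, and 'hold' inside the band."""
    if measured > setpoint + deadband:
        return "on"
    if measured < setpoint - deadband:
        return "off"
    return "hold"

# Hypothetical humidity readings against a 70% target:
print(control_action(71.0, 70.0))   # reading above the band
print(control_action(70.2, 70.0))   # reading inside the band
```

A wider deadband trades control accuracy (the ±0.99% error reported above) against fewer relay switching cycles.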
poster
How can you become part of EOSC? There are many entry points into the EOSC environment, covering a broad spectrum of thematic and regional research communities, as well as different ways to get involved in EOSC governance. One such route is the EOSC Association, through which research organisations in Europe can get involved: eosc.eu. What is EOSC? EOSC is an initiative to build a virtual environment on top of existing scientific data infrastructures. The goal is to make research data and the associated tools and services easier to find, access and reuse. EOSC also aims to interconnect and combine datasets across disciplines and borders, making unexpected discoveries possible. How can researchers benefit? A first benefit for researchers is easier access to data, publications, tools and services. You can also take advantage of reusable data, greater visibility for your research, and access to research services provided by leading European service providers through the EOSC Portal: eosc-portal.eu. How can researchers get involved? Make your data findable, accessible, interoperable and reusable (FAIR). Use tools and data through EOSC to increase their visibility. Get involved in research data management. Join the EOSC ambassadors programme: eosc-pillar.eu/ambassadors-programme! Connecting Research Across Europe and Beyond Ambassadors Programme EOSC (the European Open Science Cloud) has an important role to play in how researchers in Europe work. 
By facilitating the sharing and reuse of research outputs and data, EOSC has a positive impact not only on science but on society as a whole. Are you ready to support your network and your institution in getting involved in EOSC? Contribute to the European Open Science Cloud. Are you interested in open science? Spread the word about the European Open Science Cloud (EOSC), and show researchers, students and decision-makers at your institution how to benefit from EOSC and how to get involved! If you would like more information about the EOSC-Pillar ambassadors programme, contact us. EOSC-Pillar has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 857650.
poster
The Text+ Long-Term Archive: a Generic Solution for the Sustainable Preservation of Data in the Humanities George Dogaru Gesellschaft für wiss. Datenverarbeitung mbH Göttingen (GWDG) • The Text+ long-term archive is scheduled to launch in 2023. It is being realised within Measure 2 of the Text+ Task Area Infrastructure/Operations by the GWDG, the SUB Göttingen and the DNB. • It addresses the fundamental need for an infrastructure for the long-term, secure preservation of data with relevance to the humanities. • It will make archiving straightforward, with no restriction on data formats. • It will offer bitstream preservation as well as PIDs. • Both open and protected archive areas will be possible. The Text+ long-term archive (provisional name): Specialisation: data with relevance to the humanities. Modality: all (no restriction). Accepted data formats: all (no restriction). Contact person(s): George Dogaru (george.dogaru@gwdg.de). The NFDI consortium Text+ is funded by the Deutsche Forschungsgemeinschaft (DFG) under project number 460033370. 
CDSTAR § repository software § all interaction with the long-term archive takes place exclusively through CDSTAR § operable via a REST API, § as well as via its own simple web-based GUI with convenient options for § uploading § assigning metadata § searching § downloading § can transfer data into the long-term archive and retrieve it from the archive § but can also keep data available, like a repository § both open and protected archive areas can be realised CDSTAR-Koala connector § currently under development § translates between the internal CDSTAR format and the internal Koala format § hands data from CDSTAR over to Koala and vice versa Koala § long-term archiving software § ensures the efficient and secure transfer of data from/to storage and the permanent integrity of the data § the long-term archiving solution of the German National Library is based on Koala Architecture, features, operation Long-term archive vs. repository: Unlike repositories, which keep their data ready for immediate use, a long-term archive serves the long-term preservation of data that may not be accessed or actively used frequently. A dual use of the Text+ long-term archive as both archive and repository would be technically possible. CDSTAR-Koala connector Text+ could help facilitate the work of OPERAS [...] as a service that stores sensitive data [...] Most urgently needed would be a reliable central and long-term storage of the conducted data […]. This would not only facilitate the work of all project partners as they would not need to think about how to store this data, it would also be centrally available to everyone with access rights throughout the project and after if the project's grant agreements require it. 
https://www.text-plus.org/en/research-data/user-story-311/ OPERAS – open scholarly communication in the european research area for social sciences and humanities Text+ could provide a repository for long term archiving of the data. https://www.text-plus.org/en/research- data/user-story-334/ The history of the “Deutschland-Institut” in Beijing (1932–1950): an example for Sino-German academic collaboration Getting researchers to get their valuable research data to a long-term archive is hard, so we need to lower the barrier to archive/publish research data. https://www.text-plus.org/en/research- data/user-story-351/ Data Archiving Support Creating an institution for securing and making available our research data for the long term is a matter of existential importance for the future of our discipline. The information we generate (and which should be saved for the long term) could be interconnected with databases of other German institutions, which otherwise might get lost. https://www.text-plus.org/en/research-data/user- story-342/ Epigraphy at the BBAW
poster
Calibrating the lithium-age relation and its dependence on rotation, activity and metallicity using open clusters and associations Acknowledgments: Based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 188.B-3002. These data have been obtained from the GES Data Archive. This work has made use of data from the European Space Agency (ESA) mission Gaia. We acknowledge financial support from the Universidad Complutense de Madrid (UCM) and by the Spanish Ministerio de Ciencia, Innovación y Universidades, Ministerio de Economía y Competitividad, from project AYA2016-79425-C3-1-P and PID2019-109522GB-C5[4]/AEI/10.13039/501100011033. Preliminary results: dependence on rotation, activity and metallicity. Li-age relation Selection criteria and cluster membership: Candidate members for each cluster are selected from the Gaia-ESO Survey (GES) (Gilmore et al. 2012) iDR6 data based on the following criteria: RVs, Gaia astrometry (proper motions and parallaxes), gravity indicators (Kiel (logg vs Teff) and γ index diagrams), [Fe/H] metallicity, and the position in the EW(Li) vs Teff diagram. As an example we show here the case of IC 2602 (RV, PMs, CMD, EW(Li) vs Teff and γ index) and NGC 6705 (Kiel diagram and [Fe/H] histogram). EW(Li) vs Teff: Plotting the lithium envelopes of IC 2602 (35 Myr), the Pleiades (78-125 Myr), and the Hyades (750 Myr) in an EW(Li) vs Teff figure, we can estimate age ranges and verify probable members (see Montes et al. 2001). Kinematic selection: We studied the RV distribution of each cluster by applying a 2-sigma clipping procedure and adopting a 2σ limit about the cluster mean yielded by the Gaussian fit to identify the most likely RV members. The Kiel diagram enables us to discard giant outliers (logg < 3.5), some of them Li-rich giants (A(Li) > 1.5), and other field contaminants. We used the PARSEC isochrones (Bressan et al. 
2012), with Z = 0.019 and ages ranging from 1 Myr to 7 Gyr. For young clusters, we used the gravity indicator gamma (see the γ index vs Teff diagram above) to discard giant contaminants before applying any kinematic and astrometric criteria. [Fe/H] histograms also help rule out stars with metallicities too far from the mean of each cluster. Star Clusters: The Gaia Revolution, MW Gaia WG 1/2 online workshop, 5-7 October 2021 M.L. Gutiérrez Albarrán1, D. Montes1, H.M. Tabernero1,2, J.I. González Hernández3,4, A. Frasca5, A.C. Lanzafame5, R. Smiljanic6, A.J. Korn7, S. Randich8, G. Gilmore9, et al. & GES Survey Builders 1Departamento de Física de la Tierra y Astrofísica and IPARCOS-UCM, Facultad de Ciencias Físicas, Universidad Complutense de Madrid, E-28040, Madrid, Spain; 2 CAB; 3 IAC; 4 Universidad de la Laguna, Tenerife; 5 INAF Catania; 6 Nicolaus Copernicus Astronomical Center; 7 Department of Physics and Astronomy, Uppsala University; 8 INAF Arcetri; 9 Institute of Astronomy, University of Cambridge In this work we use a series of open clusters and associations observed by the Gaia-ESO Survey (GES) to study the use of lithium abundances (Li I spectral line at 6708 Å) as an age indicator for pre- and main-sequence FGKM late-type stars. Previous studies of open clusters have shown that lithium depletion is not only strongly age dependent, but also shows a complex pattern with several other parameters, such as rotation, chromospheric activity and metallicity. Using the available data from both GES iDR6 and Gaia EDR3, we performed a thorough membership analysis and obtained lists of candidate members for 41 open clusters, ranging in age from 1-3 Myr to 5 Gyr. We then conducted a comparative study that allowed us to quantify the observable lithium dispersion in each cluster and study the influence of rotation, activity and metallicity on the lithium dispersion of the selected candidates. 
All this allows us to calibrate a Li-age relation and create empirical lithium envelopes for several clusters in our sample.
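The 2-sigma clipping used above for the RV membership selection (iteratively rejecting stars more than 2σ from the cluster mean) can be sketched as follows. This is a generic illustration, not the exact GES pipeline: the iteration cap and the sample radial velocities are invented, and the real analysis adopts the mean and σ of a Gaussian fit rather than the raw sample statistics used here.

```python
import math

def sigma_clip(values, n_sigma=2.0, max_iter=10):
    """Iteratively reject values more than n_sigma standard deviations
    from the sample mean, until no value is rejected."""
    vals = list(values)
    for _ in range(max_iter):
        mean = sum(vals) / len(vals)
        std = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
        kept = [v for v in vals if abs(v - mean) <= n_sigma * std]
        if len(kept) == len(vals):
            break  # converged: nothing rejected in this pass
        vals = kept
    return vals

# Hypothetical cluster RVs (km/s) with one obvious field interloper:
rvs = [10.0, 10.2, 9.9, 10.1, 10.0, 35.0]
print(sigma_clip(rvs))
```

The interloper at 35 km/s inflates the first-pass mean and σ, but is still more than 2σ out and is rejected; the remaining members converge on the next pass.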
poster
1. Radio relics Relics are elongated, Mpc-size radio sources located in the outskirts of merging galaxy clusters. These radio relics trace merger shocks where particles are accelerated to relativistic energies, causing them to emit at radio wavelengths. It is still being debated by which physical mechanism these particles are accelerated. Reinout van Weeren, Felipe Andrade-Santos, Christine Jones, William Forman, Georgiana Ogrean Harvard-Smithsonian Center for Astrophysics Probing the Physics of Particle Acceleration at ICM Shocks Subaru gri; 150 MHz LOFAR radio; 0.5-4.0 keV Chandra 2. Shock Mach number Particle acceleration models make predictions about the relation between the shock Mach number and the observed radio spectral index. However, existing X-ray observations lack the sensitivity to measure the Mach numbers at large cluster-centric radii with high accuracy to test these models. 3. A case study A textbook example of a radio relic is the one in the cluster RXJ0603.3+4214 (z=0.225). In a 240 ks Chandra observation a hint of a weak M~1.4 shock is found at the location of the relic, by measuring the X-ray surface brightness jump. No useful constraint could be obtained on the pre-shock temperature. 4. X-ray Surveyor simulations Based on the Chandra count rate and measured ICM properties, we simulated an X-ray Surveyor observation of a Mach = 1.4 shock with a post-shock temperature of 9 keV. We determine the shock Mach number from the surface brightness profile and via measurements of the pre- and post-shock temperatures. 5. Conclusion X-ray Surveyor observations will enable precise measurements of low-Mach-number shocks in cluster outskirts, both via the density and temperature jump, crucial for testing particle acceleration models. X-ray Surveyor simulated (hdxi, 100 ks) Chandra observed (ACIS-I, 240 ks) Radio Relic pre-shock region post-shock region References: van Weeren et al. (2010, Science, 330, 337); van Weeren et al. 
2012 (A&A, 546, 124); van Weeren et al. 2015 (ApJ, submitted); Dawson et al. (in prep); Eckert et al. (2011, A&A, 526, 79); Ogrean et al. (2013, MNRAS, 433, 812) 900 kpc 4 arcmin 0.5-4.0 keV 0.5-4.0 keV X-ray Surveyor simulated pre- and post-shock spectra (calorimeter, 100ks) Tpre-shock = 5.9 ± 0.3 keV (model T=6.4 keV) Tpost-shock = 8.8 ± 0.2 keV (model T=9.0 keV) Mach number = 1.5 ± 0.1 (model M=1.4) Mach number = 1.38 ± 0.03 (model M=1.4) Mach number = 1.4 ± 0.3 (observed) Observed and simulated surface brightness profiles across the relic RX J0603.3+4214 Redshift (0.225) and Abundance (0.25) were frozen in the fit, nH was left as a free parameter. Double power-law projected density profile fit with a jump Subaru image: William Dawson, Nathan Golovich
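The Mach numbers quoted above are derived from the pre- and post-shock temperatures. For a γ = 5/3 plasma the standard Rankine-Hugoniot temperature jump condition relates T₂/T₁ to M, and since the ratio is monotonic for M > 1 it can be inverted numerically. The sketch below (the bisection bounds are arbitrary choices, not from the poster) recovers M ≈ 1.5 from the simulated calorimeter temperatures of 5.9 and 8.8 keV, consistent with the quoted 1.5 ± 0.1.

```python
def temp_jump(mach, gamma=5.0 / 3.0):
    """Rankine-Hugoniot post-/pre-shock temperature ratio T2/T1
    for a shock of Mach number `mach` in a monatomic plasma."""
    m2 = mach * mach
    num = (2.0 * gamma * m2 - (gamma - 1.0)) * ((gamma - 1.0) * m2 + 2.0)
    return num / ((gamma + 1.0) ** 2 * m2)

def mach_from_temps(t_pre, t_post, lo=1.0, hi=10.0):
    """Invert the jump condition by bisection (monotonic for M > 1)."""
    target = t_post / t_pre
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if temp_jump(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Simulated temperatures quoted in the poster (keV):
print(round(mach_from_temps(5.9, 8.8), 2))
```
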
poster
Discovery of Low-ionization Envelopes in the Planetary Nebula NGC 5189 Spatially-resolved Diagnostics from HST Observations Ashkbiz Danehkar, Margarita Karovska, W. Peter Maksym, and Rodolfo Montez Jr. Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138, USA The planetary nebula NGC 5189 shows one of the most spectacular morphologies among planetary nebulae. Using high-resolution HST/WFC3 imaging, we discovered low-ionization structures within 0.3 pc × 0.2 pc around the central binary system. We used Hα, [O III], and [S II] emission-line images to construct line-ratio diagnostic maps, which allowed us to spatially resolve two distinct low-ionization envelopes within the inner regions. Our diagnostic maps show that highly ionized gas surrounds these low-ionization envelopes. These envelopes could be the result of a powerful outburst from the central interacting binary, when one of the companions (now a [WO] star) was in its AGB evolutionary stage. Dense material ejected from the progenitor AGB star is likely heated as it propagates along a symmetry axis into the previously expelled material. Our diagnostic mapping using high-resolution imaging can provide a novel approach to the detection of low-ionization regions in other planetary nebulae and in other objects including symbiotic systems and expanding envelopes of novae. References Baldwin, Phillips, Terlevich, 1981, PASP, 93, 5 Danehkar et al. 2018, ApJ, [arXiv:1711.11111] Garcia-Rojas et al. 2012, A&A, 538, A54 Kewley et al. 2006, MNRAS, 372, 961 Acknowledgment E-mail: ashkbiz.danehkar@cfa.harvard.edu Spatially-resolved Flux Ratio Maps Resolved Diagnostic Mapping [S II]/Hα flux ratio: high in low-excitation regions [O III]/Hα flux ratio: high in high-excitation regions FIG 1. Image with specific colors of fluorescing sulfur, hydrogen, and oxygen. Credit: NASA/ESA Hubble Space Telescope (Dec 18, 2012). FIG 4. Spatially-resolved diagnostic map of the inner region of NGC 5189. 
Red and green pixels correspond to low-ionization and photo-ionized regions, respectively. FIG 5. Excitation diagnostic diagram and BPT diagram produced from the 120" × 90" region of NGC 5189 to disentangle fast LISs from photoionized regions. Solid black lines: LINER-like and Seyfert-like boundaries (Kewley+2006); solid red line: nebular photon-shock dividing line (Raga+2008); star (∗): the mean flux ratios; cross (×): García-Rojas+2012; plus (+): Kingsburgh & Barlow (1994). Kingsburgh & Barlow 1994, MNRAS, 271, 257 Maksym et al. 2016, ApJ, 829, 46 Raga et al. 2008, A&A, 489, 1141 Sabin et al. 2012, RMxAA, 48, 165 Main Findings and Future Prospects Two low-ionization envelopes lie within the inner, ionized gaseous environment, extending over 0.15 pc from the central star. Both appear to be expanding along a NE to SW axis. The SW envelope appears smaller than its NE counterpart. These low-ionization envelopes are surrounded by highly ionized, low-density gas. Our diagnostic methodology is a powerful tool for identifying low-ionization structures in other objects, e.g. complex planetary nebulae, expanding nebulae around symbiotic systems, and expanding shells of novae (e.g. V445 Puppis). Diagnostic diagrams based on [O III]/Hα vs. [S II]/Hα ratios divide shock-ionized and photoionized regions (Raga+2008), similar to LINER and Seyfert regions (e.g. Maksym+2016) in BPT diagrams (Baldwin, Phillips & Terlevich 1981; Kewley+2006): shock-ionized structures within a photoionized nebula. FIG 3. Logarithmic flux [S II]/Hα ratio map of the 120" × 90" region chosen from the HST observations. Two main morphological structures are labelled. FIG 2. Logarithmic flux [O III]/Hα ratio map of the 120" × 90" region chosen from the HST observations. The contour lines show the boundaries of photo-ionized and low-ionization regions. Two main morphological structures are labelled. We are grateful to Mr. 
Zoltan Levay and the Hubble Heritage Team, who obtained the NGC 5189 HST data.
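The diagnostic maps described above are pixel-wise logarithmic flux ratios of two narrow-band images, with low-ionization regions flagged where log([S II]/Hα) is high. A minimal sketch of that computation on plain nested lists follows; the flux values are invented, the threshold is a made-up placeholder rather than the actual Raga et al. (2008) dividing line, and a real analysis would of course operate on calibrated FITS arrays.

```python
import math

def log_ratio_map(num, den, floor=1e-6):
    """Pixel-wise log10 flux ratio, e.g. [S II]/Halpha.
    A small floor guards against zero or negative counts."""
    return [[math.log10(max(a, floor) / max(b, floor))
             for a, b in zip(row_n, row_d)]
            for row_n, row_d in zip(num, den)]

def low_ionization_mask(log_sii_ha, threshold=-0.5):
    """Flag pixels as low-ionization where the log ratio exceeds a
    (here invented) threshold; real work uses a calibrated boundary."""
    return [[v > threshold for v in row] for row in log_sii_ha]

# Two hypothetical 1x2 narrow-band "images":
sii = [[1.0, 0.1]]
halpha = [[1.0, 1.0]]
ratios = log_ratio_map(sii, halpha)
print(low_ionization_mask(ratios))
```
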
poster
Epidemiological studies on reservoir hosts and potential vectors of Grapevine flavescence dorée and validation of different diagnostic procedures for GFD (GRAFDEPI) Funding Mixed non-competitive virtual pot funding mechanism. Each funder only pays for the participation of their own national researchers. Total funding € 213,000 Goals The goals of the GRAFDEPI project are: • to improve the knowledge of the epidemiological cycle of the disease; • to harmonize flavescence dorée diagnostic procedures within the EU. Research consortium IT-CRA-PAV, AGES-AT, CRAW-BE, TR-PPRS, PT-INRB, ACW-CH, BE-ILVO, DISTA-IT, DIPROVE-IT, CRA-ABP-IT, IPEP-SB, NIB-SI, IRTA-SP Contact information Project coordinator: Graziella Pasquini graziella.pasquini@entecra.it Key outputs and results • availability of validated conventional and real-time PCR protocols for phytosanitary labs • identification of new potential vectors of FD phytoplasma • updating of the geographical distribution of FD strains in the involved countries and identification of 'unusual isolates' • design of surveillance schemes for disease control Objectives • validation of diagnostic protocols at the European level • evaluation of new alternative hosts and vectors in the epidemiological cycle of the disease • characterization of FD isolates in different countries • suggestions for new control strategies 05/2012-04/2014
poster
Comparison of Numerical Schemes to Solve the Newton-Lorentz Equation Jabus van den Berg1 1 Centre for Space Research, North-West University, Potchefstroom, South Africa 24182869@nwu.ac.za
Abstract To better understand the transport of solar energetic particles, a conceptual understanding of the micro-physics of charged-particle propagation in electric and magnetic fields is needed. The motion of charged particles is governed by the Newton-Lorentz equation, which becomes increasingly difficult to solve analytically for complicated electric and magnetic fields. In this work, the methods of Boris (1970) and Vay (2008), as well as a fourth-order Runge-Kutta scheme, are investigated for numerically solving the Newton-Lorentz equation. This work focuses on accuracy rather than computational speed, since these numerical schemes will be used to analyse the motion of particles in turbulent electric and magnetic fields. The Larmor radius, the deviation between the final numerical and analytical position, and the execution time are calculated and recorded for different time steps.
1 The Newton-Lorentz Equation The motion of a particle with mass $m$ and charge $q$, moving with velocity $\vec{v}$ in an electric field $\vec{E}$ and magnetic field $\vec{B}$, is governed by the Newton-Lorentz equation
$m \frac{d(\gamma\vec{v})}{dt} = q\left(\vec{E} + \vec{v}\times\vec{B}\right)$, (1)
where $\gamma = 1/\sqrt{1-(v/c)^2}$ is the Lorentz factor, with $c$ the speed of light in vacuum (Griffiths, 1999). This can be split into two first-order differential equations
$\frac{d\vec{r}(t)}{dt} = \vec{v}(t)$ and $m\frac{d\vec{u}(t)}{dt} = q\left[\vec{E}(\vec{r}(t);t) + \vec{v}(t)\times\vec{B}(\vec{r}(t);t)\right]$, (2)
where $\vec{u}(t) = \gamma(t)\vec{v}(t)$. This is solved in Cartesian coordinates since the vector product is easily calculated and it is difficult to find a symmetry axis for a general magnetic field. The numerical methods used to solve these equations should be accurate enough to simulate charged particles in turbulent electric and magnetic fields, as shown in Fig. 1. The Larmor radius of the particle's gyration is defined as
$r_L = v_\perp/\omega_c$, (3)
where $v_\perp = v\sin[\arccos(\vec{v}\cdot\vec{B}/vB)]$ is the particle's speed perpendicular to the magnetic field and
$\omega_c = |q|B/\gamma m$ (4)
is the cyclotron frequency.
Fig. 1: Left panel: background (black) and turbulent (green) magnetic field lines of a composite slab-2D turbulence model at time t = 0. The magnetic field lines can be thought of as the path of the particle's guiding centre if the particle is stuck to the field line. Right panel: trajectory (red) of an electron in the composite turbulence model, simulated using the Runge-Kutta method. A single background magnetic field line (black dashed) and the particle's guiding centre (blue dotted) are also shown.
2.1 The Method of Boris (1970) The position can be calculated by a time-centred approximation of Eq. 2a:
$\frac{\vec{r}_{n+1/2} - \vec{r}_{n-1/2}}{\Delta t} \approx \vec{v}_n \implies \vec{r}_{n+1/2} = \vec{r}_{n-1/2} + \vec{v}_n\,\Delta t$, (5)
where a subscript $n$ denotes that quantity at time $t_n = t_0 + n\Delta t$, with $t_0$ the initial time and $n = 0, 1, 2, \dots$, and it should be remembered that $\vec{v}_n = \vec{u}_n/\gamma_n$. Boris (1970) suggested decoupling the electric and magnetic fields to solve Eq. 2b by first applying half of the electric field, then applying the rotation of the velocity vector by the magnetic field, and lastly applying the other half of the electric field:
$\vec{u}^- = \vec{u}_{n-1} + \frac{q\Delta t}{2m}\vec{E}_{n-1/2}$, $\quad \vec{u}_{n+1/2} = \vec{u}^- + f_1\,\vec{u}^-\times\vec{B}_{n-1/2}$, (6a)
$\vec{u}^+ = \vec{u}^- + f_2\,\vec{u}_{n+1/2}\times\vec{B}_{n-1/2}$, $\quad \vec{u}_{n+1} = \vec{u}^+ + \frac{q\Delta t}{2m}\vec{E}_{n-1/2}$, (6b)
where
$f_1 = \tan\!\left(qB_{n-1/2}\Delta t/2m\gamma^-\right)/B_{n-1/2}$, (7a)
$f_2 = 2f_1/\left[1 + (f_1 B_{n-1/2})^2\right]$ and $\gamma_n = \sqrt{1 + (u_n/c)^2}$, (7b)
with $\vec{E}_{n-1/2} = \vec{E}(\vec{r}_{n-1/2}; t_{n-1/2})$, $\vec{B}_{n-1/2} = \vec{B}(\vec{r}_{n-1/2}; t_{n-1/2})$ and $B_{n-1/2} = |\vec{B}_{n-1/2}|$.
2.1.1 An Approximation Since $\tan\theta \approx \theta$ to first order in $\theta$ if $|\theta| \ll 1$, as should hold for a small time step, Eqs. 6a to 7b can be simplified (Birdsall & Langdon, 1991):
$\vec{u}^- = \vec{u}_{n-1} + \frac{q\Delta t}{2m}\vec{E}_{n-1/2}$, $\quad \vec{u}_{n+1/2} = \vec{u}^- + \vec{u}^-\times\vec{f}_1$, (8a)
$\vec{u}^+ = \vec{u}^- + \vec{u}_{n+1/2}\times\vec{f}_2$, $\quad \vec{u}_{n+1} = \vec{u}^+ + \frac{q\Delta t}{2m}\vec{E}_{n-1/2}$, (8b)
where $\vec{f}_1 \approx q\vec{B}_{n-1/2}\Delta t/2m\gamma^-$ and $\vec{f}_2 = 2\vec{f}_1/(1 + f_1^2)$
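The small-angle form of the Boris scheme (Eqs. 8) can be sketched in a few lines. For brevity this is the non-relativistic simplification (γ → 1, so u⃗ = v⃗), not the full relativistic update described in the poster; a key property it preserves is that the magnetic rotation leaves the speed exactly unchanged.

```python
def boris_push(x, v, E, B, q, m, dt):
    """One non-relativistic Boris step: half electric kick, magnetic
    rotation of the velocity, half electric kick, then a position drift.
    Positions, velocities and fields are 3-component lists."""
    def add(a, b): return [ai + bi for ai, bi in zip(a, b)]
    def scale(a, s): return [ai * s for ai in a]
    def cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]

    qmdt2 = q * dt / (2.0 * m)
    v_minus = add(v, scale(E, qmdt2))        # first half electric kick
    t = scale(B, qmdt2)                      # rotation vector f1 (gamma = 1)
    t2 = sum(ti * ti for ti in t)
    s = scale(t, 2.0 / (1.0 + t2))           # rotation vector f2
    v_prime = add(v_minus, cross(v_minus, t))
    v_plus = add(v_minus, cross(v_prime, s)) # norm-preserving rotation
    v_new = add(v_plus, scale(E, qmdt2))     # second half electric kick
    x_new = add(x, scale(v_new, dt))         # position drift (Eq. 5)
    return x_new, v_new
```

With E⃗ = 0 and B⃗ = ẑ, a positive charge starting with v⃗ = x̂ is deflected towards -ŷ while its speed is conserved to machine precision, which is the defining advantage of the Boris rotation over a naive Euler step.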
poster
Estimation of physicochemical properties using WebTEST 2.0, a database centered modeling platform for building and deploying QSAR models email: martin.todd@epa.gov Overview of WebTEST 2.0 Martin, T1.; Ramsland, C.2; Charest, N.1; Lowe, C.1; Williams1, A. 1 US EPA/CCTE; 2 ORAU US EPA Disclaimer: The findings and conclusions in this poster have not been formally disseminated by the U.S. EPA and should not be construed to represent any agency determination or policy. WebTEST2.0’s database has the following schemas: Datasets, molecular descriptors, and QSAR methods can be versioned Schema Purpose exp_prop Raw experimental data qsar_datasets Datasets used for QSAR modeling qsar_descriptors Molecular descriptor values and QSAR ready SMILES qsar_models QSAR models, predictions, reports, and statistics QSAR Modeling Procedure Chemicals can be automatically mapped to DSSTox records using their source identifiers (name, CAS, SMILES) (or can be manually curated) Records are omitted if they have invalid molecular structures, property values, units, or experimental parameters A record for QSAR modeling consists of an identifier (e.g. QSAR ready smiles), the property value, and the molecular descriptor values WebTEST2.0 can generate Mordred, PaDEL, RDKit, ToxPrints, and WebTEST descriptors Using Python, WebTEST2.0 can develop random forest (RF), support vector machine (SVM), k Nearest neighbors (kNN), and consensus models Genetic algorithm (GA) feature selection can be used to tailor kNN models to specific datasets Physicochemical property data by source* Source HLC VP WS BP LogP MP Sum OChem 0 10241 17257 15365 38566 60424 141853 OPERA 694 2786 5074 5375 13549 8464 35942 Zang, et al 2017 N/A N/A N/A N/A 13798 N/A 13798 Open Melting Point Dataset N/A N/A N/A N/A N/A 11057 11057 PubChem N/A 0 2125 0 0 7896 10021 eChemPortalAPI 0 0 2129 0 0 5658 7787 AqSolDB N/A N/A 7374 N/A N/A N/A 7374 Sander 6322 N/A N/A N/A N/A N/A 6322 Kovdienko et al. 
2010 N/A N/A 2593 N/A N/A N/A 2593 Bradley N/A N/A 2481 N/A N/A N/A 2481 QSARDB N/A 34 945 N/A 607 807 2393 Yalkowsky et al. 2002 N/A N/A 1017 N/A 691 N/A 1708 Oxford University Chemical Safety Data N/A N/A N/A 0 N/A 1205 1205 Tetko et al. 2001 N/A N/A 1103 N/A N/A N/A 1103 Lewis et al. 2016 N/A N/A 1065 N/A N/A N/A 1065 … Sum 7279 13680 44566 22118 67584 96009 251236 Property Property Range Parameter range Raw Mapped Flattened* HLC 1e-13<HLC (atm m3/mol)<1e2 20<T(C)<30 6.5<pH< 7.5 10521 7279 1910 VP 1e-14<VP(mmHg)<1e6 20<T(C)<30 34849 13680 3454 BP -150<BP(°C)<800 740 < P (mmHg) < 780 375470 22118 7061 WS 1e-14<WS(M)<100 1e-11<WS(g/L)<990 20<T(C)<30 740<P(mmHg)<780 6.5<pH< 7.5 82142 44566 9814 LogP -6<LogKow)<11 20<T(C)<30 107514 67584 14711 MP -250<MP(°C)<550 740 < P (mmHg) < 780 450775 96009 29513 * Number of records which can be mapped to DSSTox structures * Flattened records (one property value per structure) are used for modeling Physicochemical property data sets Modeling results Dataset knn rf xgb svm con HLC 0.70 0.81 0.82 0.80 0.82 VP 0.85 0.89 0.90 0.88 0.91 WS 0.76 0.82 0.82 0.82 0.84 BP 0.70 0.75 0.76 0.72 0.77 LogP 0.82 0.87 0.90 0.91 0.91 MP 0.65 0.74 0.76 0.76 0.77 Average 0.75 0.81 0.83 0.82 0.84 Q2_CV Dataset knn rf xgb svm con HLC 0.77 0.86 0.85 0.84 0.87 VP 0.87 0.90 0.92 0.89 0.92 WS 0.78 0.84 0.84 0.84 0.86 BP 0.74 0.76 0.77 0.72 0.78 LogP 0.84 0.89 0.91 0.93 0.93 MP 0.67 0.75 0.76 0.77 0.78 Average 0.78 0.83 0.84 0.83 0.86 R2_Test 2.0 HLC WS VP HLC WS VP Beta: https://www.epa.gov/chemical-research/cheminformatics QMRF Excel model summary Pred. report WebTEST2.0 Prediction report
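Of the model types listed above, kNN is the simplest to sketch: a property is predicted as the average over the k training chemicals nearest in descriptor space. The following is a generic illustration, not WebTEST 2.0's actual implementation (which adds genetic-algorithm feature selection and works on thousands of descriptors); the two-descriptor vectors and property values are invented.

```python
import math

def knn_predict(query_desc, train, k=3):
    """Predict a property as the mean over the k nearest neighbours
    in descriptor space (Euclidean distance).

    `train` is a list of (descriptor_tuple, property_value) pairs."""
    ranked = sorted((math.dist(query_desc, desc), value)
                    for desc, value in train)
    nearest = ranked[:k]
    return sum(value for _, value in nearest) / len(nearest)

# Hypothetical training chemicals with 2-D descriptor vectors:
train = [((0.0, 0.0), 1.0),
         ((0.1, 0.0), 1.2),
         ((0.0, 0.1), 0.8),
         ((5.0, 5.0), 10.0)]  # a dissimilar chemical, ignored by k=3
print(knn_predict((0.05, 0.05), train, k=3))
```

Because the prediction only draws on the nearest neighbours, the dissimilar chemical at (5, 5) does not pull the estimate; this locality is also what makes kNN amenable to the per-dataset tailoring via GA feature selection mentioned above.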
poster
GEODESIC NEIGHBORHOODS FOR PIECEWISE AFFINE INTERPOLATION OF SPARSE DATA Gabriele Facciolo and Vicent Caselles Departament de Tecnologies de la Informació i les Comunicacions, Universitat Pompeu Fabra - Barcelona gabriele.facciolo@upf.edu, vicent.caselles@upf.edu
Abstract We propose a new interpolation method for sparse data that allows incorporation of geometric information from a reference image u. The idea consists in defining a geodesic Voronoi cell for each data sample and fitting a model to interpolate inside each cell. A geodesic distance permits both: to effectively adapt the shape of the cells to the image structures; and to compute a set of neighboring samples that are used for fitting a piecewise affine model at each cell.
Objective Interpolate a set of range measurements (LIDAR or sparse disparity data [1]) using the additional knowledge provided by a photograph u of the scene. Samples with known depth (5% of the total pixels). Reference image u. Interpolated samples. Lambertian hypothesis: a uniform surface with a constant angle has a constant intensity in the image. This allows information to be extrapolated across uniform regions of the image.
Geodesic Voronoi Cells and Neighborhoods We use the geodesic distance to incorporate the radiometric information provided by the image u into the interpolation, as in [2]. Reference image $u(x): \Omega \to \mathbb{R}^+$, with $\Omega \subset \mathbb{R}^2$. Positions of the samples $\Lambda \subset \Omega$. Depth values of the samples $G(\lambda): \Lambda \to \mathbb{R}$, $\lambda \in \Lambda$. Curve $C(p): [0,1] \to \Omega$, and $C_{s,t}$ a curve connecting s and t. The geodesic distance between s and t measures the minimum variation of u between s and t:
$d(s,t) = \min_{C_{s,t}} \int_0^1 |\nabla u \cdot \dot{C}_{s,t}(p)| + \varepsilon\,|\dot{C}_{s,t}(p)|\,dp$
Observe: the shortest path is the one with the fewest discontinuities of u along it. The geodesic Voronoi diagram of the sites in $\Lambda$ successfully accounts for discontinuities in the image. The geodesic neighborhood $GN_K(p)$ of the point p is the set formed by the K nearest (in the geodesic sense) samples of $\Lambda$ to the point p. 
Samples in $GN_K(p)$ are likely to obey the same model as p. Reference image; samples & Voronoi cells; geodesic Voronoi cells; geodesic neighborhood.
Robust affine plane interpolation using $GN_K$ Profile: linear blend of the 5 nearest (geodesic) samples. Profile: affine plane estimated using the same 5 samples. For each point $p \in \Lambda$ we fit an affine plane through $GN_K(p)$. Then we extend the plane to the entire cell; the initial piecewise affine model H is the union over all the cells. If $GN_K(p)$ contains outlier samples (that do not belong to the same surface as p), then the result will be biased. To remove these outliers we use a modified RANdom SAmple Consensus (RANSAC) [4].
Constrained region merging Problem: when interpolating noisy data • the model of each cell is independently estimated, • adjacent cells may end up with different models. Our solution: merge adjacent regions with compatible models, and compute a common model to reduce noise effects. Result without merging; before merging; after merging.
The simplified Mumford-Shah functional [5] minimizes $E(B, f) = \sum_{R \in P(\Omega)} RErr(R, f) + \lambda \int_B g(s)\,ds$, $\lambda \ge 0$.
• Error term: $RErr(X, f) = \sum_{x \in X \cap \Lambda} |f_X(x) - H(x)|^2$, where H is the initial plane interpolation and $f_X$ the affine model for the region X.
• Boundary length: g(s) is big at poorly contrasted boundaries and small at well contrasted ones.
• Also constrained by the segments of the original image [6].
• Greedy algorithm: merges pairs of regions while the error term is small.
Results Image; sample positions; result model; regions.
Discussion • A method for interpolating range data that incorporates the geometric information provided by an image of the scene. • Geodesic neighborhoods constitute a fast and robust tool for modelling the local information; they can be adapted to other (non-affine) models [3]. • Poor results for badly contrasted edges between strongly textured regions. More results at: http://gpi.upf.edu/static/geoint . References 1. N. Sabater, A. Almansa, and J-M. 
Morel, “Rejecting wrong matches in stereovision,” CMLA Preprint, 2008. 2. L. Yatz
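The robust affine fit over a geodesic neighborhood can be sketched with a minimal RANSAC loop. This is not the authors' modified RANSAC; the inlier threshold and iteration count are illustrative assumptions.

```python
import random

def fit_plane(p1, p2, p3):
    """Affine plane z = a*x + b*y + c through three (x, y, z) samples
    via Cramer's rule; returns None for (near-)collinear triples."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    if abs(det) < 1e-12:
        return None
    a = (z1 * (y2 - y3) - y1 * (z2 - z3) + (z2 * y3 - z3 * y2)) / det
    b = (x1 * (z2 - z3) - z1 * (x2 - x3) + (x2 * z3 - x3 * z2)) / det
    c = (x1 * (y2 * z3 - y3 * z2) - y1 * (x2 * z3 - x3 * z2)
         + z1 * (x2 * y3 - x3 * y2)) / det
    return a, b, c

def ransac_plane(samples, n_iter=200, tol=0.1, seed=0):
    """Keep the plane through a random triple that explains the most
    samples to within `tol`; outliers in the neighborhood are ignored."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(n_iter):
        model = fit_plane(*rng.sample(samples, 3))
        if model is None:
            continue
        a, b, c = model
        inliers = sum(abs(z - (a * x + b * y + c)) <= tol
                      for x, y, z in samples)
        if inliers > best_inliers:
            best, best_inliers = model, inliers
    return best

# Neighborhood lying on the plane z = 2x + 3y + 1, plus one gross
# outlier standing in for a sample from a different surface.
pts = [(x, y, 2 * x + 3 * y + 1) for x in range(4) for y in range(4)]
pts.append((1.0, 1.0, 50.0))
a, b, c = ransac_plane(pts)
```

A plain least-squares fit would be pulled toward the outlier; the consensus step recovers the underlying plane, which is the bias removal the poster describes.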
poster
Results: Baseline characteristics for the healthy study group: The LV myocardium was significantly thicker in all segments in men than in women. Mean segmental LVMT was 5.4±0.8 mm for women and 6.6±0.9 mm for men. By multivariate analysis, male gender, age, body surface area, systolic blood pressure, hard physical activity, and smoking were significantly associated with a thicker LV myocardium. The LV showed the same overall geometrical patterns for men and women; however, the LV of women was significantly more heterogeneous. LV geometry: Women Men Background: Left ventricular hypertrophy is associated with cardiovascular complications, and geometry is important for prognosis. Computed tomographic coronary angiography (CTA) is increasingly accepted as a key diagnostic modality for the non-invasive detection of coronary artery disease. Precise imaging of the cardiac geometry is available as a spin-off from each CTA, without increasing the imaging time, radiation exposure, or contrast media administered to the patient. Contact: Louise Hindsø, Email: louisehindsoe@hotmail.com, Phone: +45 51948114 Methods: 4,295 participants from the Copenhagen General Population Study were randomly enrolled in an MDCT substudy, and a CTA was performed in late diastole. We defined a healthy study group (n=805) by the following exclusion criteria: [1] evidence of cardiopulmonary disease, [2] hypertension, [3] use of prescription medications for cardiovascular disease, [4] overweight, defined as BMI ≥25, and [5] diabetes. Images were analyzed with semi-automatic computer software (Vitrea 6.3), and regional left ventricular thickness was assessed using the American Heart Association 17-segment model. The 17th segment was excluded due to artefacts. LV geometry was illustrated by dividing the thickness of each individual segment by the mean overall thickness of the LV.
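The geometry normalization described in Methods (each segment's thickness divided by the LV-wide mean) can be sketched as follows; the thickness values are illustrative, not study data.

```python
def lv_geometry(segment_thickness_mm):
    """Normalize AHA-model segmental thickness by the LV-wide mean,
    so a value of 1.0 means 'as thick as the average segment'.
    Segment 17 (the apical cap) is assumed already excluded, leaving
    16 segments."""
    mean = sum(segment_thickness_mm) / len(segment_thickness_mm)
    return [t / mean for t in segment_thickness_mm]

# 16 illustrative segmental thicknesses (mm) for one subject
thickness = [6.1, 6.4, 6.0, 5.8, 5.9, 6.2, 6.6, 6.9,
             6.3, 6.0, 6.1, 6.5, 7.0, 7.2, 6.8, 6.6]
geometry = lv_geometry(thickness)
```

By construction the normalized values average to 1, so the same geometric pattern can be compared across subjects with different absolute wall thickness.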
Conclusion: The normal human heart was found to be morphologically heterogeneous, and significantly more so in women than in men. This study provides gender-specific reference values for regional LVMT and LV geometry, derived with MDCT in a healthy population. The aim of this study was to derive reference values for regional LV myocardial thickness (LVMT) and geometry with the use of 320-detector computed tomography (MDCT), and to determine gender-related differences. Regional left ventricular myocardial thickness and geometry – assessed with 320-detector computed tomography in a healthy population. L Hindsø1,2, A Fuchs1, T Kühl1, BG Nordestgaard3,4, L Køber1,3, KF Kofoed1,2. 1: Department of Cardiology, Rigshospitalet, University of Copenhagen, Denmark. 2: Department of Radiology, Rigshospitalet, University of Copenhagen, Denmark. 3: Faculty of Health Sciences, University of Copenhagen, Denmark. 4: Department of Clinical Biochemistry, Herlev Hospital, University of Copenhagen, Denmark.
Characteristics | Men (n=276) | Women (n=529) | P-value
Age (years) | 57 (10) | 55 (9) | 0.04
Height (cm) | 180 (7) | 167 (6) | <0.01
Weight (kg) | 75 (7) | 63 (6) | <0.01
BMI (kg/m²) | 23 (1) | 22 (2) | <0.01
BSA (m²) | 1.9 (0.1) | 1.7 (0.1) | <0.01
Systolic BP (mmHg) | 128 (10) | 125 (12) | <0.01
Diastolic BP (mmHg) | 77 (7) | 76 (8) | 0.23
Smoking, former | 42 (15%) | 77 (15%) | 0.79
Smoking, current | 115 (42%) | 204 (38%) | 0.38
Hard physical activity | 53 (19%) | 39 (7%) | <0.01
poster
XLII CONVEGNO NAZIONALE DELLA SOCIETÀ ITALIANA DI CHIMICA AGRARIA 35 Soil Abstract 102 A LIMING AGENT BY RECYCLING MOLLUSC SHELLS Dell'Orto M.*[1], Nisar S.[2], Overmo M.B.[3], Josué G.[2], Andreola C.[2], De Nisi P.[1], Tambone F.[1], Baldi C.[5], Føreid B.[4], Fatone F.[2], Adani F.[1] [1]Università degli Studi di Milano-Dipartimento di Scienze Agrarie e Ambientali ~ Milano ~ Italy, [2]Università Politecnica delle Marche ~ Ancona ~ Italy, [3]Norsk Landbruksrådgiving Nord Norge ~ Trofors ~ Norway, [4]Norsk Institutt for Bioøkonomi ~ Ås ~ Norway, [5]Co.Pe.Mo- Cooperativa Pescatori Molluschicoltori ~ Ancona ~ Italy Total EU fishery production amounted to around 4.5 million tonnes in 2022, of which 1.1 million tonnes came from aquaculture. This poses a challenge regarding the huge volume of waste derived from seafood production and processing, in both economic and environmental terms. However, these wastes contain valuable compounds that can be recovered for use in agriculture. The EU-funded SEA2LAND project, based on the circular economy model, explores the large-scale production of fertilisers in the EU from fishery and aquaculture wastes. In mollusc cultivation and fishery, the main waste consists of shells, which are composed of CaCO3 and, to a lesser extent, MgCO3, making them a promising source of liming agent for the correction of acidic soils. Several areas in Europe have low-pH soils, due to both natural and anthropogenic processes, and require lime application for agricultural production. Agricultural lime products are categorized as 'EC Fertiliser Liming Materials' in the EU Fertilising Products Regulation (2022), the most used being ground limestone, dolomitic ground limestone, chalk, ground chalk, burnt lime and hydrated lime. However, although needed for soil quality and agricultural production, liming is often omitted because of its cost. The possibility of using waste products as liming agents addresses both economic and environmental issues.
The shellfish waste used in this work is a mixture of mussel, clam and murex shells discarded from a mollusc processing facility located in Ancona (Italy), whose production of discards amounts to about 1.4 ± 0.2 t·d⁻¹. The waste was pre-treated by shredding the feedstock in the presence of water at a 1:3 (water:shellfish) ratio, and the crushed shells (around 80% dry matter) were separated by gravity. The obtained shells were dried, milled and finally sieved at 1 mm to obtain liming agents of different particle sizes. These milled shells were first compared to reference liming agents (CaCO3 and CaO) in an incubation experiment to determine their effect on soil pH (UNI EN 14984:2006 method). The recycled liming agent was then applied at two sites in northern Norway that needed liming, one with an oceanic climate and one with a more continental climate, and compared to a conventional liming agent and to no liming. Grass was sown at both sites. Results from the incubation trial suggest that the recycled liming agent has a high pH-corrective power, similar to that of CaO and CaCO3. Results on soil pH and yield from the field trials will also be presented.
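The value of shells as a liming agent comes down to their carbonate content. A back-of-the-envelope sketch of the CaCO3-equivalent neutralizing value follows; the shell composition used is an assumed, illustrative figure, not a measured one from this work.

```python
# Molar masses (g/mol)
M_CACO3 = 100.09
M_MGCO3 = 84.31

def neutralizing_value(caco3_frac, mgco3_frac):
    """CaCO3-equivalent neutralizing value (%) of a liming material.

    Per unit mass, MgCO3 neutralizes more acid than CaCO3 because a
    mole of magnesium carbonate weighs less -- hence the molar-mass
    ratio.  Arguments are mass fractions of the dry material.
    """
    return 100.0 * (caco3_frac + mgco3_frac * M_CACO3 / M_MGCO3)

# Illustrative (assumed) shell composition: 93% CaCO3, 2% MgCO3
nv = neutralizing_value(0.93, 0.02)
```

A material with a neutralizing value in the mid-90s would behave much like ground limestone, consistent with the incubation-trial finding that the recycled agent corrects pH similarly to CaCO3.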
poster
Lipase Activity Assay: Improvements to the Classical p-Nitrophenol Palmitate Spectrophotometric Method Rafael Picazo Espinosa, University of Granada, Spain rafaelpicazo@hotmail.com Abstract Gupta's lipase activity assay is widely used for checking the lipase activity of isolates from different sources (olive oil mill wastewaters, sludge from wastewater treatment plants, dairy industry effluents, oil-contaminated soils, etc.) when seeking biotechnological applications of lipolytic strains in biofuel production, cosmetics, food and even the pharmaceutical industry. However, this assay shows several limitations when the study of lipase activity requires the use of different pH, organic solvent or temperature conditions for a single strain, or the comparison of the activity levels of several strains. In the present study, several improvements have been made so that the lipase assay can be applied to the simultaneous study of the lipase activity of several bacterial strains under different conditions of pH, temperature and presence of organic solvents. The lipases of the studied strains performed differently under different pH, solvent and temperature conditions, and the most promising conditions for using the isolates in the methanolysis and ethanolysis of fatty acids were determined. In particular, Microbacterium sp. S18 and Proteus sp. S53 showed better activity levels at pH 5 in the presence of methanol, while their performance was quite similar in ethanol and methanol at pH 8. Moreover, the feasibility of applying the studied lipases in different biotechnological fields could be inferred from their activity levels in the presence of the most commonly used organic solvents.
Gupta's lipase activity assay limitations The lipase assay developed by Gupta and collaborators is based on a spectrophotometric method that determines the lipase activity of a bacterial culture through the quantification of the p-nitrophenol released by lipase enzymes from the substrate p-nitrophenol palmitate (pNPP) (Gupta et al., 2002). In the standard assay, 1 ml of cell-free medium obtained by centrifugation of an overnight culture is mixed with 9 ml of substrate solution. This reaction mix is incubated at 30 ºC for 30 minutes, and the reaction is stopped by incubation at 100 ºC for 4 minutes before the spectrophotometric determination. Improvements to the protocol This lipase assay, however, requires relatively large volumes of reagents and cultures, and the long incubation time for each sample creates a bottleneck that limits the applicability of the method to the study of multiple samples or multiple parameters. In addition, the assay has low replicability because of changes in p-NPP absorbance, as well as emulsification problems with Triton X100, p-NPP and arabic gum when pH values are far from the standard pH 8. Briefly, the changes made to the original protocol are: 1. The buffer system includes citrate buffer to adjust pH 5-6, Sørensen's phosphate buffer to adjust pH 7-8, and glycine-NaOH buffer for pH 9-10.4. Thus, the pH of the reaction mix can be modified according to the needs of the study, the performance of the lipases can be analysed under pH conditions far from the standard, and a more realistic prediction can be made of the behaviour of the system where the enzyme will be deployed. 2. The emulsification problems have been fixed by directly dissolving Triton X100 in isopropanol (200 µl/ml), resuspending the p-NPP directly in the resulting emulsifying solution (6.6 mg of p-NPP per ml of detergent solution) and supplementing the buffer with arabic gum (11 mg of arabic gum in 20 ml of buffer) prior to mixing it with the p-NPP-isopropanol-Triton solution.
After incubating at the desired temperature for 45 minutes, absorbance at 410 nm is read in a microtiter plate reader. 3. The proportions of the different components of the reaction mix have been modified to improve replicability and to adapt the assay to 96-well plates. Thus, 1 ml of TritonX100-isopropanol-p-NPP is added t
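Once the A410 readings are collected, activity can be converted to enzyme units via the Beer-Lambert law. The sketch below uses illustrative microplate volumes and an assumed molar absorptivity for p-nitrophenolate; the absorptivity is strongly pH-dependent, so in practice a p-nitrophenol standard curve at each working pH is preferable.

```python
def lipase_activity_u_per_ml(a410, blank, eps_mM=18.0, path_cm=1.0,
                             minutes=45.0, rxn_vol_ml=0.25,
                             enzyme_vol_ml=0.025):
    """Lipase activity in U/ml (1 U = 1 umol p-nitrophenol per minute)
    from microplate absorbance, via the Beer-Lambert law.

    eps_mM: molar absorptivity of p-nitrophenolate at 410 nm in
    mM^-1 cm^-1 (~18 at alkaline pH; an assumed round value).
    path_cm and the volumes are illustrative microplate-scale
    assumptions, not the protocol's specified quantities.
    """
    pnp_mM = (a410 - blank) / (eps_mM * path_cm)  # mM p-NP in the well
    umol = pnp_mM * rxn_vol_ml                    # mM * ml = umol
    return umol / minutes / enzyme_vol_ml

activity = lipase_activity_u_per_ml(a410=0.90, blank=0.05)
```

Blank subtraction matters here because pNPP itself absorbs weakly at 410 nm and hydrolyzes slowly without enzyme.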
poster
‘Storyboarding’ in the classroom Malizukiswe V. Vacu1, Craig Ehrenreich1, Nelisiwe Chonco1, Grace Aguti2, Xiangcong A. Luo3, Bianke Loedolff1† 1Faculty of AgriSciences, Stellenbosch University; 2Earth University; 3Faculty of Arts & Social Sciences, Stellenbosch University Professional Educational Development of Academics (PREDAC) 2019 https://www.storyboardthat.com/ https://wordsearchlabs.com/view/108017 A storyboard is a teaching aid that tells a story through pictures and can support learners across all levels of the education sector. The visual display reinforces the foundational concepts underpinning each lesson. Both teachers and learners take part in this storytelling process through ongoing discussions in the classroom. Concepts and ideas are displayed visually on the storyboard and typically include (i) a review of the topic and the introduction of advanced ideas within it, (ii) the learning objectives, (iii) learners' preparation time outside and inside the classroom, (iv) obstacles to learning, and (v) understanding of the complex learning concepts covered. Learners are challenged to find the key concepts within a word search, to link those words together, and to develop challenging questions about the content. The use of storyboards gives learners the opportunity to develop important critical and creative thinking skills and provides a platform for applying and integrating the aims of the lesson. †Correspondence: bianke@sun.ac.za
poster
KIT – The Research University in the Helmholtz Association. First transmission of electrons and ions through the KATRIN beamline, KATRIN collaboration, 2018 JINST 13 P04020. Modeling of the response function of the KATRIN experiment, J. Behrens and L. Schimpf for the KATRIN collaboration. Cyclotron radiation depends on pitch angle and z-position; it causes an asymmetric broadening of the transmission function by the amount of emitted cyclotron radiation. We acknowledge the support of Helmholtz Association (HGF), Ministry for Education and Research BMBF (05A17PM3, 05A17PX3, 05A17VK2, and 05A17WO3), Helmholtz Alliance for Astroparticle Physics (HAP), and Helmholtz Young Investigator Group (VH-NG-1055) in Germany; Ministry of Education, Youth and Sport (CANAM-LM2011019), cooperation with the JINR Dubna (3+3 grants) 2017–2019 in the Czech Republic; and the Department of Energy through grants DE-FG02-97ER41020, DE-FG02-94ER40818, DE-SC0004036, DE-FG02-97ER41033, DE-FG02-97ER41041, DE-AC02-05CH11231, and DE-SC0011091 in the United States. Emission of cyclotron radiation: its influence is visible in the 83mKr campaign as a broadening of lines by up to 3% (see poster #13). Properties of the transmission function. Electric and magnetic field studies. Impact on m_ν measurement: the correction for cyclotron radiation compensates a systematic shift of Δm_ν² = −24.1 × 10⁻³ eV², required so as not to exceed the total error budget of σ_sys,tot = 0.017 eV². Kassiopeia: a modern, extensible C++ particle tracking package, Daniel Furse et al 2017 New J. Phys.
19 053012. Detailed magnetic and electrostatic simulations are required for modeling the transmission function. Energy resolution: ΔE/E = B_min/B_max, i.e. ΔE ∼ 1 eV at E₀ = 18.6 keV. Analytical transmission function (isotropic source) defined by the MAC-E filter properties: T(E₀, U_a) = 0 for E₀ < qU_a; T(E₀, U_a) = 1 − √(1 − (E₀ − qU_a)/E₀ · B_S/B_a) for qU_a < E₀ < qU_a + ΔE; T(E₀, U_a) = 1 − √(1 − B_S/B_max) for E₀ > qU_a + ΔE. Near future: investigation with a dedicated electron source (see poster #26). The response function is the convolution of the spectrum, the transmission function and other systematics. Field inhomogeneities in the analyzing plane lead to radial dependencies (due to the large spectrometer size). U_a and B_min are computed by detailed simulations using the full beamline with the 3D main spectrometer geometry. Impact on m_ν measurement: correcting for radial dependencies compensates systematic shifts of Δm_ν² = +3.3 × 10⁻³ eV² (B_min) and Δm_ν² = −3.6 × 10⁻³ eV² (U_a). β Decay Spectrum, Response Function and Statistical Model for Neutrino Mass Measurements with the KATRIN Experiment, Marco Kleesiek et al 2018. Analyzing plane of the MAC-E filter: comparison with measurement results from calibration sources (e.g. 83mKr) validates the simulation model (see poster #8). The Kassiopeia software package is used for particle tracking and field calculations. [Figure: magnetic field (T) along the beamline, z in m from the center of the WGTS, marking B_S, U_a, B_min and B_max in the analyzing plane; cyclotron energy loss E_C in meV vs. pitch angle Θ]
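The analytical transmission function quoted above can be evaluated directly. The field values below are illustrative design-like numbers (B_S = 3.6 T, B_a = 3 × 10⁻⁴ T, B_max = 6 T), not the measured KATRIN configuration; a minimal sketch:

```python
from math import sqrt

def transmission(E0, qUa, BS=3.6, Ba=3e-4, Bmax=6.0):
    """Analytical MAC-E filter transmission for an isotropic source.

    E0: electron energy (eV); qUa: retarding energy (eV); fields in T.
    The field values are assumed design-like numbers for illustration.
    """
    dE = E0 * Ba / Bmax  # filter width (energy resolution)
    if E0 <= qUa:
        return 0.0                                   # below the filter
    if E0 < qUa + dE:                                # transmission ramp
        return 1.0 - sqrt(1.0 - (E0 - qUa) / E0 * BS / Ba)
    return 1.0 - sqrt(1.0 - BS / Bmax)               # plateau

qUa = 18575.0
below = transmission(18574.0, qUa)    # fully reflected
plateau = transmission(18580.0, qUa)  # above the ~0.9 eV filter width
```

Note the ramp joins the plateau continuously at E₀ = qU_a + ΔE, since (ΔE/E₀)(B_S/B_a) = B_S/B_max.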
poster
MY MIDA JOURNEY MIDA ITN - N°813547 Marie Skłodowska-Curie Actions Mohamed El-Moursi ESR 1 Paris
poster
Ian Berry 1, Stanley Owocki 1, Matt Shultz 1, Asif ud-Doula 2 Abstract: σ Ori E is a prototypical magnetic B star with a rapid 1.2-day rotation period. Two dips in brightness can be seen in its photometric light curve, which was well fit by the Rigidly Rotating Magnetosphere (RRM) model. This model shows that if a star is rotating rapidly enough, material will become trapped in the centrifugal magnetosphere (CM). However, this model only considers absorption and as such does not fully explain the light curve of σ Ori E; we must take emission into account as well. To do this, we examine the possibility that electron scattering from the CM is responsible for the extra emission seen in σ Ori E's photometric light curve. These initial results could provide insight into explaining the photometric light curves of other magnetic stars measured by TESS. 1 University of Delaware 2 Penn State Scranton 13 - 17 July 2020 MOBSTER-1 ₋ Rotation, winds & outflows Conclusion: For this poster we set out to examine whether simple electron scattering could be responsible for extra emission previously unaccounted for in the predicted light curve of σ Ori E. So far the results, which indeed show extra emission from electron scattering, are encouraging. In the future we will tune the results to σ Ori E, as well as to other stars observed by TESS, via fine-tuning of the oblique rotator model parameters. Finally, we will incorporate these results into the RRM model. S = 0 Pure Absorption Figure 4. Bottom: Surface intensity plotted at the same four phases as Fig. 3, with emission in red. The black disk represents the star, with the red cloud showing occultation of the star by the CM. Top: Resulting light curves, for both the scattering source function case (solid curve) and a pure absorption model (dotted curve).
Note the scattering flux exceeds the continuum (dashed line) by a few percent at phases 0.1 and 0.6 in the scattering case. The arrow connects the relevant phases of the light curve to their associated surface brightness panels. Figure 2. Diagram showing how a CM can form if a star is rotating fast enough that Rk < RA, as is the case in the 3D MHD simulations used in this model. S = J Instead of RRM, we use 3D MHD simulations (see ud-Doula contributed talk on Friday for more details) with Rk = 1.25 R* < RA = 5.5 R*, and magnetic obliquity angle β = 45 degrees. This gives us the associated 3D distribution of mass density in the CM above the Kepler radius. Ignoring material below the Kepler radius, we use this to synthesize light curves wherein electron scattering from the CM both reduces the brightness from the star and provides extra emission from outside the stellar limb. The plot below shows electron scattering optical depth at four different rotational phases for an observer viewing the star at an inclination angle i = 45 degrees. Can Magnetospheric Scattering Explain Inferred Emission in Photometric Light Curves of σ Ori E and Magnetic Stars Observed with TESS? Figure 3. Surface plot of electron scattering optical depth τ for the 3D MHD model with a magnetic obliquity angle β = 45 degrees at four rotational phases and an inclination angle i = 45 degrees. The observer is looking directly down onto the magnetic pole during phases 0.5 and 1, and along the magnetic equator at phases 0.25 and 0.75. The rotation axis is vertical. This gives us a visual representation of the material in the CM rotating around the star, which itself is not shown. Figure 1. The photometric light curve for σ Ori E, with the solid line being the predicted light curve from the RRM model and the diamonds being observations (Townsend et al. 2005). Extra emission can be seen at phase 0.6, denoted by the red circle.
where µ* = (1 − R²/r²)^(1/2) is the cosine of the angular size for the stellar radius R at circumstellar radius r. S = J = ½ I* (1 − µ*) (2) We compute the surface intensity I from
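The scattering source function above follows from geometric dilution of the radiation from a uniformly bright star. A minimal sketch of S/I* = ½(1 − µ*) as a function of radius:

```python
from math import sqrt

def dilution_source(r_over_R):
    """Scattering source function S = J = 0.5 * I* * (1 - mu*) for an
    optically thin cloud at radius r above a uniformly bright star,
    with mu* = sqrt(1 - (R/r)^2) the cosine of the stellar angular
    radius.  Returns S / I* (dimensionless)."""
    mu_star = sqrt(1.0 - 1.0 / r_over_R ** 2)
    return 0.5 * (1.0 - mu_star)

# At the stellar surface half the sky is filled by the star (S = I*/2);
# far away the source function falls toward zero.
surface = dilution_source(1.0)
kepler = dilution_source(1.25)  # near the Kepler radius R_K = 1.25 R*
```

The r = 1.25 value is chosen only because the MHD model places the Kepler radius there; material near R_K sees a substantial source function, which is why CM scattering can add net emission beyond the limb.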
poster
Conservation of somatic cell replacement during early development of mouse female germline cysts Wanbao Niu, Allan C. Spradling HHMI lab, Department of Embryology, Carnegie Institution for Science Abstract In the mouse fetal ovary, Wnt4-expressing somatic cells we term "escort-like cells (ELCs)" interact with early developing cysts of both sexes. After E12.5, Lgr5+ pre-granulosa cells ingress from the ovarian surface epithelium, and our lineage tracing showed that they replace escort-like cells in the cortical region to establish the wave 2 granulosa cell population supporting primordial follicles. In contrast, lineage marking of ELCs at E10.5 showed that early somatic cells are not heavily replaced in the medullary ovarian region that gives rise to wave 1 follicles. Reflecting their distinct somatic cellular origins, second-wave follicles were ablated by diphtheria toxin treatment of Lgr5-DTR-EGFP mice at E16.5, while first-wave follicles developed normally and supported fertility. These findings argue that somatic cell replacement in mice and Drosophila has been evolutionarily conserved and, while not essential, likely aids female gamete development. Figure 1 (A) In situ hybridization (ISH) analysis shows Wnt6 (blue) and Fmr1 (red) mRNA expression in E14.5 ovaries. (B) Cellular localization of Id1 in E14.5 ovaries. Ovaries were stained for Id1 and the oocyte marker DDX4 at E14.5 by immunofluorescence. (C) Cellular localization of Gata4 in E14.5 ovaries by immunohistochemistry. (D) Electron micrograph of an E14.5 ovary showing part of a germline cyst surrounded by ELCs (yellow asterisks). Squamous membranes of ELCs surrounding the germ cells are indicated by arrowheads. Figure 2 (A) Differentiation trajectory of the E12.5, E14.5, and E18.5 ovary, beginning with escort-like cells and epithelial cells and splitting into two branches (Gw1, Gw2), constructed by Monocle. Cells are colored by developmental state.
(B) In situ hybridization analysis shows Wnt6 (blue) and Lgr5 (red) mRNA expression in the E14.5 ovary. Lgr5+ pregranulosa cells were detected in the cortical but not in the deeper subregions. [Figure panels: lineage-tracing schematics for Axin2CreERT2/+; R26RYFP/+ and Lgr5CreERT2/+; R26RtdT/+ mice with tamoxifen induction timelines (E10.5–P21); micrographs of cortex and medulla at E12.5–P1 stained for YFP/tdTomato, DDX4 and DAPI; Lgr5-DTR-EGFP ablation timeline (DT at E16.5, analysis at P5 and P21); Ddx4/DAPI-stained ovaries and follicle counts (primordial, primary, antral) for WT-control vs. Lgr5DTR/+ mice; counts of labeled ELCs and labeled pre-granulosa cells] Figure 3 (A-B) Schematic of the ELC and surface granulosa cell tracing strategies. (C) Lineage tracing of Axin2+ escort-like cell progeny from E12.5 and E19.5 demonstrates that Axin2+ escort-like cells mainly contribute to the first-wave follicles. (D) Lineage tracing of Lgr5+ surface-derived pregranulosa cell progeny demonstrates that surface pregranulosa cells mainly contribute to the cortical second-wave follicles. (E-F) YFP-labeled ELCs and tdTomato-labeled cortical cells were quantitated. Results Discussion Figure 4 (A) Experimental strategy to ablate Lgr5-expressing cells using the Lgr5-DTR-EGFP mouse model. (B) Histological analysis of ovaries from wild-type and Lgr5DTR/+ animals at P5 and P21.
(C-D) Follicle quantification in the ovary at P5 and P21 after DT administration at E16.5. N
poster
Acknowledgments The project "Mythological Routes in Eastern Macedonia and Thrace" aims to showcase the cultural wealth of the Region of Eastern Macedonia and Thrace (R.E.M.Th.) as an axis of tourism development for the area, starting from the systematic recording, mapping and promotion of the myths that refer to locations within the R.E.M.Th. The project is implemented by the ILSP/Athena R.C. – Xanthi Branch and the Department of Greek Philology of the Democritus University of Thrace. The material collected through the research was used to develop an educational application addressed to teachers and students at all levels of education. Introduction Mythotopia, the educational application Conclusion Π. Καριώρης, Ά. Βακαλοπούλου, Γ. Γιαννόπουλος, Β. Γιούλη, Α. Δούπας, Ν. Μιχαηλίδου, Γ. Μούρθος, Ν. Μπικάκης, Π. Μποτίνη, Δ. Σαραφοπούλου, Ν. Σιδηρόπουλος, Γ. Σταϊνχάουερ, Δ. Τσιαφάκη, Χ. Φλούδα Institute for Language and Speech Processing, Athena Research Center {pkarior, avacalop, giann, voula, adoupas, amixaili, jmourthos, bikakis, pbotini, domna.sarafopoulou, nsidir, stein, tsiafaki, cflouda}@athenarc.gr​ The educational application includes original worksheets, lesson plans, exercises and activities. Depending on the level and age of the users, these make pedagogical use of the research results produced within the project. The school subjects covered are Modern Greek Language, Ancient Greek Language and Literature, History, Informatics, Creative Research Activities, and Greek and European Culture. In higher education, in the departments of the School of Classics and Humanities and in other related departments, they cover Ancient Greek and Latin Literature, History, and more.
Methodology Other information Alongside the existing material, teachers can easily add their own educational scenarios and activities and design original educational material for the needs of their pupils or students, based on the platform's rich multimedia content. Teacher users can also monitor the progress of their student users by generating reports. Features 1st Workshop on Human-Centric Sciences and Technologies in Greece | October 20, 2022, Xanthi, Greece Through the platform of the educational application, MYTHOTOPIA moves from the myths to the place, and from there unfolds many details about every aspect and every axis of interest. Both the project itself and the way the material is presented lend themselves to original educational and interactive presentations. The rich multimedia material can become the vehicle for countless educational scenarios across many subjects, depending on the level of education. At the same time, the myths can serve as a springboard for students' personal development and for cultivating social and emotional skills. Using the material of MYTHOTOPIA, teachers can design units, in combination with the official curriculum, offering students innovative and engaging activities. This work was carried out within the project "Mythological Routes in Eastern Macedonia and Thrace" (MIS 5047101), part of the Action "Support for Regional Excellence", funded by the Operational Programme "Competitiveness, Entrepreneurship and Innovation" under the NSRF 2014-2020, co-financed by Greece and the European Union (European Regional Development Fund). The project runs from 02/12/2020 to 15/05/2023. The material can also be downloaded and printed.
educational application: scenarios educational application: activities https://mythotopia.eu
poster
References: EGP+ Mayorga et al. (2021) ApJ, PICASO Batalha et al. (2018) ApJ, NASA Exoplanet Archive. Further Reading: Robinson & Marley (2014) ApJ, Marley & McKay (1999) Icarus, Ackerman & Marley (2001) ApJ, Winn & Fabrycky (2015) ARA&A In the era of multi-method planet characterization, eccentric planets offer the opportunity to holistically understand a planet's atmosphere. The time-variable radiation deposited in the atmosphere of an eccentric planet induces variations in the thermal and chemical structure that may be detectable with current and future missions. Using EGP+, we explore a grid of cloudless atmospheres to ascertain the dominant variables that control the atmospheric response to periastron passage and to predict observable features indicative of that response. Our preliminary grid (★) explores orbital period, eccentricity, stellar effective temperature, and planet gravity, resulting in 48 model giant planets. Overview A year in the life of an atmosphere… 1 At apastron the planet is cold and in equilibrium; cloud decks may be found. 2 The planet begins to heat up; cloud decks shift upwards and begin to evaporate. 3 The planet is rapidly heated; only the hottest condensates remain, if any. 4 The planet begins to cool; clouds begin to reform. Planet's Orbit Direct Imaging Occulted Region Direct Imaging High-Res Ground-Based Spectroscopy Primary Transit Secondary Eclipse Observed by… Chemical Responses in the Atmosphere I'm actually an animation! Please scan me! With PICASO, we can explore where each chemical species has the most dominant effect on the observed spectrum as brought on by changing temperatures. We track gases such as methane, carbon monoxide, carbon dioxide, water, and ammonia, as well as high-temperature oxides and alkalis. A. The planet-star flux ratio B. The flux ratio with an equilibrium-temperature blackbody removed, to highlight chemical changes rather than temperature changes C.
The temperature-pressure profile as the planet progresses through each orbital location (inset) D. The pressure level at which the chemical species would make the atmosphere optically thick E. The abundance profiles of the chemical species A B C D E Future developments to EGP+ include enabling self-consistent clouds. The addition of cloud opacity could drastically change the resulting spectra! As the planet approaches periastron, the atmosphere heats, altering the behavior of clouds. Cloud decks may shift upwards, or they may become cold trapped below the observable region of the atmosphere until the planet sufficiently cools at the lowest pressures. Adding Clouds Observational Signatures of Atmospheric Variability Laura Mayorga1, J. Lustig-Yaeger1, M. Marley2, T. Robinson3, K. Stevenson1, E. May1 1Johns Hopkins University Applied Physics Laboratory, 2Lunar and Planetary Laboratory, 3Northern Arizona University Eccentric Giant Planets laura.mayorga@jhuapl.edu
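The 48-model grid is simply the Cartesian product of the four swept parameters. The parameter levels below are invented placeholders (the poster states only the four dimensions and the total of 48, not the values used); a minimal sketch:

```python
from itertools import product

# Illustrative parameter levels -- assumed, not the published grid.
periods_days = [10.0, 30.0, 100.0]       # orbital period
eccentricities = [0.3, 0.6]              # orbital eccentricity
t_eff_star_K = [3500, 4500, 5500, 6500]  # stellar effective temperature
gravities_mks = [10.0, 25.0]             # planet surface gravity (m/s^2)

# 3 * 2 * 4 * 2 = 48 model giant planets
grid = list(product(periods_days, eccentricities,
                    t_eff_star_K, gravities_mks))
```

Enumerating the grid this way makes it easy to dispatch one atmosphere model per parameter combination and to tag each output spectrum with its generating parameters.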
poster
FBN / Wilhelm-Stahl-Allee 2 / 18196 Dummerstorf / www.fbn-dummerstorf.de Who? Where? On the influence of the breeding goal on learning ability and behavioural flexibility in goats – first results Christian Nawroth1, Katrina Rosenberger2, Nina Keil2, Jan Langbein1 1 Leibniz Institute for Farm Animal Biology, Institute of Behavioural Physiology, Dummerstorf 2 Federal Food Safety and Veterinary Office, Centre for Proper Housing of Ruminants and Pigs, Agroscope Tänikon, Ettenhausen Breeding goals (e.g. milk yield) can indirectly influence the physiology and behaviour of farm animals (Rauw et al., 1998). Resource allocation theory: energy invested in production leaves less energy for other traits (Schütz & Jensen, 2001). So far, little is known about the extent to which breeding for performance also affects cognitive parameters. We compared the learning performance and flexibility of dwarf goats (low milk yield, extensive use) and dairy goats (high milk yield, intensive use) using a visual discrimination task and a reversal learning task. How? Discrimination task Reversal learning task 8 dwarf goats 9 dairy goats Dwarf and dairy goats did not differ in the speed with which they learned the discrimination task. Dwarf goats tended to learn the reversal task faster than dairy goats. The speed of learning the visual discrimination task correlated with performance in the subsequent reversal learning task. @GOATSTHATSTARE Example video Number of sessions to the learning criterion for dwarf (dark grey) and dairy goats (light grey) (t-test; discrimination task: t8 = 0.870, P = 0.40; reversal task: t8 = 2.050, P = 0.058) Number of sessions to the learning criterion for the discrimination and reversal learning tasks (Spearman rank correlation test) P = 0.058 P = 0.40 R² = 0.58 P = 0.033 Experimental setup NAWROTH@FBN-DUMMERSTORF.DE CHRISTIAN.NAWROTH.WORDPRESS.COM WANT TO KNOW MORE?
GEFÖRDERT DURCH Einleitung Ergebnisse Zusammenfassung Methoden vs. 20 Sessions mit je 12 Trials Lernkriterium: 2 x 10/12 Trials korrekt
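The reported analyses (a t-test on sessions-to-criterion between breeds and a Spearman rank correlation between the two tasks) can be reproduced in outline with SciPy. This is only a sketch: all values below are hypothetical stand-ins, not the study's raw data, and a Welch (unequal-variance) t-test is used as one reasonable choice.

```python
# Illustration of the poster's statistical analyses with MADE-UP data.
import numpy as np
from scipy import stats

# Hypothetical sessions-to-criterion in the reversal task
dwarf = np.array([6, 7, 7, 8, 6, 5, 7, 6])        # 8 dwarf goats
dairy = np.array([8, 9, 10, 8, 9, 11, 8, 9, 10])  # 9 dairy goats

# Welch t-test: do the breeds differ in learning speed?
t_stat, p_val = stats.ttest_ind(dwarf, dairy, equal_var=False)

# Hypothetical per-animal sessions-to-criterion in both tasks
disc = np.array([4, 5, 6, 5, 7, 4, 6, 5])  # discrimination task
rev = np.array([6, 7, 8, 7, 9, 5, 8, 6])   # reversal task

# Spearman rank correlation between the two tasks
rho, p_rho = stats.spearmanr(disc, rev)

print(f"Welch t = {t_stat:.2f}, P = {p_val:.3f}; Spearman rho = {rho:.2f}")
```

With real data one would additionally check assumptions (sample sizes here are small, which is why a rank correlation is a natural choice for the task comparison).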
poster
Canonical X-Ray Fluorescence Line Intensities as Column Density Indicators in X-Ray Binaries
Roi Rahin and Ehud Behar, Department of Physics, Technion, Haifa 32000, Israel

X-ray line fluorescence is ubiquitous around powerful accretion sources such as X-ray binaries. The brightest and best-studied line is the Fe Kα line at 6.4 keV, but the Kα lines of other elements hold essential information about the source. We present a survey of well-measured Chandra/HETG grating spectra featuring several Kα fluorescence lines from elements between Mg and Ni. We identify a common trend that dictates the Kα line intensity ratios between elements. The line intensities are well described by a simple plane-parallel approximation of a near-neutral, solar-abundance, high column density (> 1024 cm−2) medium. We interpret deviations from these intensity ratios as due to excess column density along the line of sight beyond the Galactic column. Specifically, for Vela X-1 and GX 301-2 our method yields phase-dependent estimates of the excess column, which sheds light on the varying environment of the binaries as the systems rotate. In GX 301-2, we can differentiate between the excess obstruction along the line of sight towards the ambient ionized gas and that towards the neutral fluorescing medium.

Conclusions
Several conclusions follow from our analysis:
• Kα intensity ratios between elements follow similar trends in most sources, except for a few cases in which the lower-Z lines (Mg, Si) are reduced by orders of magnitude.
• For the most part, the relative Kα intensities follow a simple plane-parallel approximation of a dense, near-neutral, optically thick medium.
• The reduced Kα intensities of the low-Z elements are explained satisfactorily and self-consistently by excess column density along the line of sight.
This excess column is corroborated by independent measurements.
• Kα intensity ratios can thus be used to measure the ambient column density in an X-ray binary system.

Excess column density and canonical values
X-ray Fluorescence Relative Line Intensities
The relative intensities of X-ray fluorescence lines are derivable from fundamental physics under a few simple assumptions. Using a simple plane-parallel approximation, we derive line intensities using only the source spectrum I, the elemental abundances AZ, the fluorescence yield ωKα, and the reflector column density. By assuming an optically thick reflecting medium, we reduce the dependency to the source spectrum, abundances, and fluorescence yields alone. We compare the relative intensities derived from this model to observations. The figures clearly show a discrepancy that is more pronounced for low-Z elements; notice especially GX 301-2, where a disparity of two orders of magnitude is visible for Si. Another discrepancy appears between different Vela X-1 phases, where the abundances clearly cannot change and the source spectrum is unlikely to change drastically. The model gives a good fit for NGC 1068 and MRK 3, implying a missing ingredient in the model for the other sources. We find excess line-of-sight column density to be a possible solution.

[Figures — Top: relative intensities of Kα fluorescence lines originating from different column densities of the fluorescing medium. Bottom: relative intensities for different power-law indices for an optically thick medium. Relative intensities of Kα fluorescence lines for an optically thick medium compared to actual observations from various sources. Relative flux of Kα fluorescence lines compared to theoretical values after excess column density correction; the green line in the bottom figure corresponds to a finite reflector medium with NH = 2 × 1023 cm−2.]

We propose a simple explanation for the relatively weak low-Z lines.
Since the deficiency is Z-dependent, excess line-of-sight column density beyond the Galactic column could explain the observed ratios. If this explanation is correct, a single column density value must account for the quenching of a
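The Z-dependent quenching invoked above can be illustrated with a toy attenuation calculation. This is only a sketch, not the paper's model: it assumes a crude E^-3 scaling of the photoelectric cross-section per hydrogen atom, with a hypothetical normalization sigma0 at 1 keV.

```python
import math

# Kα line energies (keV) for a few elements between Mg and Ni
K_ALPHA_KEV = {"Mg": 1.25, "Si": 1.74, "S": 2.31, "Fe": 6.40, "Ni": 7.47}

def transmission(energy_kev: float, n_h: float, sigma0: float = 2e-22) -> float:
    """Fraction of line photons surviving a column n_h (cm^-2).

    sigma0 is an ASSUMED photoelectric cross-section per H atom at 1 keV
    (cm^2); sigma(E) ~ sigma0 * E**-3 is a rough scaling, good enough to
    show why low-Z (low-energy) Kα lines are quenched far more strongly.
    """
    sigma = sigma0 * energy_kev ** -3.0
    return math.exp(-n_h * sigma)

# An excess column of 1e23 cm^-2 barely touches Fe Kα but suppresses
# Mg and Si Kα by orders of magnitude:
for elem, e_kev in K_ALPHA_KEV.items():
    print(f"{elem}: T = {transmission(e_kev, 1e23):.2e}")
```

The steep energy dependence is the whole point: one excess-column value simultaneously predicts a small Fe Kα correction and order-of-magnitude suppression of Mg and Si, which is what makes the ratios usable as a column density diagnostic.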
poster
Stella Thoben, Franziska Weng, Thilo Paul-Stueve
Kiel University — Service Centre Research, IT and strategic Innovation, Computing Centre and University Library
https://www.synfo.uni-kiel.de

Synergy Creation on the Operational Level of Research Data Management Procedures
Workflows for cross-institutional research data management. Model cooperation agreements: a legal basis for implementing procedures infrastructurally. Policy drafts: binding regulations for cross-institutional research projects.

In the first work package of the project, a survey on the research data management practice of researchers at universities and other institutions in Schleswig-Holstein was conducted. First results show that over 50% of respondents need storage solutions as well as legal and technical advice, while most respondents publish research data and software code without specifying a license.

Project SynFo — Organisational: workflows, involved parties, responsibilities. Legal: authorship, rights of use, data protection, licenses. Technical: data and metadata formats, protocols, access. Different scientific cultures: disciplines, institutions.

Research data management takes place under complex organisational framework conditions, especially in research associations such as CRCs, Excellence Initiative formats, and BMBF or EU projects. Not only can researchers from one institution be involved in several alliances and thus be subject to different data management plans for identical data sets; a research network also often involves many cooperating institutions, each with its own institutional or overarching guidelines and strategies for research data management. Each type of research institution is characterized by its institutional and scientific culture; it has a specific mandate arising from its sponsorship and is usually associated with other research institutions in an umbrella organisation.
This can lead to different or even contradictory data management policies and plans within one research alliance. The combination of an increasing share of transdisciplinary research, very different types of research data, and the frequently formulated requirements for structured data management thus creates a difficult situation for researchers and for providers of data management infrastructures. The project therefore aims to develop a researcher-centric solution for pragmatic research data management. Based on an evaluation of concrete organisational and technical implementations in various institutions involved in a research network, commonalities in the structural and legal context are identified. The aim is to define organisational and technical interfaces and, where possible, to harmonise data management for researchers. The research location Kiel is predestined for this project: owing to the diversity of research subjects at Kiel University (Christian-Albrechts-University of Kiel; CAU), the only comprehensive university of the state of Schleswig-Holstein, there are partly overlapping research data management cooperations with non-university institutions such as Helmholtz institutes (notably GEOMAR and Geesthacht), Leibniz institutes and service facilities (such as IfW, IPN, ZBW, and the Research Centre Borstel), and the Max Planck Institute for Evolutionary Biology in Plön; all of these institutions maintain close cooperation with the CAU and with each other.

Example Workflow
Possible results from a cross-institutional perspective can be harnessed as a basis for cross-cutting, synergetic services in the form of procedures, model cooperation agreements, and corresponding policy drafts for collaborative projects.
Research Landscape at Kiel – CAU Perspective: Kiel Marine Science; Kiel Life Science; Societal, Environmental and Cultural Change; Kiel Nano Surface & Interface Science; SFB 1182; Johanna Mestorf Academy; Zentrum für Baltische und Skandinavische Archäologie. Project SynFo is sponsored by the German Federal Ministry of Education and Research (BMBF). The results of th
poster
DI31A-1779 — Comparison of preconditioned iterative methods applied to Stokes flow [Citcom style]
[Figure: CPU time (0–1200 s) for eight solver variants: cg-pc_uw-no_scale, cg-bfbt-scale, cg-pc_uw-no_scale-unpcnorm, fgmres-bfbt-scale, fgmresR-upper-K_inexact-pc_uw-scale, gmresR-upper-K_inexact-bfbt-scale, gmresR-upper-K_inexact-pc_uw-no_scale, gmresR-upper-K_inexact-pc_uw-scale.]

Motivation
We are interested in exploring how knowledge of shear-band spacing from 2D (and 3D) extension models (figure M1) carries over to transform environments. The experimental setup is a simple two-layer box with plastic material over a viscous substrate (figure M2), to which simple-shear boundary conditions are applied. The shear bands that form are relatively stable under further deformation because of the geometry involved, so we can watch how the population of structures evolves, knowing that the effects of block rotations are minimal. We can also look at quasi-2D experiments (thin boxes) to further simplify the analysis of the results.

[Figure M1: number of faults along 160 km for low- vs. high-contrast models (Wijns et al.); parameters rh = 0.4, 0.5, 1.0; rε = 1.0; wf = 0.1.]
[Figure M2: quasi-2D two-layer setup (plastic layer over viscous substrate) with velocity (V∥, V↑) and periodic boundary conditions; fast vs. slow healing; strong softening; shear bands visualized with blue paint.]
[Figure: 3D shear-band volume and dissipation vs. softening rate for weak, strong, and very strong lower layers; fast vs. slow healing; viscous layer, plastic failed/not-failed; deep vs. shallow shear bands.]
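The solver names in the CPU-time comparison refer to Krylov methods (CG, GMRES, FGMRES) combined with block preconditioners for the Stokes saddle-point system. As a minimal illustration (a hypothetical toy problem, not the Citcom implementation), the sketch below applies an upper block-triangular Schur-complement preconditioner with SciPy; the exact Schur complement stands in for the cheap approximations (e.g. BFBt) that the variants in the figure use.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy saddle-point system  K z = f,  K = [[A, B^T], [B, 0]]  (Stokes-like):
# A is an SPD "viscous" block, B a divergence-like coupling (both made up).
n, m = 50, 25
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()
B = sp.diags([1.0, -1.0], [0, 1], shape=(m, n)).tocsc()
K = sp.bmat([[A, B.T], [B, None]]).tocsc()
rng = np.random.default_rng(0)
f = rng.standard_normal(n + m)

# Upper block-triangular preconditioner M = [[A, B^T], [0, -S]] with the
# exact Schur complement S = B A^{-1} B^T (small and dense here; a real
# solver would replace S by a cheap approximation such as BFBt).
solve_A = spla.factorized(A)
S = B @ np.linalg.solve(A.toarray(), B.toarray().T)

def apply_Minv(r):
    ru, rp = r[:n], r[n:]
    xp = -np.linalg.solve(S, rp)   # -S xp = rp
    xu = solve_A(ru - B.T @ xp)    #  A xu = ru - B^T xp
    return np.concatenate([xu, xp])

M = spla.LinearOperator((n + m, n + m), matvec=apply_Minv)
z, info = spla.gmres(K, f, M=M, maxiter=50)
res = np.linalg.norm(K @ z - f) / np.linalg.norm(f)
print(info, res)
```

With an exact block-triangular preconditioner the preconditioned operator has unit eigenvalues, so GMRES converges in a couple of iterations; the practical trade-off explored in the figure is how much that iteration count grows when S is only approximated.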
poster
Characterization, modification and applicability of an in vitro placental transfer assay for screening purposes — P92
Caroline Gomes*, Catharina W. van Dongen, Barbara Birk, Eric Fabian, Julian Doersam, Bennard van Ravenzwaay, Robert Landsiedel
Experimental Toxicology and Ecology, BASF SE. *Contact: caroline.a.gomes@basf.com

Introduction
Alternative methods to animal testing are being developed to address developmental toxicity, and placental transfer is essential for determining the potential embryotoxic effects of a substance. An in vitro placental transfer assay using the human trophoblastic cell line BeWo (b30 clone) has been developed in a transwell system (Figure 1) and shows good correlation with ex vivo data1,2. The objective of this study was to improve and characterize the methodology of this model and to assess the placental transfer of 7 substances in order to evaluate the model's applicability.

Figure 1. Schematic overview of the in vitro BeWo transfer assay.

Methodology
BeWo b30 cells (Addexbio, USA) were cultivated under the "regular condition", which refers to the methodology described in the literature1. An optimized protocol was tested to avoid refreshing the cell culture medium every day (Figure 2). The parameters used to check barrier integrity, cell growth, and cytotoxicity of the control substances are summarized in Figure 2. Papp values were calculated by the equation

Papp (cm/s) = (ΔQ/Δt) / (A × C0)

where ΔQ is the amount of compound that entered the basolateral compartment (nmol), Δt is the duration (seconds) of the transfer experiment, A is the cell surface area of the insert (cm2), and C0 is the exposure concentration (µM).

Figure 2. Methodology scheme of the in vitro placental transfer assay using the BeWo b30 cell line. The parameters assessed over time or on the transfer testing day (day 6) are numbered 1 to 4. Fluorescein was measured by spectrophotometer. Amoxicillin and antipyrine were determined by LC-MS analysis at Pharmacelsus GmbH; we are grateful to Dr. Ursula Müller-Vieira. TEER: trans-epithelial electrical resistance. WPI: World Precision Instruments (USA).

Results: characterization and optimization
Figure 3. (A) TEER measurements of the BeWo b30 cell layer from day 3 to day 7 after seeding. (B) Fluorescein transfer in the in vitro placental transfer model from day 3 to 7 after seeding (regular and optimized conditions, runs 1–3). Data are shown as mean ± standard deviation of triplicates per experiment.

Substance    Ex vivo placental transfer index    In vitro relative Papp    Papp (10-6 cm/s)    Recovery (%)
Acyclovir    0.327                               0.41                      18 ± 1.2            128
Caffeine     0.958                               1.34                      59 ± 11             103
Cimetidine   0.469                               0.30                      13 ± 0.9            109

Figure 4. Papp values of the permeability controls, antipyrine and amoxicillin, determined at day 6 (at 60 minutes) in the in vitro placental transfer model, compared with literature values (Li et al., 2013; Li et al., 2015; Li et al., 2016; Kloet et al., 2015; Strikwold et al., 2017). [Panel labels: whole insert; diverse layers; uniform layers.] Data are shown as mean ± standard deviation.

Results: model applicability
The applicability of the model was assessed by: (a) comparing the in vitro relative Papp values of 3 substances (caffeine, acyclovir, and cimetidine) with data from the ex vivo human placental transfer assay under the optimized condition; and (b) assessing the in vitro placental transfer of 4 azoles (difenoconazole, flusilazole, miconazole, and triadimefon) under the regular condition.

[Figure: basolateral amount (nmol) vs. time (minutes) for flusilazole, miconazole, difenoconazole, and triadimefon; R2 values 0.9913, 0.5991, 0.9945, 0.9742.]
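The Papp equation lends itself to a one-line helper. The sketch below uses hypothetical inputs (the 1.12 cm2 insert area and 100 µM donor concentration are assumptions for illustration, not the poster's figures); note that with ΔQ in nmol and C0 in µM (1 µM = 1 nmol/cm3), the units reduce directly to cm/s.

```python
def apparent_permeability(delta_q_nmol: float, delta_t_s: float,
                          area_cm2: float, c0_um: float) -> float:
    """Papp (cm/s) = (ΔQ/Δt) / (A · C0).

    delta_q_nmol: amount reaching the basolateral compartment (nmol)
    delta_t_s:    duration of the transfer experiment (s)
    area_cm2:     cell surface area of the insert (cm^2)
    c0_um:        exposure concentration (µM = nmol/cm^3), so units cancel to cm/s
    """
    return (delta_q_nmol / delta_t_s) / (area_cm2 * c0_um)

# Hypothetical example: 0.5 nmol crossed in 60 min across an assumed
# 1.12 cm^2 insert at C0 = 100 µM -> on the order of 1e-6 cm/s
papp = apparent_permeability(0.5, 3600.0, 1.12, 100.0)
print(f"{papp:.2e} cm/s")
```

A relative Papp, as reported for the test substances, is then just the compound's Papp divided by the antipyrine Papp from the same setup.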
poster