Adapted from the design made by pikisuperstar / Freepik
A gateway to the European Open Science Cloud
OpenAIRE Content Provider Dashboard: enabling trust and value within the EOSC repositories
Authors: André Vieira, andre.vieira@usdb.uminho.pt; Pedro Príncipe, pedro.principe@usdb.uminho.pt
OpenAIRE has received funding from the European Union's Horizon Europe Research and Innovation programme under Grant Agreements No. 101017452 and 101017536.

What is PROVIDE?
PROVIDE is the scholarly gateway that receives content registration requests and guides content providers through a personalised, user-friendly dashboard showing the necessary steps and the status of the services needed to complete the process.

Benefits
- Extends repository metadata models to Open Science practices.
- Keeps collections up to date: alerts, enrichments, additions.
- Fosters notification-based and federated dissemination of knowledge.
- Improves repository interoperability and the visibility of collections and content.
- Advances institutional memory and research assessment.

Support Features
- Validator service and data source registration.
- Collection monitor to track the aggregation history.
- OA Broker service offering metadata enrichments.
- Usage Counts providing metrics (downloads and views) for data sources.

https://provide.openaire.eu
Validate and register your data source using the OpenAIRE authoritative registries.

Role in the EOSC onboarding
- Interoperability compliance via the OpenAIRE Guidelines.
- Every data source validated in OpenAIRE is EOSC compliant (Guidelines: v3.0, v4.0, data, CRIS).
- Integration of the data source registration workflow within the EOSC Research Product Catalogue.
- The OpenAIRE Graph becomes a key asset for discovery and monitoring within the EOSC: https://graph.openaire.eu/

OpenAIRE Support
- Learn more: www.openaire.eu/guides
- Ask a question: www.openaire.eu/helpdesk
- EOSC Providers Hub: https://eosc-portal.eu/eosc-providers-hub
- Instructions for EOSC onboarding: https://eosc-portal.eu/eosc-providers-hub/how-become-eosc-provider
poster
THE DELIVERY STATUS OF JUNO 20" PMTS AND THEIR PERFORMANCE
Rong Zhao 1 (on behalf of the JUNO collaboration), 1 Sun Yat-Sen University, zhaor25@mail2.sysu.edu.cn

Abstract
The Jiangmen Underground Neutrino Observatory (JUNO) [1] is a multi-purpose neutrino experiment currently under construction in South China. Due to its unprecedented requirement on the photocathode coverage, needed to reach sufficient energy resolution, a total of 20k 20-inch photomultiplier tubes (PMTs) will be deployed in the detector system, including 5k conventional Hamamatsu dynode PMTs (R12860) and 15k newly developed PMTs using micro-channel plates (MCP-PMTs) from North Night Vision Technology (NNVT). The JUNO collaboration has developed a PMT mass testing system to characterize these PMTs and select the qualified ones for the final installation. Up to now, we have received and completed the characterization of 5k Hamamatsu PMTs and 13.1k MCP-PMTs. In this poster, we present the latest delivery details and the performance results, including the typical single-photoelectron (SPE) waveform, the SPE spectra of the two types of PMTs, the SPE resolution, the photon detection efficiency (PDE), the dark count rate (DCR), etc.

Introduction of the JUNO Detector
The JUNO central detector will be filled with liquid scintillator; about 15k 20" MCP-PMTs and 5k 20" Hamamatsu PMTs will be used to collect scintillation light and Cherenkov light. The geometry of the JUNO detector and PMTs is shown in Figure 1.
Figure 1: JUNO detector and geometry of Hamamatsu PMT and MCP-PMT

PMT Delivery and Test Status
By May 2020, 18k of the total 20k PMTs had been delivered to the JUNO PMT storage site, the acceptance tests of 15k PMTs had been finished, and 5k MCP-PMTs had been potted, as illustrated in the figure below.
[Delivery status figure: NNVT MCP-PMT — 15k total, 13.1k delivered, 10k tested, 5.3k potted; Hamamatsu dynode PMT — 5k tested.]

All of the delivered PMTs undergo a detailed visual check and an electronic performance test before they are potted and finally installed into the JUNO detector.

PMT Testing System
Two container PMT testing systems with electromagnetic shielding are currently used for the mass acceptance tests [4]. Each of them consists of 36 individual drawers (see Figure 2). The PMT inside each drawer can be illuminated with a self-stabilized LED or a picosecond laser, both at 420 nm wavelength. Two PMT scanning stations [2][3] were built for the characterization of the whole PMT photocathode. They allow individual PMTs to be tested in all relevant aspects by scanning the photocathode and identifying any potential problems.
Figure 2: The container PMT mass test system and the PMT scanning station

Performance of Tested PMTs
The typical single-photoelectron waveforms of the Hamamatsu PMT and the MCP-PMT are shown in Figure 3.
Figure 3: Typical SPE waveforms and 2D waveforms of Hamamatsu and MCP-PMTs. Left: waveform of Hamamatsu PMT EA0419; right: waveform of MCP-PMT PA1707-1090.

The SPE charge spectra of PMTs, obtained with an LED at low illumination (average photon number µ ≤ 1), are shown in Figure 4.
Figure 4: Charge spectra of PMTs. Left: Hamamatsu PMT EA0419 with gain = 9.9E6 and µ = 0.16, 0.79; right: MCP-PMT PA1707-1090 with gain = 1.04E7 and µ = 0.14, 0.84.

Performance parameters of the PMTs can be derived from the charge spectrum and other dedicated measurements.
Table 1: Performance parameters of JUNO accepted PMTs

PMT        PDE(%)  DCR(kHz)  FWHM(ns)  TTS(ns)  resolution  risetime(ns)  falltime(ns)  P/V
Hamamatsu  28.17   15.38     11.56     1.42     0.28        6.94          10.20         3.83
MCP        27.20   48.94     8.18      6.35     0.32        4.01          17.97         4.28
MCP(HQE)   30.14   49.73     7.83      6.35     0.33        4.95          17.10         3.93

PDE and DCR Distributions
The PDE and DCR distributions of the PMTs (the methodology and the exact definitions can be found in [4]) are shown in Figure 5; for the MCP-PMTs, the high-quantum-efficiency version has a higher detection efficiency, benefiting from the new photocathode technology.
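The gain quoted with the charge spectra in Figure 4 can be estimated from the mean SPE charge divided by the electron charge. A minimal sketch follows; the toy spectrum and the peak window are assumptions for illustration, not JUNO data (a real analysis fits the pedestal and SPE peak together):

```python
import numpy as np

E_CHARGE = 1.602e-19  # electron charge [C]

def spe_gain(charges, q_min, q_max):
    """Estimate a PMT gain from the single-photoelectron (SPE) charge peak.

    Takes the mean charge inside a window [q_min, q_max] bracketing the
    SPE peak of the charge spectrum; gain = mean SPE charge / e.
    """
    q = np.asarray(charges)
    sel = q[(q > q_min) & (q < q_max)]
    return sel.mean() / E_CHARGE

# Toy spectrum: pedestal noise plus an SPE peak near 1.6 pC (gain ~1e7),
# roughly matching the gains quoted in Figure 4.
rng = np.random.default_rng(0)
pedestal = rng.normal(0.0, 0.05e-12, 5000)
spe = rng.normal(1.6e-12, 0.45e-12, 2000)   # sigma/mean ~ 0.28 resolution
spectrum = np.concatenate([pedestal, spe])

gain = spe_gain(spectrum, 0.8e-12, 2.8e-12)
print(f"estimated gain: {gain:.2e}")
```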
poster
Characterising the detector response of the SuperFGD as part of the T2K near detector upgrade
Tristan Doyle (on behalf of the T2K collaboration), tristan.doyle@stonybrook.edu

1. The T2K Experiment
The Tokai-to-Kamioka (T2K) experiment is a long-baseline neutrino experiment based in Japan. A νμ (ν̄μ) beam is produced at J-PARC, characterised by near detectors and detected at Super-Kamiokande. T2K measures νμ (ν̄μ) disappearance and νe (ν̄e) appearance.

2. Near Detector (ND280) Upgrade
The π0 detector is replaced with three new sub-detectors:
→ Super Fine-Grained Detector (SuperFGD): highly segmented target material with the ability to reconstruct neutrons and lower-momentum protons.
→ High-Angle Time Projection Chambers (HATPCs): measure momentum, charge and particle ID with better angular acceptance than before. See posters by Matteo Feltre and Ulysse Virginet.
→ Time-of-Flight (ToF): precise timing information to reject backgrounds and improve reconstruction.
See physics capabilities in posters by Liz Kneale, Katharina Lachner and Weijun Li.

3. SuperFGD Concept and Construction
2 million optically isolated 1 cm3 plastic scintillator cubes [1]; 56,000 wavelength-shifting (WLS) fibers:
→ each coupled to a multi-pixel photon counter (MPPC)
→ three orthogonal fibers per cube
The concept was proven in charged-particle [2] and neutron [3] beam tests. Detector assembly took place at J-PARC, October 2022 - April 2023:
→ cube layers installed with fishing lines
→ vertical alignment using metallic rods
→ fishing lines replaced with WLS fibers
→ MPPCs and LED calibration system installed
The SuperFGD was installed in October 2023. Find a description of the electronics in the poster by Viet Nguyen.

4. Light Yield
ADC counts are measured from the electronics and can be converted to photoelectrons (p.e.). See the poster on calibration by Daniel Ferlewicz. For each hit there is a high-gain (HG) ADC and a low-gain (LG) ADC; the linear relationship between HG and LG provides a larger dynamic range. Time over threshold (ToT) is also measured for each hit and can be converted to HG using an exponential relationship, providing an even larger dynamic range than LG.

5. Attenuation Length
Having three fibers per cube allows the construction of an attenuation length plot:
→ more reliable characterisation of response and calibration
For a given distance from the MPPCs, the observed light from hits in cosmic events is plotted, and the distribution as a function of distance is fitted with an exponential function to extract the attenuation length:
→ the measured attenuation length is consistent with the specification of the WLS fibers

6. Time Resolution
Hits > 40 p.e. matched in all three dimensions are selected, and the mean time of each hit is compared to the mean time of the event. This gives ∼1.2 ns time resolution:
→ can be improved by an electronics firmware update!

7. Neutrino Interactions in the SuperFGD
Some of the first neutrino interaction candidates in the SuperFGD. Possible proton (left) and pion (right) candidates are highlighted.

8. Next Steps
Continue tuning the Monte Carlo simulation using measurements of light yield and attenuation length. Measure dE/dx in the SuperFGD and make a selection of proton candidates using the Bragg peak. Head towards first physics analyses with the SuperFGD!

References
[1] Y. Abreu et al. 2017 JINST 12 P04024
[2] A. Blondel et al. 2020 JINST 15 P12003
[3] A. Agarwal et al. 2023 Phys. Lett. B 840 137843
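The exponential attenuation-length extraction described in Section 5 can be sketched with a log-linear least-squares fit. The distances, yields, and the 400 cm attenuation length below are illustrative values, not SuperFGD measurements:

```python
import numpy as np

def attenuation_length(distance_cm, light_yield):
    """Fit light yield vs. fiber distance with a single exponential
    y = A * exp(-d / L) and return the attenuation length L.

    A log-linear least-squares fit is enough for a sketch; the real
    calibration fits the histogrammed cosmic-hit distributions.
    """
    slope, _ = np.polyfit(distance_cm, np.log(light_yield), 1)
    return -1.0 / slope

# Toy data: true attenuation length of 400 cm (illustrative value only),
# with 2% multiplicative noise mimicking measurement scatter.
rng = np.random.default_rng(1)
d = np.linspace(10, 200, 20)
y = 40.0 * np.exp(-d / 400.0) * rng.normal(1.0, 0.02, d.size)

L = attenuation_length(d, y)
print(f"fitted attenuation length: {L:.0f} cm")
```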
poster
MYRIAD-EU has received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant agreement ID: 101003276

Introduction
MYRIAD-EU: Multi-hazard and sYstemic framework for enhancing Risk-Informed mAnagement and Decision-making in the EU
Philip J. Ward, Marleen C. de Ruiter, Timothy Tiggeloven, Judith N. Claassen, Ruoying Dai

While the last decade saw huge scientific advances in the understanding of natural hazard risk, most research and policy still address risk from a single-hazard, single-sector perspective, without taking into account the interconnected reality of these events. MYRIAD-EU's vision is to catalyse the paradigm shift required to move towards a multi-risk, multi-sector, systemic approach to risk assessment and management. To achieve this vision, the overall aim is that by the end of MYRIAD-EU policy-makers, decision-makers, and practitioners will be able to develop forward-looking disaster risk management pathways that assess trade-offs and synergies across sectors, hazards, and scales. The project focuses on five multi-scale Pilot regions, in which we aim to better understand indirect, interregional, and cross-sectoral risk within Europe. Through this project we aim to reduce risks together: please contact us if you are interested in collaboration.

Work Packages
The MYRIAD-EU project is centred around eight Work Packages (WPs), summarized in Figure 1.

Hazard Classes
The following hazard classes are included in the project: Geological, Hydrological, Climatological, Biological, Meteorological.

Pilots and Sectors
The Pilots (see Fig. 2) are selected to provide a spread of different spatial scales, geographical locations, and institutional settings across the EU, as well as a range of hazards and hazard interrelation types (triggering, amplifying, consecutive, and compound). The key sectors that will be covered in MYRIAD-EU are:
1. infrastructure & transport
2. food & agriculture
3. ecosystems & forestry
4. energy
5. finance
6. tourism

Partners
Figure 1: Overview of MYRIAD-EU Work Packages (WPs)
Figure 2: Overview of MYRIAD-EU Pilots
#MYRIAD-EU
Prof. Dr. Philip Ward, philip.ward@vu.nl
poster
Process Microstructural analysis after sintering at 1500°C Mechanical properties • Cold Crushing Strength determination (CCS): • Resonant Frequency Damping Analysis (RFDA): Beneficiaries Use of Metallurgical Residues as potential Raw materials for High Performance Refractory Castables Mathilda Derensy1, Thorsten Tonnesen1, Jong-Won Shin2, Jesus Gonzalez-Julian1 1. Institute of Mineral Engineering, RWTH Aachen University, Forckenbeckestraße 33, 52074 Aachen, Germany 2. Calderys Deutschland GmbH, In der Sohl 122, 56564 Neuwied, Germany CESAREF PhD 04 : Industrial and Academic supervisors: Acknowledgements: Mathilda DERENSY MSCA Doctoral candidate mail: derensy@ghi.rwth-aachen.de This project has received funding from the European Union's Horizon Europe research and innovation program under grant agreement no.101072625 Goals: Results and Discussion: Development of mineral processing route of aggregates and study of the behavior in refractory castables. Further thermo-mechanical tests with castables containing vanadium slag as the bonding phase. Different trials to extract the vanadium in a sustainable way Contact different slag suppliers for further comparison with current slag composition Ongoing work: • Strength decreases significantly when adding slag to the formulation but the reference is based on high alumina castables, rarely employed in industries • E and G modulus data do not display considerable deviation with the different amounts of slag • Vanadium reacts preferably with Mg, Al and Ca towards a stable phase after sintering and does not attack the alumina grains. Conclusion: Context: The steelmaking industry generates several by-products during the different stages of steel production. As a by-product, vanadium slag account for more than 60% of the world’s overall production. However, it is tough to find vanadium in its pure state as it occurs in combination with various minerals. 
Using secondary resources for refractory castables solves the issue of natural raw materials, reduces costs related to their extraction and processing, and is friendly to the environment. Figure 1. Towards a new steel production route. Figure 2. Microstructure analysis of a castable with 0% slag; 2.5 wt.% slag and 2.5 wt.% cement and 5 wt.% slag after sintering at 1500°C for 6h and corresponding EDS scans of the different points on the graph. 1 2 3 4 5 6 7 O 56.28 57.09 57.56 55.28 57.02 56.68 57.55 Na 0.06 0.15 0.30 1.84 Mg 0.59 0.20 0.82 1.48 Al 39.54 39.40 42.44 41.16 42.98 38.88 37.55 Cl 0.19 Ca 3.25 3.16 3.07 3.27 1.37 V 0.28 0.35 0.21 Figure 3. Cold crushing strength experiment (Top) and resonant frequency damping analysis (Bottom) on castables after sintering at 1500°C/6h. 2.5 wt.% slag ; 2.5 wt.% cement 1 2 3 20 µm 5 wt.% slag 5 wt.% slag 6 7 100 µm 20 µm 0% slag (5 wt.% cement) 0% slag (5 wt.% cement) 4 5 20 µm 100 µm Advanced metal extraction of vanadium 2 min Dry mixing 5 min mixing Wet chamber 24h Demolding 1500°C 6h Dryer 24h Coarse Aggregate Fine Aggregate Porosity Matrix (ultra-fine) As coarse aggregates As bonding phase Castable Microstructure Raw Materials H2O + Deflocculant Mold Use Vanadium slag in castables composition Good properties to withstand harsh conditions Reducing emissions and costs related to extraction Reducing waste by recycling
poster
SPHERE quality control and dataflow operations: the first observing period
Wolfgang Hummel, ESO, Data Management and Operations Division (DMO)

Introduction
Since April 2015 ESO has operated the new planet finder instrument SPHERE (Fig. 1). SPHERE is one of the most complex instruments at the VLT. It consists of the common path and infrastructure (CPI) for AO and coronagraphy and of three arms: IRDIS, IFS and ZIMPOL. A total of eight different observing modes is supported, each one with its own independent calibration plan and data association scheme. The modes include NIR differential beam imaging, integral field spectroscopy, and differential polarimetric imaging with 30 ms time resolution in the visual [1]. The first observing period of an instrument is usually the most critical phase in its lifetime, in particular in the context of quality control (QC), which is a task shared between science operations on site and the QC group in Garching. Here we report on the data flow design challenges, the QC system and operational highlights of the first six months of SPHERE operations.

SPHERE calibration plan: science templates
One peculiarity of the SPHERE calibration plan is the science templates, which generate, in addition to the science data, calibration data valid only for the particular pointing. These include an optional image of the waffle pattern to reconstruct the source center behind the coronagraph (called CENTER), an optional source offset image for the flux calibration (called FLUX), and optional sky frames.
Fig. 2: Left: waffle pattern with the four cross points marked to find the center of the star behind the coronagraphic mask (CENTER calibration). Middle: star at offset position (FLUX calibration). Right: star behind the coronagraphic mask (science frame).

Narrow band - broad band filter match
The time needed to acquire day-time calibrations scales with the number of different instrument setups used in one night, but the time window for taking day-time calibrations is confined for operational reasons. Since it is justified for IRDIS and ZIMPOL to calibrate narrow band filter science observations with broad band filter day-time calibrations (covering the narrow band), this filter association was implemented to make use of these synergies.

Operational highlights: IFS light leakage
We found a light leak in the IFS-arm detector. As a consequence, the dark calibrations show:
- The dark current is a linear function of DIT (Fig. 4, Box 4).
- The dark current depends on the temperature at the optical bench (Fig. 4, Box 6).
- The statistical noise measured is dominated by the shot noise introduced by the light leak (and not by the readout noise of ~3 ADU).
- The statistical noise scales as the square root of DIT (Fig. 5, Box 4).

Fig. 4: SPHERE health check plot to monitor QC parameters retrieved from DARK calibrations of the IFS arm. The upper three boxes show the dark levels in ADU for three different DIT values. The lower middle box shows the optical bench temperature, with nearly the same pattern. The lower left box shows the linear dependence of the dark level on exposure time (cyan circles mark the latest entries). The lower right box shows the temperature dependence for the three mentioned exposure times (2 s, 8 s and 30 s).
Fig. 5: SPHERE health check plot to monitor QC parameters retrieved from DARK calibrations of the IFS arm. The upper three boxes show the readout noise in ADU for three different DIT values. The lower middle box shows the optical bench temperature, with nearly the same pattern. The lower left box shows the square-root dependence of the readout noise on exposure time (cyan circles mark the latest entries).

CONTACT
Wolfgang Hummel, Astronomical Data Quality Control Scientist
ESO, Karl-Schwarzschild-Str. 2, D-85748 Garching
whummel@eso.org
Live versions of the presented health check plots can be found here:
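The linear-in-DIT behaviour of the dark level found for the IFS leak can be monitored with a simple straight-line fit, as a QC check would do. A minimal sketch; the ADU numbers below are invented for illustration and are not real SPHERE QC values:

```python
import numpy as np

def dark_current_slope(dit_s, dark_adu):
    """Fit dark level [ADU] vs. exposure time DIT [s] with a straight
    line and return (dark current per second, offset)."""
    slope, offset = np.polyfit(dit_s, dark_adu, 1)
    return slope, offset

# Toy QC series for the three DITs monitored in the health-check plots
# (2 s, 8 s, 30 s); the values are illustrative only.
dit = np.array([2.0, 8.0, 30.0])
dark = 5.0 + 1.3 * dit             # linear in DIT, as found for the leak
noise = 3.0 + 0.9 * np.sqrt(dit)   # shot-noise-like sqrt(DIT) scaling

rate, offset = dark_current_slope(dit, dark)
print(f"dark current: {rate:.2f} ADU/s, offset: {offset:.2f} ADU")
```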
poster
Developing an Ecology-specific Research Data Management Workshop
Elliott C. Shuppy, University of Wisconsin-Madison School of Library and Information Studies

Key Campus Unit Stakeholders
- Research Data Services (RDS): interdisciplinary organization committed to advancing research data management practice on the UW-Madison campus.
- University of Wisconsin-Madison Center for Limnology (CFL): research center dedicated to the study of fresh-water ecosystems and ecology programs; houses graduate students from zoology, ecology, and engineering.

Problem: Research Data Literacy
Graduate students from the University of Wisconsin Center for Limnology were awarded grant funding from the Integrative Graduate Education and Research Traineeship (IGERT) to host a research data management education workshop for graduate colleagues. Although data are used frequently by the graduate student group, sound data management practices and a thorough understanding of the importance of data stewardship are still challenges.

Process: Conception, Organization, Facilitation, Implication

- Proposal: Initial development of the workshop was proposed to RDS by graduate student leaders from CFL in late spring 2014. RDS subsequently identified four of its own members to lead and plan the workshop.
- Focus on ecology: Data management practices in limnology share a considerable number of similarities with ecology. The planning stage of the workshop focused on collecting information about discipline and data workflow characteristics of ecology from published resources, conversations about the importance of teaching certain topics, and the graduate students' own personal beliefs and values associated with data management, which were collected by survey prior to the proposal.
- Needs: One meeting with CFL graduate student leaders allowed RDS leads to gather support and input about attendee needs. Subsequent regular emails were sent to CFL leaders and audience members to gather more information about data management barriers and challenges.
- Objectives: RDS leads identified three objectives for the audience: 1) increase awareness about RDS services; 2) expand data management proficiency with module content; 3) build understanding of the complete data management workflow.
- Module topics and organization: Based on models of ecological data workflows, topics were selected and organized into a two-day workshop format. Day One (10am-4pm): intro to RDS, spreadsheets, metadata, storage, file organization, and sharing. Day Two (10am-12:30pm): preservation and data management plans. Hands-on work time was initially built into each module.
- Instructor selection: The planning team chose several instructors from departments affiliated with Research Data Services who are likely to train future researchers on campus. Also chosen were instructors from ecology-associated departments who are likely to influence data management education for ecologists.
- Realized module workflow: Due to instructor schedules and content overlap, workshop modules were reorganized as follows. Day One (10am-2pm), November 10, 2014: intro to RDS, spreadsheets, file organization & collaboration, with time allotted at the end for collaborative work. Day Two (9am-12:30pm), November 11, 2014: storage & preservation, data management plans, and a scholar presentation on ecology data management research.
- Activities: Since hands-on activities were not practical for modules on the second day, time was allotted at the end of Day One to allow audience members to work with each other and with instructors to garner feedback about data management.

Acknowledgments: Thanks to Erin Carrillo, Brianna Marshall, and Ryan Schryver of University of Wisconsin-Madison Research Data Services.

References:
Borer, E.T., Seabloom, E.W., Jones, M.B., & Schildhauer, M. (2009). Some simple guidelines for effective data management. Bulletin of the Ecological Society of America, 90(2), 205-214.
Michener, W. K. (2000). Ecological data: design, management, and processing. Oxford: Blackwell Science.
poster
Automated Deep Learning-based point cloud classification on USGS 3DEP LiDAR Data Using a Transformer
Jung kuan Liu1, Rongjun Qin2
1 U.S. Geological Survey, Center of Excellence for Geospatial Information Science
2 Department of Civil, Environmental, and Geodetic Engineering, The Ohio State University
U.S. Department of the Interior, U.S. Geological Survey
PRESENTER: Jung kuan Liu (jliu@usgs.gov)

BACKGROUND: The USGS 3D Elevation Program (3DEP) is collecting lidar point data for the entire continental U.S. Current 3DEP products carry only a minimum set of point classes. Accurately classified point clouds can support many down-stream applications such as hydrologic analysis, urban planning, and forest management. This project will employ proven Deep Learning (DL) technologies in the development of a user-friendly open-source toolkit that will automate point cloud classification to refine and enrich the attributes of existing and future 3DEP data.

MATERIALS and METHODS
1. 3DEP lidar data: National Map Download Client site (https://apps.nationalmap.gov/downloader/)
2. Training data: 2019 IEEE GRSS Data Fusion Contest (https://ieee-dataport.org/open-access/data-fusion-contest-2019-dfc2019)
3. DL model: Point Transformer
4. Validation: Alameda County, CA

RESULTS
The preliminary results of using Point Transformer trained on a public 3DEP point cloud classification dataset are promising.
1. Buildings and vegetation are now in different classes
2. Most of the noisy points are re-classified/removed
3. Point Transformer had good performance in the major classes (bare earth, vegetation, building)
4. The performance of the classes with fewer points ("water" and "bridge deck") was poor

Statistics of annotated validation dataset:

Class        # of points (original)  # of points (re-labeled)  % of points (re-labeled)
Unclassified             12,911,236                   405,179                      1.26
Bare earth               18,317,594                24,580,630                     76.47
Vegetation                        0                 3,601,862                     11.20
Building                          0                 3,210,429                      9.98
Water                       194,817                   254,956                      0.79
Bridge deck                  12,831                    88,502                      0.27
Noise                        32,487                         0                      0
Others                      672,593                         0                      0

Quantitative evaluation:

Class         IoU   Accuracy  Num. of points
Bare earth   0.97       0.98          24.6 M
Vegetation   0.86       0.89           3.6 M
Building     0.84       0.98           3.2 M
Water        0.19       0.26         254.9 K
Bridge deck  0.44       0.47          88.5 K
Average      0.69       0.73               /

NEXT: A deep learning-based point cloud classification tool on 3DEP
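The per-class IoU used in the quantitative evaluation is the standard intersection-over-union of predicted and reference labels. A minimal sketch; the tiny label arrays are stand-ins for real re-labeled 3DEP points:

```python
import numpy as np

def per_class_iou(truth, pred, classes):
    """Per-class intersection-over-union for point cloud labels.

    IoU(c) = |truth==c AND pred==c| / |truth==c OR pred==c|.
    """
    ious = {}
    for c in classes:
        t, p = (truth == c), (pred == c)
        union = np.logical_or(t, p).sum()
        ious[c] = np.logical_and(t, p).sum() / union if union else float("nan")
    return ious

# Toy labels (0 = bare earth, 1 = vegetation, 2 = building) for 8 points.
truth = np.array([0, 0, 0, 1, 1, 2, 2, 2])
pred  = np.array([0, 0, 1, 1, 1, 2, 2, 0])

ious = per_class_iou(truth, pred, [0, 1, 2])
print(ious)
```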
poster
(1) Xi'An Jiaotong-Liverpool University; (2) Leiden University; (3) Chinese Academy of Sciences; (4) Kavli Institute for Astronomy and Astrophysics at Peking University; (5) Zentrum für Astronomie der Universität Heidelberg

Abstract
Most stars in the Galaxy form in stellar groupings that either dissolve within several tens of millions of years, or evolve into open clusters. It is important to study the dynamics of planetary systems in such structures to explain the properties of observed exoplanets. Here, we numerically evolve open star clusters with planetary systems by combining the star cluster code NBODY6++GPU with the planetary system code REBOUND. We use different sets of initial conditions, such as the virial ratio and stellar density, and evolve the systems for 50 Myr. We find that stellar encounter properties, the star cluster density, and the planetary system architecture are the main factors that affect the evolution of the planetary systems. Although most planetary systems remain intact, others lose most of their outer planets, while some are stripped of all planets. Finally, we find that the presence of the planet Jupiter plays a prominent role in the survival chances of terrestrial planets.
Initial Conditions
Star clusters: N = 500, 5000, 10000 stars; Q = 0.4, 0.5, 0.6; Rcluster = 1 pc; Kroupa (2001) IMF; Plummer sphere; standard Galactic tidal field.
Planetary systems: ~1 MSun host; 6 planets (Earth, Mars and four gas giants); no protoplanetary disk; number of planetary systems per star cluster: 25, 125, 200, respectively.

Methodology
We use REBOUND [2] as the integrator for the planetary systems and NBODY6++GPU [3] for the star cluster, in order to model the gravitational dynamics of both via N-body codes. The former uses a C code environment, also working along with Python, and is optimised for few-body integrations, while the latter is optimised for highly accurate and fast calculation, supports MPI and GPU implementations, and is built for many-body integrations. We used 50 Myr as the evolutionary time, with 1000 years as the time-step for the star cluster output. The planetary output time-step is adaptive and reproduces the 1000-year time-steps in the data storage, while warning the user if planets either escape or merge during this period (in short block time storage [4]). We use the HDF5 format, which allows us to store data with high temporal resolution [5]. Finally, the simulation can be summarised in three principal steps: star cluster and planetary system initial conditions, gravitational evolution of each star with its nearby stars, and planetary system evolution.

Conclusions and Discussion
We have carried out a set of N-body simulations in order to understand the influence of the star cluster on a Solar-system-like planetary system. Most planetary systems remain stable during the whole simulation, while most of the unstable ones have ejected the outermost planets. The terrestrial planets are mostly ejected when Jupiter's orbit gets perturbed, and the system is completely destroyed by an impulsive encounter [8]. In the presence of an encounter, Jupiter acts both as a dynamical barrier for weak encounters and as a perturber of the terrestrial planets for stronger encounters. The final architecture of the planets can be seen in the table, where the 1- and 2-planet systems are always, for all simulations and initial conditions, Jupiter for the 1-planet Solar system and Jupiter and Saturn for the 2-planet Solar system. The three-planet scenario is unlikely, an empirical consequence of Saturn's presence in the 2-planet system. Black holes have also been shown to impact the ejection of planets from their stellar systems [10].

References
[1] Lada 2003, astro-ph/0301540
[2] Rein & Liu 2012, A&A, 537, A128
[3] Wang, Spurzem et al. 2015, MNRAS, 450, 4070-4080
[4] Cai et al. 2015, ApJ, 1506.0759
[5] hdfgroup.org
[6] Cai et al. 2017, 1706.03789
[7] Flammini Dotti et al. 2020a
[8] Flammini Dotti et al. 2019
[9] Fujii et al. 2019, A&A, 624, A110
[10] Flammini Dotti et al. 2020b
poster
They have SCLC that has progressed or relapsed after at least two previous lines of therapy, including at least one platinum-based treatment They have a tumor tissue sample available They do not have unresolved side effects from previous treatments They have epNEC or LCNEC of the lung that progressed or relapsed, after at least one platinum-based treatment The cancer has spread to the brain and is growing uncontrollably They received a previous anti-cancer treatment that targets DLL3 They have interstitial lung disease, which refers to a group of diseases that causes inflammation and scarring of the lungs They have a weak immune system that struggles to fight off diseases. Also referred to as immunodeficient What is the percentage of patients whose tumors shrunk by at least 30% This is referred to as the ORR How long patients remain on treatment before the treatment stops working or the cancer progresses? This are referred to as DOR and PFS What are the side effects related to the treatment? These are referred to as TRAEs How long do patients live after the start of treatment? This is referred to as OS How often do the side effects occur? What are the effects of the treatment on the patients’ quality of life? Disclosures This study was funded by Boehringer Ingelheim. This summary is based on a poster presentation at AACR-NCI-EORTC 2023, Boston, MA, USA, October 11–15, 2023. The authors were fully responsible for all content and editorial decisions of the associated infographic Congress presentation authors Valentina Gambardella, Alastair Greystoke, Martin Reck, Meiruo Liu, Martha Mueller, Ulrich Duenzinger, Emily B Bergsland, Taofeek Owonikoko Disclaimer: BI 764532 is an investigational agent and has not been approved by any regulatory authority Reference 1. Wermke M, et al. J Clin Oncol 2023;41(suppl 16):abstr 8502 Acknowledgments The authors thank the patients and their caregivers for making the study possible. 
Medical writing support for the development of this infographic summary was provided by Frans Everson (PhD) of Ashfield MedComms, an Inizio Company, and funded by Boehringer Ingelheim Abbreviations CD3, cluster of differentiation 3; DLL3, delta-like 3; DOR, duration of objective response; IgG, immunoglobulin G; ORR, objective response rate; OS, overall survival; PFS, progression- free survival; TRAEs, treatment related adverse events What is the study design? Where is the trial taking place? What is BI 764532 and why is it important? How does BI 764532 work? The DareonTM-5 study will test how safe and effective BI 764532 is in patients with SCLC, epNEC, or LCNEC of the lung at two dose levels that were selected during Phase I CD3 is present on immune cells called T-cells that are key to initiating immune responses. DLL3 is present on the surface of many SCLC, epNEC, and LCNEC of the lung tumors BI 764532 acts as a bridge between tumor cells and T-cells. When it binds to T-cells they become activated. Activated T cells release toxins and cytokines - substances that attract other immune cells - leading to tumor cell death DareonTM-5: An open-label Phase II trial of BI 764532, a DLL3-targeting T-cell engager, in patients with relapsed/refractory small cell lung cancer or other neuroendocrine carcinomas Who can take part in the study? What are the key questions that the trial will answer? Inactive T-cell CD3 DLL3 Cancer cell Human IgG- like structure BI 764532 anti-DLL3 anti-CD3 Given the promising activity of BI 764532 in patients with DLL3-positive SCLC, epNEC, and LCNEC of the lung (NCT04429087), researchers have designed a Phase II clinical trial called DareonTM-5 (NCT05882058). BI 764532 is an IgG-like ‘T-cell engager’ investigational compound. 
It binds to immune system cells and to cancer cells that express DLL3. Many tumors can grow unchecked by evading the immune system. DLL3 is a protein found on the surface of some cancer cells but not on healthy cells. Cancers that express DLL3 could therefore be targeted with BI 764532.
poster
UNIVERSITY OF LEEDS

Exploring a cut-cell representation of terrain in a microscale model
S. Lock, A. Coals, A. Gadian, S. Mobbs
Institute for Climate & Atmospheric Science, School of Earth and Environment, University of Leeds

Abstract
Moves to increasingly high-resolution models give rise to more variations in the underlying orography being captured by the model grid. Consequently, high-resolution models must overcome instabilities associated with terrain-following (see Fig. 1a) approaches. This work further explores the capabilities of a cut-cell representation of orography for idealised orographically-forced and moist microscale flows. The model is based on terrain-intersecting coordinates (see Fig. 1b) and solves flow in the resulting cut-cells using a finite-volume approximation. The model has been designed for the purposes of very high-resolution simulations. Comparisons with benchmark orographic and moist test cases demonstrate very good results. Further tests show the potential of the cut-cell approach for stably resolving flows over very steep orography.

Results III
The basic Microscale Model has been extended to include moist dynamics, including the contribution from latent heat exchange.

The Microscale Model is a research model for very high-resolution studies, with grid-resolutions Δx ≈ Δz < 100 m. The project has focussed particularly on flows over steep and complex terrain. The model is 3-dimensional, nonhydrostatic and fully compressible, with prognostic variables u, v, w, θ′ and π′. Integration is computed with a time-splitting approach (based on Klemp & Wilhelmson, 1978) using 2nd-order leapfrog for the long timestep and a 1st-order forward-backward scheme for the short step. A 2nd-order centred scheme is used for spatial differencing on a staggered grid (Arakawa-C in the horizontal, Charney-Phillips in the vertical).
The orographic surface is represented by a piecewise bilinear function, and a finite-volume approach is used to solve flows through the resulting cut-cells.

Background
Cut-cells were first explored in environmental modelling by Marshall et al. (1997) for representing ocean-bottom topography. Results showed a piecewise linear representation of the surface compared favourably with step representations, particularly for surfaces that show large variation at the grid-resolution. The cut-cell work was extended to a compressible atmospheric model by Bonaventura (2000), and results were seen to compare well with a traditional terrain-following model. Steppeler et al. (2002, 2006) further extended the work, implementing a 3D piecewise bilinear representation of orography in a full atmospheric model (COSMO-DE). Tests showed good comparison with the equivalent terrain-following model, and studies of real cases showed evidence of improved precipitation forecasts from the cut-cell approach.

Model equations describe a 3D, nonhydrostatic, fully compressible system, written in time-split form with fast modes (acoustic and gravity) on the LHS and slow modes on the RHS.

Fig. 1: Vertical levels in (a) a terrain-following, and (b) a terrain-intersecting grid.
Fig. 2: Example orographic surface intersecting a grid-column, from Steppeler et al. (2006).

Cut-cell approach is based on the method discussed in Steppeler et al. (2002, 2006). The orographic surface is represented by continuous piecewise bilinear surfaces that intersect the model grid (Fig. 2). The orographic surface results in some cells that are cut: partially beneath the surface, partially above. Flow through the cut-cells is solved by using a finite-volume method to compute the divergence term in the pressure equation, using Gauss's theorem over a volume V bounded by surface area A:

    ∫_V ∇·u dV = ∮_A u·n dA

Applied to a cut grid-cell, the method requires the fluxes of the wind-field across the cell surfaces.
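The flux computation can be illustrated with a minimal numerical sketch; the 2D cut-cell geometry and uniform wind below are made-up illustrative values, not the model's actual data structures. For a uniform wind, the outward fluxes over a closed cell boundary must sum to zero, which is what Gauss's theorem guarantees.

```python
import numpy as np

def cut_cell_divergence(face_areas, face_normals, face_velocities, volume):
    """Approximate div(u) over a cut cell via Gauss's theorem:
    div(u) ~ (1/V) * sum_f (u_f . n_f) * A_f, summing outward fluxes
    over all cell faces, including the sloping cut (orographic) face."""
    flux = 0.0
    for area, normal, u in zip(face_areas, face_normals, face_velocities):
        flux += area * np.dot(u, normal)
    return flux / volume

# Illustrative 2D cut cell on x in [0,1], z up to 1, with terrain rising
# from z = 0 at the west edge to z = 0.5 at the east edge.
u = np.array([10.0, 0.0])                 # uniform horizontal wind [m/s]
slope_len = np.hypot(1.0, 0.5)            # length of the sloping cut face
areas = [1.0, 0.5, 1.0, slope_len]        # west, east (shortened), top, cut
normals = [np.array([-1.0, 0.0]),         # west face, outward
           np.array([1.0, 0.0]),          # east face, outward
           np.array([0.0, 1.0]),          # top face, outward
           np.array([0.5, -1.0]) / slope_len]  # cut face, outward into terrain
vels = [u] * 4
volume = 0.75                             # area of the trapezoidal cut cell
div = cut_cell_divergence(areas, normals, vels, volume)  # ~0 for uniform u
```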
For surfaces that coincide with the grid, the wind components are conveniently located (staggered grid), and there is zero flux across the orographic
poster
The Sage2 project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement 800999.

Sage2 – Percipient StorAGe for Exascale Data Centric Computing

Consortium: 9 partners led by Seagate (UK) Ltd., covering technology providers, computing centres, and application providers and users.
Objective: Design an exascale-ready architecture based on hierarchical storage layers and object storage (MERO), adaptable to novel storage media.
Justification: I/O can represent a bottleneck in many applications. This bottleneck can be addressed by facilitating use of fast but expensive storage media in a hierarchical architecture. The use of storage-class memory in addition to spinning disks blurs the line between byte-addressable and block data stores.

Unique Features:
• Containers for treating many objects the same
• Distributed Transaction Management
• Supports global memory abstraction
• Integrated AI framework
• Function shipping to move processing to the data
• Data Aware Scheduling based on SLURM
• QoS based Architecture

Global Memory Abstraction: applications, tools and programming models access data on the SAGE system either directly, by mapping objects to memory in the tier nodes, or by accessing them as a storage system through the Clovis API. Three scenarios are envisaged:
• Make NVRAM-C (NVRAM within compute nodes) appear as a single addressable space
• For larger objects, make use of external NVRAM (NVRAM-E), accessed directly using RDMA
• Use standard MERO APIs to access data in any tier

Under Development:
• Differential checkpointing of objects to back them up to lower-level storage
• Data Reshaping to allow restart with different numbers of nodes

Co-Design: The SAGE architecture has been co-designed with application developers and users.
Applications cover:
• Space weather (IPIC3D)
• Satellite Data Processing (JÜRASSIC)
• Finite Element Modelling (paraFEM)
• CFD (BOUT++)
• Neural Simulation Tool (NEST)
• CAT (Savu)

Application Support and Programming Models: ARM have developed extensions to their Allinea Forge toolkit and are supporting integration of MERO onto ARM processors. Kitware are adapting their visualization tools to optimize user experience when analysing results on SAGE.

Current and Future Status:
• Prototype system installed and operational at Jülich Supercomputing Centre
• MERO object store under consideration for open sourcing

S. de Witt1, D. Samaddar1, A. Davis1, S. Narasimhamurthy2, G. Umanesan2, D. Pleiter3, M. Salem El Sayed3
1 United Kingdom Atomic Energy Authority; 2 Seagate (UK) Ltd; 3 Jülich Supercomputing Centre
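The first global-memory scenario above (exposing fast storage as directly byte-addressable memory) can be illustrated with a generic sketch; a plain file stands in for an NVRAM-resident object, and nothing here uses the actual MERO/Clovis APIs.

```python
import mmap
import os
import tempfile

# A plain file stands in for an NVRAM-backed object. Memory-mapping it
# gives byte-addressable load/store access, in contrast to going through
# a block- or object-store read/write API.
path = os.path.join(tempfile.mkdtemp(), "object-0001")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)              # pre-size the backing object

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mem:
        mem[0:5] = b"hello"              # store bytes directly at an offset
        view = bytes(mem[0:5])           # load them back, no read() call
```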
poster
Optofluidic Force Induction Scheme for the Characterization of Nanoparticle Ensembles
Marko Šimić,a,b Gerhard Prossliner,b,c Ruth Prassl,c Christian Hill,b,c Ulrich Hohenestera
a Institute of Physics, University of Graz, Austria; b Brave Analytics GmbH, Austria; c Gottfried Schatz Research Center, Division of Biophysics, Medical University of Graz, Austria

Abstract
Momentum transfer from light to matter provides the basic principle of optical tweezers. Most studies have hitherto employed this principle for trapping and manipulation of single nanoparticles. However, in a microfluidic channel one can also monitor the effect of optical forces exerted on ensembles of dielectric nanoparticles, to acquire knowledge about various nanoparticle parameters, such as size, shape or material distributions. Here, we present an optofluidic force induction scheme (OF2i) for real-time, on-line optical characterization of nanoparticles. Our experimental setup builds on precisely controlled fluidics as well as optical elements, in combination with a focused laser beam with orbital angular momentum. By monitoring the single-particle light scattering and trajectories in the presence of optical and fluidic forces, we obtain detailed information about the properties of the individually tracked particles. We analyse the trajectories using a detailed simulation approach based on Maxwell's equations and Mie's theory, in combination with laser fields and fluidic forces. We discuss the basic physical principles underlying the OF2i scheme and demonstrate its applicability. Our results prove that OF2i provides a flexible workbench for numerous applications.

Introduction
Here we employ optofluidic forces on ensembles of nanoparticles using a laser at 532 nm with precise micro-fluidic pumps. Both optical and fluidic components generate forces acting on dielectric nanoparticles, as shown in the figures.
Under certain conditions, particles are constrained to a 2D optical trap and travel along characteristic trajectories (see figure). The single-particle trajectories are processed in real time by recording single-particle light scattering via an ultramicroscope setup and a CMOS camera.

References
[1] Ashkin A., PNAS 1997, 94, 4853-4860
[2] C. Hill. (2020). EU Patent No. 3422364B1. European Patent Office.
[3] A. D. Kiselev and D. O. Plutenko, Phys. Rev. A 2014, 89, 043803.

Institute of Physics / Brave Analytics GmbH / Gottfried Schatz Research Center
Author e-mail: marko.simic@uni-graz.at

Methods
In order to simulate particle motion within our capillary, we perform a multipole expansion of the incoming fields and solve for the scattered fields employing Mie's theory for Laguerre-Gaussian beams. The time-averaged optical force is computed by

    ⟨F⟩ = ∮_A ⟨T⟩ · n dA,

where T is Maxwell's stress tensor. The integration is performed using the total fields and a Gauss-Legendre quadrature for spherical particles, with the permittivity and permeability being material constants.

Results
The experimental data for 400 nm standard latex particles are compared to simulated velocities (see figure). The resulting size distributions are shown for mono- and polydisperse samples. We compare our results to those of Nanoparticle Tracking Analysis (NTA).

Discussion
The OF2i scheme is presented with its underlying physical principles, together with a theoretical description based on Mie's theory and higher-order Laguerre-Gaussian modes. Our results show very good agreement between experimental and theoretical data on the example of various standardized latex particles. Furthermore, we prove the working principle of OF2i and demonstrate its applicability to various nanoparticles.

Simulations
We now combine Newton's equation of motion with Stokes' drag and obtain the particle's velocity at any position within the capillary.
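The steady-state velocity that results from balancing an optical force against Stokes' drag can be sketched numerically; the force, viscosity and particle size below are illustrative assumptions, not OF2i measurements.

```python
import math

def terminal_velocity(f_opt, radius, viscosity, v_fluid=0.0):
    """Steady-state velocity of a sphere where optical forcing balances
    Stokes drag: m dv/dt = F_opt - 6*pi*eta*R*(v - v_fluid) -> 0 gives
    v = v_fluid + F_opt / (6*pi*eta*R)."""
    return v_fluid + f_opt / (6.0 * math.pi * viscosity * radius)

# Illustrative values only: 200 nm radius sphere in water, 0.1 pN force,
# carried along by a 1 mm/s fluid flow.
eta = 1.0e-3          # water viscosity [Pa s]
radius = 200e-9       # particle radius [m]
f_opt = 1.0e-13       # assumed optical force [N]
v = terminal_velocity(f_opt, radius, eta, v_fluid=1.0e-3)
```

Note how the optical contribution scales as 1/R: for a fixed force, smaller particles acquire a larger velocity offset relative to the fluid, which is what makes the measured velocities size-sensitive.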
Integrating particle velocity, we obtain the corresponding trajectory using a Runge-Kutta scheme. Figure shows selected trajectories for 200 nm, 400 nm, 600 nm and 900 nm using above’s
poster
[Figure panels A-K (data not reproduced): single PYdr, PYso, and TC voltages [mV]; TC→PY and PY→PY AMPA currents [μA/cm2]; comodulograms of alpha (12 Hz) amplitude against SWO (1.0 Hz) phase bin (un-normalized Modulation Index); a coupling map of alpha frequency (8-12 Hz) vs SWO frequency (0.5-2.5 Hz); EEG Trough-max and EEG Peak-max spike rastergrams. Other sections, including "Cortical synchronization controls EEG Trough-max vs EEG Peak-max", Funding / Conflicts of Interest Disclosure, and References, are not reproduced.]

289.18 / D4
Cortical UP/DOWN state synchrony drives propofol phase-amplitude coupling in slow waves
Austin E. Soplata1,2,3 (austin.soplata@gmail.com), Michelle M. McCarthy2,3, Erik Roberts1,2, Emery N. Brown2,4,5,6,7, Patrick L. Purdon2,4,5, Nancy Kopell2,3
1Graduate Program for Neuroscience, 2Cognitive Rhythms Collaborative, and 3Department of Mathematics & Statistics, Boston University, Boston, MA; 4Department of Anesthesia, Critical Care, and Pain Medicine, Massachusetts General Hospital, Boston, MA; 5Department of Brain and Cognitive Sciences, 6Division of Health Sciences and Technology, and 7Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA
Download this poster at: asoplata.com/asoplata-poster.pdf

Introduction
• The anesthetic propofol induces beta (12-20 Hz), alpha (8-12 Hz), and Slow Wave Oscillations (SWO, 0.1-1.5 Hz) on the EEG of human patients [1]
• At low propofol, near Loss of Consciousness, alpha amplitude is maximal during the trough of the SWO phase, called "Trough-max" phase-amplitude coupling (PAC) [1]
• At high propofol, in deep anesthesia, alpha amplitude is maximal during the peak of the SWO phase, called "Peak-max" PAC [1]
• SWOs in natural sleep often begin in the cortex [2], while simulations suggest propofol alpha is generated by the thalamus [3]
• Propofol "directly" affects properties of thalamic and cortical cells and synapses such as GABA-A conductance, GABA-A decay time, and H-current conductance [1,3]
• Propofol "indirectly" affects thalamic and cortical cells via decreasing cortical acetylcholine (ACh) [4], which affects K(Na)-current conductance and corticocortical and thalamocortical synaptic strengths [5]
• We hypothesized that the direct effects of propofol would produce and control both Trough-max and Peak-max PAC in a full thalamocortical model, primarily by modulating thalamic behavior. However, we found that indirect effects from propofol on ACh and changes to the thalamocortical feedback loop could control Trough-max vs Peak-max changes.

Methods
• Our simulations modeled 100 cortical dendrite compartments (PYdr), 100 cortical axo-somatic compartments (PYso), 20 cortical interneurons (IN), 20 thalamic reticular neurons (TRN), and 20 thalamocortical neurons (TC) using the biophysical Hodgkin-Huxley formalism [3,5]. Synapses are connected via a nearest-neighbor radius.
• Our artificial EEG signal was modeled from the combination of AMPAergic corticocortical (PY→PY) and thalamocortical (TC→PY) synaptic currents onto cortical dendrites
• Our PAC analysis was based on the standard Modulation Index coupling measure [6]

EEG Trough-max Involves Synaptic Competition
• We found that while direct effects were necessary for thalamic propofol alpha, indirect effects were also necessary for SWO expression
• The EEG signal had two components: thalamocortical and corticocortical synapses onto cortical dendrites
• In the thalamocortical synapse case:
  • Thalamic cells exhibit a persistent alpha oscillation, while target cortical cells exhibit a SWO rhythm
  • The TC→PY synaptic current produces a Trough-max PAC signal since the alpha amplitude
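The Modulation Index used for the PAC analysis can be sketched with numpy; this follows the common binned-amplitude, KL-divergence construction of the Modulation Index and is a generic illustration, not the authors' analysis code.

```python
import numpy as np

def modulation_index(phase, amp, n_bins=18):
    """Phase-amplitude Modulation Index: bin the slow-oscillation phase,
    average the fast-oscillation amplitude per bin, and measure how far
    the binned distribution is from uniform (KL divergence / log n_bins).
    Returns 0 for no coupling, approaching 1 for extreme coupling."""
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.clip(np.digitize(phase, edges) - 1, 0, n_bins - 1)
    mean_amp = np.array([amp[idx == b].mean() if np.any(idx == b) else 0.0
                         for b in range(n_bins)])
    p = mean_amp / mean_amp.sum()
    p = np.where(p > 0, p, 1e-12)          # guard empty bins against log(0)
    return (np.log(n_bins) + np.sum(p * np.log(p))) / np.log(n_bins)

# Synthetic check: a 1 Hz "SWO" phase modulating an alpha-like amplitude.
t = np.arange(0.0, 10.0, 0.001)
swo_phase = np.angle(np.exp(1j * 2 * np.pi * 1.0 * t))   # wrapped to (-pi, pi]
mi_coupled = modulation_index(swo_phase, 1.0 + np.cos(swo_phase))  # Peak-max-like
mi_flat = modulation_index(swo_phase, np.ones_like(t))             # no coupling
```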
poster
Pragmatic service development and customisation with the CEDA OGC Web Services framework (COWS)
Stephen Pascoe, Ag Stephens, and Dominic Lowe
British Atmospheric Data Centre, Rutherford Appleton Laboratory, UK (Stephen.Pascoe@stfc.ac.uk)

Application: UK Climate Projections '09 User Interface
http://ukclimateprojections-ui.defra.gov.uk

COWS Architecture
The CEDA OGC Web Services framework (COWS) emphasises rapid service development by providing a lightweight layer of OGC web service logic on top of Pylons, a mature web application framework for the Python language. This approach gives developers a flexible web service development environment without compromising access to the full range of web application tools and patterns: the Model-View-Controller paradigm, XML templating, Object-Relational-Mapper integration and authentication/authorization.

Application: QESDI Data Portal
http://phobos.badc.rl.ac.uk/qesdi/
The QUEST Earth System Data Initiative (QESDI) facilitates exchange of data within the QUEST thematic programme. It provides access to a diverse range of data products produced within QUEST and complementary external datasets. The QESDI portal's data visualisation interface is driven by COWS WMS server and client components. Data download is available through WCS.

Features:
● Overlay WMS layers from internal and external WMS servers
● Browse arbitrary WMS dimensions
● Integrates with COWS Server to provide extended capabilities
● Styling options: contouring, colourmap configuration
● Data download via WCS KVP request
● Publication-quality plot generation
● Intuitive display of Climatology dimensions

QESDI map view showing WMS layer selection and styling options discovered from WMS GetCapabilities extensions. The contour plotting feature of COWS server facilitates overlay of overlapping layers.
System architecture (layers):
• User Interface Layer: UI (PHP)
• Application Layer: Geoserver (Java), COWS-WMS (Python), COWS-WPS (Python)
• Library Layer: UKCP Data lib, geoplot, chartplot, cdat_lite, matplotlib, rpy, nappy
• Processing Layer: Weather Generator, Threshold Detector, Sun Grid Engine

Deployment (virtual machines spread over physical servers ddp-ps1 to ddp-ps5):
• ddp-ui1/2/3: UI (php), geoserver (tomcat), spatialdb (postgres)
• ddp-app1/2/3: WMS (python), WPS (python)
• ddp-store1/2/3: SGE execd, archive
• acache1/2/3
• ddp-admin1 (master server): haproxy, SGE master, userdb (postgres), cache
• ddp-adminbak1 (backup): haproxy, mirror state, cachebak

The UK Climate Projections 2009 are a set of probabilistic data products for the UK climate over the 21st century. The UKCP09 UI provides a rich, interactive user experience including map-based selections and outputs. The implementation integrates and extends Open Source GIS technologies including GeoServer, PostGIS, OpenLayers and Tilecache. Where necessary, new components have been developed using the COWS framework.

The UKCP-UI graphics page allows geospatial and probabilistic plots to be customised by the user. Requests for publication-quality plots or the underlying data can then be sent to the WPS for processing before arriving in the user's download area.

The UKCP09 UI was designed from the outset to be highly scalable and fault tolerant. The system is deployed as a cluster of XEN Virtual Machines, dividing the system into a set of functional VM types. These VMs are spread over multiple physical servers to provide both physical and virtual system failover.
Load-balanced VM types are deployed in parallel, allowing the system to scale up and down according to load, whereas the VMs managing state employ a separate hot-backup failover strategy.
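A WMS GetMap request of the kind exchanged between these components can be sketched generically; the endpoint and layer name below are placeholders, not actual UKCP09 or QESDI services.

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width=512, height=512,
                   srs="EPSG:4326", fmt="image/png", version="1.1.1"):
    """Build a WMS 1.1.1 GetMap request as key-value pairs (KVP).
    bbox is (minx, miny, maxx, maxy) in the given SRS."""
    params = {
        "SERVICE": "WMS",
        "VERSION": version,
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "SRS": srs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and layer name, for illustration only;
# the bounding box roughly covers the British Isles.
url = wms_getmap_url("https://example.org/wms", "near_surface_temperature",
                     (-11.0, 49.0, 2.0, 61.0))
```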
poster
fBLS is a novel technique for detecting transiting planets, based on the fast-folding algorithm [1], extensively used in pulsar astronomy [2].

fBLS is efficient and scalable. It can substantially reduce the computation time required to search for transiting planets. For example, when applied to the Kepler data, fBLS reduces the computation time by a factor of up to ~3000.

fBLS is well suited to the analysis of PLATO lightcurves. We are currently adapting fBLS and extending its operation to efficiently detect planets with varying transit duration and timing [4].

We used fBLS to detect small rocky transiting planets with periods shorter than one day, a period range for which the computation is extensive [3]. We have discovered five new planet candidates (Shahaf et al., in prep.).

For a given lightcurve of measurements, fBLS simultaneously produces binned and phase-folded lightcurves for an array of 'sufficiently different' trial periods, with arithmetic operations. This procedure is conceptually similar to FFT. For comparison, a standard BLS implementation requires operations. For each folded lightcurve we compute the BLS statistic [5], producing a standard BLS periodogram.
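For illustration, the fold-bin-score step that fBLS accelerates can be sketched in brute-force form; this shows the ingredients of a BLS-style search, not the fast-folding recursion itself, and the crude box score below is a simplification of the full BLS statistic [5]. The synthetic lightcurve and its 0.7-day transit are made-up demonstration values.

```python
import numpy as np

def folded_bins(t, flux, period, n_bins=50):
    """Phase-fold a lightcurve at a trial period and average it in bins."""
    phase = (t % period) / period
    idx = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    sums = np.bincount(idx, weights=flux, minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return sums / np.maximum(counts, 1)

def box_score(binned):
    """Crude transit score: depth of the deepest bin below the median.
    (The full BLS statistic also accounts for the in-transit duration.)"""
    return np.median(binned) - binned.min()

# Synthetic lightcurve: 30 days of data with a 2%-deep transit
# occupying 4% of a 0.7-day orbital phase.
rng = np.random.default_rng(0)
t = np.arange(0.0, 30.0, 0.002)
flux = 1.0 + 1e-4 * rng.standard_normal(t.size)
flux[((t % 0.7) / 0.7) < 0.04] -= 0.02

trials = np.array([0.5, 0.6, 0.7, 0.8, 0.9])       # trial periods [days]
scores = [box_score(folded_bins(t, flux, p)) for p in trials]
best = trials[int(np.argmax(scores))]               # deepest fold wins
```

At the correct trial period the transits stack into the same phase bins and the score approaches the full transit depth; at wrong periods they smear across all bins and the score collapses, which is exactly why the periodogram peaks at the true period.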
Demonstration: Kepler-78b
Figure 1: fBLS periodogram of Kepler-78. The left panel is a grayscale diagram demonstrating the fast-folding algorithm output for a narrow frequency range centered around the periodogram peak. The fBLS periodogram appears in the right panel, where the BLS scores were calculated for each of the fast-folded profiles. The central peak is highlighted in yellow. The required single-core computation time for the settings used in this example is ~10 sec.

Ultra-short period (USP) planets are exoplanets with orbital periods shorter than one day, which tend to be terrestrial. Studies of these extreme systems can shed light on planet formation, star-planet and planet-planet interaction, and orbital evolution.

A fast folding algorithm to produce BLS periodograms in search for transiting planets

RELATED LITERATURE
[1] Staelin D. H., 1969, Proceedings of the IEEE, 57, 724
[2] Morello V., et al., 2020, MNRAS, 497, 4654
[3] Winn J. N., Sanchis-Ojeda R., Rappaport S., 2018, New Astron. Rev., 83, 37
[4] Shahaf S., Mazeh T., Zucker S., Fabrycky D., 2021, MNRAS, 505, 1293
[5] Kovács G., Zucker S., Mazeh T., 2002, A&A, 391, 369

Sahar Shahaf (i, ii), Barak Zackay (i), Pascal Guterman (iii), Tsevi Mazeh (ii), Shay Zucker (iv) and Simchon Faigler (ii)

AFFILIATIONS
(i) Department of Particle Physics and Astrophysics, Weizmann Institute of Science, Rehovot 7610001, Israel
(ii) School of Physics and Astronomy, Tel Aviv University, Tel Aviv 6997801, Israel
(iii) Aix Marseille University, CNRS, CNES, LAM, Marseille, France
(iv) Porter School of the Environment and Earth Sciences, Tel Aviv University, Tel Aviv 6997801, Israel

CONTACT: sahar.shahaf@weizmann.ac.il

Figure 2: Two examples of phase-folded USP planet lightcurves with the same number of bins. Top: a new USP candidate, detected by fBLS. Bottom: Kepler-78b, plotted for reference.
Vertical axis range is identical. Despite the shall
poster
Markers of Evolution in Rich Galaxy Clusters
Elena Panko (panko.elena@gmail.com), Sviatoslav Yemelianov, and Anton Sirginava
Department of Theoretical Physics and Astronomy, I. I. Mechnikov Odessa National University, Odessa, Ukraine

Elena Panko: the idea, formulation of the problem, input data, algorithm, etc.
Sviatoslav Yemelianov: software, data analysis for ~200 rich galaxy clusters, discussion
Anton Sirginava: input data, data analysis for ~200 rich galaxy clusters, discussion

Abstract
We detected statistically significant regular substructures in the 2D distribution of galaxies in rich clusters with redshifts up to 0.15. Input data were extracted from "The Catalogue of Galaxy Clusters and Groups" (Panko & Flin, 2006). The main type of substructure is linear elongated overdense regions in the cluster field. The direction of the linear substructure usually points to the nearest neighbor. In addition, we found X- or Y-type regular substructures as well as short and/or compact chains. Bright galaxies in clusters are located mainly in the regular substructures, and galaxies in the chains also show significant alignment along the chain. The peculiarities found can be considered a result of the formation of a galaxy cluster at the intersection of DM filaments or of filament-wall interaction.

Input data format: the real 4200″ × 4200″ field of the PF 0381-1789 cluster from DSS-R (upper panel) and the CC map, 4000″ × 4000″. The background stars are present on the real frame. Weak galaxies on the frame are not visible.

The reasons for the present study: according to CDM-model computer simulations, the distribution of galaxies in clusters traces the hot gas and DM distribution. Gas density tracing the cosmic web in a subvolume (12.5 Mpc h⁻¹ comoving horizontally, stacked over 25 Mpc h⁻¹ comoving along the line of sight) of the HORIZON-AGN simulation (Codis, Gavazzi, et al., 2015). Projected density map, nodes in filaments.
GADGET-4 code (Wang et al., 2020)

We supposed the ways of evolution in the 2D distribution of galaxies are reflected as:
O → I → C → cD (Struble & Rood, 1982), or
O → OL → IL → CL → C → cD (Panko, 2019), or
O → O(X, Y, c) → I(X, Y, c) → IL → CL → C → cD (present study),
where the types of galaxy clusters are: O – Open, I – Intermediate, C – Concentrated, cD – having a cD galaxy, L – linear substructure, X, Y, c – other regular peculiarities (Panko, 2013, Panko et al., in prep.)

Input Data: MRSS, the list of galaxies, and the PF Catalogue of galaxy clusters and groups.
The Muenster Red Sky Survey covers an area of about 5000 deg2 in the southern hemisphere. The catalogue includes 5.5 million galaxies and is complete to rF = 18m.3 (Ungruhe et al., 2003). It is the result of scanning 217 plates of the Southern Sky Atlas R (ESO) with a PDS 2020GMplus and automated recognition of galaxies with careful control. The white spot is the region around the SMC.
PF galaxy clusters are based on the MRSS (Panko & Flin, 2006). Our data set contains 460 rich galaxy clusters with 100 or more members.

MRSS sample records:
Cluster data: 0016-5711 0.1670739 -57.107367 940 2130600 88 31 48 907 617 0.32 117.4 13.02 13.72 15.05 97.4 0.41 22.8
Galaxies data:
149-40043 59 2.75 5.30 3.85 0.31 24.0 16.56 16.53 0.136070 -57.054264
149-39501 21 2.15 4.26 2.51 0.49 178.0 17.98 17.95 0.139043 -57.081429
111-51504 394 6.14 12.73 7.33 0.51 16.5 14.81 14.77 0.140979 -57.240734
111-50854 111 4.09 7.88 4.37 0.54 19.1 16.35 16.32 0.143299 -57.250084
149-38602 36 3.27 7.23 2.99 0.72 34.9 17.53 17.50 0.143872 -56.993507

The Cluster Cartography (CC) set allows us to visualize the input data and to analyze the distribution of galaxies, as well as to study the orientations of cluster members. CC automatically:
• determines the level of concentration toward the cluster center;
• detects linear substructures and other types of regular substructures, namely crosses and semicrosses.
Curved chains and compact short dense stripes need visual control.
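As a generic illustration of what a "linear elongated overdense region" means quantitatively, the principal-axis ratio of 2D galaxy positions can flag elongation; this is a stand-in sketch on mock data, not the CC algorithm itself.

```python
import numpy as np

def elongation_ratio(xy):
    """Ratio of principal-axis standard deviations of 2D positions.
    Values well above 1 indicate an elongated (linear) configuration."""
    centered = xy - xy.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # largest first
    return np.sqrt(eigvals[0] / eigvals[1])

# Mock "clusters" for illustration: one round, one stretched along a line.
rng = np.random.default_rng(1)
round_cluster = rng.normal(0.0, 1.0, size=(300, 2))
linear_cluster = round_cluster * np.array([5.0, 1.0])  # 5:1 stretch
r_round = elongation_ratio(round_cluster)              # ~1
r_linear = elongation_ratio(linear_cluster)            # ~5
```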
The shape of symbol corresponds to MRSS data, the size of symbols corresponds to
poster
Research Data Management Service Sustainability
Tina Griffin, MLIS. Asst. Professor. University of Illinois Chicago. Chicago, IL 60612
Margaret Janz. Interpreter II. Great Parks of Hamilton County. Cincinnati, OH 54231

Introduction
Libraries at high-level research and other academic institutions have been involved in research data management services (RDMS) for more than ten years. As the demand for these services has increased, libraries have adapted their service models. Several studies have reviewed the "state of RDM services," but these have only provided a snapshot of activities at a given time. Research data management activities require support through infrastructure, personnel, time, and money. Previous studies address these topics individually, but there is little connection between RDMS data collected over time, and there are few longitudinal studies. These studies also do not address the decisions and changes that must have been made to make RDMS sustainable over time. This study attempts to fill that gap by updating the RDMS status to describe the current baseline, broadening the facets addressed so that details about sustainability can be highlighted, and presenting the data by cohort to help other institutions identify parallels to their own RDMS.

Methodology
• A 62-question survey was distributed in the fall of 2020 for 6 weeks through data listservs and message boards (DataCure Google Group, Research Data Access and Preservation discussion forum, International Association for Social Science Information Service and Technology listserv, Digital Library Federation listserv, Research Data Alliance forums). Two reminders were sent, one at 3 weeks and one the day before close.
• Questions asked about changes in service model, staffing and funding, accountability, and planning for research data management services since service inception.
• The survey was reviewed by eight data librarians currently in the field and edited for bias, errors, and clarity. A copy of the survey instrument can be downloaded here: https://go.library.uic.edu/RDMSUSurveyInstrument.
• The IRBs at both authors' institutions reviewed the protocol, and it was determined to be exempt.
• The survey data was collected via Qualtrics Survey Software (Qualtrics, LLC. Provo, UT, USA. https://www.qualtrics.com/).
• We received 118 responses. 70 responses were analyzed, including 32 that were partially complete.
• Responses were excluded if they:
  • did not contain any data (21)
  • only answered the question regarding service duration (9)
  • described services offered for less than one year (16)
  • did not consent (2)
• Personally identifying and some institutional demographic information was deleted before analysis, as per the informed consent agreement.

Selected Results – Respondent Characteristics

Number of respondents by years offering RDMS:
1-3Y: 13 | 3-5Y: 14 | 5-10Y: 31 | 10+Y: 12

Respondents' library budget (millions):
>7M: 12 | 8-25M: 10 | 25-50M: 5 | unsure: 34

Selected Results – Service Profile
Almost all respondents indicated that they have increased both technical and advisory services over time. Among all groups, the number of services offered ranged from two to fifteen. The average number of services offered by the 1-3Y cohort was 6.5, by the 3-5Y cohort about 10, while the 5-10Y and 10+Y cohorts offered just under 8 each. Cohorts differ as to which services they offer.

Number of respondents offering RDS by service level and cohort:
[Bar charts (values not recoverable): number of respondents offering each service (Planning, Education/training, Data Sharing, Finding Data, Storage, Citation, Preservation, Curation, Documentation, Purchase, Metadata, Visualization, Offer resources, Analysis, Deidentification) by service level (Technical Only, Advisory Only, Both) for the 1-3Y and 3-5Y cohorts; chart truncated.]
poster
Sentinel-3A SLSTR – Early Results
C. Donlon, D. Smith (RAL), B. Berruti, J. Nieke and J. Frerick. Contact: Craig.Donlon@esa.int and dave.smith@stfc.ac.uk for further information.

The main objective of the Sentinel-3 SLSTR instrument is to maintain continuity with the ENVISAT (A)ATSR series of instruments within the European Copernicus programme. SLSTR will retrieve global-coverage sea surface skin temperature (SSTskin) with zero bias and an uncertainty of ±0.3 K (1σ) for a 5 x 5 degree latitude-longitude area, having a temporal stability of 0.1 K/decade, in support of Copernicus climate monitoring and operational Numerical Ocean/Weather Prediction (NOP/NWP) applications. In addition, using a suite of visible and infrared radiance measurements, SLSTR will provide land surface temperature, active fire monitoring, ice surface temperature, cloud imagery, atmospheric aerosol, land, forestry and hydrology products in support of Copernicus services. This poster presents initial results from the Sentinel-3A SLSTR as of May 25th 2016.
► About Sentinel-3 Sea and Land Surface Temperature Radiometer ► A BIGGER PICTURE FOR COPERNICUS
► Spectral channels
Band | λ centre [µm] | Δλ [µm] | SNR [-] / NeΔT [mK] | Spatial Sampling Distance (SSD) [km] | Function
S1 | 0.555 | 0.02 | SNR 20 | 0.5 | Cloud screening, vegetation monitoring, aerosol
S2 | 0.659 | 0.02 | SNR 20 | 0.5 | NDVI, vegetation monitoring, aerosol
S3 | 0.865 | 0.02 | SNR 20 | 0.5 | NDVI, cloud flagging, pixel co-registration
S4 | 1.375 | 0.015 | SNR 20 | 0.5 | Cirrus detection over land
S5 | 1.61 | 0.06 | SNR 20 | 0.5 | Cloud clearing, ice and snow, vegetation monitoring
S6 | 2.25 | 0.05 | SNR 20 | 0.5 | Vegetation state and cloud clearing
S7 | 3.74 | 0.38 | NeΔT 80 mK | 1.0 | SST, LST, active fire
S8 | 10.95 | 0.9 | NeΔT 50 mK | 1.0 | SST, LST, active fire
S9 | 12 | 1.0 | NeΔT 50 mK | 1.0 | SST, LST
F1 | 3.74 | 0.38 | NeΔT < 1 K | 1.0 | Active fire
F2 | 10.95 | 0.9 | NeΔT < 0.5 K | 1.0 | Active fire
► SLSTR optical design
The complete suite of AATSR and ATSR-2 spectral channels (0.55, 0.66, 0.85, 1.6, 3.7, 10.8 and 12 µm) is included in the SLSTR design in order to maintain continuity. Additional channels at 1.378 µm and 2.25 µm are included to enhance thin cirrus cloud detection. SLSTR has an additional capability to derive active fire measurements using an extended dynamic range of the 3.7 µm channel and dedicated optimized detectors at 10.8 µm that are capable of detecting fires at ~450 K without saturation.
SLSTR uses two independent scan chains, each including a separate scan mirror (scanning at a constant velocity of 180 rpm), an off-axis paraboloid mirror, and a fold mirror to focus measured radiance into the instrument Detector Assembly (DA). An innovative recombination “flip” mirror alternately relays each of the scanned optical beams into a common field plane at the entrance of the DA, where there is a cold baffle.
While more complex than the single scan system employed by the (A)ATSR instrument series, the SLSTR 3-mirror scan configuration increases the instrument oblique view swath to ~750 km (centred at the SLSTR nadir point) and the nadir swath to ~1400 km (offset in a westerly direction). The nadir swath is asymmetrical with respect to the nadir point to provide identical and contemporaneous coverage with OLCI ocean/land colour measurements. Visible and short-wave infrared detectors have a spatial resolution of 0.5 km on the ground; thermal infrared detectors have a spatial resolution of 1 km. 05/04/2016 10:02 am. The first image from the Sentinel-3A SLSTR thermal-infrared channels at 1 km spatial resolution reveals thermal signatures over a part of western Namibia and the South Atlantic Ocean. Cold water is seen along the Namibian coast, upwelling from deeper waters. The Benguela current flows north along the west coast of South Africa, driven by southeasterly winds, creating coastal upwelling. Many eddies and meanders are generated in this complex system, and these small-scale features are captured beautifully. Over land, the distinct folds of desert dunes can be seen. In fact, further north, Gobabeb is the locati
poster
25-27 June 2008 Madrid, Spain Knowledge production in evaluating rehabilitation of buildings – Areas of interest and biases Anandasivakumar Ekambaram 1, Andreas Økland 2 1SINTEF, 7031 Trondheim, Norway, siva@sintef.no 2SINTEF, 7031 Trondheim, Norway, Andreas.Okland@sintef.no
Background – A structured literature review on evaluating rehabilitation of buildings. Journals considered: Building and Environment; Cleaner Production; Construction Innovation; Construction Management and Economics; Energy and Buildings; Facility; International Journal of Project Management; Sustainability. Selected period of publication: 2016-2018. Number of relevant articles (selected after the final round of study): 58.
Preliminary results – Areas of interest. Areas of interest found after the first round of categorization of the results: • Support for decision making: evaluation of potential improvement scenarios/options; improving decision making • Simulation • User behavior • Life cycle approach: life cycle assessment; life cycle cost • Payback time • Policy. These are the key areas of interest. This categorization is based on the number of articles that focus on each area. At the early stage of the study, as each article was read, all areas of interest (that is, which areas each article focuses on) were noted.
Biases • The number of citations of the articles can play a role in determining the areas of interest • The average number of citations per area of interest could lead to a dilemma/bias: articles published in 2016 will probably have more citations than articles published in 2018 • Areas of interest can overlap each other • Choosing which area(s) of interest an article focuses on can be subjective. The dilemma: an area of interest described in a single, highly cited article versus an area of interest described in many articles that each have fewer citations.
REZBUILD This study is connected to a research project: the REZBUILD project (REfurbishment decision making platform through advanced technologies for near Zero energy BUILDing renovation) has the main aim of defining a collaborative refurbishment ecosystem focused on the existing residential building stock. • Awarded by the European Commission an H2020 Programme Grant of €6,996,128.25 • Total budget: €9,038,208.75 • Duration: 4 years (started in October 2017). REZBUILD will base its refurbishment ecosystem on the integration of cost-effective technologies, business models and life-cycle interactions. https://rezbuildproject.eu/ 20th European Conference on Knowledge Management | Portugal, 2019
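The citation dilemma described above can be made concrete with a tiny numerical illustration. The data below are invented (not from the review); the point is only that ranking areas of interest by article count and by average citations can disagree.

```python
# Illustrative sketch (hypothetical data): the bias the authors describe,
# where "key" areas differ depending on whether you rank by article count
# or by average citations per area.
from collections import defaultdict

# (area of interest, citation count) per article; numbers are invented
articles = [
    ("simulation", 120),              # one highly cited article
    ("life cycle assessment", 8),
    ("life cycle assessment", 5),
    ("life cycle assessment", 11),
    ("user behavior", 9),
    ("user behavior", 4),
]

counts, cites = defaultdict(int), defaultdict(list)
for area, c in articles:
    counts[area] += 1
    cites[area].append(c)

by_count = max(counts, key=counts.get)
by_avg_cites = max(cites, key=lambda a: sum(cites[a]) / len(cites[a]))

print(by_count)      # most articles: life cycle assessment
print(by_avg_cites)  # highest average citations: simulation
```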
poster
The public codes ASOHF and VORTEX for the post-processing of cosmological simulations David Vallés-Pérez 1,† Susana Planelles 1,2 Vicent Quilis 1,2 1Departament d'Astronomia i Astrofísica, Universitat de València 2Observatori Astronòmic, Universitat de València †david.valles-perez@uv.es
ASOHF (Vallés-Pérez, Planelles & Quilis, 2022) ASOHF is an adaptive spherical overdensity DM halo finder and galaxy finder. Among its main features, we can count: Haloes are detected from density peaks in configuration space. Dynamical information is subsequently used through several complementary unbinding processes. Physically motivated, self-consistent definition of substructures (see below). Possibility to look for galaxies within DM haloes (see below). Complete merger trees, robust to the loss of haloes in a number of snapshots. Outputs include lists of haloes with several (~50) properties and, optionally, complete member particle lists. Python packages provided. Optimal scaling properties, Δt_wall ~ N_part/n_cores on shared-memory platforms (OMP), and a low memory footprint. Does not require external, special libraries to be installed.
Parallelisation (beyond OMP) Large (≳50 Mpc) simulations with many (≳10⁹) particles can benefit from the domain decomposition strategy, which is performed externally to the code. Different subdomains can be run concurrently or sequentially, and the different domain outputs are seamlessly merged afterwards.
Substructure Substructures are delimited by their Jacobi radius, R_J, encompassing the region more bound to the satellite than to the host halo. This is usually a more restrictive definition than others based on the density profile alone.
Galaxy finder Galaxies are looked for within DM (sub)haloes using a variation of the SO method, and are finally characterised as fully independent objects (with different centre, bulk velocity, etc.). Galaxy extents are delimited by density and density gradient, in addition to phase-space arguments.
Learn more: the paper, A&A 664 A42; the code, github.com/dvallesp/ASOHF; the docs, ASOHF.github.io
[Figure: cumulative mass functions N(>M) for all haloes and subhaloes, comparing ASOHF, AHF, ROCKSTAR and SUBFIND against the Tinker+08 fit, down to the 25- and 100-particle limits, with the ratio of mass functions shown as a function of N_part and M.]
vortex (Vallés-Pérez, Planelles & Quilis, 2021a,b; Vallés-Pérez et al. 2024) vortex is a post-processing code for performing several computationally intensive analyses on the velocity fields of hydrodynamical simulations. While it was originally designed for block-based AMR simulations, it is now a general tool.
Helmholtz-Hodge decomposition: v(x) = v_sol(x) + v_comp(x) + v_harm(x), where v_sol is the solenoidal part (∇·v_sol = 0), v_comp is the compressive part (∇×v_comp = 0), and v_harm is the harmonic part (mostly irrelevant). The decomposition is straightforward for uniform grids, but much more complex for multi-resolution data, especially if O(N²) cost is to be avoided. We solve for a scalar and a vector potential, such that v_comp = −∇φ and v_sol = ∇×A, with ∇²φ = −∇·v and ∇²A = −∇×v. These elliptic PDEs can be addressed by a combination of FFT techniques (base grid) and iterative solvers (e.g., SOR), so that overall the cost is ≲ O(N^(4/3)).
Multi-scale Reynolds decomposition: v(x) = ⟨v⟩_L(x)(x) + δv(x), a bulk velocity field plus a turbulent velocity field. The bulk velocity is obtained by performing a top-hat filtering with position-dependent width, ⟨v⟩_L(x)(x) = ∫_{|x′−x|<L(x)} w(x′, |x−x′|) v(x′) d³x′ / ∫_{|x′−x|<L(x)} w(x′, |x−x′|) d³x′, where L(x) is determined by the smallest of: the scale required for δv(x) to converge, which is an indication of the outer scale of turbulence; and the distance to the nearest influential (M ≳ 2−3) shock.
Learn more: the papers, CPC 263:107892 and MNRAS 504(1):510; the code, github.com/dvallesp/vortex
[Figure: compressive and solenoidal parts of the turbulent velocity field (same simulation as the other panels).]
vortex-p: Helmholtz-Hodge & Reynolds decomposition for particle-based and meshless data
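The Helmholtz-Hodge decomposition described above, in its simplest setting, can be sketched in a few lines. This is a minimal illustration of the uniform periodic-grid case that the poster calls "straightforward", done as a spectral projection onto the wavevector; it is not the vortex code itself, which handles multi-resolution AMR data with iterative solvers.

```python
# Minimal sketch: Helmholtz-Hodge decomposition of a velocity field on a
# uniform *periodic* cubic grid via FFT. In Fourier space the compressive
# (curl-free) part is the projection of v̂ onto k; the solenoidal
# (divergence-free) part is the remainder.
import numpy as np

def helmholtz_fft(vx, vy, vz, L=1.0):
    """Split (vx, vy, vz) into compressive and solenoidal parts."""
    n = vx.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)      # angular wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                               # avoid 0/0 at k = 0
    fx, fy, fz = (np.fft.fftn(a) for a in (vx, vy, vz))
    kdotv = kx * fx + ky * fy + kz * fz             # ∝ divergence in k-space
    cx, cy, cz = (kk * kdotv / k2 for kk in (kx, ky, kz))
    comp = [np.real(np.fft.ifftn(c)) for c in (cx, cy, cz)]
    sol = [v - c for v, c in zip((vx, vy, vz), comp)]
    return comp, sol

# Example: a pure gradient field v = (sin(2πx), 0, 0) is fully compressive,
# so the solenoidal part should vanish to numerical precision.
n = 16
x = np.arange(n) / n
vx = np.broadcast_to(np.sin(2 * np.pi * x)[:, None, None], (n, n, n)).copy()
vy = vz = np.zeros((n, n, n))
comp, sol = helmholtz_fft(vx, vy, vz)
```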
poster
1st International Conference "Open Science and Innovation in Ukraine 2022", Kyiv, 27-28 October 2022. UNIVERSAL MURANOV ENGINE WITH EXTERNAL HEAT EXCHANGE FOR ENVIRONMENTALLY FRIENDLY GENERATION OF CHEAP ENERGY Muranov Sergey, Muranov Ivan e-mail: smuranov111@gmail.com; tel. +38 050 819 82 10
Among the most important economic problems are: reducing the cost of energy, reducing the consumption of hydrocarbon raw materials, and switching to more efficient, reliable and environmentally friendly methods of energy generation. Investors are offered an innovative engine project, the Universal Muranov Engine (UME), that radically reduces the cost of electricity generation (by 2-3 times) while improving environmental friendliness. This engine is a deep modernization of the Stirling engine (SE).
SWOT matrix for the Stirling engine (operating engines in the USA, China, Sweden…) + Strengths of the SE: unique properties for efficient power generation, namely high efficiency, the ability to run on any type of fuel and from any external heat source, and noiselessness. - Weaknesses of the SE: high manufacturing cost; bulky construction; heavy weight. + Opportunities of the SE: a wide range of applications, in particular in distributed power generation systems and cogeneration plants. - Threats to the SE: limitation of the scope of application due to high cost, large dimensions and weight; reduced SE reliability due to the complexity of the mechanical drive design.
Universal Muranov Engine (UME) The upgraded Stirling engine, designated the Universal Muranov Engine with external heat exchange and developed by S. Muranov and I. Muranov, can be the basis of a new highly efficient, reliable and environmentally cleaner energy sector.
The deep modernization of the SE included: the development of a crankless mechanism built exclusively on rolling bearings; a unique system for heat recovery from the walls of the working cylinder; a system of reliable piston seals operating without lubrication; and a new efficient system of heat exchangers with a small dead space. During the modernization, significant shortcomings of the SE were overcome: the cost of manufacturing the engine was radically reduced, efficiency and reliability of operation were increased, the weight and dimensions of the engine were significantly reduced, and the application areas were expanded. The upgraded UME is able to compete with internal combustion engines, and also with gas and steam turbines, in terms of application efficiency. At the same time, UME-based mini-CHP plants generate energy and heat 2-3 times cheaper than traditional power plants. This engine can also be used on any vehicle: cars, ships, yachts, submarines, etc. UME can work autonomously for a long time without any maintenance. The modularity of the design allows engines with a power of 1-50 MW to be created. This project is also of exceptional importance for the EU and Germany, because it radically reduces the cost of electricity, reduces the need for hydrocarbons, contributes to the preservation of the environment, and increases the competitiveness of industry, energy independence and economic efficiency. The project was developed and tested for operability in the SolidWorks 3D design system. All the main components are novel at the world level and will be patented. Individual components were tested on full-size real samples. An investor is required to finance the prototype.
poster
RepManNet Contact: repmannet_barrierefreiheit@univie.ac.at Authors: Susanne Blumesberger, Sonja Edler, Victoria Eisenheld, Maria Guseva, Doris Haslinger License: CC BY-NC-ND 4.0
How do I make my repository accessible? Access for All. Accessible repositories are an important part of the open science ecosystem! WCAG
Guidelines on preparing accessible content for repositories:
User interface • develop easy search, simple navigation and upload, whenever possible in multiple ways • implement metadata indicating accessibility properties • make sure that all necessary information is compact, clear and transparent • encourage users to report accessibility issues and offer ongoing support
Content • make sure that your objects are as accessible as possible (see the guidelines on preparing accessible content for repositories) • include comprehensive metadata, and don’t forget metadata indicating accessibility properties
Accessibility properties for metadata: fullKeyboardControl, fullMouseControl, fullSwitchControl, fullTouchControl, fullVideoControl, fullVoiceControl, auditory, chartOnVisual, chemOnVisual, colorDependent, diagramOnVisual, mathOnVisual, musicOnVisual, textOnVisual, textual, visual, flashing, noFlashingHazard, motionSimulation, noMotionSimulationHazard, sound, noSoundHazard, annotations, ARIA, bookmarks, index, pageBreakMarkers, pageNavigation, readingOrder, structuralNavigation, tableOfContents, taggedPDF, alternativeText, audioDescription, closedCaptions, describedMath, longDescription, openCaptions, rubyAnnotations, signLanguage, transcript, displayTransformability, synchronizedAudioText, timingControl, unlocked, ChemML, latex, MathML, ttsMarkup, highContrastAudio, highContrastDisplay
All people should be able to be part of the scientific community without barriers! Constructing an accessible repository is • a collaborative effort of software developers, repository managers and data producers • an ongoing process: consider accessibility before you start a project and integrate it into the workflows on a permanent basis • demanding: it requires not only expert knowledge (consider WCAG!) but also the participation of people with disabilities
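The accessibility property terms listed on the poster can be carried in item metadata. The sketch below is a hypothetical record, with invented field values, showing one plausible shape for this using schema.org-style accessibility properties (accessMode, accessibilityFeature, accessibilityHazard, accessibilityControl), the vocabulary the poster's term lists draw on.

```python
# Hypothetical metadata record sketch: exposing accessibility properties
# for a repository item. Values are invented for illustration; property
# names follow the schema.org accessibility vocabulary.
import json

record = {
    "title": "Example dataset with an accessible PDF report",
    "accessMode": ["textual", "visual"],
    "accessModeSufficient": ["textual"],   # fully usable via text alone
    "accessibilityFeature": [
        "alternativeText", "taggedPDF", "structuralNavigation",
        "tableOfContents", "readingOrder",
    ],
    "accessibilityHazard": ["noFlashingHazard", "noSoundHazard",
                            "noMotionSimulationHazard"],
    "accessibilityControl": ["fullKeyboardControl", "fullMouseControl"],
}
print(json.dumps(record, indent=2))
```

Records like this let users (and aggregators) filter for, say, items that are fully usable with a keyboard and carry no flashing hazard.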
poster
Open Access portal for students and young researchers. Mission: providing resources, support and advice for students and young researchers in their journey from writing to publishing their scientific papers. Sample: 1000 students and young researchers from over 80 different countries. PhD candidates 54% / Bachelor & Master students 28% / Post-Doc 15% / Other 3%. OAA portal content: webinars, OA events, videos, news, comments, connection to DOAJ search, OA journal reviews, OA quiz, OA trainings, connection on social media, infographics, certification, OA networks, online OA resources, OA surveys. OAA Contributors: Anna / @Anna_Pechenina USA; Daniel / @d_mutonga Kenya; Dusan / @dusandevic The Netherlands; Iara / @iaravps Brazil; Ivo / @ivocamposneto Portugal; Lauren / @parnopaeus USA; Meredith / @MeredithNiles1 USA; Natalia / @natalianorori Nicaragua; Prateek / @MahalwarPrateek Germany; Slobodan / @RadicevSlobodan Italy; Tom / @tompollard UK; Osman / @aldirdiri Sudan. Results after the first year: 1 tutorial; 2 surveys; 2 presentations at conferences; 2 webinars; 10% growth of visitors per month; 12 contributors; 350 unique visitors per day; 1,100 newsletter subscribers; 48,000 unique visitors.
poster
TOI 560: Two Transiting Planets Orbiting a K Dwarf Validated with iSHELL, PFS and HIRES RVs Mohammed El Mufti1, Peter Plavchan1, Angelle Tanner2, Eric Gaidos3, Bryson Cale4, Michael Reefe1, Justin Wittrock1, Kevin Issac Collins1, Patrick Newman1, David James Vermilion1,5, Claire Geneser2, Rena Lee3, Ahmad Sohani2, Farzaneh Zohrabi6 George Mason University1, Mississippi State University2, University of Hawai‘i at Mānoa3, NASA JPL4, NASA Goddard Center5, Louisiana State University6
Abstract We validate the presence of a two-planet system orbiting the 0.15–1.4 Gyr K4 dwarf TOI 560 (HD 73583). The system consists of an inner, moderately eccentric transiting mini-Neptune (TOI 560 b, P = 6.397438 ± 0.000037 days, e = 0.294+0.13−0.062), initially discovered in the TESS Sector 8 observations, and a transiting mini-Neptune (TOI 560 c, P = 18.8779 ± 0.0016 days), discovered in the Sector 34 observations, in a rare 1:3 orbital resonance. We utilize photometric data from Spitzer and ground-based follow-up observations to confirm the ephemerides and periods of the transiting planets and to vet false-positive scenarios. We obtain follow-up spectroscopy and corresponding precise radial velocities (RVs) with the iSHELL spectrograph at the NASA Infrared Telescope Facility and the HIRES spectrograph at Keck Observatory to validate the planetary nature of these signals, which we combine with published PFS RVs from Magellan Observatory. We place upper limits on the masses of both planets of <2.1 and <4.1 M_Nep for b and c, respectively. We apply a Gaussian Process (GP) model to the light curves to place priors on a chromatic radial velocity GP model to constrain the stellar activity of the TOI 560 host star. TOI 560 is a nearby, moderately young multi-planet system with two planets suitable for atmospheric characterization with the James Webb Space Telescope (JWST) and other upcoming missions.
pychell Data Pipeline When starlight passes through our gas cell and the iSHELL cross-dispersed echelle spectrograph, we obtain high-resolution (R = 80k) multi-order spectra. For each of the 29 orders, we perform optimal spectral extraction (weighted summation) in the vertical direction, so that each column eventually yields a single value, giving a 1D spectrum for each order. We then forward model the spectrum to measure the radial velocity for each order, and co-add the RVs from all orders. We analyze our extracted spectral data using the following standard forward model, where each term is a function of wavelength: Intensity = Blaze Function × (Star × Gas Cell × Tellurics) ⊗ Line Spread Function. The blaze function ultimately sets the continuum of our spectrum. The line spread function effectively blurs the spectrum down to the resolution of the iSHELL spectrograph. The star has a Doppler shift, and the gas cell also has a Doppler shift, which enables us to calibrate for instrumental drift in the wavelength solution and to constrain the line spread function. The tellurics, parameterized by a scaling-factor optical depth related to airmass, are the remaining lines caused by gases in the Earth's atmosphere. Our RV pipeline is adapted from the CSHELL RV code described in [1]. We have rewritten the CSHELL code in a Python package, pychell, to adapt to iSHELL's larger spectral grasp with multiple orders [2].
TOI 560 host star (Parameter: Value, Reference): Spectral type: K4 (Scolz et al. 2005); Teff [K]: 4579+62−61 (this work); R∗ (R☉): 0.679+0.018−0.017 (this work); Rotation period: 12.2 ± 0.1 (this work); Age: 150 Myr - 0.5 Gyr (this work); M∗ (M☉): 0.701+0.026−0.025 (this work); Luminosity (L☉): 0.1802 ± 0.0058 (Stassun et al. 2019); R.A.: 08:38:45.260 (Stassun et al. 2019); Dec.: -13:15:24.09 (Stassun et al. 2019).
Data analysis • Standard G.P.
Kernel: K_QP(t_i, t_j) = η_σ² · exp(−Δt²/(2η_τ²)) · exp(−sin²(πΔt/η_p)/(2η_ℓ²)), where the first exponential is the decay term and the second is the periodic term. • Modified Chromatic G.P. Kernel: K_J2(t_i, t_j, λ_i, λ_j) = η_{σ,0}²
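The standard quasi-periodic kernel quoted above can be sketched directly. This is an illustration, not the authors' pipeline code; the hyperparameter names mirror the poster's η_σ (amplitude), η_τ (decay timescale), η_ℓ (periodic length scale) and η_p (period), and the example period of 12.2 days echoes the quoted stellar rotation period.

```python
# Sketch of the standard quasi-periodic GP kernel from the poster:
# an exponential decay term times a periodic term.
import numpy as np

def qp_kernel(t1, t2, eta_sigma, eta_tau, eta_l, eta_p):
    """K_QP(t_i, t_j) for all pairs of times in t1 (shape m) and t2 (shape n)."""
    dt = t1[:, None] - t2[None, :]
    decay = np.exp(-dt**2 / (2 * eta_tau**2))
    periodic = np.exp(-np.sin(np.pi * dt / eta_p) ** 2 / (2 * eta_l**2))
    return eta_sigma**2 * decay * periodic

t = np.linspace(0, 30, 50)  # observation times in days
K = qp_kernel(t, t, eta_sigma=5.0, eta_tau=25.0, eta_l=0.5, eta_p=12.2)
# K is symmetric with variance η_σ² on the diagonal (both factors are 1 at Δt = 0)
```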
poster
www.helmholtz-hzi.de Introduction The species of the Chaetosphaeriaceae (Chaetosphaeriales, Sordariomycetes, Ascomycota) are microscopic, asexually reproducing fungi, some of which belong to the life cycles of sexual morphs with macroscopic fruit bodies (~300 μm diam) perceptible as “black dots” on the surface of decaying plants. They are invisible to most people and difficult to find even by a trained eye. In addition, they are usually hard to cultivate and slow-growing, which makes them difficult to study. Traditionally, they are reported from decaying plant matter in forest habitats, and some exhibit an endophytic lifestyle. Surprisingly, analysis of metabarcoding data showed that they are also common soil fungi, dwelling in bulk soil in grasslands and croplands. There are about 44 genera classified in this family, with monographs available for several genera, but with very little knowledge about their biology. Studies have shown that Chaetosphaeriaceae members are important degraders of lignocellulose in wood, litter and soil and have a rich structural diversity of natural compounds (e.g. terpenoids, cyclic peptides, naphthoquinones, lactones, lipopeptides, xanthones). Three unpublished and unannotated draft genomes of Chaetosphaeriaceae are available (1000 Fungal Genomes project). Nevertheless, information about the genetic background of the basic pathways is missing in this fungal lineage; the same is true for the whole Chaetosphaeriales. Our study fills this gap using analysis of genomes from 16 species. Genome assembly statistics of 16 species from the family Chaetosphaeriaceae. Comparison of CAZyme classes within the Chaetosphaeriaceae. Homology analysis of the predicted pyrichalasin H biosynthesis gene cluster in M. ciliata, M. caesia, and P. grisea. Conclusion: In this study, we introduced the genomes of sixteen Chaetosphaeriaceae species. The quality of these genomes was assessed, and their carbohydrate-active enzymes (CAZymes) were annotated.
There are more than 800 genes that encode CAZymes in each species. The genome mining of the analyzed members of the Chaetosphaeriaceae revealed a high diversity of biosynthetic pathways that far outmatches the number of compounds known from the individual species. The genome sequences generated in this work will enable a broad range of investigations, including studies on fungal evolution, population dynamics, biodegradation and also the biosynthesis of secondary metabolites. Acknowledgement: The department Microbial Drugs is gratefully acknowledged for financial support of TC. The work was also supported by ESF project "International mobility of researchers of the Institute of Microbiology of the CAS, v.v.i. No 2", registration number CZ.02.2.69/0.0/0.0/18_053/0017705. First insight into Chaetosphaeriaceae biology using genomics. Tian Cheng1,2, František Sklenář2, Martina Réblová2, Tobias Busche3, Marc Stadler1, Jörn Kalinowski3, and Miroslav Kolařík2 1Helmholtz Centre for Infection Research, Inhoffenstr 7, 38124 Braunschweig, Germany. 2Institute of Microbiology, the Czech Academy of Sciences, Vídeňská 1083, 142 20 Praha, Czech Republic. 3Center for Biotechnology (CeBiTec), Bielefeld University, Universitätsstr 25, D-33615 Bielefeld, Germany. References (1) Réblová M, Barr ME, Samuels GJ. (1999) Chaetosphaeriaceae, a new family for Chaetosphaeria and its relatives. Sydowia 51(1): 49–70. (2) Wijayawardene NN, Hyde KD, Al-Ani LKT, et al. (2020) Outline of Fungi and fungus-like taxa. Mycosphere 11(1): 1060–1456. (3) Wang C, Hantke V, Cox R, et al. (2019) Targeted Gene Inactivations Expose Silent Cytochalasans in Magnaporthe grisea NI980. Org. Lett. 21(11): 4163-4167. (4) Kjærbølling I, Vesth T, Frisvad J, et al. (2018) Linking secondary metabolites to gene clusters through genome sequencing of six diverse. Proc. Natl. Acad. Sci. U.S.A. 115(4): E753-E761. Tian.Cheng@Helmholtz-hzi.de The numbers of identified enzymes are displayed.
(GT: glycosyl transferase, GH: glycoside hydrolase)
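The per-class CAZyme comparison mentioned above boils down to tallying class prefixes of annotated family labels. The sketch below is purely illustrative (invented rows, not the study's pipeline or data); it assumes annotations shaped like dbCAN-style family strings such as "GH5" or "GT2", whose leading letters give the class (GH, GT, CE, ...).

```python
# Illustrative sketch (hypothetical input): counting CAZyme classes per
# species from (species, family) annotation rows, e.g. "GH5" -> class "GH".
import re
from collections import Counter

annotations = [  # invented example rows
    ("Chaetosphaeria sp. A", "GH5"),
    ("Chaetosphaeria sp. A", "GH18"),
    ("Chaetosphaeria sp. A", "GT2"),
    ("Chaetosphaeria sp. B", "CE1"),
    ("Chaetosphaeria sp. B", "GH5"),
]

def class_counts(rows):
    """Map each species to a Counter of CAZyme class prefixes."""
    counts = {}
    for species, family in rows:
        cls = re.match(r"[A-Z]+", family).group()  # strip the family number
        counts.setdefault(species, Counter())[cls] += 1
    return counts

per_species = class_counts(annotations)
print(per_species["Chaetosphaeria sp. A"]["GH"])  # 2
```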
poster
Using citizen data for a safer and healthier school environment
Recruitment & community building. Data analysis & empowerment. Residents are briefed on the plan, engaged in dialogue, their concerns are captured, and they are invited to measure the effects using sensors. Everyone has access to the traffic measurements, collected via Telraam devices, and can view the situation before and during the introduction of the pilot set-up through the COMPAIR Policy Monitoring Dashboard (PMD). The school street is generally experienced positively. To mitigate limited nuisance in neighbouring streets and further improve traffic safety, pupils, parents, teachers and local residents propose measures (including introducing a 30 km/h zone throughout the centre, a cut in a parallel street, and closing the school's side entrance). Police advice is in preparation. Pupils, local residents and the municipality have gained positive experience with citizen participation. Lessons learned are being shared throughout the SOLVA service area. More info … Evaluation, adjustment & legacy. The municipality of Herzele and the intermunicipal agency SOLVA involve citizens, schools and pupils in the introduction of a school street. Together they measure and analyse traffic and air quality; together they evaluate and adjust the pilot set-up. • Increase in cycling traffic in the school street (3x) • Decrease in car traffic in the school street (60%) • Increase in car traffic in the neighbourhood (14%), with large differences between streets. (±700 students / ±300 students / school street / ±300 students)
poster
Functionalisation of nanochannels for the development of a sustainable and efficient low-grade waste heat harvester Kamil Rahme*, Anjali Ashokan, Ievgen Nedrygailov, Scott Monaghan, Rupa Ranjani, Paul Hurley, Subhajit Biswas and Justin D. Holmes School of Chemistry & Tyndall National Institute, University College Cork, Cork, Ireland AMBER Centre, Environmental Research Institute, University College Cork, Ireland *krahme@ucc.ie This research is supported by the European Union's Horizon 2020 research and innovation programme under grant agreement number 964251, and by Irish Government funding via the DAFM NXTGENWOOD research programme (grant agreement: 2019PROG704).
Abstract Organic ligands have been used to modify the surface charge in nanochannels formed in cellulose and anodised aluminium oxide (AAO) membranes, using a combination of oxidative, covalent and non-covalent chemistry approaches. Initial electrical and thermoelectric measurements performed on the membranes show that surface functionalisation enhances ionic thermoelectric effects in these nanochannels.
Motivation1. Methodologies and results: 1. Oxidation of cellulose produced from natural wood using TEMPO ((2,2,6,6-tetramethylpiperidin-1-yl)oxidanyl); 2.1. Aminosilanes containing two and three amino groups; 2.2. Electrostatic LBL deposition of polyelectrolytes onto silanised AAO; 3. Covalent attachment of polyelectrolyte onto AAO nanochannels; Conclusions. TEMPO/NaBr/NaClO oxidation in water at pH 10–11 oxidises the C6 primary hydroxyl groups in cellulose to carboxyl groups, generating negatively charged nanochannels under certain conditions.2,4 References: 1. https://translate-energy.eu/about/ 2. Tian Li et al., Nature Materials 2019, 18, 608. 3. Chih-Feng Huang et al., Polymer Degradation and Stability 2019, 161, 206. 4. Ioana A. Duceac et al., Materials 2022, 15, 5076. 5. Laura Pol et al., Nanomaterials 2019, 9(3), 478. Acknowledgement. FTIR, SEM/EDX and TGA analyses also confirm the presence of PEI 2 KD (~3% weight loss).
Nanochannels within AAO and cellulose membranes were functionalised. Surface functionalisation enhances ionic thermoelectric effects in the nanochannels. Further studies to explain this ionic thermoelectric enhancement, together with surface charge/pore density and zeta potential data, are in progress.
[Figure: XPS survey scans (CPS vs. binding energy, 0–1200 eV) of AAO-Si-NH2 and AAO-SiNH2-PAA-PEI, showing N 1s, C 1s and Si 2p peaks.]
[Figure: FTIR transmittance spectra of AAO, AAO-OH, AAO-Si-NH-NH-NH2 and AAO-Si-NH-NH-NH2-PAA-PEI-PAA, with bands at 2930 cm⁻¹ (sp³ C-H stretching), 2858 cm⁻¹, 1710 cm⁻¹ (C=O in COOH) and 1403 cm⁻¹ (C=O in COO⁻).]
FTIR analysis after TEMPO oxidation shows a new peak at 1720 cm⁻¹ for C=O in carboxylic acid. [Figure: FTIR transmittance spectra of cellulose and TEMPO-treated cellulose, 1720 cm⁻¹ (C=O in COOH); T = 14 K, 4 M NaCl.]
2. Silanisation of AAO membranes with aminosilane ligands5, using 3-[2-(2-aminoethylamino)ethylamino]propyl-trimethoxysilane (TAEPT) and 3-(2-aminoethylamino)propyl]trimethoxysilane: AAO-NH2 48.8, AAO 19.7, AAO-OH ~10. [Figure: FTIR transmittance spectra of untreated AAO, AAO-OH and AAO-NH-NH-NH2, with bands at 3300 cm⁻¹ (O-H stretching), 2930 cm⁻¹ (sp³ C-H stretching), 1650 cm⁻¹ (N-H bending) and 1160 cm⁻¹ (C-N stretching).]
[Figure: FTIR transmittance spectra of AAO-OH, AAO-Si-EPOXY and AAO-Si-EPOXY-PEI 2KD (sp³ C-H stretching).] [Figure: TGA weight loss (100–800 °C) of the untreated AAO membrane vs. AAO-Si-EPOXY-PEI 2KD.]
poster
Future Work • Improvements to this study: larger sample sizes; varied populations, especially to avoid selection biases • What is the mechanism for the improvement in self-efficacy? Is it sustained long-term? Hypothesis: focusing on future impacts enables students to envision themselves as active participants in a changing world. • Can we use our students' desires to increase motivation and self-efficacy? E.g., describe how the concepts they're learning have impacted the world today and/or how these concepts can be leveraged to change the future.
Experimental Design 1. Pre-class questionnaire. The survey contained SMQII Likert-type questions to assess motivation and open-ended questions to investigate student preferences about learning about scientific research. 2. Brief lecture. Students were introduced to historic examples of the impacts of research on society through machine learning as it relates to Netflix recommendations. The areas of scientific impact were then presented. This process served to: i. introduce the concept that scientific research has real-world consequences; ii. demonstrate a framework for evaluating the broader impacts of research. 3. Think-pair-share. A problem-situated discussion on either the influence of past research on the world today (intervention A) or how current research might impact the future (intervention B). 4. Post-class questionnaire. This survey utilized re-worded but equivalent SMQII questions to assess motivation to learn science, and open-ended questions to gauge changes in perceptions of science classes and scientific research. The pre- and post-class questionnaires contained unique codes that link answers to a particular student without revealing identifying information.
Results Conclusions • Students indicate that they prefer to learn about current scientific research. • Open-ended responses can be categorized into specific desires.
• Students who experienced intervention B, where discussion was grounded in the future impacts of current scientific research, reported an overall increase in self-efficacy, as measured using questions from the SMQII. • A Wilcoxon signed-rank test between pre- and post-intervention scores for individuals is significant (p-value = 0.0003049) • This suggests that even quick, one-time interventions can produce measurable changes in self-efficacy The effects of temporal focus on students’ motivation to learn science James C. Schwabacher1 1Northwestern University, Evanston, IL 60208 Motivation in Social Cognitive Theory The motivation to learn science is “an internal state that arouses, directs, and sustains science-learning behavior,” and is typically measured through the Science Motivation Questionnaire II (SMQII).1 Prior studies demonstrate that measures of student motivation to learn science are a fair predictor of success in science, technology, engineering, and mathematics (STEM) courses.2,3 Positively influencing student motivation to learn science is thus a route to improving student learning outcomes in STEM courses. Psychologists have found that the temporal focus of a situation can drastically affect motivation (i.e. proximal vs. distal goals),4 yet no investigations of temporal focus within science classrooms have been reported. References (1) Glynn, S. M.; Brickman, P.; Armstrong, N.; Taasoobshirazi, G. Science Motivation Questionnaire II: Validation with Science Majors and Nonscience Majors. J. Res. Sci. Teach. 2011, 48, 1159–1176. (2) Reardon, R. F.; Traverse, M. A.; Feakes, D. A.; Gibbs, K. A.; Rohde, R. E. Discovering the Determinants of Chemistry Course Perceptions in Undergraduate Students. J. Chem. Educ. 2010, 87, 643–646. (3) Smist, J. M. General Chemistry and Self-Efficacy. Natl. Meet. Am. Chem. Soc. 1993, 1–27. (4) Karniol, R.; Ross, M. The Motivational Impact of Temporal Focus: Thinking About the Future and the Past. Annu. Rev. Psychol.
1996, 47, 593–620. Acknowledgements A special thanks to Dr. Lauren Woods for her patience and cont
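The paired pre/post comparison above can be sketched in a few lines. This is a minimal pure-Python version of the Wilcoxon signed-rank statistic on illustrative Likert-type scores (not the study's data); in practice a library routine such as scipy.stats.wilcoxon computes the statistic and p-value.

```python
# Minimal sketch of the Wilcoxon signed-rank statistic for paired
# pre/post motivation scores. Scores below are illustrative placeholders.

def signed_rank_statistic(pre, post):
    # Signed differences, dropping exact zeros per the standard procedure.
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    # Rank the absolute differences, averaging ranks across ties.
    abs_sorted = sorted(abs(d) for d in diffs)
    def avg_rank(x):
        idx = [i + 1 for i, v in enumerate(abs_sorted) if v == x]
        return sum(idx) / len(idx)
    w_plus = sum(avg_rank(abs(d)) for d in diffs if d > 0)
    w_minus = sum(avg_rank(abs(d)) for d in diffs if d < 0)
    return min(w_plus, w_minus)

pre = [3, 4, 2, 5, 3, 2, 4]   # hypothetical 1-5 Likert responses
post = [4, 4, 3, 4, 5, 3, 5]
print(signed_rank_statistic(pre, post))  # prints 3.0
```

A small statistic relative to the number of pairs indicates a consistent one-directional shift, which is what the reported p-value reflects.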
poster
Utilizing Patch Metrics to Improve Classification of Remote Sensing Imagery Michael L. Treglia* and Megan E. Young Department of Biological Science, University of Tulsa, Tulsa, OK 74104-9700 Department of Wildlife and Fisheries, Texas A&M University, College Station, TX 77843-2258 *e-mail: mike-treglia@utulsa.edu Problem: Classified remote sensing imagery is used to inform management of various natural resources, though achieving accurate results is a persistent challenge. In particular, some land cover types that are functionally very distinct can be spectrally similar, so misclassification errors can yield poor decision-making. Relatively new, object-oriented classification methods that consider the shape of clusters of similar pixels can yield improved results, albeit often at considerable software expense and with steep learning curves. Our Approach: We employ a two-stage classification procedure that takes advantage of free and open-source tools to improve upon per-pixel classification techniques. First we perform supervised classification to obtain a layer with patches of our focal land cover classes. We then calculate descriptive metrics for each patch, characterizing size and shape, and for each land cover class we extract the per-pixel probability of class assignment. Lastly, we perform a second supervised classification based on raster layers representing the aforementioned class probabilities and patch metrics. Case Study: A Complex Landscape of West Texas Study Area • We focused on a portion of the Mescalero/Monahans Sand Dunes ecosystem in western Texas, USA, home to unique biodiversity and threatened by oil and gas development. • The landscape is complex and difficult to classify, particularly with regard to oil/gas roads and well-pads (made of caliche) and natural sand formations. • We aimed to classify Landsat 5 TM data, available for large portions of the globe for up to 27 years, at 30 m resolution.
Results Deriving patch statistics from a per-pixel classification for use in a second classification run provided some benefit in our system (Fig. 3). Though there are still classification errors after stage 2, caliche roads and well-pads are better distinguished from sand formations. The stage 2 classification increased overall accuracy on our training dataset, from 94% to 98%. Though these results are positive, neither classification performed well on a smaller evaluation dataset (~50% overall accuracy), with the greatest confusion between living and dead vegetation. Discussion and Next Steps: Our two-stage approach for classifying remote sensing imagery effectively uses information about clusters of similar pixels to improve results. It is conceptually simple, and was done exclusively using freely available tools. The example presented here is a first attempt at this method, involving a complex landscape with fine-scale features, perhaps difficult to detect using Landsat imagery. Thus, we suspect this technique will yield greater benefit in landscapes with larger features, or using higher-resolution imagery. Fig. 1. NAIP 1 m aerial imagery (left) and false color composite from Landsat TM data (right) displaying NDVI, Green and Blue for our study area. Stage 2 Classification • Calculated metrics for each distinct land cover patch classified in stage 1, using Fragstats, and created raster layers for each metric. • Created raster layers of class probabilities from the stage 1 classification for each land cover class. • Performed supervised classification using Random Forests, on raster layers representing class probabilities and patch metrics. Methods Stage 1 Classification • Calculated NDVI and Tasseled Cap metrics for a single Landsat scene (Path 30/Row 38) from 2010 and stacked them with the original Landsat bands using QGIS and R. 
• Performed supervised classification using Random Forests (in R), based on 260 training areas derived from 1 m aerial imagery and our knowledge of the system.
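As a toy illustration of why patch metrics help the second stage (this is not the poster's actual Random Forest plus Fragstats pipeline), the sketch below finds connected patches of a spectrally ambiguous class and relabels any patch that a simple patch metric, area, flags as implausibly small. The class names and the threshold are invented.

```python
# Conceptual two-stage sketch on a toy grid, in pure Python.
# Stage 1 assigns per-pixel labels; stage 2 reconsiders patches of an
# ambiguous class using a patch metric (here, patch area in pixels).

def label_patches(grid, cls):
    """4-connected components of pixels whose label == cls."""
    rows, cols = len(grid), len(grid[0])
    seen, patches = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == cls and (r, c) not in seen:
                stack, patch = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    patch.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                           and grid[ny][nx] == cls and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                patches.append(patch)
    return patches

def stage2_relabel(grid, ambiguous_cls, fallback_cls, min_area):
    """Reassign patches of the ambiguous class that are too small to be real."""
    out = [row[:] for row in grid]
    for patch in label_patches(grid, ambiguous_cls):
        if len(patch) < min_area:
            for y, x in patch:
                out[y][x] = fallback_cls
    return out

stage1 = [
    ["sand", "sand", "caliche", "sand"],
    ["sand", "caliche", "caliche", "sand"],
    ["sand", "sand", "sand", "caliche"],
]
stage2 = stage2_relabel(stage1, "caliche", "sand", min_area=2)
print(stage2[2][3])  # prints "sand": the isolated 1-pixel patch is relabeled
```

In the real workflow the patch metrics and class probabilities become raster predictors for a second Random Forest rather than a hard threshold, but the intuition is the same: patch shape and size carry information that per-pixel spectra do not.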
poster
KNOWLEDGE ABOUT CERVICAL CANCER AND AWARENESS OF PAP SMEAR SCREENING AMONG FEMALE NURSES IN HOSPITAL PAKAR SULTANAH FATIMAH Authors: Ruzila R, Azlina D, Siti Salmiah S, Noraini A, Rohayu M, Nazrah A. Institution: Nursing Unit, Hospital Pakar Sultanah Fatimah, Muar, Johor P-151 NMRR ID: 20-3008-56270 Introduction To have a successful cancer control programme, nursing staff must themselves be aware of the facts about cervical cancer and its screening tests. This study aimed to assess the level of knowledge about cervical cancer and awareness of Pap smear screening among female nurses in HPSF. Methodology A descriptive cross-sectional study using a self-administered semi-structured questionnaire was conducted among 239 registered nurses working in HPSF, selected by a stratified random sampling method. IBM SPSS was used for data analysis. Results Demographic background among nurses Table 1: Demographic characteristics (n=239) To determine the level of knowledge about cervical cancer and awareness of Pap smear screening. Figure 1: Level of knowledge about cervical cancer (n=239) Figure 2: Level of awareness of Pap smear screening (n=239) To identify the association between working experience, knowledge about cervical cancer and awareness of Pap smear screening.
Table 2: Association between years of working experience and level of awareness (good level vs. poor level) among participants, using simple logistic regression.
Variable: Experience working; Crude OR (95% CI): 0.926 (0.871, 0.985); P value: 0.014
Table 1 (n=239): Mean age 37.13 years; mean working experience 12.79 years. Department: Medical 62 (24.2%), Surgical and Orthopaedic 49 (19.1%), Obstetric and Gynae 50 (19.5%), Paediatric 40 (15.6%), Multidiscipline 27 (10.5%), Anaesthesia 28 (10.9%). Race: Malay 252 (98.4%), Chinese 1 (0.4%), Indian 2 (0.8%), Others 1 (0.4%). Marital status: Married 250 (97.65%), Widow 6 (2.35%). Education: Diploma 239 (93%), Degree 18 (7%). Attended CNE: Yes 132 (51.6%), No 124 (48.4%). Parity: 0: 17 (6.64%), 1: 32 (12.5%), 2: 80 (31.25%), 3: 82 (32.03%), 4: 38 (14.84%), 5: 6 (2.34%), 6: 1 (0.40%).
Discussion Cervical cancer is preventable, and one important aspect of prevention is detection of premalignant lesions by screening [1]. This study shows that the majority of the participants have good knowledge about cervical cancer and most have a moderate awareness of Pap smear screening. Moreover, if nurses themselves undergo screening tests regularly, they can be role models for other females in carrying out cervical cancer screening tests [2,3]. The association between years of service and level of awareness was significant (P=0.014). This also implies that content on cervical cancer and screening should be incorporated into teaching curricula and training programmes [2]. Conclusion Knowledge about cervical cancer, screening and the practice of Pap smears is good among female nurses in HPSF. Nurses, if properly aware of cervical cancer and screening methods, can educate women in the community and increase health-seeking behaviour among eligible women. References [1] Ministry of Health Malaysia, Academy of Medicine of Malaysia. Clinical Practice Guidelines. Management of Cervical Cancer. Kuala Lumpur: Ministry of Health Malaysia; 2003. [2] World Health Organization.
Comprehensive cervical cancer control. A guide to essential practice. 2nd ed. Geneva: World Health Organization; 2014. [3] Institute for Public Health. The Third National Health and Morbidity Survey (NHMS III) 2006. Women's Health. Kuala Lumpur: Ministry of Health Malaysia; 2008.
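The simple logistic regression behind Table 2's crude odds ratio can be sketched as follows. The data here are invented toy values, not the study's (which was analysed in IBM SPSS); the point is that the reported OR is exp(beta) for the experience coefficient, so an OR below 1 means the odds of good awareness fall with each additional year.

```python
# Minimal sketch of a one-predictor logistic regression fitted by
# gradient ascent, and the crude odds ratio exp(beta). Toy data only.
import math

def fit_logistic(xs, ys, lr=0.01, epochs=5000):
    """Fit P(y=1|x) = 1/(1+exp(-(b0 + b1*x))) by gradient ascent."""
    b0, b1 = 0.0, 0.0
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / len(xs)
        b1 += lr * g1 / len(xs)
    return b0, b1

# Invented data: years of experience vs. good (1) / poor (0) awareness,
# constructed so that awareness tends to fall with longer experience.
years = [2, 4, 5, 8, 10, 12, 15, 18, 20, 25]
good = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]

b0, b1 = fit_logistic(years, good)
odds_ratio = math.exp(b1)  # OR per additional year of experience
print(odds_ratio < 1.0)
```

With a negative fitted slope, the crude OR lands below 1, matching the direction of the 0.926 per-year estimate in Table 2.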
poster
[Research framework diagram keywords: policy innovation, strategic design, happiness & wellbeing; textile artisans’ communities; material; tool: by hands, machinery, digital tools; quality: skilled control; personal identity; material culture; local fibres: vegetal, animal, discarded; research framework; methodology; artisan; community; textiles: large diffusion, wide applications, sustainability challenges, utilitarian, culturally meaningful, aesthetic; discover, define, develop, deliver; contextual interviews, co-creation, experts’ focus group, shadowing, service blueprint, service ecosystem map; research paradigm: ontology (different stakeholders acknowledge multiple realities), epistemology (constructivism, interpretivism, participatory action research); interaction with participants influences the research; qualitative and flexible process of in situ data collection.]
Francesco Mazzarella, PhD Student (1st Year), AHRC Design Star CDT, F.Mazzarella@lboro.ac.uk. Supervisors: Dr Escobar-Tello, Dr Mitchell
service design for the future of textile artisans’ communities: an enabling ecosystem towards sustainability and social innovation
Textile artisanship is the human-centred economic activity of giving form and meaning to local natural fibres, by hand or by directly controlling mechanised and digital tools, and managing the process of apparel making.
research questions What are the barriers and enablers for textile artisans’ communities to become sustainable? How can service design be used to encourage such communities toward a sustainable future? How can an enabling ecosystem be codesigned to scale up innovation at a glocal level?
area of study The global economic and environmental crisis is leading to the transition from a linear economy based on consumption and waste, towards new ethics of sustainability and cutting-edge business opportunities.
Textile artisanship is an interesting opportunity for opening up flexible and redistributed micro-enterprises, while bridging local realities with global markets. It is a key contributor to sustainable development as it catalyses cultural heritage, provides social employment, boosts creative economies and enhances environmental stewardship. This PhD aims to explore how service design can strategically contribute to encouraging textile artisans’ communities towards a sustainable future. This means empowering artisans’ creative assets and social bonds, codesigning collaborative services and scaling up initiatives within an enabling ecosystem at a glocal level. [Diagram keywords: small scale, localised, diversified, flexible; service design, sustainable development, social innovation; enabling ecosystem, resiliency, glocal markets, craft clusters, material culture, creative economy, circular economy, hybrid touchpoints, collaborative services, digital innovation; aim, focus, approach, making.]
poster
Compressed Sensing in Wireless Communication Thomas Arildsen∗, Tobias Lindstrøm Jensen, Karsten Fyhn, Peng Li, Pawel Pankiewicz, Jacek Pierzchlewski, Hao Shen, Ruben Grigoryan, Søren Holdt Jensen, Torben Larsen Dept. of Electronic Systems, Aalborg University, Denmark ∗tha@es.aau.dk Abstract Compressed sensing has gained a lot of attention in recent years. While the famous Shannon-Nyquist bound for sampling of analog signals provides a sufficient condition for the sampling frequency, it can be reduced substantially in situations where a signal is sparse in a given domain. This leads to compressed sensing, which can be formulated as an optimization problem in which an under-determined system of linear equations is solved under the condition of a sparse solution. In our research group at Aalborg University we are using compressed sensing theory to address some important practical challenges in radio frequency communication and more generally in analog-to-digital conversion. Radio frequency receivers for modern telecommunication standards in power-constrained devices are increasingly challenged by the necessary sampling frequencies and hardware power consumption. Compressed sensing techniques may help address some of these challenges, for example by reducing the necessary sampling rate. This is a move towards the original, but hardware-wise unrealistic, ideas of software-defined radios. In order to utilise compressed sensing, we must be able to represent the relevant signals sparsely – or, alternatively, find new signal structures that allow compressed sensing. In practice, various hardware imperfections as well as noise and interference limit the applicability of compressed sensing. These issues must be addressed to make compressed sensing practicable. Multi-Coset Sampling Scheme ▶The multi-coset sampling scheme is one of several possible sub-Nyquist sampling schemes.
▶The multi-coset sampling scheme can be used to sample and reconstruct frequency-sparse signals. Reconstruction is possible with various algorithms that may have different performance. ▶We are focusing on the impact of hardware imperfections on the reconstruction quality. Various real-life restrictions such as practical sampling patterns are also important. ▶This research will bring multi-coset sampling schemes closer to real-life applications. [Diagram: a bank of P ADCs sampling x(t), with ADC p producing yp[m] = x((m·L+cp)·T).] Sparse Sampling Receivers ▶Applying multi-coset sampling on a conventional time-interleaved ADC may increase the input bandwidth. ▶Time-interleaved ADCs traditionally use uniform sampling. We investigate the introduction of randomized sampling. ▶We consider a sample-time error in each sub-ADC. Sample-time error is a random time delay for each sub-ADC, and it is a constant error for each sampling instance. ▶We identify these sample-time errors and exploit them to introduce randomized sampling similar to multi-coset sampling. [Timing diagram of the multi-coset sampling pattern.] CS Direct Conversion Receiver ▶Increasing computational power of modern data receivers enables moving more and more processing from the analog to the digital domain. ▶In this project, compressed sensing is used to relax the analog filtering requirements prior to the ADCs in a direct conversion receiver. ▶The filtered down-converted radio signals are randomly sampled with an average sampling frequency below the Nyquist rate. ▶This approach exploits frequency-domain sparsity of the down-converted radio signals. ▶The approach allows identifying and removing interfering signals from the sampled signal, thereby enabling reduced low-pass filter order.
[Spectrum diagrams: frequencies allowed by the radio band-pass filter, showing blockers, the desired signal and noise; after downconversion, low-frequency components and downconverted blockers.] All the high-frequency components can be removed by the 1st-order low-pass filters.
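The sparsity idea these receivers rely on can be shown numerically: a sparse signal can be recovered from fewer measurements than unknowns. The sketch below recovers a 1-sparse length-8 signal from only 4 linear measurements; the ±1 measurement matrix (rows of an 8×8 Hadamard matrix) and the single matching-pursuit step are illustrative only, not the group's actual sampling patterns or reconstruction algorithms.

```python
# Minimal compressed-sensing toy: 8 unknowns, 4 measurements, 1-sparse signal.

A = [
    [1,  1,  1,  1,  1,  1,  1,  1],
    [1, -1,  1, -1,  1, -1,  1, -1],
    [1,  1, -1, -1,  1,  1, -1, -1],
    [1,  1,  1,  1, -1, -1, -1, -1],
]
M, N = len(A), len(A[0])          # 4 measurements, 8 unknowns

x_true = [0.0] * N
x_true[5] = 3.0                   # sparse signal: one active bin

# Sub-Nyquist measurements y = A @ x_true
y = [sum(A[m][n] * x_true[n] for n in range(N)) for m in range(M)]

# One matching-pursuit step: the column most correlated with y gives
# the support; projecting y onto that column gives the amplitude.
corr = [abs(sum(A[m][n] * y[m] for m in range(M))) for n in range(N)]
support = corr.index(max(corr))
amplitude = sum(A[m][support] * y[m] for m in range(M)) / M
print(support, amplitude)         # prints 5 3.0
```

With more active bins and noise, the single correlation step is replaced by iterative matching pursuit or l1 minimization, but the principle, sparsity making an under-determined system solvable, is the same one exploited by the multi-coset and randomized-sampling receivers above.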
poster
The Governance of Digital Immortality & the Digital Afterlife Researcher: Khadiza Laskor khadiza.laskor@bristol.ac.uk https://www.bristol.ac.uk/cdt/cyber-security/ Supervisors: Professor Richard Owen Professor Andrew Charlesworth Digital Immortality & Digital Afterlife… …are terms that have grown in prominence since the turn of the century. This prompted the emergence of ‘grief tech’, which includes the possibility to preserve and curate a virtual online persona from digital remains, enabling individuals to ‘live’ after their physical death. Governance & Regulation However, history is at risk of repeating itself: the ‘pacing problem’ (Collingridge, 1980) has often been observed when governing technology. This is the main theme being explored using ‘grief tech’ as a use case. Research Progress A systematic literature review on the framing of ‘Digital Immortality’ and ‘Digital Afterlife’, and the state of the art of its governance, has revealed strong technological influences, with little emphasis on governance or regulation. This Year’s Focus Analysis of over 50 expert stakeholder interviews, with results from focus group discussions with the public (to be scheduled later), is expected to highlight how this innovation ought to be controlled, if at all. Further… …the combined results will be used to assess whether existing governance artefacts are able to protect an identifiable virtual online persona created by the individual before they died, or posthumously by a loved one. This multi-disciplinary PhD thesis utilises a flexible design process, based upon ‘Responsible Innovation’ (Owen et al, 2013). The approaches being adopted are empirico-inductive, allowing for a comprehensive analysis of how such a nascent industry, loosely labelled as ‘grief tech’, could be governed. While the focus is on data left behind by the deceased, due to sensitivities associated with grief and bereavement, creative means are also being pursued.
poster
Distinct dynamics of the extracellular matrix during heart regeneration in different Astyanax mexicanus populations Zhilian Hu1, Madeleine E. Lemieux2, Mathilda T. M. Mommersteeg1 1. Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK 2. Bioinfo, Plantagenet, Ontario, Canada MATERIALS AND METHODS Astyanax mexicanus: surface fish and Pachón. Methods: • Cardiac cryoinjury • Histochemistry analysis • RNA sequencing • Real-time quantitative polymerase chain reaction (qPCR) • RNAscope • QTL analysis BACKGROUND AND SUMMARY The cardiac regenerative capacity of Astyanax mexicanus populations differs completely in response to injury. Surface fish can regenerate the ablated heart tissue, similar to zebrafish, while Pachón cannot and instead forms a permanent scar with excess extracellular matrix (ECM), similar to humans. ECM is a dynamic three-dimensional network of extracellular macromolecules including collagen and non-collagenous components. Its content, relative proportions, and spatial arrangement play a pivotal role during tissue regeneration. Any changes in ECM deposition can have a big impact on the ability to regenerate. Astyanax mexicanus allows us to identify the differences in ECM deposition and removal between a degradable scar in a regenerative setting and a permanent scar in a non-regenerative setting. Our research indicates that ECM components and their regulating matrix metalloproteinases (MMPs) are dynamically and differentially expressed during heart regeneration between surface fish and Pachón. Next, we will link the key scarring differences directly to both the genome and the ability for heart regeneration using new and prior Quantitative Trait Loci analyses. This will allow us to find the most fundamental molecular mechanisms directing the wound healing process towards regeneration versus scarring.
Result 1: Differential regenerative capacity of the heart after cryoinjury between surface fish and Pachón Figure 1. A-A’, Adult Astyanax mexicanus surface fish (A) and cave-dwelling Pachón (A’). B-C’, AFOG staining of the cardiac ventricle after cryoinjury (fibrin in red, intact muscle in orange and collagen in blue). The hearts are similar in fibrin formation at 3 days post cryoinjury (dpci) (B-B’), followed by intermingling of fibrin with collagen fibrils at 14 dpci (C-C’). Resolution of the fibrotic scar in surface fish (D), but a permanent scar in Pachón (D’) at 100 dpci. E, Quantification of the scar area at 100 dpci shows a large scar is still present in Pachón, while the surface fish restored the injured ventricular apex. Scale bar: 100 µm. Result 2: Excess deposition of collagens and disorganisation of collagen fibrils in the wound area of the Pachón heart Figure 2. A-B, Picro Sirius red (PSR) staining of the ventricle at 14 dpci for both fish (collagen in bright red under light microscopy). ven, ventricle. A’-B’, Imaging of collagen fibrils under a polarised microscope shows remarkably high deposition of collagens and disorganisation of collagen fibrils in Pachón (thin fibrils in green and thick fibrils in yellow-orange-red). Result 3: Deregulation of collagen and non-collagenous extracellular matrix encoding genes after cryoinjury Figure 3. A-A’, Diagram of the RNAseq workflow. B, Example of candidate genes validated by qPCR at six time points (6 hpci, 24 hpci, 3 dpci, 7 dpci, 14 dpci and 30 dpci). Expression levels of non-collagen fibronectin (fn1a) and FACIT collagen 12 (col12) are dynamically and differentially expressed between both fish during the regenerative process. C-D”, RNAscope revealed that col12 is primarily expressed in the outside wall of the wound. E-F”, Multiplexin collagen 18 (col18) is undetectable in the wound region of surface fish (D-D”) but is expressed in the Pachón scar (E-E”).
These results indicate that the composition of the scar differs between the two fish. FACIT, Fibril Associated Collagens with Interrupted Triple Helices; Mu
poster
NIMBLE: A language for algorithms for graphical models embedded in R Problem-specific hierarchical statistical models (graphical models) are used by domain experts in many scientific fields: • State-space or hidden Markov models for time-series data • Random field models for spatial data • Generalized linear mixed models for complex designed studies • Capture-recapture models • Non-parametric regression and distribution models • Combinations of these and many more ideas There are many unmet statistical challenges for hierarchical models: • Efficient Markov chain Monte Carlo (MCMC) for parameter estimation • Maximum likelihood (or empirical Bayes) estimation when the likelihood requires integration. • Approximation of normalizing constants (likelihood or marginal likelihood) for model comparisons. • Tools for model selection, an unresolved area of Bayesian methodology. • Tools for model averaging. • Tools for assessing/validating model fit or assumptions. Perry de Valpine1, Christopher J. Paciorek1, Daniel Turek2, Nick Michaud1, & Duncan Temple Lang3 1University of California, Berkeley. 2Williams College. 3University of California, Davis. Previous software typically follows one of two basic designs: 1. Define a single model or family of models and provide specific methods for them. • Typical of R packages. • E.g. Generalized linear mixed models (GLMMs; lme4, MCMCglmm). • E.g. Generalized additive mixed models (mgcv). • E.g. Spatial models (spBayes, INLA). • E.g. Dirichlet process type models (DPpackage) • User cannot extend the model. 2. Provide a domain-specific language for writing general models and one or a few black-box algorithms. • E.g. MCMC is provided by BUGS (WinBUGS, OpenBUGS, JAGS), Stan, PyMC and others. • E.g. particle filtering is provided by BiiPS, LibBi, pomp, and others. • User cannot write new algorithms for the models.
New computational statistics methods for general models / Software for statistically savvy domain scientists
Statisticians and computer scientists publish many new methods that are not accessible via software to domain scientists. • Model-specific methods can be distributed as R packages. • Model-generic methods are hard to code because the model must be abstracted. There has been no general system for model-generic programming.
[Diagram: model flexibility vs. algorithm flexibility, positioning BUGS/JAGS, PyMC, ADMB, Stan, and typical R packages.]
NIMBLE (Numerical Inference for statistical Models using Bayesian and Likelihood Estimation) = BUGS language + algorithm programming
Four components of NIMBLE:
1. Domain-specific language (DSL) for statistical models • We adopt and extend the widely-used BUGS language
2. Domain-specific language embedded within R for model-generic algorithms
3. Code-generator (compiler) that generates C++ from the model and algorithm DSLs. • C++ objects are managed from R by dynamically-generated interface classes
4. Algorithm library (MCMC, SMC, etc.)

metropolis_hastings_sampler <- nimbleFunction(
    contains = sampler_BASE,
    setup = function(model, mvSaved, targetNode, scale) {
        calcNodes <- model$getDependencies(targetNode)
    },
    run = function() {
        model_lp_current <- model$getLogProb(calcNodes)
        proposal <- rnorm(1, model[[targetNode]], scale)
        model[[targetNode]] <<- proposal
        model_lp_prop <- model$calculate(calcNodes)
        log_MH_ratio <- model_lp_prop - model_lp_current
        if(decide(log_MH_ratio)) jump <- TRUE else jump <- FALSE
        if(jump) copy(from = model, to = mvSaved, row = 1,
                      nodes = calcNodes, logProb = TRUE)
        else copy(from = mvSaved, to = model, row = 1,
                  nodes = calcNodes, logProb = TRUE)
    })

[Figure: existing and future (in colors) NIMBLE processing flows.]
Why R? Benefits of embedding NIMBLE in R: • R is widely used in applied statistics.
• The BUGS language uses extremely R-like syntax that can be natively parsed in R. • R handles code as an object. • NIMBLE constructs and evaluat
poster
Manual Quality Control with Freesurfer 7.1: Key Problem Areas and Importance of Corrections V. Vahermaa¹ ², B. Aydogan³, T. Raij³ ⁴, R-L. Armio⁵, H. Laurikainen⁵, J. Saramäki¹, J. Suvisaari² ¹ Department of Computer Science, School of Science, Aalto University, Finland ² Department of National Health and Welfare (THL), Helsinki, Finland ³ Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, Finland ⁴ University of Helsinki and Helsinki University Hospital, Psychiatry, Helsinki, Finland ⁵ Department of Psychiatry, University of Turku, Finland Contact Information: Vesa Vahermaa vesa.vahermaa@aalto.fi FreeSurfer (FS) is a free software suite that provides a comprehensive processing stream for structural magnetic resonance imaging (MRI) images.⁶ This stream includes brain extraction and grey-white matter segmentation, which outlines the brain. 51 FEP patients and 78 images were included in the dataset. All subjects were scanned with the same Siemens brand 3T scanner. The impact of pial surface errors related to overextension of the gray matter boundary on automatically calculated brain volumes is very small. Unless the study requires highly precise volume measurements or focuses on the areas that were shown to have a significant number of errors, manual quality control may not be necessary. Figure 1. Three regions of the brain with error clusters: lateral orbitofrontal cortex (A), paracentral lobule (B), and cerebellar tentorium (C). Table 1. Error frequency among inspected MRI images. A total of 233 errors related to overextension of the gray matter boundary were recorded from the images. Here, we determine the number, location, and volume-distorting effect of such errors in the brains of 51 first-episode psychosis (FEP) patients and 78 MRI images. 28 of the subjects had both baseline and 1-year follow-up images.
The subjects had FEP in 2010–2016, and were recruited by the Finnish Institute for Health and Welfare (THL). The changes in local brain volumes as listed in aparc.stats between non-controlled and controlled images were on average very small (−0.09% ± 0.33%). The average change in cortical thickness was −0.07% ± 0.14%. 127 errors were in the left hemisphere, and 106 in the right hemisphere. The images went through manual quality control by a single researcher (V.V.), with error locations recorded. All MRI images underwent the full FS processing stream (recon-all), including automatic volume calculation. Mistakes in this automated process lead to the miscalculation of brain volume in the areas with incorrectly determined edges. 6 http://surfer.nmr.mgh.harvard.edu/ 7 Paul A. Yushkevich, Joseph Piven, Heather Cody Hazlett, Rachel Gimpel Smith, Sean Ho, James C. Gee, and Guido Gerig. User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability. Neuroimage 2006 Jul 1;31(3):1116-28. The pial surfaces of each brain were systematically scanned for gray matter boundary errors present in at least 3 consecutive slices, and any non-brain tissue was removed. The error coordinates were recorded, and location labels determined from the aparc+aseg file of each subject. The errors were not uniformly distributed in the brain: the most affected areas were the middle cerebral artery (59 errors) and the immediate vicinity of the cerebellar tentorium (50 errors). The effect is calculated by having a researcher manually examine the MRI images and manually correct the errors, and comparing the pre- and post-intervention datasets and their volume measurements. Each set of error coordinates was converted to MNI305 space as a single voxel. A heatmap was generated by applying Gaussian kernel smoothing to the voxels in MNI305 space, and overlaying the heatmap in ITK-SNAP.
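The volume-comparison step can be sketched as follows: percent change per region between uncorrected and manually corrected volumes, then the mean and standard deviation across regions. The region names and volumes below are invented placeholders, not actual aparc.stats output.

```python
# Hedged sketch: mean +- SD of regional volume change after manual QC.
# Toy volumes in mm^3; real input would be parsed from FreeSurfer's
# aparc.stats files for each subject.
import statistics

uncorrected = {"lateralorbitofrontal": 7510.0, "paracentral": 3620.0,
               "precentral": 12840.0, "superiorfrontal": 21910.0}
corrected = {"lateralorbitofrontal": 7480.0, "paracentral": 3605.0,
             "precentral": 12845.0, "superiorfrontal": 21890.0}

# Percent change per region: removing overextended gray matter mostly
# shrinks the affected regions slightly.
pct_change = [100.0 * (corrected[r] - uncorrected[r]) / uncorrected[r]
              for r in uncorrected]
mean = statistics.mean(pct_change)
sd = statistics.stdev(pct_change)
print(round(mean, 3), round(sd, 3))
```

Summarising across all regions and subjects this way yields small averages with a modest spread, which is the form of the −0.09% ± 0.33% result reported above.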
Subjects with visible motion artefacts were excluded from the dataset. The errors were also
poster
Results Analysis of variance tests compared mean abundance data across multiple semesters. Results show that only three of the key species (Nassau grouper, Red lionfish, and Queen triggerfish) had a significant change. Abundance of Nassau grouper initially decreased from Fall 2004 to Spring 2005 and has since been increasing through Fall 2010. Significant increases were seen when comparing Fall 2010 to Fall 2005, Spring 2006, and Fall 2006 (P=0.06, P<0.01, and P=0.004; Figure 1). Results for Red lionfish showed a significant difference in all semesters when compared to Fall 2008 (P=0.001, P=0.001, P=0.041, and P=0.043; Figure 2). The mean abundance of Queen triggerfish is significantly different when comparing all semesters to Spring 2007, when there was a significant rise in mean abundance (P<0.05; Figure 3). Visual Surveys of Patch Reef Species Abundance in South Eleuthera, The Bahamas Phoebe Fitz, Marianne Foss-Skiftesvik, Meaghan Kachadoorian, Chris Lorient, Chris Pibl, Jackson Rafter Advisors: Matt Dawson and Skylar Miller Methods This study was performed off Powell Point in Cape Eleuthera, Bahamas during a seven-week period in Fall 2010. Twenty patch reefs, all within a proposed marine protected area, were each surveyed twice during the seven-week observation period in order to maintain accurate results (Figure 5). Extensive fish identification training was conducted before research commenced to ensure proper field identification. Visual surveys were conducted in buddy pairs while snorkeling to record population abundance of the six key species (Figure 4). Mean abundance values of each species across all survey sites over the seven-week period were calculated. These data were then analyzed and compared to data from the past seven years. Discussion One reason for the increase in abundance of Nassau grouper may be the effectiveness of current fishing regulations. Fishing regulations are determined annually by the Bahamian government.
For the year 2011, fishing is closed December 1 to February 28. Another regulation limits the size of the Nassau grouper caught to a minimum of three pounds (Department of Marine Resources 2010). The initial increase in Red lionfish population seen from 2007 to 2008 may be due to their invasive nature and therefore lack of natural predators. Because the Red lionfish originated in the Indo-Pacific, locals had no previous knowledge of this species. However, now that handling techniques are more familiar, some local Bahamians are beginning to recognize Red lionfish as a good food source. Because Red lionfish spawn every four days and therefore have a very high reproduction rate, even the smallest catch by locals impacts their population. Despite this, the most probable reason for the decrease in Red lionfish population is that maximum carrying capacity was reached. As the Red lionfish population increased in South Eleuthera, competition for food and space may have caused the significant decrease. The population abundance of the Queen triggerfish has proved to be fairly constant over this ongoing study, with the exception of the atypical data collected in 2007. However, their population should continue to be monitored to ensure its stability, especially since the Queen triggerfish is economically important to the Bahamas. Their vibrant colors attract many tourists to snorkel on the patch reefs and they have been used in the past as a fishing alternative to Nassau grouper when their numbers were dwindling. The data supported the hypothesis for some of the key species, while it was inconclusive for others. The increase in Nassau grouper populations is an example of how fishing regulations may have positive effects on species population abundance. However, this data is not conclusive enough to provide recommendations for future marine protected areas and/or fishing regulations in South Eleuthera, The Bahamas. Literature Cited Albins, M.A., Hixon, M.A. 2008. 
Invasive Indo-Pacific lionfish reduce recruitment of
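The semester comparisons reported in the Results rest on one-way analysis of variance. As an illustration of the statistic behind those comparisons, here is a minimal pure-Python sketch of the one-way ANOVA F statistic; the group values below are invented, not the survey data:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA:
    between-group mean square over within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)          # grand mean
    k, n = len(groups), len(all_vals)
    # Sum of squares between groups (weighted by group size)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Sum of squares within groups
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

# Two hypothetical "semesters" of abundance counts
print(one_way_anova_f([[4, 5, 6], [7, 8, 9]]))  # 13.5
```

In practice the F value would be compared against an F distribution with (k-1, n-k) degrees of freedom to obtain the P values quoted in the poster.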
poster
18th International Open Repositories Conference (OR2023), Stellenbosch University, South Africa, 12‐15 June 2023 Marcin Werla (mwerla@qnl.qa), Dr. Alwaleed Alkhaja (aalkhaja@qnl.qa), Dr. Arif Shaon (ashaon@qnl.qa) Qatar National Library In November 2022, Qatar National Library (QNL) launched a new repository service called Manara – the Arabic word for ‘lighthouse’ – with the ambition of capturing, curating, preserving and providing open access to all research outputs created in Qatar. This service builds on a national-level e-resources consortium that the Library has been coordinating since 2016, ensuring optimum licensing and financial terms for subscription-based access to research and educational content for 10 major institutions. Besides institutional-level cooperation, Manara welcomes all kinds of research-related open access contributions from all researchers based in Qatar on an individual voluntary basis. The Library is responsible for the basic content review before publishing. Introduction Manara is a new service and will initially operate as a pilot. The wider goal is to develop Manara as a core Library service that is strategically important for QNL's long-term mission and vision. The exact long-term sustainability model will be developed in cooperation with participating institutions after the initial pilot period, which runs until mid-2024. THE FUTURE Find out more at manara.qnl.qa Manara, as a comprehensive and sustainable research repository, contributes significantly to several UNESCO Sustainable Development Goals by enriching scientific, educational and cultural knowledge environments, supporting research communication and information sharing. The open access model promoted through the repository makes the contribution even stronger, opening Qatari research outputs to the world. UNESCO SDGs An important role of Manara is to help preserve institutional archives in the dynamic landscape of Qatar. 
To support the Library’s mission, Manara can preserve research and educational content from institutions that no longer exist, like the Qatar branch of University College London, UCL-Q, which was active between 2010 and 2020, and played the role of Center of Excellence for Cultural Heritage and Knowledge Management in the country, conducting around 100 research and education projects. LEGACY ARCHIVING The concept at the heart of a refreshed Qatar Foundation strategy revolves around creating a collaborative and diverse higher education ecosystem in Qatar that promotes interdisciplinary learning, research excellence, and global connectivity. The concept emphasizes the integration of multiple universities, research institutions, and educational facilities within a shared campus or network. It aims to contribute to the development of Qatar as a knowledge-based society and a hub for education and research in the region. This is a comprehensive development plan introduced by the government of Qatar to guide the country's growth and progress across various sectors by the year 2030. The vision emphatically focuses on four key pillars: human development, social development, economic development, and environmental development. The vision profoundly emphasizes the importance of creating a knowledgeable and skilled society, developing a knowledge-based economy, and promoting entrepreneurship and innovation. Qatar National Library actively promotes open access through a broad variety of services, to support global dissemination of, and unrestricted access to, Qatar-based research, including: • signing open access agreements with publishers and providing an article processing charges fund for Qatar-based researchers; • supporting international open access initiatives, like INTACT’s OpenAPC and Open Access 2020; • running events and training sessions; • providing information and advice to researchers about open access publishing and open licensing. 
[Section headings: The Qatar Foundation Multiversity · Qatar National Vision 2030 · Promoting Open Access · Manara]
poster
Crowdsourcing in History. New participatory and inclusive methodological challenges for research in History in Spain (CrowdHistory) DH2023 COLLABORATION AS OPPORTUNITY ALLIANCE OF DIGITAL HUMANITIES ORGANIZATIONS, UNIVERSITY OF GRAZ AUSTRIA, 10-14 JULY 2023. Credits Scientific Publications The main objective of the CrowdHistory project is to identify, analyse, test and standardise methodological practices of citizen participation in Spanish R&D projects from the perspective of Digital Humanities, both within and outside the academic field. by Dr. Lidia Bocanegra Barbecho - Digital Humanities lead at Medialab UGR (Universidad de Granada, Spain) and tenure-track position at the Contemporary History Department, UGR. lbocanegra@ugr.es | @Lidia_Bocanegra THE PROJECT AIM CROWDHISTORY METHODOLOGY To achieve the proposed objectives, CrowdHistory implements a four-phase methodology as follows: - Phase 1 - Detection of research projects in history that use public participation from citizen science. Design and structuring of a relational web database (DB), implemented through the Drupal Content Management System, version 9, PHP 7 and MariaDB 5.5. The established database (DB) allows classification parameters in closed lists and with free text fields, where explanatory information about the strategies of approaching the public has been added. - Phase 2 - Standardisation methodology on public participation in historical research. This phase will lead to first analyses and, as a result, to a first cataloguing of participatory processes (of citizen involvement) as well as engagement activities (public engagement). - Phase 3 - Testing and improving the methodology of public participation in historical research from citizen science through pilot cases. Use of hybrid (physical/digital) citizen laboratories where some of the classified participatory approaches and techniques will be put into practice. 
- Phase 4 - Evaluation of the public participation and public engagement methodology and development of good practice guidelines. STATE OF THE ART The current state of research lacks scientific investigations that reflect methodologically and conceptually on public participation in historical research in Spain. When we talk about public participation in research, we refer to Citizen Science (CS): this is scientific research coordinated by scientists in which ordinary people, amateur and non-professional researchers, participate in some aspect of the research (Eitzel 2017). This type of methodological research is practically new in the humanities; however, there is a fair number of research projects that use CS, currently underway or already carried out, which offer a suitable breeding ground with relevant data to carry out analyses in Citizen Science (Bocanegra et al. 2017). This citizen participatory aspect is one of the main characteristics of the Digital Humanities (DH) (Bocanegra 2020); in the last decade there has been a change in the way of doing research in the human sciences as society begins to have a participatory role during the research process, moving beyond being the subject of analysis to participating in some or all stages of scientific production. This new way of doing science is strongly driven by the growing boom of web 2.0 and, within it, digital social networks stand out as a place for interaction (Bocanegra 2020); drawing on collective intelligence (Surowiecki 2004) and micro- PRELIMINARY RESULTS Currently, we have identified and classified 473 international projects that use CS through the project's web database: https://cohistoria.es/; of which 161 are Spanish initiatives or with Spanish participation. Data collection continues as new initiatives are found. 
An ad hoc classification has been made on the basis of two models: PPS – Public Participation Strategies and PES – Public Engagement Strategies, through which a series of participatory and public engagement activities have been i
poster
[ERSP figure panels: baseline vs. random counting (0-50 ms, 4-8 Hz) and conditions 1 vs. 4 (0-50 ms, 8-12 Hz); permutation tests with FDR-corrected p-values] Background Method 32 cap-mounted electrodes Two sessions: count aloud 1-10, randomly count aloud 1-10. Count upon visual presentation of a question mark. Epochs time-locked to question mark onset. Preprocessing & power spectrum analysis in EEGLAB Bobby Ruijgrok & Olga Kepinska bobbyruijgrok@gmail.com Alpha and theta enhancement during self-ordered number generation [1] Petrides, M., Alivisatos, B., Meyer, E., & Evans, A. C. (1993). Functional activation of the human frontal cortex during the performance of verbal working memory tasks. Proceedings of the National Academy of Sciences of the United States of America, 90(3), 878–882. [2] Meyer, L. (2017). The neural oscillations of speech processing and language comprehension: state of the art and emerging mechanisms. European Journal of Neuroscience. https://doi.org/10.1111/ejn.13748. Descriptive statistics 93 Dutch participants (71 female; 85 right-handed). Mean age = 23.65, SE = 0.16. Accuracy count aloud: 100%. Accuracy randomly count aloud: 59.12%. Discussion Method Replicated finding of enhanced bilateral activity within the mid-dorsolateral frontal cortex. Additional temporal and parietal activity. Frontal distribution: theta - retrieval; alpha - storage. Verbal working memory (VWM) capacity tested by means of self-ordered number generation. The PET study of Petrides et al. (1993) reported bilateral activation of the mid-dorsolateral frontal cortex[1]. Current study: power spectrum analysis on the theta band (4-8Hz) and alpha band (8-12Hz). Storage and retrieval interplay of alpha and theta[2]. [Figure taken from Petrides et al. (1993)] Self-ordered minus control task. Merged PET-MRI horizontal sections showing activation foci within the mid-dorsolateral frontal cortex. 
The schematic outlines of the brain indicate the level (interrupted lines) of the sections presented. The subject's left is on the left side in these sections. The coordinates (x, y, z) of the foci shown on the schematic outlines of the brain are 1, 40, 32, 30; 2, 35, 42, 22; and 1, 38, 39, 26. [Results figure panels: Theta; Alpha]
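The theta/alpha contrast above comes from a power spectrum analysis of the epoched EEG. As a minimal, illustrative sketch (not EEGLAB's actual pipeline), band power can be extracted from a single epoch with a naive DFT; the signal below is a synthetic 10 Hz tone, not EEG data:

```python
import cmath
import math

def band_power(signal, fs, lo, hi):
    """Mean power of the DFT bins whose frequency falls in [lo, hi] Hz."""
    n = len(signal)
    powers = []
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if lo <= freq <= hi:
            # Naive DFT coefficient for bin k
            coef = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                       for t in range(n))
            powers.append(abs(coef) ** 2 / n)
    return sum(powers) / len(powers)

fs, n = 128, 256
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(n)]  # 10 Hz tone
# A 10 Hz tone carries its power in the alpha band (8-12 Hz), not theta (4-8 Hz)
print(band_power(sig, fs, 8, 12) > band_power(sig, fs, 4, 8))  # True
```

Real ERSP computation additionally averages over trials, baselines the power, and uses an FFT rather than this O(n²) loop.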
poster
Innovative Research for a Sustainable Future www.epa.gov/research knudsen.thomas@epa.gov l 919-541-9776 hunter.sid@epa.gov l 919-541-3490 Framework for Predictive Developmental Toxicity of Chemicals Regulated under TSCA Thomas B. Knudsen1 and E. Sidney Hunter2 U.S. Environmental Protection Agency, Office of Research and Development 1NCCT and 2NHEERL Mining ToxCast to define sensitive pathways Summary OBJECTIVE: build a framework for pre-prioritizing chemicals for developmental toxicity (DevTox) for potential application to chemicals regulated under TSCA (amended 2016). INPUTS: ToxCast data including new information on chemical effects in zebrafish embryos and in diverse mammalian embryonic stem cell lines (hESC, mESC). OUTPUT: generalized workflow to sort compounds by their predicted potential for DevTox and in silico modeling of quantitative cellular, tissue and phenotypic responses. Evaluating model performance • Assigning chemicals under amended TSCA to low and high priority comes with uncertainties in complexity of predicting toxicity during pregnancy. • A DevTox predictive framework will implement data from in vitro assays assessing embryonic cell growth and differentiation. • The framework will operationalize a defined interpretation protocol for computational synthesis and integration DevTox decision tree. • Multiscale modeling and simulation will provide tools and approaches to virtually reconstruct and analyze complexity and quantitative prediction. OBSERVATIONS: • ZFISH platforms were more sensitive but less specific than mESC or hESC platforms. • Model performance improves against more stringent in vivo criteria for DevTox. • Overall accuracy approaches 62% (ZFISH), 72% (mESC), and 84% (hESC). • The hESC model also predicts the critical concentration for human dosimetry. 
DEFINITIONS: (a) LEL, lowest effect level (mg/kg/day) in ToxRefDB prenatal developmental toxicity study: dLEL (fetal endpoint) and mLEL (maternal endpoint) in the same ToxRefDB study for ‘n’ compounds; (b) evidence for DevTox: ‘CLEAR’ (dLEL < 200, dLEL < mLEL); ‘SOME’ (dLEL < 200, dLEL > mLEL); ‘EQUIVOCAL’ (dLEL > 200 but < 1000); and ‘NO’ (dLEL > 1000) in rat and/or rabbit study; (c) TP (true positive), FP (false positive), FN (false negative), TN (true negative); SENS, sensitivity [TP/(TP+FN)]; SPEC, specificity [TN/(TN+FP)]; ACC, accuracy [(TP+TN)/(TP+FP+FN+TN)]. Anatomical development Overview Computational synthesis and integration This poster does not represent EPA Policy i) assess qualitative performance of the predictive model; ii) set an exposure-based prediction of teratogenicity; iii) characterize sensitive response pathways; and iv) virtually reconstruct quantitative phenotypic response. Chemicals ranked by enumerated response Toxicity Prioritization index (ToxPi): • chart maps 753 ToxCast chemicals tested across 6 in vitro platforms; • slices indicate response (1) or no response (0) for each platform: blue - zebrafish platforms (2); yellow - ToxCast pathways (multiple); red - hESC (pluripotent H9 cells); green - mESC (differentiating J1 cells). Pyridaben (strong evidence) Captan (moderate evidence) Hexaconazole (weak evidence) 2,4-Dichlorophenoxyacetic acid (false negative) Specific aims examples Human palatal closure (left) simulated in a virtual embryo (right). Running the simulation with ToxCast data for ‘Captan’ predicts a tipping point at 4 µM. Captan (+) EGF (-) TGF-beta Human HTTK model predicts exposure of a pregnant woman to Captan at 2.39 mg/kg/day would achieve a steady state of 4 µM in fetal plasma. example
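The SENS/SPEC/ACC definitions above translate directly into a small helper; this is an illustrative sketch (function name and example counts are invented, not the poster's data):

```python
def performance(tp, fp, fn, tn):
    """Return (sensitivity, specificity, accuracy) from confusion-matrix
    counts, following the poster's definitions."""
    sens = tp / (tp + fn)                     # SENS = TP / (TP + FN)
    spec = tn / (tn + fp)                     # SPEC = TN / (TN + FP)
    acc = (tp + tn) / (tp + fp + fn + tn)     # ACC = (TP + TN) / total
    return sens, spec, acc

# Hypothetical platform result: 30 TP, 10 FP, 20 FN, 40 TN
sens, spec, acc = performance(tp=30, fp=10, fn=20, tn=40)
print(sens, spec, acc)  # 0.6 0.8 0.7
```

The reported platform accuracies (62% ZFISH, 72% mESC, 84% hESC) are this ACC value computed against the in vivo DevTox calls.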
poster
De novo genome sequencing of Skeletonema marinoi and Surirella brebissonii Mats Töpel¹*, Magnus Alm Rosenblad¹, Ulrika Lind¹, Susanna Gross², Sandra Karlsten¹, Jens Persson¹, Mattias Backman¹, Anna Godhe², Anders Blomberg¹ 1. Department of Chemistry and Molecular Biology, University of Gothenburg 2. Department of Biology and Environmental Sciences, University of Gothenburg *mats@topel.se Introduction De novo whole genome sequencing of the two diatom species Surirella brebissonii (CCMP2919) and Skeletonema marinoi (GUMACC St54) is currently being conducted as part of the Linnaeus Centre for Marine Evolutionary Biology (CeMEB) initiative at the University of Gothenburg. This work is part of the Infrastructure for Marine Genetic model Organisms (IMAGO) project, aimed at developing new marine model systems and providing genomic and genetic tools to study vital phenomena and components of coastal marine ecosystems. Protein translocation in diatoms A chloroplast’s genome only encodes ~100 proteins, but the organelle requires many more proteins in order to perform its functions in the cell. Translocons at the Outer and Inner Chloroplast envelope membranes (TOC and TIC, respectively) are the two multiprotein complexes in plants, red- and green algae that enable chloroplasts to import these essential nuclear-encoded proteins. Diatom plastids, on the other hand, are surrounded by four membranes where the outermost is continuous with the endoplasmic reticulum (ER) [8]. The second membrane (known as the periplastid membrane [PPM]) is the remnant of the secondary endosymbiont’s plasma membrane (proposed to be of red algal origin) [9]. The two innermost membranes are homologous to the outer and inner envelope membranes in plant plastids and are derived from the membranes surrounding the cyanobiont of primary plastids [10]. 
The identity of the TOC and TIC translocons in diatoms and most other chromalveolate organism groups (e.g. brown algae, dinoflagellates and apicomplexan parasites) is mainly unknown. However, bioinformatics analyses of whole genome sequences from diatoms have shown that these systems are also present in diatoms. Bullman et al. [11] reported the discovery of an Omp85 protein that is localized in the third outermost plastid membrane (homologous to the outer envelope membrane in plants) of the diatom Phaeodactylum tricornutum. To date, this Omp85 protein is the only reported putative member of the TOC complex in diatoms. Our phylogenetic analyses (including bacterial, plant and diatom sequences) reveal that the diatom sequences are of red algal origin and more specifically belong to the Toc75 gene family. Unexpectedly long branches in the diatom part of the tree indicate a rapid, albeit even, evolutionary rate. This phenomenon has been reported on previously, but its significance has not yet been thoroughly investigated. Assembly statistics DNA libraries (insert sizes 150 and 3000 bp), of which one was generated from an axenic culture, and one RNA library (300 bp) from Surirella brebissonii have been sequenced. One axenic 300 bp library from Skeletonema marinoi has been generated. Both genomes have been assembled using the CLC de novo assembler software package. Sequence reads were preprocessed using cutadapt [1] and the fastx toolkit [2] (for details see http://matstopel.se/notebook).

                               Skeletonema   Surirella
Total nt sequenced (Gb)                 33          46
Total input to assembly (Gb)            26          33
Assembly size (Mb)                      49         136
Number of contigs (K)                   53         244
Average coverage (x/contig)            443         207
N50 (bp)                              1673         694
Average contig length (bp)             929         557
Longest contig (Kb)                   506*          76
*Putative bacterial symbiont.

Preliminary findings Organelles − The plastid of Surirella brebissonii contains a group II intron with an ORF, the first group II intron to be identified in diatoms. 
Interestingly, the mitochondrial genome of S. brebissonii has lost the group II intron present in both Thalassiosira pseudonana and Phaeodactylum tricornutu
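The N50 values reported in the assembly statistics summarize the contig-length distribution: N50 is the contig length L such that contigs of length ≥ L cover at least half of the assembly. A minimal sketch of the standard computation (the contig lengths below are invented, not the assemblies' data):

```python
def n50(contig_lengths):
    """Return the length L such that contigs of length >= L
    account for at least half of the total assembly size."""
    lengths = sorted(contig_lengths, reverse=True)
    half = sum(lengths) / 2
    running = 0
    for length in lengths:
        running += length
        if running >= half:
            return length

# Total = 3000 bp, half = 1500 bp; 1000 + 800 = 1800 >= 1500, so N50 = 800
print(n50([1000, 800, 600, 400, 200]))  # 800
```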
poster
08-11 November 2021 Online Event Prediction of photoelectric effect log and facies classification in the Panoma gas field Felipe F. Melo CGG GeoSoftware (Formerly at PUC Rio) Contact: felipe146@hotmail.com 1) AIM Predict the photoelectric effect log on two wells and then perform facies classification. 2) Introduction Starting from seven wells in the training set, I impute missing values in the photoelectric effect log (PE) in one well and predict the PE log for two wells. Then, I aggregate these wells to the training data in order to perform facies classification. Distinct feature augmentation techniques are used in both log prediction and facies classification. I used the random forest algorithm for log prediction and extreme gradient boosting for facies classification. Focusing on the generalization of the models, the hyperparameters were tuned with nested and leave-two-out cross-validation, respectively. The well Churchman Bible is picked as test data; this well is chosen because it has all facies and a large number of samples related to facies of interest. 3) Dataset 4) Method 5) Prediction of photoelectric effect log 6) Facies Classification 7) Conclusions • The feature augmentation was successful in both prediction and classification. • The PE prediction score (72%) in the test well shows the good result of this approach. • The results for facies classification are acceptable given the sample limitations (accuracy of 58%). • The accuracy of 86% on adjacent facies and 90% on BS, 57% on PS and 6% on D (critical facies) shows the effectiveness of the proposed approach. 9) References Dubois, M. K.; Bohling, G. C.; Chakrabarti, S. 2007. Comparison of four approaches to a rock facies classification problem. Computers & Geosciences, 33 (5), 599–617. Hall, B. 2016a. Facies classification using machine learning. The Leading Edge, 35, 906–909. Hall, M.; Hall, B. 2017. Distributed collaborative prediction: Results of the machine learning contest. 
The Leading Edge, 36, 267–269. 8) Acknowledgments I would like to acknowledge Brendon and Matt Hall, the participants of the SEG contest and the Swung community for pushing knowledge and free software forward. This research has benefited from knowledge and source codes shared by them. 10) Reproduce The code used in this work is available at: https://github.com/ffigura/SBGf_ML_gas/ Figure 1 – Distribution of facies for each well, the well names are shown over each plot. The last panel shows the distribution of all facies. Figure 2 – Cross-plots from the well logs colored with facies classification. Figure 3 – Well logs and facies from the well Nolan. Figure 4 – Feature augmentation for well Nolan. Figure 5 – Prediction results. a) Feature importance for the test set. b) True and predicted logs from the well Nolan, this well is part of the training set. c) True and predicted logs from the test set, well Churchman Bible. Figure 7 – Confusion matrix. The facies BS, PS and D are the most critical in this work. The well Churchman Bible is used as the test dataset, i.e., the quality of the regressor and of the classifier will be evaluated on this well. All nine other wells are used as the training dataset for facies classification. In the first part, the photoelectric effect (PE) log is imputed for the well Recruit F9 (12 missing samples) and computed for the wells Alexander D and Kimzey A using the Random Forest regressor. Before log prediction, I perform feature augmentation. In this case, it adds non-linearities by computing a quadratic expansion with all second-order interaction terms, and applies a low-pass filter to the logs with the 1D discrete wavelet transform. Therefore, the training set moves from six to twenty features before estimating the PE logs. The logs NM_N and RELPOS are not used in feature augmentation because it would make no geophysical sense. 
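The two augmentation ideas described above can be sketched in a few lines of pure Python; this is an illustrative sketch, not the author's code, and the Haar-style pairwise average below stands in for the poster's 1D discrete wavelet transform low-pass filter:

```python
from itertools import combinations_with_replacement

def quadratic_expand(row):
    """Augment one sample: original features plus all squares and
    second-order interaction terms (x_i * x_j, i <= j)."""
    return list(row) + [a * b for a, b in combinations_with_replacement(row, 2)]

def haar_lowpass(log):
    """Crude low-pass stand-in for a 1D DWT filter: Haar approximation
    coefficients (pairwise means) upsampled back to input length."""
    smooth = []
    for i in range(0, len(log) - 1, 2):
        avg = (log[i] + log[i + 1]) / 2
        smooth += [avg, avg]
    if len(log) % 2:          # odd-length log: keep last sample
        smooth.append(log[-1])
    return smooth

print(quadratic_expand([2.0, 3.0]))  # [2.0, 3.0, 4.0, 6.0, 9.0]
print(haar_lowpass([1, 3, 5, 7]))    # [2.0, 2.0, 6.0, 6.0]
```

In the poster's workflow the expanded columns and the filtered logs are appended as extra features before fitting the Random Forest regressor.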
After prediction of the PE logs into the two missing wells, these wells are grouped with the seven wells used as trai
poster
DARIAH (in-kind) contributions — Activities & Services: A visual walk through. 2019-05-05 | Andrea Scharnhorst - Femmy Admiraal - Dirk Roorda | DANS. Countries: Austria (AT), Belgium (BE), Croatia (HR), Cyprus (CY), Denmark (DK), France (FR), Germany (DE), Greece (GR), Ireland (IE), Italy (IT), Luxembourg (LU), Netherlands (NL), Poland (PL), Portugal (PT), Serbia (RS), Slovenia (SI). Legend: activity = consulting / coordination / event / resource creation / software development; service = access to resources / data hosting / processing / support. [Chart: contribution counts by type — a/consulting 17, a/event 52, a/resource creation 29, a/software development 21, a/DARIAH coordination 34, s/access to resources 60, s/data hosting 15, s/processing service 25, s/support service 21, uncertain 15]
poster
Simple tool quantifying sediment resuspension effects on marine organic carbon storage A rapid and efficient solution for integrating sediment heterogeneity in resuspension and C storage assessments. Global trawling CO2 footprint estimations3 lack • measurements of resuspension effect on org. C mineralization rates4 • accounting for spatial variation of sediment types/habitats5 The seafloor plays a pivotal role in Earth’s climate by storing organic carbon long-term.1 Resuspension enhances mineralisation of stored organic carbon to CO2.2 Ines Bartl Tegan Evans Jenny Hillman Simon Thrush Institute of Marine Science, The University of Auckland, Auckland 1142, New Zealand ines.bartl@auckland.ac.nz Acknowledgments: This research was funded by the University of Auckland, the George Mason Centre for the Environment, the Ministry of Business, Innovation and Employment (UOAX2307), and the Oceans of Change Project. We thank Stefano Schenone, Sam Ladewig, Simon Thomas, Alessandra Valim, Li Yeoh, Keshav Chandran, Gemma Cunnington, Eliana Ferreti, Caitlin Grosvenor, Paul Caiger, Andrew Reid, Sophie Thomson, and Alanta Loucks for their assistance during the field campaigns, with special thanks to Brady Doak for both his professional support and his hospitality during the cruise. References: 1 Bianchi, T. S., Cui, X., Blair, et al. (2018). Organic Geochemistry, 115, 138–155. https://doi.org/10.1016/J.ORGGEOCHEM.2017.09.008 2 Ståhlberg, C., Bastviken, D., Svensson, B. H., & Rahm, L. (2006). Estuarine, Coastal and Shelf Science, 70(1–2), 317–325. https://doi.org/10.1016/J.ECSS.2006.06.022 3 Sala, E., Mayorga, J., Bradley, D., et al. (2021). Nature, 592. https://doi.org/10.1038/s41586-021-03371-z 4 Hiddink, J. G., van de Velde, S. J., McConnaughey, R. A., et al. (2023). Nature, 617(7960), E1–E2. https://doi.org/10.1038/s41586-023-06014-7 5 Epstein, G., Middelburg, J. J., Hawkins, J. P., et al. (2022). In Global Change Biology (Vol. 28, Issue 9, pp. 2875–2894). 
https://doi.org/10.1111/gcb.16105 6 Jørgensen, B. B., Wenzhöfer, F., Egger, M., & Glud, R. N. (2022). Earth-Science Reviews, 228, 103987. https://doi.org/10.1016/J.EARSCIREV.2022.103987 7 Sawyer, C. N., McCarty, P. L., & Parkin, G. F. (2003). Chemistry for Environmental Engineering and Science. 5th edn. McGraw-Hill Inc., New York, 1–752. The Resuspension Assay What’s next? Identifying vulnerability of Hauraki Gulf sediments to demersal fishing. Simple - based on the BOD method7. Rapid - short incubations of 3-5 hrs. Efficient - 7-8 sites per day within a 50 km radius. Affordable - inexpensive equipment. https://www.usgs.gov/media/images/conceptual-drawing-ocean-trawling
poster
Characterizing pathways of exposure for risk-based chemical prioritization Data streams Large public inventories, reporting chemical use and/or occurrence: • ACToR USEdb [3]: manually-curated pathways database (2015) • CompTox Chemicals Dashboard (CCD) chemical lists [4] (https://comptox.epa.gov/dashboard/chemical-lists) • Chemical Data Reporting 2016-2020 [5] • CPDat (Chemical & Products Database) [6-11] via ChemExpo (see poster P284/abstract 3795) • Chemical-Product Composition data • Functional Use categories • Chemical List Presence (curated tags) Data streams include 62,692 unique DSSTox Substance IDs Figure 2. Data streams for food-only chemicals. Left panels: Specific CCD lists. Upper/lower panels: Chemicals with multiple/single data streams. Many chemicals have info only from CCD lists of food volatiles or food contact substances. Case Study: Food-Only Chemicals For references or copy of poster: Scan QR code or https://github.com/carolinering/pathwaysSOT2023 Contact: ring.caroline@epa.gov Caroline L. Ring1; Paul M. Kruse1,2; John F. Wambaugh1; Kristin K. Isaacs1 1 US EPA Center for Computational Toxicology and Exposure (Research Triangle Park, NC, USA) 2 Oak Ridge Institute for Science and Education (Oak Ridge, TN, USA) Pathway classifications Figure 1. Heatmap of pathway classifications (left panel) and data stream sources (right panel) for 62,692 unique DTXSIDs. 
Overview Manual mapping of chemical exposure data to broad exposure pathway categories to categorize 62,692 chemicals by exposure pathway. Methods Scientific Impact & Potential Applications • Pathways can be used to inform rapid exposure models for use in risk-based chemical prioritization [1], e.g.: • as a training set for automated read-across of exposure pathways based on chemical structure • as features (predictor variables) for machine-learning models of exposure [2]
Pathway categories:
- Consumer (Fragrance, Personal care, Cosmetic)
- Pesticide (active, inert)
- Industrial
- Pharmaceutical
- Food/Dietary
- Other (Flame retardant, Tobacco, Colorant, Commercial, Agricultural [Fertilizer], Biocide [Herbicide, Antimicrobial])
A chemical may be positive, negative, or unclassified (no data, NA) for each pathway. Negative = ChemExpo Chemical List Presence tags "prohibited", "restricted", "not used", "non_food_use", "nondetect". Classifications from each data stream may disagree. Here: • Any positive = positive overall. • No positive & any negative = negative overall. • All NA = NA overall. 6583 chemicals are positive for the Food pathway and negative or NA for all other pathways. Case Study Summary • Transparent, granular sourcing for pathway categorization • The majority of food-only chemicals rely on data from 1-2 lists • Food-only chemical classes are consistent with source lists (e.g. flavors) Figure 3. ClassyFire classifications for food-only chemicals. Colored rings: # of food-only chemicals in each superclass (inner ring) and class (outer ring, labels). Most frequent classes: Prenol lipids; Fatty acyls; Organooxygen compounds. Visualized using ClassyFire [12] & treecompareR package (see poster P160/abstract 3675).
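The tie-breaking rules for disagreeing data streams (any positive wins; otherwise any negative wins; otherwise NA) can be sketched as a small function; this is an illustrative sketch, not the authors' implementation, and the string labels are assumptions:

```python
def aggregate(classifications):
    """Combine per-data-stream calls for one chemical/pathway pair.
    None represents NA (no data)."""
    calls = [c for c in classifications if c is not None]
    if "positive" in calls:     # any positive = positive overall
        return "positive"
    if "negative" in calls:     # no positive & any negative = negative overall
        return "negative"
    return None                 # all NA = NA overall

print(aggregate(["negative", "positive", None]))  # positive
print(aggregate([None, "negative"]))              # negative
print(aggregate([None, None]))                    # None
```

Applied per pathway, this yields the overall classification matrix visualized in the heatmap of Figure 1.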
poster
Understanding Lithium Extraction Through Social Life Cycle Assessment in the American Context: A Critical Review
Megan Jermak, Sarah Cribb, Joseph Bozeman III, PhD, CEM
Georgia Institute of Technology | Social Equity and Environmental Engineering Lab

In this critical review, lithium mining is examined in the context of its extraction phase, investigating how social and environmental outcomes in foreign mining projects may be echoed in new American mining endeavors. Industrial ecology tools, such as Social Life Cycle Assessments (SLCAs), can be used to draw comparisons of unit operations across varying countries and contexts. As part of the guidelines established by the United Nations Environment Programme (UNEP) for implementing SLCAs, two Life Cycle Inventories (LCIs) were conducted analyzing the social impacts of lithium mining operations. One LCI addressed lithium mines currently operating within South America (SA), while the other investigated data on Thacker Pass, a proposed mine within the United States (US).

Social Life Cycle Application
Social Life Cycle Assessment (SLCA) is a tool developed by the United Nations Environment Programme/Society for Environmental Toxicology and Chemistry (UNEP/SETAC) for the analysis of social impacts associated with a product's life cycle from cradle to grave (UNEP, 2009). Although SLCA has evolved significantly in recent years, limited standardization of procedures has restricted its use in the US. Meanwhile, the US is rapidly expanding its lithium mining industry to meet electric vehicle (EV) demand and stabilize lithium supply chains (Riofrancos, 2022). While US mining companies posit American mines as safer and more sustainable than those in the global South, storied negative social implications

References
Benoît, C. et al. (2021). Guidelines for Social Life Cycle Assessment of Products and Organizations 2020. Life Cycle Initiative, United Nations Environment Programme. www.lifecycleinitiative.org/wp-content/uploads/2021/01/Guidelines-for-Social-Life-Cycle-Assessment-of-Products-and-Organizations-2020-22.1.21sml.pdf
Benoît, C. et al. (2009). The Guidelines for Social Life Cycle Assessment of products: Just in time! The International Journal of Life Cycle Assessment, 15(2), 156–16. https://doi.org/10.1007/s11367-009-0147-8
Riofrancos, T. (2022). The Security–Sustainability Nexus: Lithium Onshoring in the Global North. Global Environmental Politics, 23(1), 1–22. https://doi.org/10.1162/glep_a_00668
Jaskula, B. W. (2024). Lithium. In U.S. Geological Survey mineral commodity summaries 2024. U.S. Geological Survey. https://pubs.usgs.gov/periodicals/mcs2024/mcs2024-lithium.pdf
Mancini, L., & Sala, S. (2018). Social impact assessment in the mining sector: Review and comparison of indicators frameworks. Resources Policy, 57, 98–111. https://doi.org/10.1016/j.resourpol.2018.02.002
Chhipi-Shrestha, G. K., Hewage, K., & Sadiq, R. (2014). "Socializing" sustainability: a critical review on current development status of social life cycle impact assessment method. Clean Technologies and Environmental Policy, 17(3), 579–596. https://doi.org/10.1007/s10098-014-0841-5
Grubert, E. (2016). Rigor in social life cycle assessment: improving the scientific grounding of SLCA. The International Journal of Life Cycle Assessment, 23(3), 481–491. https://doi.org/10.1007/s11367-016-1117-6
Ramirez, P. K. S., Petti, L., Haberland, N. T., & Ugaya, C. M. L. (2014). Subcategory assessment method for social life cycle assessment. Part 1: methodological framework. The International Journal of Life Cycle Assessment, 19(8), 1515–1523. https://doi.org/10.1007/s11367-014-0761-y
Bozeman, J. et al. (2022). A path toward systemic equity in life cycle assessment and decision-making: Standardizing sociodemographic data practices. Environmental Engineering Science, 39(9), 759–769. https://doi.org/10.1089/ees.2021.0375
poster
Capabilities, contents, and use cases of the Outer Planets Unified Search (OPUS) from the PDS RMS Node
Megan R.K. Seritan, M.S. Tiscareno, R.S. French, M.R. Showalter, M.K. Gordon, Y.-J. Chang, D.J. Stopp, J.N. Spitale, M.J.T. Mace, E.R. Simpson | SETI Institute, Mountain View, CA

What is PDS? The Planetary Data System (PDS) is NASA's scientifically guided archive for planetary science.
What is RMS? The Ring-Moon Systems (RMS) Node is a PDS Discipline Node located at the SETI Institute in Mountain View, CA.
What is OPUS? The Outer Planets Unified Search (OPUS) is a cross-mission, cross-instrument data search tool.

OPUS Support of Cross-Discipline, Cross-Instrument Search

What's New!
• Users can now search using detailed geometric metadata for all Galileo SSI images (Example 2 of RMS-generated metadata: geometric metadata).
• OPUS now includes New Horizons LORRI and MVIC data through the end of 2020.
• New previews and diagrams are available for browsing and downloading. These provide at-a-glance identification info for these non-image observations. Four different Voyager instruments are supported: PPS, UVS, RSS (occultation), and ISS (reflectance). Previews show the occultation profile itself, while diagrams show the source's path through the field of view.
• OPUS now supports PDS4 datasets, with metadata still consistent and accurate.

[Figure: example thumbnails for Cassini UVIS HDAC, HSP, and EUV; Cassini CIRS; Cassini ISS with different filters; Cassini VIMS; and New Horizons MVIC. OPUS provides rich, consistent metadata.]
Support for PDS4
• Related concepts with different names, such as "volume" (PDS3) and "bundle" (PDS4), are combined.
• Earth-based Uranus ring & atmosphere stellar occultations from 1977-2002 are the first PDS4-only data set in OPUS.

• Archival datasets, especially in PDS3, were not designed with cross-discipline, cross-instrument search as a goal. Because of this, the metadata in these archival datasets that would make cross-discipline, cross-instrument search possible is usually incomplete or missing, at no fault of the data provider.
• The major benefit of OPUS is that RMS creates consistent supplemental metadata fields to facilitate cross-mission, cross-discipline search. See the following examples of RMS-generated metadata that allow cross-discipline, cross-instrument search.

Example 1 of RMS-generated metadata: consistent wavelength information
• OPUS makes wavelength a standardized attribute. This feature addresses the issue of filters covering similar, but not identical, wavelength ranges and having various names. For example:
Hubble WFC3 F689M: 0.65-0.72 μm
Galileo SSI Red: 0.62-0.73 μm
Voyager ISS Orange: 0.59-0.64 μm
Cassini ISS Red: 0.57-0.72 μm
• When wavelength is a standardized attribute, users can find related observations that may not have been found when looking at archived metadata alone.

Example 2 of RMS-generated metadata: geometric metadata (OPUS geometric search categories)
• OPUS geometric metadata identifies every body and ring inside the field of view of an image. This metadata is derived by densely sampling the field of view and reporting the full range of values for each geometric parameter. These attributes are available in a consistent way for every supported dataset.

Example 3 of RMS-generated metadata: information-rich search results
• RMS-generated thumbnails are a visual type of metadata to make cross-discipline, cross-instrument search easier. Some sets of thumbnails use color-coding to indicate wavelength/filter type.

• A federated structure of six
discipline-specific nodes and two support nodes • Run by scientists who bridge the data community and the research community • Provides expert curation and peer review of datasets • The Node’s emphasis is on datasets in which rings, moons, and/or their primary bodies are viewed as a system • We focus on placing individual data products in context • In practice, this works out to: o Jupiter, Saturn, Uranus, Neptune, Pluto, &
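The wavelength standardization in Example 1 amounts to a range-overlap query; a small sketch (the filter-to-range pairing is read off the poster figure, so treat it as an assumption, and this is not the OPUS API):

```python
# Toy illustration of wavelength as a standardized search attribute.
filters = {
    "Hubble WFC3 F689M":  (0.65, 0.72),
    "Galileo SSI Red":    (0.62, 0.73),
    "Voyager ISS Orange": (0.59, 0.64),
    "Cassini ISS Red":    (0.57, 0.72),
}

def overlapping(lo, hi):
    """Return filters whose wavelength range (microns) overlaps [lo, hi]."""
    return sorted(name for name, (a, b) in filters.items() if a <= hi and lo <= b)

# Observations near 0.63-0.66 um turn up across four missions at once,
# even though no two filters share a name or an exact range.
print(overlapping(0.63, 0.66))
```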
poster
Management and Uncertainties of Severe Accidents (MUSA)
L.E. Herranz (CIEMAT), S. Paci (UNIPI)
MUSA was funded under the HORIZON 2020 EURATOM NFRP-2018 call on "Safety assessments to improve Accident Management strategies for Generation II and III reactors". On June 15th, 2018, MUSA obtained the NUGENIA label, which recognizes the excellence of the project.

Numerical tools are widely used to assess Nuclear Power Plant (NPP) behaviour during postulated Severe Accidents (SA). Considering the complexity of the processes taking place during a SA and the inherent nature of numerical codes (numerics, spatial discretization, etc.), it is mandatory to quantify their embedded uncertainties, taking into account the latest developments in methods and algorithms as well as the availability of computing resources. Mathematical tools for the quantification of code uncertainties and sensitivities have been under development for many years, with a huge accumulated experience in performing Uncertainty Quantification (UQ) with Best Estimate (BE) system codes, partly because of new requirements in the regulations of some countries as part of NPP licensing processes. This is so far not the case for SA codes, and only a few investigations have focused on SA and UQ. MUSA has an "innovative research agenda" to move beyond the state of the art in the predictive capability of SA analysis codes by combining them with the best available or improved UQ tools. By doing so, not only will the prediction of the timing of safety-barrier failure and of the radiological Source Term (ST) be possible, but the quantification of the uncertainty bands of selected analysis results, considering any relevant source of uncertainty, will also be provided.
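The kind of uncertainty band MUSA targets is typically obtained by Monte Carlo propagation of input uncertainties through the analysis code; a toy sketch with a stand-in scalar model (not a severe-accident code, and the input ranges are invented):

```python
import random

def toy_model(pressure, leak_rate):
    """Stand-in for a severe-accident code output (e.g. a released source term)."""
    return pressure * leak_rate

random.seed(1)
# Sample the uncertain inputs from assumed ranges, run the model each time,
# and report an empirical uncertainty band on the output.
outputs = sorted(
    toy_model(random.uniform(0.9, 1.1), random.uniform(0.01, 0.03))
    for _ in range(10_000)
)
lo, hi = outputs[len(outputs) // 20], outputs[-len(outputs) // 20]  # ~5th/95th pct
print(f"90% uncertainty band: [{lo:.4f}, {hi:.4f}]")
```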
Objectives of the MUSA project
• Assess the capability of SA codes when modelling reactor/SFP accident scenarios of GEN II and GEN III designs
• Identification of the UQ methodologies to be employed, with emphasis on the effect of both existing and innovative SAM measures on the accident progression, particularly those related to ST mitigation
• Determination of the state-of-the-art prediction capability of SA codes regarding the ST that may potentially be released to the external environment, and quantification of the associated code uncertainties applied to SA sequences in NPPs and SFPs

MUSA Work Packages
WP1: MUSA COordination (MUCO), coordinated by CIEMAT
WP2: Identification & Quantification of Uncertainty Sources (IQUS), coordinated by GRS
WP3: Review of Uncertainty Quantification Methods (RUQM), coordinated by KIT
WP4: Application of Uncertainty Quantification Methods against Integral Experiments (AUQMIE), coordinated by ENEA
WP5: Uncertainty Quantification in the Analysis and Management of Reactor Accidents (UQAMRA), coordinated by JRC
WP6: Uncertainty Quantification and Innovative Management of SFP Accidents (IMSFP), coordinated by IRSN
WP7: COmmunication & REsults DISsemination (COREDIS), coordinated by UNIPI

Dissemination of Knowledge
• Special attention to knowledge transfer towards young researchers and Masters/PhD students
• Production of public learning modules to be published on the open MUSA website.
• A mobility programme under which university students and young researchers go on internships
• Production of a lecture on "Uncertainty Quantification in Severe Accident Analyses" for the various international courses that might be given on Severe Accidents and/or on "uncertainties"
• MUSA education activities will be carried out in close collaboration with the ENEN Network

Perspectives
MUSA will mean a better exploitation of research previously performed within the EU framework. Over the years, reliable and experienced teams of modellers and analysts have been built up, and MUSA is a unique opportunity to achieve real feedback among them. In addition, MUSA encourages cooperation in research, innovation, and the training of the younger generation. Finally, MUSA will be an open-results project, given its importance for forthcoming SA analyses.

CONTACT: Pr
poster
Urban Growth: Effects on urban green spaces and implications for planning and policy
Marta Vallejo, MACS, Heriot-Watt University. Supervisor: David W. Corne. E-mail: mv59@hw.ac.uk
PGR Student Poster Day, 23rd August 2012

How to apply a Genetic Algorithm to a Sequential Planning Problem

PROBLEM MODEL: Agent-Based Modelling and Cellular Automata
Description: A Cellular Automaton models the city and its surroundings. Each cell has a random ecological value, and city growth has a negative effect on these values. The city is populated with a set of agents which represent families, modelled using an agent-based approach.

Objective: Distribute a set of green areas throughout the city and achieve the maximum satisfaction of the city's inhabitants. An agent is satisfied if it lives close enough to one of these areas.

Metric: POPULATION SATISFACTION

PROCEDURE: The GA should deal with future uncertainty, and therefore needs the support of external tools (statistical data).

RESULTS: Experiments are still in progress. Preliminary data indicate an improvement in population satisfaction using the combined GA-statistical approach.

Next steps:
· Include new metrics, such as preserving the most valuable ecological areas
· Increase the complexity of the model to get closer to real-world scenarios
· Validate the strategy by comparison with other approaches commonly used for sequential planning problems, such as Reinforcement Learning.
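The satisfaction metric above reduces to a distance threshold per agent; a minimal sketch with made-up coordinates and threshold:

```python
import math

def satisfied(agent, green_areas, max_dist=2.0):
    """An agent is satisfied if some green area lies within max_dist cells."""
    return any(math.dist(agent, g) <= max_dist for g in green_areas)

def population_satisfaction(agents, green_areas, max_dist=2.0):
    """Fraction of satisfied agents -- the fitness a GA would maximize here."""
    return sum(satisfied(a, green_areas, max_dist) for a in agents) / len(agents)

agents = [(0, 0), (1, 1), (5, 5), (9, 9)]
greens = [(1, 0), (8, 9)]
print(population_satisfaction(agents, greens))  # 3 of 4 agents are satisfied
```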
poster
Innovative Research for a Sustainable Future | www.epa.gov/research
Product Deformulation to Identify Exposure Pathways for ToxCast Chemicals
John Wambaugh1, Alice Yau2, Katherine Phillips3,4, Kristin A. Favela2, Chantel Nicolas1,4, Derya Biryol3,4, Christopher Grulke1,5, Ann M. Richard1, Paul Price3, Antony Williams1, Kristin Isaacs3, Russell Thomas1
National Center for Computational Toxicology1 and National Exposure Research Laboratory3, U.S. Environmental Protection Agency, Office of Research and Development; Southwest Research Institute2; Oak Ridge Institute for Science and Education4; Lockheed Martin5
Contact: John Wambaugh | wambaugh.john@epa.gov | 919-451-5400 | ORCID: 0000-0002-4024-534X

Abstract
High throughput screening (HTS) data that characterize chemically induced biological activity have been generated for thousands of chemicals by the US interagency Tox21 and the US EPA ToxCast programs. In many cases there are no data available for comparing bioactivity from HTS with relevant human exposures. The EPA's ExpoCast program is developing high-throughput approaches to generate the needed exposure estimates using existing databases and new, high-throughput measurements. The exposure pathway (i.e., the route of a chemical from manufacture to human intake) significantly impacts the level of exposure. The presence, concentration, and formulation of chemicals in consumer products and articles of commerce (e.g., clothing) can therefore provide critical information for estimating risk. We have found that there are only limited data available on the chemical constituents (e.g., flame retardants, plasticizers) within most articles of commerce. Furthermore, the presence of some chemicals in otherwise well characterized products may be due to product packaging.
We are analyzing sample consumer products using two-dimensional gas chromatography time-of-flight mass spectrometry (GCxGC-TOF/MS), which is suited for forensic investigation of chemicals in complex matrices (including toys, cleaners, and food). In parallel, we are working to create a reference library of retention times and spectral information for the entire Tox21 chemical library. In an examination of five plastic children's toys, as many as 114 and as few as 56 chemicals were identified in each toy, with between 0 and 40 unidentified chemicals in each. The putative endocrine disruptor Bisphenol A (BPA) was identified as present in a product marked "BPA free". Information on the chemical constituents of products, while only a prerequisite to actual exposure, provides heuristics for estimating likely human exposure pathways for thousands of chemicals.

Methods
Example: chromatograms for baby toys. [Figure: GCxGC-TOF/MS chromatograms for Products 1-5 at 100x, 10x, and 1x DCM dilution; axes are first- and second-dimension retention time (s). Legend: is = internal standard, s = surrogate; solvent tail and column bleed regions are labeled.]
• Southwest Research Institute conducted analytical chemistry screening for large numbers of chemicals in consumer products and articles of commerce
• Five sample products were
arbitrarily selected from each of twenty different categories
• Products were analyzed using two-dimensional gas chromatography time-of-flight mass spectrometry
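Matching measured spectra against a reference library, as described above, is commonly scored with cosine similarity between intensity vectors; a toy sketch with invented spectra (illustrative only, not the SwRI workflow or real spectral data):

```python
import math

def cosine(a, b):
    """Cosine similarity between two intensity vectors on a shared m/z grid."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical reference library: chemical name -> intensity vector
library = {
    "bisphenol A": [0.1, 0.9, 0.3, 0.0],
    "DEHP":        [0.8, 0.1, 0.0, 0.5],
}

def best_match(measured):
    """Return the library entry most similar to the measured spectrum."""
    return max(library, key=lambda name: cosine(measured, library[name]))

print(best_match([0.1, 0.8, 0.4, 0.0]))
```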
poster
Kelle Cruz (CUNY, NY, USA), William J. Cooper (U. of Hertfordshire, UK), David R. Rodriguez (STScI, MD, USA), Sherelyn Alejandro (CUNY, NY, USA), Ella Hort (Pomona College, CA, USA), Clémence Fontanive (U. of Bern, Switz.), Niall Whiteford (American Museum of Natural History, NY, USA) The SIMPLE Archive project — the Substellar and IMaged PLanet Explorer Archive of Complex Objects — is an endeavor to create a collaboratively maintained database of low-mass stars, brown dwarfs, and directly-imaged planetary mass objects using GitHub, Python, and SQL. Contribute The vision for SIMPLE is that anyone can contribute data using a typical GitHub workflow. The chart below illustrates the workflow from inserting new data to a local database instance via Python, review of database modifications in GitHub, and then publishing them to the website. Please join the #simple-db-dev channel in the Astropy Slack workspace for data contribution and development discussion. Develop •Work on local instance •Export to JSON •Commit to branch Review •Issue pull request •Run validation tests •Merge changes Publish •Run GitHub actions •Load DB from JSON •Deploy to website The SIMPLE Archive http://simple-bd-archive.org/ A collaboratively-curated database and website of low mass stars, brown dwarfs, and exoplanets Holdings The SIMPLE Archive currently contains over 3,000 sources, including directly imaged planets and UCDs with spectral types M7 and later. SIMPLE-AstroDB/SIMPLE-db/ SIMPLE-AstroDB/SIMPLE-web/ #simple-db-dev on astropy.slack.com/ http://joinslack.astropy.org/ Explore and Search • Website: search by object name, coordinates, or full text. • Python: SQL searches using astrodbkit2. This material is based upon work supported by the National Science Foundation under Grant No. AST-1909837. • DBBrowser: Browse data using a GUI and/or SQL. 
For these 3,000 sources, it contains: • 11,638 photometry measurements • 4,897 proper motion measurements • 2,628 parallax measurements • 1,404 spectra
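The Python/SQL search path can be sketched with a throwaway SQLite table; the table layout and object names below are made up for illustration and are not the SIMPLE schema or the astrodbkit2 API:

```python
import sqlite3

# Throwaway in-memory stand-in for a sources table; columns are assumptions.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sources (name TEXT, spectral_type TEXT, ra REAL, dec REAL)")
db.executemany(
    "INSERT INTO sources VALUES (?, ?, ?, ?)",
    [
        ("2MASS J0001", "M8", 0.25, -10.1),
        ("WISE J0002", "T5", 0.51, 12.3),
        ("SIMP J0003", "L2", 0.77, -3.4),
    ],
)

# A name search combined with a spectral-type filter, the kind of query
# the archive supports via its website and Python interfaces.
rows = db.execute(
    "SELECT name FROM sources WHERE spectral_type LIKE 'M%' OR name LIKE ?",
    ("%SIMP%",),
).fetchall()
print(sorted(r[0] for r in rows))
```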
poster
Chunked, Compressed N-Dimensional Arrays
Josh Moore, German BioImaging e.V. (NFDI4BIOIMAGE), @notjustmoore, josh@openmicroscopy.org
https://zarr.dev | zarr-developers | @zarr_dev

Chunked storage allows parallelized access and computation; compression is built in. Application domains include bioimaging, geospatial, data science, and distributed and cloud storage.

Broad Compatibility, based on a Common Specification

Governance process
The Zarr specification and implementations are governed by an open process. Specification changes are suggested via Zarr enhancement proposals (ZEP). Those are reviewed and voted on by the Zarr Implementors Council (ZIC) and Zarr Steering Council (ZSC). https://zarr.dev/zeps

Usage Examples
• Atmospheric conditions from the Coupled Model Intercomparison Project Phase 6 (CMIP6)
• Example of a bioimaging dataset streamed with Zarr: confocal imaging of mouse blastocysts; skin layers, dermis and epidermis. Compatible with Neuroglancer, Napari, and webKnossos. Interactive notebook: https://bit.ly/3QsmI1M

Funding provided by [logo]. Zarr is a Sponsored Project of [logo].
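The chunking-plus-compression idea can be sketched with a stdlib-only toy; this illustrates the concept, not the Zarr format or API:

```python
import zlib

CHUNK = 4  # elements per chunk

def write_chunks(values):
    """Split a 1-D array into fixed-size chunks and compress each independently."""
    return [
        zlib.compress(bytes(values[i:i + CHUNK]))
        for i in range(0, len(values), CHUNK)
    ]

def read_chunk(store, index):
    """Decompress only the requested chunk -- the others are never touched,
    which is what makes parallel and partial (cloud) reads cheap."""
    return list(zlib.decompress(store[index]))

store = write_chunks(list(range(12)))
print(read_chunk(store, 1))  # -> [4, 5, 6, 7]
```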
poster
Virtual Hydrological Laboratories: Developing the next generation of conceptual models to support decision-making under change
Mark Thyer (a), Hoshin Gupta (b), Seth Westra (a), David McInerney (a), Holger Maier (a), Dmitri Kavetski (a), Tony Jakeman (c), Barry Croke (c), Craig Simmons (d), Margaret Shanafield (d), Daniel Partington (d), Christina Tague (e)
(a) School of Civil, Environmental and Mining Engineering, University of Adelaide, SA, Australia
(b) Hydrology and Atmospheric Sciences, University of Arizona, Arizona, USA
(c) Fenner School of Environment, Australian National University, Canberra, ACT, Australia
(d) College of Science and Engineering, Flinders University, SA, Australia
(e) Bren School of Environmental Science & Management, UC Santa Barbara, CA, USA
Lead Author Email: mark.thyer@adelaide.edu.au | Presenting Author Email: Tony.Jakeman@anu.edu.au
Session J4: Hydrologic non-stationarity and extrapolating models to predict the future

1. Conceptual models are commonly used in practice, but may provide poor predictions of future hydrological change
• Conceptual models (see Figure 1) are the most common type of surface hydrological model used for decision-making (Peel and McMahon, 2020), owing to their suitable predictive performance, ease of use, and computational speed, which facilitates scenario, sensitivity, and uncertainty analysis.
• They have significant limitations in their ability to capture the wide range of hydrological changes (see Figure 2) (Partington et al., 2022; Fowler et al., 2022).
Figure 1: 'End-member' types on the spectrum of hydrological models. Figure 2: Range of drivers of hydrological change.

2. Developing the next generation of conceptual models: Hydrological Lines of Evidence (HLE)
Robustness in terms of (1) predictive ability: accurate predictions of observations; and (2) hydrological fidelity: capture of key processes under change.

3. Emerging HLE: Virtual Hydrological Laboratory

4.
Evaluation of hydrological lines of evidence to provide conceptual models robust to future change
Key question: can the HLE evaluate "robustness" for
1. Catchments of practically relevant size (100s to 10,000s of km2)
2. Catchments with a diverse range of properties
3. A wide range of processes/predictions (not just streamflow, but also ET, soil moisture, etc.)
4. Real observed data
5. Robustness to "significant" change; see Table 1 (below) for different approaches to evaluating change

5. Unique Strengths of the Virtual Hydrological Laboratory to Accelerate Development of Conceptual Models
1. Conceptual model development informed by a wide range of processes. In a virtual catchment all hydrological components can be observed (albeit virtually), and conceptual models can be subjected to a higher level of scrutiny than current observational datasets allow.
2. Enables the use of controlled hydrological change experiments. The ability to systematically change climate and land cover/use characteristics provides the opportunity to undertake hydrological change experiments and proactively evaluate CMs on a wide range of future hydrological change scenarios that are outside the envelope of observations.
3. Integrates multiple hydrological lines of evidence. A VHL provides the opportunity to integrate multiple lines of evidence (Figure 4), thereby combining their respective strengths and compensating for their weaknesses, to build the next generation of conceptual models.

6. Call to Action
Predictive skill in hydrology is confronted by rapid and multi-faceted change. Our current suite of conceptual models has limited ability to deal with change. New thinking will be needed as hydrological systems are pushed outside the envelope of historical experience. "Virtual Hydrological Laboratories" provide a unique opportunity to:
1. Undertake "controlled experiments" to evaluate robustness to future change
2. Complement (and integrate) existing "Hydrological Lines of Evidence"
3.
Lead to a new generation of conceptual models that (i) are robust to change, (ii) embody the best science, and (iii) serve as a workhorse for practical decisio
poster
Taking Time: Carer Participation and Quality of Life
Presenter: Dr Dawn C. Rose
Authors: Lia Prado (MSc), Rebecca Hadley (PhD), University of Hertfordshire, UK; Dawn C. Rose (PhD), Lucerne University of Applied Sciences and Arts, Switzerland

Parkinson's is an incurable brain condition that affects people's movement and mood. As the disease progresses, more care is required, impacting the caregiver's quality of life.
Research question: Does participating in activities together improve quality of life for caregivers of people with Parkinson's?
Mixed methods: focus group (n=4), interviews (n=8) and survey (n=75, 16 male).

Results
• 63% of Parkinson's caregivers jointly participated in a large variety of activities.
• The impact seemed gendered: male caregivers reported better levels of personal care, and less anxiety, depression and stress in general. Significantly higher levels of stress were revealed for females not participating in activities with the person they care for.
• The type of activity did not seem to impact quality of life.

Reasons against: working; taking time for self; seeing other friends; prefers other activities; promoting independence (both).
Reasons for: exercise; socialising; together time; not enough time; empathetic safe space.

Acknowledgments: The Parkinson's Advisory Team (University of Hertfordshire), and the caregivers of people with Parkinson's who took part in this study.
Reference: Prado, L., Hadley, R., & Rose, D. (2020). Taking time: a mixed methods study of Parkinson's disease caregiver participation in activities in relation to their wellbeing. Parkinson's Disease, 2020. https://doi.org/10.1155/2020/7370810

Qualitative Evidence
"I just need to think about myself a bit and not always be worrying, so now when she goes out I feel that I can just relax… So to just have a morning that I don't have to think about that it is actually quite nice.
It sounds very selfish."
"I think the interesting thing is that you lose a lot of friends, because actually they don't understand it at all."
"We both go to Parkinson's UK. I also committed to the support group because we got Parkinson's. It is something that we both need to cope with and we need people who understand what we are going through."

[Charts: Types of Activities and % Participation (physical; social; home-based entertainment; daily activities) and Factors of Wellbeing and % Quality of Life Compromised (personal and social activities; anxiety and depression; self-care; stress), each compared for male vs. female carers.]
poster
Latest Results of Reactor Antineutrino Flux and Spectrum at Daya Bay
Liang Zhan, Institute of High Energy Physics, Beijing, on behalf of the Daya Bay Collaboration
XXVIII International Conference on Neutrino Physics and Astrophysics, Heidelberg, 4-9 June 2018

Abstract
The latest measurement of the reactor antineutrino flux and energy spectrum by the Daya Bay reactor neutrino experiment is reported. The antineutrinos were generated by six 2.9 GWth nuclear reactors and detected by eight antineutrino detectors deployed in two near (500 m and 600 m flux-weighted baselines) and one far (1600 m flux-weighted baseline) underground experimental halls. An improvement of the neutron detection efficiency determination was performed using a new neutron calibration campaign and a dedicated data-simulation comparison. With a 1230-day data set, the IBD yield was measured to be (5.91 ± 0.09) × 10⁻⁴³ cm²/fission. The ratio of the measured to predicted reactor antineutrino yield is 0.952 ± 0.014 (exp.) ± 0.023 (model). The comparison of the measured IBD positron energy spectrum with the predictions is also reported with a previous 621-day data set. A reactor antineutrino spectrum weighted by the IBD cross section is extracted for model-independent predictions.

Reactor Antineutrinos at Daya Bay
• Six reactors with a total thermal power of 17.4 GW
• Antineutrino flux produced by fissions of the isotopes 235U, 238U, 239Pu, and 241Pu
• The reactor operator provides the generated thermal power (Wth) and fission fractions (fi/F)
• ei: energy release per fission for isotope i; Si(E): antineutrino energy spectrum for each isotope; αi: fission fraction

Antineutrino Detection at Daya Bay
• Antineutrinos are detected via inverse beta decay (IBD) in Gd-loaded liquid scintillator
• Prompt e+ signal: 1-10 MeV, determined by the antineutrino energy
• Delayed neutron capture signal: 8 MeV on Gd
• Time correlation: capture time is about 28 μs in 0.1% Gd-LS
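With Wth, αi, ei, and Si(E) as defined above, these quantities combine in the standard textbook form (the poster lists the symbols; the equation itself is reconstructed here):

```latex
% Reactor antineutrino spectrum from the quantities above:
% W_th     = reactor thermal power
% \alpha_i = fission fraction of isotope i (235U, 238U, 239Pu, 241Pu)
% e_i      = energy released per fission of isotope i
% S_i(E)   = antineutrino energy spectrum per fission of isotope i
\Phi(E_{\nu}) \;=\; \frac{W_{\mathrm{th}}}{\sum_{i}\alpha_{i}\,e_{i}}
                \;\sum_{i}\alpha_{i}\,S_{i}(E_{\nu})
```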
Reactor Antineutrino Detection
S: antineutrino flux from the reactors. σ: cross section of inverse beta decay. L: baseline, surveyed with a precision of 28 mm. Psur: antineutrino survival probability, including the fit parameter sin²2θ13. Np: number of target protons, determined by the target mass. ε: detection efficiency.
- Antineutrino spectrum of IBD reactions: an improvement on the determination of the neutron detection efficiency was performed compared with the previous publication.

Reactor Antineutrino Flux Measurement
- The determination of the neutron detection efficiency was improved using a new neutron calibration campaign and a dedicated data-simulation comparison.
- An extensive neutron calibration campaign was carried out at Daya Bay at the end of 2016. Two types of neutron sources (241Am-13C and 241Am-9Be) were deployed vertically along three calibration axes, and data in 59 different combinations of sources and locations were collected.
- A benchmark quantity F was defined on the neutron capture energy spectrum.
- Good agreement between calibration data and simulation on the energy spectrum and F.
- The neutron detection efficiency was determined to be (81.48% ± 0.60%), with a reduction in the uncertainty by 56%.
- A new measurement of the reactor antineutrino yield was performed using the 1230-day data set, which has average fission fractions of (0.571, 0.076, 0.299, 0.054) for (235U, 238U, 239Pu, 241Pu).

Reactor Antineutrino Spectrum Measurement
- Measured IBD prompt energy spectrum vs. prediction after normalization.
- Obvious discrepancy between data and prediction; the significance was evaluated: 3σ for the whole spectrum, and 4.4σ for a 2-MeV energy window around 5 MeV.
- Uncertainty of the prompt energy spectrum.
- An antineutrino spectrum of IBD reactions is provided as an input for reactor neutrino experiments after unfolding the IBD prompt energy spectrum to antineutrino energy.
- Consistent results by two unfolding methods: singular value decomposition (SVD) and Bayesian iteration Reactors: D1, D2, L1, L2, L3, and L4 Detectors: AD1, AD2, …, AD8 - A variety of simulati
poster
I-RIM Conference 2019, October 18-20, Rome, Italy. ISBN: 9788894580501

Adaptive admittance control for a safe and efficient human-robot interaction
Federica Ferraguti, Chiara Talignani Landi, Lorenzo Sabattini, Marcello Bonfè, Cesare Fantuzzi and Cristian Secchi

Abstract— The possibility of adapting online the way a robot interacts with the environment is becoming more and more important. Nevertheless, stability problems arise when the environment (e.g. the human) the robot is interacting with gets too stiff. In this work, we present a strategy for handling the stability issues related to a change of stiffness of the human arm during the interaction with an admittance-controlled robot.

I. INTRODUCTION
One of the most revolutionary and challenging features of the new generation of robots is physical human-robot interaction (pHRI). In pHRI tasks, robots are designed to coexist and cooperate with humans in applications such as assisted industrial manipulation, collaborative assembly, domestic work, entertainment, rehabilitation or medical applications. In these contexts, due to the desired coexistence of robotic systems and humans in the same workspace, the main concerns are related to safety and dependability. A widely used approach consists in implementing interaction control strategies that guarantee a compliant behavior of the robot. In particular, admittance control is typically utilized for controlling industrial robots, which are generally characterized by a stiff and non-backdrivable mechanical structure [1]. For example, admittance control has been used to implement robot manual guidance in [2] and [3], by means of "walk-through programming", where the human operator becomes the teacher that physically guides the robot throughout the desired trajectory. When using admittance-controlled robots, instability can arise when interacting with stiff environments.
Since humans are dynamic systems characterized by a time-varying impedance, they can behave in a stiff way and, consequently, give rise to instability when interacting with admittance-controlled robots. Instability induces, among other undesired effects, a deviation of the robot from the desired admittance behavior. Furthermore, it produces high-amplitude oscillations of the end-effector, undermining the user's safety during the interaction. The deviations have to first be promptly detected and then canceled (or reduced) to restore the stability of the system. The adaptation of the parameters of the admittance control is a common strategy for recovering the stability of the interaction, as shown, e.g., in [4], [5] and [6]. In this work we show a novel strategy for detecting the rise of oscillations during the interaction between a human and an admittance-controlled robot, and a parametric adaptation of the admittance for restoring a stable behavior. The proposed adaptation keeps the adaptive dynamics similar to the nominal one, in order to avoid unbalancing effects and to increase the usability of the system. Preliminary results have been presented in [7] and [8], while in [9] a method for automatically setting the detection threshold using a thorough statistical analysis has been introduced. Moreover, a weighted energy allocation strategy has been proposed in order to consider translations and rotations separately. In this work we present the overall framework and we show the experimental validation of the control architecture.

F. Ferraguti, C. Talignani Landi, L. Sabattini, C. Fantuzzi and C. Secchi are with the Department of Sciences and Methods for Engineering (DISMI), University of Modena and Reggio Emilia, Italy, {name.surname}@unimore.it. M. Bonfè is with the Engineering Department, University of Ferrara, Italy, marcello.bonfe@unife.it.

II.
ADAPTIVE ADMITTANCE CONTROL The goal of the admittance control is to force the robot to behave in a desired way when interacting with the en- vironment. In this paper, we address the case of a robotic manipulator manuall
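The nominal admittance behavior and the idea of parametric adaptation can be sketched in a few lines. This is a minimal 1-DoF illustration, not the authors' controller: the oscillation detector (counting velocity sign changes) and the adaptation law (bounded damping scaling) are hypothetical placeholders for the energy-based detection and adaptation of [7]-[9].

```python
# Minimal 1-DoF admittance control sketch (illustrative only): the robot
# is rendered as a virtual mass-damper, and the damping is increased when
# a toy oscillation index exceeds a hypothetical threshold.

def admittance_step(x, v, f_ext, m, d, dt):
    """Integrate M*a + D*v = f_ext for one step (semi-implicit Euler)."""
    a = (f_ext - d * v) / m
    v = v + a * dt
    x = x + v * dt
    return x, v

def adapt_damping(d, recent_velocities, d_max=50.0, gain=1.5, threshold=4):
    """Toy oscillation detector: count velocity sign changes in a window.
    If too many, scale the damping up (bounded) to restore stability."""
    sign_changes = sum(
        1 for a, b in zip(recent_velocities, recent_velocities[1:])
        if a * b < 0
    )
    if sign_changes >= threshold:
        d = min(d * gain, d_max)
    return d

# Simulate a constant human push with nominal parameters: the velocity
# converges smoothly toward f_ext / d, so no adaptation is triggered.
x, v, d = 0.0, 0.0, 10.0
history = []
for _ in range(1000):
    x, v = admittance_step(x, v, f_ext=5.0, m=2.0, d=d, dt=0.001)
    history.append(v)
    d = adapt_damping(d, history[-20:])
```

With a stiff, oscillatory interaction the sign-change count would rise and the damping would grow toward `d_max`, mimicking the stabilizing effect of the parametric adaptation.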
poster
GALAXY CLUSTERS IN THE CFHTLS
Sarron F. (Institut d'Astrophysique de Paris, IAP), Martinet N. (Argelander-Institut für Astronomie, Bonn), Adami C. (Laboratoire d'Astrophysique de Marseille, LAM), Durret F. (Institut d'Astrophysique de Paris, IAP)
Florian Sarron, PhD student, IAP (Paris)
Galaxy Evolution Across Time, 12-16 June 2017

NEXT: BREAKING THE REDSHIFT-MASS DEGENERACY
There is a degeneracy between mass and redshift: only the most massive clusters are detected at high redshift. Figure 6 shows the fractions of early-type (ETG) and late-type (LTG) galaxies as a function of redshift for different cluster masses. The evolution in redshift for the full mass range is dominated by low-mass clusters; the size of our sample enables us to study the dependence on both parameters simultaneously.
[Figure 6 - Fraction of ETGs and LTGs: galaxy type proportion vs. mean cluster redshift, in four panels: all M200; 10^13.5 < M200 < 10^14 Msun; 10^14 < M200 < 10^14.5 Msun; M200 > 10^14.5 Msun.]

There is some disagreement in the literature concerning the evolution with redshift of the galaxy luminosity function (GLF) in galaxy clusters. It is unclear whether the red sequence (RS) is enriched by efficient quenching of blue late-type galaxies inside the cluster from z~1, or whether this RS is built at higher redshift. To explore this issue, we apply our new version of the Adami and MAzure Cluster FInder (AMACFI) to the Canada France Hawaii Telescope Legacy Survey (CFHTLS) W1 field.

[Figure 3 - Evolution of the Schechter fit parameters for ETGs: α and M* as a function of cluster redshift zclus and of log(M200/Msun), compared with passive evolution (BC03).]

RESULT: The faint end of the ETG GLFs shows a mild decrease with redshift, at 4.8σ, meaning that galaxies in clusters are efficiently quenched from z ~ 0.7. From the mass dependence we can infer that quenching is more efficient in denser environments.
[Figure 5 - Same as Figure 4 for stacks of cluster GLFs in bins of redshift. Best-fit ETG Schechter parameters per stack:]
  142 clusters at <z> = 0.22:  α_early = -0.79 ± 0.02,  M*_early = -22.64 ± 0.07
  264 clusters at <z> = 0.30:  α_early = -0.80 ± 0.03,  M*_early = -22.82 ± 0.06
  327 clusters at <z> = 0.40:  α_early = -0.74 ± 0.03,  M*_early = -22.78 ± 0.05
  366 clusters at <z> = 0.49:  α_early = -0.61 ± 0.03,  M*_early = -22.63 ± 0.05
  370 clusters at <z> = 0.60:  α_early = -0.55 ± 0.04,  M*_early = -22.60 ± 0.05
  344 clusters at <z> = 0.68:  α_early = -0.58 ± 0.05,  M*_early = -22.69 ± 0.06

[Figure 4 - Stacks of cluster GLFs in bins of mass (galaxies/deg² vs. Mabs(i)). The best-fit Schechter parameters of the ETGs are indicated; the vertical line is the 90% completeness limit of the sample, corresponding to i = 23 in apparent magnitude:]
  453 clusters at <M200> = 6.74e13 Msun:  α_early = -0.62 ± 0.06,  M*_early = -22.76 ± 0.07
  622 clusters at <M200> = 1.71e14 Msun:  α_early = -0.87 ± 0.03,  M*_early = -22.89 ± 0.05
   91 clusters at <M200> = 4.50e14 Msun:  α_early = -0.90 ± 0.07,  M*_early = -22.79 ± 0.10
Composite GLFs of cluster candidates

[Selection-function figures: completeness vs. halo redshift and vs. M200,halo for M200 > 10^14 Msun; purity vs. photometric redshift for M200 > 10^13 Msun, for detection thresholds S/N > 3, 4 and 5; and the recovered log(M200/Msun) vs. zphot distribution for S/N > 4.]

We apply AMACFI to
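For readers unfamiliar with the fits quoted above, the Schechter form used for the GLFs can be evaluated directly. This is a minimal sketch in magnitude units; the normalisation phi* is arbitrary, and the parameter values are taken from the low-redshift ETG stack purely as an example.

```python
import math

def schechter_mag(M, phi_star, M_star, alpha):
    """Schechter luminosity function in absolute magnitudes:
    phi(M) dM = 0.4 ln(10) phi* x^(alpha+1) exp(-x) dM,
    with x = 10^(0.4 (M* - M))."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * math.log(10) * phi_star * x ** (alpha + 1) * math.exp(-x)

# Example with the stacked low-z ETG fit quoted on the poster
# (alpha = -0.79, M* = -22.64); phi_star = 100 is arbitrary here.
faint = schechter_mag(-18.0, phi_star=100.0, M_star=-22.64, alpha=-0.79)
bright = schechter_mag(-25.0, phi_star=100.0, M_star=-22.64, alpha=-0.79)
```

With alpha + 1 > 0 the counts decline toward fainter magnitudes (the "mild decrease of the faint end" being tracked across redshift), while the exponential term cuts the counts off brightward of M*.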
poster
Databases and web-based software tools for HR-pyPopStar models and the MEGASTAR library
Mollá Lorente, M.1; García-Vargas, M.2; Carrasco Licea, E.3; Mujica Álvarez, E.2 & Millán-Irigoyen, I.1
1CIEMAT (Madrid, Spain), 2FRACTAL S.L.N.E. (Madrid, Spain), 3INAOE (Puebla, Mexico)

MEGASTAR is a stellar spectral atlas for MEGARA (Multi Espectrógrafo en GTC de Alta Resolución para Astronomía). MEGARA is an optical (3650-9750 Å), fiber-fed, medium-high spectral resolution (R = 6000, 12000, 20000) instrument in operation on the GTC 10.4 m telescope. The scientific exploitation of MEGARA demanded a stellar-spectra library to interpret galaxy data and to estimate the contribution of the stellar populations. The MEGASTAR atlas is focused on the highest-resolution setups, HR-R and HR-I, and already contains almost 1000 stars (two spectra per star) thanks to the filler-type Open Time obtained so far over 7 semesters. We have developed a web-based tool and a database that support work within the project (for the MEGASTAR team) and make the observations and products available to the scientific community.

MEGASTAR WebApp functionalities:
§ Visualization and search of the sources involved in the project (linked to the SIMBAD database).
§ Visualization, search, download and plotting (via a Java tool) of the data product files of the observations.
§ Plotting of the stars grouped by three ranges of [Fe/H]: all stars of the library, the observed stars of the library, or user stars uploaded from a file.
§ Download of the releases of the reduced star spectra.
§ Download of the statistics files: observation statistics by VPH and by stellar physical parameters.

MEGASTAR WebApp technologies:
§ Web servers: Apache and Tomcat
§ Database: MySQL
§ Languages: Java, Java Servlets, HTML and JavaScript
§ Libraries: JFreeChart, Plotly and nom.tam FITS

HR-pyPopStar WebApp functionalities:
§ Detailed description of the HR-pyPopStar models.
§ Model downloads.
§ Plotting of spectra from the HR-pyPopStar library.
§ Plotting of synthetic properties resulting from the SSP SEDs.
§ Combination of spectra from the HR-pyPopStar library.

HR-pyPopStar WebApp technologies:
§ Web servers: Apache and Tomcat.
§ Database: MySQL.
§ Framework: Google Web Toolkit (GWT).
§ Languages: Java, Java Servlets, HTML and JavaScript.
§ Libraries: JFreeChart and nom.tam FITS.

The HR-pyPopStar model provides a complete set of high-resolution (HR) Spectral Energy Distributions (SEDs) of Single Stellar Populations. The model uses the most recent high wavelength-resolution theoretical atmosphere libraries for main sequence, post-AGB/planetary nebulae and Wolf-Rayet stars. The SEDs are given for more than a hundred ages ranging from 0.1 Myr to 13.8 Gyr, four different values of the metallicity (Z = 0.004, 0.008, 0.02 and 0.05), and four different IMFs. We have developed a public web-based software tool and a database to manage the HR-pyPopStar models and make them available to the user community.

References: García-Vargas et al., 2020, MNRAS, 493, 871. DOI: 10.1093/mnras/staa126; Carrasco et al., 2021, MNRAS, 501, 3568. DOI: 10.1093/mnras/staa3704; Mollá et al., 2022, submitted; Millán-Irigoyen et al., 2021, MNRAS, 506, 4781. DOI: 10.1093/mnras/stab1969
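As an illustration of the [Fe/H]-range grouping the MEGASTAR WebApp offers, here is a hedged sketch using an in-memory SQLite table; the schema, column names and values are hypothetical, not taken from the project's actual MySQL database.

```python
import sqlite3

# Hypothetical star table mimicking the kind of metallicity-range query
# behind the WebApp's three-range [Fe/H] plot. All names and values here
# are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stars (name TEXT, feh REAL, observed INTEGER)")
conn.executemany(
    "INSERT INTO stars VALUES (?, ?, ?)",
    [("HD1", -1.2, 1), ("HD2", -0.3, 1), ("HD3", 0.2, 0), ("HD4", -2.0, 1)],
)

def stars_by_feh_range(conn, lo, hi, observed_only=False):
    """Return star names with lo <= [Fe/H] < hi, optionally restricted
    to stars already observed (as in 'observed stars of the library')."""
    sql = "SELECT name FROM stars WHERE feh >= ? AND feh < ?"
    if observed_only:
        sql += " AND observed = 1"
    return [row[0] for row in conn.execute(sql, (lo, hi))]

metal_poor = stars_by_feh_range(conn, -3.0, -0.5)
```

The parameterized query (the `?` placeholders) is the idiomatic way to pass range bounds safely, which matters for a public-facing web tool.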
poster
SCHOOL OF ROCK 2016: AN ITALIAN TEACHER ABOARD THE JOIDES RESOLUTION
Cicconi A. 1,2
1. Liceo Scientifico "G. Marinelli", Udine; 2. School of Science and Technology, University of Camerino, Camerino

WHAT YOU WILL FIND IN THIS POSTER
The aim of this poster is to describe the experience of the first Italian teacher aboard the JOIDES Resolution, attending a training course for science teachers: School of Rock.

Activities aboard the JOIDES Resolution
CLASSROOM LESSONS: Several hours of lectures were given every day, to build the background needed to understand complex scientific papers on the paleoclimatology of ocean sediment cores.
FACE TO FACE WITH THE CORES: Researchers with expertise in the field showed us how the sediment cores collected by the JOIDES Resolution are analysed.
SLIDES... SLIDES... AND MORE SLIDES...

JOIDES RESOLUTION: The JOIDES Resolution is an oceanographic vessel for drilling cores from the sea floor. The ship is owned by the company Siem Offshore AS but operates for IODP (International Ocean Discovery Program). The scientists working on the JR come from all over the world, and on every expedition a science teacher is on board to handle outreach.

IODP: The International Ocean Discovery Program is a research programme involving 25 nations. Its aim is to study the history and dynamics of the Earth starting from the study of the ocean floor. The 2013-2023 science plan has a more than fitting title: "Illuminating Earth's Past, Present and Future". The main sponsors of the programme are the National Science Foundation (USA), the Ministry of Education, Culture, Sports, Science and Technology (Japan) and ECORD (European Consortium for Ocean Research Drilling).

SCHOOL OF ROCK: School of Rock is a teacher-training course organised by IODP with the aim of disseminating, by training expert teachers, the research activities carried out on the JR.
In 2016, 16 teachers were selected: 14 from the United States, one from France and, for the first time, one from Italy.

TEACHERS AND RESEARCHERS TOGETHER! What makes School of Rock a professional-development course of high educational and scientific value is the presence, as trainers, of researchers actively involved in the JOIDES Resolution expeditions. The teachers worked alongside them in analysing the sediment cores and reading the data, just like real researchers.

CONNECTED WITH THE REST OF THE WORLD: Every day the teachers ran a blog about the activities carried out on the ship and ashore, and kept the Facebook and Twitter pages up to date. Live links with schools in their home countries were also organised.

LET US INTRODUCE...: In the photo on the left, researcher Larry Krissek of the Ohio State University and me interviewing him, streaming via tablet to a class connected from Italy (photo on the right).

SCHOOL OF ROCK 2016
WHERE: Aboard the JOIDES Resolution, moored in the port of Cape Town (South Africa).
WHEN: From 29 May to 6 June 2016.
WHO: 16 science teachers: 14 from the United States, one French teacher and one Italian teacher.
WHAT: Exploring Ocean Cores and Climate Connections: From Antarctica Across the Southern Ocean. Has Antarctica always been covered by ice? Has the extent of the sea ice always been the same? What information can be drawn from the ocean sediment cores extracted in the Southern Ocean? How are the data on the fossils found in the cores interpreted? How did the climate change in the Earth's past? These are some of the questions we tried to answer using exclusively data obtained from the cores (Data Driven Education), i.e. the same investigative method used by the scientists. The excursions were led by professors from the University of
poster
2D Two-Temperature Model Numerical Study Of Formation Mechanisms Of Periodic Surface Structures Induced On Silicon Under Femtosecond Laser Irradiation
T. J.-Y. Derrien1, R. Torres1, T. Sarnet1, M. Sentis1, T. E. Itina2
1 Lasers, Plasmas and Photonic Processes Laboratory (LP3 UMR 6182 CNRS), 163 av. de Luminy, Case 917, 13288 Marseille, France
2 Hubert Curien Laboratory, UMR CNRS 5516/Université Jean Monnet, 18 rue du Professeur Benoit Lauras, 42000 Saint-Etienne, France
Contacts: PhD student: thibault.derrien@lp3.univ-mrs.fr; CNRS researcher: tatiana.itina@univ-st-etienne.fr
The CINES computational center and PhD support from the French Ministry of Research are gratefully acknowledged. Additional thanks to D. Grojo, I. Bogatyrev, and A. Nikitin.

Context
With an increasing number of 100 fs, 1 kHz shots, the irradiated surface evolves from "ripples" (LIPSS) to "beads" (saturation), "aggregation" (amplification) and finally "spikes" (black silicon).
Problem: the formation mechanisms are not well known.
Objective: identify the mechanisms of:
1. periodic modification of the electron-hole plasma created by femtosecond laser irradiation;
2. transmission of the pattern from the electrons to the lattice;
3. change of order and amplification of the nanostructures;
with emphasis on the interplay between electron-lattice coupling and the diffusion of electrons.
[Other panel topics: surface wave investigations; lifetime of a fs interference pattern in Si; fusion, resolidification and recrystallisation.]

Does melting occur under a fs pulse on Si?
References: ~45 nm melted and resolidified depth (0.42 J/cm², 130 fs) [J. Bonse, Applied Surface Science 221, 215 (2004)]; melting occurs from 0.2-0.5 J/cm² [N. M. Bulgakova, Physical Review B 69, 054102 (2004)], [D. P. Korfiatis, Journal of Physics D 40, 6803 (2007)].
For bulk silicon, melting starts at ~0.4 J/cm²; ~115 nm is melted at 0.5 J/cm². [Figure axes: laser fluence (J/cm²), electron temperature, free-carrier density, lattice temperature, melting threshold, critical density.]

The model includes:
• electron and lattice temperatures (heating, coupling, diffusion, cooling);
• free-electron and hole density (one- and two-photon ionization, electronic avalanche, convection, recombination);
• Fresnel formulas for reflectivity and inverse Bremsstrahlung absorption;
• the Drude model (electromagnetic behaviour);
• electron-phonon and electron-electron collisions.

The electromagnetic plasmonic interference is modeled, and the possibility of printing the pattern from the electrons to the lattice is checked. The result is: at high fluence, convection-diffusion dominates over electron-lattice coupling; at low fluence, a periodic modification of the lattice temperature profile can be observed. In these figures, F = 500 mJ/cm². The calculated free-electron density, electron temperature and lattice temperature are presented for a 500 mJ/cm² laser fluence, an 800 nm NIR wavelength, and a 100 fs pulse duration (FWHM).

It has been reported that c-Si (100) melts from 0.2 J/cm² under ~100 fs laser irradiation. Our calculations based on a melting-temperature criterion showed that, if melting occurs, the resolidification front velocities are higher than the maximum speed of recrystallisation, and thus should be observable. However, calculations based on the fusion enthalpy showed that no melting occurs. Our experimental measurements show that no melting/recrystallisation front is observable in Transmission Electron Microscopy images of the Si-protection layer interface.

Conclusions:
- Calculations showed that thermal processes (melting, evaporation, ...) are not the mechanisms responsible for pattern transmission from the electrons to the lattice. Ablation may occur, but an ultrafast mechanism of ablation (~200 fs at 500 mJ/cm²) is required to support the interference hypothesis.
- Calculations confirm the possibility that the surface electromagnetic wave (plasmon) is responsible for ripple formation under fs laser irradiation, through the modification of absorption provided by a seed (defects, roughness, a nanoparticle) under our conditions.
- Calculations and experiments show that no melting occurs during ripple formation.
- Experiments show that the mechanism of the ripple-to-beads transition occurs in a crysta
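The electron-lattice temperature dynamics at the heart of the model can be illustrated with a zero-dimensional two-temperature sketch. This is far simpler than the paper's 2D model (no carrier density, Drude optics, or diffusion), and all coefficients below are toy values, not silicon parameters.

```python
import math

def ttm_step(Te, Tl, t, dt, Ce, Cl, G, S0, t0, tau):
    """One explicit Euler step of the two-temperature model:
       Ce dTe/dt = -G (Te - Tl) + S(t)
       Cl dTl/dt = +G (Te - Tl)
    with a Gaussian pulse S(t) = S0 exp(-((t - t0)/tau)^2)."""
    S = S0 * math.exp(-(((t - t0) / tau) ** 2))
    Te_new = Te + dt * (-G * (Te - Tl) + S) / Ce
    Tl_new = Tl + dt * (G * (Te - Tl)) / Cl
    return Te_new, Tl_new

# The electrons are heated by a 100 fs pulse and then relax toward the
# lattice over ~1 ps (Ce/G); the lattice, with its much larger heat
# capacity, warms only modestly. Coefficients are illustrative only.
Te, Tl = 300.0, 300.0
dt = 1e-15                       # 1 fs steps
for i in range(20000):           # 20 ps of evolution
    Te, Tl = ttm_step(Te, Tl, i * dt, dt,
                      Ce=1.0, Cl=100.0, G=1e12,
                      S0=1e16, t0=2e-12, tau=100e-15)
```

Because the coupling term transfers energy symmetrically between the two reservoirs, the total energy Ce·ΔTe + Cl·ΔTl equals the integrated source, which is a useful sanity check on any implementation.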
poster
Snip – User Centric Collaborative ELN
Markus Osterhoff, Sebastian B. Mohr, Sarah Köster
Georg-August-Universität Göttingen, Röntgenphysik | CIDAS | SFB 1456

[Overview diagram: an internet server connects the experiment (control system, metadata, raw data, analysis, catalogue, sample DB) with a discussion space where users exchange config, positions, parameters, images, links, data, decisions and feedback with colleagues, including mobile access.]

User Centric Paradigm – Documenting Non-Linear Discussions and Experiments

Creative, Collaborative Collage
► computer-generated contents (API)
► handcrafted annotations (pen entry)
► real-time communication (web based)

Curated Lab Book
► putting the experimental state into context
► via API: parameters, images, URLs
► via pen & text: highlights, thoughts, sketches
► bringing machine-readable snips into perspective

Closing the Loop
► Experiment ⇒ Thoughts ⇒ Decisions ⇒ Experiment

Acknowledgements
We thank Jan Goeman for outstanding technical support. Funding by Deutsche Forschungsgemeinschaft (SFB 1456), DAPHNE4NFDI and the Göttingen Campus Institute Data Science is gratefully acknowledged.

API / Data Ingestion
► interoperable with other software: instruments send their state, users discuss and comment
► public JSON endpoint accepts standardised snips; API tokens per book for authentication
► custom snip types: hierarchical namespaces prevent conflicts between groups

Further features: single/dual page layout, upload and placement of new snips, doodle tool, hyperlinks, permission system, creative collage, JSON snips.
More information: sci.photos/QR/FDM24 and snip.roentgen.physik.uni-goettingen.
Current Status & Outlook

Client/server architecture
► React in TypeScript, serviced by node.js
► extensible and modular: custom snips and render functions

In production @ GINIX X-ray microscope (DESY, Hamburg)
► macros send content to the lab book
► integration into other beamlines under development

In production @ IRP
► lab experiments, personal ELNs, shared experiments
► 150 books, > 20k pages; ~ 30k snips, ~ 500k doodles

Feature | Status
live update, working collaboratively | yes
free-form input; pen entry, sketches; images | yes
machine-readable data, data ingestion via API | yes; namespaces and schemas
permissions (r/w; API token; r/o token) | yes; user-defined groups
searching and filtering | yes
tagging system (table of contents, flags) | under discussion
hyperlinks to external systems | yes
federated login, SSO | work in progress
removing contents during grace period; automatic creation of lab book | could be implemented

Snip types
► basic types: image, text, doodle; URLs linking to SciCat, Sample DB, PID services etc.
► array: composite snips
► user-defined types: timestamp, motor positions, macro, log, measurement, Jupyter analysis, ...

[Architecture diagram: a book browser and static client files issue API requests to fetch data; MariaDB stores page, book and user data; a render service produces server-side preview images into preview storage; an email service handles signup and password reset; a websocket pushes real-time updates.]
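To make the data-ingestion bullets concrete, here is a hypothetical sketch of a client preparing a standardised snip for the JSON endpoint; the URL, field names and token format are assumptions for illustration, not the actual Snip API schema.

```python
import json

# Placeholder endpoint; the real per-book URL and payload layout are
# assumptions, not documented API details.
API_URL = "https://snip.example.org/api/books/{book_id}/snips"

def make_snip(snip_type, data, book_token):
    """Build a snip payload. Custom types must carry a hierarchical
    namespace (e.g. 'roentgen.physik.motor_positions') so that groups
    do not collide; the basic types are exempt."""
    if "." not in snip_type and snip_type not in ("image", "text", "doodle"):
        raise ValueError("custom snip types need a namespaced type name")
    return {"type": snip_type, "data": data, "token": book_token}

payload = make_snip("roentgen.physik.motor_positions",
                    {"stage_x": 1.25, "stage_y": -0.40},
                    book_token="SECRET-BOOK-TOKEN")
body = json.dumps(payload)
# An instrument macro would then POST `body` to API_URL with
# Content-Type: application/json (e.g. via urllib.request).
```

Keeping the namespace check client-side mirrors the poster's point that hierarchical names are what prevent conflicts once many groups define their own snip types.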
poster
The characterization of the water vapor vertical distribution on Mars is currently a very active research topic. Until the last decade, the water cycle had been studied mainly through the analysis of column abundances. The recent possibility of solar occultation (SO) observations has opened a new path towards a better understanding of the Martian climate [1]. The Mars we observe today is a dry and cold planet, although there is abundant evidence suggesting a warmer and wetter environment in the past [2]. Despite being a minor species in the current Martian atmosphere (~0.03%), water vapor has a large variability throughout the year, resulting in a very complex hydrological cycle involving sublimation and condensation processes affected by dust and atmospheric transport [1].

WATER VAPOR VERTICAL DISTRIBUTION ON MARS DURING 5 YEARS OF EXOMARS-TGO/NOMAD OBSERVATIONS
A. Brines1, M. A. López-Valverde1, A. Stolzenbach1, A. Modak1, B. Funke1, F. G. Galindo1, S. Aoki2,3, G. L. Villanueva4, G. Liuzzi4,5, I. R. Thomas2, J. T. Erwin2, U. Grabowski6, F. Forget7, J. J. Lopez-Moreno1, J. Rodriguez-Gomez1, F. Daerden2, L. Trompet2, B. Ristic2, M. R. Patel8, G. Bellucci9 and A. C. Vandaele2
1Instituto de Astrofísica de Andalucía (IAA/CSIC), Spain; 2Royal Belgian Institute for Space Aeronomy, Belgium; 3Department of Complexity Science and Engineering, University of Tokyo, Japan; 4NASA Goddard Space Flight Center, USA; 5American University, Washington DC, USA; 6Karlsruhe Institute of Technology, Karlsruhe, Germany; 7Laboratoire de Météorologie Dynamique, IPSL, Paris, France; 8Open University, UK; 9Istituto di Astrofisica e Planetologia, Italy
VII Reunión de Ciencias Planetarias y Exploración del Sistema Solar, Valladolid, 11-13 July 2023
Contact: adrianbm@iaa.es
Poster sections: Global Dust Storm; Data analysis; Supersaturation and water ice; Seasonal variation

Here we present a summary of results on the characterization of water vapor in the Martian atmosphere.
We study its seasonal variability during 3 Martian Years (MY), analyzing almost 5 years (April 2018 to December 2022) of NOMAD SO observations. We present a detailed view of specific periods, showing the effects of dust storms and detections of supersaturation.

NOMAD is a spectrometer suite covering the spectral range from 0.2 to 4.3 µm. Its SO channel uses an echelle grating with an Acousto-Optical Tunable Filter (AOTF) to select the diffraction orders used during the observations. The spectral resolution of the SO channel is λ/∆λ = 20,000. The sampling allows a vertical resolution of 1 km, and the AOTF permits probing the atmosphere at a given altitude through 6 different diffraction orders [7]. For this study, we use Level 1 SO calibrated transmittances [8] from diffraction orders 134 (3011-3035 cm-1) and 136 (3056-3081 cm-1) for the lower atmosphere, and 168 (3775-3806 cm-1) and 169 (3798-3828 cm-1) for the upper atmosphere. We have developed pre-processing tools to identify and eliminate residual artifacts in the spectra (bending, spectral shift) using the line-by-line radiative transfer algorithm KOPRA [9]. For the inversions, we use the latest calibration of the NOMAD AOTF and its instrumental line shape (ILS) [10]. The retrievals combine the spectra of the low-altitude orders (134, 136) with spectra of the high-altitude orders (168, 169) up to 120 km, for occultations where those orders were observed simultaneously.

The bottom panels show two examples of typical NOMAD spectra (black) after bending and spectral-shift corrections, for diffraction orders 136 (panel A) and 169 (panel B) at 30 and 100 km respectively. We also show the fit after the retrieval (red). For optimization purposes, we only fit the data in certain spectral windows where the strongest H2O absorption lines are located. The panel on the right shows a typical retrieved H2O vertical profile (blue) and the GCM a priori used during its inversion.
For this example, we are not fitting the data
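The bending correction mentioned in the data-analysis description can be illustrated generically: fit a low-order polynomial baseline to continuum channels and divide it out. This is a self-contained sketch of that idea, not the project's actual pre-processing (which relies on KOPRA forward models), and the synthetic spectrum below is invented.

```python
def polyfit2(x, y):
    """Least-squares quadratic fit via normal equations (stdlib only).
    Returns coefficients (c0, c1, c2) of c0 + c1*x + c2*x^2."""
    sums = [sum(xi ** k for xi in x) for k in range(5)]
    b = [sum(yi * xi ** k for xi, yi in zip(x, y)) for k in range(3)]
    # Solve the 3x3 normal system with Gaussian elimination (pivoting)
    A = [[sums[i + j] for j in range(3)] + [b[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 4):
                A[r][c] -= f * A[col][c]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coef[r] = (A[r][3] - sum(A[r][c] * coef[c]
                                 for c in range(r + 1, 3))) / A[r][r]
    return coef

def remove_bending(wavenumber, transmittance, continuum_mask):
    """Divide out a quadratic baseline fitted to continuum channels."""
    xs = [w for w, m in zip(wavenumber, continuum_mask) if m]
    ys = [t for t, m in zip(transmittance, continuum_mask) if m]
    c0, c1, c2 = polyfit2(xs, ys)
    baseline = [c0 + c1 * w + c2 * w * w for w in wavenumber]
    return [t / b for t, b in zip(transmittance, baseline)]

# Synthetic bent spectrum with one absorption dip around channel 50;
# the continuum mask excludes the line region from the baseline fit.
wn = list(range(100))
true_base = [0.9 + 1e-3 * w - 5e-6 * w * w for w in wn]
spec = [b * (0.5 if 48 <= w <= 52 else 1.0) for w, b in zip(wn, true_base)]
mask = [w < 40 or w > 60 for w in wn]
flat = remove_bending(wn, spec, mask)
```

After the division, the continuum sits at 1 and the line depth is preserved, which is the property a transmittance retrieval needs.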
poster
References
Ran E, et al. (2013) Alternative cleavage and polyadenylation: extent, regulation and function. Nature Reviews Genetics 14, 496-506.
Simpson JT, et al. (2009) ABySS: a parallel assembler for short read sequence data. Genome Research 19, 1117-1123.
Robertson G, et al. (2010) De novo assembly and analysis of RNA-seq data. Nature Methods 7, 909-912.
Trapnell C, et al. (2010) Transcript assembly and quantification by RNA-Seq reveals unannotated transcripts and isoform switching during cell differentiation. Nature Biotechnology 28, 511-515.
Grabherr MG, et al. (2011) Full-length transcriptome assembly from RNA-Seq data without a reference genome. Nature Biotechnology 29, 644-652.
Djebali S, et al. (2012) Landscape of transcription in human cells. Nature 489, 101-108.

Abbreviations
3'-end – downstream end; 5'-end – upstream end; ABySS – Assembly By Short Sequences; APA – alternative polyadenylation; CDS – coding DNA sequence; CS – cleavage site; DNA – deoxyribonucleic acid; ENCODE – Encyclopedia of DNA Elements; EST – expressed sequence tag; FDR – false discovery rate; PET – paired-end tag; poly(A) – polyadenylation; PPV – positive predictive value; RNA – ribonucleic acid; RNA-Seq – whole-transcriptome shotgun sequencing; ROC – receiver operating characteristic; SPAT – Searching for Poly(A) Tails; TPR – true positive rate; Trans-ABySS – transcriptome ABySS; UTR – untranslated region; WIG – wiggle format

1. Overview
In cancers, alternative splicing and polyadenylation (APA) can affect transcript stability, transport and translation, and can change a transcript's translated sequence. APA can be characterized with short-read sequencing using specialized library construction methods. For large-scale disease studies, however, it is desirable to characterize splicing and APA from a single library construction and sequencing run. In the work reported here, we describe SPAT, an analysis tool that uses de novo assembly of RNA-Seq data to Search for Poly(A) Tails.
SPAT is designed to accept contigs from a range of de novo transcriptome assemblers. When used with Trans-ABySS, the overall pipeline reports fusions and rearrangements, indels, alternative splicing, and a range of types of APA.

SPAT: Searching for Poly(A) Tails in RNA-Seq de novo Assemblies
Raymond AGJ, Robertson AG, Chiu R, Mungall K, Nip KM, Kreitzman M, Jackman SD, Karsan A and Birol I

2. RNA-Seq Assembly and Poly(A) Tail Detection

RNA-PET clusters | Using data supplied by ENCODE for cell line A549, we first thresholded peaks in the RNA-PET coverage tracks, then clustered peaks that occurred within 100 bp of each other. We used the resulting clusters as 'true' events for validations and ROC-like curves.

ROC-like curve definitions:
true positive | an RNA-PET cluster with at least one CS call within 100 bp.
false positive | a CS call that is not within 100 bp of an RNA-PET cluster.
true negative | cannot be defined.
false negative | an RNA-PET cluster with no CS call within 100 bp.

3. Comparing Cleavage Site Calls

RNA-Seq tool sets:
Trans-ABySS/SPAT | assemble with Trans-ABySS and process with SPAT.
Trinity/SPAT | assemble with Trinity and process with SPAT.
Cufflinks | assemble with Cufflinks and call CSs at transcript ends.

ROC-like curves, which were insensitive to the thresholds for RNA-PET peak detection and clustering, show that SPAT works with other assemblers and performs better than a method that uses annotations to determine the ends of expressed 3' UTRs. SPAT has a PPV of 90% when we require an RNA-PET coverage of 3 and 2 bridging reads supporting a CS call.

Validation counts | Venn diagrams were generated with the true positives from each tool set. Increasing the RNA-PET stringency reduces the number of true positives by 6% for Trans-ABySS/SPAT and 27% for Cufflinks.

Acknowledgements
We thank the ENCODE consortium for supplying the resources that made this analysis possible. We gratefully acknowledge funding from Genome BC, Genome Canada, and the BC Cancer Foundation.
Trans-ABySS Assembly
Assembly involves constructing
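The core idea of searching contig ends for poly(A) evidence can be sketched as a terminal-run scan: walk inward from the 3' end of a contig counting A's, tolerate a small mismatch fraction, and report where the tail begins as the candidate cleavage site. The thresholds and scoring below are illustrative placeholders, not SPAT's actual algorithm.

```python
def find_polya_tail(contig, min_len=8, max_mismatch_frac=0.1):
    """Return the 0-based index where a terminal poly(A) run begins
    (a candidate cleavage site), or None if no run of at least
    `min_len` bases passes the mismatch tolerance. (For the reverse
    strand one would analogously scan the 5' end for T's.)"""
    best = None
    mismatches = 0
    run = 0
    for i in range(len(contig) - 1, -1, -1):
        run += 1
        if contig[i] != "A":
            mismatches += 1
        if mismatches > max_mismatch_frac * run:
            break
        # Only let the tail start on an actual 'A'
        if run >= min_len and contig[i] == "A":
            best = i
    return best

# A 15-base transcript body followed by a 12-base poly(A) tail:
cs = find_polya_tail("GATTACAGGCTAAGC" + "A" * 12)
```

In a real pipeline the candidate site would then be cross-checked against bridging reads and, as in the validation above, against orthogonal evidence such as RNA-PET clusters.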
poster
KNOW WHAT YOU EAT – GET THE MOBILE APP
Fish supply chain traceability at your fingertips, for an informed and conscious choice.
NOVAFOODIES
GET THE APP: SCAN THE QR CODE AND DISCOVER
This project has received funding from the European Union under Grant Agreement N° 101084180. © All registered trademarks and logos mentioned in this document are the property of their respective owners. Co-funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.
poster
The ESCAPE project is working toward the implementation of the FAIR principles for the astrophysics ESFRIs by using IVOA standards, in the context of connecting the Virtual Observatory framework to the European Open Science Cloud. Scientists, software engineers and data experts from the different facilities are addressing the 'data stewardship' challenges for a wide range of data products and services. The recent ESCAPE Technology Forum identified a number of important topics in domains ranging from radio astronomy to solar physics:

Connecting to the European Open Science Cloud: data stewardship of next-generation data products and services related to ESFRI and other astrophysics infrastructures
M. Allen1, S. Amodeo1, F. Bonnarel1, M. Molinaro2, F. Genova1, M. Romaniello3, H. Heinl1, A. Schaaff1, S. Lesteven1

Solar physics
Application of IVOA standards for solar physics data:
§ metadata for the semantics of the content (Unified Content Descriptors, the 'UCD' standard)
§ Table Access Protocol for finding and using tabular data
Standards:
§ data provenance implementations: "allowing users to access quality and reliability information"
§ application of IVOA data models
§ support for the multi-messenger approach (VO events, schedules, alerts)

High-energy and neutrino astrophysics
First steps for neutrino data in the VO:
§ publication of tables
§ publication of events
§ integration into multi-messenger astrophysics systems
§ data provenance standards
§ using VO tools (e.g.
TOPCAT, Aladin)
§ complementarity with the services of the KM3NeT Open Data Center

Radio and millimetre astronomy
Standards:
§ standards for the discovery of interferometric data: "make the data easy to find!"
§ data provenance: "what happened to these data?"
§ data models: "what metadata is needed for discovery and interoperability?"
Accessing and using radio astronomy data:
§ zoomable interactive access to large simulations on the sky (HiPS standard)
§ access to VO services for radio data from Python

Gravitational wave astronomy
Standards and tools for GW follow-ups:
§ sky coverage standards
§ indexing the sky in space and time (HiPS, MOC 2.0)
§ see the full presentation by G. Greco

Optical/UV/IR astronomy
Deep learning for adding value to the ESO archive data: "let the data speak"
§ e.g. search for similar spectra
§ see Sedaghat et al. 2021
VO standards in the ESO archive for:
§ linking data (DataLink standard)
§ image cutouts (SODA standard)
§ large-scale imaging (HiPS standard)
§ see Romaniello et al. 2018

ACRONYMS
ESCAPE: European Science Cluster of Astronomy & Particle Physics ESFRI Research Infrastructures
ESFRI: European Strategy Forum on Research Infrastructures
FAIR: Findable, Accessible, Interoperable, Reusable
HiPS: Hierarchical Progressive Survey
IVOA: International Virtual Observatory Alliance
MOC: Multi-Order Coverage

ESCAPE - The European Science Cluster of Astronomy & Particle Physics ESFRI Research Infrastructures has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement no. 824064.
1 CNRS, Observatoire astronomique de Strasbourg, France; 2 INAF, Osservatorio Astronomico di Trieste, Italy; 3 ESO, European Southern Observatory
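The Table Access Protocol bullet is typically exercised from Python via pyvo. The sketch below builds an ADQL cone search; the table name and service URL are placeholders rather than a specific ESCAPE service, and the actual network call is shown only in a comment so the snippet stays self-contained.

```python
# Sketch of issuing a TAP query from Python. Table and URL are
# illustrative placeholders, not a real endpoint.

def cone_search_adql(table, ra, dec, radius_deg):
    """Build an ADQL cone search around (ra, dec) in degrees, using the
    standard CONTAINS/POINT/CIRCLE geometry functions."""
    return (
        f"SELECT * FROM {table} "
        f"WHERE 1 = CONTAINS(POINT('ICRS', ra, dec), "
        f"CIRCLE('ICRS', {ra}, {dec}, {radius_deg}))"
    )

query = cone_search_adql("ivoa.obscore", 83.63, 22.01, 0.1)

# With pyvo installed, the query would run against a TAP endpoint:
#   import pyvo
#   service = pyvo.dal.TAPService("https://tap.example.org/tap")
#   results = service.search(query)
```

Because ADQL is a shared IVOA standard, the same query string works against any compliant TAP service, which is exactly the interoperability the cluster is promoting.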
poster
Accuracy assessment of MERRA-2 temperature and humidity profiles over the tropical ocean using AEROSE observations and ECMWF analysis
Bingkun Luo1, Peter J. Minnett1, Goshka Szczodrak1, Nicholas R. Nalli2

Abstract
Reanalysis model output is extensively used in atmospheric research to complement satellite and in situ data. NASA's Goddard Earth Sciences Modern-Era Retrospective analysis for Research and Applications, version 2 (MERRA-2) assimilates measurements from different sources to minimize bias errors. The model data must be rigorously evaluated to assess their suitability for a variety of studies. Here, sea surface temperature (SST) and profiles of temperature and humidity through the marine atmosphere from MERRA-2 are compared to independent ship-based measurements: radiosondes, retrievals from the ship-based Marine-Atmosphere Emitted Radiance Interferometer (M-AERI), and the analysis fields of the European Centre for Medium-Range Weather Forecasts (ECMWF). The ship measurements come from the Aerosol and Ocean Science Expedition (AEROSE) cruises of the NOAA Ship Ronald H. Brown in the tropical North Atlantic Ocean, with a focus on the representation of spatial and temporal variability. The results reveal that the MERRA-2 temperature and water vapor profiles are in good agreement with the in situ measurements. For part of the cruises, the atmosphere included a mid-level dry layer emanating from North Africa, the Saharan Air Layer (SAL). MERRA-2 humidity profiles, however, show a large difference (40%) from the RAOBs at high altitude (pressure levels < 300 hPa). These results support the use of MERRA-2 fields in a variety of research applications, including those directed at improving the accuracies of satellite-derived SSTskin retrievals.

The MERRA-2 SST agrees well with M-AERI, with a mean bias of 0.125 K, and has the potential to provide a satisfactory contribution to studies involving SST.
The distribution of SST differences between them shows a negative bias near the equatorial region; effort is still required to improve the data, especially where water vapor contaminates the retrievals. Table: Statistics of errors of MERRA-2 SST vs M-AERI 2. SST Validation MERRA-2 datasets: The NASA MERRA-2 reanalysis data were generated using the Goddard Earth Observing System data assimilation system by NASA's Global Modeling and Assimilation Office Data Assimilation System V5.12.4, covering Earth observations from 1980 to the present. MERRA-2 analysis data are computed on a latitude-longitude-altitude cubed-sphere grid at the same spatial resolution as the 3DVAR-algorithm atmospheric model based on the Grid-point Statistical Interpolation. AEROSE: • A series of intensive field campaigns on the NOAA ship Ronald H. Brown in the Atlantic Ocean. • AEROSE data include complementary measurements to study the transport of aerosols from the African continent across the Atlantic Ocean, including: – Microphysical evolution and regional impacts – Regional atmospheric chemistry and marine meteorology AEROSE tracks, color indicates the days since departure. Independent Validation data: • Radiosonde (RAOB): Balloon-based instrument to make direct in situ measurements of air temperature, humidity, and pressure to about 30 km height. • ECMWF: Contains a set of atmospheric variables on a spatial horizontal grid of 0.5° x 0.5° at 60 vertical levels. The ECMWF analysis is produced daily for 00, 06, 12 and 18 UTC. The four analyses per day are obtained by four-dimensional variational analysis (4DVAR) schemes. • M-AERI: An accurate, self-calibrating, Fourier transform IR spectroradiometer that measures emission spectra from the sea and atmosphere (Minnett et al. 2001). 1. Data 19th International GHRSST Science Team Meeting, 4th-8th June 2018, EUMETSAT, Darmstadt, Germany. Relative humidity and temperature layer properties of RAOB, MERRA-2 and ECMWF during the 2015 AEROSE cruise; color indicates RH and temperature.
poster
Holocene sea-level changes in Southeast Asia M. Bender¹, T. Mann², D. Kneer³, P. Stocchi⁴, J. Jompa⁵, A. Rovere¹ Sea Level & Coastal Changes SLCC Acknowledgments: This work is supported through the grant SEASchange from the Deutsche Forschungsgemeinschaft (DFG) as part of the Special Priority Program (SPP) 1889 "Regional Sea Level Change and Society" (SeaLevel). The Sea Level and Coastal Changes group research is supported by the Institutional Strategy of the University of Bremen, funded by the German Excellence Initiative [ABPZuK-03/2014] and by ZMT, the Leibniz Centre for Tropical Marine Research 1: MARUM – Center for Marine Environmental Sciences, University of Bremen, Bremen, Germany 2: ZMT – Leibniz Centre for Tropical Marine Research, Bremen, Germany 3: Alfred-Wegener-Institut, Bremerhaven, Germany 4: NIOZ Royal Netherlands Institute for Sea Research, Den Burg, Texel, Netherlands 5: Universitas Hasanuddin - Department of Marine Science Indonesia - Makassar RSL results from fossil microatolls in the Spermonde Archipelago, Indonesia, SE Asia Affiliations References modern microatoll fossil microatoll In this study we surveyed fossil microatolls on five new islands as well as living microatolls, and compared the relative sea level (RSL) estimates with previous data provided by Mann et al., 2016. Figure 1) The locations and close-ups of the surveyed islands. Panels a-b) provide an overview of the study areas, sorted by letters as indicated in b). The islands Panambungan (g) and Barrang Lompo (i) were surveyed by Mann et al., 2016. The red S. indicates a third island from Mann et al., 2016 where only living microatolls are found. This island is not shown here (see Figure 3 for LMA elevations). Figure 2) All related RSL curves derived from fossil microatolls and their modern counterparts. Panel a) The positions of the islands are indicated by letters and colors and correlate to the RSL graphs.
All islands show comparable RSL results except the data set from Barrang Lompo, which shows a noticeably lower RSL curve. The two studies by De Klerk, 1982 and Tjia et al., 1972 show RSL results up to 6 m above present sea level, whereas the new data and the data by Mann et al., 2016 show RSL results of less than one meter above today. One possible explanation is that the earlier RSL results reflect extreme storm or tsunami events. Additionally, the GIA predictions mismatch the data and therefore, we propose to explore different ice models, e.g. ICE-6G (Peltier et al., 2015) or ANICE (De Boer et al., 2014). Figure 4) RSL indicator points for the Spermonde Archipelago and the ICE-5G GIA model (Peltier, 2004) with five different mantle viscosities for the same area. The gray band indicates the lowest and highest GIA prediction for this region. Shown are two marine limiting points from Tjia et al., 1972, 16 marine and two terrestrial limiting points and one index point by De Klerk, 1982, 21 sea-level index points from Mann et al., 2016 and the 24 newly surveyed data points. Comparison with data from literature [Figure 3 axis residue: Elevation [m], -1 to 0; islands Sanane, Sanrobengi, Barrang Lompo, Panambungan, Tambakulu; n = 24, 23, 20, 17, 51] Our results for the living microatoll surveys show a clear geographic trend in the HLC. The highest HLC was measured on Sanrobengi. This island is located only one km from the coast of Southwest Sulawesi. The HLC from Sanane, Panambungan and Barrang Lompo differ slightly from each other, but they are within a comparable range. These islands are located in the middle of the Archipelago. On Tambakulu the HLC signal is the lowest. It is located at the outer part of Spermonde, ~70 km from the coast of the mainland. This trend might be based on differences in the local mean sea level or partial tides from the coast to the open ocean due to a deepening bathymetry.
Figure 3) A boxplot of the average HLC (Height of Living Coral) of living microatolls at all surveyed locations. n = the number of surveyed individuals. Living microatoll survey and HLC averages
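The per-island HLC averages behind a plot like Figure 3 reduce to grouping survey points by island and averaging. A minimal sketch; the elevations below are made-up example values, not the SEASchange survey data:

```python
from collections import defaultdict

def mean_hlc(surveys):
    """Average Height of Living Coral per island from
    (island, elevation_m) survey records."""
    sums = defaultdict(lambda: [0.0, 0])
    for island, elev in surveys:
        sums[island][0] += elev
        sums[island][1] += 1
    return {isl: total / n for isl, (total, n) in sums.items()}

# made-up example records, for illustration only
records = [("Sanrobengi", -0.20), ("Sanrobengi", -0.24), ("Tambakulu", -0.80)]
averages = mean_hlc(records)
```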
poster
FILM SERIES “The Human Dimensions of Climate Change” Sponsored by: Fogler Library, the Climate Change Institute, Anthropology, Communication & Journalism & Political Science. The University of Maine does not discriminate on the grounds of race, color, religion, sex, sexual orientation, including transgender status and gender expression, national origin, citizenship status, age, disability, genetic information or veteran’s status in employment, education, and all other programs and activities. The following person has been designated to handle inquiries regarding nondiscrimination policies: Director, Office of Equal Opportunity, 101 North Stevens Hall, 207.581.1226. Discussant Dr. Robert Glover Assistant Professor of Political Science & Honors, Cooperating Faculty Margaret Chase Smith Policy Center This Changes Everything MARCH 29 @ 6PM Library Classroom, Fogler Library Merchants of Doubt MARCH 22 @ 6PM Library Classroom, Fogler Library Discussant Dr. Laura Rickard Assistant Professor of Communication and Journalism Discussant Dr. Daniel Dixon Research Assistant Professor, Climate Change Institute UMaine Sustainability In The Path of Resistance APRIL 5 @ 6PM Library Classroom, Fogler Library
poster
CODATA-RDA Research Data Science Schools Marcela Alfaro Córdoba, Louise Bezuidenhout, Raphael Cóbe, Sara El jadid, Bianca Peterson, Rob Quick, Hugh Shanahan, Shanmugasundaram Venkataraman Objectives and Scope • Create a culture within research institutions in low- and middle-income countries that engages fully with the era of digital data. • Regional schools that teach foundational Data Science skills, designed to build communities of early-career researchers in low- and middle-income countries who are interested in practising Open Science. • A curriculum delivered over two weeks of classes. • Alumni include more than 500 students from more than 45 countries. • Plans for future train-the-trainer sessions. Open call for helpers and instructors for future regional schools. Incorporating Open Science • We avoid giving recipes. • We prepare participants for the reality back home. • We promote reflective debate. • We provide ethics exercises integrated into every session. • We mix theory with hands-on exercises. Curriculum We embed the practice of Open Science into Data Science training.
Take a photo to visit our web page Timeline July 2016 • dataTrieste16 • ICTP Trieste, Italy July 2017 • dataTrieste17 • ICTP Trieste, Italy December 2017 • dataSaoPaulo17 • ICTP-SAIFR Sao Paulo, Brazil July 2018 • dataTrieste18 • ICTP Trieste, Italy October 2018 • dataKigali18 • ICTP-EAIFR Kigali, Rwanda December 2018 • dataSaoPaulo18 • ICTP-SAIFR Sao Paulo, Brazil June 2019 • dataAddis19 • Addis Ababa University, Ethiopia July 2019 • dataTrieste19 and Data Stewards • ICTP Trieste, Italy December 2019 • dataSanJose19 • CeNAT San José, Costa Rica January 2020 • dataPretoria20 • University of Pretoria, South Africa July 2020 • dataTrieste20 • alumni meeting, online January 2021 • dataPretoria21 • University of Pretoria, online September 2021 • dataTrieste21 • ICTP Trieste, online May 2022 • dataPretoriaSaoPaulo22 • Pretoria and Sao Paulo, online July 2022 • dataTrieste22 • ICTP Trieste, Italy
poster
The Heliocentric Model of Open Science Documentation Understanding Scientific Information Stewardship Top-Down and Bottom-Up v6 29NOV2024 Monica Gonzalez-Marquez, et al*, Open Science Advocate mg246@cornell.edu DOI: 10.5281/zenodo.14242541 1. Do away with a paper-centric model of documentation 2. Return to first principles and allow Scientific Questions and their constraints to guide documentation structure. 3. For optimal communication, allow the labor of the scientific process to determine: a. What needs to be documented b. How it should be documented Golden Rule: Documentation is for communication with humans The documentation tools we build are meant to serve the components of the scientific process, not exist independently: Data without its context is meaningless. Credit: T Šimko. METHODS *paper with collaborators forthcoming Scientific documentation should not be constrained by the limits of the out-of-date research article. Scientific information takes many shapes, and multiple media should be used to accommodate it. The purpose of documentation is to communicate our science to current and future generations. Science is a verb; it is labor. RDM should be grounded in preserving information about the labor that produced the data, as well as the data itself. Instead of allowing the paper and publishing constraints to dictate what information is documented for posterity and how, Helio positions the research question at the helm of record-keeping. It is, in turn, constrained by the resources needed to ask the question. These two layers are constrained by the communicative drive: science that is not communicated in a usable fashion may as well not exist. It is also a complete waste of resources, and thus not sustainable. The methods satellite below shows some of the labor involved in completing the methods component of a human science study. These details are indispensable to understanding how to interpret and potentially reproduce a study.
They are also indispensable to accurate peer review. Current documentation practices either grossly under-describe or completely omit these needed details. Helio provides needed structure to Open Science efforts by helping map tools, infrastructure and other resources directly to components of the scientific process, thus reframing them as being directly at the service of documenting complete research processes for posterity. Helio uses labor as the basic unit to break down scientific processes for documentation. These satellites are not mapped to sections in a paper but to components of scientific processes, as instantiations of the scientific method, and as the labor needed to complete them. This presupposes that scientific processes and their components will vary according to discipline. This model is an analogy of the original planetary model transition process. Just as it was an error to position the Earth at the center, it has been an error to place the paper at the helm of scientific documentation. No matter how sophisticated a machine learning algorithm might be, or what A.I. can do, the ultimate interpreters of scientific information will always be people. Hence, we must document so that information is usable and understandable by people. ©Inês Silva for all figures unless otherwise noted The paper is demoted to become the narrative. In this way it assumes the role it has in practice, which is to function as an extended abstract with links to detailed, usable descriptions of the labor and labor products of the scientific process.
poster
Centro Universitário Euroamericano - UNIEURO / School of Medicine, Medical Humanities HARM REDUCTION: USE OF PSYCHOSTIMULANTS BY MEDICAL STUDENTS 1Authors: MARIANA DIAS TEIXEIRA; NATÁLIA FERNANDES CAVALCANTI; NICOLE COSTA DE HOLANDA; VALENTINA DINIZ CATHOUD; PEDRO AUGUSTO M FERREIRA; WESLEY RIBEIRO DE CASTRO 2 Advisors: VANESSA TEREZINHA ALVES TENTES; PROFª MS DIANA ARISTOTELIS ROCHA DE SÁ INTRODUCTION The use of psychostimulants such as methylphenidate has become common among medical students, who turn to these medications to improve academic performance, especially in high-pressure situations. However, indiscriminate use without a medical prescription raises serious concerns about side effects and the risk of dependence. In this context, the harm-reduction approach emerges as a fundamental strategy for minimising the negative impacts of this practice, promoting safer and better-informed use of these medications. OBJECTIVE This article discusses the application of harm-reduction strategies related to the use of methylphenidate by medical students, focusing on minimising risks to physical and mental health and on providing educational and psychological support to these individuals. METHOD RESULTS The review revealed that approximately 23.3% of medical students resort to psychostimulant substances, such as Venvanse and Ritalina, to overcome fatigue and improve academic performance. Notably, 57.1% of these individuals use such medications without a medical prescription or formal diagnosis, obtaining them illicitly through friends, relatives or forged prescriptions. The use of methylphenidate without a prescription among medical students is a critical issue that demands immediate intervention. Harm-reduction strategies, when applied effectively, can reduce health risks and promote the overall well-being of students.
It is essential that medical schools implement educational and support programmes that address the underlying causes of the use of these medications, providing a healthier and safer academic environment. REFERENCES Lopes, J. V. et al. (2024). Metilfenidato e Venvanse: o impacto na qualidade de vida dos estudantes de Medicina. Brazilian Journal of Implantology and Health Sciences, 6(8), 1891-1906. Amaral, N. A. et al. (2022). Precisamos falar sobre uso de Metilfenidato por estudantes de medicina - revisão da literatura. Revista Brasileira de Educação Médica, 46(2), e060. Santos, M. E. B. V. et al. (2022). Uso de metilfenidato e lisdexanfetamina por universitários da área da saúde: uma revisão bibliográfica. Revista Corpus Hippocraticum, 2(1). FINAL CONSIDERATIONS A literature review was conducted in databases such as PubMed, SciELO and LILACS, focusing on studies about methylphenidate use among medical students without a diagnosis of ADHD (Attention Deficit Hyperactivity Disorder). Studies published in the last five years were analysed, covering both the effects of use and strategies to mitigate the harms associated with unsupervised use of psychostimulants. The routine of medical students is marked by a heavy workload, long study hours, clerkships and exam preparation. As a result, many students turn to stimulants such as Venvanse and Ritalina, expecting to improve concentration, increase productivity and reduce mental and physical fatigue. These psychostimulants act by inhibiting the reuptake of neurotransmitters, increasing the extracellular concentration of dopamine and norepinephrine.
This contributes to the improvement of symptoms associated with ADHD, such as inattention, hyperactivity and impulsivity. Despite the perceived benefits, the real efficacy of these medications in healthy individuals without a clinical diagnosis is questioned, with evidence suggesting that the placebo effect may play a significant role in the perceived improvement.
poster
www.dtls.nl • https://www.helisacademy.com/en • A three-year project targeting the Life Science & Health sector • Aim: Improve skills for both new staff and existing staff (lifelong learning) on identified gaps, by developing tailored training • Helis themes identified: Good manufacturing practices (GMP), Clinical testing, Business development, Product & process design, Data Analysis & Stewardship • Contact: celia.van.gelder@dtls.nl • https://zenodo.org/communities/nl-ds-pd-ls/ • Aim: professionalise the data steward function within the life-sciences domain, with a special focus on implementation of the FAIR principles and endorsed by all stakeholders in NL • First deliverable: Life Science Data Steward Function Matrix v1.1, for 2 levels of Data Stewards and 9 knowledge areas: https://zenodo.org/record/2561723 • Upcoming: defining competences and building training • Contact: celia.van.gelder@dtls.nl FAIR Data Training and Community Building activities in the Netherlands Mateusz Kuzak, Christine Staiger and Celia W. G. van Gelder (DTL/ELIXIR-NL) Interoperability, FAIR data stewardship and training researchers and data experts are themes that are core to the remit of DTL and ELIXIR Netherlands. In line with these focal points, a significant part of our DTL Learning/ELIXIR-NL training activities can be categorized as “FAIR Data Training & Community Building”. They are set up with and by the DTL and ELIXIR-NL Community in the Netherlands, and in close collaboration with the DTL Data and DTL Technologies programmes. A number of highlights are shown here; more information can be found at https://www.dtls.nl/fair-data/fair-data-training/. Community Building – The Data Stewards Interest Group Defining the Data Steward profession & Curriculum Building • https://ds-wizard.org/ • The Data Stewardship Wizard (DSW) is a web-based application for preparing smart Data Management Plans. It connects to additional learning resources and helps raise awareness about DS challenges.
It can be customised by Data Stewards or Funders to meet more specific needs. • Contact: rob.hooft@dtls.nl Training in Data Analysis & Stewardship • https://www.dtls.nl/community/interest-groups/data-stewards-interest-group/ • A community hub for Data Stewardship that enables informal and inclusive knowledge and experience exchange. • Ca. 90 members (NL, BE, LU, UK, SE, FR, DE) • Mission: • Providing a platform for Data Stewards and like-minded professionals in the Netherlands (and abroad) to share experience • Fostering the national implementation of Data Stewardship • Joining efforts to produce hands-on solutions • Contact: Jasmin Böhmer (chair), j.k.bohmer@umcutrecht.nl Contact Celia van Gelder Programme Manager DTL Learning/ELIXIR-NL Training, celia.van.gelder@dtls.nl Mateusz Kuzak Scientific Community Manager DTL, mateusz.kuzak@dtls.nl Christine Staiger Data Steward DTL, christine.staiger@dtls.nl The ELIXIR Data Stewardship Wizard
poster
ABSTRACT Hepatitis E virus (HEV) is a lesser-known hepatitis virus, but its worldwide spread is unquestionable and has increased in recent years. Zoonotic spread of HEV, mainly due to genotype (gt) 3, has emerged in developed countries in the past decade. The current study has shown for the first time that domestic pigs in Bulgaria are widely infected with HEV. There is a need for a method for cost-effective detection of HEV infection and for production of HEV immunogenic proteins that can be used as diagnostic antigens in serological tests. Also, in this study we have compared the results from a commercial ELISA and an in-house ELISA test based on HEV open reading frame (ORF) 2 protein produced in planta; the sensitivity and the specificity of the tests were evaluated. The development of an HEV diagnostic kit based on plant-derived ORF2 capsid protein is our goal; this will help to reduce the cost of such a kit and to enable diagnosis of HEV in developing countries. Serological diagnostics of hepatitis E virus infection in pigs based on capsid protein (ORF2 genotype 3) expressed in planta – sensitivity and specificity Katerina Takova 1, Tsvetoslav Koynarski 2, Valentina Toneva 1, 3, Gergana Zahmanova 1, 4 Acknowledgements In total, 433 swine blood samples were collected from 19 different pig farms and 1 slaughterhouse, located in eleven districts across Bulgaria: Pazardzhik, Plovdiv, Stara Zagora, Yambol, Ruse, Razgrad, Silistra, Shumen, Varna, Dobrich, and Burgas. The samples were collected during 2017 (n = 75), 2018 (n = 158), and 2019 (n = 200). In 2018, samples were taken from two age groups: 6-month-old pigs that weighed around 80–100 kg (n = 78), and approximately 3-month-old pigs that weighed around 30 kg (n = 80). The samples taken in 2017 and 2019 are all from 6-month-old pigs. In addition, 32 meat chunks from wild boar were collected for meat juice extraction.
The samples were collected from animals hunted in Stara Zagora, Vratsa, Montana, and Dobrich districts during the official hunting season. CONCLUSION: This study confirmed that a plant-derived HEV diagnostic antigen could provide a cost-effective and scalable alternative to the commercially available coating protein. This research was funded by the Research Fund at the University of Plovdiv, competition “Young Scientists and Doctoral Students” MU21-BF-02, the Bulgarian National Fund, Project DNTC/Russia 02-6, Project PlantaSYST. PlantaSYST has received funding from the European Union’s Horizon 2020 research and innovation programme (SGA-CSA No 664621 and No 739582 under FPA No. 664620) Figure 1. The map of Bulgaria shows the districts of origin of the farmed pigs tested in grey: Pazardzhik, Plovdiv, Stara Zagora, Yambol, Ruse, Razgrad, Silistra, Shumen, Varna, Dobrich, and Burgas. The origins of the wild boars tested are marked with red circles. Wild boar samples originated from Stara Zagora, Dobrich, Vratsa and Montana districts. Figure 4. Determination of the optimal antigen concentration (recombinant capsid protein) for the developed ELISA. A two-fold dilution series of the recombinant capsid protein was performed to identify the optimum working concentration, and a titration experiment was performed using pig sera of known serostatus. Results of the checkerboard titration analysis demonstrated that the most optimal and reliable results were obtained with a coating protein concentration of 2 ng/µL. Figure 2. A) A high HEV seroprevalence at all farms in 2019 [98% (95% CI 96.4 to 99.9)] compared to 2017 [45.33% (95% CI 2.7 to 87.3)] and 2018 [38.46% (95% CI 29.1 to 49.7)] is observed. A significant year effect (2019) was found, p-value < 0.01 (Table 2). The chi-square statistic is 149.0211. The p-value is < 0.00001. The result is significant at p < 0.01.
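The checkerboard titration described in Figure 4 starts from a two-fold dilution series of the coating antigen. A minimal sketch of generating such a series; the starting concentration here is illustrative, not the study's:

```python
def twofold_series(start, steps):
    """Two-fold dilution series: start, start/2, start/4, ..."""
    return [start / 2 ** i for i in range(steps)]

# e.g. coating-protein concentrations in ng/uL (illustrative start value)
concentrations = twofold_series(16.0, 6)  # [16.0, 8.0, 4.0, 2.0, 1.0, 0.5]
```

Each concentration is then assayed against sera of known serostatus, and the lowest concentration that still separates positives from negatives reliably is chosen as the working concentration.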
B) Distribution of the observed within-farm seroprevalence of HEV among the slaughter-aged pigs (17 farms and 1 slaughterhouse with 30 %, 2017-2020, Bulgaria). The chart r
poster
Potential Cost-saving for Liquid-to-Solid Medicines among Paediatric Patients Wei Chern Ang1,2, Mohamad Syafuan Fadzil2*, Chian Khie Lim2, Nur Inani Mohamad Nasir2, Muhammad Asyraf Amir2 1 Clinical Research Centre, Hospital Tuanku Fauziah, Ministry of Health Malaysia 2 Department of Pharmacy, Hospital Tuanku Fauziah, Ministry of Health Malaysia BR-08 NMRR-18-2790-44206 Introduction The World Health Organisation (WHO) recommends that children be treated with oral solid medicines, especially for drugs requiring a precise dose titration.1 Problem statement: Factors that can affect a child’s success in swallowing pills include developmental stage, fear, anxiety, intolerance of unpleasant flavors and failure to appreciate the risks associated with non-compliance. Inadequate dosing flexibility without the need for tablet crushing explains the lack of use of solid oral formulations in paediatrics.2 Current practice is based only on the preferences of prescribers and patients' parents/carers. Justification of Study: • There is still a lack of evidence regarding the suitability of solid oral formulations in Malaysia • There is no published study in Malaysia that analyses the cost differences when oral liquid medications are changed to oral solid medications in a hospital outpatient setting. Objective: 1. To examine the association between demographics and solid dosage suitability. 2. To investigate factors affecting prescribing of solid oral dosage forms. 3. To examine the potential cost savings if patients were prescribed solid over liquid preparations. Methods This cross-sectional study was conducted from October 2018 to March 2019 among patients visiting the Paediatric Clinic, Hospital Tuanku Fauziah (HTF), Kangar, Malaysia. Discussions/Conclusions 1. WHO (2008). Report of the informal expert meeting on dosage forms of medicines for children. Geneva, Switzerland. 15-6. 2. Patel A et al. (2014). J Pediatr. 135;183 3. EMA (2011) Guideline on Pharmaceutical Dev.
of Medicines for Paediatric Use. 4. Lajoinie A et al. (2014). Br J Clin Pharmacol. 78(5); 1080-1089. 5. Jacobsen L et al. (2016). Tablet/capsule size variation among the most commonly prescribed medication for children in the USA; 65-73. Table 1. Association between demographics and solid dosage suitability, Yes n (%) vs No n (%): Age (years): 2-5, 107 (47.6) vs 118 (52.4); 6-11, 50 (42.0) vs 69 (58.0); >12, 10 (37.0) vs 17 (63.0); p = 0.425* Weight (kg, adjusted by age): <13, 97 (50.8) vs 94 (49.2); >13.01, 70 (38.9) vs 110 (61.1); p = 0.021* Race: Malay, 162 (45.0) vs 198 (55.0); Non-Malay, 5 (45.5) vs 6 (54.5); p = 0.976** Disease condition: Acute, 153 (46.8) vs 174 (53.2); Chronic, 14 (31.8) vs 30 (68.2); p = 0.075** Gender: Male, 96 (46.8) vs 109 (53.2); Female, 71 (42.8) vs 95 (57.2); p = 0.463** Table 2. Factors affecting prescribing of solid oral dosage forms, crude OR (95% CI): Age (years): 2-5, 1.00 (ref.); 6-11, 1.25 (0.80, 1.96), p = 0.327; >12, 1.54 (0.67, 3.51), p = 0.303 Weight (kg): 1.04 (1.01, 1.07), p = 0.004 Disease condition: Acute, 1.00 (ref.); Chronic, 0.53 (0.27, 1.04), p = 0.064 Gender: Male, 1.00 (ref.); Female, 1.18 (0.78, 1.78), p = 0.435 In conclusion, conversion of liquid to suitable oral solid medications is cost-saving and feasible to practise. • Most tablets prescribed were in the range of 5-10 mm (62.5%), which are only suitable for 6-11 year olds.3 However, there is no significant difference between age group and solid dosage suitability (p > 0.05). • Most prescribed paediatric medications varied widely in size, e.g. paracetamol 500 mg tablets ranged from 5 to 22 mm, with a median of 15 mm. The common paediatric antibiotics were larger, with a median diameter of 17 mm.5 • Most of our paediatric patients were 2-5 years old (60.6%). However, only a fraction (15.1%) of tablets, those 3-5 mm in size, were suitable for this age group. Liquid formulations will still be suitable for this group of patients. In our setting, parents/carers may still be afraid for patients to take solid dosage medication at this age. • Size-based suitability criteria for oral solid dosage forms from the EMA3 may not be appropriate for Asian populations.
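The crude odds ratios above follow directly from the 2×2 counts in the demographics table; for example, chronic vs acute disease condition (suitable yes/no counts 14/30 vs 153/174) reproduces OR 0.53 (0.27, 1.04). A minimal sketch of the standard log-OR Wald interval:

```python
import math

def crude_or(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI for a 2x2 table:
    exposed:   a (outcome yes), b (outcome no)
    unexposed: c (outcome yes), d (outcome no)"""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# chronic (14 suitable / 30 not) vs acute (153 / 174), counts from Table 1
or_, lo, hi = crude_or(14, 30, 153, 174)  # ~0.53 (0.27, 1.04)
```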
To the best of our knowledge, there is no consensus or g
poster
Introduction Methodology Results Next steps References Acknowledgements A - Samples of different insects (Honeybees, Solitary Bees and Bumblebees) were scanned in TIF format, 16-bit depth, 1920x1920 pixels. B - CT scans in raw form, without filtering. In some cases, filtering was required at this stage using Python scripts (Figure 1). C - 3D model processed in the Blender 3D software, using automatic and manual tools for cleaning up (Figure 2). D - Model after cleaning up and with internal armature for rigging (Figure 3). E - Model in natural position, with wings fixed, ready to be exported. F - Rendered model made in Blender Cycles for scientific illustration. G - Voxelized model in Sim4Life ready for simulations (3.5 GHz, 20M cells). 25 CT scans were processed into 3D models in STL format, ready to be used in RF-EMF simulations. Models were tested for compatibility with the Sim4Life software. For some models, reconstruction of missing parts such as wings and legs was necessary, based on references from the literature. For manual tasks, Python scripts using Blender API workflows were created to automate the work where possible. Scripts can generate simple structures like antennae and leg segments. [1] Thielens, et al. Scientific Reports 8(1), 3924, 2018. [2] Thielens, PE 690.021, 2021, doi: 10.2861/318352. - Insects are crucial for the sustainability of ecosystems. - Telecommunication antennas emit Radio-Frequency Electromagnetic Fields (RF-EMFs) that could affect insects. As telecommunications evolve (from 4G to 5G), the exposure of insects to RF-EMFs is likely to vary [1]. - Near powerful antennas, pollinators like bees face relatively high levels of RF-EMF exposure. This exposure can cause biological reactions in insects [2], yet the consequences of RF-EMF exposure on insects remain uncertain [2]. ETAIN has received funding from the European Union’s Horizon Europe research and innovation program under grant agreement No.
101057216 To assess the absorption of RF-EMFs in insects by utilizing detailed anatomical 3D models derived from CT scans, and to develop adaptable models compatible with EMF simulation software for various simulation scenarios. In this poster we focus on the process of extracting CT scan data to 3D models. Conduct simulations across each model at frequencies (1.8, 3.5, and 26 GHz) to verify whole-body averaged absorbed power, specific absorption rates and absorbed power densities. This will facilitate the analysis of how the distinct shapes of insects influence the results of these simulations. Implement simulations on heterogeneous models by assigning varied dielectric properties to different parts of the insects. This approach will enable a more detailed and accurate representation of real-world conditions. Additionally, undertake simulations that focus on the internal structures of insects, thereby providing deeper insights into the internal effects of RF-EMF exposure. By executing these steps, the project aims to shed light on the complex interactions between electromagnetic fields and biological entities, offering valuable data for both scientific understanding and practical applications in the field of bioelectromagnetics. Goals Figure 3. Illustration of the processing method of a CT scan of Osmia bicornis. Figure 1. Bombus terrestris CT scan filtering method. 1.1 - Raw CT scan file. 1.2 3D model extracted from 1.1. The insect fur creates extra volume and issues to process the 3D model. 1.3 Filtered CT scan with lower density structures as the fur removed. 1.4 Updated 3D model without problematic fur meshes. Figure 2. 3D Models processing steps in Blender. Scripts have been implemented to streamline data aggregation into a unified table, capturing species name, collection site, and date. 
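The density filtering shown in Figure 1 (removing low-density fur voxels before meshing, so the fur does not generate spurious geometry) can be sketched as a simple threshold over the voxel values. The threshold value below is an arbitrary placeholder, not the one used in the project:

```python
def filter_low_density(volume, threshold):
    """Zero out voxels below a density threshold, so low-density
    structures (e.g. insect fur) do not generate mesh geometry.
    `volume` is a nested list of voxel intensity values (z, y, x)."""
    return [
        [[v if v >= threshold else 0 for v in row] for row in plane]
        for plane in volume
    ]

# toy 1x2x3 volume; values below 100 (placeholder threshold) are removed
vol = [[[50, 120, 80], [200, 30, 150]]]
filtered = filter_low_density(vol, 100)  # [[[0, 120, 0], [200, 0, 150]]]
```

In practice this kind of operation would run over the full CT stack (e.g. as a NumPy array) before the marching-cubes extraction, but the thresholding logic is the same.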
Additionally, it compiles morphological metrics calculated in Blender alongside Sim4Life simulation parameters and outcomes, providing a consolidated view of each sample's characteristics and simulation data. 3D Models were created using C
poster
Scientific Knowledge Graphs and Metadata Infrastructure
Daan Broeder, Menzo Windhouwer, Michael Kurzmeier, Katja Moilanen
CLARIN ERIC, KNAW/HuC-DI, OAEW, TAU-FSD

Scientific Knowledge Graphs
Scientific Knowledge Graphs (SKGs) have recently gained popularity as a rich metadata structure for describing research activities using triples. The idea that a fully connected graph better represents the real world than a hierarchical structure is now widely accepted. Current implementations include the OpenAIRE Research Graph, the PID Graph (DataCite), the Open Research Knowledge Graph (TIB), …
[Figures: OpenAIRE Graph data model; DataCite PID Graph content]

From Scholarly to Scientific Knowledge Graphs?
Within the RDA WG SKG Interoperability Framework, some existing SKGs have been developing a standardised model for SKGs, though mainly from the perspective of publishing and measuring research output for funders. To better model the research process, entities and relationships involving data creation and processing, such as processing services and instruments, are needed. The research communities collaborating in the OSTrails project are advancing this work in the SKG-IF WG.

Metadata Interoperability: SKGs to the rescue?
In the OSTrails project the creation of SKGs by communities is an important means for improving the discoverability and interoperability of community resources. From the SSHOC science cluster, a number of CLARIN, DARIAH and CESSDA partners closely collaborate. The SSHOC partners are working towards creating SKGs from their rich metadata catalogues:
- CESSDA Data Catalogue (CDC)
- CLARIN Virtual Language Observatory (VLO)
- SSH Open Marketplace (SSHOMP)
[Figure: existing SKG-IF model]

SSHOC science cluster use-cases
Retrieve information from SKGs:
1. When a user creates a new item in the SSHOMP, they enter the PID and the fields for the new item are populated with the retrieved results.
2. When a user visits a dataset in the VLO, check whether the creator also has resources in the CESSDA SKG.
[Diagram: community metadata providers → (OAI-PMH + ?) → metadata catalogue / metadata repository → SKG publishing (JSON / RDF / …; JSON API + SPARQL + …; std. vocabularies) → SKG-based information exchange between SKGs]
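Use-case 1 could be sketched as a SPARQL lookup against an SKG endpoint: given a PID, retrieve candidate field values to pre-populate the new SSHOMP item. The `build_pid_lookup` helper and the `dct:` predicates are illustrative assumptions, not the actual SKG schemas or endpoints:

```python
def build_pid_lookup(pid: str) -> str:
    """Build a SPARQL query that fetches title/creator for a resource
    identified by the given PID (predicates are placeholder Dublin Core)."""
    return f"""
PREFIX dct: <http://purl.org/dc/terms/>
SELECT ?title ?creator WHERE {{
  ?resource dct:identifier "{pid}" ;
            dct:title ?title .
  OPTIONAL {{ ?resource dct:creator ?creator . }}
}}
"""
```

The returned query string would then be posted to the community SKG's SPARQL endpoint and the bindings mapped onto the SSHOMP item fields.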
poster
Kosuke Namekata (National Astronomical Observatory of Japan, e-mail: kosuke.namekata@astro.nao.ac.jp)1, Hiroyuki Maehara1, Satoshi Honda2, Yuta Notsu3, Daisaku Nogami4, Kazunari Shibata4, and the SMART & OISTER team collaborations. (1) National Astronomical Observatory of Japan (NAOJ); (2) University of Hyogo; (3) Colorado University; (4) Kyoto University

A Filament Eruption and Coronal Mass Ejection from a Superflare on a Young Sun-like Star

We report that our optical spectroscopic observation of the young solar-type star EK Draconis reveals the first evidence for a stellar filament eruption associated with a superflare on a young Sun-like star (Namekata et al. 2021 (or 2022), Nature Astronomy, 6, 241). We monitored EK Draconis for about 30 nights with TESS's optical photometry and ground-based spectroscopy, and finally detected a superflare in white light and Hα. After the superflare, with a radiated energy of 2.0×10^33 erg, a blue-shifted Hα absorption component with a large velocity of -510 km s^-1 appeared. The temporal changes in the spectra greatly resemble the Sun-as-a-star spectra of solar filament eruptions, indicating a common picture of solar and stellar filament eruptions. The comparison of this stellar filament eruption with solar filament eruptions in terms of length scale and velocity strongly suggests that this would eventually become a stellar CME. The mass of the erupted filament, 1.1×10^18 g, is surprisingly 10 times larger than those of the largest solar CMEs. The huge filament eruption and the associated CME on the young Sun-like star provide the first opportunity to evaluate how the young Sun-like star/young Sun affects the environment of young exoplanets/the young Earth, as well as stellar mass/angular-momentum evolution (cf. the faint young Sun paradox).

1. Introduction 2. Observations 3. Filament eruption on a young Sun-like star 4. Sun-as-a-star analysis 5. Frequency of CMEs 6. Summary & Future Works 7. References

1. Namekata et al.
2021 online (or 2022a), Nature Astronomy, 6, 241. 2. Namekata et al. 2022b, ApJ Letters, 926, 5. 3. Namekata et al. 2022c, accepted (https://arxiv.org/abs/2206.01395). [See my talk in the splinter session "Solar and stellar coronal mass ejections".]

FIGURE 3: A solar flare WITH a filament eruption. (middle) Light curve. (right) Dynamic spectrum of the Hα line (Namekata et al. 2021 (or 2022a), Nat. Astron.).

What does the eruptive event on EK Dra look like? To answer this question, we conducted a Sun-as-a-star analysis of the Hα spectra of solar filament eruptions. As a result, the spectral features are very similar to those of the event on EK Dra, indicating that the event on EK Dra was a large-scale filament eruption whose picture is similar to that of solar filament eruptions.

FIGURE 4: A solar flare WITHOUT a signature of filament eruption. In this case, a strongly redshifted & broadened spectrum can be seen (Namekata et al. 2022c, ApJ).

FIGURE 1: A stellar filament eruption from a 2×10^33 erg superflare on the young Sun-like star EK Dra. (left) Light curve in Hα. (middle) Pre-flare-subtracted spectrum. (right) Dynamic spectrum (Namekata et al. 2021 (or 2022a), Nat. Astron.).

We detected Hα spectra from superflares (10^33-10^34 erg) on a young Sun-like star for the first time. Surprisingly, one of them shows post-flare Hα dimming, and the Hα spectrum shows a blue-shifted absorption component. We concluded that we detected a stellar filament eruption on a young Sun-like star for the first time.

FIGURE 2: The large velocity and length scale indicate that the filament eruption eventually becomes a stellar CME. Properties: velocity -510 km/s; length scale 500 Mm; mass 1.1×10^18 g (10× the largest solar CME).

FIGURE 5: NO filament eruption signature in a 10^34 erg superflare on the young Sun-like star EK Dra (Namekata et al. 2022b, ApJ Letters).
Other superflares DO NOT necessarily show filament eruption signatures ⇒ Signatures of filament eruption
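As a back-of-the-envelope check on the quoted numbers (our illustration, not a figure from the poster), the kinetic energy implied by the erupted mass and velocity is comparable to the radiated flare energy of 2.0×10^33 erg:

```python
# Kinetic energy of the erupted filament from the quoted properties.
mass_g = 1.1e18           # erupted filament mass, g
velocity_cm_s = 510.0e5   # 510 km/s expressed in cm/s
e_kin_erg = 0.5 * mass_g * velocity_cm_s**2  # ~1.4e33 erg
```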
poster
Innovation Radar INCUBATOR HUB www.cincubator.com @CincubatorHUB @cincubator @CloudIncubatorHUB
poster
Stability and Interfacial Phenomena of W/O Emulsions via Molecular Dynamics
Luís G. Gómez Martínez1,4, Deisy G. Giraldo3,4, Marianny Y. Combariza2,5, Cristian Blanco Tirado2,5 & Aldo F. Combariza1,4
1 Departamento de Biología y Química, Facultad de Educación y Ciencias, Universidad de Sucre. 2 Escuela de Química, Facultad de Ciencias, Universidad Industrial de Santander (UIS). 3 Institut des Sciences Analytiques et de Physico-chimie pour l'Environnement et les Matériaux, Université de Pau et des Pays de l'Adour. 4 Grupo de Investigación en Modelamiento Molecular y Simulación Computacional In silico, Universidad de Sucre. 5 Grupo de Investigación en Fisicoquímica Teórica y Experimental GIFTEX, Universidad Industrial de Santander.
e-mail: luis.gomez@unisucrevirtual.edu.co, aldo.combariza@unisucrevirtual.edu.co

Acknowledgements & Computational Resources
We thank Universidad de Sucre and Universidad Industrial de Santander for their support, and the In-silico & GIFTEX research group members for their support in the development of this work. We thank the QCC organizing committee for the opportunity to participate in this event. We thank IAAPP for granting us access to the INKARI cluster and Ronald Apaza for his support. We also thank Dr. Wilson Castro Silupu and Ronald Apaza for providing the link to INKARI.
Motivation
Main triacylglycerols (TAGs) and phospholipids present in Theobroma cacao L.:
- TAGs: 1-palmitoyl-2-oleoyl-3-stearoyl-glycerol (POS), 1,3-dipalmitoyl-2-oleoylglycerol (POP), 1,3-distearoyl-2-oleoylglycerol (SOS).
- Phospholipidic emulsifier: 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC); phospholipid classes include phosphatidylcholine (PC), phosphatidylethanolamine (PE), phosphatidylserine (PS), phosphatidylglycerol (PG), phosphatidylinositol (PI) and phosphatidic acid (PA).
Emulsion types: water-in-oil (W/O) and oil-in-water (O/W); mix + emulsification.

Methodology / System setup
Tools: CHARMM-GUI (Effective Simulation Input Generator and More), the Automated Topology Builder (ATB) and Repository, and Visual Molecular Dynamics. We select/download the TAG topology files (SOS and POP) from the ATB repository to set up the simulation cell.
Simulation parameters: temperature 298 K; pressure 1 bar; force field CHARMM36; water model TIP3P; GROMACS version 2019.3; statistical ensembles NVT (for minimization) and NPT (for equilibration and production simulations).
First we run a minimization of 10,000 steps (step size 2 fs, 20 ps) within the NVT ensemble and six different equilibrations of 125,000 steps (250 ps) in the NPT ensemble under normal conditions of pressure and temperature. Finally, we run a production simulation of 50 ns in the NPT ensemble under normal conditions of pressure and temperature, followed by analysis of the MD trajectories.

Results and Discussion
Kirkwood-Buff simplified formulation, in which we consider the tangential component (P_T = (P_xx + P_yy)/2) and the normal component (P_N = P_zz) of the Cauchy stress tensor with respect to the interfacial plane.
Theoretical interfacial tension (IFT) of commercial and cocoa emulsions (mN/m):
- IFT γ of the pattern commercial emulsion: 2.74 mN/m
- IFT γ of the pattern cocoa emulsion: 1.98 mN/m
- Experimental IFT γ of oil (sunflower) and water: 24.80 mN/m

Preliminary Conclusions
● MD simulations and trajectories showed that the initial systems of cocoa emulsions (with pure components) were well-equilibrated, and the IFT values ranged between 1 and 10 mN/m, which are good values for emulsions.
● RDF graph showed pea
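The pressure-tensor route to the IFT described above can be sketched as follows (a minimal illustration; the pressure time series and box length are assumed inputs extracted from a slab-geometry MD run, and the 1/2 factor accounts for the two interfaces):

```python
import numpy as np

def interfacial_tension(pxx, pyy, pzz, box_z_nm):
    """Estimate IFT (mN/m) from time series of the pressure-tensor
    diagonal components (bar) of a slab system with two interfaces:

        gamma = (Lz / 2) * < P_N - P_T >,
        P_N = P_zz (normal),  P_T = (P_xx + P_yy) / 2 (tangential).

    Unit conversion: 1 bar*nm = 0.1 mN/m.
    """
    p_t = (np.asarray(pxx) + np.asarray(pyy)) / 2.0
    gamma_bar_nm = (box_z_nm / 2.0) * np.mean(np.asarray(pzz) - p_t)
    return gamma_bar_nm * 0.1  # bar*nm -> mN/m
```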
poster
Contact us now: info@csem.ch • www.csem.ch
Developing high-performance and safe electrolytes for silicon and lithium metal batteries
Sufu Liu, Leonardo Pires Da Veiga, Chengyin Fu, Mohammed Srout, Andrea Ingenito
CSEM Sustainable Energy Center, CSEM SA, Switzerland

In-situ thermally triggered polymer for silicon battery electrolyte

Abstract: Compared with state-of-the-art graphite anodes, silicon and lithium metal are considered the most promising alternatives for higher-energy-density lithium batteries owing to their high theoretical capacity. However, significant challenges such as low coulombic efficiency, short cycle life and safety concerns have seriously hindered their practical application [1,2]. To enable the use of silicon anodes, we are developing a polymer electrolyte with self-healing properties to stabilize the electrode interphase and restrain the huge volume change of the silicon material. When compared with a commercial carbonate liquid electrolyte with or without additives, our designed polymer electrolyte gives the NMC622//Si full cell improved capacity retention and combustion resistance. Meanwhile, for rechargeable lithium metal batteries, our designed nonflammable electrolyte also gives the LFP//ultra-thin lithium metal (25 µm) cell a greatly extended cycling life, delivering more than 88% capacity retention after 400 cycles. Our research paves the way for safer and more efficient lithium-ion batteries with silicon or lithium-metal-based anodes, which will help to realize higher-energy-density and longer-lasting battery technologies.

Conclusion and future work
• In-situ thermally triggered polymerization is a promising method for fabricating a polymer electrolyte with self-healing properties that outperforms the commercial carbonate electrolyte.
• Electrolyte engineering with aggregate and contact-ion-pair structures is an effective strategy to develop advanced electrolytes with high Li-metal Coulombic efficiency.
• Thermally evaporated thin Li metal and advanced electrolyte design could be an effective combination to promote long-term cycling of lithium metal batteries.

References: [1] Park et al., Chem. Sci., vol. 14, pp. 9996-10024, 2023. [2] Narayan et al., Adv. Energy Mater., vol. 12, p. 2102652, 2022.

Advanced electrolyte for lithium metal batteries
• 99.6% Li Coulombic efficiency could be reached in the obtained electrolyte.
• Thermally evaporated Li metal with high purity.
• 88% capacity retention after 400 cycles has been achieved when coupling the in-house-prepared ultrathin Li anode with a commercial LFP cathode.
• The 60% Si coin cell exhibits enhanced capacity retention; the 20% SiOx pouch cell shows stable cycling.

[Figures: the mechanism diagram of in-situ polymerization of the thermally triggered polymer; the structural design of the advanced electrolyte for LMBs]

Building more reliable and better-performing batteries by embedding sensors and self-healing functionalities to detect degradation and repair damage via advanced Battery Management Systems.

Methodology
• Develop self-healing battery materials and sensing devices.
• Validate the triggering mechanisms and degradation detection.
• Assess the manufacturing, recycling, and sustainability process and develop the Battery Management System.

Consortium
poster
ENRICHING THE EVIDENCE BASE OF CO-CREATION IN PUBLIC HEALTH WITH METHODOLOGICAL PRINCIPLES OF CRITICAL REALISM
AUTHORS: MESSIHA, K (1); ALTENBURG, TM (1); SCHREIER, M (2); LONGWORTH, GR (3); THOMAS, N (4); CHASTIN, S (5,6); CHINAPAW, MJM (1)

INTRODUCTION
Meta-theory, such as Critical Realism (CR), provides a foundation for understanding and researching phenomena. CR, originating from Bhaskar's work, distinguishes between the real and the observable world and encourages exploring complex social phenomena with a focus on causal mechanisms. Co-creation research lacks explicit meta-theoretical foundations, whereas CR convincingly argues that hypothesised mechanisms should have the strongest explanatory power in relation to empirical evidence. This study aims to explore critical realism as a promising meta-theory providing clear methodological principles [1,2] to enrich the evidence base for co-creation in public health research.

METHODS
Figure: Information flow chart of paper selection — free-hand search, Google Scholar and Litmaps → full-text screening → formative synthesis output → deliberative meetings → summative synthesis output → assessing applicability and usability (real-life case study "KiA" [3] + WP3-6 feedback).

RESULTS + MAIN TAKE-HOME MESSAGES
Critical realism methodological principles seem well-suited as a meta-theoretical framework for evidence-based co-creation in public health empirical research.

Principle 1 — Explication of events: identification and detailing of the critical/important events as a set of related actions or changes that occur over time and have a particular outcome or goal. Define the event as an outcome of research; review literature for concepts, studies, and evidence; provide a chronological narrative; analyse co-creators' perspectives; compile key components: actors, actions, objects, outcome.
Principle 2 — Explication of structure + context: identification of social (norms) + physical structure, the contextual environment + their relationships as linked to the event(s) which occurred. Identify + analyse the components of the contextual environment that influence the event; clarify their connections in order to understand how the event is influenced.
Principle 3 — Retroduction: identification + explanation of the underlying mechanisms that caused the observed events. Propose superior mechanisms for event explanation; identify contextual entities, including affordances influencing behaviours; note that retroduction has 4 types: overcoded, undercoded, creative + meta-retroduction.
Principle 4 — Empirical corroboration: the validation/'confirmation' of proposed causal mechanisms through empirical testing. Formulate hypotheses based on prior knowledge; test using empirical methods, longitudinally; analyse data for variable relationships; evaluate results for empirical adequacy; further assess to corroborate/refute refined theories.
Principle 5 — Triangulation and multi-methods: using (i.e., combining and integrating) a variety of data types and sources, relevant theories, analytical methods + observers in a research study to identify causal relationships. Use diverse theories, methods, data sources + investigators for comprehensive perspectives; assess the significance of the event analysis (non-definitive).

CR principles with convincing empirical parallels to KiA:
Event: childhood overweight (outcome). Explored literature and local data (KiA neighbourhood). Participatory assessment: 3-4 group meetings (children, n=20), interviews with parents (n=27) and professionals (n=9).

www.healthcascade.eu @health_cascade
Health CASCADE is a Marie Skłodowska-Curie Innovative Training Network funded by the European Union's Horizon 2020 research and innovation programme under Marie Skłodowska-Curie grant agreement n° 956501.
Results: Childhood overweight identified as main issue, with insufficient physical activity and unhealthy diet as main risk factors. Actions: Unhealthy behaviours. Objects: School, home, neighbourhood. Needs assessment w
poster
The breakdown of current gyrochronology as evidenced by old coeval stars
Diego Godoy-Rivera (★), Joaquín Silva-Beyer, Julio Chanamé

Context
• Gyrochronology promises to deliver stellar ages for field main-sequence stars from rotation period measurements.
• Before being widely used, the age-rotation relations must be comprehensively examined.
• We develop a new method to test the state-of-the-art gyrochronology relations with old coeval stars.

Results
• In Figure 3, the members of Clusters show a better agreement with gyrochronology (highest peak), followed by the Wide Binaries.
• We use Random Pairs as a control sample to calibrate the expected agreement for unassociated stars (assembled by disassociating the binary components).
• The Random Pairs show a distribution in agreement with theoretical expectations (Expected ΔProt).
• Our results demonstrate that the gyrochronology relations under study (Angus+19, Spada & Lanzafame 2020) do have predictive power, as the sets of coeval stars (Clusters, Wide Binaries) are in better concordance than the unassociated Random Pairs.
• In Figure 4, we test the fractional agreement with gyrochronology as a function of age using the different samples.
• We find good agreement at young ages (≲ 1 Gyr), but a clear degradation towards older ages (≳ 2 Gyr).

Conclusions
• This highlights the need for novel empirical constraints at older ages, which may allow revised gyrochronology calibrations.

Fig. 1: Illustration of the rotation period test ΔProt,gyro (top), and examples of low and high ΔProt,gyro values (bottom).
Fig. 2: Prot vs. color diagram for open clusters (top) and wide binaries (bottom).
Fig. 3: Normalized distribution of ΔProt,gyro values for the different data samples. More centrally concentrated peaks mean better agreement with gyrochronology.
Fig. 4: Agreement of gyrochronology as a function of age, for the different samples.

Silva-Beyer, Godoy-Rivera & Chanamé (2023)
Method and Data
• In Figure 1, we design an indicator that quantifies the (dis)agreement between coeval pairs and a given gyrochronology relation: for a given pair, ΔProt,gyro is the difference Prot(expected) - Prot(measured) for the secondary, given the gyrochrone fit to the primary.
• We run the test using samples of stellar pairs with coeval, rotating components, for a range of ages.
• These correspond to open clusters and Kepler-field wide binary systems, as shown in Figure 2.

Gyrochronology relations: Angus et al. (2019); Spada & Lanzafame (2020).
Data sets: Curtis et al. (2020); Godoy-Rivera et al. (2018, 2021b); Gruner & Barnes (2020); Meibom et al. (2015).

[Fig. 3-4 annotations: gyrochronology relation = Spada & Lanzafame (2020); fraction of each sample with |ΔProt,gyro| ≤ N·σ_ΔProt,gyro]

(★) diego.godoy.rivera@iac.es
For more details, please see:
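The fractional-agreement statistic plotted in Figure 4 can be sketched as follows (a hypothetical helper, not the authors' code; σ would come from the Random Pairs control sample or the expected ΔProt distribution):

```python
import numpy as np

def gyro_agreement_fraction(delta_prot, sigma, n=1.0):
    """Fraction of pairs whose gyrochronology residual satisfies
    |DeltaProt,gyro| <= n * sigma, where sigma is the expected scatter."""
    delta_prot = np.asarray(delta_prot, dtype=float)
    return float(np.mean(np.abs(delta_prot) <= n * sigma))
```

Applied per age bin, a declining fraction with age reproduces the "breakdown" trend the poster reports.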
poster
IMPORTANCE OF OCEAN HEAT CONTENT TO THE GLOBAL CLIMATE SYSTEM
Eileen Maturi1, David Donahue2, Deirdre Bryne1
1. NOAA/NESDIS/STAR 2. NOAA/NESDIS/OSPO

Ocean Heat Content (OHC)
The Ocean Heat Content is a measure of the vertically integrated temperature from the 26 °C isotherm to the sea surface, where surface boundary conditions exist (the area under the curve in Fig. 1). The 26 °C isotherm defines the lower limit at which convection can develop and strengthen. Surface winds generate turbulent mixing within the uppermost part of the ocean; this layer is called the Mixed Layer and can often reach depths of hundreds of meters. Turbulence associated with the Mixed Layer is responsible for providing a large reservoir of heat energy that can be tapped by a tropical cyclone, which can intensify and extend the lifetime of the cyclone.

Figure 1. Schematic of the OHC calculation. The red shading shows the true OHC obtained by integrating the black temperature profile. The dashed blue line shows the approximated temperature profile of the upper ocean.

Summary
The Madden-Julian Oscillation (MJO) and the El Niño/La Niña oscillations have global impacts which affect all the basins. It is important to know the Ocean Heat Content of all the ocean basins to predict rainfall patterns, tropical and mid-latitude cyclones, and drought. All these phenomena are components of the Global Climate System. NOAA/NESDIS generates a daily suite of satellite-generated OHC products for the Atlantic and Pacific. There are plans to generate a suite of OHC products for the Indian Ocean.

Operational Ocean Heat Content (OHC) Products

IMPACTS
• The OHC products are important for the monitoring and understanding of coral reef systems. Coral reefs are sensitive to unusually warm water; increased ocean heat leads to the destruction of coral reefs.
• Higher oceanic heat content leads to increased rainfall, which is beneficial for agriculture.
• Increased oceanic heat content in the Northern Indian Ocean leads to increased summer monsoon rainfall in India.
• Improved ecological forecasting leads to better management of fish stocks, which are an important food source for ocean-bordering countries.

There is only one true ocean, the "World Ocean." For simplicity, the World Ocean is divided into 5 interconnected ocean basins: the Atlantic, Pacific, Indian, Arctic, and Southern Oceans.

Climatic Variability
The Indian Ocean is the warmest of the 5 ocean basins. An Indian OHC product can provide information about the Madden-Julian Oscillation (MJO), which is a major factor in determining rainfall, drought, and tropical cyclone patterns in the Eastern Hemisphere. The MJO can be visualized as alternating regions of rising air and sinking air. The rising air is associated with tropical rainfall. The sinking air is associated with drier conditions, allowing more solar insolation to reach the ground and inducing drought. It can also play a role in strengthening or weakening tropical cyclones. In addition, the OHC can be used to gauge the strength and duration of the Indian, Asian and North American summer monsoons. The MJO typically moves across the tropical Indian Ocean and western Pacific before weakening, though it occasionally circumnavigates the global tropics in 30-60 days (1-2 months). The MJO differs from El Niño/La Niña primarily in that the MJO recurs more frequently (lasting 1-2 months), compared to El Niño/La Niña, which recurs less frequently (lasting 6-9 months or longer). The OHC is especially important for the Atlantic and Pacific basins to help gauge the intensity and duration of El Niño/La Niña, tropical and extratropical (mid-latitude) cyclones, and anomalous rainfall patterns.
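The OHC integral sketched in Figure 1 can be written as ρ·c_p·∫(T − 26 °C) dz over the layer warmer than 26 °C. A minimal sketch (the seawater density and specific heat are typical assumed values; the operational products' exact constants and units may differ):

```python
import numpy as np

RHO = 1025.0   # seawater density, kg/m^3 (typical value; assumption)
CP = 3985.0    # seawater specific heat, J/(kg K) (typical value; assumption)

def ocean_heat_content(depth_m, temp_c, t_ref=26.0):
    """OHC (kJ/cm^2) relative to the t_ref isotherm.

    depth_m: depths in m, increasing downward from the surface.
    temp_c:  temperatures (degC) at those depths.
    Only water warmer than t_ref contributes (the area under the
    temperature curve down to the t_ref isotherm).
    """
    z = np.asarray(depth_m, dtype=float)
    excess = np.clip(np.asarray(temp_c, dtype=float) - t_ref, 0.0, None)
    # trapezoidal integration of the temperature excess over depth (K*m)
    integral = float(np.sum((excess[1:] + excess[:-1]) / 2.0 * np.diff(z)))
    return RHO * CP * integral / 1e7  # J/m^2 -> kJ/cm^2
```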
poster
Thibault Goessel*1,2, Simon Ligier1, Adélaïde Aublet-Mailhac1, Robin Girard2
1 Centre Scientifique et Technique du Bâtiment (CSTB); 2 Mines Paris - PSL, Centre PERSEE. PhD started on 21/11/2022. THIBAULT.GOESSEL@CSTB.FR

A prospective methodology for the multi-criteria optimization of building energy renovation
IBPSA FRANCE CONFERENCE, 13-17 MAY 2024, LA ROCHELLE AND OLÉRON

/ CONTEXT AND OBJECTIVES
Context
• The building sector accounted for 25% of France's carbon footprint in 2019. Carbon emissions are spread across the whole value chain: roughly 2/3 for the building operation phase and 1/3 for the construction and end-of-life phases [1].
• Large-scale renovation of the existing building stock stands out as a major decarbonization lever. Ambitious national targets have been set, such as reaching, on average across the stock by 2050, the low-energy-building level ("Bâtiment Basse Consommation") [2]. This would require 700,000 to 1 million high-performance renovations per year, versus 66,000 in 2022 [3].
• Complementary criteria exist for judging the performance of a renovation. Several studies have begun to analyse how the choice of criteria influences the choice of renovation strategies, focusing mainly on cost and greenhouse gas emissions [4,5]. Taking comfort into account is more recent [6].
Objectives
1) Develop a method for identifying optimal renovation strategies, based on the assessment of impacts over the life cycle of the renovated building.
2) Analyse the influence of global warming on the identification of renovation strategies.

/ METHODOLOGY
• Case study on a single-family house: concrete-block walls, electric resistance (Joule) heating, lightly insulated roof, located in Gironde (climate zone H2c), Cep = 676 kWh/m²/yr.
• 800+ renovation strategies combining the technical work packages: heating, cooling, ventilation, walls, roof, windows.
• Discount rate of 3.2%.

/ RESULTS
• 12 strategies lie on the 3D Pareto front (cost, carbon, thermal discomfort):
 - All of them renovate the heating system (wood boiler) and insulate the walls (24 cm of external wood-fibre insulation).
 - Half of them renovate the ventilation (double-flow ventilation).
• The typical-year climate projection worsens thermal discomfort (+200 to +400 °C·h) but has only marginal effects on total cost and carbon footprint.
• The choice of optimal strategies is only slightly modified by the climate projection: of the 12 strategies on the 3D Pareto front under the historical climate, 11 remain optimal under the projected climate.

[1] Feuille de route décarbonation du bâtiment, 2023. [2] Plan rénovation énergétique des bâtiments, 2021. [3] Rapport d'activité Anah, 2022. [4] Amini Toosi et al., 2020, "Life Cycle Sustainability Assessment in Building Energy Retrofitting: A Review". [5] Galimshina et al., 2021, "What Is the Optimal Robust Environmental and Cost-Effective Solution for Building Renovation? Not the Usual One". [6] Mostafazadeh et al., 2023, "Energy, economic and comfort optimization of building retrofits considering climate change: A simulation-based NSGA-III approach". [7] Kraiem et al., "fTMY-Documentation", confidential. [8] DGEC, 2023, "Synthèse du scénario « avec mesures existantes » 2023 de la Stratégie Nationale Bas Carbone".
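Selecting the 3D Pareto front (cost, carbon, thermal discomfort, all minimized) over the candidate strategies can be sketched with a generic non-domination filter (our illustration, not the study's actual optimization code):

```python
import numpy as np

def pareto_front(objectives):
    """Boolean mask of non-dominated rows; each row holds one strategy's
    objective values (all objectives minimized, e.g. cost, carbon, discomfort)."""
    obj = np.asarray(objectives, dtype=float)
    mask = np.ones(len(obj), dtype=bool)
    for i in range(len(obj)):
        if not mask[i]:
            continue
        # i is dominated if some strategy is <= on every objective
        # and strictly < on at least one
        dominated = np.all(obj <= obj[i], axis=1) & np.any(obj < obj[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask
```

Running this on the 800+ evaluated strategies would yield the 12-member front the results describe.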
poster
Extraction of historical information using semantic techniques and knowledge graphs
Jorge Álvarez Fidalgo

Introduction
In the framework of the Semantic Web, knowledge graphs such as Wikidata have emerged as powerful tools to represent and organise large amounts of information in a structured way (linked data). Our aim is to create a methodology that enhances the extraction of information from digitised historical documents by relying on external knowledge graphs, feeding them back in the process.

Research questions
• RQ1. How can we extract linked data from historical documents?
• RQ2. Is it possible to use data from external knowledge graphs to improve the extraction process?
• RQ3. Is it possible to detect errors and shortcomings in external graphs from the extracted information?
• RQ4. How can we use NLP techniques to improve the accuracy of the linked data obtained?

Methodology
• To solve RQ1, we extract triples (?head, ?rel, ?tail).
• In order to get all ?head -and some ?tail- entities we perform NER using existing M-BERT-derived models [1].
• To extract ?rels we propose to perform link prediction on Wikidata in two phases, thus addressing RQ2.

Link prediction (I). Semi-inductive prediction [2] using tail-prediction models trained on Wikidata datasets, such as KGT5 [3]. To restrict the number of properties to be predicted, we use Shape Expressions [4].
Link prediction (II). Fully inductive prediction [2] in which, based on the information extracted in (I), we define property paths in Wikidata -by means of SPARQL queries- that, if they exist between two entities, indicate a possible relationship between them.

Preliminary results
To evaluate this approach we are applying these techniques to a corpus of Spanish medieval notarial records consisting of 128 documents [5]. We focus on three types of relationships of varying detection difficulty: father-child, spouse, and sibling, obtaining high recall (0.60-0.87) yet very low accuracy (0.06-0.08).
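Phase (II) might look like the following sketch: an ASK query testing whether a property path links two entities in Wikidata. The `build_path_ask` helper is ours, and the path shown (P40 "child" or the inverse of P22 "father") is one plausible father-child pattern, not necessarily one used in the study:

```python
def build_path_ask(head_qid: str, tail_qid: str, path: str) -> str:
    """SPARQL ASK query over Wikidata truthy statements: does the given
    property path connect the head entity to the tail entity?"""
    return (
        "PREFIX wd: <http://www.wikidata.org/entity/> "
        "PREFIX wdt: <http://www.wikidata.org/prop/direct/> "
        f"ASK {{ wd:{head_qid} {path} wd:{tail_qid} . }}"
    )

# QIDs are placeholders for two extracted entities.
query = build_path_ask("Q1", "Q2", "(wdt:P40|^wdt:P22)")
```

A true ASK result flags the pair as a candidate father-child relationship for the human-in-the-loop.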
Preliminary conclusions
• Most of the relevant triples are extracted successfully.
• Those undetected are mostly due to (a) incomplete information in the knowledge graph (RQ3) and (b) relationships that do not adhere to any pattern.
• Accuracy is very low since many non-relevant pairs of entities match the patterns.
• RQ4 may be of help in this regard, thus facilitating the work of the human-in-the-loop.

Bibliography
[1] Telmo Pires, Eva Schlinger, and Dan Garrette. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1493.
[2] Mehdi Ali, Max Berrendorf, Mikhail Galkin, Veronika Thost, Tengfei Ma, Volker Tresp, and Jens Lehmann. Improving inductive link prediction using hyper-relational facts. In The Semantic Web - ISWC 2021, pages 74-92, Cham, 2021. Springer International Publishing. ISBN 978-3-030-88361-4.
[3] Apoorv Saxena, Adrian Kochsiek, and Rainer Gemulla. Sequence-to-sequence knowledge graph completion and question answering. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, pages 2814-2828, 2022. doi: 10.18653/v1/2022.acl-long.201.
[4] Eric Prud'hommeaux, Jose Emilio Labra Gayo, and Harold Solbrig. Shape expressions: an RDF validation and transformation language. In Proceedings of the 10th International Conference on Semantic Systems, SEM '14, pages 32-40, New York, NY, USA, 2014. Association for Computing Machinery. ISBN 9781450329279. doi: 10.1145/2660517.2660523.
[5] Jorge Felpeto Cueva and Miguel Calleja Puerta. El archivo de un artesano del siglo XIV: el orfebre Alfonso Fernández de Oviedo. PhD thesis, Universidad de Oviedo, 2023.
poster
I. Motivation IV. Results & Discussion Jason N. Ott1*, Cailey B. Condit1 and Vera Schulte-Pelkum2 1University of Washington, Earth and Space Sciences, Seattle WA 2University of Colorado Boulder, Boulder CO *jasonott@uw.edu Seismic Anisotropy of Mafic Blueschists: Constraints from Exhumed Glaucophane-Rich Blueschists with Implications for the Subduction Interface Diagram of a typical subduction zone illustrating major structural features and the evolution of pressure and temperature with increasing depth. II. Materials • Samples selected from naturally deformed, mafic blueschists with variable mineralogies spanning a broad range of the blueschist facies P-T conditions. o Bandon OR: BAN09, BAN15, LAB34 o Cycladic Islands: CY106, CY107 o Diablo Range CA: DR221, DR265 o New Caledonia: LAB555, LAB570, LAB572 o Catalina Schist CA: GB15-02A • Kinematically oriented thin sections prepared (foliation normal/lineation parallel) for EBSD analysis. Blueschist sample P-T estimates from the literature overlain on metamorphic facies diagram with slab-top paths of van Keken et al. (2018). VI. Future Work • The remaining mafic blueschist samples targeted for this study will be mapped, analyzed, and integrated into the framework of this compilation study. • Synthetic receiver functions will be applied to subduction zone models using the results from this study to develop constraints applicable to imaging of blueschist-metamorphosed oceanic crust. • Deformation experiments on glaucophane samples at varying temperatures, strain rates, and initial grain sizes in combination with microstructural analysis of recovered samples will be used to link observations of naturally deformed blueschists to an experimentally determined flow law. III. Methods • We can better understand subduction zone dynamics by using exhumed subduction exposures to constrain remote geophysical observations. 
• Seismic anisotropy can illuminate subduction zone structure and link deep processes to their expression as geological hazards at the surface. • Understanding the range of seismic anisotropy in mafic blueschists, a key constituent of subducting slabs, will improve imaging of the subduction zone interface. The seismic anisotropy of the blueschist samples (in AVp %) as a function of fabric strength of the [100], [010], and [001] pfJ-indices of glaucophane in the samples—corresponding to the a-, b-, and c-axes respectively—displays a general increase in magnitude of anisotropy with glaucophane fabric strength. The strongest correlation between pfJ-index and AVp % occurs with the [100] direction. Misorientation to mean orientation maps of glaucophane in samples CY107 (top) and GB15-02A (bottom), representative of samples near the deepest and shallowest extents of warm slab-top paths under blueschist facies conditions. Deformation microstructures show evidence of intragranular misorientation gradients and subgrain boundary formation that suggest deformation may be facilitated by dislocation creep. The seismic anisotropy (in AVp %) of the blueschist samples displays a general increase in magnitude with increasing volumes of glaucophane in the sample. The seismic wave velocity orientations (AVp patterns) can be divided into 2 main types: 1) Fast-velocity girdle in the foliation plane with slow-velocities normal to the foliation that corresponds to a glaucophane CPO with point-maxima in the a-, b-, and c-axes, and 2) Fast-velocity direction parallel to the sample lineation with a slow-velocity girdle normal to the lineation direction corresponding to a glaucophane CPO with the c-axis oriented along the lineation and a girdle formed by the a- and b-axes. VII. Abstract and References Blueschists show strong seismic anisotropy, scaling with glaucophane volume and fabric strength. This is likely a useful tool for imaging subducting slabs with receiver functions. V. 
Key Takeaways • The seismic anisotropy of mafic blueschists is largely controlled by the modal percentage of the amphibole
poster
Estimation of Under-5 Mortality by Wealth Quintile
Fengqing Chao1, Danzhen You2, Jon Pedersen3, Lucia Hug2, Leontine Alkema4
1(Email: ephchf@nus.edu.sg), Saw Swee Hock School of Public Health, National University of Singapore. 2Division of Data, Research and Policy, UNICEF. 3Fafo Institute of Applied International Studies, Fafo. 4Department of Biostatistics and Epidemiology, University of Massachusetts, Amherst.
Poster 11833, Poster Session P3 – Data, Methods, and Professionalization, Population Association of America, Apr 27th 2017, Chicago
Introduction
Objective
▶ Estimate the levels and trends of the under-5 mortality rate (U5MR; probability of dying before age 5) by wealth quintile across countries from 1990 to 2015;
▶ Estimate the relation between ratios of quintile-specific U5MR and the national-level U5MR (all quintiles combined);
▶ Implement a reproducible statistical model.
Wealth Quintiles
▶ Refer to 5 equal-sized birth groups with different levels of socioeconomic status according to the wealth index assigned to each household;
▶ 1st wealth quintile = the poorest group; 5th wealth quintile = the richest group.
Wealth Index
▶ Computed from selected questions asked in the Demographic and Health Surveys (DHS) and Multiple Indicator Cluster Surveys (MICS);
▶ Indicator variables: any item that reflects economic status: type of flooring (dirt/cement/parquet)? Type of toilet (bush/flush)? Has electricity? Number of members per sleeping room? etc.
▶ Use principal component analysis (PCA) to assign the indicator weights, and take the weighted sum as the wealth index.
Data
Data type / # surveys: DHS direct 208; MICS direct 17; MICS indirect 66; total 291.
Table: Database for modeling. 'Direct': full birth history data. 'Indirect': summary birth history data.
▶ 95 countries with DHS and/or MICS survey data;
▶ Each survey has 1 data point for each quintile;
▶ Range of reference years: 1987–2011.
Results
[Figure: model estimates and 90% CI for India and Chad. Row 1: ratio of wealth quintile-specific to national-level U5MR for wealth quintile groups 1–5, 1985–2015 (country estimate vs. global expected). Row 2: wealth quintile-specific and national-level U5MR (deaths per 1000 live births), 1985–2015.]
[Figure: ratio of U5MR in quintile 1 to U5MR in quintile 5 plotted against the difference between U5MR Q1 and U5MR Q5, with countries labelled by ISO code and grouped by region (South Asia, CEE/CIS, Sub-Saharan Africa, Latin America and the Caribbean, East Asia ...).]
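The PCA step described under "Wealth Index" (weight each asset indicator by its loading on the first principal component, take the weighted sum, then cut households into quintiles) can be sketched as follows. This is illustrative only: the indicator data and every parameter are invented, not real DHS/MICS data.

```python
import numpy as np

# Toy household asset indicators (rows = households, cols = indicators):
# has_electricity, finished_floor, flush_toilet, crowding (reversed scale).
rng = np.random.default_rng(0)
n = 200
wealth_latent = rng.normal(size=n)  # hidden socioeconomic status (for illustration)
indicators = np.column_stack([
    (wealth_latent + rng.normal(scale=0.5, size=n) > 0).astype(float),  # electricity
    (wealth_latent + rng.normal(scale=0.7, size=n) > 0).astype(float),  # floor type
    (wealth_latent + rng.normal(scale=0.9, size=n) > 0).astype(float),  # toilet type
    -wealth_latent + rng.normal(scale=0.8, size=n),                     # crowding
])

# PCA: the first principal component of the standardized indicators
# supplies the weights; the weighted sum is the wealth index.
X = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
weights = eigvecs[:, -1]          # eigenvector of the largest eigenvalue
wealth_index = X @ weights        # one index value per household

# Quintile assignment: 1 = poorest fifth, ..., 5 = richest fifth.
cuts = np.quantile(wealth_index, [0.2, 0.4, 0.6, 0.8])
quintile = np.searchsorted(cuts, wealth_index) + 1
```

Note that the sign of a principal component is arbitrary, so in practice the index is oriented so that richer households score higher before quintiles are assigned.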
poster
Pile-up rejection for AMoRE-II
Yoomin Oh
References: [1] K.S. Park et al., DOI:10.3389/fphy.2024.1323991. [2] G.B. Kim et al., DOI:10.1016/j.astropartphys.2017.02.009. [3] S.C. Kim et al., DOI:10.1088/1748-0221/17/07/P07034. [4] C. Augier et al., DOI:10.1103/PhysRevLett.131.162501. [5] T. Chen and C. Guestrin, DOI:10.1145/2939672.2939785. [6] V. Alenkov et al., DOI:10.1140/epjc/s10052-022-11104-3.
AMoRE: 0νDBD search
• Mo-100 enriched scintillation crystals.
• Cryogenic detector technique (MMC).
• AMoRE-II uses 157 kg of Li2MoO4.
• Being prepared at 1000 m underground in Yemilab [1].
• 5+ years of running: mass·time exposure > 500 kg·year.
• Energy resolution ~ 10 keV FWHM at Qββ = 3034 keV.
• Background level ≪ 2×10-4 counts/keV/kg/year (ckky).
• Half-life sensitivity ~ 4×1026 years at 90% CL.
Pile-up background
• Preference for larger crystal detectors to reduce the number of detector channels (i.e., cost).
• Pile-up: random coincidence of two signals in one detector crystal volume.
• The largest contribution is expected from two coincident 2νDBD signals.
• The rate scales with the crystal size (internal backgrounds ~ volume, external ~ area) and the coincidence time window.
• For Li2MoO4 with Φ/H = 6 cm and a 500 µs coincidence time window (Δt): pile-up background rate at the ROI ~ 2.2×10-4 ckky [4].
• Rejection at the analysis level is required, with εrejection ≳ 90%.
[Figure: 2νββ random-coincidence spectrum, showing each component and the total.]
Pile-up simulation and analysis
• Signal templates of Li2MoO4 detectors from R&D data:
1. Fast signal: rise time ~ 1.9 ms (10–90%) (diffused surface / 20 mK / with MMC thermal link to Cu frame).
2. Slow signal: rise time ~ 5.8 ms (10–90%) (diffused surface / 10 mK / without MMC thermal link to Cu frame).
• Noise (baseline): randomly sampled from real data.
• Analysis utilizes a machine learning method: gradient boosting [5].
[Figure: selected parameters for discrimination and the training/evaluation data samples (single vs. pile-up events), built by applying Butterworth bandpass filters with different cutoff frequencies to the raw heat and raw light channels.]
Validation using real data: Bi-Po β-α decay
• A CaMoO4 crystal detector with high U/Th contamination in AMoRE-I [6].
• 214Bi → 214Po (β decay, Q = 3.27 MeV); 214Po (T1/2 = 164 μs) → ··· (α decay, Q = 7.83 MeV): some pile-ups can be discriminated, rejection efficiency ~ 60%.
• 212Bi → 212Po (β decay, Q = 1.88 MeV); 212Po (T1/2 = 299 ns) → ··· (α decay, Q = 8.95 MeV): expecting no rejection at all.
• Selection efficiency ~ 97% for single α events.
Signal and noise characteristics affect the pile-up rate
• A faster signal and a higher signal-to-noise ratio are preferred for a lower pile-up rate [2].
• Signal parameters such as size and speed (rise/decay time) can be tuned with: 1. temperature, 2. crystal surface condition, 3. the thermal link between the sensor (MMC) and the Cu frame [3].
• The baseline r.m.s. can be controlled to be as small as a few keV.
Result and Discussion
• Rejection efficiency for pile-up of two 2νDBD signals at the ROI within 500 µs: better than 90% with faster signals on lower-noise baselines; close to 80% with slower signals on higher-noise baselines.
• Selection efficiency for single-signal events > 95%.
• The pile-up rate from 2νDBD at the ROI can be suppressed down to (2–4)×10-5 ckky.
• Studying the possibility of further improvements, not only for pile-up discrimination but also for more general event-type classification.
yoomin@ibs.re.kr
[Figure: discrimination performance for the fast-signal, lower-noise case.]
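The gradient-boosting discrimination of single vs. pile-up events can be illustrated with a toy model. The poster trains XGBoost [5] on filtered heat/light waveform parameters; this sketch substitutes scikit-learn's GradientBoostingClassifier and fully synthetic pulse shapes, so the template, time constants, and noise level are all invented stand-ins.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
t = np.linspace(0.0, 50.0, 256)  # time axis in ms (arbitrary sampling)

def pulse(t0, amp, rise=1.9, decay=15.0):
    """Toy thermal pulse: exponential rise to peak times exponential decay."""
    x = np.clip(t - t0, 0.0, None)
    return amp * (1.0 - np.exp(-x / rise)) * np.exp(-x / decay)

def make_event(pileup):
    """One noisy waveform, reduced to simple pulse-shape features."""
    y = pulse(5.0, 1.0)
    if pileup:  # second pulse at a random delay inside the window
        y = y + pulse(5.0 + rng.uniform(0.1, 10.0), rng.uniform(0.3, 1.0))
    y = y + rng.normal(scale=0.02, size=t.size)  # baseline noise
    peak = y.max()
    i10 = int(np.argmax(y > 0.1 * peak))  # first 10%-of-peak crossing
    i90 = int(np.argmax(y > 0.9 * peak))  # first 90%-of-peak crossing
    area = y.sum() * (t[1] - t[0])
    return [peak, t[i90] - t[i10], area / peak]

labels = np.array([0] * 300 + [1] * 300)  # 0 = single, 1 = pile-up
X = np.array([make_event(bool(c)) for c in labels])
Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.25,
                                      random_state=0, stratify=labels)
clf = GradientBoostingClassifier(n_estimators=100, random_state=0)
clf.fit(Xtr, ytr)
acc = clf.score(Xte, yte)  # fraction of held-out events classified correctly
```

With these three features (peak height, 10–90% rise time, area-to-peak ratio) the two synthetic classes separate well; real discrimination works on the filtered waveform parameters themselves.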
poster
VIDEO ABSTRACTS: DO THE METRICS STACK UP? Tom Rees1, Sandra Lê2, Leigh Prevost1, and Sheelah Smith1 1PAREXEL International, Worthing, UK; 2Dove Medical Press, Macclesfield, UK
Objective: Video abstracts offer a means to present results that complement the full paper. We aimed to compare video abstract views with other publication metrics in order to better understand the role they play in disseminating research findings.
Research design and methods: Metrics were obtained from YouTube for all video abstracts published by the open-access publisher Dove Medical Press. All articles with video abstracts published in four journals in different therapy areas over the period February 27, 2014, to March 19, 2015, were analyzed (average time since publication: 212 days) and compared with metrics for matched articles without video abstracts.
Results: A total of 31 articles with video abstracts were identified. The average number of views per video was 364; most views came from the United States, India, and the United Kingdom. 66% of viewers were male, and 36% were aged 25–34 years. In our sample, video abstract views did not increase with time. Video views were weakly correlated with text abstract views (r² = 0.22) and with full paper views (r² = 0.23), with around 1.5 video views per full paper view. Full views of papers with a video abstract were similar to those of papers without.
Conclusions: Video abstracts are of interest to many authors and readers and complement the full paper. This is in line with authors' and publication professionals' views that publishing a video abstract increases the reach of research results and may increase downloads.
UPDATED ABSTRACT
• Video abstracts offer a novel means to present study results that can both complement and enrich the full published article.
• Presenting results through a different medium could address the learning style preferences of a wider group.
• In an accompanying poster, we present survey results that indicate high levels of interest from authors in video abstracts, although concerns over impact and value were raised.
INTRODUCTION
• Video abstracts appear to be more popular with younger audiences.
• The low correlation between video abstract, conventional abstract, and full paper views suggests that they are reaching different audiences; however, video abstracts do not appear to replace or encourage full paper views.
SUMMARY AND CONCLUSIONS
• In this exploratory analysis, we aimed to compare video abstract views with other publication metrics in order to better understand the role that video abstracts play in disseminating research findings.
OBJECTIVE
RESULTS
• Across the Dove Medical Press portfolio, the mean number of views per video abstract was 364.
Scan QR code to download this poster. Scan QR code to download any of PAREXEL's and Dove Medical Press' posters presented at ISMPP.
Acknowledgments: The authors would like to acknowledge and thank PAREXEL Editorial and Creative Services for their support.
11th Annual Meeting of ISMPP; April 27–29, 2015, Arlington, VA, USA.
• Aggregate viewing statistics for all video abstracts published by Dove Medical Press, an open-access publisher, were obtained from YouTube.
• Four journals published by Dove Medical Press across diverse medical fields were identified: Clinical Ophthalmology, Therapeutics and Clinical Risk Management, International Journal of Chronic Obstructive Pulmonary Disease, and Diabetes, Metabolic Syndrome and Obesity: Targets and Therapy.
• Metrics for all papers with video abstracts published over a 12-month period (February 27, 2014, to March 19, 2015) were obtained (including original research articles, reviews, case series, and case reports).
◦ Views on PubMed Commons were excluded from this analysis.
• For the control group we identified, for each video abstract, the article of the same type closest in publication date but without a video abstract.
• Relationships were explored using linear regression, two-sample t-tests and, for
poster
What determines the inner sizes of protoplanetary disks?
I. Mendigutía1,*, P. Marcos-Arenal1,2, E. Koumpia3, R.D. Oudmaijer4, M. Vioque5,6, J. Guzmán-Díaz1, C. Wichittanakom7, W.J. de Wit3, B. Montesinos1, J.D. Ilee4
1 Centro de Astrobiología (CAB), CSIC-INTA, Madrid, Spain 2 European Space Agency, ESA-ESAC, Madrid, Spain 3 European Southern Observatory, Santiago, Chile 4 University of Leeds, UK 5 Joint ALMA Observatory, Santiago, Chile 6 National Radio Astronomy Observatory, Charlottesville, USA 7 Thammasat University, Pathum Thani, Thailand * imendigutia@cab.inta-csic.es
Background
Planets form in the gas and dust disks that surround young stars. Dust particles do not survive temperatures above 1000–2000 K, which is why protoplanetary disks are cleared of dust close to the central star. The size of such inner dust holes should be larger for more luminous (hotter) sources, ranging between ~ 0.01 and 20 au in optically visible young stars. This range is critical for our understanding of planet formation, as it corresponds to the region where most exoplanets' orbits lie1. In turn, those inner disk sizes translate to angular scales of milli-arcsec even for the closest sources, and can be resolved, mainly for intermediate-mass "Herbig Ae/Be" stars, only by using near-infrared interferometry. Although previous interferometric studies have shown that there is a correlation linking the stellar luminosity of Herbig Ae/Be stars and the size of their inner dust disks, this "size-luminosity correlation" may change for early-type Herbig Be stars and shows considerable scatter2,3. The main physical scenarios proposed to explain the observed size-luminosity distribution are related to the presence/absence of innermost gas, different disk-to-star accretion mechanisms, or different dust disk properties as inferred from spectral energy distributions (SEDs)4,5,6. However, no general observational confirmation has been provided to date.
Our goal is to identify the physical mechanism that drives the observed distribution in the size-luminosity correlation.
References (see the associated paper Marcos-Arenal et al. 2021, A&A, 652, A68): 1: http://exoplanet.eu/ 2: Monnier & Millan-Gabet 2002, ApJ, 579, 694
Fig. 1. Example: the young star V590 Mon. (Left) GRAVITY/VLTI interferometric observables - flux (red), squared visibility (blue) and differential phase (green) - vs. wavelength around the Brɣ emission line for two different VLT baselines. (Right) Continuum squared visibilities vs. spatial frequency (solid dots with error bars). The best fit is indicated with a solid line and corresponds to a Gaussian disk on top of the central star, leading to an inner dust disk radius ~ 2 au. Alternative fits corresponding to ring and uniform disk models are plotted with dashed and dot-dashed lines.
XV Reunión Científica Sociedad Española de Astronomía. Tenerife. September 2022
Fig. 2. (Left) Size-luminosity relation where each panel emphasizes a different aspect. Top left: Herbig Ae and Herbig Be stars; the five with our new GRAVITY data are highlighted in red. Top right: the sources with strong (blue) and weak (red) accretion rates. Bottom left: the sources with spectro-interferometric measurements indicating the presence of atomic gas inside the inner dust disk (blue) and the ones without such evidence (red). Bottom right: Group I and Group II sources based on the SED shape. For all panels, triangles indicate upper and lower limits, and the solid and dashed lines indicate the expected boundaries of the inner dust disk radius based on the two main accretion scenarios and innermost gas densities. (Right) Size-luminosity relation where the stars have been color-coded to indicate different distance ranges; the rest of the symbols and lines are as in the left panel.
Results
1) We have updated the size-luminosity diagram, almost doubling the number of optically visible young stars considered in previous works.
2) We find no general trend linking the presence/absence of innermost gas,
poster
Predictive Modeling of Polymer-Derived Ceramics: Discovering Methods for the Design and Fabrication of Complex Disordered Solids
Paul Rulis1, Nathan Oyler2, Michelle Paquette1, Ridwan Sakidja3, Jinwoo Hwang4
1. Department of Physics and Astronomy, University of Missouri - Kansas City, Kansas City MO 64110 2. Department of Chemistry, University of Missouri - Kansas City, Kansas City MO 64110 3. Department of Physics, Astronomy and Materials Science, Missouri State University, Springfield MO 65897 4. Department of Materials Science and Engineering, The Ohio State University, Columbus OH 43212
Program Objective
• Toward the long-term goal of creating an integrated program for the design of complex disordered solids, this new project aims to develop a general, simulation-driven methodology for accurately recreating the atomic structure of substructure-containing amorphous solids and mapping resultant structures and properties back to fabrication conditions, ultimately enabling a computational design capability.
• The work involves developing an ab initio molecular dynamics (AIMD) and hybrid reverse Monte Carlo (HRMC) simulation that is augmented by ab initio total energy and a set of geometric constraints. We will use a collection of thin-film amorphous preceramic polymers (a-BC:H, a-SiBCN:H, and a-SiCO:H) as suitably complex and technologically relevant case studies.
• We develop novel algorithms for linking growth conditions and characterization information to atomistic simulation, and for mapping fabrication conditions to desired properties. Our modeling goal is first to identify non-global potential energy minima for material sub-components that are produced under non-thermodynamic conditions, and then to align the timescales of the simulation and growth processes so as to generate realistic structural models of complex disordered solids.
• The project combines state-of-the-art computational techniques (AIMD, HRMC), modern optimization algorithms (e.g.
artificial neural networks (ANNs) and particle swarm optimization (PSO)), specialized experimental characterization techniques (solid-state nuclear magnetic resonance (NMR) and 4-dimensional scanning transmission electron microscopy (4D-STEM)), and advanced thin-film fabrication technology (plasma-enhanced chemical vapor deposition (PECVD)).
• The unique utility of modern solid-state NMR techniques to obtain specific bonding and connectivity information, and the sensitive medium-range-order information available from 4D-STEM, will be combined with neutron diffraction and more routine physical/electronic structure characterization methods to provide input and constraints for the simulations.
The project includes (left) growth and characterization, (middle) iterative modeling, and (right) design training and validation. Single-frame red boxes represent experimental samples and data, while double-framed blue boxes represent computational products. The shaded region in the middle represents the application of particle swarm optimization. The general flow can be understood as: (1) growth of samples varied by composition and growth procedures; (2) experimental structural characterization; (3) iterative model simulation using characterization data; (4) ANN training to link simulation and growth parameters, followed by predictive application of the ANN.
Acknowledgement
To meet the long-term goal of developing an integrated program for the fabrication of complex disordered solids that are designed to satisfy the performance requirements of advanced applications, we set the following plans for the first year of this project.
1. We will form connections between experimental characterization and simulated data to guide the construction of realistic models of preceramic polymers.
Our working procedure for generating individual models is to prepare the composition for a large-scale classical MD simulation via smaller scale AIMD and to then use HRMC to progressively improve matching between the mod
poster
Diagrams for PhD research By Scott Skipworth ORCID: https://orcid.org/0000-0003-2674-529X Disseminated Embodiment: My term for a repositioning of human embodiment for the 21st Century: no longer a distinct figure in relation to the built environment, but an expanding and contracting satellite system of the local and global built environment itself. Diagram by Scott Skipworth ORCID: https://orcid.org/0000-0003-2674-529X Embodied student physically at site visit while meeting with studios at remotely located campuses through mobile technology: student is disseminated – in "place" across locations: Where to put one X to represent student figure in a place? (can't) Diagram by Scott Skipworth ORCID: https://orcid.org/0000-0003-2674-529X Embodied client/user physically at home interacting with remote urban site through personal (home) and building mounted (site) technology (body sharing?) – in "place" across locations: Where to put one X to represent the user figure in a place? (can't) Existing framework/model of human embodiment as figure in a built environment "place" no longer works
poster
1 Unidad de Genotipado y Diagnóstico Genético (UGDG), INCLIVA, Valencia (Spain) 2 Unidad de Bioinformática, INCLIVA, Valencia (Spain) 3 Hospital Universitari de Tarragona Joan XXIII / IISPV / Universitat Rovira i Virgili / CIBERDEM (Spain)
Daniel Pérez-Gil1,2, Verónica Lendínez-Tortajada1, Alba Sanchis-Juan1,2, Pilar Rentero-Garrido1, Ana Megía3, Joan Vendrell3, Felipe Javier Chaves-Martínez1, Pablo Marín-García1,2
DPMAS: pipeline for 450K top table replication with MiSeq amplicons
ABSTRACT
DNA methylation is one of the most important epigenetic modifications in gene expression regulation, and it plays a crucial role in multiple biological processes. Therefore, DNA methylation profiling is important to reveal regions of the genome that are altered during development or perturbed by disease. Methylation signals at CpG sites can be assessed using the bisulfite conversion method and massive sequencing technologies. Here we present the implementation of a bioinformatics pipeline for replicating, with MiSeq sequencing amplicons, the top hits obtained from Infinium 450K differential methylation experiments.
CONCLUSIONS
We have implemented a bioinformatics pipeline for replication of the top hits obtained from Infinium 450K experiments using MiSeq sequencing amplicons. For this purpose we have used well-established software for genome analysis as well as applications developed in our lab, all of them open source. The proposed workflow has proved useful for methylation level assessment while maintaining good data quality control. Source code will be available at: https://github.com/pamag/DPMAS
PIPELINE REPORT
450K results
The assessment of the methylation level at CpG sites from raw genetic sequences involves quality control and several processing steps.
The main building blocks of the pipeline include SNP selection from 450K experiments, sequencing of the regions containing those SNPs with directional primers, mapping of the reads to the reference amplicon sequences, and variant calling at cytosines in CpG sites. Finally, a dashboard report is produced as an analysis summary.
[Pipeline flowchart: SNP selection (from 450K results or other sources) → directional amplification primer design → MiSeq sequencing → adaptor removal and quality trimming (FASTQ) → mapping (BAM) → minimum-length filter → calling C and T counts (VCF) → report with beta-values plot.]
Figure 1. Localization plot. The region where the selected 450K SNPs are located is plotted. Chromosome and gene transcripts are displayed, and the CpG of interest is shown as a red line.
Figure 2. Coverage plot. The depth-per-base distribution is plotted for each sample (black lines) in every experimental group. In this example a threshold of 100x is established as the minimum value to report reliable results (red line).
Figure 3. Methylation plots. (A) Beta values plot: methylation percentages of CpG sites along the amplicon are plotted for each sample in every experimental group. The 450K-selected SNPs are highlighted as a red line. In this example only those samples with coverage >= 100 at the CpG site are plotted. (B) A reliability value is assigned to each point depending on the error assumed when the methylation level is assigned.
Table 1. Methylation level table. Mean, standard error, and number of samples (mean±se(N)) are shown for each CpG site in every experimental group. The CpG site of interest is highlighted in red.
Position   c1              c2              c3              c4
64         77.71±0.63(46)  79.03±0.49(85)  78.69±0.61(60)  78.61±1.02(62)
105        89.76±0.37(46)  89.3±0.42(85)   90.38±0.45(60)  89.82±0.6(61)
108        94.75±0.21(46)  95.01±0.24(85)  94.62±0.24(60)  94.61±0.42(62)
119        54.15±4.57(46)  52.69±3.43(85)  55.51±4.14(60)  56.16±3.84(63)
127        95.71±0.18(46)  95.89±0.19(85)  95.82±0.28(60)  96.2±0.38(62)
187        92.49±0.46(46)  92.57±0.31(85)  92.91±0.27(60)  92.88±0.56(62)
218        96.03±0.21(46)  96.21±0.27(85)  96.3±0.23(60)   96.46±0.33(62)
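The per-site methylation levels in Table 1 come from C/T read counts at each CpG after bisulfite conversion: methylated cytosines read as C, unmethylated ones as T, so the beta value is C/(C+T). A minimal sketch of that computation, with made-up read counts and the 100x coverage cutoff used in Figure 2:

```python
def beta_value(c_count, t_count):
    """Methylation level (0-1) at one CpG: fraction of reads carrying C."""
    total = c_count + t_count
    if total == 0:
        raise ValueError("no coverage at this site")
    return c_count / total

MIN_COVERAGE = 100  # reporting threshold, as in the coverage plot (Figure 2)

# Hypothetical counts for one sample at one amplicon position:
c_reads, t_reads = 312, 88
if c_reads + t_reads >= MIN_COVERAGE:
    methylation_pct = 100.0 * beta_value(c_reads, t_reads)  # 78.0% methylated
```

The same calculation repeated per position, sample, and group, followed by a mean and standard error per group, reproduces the mean±se(N) layout of Table 1.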
poster
About the agreement
To achieve a large-scale transformation towards open access, the project partners devised a "Publish and Read" (PAR) model with two components.
The Projekt DEAL-Wiley agreement:
• Significantly increased open access output from German research institutions.
• Enabled researchers from German institutions to easily publish their work open access, independently of available funding in their research areas, and to retain copyright for their work.
• Expanded access to research findings for German institutions.
The two key goals of the Projekt DEAL-Wiley agreement:
• Expand equal access to high-quality scholarly journals to all researchers in Germany.
• Empower all (corresponding) authors with the means to publish their articles openly.
Analysis
Wiley and the project partners are constantly monitoring the publishing output. Now that the agreement has been in effect for over two years, what are the actual changes in publishing and reading behaviour Wiley was able to observe? After the second year of this agreement, it is possible to draw reasonable conclusions about impact and achievements.
Transitioning to an open future: Wiley's commitment to open access
Gold Open Access: increase in the number of Gold journal published articles from 2018 to 2020; increase in the number of accepted and approved articles in 2020, compared to 2019.
Hybrid Open Access: increase in the number of articles published open access; share of authors choosing to publish OA in our journals; share of authors choosing a CC BY license.
Open Access Publishing: the Projekt DEAL-Wiley agreement has already made notable advancement towards a higher ratio of open access publications; articles published in Wiley's hybrid portfolio between July 2019 and December 2020; articles published in Wiley's Gold Open Access journals in 2019 and 2020.
Outcomes: reading accesses to full-text HTML and PDF (2020); increase in usage since before the agreement.
Participation and reach: institutions where readers have reading rights (2020).
Publish component:
• Authors publishing in Wiley's 1,400+ hybrid journals may publish OA at no direct cost to them; Projekt DEAL pays Wiley a PAR fee of €2,750 per such article.
• Authors can publish in 230+ fully gold open access journals at no direct cost*, and institutions get a 20% discount on the Article Publication Charge (APC).
Read component:
• Participating institutions gain free read access to more than 1,600 Wiley journals (dating back to 1997).
Introduction
In January 2019, Projekt DEAL, a consortium representing German research institutes and academic libraries, signed a large-scale open access agreement with Wiley, publisher of over 1,600 peer-reviewed journals from all areas of research. Its goal was to accelerate the nation-wide transition from a predominantly subscription-based publishing model to one based on open access. It is intended to significantly raise the number of open access articles being published out of German research institutions while securing, and even expanding, institutions' read access to Wiley's journal output at the same time.
[Infographic figures, detached from their captions in extraction: >1000%; 92%; 15,716,977; 49%; 495; 58%; 92%; +14,500; +1,500; +50%]
Participation and reach: 943 institutions where eligible authors have open access publishing rights (2020).
Conclusions
*Some institutional funding policies may differ for gold OA APCs. Authors should check with their library for any exceptions.
Authors: Melanie Lehnert-Bechle (Wiley-VCH), Torben Quasdorf (Wiley-VCH), Eva-Maria Scheer (Wiley-VCH)
This work is licensed under a Creative Commons Attribution 4.0 International License.
poster
• has attracted over 900 registered users world-wide, up from 400 users in 2013
SI2-SSE: Modules for Experiments in Stellar Astrophysics (MESA)
The goal of this project is to support the MESA software infrastructure by providing sustained innovation in the astrophysics community. MESA solves the 1D fully coupled structure and composition equations governing stellar evolution with an implicit finite-volume scheme. State-of-the-art modules provide adaptive mesh refinement, sophisticated timestep controls, non-adiabatic oscillation capabilities, equation of state, opacity, nuclear reaction rates, element diffusion, boundary conditions, and changes to the mass of the star. MESA has significant penetration in the astrophysics community and is well on its way to becoming the world standard for evolving stars, providing a crucial resource to the astrophysical community at just the right time. The third year of our SI2-SSE award has been key to sustaining MESA as an essential piece of software infrastructure while building new scientific and educational networks.
• the software instrument papers MESA I (2011), MESA II (2013), and MESA III (2015) have collectively been cited over 1000 times (947 citations as of 01 Sep 2016). Each paper ranks in the top 25 most-cited astronomy and astrophysics papers published in its year.
Frank Timmes (PI), Rich Townsend (Co-PI), Lars Bildsten (Co-PI), Bill Paxton (MESA's First Author).
[Figures: world and USA maps of registered users (~300 USA, ~915 total, Sept. 2016); bar chart of Summer School participants 2012–2016 by career stage (grads, postdocs, undergrads, faculty, amateurs) with the female and USA fractions for each year; citation counts per year (2011 to Sept. 2016) for MESA I, II, and III.]
• had one paper, MESA II, that was one of 15 papers selected by the American Astronomical Society for "high-impact research" published between 2012 and 2015.
MESA was publicly launched in 2011 and received an SI2-SSE award in 2013. The MESA project
• has witnessed over 10,000 downloads of the source code
• has distributed over 12,000 archived and searchable posts on community discussions
• provides a Software Development Kit to build MESA across a variety of platforms
• delivers an annual Summer School program, now with over 170 graduates, which offers a week of extensive hands-on labs on how to use MESA in their own research
• hosts a web-portal for the community to share analysis tools and build provenance
• offers a prototype of a cloud resource for education, MESA-Web, that has served over 1500 models to unique users at 40 institutions in the USA in its first year of operation.
poster
Bringing Interoperable Annotation to All Scholarly Works
Annotating All Knowledge Coalition and Working Group

A coalition of some of the world's key scholarly publishers, platforms, libraries, and technology organizations is coming together to create an open, interoperable annotation layer. Web annotation is here! Our goal is to make it pervasive across scholarship, open and interoperable.

For decades web pioneers have imagined a native and universal collaborative capability over the Web. Many projects have experimented with it, yet in 2015 we are still stuck with the same patchwork of proprietary commenting systems, available only in some places. In the last few years a growing community has been working to change that. Our goal is to standardize "annotation" as a unit of conversation built into the very fabric of the Web. In late 2014 the World Wide Web Consortium (W3C), the standards body for the Web, established a formal Working Group to support this standards effort. Much progress has been made, and many implementations are now mature enough to be deployed.

What's possible with web-based annotation? Make persistent annotations in texts by highlighting passages, adding notes, and tagging with keywords.
Coalition Founding Members
American Geophysical Union ● Amherst University Library ● Annual Reviews ● arXiv ● Atypon ● Authorea ● Biodiversity Heritage Library ● bioRxiv ● Brandeis University Library ● Cambridge University Press ● CELI ● Carnegie Mellon University ● Centre for Agriculture and Biosciences International ● Center for Open Science ● Collaborative Knowledge Foundation ● ContentMine ● Crossref ● Duke University Press ● eLife ● Elsevier ● Europe PubMed Central ● Europeana ● F1000 - Faculty of 1000 ● Factminers ● GigaScience ● Harvard-Smithsonian Center for Astrophysics ● HathiTrust ● HighWire ● Hypothes.is ● IDPF ● iPython / Project Jupyter ● JHU MUSE ● Journal of Neuroscience Research ● JSTOR ● Knowledge Unlatched ● Michigan Publishing ● MIT Press ● NASA Astrophysics Data System ● NYU Press and NYU Libraries ● O'Reilly Books (aka Safari Books Online) ● PKP / Open Journal Systems ● Publishing Technology plc ● ORCID ● Oxford University Press ● PaperHive ● Pensoft ● PLOS ● Pressbooks ● RedLink ● Romanian Center for Investigative Journalism ● Semantico ● SSRN ● Stanford University Libraries ● Ubiquity Press ● University of California Davis Library ● University of California Press ● University of Illinois Libraries ● University of Southern California - Scalar ● Unizin ● W3C ● Wellcome Library ● Wiley ● Wolfram ● World Archives of Sciences ● World Digital Mathematics Library

Goal
To deploy an interoperable annotation layer over all scholarly and scientific works within three years.

Members agree to:
• Share the vision for how this can benefit scholarship
• Explore how to incorporate web annotation
• Collaborate openly with others
• Be open about their participation

Annotating All Knowledge Coalition
The AAK Working Group was established at FORCE11 to support the work of the AAK Coalition.
AAK Working Group
• Support the communication and dissemination needs of the coalition
• Provide a forum for AAK Coalition discussions
• Coordinate a technical working group
• Organize a face-to-face meeting at FORCE2016

Timeline: Face-to-Face Kick-Off Meeting (>55 attendees)
• Network with other participants
• Establish a working definition of interoperability
• Define the most compelling use cases/user stories to drive technology development
• Organize a technical specifications working group
• Discuss branding

Participate
The coalition is open to any scholarly publisher, platform, library, or technology organization that shares our vision and objectives and wants to participate. Contact us at coalition@hypothes.is for more information.
• Join the working group at FORCE11
• Contribute your use cases
• Contribute your talent
• Build open annotation into your platform

Upcoming events: I Annotate, Berlin, May 19-20, 2016. Join us! hypothes.is/annotating-a
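To make the standardization effort concrete, here is a minimal sketch of what a web annotation looks like under the W3C Web Annotation Data Model the poster refers to: a highlighted passage (the target, anchored by a text quote) carrying a note and a tag (the bodies). The URL and all field values are illustrative, not taken from any real deployment.

```python
import json

# A minimal W3C Web Annotation: a highlighted passage with a note and a tag.
# All concrete values below are illustrative examples.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": [
        # A free-text note on the passage
        {"type": "TextualBody", "purpose": "commenting",
         "value": "This claim needs a citation."},
        # A keyword tag
        {"type": "TextualBody", "purpose": "tagging", "value": "methods"},
    ],
    "target": {
        # The document being annotated (illustrative URL)
        "source": "https://example.org/article/123",
        # Anchor the annotation to the exact highlighted text, so it
        # persists even if the page layout changes
        "selector": {
            "type": "TextQuoteSelector",
            "exact": "annotation as a unit of conversation",
        },
    },
}

print(json.dumps(annotation, indent=2))
```

Because the model is plain JSON-LD, any conforming client or platform can store, exchange, and re-anchor such annotations, which is what makes the layer interoperable rather than another proprietary commenting silo.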
poster
How to Realize an Immersive Jedi Council Meeting: A Capturing Story of Holograms and Their Hyperdrive Delivery
Matthias De Fré, Jeroen van der Hooft, Tim Wauters, Filip De Turck
poster
Can We Use Field Dependence of Susceptibility to Identify the Magnetic Mineralogy of Submarine Basalts?
Hong Yang (hyang777@stanford.edu), Sonia M. Tikoo | Dept. of Geophysics / Dept. of Earth & Planetary Sciences, Stanford University
Dario Bilardello, Peat Solheid | Institute for Rock Magnetism, University of Minnesota
IODP Expedition 391 scientists

30-Second Summary
To help guide the choice of demagnetization method for submarine basalts in shipboard/lab measurements, we test whether room-temperature field dependence of susceptibility can be used to infer the magnetic mineralogy of submarine basalts. We studied the magnetic properties of basalts with different field dependences. We found that field-dependent behavior is associated with Hopkinson peaks. Highly field-dependent basalts are characterized by reversible thermomagnetic curves and low Curie temperatures, and are better treated with AF demagnetization.

Background & Motivation
Field dependence is quantified as χHd = (χ_300 A/m − χ_30 A/m) / χ_30 A/m. Field dependence data: Clark 2016 [7].

Sample Info
| Sample | Ms   | Mr    | Bc    | Bcr   | Mr/Ms | Bcr/Bc |
| B2861  | 1.21 | 0.267 | 11.63 | 21.02 | 0.221 | 1.807  |
| B4591  | 0.34 | 0.14  | 29.25 | 41.7  | 0.412 | 1.426  |
| B5981  | 1.3  | 0.2   | 6.79  | 12.94 | 0.154 | 1.906  |
| B7151  | 0.95 | 0.113 | 3.29  | 6.24  | 0.119 | 1.897  |

• B2861: pillow lava, depth 292.52 m, location (-32.32806, -0.643127)
• B4591: massive lava flow, depth 174.09 m, location (-25.202398, 7.4969)
• B5981: massive lava flow, depth 391.32 m, location (-32.32806, -0.643127)
• B7151: massive lava flow, depth 181.92 m, location (-25.202398, 7.4969)

[Figures: FORC diagrams (Bc vs. Bu, in T) for the four samples, with field dependences χ of 1%, 5%, 14%, and 51%; heating/cooling thermomagnetic k-T curves (k in 10^-6 SI vs. T in °C) for each sample, comparing AF and thermal treatment. Image credit: IODP; Blastcube from wiki.]

Preliminary Conclusions
• Field-dependent behavior is associated with the Hopkinson peak, especially its rising part.
• Real and imaginary susceptibilities show different peak positions.
• The magnitude of room-temperature field dependence depends on where the Hopkinson peak lies, which is influenced by the Ti content of titanomagnetite.
• Samples with high field dependence correlate with low Curie temperature and a reversible k-T curve; they are better suited for AF demagnetization.
• More to explore with field-frequency dependence?

Acknowledgements
• We thank the Institute for Rock Magnetism for awarding HY a visiting student fellowship to conduct this research. Additionally, we extend our appreciation to Max Brown for his valuable assistance in arranging the visit.
• The IRM is a US national multi-user facility supported through the Instrumentation and Facilities program of the National Science Foundation, Earth Sciences Division (NSF EAR-2153786), and by funding from the University of Minnesota.

References
[1] Zhou W., Van der Voo R., Peacor D. R., Zhang Y. (2000) Earth Planet. Sci. Lett. 179, 9.
[2] Özdemir Ö. (1987) Phys. Earth Planet. Inter. 46, 184.
[3] Furuta T. (1993) Geophys. J. Int. 113, 95.
[4] Sager W., Hoernle K., Petronotis K. (2020).
[5] Jackson M., Moskowitz B., Rosenbaum J., Kissel C. (1998) Earth Planet. Sci. Lett. 157, 129.
[6] Bilardello D. (2023) IRM Q. 32-4.
[7] Clark D. A. (2016) ASEG Extended Abstracts 2016, 1.
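The field-dependence parameter χHd = (χ_300 A/m − χ_30 A/m)/χ_30 A/m used in this poster reduces to simple arithmetic on two susceptibility measurements; a minimal sketch follows, with purely illustrative input values (not measured data from these samples).

```python
def chi_hd(chi_300: float, chi_30: float) -> float:
    """Room-temperature field dependence of magnetic susceptibility,
    chi_Hd = (chi_300A/m - chi_30A/m) / chi_30A/m, returned in percent,
    from susceptibilities measured at 300 A/m and 30 A/m applied fields
    (same units for both, e.g. 10^-6 SI)."""
    return (chi_300 - chi_30) / chi_30 * 100.0

# Illustrative values only: a strongly field-dependent sample
# would show chi_Hd of order tens of percent.
print(f"{chi_hd(chi_300=1550.0, chi_30=1000.0):.1f}%")
# A field-independent sample gives chi_Hd near zero:
print(f"{chi_hd(chi_300=1001.0, chi_30=1000.0):.1f}%")
```

Since both measurements use the same instrument and units, any calibration factor cancels in the ratio, which is what makes χHd a convenient shipboard screening parameter.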
poster