TobaccoFree Ireland endgame threatened by increase in Smoking and E-cigarette use among Adolescents
Salome Sunday MPH, Joan Hanafin PhD, Luke Clancy MD, PhD
TobaccoFree Research Institute Ireland, TU Dublin, www.tri.ie
21st SRNT-Europe Annual Conference, September 15-17 2021 | Virtual

WHAT THIS STUDY ADDS
• Prevalence of smoking among 16-year-olds in Ireland increased in 2019 for the first time since 1995.
• E-cigarette ever-use and current use are associated with increased teen smoking.
• Tobacco Free Ireland's endgame strategy is under threat.

INTRODUCTION
In Ireland, as in many European countries, teenage smoking has declined from the very high prevalence of the 1990s. Current smoking decreased significantly between 1995 and 2019, from 32% to 20% in ESPAD countries (1,2) and from 41% in 1995 to 13% in 2015 in Ireland (3), a decline accounted for by tobacco control measures (4). In 2019, however, smoking among Irish teenage boys increased (2). We attempt to explain this unexpected finding by comparing two comparable ESPAD samples from 2015 and 2019.

METHODS AND MATERIALS
• The ESPAD (European School Survey Project on Alcohol and other Drugs) survey is a cross-sectional survey conducted every four years in 35 European countries, with the aim of collecting comparable data on substance use among students aged 15-16 years in Europe.
• Data: we used data from two waves (2015 and 2019) of the Irish arm of the ESPAD survey.
• Sample: 1,493 students in 2015 and 1,949 students in 2019.
• Analysis: a multivariable logistic regression analysis was performed to examine the prevalence of smoking and to identify the factors associated with the change in adolescent smoking.

RESULTS
Current smoking increased significantly in 2019 compared with 2015, with an adjusted odds ratio (AOR) of 1.64 (95% CI 1.13-2.39) for boys. This was accompanied by an increase in ever-use of e-cigarettes, AOR 2.06 (95% CI 1.50-2.83) overall, with a greater increase in AOR for girls. Mother's education at college/university level was also associated with an increased AOR of 2.30 (95% CI 1.14-4.65) for boys but not for girls; father's education was not significant. Truancy was significantly linked with current smoking, AOR 5.31 (95% CI 3.21-8.78), and the strongest association was with easy access to cigarettes, AOR 6.40 (95% CI 4.16-9.86). Ever-smoking decreased slightly overall, but ever e-cigarette use was positively linked with ever-smoking in both boys and girls. Mother's higher education was also positively linked with ever-smoking, but only in boys. Access to cigarettes again showed the strongest association with ever-smoking, AOR 4.39 (95% CI 3.43-5.6).

CONCLUSIONS
This trend analysis showed an association between adolescent cigarette smoking and perceived risk, truancy, and peer smoking, factors which did not deteriorate between 2015 and 2019. While access to cigarettes was the strongest association, the negative impact of the observed increase in youth e-cigarette use on teenage cigarette smoking is worrying.

REFERENCES
1. Hanafin J, Sunday S, Keogan S, Clancy L. Gender difference results in increase in adolescent smoking in 2019 in Ireland: European trend analysis of current smoking prevalence 1995-2019. Irish Journal of Medical Science. 2021 Jan 7;190:3.11.
2. Sunday S, Keogan S, Hanafin J, Clancy L. ESPAD 2019: European Schools Project on Alcohol and Other Drugs in Ireland. Dublin: TFRI; 2020.
3. Li S, Keogan S, Taylor K, Clancy L. Decline of adolescent smoking in Ireland 1995-2015: trend analysis and associated factors. BMJ Open. 2018;8(4):e020708.
4. Li S, Keogan S, Clancy L. Does smoke-free legislation work for teens too? A logistic regression analysis of smoking prevalence and gender among 16 year olds in Ireland, using the 1995-2015 ESPAD school surveys. BMJ Open. 2020;10(8):e032630.

CONTACT
Prof. Luke Clancy, TobaccoFree Research Institute Ireland (TFRI), TU Dublin. Email: lclancy@tri.ie. Website: www.tri.ie
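The adjusted odds ratios reported above come from a multivariable logistic regression: a fitted coefficient beta with standard error SE maps to AOR = exp(beta), with a Wald 95% CI of exp(beta ± 1.96·SE). A minimal sketch; the beta and SE here are back-calculated from the poster's reported AOR for boys' current smoking, purely for illustration, not refitted from the study data:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Turn a logistic-regression coefficient and its standard error into
    an odds ratio with a Wald-type 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Illustration only: beta and se back-calculated from the reported
# AOR of 1.64 (95% CI 1.13-2.39) for boys' current smoking.
beta = math.log(1.64)
se = (math.log(2.39) - math.log(1.13)) / (2 * 1.96)
aor, ci_low, ci_high = odds_ratio_ci(beta, se)
```

Because the CI is symmetric on the log-odds scale, the same conversion recovers both bounds from one coefficient and standard error.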
poster
Initial Results from SAR-Based Validation of Sea Ice Drift Forecast Models
Martin Bathmann*1, Stefan Wiehle1, Anja Frost1, Lasse Rabenstein2, Gunnar Spreen3
1 German Aerospace Center (DLR), Maritime Safety and Security Lab, Bremen, Germany; 2 Drift+Noise Polar Services, Bremen, Germany; 3 University of Bremen, Institute of Environmental Physics, Germany
*martin.bathmann@dlr.de

Introduction
• Optimal shipping routes through drifting sea ice are increasingly important for navigation in polar regions
• Sea ice drift information is obtained from Synthetic Aperture Radar (SAR) [5]
• Evaluation of the usability of sea ice drift forecast models for multi-day sea ice analysis
• Improvements for the high-resolution forecast application Predictive Ice Image (PRIIMA) [2]
• Forecast model trajectories derived with Lagrangian tracking
• Sea ice drift vector fields obtained from successive SAR-scene pairs
• Comparison of forecast model trajectories and SAR-based drift vector fields

Data and Methodology
• Historical TOPAZ4 [4] and neXtSIM [3] forecasts from December 2021
• 7 Sentinel-1 SAR-scene pairs from 2 regions of interest (ROIs) in the Lincoln Sea
• Vector model for data processing: high flexibility, floating-point number resolution; object-oriented programming (OOP), topology, hashing and spatial indexing
• Divergence maps derived with Sobel kernels [1]

Lagrangian Tracking
• Trajectories and measurements both calculated starting from a regular grid
• Forecast model sea ice drift interpolated at every grid point with cubic splines
• Runge-Kutta 4th-order (RK4) [6] combined with Inverse Distance Weighting (IDW) into a refined approach (RK4-IDW)

RK4:
X_IDW^(k+1) = X_IDW^k + f_IDW^(k+1) Δt
f_IDW^(k+1) = (1/6)(f_IDW1 + 2 f_IDW2 + 2 f_IDW3 + f_IDW4)
f_IDW1 = f(X_IDW^k, t_k)
f_IDW2 = f(X_IDW^k + f_IDW1 Δt/2, t_k + Δt/2)
f_IDW3 = f(X_IDW^k + f_IDW2 Δt/2, t_k + Δt/2)
f_IDW4 = f(X_IDW^k + f_IDW3 Δt, t_k + Δt)

IDW (sum over the surrounding forecast model grid points i):
d_i,tj = distance between grid point i and the current location
u_i,tj = sea ice drift at grid point i at time t_j
f_IDW = (Σ_i u_i,tj / d_i,tj) / (Σ_i 1 / d_i,tj)

RMS separation distance (s_i = separation distance between forecast model trajectory and SAR-based measurement):
RMS separation distance = sqrt(Σ_{i=1}^{n} s_i² / n)

[Diagram: forecast model grid points with one-hour drift vectors u_i,tj, distances d_i,tj to the current location, and the intermediate RK4 evaluations f_IDW1..f_IDW4 along the forecast model trajectory vs. the SAR-based measurement.]

Initial Results
[Figure: RMS separation distance [km] over timeframes of one, two and three days, measured values vs. quality information (QI), for TOPAZ4 and neXtSIM with IDW and RK4. Maps: neXtSIM and TOPAZ4 48 h forecasts, 6th-8th December 2021, with the ROIs.]

Conclusions
• Trajectories: RK4-IDW yields smoother trajectories; small difference (ca. 200 m) between IDW and RK4-IDW
• TOPAZ4: viscous-plastic rheology; rheology without brittleness of sea ice, but good overall drift; difficult to derive deformation fields
• neXtSIM: brittle rheology; problems of low drift near land; divergence field is promising
• Only a small case study so far
• Overall RMS separation distance of TOPAZ4 and neXtSIM between 3 and 5 km/day

Further Research
• How is the measured sea ice deformation represented in the forecast models?
• How can the influence of the sea ice rheology be derived by evaluating the forecast model input data (e.g. winds and ocean currents)?
• Which other solutions for sea ice analysis can be put into practice with the available OOP approach?

References
[1] Albedyll, L. von: Sea ice deformation and sea ice thickness change, Dissertation, Universität Bremen, 2022.
[2] Drift+Noise: PRIIMA - Predictive Ice Image, https://business.esa.int/projects/priima, last access: 31 January 2023, 2019.
[3] European Union - Copernicus Marine Service: neXtSIM: Arctic Ocean Sea Ice Analysis and Forecast, https://doi.org/10.48670/moi-00004, 2020.
[4] European Union - Copernicus Marine Service: TOPAZ4: Arctic Ocean Physics Analysis and Forecast, https://doi.org/10.48670/moi-00001, 2015.
[5] Frost, A., Wiehle, S., Singha, S., and Krause, D.: Sea Ice Motion Tracking from Near Real Time SAR Data
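The RK4-IDW stepping, IDW interpolation and RMS separation distance described above can be sketched as follows. A minimal pure-Python sketch: the uniform drift field and the point layout in the demo are hypothetical, not PRIIMA or model output:

```python
import math

def idw(point, grid_points, grid_vectors, power=1):
    """Inverse Distance Weighting of drift vectors from the surrounding
    forecast-model grid points to an arbitrary location."""
    num_x = num_y = den = 0.0
    for (gx, gy), (ux, uy) in zip(grid_points, grid_vectors):
        d = math.hypot(point[0] - gx, point[1] - gy)
        if d == 0.0:
            return (ux, uy)          # exactly on a grid point
        w = 1.0 / d ** power
        num_x += w * ux
        num_y += w * uy
        den += w
    return (num_x / den, num_y / den)

def rk4_idw_step(x, t, f, dt):
    """One RK4 step X_{k+1} = X_k + (1/6)(f1 + 2 f2 + 2 f3 + f4) * dt,
    where f(x, t) is the (IDW-interpolated) drift field."""
    f1 = f(x, t)
    f2 = f((x[0] + f1[0] * dt / 2, x[1] + f1[1] * dt / 2), t + dt / 2)
    f3 = f((x[0] + f2[0] * dt / 2, x[1] + f2[1] * dt / 2), t + dt / 2)
    f4 = f((x[0] + f3[0] * dt, x[1] + f3[1] * dt), t + dt)
    fx = (f1[0] + 2 * f2[0] + 2 * f3[0] + f4[0]) / 6
    fy = (f1[1] + 2 * f2[1] + 2 * f3[1] + f4[1]) / 6
    return (x[0] + fx * dt, x[1] + fy * dt)

def rms_separation(separations):
    """RMS separation distance between forecast trajectory points and the
    corresponding SAR-based measurements."""
    return math.sqrt(sum(s * s for s in separations) / len(separations))

# Demo with a hypothetical uniform drift field of 1 km/h eastward:
pos = rk4_idw_step((0.0, 0.0), 0.0, lambda p, t: (1.0, 0.0), 1.0)
```

For a uniform field RK4 reduces to a single Euler step; the scheme only pays off when the interpolated field varies within one time step.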
poster
Exploring variation in ED attendances and acute hospital admissions for Ambulatory Care Sensitive Conditions: A federated data analysis
Richard Jacques1, Rebecca Simpson1, Madina Hasan2, Simone Croft2, Richard Campbell1, Peter Bath1, Susan Croft1, Suzanne Mason1 and the HDR UK Regional Linked Data Consortium
1 SCHARR, Division of Population Health; 2 Data Connect. Email: r.jacques@sheffield.ac.uk

Aim
ACSCs are conditions where effective community care and case management can help prevent the need for hospital admission.
• Identify potentially avoidable acute admissions, focussing on ambulatory care sensitive conditions (ACSCs).
• Understand variation across the country in acute hospital admissions and Emergency Department (ED) attendances.

Methods
Federated multi-regional analysis: 7 regions, with 23 ED and 26 APC sites.
• Internal analysis: 1 region and 5 sites
• External analysis: 6 regions and 21 sites
Data sources (01/11/2021 to 31/10/2022, adults 18+):
• ED data (ECDS): unplanned first emergency care attendances at Type 1 EDs
• Inpatient data (APC): completed acute emergency admissions
Pre-specified aggregate-level tables (split by ACSC and non-ACSC) covered patient demographics, attendance/admission characteristics, and attendance/admission outcomes. The aggregate-level data were combined by the lead site to produce overall summary statistics and make comparisons between hospitals.

Conclusions
ED data
• A high proportion of ED attendances are with ACSCs, with significant variation in attendance rates.
• More patients were admitted from ED with ACSCs than with non-ACSCs.
Inpatient data
• A high proportion of admissions are for ACSCs, with significant variation in admission rates.
• A higher proportion of long stays were for ACSCs.
Overall
• Results could indicate a failure of care in pre-hospital settings (i.e., primary and community care).
• Further research is needed to establish clearer criteria for potentially avoidable admissions and same day emergency care-eligible patients.
Results
• Number of first attendances: 32,880 to 114,190; median number of attendances: 70,963.
• Overall percentage of ACSC attendances: 30% (range 14% to 54%).
• 32% of ACSC attendances result in admission (range 4% to 48%); 23% of non-ACSC attendances result in admission (range 7% to 41%).
• Number of admissions: 3,960 to 51,905; median number of admissions: 27,388.
• Overall percentage of ACSC admissions: 41% (range 21% to 61%).
• The most common conditions were: 1. Low risk chest pain (13.5%); 2. Lower respiratory tract infection or community acquired pneumonia (10.6%); 3. Falls including syncope and collapse (7.8%); 4. Abnormal liver function (5.9%); 5. Urinary tract infection (5.8%).
• Higher proportion of short-stay (<2 days) admissions for non-ACSC (51% vs 47%); higher proportion of longer stays (2 days or over) for ACSC conditions (53% vs 49%).
• Median length of stay range: 0-5 days for ACSC; 0-5 days for non-ACSC.
• The most common conditions were: 1. Condition unspecified (24.8%); 2. Low risk chest pain (9.7%); 3. Lower respiratory tract infection or community acquired pneumonia (9.1%); 4. Upper GI haemorrhage (6.9%); 5. Supraventricular tachycardias (4.3%).

This research is part of, and funded by, the Data and Connectivity National Core Study, led by Health Data Research UK in partnership with the Office for National Statistics and funded by UK Research and Innovation (grant ref MC_PC_20058).
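The federated design above, in which only pre-specified aggregate tables leave each site and the lead site combines them, can be sketched in miniature. All counts here are invented for illustration, not the study's data:

```python
# Hypothetical aggregate tables from three sites; only these counts
# leave each site, mirroring the federated design (numbers invented).
site_tables = [
    {"acsc_attendances": 300, "non_acsc_attendances": 700},
    {"acsc_attendances": 500, "non_acsc_attendances": 500},
    {"acsc_attendances": 200, "non_acsc_attendances": 800},
]

def pool(tables):
    """Lead-site step: sum the per-site aggregate tables key by key."""
    totals = {}
    for table in tables:
        for key, count in table.items():
            totals[key] = totals.get(key, 0) + count
    return totals

totals = pool(site_tables)
acsc_pct = 100 * totals["acsc_attendances"] / (
    totals["acsc_attendances"] + totals["non_acsc_attendances"])
```

The point of the design is that patient-level records never move: pooling operates only on counts, so the lead site can report overall percentages and between-site ranges without ever holding identifiable data.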
poster
Privacy in Context - Operationalizing The Risk Based Approach
Erasmus University Rotterdam, marlon.domingus@eur.nl, January 2022

Data processing in academia (lifecycle): data collection, analysis, publication, archiving, deletion; with personal data storage, masking and/or de-identification, and anonymization.

Risk management process:
1. Categorizing Risk
2. Prioritizing Risk
3. Risk Assessment: Likelihood & Impact
4. Selection of Controls
5. Evaluation

Risk categories: 1. Strategic Risk; 2. External Risk; 3. Operational Risk
Risks related to: 1. Data Subjects; 2. Data Processing & Data Transfer; 3. Tooling & Platforms Used; 4. Third Parties and Subcontrollers; 5. Controller's Reputation; 6. Society & Environment
Action: 1. Short Term; 2. Medium Term; 3. Do Nothing
Controls: 1. Organizational; 2. Technical; 3. Legal; 4. Design; 5. Training
Risk after controls: 1. Fully Mitigated; 2. Unchanged Risk; 3. Residual Risk

Each stage takes as input: 1. Standards & Norms; 2. Legislation; 3. Ethics & Values (and, for governance: 1. Policy & Governance; 2. Policy Guidelines; 3. Policy Review Cycle). Outputs at the various stages include:
• Risk Log Entry; Policy Recommendations; Audit Recommendations; Risk Acceptance Statement
• Measures; Training Content; Design Recommendations
• DPIA, TIA, Security Assessment Report; Communication to Data Subjects (Transparency); List of vulnerabilities
• Risk Log Entry; Start Risk Management; Risk Acceptance Statement
poster
Liver elastography scores in a large population cohort in rural Uganda: Uganda Liver Disease Study (ULiDS)
Sheila F Lumley1,2*, Beatrice Kimono3*, Joseph Mugisha3, Brian Ssengendo3, Elizabeth Waddilove4, Richard Ndungutse3, Moses Kwizera Mbonye3, Ponsiano Ocama5, Philippa C Matthews4,6,7^, Robert Newton3^
* joint first, ^ joint senior
1 Peter Medawar Building for Pathogen Research, Nuffield Department of Medicine, University of Oxford, South Parks Road, Oxford OX1 3SY, UK; 2 Department of Infectious Diseases and Microbiology, Oxford University Hospitals NHS Foundation Trust; 3 MRC/UVRI/LSHTM Uganda Research Unit; 4 The Francis Crick Institute, 1 Midland Road, London NW1 1AT, UK; 5 Makerere University College of Health Sciences in Kampala, Uganda; 6 Division of Infection and Immunity, University College London, Gower St, London WC1E 6BT, UK; 7 Department of Infection, University College London Hospitals, 235 Euston Rd, London NW1 2BU, UK
Contact: sheila.lumley@gtc.ox.ac.uk, philippa.matthews@crick.ac.uk

INTRODUCTION
- Chronic liver disease (CLD) represents an increasing healthcare burden in many global settings.
- In many populations in the WHO Africa region, the prevalence, aetiology and outcomes of CLD represent a neglected challenge.
- Hepatitis B virus (HBV) and Human immunodeficiency virus (HIV) are associated with liver fibrosis.
- Non-invasive tests (NITs) such as liver elastography act as a surrogate measure of fibrosis.
Aims: (1) estimate the prevalence of liver disease in a large population cohort in rural Southern Uganda using elastography; (2) define the distribution of elastography scores in a subset of HBV- or HIV-infected individuals.

METHODS
- Between April and June 2023, we performed a cross-sectional study (Uganda Liver Disease Study, ULiDS) nested within the Kyamulibwa Uganda MRC General Population cohort (GPC), a well-established rural cohort of ~22,000 individuals in Kalungu District.
- 517 adults, including a subgroup with known HBV infection, were selected from the GPC. Pregnant women were not eligible to participate in the study.
- We administered a questionnaire to identify exposure to known CLD risk factors, took anthropometric measures, collected bloods for HIV, HBV, HCV and malaria infection status, and performed liver elastography (Fibroscan, Echosens, Paris).
- Elastography score was expressed in kilopascals (kPa) as the median of 10 successful acquisitions; scores with interquartile range/median >0.30 were excluded.
- We applied the following thresholds to interpret the elastography score:
  Elastography score (kPa) | Interpretation
  <7                       | CLD excluded
  7-12                     | Indeterminate
  >12                      | CLD likely
- Analysis was performed using R v4.2.2 (https://www.R-project.org/Licenses)

RESULTS
(1) Demographics
We reviewed 517 adults; the median age of participants was 53 years (IQR 44-63, range 23-96), and there were 216 males and 301 females. Current alcohol consumption was reported by 221/517 individuals (42.7%). Median BMI was 21.01 (IQR 18.77-23.57). 23 had HBV infection and 64 HIV infection (one individual had HBV/HIV co-infection). (Table 1)
Table 1: Demographics for ULiDS cohort

(2) Elastography scores
In the absence of chronic BBV infection, the median elastography score was 5.3 kPa (IQR 4.4-6.2); scores were >7 kPa in 59/431 (13.7%) and >12 kPa in 4/431 (0.9%), compared with 5/23 (21.7%) and 1/23 (4.3%) in HBV infection, and 11/64 (17.1%) and 2/64 (3.1%) in HIV infection, respectively.
Figure 1: Distribution of liver elastography scores across subpopulations with differing BBV infection status

In a univariate analysis, elastography scores were higher in males (p=0.003) and in chronic HBV infection (median 6.4, IQR 5.7-6.7, p=0.042).

DISCUSSION
- In this rural East African population, elastography data suggest potential associations between liver fibrosis and HBV, male sex and low BMI.
- Male sex has been previously identified as a clinical predictor of elevated liver stiffness in HBV-infected individuals.
- Low BMI may be associated with high levels of liver steatosis in metabolic liver disease.
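The kPa thresholds and the IQR/median reliability rule described above can be expressed as a small classifier. A sketch only: the label strings are our own, and the study's analysis was actually performed in R:

```python
def interpret_elastography(kpa, iqr=None, median=None):
    """Classify a liver stiffness score using the study's kPa thresholds;
    scans with IQR/median > 0.30 were excluded as unreliable."""
    if iqr is not None and median is not None and iqr / median > 0.30:
        return "unreliable (IQR/median > 0.30)"
    if kpa < 7:
        return "CLD excluded"
    if kpa <= 12:
        return "indeterminate"
    return "CLD likely"

# The cohort's no-BBV median of 5.3 kPa falls in the "CLD excluded" band.
labels = [interpret_elastography(s) for s in (5.3, 8.0, 14.2)]
```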
poster
Progress with the new research data management system at HartRAO
Poster presentation at the Library & Information Services (LISA) VIII Conference "Astronomy Librarianship in the era of Big Data and Open Science", 6-9 June 2017, European Doctoral College, Strasbourg, France

The Hartebeesthoek Radio Astronomy Observatory (HartRAO) participates in global astronomic and geodetic research activities. Astronomy activities focus largely on the use of the 26-m radio telescope to conduct astronomical observations (single-dish observations and astronomical Very Long Baseline Interferometry (VLBI)), whilst geodetic activities focus on using the HartRAO 15-m and 26-m radio telescopes for geodetic VLBI, Global Navigation Satellite Systems (GNSS) positional reference stations, weather data, seismic systems, satellite laser ranging (SLR) and Doppler Orbitography and Radiopositioning (DORIS) systems. Added to the existing instruments and techniques are gravimetric instruments, a Lunar Laser Ranger (LLR) and, in the near future, a VLBI Global Observing System (VGOS) telescope. HartRAO also participates in the African VLBI Network (AVN), a network of radio telescopes in Africa. Data and data products produced by HartRAO's expanded range of on-site and off-site instruments must be archived and stored at HartRAO and made accessible to the scientific community. The data management and storage systems currently in use have certain drawbacks, such as being distributed and outdated, as well as having a limited capacity to manage additional large data volumes, types and user requirements. This necessitated the design and implementation of a new, next-generation Geodetic Research Data Management System (GRDMS), which will comply with internationally accepted standards.
Main objectives of the system are to organise, structure and store geodesy- and geodynamics-related data and data products in a central data bank, maintain information about the archival of the data, and disseminate data, data products and information in a timely manner to the global research community. Components of our data management system will be similar to, and incorporate the same software as, the Crustal Dynamics Data Information System (CDDIS) and University NAVSTAR Consortium (UNAVCO). Data structures and file-naming conventions of the CDDIS and UNAVCO will be used for all geodetic data. Each dataset will receive persistent interoperable identifiers, Digital Object Identifiers (DOIs). A web-based graphic user interface (GUI) for the dissemination of data and data products will be provided to users. We present progress to date on various sub-systems as well as a top-level conceptual model of the GRDMS.

Internal Steps [SI] data flow cycle:
• [SI1]: Data collection (Vector) and storage of raw data from the various stations on the Archive
• [SI2]: Raw data is streamed to the Data Processing Unit for processing into data products
• [SI3]: Processed data are sent to, and stored in specified formats and structures in, the Archive

External Steps [SE] data flow cycle:
• [SE1]: The scientific community can interact with and request data via a website (HTTP) and/or FTP
• [SE2]: The Data Access System obtains the requested data from the Archive
• [SE3]: Requested data are packaged into a single compressed file and made available

Custom Steps [SC] data flow cycle (for simple requests via the online interface):
• [SC1]: Requests are submitted on the website via a special interface
• [SC2]: Requests are translated to a script and sent to the Data Processing Unit
• [SC3]: The Data Processing Unit obtains the required raw data and processes it
• [SC4]: Processed data is sent to the Archive for storage
• [SC5]: The Data Access System retrieves the processed data
• [SC6]: Results are sent to the requesting user

Acknowledgements
The authors would like to acknowledge funding awarded by the National Equipment Programme (NEP) of the National Research Foundation (NRF) for the development of the GRDMS.
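The external-request cycle [SE1]-[SE3] above can be sketched in miniature. The class names, method names and dataset identifiers here are illustrative assumptions, not the actual GRDMS software:

```python
# Sketch of the external-request cycle [SE1]-[SE3]; names are invented.
class Archive:
    """Central data bank holding raw data and data products."""
    def __init__(self):
        self._store = {}
    def put(self, dataset_id, data):
        self._store[dataset_id] = data
    def get(self, dataset_id):
        return self._store[dataset_id]

class DataAccessSystem:
    """[SE2] obtains requested data from the Archive and
    [SE3] packages it into a single bundle for the requester."""
    def __init__(self, archive):
        self.archive = archive
    def handle_request(self, dataset_ids):
        return {d: self.archive.get(d) for d in dataset_ids}

archive = Archive()
archive.put("gnss-2017-001", b"raw GNSS observations")
archive.put("slr-2017-042", b"SLR normal points")
bundle = DataAccessSystem(archive).handle_request(["gnss-2017-001"])
```

Separating the Archive from the Data Access System mirrors the conceptual model: the archive only stores and serves, while request handling, packaging and (in the custom cycle) processing live in their own components.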
poster
ANALYSIS OF THE USE OF DUAL PROTECTION (CONTRACEPTION AND STI PREVENTION) IN PATIENTS ADMITTED TO THE OBSTETRIC PATHOLOGY SERVICE IN JANUARY 2021 OF THE HOSPITAL GINECO-OBSTÉTRICO ISIDRO AYORA, QUITO, ECUADOR
Morales Carrasco María Fernanda*, López Sosa Carlos Alberto**
* Obstetrician-gynaecologist, Master's in HIV, Hospital Gineco-Obstétrico "Isidro Ayora"
** Obstetrician-gynaecologist, Master's in HIV, CODESER
FIRST ACADEMIC MEDICAL CONFERENCE ON MATERNAL-NEONATAL AND GYNAECOLOGICAL WELLBEING "HGOIA"

Objective: To analyse the use of dual protection in patients admitted to the Obstetric Pathology Service of the Hospital Gineco-Obstétrico "Isidro Ayora".

Methodology: A descriptive cross-sectional study was carried out by administering surveys to patients admitted to Obstetric Pathology in January 2021; the data were analysed with the statistical package SPSS, version 22.0.

Keywords: dual protection, STIs, HIV, contraceptive methods

Results: 55.8% of the women surveyed had no knowledge of dual protection, and 62.2% did not use it. The level of knowledge of dual protection was statistically significantly related to age, educational level and number of pregnancies. Use of dual protection was statistically significantly related to the level of knowledge about contraceptive methods. The contraceptive preference of patients at discharge was for hormonal methods (53.7%). Of the patients who knew about dual protection, only 1.81% intended to use it at the end of pregnancy.

Conclusion: Knowledge and use of dual protection are low; sexual and reproductive education must emphasise their importance in the prevention of unwanted pregnancies and STIs/HIV.

Figure 1. Preferred contraceptive method at the end of pregnancy
Table 1. Relationship between the level of knowledge of contraceptives and the use of dual protection
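The statistically significant relationships reported above (for example, knowledge of dual protection versus its use) are the kind typically assessed with a Pearson chi-square test of independence. A minimal pure-Python sketch with hypothetical counts, not the study's data:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts (not the study's data): rows = knows / does not
# know about dual protection; columns = uses / does not use it.
chi2 = chi_square_2x2(80, 20, 30, 70)
significant = chi2 > 3.841  # critical value for df = 1 at p = .05
```

SPSS reports the same statistic (plus its exact p-value) in the Crosstabs procedure; the shortcut formula above is specific to 2x2 tables.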
poster
Conference on Nonlinear Systems & Dynamics
IISER Pune, 15-18 December 2022

Fractional Maps of complex fractional order with poles
Divya D. Joshi*
November 2022

The existence of chaos in nonlinear maps of complex orders has recently been studied in [1]. Various maps were examined in that paper. Smooth maps such as the logistic, Gauss, Duffing, and Hénon maps do not show chaos, while discontinuous maps such as the Bernoulli and circle maps, and non-differentiable maps such as the tent and Lozi maps, do show chaos. These systems are not differentiable, or are discontinuous at some point, and we argued that it is necessary that the function is not analytic. The following work can be seen as an extension of this work. Here, I study a function which has simple poles at some points and show that such functions show chaos as well. In this context, I study the q-deformed logistic map and find that it shows chaos for certain deformations.

The logistic map is defined as: x_{n+1} = λ x_n (1 - x_n), where λ is a parameter lying in the interval [0,4]. As per [1], the logistic map being a smooth map defined by an analytic function, chaos is absent for the complex-order fractional logistic map. I introduce q-deformation into the complex-order fractional logistic map. q-deformation in nonlinear maps was studied earlier by Jaganathan and Sinha in [2], where the q-deformation is given as: x_def = x / (1 - ε(x - 1)), where ε is the deformation parameter. The fractional-order logistic system with q-deformation can be formulated as:

x(t) = x(0) + (1/Γ(α)) Σ_{j=1}^{t} [Γ(t - j + α) / Γ(t - j + 1)] × [λ x_def(j-1)(1 - x_def(j-1)) - x(j-1)].   (1)

Here, α is the complex fractional order and x(0) is the initial condition. For simplicity, I set α = α₀ e^{irπ/2} with 0 < α₀ < 1 and 0 ≤ r < 1. In our previous work, we observed that maps that were discontinuous or non-differentiable showed chaos. Here I have a q-deformed fractional logistic map of complex order, which has second-order poles and is therefore non-analytic.
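Eq. (1) can be implemented directly. A minimal sketch: for brevity it uses a real order α (so the standard-library math.gamma suffices), whereas the poster's complex order α = α₀e^{irπ/2} additionally requires a complex-valued gamma function, e.g. scipy.special.gamma:

```python
import math

def q_deform(x, eps):
    """q-deformation x_def = x / (1 - eps*(x - 1)), after Jaganathan
    and Sinha [2]."""
    return x / (1 - eps * (x - 1))

def fractional_logistic(lam, alpha, x0, steps, eps=0.0):
    """Iterate Eq. (1) with a real order alpha: each new value sums the
    full memory of past increments with gamma-ratio weights."""
    x = [x0]
    for t in range(1, steps + 1):
        acc = 0.0
        for j in range(1, t + 1):
            weight = math.gamma(t - j + alpha) / math.gamma(t - j + 1)
            xd = q_deform(x[j - 1], eps)
            acc += weight * (lam * xd * (1 - xd) - x[j - 1])
        x.append(x0 + acc / math.gamma(alpha))
    return x

# Undeformed case (eps = 0) with a modest lambda: the orbit stays bounded.
traj = fractional_logistic(lam=2.5, alpha=0.7, x0=0.1, steps=20)
```

Note the full-memory structure: unlike the ordinary logistic map, every past state contributes to x(t) through the slowly decaying gamma-ratio weights.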
The existence of chaos in this case strengthens our hypothesis. Figures 1(a) and 1(b) show bifurcation diagrams for ε = 0 and ε = -1.5, respectively. No chaos is seen for ε = 0, while chaos exists for ε = -1.5. The system does not show chaos for values of ε ≥ -1.

*Divya D. Joshi is with the Department of Physics, Rashtrasant Tukadoji Maharaj Nagpur University, email: divyajoshidj27@gmail.com.

Figure 1: Bifurcation diagram of Re(x(t)) versus λ for α₀ = 0.7, r = 0.01, x(0) = 0.1 for (a) ε = 0 and (b) ε = -1.5.

A multistable system has coexisting attractors for the same parameters with different initial conditions. Figure 2 shows the existence of multistability for the q-deformed fractional logistic map of complex order. The result suggests a relation between the analyticity of complex fractional-order functions and the existence of multistability.

Figure 2: Multistability of Re(x(t)) versus λ observed for α₀ = 0.7, r = 0.01, ε = -1.5 with the different initial conditions x(0) = 18 + 19i and x(0) = 0.1.

In conclusion, I note the following observations made in this work: chaos exists for the q-deformed logistic map of complex fractional order; the system does not show chaos for values of ε ≥ -1; and the system shows multistability.

References
[1] Divya D Joshi, Prashant M Gade, and Sachin Bhalekar. Study of low-dimensional nonlinear fractional difference equations of complex order. Chaos: An Interdisciplinary Journal of Nonlinear Science, 32(11):113101, 2022.
[2] Ramaswamy Jaganathan and Sudeshna Sinha. A q-deformed nonlinear map. Physics Letters A, 338(3-5):277-287, 2005.
poster
How can polyphosphate fertilizer alleviate cadmium stress in tomato plants in hydroponic agriculture?
Rachida NACIRI1, Mohamed CHTOUKI1, Abdallah OUKARROUM1
1 Plant Stress Physiology Laboratory, College of Agriculture and Environmental Sciences, Mohammed VI Polytechnic University, Benguerir, Morocco

OBJECTIVE
Elucidate how different phosphorus fertilization regimes (ortho-P vs. poly-P based) can improve some biophysiological processes in tomato plants exposed to Cd stress.

Hypothesis: Highly polymerized poly-P fertilizers can reduce Cd availability in the nutrient solution and limit its uptake by tomato roots and translocation to aerial parts.

MATERIAL & METHOD
Tomato seeds were germinated and grown in a growth chamber with a day/night cycle of 16/8 h at 24 °C in modified Hoagland nutrient solution.

RESULTS
• Changes in micronutrient availability in the nutrient solution depend significantly on the P form.
• Poly-P fertilizer improves iron homeostasis in the shoot (Fe/Cd, Fe/Zn, Fe/P, and Fe/Mn ratios) compared with ortho-P. (Figure: Polyphosphate fertilizer improves the Fe/micronutrient ratio in shoot tissue under Cd stress.)
• Cadmium stress significantly alters the electron transport flux per reaction centre (ET0/RC) as well as RE0/RC, especially under ortho-P fertilization regimes. Polyphosphate fertilizer attenuates the effect of Cd stress on the electron transport chain and photosynthesis efficiency. (Figure: Effect of cadmium level and P fertilizer form on the chlorophyll content and chlorophyll stability index.)
• Plant exposure to cadmium stress significantly reduced specific leaf area (SLA) by 43 and 53% in the Cd12 and Cd25 ortho-P regimes. Under poly-P regimes, however, the reduction was only 37 and 47% under medium (Cd12) and severe (Cd25) Cd stress. (Figure: Effect of cadmium level and P fertilizer form on tomato specific leaf area.)

CONCLUSIONS
Cadmium stress induces significant physiological changes in tomato plants, including a disturbance of micronutrient uptake and homeostasis, especially for iron. Poly-P fertilizer can modulate Cd toxicity in tomato by enhancing Fe, Zn and Mn uptake and their utilization in photosynthesis.
ACKNOWLEDGMENTS
The study was supported by the Plant Stress Physiology Laboratory and the School of Agriculture, Fertilization and Environmental Sciences (UM6P).
poster
Appear vs. Disappear sounds Study Protocol for HI Study Motivation Why is Change Detection important? • The performance worsens when sounds are not limited to the front hemisphere and can appear or disappear randomly, as in real life (Figure 8). • Overall performance in the appear trials is significantly better than in the disappear trials (Figure 6). • No significant differences between unaided and aided performance. • Performance for the sound localization in disappear trials was overall the worst. Influence of Sound Position • *No sig. differences were found between two HA processing, except for the sound identification task in diagonal trials including appearing and disappearing sounds (p < .05, r = .51). Experimental Design for HI Study • A mixture of 4 sounds played from the front, back or both hemispheres (Figure 2). • Sounds were randomly chosen out of a pool of 12 everyday sounds (Figure 3). • Continuous scene for 10 sec, a sound appears or disappears after 6 sec. • Listening conditions: Unaided and aided with state-of-the-art HA processing. References: [1] studio4rt (2022). Freepik. https://bitly.ws/37EiK. [2] Brungart, D.S., Cohen, J., Cord, M., Zion, D., Kalluri, S. (2014): Assessment of auditory spatial awareness in complex listening environments. The Journal of the Acoustical Society of America, 136(4), 1808–1820. https://doi.org/10.1121/1.4893932. [3] Kosminski, M., Bilert, S., Serman, M., Hoppe, U. (2022): Experimental paradigm for change detection exploration. SPIN 2023, Split. [4] Ventry, I. M., & Weinstein, B. E. (1982). The Hearing Handicap Inventory for the Elderly: a New Tool. Ear and Hearing, 3(3), 128-134. https://doi.org/10.1097/00003446-198205000-00006. Results Corresponding author: michelle.kosminski@fau.de To appear or to disappear? 
Investigating change detection in hearing-impaired listeners Conclusions 1WSAudiology, Erlangen, Germany, 2Friedrich-Alexander-University Erlangen-Nürnberg Background Michelle Kosminski1,2, Sascha Bilert1, Maja Serman1, Ulrich Hoppe2 Acknowledgments: Our thanks go to all listeners who participated in this experiment. Methods To feel safe in a real-life environment, we must detect suddenly appearing or disappearing sounds. Can hearing aids (HA) help elderly hearing-impaired (HI) listeners with detecting such changes? Let’s test! Subjective Experience • Significant correlations between aided sound identification and the situational subscore of the HHIE [4]: Subjects who performed worse in the experiment reported more difficulties in everyday life in social settings (back trials: rs = -.56, p < .05; diagonal trials: rs = -.63, p < .05). Figure 1: General example for appear (a) and disappear (b) change detection experiment. Figure 3: Interface for subjects to identify and locate the appearing or disappearing sound. Figure 2: Loudspeaker configurations used in the experiment. Figure 4: Study protocol for the first and second appointment. Grayed-out parts of the protocol are not presented here. HHIE = Hearing Handicap Inventory for the Elderly [4]. [1] For young normal-hearing (NH) listeners it is easier to locate and identify sounds appearing in a mixture in the front hemisphere [2, 3]. (a) (b) Figure 7: Performance for the sound identification task and sound localization task (N = 14). Results for the aided condition are shown for one type of HA processing*. Figure 8: Comparison of Brungart et al. [2] (N = 19, mixture of 2 and 4 sounds) and Kosminski et al. [3] (N = 14, mixture of 4 sounds) for HI participants. Figure 6: Overall performance for appear and disappear trials (N = 14). Results for the aided condition only include one type of HA processing. 
The correlation between aided performance and the HHIE situational subscore points toward the relevance of change detection in the everyday life of HI listeners. • N = 14 (4 female, 10 male) • Age range: 21–85 years (M = 59.4 years) • HI with mild to severe symmetrical hearing loss (Figure 5) • PTA: 27.5–63.75 dB HL (M = 49.24 dB HL) Participants Figure 5
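The Spearman rank correlations reported above (e.g. rs = -.56 between aided identification and the HHIE situational subscore) can be computed without any statistics library; the stdlib-only sketch below uses illustrative data, not the study's measurements.

```python
# Stdlib-only Spearman rank correlation; the sample data below are
# illustrative (assumption), not the study's measurements.

def ranks(values):
    """Assign 1-based average ranks (ties share the mean rank)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's r_s = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Worse identification scores paired with higher reported HHIE difficulty
identification_score = [0.9, 0.8, 0.7, 0.55, 0.5, 0.4]
hhie_situational = [4, 8, 10, 14, 16, 20]
rs = spearman(identification_score, hhie_situational)  # perfectly monotone -> -1.0
```

Because lower task scores pair with higher reported handicap, the toy data yield a negative rs, matching the sign of the poster's result.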
poster
Multilinguals Write Back: Modeling Language, Politics and Identity in Philippine Social Media Frances Cruz University of Antwerp, University of the Philippines Abstract Introduction Methodology Results and Discussion Conclusion References The Philippines is a multilingual country with estimates of up to 134 languages (KWF, 2014). Despite this linguistic diversity, the country’s languages are disproportionately represented in print media. While national broadsheets are primarily published in English, Filipino and regional languages tend to be the languages of tabloids, reflecting relationships between print media type, language, and class (Ables, 2003, pp. 44-45). Social media, with its multilingual digital publics, can thus give insights into different languages’ current degree of use in written discourse. This study documents the intersections of language and the public sphere through a cross-sectional model of comments on public Facebook (FB) pages of selected Philippine newspapers from 2015 to 2019. Its objectives are to train a multilabel classifier to model monolingual and code-switched language use in discussions of current events, identify differences between national and regional newspapers, and glean digital insights into the conduct of public discourse. • Code-switching was a discernible feature of online texts. • Monolingual texts in Tagalog/Filipino were the most common, followed by monolingual English entries and Cebuano/Bisaya entries. • Among the national newspapers, results from the Inquirer and Philippine Star indicated the presence of more Tagalog/Filipino than English comments, with the Manila Bulletin and Manila Times showing the opposite trend. • In terms of regional newspapers, Cebuano/Bisaya comments in the Cebu-based newspapers Cebu Daily News and Sun Star Cebu outnumbered English and Tagalog/Filipino comments. 
• In the two Mindanao-based newspapers, Mindanao Times and Sun Star Davao, Tagalog/Filipino was the most-used language, followed by English and Cebuano/Bisaya. Ables, H. (2003). Mass Communication and Philippine Society. University of the Philippines Press. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Proceedings of NAACL-HLT, 4171–4186. Jünger, J., & Keyling, T. (2019). Facepager: An application for automated data retrieval on the web. https://github.com/strohne/Facepager/. Komisyon sa Wikang Filipino (2014). Atlas Filipinas. https://kwf.gov.ph/atlas-filipinas. Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., & Duchesnay, E. (2011). Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12, 2825–2830. Selected Newspapers’ FB Pages: Manila Bulletin, Manila Times, the Philippine Daily Inquirer, the Philippine Star, Cebu Daily News, Mindanews, Mindanao Times, Sun Star Cebu, Sun Star Davao, and The Freeman were captured with Facepager (Jünger & Keyling, 2019). Data Scope: Two months per newspaper in 2015, 2017 and 2019. Classifier: A multilabel classifier in the form of a random forest (RF) (Pedregosa et al., 2011) was fine-tuned on top of a pretrained BERT feature extractor (specifically the ‘bert-base-multilingual-uncased’ model) (Devlin et al., 2019). Test and Training Set: Over 16,000 social media comments were annotated for the presence/absence of Tagalog/Filipino, Cebuano/Bisaya and English. • In terms of social media responses, regional newspapers show more diverse language use. • Despite the presence of English-language newspapers, current events are written about and responded to by a multilingual Philippine public sphere. 
• Social media thus offers opportunities not only for rethinking monolingual norms in media but also for revitalizing written forms of regional languages. Figure 5: Language use as percentage of
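The poster's classifier is a random forest over multilingual BERT features; as a stdlib-only illustration of the multilabel setup (not that model), the sketch below tags each comment independently per language using tiny hypothetical seed lexicons, so a code-switched comment can carry several labels at once.

```python
# Minimal multilabel language-tagging sketch (stdlib only). The miniature
# word lists are assumptions for illustration, NOT the poster's
# BERT + random-forest classifier.

SEED_LEXICON = {
    "tagalog": {"ang", "hindi", "naman", "kasi", "talaga"},
    "cebuano": {"ang", "dili", "kaayo", "unsa", "gyud"},
    "english": {"the", "not", "really", "because", "what"},
}

def multilabel_tags(comment, lexicon=SEED_LEXICON, threshold=1):
    """Return the set of languages with at least `threshold` lexicon hits;
    each label is decided independently, which is what makes it multilabel."""
    tokens = set(comment.lower().split())
    return {lang for lang, words in lexicon.items()
            if len(tokens & words) >= threshold}

# A code-switched Tagalog/English comment receives both labels:
tags = multilabel_tags("hindi naman talaga because what")
```

The annotation scheme matches the one described above: each of the 16,000 training comments was labelled for the presence/absence of each language separately.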
poster
U.S. Environmental Protection Agency Office of Research and Development A Cheminformatics Workflow for Higher-Throughput Modeling of Chemical Exposures From Biosolids Paul M. Kruse1,2, Caroline L. Ring2 1. Oak Ridge Institute for Science and Education; Oak Ridge, TN 2. Center for Computational Toxicology and Exposure, United States Environmental Protection Agency; Research Triangle Park, NC Introduction Methods Workflows Results and Discussion Conclusion Paul Kruse l kruse.paul@epa.gov l 0000-0001-5516-9717 Abstract number: 4397 Poster number: P184 • Biosolids are treated sewage sludge produced as a byproduct of the wastewater treatment process • Under the Clean Water Act, the US EPA Office of Water (OW) has the responsibility to protect human health and the environment from adverse effects of pollutants that may be present in biosolids • EPA has identified over 700 chemicals found in previous National Sewage Sludge Surveys (NSSS) and Biennial Reviews (BR) [1, 2] • OW has developed a Biosolids Screening Tool (BST), a software tool implementing a model for risk screening for potential human and ecological exposures to biosolids chemicals • The BST models two disposal pathways: land application of biosolids and landfilling of sewage sludge • The BST requires many chemical-specific input parameters, including physico-chemical and fate and transport property data • We created the R package ccdR to access and retrieve data from the Center for Computational Toxicology and Exposure (CCTE) APIs • We developed an automated workflow to collect and process the input parameters the BST requires to run for each chemical. 
This new R-based workflow: o Integrates chemical information from publicly available cheminformatics databases and tools  EPA CompTox Chemicals Dashboard (CCD) [3]  OPEn (quantitative) structure-activity/property Relationship App (OPERA) [4]  ClassyFire chemical-classification tool [5]  httk R package [6] o Interfaces with the existing Microsoft Access implementation of the BST via Microsoft Excel input/output References • 591 of the 623 biosolids chemicals with ClassyFire classifications and concentration data had available physico-chemical property data. An additional three chemicals were removed due to incomplete physico-chemical property data or property data values outside the range of applicability for the BST, leaving 588 chemicals • Removed 222 dioxin-like compounds from this list, since they require cumulative exposure assessment and cannot be modeled appropriately using the BST, leaving 366 chemicals • Removed chemicals missing sufficient biosolids concentration data, leaving 339 remaining chemicals for mean concentration and 345 for 95th percentile concentration simulations • The workflow for collecting physico-chemical property data, filtering out chemicals missing necessary values for running the BST, and correctly formatting data for use within the BST takes under 15 minutes to execute for roughly 700 chemicals • The BST takes about 30 hours to run all possible scenarios for roughly 340 chemicals (average/wet/dry climate types, surface disposal/land applied use, mean/95th percentile biosolids concentration) • The workflow is easily adjusted to accommodate high-throughput screening efforts, and the BST runs on the output of the R-based workflow • This workflow combines several publicly available cheminformatics tools and databases to prepare chemicals for screening using the BST • This high-throughput workflow is easily adaptable for facilitating rapid chemical prioritization under other chemical fate scenarios • The workflow leverages the ccdR R package we 
developed, which can be deployed rapidly and easily in a variety of data pipeline workflows [Flowchart: Biennial Reviews & Sewage Sludge Surveys → Prioritization of Chemicals for Assessment → Risk Screening (exceeds EPA level of concern? if not, Low Priority) → Risk Assessment (exceeds EPA level of concern? if not, Low Priority) → Consider Regulation if the chemical may harm human health or the environment]
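The sequential filtering described above (complete physico-chemical properties, then excluding dioxin-like compounds, then requiring concentration data) can be sketched as a short pipeline; the records and field names below are hypothetical, not the EPA data model (the actual workflow is the R-based ccdR pipeline).

```python
# Toy sketch of the BST pre-filtering steps (stdlib only).
# Records and field names are assumptions for illustration.
chemicals = [
    {"id": "A", "props_complete": True,  "dioxin_like": False, "has_conc": True},
    {"id": "B", "props_complete": False, "dioxin_like": False, "has_conc": True},
    {"id": "C", "props_complete": True,  "dioxin_like": True,  "has_conc": True},
    {"id": "D", "props_complete": True,  "dioxin_like": False, "has_conc": False},
]

def bst_ready(records):
    """Apply the pre-filters in order; return survivors plus a per-step count log."""
    log = {}
    step = [c for c in records if c["props_complete"]]   # need full property data
    log["with_properties"] = len(step)
    step = [c for c in step if not c["dioxin_like"]]     # need cumulative assessment instead
    log["non_dioxin"] = len(step)
    step = [c for c in step if c["has_conc"]]            # need biosolids concentration data
    log["with_concentration"] = len(step)
    return step, log

ready, log = bst_ready(chemicals)
```

On the real inventory these same steps produce the 623 → 588 → 366 → 339/345 reduction reported above.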
poster
Introduction Couples sometimes have fewer children than they intend to, resulting in a gap between intended family size and completed fertility. Education and union patterns are considered important factors behind the postponement of fertility, which in turn is a key component of the fertility gap, because physiological constraints on fertility intensify at older reproductive ages. We aim to measure how much education and union patterns contributed to the fertility gap of Dutch women born during 1974-1984 (mean of 0.23, Table 2). Research question How much did union formation and dissolution as well as educational attainment contribute to the gap between intended family size and completed fertility for Dutch women born between 1974 and 1984? Theory We assume that most pregnancies are a conscious decision by a couple. However, significant shares of pregnancies are unplanned and unintended. Some couples who want to get pregnant are also unable to, due to low fecundity and miscarriage. By including contraceptive behaviour and physiological constraints on fertility, we model both perceived and actual control over fertility. Figure 1: Conceptual framework based on literature review Data Methods We simulate the reproductive life courses of individual women (couples). We predict the fertility of the cohort by generating cumulative distribution functions based on cohort distributions and parameters derived from the data, from which we make random draws. Simulation models The simulation model is split into two separate parts: the union formation and reproductive processes. The models iterate monthly from age 15 to 55 (or until sterility) to match the reproductive window and the menstrual cycle. No conceptions occur during enrolment in education or outside of cohabitation. We simulate 100,000 women. 
Figure 2: Union formation simulation model (woman stays in her current state if the transition conditions are not met) Figure 3: Reproductive process simulation model (IUM = Intrauterine mortality) Results Education barely contributes to the fertility gap, because later and less partnering among highly educated women is compensated for by more stable unions. Moderate changes in marriage, separation, and re-partnering all have small contributions to the gap. Divorce essentially has no contribution (70% marry, event occurs late in life course). The contribution of postponement of first cohabitation increases with duration due to physiological constraints on fertility at older ages. A reduction in the share of women who ever partner contributes substantially to the gap. Figure 4: Distribution (%) of women by intended family size and completed fertility. Most women realised their fertility intentions (diagonal line), underachievement (below line) was much more common than overachievement (above line). The fertility gap increased somewhat with education. 
Table 1: Contributions to the fertility gap (mean 0.33)
Parameter | Adjustment | Contribution to fertility gap
Share of highly educated | Increase by 10 %-points | 0.013
Share of highly educated | Share of 25-34 year-olds in 2022 | 0.010
Share who marry | Decrease by 5 %-points | 0.023
Share who separate | Increase by 5 %-points | 0.019
Share who divorce | Increase by 5 %-points | 0.003
Share who re-partner | Decrease by 5 %-points | 0.027
Age at first cohabitation | Increase mean by 1 year | 0.029
Age at first cohabitation | Increase mean by 3 years | 0.061
Age at first cohabitation | Increase mean by 5 years | 0.181
Share that ever cohabit | Decrease by 5 %-points | 0.092

Table 2: Simulation results versus reference data
Indicator | Demographic data | Simulation results (1974-1984 cohort)
Mean age at first marriage (CBS) | 30.10 | 29.22
Mean age at first cohabitation | 24.50 | 24.30
Mean age at first separation (no previous divorce) | 28.50 | 28.15
Mean age at first divorce | 37.60 | 38.07
Mean age at first repartnering | NA | 33.32
Mean number of partners (1954-1964) | 1.20 | 1.43
Percent ever cohabited | 95.00 | 95.05
Percent ever married | 70.00 | 72.20
Percent cohabited & never separated
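A heavily simplified, stdlib-only version of such a monthly microsimulation is sketched below; the hazards and the linear fecundability decline are illustrative assumptions, not the study's estimated parameters.

```python
# Toy monthly microsimulation of one reproductive life course (ages 15-55).
# All rates are illustrative assumptions, not the study's estimates.
import random

def simulate_woman(rng, intended=2, p_start_union=0.01, p_separate=0.002,
                   base_fec=0.2):
    """Completed fertility for one woman: conception is only attempted while
    cohabiting and below the intended family size (perfect contraception
    afterwards -- a simplification of the poster's model)."""
    in_union, births = False, 0
    month = 15 * 12
    while month < 55 * 12:
        age = month / 12
        if not in_union:
            in_union = rng.random() < p_start_union   # first/next cohabitation
        elif rng.random() < p_separate:
            in_union = False                          # union dissolution
        elif births < intended:
            fec = base_fec * max(0.0, (45 - age) / 20)  # fecundability -> 0 by 45
            if rng.random() < fec:
                births += 1
                month += 9                            # pregnancy: no new conception
        month += 1
    return births

rng = random.Random(42)
cohort = [simulate_woman(rng) for _ in range(2000)]
# Fertility gap: intended family size minus mean completed fertility
gap = 2 - sum(cohort) / len(cohort)
```

In this toy model the gap arises only from never or late partnering, separation, and declining fecundability, echoing the poster's finding that postponed first cohabitation and a reduced share of women who ever partner are the largest contributors.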
poster
The Role of Atmosphere Feedbacks During El Niño General Circulation Models (GCMs) still have trouble simulating the observed frequency, structure and amplitude of the El Niño-Southern Oscillation (ENSO) phenomenon. Recent work (Guilyardi et al., 2004, 2008) suggests that the atmosphere plays a dominant role in determining the properties of ENSO. The work described here builds on this by analyzing the two main ENSO-relevant ocean-atmosphere feedbacks in the WCRP CMIP3 multimodel dataset. Can differences in the modelled feedbacks help explain the diverse ENSO simulations in models? James Lloyd*1, Eric Guilyardi1, Julia Slingo2, Hilary Weller1, Adam Scaife2 1 NCAS Climate, Department of Meteorology, University of Reading, UK 2 Met Office, Exeter, UK *Email: j.b.b.lloyd@reading.ac.uk • The heat flux feedback, α, can be related to the ENSO amplitude in the models. • The SW component could help explain the model diversity in both overall α and ENSO amplitude. Next steps: • Look for links between the feedbacks and the mean-state biases in the models. • Understand the dynamical µ feedback, especially the relationship with ENSO amplitude. There are two main ocean-atmosphere feedbacks relevant to ENSO: • Dynamical (Bjerknes) feedback: τxA = µ·SSTA, a positive feedback (µ) linking zonal wind stress anomalies (τxA) and SST anomalies (SSTA). • Thermodynamical feedback: QA = α·SSTA, a negative feedback (α) linking total surface heat flux anomalies (QA) and SST anomalies. These feedbacks are diagnosed in GCMs by linearly regressing the relevant variables. Both feedbacks are generally underestimated in the models compared to the ERA40 and OAFlux (α only) reanalysis data. The zonal wind stress coupling with an East Pacific SST change is too weak, as is the heat flux damping response. There is thus an error compensation between the two feedbacks. 
The relationship of α to the ENSO strength (measured by the SST Niño 3 standard deviation) shows that the models with the strongest damping have the weakest ENSO, whereas those with the weakest damping generally exhibit a stronger ENSO. However, the corresponding µ graph (not shown) shows the opposite of the expected result; this will need to be investigated. Figure 1. The two ocean-atmosphere feedbacks in ERA40: (a) Zonal surface stress anomaly (Niño4) against SST anomaly (Niño3); the linear fit gives a value for µ. (b) Surface total heat flux anomaly (Niño3) against SST anomaly (Niño3); the linear fit gives a value for α. Each point represents one monthly average. Niño4 = 160E-150W, 5N-5S (West Pacific); Niño3 = 150W-90W, 5N-5S (East Pacific) 1. Motivation 2. ENSO Ocean-Atmosphere Feedbacks 3. The Feedbacks in the GCMs a) Annual averages of µ and α b) Relationship to ENSO strength 5. Conclusions Figure 3. ENSO amplitude against α feedback for ERA40, OAFlux and 12 CMIP3 models. Figure 2. Average annual Niño 4 µ (blue bars) and Niño 3 α (red bars) for ERA40, OAFlux (α only) and 12 CMIP3 models. a) The total heat flux can be separated into four components: shortwave radiation (SW), longwave radiation (LW), latent heat flux (LH) and sensible heat flux (SH). The individual feedbacks are calculated for each of these in the Niño 3 region. In the East Pacific the sign of αSW depends on the large-scale circulation, with a negative feedback in areas of ascent and a positive feedback in subsident regimes (Ramanathan & Collins, 1991; Philander et al., 1996). By binning the vertical velocity at 500 hPa (w500) according to SST, we have calculated the Niño 3 ‘ascent threshold’ for each model: the average SST above the mean state at which ascent occurs. 4. Understanding the α Feedback a) Splitting up the net feedback b) The SW component, αSW Figure 5. Ascent threshold vs. ENSO amplitude. Figure 4. Average annual α feedback components in Niño 3 for ERA40, OAFlux and 12 CMIP3 models. 
The latent heat and shortwave feedback components dominate, but it is the shortwave component, αSW, that exhibits the most variation between models. µ ~ 12 × 10^-3 N m^-2 °C^-1; α ~ -19 W m^-2 °C^-1
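Both feedbacks are diagnosed by linearly regressing monthly anomalies on Niño 3 SST anomalies. The stdlib-only sketch below uses synthetic data seeded near the ERA40 estimates quoted above, not actual reanalysis or model output.

```python
# Diagnose mu and alpha by least-squares regression of monthly anomalies on
# SST anomalies (synthetic data -- an assumption -- not ERA40/CMIP3 output).
import random

def regress_slope(x, y):
    """Least-squares slope of y on x through the origin (anomalies are centred)."""
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

rng = random.Random(0)
sst_anom = [rng.gauss(0, 0.8) for _ in range(240)]         # 20 years of months, deg C
true_mu, true_alpha = 12e-3, -19.0                         # N m^-2/C and W m^-2/C
taux_anom = [true_mu * s + rng.gauss(0, 2e-3) for s in sst_anom]
q_anom = [true_alpha * s + rng.gauss(0, 5.0) for s in sst_anom]

mu = regress_slope(sst_anom, taux_anom)    # Bjerknes (positive) feedback
alpha = regress_slope(sst_anom, q_anom)    # heat-flux damping (negative)
```

The recovered slopes are positive for µ and negative for α, the sign convention used throughout the poster.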
poster
Jeferson Freitas, jeferson@consultead.com.br Matheus Gabriel da Silva, matheuscoelho032@gmail.com Cleonir Tumelero, cleonir.tumelero@up.edu.br The Relationship between Artificial Intelligence and Business Model Innovation in Startups 1 INTRODUCTION Artificial intelligence (AI) has been driving the automation of organizational processes and operations at an accelerated pace in recent years. Both individuals and corporations are attentive to, and concerned about, this forceful and disruptive change, seeking to keep up with the new technological tools already in use and with the innovative trends of the future that is taking shape (Junior & Torres, 2022, p. 2). Companies today have a greater ability to innovate; as a result, markets supported by AI will likely need to evolve. AI is a tool that supports an organization's staff and is essential for business management, process automation, the definition of innovative strategies along the production chain, market analysis, and the analysis of human-resources performance (Raimundo & Sebastião, 2021). 1.1 OBJECTIVES/RESEARCH QUESTION The objective of this study is to mine scientific texts at the theoretical frontier of knowledge on artificial intelligence (AI) and new business models in startups. The research question is: "What are the relationships and the semantic intensity of the main terms concerning artificial intelligence and business model innovation in startups in scientific studies?" 2 LITERATURE REVIEW A rapid emergence of startups that use AI as part of their business model can be observed (Weber et al., 2021). Regarding AI, Junior and Torres (2022) state that it is changing the way people and organizations carry out their processes and operations. 
New business models, aligned with digital transformation, have been driving a revolution in the automated use of data and information for decision-making on the one hand, and the discovery of new formats of work on the other. A business model, according to Teece (2010), describes a company's business logic: the value proposition that is offered, how value is created and delivered to customers, and how revenue is generated and captured. For Skala (2022), startups are ventures whose business model is shaped by innovation, rapid growth, and high ambition. They design and validate their business models under uncertainty and with limited resources. The business model must enable the scalability of the venture, which is achieved by leveraging digital technologies or other technical and organizational solutions. AI is a key technology in the new industrial revolution; it can change the interaction between the participants of companies and society as a whole (Kulkov, 2021, p. 1). For Weber et al. (2021), a rapid emergence of startups that use AI as part of their business model can currently be observed. 3 METHODOLOGY An exploratory study was conducted using the VosViewer software, followed by an analysis of the bibliometric networks that characterize national and international production on the topic, based on a search of the Web of Science (WoS) database from 2018 to 2022. The search used the descriptors Startup, Artificial Intelligence, and Innovation. Using these three descriptors together yielded 284 articles. 4 RESULTS Network of the keywords with the highest occurrence in national and international publications with Startups, Artificial Intelligence, and Innovation, indexed in WoS. Eight clusters occur, with a network of the most cited words and their respective frequencies. In the Network Visualization (Fig. 
1), it can be seen that startups, artificial intelligence, and innovation are related. These three terms are all present in cluster 4 and group together by affinity or proximity. The thin connecting line (edge) between AI and startups shows that this link
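The co-occurrence network underlying such a VosViewer map can be illustrated in a few lines of stdlib Python; the article keyword sets below are toy examples, not the 284 WoS records.

```python
# Keyword co-occurrence counting, the basis of a bibliometric network:
# nodes are keywords, edge weights count the articles containing both.
from itertools import combinations
from collections import Counter

# Hypothetical article keyword sets for illustration
articles = [
    {"startup", "artificial intelligence", "innovation"},
    {"startup", "business model", "innovation"},
    {"artificial intelligence", "machine learning"},
    {"startup", "artificial intelligence"},
]

edges = Counter()
for kws in articles:
    for a, b in combinations(sorted(kws), 2):
        edges[(a, b)] += 1  # one co-occurrence per article containing both

strongest = edges.most_common(1)[0]
```

Heavier edges (higher counts) are drawn as thicker lines in the visualization, which is why the thin AI-startup edge in Fig. 1 signals a comparatively weak link.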
poster
Developed by the Community Lab at the MDI Biological Laboratory in Bar Harbor, Maine. Anecdata is a free online citizen science platform to support individuals and organizations in collecting, managing, and sharing citizen and community science data. Get started @ Anecdata.org! Anecdata is used by hundreds of individuals and organizations in 76 countries and 6 languages to gather, share, and access citizen science observations. Check out our Project Best Practices Guide! and Community Standards! Questions? Get in touch! Email us at anecdata@mdibl.org or @Anecdata_org Create Project managers create projects, choose data licenses, and create the datasheets. Collect Participants join projects and use the Anecdata website or mobile app to make measurements and collect data. Share Data are incorporated into the project and can be analyzed and shared! Download the app! App Store Google Play
poster
Development of the share of closed articles in the reporting period 2016–2020 (comparison: reporting year vs. new data collection, June 2022) Development of the share of gold open access articles in the reporting period 2016–2020 (comparison: reporting year vs. new data collection, June 2022) Development of the share of green open access articles in the reporting period 2016–2020 (comparison: reporting year vs. new data collection, June 2022) Development of the share of hybrid articles in the reporting period 2016–2020 (comparison: reporting year vs. new data collection, June 2022) Development of the open access shares of journal articles by researchers at institutions of the State of Berlin, 2016–2020 This work is licensed under a Creative Commons Attribution 4.0 International License. Target of 60 % open access for journal articles reached The overall share of open access journal articles with Berlin authorship in the publication year 2020 is broken down into the share of articles in genuine open access journals (gold open access), the share in hybrid journals (hybrid open access), and the share of open access secondary publications (green open access). In total, 13,925 articles were identified. 8,858 articles were published open access, corresponding to a share of 63.6 %. The target formulated by the State of Berlin in its 2015 Open Access Strategy, to reach a 60 % open access share for scholarly journal articles by 2020, was therefore slightly exceeded. The analysis of the data newly collected in June 2022 for the years 2016–2020 further shows that the share of open access articles is above the previously known figures for all publication years. Based on the newly collected figures, the overall open access share is in fact already 64.6 %. 
The analysis includes the nine public research institutions of the State of Berlin with the highest publication output: ●Alice Salomon Hochschule (ASH) ●Berliner Hochschule für Technik (BHT, formerly Beuth Hochschule) ●Charité – Universitätsmedizin Berlin (Charité) ●Freie Universität Berlin (FU Berlin) ●Hochschule für Wirtschaft und Recht Berlin (HWR) ●Hochschule für Technik und Wirtschaft Berlin (HTW) ●Humboldt-Universität zu Berlin (HU Berlin) ●Technische Universität Berlin (TU Berlin) ●Universität der Künste (UdK) Report: https://doi.org/10.14279/depositonce-15778 (as of October 2022) Data for the report: https://doi.org/10.14279/depositonce-15780 (as of October 2022) Contributors: Data collection: Jenny Delasalle (Charité), Pamela Finke (HU Berlin), Sean Nowak (FU Berlin), Alexandra Schütrumpf (TU Berlin), Michaela Voigt (TU Berlin) Data processing and analysis: Martin Hampl (FU Berlin), Pamela Finke (HU Berlin) Concept and technical implementation of data aggregation and preparation: Eva Bunge (Deutsches Museum München), Michaela Voigt (TU Berlin) Poster: Maaike Duine (Open-Access-Büro Berlin) Authors of the report: Maxi Kindling (Open-Access-Büro Berlin) maxi.kindling@open-access-berlin.de 0000-0002-0167-0466 Jenny Delasalle (Med. Bibliothek der Charité – Universitätsmedizin Berlin) 0000-0002-2241-4525 Pamela Finke (Humboldt-Universität zu Berlin, Universitätsbibliothek) 0000-0001-9086-3202 Steffi Grimm (Freie Universität Berlin, Universitätsbibliothek) 0000-0001-5055-9492 Michaela Voigt (Technische Universität Berlin, Universitätsbibliothek) 0000-0001-9486-3189 Total number of journal articles: 13,925 Overall open access share: 8,858 (63.6 %) Gold open access: 3,928 (28.2 %) Green open access: 2,076 (14.9 %) Hybrid open access: 2,854 (20.5 %) This poster shows the development of the open access shares of articles in scholarly journals for gold, hybrid and green open access as well as closed access for the years 2016–2020. 
The figures are based in each case on the data analyses from the respective reporting years and on data from a new collection which was carried out in June 2022.
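The 2020 shares above can be re-derived directly from the reported counts (13,925 articles in total; 3,928 gold, 2,076 green, 2,854 hybrid):

```python
# Recompute the poster's 2020 open access shares from the reported counts.
counts = {"gold": 3928, "green": 2076, "hybrid": 2854}
total_articles = 13925

open_access = sum(counts.values())                         # 8858 OA articles
oa_share = round(100 * open_access / total_articles, 1)    # overall OA share in %
shares = {k: round(100 * v / total_articles, 1) for k, v in counts.items()}
```

This reproduces the poster's figures: 63.6 % overall, split into 28.2 % gold, 14.9 % green and 20.5 % hybrid.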
poster
The settler was designed and built in the Water and Chemistry laboratories of Universidad Sergio Arboleda, guaranteeing a controlled environment for the experimental tests. Figure 1 presents the methodology implemented for its design, as well as the sequence established for carrying out the experimental trials, with a detailed description of the parameters evaluated. 1 Students of the Environmental Engineering program, Universidad Sergio Arboleda, Bogotá, Colombia. Emails: carlos.cadavi01@usa.edu.co, juliian.ramos01@usa.edu.co, camilo.sanchez03@usa.edu.co and ana.meza01@usa.edu.co. ORCID: 0009-0002-1932-2598, 0009-0002-8794-7880, 0009-0008-0812-7651 and 0009-0009-5592-8002. 2 Associate professor, Environmental Engineering program, Universidad Sergio Arboleda, Bogotá, Colombia. Email: victor.lizcano@usa.edu.co. ORCID: 0000-0002-1569-4784. 1 Carlos Fernando Cadavid Zapata, 1 Julián Esteban Ramos Arias, 1 Camilo Andrés Sánchez Gutiérrez, 1 Ana Sofia Meza Julio, 2 Victor Augusto Lizcano Sandoval. Performance Evaluation of a High-Rate Settler Prototype This study analyzed the performance of a high-rate settler prototype designed and built in the laboratories of Universidad Sergio Arboleda. Conventional sedimentation systems face significant limitations in removing fine particles, such as clay, which negatively affects the quality of the treated water. To evaluate the settler's effectiveness, physicochemical parameters such as pH and flow rate were considered in an experimental approach that included measurements of turbidity and total suspended solids (TSS). The results revealed that a pH close to 7.5 improves particle interactions, facilitating floc formation and increasing sedimentation rates. 
Likewise, using aluminum sulfate as a coagulant at a pH of 8.4 achieved a significant improvement in the removal of turbidity (66%), color (55%) and TSS (56%). Regarding flow rate, it was determined to be a critical factor for system efficiency: excessively high flow rates hinder particle settling, while low flow rates compromise the settler's capacity. However, problems were identified in the design and construction of the prototype, such as inappropriate plate dimensions and deficiencies in their fixation, which caused leaks and alterations in the hydraulic flow, negatively affecting system performance. These findings highlight the need for a robust design and precise construction to guarantee efficient performance in wastewater treatment processes. Keywords: Flow rate; Performance; Turbidity; High-rate settler. Abstract Objectives Conclusions The pH affects particle charge and coagulant effectiveness, with a pH close to 7.5 being ideal for reducing turbidity. At pH 8.4 with coagulant, the neutralization and floc-formation processes are optimized, improving sedimentation rates and system efficiency. Flow rate influences the hydraulic retention time, which is key to sedimentation. High flow rates hinder particle settling, while low flow rates reduce system capacity. In clay-laden water without coagulants, precise flow control is essential to improve solids removal. Errors in the dimensions and fixation of the plates generated leaks and flow alterations, negatively impacting settler efficiency. This highlights the importance of an adequate design and precise construction to ensure performance. References Davis, M. L. (2010). Water and wastewater engineering: design principles and practice. McGraw-Hill. 
Altaher, H., ElQada, E., & Omar, W. (2011). Pretreatment of Wastewater Streams from Petroleu
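The removal percentages reported at pH 8.4 (turbidity 66 %, color 55 %, TSS 56 %) follow the standard efficiency formula (C_in - C_out) / C_in; the influent/effluent values below are illustrative, chosen only to reproduce those percentages.

```python
# Percent removal across the settler; influent/effluent values are
# illustrative (assumption), chosen to match the reported efficiencies.

def removal_efficiency(influent, effluent):
    """Removal (%) = 100 * (C_in - C_out) / C_in."""
    return round(100 * (influent - effluent) / influent)

turbidity = removal_efficiency(100.0, 34.0)  # 66 %
color = removal_efficiency(100.0, 45.0)      # 55 %
tss = removal_efficiency(100.0, 44.0)        # 56 %
```

The same formula applies to any influent concentration; only the influent/effluent ratio matters.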
poster
QualidataNet – the network for management, data sharing, and reuse of qualitative research data KonsortSWD is funded within the NFDI by the German Research Foundation (DFG) – project number: 42494171 Kati Mozygemba - A 'central point of entry' for qualitative data Services for researchers • QualidataNet offers central access to the qualitative data of the research data centres (FDZ) represented in the network. Qualitative data thus become easier to find, as do the specific contact persons for using a given dataset. • QualidataNet supports interested researchers in finding the archiving partner that suits them and their data, and guides them through the specialized infrastructure services. • A portfolio of research data management (RDM) instruments supports researchers with questions around the management, sharing, and reuse of qualitative research data. • Mutually aligned templates and guides provide orientation and make it easier to engage with the aspects that are central to data sharing and research data management for qualitative data. Services for infrastructures • QualidataNet is open to all institutions that hold qualitative data and make them available for reuse. The network partners benefit from exchange and joint developments. • Central aspects of RDM and data sharing for qualitative data are addressed jointly, which avoids duplicated development and draws on the expertise of archiving partners specialized in a given data type. • Presenting datasets in the QualidataNet portal increases the visibility and findability of the datasets and facilitates the reuse of qualitative data. 
• With QualiTerm, QualidataNet provides a controlled vocabulary for describing central elements of qualitative research, developed together with qualitative researchers, which supports research data centres in creating suitable metadata. Website and search portal - contribution to the DDI-CDI model to increase the FAIRness of qualitative data across disciplinary boundaries. Metadata and controlled vocabulary (QualiTerm). RDM portfolio for qualitative data. Network partners. Research data management portfolio for qualitative data • Coordination of the DDI-CDI subgroup Qualitative Data • Contribution to the DDI-CDI model to increase the interoperability and reusability of qualitative data across disciplines • Development of QualiTerm in cooperation with researchers • Implementation in metadata schemas • Publication via CESSDA. Special needs in the management of qualitative data • RDM consulting and templates for qualitative data must account for the openness, density of cross-references, and methodological diversity of qualitative data • Research-ethics considerations play a central role • study-specific solutions must often be found • an integrative data-sharing brochure addresses these specifics and solutions • Documentation of the effort and costs of preparing, curating, and providing research data at the FDZ • User trainings, workshops, and talks on RDM for qualitative data • Input on legal questions • Engagement with current developments such as the use of AI in qualitative research and its implications for data sharing and reuse. https://www.qualidatanet.com/ DOI: 10.5281/zenodo.14260129
poster
Operational Collaborative Tool of Ongoing Practices in Urgent Sanitation. Poster prepared by: Claire Papin-Stammose _ Solidarités International _ cpapin-stammose@solidarites.org. 42nd WEDC INTERNATIONAL CONFERENCE ONLINE: 13–15 September 2021. Equitable and Sustainable WASH Services: Future challenges in a rapidly changing world. What is OCTOPUS? An online participative platform on faecal sludge treatment in emergencies. Experience sharing from the field. Peer-to-peer learning with successes and challenges. How can OCTOPUS help you? Contextualized case studies on faecal sludge treatment in emergencies. Comprehensive case studies with technical and financial information. A toolbox to sort and compare technical solutions. Links to technical resources on faecal sludge management. A community of practitioners. A repertory of experts available upon request. Join the OCTOPUS community and share your case studies!
poster
Canterbury mudfish and severe droughts: Could an altered drought regime reduce resilience to subsequent disturbances? Christopher G. Meijer1, Richard S. A. White1, Jon S. Harding1 and Angus R. McIntosh1. Affiliations: 1School of Biological Sciences, University of Canterbury, Private Bag 4800, Christchurch 8104; cgm51@uclive.ac.nz. Why Canterbury mudfish? Stress-tolerant species (STS), like Canterbury mudfish (Neochanna burrowsius), typically persist in extreme habitats free from competition. Fragmented populations and highly modified habitat likely make this STS extremely susceptible to global changes. Methods: We sampled sites from previous surveys along the Waianiwaniwa River. Rainfall and soil moisture were downloaded from the NIWA climate database. Catch per unit effort (CPUE) was used to quantify and compare abundance. Hypotheses: Wet -> low adult mortality suppresses juvenile recruitment, creating adult-dominated populations. Mild drought -> moderate adult mortality allows for some recruitment, creating a balanced population. Severe drought -> high adult mortality, which is associated with juvenile-dominated populations. Results: Drought intensity and mudfish size distribution are not independent (χ² = 26.60, df = 14, p = 0.02). Severe drought => shift to juvenile-dominated populations and decreased mean size. Increasing drought intensity = loss of adult mudfish. High fecundity and fast growth enable quick recovery, but will be ineffective if all adults die. [Figure: length-frequency distributions (CPUE per site) vs total length (mm) for wet, mild-drought, and severe-drought years (2006/07, 2009/10, 2015/16), with monthly soil moisture index (Sept–Feb), sampling periods, mean sizes, and drying dates (dry late September to dry early January) marked.] The impact of predatory eels and trout: an interaction with drought intensity is probable because of changes to accessibility. Predicted effect of extreme drought: [Figure: percent of population by drought intensity (wet, mild, severe, extreme); predicted and observed CPUE (per site) vs mean soil moisture index (10–20 Nov). Population states: healthy population with mudfish of all sizes; few adults remain but many juveniles => quick recovery possible; all adults lost and few juveniles left => slow recovery possible; last juveniles die => local extirpation.] Conclusions: YES! Increased drought frequency or intensity could reduce the resilience of Canterbury mudfish to future droughts. Future directions: Could global change (frequent severe droughts) => extirpation? Acknowledgements: We thank the staff and students of the Freshwater Ecology Research Group at the University of Canterbury for providing field help, statistical assistance and emotional support as required. CGM completed this study during a UC summer scholarship and was supported by the Waterways Centre for Freshwater Management Contestable Fund. (Speech bubbles: "Did you know weighing us can be time-consuming?" "No, why?" "Because, unlike most fish, we don't have scales." "Got better fish puns? You'll have to let minnow." "I feel like a fish out of water...")
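The independence test reported above (χ² = 26.60, df = 14, p = 0.02) compares the size-class distribution across drought categories in a contingency table. A sketch of the computation with SciPy, using made-up counts since the study's raw table is not reproduced here:

```python
# Sketch of a drought-intensity vs size-class independence test.
# The counts below are illustrative, not the study's data.
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Rows: wet, mild drought, severe drought; columns: 8 size classes
# -> df = (3-1) * (8-1) = 14, matching the poster's degrees of freedom.
counts = np.array([
    [4, 6, 9, 12, 10, 7, 5, 3],
    [8, 9, 10, 8, 6, 4, 2, 1],
    [15, 12, 7, 4, 2, 1, 1, 0],
])
stat, p, dof, expected = chi2_contingency(counts)
print(dof)  # 14

# The reported p-value can be checked directly from the chi-square
# survival function at the reported statistic and df:
p_reported = chi2.sf(26.60, df=14)
print(round(p_reported, 3))  # close to the reported p = 0.02
```

The survival-function check is independent of any particular counts, so it verifies that the reported triple (statistic, df, p) is internally consistent.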
poster
The new Cryogenic Underground (C2U) facility: an overview. The low-radioactivity cryostat of CROSS (Cryoconcept) is equipped with a Cryomech PT415 pulse tube. A special technology (Ultra Quiet Technology) is employed to mitigate the pulse-tube vibrations. The cooling process takes place in two phases: - precooling: from 300 K to 4 K, obtained by circulating a 3He/4He mixture at high flow rate (17 mmol/s) - standard cooling: from 4 K to 10 mK, using the dilution unit in closed cycle (condensation at 2 mmol/s). The dilution unit has a cooling power of about 6 uW at 20 mK and 340 uW at 100 mK, and can reach temperatures below 10 mK over an experimental volume of 600 cm length and 30 cm diameter. The cryostat cools down to 4 K without any exchange gas: 120 kg of internal lead-copper shielding plus a few tens of kg of low-temperature detectors/holders reach 4 K in 55 hours, ready to be further cooled down to 10 mK. Within a few hours the detectors reach the 20 mK temperature region and are ready for study. The Mixing Chamber temperature is stabilized around 12 mK with temperature fluctuations within 10 uK. The best detector working point is then searched for, and noise-optimization studies begin. We performed a campaign of "seismic" measurements at the installation site to build the best strategy for placing the cryostat and decoupling it from environmental vibrations. Since the ground floor at LSC is an extremely calm reference frame (apart from human activity), the cryostat is referenced to the floor. With high-sensitivity accelerometers we further characterized the vibration profile of the 10 mK replica plate, where detectors are installed, with the pulse tube ON/OFF. Along the radial direction r, the pulse-tube high-order harmonics are not totally cancelled, despite the UQT system. These residual components are mainly due to the positioning/fixation of the rotary valve. An upgrade of the system is under study.
The Mixing Chamber is regulated via a PID system, which typically delivers a power of about 200 nW (5.5 uW) to stay at 10 mK (20 mK). Temperature/noise stabilities are key parameters for performing high-quality, long-lived bolometric searches. The cryostat is equipped with shielded, twisted-pair manganin wires. The wiring runs (without any interruption) from 300 K down to 10 mK: there the conductors are thermalized via Kapton pads thermally anchored to the cold stage. The cables are thermalized at every cryostat cold stage by mechanical clamping of the shielding. [Diagram labels: experimental space; tripod (holding the pulse-tube head); external 25 cm thick lead shielding; rotary valve; new low-noise electronics; hydroformed bellows; pulse-tube head; 300 K flange; lead socket.] The pulse-tube head is coupled to the cryostat via a flexible hydroformed bellows and is firmly held by a rigid tripod frame. The pulse-tube cold stages are thermally linked to the cryostat cold stages by gas-loaded exchangers, which prevent any mechanical contact. The cryostat is mechanically referenced to the floor: it is tightened to a platform loaded with 10 tons of lead bricks. The bricks constitute the low-radioactivity external lead shielding (25 cm thick), which protects the internal experimental volume from external radioactivity. An internal lead socket (120 kg weight), cooled down to temperatures as low as 800 mK, shields the experimental volume from the dilution unit and the upper parts of the cryostat (not low-radioactivity materials). From below, the experimental volume is protected by a 25 cm thick lead socket. Front-end: 6 detectors per card, directly plugged into the cryostat connector box. Detectors are equipped with high-resistivity sensors (Ge-NTD, 1-10 Mohm resistance). The sensor signals are read by low-noise, room-temperature amplifiers. The sensor bias-current circuitry (typically in the 10 pA-30 nA range) is also embedded in the amplifier modules.
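The mixing-chamber regulation described above (a PID loop delivering ~200 nW of heater power to hold 10 mK) can be illustrated with a toy PI controller driving a crude thermal model. The gains, plant constants, and the omitted derivative term are all simplifying assumptions, not the actual CROSS control loop:

```python
# Toy sketch of PI regulation of a mixing-chamber temperature.
# Plant model and gains are illustrative, not the cryostat's real dynamics.
def pid_step(setpoint, measured, state, kp=2e-6, ki=5e-7, dt=1.0):
    """One PI update; returns (heater_power_W, new_state)."""
    error = setpoint - measured
    integral = state["integral"] + error * dt
    power = kp * error + ki * integral
    power = max(power, 0.0)  # a heater can only add power
    return power, {"integral": integral}

def simulate(setpoint=0.010, t_base=0.008, steps=1000):
    """Crude first-order plant: base temperature 8 mK, heater raises T,
    the dilution unit relaxes T toward t_base at 10% per step."""
    state = {"integral": 0.0}
    temp = t_base
    power = 0.0
    for _ in range(steps):
        power, state = pid_step(setpoint, temp, state)
        temp += power * 1.0e3 - (temp - t_base) * 0.1  # illustrative coupling
    return temp, power

final_temp, final_power = simulate()
print(abs(final_temp - 0.010) < 5e-4)  # True: holds near the 10 mK setpoint
```

With these made-up constants the steady-state heater power works out to roughly 2e-7 W, i.e. the same order as the ~200 nW quoted on the poster; the integral term is what removes the residual offset that a proportional-only controller would leave.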
The signals are then shaped with a programmable Bessel filter and digitized with a 24-bit ADC (±10.25 V dynamic range, 5 kHz sampling frequency). [Figure: intrinsic noise of the new electronics.]
poster
Introduction & Methodology The transformation of the energy system towards a low-carbon and decentralized model requires the integration of a wide range of energy-related data, including electricity production and consumption, weather conditions, energy storage, and grid infrastructure. However, much of this data is currently dispersed across different stakeholders, such as utilities, grid operators, regulators, and consumers, and is subject to various legal, technical, and economic barriers to sharing. To overcome these challenges, a data sharing platform can provide a common space for collecting, processing, and sharing energy-related data among different actors, thus enabling the development of new services, applications, and business models based on data-driven insights. Against this background, this paper proposes a framework for ensuring transparency and involvement of the energy-related industry in a data sharing platform, based on the FAIR data principles. The proposed framework, shown in the Figure, consists of three main parts: (1) the definition of technical and organizational requirements for data sharing, (2) the involvement of industry partners in the co-creation of the platform, and (3) the collection and creation of substitute data. In the first part, the relevant partners for the energy domain should be identified. The technical and organizational requirements for data sharing are based on the FAIR data principles, which provide guidelines for making data Findable, Accessible, Interoperable, and Reusable. This includes the use of standardized data formats, metadata, and vocabularies, as well as the provision of appropriate documentation, licenses, and identifiers. In addition, the platform should support data quality control, data enrichment, and data integration services to ensure that the data is relevant, accurate, and consistent across different sources.
The involvement of industry partners in the co-creation of the platform is essential to ensure that the platform meets the needs and expectations of the energy-related industry. To facilitate this process, the platform provides collaborative and participatory tools, such as forums, workshops, and hackathons, that allow industry partners to exchange ideas and feedback. This includes the identification of data sources, data use cases, and data sharing agreements, as well as the development of new services, applications, and business models based on the data. Besides data, industry requirements for accessing and using the related services are also part of the framework. The last part of the framework is the collection and creation of substitute data. Industry partners may not always be able to provide complete data sets due to privacy concerns or other reasons. Therefore, synthetic data will be created where necessary to ensure that the platform has the data required for analysis. Tools for the anonymization of personal data are integrated to ensure that the privacy of individuals is protected. Conclusion In conclusion, the framework developed addresses the need for transparency and involvement of the energy-related industry by creating a collaborative environment where industry partners can contribute and access FAIR data, access community services, and provide feedback for continuous improvement. Additionally, the framework addresses the issue of missing or personal data through the development of tools for the creation of synthetic data and the anonymization of personal data. By incorporating industry needs and concerns, the framework facilitates collaboration between industry partners and researchers, resulting in more effective and efficient energy systems.
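The FAIR-oriented requirements above (identifiers, standardized formats, licenses, documentation) can be made concrete as a minimal metadata record plus a simple quality gate. All field names and values here are hypothetical illustrations, not the platform's actual schema:

```python
# Hypothetical metadata record illustrating the FAIR requirements named above:
# findable (identifier, keywords), accessible (access URL, license),
# interoperable (standard format), reusable (provenance, license).
dataset = {
    "identifier": "doi:10.0000/example-energy-dataset",  # made-up DOI
    "title": "15-minute household load profiles (synthetic)",
    "keywords": ["electricity", "load profile", "synthetic data"],
    "format": "text/csv",
    "license": "CC-BY-4.0",
    "access_url": "https://example.org/data/loads.csv",  # placeholder URL
    "provenance": "generated from anonymized smart-meter data",
}

REQUIRED_FIELDS = {"identifier", "title", "format", "license", "access_url"}

def missing_fields(record):
    """Return the required metadata fields the record lacks (a minimal
    data-quality gate of the kind the platform's ingest could apply)."""
    return sorted(REQUIRED_FIELDS - set(record))

print(missing_fields(dataset))  # [] -> record passes the minimal check
print(missing_fields({"title": "incomplete record"}))
```

Such a gate is deliberately simple; a real platform would validate against a published metadata schema and controlled vocabularies rather than a hard-coded field set.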
Transparency and Involvement of the Energy- Related Industry in a Data Sharing Platform Zhiyu Pan +49 (0)241 80-49713 Zhiyu.pan@eonerc.rwth-aachen.de RWTH Aachen University | E.ON Energy Research Center Institute for Energy Efficient Buildings and Indoor Climate Mathieustraße 10 | 52074 Aachen | Ge
poster
Life Cycle Sustainability Assessment of Laminated Strand Lumber in the Spanish Woodworking Sector: Integrating Economic, Environmental, and Social Dimensions. Lago-Olveira, Sara1; Gallego, María1; Coello-García, Tamara2; Alvarado-Morales, Merlín3; Entrena-Barbero, Eduardo1. 1Contactica Innovation, C/ Embajadores, 187, 28045 Madrid (Spain). 2Cesefor, Pol. Ind. Las Casas, C/ C, Parcela 4, 42005 Soria (Spain). 3Technical University of Denmark, Department of Environmental Engineering, Building 113, DK-2800 Kgs. Lyngby (Denmark). CONTEXT & OBJECTIVE. SYSTEM BOUNDARIES: Silviculture. LIFE CYCLE SUSTAINABILITY ASSESSMENT METHODOLOGY: cradle-to-gate approach. ECONOMIC PERSPECTIVE: Levelized Cost of Production (LCOP). SOCIAL PERSPECTIVE: job creation potential according to the method proposed by Pillain et al. (2019)1. ENVIRONMENTAL PERSPECTIVE. This research has been supported by CALIMERO (Industry CAse Studies AnaLysis To IMprove EnviROnmental Performance And Sustainability Of Bio-Based Industrial Processes) (No 101060546), funded by the European Commission call HORIZON-CL6-2021-ZEROPOLLUTION-01.
SOCIAL PERSPECTIVE / ENVIRONMENTAL PERSPECTIVE / ECONOMIC PERSPECTIVE. Process steps: debarking and chipping; drying and screening; pressing; glueing; edge trimming and sanding; transport. Product: Laminated Strand Lumber. FU: 1 m³ of LSL. Environmental perspective: Product Environmental Footprint (PEF) method v3.1. Economic perspective: Net Present Value (NPV), Levelized Cost of Production (LCOP), environmental externalities. Stage I / Stage II. [Figures: relative contributions (0–100%) to each impact category of Stage I vs Stage II; of each step of Stage I (silviculture, logging, transport); and of each step of Stage II (debarking & chipping, drying & screening, pressing, edge trimming & sanding).] Impact categories: Climate Change (CC), Ozone Depletion (ODP), Cancer Human Toxicity (HTC), Acidification (AC), Freshwater Eutrophication (FEU), Freshwater Ecotoxicity (EC). In the Spanish context, Laminated Strand Lumber (LSL) offers superior mechanical properties for a variety of construction applications. In addition, it is produced from parts of trees that are unsuitable for other industries, such as plywood or laminated timber, thus maximizing the use of trees. However, the woodworking sector in Spain faces a number of challenges that need to be addressed: - Mitigate the harmful effects of volatile organic compounds emitted during wood processing - Reduce energy consumption - Consider social risks such as job loss, apart from health and safety issues, particularly related to toxic risks. Therefore, the objective of this study was to analyze the level of sustainability, including environmental, social and economic dimensions, of an LSL production system in Spain.
ECONOMIC PERSPECTIVE — NPV: 1.12 M€ (10-year period); LCOP: 39.63 €·FU⁻¹, or 97.02 €·FU⁻¹ including environmental externalities (CO2 emissions priced via the Emission Trading System (ETS)). SOCIAL PERSPECTIVE — job creation potential: 82 jobs in total, of which 51 in-house (39 in Stage I, 12 in Stage II) and 31 upstream. ¹Pillain, B., Viana, L.R., Lefeuvre, A., Jacquemin, L., Sonnemann, G., 2019. Social life cycle assessment framework for evaluation of potential job creation with an application in the French carbon fiber aeronautical recycling sector. Int. J. Life Cycle Assess. 24, 1729–1742. https://doi.org/10.1007/s11367-019-01593-y
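The two economic indicators used in this poster, NPV and LCOP, are both discounted-cash-flow quantities. A sketch with purely illustrative numbers (not the study's inputs):

```python
# Sketch of the economic indicators; all figures below are illustrative.
def npv(rate, cashflows):
    """Net Present Value of yearly cash flows; cashflows[0] is year 0
    (e.g. the initial investment, as a negative number)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def lcop(rate, costs, outputs):
    """Levelized Cost of Production: discounted total costs divided by
    discounted total units of output."""
    disc_costs = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
    disc_out = sum(q / (1 + rate) ** t for t, q in enumerate(outputs))
    return disc_costs / disc_out

# Hypothetical 10-year project: 1 M€ investment, 0.25 M€/yr net revenue
flows = [-1.0e6] + [0.25e6] * 10
print(round(npv(0.05, flows)))  # positive NPV at a 5% discount rate

# Hypothetical production costs and m³ of LSL output per year
costs = [1.0e6] + [0.1e6] * 10
output = [0] + [5000] * 10
print(round(lcop(0.05, costs, output), 2))  # €/m³ (FU = 1 m³ of LSL)
```

Pricing CO2 emissions via the ETS, as done in the poster, would simply add an extra cost stream per year to `costs`, which is how the LCOP rises once externalities are included.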
poster
POTENTIAL OF APPLYING MARKETING TOOLS TO SCIENCE COMMUNICATION. Casamayú, Ignacio Agustín; Muiña, María Florencia. The donut chart shows the formats of the posts published during 2021: single images were the majority, followed by multiple images (carousel posts), videos under one minute long, and IGTV videos over one minute long. The bar chart ranks the post types by interaction (Avg Engagement), showing that multiple-image posts were the ones best received by the community that interacts with the account. Notably, videos of up to one minute attract almost half of the interaction, despite being numerically fewer. Tags and keywords: these are terms that users tend to use together in a post when they write #Antropología. Knowing them is useful for identifying the perception of organically related terms, that is, concepts that the community perceives and uses as closely linked. Related terms. There is a growing number of academic works on science communication, and we increasingly find applied cases: social media accounts, blogs, and websites for communicating various disciplines, managed by undergraduate students, researchers, working groups, and institutions, such as museums and academic units of universities, that recognize the social relevance of science communication. In the following presentation we focus on two marketing tools (social listening and social media analytics) applied to the science communication profile @shincal_equipo_investigacion on the Instagram platform.
Introduction. Social media analytics. Knowing these tags makes it possible to use the thematic hashtags to reproduce the type of content that has been well received, both in reach and in interaction. Tags with the highest interaction. Social media analytics is performed on the account profile. It is used to collect data in order to assess performance on social networks, evaluate the strategy implemented, and plan its optimization in line with the objectives the organization has set (Quintero Barrizonte et al., 2014; Vidal Fernández, 2016; Fernández y Sieso, 2017; Christensen y Khalid, 2018; Muiña, 2021). 45.36 K: number of posts that have used the tag #antropología; the higher the volume of use, the more traffic it generates and the more competitive it becomes to achieve visibility. 29: estimated difficulty of appearing among the featured posts for the #antropología tag. #Antropología. Active listening (Escucha Activa). o Christensen, L. L., & Khalid, M. S. (2018). Social media analytics dashboard for academics and the decision-making process. In Proceedings of the 11th International Conference on Networked Learning (pp. 425-431). o Fernández de la Peña, F. J., & Pereira Sieso, J. (2017). El Proyecto Arqueológico BHIT: difundiendo y midiendo un proyecto arqueológico en la Web. Complutum, 28 (1): 219-242. o Quintero Barrizonte, J. L., García Pérez, A., & Medina Ruíz, G. (2014). Plan de marketing para la revista "Universidad y Sociedad". Universidad y Sociedad, 6 (3): 20-25. o Muiña, M. F. (2021). Divulgación de Arqueología en redes sociales implementando estrategias de marketing. En Actas del XV Coloquio de Estudiantes de Arqueología PUCP. En prensa. o Vidal Fernández, P. V. (2016). Metodología para la elaboración de un plan de marketing online. 3C Empresa, 26 (5): 57-72. o Zapatero, G. R. (2009). La divulgación arqueológica: las ideologías ocultas. Cuadernos de Prehistoria y Arqueología de la Universidad de Granada, 19, 11-36.
Bibliography. The use of social platforms for science communication responds to the fact that they offer a digital infrastructure that facilitates interaction among users, the exchange of experiences, and the organization of community…
poster
Scientific Context Approach Results and Applications Case Study Name: Estrogen Receptor Model • Manuscript – Methods & ER Case Study: Internal Review • Manuscript – AR Case Study: Submitted • Manuscript – Thyroid Peroxidase Case Study: In Prep • Manuscript – Cytotoxicity: In Prep • Manuscript – Zebrafish: In Prep • R Package – Toxboot: Published • R Package – Toxpath: In Prep • Data to be available on dashboard and FTP Case Study Name: Androgen Receptor Model References Disruptive Innovation in Chemical Evaluation Assessing Uncertainty in Risk Assessment Models Eric D Watt1,2 and Richard S Judson2 1Oak Ridge Institute for Science and Education, Oak Ridge, Tennessee 2National Center for Computational Toxicology, Office of Research and Development, U.S. Environmental Protection Agency, Research Triangle Park, North Carolina epa.gov/research CSS BoSC Meeting 2016 Fig 2 Bootstrap. A) Experimental response values (circles) and hill model fit. B) Uncertainty in response values and fitted model using 1000 bootstrap resamples. Experimental (cyan) and bootstrap resampled (black) response values (circles) and hill model fits (lines). Fig 3 Bisphenol AF ATG_ERa_TRANS_up bootstrap results. Potency (AC50) values for A) Hill, B) Gnls, C) Winning model potency (hill red, gnls blue). D) Correlation between winning model efficacy (top) and AC50 (hill red, gnls blue). E) Experimental values (black circles) and model fit (black curve). Dashed line is 3x baseline median absolute deviation and solid line is assay activity cutoff. 1000 bootstrap fits are indicated (534 hill red, 466 gnls blue). F) Hit call and model selection for all chemicals. Black bars indicate chemical not a hit, red is hill model active, blue is gnls active. Fig 1 ToxCast models. A) Constant (cnst), B) Hill, and C) Gain-Loss (gnls) models. 
Over 2.6 million in vitro curves • Many chemicals (> 8,000 unique) • Many assays (> 800). Broad assay coverage • Numerous assay sources (> 10) • Many biological pathways (> 400) • Representing many species including human, rat, mouse, and fish • Diverse detection methods including fluorescence, colorimetric, radioactive, electronic sensing, and RNA transcription. Broad chemical coverage • Pesticides, food additives, green alternatives, endocrine reference compounds, water contaminants, fragrances, etc. ToxCast Pipeline offers consistent analysis • Multiple models fit to data to determine efficacy (top) and potency (AC50) (Fig 1) • Model selection based on AIC from model fits; hit call based on efficacy relative to the cutoff for the winning model. Fig 7 AR model AUC. Androgen receptor model AUC values for chemicals with an agonist or antagonist AUC > 0.05, with point estimates (circles) and 95% confidence intervals (error bars) for agonist (red) and antagonist (black) values. Fig 9 Androgen receptor model uncertainty distributions. Distributions of agonist (red), antagonist (black) or combined (orange) AUC values are explored using cumulative distribution function plots. A) Equilin is clearly active, with some uncertainty between agonist and antagonist modes. B) The prodiamine antagonist AUC is slightly above the cutoff, and the narrow distribution around this value shows high confidence in the calculated score. C) The benzoin agonist and antagonist AUC point estimates are 0, but there is ~60% probability of an antagonist value in the range 0.2-0.35, flagging this chemical as a potential false negative. Fig 8 Androgen antagonist activity shift. Comparison of androgen antagonist assay potency with high (black) and low (gold) agonist concentration. Chemicals acting as true antagonists are expected to show a potency shift. Using bootstrap confidence intervals, we determine which chemicals have a significant potency shift and are therefore likely to be true antagonists.
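The bootstrap underlying these AC50 confidence intervals resamples the concentration-response points and refits the dose-response model each time. A simplified sketch with a unit-slope Hill model and synthetic data; the actual ToxCast pipeline (toxboot) also fits the gnls and cnst models and performs model selection, which is omitted here:

```python
# Sketch of bootstrap uncertainty on a Hill-model AC50; synthetic data,
# not ToxCast's, and a simplified unit-slope Hill parameterization.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def hill(conc, top, ac50):
    """Simplified Hill model with unit slope."""
    return top / (1.0 + ac50 / conc)

# Synthetic concentration-response data: true top = 100, true AC50 = 1.0 uM
conc = np.logspace(-2, 2, 9)
resp = hill(conc, 100.0, 1.0) + rng.normal(0, 5.0, size=conc.size)

bnds = ([0.0, 1e-4], [np.inf, np.inf])
popt, _ = curve_fit(hill, conc, resp, p0=[80.0, 0.5], bounds=bnds)

# Nonparametric bootstrap: resample (conc, resp) pairs with replacement, refit
ac50_samples = []
for _ in range(500):
    idx = rng.integers(0, conc.size, size=conc.size)
    try:
        p, _ = curve_fit(hill, conc[idx], resp[idx], p0=popt,
                         bounds=bnds, maxfev=5000)
        ac50_samples.append(p[1])
    except RuntimeError:
        continue  # drop resamples where the fit fails to converge

lo, hi = np.percentile(ac50_samples, [2.5, 97.5])
print(f"AC50 point estimate {popt[1]:.2f}, bootstrap 95% CI [{lo:.2f}, {hi:.2f}]")
```

Resampling whole (concentration, response) pairs is one of several bootstrap flavors; resampling residuals around the fitted curve is another common choice and behaves better when each concentration is tested only once.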
Anticipated Products 1. Federal Register. Use of High Throughput Assays and Computational Tools; Endocrine Disruptor Screening Program; Notice of Availability and Opportunity for Comment. 2015 Jun pp
poster
SSHOC social sciences & humanities open cloud. "Social Sciences and Humanities Open Cloud" has received funding from the European Union's Horizon 2020 project call H2020-INFRAEOSC-04-2018, grant agreement #823782. Realising the Social Sciences and Humanities part of the European Open Science Cloud. Main impacts: The Social Sciences and Humanities are seamlessly integrated in the European Open Science Cloud. EU-wide availability of trusted and secure access mechanisms for SSH data, conforming to EU legal requirements. Availability of an EU-wide, easy-to-use SSH Open Marketplace, where tools and data are openly accessible. State-of-the-art Research Infrastructure in several pilot domains, advanced through dedicated SSH data pilots and cluster projects. EU-wide availability of high-quality "cloud ready" SSH tools and high-quality SSH data. Maximising reuse through Open Science and FAIR principles (standards, common catalogue, access control, semantic techniques, training). Empowering Users, Building Expertise: Broadening the SSHOC network of user communities and empowering them to a greater level of expertise in utilizing the SSHOC services, tools, and data throughout the research lifecycle and in accordance with the FAIR principles, through on- and offline trainings, training materials, and an international cross-disciplinary trainer network. Join our community: sshopencloud.eu · @SSHOpenCloud · /in/sshopencloud. "SSHOC pools, harmonizes and makes easily usable tools and services that allow users to process, enrich, analyse and compare the vast heterogeneous collections of SSH data available across the boundaries of individual repositories or institutions in Europe." Email: info@sshopencloud.eu. SSHOC Partners. Keep an eye out for the upcoming report on the LIBER 2019 workshop Social Sciences & Humanities Open Cloud and future SSHOC trainings at sshopencloud.eu
poster
Charlie J. Galicich, Student of Digital and Computational Studies, Bowdoin College, Brunswick, Maine USA, cgalicic@bowdoin.edu. Lost Possible Worlds: Toward a Narrative Approach to Computing Ethics. Rather than an "a posteriori" approach to addressing the ethics of developing digital technologies, in which movement toward more ethical practice or deployment of technology occurs only after a certain technology negatively impacts certain populations, technological development must take an a priori approach in which multiple ethical ramifications of the technology are considered beforehand. This paper illuminates the power that narratives can lend to such an a priori approach, providing imaginative variations of potential technologies that those seated at the development table may consider. Using Paul Ricœur's view of narratives as "ethical laboratories," I argue that narratives, whether fictions or case studies, effectively support good ethical deliberation at technology development tables by offering specific, contextual possibilities of how technology can affect or fail certain groups or populations. The narrative approach suggests a method of embedding ethical principles through viewing predictive narratives as imaginative variations of technologies that are distanced from such ethical principles. I dissect the short story "Burning Chrome" by William Gibson as a narrative that successfully anticipated ethical and social discussions of digital networks and their impacts, to demonstrate the value of narratives in making more informed development decisions and serving as a crucial method for an a priori approach to technological development. I then discuss Gibson's predictions in the context of the "Metaverse" to demonstrate how this narrative can serve as a crucial component of a priori deliberation in the development of this new networked environment. Find below the link to this paper.
Keywords: Computing Ethics, Technological Design and Development, Narrative, A Priori, Representation and Equitability https://zenodo.org/record/6595433#.YpYUwpNBx_Q Storytelling in the Computing Ethics Narratives Project: Narrative Ethics Theory in Practice The Computing Ethics Narratives (CEN) project is a multidisciplinary collaboration between professors at Bowdoin and Colby colleges located in Maine, US. The purpose of this website is to provide undergraduate computer science professors with narratives related to ethical uses and ethical failures of a variety of technologies so that lessons on ethics may be embedded seamlessly into their curricula. Narratives featured vary in length, medium, and place and time of origin, and are both nonfictional (journalistic articles, podcasts) and fictional (film clips, prose reading). Each narrative features two types of taxonomy: technologies involved, and ethical themes implicated. Additionally, each narrative comes with a brief summary of its contents and their relation to the taxonomic terms as well as discussion questions for learners. The website contains a search term mechanism, as well as filtered searching by ethical theme, technology involved, media type, among others. Future additions to the website ideally involve more lesson modules, an increased presence of globally diverse narratives, and the ability to create an account to accumulate and bundle one’s own selection of narratives. Find below the link to this website. https://www.computingnarratives.com/
poster
Event reconstruction and tau neutrino appearance using CNNs for KM3NeT/ORCA. T. Eberl*, M. Moser, S. Hallmann, J. Hofestädt, and S. Reck for the KM3NeT Collaboration. Erlangen Centre for Astroparticle Physics, Friedrich-Alexander-Universität Erlangen-Nürnberg. KM3NeT/ORCA: a water Cherenkov detector in the deep sea. [Detector sketch: several Mtons instrumented mass; 115 strings; 18 DOMs/string; ~64k PMTs; scale markers ~200 m, ~20–23 m, ~9 m.] Letter of Intent for KM3NeT 2.0, J. Phys. G43 (2016) no. 8, 084001. Simulated sets of KM3NeT data, at the level of individual recorded photosensor signals above a set threshold, are binned in space and time and then used as inputs for different Convolutional Neural Networks (CNNs). A complete data classification and regression chain using these CNNs is provided, cf. arXiv:2004.08254. The classification tasks consist of the rejection of background events, induced by atmospheric muons and random noise, and the separation of track-like and shower-like event topologies induced by different neutrino flavours and interaction modes. The regression CNN allows for the reconstruction of the neutrino interaction position, the neutrino energy and direction, and of their corresponding uncertainties. This analysis chain reaches a competitive and partly superior performance with respect to the classical approaches pursued in KM3NeT. Gains in sensitivity of 10% and more can be reached compared to classical approaches in event reconstruction and classification. The CNN-based reconstruction results are then used to calculate the sensitivity of KM3NeT/ORCA to deviations from the standard-model expectation for the purely oscillation-induced flux of tau neutrinos from the atmosphere. Classification: background suppression. Recorded event data are dominated by atmospheric muons (several tens of Hz trigger rate) from cosmic-ray air showers reaching the detector from above, and by pure-noise events due to 40K decays and biological background light.
An efficient selection of neutrino events (mHz rate) is necessary for physics analyses. [Figure: fraction of atmospheric neutrinos having passed classical analysis pre-selection that survive the anti-muon cut vs. the contamination (fraction of atmospheric muons) in the final sample, for the CNN and Random Forest (RF) anti-muon classifiers.] References: [1] http://github.com/KM3NeT/OrcaNet [2] http://www.tensorflow.org [3] http://keras.io KM3NeT is a distributed neutrino research infrastructure in the abyss of the Mediterranean Sea. One part of KM3NeT is the ORCA (Oscillation Research with Cosmics in the Abyss) detector. It is under construction and has been optimised to study the properties of neutrinos with GeV energies. Neutrinos are detected through the Cherenkov radiation induced by secondary particles generated in neutrino interactions in the water. To this end, several megatons of seawater will be instrumented with a 3D array of 2070 glass spheres (DOMs), each housing 31 3'' PMTs. Classification: Event topology Neutrino interactions lead to different event topologies: Track-like: νμ charged-current (CC) interactions create a final-state μ that travels ~4 m/GeV, leaving a long track-like light signature. Shower-like: electrons with a short radiation length (νeCC), τ with a short lifetime (ντCC) and neutral-current events (hadronic cascade only) are localised around the interaction vertex. [Figure, top: fraction of events classified as track (track score > 0.5) by the CNN as a function of neutrino energy. Bottom: relative improvement using the CNN compared to the RF classifier in terms of classification distance between νμCC and the respective shower-dominated electron neutrino channels.] Regression: Energy reconstruction [Figure: reconstructed energy vs. true MC neutrino energy for νeCC; the distribution is normalised to unity in each true energy bin.]
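As a minimal illustration of the input preparation described in this poster — photosensor signals above threshold binned in space and time before being fed to the CNNs — one could use `numpy.histogramdd`. The hit coordinates, bin counts and detector extents below are invented placeholders, not real KM3NeT/ORCA geometry or the OrcaNet [1] code.

```python
import numpy as np

# Sketch only: each hit carries (x, y, z, t); binning them into a 4D
# histogram yields the "image" a CNN can consume. All numbers here are
# illustrative placeholders.
rng = np.random.default_rng(0)
n_hits = 500
hits = np.column_stack([
    rng.uniform(-100, 100, n_hits),   # x [m]
    rng.uniform(-100, 100, n_hits),   # y [m]
    rng.uniform(0, 200, n_hits),      # z [m]
    rng.uniform(0, 1000, n_hits),     # t [ns]
])

bins = (11, 13, 18, 50)               # (x, y, z, t) bin counts -- illustrative
ranges = [(-100, 100), (-100, 100), (0, 200), (0, 1000)]
image, _ = np.histogramdd(hits, bins=bins, range=ranges)

print(image.shape)   # (11, 13, 18, 50)
print(image.sum())   # 500.0 -- every hit lands in exactly one bin
```

The resulting 4D array (or 2D/3D projections of it) is the kind of fixed-shape input a convolutional network expects.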
[Tau neutrino sensitivity plots: KM3NeT preliminary, Default vs. CNN] Non-appearance (= 0) exclusion at 5σ level possible within a few months of operation with the full ORCA detector. Fit results robu
poster
A REFERENCE SET OF RUMEN MICROBIAL GENOMES: THE HUNGATE1000 PROJECT. William J. Kelly*1 (bill.kelly@agresearch.co.nz), Graeme T. Attwood1, Peter H. Janssen1, Adrian L. Cookson1, Gemma Henderson1, Suzanne C. Lambie1, Rechelle Perry1, Kenneth Teh1, Nikola Palevich1, Samantha Noel1, Lynne A. Goodwin2, Nicole Shapiro2, Tanja Woyke2, Christopher J. Creevey3, Sinead C. Leahy1. 1AgResearch Ltd, Grasslands Research Centre, Palmerston North, New Zealand; 2DOE Joint Genome Institute, Walnut Creek, California, USA; 3Institute of Biological, Environmental and Rural Sciences, Aberystwyth University, UK.

Introduction.

Acknowledgements. Research is funded by the New Zealand Government to support the objectives of the Livestock Research Group of the Global Research Alliance on Agricultural Greenhouse Gases. The information contained within this poster should not be taken to represent the views of the Alliance as a whole or its partners.

Table 1. Comparison of rumen bacteria covered by culture information, 16S rRNA gene-based studies and genome sequencing.

Phylum           Cultured genera   Cultured isolates   Meta-analysis   Hungate1000
Actinobacteria   11 (13%)          25 (17%)            41 (2%)         13 (6%)
Bacteroidetes    6 (7%)            5 (3%)              907 (38%)       15 (7%)
Fibrobacteres    1 (1%)            7 (5%)              16 (1%)         1 (0%)
Firmicutes       45 (51%)          90 (62%)            1263 (53%)      186 (84%)
Fusobacteria     1 (1%)            -                   1 (0%)          1 (0%)
Proteobacteria   20 (23%)          16 (11%)            64 (3%)         3 (1%)
Spirochaetes     1 (1%)            2 (1%)              48 (2%)         2 (1%)
Synergistetes    1 (1%)            1 (1%)              11 (0%)         -
Tenericutes      2 (2%)            -                   21 (1%)         -

Development of the host-microbe relationship begins at birth and continues throughout adult life. The rumen is undeveloped at birth, and incomplete information exists about the microbial diversity and colonization dynamics that result in a functioning rumen after weaning.
It is conceivable that early-life microbial intervention represents an opportunity to manipulate the indigenous microbial populations of the ruminant and influence the lifelong environmental impact and productivity of the animal. Early life. Bifidobacteria are a major component of the microbiota of the ruminant digestive tract from birth until weaning. Two different species were sequenced, and their genomes reflect different lifestyles. Genomes of six other rumen Bifidobacterium species are in progress. Bifidobacterium longum subsp. infantis AGR2137 encodes genes for the utilization of bovine milk oligosaccharides and also host-derived glucans. This strain resembles those isolated from human infants but lacks the capacity to utilize fucose- and sialic acid-substituted oligosaccharides. Bifidobacterium pseudolongum AGR2145 encodes genes for the utilization of a range of plant-derived oligosaccharides including fructo-oligosaccharides, galacto-oligosaccharides, arabinoxylo-oligosaccharides and raffinose. This strain also encodes genes that specify sortase-dependent pili, which may facilitate host-microbe interaction. [Genome maps: B. longum subsp. infantis, 2.27 Mb; B. pseudolongum, 1.99 Mb.] Diversity. Butyrivibrio is one of the main genera of rumen bacteria able to initiate the breakdown of hemicellulose and pectin found in plant cell walls. We sequenced and compared the genomes of several strains identified as Butyrivibrio on the basis of 16S rRNA analysis. Figure 2 shows three main clusters of strains belonging to the genus Butyrivibrio, although these do not correspond to the three currently recognised species. Strains belonging to the genus Pseudobutyrivibrio form a separate cluster, while one additional strain (AE2032) presumably represents a novel genus. The genomes of the Butyrivibrio strains range in size from 3.4 to 5.1 Mb and encode between one and seven multi-domain xylanases belonging to the GH10 and GH11 families. Figure 2.
Comparison of the complete ORFeomes of Butyrivibrio genomes by functional genome distribution analysis (Altermann 2012, Front. Microbiol. 3, 48). Figure 3. Genome atlas of Ruminococcus flavefaciens AE3010. From outside to the centre: Genes on forward strand; Genes on reverse
poster
muralir@cbs.mpg.de Discussion Conclusion Introduction Results The present study I would if I could but I can't: Different types of non-prototypical Actor arguments are processed in a qualitatively similar manner R. Muralikrishnan1, Matthias Schlesewsky2, Ina Bornkessel-Schlesewsky1,3 1Research Group Neurotypology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany 2Department of English and Linguistics, Johannes Gutenberg-University, Mainz, Germany 3Department of Germanic Linguistics, University of Marburg, Germany Language is a uniquely human ability and also one of the most complex human cognitive skills. It thus appears plausible to assume that there is an intimate relationship between the structure of human language(s) and basic characteristics of human neurobiology / neurocognition. One possible way of shedding further light on this relationship is to draw conclusions about possible "universals" of language by identifying cross-linguistically recurring patterns of language processing. These universals can be considered potential candidates for links between language and cognition / neurobiology. A potential universal of language processing that has been identified in this way is the endeavour towards unambiguous identification of the "actor", i.e. the participant primarily responsible for the state of affairs described (Bornkessel-Schlesewsky & Schlesewsky, 2009). The aim of the present study was to provide a more fine-grained characterisation of linguistic actorhood and its neurocognitive ramifications by examining the role of "volitionality" as a key feature of prototypical actors. Actorhood and Volitionality: A Cross-linguistic View In English, states of affairs such as (1) and (2) are encoded in a grammatically identical manner.
Prima facie, this appears to suggest that linguistic structure is primarily attuned to encoding causality or the "starting point" of an event (MacWhinney, 1977): the protagonist, the old man, is treated identically whether he volitionally causes the event (example 1) or experiences it (example 2). In many other languages, by contrast, non-volitionality of an actor can be grammatically encoded, for example via dative case marking as in example (3) from Tamil: Similar structures occur in a range of other languages, e.g. Hindi, Japanese, Icelandic, and Russian. This recurring pattern suggests that volitionality may be a key feature in the definition of linguistic actors, since non-volitional actors are consistently flagged with additional morphological marking. Previous Neurocognitive Findings Previous studies examining the processing of non-volitional actors have focused on inanimate actors. In this regard, it has been demonstrated for several languages (including German, English, Chinese and Tamil) that inanimate actors engender an N400 effect in terms of event-related brain potentials (ERPs) and that this effect can be reduced neither to lexical differences between animates and inanimates nor to the infrequency of inanimate Actors (for an overview, see Bornkessel-Schlesewsky & Schlesewsky, 2009). However, inanimate actors are not only non-volitional but also lack other prototypical actor properties such as sentience and the capability for autonomous movement (Primus, 1999). Previous findings on inanimate actors are thus not suited to isolating the individual actor characteristic(s) that are particularly important for the cognitive definition of a prototypical actor. The aim of the present study was to isolate the neurophysiological response to non-volitional actors and to compare it to that for inanimate actors. To this end, we examined the processing of "dative subject" constructions in Tamil (cf. example 3).
Participants listened to question-answer pairs such as those in Table 1. The context questions ensured that dative arguments would be interpreted as actors rather than as indirect objects. All sentence types were also presented in a neutral c
poster
The GALAH survey: Stellar chemical tagging using phylogenetic trees Traversing the tree structure The tree structure may be regarded as any other dimensional-reduction and/or clustering technique, with the difference that here we are given an exact path or relation between any two objects in our dataset. The path we follow is represented and explained by the image above. At every step we assume that the objects have very similar chemical composition, but can have different positions and velocities through the Galaxy. To calculate full 6D position and velocity information, Gaia observations are used. This is then used to select objects with similar kinematics and possibly find stellar clusters. Phylogenetic representation A phylogenetic or evolutionary tree is a diagram that shows evolutionary relations among our data, where the position on the tree is based upon similarities and differences in the parameters that represent our observations. The trees were invented to show the evolution of DNA sequences, where similarity is defined as the number of matching nucleotides (Lemey P., 2009, The Phylogenetic Handbook). Similarity between chemical compositions of stars In our case, distances between stellar abundances were determined in a mathematical way. We tested different algorithms to compute the distance between objects: - euclidean (best for clustering repeats) - manhattan - sorensen - canberra (had the smallest number of nodes between repeated observations). The only information used to determine the distance between stars were the abundance values themselves. They were standardized before the computation in order to remove the differences in mean and variance between individual abundances. No additional information about physical properties or kinematics was used.
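The distance step described above — standardizing the abundances, then computing pairwise distances under the tested metrics — can be sketched as follows. This is an illustrative re-implementation, not the survey's pipeline code; the mock abundance array and the exact Sørensen formula used here are assumptions.

```python
import numpy as np

# Sketch of the distance computation between standardized abundance vectors.
# "sorensen" here follows the common Bray-Curtis-style definition on
# absolute values -- an assumption, since the poster does not give formulas.
def pairwise(std, metric):
    n = len(std)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            a, b = std[i], std[j]
            if metric == "euclidean":
                val = np.sqrt(np.sum((a - b) ** 2))
            elif metric == "manhattan":
                val = np.sum(np.abs(a - b))
            elif metric == "sorensen":
                val = np.sum(np.abs(a - b)) / np.sum(np.abs(a) + np.abs(b))
            elif metric == "canberra":
                val = np.sum(np.abs(a - b) / (np.abs(a) + np.abs(b)))
            d[i, j] = d[j, i] = val
    return d

rng = np.random.default_rng(1)
abundances = rng.normal(size=(6, 13))   # 6 mock stars x 13 elements

# Standardize each abundance column: zero mean, unit variance.
std = (abundances - abundances.mean(axis=0)) / abundances.std(axis=0)

dist = pairwise(std, "canberra")
```

The resulting symmetric, zero-diagonal matrix is the usual input for building a phylogenetic tree (e.g. by neighbour joining).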
Determining properties of the tree Based on how the trees are constructed, we know that objects with identical or very similar chemical composition should be located together at the top of the tree (Jofré et al. 2017, MNRAS, 467, 1140). Our selected set of observations has nearly 500 repeated objects and some members of known stellar clusters. With them we can estimate: - how many of the repeated observations are located together (14% in our case) - what is the expected average number of nodes between objects to be still considered to have similar composition (20 nodes in our case) - whether cluster members have chemical compositions similar enough to be clustered in one part of the tree Objective: The GALactic Archaeology with HERMES (GALAH) spectroscopic survey aims to provide stellar spectra for 1 million stars in four different wavelength intervals. These are selected in a way that it should be possible to obtain more than 20 different chemical abundances per star. The chemical composition of stars can be used to find (tag) members of long-dissipated stellar clusters (De Silva et al. 2015, MNRAS, 449, 2604). Our method uses the abundances of 13 elements (determined by the Cannon (Ness et al. 2015, ApJ, 808, 1) method) to assess the similarity in chemical composition of stars. The similarity information is then used to construct a phylogenetic tree that gives us a visual representation of the similarity in composition between individual stars. The tree is then traversed in the direction from the tips to the root to further analyze kinematic relations between stars with similar chemical composition. Klemen Čotar*, Tomaž Zwitter* and the GALAH team *Faculty of Mathematics and Physics, University of Ljubljana, Slovenia Datasets and filtering For this analysis we used GALAH radial velocities and abundances determined by the Cannon (pipeline version 1.2) approach for more than 200k stars.
The positions and proper motions are taken from the Gaia-TGAS objects, which greatly reduces the final number of useful objects, as GALAH usually observes dimmer objects (12 < G < 14) that were not observed by the Hipparcos satellite. In order to select a reliable dataset, we applied multiple filters
poster
Crop Transport information, Physiology & Signalling Knowledgebase - CropTiPS Integration of crop transporter data Rakesh David1, Cornelia M Hooper2, Ian R Castleden2, Matthew Gilliham1, Stephen D Tyerman1 1ARC CoE in Plant Energy Biology, University of Adelaide, SA, Australia; 2ARC CoE in Plant Energy Biology, University of Western Australia, WA, Australia Membrane transporters are ubiquitous proteins that play crucial roles in physiology, metabolism and signalling through their function in translocating ions and small molecules across biological membranes. In plants, approximately 10-20% of the coding sequences encode transport proteins, which can be functionally classified into channels, carriers and pumps (Figure 1) (Saier and Ren, 2006). Membrane transporters are the focus of extensive research in crop species (Figure 2) due to their importance in protecting plants from environmental stresses. However, this data remains dispersed and unconnected, with only limited attempts made to integrate the vast amount of molecular, physiological and biophysical information available. Here, we address this issue with the development of a database that contains a comprehensive collection of manually curated experimental data related to transport proteins in major crop species. 1. Key features of CropTiPS 2. Experimental data classification 4. Conclusion The CropTiPS repository connects physiology and biophysics of transporter proteins from four important agronomic species into a single online resource. The resource is an important step in developing new strategies to manipulate transporter function to ultimately enhance crop yield and resistance to key stresses. Saier MH Jr. and Ren Q. Journal of Molecular Microbiology and Biotechnology (2006). Taiz L. and Zeiger E. Plant Physiology 5th Ed (2010).
[Figure 1: the three transporter classes — channel proteins, carrier proteins and pumps (Taiz and Zeiger, 2010). Figure 2: number of published transporter studies per five-year period, 1971–2016, for rice, wheat, barley and maize.]
• Knowledgebase of membrane transporters and signalling systems in rice, wheat, barley and maize.
• Source is published literature and reference transport proteins.
• Searchable for curated experimental data / transport proteins.
Published experimental data related to a transporter is categorized into three functional groups: Transport properties, Physiology & Signalling, and Expression profile. Experimental approaches used to study these functions (67 experimental parameters in total, e.g. yeast expression, Xenopus oocyte and other heterologous systems, phenotype, tissue localization, protein interaction, protein modifications, natural variation) are stored in a MySQL database. Experimental data is linked to the current genome annotations through the Ensembl Plants identifier system. CropTiPS is linked to the AgriConnect platform, which aims to connect high-value crop data collections across key Australian agricultural research institutions.
3. CropTiPS user interface
CropTiPS user interface & query builder: http://www.croptips.org/ AgriConnect portal: https://agriconnect.latrobe.edu.au/
Transporter information can be retrieved using the query builder:
• Choice of four crop species.
• Quick search of text or protein identifiers.
• Experimental information organised under functional categories.
The results table offers a compact view of protein hits matching the query and a summary of curated experimental data:
• Results table is organised based on protein identifiers and annotation for each species.
• Summarised experimental data and PubMed links to the related articles are provided for each protein. • Columns can be customized to display additional features such as Arabidopsis homology, length, isoelectric point, molecula
poster
INSIGHTS INTO NATURAL ORGANIC MATTER AND ITS REMOVAL BY ION EXCHANGE RESINS
INDUSTRIAL CATALYSIS AND ADSORPTION TECHNOLOGY
Elien Laforce, Emile R. Cornelissen, Pieter Vermeir and Jeriffa De Clercq
Contact: Jeriffa.DeClercq@ugent.be, Elien.Laforce@ugent.be

Problem Statement
Natural organic matter (NOM)
• Originates from degradation and byproducts of living organisms and plants
• Complex mixture (wide range of MW, hydrophobicity, functionalities)
• Separation via Liquid Chromatography – Organic Carbon Detection (LC-OCD) into 5 fractions: Biopolymers (1) – Humic substances (2) – Building blocks (3) – Low MW acids (4) – Low MW neutrals (5)
Challenges
• Complete removal not achieved by classical water treatment processes (e.g. demineralisation via ion exchange (IEX) → remaining NOM: biopolymers & low MW neutrals)
• Increased risk of corrosion in the steam cycle
• Formation of disinfection by-products during drinking water production
• Microbiological growth in water distribution systems

Strategy
Batch experiments: anion exchange resins (AER) and NOM model compounds. Set-up: 50 ppm model compound (BSA / dextran / alginate / humic acid / resorcinol), 2 g resin in OH- form (or the equivalent capacity in Cl- form), 22 h – 200 rpm – 25 °C. Analysis: pH – UV – TOC.
Resins tested (a: Lewatit, Lanxess):
Resin    Functionality
MP62a    WBA (weak basic AER)
MP68a    W/SBA (3.5/1)
MP64a    W/SBA (2.0/1)
MP600a   SBA (type II, strong basic AER)
MP500a   SBA (type I, strong basic AER)

Removal mechanisms & efficiency
Removal mechanism and efficiency (a) per model compound, for each resin form and conditioning (equilibrium pH in brackets):
• BSA: SBA–OH- (NaOH cond., pHeq 9–10.5): IEX, -; SBA–Cl- (NaCl cond., pHeq 4.5–6): IEX, +++; WBA–FB (NaOH cond., pHeq 6.5–8): H-bond, -; WBA–Cl- (HCl cond., pHeq 3–3.5): n.a., -
• Dextran: SBA–OH-: H-bond, +++; SBA–Cl-: n.a., -; WBA–FB: n.a., -; WBA–Cl-: n.a., -
• Alginate: SBA–OH-: IEX, H-bond, -/+; SBA–Cl-: IEX, +; WBA–FB: IEX, +; WBA–Cl-: H-bond, IEX, +
• Humic acid: SBA–OH-: IEX, H-bond, π-π, +; SBA–Cl-: IEX, ++; WBA–FB: H-bond, π-π, +; WBA–Cl-: H-bond, π-π, +
• Resorcinol: SBA–OH-: H-bond, π-π, IEX, +++; SBA–Cl-: π-π, +/++; WBA–FB: π-π, +++; WBA–Cl-: π-π, ++
a removal efficiency (%) range coding: - : 0-20%; + : 20-60%; ++ : 60-80%; +++ : 80-100%
• Aromatic NOM: π-π interactions (all counter-ion forms)
• Carboxyl groups: H-bonding with Cl- WBA & OH- SBA
• Hydroxyl groups: H-bonding with OH- SBA
• Charged (anionic) NOM: ion exchange
• W/SBA resins: behaviour depends on the ratio of weak versus strong basic functionalities
• Release of ions in subsequent batch experiment → ionic strength ↑; pH ↑ or ↓

pH effects
Conditioning procedure influences equilibrium pH and ionic strength, affecting removal efficiency and mechanisms – especially in batch-mode experiments.

Conclusions
Selection of resin and conditioning procedure enables optimal removal of the targeted NOM (fraction). Insight into removal mechanisms helps to optimize regeneration procedures. Future work on (NOM fractions from) real surface water will investigate the effects of this more complex water matrix on NOM removal.

References: Laforce E, et al. Revealing the effect of anion exchange resin conditioning on the pH and natural organic matter model compounds removal mechanisms, Journal of Environmental Chemical Engineering (2022).
poster
Towards a community-endorsed data steward description for life sciences Mijke Jetten1, Inge Slouwerhof1, Salome Scholtens2, Jasmin Böhmer3, Marije van der Geest2, Christine Staiger4 & Celia W.G. van Gelder4 (1) Radboud University, (2) UMC Groningen, (3) UMC Utrecht, (4) DTL (Dutch Techcentre for Life Sciences)/ELIXIR-Netherlands Contact: Mijke Jetten, m.jetten@ubn.ru.nl Background 1 Responsibilities examples of activities/tasks Results • Embed data steward roles and competencies in a formal function profile including function levels (junior, senior), and add formal job evaluation and grading • Add discipline-specific knowledge, skills and abilities to the function description • Develop a self-assessment tool for data stewards to assess responsibilities, tasks and competencies, combined with navigating directions to training and materials • Use the matrices and the (self-)assessment tool to assess data stewardship roles in organisations • Develop certified training for data stewards The project originally focused on the life sciences. However, the outcomes have proven relevant for other domains as well. Starting September 2019, the project will be continued in a one-year NPOS (National Platform Open Science, the Netherlands) funded project on professionalising training in open science and data stewardship. This ZonMw project (Aug. '18 - Sept. '19) has delivered a function description for three data steward roles: policy, research and infrastructure. For each data steward role, competence areas with tasks were defined, i.e. 1) policy/strategy, 2) compliance, 3) alignment with FAIR, 4) services, 5) infrastructure, 6) knowledge management, 7) network, and 8) data archiving. Table 1 shows a section of the matrices (full version via Zenodo). Tasks were translated into learning objectives based on Bloom (example shown in table 2). In two workshops (June '19) with Dutch data stewards (± 60 participants), existing and desired training was mapped.
These mappings are included in the final report. Policy/strategy Responsible for advice on and development, implementation and monitoring of a RDM policy and strategy for the research institute, which includes the complete research data life cycle and supports FAIR data and Open Science, in alignment with the relevant stakeholders and within financial and legal constraints, within the institute and in the context of the institute. The policy is the basis for (project) DMPs. Responsibility Compliance Skill/ability - Translate RDM policy and legislation and codes of conduct with regard to research data to practical implications and guidelines that researchers can understand. Learning objectives - List relevant legislation, ethical principles, and codes of conduct for RDM (remembering). - Examine and list the practical implications of legislation, ethical principles, and codes of conduct with regard to research data (analysing). - Translate RDM policy and legislation, ethical principles, and codes of conduct with regard to research data to researchers (applying). - Create guidelines and procedures based on legislation, ethical principles, and codes of conduct with regard to research data (creating). • Develops, implements and monitors the institute's RDM policy. • Advises the institute's management on short- and long-term actions to advance RDM in the institute. • Assesses and monitors the institute's time and financial investments in relation to the institute's needs for RDM. • Explores new needs, opportunities and trends in RDM. This project is funded by the ZonMw Personalised Medicine Programme under dossier number: 80-84600-98-3007. Compliance Responsible for compliance of the RDM policy to the Netherlands Code of Conduct for Academic Practice, the Netherlands Code of Conduct for Research Integrity and the General Data Protection Regulation (GDPR), as well as continuous alignment with legal and ethical standards.
• Ensures compatibility of the RDM policy and monitors compliance. • Contacts
poster
Introduction HD-tDCS of the dorsolateral prefrontal cortex A: Electrode position B: Simulated electrical field Stimulation The effect of transcranial direct current stimulation on the interplay between executive control, behavioral variability and mind wandering: A registered report - Mind wandering (MW) is a mental phenomenon we humans experience on a daily basis - High-Definition transcranial direct current stimulation (HD-tDCS) can modulate neuronal excitability and potentially lead to changes in cognition - This study aimed to replicate a previous finding of reduced mind wandering using HD-tDCS over the dorsolateral prefrontal cortex (DLPFC) in adult humans (Boayue et al., 2020) - Additionally, we investigated whether we could find neurophysiological markers of MW, and whether they were influenced by HD-tDCS, by recording electroencephalogram (EEG) and pupil size (pupillometry) Andreas Alexandersen aal077@uit.no +4791854015 Cognitive task and measured variables Finger-tapping random sequence generator task (FT-RSGT) - Press two buttons in a random order - Match every button press to a rhythmic tone - The tone is presented for 75 ms, with an interval of 750 ms - Behavioral variability: standard deviation of inter-tap intervals - Approximate entropy: randomness of tapping pattern - Self-reported mind wandering score: sampled by thought probes ("Where were your thoughts focused right before this question appeared?" 1 2 3 4, Clearly on-task to Clearly off-task) Results Andreas Alexandersen1, Gábor Csifcsák1, Josephine Groot1, Matthias Mittner1 Registered hypotheses 1. We expected the propensity to mind-wander to be reduced in the real relative to the sham stimulation group. 2. We expect behavioural variability (BV) to be increased prior to mind wandering when compared to on-task periods. 3. We expect the utilisation of executive resources (AE) to be reduced prior to mind wandering. 4.
We expect an interaction effect of BV and AE such that the BV-MW effect is more pronounced during periods of high AE. Statistical model Bayesian hierarchical ordered-probit regression model. Dependent variable: thought-probe responses. Predictors: behavioral variability (BV), approximate entropy (AE), BV x AE, Trial (probe number), Group (sham vs. real stimulation), Block (baseline vs. stimulation) and Group x Block; Block nested in subject. Reported: posterior means and high-density intervals of the regression coefficients, as well as evidence ratios. Methods 1Institute for Psychology, The Arctic University of Norway (UiT) [Design figure: sessions lasted approximately 90 minutes — instructions and preparing equipment; training, mini-quiz and baseline (10 minutes, 10 probes, 800 trials, EEG & pupillometry, N=100); stimulation or sham protocol (20 minutes, 20 probes, 1600 trials, HD-tDCS & pupillometry, N=50 per group); offline protocol (10 minutes, 10 probes, 800 trials, ~70 oddballs, EEG & pupillometry, N=100); questionnaires and removing equipment.] Apparatus EEG [EEG results figure: A) mismatch negativity waveforms for sham and real HD-tDCS groups; B) occipital alpha power spectra for the sham and real HD-tDCS groups; C) individual mismatch negativity amplitudes; D) posterior occipital alpha power values for sham and real HD-tDCS groups.] [Pupillometry figure: predictors and coefficients from the two Bayesian hierarchical linear regression models with tonic (left) and phasic (right) pupil responses as dependent variables.] [Panels: A) mismatch negativity in the expected time window; B) increased occipital alp
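The two behavioural measures from the FT-RSGT — behavioural variability as the standard deviation of inter-tap intervals, and approximate entropy as the randomness of the tapping pattern — can be sketched as follows. This is an illustrative re-implementation, not the study's analysis code, and the simulated tap data are placeholders.

```python
import numpy as np

def behavioural_variability(tap_times):
    """BV: standard deviation of the inter-tap intervals."""
    return np.std(np.diff(tap_times))

def approximate_entropy(seq, m=2, r=0.5):
    """Standard ApEn(m, r); for a binary (two-button) sequence,
    r < 1 means only exact template matches count."""
    seq = np.asarray(seq, dtype=float)
    def phi(m):
        n = len(seq) - m + 1
        templates = np.array([seq[i:i + m] for i in range(n)])
        # Count templates within Chebyshev distance r (self-match included).
        counts = np.array([
            np.sum(np.max(np.abs(templates - t), axis=1) <= r)
            for t in templates
        ])
        return np.mean(np.log(counts / n))
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(2)
taps = np.cumsum(rng.normal(0.75, 0.05, 200))  # tap times, ~750 ms target
random_seq = rng.integers(0, 2, 200)           # random button presses
regular_seq = np.tile([0, 1], 100)             # perfectly alternating presses
```

A random button sequence yields a much higher ApEn than a perfectly alternating one (whose ApEn is near zero), which is why lower ApEn is read as reduced engagement of executive resources.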
poster
All-sky Wide Infrared Survey Explorer (AllWISE) DR3. For this poster, we focus on the Abell 2485 cluster field, with the following details: As a final step, we manually checked candidates against the NASA/IPAC Extragalactic Database to look for any redshift information, or whether a source exists in other optical/IR catalogues. Our final HzRG candidate list contains a total of 331 sources: 229 USS sources and 102 potential HzRG candidates with no spectral index information. Searching for high-redshift radio galaxies with the MeerKAT Galaxy Cluster Legacy Survey Vasco Cossa1,2, Kenda Knowles1,* 1Centre for Radio Astronomy Techniques and Technologies, Department of Physics and Electronics, Rhodes University, Makhanda, South Africa; 2South African Radio Astronomy Observatory, Fir Street, Observatory, South Africa; *cossav25@gmail.com Abstract Exploring the high-redshift Universe is critical to our understanding of cosmological and astrophysical processes. This poster presents a search for high-redshift radio galaxies (HzRG) using 1.28 GHz data from the MeerKAT Galaxy Cluster Legacy Survey (MGCLS). Through rigorous selection criteria, we identified 331 HzRG candidates in the Abell 2485 field. This shows that the MGCLS data's sensitivity and bandwidth make it a strong tool for discovering potential HzRG candidates in the Southern sky. MGCLS data The MGCLS DR1 products provide a compact source catalogue for 115 fields. Utilizing Knowles et al. (2021), we filter for fields with good dynamic range and astrometry corrections.
Subsequently, we narrow down to 39 fields with complete coverage in both the Dark Energy Camera Legacy Survey (DECaLS) DR10 and AllWISE. Methods and results for Abell 2485 Step 1: Signal-to-noise and angular size requirements The Abell 2485 field poses few bright-source artifacts; however, the image noise does vary across the field due to the primary beam correction, with the local RMS noise increasing away from the image centre. Rather than implementing a fixed flux density cut, we apply a signal-to-noise ratio (SNR) restriction using the fitted flux density versus its associated uncertainty. We simultaneously apply an angular size restriction (smaj ≤ 10''). [Selection flow counts: 7170 → 3512 → 3046 after the SNR and angular size cuts.] Step 2: Multi-wavelength cross-matching Likelihood ratio method We use the likelihood ratio (LR) method, introduced by Sutherland and Saunders (1992), to cross-match our radio catalogue with multi-wavelength catalogues, and to exclude sources with optical or infrared counterparts from the HzRG candidate list. The LR is defined as the ratio of the probability that an optical source is the true counterpart to the probability that the same source is a spurious alignment, i.e. LR = q(m) f(r) / n(m), where q(m) is the expected magnitude distribution of the true counterparts, f(r) is the probability distribution function of the positional uncertainties in the involved catalogues, and n(m) is the surface density of unrelated background objects with magnitude m. Step 3: Ultra-Steep Spectrum (USS) selection We are currently exploring the now-established correlation between galaxy redshift and radio spectral steepness (Röttgering et al., 1994). Step 4: Manual cross-checking with NED and Vizier Conclusion This study highlights the potential of MGCLS DR1 data for identifying HzRG candidates. Among the 115 fields, 39 offer a good dynamic range and comprehensive coverage in DECaLS and AllWISE. Expanding the study to these fields could yield more candidates for spectroscopic follow-up. References Fig 1.
MGCLS Abell 2485 field with DECaLS and AllWISE coverage. Fig 2. Angular separations between the positions of radio sources in the Abell 2485 and their DECaLS counterparts. In each panel, the red line shows the normalised Gaussian distribution. Fig 3. MGCLS Abell 2485–DECaLS cross-match results. The vertical dashed line indicates the optimal threshold LR > 0.1. Fig 4. Angular separations between the positions of ra
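The LR definition above can be sketched numerically. This is a minimal illustration assuming circular Gaussian positional errors; the function name and the toy q(m) and n(m) below are our assumptions, not taken from the poster:

```python
import numpy as np

def likelihood_ratio(r, m, q_of_m, n_of_m, sigma_pos):
    """Sutherland & Saunders (1992) likelihood ratio for one
    radio-optical pair separated by r arcsec at magnitude m.

    q_of_m    : expected magnitude distribution of true counterparts
    n_of_m    : surface density of unrelated background objects
                (per arcsec^2 per magnitude)
    sigma_pos : combined 1-sigma positional uncertainty (arcsec)
    """
    # f(r): radial probability density of the positional offset,
    # assuming circular Gaussian errors in both catalogues
    f_r = np.exp(-r**2 / (2.0 * sigma_pos**2)) / (2.0 * np.pi * sigma_pos**2)
    return q_of_m(m) * f_r / n_of_m(m)
```

A pair is accepted as a genuine counterpart when its LR exceeds a chosen threshold (LR > 0.1 in the poster); radio sources with no counterpart above the threshold remain HzRG candidates.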
poster
26 September 2024. Newsletter: el ornitorrinco tachado, the journal of the Facultad de Artes, UAEMéx. ornitorrincotachado.uaemex.mx. There are different types of publications used to present contributions to the art world; knowing them makes it possible to create with them in mind and so better disseminate one's work and artistic output. fanzines/self-publications: Created as a means of direct expression, the fanzine aims to be an end in itself for artistic creation and for the dissemination and inclusion of any kind of topic (literary, musical, film-related, political, social, artistic, etc.). It invites narrative ideas that interest a specific type of reader, is made by hand, and reflects at the same time on graphic design, drawing, and writing. non-academic art magazines: These are magazines that establish and follow their own publication criteria; they are not bound by the guidelines required by indexing services, but publish and create from the artistic contributions they wish to make. They cover the market, history, criticism, news, and information about art, its practice, aesthetics, and politics. Their publication requirements are usually less rigorous than those of an indexed journal. non-indexed journals: Publications that follow the norms of an indexed journal but do not belong to any index; they meet quality standards while taking creative liberties that keep them out of the databases. They publish artistic works that do not fit the format required of an academic contribution. exhibition and/or artwork catalogues: A book-format publication, often published by the artist themselves, covering the technical, historical, and/or environmental aspects of a work or collection.
It contains photographs and diagrams that provide the fullest possible information about the work and expand what is known about it; it is often a behind-the-scenes view, showing everything from the conception of the idea to its making. sources consulted: Espejo, B. (2012). ¿De qué hablamos cuando hablamos de libro de artista? El Español. https://shorturl.at/biDEG. Libreta de Bocetos. (2021). Revistas de arte y arte de revistas. Libreta de Bocetos. https://shorturl.at/dgrH5. Mikucionyte, Z. (2014). Catalogación de obras artísticas: Análisis de problemas y mejoras en el fondo de arte y patrimonio de la Universidad Politécnica de Valencia. http://shorturl.at/dkmnz. Osorio, A. (2018). Sistematización praxeológica: El fanzine como escritura contrahegemónica para la comunicación, el desarrollo y el cambio social. https://doi.org/10.15332/tg.mae.2018.00497. Other publications
poster
Legal, Constitutional and Ethical Principles for Mandatory Vaccination Requirements for Covid-19 General Principles Statutes are better than regulations • Schemes should be provided in statutes instead of executive rules (i.e. regulations). • The making of laws should follow three principles: – Consultative – lasting a minimum of 4-6 weeks and involving sub-national governments, opposition parties, trade unions, academics, the public, and others. – Transparent – the consultation and government responses should be published well before the introduction of any bill. – Clear – legislation should not leave major policy questions to interpretation by government departments or private actors. • Temporary or fast-tracked legislation should be replaced promptly with laws following the three principles outlined above. Protection of rights through equality and proportionality • Human rights protections are not absolute, but restrictions should adhere to the principle of proportionality (see Box 1). • A proportionality test requires any scheme to: – Pursue the clearly defined and legitimate aim of protecting public health and/or securing greater freedoms for others. – Be necessary and minimally impairing in relation to the pursuit of the legitimate aim. – Strike a fair balance, with penalties for non-compliance proportionate to the strength of the scheme's requirements. • Schemes should allow fair access to vaccinations, e.g. by not discriminating against individuals based on protected characteristics. Exemption for some, engagement with others • Exemptions legally excuse groups from compliance with schemes, but alternative measures (e.g. testing) can be required. • Consultation with a range of public bodies should guide exemptions. • Legal systems vary, but exemptions for religious beliefs/freedom of conscience are not generally required by human rights law.
• Reasonable vaccine hesitancy (see Box 1) should be met with constructive engagement and education but not exemptions. Information... This briefing document summarises a more detailed set of principles (available here) signed by 50 academics within The Lex-Atlas: Covid-19 (LAC19) network (more information on the LAC19 project can be found here). The point of the principles is to set out best or ideal practice for the design and implementation of mandatory vaccination schemes for Covid-19. To achieve this, the principles address the legal, constitutional, and ethical dimensions of mandatory vaccination requirements. Five Key Points 1. Well-designed mandatory vaccination schemes are both compatible with human rights AND have the potential to advance human rights. 2. Schemes should be regulated by statute, rather than executive rules. 3. Extensive consultation with a range of groups is essential for an effective scheme. 4. Constructive engagement with reasonable vaccine hesitancy should be part of any scheme, but it does not need to lead to exemptions. 5. Strong oversight is needed to ensure schemes do not depart from their stated aims. Box 1: Key terms Mandatory vaccination requirements Any law making vaccination compulsory, or any public or private vaccination requirement for accessing a venue that cannot be avoided without undue burden. Principle of proportionality The principle that the burdens placed on an individual when complying with a mandatory vaccination scheme are proportional to the aims of the law. The greater the burden, then the higher the bar of proportionality is set. Reasonable vaccine hesitancy Reluctance to take a vaccine resulting from distrust in dealings between the state and a given group or community. Reasonable vaccine hesitancy is often prevalent in groups and communities who have experienced a history of state-complicit persecution, discrimination, marginalisation, or neglect. 
Particular Sectors Workplace schemes must be clearly regulated • International and domestic law recognises the right to safe and healthy workplaces. • Occupational schemes should:
poster
THE EFFECT OF DIURNAL SEA SURFACE TEMPERATURE WARMING ON THE MEDITERRANEAN SEA HEAT AND WATER BUDGET. S. Marullo a, P. Minnett b, R. Santoleri c, V. Artale a. a Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA), Frascati Research Center, Italy. b Rosenstiel School of Marine & Atmospheric Science, Miami, USA. c Italian National Research Council (CNR), Institute of Atmospheric Sciences and Climate (ISAC), Roma, Italy. ABSTRACT: The diurnal cycle in sea-surface temperature is reconstructed by combining numerical model analyses and satellite measurements in the context of Optimal Interpolation theory. The method (Marullo et al., 2014) is applied to reconstruct hourly Mediterranean SST fields during 2013 using data from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) and Mediterranean Forecasting System analyses provided by the Copernicus Marine environment monitoring service. The Diurnal OI SST (DOISST) fields reproduce the diurnal cycles well, including extreme diurnal warming events as measured by drifting buoys. The evaluation of DOISST products against drifter measurements gives a mean bias of −0.1 °C and an RMS of 0.4 °C. We evaluate the impact of resolving the SST diurnal cycle on the heat budget of the Mediterranean Sea over an entire annual cycle. The mean annual difference in the heat budget derived using SSTs with and without diurnal variations is −4 W m−2, with a peak monthly difference of −9 W m−2 in July-August. Data and Methods. Input data: • satellite data: hourly SEVIRI SST fields, distributed by OSI-SAF (Ocean and Sea Ice Satellite Application Facility); • CMEMS (Copernicus Marine Environmental Monitoring Service) Mediterranean Forecasting SST analyses. Method: the Diurnal Optimal Interpolation SST method, which combines numerical model analyses and geostationary satellite data (Marullo et al., RSE, 2014).
Output: L4 hourly reconstructed SST maps. Validation: monthly bias (Sat − In Situ); drifters' positions (2013 data). Statistics against drifting buoys, SEVIRI (measured) / SST reconstruction: bias −0.03 °C / −0.13 °C; STD 0.47 °C / 0.39 °C; corr. coef. 0.9957 / 0.9953; no. of matchups 26149 / 13846. Diurnal Warming in the Mediterranean Sea. A Diurnal Warming (DW) event in the Mediterranean Sea on June 18, 2013: figure (a) shows the maximum day-night temperature excursion during this day. DW events require sustained low winds under favourable insolation conditions. The DW pattern in fig. (a) highlights this effect downwind of the island of Crete and on its east and west sides. Diurnal warming events are very frequent in the Mediterranean Sea: figures (a), (b) and (c) below show the number of occurrences of DW events with a day-night difference above 1, 2 and 3 K, respectively, during 2013. Spatial distribution of the number of DW events with amplitude greater than 1 °C for each month of the year 2013. The impact of SST diurnal warming on air-sea heat fluxes: difference between the ocean heat loss estimated using foundation SST or hourly DOISST in the radiative and turbulent heat flux formulae: monthly spatial distribution (to the right) and monthly mean (below). April 9, 2015: sea surface temperature. The COSIMO 2015 Experiment: 1. CTD casts; 2. near-surface thermistor measurements; 3. meteo data; 4.
M-AERI continuous skin SST measurements (SST retrieval uncertainty << 0.1 K). Hourly CTD casts during the COSIMO experiment (from 2015/04/08 22:14:00 to 2015/04/10 07:59:00). GOTM run (25 cm vertical resolution) using in situ initial conditions and meteo data acquired on board (no SST assimilation). The COSIMO diurnal experiment was conducted in the north Adriatic Sea from 2015/04/08 22:14:00 to 2015/04/10 07:59:00. The M-AERI mounted on the Italian R/V Minerva Uno. CTD casts. Acknowledgments: This research was supported by the Italian Ministry of Foreign Affairs and International Cooperation in the framework of the Declaration Italia - United States 2014-2015. The research was also supported by MyOcean 2 a
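The DW occurrence counts shown in the figures can be sketched as follows. This is a minimal illustration with a hypothetical hourly SST array; the names and shapes are our assumptions, not the actual DOISST processing chain:

```python
import numpy as np

def dw_amplitude(sst_hourly):
    """Maximum day-night SST excursion per pixel for one day.
    sst_hourly: hypothetical array of shape (24, ny, nx), one day
    of hourly SST maps (K), as in an hourly L4 product."""
    return sst_hourly.max(axis=0) - sst_hourly.min(axis=0)

def count_dw_events(daily_amplitudes, threshold):
    """Number of days per pixel whose DW amplitude exceeds a
    threshold (e.g. 1, 2 or 3 K), over a list of daily maps."""
    return (np.stack(daily_amplitudes) > threshold).sum(axis=0)
```

Applying `count_dw_events` over a year of daily amplitude maps with thresholds 1, 2 and 3 K reproduces the kind of occurrence maps shown in panels (a)-(c).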
poster
The Census of Exoplanets in Visual Binaries Clémence Fontanive & Daniella Bardalez Gagliuffi Published in Frontiers in Astronomy and Space Sciences (2021), 8, 16. RAW BINARY FRACTIONS PLANET PROPERTIES VS. BINARITY EFFECT OF BINARY PROPERTIES Fig. 3. Predictions from COPAINS for HR 7672 B compared to the position of the known companion. Dr. Clémence Fontanive CSH Research Fellow University of Bern, Switzerland clemence.fontanive@csh.unibe.ch • ~23% raw binary fraction across all spectral types. • multi-planet systems have a lower binary rate. • more massive hosts more often in multiple systems. • high-mass close-in planets more often in binaries. 2.2-σ 3.6-σ • giant planets and brown dwarfs in binaries have shorter periods and/or higher masses. • multiplicity does not impact the properties of smaller and wider-orbit exoplanets. • no influence from very wide binaries >1000 AU. • binaries on ~few 100 AU separation can affect the architectures of massive, close-in exoplanets. OVERVIEW Why? stellar binary companions may affect the formation and architectures of planetary systems although the nature and extent of the role played by multiplicity is not yet understood. How? we conducted an extensive search in the literature and Gaia DR2 for wide visual comoving binary companions to 938 stars hosting exoplanets and brown dwarfs within 200 pc. What? we found 218 planet hosts to be in multiple-star systems, with 10 new binaries and 5 new tertiary components, and explored correlations between exoplanet and binary properties.
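The ~23% raw binary fraction quoted above (218 multiple-star hosts out of 938 targets) can be reproduced with a simple binomial estimate. This is a sketch; the normal-approximation error bar is our addition, not the poster's stated method:

```python
import math

def binary_fraction(n_binary, n_total):
    """Raw multiplicity fraction with a simple binomial
    (normal-approximation) 1-sigma uncertainty."""
    f = n_binary / n_total
    err = math.sqrt(f * (1.0 - f) / n_total)
    return f, err

# Numbers from the poster: 218 multiple-star hosts out of 938 targets.
f, err = binary_fraction(218, 938)  # f is about 0.23
```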
poster
Evaluating current convection-permitting ensembles for past high-impact precipitation events in Italy: the SPITCAPE Special Project. Valerio Capecchi - LaMMA, Florence, Italy. Goals of the ECMWF Special Project SPITCAPE: 1) understand the information content of a cascade of state-of-the-art ensembles, from global to local, by re-forecasting past high-impact precipitation events (HPEs); 2) investigate the added value of running a convection-permitting ensemble directly nested (i.e. dynamical downscaling) into the ECMWF global ensemble at Tco639L91 resolution. Conclusions: 1) ENS outperforms WRF-ENS when considering ensemble-mean precipitation prediction for forecast ranges < 72 hours; 2) WRF-ENS is better than ENS when looking at the ROC area for thresholds up to 250 mm (Cinque Terre 2011 & Genoa 2011); 3) no skill for either ENS or WRF-ENS for Genoa 2014 (missing/misplaced triggering mechanism? further investigation needed). References: Buzzi et al. (2014): Heavy rainfall episodes over Liguria in autumn 2011: numerical forecasting experiments, NHESS; Davolio et al. (2015): Effects of Increasing Horizontal Resolution in a Convection Permitting Model on Flood Forecasting: The 2011 Dramatic Events in Liguria (Italy), J. Hydromet.; Davolio et al. (2017): Impact of rainfall assimilation on high-resolution hydro-meteorological forecasts over Liguria (Italy), J. Hydrometeorol. Model setup (ENS vs WRF-ENS): model: IFS cycle 41r2 (March 2016) vs WRF 3.8.1 (August 2015); grid spacing: ~18 km vs 3 km; no. of members: 50 + control for both; boundary conditions: N/A vs ENS; convection: parametrised vs resolved; forecast range: 7 days to 1 day vs 3 days to 1 day (init. 00 & 12 for both). Event descriptions: • upper-level trough over the Atlantic Ocean • humid low-level flow over the Mediterranean • V-shaped back-building MCS • two rainfall peaks • precip. maxima: 380 mm/24-hour & 130 mm/1-hour. Lots of common features with the Cinque Terre event, but: • the trough axis orientation is N-S • the triggering mechanism is the low-level convergence line • precip. maxima: 450 mm/6-hour. • large depression off Ireland's western shore • low-level blocking anticyclone over eastern Europe • low-level wind shear • precip. maxima: 400 mm/12-hour & 150 mm/1-hour. (Figure panels: forecast ranges T0+36 to T0+168 hours; observations; 500 hPa geopotential; ensemble spread; probability of precipitation, PoP > 50 mm/24-hour and PoP > 100 mm/24-hour; ENS vs WRF-ENS for Cinque Terre 25 Oct 2011, Genoa 4 Nov 2011, Genoa 9 Oct 2014.) Root Mean Square Error (RMSE) of ensemble-mean precipitation prediction (three events, ENS / WRF-ENS each): T0+36 hours: 106/107, 133/156, 164/176; T0+48 hours: 114/127, 130/151, 165/169; T0+60 hours: 112/113, 138/159, 169/189; T0+72 hours: 121/112, 135/146, 171/153. Receiver Operating Characteristics (ROC) area for different precipitation thresholds (all starting dates). Results: verification of 24-hour accumulated precipitation. Data and Methods.
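The two verification scores used above, RMSE of the ensemble mean and ROC area, can be sketched as follows. This is a minimal illustration with hypothetical arrays, not the SPITCAPE verification code:

```python
import numpy as np

def ensemble_mean_rmse(ens_precip, obs):
    """RMSE of the ensemble-mean precipitation against observations.
    ens_precip: (n_members, n_points); obs: (n_points,)."""
    return float(np.sqrt(np.mean((ens_precip.mean(axis=0) - obs) ** 2)))

def roc_area(prob_forecast, event_obs, n_bins=11):
    """ROC area for probabilistic exceedance forecasts, integrating
    the hit rate over the false-alarm rate with the trapezoidal
    rule at evenly spaced probability thresholds.
    prob_forecast: forecast probabilities in [0, 1];
    event_obs: boolean array, True where the event occurred."""
    thresholds = np.linspace(0.0, 1.0, n_bins)
    hr, far = [], []
    for t in thresholds:
        yes = prob_forecast >= t
        hits = np.sum(yes & event_obs)
        misses = np.sum(~yes & event_obs)
        fas = np.sum(yes & ~event_obs)
        cns = np.sum(~yes & ~event_obs)
        hr.append(hits / max(hits + misses, 1))
        far.append(fas / max(fas + cns, 1))
    # points run from (FAR=1, HR=1) at t=0 down towards (0, 0)
    area = 0.0
    for i in range(1, n_bins):
        area += 0.5 * (hr[i] + hr[i - 1]) * (far[i - 1] - far[i])
    return float(area)
```

A ROC area of 0.5 indicates no skill and 1.0 a perfect probabilistic forecast, which is how the "WRF-ENS is better for thresholds up to 250 mm" statement is read.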
poster
Use of scientific data of protected areas for analysis of their socio-economic impact. Author(s): Undergraduate students: Iago Fava da Costa, Enrico Rausch Triñanes; other authors: Dra. Marina Jeaneth Machicao Justo, Luiz Diego Baldolino; supervisor: Prof. Dr. Pedro Luiz Pizzigatti Corrêa. Escola Politécnica da USP. iagofava@usp.br. Objectives: Reproduction of the experiment described in "Combining satellite imagery and machine learning to predict poverty" by Jean et al. (2016), applied to Vale do Ribeira municipalities in Brazil, aiming to predict socioeconomic data based on satellite images and deep learning algorithms, as part of the research activities of the BELMONT/FAPESP project PARSEC - https://parsecproject.org/ (2018/24017-3). Materials and Methods: For satellite imagery, the research uses Planet's and Google Earth's data; for socioeconomic data, census data from IBGE. The data acquisition scripts are based on the article (Jean, 2016), and new adaptations were developed, such as extraction of metadata for further analysis and replicability; with these we acquired satellite images of the Vale do Ribeira - SP area. With both imagery and census data, a supervised deep-learning algorithm will be trained. Picture 1: Vale do Ribeira, represented in red. Source: Wikipedia.org. Results: We downloaded 7,146 satellite images from 2016, covering the entire territory of Vale do Ribeira. From the census data, we have the results of a Principal Component Analysis (PCA) applied to data on longevity, education, and income. With the dimensionality reduction of the census data, it is possible to visualize the indicators by census sector. Picture 2: Examples of data obtained from Planet. Conclusions: We completed the initial steps of the data science experiment: data acquisition, exploration, and preparation of the dataset.
The next step is to train the deep learning algorithm with the satellite images and the socioeconomic data to create a computational model. Later, this model will be applied to different areas to predict socioeconomic indicators. References: N. Jean et al., Combining satellite imagery and machine learning to predict poverty. Science 353, 790–794 (2016).
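The PCA step on the census indicators can be sketched with a plain SVD. This is a minimal illustration; reading the first component as a single development index per census sector is our assumption, not a detail given in the poster:

```python
import numpy as np

def pca_first_component(indicators):
    """Project standardized census indicators, shape (n_sectors, 3)
    for longevity, education and income, onto their first principal
    component, yielding one score per census sector."""
    X = np.asarray(indicators, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize columns
    # rows of vt are the principal directions (largest variance first)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[0]
```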
poster
Methods: intrahippocampal injections; Innate Emotional Behaviour (IEB); Barnes maze (BM); Novel Object Recognition Test (NORT); immunohistochemistry (IHC); in vivo electrophysiology. INTRODUCTION: Medial Temporal Lobe Epilepsy (MTLE) is a neurological disorder characterized by neuronal hyperactivity and unpredictable seizures, often accompanied by memory impairments. During the early stages of this condition, and between seizures, patients begin to develop abnormal brain activity known as epileptiform activity (EA), mainly present in the hippocampus. Here, taking advantage of a new model of EA, we characterized the cognitive profile of EA mice considering sexual dimorphism. Furthermore, we examined their hippocampal cytoarchitecture and its electrophysiological correlates. RESULTS: [Figure panels: CA1 and DG cell density (cells/µm³) and layer dispersion/thickness (µm), Control vs KA, male and female; SpikeInterface; BM & NORT, IEB, IHC, recordings.] CONCLUSIONS: In general, EA mice do not present an altered emotional state. KA male mice barely exhibit cognitive deficits, performing similarly to controls in cognitive tasks. In contrast, KA female mice show impaired spatial and recognition memory, resulting in specific deficits in random and spatial strategies in the BM, together with a lower discrimination index in the NORT. Lastly, histological analysis of the DG and CA1 revealed that EA mice, regardless of sex, preserved cell density and layer thickness in both regions. ACKNOWLEDGEMENTS: Jon Egaña, María Ceprián, Jonathan Draffin, Stefano Calovi, Paula Torres, Beatrice Sheikh. SUPPORT. Hippocampal-dependent memory in a model of epileptiform activity. Juan Cobos Álvarez 1,2, Pablo Reyes Velásquez 1,2, Lucía Sangroniz Beltrán 1,2, Diego M.
Mateos 2 and Edgar Soria-Gómez 1,2,3. 1 University of the Basque Country, Department of Neuroscience. 2 Achucarro Basque Center for Neuroscience, Leioa, Spain. 3 Ikerbasque, Basque Foundation for Science. [Figure panels: spatial latency (s) and random latency (s), Control vs KA.]
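The NORT discrimination index mentioned in the conclusions can be sketched as follows, assuming the standard novel-vs-familiar definition; the poster does not spell out its exact formula:

```python
def discrimination_index(t_novel, t_familiar):
    """NORT discrimination index: preference for the novel object
    relative to total exploration time. Ranges from -1 to 1, with 0
    meaning no preference (standard definition, assumed here)."""
    total = t_novel + t_familiar
    if total == 0:
        raise ValueError("no exploration recorded")
    return (t_novel - t_familiar) / total
```

A lower index in KA females than in controls is what the poster reports as impaired recognition memory.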
poster
[System architecture diagram: StrokeBack patient system - gaming client, Kinect server, sensor feedback (Shimmer 2 surface EMG over BT 2.1, wireless sensor BAN, Emotiv EEG, MS Kinect for PC); user interfaces and physical objects - smart TV (WEB browser), smart table, smartphone (HDMI, currently not supported); embedded application server, gateways, local servers; StrokeBack back-office - game server, PHR server; connected via the Internet.] The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 288692. Advanced Media Technologies for Stroke Rehabilitation. Emmanouela VOGIATZAKI1, Peter LANGENDOERFER2 and Steffen ORTMANN2. 1 Research for Science, Art and Technology (RFSAT) Ltd, United Kingdom. 2 Innovations For High Performance Microelectronics (IHP) GmbH, Germany. Aims and Objectives of the "StrokeBack" Project: • Telemedicine supervision of rehabilitation exercises. • Continuous monitoring of the impact of the exercises, also in "normal" life situations. • Integrated telemedicine rehabilitation and Personal Health Records for improved long-term evaluation of patient recovery. • Providing feedback to health care professionals on the impact of rehabilitation exercises. Advanced Media in the Context of the Project: • Games: an Application Interface (API) for supporting game development. • 3D body tracing: improving the accuracy of 3D model matching and the detection of precise hand and finger movements. • Kinect & 3D engines: mapping the patient's skeleton onto an avatar in commercial 3D engines (e.g. Crytek, Unreal, etc.). • Sensor networking: a sensor BAN for improved understanding of skeleton position and the patient's physiological condition. System Architecture. Example Rehabilitation Games. Conclusions and Future Work: • Validate means of animating avatars with Kinect using commercial 3D engines (Crytek and/or other engines) • Develop algorithms for 3D modelling with Kinect for more accurate motion tracking • Integrate the client system on a low-cost embedded platform (e.g.
Raspberry Pi @ 25 USD) • Integration of rehabilitation gaming into the Personal Health Record (PHR) system as a "treatment" • Research into EEG signal matching to physical motion deficiencies. Managing Offline Operation. [Diagrams: smartphone, smart TV, etc.; remote server (PHR server, game server); cheap embedded device (application gateway, Kinect server, client WEB browser); Internet.] Operation modes: • permanent network connectivity • network connectivity may or may not be maintained (sending medical data and game results back to the PHR; downloading games). Mixed-reality interaction with virtual (a) and physical objects (b). Virtual-table supported interactions with physical objects. Depth-mapping for body tracing and avatar generation.
poster
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 732064. A project implemented by the DataBio CONSORTIUM. WWW.DATABIO.EU. FarmTelemetry. Karel Charvát, Lesprojekt - služby, Czech Republic, charvat@lesprojekt.cz; Karel Charvát Jr., Lesprojekt - služby, Czech Republic, charvat_junior@lesprojekt.cz; Michal Kepka, University of West Bohemia, Czech Republic, mkepka@kgm.zcu.cz; Marek Šplíchal, Czech Centre for Science and Society, Czech Republic, splichal@ccss.cz. FarmTelemetry is an application designed to record, process and analyze data from tractors in connection with other data relevant to farm operations. Current functionality includes: displaying the current position of tractors; recording positions and various variables, such as fuel consumption, over time; activity overviews for individual tractors; overviews for individual fields; and comparison between defined parts of a field (management zones). We currently install third-party telematics units on tractors to record and retrieve data. The monitoring unit contains these parts: a terminal with GLONASS/GPS positioning and GSM connectivity, RFID readers, and a CAN sniffer. The solution uses the HSLayers-NG library as a map client. In the future we plan to add the ability to work with the ISOBUS protocol if the tractor supports it, or to get direct access to data from services provided by some tractor manufacturers on the basis of a contract with these manufacturers and the farmers who own the tractors. Our goal is to offer a tool that can handle and analyze data from tractors of different brands and models. FarmTelemetry uses SensLog to collect and store data. The SensLog data model has been extended to include elements relevant to the monitoring of tractors and to integration with other farm data.
poster
Physician Residents Shadowing a Certified Wound/Ostomy/Continence Nurse to Develop Interprofessional Competencies. Laura Monahan, OFS, DNP, MBA, RN; Meng Zhao, PhD, RN; Michael Monahan, JD, MBA; Katelijne Acker, PhD, RN; Mary Sandrik, MSN, RN, CWOCN. Background & Significance; Conceptual Frameworks & Measurement Tools; Project Design & Description; Specific Aims; Findings; Demographics. Background: Wound care costs are skyrocketing to $96.8 billion annually, yet physicians are often unaware of how to treat wounds due to a lack of exposure and training in their medical education. Physician training has not kept abreast of the increased care needed for wounds, and medical education does not adequately train physician providers in wound care. Assessment: Interprofessional education (IPE) offers an opportunity to enhance wound care management and improve patient outcomes through interdisciplinary collaboration. Certified Wound, Ostomy, and Continence Nurses (CWOCN) have an arsenal of knowledge about dressings, biological tissue replacements, cell-based treatment options, positioning, and psychosocial support available for wound management. Purpose Statement: Addressing this knowledge gap, this quality improvement project required 49 Family and Internal Medicine physician residents to shadow a CWOCN to improve their knowledge of the four interprofessional education (IPE) domains (interprofessional communication, role awareness and responsibilities, teams and teamwork, and values and ethics). This job shadowing broadened the physicians' awareness of wound complications and care, and of the availability of the various wound treatments that the CWOCN provides.
• Physician residents shadowed a CWOCN for 16 hours over 4 nonconsecutive days and completed the Interprofessional Education Collaborative Competency Self-Assessment Tool (IPESAT) instrument pre- and post-shadowing, measuring 4 interprofessional education (IPE) domains: Professional Communication; Roles and Responsibilities; Teams and Teamwork; Values and Ethics. • Paired t tests were performed to determine differences in IPESAT scores before and after the shadowing experience. Conceptual Framework: Deming's Plan-Do-Study-Act (PDSA) cycle. Measurement tool: Interprofessional Education Collaborative Competency Self-Assessment Tool (IPESAT) pre- and post-tests, comprising 42 questions that assessed the 4 Interprofessional Education (IPE) domains through 5-point Likert-type scales ranging from strongly disagree to strongly agree, with higher scores indicating greater knowledge of the IPE domains. Pre- and Post-Intervention IPESAT Scores (IPE domain: pre mean, post mean, P value (2-tailed), % change): Overall Score: 187.69, 201.8, .000**, +7.5; Roles/Responsibilities: 39.06, 41.17, .000**, +5.4; Professional Communication: 53.3, 57.51, .000**, +7.9; Teams & Teamwork: 50.26, 55.68, .000**, +10.8; Values: 45.07, 47.44, .000**, +5.3. **Statistically significant. IPE: interprofessional education. IPESAT: Interprofessional Education Collaborative Competency Self-Assessment Tool. • Location at a Level 1 trauma center in the Midwest USA • IRB approval & letter of support received • Pre-test and post-test method • N = 49, a convenience sample of Family & Internal Medicine residents • Project time (11/2016 - 8/2020) • Multi-disciplinary team approach • Data analysis: descriptive stats, paired t-test and Z test. • The shadowing experience improved IPESAT scores, as well as the morale and confidence of the resident physicians during chronic wound management, as reflected by their comments following the experience.
• All 4 IPESAT domain score improvements were statistically significant, with the greatest gain in the domain of Teams and Teamwork (10.8% increase). • Physician resident comments reflected increased respect for the CWOCN, the value of bedside training experiences, and the importance of collaboration with the CWOCN and wound specialty practices to jointly achieve positive patient care results. The Authors explicitly stated no con
poster
Synthesizing realistic neural population activity patterns using Generative Adversarial Networks. Manuel Molano-Mazon1, Arno Onken1,2, Eugenio Piasini1,3 and Stefano Panzeri1. 1. Laboratory of Neural Computation, Istituto Italiano di Tecnologia Rovereto, 38068 Rovereto, Italy. 2. University of Edinburgh, Edinburgh EH8 9AB, UK. 3. University of Pennsylvania, Philadelphia, PA 19104. Summary: • We use GANs to simulate the concerted activity of a population of neurons. • Spike-GAN generates spike trains that match the first- and second-order statistics of datasets of tens of neurons. • We apply Spike-GAN to a real dataset recorded from salamander retina and show that it performs as well as state-of-the-art approaches. • We exploit a trained Spike-GAN to construct importance maps that detect the most relevant statistical structures present in a spike train. (Sections: Architecture; The samples; Fitting neural data from salamander retina; Fitting the whole probability distribution; Comparison with a multi-layer perceptron.) The samples: Samples correspond to the spiking activity of a population of N neurons represented as binary vectors of length T (N=8 and T=12 ms in the example below). Architecture: The architecture of Spike-GAN is adapted from the one proposed by Gulrajani et al. (WGAN-GP). Samples are transposed so as to input the neurons' activities into different channels; the convolutional filters (red box) thus span all neurons but share weights across the time dimension. The architecture of the generator is the same as that of the critic, used in the opposite direction and with sigmoid units in the last layer. Furthermore, we applied the procedure described by Odena et al. to avoid the checkerboard effect. Fitting the whole probability distribution: We simulated activity patterns for a small 'population' of 2 neurons during 12 ms and evaluated how well Spike-GAN fits the whole probability density function from which the patterns are drawn.
[Figure legend: novel samples 55% | 56%; not in underlying distribution 3.8% | 3.2%; underlying distribution (numerical probabilities); training dataset, surrogate dataset, Spike-GAN distribution (blue/red tones).] Nearest neighbor analysis: We checked for signs of overfitting by plotting randomly selected generated samples together with their closest sample (in terms of L1 distance) in the training dataset (Spike-GAN samples vs real samples). Importance maps: We infer the most relevant features characterizing a given neural activity pattern by querying a trained critic: 1. Compute the output produced by the critic for that particular pattern. 2. Shuffle across time the spikes emitted by a neuron during a specific period of time and compute the output of the critic again. 3. The absolute difference between the two outputs gives the importance of the shuffled spikes. 4. Multiply the masked original sample by the importance. 5. Sum up all the resulting maps across neurons and time periods to get the importance map. Hypothetical experiment: N repetitions of a behavioral task in which a mouse has to discriminate two different stimuli (vertical/horizontal stripes). By means of two-photon calcium imaging, the activity of a population of V1 neurons in the visual cortex of the mouse is recorded in response to the two stimuli. References: 1. Goodfellow et al. NIPS 2014. 2. Gulrajani et al. NIPS 2017. 3. Odena et al. Distill 2016. 4. Marre et al. IST Austria 2014. 5. Tkacik et al. PLoS Comp. Biol. 2014. 6. Lyamzin et al. Front. Comp. Neurosci. 2010. 7. Luczak et al. Nat. Rev. Neurosci. 2015. [Schematic: critic output minus critic output after shuffling → importance of shuffled spikes.] Finding relevant patterns of neural activity. Acknowledgements: This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 699829.
We compared Spike-GAN with a generative adversarial network in which both the generator and the critic are a 4-layer multi-layer perceptron (MLP). (Goodfellow et al.; Gulrajani et al.) Packets: We tested the importance maps procedur
poster
NEAR-INFRARED VEILING OF ACCRETING STARS USING THE CFHT/SPIROU DATA
A. P. Sousa1, J. Bouvier1, S. H. P. Alencar2, J.-F. Donati3, C. Dougados1, A. Carmona1 and the SPIRou consortium
1Institut de Planétologie et d’Astrophysique de Grenoble - IPAG - France 2Universidade Federal de Minas Gerais - UFMG - Brasil 3Univ. de Toulouse - France
alana.sousa@univ-grenoble-alpes.fr
Context
Veiling is ubiquitous at different wavelength ranges in classical T Tauri stars (e.g., Hartigan et al. 1991; Fischer et al. 2011). The hot spot alone is not enough to explain the shallow photospheric infrared lines in accreting systems, which suggests that another source contributes to the veiling in the near-infrared. The inner disk is often quoted as the additional emitting source to explain the infrared veiling (e.g., McClure et al. 2013; Alcalá et al. 2021). We used a sample of accreting stars observed with the CFHT/SPIRou spectrograph to measure the near-infrared veiling along the YJHK bands to understand its origin and time scale variability. We compared the computed veiling with accretion and inner disk diagnostics from photometric observations gathered in the literature.
Near-infrared veiling
• Additional continuum – accretion shock (hot spot) – dust in the inner disk – gas in the inner disk
Sample of stars - CFHT/SPIRou
YJHK veiling
Fig. 1: Average near-infrared veiling (left) and the veiling variability diagnostic (right) measured in different wavelength regions.
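For context, veiling r is conventionally defined as the ratio of excess continuum flux to photospheric flux, so a photospheric line's depth is reduced by a factor of 1/(1+r). A minimal sketch of estimating r from line depths follows; this is illustrative only (the poster's veiling is actually measured by fitting SPIRou spectra against templates):

```python
def veiling_from_depth(depth_obs, depth_template):
    """Estimate veiling r from the depth of a photospheric absorption line.
    With an excess continuum, observed depth = template depth / (1 + r),
    hence r = depth_template / depth_obs - 1."""
    return depth_template / depth_obs - 1.0
```

For example, a line whose depth is halved relative to the non-accreting template implies r = 1, i.e. the excess continuum is as bright as the photosphere at that wavelength.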
• Veiling increases with the wavelength from Y to K band in most stars, highlighting the contribution of the inner disk emission to the NIR-veiling.
Inner disk emission
Fig. 2: Comparison between the color excess computed using the near-infrared veiling and the 2MASS photometry.
• The relation between the near-infrared veiling and the inner disk emission diagnostics shows that a higher veiled system also presents higher inner disk emission, which is expected if the veiling has a contribution from the inner disk.
Short-term variability
• Veiling varies on a time scale of at least one day, so whatever region the NIR-veiling comes from, this region is dynamic, and its flux changes on a time scale of days.
• We computed the periodogram of the veiling, and we did not find any significant periodic signal in the veiling for most systems (on the time scale of typical stellar rotation).
Accretion diagnostics
Fig. 3: Near-infrared veiling as a function of average mass accretion rate computed using the line fluxes of Paβ and Brγ.
• We found a linear correlation between the veiling and the accretion properties of the system. This shows that, although the veiling shows a contribution from the inner disk emission, it also suggests a connection with the accretion process.
Conclusion
• The veiling increases from the Y to the K band, as a result of the increase of the emission contribution from the inner disk as a function of wavelength.
• The veiling correlates with other photometric inner disk diagnostics, such as color excess and slope of the spectral energy distribution, providing further evidence that the veiling arises in the inner disk. We also find a linear correlation between the veiling and the accretion properties of the system. This shows that accretion contributes to inner disk heating and, consequently, to the inner disk emission excess.
• We show that the veiling is variable for most targets on a time scale of at least one day.
Besides that, the near-infrared veiling seems mostly stable on a long time scale for most targets.
Long-term variability
Fig. 4: Average of the near-infrared veiling c
poster
Temporal Signals Help Label Temporal Relations
Leon Derczynski and Robert Gaizauskas
Problem: temporally ordering events and times
Mentions of events in text can often only be placed on a timeline by relating them to times and other events. Relations describe the order of two intervals (events or times) w.r.t. each other. Allen (1983) describes 13 relation types. Relation labeling is the act of assigning one of these relation types to a relation between two intervals, thus temporally ordering them.
The state of the art is at an impasse
Many approaches and even evaluation exercises have tackled automatic general labeling of temporal relations (e.g. TempEval). Accuracy rarely reaches above 60% for event-event links (the majority), or 80% for event-time links. Some links are difficult. What's in them? About 30% of difficult links use a temporal signal.
Hypothesis: explicit temporal signals can help ordering
Sometimes, a temporal relation is co-ordinated by a temporal signal: The torpedo was fired after the ship started sinking. This signal explicitly describes the nature of the temporal relation. Signals may be a single word, as “after” above, or have a head and a qualifier: We got out just before the storm hit. The match was won shortly after. Some signals are also polysemous: I will drag you before the court! This spatial use of “before” does not imply: X ...and later, I will drag the court.
Feature groups
Base features: TimeML attributes – event/time text, event class, part-of-speech, polarity, modality.
In our corpus (TB-sig, adapted from TimeBank) 13.7% of relations had a co-ordinating signal.
Signal Text
The text of the signal is important; before, during and after all have different semantics. Signal, lowercase signal and signal head lemma are included.
Argument Order
Two simple boolean features for argument interval position in text: ordering and same-sentence.
Signal Order
Relative position of the signal in the sentence affects temporal ordering: You walk before you run / Before you walk you run.
Syntax
We capture the constituent parse path between arguments, the label of the lowest common ancestor, and flags for interactions with a temporal function tag -TMP. In this example, the path to began is IN-PP-S-VBD.
DCT
A flag for relations with times, indicating if the time is the document's default timestamp, which is often referred to implicitly.
Results
Yes, signals are a huge help: over 50% less error (+23% absolute performance). Consistent improvements seen ● when using this feature representation ● in many classifiers: MaxEnt, AdaBoost, NBayes, LinearSVC, RandomForest.
Signal text
Intuitively, the signal word (before, after, during etc.) is very important. Removing these features gives a 10% accuracy drop – but the remaining features still give a +14% absolute boost, even without the explicit ordering text.
Syntax
When adding feature groups to the base set, the group describing signal syntax gives the biggest boost (+13%) with event-time links (signal text is best for event-event).
Honourable mention: order features
Adding just the Argument Order and Signal Order groups to the baseline gives almost all the performance of the full feature set – no use of text or syntax.
(Charts: feature ablation for event-event relation labels and feature addition for event-time relation labels; accuracy axis 42-82.)
Interestingly, classification without signal features was much less effective on relations that used a signal than on those that didn't; 2.7% error reduction on signalled vs.
28% on non-signalled (event-event). This indicates one must take signal information into account to order these links. Which features had the most effect?
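The feature groups described above (signal text, argument order, signal order) might be extracted along these lines. The feature names and positional encoding are illustrative, not the authors' exact representation:

```python
def relation_features(e1_pos, e2_pos, signal_pos, signal_text, same_sentence):
    """Toy feature extractor for a signalled temporal relation.
    Positions are token offsets within the document."""
    # Signal head: for "shortly after", the head is "after" and "shortly" qualifies it.
    head = signal_text.lower().split()[-1]
    return {
        "signal_lower": signal_text.lower(),
        "signal_head": head,
        "arg1_before_arg2": e1_pos < e2_pos,          # argument order group
        "same_sentence": same_sentence,
        "signal_before_arg1": signal_pos < e1_pos,    # signal order group
        "signal_between_args": min(e1_pos, e2_pos) < signal_pos < max(e1_pos, e2_pos),
    }

# "The torpedo was fired after the ship started sinking"
features = relation_features(e1_pos=3, e2_pos=8, signal_pos=4,
                             signal_text="after", same_sentence=True)
```

Such a dict can be fed to any of the classifiers listed in the results (MaxEnt, AdaBoost, etc.) after one-hot vectorisation.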
poster
Innovative Research for a Sustainable Future www.epa.gov/research Danica DeGroot l degroot.danica@epa.gov l 919-541-2482 Chemical Screening for Bioactivated Electrophilic Metabolites Using Alginate Immobilization of Metabolic Enzymes (AIME) Danica E. DeGroot, Russell S. Thomas, Steven O. Simmons National Center for Computational Toxicology, U.S. EPA, Research Triangle Park, NC Objective Materials & Methods Results & Conclusions Future Directions To develop a platform to retrofit existing high-throughput screening assays with metabolic competence. Introduction The EPA’s ToxCast program utilizes a wide variety of high-throughput screening assays (HTS) to assess chemical perturbations of molecular and cellular endpoints. A key limitation of many HTS assays used for toxicity assessment is the lack of xenobiotic metabolism which precludes the detoxification as well as toxic bioactivation of chemicals tested in vitro, thereby mischaracterizing the potential hazard posed by these chemicals. To address this deficiency, we have developed the AIME platform to retrofit existing HTS assays with extracellular xenobiotic metabolism. By encapsulating hepatic S9 in alginate microspheres, cytotoxicity and assay interference associated with direct addition of S9 is reduced. Here we describe deployment strategies used with the AIME platform and present our data from three different assay deployments to illustrate the advantages and disadvantages of each strategy. Chemicals – All chemicals were purchased from Sigma-Aldrich and stock solutions were prepared in DMSO. Alginate Immobilization of Metabolic Enzymes (AIME) – Human hepatic S9 (pooled, mixed gender) or Aroclor-induced male rat hepatic S9 was encapsulated in alginate microspheres using a modification of a cell encapsulation protocol by Lee et al. (1). References 1. Lee et al. (2013) Sens. Actuators, B 177: 78-85 2. McCallum et al. (2013) J. Biomol. Screening, 18(6): 705-713 3. Rogers and Denison (2000) In Vitro Mol. Toxicol. 
13(1): 67-82
This poster does not necessarily reflect EPA policy. Mention of trade names or commercial products does not constitute endorsement or recommendation for use.
• We have successfully produced functional AIME microspheres on lids compatible with 96- and 384-well microplates.
• Three strategies for deployment of the AIME platform include the All-in-One method, conditioned medium transfer (CMT) and concentrated reagent addition (CRA).
• The All-in-One method is preferred for retrofitting existing HTS assays due to its speed and ease of deployment; however, assays utilizing buffers or other reagents which are incompatible with xenobiotic metabolizing enzymes may dictate use of an alternative method.
• Conditioned Medium Transfer (CMT) has been successfully used to deploy the AIME platform to a cell-based estrogen receptor transactivation assay.
• Using Concentrated Reagent Addition (CRA), the AIME platform was coupled to the MSTI Fluorescence-Based Thiol-Reactive Assay™. However, AIME-mediated metabolic activity had limited effect on enhancing the electrophilicity of compounds screened with this assay.
• For several compounds (2,2’,4’-trichloroacetophenone, 1,4-dihydroxy-2-naphthoic acid, and diquat bromide) AIME-mediated metabolic activity resulted in a suppression of electrophilicity that was not due to protein binding, suggesting a possible detoxification.
• Overall, coupling the AIME platform to the MSTI assay did not identify any novel electrophiles in the tested chemical library. Possible explanations include assay incompatibility with AIME, low formation of metabolites, and/or low electrophile capture due to the highly-reactive nature of these compounds.
• Encapsulation of human hepatocytes and/or Supersomes to increase the metabolic capacity of the AIME platform
• Identification and quantitation of metabolites using LC-MS/MS to quantify metabolic output of the AIME platform.
This will provide information needed to determine the minimum metabolite levels
poster
Adverse Events Following Immunisation of COVID-19 Vaccine at Queen Elizabeth Hospital
Lim Ming Yao1, Lem Fui Fui1 1Clinical Research Centre, Queen Elizabeth Hospital P-79 NMRR NO: NMRR-21-713-59282
Introduction
The COVID-19 pandemic has brought about catastrophic repercussions globally. First discovered in 2019, as of July 2021 this vicious virus had resulted in more than 193 million people being infected and more than 4 million deaths worldwide[1]. Apart from the many innocent lives lost, the global economy contracted by 3.5% in April 2021 according to a report published by the International Monetary Fund[2]. While public health measures have been devised by experts in every country with the intent of curbing the spread of the virus, these measures inevitably involve temporary cessation of many economic activities, which in turn affected the livelihoods of the people. Locally, in Malaysia, the unemployment rate in 2021 remains at a staggering 4.5% according to official figures. The fundamental solution to this natural calamity is herd immunity, made possible by the development of vaccines against this virus. This impetus to seek out effective vaccines has brought together leading scientists and institutions around the world[3]. With these concerted efforts, many vaccines were tested and the development process was expedited at a speed unprecedented in human history. Traditionally, it takes 10-15 years to develop a safe vaccine. Nonetheless, in dealing with this callous virus, time is a luxury that we do not have. As a result, many uncertainties remain regarding the safety and long-term efficacy of vaccines developed in such a hasty fashion. In Malaysia, frontliners were among the first to be vaccinated. In this study, we aim to describe and summarise adverse events following immunisation (AEFI) with the Pfizer Comirnaty vaccine as reported by healthcare workers in the hospital setting.
Method
The AEFI data were collected via three routes: firstly, at the observation zone immediately post vaccination; secondly, when the patient visited the emergency department of Queen Elizabeth Hospital; thirdly, through information collected on the mySejahtera app. The data were then collated from these ADR forms for the purpose of this study over the period from 2nd March 2021 to 9th June 2021.
Results
Of the 155 subjects recruited, 80.6% were female and 19.4% male. The median age was 33 years. 64.5% of them reported a history of allergy. The five most commonly reported adverse events were rashes (27.7%), globus pharyngeus (27.1%), dizziness (25.8%), pruritus (23.2%) and nausea (16.8%). 86.5% of the subjects required treatment, and the two most commonly administered treatments were intravenous steroids and intravenous antihistamines. 65% of the subjects reported a history of allergy, whereas 28% and 35% reported drug allergy and food allergy respectively.
Discussion
A female preponderance has also been reported in studies conducted in Korea [4] and Italy [5]. In terms of frequency of reported adverse events, rashes and fever were also listed as common events in another study done in Italy [6]. In the USA, thrombocytopenia cases were reported; in our study, however, we did not capture any cases of bleeding tendency which could be suggestive of thrombocytopenia. However, ethnic differences in response to treatments due to biologic factors, such as genetic and epigenetic variants [7], may contradict the findings of thrombocytopenia [8], where the majority of the data are from the Scottish population [9]. Rashes are the most prevalent adverse effects according to Robinson et al. (2021), who found that Asians are the second most afflicted group, which is aligned with our findings [10]. There are several limitations to our study.
Firstly, reporting bias: those who experienced mild symptoms that resolved spontaneously and did not seek treatment at a hospital might not report them to the app and hence not
poster
Project Raptor: An Autonomous Vehicle Platform
Copyright © 2019 Chude Qian, James C. Schmidt, Christina L. Gallishen
Semester 1 Spring 2019. Chude Qian (EE), James C. Schmidt (SC, EE), Christina L. Gallishen (EE). Instructor: Prof. Gregory S. Lee (EECS398/9). Technical Advisor: Prof. Francis L. Merat (Emeritus Professor | EECS). Presented: April 19, 2019, Intersections Poster Session, Case Western Reserve University, Cleveland, OH
Project Raptor is a low-cost, expandable, experimental Ackermann-steering (car-like) autonomous vehicle platform built on a modified Power Wheels platform. The goal of this project is to develop an autonomous vehicle platform on which future research can be conducted and which can easily be duplicated by other autonomous vehicle research groups. The original Power Wheels platform has minimal control and circuitry components. By retrofitting the platform with servo and DC motor controllers, we are now able to control the vehicle electronically. Beyond that, sensors are being installed for future research applications, with the potential of raising the vehicle to autonomy level 3 (based on SAE standard J3016). The controlling software is built using the ROS development standard for robots, guaranteeing future expandability.
(Further panels: Constraints and Standards; Project Budget; Courses and Work Breakdown.)
We would like to thank Marc Krumbein, John Gibbons, David Jarvi, Wyatt Newman, Larry Sears, David Kazdan, and the CWRU IGVC team for all of their unwavering support of this project over the course of the semester. They have been instrumental in the success of this project over its duration.
(Further panels: Problem Statement; Vehicle Design.)
System Control Structure
Looking at the vehicle as a whole, there are numerous components to control while maintaining safety. A three-layer control system was implemented on the vehicle. The top layer (based on Linux and ROS) serves as coordinator and post-processor for the vehicle.
The middle layer of control is implemented using Arduino, aiming to preserve real-time system commands in case the vehicle must be emergency-stopped. The middle layer also directly handles the remote control signal input when a human is operating the vehicle. When in remote control mode, the upper layer is not in use. The last and bottom layer is in charge of emergency stopping and cutting the power to the drive mechanism. It is responsible for stopping the vehicle when both the top layer coordinator and the middle layer controller fail.
Velocity and Acceleration; Steering
Minimum steering radius was tested at 0.5 m/s, averaging 5 meters.
*Note 1: Road test data was collected on Saturday, April 6th, 2019. *Note 2: Autonomous capability has not yet been tested.
Results
Constraints
An autonomous car has numerous design projects hidden in the larger overall project. Completing and financing all these tasks in a single semester is difficult, especially sensing and control. An additional challenge was that many of the components operated on different voltages, namely 5VDC, 12VDC, 24VDC, and 120VAC. Furthermore, due to the nature of an EECS project being electrical and not heavily mechanical, the idea was to work with the sourced plastic vehicle and not spend time altering it.
Standards
IGVC, “Intelligent Ground Vehicle Competition Rule Book,” Intelligent Ground Vehicle Competition Rule Book. IGVC Competition, Rochester, MI, 2018.
Standard Specification for Additive Manufacturing File Format, ASTM Standard 52915, 2016.
Standard Practice for Selection of Wire and Cable Size in AWG or Metric Units, ASTM Standard F1883-03, 2013.
Power over Ethernet Standard, IEEE Standard 802.3at, 2009.
OSI Model for Ethernet Communication, ISO/IEC Standard 7498-1, 1994.
ROS Coordinate Frames for Mobile Platforms, ROS Standard REP-0105, 2010.
Steering
The steering on a Power Wheels is manual. It was driven by a long shaft attached to the steering shaft and steering wheel.
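The three-layer priority scheme described above (e-stop layer overrides everything, RC input bypasses the ROS coordinator) can be sketched as a simple arbitration function. This is an illustrative sketch of the described behavior, not the project's actual firmware:

```python
def arbitrate(estop_active, rc_mode, rc_cmd, ros_cmd):
    """Select the active (velocity, steering) command across the three layers.
    Returns (0.0, 0.0) when the vehicle must be stopped."""
    if estop_active:      # bottom layer: e-stop always wins and cuts drive power
        return (0.0, 0.0)
    if rc_mode:           # middle layer: human RC input; the top layer is not in use
        return rc_cmd
    return ros_cmd        # top layer: ROS coordinator drives autonomously
```

In the real vehicle the bottom layer is a hardware power cut rather than software, precisely so that it still works when both upper layers fail.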
There were no pre-manufactured points where a servo could be directly installed for auto
poster
A new tool for human provenancing? Neodymium in a forensic and archaeological context.
Esther Plomp (e.plomp@vu.nl), Isabella von Holstein, Laura Font, Janne Koornneef, Jason Laffoon and Gareth Davies
Deep Earth Cluster, Faculty of Earth and Life Science (FALW), Vrije Universiteit Amsterdam, The Netherlands
1. Introduction
Isotope analyses have increasingly been used over the last decade to decipher human diet and migration in forensic [1,2] and archaeological investigations [3,4]. Commonly used isotopic systems (Sr-Pb, O-H) for human provenancing studies, however, have limitations. Thus the inclusion of another isotope system is necessary to provide further validation of the technique and to improve the spatial resolution. Neodymium isotopes (143Nd/144Nd) are a potential tool that may complement currently utilised systems. In this pilot study a latest-generation thermal ionization mass spectrometer (TIMS, TRITON-Plus) at the Vrije Universiteit, Amsterdam (Fig. 1) was used to measure sub-nanogram amounts of Nd in modern human teeth to demonstrate the applicability of the Nd isotope technique to human provenancing. Enamel and dentine from third molars belonging to 35 modern Dutch inhabitants were sampled. Due to the low concentrations of Nd in human teeth, all the enamel and dentine was sampled and prepared using various methods (Fig. 2), acquiring sample sizes of generally ~0.5g, up to ~1g. These sample quantities are enormous compared to what is required for Sr analysis (1-3 mg), complicating sampling and chemical processing. Sampling bulk enamel and dentine was found to be most efficient using a hand-held drill. The teeth were dissolved and processed using chromatographic procedures, and were analysed using TIMS. Due to the low Nd concentrations the samples were analysed using 10^13 Ohm amplifiers. The internal precision and external reproducibility have been demonstrated to be better for small samples in comparison to the standard 10^11 Ohm amplifiers [10]. 4.
Nd Results
Nd concentrations in human teeth were found to be very low, generally varying from 0.3 to 3 ppb, with some exceptions of concentrations up to 30 ppb (Fig. 3). The 143Nd/144Nd isotope ratios in individual teeth usually fall within or above the currently defined Dutch local range, which is based on Dutch river sediment data [11, 12], fossil animal bones [7] and archaeological glass [3] (Fig. 3). As all individuals analysed lived within the Netherlands, it was expected that their Nd values would fall within the local range. This is the case for the majority (n=10) of the individuals. Some individuals (n=2) show slightly elevated neodymium ratios. It is possible that these individuals fall outside of the currently defined local range because the data used to construct this range are not directly comparable to modern human individuals. It was possible for 3 out of 12 individuals to measure the sample with both 10^13 and 10^11 Ohm amplifiers (Fig. 4). In these cases the 143Nd/144Nd ratios of both measurements are within error (2 SD), validating the 10^13 Ohm amplifier measurements on human tissues.
5. Conclusion
Preliminary results show that low Nd concentrations in modern human teeth are within analytical capabilities; however, the analytical process is still challenging. The 143Nd/144Nd isotope results in tooth enamel of Dutch individuals illustrate the potential of Nd isotopes as an additional provenance tool that may provide a more complete image of the geographical origins of individuals. Ongoing work to refine the method is being completed before it is applied to modern human remains in forensic investigations and archaeological remains. 143Nd is expressed relative to the stable, non-radiogenic isotope 144Nd: the 143Nd/144Nd isotope ratio (comparable to the 87Sr/86Sr ratio). Geological variations of 143Nd/144Nd generally range from 0.510 in the oldest parts of the earth to 0.514 in recent mantle-derived lavas [5].
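Because geological 143Nd/144Nd variations are so small (0.510 to 0.514), Nd compositions are often reported in epsilon notation, the parts-per-10,000 deviation from the CHUR reference ratio. The poster quotes raw ratios; the conversion below is the standard one, using the conventional present-day CHUR value:

```python
CHUR_143ND_144ND = 0.512638  # conventional present-day CHUR reference ratio

def epsilon_nd(ratio):
    """Convert a measured 143Nd/144Nd ratio to epsilon-Nd units
    (parts per 10,000 deviation from CHUR)."""
    return (ratio / CHUR_143ND_144ND - 1.0) * 1e4
```

For example, a tooth with 143Nd/144Nd = 0.512740 sits about 2 epsilon units above CHUR, a far more readable number than the sixth-decimal difference in the raw ratio.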
Nd has limited mobility during weathering process
poster
III: Small: Partitioning Big Data for High Performance Computation of Persistent Homology
PI: Philip A. Wilsey. Institution: University of Cincinnati. Award #: IIS-1909096
NSF CSSI PI Meeting, Seattle, WA, Feb. 13-14, 2020
Contact information: Philip A. Wilsey, High Performance Computing Lab, Dept of EECS, PO Box 210030, Cincinnati, OH 45221-0030
Topological Data Analysis / Persistent Homology
Exponential complexity (time & space) ● Limited to ~10K points in R^3 (64GB RAM)
Data reduction & partitioning ● Use cluster centroids (samples) ● Use clusters +δ as partitions
Parallelism and concurrency
Data reduction: 27K → 300 points utilizing k-means++
Data Sampling and Partitioning
Witness complex, random sampling, clustering (enables upscaling): ● Density-based: DBSCAN ● Grid-based (distance-independent) ● Partition-based: k-means++ ● Hierarchical: agglomerative
3-4 orders of magnitude performance gains
Output Analysis
Quantitative analysis ● Persistence interval comparison: bottleneck, Wasserstein, heat kernel distances ● Performance: runtime, memory use, scalability ● Data sampling & partitioning: persistent homology preserving sampling; persistence interval preservation; topological feature preservation
Qualitative analysis ● Barcodes ● Persistence diagrams ● Landscape diagrams ● Persistence images ● Feature boundary extraction
Data Reduction Performance Improvements
Reduced input points result in: ● Reduced memory footprint ● Faster CPU and wall time ● Approximations of large features with bounded error ● Upscaling to improve boundary identification of topological features
(Chart: speedup vs. percent of original points retained (0.9, 0.75, 0.5, 0.25, 0.1) for the Two Circles and Two Moons datasets.)
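The k-means++ step used for data reduction (e.g. 27K → 300 points) seeds centroids with probability proportional to squared distance from the nearest centroid chosen so far, which spreads the representative points across the dataset. A pure-Python sketch of the seeding step (illustrative, not the project's implementation):

```python
import random

def kmeanspp_seeds(points, k, seed=0):
    """k-means++ seeding: pick k well-spread points from a list of tuples."""
    rng = random.Random(seed)
    centroids = [rng.choice(points)]            # first centroid: uniform at random
    while len(centroids) < k:
        # squared distance of each point to its nearest chosen centroid
        d2 = [min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids)
              for p in points]
        total = sum(d2)
        r = rng.uniform(0, total)               # D^2-weighted draw
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centroids.append(p)
                break
        else:
            centroids.append(points[-1])        # numerical-edge fallback
    return centroids
```

Persistent homology would then be computed on the seeded centroids (or on the clusters grown from them, padded by δ, as partitions) rather than on the full point cloud.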
poster
We support researchers
■ in the social, behavioural, educational and economic sciences,
■ with qualitative and quantitative research designs,
■ as data users and data producers.
Our goals – highlights from the work programme
■ Deepen collaboration between research data centres (FDZ): jointly build the competencies of FDZ staff
■ Increase data quality and the potential for reuse of data: develop a shared portfolio of professional services for research data management tailored to qualitative data and research
■ As part of KonsortSWD, the RatSWD offers science and data production a forum for exchange and is your point of contact for improving data access. Write to us at www.konsortswd.de or info@konsortswd.de!
Who we are
Data access: our research data centres (FDZ) are the key to quality-assured data for your research.
Provision of research data: research data management begins with data collection.
Technical solutions: maintaining metadata, finding data – with us, everything is FAIR.
What we offer
Community involvement: our aim is to develop the services around research data access according to your needs.
Deutsches Zentrum für Hochschul- und Wissenschaftsforschung. Dr. Bernhard Miller, GESIS – Leibniz-Institut für Sozialwissenschaften. Susanne Zindler, RatSWD Geschäftsstelle.
A far-reaching and sustainable research data infrastructure of the FDZ. Permanent safeguarding of flexible access to sensitive data. Strong quality through FAIR data and metadata.
Good data. Better research!
poster
Academic Titles and Open Science Perspectives
Matic Bradač, Tomaž Ulčakar
Investigating the Researchers' Attitudes at the University of Ljubljana School of Economics and Business, with Insights into the Development of Central Economics Library Services
PURPOSE
Perception of OS: examining the influence of academic title on the perception of OS practices.
Integration of OS: exploring the integration of OS practices at the institution.
New Library Services: developing new services based on survey results.
APPROACH
Online Survey: the questionnaire was sent in May 2022 to 171 UL SEB researchers, with a 40% response rate.
Statistical Analysis: binary logistic regression was used, with academic title as the independent variable and agreement with individual OS practices as the dependent variable.
Small Sample Size: focusing on a single institution limited the ability to make broader conclusions within the academic sphere.
RESULTS
Academic Titles and OS Practices: statistical analyses showed no significant correlation between academic titles and support for OS practices.
Attitude towards OS practices: while most researchers expressed belief in the benefits of OA publishing, certain OS practices encountered disapproval.
New Library Services: varying levels of success in the Central Economics Library's efforts to raise awareness about OS, with strong support for OA publishing but mixed support for other OS practices. New or improved services based on the survey results are: promotion of and consultations on OS, an online guide, depositing publications, and open data assistance.
(Charts: agreement — Agree / Undecided / Disagree — by academic title (Assistant, Assist. Professor, Assoc. Professor, Full Professor) with the statements "OA publishing is useful for science", "Research data should be open", "Citizen science is useful for research".)
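The binary logistic regression used in the analysis can be sketched in pure Python on hypothetical data (a senior-title indicator as the independent variable, agreement with an OS practice as the dependent variable); exp(slope) is the odds ratio reported by such models. This is a minimal gradient-descent stand-in, not the statistical package the authors used:

```python
import math

def fit_logistic(x, y, lr=0.5, epochs=5000):
    """Single-predictor binary logistic regression fitted by gradient descent.
    Returns (intercept, slope); exp(slope) is the odds ratio for x=1 vs. x=0."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))  # predicted P(agree)
            g0 += p - yi
            g1 += (p - yi) * xi
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Hypothetical data: 8/10 seniors agree vs. 4/10 non-seniors (true odds ratio 6).
senior = [1] * 10 + [0] * 10
agree = [1] * 8 + [0] * 2 + [1] * 4 + [0] * 6
b0, b1 = fit_logistic(senior, agree)
```

A non-significant slope (confidence interval for the odds ratio spanning 1) corresponds to the poster's finding of no correlation between title and support for OS practices.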
poster
Introduction
The constant increase in users’ bitrate demands has pushed telecom operators to provide fast and economical broadband to home and business users. Due to its lower cost compared to fiber-to-the-home (FTTH) deployments and quicker time to market, fixed wireless access (FWA) is a viable alternative to fiber for providing the necessary connectivity, with enormous growth prospects for operators. FWA targets the wider bandwidths provided by mmWave frequencies in order to enable more than 1 Gbps in the access network. The IEEE 802.11ay specification defines that the 60 GHz band can achieve such bandwidths with technologies that include Multi-user MIMO (MuMIMO), serving bitrates up to 40 Gbps over transmission lengths up to 500 m. However, this short range makes a backhaul made entirely of fiber unfeasible. Thus, wireless backhaul links must be used to support serving nodes. Locating the serving nodes (Edge Nodes - EN) and the user nodes (Customer Premises Equipment - CPE) is a challenge that involves several technology and propagation considerations. A typical FWA network architecture is presented in Fig. 1.
Fig. 1: FWA network architecture
Methods
To evaluate the performance of the FWA network, a Java-based network planner was created. It includes a multi-objective graph solver algorithm to provide connectivity to user houses in the 60 GHz band.
Channel modelling
The path loss model is based on measurements in the 60 GHz band:
• Propagation path loss
• Reflection losses
• Rain losses
• Vegetation losses
Use cases
Two main use cases: Rural (Leest) and Urban (Liège).
Algorithm
Main Results
Fig.
2: Network planning examples for Rural and Urban environments
60 GHZ NETWORK PLANNING FOR FIXED WIRELESS ACCESS
DEPARTMENT OF INFORMATION TECHNOLOGY – WAVES RESEARCH GROUP, Ghent University
German Castellanos. Contact: German.castellanos@UGent.be www.waves.intec.ugent.be
Research objective
Design a network planner algorithm to serve home users with a fixed wireless access (FWA) network under environmental constraints in the 60 GHz band.
Planner algorithm: 1. Planner initialization (input config and scenarios); 2. CPE generation (bitrate and location); 3. EN generation (tree pruning); 4. Graph network generation: node creation and path loss calculations; 5. Solver configuration: define optimization objectives and constraints, then solve the mixed-integer program; 6. Assign the solved solution to ENs and CPEs, calculate metrics and print results; simulation finished.
Coverage
• Achieves >97% in Urban while Rural >90%.
• Lamp post usage is essential.
Edge node usage
• Urban uses 4x more nodes/km2 than Rural.
• Urban serves fewer CPEs per edge node (~0.8) compared to Rural (~1.0). ➔ Depends on CPE density.
Link distance
• Mesh distance is constant in all scenarios.
• CPE distance is 25% larger in Rural.
Served capacity per edge node
• In Urban (300-500 Mbps) it is 2/3 of the Rural scenario.
• Supported capacity exceeds 4 Gbps in Urban and 3 Gbps in Rural.
Vegetation
• In Rural, vegetation reduces coverage by ~3%; edge node usage then increases by ~7%.
• No impact on user coverage in Urban due to vegetation, but a ~4% increment in the edge nodes used.
Heavy rain
• In Rural, heavy rain reduces coverage by ~5% or increases edge node usage by ~15%.
• Heavy rain slightly affects user coverage, but the required edge nodes increase by ~5% in Urban.
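The additive loss structure listed under "Channel modelling" can be illustrated with a generic free-space model plus excess-loss terms. This is a textbook stand-in (the planner itself uses a measurement-based 60 GHz model), with the rain, vegetation and reflection terms taken as caller-supplied assumptions:

```python
import math

def link_loss_db(d_m, f_ghz=60.0, rain_db_per_km=0.0, veg_db=0.0, refl_db=0.0):
    """Total link loss in dB: free-space path loss at distance d_m (metres)
    and frequency f_ghz, plus additive rain, vegetation and reflection losses."""
    # FSPL(dB) = 20 log10(d[m]) + 20 log10(f[GHz]) + 32.45
    fspl = 20 * math.log10(d_m) + 20 * math.log10(f_ghz) + 32.45
    return fspl + rain_db_per_km * (d_m / 1000.0) + veg_db + refl_db
```

At 60 GHz the free-space loss is already about 68 dB at 1 m, which is why link distances stay short and why rain and vegetation margins matter so much in the Rural scenario.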
poster
Background
The bulk of taxonomic information is locked in paper-based legacy literature, especially in fundamental regional treatises such as the Flora, Fauna and Mycota series. The current pilot demonstrates a workflow that enhances the marked-up content of Flora Malesiana and re-publishes it as an open access, semantically enriched HTML edition available on the newly launched Advanced Books publishing platform (http://advancedbooks.org) (Fig. 1). The pilot demonstrates how scientifically important historical monographs, enriched with additional information from up-to-date external sources related to taxon names, species treatments, morphological characters, etc., become freely usable for anyone at any place in the world, in addition to other benefits of the digitization and markup effort such as data extraction and collation, distribution and re-use of atomized content, and archiving of different data elements in relevant repositories (Fig. 2).
1 Institute for Biodiversity and Ecosystem Research, Bulgarian Academy of Sciences, Sofia, Bulgaria; 2 Pensoft Publishers, Sofia, Bulgaria; 3 Netherlands Centre for Biodiversity Naturalis, Leiden, The Netherlands; 4 Freie Universität Berlin – Botanischer Garten und Botanisches Museum, Berlin-Dahlem, Germany; 5 Plazi, Zinggstrasse 16, Bern, Switzerland
Key outputs
Contact: Lyubomir Penev, info@pensoft.net
The workflow
Step 1. Conversion of the printed volumes into digital text format, through scanning and OCR (Naturalis).
Step 2. Markup of generic document features and domain-specific information following the FlorML schema (Naturalis); export of extracted data into the EDIT CDM (BGBM).
Step 3. Conversion of the FlorML XML files into TaxPub-based XML (Plazi, Pensoft); export of treatments to the Plazi Treatment Repository.
Step 4. Markup, conversion, and publication of the XML as a semantically enriched open access HTML edition (Pensoft).
Step 5.
Browse, search, export and re-use of the atomized content (taxon treatments, images, morphological characters, etc.).
www.pro-ibiosphere.eu | 10/06/14
A re-publication of Flora Malesiana in a semantically enriched open access edition
Lyubomir Penev1,2, Teodor Georgiev2, Jordan Biserkov2, Thomas Hamann3, Peter Schalk3, Andreas Müller4, Anton Güntsch4, Terry Catapano5, Donat Agosti5
PROSPECTIVE PUBLISHING OF HISTORICAL LITERATURE
Fig. 2. Multiplying the impact of the markup effort: (1) content digitized (OCR, retyping, extraction of images and tables), data extracted and collated with other data; (2) content linked to external sources and re-published in semantically enriched open access; (3) re-use and re-cycling of biodiversity data from both legacy and recently published literature. Workflow elements: FlorML and TaxPub XML schemas; unified marked-up final output (taxon treatments, keys, images, localities, references); content management systems & repositories (e.g., CDM, Scratchpads, Plazi, EOL, GBIF); re-publication in a semantically enriched open access HTML edition; end users.
Fig. 1. Re-published edition of volume 14 of Flora Malesiana on advancedbooks.org. Re-publication in semantically enhanced HTML: scan of original page → markup and data extraction → XML → extracted data → aggregators (GBIF, EOL, CDM, Plazi, etc.).
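Step 3 of the workflow converts FlorML-marked treatments into TaxPub-based XML. A minimal sketch of that kind of schema mapping is below; the element names on the FlorML side are invented for illustration and the `tp:`-prefixed output elements only gesture at TaxPub's structure — both real schemas are far richer.

```python
import xml.etree.ElementTree as ET

# Hypothetical FlorML-like fragment; real FlorML element names differ.
florml = ("<treatment><name>Dillenia indica L.</name>"
          "<description>Tree to 30 m; leaves serrate.</description></treatment>")

src = ET.fromstring(florml)

# Map to a TaxPub-style treatment (tp: prefix as used by TaxPub documents).
tp = ET.Element("tp:taxon-treatment")
nomenclature = ET.SubElement(tp, "tp:nomenclature")
ET.SubElement(nomenclature, "tp:taxon-name").text = src.findtext("name")
section = ET.SubElement(tp, "tp:treatment-sec")
ET.SubElement(section, "p").text = src.findtext("description")

out = ET.tostring(tp, encoding="unicode")
print(out)
```

In a production pipeline this mapping would be driven by the actual schemas (e.g. via XSLT) with proper namespace declarations rather than literal prefixed tag names.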
poster
• LSND and MiniBooNE observed electron neutrino and antineutrino appearance inconsistent with the standard three-flavor formalism
• A sterile neutrino model is a possible explanation for this result
• We consider the 3+1 model in MINOS+, which adds additional oscillation parameters
The MINOS+ Impact
• Compared to MINOS, there is an increased rate of background events, particularly neutral currents (NC), and a decreased rate of νe charged current (CC) appearance
• However, the 3+1 model can lead to beneficial shifts in the expected event rates
• MINOS+ builds upon the vetted MINOS appearance analysis techniques to probe for new physics in the 6–12 GeV region
An updated search for muon neutrino to electron neutrino transitions mediated by sterile neutrinos in MINOS+
Stefano Germani (University College London), Gregory Pawloski (University of Minnesota), Adam P. Schreckenberger (The University of Texas at Austin), on behalf of the MINOS+ Collaboration
Updated νe Event Selection: LEM Selector
The Library Event Matching (LEM) signal selection method was used in the past†. A single discriminant is produced by comparing input candidates to a library of 20M simulated signal and 30M NC Far Detector (FD) events, comparing event topologies to select compact νe CC showers from hadronic activity. Four variables from the matching process are input to an artificial neural network that yields the discriminant:
• Fraction of the best 50 matches that were signal
• Mean inelasticity of signal events in the best 50
• Mean matched charge of signal events in the best 50
• Reconstructed energy of the input candidate
The artificial neural network is trained using Monte Carlo optimized for the MINOS+ energy spectrum. The selector provides a clear shape difference between background and signal events in the 3+1 parameter space. A cut between 6–12 GeV reduces background and improves signal-to-background in the signal-selected region (LEM > 0.6).
The MINOS+ Experiment and Sterile Search Motivations
MINOS+ is an on-axis,
long-baseline experiment studying neutrino oscillations in the medium-energy NuMI beam, an extension of the MINOS experiment, which studied neutrino and antineutrino oscillations in the low-energy NuMI beam mode. Opportunities from using the higher-energy NuMI beam:
• Increased beam power in addition to new beam optics
• νμ → νe appearance has not been explored in an accelerator experiment with the current NuMI on-axis energy spectrum
• Search for exotic oscillation phenomena by focusing on energies shifted from the oscillation maximum
• Comparisons between FD predictions and data place limits upon the parameter space of interest
Analysis Crosschecks
Before looking at the signal-selected region, several crosschecks are performed to verify the LEM selection algorithm and the prediction method:
• AntiPID – compares the three-flavor FD prediction and data with LEM < 0.5. No νe CC excess is expected in this region. Predicted 131.2 ± 11.5 (stat. only); observed 132.
• MRCC – assesses the handling of NC events in the analysis region (LEM > 0.6) by making a prediction using an NC-like sample created from μ CC events in data and simulation.
Both sideband FD predictions were statistically indistinguishable from the data.
The MINOS Near and Far Detectors are functionally identical.
References
†Electron neutrino and antineutrino appearance in the full MINOS data sample, P. Adamson et al. (MINOS), Phys. Rev. Lett. 110 (2013) 171801, arXiv:1108.0015.
††P. Huber, Phys. Rev. C 85, 029901 (2011) (fit and reactor flux update); A. Aguilar et al. (LSND), Phys. Rev. D 64, 112007 (2001); A.A. Aguilar-Arevalo et al. (MiniBooNE), Phys. Rev. Lett. 110, 161801 (2013).
• The analysis is to be performed on the first 5.77 × 10²⁰ protons-on-target (POT) delivered to MINOS+
• 109.2 events are expected in the FD data given a three-flavor oscillation prediction using global best-fit values
MINOS+ Sensitivities
The fit to the 3+1 model is done in 3 bins of LEM PID and 6 bins of reconstructed energy. This analysis is sensitive to both θ14 and θ24, and there are additional dependencies to θ1
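The four LEM inputs listed above are simple summaries over the 50 best library matches. A toy sketch of forming that feature vector for one candidate event, with all match values randomly invented (the real quantities come from MINOS+ reconstruction and the simulated library):

```python
import random

# Toy stand-in for one candidate's best-50 library matches:
# (is_signal, inelasticity y, matched charge q) -- all values invented.
random.seed(1)
best50 = [(random.random() < 0.6,
           random.uniform(0.0, 1.0),
           random.uniform(0.0, 5.0)) for _ in range(50)]
reco_energy_gev = 7.2  # assumed reconstructed energy of the candidate

signal_matches = [m for m in best50 if m[0]]
features = (
    len(signal_matches) / len(best50),                        # signal fraction
    sum(m[1] for m in signal_matches) / len(signal_matches),  # mean inelasticity
    sum(m[2] for m in signal_matches) / len(signal_matches),  # mean matched charge
    reco_energy_gev,                                          # reconstructed energy
)
print(features)
```

These four numbers are then fed to the trained neural network, which compresses them into the single LEM discriminant used for the LEM > 0.6 cut.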
poster
Q. Coppée1,2, J. Müller1,2, S. Hekker1,2
1 Heidelberg Institute for Theoretical Studies, Germany; 2 Heidelberg University, Heidelberg, Germany
quentin.coppee@h-its.org
Various morphologies observed in the power spectra of suppressed-dipole-mode red giants
Fig. 1: Morphologies of stars with suppressed dipole modes, as observed in the power spectra of RGB stars (KIC 11099144, KIC 10779177, KIC 6975038, KIC 6206407). The regions of the radial (l = 0), dipole (l = 1), quadrupole (l = 2) and octupole (l = 3) modes are highlighted in black, blue, orange and yellow, respectively.
Fig. 2: Distribution of νmax per observed morphology. Red giants with fully suppressed dipole modes have a higher νmax than partially suppressed red giants.
Fig. 3: We find a significant number of suppressed-dipole-mode red giants that had a radiative core on the main sequence (convective vs. radiative core on the MS, at solar metallicity). This means that these stars potentially did not have a magnetic dynamo in their core, contrary to what Stello et al. (2016) found.
The observed morphologies appear to be related to νmax (i.e. evolution along the RGB). We find that a significant number of our stars should not have had a convective core while hydrogen core-burning took place. This suggests that these stars did not have a magnetic dynamo in their core on the main sequence.
poster
ExoFOP and the NASA Exoplanet Archive: Serving Exoplanetary Systems to the Community
Julian van Eyken, Jessie Christiansen, Doug McElroy, David Ciardi, Megan Crane, John Good, Marcy Harbut, Aurora Kesseli, Mike Lund, Meca Lynn, Ricky Nilsson, Toba Oluyide, Mike Papin, Nick Susemiehl, Melanie Swain, Raymond Tam
Caltech/IPAC-NExScI
The Exoplanet Follow-up Observing Program (ExoFOP) and its sibling service, the NASA Exoplanet Archive, are NExScI's two web-accessible databases dedicated to supporting the community in the study of exoplanets. Both build on the infrastructure and in-house experience of IPAC as a NASA data center. The ExoFOP website is designed to optimize resources and facilitate collaboration in follow-up studies of exoplanet candidates, serving as a repository for project- and community-gathered data. The NASA Exoplanet Archive is NASA's science archive for astronomical data on published exoplanets and their host stars.
exofop.ipac.caltech.edu | exoplanetarchive.ipac.caltech.edu
The NASA Exoplanet Archive
The publicly accessible NASA Exoplanet Archive collates data on confirmed exoplanets drawn from the published literature and from mission project deliveries, as well as contributed datasets.
DATA
• Parameters for over 5,700 published confirmed planets vetted by staff scientists, and 14,000 planet candidates
• Regularly updated approx. every one to two weeks to keep up with newly published literature
• Over 130 million additional photometric and radial velocity time series
• Data from HST, JWST, Spitzer, TESS, Kepler/K2, SuperWASP, MOA, KELT, UKIRT, CoRoT, and more
• Choice of parameters drawn from single or multiple references
TOOLS
Interactive and programmatic tools for accessing, visualizing, and retrieving data:
• Data tables can be filtered, searched, sorted, plotted, and downloaded in multiple formats.
• System Overview pages centralize planet & host data.
• Transit and Ephemeris Service predicts transit events observable from Earth and space, including JWST.
• Interactive visualization of exoplanet transmission and emission spectra using the IPAC-developed Firefly visualization library
• EXOFAST MCMC transit & radial-velocity fitting
• Predictions of exoplanet observable signatures
• Pre-generated presentation plots
• Periodogram service
• IVOA TAP interface for standardized automated data retrieval
ExoFOP
ExoFOP facilitates science-community follow-up observing. It enables users in the community to share data, files, and observing notes. Any user can access and download data, while users with registered accounts can also upload data. Originally designed to support Kepler and TESS, ExoFOP is now being adapted to handle custom target lists for other missions and community groups; the Habitable Worlds Observatory (HWO) target list is already available.
• >1,600 registered users: professionals and citizen scientists worldwide.
• Approaching 1,000,000 user-uploaded data files, with tens to hundreds added daily.
• Built on the entire TESS Input Catalog (Gaia DR2): a ~2-billion-star base catalog that covers the whole sky.
• Primarily organized around recording photometric, imaging, and spectroscopic observations.
• API interfaces available for download/upload; Python-wrapped examples available; being adapted to be IVOA compliant.
• Related observations, data, files, and notes can all be tied together with a "data-tag."
• Email notifications available for specific targets and daily updates.
• Searches can be saved and tagged for reuse.
Cumulative histogram of the number of files uploaded to ExoFOP by users over time. Currently around 150 files per day are uploaded on average. The large jump at the end of 2016 is a result of the K2 community sharing data; the jump around 2019 is a result of ExoFOP becoming officially part of the TESS Follow-up Observing Program (TFOP) and general adoption of sharing by the community.
973,804 uploaded files • 748,150 uploaded parameters • 31,409 observing notes • 25,641 spectroscopy observations • 30,925 im
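The archive's TAP interface mentioned under TOOLS accepts ADQL queries over documented tables such as `ps` (Planetary Systems). The sketch below only builds a sync-query URL and makes no network request; the endpoint and column names match the archive's TAP documentation as I understand it, but verify them against the current docs before relying on this.

```python
from urllib.parse import urlencode

# NASA Exoplanet Archive TAP endpoint (see the archive's TAP user guide).
BASE = "https://exoplanetarchive.ipac.caltech.edu/TAP/sync"

# ADQL against the `ps` table: default parameter set, recent discoveries.
adql = ("select pl_name, hostname, disc_year from ps "
        "where default_flag = 1 and disc_year >= 2020")

url = BASE + "?" + urlencode({"query": adql, "format": "csv"})
print(url)
# Fetching this URL (e.g. with urllib.request.urlopen) returns CSV rows;
# no request is made here.
```

The same query works from any IVOA TAP client (e.g. pyvo or TOPCAT), which is the point of exposing a standardized interface.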
poster
This study was conducted to explore mothers' knowledge of and attitude towards childhood immunisation in Sibu, Sarawak.
Prospective Cross-sectional Study on Mothers' Knowledge and Attitude on Childhood Immunisation in Sibu Hospital
Jude Siong Yip Kiong1, Tang Shi Ying1, Wong Zhen Zhen1, Philip Tang Tung Ying1
1 Department of Pharmacy, Sibu Hospital, Ministry of Health Malaysia
P-92 NMRR-20-775-53752
Materials and methods
Study design: cross-sectional study. Target population: mothers in Sibu Hospital. Sampling method: data collection was done on 7 randomly selected working days in the antenatal ward (ANW), postnatal ward (PNW) and 3 paediatric medical wards.
Figure 2: Research outline — mothers in the ANW, PNWa and 3 paediatric medical wards in Sibu Hospital → screened for inclusion criteria (n = 473) → excluded (n = 62) → data collection (n = 201; self-administered questionnaire) → data analysis.
Inclusion criteria:
• Mothers with child/children aged 0 to 15 years old
• Comprehends Malay or English
Data collection forms:
Part 1: Patient demographics
Part 2: Knowledge (Ammar et al.)
Part 3: Attitude (Abdullah et al.)
Part 4: Willingness to pay for non-scheduled vaccinesb
a ANW = antenatal ward; PNW = postnatal ward. b Non-scheduled vaccines: vaccines not provided free under the national immunisation schedule.
Discussion/Conclusion
Young mothers in Sibu generally have moderate knowledge of and a positive attitude towards childhood immunisation. More than 70% of subjects are willing to pay for influenza and hepatitis A vaccination. We would like to encourage healthcare providers to give more information about childhood vaccination to mothers during Maternal and Child Health (MCH) follow-up, and to offer influenza or hepatitis A vaccination in private clinics if parents are willing.
Results
Result 1: Demographic data (n = 201)
Result 2: Knowledge score
Result 3: Attitude
REFERENCES:
1. Kusnin F. Immunisation programme in Malaysia.
Vaccinology 2017: International Symposium for Asia Pacific Experts. Available from: https://www.fondation-merieux.org/wp-content/uploads/2017/10/vaccinology-2017-faridah-kusnin.pdf
2. Abdullah AC, binti Mohd Zulkefli NA, Rosliza AM. Predictors for inadequate knowledge and negative attitude towards childhood immunization among parents in Hulu Langat, Selangor, Malaysia. Malaysian J Public Heal Med. 2018;18(1):102–12.
3. Awadh AI, Hassali MA, Al-lela OQ, Bux SH, Elkalmi RM, Hadi H. Does an educational intervention improve parents' knowledge about immunization? Experience from Malaysia. BMC Pediatr. 2014;14(1):1–7.
Introduction
Vaccine refusal in Malaysia has shown an increasing trend since 2013 due to anti-vaccine influence, and parents are the main decision makers for their children's immunisation.
Figure 1: Vaccine refusal in Malaysia
Refused (n = 210)
Mean knowledge score: 6.43 (SD 1.88) out of a total score of 10. There was no significant association between sociodemographic data and knowledge score.
More than 80% agree with the positive attitude statements; 37.8% are worried about side effects after vaccination; 43.4% would be discouraged from vaccination if a friend did not vaccinate their children with a certain vaccine.
Result 4: Willingness to pay for non-scheduled vaccines
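The Part 2 knowledge score (reported above as mean 6.43, SD 1.88 out of 10) is a sum of correct answers per respondent, summarised across the sample. A sketch with entirely made-up responses, just to show the computation:

```python
import statistics

# Hypothetical responses to the 10-item knowledge instrument (Part 2):
# 1 = correct, 0 = incorrect. The study's real data are not reproduced here.
answers = [
    [1, 1, 0, 1, 1, 1, 0, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 1, 1, 0, 0, 1, 1, 1, 0],
]

scores = [sum(a) for a in answers]       # each mother's score out of 10
mean_score = statistics.mean(scores)
sd_score = statistics.stdev(scores)      # sample SD, as usually reported
print(scores, round(mean_score, 2), round(sd_score, 2))
```

Associations with sociodemographic variables (reported as non-significant here) would then be tested on these per-respondent scores, e.g. with t-tests or ANOVA.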
poster
Sustainability performance of industrial-scale heterojunction technology (HJT) for solar photovoltaics (PV)
INTRODUCTION
• The goal of the AMPERE Project is to set up an innovative 200 MWp full-scale automated pilot line, which will produce high-efficiency and long-life HJT silicon solar cells and modules at the Enel Green Power (EGP) 3SUN site in Catania, Italy.
• The purpose is to demonstrate innovative and sustainable manufacturing of PV products and enhance the competitiveness of the EU industry in the global PV market.
• A sustainability assessment was conducted to assess the potential environmental and social impacts and benefits that may occur across the value chain. The results will be used to inform stakeholders on sustainability best practices for scaling similar PV technologies in Europe.
SCOPE & METHODOLOGY
Impact assessment methods:
• Environmental LCA (E-LCA): cradle-to-grave LCA according to ISO 14040 and the EU Product Environmental Footprint Category Rules for PV modules (PEFCR)
• Social LCA (S-LCA): UNEP S-LCA guidelines
Types of impacts considered:
• E-LCA: impact categories included in the International Reference Life Cycle Data System (ILCD), including global warming potential (IPCC 2013). GWP is a key driver in assessing electricity production technologies.
• S-LCA: material issues were identified as working conditions, labour rights, health and safety, local community impacts and socio-economic benefits.
INITIAL RESULTS & CONCLUSIONS
GWP results & conclusions:
• GWP per kWh is lower for the AMPERE bifacial module than for the PERC (mono-facial) module:
• Lower GWP impact in the production process.
• Increased lifetime electricity generation from the bifacial gain and extended lifetime.
• Impact of reducing wafer thickness and the source of electricity used for wafer production:
• Wafers produced using Norwegian grid electricity (predominantly renewable hydro power) have a lower GWP impact than wafers produced with average European electricity (significant contribution from coal and other fossil fuels).
• Use of thinner wafers (150 µm compared to 180 µm) can reduce GWP, but larger decreases can be seen when electricity is sourced from renewables.
OBJECTIVES
1. Use life cycle assessment (LCA) methods to identify environmental and social impacts in the AMPERE HJT module value chain.
2. Benchmark the environmental and social performance against mainstream PV technologies (i.e. the PERC mono-facial module) and quantify the potential benefits of design innovations considered in the AMPERE project (e.g. thinner wafers and use of Smart Wire Connection Technology).
3. Identify potential environmental and social impacts and benefits of producing HJT modules and components.
Social LCA – initial conclusions
• Upstream activities in emerging markets present environmental and social (E&S) risks; the typical value chains of PV materials were assessed through a combined sector/country analysis.
• There is strong potential for creating benefits in the European PV value chain: direct and indirect job creation, economic contribution, and technological development and innovation in both upstream and downstream segments.
• Increase of technological capacities and innovation in European PV value chains.
D. Reid*, B. Hartlin, C. Poulopoulos and E. Bauguen
*Donald Reid, Environmental Resources Management (ERM), Eaton House, Wallbrook Court, North Hinksey Lane, Oxford, UK, OX2 0QS.
Email: donald.reid@erm.com
Environmental LCA – Global Warming Potential (GWP)
Value chain stages: mining (copper, silver, tin, etc.) → processing of raw materials → manufacturing of components → Catania production site → installation → operation and maintenance → end of life.
High to very high risks:
• Child/forced labour
• Poor and unsafe working conditions
• Workers' rights and wages
• Impacts on local communities (environmental damage, public health, land use)
Benefits: potential for job creation, economic contribution, technological innovation in Europe
Medium to high risks:
• Poor and unsafe working conditions
• Worker
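The GWP-per-kWh comparison stated earlier reduces to production impact divided by lifetime electricity generation, with the bifacial gain and extended lifetime enlarging the denominator. The sketch below uses purely illustrative placeholder numbers, not AMPERE results:

```python
# Why GWP per kWh falls for the bifacial HJT module: lower production
# impact divided by more lifetime energy. All numbers are invented.
def gwp_per_kwh(production_gwp_kg, power_kwp, specific_yield_kwh_per_kwp_yr,
                lifetime_yr, bifacial_gain=0.0):
    """kg CO2-eq per kWh over the module's lifetime generation."""
    lifetime_kwh = (power_kwp * specific_yield_kwh_per_kwp_yr
                    * lifetime_yr * (1.0 + bifacial_gain))
    return production_gwp_kg / lifetime_kwh

perc_monofacial = gwp_per_kwh(750.0, 0.4, 1200.0, 30)                     # placeholder
ampere_bifacial = gwp_per_kwh(680.0, 0.4, 1200.0, 35, bifacial_gain=0.10) # placeholder
print(perc_monofacial, ampere_bifacial)
```

A full PEFCR-compliant calculation would also include degradation, transport, installation, and end-of-life stages; this only isolates the production-vs-generation trade-off discussed above.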
poster
Biogeographic patterns in the deep ocean: a revision of the Global Open Oceans and Deep-Seabed classification system
Berta Ramiro Sánchez1*, Lea-Anne Henry1, J. Murray Roberts1, Telmo Morato2, Gérald Hechter Taranto2, Marina Carreiro-Silva2, Íris Sampaio2, Sophie Arnaud-Haond3, Bramley Murton4
1School of GeoSciences, The University of Edinburgh; 2IMAR-UAz; 3MARBEC-Ifremer; 4NOC
*Contact: Berta Ramiro Sánchez, Berta.ramiro@ed.ac.uk
Introduction
Biogeographic classifications are used to analyse patterns of marine biodiversity, advance knowledge of evolutionary and ecosystem processes, and assist governments in designing management tools. The Global Open Oceans and Deep Seabed (GOODS) biogeographic classification system for the deep ocean1,2 was developed to aid management efforts and minimize impacts of activities in the high seas, where governance is limited. Vulnerable marine ecosystems3 (VMEs) provide essential ecosystem services (e.g. nursery grounds4; nutrient cycling5) and are protected through international initiatives6. Most VMEs, however, lie in the high seas, receiving little attention. Based entirely on physical proxies presumed to reflect species biogeography, the GOODS tool is not grounded in species data. GOODS is also currently only a static product, as the classification does not account for projected future climate change scenarios.
Objectives
(1) To validate the GOODS classification for complex habitats formed by VME indicator taxa.
(2) To test biogeographic boundaries at present and under future ocean climate change projections for the year 2100⁷, based on the output of the Intergovernmental Panel on Climate Change Fifth Assessment Report models.
Figure 1. Current proposed GOODS benthic provinces in the North Atlantic. A. Three lower bathyal (801–3500 m) provinces — BY1: Arctic; BY2: Northern Atlantic Boreal; BY4: North Atlantic (including MAR hydrothermal vents). B. One abyssal (3501–6500 m) province — AB2: North Atlantic.
Modified from Watling et al. (2013).
Methodology
1. Validation of GOODS for complex habitats formed by VME indicator taxa
Data compilation: existing environmental variables (depth, temperature, salinity, dissolved oxygen, POC flux and silicate) and presence and absence point data of VME indicator species in the North Atlantic will be compiled.
Data analysis: metacommunity structure analysis will be used to quantify the spatial structure of VME species distributions and obtain distinct faunal provinces. VME indicator species will be assigned to the environmental and historical cluster with which they spatially co-occur.
2. Testing biogeographic boundaries at present and under future climate change scenarios
Data compilation: data will consist of species distribution models (SDMs) for 6–8 VME taxa in the North Atlantic and modelled changes in environmental variables for the year 2100⁵.
Data analysis: SDMs will be correlated with the environmental and historical clusters producing faunal breaks under the projected conditions. Through Procrustes rotation, the projected GOODS for the year 2100 will be contrasted with the present-day GOODS.
Figure 3. Modelled environmental changes at the deep seafloor (>200 m) in the year 2100 relative to present-day conditions, following the IPCC reports. Modified from Sweetman et al. (2017).
Challenges
The spatial resolution of available environmental datasets is broad enough to encompass areas that may not have sufficient data, but equally to miss detailed information. Similarly, VME indicator species point data are abundant in continental waters; however, despite the contribution of ATLAS case studies, there is still a lack of data covering the high seas.
References
1UNESCO, 2009. Global Open Oceans and Deep Seabed (GOODS) – biogeographic classification. IOC Technical Series 84.
2Watling, L., et al., 2013. A proposed biogeography of the deep ocean floor. Progress in Oceanography 111, 91–112.
3UNGA resolution 61/105, 2006. 4Henry, L.-A., et al., 2016. Seamo
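The Procrustes rotation step in the methodology superimposes the projected year-2100 clusters onto the present-day ones and measures the residual mismatch. A minimal sketch using invented province centroids (real inputs would come from the cluster analyses) and the standard SVD solution to the orthogonal Procrustes problem:

```python
import numpy as np

# Invented present-day province centroids and a hypothetical year-2100
# configuration: the same provinces rotated and translated.
present = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
theta = np.deg2rad(10.0)                       # assumed climate-driven shift
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
future = present @ R.T + 0.05                  # rotated + translated provinces

# Orthogonal Procrustes: centre both sets, then find the rotation that
# best superimposes them (SVD solution).
A = present - present.mean(axis=0)
B = future - future.mean(axis=0)
U, _, Vt = np.linalg.svd(A.T @ B)
rotation = U @ Vt
residual = float(np.linalg.norm(A @ rotation - B))
print(residual)   # near zero here: boundaries coincide after rotation
```

With real cluster outputs, a large residual would flag provinces whose boundaries genuinely move under the climate projections rather than merely shifting rigidly.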
poster
Background
• Community pharmacists often use third-party medication therapy management (MTM) platforms to identify gaps in a patient's medication regimen (adherence problems, suboptimal drug regimens, cost-saving alternatives, etc.) and recommend targeted interventions to the patient or the prescriber.
o These interventions (TIPs) can be completed by a pharmacist, pharmacy resident, or pharmacy student under the supervision of a pharmacist.
o With intervention outreach, the pharmacy team member will contact the patient in order to provide support and identify ways to improve the patient's medication regimen.
o In addition to the targeted intervention, the pharmacy team member will provide counseling, facilitate communication with other clinicians, or refer to other pharmacy-based services (PBSs) as appropriate.
o Once the targeted intervention is addressed, a claim will be submitted for reimbursement.
• According to Took et al., community pharmacies were able to identify and resolve 34 medication-related problems (MRPs) and 81 medication discrepancies in 19 patients, avoiding $92,142 in healthcare costs in only four months.1
o On average, 4.3 discrepancies and 1.8 MRPs per patient.
o The most common MRP was "underuse of medication", indicating the need for adherence support in these patients.
o This study shows the monetary value of using this platform to help identify MRPs, but the clinical benefit of the program has not yet been established.
• Such programs can also be used by pharmacies in an effort to decrease direct and indirect remuneration (DIR) fees.
o DIR fees are partially determined by patients' adherence to oral medications treating hypertension (HTN), hyperlipidemia (HLD), or type 2 diabetes mellitus (T2DM).
o As such, this study will evaluate chronic disease control for patients with HTN, HLD, and/or T2DM by assessing blood pressure (BP), low-density lipoprotein (LDL) levels, and/or hemoglobin A1C (A1C), respectively.
Objectives The primary objective of this study is to assess the impact on clinical outcomes of community pharmacists’ use of a third-party MTM platform to identify MRPs. The secondary objectives include adherence rates for patients filling at hospital-based outpatient pharmacies using a PBS and hospitalization due to a complication of HTN, HLD, and/or T2DM. Methods Karen Caye Juco, PharmD; Marlowe Djuric Kachlic, PharmD, BCACP Nazia S. Babul, PharmD, BCACP; Jewel Sophia Younge, PharmD, BCPS Chronic Disease Management by Community Pharmacists Through a Third-Party Medication Therapy Management Platform Limitations Adherence/PDC • PDC was unavailable for prescriptions not filled at an affiliated hospital-based outpatient pharmacy • PDC recorded was the PDC at time of data collection, rather than during the study period Hospitalizations • Could not accurately assess hospitalizations not at the affiliated academic medical center, therefore outside hospitalizations were not included in data collection • Adherence alone cannot prevent all hospitalizations Lab Value Assessment • Changes in lab values cannot be solely attributed to pharmacist interventions • COVID-19 pandemic may have led to delay of lab monitoring for many patients • BP measurements in EMR do not account for potential at-home BP measurements Data Collection • Patient-identifying data was unavailable for some claims in the third-party platform, leading to loss of some data • EMR conversion occurred during the study period, which could have led to unknown data loss • Almost one-third of patients did not have adequate lab values recorded in the EMR for comparison, leading to their exclusion Clinical Decision Making • Assignment of “controlled” versus “uncontrolled” did not account for appropriate deviations in clinical goals (i.e., A1C goal <8% rather than <7% for patients with a history of hypoglycemic events) • Difficult to account for medications used for indications other than BP/LDL/A1C improvement or 
off-label uses (i.e., sta
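The adherence metric behind the DIR-fee discussion and the Adherence/PDC limitations above is the proportion of days covered (PDC). A minimal sketch of the calculation from fill records: this simple variant takes the union of covered days and does not shift overlapping fills forward, as some payer algorithms do, and the dates are invented.

```python
from datetime import date, timedelta

def pdc(fills, start, end):
    """Proportion of days covered in [start, end].

    fills: iterable of (fill_date, days_supply) tuples.
    """
    covered = set()
    for fill_date, days_supply in fills:
        for i in range(days_supply):
            day = fill_date + timedelta(days=i)
            if start <= day <= end:
                covered.add(day)           # union handles overlapping fills
    window_days = (end - start).days + 1
    return len(covered) / window_days

# Hypothetical 90-day window with three 30-day fills and small gaps.
fills = [(date(2023, 1, 1), 30), (date(2023, 2, 5), 30), (date(2023, 3, 10), 30)]
print(round(pdc(fills, date(2023, 1, 1), date(2023, 3, 31)), 3))
```

A PDC of at least 0.80 is the threshold commonly used to call a patient "adherent" for these measures, though the study's exact operational definition may differ.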
poster
Optimizing FastTrack's performance to obtain formant values in fricative noise
Alexander Shiryaeva • Alexandre Arkhipovb • Michael Danielc • Ekaterina Shepeld
a Independent researcher, Singapore • b University of Hamburg, Germany • c University of Tübingen, Germany • d Université Paris Cité, France
This paper has been produced in the context of the joint research funding of the German Federal Government and Federal States in the Academies' Programme, with funding from the Federal Ministry of Education and Research and the Free and Hanseatic City of Hamburg. The Academies' Programme is coordinated by the Union of the German Academies of Sciences and Humanities.
FastTrack by S. Barreda [1]
A Praat plugin for (semi-)automated formant tracking:
• Only original Praat [2] formant estimates, no modifications
• Varies Praat's formant ceiling (= max. value for F5) in 8, 12, 16, 20 or 24 steps
• At each step, formants are estimated by Praat and the resulting formant tracks are approximated by a smooth function
• The analysis with the least-curved tracks is chosen (the "winner")
• 3 or 4 formants can be tracked
• Variable curvature of the approximating smooth function
• Allows manually choosing another of the suggested analyses
• Each formant can be separately picked from a particular analysis step
FastTrack analysis with eight approximations for [ħ] in cenħe 'together', formant ceiling range 4,000–6,000 Hz, curvature 4, speaker Kz. "Winner" analysis in the bold frame; red ovals show how the approximating functions change with the ceiling value.
Data
Settings
References
[1] Barreda, S. (2021). Fast Track: fast (nearly) automatic formant-tracking using Praat. Linguistics Vanguard, 7(1).
[2] Boersma, Paul & Weenink, David (2024). Praat: doing phonetics by computer [Computer program]. Version 6.4.12, retrieved 2 May 2024 from http://www.praat.org/
• Optimal formant ceiling range?
• Optimal approximation function curvature?
• 3 or 4 formants to track?
• Are fricatives more difficult to track than vowels?
Choosing FastTrack parameters
References (cont.)
[3] Dobrushina, N. 2019. The language and people of Mehweb. In: Daniel, M. et al. (eds.), The Mehweb language: Essays on phonology, morphology and syntax. Language Science Press, 1–15.
[4] Moroz, G. 2019. Phonology of Mehweb. In: Daniel, M. et al. (eds.), The Mehweb language: Essays on phonology, morphology and syntax. Language Science Press, 17–37.
[5] Arkhipov, A., Daniel, M., Shiryaev, A. & Shepel, E. 2023. Evaluating formant estimations and Discrete Cosine Transform to differentiate between pharyngeal fricatives in Mehweb. 20th ICPhS, Prague. Zenodo. https://doi.org/10.5281/zenodo.8264637
• Given smooth formant transitions between epilaryngeals and adjacent vowels, can it be helpful to track formants on entire VCV sequences? Do they need higher curvature settings?
General observations
• The optimal formant ceiling range is speaker-dependent, with women having higher ranges, as expected.
• Laryngeal [h] is the most error-prone due to low intensity and duration.
• Plain epilaryngeal [ħ] has the clearest formant structure, and therefore attains the highest FastTrack accuracy among epilaryngeals.
• In pharyngealized [ʜ], individual formants are less visible ("flatter" high-energy region) ⇒ more errors than in plain [ħ].
Settings
• Formant ceiling range: four 2 kHz ranges for each speaker, 500 Hz step between ranges. Lowest range (men): 3,000–5,000 Hz; highest range (women): 6,000–8,000 Hz.
• Number of steps: 24 for each combination of settings.
• Curvature: 4, 6, 8.
• Number of formants to track: 3. Preliminary tests have shown that optimizing for 4 formants can yield less precise results for F1–F3 (if F4 itself is not studied).
• Segment types: C / V — individual consonant / vowel; vCv — consonant + 35 ms of adjacent vowels; VCV — consonant + entire adjacent vowels.
Discussion
• FastTrack's accuracy on [ʜ] was generally similar to that on vowels.
• Increased curvature in most cases did not improve FastTrack accuracy. Curvature 6 was optimal only for the speaker Kz; curvature 8 perform
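FastTrack's "winner" selection described above — estimate formants at several ceilings, smooth each track, keep the analysis whose tracks are best explained by the smooth fit — can be sketched as follows. The tracks here are synthetic and the smooth function is a plain cubic polynomial; the real plugin works on Praat's formant estimates and uses its own regression-based smoothing.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 40)
true_f1 = 500.0 + 150.0 * t                 # a smooth synthetic F1 transition (Hz)

def tracked(noise_hz):
    """Stand-in for one ceiling step: truth plus tracking error."""
    return true_f1 + rng.normal(0.0, noise_hz, t.size)

# Hypothetical ceilings (Hz) producing different amounts of tracking error.
candidates = {5000: tracked(80.0), 5500: tracked(15.0), 6000: tracked(60.0)}

def roughness(track, order=3):
    """Residual sum of squares around a cubic fit (lower = smoother)."""
    coef = np.polyfit(t, track, order)
    return float(np.sum((track - np.polyval(coef, t)) ** 2))

winner = min(candidates, key=lambda c: roughness(candidates[c]))
print(winner)
```

The poster's curvature parameter corresponds to how flexible the smoothing function is allowed to be; raising the fit order in this sketch would play the analogous role.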
poster
Multivariate Formation Pressure Prediction with Seismic-derived Petrophysical Properties from Prestack AVO Inversion and Poststack Seismic Motion Inversion
Yu, Hao and Gu, Hanming, Institute of Geophysics and Geomatics, China University of Geosciences Wuhan, Wuhan, China
Contact: Yu Hao, China University of Geosciences, email: yuhao8905@cug.edu.cn, website: whimian.github.io
References
1. Sayers, C. M., et al. "Use of reflection tomography to predict pore pressure in overpressured reservoir sands." SEG Technical Program Expanded Abstracts 22.1 (2003): 1362.
2. Gu, W., et al. "Application of seismic motion inversion technology in thin reservoir prediction: A case study of the thin sandstone gas reservoir in the B area of Junggar Basin." Natural Gas Geoscience 27.11 (2016).
Introduction
A new multivariate seismic formation pressure prediction methodology is presented, which incorporates high-resolution seismic velocity data from prestack AVO inversion and petrophysical data (porosity and shale volume) derived from poststack seismic motion inversion. In contrast to traditional seismic formation pressure prediction methods, the proposed methodology is based on a multivariate pressure prediction model and applies a trace-by-trace multivariate regression analysis to seismic-derived petrophysical properties to locally calibrate model parameters, yielding accurate predictions with higher resolution in both the vertical and lateral directions.
Multivariate Model / Results
Developed from traditional geostatistical inversion methodologies, Seismic Motion Inversion (SMI) is an inversion method that utilizes the thin-bed tuning effect to determine and optimize the structure of reflection coefficients and to simulate the distribution of sand bodies. The lateral variation of seismic motion, instead of the traditional variogram, is used to describe the spatial variation of the reservoir.
Seismic Motion Inversion
Conclusions
Application of the proposed methodology to a research area in the East China Sea has shown that the method:
• bridges the gap between seismic and well-log pressure prediction;
• gives prediction values close to pressure measurements from well testing;
• provides more detailed pressure variation both vertically and horizontally.
Future work: an uncertainty analysis will be added to this pressure prediction workflow, not only as a quality control process but also as a measure of how reliable the predicted pressure data are when used in well planning and casing design.
The multivariate formation pressure prediction model proposed by Sayers (2003) is well balanced between conciseness and representativeness. Three petrophysical properties (velocity, porosity, and shale volume) are used to describe the variation of effective stress in this model:
V = a0 − a1·φ − a2·C + a3·σ^B
The porosity term φ describes the degree of compaction, while the shale volume term C describes the relative influence of different rock types. Though proposed as a model that can only deal with abnormal pressure generated by compaction disequilibrium, it can be used to predict abnormal pressure caused by fluid expansion when combined with the unloading model proposed by Bowers. The corresponding equation for unloading can be formulated as:
V = a0 − a1·φ − a2·C + a3·(σmax·(σ/σmax)^(1/U))^B
Code for the basic part of this project has been published as an open-source Python package, pyGeoPressure: https://github.com/whimian/pyGeoPressure
Fig 1. SMI workflow. Fig 2. SMI samples according to wave motion similarity and distance to well location.
With the prestack time migration velocity as the initial velocity model, AVO inversion was first applied to the prestack dataset to obtain high-resolution seismic velocity (with higher frequency content) to be used as the velocity input for seismic pressure prediction, along with a density dataset used to calculate an accurate overburden pressure (OBP). Porosity and sh
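The trace-by-trace calibration step can be sketched as follows (not the authors' code): with the stress exponent B fixed at 1, the Sayers-type model becomes linear in its coefficients, so it can be calibrated by least squares on well-log samples and then inverted for effective stress and pore pressure. All numbers are synthetic.

```python
import numpy as np

# Calibration sketch: with the stress exponent B fixed at 1, the model
#   V = a0 - a1*phi - a2*C + a3*sigma
# is linear in a0..a3 and can be fitted by least squares on well-log
# samples. All numbers below are synthetic.
a_true = np.array([5000.0, 8000.0, 2000.0, 0.08])     # assumed a0..a3
rng = np.random.default_rng(0)
phi = rng.uniform(0.05, 0.35, 200)                    # porosity
C = rng.uniform(0.0, 0.6, 200)                        # shale volume
sigma = rng.uniform(5e3, 4e4, 200)                    # effective stress
V = a_true[0] - a_true[1] * phi - a_true[2] * C + a_true[3] * sigma

# Design-matrix columns carry the signs of the model equation
A = np.column_stack([np.ones_like(V), -phi, -C, sigma])
a_fit, *_ = np.linalg.lstsq(A, V, rcond=None)

def pore_pressure(V, phi, C, obp, a):
    """Invert the calibrated model for effective stress, then apply
    Terzaghi: pore pressure = overburden pressure - effective stress."""
    sigma_eff = (V - a[0] + a[1] * phi + a[2] * C) / a[3]
    return obp - sigma_eff

print(np.allclose(a_fit, a_true))  # True: noise-free fit recovers a0..a3
```

Running the same fit within a sliding window per trace gives the locally calibrated coefficients the methodology calls for.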
poster
Funders: Myrovlytis Trust
API
With the Europe PMC Annotations Platform anyone can harness the power of text-mining for the benefit of their own research.
Removing barriers to text analytics
Text-mining data covers different research outputs: research articles, patents, preprints, theses, etc. It includes abstracts as well as full text for open access articles. Over 500 million annotations of biological concepts, events and relations. All text-mining data is freely accessible via a RESTful API in a variety of formats.
Comprehensive dataset of biological annotations
500 million annotations: gene ontology, gene/protein, mutations, protein interactions, diseases, organisms, phosphorylation, gene-diseases, accession numbers, transcription factor-target gene, function, chemicals.
How do you get hold of publicly available text-mining data that can help you find biological connections in research literature? Europe PMC provides an open platform where experts share their text-mining outputs with the wider community.
Public text-mining platform
Europe PMC Annotations Platform
EXTRACTING RESEARCH EVIDENCE FROM BIOMEDICAL PUBLICATIONS
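A small sketch of calling the platform's RESTful API from Python using only the standard library. The endpoint and parameter names follow the public Europe PMC Annotations API documentation as understood at the time of writing, and the article ID is just an example; verify both against the current docs before relying on them.

```python
from urllib.parse import urlencode

# Endpoint and parameter names are assumptions based on the public
# Europe PMC Annotations API docs -- verify before use.
BASE = "https://www.ebi.ac.uk/europepmc/annotations_api/annotationsByArticleIds"

def annotations_url(article_ids, ann_type="Gene_Proteins", fmt="JSON"):
    """Build a request URL for annotations on one or more articles,
    where IDs use the source:id form, e.g. 'PMC:PMC3558050'."""
    params = {"articleIds": ",".join(article_ids),
              "type": ann_type,
              "format": fmt}
    return BASE + "?" + urlencode(params)

url = annotations_url(["PMC:PMC3558050"])
print(url)
# Fetching is then one call away, e.g.:
#   import json, urllib.request
#   annotations = json.load(urllib.request.urlopen(url))
```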
poster
Respiratory rate monitoring to detect deteriorations using wearable sensors
P. H. Charlton 123, T. Bonnici 123, L. Tarassenko 2, D. A. Clifton 2, P. J. Watkinson 4, J. Alastruey 1, R. Beale 13
1 King's College London; 2 University of Oxford; 3 Guy's and St Thomas' NHS Foundation Trust; 4 Oxford University Hospitals
This work was supported by the UK EPSRC (Grant EP/H019944/1), the NIHR Biomedical Research Centre at Guy's and St Thomas' NHS Foundation Trust in partnership with King's College London, the NIHR Oxford Biomedical Research Centre Programme, the Oxford and King's College London Centres of Excellence in Medical Engineering funded by the Wellcome Trust and EPSRC under grant no. WT88877/Z/09/Z and grant no. WT088641/Z/09/Z, a Royal Academy of Engineering (RAEng) Research Fellowship awarded to DAC, and an EPSRC Challenge Award to DAC. The views expressed are those of the authors and not necessarily those of the EPSRC, NHS, NIHR, Department of Health, Wellcome Trust, or RAEng.
Source: Charlton, P.H., 2017. Continuous respiratory rate monitoring to detect clinical deteriorations using wearable sensors. PhD Thesis, King's College London.
1. Continuous respiratory rate (RR) monitoring using wearable sensors
2. Development of an algorithm to estimate RR from the ECG
3. Detection of deteriorations in real-time
Continuous monitoring may provide early warning of deteriorations
Respiratory rate (RR, the number of breaths per minute) often increases in the hours before acute deteriorations such as cardiac arrests and sepsis. RR is currently measured by hand every 4-6 hours in hospitalised patients. Consequently, changes in RR can go unrecognised between measurements.
It is difficult to monitor RR in ambulatory patients
Continuous RR monitoring often relies on cumbersome sensors, such as the chest band, facemask and oral-nasal cannula shown below. These are not suitable for monitoring ambulatory patients for several days.
RR could be estimated from ECG or PPG signals
Both the electrocardiogram (ECG) and pulse oximetry (photoplethysmogram, PPG) signals can be continuously monitored using wearable sensors.
The ECG and PPG are modulated by respiration
The ECG and PPG are influenced by respiration in three ways: baseline modulation, amplitude modulation, and frequency modulation. This provides an opportunity to estimate RR from the signals.
RR algorithms in the literature
A systematic review of the literature identified 140 publications containing evaluations of RR algorithms. A total of 95 candidate algorithms were implemented for testing.
Publicly available datasets
Four datasets were identified with which to assess RR algorithms. Two datasets (Vortal and Fantasia) were acquired from healthy subjects, and two (MIMIC-II and CapnoBase) were collected from hospital patients.
Assessment of RR algorithms
The performances of RR algorithms were assessed on the four datasets using the limits-of-agreement statistics: the bias (i.e. mean error) and the limits of agreement (LoAs, within which 95% of errors are expected to lie). The results are shown in the table (bias ± LoAs in breaths per min). The best algorithm in the literature demonstrated high levels of inaccuracy: its high LoAs of ±8.6 and ±10.0 bpm on the MIMIC-II dataset showed that it was too imprecise for continuous monitoring.
Refinement for continuous monitoring
A novel RR algorithm was designed to provide more precise RR estimates (results in table). It achieved lower LoAs of ±3.2 bpm when using the ECG.
Dataset                    Best algorithm in literature      Novel RR algorithm
                           ECG          PPG                  ECG          PPG
Healthy subjects
  Vortal                   -1.8 ± 7.9   -5.0 ± 10.2          -0.2 ± 3.1   -0.9 ± 4.5
  Fantasia                 -1.8 ± 7.8   n/a                  -0.2 ± 2.5   n/a
Hospitalised patients
  CapnoBase                -0.1 ± 3.6    0.2 ± 4.9            0.4 ± 1.5    0.8 ± 3.3
  MIMIC-II                 -1.2 ± 8.6   -1.8 ± 10.0           0.0 ± 3.2   -0.2 ± 9.0
(Results shown as bias ± limits of agreement, bpm)
Application to clinical data
184 patients recovering from cardiac surgery in hospital were monitored using
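The bias ± LoA numbers in the results table can be computed as in this minimal sketch (standard Bland-Altman-style limits of agreement; the synthetic data below are illustrative, not the thesis datasets):

```python
import numpy as np

def bias_loa(estimated, reference):
    """Limits-of-agreement statistics as used on the poster:
    bias = mean error; LoA half-width = 1.96 * SD of the errors,
    the interval expected to contain ~95% of errors."""
    err = np.asarray(estimated, float) - np.asarray(reference, float)
    return err.mean(), 1.96 * err.std(ddof=1)

# Synthetic check: errors drawn from N(-0.2, 1.6) should recover
# bias ~ -0.2 bpm and LoAs ~ +/- 3.1 bpm.
rng = np.random.default_rng(1)
ref = rng.uniform(10, 25, 5000)                  # reference RR, bpm
est = ref - 0.2 + rng.normal(0, 1.6, ref.size)   # algorithm estimates
bias, loa = bias_loa(est, ref)
print(f"{bias:+.1f} +/- {loa:.1f} bpm")
```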
poster
Combining models and observations for a more inclusive science with the Vera C. Rubin Observatory
S. Ustamujic1, R. Bonito1, L. Venuti2
1 INAF-Osservatorio Astronomico di Palermo, Piazza del Parlamento 1, 90134 Palermo, Italy; 2 SETI Institute, 339 Bernardo Avenue, Suite 200, Mountain View, CA 94043, USA
AIMS
In this project, we aim to investigate the variability of YSOs on different time scales (from hours to years).
Email: sabina.ustamujic@inaf.it
NGC 6530 stellar cluster - Kepler satellite. Figure adapted from Venuti et al. (2021), AJ 162, 101
ABSTRACT
The study of Young Stellar Objects (YSOs) and their variability (related to e.g. accretion, flares, rotation) is one of the scientific topics that will take advantage of the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) in the context of the Transients and Variable Stars (TVS) Science Collaboration (SC). In order to support more inclusive participation in Rubin LSST activities, we aim to create a catalogue of publicly available interactive 3D graphics and 3D-printed kits based on the modeling and interpretation of a number of physical processes causing photometric variability in YSOs that we plan to investigate with future Rubin LSST data. This action is in line with the Justice, Equity, Diversity & Inclusion (JEDI) group of the TVS SC and will help to disseminate some of our results and to adequately present Rubin/TVS science to visually impaired researchers and members of the community at large. Here we present the project and the first models of our catalogue.
CATALOGUE
Our final goal is to create a catalogue of publicly available interactive 3D graphics and 3D-printed kits to adequately present Rubin/TVS science to visually impaired researchers and members of the community at large, for a more inclusive science. Here we present one of our preliminary models.
VARIABILITY IN YSOs YSOs are complex systems formed by a central star surrounded by a circumstellar disk, which exhibit significant variability due to various dynamical processes: magnetically- channeled accretion streams transfer material from the disk to the star, and supersonic jets and outflows eject material from the system. RUBIN-LSST The goal of the Vera C. Rubin Observatory project is to conduct the 10-year Legacy Survey of Space and Time (LSST), which will deliver a 500 petabyte set of images and data products that will address some of the most pressing questions about the structure and evolution of the universe and the objects in it. The Transients and Variable Stars (TVS) Science Collaboration (SC) will study Young Stellar Objects (YSOs) and their variability taking advantage of the huge amount of data that the survey will obtain. ACTIVITIES 1) Analyzing YSO light curves from public datasets of young star-forming regions from available large-scale surveys in preparation for Rubin LSST data. 2) Developing models and 3D renderings reproducing the geometry of star-disk systems that could explain the variability observed in the data and that would give rise to specific light curve patterns. 3D model of an accretion disk with the inner disk warped
poster
HIP 102152b: A low-mass planet candidate around an old solar twin
Thiago Ferreira & Jorge Meléndez, Instituto de Astronomia, Geofísica e Ciências Atmosféricas, Universidade de São Paulo, Brazil. Contact: tfsantos@usp.br
ABOUT THE STAR
HIP 102152 is a solar twin (Teff = 5718 K, [Fe/H] = -0.016, and log g = 4.325; [1]), placed near the end of the main sequence (t = 8.2 Gyr), with a rotation period of 35.7 days [2]. Due to its severe lithium depletion [3] and low activity levels (log R'_HK = -5.12) but high radial velocity (RV) RMS, it was identified as a promising planet-hosting system [4].
DATA ANALYSIS
HIP 102152 was observed with the ESO/HARPS spectrograph mounted at the 3.6 m telescope at La Silla Observatory between Oct. 2011 and May 2019. From a Lomb-Scargle analysis, we detected a periodic signal of ~23 d in the RV time series; beyond known stellar activity, no modulation of the activity indicators was observed at this period, nor correlations with the RVs (see Figs. 1 and 2). We employed a joint Keplerian plus Gaussian Process (GP) model and estimated the significance of the results via Markov chain sampling, where observations at two different epochs are correlated via the kernel
Σ_ij = η1² · exp( -|t_i - t_j|² / (2·η2²) - 2·sin²(π·|t_i - t_j| / η3) / η4² ),
a function of the Doppler amplitude of the signal (η1), the time scale for growth and decay of active regions (often comparable to the star's rotation period, η2), the recurrence time scale for active regions (η3), and a smoothing parameter (η4) [5]. For the Keplerian orbits, both the eccentricity (in the orthogonal basis with the periastron argument: √e·sin(ω) and √e·cos(ω)) and the radial velocity slope (γ̇) were allowed to vary freely, and we included separate jitter parameters for data taken before and after the ESO/HARPS upgrade in mid-2015. The other model variables include the GLS orbital period (P), its Doppler amplitude (K) and a reference time (Tc).
Figure 1. GLS periodogram of the RV and activity indicators of HIP 102152 (panels: RV, CCF bisector span, CCF FWHM, Ca II S (H+K), several H-line indices, and the spectral window; P_rot = 35.7 d, P_orb = 22.9 d). The central blue line marks the star's rotational period and its harmonics. The green line marks the planet's orbital period, and the red line is the periodogram peak for each variable. The power levels at which FAP levels are less than 1% and 5% are indicated as grey horizontal lines.
Figure 2. The blue line represents the best-fit linear correlation between the RVs and several activity indicators (bisector span, FWHM, Ca II S (H+K), and H-line indices). The Pearson correlation coefficient p, ranging from -0.27 to 0.17 across indicators, is indicated to quantify the significance of the correlations.
RESULTS
HIP 102152b is consistent with a mini-Neptune (m·sin(i) = 8 ± 1 M⊕) on a circular orbit with period P = 22.9 ± 0.01 d, and no RV slope due to acceleration from an outer massive companion was observed in this system (see Fig. 3). The best-fit solution (combining Normal and Jeffreys priors) was obtained with the RadVel routine [6], implementing a maximum a posteriori optimisation and computing confidence intervals from 1000 independent draws, while requiring a Gelman-Rubin statistic R̂ < 1.01.
π(V_r | Θ) ∝ N(P | 22.9, 0.01) × N(K | 2, 0.2) × N(T0 | x(max y), 0.01) × J(η1 | 0.01, 100) × N(η2 | 35.7, 1.4) × N(η3 | 30, 0.7) × N(η4 | 0.46, 0.01) × N(σ_A,B | 0, 0.1)
Four models (circular and eccentric, with and w/ sl
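For illustration, the quasi-periodic covariance described in the Data Analysis section can be written down directly in a few lines of NumPy. The η values below are taken from the priors listed here, the time grid is invented, and the checks only confirm basic properties of the matrix, not the fit:

```python
import numpy as np

def qp_kernel(t, eta1, eta2, eta3, eta4):
    """Sigma_ij = eta1^2 * exp(-|ti-tj|^2 / (2*eta2^2)
                               - 2*sin^2(pi*|ti-tj|/eta3) / eta4^2)."""
    dt = np.abs(t[:, None] - t[None, :])
    return eta1**2 * np.exp(-dt**2 / (2 * eta2**2)
                            - 2 * np.sin(np.pi * dt / eta3)**2 / eta4**2)

# Eta values follow the priors above; the time grid is illustrative.
t = np.linspace(0, 100, 50)                      # observation times, days
K = qp_kernel(t, eta1=2.0, eta2=35.7, eta3=30.0, eta4=0.46)
print(K.shape, np.allclose(K, K.T), np.allclose(np.diag(K), 4.0))
# (50, 50) True True  -- symmetric, with variance eta1^2 on the diagonal
```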
poster
The 5GinFIRE platform A testbed for end-to-end 5G experimentation Christos Tranoris, Spyros Denazis University of Patras Patras, Greece tranoris@ece.upatras.gr, sdean@upatras.gr Anastasius Gavras, Halid Hrasnica Eurescom GmbH Heidelberg, Germany gavras@eurescom.eu, hrasnica@eurescom.eu Abstract — 5G is the next generation networking infrastructure with a strong focus on requirements of various vertical domains. 5G brings improvements on networking performance but also introduces new services for deploying software involving networking aspects in an end-to-end manner from the edge to the cloud. 5GinFIRE is an EU H2020 project that builds and operates an Open 5G NFV based reference ecosystem of experimental facilities. It enables 5G NFV-based architectures for vertical industries’ applications and facilitates experimentation. Keywords—5G; experimentation; NFV; MANO; I. INTRODUCTION The 5G system has the ambition of responding to the widest range of services and applications in the history of mobile and wireless communications [1]. Addressing the question of how a platform can host and integrate verticals and concurrently deal with reconciling their competing and opposing requirements, requires operational 5G infrastructures that can host various vertical industries’ applications. A key issue is the lifecycle management of the verticals’ services by means of Virtualized Network Functions (VNF) deployment and programmability techniques. The technical objective of 5GINFIRE is to build and operate an Open, and Extensible 5G NFV-based reference ecosystem of experimental facilities that lays down the foundations of a standards-based network substrate for instantiating fully softwarised architectures for vertical industries purposes. II. ARCHITECTURE AND IMPLEMENTATION A. Use cases and requirements 5GINFIRE (https://5ginfire.eu/) derives its requirements from a set of simple use cases that are positioned in the areas of the automotive vertical sector and smart cities. 
The use cases are used as a source of requirements for building the infrastructure and to showcase its capabilities. In the automotive case the requirements are extracted from scenarios such as sensing-based and video-camera-based assisted driving, which use a multitude of information sources (intra-vehicle, as well as inter-vehicle and vehicle-to-infrastructure) to enable assisted driving. In the smart cities case the requirements are derived from scenarios that facilitate the use and exploitation of available open data provided by existing sensor deployments in cities, as well as interfacing with the capabilities of the existing deployments of sensor functionality in the testbeds of the project partners in Bristol (U.K.) and Aveiro (Portugal) in Europe, and São Paulo and Uberlândia in Brazil. B. Architecture Fig. 1 illustrates the 5GINFIRE conceptual architecture derived from the ETSI NFV reference architecture [2] and upstream open source projects. It depicts the major architectural areas and shows, in a workflow manner, the various interactions. Although not exhaustive, the conceptual architecture highlights the functionality that is required to integrate existing open source components and physical infrastructures or that is being developed by the project. It provides an indication of how the required architectural and technological convergence with mainstream industrial and open source activities could be achieved. In Fig. 1 an application is composed of services that are configured to offer this application. These services are further decomposed into virtual networking and vertical functions (VxFs) that are deployed at the corresponding points of presence in the infrastructure. A use case from the automotive vertical is used to validate the platform. Following similar practices, more types of virtual experimentation environments may be instantiated.
The 5GINFIRE middleware communicates with the endpoints of the orchestration services that are responsible for orchestrating the man
poster
Many projected impacts on global fisheries. Coastal artisanal fisheries are especially vulnerable:
• Small boats mean fishers have limited ability to move with the fish
• Limited to the coastal region
• Limited resources to change gear
• Strongly impacted by coastal upwelling and coastal hypoxia
The impacts of climate change are exacerbated by a disparity in available resources.
10 Days of Biriyani and Coding: Building Bridges for Indian Ocean Rim Marine Scientists Across the "Big Data Geoscience" and Cloud-computing Divide
Elizabeth E Holmes1, Aditi Modi2, Kumar Nimit3, Smitha B R4, Swarnali Majumder3, Sourav Maity3,5 and TVS Udaya Bhaskar3
(1) NOAA Northwest Fisheries Science Center, Seattle, United States; (2) Indian Institute of Tropical Meteorology, Pune, India, and Indian Institute of Technology Bombay, India; (3) Indian National Center for Ocean Information Services, Hyderabad, India; (4) Centre for Marine Living Resources and Ecology, Ministry of Earth Sciences, Kochi, India; (5) Coastal Observatory and Outreach Centre, Vidyasagar University, Midnapore, India
First event of the UN Decade of the Ocean capacity development initiative "Devising Early-Career Capacity Development - Indian Ocean"
Report on the ITCOocean machine learning and species distribution course and hackweek
Indian Ocean coastal communities depend on fisheries for food and livelihoods. The disparity extends to Indian Ocean Rim scientists, the people who will lead the science and innovations. Early-career scientists are missing out on a crucial area of advancement in earth sciences in the era of "big data": training in the new cloud-optimized geospatial tools and platforms.
Ocean climate change is here and will continue. Image shows the increase in Ocean Heat Content relative to the 1981-2010 baseline. Cheng, L. J., and Coauthors, 2021: Upper ocean temperatures hit record high in 2020. Adv. Atmos. Sci., 38(4), 523-530.
60 ocean scientists from across India & Bangladesh
Experiential learning: IDEATION, PITCHING, TEAMWORK, PRESENTATION
Lessons learned
• Visas were a major barrier for African invitees. Hold local workshops in East Africa.
• Internet speed! Test everything on slow internet.
• Travel cost is a barrier for participants.
• We need more coders; the few coders we had were stretched thin.
Thank you to the ITCOocean Hack2week supporters
poster
Efficient susceptibility screening of submarine basalts for paleomagnetic research
Hong Yang (hyang777@stanford.edu), Sonia M. Tikoo | Stanford Univ.; Claire Carvallo | IMPMC, Sorbonne Univ.; Dario Bilardello, Peter Solheid | IRM, U of Minnesota; Kevin M. Gaastra, William W. Sager, Sriharsha Thoram | Univ. of Houston; IODP 391 Scientists (scan the QR code for full list)
We use amplitude-dependent magnetic susceptibility to pre-screen basalt samples and select the most effective demagnetization method. Our method takes as little as two minutes per sample and is non-destructive to the magnetic information in the sample.
Image credit: IODP, Blastcube from wiki
[Figure: alternating field (AF) and thermal demagnetization diagrams for four samples: A (B7151, AD = 53%), B (B5981, AD = 14%), C (B4591, AD = 5%), D (B2861, AD = 1%).]
Selecting a demagnetization method requires information on an individual sample's magnetic properties:
• Curie temperature
• Thermal stability
• Magnetic coercivity
• AF field response
The susceptibility amplitude dependence of Fe-Ti-O minerals can correlate well with these properties.
We measured the amplitude dependence of magnetic susceptibility and used it to test Expedition 391 basalt samples with a variety of:
• Oxidation states (degree of weathering)
• Emplacement types
• Titanium content
Amplitude dependence is well correlated with the Ti content and freshness of the Ti-magnetite in the basalt, regardless of the mineral content.
• A high slope corresponds to low saturation magnetization values, which indicates high Ti content.
• A lower slope means either lower Ti content or alteration due to weathering.
Amplitude-dependent: reversible k-T curve; low Curie temperature. Amplitude-independent: thermal instability (indicating maghemitization); formation of nearly stoichiometric magnetite during heating.
• The high-AD sample (A) is characterized by low coercivity, with most of its magnetization lost by 7-10 mT.
• Low-AD samples (C, D) are affected by alteration, and their thermal demagnetization results can be noisy.
• The intermediate-AD sample (B) performs well in both thermal and AF treatments.
Amplitude dependence of basalt samples in this study
Acknowledgement: This research used samples and data provided by the International Ocean Discovery Program (IODP) and the JOIDES Resolution Science Operator. Funding was provided by NSF, an IRM visiting student fellowship, and the USSSP.
Amplitude dependence is correlated with thermomagnetic behaviors.
Motivation
During IODP Expedition 391, 20%-40% of shipboard demagnetization experiments on submarine basalts failed. Can we save time and resources by pre-screening samples to select the best demagnetization method?
Proposed Method
Amplitude dependence can differentiate fresh high-Ti basalts from other samples. Amplitude dependence can help us choose the demagnetization method.
Amplitude dependence (AD) = (χ_300 A/m − χ_30 A/m) / χ_30 A/m
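A minimal sketch of the screening statistic and the kind of decision rule it enables. The AD bands below are illustrative, loosely following the poster's example samples A-D; they are not published cutoffs:

```python
def amplitude_dependence(chi_300, chi_30):
    """AD = (chi at 300 A/m - chi at 30 A/m) / chi at 30 A/m."""
    return (chi_300 - chi_30) / chi_30

def suggest_method(ad):
    # Illustrative bands only (not published thresholds):
    if ad >= 0.30:   # like sample A (AD = 53%): low coercivity
        return "AF"
    if ad >= 0.10:   # like sample B (AD = 14%)
        return "AF or thermal"
    return "AF (thermal may be noisy due to alteration)"  # like C, D

for chi30, chi300 in [(1.00, 1.53), (1.00, 1.14), (1.00, 1.05)]:
    ad = amplitude_dependence(chi300, chi30)
    print(f"AD = {ad:.0%} -> {suggest_method(ad)}")
```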
poster
DataHub Team at ELIXIR Belgium. datahub@elixir-belgium.org
Efficient metadata collection is vital for the discoverability and reusability of experimental data. Metadata collected following standardized, structured models and machine-readable formats enhances (meta)data interoperability and prepares the data for seamless integration with ML/AI applications. However, existing open-source tools catering to this process are limited, lack user-friendliness, and are often too specific to certain use cases. With the development of the DataHub platform, our aim is to empower researchers and institutes by offering a streamlined, intuitive, and FAIR-by-design approach to metadata management.
Flora D'Anna1, Kevin De Pelseneer1, Rafael Andrade Buono1, SciLifeLab team2, FAIRDOM community3, Frederik Coppens1. 1 VIB Data Core, VIB Technologies, Technologiepark 75, 9052 Ghent, Belgium; 2 SciLifeLab Data Centre, SciLifeLab, Tomtebodavägen 23, 171 65 Solna, Sweden; 3 https://fair-dom.org/people.
How to structure experimental metadata in DataHub
• Investigation: overall goals of related studies
• Study: information about the subjects or observation units under study (sources), the sample collection protocol, and the resulting samples
• Stream of Assays: a test producing measurements via one technology type (e.g. mass spectrometry, sequencing)
• Assay: one experimental step. Example stream: Sample (input material of assay) -> Protocol 1: extraction -> Extract (output material from protocol 1) -> Protocol 2: library construction -> Library (output material from protocol 2) -> Protocol 3: sequencing -> Data files (outputs from protocol 3)
DataHub platform
Rationale of the approach: FAIR experimental metadata by design, by starting with the end in mind. DataHub aims to integrate metadata required by public or local data repositories and archives (1).
This will allow researchers to collect experimental metadata according to standards from the start of their experiments, one step at a time, while referencing data files stored in appropriate data storage systems (2). Structured and standardized metadata could then be easily deposited in data repositories by researchers (3).
End-repositories
• End-repositories' checklists, ontologies, controlled vocabularies
• Sample-level metadata
• Template customisation
• Describe one experimental step at a time
• Link to external data storage
Local archive. Streamlined metadata and data submission to end-repositories; end-repositories' metadata requirements and standards; collection of experimental metadata (steps 1, 2, 3).
https://datahub.elixir-belgium.org
ELNs and object storage: future developments
https://datahub-test.elixir-belgium.org
DataHub user interface and features
• For active (meta)data collection of experimental conditions and samples by researchers
• Samples lineage
• Compliant with repositories' checklists and customisable
• Controlled vocabularies
• Ontologies from the Ontology Lookup Service (OLS)
• Samples metadata in tabular format (dynamic table)
• Import and export
• Creation and editing in batch via Excel spreadsheet
Samples query (future development): "List all the sequencing files originating from an Illumina MiSeq instrument and derived from the material named 'Yeast culture 1'."
Who can benefit from using DataHub
• Research laboratories or units: enhance data management by integrating DataHub with ELNs and other existing systems.
• Technical core facilities: utilize and share metadata standards with users.
• Research institutes and consortia: keep track of and discover datasets generated by various laboratories; support the implementation of FAIR and Open Science principles.
Method: FAIRDOM-SEEK and ISA-JSON
We develop features in FAIRDOM-SEEK1 that support structuring of the experimental metadata according to the established ISA metadata framework, in JSON format2.
1. DOI 10.5281/zenodo.5653415
2. https://doi.org/10.1093/gigascience/giab060
FAIR metadata by-design as a service for life s
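For concreteness, a reduced Python sketch of the Investigation > Study > Assay nesting that the ISA framework prescribes. Field names and identifiers here are illustrative and far fewer than the real ISA-JSON schema requires; see the ISA specification and isatools documentation for the authoritative schema.

```python
import json

# Reduced, illustrative sketch of ISA's Investigation > Study > Assay
# nesting; field names and IDs are hypothetical, not the ISA-JSON
# schema. 'Yeast culture 1' echoes the sample-query example above.
investigation = {
    "identifier": "INV-001",
    "title": "Example investigation",
    "studies": [{
        "identifier": "STU-001",
        "sources": [{"name": "Yeast culture 1"}],
        "protocols": [{"name": "sample collection"}],
        "samples": [{"name": "S1", "derives_from": "Yeast culture 1"}],
        "assays": [{
            "measurement_type": "transcription profiling",
            "technology_type": "nucleotide sequencing",
            "data_files": [{"name": "run1.fastq.gz"}],
        }],
    }],
}
print(json.dumps(investigation, indent=2)[:60], "...")
```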
poster
Graph de-duplication: challenges & requirements
De-duplicating the OpenAIRE Scholarly Communication Big Graph
Claudio Atzori, Paolo Manghi, Alessia Bardi, Institute of Information Science and Technologies, Italian National Research Council (ISTI-CNR), Pisa, {name.surname}@isti.cnr.it
Results
➢ GDup open source software: https://doi.org/10.5281/zenodo.292980
➢ GDup is today a production service (TRL9) of the OpenAIRE infrastructure
➢ GDup is used to de-duplicate literature, datasets, software and organisation entities to ensure sensible statistics are delivered by the OpenAIRE infrastructure.
Ongoing & future work
➢ Make it a fully user-friendly product, i.e. complete the data curators' GUI
➢ Address further functional scenarios, e.g. crowd-sourced deduplication, delegating to a set of experts the addition of assertions to clean deduplication results and build ground truth
➢ Apache Spark to implement the candidate identification and matching phases
➢ Apache GraphX to implement the graph disambiguation phase
Acknowledgments
This work is partially supported by the European Commission as part of the projects:
➢ OpenAIRE2020 (H2020-EINFRA-2014-1, Grant Agreement 643410)
➢ OpenAIRE-Advance (H2020-EINFRA-2017-1, Grant Agreement 777541)
Solution proposed: GDup
GDup is an integrated, scalable, general-purpose system for entity de-duplication over big graphs. GDup supports data curators with the out-of-the-box functionality they require to run an end-to-end entity deduplication workflow over a generic input graph. GDup is not about better recall/precision for given deduplication problems, but rather about providing tools that enable data curators to concentrate on modeling and customizing their deduplication solutions without bothering about the extra conceptual and technical challenges that such a task implies.
End-to-end workflow enabling data curators in:
1. Importing their graph into the system
2.
Configuring, for each entity type, the relative duplicate identification "configurations"
3. Managing ground truth generation and injection
4. Configuring graph disambiguation strategies
5. Supporting data curators in manually fixing the results of deduplication
6. Exporting a disambiguated graph
An academic graph aggregating all information required to deliver monitoring tools
The scholarly graph is obtained as a continuous aggregation of bibliographic metadata records originating from a variable set of information systems (repositories, publishers, funder databases) with heterogeneous and duplicated content. The main entities of the graph are organizations, results (literature, datasets, software, other products), funders, projects, and data sources. The graph counts ~26M result entities and 2.5M projects, with ~40M links between them.
Example of duplicate records:
Title | Authors | Date
OpenAIREplus - OpenAIRE APIs for third party services. D8.6 | Manghi et al. | 2012-06-12
OpenAIREplus - OpenAIRE APIs for third party services. D8.4 | Manghi, P. | 2012-12-06
OpenAIREplus - OpenAIRE APIs for third party services. D8.6 | Manghi Paolo | NA
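The candidate-identification ("blocking") and matching phases that GDup runs at scale on Apache Spark can be illustrated with a toy in-memory sketch; the normalization key and exact-equality match below are deliberately simplistic stand-ins for GDup's configurable functions, applied to the duplicate-looking deliverables shown above:

```python
import re
from itertools import combinations

# Toy sketch of blocking + pairwise matching -- not GDup's actual code.
def norm_title(t):
    return re.sub(r"[^a-z0-9 ]", "", t.lower()).strip()

def blocks(records, key_len=40):
    """Group records by a prefix of the normalized title (the block key)."""
    out = {}
    for r in records:
        out.setdefault(norm_title(r["title"])[:key_len], []).append(r)
    return out

records = [
    {"id": 1, "title": "OpenAIREplus - OpenAIRE APIs for third party services. D8.6"},
    {"id": 2, "title": "OpenAIREplus - OpenAIRE APIs for third party services. D8.4"},
    {"id": 3, "title": "OpenAIREplus - OpenAIRE APIs for third party services. D8.6"},
]
# Within each block, an exact match on the fully normalized title
# decides equivalence (real matchers use richer similarity functions).
pairs = [(a["id"], b["id"])
         for blk in blocks(records).values()
         for a, b in combinations(blk, 2)
         if norm_title(a["title"]) == norm_title(b["title"])]
print(pairs)  # [(1, 3)]: only the two D8.6 records are duplicates
```

Blocking keeps the pairwise comparisons tractable: only records sharing a key are ever compared, which is what makes the approach scale to ~26M results.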
poster
© 2020 Novartis. Cheat sheet for model uncertainty assessment, Version 1.2
Novartis contributors: Andrew M Stein, Jeffrey D Kearns, Jaeyeon Kim, Alison Margolskee

Purpose of this document
The Pedigree Table on the back of this document helps a modeler or reviewer assess the uncertainty of a model's predictions in a structured way [1, 2]. This tool should be applied during scoping, to ensure effort is placed on the most critical aspects of the model, and then applied again at the end of the activity, to ensure the model is fit for the purpose of informing the decision. In practice, the act of thinking carefully about model uncertainty and its consequences is more important than the exact choice of scores. Because model uncertainty is difficult to quantify rigorously, this assessment is qualitative, and reviewers may differ in their assessments. This difference in opinion can be useful information, and it is recommended that the results from different reviewers be shown simultaneously to highlight areas of disagreement [2]. Figure 1 shows one way to visualize the results of the uncertainty assessment. This figure is illustrative and does not refer to specific models in the literature.

Important Considerations
Fit for purpose: What is the purpose of this model? What are the key behaviors that it must capture? While complex models may initially be employed for hypothesis exploration, use the simplest model that addresses the specific question; reevaluate the model if the question changes.
Consequences of incorrect prediction: If the model prediction is wrong, what are the consequences? What level of uncertainty can be tolerated? The pedigree table scores should reflect this.
Range of predictions: How does the model uncertainty affect the decision or recommendation? Have you clearly defined a range of model predictions (i.e. 'best', 'median', 'worst' cases)? This representation is often more accessible to non-modelers than formal uncertainty metrics.

References
1.
Saltelli, A. "A short comment on statistical versus mathematical modelling." Nature Communications 10.1 (2019): 1.
2. Van der Sluijs, J.P., et al. "Combining quantitative and qualitative measures of uncertainty in model-based environmental assessment: the NUSAP system." Risk Analysis 25.2 (2005): 481.

[Figure 1: a grid of pedigree scores (0-4) for four models M1-M4 against the criteria Technical Implementation, Parameter Sensitivity, Variability, Ability to Extrapolate, Ability to Interpolate, Assumptions, Structural Model, Biological Understanding, and Experimental Data Uncertainty.]

Figure 1: Example pedigree table for four hypothetical models that were used to: explore hypotheses about a particular disease pathway (M1), predict the first-in-human dose (M2), select a phase 3 dose (M3), and support a change in label (M4).
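The recommendation above, showing scores from several reviewers side by side to surface disagreement, can be sketched programmatically. The scores below are hypothetical placeholders, not values from Figure 1:

```python
# Hypothetical pedigree scores (0 = weak ... 4 = strong) from two reviewers.
reviewer_a = {"Structural Model": 2, "Assumptions": 1, "Experimental Data Uncertainty": 3}
reviewer_b = {"Structural Model": 3, "Assumptions": 1, "Experimental Data Uncertainty": 2}

def summarize(*reviews):
    """Per-criterion mean score and spread; a non-zero spread flags disagreement."""
    out = {}
    for criterion in reviews[0]:
        scores = [r[criterion] for r in reviews]
        out[criterion] = {
            "mean": sum(scores) / len(scores),
            "spread": max(scores) - min(scores),
        }
    return out

for criterion, s in summarize(reviewer_a, reviewer_b).items():
    flag = "  <-- discuss" if s["spread"] >= 1 else ""
    print(f"{criterion}: mean={s['mean']:.1f}, spread={s['spread']}{flag}")
```

Flagging criteria with a large spread operationalizes the cheat sheet's point that disagreement between reviewers is itself useful information.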
poster
Which Semantic Web? Marshall & Shipman*
[Sections: Problems with formality · Conclusion: semantic web design and adoption questions · Three semantic web perspectives]

Problems with formality
Pragmatic issues: the world-wide web does NOT have a consistent structure; it is not ordered or classified.
Cognitive overhead: learning the representation (syntax and semantics) and making decisions on how to represent knowledge.
Tacit and evolving knowledge.
Premature structure: impedes & impairs classification; incremental structure development req'd.
Situated nature of knowledge.
Sub-divide ideas & entities. Find appropriate concepts. Create concepts (when needed): name it; connect it to other concepts by attaching attributes and relations. Knowing what to express (a learn-by-doing, trial-and-error process).
What is tacit knowledge? (Unspoken knowledge that may not be expressible in explicit language.)
Formal representations are rigid (a poor match for natural communications).
Understanding of what is knowledge changes over time.
Natural categorizations (people give different names and attributes to the same objects in different contexts).
Knowledge representation is problem dependent (attributes for objects and contexts are not enumerable (Kent, 2003?)).

Conclusion: semantic web design and adoption questions
Knowledge stability (How well are the domain and its practices understood? How much formalization and restructuring is expected, or can be tolerated?)
Competing conceptual approaches (Is the knowledge intrinsic or extrinsic? Can intrinsic structure be recognized with heuristic approaches; can declared representations be avoided or minimized?)
Negotiation among information resource stakeholders (What roles do negotiation, facilitation, or intervention play in representing knowledge in a socio-technical framework? Are there identified and accepted approaches that work in the domain?)

Three semantic web perspectives
Universal Library: existing tools (e.g., search) are sufficient; extend the web by defining and linking web information; enable more effective discovery, automation, integration, and reuse.
Computer agent environment: software agents can use defined and linked information to find, filter, and prepare information for people and procedures. Implementation is too hard in theory and in practice: (i) context-free knowledge representation relies on domain orientation; (ii) there is always a situation where more contextual information is needed.
Federation of knowledge/data bases: most promising; can leverage information in existing relational databases, but requires knowledge engineering (including social processes), cataloging, and librarianship.

Unruly properties of the (semantic) web: (i) the AAA slogan (anyone can say anything about any topic); (ii) open world / closed world (anyone can say something new about anything, hence there could always be more information); (iii) non-unique naming (many names for the same thing); (iv) it is an information wilderness (no guarantees that information is orderly or understandable) (Allemang & Hendler, 2011).

Semantic Web as a metadata system.
Semantic Web as markup: HTML example; how complex and complicated will semweb markup get? (See W3C work on RDFa and Schema.org for current approaches. (LiDRC, 2010), (W3C, 2012).)
Semantic Web as killer app: the primary challenge is that it is difficult to keep detailed preferences in line with moment-to-moment needs.
Community centered: every aspect of metadata (collecting, validating, using) is a local practice; no set of cataloging rules is ever fully prescriptive (cf. W. Kent, 2003).
Cost: every metadata standard has an associated cost; each implementation must decide how much, and who bears it.
Authority and trust: knowing that the metadata is a good representation of content for its intended use is a challenge; needed early, but not later?

* Marshall, C. & Shipman, F., Which semantic web?, Hypertext'03 (2003). Retrieved 2013-02-17 from: http://dx.doi.org/10.1145/900051.900063. W.L. Anderson, 2012, 2013, CC-BY
References. Allemang, D. & Hendler, J. (2011). Semantic Web for the Working Ontol
poster
A trio of transiting Neptunian planets around HD 28109 (aka TOI-282)
Florian Rodler (ESO Chile, frodler@eso.org) on behalf of Georgina Dransfield, Amaury H.M.J. Triaud, Davide Gandolfi, Juan Cabrera, Tristan Guillot, Djamel Mekarnia, David Nesvorny, Nicolas Crouzet, Lyu Abe, Karim Agabi, Marco Buttu, Maximilian N. Günther, François-Xavier Schmider, Philippe Stee, Olga Suarez, Karen A. Collins, Martín Dévora-Pajares, Steve B. Howell, Elisabeth C. Matthews, Matthew R. Standing, Keivan G. Stassun, Chris Stockdale, Samuel N. Quinn, Carl Ziegler, Ian J. M. Crossfield, Jack J. Lissauer, Andrew W. Mann, Rachel Matson, Joshua Schlieder and George Zhou

Introduction. HD 28109 (TOI-282, where TOI stands for TESS Object of Interest) is a bright (V=9.4) main-sequence star of spectral type F8/G0V, located in the southern continuous viewing zone of the Transiting Exoplanet Survey Satellite (TESS). TESS observations taken with 2-min cadence initially led to three transiting-planet candidate signals: TOI-282.01 with an orbital period of about 56 days, .02 with a period of 31 days, and finally .03 with a period of 84 days. A search for additional candidates in the TESS data as well as in radial velocity data revealed that the candidate feature .02 with a period of 31 days was spurious; instead, a further candidate signal, .04, with an orbital period close to 23 days was identified (Fig. 1). We furthermore excluded the presence of background stars or stellar companions to HD 28109 by using high-resolution imaging data from NACO (VLT), Zorro (Gemini South) and SOAR. We verified the three candidate signals with photometry and radial velocity (RV) follow-up.

Transit-time variations (TTVs). Given the proximity of the orbital periods of the two outer planets to a first-order 3:2 mean-motion resonance (84.3 vs 56 days), these planets experience some mutual gravitational influence leading to TTVs. Fig.
2 shows that these planets present very clear and significant anti-correlated TTVs with peak-to-peak amplitudes of ∼50 min and ∼100 min, respectively. This allowed us to put constraints on their masses (see the table below).

Radial velocity (RV) measurements. We are currently conducting an RV survey with ESO's ultra-stable spectrographs HARPS and ESPRESSO to further characterize the planetary system (Fig. 3). Stellar activity (the star exhibits stellar jitter on the order of 2.5-3 m/s), plus the low masses of the planets and the resulting small RV semi-amplitudes ranging from only 0.8 to 2.4 m/s, makes this RV follow-up a very challenging task.

Results. The following table lists the fundamental parameters derived from our analysis, with their error values given in parentheses. The mass estimates for the planets are derived from RV measurements as well as from TTVs (the latter for planets c and d; marked with an asterisk*). Fig. 4 shows that the inner planet b is likely a rocky sub-Neptune with an Earth-like density, while the outer two planets are fluffy, gaseous Neptunians. The results are reported in Dransfield++ (MNRAS, under review). The ongoing RV survey will be published in Rodler++.

Figure 1: Global fit of the TESS photometry data for each planet (phase-folded). The colored data points show the TESS data taken with 2-minute cadence, while the white circles are binned data.
Figure 2: Transit-time variations (in minutes) for planets c and d, spanning over three years of TESS data plus ground-based photometric monitoring.

Candidate → Planet       | Orbital period (days) | Semi-major axis (AU) | Radius (Earth) | Mass (Earth)
TOI-282.04 → HD 28109 b  | 22.891 (0.0004)       | 0.14 (0.003)         | 2.2 (0.1)      | 11.4 (2.6)
TOI-282.01 → HD 28109 c  | 56.008 (0.002)        | 0.31 (0.011)         | 4.2 (0.11)     | 10.2 (3.6); 7.9* (3.8)*
TOI-282.03 → HD 28109 d  | 84.26 (0.007)         | 0.41 (0.016)         | 3.3 (0.11)     | 8.1 (4.1); 5.7* (2.5)*

Figure 3: Radial velocity fit based on 38 HARPS and 44 ESPRESSO measurements.
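The quoted sub-few-m/s semi-amplitudes can be cross-checked with the standard circular-orbit formula K = (2*pi*G/P)^(1/3) * Mp sin(i) / (M* + Mp)^(2/3), using the tabulated periods and RV-derived masses. This is a hedged back-of-the-envelope sketch: the stellar mass of 1.2 M_Sun is an assumed illustrative value for an F8/G0V star, not a number from the poster.

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN, M_EARTH = 1.989e30, 5.972e24   # kg
DAY = 86400.0            # seconds

def rv_semi_amplitude(m_planet_earth, period_days, m_star_sun, inc_deg=90.0):
    """Circular-orbit RV semi-amplitude (m/s): K = (2*pi*G/P)**(1/3) * Mp*sin(i) / (M*+Mp)**(2/3)."""
    mp = m_planet_earth * M_EARTH
    ms = m_star_sun * M_SUN
    p = period_days * DAY
    return (2 * math.pi * G / p) ** (1 / 3) * mp * math.sin(math.radians(inc_deg)) / (ms + mp) ** (2 / 3)

# Periods and RV masses from the results table; 1.2 M_Sun is an ASSUMED stellar mass.
for name, mass, period in [("b", 11.4, 22.891), ("c", 10.2, 56.008), ("d", 8.1, 84.26)]:
    k = rv_semi_amplitude(mass, period, 1.2)
    print(f"HD 28109 {name}: K = {100 * k:.0f} cm/s")
```

The resulting amplitudes come out at roughly 1-2 m/s, consistent with the 0.8-2.4 m/s range quoted for the RV follow-up.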
Figure 4: Mass-radius diagram showing the positions of HD 28109 b, c and d. While the innermos
poster
López-Hernández María A.¹, Montilla Luís M.¹, Verde Alejandra¹, Agudo-Adriani Esteban¹, Rivera Andreína¹, Miyazawa Emy¹, Mariño Gloria¹, Ascanio Alfredo¹, Cróquer Aldo¹*
1* Laboratorio de Ecología Experimental, Universidad Simón Bolívar, Caracas, Venezuela
panamalopezh@Gmail.com; *acroquer@ubs.ve

Venezuelan coral reefs: a health assessment using the Reef Health Index with complementary variables

THE PROBLEM
The implementation and development of new methods to assess the health status of coral reefs is a core tenet of the conservation science of these systems. For years, coral reef communities have been described in Venezuela; however, no effort to classify the health status of these systems had been made. Part of the problem is that reefs have always been classified in terms of live coral cover, while other variables clearly linked with healthy reefs have been neglected.

REEF HEALTH INDEX
We used a modification of the Reef Health Index (RHI) proposed by The Nature Conservancy (TNC, 2016), with the addition of two variables, complexity index and algal turf height, because these two variables are currently seen as important proxies of reef health.

HOW DID WE DO IT?
The Reef Health "scores" are calculated by converting the average data value of each indicator into a condition ranking from 'critical' to 'very good' based on reference values (above). The seven scores are averaged to obtain the overall RHI score. In this work, we aimed to assess the health status of 36 reef sites encompassing seven different geographical locations in Venezuela using a multivariate approach.
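The scoring procedure just described, converting each indicator's average value into a 1-5 condition rank and then averaging the seven ranks, can be sketched as follows. The coral-cover cut-offs are an assumed illustrative example matched to the poster's threshold style, and the other six ranks are hypothetical:

```python
# Condition ranks: 1 = Critical ... 5 = Very Good.
# ASSUMED illustrative thresholds for scleractinian (hard) coral cover in %;
# higher cover means better condition.
CORAL_COVER_CUTS = [5, 10, 20, 40]   # <5 critical, 5-10 poor, 10-20 fair, 20-40 good, >=40 very good

def rank_increasing(value, cuts):
    """Map an indicator value to a 1-5 rank when larger values are better."""
    rank = 1
    for cut in cuts:
        if value >= cut:
            rank += 1
    return rank

def rhi_score(indicator_ranks):
    """Overall RHI: simple mean of the per-indicator condition ranks."""
    return sum(indicator_ranks) / len(indicator_ranks)

ranks = [rank_increasing(22, CORAL_COVER_CUTS),  # e.g. 22% coral cover -> rank 4 ("Good")
         3, 2, 4, 3, 3, 4]                       # hypothetical ranks for the other six indicators
print(f"Overall RHI = {rhi_score(ranks):.2f}")
```

Indicators where smaller values are better (e.g. fleshy algae cover, turf height) would use the mirrored mapping, and the overall score is then binned back into the Critical-to-Very-Good scale.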
[The Reef Health Index table ranks each variable, turf height (mm), scleractinian cover (%), complexity index, forager fish biomass (kg/100 m²), fleshy algae cover (%), and commercial fish biomass (kg/100 m²), from Critical to Poor, Fair, Good, and Very Good against reference values.]

OUR RESULTS
Our results indicated that health conditions in the reef sites of Venezuela are variable: we observed reefs that are in excellent condition, but we also have degraded reefs. Additionally, this index offers a valuable tool for managers, educators and policy makers interested in implementing future conservation actions in Venezuela. Only a few sites within Mochima National Park (Gabarra and San Agustín) and Cubagua (Punta Conejo) were rated as "Poor" and/or "Critical". "Fair" sites were mostly located in Chichiriviche (La Pared, Punta Media Legua and Petaquire) and Los Frailes (La Pecha).

IMPLICATIONS OF OUR FINDINGS
Los Roques National Park (Boca de Cote, Rabusqui and Dos Mosquises), Morrocoy National Park (Cayo Norte and Sombrero) and Ocumare (Ciénaga Interno and Ciénaga Oeste) were the locations with the highest RHI values.
[Map legend: 1 Morrocoy, 2 Ocumare, 3 Chichiriviche, 4 Los Roques, 5 Mochima, 6 Cubagua, 7 Los Frailes. The 36 surveyed sites are Boca Seca, Bajo Caimán, Cayo Medio, Cayo Mero, Cayo Norte, Cayo Sombrero, Cayo Sur, Ciénaga Interno, Ciénaga Este, Ciénaga Oeste, Guabinitas, Tiburón, Media Legua, La Pared, Petaquire, Punta Media Legua, Punta Mono, La Venada, Cayo Agua, Boca de Cote, Dos Mosquises, Madrisqui, Rabusqui, Salinas, Garrapata, Carabela Blanca, Gabarra, Punta Cruz, San Agustín, Charagato, La Muerta, Punta Conejo, Cominoto, La Pecha, and Puerto Real. Overall RHI value bins: Critical 1-1.8, Poor 1.9-2.6, Fair 2.7-3.4, Good 3.5-4.2, Very Good 4.3-5; per-variable reference thresholds for forager fish biomass, scleractinian cover, turf height, complexity index, fleshy algae cover, and commercial fish biomass are given in the figure.]
poster
An Energy Data Service Platform for Whom? Understanding the Needs of the Energy Research Community
Franziska M. Hoffart(1,2), Nina Kerker(1), Oliver Werth(3)
(1) Sociological Research Institute Göttingen; (2) Energy, Transportation, Environment Department, German Institute for Economic Research; (3) OFFIS e. V. – Institute for Information Technology, Oldenburg
Poster designed by Hanna Pohlmann. Task Area 1 of the nfdi4energy project.

1. Introduction: Relevance and Objectives
• To combat climate change, energy transition and energy system transformation are crucial, which gives energy-related research a significant role
• For the required research, access to data is necessary to develop new theories and models
• (Energy) research lags behind in promoting open data; various projects and initiatives are building data platform solutions
• New developments might be difficult to oversee
• NFDI4Energy aims to develop and provide an open and FAIR research ecosystem addressing the whole research cycle
• It includes successful elements of existing platforms and solutions and addresses unsolved and new challenges faced by researchers
• Key: identifying and understanding the needs of the energy research community
• Who makes up the energy research community? What are the needs of its members in terms of platform development?
[Diagram: the NFDI4Energy platform, covering requirements of the research community (M1.1), feedback mechanisms for the platform (M1.2), development of the platform (M1.3) and development of content for best practices (M1.4), draws content, services and requirements from TA6 (Use Cases for Community Services), TA2 (Integrating Society and Policy in Energy Research) + TA3 (Transparency and Involvement of the Energy-Related Industry), and TA4 (FAIR Data for Energy System Research) + TA5 (Simulation in Interdisciplinary Energy Research).]

2. Who is the Energy Research Community? Evolution of the Energy Research Community
[Timeline: from a techno-economic investigation to a socio-technical perspective.]
• Platform requirements cannot be identified without acknowledging the increasing diversity and interdisciplinarity of the energy research community
• In the beginning, energy research had a techno-economic focus; today it is seen as a socio-technical transition embedded within society and current socio-ecological challenges
• Inclusion of non-technical aspects, e.g. political feasibility, social acceptance and a just energy transition
• This evolution calls for the strengthening of non-technical research, e.g. from the social sciences, to introduce new perspectives, approaches and tools

3. Platform Requirement Analysis
Expert interviews with scientists from different disciplines of the energy research community. Questions include, among others:
• What do you understand by the term energy research?
• How do you rate access to these resources in the energy research community? (Where do you see room for improvement in procurement?)
• In principle, would you be prepared to store your research outputs in a public data archive and make them accessible?
• Which specific functions or services would you consider particularly valuable on a platform for networking the interdisciplinary energy research community?

Preliminary results regarding challenges, needs and potential services for platform development:
1.
Challenge: over-investigation in certain areas, where multiple researchers are active in the same regions, leading to duplicated efforts and frustration among respondents.
2. Need: easily accessible information on other research projects, including geographical area, time frame, research interests, and contact details.
3. Potential service: a visual map illustrating the "activity regions" of research projects.

4. Conclusions
• The approach proposed here, combining research community analysis with platform requirement analysis, can be worthwhile for other research domains
• Acknowledging the development of the energy research community over the last decades towards more social science integration and interdisciplinary research, the community could b
poster
Simulations for Planning Next-Generation Exoplanet Radial Velocity Surveys Patrick D Newman1, Peter Plavchan1, Jennifer A. Burt2, Johanna Teske3, Eric E. Mamajek2, Stephanie Leifer4, B. Scott Gaudi5, Garry Blackwood2, Rhonda Morgan2 1George Mason University, 2Jet Propulsion Laboratory, 3Carnegie Institution for Science, 4The Aerospace Corporation, 5The Ohio State University Abstract Future direct imaging missions such as HabEx and LUVOIR aim to catalog and characterize Earth-mass analogs around nearby stars. The exoplanet yield of these missions will be dependent on the frequency of Earth-like planets, and potentially on a priori knowledge of which stars specifically host suitable planetary systems. Ground- or space-based radial velocity surveys can potentially perform the pre-selection of targets and assist in the optimization of observation times, as opposed to an uninformed direct imaging survey. In this paper, we present our framework for simulating future radial velocity surveys of nearby stars in support of direct imaging missions. We generate lists of exposure times, observation time-series, and radial velocity time-series given a direct imaging target list. We generate simulated surveys for a proposed set of telescopes and precise radial velocity spectrographs spanning a set of plausible global-network architectures that may be considered for next-generation extremely precise radial velocity surveys. We also develop figures of merit for observation frequency and planet detection sensitivity, and compare these across architectures. From these, we draw conclusions, given our stated assumptions and caveats, to optimize the yield of future radial velocity surveys in support of direct imaging missions.
We find that all of our considered surveys obtain sufficient numbers of precise observations to meet the minimum theoretical white-noise detection sensitivity for Earth-mass habitable-zone planets, with margin to explore systematic effects due to stellar activity and correlated noise.

Survey Targets, Goals, and Architectures

[Figure. Left: general diagrams of the different architectures (telescope/spectrograph combinations). Right: nightly weather, as the fraction of clear nights over the day of year, at the sites considered for these surveys: Mauna Kea, Kitt Peak, Calar Alto, Las Campanas, Sutherland, Siding Spring.]

We took a list of 101 nearby (≤15 pc) FGK stars that are considered both good direct imaging and RV targets, and split them into two mostly non-overlapping groups (51 north and 58 south). We performed simulated 10-year surveys on them across seven different architectures (telescope/instrument combinations) on six different sites (3 north, 3 south).
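One ingredient of such survey simulations is photon-limited exposure time, which scales as t ∝ SNR² / (collecting area × stellar flux). A hedged sketch of how a required SNR trades against aperture (the reference exposure, SNR, area and magnitude are invented placeholders, not values from this work):

```python
def exposure_time(snr, area_m2, vmag, t_ref=60.0, snr_ref=100.0, area_ref=10.0, vmag_ref=8.0):
    """Photon-noise scaling: t = t_ref * (snr/snr_ref)**2 * (area_ref/area) * 10**(0.4*(vmag - vmag_ref)).

    t_ref is the ASSUMED exposure (seconds) that reaches snr_ref on a star of
    magnitude vmag_ref with area_ref m^2 of collecting area; all four reference
    values are illustrative placeholders, not survey parameters.
    """
    return t_ref * (snr / snr_ref) ** 2 * (area_ref / area_m2) * 10 ** (0.4 * (vmag - vmag_ref))

# Per-aperture collecting areas (m^2) matching those listed in the architecture tables:
for label, area in [("2.4 m", 4.2), ("3 m", 6.3), ("4 m", 9.5), ("6 m", 27.0), ("10 m", 75.0)]:
    t = exposure_time(snr=300, area_m2=area, vmag=9.0)
    print(f"{label} telescope: t = {t / 60:.1f} min for SNR=300 on a V=9 star")
```

The inverse-area scaling is why the same per-pixel SNR requirement implies very different per-star cadences across the architectures compared on this poster.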
Architecture                 | I             | IIa                       | IIb          | V
Telescopes                   | 6x2.4 m       | 2x6 m and 4x4 m           | 6x4 m        | 6x3 m
Collecting area by aperture  | 2.4 m = 4.2 m² | 4 m = 9.5 m²; 6 m = 27 m² | 4 m = 9.5 m² | 3 m = 6.3 m²
Time allocation              | 100%          | 100%                      | 100%         | 100%
Wavelength coverage          | 380-930 nm    | 380-930 nm                | 380-930 nm   | 500-1700 nm
Spectral resolution          | 180 000       | 180 000                   | 180 000      | 180 000
Total system efficiency      | 6%            | 6%                        | 6%           | 7%
Instrument noise floor       | 10 cm/s       | 5 cm/s                    | 5 cm/s       | 10 cm/s
Required (peak) SNR/pix      | 300           | 300                       | 300          | 300
Required RV precision        | 10 cm/s       | 10 cm/s                   | 10 cm/s      | 10 cm/s
Observation cadence per star | 1 / night     | 3 / night                 | 3 / night    | 2 / telescope / night

Architecture                 | VI                            | VIIIa                            | VIIIb
Telescopes                   | 6x arrays of 1 m              | 2x10 m and 4x3.5 m               | 2x10 m and 6x2.4 m
Collecting area by aperture  | 0.61 m² each; array is 9.5 m² | 10 m = 75 m²; 3.5 m = 9.5 m²     | 10 m = 75 m²; 2.4 m = 4.2 m²
Time allocation              | 100%                          | 25% of 10 m; 100% of 3.5 m       | 25% of 10 m; 100% of 2.4 m
Wavelength coverage          | 500-800 nm                    | 380-930 nm                       | 380-930 nm
Spectral resolution          | 150 000                       | 180 000                          | 180 000
Total system efficiency      | 6%                            | 6%                               | 6%
Instrument noise floor       | 10 cm/s                       | 5 cm/s                           | 5 cm/s
Required (peak) SNR/pix      | 300                           | 1000 for the 10 m; 300 for 3.5 m | 1000 for the 10 m; 300 for 2.4 m
Required RV precision        | 10 cm/s                       | 15 cm/s on 3.5 m; 5 cm/s on 10 m | 15 cm/s on 2.4 m; 5 cm/s on 10 m
Obser
poster
Poster: Expectations, Perceptions, and Misconceptions of Personal Firewalls
Fahimeh Raja, Kirstie Hawkey, Pooya Jaferian, Konstantin Beznosov, Kellogg S. Booth
University of British Columbia, Vancouver, Canada
{fahimehr,hawkey,pooya,beznosov}@ece.ubc.ca, ksbooth@cs.ubc.ca

1. INTRODUCTION
Personal firewalls are recognized as the first line of defense for personal computers. However, the protection they afford depends strongly on their correct configuration [4]. Therefore, their usability is key to their effectiveness. In particular, as users become increasingly mobile, it is important for them to be able to judge whether their computer is secure enough for the usage context at hand [2]. Our prior research [5] revealed that the lack of an accurate mental model of the firewall's system model is one of the root causes of users' errors when configuring the firewall. The results of a laboratory study showed that an improved user interface design that incorporated feedback about the state of the firewall in different network contexts could help users develop more effective mental models of the firewall and improve their understanding of the firewall's configuration, resulting in fewer dangerous errors. However, we also learned that a large proportion of users did not see the need for multiple profiles based on context. In this research, our goal is to better understand users' knowledge, expectations, perceptions, and misconceptions of personal firewalls. We conducted interviews with 30 participants and analyzed the data using qualitative description [7]. The results from 10 interviews are presented in [6]. In this paper we present our aggregated results and examine their implications for the design of personal firewalls.

2. METHODOLOGY
We conducted semi-structured interviews to answer the following research questions: 1) What do users know and what misconceptions do they have about personal firewalls and the protection provided by them?
2) What expectations do users have of an application such as a personal firewall? 3) How do users prefer to interact with an application such as a personal firewall (i.e., the level of automation, feedback)? 4) Do users need to have different levels of protection for an application such as a personal firewall? Why? And 5) What factors do users think affect their required level of protection from an application such as a personal firewall?
We recruited 30 participants from both the university and the general community. They had a wide range of educational levels, backgrounds, and occupations. Almost all (28) used a laptop in a variety of network contexts. We classified their security knowledge and experience into three categories, high (H), medium (M), and low (L), in order to examine their expectations, perceptions, and misconceptions of a personal firewall in relation to their level of security knowledge and expertise. Table 1 shows their demographics.

Group          | L     | M      | H     | Total
Security level | Low   | Medium | High  | N/A
Group size (N) | 13    | 11     | 6     | 30
Age, mean      | 28.4  | 26.5   | 26.2  | 27.3
Age, range     | 20-51 | 22-32  | 26-27 | 20-51
Female         | 9     | 3      | 1     | 13
Male           | 4     | 8      | 5     | 17
Student: yes   | 5     | 6      | 4     | 15
Student: no    | 8     | 5      | 2     | 15
Table 1: Participants' demographics.

3. RESULTS
No one in group L knew about the functionality of a firewall. Their comments show their misconceptions about the protection provided by different security software. Most of them did not know whether or not they had a firewall on their computer. All in group H and 6 in group M had previous experience configuring a firewall. Others in group M were not sure about the functionality of a firewall. Some comments from these participants show that they faced problems in configuring their firewall (e.g., in allowing a printer connection), which resulted in them turning the firewall off. We examined whether participants' required level of protection from their personal firewall varies depending on the context.
All in group H and 7 in group M wanted varying levels of protection based on their activity (e.g., online banking vs.
poster
Anja Gerber, Klassik Stiftung Weimar; Prof. Dr. Günther Görz, Dr. Sarah Wagner, FAU Competence Center for Research Data & Information

The Object Biography as an Approach to Data Integration (Die Objektbiografie als Ansatz für Datenintegration)

What is an object biography?
• It transfers the concept of biography to material cultural objects and forms an important basis for archaeological and object-based research,
• compiles all information (contexts, trajectories, meanings, interpretations) about an object,
• is chronological, but not linear, and riddled with gaps,
• contexts include, among others: production or the ideal conception of an object and phases of use for artificialia; birth, habitat, geological formation, and preparation for naturalia; phase(s) of oblivion, find, excavation, entry into a collection context, exhibition, research, modification, restoration, reception, destruction, restitution, or repatriation,
• several viewpoints and different contexts can exist simultaneously,
• actors produce objects, use them, take them from nature, excavate them, acquire or loot them, prepare and restore them, exhibit them, relate them to one another, research them, and ultimately reconstruct their biography,
• sources (visual and written records, databases) form the basis of information reconstruction and information provenance.

Opportunities of the object biography
• Enables a detailed reappraisal of the circumstances of find or acquisition,
• treats all phases, and thus the information assigned in them, as equal,
• creates a meta-perspective on objects of knowledge, since the present collection context, from which documentation is mostly produced, is only one of many,
• attributions of meaning by actors and groups outside the collection/museum or scholarly context are included on equal terms (source communities, popular culture, etc.).
Digital object biographies
• Networking of heterogeneous and distributed information,
• the object biography as a "maximal record",
• multi-perspective access, cross-disciplinary search,
• representation of information provenance, ambiguity, and contradiction,
• contextualization of the assigned object information: time, place, actors, sources about events,
• application of CIDOC CRM as the ISO-certified description standard for cultural heritage.

Abstract
The development of concepts for the semantic harmonization of object information is a central area of work in Task Area 6, in which the Klassik Stiftung Weimar and the FAU Competence Center for Research Data and Information cooperate closely. The concept of the object biography is transferred into the digital realm using Semantic Web technologies in order to integrate and interlink heterogeneous, multidisciplinary data of the NFDI4Objects community, so that questions spanning institutions and holdings can be answered.

Figures: Anja Gerber, Sarah Wagner, CC-BY 4.0. Citation: Gerber, A., Goerz, G., Wagner, S. (2024, September 15-17). Die Objektbiografie als Ansatz für Datenintegration. NFDI4Objects Community Meeting, Mainz. Zenodo. https://doi.org/10.5281/zenodo.13757398.

Schematic representation of an object biography: at the center is the object (artificialia or naturalia), which, depending on its type and origin, is or was embedded in various contexts. These contexts are each described in more detail by information on time and place, linked to actors, and finally substantiated with sources.

Talk to us!

Literature (selection; the full list can be found on Zenodo): Kopytoff, I. (1986). The Cultural Biography of Things. In: The Social Life of Things, ed. Arjun Appadurai, Cambridge, pp. 64-91. Braun, P. (2015). Objektbiographie. Ein Arbeitsbuch. Thiery, Florian et al. (2023). 'Object-Related Research Data Workflows Within NFDI4Objects and Beyond'.
In 1st Conference on Research Data Infrastructure (CoRDI) - Connecting Communities, edited by York Sure-Vetter and Carole Goble,
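The event-centred view of object biographies sketched above, each event contextualized by time, place, actors and sources, maps naturally onto CIDOC CRM triples. A minimal plain-Python sketch (in practice one would use an RDF toolkit; all URIs and the excavation event here are invented examples, while the class and property names follow the published CIDOC CRM model):

```python
# Minimal event-centred triple sketch (plain tuples standing in for RDF).
CRM = "crm:"  # CIDOC CRM prefix; class/property names follow the published model
obj, excavation = "ex:object/1", "ex:event/excavation-1"  # hypothetical URIs

triples = {
    (obj, "rdf:type", CRM + "E22_Human-Made_Object"),
    (excavation, "rdf:type", CRM + "E7_Activity"),
    # One biography event, contextualised by object, actor, place and time-span:
    (excavation, CRM + "P12_occurred_in_the_presence_of", obj),
    (excavation, CRM + "P14_carried_out_by", "ex:actor/archaeologist-1"),
    (excavation, CRM + "P7_took_place_at", "ex:place/site-1"),
    (excavation, CRM + "P4_has_time-span", "ex:timespan/1952"),
}

def events_of(o):
    """All events in whose presence the object occurred: the biography's backbone."""
    return sorted(s for (s, p, t) in triples
                  if p == CRM + "P12_occurred_in_the_presence_of" and t == o)

print(events_of(obj))  # -> ['ex:event/excavation-1']
```

Because every phase (production, use, find, restoration, restitution, ...) becomes such an event node, cross-collection questions reduce to graph queries over a shared vocabulary.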
poster
Examination of reliability of nuclear matrix elements of neutrinoless double-β decay by QRPA
J. Terasaki, Institute of Experimental and Applied Physics, Czech Technical Univ. in Prague
Y. Iwata, Faculty of Chemistry, Materials and Bioengineering, Kansai University

The goal is to determine the effective mass of the neutrino. The double-β decay of nuclei is used for this purpose. We aim at the theoretical contribution. [Diagram: possible change of two neutrons to two protons in a nucleus, emitting two electrons with neutrino exchange (neutrinoless double-β decay); a nucleus of (neutron number, proton number) = (N, Z) decays to a nucleus (N-2, Z+2).] This decay occurs if the neutrino (ν) is a Majorana particle ($\nu = \bar{\nu}$), and the effective neutrino mass can then be determined; see the equations below. Determination of the effective neutrino mass is one of the most important subjects in modern physics.

Why nuclei? Because $E(\text{final state}) < E(\text{initial state})$ is necessary. Other conditions for the nuclei used in the experiments: single β decay is suppressed; the energy spectrum of the two electrons in 2νββ decay can be distinguished well from that of 0νββ; a large Q value [$\approx E(\text{initial state}) - E(\text{final state})$]; and the parent nuclei can be produced massively with high purity.

List of decays searched for in the experiments: 76Ge→76Se, 130Te→130Xe, 136Xe→136Ba, 150Nd→150Sm, 48Ca→48Ti, 82Se→82Kr, 96Zr→96Mo, 100Mo→100Ru, 110Pd→110Cd, 116Cd→116Sn, 124Sn→124Te, and more.

Principle to determine the effective neutrino mass:
$1/T^{0\nu}(\mathrm{g.s.} \to \mathrm{g.s.}) = |M^{0\nu}|^2\, G^{0\nu}\, g_A^4\, \left(\langle m_\nu \rangle / m_e\right)^2$
with $T^{0\nu}$ the half-life (experimental measurement), $M^{0\nu}$ the nuclear matrix element (← nuclear wave functions; theoretical calculation, for which approximation is indispensable), $G^{0\nu}$ the phase-space factor (← wave functions of the emitted electrons), $g_A$ the axial-vector current coupling, and $m_e$ the electron mass. The effective ν mass is
$\langle m_\nu \rangle = \left|\sum_{i=1,2,3} U_{ei}^2\, m_i\right|$
where $U$ is the Pontecorvo–Maki–Nakagawa–Sakata matrix and $m_i$ ($i = 1, 2, 3$) are the mass eigenvalues.
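The half-life formula above inverts algebraically to $\langle m_\nu \rangle = m_e / (g_A^2\, |M^{0\nu}|\, \sqrt{G^{0\nu}\, T^{0\nu}})$. A numerical sketch with deliberately hypothetical inputs; the matrix element, phase-space factor, and half-life below are placeholders, not the poster's results:

```python
import math

M_E_EV = 0.511e6  # electron mass in eV

def effective_mass_ev(half_life_yr, nme, g0v_per_yr, g_a=1.27):
    """Invert 1/T = |M|^2 * G * g_A^4 * (<m>/m_e)^2 for the effective neutrino mass <m> (eV)."""
    return M_E_EV / (g_a ** 2 * nme * math.sqrt(g0v_per_yr * half_life_yr))

def half_life_yr(m_eff_ev, nme, g0v_per_yr, g_a=1.27):
    """Forward direction, used here as a consistency check on the inversion."""
    return 1.0 / (nme ** 2 * g0v_per_yr * g_a ** 4 * (m_eff_ev / M_E_EV) ** 2)

# Hypothetical illustrative inputs: |M| = 4, G = 1e-14 /yr, T = 1e26 yr.
m_eff = effective_mass_ev(1e26, 4.0, 1e-14)
print(f"<m_nu> = {m_eff:.3f} eV")
```

This also makes the poster's central point quantitative: since $\langle m_\nu \rangle \propto 1/|M^{0\nu}|$, a factor of 2-3 spread in the calculated matrix elements propagates directly into a factor of 2-3 uncertainty in the extracted mass.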
The nuclear matrix element is more difficult to calculate accurately than the phase-space factor. The transition operator used in our calculation is V(r12, E_b) ≅ h₊(r12, E_b) × {linear combination of double Gamow-Teller and double Fermi operators}, where h₊(r12, E_b) is the neutrino potential with the energy denominator included. The nuclear matrix element is
M^(0ν) = Σ_b Σ_{pp′} Σ_{nn′} ⟨pp′| V(r12, E_b) |nn′⟩ ⟨0f⁺| c†_{p′} c_{n′} |b⟩ ⟨b| c†_p c_n |0i⁺⟩,
where |0i⁺⟩ is the initial state (ground state of the nucleus (N, Z)), |b⟩ an intermediate state (nucleus (N−1, Z+1)) with energy E_b, and |0f⁺⟩ the final state (ground state of the nucleus (N−2, Z+2)).
Status and purpose of this study: the nuclear matrix elements calculated by various approximation methods and groups are distributed over a range of a factor of 2-3. The nuclear matrix element cannot be obtained by experiment; thus, examination and improvement of the calculation are essential.
Approximation used to obtain the nuclear wave functions in our study: the quasiparticle random-phase approximation (QRPA), in which a nuclear excitation is described as a superposition of two-quasiparticle excitations. How good is it? The transition-strength function can be well reproduced; the sum rule is satisfied; it is widely used in nuclear and condensed-matter physics.
Under the approximation to the equation of the nuclear matrix element by V(r12, E_b) ≅ V(r12, Ē_b), where Ē_b is the average energy of the intermediate states |b⟩, a virtual decay path can be used for the calculation: two-neutron removal followed by two-proton addition. The nuclear matrix element of the virtual path must equal that of the double-β path. This is a constraint on the effective interactions used in the approximation, from which the strength of the isoscalar proton-neutron pairing interaction is determined. J. Terasaki, Phys. Rev. C 91, 034318 (2015); ibid. 93, 024317 (2016). Why important? This interaction is necessary for calculating the nuclear matrix element of two-neutrino double-β decay, and experimental half-life data for this decay exist for the nuclei used in the neutrinoless double-β decay experiments.
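The quality of the closure step V(r12, E_b) ≅ V(r12, Ē_b) can be illustrated with a toy sum over intermediate states: the exact sum carries one energy denominator per state, while closure pulls out a single average energy. All numbers below are invented for illustration; this is not the actual QRPA calculation:

```python
# Toy model: intermediate states b with energies E_b (MeV) and amplitudes a_b.
# Exact sum uses 1/(E_b + DELTA) per state; closure replaces each E_b by the
# amplitude-weighted average E_bar.
DELTA = 10.0  # MeV, stands in for the offset in the neutrino potential

states = [(2.0, 0.5), (5.0, 0.3), (9.0, 0.2)]  # (E_b, a_b), invented values

exact = sum(a / (e + DELTA) for e, a in states)
e_bar = sum(a * e for e, a in states) / sum(a for e, a in states)
closure = sum(a for e, a in states) / (e_bar + DELTA)

print(f"exact = {exact:.4f}, closure = {closure:.4f}, E_bar = {e_bar:.2f} MeV")
```

For this toy spectrum the closure value lies within a few percent of the exact sum, and (by convexity of 1/(E + Δ)) always underestimates it when all amplitudes are positive; real amplitudes carry signs, which is part of why closure works for the 0νββ operator but not for the 2ν one.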
The two-neutrino decay operator V^(2ν)(E_b) cannot be replaced by V^(2ν)(Ē_b); thus, the double-β-path calcul
poster
I SYMPOSIUM ON OBSTETRICS WITH EMPHASIS ON MATERNAL AND NEONATAL HEALTH. SATISFACTION WITH THE USE OF CONTRACEPTIVE METHODS AMONG UNIVERSITY STUDENTS OF THE OBSTETRICS AND THEATRE PROGRAMMES OF THE UNIVERSIDAD CENTRAL DEL ECUADOR. Niurka Cuji Choto 1 and Carmen Durán 2. 1 Student of the Obstetrics programme, Faculty of Medical Sciences, Universidad Central del Ecuador, Quito, Ecuador. 2 Tutor-lecturer of the Obstetrics programme, Faculty of Medical Sciences, Universidad Central del Ecuador, Quito, Ecuador. INTRODUCTION: Sexual and reproductive education promotes access to safe and effective contraceptive methods, generating a high level of satisfaction among students and enabling a healthy, responsible sexual life. It is crucial for preventing unwanted pregnancies and sexually transmitted infections (STIs) (1). It is therefore important to know how students of the Universidad Central perceive the use of contraceptive methods. OBJECTIVE: To analyse the degree of satisfaction with the use of contraceptive methods among students of the Obstetrics and Theatre programmes of the Universidad Central del Ecuador in the period April-August 2024. METHODOLOGY: Descriptive cross-sectional study with a convenience sample of students of the Theatre and Obstetrics programmes. The instrument was an anonymous survey administered through Google Forms; frequencies and percentages were obtained using Excel. Ethical approval and/or informed consent: informed consent was obtained prior to the students' participation. RESULTS: The sample consisted of 23 participants (18 theatre students and 5 obstetrics students). Most had learned about contraceptive methods through the media: radio, the internet, and television. A high percentage obtained these methods by purchasing them, and only rarely through public health centres.
Students report satisfaction with the use of contraceptive methods, above all the male condom, and would recommend it. DISCUSSION: Our results show that most students have high knowledge of condom use, followed by contraceptive pills and implants. Similar studies, such as Ky, McGeechan and Watson CJ 2021, describe the use of long-acting contraceptive methods for their continuity and satisfaction relative to other methods, which differs from our study (2). The results also show that students acquire most of their knowledge about contraceptive methods through the media, such as television and the internet, in agreement with Purdy CH. 2011, who describes that such information is obtained most often through the internet and social networks, so family-planning organisations should take advantage of this to incorporate educational, outreach, and marketing programmes (3). CONCLUSIONS: University students have access to information about contraceptive methods and use them frequently. It is important to keep improving sexual and reproductive education and to ensure free access to these methods, so that all young people can make informed decisions about their reproductive health. On the other hand, some students mention side effects such as acne and changes in body mass when using contraceptive methods. Despite these effects, most respondents are satisfied with the methods they use and would recommend them, provided they are well informed by health personnel. References 1. Schivone GB, Glish LL. Contraceptive counseling for continuation and satisfaction. Current Opinion in Obstetrics & Gynecology. 2017; 6: p. 443-448. 2. Black Ky, McGeechan K, Watson CJ, Lucke J, Taft A, McNamee K, et al.
Satisfacción de las mujere
poster
Minor Intron Genes as Potential Targets for Affecting Survival of Triple Negative Breast Cancer Cells
Zoya Farooqui, Dr Ihab Younis, Dr Mazen Sidani
Department of Biological Sciences, Carnegie Mellon University Qatar
• Triple Negative Breast Cancer (TNBC) is an aggressive cancer associated with lower survival, poorer prognosis, and fewer treatment options.
• Minor intron genes (MIGs) have information-processing functions.
• Every hallmark of cancer is associated with a MIG.
• Previous work on this research explored the splicing behaviour of MIGs in breast cancer cells by manipulating levels of U6atac snRNA. After U6atac inhibition, MIGs fall into two classes by whether unspliced mRNA retaining the minor intron builds up: Class I, accumulation of unspliced mRNA; Class II, no accumulation of unspliced mRNA.
Overall Aim: Build on previous research to broaden understanding of MIGs and their behaviour in TNBC, and manipulate their splicing for potential therapeutic advantage.
Specific Aim 1: Understand why unspliced mRNA only accumulates for Class I MIGs. Hypothesis: unspliced mRNA for Class I genes holds translational potential.
Specific Aim 2: Find a low U6atac AMO dose that is critical for TNBC cells.
• El Marabti, E., Malek, J., & Younis, I. (2021). Minor Intron Splicing from Basic Science to Disease. International Journal of Molecular Sciences, 22(11), 6062. https://doi.org/10.3390/ijms2211606
• Hall, S. L., & Padgett, R. A. (1996). Requirement of U12 snRNA for in vivo splicing of a minor class of eukaryotic nuclear pre-mRNA introns. Science (New York, N.Y.), 271(5256), 1716-1718. https://doi.org/10.1126/science.271.5256.171
• Younis, I., Dittmar, K., Wang, W., Foley, S. W., Berg, M. G., Hu, K. Y., ... & Dreyfuss, G. (2013). Minor introns are embedded molecular switches regulated by highly unstable U6atac snRNA. eLife, 2, e00780.
Previous Research: categorization of MIGs into two classes (exon/intron splicing diagram).
Figure 2: 0.5 μM U6atac AMO is a critical dose for MDA-MB-231 cells. (A) RealTime-Glo viability assay on transfected MDA-MB-231 cells using MT reagent over a 5-day period; * indicates significance at p < 0.05. (B) Compilation of the percentage decrease in expression of spliced mRNA following 0.5 μM U6atac AMO treatment, using RT-qPCR data of the tested MIGs; MIGs with a 50% or greater decrease were considered significant. (C) Most significant Gene Ontology results for the sensitive MIGs.
Figure 1: Treatment with U6atac AMO prevents translation of both Class I and Class II MIGs. RT-qPCR splicing analysis (A, C) and Western blot images (B, D) of Class I (EIF3I) and Class II (MAPK14) MIGs. MDA-MB-231 cells were transfected with control and U6atac AMO. RNA was extracted, converted to cDNA and used for RT-qPCR on a QuantStudio 6 Flex thermocycler. Similarly, protein lysates were extracted from transfected cells, separated by SDS-PAGE, and stained with specific antibodies.
Conclusions
• Protein expression of MIGs directly correlates with the splicing of their minor intron.
• Unspliced mRNA for both Class I and Class II genes is not translated.
• The 0.5 μM U6atac AMO dose affects replication and growth of MDA-MB-231 (TNBC) cells.
• Splicing of multiple MIGs is significantly affected by 0.5 μM U6atac AMO treatment.
• These sensitive MIGs form potential targets whose expression can be prevented to affect survival of TNBC cells.
Future Directions
• Exploration of miRNA binding sites within minor introns of Class I and Class II genes to explain the splicing trends observed in MIGs.
• Testing 0.5 μM U6atac AMO in non-breast-cancer and other cancer cell lines to observe whether its effect is specific to TNBC cells.
• Manipulating expression of sensitive MIGs within TNBC cells.
Transfection of MDA-MB-231 cells (TNBC) with U6atac AMO, followed by three parallel workflows: protein extraction → Western blot; RNA extraction and cDNA synthesis → quantitative PCR (qPCR); plating for viability assay → RealTime-Glo viability assay.
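The percentage decrease in spliced mRNA reported from RT-qPCR data is conventionally derived from Ct values via the 2^(-ΔΔCt) method; a minimal sketch under that assumption (the Ct numbers below are invented for illustration, and the poster does not state which reference gene it normalised to):

```python
def percent_decrease(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by 2^-ddCt; returns % decrease vs. control."""
    ddct = (ct_target_treated - ct_ref_treated) - (ct_target_control - ct_ref_control)
    fold_change = 2.0 ** (-ddct)
    return (1.0 - fold_change) * 100.0

# A ddCt of exactly 1 cycle corresponds to a 50% drop in spliced mRNA,
# the threshold the poster treats as significant.
print(percent_decrease(26.0, 20.0, 25.0, 20.0))  # ddCt = 1 -> 50.0
```

In this convention a MIG counts as "sensitive" when its ΔΔCt after AMO treatment is at least one cycle.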
poster
ExploreSalon: Unveil Hidden Stories from the Past Concept and Outcome of a Digital Humanities and Cultural Heritage "Hackathon" Charvat, Vera Maria; Ďurčo, Matej; Königshofer, Elisabeth; Petrovic-Majer, Sylvia; Woldrich, Anna "When people interact, they leave traces: uncountable letters, diaries, chronicles, plaques, and other written records have preserved memories about people – long before the rise of smartphones and social media."1 For the duration of one week (22nd – 26th May 2023) the so-called ExploreSalon will offer a collaborative space to explore digitized memories, in the form of curated biographic, spatial and temporal datasets with focus on Vienna 1900. The event will be organized by the Austrian Centre for Digital Humanities and Cultural Heritage (ACDH-CH) and supported by the CLARIAH-AT2 national consortium as part of its knowledge sharing activities. The ExploreSalon has two goals: Bring people of diverse backgrounds together and provide them with the opportunity to discover innovative ways of data-based storytelling – exploring stories hidden in the data, presenting ideas and sharing findings. The concept of the ExploreSalon is based on cultural hackathons, such as "OpenGLAM.at"3, "Coding Da Vinci"4 and "Coding Duerer"5. It departs from the traditional "hacking marathon" that is aimed primarily at people with a technical background. Instead, a cultural hackathon seeks to bring together all kinds of creative minds with different skills and levels of expertise from the (Digital) Humanities and Cultural Heritage sectors as well as GLAM (Galleries, Libraries, Archives and Museums) institutions. Inviting creativity as a form of generative change and providing space for the emergence of new ideas, shared purposes and a vivid community, this new kind of event is focusing on establishing a community of practice and enables all stakeholders to communicate at eye level. 
Dialogue and values like transparency, open knowledge and participatory leadership are propagated rather than a set of specific methods. Following a purpose-driven approach, ExploreSalon aims to empower the community of practitioners to govern their own work process by stepping away from rigid frames and time
1 https://clariah.at/exploresalon2023 (last accessed: 2023-02-08)
2 https://clariah.at/ (last accessed: 2023-02-08)
3 https://www.openglam.at/ (last accessed: 2023-02-08)
4 https://codingdavinci.de/ (last accessed: 2023-02-08)
5 https://codingdurer.de/ (last accessed: 2023-02-08)
poster
User Testing of a Continuum Manipulator for Assistive Technology
Ryan Coulson1,2, Max Kirkpatrick1,3, Megan Robinson1,4, Meghan Donahue5, Devin R. Berg1
1Engineering & Technology Department, University of Wisconsin-Stout; 2Department of Mechanical Engineering, Lafayette College; 3Department of Mechanical Engineering, University of South Carolina; 4Department of Electrical Engineering and Computer Science, Case Western Reserve University; 5Stout Vocational Rehabilitation Institute, University of Wisconsin-Stout

Table 2 (Round Two, peg-in-hole task):
| User No. | Avg Completion Time (s), Session One | Session Two | Session Three | Percent Improvement |
| 1 | 115.18 | 121.79 | 58.84 | 48.91 |
| 2 | 121.79 | 112.57 | 66.02 | 42.96 |
| 3 | 170.65 | 149.66 | 120.40 | 29.45 |

Table 3 (Round Two, drawer task):
| User No. | Avg Completion Time (s), Session One | Session Two | Percent Improvement |
| 1 | 235.14 | 168.18 | 28.47 |
| 2 | 234.57 | 214.89 | 8.39 |
| 3 | 435.57 | 288.30 | 33.81 |

Table 1 (Round One):
| Control Scheme | Avg Completion Time (s) | Standard Deviation (s) | Average Intuition Ranking |
| Single-Joystick Compensative | 63.00 | 54.61 | 1.86 |
| Dual-Joystick | 72.76 | 42.60 | 1.93 |
| Single-Joystick Segmented | 96.20 | 55.58 | 2.21 |

The use of robots in assistive technology is well studied: numerous robotic arms for rehabilitative applications have been designed and tested to date, and several are commercially available [1, 2, 3]. These robots are intended to improve independence and quality of life for people who are unable to perform activities of daily living (ADLs) without additional aid. Unfortunately, they are often prohibitively expensive, costing tens of thousands of dollars [4]. Additionally, they pose a risk of harmful collision to their users and must incorporate sophisticated sensors and control methods to ensure the users' safety. This work evaluates an alternative platform for assistive robotics which alleviates these issues: continuum manipulators. Continuum manipulators are robots that lack rigid segments and discrete joints [5].
Instead, they function by bending continuously along their length, like the trunk of an elephant or the tentacle of an octopus. The use of continuum manipulators in assistive technology has been proposed with respect to the ADL of bathing by Ansari et al., 2017 [6], although no user testing of this proposal has been completed. This work is supported by the National Science Foundation under Grant No. CNS-1560219. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. Table 1. Results from Round One of user testing for n = 14 users. References [1] Chung, C. S., Wang, H., & Cooper, R. A. (2013). DOI: 10.1179/2045772313Y.0000000132 [2] Driessen, B. J. F., Evers, H. G., & v Woerden, J. A. (2001). DOI: 10.1243/0954411011535876 [3] Maheu, V., Archambault, P. S., Frappier, J., & Routhier, F. (2011). DOI: 10.1109/ICORR.2011.5975397 [4] Allin, S., Eckel, E., Markham, H., & Brewer, B. R. (2010). DOI: 10.1016/j.pmr.2009.09.001 [5] Walker, I. D. (2013). DOI: 10.5402/2013/726506 [6] Ansari, Y., Manti, M., Falotico, E., Mollard, Y., Cianchetti, M., & Laschi, C. (2017). DOI: 10.1177/1729881416687132 Table 2. Results from Round Two of user testing: Peg-in-hole task. Table 3. Results from Round Two of user testing: Drawer task. Table 1 shows the results from Round One of user testing. On average, users were able to complete the given task most quickly using Compensative control, followed by Dual control and then Segmented control. The standard deviations for these data are relatively large, as users demonstrated a wide range of skill levels when using the robot. Examining the average intuition rankings, it can be seen that users rated Compensative control as most intuitive, followed by Dual control and then Segmented control. This result further supports Compensative control as the superior of the three control schemes. 
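The percent-improvement column in the Round Two drawer-task table is consistent with the simple definition (first session - last session) / first session × 100; a quick check against the reported values:

```python
drawer_times = {  # user -> (session one, session two) average completion times, s
    1: (235.14, 168.18),
    2: (234.57, 214.89),
    3: (435.57, 288.30),
}

for user, (first, last) in drawer_times.items():
    improvement = (first - last) / first * 100.0
    print(f"user {user}: {improvement:.2f}% improvement")
```

The computed values match the reported column (28.47, 8.39, 33.81) to within rounding, which confirms how the improvement figures were derived from the session averages.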
Tables 2 and 3 show results from Round Two
poster
Design and Development of Cooling Solutions for Rotating Detonation Engines
Ramanagar S Shreyas, Sandri Umberto, Mazzei Lorenzo, Picchi Alessio and Andreini Antonio
DIEF, Department of Industrial Engineering
Introduction
One of the main challenges in the development of Rotating Detonation Engines (RDE) is dissipating the tremendous amount of heat generated by the high-frequency rotating detonation waves. The current research aims to develop cooling designs for RDE.
Quantification of Heat Flux
Wall heat flux contour of the TU Berlin RDC operating in laboratory conditions, obtained from an LES simulation [3]. Bulk average heat flux of the TU Berlin RDC obtained from the LES simulation, along with validation of the ROM.
Heat Flux Modelling
The wall heat flux is calculated with the equation of [1]. Assuming a single rotating detonation wave at all operating pressures, a simple scaling model is built which uses the Shock and Detonation Toolbox [2] to obtain the detonation temperature. Scaled heat flux for different operating pressures: for an operating pressure of 5 bar, the peak average heat flux is 6 MW/m2, very high compared with gas-turbine (GT) engines.
References:
[1] Braun et al., Numerical assessment of the convective heat transfer in rotating detonation combustors using a reduced-order model. Applied Sciences, 2018, 8(6).
[2] Browne, S. et al., Numerical Solution Methods for Shock and Detonation Jump Conditions; GALCIT Report FM2006.006; GALCIT: San Diego, CA, USA, 2004.
[3] Nassini Pier Carlo, High-fidelity Numerical Investigations of a Hydrogen Rotating Detonation Combustor, PhD Thesis, UniFI, 2022.
[4] Tian et al., Numerical investigation on flow and film cooling characteristics of coolant injection in rotating detonation combustor. Aerospace Science and Technology, 2022, 122, 107379.
“This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 956803”
Preliminary assessment of cooling solutions
For aeronautical applications, forced air cooling is the most feasible option. A feasibility study of the available cooling methods is performed: the cooling effectiveness (φ) required to maintain the liner temperature below the limit of 1200 K is determined, with the bulk averaged HTC and gas temperature calculated from the data of Braun et al. [1]. An HTC augmentation of 15 with respect to a baseline smooth flat plate is not sufficient to obtain the required φ = 0.63: the current state-of-the-art forced-convection cooling schemes are not sufficient unless the walls are protected.
Effusion cooling assessment
Due to detonation, the use of TBC is not feasible. A protective layer which can rebuild itself quickly, even after being disrupted by the detonation wave, is the only possible solution; researchers such as Tian et al. [4] have already shown it is possible. As part of the research activity, we are building a test rig to evaluate the experimental performance of effusion cooling in non-reactive supersonic mainstream conditions. A reactive LES simulation of the RDE to assess the feasibility of effusion cooling will be performed using the AVBP solver. A possible cooling system design of an RDE: the first part uses turbulators, and effusion cooling is used for the rest.
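The cooling effectiveness quoted above is, in the usual definition, φ = (T_gas − T_wall) / (T_gas − T_coolant). A sketch of how a requirement like φ = 0.63 arises for a 1200 K liner limit; the gas and coolant temperatures below are assumed for illustration and are not values from this work:

```python
def required_effectiveness(t_gas, t_wall_limit, t_coolant):
    """phi = (T_gas - T_wall) / (T_gas - T_coolant), all temperatures in K."""
    return (t_gas - t_wall_limit) / (t_gas - t_coolant)

# Assumed bulk gas temperature ~2050 K and coolant (compressor air) ~700 K.
phi = required_effectiveness(t_gas=2050.0, t_wall_limit=1200.0, t_coolant=700.0)
print(f"required cooling effectiveness: {phi:.2f}")
```

With those assumed temperatures the requirement comes out at about 0.63; note that φ rises steeply as the allowed wall temperature drops, which is why unprotected forced convection falls short.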
poster
ICON-BASED COUPLED OCEAN-ATMOSPHERE DEMONSTRATOR Visualisation of cloud water and ocean currents in a 5km-5km ocean- atmosphere simulation. Left: Global view. Right: Zoom into the Caribbean Sea. A throughput of ca. 20 simulated days/day was achieved on 500 Broadwell nodes (platform: Mistral, DKRZ). ATMOSPHERE-ONLY DEMONSTRATORS Global ICON-based weather forecast, run at a global resolution of 2.5km. Scalability of global high-resolution atmosphere-only simulations using the models IFS and ICON (no I/O). Despite significant achievements for extreme-scale simulations, even more effort is required to push the models towards production readiness (target throughput: ca. 365 forecast days/day). References: T.C. Schulthess, P. Bauer et al. IEEE Comput. Sci. Eng. 21(1), 2019, https://doi.org/10.1109/MCSE.2018.2888788 P. Neumann, P. Düben et al. Philos. Trans. Royal Soc. A 377(2142), 20180148, 2019, https://royalsocietypublishing.org/doi/pdf/10.1098/rsta.2018.0148 THE CENTRAL DELIVERABLE: GLOBAL HIGH-RESOLUTION DEMONSTRATORS ESiWACE will deliver global high-resolution demonstrators of atmosphere- only, ocean-only and coupled ocean-atmosphere simulations; a key target is to reach spatial resolutions of ca. 1 km that allow simulating convective clouds and small-scale ocean eddies. This will provide much more fidelity in the representation of high-impact regional events. The demonstrators will allow for computability estimates for these configurations at exascale. They are based on widely used European models (IFS, ICON, NEMO, EC-EARTH). In this context, ESiWACE has been strongly supporting the intercomparison project DYAMOND, integrating both views on performance and science case. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 675191. The poster reflects only the authors’ view. The EC is not responsible for any use that may be made of the information it contains.
Visualisations courtesy of Niklas Röber, DKRZ. © Deutsches Klimarechenzentrum GmbH, Bundesstr. 45a, 20146 Hamburg, Germany OVERVIEW Funding period: 1 Sep 2015 – 31 Aug 2019 Coordination: DKRZ (Joachim Biercamp), ECMWF (Peter Bauer) Consortium: 16 partners from 7 countries Call reference: European research infrastructures, EINFRA-5-2015 The Centre of Excellence in Simulation of Weather and Climate in Europe (ESiWACE) forms a joint scientific community around Earth System Modelling (ESM) from the two communities of Weather and Climate research by leveraging two established European networks: • The European Network for Earth System Modelling • The European Centre for Medium-Range Weather Forecasts The main objectives of ESiWACE are to • Substantially improve efficiency and productivity of numerical Weather and Climate simulation on high-performance computing (HPC) platforms, strengthening the user-driven evolution of the community software • Build a critical mass and create expertise to increase the community impact on hardware development towards the extreme scale as well as future international exascale initiatives THE STORY CONTINUES: ESiWACE2 Funding period: 1 Jan 2019 – 31 Dec 2022 Coordination: DKRZ (Joachim Biercamp), ECMWF (Peter Bauer) Consortium: 20 partners from 9 countries Call reference: European research infrastructures, INFRAEDI-02-2018 The project will push the global high-resolution demonstrators towards production-ready simulations on European pre-exascale and future exascale systems. ESiWACE2 will further focus on exploring and exploiting suitable innovative technologies such as DSLs, on the development of processing tools for more efficient I/O and visualisation and on providing enhanced services, training and benchmarks for the community.
ESiWACE Contacts Presenter: Philipp Neumann Web: www.esiwace.eu E-Mail: esiwace@dkrz.de PROJECT IMPACTS AND ACHIEVEMENTS ESiWACE addresses three core themes on the applications’ way towards exascale computing: • Scalability of models and tools
poster
Thamil Arasu Saminathan , Muhammad Fadhli , Tania Gayle , Halizah Mat Riffin , Hasimah Ismail , Norazizah Ibrahim Wong , Wan Shakira , Hooi Lai Seong , Sunita Bavanandan , Ghazali Ahmad , Ong Loke Meng , Esther Tan , Irene Wong , Fatimah Othman , Hamizatul Akmal , Haji Tahir Aris 1 Institute for Public Health, National Institutes of Health Malaysia, Ministry of Health Malaysia 2 Sector for Biostatistics & Data Repository, National Institutes of Health Malaysia, Ministry of Health Malaysia 3 Sultanah Aminah Hospital, Johor, Ministry of Health Malaysia 4 Kuala Lumpur Hospital, Ministry of Health Malaysia 5 Pulau Pinang Hospital, Ministry of Health Malaysia 6 Clinical Research Centre, Penang, Ministry of Health Malaysia 7 Selayang Hospital, Ministry of Health Malaysia 8 Tengku Ampuan Rahimah Hospital, Klang, Ministry of Health Malaysia 9 Institute for Medical Research, Ministry of Health Malaysia 1. Institute for Health Metrics and Evaluation (IHME). Findings from the Global Burden of Disease Study 2017. Seattle, WA: IHME, 2018. 2. Institute for Public Health (IPH). National Health and Morbidity Survey 2011 (NHMS 2011). Vol. II: Non-Communicable Diseases. Kuala Lumpur: Ministry of Health Malaysia; 2011. ISBN 978-967-3887-68-2. 3. Institute for Public Health (IPH), National Institutes of Health, Ministry of Health Malaysia. 2020. National Health and Morbidity Survey (NHMS) 2019: Vol. I: NCDs – Non-Communicable Diseases: Risk Factors and other Health Problems 4. Department of Statistics Malaysia https://www.dosm.gov.my/v1/index.php?r=column/cthemeByCat&- cat=117&bul_id=MDMxdHZjWTk1SjFzTzNkRXYzcVZjdz09&men Accessed 01 Aug 2020. 5. Department of Statistics Malaysia https://www.dosm.gov.my/v1/index.php?r=column/cthemeByCat&- cat=155&bul_id=c1pqTnFjb29HSnNYNUpiTmNWZHArdz09&men Accessed 01 Aug 2020 6. Ministry of Health Malaysia. 2018. National Action Plan for Healthy Kidneys (ACT-KID) 2018-2025. 
References
PREVALENCE AND ASSOCIATED FACTORS OF CHRONIC KIDNEY DISEASE IN MALAYSIA (P-83, NMRR-17-806-35765)
Introduction
According to the Global Burden of Disease Study 2017, the global prevalence of chronic kidney disease (CKD) was 9.1%. Malaysia recorded a similar prevalence of 9.07% in the 2011 National Health and Morbidity Survey. We aim to determine the current prevalence and associated factors of CKD among adults in Malaysia.
Material and Methods
A nationwide, population-based, cross-sectional study was conducted in 2018 among adults aged ≥18 years. A total of 1,398 adults were randomly selected using a stratified cluster method. Blood for serum creatinine and random blood sugar was taken from respondents at their homes by qualified staff from a nearby Ministry of Health (MOH) haemodialysis unit. The urine albumin-to-creatinine ratio (uACR) was measured using a single urine sample. The estimated glomerular filtration rate (eGFR) was calculated from calibrated serum creatinine using the CKD-EPI equation. CKD was defined as eGFR < 60 ml/min/1.73m² or the presence of persistent albuminuria if eGFR ≥ 60 ml/min/1.73m².
Results
A total of 1,398 individuals were approached for this study, and 75% of them (n = 1,047) consented to participate. Serum creatinine was measured in 977 respondents. The final analysis set comprised 890 respondents. Table 1 shows the prevalence of CKD by stages (n = 890). Our study shows that the prevalence of CKD in Malaysia was 15.48% (95% CI: 12.30, 19.31). Multivariate analysis (Table 2) shows that hypertension (aOR 3.72), diabetes mellitus (aOR 3.32), increasing body mass index (aOR 1.06), and increasing age (aOR 1.06) were significantly associated with CKD.
This study has demonstrated a rising prevalence of CKD in Malaysia over the 7 years since the previous study, from 9.07% in 2011 to 15.48% currently.
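The CKD-EPI creatinine equation used for eGFR here has a standard published (2009) form; a minimal sketch for illustration only (the survey's creatinine calibration and albuminuria handling are not reproduced, the race coefficient is omitted, and the inputs are invented):

```python
def egfr_ckd_epi_2009(scr_mg_dl, age, female):
    """CKD-EPI 2009 creatinine equation, in mL/min/1.73 m^2."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209
            * 0.993 ** age * (1.018 if female else 1.0))
    return egfr

# Invented example: a 50-year-old man with serum creatinine 1.0 mg/dL.
egfr = egfr_ckd_epi_2009(scr_mg_dl=1.0, age=50, female=False)
is_ckd_by_egfr = egfr < 60.0  # the eGFR cut-off used in the study's CKD definition
print(f"eGFR = {egfr:.1f} mL/min/1.73 m^2, eGFR < 60: {is_ckd_by_egfr}")
```

Note that eGFR < 60 alone captures only stage 3 and above; the study's definition also counts persistent albuminuria at higher eGFR, which the sketch does not implement.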
The probable reasons for this rising trend are the increasing prevalence of the non-communicable diseases that are associated with CKD and changes in population demographics. National Heal
poster
Study of radon background in the SuperNEMO demonstrator
Antoine Lahaie, Mathis Granjon, Frédéric Perrot, Yegor Vereshchaka, on behalf of the SuperNEMO collaboration
SuperNEMO Demonstrator
• 0νββ source foil: 6.11 kg of 82Se with Qββ ~ 3 MeV
• Tracker: 2034 Geiger cells
• Calorimeter: 712 Optical Modules (OMs: plastic scintillators + PMTs)
• Copper coil for magnetic field B
• Goal of SuperNEMO: to search for the 0νββ process with 82Se (Qββ ~ 3 MeV)
• Sensitivity: T1/2(0ν) > 5.7 × 10^24 years (17.5 kg·y exposure with 82Se)
• Able to track electrons and measure their energy independently
• Data taking in autumn 2024
Radon background and anti-radon strategy
222Rn decays to 214Bi (Qβ− = 3.272 MeV), a background for the 0νββ search; radon (Rn) is the main background in the tracker. Anti-radon strategies:
• Material screening for radiopurity → very low Rn emanation
• J-Trap facility → ultra-high Rn purification of the gas (He + Ar)
• Anti-Rn tent → buffer volume against the air of the LSM lab
• Anti-Rn factory → to inject Rn-free air into the tent
Measuring radon activity with 214Bi-214Po decay events
Measuring Rn activity ⟺ measuring BiPo activity. Golden 214Bi-214Po channel: 1 electron + 1 delayed alpha (T1/2(214Po) = 164 μs). Selection efficiency of 3.1%, from 10^6 events simulated on the surface of the ground wires.
Objectives
• To optimize the BiPo selection criteria with high statistics
• To study the spatial radon uniformity in the detector
• To study the radon residence time in the detector
Top view of the e− vertex distribution from the selected BiPo events:
→ Spatial Rn uniformity in the bulk of the tracker
→ Small side effects and a left-right asymmetry under study
→ Radon and the tracker gas mixture have the same flow dynamics
[Figure: a real BiPo event; distribution of the time between the e− and the α.]
→ Increasing the gas flow by a factor 2 decreases the BiPo rate by a factor R ≈ 2, as expected
Dedicated radon injection runs
Radon source: emanation rate of 95 ± 5 Rn atoms per second, injected into the tracker at 5 L·min−1. Two Rn measurements performed at 5 and 10 L·min−1.
[Figure: gas system. Emanation (~0.15 mBq·m−3) and diffusion (~30 mBq·m−3) into the gas mixture (He + Ar + alcohol); J-trap purification; anti-radon factory turning laboratory air (~30 Bq·m−3) into Rn-free air for the anti-radon tent.]
BiPo event selection (event display: vertex, OM, electron track, alpha; signal vs. background-noise selection in the He + Ar gas)
• Electron identification: 1 OM triggered; ≥ 1 associated Geiger cell near the OM; temporally correlated with the OM
• Alpha identification: ≥ 2 close Geiger cells triggered; delayed alpha up to 1.6 ms after the electron; short track (≤ 12 Geiger cells)
• Additional BiPo selection: electron energy > 300 keV; e−/α vertex distance ≤ 6 cm in the xy plane and ≤ 10 cm on the z axis; delay between the α and the e− track in [300-1600] µs (see SuperNEMO poster #451)
Results
• 58 h of Rn background measurement in March 2024 at different gas flows: R = 2.3 ± 0.2, consistent with the expected value
• Measurement of the Rn residence time with gas flow at 5 L·min−1: the fit gives τeff = 1.43 ± 0.14 days, fully consistent with the expected τeff = 1/(φ/V + 1/τ) = 1.54 days; the fitted time constant T = 168 ± 8 μs is also fully consistent
• First Rn background activity in the current operation mode at 10 L·min−1: [10-15] mBq·m−3
New radon activity to be updated soon with the anti-radon factory, gas purification and nominal gas flow at 20 L·min−1, stay tuned! #41
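The quoted effective residence time combines flushing by the gas flow with 222Rn decay, 1/τ_eff = φ/V + 1/τ. A sketch reproducing the expected 1.54 days at 5 L·min−1; the tracker volume below (~15.4 m³) is an assumption chosen to match and is not stated on the poster:

```python
import math

RN222_HALF_LIFE_D = 3.8235                   # 222Rn half-life, days
TAU_RN_D = RN222_HALF_LIFE_D / math.log(2)   # mean lifetime, ~5.52 days

def tau_eff_days(flow_l_per_min, volume_l):
    """1/tau_eff = flow/V (flushing) + 1/tau (222Rn decay)."""
    tau_flow_d = (volume_l / flow_l_per_min) / (60.0 * 24.0)
    return 1.0 / (1.0 / tau_flow_d + 1.0 / TAU_RN_D)

print(f"tau_eff at 5 L/min: {tau_eff_days(5.0, 15_400):.2f} days")
```

Doubling the flow roughly halves the flushing time constant, which is consistent with the observed factor R ≈ 2 drop in BiPo rate between the 5 and 10 L·min−1 runs.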
poster
Figure 1: Schematic diagram of the PGNAA experimental facility
Neutron Radiography using isotopic neutron sources: Preliminary results
S. Kolovi1,2, T.J. Mertzimekis2, I.E. Stamatelatos1, K. Bouchra3, Z. Dimitrakopoulou3
1 Institute of Nuclear and Radiological Sciences, Energy, Technology & Safety, NCSR "Demokritos"; 2 Dept. of Physics, University of Athens; 3 Dept. of Computed Tomography, General Hospital of Nikaia "Agios Panteleimon"
INTRODUCTION
Neutron Radiography (NR) is an imaging method based on the neutron attenuation properties of the imaged object. To produce an image, a source of neutrons, a collimator to shape the emitted neutrons into a mono-directional beam, an object, and a method of detecting neutrons are required. The present work is a preliminary study aiming to investigate the potential of Neutron Radiography using (α, n)-type isotopic neutron sources. As an application, NR images of a portable NaI scintillation detector and common computer components (CC) were acquired. The parameters that should be taken into account in the design of an NR system based on (α, n)-type sources are discussed. In addition, the NaI detector was examined with X-ray Computed Tomography (CT) to compare the two imaging techniques on their efficiency as diagnostic tools for such instruments.
CONCLUSIONS
• The results of the study suggest that neutron imaging can be performed using an isotopic (α, n)-type neutron source and an imaging plate (IP) with a Gd2O3 surface layer.
• A requirement for a successful image is a well-collimated and moderated neutron beam.
• The thermal-neutron fluences that can be achieved from isotopic source assemblies are low and can only be used for applications in which high resolution and short exposure times are not required.
• Nevertheless, a significant advantage of this approach is its overall simplicity and cost effectiveness due to the long half-life of (α, n)-type neutron sources.
• Future work will be directed towards a trade-off study between source intensity, neutron collimation and moderation, as well as radiation shielding requirements, in order to achieve a given image resolution.

EXPERIMENTAL SETUP & METHODOLOGY

Sample                        Cadmium  Moderator  Δt (min)  Figure
Floppy drive / Graphics card     +        -          60     5 i
Floppy drive / Graphics card     +        +          60     5 ii
NaI detector                     -        +          38     6 i
NaI detector                     +        -          60     -
NaI detector                     +        +          60     6 ii
NaI detector                     -        +         120     6 iii
NaI detector                     +        -         210     7 ii
NaI detector                     +        +         210     6 iv / 7 i

RESULTS

Figure 6: NR with slow neutrons: i) Δt = 38 min, ii) Δt = 60 min, iii) Δt = 120 min, iv) Δt = 210 min. The cadmium reference sheet is marked in the picture.
Figure 7: i) NR with thermal neutrons, Δt = 210 min; ii) NR with fast neutrons, Δt = 210 min; iii) CT (X-rays). The cadmium reference sheet is marked in the picture.

NR vs. CT in the case of the NaI detector
The initial aim of irradiating the portable NaI detector was to detect any defects or cracks in the crystal. The NR image with thermal neutrons (Fig. 7i) is of higher resolution than the one with fast neutrons (Fig. 7ii), whereas Fig. 7ii has better image quality. No significant structural defects were detected with NR. The CT image (Fig. 7iii) offers the greatest resolution and image quality in the minimum exposure time; no defects were detected either. The known dimensions of the NaI crystal agree with the measurements taken from Fig. 7i and Fig. 7iii.

Fast vs. slow neutrons
For the same exposure time, the NR image obtained with fast neutrons (no moderator) is of higher quality than the one obtained with slower neutrons (5 cm moderator) in the case of the computer components, which consist mainly of metal parts. The effect of exposure
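The imaging principle described in the introduction, contrast from differential neutron attenuation, follows the exponential attenuation law I = I0·exp(−Σx) for a collimated beam. A minimal sketch, where the macroscopic cross-sections are illustrative textbook-scale values, not taken from the poster:

```python
import math

# Sketch of the neutron attenuation contrast behind NR, assuming the simple
# exponential law I = I0 * exp(-Sigma * x). The macroscopic cross-sections
# (Sigma, per cm) below are illustrative values only.

def transmitted_fraction(sigma_per_cm, thickness_cm):
    """Fraction of a collimated neutron beam transmitted through a slab."""
    return math.exp(-sigma_per_cm * thickness_cm)

# Cadmium absorbs thermal neutrons very strongly, which is why a Cd sheet
# makes a good reference marker in the images.
print(transmitted_fraction(115.0, 0.1))  # ~1 mm Cd: essentially opaque
print(transmitted_fraction(0.06, 0.1))   # ~1 mm Al: almost transparent
```

The huge difference in transmitted fraction between strong and weak absorbers of the same thickness is what produces the image contrast.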
poster
Characterising contrast response functions using salience-matched achromatic and chromatic Gabor patches
Joel T. Martin1, Zoé Darrasse1,2 and Jasna Martinovic1
1 School of Philosophy, Psychology and Language Sciences, University of Edinburgh
2 Neuropsychology and Clinical Neurosciences, University Paul Sabatier Toulouse III

Background
• The human visual system processes achromatic and chromatic information in distinct parallel pathways (L+M; L-M; S-(L+M)).
• Visual evoked potentials (VEPs) to chromatic and luminance stimuli differ in morphology and with respect to stimulus contrast (Porciatti et al., 1999).
• Here we present the beginnings of a normative dataset characterising contrast response functions to salience-matched chromatic and achromatic Gabors.

Experiment
• 31 participants; chromatic (L-M; S-(L+M)) and achromatic (luminance: L+M) Gabors (±2σ = ~3.65 dva; SF = 0.8 cpd) with 4 logarithmically spaced contrast levels (Ach: 3.9-26%; RG: 2.5-14.3%; BY: 13.5-57.7%); task: respond to horizontal stimuli.
• Heterochromatic flicker photometry was used to ensure isoluminance of the chromatic stimuli.
• Normal colour vision was established with the Cambridge Colour Test (CCT).
• Asymmetric matching (L+M or S-(L+M) matched to L-M contrasts) was used to verify whether salience was indeed equated across mechanisms.
• Stimuli and their LMS coordinates are shown below.

Results
• All participants provided CCT scores within the normal limits for colour vision.
• The stimulus matching procedure demonstrated close agreement between our participants; their data generally fell close to the standards selected for the EEG experiment.

Conclusion
• Chromatic and achromatic VEP morphology is consistent with previous findings (Porciatti et al., 1999; Ellemberg et al., 2001).
• These data are the beginnings of a normative dataset that will eventually be used for control comparisons in a large-scale study of visual function in individuals with bipolar disorder (Roguski et al., 2024).
HELIOS-BD: scan the QR code to learn more about the HELIOS-BD project (226787/Z/22/Z).

• Overall scalp topographies and EEG responses to chromatic and achromatic stimuli are shown below for all electrodes.
• VEP amplitudes for each condition were fit with the Naka-Rushton equation (Naka & Rushton, 1966) to estimate the overall shape of the contrast response functions.
• Grand-average VEPs are shown above for electrodes Pz, Oz, O1, O2 and Iz. Achromatic contrast VEPs are characterised by a robust P1 component that saturates at higher contrast levels, followed by a non-contrast-dependent N1 component. Chromatic Gabors elicited a single, strong negative deflection whose amplitude and latency depend more linearly on stimulus contrast. Grey regions were selected for calculation of N1 and P1 mean amplitude responses.

[Figure: grand-average VEP amplitude and global-field-power panels for the luminance (Lum/1-4), L-M (LM/1-4) and S-(L+M) (S/1-4) contrast conditions.]

Contact
joel.martin@ed.ac.uk
zoe.darrasse@gmail.com
j.martinovic@ed.ac.uk

Scan for link to online poster.

References
• Ellemberg, D., Hammarrenger, B., Lepore, F., Roy, M. S., & Guillemot, J. P. (2001). Contrast dependency of VEPs as a function of spatial frequency: the parvocellular and magnocellular contributions to human VEPs. Spatial Vision, 15(1), 99-111.
• Naka, K. I., & Rushton, W. A. (1966). S-potentials from luminosity units in the retina of fish (Cyprinidae). The Journal of Physiology, 185(3), 587-599.
• Porciatti, V., & Sartucci, F. (1999). Normative data for onset VEPs to red-green and blue-yellow chromatic contrast. Clinical Neurophysiology, 110(4), 772-781.
• Roguski, A., Needham, N., MacGillivray, T., Martinovic, J., Dhillon, B., Riha, R. L., ... & Smith, D. J. (2024). Investigating light sensitivity in bipolar disorder (HELIOS-BD). Wellcome Open Research, 9.
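The Naka-Rushton fit described above can be sketched with a standard least-squares routine. A minimal example with synthetic amplitudes (the contrast levels echo the achromatic range from the poster; the amplitude values and starting parameters are illustrative, not the study's data):

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of fitting a contrast response function with the Naka-Rushton
# equation, as done for the VEP amplitudes on the poster. The amplitudes
# below are synthetic illustration values, not measured data.

def naka_rushton(c, r_max, c50, n):
    """Response R(c) = Rmax * c^n / (c^n + c50^n)."""
    return r_max * c**n / (c**n + c50**n)

contrast = np.array([3.9, 9.0, 15.0, 26.0])         # % contrast (Ach range)
amplitude = naka_rushton(contrast, 8.0, 10.0, 2.0)  # noise-free toy data

params, _ = curve_fit(naka_rushton, contrast, amplitude,
                      p0=[5.0, 12.0, 1.5], maxfev=5000)
print(params)  # recovers approximately [8.0, 10.0, 2.0]
```

Rmax captures the saturating response level, c50 the semi-saturation contrast, and n the steepness, which is how the "overall shape" of each condition's contrast response function is summarised.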
poster
Open, Metadata-Enriched, Non-Proprietary Data Format for Data Dissemination
Claudia Saalbach, Xiaoyao Han

Why do we need an open data format?
• Proprietary data formats jeopardize the FAIR principles.
• To reach a wider user group, data producers provide the same data in various formats, resulting in higher costs.
• Different, only partially compatible data formats are an obstacle to replication studies.
• Metadata delivered as PDFs or in a web-based information system are inconvenient to use.

NFDI-KonsortSWD
As part of the NFDI, KonsortSWD is expanding its services for research with data in the social, educational, behavioural and economic sciences. The mission is to develop (strengthen, widen, deepen) a research data infrastructure for the study of human society. KonsortSWD's RDM strategy aims to
- provide researchers and research data centres (RDCs) with the tools and services they need for managing and sharing (new) sensitive and non-sensitive data in accordance with the FAIR principles;
- support sustainable RDM in all phases of the research data lifecycle.

The open, metadata-enriched, non-proprietary data dissemination format (OpenDF) is a project of KonsortSWD, the NFDI consortium for the social, behavioural, educational and economic sciences. The project provides a non-proprietary open data format enriched with multi-level metadata that is smoothly usable with popular statistical software.
The goal of the project
Open Data Format: Specification. data.zip contains data.csv together with metadata.xml.

data.csv:
bap87,bap9201,bap9001,bap9002,bap9003,bap96,name
4,-2,1,-1,2,-2,Jakob
3,5,-2,1,4,1.57,Luca
,-1,-1,2,-1,1.92,Emilia
1,9,-2,2,4,1.85,Charlotte
-1,4,2,3,1,1.91,Johanna
3,4,-1,4,-2,1.8,Paul
1,9,2,-1,-1,1.8,
5,6,1,-1,1,1.96,Mia
5,5,5,3,1,1.64,Ben
-2,4,4,-1,-2,1.93,Jakob

metadata.xml (excerpt):
<dataDscr>
  <var name="bap87">
    <labl xml:lang="en">Current Health</labl>
    <labl xml:lang="de">Gesundheitszustand gegenwärtig</labl>
    <txt xml:lang="en">Question: How would you describe your current health?</txt>
    <txt xml:lang="de">Frage: Wie würden Sie Ihren gegenwärtigen Gesundheitszustand beschreiben?</txt>
    <notes>
      <ExtLink URI="https://paneldata.org/soep-core/data/bap/bap87"/>
    </notes>
…

data.zip → Stata, R, SPSS. Use case: R package.

DIW Berlin: www.diw.de
SOEP: https://www.diw.de/en/diw_01.c.615551.en/research_infrastructure__socio-economic_panel__soep.html
Project: KonsortSWD - Consortium for the Social, Behavioural, Educational and Economic Sciences in the National Research Data Infrastructure (NFDI)
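A package in this layout can be consumed with standard-library tools alone. A minimal sketch, assuming only the file layout shown on the poster (data.csv plus metadata.xml inside data.zip); the package is built in memory here since the real files are not part of this example:

```python
import csv
import io
import zipfile
import xml.etree.ElementTree as ET

# Sketch of reading an OpenDF-style package. The tiny csv/xml payloads are
# abbreviated from the example shown on the poster.
csv_text = "bap87,name\n4,Jakob\n3,Luca\n"
xml_text = ('<dataDscr><var name="bap87">'
            '<labl xml:lang="en">Current Health</labl>'
            '</var></dataDscr>')

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("data.csv", csv_text)
    zf.writestr("metadata.xml", xml_text)

with zipfile.ZipFile(buf) as zf:
    rows = list(csv.DictReader(io.TextIOWrapper(zf.open("data.csv"), "utf-8")))
    root = ET.fromstring(zf.read("metadata.xml"))

# Attach the variable label from the metadata to the data column.
labels = {v.get("name"): v.findtext("labl") for v in root.iter("var")}
print(rows[0]["bap87"], labels["bap87"])  # -> 4 Current Health
```

Because both files are plain text in an open container, the same package can be read from Stata, R or SPSS without any proprietary reader, which is the point of the format.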
poster
Modelling Solar Ca II H&K Emission Variations
K. Sowmya, A. I. Shapiro, V. Witzke, N.-E. Nèmec, T. Chatzistergos, K. L. Yeo, N. A. Krivova, S. K. Solanki
Max Planck Institute for Solar System Research, Göttingen, Germany
EU Grant No. 797715 (IMagE)

• Depending on the inclination and the period of observations, the activity cycle in the solar S-index can appear weaker or stronger (as shown by the red symbols in Figure 3) than in stars with a solar-like level of magnetic activity.
• Solar chromospheric emission variation is entirely normal in comparison to other stars with near-solar magnetic activity.

Figure 1: Validation of our model calculations (orange) against ground-based solar observations (corresponding to i ~ 90°; black). Panel a: daily values; panel b: 81-day smoothed values.
Figure 2: Inclination dependence of the S-index. 81-day averaged S-index values for solar cycles 21-23 for inclinations ranging from 90° (equator-on view; black curve) to 0° (pole-on view; purple curve).
Figure 3: Chromospheric emission variations (y-axis) vs. mean chromospheric activity (x-axis) for the Sun (black star and red symbols) and other Sun-like stars (black circles). Red symbols show our calculations. Grey shaded regions indicate the spread due to inclination and the strength of the activity cycle.

• We used the distributions of solar magnetic features derived from surface flux transport simulations together with non-LTE spectra of the Ca II H&K lines.
• We showed that the S-index values obtained by an out-of-ecliptic observer differ from those obtained by an ecliptic-bound observer (Figure 2).
• This indicates that it is important to consider the inclination effect on the S-index when comparing the magnetic activity of the Sun to that of other stars.
• The S-index quantifies the emission in the near-UV Ca II H&K lines and is a prime proxy of solar and stellar magnetic activity.
• To study the dependence of the S-index on the inclination angle (i) between the stellar rotation axis and the direction to the observer, we developed a physics-based model and validated it against the available solar S-index measurements (Figure 1).

Contact: krishnamurthy@mps.mpg.de
Sowmya et al. (2021), ApJ, under review
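The S-index mentioned above is, in the classic Mount Wilson definition, a band ratio. A minimal sketch of that definition; the flux values and the calibration factor alpha here are illustrative only, not from the poster:

```python
# Sketch of a Mount Wilson style S-index: the flux in bandpasses centred
# on the Ca II H & K line cores divided by the flux in two nearby
# pseudo-continuum reference bands (R and V), times a calibration factor.
# All numbers below are illustrative assumptions.

def s_index(flux_h, flux_k, flux_r, flux_v, alpha=2.3):
    """S = alpha * (H + K) / (R + V)."""
    return alpha * (flux_h + flux_k) / (flux_r + flux_v)

quiet = s_index(1.0, 1.1, 14.0, 15.0)
active = s_index(1.3, 1.4, 14.0, 15.0)  # stronger chromospheric emission
print(quiet < active)  # more H&K core emission -> larger S-index
```

Because the ratio normalises out most of the photospheric flux, the S-index isolates the chromospheric (magnetic-activity) contribution, which is what makes it comparable between the Sun and other stars.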
poster
Consensus detection and identification protocol for Acidovorax citrulli on cucurbit seeds

Funding
Non-competitive funding mechanism: each funder pays only for the participation of its own national researchers. Total funding: € 78 000.

Goals
Acidovorax citrulli (synonyms: Acidovorax avenae subsp. citrulli, Pseudomonas pseudoalcaligenes subsp. citrulli) is the causal agent of bacterial fruit blotch (BFB) of cucurbit plants, primarily watermelon and melon, where significant economic losses have been reported. Once the pathogen is introduced into an area, high humidity, high temperature and overhead irrigation increase the risk of a BFB epidemic developing. Despite the economic importance of the disease, little is known about the basic aspects of A. citrulli epidemiology and the factors involved in its pathogenicity and virulence. Since there are no resistant commercial cultivars, successful management of BFB depends on excluding primary inoculum by using pathogen-free seeds and seedlings, and seed health testing reduces the risk of outbreaks. The project therefore aims to develop a consensus protocol for the detection of A. citrulli on cucurbit seeds, together with monitoring of the pest in the main cucurbit production areas. Moreover, there is an ever-growing need for multiplex, high-throughput detection methods that can be applied directly to plant parts to certify their health status. Improved or new methods to distinguish accurately between bacterial strains (two groups are currently determined for Acidovorax citrulli) would be beneficial in determining their emergence and evolution as pathogens.
Research consortium
UNIMORE (IT), ANSES (FR), BPI (GR), NFCSO (HU), CREA (IT), NVWA (NL), FGBU-VNIIKR (RU)

Contact information
Project coordinator: Emilio Stefani, stefani.emilio@unimore.it

Key outputs and results
• Validated identification methods (test performance studies for detection and identification tests)
• A consensus protocol for the detection of Acidovorax citrulli on cucurbit seeds

Duration: 07/2016-06/2018
poster
SnE-VNet: A Deep Learning Model with Squeeze and Excitation for Improved 3D Stroke Lesion Segmentation

AUTHORS
Mishaim Malik, Benjamin Chong, Justin Fernandez, Vickie Shim, Alan Wang

INTRODUCTION
Stroke is a leading cause of disability, affecting over 12 million people worldwide each year [1]. Accurate lesion segmentation could help automate stroke management, treatment planning, and rehabilitation outcome prediction.

MOTIVATION
Deep learning offers a promising alternative to manual segmentation, which is time-consuming and requires expertise [2]. SnE-VNet automates segmentation, improving efficiency and consistency.

METHODOLOGY
A 3D CNN-based encoder-decoder structure with squeeze-and-excitation (SnE)-based residual connections [3].

Building blocks
• Encoder: downsamples the image to extract high-level features.
• Decoder: upsamples the features to reconstruct the original image.
• Connections: link encoder and decoder layers to capture spatial information.

The SnE blocks
• Identify important channels in the feature map.
• Improve performance without increasing computational cost.
• Enhance feature representation in 3D data.

Figure 1: SnE-VNet: a visual representation of the presented model.
Figure 2: Visual representation of an SnE block.

RESULTS
Improved the ground-truth/prediction overlap score by 1.3% and the true-positive rate by 4.7%.
Figure 3: Segmentation results (red = false negatives, green = false positives).

KEY FINDINGS
SnE-VNet segments lesions of various sizes and shapes, and has the potential to give clinicians better and faster results, which would significantly expedite stroke diagnosis and management.

REFERENCES
[1] Feigin, V. L. et al., 2022, International Journal of Stroke.
[2] Karakis, R. et al., 2023, Journal of Biomedical Informatics, 141.
[3] Malik, M. et al., 2024, Bioengineering, 11(1), 86.

This work was funded by the Health Research Council of New Zealand.
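The channel-reweighting mechanism of an SnE block can be sketched in a few lines. A minimal NumPy sketch of squeeze-and-excitation on a 3D feature map; the weight shapes, reduction ratio and random inputs are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Minimal sketch of a squeeze-and-excitation (SnE) block on a 3D feature
# map, the mechanism the poster adds to the V-Net residual connections.
# Weights and the reduction ratio r are illustrative, not trained values.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, w1, w2):
    """x: (C, D, H, W) feature map; returns the channel-reweighted map."""
    c = x.shape[0]
    z = x.reshape(c, -1).mean(axis=1)          # squeeze: global average pool
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))  # excitation: FC-ReLU-FC-sigmoid
    return x * s.reshape(c, 1, 1, 1)           # scale each channel by its gate

rng = np.random.default_rng(0)
c, r = 16, 4                                   # channels, reduction ratio
x = rng.standard_normal((c, 8, 8, 8))
w1 = rng.standard_normal((c // r, c)) * 0.1    # squeeze to C/r
w2 = rng.standard_normal((c, c // r)) * 0.1    # expand back to C
y = se_block(x, w1, w2)
print(y.shape)  # (16, 8, 8, 8): same shape, channels rescaled in (0, 1)
```

Because the gates are per-channel scalars, the block adds almost no computation relative to the 3D convolutions around it, which is why it can "improve performance without increasing computational cost".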
poster