| text | label |
|---|---|
1 Possibility of making a kilowatt inverter driven by the 3rd positive EMF (This paper was presented at the Division of Energy and Fuels of the 252nd ACS National Meeting in Philadelphia, August 22, 2016, paper number ENFL 212.) Osamu Ide, Foundation for the Advancement of Environmental Energy (FAEE), Clean Energy Research Laboratory, 3-4-21-601 Mita, Minato-ku, Tokyo, 108-0073, Japan. Email: daphne@myad.jp Introduction At the start of this research, in 1980, the author found the 3rd positive EMF as a new source of clean energy in a motor driven by the discharge of a capacitor. The author called the motor the Ether Engine. The author observed that the capacitor charging voltage was higher than the value estimated by calculation while the Ether Engine was running. The author investigated the phenomenon in detail, both experimentally and mathematically. It could not be explained by conventional electromagnetism. Finally, the author reached the conclusion that it was due to an independent EMF induced in the direction opposite to Faraday's back EMF. Papers on the Ether Engine were presented in 1995 and 2000, *(1)(2). These papers are also related to *(9)(10)(11), presented by AIAS fellows. In 2000, the author redirected this research from a motor to an inverter applying the 3rd positive EMF, because an inverter has no mechanical loss. In 2010, the author succeeded in making an inverter of 300% efficiency, *(3)(4)(7)(14). In 2015, the author finally succeeded in making a self-charging inverter generator. Although the output of the inverter generator is around 1 W, it exhausts no natural resources, *(13). This system should be adaptable to a power of kilowatts to supply the power source for houses and EVs. This report shows the possibility and concept of constructing a kilowatt inverter generator driven by the 3rd positive EMF. 
The basic experiment was already made and presented in the author's paper at the ACS meeting in March 2015, *(5). The first complete replication following the author's paper was made and presented by the Munich group of Dr. Horst Eckardt and Kurt Arenhold, *(8). The most remarkable point of their paper is that the 3rd positive EMF could be derived from the ECE theory of Dr. Myron Evans of AIAS. This report is the 2nd replication of the author's paper *(5). However, the scale of the experiment is much larger than in the first paper *(5), in terms of the voltage of the power source and the inductance of the transformer. | non_poster |
© Agnès Eyhéramendy | non_poster |
An Accurate Implementation of the Studentized Range Distribution for Python. Samuel Wallan1, Dominic Chmiel2, Matt Haberland3∗. 1. Computer Science and Software Engineering Department, Cal Poly, San Luis Obispo; 2. Mechanical Engineering Department, Cal Poly, San Luis Obispo; 3. BioResource and Agricultural Engineering Department, Cal Poly, San Luis Obispo. ∗Corresponding Author, mhaberla@calpoly.edu 1 Introduction As data becomes more and more accessible, it can be tempting to misuse data analysis techniques to find statistically significant results, a practice known as "p-hacking". Tukey's HSD (Honestly Significant Difference) test is one of several tests that guard against this practice by using the studentized range distribution to compute p-values that account for the number of comparisons performed. Implementations of Tukey's HSD already exist within the scientific Python ecosystem, but they rely on approximations of the studentized range distribution that may not behave well outside of their intended range and, even within the intended range, are only accurate to a few digits. In this document, we present a fast, highly accurate, and direct implementation of the studentized range distribution for SciPy, and we demonstrate its speed and accuracy. 2 The Distribution The studentized range probability density function (PDF) and cumulative distribution function (CDF) take the following forms, respectively [1]: f_R(q; k, \nu) = \frac{\sqrt{2\pi}\, k(k-1)\, \nu^{\nu/2}}{\Gamma(\nu/2)\, 2^{\nu/2-1}} \int_0^\infty s^{\nu} \phi(\sqrt{\nu}\, s) \left[ \int_{-\infty}^{\infty} \phi(z + qs)\, \phi(z)\, [\Phi(z + qs) - \Phi(z)]^{k-2}\, dz \right] ds \quad (1) and F_R(q; k, \nu) = \frac{\sqrt{2\pi}\, k\, \nu^{\nu/2}}{\Gamma(\nu/2)\, 2^{\nu/2-1}} \int_0^\infty s^{\nu-1} \phi(\sqrt{\nu}\, s) \left[ \int_{-\infty}^{\infty} \phi(z)\, [\Phi(z + qs) - \Phi(z)]^{k-1}\, dz \right] ds, \quad (2) where q > 0 is the studentized range of sample means from k > 1 groups, \nu > 0 is the number of degrees of freedom used for determining the pooled sample variance, and \phi(z) and \Phi(z) represent the normal PDF and CDF. In this document, the meaning of these terms is not important; we simply take (1) and (2) as (messy!) integrals to be evaluated given real numbers q, k, and \nu. | non_poster |
Gravity's fluctuation derives from its speed, a constant to the absolute space presumed. Wen-an Zhang ∗. ∗Retired civil engineer, Beijing, 100072, China. Submitted to Proceedings of the National Academy of Sciences of the United States of America. Newton's equation of gravity implies gravity instantly attracting two gravitating objects regardless of the distance separating them in space. In other words, gravity working infinitely fast was assumed to formulate the equation. If this is valid, there is no need for users of the equation to query what the dimension "r" therein really means; it is simply the relational distance between two gravitating objects. However, if gravity binds its influence with its gravitational field output at finite speed, there is an additional effect to account for. This additional effect comes from the discrepancy between the relational distance and the true distance for the "r" to choose, because the instantaneous output of a gravitational field needs time to approach its target, during which the field-source of gravity has been moving farther and farther away from that field's starting point, meaning that the distance between the centers of two gravitating objects is not exactly the path along which the expanding field radially chases its target, and the line joining those centers is not precisely the route by which the gravity comes to pull its target. To reflect this effect in Newton's model of gravity, this paper attempts to combine Einstein's idea of finite-speed gravity with Newton's conception of absolute space, from which the asymmetry of gravity (or the fluctuation of gravity) and the ameliorated radar-wave measurement were mathematically derived. gravity fluctuation | gravitational field | gravitational constant Newton's equation of gravity soon came into use after the proportionality in his law of gravity was measured by experiment. 
This equation is deemed applicable to any two gravitating objects in the universe, in any scenario of relationship. They may be in relative motion or at rest, in a small duo of one particle circling the other, or in a large group of celestial objects where the orbiting members revolve around a central dominant body while some of them have their own satellites, and so on. Such diversified relationship scenarios demonstrate the still more diversified motion status of those individuals when viewed not just from member-to-member positions within the group, but from the dynamic swarm relative to outside references. Now the question is: if a celestial group, like our solar system as a whole, moved faster or slower than it currently does, or more generally, if an object's motion through space changed, would that make any difference to its gravitational field strength? The answer, according to Newton's equation of gravity, would be "no", because the equation implicitly postulates that gravity works infinitely fast, which legitimates it in treating the diversities as no more complex than the triadic relations between gravitational attraction, mass product, and mass separation that the equation determines; for, relative to space, the speed of a gravity source might be tremendous, but it cannot be limitless, so the ratio of that speed to an infinite speed of gravity goes to zero. However, if gravity cannot strike its target in an instant, like a long, rigid rod transferring an applied force from one end to the other (which requires the continuous contact of solid particles all along the route of force exertion), but is influential only when its field reaches its target by travelling at finite speed (a point on which Albert Einstein, the most famous pioneer of physics in the twentieth century, already enlightened us one hundred years ago), then the said effect, though probably tiny, should be taken into account. 
To combine Einstein's idea of finite-speed gravity with Newton's concept of absolute space, this work proposes two ameliorated gravi | non_poster |
Hochschule für Gesundheit · University of Applied Sciences · Gesundheitscampus 8 · 44801 Bochum alina.napetschnig@hs-gesundheit.de Background and aims Methods Results and discussion The development of the core set of quality criteria for senior-friendly VR applications is based on a multi-step, multi-method study design. The definitions and contents of the requirements of the individual criteria are developed in an iterative process (cf. Fig. 1). The procedure follows the "Plan, Do, Check, Act" (PDCA) cycle and comprises four tasks that repeat cyclically in the sense of continuous improvement3. In the individual sub-steps, qualitative research methods such as framework analysis, group discussions, and aspects of grounded theory are applied. The criteria of the preliminary core quality-criteria set are evaluated with seniors in a target-group-appropriate and participatory way in an intervention study with a subsequent group discussion, using a VR training application for street crossing. The final results constitute the evaluated, standardized core set of quality criteria. References 1) Baas, J. (2020). Digitale Gesundheit in Europa. Berlin: Medizinisch Wissenschaftliche Verlagsgesellschaft. 2) Deiters, W., Höcker, P., Napetschnig, A. & Preissner, L. (2022). Innovationsräume für die partizipative Entwicklung digitaler nutzer*innen-zentrierter Anwendungen. In: Department of Community Health (ed.): Community Health – Grundlagen, Methoden, Praxis. Weinheim/Basel: Beltz Juventa, pp. 411–423. 3) Schönsleben, P. (2016). Integrales Logistikmanagement: Operations und Supply Chain Management innerhalb des Unternehmens und unternehmensübergreifend. Berlin: Springer-Verlag. The current results refer to the third PDCA cycle and form the preliminary core set of quality criteria. 
The criteria obtained in the iterative procedure and qualitative research process comprise categories for the following "general requirements": 1. quality assurance of medical/health content, 2. data protection provisions, 3. quality requirements, 4. consumer protection, and 5. interoperability. Categories with criteria for "VR-specific requirements" are: 1. graphics/quality, 2. 3D character/avatar, 3. provision of in-game instructions and prompts, 4. interaction, 5. navigation, and 6. promotion of user motivation and adherence. The criteria form the basis for the development of the VR training application (cf. Fig. 2) and can be viewed in detail by scanning the QR code below. The results open up discussion of VR design requirements, especially for the target group of seniors. Figure 1: Implementation of the PDCA cycle in the product development process (PEP) for developing the core set of quality criteria. Source: own illustration. Figure 2: Screenshot of the VR training application for seniors on safe street crossing. Source: own illustration. WEGFEST. – SCAN ME! Virtual reality (VR) is not only present in the games industry; this innovative technology is increasingly establishing itself in the health sector1. A guideline containing the essential quality criteria for developing VR applications for seniors does not yet exist. There are only generally applicable criteria for digital health applications, which are neither target-group-specific nor restricted to particular technologies2. 
• Problem: definition and evaluation of requirement criteria for the use of VR in gerontology • Assessment of existing generally applicable criteria for digital health applications • Result: participatively evaluated core set of quality criteria valid for senior-friendly VR applications. Tag der Nachwuchswissenschaftler*innen, June 2022. Main research question: What requirements must a VR health application fulfil for use in gerontology | non_poster |
Plan de Gestion des Données (PGD), or Data Management Plan (DMP): - THE good practice for any research project. - Required by funding bodies and institutions. - Specifies which data are collected or generated, and how they are managed, shared, and preserved during and after the project. - Translates the policy for managing research data into practice. - Helps anticipate the stages of the research-data life cycle. The research Data Management Plan in 8 points | non_poster |
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the grant agreement number 101000499. © COPYRIGHT NEWSLETTER Magazine, 5th ISSUE, September 2024. FORK-TO-FARM AGENT-BASED SIMULATION TOOL AUGMENTING BIODIVERSITY IN THE AGRI-FOOD VALUE CHAIN. Contents: Enhancing Biodiversity in Farming with the BioValue model; Interview with IDENER; Our Pilot Cases – Lentils; Our Recipes – Lentils as an appetizer; News; Must Read – Our latest Publications; The Consortium | non_poster |
The HOLMES low activity implantation. G. Gallucci on behalf of the HOLMES collaboration. NuPhys2023: Prospects in Neutrino Physics, 18–20 December 2023, King's College London | non_poster |
G. Loconsole1, G. Altamura1, V. Cavalieri1,2, M. Saponari1. 1 Institute for Sustainable Plant Protection, CNR (IPSP-CNR), Italy; 2 Centro di Ricerca, Formazione e Sperimentazione in Agricoltura 'Basile Caramia', CRSFA, Italy. Attempts to improve a protocol for phenotyping olive cultivar responses to Xylella fastidiosa infections. 26–30 April 2021 | non_poster |
Anatomy, a serious game for learning anatomy: development and validation. YASMIM FERNANDES MONIZ, HIGOR BARRETO CAMPOS, BRUNO TOSHIO GOMES GUNJI. Abstract – Objectives: To develop, validate, and investigate the usability of a serious game for the study of human anatomy. Methods: Anatomy was designed as a question-and-answer game covering content on the skeletal system, with immediate feedback on correctness. For validation, functional and structural tests (black box and white box) were performed, and usability was tested with the System Usability Scale (SUS). Results: Five software-development professionals participated in the study. The validation tests verified the absence of logical inconsistencies or system errors. According to the SUS scale, the game was rated "best imaginable", with a score of 91 (100 being the maximum). The quality components "ease of learning", "memorability", "user satisfaction", and "error minimization" also received scores rated "best imaginable". Conclusion: Anatomy can be a useful and motivating tool for the study of anatomy; further studies are needed to quantify its impact on students' academic performance. Keywords: Game. Anatomy. Usability. Validation. 
Introduction: One of the fundamental subjects for learning content related to health-area degree programs is human anatomy. Although it is considered a prerequisite for understanding other disciplines, students face challenges in achieving the expertise needed to master it (ARRUDA and SOUSA, 2013). Some of the challenges in learning anatomy are the extensive content to be covered in fixed time frames, the difficulty of memorizing names and structures, and a methodical, repetitive protocol that students find unattractive (SALBEGO et al., 2015). One option to support and optimize the learning of anatomy in higher education is the gamification of part of the pedagogical process. In this context, serious games are a viable alternative, since they can be created for specific purposes and fulfil those purposes in an entertaining environment for the user (ANYANWU, 2014). The systematic review conducted by Van Gaalen et al. (2021) found that learning outcomes can be optimized when gamification concepts are applied. Considering the particularities of institutions' curricula, and given that | non_poster |
ISSN 1028-9933 Volume 98, No. 1, January–February 2019. ORIGINAL ARTICLE. Characterization of the elderly adult with probable diagnosis of skin cancer. José Antonio Bordelois Abdo1, Mauricio López Mateus2, Iliana Fernández Ramírez3, Kathy Julissa Lagos Ordoñez4. 1 Second Degree Specialist in Dermatology. Master in Infectious Diseases. Assistant Professor. Hospital General Docente "Dr. Agostinho Neto", Guantánamo, Cuba. Email: jbabdo@infomed.sld.cu ORCID: https://orcid.org/0000-0003-0060-2135. 2 Colombian physician. Dermatology resident. Hospital General Docente "Dr. Agostinho Neto", Guantánamo, Cuba. Email: mauriciolopezmateus@gmail.com. 3 Second Degree Specialist in Dermatology. Master in Infectious Diseases. Assistant Professor. Hospital General Docente "Dr. Agostinho Neto", Guantánamo, Cuba. Email: ileca@infomed.sld.cu. 4 Honduran physician. Dermatology resident. Hospital General Docente "Dr. Agostinho Neto", Guantánamo, Cuba. Email: relias@infomed.sld.cu. ABSTRACT Introduction: in Cuba, one of the social demands is the study of skin cancer in the elderly population. Objective: to characterize clinical-epidemiological aspects in patients with a probable diagnosis of skin cancer admitted to the "Caridad Jaca" and "San José" nursing homes in the city of Guantánamo during 2017. Method: an observational, prospective, cross-sectional study was carried out with all the residents (n=256); in those with probable skin cancer (n=15), the following were recorded: age; sex; place of birth and of residence; personal pathological history; skin phototype; characteristics of the lesion; clinical and dermatoscopic diagnosis. Results: in 5.9% of the elderly a cancer of | non_poster |
Optical light curves of the FUor and FUor-like objects Evgeni Semkov1, Stoyanka Peneva1, Sunay Ibryamov2, Ulisse Munari3, Hiroyuki Mito4 1) Institute of Astronomy and NAO, Bulgarian Academy of Sciences, Sofia, Bulgaria 2) Department of Physics and Astronomy, Faculty of Natural Sciences, University of Shumen, Shumen, Bulgaria 3) INAF Osservatorio Astronomico di Padova, Sede di Asiago, Asiago, Italy 4) Kiso Observatory, Institute of Astronomy, University of Tokyo, Japan | non_poster |
Colección de ESMOS. Infographic: Oral cancer and the genes that protect us from it. Raul Arciniega Escorcia* iD. Cell and Molecular Biology, Facultad de Estomatología, Benemérita Universidad Autónoma de Puebla, Puebla, Mexico. *Email: raul.arciniega@alumno.buap.mx. February 3, 2023. DOI: http://doi.org/10.5281/zenodo.7604414. Edited by: América Paulina Rivera-Urbalejo (Facultad de Estomatología, Benemérita Universidad Autónoma de Puebla). Reviewed by: Jesús Muñoz-Rojas (Instituto de Ciencias, Benemérita Universidad Autónoma de Puebla). Abstract: Cancer is one of the leading causes of death in the world, and oral cancer accounts for 2% to 4% of the total. It is worth noting that more than 5,000 diagnosed cases are registered every year, which is why it is important to disseminate this information to the population [1]. Oral cancer is a tumor that develops in a multi-sequential process involving a series of irreversible changes in the | non_poster |
Colección de ESMOS. Lecture: The rhizosphere and its microscopic world. Alondra Yemina Flores-Álvarez iD, Lilia Sánchez-Minutti* iD. Laboratorio de Procesos Biotecnológicos, Universidad Politécnica de Tlaxcala, Tlaxcala, Mexico. *Email: lilia.sanchez@uptlax.edu.mx. January 22, 2024. DOI: http://doi.org/10.5281/zenodo.10553307. Edited by: Alma Rosa Netzahuatl-Muñoz (Programa Académico de Ingeniería en Biotecnología, Universidad Politécnica de Tlaxcala). Reviewed by: Jabel Dinorin Téllez Girón (Facultad de Biotecnología, Decanato de Ciencias Biológicas, Universidad Popular Autónoma del Estado de Puebla). Abstract: The rhizosphere is the zone of soil surrounding and closest to plant roots; it contains mainly rocky material, soil particles of different textures, roots, nutrients in the form of organic and inorganic compounds, water, and microorganisms. From a microbiological point of view, the study of rhizosphere microorganisms began in 1888 with the roots of | non_poster |
P2. Where are the red nuggets? Massive compact spheroids across time. Dexter HON, Swinburne University of Technology. Red Nuggets: MASSIVE (M/M☉ ~ 10^11), COMPACT (Re < 2.2 kpc), QUIESCENT galaxies at z ~ 1.5. Supervisor: Prof. Alister Graham | non_poster |
A predicted chemo-polypharmacophoric agent comprising (Propeptide-Fc)/MGF peptide, mimicking interactions with high free-binding-energy properties towards Wnt7a/Fzd7 signalling and the Akt/mTOR anabolic growth and IGF-I/PI3K/Akt/MAPK/ERK pathways. Grigoriadis Ioannis1, Grigoriadis George2 and Grigoriadis Nikolaos3*. 1. Department of Computer Drug Discovery Science, BiogenetoligandorolTM, Thessaloniki, Greece; 2. Department of Stem Cell Bank and ViroGeneaTM, Biogenea Pharmaceuticals Ltd, Thessaloniki, Greece; 3. Department of IT Computer Aided Personalized Myoncotherapy, Cartigenea-Cardiogenea, Neurogenea-Cellgenea, Cordigenea-HyperoligandorolTM. ABSTRACT: Insulin-like growth factor-I (IGF-I) is a key regulator of skeletal muscle growth in vertebrates, promoting mitogenic and anabolic effects through the activation of the MAPK/ERK and the PI3K/Akt signaling pathways. These results also show that there is a time-dependent regulation of IGF-I plasma levels and of its signaling pathways in muscle. IGF-I is a key regulatory hormone that controls growth in vertebrates; skeletal muscle growth, in particular, is strongly stimulated by this hormone. IGF-I stimulates both proliferation and differentiation of myoblasts, as well as promoting myotube hypertrophy in vitro and in vivo. The mitogenic and anabolic effects of IGF-I on muscle cells are mediated through specific binding to the IGF-I receptor (IGF-IR). This ligand-receptor interaction promotes the activation of two major intracellular signaling pathways: the mitogen-activated protein kinases (MAPKs), specifically the extracellular signal-regulated kinase (ERK), and phosphatidylinositol 3-kinase (PI3K)/Akt. The MAPK (RAF/MEK/ERK) cascade is a key signaling pathway in skeletal muscle, where its activation is absolutely indispensable for muscle cell proliferation. 
Biologically active polypeptides derived from the E domain that forms the C-terminus of the insulin-like growth factor I (IGF-I) splice variant known as mechano growth factor (MGF) have demonstrated neuroprotective and cardioprotective properties, as well as the ability to increase the strength of normal and dystrophic skeletal muscle. Ligands selected from phage-displayed random peptide libraries tend to be directed to biologically relevant sites on the surface of the target protein. Protein-peptide interactions form the basis of many cellular processes. Consequently, peptides derived from library screenings often modulate the target protein's activity in vitro and in vivo and can be used as lead compounds in drug design and as alternatives to antibodies for target validation in both genomics and drug discovery. In this research project we present, for the first time, a predicted chemo-polypharmacophoric agent comprising (Propeptide-Fc)/MGF peptide-mimicking properties for a possible increase of muscle mass and fiber size, acting towards Wnt7a/Fzd7 signalling on the Akt/mTOR anabolic growth and IGF-I/PI3K/Akt/MAPK/ERK pathways, utilising (Propeptide-Fc)/MGF phage-displayed random peptide libraries through a KNIME-RDKit-CDK clustering pipeline. II METHODS. A. Sequential solution of the Poisson-Boltzmann equation through a combination-index dynamic unified theorem for multiple entities. Based on Eqs. 1 and 2, in conjunction with Eq. 4, Chou and Talalay in 1983 introduced the term combination index (CI) for the quantification of synergism (CI<1), additive effect (CI=1), and antagonism (CI>1) [6,13,14], where at x% inhibition the general equation for two drugs is CI = (D)1/(Dx)1 + (D)2/(Dx)2. A typical presentation of algorithms and graphics of CI values as a function of effect (fa) is illustrated in Figure 2. The resulting Fa-CI plot is also called the Chou-Talalay plot. 
The Fa-CI plot and the isobologram are two sides of the same coin: the Fa-CI plot is effect-oriented, while the isobologram is dose-oriented (Figure 1). More details are given in Reference 6. The algorithm for quantifying synergism or antagoni | non_poster |
AMERICAN JOURNAL OF EDUCATION AND LEARNING ISSN: 2996-5128 (online) | ResearchBib (IF) = 9.918 IMPACT FACTOR Volume 2 | Issue 4 | 2024 Published: 30-11-2024 IMPLEMENTING STRATEGIC MANAGEMENT METHODS IN TEXTILE ENTERPRISES https://doi.org/10.5281/zenodo.14223960 Ergashev Jamshid Jamoliddinovich, Head of the "Socio-Economic and Sports" Department at NamTSI (PhD); Sultonova Maxfirat Akramjon qizi, Assistant at the "Socio-Economic and Sports" Department at NamTSI; G'ulomjonova Guliruxsor Nasriddin qizi, Namangan Institute of Textile Industry, Assistant at the "Socio-Economic and Sports" Department at NamTSI. Annotation According to the report published by the International Chamber of Commerce (ICC) on the results of 2024, key issues faced by businesses worldwide include the application of existing management strategies and the processes of their collaborative implementation. In particular, in Uzbekistan, the application of improved management strategies in textile enterprises during the first half of 2024 led to a 4.1-fold increase in quality, efficiency, and production volume across 11 industrial zones. This article provides information on the principles and methods for implementing modern management approaches in textile enterprises. | non_poster |
Poster: Effectiveness of Moving Target Defense Techniques to Disrupt Attacks in the Cloud. Salman Manzoor, Lancaster University, United Kingdom, s.manzoor1@lancaster.ac.uk; Antonios Gouglidis, Lancaster University, United Kingdom, a.gouglidis@lancaster.ac.uk; Matthew Bradbury, Lancaster University, United Kingdom, m.s.bradbury@lancaster.ac.uk; Neeraj Suri, Lancaster University, United Kingdom, neeraj.suri@lancaster.ac.uk. ABSTRACT Moving Target Defense (MTD) can eliminate the asymmetric advantage that attackers have in terms of time to explore a static system by changing a system's configuration dynamically to reduce the efficacy of reconnaissance and increase uncertainty and complexity for attackers. To this extent, a variety of MTDs have been proposed for specific aspects of a system. However, deploying MTDs at different layers/components of the Cloud and assessing their effects on the overall security gains for the entire system is still challenging, since the Cloud is a complex system entailing physical and virtual resources, and there exists a multitude of attack surfaces that an attacker can target. Thus, we explore the combination of MTDs, and their deployment at different components (belonging to various operational layers), to maximize the security gains offered by the MTDs. We also propose a quantification mechanism to evaluate the effectiveness of the MTDs against attacks in the Cloud. CCS CONCEPTS • Security and privacy → Information flow control; Distributed systems security. ACM Reference Format: Salman Manzoor, Antonios Gouglidis, Matthew Bradbury, and Neeraj Suri. 2022. Poster: Effectiveness of Moving Target Defense Techniques to Disrupt Attacks in the Cloud. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security (CCS '22), November 7–11, 2022, Los Angeles, CA, USA. ACM, New York, NY, USA, 3 pages. https://doi.org/10.1145/3548606.3563514 
1 INTRODUCTION Current IT systems operate in a relatively static configuration, which gives attackers the advantage of time, as attackers can plan, perform reconnaissance, and execute attacks without time constraints. Conventional security measures rely on patching individual vulnerabilities, which can be cumbersome, time-consuming, and risk introducing configuration errors in the system. Moreover, it is difficult (and likely impossible) for system defenders to eliminate all vulnerabilities in a system; this provides the attackers a window of opportunity to compromise the system. Consequently, Moving Target Defense (MTD) [8] techniques are advocated as a proactive approach to improve a system's security. The basic premise behind MTD is that introducing increased uncertainty and complexity for the attackers reduces the likelihood of them successfully exploiting the system. For example, a dynamic run-time environment could change the execution environment presented to the application [2], while MTD techniques focusing on the network layer dynamically modify the network characteristics (e.g., IP/MAC addresses) [4] to change the attack surfaces presented to the adversary. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). CCS '22, November 7–11, 2022, Los Angeles, CA, USA © 2022 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-9450-5/22/11. https://doi.org/10.1145/3548606.3563514 The previously mentioned MTDs are typically applied to individual aspects of a system (IP addresses, OS replicas, etc.). 
Their effectiveness is measured through their capability to mitigate attacks targeting these components. However, the application of the MTDs considering a holistic view of the Cloud is limited [1] due t | non_poster |
19th annual Congress of the EUROPEAN COLLEGE OF SPORT SCIENCE, 2nd–5th July 2014, Amsterdam – The Netherlands. BOOK OF ABSTRACTS. Edited by: De Haan, A., De Ruiter, C. J., Tsolakidis, E. Hosted by the VU University Amsterdam & VU University Medical Center Amsterdam. ISBN 978-94-622-8477-7 | non_poster |
DNA-barcoding and assessment of the genetic diversity of the Xylella fastidiosa vectors in the Balearic Islands Delgado-Serra, Sofia1; López-Mercadal, Julia1; Lester, Katherine2; Miranda-Chueca, Miguel Ángel1; Jurado-Rivera, Jose Antonio3; Paredes-Esquivel, Claudia1 1 Applied Zoology and Animal Conservation, University of the Balearic Islands, Spain 2 Diagnostics, Wildlife & Molecular Biology. Science and Advice for Scottish Agriculture, Scotland 3 Biodiversity, Systematics and Evolution, University of the Balearic Islands, Spain | non_poster |
2023/5/25 [MGI27-P01] ベルモントフォーラムPARSECプロジェクトにおける オープンサイエンス推進ツール、チェックリストの開発 Developing tools/checklists for researcher, data and software to promote Open Science as part of Belmont Forum PARSEC project *村山泰啓1, 宮入暢子1, 近藤康久2, Shelley Stall3, Alison Specht4 1. 国立研究開発法人情報通信研究機構NICTナレッジハブ、2. 総合地球環境学研究所研究基盤国際センター 3. American Geophysical Union、4. The University of Queensland *Yasuhiro Murayama1, Nobuko Miyairi1, Yasuhisa Kondo2, Shelley Stall3, Alison Specht4 1. National Institute of Information and Communications Technology、2. Research Institute for Humanity and Nature 3. American Geophysical Union、4. The University of Queensland Acknowledgement: This work was supported by Japan Science and Technology Agency as part of the Belmont Forum PARSEC project. PARSECプロジェクト この研究は、「自然保護区が社会経済に及ぼす影響の多国融合研究を通じた新たなデータ共有・再利用手法の構築(PARSEC)」プロジェクトの一部であり、ベルモント・フォーラムより、全米科学財団(NSF, Grant 1929464)、フランス国立研究機構(ANR)、ブラジルサンパウロ州研究財団(FAPESP)、日本科学技術振興機構(JST)を通じた資金提供を受けています。 This work is part of the Building New Tools for Data Sharing and Re-use through a Transnational Investigation of the Socioeconomic Impacts of Protected Areas (PARSEC) project with funding provided by the Belmont Forum through the National Science Foundation, NSF, Grant 1929464, (US), Agence Nationale de la Recherche, ANR (France), Fundação de Amparo à Pesquisa do Estado de São Paulo, FAPESP (Brazil), and Japan Science and Technology Agency, JST (Japan). | non_poster |
Poster STI 2022 Conference Proceedings Proceedings of the 26th International Conference on Science and Technology Indicators All papers published in this conference proceedings have been peer reviewed through a peer review process administered by the proceedings Editors. Reviews were conducted by expert referees to the professional and scientific standards expected of a conference proceedings. Proceeding Editors Nicolas Robinson-Garcia Daniel Torres-Salinas Wenceslao Arroyo-Machado Citation: Zhang, L., Qian, Y., & Li, J. (2022). Continued collaboration shortens the transition period of a scientist after moving from one institution to another. In N. Robinson-Garcia, D. Torres-Salinas, & W. Arroyo-Machado (Eds.), 26th International Conference on Science and Technology Indicators, STI 2022 (sti22160). https://doi.org/10.5281/zenodo.6962471 Copyright: © 2022 the authors, © 2022 Faculty of Communication and Documentation, University of Granada, Spain. This is an open access article distributed under the terms of the Creative Commons Attribution License. Collection: https://zenodo.org/communities/sti2022grx/ | non_poster |
NEWSLETTER www.eenvest.eu Dear Readers, EEnvest project has just passed its turning point, and in early June the project’s partners shared their activity progress status in the 4th project meeting. The technical-financial risk evaluation methodology has been completed and is now undergoing the testing phase based on data from the two demo cases in Italy and Spain. An innovative methodology to evaluate investments in energy efficiency is being developed. Based on technical-financial risk analysis, the methodology considers both energy and non-energy related benefits, including comfort, well-being and health, and will be implemented into the EEnvest Search&Match web platform, coming in Q4/21. Other useful services are being integrated to support investors and building owners, such as a blockchain-based reporting tool and a self-assessment tool for benchmarking project quality. Cristian Pozza, EEnvest project coordinator (Eurac Research) INSIDE Interview with PoliMi - 2 Interview with UIPI - 3 Webinar: “The Future of Multiple Benefits for Investors: Accelerating Energy Renovation Investments” - 4 4th General Assembly and Mid-term Conference - 5 Activities 2021/2022 - 7 /eenvest @eenvest_eu Risk Reduction For Building Energy Efficiency Investment June 2021 - Fourth Issue Italian demo case in Rome Photo credits: PRELIOS INTEGRA S.p.A. https://twitter.com/EENVEST_EU https://www.linkedin.com/company/eenvest/ | non_poster |
High Performance Computing Coursework 2 | non_poster |
Poster STI 2022 Conference Proceedings Proceedings of the 26th International Conference on Science and Technology Indicators All papers published in this conference proceedings have been peer reviewed through a peer review process administered by the proceedings Editors. Reviews were conducted by expert referees to the professional and scientific standards expected of a conference proceedings. Proceeding Editors Nicolas Robinson-Garcia Daniel Torres-Salinas Wenceslao Arroyo-Machado Citation: Piro, F. N., D’Este, P., Yegros, A., & Borlaug, S. B. (2022). Joining evenly while remaining unlike: the influence of balanced inter-sectoral research collaborations on scientific performance. In N. Robinson-Garcia, D. Torres-Salinas, & W. Arroyo-Machado (Eds.), 26th International Conference on Science and Technology Indicators, STI 2022 (sti2248). https://doi.org/10.5281/zenodo.6907129 Copyright: © 2022 the authors, © 2022 Faculty of Communication and Documentation, University of Granada, Spain. This is an open access article distributed under the terms of the Creative Commons Attribution License. Collection: https://zenodo.org/communities/sti2022grx/ | non_poster |
Latest Measurement of the Appearance of Tau Neutrinos in the Flux of Atmospheric Neutrinos at Super-Kamiokande. Maitrayee Mandal, NCBJ, Poland (Maitrayee.Mandal@ncbj.gov.pl). NuPhys2023, 18-20 December 2023. Supported by NCN grant UMO-2018/30/E/ST2/00441. | non_poster |
Information Science Trends 2024 - The ASIS&T European Chapter Research Series. July 2024 Title A Gap in the Sustainable Management of Social Network Data: Existing Challenges and Suggestions Author Saeed Rezaei Sharifabadi Amir Ghaebi Sara Soltani1 Author’s Affiliation Professor, Department of Information Science, Alzahra University, Tehran, I.R. Iran. srezaei@alzahra.ac.ir Professor, Department of Information Science, Alzahra University, Tehran, I.R. Iran. ghaebi@alzahra.ac.ir Ph.D. Candidate in Information Science, Alzahra University, Tehran, I.R. Iran. s.soltani@alzahra.ac.ir Keywords: [Data management, Social networks, Sustainable management] STRUCTURED ABSTRACT Aim of the contribution What is the aim of your contribution to the conference? My aim in participating in this conference is to share the research problem of my doctoral dissertation, on which I am currently working. By participating in this 1 Corresponding author | non_poster |
UNCONVENTIONAL STRATEGIES FOR TREATING THE ACTION OF THE VACUUM ON THE SOLUTION OF THE CLASSICAL HEAT EQUATION. EDISON E. VILLA CHICA, MANUEL J. SALAZAR. Abstract. This work deals with the action of the vacuum in the heat and wave equations, which we initially treat as the measure of volumes whose temperature reaches absolute zero. Although this is a plausible way to pose the problem, it is known in advance that the problem of the vacuum, quite apart from the unimaginability of its transfinite meaning, is indeterminate: not only is the measure of absolute zero impossible to obtain, but so is any constant measure, and indeed any exact measure, whether obtained through the heat or wave equation or, in general, through any mechanism, rational or otherwise. Just as the exactness of a meaning is its ideal, and that ideal is the measure, such a measure is impossible, not even in the imaginary, since the unimaginable transfinite meaning of its expression, that is, of its sentence, leads to its multiverse, if it is not the multiverse itself, in which the degree of opposition between a meaning and its opposite has an infinite spectrum to which the multiverse is sensitive to its contradiction or to this infinite reality, whether to oppose it or to favour it. Hence, for any measure under discussion, and not only in the heat or wave equation, the infinite measure, that is, its indetermination, occurs independently of any measure that would seek to predetermine it, essentially because the multiverse is always present; and taking the multiverse as reality itself, almost all of its infinite reality is fiction. Only the CM could fail to be fiction before the UBE, as the most universal judge, and for this reason it must be seen as the TF that captures this possible reality, and also as the resource it uses to judge the reality of a content. 
The problem then becomes that of determining the atomic result so that the infinite reality of the multiverse does not occur, since that is impossible, and the theory of well-being (TBE) must avoid or prevent it; were it to occur, the reserve would be totally exhausted in its immense reality, owing to mismanagement of the hypothesis of limits on the reserve (HLR). This text therefore treats this problem from the classical heat and wave equations, deriving it as a cause of them or as a consequence, since the problem can be viewed from any theory and not only from some part of mathematics such as the classical heat equation treated here. Keywords: the vacuum in the heat or wave equation, unconventional theories, the multiverse, priority emergency or urgency (UP). Contents: 1. Introduction. 2. Unconventional theories. | non_poster |
Bendix Hagedorn – Institute of Theoretical Astrophysics, UiO AtLAST Design Study Conference, Mainz May 24, 2024 Molecular gas scaling relations of local star-forming galaxies below 𝑀∗~10"𝑀⊙ | non_poster |
Xylella fastidiosa: Imminent Risk to Food Security in Near East and North Africa Region Thaer YASEEN, Food and Agriculture Organization of the United Nations (FAO), Regional Office in Cairo, Egypt. | non_poster |
Drug-Target Relationship Analysis Drug targets are biological macromolecules, such as certain proteins and nucleic acids, that have pharmacodynamic functions in the body and can be acted upon by drugs. They have targeted structures and target molecules related to specific diseases. Analysing drug targets reveals which structures of a drug can interact with the target molecule to produce a therapeutic effect, and thereby cure disease. Figure 1 Drug target structure CADD for Drug-Target Relationship Analysis Drug research and development is the driving force of the pharmaceutical industry, and it has also sparked a research boom in academia. Computer-aided drug design (CADD) complements traditional experimental drug discovery: it accelerates the development of new drugs, shortens development time, and reduces the cost of drug R&D projects. Physics-based calculation methods used in CADD improve the success rate of drug development. In the drug-research process, it is very important to understand the drug-target relationship. Drug-target interaction prediction is an important auxiliary method for drug development. Network pharmacology and drug repositioning overturn traditional drug-development concepts and improve the theoretical basis of drug-target | non_poster |
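As a toy illustration of what drug-target interaction prediction can look like (this is not any specific CADD package, and the fingerprints below are invented bit-sets, not real chemistry), one can describe drugs and target pockets over a shared feature vocabulary, score every pair by Tanimoto similarity, and rank the pairs:

```python
# Invented feature sets: each integer stands for one structural feature.
drugs = {"drugA": {1, 3, 5, 8}, "drugB": {2, 4, 7}}
targets = {"kinaseX": {1, 3, 5, 9}, "receptorY": {2, 4, 6, 7}}

def tanimoto(a, b):
    """Jaccard/Tanimoto similarity of two feature sets."""
    return len(a & b) / len(a | b)

def rank_pairs(drugs, targets):
    """Score every drug-target pair; higher means more likely to interact."""
    scores = {(d, t): tanimoto(fd, ft)
              for d, fd in drugs.items() for t, ft in targets.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank_pairs(drugs, targets)
```

Production pipelines replace the hand-made sets with molecular fingerprints and learned embeddings, but the score-then-rank shape of the computation is the same.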
POSTER: How Attackers Determine the Ransom in Ransomware Attacks Tom Meurs University of Twente Enschede, Netherlands t.w.a.meurs@utwente.nl Marianne Junger University of Twente Enschede, Netherlands m.junger@utwente.nl Abhishta Abhishta University of Twente Enschede, Netherlands s.abhishta@utwente.nl Erik Tews University of Twente Enschede, Netherlands e.tews@utwente.nl Abstract—Ransomware may lead to massive economic damage to victims [13]. However, it is still unclear how attackers determine the amount of ransom. In this poster we empirically study the ransom requested by attackers in ransomware attacks. We analysed 371 ransomware attacks reported to the Dutch Police between 2019 and 2021. Our results indicate that the attacker’s effort and opportunity are important predictors of the ransom requested. The goal of the poster is to invite other researchers for collaboration. Index Terms—Ransomware, cyber attacks, criminal revenue, police reports 1. Introduction Ransomware attacks have become more prevalent over the past years [1]. Even though most ransomware attackers are financially motivated [9], the actual financial gains made by attackers are still unclear. This poster abstract aims at introducing a dataset that could be used to empirically study the ransom requested by attackers. The Rational Choice Perspective (RCP) [2], [3] states that criminal decision-making is based on weighing the costs and benefits of an attack. Costs could be effort or the risk of being caught by Law Enforcement. Benefits are mostly money, but could also be reputation. Based on RCP, we hypothesise that the ransom requested depends on how much effort attackers put into a ransomware attack. Furthermore, there might be an increase in requested ransom over the years, because improved anti-virus scanners might make it more difficult to perform ransomware attacks and therefore require more effort. A complementary approach is the Routine Activity Theory (RAT) [10]. 
RAT focuses on the opportunities for attackers provided by context. From RAT it follows that victims with more money provide the opportunity for attackers to earn more money, and attackers will therefore demand a higher ransom [6]. Furthermore, opportunity might vary between seasons [12], so requested ransom could also vary between seasons. Summarizing, an attacker’s effort and opportunity could influence the ransom requested (Figure 1). We propose the following hypotheses: • H1: If attackers put in more effort they will ask a larger ransom • H2: There is an increasing trend of requested ransom over the years • H3: High revenue of victims should lead to larger requested ransom • H4: The requested ransom varies over the different seasons Figure 1: Theoretical framework 2. Methods 2.1. Sample We investigated 371 ransomware attacks registered by the Dutch Police between 2019 and 2021. Ransomware attacks on individuals as well as companies were included in the sample. Ransomware attacks with victims outside the Netherlands, but reported to the Dutch Police, were excluded from this study. 2.2. Measures To measure the requested ransom, information about the ransom at first contact with victims was collected. Of the 371 observations, 172 attacks (46%) reported the ransom demanded by attackers. If the ransom demanded was unknown, this was mostly (52%) because attackers wanted victims to contact them to be informed about the ransom and the victims did not want to do so. We also analyse whether the unknown ransom demands introduce selection bias. | non_poster |
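Hypotheses H1 and H3 above amount to regressing the (log) requested ransom on attacker effort and victim revenue. The sketch below runs that regression on purely synthetic data, since the police-report data are not public; the coefficient values, units, and distributions are all invented, so it only illustrates the shape such an analysis could take.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in data: effort in attack-days, revenue in M EUR,
# log-ransom as the outcome. None of these numbers come from the study.
n = 200
effort = rng.uniform(1, 30, n)
revenue = rng.uniform(0.1, 50, n)
log_ransom = 8.0 + 0.05 * effort + 0.02 * revenue + rng.normal(0, 0.5, n)

# OLS with an intercept: log_ransom ~ effort + revenue (H1 and H3).
X = np.column_stack([np.ones(n), effort, revenue])
beta, *_ = np.linalg.lstsq(X, log_ransom, rcond=None)
intercept, b_effort, b_revenue = beta
```

Positive fitted slopes on `effort` and `revenue` would be the pattern consistent with H1 and H3; the real study would additionally need standard errors and season/year dummies for H2 and H4.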
DOI: 10.15457/biolexsoe_2017_11_6 Biobrary. The Library as Producer of Biographical Information? – The IOS-Example - Poster - Hans Bauer Library and Electronic Research Infrastructure Leibniz-Institute for East and Southeast European Studies (IOS), Regensburg, Germany hbauer@ios-regensburg.de Abstract—Biographical and bibliographic data have much in common. However, biographical information within library infrastructures focuses on primary data such as name, date of birth / date of death, etc. Thus, libraries initially provide personal data in a structured form (which differs from biodata). As digitisation and the increase of electronic services are progressing, libraries are gaining access to biographical resources beyond the mere metadata. The poster refers to two Open Access resources, managed by the library of the Leibniz Institute for East and Southeast European Studies Regensburg (IOS), and discusses strategies to open them up to further enhancement: the Erik Amburger Database and the BioLexSOE online. Keywords—biographical dictionary; biographical database; library; authority files; Open Access I. LIBRARIES, INDEXING, AND THE PERSON Musil’s famous librarian from the Austrian State Library characterises his strategy of knowledge organization as follows: “[…] if you want to know how I know about every book here, I can tell you: Because I never read any of them. […] With the exception of the catalogue.”[1] Albeit this is of course a shortened and pointed statement, librarians like to mention this quote from the Man without Qualities. The reason may be that Musil touches an often unseen fact, pivotal for librarian consciousness: even more than books, even more than content, libraries organize and manage data referring to them. Indexing, the art of describing entities with distinct terms – metadata – is one of the crucial librarian’s tasks. Thus, libraries have developed powerful tools to identify entities in all subjects as unambiguously as possible. 
Authority files like the Gemeinsame Normdatei (GND)[2] combine different designation variants for named entities. The data sets contain not only preferred names for an informational object but definitions, references, and relations, based on the GND-Ontology.[3] Actually, the GND manages more than 10 million records about persons and person names. However, the way libraries deal with persons differs from the biographer’s approach. Biographical information within library infrastructures initially focuses on primary data such as name, name variants, date of birth / date of death, etc. Libraries are primarily interested in identifying individual entities, and not in creating narratives. Therefore, thesauri, authority files, and other classification systems are maintained. Librarian descriptions of persons are able to provide personal data, but not biographical data.[4] Personal data, according to Bourdieu and following Kripke’s Naming and Necessity, are used as rigid designators, forcing identity for external (social or institutional) purposes.[5] The availability of this kind of data is required to manage big amounts of information, as in libraries or in mass prosopographies like registration databases. They serve to mark up individuals and to reduce problems of redundancy and homonymy, “[…] but these basic facts do not provide a strong foundation for a more complex and interconnected depiction of lives online.”[6] In summary, library tools are highly appropriate for showing correlations and connections between entities / information objects, and for structuring resources referring to variant names.[7] But in contrast to biographies, person data collected by libraries provide indications of lives rather than an insight into their course. 
Authority files like the GND are first of all indexing tools; they are not an encyclopaedia.1 Hence, Musil’s librarian’s strategy of not getting into the subject matter seems to reflect a professional conviction: in knowledge organisation it’s all about | non_poster |
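The distinction drawn here, identifying persons via preferred names and variants rather than narrating lives, can be made concrete with a minimal authority-record model. The records below are invented and only loosely GND-inspired (real records carry definitions, references, and relations as well):

```python
# One record per entity: identifier, preferred name, variant names.
AUTHORITY = [
    {"id": "p-001", "preferred": "Musil, Robert",
     "variants": {"Robert Musil", "Musil, Robert, 1880-1942"}},
    {"id": "p-002", "preferred": "Amburger, Erik",
     "variants": {"Erik Amburger"}},
]

def resolve(name):
    """Map any known name form to the record's identifier, or None.

    This is the 'rigid designator' use of personal data: the lookup
    disambiguates and links; it says nothing about the life itself.
    """
    for rec in AUTHORITY:
        if name == rec["preferred"] or name in rec["variants"]:
            return rec["id"]
    return None
```

The identifier, not the name string, is what catalogue entries then point at, which is exactly why such files excel at linking resources but yield no narrative.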
Robles-Gómez, A., González-Blanco, E., Ros, S., Del Rio Riande, G., Hernández, R., Tobarra, L., Caminero, A., Pastor, R. (2016). Researchers’ perceptions of DH trends and topics in the English and Spanish-speaking community. DayofDH data as a case study. In Digital Humanities 2016: Conference Abstracts. Jagiellonian University & Pedagogical University, Kraków, pp. 658-660. Defining the “state of the art” in Digital Humanities (DH) is a really challenging task, given the range of contents that this tag covers. One of the most successful efforts in this sense has been the international blogging event known as “DayofDH” or “A Day in the Life of the Digital Humanities” project, promoted and sponsored by centerNet (http://www.dhcenternet.org/), which has put together digital humanists from around the world to document once a year what they do (Rockwell et al., 2012). The websites of DayofDH were hosted in North America until 2015, when it was coordinated in Europe by LINHD (http://linhd.uned.es), the Digital Innovation Lab, at UNED in Madrid. Participants belong to several countries around the world. The relevance of DH in non-English speaking countries has been quick and important in the last decade, and especially important in the Spanish-speaking world (Spence and González-Blanco, 2014; González-Blanco, 2013; Del Rio Riande, 2014a; Del Rio Riande, 2014b; Galina et al., 2015). Technological projects for humanities have existed in the Spanish world for many years; however, the discipline called “Digital Humanities” arose in 2011 with the first meeting that originated the Spanish Digital Humanities Association, HDH. 
This relevance is reflected in the creation of a parallel version of the DayofDH in Spanish, the “DíaHD”, which was hosted by the UNAM in Mexico in 2013 and 2014 and converged in the last initiative at UNED, transforming both blogging events into a bilingual version of the Day. Although there have been general studies about the information on participation in those events (Priani et al., 2014), there has not been an automated data analysis using NLP (Natural Language Processing) or Big Data tools to extract and classify the relevant information gathered in blogs (Webb et al., 2004). More technical details about these aspects can be found in (Tobarra et al., 2014b). Accordingly, the main goal of this paper is to develop a dashboard that allows us to gain more insight into the topics of interest and leaderships of this community during the period in which this event has been held. By “dashboard” we mean the analysis and presentation of results, not a tool. In this sense, the topic characterization process deals with the detection of the most relevant topics employed in the publication tools of these kinds of virtual communities (Tobarra et al., 2014a). In order to achieve the aforementioned objectives, this work focuses on the datasets corresponding to four years of DayofDH (the 2012, 2013, 2014 and 2015 editions) and the Spanish version of the event, DíaHD 2013. This work strives to show the evolution and trends of the last four years in order to give an account of the presence of the Hispanic communities in the field. The information of the Spanish 2014 edition has been discarded, as it is no longer available online due to technical problems at the organizing institution. All editions of DayofDH employ WordPress, which has an associated SQL database, including several general tables and a specific set of tables per blog, defined in the project and common to all editions. 
The CMS is combined with the BuddyPress social plugin, which lets users register, create communities and forums, and interact with one another. For the last edition of the Day, LINHD also included the bilingual plugin WPML to make available the possibility | non_poster |
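The topic-characterization step described in this row can be sketched minimally as salient-term counting across posts. The snippets and stop-word list below are invented stand-ins; the real pipeline would read the WordPress SQL tables mentioned above and use proper NLP tooling:

```python
import re
from collections import Counter

# Invented blog snippets standing in for DayofDH posts.
posts = [
    "Today I worked on TEI encoding of medieval manuscripts.",
    "We trained a topic model on our TEI corpus of manuscripts.",
    "Teaching digital humanities: TEI, topic models, and mapping.",
]
STOPWORDS = {"today", "i", "we", "on", "of", "a", "our", "and", "the"}

def top_terms(posts, k=3):
    """Count non-stopword tokens over all posts; return the k most common."""
    counts = Counter()
    for post in posts:
        for tok in re.findall(r"[a-z]+", post.lower()):
            if tok not in STOPWORDS:
                counts[tok] += 1
    return counts.most_common(k)
```

A dashboard would then plot these counts per edition and per language to surface the trends the paper is after; real analyses would add stemming, TF-IDF weighting, or topic models on top of this counting core.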
1 Esteban Cristaldo for the DUNE Collaboration The front-end electronics of the DUNE Photon Detection System | non_poster |
VIEW THE SUBMITTED ABSTRACT. 4. REVIEW AND CHECK OF THE SUBMITTED INFORMATION Dr. Giovanni L'Abate (giovanni.labate@crea.gov.it) IUSS24-PAP-5092-1119-137226-20240115160040 Preferred presentation type: Poster. List of authors: 1. Dr. G. L'Abate, affiliation 1, presenting author; 2. Dr. A. Lachi, affiliation 1. List of affiliations: 1. CREA, Agriculture and Environment, Firenze (FI), Italy. | non_poster |
Harnessing the Power of Pre-Trained Models in Healthcare: A Comprehensive Review and Future Directions Abstract The integration of pre-trained models in healthcare has witnessed a paradigm shift in the development of robust and efficient diagnostic tools. This paper presents a comprehensive review of the current landscape, challenges, and opportunities surrounding the application of pre-trained models in various healthcare domains. Leveraging pre-existing knowledge from large datasets, these models offer a compelling avenue for accelerating advancements in medical imaging, disease diagnosis, and patient care. 1. Introduction In recent years, the intersection of artificial intelligence (AI) and healthcare has given rise to transformative advancements, catalyzing a paradigm shift in diagnostic methodologies and patient care. Among the myriad approaches within this burgeoning field, the integration of pre-trained models has emerged as a cornerstone, revolutionizing the landscape of healthcare applications. This paper seeks to provide a comprehensive introduction to the pivotal role of pre-trained models in the healthcare domain, shedding light on their potential to enhance diagnostic accuracy, streamline treatment planning, and ultimately elevate patient outcomes. The impetus behind the integration of pre-trained models lies in their capacity to harness knowledge gleaned from vast and diverse datasets in unrelated domains. By leveraging the foundational learnings of these models, healthcare practitioners can expedite the development of robust diagnostic tools, particularly in the realms of medical imaging, disease diagnosis, and predictive analytics. The burgeoning success of these models, rooted in the principles of transfer learning, has paved the way for accelerated breakthroughs in the medical field, overcoming challenges associated with limited datasets and computational resources. 
As we delve into the multifaceted applications of pre-trained models in healthcare, it becomes evident that their impact extends across various dimensions. From the intricate analysis of medical imaging, including the discernment of anomalies in radiological scans, to the nuanced interpretation of electronic health records for disease diagnosis and prognostication, these models offer a promising avenue for augmenting the capabilities of healthcare professionals. | non_poster |
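The transfer-learning recipe this review keeps invoking can be sketched in a few lines: keep a pre-trained feature extractor frozen and fit only a small task-specific head on the new data. Everything below is synthetic (a fixed random projection stands in for the pre-trained network, and the "medical" labels are invented), so it illustrates the mechanics only, not a clinical-grade model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen 'pre-trained' encoder: weights are fixed and never updated.
W_frozen = rng.normal(size=(16, 64))

def encode(x):
    """Feature extractor reused from the (faked) pre-training stage."""
    return np.tanh(x @ W_frozen)

# Tiny synthetic dataset: 100 patients, 16 input measurements, binary label.
X = rng.normal(size=(100, 16))
y = (X[:, 0] + 0.1 * rng.normal(size=100) > 0).astype(float)

# Only the linear head is fit on the new task (ridge-regularised least squares
# on +/-1 targets), which is the cheap part transfer learning leaves trainable.
F = encode(X)
head = np.linalg.solve(F.T @ F + 1e-2 * np.eye(64), F.T @ (2 * y - 1))
preds = (encode(X) @ head > 0).astype(float)
accuracy = (preds == y).mean()
```

This is why limited datasets and compute stop being blockers: the expensive representation is inherited, and only the small head must be estimated from scarce task data.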
HyperQueue: Overcoming Limitations of HPC Job Managers Stanislav Böhm stanislav.bohm@vsb.cz IT4Innovations, VSB – Technical University of Ostrava Czech Republic Jakub Beránek jakub.beranek@vsb.cz IT4Innovations, VSB – Technical University of Ostrava Czech Republic Vojtěch Cima vojtech.cima@vsb.cz IT4Innovations, VSB – Technical University of Ostrava Czech Republic Roman Macháček roman.machacek@vsb.cz IT4Innovations, VSB – Technical University of Ostrava Czech Republic Vyomkesh Jha jha0007@vsb.cz IT4Innovations, VSB – Technical University of Ostrava Czech Republic Alfréd Kočí alfred.koci@vsb.cz IT4Innovations, VSB – Technical University of Ostrava Czech Republic Branislav Jansík branislav.jansik@vsb.cz IT4Innovations, VSB – Technical University of Ostrava Czech Republic Jan Martinovič jan.martinovic@vsb.cz IT4Innovations, VSB – Technical University of Ostrava Czech Republic ABSTRACT In recent years, HPC workloads and communities have undergone substantial paradigm shifts. There is an increasing number of users who want to leverage HPC clusters to execute many simple and embarrassingly parallel tasks as easily as possible. However, due to the limitations of traditional HPC job managers, these users must often resort to manual aggregation of tasks into a smaller number of jobs to reduce job manager overhead. This approach is both labour-intensive and inefficient, as it lacks the dynamic load balancing required to fully utilize computational nodes with tens or hundreds of cores. We introduce HyperQueue - a task scheduling runtime that can execute a large number of tasks on top of an HPC job manager by automatically aggregating tasks into jobs and dynamically load balancing them across all allocated nodes and CPU cores. HyperQueue is an open-source tool that is designed for ease of use and deployment. 
KEYWORDS scheduling, task management, resource management, cluster 1 INTRODUCTION Modern HPC systems contain a large number of computational nodes, with each node having tens or hundreds of CPU cores. There is an increasing number of HPC users who want to leverage all this computational power with quite simple workloads, for example by running a large number of single-node, relatively short-lived tasks in an embarrassingly parallel manner [1]. While HPC systems are prepared for almost arbitrarily complex distributed computations, it can be surprisingly difficult to execute a large number of such simple tasks on them efficiently. HPC job managers such as Slurm [2] or PBS are almost ubiquitously used to manage the computational resources of HPC clusters. The most straightforward approach in this scenario is thus to simply map each task directly to a single job. However, this is usually infeasible, because the manager introduces nontrivial overhead and thus severely limits the number of jobs that can be submitted by a single user. The manager is also usually configured in a way that does not allow allocating sub-node resources (running multiple jobs on a single node concurrently), either for security or performance stability reasons. Therefore, if the tasks only use a few cores, they cannot fully utilize the provided computational nodes. These limitations can pose a barrier to entry for new HPC users, since they cannot submit tasks to the cluster in a straightforward way. To overcome these limitations, we are introducing HyperQueue, a task execution runtime that can efficiently execute many single-node tasks on top of a Slurm/PBS cluster. Users can submit a large number of tasks into HyperQueue in a simple way; it will then automatically aggregate the tasks into a smaller number of jobs. Once the jobs are started, it will distribute and dynamically load balance the tasks across all allocated jobs, nodes and cores to efficiently utilize available resources. 
HyperQueue is an open-source Rust project released under the MIT licence: https://github.com/It4innovations/hyperqueue. 2 RELATED WORK There are several other approaches that can be used inst | non_poster |
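The aggregate-and-balance idea can be sketched with a classic greedy heuristic. This is an illustrative model of the problem HyperQueue addresses inside one allocated job, not its actual Rust scheduler: assign each task, longest first, to the currently least-loaded core.

```python
import heapq

def balance(task_durations, n_cores):
    """Greedy longest-processing-time (LPT) assignment of tasks to cores.

    Keeps every core busy instead of mapping one task per job, which is
    the inefficiency the abstract above describes.
    """
    heap = [(0.0, core) for core in range(n_cores)]  # (load, core id)
    heapq.heapify(heap)
    assignment = {core: [] for core in range(n_cores)}
    for i, dur in sorted(enumerate(task_durations), key=lambda t: -t[1]):
        load, core = heapq.heappop(heap)   # least-loaded core so far
        assignment[core].append(i)
        heapq.heappush(heap, (load + dur, core))
    makespan = max(load for load, _ in heap)
    return assignment, makespan

assignment, makespan = balance([5, 3, 3, 2, 2, 1], n_cores=2)
```

LPT is a well-known approximation for makespan scheduling; a real runtime must additionally handle tasks arriving online, multi-resource requests, and node failures, which is where a dedicated scheduler earns its keep.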
Poster: Integrating a Secure Processing Environment in an IoT Operating System Lena Boeckmann HAW Hamburg Hamburg, Germany lena.boeckmann@haw-hamburg.de Thomas C. Schmidt HAW Hamburg Hamburg, Germany t.schmidt@haw-hamburg.de Matthias Wählisch TU Dresden Dresden, Germany m.waehlisch@tu-dresden.de Abstract—Trusted Execution Environments (TEE) and secure enclaves with hardware support are promising concepts for enhancing security in constrained environments. These approaches provide protected processing areas within an SoC, in which security-critical applications can execute, and at the same time prevent unauthorized access to sensitive data and program code. New microcontrollers with the Armv8-M architecture offer TrustZone-M, a hardware feature to protect memory and support TEEs. To facilitate adoption, Arm provides an open source reference implementation for a secure processing environment (Trusted Firmware-M). In this poster, we present how we integrated this secure firmware in an IoT operating system and measure the overhead cost in memory and execution time. Index Terms—Internet of Things, IoT, Security, Trusted Execution Environment, TrustZone-M I. INTRODUCTION Internet of Things (IoT) devices store, process and transmit sensitive data, while often being insecure and easily physically accessible by potential attackers. Vulnerable devices can serve as entry points to larger networks for compromising critical system components and infrastructure. To secure IoT systems we need measures to make those devices trustworthy. One way to achieve this are Trusted Execution Environments (TEE) [1]. Those are isolated components in which trusted applications (TA) can perform security-critical operations, such as secure storage of data and cryptographic key material, cryptographic operations, device authentication and attestation, and secure over-the-air (OTA) updates. 
TEEs can provide a reduced set of operations, only those required to establish trust between communication partners, and expose a smaller attack surface than a rich OS. An OS could be compromised by malware or through a physical attack. Separating critical operations from the OS provides an extra layer of security and allows for independent attestation and verification. In the constrained IoT, hardware-supported TEEs can help to protect devices. On Arm Cortex-M devices with the Armv8-M and Armv8.1-M architectures, TrustZone-M allows for a memory-map based system separation [2]. Flash and memory are split into secure and non-secure address regions, allowing access to secure addresses only when the CPU runs in a secure state. CPU state transitions are performed in hardware, either by triggering interrupts or through non-secure callable veneer functions, aiming to make them fast and efficient. [Fig. 1. TF-M in combination with a non-secure operating system like RIOT] To increase the security of the IoT operating system RIOT [5], we aim to integrate a secure firmware with the OS. RIOT is an open source project aiming at a small memory footprint and support for many different architectures. This is achieved by a minimalistic core, which can be extended with optional feature modules. One existing candidate for a secure firmware on Arm Cortex-M platforms is the open source project Trusted Firmware-M (TF-M) [4]. TF-M is a reference implementation of a Secure Processing Environment (SPE) [3], which has been specified as part of the Arm Platform Security Architecture (PSA) framework. Since RIOT already supports the PSA Crypto API [6], we decided to also evaluate the suitability of TF-M as a secure firmware in RIOT. In this poster, we report on ongoing work to leverage TEE technologies in RIOT. 
We partly integrated TF-M with the OS and document the steps needed to run RIOT side by side with the firmware (§ II). We then measur | non_poster |
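The secure/non-secure separation described in this sample (secure assets reachable only through designated entry points, never directly) can be illustrated with a minimal sketch. This is a toy Python model of the boundary, not TF-M's actual API or real CMSE code; the class, function names, and the XOR "MAC" are invented for illustration:

```python
# Toy model of a TrustZone-style boundary: the "secure side" owns a key
# and exposes only a narrow entry point; non-secure code never sees the
# key itself, only the result that crosses the boundary.
class SecureWorld:
    def __init__(self, key: bytes):
        self._key = key  # stays on the secure side (name-mangled only; a real
                         # TEE enforces this in hardware, not by convention)

    def secure_mac(self, msg: bytes) -> int:
        """Veneer-like entry point: returns a toy XOR 'MAC', not the key."""
        mac = 0
        for i, b in enumerate(msg):
            mac = (mac << 1) ^ (b ^ self._key[i % len(self._key)])
        return mac

# Non-secure caller: can request a MAC, cannot read the key directly.
sw = SecureWorld(key=b"\xde\xad\xbe\xef")
print(sw.secure_mac(b"\x00\x00"))
```

The design point mirrored here is that only data, never secrets, crosses the entry point; on Armv8-M the equivalent of `secure_mac` would live in non-secure-callable memory and the key in secure flash.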
For new PGT International students in UK HE PGTI Student Induction Ensuring international students have a supportive and smooth transition into UK Higher Education, leading to a sense of belonging and positive student experience Key considerations for providing a pre-arrival induction resource for new postgraduate taught international students What is the resource - general, specific, internal/external information, information-only or interactive, links to other induction activity Who is accessing the resource - alt text, contrast, size of text, format of document, jargon How is the resource accessed - mobile friendly, offline/online, data volume, log-ins When is the resource accessed - time limited, up-to-date information, time-relevant information, support available Utility of resource - metrics on usage, evaluation Things international students may want to know | non_poster |
STI 2022 From Global Indicators to Local Applications STI 2022 | https://doi.org/10.5281/zenodo.6948453 Poster STI 2022 Conference Proceedings Proceedings of the 26th International Conference on Science and Technology Indicators All papers published in this conference proceedings have been peer reviewed through a peer review process administered by the proceedings Editors. Reviews were conducted by expert referees to the professional and scientific standards expected of a conference proceedings. Proceeding Editors Nicolas Robinson-Garcia Daniel Torres-Salinas Wenceslao Arroyo-Machado Citation: Orduña-Malea, E., & Bautista-Puig, N. (2022). Measuring web connectivity between research organizations through ROR identifiers. In N. Robinson-Garcia, D. Torres-Salinas, & W. Arroyo-Machado (Eds.), 26th International Conference on Science and Technology Indicators, STI 2022 (sti22101). https://doi.org/10.5281/zenodo.6948453 Copyright: © 2022 the authors, © 2022 Faculty of Communication and Documentation, University of Granada, Spain. This is an open access article distributed under the terms of the Creative Commons Attribution License. Collection: https://zenodo.org/communities/sti2022grx/ | non_poster |
University of Aberdeen A Datagram API for Evolving Networks Beyond 5G Jones, Tom Harvey; Fairhurst, Godred; Vyncke, Eric Publication date: 2017 Document Version Publisher's PDF, also known as Version of record Link to publication Citation for published version (APA): Jones, T., Fairhurst, G., & Vyncke, E. (2017). A Datagram API for Evolving Networks Beyond 5G. Poster session presented at European Conference on Networks and Communications 2017, Oulu, Finland. General rights Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights. - Users may download and print one copy of any publication from the public portal for the purpose of private study or research. - You may not further distribute the material or use it for any profit-making activity or commercial gain. - You may freely distribute the URL identifying the publication in the public portal. Take down policy: If you believe that this document breaches copyright please contact us providing details, and we will remove access to the work immediately and investigate your claim. Download date: 19. Apr. 2018 | non_poster |
European Constitutional Court Network Lando Kirchmair and Lisa Lechner Funding: go!digital ÖAW 1 | non_poster |
Proceedings of the LXVI SIGA Annual Congress Bari, 5/8 September, 2023 ISBN: 978-88-944843-4-2 Poster Communication Abstract – 4.13 DURUM WHEAT ROOT SYSTEM RESPONSE TO ABIOTIC STRESS URBANAVIČIŪTĖ I.*, BONFIGLIOLI L.*, PAGNOTTA M. A.* *) University of Tuscia root architecture, salinity, drought, abiotic tolerance Durum wheat (Triticum turgidum L. subsp. durum (Desf) Husn.) is one of the major cultivated crops across the Mediterranean basin. In this region, abiotic stresses have a significant impact on durum wheat cultivation, resulting in losses of total production of up to 72% and 40% for drought and salinity, respectively. Under abiotic stress conditions, the root system is crucial for crop adaptability and productivity, and its response depends on stress timing, intensity, and duration. Moreover, high-throughput analysis technologies make it possible to determine the root system response in more detail and to identify target root traits for crop improvement under abiotic stress. The present work was conducted in the frame of the ECOBREED project (European Union's Horizon 2020 research and innovation programme under grant agreement No 771367). The aim of this study was to evaluate the phenotypic root system response of four durum wheat genotypes (one drought- and salt-tolerant landrace and three modern varieties) under drought and salt stress over time, from seedlings until maturity. The experiment was performed in pots in the greenhouse of Tuscia University, Viterbo, Italy. During the experiment, stress conditions (drought and salt) were applied four times: drought conditions were imposed by discontinuing watering, and the salt pots were irrigated with a salt solution (250 mM NaCl) three times per week. The root phenotypic data were collected after each treatment and analysed using the high-throughput software WinRHIZO. Although the genotypes' roots responded very differently to abiotic stress throughout the experiment, at the end of the experiment the landrace J. Khetifa had parameters significantly higher than those of the analysed modern varieties. Our results indicated that local landraces could provide | non_poster |
The Influence of Low Traffic Neighbourhood Scheme on Multimodal Traffic Flow in London Xianghui Zhang*1, Tao Cheng†1 1 SpaceTimeLab for Big Data Analytics, Department of Civil, Environmental and Geomatic Engineering, University College London, London WC1E 6BT, UK GISRUK 2023 Summary This study investigates the influence of the Low Traffic Neighbourhood (LTN) schemes deployed since the COVID-19 pandemic on multimodal traffic flow in London. We adopt a mobile phone application dataset to investigate the changes in multimodal traffic flow generated by the general public following the introduction of LTNs. Three LTNs located in London are explored between 4th May and 30th August 2020. The analysis showed that the LTN scheme could encourage residents to cycle and could restrict through-traffic, but the influence varies across areas, travel modes, and groups, and may be affected by the specific measures adopted. KEYWORDS: Multimode, Traffic flow, Low Traffic Neighbourhood, London 1. Introduction After the outbreak of the COVID-19 pandemic in early 2020, LTN schemes were quickly delivered in several London boroughs to make it easier for active travellers to keep social distance. These new LTNs are implemented based on the combination of local knowledge and TfL's Strategic Neighbourhoods Analysis (Transport for London, 2020). Based on this analysis and decision-making, several combinations of intervention measures are adopted to make up LTNs, such as modal filters, bus gates, and traffic signals. After deploying these measures, through-traffic is restricted from accessing the intervened areas, but residents' vehicles and public service vehicles can still reach destinations within the LTNs. Existing research has explored the influences of LTN schemes deployed before and after the pandemic on active travel (O'Malley, 2021), road safety (Goodman et al., 2021), and health (Laverty et al., 2021). 
These existing studies contributed useful knowledge for evaluating LTN schemes, but their direct influences on preventing through-driving and on mode shifting are rarely examined due to the limitations of available data sources. Although some local authorities monitored the changes in traffic flow after introducing LTNs through traffic count sites, these data only report the driving flow on limited road segments. Besides, existing research has focused only on the responses of residents (via social surveys) but ignored the well-being of people with trip attractions (i.e., workers and visitors) and pass-through people, who are also important components of local mobility. Moreover, existing research has not revealed the responses of cycling and walking flow to the LTN scheme, although encouraging these modes is one of its important targets. To address this gap, this paper aims to evaluate the impacts of LTNs on mobility by analysing the multimodal traffic flow of the general public, including residents, people with trip attractions, and pass-through people. We will generate a multimodal traffic flow dataset based on individual multimodal travel data derived from Mobile Apps GPS data. Three LTNs deployed in London from * xianghui.zhang.20@ucl.ac.uk † tao.cheng@ucl.ac.uk | non_poster |
Call for papers – CBEB 2014 All papers accepted for presentation at the event will be published in the CBEB 2014 Proceedings and indexed in databases such as Scopus and Web of Science. Outstanding papers will be invited to take part in a special issue of the Revista Brasileira de Engenharia Biomédica. CBEB 2014 Biomedical Engineering as a Driver of Development and Technological Innovation in Health. www.cbeb.org.br Important dates Paper submission: 01/05/2014 to 15/06/2014. Notification of acceptance: from 15/07/2014. Final version submission: by 15/08/2014. Registration for CBEB 2014 is now open. Thematic areas Evaluation of Techniques and Methods in Biomedical Engineering; Biomaterials and Tissue Engineering; Bioengineering; Biomedical Devices and Instrumentation; Biomechanics; Biomedical Imaging; Neuroengineering and Rehabilitation Engineering; Digital Signal Processing; Biomedical Robotics and Surgical Technology; Bioethics; Biomedical Engineering Education. Contact Send your questions or suggestions about CBEB 2014 to cbeb.brasil2014@gmail.com | non_poster |
Colección de ESMOS 1 Infographic Tetracyclines: they do not only harm bacteria Genaro Miguel Valencia-Macías* iD, Mariana Ruíz-Coronado iD, Mariana Mendoza-Badillo iD, Olíblish Mariel Laguna-Morales iD, Carlos Eliú Granados-Bernardo iD Undergraduate students in Biotechnology, Facultad de Ciencias Biológicas, Benemérita Universidad Autónoma de Puebla, Puebla, Mexico. Email: *genaro.valencia@alumno.buap.mx 3 December 2024 DOI: http://doi.org/10.5281/zenodo.14271709 Edited by: Verónica Quintero Hernández (Research Professor, Cátedras CONAHCyT, Instituto de Ciencias, Benemérita Universidad Autónoma de Puebla) Reviewed by: Pedro Alejandro Fong Coronado (Graduate programme in Sciences (Microbiology), Instituto de Ciencias, Benemérita Universidad Autónoma de Puebla, Puebla, Mexico). Layout support: Sofía Salazar Ortega (Clinical Biochemistry student, Universidad de las Américas Puebla, Puebla, Mexico). Abstract Tetracyclines were discovered in the early 1940s and are known as bacteriostatic agents for their antimicrobial activity, inhibiting protein synthesis in the recipient organism [1, 2]. Their mechanism of action begins when they cross the outer membrane of the host bacterium via | non_poster |
July 2022 Farming fact sheet – Everything you need to know about raising pigs for pork! Lead author: Chaire BEA Contributors: Amandine Rave, Valérie Courboulay Infographic: Noÿa Broise DOI: 10.5281/zenodo.12800521 https://chaire-bea.vetagro-sup.fr Now that pigs no longer hold any secrets for you, you may be wondering how the farming of so-called "charcutier" (pork) pigs is organised, what proportion of the animals are raised indoors or outdoors, and at what age they begin to be fattened and are then sent to the slaughterhouse. Thanks to this new educational fact sheet produced in partnership with IFIP (with special thanks to Valérie Courboulay), you will soon know more about the subject! | non_poster |
1 Elements and relationships taken from the Archimate tool: http://www.archimatetool.com Depicted relationship1 Relationship used Description Assignment Relationship Business Actor and Business Role are connected with the Assignment Relationship; the Business Actor is, so to speak, assigned to the Business Role. A Business Actor can be assigned to several Business Roles, and several Business Actors can be assigned to one Business Role. Serving Relationship A Business Service exposes the function of a Business Role. A Business Collaboration can also be used in this relationship instead of the Business Role. | non_poster |
A Case of Open Access in Sub Sahara Africa Joy Owango Executive Director Training Centre in Communication | non_poster |
Colección de ESMOS 1 Infographic A look at biological markers Ana Carolina Robles Ramos* iD Undergraduate programme in Biotechnology, Facultad de Ciencias Biológicas, Benemérita Universidad Autónoma de Puebla, Puebla, Mexico. *Email: 202074317@viep.com.mx 25 May 2023 DOI: http://doi.org/10.5281/zenodo.7969448 Edited by: Jesús Muñoz-Rojas (Instituto de Ciencias, Benemérita Universidad Autónoma de Puebla). Reviewed by: Luis Ernesto Fuentes-Ramírez (Instituto de Ciencias, Benemérita Universidad Autónoma de Puebla). Abstract As time goes by, new methodologies, techniques and tools have been developed in favour of health, since there are still diseases that cannot be detected in advance or whose testing can be very costly. A major step forward in this regard are biomarkers, or biological markers, defined as measurable characteristics that indicate what is happening in an organism or | non_poster |
GNB2023, June 21st-23rd 2023, Padova, Italy 1 Abstract—Humans are continuously exposed to a wide variety of chemicals. Animal and bi-dimensional (2D) in vitro models currently represent the gold standard for chemical toxicity assessment, although they do not adequately reproduce the real human scenario, thus resulting in poorly predictive results. In this context, New Approach Methodologies are needed for a proper assessment of chemical toxicity. For instance, three-dimensional (3D) bioengineered in vitro models have the potential to closely represent the human physio-pathological milieu, enabling a more reliable evaluation of chemical toxicity. Among human tissues/organs, the myocardium is a target organ for chemical- and drug-induced toxicity. In this work, a custom-made poly(ester urethane) was synthesized and used as raw material to fabricate a platform of multi-layered structures with different geometrical features through melt extrusion additive manufacturing. The designed matrices were also surface functionalized with proteins to improve their biomimesis of the native cardiac extracellular matrix. The physico-chemical properties of the constructs proved their suitability for the development of in vitro bioengineered cardiac tissue models, enabling the investigation of physio-pathological processes and cardiotoxicity testing. Keywords—poly(urethane)s, in vitro cardiac tissue models, additive manufacturing, surface functionalization. I. INTRODUCTION Drug development, chemical toxicity assessment, and disease modeling heavily rely on the use of animal models and two-dimensional in vitro models [1]. However, these models exhibit several drawbacks that make their outcomes poorly predictive of the real human scenario. Two-dimensional models are extremely simple and do not allow the complexity of human tissue structure to be replicated [2]. Conversely, animal models present high management costs and ethical issues. 
Furthermore, due to interspecies differences, animal models are unlikely to faithfully replicate the human physio-pathological behavior. The concordance between toxicity levels evaluated in humans and laboratory animals is around 71% when both rodent (mice and rats) and non-rodent (dogs and monkeys) models are used. This concordance is reduced to 43% using only rodents as animal models [3]. The consequences of these issues are diverse and heterogeneous. The selection effectiveness of preclinical trials is reduced: only 11.8% of drugs entering clinical trials reach approval. In addition, there are considerable cases of approved drugs showing undesirable toxic effects. For instance, during the last decade eight non-cardiovascular drugs have been reported to induce ventricular arrhythmia and sudden death, and thus they have been withdrawn from the market [4]. Furthermore, the proper and reliable assessment of the toxicity of chemicals and chemical mixtures is another key target to achieve, with important health, social and economic repercussions. Until 2008, only 5,000 chemicals out of 100,000 present on the western market had been subjected to an approved analysis process due to the limitations of two-dimensional and animal models [3]. In this complex scenario, the development of reliable three-dimensional (3D) bioengineered in vitro models is of central importance since they can be very useful tools for overcoming the limitations of previously adopted approaches [5] [6]. In particular, the design of reliable 3D in vitro models of the cardiac tissue is gaining increasing interest in the research community. On the PubMed database the number of publications related to "cardiac tissue models" is exponentially increasing over time, with around 4000 publications in 2022. This trend can be correlated to the very high social and economic impact of heart diseases, which are responsible for around 17.5 million deaths worldwide each year [7]. 
Secondly, the high complexity of the cardiac tissue and the very limited possibilit | non_poster |
Workshop 12 Sept. 2023 in person, 14-15 Sept. 2023 online 3. New strategies for distance learning: how to build a MOOC based on ROSSIO resources (the example of Whaling in Portugal) NOVA FCSH Campolide Campus Colégio Almada Negreiros Room 221 / Zoom 4. The ROSSIO virtual research environment: a tool for research in the social sciences, arts and humanities (SSAH) 1. Digital exhibitions and collections on the ROSSIO Portal 2. The controlled-vocabulary services in the ROSSIO infrastructure Contact: rossio@fcsh.unl.pt + info: rossio.pt Sessions are FREE but registration is MANDATORY: in person by 11 Sept. 23, online by 13 Sept. 23 ORGANISED BY: NDDI/DITD | non_poster |
Opportunities and limits in the computational stylistics of poetry: automatic detection of enjambment in English Eulalie Monget, Pablo Ruiz Fabo Université de Strasbourg, LiLPa UR 1339 / F-67000 Strasbourg, France Several international initiatives attest to the current interest in literary analysis assisted by computational means, such as the Digital Literary Stylistics group (SIG-DLS) of the Alliance of Digital Humanities Organizations. The Plotting Poetry conference shows the variety of poetic phenomena addressed with computational tools (https://plottingpoetry.wordpress.com/). A thematic issue of the journal Langages (2015) provides another overview. Projects involving automatic linguistic annotation for literary analysis share certain concerns: how can concepts from literary analysis be operationalized on the basis of annotations produced by Natural Language Processing (NLP) tools originally designed for non-literary analysis? How should we evaluate our automatic annotations of a stylistic feature, in terms of, and beyond, comparison with manually annotated reference data? What gain in knowledge specific to literary analysis can be achieved through automatic stylistic annotation that would be impossible without computational processing? These questions also occupy us in our project on the automatic detection of enjambment in English poetry. Enjambment involves a mismatch between the pauses required by the metrical structure (ends of lines or hemistichs) and the pauses demanded by syntax or meaning (cf. Golomb, 1979, p. 269). It is often encountered when a phrase is split across two successive lines, thwarting the expectation of a pause at the end of the first line. Beyond this general characterization, there is no consensus on the definition of enjambment (cf. Quilis, 1964; Hollander, 1975; Golomb, 1979; Hussein et al., 2018; Delente, 2019). This is one reason to develop software that implements the different possible definitions: by automatically detecting their occurrences in a large corpus, the strengths and limits of each definition can be examined against a broad set of examples. Moreover, there are no published studies on the automatic detection of enjambment in English, unlike German (Hussein et al., 2018) or Spanish (Ruiz et al., 2017, http://prf1.org/anja/index/). Regarding operationalization, we adopt a largely syntax-based definition (Quilis, 1964): enjambment occurs when the end of a line splits certain sequences with strong internal cohesion. Its strengths: first, ease of operationalization, since the definition involves sequences of part-of-speech tags, syntactic dependencies, and constituents, all provided by NLP libraries; second, the interest of checking whether this approach, already applied to Spanish (Ruiz et al., 2017), is applicable to English. We have observed limits, having modified the typology to better handle English (see https://git.unistra.fr/enj/corpus-reference); more generally, Delente (2019) discusses the limits of syntactic definitions. | non_poster |
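The syntax-based operationalization this sample describes can be illustrated with a rough sketch (not the authors' actual tool, whose typology is richer): flag a line break as enjambed when the line ends in a part-of-speech tag that normally cannot close a phrase. The tag set below is an illustrative assumption, using Universal POS tags:

```python
# Minimal sketch of a syntax-based enjambment detector (illustrative).
# Each verse line is a list of (token, POS) pairs; POS tags follow the
# Universal POS tagset. A line break is flagged as enjambed when the
# line ends in a tag with strong forward cohesion (hypothetical set).

NON_FINAL_TAGS = {"DET", "ADJ", "ADP", "AUX", "CCONJ", "SCONJ"}

def detect_enjambment(tagged_lines):
    """Return the 0-based indices of lines whose ending break is enjambed."""
    hits = []
    for i, line in enumerate(tagged_lines[:-1]):  # last line has no break
        if line and line[-1][1] in NON_FINAL_TAGS:
            hits.append(i)
    return hits

poem = [
    [("I", "PRON"), ("wandered", "VERB"), ("through", "ADP"), ("the", "DET")],
    [("meadow", "NOUN"), ("at", "ADP"), ("dawn", "NOUN")],
    [("and", "CCONJ"), ("slept", "VERB")],
]
print(detect_enjambment(poem))
```

A real system would take the tags from an NLP library rather than hard-coded input, and would also inspect dependencies and constituents, as the sample notes.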
PROGRAM ISCT EUROPE 2024 REGIONAL MEETING REGIONAL TITLE SPONSOR SEPTEMBER 4-6, 2024 • GOTHENBURG • SWEDEN AstraZeneca has provided a sponsorship grant towards this independent programme | non_poster |
Full-Body Interaction for Live Coding Juan F. Olaya-Figueroa Laura V. Zapata-Cortés Camilo A. Nemocón-Farfán Universidad Escuela de Artes y Letras juanolayafi@gmail.com Universidad El Bosque laura.zapata1191@gmail.com Universidad Jorge Tadeo canfcero@gmail.com ABSTRACT This paper describes the integration of a full-body interactive system for use in live coding performances. The system has an interactive mat with a grid layout, which is located on the floor and serves as a guide for the user to move within the 17 areas into which it is divided. The body gestures made by the user are mapped to modify the execution of the algorithms that generate the sounds and the visuals in real time. Thus, the audience can participate in the improvisation through full-body interactions and without having programming knowledge. The system allows the live coders to manage the parameters manipulated by the user during the interactive experience. It was noted that users recognize the interactive mat as a space of interaction with the system and also that users move intuitively on the mat. In addition, we found that the users recognize the feedback of their body gestures, by observing the change in the outcomes of the visuals and sounds, during a live coding performance. 1. INTRODUCTION Live coding focuses on the possibility of generating visual and sound compositions in real time, using algorithms or programming code. This practice centers on the person or group of experts, live coders/performers, who use the code to generate the visual and sound sets, without taking into account the participation of the audience, who cannot be part of the creation of the composition. In this work we explore the possibility of audience intervention through body gestures within a live coding practice. This project seeks a symbiotic performance between a person from the audience and the live coders in real time. 
The execution of the algorithm created by the live coder is modified by the movements of the body, which generates a collaborative composition between the user and the live coder to jointly create a sound and visual discourse. To achieve this goal, a programmable system was proposed using the following tools: Processing (Reas and Fry 2006), SuperCollider (McCartney 1996), TidalCycles (McLean and Wiggins 2010), Hydra (Jackson 2018) and a Kinect device (Zhang 2012). These tools make it possible to generate a live coding performance in a sonorous, visual and interactive way, integrating the capture of movements in real time with the full-body tracking device, Kinect. Moreover, an interactive mat was developed as a space divided into areas where the user can move and interact. 2. PROBLEM AND MOTIVATION The processes of improvisation in live coding are executed by specialists in the generation of sounds and visuals. However, during live coding performances, a high interest of the audience in participating in the improvisation has been observed. Nonetheless, the lack of specialized knowledge does not allow the audience to participate in the improvisation. A system was developed in order to let the audience participate in the composition of a live coding performance. It allows users to use the movements of their body to modify the algorithms, which are being written | non_poster |
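The gesture-to-parameter mapping this sample describes can be sketched as follows. The 17-area mat is treated here as a hypothetical one-dimensional strip (the excerpt does not give the grid's exact dimensions), and the tempo parameter is an invented example of a value a live coder might expose:

```python
# Illustrative sketch of mapping a tracked body position on the mat to a
# live-coding parameter. The 1x17 strip layout and the tempo mapping are
# hypothetical; positions are normalized to [0, 1) along the mat.

N_AREAS = 17

def area_index(x_norm):
    """Map a normalized position on the mat to one of the 17 areas."""
    x_norm = min(max(x_norm, 0.0), 1.0 - 1e-9)  # clamp into [0, 1)
    return int(x_norm * N_AREAS)

def area_to_tempo(area, lo=60, hi=180):
    """Map an area index to a tempo in BPM, linearly across the mat."""
    return lo + (hi - lo) * area / (N_AREAS - 1)

print(area_index(0.5), area_to_tempo(area_index(0.5)))
```

In the actual system the position would come from Kinect skeleton tracking and the parameter would feed TidalCycles or Hydra; the point of the sketch is only the discretize-then-map structure.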
Publications 2017 Refereed Journal Articles (published) Allen-Ankins S, Stoffels RJ (2017) Contrasting fundamental and realized niches: two fishes with similar thermal performance curves occupy different thermal habitats. Freshwater Science 36 (3): 635-652 DOI: 10.1086/693134. Burrows RM, Rutlidge H, Bond NR, Eberhard SM, Auhl A, Andersen MS, Valdez DG, Kennard MJ (2017) High rates of organic carbon processing in the hyporheic zone of intermittent streams. Nature Research Scientific Reports 7: 13198 DOI: 10.1038/s41598-017-12957-5 Carew ME, Nichols SJ, Batovska J, St Clair R, Murphy NP, Blacket MJ, Shackleton ME (2017) A DNA barcode database of Australia's freshwater macroinvertebrate fauna. Marine and Freshwater Research 68: 1788-1802 https://doi.org/10.1071/MF16304 Dabrowski J, Baldwin DS, Dabrowski JM, Hill L, Shadung (2017) Impact of temporary desiccation on mobility of nutrients and metals from sediments of Loskop Reservoir, Olifants River. Water SA Vol. 43 No. 1 http://dx.doi.org/10.4314/wsa.v43i1.02 Deane DC, Nichol JM, Gehrig SL, Harding C, Aldridge KT, Goodman AM, Brookes JD (2017) Hydrological-niche models predict water plant functional group distributions in diverse wetland types. Ecological Applications 27 (4) 1351-1364 Freestone FL, Brown P, Campbell CJ, Wood DB, Nielsen DL, Henderson MW (2017) Return of the lignum dead: Resilience of an arid floodplain shrub to drought. Journal of Arid Environments 138 9-17 http://dx.doi.org/10.1016/j.jaridenv.2016.11.011 Goldrick S, Holmes W, Bond N, Lewis G, Kuiper M, Turner R, Farid SS (2017) Advanced Multivariate Data Analysis to Determine the Root Cause of Trisulfide Bond Formation in a Novel Antibody–Peptide. Biotechnology and Bioengineering Vol 114, No 10 2222-2234 Haby NA (2017) Long-term revegetation success of severely degraded chenopod shrublands. 
The Rangeland Journal, 39 341-354 https://doi.org/10.1071/RJ17027 Harris JH, Kingsford RT, Peirson W, Baumgartner L (2017) Mitigating the effects of barriers to freshwater fish migrations: the Australian experience. Marine and Freshwater Research 68, 614-628 http://dx.doi.org/10.1071/MF15284 Holland A, Wood CM, Smith DS, Correia TG, Val AL (2017) Nickel toxicity to cardinal tetra (Paracheirodon axelrodi) differs seasonally and among the black, white and clear river waters of the Amazon basin. Water Research 123: 21-29 https://doi.org/10.1016/j.watres.2017.06.044 Karlsbakk E, Kristmundsson Á, Albano M, Brown P, Freeman MA (2017) Redescription and phylogenetic position of Myxobolus aeglefini and Myxobolus platessae n. comb. (Myxosporea), parasites in the cartilage of some North Atlantic marine fishes, with notes on the phylogeny and classification of the Platysporina. Parasitology International 66, 952-959. Kopf KR, Nimmo DG, Humphries P, Baumgartner LJ, Bode M, Bond NR, Byrom AE, Cucherousset J, Keller RP, King AJ, McGuiness HM, Moyle PB, Olden JD (2017) Confronting the risks of large-scale invasive species control. Nature Ecology & Evolution. Vol 1, 0172 DOI: 10.1038/s41559-017-0172 | http://www.nature.com/articles/s41559-017-0172 McInerney PJ, Rees GN (2017) Co-invasion hypothesis explains microbial community structure changes in upland streams affected by a riparian invader. Freshwater Science 36 (2) 297-306 https://doi.org/10.1086/692068 McInerney PJ, Stoffels RJ, Shackleton ME, Davey CD (2017) Flooding drives a macroinvertebrate biomass boom in ephemeral floodplain wetlands. Freshwater Science 36 (4): 726-738 DOI: 10.1086/694905. Portinho JL, Nielsen DL, Ning N, Paul W, Noguiera M (2017) Spatial variability of aquatic plant and microfaunal seed and egg bank communities within a forested floodplain system of a temperate Australian river. 
Aquatic Sciences 79 515-527 https://doi.org/10.1007/s00027-016-0514-z Robson BJ, Lester RE, Baldwin DS, Bond NR, Drouart R, Rolls RJ, Ryder DS, Thompson R (2017) Modelling food-web mediated effects of hydrological variability and environmental flows. Water Research. 1 | non_poster |
Pulsation in pre-main sequence stars: TESS observations & models from accreting protostars T. Steindl, K. Zwintz, T. G. Barnes, M. Müllner, E. I. Vorobyov Institute für Astro- and Particlephysics, University of Innsbruck, Technikerstraße 25, A-6020 Innsbruck, Austria The University of Texas at Austin, McDonald Observatory, 2515 Speedway, Stop C1402, Austin, TX 78712-1206, USA Department of Astrophysics, University of Vienna, Vienna 1180, Austria Research Institute of Physics, Southern Federal University, Rostov-on-Don 344090, Russia In the context of stellar modelling, the pre-main sequence phase of evolution is often substantially simplified. Initial models are created with huge radii and uniform contraction to make them fully convective. These then follow the classical pre-main sequence evolution, starting with a fully convective contraction along the Hayashi track. Only after the onset of thermal reactions slows the contraction does the star develop a radiative core, which continues to grow while the star evolves along the Henyey track. Before arriving at the main sequence, a first episode of hydrogen burning leads to a convective core and pauses the contraction, resulting in the well-known hook in the evolutionary track. This, however, is far from the real processes that happen in and around newly born stars. Born in the collapse of a molecular cloud, protostellar seeds typically have a few Jupiter masses at a few solar radii. A main difference to the classical pre-main sequence evolution is that these protostellar seeds need to accrete material from their surrounding cloud/disk to obtain the same mass at the zero age main sequence. Such accreting protostars never obtain the huge radii adopted for the classical initial models. As a consequence, the evolutionary track is completely altered. The model starts at significantly lower temperatures and accretes material at a constant rate. 
The initially fully convective star quickly obtains a radiative part between the convective core and the convective envelope. Only during the short phase of deuterium burning will (almost) the whole star be convective again. After the model has accreted all its mass, it converges to the classical model. https://thomassteindl.com/poster/TESS_Science_Conference_II_2021 | non_poster |
Poster Session Proceedings 7th IEEE European Symposium on Security and Privacy | non_poster |
First sharpshooter species proven as vectors of Xylella fastidiosa subsp. multiplex in Prunus salicina trees in Brazil Cristiane Müller, Mariana Bossi Esteves, Heloisa Thomazi Kleina, Aline Nondillo, Marcos Botton, João Roberto Spotti Lopes | non_poster |
SIOS POSTER ABSTRACT A Student Initiative for Open Science (SIOS) Franziska Nippold & Marla Dressel SIOS University of Amsterdam Author Note Correspondence concerning this work should be addressed to Franziska Nippold & Marla Dressel, SIOS Chairs of Communications, Psychology Research Master students at the University of Amsterdam. E-mail: sios.information@gmail.com. | non_poster |
COUSHATTA BASKET WEAVERS: MAPPING GIFT ECONOMY How not to lie with maps Denise Bates*, Arina Melkozernova** in collaboration with the Coushatta Tribe of Louisiana, USA *Professor, Faculty of Leadership and Integrative Studies, Arizona State University **PhD Candidate, Comparative Culture and Language, Arizona State University Keywords: Historical maps, TEK, tribal economy, Coushatta, access “True Indigenous formulations are non-intrusive and build frameworks of respectful coexistence by acknowledging the integrity and autonomy of the various constituent elements of the relationship.” Alfred Taiaiake (2005) Arizona State University (ASU), in partnership with the Coushatta Tribe Council, the Coushatta Heritage Department, archivists and community members, and funded by the Stowe Endowment Fund, is collaborating on generating geospatial data from various resources. The oral histories, archeological findings, and cultural items held in public and private collections contain data about Coushatta ecological knowledge and land stewardship practices. Based on these data, we developed a map of basket weavers’ housing along Bayou Blue, ca. 1970s (Langley, Bates, 2021, 23). The map demonstrates the strong connection of the basket weavers’ homesteads to the swamps, rivers, and pinewoods where all basket materials were collected. The clusters of homesteads show the community density that helped transfer information and knowledge to the next generations. The map was intended to serve as a geo-argument to support the claim that Coushatta basket weaving was a communal endeavor that made a prominent contribution to the tribal economy. As a result, in 1972, the Coushatta tribe was recognized by the State of Louisiana and federally acknowledged. This poster documents the process of producing a historic map of the Coushatta basket weavers, circa 1970, from archival materials provided by the Coushatta Heritage Department. 
The locations of basket weavers in 1970 were translated into coordinates (longitude and latitude) by using free Google Maps. Using Python programming in a Jupyter Notebook, the data were imported from Excel into a map viewer. Producing the map with the freely accessible Google Maps as a base posed challenges for interpreting the results accurately. For Coushattas, baskets are sacred “gifts emanating from the landscape” (Pete, 75). The reduction of geo-information by the Google algorithm wiped out small rivers and creeks from the interactive map. As a result, the place-based connection between homesteads and watershed was lost, and with it the references to Coushattas’ cultural values and identity. To restore the narrative of the Coushatta’s Traditional Ecological Knowledge (TEK), the watershed layers were reinstated by adding rivers to the map base using commercial software (StoryMaps, Adobe Cloud). In order to convey the deep connection to land, equitable and eminent collaboration in a cross-cultural setting and access to technology are necessary to enable communities to contribute to their presence in a digital space. Both technology and TEK are equally important for imagining alternative futures (Duarte & Belarde-Lewis, 2015). | non_poster |
New generation offline software for the LHCb upgrade I Martina Ferrillo*† Universität Zürich, Winterthurerstrasse 190, CH-8057 Zürich E-mail: martina.ferrillo@cern.ch In preparation for Run 3 of the LHC, the LHCb experiment is extensively upgrading its detector to meet the quest for higher luminosity and physics yield. The corresponding increase in the data volume and the use of fast simulation pose a challenge to the data storage resource strategy and computing model. The newly built Data Processing & Analysis Tools (DPA) project aims at coordinating the efforts in the development of the experiment's offline software tools to ensure efficient and full exploitation of the LHCb physics potential. This work presents an overview of the main aspects of the new generation software developed within the DPA: the offline data processing strategy; the mechanisms for user data-structures production; the offline user analysis tools. Presented at the 30th International Symposium on Lepton Photon Interactions at High Energies, hosted by the University of Manchester, 10-14 January 2022. *Speaker. †On behalf of the LHCb Collaboration. © Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0). http://pos.sissa.it/ | non_poster |
Fig. 1 From A to D: dorsal, ventral, right lateral and posterior views of Rhynchophorus ferrugineus female genitalia, everted by injection of human physiological solution into the abdomen. Two sub-conical pouches are clearly visible at the ovipositor sides. Fig. 2 From A to C: A drawing from Wattanapongsiri (1966); the author shows the Vaginal Base (VgB) but neglects the pouches. The same vaginal base, cleared and mounted in Essig’s fluid, demonstrates the presence of pouches (red arrows) in either reflected (B) or transmitted (C) light. Spc = Spermatheca Fig. 5 Ventral, dorsal, lateral and frontal micro-CT 3D reconstructed Rhynchophorus ferrugineus vaginal base and ovipositor. For high resolution micro-CT scans, insect's genitalia were preliminarily fixed and dehydrated through critical point drying (CPD). Hitherto undescribed cuticular pouches of RPW female genitalia, recent findings and interpretation attempts C. Porfido1, V. Russo1, F. Garganese1, L. Diana2, F. Nugnes3, F. Porcelli1,2 1 Francesco Porcelli DiSSPA UNIBA Aldo Moro, Via Amendola 165/a - 70126 Bari, Italy; 2 CIHEAM Bari, Via Ceglie 9 - 70010 Valenzano (BA), Italy; 3 CNR IPSP, Via Università 133 Portici (NA) DOI: 10.5281/zenodo.3956088 Red palm weevil (RPW) infestations menace palms of historical, economic and landscape importance in many regions of the world. Despite human efforts, the pest is still undeterred by current control actions. While seeking out the symbiotic bacteria repository associated with the RPW female reproductive apparatus, we encountered two paired and possibly eversible cone-shaped pouches, which open on both sides of the membranes between the spiculum ventrale and the vaginal base. Such structures, never observed previously, can be inflated and everted by injecting and pressurizing the weevil with human physiological solution. 
Study by macrophotography, stereoscopy, bright field light microscopy, SEM and Cryo-SEM showed that the ventral cuticle of the cones is membranous and decorated with minute seta-like processes (microtrichia) plus scarce setae. The dorsal cuticle is somewhat sclerotized, with the exposed side pitted by single or grouped (2-20) apparent duct openings. Such openings should correspond to gland outlets and are served either by a pore-channelled sieve or by a long ducted end-apparatus on the body side of the cuticle. Non-destructive investigations and high-resolution micro computed tomography 3D observations (HR-µ-CT, ~ 400 nm resolution) of the resting vaginal base and its content allow the study of the organ interrelationships within the reproductive system, from the common oviduct to the ovipositor tip. µ-CT provided very detailed anatomical micrographs of the retracted pouches, common oviduct, copulatory pouch, cuticle, and membranes. Muscles are also recognized. This evidence shows the complex gross anatomy and morphology of the region, motivating further clarification of the possible function of the structure. Keywords: Rhynchophorus ferrugineus, Phoenix canariensis, dactylifera, Cocos nucifera, Elaeis guineensis, urban ornamental. Fig 7 The function of such pouches appears unclear, but they harbour Mononchoides macrospiculum, a phytopathogenic nematode that was found to be abundant in several living females trapped in Italy. This nematode is just the latest of a series of RPW-borne organisms such as Serratia spp. (Bacteria); Candida spp., Hyphopichia spp. and Curvularia (?) spp. (Fungi); Centrouropoda spp. and Uroobovella spp. (Acari). A and B Micro-CT 3D rendering; C Methylene Blue vital staining light macrography; D Gram-stained transmitted light microscopy. Arrows on Mononchoides macrospiculum Fig. 6 3D micro-CT reconstruction of Rhynchophorus ferrugineus vaginal base and ovipositor to show transversal (A) and longitudinal (B) virtual sections. 
The introflexed pouches are visible (blue arrows). Fig 4 A and B SEM study reveals single or grouped (2-20) apparent duct inlets and end-apparatus referable to two categories of glands. Fig. 3A Right pouch cleared and mounte | non_poster |
International Bioscience Conference and the 8th International PSU – UNS Bioscience Conference IBCS2021 is organized jointly by: Prince of Songkla University, Thailand University of Novi Sad, Faculty of Sciences, Serbia Towards the SDG Challenges BOOK OF ABSTRACTS 25-26 November 2021, Novi Sad, Serbia ONLINE | non_poster |
GUIDELINES FOR THE PREPARATION, SUBMISSION AND PRESENTATION OF POSTERS 2024 1st EQUILIBRIUM RURAL MEETING - TRANSDISCIPLINARY EQUINE-ASSISTED INTERVENTIONS ORGANIZATION: Papo de EQUIlibrium /equilibrium_rural New Perspectives on Education and Health I EQUI ITAE Years 2024 04/09 | non_poster |
Study of the ω → π0e+e− conversion decay with the CMD-3 detector at the VEPP-2000 collider B D Kutsenko1,2 on behalf of the CMD-3 collaboration 1Budker Institute of Nuclear Physics, SB RAS, Novosibirsk 630090, Russia 2Novosibirsk State University, Novosibirsk, 630090, Russia E-mail: bdkutsenko@gmail.com Abstract. The study of the conversion decay ω → π0e+e− in the decay mode π0 → γγ was performed with the CMD-3 detector at the VEPP-2000 e+e− collider in Novosibirsk. The main background processes are events of the ω → π0π+π− decay, QED events, and events of the radiative decay ω → π0γ, where the monochromatic photon converts in the material in front of the sensitive volume of the detector. To suppress the last type of background, a deep neural network was used. Using an integrated luminosity of about 10 pb−1 collected in the c.m. energy range from 660 MeV to 840 MeV, the cross-section of the process under study was measured and a preliminary result for the branching ratio Br(ω → π0e+e−) was obtained. The result is more precise than any previous measurement. The current status of the analysis is presented. 1. Introduction Measurement of branching ratios and transition form factors of conversion decays provides an important test of the vector dominance model [1] and an accurate background estimation in searches for lepton pairs produced in quark-gluon plasma [2-3]. One such decay, ω → π0e+e−, was studied at CMD-3 [4]. The Cryogenic Magnetic Detector (CMD-3) is a general purpose detector for the VEPP-2000 electron-positron collider [5], which operates at the Budker Institute of Nuclear Physics, Novosibirsk, Russia. The tracking system consists of a cylindrical drift chamber. The tracking system is placed inside a thin superconducting solenoid with a field of 1.3 T. Electromagnetic barrel calorimeters are placed outside the solenoid: an LXe calorimeter with a thickness of 5.4 X0 and CsI crystals with a thickness of 8.1 X0. 
An endcap calorimeter is made of BGO scintillation crystals, with a thickness of 13.4 X0. The main aspects of the physical program of the CMD-3 experiment are precision measurements of hadronic cross-sections in the center of mass energy range from 0.4 GeV up to 2 GeV. Using an integrated luminosity of about 10 pb−1 collected in the c.m. energy range from 660 MeV to 840 MeV, the cross-section of the process under study was measured and a preliminary result for the branching ratio Br(ω → π0e+e−) was obtained. This data sample is 2 times larger than the sample previously used for the measurements at the CMD-2 detector. 2. Data analysis The main sources of resonant background contamination are the ω → π0π+π− decay, with a probability three orders of magnitude larger than that of signal events, and ω → π0γ followed by the Dalitz decay of the π0 or by γ conversion in the material in front of the detector. The Monte-Carlo simulation for each energy point has been performed based on the Geant4 package [6], taking into account the initial state radiation and individual parameters for each energy point. Events with two well-reconstructed tracks and two or more photons are selected. Based on the simulation, selection criteria for track and photon parameters were determined and applied to the data. The invariant mass of e+e−γ was used to suppress QED events, and the tracks' spatial angle and the photon recoil mass were used to suppress 3π events. One of the most important criteria is presented in Fig. 1. The other notable and efficient criterion is connected with the Presented at the 30th International Symposium on Lepton Photon Interactions at High Energies, hosted by the University of Manchester, 10-14 January 2022 | non_poster |
Flow-Conditioned Parameter Grids: Theodore Barnhart, August Schultz, Seth Siefken, T. Roy Sando, and Peter McCarthy Wyoming-Montana Water Science Center U.S. Geological Survey ESIP Virtual Poster Session - July 17, 2020 A CONUS Hydrologic Parameter Dataset For Mechanistic, Statistical, and Machine Learning Models | non_poster |
Dimitrios Bikiaris Laboratory of Polymer Chemistry and Technology Chemistry Department Aristotle University of Thessaloniki Synthesis and characterization of poly(lactic acid) biobased composites with Lignin/Nanolignin additives European Sustainable Biobased Nanomaterials Community (BIOMAC) This project has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No. 952941 | non_poster |
Colección de ESMOS 1 α-amylase Sofía López González* iD Bachelor's Degree in Biotechnology, Faculty of Biological Sciences, Benemérita Universidad Autónoma de Puebla, Puebla, Mexico. *Email: sofia_lpg@outlook.com 1 November 2022 DOI: http://doi.org/10.5281/zenodo.7272559 Edited by: Jesús Muñoz-Rojas (Instituto de Ciencias, Benemérita Universidad Autónoma de Puebla). Reviewed by: Ma Dolores Castañeda Antonio (Instituto de Ciencias, Benemérita Universidad Autónoma de Puebla). Colección de ESMOS Abstract The enzyme α-amylase was named in 1925 by the Austrian chemist Richard Kuhn; it owes its name to the α position in which its products are found. It is also known as α-1,4-glucan 4-glucanohydrolase. Alpha-amylase belongs to the group of endo-amylases, which catalyze the hydrolysis of starch through the breaking of bonds | non_poster |
25th Meeting of the UNA-SUS Network 7 to 8 November 2019 PROCEEDINGS IV Showcase of Successful Experiences Salvador/BA | non_poster |
ORIGINAL RESEARCH published: 05 July 2022 doi: 10.3389/fcvm.2022.901902 Frontiers in Cardiovascular Medicine | www.frontiersin.org 1 July 2022 | Volume 9 | Article 901902 Edited by: Sabina Gallina, University of Studies G. d’Annunzio Chieti and Pescara, Italy Reviewed by: Rajiv Rampat, William Harvey Hospital, United Kingdom Elias Karabelas, University of Graz, Austria *Correspondence: Natalia Solowjowa solowjowa@dhzb.de †These authors share first authorship Specialty section: This article was submitted to Cardiovascular Medicine, a section of the journal Frontiers in Cardiovascular Medicine Received: 22 March 2022 Accepted: 07 June 2022 Published: 05 July 2022 Citation: Goubergrits L, Vellguth K, Obermeier L, Schlief A, Tautz L, Bruening J, Lamecker H, Szengel A, Nemchyna O, Knosalla C, Kuehne T and Solowjowa N (2022) CT-Based Analysis of Left Ventricular Hemodynamics Using Statistical Shape Modeling and Computational Fluid Dynamics. Front. Cardiovasc. Med. 9:901902. doi: 10.3389/fcvm.2022.901902 CT-Based Analysis of Left Ventricular Hemodynamics Using Statistical Shape Modeling and Computational Fluid Dynamics Leonid Goubergrits 1,2†, Katharina Vellguth 1†, Lukas Obermeier 1, Adriano Schlief 1, Lennart Tautz 3, Jan Bruening 1, Hans Lamecker 4, Angelika Szengel 4, Olena Nemchyna 5, Christoph Knosalla 5,6,7, Titus Kuehne 1,6 and Natalia Solowjowa 5* 1 Institute of Computer-Assisted Cardiovascular Medicine, Charité-Universitätsmedizin Berlin, Berlin, Germany, 2 Einstein Center Digital Future, Berlin, Germany, 3 Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany, 4 1000shapes, Berlin, Germany, 5 Department of Cardiothoracic and Vascular Surgery, German Heart Center Berlin, Berlin, Germany, 6 German Centre for Cardiovascular Research (DZHK), Partner Site Berlin, Berlin, Germany, 7 Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health, Berlin, Germany Background: 
Cardiac computed tomography (CCT) based computational fluid dynamics (CFD) allows assessment of intracardiac flow features, which are hypothesized to be an early predictor of heart disease and may support treatment decisions. However, the understanding of intracardiac flow is challenging due to high variability in heart shapes and contractility. Using statistical shape modeling (SSM) in combination with CFD facilitates an intracardiac flow analysis. The aim of this study is to prove the usability of a new approach to describe various cohorts. Materials and Methods: CCT data of 125 patients (mean age: 60.6 ± 10.0 years, 16.8% women) were used to generate SSMs representing aneurysmatic and non-aneurysmatic left ventricles (LVs). Using SSMs, seven group-averaged LV shapes and contraction fields were generated: four representing patients with and without aneurysms and with mild or severe mitral regurgitation (MR), and three distinguishing aneurysmatic patients with true aneurysms, intermediate aneurysms, and globally hypokinetic LVs. End-diastolic LV volumes of the groups varied between 258 and 347 ml, whereas ejection fractions varied between 21 and 26%. MR degrees varied from 1.0 to 2.5. Prescribed-motion CFD was used to simulate intracardiac flow, which was analyzed regarding large-scale flow features, kinetic energy, washout, and pressure gradients. Results: SSMs of aneurysmatic and non-aneurysmatic LVs were generated. Differences in shapes and contractility were found in the first three shape modes. Ninety percent of the cumulative shape variance is described with approximately 30 modes. A comparison of hemodynamics between all groups found shape-, contractility- and MR-dependent differences. Disturbed blood washout in the apex region was found in the aneurysmatic cases. With increasing MR, the diastolic jet becomes less coherent, whereas energy dissipation increases, decreasing kinetic energy. | non_poster |
622-632 Celebration & Contemplation, 10th International Conference on Design & Emotion, 27-30 September 2016, Amsterdam Abstract The Auto-Cam investigates how design researchers can take advantage of photography as a visual research method through the design and deployment of a wearable ‘POV’ (Point-Of-View) camera. Photographs have a unique ability to capture and express personal perspective and promote empathy among researchers and participants (Van Gestel, 2015), but photography remains relatively overlooked as a research method, in particular for design research. The Auto-Cam provides a first-person look into someone’s daily life and activities, using coloured stickers as a way to ‘tag’ objects of interest in one’s environment; the Auto-Cam is designed to automatically capture an image when it ‘sees’ a sticker. We present the Auto-Cam as a case study showing the value of automated first-person photography and the importance of continuing to explore the potential of photography as a valuable research method for design. Keywords Visual research, Design research, Photography, Wearable cameras, Empathy Auto-cam – An innovative approach to visual research for design Samantha Miller1 sam@stby.eu Bas Raijmakers1 bas@stby.eu William Gaver2 w.gaver@gold.ac.uk Andy Boucher2 a.boucher@gold.ac.uk 1STBY, United Kingdom 2Goldsmiths, University of London, United Kingdom Figure 1. It’s perhaps no surprise that photography is particularly well-suited as a visual research method – photographs can provide deep qualitative insight, serve as artistic or design inspiration, and be revisited for ongoing reflection. 
Photography has traction as a visual research method across the social sciences, but there is still relatively little critical inquiry into how we can capture and analyse photographs as design researchers. More recently, discourse around visual research within design has grown; for example, questioning how photography can be used within HCI design in particular (Bariola and Faste, 2012). For example, snapshots like the three above, taken during fieldwork for design research, are valuable research material, able to inspire design informed by observed experiences. How could this woman’s experience of boarding a train be more seamless, more dignified? How could we design a service for handicapped passengers that doesn’t disrupt the flow of passenger traffic? | non_poster |
PROCEEDINGS 2017 VI CEFIVASF - 6th Physical Education Congress of the São Francisco Valley, 24 to 26 August 2017, Petrolina-PE / Juazeiro-BA. Physical Education in Brazil: Applications in School, Health and Performance. GEPEGENE CEFIS Physical Education Program Juazeiro City Hall "The work moves forward to change even more" Ministry of Health Ministry of Sport Organization Sponsorship Support Executive Secretariat TREINAMENTOS E EVENTOS FACULDADE INSPIRAR ® WZ PETROLINA | non_poster |
The ‘topography of memory’ and ‘exemplary memory’. Urban markers in remembrance of Delfor Santos Soto in the Municipality of La Matanza. AGOSTINO, HILDA NOEMI Institution: Universidad Nacional de La Matanza. Florencio Varela 1903. San Justo. CP 1754. Academic background: postgraduate degrees in History and in Education. Email: juntahis@unlam.edu.ar BERTUNE FATGALA, MIRTA NATALIA - Institution: Universidad Nacional de La Matanza. Florencio Varela 1903. San Justo. CP 1754. Instituto Superior de Formación Docente Nº82. Isidro Casanova. Academic background: Professor and Licentiate in History. Email: bertunefatgala.historia@gmail.com ARTOLA, ANALIA YAEL - Institution: Universidad Nacional de La Matanza. Florencio Varela 1903. San Justo. CP 1754. Academic background: Licentiate in Tourism, Specialist in Heritage. Email: analiaartola@yahoo.com.ar Keywords: dictatorship - disappeared - memory - urban markers - local history Introduction. In the following work we propose a study of territorial memory, attempting to identify the places where State terrorism operated. The Municipality of La Matanza, located in the Province of Buenos Aires, was, like the rest of Argentina, a stage for horror and systematic violence. Both public and private spaces were mute witnesses to the violent action exerted on the population. With the desire to activate memory, in recent years various significant places have been ‘marked’ by political and voluntary intervention. In this presentation we will address the urban markers created to remember the victims in the territory of La Matanza, identifying their forms and their processes of production. We will focus on a case study: the markers that appeared in remembrance of the councilman and writer Delfor Santos Soto, detained and disappeared on 21 August 1976 in the town of Ramos Mejía. 
As primary sources we will use the books he authored, interviews with his wife and daughter, session minutes, and observations and surveys carried out across the geography of La Matanza to identify the markers in his name scattered through the space. Why do we dwell on a case study? Because in this way we can work from the category of ‘exemplary memory’. We place ourselves beyond the event, not denying its singularity, but allowing ourselves to use it as a model for thinking about and approaching others. The markers of memory in the Partido de La Matanza. Before continuing, we must characterize the Partido de La Matanza territorially between | non_poster |
Overview. Pollution is an ever-increasing problem in modern society. Much of this pollution is anthropogenic (man-made) in origin. Sulphur dioxide (SO2), hydrocarbons (VOCs), carbon monoxide (CO) and oxides of nitrogen (NOx) are largely anthropogenic. Many of these compounds can greatly affect rural areas. Pollutants are released from cities and industrial areas. Over many miles, reactions can alter the chemical composition of this pollution. | non_poster |
A. Światkowski, Wroclaw University of Economics, Working paper | non_poster |
Probing a GeV-scale Scalar Boson in Association with a TeV-scale Vector-like Quark in the U(1)T3R BSM Extension at the LHC using Machine Learning Alfredo Gurrola, Umar Sohail Qureshi Department of Physics and Astronomy, Vanderbilt University 17-21 July 2023 Abstract A U(1)T3R extension of the Standard Model (SM) can address the mass hierarchy between the third and the first two generations of fermions, explain the thermal dark matter abundance, and explain the muon g−2 and B-meson anomalies. The model contains a light scalar boson, the ϕ′, and a heavy vector-like quark, χu, that can be probed in proton-proton (pp) collisions at CERN’s Large Hadron Collider (LHC) at √s = 13 TeV through g−g and χu−t fusion. We work within a phenomenological framework in which the χu and ϕ′ masses are free parameters and develop a discovery methodology considering final states with the χu decaying to a b-quark, a muon, and MET from neutrinos and the ϕ′ decaying to µ+µ−. The analysis is performed using machine learning algorithms, rather than traditional methods, to maximize the signal sensitivity with an integrated luminosity of 3000 fb−1. Further, we note the proposed methodology can be a key mode for discovery over a large mass range, including low masses, traditionally considered difficult due to experimental constraints. CC-BY-4.0 licence | non_poster |
VALIDATION OF AN ONTOLOGY ON DECENTRALIZED FINANCE AND ITS CONTRIBUTION TO THE DIGITAL FINANCIAL LITERACY PROCESS DOI: https://doi.org/10.5281/zenodo.11372520 1 INTRODUCTION Ontologies are artifacts for the representation and organization of knowledge, capable of describing a reality, constituted by a vocabulary and the facts that underpin it (Guarino, 1998 apud Almeida; Bax, 2003). They are increasingly used to represent domains or areas of knowledge and offer the capability of automatic inference based on logics. Decentralized Finance (DeFi) systems replicate traditional financial services and instruments while eliminating the centralizing institutions from the processes (Gramlich et al., 2023, p. 1). Transactions are based on blockchain networks: "Blockchain can be considered a type of distributed database architecture in which a decentralized network of stakeholders maintains a single state machine. [...] Blockchain transactions are disseminated among its participants in blocks of data, whose security and reliability are guaranteed by cryptography." (Cossenzo; Bax; Costa, 2024, p. 2). This research addresses the validation process of an ontology on DeFi, with the support of domain experts and other users. It thereby seeks to investigate the potential of the ontology as an instrument of digital financial literacy for a better understanding of the | non_poster |
FACULTY OF ARTS, HUMANITIES & EDUCATION THE EPISTEMIC INSIGHT INITIATIVE GLOSSARY | non_poster |
Leyre Burguera Ameave (coord.) DOI: 10.5281/zenodo.14525546 | non_poster |
LINE BALANCING IN PRODUCTION SYSTEMS | non_poster |
Adventures with SPIRou: The APERO pipeline and LBL RV analysis tool Neil James Cook, Etienne Artigau, Rene Doyon Université de Montréal apero.exoplanets.ca lbl.exoplanets.ca Neil James Cook Researcher, iREx, UdeM neil.cook@umontreal.ca njcuk9999.github.io Etienne Artigau Researcher, iREx, UdeM etienne.artigau@umontreal.ca exoplanetes.umontreal.ca | non_poster |
Warming - A Newsletter of the PELD – CRSC Bulletin 003 March 2021 Augastes lumachella. Photo: Ciro Albano. https://doi.org/10.6084/m9.figshare.14099327.v2 | non_poster |
@FAIRsFAIR_eu www.fairsfair.eu /company/fairsfair Research Data Management at University of Cape Town Use case: building staff capacity and creating new training activities and organisational units as support actions for the implementation of institutional policies Type of initiative: support services and training sessions on RDM and FAIR data topics open to students, researchers and the university’s staff Organiser of the initiative: University of Cape Town (UCT) uct.ac.za Scope and objectives The Digital Library Services (DLS) of the UCT Libraries is one of the university’s departments responsible for supporting and fostering Open Science practices at the institutional level. The DLS inhabits this role as one of the stakeholders in the UCT eResearch landscape, providing a variety of services, systems and resources to ensure that good research data management is performed within the university’s teaching, learning and research activities. This includes giving recurring training sessions on several RDM and FAIR data topics and ad hoc training activities and consultations on request by different university departments. Support is also given to researchers and their support staff in using dedicated UCT infrastructures, such as the data management planning platform and the institutional data repository. Training delivered by the DLS is not limited to UCT affiliates and is open to all interested individuals from other South African (and international) institutions. Training activities are provided as support measures to ensure that students at the master's level, doctoral candidates, other researchers and the university’s staff can comply with the objectives set by the institutional policy on Research Data Management. 
In particular, the RDM policy aims to “ensure consistent research practice related to data management principles that support effective data sharing, including open access; and the need for data to be discoverable, accessible, reusable and interoperable to specific quality standards”. Interviewees also highlighted how providing training activities to support the implementation of RDM and FAIR data practices, and more broadly the transition to Open Science, contributes to UCT’s broader strategic agenda. In particular, it is crucial to define a leadership role for the university in the changing landscape of research and higher education, supporting UCT’s capacity to address new challenges and needs in the way research is produced, shared and managed in the long term. Interactions with international partners located in both the African and European continents played a key role in highlighting the need for more training activities related to FAIR research data practices. Interviewees highlighted the importance of participating in transnational fora such as the 2018 SciDataCon conference in Botswana, which was a landmark event for promoting the creation of an African community in the field of research data. Other key opportunities for building the Research Data Services at DLS have been regular attendance at the International Digital Curation Conference (IDCC) as well as the CODATA-RDA Schools of Research Data Science. More recently, the UCT Libraries became a member of the Digital Preservation Coalition (DPC), which was welcomed by interviewees as a milestone achievement for the South African university in terms of building contacts and fostering the exchange of experiences with other institutions in the context of an established international community. The University of Cape Town implemented a range of services and training opportunities to support the uptake of RDM and FAIR data skills and practices at the institutional level. 
These activities target students at the master’s level, doctoral candidates, academic staff and other researchers, as well as the non-academic staff of UCT and of other interested institutions. The case study shows how needs and challenges related to the implementation of RDM and FAIR da | non_poster |
ESMOS Collection 1 Lactase, the enzyme that breaks down lactose Carolina Enciso Arévalo* iD Licenciatura en Biotecnología, Facultad de Ciencias Biológicas, Benemérita Universidad Autónoma de Puebla, Puebla, Mexico. *Email: carolina.enciso@alumno.buap.mx 11 November 2022 DOI: http://doi.org/10.5281/zenodo.7314627 Edited by: Jesús Muñoz-Rojas (Instituto de Ciencias, Benemérita Universidad Autónoma de Puebla). Reviewed by: Lucía Martínez-Martínez (Laboratorio de Biología Molecular, Centro de Investigación Facultad de Medicina UNAM-UABJO, Facultad de Medicina y Cirugía, Universidad Autónoma “Benito Juárez” de Oaxaca). ESMOS Collection Abstract To talk about lactase, we must first understand what an enzyme is, what it does, and what its benefits are for human beings. | non_poster |
QM/MM Simulation Hybrid quantum mechanics/molecular mechanics (QM/MM) simulations have become a popular tool for investigating chemical reactions in condensed phases. In QM/MM methods, the region of the system in which the chemical process takes place is treated at an appropriate level of quantum chemistry theory, while the remainder is described by a molecular mechanics force field. Within this approach, chemical reactivity can be studied in large systems, such as enzymes. Using quantum mechanics alone to calculate the properties of a large chemical library is time-consuming and costly. High-precision QM/MM calculation is a multi-scale calculation method to study ligand binding. By using quantum chemistry to represent ligands, combined with molecular mechanics to characterize proteins and solvents, modeling time can be greatly reduced. Our bioinformaticians will provide you with the most efficient quantum chemistry services. Figure 1. Illustration of the QM/MM concept. A small region, in which a chemical reaction occurs and therefore cannot be described with a force field, is treated at a sufficiently high level of QM theory. The remainder of the system is modelled at the MM level. Overall solutions • Subtractive QM/MM Coupling In the subtractive scheme, the QM/MM energy of the system is obtained in three steps. First, the energy of the total system, consisting of both QM and MM regions, is evaluated at the MM level. The QM energy of the isolated QM subsystem is added in the second step. Third, the MM energy of the QM subsystem is computed and subtracted. The last step corrects for including the | non_poster |
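The three-step subtractive coupling described above reduces to a one-line energy combination. A minimal sketch follows; the energy values in the example are arbitrary illustrative numbers, not output from any actual QM or MM package:

```python
def subtractive_qmmm_energy(e_mm_total, e_qm_region, e_mm_region):
    """Combine energies in the subtractive QM/MM scheme:

        E(QM/MM) = E_MM(total system) + E_QM(QM region) - E_MM(QM region)

    The subtracted term removes the double-counted MM description
    of the QM region.
    """
    return e_mm_total + e_qm_region - e_mm_region

# Arbitrary placeholder energies (e.g. in kcal/mol), for illustration only:
print(subtractive_qmmm_energy(-1250.0, -76.4, -3.1))
```

The same three evaluations are what a driver program would request from the MM and QM engines before combining them.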
The indicator will be examined in the future as: f(d): factor related to the distance from the shoreline [where f(d) = 1 - 0.1*int(d)] ‘ANTHROPOGENETIC INTENSITY’ A NEW INDICATOR TO MEASURE COASTAL MAN-MADE VOLUME CASE STUDY: NAVPAKTOS, AETOLIA-ACARNANIA, GREECE About the study Sources & Methodology In many Mediterranean countries different problems are evident in land uses because of improper land planning and overuse of coastal zones. The basic aim is to study the "behavior" of coastal areas by devising a special indicator to model human intervention in coastal zones and the surrounding areas. Detail of aerial photo draped on the AIEM Applying the indicator, an increase in the year 2007 is apparent: a2007 = 5.72 m. On the contrary, in 1985 the human impact was less: a1985 = 4.55 m. Thus, within 22 years the indicator of human intervention has increased by around 1.17 m. 
"The Project is co The Project is co-funded by the European Social Fund and National Resources funded by the European Social Fund and National Resources - (EPEAEK II) ARXIMHDHS" (EPEAEK II) ARXIMHDHS" Members of the AMICA research group, scientific coordinator John Kiousopoulos Ifigenia Veizi1, Ifig9veizi@gmail.com Nantialena Tsiougou2, nantialena@gmail.com Christine Kouki1, christinekouki14@hotmail.com Diamado Matsa1, dmatsa@teiat.gr George Tsiougos1, gtsiougos@teiath.gr Nikolaos Lakafosis1, nlakafos@teiath.gr Mixalis Zarras1, mizarad@teiath.gr George Miliaresis3, gmiliar@upatras.gr Panagiotis Partsinevelos, ppartsi@gmail.com Maria Pigaki4, pigaki@survey.ntua.gr 1 Surveying Engineer Dept., Technological Educational Institute (T.E.I) of Athens 2 Geography Dept., Harokopio University of Athens 3 University of Patras 4 National Technical University (N.T.U) of Athens hi m wi 15 2 10 2 10 2 8 4 4 4 3 3 3 2 3 0,5 1 0,5 3 0,5 1 0,5 1 0,5 3 0 0 0 0 0 15 2 Mosaic of Aerial photos of 1985, from Hellenic Mapping and Cadastral Organization ƳƮƹƪ Mosaic of Satellite images of 2007, from Google Earth The Anthropogenetic Intensity - ai is indicative of the human impact & calculates the total volume of man-made activities on a coastal zone. Differences Among the areas that occupy the same land use in different time periods are noticed. The images were digitized and produced the land use maps (Length > 10km, Width ~ 3km from the coastline) There were created totally 16 Land Uses in polygon layers. After the digitization the Land Uses were compared in two different time periods. 
“Anthropogenetic Intensity” si: the area of each land use; hi: the real height of each land use construction; wi: weight for each land use. This indicator has been proposed by: Kiousopoulos 1999, Kiousopoulos & Lagkas 2005, Kiousopoulos 2007. ai = Σi si·wi·hi / Σi si, giving a2007 = 5.72 m and a1985 = 4.55 m; the future form is ai = Σi si·wi·hi·f(d) / Σi si. Each land use has a different weight proportional to its distance from the shoreline. For example, one land use close to the shoreline (within 1 km) will not have the same weight as one farther away (10 km). In each coastal area all the factors that can influence the change of land use and cover have to be considered. The weights are related mainly to the extent of the construction volume, the distance from the shoreline and other parameters. Depending on the land use and its real height hi, each land use has a distinct weight wi, which has been evaluated arbitrarily. Some of these uses do not exist in 2007 compared to 1985 (e.g. trees in the form of a forest). On the other hand, some others occupy a bigger extent of area in 2007 because of the human intervention (e.g. there is an increase in urban fabric areas, tourism, transport). By mapping the human impact in coastal zones, it is possible to measure the human intervention in those areas with the Anthropogenetic Inten | non_poster |
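Under the definitions given in the text (si, wi, hi, and the distance factor f(d) = 1 - 0.1*int(d)), the indicator can be sketched in a few lines. The land-use tuples in the example are invented for illustration; the real weights were assigned by the authors per land use:

```python
def distance_factor(d_km):
    """f(d) = 1 - 0.1 * int(d), with d the distance (km) from the shoreline."""
    return 1.0 - 0.1 * int(d_km)

def anthropogenetic_intensity(land_uses, use_distance=True):
    """ai = sum(s_i * w_i * h_i * [f(d_i)]) / sum(s_i).

    land_uses: iterable of (area s_i, weight w_i, height h_i, distance d_i).
    With use_distance=False this reduces to the original indicator
    (no distance factor).
    """
    num = sum(s * w * h * (distance_factor(d) if use_distance else 1.0)
              for s, w, h, d in land_uses)
    den = sum(s for s, _, _, _ in land_uses)
    return num / den

# Invented example: two land-use polygons (area m2, weight, height m, dist km).
uses = [(1000.0, 3, 10.0, 0.5), (500.0, 1, 4.0, 2.5)]
print(anthropogenetic_intensity(uses, use_distance=False))
print(anthropogenetic_intensity(uses))  # distance factor damps the far polygon
```

The indicator comes out in metres, consistent with the a2007 and a1985 values quoted above.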
Final conference, Rome October 11, 2016 First measurements of the new 22 GHz water vapor spectrometer VESPA-22 obtained during the SVAAP campaign at Thule, Greenland Gabriele Mevi1,2, Giovanni Muscari1, Massimo Mari1, Daniela Meloni3, Tatiana Di Iorio3, Alcide di Sarra3, Giandomenico Pace3, Marco Cacciani4 1Istituto Nazionale di Geofisica e Vulcanologia, Italy, email: gabriele.mevi@ingv.it; 2Also at Mathematics and Physics Department, University of Roma Tre, Italy; 3Laboratory for Observations and Analyses of Earth and Climate, ENEA, Italy; 4Physics Department, University of Rome “La Sapienza”, Italy. Abstract Water vapor has a large impact on the Earth radiative budget, affecting both infrared and shortwave radiation. In the Arctic region, the importance of water vapor is enhanced by the so-called Arctic Amplification effect [Serreze and Francis, 2006], a positive feedback that links the quantity of water vapor in the troposphere, the presence of clouds, the ice coverage, and the surface temperature. Additionally, about 10% of the surface warming measured during the last two decades can be ascribed to stratospheric water vapor [Solomon et al., 2010]. The characterization of middle atmospheric water vapor profiles is also important to understand many chemical processes that occur in the Polar middle atmosphere, as water vapor is involved in the ozone chemistry. In order to identify and quantify the contribution of water vapor to the changes that are occurring in the Arctic, long-term measurements of tropospheric and stratospheric water vapor are necessary. With this purpose, during the SVAAP measurements campaign funded by the Italian PNRA (see the contribution by Meloni et al. to the ARCA conference) and in synergy with the ARCA project, a new 22 GHz spectrometer (the water Vapor Emission Spectrometer for Polar Atmospheres at 22 GHz, VESPA-22 [Bertagnolio et al., 2012]) was installed at Thule Air Base (76.5° N, 68.8° W), Greenland. 
VESPA-22 has been set up for continuous unattended measurements. Since the early 1990s Thule Air Base has been hosting an NDACC (Network for the Detection of Atmospheric Composition Change) atmospheric observatory where instruments belonging to Italian (http://www.thuleatmos-it.it/), Danish and US research institutes and universities are installed to monitor the Arctic atmosphere and radiation budget. The SVAAP campaign took place in July 2016 and was mainly aimed at studying the role of water vapor and clouds in the radiation budget at the surface, which is also an important objective of the ARCA project. During the campaign we launched 23 radiosondes, carried out measurements of various atmospheric parameters (see contribution by Meloni et al.), and VESPA-22 operated side by side with the HATPRO microwave radiometer [Rose et al., 2006]. The campaign was characterized by ten days of fair weather followed by ten days of more unstable and cloudy conditions. VESPA-22 (figure 1) collects the microwave radiation emitted by the H2O rotational transition at 22.235 GHz with a spectral resolution of 31 kHz and a bandwidth of 500 MHz (figure 2). Using the relation between the emission line width and the atmospheric pressure, VESPA-22 spectral measurements can be inverted to obtain water vapor profiles from about 30 km up to 70 km with a vertical resolution of about 7 km and a temporal resolution of 1-2 profiles a day, depending on weather conditions (figure 3). The instrument can also retrieve the precipitable water vapor (PWV) by measuring the sky opacity 𝜏𝑧 with a temporal resolution of a few minutes and using the formula: PWV = aτz + bτzTatm + c (1) [Deuber et al., 2005] where 𝑇𝑎𝑡𝑚 is the mean tropospheric emission temperature calculated using radiosounding data, while a, b, and c are three site dependent coefficients. Equation 1 does not take | non_poster |
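Equation 1 is simple enough to sketch directly. The coefficient values in the example are placeholders, since a, b and c are site dependent and must be fitted for the Thule site:

```python
def precipitable_water_vapor(tau_z, t_atm, a, b, c):
    """PWV = a*tau_z + b*tau_z*T_atm + c  (Eq. 1, after Deuber et al., 2005).

    tau_z : zenith sky opacity
    t_atm : mean tropospheric emission temperature (K), from radiosoundings
    a, b, c : site-dependent fit coefficients (placeholders in the example)
    """
    return a * tau_z + b * tau_z * t_atm + c

# Placeholder coefficients and plausible-looking inputs, for illustration only:
print(precipitable_water_vapor(tau_z=0.1, t_atm=270.0, a=1.0, b=0.01, c=0.0))
```

A retrieval pipeline would evaluate this once per opacity measurement, giving the few-minute PWV time resolution described above.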
Recent results in the 𝑯→𝑾𝑾 channel with full LHC Run-II data collected by the CMS experiment Presented at the 30th International Symposium on Lepton Photon Interactions at High Energies, hosted by the University of Manchester, 10-14 January 2022. Amandeep Kaur𝑎 on behalf of the CMS Collaboration 𝑎Panjab University Chandigarh, India email: a.kaur@cern.ch, kauraman.pu@gmail.com Abstract There are several production channels of the Higgs boson and we are searching for the production of the Higgs boson in association with a vector boson in the 𝐻→𝑊𝑊 decay channel with the CMS experiment at the LHC. This measurement provides a direct probe of the Higgs boson coupling to vector bosons. The latest CMS results on the Higgs boson decay to a W boson pair are presented. The focus of the report is on the inclusive and STXS measurements performed for the 𝑉𝐻 leptonic channel with full Run 2 data, which corresponds to an integrated luminosity of 137 fb−1, collected by the CMS detector at the LHC. 1 Introduction This report presents a measurement of the production cross section of the Higgs boson [1, 2] decaying to a 𝑊𝑊 boson pair in association with a 𝑊 or 𝑍 (collectively 𝑉) boson, thus referred to as 𝑉𝐻 leptonic production. As the Higgs boson mass is smaller than twice the nominal mass of the 𝑊 boson, at least one of the 𝑊 bosons is produced off-shell. This characteristic, along with the small branching ratios for the 𝑉𝐻 leptonic production and 𝐻→𝑊𝑊 decay, makes this measurement challenging. However, this analysis benefits from the data sample collected during Run-II with the compact muon solenoid (CMS) detector [3] at the large hadron collider (LHC) at a center-of-mass energy of √𝑠 = 13 TeV, corresponding to an integrated luminosity of 137 fb−1. 2 Analysis Strategy In this analysis the events where the associated vector boson (𝑊 or 𝑍) decays leptonically are considered. Broadly, the analysis is subdivided into four channels based on the number of leptons and jets: 𝑊𝐻𝑆𝑆, 𝑊𝐻3ℓ, 𝑍𝐻3ℓ and 𝑍𝐻4ℓ. 
The production cross section, which is extracted as a signal strength modifier, is also measured using the simplified template cross sections (STXS) framework [4], in addition to the inclusive measurement. Since final states with at least 2 leptons (electrons or muons) are studied, a combination of both single and double lepton triggers is used; Feynman diagrams of these processes are shown in Figure 1. There are different challenges in each channel depending upon the dominating backgrounds, hence different approaches, and the following sections will focus on the strategies for each channel: • WHSS: The final state contains two leptons, missing transverse momentum (pTmiss), and jets, as this channel targets the 𝑊𝐻→2ℓ2𝜈𝑞𝑞 decay shown in Figure 1(a); in order to reduce the Drell-Yan background the two leptons are required to have the same sign. Based on the number of jets in the event and the lepton flavour, the signal region (SR) events are divided into four categories: 1j e𝜇, 1j 𝜇𝜇, 2j e𝜇, 2j 𝜇𝜇. The electron-electron flavor 1 | non_poster |
The 20th Cambridge Workshop on Cool Stars, Stellar Systems, and the Sun Edited by S. J. Wolk The SUPERWIDE Catalog of Wide Binaries Zachary Hartman,1,2 Sébastien Lépine1 1 Department of Physics & Astronomy, Georgia State University, Atlanta, Georgia 2 Lowell Observatory, Flagstaff, Arizona Abstract We present the results of our search for wide binaries in the SUPERBLINK high proper motion catalog of 2.8 million stars with proper motions >40 mas/yr, which has been recently enhanced with data from the GAIA mission. In a first step, we conduct a Bayesian analysis taking into account angular separations and proper motion differences provided by the SUPERBLINK catalog, and identify all possible common proper motion (CPM) pairs with separations up to 60 arcminutes. In the second step we expand the analysis using parallaxes from the Gaia DR2 release, and calculate probabilities for each pair to be a gravitationally bound system. The result is a list of 18,000 pairs with probabilities of being real binaries greater than 99 percent. We show that the distribution of projected physical separations of these wide, field binaries follows a decreasing power law, and shows no evidence of being bimodal, i.e. there is no evidence of a secondary population of pairs with separations > 10^4 AU. In addition, we find clear evidence that at least 30 percent of these wide binaries are triples/multiples, based on one of the components being over-luminous. 1 Searching for Wide Binaries Using the SUPERBLINK high proper motion catalog and the Gaia DR2 data release (Gaia Collaboration et al. 2016, 2018b), we conducted two Bayesian analyses taking into account angular separations and proper motion differences in the first pass and distance differences from the Gaia data in the second pass. Figure 1 shows the probability distributions for both passes, with the top figure showing the result for the first pass and the bottom figure showing the result of the second pass. 
In the first pass, we eliminated all pairs with probabilities below 10%, noting that the overwhelming majority of them are chance alignments on the sky. Those pairs with first pass probabilities > 10% were then taken and passed through our second Bayesian analysis. Keeping pairs with final probabilities greater than 95%, we obtain a sample of 22,175 high probability wide binaries. Figure 2 shows the sky positions of these pairs. 2 Distribution of Projected Physical Separations Figure 3 shows the distribution of projected physical separation, color coded for pairs that are definitely physical (purple), definitely chance alignment (red), and undetermined (yellow). Projected physical separation was calculated from the parallax of the primary and the angular separation to the secondary in the plane of the sky. SUPERBLINK depends on 2MASS catalog identifications, which have a 4″ resolution. Therefore, our completeness limit depends on this times a distance limit, which was chosen to be 500 pc. This is marked on Figure 3 as a dotted line. Note that the distribution appears bimodal only when chance alignments are included; the distribution of high probability, physical pairs falls off rapidly with physical separation, and does not show evidence of a distinct population of ultra-wide systems. Figure 4 shows multiple fits to the tail of the real pairs distribution in log-log space. A Gaussian fit based on the fit from Duquennoy & Mayor (1991) and Raghavan et al. (2010) is shown in cyan; the model clearly overestimates the high-end tail of the distribution. Starting from around 10^4 AU, it appears that a single power law is a better fit with a power = -2.2. This could indicate that very wide (>10,000 AU) binary systems are disrupted over time. 
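The single power-law fit in log-log space mentioned above amounts to a straight-line fit of log10(counts) against log10(separation). A minimal sketch follows; the histogram values are synthetic, generated to follow the quoted slope of -2.2, not the actual SUPERWIDE data:

```python
import math

def loglog_slope(x, y):
    """Least-squares slope of log10(y) vs log10(x): the exponent alpha of a
    power law y ~ x**alpha."""
    lx = [math.log10(v) for v in x]
    ly = [math.log10(v) for v in y]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic separation histogram following N ~ s**-2.2:
sep = [1e4, 2e4, 4e4, 8e4]                       # projected separation bins (AU)
counts = [100.0 * (s / 1e4) ** -2.2 for s in sep]
print(round(loglog_slope(sep, counts), 1))  # -2.2
```

On real binned counts the recovered slope would carry scatter, and a proper analysis would also weight the bins by their Poisson errors.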
3 Over-Luminous Stars We examined the possibility of triple systems by looking for over-luminous (O-L) stars in our sample of pairs with final probabilities > 95%, with distances <250 pc and where the primary star is a K-dwarf to limit effects from evolved stars and metallicity. Figure 5 | non_poster |
Olfactory behavior of Philaenus spumarius (Hemiptera: Aphrophoridae) to two naturally occurring volatile compounds on almond, olive, and vine leaves Rodrigues I.a,b, Benhadi-Marín J.a, Baptista P.a, Pereira J.A.a aCentro de Investigação de Montanha (CIMO), Instituto Politécnico de Bragança. Campus de Santa Apolónia, 5300-253 Bragança, Portugal bUniversidad de León, Departamento de Ingeniería Agraria, Av. Portugal, n° 41, 24071 León, Spain | non_poster |
Confirmation of coffee related Xylella fastidiosa vectors (Cicadellidae) in Costa Rica Garita L1, Villalobos-Muller W1, Zúñiga-Pereira AM2, Álvarez-Mora JP2, Godoy C3, Montero-Astúa M1, Chacón-Díaz C2, Weintraub P4. 1Centro de Investigación en Biología Celular y Molecular, Universidad de Costa Rica, San Pedro 2060, San José, Costa Rica. 2Centro de Investigación en Enfermedades Tropicales, Facultad de Microbiología, Universidad de Costa Rica, San José, Costa Rica. 3Escuela de Ciencias Exactas y Naturales, Universidad Estatal a Distancia, Sabanilla, San José, Costa Rica. 4Agricultural Research Organization, Gilat Research Center, D.N. Negev, 85280, Israel. | non_poster |
High power integrated laser for microwave photonics Jörn P. Epping1, Ruud M. Oldenbeuving1, Dimitri Geskus1, Ilka Visscher1, Robert Grootjans1, Chris G.H. Roeloffzen1, and René G. Heideman1 1LioniX International BV, P.O. Box 456, 7500 AL, Enschede, The Netherlands. j.p.epping@lionix-int.com Abstract: We present a hybrid integrated laser with two gain sections coupled to one tunable cavity. The resulting laser has a record on-chip power of up to 20.7 dBm and an intrinsic linewidth of 320 Hz. © 2020 The Author(s) 1. Introduction Tunable semiconductor lasers with a narrow linewidth have found a wide range of important applications, ranging from coherent communications and remote optical sensing such as Lidar to emerging applications such as microwave photonics (MWP). Especially, integrated microwave photonics [1,2] aims to achieve RF-to-RF signal processing in the optical domain with a small footprint while offering wide bandwidths. Recently, we developed a hybrid integrated photonic platform [3] based on active InP components such as gain sections, modulators, and detectors with passive silicon nitride-based TriPleX enabling low loss optical signal processing due to its low propagation losses (< 0.1 dB/cm). The combination of these two platforms enables fully integrated RF-to-RF analog photonic links (APL). The link gain of an APL depends quadratically on the sensitivity of the modulator, the responsivity of the photodetector (PD), and the optical power at the receiving PD [2]. Therefore, to achieve an efficient APL with a high link gain, an optical input power of typically more than 20 dBm is needed. Furthermore, high optical powers can keep the noise figure at a minimum by making the use of additional amplifiers obsolete. However, so far current hybrid as well as heterogeneously integrated lasers lack sufficient optical power to achieve such efficient APLs. 
Here, we present, to the best of our knowledge, the first hybrid integrated laser consisting of more than one InP gain section sharing a common laser mirror. The resulting on-chip power of 20.7 dBm using two gain sections at a pump current of twice 300 mA is the highest optical power of a hybrid integrated laser to date. Furthermore, the laser shows a low intrinsic linewidth of 320 Hz as well as a low relative intensity noise (<-160 dBc/Hz), which should enable future APLs with a high link gain and a low noise level. 2. Device design The scheme and a photograph of the dual-gain section hybrid integrated laser are shown in Fig. 1 (a) and (b), respectively. The laser comprises two InP reflective semiconductor optical amplifier (RSOA) chips and a low-loss TriPleX frequency selective mirror chip. The frequency selective mirror is based on the well-known Vernier mirror principle, with two micro-resonators of slightly different length and, hence, free-spectral ranges of 208 GHz and 215 GHz. To achieve a higher optical output power, the mirror chip is coupled to two individual InP RSOAs with a length of 700 µm instead of one [4]. The coupling losses from the InP RSOAs to the TriPleX laser cavity are estimated to be about 1 dB. Each of the RSOAs has an individual electrical connection to apply a pump current. On the TriPleX chip, thermo-optic tuners are used to set the frequency selective mirror and to control the phases of the RSOAs within the cavity; two tunable Mach-Zehnder couplers are used to control both the output coupling of the cavity and the power in the output waveguide. The output waveguides are coupled to a fiber array with standard polarization maintaining fibers with a coupling loss of 0.5 dB. Figure 1 a) Conceptual design of the dual gain laser cavity consisting of a tunable dual-ring cavity, two gain sections with phase tuners, and two tunable Mach-Zehnder couplers (TC) to control the output coupling and output power. b) Photograph of the assembled dual gain laser. 
| non_poster |
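For the two ring free-spectral ranges quoted (208 GHz and 215 GHz), the standard Vernier relation gives the extended effective FSR of the composite mirror. The relation itself is an assumption here, since the text states only the two individual ring FSRs:

```python
def vernier_fsr(fsr1_ghz, fsr2_ghz):
    """Effective free spectral range of a two-ring Vernier mirror:

        FSR_eff = FSR1 * FSR2 / |FSR1 - FSR2|

    Slightly mismatched rings extend the mode-selection range far
    beyond either individual ring's FSR.
    """
    return fsr1_ghz * fsr2_ghz / abs(fsr1_ghz - fsr2_ghz)

# Ring FSRs quoted in the text:
print(vernier_fsr(208.0, 215.0))  # ~6389 GHz effective FSR
```

This wide effective FSR is what lets the Vernier mirror select a single longitudinal mode over a broad tuning range.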
Final conference, Rome October 11, 2016 Chemical evolution of the surface and subsurface Svalbard annual snow Andrea Spolaor1,2, Elena Barbaro1,2, Jean Marc Christille3,4, Torben Kirchgeorg1, Fabio Giardi5, David Cappelletti6, Clara Turetta2, Andrea Bernagozzi3, Mats P. Björkman7, Enzo Bertolini3 and Carlo Barbante1,2 1Ca’ Foscari University of Venice, Department of Environmental Sciences, Informatics and Statistics, Santa Marta – Dorsoduro 2137, 30123 Venice, Italy – email: andrea.spolaor@unive.it 2Institute for the Dynamics of Environmental Processes, IDPA-CNR, Dorsoduro 2137, 30123 Venice, Italy. 3Research Unit Atlas, Astronomical Observatory of the Autonomous Region of the Aosta Valley (OAVdA), Loc. Lignan 39, 11020 Nus (AO), Italy. 4Dipartimento di Fisica e Geologia, Università degli Studi di Perugia, I-06123 Perugia, Italy 5Chemistry Department – Analytical Chemistry, Scientific Pole, University of Florence, Via della Lastruccia 3, I-50019 Sesto Fiorentino (Florence) Italy. 6Dipartimento di Chimica, Biologia e Biotecnologie, Università degli Studi di Perugia I-06123 Perugia, Italy 7University of Gothenburg, Department of Earth Sciences, Box 460, 40530 Göteborg, Sweden Abstract Understanding and monitoring the evolution of annual snow is an important aspect of cryosphere research. Changes in physical properties such as hardness, presence of melt layers, or the shape and size of crystals can completely modify the robustness, properties and quality of the snow. Evaluating these changes can inform the study and prediction of avalanches. The annual snow layer is an extremely dynamic portion of the cryosphere and can be defined as the snow accumulated on the ground during the year, from September to April. The characteristics of the annual snow strata can influence the access to food, particularly for animals that rely on food sources below the snow strata (Kohler et al. 2004). 
From a chemical point of view, snow depositions during the winter are a sink for an impressive amount of chemical compounds (natural and anthropogenic) and elements trapped in the snow layers. In particular, compounds and elements that can be photo-activated accumulate during the winter and can be re-emitted into the atmosphere, taking part in numerous geochemical and biological cycles (Björkman et al. 2013) during the spring. However, elements that can be photo-activated are not the only ones to be released from the annual snow strata. During the melting phase, all elements and compounds that are still present in the snow can be released into the meltwater (Bogdal et al. 2010), accumulate in the ground or be discharged into the sea, affecting biological productivity or, in the case of anthropogenic compounds, causing localized contamination of the surrounding environment. Two experiments were performed to evaluate the changes in the chemical composition of surface and subsurface annual snow. An in-depth investigation of the evolution of the first meter of the annual snow layer, with daily resolution, was conducted on the glacier of Austre Brøggerbreen, Svalbard, between the 27th of March and the 31st of May, coinciding with the start of the melting phase (Spolaor et al. 2016). The present monitoring study mainly aimed to evaluate changes in the thermal profile and liquid water content (LWC) during the formation of a new ice layer (Figure 1), as well as the re-allocation of the total dissolved salts (TDS) and, in more detail, the specific ionic compounds such as Na+, Br−, NO3−, Cl−, Ca2+, Mg2+, etc. In addition, and to corroborate the previous experiment, three additional experiments were performed: two at high temporal resolution, between the 29th of April and the 1st of May 2015 (Figure 2) and between the 7th and the 10th of April 2016, and one sampling the surface snow (15 cm) with daily resolution between the 1st of April and the 30th of June 2014. 
The first high-resolution experiment was performed during the midnight sun season while the second one when the | non_poster |
Book of Abstracts Edited by: Baca A., Wessner B., Diketmüller R., Tschan H., Hofmann M., Kornfeind P., Tsolakidis E. 21st Annual Congress of the European College of Sport Science: Crossing Borders through Sport Science. 6th-9th July 2016, Vienna, Austria. Hosted by the Centre for Sport Science and University Sports, University of Vienna | non_poster |
Information Science Trends 2022 - The ASIS&T European Chapter Research Series. 17-19 June 2022 1 Crossing the border between human and non-human in information science Niloofar Solhjoo, PhD candidate, School of Information Management, Victoria University of Wellington Keywords everyday information behaviour, more-than-human, companion animal, research method, multispecies ethnography, posthumanism STRUCTURED ABSTRACT Aim of the contribution Telling the untold story of animals whose actions are linked to human everyday information behaviour (e.g., in a multispecies family), this poster aims to outline methods to address the voice and perspective of nonhuman animals in information studies. Value of the contribution In this creative study design, information behaviour was seen as an inseparable part of more-than-human cohabitation, as humans have learnt to understand, make meaning, or just live with animals in their homes and cities. Dogs and cats, as embodied and minded individuals living their lives entangled with humans, were recognized as the research participants. This poster makes a call for an ‘animal turn’ in information science to find more-than-human ways of doing research and undermine the exclusiveness of the human agents in the study of information experience. Research context The scarce existing literature on information research related to keeping and living with companion animals consists mostly of expanded reports from outside the information field, built on a limited view of objective information (e.g., documents and recorded information), with no attention to internal and subjective information, and overlooking everyday practices and important actors in a multispecies family. The fact that dogs and cats live with humans and are interactively entangled with everyday life is sufficient to justify their inclusion in information behaviour studies. 
Therefore, this poster shifts away from a theoretical focus on information research about or on animals to research ‘with’ animals. | non_poster |
INFORMATIVE BULLETIN 2022-II FUNDACIÓN UNIVERSITARIA DE POPAYÁN PSYCHOLOGY PROGRAM HEALTH-CLINICAL LINE | non_poster |
Chihiro Imamura, Yoichi Tamura, Akio Taniguchi (Nagoya University) Toshiaki Kimura, Hiroaki Kawamura, Ayame Usui (Nagoya City University) Mikio Kurita (Kyoto University) Heuristic design of light-weight homologous structure for Large Submillimeter Telescope AtLAST design study: results, science, and next steps 23rd May 2024 Optimization/Evolution 50-m Class Single-dish Telescope From Kawabe+2016 From Mroczkowski+2024 | non_poster |
Fusion of multiple classifiers using self supervised learning for satellite image change detection Alexandros Oikonomidis1[0000−0003−4803−6419], Maria Pegia1[0000−0003−2643−0028], Anastasia Moumtzidou1[0000−0001−7615−8400], Ilias Gialampoukidis1[0000−0002−5234−9795], Stefanos Vrochidis1[0000−0002−2505−9178], and Ioannis Kompatsiaris1[0000−0001−6447−9020] Information Technologies Institute / Centre for Research & Technology Hellas, Thessaloniki, Greece {aleoikon, mpegia, moumtzid, heliasgj, stefanos, ikom}@iti.gr Abstract. Deep learning methods are widely used in the domain of change detection in remote sensing images. While datasets of that kind are abundant, annotated images, specific for the task at hand, are still scarce. Neural networks trained with self-supervised learning aim to harness large volumes of unlabeled high resolution satellite images to help in finding better solutions for the change detection problem. In this paper we experiment with this approach by presenting 4 different change detection methodologies. We propose a fusion method that under specific parameters can provide better results. We evaluate our results using two openly available datasets with Sentinel-2 satellite images, S2MTCP and OSCD, and we investigate the impact of using 2 different Sentinel-2 band combinations on our final predictions. Finally we conclude by summarizing the benefits of this approach and propose future areas of interest that could be of value in enhancing the change detection task’s outcomes. Keywords: Change Detection · Self Supervised Learning · Satellite images 1 Introduction The history of change detection can be traced back to the early days of remote sensing, with one of the first examples being the use of aerial photography to identify agricultural land use changes [13]. 
Change detection is the process of comparing two images of the same area at different times to identify changes, such as urban growth and development, deforestation events and vegetation evolution. Those changes are depicted on a change mask where usually a white pixel represents a change and a black pixel that nothing has changed. Despite the large amount of data that is now available from programs like Copernicus and Landsat, there is still a lack of open labelled datasets using these images. This makes it difficult to compare and evaluate new proposed change detection algorithms. Sufficient labelled datasets are essential for developing supervised learning methods with deep neural networks. A promising candidate for tackling this problem is Self-Supervised Learning (SSL), a subset of Unsupervised Learning. The term supervision means that labels do exist | non_poster |
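A change mask of the kind described, white for changed pixels and black for unchanged ones, can be sketched with plain image differencing. This is a deliberately naive baseline for illustration, not the SSL fusion method the paper proposes:

```python
def change_mask(img_t1, img_t2, threshold):
    """Binary change mask for two co-registered single-band images:
    1 (white) where the absolute per-pixel difference exceeds the
    threshold, 0 (black) otherwise.

    img_t1, img_t2: nested lists of pixel values for the same area
    at two acquisition dates.
    """
    return [[1 if abs(b - a) > threshold else 0
             for a, b in zip(row1, row2)]
            for row1, row2 in zip(img_t1, img_t2)]

# Toy 2x2 "images" of the same area at two dates:
t1 = [[10, 10], [10, 10]]
t2 = [[10, 80], [12, 10]]
print(change_mask(t1, t2, threshold=20))  # [[0, 1], [0, 0]]
```

Learned methods replace the raw pixel difference with a distance between learned feature representations, which is where self-supervised pretraining enters.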