Super-resolution fluorescence microscopy, with a resolution beyond the diffraction limit of light, has become an indispensable tool to directly visualize biological structures in living cells at nanometer scale. Despite advances in high-density super-resolution fluorescent techniques, existing methods still have bottlenecks, including extremely long execution times, artificial thinning and thickening of structures, and an inability to capture latent structures. Here we propose a novel deep-learning-guided Bayesian inference approach, DLBI, for the time-series analysis of high-density fluorescent images. Our method combines the strengths of deep learning and statistical inference: deep learning captures the underlying distribution of the fluorophores that is consistent with the observed time-series fluorescent images by exploring local features and correlation along the time axis, and statistical inference further refines the ultrastructure extracted by deep learning and endows the final image with physical meaning. Comprehensive experimental results on both real and simulated datasets demonstrate that our method provides more accurate and realistic local-patch and large-field reconstruction than the state-of-the-art method, the 3B analysis, while being more than two orders of magnitude faster. The main program is available at https://github.com/lykaust15/DLBI
A new network with super approximation power is introduced. This network is built with Floor ($\lfloor x\rfloor$) or ReLU ($\max\{0,x\}$) activation function in each neuron and hence we call such networks Floor-ReLU networks. For any hyper-parameters $N\in\mathbb{N}^+$ and $L\in\mathbb{N}^+$, it is shown that Floor-ReLU networks with width $\max\{d,\, 5N+13\}$ and depth $64dL+3$ can uniformly approximate a H\"older function $f$ on $[0,1]^d$ with an approximation error $3\lambda d^{\alpha/2}N^{-\alpha\sqrt{L}}$, where $\alpha \in(0,1]$ and $\lambda$ are the H\"older order and constant, respectively. More generally for an arbitrary continuous function $f$ on $[0,1]^d$ with a modulus of continuity $\omega_f(\cdot)$, the constructive approximation rate is $\omega_f(\sqrt{d}\,N^{-\sqrt{L}})+2\omega_f(\sqrt{d}){N^{-\sqrt{L}}}$. As a consequence, this new class of networks overcomes the curse of dimensionality in approximation power when the variation of $\omega_f(r)$ as $r\to 0$ is moderate (e.g., $\omega_f(r) \lesssim r^\alpha$ for H\"older continuous functions), since the major term to be considered in our approximation rate is essentially $\sqrt{d}$ times a function of $N$ and $L$ independent of $d$ within the modulus of continuity.
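To illustrate how the Hölder-case bound above scales, here is a minimal sketch (hypothetical hyper-parameter values, not taken from the paper) that simply evaluates $3\lambda d^{\alpha/2}N^{-\alpha\sqrt{L}}$:

```python
import math

def floor_relu_error_bound(d, N, L, alpha=1.0, lam=1.0):
    """Hoelder-case approximation error bound quoted in the abstract:
    3 * lambda * d^(alpha/2) * N^(-alpha * sqrt(L))."""
    return 3.0 * lam * d ** (alpha / 2) * N ** (-alpha * math.sqrt(L))

# The bound decays rapidly in the depth parameter L at fixed width parameter N:
e_shallow = floor_relu_error_bound(d=10, N=2, L=4)   # sqrt(L) = 2
e_deep = floor_relu_error_bound(d=10, N=2, L=16)     # sqrt(L) = 4
assert e_deep < e_shallow
```

Note that when $\sqrt{L}$ doubles, the factor $N^{-\sqrt{L}}$ is squared, which is why depth drives the super-approximation rate while the dimension $d$ enters only through the mild $d^{\alpha/2}$ prefactor.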
Several studies have tried to ascertain whether or not the increase in abundance of early-type galaxies (E-S0a's) with time is mainly due to major mergers, reaching opposite conclusions. We have tested this directly through semi-analytical modelling, by studying how the massive early-type galaxies with log(M_*/Msun)>11 at z~0 (mETGs) would have evolved backwards in time, under the hypothesis that each major merger gives rise to an early-type galaxy. The study was carried out considering only the major mergers strictly reported by observations at each redshift, and assuming that gas-rich major mergers experience transitory phases as dust-reddened, star-forming galaxies (DSFs). The model is able to reproduce the observed evolution of the galaxy LFs at z<~1, simultaneously for different rest-frame bands (B, I, and K) and for different selection criteria on color and morphology. It also provides a framework in which apparently contradictory results on the recent evolution of the luminosity function (LF) of massive, red galaxies can be reconciled, simply by considering that observational samples of red galaxies can be significantly contaminated by DSFs. The model proves that it is feasible to build up ~50-60% of the present-day mETG population at z<~1 and to reproduce the observed excess, by a factor of ~4-5, of late-type galaxies at 0.8<z<1 through the coordinated action of wet, mixed, and dry major mergers, fulfilling global trends that are in general agreement with mass-downsizing. The bulk of this assembly takes place during the ~1 Gyr elapsed at 0.8<z<1. The model suggests that major mergers have been the main driver of the observed migration of mass from the massive end of the blue galaxy cloud to that of the red sequence over the last ~8 Gyr. (Abridged)
In this paper, the finite volume lattice Boltzmann method (FVLBM) on unstructured grids presented in Part I of this paper is extended to simulate turbulent flows. To model the turbulent effect, the $k-\omega$ SST turbulence model is incorporated into the present FVLBM framework and is also solved by the finite volume method. Based on the eddy viscosity hypothesis, the eddy viscosity is computed from the solution of the $k-\omega$ SST model, and the total viscosity is obtained by adding this eddy viscosity to the laminar (kinematic) viscosity in the Bhatnagar-Gross-Krook collision term. To enhance the computational efficiency, the three-stage second-order implicit-explicit (IMEX) Runge-Kutta method is used for temporal discretization, and the time step can be one to two orders of magnitude larger than that of the explicit forward Euler scheme. Although the cost per step increases, the overall computational efficiency improves by about one order of magnitude, and the lid-driven cavity test case shows that good results can still be obtained at large time steps. Two turbulent flow cases are carried out to validate the present method: flow over a backward-facing step and flow around the NACA0012 airfoil. Our numerical results are found to be in agreement with experimental data and reference numerical solutions, demonstrating the applicability of the present FVLBM coupled with the $k-\omega$ SST model to accurately predict incompressible turbulent flows.
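The viscosity coupling described in the abstract can be sketched as follows. This is a generic single-relaxation-time BGK relation ($\nu = c_s^2(\tau - 1/2)\,\Delta t$ in lattice units), not the paper's actual code, and the numerical values are illustrative only:

```python
def bgk_relaxation_time(nu_laminar, nu_eddy, dt=1.0, cs2=1.0 / 3.0):
    """Total viscosity = laminar (kinematic) viscosity + eddy viscosity
    from the turbulence-model solution; the BGK relaxation time then
    follows from the standard lattice relation nu = cs^2 * (tau - 0.5) * dt."""
    nu_total = nu_laminar + nu_eddy
    return nu_total / (cs2 * dt) + 0.5

# Illustrative lattice-unit values: the eddy viscosity raises tau,
# keeping the collision step stable in turbulent regions.
tau = bgk_relaxation_time(nu_laminar=0.01, nu_eddy=0.05)
```

The key point is that only the relaxation time changes; the collision operator itself is untouched, which is what makes the eddy-viscosity coupling cheap inside the FVLBM framework.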
One approach to delaying the spread of the novel coronavirus (COVID-19) is to reduce human travel by imposing travel restriction policies. It is yet unclear how effective those policies are at suppressing the mobility trend, owing to the lack of ground truth and of large-scale datasets describing human mobility during the pandemic. This study uses real-world location-based service data collected from anonymized mobile devices to uncover mobility changes during COVID-19 and under the 'Stay-at-home' state orders in the U.S. The study measures human mobility with two important metrics: the daily average number of trips per person and the daily average person-miles traveled. The data-driven analysis and modeling attribute less than 5% of the reduction in the number of trips and person-miles traveled to the effect of the policy. The models developed in the study exhibit high prediction accuracy and can be applied to inform epidemic modeling with empirically verified mobility trends and to support time-sensitive decision-making processes.
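The two mobility metrics above can be computed as below; the trip-record layout and the numbers are hypothetical, for illustration only:

```python
def mobility_metrics(trips, population):
    """Compute the study's two metrics for one day from a list of
    (person_id, miles) trip records (hypothetical data layout):
    daily average trips per person and daily average person-miles."""
    n_trips = len(trips)
    total_miles = sum(miles for _, miles in trips)
    return n_trips / population, total_miles / population

# Three trips made by two tracked devices in a population of 4:
trips = [("a", 2.0), ("a", 3.5), ("b", 1.5)]
avg_trips, avg_miles = mobility_metrics(trips, population=4)
# avg_trips = 0.75 trips/person, avg_miles = 1.75 miles/person
```

Both metrics are normalized by population rather than by the number of observed devices, which is one reason the underlying device sample must be representative.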
Special-purpose constraint propagation algorithms frequently make implicit use of short supports -- by examining a subset of the variables, they can infer support (a justification that a variable-value pair may still form part of an assignment that satisfies the constraint) for all other variables and values and save substantial work -- but short supports have not been studied in their own right. The two main contributions of this paper are the identification of short supports as important for constraint propagation, and the introduction of HaggisGAC, an efficient and effective general purpose propagation algorithm for exploiting short supports. Given the complexity of HaggisGAC, we present it as an optimised version of a simpler algorithm ShortGAC. Although experiments demonstrate the efficiency of ShortGAC compared with other general-purpose propagation algorithms where a compact set of short supports is available, we show theoretically and experimentally that HaggisGAC is even better. We also find that HaggisGAC performs better than GAC-Schema on full-length supports. We also introduce a variant algorithm HaggisGAC-Stable, which is adapted to avoid work on backtracking and in some cases can be faster and have significant reductions in memory use. All the proposed algorithms are excellent for propagating disjunctions of constraints. In all experiments with disjunctions we found our algorithms to be faster than Constructive Or and GAC-Schema by at least an order of magnitude, and up to three orders of magnitude.
Spatial-temporal local binary pattern (STLBP) has been widely used in dynamic texture recognition. STLBP often encounters the high-dimension problem, as its dimension increases exponentially, so that it can only utilize a small neighborhood. To tackle this problem, we propose a method for dynamic texture recognition using PDV hashing and dictionary learning on multi-scale volume local binary patterns (PHD-MVLBP). Instead of forming very high-dimensional LBP histogram features, it first uses hash functions to map the pixel difference vectors (PDVs) to binary vectors, then forms a dictionary from the derived binary vectors, and encodes them using this dictionary. In this way, the PDVs are mapped to feature vectors of the size of the dictionary, instead of LBP histograms of very high dimension. Such an encoding scheme can effectively extract discriminant information from videos in a much larger neighborhood. Experimental results on two widely used dynamic texture datasets, DynTex++ and UCLA, show the superior performance of the proposed approach over state-of-the-art methods.
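A minimal sketch of the hashing step, using random-projection sign hashing as a generic stand-in for the method's hash functions (the neighborhood size and bit count here are assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def pdv_hash(pdvs, n_bits=8):
    """Map pixel-difference vectors (PDVs) to short binary codes via
    random-projection sign hashing -- an illustrative stand-in for the
    hash functions used in PHD-MVLBP. Each hash is the sign of one
    random linear projection, yielding one bit per hash function."""
    dim = pdvs.shape[1]
    proj = rng.standard_normal((dim, n_bits))   # one column per hash function
    return (pdvs @ proj > 0).astype(np.uint8)   # binary code, shape (n, n_bits)

# 5 PDVs from an assumed 26-pixel spatio-temporal neighborhood
# become 5 compact 8-bit codes instead of a 2^26-bin LBP histogram.
codes = pdv_hash(rng.standard_normal((5, 26)))
assert codes.shape == (5, 8)
```

The point of the scheme is visible in the shapes: the code length is fixed by the number of hash functions, not by $2^{\text{neighborhood size}}$, so much larger neighborhoods become affordable.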
We present a general form of renormalization operator $\mathcal{R}$ acting on potentials $V:\{0,1\}^\mathbb{N} \to \mathbb{R}$. We exhibit the analytical expression of the fixed-point potential $V$ for such an operator $\mathcal{R}$. This potential can be expressed in a natural way in terms of a certain integral over the Hausdorff probability on a Cantor-type set in the interval $[0,1]$. This result generalizes a previous one by A. Baraviera, R. Leplaideur and A. Lopes, where the fixed-point potential $V$ was of Hofbauer type. For potentials of Hofbauer type (a well-known case of phase transition) the decay is like $n^{-\gamma}$, $\gamma>0$. Among other things, we present an estimation of the decay of correlation of the equilibrium probability associated to the fixed-point potential $V$ of our general renormalization procedure. In some cases we get polynomial decay like $n^{-\gamma}$, $\gamma>0$, and in others a decay faster than $n \,e^{ -\, \sqrt{n}}$, when $n \to \infty$. The potentials $g$ we consider here are elements of the so-called family of Walters potentials on $\{0,1\}^\mathbb{N}$, which generalizes the potentials considered initially by F. Hofbauer. For these potentials some explicit expressions for the eigenfunctions are known. In a final section we also show that, given any choice $d_n \to 0$ of real numbers varying with $n \in \mathbb{N}$, there exists a potential $g$ in the class defined by Walters which has an invariant probability with these numbers as the coefficients of correlation (for a certain explicit observable function).
UltraCarbonaceous Antarctic MicroMeteorites (UCAMMs) represent a small fraction of the interplanetary dust particles reaching the Earth's surface and contain large amounts of an organic component not found elsewhere. They most probably sample a contribution from the outer regions of the solar system to the local interplanetary dust particle flux. We characterize the composition of UCAMMs, focusing on the organic matter, and compare the results to the insoluble organic matter (IOM) from primitive meteorites, IDPs, and the Earth. We acquired synchrotron infrared microspectroscopy and micro-Raman spectra of eight UCAMMs from the Concordia/CSNSM collection, as well as N/C atomic ratios determined with an electron microprobe. The spectra are dominated by an organic component with a low aliphatic CH versus aromatic C=C ratio, and a higher nitrogen fraction and lower oxygen fraction compared with carbonaceous chondrites and IDPs. The carbonyl absorption band of UCAMMs is in agreement with a ketone or aldehyde functional group. Some of the IR and Raman spectra show a C$\equiv$N band corresponding to a nitrile. The absorption band profile from 1400 to 1100 cm$^{-1}$ is compatible with the presence of C-N bonds in the carbonaceous network and is spectrally different from that reported for meteorite IOM. We confirm that the silicate-to-carbon content in UCAMMs is well below that reported in IDPs and meteorites. Together with the high nitrogen abundance relative to carbon in the organic matter matrix, this points to a formation scenario for UCAMMs via physicochemical mechanisms taking place in a cold, nitrogen-rich environment, such as the surface of icy parent bodies in the outer solar system. The composition of UCAMMs provides an additional hint of a positive heliocentric gradient in the C/Si and N/C abundance ratios in the evolution of the solar system protoplanetary disc.
Cluster formation and gas dynamics in the central regions of barred galaxies are not well understood. This paper reviews the environment of three 10^7 Msun clusters near the inner Lindblad resonance (ILR) of the barred spiral NGC 1365. The morphology, mass, and flow of HI and CO gas in the spiral and barred regions are examined for evidence of the location and mechanism of cluster formation. The accretion rate is compared with the star formation rate to infer the lifetime of the starburst. The gas appears to move from inside corotation in the spiral region to looping filaments in the interbar region at a rate of ~6 Msun/yr before impacting the bar dustlane somewhere along its length. The gas in this dustlane moves inward, growing in flux as a result of the accretion to ~40 Msun/yr near the ILR. This inner rate exceeds the current nuclear star formation rate by a factor of 4, suggesting continued buildup of nuclear mass for another ~0.5 Gyr. The bar may be only 1-2 Gyr old. Extrapolating the bar flow back in time, we infer that the clusters formed in the bar dustlane outside the central dust ring, at a position where an interbar filament currently impacts the lane. The ram pressure from this impact is comparable to the pressure in the bar dustlane, and both are comparable to the pressure in the massive clusters. Impact triggering is suggested. The isothermal assumption in numerical simulations seems inappropriate for the rarefaction parts of spiral and bar gas flows. The clusters have enough lower-mass counterparts to suggest that they are part of a normal power-law mass distribution. Gas trapping in the most massive clusters could explain their [NeII] emission, which is not evident from the lower-mass clusters nearby.
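The ram-pressure comparison invoked above rests on $P_{\rm ram} = \rho v^2$; a small sketch with illustrative density and velocity values (not numbers from the paper):

```python
def ram_pressure(n_H, v_kms):
    """Ram pressure P = rho * v^2 for gas of hydrogen number density
    n_H (cm^-3) impacting a surface at v_kms (km/s); returns P/k_B in
    K cm^-3, the usual units for comparing ISM pressures.
    Illustrative only -- not the paper's adopted values."""
    m_H = 1.6726e-24           # hydrogen mass, g
    k_B = 1.3807e-16           # Boltzmann constant, erg/K
    rho = n_H * m_H            # mass density, g cm^-3
    v = v_kms * 1.0e5          # velocity, cm/s
    return rho * v * v / k_B   # P/k_B, K cm^-3

# A filament of ~10 cm^-3 hitting the dustlane at ~100 km/s
# gives P/k_B of order 10^7 K cm^-3.
p_over_k = ram_pressure(n_H=10.0, v_kms=100.0)
```

Because $P_{\rm ram}$ scales as $v^2$, the filament impact speed dominates the comparison with the static dustlane and cluster pressures.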
The effect of the intersite and interplane Coulomb interactions between the Dirac fermions on the formation of the Kohn-Luttinger superconductivity in bilayer doped graphene is studied, disregarding the effects of the van der Waals potential of the substrate and of both magnetic and non-magnetic impurities. The phase diagram determining the boundaries of superconducting domains with different types of symmetry of the order parameter is built using the extended Hubbard model in the Born weak-coupling approximation with allowance for the intra-atomic, interatomic, and interlayer Coulomb interactions between electrons. It is shown that the Kohn-Luttinger polarization contributions up to the second order of perturbation theory in the Coulomb interaction inclusively, together with an account of the long-range intraplane Coulomb interactions, significantly affect the competition between the superconducting $f$-, $p+ip$-, and $d+id$-wave pairings. It is demonstrated that accounting for the interplane Coulomb interaction enhances the critical temperature of the transition to the superconducting phase.
ڈائرک فرمیونز کے درمیان بین السائٹ اور بین السطح کولمب تعاملات کے اثرات دوہری تہہ والے ڈوپڈ گرافین میں کوہن-لٹنگر سپرکنڈکٹیویٹی کی تشکیل پر مطالعہ کیا گیا ہے، جس میں سبسٹریٹ کے وان ڈر والز ممکنہ تاثرات اور دونوں مقناطیسی اور غیر مقناطیسی ناخالصیوں کے اثرات کو نظر انداز کیا گیا ہے۔ حکم کے پیرامیٹر کی مختلف اقسام کی تقارنی سپرکنڈکٹنگ دائرے کی حدود کا تعین کرنے والے فیز ڈائریگرام کی تعمیر داخلہ ایٹمی، بین ایٹمی اور بین تہہ کولمب تعاملات کو مدنظر رکھتے ہوئے بورن کمزور کپلنگ تقریب میں توسیع یافتہ ہببرڈ ماڈل کے ذریعے کی گئی ہے۔ دکھایا گیا ہے کہ کولمب تعامل میں خلل کی تھیوری کے دوسرے درجے تک کوہن-لٹنگر دھرولتہ تعاون اور طویل فاصلے کے اندر سطح کولمب تعاملات کا اکاؤنٹ $f-$، $p+ip-$، اور $d+id-$ ویو جوڑے بننے کے درمیان مقابلے کو نمایاں طور پر متاثر کرتا ہے۔ یہ نشان دہی کی گئی ہے کہ بین سطح کولمب تعامل کا اکاؤنٹ سپرکنڈکٹنگ فیز میں تبدیلی کے لیے تنقیطی درجہ حرارت کو بڑھاتا ہے۔
ur
Special-purpose constraint propagation algorithms frequently make implicit use of short supports -- by examining a subset of the variables, they can infer support (a justification that a variable-value pair may still form part of an assignment that satisfies the constraint) for all other variables and values and save substantial work -- but short supports have not been studied in their own right. The two main contributions of this paper are the identification of short supports as important for constraint propagation, and the introduction of HaggisGAC, an efficient and effective general-purpose propagation algorithm for exploiting short supports. Given the complexity of HaggisGAC, we present it as an optimised version of a simpler algorithm, ShortGAC. Although experiments demonstrate the efficiency of ShortGAC compared with other general-purpose propagation algorithms where a compact set of short supports is available, we show theoretically and experimentally that HaggisGAC is even better. We also find that HaggisGAC performs better than GAC-Schema on full-length supports. We also introduce a variant algorithm, HaggisGAC-Stable, which is adapted to avoid work on backtracking and in some cases can be faster and have significant reductions in memory use. All the proposed algorithms are excellent for propagating disjunctions of constraints. In all experiments with disjunctions we found our algorithms to be faster than Constructive Or and GAC-Schema by at least an order of magnitude, and up to three orders of magnitude.
Des algorithmes spécialisés de propagation de contraintes utilisent fréquemment de manière implicite des supports courts — en examinant un sous-ensemble des variables, ils peuvent déduire un support (une justification indiquant qu'un couple variable-valeur peut encore faire partie d'une affectation satisfaisant la contrainte) pour toutes les autres variables et valeurs, ce qui permet d'économiser un travail substantiel — mais les supports courts n'ont pas été étudiés en tant que tels. Les deux principales contributions de cet article sont l'identification des supports courts comme étant importants pour la propagation de contraintes, et la présentation de HaggisGAC, un algorithme de propagation polyvalent efficace et performant permettant d'exploiter les supports courts. Compte tenu de la complexité de HaggisGAC, nous le présentons comme une version optimisée d'un algorithme plus simple appelé ShortGAC. Bien que des expériences démontrent l'efficacité de ShortGAC par rapport à d'autres algorithmes généraux de propagation lorsque l'on dispose d'un ensemble compact de supports courts, nous montrons théoriquement et expérimentalement que HaggisGAC est encore meilleur. Nous constatons également que HaggisGAC surpasse GAC-Schema lorsqu'il s'agit de supports complets. Nous introduisons aussi une variante appelée HaggisGAC-Stable, adaptée pour éviter des calculs inutiles lors du retour arrière, et qui dans certains cas peut être plus rapide et entraîner des réductions significatives de l'utilisation mémoire. Tous les algorithmes proposés sont excellents pour propager des disjonctions de contraintes. Dans toutes les expériences menées sur des disjonctions, nous avons constaté que nos algorithmes étaient plus rapides que Constructive Or et GAC-Schema d'au moins un ordre de grandeur, et jusqu'à trois ordres de grandeur.
fr
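The notion of a short support in the abstract above can be illustrated with a toy brute-force check: a partial assignment is a support when every completion of the untouched variables satisfies the constraint. The constraint and variable names below are hypothetical examples, not from the paper, and the exhaustive check stands in for the incremental bookkeeping that ShortGAC/HaggisGAC actually perform.

```python
from itertools import product

def satisfies(assignment):
    # Toy constraint on three 0/1 variables: x + y + z >= 1
    # (a hypothetical example, not from the paper).
    return sum(assignment.values()) >= 1

def is_short_support(partial, variables, domain):
    """A partial assignment is a (short) support if EVERY completion
    of the remaining variables satisfies the constraint."""
    rest = [v for v in variables if v not in partial]
    for values in product(domain, repeat=len(rest)):
        full = dict(partial, **dict(zip(rest, values)))
        if not satisfies(full):
            return False
    return True

variables = ["x", "y", "z"]
domain = (0, 1)

# {x: 1} alone justifies every value of y and z: a short support.
print(is_short_support({"x": 1}, variables, domain))  # True
# {x: 0} does not: the completion y=0, z=0 violates the constraint.
print(is_short_support({"x": 0}, variables, domain))  # False
```

Because the single pair (x, 1) supports all values of the other variables at once, a propagator that finds it can skip examining y and z entirely, which is the saving the paper exploits.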
The skip-gram model for learning word embeddings (Mikolov et al. 2013) has been widely popular, and DeepWalk (Perozzi et al. 2014), among other methods, has extended the model to learning node representations from networks. Recent work of Qiu et al. (2018) provides a closed-form expression for the DeepWalk objective, obviating the need for sampling for small datasets and improving accuracy. In these methods, the "window size" T within which words or nodes are considered to co-occur is a key hyperparameter. We study the objective in the limit as T goes to infinity, which allows us to simplify the expression of Qiu et al. We prove that this limiting objective corresponds to factoring a simple transformation of the pseudoinverse of the graph Laplacian, linking DeepWalk to extensive prior work in spectral graph embeddings. Further, we show that by applying a simple nonlinear entrywise transformation to this pseudoinverse, we recover a good approximation of the finite-T objective and embeddings that are competitive with those from DeepWalk and other skip-gram methods in multi-label classification. Surprisingly, we find that even simple binary thresholding of the Laplacian pseudoinverse is often competitive, suggesting that the core advancement of recent methods is a nonlinearity on top of the classical spectral embedding approach.
គំរូ skip-gram សម្រាប់រៀនការបង្កប់ពាក្យ (Mikolov et al. 2013) បានក្លាយជាទូទៅនិយមយ៉ាងខ្លាំង ហើយ DeepWalk (Perozzi et al. 2014) ព្រមទាំងវិធីសាស្ត្រផ្សេងៗទៀត បានពង្រីកគំរូនេះទៅកាន់ការរៀនតំណាងឱ្យចំណុចពីបណ្តាញ។ ការងារថ្មីៗរបស់ Qiu et al. (2018) ផ្តល់នូវកន្សោមបិទជិតសម្រាប់គោលដៅ DeepWalk ដែលធ្វើឱ្យមិនចាំបាច់គំរូសំរាប់ទិន្នន័យតូច ហើយកែលម្អភាពត្រឹមត្រូវ។ ក្នុងវិធីសាស្ត្រទាំងនេះ ទំហំ "បង្អួច" T ដែលក្នុងនោះពាក្យ ឬ ចំណុចត្រូវបានចាត់ទុកថាកើតឡើងរួមគ្នា គឺជាប៉ារ៉ាម៉ែត្រសំខាន់មួយ។ យើងសិក្សាគោលដៅនៅពេលដែល T ទៅដល់អនន្ត ដែលអនុញ្ញាតឱ្យយើងសាមញ្ញកន្សោមរបស់ Qiu et al.។ យើងបញ្ជាក់ថា គោលដៅកំណត់នេះត្រូវនឹងការបំបែកការផ្លាស់ប្តូរសាមញ្ញនៃបញ្ច្រាសប៉សែូដនៃឡាប្លាស្យង់ក្រាហ្វ ដែលភ្ជាប់ DeepWalk ទៅនឹងការងារមុនៗច្រើនក្នុងការបង្កប់ក្រាហ្វតាមវិធីស្ពេគត្រាល។ បន្ថែមទៀត យើងបង្ហាញថា ដោយអនុវត្តនូវការផ្លាស់ប្តូរប៉សែូដឡាប្លាស្យង់តាមរបៀបក្រៅលីនេអ៊ែរសាមញ្ញ យើងទាញបាននូវការប៉ាន់ស្មានល្អនៃគោលដៅ T កំណត់ និងការបង្កប់ដែលមានកម្រិតប្រកួតប្រជែងជាមួយនឹងការបង្កប់ពី DeepWalk និងវិធីសាស្ត្រ skip-gram ផ្សេងទៀតក្នុងការចាត់ថ្នាក់ពហុស្លាក។ គួរឱ្យភ្ញាក់ផ្អើល យើងរកឃើញថា ការកំណត់ដោយគ្រាប់គោលគោលពីរសាមញ្ញលើបញ្ច្រាសប៉សែូដឡាប្លាស្យង់ ក៏ច្រើនតែប្រកួតប្រជែងដែរ ដែលបង្ហាញថា ការអភិវឌ្ឍសំខាន់នៃវិធីសាស្ត្រថ្មីៗគឺជាការក្រៅលីនេអ៊ែរលើវិធីសាស្ត្របង្កប់តាមវិធីស្ពេគត្រាលបុរាណ។
km
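The limiting pipeline described in the abstract above (Laplacian pseudoinverse, entrywise nonlinearity, low-rank factorization) fits in a few lines of numpy. The graph, the threshold-at-zero nonlinearity, and the embedding dimension below are illustrative choices, not the paper's exact construction.

```python
import numpy as np

# Tiny undirected graph: a 5-node path (illustrative, not from the paper).
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0

D = np.diag(A.sum(axis=1))
L = D - A                   # combinatorial graph Laplacian
L_pinv = np.linalg.pinv(L)  # Moore-Penrose pseudoinverse

# Entrywise nonlinearity: the binary thresholding mentioned in the
# abstract (thresholding at zero is one simple choice).
M = (L_pinv > 0).astype(float)

# A rank-d factorization of the transformed matrix gives the embedding.
d = 2
U, s, Vt = np.linalg.svd(M)
embedding = U[:, :d] * np.sqrt(s[:d])
print(embedding.shape)  # (5, 2)
```

Nodes whose rows of the thresholded pseudoinverse look alike receive nearby embeddings, which is the spectral-embedding connection the paper makes precise.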
Special-purpose constraint propagation algorithms frequently make implicit use of short supports -- by examining a subset of the variables, they can infer support (a justification that a variable-value pair may still form part of an assignment that satisfies the constraint) for all other variables and values and save substantial work -- but short supports have not been studied in their own right. The two main contributions of this paper are the identification of short supports as important for constraint propagation, and the introduction of HaggisGAC, an efficient and effective general-purpose propagation algorithm for exploiting short supports. Given the complexity of HaggisGAC, we present it as an optimised version of a simpler algorithm, ShortGAC. Although experiments demonstrate the efficiency of ShortGAC compared with other general-purpose propagation algorithms where a compact set of short supports is available, we show theoretically and experimentally that HaggisGAC is even better. We also find that HaggisGAC performs better than GAC-Schema on full-length supports. We also introduce a variant algorithm, HaggisGAC-Stable, which is adapted to avoid work on backtracking and in some cases can be faster and have significant reductions in memory use. All the proposed algorithms are excellent for propagating disjunctions of constraints. In all experiments with disjunctions we found our algorithms to be faster than Constructive Or and GAC-Schema by at least an order of magnitude, and up to three orders of magnitude.
Speziell angepasste Constraints-Propagations-Algorithmen verwenden häufig implizit kurze Supports – indem sie eine Teilmenge der Variablen untersuchen, können sie Support (eine Begründung dafür, dass ein Variablen-Wert-Paar noch Teil einer Belegung sein kann, die die Bedingung erfüllt) für alle anderen Variablen und Werte erschließen und so erheblichen Aufwand einsparen – doch kurze Supports wurden bisher nicht eigenständig untersucht. Die beiden Hauptbeiträge dieser Arbeit sind die Feststellung, dass kurze Supports für die Constraints-Propagation wichtig sind, sowie die Einführung von HaggisGAC, einem effizienten und effektiven Allzweck-Propagationsalgorithmus zur Nutzung kurzer Supports. Angesichts der Komplexität von HaggisGAC stellen wir diesen als optimierte Version eines einfacheren Algorithmus, ShortGAC, vor. Obwohl Experimente die Effizienz von ShortGAC im Vergleich zu anderen Allzweck-Propagationsalgorithmen zeigen, wenn ein kompaktes Set kurzer Supports verfügbar ist, zeigen wir theoretisch und experimentell, dass HaggisGAC noch besser ist. Außerdem stellen wir fest, dass HaggisGAC bei vollständigen Supports besser abschneidet als GAC-Schema. Wir führen auch eine Algorithmusvariante, HaggisGAC-Stable, ein, die so angepasst wurde, dass sie Aufwand beim Backtracking vermeidet, in einigen Fällen schneller ist und deutliche Reduktionen im Speicherverbrauch aufweist. Alle vorgeschlagenen Algorithmen eignen sich hervorragend zur Propagierung von Disjunktionen von Constraints. In allen Experimenten mit Disjunktionen erwiesen sich unsere Algorithmen als mindestens eine Größenordnung schneller als Constructive Or und GAC-Schema, und bis zu drei Größenordnungen schneller.
de
Objective: This work aimed to demonstrate the effectiveness of a hybrid approach based on the Sentence BERT model and the retrofitting algorithm to compute relatedness between any two biomedical concepts. Materials and Methods: We generated concept vectors by encoding concept preferred terms using ELMo, BERT, and Sentence BERT models. We used BioELMo and Clinical ELMo. We used Ontology Knowledge Free (OKF) models like PubMedBERT, BioBERT, and BioClinicalBERT, and Ontology Knowledge Injected (OKI) models like SapBERT, CoderBERT, KbBERT, and UmlsBERT. We trained all the BERT models using a Siamese network on the SNLI and STSb datasets to allow the models to learn more semantic information at the phrase or sentence level so that they can represent multi-word concepts better. Finally, to inject ontology relationship knowledge into concept vectors, we used the retrofitting algorithm and concepts from various UMLS relationships. We evaluated our hybrid approach on four publicly available datasets, which include the recently released EHR-RelB dataset. EHR-RelB is the largest publicly available relatedness dataset, in which 89% of terms are multi-word, which makes it more challenging. Results: Sentence BERT models mostly outperformed the corresponding BERT models. The concept vectors generated using the Sentence BERT model based on SapBERT and retrofitted using UMLS-related concepts achieved the best results on all four datasets. Conclusions: Sentence BERT models are more effective than BERT models at computing relatedness scores in most cases. Injecting ontology knowledge into concept vectors further enhances their quality and contributes to better relatedness scores.
Tujuan: Penelitian ini bertujuan menunjukkan efektivitas pendekatan hibrida berbasis model Sentence BERT dan algoritma retrofitting untuk menghitung keterkaitan antara dua konsep biomedis. Bahan dan Metode: Kami menghasilkan vektor konsep dengan mengenkoding istilah utama konsep menggunakan model ELMo, BERT, dan Sentence BERT. Kami menggunakan BioELMo dan Clinical ELMo. Kami menggunakan model Ontologi Knowledge Free (OKF) seperti PubMedBERT, BioBERT, BioClinicalBERT, dan model Ontologi Knowledge Injected (OKI) seperti SapBERT, CoderBERT, KbBERT, dan UmlsBERT. Kami melatih semua model BERT menggunakan jaringan Siamese pada dataset SNLI dan STSb agar model dapat mempelajari informasi semantik lebih lanjut pada tingkat frasa atau kalimat sehingga mampu merepresentasikan konsep multi-kata dengan lebih baik. Akhirnya, untuk menyuntikkan pengetahuan relasi ontologi ke dalam vektor konsep, kami menggunakan algoritma retrofitting dan konsep dari berbagai relasi UMLS. Kami mengevaluasi pendekatan hibrida kami pada empat dataset yang tersedia secara publik, termasuk dataset EHR-RelB yang baru dirilis. EHR-RelB merupakan dataset keterkaitan terbesar yang tersedia secara publik, di mana 89% istilahnya terdiri dari multi-kata, sehingga membuatnya lebih menantang. Hasil: Model Sentence BERT sebagian besar mengungguli model BERT yang sesuai. Vektor konsep yang dihasilkan menggunakan model Sentence BERT berbasis SapBERT dan di-retrofit menggunakan konsep terkait UMLS mencapai hasil terbaik pada keempat dataset. Kesimpulan: Model Sentence BERT lebih efektif dibandingkan model BERT dalam menghitung skor keterkaitan pada sebagian besar kasus. Menyuntikkan pengetahuan ontologi ke dalam vektor konsep semakin meningkatkan kualitasnya dan berkontribusi pada skor keterkaitan yang lebih baik.
id
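The retrofitting step named in the abstract above can be sketched with the standard iterative update (in the style of Faruqui et al.'s algorithm, with unit weights): each concept vector is pulled toward its ontology neighbours while staying anchored to its original distributional vector. The concept names, vectors, and weights below are hypothetical.

```python
import numpy as np

def retrofit(vectors, edges, alpha=1.0, beta=1.0, iters=10):
    """Pull each concept vector toward its ontology neighbours while
    staying close to its original (distributional) vector.
    vectors: dict name -> np.ndarray; edges: dict name -> neighbour names."""
    new = {k: v.copy() for k, v in vectors.items()}
    for _ in range(iters):
        for word, nbrs in edges.items():
            nbrs = [n for n in nbrs if n in new]
            if not nbrs:
                continue
            num = alpha * vectors[word] + beta * sum(new[n] for n in nbrs)
            new[word] = num / (alpha + beta * len(nbrs))
    return new

# Toy 2-D vectors for three concepts (hypothetical values).
vecs = {"myocardial_infarction": np.array([1.0, 0.0]),
        "heart_attack":          np.array([0.0, 1.0]),
        "stroke":                np.array([1.0, 1.0])}
edges = {"myocardial_infarction": ["heart_attack"],
         "heart_attack": ["myocardial_infarction"]}

fitted = retrofit(vecs, edges)
before = np.linalg.norm(vecs["myocardial_infarction"] - vecs["heart_attack"])
after = np.linalg.norm(fitted["myocardial_infarction"] - fitted["heart_attack"])
# Concepts linked in the ontology end up closer together than before.
print(after < before)  # True
```

Concepts with no ontology edges (here, "stroke") keep their original vectors, so injecting relationship knowledge only moves the vectors it has evidence about.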
We investigate the length of the period of validity of a classical description for the cosmic axion field. To this end, we first show that we can understand the oscillating axion solution as the expectation value over an underlying coherent quantum state. Once we include self-interaction of the axion, the quantum state evolves so that the expectation value over it starts to deviate from the classical solution. The time-scale of this process defines the quantum break-time. For the hypothetical dark matter axion field in our Universe, we show that the quantum break-time exceeds the age of the Universe by many orders of magnitude. This conclusion is independent of specific properties of the axion model. Thus, experimental searches based on the classical approximation of the oscillating cosmic axion field are fully justified. Additionally, we point out that the distinction between classical nonlinearities and true quantum effects is crucial for calculating the quantum break-time in any system. Our analysis can also be applied to other types of dark matter that are described as classical fluids in the mean field approximation.
Prozkoumáváme délku doby platnosti klasického popisu kosmického pole axionů. Za tímto účelem nejprve ukážeme, že oscilující řešení pro axion můžeme chápat jako střední hodnotu přes základní koherentní kvantový stav. Jakmile zahrneme samointerakci axionu, kvantový stav se vyvíjí tak, že střední hodnota přes tento stav se začne odchylovat od klasického řešení. Časová škála tohoto procesu definuje kvantový čas rozpadu. Pro hypotetické pole axionové temné hmoty ve vesmíru ukazujeme, že kvantový čas rozpadu převyšuje stáří vesmíru o mnoho řádů. Tento závěr je nezávislý na konkrétních vlastnostech modelu axionu. Experimentální vyhledávání založená na klasické aproximaci oscilujícího kosmického pole axionů jsou tedy plně oprávněná. Dále upozorňujeme, že rozlišení mezi klasickými nelinearitami a skutečnými kvantovými efekty je klíčové pro výpočet kvantového času rozpadu v jakémkoli systému. Naše analýza může být rovněž použita i na jiné typy temné hmoty, které jsou v rámci středního pole popisovány jako klasické tekutiny.
cs
Mechanics can be founded on a principle stating the uncertainty in the position of an observable particle, delta-q, as a function of its motion relative to the observer, expressed in a trajectory representation. From this principle, p.delta-q = const., where p is the momentum conjugate to q, mechanical laws are derived, and the meanings of the Lagrangian and Hamiltonian functions are discussed. The connection between the presented principle and Hamilton's Least Action Principle is examined. For a particle hidden from direct observation, the position uncertainty is determined by the enclosing boundaries and is thus disengaged from its momentum. Heat, as a non-mechanical magnitude, stems from this fact, and thermodynamical magnitudes have a direct expression in the presented formalism. It is finally shown that, in terms of Information Theory, mechanical laws have a simple interpretation: kinetic and potential energies are expressions of the information on momentum and position, respectively, and the law of conservation of energy expresses the absence of information exchange in mechanical interactions.
力学可基于一个原理建立,该原理将可观测粒子的位置不确定性 delta-q 表示为粒子相对于观察者运动的函数,并以轨迹表示形式表达。从这一原理 p·delta-q = 常数 出发(其中 p 为与 q 共轭的动量),可推导出力学定律,并讨论拉格朗日函数和哈密顿函数的意义。本文还考察了所提出的原理与哈密顿最小作用量原理之间的联系。对于无法直接观测的粒子,其位置不确定性由包围它的边界决定,因此与动量无关。热量作为一种非力学量,正源于这一事实,而热力学量在该形式体系中具有直接的表达。最后表明,从信息论的角度来看,力学定律具有简单的解释:动能和势能分别是关于动量和位置的信息的表达,而能量守恒定律则表达了在力学相互作用中不存在信息交换。
zh
We present a model of worldwide crisis contagion based on the Google matrix analysis of the world trade network obtained from the UN Comtrade database. The fraction of bankrupted countries exhibits an \textit{on-off} phase transition governed by a bankruptcy threshold $\kappa$ related to the trade balance of the countries. For $\kappa>\kappa_c$, the contagion is circumscribed to less than 10\% of the countries, whereas, for $\kappa<\kappa_c$, the crisis is global, with about 90\% of the countries going to bankruptcy. We measure the total cost of the crisis during the contagion process. In addition to providing contagion scenarios, our model allows us to probe the structural trading dependencies between countries. For different networks extracted from the world trade exchanges of the last two decades, the global crisis comes from the Western world. In particular, the source of the global crisis is systematically the Old Continent and the Americas (mainly the US and Mexico). Besides the economy of Australia, those of Asian countries, such as China, India, Indonesia, Malaysia and Thailand, are the last to fall during the contagion. Also, the four BRIC countries are among the most robust to the world trade crisis.
Presentiamo un modello di contagio globale delle crisi basato sull'analisi della matrice Google della rete mondiale del commercio, ottenuta dal database UN Comtrade. La frazione di paesi in bancarotta mostra una transizione di fase \textit{on-off} governata da una soglia di bancarotta $\kappa$ correlata al saldo commerciale dei paesi. Per $\kappa>\kappa_c$, il contagio è limitato a meno del 10\% dei paesi, mentre per $\kappa<\kappa_c$ la crisi è globale, con circa il 90\% dei paesi che finiscono in bancarotta. Misuriamo il costo totale della crisi durante il processo di contagio. Oltre a fornire scenari di contagio, il nostro modello consente di analizzare le dipendenze strutturali commerciali tra i paesi. Per diverse reti estratte dagli scambi commerciali mondiali degli ultimi due decenni, la crisi globale ha origine dal mondo occidentale. In particolare, la fonte della crisi globale è sistematicamente il Vecchio Continente e le Americhe (principalmente Stati Uniti e Messico). Oltre all'economia dell'Australia, quelle dei paesi asiatici, come Cina, India, Indonesia, Malesia e Thailandia, sono le ultime a crollare durante il contagio. Inoltre, i quattro BRIC figurano tra i paesi più resilienti alla crisi del commercio mondiale.
it
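The on-off threshold behaviour described in the abstract above can be illustrated with a toy bankruptcy cascade: a country fails when its trade balance with still-solvent partners drops below a fraction $\kappa$ of its trade volume, and failures propagate. This is a simple balance-threshold model, not the Google-matrix analysis of the paper; the four-country ring trade matrix is hypothetical.

```python
import numpy as np

def contagion(T, kappa, seed_country):
    """Toy cascade on a trade matrix T (T[i, j] = exports from i to j).
    A country goes bankrupt when its balance with solvent partners
    falls below -kappa times its total trade volume."""
    n = T.shape[0]
    alive = np.ones(n, dtype=bool)
    alive[seed_country] = False
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if not alive[i]:
                continue
            exports = T[i, alive].sum()   # sales to still-solvent partners
            imports = T[alive, i].sum()   # purchases from solvent partners
            volume = T[i].sum() + T[:, i].sum()
            if exports - imports < -kappa * volume:
                alive[i] = False
                changed = True
    return int((~alive).sum())

n = 4
T = np.zeros((n, n))
for i in range(n):
    T[i, (i + 1) % n] = 1.0  # each country exports one unit to the next

print(contagion(T, kappa=0.6, seed_country=0))  # 1: contagion contained
print(contagion(T, kappa=0.4, seed_country=0))  # 3: the cascade spreads round the ring
```

Even in this toy, a small change of the threshold $\kappa$ flips the outcome from an isolated failure to a near-global cascade, which is the qualitative on-off transition the paper quantifies.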
In this contribution, a variational diffuse modeling framework for cracks in heterogeneous media is presented. A static order parameter smoothly bridges the discontinuity at material interfaces, while an evolving phase-field captures the regularized crack. The key novelty is the combination of a strain energy split with a partial rank-I relaxation in the vicinity of the diffuse interface. The former is necessary to account for physically meaningful crack kinematics like crack closure; the latter ensures the mechanical jump conditions throughout the diffuse region. The model is verified by a convergence study, where a circular bi-material disc with and without a crack is subjected to radial loads. For the uncracked case, analytical solutions are taken as reference. In a second step, the model is applied to crack propagation, where a meaningful influence on crack branching is observed, which underlines the necessity of a reasonable homogenization scheme. The presented model is particularly relevant for the combination of any variational strain energy split in the fracture phase-field model with a diffuse modeling approach for material heterogeneities.
本文提出了一种用于非均质介质中裂纹的变分扩散建模框架。静态序参量在材料界面处平滑地连接不连续性,而演化的相场则描述了正则化的裂纹。该模型的核心创新在于将应变能分解与在扩散界面附近的部分一阶松弛相结合:前者用于描述裂纹闭合等符合物理规律的裂纹运动学行为,后者则确保在整个扩散区域内满足力学跳跃条件。通过收敛性研究对该模型进行了验证,研究中对含裂纹与不含裂纹的圆形双材料圆盘施加径向载荷。对于无裂纹情况,采用解析解作为参考。进一步地,将该模型应用于裂纹扩展分析,观察到裂纹分叉行为受到显著影响,突显了合理均质化方案的必要性。本文提出的模型特别适用于将断裂相场模型中的任意变分应变能分解方法与材料非均质性的扩散建模方法相结合的情形。
zh
Effective non-parametric density estimation is a key challenge in high-dimensional multivariate data analysis. In this paper, we propose a novel approach that builds upon tensor factorization tools. Any multivariate density can be represented by its characteristic function, via the Fourier transform. If the sought density is compactly supported, then its characteristic function can be approximated, within controllable error, by a finite tensor of leading Fourier coefficients, whose size depends on the smoothness of the underlying density. This tensor can be naturally estimated from observed realizations of the random vector of interest, via sample averaging. In order to circumvent the curse of dimensionality, we introduce a low-rank model of this characteristic tensor, which significantly improves the density estimate especially for high-dimensional data and/or in the sample-starved regime. By virtue of the uniqueness of low-rank tensor decomposition, under certain conditions, our method enables learning the true data-generating distribution. We demonstrate the very promising performance of the proposed method using several measured datasets.
ການຄາດຄະເນຄວາມໜາແໜ້ນທີ່ບໍ່ແມ່ນພາລາມິເຕີຢ່າງມີປະສິດທິຜົນ ແມ່ນຄວາມທ້າທາຍທີ່ສຳຄັນໃນການວິເຄາະຂໍ້ມູນພົຫຼາຕົວປ່ຽນມິຕິສູງ. ໃນບົດຄວາມນີ້, ພວກເຮົາຂໍເອີ້ນວິທີການໃໝ່ທີ່ສ້າງຕັ້ງຂຶ້ນໂດຍອີງໃສ່ເຄື່ອງມືການປັດສະວະເຕັນສະ. ຄວາມໜາແໜ້ນທີ່ພົຫຼາຕົວປ່ຽນໃດໆສາມາດຖືກສະແດງອອກໂດຍຟັງຊັ່ນຄຸນລັກສະນະຂອງມັນ ຜ່ານການປ່ຽນແປງຟູເຣີເຍ. ຖ້າຄວາມໜາແໜ້ນທີ່ກຳລັງຊອກຫາມີການສະໜັບສະໜູນແບບບີບອັດ, ດັ່ງນັ້ນຟັງຊັ່ນຄຸນລັກສະນະຂອງມັນສາມາດຖືກຄາດຄະເນໄດ້, ພາຍໃນຂໍ້ຜິດພາດທີ່ຄວບຄຸມໄດ້, ໂດຍເຕັນສະຈຳກັດຂອງສຳປະສິດຟູເຣີເຍຊັ້ນນຳ, ຂະໜາດຂອງມັນຂຶ້ນກັບລະດັບຄວາມລຽບຂອງຄວາມໜາແໜ້ນທີ່ຢູ່ເບື້ອງຫຼັງ. ເຕັນສະນີ້ສາມາດຖືກຄາດຄະເນຢ່າງເປັນທຳມະຊາດຈາກການສັງເກດຄ່າຈິງທີ່ເກີດຂຶ້ນຈາກເວັກເຕີສຸ່ມທີ່ກ່ຽວຂ້ອງ, ຜ່ານການເອົາສະເລ່ຍຕົວຢ່າງ. ເພື່ອຫຼີກລ່ຽງບັນຫາຂອງມິຕິສູງ, ພວກເຮົານຳສະເໜີຮູບແບບເຕັນສະຄຸນນະພາບຕ່ຳ, ທີ່ປັບປຸງການຄາດຄະເນຄວາມໜາແໜ້ນຢ່າງຫຼວງຫຼາຍໂດຍສະເພາະສຳລັບຂໍ້ມູນມິຕິສູງ ແລະ/ຫຼື ໃນສະພາບການຂາດຕົວຢ່າງ. ເນື່ອງຈາກຄວາມເປັນເອກະລັກຂອງການແຍກຕົວເຕັນສະຄຸນນະພາບຕ່ຳ, ໃຕ້ເງື່ອນໄຂບາງຢ່າງ, ວິທີການຂອງພວກເຮົາອະນຸຍາດໃຫ້ຮຽນຮູ້ການຈຳໜ່າຍຂໍ້ມູນທີ່ຖືກຕ້ອງ. ພວກເຮົາສະແດງໃຫ້ເຫັນເຖິງການປະຕິບັດງານທີ່ມີແນວໂນ້ມດີຫຼາຍຂອງວິທີການທີ່ສະເໜີໄວ້ໂດຍໃຊ້ຊຸດຂໍ້ມູນທີ່ວັດແທກໄດ້ຫຼາຍຊຸດ.
lo
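The core estimator described in the abstract above (sample-averaged Fourier coefficients of the characteristic function, then a truncated Fourier reconstruction) is easy to sketch in one dimension. The Beta-distributed data, truncation order, and grid are illustrative; the paper's contribution is the multivariate, low-rank tensor version of this idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Samples from a smooth density supported on [0, 1] (a 1-D toy).
x = rng.beta(2, 5, size=5000)

# Empirical characteristic function at integer frequencies:
# c_k = E[exp(-2*pi*i*k*X)], estimated by sample averaging.
K = 15
ks = np.arange(-K, K + 1)
c = np.exp(-2j * np.pi * ks[:, None] * x[None, :]).mean(axis=1)

# Truncated Fourier reconstruction of the density on a grid.
grid = np.linspace(0.0, 1.0, 200)
f_hat = (c[:, None] * np.exp(2j * np.pi * ks[:, None] * grid[None, :])).sum(axis=0).real

# c_0 = 1 by construction, so the reconstruction integrates to ~1.
dx = grid[1] - grid[0]
integral = dx * (f_hat.sum() - 0.5 * (f_hat[0] + f_hat[-1]))
print(round(integral, 3))  # ~1.0
```

In $d$ dimensions the coefficients $c_{k_1,\dots,k_d}$ form a $(2K+1)^d$ tensor; the low-rank model in the paper is what keeps this object tractable as $d$ grows.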
We consider a broadcast communication system over parallel sub-channels where the transmitter sends three messages: a common message to two users, and two confidential messages, one to each user, which need to be kept secret from the other user. We assume partial channel state information at the transmitter (CSIT), stemming from noisy channel estimation. The first contribution of this paper is the characterization of the secrecy capacity region boundary as the solution of weighted sum-rate problems, with suitable weights. Partial CSIT is addressed by adding a margin to the estimated channel gains. The second contribution of this paper is the solution of this problem in an almost closed form, where only two single real parameters must be optimized, e.g., through dichotomic searches. On the one hand, the considered problem generalizes existing literature where only two out of the three messages are transmitted. On the other hand, the solution also finds practical applications in the resource allocation of orthogonal frequency division multiplexing (OFDM) systems with both secrecy and fairness constraints.
Consideramos un sistema de comunicación de difusión sobre subcanales paralelos en el que el transmisor envía tres mensajes: un mensaje común a dos usuarios y dos mensajes confidenciales, uno para cada usuario, que deben mantenerse en secreto respecto al otro usuario. Suponemos información parcial del estado del canal en el transmisor (CSIT, por sus siglas en inglés), proveniente de una estimación ruidosa del canal. La primera contribución de este artículo es la caracterización del límite de la región de capacidad de secrecía como solución de problemas de suma ponderada de tasas, con pesos adecuados. El CSIT parcial se aborda añadiendo un margen a las ganancias del canal estimadas. La segunda contribución del artículo es la solución de este problema de forma casi cerrada, en la que solo deben optimizarse dos parámetros reales, por ejemplo, mediante búsquedas dicotómicas. Por un lado, el problema considerado generaliza trabajos previos existentes en los que solo se transmiten dos de los tres mensajes. Por otro lado, la solución encuentra también aplicaciones prácticas en la asignación de recursos en sistemas de multiplexación por división ortogonal de frecuencias (OFDM) con restricciones tanto de secrecía como de equidad.
es
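The dichotomic search mentioned in the abstract above is the standard tool for optimizing a single real parameter of a unimodal objective. The concave rate-vs-power trade-off below is a hypothetical stand-in, not the paper's actual weighted sum-rate expression.

```python
import math

def dichotomic_max(f, lo, hi, tol=1e-9):
    """Maximise a unimodal function on [lo, hi] by repeatedly
    discarding one third of the search interval."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

# Hypothetical concave objective: log-rate minus a power cost.
f = lambda p: math.log(1 + 2 * p) - 0.5 * p

p_star = dichotomic_max(f, 0.0, 10.0)
print(round(p_star, 3))  # stationary point: 2/(1+2p) = 0.5, i.e. p = 1.5
```

Because each iteration shrinks the interval by a constant factor, two nested searches of this kind cost only a few dozen objective evaluations each, which is why reducing the boundary characterization to two scalar parameters makes the allocation practical.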
This paper aims at explaining the two phases in the observed specific star formation rate (sSFR), namely the high (>3/Gyr) values at z>2 and the smooth decrease since z=2. In order to do this, we compare to observations the specific star formation rate evolution predicted by well calibrated models of chemical evolution for elliptical and spiral galaxies, using the additional constraints on the mean stellar ages of these galaxies (at a given mass). We can conclude that the two phases of the sSFR evolution across cosmic time are due to different populations of galaxies. At z>2 the contribution comes from spheroids: the progenitors of present-day massive ellipticals (which feature the highest sSFR) as well as halos and bulges in spirals (which contribute with average and lower-than-average sSFR). In each single galaxy the sSFR decreases rapidly and the star formation stops in <1 Gyr. However the combination of different generations of ellipticals in formation might result in an apparent lack of strong evolution of the sSFR (averaged over a population) at high redshift. The z<2 decrease is due to the slow evolution of the gas fraction in discs, modulated by the gas accretion history and regulated by the Schmidt law. The Milky Way makes no exception to this behaviour.
Tento článek si klade za cíl vysvětlit dvě fáze pozorované specifické tvorby hvězd (sSFR), a to vysoké hodnoty (>3/Gyr) při z>2 a hladký pokles od z=2. Za tímto účelem porovnáváme s pozorováními vývoj specifické tvorby hvězd předpovídaný dobře kalibrovanými modely chemického vývoje eliptických a spirálních galaxií, přičemž využíváme dodatečná omezení týkající se středního stáří hvězd v těchto galaxiích (při dané hmotnosti). Můžeme uzavřít, že dvě fáze vývoje sSFR v průběhu kosmického času jsou způsobeny různými populacemi galaxií. Při z>2 pochází příspěvek od sféroidů: předchůdců dnešních hmotných eliptických galaxií (které vykazují nejvyšší sSFR) i od hal a výdutí ve spirálách (které přispívají průměrnou a podprůměrnou sSFR). V každé jednotlivé galaxii sSFR rychle klesá a tvorba hvězd se ukončí za méně než 1 Gyr. Kombinace různých generací právě vznikajících eliptických galaxií však může vést ke zdánlivému nedostatku silného vývoje sSFR (průměrované přes populaci) ve vysokých červených posunech. Pokles při z<2 je způsoben pomalým vývojem zlomku plynu v discích, modulovaným historií přítoku plynu a regulovaným Schmidtovým zákonem. Mléčná dráha není v tomto chování výjimkou.
cs
Bayesian optimization through Gaussian process regression is an effective method of optimizing an unknown function for which every measurement is expensive. It approximates the objective function and then recommends a new measurement point to try out. This recommendation is usually selected by optimizing a given acquisition function. After a sufficient number of measurements, a recommendation about the maximum is made. However, a key realization is that the maximum of a Gaussian process is not a deterministic point, but a random variable with a distribution of its own. This distribution cannot be calculated analytically. Our main contribution is an algorithm, inspired by sequential Monte Carlo samplers, that approximates this maximum distribution. Subsequently, by taking samples from this distribution, we enable Thompson sampling to be applied to (multi-armed bandit) optimization problems with a continuous input space. All this is done without requiring the optimization of a nonlinear acquisition function. Experiments have shown that the resulting optimization method performs competitively at keeping the cumulative regret limited.
通过高斯过程回归的贝叶斯优化是一种有效方法,用于优化未知且每次测量成本高昂的函数。该方法首先对目标函数进行近似,然后推荐下一个待测量的点。该推荐通常通过优化一个给定的采集函数来确定。在进行足够多次测量后,最终给出关于函数最大值的建议。然而,一个关键的认识是,高斯过程的最大值并非一个确定性点,而是一个具有自身分布的随机变量。该分布无法通过解析方法计算。我们的主要贡献是一种受序列蒙特卡洛采样器启发的算法,用于近似该最大值的分布。随后,通过对该分布进行采样,我们使得汤普森采样能够应用于具有连续输入空间的(多臂老虎机)优化问题。整个过程无需优化非线性采集函数。实验表明,由此产生的优化方法在限制累积遗憾方面表现出具有竞争力的性能。
zh
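The "distribution of the maximum" idea in the abstract above can be conveyed with a crude grid-based Monte Carlo stand-in for the paper's sequential Monte Carlo sampler: draw posterior paths of the Gaussian process and record where each path peaks. The kernel length-scale, data, and grid below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel on 1-D inputs."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

# A few noisy measurements of the unknown function (hypothetical data).
X = np.array([0.1, 0.4, 0.9])
y = np.array([0.2, 1.0, -0.5])

grid = np.linspace(0.0, 1.0, 200)
K = rbf(X, X) + 1e-4 * np.eye(len(X))
Ks = rbf(grid, X)

# Gaussian process posterior mean and covariance on the grid.
Kinv = np.linalg.inv(K)
mu = Ks @ Kinv @ y
cov = rbf(grid, grid) - Ks @ Kinv @ Ks.T

# Approximate the maximum distribution: sample posterior paths and
# record each path's argmax. A single such draw is exactly a
# Thompson-sampling recommendation for the next measurement.
paths = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(len(grid)), size=2000)
argmaxes = grid[paths.argmax(axis=1)]
print(round(float(np.median(argmaxes)), 2))  # concentrates near the best observation, x ~ 0.4
```

The histogram of `argmaxes` is the approximated maximum distribution; no nonlinear acquisition function was optimized, only posterior paths were drawn and maximized on the grid.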
The evolutionary persistence of symbiotic associations is a puzzle. Adaptation should eliminate cooperative traits if it is possible to enjoy the advantages of cooperation without reciprocating - a facet of cooperation known in game theory as the Prisoner's Dilemma. Despite this barrier, symbioses are widespread, and may have been necessary for the evolution of complex life. The discovery of strategies such as tit-for-tat has been presented as a general solution to the problem of cooperation. However, this only holds for within-species cooperation, where a single strategy will come to dominate the population. In a symbiotic association each species may have a different strategy, and the theoretical analysis of the single species problem is no guide to the outcome. We present basic analysis of two-species cooperation and show that a species with a fast adaptation rate is enslaved by a slowly evolving one. Paradoxically, the rapidly evolving species becomes highly cooperative, whereas the slowly evolving one gives little in return. This helps understand the occurrence of endosymbioses where the host benefits, but the symbionts appear to gain little from the association.
共生关系的长期进化存在是一个谜题。如果生物能够享受合作带来的好处而不必给予回报,那么适应性进化就应淘汰合作性状——这种合作困境在博弈论中被称为“囚徒困境”。尽管存在这一障碍,共生现象却广泛存在,并且可能对复杂生命的演化至关重要。诸如“以牙还牙”策略的发现曾被视为解决合作问题的普遍方案。然而,这仅适用于种内合作,即单一策略最终在种群中占据主导地位的情况。在共生关系中,每个物种可能采取不同的策略,而单一物种问题的理论分析无法指导这种跨物种互动的结果。我们对双物种合作进行了基础性分析,结果表明,适应速率较快的物种会被进化较慢的物种所“奴役”。看似矛盾的是,快速进化的物种表现出高度的合作性,而进化缓慢的物种却几乎不给予回报。这一发现有助于理解某些内共生现象:宿主从中获益,而共生体似乎在共生关系中获益甚微。
zh
We construct a symplectic realisation of the twisted Poisson structure on the phase space of an electric charge in the background of an arbitrary smooth magnetic monopole density in three dimensions. We use the extended phase space variables to study the classical and quantum dynamics of charged particles in arbitrary magnetic fields by constructing a suitable Hamiltonian that reproduces the Lorentz force law for the physical degrees of freedom. In the source-free case the auxiliary variables can be eliminated via Hamiltonian reduction, while for non-zero monopole densities they are necessary for a consistent formulation and are related to the extra degrees of freedom usually required in the Hamiltonian description of dissipative systems. We obtain new perspectives on the dynamics of dyons and motion in the field of a Dirac monopole, which can be formulated without Dirac strings. We compare our associative phase space formalism with the approach based on nonassociative quantum mechanics, reproducing extended versions of the characteristic translation group three-cocycles and minimal momentum space volumes, and prove that the two approaches are formally equivalent. We also comment on the implications of our symplectic realisation in the dual framework of non-geometric string theory and double field theory.
យើងបានសាងសង់ការសម្រេចបែបស៊ីមផ្លេកទិចនៃរចនាសម្ព័ន្ធពើស្តុងដែលបានបង្វិលនៅលើផ្ទៃដែលជាកន្លែងរបស់ប្រិមាណអគ្គិសនីក្នុងបរិបទនៃដង់ស៊ីតេម៉ាញេទិចម៉ូណូប៉ូលប្រេកង់ស្រេកស្រាច់មួយនៅក្នុងបីវិមាត្រ។ យើងប្រើអថេរនៃផ្ទៃដែលបានពង្រីក ដើម្បីសិក្សាអំពីចលនាសិក្សាប្រព័ន្ធក្លាសសិក្ និងប្រព័ន្ធចលនាសិក្សាបែបគ្រង់ហ្គែររបស់អំណាចអគ្គិសនីក្នុងវាលម៉ាញេទិចប្រេកង់ ដោយការសាងសង់អនុគមន៍ហាម៊ីលតុនដែលសមស្រប ដែលបង្កើតច្បាប់កំលាំងឡរ៉ង់ (Lorentz force law) សម្រាប់ដឺក្រេនៃសេរីភាពរបស់រូបវិទ្យា។ ក្នុងករណីដែលគ្មានប្រភព អថេរជំនួយអាចត្រូវបានដកចេញតាមរយៈការកាត់បន្ថយអនុគមន៍ហាម៊ីលតុន ខណៈដែលក្នុងករណីដង់ស៊ីតេម៉ូណូប៉ូលមិនសូន្យ អថេរទាំងនោះចាំបាច់សម្រាប់ការបង្កើតរចនាសម្ព័ន្ធដែលមានសុពលភាព ហើយពាក់ព័ន្ធនឹងដឺក្រេនៃសេរីភាពបន្ថែមដែលត្រូវបានទាមទារជាទូទៅក្នុងការពិពណ៌នាអំពីប្រព័ន្ធហាម៊ីលតុននៃប្រព័ន្ធដែលបាត់បង់ថាមពល។ យើងបានទទួលទស្សនៈថ្មីៗអំពីចលនាសិក្សារបស់ឌីអូន (dyons) និងចលនាក្នុងវាលរបស់ម៉ូណូប៉ូលឌីរ៉ាក់ ដែលអាចត្រូវបានបង្កើតឡើងដោយគ្មានខ្សែឌីរ៉ាក់។ យើងប្រៀបធៀបរចនាសម្ព័ន្ធផ្ទៃដែលមានលក្ខណៈអាសូស៊ីអេទីវនៃផ្ទៃដែលយើងបានស្នើ ជាមួយនឹងវិធីសាកសួរដែលផ្អែកលើមេកានិចបែបគ្រង់ហ្គែរដែលមិនអាសូស៊ីអេទីវ ដោយបង្កើតទម្រង់ពង្រីកនៃកូស៊ីក្លិចក្រុមបី (three-cocycles) និងបរិមាត្រអប្បបរមានៃផ្ទៃដែលជាកន្លែងរបស់ម៉ូមង់តុម ហើយបញ្ជាក់ថាវិធីសាកសួរទាំងពីរនេះមានលក្ខណៈស្មើគ្នាតាមទម្រង់។ យើងក៏បានផ្តល់យោបល់អំពីផលប៉ះពាល់នៃការសម្រេចបែបស៊ីមផ្លេកទិចរបស់យើងនៅក្នុងបរិបទដែលមានលក្ខណៈគូរបស់ទ្រឹស្តីខ្សែដែលគ្មានធរណីមាត្រ និងទ្រឹស្តីវាលទ្វេ (double field theory)។
km
Context. Transit detection algorithms are mathematical tools used for detecting planets in the photometric data of transit surveys. In this work we study their application to space-based surveys. Aims: Space missions are exploring the parameter space of transit surveys where classical algorithms do not perform optimally, either because of the challenging signal-to-noise ratio of the signal or because of its non-periodic characteristics. We have developed an algorithm addressing these challenges for the mission CoRoT. Here we extend its application to data from the space mission Kepler. We aim to understand the performance of these algorithms on different data sets. Methods: We built a simple analytical model of the transit signal and developed a search strategy that improves the detection performance for transiting planets. We analyzed Kepler data with a set of stellar activity filtering and transit detection tools from the CoRoT community that are designed for the search for transiting planets. Results: We present a new algorithm and compare its performance, using CoRoT data, to one of the most widely used techniques in the literature. Additionally, we analyzed the Kepler data corresponding to quarter Q1 and compared our results with the most recent list of planetary candidates from the Kepler survey. We found candidates that went unnoticed by the Kepler team when analyzing longer data sets. We study the impact of instrumental features on the production of false alarms and false positives. These results show that the analysis of space-mission data argues for the use of complementary detrending and transit detection tools in future space-based transit surveys such as PLATO.
နောက်ခံ။ ဂြိုဟ်များကို ဂြိုဟ်ဖြတ်ကျော်မှု စစ်တမ်းများ၏ အလင်းရောင်တိုင်းတာမှု ဒေတာများမှ ရှာဖွေရန်အတွက် အသုံးပြုသည့် သင်္ချာဆိုင်ရာ ကိရိယာများမှာ ဂြိုဟ်ဖြတ်ကျော်မှု ရှာဖွေမှု အယ်လ်ဂိုရိသပ်များ ဖြစ်သည်။ ဤလုပ်ငန်းတွင် အာကာသအခြေပြု စစ်တမ်းများအတွက် အသုံးပြုမှုကို လေ့လာပါသည်။ ရည်ရွယ်ချက်များ- အာကာသ လွှတ်တင်မှုများသည် ဂြိုဟ်ဖြတ်ကျော်မှု စစ်တမ်းများ၏ ပါရာမီတာ နယ်ပယ်ကို စူးစမ်းလျက်ရှိပြီး ရိုးရာ အယ်လ်ဂိုရိသပ်များသည် လက်တွေ့အားဖြင့် ထိရောက်မှု မရှိပေ။ အကြောင်းမှာ အချက်အလက်၏ ဆိုင်းနယ်-တိုင်းရီရှိယို အချိုးကို စိန်ခေါ်မှုများ သို့မဟုတ် ၎င်း၏ မပုံမနှံ့ သဘောသဘာဝကြောင့် ဖြစ်သည်။ ကျွန်ုပ်တို့သည် CoRoT လွှတ်တင်မှုအတွက် ဤစိန်ခေါ်မှုများကို ဖြေရှင်းရန် အယ်လ်ဂိုရိသပ်တစ်ခုကို ဖွံ့ဖြိုးတိုးတက်စေခဲ့သည်။ ဤတွင် ကျွန်ုပ်တို့သည် Kepler အာကာသလွှတ်တင်မှုမှ ရရှိသော ဒေတာများသို့ အသုံးပြုမှုကို ချဲ့ထွင်ထားပါသည်။ ကျွန်ုပ်တို့၏ ရည်မှန်းချက်မှာ မတူညီသော ဒေတာအစုအမှုပ်များတွင် အယ်လ်ဂိုရိသပ်များ၏ စွမ်းဆောင်ရည်ကို နားလည်ရန်ဖြစ်သည်။ နည်းလမ်းများ- ဂြိုဟ်ဖြတ်ကျော်မှု အချက်အလက်အတွက် ရိုးရှင်းသော သီအိုရီဆိုင်ရာ မော်ဒယ်တစ်ခုကို တည်ဆောက်ပြီး ဂြိုဟ်ဖြတ်ကျော်မှု ရှာဖွေရေး စွမ်းဆောင်ရည်ကို မြှင့်တင်ပေးမည့် ရှာဖွေရေး ဗျူဟာတစ်ခုကို ဖွံ့ဖြိုးတိုးတက်စေခဲ့သည်။ Kepler ဒေတာကို CoRoT လူမှုအဖွဲ့မှ ဂြိုဟ်ဖြတ်ကျော်မှု ရှာဖွေရန် ဒီဇိုင်းထုတ်ထားသည့် ကြယ်တာရာများ၏ လှုပ်ရှားမှုကို စစ်ထုတ်ခြင်းနှင့် ဂြိုဟ်ဖြတ်ကျော်မှု ရှာဖွေမှု ကိရိယာများဖြင့် ဆန်းစစ်ခဲ့သည်။ ရလဒ်များ- CoRoT ဒေတာများကို အသုံးပြု၍ စာပေတွင် အသုံးအများဆုံး နည်းလမ်းများထဲမှ တစ်ခုနှင့် နှိုင်းယှဉ်၍ အယ်လ်ဂိုရိသပ်သစ်တစ်ခုနှင့် ၎င်း၏ စွမ်းဆောင်ရည်ကို တင်ပြပါသည်။ ထို့အပြင် Kepler စစ်တမ်း၏ ဂြိုဟ်အလားအလာများ၏ နောက်ဆုံးစာရင်းနှင့် နှိုင်းယှဉ်၍ Q1 ကွတ်တာနှင့် သက်ဆိုင်သည့် Kepler ဒေတာကို ဆန်းစစ်ခဲ့ပါသည်။ ပိုရှည်သော ဒေတာအစုများကို ဆန်းစစ်သည့်အခါ Kepler အဖွဲ့မှ သတိမပြုမိသည့် အလားအလာများကို တွေ့ရှိခဲ့သည်။ အဆိုးမြင် အချက်များနှင့် အကြောင်းမဲ့ အပြုသဘောများကို ဖြစ်ပေါ်စေသည့် ကိရိယာဆိုင်ရာ အင်္ဂါရပ်များ၏ သက်ရောက်မှုကို လေ့လာခဲ့သည်။ ဤရလဒ်များသည် အနာဂတ်တွင် PLATO ကဲ့သို့သော အာကာသအခြေပြု ဂြိုဟ်ဖြတ်ကျော်မှု စစ်တမ်းများအတွက်ပါ အပိုဆောင်း ဒီထရန်းဒင်းနှင့် ဂြိုဟ်ဖြတ်ကျော်မှု ရှာဖွေမှု 
ကိရိယာများကို အသုံးပြုရန် အာကာသလွှတ်တင်မှု ဒေတာများကို ဆန်းစစ်ခြင်းက အကြံပြုနေသည်ဟု ပြသထားသည်။
my
Successful applications of reinforcement learning in real-world problems often require dealing with partially observable states. It is in general very challenging to construct and infer hidden states as they often depend on the agent's entire interaction history and may require substantial domain knowledge. In this work, we investigate a deep-learning approach to learning the representation of states in partially observable tasks, with minimal prior knowledge of the domain. In particular, we propose a new family of hybrid models that combines the strength of both supervised learning (SL) and reinforcement learning (RL), trained in a joint fashion: The SL component can be a recurrent neural networks (RNN) or its long short-term memory (LSTM) version, which is equipped with the desired property of being able to capture long-term dependency on history, thus providing an effective way of learning the representation of hidden states. The RL component is a deep Q-network (DQN) that learns to optimize the control for maximizing long-term rewards. Extensive experiments in a direct mailing campaign problem demonstrate the effectiveness and advantages of the proposed approach, which performs the best among a set of previous state-of-the-art methods.
Gerçek dünya problemlerinde pekiştirmeli öğrenmenin başarılı uygulamaları genellikle kısmen gözlemlenebilir durumlarla başa çıkmayı gerektirir. Gizli durumları oluşturma ve çıkarım yapma genellikle oldukça zordur çünkü bu durumlar genellikle ajanın tüm etkileşim geçmişine bağlıdır ve önemli ölçüde alan bilgisi gerektirebilir. Bu çalışmada, alanla ilgili en az düzeyde önceden bilgiyle kısmen gözlemlenebilir görevlerde durum temsillerini öğrenmeye yönelik bir derin öğrenme yaklaşımını incelemekteyiz. Özellikle, gözetimli öğrenme (SL) ve pekiştirmeli öğrenme (RL) yöntemlerinin güçlü yönlerini bir araya getiren ve birlikte eğitilen yeni bir hibrit model ailesi önermekteyiz: SL bileşeni, geçmişe uzun vadeli bağımlılığı yakalayabilme özelliğine sahip olan, dolayısıyla gizli durum temsillerini öğrenmede etkili bir yöntem sunan, özyinelemeli sinir ağları (RNN) veya uzun kısa vadeli bellek (LSTM) versiyonu olabilir. RL bileşeni ise, uzun vadeli ödülleri en üst düzeye çıkarmak için kontrolü optimize etmeyi öğrenen derin Q ağıdır (DQN). Doğrudan posta kampanyası problemi üzerinde yapılan kapsamlı deneyler, önerilen yaklaşımın etkinliğini ve avantajlarını göstermekte olup, önceki en iyi yöntemler arasında en iyi performansı sergilemektedir.
tr
Using a fully ab-initio methodology, we demonstrate how the lattice vibrations couple with neutral excitons in monolayer WSe2 and contribute to the non-radiative excitonic lifetime. We show that only by treating the electron-electron and electron-phonon interactions at the same time is it possible to obtain an unprecedented agreement of the zero- and finite-temperature optical gaps and absorption spectra with the experimental results. The bare energies were calculated by solving the Kohn-Sham equations, whereas G$_{0}$W$_{0}$ many-body perturbation theory was used to extract the excited-state energies. A coupled electron-hole Bethe-Salpeter equation was solved incorporating the polaronic energies to show that it is the in-plane torsional acoustic phonon branch that contributes most to the A and B exciton build-up. We find that the three excitonic peaks A, B and C exhibit different behaviour with temperature, displaying different non-radiative linewidths. The strength of the excitons does not change considerably with temperature, but the A exciton exhibits a darker nature than the C exciton. Further, all the excitonic peaks redshift as the temperature rises. Renormalization of the bare electronic energies by phonon interactions and the anharmonic lattice thermal expansion causes a decreasing band gap with increasing temperature. The zero-point energy renormalization (31 meV) is found to be entirely due to the polaronic interaction, with a negligible contribution from lattice anharmonicities. These findings may have a profound impact on electronic and optoelectronic device technologies based on these monolayers.
Используя полностью первопринципную методологию, мы демонстрируем, как фононы решётки взаимодействуют с нейтральными экситонами в однослойном WSe2 и вносят вклад в безызлучательное время жизни экситонов. Показано, что только одновременный учёт электрон-электронного и электрон-фононного взаимодействий позволяет достичь беспрецедентного согласия оптических щелей и спектров поглощения при нулевой и конечной температурах с экспериментальными данными. Исходные энергии рассчитывались путём решения уравнений Кона-Шэма, а для получения энергий возбуждённых состояний применялась теория возмущений многих тел G$_{0}$W$_{0}$. Решение связанного электрон-дырочного уравнения Бете-Солпетера с учётом поляронных энергий показало, что именно внутриплоскостная акустическая ветвь крутильных фононов вносит основной вклад в формирование экситонов A и B. Установлено, что три экситонных пика A, B и C проявляют различное поведение с изменением температуры, демонстрируя различные безызлучательные ширины линий. Значительных изменений интенсивности экситонов с температурой не наблюдается, однако экситон A проявляет более тёмную природу по сравнению с экситоном C. Кроме того, все экситонные пики сдвигаются в красную область при повышении температуры. Перенормировка исходных электронных энергий вследствие взаимодействия с фононами и ангармоническое тепловое расширение решётки приводят к уменьшению ширины запрещённой зоны с ростом температуры. Перенормировка энергии нулевых колебаний (31 мэВ) целиком обусловлена поляронным взаимодействием, вклад ангармоничностей решётки пренебрежимо мал. Полученные результаты могут оказать значительное влияние на технологии электронных и оптоэлектронных устройств, основанных на таких монослоях.
ru
Numerical models of the wind-blown bubble of massive stars usually only account for the wind of a single star. However, since massive stars are usually formed in clusters, it would be more realistic to follow the evolution of a bubble created by several stars. We develop a two-dimensional (2D) model of the circumstellar bubble created by two massive stars, a 40 solar mass star and a 25 solar mass star, and follow its evolution. The stars are separated by approximately 16 pc and surrounded by a cold medium with a density of 20 particles per cubic cm. We use the MPI-AMRVAC hydrodynamics code to solve the conservation equations of hydrodynamics on a 2D cylindrical grid using time-dependent models for the wind parameters of the two stars. At the end of the stellar evolution (4.5 and 7.0 million years for the 40 and 25 solar mass stars, respectively), we simulate the supernova explosion of each star. Each star initially creates its own bubble. However, as the bubbles expand they merge, creating a combined, aspherical bubble. The combined bubble evolves over time, influenced by the stellar winds and supernova explosions. The evolution of a wind-blown bubble created by two stars deviates from that of the bubbles around single stars. In particular, once one of the stars has exploded, the bubble is too large for the wind of the remaining star to maintain and the outer shell starts to disintegrate. The lack of thermal pressure inside the bubble also changes the behavior of circumstellar features close to the remaining star. The supernovae are contained inside the bubble, which reflects part of the energy back into the circumstellar medium.
គំរូលេខនៃពពួកខ្យល់ដែលបក់ចេញពីផ្កាយដ៏ធំៗ ជាទូទៅគិតគូរតែពីខ្យល់ដែលបក់ចេញពីផ្កាយតែមួយប៉ុណ្ណោះ។ ទោះជាយ៉ាងណា ដោយសារផ្កាយធំៗភាគច្រើនតែងបង្កើតជាក្រុម វានឹងកាន់តែជិតនឹងភាពពិតប្រាកដប្រសិនបើយើងតាមដានការវិវត្តនៃពពួកដែលបង្កើតឡើងដោយផ្កាយច្រើន។ យើងបានអភិវឌ្ឍគំរូពីរវិមាត្រ (2D) នៃពពួកបរិសុុទ្ធដែលបង្កើតឡើងដោយផ្កាយធំពីរ ដែលមានម៉ាស 40 ដងនៃព្រះអាទិត្យ និង 25 ដងនៃព្រះអាទិត្យ ហើយតាមដានការវិវត្តន៍របស់វា។ ផ្កាយទាំងពីរនេះមានចម្ងាយបែងចែកគ្នាប្រហែល 16 pc ហើយហ៊ុំព័ទ្ធដោយមេដែកត្រជាក់ដែលមានដង់ស៊ីតេ 20 អេឡិចត្រុងក្នុងមួយសង់ទីម៉ែត្រគូប។ យើងប្រើកូដអ៊ីដ្រូដេនាមិក MPI-AMRVAC ដើម្បីដោះស្រាយសមីការអនុរក្សអ៊ីដ្រូដេនាមិកលើក្រឡាចត្រង្គ 2D បែបស៊ីឡាំង ដោយប្រើគំរូអាស្រ័យពេលវេលាសម្រាប់ប៉ារ៉ាម៉ែត្រខ្យល់របស់ផ្កាយទាំងពីរ។ នៅចុងបញ្ចប់នៃការវិវត្តផ្កាយ (4.5 និង 7.0 លានឆ្នាំសម្រាប់ផ្កាយ 40 និង 25 ដងនៃម៉ាសព្រះអាទិត្យ រៀងគ្នា) យើងធ្វើការសមាមាត្រការផ្ទះពិសោធន៍របស់ផ្កាយនីមួយៗ។ ផ្កាយនីមួយៗបង្កើតពពួកផ្ទាល់ខ្លួនរបស់វាជាដំបូង។ ទោះជាយ៉ាងណា កាលណាពពួកទាំងនោះពង្រីក ពួកវាបានផ្សះបញ្ចូលគ្នា បង្កើតពពួករួមដែលមិនសូវមានរាងជាគ្រវី។ ពពួករួមនេះវិវត្តន៍តាមពេលវេលា ដោយរងឥទ្ធិពលពីខ្យល់ផ្កាយ និងការផ្ទះពិសោធន៍។ ការវិវត្តនៃពពួកខ្យល់ដែលបក់ចេញដោយផ្កាយពីរ ខុសពីការវិវត្តនៃពពួកនៅជុំវិញផ្កាយតែមួយ។ ជាពិសេស នៅពេលដែលផ្កាយមួយបានផ្ទះពិសោធន៍ ពពួកនោះធំពេក ដែលខ្យល់ពីផ្កាយដែលនៅសល់មិនអាចរក្សាទុកបានទេ ហើយស្រទាប់ខាងក្រៅចាប់ផ្តើមរលាយ។ ការខ្វះសម្ពាធកំដៅនៅក្នុងពពួកក៏ផ្លាស់ប្តូរឥរិយាបថនៃលក្ខណៈបរិសុទ្ធដែលនៅក្បែរផ្កាយដែលនៅសល់ផងដែរ។ ការផ្ទះពិសោធន៍ត្រូវបានគេរក្សាទុកនៅក្នុងពពួក ដែលវាត្រឡប់ថាមពលមួយភាគតួចតួចទៅក្នុងបរិសុទ្ធមេដែក។
km
Feynman's well-known and oft-quoted expression, which appears in the title and which is puzzling and even objectionable, still lacks a clear explanation. The hidden-parameters problem in quantum mechanics is considered here on the basis of a group-theoretic approach that necessarily includes the complete set of observables. The latter are the bilinear Hermitian forms constructed from the solutions of the Schroedinger equation and their first derivatives; they satisfy the algebraic completeness condition. These Hermitian forms, obtained for the simplest standard problem of particle transmission above a potential step, are compared with the Hermitian forms usually considered in this problem, and with additional ones that may be obtained within the framework of the ordinary schemes of quantum mechanics. It is shown that the generally recognised schemes for solving the problem lead directly to the violation of some conservation laws at the step. By contrast, the group-theoretic approach fulfills all the necessary conservation laws everywhere simultaneously. It is also shown that the complete set of observables renders the probabilistic interpretation of quantum mechanics superfluous.
កន្លែងដែលគេស្គាល់យ៉ាងច្បាស់ និងច្រើនដងនៃសេចក្តីថ្លែងការណ៍របស់ហ្វេយម៉ាន់ (Feynman) ដែលបានចូលទៅក្នុងចំណងជើង ដែលនាំឱ្យមានភាពភាន់ច្រឡំ ហើយថែមទាំងអាចប៉ះពាល់ដល់ការយល់ដឹង នៅមិនទាន់មានការបកស្រាយច្បាស់លាស់នៅឡើយ។ បញ្ហានៃប៉ារ៉ាម៉ែត្រដែលលាក់កំបាំងនៅក្នុងមេកានិចបម្លាស់ប្តូរគឺត្រូវបានពិចារណានៅទីនេះ ដោយផ្អែកលើវិធីសាកសួរដែលពាក់ព័ន្ធក្រុម ដែលរួមបញ្ចូលនូវសំណុំពេញលេញនៃតម្លៃដែលអាចសង្កេតបាន ដែលចាំបាច់យ៉ាងខ្លាំង។ តម្លៃទាំងនោះគឺជាទម្រង់អេរ្មីទៀន (Hermitian) ពីរដែលកសាងឡើងពីដំណោះស្រាយនៃសមីការស្រែុនហ្ស៊ីដឺ (Schroedinger) និងដេរីវេទីមួយរបស់វា ហើយពួកវាទាំងនោះគោរពតាមលក្ខខណ្ឌនៃការបំពេញលក្ខណៈតាមអាល់ហ្សែប្រា។ ទម្រង់អេរ្មីទៀនទាំងនោះ ដែលបានទទួលបានសម្រាប់បញ្ហាស្តង់ដារសាមញ្ញបំផុតនៃការផ្ទេរអេឡិចត្រុងពីលើជំហានសក្តានុពល ត្រូវបានធ្វើការប្រៀបធៀបជាមួយនឹងទម្រង់អេរ្មីទៀន ដែលត្រូវបានពិចារណាធម្មតាក្នុងបញ្ហានេះ និងទម្រង់បន្ថែមផ្សេងទៀត ដែលអាចទទួលបាននៅក្នុងគំរូធម្មតានៃមេកានិចបម្លាស់ប្តូរ។ វាត្រូវបានបង្ហាញថា គំរូទូទៅដែលត្រូវបានទទួលស្គាល់ជាទូទៅនៃដំណោះស្រាយបញ្ហានេះ នាំឱ្យមានការរំលោភលើច្បាប់អនុរក្សណ៍មួយចំនួនដោយផ្ទាល់នៅលើជំហាន។ ផ្ទុយទៅវិញ វិធីសាកសួរដែលផ្អែកលើក្រុម នាំឱ្យមានការគោរពតាមច្បាប់អនុរក្សណ៍ចាំបាច់ទាំងអស់នៅគ្រប់ទីកន្លែងក្នុងពេលតែមួយ។ វាក៏ត្រូវបានបង្ហាញផងដែរថា សំណុំពេញលេញនៃតម្លៃដែលអាចសង្កេតបាន នាំឱ្យការបកស្រាយតាមបែបប្រូបាប៊ីលីតេក្នុងមេកានិចបម្លាស់ប្តូរក្លាយជាច្រើនពេក។
km
Background: Over the years, Machine Learning Phishing URL classification (MLPU) systems have gained tremendous popularity for detecting phishing URLs proactively. Despite this popularity, the security vulnerabilities of MLPUs remain mostly unknown. Aim: To address this concern, we conduct a study to understand the test-time security vulnerabilities of state-of-the-art MLPU systems, aiming at providing guidelines for the future development of these systems. Method: In this paper, we propose an evasion attack framework against MLPU systems. To achieve this, we first develop an algorithm to generate adversarial phishing URLs. We then reproduce 41 MLPU systems and record their baseline performance. Finally, we simulate an evasion attack to evaluate these MLPU systems against our generated adversarial URLs. Results: In comparison to previous works, our attack is: (i) effective, as it evades all the models with an average success rate of 66% and 85% for famous (such as Netflix, Google) and less popular phishing targets (e.g., Wish, JBHIFI, Officeworks), respectively; (ii) realistic, as it requires only 23ms to produce a new adversarial URL variant that is available for registration with a median cost of only $11.99/year. We also found that popular online services such as Google SafeBrowsing and VirusTotal are unable to detect these URLs. (iii) We find that adversarial training (a successful defence against evasion attacks) does not significantly improve the robustness of these systems, as it decreases the success rate of our attack by only 6% on average across all the models. (iv) Further, we identify the security vulnerabilities of the considered MLPU systems. Our findings lead to promising directions for future research. Conclusion: Our study not only illustrates vulnerabilities in MLPU systems but also highlights implications for future work on assessing and improving these systems.
Түйіндеме: Машиналық оқу негізіндегі фишингтік URL-дерді классификациялау (МОФБ) жүйелері бір жыл бойы фишингтік URL-дерді белсенді түрде анықтау үшін үлкен танымалдылыққа ие болды. Бұл тенденцияға қарамастан, МОФБ жүйелерінің қауіпсіздік бойынша әлсіздіктері негізінен белгісіз болып қала береді. Мақсат: Бұл мәселеге шешім табу үшін біз заманауи МОФБ жүйелерінің тестілеу кезіндегі қауіпсіздік әлсіздіктерін түсіну мақсатында зерттеу жүргіздік және осы жүйелердің болашақта дамуы үшін нұсқаулар ұсынуды көздедік. Әдіс: Бұл мақалада біз МОФБ жүйелеріне қарсы шабуылдан қашу (ескертуден қашу) шабуылының архитектурасын ұсынамыз. Осы мақсатқа жету үшін біз алдымен қарсы шабуылдық фишингтік URL-дерді құру алгоритмін әзірледік. Содан кейін біз 41 МОФБ жүйесін қайта жасап шығарып, олардың бастапқы жұмыс нәтижелерін тіркедік. Соңында біз құрастырған қарсы шабуылдық URL-дерге қарсы осы МОФБ жүйелерін бағалау үшін шабуылдан қашу шабуылын модельдейміз. Нәтижелер: Бұрынғы жұмыстармен салыстырғанда, біздің шабуылымыз: (i) барлық модельдерден орташа есеппен 66% және 85% сәттілікпен қашып құтылуы арқылы тиімді болды, бұл көрнекті (мысалы, Netflix, Google) және аз танымал фишингтік мақсаттар (мысалы, Wish, JBHIFI, Officeworks) үшін сәйкес келеді; (ii) жаңа қарсы шабуылдық URL-нұсқаны тек 23мс уақытта құруға мүмкіндік беретін және тіркелуге байланысты жылдық орташа құны тек 11,99 АҚШ доллары құрайтындықтан, шынайы болып табылады. Біз сонымен қатар Google SafeBrowsing және VirusTotal сияқты танымал онлайн-қызметтердің осы URL-дерді анықтай алмайтынын анықтадық. (iii) Біз қарсы шабуылдық оқыту (шабуылдан қашу шабуылына қарсы сәтті қорғаныс) бұл жүйелердің төзімділігін айтарлықтай жақсартпайтынын анықтадық, себебі бұл барлық модельдер үшін біздің шабуылымыздың сәттілік пайызын орташа есеппен тек 6% ғана төмендетеді. (iv) Сонымен қатар біз қарастырылған МОФБ жүйелерінің қауіпсіздік әлсіздіктерін анықтадық. Біздің табыстар болашақтағы зерттеулер үшін перспективалы бағыттар ашады. 
Қорытынды: Біздің зерттеуіміз МОФБ жүйелеріндегі әлсіздіктерді көрсетіп қана қоймайды, сонымен қатар осы жүйелерді бағалау мен жақсарту бойынша болашақ зерттеулердің салдарын да көрсетеді.
kk
We have detected, for the first time, Cepheid variables in the Sculptor Group spiral galaxy NGC 7793. From wide-field images obtained in the optical V and I bands on 56 nights in 2003-2005, we have discovered 17 long-period (24-62 days) Cepheids whose periods and mean magnitudes define tight period-luminosity relations. We use the (V-I) Wesenheit index to determine a reddening-free true distance modulus to NGC 7793 of 27.68 +- 0.05 mag (internal error) +- 0.08 mag (systematic error). The comparison of the reddened distance moduli in V and I with the one derived from the Wesenheit magnitude indicates that the Cepheids in NGC 7793 are affected by an average total reddening of E(B-V)=0.08 mag, 0.06 of which is produced inside the host galaxy. As in the earlier Cepheid studies of the Araucaria Project, the reported distance is tied to an assumed LMC distance modulus of 18.50. The quoted systematic uncertainty takes into account effects like blending and possible inhomogeneous filling of the Cepheid instability strip on the derived distance. The reported distance value does not depend on the (unknown) metallicity of the Cepheids according to recent theoretical and empirical results. Our Cepheid distance is shorter, but within the errors consistent with the distance to NGC 7793 determined earlier with the TRGB and Tully-Fisher methods. The NGC 7793 distance of 3.4 Mpc is almost identical to the one our project had found from Cepheid variables for NGC 247, another spiral member of the Sculptor Group located close to NGC 7793 on the sky. Two other conspicuous spiral galaxies in the Sculptor Group, NGC 55 and NGC 300, are much nearer (1.9 Mpc), confirming the picture of a very elongated structure of the Sculptor Group in the line of sight put forward by Jerjen et al. and others.
Kami telah mengesan secara pertama kali pembolehubah Cepheid dalam galaksi spiral Kumpulan Sculptor, NGC 7793. Daripada imej medan luas yang diperoleh dalam jalur optik V dan I pada 56 malam antara tahun 2003 hingga 2005, kami telah menemui 17 Cepheid berperiode panjang (24-62 hari) yang tempoh dan magnitud puratanya membentuk hubungan tempoh-kecerahan yang ketat. Kami menggunakan indeks Wesenheit (V-I) untuk menentukan modulus jarak benar bebas pengaburan bagi NGC 7793 sebanyak 27.68 ± 0.05 mag (ralat dalaman) ± 0.08 mag (ralat sistematik). Perbandingan modulus jarak yang terabur dalam V dan I dengan yang diperoleh daripada magnitud Wesenheit menunjukkan bahawa Cepheid dalam NGC 7793 dipengaruhi oleh pengaburan jumlah purata sebanyak E(B-V) = 0.08 mag, di mana 0.06 daripadanya dihasilkan di dalam galaksi perumah. Seperti dalam kajian Cepheid lepas oleh Projek Araucaria, jarak yang dilaporkan dikaitkan dengan anggapan modulus jarak LMC sebanyak 18.50. Ketidakpastian sistematik yang dinyatakan mengambil kira kesan seperti percantuman dan kemungkinan pengisian tidak seragam jalur ketidakstabilan Cepheid terhadap jarak yang diperoleh. Nilai jarak yang dilaporkan tidak bergantung kepada metalisiti (yang tidak diketahui) Cepheid mengikut keputusan teori dan empirikal terkini. Jarak Cepheid kami lebih pendek, tetapi dalam had ralat adalah konsisten dengan jarak NGC 7793 yang ditentukan sebelumnya menggunakan kaedah TRGB dan Tully-Fisher. Jarak NGC 7793 sebanyak 3.4 Mpc hampir sama dengan jarak yang ditemui oleh projek kami menggunakan pembolehubah Cepheid bagi NGC 247, satu lagi ahli spiral Kumpulan Sculptor yang terletak berdekatan NGC 7793 di langit. Dua galaksi spiral ketara lain dalam Kumpulan Sculptor, NGC 55 dan NGC 300, jauh lebih dekat (1.9 Mpc), mengesahkan gambaran struktur yang sangat memanjang Kumpulan Sculptor dalam garis pandangan seperti yang dicadangkan oleh Jerjen et al. dan lain-lain.
ms
High-energy gamma-rays propagating in the intergalactic medium can interact with background infrared photons to produce e+e- pairs, resulting in the absorption of the intrinsic gamma-ray spectrum. TeV observations of the distant blazar 1ES 1101-232 were thus recently used to put an upper limit on the infrared extragalactic background light density. The created pairs can upscatter background photons to high energies, which in turn may pair produce, thereby initiating a cascade. The pairs diffuse on the extragalactic magnetic field (EMF) and cascade emission has been suggested as a means for measuring its intensity. Limits on the IR background and EMF are reconsidered taking into account cascade emissions. The cascade equations are solved numerically. Assuming a power-law intrinsic spectrum, the observed 100 MeV - 100 TeV spectrum is found as a function of the intrinsic spectral index and the intensity of the EMF. Cascades emit mainly at or below 100 GeV. The observed TeV spectrum appears softer than for pure absorption when cascade emission is taken into account. The upper limit on the IR photon background is found to be robust. Inversely, the intrinsic spectra needed to fit the TeV data are uncomfortably hard when cascade emission makes a significant contribution to the observed spectrum. An EMF intensity around 1e-8 nG leads to a characteristic spectral hump in the GLAST band. Higher EMF intensities divert the pairs away from the line-of-sight and the cascade contribution to the spectrum becomes negligible.
Жоғары энергиялық гамма-сәулелер өзара әрекеттесу үшін инфрақызыл фондық фотондармен әрекеттесіп, e+e- жұптарын тудыруы мүмкін, бұл өз кезегінде гамма-сәулелердің өзіндік спектрінің жұтылуына әкеледі. Сондықтан 1ES 1101-232 блаазарының ТэВ диапазонындағы бақылаулары соңғы кезде инфрақызыл экстрагалактикалық фондық жарықтың тығыздығына жоғарғы шек қою үшін пайдаланылды. Пайда болған жұптар фондық фотондарды жоғары энергияларға дейін шашыратуы мүмкін, бұл кезде қайтадан жұп тудыруға әкеліп, каскадты бастайды. Жұптар экстрагалактикалық магнит өрісінде (ЭМӨ) диффузияланады және каскадты шығару оның интенсивтілігін өлшеу үшін ұсынылған. Каскадты шығаруды ескере отырып, инфрақызыл фон мен ЭМӨ шектері қайта қарастырылады. Каскадтық теңдеулер сандық түрде шешіледі. Қуат заңына бағынатын өзіндік спектр бар деп болжай отырып, бақыланатын 100 МэВ - 100 ТэВ спектрі өзіндік спектрлік индекс пен ЭМӨ интенсивтілігінің функциясы ретінде анықталады. Каскадтар негізінен 100 ГэВ-та немесе одан төмен энергияларда шығарылады. Каскадты шығаруды ескерген кезде, бақыланатын ТэВ спектрі таза жұтылу кезіндегіге қарағанда жұмсарақ болып көрінеді. Инфрақызыл фотондық фонға қойылған жоғарғы шек берілгенінше тұрақты болып қалады. Керісінше, каскадты шығару бақыланатын спектрге маңызды үлес қосқан кезде, ТэВ деректерін сәйкестендіру үшін қажетті өзіндік спектрлер тым қатты болып көрінеді. 1e-8 нГ шамасындағы ЭМӨ интенсивтілігі GLAST диапазонында сипаттамалық спектрлік бүдырын тудырады. Жоғарырақ ЭМӨ интенсивтіліктері жұптарды көрінетін сызықтан ауытқытады және спектрге каскадтық үлес елеусіз болып қалады.
kk
Governments and cities around the world are currently facing rapid growth in the use of electric vehicles and, with it, the need for charging infrastructure. For these cities, the challenge remains how to roll out charging infrastructure further in the most efficient way, both in terms of cost and use. Forecasting models are not able to predict longer-term developments, and as such, more complex simulation models offer opportunities to simulate various scenarios. Agent-based simulation models provide insight into the effects of incentives and roll-out strategies before they are implemented in practice and thus allow for scenario testing. This paper describes the construction of an agent-based model that enables policy makers to anticipate charging infrastructure development. The model is able to simulate charging transactions of individual users and is both calibrated and validated using a dataset of charging transactions from the public charging infrastructure of the four largest cities in the Netherlands.
Governos e cidades ao redor do mundo estão atualmente enfrentando um rápido crescimento no uso de Veículos Elétricos e, consequentemente, a necessidade de infraestrutura de recarga. Para essas cidades, persiste a dificuldade de como expandir ainda mais a infraestrutura de recarga da forma mais eficiente possível, tanto em termos de custo quanto de utilização. Modelos de previsão não são capazes de antecipar desenvolvimentos de longo prazo, e, portanto, modelos de simulação mais complexos oferecem oportunidades para simular diversos cenários. Modelos de simulação baseados em agentes proporcionam insights sobre os efeitos de incentivos e estratégias de implantação antes que sejam implementados na prática, permitindo assim testes de cenários. Este artigo descreve a construção de um modelo baseado em agentes que permite aos formuladores de políticas antecipar-se ao desenvolvimento da infraestrutura de recarga. O modelo é capaz de simular transações de recarga de usuários individuais e é tanto calibrado quanto validado utilizando um conjunto de dados de transações de recarga da infraestrutura pública de recarga das quatro maiores cidades dos Países Baixos.
pt
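The agent-based idea in the abstract above can be sketched in a few lines: individual EV agents attempt charging sessions at a preferred station and fall back to any station with a free socket, while stations track occupancy. This is a toy skeleton only — the agent behavior, station capacities, and all names are illustrative assumptions, not the calibrated model from the paper.

```python
import random

class Station:
    """Public charging station with a fixed number of sockets."""
    def __init__(self, sockets):
        self.sockets, self.occupied, self.sessions = sockets, 0, 0

class EVAgent:
    """Minimal EV-driver agent: one charging attempt per simulated day."""
    def __init__(self, name, preferred):
        self.name, self.preferred, self.charged = name, preferred, 0

    def try_charge(self, stations):
        # Prefer the habitual station; fall back to any station with a free socket.
        order = [self.preferred] + [s for s in stations if s is not self.preferred]
        for st in order:
            if st.occupied < st.sockets:
                st.occupied += 1
                st.sessions += 1
                self.charged += 1
                return True
        return False                     # all sockets taken: a failed attempt

random.seed(42)
stations = [Station(sockets=2) for _ in range(3)]
agents = [EVAgent(f"ev{i}", random.choice(stations)) for i in range(10)]

for day in range(30):                    # one charging attempt per agent per day
    for st in stations:
        st.occupied = 0                  # overnight, all sessions end
    for a in agents:
        a.try_charge(stations)
```

A real model of this kind would be calibrated against observed transaction data (arrival times, session durations, location choice), which is exactly what the paper reports doing for the four Dutch cities.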
This note provides a neat and enjoyable expansion and application of the magnificent Ordentlich-Cover theory of "universal portfolios." I generalize Cover's benchmark of the best constant-rebalanced portfolio (or 1-linear trading strategy) in hindsight by considering the best bilinear trading strategy determined in hindsight for the realized sequence of asset prices. A bilinear trading strategy is a mini two-period active strategy whose final capital growth factor is linear separately in each period's gross return vector for the asset market. I apply Cover's ingenious (1991) performance-weighted averaging technique to construct a universal bilinear portfolio that is guaranteed (uniformly for all possible market behavior) to compound its money at the same asymptotic rate as the best bilinear trading strategy in hindsight. Thus, the universal bilinear portfolio asymptotically dominates the original (1-linear) universal portfolio in the same technical sense that Cover's universal portfolios asymptotically dominate all constant-rebalanced portfolios and all buy-and-hold strategies. In fact, like so many Russian dolls, one can get carried away and use these ideas to construct an endless hierarchy of ever more dominant $H$-linear universal portfolios.
本文对奥尔登堡-科弗(Ordentlich-Cover)的“普适投资组合”宏伟理论提供了一种简洁而有趣的拓展与应用。我通过考虑针对已实现资产价格序列在事后确定的最优双线性交易策略,推广了科弗所设定的以事后最优常数再平衡投资组合(即1-线性交易策略)为基准的方法。双线性交易策略是一种小型两期主动策略,其最终资本增长因子分别在每个时期的资产市场总回报向量上呈线性关系。我应用科弗巧妙的(1991年)基于绩效加权平均方法,构建了一种普适双线性投资组合,该组合可确保(对所有可能的市场行为一致地)以与事后最优双线性交易策略相同的渐近速率复利增长财富。因此,该普适双线性投资组合在同样的技术意义上渐近地优于原始的(1-线性)普适投资组合,正如科弗的普适投资组合在渐近意义上优于所有常数再平衡投资组合和所有买入持有策略一样。事实上,就像无数个俄罗斯套娃一样,人们很容易受此启发,利用这些思想构建出一个无限层级、逐层更具优势的H-线性普适投资组合体系。
zh
The complete mid- to far-infrared continuum energy distribution collected with the Infrared Space Observatory of the Seyfert 2 prototype NGC 5252 is presented. ISOCAM images taken in the 3--15 micron range show a resolved central source that is consistent at all bands with a region of about 1.3 kpc in size. Due to the lack of ongoing star formation in the disk of the galaxy, this resolved emission is associated with either dust heated in the nuclear active region or with bremsstrahlung emission from the nuclear and extended ionised gas. The size of the mid-IR emission contrasts with the standard unification scenario envisaging a compact dusty structure surrounding and hiding the active nucleus and the broad-line region. The mid-IR data are complemented with ISOPHOT aperture photometry in the 25--200 micron range. The overall IR spectral energy distribution is dominated by a well-defined component peaking at about 100 micron, a characteristic temperature of T ~20 K, and an associated dust mass of 2.5 x 10E7 Msun, which greatly dominates the total dust mass content of the galaxy. The heating mechanism of this dust is probably the interstellar radiation field. After subtracting the contribution of this cold dust component, the bulk of the residual emission is attributed to dust heated within the nuclear environment. Its luminosity consistently accounts for the reprocessing of the X-ray to UV emission derived for the nucleus of this galaxy. The comparison of the NGC 5252 spectral energy distribution with current torus models favors a large nuclear disk structure on the kiloparsec scale.
Trình bày phân bố năng lượng liên tục đầy đủ ở miền hồng ngoại trung đến hồng ngoại xa, được thu thập bằng Đài quan sát Không gian Hồng ngoại (Infrared Space Observatory) đối với thiên hà Seyfert 2 điển hình NGC 5252. Các hình ảnh ISOCAM chụp trong dải bước sóng 3–15 micron cho thấy một nguồn trung tâm được phân giải, có hình dạng nhất quán ở mọi dải bước sóng, tương ứng với một vùng có kích thước khoảng 1,3 kpc. Do sự thiếu vắng hình thành sao đang diễn ra trong đĩa thiên hà, bức xạ được phân giải này được liên kết hoặc với bụi bị đốt nóng trong vùng hoạt động hạt nhân, hoặc với bức xạ bremsstrahlung từ khí ion hóa ở vùng hạt nhân và vùng mở rộng. Kích thước của bức xạ hồng ngoại trung tâm này trái ngược với kịch bản hợp nhất tiêu chuẩn, vốn hình dung một cấu trúc bụi nhỏ gọn bao quanh và che khuất nhân hoạt động cùng vùng phát xạ vạch rộng. Dữ liệu hồng ngoại trung được bổ sung bằng phép đo quang học khẩu độ ISOPHOT trong dải 25–200 micron. Phân bố phổ năng lượng hồng ngoại tổng thể bị chi phối bởi một thành phần rõ rệt, đạt cực đại ở khoảng 100 micron, có nhiệt độ đặc trưng T ~20 K và khối lượng bụi tương ứng là 2,5 x 10E7 khối lượng Mặt Trời, vượt trội đáng kể so với tổng lượng bụi trong toàn bộ thiên hà. Cơ chế đốt nóng bụi này có thể là trường bức xạ liên sao. Sau khi trừ đi đóng góp của thành phần bụi lạnh này, phần lớn bức xạ còn lại được quy cho bụi bị đốt nóng trong môi trường hạt nhân. Độ sáng của nó phù hợp nhất quán với việc tái xử lý bức xạ từ tia X đến tử ngoại, vốn đã được suy ra từ nhân của thiên hà này. Việc so sánh phân bố phổ năng lượng của NGC 5252 với các mô hình hình xuyến (torus) hiện tại cho thấy ưu thế của cấu trúc đĩa hạt nhân lớn, có quy mô kiloparsec.
vi
There is growing interest in termination reasoning for non-linear programs and, meanwhile, recent dynamic strategies have shown they are able to infer invariants for such challenging programs. These advances led us to hypothesize that perhaps such dynamic strategies for non-linear invariants could be adapted to learn recurrent sets (for non-termination) and/or ranking functions (for termination). In this paper, we exploit dynamic analysis and draw termination and non-termination as well as static and dynamic strategies closer together in order to tackle non-linear programs. For termination, our algorithm infers ranking functions from concrete transitive closures, and, for non-termination, the algorithm iteratively collects executions and dynamically learns conditions to refine recurrent sets. Finally, we describe an integrated algorithm that allows these algorithms to mutually inform each other, taking counterexamples from a failed validation in one endeavor and crossing both the static/dynamic and termination/non-termination lines, to create new execution samples for the other one.
ມີຄວາມສົນໃຈເພີ່ມຂຶ້ນໃນການໃຫ້ເຫດຜົນກ່ຽວກັບການຢຸດ (termination) ຂອງໂປຣແກຣມທີ່ບໍ່ແມ່ນເສັ້ນຊື່, ແລະໃນຂະນະດຽວກັນນັ້ນຍຸດທະສາດແບບໄດ້ນາມິກລ້າສຸດກໍ່ໄດ້ສະແດງໃຫ້ເຫັນວ່າພວກມັນສາມາດສືບຄົ້ນຫາອັງກຸນຍັງຜັນແປ (invariants) ສຳລັບໂປຣແກຣມທີ່ມີຄວາມທ້າທາຍເຊັ່ນນີ້ໄດ້. ການກ້າວໜ້າເຫຼົ່ານີ້ໄດ້ນຳພາພວກເຮົາໄປສູ່ການສົມມຸດຖານວ່າບາງທີຍຸດທະສາດແບບໄດ້ນາມິກດັ່ງກ່າວສຳລັບອັງກຸນຍັງຜັນແປທີ່ບໍ່ແມ່ນເສັ້ນຊື່ອາດຈະຖືກປັບໃຊ້ເພື່ອຮຽນຮູ້ຊຸດທີ່ກັບຄືນຊ້ຳ (recurrent sets) (ສຳລັບການບໍ່ຢຸດ) ແລະ/ຫຼື ຟັງຊັ່ນຈັດລຳດັບ (ranking functions) (ສຳລັບການຢຸດ). ໃນບົດຄວາມນີ້, ພວກເຮົານຳໃຊ້ການວິເຄາະແບບໄດ້ນາມິກ ແລະ ນຳເອົາການຢຸດ ແລະ ບໍ່ຢຸດ ພ້ອມດ້ວຍຍຸດທະສາດແບບສະຖິດ ແລະ ໄດ້ນາມິກມາໃກ້ກັນຂຶ້ນເພື່ອຈະແກ້ໄຂໂປຣແກຣມທີ່ບໍ່ແມ່ນເສັ້ນຊື່. ສຳລັບກໍລະນີການຢຸດ, ອະລະກອລິດ (algorithm) ຂອງພວກເຮົາສືບຄົ້ນຫາຟັງຊັ່ນຈັດລຳດັບຈາກບັນທຶກການປ່ຽນແປງແບບເປັນຮູບປັ້ນ (concrete transitive closures), ແລະ ສຳລັບກໍລະນີການບໍ່ຢຸດ, ອະລະກອລິດຈະເກັບກຳການດຳເນີນງານຢ່າງຕໍ່ເນື່ອງ ແລະ ຮຽນຮູ້ເງື່ອນໄຂແບບໄດ້ນາມິກເພື່ອປັບປຸງຊຸດທີ່ກັບຄືນຊ້ຳ. ສຸດທ້າຍ, ພວກເຮົາອະທິບາຍອະລະກອລິດທີ່ຖືກຜະສົມຜະສານ ທີ່ອະນຸຍາດໃຫ້ອະລະກອລິດເຫຼົ່ານີ້ສາມາດແລກປ່ຽນຂໍ້ມູນກັນໄດ້, ໂດຍການນຳເອົາຕົວຢ່າງທີ່ພິສູດບໍ່ໄດ້ຈາກຄວາມພະຍາຍາມໜຶ່ງ ແລະ ຂ້າມທັງເສັ້ນແບ່ງລະຫວ່າງ ສະຖິດ/ໄດ້ນາມິກ ແລະ ຢຸດ/ບໍ່ຢຸດ ເພື່ອສ້າງຕົວຢ່າງການດຳເນີນງານໃໝ່ໃຫ້ກັບອີກອັນໜຶ່ງ.
lo
This paper investigates the problem of finding a fixed point for a global nonexpansive operator under time-varying communication graphs in real Hilbert spaces, where the global operator is separable and composed of an aggregate sum of local nonexpansive operators. Each local operator is only privately accessible to each agent, and all agents constitute a network. To seek a fixed point of the global operator, it is indispensable for agents to exchange local information and update their solution cooperatively. To solve the problem, two algorithms are developed, called distributed Krasnosel'ski\u{\i}-Mann (D-KM) and distributed block-coordinate Krasnosel'ski\u{\i}-Mann (D-BKM) iterations, for which the D-BKM iteration is a block-coordinate version of the D-KM iteration in the sense of randomly choosing and computing only one block-coordinate of local operators at each time for each agent. It is shown that the proposed two algorithms can both converge weakly to a fixed point of the global operator. Meanwhile, the designed algorithms are applied to recover the classical distributed gradient descent (DGD) algorithm, devise a new block-coordinate DGD algorithm, handle a distributed shortest distance problem in the Hilbert space for the first time, and solve linear algebraic equations in a novel distributed approach. Finally, the theoretical results are corroborated by a few numerical examples.
본 논문은 실 힐베르트 공간에서 시간적으로 변하는 통신 그래프 하에, 전역 비팽창 연산자(전역 연산자)의 고정점을 구하는 문제를 다룬다. 여기서 전역 연산자는 분리 가능하며, 각각의 국소 비팽창 연산자의 집합 합으로 구성된다. 각 국소 연산자는 각 에이전트에 의해서만 사적으로 접근 가능하며, 모든 에이전트는 하나의 네트워크를 구성한다. 전역 연산자의 고정점을 찾기 위해서는 에이전트들이 국소 정보를 교환하고 협력적으로 해를 갱신하는 것이 필수적이다. 본 문제를 해결하기 위해, 분산 크라스노셀스키-만(D-KM) 반복법과 분산 블록좌표 크라스노셀스키-만(D-BKM) 반복법이라는 두 가지 알고리즘이 제안되었다. 여기서 D-BKM 반복법은 각 시간마다 각 에이전트가 국소 연산자의 블록좌표 하나만을 무작위로 선택하고 계산한다는 의미에서, D-KM 반복법의 블록좌표 형태이다. 제안된 두 알고리즘 모두 전역 연산자의 고정점으로 약하게 수렴함이 입증되었다. 또한, 설계된 알고리즘들은 고전적인 분산 그래디언트 하강(DGD) 알고리즘을 재현하고, 새로운 블록좌표 DGD 알고리즘을 개발하며, 힐베르트 공간에서 분산 최단 거리 문제를 처음으로 처리하고, 선형 대수 방정식을 새로운 분산 방식으로 해결하는 데 적용되었다. 마지막으로, 제시된 이론적 결과들은 몇 가지 수치적 예제를 통해 검증되었다.
ko
This paper investigates the problem of finding a fixed point for a global nonexpansive operator under time-varying communication graphs in real Hilbert spaces, where the global operator is separable and composed of an aggregate sum of local nonexpansive operators. Each local operator is only privately accessible to each agent, and all agents constitute a network. To seek a fixed point of the global operator, it is indispensable for agents to exchange local information and update their solution cooperatively. To solve the problem, two algorithms are developed, called distributed Krasnosel'ski\u{\i}-Mann (D-KM) and distributed block-coordinate Krasnosel'ski\u{\i}-Mann (D-BKM) iterations, for which the D-BKM iteration is a block-coordinate version of the D-KM iteration in the sense of randomly choosing and computing only one block-coordinate of local operators at each time for each agent. It is shown that the proposed two algorithms can both converge weakly to a fixed point of the global operator. Meanwhile, the designed algorithms are applied to recover the classical distributed gradient descent (DGD) algorithm, devise a new block-coordinate DGD algorithm, handle a distributed shortest distance problem in the Hilbert space for the first time, and solve linear algebraic equations in a novel distributed approach. Finally, the theoretical results are corroborated by a few numerical examples.
In dieser Arbeit wird das Problem der Bestimmung eines Fixpunkts eines globalen nichtexpansiven Operators unter zeitvariierenden Kommunikationsgraphen in reellen Hilberträumen untersucht, wobei der globale Operator separabel ist und sich aus einer aggregierten Summe lokaler nichtexpansiver Operatoren zusammensetzt. Jeder lokale Operator ist nur einzelnen Agenten privat zugänglich, und alle Agenten bilden ein Netzwerk. Um einen Fixpunkt des globalen Operators zu finden, ist es unerlässlich, dass die Agenten lokale Informationen austauschen und ihre Lösungen kooperativ aktualisieren. Zur Lösung des Problems werden zwei Algorithmen entwickelt, die als verteilte Krasnosel'ski\u{\i}-Mann-Iteration (D-KM) und verteilte blockkoordinatenbasierte Krasnosel'ski\u{\i}-Mann-Iteration (D-BKM) bezeichnet werden, wobei die D-BKM-Iteration eine blockkoordinatenbasierte Version der D-KM-Iteration darstellt, bei der zu jedem Zeitpunkt für jeden Agenten zufällig nur eine Blockkoordinate der lokalen Operatoren ausgewählt und berechnet wird. Es wird gezeigt, dass beide vorgeschlagenen Algorithmen schwach gegen einen Fixpunkt des globalen Operators konvergieren. Gleichzeitig werden die entwickelten Algorithmen dazu verwendet, den klassischen verteilten Gradientenabstiegsalgorithmus (DGD) herzuleiten, einen neuen blockkoordinatenbasierten DGD-Algorithmus zu entwerfen, erstmals ein verteiltes Problem der kürzesten Distanz im Hilbertraum zu lösen und lineare algebraische Gleichungen auf neuartige, verteilte Weise zu bearbeiten. Schließlich werden die theoretischen Ergebnisse durch einige numerische Beispiele bestätigt.
de
Super-resolution fluorescence microscopy, with a resolution beyond the diffraction limit of light, has become an indispensable tool to directly visualize biological structures in living cells at a nanometer-scale resolution. Despite advances in high-density super-resolution fluorescent techniques, existing methods still have bottlenecks, including extremely long execution time, artificial thinning and thickening of structures, and lack of ability to capture latent structures. Here we propose a novel deep learning guided Bayesian inference approach, DLBI, for the time-series analysis of high-density fluorescent images. Our method combines the strength of deep learning and statistical inference, where deep learning captures the underlying distribution of the fluorophores that are consistent with the observed time-series fluorescent images by exploring local features and correlation along time-axis, and statistical inference further refines the ultrastructure extracted by deep learning and endues physical meaning to the final image. Comprehensive experimental results on both real and simulated datasets demonstrate that our method provides more accurate and realistic local patch and large-field reconstruction than the state-of-the-art method, the 3B analysis, while our method is more than two orders of magnitude faster. The main program is available at https://github.com/lykaust15/DLBI
A microscopia de fluorescência com super-resolução, com resolução além do limite de difração da luz, tornou-se uma ferramenta indispensável para visualizar diretamente estruturas biológicas em células vivas com resolução em escala nanométrica. Apesar dos avanços nas técnicas de fluorescência de super-resolução de alta densidade, os métodos existentes ainda apresentam gargalos, incluindo tempo extremamente longo de execução, adelgaçamento e espessamento artificiais de estruturas, e falta de capacidade para capturar estruturas latentes. Aqui propomos uma nova abordagem baseada em inferência bayesiana guiada por aprendizado profundo, DLBI, para a análise de séries temporais de imagens fluorescentes de alta densidade. Nosso método combina a força do aprendizado profundo e da inferência estatística, em que o aprendizado profundo captura a distribuição subjacente dos fluoróforos que são consistentes com as imagens fluorescentes de séries temporais observadas, explorando características locais e correlações ao longo do eixo temporal, e a inferência estatística refina ainda mais a ultraestrutura extraída pelo aprendizado profundo e atribui significado físico à imagem final. Resultados experimentais abrangentes em conjuntos de dados reais e simulados demonstram que nosso método fornece reconstruções mais precisas e realistas de fragmentos locais e de campo amplo do que o método atualmente mais avançado, a análise 3B, enquanto nosso método é mais de duas ordens de grandeza mais rápido. O programa principal está disponível em https://github.com/lykaust15/DLBI
pt
Self-assembly of colloidal particles due to elastic interactions in nematic liquid crystals promises tunable composite materials and can be guided by exploiting surface functionalization, geometric shape and topology, though these means of controlling self-assembly remain limited. Here, we realize low-symmetry achiral and chiral elastic colloids in the nematic liquid crystals using colloidal polygonal concave and convex prisms. We show that the controlled pinning of disclinations at the prisms' edges alters the symmetry of director distortions around the prisms and their orientation with respect to the far-field director. The controlled localization of the disclinations at the prism's edges significantly influences the anisotropy of the diffusion properties of prisms dispersed in liquid crystals and allows one to modify their self-assembly. We show that elastic interactions between polygonal prisms can be switched between repulsive and attractive just by controlled re-pinning of the disclinations at different edges using laser tweezers. Our findings demonstrate that elastic interactions between colloidal particles dispersed in nematic liquid crystals are sensitive to the topologically equivalent but geometrically rich controlled configurations of the particle-induced defects.
Sự tự lắp ráp của các hạt keo nhờ tương tác đàn hồi trong tinh thể lỏng nematik hứa hẹn tạo ra các vật liệu tổ hợp có thể điều chỉnh được và có thể được định hướng bằng cách khai thác chức năng hóa bề mặt, hình dạng hình học và tôpô, mặc dù các phương pháp kiểm soát sự tự lắp ráp này vẫn còn hạn chế. Trong nghiên cứu này, chúng tôi tạo ra các hạt keo đàn hồi có độ đối xứng thấp, không đối xứng và đối xứng xoắn trong tinh thể lỏng nematik bằng cách sử dụng các lăng kính đa giác lõm và lồi ở cấp độ keo. Chúng tôi chỉ ra rằng việc cố định kiểm soát các đường lệch (disclinations) tại các cạnh của lăng kính làm thay đổi đối xứng của các biến dạng trường hướng (director distortions) xung quanh các lăng kính cũng như định hướng của chúng so với trường hướng ở xa. Việc định vị kiểm soát các đường lệch tại các cạnh của lăng kính ảnh hưởng đáng kể đến tính dị hướng của các đặc tính khuếch tán của các lăng kính phân tán trong tinh thể lỏng và cho phép điều chỉnh sự tự lắp ráp của chúng. Chúng tôi chứng minh rằng các tương tác đàn hồi giữa các lăng kính đa giác có thể được chuyển đổi giữa đẩy và hút chỉ bằng cách thao tác lại vị trí cố định các đường lệch tại các cạnh khác nhau bằng kẹp laser. Những phát hiện của chúng tôi cho thấy các tương tác đàn hồi giữa các hạt keo phân tán trong tinh thể lỏng nematik rất nhạy cảm với các cấu hình khuyết tật do hạt gây ra, dù tương đương về mặt tôpô nhưng lại phong phú về mặt hình học và có thể kiểm soát được.
vi
Technological advances have made wireless sensors cheap and reliable enough to be brought into industrial use. A major challenge arises from the fact that wireless channels introduce random packet dropouts. Power control and coding are key enabling technologies in wireless communications to ensure efficient communications. In the present work, we examine the role of power control and coding for Kalman filtering over wireless correlated channels. Two estimation architectures are considered: In the first, the sensors send their measurements directly to a single gateway. In the second scheme, wireless relay nodes provide additional links. The gateway decides on the coding scheme and the transmitter power levels of the wireless nodes. The decision process is carried out on-line and adapts to varying channel conditions in order to improve the trade-off between state estimation accuracy and energy expenditure. In combination with predictive power control, we investigate the use of multiple-description coding, zero-error coding and network coding and provide sufficient conditions for the expectation of the estimation error covariance matrix to be bounded. Numerical results suggest that the proposed method may lead to energy savings of around 50 %, when compared to an alternative scheme, wherein transmission power levels and bit-rates are governed by simple logic. In particular, zero-error coding is preferable at time instances with high channel gains, whereas multiple-description coding is superior for time instances with low gains. When channels between the sensors and the gateway are in deep fades, network coding improves estimation accuracy significantly without sacrificing energy efficiency.
Durch technologische Fortschritte sind drahtlose Sensoren so kostengünstig und zuverlässig geworden, dass sie in der Industrie eingesetzt werden können. Eine große Herausforderung ergibt sich aus der Tatsache, dass drahtlose Kanäle zufällige Paketverluste verursachen. Leistungsregelung und Kanalcodierung sind zentrale Schlüsseltechnologien in der drahtlosen Kommunikation, um eine effiziente Datenübertragung sicherzustellen. In der vorliegenden Arbeit untersuchen wir die Rolle der Leistungsregelung und Codierung für die Kalman-Filterung über drahtlose, korrelierte Kanäle. Zwei Schätzarchitekturen werden betrachtet: In der ersten senden die Sensoren ihre Messungen direkt an ein einziges Gateway. In der zweiten Architektur stellen drahtlose Relaisknoten zusätzliche Verbindungen bereit. Das Gateway entscheidet über das Codierungsschema und die Sendeleistungsstufen der drahtlosen Knoten. Der Entscheidungsprozess erfolgt online und passt sich an wechselnde Kanalbedingungen an, um den Kompromiss zwischen Genauigkeit der Zustandsschätzung und Energieverbrauch zu verbessern. In Kombination mit prädiktiver Leistungsregelung untersuchen wir den Einsatz von Mehrfachbeschreibungs-Codierung, fehlerfreier Codierung und Netzwerkcodierung und geben hinreichende Bedingungen dafür an, dass der Erwartungswert der Schätzfehler-Kovarianzmatrix beschränkt bleibt. Numerische Ergebnisse deuten darauf hin, dass die vorgeschlagene Methode im Vergleich zu einem alternativen Verfahren, bei dem Sendeleistungen und Bitraten durch eine einfache Logik gesteuert werden, zu Energieeinsparungen von etwa 50 % führen kann. Insbesondere erweist sich die fehlerfreie Codierung bei Zeitpunkten mit hohen Kanalverstärkungen als vorteilhaft, während die Mehrfachbeschreibungs-Codierung bei Zeitpunkten mit geringen Verstärkungen überlegen ist. Wenn die Kanäle zwischen den Sensoren und dem Gateway in tiefen Fading-Zuständen sind, verbessert die Netzwerkcodierung die Schätzgenauigkeit deutlich, ohne die Energieeffizienz zu beeinträchtigen.
de
The aim of this note is to discuss in more detail the Pohozaev-type identities that have been recently obtained by the author, Paul Laurain and Tristan Rivi\`ere in the framework of half-harmonic maps defined either on $R$ or on the sphere $S^1$ with values into a closed manifold $N^n\subset R^m$. Weak half-harmonic maps are critical points of the following nonlocal energy $$\int_{R}|(-\Delta)^{1/4}u|^2 dx~~\mbox{or}~~\int_{S^1}|(-\Delta)^{1/4}u|^2\ d\theta.$$ If $u$ is a sufficiently smooth critical point of the above energy then it satisfies the following equation of stationarity $$\frac{du}{dx}\cdot (-\Delta)^{1/2} u=0~~\mbox{a.e in $R$}~~\mbox{or}~~\frac{\partial u}{\partial \theta}\cdot (-\Delta)^{1/2} u=0~~\mbox{a.e in $S^1$.}$$ By using the invariance of the equation of stationarity in $S^1$ with respect to the trace of the M\"obius transformations of the $2$ dimensional disk we derive a countable family of relations involving the Fourier coefficients of weak half-harmonic maps $u\colon S^1\to N^n.$ In the same spirit we also provide as many Pohozaev-type identities in $2$-D for stationary harmonic maps as conformal vector fields in $R^2$ generated by holomorphic functions.
এই নোটটির উদ্দেশ্য হল পোহোজাভ-ধরনের অভেদাঙ্কগুলি সম্পর্কে আরও বিস্তারিতভাবে আলোচনা করা, যা সদ্য লেখক, পল লরেন এবং ট্রিস্টান রিভিয়ের কর্তৃক অর্ধ-হারমোনিক ম্যাপগুলির কাঠামোতে প্রাপ্ত হয়েছে, যেগুলি হয় $R$-এ নয়তো গোলক $S^1$-এ সংজ্ঞায়িত এবং বদ্ধ বহুসমতল $N^n\subset R^m$-এ মান গ্রহণ করে। দুর্বল অর্ধ-হারমোনিক ম্যাপগুলি নিম্নলিখিত অস্থানিক শক্তির সংক্রান্তীয় বিন্দুগুলি: $$\int_{R}|(-\Delta)^{1/4}u|^2 dx~~\mbox{অথবা}~~\int_{S^1}|(-\Delta)^{1/4}u|^2\ d\theta।$$ যদি $u$ উপরের শক্তির একটি যথেষ্ট মসৃণ সংক্রান্তীয় বিন্দু হয়, তবে এটি নিম্নলিখিত স্টেশনারিটির সমীকরণ মেনে চলে: $$\frac{du}{dx}\cdot (-\Delta)^{1/2} u=0~~\mbox{$R$-এ প্রায় সর্বত্র}~~\mbox{অথবা}~~\frac{\partial u}{\partial \theta}\cdot (-\Delta)^{1/2} u=0~~\mbox{$S^1$-এ প্রায় সর্বত্র।}$$ $S^1$-এ স্টেশনারিটির সমীকরণের মবিয়াস রূপান্তরগুলির চিহ্নের সাপেক্ষে অপরিবর্তনশীলতা ব্যবহার করে, যা 2-মাত্রিক চাকতির উপর ক্রিয়া করে, আমরা দুর্বল অর্ধ-হারমোনিক ম্যাপ $u\colon S^1\to N^n$-এর ফুরিয়ে সহগগুলির সাথে জড়িত গণনাযোগ্য সম্পর্কের একটি পরিবার পাই। একই ধারায়, আমরা $R^2$-এ হলোমরফিক অপেক্ষকগুলি দ্বারা উৎপন্ন কনফরমাল ভেক্টর ক্ষেত্রগুলির সংখ্যার সমান সংখ্যক পোহোজাভ-ধরনের অভেদাঙ্ক $2$-D-এ স্টেশনারি হারমোনিক ম্যাপগুলির জন্য প্রদান করি।
bn
We present the results on the star formation history and extinction in the disk of M82 over spatial scales of 10" (~180 pc). Multi-band photometric data covering from the far ultraviolet to the near infrared bands were fitted to a grid of synthetic spectral energy distributions. We obtained distribution functions of age and extinction for each of the 117 apertures analyzed, taking into account observational errors through Monte-Carlo simulations. These distribution functions were fitted with gaussian functions to obtain the mean ages and extinctions along with errors on them. The analyzed zones include the high surface brightness complexes defined by O'Connell & Mangano (1978). We found that these complexes share the same star formation history and extinction as the field stellar populations in the disk. There is an indication that the stellar populations are marginally older at the outer disk (450 Myr at ~3 kpc) as compared to the inner disk (100 Myr at 0.5 kpc). For the nuclear regions (radius less than 500 pc), we obtained an age of less than 10 Myr. The results obtained in this work are consistent with the idea that the 0.5-3 kpc part of the disk of M82 formed around 90% of the stellar mass in a star-forming episode that started around 450 Myr ago lasting for about 350 Myr. We found that field stars are the major contributors to the flux over the spatial scales analyzed in this study, with stellar cluster contribution being 7% in the nucleus and 0.7% in the disk.
ကျွန်ုပ်တို့သည် M82 ဒစ်က်၏ ကြယ်ဖွဲ့စည်းမှု သမိုင်းနှင့် အမှောင်ခံမှုတို့ကို ၁၀" (~၁၈၀ pc) အကွာအဝေးတွင် ရလဒ်များကို တင်ပြပါသည်။ အလင်းအာရုံခံဒေတာများကို အလွန်အားဖြင့် ယူလ်ထရာဗိုင်အိုလက် မှ နီးယားအင်ဖရာရက်အထိ အမျိုးမျိုးသော လှိုင်းအလျားများအတွက် စင်သော စပက်ထရမ် စွမ်းအင်ဖြန့်ကျက်မှုများကို ကိုက်ညီအောင် လုပ်ဆောင်ခဲ့သည်။ Monte-Carlo စမ်းသပ်မှုများဖြင့် စူးစမ်းလေ့လာမှုအမှားများကို ထည့်သွင်းစဉ်းစားကာ ဆန်းစစ်ထားသော အပေါက်ပြတ် ၁၁၇ ခုစလုံးအတွက် အသက်နှင့် အမှောင်ခံမှုတို့၏ ဖြန့်ကျက်မှု ဖန်ရှင်များကို ရရှိခဲ့သည်။ ဤဖြန့်ကျက်မှု ဖန်ရှင်များကို ဂေါက်ရှင်ဖန်ရှင်များဖြင့် ကိုက်ညီအောင်လုပ်ကာ အသက်နှင့် အမှောင်ခံမှုတို့၏ ပျမ်းမျှတန်ဖိုးများနှင့် အမှားတို့ကို ရယူခဲ့သည်။ ဆန်းစစ်ထားသော ဧရိယာများတွင် O'Connell နှင့် Mangano (၁၉၇၈) တို့က သတ်မှတ်ထားသော မျက်နှာပြင်အလင်းသိပ်သည်းမှု မြင့်မားသည့် စုပေါင်းအုပ်စုများ ပါဝင်သည်။ ဤစုပေါင်းအုပ်စုများသည် ဒစ်က်ရှိ ကွင်းလယ်ကြယ် လူဦးရေများနှင့် အတူတူပင် ကြယ်ဖွဲ့စည်းမှု သမိုင်းနှင့် အမှောင်ခံမှုကို မျှဝေနေကြောင်း တွေ့ရှိခဲ့သည်။ ဒစ်က်၏ အပြင်ဘက် (၃ kpc တွင် ၄၅၀ Myr) တွင် ကြယ်လူဦးရေများသည် အတွင်းပိုင်းဒစ်က် (၀.၅ kpc တွင် ၁၀၀ Myr) ထက် အနည်းငယ် အသက်ကြီးနိုင်ကြောင်း ညွှန်ပြချက်များရှိသည်။ ဗဟိုနယ်မြေများ (၅၀၀ pc ထက်နည်းသော အချင်းဝက်) အတွက် ၁၀ Myr ထက်နည်းသော အသက်ကို ရရှိခဲ့သည်။ ဤလုပ်ငန်းမှ ရရှိသော ရလဒ်များသည် M82 ဒစ်၏ ၀.၅-၃ kpc အပိုင်းသည် ၄၅၀ Myr ခန့်က စတင်ကာ ၃၅၀ Myr ခန့်ကြာသည့် ကြယ်ဖွဲ့စည်းမှု ဖြစ်ရပ်တစ်ခုအတွင်း ကြယ်များ၏ စတီလာ ဒြပ်ထု၏ ၉၀% ခန့်ကို ဖွဲ့စည်းခဲ့သည်ဟူသော အယူအဆနှင့် ကိုက်ညီသည်။ ဤလေ့လာမှုတွင် ဆန်းစစ်ထားသော အကွာအဝေးအတိုင်းအတာများတွင် စတီလာ ကလပ်စတာများ၏ ပံ့ပိုးမှုသည် ဗဟိုနယ်တွင် ၇% နှင့် ဒစ်က်တွင် ၀.၇% သာရှိပြီး ကွင်းလယ်ကြယ်များသည် ဖလပ်စ်ကို အဓိက ပံ့ပိုးပေးနေကြောင်း တွေ့ရှိခဲ့သည်။
my
When a three-dimensional (3D) ferromagnetic topological insulator thin film is magnetized out-of-plane, conduction ideally occurs through dissipationless, one-dimensional (1D) chiral states that are characterized by a quantized, zero-field Hall conductance. The recent realization of this phenomenon - the quantum anomalous Hall effect - provides a conceptually new platform for studies of edge-state transport, distinct from the more extensively studied integer and fractional quantum Hall effects that arise from Landau level formation. An important question arises in this context: how do these 1D edge states evolve as the magnetization is changed from out-of-plane to in-plane? We examine this question by studying the field-tilt driven crossover from predominantly edge state transport to diffusive transport in Cr-doped (Bi,Sb)2Te3 thin films, as the system transitions from a quantum anomalous Hall insulator to a gapless, ferromagnetic topological insulator. The crossover manifests itself in a giant, electrically tunable anisotropic magnetoresistance that we explain using the Landauer-Buttiker formalism. Our methodology provides a powerful means of quantifying edge state contributions to transport in temperature and chemical potential regimes far from perfect quantization.
Quando um filme fino de isolante topológico ferromagnético tridimensional (3D) é magnetizado fora do plano, a condução ocorre idealmente por meio de estados unidimensionais (1D) quirais e sem dissipação, caracterizados por uma condutância de Hall quantizada em campo nulo. A realização recente desse fenômeno – o efeito Hall anômalo quântico – fornece uma plataforma conceitualmente nova para estudos de transporte em estados de borda, distinta dos efeitos Hall quânticos inteiro e fracionário, mais amplamente estudados, que surgem da formação de níveis de Landau. Uma questão importante surge neste contexto: como esses estados de borda 1D evoluem quando a magnetização é alterada de fora do plano para dentro do plano? Examinamos essa questão estudando a transição induzida pela inclinação do campo, de um transporte predominantemente em estados de borda para um transporte difusivo em filmes finos de (Bi,Sb)2Te3 dopados com Cr, à medida que o sistema passa de um isolante Hall anômalo quântico para um isolante topológico ferromagnético sem gap. Essa transição manifesta-se em uma magnetorresistência anisotrópica gigante, eletricamente sintonizável, que explicamos utilizando o formalismo de Landauer-Büttiker. Nossa metodologia fornece um meio poderoso de quantificar as contribuições dos estados de borda ao transporte em regimes de temperatura e potencial químico distantes da quantização perfeita.
pt
We scrutinize congruence as one of the basic definitions of equality in geometry and pit it against the physics of Special Relativity. We show that two non-rigid rods permanently kept congruent during their common expansion or compression may have different instantaneous proper lengths (when measured at the same time of their respective reference clocks) if they have different mass distributions over their lengths. Alternatively, their proper lengths can come out equal only when measured at different but strictly correlated moments of time of their respective clocks. The derived expression for the ratio of instantaneous proper lengths of two permanently congruent changing objects explicitly contains information about the objects' mass distribution. The same is true for the ratio of readings of the two reference clocks, for which the instantaneous measurements of respective proper lengths produce the same result. In either case the characteristics usually considered as purely kinematic depend on mass distribution, which is a dynamic property. This is a spectacular demonstration of the dynamic aspect of geometry already in the framework of Special Relativity.
យើងបានពិនិត្យមើលភាពស័ព្ទគ្នា ជាចំណុចនិយមន័យមូលដ្ឋានមួយនៃសភាពស្មើគ្នាក្នុងធរណីមាត្រ ហើយយកវាមកប្រៀបធៀបជាមួយនឹងរូបវិទ្យានៃទ្រឹស្ដីសម្ព័ន្ធភាពពិសេស។ យើងបានបង្ហាញថា ដងទ្រដែលមិនរឹងមាំពីរ ដែលត្រូវបានរក្សាទុកឱ្យស្ថិតក្នុងសភាពស័ព្ទគ្នាជារៀងរហូត កំពុងពង្រីក ឬបង្ហាប់រួមគ្នា អាចមានប្រវែងត្រឹមត្រូវភ្លាមៗខុសគ្នា (នៅពេលវាស់នៅពេលដូចគ្នាតាមនាឡិកាឯកោរបស់វានីមួយៗ) ប្រសិនបើវាមានការចែកចាយម៉ាស់ខុសគ្នាក្នុងបណ្ដោយរបស់វា។ ផ្ទុយទៅវិញ ប្រវែងត្រឹមត្រូវរបស់វាអាចស្មើគ្នាបាន លុះត្រាតែវាត្រូវបានវាស់នៅពេលខុសគ្នា ប៉ុន្តែត្រូវបានភ្ជាប់គ្នាយ៉ាងតឹងរ៉ឹងតាមពេលវេលានៃនាឡិកាឯកោរបស់វានីមួយៗ។ កន្សោមដែលបានគណនាសមាមាត្រនៃប្រវែងត្រឹមត្រូវភ្លាមៗរបស់វត្ថុដែលកំពុងផ្លាស់ប្ដូរ ហើយស្ថិតក្នុងសភាពស័ព្ទគ្នាជារៀងរហូត មានការបញ្ចូលព័ត៌មានដោយច្បាស់លាស់អំពីការចែកចាយម៉ាស់របស់វត្ថុ។ នេះក៏ដូចគ្នាដែរចំពោះសមាមាត្រនៃការអាននាឡិកាឯកោពីរ ដែលការវាស់ប្រវែងត្រឹមត្រូវភ្លាមៗរបស់វាបានផ្តល់លទ្ធផលដូចគ្នា។ ក្នុងករណីទាំងពីរ លក្ខណៈដែលតែងតែចាត់ទុកថាជាចលនាសុទ្ធ អាស្រ័យលើការចែកចាយម៉ាស់ ដែលជាលក្ខណៈដែលទាក់ទងនឹងចលករ។ នេះគឺជាការបង្ហាញដ៏អស្ចារ្យនៃផ្នែកចលករនៃធរណីមាត្រ ទោះបីជានៅក្នុងគោលការណ៍នៃទ្រឹស្ដីសម្ព័ន្ធភាពពិសេសក៏ដោយ។
km
Subsurface radioactivity may be due to transport of radionuclides from a contaminated surface into the solid volume, as occurs for radioactive fallout deposited on soil, or from fast neutron activation of a solid volume, as occurs in concrete blocks used for radiation shielding. For purposes including fate and transport studies of radionuclides in the environment, decommissioning and decontamination of radiation facilities, and nuclear forensics, an in situ, nondestructive method for ascertaining the subsurface distribution of radioactivity is desired. The method developed here obtains a polynomial expression for the radioactivity depth profile, using a small set of gamma-ray count rates measured by a collimated detector directed towards the surface at a variety of angles with respect to the surface normal. To demonstrate its capabilities, this polynomial method is applied to the simple case where the radioactivity is maximal at the surface and decreases exponentially with depth below the surface, and to the more difficult case where the maximal radioactivity is below the surface.
地下の放射能は、土壌に沈着した放射性降下物のように、汚染された表面から固体内部へ放射性核種が移動することによって生じる場合と、放射線遮蔽に用いられるコンクリートブロックのように、固体内部の高速中性子による活性化によって生じる場合がある。環境中の放射性核種の挙動や移動の研究、放射線施設の廃止措置および除染、核鑑識といった目的のために、地下における放射能分布を非破壊的かつ現地で測定する方法が求められている。ここで開発された方法は、コリメートされた検出器を表面に向け、表面法線に対してさまざまな角度で測定した少数のガンマ線計数率を用いて、放射能の深さ方向プロファイルを多項式で表すものである。この多項式法の性能を示すために、表面で放射能が最大となり、表面より深い部分では指数関数的に減少する単純な場合と、放射能の最大値が表面より下方にあるより困難な場合の両方に適用する。
ja
We present a general form of Renormalization operator $\mathcal{R}$ acting on potentials $V:\{0,1\}^\mathbb{N} \to \mathbb{R}$. We exhibit the analytical expression of the fixed point potential $V$ for such operator $\mathcal{R}$. This potential can be expressed in a natural way in terms of a certain integral over the Hausdorff probability on a Cantor type set on the interval $[0,1]$. This result generalizes a previous one by A. Baraviera, R. Leplaideur and A. Lopes where the fixed point potential $V$ was of Hofbauer type. For the potentials of Hofbauer type (a well known case of phase transition) the decay is like $n^{-\gamma}$, $\gamma>0$. Among other things we present the estimation of the decay of correlation of the equilibrium probability associated with the fixed point potential $V$ of our general renormalization procedure. In some cases we get polynomial decay like $n^{-\gamma}$, $\gamma>0$, and in others a decay faster than $n \,e^{ -\, \sqrt{n}}$, when $n \to \infty$. The potentials $g$ we consider here are elements of the so called family of Walters potentials on $\{0,1\}^\mathbb{N} $ which generalizes the potentials considered initially by F. Hofbauer. For these potentials some explicit expressions for the eigenfunctions are known. In a final section we also show that given any choice $d_n \to 0$ of real numbers varying with $n \in \mathbb{N}$ there exists a potential $g$ in the class defined by Walters which has an invariant probability with these numbers as the correlation coefficients (for a certain explicit observable function).
نقدم شكلاً عاماً لمؤثر التدوير $\mathcal{R}$ المؤثر على البواعث $V:\{0,1\}^\mathbb{N} \to \mathbb{R}$. ونُظهر التعبير التحليلي للبُعث $V$ الثابت بالنسبة لهذا المؤثر $\mathcal{R}$. يمكن التعبير عن هذا البُعث بطريقة طبيعية بدلالة تكامل معين بالنسبة لاحتمال هاوسدورف على مجموعة من نوع كانتور ضمن المجال $[0,1]$. يعمم هذا الناتج نتيجة سابقة لـ A. Baraviera وR. Leplaideur وA. Lopes، حيث كان البُعث الثابت $V$ من نوع هوفباور. بالنسبة للبواعث من نوع هوفباور (وهي حالة معروفة بحدوث انتقال طوري)، يكون التناقص على شكل $n^{-\gamma}$، حيث $\gamma>0$. من بين أمور أخرى، نقدّم تقديرًا لتناقص الارتباط لاحتمال التوازن المرتبط بالبُعث الثابت $V$ الناتج عن إجراء التدوير العام لدينا. في بعض الحالات نحصل على تناقص متعدد الحدود على الشكل $n^{-\gamma}$، $\gamma>0$، وفي حالات أخرى نحصل على تناقص أسرع من $n \,e^{ -\, \sqrt{n}}$ عندما يؤول $n$ إلى ما لا نهاية. إن البواعث $g$ التي ندرسها هنا هي عناصر من ما يُعرف بعائلة بواعث والترز على $\{0,1\}^\mathbb{N}$، والتي تعمّم البواعث التي درسها في البداية F. Hofbauer. بالنسبة لهذه البواعث، تُعرف تعبيرات صريحة لبعض الدوال الذاتية. في قسم ختامي، نبيّن أيضًا أنه لأي اختيار $d_n \to 0$ لأعداد حقيقية تتغير مع $n \in \mathbb{N}$، يوجد بُعث $g$ ضمن الصنف المعرف بواسطة والترز، يمتلك احتمالاً ثابتًا تكون فيه هذه الأعداد معاملات الارتباط (لدالة قابلة للرصد معيّنة بشكل صريح).
ar
A nonlinear kinetic chemotaxis model with internal dynamics incorporating signal transduction and adaptation is considered. This paper is concerned with: (i) the global solution for this model, and, (ii) its fast adaptation limit to the Othmer-Dunbar-Alt type model. This limit gives some insight into the molecular origin of the chemotaxis behaviour. First, by using the Schauder fixed point theorem, the global existence of a weak solution is proved based on detailed a priori estimates, under some quite general assumptions on the model and the initial data. However, the Schauder fixed point theorem does not provide uniqueness, so additional analysis is required to obtain it. Next, the fast adaptation limit of this model is derived by extracting a weakly convergent subsequence in measure space. For this limit, the first difficulty is to show the concentration effect on the internal state. When the small parameter {\epsilon}, the adaptation time scale, goes to zero, we prove that the solution converges to a Dirac mass in the internal state variable. Another difficulty is the strong compactness argument on the chemical potential, which is essential for passing the nonlinear kinetic equation to the weak limit.
ພິຈາລະນາຮູບແບບໄຄເນຕິກການເຄື່ອນທີ່ຂອງຈຸລັງຕາມສັນຍານ (chemotaxis) ທີ່ບໍ່ເປັນເສັ້ນຊື່ ໂດຍມີການເຊື່ອມໂຍງກັບການຖ່າຍໂອນຂໍ້ມູນພາຍໃນ (signal transduction) ແລະ ການປັບໂຕ (adaptation). ບົດຄວາມນີ້ກ່ຽວຂ້ອງກັບ: (i) ການມີຢູ່ຂອງວິທີແກ້ໄຂທົ່ວໄປສຳລັບຮູບແບບນີ້, ແລະ (ii) ຂອບເຂດການປັບໂຕຢ່າງໄວວາ (fast adaptation limit) ໄປສູ່ຮູບແບບປະເພດ Othmer-Dunbar-Alt. ຂອບເຂດດັ່ງກ່າວສະໜອງຂໍ້ມູນເຂົ້າໃຈໃນຕົ້ນກຳເນີດຂອງພຶດຕິກຳ chemotaxis ໃນລະດັບໂມເລກຸນ. ທຳອິດ, ໂດຍການນຳໃຊ້ທິດສະດີຈຸດຖາວອນຂອງ Schauder, ໄດ້ພິສູດການມີຢູ່ຂອງວິທີແກ້ໄຂອ່ອນ (weak solution) ໂດຍອີງໃສ່ການຄາດຄະເນລ່ວງໜ້າຢ່າງລະອຽດ, ໂດຍມີຂໍ້ກຳນົດທົ່ວໄປຄ່ອນຂ້າງສູງກ່ຽວກັບຮູບແບບ ແລະ ຂໍ້ມູນເລີ່ມຕົ້ນ. ຢ່າງໃດກໍຕາມ, ທິດສະດີຈຸດຖາວອນຂອງ Schauder ບໍ່ສະໜອງການມີຄວາມເປັນເອກະລັກ. ສະນັ້ນ, ຈຳເປັນຕ້ອງດຳເນີນການວິເຄາະເພີ່ມເຕີມເພື່ອໃຫ້ໄດ້ມາຊະນິດທີ່ເປັນເອກະລັກ. ຕໍ່ມາ, ຂອບເຂດການປັບໂຕຢ່າງໄວວາຂອງຮູບແບບນີ້ຖືກສະຫຼຸບໄດ້ໂດຍການສະກັດເອົາລຳດັບຍ່ອຍທີ່ເຂົ້າໃກ້ກັນໃນທາງອ່ອນໃນພື້ນທີ່ຂອງມາດຕະການ. ສຳລັບຂອບເຂດນີ້, ບັນຫາຍາກອັນດັບໜຶ່ງແມ່ນການສະແດງຜົນກະທົບຂອງການລວມຕົວກັນໃນສະພາບພາຍໃນ. ເມື່ອພາລາມິເຕີນ້ອຍ {\epsilon}, ເຊິ່ງເປັນຂະໜາດເວລາປັບໂຕ, ໄປຫາສູນ, ພວກເຮົາພິສູດວ່າວິທີແກ້ໄຂຈະເຂົ້າໃກ້ກັບມວນດິຣາກ (Dirac mass) ໃນໂຕປ່ຽນສະພາບພາຍໃນ. ອີກບັນຫາໜຶ່ງທີ່ຍາກກໍຄື ຂໍ້ໂຕ້ແຍ້ງຄວາມອັດແໜ້ນທີ່ເຂັ້ມແຂງກ່ຽວກັບ potential ເຄມີ, ເຊິ່ງເປັນສິ່ງຈຳເປັນສຳລັບການຜ່ານສົມຜົນໄຄເນຕິກບໍ່ເປັນເສັ້ນຊື່ໄປສູ່ຂອບເຂດອ່ອນ.
lo
When a three-dimensional (3D) ferromagnetic topological insulator thin film is magnetized out-of-plane, conduction ideally occurs through dissipationless, one-dimensional (1D) chiral states that are characterized by a quantized, zero-field Hall conductance. The recent realization of this phenomenon - the quantum anomalous Hall effect - provides a conceptually new platform for studies of edge-state transport, distinct from the more extensively studied integer and fractional quantum Hall effects that arise from Landau level formation. An important question arises in this context: how do these 1D edge states evolve as the magnetization is changed from out-of-plane to in-plane? We examine this question by studying the field-tilt driven crossover from predominantly edge state transport to diffusive transport in Cr-doped (Bi,Sb)2Te3 thin films, as the system transitions from a quantum anomalous Hall insulator to a gapless, ferromagnetic topological insulator. The crossover manifests itself in a giant, electrically tunable anisotropic magnetoresistance that we explain using the Landauer-Buttiker formalism. Our methodology provides a powerful means of quantifying edge state contributions to transport in temperature and chemical potential regimes far from perfect quantization.
三次元(3D)の強磁性トポロジカル絶縁体薄膜が面外方向に磁化されるとき、伝導は理想的には散逸のない一次元(1D)のキラル状態を通じて行われ、これは量子化された零磁場ホール伝導度によって特徴づけられる。この現象(量子異常ホール効果)が最近実現されたことで、ランダウ準位の形成に基づく、より広く研究されてきた整数量子ホール効果や分数量子ホール効果とは異なる、エッジ状態輸送の研究のための概念的に新しいプラットフォームが提供された。この文脈で重要な疑問が生じる。すなわち、磁化が面外から面内へと変化するにつれて、これらの1Dエッジ状態はどのように変化するのか、ということである。我々は、Crドープされた(Bi,Sb)2Te3薄膜において、系が量子異常ホール絶縁体からギャップのない強磁性トポロジカル絶縁体へと遷移する過程で、磁場の傾きによって引き起こされる、主にエッジ状態輸送から拡散輸送へのクロスオーバーを調べることで、この疑問を検討する。このクロスオーバーは、巨大かつ電気的に制御可能な異方性磁気抵抗として現れ、我々はランダウアー・ビュッティカー形式を用いてこれを説明する。本手法は、完全な量子化から離れた温度および化学ポテンシャル領域におけるエッジ状態の輸送への寄与を定量化する強力な手段を提供する。
ja
A density-functional theory is developed based on the Maxwell--Schr\"odinger equation with an internal magnetic field in addition to the external electromagnetic potentials. The basic variables of this theory are the electron density and the total magnetic field, which can equivalently be represented as a physical current density. Hence, the theory can be regarded as a physical current-density functional theory and an alternative to the paramagnetic current density-functional theory due to Vignale and Rasolt. The energy functional has strong enough convexity properties to allow a formulation that generalizes Lieb's convex analysis-formulation of standard density-functional theory. Several variational principles as well as a Hohenberg--Kohn-like mapping between potentials and ground-state densities follow from the underlying convex structure. Moreover, the energy functional can be regarded as the result of a standard approximation technique (Moreau--Yosida regularization) applied to the conventional Schr\"odinger ground state energy, which imposes limits on the maximum curvature of the energy (w.r.t.\ the magnetic field) and enables construction of a (Fr\'echet) differentiable universal density functional.
Максвелл – Шрёдингер теңдеуіне сыртқы электромагниттік потенциалдармен қатар ішкі магнит өрісін қосу арқылы тығыздық-функционалдық теория әзірленді. Бұл теорияның негізгі айнымалылары – электрондық тығыздық және жалпы магнит өрісі, оны эквивалентті түрде физикалық ток тығыздығы ретінде көрсетуге болады. Сондықтан бұл теория Виньяле мен Расолттың парамагниттік ток тығыздығы-функционалдық теориясына альтернатива болып табылатын физикалық ток тығыздығы-функционалдық теория ретінде қарастырылуы мүмкін. Энергия функционалы Либтің стандартты тығыздық-функционалдық теориясының дөңес талдау тұжырымдамасын жалпылауға мүмкіндік беретін жеткілікті күшті дөңес қасиетке ие. Негізгі дөңес құрылымнан бірнеше вариациялық принциптер ғана емес, сонымен қатар потенциалдар мен негізгі күй тығыздықтары арасындағы Хоэнберг – Конға ұқсас сәйкестік те шығады. Сонымен қатар, энергия функционалын магнит өрісі бойынша энергияның максималды қисықтығына шектеулер қоятын және (Фреше бойынша) дифференциалданатын универсалды тығыздық функционалын құруға мүмкіндік беретін, дәстүрлі Шрёдингердің негізгі күй энергиясына қолданылатын стандартты жуықтау әдісінің (Моро – Йосида реттеуі) нәтижесі ретінде қарастыруға болады.
kk
The recent discovery of ten new dwarf galaxy candidates by the Dark Energy Survey (DES) and the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) could increase the Fermi Gamma-Ray Space Telescope's sensitivity to annihilating dark matter particles, potentially enabling a definitive test of the dark matter interpretation of the long-standing Galactic Center gamma-ray excess. In this paper, we compare the previous analyses of Fermi data from the directions of the new dwarf candidates (including the relatively nearby Reticulum II) and perform our own analysis, with the goal of establishing the statistical significance of any gamma-ray signal from these sources. We confirm the presence of an excess from Reticulum II, with a spectral shape that is compatible with the Galactic Center signal. The significance of this emission is greater than that observed from 99.84% of randomly chosen high-latitude blank-sky locations, corresponding to a local detection significance of 3.2 sigma. We improve upon the standard blank-sky calibration approach through the use of multi-wavelength catalogs, which allow us to avoid regions that are likely to contain unresolved gamma-ray sources.
Nedávný objev deseti nových kandidátů na trpasličí galaxie pomocí temné energetické průzkumné mise (DES) a Panoramatického průzkumného dalekohledu a rychlé odezvové soustavy (Pan-STARRS) by mohl zvýšit citlivost Fermiho kosmického gama dalekohledu na anihilující částice temné hmoty, což by mohlo umožnit definitivní test interpretace dlouhodobého přebytku gama-záření z galaktického středu jako projevu temné hmoty. V tomto článku porovnáváme předchozí analýzy dat Fermiho z oblastí nových kandidátů na trpasličí galaxie (včetně relativně blízké Reticulum II) a provádíme vlastní analýzu s cílem určit statistickou významnost jakéhokoli gama-zářivého signálu z těchto zdrojů. Potvrzujeme přítomnost přebytku z Reticulum II, jehož spektrální tvar je slučitelný se signálem z galaktického středu. Významnost tohoto záření je vyšší než u 99,84 % náhodně vybraných míst blank-sky na vysokých galaktických šířkách, což odpovídá lokální detekční významnosti 3,2 sigma. Vylepšujeme standardní přístup blank-sky kalibrace použitím vícevlnových katalogů, které nám umožňují vyhnout se oblastem, které pravděpodobně obsahují nerozlišené zdroje gama-záření.
cs
We report transport and magnetization measurements on graphitic materials that have been hydrogenated after being treated with octane. The temperature-dependent electrical resistivity shows anomalies manifested as re-entrant insulator-metal transitions. Below 50 K, the magnetoresistance data shows both antiferromagnetic and ferromagnetic behavior as the magnetic field is decreased or increased, respectively. The system is possibly an unconventional magnetic superconductor. The irreversible behavior observed in the field-cooled vs. the zero-field-cooled data for a sufficiently high magnetic field suggests that the system might enter a superconducting state below 50 K. Energy gap data are obtained from nonlocal electric differential conductance measurements. An exciton-based mechanism is likely driving the system to the superconducting state below 50 K, where the gap is divergent. We find that the hydrogenated carbon fiber is a multiple-gap system with critical temperature estimates above room temperature. The temperature dependence of the superconducting gap follows the flat-band energy relationship, with the flat-band gap parameter linearly increasing with the temperature above 50 K. Thus, we find that either a magnetic or an electric field can drive this hydrogenated graphitic system to a superconducting state below 50 K. In addition, AF spin fluctuations create pseudo-gap states above 50 K.
เราขอรายงานผลการวัดการนำไฟฟ้าและการเป็นแม่เหล็กของวัสดุกราไฟต์ที่ถูกเติมไฮโดรเจนหลังจากได้รับการปฏิบัติรักษาด้วยออกเทน ค่าความต้านทานไฟฟ้าที่ขึ้นกับอุณหภูมิแสดงความผิดปกติในรูปของการเปลี่ยนแปลงสถานะจากฉนวนเป็นตัวนำไฟฟ้าแบบย้อนกลับ (re-entrant insulator-metal transitions) ข้อมูลแม่เหล็กต้านทานที่ต่ำกว่า 50 เคลวิน แสดงพฤติกรรมทั้งแบบแอนติเฟอโรแมกเนติกและเฟอโรแมกเนติก เมื่อสนามแม่เหล็กลดลงหรือเพิ่มขึ้นตามลำดับ ระบบนี้อาจเป็นตัวนำยิ่งยวดที่มีลักษณะแม่เหล็กแบบผิดแผกไปจากปกติ พฤติกรรมที่ไม่สามารถย้อนกลับได้ที่สังเกตได้จากการเปรียบเทียบข้อมูลที่วัดภายใต้สนามแม่เหล็กคงที่กับข้อมูลที่วัดในสภาวะไม่มีสนามแม่เหล็ก สำหรับสนามแม่เหล็กที่สูงพอสมควร บ่งชี้ว่าระบบอาจเข้าสู่สถานะตัวนำยิ่งยวดที่ต่ำกว่า 50 เคลวิน ข้อมูลช่องว่างพลังงานได้มาจากการวัดการนำไฟฟ้าต่างระดับที่ไม่เป็นท้องถิ่น (nonlocal electric differential conductance) กลไกที่อิงกับเอกซิทอน (exciton-based mechanism) มีแนวโน้มเป็นตัวขับเคลื่อนให้ระบบเข้าสู่สถานะตัวนำยิ่งยวดที่ต่ำกว่า 50 เคลวิน โดยที่ช่องว่างพลังงานมีลักษณะพุ่งออกไป (divergent) เราพบว่าเส้นใยคาร์บอนที่ถูกเติมไฮโดรเจนเป็นระบบที่มีหลายช่องว่างพลังงาน โดยมีการประมาณค่าอุณหภูมิวิกฤตที่สูงกว่าอุณหภูมิห้อง ความสัมพันธ์ของช่องว่างตัวนำยิ่งยวดกับอุณหภูมิตามความสัมพันธ์พลังงานแถบแบน (flat-band energy relationship) โดยพารามิเตอร์ช่องว่างแถบแบนเพิ่มขึ้นเชิงเส้นกับอุณหภูมิที่สูงกว่า 50 เคลวิน ดังนั้นเราพบว่าทั้งสนามแม่เหล็กหรือสนามไฟฟ้าสามารถขับเคลื่อนระบบกราไฟต์ที่เติมไฮโดรเจนนี้ให้เข้าสู่สถานะตัวนำยิ่งยวดที่ต่ำกว่า 50 เคลวิน นอกจากนี้ การสั่นไหวของสปินแบบแอนติเฟอโรแมกเนติก (AF spin fluctuations) ยังก่อให้เกิดสถานะช่องว่างปลอม (pseudo-gap states) ที่สูงกว่า 50 เคลวิน
th
In order to study the dependence of the coercive force of sintered magnets on temperature, nucleation and domain wall propagation at the grain boundary are studied as rate-determining processes of the magnetization reversal phenomena in magnets consisting of bulk hard magnetic grains contacting via grain boundaries of a soft magnetic material. These systems have been studied analytically for a continuum model at zero temperature (A. Sakuma, et al. J. Mag. Mag. Mat. {\bf 84} 52 (1990)). In the present study, the temperature dependence is studied by making use of the stochastic Landau-Lifshitz-Gilbert equation at finite temperatures. In particular, the threshold fields for nucleation and domain wall propagation are obtained as functions of ratios of magnetic interactions and anisotropies of the soft and hard magnets for various temperatures. It was found that the threshold field for domain wall propagation is robust against thermal fluctuations, while that for nucleation is fragile. The microscopic mechanisms of the observed temperature dependence are discussed.
من أجل دراسة اعتماد القوة القسرية للمغناطيسات الملبدة على درجة الحرارة، تمت دراسة تكوين النوى وانتشار جدار المجال المغناطيسي عند حدود الحبيبات باعتبارهما عمليتين تحدّدان المعدل في ظواهر عكس المغناطيسية في المغناطيسات المكونة من حبيبات مغناطيسية صلبة كبيرة الحجم تتلامس عبر حدود حبيبات من مادة مغناطيسية لينة. وقد تمت دراسة هذه الأنظمة تحليليًا باستخدام نموذج الاستمرارية عند درجة حرارة الصفر المطلق (A. Sakuma وآخرون، J. Mag. Mag. Mat. {\bf 84} 52 (1990)). في هذه الدراسة الحالية، تتم دراسة الاعتماد على درجة الحرارة باستخدام معادلة لانداو-ليفشيتس-جيلبرت العشوائية عند درجات حرارة محدودة. وعلى وجه الخصوص، تم الحصول على حقول العتبة الخاصة بتكوين النوى وانتشار جدار المجال المغناطيسي كدوال لنسب التآثرات المغناطيسية وثوابت التباين للمغناطيسات اللينة والصلبة، وذلك عند درجات حرارة مختلفة. وتبين أن حقل العتبة لانتشار جدار المجال المغناطيسي يمتاز بالثبات أمام التقلبات الحرارية، في حين أن حقل العتبة لتكوين النوى يكون هشًا. وتمت مناقشة الآليات الميكروسكوبية للاعتماد الملحوظ على درجة الحرارة.
ar
Governments and cities around the world are currently facing rapid growth in the use of Electric Vehicles and therewith the need for Charging Infrastructure. For these cities, the challenge remains how to further roll out charging infrastructure in the most efficient way, both in terms of cost and use. Forecasting models are not able to predict longer-term developments, and as such more complex simulation models offer opportunities to simulate various scenarios. Agent-based simulation models provide insight into the effects of incentives and roll-out strategies before they are implemented in practice and thus allow for scenario testing. This paper describes the build-up of an agent-based model that enables policy makers to anticipate charging infrastructure development. The model is able to simulate charging transactions of individual users and is both calibrated and validated using a dataset of charging transactions from the public charging infrastructure of the four largest cities in the Netherlands.
Dünya genelinde hükümetler ve şehirler şu anda Elektrikli Araçların kullanımındaki hızlı artışla ve buna bağlı olarak Şarj Altyapısına olan ihtiyaçla karşı karşıya. Bu şehirler için, maliyet ve kullanım açısından en etkili şekilde şarj altyapısının nasıl daha fazla yaygınlaştırılacağı sorunsalı devam etmekte. Tahmin modelleri daha uzun vadeli gelişmeleri öngörememekte ve bu nedenle çeşitli senaryoları simüle etme imkânı sunan daha karmaşık simülasyon modelleri ön plana çıkmaktadır. Ajan temelli simülasyon modelleri, teşviklerin ve yaygınlaştırma stratejilerinin etkilerini bunlar uygulamaya konmadan önce ortaya koyarak senaryo testlerine olanak tanır. Bu makale, politika yapıcıların şarj altyapısı gelişimine önceden hazırlanmalarını sağlayan ajan temelli bir modelin oluşturulmasını açıklamaktadır. Model, bireysel kullanıcıların şarj işlemlerini simüle edebilmekte ve Hollanda'nın dört büyük şehrinin kamuya açık şarj altyapısından alınan şarj işlem verileriyle hem kalibre edilmiş hem de doğrulanmıştır.
tr
We consider the problem of optimal charging of plug-in electric vehicles (PEVs). We treat this problem as a multi-agent game, where vehicles/agents are heterogeneous since they are subject to possibly different constraints. Under the assumption that electricity price is affine in total demand, we show that, for any finite number of heterogeneous agents, the PEV charging control game admits a unique Nash equilibrium, which is the optimizer of an auxiliary minimization program. We are also able to quantify the asymptotic behaviour of the price of anarchy for this class of games. More precisely, we prove that if the parameters defining the constraints of each vehicle are drawn randomly from a given distribution, then, the value of the game converges almost surely to the optimum of the cooperative problem counterpart as the number of agents tends to infinity. In the case of a discrete probability distribution, we provide a systematic way to abstract agents in homogeneous groups and show that, as the number of agents tends to infinity, the value of the game tends to a deterministic quantity.
យើងពិចារណាលើបញ្ហានៃការផ្គត់ផ្គង់ថាមពលដោយប្រសើរបំផុតសម្រាប់យានយន្តអគ្គិសនីដែលភ្ជាប់ (PEVs)។ យើងដោះស្រាយបញ្ហានេះដោយមើលឃើញថាជាល្បែងពហុភាគី ដែលយានយន្ត/ភាគីមានលក្ខណៈខុសគ្នា ដោយសារពួកគេត្រូវបានគេដាក់កម្រិតខុសៗគ្នា។ ក្រោមសន្មត់ថាថ្លៃភ្លើងគឺជាអនុគមន៍លីនេអ៊ែរនៃតម្រូវការសរុប យើងបង្ហាញថា សម្រាប់ចំនួនភាគីខុសៗគ្នាមួយណាមួយដែលមានកំណត់ ល្បែងគ្រប់គ្រងការផ្គត់ផ្គង់ថាមពល PEV មានតែតុល្យភាពណាស់ (Nash equilibrium) តែមួយគត់ ដែលជាចំណុចអ៊ីស្ត្រង់ (optimizer) នៃកម្មវិធីអប្បបរមាជំនួយមួយ។ យើងក៏អាចវាស់វែងឥរិយាបថនៃតម្លៃនៃភាពអាក្រក់ (price of anarchy) សម្រាប់ថ្នាក់នៃល្បែងទាំងនេះបានដែរ។ ជាក់លាក់ជាងនេះ យើងបញ្ជាក់ថា ប្រសិនបើប៉ារ៉ាម៉ែត្រកំណត់កម្រិតសម្រាប់យានយន្តនីមួយៗត្រូវបានគេដកចេញដោយចៃដន្យពីចែកចាយមួយដែលបានផ្តល់ នោះតម្លៃនៃល្បែងនឹងប្រូបាបប្រូបាបជិតស្និទ្ធទៅនឹងតម្លៃអ៊ីស្ត្រង់នៃបញ្ហាសហការដែលស្របគ្នាដូចគ្នានៅពេលចំនួនភាគីខិតទៅអនន្ត។ ក្នុងករណីចែកចាយប្រូបាបឌីស្ស្រេត (discrete probability distribution) យើងផ្តល់វិធីសាស្ត្រប្រព័ន្ធមួយដើម្បីស្រង់ចេញភាគីទៅជាក្រុមស្មើគ្នា ហើយបង្ហាញថា នៅពេលចំនួនភាគីខិតទៅអនន្ត តម្លៃនៃល្បែងនឹងខិតទៅរកបរិមាណមួយដែលកំណត់ដោយច្បាប់។
km
OSD PSE is the Indonesian Government Certification Authority (CA) for the National e-Procurement System, later renamed OSD PSE G2. It has a unique hierarchical structure under the OSD Lemsaneg. As an Issuing CA, OSD PSE G2 publishes and guarantees the quality of the Certificate Policy and Certification Practice Statement (CP-CPS) in order to gain the trust of PKI users. In this article, we analyze the CP-CPS version 1.0 published by OSD PSE G2. For this purpose, we apply the methodology of the PKI Assessment Guidelines (PAG). The quality assessment of this CP-CPS includes its compliance with the related references/standards, namely: CP OSD Lemsaneg v.1.1; RFC 3647; and the CA Business Practice Disclosure Principle on Trust Service Principles and Criteria for Certification Authorities (BPDP-TSPCCA) version 2.0. We finally found that the CP-CPS version 1.0 does not comply with the related standards and references. Hence, the CP-CPS needs to be updated to reflect the current condition of OSD PSE G2.
OSD PSE là Cơ quan chứng thực (CA) của Chính phủ Indonesia dành cho Hệ thống mua sắm điện tử quốc gia, sau đó được đổi tên thành OSD PSE G2. Cơ quan này có cấu trúc phân cấp độc đáo dưới sự quản lý của OSD Lemsaneg. Với tư cách là một CA cấp phát, OSD PSE G2 công bố và đảm bảo chất lượng Chính sách Chứng chỉ và Tuyên bố Thực hành Chứng thực (CP-CPS) nhằm tạo dựng sự tin cậy từ người dùng PKI. Trong bài viết này, chúng tôi phân tích bản CP-CPS phiên bản 1.0 do OSD PSE G2 công bố. Để thực hiện điều này, chúng tôi áp dụng phương pháp luận theo Hướng dẫn Đánh giá PKI (PAG). Việc đánh giá chất lượng CP-CPS này bao gồm mức độ tuân thủ các tài liệu tham chiếu/tiêu chuẩn liên quan, cụ thể là: CP OSD Lemsaneg phiên bản 1.1; RFC 3647; và Nguyên tắc Tiết lộ Thực hành Kinh doanh CA về Nguyên tắc và Tiêu chí Dịch vụ Tin cậy dành cho Cơ quan Chứng thực (BPDP-TSPCCA) phiên bản 2.0. Cuối cùng, chúng tôi nhận thấy rằng CP-CPS phiên bản 1.0 chưa tuân thủ các tiêu chuẩn và tài liệu tham chiếu liên quan. Do đó, CP-CPS cần được cập nhật phù hợp với điều kiện thực tế hiện tại của OSD PSE G2.
vi
Self-assembly of colloidal particles due to elastic interactions in nematic liquid crystals promises tunable composite materials and can be guided by exploiting surface functionalization, geometric shape and topology, though these means of controlling self-assembly remain limited. Here, we realize low-symmetry achiral and chiral elastic colloids in the nematic liquid crystals using colloidal polygonal concave and convex prisms. We show that the controlled pinning of disclinations at the prisms' edges alters the symmetry of director distortions around the prisms and their orientation with respect to the far-field director. The controlled localization of the disclinations at the prism's edges significantly influences anisotropy of the diffusion properties of prisms dispersed in liquid crystals and allows one to modify their self-assembly. We show that elastic interactions between polygonal prisms can be switched between repulsive and attractive just by controlled re-pinning of the disclinations at different edges using laser tweezers. Our findings demonstrate that elastic interactions between colloidal particles dispersed in nematic liquid crystals are sensitive to the topologically equivalent but geometrically rich controlled configurations of the particle-induced defects.
Perakitan diri partikel koloid akibat interaksi elastis dalam kristal cair nematik menjanjikan material komposit yang dapat disesuaikan dan dapat diarahkan dengan memanfaatkan fungsionalisasi permukaan, bentuk geometris, dan topologi, meskipun metode pengendalian perakitan diri ini masih terbatas. Di sini, kami mewujudkan koloid elastis akiral dan kiral bersimetri rendah dalam kristal cair nematik menggunakan prisma koloid poligonal cekung dan cembung. Kami menunjukkan bahwa penjepitan terkendali disklinasi pada tepi-tepi prisma mengubah simetri distorsi direktor di sekitar prisma serta orientasinya terhadap direktor medan jauh. Lokalisasi terkendali disklinasi pada tepi prisma secara signifikan memengaruhi anisotropi sifat difusi prisma yang tersebar dalam kristal cair dan memungkinkan modifikasi perakitan dirinya. Kami menunjukkan bahwa interaksi elastis antara prisma poligonal dapat dialihkan antara tolak-menolak dan tarik-menarik hanya dengan menjepit ulang disklinasi secara terkendali pada tepi yang berbeda menggunakan pinset laser. Temuan kami menunjukkan bahwa interaksi elastis antara partikel koloid yang tersebar dalam kristal cair nematik peka terhadap konfigurasi cacat terinduksi partikel yang setara secara topologis tetapi kaya secara geometris dan dapat dikendalikan.
id
The $\sigma$-$\omega$ model of nuclei is studied at leading order in the $1/N$ expansion thereby introducing the self consistent Hartree approximation, the Dirac sea corrections and the one fermion loop meson self energies in a unified way. For simplicity, the Dirac sea is further treated within a semiclassical expansion to all orders. The well-known Landau pole vacuum instability appearing in this kind of theories is removed by means of a scheme recently proposed in this context. The effect of such removal on the low momentum effective parameters of the model, relevant to describe nuclear matter and finite nuclei, is analyzed. The one fermion loop meson self energies are found to have a sizeable contribution to these parameters. However, such contribution turns out to come mostly from the Landau poles and is thus spurious. We conclude that the fermionic loop can only be introduced consistently in the $\sigma$-$\omega$ nuclear model if the Landau pole problem is dealt with properly.
পরমাণুর $\sigma$-$\omega$ মডেলটি $1/N$ প্রসারণের প্রধান ক্রম অনুযায়ী অধ্যয়ন করা হয়, যার ফলে স্ব-সামঞ্জস্যপূর্ণ হার্ট্রি আসন্নীকরণ, ডিরাক সমুদ্রের সংশোধন এবং এক ফার্মিয়ন লুপ মেসন স্ব-শক্তি একটি সমন্বিত পদ্ধতিতে প্রবর্তিত হয়। সরলতার জন্য, ডিরাক সমুদ্রকে আরও সেমিক্লাসিকাল প্রসারণের মাধ্যমে সকল ক্রমে পরিচালনা করা হয়। এই ধরনের তত্ত্বগুলিতে দেখা দেওয়া পরিচিত ল্যান্ডাউ পোল শূন্যস্থান অস্থিরতা সম্প্রতি এই প্রসঙ্গে প্রস্তাবিত একটি পদ্ধতির মাধ্যমে দূর করা হয়। পারমাণবিক বস্তু এবং সসীম নিউক্লিয়াস বর্ণনার জন্য প্রাসঙ্গিক কম ভারবেগের কার্যকর প্যারামিটারগুলিতে এই অপসারণের প্রভাব বিশ্লেষণ করা হয়। এক ফার্মিয়ন লুপ মেসন স্ব-শক্তিগুলির এই প্যারামিটারগুলির উপর উল্লেখযোগ্য অবদান রয়েছে বলে দেখা যায়। তবে এই অবদানটি মূলত ল্যান্ডাউ পোল থেকে আসে এবং তাই এটি মিথ্যা। আমরা এই উপসংহারে আসি যে ল্যান্ডাউ পোল সমস্যার সঠিক সমাধান না করা পর্যন্ত $\sigma$-$\omega$ পারমাণবিক মডেলে ফার্মিয়ন লুপ সামঞ্জস্যপূর্ণভাবে প্রবর্তন করা সম্ভব হবে না।
bn
When the training and test data are from different distributions, domain adaptation is needed to reduce dataset bias to improve the model's generalization ability. Since it is difficult to directly match the cross-domain joint distributions, existing methods tend to reduce the marginal or conditional distribution divergence using predefined distances such as MMD and adversarial-based discrepancies. However, it remains challenging to determine which method is suitable for a given application since these methods are built with certain priors or biases. Thus they may fail to uncover the underlying relationship between transferable features and joint distributions. This paper proposes Learning to Match (L2M) to automatically learn the cross-domain distribution matching without relying on hand-crafted priors on the matching loss. Instead, L2M reduces the inductive bias by using a meta-network to learn the distribution matching loss in a data-driven way. L2M is a general framework that unifies task-independent and human-designed matching features. We design a novel optimization algorithm for this challenging objective with self-supervised label propagation. Experiments on public datasets substantiate the superiority of L2M over SOTA methods. Moreover, we apply L2M to transfer from pneumonia to COVID-19 chest X-ray images with remarkable performance. L2M can also be extended to other distribution matching applications, where we show in a trial experiment that L2M generates more realistic and sharper MNIST samples.
Když trénovací a testovací data pocházejí z různých distribucí, je potřeba doménové přizpůsobení, aby bylo možné snížit vychýlení datové sady a zlepšit schopnost modelu generalizovat. Protože je obtížné přímo porovnat křížově doménové sdružené distribuce, stávající metody mají tendenci snižovat divergenci marginálních nebo podmíněných distribucí pomocí předdefinovaných vzdáleností, jako je MMD nebo nesoulad založený na adversariálním učení. Zůstává však obtížné určit, která metoda je vhodná pro danou aplikaci, protože jsou postaveny na určitých apriorních předpokladech či vychýleních. Mohou tedy selhat v odhalení skrytého vztahu mezi přenositelnými příznaky a sdruženými distribucemi. Tento článek navrhuje metodu Learning to Match (L2M), která automaticky učí křížově doménové porovnávání distribucí bez spoléhání se na ručně vytvořené apriorní předpoklady ohledně ztrátové funkce porovnání. Místo toho L2M snižuje indukční vychýlení použitím meta-sítě, která se datově řízeným způsobem učí ztrátovou funkci porovnávání distribucí. L2M je obecný rámec, který sjednocuje porovnávací příznaky nezávislé na úloze i navržené člověkem. Navrhli jsme nový optimalizační algoritmus pro toto náročné cílové zadání se samoučeným (self-supervised) šířením štítků. Experimenty na veřejných datových sadách potvrzují nadřazenost L2M oproti současným nejlepším metodám. Navíc jsme aplikovali L2M na přenos z rentgenových snímků plic u pneumonie na snímky u COVID-19 s významným výsledkem. L2M lze také rozšířit na další aplikace porovnávání distribucí, kde jsme v pokusném experimentu ukázali, že L2M generuje realističtější a ostřejší vzorky MNIST.
cs
OSD PSE is the Indonesian Government Certification Authority (CA) for the National e-Procurement System, later renamed OSD PSE G2. It has a unique hierarchical structure under the OSD Lemsaneg. As an Issuing CA, OSD PSE G2 publishes and guarantees the quality of the Certificate Policy and Certification Practice Statement (CP-CPS) in order to gain the trust of PKI users. In this article, we analyze the CP-CPS version 1.0 published by OSD PSE G2. For this purpose, we apply the methodology of the PKI Assessment Guidelines (PAG). The quality assessment of this CP-CPS includes its compliance with the related references/standards, namely: CP OSD Lemsaneg v.1.1; RFC 3647; and the CA Business Practice Disclosure Principle on Trust Service Principles and Criteria for Certification Authorities (BPDP-TSPCCA) version 2.0. We finally found that the CP-CPS version 1.0 does not comply with the related standards and references. Hence, the CP-CPS needs to be updated to reflect the current condition of OSD PSE G2.
OSD PSE هو سلطة الشهادات (CA) التابعة للحكومة الإندونيسية لنظام المناقصات الإلكترونية الوطني، وأُعيد تسميته لاحقًا بـ OSD PSE G2. ويتميز ببنية هرمية فريدة تحت إشراف OSD Lemsaneg. وبصفته سلطة إصدار الشهادات، ينشر OSD PSE G2 ويضمن جودة سياسة الشهادة وبيان ممارسات التصديق (CP-CPS) من أجل اكتساب ثقة مستخدمي البنية التحتية للمفتاح العام (PKI). في هذه المقالة، نحلل إصدار 1.0 من وثيقة CP-CPS التي نشرها OSD PSE G2. ولتحقيق هذه الغاية، نُطبّق منهجية دليل تقييم البنية التحتية للمفتاح العام (PAG). ويشمل تقييم جودة هذه الوثيقة CP-CPS مدى مطابقتها للمراجع/المعايير ذات الصلة، وهي: CP OSD Lemsaneg الإصدار 1.1؛ RFC 3647؛ ومبادئ الإفصاح عن الممارسات التجارية للسلطة المصدرة للشهادات المتعلقة بمبادئ ومعايير خدمات الثقة للسلطات المصدرة للشهادات (BPDP-TSPCCA) الإصدار 2.0. وتوصلنا في النهاية إلى أن وثيقة CP-CPS الإصدار 1.0 غير مطابقة للمعايير والمراجع ذات الصلة. وبالتالي، يجب تحديث وثيقة CP-CPS بما يتماشى مع الوضع الحالي لـ OSD PSE G2.
ar
We initiate the study of multi-layered cake cutting with the goal of fairly allocating multiple divisible resources (layers of a cake) among a set of agents. The key requirement is that each agent can only utilize a single resource at each time interval. Several real-life applications exhibit such restrictions on overlapping pieces; for example, assigning time intervals over multiple facilities and resources or assigning shifts to medical professionals. We investigate the existence and computation of envy-free and proportional allocations. We show that envy-free allocations that are both feasible and contiguous are guaranteed to exist for up to three agents with two types of preferences, when the number of layers is two. We also show that envy-free feasible allocations where each agent receives a polynomially bounded number of intervals exist for any number of agents and layers under mild conditions on agents' preferences. We further devise an algorithm for computing proportional allocations for any number of agents and layers.
Sinimulan namin ang pag-aaral ng pagputol ng multi-layered na cake na may layunin na makapaglaan nang patas ng maramihang mga mapanghati na yaman (mga layer ng isang cake) sa mga ahente. Ang pangunahing kahilingan ay ang bawat ahente ay maaari lamang gumamit ng isang yaman sa bawat panahon. Maraming tunay na aplikasyon ang nagpapakita ng ganitong uri ng paghihigpit sa nag-uugnay na mga bahagi; halimbawa, ang pagtatalaga ng mga panahon sa iba't ibang pasilidad at yaman o ang pagtatalaga ng mga shift sa mga propesyonal sa medisina. Sinuri namin ang pag-iral at pagkakalkula ng mga paglaan na walang inggit (envy-free) at proporsyonal. Ipinakita namin na ang mga paglaan na walang inggit na parehong posible at magkakasunod ay garantisadong umiiral para sa hanggang tatlong ahente na may dalawang uri ng kagustuhan, kapag ang bilang ng mga layer ay dalawa. Ipinakita rin namin na umiiral ang mga feasible na paglaan na walang inggit kung saan ang bawat ahente ay tumatanggap ng bilang ng mga interval na limitado sa polynomial para sa anumang bilang ng mga ahente at layer sa ilalim ng mga banayad na kondisyon sa kagustuhan ng mga ahente. Higit pa rito, nag-imbento kami ng isang algoritmo para makapagkompyut ng mga proporsyonal na paglaan para sa anumang bilang ng mga ahente at layer.
tl
We present new Near-Infrared (J,K) magnitudes for 114 RR Lyrae stars in the globular cluster Omega Cen (NGC 5139), which we combine with data from the literature to construct a sample of 180 RR Lyrae stars with J and K mean magnitudes on a common photometric system. This is presently the largest such sample in any stellar system. We also present updated predictions for J,K-band Period-Luminosity relations for both fundamental and first-overtone RR Lyrae stars, based on synthetic horizontal branch models with metal abundance ranging from Z=0.0001 to Z=0.004. By adopting for the Omega Cen variables with measured metal abundances an alpha-element enhancement of a factor of 3 (about 0.5 dex) with respect to iron, we find a true distance modulus of 13.70 (with a random error of 0.06 and a systematic error of 0.06), corresponding to a distance d=5.5 Kpc (with both random and systematic errors equal to 0.03 Kpc). Our estimate is in excellent agreement with the distance inferred for the eclipsing binary OGLEGC-17, but differs significantly from the recent distance estimates based on cluster dynamics and on high-amplitude Delta Scuti stars.
យើងខ្ញុំបានផ្តល់​រ៉ឺសូលធំថ្មីៗ (J,K) សម្រាប់​ផ្កាយ RR Lyrae ចំនួន 114 នៅ​ក្នុង​ក្រុមផ្កាយ​រាង​គ្រាប់​អ៊ូម៉ែហ្គា សេន (NGC 5139) ដែល​យើង​បាន​បញ្ចូល​ជាមួយ​ទិន្នន័យ​ពី​អត្ថបទ​សិក្សា ដើម្បី​បង្កើត​គំរូ​មួយ​ដែល​មាន​ផ្កាយ RR Lyrae ចំនួន 180 ដែល​មាន​រ៉ឺសូលធំ​មធ្យម J និង K នៅ​លើ​ប្រព័ន្ធ​រ៉ឺសូលធំ​ដូច​គ្នា។ នេះ​គឺ​ជា​គំរូ​ធំ​បំផុត​នាពេល​បច្ចុប្បន្ន​ក្នុង​ប្រព័ន្ធ​ផ្កាយ​ណាមួយ។ យើង​ក៏​បាន​ផ្តល់​នូវ​ការ​ព្យាករ​ថ្មី​អំពី​ទំនាក់​ទំនង​រយៈ​ពេល-ភាពភ្លឺ​សម្រាប់​ផ្កាយ RR Lyrae ដែល​មាន​រយៈ​ពេល​មូលដ្ឋាន និង​រយៈ​ពេល​ដំបូង​ដែល​លើក​ឡើង​ ដោយ​ផ្អែក​លើ​គំរូ​ស្ថាបត្យកម្ម​ផ្កាយ​ដែល​មាន​ធាតុ​ដែក​ចាប់​ពី Z=0.0001 ដល់ Z=0.004។ ដោយ​បាន​ទទួល​យក​សម្រាប់​ផ្កាយ​អថេរ​នៃ Omega Cen ដែល​មាន​ការ​វាស់​វែង​អំពី​បរិមាណ​ធាតុ​ដែក ដោយ​មាន​ការ​កើន​ឡើង​ធាតុ alpha ចំនួន 3 ដង (ប្រហែល 0.5 dex) ធៀប​នឹង​ដែក យើង​បាន​រក​ឃើញ​ម៉ូឌុល​ចម្ងាយ​ពិត​ប្រាកដ​ស្មើ​នឹង 13.70 (ដោយ​មាន​កំហុស​ចៃ​ដន្យ 0.06 និង​កំហុស​ប្រព័ន្ធ 0.06) ដែល​ស្មើ​នឹង​ចម្ងាយ d=5.5 Kpc (ដោយ​មាន​កំហុស​ចៃ​ដន្យ និង​ប្រព័ន្ធ​ទាំង​ពីរ​ស្មើ​នឹង 0.03 Kpc)។ ការ​ប៉ាន់​ប្រមាណ​របស់​យើង​គឺ​មាន​ភាព​ព្រម​ព្រៀង​គ្នា​យ៉ាង​ល្អ​ជាមួយ​ចម្ងាយ​ដែល​បាន​សន្និដ្ឋាន​សម្រាប់​គូ​ផ្កាយ​ដែល​បាំង​គ្នា OGLEGC-17 ប៉ុន្តែ​ខុស​ពី​ការ​ប៉ាន់​ប្រមាណ​ចម្ងាយ​ថ្មីៗ​ដែល​ផ្អែក​លើ​ចលនាក្រុម​ផ្កាយ និង​ផ្កាយ Delta Scuti ដែល​មាន​អំពើ​ធំ។
km
There appears to be a longtime, very slowly evolving state in dense simple fluids which, for high enough density, approaches a glassy nonergodic state. The nature of the nonergodic state can be characterized by the associated static equilibrium state. In particular, systems driven by Smoluchowski or Newtonian dynamics share the same static equilibrium and nonergodic states. That these systems share the same nonergodic states is a highly nontrivial statement and requires establishing a number of results. In the high-density regime one finds that an equilibrating system decays via a three-step process identified in mode-coupling theory (MCT). For densities greater than a critical density one has time-power-law decay with exponents a and b. There are sets of linear fluctuation dissipation relations (FDRs) which connect the cumulants of these two fields. The form of the FDRs is the same for both Smoluchowski or Newtonian dynamics. While we show this universality of nonergodic states within perturbation theory, we expect it to be true more generally. The nature of the approach to the nonergodic state has been suggested by MCT. It has been a point of contention that MCT is a phenomenological theory and not a systematic theory with prospects for improvement. Recently a systematic theory has been developed. It naturally allows one to calculate self-consistently density cumulants in a perturbation expansion in a pseudo-potential. At leading order one obtains a kinetic kernel quadratic in the density. This is a "one-loop" theory like MCT. At this one-loop level one finds vertex corrections which depend on the three-point equilibrium cumulants. Here we assume these vertex-corrections can be ignored and focus on the higher-order loops. We show that one can sum up all of the loop contributions. The higher-order loops do not change the nonergodic state parameters substantially.
ហាក់ដូចជាមានស្ថានភាពដែលវិវត្តយឺតៗ និងបានយូរមួយកើតមានក្នុងសារធាតុរាវសាមញ្ញដែលមានដង់ស៊ីតេខ្ពស់ ដែលសម្រាប់ដង់ស៊ីតេគ្រប់គ្រាន់ វាប្រៀបប្រៀលទៅនឹងស្ថានភាពកញ្ចក់ដែលមិនមានលក្ខណៈអេរ៉ហ្គូឌិក (nonergodic)។ លក្ខណៈរបស់ស្ថានភាពដែលមិនមានលក្ខណៈអេរ៉ហ្គូឌិកនេះអាចត្រូវបានគេចាត់ទុកដោយស្ថានភាពលំនឹងស្ទាទិកដែលពាក់ព័ន្ធ។ ជាពិសេស ប្រព័ន្ធដែលគ្រប់គ្រងដោយ​ ឌីណាមិក​ស្មូឡូឆូវស្សឺ (Smoluchowski) ឬ ញូវតុន (Newtonian) មានស្ថានភាពលំនឹងស្ទាទិក និងស្ថានភាពមិនមានលក្ខណៈអេរ៉ហ្គូឌិកដូចគ្នា។ ការដែលប្រព័ន្ធទាំងនេះមានស្ថានភាពមិនមានលក្ខណៈអេរ៉ហ្គូឌិកដូចគ្នាគឺជាការអះអាងដែលមិនសាមញ្ញ ហើយទាមទារការបង្កើតលទ្ធផលជាច្រើន។ នៅក្នុងតំបន់ដង់ស៊ីតេខ្ពស់ គេរកឃើញថាប្រព័ន្ធដែលកំពុងស្វែងរកលំនឹងនឹងរលាយតាមដំណើរការបីជំហាន ដែលត្រូវបានកំណត់អត្តសញ្ញាណក្នុងទ្រឹស្តីការភ្ជាប់របៀប (mode-coupling theory - MCT)។ សម្រាប់ដង់ស៊ីតេធំជាងដង់ស៊ីតេវិវដ្ត គេមានការរលាយតាមច្បាប់អានុភាពនៃពេលវេលាដោយមានមេគុណ a និង b។ មានសំណុំទំនាក់ទំនងរំញ័ររលាយលីនេអ៊ែ (FDRs) ដែលភ្ជាប់គុណកាត់ (cumulants) នៃវាលទាំងពីរនេះ។ ទម្រង់នៃ FDRs គឺដូចគ្នាសម្រាប់ទាំងឌីណាមិកស្មូឡូឆូវស្សឺ និងញូវតុន។ ទោះបីយើងបង្ហាញពីលក្ខណៈសកលនៃស្ថានភាពមិនមានលក្ខណៈអេរ៉ហ្គូឌិកនៅក្នុងទ្រឹស្តីរំខានក៏ដោយ យើងរំពឹងថាវាពិតប្រាកដនៅក្នុងទម្រង់ទូទៅជាងនេះ។ លក្ខណៈនៃការប្រៀបប្រៀលទៅស្ថានភាពមិនមានលក្ខណៈអេរ៉ហ្គូឌិកត្រូវបានបង្ហាញដោយ MCT។ វាត្រូវបានគេជជែកវែកញែកថា MCT គឺជាទ្រឹស្តីបាតុភូត មិនមែនជាទ្រឹស្តីប្រព័ន្ធដែលមានសក្ដានុពលក្នុងការកែលម្អនោះទេ។ កាលពីថ្មីៗនេះ ទ្រឹស្តីប្រព័ន្ធមួយត្រូវបានអភិវឌ្ឍ។ វាអនុញ្ញាតឱ្យគេគណនាគុណកាត់ដង់ស៊ីតេដោយខ្លួនឯងដោយផ្នែកការពង្រីករំខានក្នុងប៉្រូដូ-សក្ដានុពល (pseudo-potential)។ នៅកម្រិតដឹកនាំ គេទទួលបានគ្រាប់គ្រងគីណេទិក (kinetic kernel) ដែលជាទម្រង់ការ៉េនៃដង់ស៊ីតេ។ នេះគឺជាទ្រឹស្តី "មួយរង្វិល" (one-loop) ដូចគ្នានឹង MCT។ នៅកម្រិតមួយរង្វិលនេះ គេរកឃើញការកែតម្រូវកំពូល (vertex corrections) ដែលអាស្រ័យលើគុណកាត់សមតុល្យបីចំណុច។ នៅទីនេះ យើងសន្មតថាការកែតម្រូវកំពូលទាំងនេះអាចត្រូវបានគេមើលរំលង ហើយផ្តោតលើរង្វិលកម្រិតខ្ពស់ជាង។ យើងបង្ហាញថាគេអាចបូកសរុបចំណែករង្វិលទាំងអស់បាន។ រង្វិលកម្រិតខ្ពស់ទាំងនេះមិនផ្លាស់ប្តូរប៉ារ៉ាម៉ែត្រស្ថានភាពមិនមានលក្ខណៈអេរ៉ហ្គូឌិកយ៉ាងច្រើននោះទេ។
km
This paper investigates the problem of finding a fixed point for a global nonexpansive operator under time-varying communication graphs in real Hilbert spaces, where the global operator is separable and composed of an aggregate sum of local nonexpansive operators. Each local operator is only privately accessible to each agent, and all agents constitute a network. To seek a fixed point of the global operator, it is indispensable for agents to exchange local information and update their solution cooperatively. To solve the problem, two algorithms are developed, called distributed Krasnosel'ski\u{\i}-Mann (D-KM) and distributed block-coordinate Krasnosel'ski\u{\i}-Mann (D-BKM) iterations, for which the D-BKM iteration is a block-coordinate version of the D-KM iteration in the sense of randomly choosing and computing only one block-coordinate of local operators at each time for each agent. It is shown that the proposed two algorithms can both converge weakly to a fixed point of the global operator. Meanwhile, the designed algorithms are applied to recover the classical distributed gradient descent (DGD) algorithm, devise a new block-coordinate DGD algorithm, handle a distributed shortest distance problem in the Hilbert space for the first time, and solve linear algebraic equations in a novel distributed approach. Finally, the theoretical results are corroborated by a few numerical examples.
Ce document étudie le problème de la recherche d'un point fixe pour un opérateur global non expansif sous des graphes de communication variant dans le temps dans des espaces de Hilbert réels, où l'opérateur global est séparable et composé de la somme agrégée d'opérateurs non expansifs locaux. Chaque opérateur local n'est accessible qu'individuellement à chaque agent, et tous les agents forment un réseau. Pour trouver un point fixe de l'opérateur global, il est indispensable que les agents échangent des informations locales et mettent à jour leurs solutions de manière coopérative. Pour résoudre ce problème, deux algorithmes sont proposés, appelés itérations distribuées de Krasnosel'ski\u{\i}-Mann (D-KM) et itérations distribuées de Krasnosel'ski\u{\i}-Mann en coordonnées par blocs (D-BKM), lequel D-BKM constitue une version par coordonnées par blocs du D-KM, dans le sens où, à chaque instant, chaque agent choisit aléatoirement et calcule uniquement une coordonnée par bloc de ses opérateurs locaux. On montre que les deux algorithmes proposés convergent faiblement vers un point fixe de l'opérateur global. Par ailleurs, les algorithmes conçus permettent de retrouver l'algorithme classique de descente de gradient distribué (DGD), de concevoir un nouvel algorithme DGD par coordonnées par blocs, de traiter pour la première fois un problème distribué de distance minimale dans un espace de Hilbert, ainsi que de résoudre des équations algébriques linéaires selon une approche distribuée nouvelle. Enfin, les résultats théoriques sont confirmés par plusieurs exemples numériques.
fr
Objective: This work aimed to demonstrate the effectiveness of a hybrid approach based on Sentence BERT model and retrofitting algorithm to compute relatedness between any two biomedical concepts. Materials and Methods: We generated concept vectors by encoding concept preferred terms using ELMo, BERT, and Sentence BERT models. We used BioELMo and Clinical ELMo. We used Ontology Knowledge Free (OKF) models like PubMedBERT, BioBERT, BioClinicalBERT, and Ontology Knowledge Injected (OKI) models like SapBERT, CoderBERT, KbBERT, and UmlsBERT. We trained all the BERT models using Siamese network on SNLI and STSb datasets to allow the models to learn more semantic information at the phrase or sentence level so that they can represent multi-word concepts better. Finally, to inject ontology relationship knowledge into concept vectors, we used retrofitting algorithm and concepts from various UMLS relationships. We evaluated our hybrid approach on four publicly available datasets which also includes the recently released EHR-RelB dataset. EHR-RelB is the largest publicly available relatedness dataset in which 89% of terms are multi-word which makes it more challenging. Results: Sentence BERT models mostly outperformed corresponding BERT models. The concept vectors generated using the Sentence BERT model based on SapBERT and retrofitted using UMLS-related concepts achieved the best results on all four datasets. Conclusions: Sentence BERT models are more effective compared to BERT models in computing relatedness scores in most of the cases. Injecting ontology knowledge into concept vectors further enhances their quality and contributes to better relatedness scores.
Objektif: Kajian ini bertujuan untuk menunjukkan keberkesanan pendekatan hibrid berasaskan model Sentence BERT dan algoritma retrofitting untuk mengira keterkaitan antara dua konsep bioperubatan. Bahan dan Kaedah: Kami menjana vektor konsep dengan menyandar istilah utama konsep menggunakan model ELMo, BERT, dan Sentence BERT. Kami menggunakan BioELMo dan Clinical ELMo. Kami menggunakan model Ontologi Pengetahuan Bebas (OKF) seperti PubMedBERT, BioBERT, BioClinicalBERT, dan model Ontologi Pengetahuan Disuntik (OKI) seperti SapBERT, CoderBERT, KbBERT, dan UmlsBERT. Kami melatih semua model BERT menggunakan rangkaian Siamese pada set data SNLI dan STSb untuk membolehkan model mempelajari maklumat semantik yang lebih banyak pada peringkat frasa atau ayat supaya mereka dapat mewakili konsep pelbagai perkataan dengan lebih baik. Akhirnya, untuk menyuntik pengetahuan hubungan ontologi ke dalam vektor konsep, kami menggunakan algoritma retrofitting dan konsep daripada pelbagai hubungan UMLS. Kami menilai pendekatan hibrid kami pada empat set data yang tersedia secara awam yang juga termasuk set data EHR-RelB yang dikeluarkan baru-baru ini. EHR-RelB merupakan set data keterkaitan terbesar yang tersedia secara awam di mana 89% istilahnya terdiri daripada pelbagai perkataan, menjadikannya lebih mencabar. Keputusan: Model Sentence BERT kebanyakannya mengatasi model BERT yang sepadan. Vektor konsep yang dijana menggunakan model Sentence BERT berasaskan SapBERT dan dijalani proses retrofitting menggunakan konsep berkaitan UMLS mencapai keputusan terbaik pada keempat-empat set data. Kesimpulan: Model Sentence BERT lebih berkesan berbanding model BERT dalam mengira skor keterkaitan dalam kebanyakan kes. Penyuntikan pengetahuan ontologi ke dalam vektor konsep seterusnya meningkatkan kualiti mereka dan menyumbang kepada skor keterkaitan yang lebih baik.
ms
This text is a survey (Bourbaki seminar) on the paper "Liouville quantum gravity and KPZ" by B. Duplantier and S. Sheffield. The study of statistical physics models in two dimensions (d=2) at their critical point is in general a significantly hard problem (not to mention the d=3 case). In the eighties, three physicists, Knizhnik, Polyakov and Zamolodchikov (KPZ), came up in \cite{\KPZ} with a novel and far-reaching approach in order to understand the critical behavior of these models. Among these, one finds for example random walks, percolation, as well as the Ising model. The main underlying idea of their approach is to study these models along a two-step procedure as follows: a/ First of all, instead of considering the model on some regular lattice of the plane (such as $\Z^2$, for example), one defines it instead on a well-chosen "random planar lattice". Doing so corresponds to studying the model in its {\it quantum gravity} form. In the case of percolation, the appropriate choice of random lattice matches with the so-called planar maps. b/ Then it remains to get back to the actual {\it Euclidean} setup. This is done thanks to the celebrated {\bf KPZ formula}, which gives a very precise correspondence between the geometric properties of models in their quantum gravity formulation and their analogs in the Euclidean case. The nature and the origin of such a powerful correspondence remained rather mysterious for a long time. In fact, the KPZ formula is still not rigorously established and remains a conjectural correspondence. The purpose of this survey is to explain how the recent work of Duplantier and Sheffield explains some of the mystery hidden behind this KPZ formula. To summarize their contribution in one sentence, their work implies a beautiful interpretation of the KPZ correspondence through a uniformization of the random lattice, seen as a Riemann surface.
Ce texte est une présentation (séminaire Bourbaki) de l'article « Liouville quantum gravity and KPZ » de B. Duplantier et S. Sheffield. L'étude des modèles de physique statistique en deux dimensions (d=2) à leur point critique est en général un problème particulièrement difficile (sans parler du cas d=3). Dans les années quatre-vingt, trois physiciens, Knizhnik, Polyakov et Zamolodchikov (KPZ), ont proposé dans \cite{\KPZ} une approche nouvelle et féconde afin de comprendre le comportement critique de ces modèles. Parmi ceux-ci figurent, par exemple, les marches aléatoires, la percolation ainsi que le modèle d'Ising. L'idée fondamentale de leur approche consiste à étudier ces modèles selon une procédure en deux étapes : a/ Tout d'abord, au lieu de considérer le modèle sur un réseau régulier du plan (tel que $\Z^2$, par exemple), on le définit plutôt sur un « réseau planaire aléatoire » bien choisi. Ce passage correspond à l'étude du modèle sous sa forme de {\it gravité quantique}. Dans le cas de la percolation, le choix approprié de réseau aléatoire correspond aux cartes planaires. b/ Ensuite, il s'agit de revenir au cadre {\it euclidien} usuel. Cela s'effectue grâce à la célèbre {\bf formule KPZ}, qui établit une correspondance très précise entre les propriétés géométriques des modèles dans leur formulation en gravité quantique et leurs analogues dans le cas euclidien. La nature et l'origine d'une telle correspondance puissante sont restées longtemps assez mystérieuses. En réalité, la formule KPZ n'est toujours pas établie rigoureusement et demeure une correspondance conjecturale. L'objectif de cette présentation est d'expliquer en quoi les travaux récents de Duplantier et Sheffield permettent d'éclairer une partie du mystère entourant cette formule KPZ. Pour résumer leur contribution en une phrase, leurs travaux impliquent une interprétation magnifique de la correspondance KPZ à travers une uniformisation du réseau aléatoire, vu comme une surface de Riemann.
fr
We prove the computational weakness of a model of tile assembly that has so far resisted many attempts at formal analysis or positive constructions. Specifically, we prove that, in Winfree's abstract Tile Assembly Model, when restricted to use only noncooperative bindings, any long enough path that can grow in all terminal assemblies is pumpable, meaning that this path can be extended into an infinite, ultimately periodic path. This result can be seen as a geometric generalization of the pumping lemma of finite state automata, and closes the question of what can be computed deterministically in this model. Moreover, this question has motivated the development of a new method called visible glues. We believe that this method can also be used to tackle other long-standing problems in computational geometry, in relation for instance with self-avoiding paths. Tile assembly (including non-cooperative tile assembly) was originally introduced by Winfree and Rothemund in STOC 2000 to understand how to program shapes. The non-cooperative variant, also known as temperature 1 tile assembly, is the model where tiles are allowed to bind as soon as they match on one side, whereas in cooperative tile assembly, some tiles need to match on several sides in order to bind. In this work, we prove that only very simple shapes can indeed be programmed, whereas exactly one previously known result (SODA 2014) showed a restriction on the assemblies that general non-cooperative self-assembly can achieve, without any implication on its computational expressiveness. With non-square tiles (like polyominos, SODA 2015), other recent works have shown that the model quickly becomes computationally powerful.
Wir beweisen die berechnungstheoretische Schwäche eines Modells der Fliesenbildung, das bisher vielen Versuchen einer formalen Analyse oder positiver Konstruktionen widerstanden hat. Insbesondere zeigen wir, dass im abstrakten Fliesen-Assemblierungsmodell von Winfree, wenn es auf nicht-kooperative Bindungen beschränkt ist, jeder hinreichend lange Pfad, der in allen terminalen Assemblierungen wachsen kann, pumpbar ist, das heißt, dieser Pfad kann zu einem unendlichen, letztlich periodischen Pfad erweitert werden. Dieses Ergebnis kann als eine geometrische Verallgemeinerung des Pumping-Lemmas von endlichen Automaten angesehen werden und schließt die Frage, was in diesem Modell deterministisch berechnet werden kann. Darüber hinaus hat diese Frage die Entwicklung einer neuen Methode namens sichtbare Klebestellen motiviert. Wir glauben, dass diese Methode auch zur Lösung anderer langjähriger Probleme in der Berechnungsgeometrie eingesetzt werden kann, insbesondere im Zusammenhang mit selbstvermeidenden Pfaden. Die Fliesen-Assemblierung (einschließlich der nicht-kooperativen Fliesen-Assemblierung) wurde ursprünglich von Winfree und Rothemund auf der STOC 2000 eingeführt, um zu verstehen, wie man Formen programmieren kann. Die nicht-kooperative Variante, auch als Fliesen-Assemblierung bei Temperatur 1 bekannt, ist das Modell, bei dem Fliesen binden dürfen, sobald sie an einer Seite übereinstimmen, während im kooperativen Modell einige Fliesen an mehreren Seiten übereinstimmen müssen, um binden zu können. In dieser Arbeit beweisen wir, dass tatsächlich nur sehr einfache Formen programmiert werden können, während bisher genau ein bekanntes Ergebnis (SODA 2014) eine Beschränkung der durch allgemeine nicht-kooperative Selbstassemblierung erreichbaren Assemblierungen zeigte, ohne jedoch Auswirkungen auf deren berechnungstheoretische Ausdrucksstärke zu haben. Mit nicht-quadratischen Fliesen (wie Polyominos, SODA 2015) haben andere neuere Arbeiten gezeigt, dass das Modell sehr schnell berechnungstheoretisch leistungsfähig wird.
de
This paper considers the problem of scheduling autonomous vehicles at intersections. A new system is proposed that could serve as an additional choice to the recently introduced Autonomous Intersection Management (AIM) model. The proposed system is based on the production-line technique, where the environment of the intersection and the vehicles' positions, speeds, and turns are specified and determined in advance. The goal of the proposed system is to eliminate vehicle collisions and the waiting time inside the intersection. Three different patterns of vehicle flow toward the intersection have been considered for the evaluation of the model. The system requires less waiting time (compared to the other models) in the random case, where the flow is unpredictable. The KNN algorithm is used to predict right-turning vehicles. The experimental results show that there is no chance of collision inside the intersection; however, the system requires more free space in the traffic lane.
本文研究了交叉路口中自动驾驶车辆的调度问题。提出了一种新系统,可作为近期引入的自动驾驶交叉路口管理(AIM)模型的补充选择。该系统基于流水线技术,预先确定交叉路口环境、车辆位置、速度及转向信息。所提系统的目标是消除车辆碰撞以及车辆在交叉路口内的等待时间。为评估该模型,考虑了三种不同的车辆流向交叉路口的模式。在车流不可预测的随机情况下,该系统相较于其他模型所需等待时间更短。采用KNN算法预测右转车辆。实验结果表明,交叉路口内不存在任何碰撞可能,但该系统需要在行车道上保留更多空闲空间。
zh
New spectra have been obtained with the ESPaDOnS spectropolarimeter supplemented with unpolarised spectra from the ESO UVES, UVES-FLAMES, and HARPS spectrographs of the very peculiar large-field magnetic Ap star HD 318107, a member of the open cluster NGC 6405. The available data provide sufficient material with which to re-analyse the first-order model of the magnetic field geometry and to derive abundances of Si, Ti, Fe, Nd, Pr, Mg, Cr, Mn, O, and Ca. The magnetic field structure was modelled with a low-order colinear multipole expansion, using coefficients derived from the observed variations of the field strength with rotation phase. The abundances of several elements were determined using spectral synthesis. After experiments with a very simple model of uniform abundance on each of three rings of equal width in co-latitude and symmetric about the assumed magnetic axis, we decided to model the spectra assuming uniform abundances of each element over the stellar surface. The new magnetic field measurements allow us to refine the rotation period of HD 318107 to P = 9.7088 +/- 0.0007 days. Appropriate magnetic field model parameters were found that very coarsely describe the (apparently rather complex) field moment variations. Spectrum synthesis leads to the derivation of mean abundances for the elements Mg, Si, Ca, Ti, Cr, Fe, Nd, and Pr. All of these elements except for Mg and Ca are strongly overabundant compared to the solar abundance ratios. There is considerable evidence of non-uniformity, for example in the different values of B_z found using lines of different elements. The present data set, while limited, is nevertheless sufficient to provide a useful first-order assessment of both the magnetic and surface abundance properties of HD 318107, making it one of the very few magnetic Ap stars of well-known age for which both of these properties have been studied.
ESPaDOnS စပက်ထရိုပိုလာရီမီတာဖြင့် ရရှိသော စပက်ထရမ်များကို ESO UVES၊ UVES-FLAMES နှင့် HARPS စပက်ထရိုဂရပ်များမှ အပိုလာရိုင်းစပက်ထရမ်များဖြင့် ဖြည့်စွက်၍ HD 318107 ဟုခေါ်သော မှော်ဆန်းလှသည့် ကွဲပြားသော ကွင်းဝန်းကျယ်ပြန့်သည့် သံလိုက် Ap ကြယ်၏ စပက်ထရမ်များကို ရယူရာတွင် အသုံးပြုခဲ့သည်။ ဤကြယ်သည် ဖွင့်ထားသော ကလပ်စတာ NGC 6405 ၏ အဖွဲ့ဝင်တစ်ဦးဖြစ်သည်။ ရရှိနိုင်သော ဒေတာများသည် သံလိုက်စက်ကွင်း ဂျီဩမေတြီ၏ ပထမအဆင့်မော်ဒယ်ကို ပြန်လည်ဆန်းစစ်ရန်နှင့် Si၊ Ti၊ Fe၊ Nd၊ Pr၊ Mg၊ Cr၊ Mn၊ O နှင့် Ca ဒြပ်စင်များ၏ ပါဝင်မှုပမာဏကို တွက်ချက်ရန် လုံလောက်သော အချက်အလက်များကို ပေးစွမ်းပါသည်။ သံလိုက်စက်ကွင်း ဖွဲ့စည်းပုံကို သံလိုက်စက်ကွင်း၏ အားကို လှည့်ပတ်မှုအဆင့်အလိုက် ပြောင်းလဲမှုများမှ ရရှိသော ဂျီဩမေတြီဆိုင်ရာ မြောက်ဝင်ရိုးအလိုက် ကိန်းရှင်များကို အသုံးပြု၍ အဆင့်နိမ့် colinear multipole ချဲ့ထွင်မှုဖြင့် မော်ဒယ်လုပ်ခဲ့သည်။ ဒြပ်စင်အားလုံး၏ ပါဝင်မှုပမာဏများကို စပက်ထရမ် ပေါင်းစပ်ခြင်းကို အသုံးပြု၍ ဆုံးဖြတ်ခဲ့သည်။ မှန်းထားသော သံလိုက်ဝင်ရိုးကို အတိအကျ အတူတူ ကွာဝေးသော မြောက်ဝင်ရိုးအလိုက် အကျယ်တူ စက်ဝိုင်းပုံသုံးခုတွင် တစ်သမတ်တည်း ပါဝင်မှုပမာဏရှိသည့် အလွန်ရိုးရှင်းသော မော်ဒယ်ဖြင့် စမ်းသပ်မှုများပြုလုပ်ပြီးနောက် ကြယ်မျက်နှာပြင်တစ်ခုလုံးတွင် ဒြပ်စင်တစ်ခုချင်းစီအတွက် တစ်သမတ်တည်း ပါဝင်မှုပမာဏရှိသည်ဟု ယူဆ၍ စပက်ထရမ်များကို မော်ဒယ်လုပ်ရန် ဆုံးဖြတ်ခဲ့သည်။ သံလိုက်စက်ကွင်း တိုင်းတာမှုအသစ်များက HD 318107 ၏ လှည့်ပတ်ကာလကို P = 9.7088 +/- 0.0007 ရက်အဖြစ် ပိုမိုတိကျစွာ သတ်မှတ်ရန် ခွင့်ပြုပေးသည်။ သံလိုက်စက်ကွင်း၏ အလှည့်အပြောင်းများကို (ရှုပ်ထွေးနိုင်သည်ဟု ထင်ရသော်လည်း) အလွန်မျှင်းကားစွာ ဖော်ပြနိုင်မည့် သင့်တော်သော သံလိုက်စက်ကွင်း မော်ဒယ် စံနှုန်းများကို ရှာဖွေတွေ့ရှိခဲ့သည်။ စပက်ထရမ် ပေါင်းစပ်ခြင်းသည် Mg၊ Si၊ Ca၊ Ti၊ Cr၊ Fe၊ Nd နှင့် Pr ဒြပ်စင်များအတွက် ပျမ်းမျှပါဝင်မှုပမာဏများကို တွက်ချက်ရာတွင် ဦးဆောင်ပေးခဲ့သည်။ ဤဒြပ်စင်များအနက် Mg နှင့် Ca မှလွဲ၍ အခြားဒြပ်စင်အားလုံးသည် နေ၏ ပါဝင်မှုအချိုးနှုန်းများနှင့် နှိုင်းယှဉ်ပါက အလွန်များပြားစွာ ပါဝင်နေကြောင်း တွေ့ရှိရသည်။ ဥပမာအားဖြင့် ဒြပ်စင်များစွာ၏ မတူညီသော လိုင်းများကို အသုံးပြု၍ တိုင်းတာရရှိသော B_z တန်ဖိုးများတွင် တွေ့ရှိရသည့် မညီမျှမှုများကဲ့သို့ မညီမျှမှုရှိကြောင်း အထောက်အထားများစွာ ရှိပါသည်။ ကန့်သတ်ချက်ရှိသော်လည်း ယခုရရှိနေသော ဒေတာများသည် HD 318107 ၏ သံလိုက်စက်ကွင်းနှင့် မျက်နှာပြင်ပါဝင်မှု ဂုဏ်သတ္တိများကို အသုံးဝင်သော ပထမအဆင့် အကဲဖြတ်မှုကို ပေးစွမ်းနိုင်ရန် လုံလောက်ပါသည်။ ထို့ကြောင့် ဤဂုဏ်သတ္တိနှစ်ခုစလုံးကို လေ့လာခဲ့သည့် အသက်အရွယ်ကို ကောင်းစွာသိရှိထားသော သံလိုက် Ap ကြယ်အနည်းငယ်တွင် ဤကြယ်သည် တစ်ခုအပါအဝင်ဖြစ်လာပါသည်။
my
Thermodynamics relies on the possibility to describe systems composed of a large number of constituents in terms of few macroscopic variables. Its foundations are rooted into the paradigm of statistical mechanics, where thermal properties originate from averaging procedures which smoothen out local details. While undoubtedly successful, elegant and formally correct, this approach carries over an operational problem: what is the precision at which such variables are inferred, when technical/practical limitations restrict our capabilities to local probing? Here we introduce the local quantum thermal susceptibility, a quantifier for the best achievable accuracy for temperature estimation via local measurements. Our method relies on basic concepts of quantum estimation theory, providing an operative strategy to address the local thermal response of arbitrary quantum systems at equilibrium. At low temperatures it highlights the local distinguishability of the ground state from the excited sub-manifolds, thus providing a method to locate quantum phase transitions.
Termodinamik bergantung kepada kemungkinan untuk menggambarkan sistem yang terdiri daripada sebilangan besar komponen dalam bentuk beberapa pemboleh ubah makroskopik. Asasnya berpunca daripada paradigma mekanik statistik, di mana sifat terma berasal daripada prosedur pengpurataan yang melicinkan butiran setempat. Walaupun jelas berjaya, elegan dan secara formal betul, pendekatan ini membawa masalah operasi: apakah ketepatan yang boleh dicapai dalam inferens pemboleh ubah sedemikian, apabila had teknikal/praktikal menghadkan keupayaan kita untuk pengukuran setempat? Di sini kami memperkenalkan kepekaan terma kuantum setempat, suatu pengukur bagi ketepatan terbaik yang boleh dicapai untuk anggaran suhu melalui pengukuran setempat. Kaedah kami bergantung kepada konsep asas teori anggaran kuantum, menyediakan strategi operasi untuk menangani sambutan terma setempat sistem kuantum yang sewenang-wenang dalam keadaan keseimbangan. Pada suhu rendah, ia menonjolkan kebolehbezajadian setempat keadaan asas daripada sub-himpunan keadaan teruja, seterusnya memberikan kaedah untuk menentukan kedudukan peralihan fasa kuantum.
ms
Human computation is a computing approach that draws upon human cognitive abilities to solve computational tasks for which there are so far no satisfactory fully automated solutions even when using the most advanced computing technologies available. Human computation for citizen science projects consists in designing systems that allow large crowds of volunteers to contribute to scientific research by executing human computation tasks. Examples of successful projects are Galaxy Zoo and FoldIt. A key feature of this kind of project is its capacity to engage volunteers. An important requirement for the proposal and evaluation of new engagement strategies is having a clear understanding of the typical engagement of the volunteers; however, even though several projects of this kind have already been completed, little is known about this issue. In this paper, we investigate the engagement pattern of the volunteers in their interactions in human computation for citizen science projects, how they differ among themselves in terms of engagement, and how those volunteer engagement features should be taken into account for establishing the engagement encouragement strategies that should be brought into play in a given project. To this end, we define four quantitative engagement metrics to measure different aspects of volunteer engagement, and use data mining algorithms to identify the different volunteer profiles in terms of the engagement metrics. Our study is based on data collected from two projects: Galaxy Zoo and The Milky Way Project. The results show that the volunteers in such projects can be grouped into five distinct engagement profiles that we label as follows: hardworking, spasmodic, persistent, lasting, and moderate. The analysis of these profiles provides a deeper understanding of the nature of volunteers' engagement in human computation for citizen science projects.
การประมวลผลด้วยมนุษย์เป็นแนวทางการประมวลผลที่อาศัยความสามารถทางปัญญาของมนุษย์ในการแก้ปัญหาที่ยังไม่มีวิธีการอัตโนมัติที่พึงพอใจ แม้จะใช้เทคโนโลยีการประมวลผลขั้นสูงที่มีอยู่ในปัจจุบันก็ตาม สำหรับโครงการวิทยาศาสตร์พลเมือง การประมวลผลด้วยมนุษย์ประกอบด้วยการออกแบบระบบเพื่อให้กลุ่มคนจำนวนมากสามารถมีส่วนร่วมในการวิจัยทางวิทยาศาสตร์ โดยการดำเนินการตามภารกิจที่ต้องใช้การประมวลผลด้วยมนุษย์ ตัวอย่างโครงการที่ประสบความสำเร็จ ได้แก่ Galaxy Zoo และ FoldIt ลักษณะสำคัญประการหนึ่งของโครงการประเภทนี้คือ ความสามารถในการดึงดูดผู้อาสาสมัครให้มีส่วนร่วม ซึ่งการพัฒนาและประเมินกลยุทธ์การมีส่วนร่วมใหม่ๆ จำเป็นต้องเข้าใจอย่างชัดเจนถึงลักษณะการมีส่วนร่วมโดยทั่วไปของผู้อาสาสมัคร อย่างไรก็ตาม แม้จะมีการดำเนินโครงการลักษณะนี้มาแล้วหลายโครงการ แต่ยังมีความรู้ที่จำกัดเกี่ยวกับประเด็นนี้ ในบทความนี้ เราศึกษารูปแบบการมีส่วนร่วมของผู้อาสาสมัครในการปฏิสัมพันธ์ของพวกเขาในบริบทของการประมวลผลด้วยมนุษย์สำหรับโครงการวิทยาศาสตร์พลเมือง ความแตกต่างระหว่างผู้อาสาสมัครในด้านการมีส่วนร่วม และการพิจารณาคุณลักษณะการมีส่วนร่วมของผู้อาสาสมัครเหล่านี้อย่างไรเพื่อกำหนดกลยุทธ์การส่งเสริมการมีส่วนร่วมที่ควรนำมาใช้ในโครงการใดโครงการหนึ่ง เพื่อจุดประสงค์ดังกล่าว เราได้กำหนดตัวชี้วัดการมีส่วนร่วมเชิงปริมาณสี่ตัว เพื่อวัดด้านต่างๆ ของการมีส่วนร่วมของผู้อาสาสมัคร และใช้อัลกอริทึมการขุดข้อมูลเพื่อระบุโปรไฟล์ที่แตกต่างกันของผู้อาสาสมัครตามตัวชี้วัดการมีส่วนร่วม การศึกษาของเราอิงจากข้อมูลที่รวบรวมจากสองโครงการ ได้แก่ Galaxy Zoo และ The Milky Way Project ผลลัพธ์แสดงให้เห็นว่า ผู้อาสาสมัครในโครงการดังกล่าวสามารถจัดกลุ่มออกเป็นโปรไฟล์การมีส่วนร่วมที่แตกต่างกันได้ห้ากลุ่ม ซึ่งเราตั้งชื่อว่า: ขยัน, ช่วงๆ, มุ่งมั่น, ยั่งยืน, และ ปานกลาง การวิเคราะห์โปรไฟล์เหล่านี้ช่วยให้เข้าใจลักษณะการมีส่วนร่วมของผู้อาสาสมัครในโครงการวิทยาศาสตร์พลเมืองที่ใช้การประมวลผลด้วยมนุษย์ได้ลึกซึ้งยิ่งขึ้น
th
Autonomous cars are subjected to several different kinds of inputs (other cars, road structure, etc.) and, therefore, testing the car under all possible conditions is impossible. To tackle this problem, scenario-based testing for automated driving defines categories of different scenarios that should be covered. Although this kind of coverage is a necessary condition, it still does not guarantee that every possible behaviour of the autonomous car is tested. In this paper, we consider the path planner of an autonomous car that decides, at each timestep, the short-term path to follow in the next few seconds; this decision is made using a weighted cost function that considers different aspects (safety, comfort, etc.). In order to assess whether all the possible decisions that can be taken by the path planner are covered by a given test suite T, we propose a mutation-based approach that mutates the weights of the cost function and then checks whether at least one scenario of T kills the mutant. Preliminary experiments on a manually designed test suite show that some weights are easier to cover, as they consider aspects that are more likely to occur in a scenario, and that more complicated scenarios (those that generate more complex paths) are the ones that allow more weights to be covered.
Carros autônomos estão sujeitos a diversos tipos de entradas (outros carros, estrutura da estrada, etc.) e, portanto, testar o carro sob todas as condições possíveis é impossível. Para enfrentar esse problema, os testes baseados em cenários para condução automatizada definem categorias de diferentes cenários que devem ser cobertos. Embora esse tipo de cobertura seja uma condição necessária, ainda assim não garante que qualquer comportamento possível do carro autônomo seja testado. Neste artigo, consideramos o planejador de trajetória de um carro autônomo que decide, a cada instante, o caminho de curto prazo a seguir nos próximos segundos; tal decisão é feita utilizando uma função de custo ponderada que considera diferentes aspectos (segurança, conforto, etc.). A fim de avaliar se todas as decisões possíveis que podem ser tomadas pelo planejador de trajetória são cobertas por uma determinada suíte de testes T, propomos uma abordagem baseada em mutação que altera os pesos da função de custo e depois verifica se pelo menos um cenário de T elimina o mutante. Experimentos preliminares em uma suíte de testes projetada manualmente mostram que alguns pesos são mais fáceis de cobrir, pois consideram aspectos que ocorrem com maior probabilidade em um cenário, e que cenários mais complexos (que geram trajetórias mais complexas) são aqueles que permitem cobrir mais pesos.
pt
When a three-dimensional (3D) ferromagnetic topological insulator thin film is magnetized out-of-plane, conduction ideally occurs through dissipationless, one-dimensional (1D) chiral states that are characterized by a quantized, zero-field Hall conductance. The recent realization of this phenomenon - the quantum anomalous Hall effect - provides a conceptually new platform for studies of edge-state transport, distinct from the more extensively studied integer and fractional quantum Hall effects that arise from Landau level formation. An important question arises in this context: how do these 1D edge states evolve as the magnetization is changed from out-of-plane to in-plane? We examine this question by studying the field-tilt driven crossover from predominantly edge state transport to diffusive transport in Cr-doped (Bi,Sb)2Te3 thin films, as the system transitions from a quantum anomalous Hall insulator to a gapless, ferromagnetic topological insulator. The crossover manifests itself in a giant, electrically tunable anisotropic magnetoresistance that we explain using the Landauer-Buttiker formalism. Our methodology provides a powerful means of quantifying edge state contributions to transport in temperature and chemical potential regimes far from perfect quantization.
Quando un sottile film di isolante topologico ferromagnetico tridimensionale (3D) viene magnetizzato fuori dal piano, la conduzione avviene idealmente attraverso stati chirali unidimensionali (1D) senza dissipazione, caratterizzati da una conduttanza di Hall quantizzata e a campo nullo. La recente realizzazione di questo fenomeno, l'effetto Hall anomalo quantistico, fornisce una piattaforma concettualmente nuova per lo studio del trasporto di stati di bordo, distinta dagli effetti Hall quantistici intero e frazionario, più ampiamente studiati, che derivano dalla formazione di livelli di Landau. In questo contesto si pone una domanda importante: come evolvono questi stati di bordo 1D quando la magnetizzazione passa da una direzione fuori dal piano a una direzione nel piano? Analizziamo questa domanda studiando la transizione indotta dall'inclinazione del campo magnetico da un trasporto dominato dagli stati di bordo a un trasporto diffusivo in film sottili di (Bi,Sb)2Te3 drogati con Cr, mentre il sistema passa da un isolante topologico con effetto Hall anomalo quantistico a un isolante topologico ferromagnetico senza gap. Tale transizione si manifesta in una enorme magnetoresistenza anisotropa, elettricamente accordabile, che spieghiamo utilizzando il formalismo di Landauer-Büttiker. La nostra metodologia fornisce un potente mezzo per quantificare il contributo degli stati di bordo al trasporto in regimi di temperatura e potenziale chimico lontani dalla perfetta quantizzazione.
it
Successful applications of reinforcement learning in real-world problems often require dealing with partially observable states. It is in general very challenging to construct and infer hidden states, as they often depend on the agent's entire interaction history and may require substantial domain knowledge. In this work, we investigate a deep-learning approach to learning the representation of states in partially observable tasks, with minimal prior knowledge of the domain. In particular, we propose a new family of hybrid models that combines the strengths of both supervised learning (SL) and reinforcement learning (RL), trained in a joint fashion: the SL component can be a recurrent neural network (RNN) or its long short-term memory (LSTM) version, which is equipped with the desired property of being able to capture long-term dependency on history, thus providing an effective way of learning the representation of hidden states. The RL component is a deep Q-network (DQN) that learns to optimize the control for maximizing long-term rewards. Extensive experiments in a direct mailing campaign problem demonstrate the effectiveness and advantages of the proposed approach, which performs the best among a set of previous state-of-the-art methods.
Успешное применение обучения с подкреплением для решения реальных задач зачастую требует работы с частично наблюдаемыми состояниями. В общем случае построение и вывод скрытых состояний представляет собой весьма сложную задачу, поскольку они часто зависят от всей истории взаимодействия агента и могут требовать значительных знаний в предметной области. В данной работе мы исследуем подход, основанный на глубоком обучении, для изучения представления состояний в задачах с частичной наблюдаемостью при минимальных априорных знаниях о предметной области. В частности, мы предлагаем новое семейство гибридных моделей, объединяющих преимущества обучения с учителем (SL) и обучения с подкреплением (RL), при этом обучение осуществляется совместно: компонент SL может представлять собой рекуррентную нейронную сеть (RNN) или её модификацию с долгой краткосрочной памятью (LSTM), которая обладает желаемым свойством эффективного учёта долгосрочных зависимостей от истории, обеспечивая тем самым эффективный способ изучения представления скрытых состояний. Компонент RL — это глубокая Q-сеть (DQN), которая обучается оптимизировать управление с целью максимизации долгосрочных вознаграждений. Обширные эксперименты в задаче кампании прямой почтовой рассылки демонстрируют эффективность и преимущества предложенного подхода, который показывает наилучшие результаты по сравнению с набором ранее существовавших передовых методов.
ru
The purpose of these notes is to discuss the relation between the additivity questions regarding two quantities: the (Holevo) capacity of a quantum channel T and the entanglement of formation of a given bipartite state. In particular, using the Stinespring dilation theorem, we give a formula for the channel capacity involving entanglement of formation. This can be used to show that additivity of the latter for some states can be inferred from the additivity of capacity for certain channels. We demonstrate this connection for a family of group-covariant channels, allowing us to calculate the entanglement cost for many states, including some for which a strictly smaller upper bound on the distillable entanglement is known. Group symmetry is used for more sophisticated analysis, giving formulas valid for a class of channels. This is presented in a general framework, extending recent findings of Vidal, Dur and Cirac (e-print quant-ph/0112131). We speculate on a general relation of superadditivity of the entanglement of formation, which would imply both the general additivity of this function under tensor products and that of the Holevo capacity (with or without linear cost constraints).
ဤမှတ်သားချက်များ၏ ရည်ရွယ်ချက်မှာ ကွမ်တမ်ချိတ်ဆက်မှု T ၏ (ဟိုလေဗို) စွမ်းရည်နှင့် ပေးထားသော ဒွိတိယအခြေအနေတစ်ခု၏ ရှုပ်ထွေးမှုဖွဲ့စည်းမှုတို့နှင့် သက်ဆိုင်သည့် ပေါင်းလိုက်ခြင်း မေးခွန်းများအကြား ဆက်နွှယ်မှုကို ဆွေးနွေးရန်ဖြစ်သည်။ အထူးသဖြင့် စတိုင်းစပရင့် ချဲ့ထွင်မှု သီအိုရမ်ကို အသုံးပြု၍ ရှုပ်ထွေးမှုဖွဲ့စည်းမှုကို ပါဝင်သည့် ချိတ်ဆက်မှုစွမ်းရည်နှင့် သက်ဆိုင်သော ဖော်မြူလာတစ်ခုကို ပေးပို့ပါသည်။ အချို့သော အခြေအနေများအတွက် နောက်ဆုံးပေါ်ပေါင်းလိုက်မှုကို ချိတ်ဆက်မှုအတွက် စွမ်းရည်ပေါင်းလိုက်မှုမှ ယူဆနိုင်ကြောင်း ပြသရန် အသုံးပြုနိုင်ပါသည်။ ကျွန်ုပ်တို့သည် အုပ်စု-အကြီးတန်း ချိတ်ဆက်မှုများ၏ မိသားစုအတွက် ဤချိတ်ဆက်မှုကို ပြသပြီး ရယူနိုင်သော ရှုပ်ထွေးမှုအတွက် ပိုမိုသေးငယ်သော အထက်စည်းမျဉ်းကို သိရှိထားသည့် အခြေအနေအချို့အပါအဝင် အခြေအနေအများအပြားအတွက် ရှုပ်ထွေးမှုကုန်ကျစရိတ်ကို တွက်ချက်နိုင်ပါသည်။ ပိုမိုရှုပ်ထွေးသော ဆန်းစစ်မှုအတွက် အုပ်စု အမှန်အကန်ကို အသုံးပြုပြီး ချိတ်ဆက်မှုတစ်စုအတွက် မှန်ကန်သော ဖော်မြူလာများကို ပေးပို့ပါသည်။ ဤအရာကို Vidal၊ Dur နှင့် Cirac (e-print quant-ph/0112131) တို့၏ မကြာသေးမီက ရှာဖွေတွေ့ရှိချက်များကို ချဲ့ထွင်သည့် ယေဘုယျကျသော အခြေအနေတစ်ခုတွင် တင်ပြထားပါသည်။ ဤလုပ်ဆောင်ချက်၏ ယေဘုယျ စုစုပေါင်းပေါင်းခြင်းဆိုင်ရာ ဆက်နွှယ်မှုကို မှန်းဆပါသည်။ ဤဆက်နွှယ်မှုသည် ဤလုပ်ဆောင်ချက်၏ ယေဘုယျပေါင်းလိုက်မှုကို အက္ခရာတို့၏ မျှော်လင့်ချက်အောက်တွင် (သို့မဟုတ်) မျဉ်းဖြောင့်ကုန်ကျစရိတ် ကန့်သတ်ချက်များဖြင့် ဟိုလေဗို စွမ်းရည်ကို အကြံပြုပါလိမ့်မည်။
my
Super-resolution fluorescence microscopy, with a resolution beyond the diffraction limit of light, has become an indispensable tool to directly visualize biological structures in living cells at a nanometer-scale resolution. Despite advances in high-density super-resolution fluorescent techniques, existing methods still have bottlenecks, including extremely long execution time, artificial thinning and thickening of structures, and lack of ability to capture latent structures. Here we propose a novel deep learning guided Bayesian inference approach, DLBI, for the time-series analysis of high-density fluorescent images. Our method combines the strengths of deep learning and statistical inference, where deep learning captures the underlying distribution of the fluorophores that are consistent with the observed time-series fluorescent images by exploring local features and correlation along the time axis, and statistical inference further refines the ultrastructure extracted by deep learning and endows the final image with physical meaning. Comprehensive experimental results on both real and simulated datasets demonstrate that our method provides more accurate and realistic local patch and large-field reconstruction than the state-of-the-art method, the 3B analysis, while being more than two orders of magnitude faster. The main program is available at https://github.com/lykaust15/DLBI
تُعدّ مجهرية الفلورة فائقة الدقة، التي تتجاوز حد الحيود للضوء، أداة لا غنى عنها لتصور الهياكل البيولوجية مباشرةً داخل الخلايا الحية بدقة تصل إلى المقياس النانومتري. وعلى الرغم من التقدّم في تقنيات الفلورة فائقة الدقة ذات الكثافة العالية، لا تزال الطرق الحالية تعاني من عوائق، تشمل وقت تنفيذ طويل للغاية، وتحويرًا اصطناعيًا لانكماش أو تضخّم الهياكل، وعدم القدرة على اكتشاف الهياكل الكامنة. في هذا العمل، نقترح نهجًا جديدًا مبنيًا على الاستدلال البيزي المُوجَّه بالتعلّم العميق، يُعرف بـ DLBI، لتحليل السلاسل الزمنية للصور الفلورية عالية الكثافة. يجمع منهجنا بين قوة التعلّم العميق والاستدلال الإحصائي، حيث يستفيد التعلّم العميق من الميزات المحلية والارتباط على طول محور الزمن لالتقاط التوزيع الأساسي للجسيمات الفلورية المتوافقة مع صور السلسلة الزمنية المرصودة، في حين يعمل الاستدلال الإحصائي على تحسين البنية الفوق جزيئية المستخلصة بواسطة التعلّم العميق ويوفر المعنى الفيزيائي للصورة النهائية. تُظهر النتائج التجريبية الشاملة على مجموعات بيانات حقيقية ومُحاكاة أن منهجنا يوفّر إعادة بناء أكثر دقة وواقعية للنطاقات المحلية وللمساحات الواسعة مقارنةً بالطريقة الحديثة الحالية المعروفة بتحليل 3B، كما أن منهجنا أسرع بأكثر من مرّتين من حيث الرتبة المقدارية. يتوافر البرنامج الرئيسي على الرابط التالي: https://github.com/lykaust15/DLBI
ar
Radio variability on timescales from a few hours to several days in extragalactic flat-spectrum radio sources is generally classified as intra-day variability (IDV). The origin of this short-term variability is still controversial: both extrinsic and intrinsic mechanisms must be considered, and both may contribute to the observed variations. The measured linear and circular polarization of IDV sources constrains the low-energy end of the electron population. Any population of cold electrons within sources at or above the equipartition temperature of 10^11 K depolarizes the emission and can be ruled out. Intrinsic shock models are shown either to be inconsistent with the large fraction of sources displaying IDV or not to relax the light-travel-time argument for intrinsic variations. From structure function analysis, we further conclude that interstellar scintillation also leads to tight size estimates unless a very local cloud in the ISM is responsible for IDV.
การแปรผันของรังสีวิทยุในช่วงเวลาตั้งแต่ไม่กี่ชั่วโมงถึงหลายวันในแหล่งกำเนิดรังสีวิทยุที่มีสเปกตรัมเรียบนอกกาแล็กซี มักถูกจัดว่าเป็นการแปรผันภายในวัน (intra-day variability: IDV) ต้นกำเนิดของการแปรผันระยะสั้นนี้ยังคงเป็นที่ถกเถียงกัน และจำเป็นต้องพิจารณาทั้งกลไกภายนอกและกลไกภายใน ซึ่งทั้งสองอาจมีส่วนทำให้เกิดการแปรผันที่สังเกตเห็นได้ การวัดโพลาไรเซชันเชิงเส้นและโพลาไรเซชันแบบวงกลมของแหล่งที่มาที่มีการแปรผันภายในวัน ช่วยจำกัดช่วงพลังงานต่ำของประชากรอิเล็กตรอน ประชากรของอิเล็กตรอนที่เย็นในแหล่งกำเนิดที่มีอุณหภูมิเท่ากับหรือสูงกว่าอุณหภูมิสมดุลที่ 10^11 เคลวิน จะทำให้การแผ่รังสีเสียโพลาไรเซชัน และสามารถตัดความเป็นไปได้นี้ออกไปได้ แบบจำลองการชนกันภายใน (intrinsic shock models) แสดงให้เห็นว่า ไม่ว่าจะขัดแย้งกับสัดส่วนที่สูงมากของแหล่งที่แสดงการแปรผันภายในวัน หรือไม่สามารถลดข้อโต้แย้งจากเวลาการเดินทางของแสงสำหรับการแปรผันภายในได้ จากการวิเคราะห์ฟังก์ชันโครงสร้าง (structure function analysis) เราสรุปเพิ่มเติมว่า การกระเจิงระหว่างดาว (interstellar scintillation) ก็ยังนำไปสู่การประมาณขนาดที่ค่อนข้างแน่นชัด เว้นแต่ว่ากลุ่มก๊าซที่อยู่ใกล้มากในสื่อกึ่งระหว่างดวงดาว (ISM) จะเป็นสาเหตุของการแปรผันภายในวัน
th
We present a model of worldwide crisis contagion based on the Google matrix analysis of the world trade network obtained from the UN Comtrade database. The fraction of bankrupted countries exhibits an \textit{on-off} phase transition governed by a bankruptcy threshold $\kappa$ related to the trade balance of the countries. For $\kappa>\kappa_c$, the contagion is circumscribed to less than 10\% of the countries, whereas, for $\kappa<\kappa_c$, the crisis is global, with about 90\% of the countries going bankrupt. We measure the total cost of the crisis during the contagion process. In addition to providing contagion scenarios, our model allows us to probe the structural trading dependencies between countries. For different networks extracted from the world trade exchanges of the last two decades, the global crisis comes from the Western world. In particular, the source of the global crisis is systematically the Old Continent and the Americas (mainly the US and Mexico). Besides the economy of Australia, those of Asian countries, such as China, India, Indonesia, Malaysia and Thailand, are the last to fall during the contagion. Also, the four BRIC countries are among the most robust to the world trade crisis.
Chúng tôi trình bày một mô hình lan truyền khủng hoảng toàn cầu dựa trên phân tích ma trận Google của mạng thương mại thế giới, được lấy từ cơ sở dữ liệu UN Comtrade. Tỷ lệ các quốc gia phá sản thể hiện sự chuyển pha \textit{bật-tắt} được điều khiển bởi ngưỡng phá sản $\kappa$ liên quan đến cán cân thương mại của các quốc gia. Khi $\kappa>\kappa_c$, sự lây lan bị giới hạn dưới 10\% các quốc gia, trong khi đó khi $\kappa<\kappa_c$, khủng hoảng trở nên toàn cầu với khoảng 90\% các quốc gia rơi vào tình trạng phá sản. Chúng tôi đo lường tổng chi phí của khủng hoảng trong suốt quá trình lây lan. Ngoài việc cung cấp các kịch bản lan truyền, mô hình của chúng tôi còn cho phép khảo sát các phụ thuộc thương mại cấu trúc giữa các quốc gia. Đối với các mạng khác nhau được trích xuất từ các giao dịch thương mại thế giới trong hai thập kỷ qua, khủng hoảng toàn cầu bắt nguồn từ thế giới phương Tây. Cụ thể, nguồn gốc của khủng hoảng toàn cầu luôn là Châu Âu cũ và Châu Mỹ (chủ yếu là Hoa Kỳ và Mexico). Bên cạnh nền kinh tế Úc, các nền kinh tế các nước châu Á như Trung Quốc, Ấn Độ, Indonesia, Malaysia và Thái Lan là những nền kinh tế cuối cùng sụp đổ trong quá trình lan truyền. Ngoài ra, bốn nước BRIC nằm trong số các quốc gia vững chắc nhất trước cuộc khủng hoảng thương mại toàn cầu.
vi
Governments and cities around the world are currently facing rapid growth in the use of electric vehicles and, with it, a growing need for charging infrastructure. For these cities, the challenge remains how to further roll out charging infrastructure in the most efficient way, in terms of both cost and use. Forecasting models are not able to predict longer-term developments, and more complex simulation models therefore offer opportunities to simulate various scenarios. Agent-based simulation models provide insight into the effects of incentives and roll-out strategies before they are implemented in practice and thus allow for scenario testing. This paper describes the build-up of an agent-based model that enables policy makers to anticipate charging infrastructure development. The model is able to simulate charging transactions of individual users and is both calibrated and validated using a dataset of charging transactions from the public charging infrastructure of the four largest cities in the Netherlands.
Правительства и города по всему миру в настоящее время сталкиваются с быстрым ростом использования электромобилей и, как следствие, с необходимостью развивать инфраструктуру зарядных станций. Для этих городов остаётся сложной задача дальнейшего развертывания инфраструктуры зарядки наиболее эффективным способом с точки зрения как затрат, так и использования. Модели прогнозирования не способны предсказывать долгосрочные тенденции, и поэтому более сложные модели имитационного моделирования открывают возможности для моделирования различных сценариев. Модели агентного имитационного моделирования позволяют оценить последствия стимулов и стратегий развертывания до их практической реализации, обеспечивая возможность тестирования сценариев. В данной статье описывается построение агентной модели, которая позволяет лицам, ответственным за принятие решений, прогнозировать развитие инфраструктуры зарядки. Модель способна имитировать процессы зарядки отдельных пользователей и была откалибрована, а также проверена с использованием набора данных о транзакциях зарядки из публичной инфраструктуры зарядных станций четырёх крупнейших городов Нидерландов.
ru
Special-purpose constraint propagation algorithms frequently make implicit use of short supports -- by examining a subset of the variables, they can infer support (a justification that a variable-value pair may still form part of an assignment that satisfies the constraint) for all other variables and values and save substantial work -- but short supports have not been studied in their own right. The two main contributions of this paper are the identification of short supports as important for constraint propagation, and the introduction of HaggisGAC, an efficient and effective general purpose propagation algorithm for exploiting short supports. Given the complexity of HaggisGAC, we present it as an optimised version of a simpler algorithm ShortGAC. Although experiments demonstrate the efficiency of ShortGAC compared with other general-purpose propagation algorithms where a compact set of short supports is available, we show theoretically and experimentally that HaggisGAC is even better. We also find that HaggisGAC performs better than GAC-Schema on full-length supports. We also introduce a variant algorithm HaggisGAC-Stable, which is adapted to avoid work on backtracking and in some cases can be faster and have significant reductions in memory use. All the proposed algorithms are excellent for propagating disjunctions of constraints. In all experiments with disjunctions we found our algorithms to be faster than Constructive Or and GAC-Schema by at least an order of magnitude, and up to three orders of magnitude.
تستخدم خوارزميات الانتشار المقيدة الخاصة بالغرض عادةً دعماً قصيراً ضمنياً – من خلال فحص مجموعة فرعية من المتغيرات، يمكنها الاستدلال بالدعم (مما يبرر أن زوجاً من المتغير والقيمة قد يشكل جزءاً من تعيين يحقق القيد) لجميع المتغيرات والقيم الأخرى، وبالتالي توفر جهداً كبيراً – ولكن لم يُدرس الدعم القصير بذاته بشكل مستقل. يكمن المساهمتان الرئيسيتان في هذه الورقة في تحديد أهمية الدعم القصير لانتشار القيود، وفي تقديم خوارزمية HaggisGAC، وهي خوارزمية انتشار عامة الغرض فعالة وفعالة في استغلال الدعم القصير. نظراً لتعقيد HaggisGAC، نقدمها على هيئة نسخة مُحسّنة من خوارزمية أبسط تُدعى ShortGAC. وعلى الرغم من أن التجارب تُظهر كفاءة ShortGAC مقارنةً بخوارزميات الانتشار العامة الأخرى عندما تكون مجموعة مدمجة من الدعوم القصيرة متاحة، فإننا نُثبت نظرياً وتجريبياً أن HaggisGAC أفضل. كما وجدنا أن HaggisGAC تؤدي بشكل أفضل من GAC-Schema على الدعوم الكاملة الطول. نقدم أيضاً خوارزمية متغيرة تُدعى HaggisGAC-Stable، تم تعديلها لتجنب العمل أثناء الرجوع للخلف، ويمكن في بعض الحالات أن تكون أسرع وتُحقق تخفيضات كبيرة في استخدام الذاكرة. جميع الخوارزميات المقترحة ممتازة في نشر عطفات القيود. في جميع التجارب التي أجريت على العطفات، وجدنا أن خوارزمياتنا أسرع من Constructive Or وGAC-Schema بفارق لا يقل عن رتبة واحدة، ويصل إلى ثلاث رتب.
ar
We have detected, for the first time, Cepheid variables in the Sculptor Group spiral galaxy NGC 7793. From wide-field images obtained in the optical V and I bands on 56 nights in 2003-2005, we have discovered 17 long-period (24-62 days) Cepheids whose periods and mean magnitudes define tight period-luminosity relations. We use the (V-I) Wesenheit index to determine a reddening-free true distance modulus to NGC 7793 of 27.68 +- 0.05 mag (internal error) +- 0.08 mag (systematic error). The comparison of the reddened distance moduli in V and I with the one derived from the Wesenheit magnitude indicates that the Cepheids in NGC 7793 are affected by an average total reddening of E(B-V)=0.08 mag, 0.06 of which is produced inside the host galaxy. As in the earlier Cepheid studies of the Araucaria Project, the reported distance is tied to an assumed LMC distance modulus of 18.50. The quoted systematic uncertainty takes into account effects like blending and possible inhomogeneous filling of the Cepheid instability strip on the derived distance. The reported distance value does not depend on the (unknown) metallicity of the Cepheids according to recent theoretical and empirical results. Our Cepheid distance is shorter, but within the errors consistent with the distance to NGC 7793 determined earlier with the TRGB and Tully-Fisher methods. The NGC 7793 distance of 3.4 Mpc is almost identical to the one our project had found from Cepheid variables for NGC 247, another spiral member of the Sculptor Group located close to NGC 7793 on the sky. Two other conspicuous spiral galaxies in the Sculptor Group, NGC 55 and NGC 300, are much nearer (1.9 Mpc), confirming the picture of a very elongated structure of the Sculptor Group in the line of sight put forward by Jerjen et al. and others.
Lần đầu tiên, chúng tôi đã phát hiện các biến quang Cepheid trong thiên hà xoắn ốc NGC 7793 thuộc nhóm thiên hà Sculptor. Từ những hình ảnh có trường rộng thu được ở các dải quang học V và I trong 56 đêm từ năm 2003 đến 2005, chúng tôi đã phát hiện 17 biến quang Cepheid chu kỳ dài (24-62 ngày), với các chu kỳ và độ sáng trung bình xác định rõ ràng các quan hệ chu kỳ-sáng rất chặt chẽ. Chúng tôi sử dụng chỉ số Wesenheit (V-I) để xác định môđun khoảng cách thực không bị ảnh hưởng bởi sự đỏ hóa đối với NGC 7793 là 27,68 ± 0,05 mag (sai số nội tại) ± 0,08 mag (sai số hệ thống). Việc so sánh các môđun khoảng cách bị đỏ hóa ở dải V và I với môđun thu được từ độ sáng Wesenheit cho thấy các biến quang Cepheid trong NGC 7793 chịu ảnh hưởng của mức độ đỏ hóa tổng cộng trung bình là E(B-V) = 0,08 mag, trong đó 0,06 mag xảy ra bên trong thiên hà chủ. Như trong các nghiên cứu Cepheid trước đây của Dự án Araucaria, khoảng cách báo cáo được liên kết với giá trị môđun khoảng cách của Tinh vân Đại Magellan (LMC) là 18,50. Sai số hệ thống được nêu đã tính đến các hiệu ứng như hiện tượng trộn lẫn ánh sáng (blending) và khả năng phân bố không đồng đều trong dải bất ổn định Cepheid ảnh hưởng đến khoảng cách suy ra. Giá trị khoảng cách báo cáo không phụ thuộc vào kim loại độ (chưa biết) của các biến quang Cepheid, theo các kết quả lý thuyết và thực nghiệm gần đây. Khoảng cách Cepheid của chúng tôi ngắn hơn nhưng vẫn nằm trong phạm vi sai số, phù hợp với các khoảng cách đến NGC 7793 đã được xác định trước đó bằng phương pháp TRGB và Tully-Fisher. Khoảng cách 3,4 Mpc đến NGC 7793 gần như giống hệt với khoảng cách mà dự án của chúng tôi đã tìm thấy từ các biến quang Cepheid đối với NGC 247, một thiên hà xoắn ốc khác thuộc nhóm Sculptor, nằm gần NGC 7793 trên bầu trời. 
Hai thiên hà xoắn ốc nổi bật khác trong nhóm Sculptor, NGC 55 và NGC 300, nằm gần hơn nhiều (1,9 Mpc), khẳng định hình ảnh một cấu trúc rất dài và kéo giãn theo hướng nhìn của nhóm thiên hà Sculptor, như Jerjen và các cộng sự cùng những nhà nghiên cứu khác đã từng đề xuất.
vi
Caching is an effective mechanism for reducing bandwidth usage and alleviating server load. However, the use of caching entails a compromise between content freshness and refresh cost. An excessive refresh allows a high degree of content freshness at a greater cost in system resources. Conversely, a deficient refresh inhibits content freshness but saves the cost of resource usage. To address the freshness-cost problem, we formulate the refresh scheduling problem with a generic cost model and use this cost model to determine an optimal refresh frequency that gives the best tradeoff between refresh cost and content freshness. We prove the existence and uniqueness of an optimal refresh frequency under the assumptions that the arrival of content updates is Poisson and that the age-related cost monotonically increases with decreasing freshness. In addition, we provide an analytic comparison of system performance under fixed refresh scheduling and random refresh scheduling, showing that, with the same average refresh frequency, the two refresh schedulings are mathematically equivalent in terms of the long-run average cost.
ক্যাশিং ব্যান্ডউইথ ব্যবহার কমানো এবং সার্ভারের লোড হালকা করার জন্য একটি কার্যকর পদ্ধতি। তবে, ক্যাশিং ব্যবহার করা অর্থ হল সামগ্রীর সতেজতা এবং রিফ্রেশ খরচের মধ্যে একটি আপস করা। অতিরিক্ত রিফ্রেশ সিস্টেমের সম্পদের উপর বেশি খরচ সত্ত্বেও সামগ্রীর উচ্চ স্তরের সতেজতা নিশ্চিত করে। অন্যদিকে, অপর্যাপ্ত রিফ্রেশ সামগ্রীর সতেজতা হ্রাস করে কিন্তু সম্পদ ব্যবহারের খরচ বাঁচায়। সতেজতা-খরচ সমস্যা সমাধানের জন্য, আমরা একটি সাধারণ খরচ মডেল ব্যবহার করে রিফ্রেশ সময়সূচী সমস্যার সূত্রায়ন করি এবং এই খরচ মডেল ব্যবহার করে একটি অনুকূল রিফ্রেশ ফ্রিকোয়েন্সি নির্ধারণ করি যা রিফ্রেশ খরচ এবং সামগ্রীর সতেজতার মধ্যে সেরা ভারসাম্য প্রদান করে। আমরা প্রমাণ করি যে সামগ্রী আপডেটের আগমন যদি পয়জন প্রক্রিয়া অনুসরণ করে এবং বয়স-সংক্রান্ত খরচ সতেজতা হ্রাসের সাথে সাথে একঘেয়েভাবে বৃদ্ধি পায়, তবে একটি অনুকূল রিফ্রেশ ফ্রিকোয়েন্সির অস্তিত্ব এবং এর এককতা রয়েছে। এছাড়াও, আমরা নির্দিষ্ট রিফ্রেশ সময়সূচী এবং দৈবচয়নিক রিফ্রেশ সময়সূচীর অধীনে সিস্টেমের কর্মদক্ষতার বিশ্লেষণমূলক তুলনা প্রদান করি এবং দেখাই যে গড় রিফ্রেশ ফ্রিকোয়েন্সি একই থাকলে দীর্ঘমেয়াদী গড় খরচের দিক থেকে দুটি রিফ্রেশ সময়সূচী গাণিতিকভাবে সমতুল্য।
bn
We analyze the formation and evolution of terrestrial-like planets around solar-type stars in the absence of gaseous giants. In particular, we focus on the physical and dynamical properties of those that survive in the system's Habitable Zone (HZ). This study is based on a comparison between N-body simulations that include fragmentation and others that treat all collisions as perfect mergers. We use an N-body code, presented in a previous paper, that allows planetary fragmentation. We carry out three sets of 24 simulations for 400 Myr. Two sets adopt a model that includes hit-and-run collisions and planetary fragmentation, each with a different value of the individual minimum mass allowed for the fragments. For the third set, we consider that all collisions lead to perfect mergers. The systems produced in N-body simulations with and without fragmentation are broadly similar, with some differences. In runs with fragmentation, the formed planets have lower masses, since part of their mass is distributed amongst collisional fragments. Additionally, those planets present lower eccentricities, presumably due to dynamical friction with the generated fragments. Perfect mergers and hit-and-run collisions are the most common outcomes. Regardless of the collisional treatment adopted, most of the planets that survive in the HZ start the simulation beyond the snow line and end up with very high final water contents. The fragments' contribution to their mass and water content is negligible. Finally, the individual minimum mass for fragments may play an important role in the planets' collisional history. Collisional models that incorporate fragmentation and hit-and-run collisions lead to a more detailed description of the physical properties of the terrestrial-like planets formed. We conclude that planetary fragmentation is not a barrier to the formation of water worlds in the HZ.
ကျွန်ုပ်တို့သည် နေအမျိုးအစားကြယ်များပတ်လည်ရှိ ဂြိုလ်ကြီးများမရှိသည့်အခြေအနေတွင် ကမ္ဘာကဲ့သို့သောဂြိုလ်များ၏ ဖြစ်ပေါ်မှုနှင့် ဖွံ့ဖြိုးတိုးတက်မှုကို ဆန်းစစ်လေ့လာကြသည်။ အထူးသဖြင့် စနစ်၏ နေထိုင်နိုင်သောဇုန် (HZ) တွင် ကျန်ရှိနေသည့် ဂြိုလ်များ၏ ရူပဗေဒနှင့် စွမ်းအင်ဖြန့်ဝေမှုဆိုင်ရာ ဂုဏ်သတ္တိများကို အဓိကထားလေ့လာကြသည်။ ဤလေ့လာမှုသည် အပိုင်းအစများဖြစ်ပေါ်ခြင်းကို ထည့်သွင်းထားသည့် N-body စမ်းသပ်မှုများနှင့် တိုက်မိမှုအားလုံးကို လုံးဝပေါင်းစည်းခြင်းအဖြစ် ယူဆထားသည့် အခြားစမ်းသပ်မှုများကြား နှိုင်းယှဉ်လေ့လာမှုအပေါ် အခြေခံထားသည်။ အပိုင်းအစများဖြစ်ပေါ်စေနိုင်သည့် N-body ကုဒ်ကို အသုံးပြုပြီး ၎င်းကို ယခင်က ဖော်ပြခဲ့သည်။ 400 Myr အတွက် စမ်းသပ်မှု ၂၄ ခုစီပါဝင်သည့် စမ်းသပ်မှုအစု သုံးစုကို ကျွန်ုပ်တို့ ဆောင်ရွက်ခဲ့သည်။ အပိုင်းအစများဖြစ်ပေါ်ခြင်းနှင့် တိုက်မိပြီးပြေးခြင်းတိုက်မိမှုများကို ထည့်သွင်းထားသည့် မော်ဒယ်ကို အသုံးပြု၍ စမ်းသပ်မှုအစုနှစ်စုကို ဖွံ့ဖြိုးစေခဲ့ပြီး အပိုင်းအစများအတွက် ခွင့်ပြုထားသည့် တစ်ဦးချင်း အနည်းဆုံးရှိနိုင်သော အမြဲတမ်းကွဲပြားမှုများဖြင့် ဖန်တီးခဲ့သည်။ တတိယအစုအတွက် တိုက်မိမှုအားလုံးသည် လုံးဝပေါင်းစည်းမှုများဖြစ်သည်ဟု ယူဆထားသည်။ အပိုင်းအစများဖြစ်ပေါ်ခြင်းနှင့် မဖြစ်ပေါ်ခြင်းတို့ဖြင့် N-body စမ်းသပ်မှုများတွင် ထုတ်လုပ်ထားသည့် စနစ်များသည် အနည်းငယ်ကွဲပြားမှုများရှိသော်လည်း အကျယ်အဝန်းအားဖြင့် ဆင်တူသည်။ အပိုင်းအစများဖြစ်ပေါ်သည့် စမ်းသပ်မှုများတွင် ဖြစ်ပေါ်လာသော ဂြိုလ်များသည် အမှုအစားတစ်ခုသည် တိုက်မိမှုအပိုင်းအစများအကြား ဖြန့်ဝေထားသောကြောင့် နိမ့်သော ဒြပ်ထုများရှိသည်။ ထို့အပြင် ထုတ်လုပ်ထားသော အပိုင်းအစများနှင့် စွမ်းအင်ဖြန့်ဝေမှု ပွတ်တိုက်မှုကြောင့် ထိုဂြိုလ်များသည် ပိုမိုနိမ့်သော ပုံရိပ်မျဉ်းများကို ပြသသည်ဟု ယူဆရသည်။ လုံးဝပေါင်းစည်းမှုများနှင့် တိုက်မိပြီးပြေးခြင်းတိုက်မိမှုများသည် အဖြစ်များဆုံး ရလဒ်များဖြစ်သည်။ တိုက်မိမှုဆိုင်ရာ ကုသမှုကို မည်သည့်နည်းလမ်းကို အသုံးပြုသည်ဖြစ်စေ နေထိုင်နိုင်သောဇုန်တွင် ကျန်ရှိနေသည့် ဂြိုလ်အများစုသည် နှင်းမျဉ်းကျော်လွန်ရာတွင် စတင်ပြီး အဆုံးတွင် ရေပါဝင်မှုအလွန်မြင့်မားစွာရှိသည်။ အပိုင်းအစများ၏ သူတို့၏ ဒြပ်ထုနှင့် ရေပါဝင်မှုသို့ ပံ့ပိုးမှုသည် သိမ်းယူ၍မရအောင် နည်းပါးသည်။ နောက်ဆုံးအနေဖြင့် အပိုင်းအစများအတွက် တစ်ဦးချင်း အနည်းဆုံးရှိနိုင်သော ဒြပ်ထုသည် ဂြိုလ်များ၏ တိုက်မိမှုသမိုင်းတွင် အရေးပါသော အခန်းကဏ္ဍကို ပါဝင်နိုင်သည်။ အပိုင်းအစများဖြစ်ပေါ်မှုနှင့် တိုက်မိပြီးပြေးခြင်းတိုက်မိမှုများကို ထည့်သွင်းထားသည့် တိုက်မိမှုဆိုင်ရာ မော်ဒယ်များသည် ဖြစ်ပေါ်လာသော ကမ္ဘာကဲ့သို့သော ဂြိုလ်များ၏ ရူပဗေဒဂုဏ်သတ္တိများကို ပိုမိုအသေးစိတ် ဖော်ပြနိုင်စေသည်။ နေထိုင်နိုင်သောဇုန်တွင် ရေကမ္ဘာများ ဖြစ်ပေါ်ရာတွင် ဂြိုလ်များ၏ အပိုင်းအစများဖြစ်ပေါ်မှုသည် အတားအဆီးမဟုတ်ကြောင်း ကျွန်ုပ်တို့ နိဂုံးချုပ်သည်။
my
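The abstract above relies on an N-body integrator with fragmentation. The authors' code is not reproduced here; as a hedged illustration of the orbital backbone of such codes, the following is a minimal kick-drift-kick leapfrog integrator for a test particle around a point mass (no fragmentation, assumed units G·M = 1), whose symplectic nature keeps the energy error bounded over many orbits:

```python
import math

def accel(x, y):
    # point-mass gravity with G*M = 1
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

def leapfrog(x, y, vx, vy, dt, steps):
    # kick-drift-kick leapfrog: symplectic, so energy errors oscillate
    # instead of drifting secularly
    ax, ay = accel(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
        x += dt * vx;        y += dt * vy
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    return x, y, vx, vy

def energy(x, y, vx, vy):
    return 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)

# circular orbit: r = 1, v = 1 -> specific energy E = -0.5
e0 = energy(1.0, 0.0, 0.0, 1.0)
x, y, vx, vy = leapfrog(1.0, 0.0, 0.0, 1.0, 0.01, 5000)
e1 = energy(x, y, vx, vy)
```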
The complete mid- to far-infrared continuum energy distribution collected with the Infrared Space Observatory of the Seyfert 2 prototype NGC 5252 is presented. ISOCAM images taken in the 3--15 micron range show a resolved central source that is consistent at all bands with a region of about 1.3 kpc in size. Due to the lack of ongoing star formation in the disk of the galaxy, this resolved emission is associated with either dust heated in the nuclear active region or with bremsstrahlung emission from the nuclear and extended ionised gas. The size of the mid-IR emission contrasts with the standard unification scenario envisaging a compact dusty structure surrounding and hiding the active nucleus and the broad-line region. The mid-IR data are complemented with ISOPHOT aperture photometry in the 25--200 micron range. The overall IR spectral energy distribution is dominated by a well-defined component peaking at about 100 micron, with a characteristic temperature of T ~20 K and an associated dust mass of 2.5 x 10E7 Msun, which greatly dominates the total dust mass content of the galaxy. The heating mechanism of this dust is probably the interstellar radiation field. After subtracting the contribution of this cold dust component, the bulk of the residual emission is attributed to dust heated within the nuclear environment. Its luminosity consistently accounts for the reprocessing of the X-ray to UV emission derived for the nucleus of this galaxy. The comparison of the NGC 5252 spectral energy distribution with current torus models favors a large nuclear disk structure on the kiloparsec scale.
Apresenta-se a distribuição completa de energia no contínuo do infravermelho médio ao distante, obtida com o Observatório Espacial de Infravermelho, do protótipo de Seyfert 2 NGC 5252. Imagens do ISOCAM obtidas na faixa de 3--15 mícrons mostram uma fonte central resolvida que, em todas as bandas, é consistente com uma região de cerca de 1,3 kpc de tamanho. Devido à ausência de formação estelar em andamento no disco da galáxia, esta emissão resolvida está associada ou ao aquecimento de poeira na região ativa nuclear ou à emissão de bremsstrahlung do gás ionizado nuclear e estendido. O tamanho da emissão no infravermelho médio contrasta com o cenário padrão de unificação, que prevê uma estrutura compacta e empoeirada envolvendo e ocultando o núcleo ativo e a região de linhas largas. Os dados no infravermelho médio são complementados com fotometria de abertura do ISOPHOT na faixa de 25--200 mícrons. A distribuição espectral de energia no infravermelho é dominada por um componente bem definido com pico em torno de 100 mícrons, uma temperatura característica de T ~20 K e uma massa de poeira associada de 2,5 x 10E7 Msun, que domina amplamente o conteúdo total de massa de poeira da galáxia. O mecanismo de aquecimento dessa poeira é provavelmente o campo de radiação interestelar. Após subtrair a contribuição desse componente de poeira fria, a maior parte da emissão residual é atribuída à poeira aquecida no ambiente nuclear. Sua luminosidade explica consistentemente a reemissão da radiação de raios-X até o ultravioleta derivada para o núcleo desta galáxia. A comparação da distribuição espectral de energia de NGC 5252 com modelos atuais de toro favorece uma estrutura de disco nuclear extensa, na escala do quiloparsec.
pt
We present the second report of our systematic search for strongly lensed quasars from the data of the Sloan Digital Sky Survey (SDSS). From extensive follow-up observations of 136 candidate objects, we find 36 lenses in the full sample of 77,429 spectroscopically confirmed quasars in the SDSS Data Release 5. We then define a complete sample of 19 lenses, including 11 from our previous search in the SDSS Data Release 3, from the sample of 36,287 quasars with i<19.1 in the redshift range 0.6<z<2.2, where we require the lenses to have image separations of 1"<\theta<20" and i-band magnitude differences between the two images smaller than 1.25 mag. Among the 19 lensed quasars, 3 have quadruple-image configurations, while the remaining 16 show double images. This lens sample constrains the cosmological constant to be \Omega_\Lambda=0.84^{+0.06}_{-0.08}(stat.)^{+0.09}_{-0.07}(syst.) assuming a flat universe, which is in good agreement with other cosmological observations. We also report the discoveries of 7 binary quasars with separations ranging from 1.1" to 16.6", which are identified in the course of our lens survey. This study concludes the construction of our statistical lens sample in the full SDSS-I data set.
เราขอเสนอรายงานฉบับที่สองของการค้นหาควาซาร์ที่เกิดเลนส์แรงโน้มถ่วงอย่างชัดเจนจากรายการข้อมูลของ Sloan Digital Sky Survey (SDSS) จากการสังเกตติดตามอย่างละเอียดของวัตถุตัวอย่างจำนวน 136 ดวง เราพบเลนส์จำนวน 36 ระบบ จากตัวอย่างทั้งหมด 77,429 ควาซาร์ ซึ่งได้รับการยืนยันด้วยสเปกโตรสโกปีในชุดข้อมูล SDSS Data Release 5 จากนั้นเราได้กำหนดตัวอย่างที่สมบูรณ์จำนวน 19 ระบบ ซึ่งรวมถึงเลนส์ 11 ระบบจากผลการค้นหาใน SDSS Data Release 3 ที่ผ่านมา โดยเลือกจากกลุ่มตัวอย่างควาซาร์จำนวน 36,287 ดวง ที่มีความสว่างในแถบ i น้อยกว่า 19.1 (i<19.1) และอยู่ในช่วงเรดชิฟต์ 0.6<z<2.2 โดยกำหนดให้เลนส์ต้องมีมุมแยกของภาพอยู่ระหว่าง 1"<\theta<20" และมีความต่างของความสว่างในแถบ i ระหว่างภาพทั้งสองน้อยกว่า 1.25 แมกนิจูด จากระบบควาซาร์ที่ถูกเลนส์ทั้ง 19 ระบบ มี 3 ระบบเป็นการจัดเรียงภาพสี่ภาพ (quadruple-image) และอีก 16 ระบบเป็นภาพคู่ (double images) ตัวอย่างเลนส์ชุดนี้ให้ข้อจำกัดค่าคงที่ทางจักรวาลวิทยาเป็น \Omega_\Lambda=0.84^{+0.06}_{-0.08}(สถิติ)^{+0.09}_{-0.07}(ระบบ) โดยสมมติว่าจักรวาลมีลักษณะแบน ซึ่งสอดคล้องดีกับผลการสังเกตการณ์ทางจักรวาลวิทยาอื่น ๆ นอกจากนี้ เรายังรายงานการค้นพบควาซาร์คู่จำนวน 7 ระบบ ซึ่งมีมุมแยกตั้งแต่ 1.1" ถึง 16.6" ที่ตรวจพบระหว่างการสำรวจเลนส์นี้ งานวิจัยนี้ถือเป็นการสรุปขั้นตอนการสร้างตัวอย่างเลนส์เชิงสถิติของเราจากชุดข้อมูล SDSS-I ทั้งหมด
th
Neurodegenerative parkinsonism can be assessed by dopamine transporter single photon emission computed tomography (DaT-SPECT). Generating these images is time-consuming, their reading is subject to interobserver variability, and to date they have been visually interpreted by nuclear medicine physicians. Accordingly, this study aims to provide an automatic and robust method based on Diffusion Maps and machine learning classifiers to classify the SPECT images into two types, namely Normal and Abnormal DaT-SPECT image groups. In the proposed method, the 3D images of N patients are mapped to an N by N pairwise distance matrix and are visualized in Diffusion Maps coordinates. The images of the training set are embedded into a low-dimensional space by using diffusion maps. Moreover, we use Nyström's out-of-sample extension, which embeds new sample points as the testing set in the reduced space. Testing samples in the embedded space are then classified into two types through the ensemble classifier with Linear Discriminant Analysis (LDA) and a voting procedure over twenty-five-fold cross-validation results. The feasibility of the method is demonstrated via the Parkinson's Progression Markers Initiative (PPMI) dataset of 1097 subjects and a clinical cohort from Kaohsiung Chang Gung Memorial Hospital (KCGMH-TW) of 630 patients. We compare performances using Diffusion Maps with those of three alternative manifold methods for dimension reduction, namely Locally Linear Embedding (LLE), Isometric Mapping (Isomap), and Kernel Principal Component Analysis (Kernel PCA). We also compare results using 2D and 3D CNN methods. The diffusion maps method has an average accuracy of 98% for the PPMI and 90% for the KCGMH-TW dataset with twenty-five-fold cross-validation results. It outperforms the other three methods in overall accuracy and in robustness across the training and testing samples.
نیوروڈی جنریٹو پارکنسونیزم کا جائزہ ڈوپامائن ٹرانسپورٹر سنگل فوٹون اخراج کمپیوٹڈ ٹوموگرافی (DaT-SPECT) کے ذریعے لیا جا سکتا ہے۔ اگرچہ تصاویر تیار کرنا وقت طلب ہے، تاہم ان تصاویر میں مشاہدہ کنندگان کے درمیان تغیرات پائے جاتے ہیں، اور اب تک ان کی ویژوئل تشریح نیوکلیئر میڈیسن کے ماہرین کے ذریعے کی جاتی رہی ہے۔ اس لیے، اس مطالعہ کا مقصد ڈفیوژن میپس اور مشین لرننگ کلاسیفائیئرز پر مبنی ایک خودکار اور مضبوط طریقہ فراہم کرنا ہے تاکہ SPECT تصاویر کو دو اقسام میں درجہ بندی کیا جا سکے، یعنی نارمل اور غیر نارمل DaT-SPECT تصویر گروپس۔ پیش کردہ طریقہ کار میں، N مریضوں کی 3D تصاویر کو ایک N بائی N جوڑی وار فاصلہ میٹرکس میں منعکس کیا جاتا ہے اور ڈفیوژن میپس کے متناسق نقوش میں دکھایا جاتا ہے۔ تربیتی سیٹ کی تصاویر کو ڈفیوژن میپس کے استعمال سے کم بعدی جگہ میں رکھا جاتا ہے۔ مزید برآں، ہم نائسٹروم کے آؤٹ آف سیمپل توسیع کا استعمال کرتے ہیں، جو کم شدہ جگہ میں نئے نمونہ نقاط کو ٹیسٹنگ سیٹ کے طور پر رکھتی ہے۔ تعلیم یافتہ جگہ میں ٹیسٹنگ نمونوں کو اینسمبل کلاسیفائیر کے ذریعے درجہ بندی کیا جاتا ہے، جس میں لکیری تفریقی تجزیہ (LDA) اور پچیس گنا کراس ویلیڈیشن کے نتائج کے ذریعے ووٹنگ کی طریقہ کار شامل ہے۔ اس طریقہ کی عملی قابلیت کو 1097 افراد پر مشتمل پارکنسونیزم ترقی کے نشانیوں کے اقدام (PPMI) کے ڈیٹا سیٹ اور 630 مریضوں پر مشتمل کائوشیونگ چانگ گونگ میموریل ہسپتال (KCGMH-TW) کے ایک طبی کوہورٹ کے ذریعے ظاہر کیا گیا ہے۔ ہم ڈفیوژن میپس کے استعمال کے نتائج کا موازنہ بعد کم کرنے کے تین متبادل مینی فولڈ طریقوں، یعنی مقامی لکیری ایمبیڈنگ (LLE)، آئسو مورفک میپنگ الگورتھم (آئسو میپ)، اور کرنل اہم اجزاء کا تجزیہ (کرنل PCA) کے ساتھ کرتے ہیں۔ ہم 2D اور 3D سی این این طریقوں کے نتائج کا بھی موازنہ کرتے ہیں۔ ڈفیوژن میپس کا طریقہ پچیس گنا کراس ویلیڈیشن کے نتائج کے ساتھ PPMI کے لیے 98 فیصد اور KCGMH-TW ڈیٹا سیٹ کے لیے 90 فیصد کی اوسط درستگی کا حامل ہے۔ یہ تربیت اور ٹیسٹنگ نمونوں میں کل درستگی اور مضبوطی کے لحاظ سے دیگر تین طریقوں پر بھاری ہے۔
ur
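The abstract above builds an N-by-N pairwise distance matrix, embeds it with diffusion maps, and classifies in the reduced space. A stdlib-only sketch of the first nontrivial diffusion coordinate — computed by power iteration on the symmetrically normalized affinity matrix, deflating the trivial stationary mode — on invented 1-D toy data (not SPECT images):

```python
import math

def diffusion_coordinate(points, eps=1.0, iters=500):
    """First nontrivial diffusion-map coordinate via power iteration."""
    n = len(points)
    # Gaussian affinity on pairwise squared distances
    k = [[math.exp(-((points[i] - points[j]) ** 2) / eps) for j in range(n)]
         for i in range(n)]
    d = [sum(row) for row in k]
    # symmetric normalization A = D^-1/2 K D^-1/2 (same spectrum as D^-1 K)
    a = [[k[i][j] / math.sqrt(d[i] * d[j]) for j in range(n)] for i in range(n)]
    # leading eigenvector of A is D^1/2 * 1 (the trivial stationary mode)
    u0 = [math.sqrt(di) for di in d]
    norm0 = math.sqrt(sum(x * x for x in u0))
    u0 = [x / norm0 for x in u0]
    # power iteration, deflating the trivial mode at every step
    v = [math.sin(i + 1.0) for i in range(n)]  # fixed pseudo-random start
    for _ in range(iters):
        v = [sum(a[i][j] * v[j] for j in range(n)) for i in range(n)]
        proj = sum(vi * ui for vi, ui in zip(v, u0))
        v = [vi - proj * ui for vi, ui in zip(v, u0)]
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    # back to the random-walk right eigenvector: psi1 = D^-1/2 * u1
    return [vi / math.sqrt(di) for vi, di in zip(v, d)]

# two well-separated clusters: the sign of psi1 separates them
pts = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
psi1 = diffusion_coordinate(pts)
```

In a full pipeline, a classifier (LDA in the paper) would then operate on these coordinates; Nyström extension would map held-out points into the same space.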
We consider a broadcast communication system over parallel sub-channels where the transmitter sends three messages: a common message to two users, and two confidential messages to each user which need to be kept secret from the other user. We assume partial channel state information at the transmitter (CSIT), stemming from noisy channel estimation. The first contribution of this paper is the characterization of the secrecy capacity region boundary as the solution of weighted sum-rate problems, with suitable weights. Partial CSIT is addressed by adding a margin to the estimated channel gains. The second contribution of this paper is the solution of this problem in almost closed form, where only two single real parameters must be optimized, e.g., through dichotomic searches. On the one hand, the considered problem generalizes existing literature where only two out of the three messages are transmitted. On the other hand, the solution also finds practical applications in the resource allocation of orthogonal frequency division multiplexing (OFDM) systems with both secrecy and fairness constraints.
Chúng tôi xét một hệ thống truyền thông phát sóng qua các kênh con song song, trong đó bộ phát gửi ba thông điệp: một thông điệp chung đến hai người dùng, và hai thông điệp mật gửi đến từng người dùng riêng biệt, cần được giữ bí mật đối với người dùng còn lại. Chúng tôi giả định thông tin trạng thái kênh một phần tại bộ phát (CSIT), bắt nguồn từ việc ước lượng kênh bị nhiễu. Đóng góp đầu tiên của bài báo này là việc xác định biên vùng dung lượng bảo mật dưới dạng nghiệm của các bài toán tổng tỉ lệ có trọng số, với các trọng số phù hợp. Vấn đề CSIT một phần được giải quyết bằng cách thêm một biên độ vào các hệ số kênh đã ước lượng. Đóng góp thứ hai của bài báo là lời giải cho bài toán này dưới dạng gần như đóng, trong đó chỉ cần tối ưu hóa hai tham số thực đơn lẻ, ví dụ thông qua các tìm kiếm nhị phân. Một mặt, bài toán được xét tổng quát hóa các nghiên cứu hiện có, nơi chỉ truyền hai trong số ba thông điệp. Mặt khác, lời giải cũng tìm thấy ứng dụng thực tiễn trong việc phân bổ tài nguyên cho các hệ thống đa truy nhập phân chia theo tần số trực giao (OFDM) với cả ràng buộc bảo mật và công bằng.
vi
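The abstract above reduces the boundary characterization to dichotomic searches over single real parameters. As an illustration of that style of one-dimensional search in OFDM resource allocation — a textbook sub-problem, not the paper's secrecy formulation — here is classic water-filling power allocation solved by bisection on the water level:

```python
def waterfill(gains, p_total, tol=1e-10):
    """Maximize sum log(1 + g_i p_i) s.t. sum p_i = p_total, p_i >= 0.

    Optimal p_i = max(0, mu - 1/g_i); the water level mu is found by a
    dichotomic (bisection) search, since total used power is monotone in mu.
    """
    lo, hi = 0.0, p_total + max(1.0 / g for g in gains)
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > p_total:
            hi = mu
        else:
            lo = mu
    mu = 0.5 * (lo + hi)
    return [max(0.0, mu - 1.0 / g) for g in gains]

# three sub-channels; the weakest one is left unpowered at the optimum
powers = waterfill([4.0, 1.0, 0.25], 2.0)
```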
When a three-dimensional (3D) ferromagnetic topological insulator thin film is magnetized out-of-plane, conduction ideally occurs through dissipationless, one-dimensional (1D) chiral states that are characterized by a quantized, zero-field Hall conductance. The recent realization of this phenomenon - the quantum anomalous Hall effect - provides a conceptually new platform for studies of edge-state transport, distinct from the more extensively studied integer and fractional quantum Hall effects that arise from Landau level formation. An important question arises in this context: how do these 1D edge states evolve as the magnetization is changed from out-of-plane to in-plane? We examine this question by studying the field-tilt driven crossover from predominantly edge state transport to diffusive transport in Cr-doped (Bi,Sb)2Te3 thin films, as the system transitions from a quantum anomalous Hall insulator to a gapless, ferromagnetic topological insulator. The crossover manifests itself in a giant, electrically tunable anisotropic magnetoresistance that we explain using the Landauer-Buttiker formalism. Our methodology provides a powerful means of quantifying edge state contributions to transport in temperature and chemical potential regimes far from perfect quantization.
3차원(3D) 강자성 위상 절연체 박막이 면에 수직한 방향으로 자화될 때, 전도는 이상적으로는 소산이 없는 1차원(1D) 카이랄 상태를 통해 이루어지며, 이는 양자화된 제로 자기장 홀 전도도로 특징지어진다. 최근 이러한 현상, 즉 양자 이상 홀 효과의 실현은 란다우 준위 형성에서 비롯되며 더 광범위하게 연구되어 온 정수 및 분수 양자 홀 효과와는 구별되는, 엣지 상태 수송 연구를 위한 개념적으로 새로운 플랫폼을 제공한다. 이 맥락에서 중요한 질문이 제기된다. 자화 방향이 면에 수직한 방향에서 면내 방향으로 변화할 때 이러한 1D 엣지 상태는 어떻게 변화하는가? 우리는 크롬이 도핑된 (Bi,Sb)2Te3 박막에서, 시스템이 양자 이상 홀 절연체에서 갭이 없는 강자성 위상 절연체로 전이함에 따라, 자기장 기울임에 의해 유도되는 주로 엣지 상태 수송에서 확산 수송으로의 전이를 연구함으로써 이 질문을 검토한다. 이 전이는 란다우어-뷔티커 형식론을 사용하여 설명할 수 있는 거대하고 전기적으로 조절 가능한 이방성 자기저항으로 나타난다. 본 연구 방법은 완전한 양자화에서 멀리 떨어진 온도 및 화학 퍼텐셜 영역에서 엣지 상태가 수송에 기여하는 정도를 정량화하는 강력한 수단을 제공한다.
ko
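The abstract above explains the magnetoresistance with the Landauer-Büttiker formalism. A minimal sketch of that formalism in the perfect-quantization limit — not the paper's full model — for a hypothetical six-terminal Hall bar carrying one chiral edge channel: each terminal obeys I_i = (e²/h)(V_i − V_{i−1}), and solving the linear system yields R_xy = h/e² and R_xx = 0 (voltages below are in units of I·h/e²):

```python
def solve(a, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def hall_bar_chiral(n_term=6, inject=0, drain=3):
    """Terminal voltages for one chiral edge channel: I_i = V_i - V_{i-1}.

    Unit current enters `inject` and leaves `drain`; V_drain is grounded.
    """
    idx = [t for t in range(n_term) if t != drain]  # unknown voltages
    col = {t: j for j, t in enumerate(idx)}
    a = [[0.0] * len(idx) for _ in idx]
    b = [0.0] * len(idx)
    for j, t in enumerate(idx):
        prev = (t - 1) % n_term
        a[j][col[t]] += 1.0            # +V_t
        if prev != drain:
            a[j][col[prev]] -= 1.0     # -V_{t-1} (grounded terminal drops out)
        b[j] = 1.0 if t == inject else 0.0
    v = solve(a, b)
    volts = [0.0] * n_term
    for t in idx:
        volts[t] = v[col[t]]
    return volts

v = hall_bar_chiral()
r_xy = v[1] - v[5]   # Hall resistance between side probes, units of h/e^2
r_xx = v[1] - v[2]   # longitudinal resistance along one edge
```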
The digital revolution has brought most of the world onto the world wide web. The data available on the WWW has increased many-fold in the past decade. Social networks, online clubs and organisations have come into existence. Information is extracted from these venues about a real world entity like a person, organisation, event, etc. However, this information may change over time, and there is a need for the sources to be up-to-date. Therefore, it is desirable to have a model to extract relevant data items from different sources and merge them to build a complete profile of an entity (entity profiling). Further, this model should be able to handle incorrect or obsolete data items. In this paper, we propose a novel method for completing a profile. We have developed a two-phase method: 1) The first phase (resolution phase) links records to the queries. We have proposed and observed that the use of a random forest for entity resolution increases the performance of the system, as this has resulted in more records getting linked to the correct entity. Also, we used the trustworthiness of a source as a feature in the random forest. 2) The second phase selects the appropriate values from records to complete a profile based on our proposed selection criteria. We have used various metrics for measuring the performance of the resolution phase as well as of the overall ReLiC framework. It is established through our results that the use of biased sources has significantly improved the performance of the ReLiC framework. Experimental results show that our proposed system, ReLiC, outperforms the state-of-the-art.
ການປະຕິວັດດິຈິຕອນໄດ້ນຳເອົາໂລກສ່ວນໃຫຍ່ມາສູ່ເວີດໄວດ໌ເວັບ. ຂໍ້ມູນທີ່ມີຢູ່ໃນ WWW ໄດ້ເພີ່ມຂຶ້ນຫຼາຍເທົ່າໃນທົດສະວັດທີ່ຜ່ານມາ. ເຄືອຂ່າຍສັງຄົມ, ສະໂມສອນອອນລາຍ ແລະ ອົງການຕ່າງໆ ໄດ້ມີການເກີດຂຶ້ນ. ຂໍ້ມູນຖືກສະກັດອອກຈາກສະຖານທີ່ເຫຼົ່ານີ້ກ່ຽວກັບໜ່ວຍງານໃນໂລກຈິງ ເຊັ່ນ: ບຸກຄົນ, ອົງການ, ເຫດການ ແລະ ອື່ນໆ. ເຖິງຢ່າງໃດກໍຕາມ, ຂໍ້ມູນນີ້ອາດຈະປ່ຽນແປງໄປຕາມເວລາ, ແລະ ມີຄວາມຈຳເປັນທີ່ແຫຼ່ງຂໍ້ມູນຕ້ອງຖືກອັບເດດຢູ່ສະເໝີ. ດັ່ງນັ້ນ, ຈຶ່ງເປັນທີ່ຕ້ອງການທີ່ຈະມີແບບຈຳລອງໃນການສະກັດຂໍ້ມູນທີ່ກ່ຽວຂ້ອງອອກຈາກແຫຼ່ງຂໍ້ມູນຕ່າງໆ ແລະ ການລວມເຂົ້າກັນເພື່ອສ້າງໂປຣໄຟລ໌ທີ່ຄົບຖ້ວນຂອງໜ່ວຍງານໜຶ່ງ (ການສ້າງໂປຣໄຟລ໌ໜ່ວຍງານ). ພ້ອມກັນນັ້ນ, ແບບຈຳລອງນີ້ຄວນຈະສາມາດຈັດການຂໍ້ມູນທີ່ບໍ່ຖືກຕ້ອງ ຫຼື ລ້າສົມັຍໄດ້. ໃນບົດຄວາມນີ້, ພວກເຮົາສະເໜີວິທີການໃໝ່ໃນການສ້າງໂປຣໄຟລ໌ໃຫ້ຄົບຖ້ວນ. ພວກເຮົາໄດ້ພັດທະນາວິທີການສອງຂັ້ນຕອນ: 1) ຂັ້ນຕອນທຳອິດ (ຂັ້ນຕອນການແກ້ໄຂ) ເຊື່ອມບັນທຶກກັບການສອບຖາມ. ພວກເຮົາໄດ້ສະເໜີ ແລະ ສັງເກດເຫັນວ່າການນຳໃຊ້ປ່າສຸ່ມສຳລັບການແກ້ໄຂໜ່ວຍງານ ເພີ່ມປະສິດທິພາບຂອງລະບົບ ເນື່ອງຈາກມັນໄດ້ນຳໄປສູ່ການເຊື່ອມຕໍ່ບັນທຶກຫຼາຍຂຶ້ນກັບໜ່ວຍງານທີ່ຖືກຕ້ອງ. ພ້ອມກັນນັ້ນ, ພວກເຮົາຍັງໄດ້ນຳໃຊ້ຄວາມໜ້າເຊື່ອຖືຂອງແຫຼ່ງຂໍ້ມູນເປັນຄຸນລັກສະນະໜຶ່ງໃນປ່າສຸ່ມ. 2) ຂັ້ນຕອນທີສອງ ເລືອກຄ່າທີ່ເໝາະສົມຈາກບັນທຶກເພື່ອສ້າງໂປຣໄຟລ໌ໃຫ້ຄົບຖ້ວນ ໂດຍອີງໃສ່ມາດຕະຖານການເລືອກທີ່ພວກເຮົາສະເໜີ. ພວກເຮົາໄດ້ນຳໃຊ້ມາດຕະການຕ່າງໆ ເພື່ອວັດແທກປະສິດທິພາບຂອງຂັ້ນຕອນການແກ້ໄຂ ແລະ ລະບົບ ReLiC ໃນທົ່ວໄປ. ຜ່ານຜົນໄດ້ຮັບຂອງພວກເຮົາ ໄດ້ສະແດງໃຫ້ເຫັນວ່າການນຳໃຊ້ແຫຼ່ງຂໍ້ມູນທີ່ມີຄວາມລຳອຽງໄດ້ປັບປຸງປະສິດທິພາບຂອງລະບົບ ReLiC ໄດ້ຢ່າງຫຼວງຫຼາຍ. ຜົນໄດ້ຮັບຈາກການທົດສອບສະແດງໃຫ້ເຫັນວ່າລະບົບທີ່ພວກເຮົາສະເໜີ, ReLiC ດີກວ່າລະບົບທີ່ດີທີ່ສຸດໃນປັດຈຸບັນ.
lo
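The abstract above describes a two-phase pipeline: a resolution phase that links records to a query (a random forest with source trustworthiness as a feature) and a selection phase that completes the profile. A toy sketch of that structure, with a simple string-similarity-plus-trust score standing in for the random forest — all records, sources, trust values, and thresholds below are invented for illustration:

```python
from difflib import SequenceMatcher

# toy records from three hypothetical sources, with assumed trust scores
RECORDS = [
    {"source": "siteA", "name": "Alan M. Turing", "city": "London"},
    {"source": "siteB", "name": "A. Turing",      "city": "Londn"},
    {"source": "siteC", "name": "Grace Hopper",   "city": "Arlington"},
]
TRUST = {"siteA": 0.9, "siteB": 0.6, "siteC": 0.8}

def link_score(query_name, record):
    # stand-in for the random-forest scorer: string similarity blended
    # with source trustworthiness as an extra feature
    sim = SequenceMatcher(None, query_name.lower(),
                          record["name"].lower()).ratio()
    return 0.8 * sim + 0.2 * TRUST[record["source"]]

def resolve(query_name, records, threshold=0.6):
    """Phase 1 (resolution): link records whose score clears the threshold."""
    return [r for r in records if link_score(query_name, r) >= threshold]

def complete_profile(linked):
    """Phase 2 (selection): per attribute, keep the most trusted value."""
    profile = {}
    for rec in sorted(linked, key=lambda r: TRUST[r["source"]]):
        for key, val in rec.items():
            if key != "source":
                profile[key] = val  # later (higher-trust) records overwrite
    return profile

linked = resolve("Alan Turing", RECORDS)
profile = complete_profile(linked)
```

Here the two Turing records are linked while the unrelated one is rejected, and the higher-trust source supplies the final attribute values.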
When a three-dimensional (3D) ferromagnetic topological insulator thin film is magnetized out-of-plane, conduction ideally occurs through dissipationless, one-dimensional (1D) chiral states that are characterized by a quantized, zero-field Hall conductance. The recent realization of this phenomenon - the quantum anomalous Hall effect - provides a conceptually new platform for studies of edge-state transport, distinct from the more extensively studied integer and fractional quantum Hall effects that arise from Landau level formation. An important question arises in this context: how do these 1D edge states evolve as the magnetization is changed from out-of-plane to in-plane? We examine this question by studying the field-tilt driven crossover from predominantly edge state transport to diffusive transport in Cr-doped (Bi,Sb)2Te3 thin films, as the system transitions from a quantum anomalous Hall insulator to a gapless, ferromagnetic topological insulator. The crossover manifests itself in a giant, electrically tunable anisotropic magnetoresistance that we explain using the Landauer-Buttiker formalism. Our methodology provides a powerful means of quantifying edge state contributions to transport in temperature and chemical potential regimes far from perfect quantization.
Lorsqu'un film mince d'isolant topologique ferromagnétique tridimensionnel (3D) est magnétisé perpendiculairement au plan, la conduction se produit idéalement par des états chiraux unidimensionnels (1D) sans dissipation, caractérisés par une conductance de Hall quantifiée en champ nul. La réalisation récente de ce phénomène – l'effet Hall anomal quantique – offre une plateforme conceptuellement nouvelle pour l'étude du transport par états de bord, distincte des effets Hall quantique entier et fractionnaire, plus largement étudiés, qui résultent de la formation de niveaux de Landau. Une question importante se pose dans ce contexte : comment ces états de bord 1D évoluent-ils lorsque la magnétisation passe d'une orientation perpendiculaire à une orientation dans le plan ? Nous examinons cette question en étudiant la transition induite par l'inclinaison du champ magnétique, allant d'un transport principalement dû aux états de bord vers un transport diffusif, dans des films minces de (Bi,Sb)2Te3 dopés au chrome, lorsque le système passe d'un isolant de Hall anomal quantique à un isolant topologique ferromagnétique sans bande interdite. Cette transition se manifeste par une magnétorésistance anisotrope géante, électriquement accordable, que nous expliquons à l'aide du formalisme de Landauer-Büttiker. Notre méthodologie fournit un moyen puissant de quantifier la contribution des états de bord au transport dans des régimes de température et de potentiel chimique éloignés de la quantification parfaite.
fr
A precise understanding of the influence of an environment on quantum dynamics, which is at the heart of the theory of open quantum systems, is crucial for further progress in the development of controllable large-scale quantum systems. However, existing approaches to account for complex system environment interaction in the presence of memory effects are either based on heuristic and oversimplified principles or give rise to computational difficulties. In practice, one can take advantage of available experimental data and replace the first-principles simulation with a data-driven analysis that is often much simpler. Inspired by recent advances in data analysis and machine learning, we suggest a data-driven approach to the analysis of the non-Markovian dynamics of open quantum systems. Our method allows capturing the most important properties of open quantum systems, such as the effective dimension of the environment, eigenfrequencies of the joint system-environment quantum dynamics, as well as reconstructing the minimal Markovian embedding, predicting dynamics, and denoising of measured quantum trajectories. We demonstrate the performance of the suggested approach on various models of open quantum systems, including a qubit coupled with a finite environment, a spin-boson model, and the damped Jaynes-Cummings model.
کوانٹم مراکز پر ماحول کے اثر کی درست تفہیم، جو کھلے کوانٹم نظام کے نظریہ کا بنیادی حصہ ہے، قابل کنٹرول بڑے پیمانے پر کوانٹم نظام کی ترقی کے لیے مزید پیشرفت کے لیے نہایت اہم ہے۔ تاہم، یادداشت کے اثرات کی موجودگی میں پیچیدہ نظام اور ماحول کے تعامل کو مدنظر رکھتے ہوئے موجودہ طریقہ کار یا تو متجزئہ اور مبہم اصولوں پر مبنی ہوتے ہیں یا پھر حساباتی مشکلات کا باعث بنتے ہیں۔ عملی طور پر، دستیاب تجرباتی ڈیٹا کا فائدہ اٹھاتے ہوئے بنیادی اصولوں پر مبنی حساب کی بجائے ڈیٹا پر مبنی تجزیہ کو استعمال کیا جا سکتا ہے، جو اکثر بہت زیادہ سادہ ہوتا ہے۔ حالیہ پیشرفت کے تناظر میں ڈیٹا کے تجزیہ اور مشین لرننگ سے متاثر ہو کر، ہم کھلے کوانٹم نظام کی غیر مارکوفیان حرکیات کے تجزیہ کے لیے ڈیٹا پر مبنی طریقہ کار کی تجویز پیش کرتے ہیں۔ ہمارا طریقہ کار کھلے کوانٹم نظام کی انتہائی اہم خصوصیات کو محفوظ کرنے کی اجازت دیتا ہے، جیسے ماحول کا مؤثر بعد، نظام اور ماحول کی مشترکہ کوانٹم حرکیات کی ایگن فریکوئنسیز، نیز مناسب ترین مارکوفیان انضمام کی تعمیر نو، حرکیات کی پیش گوئی، اور ماپی گئی کوانٹم سفر کی شور زدگی کی اصلاح۔ ہم کھلے کوانٹم نظام کے مختلف ماڈلز پر تجویز کردہ طریقہ کار کی کارکردگی کو ظاہر کرتے ہیں، جن میں ایک محدود ماحول سے جڑا ہوا کیوبٹ، اسپن-بوزون ماڈل، اور کمزور پڑنے والا جینز-کامنگز ماڈل شامل ہیں۔
ur
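The abstract above extracts eigenfrequencies and effective dimensions from measured trajectories. As a hedged, minimal surrogate of that idea — a DMD-style linear fit, not the authors' method — the following fits a transfer operator x_{k+1} = A·x_k to a synthetic damped two-dimensional trajectory by least squares and reads the decay rate and oscillation frequency off the eigenvalues of A:

```python
import math

def rotation_scale(r, theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[r * c, -r * s], [r * s, r * c]]

def apply(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def fit_transfer_operator(traj):
    """Least-squares fit of x_{k+1} = A x_k via A = (Y X^T)(X X^T)^-1."""
    xxt = [[0.0, 0.0], [0.0, 0.0]]
    yxt = [[0.0, 0.0], [0.0, 0.0]]
    for x, y in zip(traj[:-1], traj[1:]):
        for i in range(2):
            for j in range(2):
                xxt[i][j] += x[i] * x[j]
                yxt[i][j] += y[i] * x[j]
    det = xxt[0][0] * xxt[1][1] - xxt[0][1] * xxt[1][0]
    inv = [[ xxt[1][1] / det, -xxt[0][1] / det],
           [-xxt[1][0] / det,  xxt[0][0] / det]]
    return [[sum(yxt[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# synthetic "trajectory": damped rotation, eigenvalues 0.95*exp(+/- 0.3i)
true_a = rotation_scale(0.95, 0.3)
traj = [[1.0, 0.0]]
for _ in range(60):
    traj.append(apply(true_a, traj[-1]))

a_fit = fit_transfer_operator(traj)
# for a complex-conjugate pair: |lambda| = sqrt(det), phase from the trace
tr = a_fit[0][0] + a_fit[1][1]
det_a = a_fit[0][0] * a_fit[1][1] - a_fit[0][1] * a_fit[1][0]
modulus = math.sqrt(det_a)                 # decay per step
phase = math.acos(tr / (2.0 * modulus))    # oscillation frequency
```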
Governments and cities around the world are currently facing rapid growth in the use of Electric Vehicles and therewith the need for Charging Infrastructure. For these cities, the struggle remains how to further roll out charging infrastructure in the most efficient way, both in terms of cost and use. Forecasting models are not able to predict longer-term developments, and as such more complex simulation models offer opportunities to simulate various scenarios. Agent based simulation models provide insight into the effects of incentives and roll-out strategies before they are implemented in practice and thus allow for scenario testing. This paper describes the build-up of an agent based model that enables policy makers to anticipate charging infrastructure development. The model is able to simulate charging transactions of individual users and is both calibrated and validated using a dataset of charging transactions from the public charging infrastructure of the four largest cities in the Netherlands.
전 세계의 정부와 도시들은 현재 전기자동차의 사용이 급격히 증가함에 따라 충전 인프라에 대한 필요성도 함께 증가하고 있는 상황이다. 이러한 도시들로서는 비용과 이용 효율성 측면에서 충전 인프라를 가장 효율적인 방식으로 추가로 확장해 나갈지에 대한 과제가 여전히 남아 있다. 예측 모델은 장기적인 변화를 예측하는 데 한계가 있어, 보다 복잡한 시뮬레이션 모델을 통해 다양한 시나리오를 시뮬레이션할 수 있는 기회가 제공된다. 에이전트 기반 시뮬레이션 모델은 실제 시행 전에 인센티브 및 확대 전략의 효과를 파악할 수 있게 해주며, 따라서 시나리오 테스트를 가능하게 한다. 본 논문은 정책 결정자들이 충전 인프라 개발에 선제적으로 대응할 수 있도록 해주는 에이전트 기반 모델의 구축 과정을 설명한다. 이 모델은 개별 사용자의 충전 거래를 시뮬레이션할 수 있으며, 네덜란드의 4대 주요 도시에 설치된 공용 충전 인프라에서 수집된 충전 거래 데이터 세트를 사용하여 보정 및 검증을 거쳤다.
ko
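The abstract above calibrates an agent-based model of individual charging transactions. A deliberately tiny sketch of the agent-based idea — drivers as agents competing for chargers under two roll-out scenarios — with invented, deterministic arrival and duration patterns, nothing like the calibrated Dutch model:

```python
class Driver:
    """Toy EV driver agent: arrives, takes a free charger, else gives up."""
    def __init__(self, arrival, duration):
        self.arrival, self.duration = arrival, duration

def simulate(n_chargers, drivers):
    # greedy first-free assignment, processed in arrival order
    busy_until = [0] * n_chargers      # hour at which each charger frees up
    served = failed = 0
    for d in sorted(drivers, key=lambda d: d.arrival):
        free = [i for i, t in enumerate(busy_until) if t <= d.arrival]
        if free:
            busy_until[free[0]] = d.arrival + d.duration
            served += 1
        else:
            failed += 1
    return served, failed

# two drivers arrive every hour for 50 hours, each charging for 3 hours
drivers = [Driver(t % 50, 3) for t in range(100)]
scarce = simulate(2, drivers)   # under-provisioned roll-out scenario
ample = simulate(6, drivers)    # roll-out sized for peak concurrent demand
```

With two chargers only the cohorts arriving every third hour are served; six chargers cover the peak concurrency of five, so no driver fails.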
We consider a broadcast communication system over parallel sub-channels where the transmitter sends three messages: a common message to two users, and two confidential messages to each user which need to be kept secret from the other user. We assume partial channel state information at the transmitter (CSIT), stemming from noisy channel estimation. The first contribution of this paper is the characterization of the secrecy capacity region boundary as the solution of weighted sum-rate problems, with suitable weights. Partial CSIT is addressed by adding a margin to the estimated channel gains. The second contribution of this paper is the solution of this problem in almost closed form, where only two single real parameters must be optimized, e.g., through dichotomic searches. On the one hand, the considered problem generalizes existing literature where only two out of the three messages are transmitted. On the other hand, the solution also finds practical applications in the resource allocation of orthogonal frequency division multiplexing (OFDM) systems with both secrecy and fairness constraints.
Nous considérons un système de communication de diffusion utilisant des sous-canaux parallèles, dans lequel l'émetteur transmet trois messages : un message commun destiné aux deux utilisateurs, et deux messages confidentiels, chacun destiné à un utilisateur et devant rester secret vis-à-vis de l'autre utilisateur. Nous supposons une information partielle sur l'état du canal disponible à l'émetteur (CSIT), provenant d'une estimation bruitée du canal. La première contribution de cet article est la caractérisation de la frontière de la région de capacité de confidentialité comme solution de problèmes de somme pondérée des débits, avec des poids appropriés. La prise en compte du CSIT partiel s'effectue en ajoutant une marge aux gains de canal estimés. La seconde contribution de l'article est la résolution de ce problème sous une forme quasi fermée, dans laquelle seuls deux paramètres réels doivent être optimisés, par exemple au moyen de recherches dichotomiques. D'une part, le problème considéré généralise les travaux existants dans lesquels seuls deux des trois messages sont transmis. D'autre part, la solution trouvée possède également des applications pratiques dans l'allocation des ressources des systèmes de multiplexage orthogonal en fréquence (OFDM) soumis à la fois à des contraintes de confidentialité et d'équité.
fr
This paper investigates the problem of finding a fixed point for a global nonexpansive operator under time-varying communication graphs in real Hilbert spaces, where the global operator is separable and composed of an aggregate sum of local nonexpansive operators. Each local operator is only privately accessible to each agent, and all agents constitute a network. To seek a fixed point of the global operator, it is indispensable for agents to exchange local information and update their solution cooperatively. To solve the problem, two algorithms are developed, called distributed Krasnosel'skiĭ-Mann (D-KM) and distributed block-coordinate Krasnosel'skiĭ-Mann (D-BKM) iterations, for which the D-BKM iteration is a block-coordinate version of the D-KM iteration in the sense of randomly choosing and computing only one block-coordinate of local operators at each time for each agent. It is shown that the proposed two algorithms can both converge weakly to a fixed point of the global operator. Meanwhile, the designed algorithms are applied to recover the classical distributed gradient descent (DGD) algorithm, devise a new block-coordinate DGD algorithm, handle a distributed shortest distance problem in the Hilbert space for the first time, and solve linear algebraic equations in a novel distributed approach. Finally, the theoretical results are corroborated by a few numerical examples.
Makalah ini mengkaji permasalahan pencarian titik tetap untuk operator nonekspansif global di bawah graf komunikasi yang berubah-ubah dalam ruang Hilbert real, di mana operator global bersifat terpisahkan dan terdiri dari jumlah agregat dari operator-operator nonekspansif lokal. Setiap operator lokal hanya dapat diakses secara privat oleh masing-masing agen, dan semua agen membentuk suatu jaringan. Untuk mencari titik tetap dari operator global, sangatlah penting bagi para agen untuk bertukar informasi lokal dan memperbarui solusi mereka secara kooperatif. Untuk menyelesaikan permasalahan tersebut, dikembangkan dua algoritma, yaitu iterasi Krasnosel'ski\u{\i}-Mann terdistribusi (D-KM) dan iterasi block-coordinate Krasnosel'ski\u{\i}-Mann terdistribusi (D-BKM), di mana iterasi D-BKM merupakan versi block-coordinate dari iterasi D-KM dalam artian hanya satu block-coordinate dari operator lokal yang dipilih secara acak dan dihitung pada setiap waktu untuk setiap agen. Diperlihatkan bahwa kedua algoritma yang diusulkan dapat konvergen secara lemah ke suatu titik tetap dari operator global. Di samping itu, algoritma yang dirancang diterapkan untuk merekonstruksi algoritma turunan gradien terdistribusi (DGD) klasik, merancang algoritma DGD block-coordinate yang baru, menangani permasalahan jarak terpendek terdistribusi dalam ruang Hilbert untuk pertama kalinya, serta menyelesaikan persamaan aljabar linear dengan pendekatan terdistribusi yang baru. Akhirnya, hasil-hasil teoritis dikonfirmasi melalui beberapa contoh numerik.
id
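The abstract above builds on the Krasnosel'skiĭ-Mann update x_{k+1} = (1−α)x_k + α·T(x_k) for a nonexpansive operator T. A centralized toy sketch of that update — not the distributed D-KM/D-BKM algorithms over time-varying graphs — where T averages two agents' private operators, here interval projections (projections are the standard example of nonexpansive maps):

```python
def proj_interval(lo, hi):
    # projection onto [lo, hi]: a nonexpansive operator on the real line
    return lambda x: min(max(x, lo), hi)

def km_iteration(operators, x0, alpha=0.5, steps=200):
    """Krasnosel'skii-Mann iteration x_{k+1} = (1-a)*x_k + a*T(x_k),
    where T averages the local operators (an average of nonexpansive
    maps is itself nonexpansive)."""
    x = x0
    for _ in range(steps):
        tx = sum(op(x) for op in operators) / len(operators)
        x = (1.0 - alpha) * x + alpha * tx
    return x

# two agents with private operators: projections onto [0, 2] and [1, 3];
# fixed points of the averaged operator are exactly the overlap [1, 2]
ops = [proj_interval(0.0, 2.0), proj_interval(1.0, 3.0)]
x_fix = km_iteration(ops, x0=10.0)
```

Started from x0 = 10, the iterates contract monotonically onto the nearest fixed point, 2.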
Using a fully ab-initio methodology, we demonstrate how the lattice vibrations couple with neutral excitons in monolayer WSe2 and contribute to the non-radiative excitonic lifetime. We show that only by treating the electron-electron and electron-phonon interactions at the same time it is possible to obtain an unprecedented agreement of the zero and finite-temperature optical gaps and absorption spectra with the experimental results. The bare energies were calculated by solving the Kohn-Sham equations, whereas G$_{0}$W$_{0}$ many body perturbation theory was used to extract the excited state energies. A coupled electron-hole Bethe-Salpeter equation was solved incorporating the polaronic energies to show that it is the in-plane torsional acoustic phonon branch that contributes mostly to the A and B exciton build-up. We find that the three A, B and C excitonic peaks exhibit different behaviour with temperature, displaying different non-radiative linewidths. There is no considerable change in the strength of the excitons with temperature, but the A-exciton exhibits a darker nature in comparison to the C-exciton. Further, all the excitonic peaks redshift as temperature rises. Renormalization of the bare electronic energies by phonon interactions and the anharmonic lattice thermal expansion causes a decreasing band-gap with increasing temperature. The zero point energy renormalization (31 meV) is found to be entirely due to the polaronic interaction with negligible contribution from lattice anharmonicities. These findings may have a profound impact on electronic and optoelectronic device technologies based on these monolayers.
Using a fully ab-initio methodology, we demonstrate how the lattice vibrations couple with neutral excitons in monolayer WSe2 and contribute to the non-radiative excitonic lifetime. We show that only by treating the electron-electron and electron-phonon interactions at the same time is it possible to obtain an unprecedented agreement of the zero- and finite-temperature optical gaps and absorption spectra with the experimental results. The bare energies were calculated by solving the Kohn-Sham equations, whereas G$_{0}$W$_{0}$ many-body perturbation theory was used to extract the excited-state energies. A coupled electron-hole Bethe-Salpeter equation was solved incorporating the polaronic energies, showing that it is the in-plane torsional acoustic phonon branch that contributes most to the A and B exciton build-up. We find that the three excitonic peaks A, B, and C exhibit different behaviour with temperature, displaying different non-radiative linewidths. There is no considerable change in the strength of the excitons with temperature, but the A exciton exhibits a darker nature in comparison to the C exciton. Furthermore, all the excitonic peaks redshift as the temperature rises. Renormalization of the bare electronic energies by phonon interactions and the anharmonic lattice thermal expansion causes a decreasing band gap with increasing temperature. The zero-point energy renormalization (31 meV) is found to be entirely due to the polaronic interaction, with a negligible contribution from lattice anharmonicities. These findings may have a profound impact on electronic and optoelectronic device technologies based on these monolayers.
en
Edge computing moves the computation closer to the data and the data closer to the user to overcome the high latency communication of cloud computing. Storage at the edge allows data access with high speeds that enable latency-sensitive applications in areas such as autonomous driving and smart grid. However, several distributed services are typically designed for the cloud and building an efficient edge-enabled storage system is challenging because of the distributed and heterogeneous nature of the edge and its limited resources. In this paper, we propose EdgeKV, a decentralized storage system designed for the network edge. EdgeKV offers fast and reliable storage, utilizing data replication with strong consistency guarantees. With a location-transparent and interface-based design, EdgeKV can scale with a heterogeneous system of edge nodes. We implement a prototype of the EdgeKV modules in Golang and evaluate it in both the edge and cloud settings on the Grid'5000 testbed. We utilize the Yahoo! Cloud Serving Benchmark (YCSB) to analyze the system's performance under realistic workloads. Our evaluation results show that EdgeKV outperforms the cloud storage setting with both local and global data access with an average write response time and throughput improvements of 26% and 19% respectively under the same settings. Our evaluations also show that EdgeKV can scale with the number of clients, without sacrificing performance. Finally, we discuss the energy efficiency improvement when utilizing edge resources with EdgeKV instead of a centralized cloud.
Edge computing moves the computation closer to the data and the data closer to the user to overcome the high-latency communication of cloud computing. Storage at the edge allows data access at high speeds, enabling latency-sensitive applications in areas such as autonomous driving and the smart grid. However, several distributed services are typically designed for the cloud, and building an efficient edge-enabled storage system is challenging because of the distributed and heterogeneous nature of the edge and its limited resources. In this paper, we propose EdgeKV, a decentralized storage system designed for the network edge. EdgeKV offers fast and reliable storage, utilizing data replication with strong consistency guarantees. With a location-transparent and interface-based design, EdgeKV can scale with a heterogeneous system of edge nodes. We implement a prototype of the EdgeKV modules in Golang and evaluate it in both the edge and cloud settings on the Grid'5000 testbed. We utilize the Yahoo! Cloud Serving Benchmark (YCSB) to analyze the system's performance under realistic workloads. Our evaluation results show that EdgeKV outperforms the cloud storage setting with both local and global data access, with average write response time and throughput improvements of 26% and 19%, respectively, under the same settings. Our evaluations also show that EdgeKV can scale with the number of clients without sacrificing performance. Finally, we discuss the energy-efficiency improvement when utilizing edge resources with EdgeKV instead of a centralized cloud.
en
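The combination of location transparency, hash-based routing, and strongly consistent replication that the abstract attributes to EdgeKV can be illustrated with a toy model. This is a sketch under assumptions not taken from the paper: replicas are in-process dicts, a write is acknowledged only after every replica in its group applies it (synchronous replication), and the class and method names are hypothetical, not EdgeKV's actual API.

```python
# Toy model of an interface-based, location-transparent KV store:
# clients call put/get; keys are hashed to a replica group; writes
# are applied to all replicas before acknowledgement.
import hashlib

class Replica:
    def __init__(self):
        self.store = {}

class ReplicaGroup:
    """A group of edge nodes holding copies of the same key range."""
    def __init__(self, n_replicas=3):
        self.replicas = [Replica() for _ in range(n_replicas)]

    def put(self, key, value):
        # Synchronous replication: every replica applies the write
        # before the client sees an ack (strong consistency).
        for r in self.replicas:
            r.store[key] = value
        return "ack"

    def get(self, key):
        # With strong consistency, any replica can serve the read.
        return self.replicas[0].store.get(key)

class EdgeKV:
    """Front end hiding node locations behind a put/get interface."""
    def __init__(self, n_groups=4):
        self.groups = [ReplicaGroup() for _ in range(n_groups)]

    def _route(self, key):
        h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
        return self.groups[h % len(self.groups)]

    def put(self, key, value):
        return self._route(key).put(key, value)

    def get(self, key):
        return self._route(key).get(key)

kv = EdgeKV()
kv.put("sensor/42", "21.5C")
print(kv.get("sensor/42"))  # -> 21.5C
```

The real system distributes the replica groups across heterogeneous edge nodes and coordinates writes over the network; the sketch only captures the routing and consistency logic a client observes.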
Price differentiation describes a marketing strategy to determine the price of goods on the basis of a potential customer's attributes like location, financial status, possessions, or behavior. Several cases of online price differentiation have been revealed in recent years. For example, different pricing based on a user's location was discovered for online office supply chain stores and there were indications that offers for hotel rooms are priced higher for Apple users compared to Windows users at certain online booking websites. One potential source for relevant distinctive features are \emph{system fingerprints}, i.\,e., a technique to recognize users' systems by identifying unique attributes such as the source IP address or system configuration. In this paper, we shed light on the ecosystem of pricing at online platforms and aim to detect if and how such platform providers make use of price differentiation based on digital system fingerprints. We designed and implemented an automated price scanner capable of disguising itself as an arbitrary system, leveraging real-world system fingerprints, and searched for price differences related to different features (e.\,g., user location, language setting, or operating system). This system allows us to explore price differentiation cases and expose those characteristic features of a system that may influence a product's price.
Price differentiation refers to a marketing strategy in which the price of goods is determined on the basis of a potential customer's attributes, such as location, financial status, possessions, or behavior. Several cases of online price differentiation have been revealed in recent years. For example, different pricing based on a user's location was discovered for online office supply chain stores, and there were indications that at certain online booking websites, hotel rooms are priced higher for Apple users than for Windows users. One potential source of such distinctive features is \emph{system fingerprints}, i.e., a technique for recognizing users' systems by identifying unique attributes such as the source IP address or system configuration. In this paper, we shed light on the ecosystem of pricing at online platforms and aim to detect whether and how such platform providers make use of price differentiation based on digital system fingerprints. We designed and implemented an automated price scanner capable of disguising itself as an arbitrary system by leveraging real-world system fingerprints, and searched for price differences related to different features (e.g., user location, language setting, or operating system). This system allows us to explore cases of price differentiation and expose those characteristic features of a system that may influence a product's price.
en
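The core mechanism the abstract describes, probing the same product while varying fingerprint-bearing attributes, can be sketched as follows. The header values, feature sets, and proxy locations below are made-up examples, not the real-world fingerprints the paper's scanner uses, and no network requests are issued here.

```python
# Enumerate fingerprint variants a price scanner could present when
# probing one product URL. Each variant fixes one combination of OS,
# language, and (hypothetical) exit-node location; observed prices
# would then be grouped by feature to detect differentiation.
from itertools import product

USER_AGENTS = {
    "windows": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "macos": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
}
LANGUAGES = {"en-US": "en-US,en;q=0.9", "de-DE": "de-DE,de;q=0.9"}
PROXY_LOCATIONS = ["us-east", "eu-central"]  # illustrative exit nodes

def fingerprint_variants():
    for (os_name, ua), (lang, accept), loc in product(
        USER_AGENTS.items(), LANGUAGES.items(), PROXY_LOCATIONS
    ):
        headers = {"User-Agent": ua, "Accept-Language": accept}
        yield {"os": os_name, "language": lang, "location": loc,
               "headers": headers}

variants = list(fingerprint_variants())
print(len(variants))  # 2 OSes x 2 languages x 2 locations = 8 probes
```

In an actual scan, each variant would be sent through a proxy in the stated location, and a price difference correlated with exactly one varied feature would flag that feature as price-relevant.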
Operationalizing AI has become a major endeavor in both research and industry. Automated, operationalized pipelines that manage the AI application lifecycle will form a significant part of tomorrow's infrastructure workloads. To optimize operations of production-grade AI workflow platforms we can leverage existing scheduling approaches, yet it is challenging to fine-tune operational strategies that achieve application-specific cost-benefit tradeoffs while catering to the specific domain characteristics of machine learning (ML) models, such as accuracy, robustness, or fairness. We present a trace-driven simulation-based experimentation and analytics environment that allows researchers and engineers to devise and evaluate such operational strategies for large-scale AI workflow systems. Analytics data from a production-grade AI platform developed at IBM are used to build a comprehensive simulation model. Our simulation model describes the interaction between pipelines and system infrastructure, and how pipeline tasks affect different ML model metrics. We implement the model in a standalone, stochastic, discrete event simulator, and provide a toolkit for running experiments. Synthetic traces are made available for ad-hoc exploration as well as statistical analysis of experiments to test and examine pipeline scheduling, cluster resource allocation, and similar operational mechanisms.
Operationalizing AI has become a major endeavor in both research and industry. Automated, operationalized pipelines that manage the AI application lifecycle will form a significant part of tomorrow's infrastructure workloads. To optimize the operation of production-grade AI workflow platforms, we can leverage existing scheduling approaches, yet it is challenging to fine-tune operational strategies that achieve application-specific cost-benefit tradeoffs while catering to the specific domain characteristics of machine learning (ML) models, such as accuracy, robustness, or fairness. We present a trace-driven, simulation-based experimentation and analytics environment that allows researchers and engineers to devise and evaluate such operational strategies for large-scale AI workflow systems. Analytics data from a production-grade AI platform developed at IBM are used to build a comprehensive simulation model. Our simulation model describes the interaction between pipelines and the system infrastructure, and how pipeline tasks affect different ML model metrics. We implement the model in a standalone, stochastic, discrete-event simulator and provide a toolkit for running experiments. Synthetic traces are made available for ad-hoc exploration as well as statistical analysis of experiments, to test and examine pipeline scheduling, cluster resource allocation, and similar operational mechanisms.
en
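The stochastic, discrete-event simulation approach mentioned above can be illustrated with a minimal event-queue simulator. The specifics here are invented for the demo and are not the IBM platform's model: pipeline tasks arrive as a Poisson process, a fixed-size cluster serves them FIFO with exponential service times, and the measured metric is mean queueing delay.

```python
# Minimal stochastic discrete-event simulator: a min-heap orders
# arrival events by time; each task is dispatched FIFO to the
# earliest-free worker. All rates and sizes are illustrative.
import heapq
import random

def simulate(arrival_rate=1.0, service_rate=1.5, workers=2,
             n_tasks=5000, seed=7):
    rng = random.Random(seed)
    events = []  # (time, kind, task_id) min-heap
    t = 0.0
    for i in range(n_tasks):
        t += rng.expovariate(arrival_rate)  # Poisson arrivals
        heapq.heappush(events, (t, "arrival", i))
    free_at = [0.0] * workers  # when each worker next becomes idle
    waits = []
    while events:
        now, _, _ = heapq.heappop(events)
        # FIFO dispatch: send the task to the earliest-free worker.
        w = min(range(workers), key=lambda k: free_at[k])
        start = max(now, free_at[w])
        waits.append(start - now)
        free_at[w] = start + rng.expovariate(service_rate)
    return sum(waits) / len(waits)

mean_wait = simulate()
print(round(mean_wait, 3))  # mean queueing delay for the toy cluster
```

Swapping the dispatch rule or the worker count and re-running is exactly the kind of what-if experiment a trace-driven environment supports; a real model would replay recorded traces instead of sampling synthetic interarrival times.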