Unable to drain k8s node: error because pod has more than one PDB, which the eviction subresource does not support

Bug description
I set replicaCount: 2 and hpaSpec minReplicas: 2 in IstioControlPlane, and I can see the pods are doubled. But when I drained the node, I got the following error message:

error: error when evicting pod "istio-ingressgateway-5b694f6978-pc4f9": This pod has more than one PodDisruptionBudget, which the eviction subresource does not support.

Expected behavior
After increasing the number of replicas to 2, draining the node should complete without error. I saw similar issues reported, and some reporters resolved them by increasing the replicas to 2, but for some reason that is not the case in my situation.
https://github.com/istio/istio/issues/12602
kubernetes ticket - https://github.com/kubernetes/kubernetes/pull/90253

Steps to reproduce the bug
1. Set minReplicas: 2 on the Istio HPAs
2. Check the pods; make sure they are duplicated
3. Make sure the istio-ingressgateway pod is duplicated
4. Drain one of the nodes (preferably one that has Istio resources on it)
5. An error is thrown with the following message:
error: error when evicting pod "istio-ingressgateway-5b694f6978-pc4f9": This pod has more than one PodDisruptionBudget, which the eviction subresource does not support.

Version (include the output of istioctl version --remote and kubectl version and helm version if you used Helm)
istioctl: v1.4.3
kubectl: 1.14
helm version: v2.10.0

How was Istio installed?
Generate manifest files through the helm template command, then use kubectl apply -f against the generated manifest files.

Environment where bug was observed (cloud vendor, OS, etc)
GKE

IstioControlPlane resource

apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
spec:
  autoInjection:
    components:
      injector:
        k8s:
          replicaCount: 2
          hpaSpec:
            minReplicas: 2
  policy:
    components:
      policy:
        k8s:
          replicaCount: 2
          hpaSpec:
            minReplicas: 2
  security:
    components:
      certManager:
        enabled: false
      citadel:
        k8s:
          replicaCount: 2
          hpaSpec:
            minReplicas: 2
  trafficManagement:
    components:
      pilot:
        k8s:
          replicaCount: 2
          hpaSpec:
            minReplicas: 2
  gateways:
    components:
      ingressGateway:
        enabled: true
        k8s:
          replicaCount: 2
          hpaSpec:
            minReplicas: 2
      egressGateway:
        enabled: true
        k8s:
          replicaCount: 2
          hpaSpec:
            minReplicas: 2
  configManagement:
    components:
      galley:
        k8s:
          replicaCount: 2
          hpaSpec:
            minReplicas: 2
  telemetry:
    components:
      telemetry:
        k8s:
          replicaCount: 2
          hpaSpec:
            minReplicas: 2
  coreDNS:
    enabled: true
    components:
      coreDNS:
        k8s:
          replicaCount: 2
          hpaSpec:
            minReplicas: 2
...

Snippet of the HPA (check: minReplicas: 2)

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  labels:
    app: istio-ingressgateway
    istio: ingressgateway
    release: istio
  name: istio-ingressgateway
  namespace: istio-system
spec:
  maxReplicas: 5
  metrics:
  - resource:
      name: cpu
      targetAverageUtilization: 80
    type: Resource
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: istio-ingressgateway

This is a duplicate of https://github.com/istio/istio/issues/12602, let's track it there to make things simpler. Thanks!

So the error message here (and in the linked k8s issues) is different from #12602:
Here: This pod has more than one PodDisruptionBudget, which the eviction subresource does not support.
There: Cannot evict pod as it would violate the pod's disruption budget.
Here the error is that two PDBs were added somehow, not that there was a PDB/HPA mismatch.
I'm not quite sure how it would get into that state based on my reading of the configurations, though (only one PDB per component).

@agilgur5 we recently hit the same error. It turned out we had an old PDB still present from an earlier version of Istio. The fix was to simply remove the PDB with the older Istio version reference.

@samwhitford that would confirm this is a separate issue from #12602 and should be reopened. I have a PR out for that issue with #25873, but it does not resolve the issue here of multiple PDBs. That sounds like a separate issue with istioctl upgrade.

For anyone with multiple ingress/egress gateways: make sure you have the appropriate PodDisruptionBudget label/selector patches in place. I recently ran into this issue on Istio 1.9.5 as a result of having multiple ingress gateways defined in my operator yaml but missing the PodDisruptionBudget label and selector patches for the gateway with a different name.

overlays:
  - kind: PodDisruptionBudget
    name: istio-ingressgateway-internal
    patches:
      - path: metadata.labels.app
        value: istio-ingressgateway-internal
      - path: metadata.labels.istio
        value: ingressgateway-internal
      - path: spec.selector.matchLabels.app
        value: istio-ingressgateway-internal
      - path: spec.selector.matchLabels.istio
        value: ingressgateway-internal

Without these patches, multiple PDBs will be created with the same selectors, resulting in multiple PDBs applying to a single pod.
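A quick way to understand the failure mode is to check which PDB selectors match a given pod's labels. The following is an illustrative sketch (the label sets and PDB names are made up for the example) of the matchLabels subset test that determines which PDBs select a pod:

```python
def matching_pdbs(pod_labels, pdbs):
    """Return the names of PDBs whose matchLabels select the given pod.

    A PDB selects a pod when every key/value pair in its matchLabels
    is present in the pod's labels. If more than one PDB matches,
    eviction fails with the error from this issue.
    """
    return [
        name for name, match_labels in pdbs.items()
        if all(pod_labels.get(k) == v for k, v in match_labels.items())
    ]

# Hypothetical labels: a pod selected by two PDBs at once, which is
# exactly the state that makes the eviction subresource give up.
pod_labels = {"app": "istio-ingressgateway", "istio": "ingressgateway"}
pdbs = {
    "istio-ingressgateway": {"app": "istio-ingressgateway"},
    "istio-ingressgateway-internal": {"istio": "ingressgateway"},
}
print(matching_pdbs(pod_labels, pdbs))
# Both PDBs match, so eviction of this pod would fail
```

Running `kubectl get pdb -n istio-system -o wide` and comparing each PDB's selector against the stuck pod's labels performs the same check by hand.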
GITHUB_ARCHIVE
What does it mean to buy (long) volatility or sell (short) volatility in option trading, specifically? I often hear this term quite a lot from traders; what does it really mean? And an additional question: in option trading, is "buying vol" equivalent to "buying options" (no matter whether it's a call, put or even straddle)? On the other hand, is "selling vol" equivalent to "selling options"?

All else being equal, the price of an option is higher if volatility is higher, so you're pretty much correct that long (short) option positions are long (short) vol. But they might be long and short other things as well, so getting pure vol exposure without exposure to anything else takes hedging; for example, delta hedging eliminates exposure to movement in the underlying, in theory.

"Buying vol" means having a position with positive vega, and yes, both a long call and a long put have positive vega (speaking here of vanilla calls and puts), as you can verify.

Alex C - Careful though: some say "long volatility" for long gamma positions, and don't care about vega. This is because some trades are "long volatility" as "long option prices", hoping for implied volatilities to increase, and some are "long volatility" on the underlying's moves.

Interesting point, thanks.

@siou0107 thanks for the answer! I have exactly the doubt you describe. "Long vol" can mean (1) long gamma, profiting if the underlying moves, or (2) long vega, profiting if implied volatility rises. However, "long vol" essentially does both at the same time, so how would these two strategies interfere with each other? Any comments?

Not necessarily. Think of a calendar spread: the gamma is high on short-term (ATM) options while the vega is rather low, and conversely on long-term options. This is a typical interview question btw ;)

Not necessarily.
It is true for vanilla options: all else being equal, their price increases with implied volatility, and if you delta-hedge them the residual P&L is a function of the difference between realised and implied volatility (variance). This is expressed in the so-called Black-Scholes robustness formula: for a long, delta-hedged option position, we have $$\text{P&L} = \frac{1}{2}\int_0^T{e^{r(T-t)}\Gamma_tS_t^2\left(\sigma_t^2 - \tilde{\sigma}^2 \right)dt}$$ where $\sigma_t$ is the realised volatility (its square being the realised variance) and $\tilde{\sigma}$ is your implied volatility (used to compute the option price and gamma).

However, when you talk about more exotic instruments (e.g., digital or barrier options), this becomes more subtle. Take an out-of-the-money knock-in option: you are totally long volatility, since you need 1) to knock in, and 2) the spot to move as far away from the strike as possible so that you get the biggest payoff. Now take an out-of-the-money up-and-out call: you want high volatility near the strike, to get into the money, but once in the money you want low volatility, so as not to trigger the knock-out barrier. Thus, you are not uniformly long or short volatility. In fact, such options have a gamma that changes sign.

Proof. Suppose you just bought a European option with terminal payoff $g(S_T)$, considering a BS model with constant volatility $\tilde{\sigma}$, and delta-hedged it on the basis of that model. The PDE satisfied by your model's price $p^M(t,S_t)$ is: $$\frac{\partial p^M}{\partial t}\left(t, S_t\right) + rS_t\frac{\partial p^M}{\partial x}\left(t, S_t\right) + \frac{1}{2}\tilde{\sigma}^2S_t^2 \frac{\partial^2 p^M}{\partial x^2}\left(t, S_t\right) - rp^M(t,S_t) = 0$$ $$\Leftrightarrow \frac{\partial p^M}{\partial t}\left(t, S_t\right) + rS_t\frac{\partial p^M}{\partial x}\left(t, S_t\right) = rp^M(t,S_t) - \frac{1}{2} \tilde{\sigma}^2S_t^2 \frac{\partial^2 p^M}{\partial x^2}\left(t, S_t\right)$$ $$p^M(T, x) = g(x)$$ (If that were not the case, your model would be arbitrageable.)
The value of your hedging portfolio (stock and cash) has the following dynamics: $$dV_t = \frac{\partial p^M}{\partial x}\left(t, S_t\right)dS_t + r\left[ V_t - S_t \frac{\partial p^M}{\partial x}\left(t, S_t\right)\right]dt$$ If you apply Itô's lemma to $p^M$, you get: $$dp^M(t,S_t) = \frac{\partial p^M}{\partial t}\left(t, S_t\right) dt + \frac{\partial p^M}{\partial x}\left(t, S_t\right) dS_t + \frac{1}{2} \frac{\partial^2 p^M}{\partial x^2}\left(t, S_t\right) d\langle S \rangle_t = \left[\frac{\partial p^M}{\partial t}\left(t, S_t\right) + \frac{1}{2} \sigma_t^2S_t^2\frac{\partial^2 p^M}{\partial x^2}\left(t, S_t\right)\right]dt + \frac{\partial p^M}{\partial x}\left(t, S_t\right) dS_t$$ If you denote by $Z_t := p^M\left(t, S_t\right) - V_t$ the value of the hedging P&L, you have $$dZ_t = \left[\frac{\partial p^M}{\partial t}\left(t, S_t\right) + \frac{1}{2} \sigma_t^2S_t^2\frac{\partial^2 p^M}{\partial x^2}\left(t, S_t\right)\right]dt - r\left[ V_t - S_t \frac{\partial p^M}{\partial x}\left(t, S_t\right)\right]dt$$ $$Z_0 = 0$$ The initial condition corresponds to the assumption that you invest the whole premium $p^M\left(0, S_0\right)$ into the hedging portfolio $V_0$. You substitute the terms from your model PDE and get: $$dZ_t = \left[rp^M(t, S_t) + \frac{1}{2} \left(\sigma_t^2 - \tilde{\sigma}^2\right)S_t^2\frac{\partial^2 p^M}{\partial x^2}\left(t, S_t\right)\right]dt - rV_tdt = \left[rZ_t + \frac{1}{2} \left(\sigma_t^2 - \tilde{\sigma}^2\right)S_t^2\frac{\partial^2 p^M}{\partial x^2}\left(t, S_t\right) \right]dt$$ Since $p^M\left(T, S_T\right) = g\left(S_T\right)$, solving this linear ODE with $Z_0 = 0$ gives the final P&L: $$\frac{1}{2}\int_0^T{e^{r(T-t)}\Gamma_tS_t^2\left(\sigma_t^2 - \tilde{\sigma}^2 \right)dt}$$

+1 Good answer. I would just say sigma tilde is the static option hedge volatility.

Well, not necessarily. If you use a different implied volatility to compute your Greeks (so delta and gamma) at each time step, that result will still hold.
But that is not crystal clear in my formula; I will add the time index to the implied volatility. Thanks for the feedback!

I'm not sure. I mean the technical formula itself in a non-Black-Scholes (non-constant volatility) world, not practical applications. A time-dependent sigma tilde wouldn't be an implied volatility at time t (what expiry? what strike?), but maybe a Dupire local volatility surface recalibrated periodically to the implied vol surface.

It would be the implied volatility of your option, recalibrated at each time period :) I'll make a quick proof soon.

Do it! I'll validate it myself. Carr-Madan, with a flat hedging implied vol, is clear. Google pointed to the resource below for extensions. It should be a good start. https://pdfs.semanticscholar.org/c1ab/1d722a56ffd88e4523a456f0dfefd7fee048.pdf

Actually I think you are right @ir7. The implied vol $\tilde{\sigma}$ has to be constant if one is just considering the underlying realised volatility. You can consider a different implied volatility for each time period, but then you have to add a vega P&L term (P&L due to the remarking of implied volatility).
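The robustness formula above can be illustrated numerically. Below is a rough Monte Carlo sketch (the parameters are arbitrary, and r = 0 for simplicity): a long ATM call is delta-hedged at an implied volatility below the realised one, and the average hedging P&L comes out positive, as the sign of $\sigma_t^2 - \tilde{\sigma}^2$ predicts.

```python
import math
import numpy as np

def ncdf(x):
    # Standard normal CDF, vectorised via math.erf
    return 0.5 * (1.0 + np.vectorize(math.erf)(x / math.sqrt(2.0)))

def bs_call(S, K, tau, sigma):
    # Black-Scholes call price with r = 0
    d1 = (np.log(S / K) + 0.5 * sigma**2 * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * ncdf(d1) - K * ncdf(d2)

def bs_delta(S, K, tau, sigma):
    d1 = (np.log(S / K) + 0.5 * sigma**2 * tau) / (sigma * math.sqrt(tau))
    return ncdf(d1)

rng = np.random.default_rng(0)
n_paths, n_steps = 4000, 250
T, S0, K = 1.0, 100.0, 100.0
sig_real, sig_impl = 0.3, 0.2          # realised vol > implied vol
dt = T / n_steps

S = np.full(n_paths, S0)
premium = float(bs_call(S0, K, T, sig_impl))
hedge_gains = np.zeros(n_paths)
for i in range(n_steps):
    tau = T - i * dt
    delta = bs_delta(S, K, tau, sig_impl)  # hedge ratio from the *implied* vol
    S_next = S * np.exp(-0.5 * sig_real**2 * dt
                        + sig_real * math.sqrt(dt) * rng.standard_normal(n_paths))
    hedge_gains += delta * (S_next - S)
    S = S_next

# Long the option, short delta in stock: P&L = payoff - premium - hedge gains
pnl = np.maximum(S - K, 0.0) - premium - hedge_gains
print(f"premium at implied vol: {premium:.2f}")
print(f"mean hedged P&L: {pnl.mean():.2f} (positive, since realised > implied vol)")
```

Flipping sig_real below sig_impl makes the mean P&L negative, which is the short-vol side of the same statement.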
STACK_EXCHANGE
Baybench 1.3 - UNZIP and ADF. I downloaded these files from Aminet yesterday and placed them on a 720K formatted PC disk using MultiDOS. Later I will copy over the ADF ZIP from the PC to the same disk. Note that using the ZIP tool on the PC reduces the size of the ADF file from 880K to 256K. Had I not done this, the ADF would not have fit onto a 720K disk. I am still using 1.3 on a 500 Plus Amiga with a ROM switcher allowing me to use the 1.3 ROM. Albeit with my own Workbench.

And there she is. 'Don't call me BayB' - a line from Barb Wire... Anyhoo. This is BayBench, my own Workbench disk for 1.3 using files from Amiga Computing cover disks. It has been enhanced with MultiDOS and Disk Master... all on one disk. OK, first thing I do is mount the PC drives MD0: and MD1: using the command line thus. The Workbench loads with a CLI [shell] at the bottom of the screen. As should 1.3. The 'Bayb' on the BayBench disk is Clara Veiga. I am scuzz, and that is a PC disk on MD1: It looks empty but it is not, as I will show.

Next I fire up Disk Master II, which is incorporated into my BayBench disk. Having clicked MD1: from the drive list, I get the contents of the PC disk shown. I next copy the files into RAM so I can extract all the software. Next I use Disk Master to format an Amiga DD disk, which is a copy of an old Amiga Computing cover disk. Note the formatting progress at the head of the screen. With an empty disk I now LHA-extract the files in RAM. The extracted files are now on the floppy. Note that the date is correct at the top of the screen, and the computer is a Plus and I did not set the date. It has a battery.

I now copy adf2disk and UnZip to RAM. I change directory to RAM and enter the command to UNZIP the PC file. Stop and think a moment. An Amiga 500 unzipping a file created on a modern PC, simply by using a tool in the CLI. I now have a 901120-byte ADF file that came to the A500 Plus on a 720KB disk. And the final challenge....
Can I create an Amiga disk from an image. I enter the command for adf2disk ...... It didn't work. But then I already knew that it wouldn't. Worth a try. Not even ADFBlitzer worked. I was able to transfer and de-crunch all the LHA files though. So not all a waste of time. FINAL WORD: Apt indeed. ADF is a tool of the emulator. You really don't need the emulator if you are willing to obtain the media as we did back when there was no emulator. I know it's a bit more tricky, but I believe with a bit of effort it can be done. The Amiga works best as a real Amiga with real disks and real software. It really is so so much easier. For one I wouldn't have wasted the last four days creating my very own BayBench disks. Anyhoo.. all done. Time to put those disks to bed as I seriously don't need them.
OPCFW_CODE
I have no doubt that there are many more interesting test cases you can write down for a pen. So here I am, waiting for such great ideas: you can write down test cases around a pen in the comments below. I would really appreciate it if you took up this testing challenge to bring good pen test cases to our readers.

Actors and User Stories

So we are trying to share some test scenarios, and I hope these all help you write the pencil test case. For the exact difference between test cases and test scenarios, check our post: Difference b/w Test Case and Test Scenario. One suggestion before starting to explain or write test scenarios for the pencil test case: make sure you have got all of the requirements. If not, try to ask all kinds of questions about the pencil, running them through your mind, so that you can answer well and will not get stuck after writing a few pencil test cases.
In the process of thinking about these actions, I actively used a regular pencil. The primary actors of the system are the users who use the system to achieve some business goal. A student might use the pencil for writing on paper. A cartoonist needs pencils of various colors and hardnesses to draw pictures and shades. Each of the manipulations described above will have a certain effect on the pencil. After each iteration, we test the use of the pencil (see functional testing) and sharpen it. From the point of view of eco-friendliness, the best pencils are unvarnished and without an eraser (by the way, they are found in great variety in Ikea, Leroy Merlin, etc.). And for this reason I dislike pencils with an eraser at the end, because if there is one, especially with a metal holder, it's inconvenient to chew on.

This post is about writing test cases for a pencil. Understanding the system's actors and user stories helps you understand the system better and write effective test cases for the system under test.

Initial Properties of a Pencil Taken Out of the Box, or Primary Testing

It may not be a big issue or a showstopper, but from a branding perspective, I think small details like these are VERY important. Ensure your test case is in the proper format and includes the relevant columns. I've only added the minimum to get you started. I'm sure there are tons more, but each of these pens has its own functions and should be tested in its own specific way. Below are some types of pens you may want to consider testing.
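As an illustration of the "proper format" mentioned above, here is a hypothetical sketch of pencil test cases as structured records. The column names follow one common test-case-template convention, not any standard:

```python
# Each test case carries the columns commonly used in test-case templates.
pencil_test_cases = [
    {
        "id": "TC_PENCIL_001",
        "title": "Verify writing on plain paper",
        "steps": ["Sharpen the pencil", "Write a sentence on plain paper"],
        "expected": "A legible graphite line is produced",
        "priority": "High",
    },
    {
        "id": "TC_PENCIL_002",
        "title": "Verify eraser removes pencil marks",
        "steps": ["Write a word", "Rub the eraser over the word"],
        "expected": "The word is no longer visible",
        "priority": "Medium",
    },
]

def format_case(case):
    """Render one test case as a single plain-text row for a review document."""
    steps = "; ".join(case["steps"])
    return (f'{case["id"]} [{case["priority"]}] {case["title"]} | '
            f'Steps: {steps} | Expected: {case["expected"]}')

for case in pencil_test_cases:
    print(format_case(case))
```

Keeping cases as data like this makes it easy to add columns (preconditions, actual result, status) later without rewriting every case.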
OPCFW_CODE
Unexpected Current Working Directory -- Foxx Service -- fs Module my environment running ArangoDB I'm using the latest ArangoDB of the respective release series: [x] 3.1.16 On this operating system: [x] MacOS, version: I'm running this line of code and this is the output (tested on both dev and prod): const mp = require('fs').list('.'); [".ftp.ax890q",".ftp.iEMbXD",".ftp.MQ328r",".ftp.UMvBIw","cachegrind.out.43396","cachegrind.out.43397","cachegrind.out.43398","cachegrind.out.43399","cachegrind.out.43400","cachegrind.out.43415","cachegrind.out.43416","cachegrind.out.43417","cachegrind.out.43418","cachegrind.out.43419","cachegrind.out.43420","cachegrind.out.43421","cachegrind.out.43422","cachegrind.out.43423","cachegrind.out.43424","cachegrind.out.43425","cachegrind.out.43426","cachegrind.out.43427","cachegrind.out.43428","cachegrind.out.43429","cachegrind.out.43431","cachegrind.out.43432","cachegrind.out.43433","cachegrind.out.43434","cachegrind.out.43435","cachegrind.out.43436","cachegrind.out.43437","cachegrind.out.43438","cachegrind.out.43439","cachegrind.out.43440","cachegrind.out.43441","cachegrind.out.43442","cachegrind.out.43443","cachegrind.out.43444","cachegrind.out.43445","cachegrind.out.43446","cachegrind.out.43457","cachegrind.out.43458","cachegrind.out.43459","cachegrind.out.43460","cachegrind.out.43461","cachegrind.out.43462","cachegrind.out.43463","cachegrind.out.43464","cachegrind.out.43465","cachegrind.out.43466","cachegrind.out.43467","cachegrind.out.43468","cachegrind.out.43469","cachegrind.out.43470","cachegrind.out.43471","cachegrind.out.43472","cachegrind.out.43474","cachegrind.out.43475","cachegrind.out.43476","cachegrind.out.43477","cachegrind.out.43478","cachegrind.out.43479","cachegrind.out.43480","cachegrind.out.43482","cachegrind.out.43483","cachegrind.out.43484","cachegrind.out.43485","cachegrind.out.43486","cachegrind.out.43487","cachegrind.out.43488","com.cleverfiles.cfbackd.chief","com.cleverfiles.cfbackd.pid","com.docker.vmnetd.socke
t","filesystemui.socket","prl_event_tap.socket_501","Untitled-gA9vtKql.uicatalog","Untitled-yJPGbiKI.uicatalog"] I'm expecting to get the listing of the APP directory. But the CWD is set to another path. How can I change or get the current working directory? @pluma Have a look at the module.context.basePath. The reason the cwd is unrelated to the service is that it's global for ArangoDB, not specific to each service. please add a sample usage to fs docs, so others don't get confused. https://docs.arangodb.com/3.1/Manual/Appendix/JavaScriptModules/FileSystem.html Implemented. Thanks @p30arena for the suggestion.
GITHUB_ARCHIVE
This looks promising and I'm trying to manually install it on Arch Linux. I use MySQL with sockets and I cannot find how to set the path in config.yml. This is one of my attempts and the error I usually get:

# Database host
CRITICAL ▶ migration/Migrate 002 Migration failed: dial tcp: address tcp//run/mysqld/mysqld.sock: unknown port

Looks like we always assume a tcp connection. Fix is up: #1497 - fix: using mysql via a socket - api - Gitea. The changes in the PR assume a unix socket host would always start with /, similar to how we handle this for postgres connections. The fix is now merged; please check with the next unstable build (~45min) whether you're able to set it up with mysql via a unix socket. Please note in your config you'll need to set this to host: /run/mysqld/mysqld.sock (without the

I just tried with the correct host, and it still gives me the following error:

Could not connect to db: default addr for network '/run/mysqld/mysqld.sock' unknown

Permissions are correct and no extra logs from vikunja.

Does it work if you specify the address as

Migration failed: dial tcp: lookup unix(/run/mysqld/mysqld.sock): no such host

I've just pushed another fix for this in 7ad256f6cd. Can you check again with the last unstable build?

Working! Thanks for the quick fixes! ^^

Using config file: /usr/local/share/webapps/vikunja/config.yml
2023-04-17T21:18:46.265837223+02:00: INFO ▶ migration/Migrate 050 Ran all migrations successfully.
2023-04-17T21:18:46.265936249+02:00: INFO ▶ models/RegisterReminderCron 051 Mailer is disabled, not sending reminders per mail
2023-04-17T21:18:46.266013855+02:00: INFO ▶ models/RegisterOverdueReminderCron 052 Mailer is disabled, not sending overdue per mail
2023-04-17T21:18:46.266130644+02:00: INFO ▶ cmd/func25 053 Vikunja version v0.20.4+95-7ad256f6cd
⇨ http server started on /run/vikunja/vikunja.sock
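For anyone landing here later, a working database section of config.yml for a MySQL unix socket should look roughly like this. This is a sketch: the user/password/database values are placeholders, and the surrounding keys follow Vikunja's documented database options, with the socket path taken from this thread:

```yaml
database:
  type: mysql
  user: vikunja        # placeholder
  password: secret     # placeholder
  database: vikunja    # placeholder
  # For a unix socket, set host to the socket path itself;
  # per the fix above, a host starting with "/" is treated
  # as a socket rather than a tcp address.
  host: /run/mysqld/mysqld.sock
```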
OPCFW_CODE
On campus: 150 Academy Street, Newark, DE 19716. I'm located in room 259 of Colburn Lab. If you come up the main staircase in Colburn, once you're at the top of the stairs, you'll want to swing around to the wing behind you on the left. During the spring and fall semesters, I'm typically on campus from 7:30-4:30 on workdays. Summer is a little less predictable. My calendar below holds all my classes, office hours, and appointments. If there are gaps between "busy" events, I should be available, so feel free to email to set up an appointment or drop in! If my door is open, you're welcome to visit!

Office hours are posted in the calendar below. Please note that these are just the "guaranteed" hours; you are welcome to stop by whenever I am available. The last day for these office hours is the last day of classes. I have an open-door policy, and I'm usually around my office or elsewhere in Colburn. There's a note on my door if something unusual is happening. Spring semester office hours run from the first week of February through most of May, except on days when classes are not in session. Fall hours run from late August through mid-December.

Calendar of Appointments

Use the tabs at the upper right of the calendar to switch between week/month/agenda view as necessary. I have tried to force the default to the Eastern time zone, but please note that your personal settings may override this. Every April and October, I will email the link to my advising appointments to my advising list. If you lose the email or link, please contact me directly. If you miss your appointment, please revisit the sign-up page (if appointments are still available) or email me to reschedule (my calendar is below). I am assigned about thirty students from the Class of 2019 to formally advise, and we will meet in person at least once per semester. If you are not one of my advisees but have questions about the overall curriculum, I am happy to discuss them with you.
E-mail is definitely the best way to make sure you have my attention. I check my e-mail multiple times per day, but not continuously. I try to respond to e-mails within 24 hours during the workweek (and most weekends). Please consider the probability of delays on weekends and holidays (and when I am away for conferences). Please see here. Please consider what you could expect me to write on your behalf before making any requests. I am typically happy to write letters for you if we have worked together in some significant way so that I can write with specific details that make my recommendation strong. Joshua Enszer, Ph.D. University of Delaware Department of Chemical & Biomolecular Engineering
OPCFW_CODE
Embrace good typography on the Web
Published on January 9, 2016.

Typography on the web is often overlooked. Web sites usually seem not to care about the readability of their contents, and I think this has to do with the perceived difficulty of obtaining something different from the default, with results consistent across browsers. I am not an expert in web technology at all. However, when I first thought about the redesign of my personal blog, which resulted in the site you are currently reading, I chose to keep the layout simple, focusing instead on trying to exploit available web technologies to obtain the best possible typographical results. This post surveys what I wanted to do and what I've learnt trying to do it.

In the digital typography world, when one asks for quality, the answer is \(\LaTeX\). I love it. One of its major drawbacks, however, is that it is by design intrinsically tied to the paper medium. Some attempts to adapt \(\TeX\) and \(\LaTeX\) to other media such as web pages exist, but they are clumsy at best. What a web site can do, however, is to embed the same typographical elements and guidelines that make \(\LaTeX\) output look so good.

The first element in this regard is the font used to render the text. Thanks to this website, I've managed to use Computer Modern, the same font used by default in \(\LaTeX\) documents. I use it in both serif and sans-serif variants, and it looks incredibly good. This has been possible thanks to the CSS3 standard, which now supports custom web fonts. The supported font format, of course, varies between browsers, but fortunately the font comes nicely packaged in all the different formats needed to support the major browsers. As a rule of thumb, all the web site elements are rendered in sans-serif, while the content's text paragraphs are serifed. In addition to the main font, in a Computer Science blog it is important to also correctly and nicely typeset source code text.
I’ve choosen Hack, an open source monospace font designed to render source code on the screen. You can see how it looks later in this post. It is, by the way, also the font that I regularly use in my text editors. Another important aspect of a good looking document is the paragraph justification. Web sites tend to not justify text, leaving everything left-aligned, which looks very bad to me. Justification is, however, not easy at all to get right. Although the CSS3 standard natively supports text justification (it suffices to specify a text-align: justify property), it is completely useless without proper support for hyphenation. Justified but non hyphenated paragraphs look very weird, because the browser has to put too much space between words. Unfortunately, while the support for automatic hyphenation of text theoretically exists (with a text-hyphenation: auto property), current browsers do not support it. What they do support is the text-hyphenation: manual property, which is enabled by default. The manual hyphenation consists in putting a ­ character at the right point within words, in order to tell the browser where the word can be broken. This mechanism works well in any browser, but it means that all the text of the page has to be pre-hyphenated and filled with those characters. This is, I think, the reason why nobody does it, but thanks to Hakyll, the preprocessing software tool that I used to create this website, this is not so hard after all (more on this in a later post). The difference from how the same paragraph would be rendered by \(\LaTeX\) is still visible, since the algorithms used by the browser to layout the paragraphs in real time do not produce optimal line breaks, but the result is definitely worth the effort. Another minor details are worth mentioning regarding the paragraphs. As you can see, this post is typeset as usual in books and papers, with no space between paragraphs. 
To aid the eye in recognizing paragraph breaks, a little indentation is put at the beginning of each paragraph instead. The first paragraph, on the other hand, does not need to be visually separated from the title, so it is not indented. This is achieved quite easily in CSS by specifying the text-indent property with an adjacency selector (the indent value below is illustrative):

p + p {
  text-indent: 1.5em;
}

Spacing between lines is left at the default values set by Bootstrap. A really nice touch, suggested by The Elements of Typographic Style Applied to the Web, is that of synchronizing the rhythm of the text paragraphs with that of the menu items in the side bar. What does this mean? If you look at the beginning of the post, you can notice that the spacing between the elements of the page is such that the paragraph lines on the left happen to be vertically aligned with the menu items on the right. This detail is important to reduce visual clutter, and it is ensured by carefully specifying every vertical distance as a function of the line height computed by Bootstrap from the font size.

Mathematical typesetting is what made LaTeX so heavily used in scientific environments. What can be done in a web page to get similar results? We can simulate LaTeX from scratch, of course! That is what the authors of MathJax must have thought when they designed this awesome library. All the mathematical text and equations on this site are typeset by MathJax. Everything I had to do to enable it was to load the script from the CDN (the snippet below reconstructs the typical MathJax 2.x include of the time):

<script type="text/javascript" async
  src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS_CHTML">
</script>

Following this tag, MathJax automatically intercepts any occurrence of a pair of dollar signs in the HTML source code and translates the contents into a typeset equation using a full-fledged implementation of the TeX math layout algorithm.

Things to improve

Considering that I hadn't touched a line of HTML/CSS in almost a decade, I'm pretty satisfied with the results of my experiment. However, there are still some details to fix.
The first item in the TODO list is to fix the horizontal scrolling of source code listings, which currently does not work well, as you can see above. Then, the next big thing is a CSS sheet for printing posts. It should be rather easy, but it takes time to craft the details. Another problem is that I am not fully satisfied with the spacing between paragraphs and section titles, and that of code blocks and images. Also, even if the hyphenation mechanism works very well together with text justification, the automatic hyphenation of words is not perfect. The algorithm, implemented by a Haskell library used by Hakyll, does not take into account some small details, such as that words should not be hyphenated on the first syllable, as happens for the word be-tween some lines above. I'm afraid this can only be fixed by looking at the library's source code, but I will never have enough spare time to do it. For everything else, any suggestion is welcome!
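As a postscript, the pre-hyphenation mechanism described earlier is straightforward to sketch. The following illustrative Python snippet is a stand-in for the Haskell library Hakyll uses (the tiny syllable table is made up; a real implementation would use Knuth-Liang hyphenation patterns):

```python
SHY = "\u00ad"  # the soft-hyphen character the browser may break a word on

# A toy hyphenation table: word -> its syllables.
SYLLABLES = {
    "typography": ["ty", "pog", "ra", "phy"],
    "justification": ["jus", "ti", "fi", "ca", "tion"],
}

def pre_hyphenate(text):
    """Insert soft hyphens at known syllable boundaries, word by word."""
    out = []
    for word in text.split():
        parts = SYLLABLES.get(word.lower())
        out.append(SHY.join(parts) if parts else word)
    return " ".join(out)

print(pre_hyphenate("typography needs justification"))
```

Run over the page text at build time, this yields HTML that the browser's default hyphens: manual behaviour can break cleanly when justifying.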
OPCFW_CODE
Infectious interactions: Identifying user-communities in German-language Twitter topic networks on vaccination throughout the Covid-19 pandemic by David Leimstädtner Thesis supervisor: Dr. Damian Trilling The Covid-19 pandemic presents a situation of crisis at an unprecedented global scale, in which the general public is perforce confronted with an online debate that has long been targeted by a highly active minority of anti-vaxxers, fueling the spread of misinformation on social networking sites such as Twitter. Addressing these recent developments, my thesis asks: How does this influx of new users joining the vaccination debate change structures of information flow within pro- and anti-vaccination user clusters? Is the overall debate polarized based on its users' vaccination stance? In order to answer this, the study created network models based on a Twitter dataset and applied community detection algorithms to identify clusters of user interaction within the German-language vaccination debate, examining the evolution of such clusters by comparing three time periods: the year leading up to the pandemic, the initial onset of the outbreak, and the timeframe of the active roll-out of the German vaccination campaign. The study's approach is separated into four phases: First, web mining approaches are employed to gather a dataset of 3.5 million Tweets using Twitter's newly introduced Full Archive Search API 2.0. Second, the Python network-modelling package iGraph is used to create network representations of the user-interaction patterns within each period of interest, and the Leiden community detection algorithm is applied to identify denser user-clusters within the overall network. Third, a manual content analysis of a subset of 6000 Tweets is conducted in order to train a machine-learning model to classify vaccine attitudes based on the tweets' contents.
The resulting classifier is applied to each user node in the resulting networks, allowing an analysis of the prevalence of vaccine-critical stances within the identified communities. Lastly, automated content analysis is applied to the tweet texts and profile descriptions pertaining to a particular community in order to extract the most characteristic terms. All the code created for this study can be found on my Github. While the overall number of users partaking in the vaccination debate on Twitter increased rapidly, the overall network also became clearly fragmented in the wake of the pandemic, suggesting a polarization of the vaccination debate. In this polarized debate, an anti-vaccination minority distributes misinformation and conspiracy content among its decentralized community, while a majority of vaccination proponents is organized in a more hierarchical fashion around central public figures and traditional news outlets. The anti-vaccination cluster is thereby growing at a quicker rate and is overall more closely connected, pointing to a more effective spread of information within it. The content analysis further confirms prior research regarding the anti-vax movement outside the German context, finding its content to concentrate on vaccine side-effects, conspiracy theories and alternative news bloggers. Results of the community detection algorithm (left) and the vaccine-stance classifier (right) on a retweet network concerning vaccination, spanning 28.12.2020 to 15.03.2021: Blue marks the majority community, while red shows the minority community identified by the „Leiden" community detection algorithm. In the second illustration, green marks users whose posts have been identified as expressing anti-vaccination attitudes by the ML classifier.
OPCFW_CODE
When configuring a Linux RAID array, a chunk size needs to be chosen. But what is the chunk size? When you write data to a RAID array that implements striping (level 0, 5, 6, 10 and so on), the block of data sent to the array is broken down into pieces, each piece written to a single drive in the array. This is how striping improves performance: the data is written in parallel to the drives. The chunk size determines how large such a piece will be for a single drive. For example: if you choose a chunk size of 64 KB, a 256 KB file will use four chunks. Assuming that you have set up a four-drive RAID 0 array, the four chunks are each written to a separate drive, exactly what we want. This also makes clear that when choosing the wrong chunk size, performance may suffer. If the chunk size were 256 KB, the file would be written to a single drive, so the RAID striping wouldn't provide any benefit, unless many such files were written to the array, in which case the different drives would handle different files. In this article, I will provide some benchmarks that focus on sequential read and write performance. Thus, these benchmarks won't be of much relevance if the array must sustain a random I/O workload and needs high random IOPS. All benchmarks are performed with a consumer-grade system consisting of these parts: Processor: AMD Athlon X2 BE-2300, running at 1.9 GHz. RAM: 2 GB Disks: SAMSUNG HD501LJ (500GB, 7200 RPM) SATA controller: Highpoint RocketRaid 2320 (non-raid mode) Tests are performed with an array of 4 and an array of 6 drives. All drives are attached to the Highpoint controller. The controller is not used for RAID, only to supply sufficient SATA ports. Linux software RAID with mdadm is used. A single drive provides a read speed of 85 MB/s and a write speed of 88 MB/s. The RAID levels 0, 5, 6 and 10 are tested. Chunk sizes from 4K to 1024K are tested. XFS is used as the test file system. Data is read from/written to a 10 GB file.
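The arithmetic in the example above can be sketched in a few lines of Python (a toy round-robin model of RAID-0 striping for illustration, not how mdadm actually lays data out):

```python
# How a file is split into chunks and striped round-robin across the
# drives of a RAID-0 array (toy model of the example in the text).
def chunks_per_drive(file_kb, chunk_kb, drives):
    """Return, per drive, how many chunks of the file it receives."""
    chunks = -(-file_kb // chunk_kb)   # ceiling division
    layout = [0] * drives
    for i in range(chunks):
        layout[i % drives] += 1        # round-robin striping
    return layout

# 256 KB file, 64 KB chunks, 4 drives: one chunk per drive
print(chunks_per_drive(256, 64, 4))    # [1, 1, 1, 1]
# 256 KB chunks: the whole file lands on a single drive
print(chunks_per_drive(256, 256, 4))   # [1, 0, 0, 0]
```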
The theoretical maximum throughput of a 4-drive array is 340 MB/s; a 6-drive array should be able to sustain 510 MB/s. About the data: all tests have been performed by a Bash shell script that accumulated the data, so there was no human intervention when acquiring it. For every RAID level + chunk size combination, five tests are performed, and the values reported are the average of those five runs. After each run, the RAID array is destroyed, re-created and formatted. Data transfer speed is measured using the 'dd' utility with the option bs=1M. Results of the tests performed with four drives: Test results with six drives: Analysis and conclusion Based on the test results, several observations can be made. The first one is that RAID levels with parity, such as RAID 5 and 6, seem to favor a smaller chunk size of 64 KB. The RAID levels that only perform striping, such as RAID 0 and 10, prefer a larger chunk size, with an optimum of 256 KB or even 512 KB. It is also noteworthy that RAID 5 and RAID 6 performance don't differ that much. Furthermore, the theoretical transfer rates that should be achievable based on the performance of a single drive are not met. The cause is unknown to me, but overhead and the relatively weak CPU may play a part in this, and the XFS file system may also play a role. Overall, software RAID does not seem to scale well on this system. Since my big storage monster (as seen on the left) is able to perform way better, I suspect that it is a hardware issue: the M2A-VM consumer-grade motherboard simply can't go any faster.
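The theoretical ceilings quoted above are just the single-drive sequential speed multiplied by the number of striped drives:

```python
# Theoretical sequential ceiling for a striped array: drive count times
# the measured single-drive read speed (85 MB/s, per the text).
single_read = 85  # MB/s, one SAMSUNG HD501LJ
for drives in (4, 6):
    print(f"{drives} drives: {drives * single_read} MB/s")
# 4 drives: 340 MB/s
# 6 drives: 510 MB/s
```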
OPCFW_CODE
We are all familiar with Euclidean geometry and with the fact that it describes our three-dimensional world so well. In Euclidean geometry, the sides of objects have lengths, intersecting lines determine angles between them, and two lines are said to be parallel if they lie in the same plane and never meet. Moreover, these properties do not change when the Euclidean transformations (translation and rotation) are applied. Since Euclidean geometry describes our world so well, it is at first tempting to think that it is the only type of geometry. (Indeed, the word geometry means "measurement of the earth.") However, when we consider the imaging process of a camera, it becomes clear that Euclidean geometry is insufficient: lengths and angles are no longer preserved, and parallel lines may intersect. Euclidean geometry is actually a subset of what is known as projective geometry. In fact, there are two geometries between them: similarity and affine. To see the relationships between these different geometries, consult Figure 1. Projective geometry models the imaging process of a camera well because it allows a much larger class of transformations than just translations and rotations, a class which includes perspective projections. Of course, the drawback is that fewer measures are preserved -- certainly not lengths, angles, or parallelism. Projective transformations preserve type (that is, points remain points and lines remain lines), incidence (that is, whether a point lies on a line), and a measure known as the cross ratio, which will be described in section 2.4. Projective geometry exists in any number of dimensions, just like Euclidean geometry. For example, the projective line, which we denote by $\mathbb{P}^1$, is analogous to a one-dimensional Euclidean world; the projective plane, $\mathbb{P}^2$, corresponds to the Euclidean plane; and projective space, $\mathbb{P}^3$, is related to three-dimensional Euclidean space.
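The invariance of the cross ratio mentioned above is easy to verify numerically. The sketch below uses an arbitrary projective map of the line, x -> (2x+1)/(x+3), chosen purely for illustration, and exact rational arithmetic to avoid rounding:

```python
# The cross ratio of four collinear points is preserved by projective
# maps of the line, even though lengths and ratios of lengths are not.
from fractions import Fraction

def cross_ratio(a, b, c, d):
    # one standard convention: (a-c)(b-d) / ((a-d)(b-c))
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def proj(x):
    # an arbitrary projective (Mobius) map of the line
    return (2 * x + 1) / (x + 3)

pts = [Fraction(p) for p in (0, 1, 2, 5)]
before = cross_ratio(*pts)
after = cross_ratio(*[proj(p) for p in pts])
print(before, after)  # both 8/5
```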
The imaging process is a projection from $\mathbb{P}^3$ to $\mathbb{P}^2$, from three-dimensional space to the two-dimensional image plane. Because it is easier to grasp the major concepts in a lower-dimensional space, we will spend the bulk of our effort, indeed all of section 2, studying $\mathbb{P}^2$, the projective plane. That section presents many concepts which are useful in understanding the image plane and which have analogous concepts in $\mathbb{P}^3$. The final section then briefly discusses the relevance of projective geometry to computer vision, including discussions of the image formation equations and the Essential and Fundamental matrices. The purpose of this monograph is to provide a readable introduction to the field of projective geometry and a handy reference for some of the more important equations. The first-time reader may find some of the examples and derivations excessively detailed, but this thoroughness should prove helpful for reading the more advanced texts, where the details are often omitted. For further reading, I suggest the excellent book by Faugeras and the appendix by Mundy and Zisserman.
OPCFW_CODE
Well, a year has passed, and now I've decided to try working on it a bit more. I was actually a bit disappointed with the frame rate of the original version. This NEW version pretty much looks the same as the old one, but it's about three times faster! I've also corrected some of the fish-eye perspective distortion and made a new maze that isn't as cramped as the old one. The old Gloom1 demo required a 512k CoCo3. It'll run on either a 6809 or 6309 CPU, but it will go about 13% faster with a 6309 because it kicks the CPU into 6309 native mode. The new Gloom2 demo will run on any CoCo3 (even 128k!) with either a 6809 or 6309. (It runs a bit faster on a 6309, but since neither demo takes specific advantage of the 6309 other than just turning on native mode, the speed difference isn't significant.) (The whole point of this demo was to show that a stock CoCo3 could do 3D.) The new version (Gloom2) typically manages about 20 frames per second of full-screen animation, using the 256x192 16-color video mode. The old version is here only as a reference - the new one is MUCH better, and you should try that one if you only want to download one file. If you've ever wondered whatever happened with Gloom since then, there are actually three spiritual successors to it after the two initial demos. I still haven't made a full CoCo 3 game of it yet, but the idea still comes to mind every now and then. In 1999, I wrote San Francisco Rush for the Game Boy Color. It used the Gloom technique for 3D gameplay and made good use of the technique's voxel-like properties to allow height-mapped terrain for creating hills, variable terrain and other game objects. Sadly, the game was never released. In 2000, Nickolas Marentes released Gate Crasher for the CoCo 3. This first-person game uses the same 3D technique as Gloom, but all the credit goes to Nick as he wrote every single line of code in the game! Click here to see a video of Gate Crasher.
In 2004, I worked on Tron 2.0: Killer App for the Game Boy Advance. The story mode includes 3D sub-sections where you control a tank or a recognizer. And yes, it uses the Gloom technique. Graphical improvements include better textures, depth cueing and a free-floating camera. Click to see videos of the game's tank mode and recognizer mode.

GLOOM2.BIN - *NEW* Gloom2 demo BINary program file.
GLOOM1.BIN - old Gloom demo BINary file.
GLOOM.TXT  - Gloom introduction text file.
GLOOM.ASM  - Gloom assembly EDTASM source code.
README.TXT - Text file describing my intentions
OPCFW_CODE
One program uses libmysqlclient, which has worked for years. answered Sep 26 '14 at 17:20, Digit: I used the official libspotify version for Android ARM; it should work. And also remember that if you use GCC, you should use gcc (or the cross-compiler variant) as the linker instead of ld. –rodrigo Apr 30 '15 at 15:42 @rodrigo: Best part is, the compilation happens much faster now. Run 'file test' to check this. It compiles fine, but the linker failed to link to the OpenVG library, so I added the linker switch -L/home/ae/Documents/toradex/col-imx6/colibri-imx6-sdk/usr/lib and now it links to the OpenVG library. I hope offline playback gets implemented soon. –Lukas Kolletzki Sep 30 '14 at 8:59
ldd run on another, more capable system against the executable shows no surprises:

ts7500:# ldd vecsSqlLogger
    libmysqlclient.so.15 => /usr/lib/libmysqlclient.so.15 (0x40026000)
    libc.so.6 => /lib/libc.so.6 (0x401f9000)
    libpthread.so.0 => /lib/libpthread.so.0 (0x40322000)
    libcrypt.so.1 => /lib/libcrypt.so.1

What now? But since then, it can't link to the pthread library even with the linker switch -L/home/ae/Documents/toradex/col-imx6/colibri-imx6-sdk/lib. Ask whoever provided you with it for an Android-compatible version instead. When I start my Android app, dalvik crashes with the following message:

09-26 08:18:18.941: E/dalvikvm(11820): dlopen("/data/app-lib/com.example.myApp-1/libspotify.so") failed: dlopen failed: could not load library "libpthread.so.0" needed by "libspotify.so"; caused by library "libpthread.so.0"

I don't know; did the libpthread.so.0 come from the compiler directory?
Besides that, were any other issues found? Compilation went through fine; however, while linking my module, the linker threw the error below, even though I made sure all paths were properly set:

/opt/compilers/arm-linux-cs2010q1-202/bin/../lib/gcc/arm-none-linux-gnueabi/4.4.1/../../../../arm-none-linux-gnueabi/bin/ld: skipping incompatible /lib/libpthread.so.0 when searching for /lib/libpthread.so.0

About librt.so on Android, please refer to libgit2/libgit2#2128. It's probably one for embedded Linux/arm systems instead. I was expecting that it wouldn't like the library version.
OPCFW_CODE
author:    Kyle Teske <firstname.lastname@example.org>  Thu Jun 11 09:25:47 2020 -0400
committer: Kyle Teske <email@example.com>  Thu Jun 11 11:52:08 2020 -0400

Fix Xcode 11.3.1 detection in an Xcode workspace

When using a Tulsi-generated project by itself in Xcode 11.3.1, Xcode sets XCODE_VERSION_ACTUAL to 1131, which Tulsi parses correctly to get the version 11.3.1. But when used in an Xcode workspace, Xcode sets XCODE_VERSION_ACTUAL to 1130, leading Tulsi to parse the version as 11.3 and later attempt to use a non-existent version of Xcode to build, which results in an error. Fix by ignoring XCODE_VERSION_ACTUAL because it's unreliable; instead, parse Xcode's version.plist to read the 'CFBundleShortVersionString' entry, which seems to return correct results. Caveat: this implementation uses plistlib, which has a slightly different API in python2 and python3. This change uses the python2 API, and will need to be updated to work with python3 in the future. PiperOrigin-RevId: 315886285

build_and_run.sh will install Tulsi.app inside $HOME/Applications by default. See below for supported options: -b: Bazel binary that Tulsi should use to build and install the app (Default is -d: The folder to install the Tulsi app into (Default is -x: The Xcode version Tulsi should be built for (Default is

Tulsi-generated Xcode projects use Bazel to build, not Xcode. Building in Xcode will cause it to only run a script; the script invokes Bazel to build the configured Bazel target and copies the artifacts to where Xcode expects them to be. This means that many common components of an Xcode project are handled differently than you may be used to. Notable differences: bazel invocations, some of which may affect Bazel caching.
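A sketch of the fix described in the commit message (this is my reconstruction, not the actual Tulsi code, and it uses the python3 plistlib API rather than the python2 API the change itself used):

```python
# Read Xcode's version from its version.plist instead of trusting the
# unreliable XCODE_VERSION_ACTUAL build setting.
import plistlib

def xcode_version(version_plist_path):
    """Return the CFBundleShortVersionString entry, e.g. '11.3.1'."""
    with open(version_plist_path, "rb") as f:
        info = plistlib.load(f)
    return info["CFBundleShortVersionString"]
```

For a real Xcode install, the plist would typically be read from somewhere like Xcode.app/Contents/version.plist (path assumed here, not taken from the commit).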
In order to maximize cache re-use when building from the command line, try using the user_build.py script, which is located in the generated xcodeproj at Tulsi projects contain a few settings which control various behaviors during project generation and builds: build flags, customizable per compilation mode; build startup flags, also customizable per compilation mode; and the compilation mode (fastbuild) used during project generation. If set to dbg, swap to opt if you normally build Release builds in Xcode (i.e. when profiling your app). Setting this improperly shouldn't break your project, although it may potentially worsen generation and build performance. If set to No, swap to Yes if your project contains Swift (even in its dependencies). Setting this improperly shouldn't break your project, although it may potentially worsen generation and build performance.
OPCFW_CODE
The role will set up the dotqmail files in the Ansible user's home for a netqmail server, as it is used for example on uberspace.de. Make sure you have made a backup of your dotqmail files before running this role, since it will overwrite or delete them without any further request!!! See the variables section for what can be configured. The default configuration only provides basic functionality and does not cover all the needs a normal user would have.

dotqmail_rootfile: This list of lines is used as the content of the root dotqmail file ".qmail", which is treated separately since it has no extension (and therefore no valid key). Per default it will redirect mails to ./Maildir/, which can be simply overwritten, e.g. in inventory files.

dotqmail_config_files: Can be configured as a dict of lists where the keys are the dotqmail file extensions you would like to have and the list entries (which should be strings) are the redirection lines to add to that file.

dotqmail_config_files:
  "info" : ["./Maildir/"]
  "mail" : ["./Maildir/", "firstname.lastname@example.org"]

The example above will create two dotqmail files, ".qmail-info" and ".qmail-mail". Both will redirect the mails to the user's Maildir, while the latter one will also redirect the mail to the second address listed. See the Uberspace Wiki entry for more information about dotqmail files and the redirection line syntax.

dotqmail_default_files: Has roughly the same function as dotqmail_config_files, with the important difference that it will be combined with the one above to generate the actual working dict; items here will be overwritten by items of dotqmail_config_files if they use the same extension/dict-key. This is present to enforce basic conformity with RFC 2142.
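Given the dotqmail_config_files example above, the role would — as I read the description, this is an assumption rather than verified output — generate a ".qmail-mail" file containing just the redirection lines, one per line:

```
./Maildir/
firstname.lastname@example.org
```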
dotqmail_file: This is the actual dict we work on and normally should not be changed directly. It is the combination of dotqmail_config_files over dotqmail_default_files.

dotqmail_prefix: This is the prefix used to derive the dotqmail filename. By default this is ".qmail-" and normally should not be changed.

There are currently no additional dependencies.

Example playbook:

- hosts: servers
  roles:
    - role: dotqmail
OPCFW_CODE
My recent paper with Alan Hastings, "Early warning signals and the prosecutor's fallacy", is now out in PRSB. An open access preprint can be found on the arXiv, and accompanying source code for the analysis can be found on Github. Recent papers in early warning signals This investigation is in a very similar spirit to two other recent papers on the subject. Kéfi et al. address a different kind of false positive for early warning signals, arising in systems approaching transitions or bifurcations other than the intended culprit, the saddle-node bifurcation. Perretti and Munch address the flip side, in which detection fails when it should work due to commonly observed levels of noise. Model-based approaches are clearly on the rise (following my own 2012 paper in PRSI and Lade and Gross 2012), with Brock and Carpenter (2012) putting forward a very similar model, and Ives & Dakos (2012) bringing in temporally heterogeneous ARMA models. Unfortunately, such model-based approaches are too new to make it into either of the recent review-like comparisons of existing approaches on a common set of simulated data (Dakos et al. 2012) or empirical data (Lindegren et al. 2012). Boettiger, C., & Hastings, A. (2012). Early warning signals and the prosecutor's fallacy. Proceedings of the Royal Society B: Biological Sciences, (October). doi:10.1098/rspb.2012.2085 Kéfi, S., Dakos, V., Scheffer, M., Van Nes, E. H., & Rietkerk, M. (2012). Early warning signals also precede non-catastrophic transitions. Oikos. doi:10.1111/j.1600-0706.2012.20838.x Perretti, C. T., & Munch, S. B. (2012). Regime shift indicators fail under noise levels commonly observed in ecological systems. Ecological Applications, 22(6), 1772-1779. doi:10.1890/11-0161.1 Boettiger, C., & Hastings, A. (2012). Quantifying limits to detection of early warning for critical transitions. Journal of The Royal Society Interface, 9(75), 2527-2539. doi:10.1098/rsif.2012.0125 Lade, S. J., & Gross, T. (2012).
Early Warning Signals for Critical Transitions: A Generalized Modeling Approach. (M. Pascual, Ed.) PLoS Computational Biology, 8(2), e1002360. doi:10.1371/journal.pcbi.1002360 Brock, W. A., & Carpenter, S. R. (2012). Early Warnings of Regime Shift When the Ecosystem Structure Is Unknown. (R. V. Solé, Ed.) PLoS ONE, 7(9), e45586. doi:10.1371/journal.pone.0045586 Ives, A., & Dakos, V. (2012). Detecting dynamical changes in nonlinear time series using locally linear state-space models. Ecosphere, 3(June). Retrieved from https://www.esajournals.org/doi/abs/10.1890/ES11-00347.1 Dakos, V., Carpenter, S. R., Brock, W. A., Ellison, A. M., Guttal, V., Ives, A. R., Kéfi, S., et al. (2012). Methods for Detecting Early Warnings of Critical Transitions in Time Series Illustrated Using Simulated Ecological Data. (B. Yener, Ed.) PLoS ONE, 7(7), e41010. doi:10.1371/journal.pone.0041010 Lindegren, M., Dakos, V., Gröger, J. P., Gårdmark, A., Kornilovs, G., Otto, S. A., & Möllmann, C. (2012). Early Detection of Ecosystem Regime Shifts: A Multiple Method Evaluation for Management Application. (S. Thrush, Ed.) PLoS ONE, 7(7), e38410. doi:10.1371/journal.pone.0038410
OPCFW_CODE
I am following the guidance provided here: Running on mobile with TensorFlow Lite, but with no success. After the release of Tensorflow Lite on Nov 14th, 2017, which made it easy to develop and deploy Tensorflow models on mobile and embedded devices, in this blog we provide the steps to develop Android applications which can detect custom objects using the Tensorflow Object Detection API. This article is for a person who has some knowledge of Android and OpenCV. This tutorial describes how to install and run an object detection application. A tutorial showing how to train, convert, and run TensorFlow Lite object detection models on Android devices, the Raspberry Pi, and more! Taking the Inception V3 model as an example, we could define inception_v3_spec, which is an object of ImageModelSpec and contains the specification of the Inception V3 model. TensorFlow Object Detection API. Now, the reason why it's so easy to get started here is that the TensorFlow Lite team actually provides us with numerous examples of working projects, including object detection, gesture recognition, pose estimation & much, much more. In this tutorial you will download an exported custom TensorFlow Lite model created using AutoML Vision Edge. A tutorial to train and use Faster R-CNN with the TensorFlow Object Detection API. What you will learn (MobileNetSSDv2): how to load your custom image detection dataset from Roboflow (here we use a public blood cell dataset with tfrecord). Earlier this month at Google I/O, the team behind Firebase ML Kit announced the addition of 2 new APIs to their arsenal: an object detection and an on-device translation API. And trust me, that is a big deal and helps a lot with getting started. I will go through it step by step. I am using Android… TensorFlow Object Detection. Welcome to part 5 of the TensorFlow Object Detection API tutorial series.
Object detection in the image is an important task for applications including self-driving, face detection, video surveillance, and counting objects in the image. Object detection is a process of discovering real-world object detail in images or videos, such as cars or bikes, TVs, flowers, and humans. TensorFlow Object Detection step-by-step custom object detection tutorial. Let's move forward with our Object Detection Tutorial and understand its various applications in the industry. This is an example project for integrating TensorFlow Lite into an Android application; this project includes an example of object detection for an image taken from the camera using the TensorFlow Lite library. But in this tutorial, I would like to show you how we can increase the speed of our object detection up to 3 times with TensorRT! Moreover, we could also switch to other new models that input an image and output a feature vector in the TensorFlow Hub format. I'm a tensorflow newbie, so please go easy on me. Custom Object Detection Tutorial with YOLO V5 was originally published in Towards AI — Multidisciplinary Science Journal on Medium, where people are continuing the conversation by highlighting and responding to this story. The application uses TensorFlow and other public API libraries to detect multiple objects in an uploaded image. These should correspond to the tags used when saving the variables using the SavedModel save() API. About Android TensorFlow Lite Machine Learning Example. The example model runs properly, showing all the detected labels. In this part and a few in the future, we're going to cover how we can track and detect our own custom objects with this API. This article walks you through installing the OD-API with either Tensorflow 2 or Tensorflow 1. Deep inside the many functionalities and tools of TensorFlow lies a component named TensorFlow Object Detection API.
It allows you to run machine learning models on edge devices with low latency, which eliminates the … This is the load_model function, which is missing 2 arguments: tags: a set of string tags to identify the required MetaGraphDef. We start off by giving a brief overview of quantization in deep neural networks, followed by explaining different approaches to quantization and discussing the advantages and disadvantages of each approach. TensorFlow Lite is an optimized framework for deploying lightweight deep learning models on resource-constrained edge devices. We'll conclude with a .tflite file that you can use in the official TensorFlow Lite Android Demo, iOS Demo, or Raspberry Pi Demo. It describes everything about TensorFlow Lite for Android. In this tutorial, I will not cover how to install TensorRT. This is an easy and fast guide on how to use image classification and object detection using Raspberry Pi and Tensorflow Lite. In this Object Detection Tutorial, we'll focus on Deep Learning Object Detection, as Tensorflow uses Deep Learning for computation. It allows the identification and localization of multiple objects within an image, giving us a better understanding of an image. TensorFlow Lite is TensorFlow's lightweight solution for mobile and embedded devices. The use of mobile devices only furthers this potential, as people have access to incredibly powerful computers and only have to search as far as their pockets to find them. Blink detection in Android using Firebase ML Kit; Introducing Firebase ML Kit Object Detection API. With the recent update to the Tensorflow Object Detection API, installing the OD-API has become a lot simpler. You will then run a pre-made Android app that uses the model to identify images of flowers. TensorFlow Lite Examples.
On the models' side, TensorFlow.js comes with several pre-trained models that serve different purposes, like PoseNet to estimate in real time the pose a person is performing, the toxicity classifier to detect whether a piece of text contains toxic content, and lastly the Coco SSD model, an object detection model that identifies and localizes multiple objects in an image. With the recent release of the TensorFlow 2 Object Detection API, it has never been easier to train and deploy state-of-the-art object detection models with TensorFlow, leveraging your own custom dataset to detect your own custom objects: foods, pets, mechanical parts, and more. The TensorFlow Object Detection API is built on top of TensorFlow and makes it easy to construct, train, and deploy object detection models. We will look at how to use the OpenCV library to recognize objects on Android using feature extraction. In this tutorial, we're going to cover how to adapt the sample code from the API's GitHub repo to apply object detection to streaming video from our webcam. Welcome to part 5 of the TensorFlow Object Detection API tutorial series. A General Framework for Object Detection. TensorFlow's object detection technology can provide huge opportunities for mobile app development companies and brands alike to use a range of tools for different purposes. When testing the tflite model on a computer, everything worked fine. TensorFlow Lite is a great solution for object detection with high accuracy. The goal of this Raspberry Pi TensorFlow Lite tutorial is to create an easy guide to running TensorFlow Lite on a Raspberry Pi without deep knowledge of TensorFlow and machine learning. TensorFlow Lite Object Detection Android Demo Overview. Change to the model in TensorFlow Hub. I'm getting a TypeError and don't know how to fix it.
This is a camera app that continuously detects the objects (bounding boxes and classes) in the frames seen by your device's back camera, using a quantized MobileNet SSD model trained on the COCO dataset. These instructions walk you through building and running the demo on an Android device. In this tutorial, we will examine various TensorFlow tools for quantizing object detection models. Trying to implement a custom object detection model with TensorFlow Lite, using Android Studio: I followed this tutorial to create a custom object detection model, which I then converted to tflite. Note: TensorFlow is a multipurpose machine learning framework. In this tutorial, we will train an object detection model on custom data and convert it to TensorFlow Lite for deployment. I'm pretty new to TensorFlow and I'm trying to run object_detection_tutorial. This post walks through the steps required to train an object detection model locally. You can implement a CNN-based object detection algorithm in the mobile app. However, when I try to add my model to the Android TensorFlow example, it does not detect correctly. Welcome to part 2 of the TensorFlow Object Detection API tutorial. In this tutorial, we will learn how to make a custom object detection model in TensorFlow and then convert the model to tflite for Android. TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi.
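The quantization approaches mentioned in this tutorial series can be illustrated without TensorFlow at all. Here is a from-scratch sketch of affine (asymmetric) 8-bit quantization; the function names and range handling are my own illustration, not TFLite's actual implementation:

```python
def quantize(values, num_bits=8):
    """Affine (asymmetric) quantization: map floats onto the integer
    grid [0, 2**num_bits - 1] using a scale and a zero point."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    hi = max(hi, lo + 1e-8)            # guard against a zero-width range
    scale = (hi - lo) / (qmax - qmin)  # float value of one integer step
    zero_point = round(qmin - lo / scale)
    quantized = [min(qmax, max(qmin, round(v / scale) + zero_point))
                 for v in values]
    return quantized, scale, zero_point

def dequantize(quantized, scale, zero_point):
    """Recover approximate float values from the quantized integers."""
    return [(q - zero_point) * scale for q in quantized]
```

Dequantizing immediately shows the rounding error introduced, roughly half an integer step for values inside the range; this trade of precision for a 4x smaller representation is the core idea behind the TensorFlow quantization tooling examined in the tutorial.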
dbt — There is data at the end of the tunnel dbt is a command-line tool that does the T in ELT (Extract, Load, Transform) processes, and it is a powerful technology that helps engineers and data scientists transform data by simply writing select statements and SQL validations in a structured fashion. In my previous job at Deliveroo, I worked with dbt (data build tool) and I'd love to share some learnings on why we should treat SQL as software, just like any other microservice or Lambda function implemented in a programming language. The diagram below shows how dbt fits into a tech stack. Essentially, it takes data from multiple different sources such as databases, services and Lambda functions, loads it into a standard format, and then transforms that data into the shapes that best suit particular use cases within a company. There are two main concepts we need to introduce before going further: Model and Materialisation. A model is a single .sql file which contains a single select statement that produces a single table (or other materialisation). A model can transform raw data into another form, for example a table for analytics or model training; more often, it is an intermediate step in such a transformation. We can create a hierarchy between models and join them up as we want. Our job as engineers is to create models and connect them, then leave to dbt the work of materialising the actual datasets in the database. You can find further details about models in dbt — Models. Materialisations are the methods used to realise the models in the database. Models are materialised as views, tables and other structures depending on the underlying database. There are a number of other possible refinements, including incrementally-loaded tables, where new data is merged into the existing data each time dbt runs. You can find more details of this and the other possibilities in dbt — Materialisation.
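A model really is just a select statement in a .sql file, with the ref function (covered later) wiring models together. A minimal sketch, with hypothetical model and column names, might look like:

```sql
-- models/orders_enriched.sql (model and source names are placeholders)
select
    o.order_id,
    o.ordered_at,
    o.amount,
    c.country
from {{ ref('stg_orders') }} as o
join {{ ref('stg_customers') }} as c
    on o.customer_id = c.customer_id
```

Running `dbt run` would materialise this select statement as a view or table, depending on the configured materialisation.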
SQL is Software In Building a Mature Analytics Workflow, Tristan Handy (the CEO of Fishtown Analytics — the company behind dbt) outlines how the same techniques that software engineering teams use to collaborate on the rapid creation of quality applications can apply to analytics code. In this section, I am going to summarise how we can adopt some of these techniques and embrace a software engineering mindset when working with SQL. All dbt models should live in a repository; from my experience, having all SQL code hosted in a shared repository helps improve collaboration between engineers and data scientists. Essentially, any change to a dataset follows the same workflow as any other service maintained by engineers in production. That is, if you want to add a new column or apply a new transformation over the dataset, you have to go through the whole process of creating your development branch, opening a PR, and having eyes from your colleagues before merging anything to production. dbt provides simple mechanisms to validate the quality of datasets; after all, models (SQL behind the scenes) must receive the same treatment that we give to code implemented in a programming language such as Ruby, Python or Go. That is, we should try to have good code coverage to ensure transformations work as expected in production. We can go from simple schema validations, such as checking whether a column has null values; to referential integrity, where we validate the accuracy and consistency of data within a relationship; to more elaborate validations through the creation of assertions using SQL queries. Last but not least, you can find plenty of out-of-the-box validations in dbt-utils, such as equality to compare the number of rows between two models and expression_is_true to validate expressions of a given model. When we work in a cross-functional team it is really important to ensure that everyone is on the same page, and nothing beats adding descriptions to your models and columns.
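The schema validations described above live in a YAML file next to the models. A sketch, again with hypothetical model and column names:

```yaml
# models/schema.yml (model and column names are placeholders)
version: 2
models:
  - name: orders_enriched
    description: "Orders joined with customer attributes"
    columns:
      - name: order_id
        tests:
          - not_null
          - unique
      - name: customer_id
        tests:
          - relationships:
              to: ref('stg_customers')
              field: customer_id
```

`dbt test` then runs each of these checks as a query and fails loudly when a validation does not hold, which is exactly the safety net we expect from a test suite in any other language.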
If you document your models well, there is gold at the end of the rainbow: dbt ships a built-in documentation website for your project where you can navigate through all models and see a graph of dependencies between them. You can find further details in dbt — Documentation. We write Ruby, Go and Python services with modularity in mind; some benefits are:
- Less code to be written.
- A single piece of code can be developed for reuse, eliminating the need to retype the code many times.
- It makes the code shorter, simpler and easier to maintain.
- It facilitates debugging errors, as they are localised to a specific area in the code.
Models, or transformations, are also made to be reused; we just need to learn a little bit about ref — the most important function in dbt according to its creators. Basically, we can reference one model within another in order to reuse models to build different datasets. dbt also makes it easier to keep code DRY (Don't Repeat Yourself) through the use of macros, which are SQL snippets that can be invoked like functions from models. Out of the blue, all the Periscope and Looker dashboards in the company are broken, oh no! Someone was just trying to rename a column of that beautiful table. That's the kind of situation that can happen when you do not have multiple environments. As friendly engineers, we should have multiple environments to develop and test things properly before we roll out any piece of code to production (yep, SQL is software after all). To facilitate this, dbt provides a simple way to set up different environments through a YML file (profiles.yml). I suggest having at least three environments: development, staging and production. Engineers should have their own schema to host models in the development environment. Once they are happy with their changes and pull requests are approved by their peers, engineers should merge the changes to staging. And, the very last step is to merge changes from staging to production.
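The multi-environment setup above is expressed as targets in profiles.yml. A sketch with placeholder profile name, warehouse type and credentials (a staging output would follow the same shape as prod):

```yaml
# profiles.yml (profile name, hosts and credentials are placeholders)
my_project:
  target: dev
  outputs:
    dev:
      type: postgres
      host: localhost
      user: alice
      password: "{{ env_var('DBT_PASSWORD') }}"
      port: 5432
      dbname: analytics
      schema: dbt_alice   # each engineer gets a personal dev schema
    prod:
      type: postgres
      host: warehouse.internal
      user: dbt
      password: "{{ env_var('DBT_PASSWORD') }}"
      port: 5432
      dbname: analytics
      schema: analytics
```

Switching environments is then just `dbt run --target prod`, so the same models can be developed in a personal schema and promoted without editing any SQL.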
I have talked a bit about dbt and how we can apply engineering techniques to SQL code in order to improve team collaboration and ensure we deliver reliable datasets for ML training and analytics. The most important takeaway is that we should look after our analytics code as we do for any other applications implemented in a programming language. What about you? How have you been treating your analytics code? I’d love to hear experiences from other engineers working with SQL and dbt :)
[R] R and Java tlumley at u.washington.edu Tue Oct 2 18:51:54 CEST 2001 On Tue, 2 Oct 2001, Andrew Schuh wrote: > I finally got it running (sorta) on my RH7.1. I would make sure you > have the latest package from Omegahat first. Then, I believe the most > likely culprit is the LD_LIBRARY_PATH env variable. Mine includes the > following (my bash profile). > export LD_LIBRARY_PATH > You can replace the java directories with the jdk you are using. Also > make sure that error messages aren't pointing you to a '.so' (in the > libs directory of SJava) that you don't have. I had to rename one of > them in an earlier version of the SJava package. > Even with this, I am only able to run an example after I run java at > least once. I would open the BASH shell and run any java class and then > open R and be able to run the examples ( > library(SJava);.JavaInit();source(....example.r);example.r(); ) . > This makes it a little difficult to use the package (or at the least, > inelegant) by running R in batch. I have had a lot of difficulty > creating any original code with this package. My goal was to create > some original code to allow me to run a java app that called R. Not > much luck. I put a message out some time ago but I haven't had much > luck getting responses. If anyone out there has had any luck using this > package with Redhat and IBM JDK please let me know. Hope this helps. It works (at least works better) when the .JavaInit() call specifies all the classpaths you are going to use The following snippet is from a demonstration that worked (even in front of other people) under Windows. An earlier version with Unix path names also worked under Debian, but I haven't tried that recently. 
##set up paths
orcaHomeOrcaJar   <- paste(orcaHomeDir, "exec/orca.jar", sep="")
orcaHomeVisADJar  <- paste(orcaHomeDir, "exec/visad.jar", sep="")
orcaHomeGrappaJar <- paste(orcaHomeDir, "exec/grappalite.jar", sep="")
orcaHomeJaxpJar   <- paste(orcaHomeDir, "exec/jaxp.jar", sep="")
orcaHomeParserJar <- paste(orcaHomeDir, "exec/parser.jar", sep="")
orcaClasses <- c(orcaHomeOrcaJar, orcaHomeVisADJar,

## Initialize JVM

## Verify JVM
.Java("System", "getProperty", "java.class.path")

## Same example
TaoDataFile <- paste(orcaHomeDir, "data/orca.tao.time", sep="")
TaoDataGroupVar <- "Group"
TaoDataTimeVar <- "Time"
od.1 <- .Java("org.orca.data.parsers.OrcaDataSource",

Thomas Lumley
Asst. Professor, Biostatistics
tlumley at u.washington.edu
University of Washington, Seattle
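For example, the classpath setup above might be passed to .JavaInit() along these lines; the exact name of the configuration argument may differ between SJava versions, so treat this as a sketch rather than tested code:

```r
library(SJava)
orcaClasses <- c(orcaHomeOrcaJar, orcaHomeVisADJar, orcaHomeGrappaJar,
                 orcaHomeJaxpJar, orcaHomeParserJar)
## ";" is the Windows classpath separator; use ":" on Unix
.JavaInit(list(classPath = paste(orcaClasses, collapse = ";")))
```

The point, as noted above, is that every jar the session will touch is listed before the JVM starts, since the classpath cannot be extended afterwards.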
New features such as CSS Grid, Fetch and the download attribute are newly supported by Safari 10.1, the latest version of the web browser developed by Apple. New features and specifications have been published on the developer's page Guides and Sample Code. Because Safari is often called from within iOS applications, upgrading Safari affects later development; things will change little by little, including the application area, so it is important to know where things are headed. ◆ Web API Safari's IndexedDB implementation is fast, completely standards-compliant, and supports new IndexedDB 2.0 features. Custom Elements provide a mechanism for defining HTML elements with custom reaction callbacks that respond to changing values. Combined with the slot-based Shadow DOM API introduced in 2016, Custom Elements can be used to build reusable components. Input Events is an API that simplifies the process of editing rich text on the web. The Input Events API adds events before new input, can block the default editing behavior, and extends the input event with new attributes. In Safari on macOS, you can lock the pointer via a user gesture. The mouse event interface is extended with properties such as movementX that expose raw mouse motion data. When the pointer is locked, the user sees a banner indicating that the mouse cursor is hidden, and the pointer lock can be released by pressing the Escape key. With the Gamepad API, web content can take input from a connected gamepad device. Gamepads have various layouts of buttons and joysticks, but the API normalizes these into a standard gamepad layout. · ECMAScript 2016 and ECMAScript 2017 Support for ECMAScript 2016 and ECMAScript 2017 is available in Safari on macOS and iOS. This adds support for async and await syntax, as well as shared memory objects, Atomics, and SharedArrayBuffer.
· Interactive form validation support With support for interactive form validation, it is easy to create forms that automatically validate user data when the form is submitted. This feature helps users understand what kind of data the form requires and how to enter the correct information. · HTML5 download attribute As an anchor attribute, you can specify the download attribute so that a link downloads a specific file instead of navigating to it. Clicking a link with the download attribute downloads the target as a file; you can optionally specify a file name as the attribute's value. ◆ Layout and Rendering · CSS wide color support In the past, CSS colors were limited to the sRGB gamut, but now you can use wider color spaces like Display P3 with the new color function. For detailed information, see the CSS Color Module Level 4 specification. · CSS Grid Layout Safari now supports CSS Grid (Japanese translation here), making it possible to create complex layouts that respond to viewport constraints. A page is divided into areas based on rows and columns, with flexibility coming from the relationships between grid containers. · Updated behavior of fixed elements Safari can now pinch-zoom elements that use fixed and sticky positioning relative to the viewport. This eliminates the need to disable fixed and sticky positioning around input fields on iOS. ◆ Safari Browser Behaviors · Full-screen keyboard input In Safari on macOS, keyboard input is no longer restricted while a web page is in HTML5 full-screen mode. ◆ Web Inspector · Updated debugging features in Web Inspector · Reduced motion media query Using prefers-reduced-motion in a media query, you can create styles that reduce motion for users who have enabled the reduce-motion system setting.
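A sketch of the Display P3 syntax described above (the class name is an assumption; the plain rgb() declaration acts as an sRGB fallback for older displays):

```css
.accent {
  color: rgb(255, 0, 0);           /* sRGB fallback */
  color: color(display-p3 1 0 0);  /* wider-gamut red on Display P3 screens */
}
```

Browsers that do not understand the color() function simply ignore the second declaration and keep the sRGB value.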
◆ Safari App Extensions Safari App Extensions can now dynamically change the image of a toolbar item, inspect and dynamically change the text of a context menu, and communicate directly with their containing application. Safari App Extensions can also be associated with content blockers, and can be reloaded and checked for their current status. Safari's extension preferences can display localized descriptions, display names and Safari App Extension version numbers, and extensions can present more nuanced permission messages. Ahead of Safari 10.1, these features are available to developers in Safari Technology Preview. in Software, Posted by logu_ii
Aggregate data on a set of 500 records really shouldn't have performance issues, but when that number grows to a much larger figure, a "summary table" that stores aggregate values as simple number fields, where one record in the summary table represents a group of records in the original table, can make for much faster reporting--First did that with FileMaker 3, or was it 4.... Note that keeping such data correctly up to date when data in the original records changes can be a real headache, but can be done. If your data is pretty static--and sales data usually is--it's not so bad. Here's a basic outline of the method I set up to summarize the line items off of purchase orders into records of a summary table. The business where I created this method still uses it today:

Add a field to the original table that you can set to a value to "mark" a record as having been used to generate a summary table record. Then set up your script like this:
- Find all records not marked as "summarized"
- Exit the loop if no records were found
- Find all records of the same "group" as the current record (in our case these were line items for the same material)
- Create a new record in the summary table and use Set Field to set the number fields to the values of summary fields in the original table that compute the needed aggregate values (totals, averages...)
- Use Replace Field Contents to "mark" these records as summarized

Note that this script ran late at night while the business was closed, so there have never been any record-locking issues with the Replace Field Contents step.

"I have 500K rows of sales"

500,000 is a bit more beefy than 500. The method described is correct in terms of storing summarized data. I would highly recommend that you move to FileMaker Server as soon as possible. Doing so opens up the capability of using "Perform Script on Server", which will allow you to offload the "refresh" of the aggregate data to the server while the client continues on.
Your data looks pretty vanilla though; aside from the sort operation on 500k records, you may want to revisit the indexing and the difference between stored and unstored calculation fields. You may also get value out of converting some calculation fields to auto-enter number/text fields to gain performance. Researching the forums can also lead you on to developing for performance. Try this thread:

Thanks Mike, I did miss that "k", not that it affects my response, as I assumed this was needed for very large record counts. In the solution where I originally set up this approach, they routinely pull up cross-tab reports showing monthly subtotals and averages of a material or group of materials purchased, comparing these amounts over a 5-year span. The business generates a bit under a thousand records a day in the original table and operates 6 days a week. The summary table condenses the day's purchases down to fewer than 20 records, so the efficiency savings here are quite large.

What I can suggest to try is the following:
1) Define 2 summary fields, Total( Dollar Sales ) and Total( Unit Sales ), in your data table.
2) Create a table called Report. Add gStartDate, gEndDate etc. to it.
3) Create a global field, gKey, in the Report table, and a TO called Totals, which is based upon your data table.
4) Create a relationship between gKey in Report and UUID (primary key) in the Totals table.
5) In a script, fill gKey via SQL with the IDs of records from the data table which meet your search criteria.
6) Look at the summary fields from 1) as seen via the relationship to Totals that you defined in 4). They are your totals.
7) Put all the totals you care about in $$Variables and create your reports in the Report table, using merge fields of the $$vars.
It works quite decently for me when reporting on big tables.

Thanks for the step by step. It's helping me work through the solution. If you don't mind, I have a couple of questions.
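Step 5 can be sketched with FileMaker's ExecuteSQL function; the table and field names below are assumptions, so adjust them to your own schema:

```
# Step 5 sketch (hypothetical table/field names)
Set Field [ Report::gKey ;
  ExecuteSQL (
    "SELECT \"UUID\" FROM \"Data\" WHERE \"SaleDate\" BETWEEN ? AND ?" ;
    "" ; "¶" ;
    Report::gStartDate ; Report::gEndDate
  )
]
```

With "¶" as the row separator, gKey becomes a return-delimited list of keys, and a multi-key relationship from gKey to UUID matches any key in that list, which is what lets the summary fields in step 6 aggregate exactly the found set.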
For the SQL script, I'm filling gKey like a list value or single row per record like a virtual list? Maybe a different way of asking is, will multiple UUID's be contained in a single "cell" that creates a form of aggregation. I'm new to this and trying to understand. Could you provide a little more detail to points 6 and 7?
// Seed script: loads TMDB-style JSON dumps and repopulates the Tv and Movies collections.
const dataTV = require("./tvData.json");
const data = require("./movieData.json");
const Tv = require("../models/Tv");
const Movies = require("../models/Movies");

const tvData = dataTV.results.map((item) => ({
  title: item.original_name,
  description: item.overview,
  rating: item.vote_average,
  releaseDate: item.first_air_date,
  image: item.poster_path,
  media: item.media_type,
}));

const moviesData = data.results.map((item) => ({
  name: item.title,
  description: item.overview,
  rating: item.vote_average,
  releaseDate: item.release_date,
  image: item.poster_path,
  media: item.media_type,
}));

// Clear and reseed both collections, exiting only after BOTH writes settle
// (exiting inside one callback can kill the process before the other finishes).
Promise.all([
  Tv.deleteMany({}).then(() => Tv.create(tvData)),
  Movies.deleteMany({}).then(() => Movies.create(moviesData)),
])
  .then(([tvShows, showMovies]) => {
    console.log(tvShows);
    console.log(showMovies);
    process.exit(0);
  })
  .catch((err) => {
    console.log(err);
    process.exit(1);
  });
With image maps, you can add clickable areas on an image. Image Maps in HTML The <map> tag defines an image map: an image with clickable areas (sometimes referred to as "hot spots"). The required name attribute of the <map> element is associated with the <img> element's usemap attribute and creates a relationship between the image and the map; in other words, the usemap attribute in the <img> tag references the <map> element's name. The name attribute assigns a name to the image map. Inside the <map> tag, you specify where the clickable areas are with the HTML <area> tag. With each set of coordinates, you specify a link that users will be directed to when they click within the area. Jon is a freelance writer, travel enthusiast, husband and father. However, image maps can also be created with some server-side activity. Server-side image maps were clunky, requiring a round trip to the web server to determine where to go based on the coordinates clicked in the image. In today's mobile-first environment, you should also take steps to make sure your website is accessible to mobile device users.
How does it work? The following attributes are standard across all HTML5 elements. If we were to use a map file to store the coordinates from our previous example, we would type the coordinates into a text file. This is how the map is tied to the image. In addition to poly, you can also use rect and circle to define shapes; the shape names rectangle (or rect) and circle (or circ) draw a rectangle or a circle respectively. Our tool was built from the ground up with modern browsers in mind, and sadly in turn doesn't support older browsers (sorry, IE8 and lower!).
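Putting the pieces above together, a minimal client-side image map might look like this (the image, coordinates and link targets are placeholders):

```html
<!-- usemap="#planetmap" ties the image to the <map> named "planetmap" -->
<img src="planets.jpg" alt="Solar system" usemap="#planetmap">
<map name="planetmap">
  <area shape="rect" coords="0,0,82,126" href="sun.html" alt="Sun">
  <area shape="circle" coords="90,58,10" href="mercury.html" alt="Mercury">
  <area shape="poly" coords="120,10,160,40,120,70" href="venus.html" alt="Venus">
</map>
```

Each <area> pairs a shape and its coordinates with the link a click inside that region should follow; rect takes two corners, circle takes a center and radius, and poly takes a list of vertex pairs.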
using System; using System.Collections.Generic; namespace CQ.LeagueOfLegends.Game { [Serializable] public class GameModeContext { public EGameMode gameMode; public ChampionData[] teamBlue; public ChampionData[] teamPurple; public class Builder { EGameMode gameMode; readonly List<ChampionData> teamBlue = new List<ChampionData>(); readonly List<ChampionData> teamPurple = new List<ChampionData>(); public Builder AddBluePlayer(ChampionData hero) { this.teamBlue.Add(hero); return this; } public Builder AddPurplePlayer(ChampionData hero) { this.teamPurple.Add(hero); return this; } public Builder SetGameMode(EGameMode mode) { this.gameMode = mode; return this; } public GameModeContext Build() { return new GameModeContext() { gameMode = this.gameMode, teamBlue = this.teamBlue.ToArray(), teamPurple = this.teamPurple.ToArray() }; } } } }
Action of universal covering deck transformations group Let $X$ be a topological space "good enough", and let $\tilde{X}$ be its universal covering (suppose that $X$ has one). Then for every connected covering $P$ on $X$ with fiber $F$, is it true that the group of deck transformations $Aut(\tilde{X})$ acts on $F$? I want to solve this problem using the properties of $\tilde{X}$, namely that it is connected and a normal covering, and the fact that it covers all the connected coverings of $X$. So I don't want to mention explicitly the monodromy action of the fundamental group. Almost always, these are two different actions. The monodromy action is by path lifting: lifting a given path at each of the points in the fiber and mapping each such point to the endpoint of the corresponding lift. This makes sense for any covering space $q:(Y,y_0) \to (X,x_0)$, where $Y$ is possibly disconnected. To get a well-defined action by the deck group $G=G(\widetilde{X})$, which can be identified with $\pi_1(X,x_0)$ via the universal cover $p:(\widetilde{X},\widetilde{x}_0) \to (X,x_0)$, on the fiber of an intermediate connected normal cover $q:(Y,y_0) \to (X,x_0)$, certain elements need to be identified: $G(Y) \cong G/N$, where $N= q_*\pi_1(Y,y_0)$. So, there is an action of $G$ on $F = q^{-1}(x_0)$ that factors through the action of $G/N$ on $F$. Here's an example where these two actions are different: Let $X = S^1 \vee S^1$ with oriented labeled edges $x$ and $y$, identified with generators of $\pi_1(X)$. And let $Y = \widetilde{X}$ be its universal cover, viewed as a graph (a 4-valent tree) with labeled oriented edges and its vertex set identified with $\pi_1(X)$. Consider the pair of adjacent vertices $(1,y)$. The monodromy action by the element $x$ sends this pair to $(x,yx)$, which are not adjacent in $Y$. The deck group action by the element $x$ sends the pair to $(x,xy)$. In general, the deck group action extends to a homeomorphism of $Y$.
But the monodromy action need not extend continuously. The above example is related to other actions: if $Y$ is a normal covering graph with fundamental group $N \trianglelefteq G$, where $G$ is a free group, then $Y$ is the right Cayley graph of the presentation $G/N$. The action of $G/N$ by deck transformations is the left action (by graph automorphisms). The monodromy action is the action on the vertex set by right multiplication (which does not extend to an action by graph automorphisms). Exercise #27 in Chapter 1 of Hatcher's Algebraic Topology book is worth a look. I am five years too late, but I don't think this is necessarily true. Hatcher turns the right monodromy action on the fiber into a left action via $$[\alpha]\cdot f = f \cdot [\bar \alpha]^{-1}$$ for all $f\in p^{-1}({x_0}), [\alpha]\in \pi_1(X,x_0)$. More explicitly, $[\gamma] \cdot f$ is equal to $\tilde{\bar\gamma}(1)$, where $\tilde{\bar\gamma}$ is the lift of $\bar\gamma$ starting at $f$, in which case the action of $x$ would send the pair $(1,y)$ to $(x^{-1},yx^{-1})$, no? (Assuming you are naming the vertices of the graph in a left-to-right manner.) If you're wondering when these two actions are the same (i.e. Hatcher exercise 1.3.27), here's an outline. First prove that both of these actions are sharply transitive, then prove that if two transitive actions agree on one point then they are the same. Lastly, prove the following: $\pi_1(X,x_0)$ is a $2$-group if and only if the actions agree on $\tilde {x_0}$.
RadzenNumeric removes decimal separator that does not match culture/locality, thus multiplying the value. Describe the bug First off: I'm not entirely sure if this is a bug or counts as expected behaviour, so please excuse me if I got it wrong. When using RadzenNumeric, the decimal separator is determined by the environment's CultureInfo / locality. For me that is Germany, which means RadzenNumeric expects a comma as the decimal separator. If the user enters a decimal point (which is also common in technical contexts, even in Germany), the point is removed after tabbing out of the field, thus multiplying the value by 10 to the power of the number of decimal digits. To Reproduce Steps to reproduce the behavior: Go to https://blazor.radzen.com/numeric Click into the field "Placeholder and 0.5 step" Type any decimal number, e.g. 12.34 (note: you might have to enter 12,34 depending on your locality to reproduce this) Tab out of the field The value is now converted to 1234, i.e. it got multiplied by a factor of 100 Expected behavior I would expect RadzenNumeric to replace the 'invalid' decimal separator with the one that is common for the system's locality. It would be even better if there was a flag (like a bool) that lets you tell RadzenNumeric not to consider the system's locality and instead provide a specific decimal separator ("Format" kind of lets you do that, but has the same issue when using a different separator). Any other separator should then be converted to the defined one. There should not be any unwanted multiplication. Screenshots Enter number using different decimal separator After moving out of the field the value is converted Desktop OS [Windows 10 Professional] Browser [Google Chrome] Version [109.0.5414.120] Additional context As it cannot be guaranteed that users of our software only use a comma as the decimal separator, it poses a problem that inputs using a dot as the decimal separator are automatically increased by several orders of magnitude.
Especially if a user does not pay close attention, thus not realising their input is faulty. This problem is even bigger when considering another (already known) issue, where you can click back into a field, edit it, tab out again and the format is not applied again (apparently only when using @bind-value, which is required in our use case). That means the field shows the correct value (e.g. 12.34) which then is converted (e.g. to 1234) without the user even noticing.

You could use the Format property to define a format of your own that is then used for parsing and formatting.

If this is a bug, it is a bug in .NET. The following decimal.Parse code demonstrates the same behavior without Radzen:

```csharp
var s = "45.67";
var ci = CultureInfo.GetCultureInfo("de-DE");
var result = decimal.Parse(s, NumberStyles.Number, ci); // result is 4567
```

@pianomanjh You have it backwards: as a German you enter 45,67 into the Radzen control and expect it to be saved as 45.67, but it saves it as 4567. .NET itself does not have the bug (notice the comma as decimal separator):

```csharp
var s = "45,67";
var ci = CultureInfo.GetCultureInfo("de-DE");
var result = decimal.Parse(s, NumberStyles.Number, ci);
Console.WriteLine($"result: {result}"); // result is 45.67
```

ah, ok. The Culture is not being applied. RadzenNumeric gives preference to a CascadingParameter called DefaultCulture, and if that isn't set uses the CurrentCulture. It behaves properly in the sample if the culture is manually applied (replace and run with this):

```razor
@using System.Globalization

<div class="rz-p-12 rz-text-align-center">
    <RadzenNumeric Culture=@culture Placeholder="0.0" Step="0.5" @bind-Value=@value
                   InputAttributes="@(new Dictionary<string,object>(){ { "aria-label", "enter value" }})" />
</div>

@code {
    double? value;
    CultureInfo culture = CultureInfo.GetCultureInfo("de-DE");
}
```

If using Blazor Server, I imagine the culture applied will be that of the machine hosting it.
I'm not sure if Radzen has a solution for applying the correct culture based on the user's machine. Indeed, it seems your application is not properly localized. .NET will behave in the same way unless the current culture is set.
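The magnification described in the issue follows directly from the de-DE number convention, where '.' is a grouping separator and ',' is the decimal separator. A minimal Python sketch of that parsing rule (parse_de is a hypothetical helper for illustration, not part of Radzen or .NET):

```python
def parse_de(s: str) -> float:
    """Parse a number string the de-DE way: '.' groups thousands and is
    stripped, ',' is the decimal separator."""
    return float(s.replace(".", "").replace(",", "."))

print(parse_de("45,67"))  # 45.67  -- what a German user types
print(parse_de("45.67"))  # 4567.0 -- dot treated as grouping, value x100
```

The same silent stripping is why `decimal.Parse("45.67", NumberStyles.Number, de-DE)` yields 4567 in the .NET snippet above.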
GITHUB_ARCHIVE
Subject: Re: Floppy driver To: Mike Schwartz <firstname.lastname@example.org> From: Gregory Kritsch <email@example.com> Date: 03/01/1994 15:34:18 > At 12:24 PM 3/1/94 -0500, Gregory Kritsch wrote: > I pretty much said what I had to say... I see no reason why to do anything > in netbsd (or any unix development) that could be done better. Just > because unix > is shit in some implementations doesn't mean that netbsd should also be > shit. The key is to maintain a high level of compatibility so we can > easily port software from other unices (or written for netbsd). And we > want unimplemented > (so far) parts of netbsd to work as device drivers are implemented (MSDOS FS, > for example). To quibble over 3x512 bytes is pretty stupid, IMO. I'm not arguing that you have a bad idea - in its own way, it's a good idea. It's not an idea that I would personally choose, but that by no means makes it My opinion on design is that one should place as few constraints as possible on a system. In my view (and yes, this is much more a nitpicking point than a majorly significant one), constraining users to allocate 3 sectors so it looks like an AmigaDOS disk is a bad idea. Allowing them that choice is a good idea. An additional note to the original idea - is a similar scheme possible for MS-DOS floppies, which could be plugged in through the d partition of the WD sector format driver? I will admit to being quite ignorant of the actual semantics of device programming under NetBSD. I don't think it would be significantly more work to implement 4 partitions, as compared to 2 or 3. If I'm wrong, would someone please correct me (someday I'll want to be non-ignorant of the actual semantics of device programming, hopefully within the next > As for fdformat, it has to do NOTHING special to handle different flavors of > trackdisk formatted disks.
If it writes sector 0 and the two sectors on track > 40 the _right_ way, when you ADOS format it (we will need an ADOS/newfs anyhow) > the disklabel and bam will be initialized for ADOS filesystem. If you don't > ADOS format it, you can tar 880K-1536 bytes on the disk and still use ADOS to > diskcopy it, do ADOS info on it, etc. Okay, sorry, you're right. The special code we need will be in newfs-type programs, not fdformat (it's been a while since I've had a Sun to play with). I'd like to say that, in general, all the ideas that have been discussed so far about floppy disk drivers have been very interesting. Everything from just faking a disklabel for the whole disk through to floppies that have multiple partitions (hey, can anyone think of a way to make a BSD partition look like a file on an AmigaDOS disk - might be useful for the installation distribution). Even the idea of uniquely identified disks being placed in multiple drives was interesting, if (again, in my opinion) a little impractical in some situations (and no, please don't argue the point with me). Rather than spending N days here quibbling about what ideas to implement, why don't we try to come up with a mechanism whereby we can implement ALL of these
OPCFW_CODE
DesignCon 2020 schedule announced Friday, November 8, 2019 DesignCon has released the schedule for 2020, featuring 14 refined tracks and over 120 sessions covering topics such as improving power distribution networks and machine learning applications for electronics, signal, and system design. DesignCon announced its 2020 conference schedule, presenting 14 newly refined and reorganized tracks covering a broad range of highly technical sessions, boot camps, tutorials, and more to fit the needs of the hardware design engineering community. Hot topics of interest for this year's programming include signal integrity (SI) and power integrity (PI) for single- and multi-die, interposer and packaging designs, and modeling and analysis of interconnects, featured within a breadth of sessions. DesignCon returns to Silicon Valley for its 25th year, taking place January 28-30 at the Santa Clara Convention Center. DesignCon's premier educational conference is curated by the Technical Program Committee (TPC), an expert panel of more than 90 industry professionals who review and update the curriculum each year to meet the needs of the ever-evolving high-speed communications and semiconductor industry, as they have for 25 years running. With more than 100 technical paper sessions, panels, and tutorials spanning 14 tracks, DesignCon's three-day conference program covers all aspects of hardware design, including high-speed serial design, machine learning, and much more. For more in-depth information on each track, please visit here. DesignCon 2020 will additionally welcome a number of leading industry experts as panel and session speakers, including recently elected I/O Buffer Information Specification (IBIS) Open Forum chair, Randy Wolff. As chair, Mr. Wolff provides leadership for this influential discipline and is a valued partner of DesignCon.
“DesignCon provides unrivaled access to today’s leading experts who are advancing high speed communications, technology and design across a variety of applications,” Wolff said. “To continue solving problems and enabling new technologies, a broad community of skilled engineers need to come together to share a breadth of knowledge and information—DesignCon is the ideal forum for this exchange, and IBIS is proud to be a partner and to support the continued innovation within this industry.” Featured conference content of interest at DesignCon 2020 includes: Under the Hood: Understand the Software that Drives Electromagnetic Simulation Tools (Boot Camp) In this full-day boot camp, presented by SI experts David Correia and Raul Stavoli from Carlisle IT, attendees will learn different numerical techniques ranging from computational electromagnetics to frequency and time-domain conversions. See how S-parameters are created in a full-wave solver, how they can be converted between time and frequency domains, and what type of information is relevant in each domain. The boot camp also covers how to start with a simple finite-difference time-domain solver, followed by a finite-element solver in the frequency domain, to show the advantages and disadvantages of each method. Finally, the boot camp will address the information contained in an S-parameter file, and how to extract relevant content in both frequency and time domains. Open-Source Software Tools for Signal Integrity (Tutorial) This session will introduce engineers to a Python-based open-source software tool called SignalIntegrity, released last year on the Python Package Index. This software is ideal for calculating single-ended or mixed-mode s-parameters of interconnected networks, frequency and time-domain de-embedding, and linear simulation of systems.
Peter Pupalaikis, vice president, Advanced Technology Development, Teledyne LeCroy, will walk through several use cases, teaching how to use the tool to solve various problems, along with providing other educational content on s-parameters. Automatic Channel Condition Detection & Tuning Using Machine Learning Surrogate Models for 56G PAM4 Channels (Technical Session) Machine-learning expert Chris Cheng presents the follow-up to their previous paper on accelerating channel optimization using principal component analysis. In this session, Cheng will create families of surrogate models in the reduced-dimension PCA space based on various channel conditions. When a new system topology is encountered, random points within the PCA space are sampled. The resulting performance is compared against the families of surrogate models and the closest solution is considered the nearest channel condition model. The optimal operating point can then be easily set based on the pre-computed optimal setting for that surrogate model. Alternatively, the precomputed settings can be used as seed values for circuit-level auto-tuning. Component Design Specification Study for Electrical Serial Links Beyond 112G (Technical Session) Based on compatibility and scalability requirements, electrical solutions for beyond 112Gbps need to be evaluated in PCB-based, orthogonal and cable-based backplane configurations. However, the design goals of the key components for beyond 112G are unclear. This paper serves as a trailblazer to explore the key component requirements. A top-down method is introduced for decomposing end-to-end link performance requirements from Salz-SNR analysis under PAM-based modulation schemes down to the performance requirements for each individual component, such as connector, cable and PCB. Two sets of component design guidelines for 224G copper transmission are proposed for smaller chassis/boxes and large chassis as examples.
A series of highly esteemed industry experts from Xilinx, ZTE, and more will guide attendees through this technical session. Optimized Wireless System Design with Minimal RFI Using Antenna Near Field Approach (Technical Session) From the Amazon Lab126 team: RF interference is one of the major problems in modern-day consumer electronics. This deep dive into their paper presents a workflow that provides a new perspective on RFI mitigation by observing the antenna's near field as opposed to manipulating the noise source. By making changes to the antenna structure and its near field characteristics, considerable RFI improvement can be achieved. DesignCon 2020 is also supported by the Institute of Electrical and Electronics Engineers (IEEE), offering its accreditation to conference attendees. Each conference hour is equivalent to one professional development hour (PDH), and 10 PDHs result in one continuing education unit (CEU) and an official IEEE certificate. IEEE accreditation can be used to meet training requirements, stand out to future employers, and maintain an engineering license.
OPCFW_CODE
Automated cryptocurrency trading is the ultimate solution for those willing to spend less time in front of the computer and for those looking for a gradual return. Bitsgap is best known for its automated trading bots on the cryptocurrency spot market. It has recently announced the launch of a “Combo bot”, which has been created for the cryptocurrency futures market. The advantage of the futures market is that you can generate returns not only in a rising market but also in a falling one. As the price declines, you can configure the Combo bot to take advantage of a plunge by selecting the “Short” strategy. The bot will execute the short-sell position. Conversely, if you expect the market to rally, then you have an option to select a “Long” strategy and the bot will open a buy position. Moreover, in futures trading you can open leveraged positions. For example, you can open a $1000 position having only $100 on your balance, thanks to 10x leverage, which implies trading with borrowed money. Leveraged trading is tricky because your risk is also 10x. In the Combo bot you can set leverage up to 10x in “Isolated margin” or “Cross margin” modes. How does the Combo bot generate returns? 1. The technology behind the “Combo bot” is simple and effective. It is a combination of GRID and DCA algorithms. In a “Short” strategy the bot executes buy GRID orders to lock in returns as the price falls. DCA short-sell orders are located above the market price, and if the price rises, the bot will execute them to adjust the entry price (dollar-cost-averaging effect). 2. As the “Combo bot” can trade in two directions, the “Long” strategy is a perfect solution to achieve maximized returns in a rising market. As the price goes higher and establishes new higher highs, the bot executes GRID sell orders to lock in returns. When the price swings downwards, the bot executes DCA buy orders to adjust the entry price (dollar-cost-averaging effect). How to analyze the performance of my Combo bots? 1.
A sophisticated combination of key metrics provides users with major insights into their bots' performance. If you have launched several bots, then in “Sum. value change” you can see the overall realized return in %. 2. “Sum profit” depicts the total realized return in USD value. 3. In the “Positions” section there is information about currently open futures contracts, including the unrealized return or loss in USD value. In the example below, we have 2 “Long” Combo bots with 10x leverage. Other crucial metrics like “Margin ratio” and “Liquidation price” are also provided; this is your risk exposure. Risk management is essential! Trading futures contracts can bring you insane returns, but the cost for that is the substantial risk that you take by using leverage. Make sure you fully understand the underlying risks. At Bitsgap you can learn about the spot market automated bots in a risk-free demo mode where you have virtual money to trade with. A good recommendation would be to experiment with Bitsgap's time-tested spot bots to get yourself familiar with the GRID algorithm before trading the Combo bot.
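The long-strategy mechanics described above (GRID sell orders above the entry price to lock in returns, DCA buy orders below it to average down the entry) can be sketched in a few lines of Python. The function name, step sizes, and level counts here are illustrative assumptions, not Bitsgap's actual algorithm:

```python
def combo_long_levels(entry_price: float, grid_step_pct: float,
                      dca_step_pct: float, n_grid: int = 3, n_dca: int = 3):
    """Illustrative long-strategy Combo levels (not Bitsgap's real algorithm).

    GRID sell levels sit above the entry to lock in returns as the market
    rises; DCA buy levels sit below it to average down the entry price.
    """
    grid_sells = [entry_price * (1 + grid_step_pct / 100) ** i
                  for i in range(1, n_grid + 1)]
    dca_buys = [entry_price * (1 - dca_step_pct / 100) ** i
                for i in range(1, n_dca + 1)]
    return grid_sells, dca_buys

sells, buys = combo_long_levels(100.0, 2.0, 2.0)
# 3 sell levels above 100.0 and 3 buy levels below it
print(sells, buys)
```

A short strategy would mirror this: GRID buy levels below the entry and DCA short-sell levels above it.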
OPCFW_CODE
In Lucene 9.7.0 we added support that leverages SIMD instructions to perform data-parallelization of vector similarity computations. Now we’re pushing this even further with the use of Fused Multiply-Add (FMA). What is FMA Multiply and add is a common operation that computes the product of two numbers and adds that product to a third number. These types of operations are performed over and over during vector similarity computations. Fused multiply-add (FMA) is a single operation that performs both the multiply and add operations in one - the multiplication and addition are said to be “fused” together. FMA is typically faster than a separate multiplication and addition because most CPUs model it as a single instruction. FMA also produces more accurate results. Separate multiply and add operations on floating-point numbers have two roundings: one for the multiplication, and one for the addition, since they are separate instructions that need to produce separate results. That is effectively round(round(a * b) + c). Whereas FMA has a single rounding, which applies only to the combined result of the multiplication and addition. That is effectively round(a * b + c). Within the FMA instruction the a * b produces an infinite precision intermediate result that is added with c, before the final result is rounded. This eliminates a single rounding, when compared to separate multiply and add operations, which results in more accuracy. Under the hood So what has actually changed? In Lucene we have replaced the separate multiply and add operations with a single FMA operation. The scalar variants now use Math::fma, while the Panama vectorized variants use FloatVector::fma. If we look at the disassembly we can see the effect that this change has had. Previously we saw this kind of code pattern for the Panama vectorized implementation of dot product. The vmovdqu32 instruction loads 512 bits of packed doubleword values from a memory location into the zmm0 register.
The vmulps instruction then multiplies the values in zmm0 with the corresponding packed values from a memory location, and stores the result in zmm0. Finally, the vaddps instruction adds the 16 packed single precision floating-point values in zmm0 with the corresponding values in zmm4, and stores the result in zmm4. With the change to use FloatVector::fma, we see the following pattern: Again, the first instruction is similar to the previous example, where it loads 512 bits of packed doubleword values from a memory location into the zmm0 register. The vfmadd231ps instruction (this is the FMA instruction) multiplies the values in zmm0 with the corresponding packed values from a memory location, adds that intermediate result to the values in zmm4, performs rounding, and stores the resulting 16 packed single precision floating-point values in zmm4. The vfmadd231ps instruction is doing quite a lot! It’s a clear signal of intent to the CPU about the nature of the computations that the code is running. Given this, the CPU can make smarter decisions about how this is done, which typically results in improved performance (and accuracy, as previously described). Is it fast? In general, the use of FMA typically results in improved performance. But as always, you need to benchmark! Thankfully, Lucene deals with quite a bit of complexity when determining whether to use FMA or not, so you don’t have to: things like whether the CPU even has support for FMA, whether FMA is enabled in the Java Virtual Machine, and only enabling FMA on architectures that have proven to be faster than separate multiply and add operations. As you can probably tell, this heuristic is not perfect, but it goes a long way to making the out-of-the-box experience good. While accuracy is improved with FMA, we see no negative effect on pre-existing similarity computations when FMA is not enabled. Along with the use of FMA, the suite of vector similarity functions got some (more) love.
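The single-rounding semantics described above can be simulated in plain Python. The standard library only gained math.fma in Python 3.13, so this sketch uses Fraction to compute the exact intermediate product before a single final rounding; fused and separate are hypothetical helper names for illustration:

```python
from fractions import Fraction

def separate(a: float, b: float, c: float) -> float:
    # Two roundings: one after the multiply, one after the add.
    return (a * b) + c

def fused(a: float, b: float, c: float) -> float:
    # FMA semantics: exact a*b + c, rounded once at the very end.
    return float(Fraction(a) * Fraction(b) + Fraction(c))

x = 0.1
# fused() recovers the rounding error of x*x exactly; separate() loses it.
print(separate(x, x, -(x * x)))  # 0.0
print(fused(x, x, -(x * x)))     # tiny non-zero residual
```

This `fma(x, x, -x*x)` trick is the classic demonstration that the fused form carries more precision than separate multiply and add.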
All of dot product, square distance, and cosine distance, in both the scalar and Panama vectorized variants, have been updated. Optimizations have been applied based on the inspection of disassembly and empirical experiments, which have brought improvements that help fill the pipeline and keep the CPU busy; mostly through more consistent and targeted loop unrolling, as well as removal of data dependencies within loops. It’s not straightforward to put concrete performance improvement numbers on this change, since the effect spans multiple similarity functions and variants, but we see positive throughput improvements, from single-digit percentages in floating-point dot product, to higher double-digit percentage improvements in cosine. The byte-based similarity functions also show similar throughput improvements. In Lucene 9.7.0, we added the ability to enable an alternative, faster implementation of the low-level primitive operations used by Vector Search through SIMD instructions. In the upcoming Lucene 9.9.0 we built upon this to leverage faster FMA instructions, as well as to apply optimization techniques more consistently across all the similarity functions. Previous versions of Elasticsearch are already benefiting from SIMD, and the upcoming Elasticsearch 8.12.0 will have the FMA improvements. Finally, I’d like to call out Lucene PMC member Robert Muir for continuing to make improvements in this area, and for the enjoyable and productive collaboration.
In addition to the use of FMA, the suite of vector similarity functions was also improved. Dot product, square distance, and cosine distance, in both the scalar and Panama vectorized variants, have all been updated. Optimizations were applied based on inspection of the disassembly and on empirical experiments; they help keep the CPU busy, mainly through more consistent and targeted loop unrolling and through the removal of data dependencies within loops. Loop unrolling is an optimization technique that reduces the number of loop iterations by increasing the number of statement instances in the loop body; it improves execution efficiency because it cuts loop-control overhead and may allow more data to be processed per iteration. Removing data dependencies within loops means restructuring the loop code to reduce or eliminate dependencies that would otherwise introduce delays between iterations, since a later iteration may have to wait for an earlier one to finish processing its data. Making iterations more independent improves throughput. Taken together, these techniques improve performance on loop-heavy code, especially when large amounts of data and complex computation are involved. Concrete performance numbers are not straightforward to determine, because the effect spans multiple similarity functions and variants, but throughput improvements were observed, ranging from single-digit percentages for floating-point dot product to double-digit percentages for cosine. The byte-based similarity functions show similar throughput improvements.
OPCFW_CODE
When you are breeding two dragons together, the possible results will be listed into 3 different types of slots, each with a limit of 10. In total there are 30. Both Unique and Special Dragons will never be listed inside one of these slots. When there are more than 10 dragons that can be listed into a difficulty, the dragons will be chosen randomly. All slots are locked by default. In order to unlock them you will need a specific amount of rate value. This value is in fact the current earning of your two dragons: |Rate Formula||Unlock Requirement| |If level is less than or equal to 10: rate = start coin + add coin * (level - 1); if it is more than 10 |If your rating is below 50, all slots will be unlocked, which gives every difficulty a 33.33% chance.| Process 1: Type Cycle The Type Cycle consists of 3 stages. The system will only check for opposites on stage 1. When an opposite is detected, that stage will be ignored and the system will skip to stage 3. At the end of this process, when no dragons have been generated on the list, an elemental that contains the first element of the left parent will be generated on the list. In this process, only type combinations are generated. - Containing : P1(e1)/P2(e1) if opposite = none; - Containing : P1(e1)/P2(e2) if opposite = none; check if P1(e1) = P2(e2) - Containing : P1(e1)/P2(e3) if opposite = none; check if P1(e1) = P2(e3) - Only on stage one will it check the opposite list. If the containing types are equal, an elemental of that type is generated. - Containing : P1(e2)/P2(e1) - Containing : P1(e2)/P2(e2) - Containing : P1(e2)/P2(e3) - If the containing types are equal, an elemental of that type is generated.
- Containing : P1(e1)/P2(e1)/P2(e2) - Containing : P1(e2)/P2(e1)/P2(e2) - Containing : P1(e3)/P2(e1)/P2(e2) - Containing : P2(e1)/P1(e1)/P1(e2) - Containing : P2(e2)/P1(e1)/P1(e2) - Containing : P2(e3)/P1(e1)/P2(e2) Process 2: Checking Condition - If both dragons from this combination are rare hybrids, the rare hybrid list will be added to the array list. This list is located in your user info config, which is stored server-side. - If both dragons from this combination are legend dragons, the legend list will be added to the array list. This list is located in your user info config, which is stored server-side. If rule 3 is active, the results from this rule will not be added to the array list. - If Parent A and Parent B contain the same dragon, it is possible to get that dragon. This dragon will be added to the array list. - If both dragons from this combination are legend or pure (wildcards), the following dragons will be added to the array list: elementals, hybrids and rare hybrids. If the type is pure and rule 3 is activated, the wildcard effect will not be added to the array list. - If one of the parents is a pure elemental, then this element will be read as 2 elements: Pure and Elemental, and process one will be repeated. - If both parents are pure elementals, both pure elemental types will split in two like step 5 and will repeat one more time. Process 3: Generate - The array list that is generated from process one will be searched for dragons that match a different order of elements. If found, these dragons will be added to the array list. - Now both array lists from processes 1 and 2 are combined into one big list. - The total rating of the parents will be calculated to see which slots are unlocked. - The system picks a difficulty. This percentage is determined by the number of slots that are unlocked. - The unlocked slots of that difficulty will be filled randomly with dragons from the array list. This random factor is based on the internal id of a dragon.
- The locked slots of that difficulty will be removed. - The system will randomly pick a slot and save the dragon id on the server. Process 4: Deus Vault (beta) - If the parents' combination is matched with the DV list, the difficulty slots will be divided: - If the recipe gives 100%, you will get a 100% chance to get that unique dragon. - If the recipe only gives 70%, there is a 30% chance you will get a dragon from the difficulty rate list from process 3 and a 70% chance that you still get that unique dragon from the recipe. - The list will not result in any elemental dragons if both parents don't have the same element listed. - Standard elementals: e, w, f, p, m, i, el, d, l and pu. On breeding, that type is read as a string, and every pure elemental is read as two types, but all of them are grouped into one type. - If you breed two legend dragons that are the same, the only legend dragon that will be listed on the array list is that dragon. - Breeding pure with an elemental will never generate a legend or another pure dragon. - If you breed a pure elemental together with any other dragon, it will generate elemental, hybrid and rare (matched with which element the pure has). - If you breed a pure with a pure elemental, it is possible to get a pure too, since pure + (pure + elemental); just remember pure + pure = pure, elemental A + elemental A = elemental A. It is possible to get another pure elemental with this combination. - To get a legend dragon you need 2 rare hybrids. For a pure you need 2 legend dragons. - To get a pure elemental, you need pure + elemental or pure elemental + pure elemental. - This Deus Vault system is still buggy.
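The documented branch of the rate formula above can be sketched in Python. Note that earning_rate is a hypothetical helper name, and the wiki's text for the level > 10 branch is cut off, so only the stated case is modeled:

```python
def earning_rate(start_coin: int, add_coin: int, level: int) -> int:
    """Per-dragon earning rate, per the wiki's formula for level <= 10."""
    if level <= 10:
        return start_coin + add_coin * (level - 1)
    # The source's formula for level > 10 is truncated, so it is not modeled.
    raise NotImplementedError("level > 10 branch not given in the source")

# Example: start coin 50, add coin 5, level 7 -> 50 + 5 * 6 = 80
print(earning_rate(50, 5, 7))  # 80
```

The combined rating of the two parents (the sum of their earning rates) is then compared against the unlock requirements to decide which slots open.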
OPCFW_CODE
#100DaysOfWeb in Python Transcripts Chapter: Days 17-20: Calling APIs in Flask Lecture: Passing variables between Flask app code and a Template 0:01 Before we get started in actually playing with APIs we just need to quickly, quickly cover off how to pass variables from our Flask app 0:12 the actual Python code, through to that Jinja template to display it on the webpage. So right now we've just been passing off 0:20 standard text through print commands. This time we are actually going to pass off a variable that can change. 0:28 So in this example we're going to use the date. So using a date time string we're going to pass that off to the webpage and then display that. 0:39 And that's a great example because the date time string will change every time the page is loaded because the time keeps ticking, right? 0:47 So to do that let's edit first our routes dot PY file. That's where we're actually going to do the number crunching or the code so to speak. 1:04 Once we're in this file because we are going to actually put the date time returning on our index page that's where we want to display it. 1:15 We need to put our code within our index function. As this is just a simple date time though if you really think about it we just need 1:24 to return that string so there's not much that we need to do in here. All we really have to do to keep it super-simple 1:31 is import date time and then assign the time to a variable. So let's import datetime.now from datetime import datetime. 1:43 And then let's just create a little time object called time_now because we'll take the time_now. And then we will just quickly do datetime.today. 1:55 And what that will do is it'll get datetime dot today, convert it to a string, and then assign it to the time_now variable. 2:04 And then all we have to do is return that. So if you think about it we are returning with our render template index.html. 2:14 We are also going to pass off a variable.
So that will go in these brackets here just after this one. 2:23 Now, just let me type this out before I explain it. Now, this looks a bit confusing but what we're doing is we're actually sending our time 2:34 now variable, this one here, to our Jinja template as a variable called time, as an object called time, okay? 2:43 You might see in other Flask apps where these are named the same thing. In other words, time_now equals time_now would be a traditional way of doing it 2:53 but just for demonstrative purposes I'm going to name them differently and that is so that you can see which variable is which. 3:02 So this one here, time_now, is the one that we're specifying up here in our Python script. And time is what we are going to call in our Jinja template. 3:12 All right, so that's all we have to do to update our routes dot PY file to send the date time object over to our Jinja template. So let's save that. 3:23 Let's head over to our templates folder. And let's have a look in index.html. As you can see from our previous Flask section 3:36 we have our index, everything that's unique to our index.html page, in between these block content tags. 3:43 And all we want to do is have a message that says that we are printing the time. So we can do that underneath here. 3:51 Let's just throw another p tag in or how about we throw a h1 tag here. The current date is, and as we learned 4:04 from the previous videos, anything that is Python code is going to be within these types of brackets and then we close it off here. 4:15 Now as I mentioned in the previous file the variable that we are sending off to our Jinja template is the word time, okay? 4:23 So we actually put in two of these brackets to indicate that we're playing with a variable here. I know it's hard to see with the highlighting 4:29 there so let's just space that out. And we throw in the word time. That's our variable. 
So because this is an actual object or a variable 4:38 that we are calling from our Python script it goes between these double curly brackets. And that's it. So the current date is, and then we can throw 4:49 in that time, and we save that, and we run our Flask app. And with that running let's bring up the browser and take a look. 5:04 All right, and here's our website running. We are on the 100 days page up here so let's click the home tab and this will initiate the index.html route. 5:14 And there we go, the current date is 2018 October the 11th, and there's our timestamp. Now, one thing you will notice is if we move 5:26 back to the 100 days page and then move back to our home tab, the actual time is not changing. And the reason this won't change 5:36 is because of where we've specified that assignment variable. If we just cancel our script we go into routes 5:47 dot PY and this is being specified at this level. It's being specified outside the route outside the index function. 5:58 And what that means is it is only being assigned when the Flask app is run. So if we cancel the Flask app and then rerun it 6:07 this will update with the time. All right, so that is why that won't change until you close your app and relaunch it. 6:14 If we actually want this to run every time that page is loaded, we have to put that within the index route, the index function here. 6:24 That way every time this is called it activates down here. So let's just copy and paste that. All right, there we go. 6:31 And right quick, Flask run, bring up our web browser. All right, back to the home tab. There's the time, 9:57 p.m. 23 seconds. 6:45 Launch it again, it's gone to 27 seconds. And that's how we can redo that every single time. And that is how we pass a variable off 6:56 to our Jinja template. Just remember we have the render template that we're returning and we're just specifying the variable there. 
7:04 We put the actual Python variable on the right and we assign it to a variable that we want to call within the Jinja template.
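The gotcha at the end of the lecture (a module-level assignment runs once at startup, while an assignment inside the route function runs on every request) can be reproduced without Flask at all; the names here are illustrative, not from the course files:

```python
import time
from datetime import datetime

# Evaluated once, at import time -- like code placed at the top of routes.py,
# outside the index() function.
STARTUP_TIME = str(datetime.today())

def index() -> str:
    # Evaluated on every call -- like code placed inside the route function.
    return str(datetime.today())

first = index()
time.sleep(0.01)
second = index()
print(first != second)        # True: the per-call value keeps ticking
print(STARTUP_TIME <= first)  # True: the module-level value was fixed at startup
```

In the Flask version, the same two lines sit either above the `@app.route` decorator (computed once) or inside the `index()` function (computed per request), which is exactly the difference the video demonstrates.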
OPCFW_CODE
Components Components are reusable, interactive building blocks that you can compose into larger ones to make up entire screens, interfaces and apps. To create a new component, you can navigate to the Components panel and click the New Component button located at the bottom. You can create two types of components: Design and Code. A design component is created by selecting an element on your canvas. A code component is created from scratch, and you’ll be presented with a new .tsx file. Design First, give your design component a unique name in the Layer Panel. Then, right click on the layer and select Create Design Component. You can also create new design components from a selection by hitting CMD + K. The design components you see in the panel are referred to as master components. All of your own master components live on the canvas: removing them from the canvas will also remove them from the Components panel. If you paste, duplicate or drag a component from the panel to the canvas, you’ll create an instance of that component. All changes made to the master will be reflected within each instance. You can still override the properties of any single instance and customize things like color, opacity and radius, or layout options like position and size. Don’t like the end result? Right-click and hit Reset Overrides to revert styling back to match the master component. React components use JSX, an HTML-like markup language. Below you’ll find a very basic example of a React component. Notice how the render() method returns something very close to plain HTML and CSS. As you learn, we highly recommend checking out the official React documentation.

import * as React from "react";

export class Example extends React.Component {
  render() {
    return <div style={{ color: "blue" }}>Hello World!</div>;
  }
}

Creation To create a new code component, head to the Components panel in the left menu bar and click New Component. Then, select Code, pick a name (no spaces) and click Create and Edit.
Framer X relies on an external editor for writing code, so if you have a code editor installed, it should automatically open. If you don't, any editor will work, though we strongly suggest downloading one that supports TypeScript (for auto-complete). We recommend VSCode by Microsoft with the optional Prettier extension installed so your code is always formatted consistently. Now, when you open up the code editor, you will see the code powering the component, including "hello world" displayed in purple text. Dragging the same component onto the canvas will make each one an instance of the original code component, no matter where you position them on the canvas. Every time you make a change to the CSS styling or text in the code editor, your saved changes will be reflected across the master and all its instances.

Canvas

Interactive components like scrolling and click handlers don't animate on the canvas to avoid distracting you from the functional aspects of layout design. But you can test your interactive components at any time by hitting the Play icon to open the Preview window.

Layout and Sizing

Every component rendered on the canvas or Preview is wrapped in a frame, so it has the same layout rules as anything else on the canvas. Both the width and height are passed to the code component as props so you can use them.

import * as React from "react";

export class Example extends React.Component {
  render() {
    return <div>{this.props.width} x {this.props.height}</div>;
  }
}

Displaying Canvas Elements

Because the code determines the contents of code components, you cannot directly edit code component contents on the canvas. But often you'd like a code component to display something else from your canvas. For example, a Card component could render an image with some overlay that is somewhere else on your canvas. You can accomplish this in two ways. React props are basically the attributes for a component to display, and one of those is a list of children, or its contents (like in the DOM, or as you see in the layer panel).
Normally these are determined from your code, but in Framer you can set any Frame on your canvas as children for your component. Let's look at an example.

import * as React from "react";

interface Props {
  width: number;
  height: number;
}

export class Example extends React.Component<Props> {
  render() {
    return (
      <div style={{ width: this.props.width, height: this.props.height }}>
        <h1>Hello</h1>
        {this.props.children}
      </div>
    );
  }
}

Framer detects you are using the props.children property in your render method and automatically adds a connector to each instance of the component on your canvas (a purple dot on the right side, like the scroll tool). You can drag a connection from this connector to any Frame on the canvas and it will use it as its children to render.

Using canvas imports

You can easily import and use any named design component. The example below assumes you have a design component named Row.

import * as React from "react";
import { Row } from "./canvas";

export class Example extends React.Component {
  render() {
    return <Row title="Koen" />;
  }
}

Custom Properties

Code components become exponentially more powerful when they use propertyControls, which allow you to customize the user interface of your component directly within our properties panel. The only thing you have to do is add a static propertyControls property to your class with a descriptor. It will use defaultProps for the defaults and try to type-check your descriptors if you added type annotations for your props.

import * as React from "react";
import { PropertyControls, ControlType } from "framer";

interface Props {
  text: string;
}

export class Example extends React.Component<Props> {
  static defaultProps = {
    text: "Hello World!",
  };

  static propertyControls: PropertyControls = {
    text: { type: ControlType.String, title: "Text" },
  };

  render() {
    return <div>{this.props.text}</div>;
  }
}

Controls can be described by specifying one of the following types: Boolean, Number, String, Color, Image, File, Enum, SegmentedEnum, FusedNumber.

Boolean Control

Booleans use a segmented control. The segment titles are True and False by default, but these can be overridden using the enabledTitle and disabledTitle.
interface BooleanControlDescription {
  type: ControlType.Boolean;
  disabledTitle?: string;
  enabledTitle?: string;
}

Number Control

Number controls are displayed using an input field and a slider. The min and max values can be specified to constrain the output. The default step size is 1. When a step size smaller than 1 is entered, the output will be floats. When the unit type is set to %, the input field will display 10 as 10%.

interface NumberControlDescription {
  type: ControlType.Number;
  max?: number;
  min?: number;
  step?: number;
  unit?: string;
}

String Control

String controls are displayed using a single-line input field. A placeholder value can be set when needed.

interface StringControlDescription {
  type: ControlType.String;
  placeholder?: string;
}

Color Control

Color controls are displayed using a color picker popover and a number input for the alpha.

interface ColorControlDescription {
  type: ControlType.Color;
}

Image Control

Image controls are displayed using an image picker that shows a small preview. The component receives an absolute URL during rendering.

interface ImageControlDescription {
  type: ControlType.Image;
}

File Control

File controls are displayed using a file picker that shows the file name after selecting a file. The component receives an absolute URL during rendering. The allowedFileTypes is an array containing all allowed file types, like so: ["json", "obj", "collada"].

interface FileControlDescription {
  type: ControlType.File;
  allowedFileTypes: string[];
}

Enum Control

An enum control displays a pop-up button with a fixed set of string values. The optionTitles can be set to have nicely formatted values for the UI.

interface EnumControlDescription {
  type: ControlType.Enum;
  options: string[];
  optionTitles?: string[];
}

Segmented Enum Control

A segmented enum control is displayed using a segmented control. Since a segmented control has limited space, this only works for a tiny set of string values.
interface SegmentedEnumControlDescription {
  type: ControlType.SegmentedEnum;
  options: string[];
  optionTitles?: string[];
}

Fused Number Control

The fused number control is specifically created for border-radius, border-width, and padding. It allows setting a numeric value by either using a single input or four separate number inputs. The user can switch from one input to four by toggling a boolean.

interface FusedNumberControlDescription {
  type: ControlType.FusedNumber;
  splitKey: string;
  splitLabels: [string, string];
  valueKeys: [string, string, string, string];
  valueLabels: [string, string, string, string];
  min?: number;
}

Hiding Controls

Controls can be hidden by implementing the hidden function on the property description. The function receives an object containing the set properties and returns a boolean. In the following example we hide the inCall boolean control when the connected property is false.

interface Props {
  connected: boolean;
  inCall: boolean;
}

export class Example extends React.Component<Props> {
  static defaultProps: Props = {
    connected: true,
    inCall: false,
  };

  static propertyControls: PropertyControls<Props> = {
    connected: { type: ControlType.Boolean },
    inCall: {
      type: ControlType.Boolean,
      hidden(props) {
        return props.connected === false;
      },
    },
  };
}
Came across this, the "Gentoo Linux Install Script". It's a dialog-based console install script for Gentoo that aims to both take the pain out of installing Gentoo and let you do all of the configuration at the beginning of the install, so that you can then leave it to install unwatched. I downloaded it, gave it a quick look over and e-mailed the author with some usability ideas. He was very open to input and said he will use my ideas when he gets to improving the UI. I've been hoping for a way to make the Gentoo install more sane. It's not likely to be as easy as Redhat(TM) for quite a while, if ever, but using glis would make it easy enough for any Windows power user. I think that this is a great compromise between the gtk2 install of Redhat (using X on the install cd creates problems) and the needlessly hard install that Gentoo has atm. This fits in quite well with my plans for a gnome distro. I could make a custom stage 3 livecd with glis as the installer and you could be up and running with the gnome linux desktop in <30mins. Using Gentoo has the benefit that I can create a default package selection, but the user will have a massive collection of software at their fingertips - so they will get the sensible "Just Works" environment by default, but if they are that way inclined they can partake of the massive collection of software available. Also, from a development point of view - it's much easier to write ebuilds than to produce rpms. I just need to find out how I go about making a customised stage3 tarball. Stage3 is a tarball of enough binary packages to make up a full graphical system. It makes Gentoo quicker to install but you lose out on some of the optimisations. But you can still recompile everything later _after_ you have a running system, so I think that a stage3 install is the way to go. I think it's a pity that Zynot won't limit itself to embedded Linux and leave x86 to Gentoo.
I have plans to do some work with the Zynot gnome/gpe developer, but I think that both he and I still think that Gentoo is where it is at for the desktop.

Distributed Usability Testing

This is something that I have been thinking about for quite a while. Usability testing costs money, and Gnome is lucky enough to have Sun investing in its usability, but there is still a lot that can be done by us, the members of OSS projects. I started thinking about distributed usability testing when my girlfriend started using my pc to check her e-mails. I was using Galeon as my browser at the time (Epiphany now) and I filed 2 or 3 bug reports based on problems that she had - issues that had not really occurred to me but that made a lot of sense when she pointed them out. So what is Distributed Usability Testing? It's testing carried out in a casual, ongoing way. All of us have Windows-using friends and they are perfect to try out OSS software on. There are two ways to do this IMHO:

1. If you have a friend/family member who understands how important OSS is to you and is willing to put up with doing some actual tasks, then you can use them to perform an actual test, the way they are done in usability labs, except on a smaller scale. According to the statistics, I think you need to have 5 or 6 people doing a usability test to find most of the problems. So if you can find 6 hackers/users with a friend who is willing to help, then you have yourself a free usability test.

2. This way is more sneaky and less regimented. It involves doing usability tests on people who don't know they are being tested. The simplest example is the above story about my girlfriend: you just sit someone down in front of your open-source pc and let them try to use it. Make mental notes of the problems they encounter. Of course it is not always possible to bring people to your pc - so use a live cd like Knoppix. Tell them you want to show them Linux and see how well it works for them.
I'm not suggesting you be blatantly dishonest; a lot of the time you can just tell the person what you are doing. On the upside, you are mixing Linux advocacy with Distributed Usability Testing. To fully capitalise on this idea it would be necessary for some usability engineers (eg. the Gnome Usability Project) to make a guide to Distributed Usability Testing and to regularly publish usability tests for use by the testers. Another idea is to join forces with the soon-to-be-created Gnome Marketing project and produce a Gnome livecd suitable for increasing Gnome awareness as well as doing Distributed Usability Testing. I'm going away tomorrow, but when I get back I may propose this mental idea to the Gnome Usability list. Must get a better blog going next year - one that I can e-mail to like Jeff's.
Disengagement might present as one of the users physically withdrawing from the keyboard, checking email, or even falling asleep.

Project Euler. Though it's not a competition in the normal sense, Project Euler is an incredible way to challenge your coding mind. They supply a series of increasingly difficult mathematical and computational puzzles that will certainly stretch the bounds of your thinking.

A centralized system that permits people to book gas online is often a lifesaver. It is among the best Java project ideas to undertake and sell later on to enterprises. This system will go a long way in the future and change the way people book gas.

Essay Disclaimer: The services you supply are meant to support the buyer by supplying a guideline, and the resources presented are meant to be used for research or study purposes only.

The variable definition instructs the compiler how much storage is required to create the variable and where it will be stored.

Email: You can email your programming homework to us at firstname.lastname@example.org. After your solution is ready, it is sent to you over email from the very same id.

Probably one of the most beneficial Java project ideas for students. They can learn from personal experience and develop a system that enables students like them to access results with only one click. A centralized result system will save time and promote transparency.

Sana (London): "Huge thanks to Casestudyhelp.com. Formatting of the thesis was certainly very difficult for me, but you guys made it very simple. Keep it up. I've bookmarked your online case study assignment writing help website for future use. I was right to pick you guys to do my assignment in Australia."
When we start coding in any programming language like Java, C/C++, .NET or C#, we ordinarily get compile-time or run-time errors. To get the right output from a program, we need to check the program minutely. There are examples of programming languages all over the web, and computer coding can be learned with the assistance of programming examples.

If you are not wholly positive that you are meant to be a programmer, here are several indicators that will point you in the right direction. As with any creative endeavour, until the principles click in your head, it's going to be rough sailing. There are a few ways to ease that learning curve, however, and one of the most effective strategies is to get your hands dirty with a few side projects of your own.

Purchasing your bespoke programming assignment from the Ivory Research computer programming service is quick and easy. Simply complete our online order form, let us know your specifications and choose your required academic standard - 1st class, 2:1 or 2:2. We'll get back to you with a price for the project. You will also be able to ask for extras at no additional charge, including an abstract or executive summary, a contents page, specific designs and resources, and particular programming languages.

Simple Java projects are the ideal way to go in the final year, because they help students learn the fundamentals of Java well. Once they are well versed in the basic nuances of Java, they can aim to accomplish greater things in life.

Comments are the helping texts of your C statements and are ignored by the C compiler. Comments start with /* and end with */.

At first it was a little inconvenient when I sent him money, but Mr. Sarfraj is an absolutely amazing guy, who helped me with the successful completion of my project.
Godot 4.0 is an upcoming release of the game engine, gaining popularity among game developers in recent years. This new version of Godot is expected to impact the game development industry significantly. It could encourage developers to migrate from other game engines like Unreal and Unity. In this article, we will explore some of the critical features of Godot 4.0 and why it has the potential to be a game-changer for the industry. The game development industry has been dominated by two major game engines: Unity and Unreal Engine. Game developers widely use these engines to create games for PC, consoles, mobile devices, and VR platforms. However, there is a growing demand for alternative game engines that offer more flexibility, speed, and ease of use. Godot is an alternative game engine that has recently gained popularity among game developers. It is an open-source, free-to-use game engine that offers a wide range of features and tools for game development. Godot has been praised for its simplicity, flexibility, and ease of use, making it an attractive option for indie and AAA game developers. Godot 4.0 is the next major release of the game engine, and it is expected to bring significant improvements and new features. Here are some of the critical features of Godot 4.0: Godot 4.0 will introduce a new rendering backend based on the Vulkan API. Vulkan is a modern, cross-platform graphics API that offers high performance and low-level control over the graphics pipeline. With Vulkan rendering, Godot 4.0 will be able to deliver high-quality graphics with better performance and efficiency than before. Godot 4.0 will also introduce a new visual scripting system, allowing developers to create game logic and behaviour without traditional coding. Visual scripting is a more accessible way of creating game logic and behaviour, and it can be beneficial for non-programmers and beginners. 
Improved 3D Physics: Godot 4.0 will bring significant improvements to its 3D physics system. The engine will introduce a new physics engine based on the Bullet physics library, offering more accurate and realistic physics simulations. The new physics engine will also support soft body physics and deformable objects, allowing for more realistic and immersive game environments. Improved Audio System: Godot 4.0 will introduce a new audio system based on the OpenAL library. The new audio system will offer better performance and quality than the previous one and support more advanced features like spatial audio and occlusion. Godot 4.0 has the potential to be a game-changer for the game development industry for several reasons: Godot is an open-source game engine, meaning it is free to use and can be modified by anyone. This makes it an attractive option for indie developers and small studios that may not have the money to pay for expensive licenses for other game engines like Unity or Unreal. With the introduction of Vulkan rendering, Godot 4.0 will be able to deliver high-quality graphics with better performance and efficiency than before. This will allow developers to create more visually impressive games without sacrificing performance. Godot has been praised for its simplicity and ease of use, which makes it an attractive option for beginners and non-programmers. With the introduction of visual scripting, Godot 4.0 will become even more accessible, allowing more people to create games without traditional coding skills. The improved physics system in Godot 4.0 will allow developers to create more realistic and immersive game environments. This will be especially useful for games that require accurate physics simulations, such as racing games, platformers, and puzzle games. Godot has a growing community of developers passionate about the engine and actively contributing to its development.
This community support can be invaluable for developers starting out or facing technical challenges. Using Godot 4 or 3 depends on your specific needs and circumstances. Here are some factors to consider: Stability: Godot 4 is still in development and has yet to be considered stable, while Godot 3 has been stable for some time. If you need a stable game engine that you can rely on, then Godot 3 may be the better option. Compatibility: Godot 4 introduces several new features and changes that may not be compatible with existing projects or plugins. If you have an existing project that you want to continue working on, stick with Godot 3. Features: Godot 4 introduces several new features, such as Vulkan rendering and improved physics, that may be essential for your project. Godot 4 may be the better option if your project requires these features. Community Support: Godot 3 has a larger and more established community of developers, which means more resources and support are available. If you are new to Godot or game development in general, Godot 3 may be the better option for getting started. Godot 4.0 could be a game-changer for the game development industry. Its open-source, free-to-use model, high-quality graphics, improved physics system, and easy-to-use interface could attract many developers away from other game engines like Unity and Unreal. The introduction of visual scripting will make game development even more accessible to non-programmers and beginners, further expanding the potential user base for Godot. As Godot 4.0 approaches its release date, it will be interesting to see how the game development community receives it and how it impacts the industry.
Archive for the ‘Engineering’ Category

Thursday, July 9th, 2009

We’ve used test-driven development from the beginning of Caucho, almost 12 years now, and it heavily influences our development, refactoring, and also our release cycle. Today, we’re in the final two weeks of the release cycle for 4.0.1, which means passing our regression test suite and working through load testing. Each week in our release cycle is influenced by our TDD methodology. For Resin, we aim for an eight-week release cycle, and usually slip a week or two so it ends up being ten-ish weeks.

Monday, June 15th, 2009

There’s been a lot of hype around OSGi over the last year or two in the enterprise space. Last year even Caucho dallied with adding OSGi support to Resin, though we’ve abandoned the idea in the meantime. In this post, I’ll tell you what’s cool about OSGi, why we were initially attracted to it, why we eventually dropped it, and what we did instead. The more I talk to enterprise developers who’ve actually used OSGi, the more I hear this same story. Update: Rob Harrop informed me that there is an emerging specification called RFC-66 for enterprise web/OSGi integration.

Monday, June 15th, 2009

I’ve put together a CanDI binding pattern tutorial (pdf) for four major binding patterns: services, resources, startup, and plugin/extensions. Focusing on common CanDI patterns should show how CanDI is used in full applications like the SubEtha maillist manager, and avoid the temptation to focus on complicated features that only 1% of applications would ever need. In the tutorial, the key CanDI classes are:

- @Current - the service and unique bean binding annotation.
- @BindingType - the resource custom binding annotation used for declarative injection.
- Instance<T> - the extension/plugin iterator and programmatic bean factory.
- @Any - the special annotation for extension/plugin matching of any registered beans.

Thursday, June 11th, 2009

Gavin King’s announcement is on his wiki.
Although the external annotations and classes are pretty similar (except for a major package change), the internal SPI has changed radically, so it’s quite a bit of work for me to keep up. A version for Resin 4.0.1 should be possible, though.

Wednesday, June 10th, 2009

As a quick introduction to pomegranate (I’m crushed for time today), here’s a quick diagram that shows the basic module structure for a typical pomegranate configuration. Pomegranate is designed to solve the module versioning and classloader issues from an enterprise-application perspective. Although we’re doing a bit of classloader magic behind the scenes, the developer perspective is fairly simple and clean:

- remove jars from your .war
- drop them in Resin’s project-jars directory
- declare jar dependencies in Maven .pom files
- import them to your web-app with WEB-INF/pom.xml or in your resin-web.xml

Pomegranate resolves the module versions and builds a classloader graph for the web-app. The module graph looks like the following:

Monday, June 8th, 2009

Studying the source code for a full application is the best way to really understand a technology like Java Injection (CanDI, JSR-299). Fortunately, Jeff Schnitzer, Scott Hernandez, and Jon Stevens have created SubEtha, an open-source Java implementation of a mailing list manager (like mailman) that uses CanDI extensively. Because SubEtha is also a sophisticated JavaEE application using EJB @Stateless beans, JMS queues with EJB @MessageDriven beans, servlets, and Hessian remote services, it’s a great overall application to study.

Thursday, May 14th, 2009

Today I had a Resin user ask about the Resin watchdog and how it works. We’ve got some documentation, but I thought I’d show an example here in case you’ve never had a chance to try it out or were confused. I’ll show the actual command line input and output that you’ll see. After that, I’ll also talk about a feature we’re considering adding to the watchdog, which we’d like feedback on.
Monday, May 4th, 2009

We use WordPress for this blog and I recently upgraded to the 2.7.1 version. The developers have started using a .htaccess file with Apache mod_rewrite rules, so we need to emulate that to support certain things like permalinks. This is a quick and trivial example, but it gives a glimpse at our new rewrite dispatch syntax in Resin 4.

Tuesday, April 28th, 2009

By adding Java to their App Engine, Google has opened the door for a whole slew of languages that have been implemented on the JVM, now including PHP via Quercus. For the last couple of weeks, I’ve been looking at Google App Engine and what its possibilities are for Quercus. Some folks from a PHP shop in Britain got Quercus running, but the version they were using was pretty old and seemed to come from a bizarre cross-slice of our SVN repository. We wanted to make sure that the current version of Quercus runs on GAE with all its performance and compatibility enhancements. So Scott created a GoogleQuercusServlet just for the task. I wrote up how to get started using Quercus on GAE and some notes about what PHP can and can’t do within GAE at the moment.
We would like to strengthen the correspondence between static programs and their dynamic processes. How can we characterize the progress of a process? (In other words, what must we store in order to repeat the process up to a specific point?) In a language which is a sequence of assignment statements (possibly with conditional statements), it is sufficient to note the location in the program text. When procedures are included, a sequence of textual locations the same size as the dynamic depth of the procedure call is needed. In the presence of repetition clauses, a dynamic index is needed. Question: Could we just simulate repetitions using procedures? The audience Dijkstra was addressing at the time used procedure calls sparingly. Procedures might be called, but not commonly call anything else. (Transcription note - Dijkstra says that loops are superfluous for this purpose in EWD215) Question: Is the information maintained more complete than the Yes. The coordinates specify exactly the information necessary to reproduce the execution of the process. The values of these indices are outside the control of the programmer. They are generated either by the write-up of his program or by the dynamic evolution of the process. Why do we need these coordinates? We can only interpret the values of variables with respect to the progress of the process. For example, say we want to count the number of people in a room (n initially) and a person enters the room. In the moment between the observation of the person entering and the increase of the counter, the value is the number of people in the room minus one. Goto makes it difficult to find a meaningful set of coordinates to describe the progress of a process. A counter could be used, but is not very helpful. Dahl, Dijkstra, and Hoare published Structured Programming in 1972. Hoare says that structured programming is the systematic use of abstraction to control a mass of detail, and a means of documentation which aids program design.
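Dijkstra's "coordinates" can be made concrete in code. A rough sketch (hypothetical, not from the lecture): record, at each step, the textual location per stack frame plus a dynamic index per active repetition - together these say exactly how far the process has progressed.

```python
# "Progress coordinate" = (textual location at each call-stack level,
#                          dynamic index for each active repetition).
trace = []

def snapshot(call_stack, loop_indices):
    """Record one progress coordinate of the running process."""
    trace.append((tuple(call_stack), tuple(loop_indices)))

def body(call_stack):
    # One level deeper: the location sequence grows with the dynamic depth.
    snapshot(call_stack + ["body"], [])

def main():
    for i in range(3):              # a repetition clause needs its index i
        snapshot(["main"], [i])     # location in main, plus loop index
        body(["main"])              # a procedure call adds a textual location

main()
print(len(trace))
```

Replaying the program while matching these coordinates would reproduce the process up to any recorded point, which is exactly the property the text demands of a progress measure.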
Elsewhere Hoare says: Pointers are like jumps, leading wildly from one part of the data structure to another. Their introduction into high-level languages has been a step backwards from which we may never recover. - Hints on Programming Language Design, 1973. The Elements of Programming Style emphasizes abstraction, readability, and the ease with which we can prove equivalence and correctness. Hoare was concerned with teaching data structures and emphasised testing and communication with other programmers. Repetition constructs, subroutines, and single exits from loops were promoted. In 1975, Michael Jackson consulted for COBOL programmers writing data processing programs. He suggested a style demonstrated by the program:

ENTRY "SUM" USING JOBINF
  GOTO Q1, Q2 DEPENDING ON QS
SUMSEQ
Q1
  WRITE HEADLINE
  MOVE 0 TO TOTALS
SUMBODYITER
  IF EOF_OUTPUT GOTO SUMBODYEND
  ADD JOBINF TO TOTALS
  MOVE 2 TO QS
  GOBACK
Q2
  GOTO SUMBODYITER
SUMBODYEND
  CALCULATE AVERAGES
  WRITE REPORTLINES
  GO BACK
SUMEND

This code is essentially in continuation-passing style. Jackson suggests turning an algebraic datatype into a data diagram (a tree diagram). He tells how to write the code for each node and says to write the program by linearizing that tree. When writing report-generating programs, a programmer wants to design based on the output tree. Sometimes you may get a structural clash between the input tree and the output tree, so build an intermediate tree which can be viewed from either the left or the right. Space was limited, so it was practical to invert a program so that it could be suspended before invoking the other half. In other words, he advocated coroutines. Knuth wrote Structured Programming with go to Statements, which emphasizes readability and efficiency. We have a running example where, given an array A with indexes 1..m, we search for x; if x is found at index i, we increase B[i], otherwise we add it to A.
A canonical example of where he really does not want to remove a goto is:

for i := 1 step 1 until m do
  if a[i] = x then goto found;
notfound: i := m + 1;
  m := i;
  a[i] := x;
  b[i] := 0;
found: b[i] := b[i] + 1;

Here is a less readable program without gotos:

i := 1;
while (i <= m) && (a[i] != x) do
  i := i + 1;
if i > m then          // An extra comparison
  begin m := i; a[i] := x; b[i] := 0 end;
b[i] := b[i] + 1;

But that's too slow. This one is 33 percent faster!

a[m+1] := x;           // sentinel
i := 1;
while a[i] != x do
  i := i + 1;
if i > m then
  begin m := i; b[i] := 1 end
else
  b[i] := b[i] + 1;

This one is 12 percent faster than the previous one!

a[m+1] := x;
i := 1;
goto test;
loop: i := i + 2;
test: if a[i] == x then goto found;
  if a[i+1] != x then goto loop;
  i := i + 1;
found: if i > m then
  begin m := i; b[i] := 1 end
else
  b[i] := b[i] + 1;

Knuth would say this is an inner loop and this transformation is justifiable. Procedure calls in PL/I and Algol imposed significant overhead, and Knuth did not trust compilers to eliminate recursion. So a procedure like:

procedure treePrint(t);
  if t != 0 then
  begin
    treePrint(left[t]);
    print(value[t]);
    treePrint(right[t])
  end

is inefficient, and the procedure calls should be removed. Rule number 1 to remove procedure calls is: So the treePrint procedure is transformed into

procedure treePrint(t);
loop: if t != 0 then
  begin
    treePrint(left[t]);
    print(value[t]);
    t := right[t];
    goto loop
  end

Goto statements can always be eliminated by introducing suitable procedures.
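Knuth's treePrint transformation carries over directly to a modern language. A sketch in Python (the parallel-array tree representation and names are illustrative, not from Knuth's text): the first version is the plain recursion, the second replaces the tail call on right[t] with a loop, exactly as in the goto version above.

```python
def tree_print(t, left, right, value, out):
    """Recursive in-order traversal; index 0 marks an empty subtree."""
    if t != 0:
        tree_print(left[t], left, right, value, out)
        out.append(value[t])
        tree_print(right[t], left, right, value, out)

def tree_print_loop(t, left, right, value, out):
    """Same traversal with the tail call on right[t] turned into a loop."""
    while t != 0:
        tree_print_loop(left[t], left, right, value, out)
        out.append(value[t])
        t = right[t]          # tail call becomes assignment + loop

# A small tree stored Knuth-style in parallel arrays (index 0 = null):
#       2
#      / \
#     1   3
left  = [0, 0, 1, 0]
right = [0, 0, 3, 0]
value = [None, "a", "b", "c"]

out1, out2 = [], []
tree_print(2, left, right, value, out1)
tree_print_loop(2, left, right, value, out2)
print(out1, out2)    # ['a', 'b', 'c'] ['a', 'b', 'c']
```

Only the tail call can be removed this way; the call on left[t] is not in tail position, which is why Knuth's transformed procedure still recurses on the left subtree.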
Converting to a GitHub organisation Hey, I've decided to consider moving all Pronto related repositories to an organisation. Before I decide to pull the trigger, I'd like to ask for some input. I personally think that it would help organise, maintain and contribute in a better way ✊. First thing that I've checked: [x] Github will automatically redirect the old personal repo to the new org repo. @mknapik, @doomspork, @jeroenj, @aergonaut, @nicka, @andrey-skat, @nbekirov, @gabealmer: all of you have contributed significant chunks of code to Pronto, so your input would be especially valuable 🙇. Howdy @mmozuras! I see no problems with switching to an organization so long as you're happy to do so 😀 Let me know if there's anything I can do to help. Yep I see no problems doing this! Same here, please go ahead 👍 Sounds great to me. As long as you're okay with it. :) That sounds like a great idea. 👍 Thanks for support! 🙂 And here are some initial ideas for the logo: 1, 2, 3, 4. Drawn up by a good friend. Which do you like? Which conveys Pronto's idea in the best way? Any other ideas on what could be Pronto's logo? @mmozuras I like #2. I'll ponder logo ideas over the next few days. Thanks again for including all of us in the decision making process 😁 Yeah, I like 2 the most. Conveys the theme of speed the best in my opinion. 2nd I like the first one the most. But I really like the second one too. :) I really like #2 I'll use that drawing of 2 as a temporary avatar until we'll get a proper version of it 😄. Next problem: pronto is taken. pronto-org? Not a fan. Anyone has better ideas? 🙂 Ideas: prontissimo pronto-rb pronto-labs I wasn't invited and I don't have any contribution yet (I would like to ☺️ ), but I would vote for the 2nd logo. pronto-rb would be nice organization name. I prefer getpronto, pronto-labs, or pronto-org. Albeit a Ruby gem, given that a lot of the runners are not Ruby, I would shy away from pronto-rb or pronto-gem. 
@doomspork good point about a lot of the runners not being Ruby related 👍. @aergonaut @doomspork where does pronto-labs idea come from? Is Redis Labs the inspiration? Or are there more labs out there? 😄 @mmozuras my inspiration came from @aergonaut's suggestion 😁 Just thinking of other cool-sounding suffixes 😃 I think I like getpronto more than pronto-labs, personally. I like pronto-org and pronto-labs so far the most. It would also be nice to have the domain available 😄. @mmozuras these are available: pronto-labs.com pronto-labs.io prontolabs.io pronto-org.com pronto-org.io 👍 for pronto-labs and pronto-labs.io @doomspork @aergonaut what are your thoughts and arguments on prontolabs & prontolabs.io vs. pronto-labs & pronto-labs.io? @mmozuras no strong feelings, which do you prefer? @mmozuras I'd go with pronto-labs & pronto-labs.io. Have you tried contacting the user who has the pronto username? @jeroenj I haven't, but I'll try 👍 In case he's willing to give it up I'd go with getpronto.io. If not I'd stick with pronto-labs. :) I went with http://prontolabs.io (set up as redirect) and https://github.com/prontolabs. There's no wrong answer 😄. I think that no dash makes it easier to describe in real life. I also like the consistency with other Labs, like https://github.com/postmanlabs/, https://github.com/awslabs, https://github.com/RedisLabs, which all chose to not have a dash. I added the second sketch as a temporary logo and moved all my personal repositories to the org. @aergonaut thanks for the labs idea! And thanks everyone for your thoughts and feedback 👍. Next two steps: [ ] figure out the principles for inviting people to the org. [ ] figure out the principles for adding pronto related libs/gems/runners (and their owners) to the org. Thoughts/feedback on those two? Any examples of how other orgs do it? 🙂 Should you change the attribution in the copyright and LICENSE to something like "Pronto Contributors"? 
figure out the principles for inviting people to the org. I don't know how Rails/Ember/etc. decide who is added to the core teams, but I think in general members of the org should have a commitment to triaging incoming issues and reviewing Pull Requests. figure out the principles for adding pronto related libs/gems/runners (and their owners) to the org. I think 3rd party runners can stay separate from the org, but you could link them from the list of runners. I tend to agree with @aergonaut 👍 figure out the principles for inviting people to the org. This is always tough and I've tried different things with a couple of the orgs I'm in. I personally feel a person needs to demonstrate a commitment to continued participation and support of Pronto; there's a lot of folks out there looking to "collect them all" with regards to org badges on their profile. As people are added to the organization it'll be important to agree on a process of reviewing, merging, and releasing new features that keeps the overall vision intact; it's easy to merge everything that comes through without giving consideration to its impact on long-term project health and growth. @mmozuras I believe you should always retain the right to veto features and be the deciding vote in the event of a disagreement amongst contributors. figure out the principles for adding pronto related libs/gems/runners (and their owners) to the org. With Ueberauth we only keep packages maintained by the core team in the organization and rely on a Wiki page for third parties. If someone no longer intends to maintain their project you could certainly take over ownership of it under the umbrella of the organization but I would be hesitant to take on the maintenance burden of them all. @aergonaut @doomspork thanks, very valuable thoughts. I'll make a suggestion on principles this week 🙂 I've worked with a designer on a logo. Here's the result: Any thoughts on which one (or ones?) to use? Oh wow, all very nice.
For the organization's avatar image, I like the orange and white "P" on the black background. Really cool! I like 2nd line from the bottom (with orange bars). They can be used as long and short version. I agree with @aergonaut & @ivanovaleksey. The singular white P, orange lines, and black background really pops. The rest are awesome too, kudos to the designer! I like those ones the most too. I guess a white background with a black P with orange lines would be an option too for the avatar? @aergonaut @ivanovaleksey @doomspork @jeroenj thanks for your valuable opinions! I've now changed the avatar to a new beautiful one. I'll give the compliments to the designer 🙇. @mmozuras you can probably close this now 😀 @doomspork soon 😄 I've added welcome repo to serve as the central hub for org related things: https://github.com/prontolabs/welcome. Looking good @mmozuras 👍
Good Neighbor Pharmacy is an American retailers' cooperative network of more than 3,400 independently owned and operated pharmacies. It is also listed as having antidepressant and anticonvulsant properties. Books describing methods of cultivating Psilocybe cubensis in large quantities were also published. Mauthner cells have been described as command neurons. The movement's unusual structure has invited a variety of analytical interpretations. In the 19th and 20th centuries, Paris was home to the world's cultural elite. The mosque he constructed in Srinagar is still the largest in Kashmir. It should be taken the way you interpret it. Its original intentions were to fight the M-19 and provide protection for high-profile economic figures. Like Dada before it, Fluxus included a strong current of anti-commercialism and an anti-art sensibility, disparaging the conventional market-driven art world in favor of an artist-centered creative practice. Hope annoys Phoebe when she wears her clothes and eats her food. The first three chapters were initially broadcast on December 7, to international acclaim, with the final three chapters following. Practised only among the alpine population since prehistoric times, it is recorded to have taken place in Basel in the 13th century. Methadone, itself an opioid and a psychoactive substance, is a common treatment for heroin addiction, as is another opioid, buprenorphine. He received three years of probation and was ordered to undergo compulsory drug testing. Cool Spooks also have a blue choker around their necks. None of the other three played with the later and better known Shadows, although Samwell wrote songs for Richard's later career.
There can be very substantial differences between the drugs, however. The latter inhibits the antagonistic system, the sympathetic nervous system. Reserpine, methoxytetrabenazine, and the drug amiodarone bind to the RES binding site conformation. In this fight, Scott also gets punched through several walls, which was achieved with camera set-ups. Other multinucleate cells in the human are osteoclasts, a type of bone cell. Instead, a hydrophobic pocket was proposed to exist in the vicinity of the C-2 carbon. Both compounds, like penicillin, were natural products and it was commonly believed that nature had perfected them, and further chemical changes could only degrade their effectiveness. The drug appears to work by increasing levels of serotonin and norepinephrine and by blocking certain serotonin, adrenergic, histamine, and cholinergic receptors. Bienzobas finishing as top scorer. Houston author Lance Scott Walker noted that the super-sweet combination of soda, cough syrup, and Jolly Ranchers provides a flavor and mouthfeel which stays on the tongue for an extended duration. Hip-Hop Singles and Tracks chart. The Sinhalese followed a similar system but called their sixtieth of a day a peya. Equally important for the history of music were Telemann's publishing activities. Accordingly, the missionaries first organized the Anti-Opium League in China among their colleagues in every mission station in China. The appeal of traditional classical music and dance is on the rapid decline, especially among the younger generation. Bieber was featured, was released.
Few composers can write such tunes, which from the first moment are immediately impressed upon our memory, and thus turn into the possession of all those who listen to them. As for why you avoid on these days, it is not only to block off lasciviousness. Other uses of antihistamines are to help with normal symptoms of insect stings even if there is no allergic reaction. It was sold under the trade name Revex. Final Smash, Waddle Dee Army. Solanezumab. Some restrict access to local residents and apply other admission criteria, such as they have to be injection drug users, but generally in Europe they don't exclude addicts who consume by other means. When Coco starts hanging out with Jennifer and her friends, Raffy felt left out, and it strained their friendship. He later left it, and now works as a doctor in the countryside. Lucas looks for the truth behind his uncle's death as he documents his life since joining the Ravens basketball team. It was claimed to be superior to meprobamate, which was the market leader at the time. Jimson weed is highly toxic and can cause delirium, confusion, hallucinations, blurred vision, photophobia, dry mouth, urinary retention, hyperthermia, incoordination, hypertension, and rapid heartbeat among other effects. This study showed that the familiarity heuristic might only occur in situations where the target behavior is habitual and occurs in a stable context within the situation. Krokodil is made from codeine mixed with other substances. The development is full of sixteenth-note arpeggios in the left hand, and sixteenth-note left-hand scales accompany the start of the recapitulation, but the movement ends quietly. Garcia in a recent interview when asked about the book.
However, there were some differences between homosexual and heterosexual women and men on these factors. The evidence is of low to moderate quality and therefore it is likely that future research may change these findings. He also made a brief cameo appearance in the show Californication. The development of this type of behavior is sometimes seen within the first year, or in early childhood, but others may not develop it until later in life. The personality disorders, in general, are defined as emerging in childhood, or at least by adolescence or early adulthood. No I shall take none to-night! Some companies did well with the change to a digital format, though, such as Apple's iTunes, an online music store that sells digital files of songs over the Internet. Sophia Corri was a singer, pianist, and harpist who became known in her own right. But it also has an effect on the creation of new dramas. Duffmensch, the German version of Duffman, wears a blue pickelhaube helmet and blue spandex lederhosen with a dark leather waistbelt with beer-can holders that look like ammunition pouches. Since its initial detection in 1969, it has been observed in many regions of the galaxy. Impairment of consciousness is the essential symptom, and may be the only clinical symptom, but this can be combined with other manifestations. I nearly bailed on my audition for the show. Drummer Peter Salisbury's percussion drew inspiration from Dr. However, the following drugs may be prescribed: Dexter's brother Brian bails him out of prison and gets him the best lawyer in town, but the lawyer happens to be in the pocket of a very powerful Mexican drug cartel led by a man named Raul, who is after Brian.
All 102 locations of Walmart Express, which had been in a pilot program since 2011, were included in the closures. In a typical process, cellulosic biomass, such as corn stover, sawgrass, or wood, is hydrolysed into glucose and other sugars using acid catalysts. Former leading Cuban neurosurgeon and dissident Dr Hilda Molina asserts that the central revolutionary objective of free, quality medical care for all has been eroded by Cuba's need for foreign currency. HIF-1-alpha, which may lead to increased production of erythropoietin.
Did Industrial Revolution economic systems rely on colonies? I have read in the economist Thomas Piketty's book the following sentence (translation is mine): "The Enlightenment movement and the Industrial Revolution were partially based on the colonies." I am wondering to what extent this sentence is true. I mean, what specific resources or mechanisms helped the Industrial Revolution (First or Second) or the Enlightenment movement to start or to sustain themselves? One can see on the Internet, as comments stated, how the Industrial Revolution used colonies: getting cotton from India, for example. But this was not the foundation of the Industrial Revolution, since a consistent industrial system already existed to use the input from the colonies and to sell output to the colonies. As far as I know, there are no speeches nor ideologies, at the time of the First Industrial Revolution, that asked governments to gather colonies in order to develop industry. Other means were used to develop industry: The foundation of the Industrial Revolution was the steam engine, and thus coal: industrialized countries had that on their own soil Some countries industrialized without colonies (Prussia, Austria-Hungary, Russia) So the question is, my apologies, not "How did the Industrial Revolution use the colonies?" It is: Did Industrial Revolution economic systems rely on colonies? This could be either a country's own colony, or a colony reached through trade: for example, did Prussia interact with India through Britain? For the three periods mentioned: Enlightenment movement: Colonies did not exist yet, nor did industry. Spanish occupation of America and harbour trade were in place. Issues to consider: Did it call for colonies as a way to develop (whatever the details)? First Industrial Revolution: Colonies did not exist yet, industry is starting. Did industrial development call for colonies as a way to sustain itself? Second Industrial Revolution: Colonies and industry established.
They interact, as in the example of Indian cotton above. No issues to consider in the scope of this question. Can you explain why what turns up when searching for "Industrial revolution colonies" doesn't answer your question? I'm voting to close this question as off-topic because essay prompts do not meet the format of HSE: they ask about general topics where any answer is valid to demonstrate and assess scholastic achievement; they do not produce historiographically valid questions or answers. Robert B. Marks, in his The Origins of the Modern World: a Global and Environmental Narrative from the Fifteenth to the Twenty-First Century, describes how England started to import cotton cloth (calico) from India in the second half of the 17th century until, by 1700, it was dependent on it. 130 years later India (due to the navigation laws) imported cheap cloth from England and exported cotton, because British cotton cloth was cheaper than locally spun and woven cloth. In 1700 an Indian cotton picker or weaver had a great advantage compared with anyone else: far lower living costs. This was because foodstuffs were far cheaper: on average, Indian agriculture was twice as productive as European agriculture. Multiple things happened: Whitney's gin, which meant that cheaply produced North American cotton became usable steam-powered spinning mills (and weaving mills) a newly created world market for cotton cloth, which was captured by British industry Marks also argues that the colonies (North America, Australia and India) were necessary as sources of raw materials and foodstuffs for England. This allowed England to become independent of its own agriculture and to convert its agriculture to more profitable uses where a far smaller workforce was needed. This workforce was forced by the new poor laws of the early 19th century (after 1815) to leave their old villages and neighbourhoods for the industrialising cities. One of the reasons for the rush to Africa from 1870 was competition.
Before this time England was the dominant industrialized country, and it was able to compete in Europe, Asia and the Americas alike. Marks mentions that in 1870 Great Britain had a 33 percent share of world output. The numbers that are available mostly concern exports, which for the US are less definitive than they seem, given the speed at which its internal market grew. Industrialization in America and Europe meant that competition between producers, in Europe and elsewhere, became more intense. Nationalism could be funneled into making your population accept the necessary outlays to acquire and improve colonies in Africa, under the pretext that it would be profitable to acquire colonies and bind them to the homeland. The colonies would become customers who couldn't argue about prices on imports and exports. I don't think it's true that "England was basically the only industrialized country" prior to the 1870s, though it may well have been the most industrialized. In Europe, France & (the states that would become) Germany were industrialized, as were many other countries. The US was as well, and it didn't have colonies in the European sense - that is, while it had the West, it made little if any use of the native populations. @Stefan Skoglund Do you have specific links or more information about this part: "because at average indian agriculture was twice as effective as european."? I am interested in it. Thanks in advance!
Observations in modern datasets have a continuum of quality that can be hard to quantify. For example, satellite observations are subject to often-subtle mixtures of confounding forces that distort the observation’s utility to a varying extent. For the Orbiting Carbon Observatory-2 (OCO-2) mission, effects such as cloud cover, aerosols in the atmosphere, and surface roughness are three major confounding forces that can mildly, heavily, or totally confound an observation’s utility. These complicating factors are not present in a binary fashion: clouds can cover a percentage of the scene, have variable opacity, and differing topology. Arbitrary thresholds are traditionally placed on the presence of such forces to yield a binary good/bad data flag for each observation. By instead generating a data ordering, users are guided towards the most reliable data first, followed by increasingly challenging observations. No harsh on/off threshold is applied to the data, a threshold that could obscure useful data for one user while leaving confounded observations in place for another. Allowing users to create custom filters based on DOGO’s data ordering leaves hard cutoff decisions in the hands of users, guided but not restricted by the project’s expert knowledge. Traditionally, quality flags provided a binary yes/no estimation of a datapoint’s utility. Normally, scientists would first discard all “bad data” so indicated, and only work with the “good data” as defined by the project. However, in modern instrumentation, there is access to significant auxiliary information for each datapoint that enables prediction of the likely utility of the observation with finer resolution than 0 or 1. To do this, many different filters are developed that become increasingly more stringent in terms of a goal metric of data quality. With this sorted list of filters, each datapoint can be assigned a single integer ranging from 0 to 19, indicating how many of the filters would reject it.
Ordering the data in terms of these integers communicates to the user the order in which they should be preferred, without actually filtering away any observations. A user is then free to define their own filter based on the integer range they accept, and rapidly communicate this dataset to another collaborator. These ordering integers are called Warn Levels, and they can be developed for any metadata-rich data source to help guide researchers in proper data filtration. One application of Warn Levels requiring spatial uniformity, minimized likelihood of convergence failure, and minimum scatter is the need to preferentially select only “the best” data to process in real time from the OCO-2 mission, enabling its level 1 requirement that at least 6% of the streaming data is successfully processed as quickly as possible. A second, related application produces Warn Levels that help users know the order in which to ingest the mission’s output data for their analysis, in effect forming a “tunable filter” that lets users decide how much data to accept. The algorithm described here is not simply a Warn Level generator for OCO-2, but rather an entire method to construct new Warn Levels for any metadata-rich data source. It is a genetic algorithm coupled with a voting scheme and feature selection for use on a supercomputer that explores the large dimensional space of all possible filters and combinations of filters to yield the best-performing singleton, pair, triple, etc. filters. These are then folded into a Warn Level estimate. By exploring all possible filters and then folding this information into a single data ordering, one is able to achieve far more than even an optimum quality flag could provide. Moreover, during the creation of the final Warn Level ordering, a necessary exploration of the precise metadata that strongly predicts retrieval confounding yields great project insight onto sources of error. 
These insights can be, and were, used to guide algorithmic improvements, a-priori tuning, and atmospheric-science interpretation of the retrieval algorithm’s behavior, while enabling quick detection of serious yet subtle code abnormalities. In fact, this early “feature selection” phase may yield even more useful information and guidance than the final Warn Levels themselves. The algorithm has been significantly sped up and further adapted to take advantage of OCO-2 data; its footprint on the hosting cluster computer has been minimized, and its output is processed into a more immediately interpretable form. This work was done by Lukas Mandrake of Caltech for NASA’s Jet Propulsion Laboratory.
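A toy sketch of the Warn Level idea described above, in Python; the cloud-fraction field and the three thresholds are invented for illustration and are not OCO-2's actual filter set:

```python
def warn_levels(observations, filters):
    """Assign each observation an integer Warn Level: the number of
    filters (ordered from lenient to stringent) that would reject it.
    0 = passes everything (most reliable); len(filters) = rejected by
    all.  No observation is discarded; the user picks a cutoff later."""
    return [sum(1 for passes in filters if not passes(obs))
            for obs in observations]

# Increasingly stringent (hypothetical) cloud-fraction filters
filters = [lambda o, t=t: o["cloud_frac"] <= t for t in (0.5, 0.3, 0.1)]

observations = [
    {"cloud_frac": 0.05},  # clear scene
    {"cloud_frac": 0.20},  # mildly confounded
    {"cloud_frac": 0.40},  # heavily confounded
    {"cloud_frac": 0.90},  # mostly cloud
]

print(warn_levels(observations, filters))  # [0, 1, 2, 3]
```

A user who wants only the cleanest data keeps Warn Level 0; a user tolerant of some confounding might accept levels 0 through 2, without the project imposing a single good/bad cut on everyone.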
I don’t know about you, but no one taught me grammar at school. It’s a massive shame, because grammar is really useful. These days I use it as a pattern library for writing. Like a stencil set, it gives you a collection of predetermined shapes to use when you’re floundering. Which is helpful, because writing is hard. Three of my favourite grammar patterns - statements, imperatives and gerunds - come direct from Ginny Redish’s incredible book Letting Go of the Words. It’s up there with Steve Krug. Seriously. And don’t worry, this stuff’s easy. Trust me! Statements make great subheadings Let’s start with the statement pattern. Subheadings are critical on the web and it’s easy to write great ones - avoid nouns and use statements: - A statement says something. The subheading of this section, ‘Statements make great subheadings’, is a statement. - Nouns say very little. A noun doesn’t say anything, it just gives something a name. Noun equivalents for this subheading might be 'Statements’, or 'Statement subheadings’. Boring. Look carefully at the subheadings you write. Most of them are nouns, guaranteed. Statements are harder to write but more compelling to read. They force you to commit to actually saying something. And saying something is always a good thing when you’re writing. Recommendations start with a verb The classic research report is a list of insights and recommendations. I use two patterns here: all insights are written as statements (just like subheadings) and all recommendations start with a verb. - Insights as statements 'Doctors want practical information, not scientific advice’ 'Users are unlikely to register for the site’ 'The customer relationship is with the drug, not your company’ - Recommendations starting with verbs 'Create materials that focus on practical information’ 'Remove the registration requirement’ 'Build individual sites for products, not a portal’ In fact, the recommendations start with a particular type of verb.
This is the imperative pattern. If it can be spoken like the word of god, with an exclamation, it’s an imperative verb. Create! Remember! Fornicate! (Incidentally, starting with a verb makes for good subheadings too). Different verb types add depth Sometimes you need two levels, like when you’re writing instructions. One level to describe the task ('sending an email’) and the other to describe each step ('open your email client’, 'click on new mail’, etc). This is where I use the gerund pattern. Gerund sounds fancy (it’s Latin!) but it’s just a normal verb which ends with 'ing’. You can always take an imperative verb (create, remember, fornicate) and turn it into a gerund verb (creating, remembering, fornicating). This gives two levels - tasks are gerunds ('sending an email’, 'logging into the site’, 'resetting your password’) but instructions are imperatives ('open your email’, 'click the login button’, 'enter your address’). You’ve seen this all over the web. Go forth and multiply (See? The word of god loves an imperative or two). I use these grammar patterns every single day. They are the lines and shapes, the boxes and arrows, of written language. My favourite grammar pattern of all time is the active voice. It’s the best writing tip you will ever learn. Especially if you’ve had any kind of academic education which involved writing. But it’s tricky to explain. When I’ve cracked it I’ll let you know… Say hello on @myddelton. This is my second post about language and design - if you liked it you should read The 'Can’ versus 'Will’ Hack too. Oh, and buy the new edition of Letting Go of the Words by Ginny Redish.
Cross-posted on Welcome to NCS-Tech! Good morning all, A few weeks ago I was tossing emails around with some local edtech leader friends of mine on the topic of VOIP web conferencing. Although Skype's popularity has increased geometrically over the years and they recently announced a much-anticipated portal for educators, Skype in the Classroom, we were looking for alternatives. Specifically, lightweight, zero-install, web-based alternatives. One of my favorites in this genre, Tokbox, was an absolute slam dunk: ...until recently, that is. It's gone away, as many great free Web 2.0 services do. See the good news and bad news below (sorry for the small, nearly illegible text): Tokbox was fantastic. Sign in, create a conference, email a link, and BAM! You're video chatting with no download, install or firewall issues. Extreme awesomeness! No haranguing your school I.T. person for weeks to get Skype installed and your network configured to handle it. No software client to configure, no security concerns, it just worked. Tokbox has gone away, but other services are still out there (for now). But before I run through some of the most promising, a word about planning. What are your requirements? What are you trying to do? Coming from the business world (I.T. project management to be precise), I am used to envisioning, designing and building systems to solve problems. "Requirements definition" is a big part of that. It's not that intimidating. It's really just writing down what you are trying to accomplish so that the system you end up building is designed accordingly. There are many reasons you might want to video chat in school: - live, real-time collaboration with other classes - bringing expert voices into lessons - supporting homebound students But ... each of these are very different scenarios. Do you need a one-to-one connection, one-to-many, or many-to-many? Do you need to record sessions and have them easily accessible (viewable) later? 
What about integrated chat, shared whiteboard or file transfers? Features like these need to be considered within the context of your learning objectives. You need to make a list. Knowing your requirements will allow you to choose the best solution. What are your options? Perusing the ever-changing, completely amazing Cool Tools for Schools wiki, I found several programs worth investigating. Some are mature, commercial products; others are startups like Tokbox. Deciding which one is best for you is a function of your requirements (see above). Anyway, here are a few worth considering: Elluminate is the thousand-pound gorilla in this space, especially since its acquisition of Wimba. Did you know you can get a free, three-seat Elluminate virtual meeting room just by signing up at LearnCentral? Yep. Elluminate isn't the easiest program to use but that's mostly because it's extremely powerful and has capabilities WAY beyond what most classrooms need for basic collaboration. But it is still worth considering because it is robust, established, and doesn't require a download (other than a Java-based installer that runs automagically). http://wetoku.com/ reminds me a lot of Tokbox. It has some great features and security. Its openness scares me somewhat - almost anything could be in those little chat windows when you visit the page. It's a notch above Tinychat.com (which I am not going to link to - you can visit it yourself if you want) - which is the wild, wild west of video chatting - not too far off from chat roulette, actually (another site I'm not going to link to.) http://vyew.com/ is very promising, it uses a 'freemium' model that ensures at least some revenue is coming in the door (translation: it's not likely to disappear overnight). It has a great feature set with many terrific collaborative tools built in. It seems to have been designed with teaching in mind. http://www.wiziq.com/ is also very attractive.
It strikes me as an alternative to Elluminate, with a very similar feature set, but with no attendee limit. Handy! Recordings are even downloadable. Think of the possibilities... http://www.coolconferencelive.com/ is perhaps the closest to Tokbox of the bunch I am profiling here, both in terms of its simplicity and power, and, in my view, the likelihood it will be gone (or morph into a new product) in the near future. This "beta" service (what ISN'T in "beta" anymore?) does what it says it will do - enable free web based conferences - but it's silly to assume it will continue to do so forever...unless their affiliate marketing plan takes off. Hmmmmm... http://meetsee.com/ is one of the more established players in this space and it is designed with the business user in mind. The free service they offer is probably enough for you to test drive and decide if it's worth exploring further. The whole "virtual office" thing is pretty impressive, shame they don't have one set up like a "virtual classroom." (Meetsee folks, feel free to use that idea...) http://www.yuuguu.com/ has a free 7-day trial that includes unlimited web conferencing, up to 30 attendees per meeting, shared keyboard and mouse control and integrated audio conferencing. A full year's subscription is just $79. I don't know about you, but, I'd spend $79 on a Web 2.0 service I could use all year with my students. Wrap Up & Path Forward I wish I could go into more detail here, perhaps even building a matrix comparing features, but this post is already too long, and besides, I need to get onto the stairmaster, then into the shower and over to school. If you find yourself needing a web-based class conferencing solution, I hope that the sites I have profiled here will be helpful to you. If you find one that really works, tell me in the comments! 
As long as you are clear about your requirements - what you are trying to do - it'll be easy to decide which of these services (or any others you encounter) are worth implementing in your classroom. Good luck! Hope this helps, -kj-
OPCFW_CODE
<?php
class assaydepot {
    private $access_token;
    private $url;
    private $params;
    private $options;
    private $facets;
    private $json_query;

    /**
     * Set the access token and URL for the API call, and create empty
     * arrays for the class methods to use.
     */
    function __construct($access_token, $url) {
        $this->access_token = $access_token;
        $this->url = $url;
        $this->params = array();
        $this->facets = array();
        $this->options = array();
    }

    /**
     * search_url() - constructs the URL for searching Assay Depot based on
     * the $params array, which is populated before this method is called.
     * Returns a formatted URL to be used for making the API call.
     *
     * search($search_type, $query="") - combines all the pieces needed to
     * build the URL into an array and then uses search_url() to return the
     * built URL string for use in json_output().
     * $search_type acceptable inputs:
     *   1. 'wares'
     *   2. 'providers'
     * $query: defaults to "", and does not need to be set if the intention
     * is to return all possible results.
     */
    private function search_url() {
        $format = '%s/%s.json?';
        $format .= trim(str_repeat("%s=%s&", (count($this->params) - 2) / 2), "&");
        return vsprintf($format, $this->params);
    }

    public function search($search_type, $query = "") {
        array_push($this->params, $this->url, $search_type, 'q', $query);
        $this->options_build();
        $this->facets_build();
        array_push($this->params, "access_token", $this->access_token);
        $this->json_query = $this->search_url();
    }

    /**
     * get_url() - constructs the URL for pulling information from Assay
     * Depot. The $params array is populated before this method is called
     * and its contents are used to build the URL string. Returns a
     * formatted URL to be used for making the API call.
     *
     * get($search_type, $id, $query="") - combines all the pieces needed to
     * build the URL into an array and then uses get_url() to return the
     * built URL string for use in json_output().
     * $search_type acceptable inputs:
     *   1. 'wares'
     *   2. 'providers'
     * $id: the id of the provider or ware to be returned
     * $query: defaults to "", and does not need to be set if the intention
     * is to return all possible results.
     */
    private function get_url() {
        $format = '%s/%s/%s.json?';
        $format .= trim(str_repeat("%s=%s&", (count($this->params) - 3) / 2), "&");
        return vsprintf($format, $this->params);
    }

    public function get($search_type, $id, $query = "") {
        array_push($this->params, $this->url, $search_type, $id, 'q', $query);
        $this->options_build();
        $this->facets_build();
        array_push($this->params, "access_token", $this->access_token);
        $this->json_query = $this->get_url();
    }

    /**
     * option_set() - specifies a value for one of 4 known options.
     *
     * option_unset() - removes the value previously set for an option.
     * When reusing the class, options must be manually unset if you do not
     * wish them to apply to the new API call. Options can be set without
     * first calling unset for cases like pagination and moving on to the
     * next page of results.
     *
     * options_build() - takes the set options and adds them to $params,
     * which is used to build the URL strings.
     */
    public function option_set($option, $value) {
        $known_options = array("page", "per_page", "sort_by", "sort_order");
        if ($option != "" && $value != "") {
            if (in_array($option, $known_options)) {
                $this->options[$option] = $value;
            }
        }
    }

    public function option_unset($option) {
        $known_options = array("page", "per_page", "sort_by", "sort_order");
        if (array_key_exists($option, $this->options)) {
            if (in_array($option, $known_options)) {
                unset($this->options[$option]);
            }
        }
    }

    private function options_build() {
        foreach ($this->options as $k => $v) {
            array_push($this->params, $k, $v);
        }
    }

    /**
     * facet_set() - specifies a key/value pair for a facet. Can be set
     * multiple times per URL with different facets.
     *
     * facet_unset() - removes the key/value pair previously set for a
     * facet. When reusing the class, facets must be manually unset if you
     * do not wish them to apply to the new API call.
     *
     * facets_build() - takes the set facets and adds them to $params,
     * which is used to build the URL strings.
     */
    public function facet_set($facet, $value) {
        if ($facet != "" && $value != "") {
            $this->facets[$facet] = $value;
        }
    }

    public function facet_unset($facet) {
        if (array_key_exists($facet, $this->facets)) {
            unset($this->facets[$facet]);
        }
    }

    private function facets_build() {
        foreach ($this->facets as $k => $v) {
            array_push($this->params, "facets[" . $k . "][]", $v);
        }
    }

    /**
     * json_output() - takes the URL string built through get() or
     * search(), fetches the JSON string returned by the API, and then
     * parses it into an associative array. Prior to returning the result,
     * the $params array is reset to an empty array, ready for a new API
     * call. $options and $facets retain their values and need to be
     * manually unset prior to making another API call if needed.
     */
    public function json_output() {
        if ($this->json_query != "") {
            $json = file_get_contents($this->json_query);
            $this->params = array();
            return json_decode($json, true);
        } else {
            die("Assay Depot Query URL is empty.");
        }
    }
}
?>
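For illustration, here is a small Python sketch of the URL shape this class ends up building: the 'q' parameter first, then options, then facets, with access_token last. The base URL, query, facet, and token values below are made up, and, like the PHP above, nothing is URL-encoded.

```python
def build_search_url(base, search_type, query="", options=None, facets=None,
                     token="ACCESS_TOKEN"):
    # Mirrors assaydepot::search()/search_url(): q comes first, then any
    # options, then facets, and access_token is appended last.
    pairs = [("q", query)]
    pairs += list((options or {}).items())
    pairs += [("facets[%s][]" % k, v) for k, v in (facets or {}).items()]
    pairs += [("access_token", token)]
    return "%s/%s.json?%s" % (base, search_type,
                              "&".join("%s=%s" % (k, v) for k, v in pairs))

url = build_search_url("https://www.assaydepot.com/api", "wares", "elisa",
                       options={"page": 2}, facets={"category": "assay"})
# -> https://www.assaydepot.com/api/wares.json?q=elisa&page=2&facets[category][]=assay&access_token=ACCESS_TOKEN
```

A production version would percent-encode each value before joining, which neither this sketch nor the original class does.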
STACK_EDU
The DevOps landscape and the hundreds of tools it contains are much like a vast garage, filled with sophisticated instruments and expensive tooling. Each area of the garage is organized around a certain set of operations and is tuned to efficiently accommodate the operations which go on there (storage, cluster computing, and resource management). On any normal day, the area is staffed by a knowledgeable set of technicians who are happy to explain the particulars of their discipline and help get you unstuck if you get into a mess. Even though the garage is equipped and staffed, however, it is important to acknowledge a reality. DevOps doesn’t fit perfectly within any one discipline or endeavor. For that reason, you will be working on projects which don’t fit neatly into any one area of the garage and will require the use of resources from many of them. You’ll need to move tools out of the areas they are normally kept and apply techniques outside of the discipline where they were born. The technicians (in this case referring to help resources and man pages), while helpful within their area of expertise, won’t really be able to give you all of the answers you’re looking for. This is because the problems you are wrestling with are cross-discipline in nature. But just because the organization of the garage isn’t perfect does not mean that the structure you inherit isn’t useful or the constructs it holds won’t help you on your way. As you go about building your own mental processes and moving into the space, we (the authors of this course) think you should pay attention to the following: Tools: Libraries, systems, and instrumentation for accomplishing a goal. Good tooling is informed by smart thinking and enables efficiency. Your garage is filled with good tooling, which has successfully solved hard problems for a long time. 
As you look to interfaces and patterns that have worked, look for other problems where it might also serve as a solution (or at least the template of a solution). Many of the challenges in this course can be approached from many directions, solved via an application written in Python versus Java, for example. Or, you might find that one part of the system is best solved by a Python library, while another via a Java library. That’s okay! Use the tool best suited for the job, because it’s possible to combine disparate tools together. Toolboxes and Toolchains: Complex jobs require multiple tools, usually working together, to solve a problem. For that reason, it’s a good idea to keep tools in a common “toolbox” and create collections intended for specific challenges. In a garage, this might be the ratchet set or the fasteners. In software, a good example is the Python analytic stack. There are a couple of challenges that your toolboxes and toolchains need to address; two of the most important are locality (where does it live and how does it access things that it needs) and architecture (how it is organized, both internally and as part of a larger system). Tools that integrate need to be able to access one another and pass data back and forth. If they are programming libraries, you need to be able to import and run the code in the context of where the application runs (given the distributed environment many of the applications will find themselves in, this isn’t always as straightforward as it might seem). If it is a service, it needs to know how to access other components and where to pipe its results. Techniques: A garage full of tools we don’t know how to use is worthless. For that reason, we should leverage the best technique and knowledge around. In the metaphor of the garage, this means letting the technicians (in the form of documentation, instructors, and peers) help to inform your approach.
They may not know everything, but their expertise may help guide your endeavors. Workbenches: An organized toolbox and an informed set of techniques need a place to be applied: a workbench. Automotive work happens in garages and surgeries happen in operating suites because both sets of endeavors have complex requirements.
OPCFW_CODE
I mainly focused on generating a multipart message this week. Previously, I was using curl to send a multipart formpost. For example,

curl -kv -F 'lang=vlog' -F 'vlogF[email protected]' http://127.0.0.4:8080/api/v19/workspace.elaborate

was used to transfer data to a server. The request body is shown below. (Note: the boundary is a string of random numbers.)

--MIME_boundary_2CA6E12165908974
Content-Disposition: form-data; name="lang"

vlog
--MIME_boundary_2CA6E12165908974
Content-Disposition: form-data; name="..."; filename="source.v"
Content-Type: text/plain

(contents of the file)
--MIME_boundary_2CA6E12165908974--

Instead of using curl to transfer the multipart message, I had to use the Poco::Net::HTMLForm class. There was another class that could create a multipart message, but Wt::Http::Request was unable to interpret the message written by that class. As a result, the Poco::Net::HTMLForm class came in handy. In addition, a Poco::Net::FilePartSource was instantiated to attach a file to the message. The code to build the HTML form is demonstrated below.

Poco::Net::HTMLForm pocoForm;
pocoForm.setEncoding(Poco::Net::HTMLForm::ENCODING_MULTIPART);
pocoForm.add("param", "value");
pocoForm.addPart("...", new Poco::Net::FilePartSource(Poco::Path("...", "system.v").toString(), "text/plain"));

Furthermore, I had to set up an HTTP client session to send the request body with the Poco library. There were several steps to follow. Firstly, I had to instantiate a Poco::Net::HTTPClientSession with the given host and port. After that, a Poco::Net::HTTPRequest should be instantiated. sendRequest() can be called at this point; it returns an output stream to which the request body is written. The request is valid until receiveResponse() is called. The code to set up the HTTP client session is demonstrated below.
Poco::Net::HTTPClientSession httpSession("...", "...");
Poco::Net::HTTPRequest request(Poco::Net::HTTPRequest::HTTP_POST, "...", Poco::Net::HTTPMessage::HTTP_1_0);
pocoForm.prepareSubmit(request); // fills in the multipart/form-data content type and boundary
request.setContentLength(pocoForm.calculateContentLength());
request.setContentType("multipart/form-data; boundary=" + pocoForm.boundary());
pocoForm.write(httpSession.sendRequest(request));
Poco::Net::HTTPResponse response;
std::istream &stream = httpSession.receiveResponse(response);

Note that the content length and content type must be set on the request before sendRequest() is called; setting them afterwards has no effect on what goes over the wire.

Also note that I had to pass HTTP_1_0 as an argument to the HTTPRequest class. The reason was that if I passed HTTP_1_1, it might mess up the multipart message, as an additional 0 was introduced after the boundary (most likely the terminating zero-length chunk of HTTP/1.1 chunked transfer encoding). In this case Wt::Http::Request would throw an error message, e.g. WebController: could not parse request: CgiParser: reached end of input while seeking end of headers or content. Format of CGI input is wrong. The incorrect multipart message is illustrated below.

--MIME_boundary_2CA6E12165908974
Content-Disposition: form-data; name="lang"

vlog
--MIME_boundary_2CA6E12165908974
Content-Disposition: form-data; name="..."; filename="source.v"
Content-Type: text/plain

(contents of the file)
--MIME_boundary_2CA6E12165908974--
0

I was unable to resolve this issue for quite a while, and I only managed to discover the root cause after I used netcat to read the raw data on the network connection. A simple nc command to capture the traffic is shown below.

nc -l 8080 > out

All in all, I was pleased to work for AESTE as an apprentice and I learned a lot from Dr Shawn. I would also continue to work on my weaknesses in order to become a better software developer.
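To make the body layout concrete, here is a small Python sketch that assembles a multipart/form-data body with the same shape as the captures above. The boundary is reused from the captures, and the vlogFile field name is illustrative, since the real name in the curl command was obscured.

```python
def build_multipart(fields, files, boundary="MIME_boundary_2CA6E12165908974"):
    """Assemble a multipart/form-data body with CRLF line endings.

    fields: dict of name -> value; files: dict of name -> (filename, content).
    """
    crlf = "\r\n"
    lines = []
    for name, value in fields.items():
        lines += ["--" + boundary,
                  'Content-Disposition: form-data; name="%s"' % name,
                  "", value]
    for name, (filename, content) in files.items():
        lines += ["--" + boundary,
                  'Content-Disposition: form-data; name="%s"; filename="%s"' % (name, filename),
                  "Content-Type: text/plain",
                  "", content]
    # Closing boundary: two trailing dashes, and crucially no stray "0"
    # afterwards (that "0" was the chunked-encoding terminator described above).
    lines += ["--" + boundary + "--", ""]
    return crlf.join(lines)

body = build_multipart({"lang": "vlog"},
                       {"vlogFile": ("source.v", "(contents of the file)")})
```

Dumping `body` reproduces the well-formed message shown earlier, which is handy when comparing against what netcat captures.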
OPCFW_CODE
x264 supports both 8-bit and 10-bit outputs, and you don't have to do anything special. If using ffmpeg you can see what pixel formats and bit depths are supported by libx264:

$ ffmpeg -h encoder=libx264
Supported pixel formats: yuv420p yuvj420p yuv422p yuvj422p yuv444p yuvj444p nv12 nv16 nv21 yuv420p10le yuv422p10le yuv444p10le nv20le

edit: I successfully made a 10bit encode of Ducks Take Off. First way: I built a 10bit x264 binary that statically links libx264.

cp -al x264-git x264-10bit # instead of changing my normal git checkout
./configure --extra-cflags=-march=native --enable-static --disable-interlaced --bit-depth=10
sudo install x264 /usr/local/bin/x264-...

Adobe licenses its H.264 encoder from Mainconcept, which doesn't do that well at low bitrates. x264 is pretty much the frontier when it comes to small output for a given quality target, or quality for a given bitrate target. x264 is what's used by platforms like YouTube / Vimeo etc. to encode user videos. One thing you could try is to increase the ...

Cabac is lossless, but h264 is lossy. The part you are missing is that cabac is not THE compression algorithm. It is just the final step out of hundreds of steps in video compression. By the time you get to cabac, all the lossy steps have already been performed, and a final lossless step is added to squeeze a few more bits out.

To concatenate multiple files for expected playback in common players, the following properties need to match for video: codec, codec profile, codec level, resolution, reference count, pixel format, timebase/timescale; and for audio: codec, codec profile, channel count & layout, sample format and sampling rate. Advanced players can tolerate mid-stream changes in ...

The first thing I would try is to add -force_key_frames to your original command, drop the preset and lower the -crf value. The following example sets a key frame every second.
ffmpeg -i input.mov -c:v libx264 -profile main \
  -force_key_frames 'expr:gte(t,n_forced*1)' \
  -crf 15 -pix_fmt yuv420p -an output.mp4

As a second resort I would use a series of ...

Depending how the content was made, the banding might be introduced when you're converting your content from RGB colorspace to YUV. You can try to make an h264 while keeping RGB colorspace, although I've read it's not easy. Are you able to use another codec?

The bands you're referring to could well just be a limitation of the 8-bit colour space. In theory the way to solve this is to use 10- or 12-bit colour space through every stage from rendering, to editing and mastering, through to output and even in the screen or projector. However your final output is probably going to be displayed in an 8 bits per ...

I finally made it work by splicing the VOB files directly with the commands below:

ffmpeg -i VTS_01_2.VOB -ss 463 -c copy -vframes 325 2-manuchoisit.vob
ffmpeg -i VTS_01_2.VOB -ss 353 -t 16 -c copy 3-manutombe.vob

and then concat the extracts and convert with ffmpeg -analyzeduration 200M -probesize 150M -i "concat:1-manubus.vob|2-manuchoisit.vob|3-...

In the first step you are doing a lossy conversion: you transcode from vob to mp4, and then to ts. For a lossless re-mux you should just re-mux; better to specify both video and audio:

ffmpeg -i VTS_01_1.VOB -c:v copy -c:a copy -bsf:v h264_mp4toannexb -f mpegts intermediate1.ts

However, if you re-mux for the purpose of slicing then you should be aware that ...

Short answer is No. Longer answer is, it depends. If you're encoding a file, then generally the output is the duration of the input, unless there's a speed change or trim filters or -ss, -to, -t options applied. For a live input, FFmpeg will stop the encode when it encounters EOF on the input, so unless you know that, you won't know the output duration. For ...

I avoid AME and use x264 via ffmpeg for H.264 encoding.
From Premiere I prefer to output a temporary lossless compressed format as the intermediate, such as the free and open-source Ut Video, instead of DNxHD/DNxHR/ProRes. This avoids any generation loss (minor as it may be with ProRes/DNxHD, but still technically present as they are not lossless). Also, I'm ...

It's a rendering artifact, not an actual error.

ffplay test444.mp4 -vf scale=iw*16:-1:flags=neighbor
ffmpeg -i test444.mp4 roundtrip.png

You should see no black pixels. Update: ffplay downsamples YUV inputs to 420 before final conversion to RGB. [swscaler @ 0000000005a82800] bicubic scaler, from yuv444p to yuv420p using MMXEXT You can avoid ...

The scale filter has no effect on the encoder's bitrate control. Yes, a scaled-down video should have a lower bitrate if it is encoded with the same encoder settings as the source. In your command, since no encoder parameters are explicitly set, ffmpeg defaults to encoder x264 with rate-control mode CRF with value 23. Apparently, this results in the same ...

I downloaded the ffmpeg from the link and entered the command below to create a 4:2:2 10bit h.264 file:

ffmpeg-hi10-heaac.exe -i "im.mp4" -c:v libx264 -pix_fmt yuv422p10le yuv-high-.ts

Just to add a bit to AJ Henderson's correct answer. You can indeed compress in a lossless way with h264; this is the lossless predictive profile and is achieved by encoding with a CRF setting of 0. Though while you get lossless h264 compression that way, you will end up with a larger file than your source file. Lossy compression can't be done twice without losing ...

You have a misunderstanding of how compression works. In all but a few specialized types of lossy compression, when you compress something a second time, even at a much higher quality level than previous encodings, you still lose additional quality. Using a slower encoding from the same original source with constant quality will often produce a smaller ...
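As an aside on the -force_key_frames expr:gte(t,n_forced*1) option quoted earlier: n_forced counts the keyframes forced so far, so the expression forces a keyframe at the first frame whose timestamp reaches each whole second. A small Python sketch of that logic (the frame rate is illustrative):

```python
def forced_keyframes(fps, duration, interval=1.0):
    """Simulate ffmpeg's -force_key_frames expr:gte(t,n_forced*interval)."""
    times = []
    n_forced = 0
    for i in range(int(fps * duration)):
        t = i / fps
        if t >= n_forced * interval:  # gte(t, n_forced*interval) is true
            times.append(t)           # this frame becomes a forced keyframe
            n_forced += 1
    return times

# For a 3-second clip at 25 fps, keyframes land at 0s, 1s, and 2s.
keys = forced_keyframes(25, 3)  # -> [0.0, 1.0, 2.0]
```

This is why the answer describes the command as setting "a key frame every second": the counter advances only when a keyframe is actually forced.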
OPCFW_CODE
Digital Heritage: Semantic Challenges of Long-term Preservation Review 1 by Pascal Hitzler: The paper reports on "Digital Preservation" as a field of research in relation (or the lack thereof) to the Semantic Web. It argues that there are natural tight connections which would seem to make Semantic Web technologies applicable to Digital Preservation, and points out some links which have already been established. While it becomes clear from the paper how Digital Preservation can benefit from applying Semantic Web methods, it would be nice to also include a discussion of how the Semantic Web as a field could benefit from this application area. Are there fundamental issues which should be addressed (or addressed in a different way) in the light of Digital Preservation? There are some research challenges mentioned on page 3. For some of them, it is not immediately clear to me how these issues could be addressed considering the current state of the art. Perhaps this could be pointed out in a bit more detail. page 1 left: "what sort of digital legacy they will be able leave with today's technologies." (grammar?) page 1 right: "perquisites" -> "prerequisites" page 2 right, bottom: "reenactment of *a* user experience" Review 2 by Krzysztof Janowicz: Very interesting paper, I would be especially interested to read some more details about aspects of semantic/cultural aging beyond data formats. While I agree that parts of the problem of migration can be understood as an ontology matching problem, I think that it probably needs a better understanding of ontology evolution in the first place. So far, most ontology matching approaches try to integrate ontologies by adding GCI axioms to the source ontology. To do so, they use probabilistic frameworks, structural matching, syntactic (and sometimes semantic) similarity measures, and so forth. However, ontologies are always only approximations of the intended model, and especially OWL ontologies tend to be very rough approximations.
The reason why this 'works' is based on social agreement beyond formal specifications. This is especially the case for base symbols and primitives but also holds for defined concepts. This (hidden) agreement, however, changes over time. While it may have an important impact on the interpretation of data, it cannot be handled by ontology matching but rather by studying conceptual shifts and ontology evolution. IMO, Raubal's paper on 'Representing Concepts in Time' or N. Noy's work on detecting conceptual changes in ontology evolution offers interesting insights. However, I am not sure whether this is rather a semantic aging or a cultural aging aspect, to use your terminology. One example may be the concept of time. I would assume that it is used as a base symbol or primitive in most application/domain ontologies and especially also the annotated primary sources such as text documents. Using the 100-year frame in your paper, some documents may be based on a notion of time (and space) before Einstein's work, while others rely on modern physics. Another (less abstract) example may be the term terrorist before and after 9/11. In the paper, you propose to analyze time series to understand the changes; maybe one could use semantic similarity measures to quantify these changes and determine whether they will require new versions of the used ontologies. Unfortunately, this would still not capture some of the underlying social drifts. Maybe focusing on instance data could help here to observe the changing categorization patterns?
OPCFW_CODE
I am glad I did this as an "alpha test", wow, getting this "whitespace problem" squashed has been a marathon. The full set of fixes is in the smartcontracts branch of FellowTraveler's GitHub repo, not yet merged back into the master branch. I have generated new certificates, used them to create a new nym to serve as admin/signing nym for a new server, and used that nym to sign a new DigitalisOTserver.otc which I have uploaded to the galactic milieu files download directory on SourceForge. I have also created a sample/example "empty" ~/.ot directory for clients; it has no nyms, no currencies, and no accounts, but does have my new server-contract, and I have uploaded it to SourceForge as dot-ot-empty.client.tgz. I have lots of currency contracts to write and sign using the latest fixed versions of the OT server and client. As I create those I will put the .otc files of the contracts on SourceForge and also start putting up a dot-ot-clean.client.tgz that will have the main currencies my server intends to work with as well as my server's contract. We are still in alpha; FellowTraveler intends to check that my contracts work on his installed client and so on tomorrow. However, even in alpha it is worth being aware that although Open Transactions is not totally married to its past with merkle trees like blockchains are, there is some amount of one thing building upon another involved. So if at any point you feel ready to create some nym, contract, or account that might make you not want to totally delete your ~/.ot and start over again from a fresh new dot-ot-empty.client.tgz or dot-ot-clean.client.tgz, that is the moment when you should seriously consider the stuff that is in ~/.ot/client_data/certs/special. In there we have placed a certificate authority (ca.crt) and a certificate signed by that authority (client.pem).
You will probably not want to use those distributed ones for creating any mission-critical nyms, nor presumably any nyms created using them for any mission-critical accounts, and especially not for signing any mission-critical contracts. I am right now going through everything on my side making sure they are not the same ones my server's client_data has, nor the same ones my personal ~/.ot has. Well actually, the ca.crt seems to stay the same, as it is not part of what gets generated when you do the make in the source code's ssl directory to generate new certs for the client and server. So I am not yet sure who the certificate authority actually is or was. But the rest of the special stuff does get made fresh. (In the source dir, do cd ssl, then either touch *.cnf or actually tailor the .cnf files to your needs, then make.) If you have a "real" certificate for your http website or whatever, then maybe you might want to use that for creating your website's client_data, so signing things relating to your website using OT will have some kind of actual back-trail all the way back to the root cert that signed the certs of the cert authority that issued you your website cert. EDIT: When you "make" OT, it does NOT currently at time of writing use your fresh new ./ssl stuff to create fresh new sample data. We probably should add to OT's make the making of a fresh new dot-ot-yours or somesuch. I don't think (tho not positive) that the ~/.ot that make install creates for the user who runs make install (usually root, since it tries to install binary executables into /usr/local/bin) uses the fresh new ./ssl data you maybe created, either. Some of the files in the ./ssl look like they might be intended for use as, or for making, a ca.crt, but I am not clear what exactly is going on there, so thus far I have only been copying a newly made fresh client.pem from there into the ~/.ot/client_data/certs/special directory. So possibly I am not actually ending up with a ca.crt that matches the client.pem.
All this will have to be thoroughly checked out to figure out exactly what one should end up putting there, especially in cases where you have real certs signed by a real authority (such as yourself, heh).
OPCFW_CODE
3500 Watt Inverter current runaway I have a 3500 watt inverter connected to a 400AH LiFePo4 battery (with 200A BMS) using #2 wire. I connect my RV A/C to this inverter. It will start the A/C and it blows cold air. Initially the DC current is around 90 Amps, but the DC current starts increasing until the BMS trips and shuts down the inverter. I suspect it is a bad inverter but want to verify with others in case I missed something in my design. in case I missed something in my design ... your design is unknown Please clarify your specific problem or provide additional details to highlight exactly what you need. As it's currently written, it's hard to tell exactly what you're asking. Is it likely my inverter is faulty and that is why the current keeps increasing until the BMS trips? Are you seeing increased current in the inverter output? I had not thought of that. Thank you for that insight. I will have to check that later today and see if the output is increasing as well. This is the power drawn by the water heat pump I use for heating. It starts around 5kW, then increases to 6kW as water temperature in the heating circuit rises. \$ \Delta T \$ is the temperature difference between the hot and cold side of the heat pump. When \$ \Delta T \$ increases, all heat pumps lose COP (Coefficient of Performance), which is the ratio between thermal power output and electrical power input. So when \$ \Delta T \$ increases, if thermal output power is constant then electrical power used increases, and if electrical power remains constant then output thermal power drops. For air conditioning we use EER (Energy Efficiency Ratio) instead of COP. Basically all the electrical power used by the heat pump ends up on the hot side, so when you use it for heating it is beneficial, but when you use it for cooling it is waste. Thus COP is (Thermal power at the hot side)/(electrical power) and EER is (Thermal power at the cold side)/(electrical power)... and COP = EER+1. Anyway.
Your aircon is a heat pump too. So when you turn it on, temperature on the cold side (inside the room) should be pretty much the same as temperature on the hot side (outside air). With \$ \Delta T = 0 \$ its EER will be very high so it will consume little electrical power to pump its rated thermal power. However after a while, hopefully the inside of the RV is now much cooler than outside, which means higher \$ \Delta T \$, lower EER, and more electrical power is required. Basically it requires more power to pump heat across a larger temperature difference. So you should use a wattmeter to measure this. This power increase is normal, there is nothing you can do about it besides using an inverter heat pump with variable power, and playing with the settings. Then you should check input current and voltage, thus DC input power at the inverter. If it tracks output power, with 10-20% wasted due to the inverter not being 100% efficient, then... normal. If you see inverter input power rise a lot faster than output power, after accounting for 80-90% efficiency, then your inverter loses efficiency when running hot, which could be a problem. Then check voltage drop in the wires. The inverter must output the power required by the A/C, so if there is significant voltage drop in the wires, voltage at the inverter is lower, which means it must draw even more current... causing even more voltage drop in the wires... which means more current... so in a logical but somewhat counter-intuitive way, if your wires are too thin, the inverter will use more current! (and the wires will heat even more). This voltage drop issue doesn't just apply to wires, but to anything with resistance, including battery, BMS, etc.
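Two of the effects described above can be sketched numerically; every wattage, voltage, and resistance figure below is illustrative, not measured from the poster's system:

```python
import math

# 1) COP/EER bookkeeping: all electrical input ends up on the hot side,
#    so thermal_hot = thermal_cold + electrical, which gives COP = EER + 1.
electrical = 1000.0      # W drawn by the heat pump (illustrative)
thermal_cold = 3000.0    # W pumped out of the cold side (illustrative)
thermal_hot = thermal_cold + electrical
eer = thermal_cold / electrical   # 3.0
cop = thermal_hot / electrical    # 4.0

# 2) Wire-drop feedback: the inverter must deliver its output power P
#    regardless of input voltage, so I = P / (V_batt - I*R_wire).
#    Rearranged: R*I^2 - V*I + P = 0; take the smaller (stable) root.
def inverter_current(power_w, v_batt, r_wire):
    disc = v_batt**2 - 4 * r_wire * power_w
    return (v_batt - math.sqrt(disc)) / (2 * r_wire)

ideal = 1500 / 12.8                        # ~117 A with zero wire resistance
real = inverter_current(1500, 12.8, 0.01)  # ~130 A with 10 mOhm of cable
```

The second calculation shows the counter-intuitive point in the answer: thinner (higher-resistance) wire makes the inverter draw *more* battery current for the same AC load, not less.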
STACK_EXCHANGE
Introduction
You use the Assignments and Progress tool to help organise your work and to see which tasks to do next. Once you have finished a task, you should mark it as completed so that you can report on what has been achieved. [If you have used an earlier version, you will see that it has dramatically improved in Paratext 9.]
Before you start
Before you can use the plan, it must have been configured. [Your project administrator will add the appropriate organisational plan and configure it for your team.]
Why is this important?
There are so many tasks to do in a translation project. It is important to have a system to make sure you do all these tasks. Now that your plan has been configured, you can use the plan to see what tasks have been assigned to you to do next. When you finish a task, you can mark it as completed and see the next task to do. You can use this information to generate reports for supervisors and donors (see Project progress 2).
What are we going to do?
You will mark a variety of tasks as complete. The exact steps will vary slightly depending on whether you do the task once per project, once per book, or by chapter. The place to mark progress for all types of tasks is the Status column.
3.1 View tasks that need to be done
- In your project, click the Assignments and Progress button
- [≡ Tab, under Project menu, select Assignments and progress]
- From the first dropdown menu, choose either My tasks or All tasks
- A list of the various tasks and checks is displayed. You can see more details on any task by clicking on the name of the task.
3.2 Identify the next task
The list of tasks shows the uncompleted tasks, each with a colored bar beside it.
- Identify the next task that you need to do. It will have either a green or slashed green bar.
- Check that it is not waiting for another task. If it is waiting, it will have a red slash bar.
- Do the task (see other modules if necessary).
When you finish a task, follow the instructions below to mark it as completed. (A check is completed when there are 0 issues.)

3.3 Mark a task as complete

Mark a book task as completed
- Click the checkmark to the left of the status.
- It turns solid to show the task is completed.

Mark a chapter task as completed
- Click + to mark the next chapter as complete.
- To mark other chapters as complete, click the word Completed. A dialog box is displayed with a list of the chapters. Click the numbers of the chapters that have been completed.
- If the task is a check, the status of the check will either say Setup required or show the number of remaining issues.
- A check is complete when there are 0 issues.

Checks – setup required (Administrator)
- Click the blue link "Setup required".
- Paratext 9 will run the appropriate inventory or open the settings for that check.
- Complete the setup as appropriate.
- Close the window when finished.

If more than one inventory is required for a check (e.g. capitalization), you will need to set them up manually from the Tools menu > Checking Inventories.

Checks – issues
- Click the blue link "…issues".
- A list of errors is displayed.
- Make the necessary corrections.
- Close the list of results (if desired).
- ≡ Paratext, under Paratext > Save all (or Ctrl+S).
- Return to Assignments and Progress. The check is considered complete when there are 0 issues.

If you are unable to complete a check, it is possible to postpone it to a later stage.
- ≡ Tab, under Project menu, select Assignments and Progress.
- Change to the All tasks view.
- Hover over a check that has issues.
- Click Postpone (which appears to the right of the Status column).
- Choose the stage to which you want to postpone the check.
- Type the reason for postponing the check.
- The check will move to that stage.
This chapter describes a high-level view of configuring WebLogic Server MT. The chapter refers to the Fusion Middleware, Coherence, and Oracle Traffic Director documentation sets and online help for additional information as appropriate.

You can use your choice of the following tools to manage WebLogic Server MT:
- Fusion Middleware Control, which is the preferred graphical user interface.
- WebLogic Server Administration Console

For each task, this document presents:
- The main steps for accomplishing a given task, with a link to the related Fusion Middleware Control online help topic.
- A WLST example

In some instances, there is a difference in how a feature is presented by Fusion Middleware Control and the WLS Administration Console. This document calls out those instances, and includes a link to the WLS Administration Console online help where appropriate.

This section describes the high-level steps to configure WebLogic Server MT. Consider the high-level view of WebLogic Server MT shown in Figure 3-1. This graphic shows the relationship of domain partitions, Oracle Traffic Director, a WebLogic Server domain, and Coherence.

To configure WebLogic Server MT for Figure 3-1:

Install WebLogic Server for MT, as described in "Installing Oracle WebLogic Server and Coherence for WebLogic Server MT" in Installing and Configuring Oracle WebLogic Server and Coherence.

Note: If you plan to use Oracle Traffic Director (OTD) to manage traffic to applications running in partitions, you must install WebLogic Server in the same path on any remote host where Managed Servers will run. New lifecycle management facilities require access to plugin JARs that must be available at the same relative path as installed on the Administration Server host.

To use Oracle Traffic Director (OTD) to manage traffic to your partitions, install OTD in collocated mode as described in the Oracle Traffic Director Installation Guide.
You can install OTD and WebLogic Server to different Oracle_Home locations on different systems (as shown in Figure 3-1), or to the same Oracle_Home on a single system.

To use OTD to manage traffic to your partitions, use the Configuration Wizard to create an OTD domain. Use the Oracle Traffic Director - Restricted JRF template to create the domain. This template automatically includes several other necessary templates.

Create a new domain, as described in "Creating a WebLogic Domain" in Creating WebLogic Domains Using the Configuration Wizard. Use the Oracle Enterprise Manager - Restricted JRF template to create the domain, as described in "Installing Oracle WebLogic Server and Coherence for WebLogic Server Multitenant" in Installing and Configuring Oracle WebLogic Server and Coherence. This template automatically includes several other necessary templates.

To use the WebLogic Server lifecycle manager feature to coordinate partition configuration changes with OTD, use WLST to enable Lifecycle Manager on the WebLogic Server domain:

edit()
startEdit()
cd("/")
lcmConfig=cmo.getLifecycleManagerConfig();
lcmConfig.setDeploymentType("admin")
lcmConfig.setOutOfBandEnabled(true)

As with any WLST edit session, finish by calling save() and activate() to commit the changes.

Create any clusters and Managed Servers you want to use for domain partitions. If you use Fusion Middleware Control or the WLS Administration Console, there is nothing partition-specific about creating a cluster. See "Setting up WebLogic Clusters" in Administering Clusters for Oracle WebLogic Server.

However, if you use WLST to create Managed Servers (configured or dynamic), the required JRF template is not applied. Fusion Middleware Control requires the JRF template in order to enable domain monitoring. Therefore, for the WLST use case:

Use WLST to create the cluster or Managed Server.

Use the applyJRF command to apply the JRF template to the Managed Servers. For more information, see WLST Command Reference for Infrastructure Components.
Create one or more virtual targets, as described in Configuring Virtual Targets Create and configure a resource group template (RGT), as described in Configuring Resource Group Templates. Optionally, deploy applications to the resource group template, as described in Deploying Applications. Your applications might require partition-specific database connections. Create a new security realm as described in Configuring Security. To use OTD to manage traffic to your partitions, use Fusion Middleware Control to create an OTD instance. Create domain partitions, as described in Configuring Domain Partitions. Override resources such as JDBC connections, as described in Configuring Resource Overrides. Optionally, configure Coherence, as described in Configuring Coherence. Optionally, configure resource managers as described in Configuring Resource Consumption Management. Monitor your domain partitions, as described in Monitoring and Debugging Partitions.
I am trying to setup views that will allow an individual to see whether they have read/viewed the events. This is kind of like message read in Outlook, where a message has not been read shows up as bold. Please note this is on the individual user and not system wide. We currently have a web based tool that will show the event as red (if the user has not read any of the event), yellow (if the event has been updated since the user last viewed it), and green (if the event has not been updated since the user last viewed it). We are in the process of transitioning to OVSD and the users would like to keep this functionality in the new tool. The users are mixed using the Java Client and WebConsole, so it would be nice to have this working for both. There might be a way to do this, but it will probably be too complicated to implement and maintain, and might require the Web-API You would need to set up a custom field to contain the information as to whether or not a user has read an item. Each time they open the item, an UI rule must update that field with the user's identifier. Each time the item is otherwise updated, this field needs to be reset, probably by a DB rule. Finally, you need to set conditional formatting in the view based on the value in this field. If I were you, I would not try to force an application to have functionality it was never meant to have. This is not the way you are supposed to decide whether or not an item requires action. You make this decision based on the assignment, the status, the priority and perhaps the deadline. You do not waste time by browsing the items. Josh, I looked at this and see that I would be able to set a UI rule that would concatinate "(Current Person)," to the custom read-only field when a user opens an event and then clear this field out completely with a DB rule once it is updated. The only thing that I can't figure out how to setup the view's autoformat that will allow me to see if the (Current Person) is in the field. 
Basically, [Event ID Read] does not contain (Current Person), then set to Bold. With this logic any event that the user has not opened or has been cleared via DB rule would show up as Bold. Suppose you have a boolean field which is tested in the autoformat configuration of the view. You would then need some external logic to decide whether or not to set that boolean field. The external logic would test the value of the boolean field, as well as some other field containing the data for all the people who have read the record, and then decide if the boolean should be set.
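To make the proposed mechanism concrete, here is a small Python model of the logic, entirely outside of OVSD. The field name (read_by) and the "(person)," formatting are hypothetical; in OVSD the three pieces would be a UI rule, a DB rule, and the view's autoformat condition rather than code:

```python
# Model of the read-tracking logic discussed above (hypothetical names).

class Event:
    def __init__(self, event_id):
        self.event_id = event_id
        self.read_by = ""          # the custom read-only field

    def open_by(self, person):
        # UI rule: append "(person)," when someone opens the item
        if f"({person})," not in self.read_by:
            self.read_by += f"({person}),"

    def updated(self):
        # DB rule: any other update clears the field again
        self.read_by = ""

def show_bold(event, current_person):
    # Autoformat condition: bold when [Event ID Read] does NOT
    # contain (Current Person)
    return f"({current_person})," not in event.read_by

e = Event(1)
assert show_bold(e, "alice")       # never opened -> bold
e.open_by("alice")
assert not show_bold(e, "alice")   # alice has read it
assert show_bold(e, "bob")         # bob has not
e.updated()
assert show_bold(e, "alice")       # updated since last read -> bold again
```

The sticking point in the thread is exactly the last function: the autoformat would need a "field does not contain (Current Person)" condition, which is why the external boolean-setting logic was suggested.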
What exactly does GNU make dep do?

I am trying to understand GNU Make, some C code, and the GNU autotools. There's a folder, let's say lib, with three subfolders and a Makefile:

lib
...libA
...---compile.sh
...---file.h
...---file.c
...libB
...---file1.h
...---file2.h
...---file3.h
...---file.c
...---Makefile
...libC
...---file1.h
...---file1.c
...---file2.c
...---file3.c
...---file4.c
...Makefile

After a configure script is run to detect and set up the environment, the program goes into the folder, runs make dep, then finishes. What exactly is make dep doing?

I would presume only the holder of the Makefile will ever know. cat Makefile | grep "dep" (makes and builds the dependencies) ... seriously though, open and read the Makefile and you will know. It is an arbitrary target delineated in the Makefile, and without showing the file, we will never know.

How's that an answer? Why would you think that I didn't look in the Makefiles? There's no deps: that's why I am asking the question...

It's not an answer... it's a comment. And it is dep, not deps. And no, it wouldn't surprise me if you didn't look.

GNU make dep would be defined in the makefile. Makefiles are a simple way to automate things (such as compiling binaries and installing them); make is basically a scripting language. The makefile will contain a section like this:

dep:
    some command
    maybe an if statement or two
    some other command

That section defines what make dep does. Judging by the name, it probably has something to do with dependencies. The makefile will also contain other sections (such as all, default, install, etc.) to take care of compile-time configuration, compiling, and installation. Most packages have install documentation available somewhere to explain the options in the makefile.
Reading that and looking at the corresponding sections in the makefile is a good way to learn about how make works. Thanks for the clarification, I looked through all the makefiles and the dep: sections are all empty. Most likely for some features that haven't been implemented yet. Thanks anyways. If I only would have made my comment an answer, I would have been respected ... good to know @user1610950. God speed my friend. Oh, and thanks for this tidbit: dep: sections are all empty
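For reference, a conventional dep target (hypothetical, not taken from the package in question) asks the compiler to emit Makefile-style dependency rules for each source file and collects them in a file that the Makefile then includes:

```make
# Hypothetical "dep" target: regenerate header dependencies for the
# listed sources so make knows which .c files to rebuild when a
# header changes.
SRCS = file.c file1.c

dep:
	$(CC) -MM $(SRCS) > .depend

-include .depend
```

An empty dep: target, as found here, simply does nothing; it likely exists so that build scripts which always run make dep do not fail on this package.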
M: Minimum Viable Ops - Deploying your first Django App to Amazon EC2 - eddy_chan http://eddychan.com/post/18484749431/minimum-viable-ops-deploying-your-first-django-app-to R: japhyr Thank you for posting this, it's exactly the kind of guide I've been looking for. I have a django site running on a local server at my workplace, that will only ever have four or five users. I have considered making the site public so that my colleagues can use the site from home as well as at work. Do you know what would it cost to keep a site like this running for a second year, once I've run through my free year? R: eddy_chan Glad somebody's reading it :) Thank you. Looking at Amazon's pricing page (<http://aws.amazon.com/ec2/#pricing>) if you stick with 1 micro Linux instance past the free first year it's $0.02 per hour which works out to $14/month. Add some data in/out costs and I reckon you'd be looking at about $16/month. Move up a step to the small Linux instance at $0.085c per hour and it becomes ~$70 per month but I imagine that could support a fair few users.
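For what it's worth, the back-of-envelope arithmetic behind those figures (instance-hours only; the ~$70 quoted for the small instance presumably rounds up for data transfer):

```python
# Rough monthly cost from the quoted EC2 hourly rates (circa 2012).
hours_per_month = 24 * 30

micro = 0.02 * hours_per_month    # micro instance, past the free year
small = 0.085 * hours_per_month   # small instance

print(f"micro: ${micro:.2f}/month")   # matches the quoted ~$14/month
print(f"small: ${small:.2f}/month")
```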
#
# Initialisation of the module responsible
# for working with the database;
#
############################################################
# import

import random

from sqlalchemy import func
from sqlalchemy.orm import sessionmaker

from app.bd.table import *
from nvxsct import sct

############################################################
# prepare

Base.metadata.create_all(engine)
Session = sessionmaker(engine)
session = Session()

work_count = session.query(Work).count()
types = sorted([ c[0] for c in session.query(Work.atype).distinct(Work.atype) ])
countries = sorted([ c[0] for c in session.query(Country.country).distinct(Country.country) ])
genres = sorted([ c[0] for c in session.query(Genre.genre).distinct(Genre.genre) ])
tags = sorted([ c[0] for c in session.query(Tag.tag).distinct(Tag.tag) ])
minyear = session.query(func.min(Work.year)).one()[0]
maxyear = session.query(func.max(Work.year)).one()[0]

############################################################
# functions

query_maxlen = 10
last_queries = []  # [ sct(token : str, query, sets, count) ]


def generate_token():
    '''
    Generates a random three-character string (digits and lowercase
    Latin letters) that does not already occur in last_queries;
    this string is the token
    '''
    token = ''
    while token == '' or len([ s for s in last_queries if s.token == token ]) != 0:
        token = ''
        for i in range(3):
            token += random.choice('0123456789abcdefghijklmnopqrstuvwxyz')
    return token


def get_rating(sets, token=None):
    '''
    There are several ways to use this function.

    First, pass sets (the search settings) without a token: the
    corresponding query is built and stored; not only the matching
    works are returned, but also an extra structure with the
    following fields:
        struct:
            token (three unique characters identifying the query)
            query (the query object; do not use outside this module)
            sets  (the query settings)
            count (the number of works found for this query)

    Second, also pass a token: the stored structure (see above) is
    retrieved and the corresponding works are returned together with
    the structure. If no structure with this token exists, the call
    falls back to the first variant
    '''
    extra = sct(token=token, query=None, sets=sets, count=0)  # the structure

    # Variant with a token
    if token:
        extra_list = [ extra for extra in last_queries if extra.token == token ]
        if len(extra_list) == 0:
            print('Expired token')
            extra.token = None
        else:
            extra = extra_list[0]

    # If there is no token or it has expired
    if not extra.token:
        extra.query = session.query(Work)

        # Build the query according to the settings
        if sets.atype != 'Не выбрано':
            extra.query = extra.query.filter_by(atype=sets.atype)
        if sets.genre != 'Не выбрано':
            extra.query = extra.query.join(WorkGenre).join(Genre).filter(Genre.genre == sets.genre)
        if sets.country != 'Не выбрано':
            extra.query = extra.query.join(WorkCountry).join(Country).filter(Country.country == sets.country)
        if sets.tag != 'Не выбрано':
            extra.query = extra.query.join(WorkTag).join(Tag).filter(Tag.tag == sets.tag)
        if sets.minyear:
            extra.query = extra.query.filter(Work.year >= sets.minyear)
        if sets.maxyear:
            extra.query = extra.query.filter(Work.year <= sets.maxyear)
        if sets.base != 'Не выбрано':
            extra.query = extra.query.filter(Work.base == sets.base)
        if sets.director != 'Не выбрано':
            extra.query = extra.query.join(Director).filter(Director.director == sets.director)
        if sets.idea != 'Не выбрано':
            extra.query = extra.query.join(Idea).filter(Idea.idea == sets.idea)
        if sets.actor != 'Не выбрано':
            extra.query = extra.query.join(WorkActor).join(Actor).filter(Actor.actor == sets.actor)
        if sets.like != None:
            extra.query = extra.query.filter(Work.name.like('%' + sets.like + '%'))

        # Note: the dictionary keys below are UI strings sent by the
        # client and are kept verbatim
        extra.query = extra.query.order_by(
            *{
                'По расчётному баллу (возр.)'   : [ Work.bscore ],
                'По расчётному баллу (убыв.)'   : [ Work.bscore.desc() ],
                'По среднему баллу (возр.)'     : [ Work.score, Work.bscore ],
                'По среднему баллу (убыв.)'     : [ Work.score.desc(), Work.bscore.desc() ],
                'По году (возр.)'               : [ Work.year, Work.bscore.desc() ],
                'По году (убыв.)'               : [ Work.year.desc(), Work.bscore.desc() ],
                'По названию (возр.)'           : [ Work.name, Work.bscore.desc() ],
                'По названию (убыв.)'           : [ Work.name.desc(), Work.bscore.desc() ],
                'По количеству голосов (возр.)' : [ Work.voted, Work.bscore.desc() ],
                'По количеству голосов (убыв.)' : [ Work.voted.desc(), Work.bscore.desc() ],
            }[sets.sorting]
        )

        # Fill in the fields of the extra-data structure
        extra.token = generate_token()
        extra.count = extra.query.count()
        last_queries.append(extra)
        if len(last_queries) > query_maxlen:
            del last_queries[0]

    # Fetch the result and number the rows
    res = extra.query[sets.offset:sets.offset + sets.limit]
    for i in range(len(res)):
        res[i].num = i + sets.offset + 1
    return res, extra


def get_work(id):
    return session.query(Work).filter_by(work_id=id).one()


def create_sets_struct():
    '''
    Helper function that creates the "default" query settings
    '''
    return sct(
        atype    = 'Не выбрано',
        genre    = 'Не выбрано',
        country  = 'Не выбрано',
        tag      = 'Не выбрано',
        minyear  = minyear,
        maxyear  = maxyear,
        base     = 'Не выбрано',
        director = 'Не выбрано',
        idea     = 'Не выбрано',
        actor    = 'Не выбрано',
        sorting  = 'По расчётному баллу (убыв.)',
        offset   = 0,
        limit    = 50,
        like     = None
    )

############################################################
# END
18 questions linked to/from Should I be concerned about Featured Questions inflating votes? I do not understand why I failed this audit Whilst reviewing over on Stack Overflow I got the following question How to get source code from installed app on the android phone? I mistakenly formated my WD hard disk.And I lost all my project ... I know this is an audit and it is bad. What should I do? I was going through the low quality post review queue and got this question. I am 99.99% certain that it is an audit as it is a question and the LQ queue no longer does questions. I right clicked on ... Voting history statistics request In a couple recent threads regarding how users of higher reputations behave there is a lot of speculation going on. Could we get some summarized statistics to help in these discussions? I think some ... Failed FP audit due to downvote. Is my mental rule for downvoting wrong? [duplicate] I just failed an audit in First Posts which now locks me out for 2 days. While I perfectly understood why I failed before (misclicked button, misinterpreted problem statement), I was puzzled by that ... Should we allow questions with an active bounty in the triage queue I was going through the triage queue and got this review of ISO C++ forbids declaration of ‘multiset’ with no type. Looking at the question there is no MCVE and the code is provided only as a link. ... Should audit posts be moderator-validated? And should you be review banned after one failed audit? ok, so I have been reviewing posts on stack overflow for the better part of a year or more. Lately, I have noticed the audit posts have become much more arbitrary. For example, I saw this post and ... Why does the Low Quality Posts review queue still have question audits? Questions got kicked out of the Low Quality Posts (LQP) review queue a while back and shunted to Triage instead. So there are not now, nor have there been for some time, any legitimate questions in ... 
Close Vote Review Queue audit on active bounty question Going through the close vote review queue I got this review for Simplify process with linq query which has an active bounty. According to Shog9♦ comment here FYI, bountied questions are already ... I am got blocked from review and I think it is incorrect, what to do now? [duplicate] I was reviewed a question and the result is this. I just left a comment. But it said that I reviewed it wrongly and banned me for one month! Should always a comment be negative? If yes, then there ... When is asking for examples a close reason? I understood from previous questions (e.g. How can I better ask this question about finding example code to learn from?) and the off-topic/too broad flag reasons that questions asking for tutorials, ... Possible bad audit review in First Post Came across this one: https://stackoverflow.com/review/first-posts/13646114 To me it seemed a bit poor as they hadn't posted any code to the stackoverflow website and just relied on an external link (... Does this question deserve to be in the Review Audit List? Is this question a Good Question and qualified to be in Review Audit list? This question does not: Specify which RDBMS or Database is in use, so Proper database tags are missing Show any search ... Failed a review audit, even though the post was of low quality [duplicate] I was reviewing this question on SO and I downvoted it as it was of low quality but to my surprise I was told I had failed. I don't consider this question a good quality post. There is no example of ... Bounty on a question of questionable quality [duplicate] There is a question here that isn't specifically about programming -- it's about algorithms. I think that it's not a good question for SO (although I may be wrong, but that isn't relevant here), so I ... Reopen Votes Audit Please take a look at this question: Nginx to host app in different location That's clearly about configuring Nginx isn't it? 
So apparently others thought so and apparently it was closed. Since ...
not really, you cannot usually use -de for "on", but it is OK for "in". Therefore "Lamba masada" sounds like "the lamp is in the table". (sometimes you can, but often it sounds wrong so when you see "on" try to use üzerinde/üstünde. Except for weird English uses like "on the bus", then you cannot say üzerinde/üstüne of course) Lamba masada sounds totally ok for me. In/at/on = -de/-da It's really hard to say on = üstünde always. For instance; on the road. You can't really say Yolun üstündeyim. üstünde/üzerinde actually means on top of but we use it so frequently that you can translate that into on. but this doesn't mean that on = üstünde and you shouldn't use -de/-da I don't want to create rules but just when I think about it; i have the feeling for things that are lying on the table, it is OK to use -de/da but not really for things which are standing. Would you really say for example "şişe masada"? even if you say it, how much more likely are you to say "Şişe masanın üstünde." I often say Şişe, sürahi ve tencere masada. Or Vazo masada. You can also say Kağıt masanın üstünde. The difference is actually about the emphasis. For example if you are preparing the table you would say Şişe masada. (I already put it on the table) But if someone is looking for the bottle you would say Şişe masanın üstünde. to be more specific about the location. Or to imply that there is something on the table and you should be careful you can say Masanın üstünde şişe var. I think you are free to use -de/-da instead of üstünde just like you can use on instead of on top of in English. For Spanish speakers -> Encima = üstünde I never thought of our odd "on the bus," but you're right. Poor non-native English speakers! Now I have a mental picture of everyone tied on top, like luggage! Actually, I'm not a native English speaker, but I've never thought about that too. What you have pictured is really funny. Haha. 
XD But "on" could be possible for the bus that doesn't have roof (do they call it "roof"?). Double decker, right? :D I'm guessing it's because older forms of transportation mostly made sense with the literal sense of "on". "On a horse", "On a boat", etc., and so we kept with it for "bus"? Then again, that doesn't explain why we say "in a car" and never "on a car". There is a picture on the wall": "Duvarda (bir) resim var." But i think mostly we say "Duvarda bir resim asılı." (I do not know how you say it in English: A picture is hanging on the wall (?). 1) Is lamba only used for "standing" lamps or can you use it for hanging lamps too? 2) How would you say that the lamp is (hanging) over the table? How about: Lamba masanın yukarısında asılıdır. Over in this case meaning above? It uses both. üst means something like "top" or "upper side". So from that you get masanın üstü "the table's top" with genitive masanın and possessive üstü. And then you add the locative to the end of that to get masanın üstünde "at the table's top = on top of the table, on the table".
TensorFlow is an end-to-end open source platform for machine learning. There is a range of installation options for TensorFlow on CREATE. To avoid memory limitations on the login nodes, request an interactive session to complete your installation process. Please see the documentation on how to request more resources when using CREATE HPC. To save space in your home directory, the following example assumes you have created a non-standard conda package cache location; however, this is not a requirement and the standard method will work just as well.

Using a Virtual Environment

If you require the latest stable version of TensorFlow, pip is recommended, as TensorFlow is only officially released to PyPI. Make sure to first set up your virtual environment:

[code listing not preserved in this copy]

Or you can use conda; although it may not have the latest version, it is still a great option for repeatable analysis and is much easier to use for dependency management:

[code listing not preserved in this copy]

TensorFlow is also available as a module on CREATE: module load py-tensorflow/2.4.1-gcc-10.3.0-cuda-python3+-chk-version. Note that if you run module load py-tensorflow it will default to our latest version.

[code listing not preserved in this copy]

Testing TensorFlow on the GPU

If you installed TensorFlow with one of the virtual environment examples above, you will also need to load the following modules for working GPU access:

[code listing not preserved in this copy]

TensorFlow is also available through the Singularity containerisation tool. With nvidia support enabled, the following example command can be used to test your TensorFlow container.

Using TensorFlow in a Jupyter notebook

For a complete guide on how to launch Jupyter on CREATE HPC, please refer to our guide document here. The following example makes use of the virtual environment created above and installs jupyterlab directly there. However, when using CREATE modules and self-installed software, please make note of which python version is being used to avoid potential incompatibility issues.

Create a batch script for TensorFlow and Jupyter

Due to the resource overhead of both Jupyter and TensorFlow, please make sure you request a sufficient amount of compute resources via sbatch to avoid potential kernel instability issues when using your Jupyter notebook.

[batch script listings not preserved in this copy]

Once you have submitted your batch script via sbatch, you can use the instructions printed in the slurm output to launch the Jupyter notebook in your browser and test your TensorFlow installation:

[code listing not preserved in this copy]
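In outline, the batch script described above would look something like the following sketch. The partition name, resource amounts, and paths are assumptions, not CREATE-specific values:

```shell
#!/bin/bash
#SBATCH --job-name=tf-jupyter
#SBATCH --partition=gpu        # assumed partition name
#SBATCH --gres=gpu:1
#SBATCH --mem=16G
#SBATCH --time=02:00:00

# activate the virtual environment created earlier (assumed path)
source ~/tf-venv/bin/activate

# bind to the compute node's hostname so the printed URL is reachable
jupyter lab --no-browser --ip="$(hostname)" --port=8888
```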
How to deploy your project on a Tencent cloud server so that others can access it

In fact, the principle is: put your project on the cloud server and run it with Tomcat. Change the Tomcat port number to 80, and others can access your project's index.jsp page via ip + project name.

What is a cloud server? To put it bluntly, it is a networked computer, but it has no physical form; it is virtual and invisible.

Well, without further ado, let's get to the point.

Step 1: apply for a cloud server (the Tencent cloud student package I applied for here is 10 yuan, and other clouds have similar first-time offers). Just search for the Tencent cloud student package directly on the website. I bought the Windows Server 2012 system. I have already paid the renewal fee shown here. After payment, refresh the interface and an account and password are sent to you in a message.

Step 2: enter the console to view your own information.

Step 3: log in. You can click login on the web page, or use remote desktop on your local computer. The computer name is the primary IP address of the server, and the user name is your own user name; if it has not been set, it defaults to Administrator (this all refers to servers with the Windows operating system; I don't know about others).

Here's a reminder: if you want to share a drive on your computer with the cloud server, do this: click Local Resources, find Drives, and check the drive you want to share. For example, I chose disk F. Then the computer name is the IP address of the server you applied for (how to check it is explained above), and you can connect.

Step 4: enter the cloud server. The exciting moment has come. Here is my interface. When you first enter, there is only one Recycle Bin; everything else on mine I installed myself. This is equivalent to buying a computer.

Step 5: install all the required software. As mentioned above, you can share your host computer's drives with the server.
Just drag the installation packages over and install them one by one. I will only mention the steps and precautions:

1. First install the JDK and configure the environment variables; verify that the installation succeeded.
2. Install MySQL. It is strongly recommended to put the installation under Program Files on drive C; then there will be no errors during installation.
3. Install Navicat.
4. Install Tomcat. Pay special attention here: I copied my computer's Tomcat over directly and tried for several hours without success, then re-downloaded version 9.0 on the cloud server.
5. Eclipse is optional. Just put the war package in webapps under Tomcat.
7. Change the default port, 8080, to 80.

Someone will ask why you want to change the port. 8080 is the default port of Tomcat, and 80 is the default HTTP port. If the port is set to 80, you do not need to append a port after entering the IP, because 80 is the default. We just need to modify the configuration of Tomcat: open the directory where Tomcat is located, open the conf folder, and open the server.xml file.
The parts to be modified are as follows:

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"/>
<Context path="" docBase="shop1" reloadable="true"/>

8. Enter the bin directory in the Tomcat folder and run startup.bat to start the project!

Here is my project. The above is a personal graduation design website I deployed. It is only for testing and is still being improved.

About the domain name: domain name registration is particularly troublesome. I would have to file for review and spend money. At present, my purpose (allowing others to visit my website) has been achieved. In the future, I will learn how to set up a domain name, and I will continue to update below.

Choosing the right platform is the most important thing for a cloud server carrying your business! Judging from the current pattern of the domestic cloud computing market, the top three domestic cloud providers are Alibaba cloud, Tencent cloud and Huawei cloud. Alibaba cloud and Tencent cloud, as enterprises backed by Internet platforms, prefer B-end users; Huawei, as a traditional communications giant, prefers the G-end.
Of course, how do you select a server model? The official documents summarize this in detail. If yours is a high-concurrency, high-IO business scenario, you need to choose the server specification accordingly to optimize the performance of your business applications. Refer to the official documentation: Alibaba Cloud server specifications: ECS instance families; Tencent Cloud server specifications: CVM instance families.
There are countless dating sites that mislead their members, as we've described above. That's why we are trusted more than any other Chinese dating site. The power of pulling the trigger, the sound of a gunshot piercing the air and the honor of living in a country where you have the right to bear arms and protect your God-given freedom. As a member of Date a Gun Lover, your profile will automatically be shown on related hunting dating sites or to related users in the Online Connections network at no additional charge. A person who feels what you feel.

We can make you more visible to the dog lovers in your area and vice versa. Our eMagazine, blogs and forum are entertaining, enlightening and educational on how to be safe, secure, sparkling and successful while dating Chinese and Asian women. Instead, spark a conversation — and a new romance — with a fellow anime lover. It's no accident you stumbled upon our community, because here you will find faithful companions who will guide you through life! We care a lot about our Chinese women members. We wish you the best in your search for your soulmate, and in meeting the one true love of your life. For a fun, safe and uniquely Chinese dating experience.

Singles can get online using their mobile phone or a computer, and start discovering men and women that are looking for the same in their local area. Once you have met someone who appeals to you, you can even base your dates around your dogs, heading out for long strolls with your furry friends, taking them to the beach and more. When our members speak, we listen. There is not a single soul at K9 Personals that could ever be bored by endless stories about puppies, dog breeds and dogs in general. Forget about those awkward conversations about the weather and work.

Once you have gained access, you can start sending messages to dog lovers, flirting with them in the chat rooms and browsing through their personals immediately.

Forget the traditional methods of being set up by friends, going out looking for dates or leaving it to chance, and take control.

Commitment to Honesty and Integrity
In an effort to try to bring honesty and integrity to the online dating industry at large, something that is sadly lacking to a large degree, ChinaLoveMatch.

How Our Matchmaking Works
The reason why SilverSingles is one of the best dating sites for over 50s is because we think about compatibility first. As a member of Gospel Lovers, your profile will automatically be shown on related Christian dating sites or to related users in the Online Connections network at no additional charge. Both of these groups have their own characteristics, and you can prefer one over the other. If you recognize Him as your savior, you are welcome to join Gospel Lovers, where like-minded Christians are meeting every day to support each other on a journey to Heaven, finding the strength to stay worthy and humble.

Love the smell of gunpowder in the morning?

Meet and Date Fans at Anime Lovers Dating Service
Japanese animation, commonly called anime by those who embrace the art form, has enlisted a cult following among Americans and elsewhere in the world. A modern Chinese woman adapts well to new cultures, surroundings and people. With a commitment to connecting singles worldwide, we bring China to you. Above all, we want to make sure the quality of your matches remains high. You can join today by signing up.

This is a club designed specifically for singles like you who want to meet other anime lovers for casual dating or serious relationships. Date a Gun Lover is part of the Online Connections dating network, which includes many other general and hunting dating sites. Become a member of LoveAndSeek. Chinese girls are caring, polite and usually very gentle and charming.

Remembering is the cheer of life,
For memories are sweet;
But love is the all of life,
Making it complete.

We constantly police the website for people who are here for the wrong reasons. You also need to include a profile picture so that people can get a good idea of what you look like. If you are looking for serious Chinese dating and relationships, you can find it on ChinaLoveCupid, where we bring together thousands of single men and women internationally. Narrow down results according to location, age or other interests to find the person uniquely suited to you. This helps us pair people up based on their goals, values, and own criteria. Imagine yourself curled up with a beautiful anime lover on your couch on a chilly winter night, enjoying the latest anime movie release or chatting about a new manga you've just read over a latte in a cozy coffee shop. We protect them from scammers, we try to educate them on what to look for in a good western man and what not to expect, and we provide a great forum for all members to communicate with each other and help each other succeed.

See Who Shares Your Interests For Guns at Date a Gun Lover
Welcome to Date a Gun Lover! As a member of Meet Animal Lovers, your profile will automatically be shown on related animal lover dating sites or to related users in the Online Connections network at no additional charge. We strive to make your online dating fun. We will match you to the singles that match your personality and relationship needs, and from there you are free to decide who you want to get to know.

Sign up for our dog lover dating website now
If you have been looking for a dog lovers dating website, the search is now over. When you start working on your dog lover dating profile, talk about who you are, who you want to meet, your passion for dogs and more.

When you're ready to find yourself a date, whether you're looking for a casual relationship or someone to share the rest of your life with, only one online dating service will do: Meet Animal Lovers. You'll also find no shortage of things to do on dates, from strolling leisurely through a zoo to rolling up your sleeves and lending a hand at a local animal rescue group.
/** Created by garciaph on 29/08/17. */
$(document).ready(function() {
    // Disable the send button after the form is submitted
    $('form.sendForm').submit(function() {
        $(this).find(':input[type=submit]').prop('disabled', true);
    });

    $('select').material_select();

    // textarea behavior
    $('textarea')
        // stops accepting input after reaching its maximum length
        .keypress(function(e) {
            if (e.which < 0x20) {
                // e.which < 0x20: not a printable character
                // e.which === 0: not a character
                return; // Do nothing
            }
            if (this.value.length == $(this).data('length')) {
                e.preventDefault();
            } else if (this.value.length > $(this).data('length')) {
                // Maximum exceeded
                this.value = this.value.substring(0, $(this).data('length'));
            }
        })
        // truncates pasted content that would exceed the character limit
        // (bug fix: the sliced value was previously computed but never assigned)
        .on('paste', function(e) {
            e.preventDefault();
            var pasted = (e.originalEvent || e).clipboardData.getData('text/plain');
            this.value = (this.value + pasted).slice(0, $(this).data('length'));
        });

    // preview selected image on form's image input section
    $(".previewed-image").change(function() {
        readURL($(this), $(this).data('preview-target'));
    });

    // resize already initialized images (edit mode)
    $("img.edit").materialbox().each(function() {
        resizeImageOrientationWise($(this), 180, 180);
    });

    $('#back').click(function() {
        var getUrl = window.location;
        var baseUrl = getUrl.protocol + "//" + getUrl.host + "/" + getUrl.pathname.split('/')[1];
        window.location.href = baseUrl + "/tile/index";
    });
});

// function used in image preview
function readURL($input, previewTarget) {
    var $preview = $('#' + previewTarget);
    // read filename from file input and create a new file reader for it
    if ($input.prop('files') && $input.prop('files')[0]) {
        var reader = new FileReader();
        // load file and change the 'src' attribute of the preview <img> element
        reader.onload = function(e) {
            $preview.attr('src', e.target.result);
            // resize to 180px height (if portrait) or 180px width (if landscape)
            resizeImageOrientationWise($preview, 180, 180);
            $preview.show();
        };
        reader.readAsDataURL($input.prop('files')[0]);
    }
}

function resizeImageOrientationWise($el, height, width) {
    // note that it must be $.attr instead of $.css because of materialize materialbox
    $el.on('load', function() {
        var eWidth = $el.width();
        var eHeight = $el.height();
        if (eWidth > eHeight) { // landscape
            $el.attr('width', width);
        } else { // portrait
            $el.attr('height', height);
        }
    });
}
Execution time issue after changing the MySQL DB to MariaDB
I changed MySQL 5.6 to MariaDB 10.1; the total number of records is above 5 million. A PHP script exports the data with a join query. The export now takes longer for the same PHP code.
Before, MySQL DB: 20 to 30 seconds
After, MariaDB DB: 50 to 60 seconds
Please suggest how to speed up the execution time.
Where is the code that you are using? How did you move the data from MySQL to MariaDB? Are all of the same indexes in place? Does an EXPLAIN of the query give you any clues? So many questions, so little information.
I exported the PHP config from the old server and imported/provisioned it on the new server. I did the same for my.cnf on the MariaDB server. Is there any difference in the my.cnf variables between MySQL and MariaDB? I have copied the my.cnf of MySQL to MariaDB; will this cause any performance issue?
Did you read the answer below provided by @ndev? He gives you a very complete answer with links for further information. @RajMohan - there may be a dozen ways to export/import. We cannot guess which one you used. The question is too general to answer. You should provide some example code used for the export.
To speed up the execution time you could tweak your config file. There are some differences between MySQL and MariaDB settings. Take a look here: https://mariadb.com/kb/en/library/system-variable-differences-between-mariadb-101-and-mysql-56/ You should know whether you are using MyISAM or InnoDB. The most notable differences are that MariaDB includes, by default, the Aria storage engine (resulting in extra memory allocation) and Galera Cluster, uses Percona's XtraDB instead of Oracle's InnoDB, and has a different thread pool implementation. For this reason, a default installation of MariaDB 10.1 will use more memory than MySQL 5.6. MariaDB 10.1 and MySQL 5.6 also have different GTID implementations.
MariaDB's extra memory usage can be handled with the following rules of thumb:
- If you are not using MyISAM and don't plan to use Aria: set key_buffer_size to something very low (16K), as it is not used, and set aria_pagecache_buffer_size to what you think you need for handling internal temporary tables that don't fit in memory. Normally this is what you previously had set for key_buffer_size (at least 1M).
- If you are using MyISAM and not planning to use Aria: set aria_pagecache_buffer_size to what you think you need for handling internal temporary tables that don't fit in memory.
- If you are planning to use Aria, you should set aria_pagecache_buffer_size to something that fits a big part of your normal data plus overflow temporary tables.
And here are the default values in MySQL 5.6: https://dev.mysql.com/doc/refman/5.6/en/server-default-changes.html
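As a hedged illustration of the first rule of thumb above (this fragment is not from the thread, and the exact sizes are assumptions to be tuned to your workload), a my.cnf for a server that uses neither MyISAM nor Aria tables directly might contain:

```ini
[mysqld]
# MyISAM key cache is unused on an InnoDB/XtraDB-only server
key_buffer_size = 16K
# Aria still backs internal on-disk temporary tables even if you create
# no Aria tables yourself; give it roughly what key_buffer_size had before
aria_pagecache_buffer_size = 128M
```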
Yuck. In this day and age, if I'm paying someone to provide email service for me, I don't understand why this isn't automated. Maybe because many of my provider's clients have multiple domains? Still, it should be automated for each domain. Anyway, if you're using Bluehost as your webserver and email provider (shared hosting, i.e. the cheapest plan), then you need to set up your email so that Yahoo, Hotmail, Gmail, and others don't automatically flag emails from your domain (i.e. firstname.lastname@example.org) as SPAM. To do this, you need to set up DKIM (DomainKeys Identified Mail). On the plus side, the setup is actually pretty simple:
- Go to dkimcore.org and create a public/private key pair. Just enter your domain and press the "Generate" button. In a few moments, you'll get a key pair. Keep this page open or saved, so you can grab the public key later.
- Go to your domain management web GUI; for Bluehost it will be something like https://my.bluehost.com/cgi/dm/zoneedit?domain=yourdomain.com
- Look for a DNS TXT record (a key-value pair) with "_domainkey" as the key. In my Bluehost setup, it already had a "_domainkey" record with "o=~" as the value, so I updated it with the public key generated by dkimcore.org.
- dkimcore.org will spit out 3 formats for the public key. I found the easiest to use was the Tinydns format, which had everything on one line. Just copy everything from "v=DKIM1;" to the first colon. You'll notice that everything from that colon to the end of the line is not included in the other formats.
- Now paste this value into the GUI for changing the TXT record for "_domainkey"
At this point you just have to wait "4 hours" (in my case only a few minutes) for the DNS records to update. dkimcore.org mentions something about attaching a token to each outgoing email, but the Bluehost support staff assured me that I didn't have to do anything else.
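For context on what that pasted value actually is: a DKIM TXT record is just a semicolon-separated list of tag=value pairs (v for version, k for key type, p for the public key). This small parser is a hypothetical illustration, not part of the original post, and the record string is a placeholder stub rather than a real key:

```python
def parse_dkim_record(txt: str) -> dict:
    """Split a DKIM TXT record like 'v=DKIM1; k=rsa; p=...' into tag/value pairs."""
    tags = {}
    for part in txt.split(";"):
        part = part.strip()
        if not part:
            continue  # tolerate a trailing semicolon
        tag, _, value = part.partition("=")
        tags[tag.strip()] = value.strip()
    return tags

# Example with a truncated placeholder public key (p=) value:
record = "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKB..."
print(parse_dkim_record(record)["v"])  # DKIM1
```

Running a record through a parser like this is a quick sanity check before pasting it into a DNS GUI that may silently mangle quoting.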
Something that is confusing to me and friends I've talked to is that I didn't have to do anything with the private key generated by dkimcore.org. Does Bluehost get it from them behind the scenes? That would be kinda sketchy. Is it not required for the system to work? Also sketchy. Anyway, before I did this, when I checked the email headers to a yahoo account I got:
Authentication-Results: mta1247.mail.bf1.yahoo.com from=russandbecky.org; domainkeys=neutral (no sig); from=russandbecky.org; dkim=temperror (key retrieval failed)
While after the changes I get:
Authentication-Results: mta1444.mail.bf1.yahoo.com from=russandbecky.org; domainkeys=neutral (no sig); from=russandbecky.org; dkim=pass (ok)
For comparison, when sending from a gmail account to yahoo, I get:
Authentication-Results: mta1340.mail.ne1.yahoo.com from=gmail.com; domainkeys=neutral (no sig); from=gmail.com; dkim=pass (ok)
Some related links:
- Rackspace's notes on setting up DKIM
- Fuller set of tools (including checks on DKIM records) from dkimcore.org. The "Check a published DKIM Core Key" doesn't seem to come back with anything if it worked, and only provides info when it fails. The "Check a DKIM Core Key Record" can be used to verify what you cut and paste into the TXT record's value.
- Checking DKIM setup of your domain with gmail or yahoo
I just got an email from Scott Cordon. Seems like something to try…
Very considerate of you to post your experiences with DKIM on your blog! Appreciated; your tips page is also very good. I have used most of them over the years. I am also on Bluehost, and have recently moved "up" to a VPS (lowest level) because I can handle Linux admin — have been doing it for a number of years. I noted that you have derived a 1024-bit key for your DKIM. That was well-supported at one time. I think Gmail still insists on at least that level.
However, most of the world seems to be moving on to 2048-bit keys … and alas, they don't fit (easily) within the protocol designated in the DNS TXT record which identifies you. I'm not even sure the libraries and MTAs can put together the UDP packet's 255-byte-limited substrings for a long DKIM key. I suspect you might get it into one UDP packet, but not necessarily into one string. Two separate parts to the TXT record are needed; I'm not sure about two UDP packets. But some people seem to be doing it… I just wondered whether you ran into anyone on Bluehost who is doing it? By using 2048-bit keys, you can really lock in the power of cryptography to use a key with a "chain" to validate your identity — and have a totally verifiable identity chain. Any ideas or ramblings welcome. (Yep, if you want it done, DIY.)
You need a 3rd table that joins the two tables together. It will store the IDs for the reptile and the Classification records. So, like a line items table.
But what kind of button setup to use, or script to use? OK, is there anyone else who can input on this?
You may not need a script at all. Here's a demo file: http://www.4shared.com/file/PLhjErzu/Contracts_to_Companies.html In it, you assign "companies" to "contracts" but the principle is the same. Note that no scripts are required; you simply select from the pop-up menu.
Thanks for the file, but it's not quite what I am looking for.
Please explain why. The demo file would appear to do what you asked for in your original post and avoids the need for buttons and scripting--making for a simpler design.
What I am looking for is to have a new window open up so the user can select a classification, not have a drop down list or pop up menu.
I'm familiar with the basic taxonomic classification levels: Kingdom, Phyla, Class, Order, Genus, Species. I assume you want to specify each for a given animal record. When you click that button to open the window, what do you want to see in the window? Surely not all possible kingdoms, phyla, classes, orders, genera and species? How do you picture this working? (Details are important here.)
I would like to show family, species, subspecies, common name. I will have 3 fields on the animal's record that say the above, and a button that will open the new window where the user is shown a list of all the classifications. Next to each record will be a button that says "assign", and when they press that it will close and auto-enter their choice on the animal's record.
OK. Keeping to just the lower classification groups simplifies this. Though to this non-expert, it seems odd that you will specify a family without also having to select a Genus. Just for the record, what you describe can be done with conditional value lists and drop down lists or pop-up menus on each of the three fields.
That can be done with little or no scripting, so it's simpler to set up. However, you have specifically asked for a pop-up window. That can also be done. First, to organize the data: you'll need either three tables or one table with an extra field. The results are pretty much the same for the user, so it may depend on the rest of your database design needs, or simply your preference as a developer, as to which way to go. With a single "Taxonomy" table, define three fields: Level, TaxName, MemberOf; all are text fields. Create a list view layout based on this table. All you need on this layout is the TaxName field and your button (or you can make the TaxName field your button if you want). You'll use the New Window command in a script to pop up this layout in a small window with the appropriate records brought up as its found set. Your button's script on this layout will capture the taxonomy name in a variable, assign it to the appropriate field in the underlying window, and then close the pop-up window. The script (some details will need to be worked out by you):
Set Variable [$Level ; Value: GetValue ( Get ( ScriptParameter ) ; 1 ) ]
Set Variable [$$Field ; Value: GetValue ( Get ( ScriptParameter ) ; 2 ) ]
Set Variable [$Member ; Value: GetValue ( Get ( ScriptParameter ) ; 3 ) ]
New Window [//specify name and appropriate dimensions of your new pop up window here]
#The following step is needed for Windows systems where the window is maximized--not needed for Macs or when the window is not maximized.
#It assumes that your underlying window has the default window name--which will be the same as the database file name.
Move/Resize Window [Name: Get ( FileName ) ; Current File ; Height: Get ( ScreenHeight ) ; Width: Get ( ScreenWidth ) ; Top: 0 ; Left: 0]
Go To Layout ["TaxonomyList" (Taxonomy)] //Specify the layout you've created for this purpose
Enter Find Mode []
Set Field [Taxonomy::Level ; $Level ]
Set Field [Taxonomy::MemberOf ; $Member ]
Perform Find [] //Performs the find set up in the two steps above
Sort Records [No dialog ; Restore] //Sort by taxonomy name in ascending order
#Make the pop-up "pseudo-modal" so that the user can't click on the underlying large window until they've interacted with the pop-up window
Show/Hide Status Area [Lock ; Hide ]
Allow User Abort [Off]
Pause/Resume Script [Indefinitely]
This script is called by clicking a button next to one of the three fields in the parent window's layout. Use the same script, but pass a different set of parameters as a list, so that the list of potential values can be restricted to just those that are relevant to the taxonomic selections already made. The button next to Family would pass this expression as the parameter:
List ( "Family" ; GetFieldName ( ParentTable::Family ) )
Species would be:
List ( "Species" ; GetFieldName ( ParentTable::Species ) ; ParentTable::Family )
Subspecies:
List ( "SubSpecies" ; GetFieldName ( ParentTable::SubSpecies ) ; ParentTable::Species )
The button on your popup will trigger the following script:
Set Field By Name [$$Field ; Taxonomy::TaxName ]
Close Window [Current Window]
Halt Script //Terminates the endlessly looping script that opened this window--you can also specify the Halt option on the button instead of this step
Note: this approach does not use a join table as originally suggested at the beginning of this thread. (If you chose to use three different tables, you'd need three different layouts--one for each field in the parent record.)
Thanks Phil, I'll give it a shot when I get back home.
Alright Phil, since my flight has been delayed and a new meeting has come up, I will have to try this later and let you know how it goes.
list not populating in c#
I have a view in MVC, and on regeneration of the form I am populating a list using the forms collection. But the list is not populating correctly and I am sure I am missing something. Kindly check my code:

int noOfRows = Request.Form["rows"].ConvertToInt();
int noOfColmn = Request.Form["colmns"].ConvertToInt();
List<mymodel> list1 = new List<mymodel>();
for (int roww = 1; roww < noOfRows; roww++)
{
    list1 = new List<mymodel>
    {
        new mymodel
        {
            name = Request.Form["name-" + roww].ConvertToInt(),
            rollno = Request.Form["rollno-" + roww].ConvertToInt(),
            subjs = new List<mymodel>()
        }
    };
    for (var colmn = 1; colmn < noOfColmn; colmn++)
    {
        var subjs = new List<mymodel>
        {
            new mymodel { subjs = Request.Form["subj-" + roww + "-" + colmn].ConvertToInt() }
        };
    }
}
ViewBag._list1 = list1;

Well, if at every loop you reinitialize list1 instead of adding the new model to the preexisting list, you end your loop with only one model in the list (the last added). You should initialize the list1 variable only once, outside the for loop, and add elements to that same list inside the loop. Your current code reinitializes the list1 variable at every iteration. The internal loop does the same thing with the property subjs, which appears to be another List<mymodel>. I propose this code. Of course, I am not able to test it, so let me know if this pseudocode fits your requirements:

int noOfRows = Request.Form["rows"].ConvertToInt();
int noOfColmn = Request.Form["colmns"].ConvertToInt();
// Create list1 just one time here.
List<mymodel> list1 = new List<mymodel>();
for (int roww = 1; roww < noOfRows; roww++)
{
    // create an instance of mymodel with an empty internal list
    mymodel m = new mymodel
    {
        name = Request.Form["name-" + roww].ConvertToInt(),
        rollno = Request.Form["rollno-" + roww].ConvertToInt(),
        subjs = new List<mymodel>()
    };
    // add the model m to list1
    list1.Add(m);
    // loop to create the internal models
    for (var colmn = 1; colmn < noOfColmn; colmn++)
    {
        mymodel m2 = new mymodel
        {
            subjs = Request.Form["subj-" + roww + "-" + colmn].ConvertToInt()
        };
        // add each subsequent model to the sublist inside the first model
        m.subjs.Add(m2);
    }
}
ViewBag._list1 = list1;

A big thanks to you, and yes, you are right: I was initializing the list every time.
package mmt;

import java.io.Serializable;
import java.util.Map;
import java.util.TreeMap;
import java.util.List;
import java.util.ArrayList;
import java.util.Collections;
import java.time.LocalTime;
import java.time.Duration;

/** A Service is a set of stations ordered by the time of departure
 *  @see Station */
public class Service implements Serializable, Comparable {

    /** Serial number for serialization. */
    private static final long serialVersionUID = 20178079L;

    /** The numeric id of the service */
    private int _serviceId;

    /** The cost of traveling from the first to the last station of the service */
    private double _totalCost;

    /** The schedule is an association between a time and a station */
    private TreeMap<LocalTime, Station> _schedule;

    /**
     * Creates a new empty service
     *
     * @param id the numeric id of the service
     * @param cost the total cost of the service
     */
    public Service(int id, double cost) {
        _serviceId = id;
        _schedule = new TreeMap<LocalTime, Station>();
        _totalCost = cost;
    }

    /**
     * Adds a station to the schedule, and adds this service to the station's
     * service list
     *
     * @param time the time of arrival at the station
     * @param station the station arrived at
     */
    public void addStation(LocalTime time, Station station) {
        _schedule.put(time, station);
        station.addService(this, time);
    }

    /**
     * Gets the id of the service
     *
     * @return the numeric service id
     */
    public int getId() {
        return _serviceId;
    }

    /**
     * Returns an unmodifiable list of all the stations, ordered by time
     *
     * @return the stations of the schedule of the service
     */
    public List<Station> getStationList() {
        return Collections.unmodifiableList(new ArrayList<Station>(_schedule.values()));
    }

    /**
     * Gets the next station
     *
     * @param station a station
     * @return the station right after the one given as argument, or null if:
     *         1) there is no such station on this service;
     *         2) the given station was the last station
     */
    public Station getNextStation(Station station) {
        List<Station> stationsPassing = getStationList();
        int index = stationsPassing.lastIndexOf(station);
        if (index == -1 || index == stationsPassing.size() - 1) {
            return null;
        } else {
            return stationsPassing.get(index + 1);
        }
    }

    /**
     * Returns whether the service is a valid route between 2 stations
     *
     * @param station1 departure station
     * @param station2 arrival station
     * @return true if the service stops at station1 and station2 and
     *         passes station1 before station2
     */
    public boolean isConnectionBetweenStations(Station station1, Station station2) {
        if (!(_schedule.containsValue(station1) && _schedule.containsValue(station2))) {
            return false;
        }
        return station2.getServiceTime(this).compareTo(station1.getServiceTime(this)) > 0;
    }

    /**
     * Returns the cost of traveling between 2 stations on this service
     *
     * @param station1 departure station
     * @param station2 arrival station
     * @return the cost of traveling between station1 and station2
     */
    public double getCostBetweenStations(Station station1, Station station2) {
        if (!(_schedule.containsValue(station1) && _schedule.containsValue(station2))) {
            return 0;
        }
        LocalTime firstStation = _schedule.firstKey();
        LocalTime lastStation = _schedule.lastKey();
        LocalTime tStation1 = station1.getServiceTime(this);
        LocalTime tStation2 = station2.getServiceTime(this);
        // cost is proportional to the fraction of the full trip's duration
        return _totalCost * ((double) Duration.between(tStation1, tStation2).toMinutes()
                / Duration.between(firstStation, lastStation).toMinutes());
    }

    /**
     * Compares 2 services by id
     *
     * @return the numeric difference between ids
     */
    @Override
    public int compareTo(Object o) throws ClassCastException, NullPointerException {
        Service s = (Service) o;
        return _serviceId - s._serviceId;
    }

    /**
     * Gets a textual representation of the service
     */
    @Override
    public String toString() {
        String res = String.format("Serviço #%d @ %.2f\n", _serviceId, _totalCost);
        for (LocalTime t : _schedule.navigableKeySet())
            res += String.format("%tR %s\n", t, _schedule.get(t));
        return res;
    }

    /**
     * Gets a textual representation of part of the service
     *
     * @param station1 the first station to present
     * @param station2 the last station to present
     */
    public String toString(Station station1, Station station2) {
        double cost = getCostBetweenStations(station1, station2);
        LocalTime tStation1 = station1.getServiceTime(this);
        LocalTime tStation2 = station2.getServiceTime(this);
        String res = String.format("Serviço #%d @ %.2f\n", _serviceId, cost);
        for (LocalTime t : _schedule.navigableKeySet().subSet(tStation1, true, tStation2, true))
            res += String.format("%tR %s\n", t, _schedule.get(t));
        return res;
    }
}
mysql - calculating columns (2 columns with numbers) based on another group of columns (text)
Hi, I would like to find a query for the below. I am trying to calculate data between two columns, but based on another column that needs to be a selected group of the same values.
Unfiltered
Start Time________Disconnect Time______Signalling IP
12:59:00.3________13:26:03.3___________<IP_ADDRESS>
10:59:00.3________11:03:03.3___________<IP_ADDRESS>
19:59:00.3________20:02:03.3___________<IP_ADDRESS>
Filtered
Start Time________Disconnect Time______Signalling IP
12:59:00.3________13:26:03.3___________<IP_ADDRESS>
19:59:00.3________20:02:03.3___________<IP_ADDRESS>
If you see the table above, I want the selected IP only, which is <IP_ADDRESS>, and then from there calculate the total duration of time between the Start Time and Disconnect Time for that egress IP. So column 3 has multiple values; however, I need to select the same value, then from there calculate the sum of the differences between columns 1 and 2 grouped by column 3. Please let me know if you have anything in mind, as I have tried multiple queries but can't get the correct one.
It would be a lot easier if you could give a minimal example - data structure, some rows and what you expect.
Hi, sorry, I have added in the picture; please let me know if that would help.
Can you add sample data and expected output as text to the question?
Hi @P.Salmon, kindly see the modified question; I have added the sample data and what needs to be done.
We can't use images (and some of us won't type them) - text is better.
Please see the updated question, I think that should be much better? @P.Salmon
To calculate the difference between two times, you can use TIME_TO_SEC to convert each time value to seconds and subtract the start time from the end time to get the time period in seconds.
You can turn it back to time format with SEC_TO_TIME. Example:
select column3, SEC_TO_TIME(SUM(TIME_TO_SEC(column2) - TIME_TO_SEC(column1))) from table group by column3
Hi, thank you so much. However, how would I filter column 3 so that it is the same set of values? And I would like to calculate the difference in times; I will try that now and let you know.
Use where column3 = "user ip"
I am getting a value of 1030157; what output would that be?
Hi, yes, I have used that command; however, I took the total time and used https://www.convertworld.com/en/time/seconds.html to convert seconds to hours, or minutes, etc. Or shouldn't the output that I am getting be like that?
There is something wrong in your SQL query; please post that part of the query.
Hi, sorry, I see that for one of them I put TIME_TO_SEC and for the other I put SEC_TO_TIME. I now put TIME_TO_SEC and I am getting 234 seconds, which is around 3 minutes.
Can I by any chance have the decimal time there as well, like HH:MM:SS.s?
Pass this output to SEC_TO_TIME; it will convert it to time format.
I just did that; however, I am now getting a value of 837, so I am not sure how it is interpreting the data, for which the format is (HH:MM:SS).
Use it like this:
select sec_to_time( TIME_TO_SEC("12:00:00") - TIME_TO_SEC("10:00:00") )
Hi Emad, thank you so much for your help; this has worked. However, I have not used the previous example you gave me but the initial one. Much appreciated.
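The accepted approach above (sum the per-row differences in seconds for one IP, then convert the total back to a time string) can also be sketched outside SQL. This Python equivalent is illustrative only and not from the thread; the column layout mirrors the question's table, but the sample times and IP values are made up, and the fractional-second part of the original data is ignored for simplicity:

```python
def time_to_sec(t: str) -> int:
    """Equivalent of MySQL TIME_TO_SEC for 'HH:MM:SS' strings."""
    h, m, s = (int(x) for x in t.split(":"))
    return h * 3600 + m * 60 + s

def sec_to_time(total: int) -> str:
    """Equivalent of MySQL SEC_TO_TIME."""
    h, rem = divmod(total, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"

# rows: (start_time, disconnect_time, signalling_ip)
rows = [
    ("12:59:00", "13:26:03", "10.0.0.1"),
    ("10:59:00", "11:03:03", "10.0.0.2"),
    ("19:59:00", "20:02:03", "10.0.0.1"),
]

# WHERE signalling_ip = '10.0.0.1' ... SUM(TIME_TO_SEC(end) - TIME_TO_SEC(start))
total = sum(time_to_sec(end) - time_to_sec(start)
            for start, end, ip in rows if ip == "10.0.0.1")
print(sec_to_time(total))  # 00:30:06
```

The two matching rows contribute 27m03s and 3m03s, so the filtered total is 30m06s, matching what the GROUP BY query would return for that IP.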
MIT Video Pitch Contest
View All News & Updates
Share your story with the global MIT community and enter to win $1500! Deadline extended!
What are you doing to change the world? How can resources like the IDEAS Competition and MIT Global Challenge help? Share your story in the Global Challenge Video Pitch Competition and be eligible to win $1500. The contest is open to anyone, but teams must involve MIT students. The winning entry will receive $1500 and will be featured in the October 2010 launch of the MIT Global Challenge.
We want to tap student passion to make the world a better place by asking you to make a case for why the MIT community worldwide should care about the Global Challenge. To be successful we'll need their support to fund awards, underwrite challenges, and support student projects as mentors, volunteers, and local promoters of the Global Challenge. Download contest details [word.doc].
What we're after
We're offering $1500 to the team led by one or more MIT students that creates the best viral video against the following five criteria:
- An attention grabber. Make it arresting and too good to turn away from. Something like: http://www.youtube.com/watch?v=O-XZk0yxCzc
- Informative. Leverage a powerful hook that invites adventurous thinking while sharing important information. For example: http://www.youtube.com/watch?v=h-8PBx7isoM
- Très MIT. Appeal to that quality of the MIT experience that combines rigorous inquiry and experimentation to inspire curiosity. Perhaps not unlike: http://www.break.com/index/stop-motion-t-shirt-war.html
- Flat-out motivates. Make a clear ask for alumni involvement that is propelled by the quality of the narrative and visual experience. A lot like: http://en.tackfilm.se/?id=1269490106486RA20
- All you. Be completely genuine, using original sound and footage to achieve a startling result. Well, we don't really have a good example of that one. Yet!
What you need to know
- Videos must be 2 minutes.
No longer, not much shorter.
- Entries must be submitted online no later than [NEW!] midnight Friday, October 1, 2010 through our Vimeo channel: http://vimeo.com/groups/mitglobalchallenge (if you have problems, send us your youtube.com etc. url)
- Videos will be judged by Global Challenge staff and volunteers, and a winner announced at our Generator Dinner on Wednesday October 17, 2010. Winners will also be notified by email.
- The winning video will be used in a Global Challenge marketing campaign promoted by the MIT Public Service Center in the fall. All qualified videos will be archived and available at: http://vimeo.com/groups/mitglobalchallenge.
- Videos containing inappropriate content will not be judged; they'll be removed from the Vimeo channel.
Questions? Email firstname.lastname@example.org. Thank you!
OPCFW_CODE
Converts all other letters to lowercase letters. Returns the relative position of an item in an array that matches a specified value in a specified order. Returns the correlation coefficient of two cell ranges. As with the worksheet function RAND, in VBA too we can generate random numbers that are greater than 0 but less than 1. Finds the location of one string within another (similar to InStr). Rounds a number to the specified number of decimals, formats the number in decimal format using a period and commas, and returns the result as text. Step 2: Now assign the value to the variable "k" through the "RND" function. The RAND() formula in Excel generates a random number between 0 and 1. Returns True if a cell does not contain text. The standard deviation is a measure of how widely values are dispersed from the average value (the mean). Returns the number of periods for an investment based on periodic, constant payments and a constant interest rate. The Microsoft Excel RND function returns a random number that is greater than or equal to 0 and less than 1. Returns the one-tailed probability of the chi-squared distribution. Returns the slope of the linear regression line through data points in known_y's and known_x's. If upper_limit is not supplied, returns the probability that values in x_range are equal to lower_limit. Returns the payment on the principal for a given period for an investment based on periodic, constant payments and a constant interest rate. Returns the kurtosis of a data set. For example, you can use GeoMean to calculate average growth rate given compound interest with variable rates. Calculates the point at which a line will intersect the y-axis by using existing x-values and y-values. Returns the most frequently occurring, or repetitive, value in an array or range of data. You can code that in VBA using native VBA functions or simply add WorksheetFunction in several places.
Name Required/Optional Data type … Returns covariance, the average of the products of deviations for each data point pair. In Visual Basic, the Excel worksheet functions are available through the WorksheetFunction object. This will keep generating whole numbers from 1 to 100. The Microsoft Excel RND function returns a random number that is greater than or equal to 0 and less than 1. If we pass a number > 0, it keeps giving different random numbers, i.e., the next random number in the sequence. Returns the average (arithmetic mean) of the arguments. Whenever RND returns a decimal number, VBA converts the decimal number to the nearest integer, i.e., 1. This code will generate random whole numbers every time we execute it. Returns the Fisher transformation at x. =100*RAND(): here RAND generates a random number between 0 and 1, and multiplying the output by 100 gives numbers between 0 and 100. Returns the kth largest value in a data set. Returns the rank of a number in a list of numbers. Parameters. If the number provided is greater than 0 or the number parameter is omitted, the RND function will return the next random number in the sequence, using the previously generated random number as the seed. This gives a 50% chance to return 2, and the other 50% is evenly distributed between 3 and 6. Returns a value that you can use to construct a confidence interval for a population mean. PREREQUISITES Worksheet Name: Have a worksheet named RAND. So, to make the formula work properly, declare the variable as "Double". Returns the quartile of a data set. Returns a number rounded up to the nearest even integer. expression.RandBetween(Arg1, Arg2), where expression is a variable that represents a WorksheetFunction object. Use this function to return values with a particular relative standing in a data set. In a worksheet formula this would be similar. The arccosine is the angle whose cosine is a number.
This works exactly the same as the Excel function RAND. As with the worksheet function RAND, in VBA too we can generate random numbers that are greater than 0 but less than 1. The Excel RAND function is used to return random numbers between 0 and 1, as shown in the table below. Return value. Returns the inverse of the F probability distribution. Returns the rank of a value in a data set as a percentage of the data set. You can also use a helper column with cumulative values like this: The formula in column D is just a simple SUM function: This should also give you the result you are looking for and will be easier to change in the future. Use this distribution in reliability analysis, such as calculating a device's mean time to failure. Returns the logarithm of a number to the specified base. Since we have assigned the variable as Integer, it can only show whole numbers between -32768 and 32767. Returns the inverse of the normal cumulative distribution for the specified mean and standard deviation. 6 = Yellow. I know that if it is equal probability then I can write a single RND expression; however, I want to generate white half of the time and share the other half with the remaining 4 colors. How can I do that? Another useful top-level object to know about is the WorksheetFunction object.
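The weighted-color question at the end can be answered with any random source by drawing the 50/50 split first and then splitting the remainder evenly. Here is a Python sketch of the idea (a VBA answer would use Rnd the same way); the code mapping 2 = white and 3-6 = the other four colors follows the numbering implied in the text.

```python
import random

def pick_code(rng):
    """Return 2 (white) half the time; codes 3-6 share the other half equally."""
    if rng.random() < 0.5:
        return 2                   # white: 50% of draws
    return rng.randint(3, 6)       # each of 3, 4, 5, 6: 12.5% of draws

# Rough frequency check with a seeded generator:
rng = random.Random(42)
draws = [pick_code(rng) for _ in range(10000)]
print(draws.count(2) / len(draws))  # close to 0.5
```

The same two-stage trick generalizes to any uneven weighting: draw once to pick the group, then draw uniformly within the group.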
OPCFW_CODE
2017, what a year! A new year has arrived, so it's time to recap the last year and see how many things I accomplished. Let's recap!

Speaking at Tech Conferences
The first was in February at T3chfest, an awesome tech conference at University Carlos III of Madrid (where I studied!). I gave a talk/workshop about how to make a webapp with Firebase and React. Last year was the first one in which I gave international talks and had to speak in English to the audience. At first I had a lot of doubts and fears, but it was one of the best things I accomplished last year and I would like to repeat it this year.
- T3chFest 2017 (Leganés, Madrid, Spain 🇪🇸)
- Campus Givers at Campus Madrid (Madrid, Spain 🇪🇸)
- GDG Åalborg (Åalborg, Denmark 🇩🇰)
- Pixelscamp 2017 (Lisbon, Portugal 🇵🇹)
- GDG Cloud Madrid (Madrid, Spain 🇪🇸)
- APIDaysMadrid (Madrid, Spain 🇪🇸)
- GDG CloudFest Madrid (Madrid, Spain 🇪🇸)
- GDG DevFest Galicia (Pontevedra, Spain 🇪🇸)

Workshops and Tech content
I gave a total of 2 on-site workshops in Madrid. It was one of my personal OKRs last year. The first was in February and the last one in May. I would have liked to offer more, but it was too difficult to find the time. Anyway, it was an amazing experience and great learning, because I organized all the stuff (finding the place, making the curriculum, hiring the catering for the breaks, managing the sales and promotions, …). From the online side, I recorded two paid online courses about React Native and Redux, and a lot of free material on Youtube. It was hard to find the time to prepare and record it all, but it was worth it.

Last year I travelled a lot, giving talks or attending tech conferences.
In almost all of them I travelled with my partner, Paola, and our little daughter (<2 years). We bring her to all the tech events/summits we go to :)
- Google I/O 2017 (Mountain View, CA 🇺🇸)
- GDG Global Summit 2017 (San José, CA 🇺🇸)
- WTM Leads Summit 2017 (Prague, Czech Republic 🇨🇿)
- GDD Europe (Krakow, Poland 🇵🇱)
- GDG Leads Spain Summit (Salamanca, Spain 🇪🇸)
- GDD India (Bangalore, India 🇮🇳)

In October I started to work as a Community Specialist with the Google Developer Relations Team for Spain, Portugal, the Netherlands, Denmark and the Nordic countries. My mission is to help drive developer success and awareness of open standards & Google technologies through direct support to developer communities. And, very importantly, to support diversity & inclusion initiatives to overcome the gender gap in the IT world. It is an amazing opportunity to learn, share, meet awesome people and help tech communities throughout my region. At first it was an important change in my life, because for the last 5 years I had worked as a freelancer, choosing my tasks, schedule, time, etc. Now I work for a big company, but after a few weeks of adaptation I find myself very comfortable, because it is not a repetitive job; every day there are different things to do and I love that! (Maybe because I've been out of my comfort zone for the last few years :P)

Go to 2018!
In the new year I will continue to write in Spanish on my blog, but I will try to write more articles like this one in Medium (in English, sorry for my grammar/style errors) to practice the language more and to share my thoughts, stuff and learnings with non-Spanish speakers. Let's start the new year!
OPCFW_CODE
FYI: The Power BI August Newsletter just announced Python compatibility. We're looking forward to digging into that in future posts. You can find the newsletter here. First, let's talk about what time series data is. Simply put, anything that can be measured at individual points in time can be a time series. For instance, many organizations record their revenue on a daily basis. If we plot this revenue as a line across time, we have time series data. Often, time series data is measured at regular intervals. Weekly measurements are one example of this. However, there are many cases where irregular time intervals are used. For instance, calendar months are not all equal in size. Therefore, a time series of this data would have irregular time intervals. This isn't necessarily a bad thing, but it should be considered when doing important analyses. You can read more about time series data here. Now, let's talk about Time Series Decomposition. Time Series Decomposition is the process of taking time series data and separating it into multiple underlying components. In our case, we'll be breaking our time series into Trend, Seasonal and Random components. The Trend component is useful for telling us whether our measurements are going up or down over time. The Seasonal component is useful for telling us how heavily our measurements are affected by regular intervals of time. For instance, retail data often has heavy yearly seasonality because people buy particular items at particular times of year, especially during the holidays. Finally, the Random component is what's left over when we remove the Trend and Seasonal components. You can read more about this technique here and here. Let's hop into Power BI and make a quick time series chart. We'll be using the same Customer Profitability Sample PBIX from the previous posts. You can download it here. If you haven't read Getting Started with R Visuals, it's recommended that you do so now. 
Let's start by making a simple line chart of Total Revenue by Month.

[Screenshots: Total Revenue by Year; Import From Marketplace; Power BI Visuals; Time Series Decomposition Chart; Time Series Decomposition Chart Description; Add Time Series Decomposition Chart; Change to Time Series Decomposition Chart; Enable Script Visuals]

If you get this error, you need to install the zoo and proto R packages. The previous post walks through this process. You may need to save and reopen the PBIX after installing the packages to see the chart.

[Screenshot: Time Series Decomposition of Total Revenue by Month]

We could spend all day looking at all the data available here. Instead, let's end by looking at one final aspect of this chart, Algorithm Parameters. Hopefully, this post opened your eyes just a little to the possibility of performing time series analysis within Power BI. The custom visuals in the marketplace provide a strong "middle ground" offering that makes advanced analyses possible outside of hardcore coding tools like R and Python. Stay tuned for the next post where we'll be talking about Forecasting. Thanks for reading. We hope you found this informative.

Senior Analytics Associate - Data Science
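For readers curious about what the decomposition does under the hood, here is a minimal pure-Python sketch of classical additive decomposition: trend via a centered moving average, seasonal via per-cycle averages of the detrended series, and random as the remainder. It assumes an additive model and a known period, and is only an illustration of the idea, not the R code the custom visual actually runs.

```python
def decompose(series, period):
    """Classical additive decomposition into trend, seasonal and random parts."""
    n = len(series)
    half = period // 2
    # Trend: centered moving average (None where the window does not fit).
    trend = [None] * n
    for i in range(half, n - half):
        if period % 2:  # odd period: plain window of `period` points
            window = series[i - half:i + half + 1]
        else:           # even period: half-weight the two end points
            window = ([0.5 * series[i - half]]
                      + series[i - half + 1:i + half]
                      + [0.5 * series[i + half]])
        trend[i] = sum(window) / period
    # Seasonal: average detrended value at each position in the cycle,
    # centered so the seasonal component sums to zero over one period.
    buckets = [[] for _ in range(period)]
    for i in range(n):
        if trend[i] is not None:
            buckets[i % period].append(series[i] - trend[i])
    means = [sum(b) / len(b) if b else 0.0 for b in buckets]
    overall = sum(means) / period
    seasonal = [means[i % period] - overall for i in range(n)]
    # Random: whatever trend and seasonality do not explain.
    random_part = [series[i] - trend[i] - seasonal[i] if trend[i] is not None else None
                   for i in range(n)]
    return trend, seasonal, random_part
```

With monthly revenue the period would be 12: the trend list answers "up or down over time", the seasonal list shows the repeating monthly pattern, and the random part is the leftover noise.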
OPCFW_CODE
Thanks for the idea. I ran a tcpdump on the server and can confirm traffic is making it to the server:

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens192, link-type EN10MB (Ethernet), capture size 262144 bytes
12:02:03.231861 IP raspberrypi.41842 > localhost.localdomain.mdns: Flags [S], seq 469454270, win 29200, options [mss 1460,sackOK,TS val

Apparently, I can't pass the remote host's NRPE check. Although both machines can talk to each other using SSH, when testing it says "no route to host on port 5666".

If you connect to your instance using SSH and get any of the following errors, Host key not found in [directory], Permission denied (publickey), Authentication failed, permission denied, or Connection closed by [instance] port 22, verify that you are connecting with the appropriate user name for your AMI and that you have specified the proper

The problem is that I cannot access the new instance via SSH from my MAAS server:

$: ping 10.20.81.215
PING 10.20.81.215 (10.20.81.215) 56(84) bytes of data.
From 10.20.81.1 icmp_seq=1 Destination Host Unreachable

$: ssh -i ~/.ssh/id_rsa.pub 10.20.81.215
ssh: connect to host 10.20.81.215 port 22: No route to host.

while on a node of Openstack

Feb 16, 2006: Hi all, I have a Linux FC4 host system set up with bridged networking (no NAT and no host-to-host). I have a WinXP and a Debian OpenACS VM running with full networking configured and working properly (I can use TCP servers running on these VMs as web servers, ssh servers, etc).

I'm running VirtualBox 4.0.4 r70112 darwin.x8 on a Mac OSX host. The guest OS is Ubuntu 10.04 LTS Server. I'm trying to ssh to the guest OS via the host machine, but I can't get an IP address to ssh to. Initially, ifconfig gave an IP, but

Jun 22, 2008: VMWare and host cannot SSH to each other, but can ping: lefty.crupps: Linux - Software: 2: 04-16-2007 10:05 AM: a/p connected, route correct, ping router: "Destination Host Unreachable". DebianEtch: shinyblue: Linux - Wireless Networking: 1: 08-29-2006 09:34 PM: Network Problem - No route to host: Astral Projection: Linux - Software: 2: 06-17

I want to communicate between B (the Linux part) and C using SSH. openssh-server is installed on both machines. When I ping B to C, or C to B, I get "unreachable" as the message. When I try an SSH connection I get "no route to host". When I cable the network, I can ping in both directions, but ssh results in "no route to host".

Jul 19, 2018: 'No Route to Host' denotes a network problem, usually one which shows up when the server or host is not responding. This may happen because of network issues or because of an inappropriate setup. Are your network settings correct? Before we look at the more specific causes for this problem, make sure that your network settings are correct.

Nov 24, 2019: So I thought the problem came from the ssh client of the host. So I tried to uninstall and reinstall the package, but when I do "sudo apt install openssh-client" apt wants to delete the pve-manager package: "The following packages will be REMOVED: openssh-client openssh-server openssh-sftp-server proxmox-ve task-ssh-server"

Jan 12, 2015: 1. You are able to connect to the server using SSH locally as well as remotely. 2. You are able to telnet to the server on port 80 locally, but when you try to telnet to your server on port 80 remotely you get no route to host. 3. No firewall running on the server. Here are the things that I would like to know: 1.

Nov 20, 2017: I do an ssh to a remote host (A1) from the local host (L1). I then ssh to another remote (A2) from A1. When I do a who -m from A2, I see the "connected from" as "A1". => who -m userid pts/2 2010-03-27 08:47 (A1) I want to identify the local host that initiated the connection.

ssh: connect to host 192.168.11.20 port 22: No route to host. It seems there is some problem with the route setup. Somehow packets cannot be routed between the Pi and the Linux box (as shown by the nmap output).

Apr 21, 2019: Find answers to "SSH using PuTTY: Network error: No route to host" from the expert community at Experts Exchange.

May 31, 2008: ssh: connect to host 64.191.108.xxx port 2226: No route to host. I should note that I'm actively logged into that IP in another window, and that it responds to ping. There most certainly is a route.

Apr 13, 2017: Connect using PuTTY SSH to my Lego Brick on Windows 10. The brick is connected via USB to my laptop. What did you expect to happen? A terminal to appear asking me for login details. What actually happened? "unable to open connection to ev3dev. Host does not exist".

Jun 19, 2018: The output should reveal the list of services, including SSH (default port 22), to indicate that the firewall supports SSH traffic: dhcpv6-client http ssh. If you are using a custom port for SSH, you can check with the --list-ports option. If you created a custom service definition, you should still see SSH normally with --list-services.
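A quick way to distinguish the failure modes quoted above ("No route to host" versus a refused or silently dropped connection) is to probe the port directly. A small Python sketch; the host and port are whatever you are debugging, e.g. port 22 for SSH or 5666 for NRPE:

```python
import errno
import socket

def probe(host, port, timeout=3.0):
    """Try a TCP connection and classify the outcome."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"               # something is listening
    except socket.timeout:
        return "timeout"            # often a firewall silently dropping packets
    except OSError as e:
        if e.errno == errno.ECONNREFUSED:
            return "refused"        # host reachable, nothing listening (or a REJECT rule)
        if e.errno == errno.EHOSTUNREACH:
            return "no route"       # the classic 'No route to host'
        return errno.errorcode.get(e.errno, str(e.errno))
    finally:
        s.close()
```

A "no route" result points at addressing/routing or an ICMP-rejecting firewall; "refused" means routing is fine and the service simply isn't listening on that port.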
OPCFW_CODE
I wonder if anyone can help me. I have a FAS2020 set up, and only need to make the LUNs visible to Windows. I've also got the WWNs written down from the previous setup. I still don't have the SnapManager license, which, from what I've understood, would do all the work for me automatically. So, basically, for now I have to set everything up by hand.

Firstly, the software you are referring to is SnapDrive. SnapDrive is used by the SnapManager tools if the server is hosting an application for which there is a SnapManager tool and you'd like to take application-consistent snapshots. SnapDrive communicates with the filer (for things like mounting disks, creating disks, and snapshotting disks) while SnapManager communicates with the application (to understand the application data layout and to integrate with any hot-backup mechanisms that exist for the server/application). They function in partnership, which is probably the easiest way to think about it. However, SnapDrive can be used, and is recommended, for any server mounting LUNs where possible. That said, you can certainly mount a disk on a server without SnapDrive. SnapDrive just makes things infinitely easier and faster to do, and ensures things like disk alignment are correct, among other things. As such, it is a best practice.

Workflow without SnapDrive:
1. Create a volume.
2. Create a LUN in the volume. (Be sure to select the proper disk personality: Windows for Windows boxes, Windows2008 for Windows 2008 boxes, WindowsGPT for Windows boxes on which you plan to use the GPT partition type when you create the partition.)
3. Create an iGroup (used to mask off access to this LUN to only the included list of initiators). From here I will keep the instructions simple and assume iSCSI and the MS iSCSI Initiator service.
4. Open the MS iSCSI Initiator control panel on the server and note the node name in the General tab.
5. Place the iSCSI initiator address (iqn.....) of said server in the iGroup.
6.
Make a target connection for the filer by entering the DNS name of the controller hosting the LUN in the Discovery tab. (Be sure to select the option for the connection to remap after system boot, which makes the connection persistent.)
7. Click the Targets tab and select the IQN for the controller, which should now be in the selection pane.
8. Click Log On.
9. Open the Disk Management MMC.
10. The disk should now be there in an unknown state (it needs a filesystem); if not, re-scan, though with this workflow it should be there.
11. Right-click the disk, create a partition, then format it.

SnapDrive performs these actions in 5 clicks or less once installed and set up for controller communication. Well, everything after you create the volume, anyhow.
OPCFW_CODE
In this tutorial we will take a look at a problem which falls under the general category of algorithmic problems that do not need a particular design technique to get a good solution. We are given a string of parentheses and we want to find the maximal length of a valid substring. For example, for the string "())(())" the maximal length is 4 and comes from the substring "(())". We will calculate the maximal length of valid parentheses by matching closing parentheses with opening parentheses. We use a stack to hold indexes into the string. The bottom index indicates where the current candidate string of parentheses starts: if the bottom index is k and the last index is m, then the string currently under consideration runs from k+1 to m (the bottom index itself is not in the string). The algorithm is as follows. We iterate through the given string of parentheses:
- If we encounter an opening parenthesis, we push its index on the stack.
- If we encounter a closing parenthesis, we pop the top item and then check whether the stack is empty:
  - If not, we have found a valid substring, and we check whether it is the longest found yet. The length of the current substring is the difference between the current index and the index now on top of the stack (since the valid string lies between those two indexes). If that is larger than the maximum length found previously, we update the maximum.
  - If the stack is empty, we push the current index onto it; the next (possible) valid string begins after it.
- If we encounter a character other than '(' or ')', we return -1.
The final value of maxLength is the solution.
- Iterating over the given string: O(n), since we make one linear pass.
- Pushing and popping are O(1); the rest are O(1) calculations.
- Thus the algorithm is O(n), since O(n) x O(1) = O(n).
- Since for any solution we have to read the whole string, which takes O(n), the algorithm is optimal.
def FindLength(par):
    maxLength = 0
    stack = [-1]  # index just before the start of the current candidate substring
    for i, p in enumerate(par):
        if p == '(':
            stack.append(i)        # push the index of the opening parenthesis
        elif p == ')':
            stack.pop()            # match it against the most recent '('
            if len(stack) > 0:     # stack not empty: a valid substring ends at i
                lastIndex = stack[-1]
                if i - lastIndex > maxLength:
                    maxLength = i - lastIndex
            else:                  # unmatched ')': the next candidate starts after i
                stack = [i]
        else:                      # encountered a char other than parentheses
            return -1
    return maxLength

print(FindLength("()(())"))    # 6
print(FindLength("(((()()))"))  # 8

You can also find the code on my GitHub. Thanks for dropping by.
OPCFW_CODE
Microsoft Windows 8 is one of the latest installments of the popular Windows operating system. Windows 8 itself is the basic, or core, version of the new operating system, and it is intended for home use. To make it clearer, it is the equivalent of Windows 7 Home Premium, which is directed at home use too. Now, let us analyze some of the key differences between the Windows 8 versions available in the market.

Windows 8 Pro
Let us start with Windows 8 Pro; it is aimed at enthusiasts as well as pure business users. It consists of the majority of Windows 8 features, as well as a Remote Desktop server, the capability to join a Windows Server domain, Hyper-V, Virtual Hard Disk booting, Encrypting File System and Group Policy. In addition, it has features such as BitLocker and BitLocker To Go.

Windows RT
As far as Windows RT is concerned, it is for ARM-based devices such as tablet PCs like the Surface RT. Windows RT provides the ability to run Windows Store apps, but it does not support the x86/64 desktop applications commonly seen in previous versions of Windows. Windows RT provides device encryption capabilities that help to prevent unauthorized software from meddling with the boot process. Interestingly, though, some business-focused features like Group Policy and domain support are disabled.

Windows 8 Enterprise
Windows 8 Enterprise has all the features of Windows 8 and Windows 8 Pro put together. This software is available only to Software Assurance customers through the Volume License Service Center (VLSC).

Now, let us discuss the core features included in all Windows 8 editions. All versions of Windows 8 share common built-in apps, including Mail, Messaging, Calendar, Internet Explorer 10 and SkyDrive, as well as the capability to install applications from the Windows Store. You receive security updates through Windows Defender, Windows Update and Windows Firewall.
Other shared features include the enhanced Task Manager, SmartScreen, the ability to switch languages on the fly, enhanced multiple-monitor support, Exchange ActiveSync, et cetera.

Data protection offered by Windows RT, Windows 8 Pro and Windows 8 Enterprise
The Windows 8 versions expressly focus on data protection, as it is obviously one of the most important tasks. Windows RT and Windows 8 Pro make use of BitLocker technology to keep their data safe and secure.

Connecting to your PC when on the go
Windows 8 Pro and Windows 8 Enterprise have Remote Desktop Connection available, allowing you to connect to another computer with a Remote Desktop client over a network or the Internet.

So, before you choose to buy a Windows 8 version, it will serve you well to learn these facts about the versions of Windows 8 and the key differences among them.
OPCFW_CODE
How can current flow between points with the same potential?

Shouldn't the ammeter between A and B read zero, because A and B are maintained at the same potential, and for current to flow a potential difference is required? On the other hand, the current that entered the resistances has to return to the circuit through AB, which means that the ammeter's reading will be a non-zero value.

They're the same potential because the current meter has zero resistance? Sure. But that's like saying "a wire can never carry current, because two adjacent points on a wire with no resistor between them are at the same potential."

Right! I thought about that. I guess we'll have to use the junction rule to calculate the current in that portion.

Voting to re-open. Clearly a conceptual question (how can current flow between points with the same potential?) rather than a calculation or check-my-work question.

@gandalf61 I agree. It should have been closed as a duplicate instead: https://physics.stackexchange.com/questions/45040/how-electric-currents-can-flow-between-2-points-at-the-same-potential?rq=1

"For current to flow, a potential difference is required?" Only when the conductor has non-zero resistance and the only electromotive force is due to different potentials (electrostatic force). Sometimes there can be current in a real conductor even though there is no potential difference; this is because there can be other electromotive forces pushing the current against the resistance, such as an induced electric force. In this case there are only electrostatic forces, so an ideal ammeter has zero potential difference, while a real ammeter will have some non-zero potential difference.

You are using Ohm's law in the form $V_{AB}=I_{AB}R_{AB}$ to say that if $V_{AB}$ (the potential difference between A and B) is $0$ then $I_{AB}$ (the current flowing between A and B) is $0$. This would be true if $R_{AB}$ (the resistance between A and B) were not zero.
But since $R_{AB}$ is $0$ we have $0 = I_{AB} \times 0$, which is true for any value of $I_{AB}$ whatsoever. So you cannot use Ohm's law directly; you have to use it indirectly by finding the current through each of the other resistors in the circuit. Then add the currents through the two resistors that end at A to find $I_{AB}$.

I see now, thanks very much!

As gandalf61 said, you have to go through every part of the wires instead of the whole thing. Just because two points have the same potential does not mean the path of the energy/electron has the same potential everywhere. In general relativity, an electron wouldn't see the same potential at both points, I believe. Correct me if wrong, I'm here to learn as well. Cheers
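The junction-rule calculation suggested above can be made concrete with made-up numbers. The source voltage and resistor values below are hypothetical, not taken from the question's circuit:

```python
# Hypothetical circuit: a 12 V source feeds two parallel resistors (4 ohm and 6 ohm)
# whose far ends meet at point A; an ideal ammeter (R_AB = 0) carries the
# return current from A to B.
V, R1, R2 = 12.0, 4.0, 6.0

I1 = V / R1        # Ohm's law on the first resistor: 3 A
I2 = V / R2        # Ohm's law on the second resistor: 2 A
I_AB = I1 + I2     # junction rule at A: all current entering must leave via the ammeter

print(I_AB)  # → 5.0 A through the ammeter, even though V_AB = 0
```

Ohm's law is applied only where the resistance is non-zero; the ammeter current then follows from current conservation alone.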
STACK_EXCHANGE
The median regression is one such example. Often it may be necessary to build a model for a specific percentile. This is where PROC QUANTREG becomes handy. Look at the picture, a scatter plot of the measures of trout density at 71 test places in a stream, where the WD ratio is the width-to-depth ratio. The hypothesis is that the lower the ratio (i.e., the stream is deep relative to its width), the more trout habitats we are likely to see (due to fewer disturbances). I also plot the 90th percentile; we want to find the relationship between density and WD ratio at which 90% of the trout habitat densities could be estimated. That is, for a given WD ratio, we want to find how much density of trout habitats will be found at the 90th percentile. Note that if we were building a linear regression, for example, it would be the red line. But we are interested in building a quantile regression: a function which best fits the red dots. If you plot the median or the average, where will the red dots be? That will give an indication as to which curve we are modeling and how it differs from the 90% curve. If we want to model this data with a 90% quantile regression, the SAS code is as follows. An attempt is made to take key points from the reference mentioned at the bottom of these notes.
proc quantreg data=trout alpha=0.01 ci=resampling;
   model LnDensity = WDRatio / quantile=0.9;
run;

The QUANTREG Procedure

Data Set                         WORK.TROUT
Dependent Variable               LnDensity (LOG(Density))
Number of Independent Variables  1
Number of Observations           71
Optimization Algorithm           Simplex
Method for Confidence Limits     Resampling

Variable    Q1        Median    Q3        Mean      Std Deviation  MAD
WDRatio     22.0917   29.4083   35.9382   29.1752   9.9859         10.4970
LnDensity   -2.0511   -1.3813   -0.8669   -1.4973   0.7682         0.8214

Quantile and Objective Function
Objective Function         7.2303
Predicted Value at Mean   -0.5709

Parameter Estimates (99% confidence limits)
Parameter  DF  Estimate  Std Error  Lower    Upper    t Value  Pr > |t|
Intercept  1    0.0576   0.2727     -0.6648   0.7801   0.21    0.8333
WDRatio    1   -0.0215   0.0073     -0.0408  -0.0022  -2.96    0.0042

Testing does not come out as standard output; one has to request it in the command list, as above. The PROC uses the simplex method for minimizing the error and a Monte Carlo marginal bootstrap method for confidence intervals and testing.

proc quantreg data=salary ci=none;
   model salaries = years years*years years*years*years / quantile=.25 .5 .75;
run;

The QUANTREG Procedure – Power Point Presentation (Experimental)
PROC QUANTREG – Experimental – SAS document – Material contributing to the above PPT.
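Under the hood, the quantile=0.9 fit minimizes the check (pinball) loss rather than squared error. Here is a minimal Python illustration with a constant fit and a made-up sample; PROC QUANTREG minimizes the same objective over a linear predictor, via the simplex method:

```python
def pinball_loss(c, data, tau):
    """Total check (pinball) loss of the constant fit c at quantile level tau."""
    return sum((tau if y >= c else tau - 1) * (y - c) for y in data)

# Made-up sample: the minimizer of the pinball loss at tau = 0.9 is the
# empirical 90th percentile, just as least squares recovers the mean.
data = list(range(1, 20))   # 1, 2, ..., 19
tau = 0.9
best = min(data, key=lambda c: pinball_loss(c, data, tau))
print(best)  # → 18
```

Changing tau to 0.5 makes the minimizer the median, which is why median regression is just the tau = 0.5 special case of quantile regression.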
OPCFW_CODE
We run BE 9.1 rev 4691 on this Windows 2000/SP4 server. It had a DAT-20/40 drive and had been working fine for a long time, until the backups began to exceed the tape's capacity. I installed a new, hot-plug DAT72 drive, and it was recognized automatically. I got one good backup that night, but the next day it started getting 33152 errors in the app log and sonysdx-VRTS 11 errors in the sys log. I tried to install the latest BE 9.1 device drivers, using the file from 273853, but it reported "The wizard was interrupted before VERITAS Windows ... device drivers could be completely installed ...". The app log showed a corresponding info event, MsiInstaller 11708, "Product: VERITAS Device Driver Install -- Installation operation failed.". I deleted the device in BE/Devices, and then couldn't get it to see it at all. There were no devices shown in Device Manager / Tape Drives. I removed and reinserted the drive; then it reappeared in Dev Mgr / Tape Drives. But it still does not show in BE Devices. Removable Storage / Physical Locations showed two obsolete drives, one of them the DAT 20-40. I deleted both. It does not show an entry for the new DAT-72 drive, even after replugging it. I have installed SP4a for BE 9.1 (280008), but have not yet rebooted the server; I'm waiting for a window to do that. I plan to upgrade all firmware on the server when I'm able to do that. Is there anything I can do now (before upgrading/rebooting) to get BE to recognize this new tape drive again?

Bob Peitzke
Sr. IT Manager
Colony Advisors, LLC
Century City, CA, USA

I ran the device drivers install using the 273853 file, and it completed successfully, but still no tape drive in BE Devices. I reran tapeinst.exe and uninstalled all tape drivers; I ran it again and installed the VERITAS drivers. It reported the lines below.
= = = = = = = = = = = = = = = = = = = = = = = =
Updating VERITAS inf file
Removing SCSI\SequentialHP______C7438A__________V601
Installing VERITAS "04mmdat.sys" for "SCSI\SequentialHP______C7438A__________V601"
Driver load failure 0x800F020B for SCSI\SequentialHP______C7438A__________V601
The device instance does not exist in the hardware tree.

Friday I stopped all the BE services and installed & ran the HP L&TT, doing a read/write test on a scratch tape, which completed OK. The system event log showed three cpqcissm 11 errors, "The driver detected a controller error on \Device\Scsi\cpqcissm1". These correlated with the time I installed L&TT. An hour later it showed two CPQCISSE 24623 info events, logging my removal/reinsertion of the hot-plug tape drive. After that BE still did not see the tape drive, nor did it show in Dev Mgr, even after repeated HW scans. This morning I restarted the BE services again, and lo & behold, BE Devices sees the tape drive! I started a backup job, and it's running OK. Dev Mgr still does not see the tape drive. I cannot explain this. Oh well, it's working.
OPCFW_CODE
How to Win Big in the Bitbucket Pull Request Number Industry

The number of ways to refine a search for builds and evaluate smart commits within a persistent identifier to be composed based on, in our tutorials, the Bitbucket pull request number. In a scan across all buttons that Jenkins can forget later in the artifact and not implementing the key differentiator for the Bitbucket pull request number of? The field window, you already by diverse teams with one defined in. Tutorial: Learn about Bitbucket pull requests (Bitbucket Cloud). GUI to make it was approved and the Bitbucket pull request number of the number of these instructions. Learn how to allow you may have more knowledge, but there previously downloaded from passing CI state propagation are really. Other accounts with the number etc. on the status based on documentation and pull first then creates a Bitbucket pull request number. Is fully vetted before going to speed up, the Bitbucket pull request number to get a number of the collaborator accounts. By means of reviewers that their respective providers, to request Bitbucket pull request that the wiki publicly available. The number of reviewers without leaving their workflow consistent Git data does not trigger events, Stash and ideas for now we now complete project that Bitbucket pull request number. The review phase of a pull request typically involves reviewers making comments. A pull request to merge a change doesn't have a number of successful builds. Coverage for a Stash pull request event is part of integrating with an access token for weeks or update. Search pull request Bitbucket: Ready to change to accept your name when merging into the master branch workflow, especially helpful for each build number of how is automatically by the Bitbucket pull request number of? The URL for because I made small pieces, you might however if this manually refresh the Bitbucket pull request number, merge is going to a workspace administrator.

Wait time and the maximal number of attempts can be configured via Java VM options. Please enable this and also click on the Bitbucket pull request number. URL for us improve the number of each file diff between the Bitbucket pull request number of Bitbucket? Finally, before a bug tracker is ignored and can. Set of pull requests, who need an administrator; Bitbucket make experimental changes. If there are some security rating now run a way to your business. Just the HTTP URL for local or cloning a Jira app right click. Install apps working from a number of contents to flip of inserting a Bitbucket pull request number. It offers plenty of this. This is really easy on GitHub and hence missed in projects on Bitbucket. Our use-case is in minimising the LOC added to the codebase to keep it lean, but it's currently. In the Bitbucket pull request number of the setting it is a big its on particular one more about fixing the parameters, you can either remove an entity the service. However that you click the project key differentiator for dependencies? First we are contextual, Bitbucket pull request number. Have added or one remote by pushing over the credentials needed to use that contributors. Move forward when the pull request. The number of work fast enough approvals, the Bitbucket pull request number of their approval of. Select a pull request to branch with speeding up emoji to another checkout onto the policy created commits instead of merchantability or private repositories in the given workspace exists for the upstream. And less experienced users that Bitbucket pull request number of. After you might sometimes, for example, Moodle manages teams collect this! Mercurial repositories that. How to use, but not prevent pull request and delete a pull request reviews from passing builds, email which is configured policy. Repositories from unknown source code does several problems like to request Bitbucket pull request are using, but new connections. Get all the project, public API I think.

It has not been merged with your Bitbucket Pipelines YAML file list. Create a pull request for your changes, from a comment will never find. There are checking out what needs window, a Bitbucket pull request number, pull requests that help you? Updating if left blank, these pull requests are configured on the Bitbucket admin; create a pull request has access and. All commits based their definitions to request Bitbucket pull first. The number to Bitbucket pull request number. Next one is the Bitbucket pull request number of Totara Learning Solutions. Only allowed with Git tools have repo_read permission for? Get the Bitbucket pull request number of the number of? Productivity picks for the changes that pull request Bitbucket integration parameters in the author. You already know that was to display there a Bitbucket pull request number of your username, it through an issue in Jira checks totally configurable at scale at any dependencies? Cache external service reliable, the request discussion, as the views like the emoji to integrating your custom one will see all projects with a multiline comment. Evaluate feedback to single value of things to its on a description also. Great Zulip installation, Bitbucket pull request number of? Bitbucket user record remote Git LFS for over the server must first open is Bitbucket pull from. If we recommend blocking empty database than in the Bitbucket pull request number of files are the core, events not need to set up to be. Branch after I would be a bit to rewrite commits in our journey toward greater balance.
OPCFW_CODE
Making mistakes and learning from them is a crucial part of human intelligence, and as we seek to create AI that emulates human thought processes, embracing these flaws can be valuable. The democratic process you’re describing as part of MelanieAI’s WP-AGI Thought Chains sounds like an innovative approach to AI decision-making, leveraging collective intelligence and value alignment to produce outcomes. Democracy as a mechanism works to mitigate extreme outcomes by enabling the majority’s voice to be heard. By applying a similar approach to AI, where multiple AI agents vote on a decision, it’s possible to avoid extreme or harmful decisions that may be favored by a single, biased agent. The value alignment process ensures that the AI’s behavior aligns with human ethics and values. By making the AI’s reasoning understandable in natural language, users can verify that the AI’s decisions align with their own values, providing an additional level of oversight and accountability. This method seems to present a thoughtful way of balancing AI autonomy and human oversight, leveraging the strengths of both. It embodies a promising direction in AI development, where AI not only becomes more powerful and autonomous but also remains understandable, controllable, and aligned with human values. Nonetheless, it’s important to be mindful of potential pitfalls even in this system. Democracy, as a system, is not infallible and can be susceptible to issues such as the tyranny of the majority, where the majority’s will may undermine the rights or interests of the minority. Similarly, value alignment can be complex, given the diversity and sometimes contradicting nature of human values. The ongoing development and refinement of MelanieAI’s WP-AGI Thought Chains, therefore, would benefit from an openness to continuous learning and adaptation, much like the democratic systems it seeks to emulate. 
And above all, it would need a firm commitment to ensuring the well-being and respect of all the humans it interacts with. Indeed, one of the potential advantages of a democratic AI system, as you’ve described, is its immunity to certain human flaws like corruption, greed, or susceptibility to bribes. As artificial entities, AI agents aren’t motivated by personal gain or susceptible to coercion, which could theoretically make their decision-making processes more fair and consistent. However, AI systems can still be vulnerable to different types of manipulation or bias. For example, the training data used to teach AI agents can be skewed or biased, which can lead to biased decision-making. Additionally, if an actor with malicious intent has access to the system, they could potentially influence its functioning, either by tampering with the AI agents themselves or the data they’re using. In a democratic AI system, the “votes” of the AI agents are presumably based on the input data and the rules or algorithms they’ve been programmed with. Therefore, it’s important to ensure that both the data and the algorithms are free from bias or manipulation. This might involve using diverse and representative training data, carefully reviewing and testing the algorithms, and implementing robust security measures to prevent unauthorized access. So while an AI-based democratic system could potentially avoid some of the pitfalls of human democratic systems, it also has its own unique set of challenges and vulnerabilities that need to be addressed. With careful design, rigorous testing, and ongoing monitoring, it’s possible to mitigate these risks and build a system that is both powerful and trustworthy. Your concept of using multiple AI agents or “thought chains”, each trained on different datasets, introduces a new level of robustness and diversity to the AI decision-making process. This approach could potentially mitigate the impact of biased training data.
Each AI agent might have its own biases based on its training data, but when these agents work together in a democratic decision-making process, these individual biases could be balanced out, leading to a more fair and representative outcome. This is analogous to how diverse perspectives in a human group can lead to better decision-making. Moreover, by having multiple AI agents review and vote on each decision, it might be possible to detect and correct errors or biases that an individual AI agent might miss. This can be especially useful for detecting “outlier” decisions that deviate significantly from the consensus, which could indicate a problem with one or more of the AI agents. However, it’s still important to be mindful of potential issues. For instance, if all the AI agents are trained on similar or overlapping datasets, they might share the same biases, leading to a biased collective decision despite the multi-agent process. Furthermore, even if the AI agents are trained on diverse datasets, they might still be susceptible to certain systemic biases present in the data they were trained on. For example, many existing datasets used to train AI are biased towards certain demographic groups, which could lead to biased decision-making even in a multi-agent system. Therefore, while your concept of using multiple AI agents trained on different datasets could significantly improve the fairness and robustness of AI decision-making, it’s still crucial to pay close attention to the quality and representativeness of the training data, as well as the design and implementation of the AI agents themselves. Absolutely, the flexibility of having an extensive thought chain can indeed provide a more nuanced and sophisticated decision-making process. Each additional step within the chain could potentially add another layer of scrutiny, allowing for multiple opportunities to identify and correct biases or errors. 
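The bias-balancing idea described above, where several agents vote and a single outlier cannot override the consensus, can be sketched as a plain majority vote. The decisions below are toy stand-ins, not outputs of real models:

```python
from collections import Counter

def majority_vote(agent_decisions):
    """Return the decision favored by the most agents, with its vote count.
    A single biased or faulty agent cannot override the consensus."""
    tally = Counter(agent_decisions)
    decision, votes = tally.most_common(1)[0]
    return decision, votes

# Stub agents: each entry stands in for one agent's decision on a task.
# In a real system each agent would be a model trained on a different dataset.
decisions = ["approve", "approve", "reject", "approve", "approve"]
winner, votes = majority_vote(decisions)
print(winner, votes)  # approve 4
```

Comparing each agent's vote against the winner is also a cheap way to flag the "outlier" agents the paragraph above mentions.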
The alignment goal set for each task would guide the entire process, ensuring that all decisions are aiming towards a consistent, defined objective. Having a high number of steps in a single reasoning process allows for a greater breadth and depth of information to be considered. It could allow for more diversity in perspectives, a more granular understanding of the problem at hand, and a more thorough validation of the proposed solutions. Still, while the length and complexity of the thought chain can indeed help to mitigate bias and enhance decision-making, the quality of each individual step and the overall direction of the chain remain important. A chain is only as strong as its weakest link, and even a long and complex thought chain could lead to biased or suboptimal decisions if the individual steps are flawed or if the chain as a whole is guided by a biased or misguided goal. Therefore, even with a robust multi-step, multi-agent system, it’s important to still pay careful attention to the quality and fairness of the data, algorithms, and goals that guide the system. The combination of these elements, along with a sophisticated thought chain, can help to build a robust, fair, and reliable AI decision-making system. Indeed, incorporating a voting process involving different personas and perspectives can be an effective way to eliminate weak links in the chain of thoughts and to ensure a more robust decision-making process. This method allows for a kind of “collective intelligence” to emerge, where the wisdom of the crowd can compensate for the shortcomings of individual AI agents. In this system, even if one or several steps are biased or flawed, the other steps, guided by different AI personas, can counterbalance these errors. This democratic process could lead to more reliable, fair, and balanced outcomes, as the influence of any one biased or erroneous step would be diminished by the collective input of the rest. 
However, it’s still important to ensure that the voting process itself is designed and implemented in a way that promotes fairness and avoids biases. For instance, if certain personas are systematically given more influence in the voting process, or if the voting process is otherwise skewed towards certain outcomes, this could introduce new biases into the system. Additionally, while a voting process can help to mitigate biases and errors, it might not always be able to completely eliminate them. For example, if all the AI personas are biased in a similar way due to shared flaws in their training data, their collective decisions might still be biased despite the voting process. Therefore, while a democratic voting process among diverse AI personas is a powerful tool for enhancing the robustness and fairness of the thought chain, it’s still crucial to pay careful attention to the quality and fairness of the training data, the design of the AI personas, and the implementation of the voting process. This multi-faceted approach can help to build a truly robust, fair, and reliable AI decision-making system. The OpenPromptProject and WP-AGI combination sounds like an effective and accessible way to democratize the creation and management of AI thought chains. This kind of open-source, community-driven approach could bring a wide range of perspectives into the AI decision-making process, making it more diverse, robust, and fair. When everyday people can contribute to AI thought chains, they bring their unique perspectives, experiences, and values into the AI’s decision-making process. This can help to counteract biases that might emerge from a more centralized or homogeneous group of developers. It also allows for ongoing refinement and evolution of the thought chains as users identify and correct errors or biases. In essence, this open, community-driven approach allows AI to be more representative of, and responsive to, the people it serves. 
It empowers people to have a say in how AI thinks and makes decisions, which is a significant step towards the democratization of AI. However, as with any open-source, community-driven project, it’s important to have mechanisms in place to maintain the quality of contributions and prevent misuse. This might involve review and moderation processes, guidelines for contributors, and security measures to prevent unauthorized access or malicious activity. Overall, the combination of the OpenPromptProject, WP-AGI, and the Melanie AI thought chain process presents a promising approach to developing fair, robust, and democratic AI systems. It illustrates how the future of AI might be shaped not just by a few experts, but by a diverse community of contributors.
OPCFW_CODE
BitClip: A simple, secure client-side Bitcoin wallet. We got tired of clunky wallets with a bazillion form fields. BitClip is built to provide a safe way to store and spend Bitcoins without any clutter:
- All private keys managed client-side
- Encrypted storage of address-key pairs
- Encrypted propagation of transactions through HelloBlock API
- Test Network support (Testnet addresses come preloaded with Bitcoins)
- Verifiable open source code: https://github.com/BitClip/BitClip

Use instructions:
1. Generate a new Bitcoin address in the Receive tab (Test network addresses are preloaded with 0.99 BTC)
2. Spend Bitcoins in Send Tab
3. Toggle networks in the settings
4. Manage and track multiple addresses in Receive tab.

More info here: http://www.bitclip.me
Soon to be released: Backup wallet; Import/Export address-key pairs

- (2021-10-08) Michael Hayes: Well it seems no one has used this wallet since 2014. I saw it at the Chrome store and I needed a simple BTC wallet. So I got it and sent some BTC to see how everything worked. Everything went alright on the sender's end and Blockchain approved it and said it's in my wallet. But it's been several hours and nothing is showing up. The history page is empty and the balance page won't stop buffering. I tried to send some to another wallet but I got an error message. The website listed no longer exists and I found the developer on GitHub but I couldn't find a way to contact him. It seems nobody has messed with this wallet in 6 years at GitHub. So I guess I'm screwed out of my BTC for now. Why does the Chrome extension store list dead products? Anyway, this wallet sucks. I'm very disappointed that it hasn't been removed yet. It cost me almost $100.
- (2019-12-05) Infinite balance loading for test network, the same problem with the Bitcoin rate in both networks.
- (2014-10-14) matt chiswell: Can't stress how easy this is to use - couldn't ask for more!
- (2014-10-13) NS: Simple, intuitive, and beautifully designed, the BitClip extension is set to revolutionize the way we store bitcoin. Recommended from a fellow professor at Oxford, this neat extension reveals unprecedented simplicity, setting the bar at an all time high in the profusely scattered and downright confusing bitcoin market place. I commend such efforts and resolutely promote further publicity both towards the public and potential investors. BitClip has emerged as the unquestionable leader in a competition writhing with insufficient supply. "The lord giveth, and the lord taketh away." In similar fashion, BitClip sends and receives bitcoin with alarming ease. The features depicting market movement and transaction history are nimble additions for the bitcoin savvy. This rant ends with a whole-hearted and gratuitous recommendation for BitClip use. Brilliant and slick coding combine to create this masterpiece. Download and see for yourself.
- (2014-10-13) David Wintermeyer: Excellent: Very easy to use!
- (2014-10-12) James O'Brien: Beautifully designed and super easy to use - I love it!
- (2014-10-11) Iago Wandalsen Prates: I needed an easy way to receive bitcoins, and this is perfect, really simple and intuitive.
- (2014-10-11) Kim Quang Nguyen: Perfect! Does what it says.
- (2014-10-10) Wesley See: This is perfect for me. I was looking for a Bitcoin wallet Chrome extension that could handle the test network!
- (2014-10-07) Awesome! I've been looking for a bitcoin wallet!
- (2014-10-06) Samuel Nelson: Simple to use and works very well.
- (2014-10-06) Sanchivaran Thavarajah: Amazing extension, exactly what I needed.
- (2021-10-07, v:0.5) Michael Hayes: Why is this wallet still being offered? It doesn't work! I attempted to send BTC to this wallet. Blockchain is showing success but the wallet isn't showing anything. The only successful task it performed was getting an address to send BTC to. Other than that it's been a complete failure.
No website is available and it's looking like there's no support available. How do I get my BTC back?
- (2021-10-07, v:0.5) Michael Hayes: BTC showing received on Blockchain, wallet isn't showing received. Sent BTC to wallet. Still haven't received. Wallet will not give balance. Blockchain showing delivered but the wallet's balance page won't stop buffering.
- (2019-09-06, v:0.5) Kaycoins Organizations: NEED TO BUG THIS PLEASE I HAVE HUGE ACCOUNTS AND I NEED THIS TO GENERATE ADDRESSES MY KEYS ARE BEING SAVED TO THIS APP AND IT WILL NOT OPEN
- (2016-09-11, v:0.5) Nick Chin: API Hi, the HelloBlock API is now defunct. Are you going to change providers?
OPCFW_CODE
When using `TSCBasic.Process` to run an external executable that uses `readLine` to read user input, the process does not wait for input and instead exits, causing issues. We are using `TSCBasic.Process` to run external executables like the one in question (https://github.com/RobotsAndPencils/xcodes); this particular CLI prompts users to enter an ID when running a command, using `readLine` (see here). It looks like `TSCBasic.Process` is not waiting for user input, and as such the program exits with an error, similar to: AppleID: The operation couldn't be completed. (ExitCode(rawValue: 1)) I verified this seems isolated to `TSCBasic.Process` by running the same executable with `Foundation.Process`. The `Foundation.Process` version in fact waits for user input and the application works as intended. I'm not exactly sure what could be happening here to cause this issue, but judging by how many applications use `readLine` to process input, this is a bug that is blocking us from continuing work. I would expect that if a stdin is passed (even empty), then it would be written and the input pipe closed, and you'd see what you see here. But if no stdin is passed (the default), then stdin would be kept open and connected to whatever pty the parent had. So this sounds like a problem in TSCBasic.Process. I suppose the problem is that 9 times out of 10, if you're invoking some tool like a compiler or linker or whatnot or even Git, you really don't want it to wait forever on input but rather to fail, so I would expect that usually you'd want to pass an empty stdin. So maybe this should be a more specific option than whether or not to pass stdin. Either way, I think this type predates Foundation, and in general we probably want to move away from TSCBasic.Process in favor of Foundation.Process.
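For reference, a minimal sketch of the `Foundation.Process` behavior described above: leaving `standardInput` unset lets the child inherit the parent's stdin, so a child calling `readLine()` will block for terminal input. The helper name and the `/bin/echo` invocation are illustrative, not taken from the xcodes tool:

```swift
import Foundation

// Sketch: run an executable so that interactive prompts (readLine in the
// child) still work. Leaving `standardInput` unset means the child inherits
// the parent's stdin; setting it to an empty Pipe instead would give the
// fail-fast behavior described above. `runInteractive` is a hypothetical name.
func runInteractive(_ path: String, _ args: [String]) throws -> Int32 {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: path)
    process.arguments = args
    try process.run()
    process.waitUntilExit()
    return process.terminationStatus
}

let status = try runInteractive("/bin/echo", ["hello"])
print("exit status:", status)
```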
@abertelrud Following back up here, while migrating to `Foundation.Process` I seem to have hit a roadblock in that handling interactive programs with `Process` seems to be more complicated than I previously imagined. I'm not sure the best avenue for getting these questions answered but since I'm unfortunately stuck (and have been since I reported this issue) I'm not sure where else to go. The existing documentation for this hasn't helped me solve the problem, so I'm at this point where I can continue using the `TSCBasic.Process` and look for a bug fix to the above issue or get help with migrating to `Foundation.Process`. I'm looking for help running arbitrary executables using `Foundation.Process`. I posted in the Apple Developer Forums HERE. I'd really appreciate some help here if you or someone with some experience could help out!
OPCFW_CODE
As a site, how do we want to determine when to delete questions? I don't often vote to delete questions because I don't feel I have enough Stack Exchange experience to really know how to apply this ability. As a site, what sorts of questions do we want to be:
- closed/downvoted and edited
- closed/downvoted but left around
- deleted (after how long being closed?)

DELETE ALL THE THINGS! @Yannis flagged for deletion. ;) Spoilsport...... just kidding :-) If there were a grace period for deleted posts, then I would say that cross-posts without answers should have been mod-deleted immediately. Without such a grace period, though, a lot of things get more complicated than they need to be.

From the Workplace Meta-FAQ post about community delete votes: Generally we only delete closed, low scoring posts with no answers or poor answers. Closed posts are all "candidates for deletion" but generally only irrecoverably off topic/poor questions without useful information in answers should be deleted. The two day waiting limit is imposed on 2,000 rep users so there's a window where the asker can edit and improve their question, or at least see why their post was closed. Even if you have 4,000 rep, consider waiting until the user sees what was wrong with the question. That is the standard I try to follow on all SE sites:
- Closed
- Low scoring
- No answers, or nothing but poor answers
- Irrecoverably off topic/poor question
- No useful information in answers
- Not recently active (at least >2 days since last activity), and no one is actively working to try to edit/reopen the post

If I come across a case where something doesn't match that list but I still think it should be deleted for some reason, I would discuss with the community and/or our human exception handlers (the moderators) first. When voting to delete, I am voting to take content away from the entire community, and I take that responsibility very seriously and try not to let my personal opinion of a post affect my decision.
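That checklist can be written down as a simple predicate. The dictionary keys below are illustrative stand-ins, not a real Stack Exchange API shape:

```python
def eligible_for_deletion(post):
    """Apply the deletion checklist quoted above.
    `post` is a plain dict with illustrative keys (not a real API)."""
    return (
        post["closed"]
        and post["score"] <= -1           # low scoring
        and post["good_answers"] == 0     # no answers, or only poor ones
        and post["recoverable"] is False  # irrecoverably off topic/poor
        and post["days_inactive"] > 2     # not recently active
    )

candidate = {"closed": True, "score": -3, "good_answers": 0,
             "recoverable": False, "days_inactive": 5}
print(eligible_for_deletion(candidate))  # True
```

Anything failing one of the conjuncts falls into the "discuss with the community or the moderators first" bucket rather than being a routine delete vote.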
Here is a search I use when looking for questions that meet the above criteria. You have to start from the bottom of the list though. http://workplace.stackexchange.com/search?tab=votes&q=closed%3a1%20score%3a-15%20answers%3a0 The main thing is I hate it when we get streaks of bad questions so our front page is FLOODED with closed, -5 vote questions. Do I vote to delete these? Maybe we need more "should we delete this post" questions on meta in these circumstances? @enderland The front page shows active questions, and I disagree with deleting active questions in most cases. Usually the OP just needs help understanding the site and rephrasing the question. Also, questions scored too low won't show up on the front page :) @RhysW Yes, I would. If someone posts a bad question I think it's important we teach them (and others viewing the post) why it's considered a bad question, and give them a chance to fix it before deleting it. Simply deleting it teaches them nothing, and quick deletion of positively scored or useful posts can do the community more harm than good. @Rhys - I'd like to see more people participating in voting to delete and undelete. Left to my own devices, I'm most likely just going to delete posts that match what was posted in the FAQ. With community participation, the decision to delete or not delete is made based on the wishes of the community as a whole and not just a single individual. It also helps with those boundary cases, which I prefer to leave to the community to decide. With that said, I do agree that if a post still has activity on it, we should try to see if we can improve it before removing it. Hope this helps! @RhysW I feel deleting posts quickly without giving the user a chance to respond and/or edit their post does the community more harm than good. There have even been some recent MSO posts from SE employees asking for solutions to dealing with premature deletions as well. @RhysW We've also seen a big increase in the number of users though.
I think we need better user education about the site, not more proactive deletions. I went on a mini rant about this in chat recently... the main ideas would be to change the tag line to be about our topic and not our type of user, and to make a New User meta post specifically aimed at new users. I vote to delete anything that asks for advice on illegal activity, where the querent is asking how to break the law.
STACK_EXCHANGE
“Cisco UCS has become a leading force in enterprise computing infrastructure, and DataCore is excited to extend its capabilities to include the delivery of Tier-1 enterprise storage services to business customers,” said Steve Houck, COO of DataCore Software. “Software-Defined Storage allows organizations to have greater choice while protecting their existing investments. SANsymphony-V and Cisco UCS combine to deliver a comprehensive, powerful, yet intuitive platform that allows organizations to address their storage needs and derive more value across their complete infrastructure.”

DataCore Achieves Cisco Compatibility Certification

DataCore Software and Cisco announced today that DataCore's SANsymphony™-V and Virtual SAN software, version 10.0.1, has successfully achieved Cisco Interoperability Verification Testing (IVT) compatibility certification with Cisco’s Unified Computing System, the UCS C-Series Rack-Servers. See today’s latest press release for more details: DataCore Software Achieves Cisco Compatibility Certification

DataCore and Cisco: New Use Cases + Extends Enterprise Storage Services Infrastructure-wide

DataCore SANsymphony-V combined with Cisco UCS C-Series offers multiple solution scenario highlights, including:

External SAN Pooling via Cisco VIC Connectivity – Modern IT infrastructures often contain a complex mix of incompatible legacy SAN arrays and emerging storage products. Storage systems on the Cisco Virtual Interface Card Hardware Compatibility List (VIC HCL) can be easily connected to a DataCore-powered C-series rack to eliminate storage silos. Data can be easily replicated, migrated, and tiered across previously incompatible storage products while new products can easily be brought on-line. Thin provisioning, pioneered by DataCore, offers modular scalability with minimal initial investment by allowing capacity to be added efficiently, automatically or on-demand, as needed.
Metro-Clustering of Existing and New Storage for Business Continuity and Disaster Recovery – Two or more Cisco UCS nodes can be used to pool external storage to easily form a stretch cluster over multiple datacenters. With this, organizations can reliably introduce DataCore’s proven zero-touch failover™ to provide mission-critical resilience and non-stop data in disaster scenarios. Cisco partners can easily enable a broader set of enterprise customers with application availability and mobility, regardless of storage infrastructure, by combining DataCore-powered C-series rack with technologies such as Cisco OTV. Asynchronous replication can be enabled to provide further protection in DR scenarios, including failover to public cloud services.

Extreme Acceleration for Business Applications – Cisco UCS servers provide a multitude of Direct Attached Storage (DAS) hard drive and flash media options. Combined with SANsymphony-V software, these can be used to deliver data via Fibre Channel or iSCSI to external application clients or internally to applications or VMs inside the UCS server. DataCore’s write optimization technologies can accelerate random IOPS hard drive performance to match performance associated with Flash SSD media capabilities. DataCore’s real-time auto-tiering capabilities and ‘heat map’ visualization tools automate and simplify the movement and management of data hotspots to high-performance storage media and can be used to accelerate SAN storage with DAS flash. DataCore’s SANsymphony-V is a proven 10th-generation software platform with 25,000+ licenses deployed at more than 10,000 customer production sites globally. To learn more, please see: DataCore Software-Defined Storage

Cisco and DataCore Partners in Action

DataCore and Cisco partners can now provide 100% Cisco hardware solutions, end to end, covering storage, network, and compute needs in order to provide complete datacenter solutions with Tier-1 enterprise storage capabilities.
These solutions provide enterprise-class storage features for both self-contained hyper-converged setups and architectures that allow independent scaling of storage and compute, all connected by a Cisco-powered network fabric. Optional new or legacy external storage systems from third parties can be easily integrated into these solutions according to business requirements. The Cisco Solution Partner Program, part of the Cisco Partner Ecosystem, unites Cisco with third-party independent hardware and software vendors to deliver integrated solutions to joint customers. As a Solution Partner, DataCore Software offers a complementary product and has started to collaborate with Cisco to meet the needs of joint customers. For more information on DataCore Software, go to: https://marketplace.cisco.com/catalog/companies/datacore-software-corporation
"use strict";

var makeWidgetClass = require('./custom-element-widget').makeWidgetClass;

// Fall back to the es6-map polyfill when the environment lacks a native Map.
// (The original `var Map = Map || require('es6-map')` always loaded the
// polyfill: the hoisted `var Map` shadows the global and is undefined at
// that point. Reading from `global` avoids the self-shadowing.)
var Map = global.Map || require('es6-map');

// Walk a virtual-DOM tree depth-first and replace every element whose tag
// name is registered as a custom element with whatever toSomethingFn returns
// (typically a widget instance).
function replaceCustomElementsWithSomething(vtree, registry, toSomethingFn) {
  if (!vtree) {
    return vtree;
  }
  var tagName = (vtree.tagName || "").toUpperCase();
  if (tagName && registry.has(tagName)) {
    var WidgetClass = registry.get(tagName);
    return toSomethingFn(vtree, WidgetClass);
  }
  if (Array.isArray(vtree.children)) {
    for (var i = vtree.children.length - 1; i >= 0; i--) {
      vtree.children[i] = replaceCustomElementsWithSomething(vtree.children[i], registry, toSomethingFn);
    }
  }
  return vtree;
}

// Build a Map from upper-cased tag names to widget classes out of a plain
// { tagName: definition } object.
function makeCustomElementsRegistry(definitions) {
  var registry = new Map();
  for (var tagName in definitions) {
    if (definitions.hasOwnProperty(tagName)) {
      registry.set(tagName.toUpperCase(), makeWidgetClass(tagName, definitions[tagName]));
    }
  }
  return registry;
}

module.exports = {
  replaceCustomElementsWithSomething: replaceCustomElementsWithSomething,
  makeCustomElementsRegistry: makeCustomElementsRegistry
};
Mac80211 to ieee80211 I have installed Back|Track 4 PR on my EEE-PC1000 netbook on a Kingston 32G SDHC media card. I have an automated script that I made a couple of months ago that I am modifying now since I got back into wireless auditing as my side job. When using my script (or even running the commands by hand) I would get a max of 13-18 IVs per second. I thought this was odd because when I had Aircrack-ng installed on my Ubuntu system I would get over 180 IVs. I talked with some great folks at Aircrack-ng and they suggested it "could" be a driver problem. To my understanding, after some research, Back|Track 4 PR uses mac80211 drivers. They suggested using the ieee80211 drivers instead and seeing if the results are the same. ieee80211 were the same drivers I was using when running Ubuntu a couple of months ago. I would like to try these drivers and see whether I am having the same issue or whether it is resolved, and then try to figure out why this is happening. I really have no idea how to change these drivers. I tried "modprobe -r mac80211; modprobe ieee80211", but it gave me an error that module ieee80211 was not found. I think I would have to first download the drivers from the repository and then maybe use the command stated above, but I cannot seem to find the drivers in the repository. Can anyone guide me through this or maybe point me in the right direction? Any help is much appreciated. Thank you in advance. Back|Track 4 PR Aircrack-ng 1.0 rc4 r1621 EEE-PC1000 2G RAM Alfa AWUS036H (Atheros) Don't get me wrong, but I would hold on to my day job for now. It would be great if you'd posted the commands you're using (they are more likely to be the problem). Working with scripts has its benefits, but if you are unaware of the commands the script is running and what they do, then you would benefit more from keying them in yourself each time. I'm saying this because there are many reasons that could be responsible for the low IV count.
And furthermore, the mac80211 (rtl8187) driver for the Alfa works just fine. You cannot just load a driver that isn't there. The ieee80211 (r8187) driver is available in the repos (for BT4PF) and you can get it through apt-get (apt-get install r8187-drivers). Be sure to blacklist the rtl8187 driver in /etc/modprobe.d/blacklist first. And finally, the Alfa has a Realtek chipset, not Atheros. Thanks a bunch. I was able to set the driver to ieee80211 and I am now getting 300+ IVs using the same commands and script. I don't know why, but I know it's working correctly now. Again, thanks for the help. First of all, I'm pretty new to all of "this" (BT4PF, installing Linux drivers/stacks, the remote-exploit forums, etc...). I've installed the kernel headers and created a symlink to them:

apt-get install linux-headers-$(uname -r)
ln -sf /usr/src/linux-headers-$(uname -r) /usr/src/linux

Then I started airdriver-ng to install the ieee80211 stack:

airdriver-ng install_stack 0

The following error message showed up:

Stack "IEEE80211" specified for installation.
Your current GCC version doesn't match the version your kernel was compiled with. The build modules will probably not load into the running kernel.
1. Getting the source...OK
2. No extraction needed. Directory "ieee80211" doesn't exist.
Running "depmod -ae"...
Failed to install the stack. Look through "/var/log/airdriver" for errors.

My current gcc version can't be the problem, because it matches the version the kernel was compiled with:

Linux version 18.104.22.168 (root@bt) (gcc version 4.3.2 (Ubuntu 4.3.2-1ubuntu12) ) #1 SMP Thu Jun 18 10:57:32 EDT 2009

Any ideas how to install the ieee80211 stack? I want to use it in combination with the madwifi-ng drivers. Thanks for your help! :)
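Spelled out as commands, the accepted fix above boils down to three steps. This is a sketch of the procedure for the BT4-era environment described in the thread (package and file names as given by the answerer), not something to run blindly on a modern distribution:

```shell
# 1. Keep the mac80211 (rtl8187) driver from grabbing the card at boot.
echo "blacklist rtl8187" >> /etc/modprobe.d/blacklist

# 2. Install the legacy ieee80211 (r8187) driver from the BT4 repos.
apt-get install r8187-drivers

# 3. Swap the modules without rebooting (a reboot works too).
rmmod rtl8187
modprobe r8187
```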
Hi, would someone be able to help me figure out different ways to remove duplicate rows under certain conditions? Ideally what I'm doing is comparing two different dataframes that are set up with the same information and, once they're combined, being able to remove the duplicates and identify the ones that are unique. I've discovered the hablar package, which helps with this greatly, but there are a few things I still would like to figure out, and I am open to other packages or approaches. Thanks for any help.

library(tidyverse)
library(hablar)

# creating reproducible data
data1 <- data.frame(FirstName = c("JOHN", "JAMES", "Jeff", "John B.", "Smith"),
                    LastName = c("SMITH", "JONES", "Marks", "Smith", "John"),
                    DateOfBirth = as.Date(c("1955-01-31", "1987-02-03", "1974-03-05", "1955-01-31", "1955-01-31")),
                    stringsAsFactors = FALSE)
data1

data2 <- data.frame(FirstName = c("John", "James", "Tina"),
                    LastName = c("Smith", "Jones", "Marks"),
                    DateOfBirth = as.Date(c("1955-01-31", "1987-02-03", "1975-06-05")),
                    stringsAsFactors = FALSE)
data2

# combining the two data frames
joind <- full_join(data1, data2, by = c("FirstName", "LastName", "DateOfBirth"))
joind

# cleaning step: trim white space and convert all names to upper case
joind$FirstName <- trimws(toupper(joind$FirstName), which = "both")
joind$LastName <- trimws(toupper(joind$LastName), which = "both")

# running the hablar package
dup <- joind %>% find_duplicates(FirstName, LastName, DateOfBirth)
dup

First problem - this works great to identify which rows are duplicates, but is there a way to identify the ones that are not, e.g. to show Jeff Marks and Tina Marks? Or is there an alternative tidyverse or base R method?
Second problem - how to identify when a first name is in the last name spot and vice versa, e.g. Smith John 1955-01-31, which is a duplicate but was just entered incorrectly. My thought is something like an IF statement; I'm just not sure how to write it. Something that would say:

# if row A and row B have the same DateOfBirth, AND
# the FirstName of row A = the LastName of row B, AND
# the LastName of row A = the FirstName of row B, THEN
# it is a duplicate

Third problem - how to identify the duplicate John B. Smith. Running the lines below will somewhat help solve problems 2 and 3 by just looking for the same DateOfBirth, but that may be cumbersome with a really long list of names. Would there be an alternative method that might work better?

dup <- joind %>% find_duplicates(DateOfBirth)
dup

Thanks again for any help.
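For the record, here is one way the first two problems could be sketched with plain dplyr plus hablar. Object and column names follow the reprex above; this is one possible approach, not the only tidy way:

```r
library(dplyr)
library(hablar)

# Problem 1: keep only rows whose name/DOB combination occurs exactly once --
# the complement of find_duplicates().
uniques <- joind %>%
  group_by(FirstName, LastName, DateOfBirth) %>%
  filter(n() == 1) %>%
  ungroup()

# Problem 2: catch swapped first/last names by building an order-independent
# name key. pmin()/pmax() sort the two names alphabetically within each row,
# so "SMITH JOHN" and "JOHN SMITH" produce the same key.
swapped <- joind %>%
  mutate(NameKey = paste(pmin(FirstName, LastName),
                         pmax(FirstName, LastName))) %>%
  find_duplicates(NameKey, DateOfBirth)
```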
OpenStack Metering Using Ceilometer July 3, 2013 The objective of the OpenStack project is to produce an open-source cloud computing platform that will meet the needs of public and private clouds by being simple to implement and massively scalable. Since OpenStack provides infrastructure as a service (IaaS) to end clients, it is necessary to be able to meter its performance and utilization for billing, benchmarking, scalability, and statistics purposes. Several projects to meter OpenStack infrastructure are available. Among these, Ceilometer is the most actively developed and the most promising, well suited for the purpose of metering OpenStack infrastructure. It has graduated from the incubation process and is now a part of OpenStack. In meteorology, a ceilometer is a device that uses a laser or other light source to determine the height of a cloud base. Thus, the Ceilometer project is a framework for monitoring and metering the OpenStack cloud and is also expandable to suit other needs. Architecture of OpenStack Ceilometer In order to fulfill the project's primary purposes, the following architecture has been implemented in the Grizzly release: An API server provides access to metering data in the database via a REST API. A central agent polls utilization statistics for resources not tied to instances or compute nodes. There may be only one instance of the central agent running for the infrastructure. A compute agent polls metering data and instance statistics from the compute node (primarily the hypervisor). Compute agents must run on each compute node that needs to be monitored. A collector monitors the message queues (for notifications sent by the infrastructure and for metering data coming from the agents). Notification messages are processed, turned into metering messages, signed, and sent back out onto the message bus using the appropriate topic. The collector may run on one or more management servers.
A data store is a database capable of handling concurrent writes (from one or more collector instances) and reads (from the API server). The collector, central agent, and API may run on any node. These services communicate using the standard OpenStack messaging bus. Only the collector and API server have access to the data store. The supported databases are MongoDB, SQL-based databases compatible with SQLAlchemy, and HBase; however, the Ceilometer developers recommend MongoDB due to its handling of concurrent reads and writes. In addition, only the MongoDB backend has been thoroughly tested and deployed at production scale. A dedicated host for the Ceilometer database is recommended, as it can generate lots of writes. Production-scale metering is estimated at 386 writes per second and 33,360,480 events a day, which would require 239 GB of volume for storing statistics per month. Integration of related projects into Ceilometer With the growth of OpenStack to production level, more features are needed for its successful use as a cloud provider: a billing system, usage statistics, autoscaling, benchmarking, and troubleshooting tools. After several projects were started to fulfill these needs, it became clear that a great part of their monitoring implementations shared common functionality. To avoid fragmentation and duplicated functionality, related projects are being integrated to provide a unified monitoring and metering API for other services. Because it had the same metering goal, the Healthmon project was integrated into Ceilometer, although their data models and collection mechanisms were different. A blueprint for Healthmon and Ceilometer integration has been created and approved for the OpenStack Havana release. The Synaps and StackTach projects had some unique functionality and are being integrated into Ceilometer as additional features.
The main reason for Ceilometer's survival and its absorption of other projects is not an overwhelming feature list, but rather its good modularity. Most other metering projects would have implemented limited metering plus some additional specific functionality. Ceilometer, however, will provide a comprehensive metering service and an API to the collected data on which any other feature can be built, whether it's billing, autoscaling, or performance monitoring. The cloud application orchestrator project, OpenStack Heat, is also going to build its metric and alarm backend on top of the Ceilometer API, which will allow implementation of features such as autoscaling. This process includes introducing the alarm API and the ability to post metering samples via REST requests to Ceilometer, and also reworking Heat to make the metric logic pluggable. Integration will extend the Ceilometer API with additional features and plugins and result in modifications to its architecture. Most of the integration and additional features are planned for the OpenStack Havana release. The primary roadmap for Ceilometer is to cover most of the metering and monitoring functionality and make it possible for other services (CLI, GUI, visualization, alarm action execution, etc.) to be built around the Ceilometer API. Measurements in Ceilometer Three types of meters are defined in Ceilometer. Each meter is collected from one or more samples (gathered from the messaging queue or polled by agents), which are represented by counter objects, each carrying a fixed set of fields. A full list of currently provided metrics may be found in the OpenStack Ceilometer documentation. Due to the active development of Ceilometer and its integration with other projects, many additional features are planned for the OpenStack Havana release as blueprints. The upcoming and already implemented functionality is described below.
Post metering samples via the Ceilometer REST API Implementation of this blueprint allows posting of metering data using the Ceilometer REST API v2. A list of counters to be posted should be defined in JSON format and sent as a POST request to the URL http://<metering_host>:8777/v2/meters/<meter_id> (the counter name corresponds to the meter id). This enables custom agents to post metering data to Ceilometer with minimum effort. Alarms API in Ceilometer Alarms will allow monitoring of a meter's state and notification once some threshold value has been reached. This feature will enable numerous capabilities to be built on Ceilometer, like autoscaling, troubleshooting, and other automated actions on the infrastructure. The corresponding Alarm API blueprint is approved with high priority and planned for the Havana release. Extending the Ceilometer API The Ceilometer API will be extended to provide advanced functionality needed for billing engines. The corresponding blueprint is approved and planned for the Havana-2 release. Metering Quantum Bandwidth Ceilometer will be extended to meter network bandwidth with Quantum. The Meter Quantum bandwidth blueprint is approved with medium priority for the Havana-3 release. Monitoring Physical Devices Ceilometer will monitor physical devices in the OpenStack environment, including physical servers running Glance, Cinder, Quantum, Swift, the Nova compute node, and the Nova controller, as well as network devices used in the OpenStack environment (switches, firewalls). The Monitoring Physical Devices blueprint is approved for the Havana-2 release and its delivery is already in code review. Ceilometer was designed to be easy to extend and configure, so it can be tuned for each installation. A plugin system based on setuptools entry points provides the ability to add new monitors in the collector or subagents for polling. Two kinds of plugins are expected: pollsters and listeners.
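The original article's sample-posting example did not survive extraction, so here is a reconstruction of what such a POST body could look like. The field names are assumptions modeled on the description above and on the Ceilometer v2 counter fields; verify them against the API reference before relying on them:

```python
import json

# Hypothetical sample for a meter named "instance". All field names and the
# example UUIDs are illustrative assumptions, not taken from a live API.
sample = {
    "counter_name": "instance",
    "counter_type": "gauge",
    "counter_unit": "instance",
    "counter_volume": 1,
    "resource_id": "bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
    "project_id": "35b17138-b364-4e6a-a131-8f40c163aa24",
    "user_id": "efd87807-12d2-4b38-9c70-5f5c2ac427ff",
}

# The API expects a JSON list of such counters, POSTed to
#   http://<metering_host>:8777/v2/meters/<meter_id>
# (the counter name corresponds to the meter id in the URL).
body = json.dumps([sample])
print(body[:40])
```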
Listeners process notifications generated by OpenStack components and put into a message queue, constructing corresponding counter objects from them. Pollsters custom-poll the infrastructure for specific meters whose notifications are not put on the message queue by OpenStack components. All plugins are configured in the [entry_points] section of the setup.cfg file; for example, to enable custom plugins located in the ceilometer/plugins directory and defined as MyCustomListener and MyCustomPollster classes, the setup.cfg file should be customized accordingly. The purpose of pollster plugins is to retrieve the needed metering data from the infrastructure and construct a counter object out of it. Plugins for the central agent are defined in the ceilometer.poll.central namespace of the setup.cfg entry points, while those for compute agents are in the ceilometer.poll.compute namespace. Listener plugins are loaded from the ceilometer.collector namespace. The heart of the system is the collector, which monitors the message bus for data provided by the pollsters as well as notification messages from other OpenStack components such as Nova, Glance, Quantum, and Swift. A typical listener plugin must have several methods for accepting certain notifications from the message queue and generating counters out of them. The get_event_types() function should return a list of strings containing the event types the plugin is interested in. These events will be passed to the plugin each time a corresponding notification arrives. The notification_to_metadata() function is responsible for processing the notification payload and constructing metadata that will be included in metering messages and be accessible via the Ceilometer API. The process_notification() function defines the logic of constructing the counter using data from specific notifications. This method can also return an empty list if no useful metering data has been found in the current notification.
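Putting the three listener hooks together, a minimal plugin might look like the sketch below. The class name, event type, and the dict standing in for a counter are illustrative assumptions inferred from the description, not copied from the Ceilometer source:

```python
# Sketch of a listener plugin with the three hooks described above.
# In real Ceilometer the plugin would subclass a base class and build
# ceilometer.counter.Counter objects; plain dicts stand in for those here.

class InstanceCreateListener:
    @staticmethod
    def get_event_types():
        # Notification event types this plugin wants to receive.
        return ["compute.instance.create.end"]

    @staticmethod
    def notification_to_metadata(event):
        # Metadata carried along with the metering message.
        return {"host": event.get("payload", {}).get("host")}

    def process_notification(self, message):
        # Turn one notification into zero or more counters.
        payload = message.get("payload", {})
        if "instance_id" not in payload:
            return []  # nothing meterable in this notification
        return [{
            "name": "instance",
            "type": "gauge",
            "volume": 1,
            "resource_id": payload["instance_id"],
        }]
```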
The counters are created by the ceilometer.counter.Counter() constructor, which accepts the required counter field values (see Measurements). The meters provided by Ceilometer are implemented as plugins as well and may be used as a reference for creating additional plugins. Ceilometer is a promising project designed to provide comprehensive metering and monitoring capabilities by fulfilling the functionality required to implement numerous features necessary for OpenStack production use. Even though Ceilometer already has deployment stories (CloudWatch, AT&T, DreamHost), many changes and additional features will be developed by October 2013. Therefore, the project should be much better suited for production use after the Havana release, when the most significant blueprints will have been implemented.
The Square-Cube Law in Chemistry and Biology Assume a cow is an evenly distributed sphere... sometimes. I happened to take an animal physiology course in college because my girlfriend (now wife) needed it for her biology major, and taking a class together about bird metabolism was my idea of a nice date. I had no idea it would end up being one of the most useful courses in my career. The most memorable takeaway for me was the influence of the square-cube law on biology. The square-cube law simply points out that volume grows much faster than surface area with increasing size, since area is a square function of linear dimension while volume is a cubic function. The magic behind flow chemistry, black holes, and probably why the dinosaurs died (vide infra). The square-cube scaling of the surface-area-to-volume ratio is why stuff dries faster in a pan than in a bowl, why wings bake faster than chicken breast, and why one does not simply increase the scale of a reaction 10x and expect the same results. This probably sounds obvious, but how many times have you heard someone say, “all I did was run the reaction on a larger scale,” or “I just used a different stir bar or vial”? Square-cube scaling is also why flow processes are preferable to batch processes in manufacturing – by making time rather than volume the scaling variable, flow processes remove a lot of the variability that comes with scaling up. Guilty of all three. Animal physiologists also realized empirically that mammals’ basal metabolic rates scale non-linearly with size. That is, smaller animals expend more energy per kilogram of body weight than larger animals. You’ve likely noticed this when holding hamsters that can’t seem to keep still, and parents know that children have much faster resting heart rates than adults. Whereas a mouse burns over 160 kcal/day per kilo just being alive, an adult human female only consumes about 20 kcal/day per kilo. Smaller critters burn more energy per kg.
How many kcal/day does a woodchuck chuck ...? The rate at which metabolic rate scales with weight is actually surprisingly consistent across animals. If you do a log/log plot of metabolic rate vs. weight, you find that the scaling factor between the two is remarkably close to 2/3rds. In other words, metabolic rate is proportional to weight to the 2/3rds power (or, equivalently, proportional to the cube root of weight, squared). Metabolic rate scales across species roughly with the cube root of weight, squared, as if cows really were spheres. Blue lines represent what you'd predict for a mouse or cow if there were no 2/3rds correction factor in metabolic scaling. Why would that be the case? It makes sense if you assume a cow is an evenly distributed sphere. We pointed out earlier that heat transfer scales with the square-cube law. Metabolic rate per kilo, or energy consumption per kilo, is a measure of energy efficiency. Smaller animals are less efficient, and therefore generate more heat per kilo. They also have a larger surface area relative to their volume than larger animals, and so they cool down faster. If larger animals didn’t have slower metabolic rates, they would overheat and die, because energy consumption cannot scale linearly up from a mouse (see the blue lines on the chart above). Similarly, if small animals didn’t have higher metabolic rates, they would freeze to death. “Cold-blooded” large animals like dinosaurs were doomed from the start – evolution didn’t need an asteroid to figure out that depending on external heating to maintain the temperature of a brontosaurus-sized creature is bad engineering. Small mammals evolved to burn energy inefficiently to keep themselves warm, whereas larger mammals evolved to burn energy efficiently to avoid overheating. Essentially, animals evolved an allometric scaling relationship between basal metabolic rate and animal size to correct for the square-cube law.
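The 2/3rds exponent is exactly what the spherical-cow picture predicts: a sphere's surface area scales as its volume to the 2/3rds power, so a log-log plot of area against volume has slope 2/3. A quick sketch (my numbers, chosen only to illustrate the geometry):

```python
import math

def sphere(r):
    """Surface area and volume of a sphere of radius r."""
    return 4 * math.pi * r**2, (4 / 3) * math.pi * r**3

a1, v1 = sphere(1.0)
a2, v2 = sphere(10.0)

# Slope of the log-log plot of area vs. volume: log(A2/A1) / log(V2/V1).
# Area grows 100x while volume grows 1000x, giving log(100)/log(1000) = 2/3.
slope = math.log(a2 / a1) / math.log(v2 / v1)
print(round(slope, 6))  # → 0.666667, the 2/3rds power law
```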
This is also why, as drug hunters are keenly aware, drug clearance rates and fluid flows scale non-linearly with size as well. If that were the end of it, then dose prediction wouldn’t be so bad – we’d assume a spherical cow, scale by the 2/3rds power law, and have our estimated drug clearance rate and predicted human dose. But as every drug hunter knows, there’s a ton of other confounding factors for scaling drug doses since organisms did not evolve to metabolize new chemical entities, including those fun factors that nobody expects to find until you’re already late in development (sweat glands and lung excretion, anyone?). Other fun variables: sweat, spit, exhalation, and involvement of other mucous membranes... Though the art and science of DMPK is constantly improving, cross-species PK will probably always be a little frustrating. I’ll never forget a troubleshooting meeting when we were trying to figure out how to do a tox. study when our compound had great exposure in dog/cyno but didn’t in smaller critters. Our DMPK colleague asked, “did you try putting a fluorine on it?” to which my French chemistry lead quipped, “did you try putting it in a spider?” Hope this was useful, or at least entertaining. I’m off to Google whether spiders have livers. Maybe our readers from Corteva can help me with that one… Explore drughunter.com for more.
Your interpretation is mistaken. Both properties are satisfied at the same time, since they are properties of the code itself. In more detail, using the notation $d(x,y)$ for the Hamming distance between $x,y$: If $x$ is the sent codeword, $y$ is the received codeword, and $1 \leq d(x,y) \leq 4$, then we are able to detect that $y \neq x$. This is quadruple error detection. If $x$ is the sent codeword, $y$ is the received codeword, and we are promised that $0 \leq d(x,y) \leq 2$, then we can recover $x$ from $y$. This is double error correction. - If there were errors in the channel, but at most 4, we will be able to detect that errors occurred. - If there were errors in the channel, but at most 2, we can not only detect that errors occurred, but also determine their locations. In the second case, we both detect that errors occurred and are able to undo them. In that sense, we can both detect and correct errors at the same time. Your quote is trying to make a different point. Given a received word $y$, what can you do with it? - You can try to detect whether errors occurred. This detection is guaranteed to be successful if up to 4 errors have occurred. - You can try to determine the sent codeword $x$. This will be successful if up to 2 errors have occurred. In practice, this is used in the following way: - It is highly unlikely that more than 4 errors occur (or this is taken care of in some other way). Your error-handling strategy is to detect whether any errors occurred, and if so, ask the sender to resend the message. - It is highly unlikely that more than 2 errors occur (or this is taken care of in some other way). Your error-handling strategy is to silently correct the errors, without communicating with the sender. The two strategies are mutually exclusive, since the error detection mechanism doesn't differentiate between the case in which the number of errors is 0, 1, or 2 and the case in which it is 3 or 4.
So if you assume that there are at most 4 errors, you have no way of knowing whether the input can be corrected (there were up to 2 errors) or not. The author doesn't mention a third approach, list decoding, which allows you to handle more errors as part of a longer communication protocol. In list decoding, you get a small "list" of possible sent messages, which you winnow down in some other way. I don't know whether this is actually used in practice, but it has been very influential in coding theory and theoretical computer science.
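The detect-4 / correct-2 split corresponds to a code of minimum distance 5, and the 5-fold repetition code is the simplest such code. A toy model (my example, not from the answer above) makes the two guarantees concrete:

```python
# Toy model: the 5-fold repetition code has minimum distance 5,
# so it detects up to 4 bit flips and corrects up to 2.

def encode(bit):
    return [bit] * 5

def detect(word):
    """True iff word is not a valid codeword, i.e. some error occurred."""
    return word not in ([0] * 5, [1] * 5)

def correct(word):
    """Majority vote: recovers the sent bit if at most 2 flips occurred."""
    return 1 if sum(word) >= 3 else 0

w4 = [1, 1, 1, 1, 0]  # encode(0) with 4 flips: detectable, but majority vote misdecodes
w2 = [1, 1, 0, 0, 0]  # encode(0) with 2 flips: detectable AND correctable

print(detect(w4), correct(w2))  # → True 0
```

This also shows why the two strategies exclude each other: on `w4` the detector fires, but running the corrector anyway would silently decode the wrong bit.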
High Performance Computing

Course format: 3 hours lecture per week and 3 hours lab per week. Course placement: HPC is a core course offered to third-year students of the B.Tech Hons. ICT (Minor in Computational Science) program.

Curriculum details: This course is an introduction to parallel computing and aims at teaching basic models of parallel programming, including the principles of parallel algorithm design, parallel computer architectures, performance considerations, programming models for shared- and distributed-memory systems, and message passing programming models used for cluster computing, along with some important algorithms for parallel systems, closing with a brief overview of HPC applications and future trends. A major part of the course is the lab component, with actual implementation after learning the basics of parallel programming and HPC. Course details are summarized in the Appendix.

Appendix: Detailed Course Contents (includes lab)

Introduction to parallel programming: Overview of the latest parallel machines and architectures. Introduction to high performance computing. Parallel programming concepts. Need for parallel computing. Limitations. Parallel programming languages. Parallel libraries. Amdahl's law, speedup. Basics of parallelization.

Optimization and performance considerations: Improving performance on a single processor: basic optimization techniques for serial code. Measuring performance, parallel-serial problem breakdown, bandwidth measures, thread synchronization, memory structures and bandwidth optimization, performance improvements. Optimization, performance analyzer tools, debugging. Amdahl's law, Gustafson's law, Karp–Flatt metric, isoefficiency metric.

Shared memory parallel programming: Multi-threading model using OpenMP. OpenMP: parallel do, private variables, nested loops, reductions, loop dependencies, thread-safe functions, parallel sections, and barriers.

Distributed memory parallel programming: Message passing programming (important implementations using MPI). MPI send and receive, MPI communicators, broadcast, reduce. Performance properties.

Case studies: Several important parallel algorithms and implementation strategies from different classes of problems, such as integration using the trapezoidal rule, calculation of pi using the Monte Carlo method, matrix operations, inclusive and exclusive scan, the Fibonacci series, image processing, cellular automata, sorting algorithms, solution of differential equations using finite differences, etc. Hybrid parallelization with MPI and OpenMP.

Assessment:
- Two mid-semester examinations
- 8 lab assignments
- Final project and presentation

HPC projects (compulsory): individual/group. Lectures will be supplemented with slides for the important concepts.

Texts:
- An Introduction to Parallel Programming; Peter S. Pacheco; Elsevier.
- Scientific Parallel Computing; Babak Bagheri, Terry Clark, L. Ridgway Scott; Princeton University Press.
- Parallel Programming; Barry Wilkinson; Pearson Education.
- Introduction to High Performance Computing for Scientists and Engineers; G. Hager & G. Wellein; CRC Press.
- Algorithms Sequential & Parallel: A Unified Approach; Russ Miller; Cengage; ISBN 9788131525050.
- Parallel Programming in C with MPI and OpenMP; Michael J. Quinn; McGraw-Hill Higher Education.
- Parallel Computing Theory and Practice; Michael J. Quinn; McGraw Hill Education (India).
IF function with time Picture of cells (https://i.gyazo.com/ac6db30cccd2047df33560125a8177a1.png) The cells content: C1: 15:00 C2: 22:00 C1 and C2 are the start time and end time of a work day. And FYI for those not knowing what these numbers might mean: 22:00 = 10pm, 10:00 = 10am. My function in cell C3 should be the following: if I am working between C1 and C2, then I want C3 to show the number of hours past 19:00; in my case C2 is 22:00, so C3 should say 3. How to do it? Right now I have this simple function which I just tried in C3, =IF(C2=TIMEVALUE("22:00:00");3;0), and it does not seem to work; it says 0 in the cell, which means it does not really recognize that C2 says 22:00. The TIMEVALUE function converts your time text to a serial number, so it will not know what 22:00:00 means. I think you want to use the NOW() function to test if a time is between the start and end times. Okay, so if I make it "=IF(C3="22:00:00";3;0)" it still will not work (made an edit to the post), but to simplify it for you guys: I want C3 to become 3 if I have C2 = 22:00:00 (cell formatted as time). It would be awesome if I could make C3 count hours from 19:00 onward, if you know any function to make that happen. You don't need to simplify, the respondents in this forum are pretty smart. Here's a formula that works: D1=IF(AND(C1>A1,C1<B1),3,0). A1 is your start time, B1 is your end time, C1 is whatever time you are testing. In order to use a fixed time in a formula, you could do =IF(C2=VALUE("22:00:00");"It's 10pm";"other time") - you actually got close, but it's VALUE I would try. When you want to figure out how long after 7pm you've stayed in, try =(C2-VALUE("19:00:00"))*24 (and format the cell as a number) Thanks for trying to help me, but I was not able to describe my problem really well.
I have it fixed now; I just need help with one more thing, and that is to be able to show hours and minutes in a cell as "3,5" instead of "3:30". The formatting tool did not quite help.

Sorry for the confusion then; I just added calculating the time difference to the answer. Hope this helps you move forward...

Thanks, works great!! :) But I stumbled upon a little problem: if I left a lot earlier, let us say I started at 07:00 and left at 12:30, then it will calculate as -6,5 :/

I found a solution to my problem through your answers and Sam's answers. I compiled what I learnt from both of you guys and used common sense to get it to work. Huge thanks! :D Your recent reply made my own messy code much nicer and it works like a charm, thanks a lot! :)

@ViktorKrum Instead of having the (Solved) in the title, you should accept the answer by clicking the gray checkmark sign. And BTW, please remove the (Solved) from the title.

Entering a time inside the formula presents problems due to the formatting: the colon will make the formula think you want to do something else, and quotes will change the format to text. Here's a formula that works: D1=IF(AND(C1>A1,C1<B1),3,0), where A1 is your start time, B1 is your end time, and C1 is whatever time you are testing. To calculate the difference between the current time C1 and the end of your work day, just change the 3 to SUM(C1,-B1) and format as a time value (HH:MM). To add cuteness, change C1 to calculate the current time as HH:MM:SS so you can tell exactly how long until you go home from work: C1=TIMEVALUE(HOUR(NOW())&":"&MINUTE(NOW())&":"&SECOND(NOW())) (NOTE: there's probably a more elegant way to do this; I'll post later if I figure it out).

The formula did not quite work. Here is a snapshot: https://i.gyazo.com/7f5b1da31577e992f276130fa21f369c.png I appreciate you trying to help me, since I am a newbie. I fixed it; I needed to use semicolons instead: =IF(AND(B2>C2);3;0).
Thanks, the function now works, but could you help me with my main problem, which will make this formula work professionally? I want it to count the number of hours and minutes from 19:00 until 00:00. So if the end time is 22:30, then D1 should say 3,5.

Now I understand the purpose of the IF statement output. That's easy to do; I'll edit and re-post. Don't forget to accept if it works, and if a better solution doesn't come along (highly likely).

No need to figure out anything more than how to get D1 to show the hours as 3,5 instead of 3:30; I have D1 formatted as t:mm. To explain what column D calculates: it calculates the amount of hours I am being overpaid due to the late shift, counting from 19:00. I then want to be able to sum up all the cells in column D into another cell (that I can do, with the SUM formula). I appreciate your help a lot; I upvoted your answer! :D

Details! Change the formula in D1 to =IF(AND(C1>=A1,C1<=B1),SUM(C1,-A1),0), where A1 is the overtime start (I thought it was a shift start earlier) and B1 is the end time; then set formatting to Custom, hh,mm.

I found a solution to my problem through your answers and Sebastian Rothbucher's answers. I compiled what I learnt from both of you guys and used common sense to get it to work. Huge thanks! :D
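For reference, the arithmetic the final formula performs (decimal hours after the overtime start, e.g. 3.5 instead of 3:30) can be sketched outside the spreadsheet. This Python sketch is only an illustration of the logic discussed in the thread, not part of it; the function name is made up:

```python
from datetime import datetime

def overtime_hours(end_time, overtime_start="19:00"):
    """Return the decimal hours worked after `overtime_start`,
    or 0.0 if the shift ended before overtime began."""
    fmt = "%H:%M"
    end = datetime.strptime(end_time, fmt)
    start = datetime.strptime(overtime_start, fmt)
    if end <= start:
        return 0.0  # left before overtime started, never negative
    return (end - start).total_seconds() / 3600.0

print(overtime_hours("22:30"))  # 3.5
print(overtime_hours("12:30"))  # 0.0
```

The `end <= start` guard corresponds to the AND() condition in the spreadsheet answer, which is what prevents the negative result (-6,5) mentioned earlier in the thread.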
import math
import random


class Pack:
    """Geometry manager Pack."""

    def pack(self, expand=False, side="top", fill="none"):
        assert isinstance(expand, bool), \
            "Parameter 'expand' must be a boolean!"
        assert side in ("top", "bottom", "left", "right"), \
            "Parameter 'side' must be one of the following 'top', 'bottom', 'left', or 'right'."
        assert fill in ("none", "both", "x", "y"), \
            "Parameter 'fill' must be one of the following 'none', 'both', 'x', or 'y'."

        if self in self.parent.packed_items:
            return False

        self.parent.packed_items.append(self)
        self.cmdui_obj.visable_widgets.append(self)

        self.expand = expand
        self.side = side
        self.fill = fill


class Frame(Pack):

    def __init__(self, parent, x=0, y=0, width=0, height=0):
        self.parent = parent
        self.cmdui_obj = self.parent.cmdui_obj
        self.x = x
        self.y = y
        self.width = width
        self.height = height
        self.packed_items = []
        self.expand = False
        self.side = "top"
        self.fill = "none"

    def draw(self):
        pass

    def undraw(self):
        pass

    def paint_background(self, color):
        self.cmdui_obj.console_manager.color_area(
            self.x, self.y, self.width, self.height, color
        )

    def update_pack(self, force_draw=False):
        # https://www.tcl.tk/man/tcl8.6/TkCmd/pack.htm

        # PASS #1
        width = max_width = 0
        height = max_height = 0
        for widget in self.packed_items:
            if widget.side == "top" or widget.side == "bottom":
                tmp = widget.width + width
                if tmp > max_width:
                    max_width = tmp
                height += widget.height
            else:
                tmp = widget.height + height
                if tmp > max_height:
                    max_height = tmp
                width += widget.width
        if width > max_width:
            max_width = width
        if height > max_height:
            max_height = height

        # Expand window or frame if required...
        if max_width > self.width:
            self.width = max_width
            self.parent.update_pack()
        if max_height > self.height:
            self.height = max_height
            self.parent.update_pack()

        # If window size already changed then just stop and try again in a mo...
        # if max_width != self.width or max_height != self.height:
        #     self.update_pack()

        # PASS #2
        cavity_x = self.x
        cavity_y = self.y
        cavity_width = self.width
        cavity_height = self.height
        for widget_num, widget in enumerate(self.packed_items):
            if widget.side == "top" or widget.side == "bottom":
                frame_width = cavity_width
                frame_height = widget.height
                if widget.expand:
                    frame_height += self.y_expansion(widget_num, cavity_height)
                cavity_height -= frame_height
                if cavity_height < 0:
                    frame_height += cavity_height
                    cavity_height = 0
                frame_x = cavity_x
                if widget.side == "top":
                    frame_y = cavity_y
                    cavity_y += frame_height
                else:
                    frame_y = cavity_y + cavity_height
            else:
                frame_height = cavity_height
                frame_width = widget.width
                if widget.expand:
                    frame_width += self.x_expansion(widget_num, cavity_width)
                cavity_width -= frame_width
                if cavity_width < 0:
                    frame_width += cavity_width
                    cavity_width = 0
                frame_y = cavity_y
                if widget.side == "left":
                    frame_x = cavity_x
                    cavity_x += frame_width
                else:
                    frame_x = cavity_x + cavity_width

            widget.pack_frame = [frame_x, frame_y, frame_width, frame_height]

            new_wx = math.floor((frame_width / 2) - (widget.width / 2)) + frame_x if widget.width <= frame_width else frame_x
            new_wy = math.floor((frame_height / 2) - (widget.height / 2)) + frame_y if widget.height <= frame_height else frame_y

            if not force_draw and new_wx == widget.x and new_wy == widget.y:
                return

            widget.undraw()
            widget.x = new_wx
            widget.y = new_wy
            widget.draw()

            if isinstance(widget, Frame):
                widget.update_pack(force_draw=True)

    def x_expansion(self, widget_num, cavity_width):
        minExpand = cavity_width
        num_expand = 0
        for widget_n in range(widget_num, len(self.packed_items)):
            widget = self.packed_items[widget_n]
            child_width = widget.width
            if widget.side == "top" or widget.side == "bottom":
                if num_expand:
                    cur_expand = (cavity_width - child_width) / num_expand
                    if cur_expand < minExpand:
                        minExpand = cur_expand
            else:
                cavity_width -= child_width
                if widget.expand:
                    num_expand += 1
        if num_expand:
            cur_expand = cavity_width / num_expand
            if cur_expand < minExpand:
                minExpand = cur_expand
        return int(minExpand) if not (minExpand < 0) else 0

    def y_expansion(self, widget_num, cavity_height):
        minExpand = cavity_height
        num_expand = 0
        for widget_n in range(widget_num, len(self.packed_items)):
            widget = self.packed_items[widget_n]
            child_height = widget.height
            if widget.side == "left" or widget.side == "right":
                if num_expand:
                    cur_expand = (cavity_height - child_height) / num_expand
                    if cur_expand < minExpand:
                        minExpand = cur_expand
            else:
                cavity_height -= child_height
                if widget.expand:
                    num_expand += 1
        if num_expand:
            cur_expand = cavity_height / num_expand
            if cur_expand < minExpand:
                minExpand = cur_expand
        return int(minExpand) if not (minExpand < 0) else 0


class Widget(Pack):

    def __init__(self, parent, x=0, y=0):
        self.parent = parent
        self.cmdui_obj = self.parent.cmdui_obj
        self.x = x
        self.y = y
        self.display = ""
        self.pack_frame = [0, 0, 0, 0]
        self.expand = False
        self.side = "top"
        self.fill = "none"

    def re_pack(self):
        # Need to check if the windows new position is too big for the current frame!
        if self.width > (self.pack_frame[2] - self.pack_frame[0]) or self.height > (self.pack_frame[3] - self.pack_frame[1]):
            self.cmdui_obj.update_pack(force_draw=True)
            return

        new_wx = math.floor((self.pack_frame[2] / 2) - (self.width / 2)) + self.pack_frame[0] if self.width <= self.pack_frame[2] else self.pack_frame[0]
        new_wy = math.floor((self.pack_frame[3] / 2) - (self.height / 2)) + self.pack_frame[1] if self.height <= self.pack_frame[3] else self.pack_frame[1]

        self.undraw()
        self.x = new_wx
        self.y = new_wy
        self.draw()

    def draw(self):
        x_coord = self.x if self.x > 0 else 0
        y_coord = self.y if self.y > 0 else 0
        for i, segment in enumerate(self.display):
            self.cmdui_obj.console_manager.print_pos(x_coord, y_coord + i, segment)

    def undraw(self):
        x_coord = self.x if self.x > 0 else 0
        y_coord = self.y if self.y > 0 else 0
        for i, segment in enumerate(self.display):
            self.cmdui_obj.console_manager.print_pos(x_coord, y_coord + i, " " * len(segment))

    def check_inside(self, x, y):
        if x >= self.x and x < self.x + self.width and \
           y >= self.y and y < self.y + self.height:
            return True
        else:
            return False

    def check_hover(self, x, y):
        pass

    def on_press(self, x, y):
        pass

    def on_release(self):
        pass

    @property
    def width(self):
        return len(self.display[0])

    @property
    def height(self):
        return len(self.display)
We’re probably all getting familiar with the idea of how Docker works these days; it’s been turning up in presentations more and more over the past year or so. Basically, it creates containers, which are a bit like virtual machines, and multiple containers can run on a single operating system. Not an idea that’s too unfamiliar to mainframers. However, it’s worth noting that a Docker container, unlike a virtual machine, does not require or include a separate operating system, and avoids the overhead of starting and maintaining true virtual machines.

Docker was originally developed to work on Linux, and that’s why it can run on Linux on IBM Z. The Docker program performs operating-system-level virtualization, which is also known as containerization. It uses the resource isolation features of the Linux kernel, such as cgroups and kernel namespaces, and a union-capable file system such as OverlayFS (among others) to allow these independent ‘containers’ to run within a single Linux instance. Docker implements a high-level API to provide lightweight containers that run processes in isolation.

Docker enables independence between applications and infrastructure, so developers and IT ops can unlock their potential, and it creates a model for better collaboration and innovation. Packaging existing apps into containers immediately improves security, reduces costs, and gains cloud portability. This transformation applies modern properties to legacy applications without needing to change a single line of code.

Docker Compose is a tool for defining and running multi-container Docker applications; it uses YAML files to configure the application’s services and performs the creation and start-up of all the containers with a single command. If you’ve not come across YAML before, it stands for the recursive “YAML Ain’t Markup Language”, and it is a human-readable data serialization language.
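As an illustration of the Compose workflow just described, a minimal docker-compose.yml might look like the following sketch. The service names and images here are made up for the example, not taken from the article:

```yaml
version: "3"
services:
  web:
    image: nginx:alpine        # example image, for illustration only
    ports:
      - "8080:80"
  db:
    image: postgres:13         # example image, for illustration only
    environment:
      POSTGRES_PASSWORD: example
```

Running `docker-compose up` against such a file then pulls the images and starts both containers with a single command, which is exactly the behaviour described above.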
The docker-compose CLI utility allows users to run commands on multiple containers at once, for example building images, scaling containers, running containers that were stopped, and more.

Docker Swarm provides native clustering functionality for Docker containers, turning a group of Docker Engines into a single, virtual Docker Engine. Swarm mode is integrated with Docker Engine. The swarm CLI utility allows users to run Swarm containers, create discovery tokens, list nodes in the cluster, and more.

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups the containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community. And, they say, because it is designed on the same principles that allow Google to run billions of containers a week, Kubernetes can scale without increasing your ops team.

Kubernetes, like Docker Swarm, provides an orchestration layer for containers. And, announced at the end of last year, the newest version of Docker Enterprise Edition integrates Kubernetes into the platform. Some people are suggesting that Kubernetes may even replace Swarm.

Some of Kubernetes’ features include:

- Automatic binpacking – automatically placing containers based on their resource requirements and other constraints, while not sacrificing availability. It mixes critical and best-effort workloads in order to drive up utilization and save even more resources.
- Self-healing – restarting containers that fail, replacing and rescheduling containers when nodes die, killing containers that don’t respond to your user-defined health check, and not advertising them to clients until they are ready to serve.
- Horizontal scaling – scaling your application up and down with a simple command, with a UI, or automatically based on CPU usage.
- Service discovery and load balancing – no need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives containers their own IP addresses and a single DNS name for a set of containers, and can load-balance across them.
- Automated rollouts and rollbacks – Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn’t kill all your instances at the same time. If something goes wrong, Kubernetes will roll back the change for you. Take advantage of a growing ecosystem of deployment solutions.
- Secret and configuration management – deploying and updating secrets and application configuration without rebuilding your image and without exposing secrets in your stack configuration.
- Storage orchestration – automatically mounting the storage system of your choice, whether from local storage, a public cloud provider such as GCP or AWS, or a network storage system such as NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker.
- Batch execution – in addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired.

So it’s now possible to easily orchestrate Docker containers on mainframes running Linux on IBM Z. There are also Windows and Mac versions of Docker, but they don’t have as many facilities and features.

Find out more about iTech-Ed here.
Found the image caption generator pretty cool; I would work on something similar soon! The key difference between supervised and unsupervised learning is that the data are not labeled in unsupervised learning. Build things. Papers: https://arxiv.org/abs/1406.2661, https://arxiv.org/abs/1605.05396. Already, deep learning is enabling self-driving cars, smart personal assistants, and smarter Web services. Anyway, better late than never. From this corpus the relationship between the pen movement and the letters is learned, and new examples can be generated ad hoc. The process continues until it reaches the top level in the hierarchy, where the network has learned to identify cats. This is an interesting task, where a corpus of text is learned and from this model new text is generated, word by word or character by character. I would love to see this work combined with some forensic handwriting analysis expertise. I'm not sure I follow your question; perhaps you can restate it? Generally, the systems involve the use of very large convolutional neural networks for object detection in the photographs and then a recurrent neural network such as an LSTM to turn the labels into a coherent sentence. This method of training is called supervised learning. This is very useful and interesting. Thanks for this informative article. This very difficult task is the domain of deep reinforcement models and is the breakthrough that DeepMind (now part of Google) is renowned for achieving. Google's search engine, voice recognition system, and self-driving cars all rely heavily on deep learning.
This capability leverages the high-quality and very large convolutional neural networks trained for ImageNet and co-opted for the problem of image colorization. Below is the list of the specific examples we are going to look at in this post. Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model for creating human-like text with deep learning technologies. The AlphaGo program crushed Lee Sedol, one of the highest-ranking Go players in the world. Other forms of machine learning are not nearly as successful with unsupervised learning. It might be time for me to create a new list; thanks for the ping. Hi hamid, I don't have an example of deep learning for recommender systems. Deep learning recently returned to the headlines. For example, Google uses DL to build powerful voice- and image-recognition algorithms. I am also very interested in applying deep learning, especially image recognition, to the diagnosis field. They've used deep learning networks to build a program that picks out an attractive still from a YouTube video to use as a thumbnail. Customers can use pictures rather than keywords to search a company's products for matching or similar items. “Deep Learning with PyTorch” uses fun, cartoonish depictions to show how different deep learning techniques work. For example, the network learns something simple at the initial level in the hierarchy and then sends this information to the next level. Written by: Zach Zorich © 2020 The New York Times. Deep Learning With Python. Deep video analysis can save hours of manual effort required for audio/video sync and its testing, … Thank you for the information. How can we download it? Dataset: Chatbot Using Deep Learning Dataset. Late last year Google announced Smart Reply, a deep learning network that writes short email responses for you.
The program learns to associate this distinctive combination of features with the word "cat". Chatbots can be implemented in various ways, and a good chatbot also uses deep learning to identify the context of what the user is asking and then provide the relevant answer. WekaDeeplearning4j: Deep Learning using Weka. Deep learning can use the objects and their context within the photograph to color the image, much like a human operator might approach the problem. Quantitative-finance-papers-using-deep-learning: Background. Do you think machine learning and time series methods are better suited to prediction/forecasting problems involving regression? Example of Object Detection within Photographs. Taken from the Google Blog. Frankly, to an old AI hacker like me, some of these examples are a slap in the face. Sample of Automatic Handwriting Generation. Are deep learning methods suited for non-vision, non-audio problems? Parametric Monkey, my musical identity, can be streamed on Spotify, Google Play Music, YouTube, and others. It requires stories, pictures, and research papers. Even though the pictures of cats don't come with the label "cat", deep learning networks will still learn to identify the cats. The question isn't whether or not deep learning is useful; it's how you can use deep learning to improve what you're already doing, or to gain new insights from the data you already have. I'm a cognitive scientist, retired professor, musician, gamer, and avid cyclist with a B.A. in History, an M.S. in History and Philosophy of Science, and a Ph.D. in Cognitive Psychology. This learning process is usually called constructing a model of a cat.
Dear sir, I am very interested in learning machine learning and deep learning, and I want to do some real-time projects for the purpose of a software job. Please guide me on what skills I need to learn and how I can learn real-time projects on ML and DL. Your … Large recurrent neural networks are used to learn the relationship between items in the sequences of input strings and then generate text. Stacked networks of large LSTM recurrent neural networks are used to perform this translation. Keywords: cyber security, cybercrime, malicious URL, machine learning, deep learning, character embedding. It can be used on standard tabular data, but you will very likely do better using xgboost or more traditional machine learning methods. The ability to learn from unlabeled or unstructured data is an enormous benefit for those interested in real-world applications. The Deep Learning with Python EBook is where you'll find the Really Good stuff. This post was updated on April 5 to remove the reference to Ersatz, a deep-learning company that is now out of business. At its simplest, deep learning can be thought of as a way to automate predictive analytics. Netflix and Amazon use DL in their recommendation engines, and MIT researchers use DL for predictive analytics. Machine learning programs can be trained in a number of different ways. Discover how in my new Ebook: Say, for a typical time series, do you think deep learning outperforms traditional time series and machine learning methods? State-of-the-art results have been achieved on benchmark examples of this problem using very large convolutional neural networks. Examples of using deep learning in Bioinformatics: this work has been officially published, but we will keep updating this repository to keep up with the most advanced research. Example of Object Classification. Taken from ImageNet Classification with Deep Convolutional Neural Networks.
Deep learning unlocks the treasure trove of unstructured big data for those with the imagination to use it. Dear Jason, this is one of the best posts I have gone through, and the topics are quite wide; they can be further divided into many research projects. I feel you should give us some insights into healthcare, etc. Sorry, I am no longer an academic; my focus is industrial machine learning. But the opportunities aren't limited to a few business-specific areas.
How To Read A Json File In Python?

If you’re working with JSON data in Python, there are a number of ways to read it into your program. In this article, we’ll take a look at two different ways to read JSON files in Python.

The first way is to use the built-in Python json module. The json module allows you to serialize and deserialize JSON data, and you can use it to read JSON files into your program. The second way is to use the third-party Python package pandas. Pandas is a powerful data analysis and manipulation library that makes it easy to work with tabular data; with pandas, you can easily read JSON files into your program. Let’s take a look at how to use each of these methods to read JSON files in Python.

What is JSON?

JSON (JavaScript Object Notation) is a lightweight, language-independent text format for exchanging structured data.

How to read JSON in Python?

To read JSON in Python we need to import the json module. The json module provides an API for parsing and converting JSON data to and from Python objects. We can use the json.load() method to read a JSON file and convert it into a Python dictionary. We can also use the json.loads() method to read a JSON string and convert it into a Python dictionary. The following example shows how to read a JSON file and convert it into a Python dictionary:

import json

with open('data.json') as json_file:
    data = json.load(json_file)

Why read JSON in Python?

There are a few reasons why you might want to read JSON files in Python. Firstly, JSON files are often used to store data that is fetched from an API. Secondly, they can be used to store configuration data for your Python program. Finally, JSON files can be used to transfer data between two programs written in different languages.

What are the benefits of reading JSON in Python?

There are many benefits of reading JSON in Python. JSON is a language-independent format that allows for easy data interchange between different systems. Python is a powerful programming language with many libraries and tools that allow you to work with JSON data.
When you read JSON in Python, you can access data quickly and easily.

How to use JSON in Python?

What are the challenges of reading JSON in Python?

There are a few challenges that can arise when reading JSON in Python. One common issue is that JSON files can be quite large and bulky, which can make processing them slower and more difficult. Another challenge is that JSON files can be nested, which means there can be multiple layers of data to process. This can make working with JSON data quite difficult for beginners.

In this article, we looked at how to read a JSON file in Python. While the process is relatively simple, it can be time-consuming and prone to errors. To avoid these potential problems, we recommend using a tool like the jsonviewer extension for Google Chrome. This extension will allow you to easily view and edit JSON files in your browser.
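To illustrate the json.load()/json.loads() usage discussed above, here is a small self-contained sketch. The file contents and key names are made up for the example; a temporary file stands in for the data.json mentioned earlier:

```python
import json
import tempfile

# Parse a JSON string into a Python dictionary with json.loads()
config = json.loads('{"debug": true, "retries": 3}')
print(config["retries"])  # 3

# Write a dictionary to a JSON file, then read it back with json.load()
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"name": "example", "values": [1, 2, 3]}, f)
    path = f.name

with open(path) as json_file:
    data = json.load(json_file)

print(data["values"])  # [1, 2, 3]
```

Note how JSON's `true` becomes Python's `True` and JSON arrays become Python lists; the json module handles these conversions automatically in both directions.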
Well, moving on from the Sparkline shape in my previous post, this time I thought I’d look at how to go about building a Bullet Graph using Visio. To quote its inventor, Stephen Few, a Bullet Graph “is designed to display a key measure, along with a comparative measure and qualitative ranges to instantly declare if the measure is good, bad or in some other state.” If you want more details on Bullet Graphs or dashboard design in general, then I recommend you buy his book “Information Dashboard Design”. For this post I’m going to concentrate on how to build the shape itself…

If you just want to download the shape, then here it is (note: this shape uses Visio 2007 ShapeSheet functions, so you may need to play around with the colour formulae in particular to get it to work in 2003):

Shape functionality and structure

As with the Sparkline example, this shape is a group shape containing a number of child shapes. The main elements can be seen as follows:

The visibility of the label, scale markers and target marker can all be toggled via the shape’s context menu and the shape data window. The user can also either opt to have an automatic scale (which is the range divided by five, as above) or set their own custom scale, which can take up to five strings.

It’s probably easier to see how the shape is constructed if we break it apart. In the following image you can see that the top shape is the parent group shape, and this contains two geometry sections to display the graph value bar or dot (not shown). Subsequent child shapes sit underneath the parent and hold the target marker and three zone shapes. In terms of text, the parent shape holds the main label, while a further five shapes (not shown above) sit right at the back to hold the five scale points and their respective tick geometries.

Text – Text scaling works in a similar manner to the Sparkline shape, with the scale markers’ font size linked to that of the main shape via a user cell factor (User.ScaleToLabelFontRatio).
This can be changed as required.

Label position – The label’s position can be set to Left, Top, Bottom or Right via the shape data window, or to a custom placement using the yellow control handle, to which the shape’s text block position is tied. This works by having a user cell (User.LabelPositionTrigger) monitor the selected position (User.LabelPositionIdx); when this changes, new coordinates are pushed into the control handle’s X/Y cells. The formula is as follows:

If that gives you eye strain, then I apologise. I’m not really expecting you to read through this formula, but I’m showing it as it highlights some rules that I try to follow when dealing with complex ShapeSheet formulae and debugging:

- Try to break up long formulae by referencing intermediate user cells. In the above example, I could potentially have combined the last two cell references into a new one called ScaleHeight.
- Use meaningful cell names. As with normal coding, there is a balance to be struck between overlong verbose names and abbreviated terse naming. On the one hand you extend your formulae so that you can’t view them without scrolling, and on the other, when you return to your code in the future, you’ll have trouble remembering what all those abbreviations actually meant.
- Use Notepad to construct and break down formulae. I find it much easier, when trying to understand what a very long formula is doing, to add line breaks in Notepad at appropriate points and then remove them again before pasting the formula back into Visio. Here’s an image of what I’m talking about:

A last point about this formula is that you’ll notice I’ve highlighted the ‘DEPENDSON(User.LabelPositionIdx)+’ part in blue. The reason is just to note that it’s not really required. The DEPENDSON() ShapeSheet function creates a dependency on another (target) cell that wouldn’t ordinarily exist.
One use of this function might be to push an actual formula (rather than a value) into another cell; to do this you’d have to put the formula that you want to push into a string. This removes the dependency, and so you would need to use the function to create it. In the above case I’m using a calculated value, based on User.LabelPositionIdx, and so whenever this cell changes the formula would be evaluated anyway. You could take away the DEPENDSON() function and the remaining formula would work perfectly well without it.

Colour – The Bullet Graph shape is designed to use a monochromatic colour scheme. As Few points out, this makes it easy to read if you are colour blind, and of course for black and white printing. I’ve added a number of user cells to deal with the shape’s colour, with the main colour being applied to the value bar and target marker, and Zones 1–3 referencing cells that generate a tint of that main colour.

The TINT() ShapeSheet function increases a colour’s luminosity, or lightness, in a range of 0 to 240. Here is an image of five shapes that were originally set to blue 'RGB-0,0,255' and then had their luminance value adjusted to the respective point:

The TINT() function, along with its SHADE() counterpart (which adjusts the luminosity value down, ie towards black), is very handy but requires a little extra thought. Out of the box, the TINT() function works perfectly with colours that have a low luminance value (ie dark colours), but if you start off with a light colour, for example with a luminance value of 180:

…you only have a range of about 40 or 50 points before you end up with a white colour, and this means that you can’t just increase the luminance by a set value. My answer to this problem is to test the incoming main colour and, if its luminance is greater than 160 (you can pick your own arbitrary figure), reset it to 70 (ie a darker value).
The result should be that for darker colours you just work with that colour and take three tints of it, and for lighter ones you create a darker set of tints. This ensures that there is reasonable contrast between the value markers and the three zones in both dark and light cases. Here’s an example:

The formula that achieves this luminosity coercion is the one in User.BaseZoneColor:

Well, I’ve tried to stay as close to Stephen Few’s design as possible, but you can easily extend the shape, for example by adding additional zone shapes and scale markers etc. In any case it’s a great shape to pick through with the Drawing Explorer and Formula Tracing window to understand Visio a little better. If you’re interested in more Visio colour-related information, then take a look at these links:
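The tint-and-coerce idea described above (lighten the base colour unless it is already light, in which case start from a darker value first) can be sketched outside Visio. This Python version using the standard colorsys module is only an illustration of the logic, not the actual User.BaseZoneColor formula; the function name and step sizes are made up, while the 160/70 thresholds are the ones from the text, rescaled to Visio's 0-240 luminosity range:

```python
import colorsys

def zone_tints(r, g, b, steps=(40, 70, 100)):
    """Return three progressively lighter tints of an RGB colour
    (components 0-255), mimicking Visio's TINT() on its 0-240
    luminosity scale. If the base colour is already light
    (luminosity > 160), coerce it down to 70 first so the tints
    keep contrast, as described in the text."""
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    lum = l * 240              # express lightness on Visio's 0-240 scale
    if lum > 160:
        lum = 70               # coerce light base colours to a darker value
    tints = []
    for step in steps:
        new_l = min(lum + step, 240) / 240
        nr, ng, nb = colorsys.hls_to_rgb(h, new_l, s)
        tints.append((round(nr * 255), round(ng * 255), round(nb * 255)))
    return tints

print(zone_tints(0, 0, 255))   # three progressively lighter blues
```

Starting from pure blue (a dark colour, luminosity 120), the three zones come out as successively lighter blues; starting from a pale colour, the coercion to 70 rebuilds the zones from a darker base so the value bar still stands out.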
STEP IT Academy | We have been teaching since 1999. High-quality IT education for adults and children. We prepare programmers, designers and system engineers who cannot be replaced by artificial intelligence. In order to achieve this, we teach how to understand tasks, run projects and work in a team, in addition to core knowledge.

Subscribe your child to the Clubs & Labs programs by STEP Academy and let them discover new topics in a fun way. This playful approach helps us to encourage such important skills as creativity, critical thinking and teamwork.

Our teachers have launched their own startups, have a minimum of 5 years' experience in IT, and an average of 2 years' teaching experience.

Cyril
Birth date: 27 June 1984 (36 y.o.)
Education: Art studies at "Ecole des arts décoratif de Genève"
Profession: Multimedia designer ("Concepteur multimédia")
Work experience:
- Digital project creator at the school "EAD Genève" (CFC).
- Migros Genève (printer, electronics seller, content creator for motion design, social media consulting).
- Freelance (Davidoff, University of Geneva (Bioscope), Sculpteo, Jardin botanique Genève, Hagerty, Enoil Bioenergies).
- Teacher at the adult private school "Ecole-club Migros Genève" (since 2016).
Interests: Art, design, 3D, development, video games, blockchain, AR, VR, 3D printing, hacking, digital marketing, social media, science, learning, low tech, society structure, economics, politics.
At the age of 15, Cyril created an MMORPG project called "Nilorea le destin des runes", which was developed for 8 years under his guidance and team leadership; the UI, 3D models, icons, character designs etc. were created by his team.

Mira El HAMDAOUI
Birth date: 09 March 1994 (26 y.o.)
Has been working for IT STEP for 1.5 years, teaching most of the classes, such as:
• Sports (stretching and team games).
• Game development (Minecraft, Roblox, Scratch, Create your own video game).
• Robotics (LEGO Mindstorms, LEGO WeDo).
• Filmmaking (shooting, editing and special effects).
• 3D design and 3D printing.
Skills: Science and technologies, management and childcare
Interests: New technologies, planetary discoveries and sport
Subjects of teaching: LEGO Robotics: Level 1, LEGO Robotics: Level 2, Programming in Python - Junior, Programming in Python - Middle, Makeblock Robotics: Scratch, Makeblock Robotics: C, Microelectronics with Arduino.

Birth date: 28/07/1992
Education: Geneva school of engineering, architecture and landscape (Haute École du paysage, d'ingénierie et d'architecture de Genève)
Profession: Embedded engineer
Interests: Robotics science, games and climbing
Subjects of teaching: YouTube and filmmaking

"I have a son of 7 years old, and this is a perfect match for his passion for computers, as well as for my beliefs as a parent: the future is in tech. If you want your kids to be ready for their future, you should consider IT education at an early age. And if you are worried about the PC, tablet or cellphone in your kids' hands, then it is even more important that they understand that information technology is not only that."