Services, no more? from the CommonsWare Community archives

At October 10, 2020, 3:51pm, vedy asked: I know you advise against background services, but I just wanted to get more clarity on your position. The Android platform has the Services API:

1. To run a service in the background (on a service thread), initiated by an activity, for the duration of the life of that particular activity. Is this still valid? Isn't that what coroutines do now?

2. To run a service in the background independent of activities (event-based or on a schedule). My understanding is that this is what you advise against, and you prefer that this be done either using push messaging from the cloud or using the official WorkManager.

So, in all, is creating services based on the Android Service API still relevant?

At October 10, 2020, 5:20pm, mmurphy replied: IMHO, that was never valid. We did some of that in the very early years, while we were coming to grips with the SDK and while alternatives were clumsy or non-existent. But I have been steering developers away from that since at least 2014. Any threading option works. Services are not background threads, and if you have UI in the foreground, you do not need a service to keep your process around.

For scheduled work, yes, those are the current right answers. In the case of push messaging, that would replace scheduled polling of a server. However, bear in mind that neither offers precise timing, and neither is reliable in the face of Doze mode, app standby, and aggressive battery management policies from various device manufacturers. So, I steer developers away from trying to rely on services (or anything else) for periodic work.

Roughly speaking, there are two categories of service in the modern era:

1. A specific type of service, usually with a specific superclass, that handles a specific OS integration scenario. Examples include TileService (for notification shade tiles), RemoteViewsService (for supporting certain types of app widgets), and InputMethodService (for writing soft keyboards). In these cases, the OS supplies a base service class, and the OS binds to that service. We extend that base class and implement a specific API for whatever role we are trying to fill. Because the OS binds to these services, they will run as long as the OS wants them to run, where that duration will vary with the type of service.

2. A Service for app use. Nowadays, these usually need to be foreground services, tied to a Notification, in order to be able to run for app-determined amounts of time. So, for example, for my current consulting client, we have a foreground Service that runs for an app-specified amount of time. The reason for the service is that the app integrates with heart rate monitors via Bluetooth, and we need to forward that heart rate data along. However, the user may not be actively using their device after starting the initial data collection, so a foreground service helps ensure that our process sticks around to collect this data for as long as we need (an hour or less, in this case). That's a legitimate and necessary reason for having a Service, the cost being that it has to be a foreground service on modern versions of Android.

So, rolling back to your question: a foreground service that reacts to events — such as getting heart rate monitor data off of a Bluetooth connection — is reasonable.
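For reference, the app-use scenario described above (a foreground Service tied to a Notification) tends to look roughly like the following sketch. This is framework boilerplate, not code from the discussion: the class name, channel id, and notification text are all illustrative.

```java
// Minimal foreground-service sketch using the standard android.* APIs.
// HeartRateService, the channel id, and the notification strings are
// illustrative names, not taken from the discussion above.
public class HeartRateService extends Service {
    private static final String CHANNEL_ID = "heart_rate";
    private static final int NOTIFICATION_ID = 1;

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        NotificationManager nm = getSystemService(NotificationManager.class);
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            // Channels are required for notifications on Android 8.0+
            nm.createNotificationChannel(new NotificationChannel(
                    CHANNEL_ID, "Heart rate", NotificationManager.IMPORTANCE_LOW));
        }
        Notification notification = new NotificationCompat.Builder(this, CHANNEL_ID)
                .setContentTitle("Collecting heart rate data")
                .setSmallIcon(android.R.drawable.stat_notify_sync)
                .build();
        // Promotes the service to the foreground, which is what keeps the
        // process around while the Bluetooth data collection runs.
        startForeground(NOTIFICATION_ID, notification);
        return START_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null; // started, not bound
    }
}
```

The cost mmurphy mentions is visible here: the Notification is mandatory, so the user always sees that the app is doing work in the background.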
OPCFW_CODE
feat(types): add generic parameter to toMatchObject

I often use this generic parameter with Jest to have autocompletion on values to match, for example:

expect(response).toMatchObject<
  _.PartialDeep<GraphQLSchema.AnnotateFileResult>
>({
  __typename: "AnnotateFileErrors",
  errors: [
    {
      __typename: "FileFormatNotSupported",
    },
  ],
});

One DX improvement I made compared to Jest's generic is that I apply the DeepPartial helper by default, so the user doesn't have to think about it, as we're always matching partial objects. Here is a usage example on TypeScript Playground.

I'm not sure if DeepPartial is expected behaviour. The Jest doc says:

> Optionally, you can provide an object to use as Generic type for the expected value. This ensures that the matching object matches the structure of the provided object-like type.

Indeed, what if I want to add a type and check if it's fully compatible with the passed object? DeepPartial will pass even with {}.

@sheremet-va IMHO the types should reflect the runtime behaviour; if they are lying about it, then they can't be trusted to keep the code safe. toMatchObject is for matching a subset of the object. For an exact match there is toEqual (which could take a generic as well, without DeepPartial).

But this generic exists to ensure the type for toMatchObject is equal to the passed object. If it were as you say, we would use the type that was passed to expect. DeepPartial will never give an error, so providing anything to toMatchObject would be useless.

I don't really understand this reasoning. In the usage of toMatchObject, what I always want is autocompletion plus avoiding typos when defining a partial matching object against the value I pass in expect(). Again, if the point of the runtime behavior is matching an object partially, then I expect my type to reflect this usage. If writing expect(someObject).toMatchObject<{ foo: boolean }>({}) is accepted at runtime, then TypeScript should accept it as well. I read this assertion as "I expect myObject of type T to match at least this subset", which is exactly what is happening at runtime, and in my experience there is a greater chance that you will have the type T already provided by your library/API than a custom type U which would be a handmade subset of T (because if it's not a subset, then you should definitely use toEqual).

I understand that the current @types/jest implementation differs, and I thought it was a good opportunity to provide a better default. If the goal is 100% backward compatibility, then I'll remove the PartialDeep (and use module augmentation in my own projects to provide it, as I think it's the most sensible default 😄). @patak-dev could you tell us which solution you want to see implemented for vitest?
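To make the tradeoff concrete, here is a small self-contained sketch of the two pieces being debated: a DeepPartial type (standing in for the PartialDeep helper) and a runtime subset matcher mirroring what toMatchObject does. matchesSubset is an illustrative name, not part of vitest or Jest.

```typescript
// DeepPartial makes every property optional at every depth, which is why
// toMatchObject<T>({}) type-checks: {} is a valid DeepPartial<T> for any T.
type DeepPartial<T> = T extends object
  ? { [K in keyof T]?: DeepPartial<T[K]> }
  : T;

// Runtime analogue of toMatchObject: `expected` must be a recursive subset
// of `actual`. This is the behaviour the DeepPartial default reflects.
function matchesSubset(actual: unknown, expected: unknown): boolean {
  if (typeof expected !== "object" || expected === null) {
    return actual === expected;
  }
  if (typeof actual !== "object" || actual === null) {
    return false;
  }
  return Object.entries(expected).every(([key, value]) =>
    matchesSubset((actual as Record<string, unknown>)[key], value)
  );
}

// An empty expected object matches anything at runtime, mirroring the
// type-level claim that {} satisfies DeepPartial<T>:
console.log(matchesSubset({ foo: true, bar: { baz: 1 } }, {}));
console.log(matchesSubset({ foo: true, bar: { baz: 1 } }, { bar: { baz: 1 } }));
console.log(matchesSubset({ foo: true }, { foo: false }));
```

Whether the type should allow everything the runtime allows is exactly the disagreement above: DeepPartial makes the type match the runtime semantics, at the cost of never rejecting an over-loose expected object.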
GITHUB_ARCHIVE
Resigned and received a pay check

I resigned from my position at a company in December of last year, and in January I received a paycheck from the company that I resigned from. This past week I have phoned them numerous times and sent emails querying the overpayment. As it is Thursday today, I have not heard or received any communication from them. Can someone please advise me on what to do?

Are you sure the paycheck isn't for the work you did for them in December?

On the 15th of December I received my paycheck for December.

I'd be very surprised if your employer paid you mid-month for the whole month's work.

As it was the Christmas holiday, companies in South Africa usually pay their employees on the 15th of December.

When I have resigned, I have received payments from the old employer afterward for a number of reasons. First, most employers pay in arrears, so the last check has come a few weeks later. Second, they've paid me for outstanding vacation time, again with a delay of a few weeks. Finally, I left a company early one January and received my annual bonus for the previous year in March, when they typically pay out. Your case may be different, but that's what telephones are for.

Just a 'regional' note for those of us in North America: in South Africa it is not uncommon, when paying monthly salaries, for these to be paid a few days before the end of the month; being paid on or around the 25th is not unusual. That being said, being paid more than a week before the end of the month is a little odd.

FallenHero, while you were still employed at that company, when did you normally receive your salary? Does the timing of this payment seem unusual?

No, the payment falls in line with when they usually pay out the salaries.

Many companies pay in arrears, so the paycheck is probably the last of what you earned. Basically, when you start working somewhere, if you get paid weekly, you get your first paycheck after your second week: it is a week late, paying for the whole of your first work week. You then get paid every week, but every paycheck is delayed by a week, so when you leave, you get paid after you leave for the time that you worked previously. Check the paystub; it usually has dates for the time period that the check is for, separate from the date that the check was created.

The reason I am asking this question is that I have been calling three times a day since Monday and have sent 3 or 4 emails, and have not heard anything. As I am currently swamped at my new place of employment, I don't want to waste my time if they do not have the decency to give me an answer.

If you're concerned that they'll ask for it back, or that things may crop up later, send a written letter to the head of HR (by name) via certified mail, requesting a return receipt of delivery and to whom. That way you'll have proof that you attempted to get answers if they allege that you should have done something and the check turns out to be a mistake.

Was it a physical check mailed to you, or a direct bank deposit? Checking the paystub will be the quickest way to tell if it was sent in error. If direct deposit, there is still a paystub; you just may have to log into the company's HR website, which you can still do because you have data there that you need to access.

It was a direct payment into my account. The payment website is an internal system only the head of finance can access.

I recently had to get old tax information from a company I left last summer. I called HR, who were very slow to get back to me, but they just instructed me to access the employee HR payroll site that I used to use, and it had everything I needed. The paystub is yours; you have the right to access it by receiving a physical or digital copy. Nothing to do except take the advice of the others and the linked question in the meantime.
STACK_EXCHANGE
grouping alerts problem: can't disable grouping (so that grouping can be done in OpsGenie instead); the two alerts are two different CPUs of the same instance.

AM config:

route:
  receiver: 'dev_null'
  ################################
  routes:
    - receiver: 'opsgenie'
      group_wait: 0s
      group_interval: 0s
      continue: true

receivers:
  - name: dev_null
  - name: "opsgenie"
    opsgenie_configs:
      - api_key: XXX
        teams: support_team
        tags: '{{ range .Alerts }}{{ .Labels.region }} {{ end }},{{ range .Alerts }}{{ .Labels.Type }} {{ end }}, {{ range .Alerts}}{{ .Labels.severity }} {{ end }}'
        details: { 'instance': '{{ range .Alerts }}{{ .Labels.instance }} {{ end }}' }

Expected: to get all the alerts, not grouped. What happens in OpsGenie:

message: [Prom]: [FIRING:2] CpuUtilizationWarning (xxxxx ap-southeast-1 warning)
Description: (i-48fa1def) in region reported over 90% CPU utilization in the past 5 minutes. Monitored by job: . Public IP:

Alerts Firing:

Labels:
  alertname = CpuUtilizationWarning
  cpu = cpu3
  instance = xxx
  region = xxx
  service =
  severity = warning
  value = 94%
Annotations:
  description = (xxx) in region reported over 90% CPU utilization in the past 5 minutes. Monitored by job: . Public IP:
Source: http://xxx:9090/graph?g0.expr=floor(100+-...

Labels:
  alertname = CpuUtilizationWarning
  cpu = cpu0
  instance = xxx
  region = xxx
  service =
  severity = warning
  value = 90%
Annotations:
  description = (xxx) in region reported over 90% CPU utilization in the past 5 minutes. Monitored by job: . Public IP:
Source: http://xxx:9090/graph?g0.expr=floor(100+-...

I'm having a similar problem with Slack integration. There are no group_ properties in my routing tree. In my case I tried to group by a dummy label (group_by: ['foobar']) and alerts still get grouped.

Disclaimer: this is just a guess/opinion. While it might be nice to have an option to say "don't group", the implementation of group_by is to group by the value of the labels provided, not by the labels themselves. In other words, if you have a label that has multiple values that you don't want grouped together, put that label in the group_by array. For the OP, this would simply be group_by: ['cpu'].

Closing here; feel free to reopen in case this is an issue with Alertmanager, or post on the Prometheus users Google group in case of a usage question.

I think there is still no answer (the same in the Google group): is it possible to disable grouping at all in Alertmanager? We face the same issue and still have no success with solving it. group_by: ['instance'] groups by each machine.

@semyonslepov @vivekthangathurai did you guys find a way to not group at all? I think even group_by: ['instance'] is still bad; there should be a disable_grouping: true option.

I think you can use group_by: [...]

@roidelapluie I tried that; all the open alerts are sent together to opsgenie.

@fernandocarletti It makes more sense to ask questions like this on the prometheus-users mailing list rather than in a GitHub issue. On the mailing list, more people are available to potentially respond to your question, and the whole community can benefit from the answers provided.
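A config fragment sketching the suggestion above, putting the differing label into group_by so each CPU's alert becomes its own group (receiver name and labels are illustrative):

```yaml
# Sketch: group on the label whose values should NOT be merged together.
route:
  receiver: 'opsgenie'
  group_by: ['alertname', 'instance', 'cpu']
  group_wait: 0s
  group_interval: 0s

# On Alertmanager 0.16+, the special value '...' (the literal three-dot
# string) groups by all labels, which effectively disables grouping:
#   group_by: ['...']
```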
GITHUB_ARCHIVE
If you want to dive right in and skip the command line, there's a nice graphical way to use MAME without the need to download and set up a front end. Simply start MAME with no parameters, by double-clicking the mame.exe file or running it directly from the command line. If you're looking to harness the full power of MAME, keep reading.

On macOS and *nix-based platforms, please be sure to set your font up to match your locale before starting, otherwise you may not be able to read the text due to missing glyphs.

If you are a new MAME user, you may find this emulator a bit complex at first. Let's take a moment to talk about software lists, as they can simplify matters quite a bit. If the content you are trying to play is a documented entry on one of the MAME software lists, starting the content is as easy as:

mame.exe <system> <software>

For example, mame.exe nes metroidu will load the USA version of Metroid for the Nintendo Entertainment System.

Alternatively, you could start MAME with and choose the software list from the cartridge slot. From there, you could pick any software list-compatible software you have in your roms folders. Please note that many older dumps of cartridges and discs may either be bad or require renaming to match up to the software list in order to work this way.

If you are loading an arcade board or other non-software list content, things are only a little more complicated. The basic usage, from the command line, is:

mame.exe <system> <media> <software> <options>

<system> is the short name of the system you want to emulate (e.g. nes, c64, etc.)

<media> is the switch for the media you want to load (if it's a cartridge, try -cart or -cart1; if it's a floppy disk, try -flop or -flop1; if it's a CD-ROM, try -cdrom)

<software> is the program / game you want to load (it can be given either as the full path to the file to load, or as the shortname of the file in our software lists)

<options> is any additional command line option for controllers, video, sound, etc.

Remember that if you type a <system> name which does not correspond to any emulated system, MAME will suggest some possible choices which are close to what you typed; and if you don't know which <media> switches are available, you can always launch:

mame.exe <system> -listmedia

If you don't know what <options> are available, there are a few things you can do. First of all, you can check the command line options section of this manual. You can also try one of the many front-ends available for MAME. Alternatively, you should keep in mind the following command line options, which might be very useful on occasion:

-help gives a basic summary of command line options for MAME, as explained above.

-showusage gives you the (quite long) list of available command line options for MAME. The main options are described in the Universal Command-line Options section of this manual.

-showconfig gives you a (quite long) list of available configuration options for MAME. These options can always be modified at the command line, or by editing them in mame.ini, which is the main configuration file for MAME. You can find a description of some configuration options in the Universal Command-line Options section of the manual (in most cases, each configuration option has a corresponding command line option to configure and modify it).

-createconfig creates a brand new mame.ini file with default configuration settings. Note that mame.ini is basically a plain text file, so you can open it with any text editor (e.g. Notepad, Emacs or TextEdit) and configure every option you need.

However, no particular tweaks are needed to start, so you can leave most of the options unaltered. If you execute mame -createconfig when you already have an existing mame.ini from a previous MAME version, MAME automatically updates the pre-existing mame.ini by copying changed options into it.

Once you are more confident with MAME options, you may want to adjust the configuration of your setup a bit more. In this case, keep in mind the order in which options are read; see Order of Config Loading for details.
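Putting the pieces above together, a few example invocations. The file path is illustrative; the shortnames come from the text above:

```shell
# Software-list shortname (loads the USA version of Metroid on the NES driver):
mame nes metroidu

# Explicit media file; the path here is illustrative:
mame nes -cart /path/to/game.nes

# Discover which media switches a system supports:
mame nes -listmedia

# Generate a fresh mame.ini with default settings:
mame -createconfig
```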
OPCFW_CODE
TLDR: no, libuv doesn't work in Cygwin just yet; see the list at the bottom.

I tested using the master branch (9d3449852bd35c9283948186d0259c1bf73b8579 or later).

I installed the following in the Cygwin setup:

- gcc-c++, make, cmake, pkg-config, libtool
- lua (5.2), lua-devel, lua-bit, lua-lpeg

Build the third-party deps in the Neovim folder:

mkdir .deps-cyg
cd .deps-cyg
cmake ../third-party/ -DUSE_BUNDLED_JEMALLOC=OFF -DUSE_BUNDLED_BUSTED=OFF -DUSE_BUNDLED_LUAJIT=OFF -DUSE_BUNDLED_LUV=OFF -DUSE_BUNDLED_LUAROCKS=OFF
make

One of the current issues is that mpack fails to build in third-party, and it also fails to build with lua 5.2, which is what Cygwin uses (there is a PR for that). Let's build it by hand, then. Here is a quick recipe for lua 5.2; the last command will prompt you for permissions:

git clone https://github.com/y-stm/libmpack.git
cd libmpack
gcc -O2 -fPIC -I/usr/include/lua5.2 -c binding/lua/lmpack.c -o lmpack.o
gcc -shared -o mpack.dll lmpack.o -llua-5.2
cygstart --action=runas cp mpack.dll /usr/lib/lua/5.2/

You can check if mpack was installed with the command:

lua -e "require('mpack')"

Go back to the Neovim folder and apply the following patch (it disables the stack protector for Cygwin, since it was failing with missing symbols):

diff --git a/CMakeLists.txt b/CMakeLists.txt
index 317d2a1..f5f2356 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -230,7 +230,7 @@ if(HAS_WVLA_FLAG)
   add_definitions(-Wvla)
 endif()
 
-if(UNIX)
+if(UNIX AND NOT CYGWIN)
   # -fstack-protector breaks non Unix builds even in Mingw-w64
   check_c_compiler_flag(-fstack-protector-strong HAS_FSTACK_PROTECTOR_STRONG_FLAG)
   check_c_compiler_flag(-fstack-protector HAS_FSTACK_PROTECTOR_FLAG)

Build instructions are fairly standard:

mkdir build-cyg
cd build-cyg
cmake .. -DDEPS_PREFIX=../.deps-cyg/usr
make

Right now it fails when linking; the issue seems libuv-related. In order of importance:

- No libuv support for Cygwin; read https://github.com/libuv/libuv/issues/832#issuecomment-216762505, TLDR https://www.cygwin.com/ml/cygwin/2015-12/msg00024.html
- Disable stack protection for Cygwin?
- Fix libmpack for lua 5.2/5.3 - https://github.com/tarruda/libmpack/pull/5
- Oddities with luarocks: it appears to succeed but fails to install any of the rocks
- Our luajit recipe failed to build on x86_64; this is fixed in the upcoming luajit 2.1 release
- BuildLuv needs some fixes to handle the case when USE_BUNDLED_LIBUV=OFF and USE_BUNDLED_LUAJIT=OFF
- Error building jemalloc (optional) - https://github.com/jemalloc/jemalloc/issues/285
OPCFW_CODE
The Extremely Early Days - cat-scan.com

The first active thread, then: Thread 19. The infamous "cat scan" thread. Infamous, and from a purist starting-from-the-beginning perspective consequently problematic—folks wandered into the thread long after it was posted, on more than one occasion, to chatter. The lack of a year field in the comment timestamps makes it difficult to tell when exactly the incursions happened, but the key point is clear: this thread, as a single entity, is not really an "early" thread so much as a thread that was originally posted early on. The comments span a great deal of mefi history.

Looking at thread 19 is anachronistic in more than one way. Aside from the great big jumps in the 3-year-long comment chronology, there are site features present to the modern viewer that weren't there when the thread was posted. Tags, for example: those weren't around for years—only after the fact has the post been tagged. A sort of revisionist librarianism, that. Also, flags—just now, I've flagged this mathowie comment as 'other', but flags are a relatively recent innovation. (God knows what Matt and Jessamyn will make of that.) And the thread is closed! As it clearly wasn't originally—automatic thread closure after 30 days was a change made no less than three years after the original thread 19 was posted.

The cat-scan Meme

By virtue of both its historical significance and the sheer potency of the idea itself, cat-scan.com has held fast as a long-running (if low-frequency) meme on Metafilter. It's not hard, with a little Googling, to find references to the site and even the original text of mathowie's post.
Consider:

- Pretty_Generic riffing
- yhbc, likewise
- evanizer makes a disapproving comparison
- gluechunk has an epiphany
- blue_beetle mis-spells the url
- riffola discusses doubleposts
- George_Spiggott uses it as a placeholder variable
- DrJohnEvans on doublepostery

The double-post joke has become a birthday tradition for Metafilter, as well. It'll be another month and a half before the inevitable 2006 edition.

Not What I Had In Mind

My hope, with Refi, is to take a good systematic look at joe-average Metafilter threads over time—essentially, examine threads that have no particular motivation toward self-examination and see what's going on in there. Thread 19 is a terrible fit for that sort of thing—as threads go, it is highly self-aware and its offspring are all likewise. On the other hand, looking through this stuff is a hell of a lot of fun, so it's safe to say you can expect to see more thematic/memetic explorations that veer somewhat off-point—considering that no such on-point posts yet exist, especially...
OPCFW_CODE
Estimate of net % of transaction fees in bitcoin transactions

One of the major pain points that Bitcoin is trying to solve is the 2% transaction fee. However, Bitcoin is not necessarily free, even if it gets mass acceptance (say, the entire USD is replaced by bitcoins). There will be additional transaction costs, such as:

1. generation of bitcoins
2. storage of bitcoins
3. apps that enable people to transact in bitcoins
4. credit history monitoring services, etc.

Some of these will be paid directly during the exchange of value for currency (such as through transaction fees on Bitcoin-based credit cards); some of them may be indirect (such as an app which allows you to store/monitor bitcoins but uses ads to support itself; the ads in turn increase the price of the commodities being bought or sold, so you are paying for the bitcoin indirectly). I am curious to know whether any study has been done to calculate the net % that will be lost in transaction fees (paid directly or indirectly by the purchaser of the goods) as compared to that of credit cards (~2%)?

> I am curious to know whether any study has been done to calculate the net % that will be lost in transaction fees (paid directly or indirectly by the purchaser of the goods) as compared to that of credit cards (~2%)?

Credit cards don't support all types of transactions. For example, say you wanted to pay me to fix your computer. You can't do that with a credit card, because I don't have a merchant account, and it's not worth the trouble for me to get one. There are other ways to accomplish this (bank wire, mailed check, PayPal), but they have different fee structures. The economy doesn't run entirely on credit cards.

#1 will cost as much as the block reward plus the transaction fees; if miners spend more than that, they will lose money. #2 would cost $100 to $200 per person for secure, tamperproof devices. #3 is likely to be the same amount spent to develop phone apps as is spent currently.

People won't have the same incentives in a Bitcoin world. Suppose that when everyone uses Bitcoin, it's harder to force someone to pay a court judgement. Because of this, a car manufacturer becomes slightly less careful about the safety of their cars, and an extra 40 people die in the US every year. How are we supposed to convert that into an equivalent transaction fee? You mention credit monitoring services, which I don't think would exist in a Bitcoin economy; I don't think you'd be able to get any loans. How could you give someone a mortgage, if they could secretly start socking away money where you can't get at it?
STACK_EXCHANGE
I have 2 hypotheses for the same thing. I'm basically testing two theories.

H1: there will be no difference between the 2 groups; they will both do the same thing.
H2: there will be a difference.

Now I'm testing this with a t-test. My question is... (drumroll)... do I divide the p values by 2 (i.e. one-tailed) or leave them as they were (two-tailed)?

Normally I would say yes, but I'm doing this whole intro as a theory x vs theory y (in a battle to the death (from boredom)), so I would need to put both hypotheses. This is all my supervisor's crazy idea, so don't blame me! :-) I guess you are saying I should go with one-tailed, and then if it wasn't significant (which it is) it would support the other hypothesis?

From what I know, your H1 is the null hypothesis. Your H1 should be H0 and H2 should be H1. So if there is no difference between the two groups, then you confirm the null hypothesis, which should be H0. What stats books do you use? I recommend Andy Field's 'bible'; it sorts out a lot of stats problems without sweat. ;-)

A two-tailed test is the one that is normally used, to allow for the fact that either group 1 or group 2 has the higher mean. One-tailed tests are only used if your alternative hypothesis is that one group (i.e. you know which one) has a bigger mean, which is rarely justifiable. In whichever stats package you are using, there should be an option for doing a one-tailed test instead of the default two-tailed.

I don't think there is an option in SPSS. Andy Field just says you divide the p value by 2 for a one-tailed test. I think I'm going to go with one-tailed in my results and clearly mark that it's one-tailed, and maybe state the two-tailed value in my text (and clearly label it).

I always thought if you have a clear idea in which direction you expect the effect then you can use one-tailed, but if you don't know whether group A will be higher or group B (i.e. you just expect there to be a difference but don't know in which direction) you'd use two-tailed. That said, you don't often see one-tailed tests used. I always feel a bit like it's cheating (even though I know it's not). I haven't made a firm decision on what to do in mine... need to have a stats chat with my supervisor. Anyhow, if you do use one-tailed you simply divide the p value by 2.

I'm going with one-tailed. The hypotheses are difficult, because I'm testing two theories, but I reckon one-tailed is the safest bet. I'm sure the reviewers will flag it if they think it's an issue.

Hi Sneaks, I think you've found a solution already, but if not, here's my advice. You're only testing one hypothesis, as one of the hypotheses you mention is known as the "null hypothesis", whilst the other is the one you expect to find support for. You should probably go with the two-tailed test, because you don't hypothesise a direction in the differences, just that there will be a difference.

======= Date Modified 09 Oct 2010 09:32:00 =======

My supervisor re-edited my work AGAIN yesterday and has removed one hypothesis, so I'm only testing one (which I find non-significant), and I'm just moving all the literature about the other into the discussion (where it was originally, about 10 edits ago).

The problem before, I think, was that it wasn't really the null vs the real hypothesis, because H1 was "there will be no difference between groups, they will BOTH score 5", whereas hypothesis 2 was "there will be a difference, group 1 will score 0 and group 2 will score -4". So although one said there will be no difference, it wasn't really the null, because it specifies the type of relationship too. Confusing! Anyway, problem solved, in a very annoying way!
:-s
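The divide-by-2 rule discussed above can be sanity-checked numerically. A small sketch, using the standard normal distribution as a stand-in for the t distribution (the z value is illustrative):

```python
# Relationship between one- and two-tailed p values, using the standard
# normal distribution as a stand-in for the t distribution (illustrative).
from statistics import NormalDist

z = 2.0  # observed test statistic, in the predicted direction (illustrative)
p_one_tailed = 1 - NormalDist().cdf(z)
p_two_tailed = 2 * p_one_tailed

print(round(p_one_tailed, 4))  # ~0.0228
print(round(p_two_tailed, 4))  # ~0.0455

# Halving the two-tailed p recovers the one-tailed p, but only when the
# observed effect lies in the hypothesised direction; if it points the
# other way, the one-tailed p is 1 - p_two_tailed / 2 instead.
print(p_two_tailed / 2 == p_one_tailed)
```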
OPCFW_CODE
The same thing happens with the ODBC SQL driver and other connectors. I am still a bit baffled by why it works on the old server, which does NOT have a jTDS driver in lib... –Brian Knoblauch Jul 23 '14 at 13:19

socketKeepAlive (default - false): true to enable TCP/IP keep-alive messages.

ssl (default - off): specifies if and how to use SSL for secure communication.

Our situation: we've been working on a big project which involved a lot of data loads and data processing, and was designed to work using Spring Batch from the Spring Framework.

What is the URL format used by jTDS?

The jtds-1.2.6.jar file is in my /WebRoot/WEB-INF/lib folder and the Java Build Path looks good. If anyone has any suggestions it will be greatly appreciated.

jTDS is a type 4 (pure Java) JDBC driver.

There is a performance hit for the encoding logic, so set this option to false if unitext or univarchar data types are not in use, or if the charset is utf-8.

I get java.sql.SQLException: "ResultSet may only be accessed in a forward direction" or "ResultSet is read only" when using a scrollable and/or updateable ResultSet.

Must be an integer value or the string "compute" to let jTDS choose a process ID.

These situations can be avoided in most cases by setting the useCursors property, but this will also affect performance.

Because there is no URL when using the JtdsDataSource, there are three other properties (with setters and getters) to take the place of those items that are part of the URL.

Applies to characters from the extended set (codes 128-255).

No practical use; it's displayed by Enterprise Manager or Profiler as associated with the connection.

This means extra request-response cycles, but less caching by the driver.

Please note that setting lastUpdateCount to true could cause problems if you use queries that do actually return more than one update count (such as queries consisting of multiple updates/inserts).

A common mistake is to append a semicolon (";") to the end of the URL (e.g. "jdbc:jtds:sqlserver://server/db;TDS=7.0;" is wrong!).

Why do I get a java.sql.SQLException: "Unable to get information from SQL Server" when trying to connect to an SQL Server instance?

After I did this I no longer got the error. –MotoDave452 Aug 14 '14 at 17:26

Both of these (ResultSets and update counts) are considered by JDBC to be "results".

maxStatements (default - 500): the number of statement prepares each connection should cache.

Lee Fei Tye posted: org.apache.tomcat.dbcp.dbcp.SQLNestedException: Cannot load JDBC driver class 'net.sourceforge.jtds.jdbc.Driver'

Batch processing with executeBatch() hangs or is unreliable on Sybase.

This is the fastest approach, but it means that the driver has to cache all results if another request needs to be made before all rows have been processed.

TDS (Tabular Data Stream) is the protocol used by Microsoft SQL Server and Sybase to communicate with database clients.

Although this means that a "good" driver could "fix" this behavior, fixing it would imply caching the whole server response, equaling a huge performance drop.

As a conclusion, the only safe multithreading scenarios are these: (i) one Connection with multiple Statements, each Statement used by a single thread, and (ii) a Statement used by one thread.

With Sybase a usual forward-only read-only cursor is created.

I am working with the BIRT Report Design feature that is built into Eclipse. You are very probably using TDS 4.2 to communicate with the SQL Server.

The exception thrown when trying to start JUnit tests within Eclipse reads as follows:

java.lang.IllegalStateException: Failed to load ApplicationContext
    at org.springframework.test.context.CacheAwareContextLoaderDelegate.loadContext(CacheAwareContextLoaderDelegate.java:99)
    at org.springframework.test.context.TestContext.getApplicationContext(TestContext.java:122)
    at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.injectDependencies(DependencyInjectionTestExecutionListener.java:109)
    at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.prepareTestInstance(DependencyInjectionTestExecutionListener.java:75)
    at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:312)
    at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.createTest(SpringJUnit4ClassRunner.java:211)

You can download it from jtds.sourceforge.net –Mark Rotteveel Aug 14 '14 at 15:35

Thank you, I downloaded the .jar file, and when you go to add a Data Source...

Why do I get java.sql.SQLException: "Output parameter not allowed as argument list prevents use of RPC." when calling a stored procedure?

jTDS takes this one step further: when you create a PreparedStatement, jTDS caches it internally and keeps it there even after you close it, so that every time you create it...

I've changed the version of jTDS to 1.2.6 and all works fine.

When using getConnection(String url, String user, String password) it's not required to set this property, as it is passed as a parameter, but you will have to set it when using getConnection(String...

The exception is: "Cannot load JDBC driver class 'net.sourceforge.jtds.jdbc.Driver'". Any thoughts would be appreciated.

The exception you are getting is usually caused by a timeout.

Why do I still need to provide a username and password?

Gary Xue wrote:
> Did you check if your jTDS driver JAR files exist in this directory:
> eclipse/plugins/org.eclipse.birt.report.viewer_1.0.1/birt/WEB-INF/plugins/org.eclipse.birt.report.data.oda.jdbc/drivers
> If not, correct the problem by running JDBC...

Scenario (i), while it does work, is not necessarily a good solution because it requires a lot of locking and waiting on the same network connection, plus (last but not least)...

If that's the case, replace jtds.jar in the above example with jtds-1.2.jar or whatever your specific file name is.

Why do I get a java.sql.SQLException: "No suitable driver"...
Application Program Interfaces

Just as drivers provide a way for applications to make use of hardware subsystems without having to know every detail of the hardware's operation, application program interfaces (APIs) let application programmers use functions of the computer and operating system without having to directly keep track of all the details in the CPU's operation. Let's look at the example of creating a hard disk file for holding data to see why this can be important. A programmer writing an application to record data from a scientific instrument might want to allow the scientist to specify the name of the file created. The operating system might provide an API function named MakeFile for creating files. When writing the program, the programmer would insert a line that looks like this:

MakeFile [1, %Name, 2]

In this example, the instruction tells the operating system to create a file that will allow random access to its data (signified by the 1; the other option might be 0 for a serial file), will have a name typed in by the user (%Name) and will be a size that varies depending on how much data is stored in the file (signified by the 2; other options might be 0 for a fixed size, and 1 for a file that grows as data is added but does not shrink when data is removed). Now, let's look at what the operating system does to turn the instruction into action. The operating system sends a query to the disk drive to get the location of the first available free storage location. With that information, the operating system creates an entry in the file system showing the beginning and ending locations of the file, the name of the file, the file type, whether the file has been archived, which users have permission to look at or modify the file, and the date and time of the file's creation. The operating system writes information at the beginning of the file that identifies the file, sets up the type of access possible and includes other information that ties the file to the application.
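The MakeFile pattern described above can be sketched as a thin wrapper in modern code. The function name, flag values, and returned bookkeeping below are illustrative only, mirroring the article's description; this is not a real operating-system API:

```python
# Illustrative sketch of a MakeFile-style API call.
# Flags mirror the article: access 1 = random (0 = serial);
# size policy 0 = fixed, 1 = grow-only, 2 = varies with contents.
def make_file(access, name, size_policy):
    """Create a file-system entry, hiding device-specific details."""
    entry = {
        "name": name,
        "access": "random" if access == 1 else "serial",
        "size_policy": {0: "fixed", 1: "grow-only", 2: "variable"}[size_policy],
        # A real OS would also record start/end disk locations,
        # permissions, archive status, and creation time here.
    }
    return entry

record = make_file(1, "instrument_data.dat", 2)
```

The caller never sees drive-specific instruction codes; the wrapper (standing in for the OS and its drivers) absorbs them, which is exactly the division of labor the article describes.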
In all of this information, the queries to the disk drive and addresses of the beginning and ending point of the file are in formats heavily dependent on the manufacturer and model of the disk drive. Because the programmer has written the program to use the API for disk storage, the programmer doesn't have to keep up with the instruction codes, data types and response codes for every possible hard disk and tape drive. The operating system, connected to drivers for the various hardware subsystems, deals with the changing details of the hardware. The programmer must simply write code for the API and trust the operating system to do the rest. APIs have become one of the most hotly contested areas of the computer industry in recent years. Companies realize that programmers using their API will ultimately translate this into the ability to control and profit from a particular part of the industry. This is one of the reasons that so many companies have been willing to provide applications like readers or viewers to the public at no charge. They know consumers will request that programs take advantage of the free readers, and application companies will be ready to pay royalties to allow their software to provide the functions requested by the consumers.
Supernacularfiction 《Imperial Commander: His Pretty Wife Is Spoiled Rotten》 – Chapter 1015 – She Would Protect Him

Chapter 1015: She Would Protect Him

"The Young Commander has sent people to look into it. When it comes to biological weapons, the top administration places great importance on stopping them. If there is someone from the other party inside the top administration, then I am afraid we have no options left."

"Crocodile has always been known as a drug lord. Many drugs have circulated through his hands because he has so many connections and distribution channels. Even though we have carried out several raids with the Young Commander and staked out many of his lairs, he always seems to slip through the cracks and escape. It is obvious that someone inside is cooperating with him."

"What about the Young Commander? How is he handling matters?"

"Inside? What do you mean by inside? Is it within the Young Commander's personal forces, or within Jun Country's top administration?"

Yun Xi frowned and shook her head. She did not know the answer to that. Once there had been a traitor in Mu Feichi's team, but she was not sure whether there were more hidden spies who had infiltrated his personal forces this time. That way, the other party could stay in the shadows while they were out in the open, and they would encounter restraints everywhere.

"I have asked people to continue the investigation, but because I am unsure of the other party's goals, we must investigate in secret, so the process will be slow."

"I share the same suspicions. It may or may not be Crocodile, but regardless, we still have to capture him first before we can draw any conclusions. Now they have taken away the virus samples and antigens. It is a frightening outcome!"

Meanwhile, she had to work hard on the codeword pad. Since Feng Yang had been able to find Xinqi Town using the codeword pad, it had proved to be invaluable. She had to seize the opportunity first, to avoid facing restraints on her work everywhere.

He had already been injured during this incident. She had to improve and stop being so useless that he had to protect her all the time. She made up her mind to fight Crocodile to the end.

Now, she would protect him.

Feng Yang looked at Yun Xi, whose expression had suddenly turned dark and sullen, and the murderous look that arose from those sharp eyes chilled him to the bone. It was the first time he had ever seen such a look on this girl's face, and it made him feel slightly worried. At the same time, he could not help but feel distressed and pained that she had to go through such tribulations.
Thanks! I was surprised at how little troll content there is in the single player department, but also motivated. This project draws inspiration from minds like Tommi Gustafsson and Med. MapGuy, who've made some of the most influential custom gameplay I've enjoyed over the years. I'll be updating this thread as I move development along. Speaking of - this is a proof of concept. Using a pre-made terrain screenshot on top of a custom sky texture as part of the skybox to create an illusion of depth in the scene during low-angle camera shots. This village is at the edge of the map in what is supposed to be a mountainous region, so it felt out of place having it against a pure sky backdrop. The texture is a placeholder, merely 512x512 at this point, so it will be appropriately upscaled in the final product. More on the custom skybox/backdrop. Upscaled the texture so it doesn't look blurry and pixelated. Initially tried tinting the backdrop manually, but it seems to blend in better by simply matching the fog colours being used in the particular scene. The blizz cliffs are a conscious decision, exactly for that classic Warcraft 3 feel you mentioned. A certain area later on doesn't use them, but throughout most of the map that's what I've been sticking to. Also, sprucing them up with non-pathable rocks and foliage seems to cover their inherent roughness quite nicely. As always, your thoughts are appreciated. I am glad to say that we're close to the stage of a playable alpha, so watch out for playtesting opportunities that might pop up (campaign/troll enthusiasts especially). Very nice! I really like the world tree being used as big trees; it's not that uncommon but still I like to see it. Cave also looks great. I dunno why, but your terraining style gives me nostalgia. I'm a big fan of strong light choices. You gotta be careful though: in the second picture (Batch3-1b) you've put too many light sources in close proximity, creating the lighting bugs around the tilegrid-edges.
Deleting lights one by one until the bug disappears should do the trick. Thanks for pointing that out. Upon closer inspection I discovered the whole cave is filled with that kind of light pollution - some places worse than others. Basically what I did was add light sources attached to @Uncle Fester 's crystal models, but for the sake of avoiding such bugs I'm remaking the cave environment as a mixture of edited and unedited crystals. Well, well, well - look who it is! Welcome back frosty the mojo-man. ^^ I must admit I never saw this day coming. What a gust of fresh old air you are to this old section. I'm currently on my phone, so I can't see much, but those tiny screenshots look very classic and neat. I'll see if I can't give a little more in-depth feedback once I get on a PC, but for now I just wanted to say hi. Also, would you care to tell us a little about the idea for the campaign? Like lore and story and such. With regards to the campaign, I'll be making a thread in the project development forum once I've got a playable map. But the gist of it is - Troll-Aqir war; pre-sundering Azeroth; the first great conflict that forced the troll tribes into forming a world-spanning empire; based within the outline of canonical lore, but expanding on it. Mostly original characters, fully custom techtree for the troll race. Over the years I've had a lot of fun playing this particular kind of campaign (Dwarf Campaign, Joe's Chronicles to name some examples) that rides the edge of canon without contradicting it, so that's the direction I'm taking my idea story-wise. I really like the environment of the terrain. I am also struck by several similarities with something I did a few months ago, which made me smile. Maybe we share the same inspiration, since I did not know about this project until recently hehe. Beautiful terrain
Why doesn't Google Cloud Datastore support multiple inequality filters on different properties? I know this is a limitation of Datastore, but I just want to figure out the reason.

Invalid Argument: Cannot have inequality filters on multiple properties: [..., ...]

I have read the Bigtable paper and I cannot find any restriction on inequality filters on different columns, and it can support prefix and range scans. IMHO, Datastore could support multiple inequality filters with these two operations. Do you know the reason this functionality was left out of Datastore?

To avoid having to scan the entire index table, the query mechanism relies on all of a query's potential results being adjacent to one another in the index. To satisfy this constraint, a single query may not use inequality comparisons (LESS_THAN, LESS_THAN_OR_EQUAL, GREATER_THAN, GREATER_THAN_OR_EQUAL, NOT_EQUAL) on more than one property across all of its filters. [Source: https://cloud.google.com/appengine/docs/standard/java/datastore/query-restrictions]

That said, is a Datastore query limited to accessing only one index when it needs to scan?

Every Datastore query runs on one or more index tables. When considering inequality filters, we want them to run on at most one property, in order to avoid scanning entire index tables. It's easier to grasp when investigating how filters work, how datastore-indexes.xml is defined, and what an index is exactly: <datastore-index> elements, one for every index that the Datastore should maintain. So, you are defining one index (which is a table) for every relation between an Entity's kind and its properties. Now imagine having to evaluate multiple inequality filters on different properties: it could lead to a huge running time. Not good.
<?xml version="1.0" encoding="utf-8"?>
<datastore-indexes autoGenerate="true">
  <datastore-index kind="Greeting" ancestor="true" source="manual">
    <property name="user" direction="asc" />
    <property name="birthYear" direction="asc" />
    <property name="height" direction="asc" />
  </datastore-index>
  <datastore-index kind="Greeting" ancestor="true" source="manual">
    <property name="date" direction="asc" />
  </datastore-index>
</datastore-indexes>

Now, let's see filters. This is a valid filter: it runs on one property only (the birthYear property).

Filter birthYearMinFilter =
    new FilterPredicate("birthYear", FilterOperator.GREATER_THAN_OR_EQUAL, minBirthYear);
Filter birthYearMaxFilter =
    new FilterPredicate("birthYear", FilterOperator.LESS_THAN_OR_EQUAL, maxBirthYear);
Filter birthYearRangeFilter =
    CompositeFilterOperator.and(birthYearMinFilter, birthYearMaxFilter);
Query q = new Query("Person").setFilter(birthYearRangeFilter);

This is a non-valid filter: it runs on two properties (birthYear and height).

Filter birthYearMinFilter =
    new FilterPredicate("birthYear", FilterOperator.GREATER_THAN_OR_EQUAL, minBirthYear);
Filter heightMaxFilter =
    new FilterPredicate("height", FilterOperator.LESS_THAN_OR_EQUAL, maxHeight);
Filter invalidFilter =
    CompositeFilterOperator.and(birthYearMinFilter, heightMaxFilter);
Query q = new Query("Person").setFilter(invalidFilter);
import { Trade, CurrencyPairPosition, CurrencyPairPositionWithPrice } from 'rt-types'

export const formTable = {
  positions: (data: CurrencyPairPositionWithPrice[]) =>
    data.map((item: CurrencyPairPositionWithPrice) => [
      item.symbol,
      item.latestBid,
      item.latestAsk,
      item.baseTradedAmount,
      item.basePnl,
    ]),
  ccy: (data: CurrencyPairPosition[]) => {
    // Aggregate traded amounts by base currency (first three symbol chars)
    const positionsMap = data.reduce(
      (acc, item) => {
        const base = item.symbol.slice(0, 3)
        const prevPosition = acc[base] || 0
        return { ...acc, [base]: prevPosition + item.baseTradedAmount }
      },
      {} as { [ccy: string]: number },
    )
    return Object.keys(positionsMap).map(ccy => [ccy, positionsMap[ccy]] as [string, number])
  },
  blotter: (data: Array<Partial<Trade>>) => {
    return data
      // Sort by most recent trades first; tradeId is optional on
      // Partial<Trade>, so default missing ids to 0 to keep the
      // comparator well-defined under strict null checks
      .sort((a, b) => (b.tradeId ?? 0) - (a.tradeId ?? 0))
      .map((item: Partial<Trade>) => [
        item.tradeId,
        item.tradeDate,
        item.direction,
        item.symbol,
        item.notional,
        item.dealtCurrency,
        item.spotRate,
        item.status,
        item.valueDate,
        item.traderName,
      ])
  },
}

export const delay = (ms: number) => new Promise(res => setTimeout(res, ms))
CI7520 Classical Machine Learning This assignment counts for 30% of the overall mark for this module. Its subject is to implement Classical Machine Learning solutions in Python using the Scikit-Learn library and other libraries introduced in the class. Specifically, both clustering and classification methods should be applied to the Wine Recognition dataset: You can form groups of up to four students (after a discussion and agreement with the tutor). Deliverables and Submission The coursework must be submitted by 23:59 on Friday 29th January 2021. Follow the submission guidelines in Canvas. For each submission ensure that you include · A zip file containing all runnable programs for the first three parts (see below in Project parts), with code written in Python. o For each part, a single Python script (.py) should be used to execute all relevant code. All three executables should be placed in a single folder. Any results presented should be directly reproducible from the code without any modification. o A short README text file should be included in the folder to explain § The students’ names and k-numbers of the group § the contents of the folder § If any library additional to the ones used in the class is required, provide explicit guidance on how to install it. § Any special instructions on how to run the code. · A report (3000-5000 words, excluding references and appendices), in Word or PDF format, with the students’ names and k-numbers of the group on the front page. · You are encouraged to look in the literature and identify methods that have already been applied to the particular problem. In this case, you must CLEARLY reference the relevant sources (e.g. scientific article, book, webpage) · Any third-party source code must be CLEARLY highlighted and referenced by appropriate annotation in the report and/or by adding comments in the code. · Usage of any third-party libraries that have not been used in the class must be approved by the Lecturer beforehand.
· Copies of the code in the Appendix must be in text format, not screenshots · If the above rules are not obeyed, the submission may be considered for plagiarism and penalised according to the University regulations. PART I – Application: Load and overview data related to your theme The application should be able to load the data and identify its key aspects (number of dimensions/features, number and names of classes, number of samples per class, etc.). PART II – Application: Clustering a) You should use at least two clustering methods to partition the dataset. b) Evaluate the clustering methods using appropriate metrics such as the Adjusted Rand index, Homogeneity, Completeness and V-Measure, using the ground truth. c) Consider and implement any configuration of the parameters of your clustering methods that could further improve the results. PART III – Application: Classification: Training and Testing a) You should use at least two classification methods to distinguish between the classes. Both the following training/testing protocols should be used: · Split the data into training (70%) and testing (30%). · K-fold cross-validation for K=10. b) For both protocols, evaluate the classification approaches using appropriate metrics such as the Balanced Accuracy, F1-Score, ROC AUC, and drawing ROC curves and appropriate confusion matrices. Ideally all ROC curves should be drawn into a single graph to allow for easy comparison between methods. c) Consider and implement any configuration of the parameters of your classification methods that could further improve the results. PART IV - Report: The Project Report should be structured as follows: · Data: Description of the data, including the information derived in Part I, as produced by your code. There is no need to describe the general problem. All information and figures should be derived from the code, not from other sources. o Outline of the clustering methods used in Part II.
There is no need to describe the theory behind the methods, only to explain any different configurations you may have used. o Comparative analysis of all clustering methods used, including any improvements attempted. Ensure that any results/figures reported are produced by your code. o Outline of the classification methods used in Part III. There is no need to describe the theory behind the methods, only to explain any different configurations you may have used. o Comparative analysis of all classification methods used, considering both training protocols, including any improvements attempted. Ensure that any results/figures reported are produced by your code. · Discussion and Conclusion: o Critical Discussion of any challenges imposed by the specific dataset and the pipelines for clustering and classification. o Critical Discussion of clustering results. o Critical Discussion of classification results. · Appendix: Include copies of all the code produced. Copies of the code must be in text format, not screenshots. Ensure that there is sufficient annotation/comments to indicate where your code has been taken/adapted from. Learning outcomes being assessed · Select and specify suitable methods and algorithms relevant for a particular data analysis process; · Build machine learning and artificial intelligence systems using software packages and/or specialised libraries; · Articulate and demonstrate the specific problems associated with different phases or tasks of a machine learning or artificial intelligence pipeline; · Assess and evaluate machine learning methods using datasets and appropriate criteria;
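As a starting point for Parts I-III, a compact scikit-learn sketch on the Wine dataset might look like the following. Method and parameter choices here are illustrative only; the coursework expects at least two methods per part, the full set of metrics, both training protocols, and parameter tuning:

```python
from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import adjusted_rand_score, balanced_accuracy_score

# Part I: load and inspect the data (178 samples, 13 features, 3 classes)
X, y = load_wine(return_X_y=True)
n_samples, n_features = X.shape

# Feature scaling helps both clustering and classification on this dataset
X_scaled = StandardScaler().fit_transform(X)

# Part II: one clustering method, evaluated against the ground truth
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)
ari = adjusted_rand_score(y, labels)

# Part III: one classifier under the 70/30 stratified split protocol
X_tr, X_te, y_tr, y_te = train_test_split(
    X_scaled, y, test_size=0.3, random_state=0, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
bal_acc = balanced_accuracy_score(y_te, clf.predict(X_te))
```

The same pattern extends to the second clustering/classification methods, the K=10 cross-validation protocol (`cross_val_score`), and the remaining metrics required by the brief.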
Kyle Cranmer is the David R. Anderson Director of the UW–Madison Data Science Institute (DSI), powered by American Family Insurance. Cranmer leads the institute’s charge: advancing discoveries that benefit society through foundational and use-inspired data science research. He is a professor in the Physics Department with affiliate appointments in Computer Sciences and Statistics. Cranmer arrived at data science through his contributions to the search for the Higgs boson, a fundamental particle that, in the 1960s, had been theorized to exist and is responsible for giving fundamental particles in the universe their mass. Finding evidence for the particle required navigating enormous amounts of data generated by quadrillions of high-energy particle collisions. Cranmer developed a method for collaborative statistical modeling that allowed thousands of scientists to work together to seek, and eventually find, in 2012, strong evidence for the Higgs boson. After making this discovery, Cranmer pivoted to thinking more broadly about data science and machine learning for the physical sciences, identifying synergies and opportunities, and shaping this discussion internationally. His research has expanded beyond particle physics and is influencing astrophysics, cosmology, computational neuroscience, evolutionary biology, and other fields. One of Cranmer’s goals for the DSI is to broaden engagement in data science across the UW-Madison campus. Drawing on his own experiences reaching across traditional academic boundaries, he aims to build partnerships between people working on data science methodology and those working in the natural, physical, and social sciences and the humanities. Understanding and addressing the impact data science has on society, and the disproportionate effects it can have on marginalized people, is central to his vision for this work. Cranmer views the Wisconsin Idea as integral to the mission and role of the DSI. 
He seeks opportunities to engage with the community in research, such as working with UW-Extension and farmers on problems like agricultural sustainability, carbon capture and climate change. Cranmer serves on the advisory board for the UniverCity Alliance. He stresses the importance of building trust, both within and outside the university, by demonstrating the potential for data science to positively affect people’s lives and the world. Prior to starting his work at the DSI in July 2022, Cranmer was a professor of physics at New York University for 15 years and the executive director of the Moore-Sloan Data Science Environment, where he worked to understand the institutional changes necessary to establish data science in academia. He is an alumnus of the UW-Madison Department of Physics; after earning his Ph.D. here, he was a fellow at Brookhaven National Lab. He was awarded the Presidential Early Career Award for Science and Engineering in 2007 and the National Science Foundation’s Career Award in 2009. Cranmer was elected as a fellow of the American Physical Society in 2021, and he was selected as Editor in Chief of the journal Machine Learning: Science and Technology in 2022. He grew up in Arkansas and was in the first graduating class of a public, residential high school for math, science, and the arts.
|Don't forget to visit our web site for special offers on our products. US & Canadian customers can call us toll free at 1-800-992-0549. International orders can be taken from this number +1-785-841-1631. |Dr. Dobb's/CD Release 7 CD-ROM |Dr. Dobb's/CD Release 7 contains over 11 years of cutting-edge computer information and expertise from the most respected programmers in the industry. This means over 132 issues of Dr. Dobb's Journal plus all the issues of Dr. Dobb's Sourcebook on one CD-ROM! Every Programming Paradigms, every Algorithm Alley, every column, and every bit of source code from January 1988 to June 1999, all at your fingertips. Now in HTML, Dr. Dobb's/CD Release 7 provides you with a fast, accurate and extensive cross-platform search engine that will run on any operating system. |Essential Books on Algorithms and Data Structures - Release 2 CD-ROM |Complete text of nine essential books and over a dozen articles related to algorithms in a new HTML format. These books, selected by the editors of Dr. Dobb's Journal, are among the most important ever written on the subject, with the information you need to succeed. Plus, this CD-ROM features a full-text, cross-platform search engine, giving you instant access to the entire contents of all nine books. Learn from the experts. Set up your algorithms the right way the first time, before you ever write that first line of code! |Dr. Dobb's Python Resource CD-ROM |The Dr. Dobb's Python Resource CD-ROM is the best available source of Python information you'll find! It includes Python distributions for platforms ranging from Linux to Win32, and utilities and applications for programmers and end users alike. The CD-ROM also includes tutorials, FAQs, and Python articles from leading magazines such as Dr. Dobb's Journal, Web Techniques, and more! |Essential Books on Numerics and Numerical Programming CD-ROM |Dr.
Dobb's Essential Books on Numerics and Numerical Programming CD-ROM is the answer for anyone who needs solutions for complex numerical programming problems. Selected for the value of their content by the editors of DDJ, these books on CD-ROM are a must-have for all programmers. Compiled into easy-to-read PDF format, Essential Books on Numerics and Numerical Programming CD-ROM includes these special features: Adobe Acrobat 3.01, a fast and accurate full-text search utility! |Dr. Dobb's Tcl/Tk Resource CD-ROM |Dr. Dobb's Journal & the Tcl/Tk Consortium deliver the most comprehensive resource of Tcl/Tk available on one CD-ROM! Known for its flexibility, Tcl/Tk can be implemented quickly and easily into new and legacy applications. Use Tcl/Tk to: build web sites, perform hardware and software testing, act as an embedded command language, create network-aware applications and much more! |Dr. Dobb's Windows CE Programming CD-ROM |Dr. Dobb's Journal Windows CE Programming: For The Handheld PC CD-ROM teaches you how to write compact, efficient programs in C++ using the Windows CE API. You get solid technical information with time-proven instructional techniques, plus the comfort of selecting your learning environment. Quizzes, graphics and hands-on lab exercises ensure that you become a skilled Windows CE programmer. |Essential Books on Cryptography and Security CD-ROM |This CD-ROM presents both the theory and practice of network security implemented in C, Basic, and other familiar programming languages. Special features include: full-text search engine, complete text of all books, hyperlinks across all books, technical bulletins, security briefs, and cryptographic FAQs from RSA Data Security. Now available for all platforms in Adobe PDF. |Dr. Dobb's Systems Internals CD-ROM |The Dr. Dobb's Systems Internals CD-ROM is a complete resource for people who need to gain a better understanding of how Windows operates.
Network administrators, software developers, and users alike stand to benefit from this release. The NT Internals CD-ROM contains source code and executables for freeware and shareware utilities. Articles from Dr. Dobb's Journal, Microsoft Systems Journal, C/C++ Users Journal and Windows Developer's Journal all relating to Windows 3.x/95/98/NT/CE internals and systems programming. |Programmers at Work CD-ROM |Compiled by the editors of Dr. Dobb's Journal, the Programmers at Work CD-ROM contains the full text of Susan Lammers' Programmers at Work, a timeless glimpse into the masterminds of the software industry, and James Hague's Halcyon Days: Interviews with Classic Computer and Video Game Programmers. In addition, Programmers at Work includes never-before-published interviews by DDJ and audio & video clips from Dr. Dobb's Technetcast.com Internet Radio Program. Over 50 interviews with the industry's leading programmers! |Essential Books on File-Formats CD-ROM |The Essential Books on File Formats CD-ROM contains the complete text from six books, which will provide you with the most comprehensive and detailed information on all the important and popular file formats in use today. Selected by the editors of Dr. Dobb's, this CD-ROM contains invaluable information on file formats used for graphics, multimedia, sound, databases, spreadsheets, Windows, the internet, and much more! |Essential Books on Graphics Programming CD-ROM |Get seven of the essential books in graphics programming, with full text, diagrams, graphics, and source code, all on one CD-ROM. From fundamental algorithms to the most complex techniques, this CD-ROM lets you find all the critical information you need for your graphics programming projects. Your CD-ROM includes these features: Texture mapping, color modeling, and morphing all optimized for speedy 2-D and 3-D graphics programming. Practical image acquisition and processing. High-quality 3-D photorealistic graphics. 
Digital halftoning, image synthesis, and special effects. |Al Stevens Cram Course on C/C++ CD-ROM |Al Stevens, a world renowned programming expert and contributing editor for Dr. Dobb's Journal, has created this CD-ROM to answer all your C/C++ programming questions. This CD-ROM includes the complete text of three books written by Al Stevens, an interactive step-by-step tutorial with precise explanations, video clips of Al discussing important topics, the GNU Compiler Suite directly connected to the exercises, lots of usable source code, plus a memory feature that bookmarks your place for quick returns to prior sessions. Designed for Win95 and NT (will work on 3.1). |Dr. Dobb's Small-C Resource CD-ROM |The Small-C Resource CD-ROM contains the definitive collection of compilers, information, and source code for the Small-C compiler. Packed with books, tools, utilities, and Small-C implementations, this CD-ROM includes James Hendrix's out-of-print book, A Small-C Compiler: Language, Usage, Theory, and Design and several relevant articles from Dr. Dobb's Journal written by C experts such as Allen Holub. Plus all content is hyperlinked in HTML format, which makes viewing easy and convenient. And for those who don't have an HTML browser, included is Netscape's Navigator browser supporting all major operating systems. |Alternative Programming Languages V.2 CD-ROM |The Alternative Programming Languages CD-ROM is your definitive resource for cutting-edge programming environments, giving you access to the most innovative, solution-providing languages and tools available today. This CD-ROM will get you up to speed on distributed computing, embeddable languages, increased productivity, and includes languages such as Perl, Python, Dylan, Lout, Bob, Parasol, DSC, Glish, the GNU compiler suite, plus much more. |Windows 32-Bit API Programming Book w/ CD-ROM |Dr. 
Dobb's Journal is excited to introduce a fundamental book for all Win32-bit API programmers -- Windows 32-bit API Programming - The User Interface. This book/CD-ROM will help you learn Windows 32-bit API programming quickly and proficiently, using an intuitive tutorial that takes you through a series of interactive CD-ROM-based workshops. This hands-on workshop enables you to visualize concepts, as well as listen to audio explanations for each visual, with pop-up windows containing the voice-over text. Also included are comprehensive hands-on lab exercises and quizzes to test your comprehension.
If you are storing secrets as plain text in Power Automate flows or environment variables, you can integrate Azure Key Vault into Power Automate to retrieve those credentials directly from Azure Key Vault. If at any point you run into issues, you can reference Microsoft's documentation here: Use Azure Key Vault secrets (preview)

Before getting started, we need to knock out some prerequisites in Azure.

Register a resource provider

We need to register the Microsoft.PowerPlatform resource provider.
- Go to your Azure Portal
- Once in the portal, go to Subscriptions
- Select the subscription your Azure Key Vault will be in
- Now, select the Resource Providers blade under Settings
- Search for Microsoft.PowerPlatform in the filter search bar
- Select Microsoft.PowerPlatform and select Register
- This will take a few minutes. Once registered, it should have a green check mark and show as Registered.

Configure Azure Key Vault

Now that we have our Power Platform resource provider registered, we need to configure Azure Key Vault to allow Dataverse to access the resource.
- Go to your Azure Key Vault resource. (My key vault is named Power-Platform)
- Select the Access policies blade on the left
- Select Add Access Policy
- The Add access policy window will appear. Select None selected on the Select principal line highlighted in red.
- The Principal blade will appear from the right. Search for Dataverse and select the principal named Dataverse. The principal ID is 00000007-0000-0000-c000-000000000000
- Once selected, click Select at the bottom
- Next, open the Secret permissions drop-down and check Get
- Click Add
- This will bring you back to the main Access policies window. Select Save at the top to commit the changes.

Configure Environment Variable in Power Automate

Now that we have Azure Key Vault configured, we can set up environment variables inside a solution to retrieve secrets. 
Note: A user who creates a secret environment variable needs at least read permission on the Azure Key Vault resource, or they will receive the following error when attempting to save it: "This variable didn't save properly. User is not authorized to read secrets from 'Azure Key Vault path'." More information can be found here: Create a new environment variable for the Key Vault secret

- Go to a solution that you're developing in.
- In your solution, create a new environment variable. Go to New > More > Environment variable
- Enter a Display Name for your environment variable
- For the Data Type, choose Secret
- For Secret Store, choose Azure Key Vault
- Next, choose to do a New Azure Key Vault Reference under either the Current or Default value. I will use the Default value for my example.
- Fill in the following fields, referencing your Azure Key Vault:
  - Azure Subscription Id: the subscription your Key Vault is in. You can find it on the Key Vault Overview blade.
  - Resource Group Name: the resource group your Key Vault is in.
  - Azure Key Vault Name: the name of your Key Vault.
  - Secret Name: the display name of your secret in Key Vault.
- Select Save
- Open the environment variable and copy the Name value. We will need this later.

If you run into any errors about the Microsoft.PowerPlatform resource provider while saving the environment variable, wait 10-15 minutes before you try to save again.

Using Secret Environment Variables in Power Automate

Now that we have Azure configured and our environment variable configured, we can use it in our flows within our solution. For this example, I'll demonstrate how you can retrieve the secret utilizing the environment variable.
- Create a new manually-triggered instant flow.
- Select New Step and select Microsoft Dataverse from the actions. 
- Select Perform an unbound action
- Once the Perform an unbound action card loads, select the action name RetrieveEnvironmentVariableSecretValue
- In the EnvironmentVariableName field, paste in the Name of the environment variable we created earlier. Example: msauto_PowerPlatform_GraphAPI
- Select ... > Settings in the top-right corner of the Perform an unbound action action.
- Enable Secure Outputs. This will scrub the secret so it's not in plain text in the flow run history.
- Select Save.

Now, test your flow. If successful, you'll get your secret from Azure Key Vault as shown below. You can use this when making various REST API calls with the HTTP connector, such as to Microsoft Graph.

Want to do more with Azure Key Vault and Power Automate? If you want to get a better idea of how you can use this in your flows, check out my blog post on using Microsoft Graph with the HTTP connector in Power Automate. You can combine what you learned in this post to securely pull the secret instead of storing it in plain text inside an environment variable.
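For reference, the same unbound function can also be invoked through the Dataverse Web API outside the designer. A minimal Python sketch of building that request URL; the organization URL and variable name below are placeholders (not values from this post), and the v9.2 endpoint path is an assumption:

```python
def secret_value_request(org_url: str, env_var_name: str) -> str:
    """Build the Dataverse Web API URL that invokes the
    RetrieveEnvironmentVariableSecretValue unbound function.
    org_url and env_var_name are illustrative placeholders."""
    base = org_url.rstrip("/")
    return (f"{base}/api/data/v9.2/RetrieveEnvironmentVariableSecretValue"
            f"(EnvironmentVariableName='{env_var_name}')")

# Hypothetical org and environment-variable name:
url = secret_value_request("https://contoso.crm.dynamics.com",
                           "msauto_PowerPlatform_GraphAPI")
```

A GET against such a URL with a valid bearer token returns the secret in the JSON response, which is another reason Secure Outputs matters wherever the value surfaces.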
Export Operation to Zip File Feature Request

Name: Ability to export an Operation to a Zip file, including all images, texts, screen recordings, and terminal recordings.

Brief Description: Often we want to back up operational data. It would be good to be able to download all the uploaded evidence into a Zip for archival purposes. Furthermore, prepend timestamps to all of the file names. It would also be good to be able to introduce screen recordings into the evidence.

The code is partially set up to do this; however, there are a number of questions about how this would work in practice. This was an idea that was kicked around at one point, but was dropped for some reason. Some questions that would eventually need to be answered:
- Is this export something that can be re-imported into ashirt-server?
- Who is allowed to perform an export?
- What happens to the operation on ashirt-server after someone archives it? What happens to the files on the file store / S3?
- What is the utility of the export? Meaning, is this just a backup, or is there some other desired utility?
- Would we need/want to encrypt the zip/export format?
- Is this a feature we would want to turn off for some reason?

Is this export something that can be re-imported into ashirt-server? I believe the idea here is not to migrate an op to a new instance, but to share raw data with a 3rd party.

Who is allowed to perform an export? We probably want to limit this to operation or system admins.

What happens to the operation on ashirt-server after someone archives it? @vysecurity feel free to answer this one. We already have the capability to delete an operation and all of its evidence, so I don't feel any automated action is necessary.

What happens to the files on the file store / S3? What is the utility of the export? Meaning, is this just a backup, or is there some other desired utility? See above.

Would we need/want to encrypt the zip/export format? 
Considering the sensitivity of the data in ashirt, we should definitely encrypt any bulk export. We should also store logs of who has created a backup somewhere in the operation settings or detailed information.

Is this a feature we would want to turn off for some reason? I could see teams wanting to turn this off entirely depending on culture, but we don't have that requirement.

The PR and branch that existed was #6. It doesn't look like we ever created an issue that defined the feature, but it was a lot of code to maintain and a lot of complexity that we felt we weren't going to get significant value from, or that was worth the effort to support. Now that we are interested in adding back zip file support, do you guys still feel the same regarding this statement above? I figure I can either resurrect PRs 2, 3 and 6, or use Joel's code in 6 as inspiration for adding a zip file export feature but start afresh. @jrozner @jkennedyvz (cc @JoelAtDeluxe)

Support for this functionality was mostly requested by the community. @jkennedyvz has had the most direct contact and knows the most about what the needs are. We can discuss and make a decision on how to proceed.

Chatted with @jkennedyvz and this is where we ended up:
- Export-only functionality
- Export should only export the evidence (no findings, metadata, settings, or saved searches)
- Export should produce a zipped json file (including all the evidence information) and raw evidence. This probably should just be an array of objects
- Export should denormalize the data (tags should be their values rather than the number associated with the value)
- There should be a global on/off feature set along with the other runtime flags
- It should be scoped to operation admins
- Archives should be ephemeral and just provided to the user
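As a rough illustration of the agreed shape, here is a Python sketch of such an export: a zip holding an evidence.json array with tags denormalized to their string values, plus the raw files. Every field and function name here is illustrative, not taken from the ashirt-server codebase:

```python
import io
import json
import zipfile

def export_operation(evidence, tag_names):
    """Build an in-memory zip: evidence.json (array of objects with
    denormalized tags) plus the raw evidence files. Illustrative only."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        manifest = []
        for item in evidence:
            manifest.append({
                "uuid": item["uuid"],
                "description": item["description"],
                # denormalize: tag string values, not their numeric ids
                "tags": [tag_names[t] for t in item["tag_ids"]],
                "file": item["filename"],
            })
            zf.writestr(item["filename"], item["raw"])
        zf.writestr("evidence.json", json.dumps(manifest, indent=2))
    return buf.getvalue()
```

Encryption of the resulting archive (as discussed above) would wrap this output rather than change its shape.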
Interview challenge about biscuit boxes

I have the following algo/interview question: Say I need to prepare n biscuit boxes, each of which contains a specific number of biscuits that meet a specific calorie range. Say
box1: 100 biscuits, calorie 200-300
box2: 200 biscuits, calorie 190-250
box3: 100 biscuits, calorie 220-280
The available biscuits are:
50 biscuits with calorie 200
250 biscuits with calorie 230
100 biscuits with calorie 190
Find a way to prepare all biscuit boxes or prove there is no solution. I thought for a while but did not find a good solution. A greedy algorithm does not seem to work here; any hints?

Not an algorithm expert, but is this the bin-packing problem? If so, as far as I recall, the solution is to brute-force search. However, perhaps by searching for that phrase, you can get some better hints.

It looks like a linear programming problem

@symcbean Yes, it can be reduced to a maximum flow search, which is a linear programming problem.

Greedy algorithm

I think you can do this with a greedy algorithm as follows:
1. Sort your boxes into increasing order of upper limit
2. Sort your biscuits into increasing order of calorie value
3. For each biscuit, place it into the first non-full box that it can fit into

Example

So for your example, the sorted order (by upper limit):
box2: 200 biscuits, calorie 190-250
box3: 100 biscuits, calorie 220-280
box1: 100 biscuits, calorie 200-300
and biscuits:
100 biscuits with calorie 190
50 biscuits with calorie 200
250 biscuits with calorie 230
then place
100 calorie-190 biscuits into box2
50 calorie-200 biscuits into box2
50 calorie-230 biscuits into box2 (only 50 because box2 becomes full)
100 calorie-230 biscuits into box3 (only 100 because box3 becomes full)
100 calorie-230 biscuits into box1

Sketch of proof

Suppose all biscuits with calories less than W have been placed into boxes in an optimum way. Then consider biscuits with calories equal to W. 
If there is a choice of box for this weight, then choosing the box with the smallest upper limit can never stop us from placing the remaining biscuits.

Hmm, this seems to be working. I actually ruled out this solution due to a test case I had; I guess I did not calculate it right manually at that time.

150 biscuits into box 1 is incorrect. Box 1 only requires 100 biscuits.

Counter example: Given
box 1: 100 biscuits, calorie 100-200
box 2: 100 biscuits, calorie 150-175
and biscuits
100 biscuits @ 100 calories and
100 biscuits @ 200 calories.
The optimal solution is to evenly split the biscuits across the boxes, but your algorithm would put all the 100-calorie biscuits into box 1.

@AndyG: I think we only disagree owing to a different interpretation of the problem. My understanding is that for your example you would not be allowed to put any biscuits into box 2. My understanding is that you are only allowed to put a biscuit into box 2 if the number of calories (for that biscuit) is between 150 and 175. I guess your understanding is that a box is okay if the average number of calories is in the given range.

@PeterdeRivaz: Indeed. Given that it is an interview question, where it's not likely anybody would be expected to hand-solve a linear program on account of too much time involved, your interpretation is the likelier.

This problem can be reduced to a maximum flow search. Let's construct a bipartite graph: the first set of vertices will correspond to biscuits (one vertex for each distinct biscuit calorie value). The second set corresponds to boxes in the same manner. There is an edge with infinite capacity between a vertex from the first set and a vertex from the second if the biscuit has an appropriate calorie value. There is an edge between the source and each vertex from the first set with a capacity equal to the number of such biscuits. There is also an edge between each vertex from the second set and the sink with a capacity equal to the size of the box. 
After that, one can just find the maximum flow between the source and the sink and check that all boxes are full. This solution has polynomial time and space complexity.

Do you need an algorithm for this? If not, just put the 100 biscuits with calorie 190 into box 2, the 50 biscuits with calorie 200 into box 1, and divide the 250 biscuits with calorie 230 between boxes 2 and 3.

Yes, the simple data is just for demo. What if you have 100 boxes? Just like we can easily sort 1, 2, 3, but how about 10000 integers? :-)
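The greedy approach described in the accepted answer can be sketched in Python. This is an illustrative implementation, not code from the thread; it assumes (as the answerer did) that a box only accepts biscuits whose individual calorie value lies in its range:

```python
def pack_biscuits(boxes, biscuits):
    """Greedy packing sketch. boxes: (capacity, low, high) tuples;
    biscuits: (count, calories) tuples. Returns one placement list per
    box (in increasing-upper-limit order), or None if some box
    cannot be filled."""
    boxes = sorted(boxes, key=lambda b: b[2])        # increasing upper limit
    biscuits = sorted(biscuits, key=lambda b: b[1])  # increasing calorie value
    remaining = [cap for cap, _, _ in boxes]
    placed = [[] for _ in boxes]
    for count, cal in biscuits:
        for i, (cap, low, high) in enumerate(boxes):
            if count == 0:
                break
            if low <= cal <= high and remaining[i] > 0:
                take = min(count, remaining[i])
                remaining[i] -= take
                count -= take
                placed[i].append((take, cal))
        # leftover biscuits are fine; unfilled boxes are not
    if any(r > 0 for r in remaining):
        return None
    return placed

# The example from the question:
boxes = [(100, 200, 300), (200, 190, 250), (100, 220, 280)]
biscuits = [(50, 200), (250, 230), (100, 190)]
result = pack_biscuits(boxes, biscuits)
```

On this input the boxes fill exactly as in the worked example above: box2 takes the 190s, 200s and 50 of the 230s, then box3 and box1 each take 100 of the 230s.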
QuarkChain Weekly AMA Summary - 05/04/2019

QuarkChain has been holding its bi-weekly AMA (Ask Me Anything) on Telegram/WeChat groups on Saturdays, from 7-8 PM PST. This is the summary of last week's AMA. We are always happy to take questions/comments/suggestions.

Part 1: Marketing Questions

Q1: Congrats to QuarkChain Mainnet. It's really big news, but why is there no effect on the price upside?
A: Price is determined by the whole market. In the short term the price will go up and down, but if you share our belief in our plan after mainnet, and in the top technology we are pursuing (I recommend reading our mainnet launch article), I think you are investing for the long-term value.

Q2: With all the FUD and uncertainty around the project, what is your marketing plan?
A: The FUD will never be cleaned off while people are not satisfied with the price. For example, I have mentioned many times in different channels that there are multiple transfers happening between two Binance wallets; they are managing tokens in the exchange. But people still create FUD about it. I think we just need to focus on what we should do, especially on the technology side. For example, I am organizing a sharding event with all sharding projects at Consensus 2019, trying to let people realize our important role in the crypto world.

Q3. In the mainnet article, I saw: "a large number of applications are working closely with QuarkChain, and the development based on QuarkChain is about to start". Does this mean QuarkChain will have its own Dapps?
A. It not only means we will have Dapps, but also that we will have other applications like layer 2, and even public chains as well.

Q4. When is the distribution of QKC for AMA winners on the Binance English group? I put QKC in Binance. Do I need to send it elsewhere for the token swap?
A. It will be between 4/30 and 7/30. Binance should automatically do it for you.

Q5. 
QuarkChain has signed a cooperation agreement to establish a national-level blockchain platform infrastructure. This is really good news; any details about it?
A. We cannot share more details due to an NDA right now. That is our big focus after mainnet (everyone asks me what we are busy with). We will share more details when we can.

Part 2: Technical Questions

Q1:
1) Why are there no dsha256 shards in mainnet? What are the plans on adding them?
2) What algorithm is on the ROOT chain, and how can it be mined?
3) Why is the difficulty on the ROOT chain so big? Is it compensated by PoSW? Who is mining it?
4) When will the CPU miner be rewritten to a standalone mode to remove the docker burden? Any ETA or timeline?
A.
1. The previous dsha256 was unfriendly to ASIC miners, so we removed it and will support it in the future.
2. The root chain algorithm is ethash, and everyone can mine it, although the difficulty is a bit high because of the guardian plan.
3. The guardian plan stakes the token via guardians and we provide the hashpower (with the guardian benefit). We will switch to PoSW once PoSW in shards is verified.
4. We will provide a pool for CPU standalone users. However, our current plan is to make sure the root chain has enough hashpower to secure the network and perform the token swap later.

Q2: Why does QuarkChain use state sharding instead of others?
A. Because it is probably the most advanced and most scalable.

Q3: Who mines/is mining the ROOT CHAIN, and where will these coins go?
A. We are providing hashpower and the guardian plan provides the stake, both from the community, investors, and us. The token will be awarded to the stakers according to the plan. But we welcome anyone to mine the root chain, as long as the miner is not malicious.

Q4. Here is some FUD about QuarkChain.
1. QuarkChain smart contracts only support in-shard interaction, which means they can only access addresses within the same shard. A contract is always deployed to the same shard as the creator's address. 
To interact with a contract, the sender's address must be on the same shard as the contract. 2. Not only that, but they also are using Ethereum under the hood. Basically, they haven't implemented sharding. 3. So many talkers in this space say they have the tech figured out before Ethereum. Not only is the tech not good, but they also use Ethereum's code. It's a fork of Ethereum that says they are the Ethereum killer. They say they are the first blockchain to achieve state sharding, and then the disclaimer says that shards can't communicate with each other. Each shard might as well be its own blockchain.
A. It looks like this person doesn't know the technology well. First, a lot of projects use Ethereum components such as the ledger model or smart contracts (e.g., ADA, TRON, Ethermint). I don't think there is anything wrong there. Second, this person clearly misunderstood what a fork is and what a new project is. A fork is done by forking the Ethereum code and keeping all its history, while we just use part of the eth code, and the main rest of the code (consensus, ledger, cluster) is built from scratch. If this person has basic programming and GitHub skills, all of this can be easily verified instead of spreading FUD. Third, it seems that this person doesn't have an idea of sharding: we already partition the user accounts to different shards, and we describe how we learned from Google's Bigtable's sharding. The article was published half a year ago.

Q5. When will QKC shards be able to communicate with each other?
A. They already can (as demonstrated in our testnet).

Q6. Are you satisfied with the work of the mainnet? Is everything going well?
A. It is running surprisingly stably at the moment, although we added a lot of improvements at the last minute (week).

Thanks for reading this summary. QuarkChain always appreciates your support and company.
package natsstreaming

import (
	"context"
	"crypto/rand"
	"encoding/hex"
	"errors"
	"fmt"
	"time"

	"github.com/nats-io/stan.go"
	"github.com/uw-labs/substrate"
)

var (
	_ substrate.AsyncMessageSink   = (*asyncMessageSink)(nil)
	_ substrate.AsyncMessageSource = (*asyncMessageSource)(nil)
)

const (
	// OffsetOldest indicates the oldest appropriate message available on the broker.
	OffsetOldest int64 = -2

	// OffsetNewest indicates the next appropriate message available on the broker.
	OffsetNewest int64 = -1
)

type connectionTimeOutConfig struct {
	seconds int
	tries   int
}

// AsyncMessageSinkConfig is the configuration parameters for an
// AsyncMessageSink.
type AsyncMessageSinkConfig struct {
	URL       string
	ClusterID string
	ClientID  string
	Subject   string

	// number in seconds between pings (min 1)
	ConnectionPingInterval int

	// the client will return an error after this many pings have timed out (min 3)
	ConnectionNumPings int
}

func NewAsyncMessageSink(config AsyncMessageSinkConfig) (substrate.AsyncMessageSink, error) {
	sink := asyncMessageSink{subject: config.Subject, connectionLost: make(chan error, 1)}

	clientID := config.ClientID
	if clientID == "" {
		clientID = generateID()
	}
	if config.ConnectionPingInterval < 1 {
		config.ConnectionPingInterval = 1
	}
	if config.ConnectionNumPings < 3 {
		config.ConnectionNumPings = 3
	}

	sc, err := stan.Connect(
		config.ClusterID,
		clientID,
		stan.NatsURL(config.URL),
		stan.Pings(config.ConnectionPingInterval, config.ConnectionNumPings),
		stan.SetConnectionLostHandler(func(_ stan.Conn, e error) {
			sink.connectionLost <- e
		}),
	)
	if err != nil {
		return nil, err
	}
	sink.sc = sc
	return &sink, nil
}

type asyncMessageSink struct {
	subject        string
	sc             stan.Conn // nats streaming
	connectionLost chan error
}

func (p *asyncMessageSink) PublishMessages(ctx context.Context, acks chan<- substrate.Message, messages <-chan substrate.Message) error {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	conn := p.sc
	natsAckErrs := make(chan error, 1)
	publishErr := make(chan error, 1)

	go func() {
	LOOP:
		for {
			select {
			case <-ctx.Done():
				break LOOP
			case msg := <-messages:
				_, err := conn.PublishAsync(p.subject, msg.Data(), func(guid string, err error) {
					if err != nil {
						select {
						case natsAckErrs <- err:
						default:
						}
						return
					}
					acks <- msg
				})
				if err != nil {
					publishErr <- err
					break LOOP
				}
			}
		}
	}()

	select {
	case <-ctx.Done():
		return ctx.Err()
	case ne := <-natsAckErrs:
		return ne
	case pe := <-publishErr:
		return pe
	case cle := <-p.connectionLost:
		return cle
	}
}

func (p *asyncMessageSink) Close() error {
	return p.sc.Close()
}

func (p *asyncMessageSink) Status() (*substrate.Status, error) {
	return natsStatus(p.sc.NatsConn())
}

// AsyncMessageSourceConfig is the configuration parameters for an
// AsyncMessageSource, a nats-streaming message source implementing
// the substrate.AsyncMessageSource interface.
type AsyncMessageSourceConfig struct {
	URL         string
	ClusterID   string
	ClientID    string
	Subject     string
	QueueGroup  string
	MaxInFlight int
	AckWait     time.Duration
	Offset      int64

	// number in seconds between pings (min 1)
	ConnectionPingInterval int

	// the client will return an error after this many pings have timed out (min 3)
	ConnectionNumPings int
}

type asyncMessageSource struct {
	conn         stan.Conn
	conf         AsyncMessageSourceConfig
	disconnected <-chan error
}

func NewAsyncMessageSource(c AsyncMessageSourceConfig) (substrate.AsyncMessageSource, error) {
	clientID := c.ClientID
	if clientID == "" {
		clientID = c.QueueGroup + generateID()
	}

	switch {
	case c.Offset == 0:
		c.Offset = OffsetNewest
	case c.Offset < -2:
		return nil, fmt.Errorf("invalid offset: '%v'", c.Offset)
	}

	if c.ConnectionPingInterval < 1 {
		c.ConnectionPingInterval = 1
	}
	if c.ConnectionNumPings < 3 {
		c.ConnectionNumPings = 3
	}

	disconnected := make(chan error, 1)
	conn, err := stan.Connect(c.ClusterID, clientID,
		stan.NatsURL(c.URL),
		stan.Pings(c.ConnectionPingInterval, c.ConnectionNumPings),
		stan.SetConnectionLostHandler(func(_ stan.Conn, e error) {
			disconnected <- e
			close(disconnected)
		}))
	if err != nil {
		return nil, err
	}
	return &asyncMessageSource{conn: conn, conf: c, disconnected: disconnected}, nil
}

type consumerMessage struct {
	m *stan.Msg
}

func (cm *consumerMessage) Data() []byte {
	return cm.m.Data
}

func (c *asyncMessageSource) ConsumeMessages(ctx context.Context, messages chan<- substrate.Message, acks <-chan substrate.Message) error {
	msgsToAck := make(chan *consumerMessage)

	f := func(msg *stan.Msg) {
		cm := &consumerMessage{msg}
		msgsToAck <- cm
		select {
		case <-ctx.Done():
			return
		case messages <- cm:
		}
	}

	maxInflight := c.conf.MaxInFlight
	if maxInflight == 0 {
		maxInflight = stan.DefaultMaxInflight
	}
	ackWait := c.conf.AckWait
	if ackWait == 0 {
		ackWait = stan.DefaultAckWait
	}

	var offsetOpt stan.SubscriptionOption
	switch offset := c.conf.Offset; offset {
	case OffsetOldest:
		offsetOpt = stan.DeliverAllAvailable()
	case OffsetNewest:
		offsetOpt = stan.StartWithLastReceived()
	default:
		offsetOpt = stan.StartAtSequence(uint64(offset))
	}

	sub, err := c.conn.QueueSubscribe(
		c.conf.Subject,
		c.conf.QueueGroup,
		f,
		offsetOpt,
		stan.DurableName(c.conf.QueueGroup),
		stan.SetManualAckMode(),
		stan.AckWait(ackWait),
		stan.MaxInflight(maxInflight),
	)
	if err != nil {
		return err
	}

	err = handleAcks(ctx, msgsToAck, acks, c.disconnected)

	se := sub.Close()
	if err == nil {
		err = se
	}
	return err
}

func (ams *asyncMessageSource) Close() error {
	return ams.conn.Close()
}

func (ams *asyncMessageSource) Status() (*substrate.Status, error) {
	return natsStatus(ams.conn.NatsConn())
}

func handleAcks(ctx context.Context, msgsToAck chan *consumerMessage, acks <-chan substrate.Message, disconnected <-chan error) error {
	var toAck []*consumerMessage

	for {
		select {
		case msgToAck := <-msgsToAck:
			toAck = append(toAck, msgToAck)
		case cr := <-acks:
			if len(toAck) == 0 {
				return substrate.InvalidAckError{Acked: cr, Expected: nil}
			}
			msgToAck := toAck[0]
			cm, ok := cr.(*consumerMessage)
			if !ok || cm != msgToAck {
				return substrate.InvalidAckError{Acked: cr, Expected: msgToAck}
			}
			if err := msgToAck.m.Ack(); err != nil {
				return fmt.Errorf("failed to ack message with NATS: %v", err.Error())
			}
			toAck = toAck[1:]
		case <-ctx.Done():
			return ctx.Err()
		case e, ok := <-disconnected:
			if ok {
				return e
			}
			return errors.New("nats connection no longer active, exiting ack loop")
		}
	}
}

func generateID() string {
	random := []byte{0, 0, 0, 0, 0, 0, 0, 0}
	_, err := rand.Read(random)
	if err != nil {
		panic(err)
	}
	return hex.EncodeToString(random)
}
SPF and DKIM issues are not CipherMail specific. Any mail server which forwards email for internal and external domains will have to deal with SPF/DKIM issues.

Since encrypted email can contain a virus, it's best to scan an email after decryption. This would imply that anti-virus/spam scanning should be done after decryption. However, scanning for spam is best done before decryption because the email can then be rejected before acceptance. Also, SPF checks require the original incoming IP address. The best setup, in our view, is therefore one where the mail server first receives the email and at the same time checks it, then forwards it to the CipherMail gateway, and the CipherMail gateway then forwards the email back to the mail server (see discussion above).

To discuss the DKIM/SPF issues, it's best to split the problem into two parts. Part one discusses email sent by external senders to internal recipients (let's call this incoming). Part two discusses email sent by internal senders to external recipients (let's call this outgoing). For the discussion, let's assume that every domain, internal and external, configures SPF and signs email with DKIM, and that every email is S/MIME encrypted.

Decrypting email will break the existing DKIM signature because the body and headers of the email are replaced. It's therefore important to validate the DKIM signature before applying any changes to the email. After decryption, the CipherMail gateway sends the email back to the anti-spam system. The problem now is that DKIM validation fails because the body was changed. Removing the faulty DKIM signature will not solve the issue because the sender might have configured DMARC (which makes DKIM more or less mandatory). The DKIM check should therefore be skipped when the mail server receives the email back from the CipherMail gateway. Unfortunately there is no other option. 
You cannot DKIM re-sign the email because the domain is not under your control and you therefore have no access to the DKIM private key. The best solution is to use the Authenticated Received Chain (ARC) protocol (RFC 8617). However, this is not yet supported by CipherMail. Until ARC is widely supported, the only solution is either to not anti-spam scan the email after decryption, or to scan but skip DKIM.

Because the email is forwarded by the CipherMail gateway after decryption, the sending IP address is the IP address of the gateway. The SPF check therefore fails after forwarding (unless the anti-spam check inspects the Received headers and not the IP address directly). SPF validation should therefore be done when the email is received for the first time, but not when the email is forwarded by the CipherMail gateway.

Outgoing email should be DKIM signed after the email was handled by the CipherMail gateway, i.e., after encryption, because encryption changes the email. It does not matter whether the mail server or the CipherMail gateway DKIM signs the email, just as long as DKIM signing is done after encryption. It should also not be an issue if the email is DKIM signed before encryption, just as long as DKIM signing is done again after encryption. Because you own your internal domains, DKIM signing should not be an issue. SPF should not be an issue because you can add the IP address of the CipherMail gateway to the SPF records.

There are other options if none of the above work, but they have their own issues. One option is to rewrite the sender domain using SRS. This will rewrite the sender to a domain you own. The downside of this is that your inbox now shows that every external sender comes from the same domain.

I know that you can configure RSPAMD to skip DKIM and SPF when the sender IP is on some white-list. Unfortunately I do not have any experience with RSPAMD. I'll see whether I can find the time to experiment.
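The body-hash breakage described above can be illustrated with a simplified Python sketch. This only mimics the spirit of DKIM's bh= body hash; real DKIM canonicalizes the body first and also signs selected headers, and the message bodies here are made-up stand-ins:

```python
import base64
import hashlib

def body_hash(body: bytes) -> str:
    """Simplified stand-in for DKIM's bh= value: base64(SHA-256(body)).
    Real DKIM canonicalizes the body and covers headers too."""
    return base64.b64encode(hashlib.sha256(body).digest()).decode()

# The sender signed the encrypted body; after the gateway decrypts,
# the body bytes differ, so the recorded hash no longer verifies.
bh_signed = body_hash(b"-----BEGIN S/MIME ENCRYPTED BLOB-----")
bh_after_decrypt = body_hash(b"Hello, this is the decrypted plaintext")
```

This is why validation must happen before the gateway rewrites the message, and why the second pass through the anti-spam system has to skip the DKIM check.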
How to update Python on a Raspberry Pi

I need the newest Python version on my Raspberry Pi. I tried

    apt install python3 3.8
    apt install python3

but this did not work. I also need to update the Python IDLE on my Raspberry Pi.

Answer: https://www.ramoonus.nl/2020/10/06/how-to-install-python-3-9-on-raspberry-pi/

Answer: I would recommend using a Python version manager such as pyenv. Don't try to change the default Python, as that can break the OS.

Answer: First update Raspbian:

    sudo apt-get update

Then install the prerequisites that will make any further installation of Python and/or packages much smoother:

    sudo apt-get install -y build-essential tk-dev libncurses5-dev libncursesw5-dev libreadline6-dev libdb5.3-dev libgdbm-dev libsqlite3-dev libssl-dev libbz2-dev libexpat1-dev liblzma-dev zlib1g-dev libffi-dev

And then install Python by downloading a source archive, for example:

    wget https://www.python.org/ftp/python/3.8.0/Python-3.8.0.tgz

Extract the archive:

    sudo tar zxf Python-3.8.0.tgz

Move into the folder:

    cd Python-3.8.0

Initial configuration:

    sudo ./configure --enable-optimizations

Run the makefile with the mentioned parameters:

    sudo make -j 4

Run the makefile again, this time installing the package directly:

    sudo make altinstall

Maybe you already did this but don't know how to set the new version as the system default? First check that it has been installed:

    python3.8 -V

Send a strong command to .bashrc telling it who (which version) is in charge of Python:

    echo "alias python=/usr/local/bin/python3.8" >> ~/.bashrc

Again! Tell it, because .bashrc has to understand! I am joking - you have to source the file so the changes are applied immediately:

    source ~/.bashrc

And then check that your system changed the default version of Python to 3.8:

    python -V

Whether this fails depends on many factors: which dependencies are installed, which packages were added under sources.list.d, and any hiccups coming up during the installation. All of these may give you more information than you think; just read carefully. Hope it helped.

Comment: I guess this is old, but isn't there a more... normal... way to update a required program? I'm new to the Pi, and by extension Linux, but this is for a final year project. Is there really no decent/normal way for Linux to update a Python version? It feels like it's overcomplicated just for fun. I'm more used to there being 1-3 commands to update a library, not a whole essay that I need to rewrite from memory on a different machine (the Pi doesn't do well with browsers, because of RAM). I don't know what the constraints are, but this feels way out of necessary bounds. But hey, this is Linux.

Comment: Thanks Xerozz, amazing answer with a detailed, kind explanation, which is very valuable for newbies like myself. Cheers.

Comment: Linux is painful.

Comment: Ah, of course. Everyone's favorite "easy, beginner friendly" programming language. Just compile it from source to update, no big deal... Is this truly the simplest solution?

Answer: This is because the latest version of Python isn't in the Raspberry Pi repos (I just checked and it's still on 3.7?!?!). I would poke them about this. Python is a very common tool on Raspberry Pis, so the fact that they do not keep the repos up to date is mind blowing.

Answer: To all of you who had a problem with your RPi 3 freezing during this step:

    sudo make -j 4

just change it to:

    sudo make -j 2

or simply:

    sudo make

Best regards

Comment: If you want to suggest an answer like this, please also give your reasoning. Is this a workaround for a temporary bug, or is it always necessary? What does the -j flag do?

Comment: @AlexSpurling It specifies the number of CPU threads to use. This thread explains it: https://stackoverflow.com/questions/15289250/using-make-with-j4-or-j8/15295032#15295032

Answer: To anyone looking at this answer in 2024: you can upgrade to Bullseye and install Python 3.9 directly from apt.

Back up your sources lists:

    sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
    sudo cp /etc/apt/sources.list.d/raspi.list /etc/apt/sources.list.d/raspi.list.bak

Edit the sources list; open it in a text editor:

    sudo nano /etc/apt/sources.list

Replace all instances of stretch with bullseye. Your file might look something like this after the change:

    deb http://raspbian.raspberrypi.org/raspbian/ bullseye main contrib non-free rpi

Update, and install Python 3.9:

    sudo apt update
    sudo apt install -y python3.9

Answer: Follow the commands below to install the version you want:

    tar xf Python-3.x.x.tar.xz
    cd Python-3.x.x
    ./configure --enable-optimizations
    make
    sudo make install

Once completed, run:

    python -V

Answer: You can painlessly create virtual environments of any recent Python version (3.9 up to 3.13 worked fine for me) using the uv tool. First make sure to install uv, e.g. via:

    curl -LsSf https://astral.sh/uv/install.sh | sh
    # avoid the need to exit/restart the shell
    source $HOME/.local/bin/env

Then create the virtual environment:

    uv venv mycustomvenv --python 3.13

Finally, activate it via e.g. (uv also supports other ways):

    . mycustomvenv/bin/activate
    python -V  # gives Python 3.13.X
If we do a level order traversal of the tree such that the right-most node of each level is visited before all other nodes in that level, then all we need to do to print the right view of the tree is to print the first node visited at each level. This can be done by doing a level order traversal from right to left instead of the usual left to right, and keeping track of the maximum level seen so far to find out when a new level starts. As soon as we find that a new level has started, we print the node currently being visited. The steps for this algorithm are:

1. Initialize maxLevelSeenSoFar to -1 and call printRightViewLevelOrder(currentNode = root).
2. In function printRightViewLevelOrder, if currentNode is null, do nothing and return.
3. Else, add the tuple (node = currentNode, level = 0) to 'queue'.
4. While 'queue' is not empty:
   - Remove the first tuple (currentNode, level) from the queue.
   - If (level > maxLevelSeenSoFar), then we know that we are starting to traverse a new level and this is the first (and right-most) node for that level; therefore we print currentNode's value and update maxLevelSeenSoFar to level.
   - If the right child of currentNode is not null, add the tuple (currentNode.right, level + 1) to 'queue'.
   - If the left child of currentNode is not null, add the tuple (currentNode.left, level + 1) to 'queue'.

   Please note that we add the right child of a node to 'queue' before the left child, to make sure that the right-most node in any level is visited before the other nodes in that level.

After execution of step 4, the right view of the tree has been printed. Notice that this algorithm takes O(n) extra space. With the following algorithm, we can save on that extra space. It uses a modified pre-order traversal in which we make sure that:

a. The right-most node of any given level is visited before the other nodes in that level. This is easily achieved by visiting the right sub-tree of a node before its left sub-tree. Basically, in this traversal, we visit the node first, then the right sub-tree, and finally the left sub-tree (N-R-L).
b. We print a node's value as soon as we encounter a node whose level is greater than the maximum level seen so far, and update the maximum level seen so far to the current level.

The steps of this algorithm are as follows:

1. Initialize maxLevelSeenSoFar to -1 and call printRightView(currentNode = root, level = 0).
2. In function printRightView(currentNode, level):
   a. If currentNode is null, do nothing and return.
   b. Else, if (level > maxLevelSeenSoFar), print currentNode's value and update maxLevelSeenSoFar to level.
   c. Make a recursive call printRightView(currentNode.right, level + 1) to make sure the nodes in the right sub-tree are visited.
   d. Make a recursive call printRightView(currentNode.left, level + 1) to make sure the nodes in the left sub-tree are visited.

Notice that printRightView(currentNode.right, level + 1) is called before printRightView(currentNode.left, level + 1) in order to make sure that, for any node, the right sub-tree is visited before the left sub-tree. This guarantees that for any level, the right-most node is visited before the other nodes in that level.

After execution of step 2, the right view of the tree has been printed. Please check out the code snippet and algorithm visualization section for more details of the algorithm.
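Both procedures above can be sketched in Python. This is a minimal illustration; the Node class and the sample tree are assumptions made for the example:

```python
from collections import deque

class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def right_view_level_order(root):
    """Level order traversal visiting right children first (O(n) extra space)."""
    view = []
    if root is None:
        return view
    max_level_seen = -1
    queue = deque([(root, 0)])
    while queue:
        node, level = queue.popleft()
        if level > max_level_seen:       # first node of a new level...
            view.append(node.value)      # ...is the right-most one
            max_level_seen = level
        if node.right:
            queue.append((node.right, level + 1))  # right child queued first
        if node.left:
            queue.append((node.left, level + 1))
    return view

def right_view_preorder(root):
    """Modified pre-order (N-R-L) traversal; only O(height) stack space."""
    view = []

    def visit(node, level):
        if node is None:
            return
        if level == len(view):           # same test as level > maxLevelSeenSoFar
            view.append(node.value)
        visit(node.right, level + 1)     # right sub-tree before left sub-tree
        visit(node.left, level + 1)

    visit(root, 0)
    return view

#        1
#       / \
#      2   3        right view: 1, 3, 7
#       \   \
#        5   7
tree = Node(1, Node(2, right=Node(5)), Node(3, right=Node(7)))
print(right_view_level_order(tree))  # [1, 3, 7]
print(right_view_preorder(tree))     # [1, 3, 7]
```

The recursive version replaces the explicit maxLevelSeenSoFar counter with `len(view)`: the view list gains exactly one entry per level, so its length is always one more than the deepest level printed so far.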
S3C24XX GPIO Control¶

The s3c2410 kernel provides an interface to configure and manipulate the state of the GPIO pins, and to find out other information about them. There are a number of conditions attached to the configuration of the s3c2410 GPIO system; please read the Samsung-provided data-sheet/users manual for the complete list. See Documentation/arm/samsung/gpio.rst for the core implementation.

With the advent of GPIOLIB in drivers/gpio, support for some of the GPIO functions, such as reading and writing a pin, will be removed in favour of this common access method. Once all the extant drivers have been converted, the functions listed below will be removed (they may be marked as __deprecated in the near future). The following functions now either have an s3c_ specific variant or are merged into gpiolib. See the definitions in arch/arm/plat-samsung/include/plat/gpio-cfg.h:

- s3c2410_gpio_setpin() → gpio_set_value() or gpio_direction_output()
- s3c2410_gpio_getpin() → gpio_get_value() or gpio_direction_input()
- s3c2410_gpio_getirq() → gpio_to_irq()
- s3c2410_gpio_cfgpin() → s3c_gpio_cfgpin()
- s3c2410_gpio_getcfg() → s3c_gpio_getcfg()
- s3c2410_gpio_pullup() → s3c_gpio_setpull()

If you need to convert your board or driver from the phased-out s3c2410 API to gpiolib, here are some notes on the process.

If your board is exclusively using a GPIO, say to control peripheral power, then it will need to claim the gpio with gpio_request() before it can use it. It is recommended to check the return value, with at least WARN_ON() during initialisation.

s3c2410_gpio_cfgpin() can be directly replaced with s3c_gpio_cfgpin(), as they have the same arguments and can take either the pin-specific values or the more generic special-function-number arguments.

s3c2410_gpio_pullup() changes have the problem that while s3c2410_gpio_pullup(x, 1) can easily be translated to s3c_gpio_setpull(x, S3C_GPIO_PULL_NONE), the s3c2410_gpio_pullup(x, 0) case is not so easy. It enables the pull-up (or, on some of the devices, a pull-down), and as such the new API distinguishes between the UP and DOWN cases. There is currently no 'just turn on' setting, which may be required if this becomes a problem.

s3c2410_gpio_setpin() can be replaced by gpio_set_value(); note that the old call does not implicitly configure the relevant gpio to output. The gpio direction should be changed before using gpio_set_value().

s3c2410_gpio_getpin() is replaceable by gpio_get_value() if the pin has been set to input. It is currently unknown what the behaviour is when using gpio_get_value() on an output pin (s3c2410_gpio_getpin would return the value the pin is supposed to be outputting).

s3c2410_gpio_getirq() should be directly replaceable with the gpio_to_irq() call.

The s3c2410_gpio and gpio_ calls have always operated on the same gpio number space, so there is no problem with converting the gpio numbering between the calls.

See arch/arm/mach-s3c24xx/include/mach/regs-gpio.h for the list of GPIO pins and their configuration values. This is included by using:

    #include <mach/regs-gpio.h>

Each pin has a unique number associated with it in regs-gpio.h, e.g. S3C2410_GPA(0) or S3C2410_GPF(1). These defines are used to tell the GPIO functions which pin is to be used. With the conversion to gpiolib, there is no longer a direct conversion from gpio pin number to register base address as in earlier kernels. This is due to the number space required for newer SoCs, where the later GPIOs are not contiguous.

Configuring a pin¶

The following function allows the configuration of a given pin to be changed:

    void s3c_gpio_cfgpin(unsigned int pin, unsigned int function);

e.g.:

    s3c_gpio_cfgpin(S3C2410_GPA(0), S3C_GPIO_SFN(1));
    s3c_gpio_cfgpin(S3C2410_GPE(8), S3C_GPIO_SFN(2));

which would turn GPA(0) into the lowest address line A0, and set GPE(8) to be connected to the SDIO/MMC controller's SDDAT1 line.

Reading the current configuration¶

The current configuration of a pin can be read by using:

    s3c_gpio_getcfg(unsigned int pin);

The return value will be from the same set of values which can be passed to s3c_gpio_cfgpin().

Configuring a pull-up resistor¶

A large proportion of the GPIO pins on the S3C2410 can have weak pull-up resistors enabled. This can be configured by the following function:

    void s3c_gpio_setpull(unsigned int pin, unsigned int to);

where the to value is S3C_GPIO_PULL_NONE to set the pull-up off, and S3C_GPIO_PULL_UP to enable the specified pull-up. Any other values are currently undefined.

Getting and setting the state of a PIN¶

These calls are now implemented by the relevant gpiolib calls; convert your board or driver to use gpiolib.

Getting the IRQ number associated with a PIN¶

A standard gpiolib function can map the given pin number to an IRQ number to pass to the IRQ system:

    int gpio_to_irq(unsigned int pin);

Note, not all pins have an IRQ.
#!/usr/bin/env python
import sys
import argparse

from db_tseeker import dbclient
from schema_tseeker import User, Query

# set up for parsing command line arguments
parser = argparse.ArgumentParser(description='Add items to the tweeter-seeker database for workers to fill out.')
parser.add_argument('type', choices=['user', 'query'], help="The type of item(s) to be added.")
# TODO: add geo objects (i.e. all tweets within x km of a place)
parser.add_argument('items', help="A single item or list of items; queries or users as specified in the earlier arguments.")
parser.add_argument('-v', '--verbose', action="store_true", help="Increase output verbosity.")
parser.add_argument('--dbhost', default="localhost", help="Specify database address. Defaults to localhost.")
parser.add_argument('--dbport', type=int, default=27017, help="Specify database port. Defaults to 27017.")

# now do the actual parsing
args = parser.parse_args()

# connect to the db
db = dbclient(args.dbhost, args.dbport)
if not db:
    # args.dbport is an int, so convert it before concatenating
    print("Unable to connect to database at " + args.dbhost + ":" + str(args.dbport))
    sys.exit(1)

def addUsers(users):
    # receives array of Twitter user handles (may not be valid)
    toInsert = []
    rejected = []
    for user in users:
        # check if it already exists
        # TODO this is very slow, but will do for now
        exists = db.findOne('users', {"username": user}, {"username": 1})
        if exists is None:
            # turn the username into a valid object for insertion, and queue it up
            toInsert.append(User(user))
        else:
            print("User " + user + " is not unique.")
            rejected.append(user)
    # bulk insert whatever is unique
    db.insert('users', toInsert)
    return rejected  # in case we want to print a failure message

def addQueries(queries):
    # receives array of Twitter queries (may not be valid)
    toInsert = []
    rejected = []
    for query in queries:
        # check if it already exists
        # TODO this is very slow, but will do for now
        exists = db.findOne('queries', {"query": query}, {"query": 1})
        if exists is None:
            # turn the query into a valid object for insertion, and queue it up
            toInsert.append(Query(query))
        else:
            print("Query " + query + " is not unique.")
            rejected.append(query)
    # bulk insert whatever is unique
    db.insert('queries', toInsert)
    return rejected  # in case we want to print a failure message

# items comes in as a comma separated list, may have leading/trailing spaces
items = [x.strip() for x in args.items.split(",")]

if args.type == 'user':
    addUsers(items)
elif args.type == 'query':
    addQueries(items)
else:
    # the arg parser should catch this, but just in case
    print("Unrecognized item type: " + args.type)
    sys.exit(1)
How does Google manage its service package installation and upgrades? RPM, yum and apt are very popular tools in the Linux world, but I don't know whether Google uses them to manage its internet services, for things like service versioning and rollback.

Answer: I'm guessing you're asking how they manage to update their systems without ever having downtime. The answer is a fair amount of load balancing and high availability clustering (more the latter). With HA clustering, you can migrate services (including the IP addresses associated with them) between the various nodes in the cluster. After dependent resources are migrated, the service (apache, samba, whatever) gets started on the other node, then the VIP (virtual IP) is finally migrated to the destination. So they just need to migrate whatever service is running on the node they're updating to another node, do their work, then move it back when they're done. I don't know what HA software Google uses, but if you're interested in doing this yourself, Cluster Suite is the only FOSS product I know of that's any good (not that I have much experience with other products). There are other non-cluster HA solutions like LVS you could look into as well.

Comment: Thanks a lot, that answers part of my question. I also want to know whether they use RPM- or apt-get-like tools for their operational work. That is, do they package their software as RPM or deb packages, install it with the tool, and resolve dependencies with the tool? Or do they develop their own tools? I think the standard tools are weak in some areas for production services, for example in their support for rollback.

Answer: I'm not sure what Google uses as a package manager, but I can't imagine they would roll their own. I'm a RHEL admin and most places I've worked have used yum without issue. Rollbacks are supported for kernels (in that it lets you keep three kernels installed in case a newer one breaks something). Tools claiming to support rollback and actually supporting it are sketchy at best. Yum supports "downgrade", and I'd wager that's as close to a rollback as you can get. Yum is actually such a good tool that Solaris 11 essentially implements the idea for its pkg and pkgrepo utilities. Also, most enterprise environments have "development" and/or "testing" environments: "development" is for internal solutions development, and "testing" is obvious. Usually, admins are supposed to be pretty sure of what they're applying before they actually do it. Even then, they go through the test environment and verify functionality before they apply it to production. If the test environment is set up to mirror production, any issues should surface there.

Comment: Thanks Joel. Do you think an internet application can be managed with yum or RPM tools? The truth is our application changes frequently, maybe one or two releases per month, and if we hit a site issue we have to deploy a package the same day. Of course, the new package may not be well tested, so the rollback function is very important for our product. What I have decided is to implement the installation, rollback and version management strategy ourselves. Doing it ourselves will, I think, be more flexible for our situation, and we will also know what to do (maybe some workaround) if we hit critical issues.
Early computers were enormous affairs that oftentimes filled entire rooms. In the 1950s, however, early technologists were predicting that within a few decades these behemoths would be small enough to fit on a desk, and common enough that everyone would own one. Unlike many of the other extremely optimistic predictions of the era, this soon proved to be the case. Until the end of the 1960s there was simply no way to shrink a computer past a certain point, even had there seemed to be a need to do so. At the end of the 1960s, however, the military began to invest heavily in smaller computers for use in fighter planes. By 1970 the microprocessor had essentially been invented, drastically reducing the size needed for a computer processor and opening the door to smaller and smaller computers. Minicomputers came on the scene a few years before the true first PCs. These were small enough to fit on a desktop, but prohibitively expensive for any normal consumer, making them somewhat different from the modern conception of a PC. Within a few years, however, the technology had trickled down, and the first PCs began to be created in hobbyists' basements and garages. In 1975, the first mass-produced PC kit, the Altair 8800, was released, a year after a less complete kit appeared as the Mark-8. These kits became enormously popular, with software written for them by two programmers, Paul Allen and Bill Gates, and their company Micro-Soft. A year later Stephen Wozniak and Steven Jobs started their own personal computer business, the Apple Computer Company, also offering a kit along the lines of the Altair. A year after that, the company released a pre-assembled version of their computer, the Apple II, which became virtually an overnight success. In 1981, International Business Machines (IBM) decided to enter the personal computer world.

With their massive resources and decades of experience creating mainframes, they released their own desktop, which they called the PC 5150. This was the first widespread use of the term PC, although it was only one of the first PCs. These first PCs were a far cry from the computers of today, but had a surprising number of similarities. The Altair 8800 featured a motherboard with a number of slots for various cards which held things such as the memory and the CPU. On the front of the computer was a plate with various switches and lights, used to input binary data directly into the computer and see instant feedback. Using these first PCs basically consisted of inputting complex programs by toggling switches in specific sequences. A few years before the Altair 8800 came another of the first PCs which, although it did not achieve widespread fame, did implement a number of important features that would later impact personal computers as a whole. The Xerox Alto appeared in 1973, and had features such as a graphical user interface, the idea of a desktop upon which various items sat, and a mouse for interacting with that desktop. Although the Alto eventually faded into relative obscurity, many of the ideas it introduced would later be resurrected in Apple's computers, and eventually in PCs as a whole. By 1977, the first PCs were on their way to looking like modern PCs, and by the early 1980s they had most of the features, albeit in a less aesthetic and diminished capacity. Mice, full keyboards, disk drives, and RAM were all found on popular computers such as the Apple Macintosh, the Xerox Star, and the Atari ST. Color was widely introduced at this time, and over the years hardware became more robust, software became more efficient, and the internet offered widespread connectivity, forever transforming these first PCs into modern machines that dwarf even the most powerful supercomputers of the 1970s.
The purpose of the project is to summarise effort from a number of analytic libraries, add an interactive web-based user interface, and provide a free open source solution for risk analytics and stress testing. Feb 8, 2012: Paul Glasserman's Importance Sampling and Tail Approximations, as well as plain Monte Carlo, have been implemented for the widely used normal copula model of portfolio credit risk. The package includes source code, examples, a spreadsheet with results, and references to the papers.

Simple file logger. Androger is a file logger that shows logged files in a GUI interface. It supports watching multiple files and filtering their content. It's supposed to be fast, simple & robust. Just run it and choose the log file to be watched. This application requires Java 7 or greater to be installed (https://www.java.com/en/download/).

Anime DB is a program for building a home anime catalog, designed for noncommercial home use.

Kind of macro player for Windows, written in Java. Its original purpose was to control a telescope through routine commands late at night. Now general purpose, it can simulate mouse clicks, keyboard presses and changes of window focus.

This is a very simple board that everyone can use on their website. It doesn't require any database. Just fill in the config.php file with the title and footer of your website and all will run well. You can also change the style to suit your taste.

Subset of QML widgets styled as the Plasma Breeze theme for mobile devices. The goal is to implement Breeze-like widgets using mostly QtQuick primitives for Android and other devices. In the features list you can see the elements that already work.

A simple open source text editor. Clonepad is a simple open source text editor, licensed under the GNU General Public License v. 3.0.

Recursive source code line counter for C, BASIC, and web files. Recursively counts lines of source code and comments through files and sub-directories. Created to parse entire projects rather than individual files. C, BASIC, and web files (general) are supported.

Convman is a multi-purpose conversation manager. It allows: cleaning Facebook conversations for storage; converting Nokia/Android SMS to mbox files; converting Tim's last 10 free sms sent to mbox; and (wip) sending free sms from Tim's site and saving the content to an eml file. Hope you like it and find it useful! The icon is a modified one (text-editor) from the Faenza icon set http://tiheum.deviantart.com/art/Faenza-Icons-173323228 The mork parser is a modified version of http://www.scalingweb.com/mork_parser.php

Oracle databases can be designed in Excel. This small API will help to generate SQL scripts for the designed tables. One can generate the Excel file from the existing tables and group them by module, master/transaction table, etc.

DotNet wrapper for the Digital Enlightenment DMX IF. This is a .net wrapper to access the Digital Enlightenment DMX interface. You can find all information related to Digital Enlightenment on http://www.digital-enlightenment.de (German page). To run the application, you have to get the usbdmx.dll from the Digital Enlightenment website (http://www.digital-enlightenment.de/usbdmx.htm). The dll is part of the project archive linked at the bottom. You also need NLog for the library to compile. The project is configured with NuGet to make it easier to get the required packages.

Lightweight fully extendable client/server application framework. DotNetOpenServer SDK is an open source lightweight fully extendable TCP socket client/server application framework enabling developers to create highly efficient, fast, secure and robust cloud based smart mobile device and desktop applications. Why? Unlike most application server frameworks, which are implemented over slow inefficient stateless protocols such as HTTP, REST and SOAP that use bulky ASCII data formats such as JSON and XML, DotNetOpenServer has been built from the ground up with highly efficient stateful binary protocols.

Open, extendable and eye-candy clipboard manager. Easy Clipboard Organizer is an open, extendable and eye-candy clipboard manager. You can browse the history of copied texts and files, translate selected text, simultaneously paste files from different directories, or save part of the screen to the clipboard or a file. By using plugins and themes you can customize the functionality and appearance to your needs. If you are an office worker, programmer, student, or you just work with texts or files - ECO will make your life easier.

Offers tools to easily manage windows, including automatic alignment and sizing, pinning and unpinning (set on top), shortcuts to stack and view side by side, and shortcuts to change size and position.

A database to support large history research projects. This is a desktop application focused on aiding individuals working on large projects like books and dissertations.

Search and download torrents from mininova.org auto-magically!

If you are a developer, you can compare two mdb (Access 2000) databases, and the old database will be upgraded with the new one. Copy CompareData.mdb and 'newdatabase.mdb' to your 'olddatabase.mdb' directory, and run this application.

Maintenance Manager is a help desk application written in Java to allow tracking of "trouble" tickets and their resolution. Please see http://maintmgr.sourceforge.net for installation instructions.

Downloader for mangas. Downloads mangas from the sites mangafox.com, mangareader.net, mangastream.com. With official permission of mangafox.com. For the other permissions I am still waiting... so please wait with downloading from those sites until I have the permission! THIS PROJECT HASN'T BEEN UPDATED FOR QUITE SOME TIME. IF SOMEBODY IS STILL INTERESTED IN IT HE CAN SEND ME A MESSAGE AND I WILL UPDATE THE PROGRAM TO WORK WITH THE NEW VERSIONS OF THE WEBSITES!!!

MultiStart starts several programs on a list with only one click, with a given interval and parameters. MultiStart helps you edit the list, and shows the list as buttons. Thus you can also use MultiStart as a menu and start the programs one by one.

A text editor written in Java. MultiText Editor is a program for editing plain text, xml, rtf and html documents. Several files can be opened at a time. Supports searching in text, and options for changing the font, color and style of text.

Open source file downloader. Download files automatically at a scheduled hour. You need .NET 4 to run OFD. OFD makes use of the open source framework CSharpFrame, https://sourceforge.net/projects/csharpframe, an infrastructure that helps engineers develop enterprise software solutions in Microsoft .NET easily and productively, consisting of an extendable and maintainable web system architecture with a suite of generic business models, APIs and services.

Cataloguing orders and references, customer relationships, statistics. The program will help you to easily catalog references, organize relationships with customers and obtain income and expense statistics. Dear artists, crafters and all who are interested in this program: the program is developed and maintained personally by me on a non-commercial basis. It is designed for private artists, crafters and those engaged in this kind of freelancing. You can use the program absolutely free, including for commercial purposes, to organize your own activities.

Version 0.1 of the fonts released. Download fonts that support the Seraiki/Saraiki language. You may contribute to developing Seraiki-supporting fonts. For more details see the project web site.
The XML batch upload is a flexible upload feature that is an excellent option when it comes to integrating with 3rd party systems. It uses 3rd party ID to synchronize locations and allows certain data elements including search log data to persist when updates are made to data within a location record. The XML upload uses a special caching technique that stores a snapshot of the entire XML for each location on the system. When new XML files are imported, it compares the incoming XML for each location with the last cached version. Through this comparison, Bullseye determines which records have been updated, which records have been deleted and which records are new. Once changes have been identified, it will only act on those records that have changed. The upside to this is that when there is very little data that is changed, the comparison and update require very little processing time because they don't have to loop through each record even when changes have not been made. However, because of this, it is required that each XML file be a full dataset so deletions and additions can be identified. By default, the XML upload ignores specific data elements housed in Bullseye. This is to allow content which would be web facing and not part of another system to be maintained in Bullseye while still allowing other core data to be managed within a 3rd party system that is updated via XML. The elements that are excluded by default are: - SEO information - Landing page images - Social media links These can however also be configured to be updated as part of the XML update. Likewise, we can configure other elements to be excluded from the XML update. This would be a consideration if you were delegating content management responsibilities to someone who would be updating this data via the Bullseye admin but still having an XML integration with another system. 
Elements that can be excluded from the XML upload include:
- Attributes (currently we do not have the ability to exclude specific attributes; it's all or nothing)
- Territory mappings
- Business hours
- Location preview image
Another important data consideration is the use of holiday hours. Holiday hours can be updated via the XML import or through the Bullseye admin. If the XML file includes holiday hours, the import will compare dates. If an entry already appears in Bullseye but is not included in the XML import, the import will leave the existing record. If the date matches and the hours have changed, Bullseye will update the record with the XML information. If a record does not exist in Bullseye but is in the XML import, a new record will be added. This flexibility allows changes to be made through the admin as well as via XML. Notes on Holiday Hours: keep in mind that if an XML file contains a holiday hours entry which is then modified through the admin, that record will also need to be updated in the source system for the XML. Otherwise, it runs the risk of being overwritten the next time the XML import is run. If you wish to have holiday hours maintained only through the Bullseye admin, do not include them in the XML import.
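The snapshot-comparison step described above can be sketched as follows. This is an illustrative sketch, not Bullseye's actual code: the function name and dictionary shapes are assumptions, with each mapping keyed by 3rd-party location ID and valued with that location's raw XML snapshot.

```python
def diff_against_cache(cache, incoming):
    """Classify incoming records as new, updated, or deleted by comparing each
    location's XML against the last cached snapshot. Because deletions are
    inferred by absence, `incoming` must be a full dataset."""
    new = [lid for lid in incoming if lid not in cache]
    updated = [lid for lid in incoming
               if lid in cache and incoming[lid] != cache[lid]]
    deleted = [lid for lid in cache if lid not in incoming]
    return new, updated, deleted
```

Only the `new` and `updated` lists need further processing, which is why a mostly-unchanged import completes quickly.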
from Sources.slottedAloha import SlottedAloha
from Sources.csmap import CSMAp
from Sources.recuobinario import CSMAp_recuobinario
from statistics import fmean, stdev

def CalculosEstatisticos(VetorTPrimeiro, VetorTTodos):
    # Computes means and standard deviations; receives vectors containing the 33
    # measured times for the first machine to send and for all machines to send.
    print("\n\t->O tempo medio para a primeira maquina enviar foi de: %.5f microssegundos" % fmean(VetorTPrimeiro))
    print("\n\t->O desvio padrao do tempo para a primeira maquina enviar foi de: %.5f microssegundos" % stdev(VetorTPrimeiro))
    print("\n\t->O tempo medio para todas as maquinas enviarem foi de: %.5f microssegundos" % fmean(VetorTTodos))
    print("\n\t->O desvio padrao do tempo para todas as maquinas enviarem foi de: %.5f microssegundos" % stdev(VetorTTodos))

if __name__ == "__main__":
    while True:
        n = int(input("Entre com a quantidade de maquinas:"))
        # Menu
        print("Escolha o algoritmo desejado:\n\t1 - Slotted Aloha\n\t2 - CSMA p-persistente\n\t3 - Algoritmo de recuo binario exponencial")
        a = 0
        while a not in (1, 2, 3):
            a = int(input("Entre:"))
            if a not in (1, 2, 3):
                print("Opcao invalida!")
        if a == 1:  # Slotted Aloha
            VetorTEnviadoPrimeiro = []  # Time for the first machine to send, one entry per run
            VetorTTotal = []            # Time for all machines to send, one entry per run
            for k in range(33):
                # SlottedAloha returns a vector with the send time of each machine
                TemposAloha = SlottedAloha(n)
                # Smallest time = first machine that sent; largest = last machine
                VetorTEnviadoPrimeiro.append(min(TemposAloha[:n]))
                VetorTTotal.append(max(TemposAloha[:n]))
            CalculosEstatisticos(VetorTEnviadoPrimeiro, VetorTTotal)
        if a == 2:  # p-persistent CSMA
            VetorTEnviadoPrimeiro = []
            VetorTTotal = []
            for k in range(33):
                # CSMAp returns [first machine's send time, time for all to send], in slots
                Tempos_CSMAp = CSMAp(n)
                VetorTEnviadoPrimeiro.append(Tempos_CSMAp[0] * 51.2)  # convert slots to microseconds
                VetorTTotal.append(Tempos_CSMAp[1] * 51.2)
            CalculosEstatisticos(VetorTEnviadoPrimeiro, VetorTTotal)
        if a == 3:  # Binary exponential backoff
            VetorTEnviadoPrimeiro = []
            VetorTTotal = []
            for k in range(33):
                # Returns [first-to-send time, all-sent time], or False on error
                Tempos_CSMApRecuoBinario = CSMAp_recuobinario(n)
                if Tempos_CSMApRecuoBinario == False:
                    # More than 16 collisions aborts the algorithm
                    print("\n\tERRO! Houveram mais de 16 colisoes no algoritmo de recuo binario!")
                else:
                    VetorTEnviadoPrimeiro.append(Tempos_CSMApRecuoBinario[0] * 51.2)
                    VetorTTotal.append(Tempos_CSMApRecuoBinario[1] * 51.2)
            if len(VetorTEnviadoPrimeiro) > 0 and len(VetorTTotal) > 0:
                CalculosEstatisticos(VetorTEnviadoPrimeiro, VetorTTotal)
        print("\n\nDeseja voltar ao menu inicial?\n\t1 - Sim\n\t2 - Nao")
        b = 0
        while b not in (1, 2):
            # Small menu asking whether to quit or return to the main menu
            b = int(input("Entre:"))
            if b not in (1, 2):
                print("Opcao invalida!")
        if b == 2:
            break
A scaled deployment of an IRIS web application tends to have multiple web servers. These load-balance traffic well, providing good utilization on busy applications. Web servers take over the checking and serving of static content, letting the database server focus on data-query-centered tasks. Recently, vector similarity search with generative AI has gone mainstream. This can be broken down into several steps of activity:
1) Generate an encoding for an input question. For example: convert the string "Tell me how to increase the lock table size" into an encoding for vector search.
2) Using the new encoding, scan the index of a Vector column and calculate the highest-similarity content.
3) Return the one or more pieces of text content with the highest similarity.
4) [optional] Generate content based on a prompt and the retrieved text content.
So the idea here is to delegate / offload both steps 1 and 4 to the web servers, while retaining vector search within the main database. A web server deployment would normally use a CPU-type cloud node. So the hypothesis is for the web server to alternatively be hosted on a TPU (inference) type node, where it hosts a model for:
- Deriving the encoding of input
- [optional] Generative output, enriching outbound content
This mitigates the need for:
- Sharing a model with the client for generating encodings
- An additional pre-query request-response cycle step to generate the encodings next used for vector search
Needed:
- A convention for client web requests to indicate which form fields require generating an encoding
- Configuration to allow the functionality for a CSP application
- Metrics for utilization
- A header shared with the database to indicate that generative AI is available for response processing
For output processing, there would need to be:
- A "prompt placeholder", removed after processing
- One or more "content placeholders", removed after processing
- An "output placeholder", replaced by the generative content
- Metrics for utilization
Output prompt templates could be focused on:
- Shortening the length of text
- Converting lists to sentences
- Redacting / anonymizing content where appropriate, based on user context
Web Browser -> Web Gateway Encode -> Database Search and output -> Web Gateway Generative Transform -> Web Browser Render
For uploaded picture encoding, a web gateway-to-database connection optimization could be to discard the uploaded picture content after first generating the encoding vector and/or classifications, i.e. where only the lightweight encoding / classifications need to be passed on as input to the database. Example: wound / burn / rash / skin inflammation identification from an uploaded image, with retrieval and summary of corresponding patient knowledge data. Essentially this is looking for a viable way for encoder and generative-transform functionality to be provided out of the box, in a pluggable way, to enrich existing and new web applications, so application developers don't need to reinvent the approach, or obtain new training or experience, to achieve greater impact. Thank you for submitting the idea. The status has been changed to "Needs review".
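The encode-then-search flow (steps 1-3 above) can be sketched end to end in Python. Everything here is a toy stand-in and an assumption: the bigram encoder substitutes for the real web-tier model, and the in-memory corpus substitutes for an IRIS Vector column.

```python
import math

def embed(text, dims=32):
    # Step 1 stand-in: deterministic character-bigram hashing into a small,
    # unit-normalised vector. A real deployment would call an inference model.
    vec = [0.0] * dims
    for a, b in zip(text, text[1:]):
        vec[(ord(a) * 31 + ord(b)) % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def vector_search(query_vec, corpus, top_k=1):
    # Steps 2-3 stand-in: score stored vectors by cosine similarity
    # (vectors are unit-normalised, so the dot product suffices).
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    return sorted(corpus, key=lambda d: dot(query_vec, d["vec"]), reverse=True)[:top_k]

docs = ["You can increase the lock table size in the configuration.",
        "Bananas are yellow and tasty."]
corpus = [{"text": t, "vec": embed(t)} for t in docs]
hits = vector_search(embed("Tell me how to increase the lock table size"), corpus)
```

The point of the proposal is simply where `embed` runs: on the web-server tier (possibly an inference node) rather than on the client or the database server.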
Interact with a report in Reading view in Power BI
Reading view is not as interactive as Editing view, but it still gives you many options for exploring your data. This comes in handy when viewing reports shared with you, which can only be opened in Reading view. Reading view is a fun and safe way to play with and get to know your data. In Reading view you can cross-highlight and cross-filter visuals on a page: simply highlight or select a value in one visual and instantly see its impact on the other visuals. Use the Filter pane to add and modify filters on a report page, and change the way values are sorted in a visualization. Any filtering and highlighting that you do is not saved with the report.
Cross-highlight the related visualizations on a page
The visualizations on a single report page are all "connected" to each other. This means that if you select one or more values in one visualization, other visualizations that use that same value will change based on that selection. To select more than one element in a visualization, hold down the CTRL key. Hover over visual elements to see the details.
Sort the data in a visualization
Select the ellipses (...) to open Sort by. Select the dropdown arrow to choose which field to sort by, or select the AZ icon to switch between ascending and descending.
Interact with filters
If the report author added filters to a page in a report, you can interact with them in Reading view. Changes you make will not be saved with the report. Select the Filter icon in the upper-right corner. You'll see all filters that have been applied to the visual you have selected (Visual level filters), across the whole report page (Page level filters), and across the entire report (Report level filters). Hover over a filter and expand it by selecting the down arrow. Make changes to the filters and see how the visuals are impacted. In this example, we have a Page level filter for Chain.
Change it to Fashions Direct instead of Lindseys by removing the checkmark from one and adding it to the other. Or completely remove filtering on Chain by selecting the eraser icon or by selecting both chain stores. Select the District page level filter and switch to Advanced filtering. Filter to show only districts that start with FD and don't contain the number 4.
Zoom in on individual visuals
Hover over a visual and select the Focus mode icon. When you view a visualization in Focus mode, it expands to fill the entire report canvas as shown below. To display that same visualization without the distraction of menu bars, the Filter pane, and other chrome, select the Full Screen icon from the top menu bar.
Adjust the display dimensions
Reports are viewed on many different devices, with varying screen sizes and aspect ratios. The default rendering may not be what you want to see on your device. To adjust, select View and choose:
- Fit to Page: scale content to best fit the page
- Fit to Width: scale content to the width of the page
- Actual Size: display content at full size
In Reading view, the display option you select is temporary; it is not saved when you close the report. To learn more, see Tutorial: Change display settings in a report. More questions? Try the Power BI Community
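The advanced-filter condition from the walkthrough ("start with FD and don't contain the number 4") amounts to a simple predicate. Here it is as a Python sketch; the district names used below are hypothetical, and Power BI evaluates this condition internally rather than running any code of yours.

```python
def district_filter(name):
    # Advanced filter: starts with "FD" AND does not contain the digit 4.
    return name.startswith("FD") and "4" not in name

# Applying it to a hypothetical district list keeps only matching rows:
districts = ["FD-01", "FD-04", "NC-12", "FD-22"]
visible = [d for d in districts if district_filter(d)]
```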
Description: The InnovidXP platform delivers reach, frequency, incremental reach, and attributed outcomes across linear, CTV, and digital video buys. The articles below are included in the Using the Product section:
- InnovidXP User Guide: Describes how to use XP and includes the latest enhancements to the Open Beta version of the platform.
- XP + Ad Serving Reach & Frequency Guide: Provides details about XP for clients who are ad serving with Innovid, including a detailed summary of our CTV data granularity in the measurement tab (Reach and Frequency).
- XP + Ad Serving OTT and Reach Extension User Guide: Describes the OTT and Reach Extension metrics available to InnovidXP + Ad Serving customers in the U.S. only.
- Approach to Linear TV Attribution: Describes our approach to linear attribution and outlines how the probabilistic spike model works vs. the linear impression-based model.
- How InnovidXP Captures Responses: Explains how responses are captured and calculated in InnovidXP.
- Calibration Process Overview: Describes the InnovidXP calibration process and what it involves.
- Lag Model Overview: Learn how the Lag Model operates, when to apply it, and the effects on multiple spots and digital.
- Control Groups Overview: Gives an overview of control groups and explains how and why control groups are used in InnovidXP.
- Provides an overview of what the InnovidXP feature Predict is, how it works, metrics, and recommendations.
- Provides an overview of the InnovidXP extrapolation model and why it is used.
- Explains how to enable and display the Newsfeed page and view notifications.
- Provides an overview of the InnovidXP hybrid approach and explains why it is used and how it works.
- Setting Web Traffic Classifications: Describes how to set traffic classifications for your web responses and how to add a new rule.
- Impression Device Class Definitions: Provides a list and definitions of the impression device classes found in InnovidXP.
- InnovidXP Impression Data Guide: Details what to provide when sending customer linear and/or streaming impression data to Innovid.
- Approach to Seasonality in Attribution: Outlines how InnovidXP deals with seasonality in our attribution method.
- Data Evaluation Process: Informs data partners of the Innovid data evaluation process and provides information for each step.
- Compares InnovidXP's approach to our three key methodologies: spike linear model, impression-based linear, and CTV/OTT.
- InnovidXP Mapping Tool Overview: Provides an overview of the InnovidXP mapping tool and explains how to format and upload impression mapping files.
- Threshold Filter Overview: Details the mechanism to filter metrics from the InnovidXP Pivot area based on predefined criteria (thresholds).
- FAQs: Using the Product: Answers commonly asked questions when using the InnovidXP platform.
Importing nmslib: cannot load any more object with static TLS
I am getting a dlopen error when importing nmslib after tensorflow:
python -c "import tensorflow; print(tensorflow.contrib); import nmslib"
  File "<string>", line 1, in <module>
  <module 'tensorflow.contrib' from ...>
ImportError: dlopen: cannot load any more object with static TLS
(printing tensorflow.contrib simulates importing a module that uses tensorflow, e.g. extending its classes)
Switching the order gives no error, and everything works correctly. But in the context of a large code base, it is hard to ensure that nmslib is always imported first, since there are many entry points. The same thing happens with tensorflow, and it looks similar to this issue: https://stackoverflow.com/questions/50398358/import-error-when-unit-testing-flask-application-with-nmslib
This does not happen on my Mac, but only on our CircleCI build, running in a circleci/python:3.6 docker image. Any suggestions for how to fix this? I have tried setting the env variable CXXFLAGS="-fPIC -ftls-model=global-dynamic" when pip installing nmslib.
Hi @matthen thank you for the thorough investigation! does tls-model=global-dynamic help?
No it doesn't :/
Funny, we don't use much thread-local storage: just a few bytes.
I was trying to reproduce the numpy example, but it seemed to be working on a new instance. I replaced the original comment above with a tensorflow example that is failing. In practice the issue looks like:
module using tensorflow, module_a.py:
import tensorflow
class MyClass(tensorflow.contrib....):
    ...
module using nmslib, module_b.py:
import nmslib
...
entry point, e.g. a unit test, test.py:
import module_a
import module_b
gives the dlopen error. I think the tensorflow library is forcing static TLS for all the following imports, as it uses TLS and nmslib does not:
readelf -l tensorflow/libtensorflow_framework.so | grep TLS
  TLS 0x0000000000cc5360 0x0000000000cc6360 0x0000000000cc6360
no results for nmslib.
That explains why importing nmslib first causes no error. Another 'fix' is to set LD_PRELOAD=nmslib.cpython-36m-x86_64-linux-gnu.so, but I don't have a very good way of setting that everywhere / for everyone.
@matthen could you try adding the option -fPIC -ftls-model=global-dynamic inside the BuildExt function in python_bindings/setup.py? You can then install the library locally. I think pip likely ignores external CXX flags.
I checked the TLS program headers of my built nmslib.so file with readelf:
# built normally:
readelf -l nmslib.cpython-36m-x86_64-linux-gnu.so | grep TLS
  TLS 0x00000000003f8168 0x00000000005f8168 0x00000000005f8168
# built with CXXFLAGS="-fPIC -ftls-model=global-dynamic"
readelf -l nmslib.cpython-36m-x86_64-linux-gnu.so | grep TLS
(no results)
thank you for the thorough check. The internet says that with global-dynamic it should be fine, but it is not, argh. :-(
One trick that helped in one part of our code was to remove some parts of tensorflow we don't use, and that use TLS:
rm -rf tensorflow/contrib/tensor_forest && touch tensorflow/contrib/tensor_forest.py
@matthen interesting, how do you figure out which parts do?
Solved for me by placing import nmslib in the __init__.py file of the project
@guyalo this is interesting, thank you. @matthen could you check if this helped you?
Unfortunately that is not an option in the context of our work, as we are using a mono repo with multiple entry points, unit tests, scripts etc., not all of which should import nmslib
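The import-order workaround discussed above (load nmslib before TensorFlow claims the remaining static TLS slots) can be sketched like this in Python. The helper names are illustrative assumptions, not part of nmslib or TensorFlow.

```python
import importlib

def safe_order(module_names):
    # Put "nmslib" first so its shared object is dlopen'd before any
    # TLS-hungry libraries; all other modules keep their relative order
    # (sorted() is stable, and only nmslib gets the False sort key).
    return sorted(module_names, key=lambda name: name != "nmslib")

def load_in_safe_order(module_names):
    # Import each module in the adjusted order and return the module objects.
    return [importlib.import_module(name) for name in safe_order(module_names)]
```

In practice this only helps if every entry point funnels through one such loader, which is exactly the difficulty raised in the thread.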
Editor's Note: This post is part of a series produced by HuffPost's Girls In STEM Mentorship Program. Join the community as we discuss issues affecting women in science, technology, engineering and math. I wasn't always interested in going into a STEM field. I got into computer science by accident - a scheduling error in high school. Early on, I wasn't sure that computer science was where I was meant to be. It was important to me to see the fields of science, technology, engineering and math as a place where I could make a contribution that would actually impact the world for the better, and I didn't find that insight right away. But with time, I fell in love with computer science; even after achieving the highest degree in the field, I still come to work every day excited to learn something new. I had a lot of great advice and help along the way and I hope that I can pay it forward by sharing some of my experiences with you. Lesson 1: Celebrate your progress, instead of comparing to others The first part of this challenge was seeing that I was actually good at Computer Science. This was hard because I was comparing all of the challenges and insecurities I was feeling internally with the confident swagger I was seeing expressed by my male counterparts. I wanted to channel my energies into an area where I had aptitude and it took some time to recognize that my way of expressing expertise was just different from the boys'. My advice is to remember that your progress might look different from the way others measure their accomplishments, so find metrics of success that work for you and take time to celebrate achievements that are meaningful to you. Lesson 2: Find what drives you The second part of this challenge was that I needed to see how work in Computer Science could go beyond just being an interesting puzzle. I wanted to know how my efforts could help make the world a better place in ways as concrete as if I became a doctor or a teacher. 
It wasn't until my senior year of college that I actually got a chance to program something that I could see as being useful and show to others as the kind of contribution I hoped to make in the future. When I got to this point, I was hooked! But, getting there was sometimes a struggle when I couldn't see the impact or importance of some aspects of my work. To me, it was about seeing the bigger picture of how Computer Science could change the world; for you, there may be a different driver that makes your work meaningful. So, find what drives you and actively seek out those experiences. Sometimes it means taking one project, one class, or even one day at a time until you finally get to do the kind of work that makes you excited to wake up in the morning. Lesson 3: Make your own path My path in STEM has included a computer science B.Sc., a Ph.D. in human-centered computing, various internships and now a research position at AT&T Labs. Through the process of finding my niche within STEM, I developed skills that are unusual for a Computer Scientist. I bring a knowledge of design and psychology to my work as an AT&T Labs researcher and the field of communications technology in the home. Psychology gives me the methods to understand if the technology that I made is something people will actually use, and design helps me bring my ideas to life in ways that are compelling to users. I'm continuing to deepen my expertise through research that allows me to create the kind of impact I value by applying my knowledge to real innovations that are being developed by AT&T today. I'm still pursuing opportunities to both mentor and be mentored. And ultimately I've found a place where I can see the difference I'm making on people's lives - which is my personal measure of success. Svetlana "Lana" Yarosh is an HCI researcher at AT&T Research Labs in New Jersey. She was born in Moscow, Russia and immigrated to the U.S. with her family in 1995. 
She received two Bachelors of Science from University of Maryland (in Computer Science and Psychology) and recently graduated from the Human-Centered Computing Ph.D. program at Georgia Institute of Technology. Her research falls primarily in the area of Human-Computer Interaction, with a focus on Ubiquitous and Social Computing and a special interest in Child-Computer Interaction. Lana has a passion for empirically investigating real-world needs that may be addressed through computing applications, designing and developing technological interventions and evaluating them using a balance of qualitative and quantitative methods. Her work has been featured on CNN, has won multiple innovation competitions and has been recognized with a Fran Allen Ph.D. Fellowship Award. Lana is honored to have been the recipient of numerous grants and scholarships including the AT&T Research Labs Graduate Fellowship.
External Terminologies
Binding External Code Sets to attributes
– Case 1 – Class is a specific observation in LOINC or HL7. Example: Number of brothers and sisters
– Case 2 – Class is a generic observation. Example: Observation as part of the patient's clinical statement, to record everything from current weight and age to existing medical conditions (MedDRA, SNOMED-CT, or LOINC)
– Case 3 – EntityCode: Country Code, Drug Code, Ingredient, Blood Product Code, Device Code, Vaccine?
– Case 4 – Role Code: Occupation and/or Job Code
Context-Specific Conformance Profiles
– Specify allowed bindings in the Standard, constrain in the Profile
Creating Value Sets from External Code Sets
– Case 1: CE attribute is constrained to one specific external terminology (Blood Product Type, Country Code)
– Case 2: CE attribute can be either external terminology A or B. Example: Drug Code (RxNorm or NDC). Cannot mix within a set, class, or report
– Case 3: LOINC concept constrains Observation.code and the Answer List in LOINC constrains Observation.value
– Case 4: HL7 concept constrains Observation.code and SNOMED-CT or MedDRA constrains Observation.value (Reaction.code and Reaction.value; ConcurrentObservation.code can mix LOINC and SNOMED)
– Case 5: CE attribute constrained to one value (Autopsy.code)
E2B, MedWatch, VAERS codes
– Add to LOINC or HL7? May vary by report type (i.e., drug vs. device)
– Represented as Observations (Summaries, judgments): CaseSeriousness, ReactionRelatedness.code, Severity.code, Outcome.code, Interpretation.code, Intervention.code, Indication.code, ActionTaken.code, InterventionCharacterization.code, DeviceEvaluationObservation.code
– Represented as Acts
Other Report Types / Document Types
– More Observation Code and Value Datatypes
code (ST): The plain code symbol defined by the code system. For example, "784.0" is the code symbol of the ICD-9 code for headache.
codeSystem (UID): Specifies the code system that defines the code.
codeSystemName (ST): The common name of the coding system.
codeSystemVersion (ST): If applicable, a version descriptor defined specifically for the given code system.
displayName (ST): A name or title for the code, under which the sending system shows the code value to its users.
originalText (ED): The text or phrase used as the basis for the coding.
translation (SET): A set of other concept descriptors that translate this concept descriptor into other code systems.
To LOINC-ize or to HL7-ize?
Observation.code
  LOINC: code = 12345-6; codeSystem = LOINC OID; codeSystemVersion = x.x; displayName = DrugCharacterizationCode
  HL7: code = 19423 (under ObservationType); codeSystem = HL7 ActCode OID; codeSystemVersion = x.x; displayName = DrugCharacterizationCode
Observation.value
  LOINC: code = S, I, or C; codeSystem = LOINC?; codeSystemVersion = x.x; displayName = Suspect, Interacting, or Concomitant
  HL7: code = S, I, or C; codeSystem = HL7 ObservationValue OID; codeSystemVersion = x.x; displayName = Suspect, Interacting, or Concomitant
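The CE concept-descriptor fields listed above can be modeled as a small record type. This is a sketch, not normative HL7 tooling: the Python types for the HL7 datatypes are assumptions (ST and UID as str, SET as frozenset), and the sample OID is given only as an example value.

```python
from dataclasses import dataclass
from typing import FrozenSet, Optional

@dataclass(frozen=True)
class ConceptDescriptor:
    """Sketch of the CE fields from the slide text above."""
    code: str                                    # plain code symbol, e.g. "784.0"
    code_system: str                             # OID of the defining code system
    code_system_name: Optional[str] = None       # common name of the coding system
    code_system_version: Optional[str] = None    # version descriptor, if applicable
    display_name: Optional[str] = None           # name shown to users
    original_text: Optional[str] = None          # text used as the basis for coding
    translations: FrozenSet[str] = frozenset()   # descriptors in other code systems

# Example from the text: the ICD-9 code "784.0" for headache.
headache = ConceptDescriptor(code="784.0",
                             code_system="2.16.840.1.113883.6.103",  # example OID
                             code_system_name="ICD-9-CM",
                             display_name="Headache")
```

Making the record frozen mirrors the idea that a coded concept is an immutable value: translations into other code systems are carried alongside, not substituted in place.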
Getting net values as a proportion from a dataframe in R I have a dataframe in R (p2.df) that has aggregated a range of values into the following (there are many more columns, this is just an abridged version): genre rating cc dd ee Adventure FAILURE 140393 20865 358806 Adventure SUCCESS 197182 32872 492874 Fiction FAILURE 140043 14833 308602 Fiction SUCCESS 197725 28848 469879 Sci-fi FAILURE 8681 1682 24259 Sci-fi SUCCESS 7439 1647 22661 I want to get the net values of the proportions for each column, which I can get in a spreadsheet but can't in RStudio. The formula in the spreadsheet follows the pattern: net_cc = cc(success)/(cc(success)+dd(success)+ee(success)) - cc(fail)/(cc(fail)+dd(fail)+ee(fail)) What I want to get out in R is this table that I can get from the spreadsheet: genre net_cc net_dd net_ee Adventure 0.002801373059 0.005350579467 -0.008151952526 Fiction -0.01825346696 0.009417699223 0.008835767735 Sci-fi -0.01641517271 0.003297091109 0.0131180816 Any ideas how? If it's any use, I created the p2.df by summarising a previous table as: library(dplyr) p2.df<- s2.df %>% group_by(genre,rating) %>% summarise_all(sum) Thanks all, I selected Moody's as the answer as it was the simplest (I couldn't get utubun's neater one to work) but MKR's also worked. ...and then it stopped working. I think it's because I used 'summarise_at' to get the above dataframe and it doesn't like working with groups. that's probably because you created your data set by data.frame() or read it by read.csv() which by default convert strings to factors. I wrote my example using data with rating and genre converted to character, that's a default for tibble and read_csv from readr. Please see the data @MKR used in his answer (last row - stringsAsFactors = FALSE). Thanks yes you were right, the table had groupings so I added as.data.frame() and that fixed it.
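For reference, the spreadsheet formula in the question can be transcribed directly; here is a hedged sketch in plain Python (not R) that reproduces the expected Adventure row from the question's data.

```python
def net_proportions(success, failure):
    # net_x = x(success) / sum of success row - x(failure) / sum of failure row,
    # computed per column within one genre.
    s_total = sum(success.values())
    f_total = sum(failure.values())
    return {k: success[k] / s_total - failure[k] / f_total for k in success}

# Adventure rows from the question's table:
adventure = net_proportions(
    {"cc": 197182, "dd": 32872, "ee": 492874},   # SUCCESS
    {"cc": 140393, "dd": 20865, "ee": 358806},   # FAILURE
)
```

Note that each genre's three net values necessarily sum to zero, since both the success and failure proportions each sum to one; that is a quick sanity check on any R solution.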
using tidyverse: library(tidyverse) df %>% gather(,,3:5) %>% spread(rating,value) %>% group_by(genre) %>% transmute(key,net = SUCCESS/sum(SUCCESS) - FAILURE/sum(FAILURE)) %>% ungroup %>% spread(key,net) # # A tibble: 3 x 4 # genre cc dd ee # <chr> <dbl> <dbl> <dbl> # 1 Adventure 0.00280 0.00535 -0.00815 # 2 Fiction -0.0183 0.00942 0.00884 # 3 Sci-fi -0.0164 0.00330 0.0131 It's always better to work on data in long format. But if OP doesnt want to transform data in long format due to any constraint (e.g. number of columns are more which will lead to large number of rows in long format etc) then a solution in using dplyr::summarise_at can be achieved as: library(dplyr) df %>% mutate(rowSum = rowSums(.[,names(df)[3:5]])) %>% group_by(genre) %>% summarise_at(vars(names(df)[3:5]), funs(net = .[rating == "SUCCESS"]/rowSum[rating == "SUCCESS"] - .[rating == "FAILURE"]/rowSum[rating == "FAILURE"] )) %>% as.data.frame() # genre cc_net dd_net ee_net # 1 Adventure 0.002801373 0.005350579 -0.008151953 # 2 Fiction -0.018253467 0.009417699 0.008835768 # 3 Sci-fi -0.016415173 0.003297091 0.013118082 Data: df <- read.table(text=" genre rating cc dd ee Adventure FAILURE 140393 20865 358806 Adventure SUCCESS 197182 32872 492874 Fiction FAILURE 140043 14833 308602 Fiction SUCCESS 197725 28848 469879 Sci-fi FAILURE 8681 1682 24259 Sci-fi SUCCESS 7439 1647 22661", header = TRUE, stringsAsFactors = FALSE) it's a neat intuitive solution, but you can clean it a bit further, you could use just rowSums(.[,3:5]) on 1st line and then summarise_at(3:5,... @Moody_Mudskipper Thats elegant suggestion. I had done same at first. But, the problem was that for summarise_at it was expected to be as 2:4 since one column was out for grouping. Hence, I thought it would be easier to relate if I use 3:5 at both places. It could be something related to different versions, for me it works with 3:5 and returns an error with 2:4 . 
I'm using dplyr_0.7.5 see: https://stackoverflow.com/questions/45883513/using-dplyr-summarise-at-with-column-index/51009642#51009642 Thanks for your help on this, you say it's better to work in long format which I'm happy to do as these solutions seem to be behaving temperamentally on me - when I expand them into my own summary table sometimes it works and sometimes it'll tell me objects (column names) are not found or Column rowSum must be length 2 (the group size) or one, not 16. Perhaps I should try the long form rather than from the summary table. Thanks both for your help. I'm now trying to get the overall net value for each column (ie not splitting by genre but doing similar maths). I have adapted this to the following but I get zero (sorry I can't even get a linebreak in this comment. Realise I may have to post a new query) . df %>% mutate(rowSum = rowSums(.[,names(df)[3:5]])) %>% group_by(rating) %>% summarise_at(vars(names(df)[3:5]), funs(net = .["rating"]/rowSum["rating"] )) %>% as.data.frame() @JRUK In that case you dont have to even use group_by. Just use df %>% summarise_at(vars(names(df)[3:5]), funs(net = sum(.))) %>% as.data.frame() . Please let me know if you need to have relative net values. @Moody_Mudskipper Sorry. I had missed your comments earlier. I'm not sure about this change in dplyr. I'm using 0.7.4 version of dplyr though. Thanks @MKR that didn't quite give what I was after, I didn't explain myself well so I have clarified and posted as a new question. My answer is very close to @MKR answer, however, I just wish to point out, that we can make use of decoded rating (SUCESS = 1 and FAILURE = -1`) variable to avoid subsetting in the last part: df %>% mutate(rating = (rating == "SUCCESS")*2 - 1, denom = rowSums(.[3:5])) %>% group_by(genre) %>% summarise_at(vars(cc:ee), funs(sum(rating * . 
/ denom))) # A tibble: 3 x 4 # genre cc dd ee # <chr> <dbl> <dbl> <dbl> # 1 Adventure 0.00280 0.00535 -0.00815 # 2 Fiction -0.0183 0.00942 0.00884 # 3 Sci-fi -0.0164 0.00330 0.0131
View Full Version : users online 04-19-2003, 03:32 PM i'm able to code to track users' movements throughout the site, a bunch of login details, etc. but if i want to have a "Users online" feature, i'll need to know when users logs off. i'll have a 'logoff' form of course, which will update the 'user' table in the database to 'not logged in', but lots of people don't bother to sign off. they just close the browser window or disconnect. how can i track when they've done this? i'm using sessions mainly. how can i track when a user has stopped being logged in? 04-19-2003, 04:17 PM hmm... well from the sounds of what you describe... you're using sessions to see where everybody is, ay?... so i'll presume "logging out" ends the session, eh?... now... when you close the browser window that also ends the session, so either way, the session's been ended... im no php guru though so i could be wrong ;)... 04-19-2003, 08:01 PM lol i know the session'll be ended in any case..:p the problem is how do i know when? like if i close this Coding Forums window which i'm viewing right now, my session will end, then if anyone refreshes the CF home page, they won't see me in the list of current users online. how did the database get to know that I'm not online any more? 04-21-2003, 10:40 AM There is a way to do it because my php teacher has done it on his site. I have thought about how it could be done many times but have not asked him much about it but one of the other co-webmasters of the site said that the whois online code can not be made accurate to the second and has to made to an resonable time limit. In the url above, it's reset to 5 minutes per page click. Now when he said that, the first thing that came to mind was ofc a cookie. I think if you set a page hit field in the database to generate a cookie that lasted for 5 minutes to be used in a whois online table, it should work. 
I've yet to have the nerve to toy with code of my own to test its possibilities, but I would think that is your best bet.

04-21-2003, 12:35 PM
If you were to set it at, say, 15-minute intervals of checking who is online: when a user visits a page, have it put a value of time()+900 in the database; then when someone visits the page, get them to check the time. If it is greater than the value entered, delete the value, else leave it there.

04-21-2003, 09:57 PM
on my sites i like to put 2 values on every user record (in a database table):
last page = url of the last page they were at
last page date time = timestamp of when they clicked that page
then for a 'who's online' section just do a lookup in your users table and limit it to those users who have a last page date time greater than whatever your session timeout is. You can also use this to show admins where logged-in folks are. The nice thing about this is you can make custom pages that sort users by last page hit day/time, to see who your active users are.

Powered by vBulletin® Version 4.2.2 Copyright © 2017 vBulletin Solutions, Inc. All rights reserved.
This group was initiated to promote and make known the Java language in the region of Americana / SP - Brazil. Welcome! This is the JUG Java Americana project within the java.net portal. Here we will make available all the content and tools we can, so that the Java community keeps growing in our region.

JavaBin is an independent, idealistic organisation established in 1996 by a group of Java enthusiasts. Our goal is to create a community for sharing knowledge, experience and viewpoints on Java technology. We work to strengthen the members' professional Java skills by spreading the knowledge of relevant results, pragmatic techniques and useful tools. The group has since evolved to be Scandinavia's biggest and most active group for the promotion of Java.

Mailing List for Java users in Thailand. Any discussion relating to Java in Thailand or using/developing Java in a Thai environment.

Espirito Santo Java Users Group

Java Users Group in Novi Sad

Hong Kong Java User Group (HKJUG) is the JUG based in Hong Kong. Please visit our website http://www.hkjug.org

Java User Group Sardegna, Italy

The Club des Utilisateurs de Java collaborative project

NL-JUG Dutch Java User Group

Where new JUG Projects prepare for world's challenges

RedFoot

J Dukes - Grupo de Usuários Java do Norte do Paraná (Java Users Group of Northern Paraná)

NYC Java User Group. Lectures, workshops and study groups (hands-on coding). Focusing on all Java-related technologies for over 10 years. Stop by a general meeting (first Thursdays) at the NYPC meeting room at the New Yorker Hotel. Free and no RSVP required. Networking over dinner after the meeting.

Silicon Valley Java Users Group

JUG-Petropolis - "Among the most social Brazilian JUGs"

Java Open Source initiative in Brazil

The Java Users Group of the Federal District was established in Brasília (capital of Brazil) in February 1998 to promote technical learning and the popularizing of the Java platform.
www.dfjug.org

Rio Java Users Group, Rio de Janeiro, Brazil

Hellenic Java User Group (www.jhug.gr)

Sydney Australia Java UG project

GUJ is a virtual Java user group, focused on Portuguese content. It already has 1,850 users registered on our forums, and we are working on a new Java web application that could be used by many JUGs to host articles, news and more, and also be fed by the java.net RSS.

A global community of Java User Groups whose aim is to advance Java, and promote the growth of our communities through education, development, and fraternity.

A Users Group in Baltimore, Maryland

The Ceará Java User Group

The Austin Java Users Group

Australian Java User Groups

Javagruppen.dk (The Danish JUG)
ADELPHI, Md. (Jan. 28, 2015) -- Army cyber defenders released code to help detect and understand cyber attacks. The forensic analysis code called Dshell has been used for nearly five years as a framework to help the U.S. Army understand the events of compromises of Department of Defense networks. A version of Dshell was added to the GitHub social coding website on Dec. 17, 2014, with more than 100 downloads and 2,000 unique visitors to date. Dshell is a framework that its users can use to develop custom analysis modules based on compromises they have encountered. It is anticipated that other developers will contribute to the project by adding modules that benefit others within the digital forensic and incident response community, said William Glodek, Network Security branch chief, U.S. Army Research Laboratory, or ARL. "Outside of government, there are a wide variety of cyber threats that are similar to what we face here at ARL. Dshell can help facilitate the transition of knowledge and understanding to our partners in academia and industry who face the same problems," said Glodek, whose page is the first official U.S. Army page on GitHub. GitHub is the center of gravity for software developers not only in the U.S., but around the world. Since the release, Dshell has been accessed by users in 18 countries, he said. "For a long time, we have been looking at ways to better engage and interact with the digital forensic and incident response community through a collaborative platform," Glodek said. "The traditional way of sharing software, even between government entities, can be challenging. We have started with Dshell because the core functionality is similar to existing publicly available tools but provides a simpler method to develop additional functionality. What Dshell offers is a new mechanism, or framework, which has already been proven to be useful in government to better analyze data."
Glodek would like to see others in the open source community add value and expertise to the existing Dshell framework, he said. He is starting an open source working group at ARL to look at other potential projects for a GitHub repository. "I want to give back to the cyber community, while increasing collaboration between the Army, the Department of Defense and external partners to improve our ability to detect and understand cyber attacks," Glodek said. In the next six months, Glodek expects to have a flourishing developer community on GitHub with users from government, academia and industry. "The success of Dshell so far has been dependent on a limited group of motivated individuals within government. By next year it should be representative of a much larger group with much more diverse backgrounds to analyze cyber attacks that are common to us all," Glodek said. The Army Research Laboratory is part of the U.S. Army Research, Development and Engineering Command, which has the mission to develop technology and engineering solutions for America's Soldiers. RDECOM is a major subordinate command of the U.S. Army Materiel Command. AMC is the Army's premier provider of materiel readiness--technology, acquisition support, materiel development, logistics power projection and sustainment--to the total force, across the spectrum of joint military operations. If a Soldier shoots it, drives it, flies it, wears it, eats it or communicates with it, AMC provides it.
My web agency, Tech Made, has been developing a MERN-stack web application to manage the growing student body club life for one of our clients, Make School Product College of Dominican University. The beta launch is about to happen, and after some user testing the system may open up for any institution to use to better manage their student clubs. I'll soon write more blog posts on the Clubs App. Today, as the title says, we're reviewing a cool refactor I recently implemented in the codebase of the Clubs App monolithic backend Node API. Before I dive into the technical details of my code refactor, I'd like to give you some context as to why. I already had my project working just fine, so this was not a premature refactor/optimization. During development I needed to ship fast, so I didn't want to waste even 30 minutes trying to over-optimize when I knew I could write not-the-most-DRY code and still get it to work. I threw it on my Trello backlog and now here we are. Let's dive in. I was using JWT tokens stored in the session for user authentication. I had certain routes in my back end that only a logged-in user should be able to hit, some routes were only meant for club leader users, and some were meant for admins only. So each of these routes would first trigger the middleware to check for a user. I had 3 different authentication-checking functions, called checkAuth, checkLeader and checkAdmin. All 3 of them damn near did the same thing: look at the JWT token, get the user id, find the user and continue with what was next in the route, or else return an error. The only difference in checkLeader and checkAdmin was that before setting req.user and returning next(), I checked whether the user.type property was leader or admin; that is all. One line for like 20 extra lines of code in each function! Here's what the code looked like and what I refactored it into. (The order of new and old code is reversed because of GitHub Gists ordering.)
So you can see in the old_checkAuth.js file there's tons of repeated code happening. But what's going on in the file below, new_checkAuth.js? Allow me to walk you through this process of my refactor and finally fucking understanding async/await. At first, I legit just tried to get rid of all the code in checkLeader and checkAdmin, simply calling the checkAuth function from them and trying to access req.user to check the user's type. Once run, my middleware threw a ReferenceError; it did not know what req.user was. So calling another function from my function wasn't keeping the user variable in scope. After some thinking, I decided to try making the original function being called, checkAuth, return a Promise. Finally, I realized I needed to wait for that function to finish before executing the next line of code. Welcome, async/await! You can see in my new code how I made the checkLeader function asynchronous by adding the term async in front of function. Then on the checkAuth function-call line I set user equal to whatever the function returns, with await in front of it. So, lessons learned here. Async/await is just a more elegant syntax for getting the promise result than promise.then, easier to read and write. Await must only be used within an async function; if we try to use await in a non-async function, there will be a syntax error. You can't await in a regular function, you just can't.
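The shape of the refactor described in the post can be sketched like this (names mirror the post, but the token decoding, user lookup and response object are all stubbed for illustration; this is not the Clubs App code):

```javascript
// Stand-in user store and token decoder (the post uses JWTs and a DB lookup).
const users = { u1: { id: "u1", type: "leader" } };

function decodeToken(token) {
  // stand-in for jwt.verify(): here the token simply IS the user id
  if (!users[token]) throw new Error("invalid token");
  return { userId: token };
}

// The shared logic lives in one place and returns a Promise,
// so role-specific middleware can await the resolved user.
async function checkAuth(req) {
  const { userId } = decodeToken(req.token);
  const user = users[userId];
  if (!user) throw new Error("user not found");
  req.user = user; // downstream handlers can still read req.user
  return user;
}

// The role checks now reduce to one extra line each.
async function checkLeader(req, res, next) {
  const user = await checkAuth(req);
  if (user.type !== "leader") return res.error("leaders only");
  return next();
}

const req = { token: "u1" };
checkLeader(req, { error: console.error }, () => console.log("ok:", req.user.type));
// logs "ok: leader"
```

Because checkAuth is async it returns a Promise either way, and awaiting it inside checkLeader is what keeps `user` in scope, which is exactly the problem the first attempt ran into.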
Docker install dependencies issue with glibc in CentOS 7.3

The issue seems strange: it appears after the server is updated to the latest CentOS version. Below is the error:

sh -c 'sleep 3; yum -y -q install docker-engine'
Error: Package: glibc-2.17-106.el7_2.4.i686
       Requires: glibc-common = 2.17-106.el7_2.4
       Installed: glibc-common-2.17-157.el7_3.1.x86_64
                  glibc-common = 2.17-157.el7_3.1
       Available: glibc-common-2.17-105.el7.x86_64
                  glibc-common = 2.17-105.el7
       Available: glibc-common-2.17-106.el7_2.1.x86_64
                  glibc-common = 2.17-106.el7_2.1
       Available: glibc-common-2.17-106.el7_2.4.x86_64
                  glibc-common = 2.17-106.el7_2.4
Error: Package: policycoreutils-python-2.2.5-20.el7.x86_64
       Requires: policycoreutils = 2.2.5-20.el7
       Installed: policycoreutils-2.5-9.el7.x86_64
                  policycoreutils = 2.5-9.el7
       Available: policycoreutils-2.2.5-20.el7.x86_64
                  policycoreutils = 2.2.5-20.el7
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

The glibc packages that exist on the server are shown below:

glibc-2.17-157.el7_3.1.x86_64
glibc-common-2.17-157.el7_3.1.x86_64

Please suggest on fixing the issue. Thanks

I'm not able to reproduce this; I tried installing docker on a 7.2 host, then updating (yum update) to 7.3. I also tried creating a new 7.2 host, then updating (yum update) and then installing docker; both worked for me. Could it be that the update from 7.2 to 7.3 was interrupted (outstanding transactions)? Did you reboot the host after upgrading? During the update process, I see that both glibc and glibc-common are updated:

Updating : glibc-common-2.17-157.el7_3.1.x86_64 8/349
Updating : glibc-2.17-157.el7_3.1.x86_64 9/349

@runcom @andrewhsu any other suggestions?

Just to check: which repository are you installing from? What does cat /etc/yum.repos.d/docker* show? And yum list docker-engine?
Find the repo details and yum list below:

[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Available Packages
docker-engine.x86_64 1.12.6-1.el7.centos dockerrepo

Kernel version is: 3.10.0-514.2.2.el7.x86_64

There was no interruption or outstanding transaction, and I rebooted the host after the upgrade as well.

does yum update glibc update the package? Or is something blocking it from being updated?

No, there is no package available for update, and there is no blocker or outstanding transaction in yum.

Loading mirror speeds from cached hostfile
No packages marked for update

I have the exact same problem when trying to install gcc:

glibc 2.17-106 is required
glibc 2.17-157 is installed

The reason is: the version 2.17-157 was installed by the "FROM centos:7" of the Dockerfile, which as CentOS version 7 points to version 7.3.1611 (that is correct); the version 2.17-106 is the version available on a local mirror at my company that has a deprecated version (http:///centos/7.2.1511/updates/x86_64/Packages/). I warned my administrator that the local mirrors are not being updated. Conclusion: one of your yum repositories is probably not pointing to CentOS version 7.3.1611 as it should. I mean: http:///mycompany/7.2.1511/updates/x86_64/Packages/ (text has been modified after post)

I agree with @jlerny, and highly suspect this is something in your setup. Since we're not able to reproduce, I'm going ahead and closing this issue, but feel free to continue the conversation.
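The stale-mirror diagnosis above boils down to a version comparison: the repo offers an older glibc than what is already installed. A sketch of that check, with the versions from the error report hard-coded for demonstration:

```shell
# Versions taken from the error report above: installed glibc-common
# vs. the newest one the (suspect) repository offers.
installed="2.17-157.el7_3.1"
available="2.17-106.el7_2.4"

# sort -V understands these rpm-style version strings well enough
newest=$(printf '%s\n%s\n' "$installed" "$available" | sort -V | tail -n 1)

if [ "$newest" = "$installed" ] && [ "$installed" != "$available" ]; then
  echo "repo offers an older glibc than installed: the mirror is likely stale"
fi
```

On a real host, `rpm -q glibc-common` gives the installed version and `yum --showduplicates list glibc-common` lists what each repository offers, which is where these two strings would come from.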
A package created by Josie Hayes. It processes peptide adducts from LC-MS data. The software interrogates tandem mass spectra to perform retention time drift corrections, untargeted putative adduct detection with MS2 plot outputs for quality control, and peak area quantification relative to a chemically neutral peptide. The **adductomicsR** package performs peak-picking, retention time alignment, grouping, peak table output pre-processing, and peak area quantification. The package requires an R version of at least 3.6, which is currently in development. There is also a version that will work with R version 3.5 in the branch R3.5. It is also recommended to install mzR manually prior to installing the package if it is not already installed on your system. Note that if you receive the error "fatal error: netcdf.h" when installing mzR, homebrew has not been able to install netcdf. In that case, install netcdf from https://github.com/Unidata/netcdf-c/releases/v22.214.171.124 (using configure, make, make install) and then install mzR again. The latest development version and all other package dependencies can be installed with one line of code directly from GitHub using the devtools package. First ensure devtools is installed; instructions can be found here: https://github.com/hadley/devtools

devtools::install_github('JosieLHayes/adductomicsR', dependencies=c("Depends", "Imports", "Suggests"))

Including ref='R3.5' in this command will download the version that can be used with R 3.5. If using this version, the example data must be downloaded from https://berkeley.box.com/s/fnhttc87v4mn1x50nvckpt99999y7uhl rather than accessed through the adductData package, and a separate run order file for this data is available in inst/extdata/runOrderR3.5.csv.
The vignette can be viewed here: https://github.com/JosieLHayes/adductomicsR/blob/master/vignettes/adductomicsRWorkflow.Rmd, with 2 example mzXML files acquired on an LTQ Orbitrap XL HRMS coupled with a Dionex Ultimate® 3000 nanoflow LC system via a Flex Ion nano-electrospray-ionization source, and converted to mzXML using MSConvert (http://proteowizard.sourceforge.net/). The *adductomicsR* package has thus far only been tested with an LTQ Orbitrap XL HRMS on computers running Windows, OSX and Linux operating systems but, depending on interest, could be readily extended to other instrument manufacturers. The R package utilizes [xcms](https://bioconductor.org/packages/release/bioc/html/xcms.html), [CAMERA](https://bioconductor.org/packages/release/bioc/html/CAMERA.html), [MetMSLine](https://github.com/WMBEdmands/MetMSLine), and many other packages to implement LC-MS adduct identification and quantification. A major impetus for development of this package was to provide an open-source pipeline to identify protein adducts on a peptide of interest. Our laboratory has extensive experience in identification and quantification of putative adducts to the Cys34 of human serum albumin (https://www.ncbi.nlm.nih.gov/pubmed/27684351, https://www.ncbi.nlm.nih.gov/pubmed/27936627, https://www.ncbi.nlm.nih.gov/pubmed/29350914, https://www.ncbi.nlm.nih.gov/pubmed/29538615). These analyses used Xcalibur (https://www.thermofisher.com/order/catalog/product/OPTON-30487), a proprietary software from Thermo Fisher Scientific, to acquire MS1 and MS2 spectra. The *adductomicsR* workflow consists of a retention time correction step (optional), `rtDevModeling`, an adduct identification step, `specSimPepId`, and a putative adduct quantification step, `adductQuant`.
A target table can be created for `adductQuant` from the results of `specSimPepId` using `generateTargTable`, and the `adductQuant` result object can be processed and filtered using `outputPeaktable` and `filterAdductTable` respectively. **rtDevModeling** performs MS/MS spectrum grouping and loess retention time deviation modeling. It requires as input a directory path where the mzXML files are and a path to a run order file. Example mzXML files are available in the data package adductData. Information on the internal standard (for Cys34 we use isotopic T3 adducted with iodoacetamide) must be provided here: a list (no white space) of expected fragment ions for the internal standard spectrum and the expected mass-to-charge ratio of the internal standard precursor (default = 834.77692, for Cys34). In addition, the internal standard retention time drift window (in seconds) can be specified by the user (default 200-600). This function produces a plot of the internal standard RT, ppm difference and deviation from the median across the run order to highlight retention time drift. A plot from a previous dataset (https://www.ncbi.nlm.nih.gov/pubmed/27936627) of the adjusted retention time for each retention time (seconds) shows that retention time deviated specifically at certain times. This may indicate that caution should be taken when results are reported at these retention times, and may be due to washes and instrument-related artifacts that occur during the run. **specSimPepId** performs spectral-similarity-based adducted peptide identification. It takes as input the `rtDevModeling` object and a directory path where the mzXML files are. A retention time window within which to identify spectra can be specified using minRT and maxRT (default 20-45 minutes). Similarly, a mass-to-charge window can be specified using minMz and maxMz (defaults 750-1000).
A model spectrum file for the peptide under study must be provided against which to perform spectral similarity. Built-in model tables (in the extdata directory) can be used by specifying the path to the table (currently available: "ALVLIAFAQYLQQCPFEDHVK" and "RHPYFYAPELLFFAK"). If supplying a custom table, it must consist of the following mandatory columns: "mass", "intensity", "ionType" and "fixed or variable". This function also performs grouping of the spectra using hierarchical clustering. The mass-to-charge ratio and RT thresholds for cutting the tree can be specified using groupMzabs and groupRtDev respectively. This function produces an MS2 plot for each adduct in each scan. These are saved in the output directory, in a separate directory for each sample ending in _adductID. They should be used to visually inspect 2-3 plots for each adduct group identified, to remove false positives. A plot of the model spectrum provided is also saved in the mzXML directory for comparison. An example plot for adduct A40 from the dataset https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5555296/ is shown below. In addition, a plot of the mass-to-charge ratio vs the RT and adjusted RT is produced by this function. Each group (assigned using the grouping thresholds the user provided) is colored differently. These plots are provided within the output directory, in a directory labeled spectrumGroups_[peptide]. The plot of all groups for the dataset https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5555296/ is shown below. It shows that some groups, such as those at m/z 850, should be merged into one group as they represent tails of the same peak. **generateTargTable** can be used to generate a target table from these results for the quantification step. It is recommended that the MS2 plots and spectrum grouping plots be used to remove false positives and merge groups that are tails of the same peak prior to quantification. **adductQuant** quantifies putative adducts by peak area.
The putative adducts must be provided in the form of a target table which can be manually generated or produced from **generateTargTable**. Two example target tables are provided in inst/extdata. The **rtDevModeling** object should also be provided. The maximum parts per million to be used for peak integration is specified by the user (default 4), increasing this will merge peaks and lower resolution. The number of scans that a spike must be seen in for it to be integrated as a peak can also be specified with spikeScans (default 2). The maximum retention time drift default is 20 seconds and can be altered by the user, and the maximum retention time window to search in is set at 120 seconds. A string for the amino acid sequence of a chemically neutral peptide ('housekeeping peptide') of the protein under study must also be provided. The default is LVNEVTEFAK for Cys34. It is recommended to also include this in the target table (automatically done using the generateTargTable function) so that peak area ratios relative to the housekeeping peptide can be calculated. The result is an adductQuant object. This can be converted to a peak table using **outputPeakTable** and filtered using **filterAdductTable**. The *adductomicsR* package is licensed under Artistic License 2.0
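As a footnote to the workflow above, the idea behind the `rtDevModeling` drift-correction step can be sketched generically (plain Python with made-up retention times; a simple moving average stands in for the loess fit the package actually uses, and this is not the package's code):

```python
from statistics import median

# observed internal-standard RTs (seconds) across the run order, made up:
# a linear drift of ~3 s per run plus small noise
observed = [1500, 1505, 1505, 1510, 1512, 1513, 1519, 1521, 1526, 1526]
center = median(observed)

# deviation of each run from the median RT
deviation = [rt - center for rt in observed]

# smooth the deviation across the run order (3-point moving average
# standing in for the loess model)
def smooth(xs):
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - 1): i + 2]
        out.append(sum(window) / len(window))
    return out

drift = smooth(deviation)

# corrected RTs subtract the modeled drift, flattening the trend
corrected = [rt - d for rt, d in zip(observed, drift)]
```

In the package itself this modeling is done from the internal standard across the run order, per the description above, and the fitted deviation is then used to adjust retention times.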
An application with the most-used e-commerce features, based on the OpenCart CMS database. Basic requirements: 1- Native Android & iOS platforms (iPhone / iPad, all versions). 2- Provide clean source code & the technical details, including a detailed description of the app-server interaction mechanism, protocols and likewise data. 3- Should be

Detect Number from voices by signal processing. I have a signal processing project, i want to ext...numbers is natural for example, 1, 2, 3, 4... i want to extract numbers below 500, and Yes and No. I need two applications: Windows and Android. Speakers are different: man, woman, child, … The language is NOT English. Could you help me?

Greetings for writers. In a few short words, I own an Android application and I want to propose it to some communities and organizations. This app is related to child care. More information you could find here [url removed, login to view] As it turned out many potential users have difficulty

I need an Android app. I would like it designed and built. My name is Ahmed and I am an English teacher. I need someone to create an android app that everybody can download from play and google store ... The app helps contact and communication between Arabian English learners and native speakers who have the ability to just speak English with typing.

Hi! This project involves buying a food recipes database in a format suitable for programming. The goal is to create an android app to search recipes by ingredients. All recipes must be in English. Database must contain at least 150k recipes. Database must include at least such information: recipe name, ingredients (with necessary measurements in US ...

...works in Japanese would especially be an advantage. Key Features: * Gesture support - to send info to the keyboard to switch between modes. * Switch between Japanese / English / Emoji (within the same keyboard) * Information area - show information about the word that is currently selected in any app.
* Prediction candidates - When someone types

Hello, I would like to create a reward program website that comes in both English and Arabic with a control panel. - Responsive website - Ajax sign-up with custom fields - Integrated with OpenCart 2.3 x. Automatically creating an account once an account is created in the integrated OpenCart website x. Automatically add and redeem reward points with

I have a signal processing project, i want to convert non-English sound to numbers. i want to record sound and detect a number; for example there is a 4-second recorded sound file, and in this sound file the speaker says a number; i want to extract this number. the range of numbers is 1 to 500. All numbers

I'm looking for an Android Developer to create an app similar to Etsy, or even the Freelancer mobile app. Features included: Login & register page, email verification, search, filter, sort, chat, payment, user profile, review on a user profile. Aside from technical quality, you need to be/have: 1. Trustworthy. 2. Good communication skill, able to

...necessary requirements: * Fluent in English (or Dutch) and a good internet connection. We will need a number of Skype meetings * Available both on Wednesday (11th of October) and Thursday (12th of October) * At least 5 years Android experience

Problem: * Long-running foreground service on a Samsung device with Android Lollipop shuts down * The graphical

...functions o Orders list is a web view o Products list • Technology: Angular4 with the Ionic framework • Compatibility and build: iOS 9, iOS 10, Android 5 and Android 6 • Time to dev: 3 weeks • LANGUAGE: Italian, English, Spanish • We provide HTTPS API on server side – we request the app side • By accepting the job you will provide sour...

...Free Screen Video Recorder for desktop, AZ Screen Recorder for an Android smartphone). All videos have to be sent to D-Rating.
Requirements - Live in Poland – Important for automatic store location on websites - Owns an Android-powered smartphone and a computer - Speaks English fluently - Proficient computer skills (Internet, office software)

Add and remove some features for a phone app template, add Arabic and English, redesign/remake the template for iOS and Android [url removed, login to view] [url removed, login to view]

I am running a mobile app for a home delivery service for a fast food restaurant based in Kuwait. I have a lot of bugs and issues with the App and I need a tester to figure out the errors and help get them fixed. The App is running on iOS and Android, dual language Arabic and English.

A simple running-style game to be developed to work on both iPhone and Android. The game will be a single-person running-based game similar to Temple Run with continuous running (the runner never stops, just like in Temple Run), basic directional functions (SWIPE LEFT – SWIPE RIGHT – SWIPE UP (JUMP) - TILT LEFT – TILT RIGHT) and other adjustable and optional

Looking for a Native Mobile App. Must work on both iOS and Android. Must be able to understand and speak English well! Project to be done one step at a time to ensure quality and accuracy. More info will be provided via private chat.

Food Truck Tracker. Realtime tracking. Options for subscriber food-truck owners: 1- to add the menu. 2- to share offers. 3- events...selection of (breakfast, lunch, dinner), only juice, only coffee. 3- rate and comment on food trucks. For sure i need social media integration. Languages: English and Arabic. Notes: App for Android and iPhone.

Develop an App (Android + iOS) for a travel agency, where you can book, select your seat (for bus), pay and get your e-ticket. Very similar to Despegar.com.
Spanish and English language, Mexican currency. Clean Architecture pattern implementation; implementation or use of libraries / frameworks (Dagger2, RxJava2, Retrofit2, fabric, segment, siftscienne).

I want to build an online store application for Android and iOS (Swift 4.0). 1. Users register on the app (Name, Email, Mobile, Address, Location, Birth Day, Gender, Password, receive notifications) 2. User can change language between English and Arabic 3. User can browse through multiple categories with multiple items available for sale 4. user can
Download: http://down.supercard.sc/download/dstwo/to...retools-FC9.zip

1. Prepare VMware Workstation 6 and Fedora 9
2. Install VMware Workstation
3. Create a VM with VMware Workstation

Before you commence, make sure there is a logical disk using the NTFS file system with 8 GB or more of free space. FAT32 is also OK, but it cannot create a file bigger than 4 GB. The detailed steps follow.

First, you need to create a new virtual machine. Click [Virtual Machine]. After you click to create a new virtual machine, the New Virtual Machine Wizard starts; click [Next]. Keep the default [Typical] and click [Next]. Choose according to the system and kernel version: select the Linux operating system here, with "Other Linux 2.6.x kernel", and click [Next]. Modify the virtual machine name and installation location as needed, then click [Next]. Keep the default and click [Next]. Select the hard disk space the virtual machine will be able to use; it requires more than 8 GB, so select 10 GB here, and click [Finish]. After creation, right-click the Fedora 9 VM - Settings to modify related settings, such as memory size, number of processors and so on. Set the location of the Fedora 9 system files. If your Fedora 9 system files are on a physical disc, select [Use physical drive], as shown below. If your Fedora 9 system file is an ISO image, select [Use ISO image:] and select the specific path of the image file, as shown below. Complete the settings and click [OK]. At this point, the virtual machine is set up.

4. Install Fedora 9 in the VM

The following installs the operating system Fedora 9. In step 3 we already set the correct location of the Fedora 9 system files, so here just click [Power On]; the rest is the same as installing an OS on a bare PC.

5. Install VMware Tools

You need to install VMware Tools to enhance system performance, and you also need it to complete the shared-folder settings. You will find this can be a very troublesome thing.
Different versions of VMware Workstation and different operating systems have different methods. The toolkit we provide here will save you a lot of trouble. After completing step 4, you can enter the Fedora system. Log in as root (it is recommended to log in as root every time; this will save you a lot of trouble from insufficient permissions), then copy the toolkit named vmwaretools-FC9.zip into Fedora 9 via a USB disk, for example to /opt, start a terminal, and enter the following commands:

[root@localhost ~]# cd /opt
[root@localhost ~]# . go

The installation process will prompt you step by step. For prompts that show [yes], [no] or [yes/no], type yes and press Enter; for any other prompt, just press Enter directly without typing anything. Finally, it will output a list of resolutions numbered 1-15; enter the number matching your monitor's resolution. After completion, as in step 3, right-click the Fedora 9 VM - Settings and configure the shared folders. Once sharing is set up, restart the virtual machine, and you can see the shared files under the path "/mnt/hgfs". You're done! The Linux environment has been built!
OPCFW_CODE
Exception handling when using OCI_CHARSET_WIDE on macOS

I'm using ocilib on macOS and want to build the OCI_CHARSET_WIDE version. The library builds fine and I've got the ocilib demo running in Xcode. However, exception handling does not work. I believe it's because Exception::what() returns char* (because that's what the base class does). I get a compilation error in Exception.hpp:-

The compilation error can be resolved by doing this:-

But this does not help in reporting the error in the "catch" handler. I assume I was able to build the library with OCI_CHARSET_WIDE because these methods are inline, so they only get compiled when they are used. What is the correct way to handle exceptions in OCI_CHARSET_WIDE based builds? Am I missing something?

-AidyCC

It seems this issue is related to a change I had to make to Stringutils.c:-

Is there an alternative to vsnwprintf?

Hi, I will create a v4.7.3 branch today with the fix for the C++ Ocilib::Exception::what() :) About vsnwprintf not being available on Mac (I was not aware of it), why not something like:

```c
int OcilibStringFormat
(
    otext       *str,
    int          size,
    const otext *format,
    ...
)
{
    va_list args;

    va_start(args, format);

#ifdef OCI_CHARSET_ANSI
  #ifdef __APPLE__
    const int n = (int) vsprintf(str, format, args);
  #else
    const int n = (int) vsnprintf(str, (size_t) size, format, args);
  #endif
#else
  #ifdef __APPLE__
    const int n = (int) vswprintf(str, format, args);
  #else
    const int n = (int) vsnwprintf(str, (size_t) size, format, args);
  #endif
#endif

    va_end(args);

    return n;
}
```

Vincent

Thanks Vincent. I'll give that a go. That would be a lot more elegant than my current solution, which I was just about to post.

-AidyCC

I changed the Exception class for WIDE builds only as follows:-

I then changed OcilibStringFormat as follows, based on your suggestion:-

When running ocilib_demo.cpp and supplying a wrong password to generate an exception, I see this:-

So there is an issue with encoding %s placeholders.
Modifying the WIDE branch of OcilibStringFormat as follows:-

Results in this:-

So there is an issue with the string returned by OCIErrorGet in Wide mode. Modifying OcilibExceptionOCI as follows:-

Results in this:-

Which is the correct output. Like I said, not an elegant solution. I look forward to seeing your update.

-AidyCC

Hi, In the meanwhile, I have created a develop-v4.7.3 branch and committed a fix for Exception::what(). Regards, Vincent

About the formatting issue, I have pushed another commit that does internal formatting in OCI_CHARSET_WIDE on linux/unix platforms (302a523a). Can you let me know if all issues are resolved? Thanks, Vincent

Ok, taking a look now. Not quite... A slight modification is needed in Exception::GetMessage() const to move the return message outside of the #endif:-

Output of an OCI_CHARSET_ANSI build (OK):-

Output of an OCI_CHARSET_WIDE build (suspected OCIErrorGet encoding issue):-

Hi, I got why... and pushed another commit. I will run tests under Linux later today (quite busy now). Can you have a try in the meanwhile? Vincent

Sure. Thanks for the quick turnaround.

Yep! Those latest changes did the trick. Thanks, -AidyCC

good 👍 Let me know if you find other issues related to OCI_CHARSET_WIDE on macOS. I'll create an issue for the error message retrieval, add test cases for both issues, and will make a release by the end of the week, I guess. Best regards, Vincent
GITHUB_ARCHIVE
Note: "permalinks" may not be as permanent as we would like, direct links of old sources may well be a few messages off. Dominik Klein wrote: > Dominik Klein wrote: >> There was a user on the linux-ha channel today that had problems with >> the config example from this page: >> >> http://www.drbd.org/users-guide/s-heartbeat-crm.html >> >> While looking at it, there were several things about it that are >> either wrong or at least not perfect. >> >> * ordered and colocated default to true in groups - no need to specify >> them as you leave out other not absolutely needed things like IDs Correct, but IIRC the collocated=true default for resource groups has been in place only since Heartbeat 2.0.7 or so. Or maybe it was ordered -- can't remember. Anyway, in early 2.0.x versions you had to set one of these explicitly. So I just included both in the example to be on the safe side. >> * The attribute tags need to be closed - this is actually an error Touché. >> * The target_role should not be set as an instance_attribute, but as a >> meta_attribute Correct post-2.0.8; I had left the original syntax in for Debian and SLES users. Now that SLES 10 SP2 has 2.1.3 and so does etch-backports, you are correct; I need to fix that. >> and not just for one primitive of the group but for the >> entire group. Not true. target_role is per resource. >> Actually, you might leave it out as it defaults to the >> highest target_role a group can have (namely started) anyway. IMHO it's way smarter to configure things first with target_role=stopped, then start resources one by one. >> * LSB and Heartbeat resources do not need the provider option - does >> not cause any pain, but it is not needed OK. > Actually ... There's even more. The V2 example ... > > I would suggest just to point people to > http://www.linux-ha.org/DRBD/HowTov2 > > That's - imho by far - the best document on how to use the master_slave > DRBD RA. 
And I have been continuously getting complaints from users that they found it waaaay too complicated... that's why I included a separate section in the User's Guide. Any more comments are much appreciated. Cheers, Florian -- : Florian G. Haas : LINBIT Information Technologies GmbH : Vivenotgasse 48, A-1120 Vienna, Austria
OPCFW_CODE
An Overview of the TransformAble project

What is TransformAble?

TransformAble is a set of Web services which can be used by any suitable Web application to deliver a more accessible and customizable user experience. The TransformAble services modify a site's user interface and content resources in order to accommodate the individual needs and preferences of each user. These services enable Web sites to enhance and rearrange their appearance, layout, and structure. Such transformations provide customized accommodations including large type and high contrast colour schemes, simplified site navigation, accessible alternatives to audio or video content, and more. TransformAble works with the proposed ISO standard AccessForAll model, which provides a common representation for both user preferences and resource descriptions. AccessForAll is used by the services to match the user's needs with the most appropriate resources and presentation available. AccessForAll was designed for interoperability, and enables preferences to be used portably across compliant systems. TransformAble is being developed by the Adaptive Technology Resource Centre at the University of Toronto. The project consists of three Java-based services which are available as open source under the MIT license: PreferAble, StyleAble, and SenseAble. PreferAble provides an easy-to-use Web interface which enables the user to edit and store their preferences. PreferAble walks the user through a series of questions about how the application should appear and behave. This allows the user to configure preferences including screen enhancements such as larger type and higher contrast colour schemes, preferred language, control preferences, and required alternatives to multimedia content such as captions and audio descriptions. StyleAble performs a range of display and structural transformations on any well-formed Web page.
These transformations can be categorized into two types: 1) generation of custom style sheets, and 2) document transformations.

Style Sheet Generation

StyleAble's CSS generator can create customized style sheets based on a user's stated preferences, allowing them to control the overall appearance of the site, including the font size, face, foreground colour, background colour, and link appearance. Document transformations provide augmented views of a document which help users to navigate and understand the content more easily. This includes the on-the-fly creation of a table of contents or a list of links. Unlike SenseAble, the StyleAble service doesn't require metadata or a content repository. It uses the structure of well-formed HTML documents to provide information about how to perform these transformations. The SenseAble service works alongside content repositories which contain multimedia resources and associated metadata in the AccessForAll format. This metadata helps to describe the characteristics and accessibility of a particular resource, including the potential alternatives which may be available for it. Based on this information, SenseAble matches the available resources with the accessibility needs and preferences of the user. This process may involve substituting, augmenting, or re-aggregating portions of the content to make it more accessible to the user. For example, if a user is viewing a video resource and is deaf, hard of hearing, or is working in a noisy environment, SenseAble can match this need with any associated captions or sign language resources which may be available for the video. SenseAble's matching engine determines the availability and appropriateness of content alternatives, ranking them based on user preferences. The content aggregator in SenseAble can work with audio, video, textual and SMIL content to build alternative versions of the resource that are more accessible to the individual user.
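The style-sheet generation idea behind StyleAble can be sketched as a simple mapping from stated preferences to CSS rules. The preference names and the `generate_css` helper below are hypothetical illustrations, not the actual (Java-based) StyleAble API:

```python
# Hypothetical sketch of preference-driven style sheet generation,
# in the spirit of StyleAble's CSS generator. Preference keys are
# invented for illustration.

def generate_css(prefs: dict) -> str:
    """Build a user style sheet from stated display preferences."""
    body = []
    if "font_size" in prefs:
        body.append(f"font-size: {prefs['font_size']};")
    if "font_face" in prefs:
        body.append(f"font-family: {prefs['font_face']};")
    if "foreground" in prefs:
        body.append(f"color: {prefs['foreground']};")
    if "background" in prefs:
        body.append(f"background-color: {prefs['background']};")
    rules = ["body { " + " ".join(body) + " }"]
    if prefs.get("underline_links"):
        rules.append("a { text-decoration: underline; }")
    return "\n".join(rules)

# A large-type, high-contrast preference set:
css = generate_css({
    "font_size": "200%",
    "foreground": "#ffffff",
    "background": "#000000",
    "underline_links": True,
})
print(css)
```

The resulting style sheet would then be served alongside the page, overriding the site's default appearance for that user.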
AccessForAll is a multi-part standard consisting of the Personal Needs and Preferences (PNP) part, which encapsulates user preferences, and the Digital Resource Description (DRD), which describes content resources. The DRD describes how a resource is perceived, understood, and interacted with. The TransformAble services depend on AccessForAll to provide a standard means whereby resources are matched to the accessibility needs and preferences of a person. The concepts behind the AccessForAll framework were originally developed by the IMS Accessibility Working Group and are now in the process of becoming an ISO standard. The ATRC has developed a Java-based implementation of the PNP and DRD models for use by PreferAble, StyleAble and SenseAble.

Last Updated on Friday, 30 March 2007 16:53
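The matching step that SenseAble performs can be illustrated with a toy ranker: given a user's PNP-style needs and DRD-style descriptions of candidate alternatives, pick the alternative that satisfies the most needs. All names and vocabulary below are invented for illustration; the real services are Java-based and use the full AccessForAll vocabulary:

```python
# Toy sketch of AccessForAll-style matching: rank a resource's
# alternatives by how many of the user's stated needs they satisfy.
# The "provides" vocabulary here is invented; real PNP/DRD terms
# are richer.

def rank_alternatives(needs: set, alternatives: list) -> list:
    """Return alternatives sorted best-first by number of needs met."""
    return sorted(
        alternatives,
        key=lambda alt: len(needs & set(alt["provides"])),
        reverse=True,
    )

# A deaf user viewing a video: a captioned, signed version wins.
needs = {"captions", "sign-language"}
alternatives = [
    {"id": "audio-desc", "provides": ["audio-description"]},
    {"id": "captions-en", "provides": ["captions"]},
    {"id": "signed", "provides": ["captions", "sign-language"]},
]
best = rank_alternatives(needs, alternatives)[0]
print(best["id"])
```

A real matching engine would also weigh preference strength and may re-aggregate content rather than simply substituting one resource for another.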
OPCFW_CODE
GetOverlappedResult failed on MacBook Air running Windows 10 (Boot Camp)

Hi, I installed the debug UsbDk on my MacBook running Windows 10 in Boot Camp (so it's native Windows 10). When I capture a device, it fails immediately when trying to reap the completed URB on the pipe:

00000012 58.70553970 [2808] [1]0EE8.0468::10/12/2015-20:23:55.814 [UsbDk]CUsbDkControlDevice::AddRedirect Success. New redirections list: 00000013 58.70557404 [2808] 00000014 58.70597839 [2808] [1]0EE8.0468::10/12/2015-20:23:55.814 [UsbDk]CUsbDkRedirection::Dump Redirect: DevID: USB\VID_18A5&PID_0243, InstanceID: 070B544992654402 00000015 58.70605087 [2808] 00000016 58.70634460 [2808] [3]0004.13C0::10/12/2015-20:23:55.926 [UsbDk]CRegText::Dump ID: USB\VID_18A5&PID_0243 00000017 58.70641327 [2808] 00000018 58.70670319 [2808] [3]0004.13C0::10/12/2015-20:23:55.926 [UsbDk]CRegText::Dump ID: 070B544992654402 00000019 58.70683289 [2808] 00000020 58.70699310 [2808] [3]0004.13C0::10/12/2015-20:23:55.926 [UsbDk]UsbDkEvtDeviceAdd Entry 00000021 58.70706177 [2808] 00000022 58.70731735 [2808] [3]0004.13C0::10/12/2015-20:23:55.926 [UsbDk]CRegText::Dump ID: USB\VID_18A5&PID_0243 00000023 58.70745087 [2808] 00000024 58.70771408 [2808] [3]0004.13C0::10/12/2015-20:23:55.926 [UsbDk]UsbDkEvtDeviceAdd Exit STATUS_SUCCESS 00000025 58.70784760 [2808] 00000027 64.89150238 [3816] Driver file operation error. GetOverlappedResult failed (Element not found. Error code = 1168) 00000028 69.83258057 [3816] Driver file operation error. GetOverlappedResult failed (Element not found. Error code = 1168) 00000029 74.97229767 [3816] Driver file operation error. GetOverlappedResult failed (Element not found. Error code = 1168)

However it runs OK in a Windows 10 VM, and it also runs OK in Windows 7. Does this depend on the native host controller? Looking at the MacBook, it's running the standard xHCI USB 3.0 host controller driver for the chipset.

Fixed by commit 4a0354fdcd2d9ba342199180c16a853534c52527 at version 1.0.7
GITHUB_ARCHIVE
docker and duplicity multivol_snapshot

Hi, I'm facing a problem understanding duplicity's multivol_snapshot. Trying to restore a db manually from tar.gz files, I cannot untar them because they are under multivol_snapshot. I have 2 docker-compose files using volumerize: one for preprod (rc) and one for production:

```yaml
rcbackup:
  # https://github.com/blacklabelops/volumerize/tree/master/backends/AmazonS3
  # https://github.com/blacklabelops/volumerize/issues/97
  image: blacklabelops/volumerize:1.6
  container_name: RcBackup
  env_file:
    - ../../.env
  restart: always
  depends_on:
    - rcdb
    - rcweb
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /etc/timezone:/etc/timezone:ro
    - volumerizerccache:/volumerize-cache
    - '../../db_backup:/source/db_backup:ro'
    - '../../media:/source/media:ro'
  environment:
    - VOLUMERIZE_SOURCE=/source
    - VOLUMERIZE_CACHE=/volumerize-cache
    - VOLUMERIZE_TARGET=s3://s3.eu-west-3.amazonaws.com/${VOLUMERIZE_USER}
    - AWS_ACCESS_KEY_ID=${VOLUMERIZE_USER_ACCESS_KEY}
    - AWS_SECRET_ACCESS_KEY=${VOLUMERIZE_USER_SECRET_KEY}
    - TZ="Europe/Paris"
    - VOLUMERIZE_JOBBER_TIME=0 35 3 * * *
    # - VOLUMERIZE_JOBBER_TIME=0 1/15 * * * *
    - VOLUMERIZE_FULL_IF_OLDER_THAN=7D
    - JOB_NAME2=RemoveOldBackups
    - JOB_COMMAND2=/etc/volumerize/remove-older-than 7D --force
    - JOB_TIME2=0 40 3 * * *
  networks:
    - intern
  labels:
    - traefik.enable=false

volumes:
  rcpgdb:
  volumerizerccache:
  rcredis-data:
```

```yaml
prodbackup:
  # https://github.com/blacklabelops/volumerize/tree/master/backends/AmazonS3
  # https://github.com/blacklabelops/volumerize/issues/97
  image: blacklabelops/volumerize:1.6
  container_name: ProdBackup
  env_file:
    - ../../.env
  restart: always
  depends_on:
    - proddb
    - prodweb
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /etc/timezone:/etc/timezone:ro
    - volumerize-cache:/volumerize-cache
    - '../../db_backup:/source/db_backup:ro'
    - '../../media:/source/media:ro'
  environment:
    - VOLUMERIZE_SOURCE=/source
    - VOLUMERIZE_CACHE=/volumerize-cache
    - VOLUMERIZE_TARGET=s3://s3.eu-west-3.amazonaws.com/${VOLUMERIZE_USER}
    - AWS_ACCESS_KEY_ID=${VOLUMERIZE_USER_ACCESS_KEY}
    - AWS_SECRET_ACCESS_KEY=${VOLUMERIZE_USER_SECRET_KEY}
    - TZ="Europe/Paris"
    - VOLUMERIZE_JOBBER_TIME=0 35 3 * * *
    - VOLUMERIZE_FULL_IF_OLDER_THAN=7D
    - JOB_NAME2=RemoveOldBackups
    - JOB_COMMAND2=/etc/volumerize/remove-older-than 365D --force
    - JOB_TIME2=0 40 3 * * *
  networks:
    - intern
  labels:
    - traefik.enable=false

volumes:
  prodpgdb:
  volumerize-cache:
  prodredis-data:
```

First, my volumes were named the same, but since I've read that they must not be shared, I changed the one for preprod to volumerizerccache. But now on the preprod, when I run docker exec RcBackup backup, I'm also getting a multivol_snapshot. I cannot understand why. Any help? 🙏 🙏 Thank you!

Wrong setup. You need to define more enumerated envs for each job. Especially, each backup job needs its own cache. Read it again: https://github.com/blacklabelops/volumerize#multiple-backups

I do not understand your question. What is a 'multivol_snapshot'? What does it mean 'to face a multivol_snapshot'? Why do you rename the volumes, because you run them on the same machine? If you run them on the same machine, why do you rename the volumes but point them to the same folder?

It means that when I download a duplicity-full.20220706T0000Z.vol1.difftar I have 2 folders: multivol_snapshot and snapshot. Files in snapshot can be read in my macOS Finder, but not those in multivol_snapshot. What I have now understood is that I should not try to restore files manually by downloading an archive. I should only use the volumerize command to do so.

Yes, I run the containers on the same machine. I guess I should not point them to the same folder. But I've understood that source is a temp folder, so this is not a big deal, is it? And thank you for your answer!
GITHUB_ARCHIVE
After half a year of hard work, the Blender Foundation has released version 2.46 of the open-source, cross-platform 3D program Blender. The program is intended for 3D modeling, animation, rendering, post-production, interactive creation, and playback. More information about Blender's extensive capabilities can be found on this page. Among other things, the animated film 'Big Buck Bunny' was made with this program, and most of the changes in version 2.46 are a result of that film project. The release notes read as follows:

The work of the past half year - also thanks to the open movie project "Big Buck Bunny" - has resulted in a greatly improved feature set, now released as Blender 2.46, the "Bunny release"! This version supports a new particle system with hair and fur combing tools, fast and optimal fur rendering, a mesh deformation system for advanced character rigging, cloth simulation, fast Ambient Occlusion, a new Image browser, and that's just the beginning. Check the extensive list of features in the log below... have fun!

Hair and Fur
Many features have been added to make fur and grass rendering for Peach possible. Big improvements were made in visual quality, rendering speed and memory usage. The new Image Browser is blazingly fast and stable, and not only allows you to browse for images in your filesystem, but can also show previews of materials, textures, world, lamp and image data. Reflections and refractions can now be rendered with a glossiness factor, controlling the roughness of the material. Great tools for making UV textures: you can now bake normal maps based on rendering a higher-resolution mesh, bake displacement (including 32-bit depth), and bake transparency.

Physics caching and baking
The softbody, cloth and particle physics now use a unified system for caching and baking. For real-time tweaking, a new option "Continue Physics" will continue the simulation regardless of the current frame.
Armatures now support Bone groups, custom Bone colors, automatic colors, more custom shape options, ... and many more goodies for our rigging department. Many new tools and improvements have been made to speed up the rigging and posing workflow. There are now tools for more intuitive bone creation, various hotkeys to speedy batch-editing of bones, auto-ik and auto-keyframing tweaks, and many more goodies. Bone Heat Weighting is a new method to create vertex weights for bone deformation, it generates better results, and does not require setting a radius for bones. Also added was Quaternion-Interpolated Deformation for superior blends. Game Engine improvements The Blender GameEngine has seen a great deal of improvement with an increase in play-back speed, a number of nice new features including 2D filter compositing, and of course attention to quality through bug fixes. Raytraced soft shadows are now possible for all lamp types; including spot, sun and point lights. FSA gives superior anti-aliasing for high dynamic range and compositing. Zmasks allow rendering of composite masks. Instancing gives efficient memory re-use for duplicates. Cubic shading (to prevent discontinuity banding) Higher level texture coordinates for duplicates (like feathers) Lamp fall-off curves Softer Halos, premul alpha, multisample shadowbuffers, ... Python Scripts and API There have been a large number of script additions and script updates, as well as API improvements since the last release. Particle system rewrite The particle system has been rewritten from scratch. It now allows advanced hair grooming tools, but also much better physics, boid animation and even explosions! Cloth simulation is available in Blender via a modifier on Mesh objects. Cloth then realistically and in real-time interacts with other objects, the wind or other forces, all of which is fully under your control. 
Ambient Occlusion is a render option that darkens areas with less visibility, simulating the effect of environment light. This new AO option is based on a quick approximation, giving many factors of speedup.

Mesh Deform Modifier
This new method allows you to use any Mesh cage as a deformation "lattice" for animated characters. By layering - using both a MeshDeformer and an Armature - you can achieve both high-level and precise control.

Action Editor improvements
The Action Editor has been rewritten to have a more flexible codebase that is more future-proof and extendable. This has enabled tools to be shared between the editing modes for Actions and ShapeKeys, and it now has many new features...

Constraints are crucial for setting up good character rigs. A wealth of new features have been added to improve and extend this system. Most notable is the addition of PyConstraints, allowing full control to animators.

QMC & Adaptive Sampling
Blender now includes two new sampling methods, using a Halton sequence (Adaptive QMC) and a Hammersley sequence (Constant QMC). Raytracing now also supports adaptive sampling.

Many, many new goodies in our Video Sequence Editor: UV texture editing - UI made more accessible, new panels/views - Built-in strip blending - Color correction tools - Markers, NTSC support, preview, ...

UV texture coordinates are now accessible via regular Mesh editmode. And many more features were added, such as: - UV draw types - Solid OpenGL view with textures - 2D cursor in Image window

New nodes have been added for shading and compositing. And more features: NDOF device support - Align to Transform Orientation - Pole target for IK chains - Pose Libraries - Weight Paint visualization - Multi-Modifier support - Weight-group selecting - Custom transform orientations - Distributed rendering options - External Paths Tools - Automerge - Recursive Dupli Groups
OPCFW_CODE
Designing Meaningful Choices to Protect User Privacy Continuing last week's conversation, in which I presented Transparency by Design - one of the frameworks I am proposing in my Ph.D. - today, I would like to discuss the design of meaningful choices in the context of online data protection. Transparency by Design highlights UX design's essential role as a tool to empower users with relevant, timely, and adequate information about how their personal data is being collected and processed by organizations. Another important element of Transparency by Design, particularly connected to its fairness dimension, is the availability of meaningful choices to users. Choice is the main voice of users in their daily interaction with organizations online; without meaningful choices, users have a weaker presence, and existing informational vulnerabilities are exacerbated. Even if there are accessible privacy notices containing all the legally mandatory information, absent meaningful privacy choices, users will be unable to exercise their autonomy. In terms of meaningful choices, at this point, I am not discussing the content of these choices or the extent to which privacy deliberations should be open or not for discussion with users. I focus on design guidelines to help organizations implement meaningful privacy choices in real-world interfaces and systems, according to the approach proposed by HCI scholars Yuanyuan Feng, Yaxing Yao, and Norman Sadeh, whose model I incorporate in my research. By designing choice mechanisms that are meaningful and aware of users’ cognitive biases and vulnerabilities, organizations will be able to support autonomy and empower users, therefore helping once again to reduce informational vulnerabilities that permeate the online interaction between users and organizations. Feng et al. developed a design space of five key dimensions to be considered when designing privacy choices. 
Their design space also provides a “taxonomy to categorize, evaluate, and communicate different privacy choice design options with all involved stakeholders, including users and legal professionals.” According to the authors, there are five key dimensions for the design space in privacy choices: type, functionality, channel, timing, and modality, and each of these dimensions has multiple options to be chosen from. According to Transparency by Design, an organization should evaluate what options can help mitigate users’ informational vulnerabilities and promote the other elements of the Transparency by Design framework. Below is an image from Feng et al.’s research illustrating the multiple aspects of each of the five key dimensions in the design space for privacy: Yuanyuan Feng, Yaxing Yao & Norman Sadeh, A Design Space for Privacy Choices: Towards Meaningful Privacy Control in the Internet of Things, CHI’21 (2021). Transparency by Design proposes that to promote fairness, which is one of the aspects of transparency obligations in the GDPR, an organization must be aware of the key design elements that compose a privacy choice, as developed by Feng et al. While designing products and services, an organization must create privacy choice mechanisms that can reduce users' informational vulnerabilities. Meaningful privacy choices are at the intersection between Privacy by Design and Transparency by Design, as they highlight how the design of a certain product or service – since its inception – should have privacy in mind. Privacy is a complex topic, and even a simple privacy toggle with the options “yes” or “no,” to be coherent with Privacy by Design and Transparency by Design, must be aligned with a background planning and strategy on how to better serve users’ best interest and mitigate informational vulnerabilities. Meaningful privacy choices are also part of the broader concept of usable privacy. 
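The five-dimension design space from Feng et al. described above can be captured as a small data structure for use in design reviews. Note that the option lists below are illustrative placeholders rather than the paper's full vocabulary; only the five dimension names themselves are taken from the text:

```python
# The five key dimensions of Feng et al.'s design space for privacy
# choices (type, functionality, channel, timing, modality), with
# illustrative, non-exhaustive example options per dimension.
PRIVACY_CHOICE_DESIGN_SPACE = {
    "type": ["binary", "multiple-choice", "contextualized"],
    "functionality": ["presentation", "decision support", "enforcement"],
    "channel": ["primary interface", "secondary interface", "on-device"],
    "timing": ["at setup", "just in time", "context-aware", "periodic"],
    "modality": ["visual", "auditory", "machine-readable"],
}

# A designer would select one or more options per dimension when
# specifying a concrete privacy choice mechanism.
for dimension, options in PRIVACY_CHOICE_DESIGN_SPACE.items():
    print(f"{dimension}: {', '.join(options)}")
```

Walking through each dimension explicitly is one way to make sure a proposed privacy choice has been considered from all five angles before implementation.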
Jakob Nielsen has defined usability as “a quality attribute that assesses how easy user interfaces are to use.” He added that: “Usability is defined by 5 quality components: a) learnability: how easy is it for users to accomplish basic tasks the first time they encounter the design? b) efficiency: once users have learned the design, how quickly can they perform tasks? c) memorability: when users return to the design after a period of not using it, how easily can they reestablish proficiency? d) errors: how many errors do users make, how severe are these errors, and how easily can they recover from the errors? e) satisfaction: how pleasant is it to use the design?” When discussing usable privacy, we are inquiring about methods and tools to protect privacy that embrace or aim at achieving the quality attributes described above. Meaningful privacy choices are one of these methods. The aim is to bring usability to the choices presented to users so that they can choose meaningfully regarding their privacy. See you next week. All the best, Luiza Jarovsky
OPCFW_CODE
View all our technical documentation here. Select a product based on make, model, and whether or not you have an active or legacy product. The MySQLi Extension (MySQL Improved) is a relational database driver used in the PHP scripting language to provide an interface with MySQL databases. This page includes the software download for Oracle VM Server for x86 and Oracle VM Server for SPARC. Learn What’s New here. Oracle VM Server for x86 and Oracle VM. Moving from MySQL 5.5 to 5.6. Step 1: Backup – Manejando datos – Feb 4, 2014. Apart from this, I must say that I found less information than I imagined for moving from MySQL 5.0, 5.1 or 5.5 to version 5.6 on Windows. So, I will. Feb 2, 2008. The parser in MySQL 5 is much faster than it used to be. from a master running MySQL 4.0 to a slave running MySQL 5.0 or 5.1 or 5.5 or 5.6" Database issues. This section holds common questions about the relation between PHP and databases. Yes, PHP can access virtually any database available today. Jun 5, 2016. Hello Community. I have the task to migrate from WIN 2003/MySQL 5.1 to a Linux/MySQL 5.5 server for Moodle. For the Windows server I have version 1.9.5 for. ActiveVFP is a completely free and open source project for creating web applications with Visual FoxPro. Why use PHP? Easy yet powerful FoxPro web development. Ruby On Rails Migration Set Primary Key Apr 9, 2012. Don't do this. Use the built-in id field as the primary key. If you're going to use Rails, you should build your app the "Rails way" unless you have. ruby – How to add primary key to Rails? – Stack Overflow – Apr 14, 2017. This content will be auto generated from Jul 12, 2016. Once we migrate to Aurora, we will not have to conduct this type of maintenance. Step 1: Create a MySQL 5.5 Slave and Begin the Replication. Step 5: Stop Running the Replication of the MySQL 5.6 Slave, and Create a. Jan 9, 2015.
Still, when it all adds up, 5% is relevant in particular for web server backends, MariaDB 5.5 is a complete drop-in-replacement for MySQL 5.5. Oracle 9i To 11g Migration Checklist Apr 17, 2013. This document explains how organizations can upgrade or migrate from Oracle Database 9i or Oracle Database 10g to Oracle Database 11g. Oracle OpenWorld 2017- San Francisco: Connect, learn, and discover this fall at Oracle OpenWorld 2017. Join industry and product thought leaders October 1-5 to take. Statistical Techniques | Statistical Mechanics Complete Aug 24, 2015. Database migration from MySQL 5.1 to MariaDB 10 on OpenBSD. I would have to go 4.9 -> 5.0 -> 5.1 -> 5.2 -> 5.3 -> 5.4 -> 5.5 -> 5.6 -> 5.7. Cheat Sheets for Developers. The largest collection of reference cards for developers.
OPCFW_CODE
Amazingfiction Fey Evolution Merchant novel – Chapter 575 – Allstar Match broken bump reading-p2 she buildeth her house kjv Novel–Fey Evolution Merchant–Fey Evolution Merchant Chapter 575 – Allstar Match smiling trucks At the top branch on the snow pine sat a crow as well as a bat. They looked extremely unusual. Just how much determination and laws will I have to collect to get to top rated celebrities for the Sacred Sword Wielding Queen? The Sacred Sword Wielding Queen did not would like to critique Lin Yuan’s identifying capability. The Sacred Sword Wielding Queen was no more an image of white-colored that it really was at first. The Sacred Sword Wielding Queen was now an element of Lin Yuan’s soul. Cairo Trilogy: Palace Of Desire Now, there were an added part of secret around his first foundation. Just after he contemplated it, Lin Yuan resolved to not ever test out the Sacred Sword Form’s ability. The Sacred Sword Wielding Princess possessed no idea so it would not feasible for it to try out a really feast once more. There have been now two darkish-kind willpowers, as well as 2 dim-sort laws imprinted on the Sacred Sword Wielding Queen’s skirt. Three of the Regulations Crystals about the Sacred Sword Wielding Queen’s crown acquired switched a pure whitened. Feng Yu Jiu Tian Lin Yuan was previously an affectionate and brilliant younger years. Lin Yuan was in the past an affectionate and vivid younger years. The amount of strength of will and laws will I need to acquire to contact leading actors to the Sacred Sword Wielding Queen? Nonetheless, Lin Yuan had not been delighted concerning this development quite, he believed installation force. my grandmother dear has a garden After he thought about it, Lin Yuan decided not to ever test out the Sacred Sword Form’s power. My sacred reference lifeform is ridiculous! It actually regards these Regulation Crystals with pure legislation ability as garbage. 
There are no clue what number of emperor-cla.s.s pros and Delusion Breed of dog feys would cure these as treasures. They could a single thing to obtain their mitts on these. Obviously, Lin Yuan obtained underwent an enormous change as he went into his room. He experienced gained the opportunity to cover his capabilities. In under a couple of hours, its service provider got already improved its agreed-upon identity. There were no difference between Bright or Dark-colored pill. peeps at many lands norway In under two hours, its service provider obtained already changed its predetermined-upon label. When Lin Yuan viewed it, he noticed as if he was staring at the embodiment of darkness. The Sacred Sword Wielding Princess stared blankly at its licensed contractor, sensing so it should give up on its decide to make its specialist more intelligent. The Mom of Bloodbath and Countless Summer time saw that they can no more feeling Lin Yuan’s spirit qi professional ranking. A idea did actually come to Liu Jie, in which he claimed, “On the second day time from the new season, there’ll be an Allstar Fit S Competition. Even though the Allstar Match up is designed for enjoyment because it is much less proper as being the S Tournament, it requires all of the stage and B-degree guilds. Chu Ci, in order to go, I’ll get seat tickets. We’ll all have the ability to go jointly. The showcase of the S Competition is usually that the crowd will get into a lucky bring. The winner will get the chance to obstacle an Allstar fellow member.” It felt excellent to help increase its actors as soon as it was subsequently delivered! Lin Yuan walked from his area and sat around the drinking water rhinoceros synthetic leather couch to view the S Tournament with all of those other party. Chu Ci usually referred to as Lin Yuan by his identify. Lin Yuan got no reason at all to refuse since she obtained even long gone when it comes to to simply call him ‘Big brother’. 
The Sacred Sword Wielding Queen was now a part of Lin Yuan's soul. The Sacred Sword Wielding Queen stared blankly at its contractor, feeling that it should give up on its plan to make its contractor smarter. Now, there was an added layer of mystery around his original foundation. Nevertheless, it had happened to Lin Yuan. There were already too many unbelievable circumstances surrounding Lin Yuan, and the Mother of Bloodbath and Endless Summer were already used to it. The Sacred Sword Wielding Queen did not want to critique Lin Yuan's naming ability. The three Law Crystals on the Sacred Sword Wielding Queen's crown had turned a pure white.
OPCFW_CODE
Reducing the size of minidumps of managed programs while keeping some heap information? With the dump debugging support in .NET 4.0 we are looking into automatically (after asking the user of course :) creating minidumps of C# program crashes to upload them to our issue tracking system (so that the minidumps can assist in resolving the cause of the crash). Everything is working fine when using the WithFullMemory minidump type. We can see both stack and heap variables. Unfortunately the (zipped) dumps are quite large even for small C# programs. If we use the "Normal" minidump type we get a very small dump, but not even stack variable information is available in the managed debugger. In fact, anything less than WithFullMemory seems quite useless in the managed debugger. We have made a few attempts at using a MINIDUMP_CALLBACK_ROUTINE to limit the included module information to our own modules, but it seems to have almost no effect on a managed dump while still managing to break managed debugging. Does anyone have any tips on how to trim the minidump while keeping it useful for managed debugging? I use the following flags to save space while generating useful minidumps for C++ applications: MiniDumpWithPrivateReadWriteMemory | MiniDumpWithDataSegs | MiniDumpWithHandleData | MiniDumpWithFullMemoryInfo | MiniDumpWithThreadInfo | MiniDumpWithUnloadedModules The flag values are specified in DbgHelp.h and would need to be marshaled into C#. The dump is further restricted by specifying a callback routine. Just fyi, as mentioned above, ClrDump looks very cool but it appears it only works with the 1.1 and 2.0 runtimes. With all due respect, I STRONGLY encourage you to sign up for a Microsoft WinQual account and register your applications with Microsoft.
http://www.microsoft.com/whdc/winlogo/maintain/StartWER.mspx This will allow you not only to take advantage of Microsoft's extensive crash collection and analysis services (for free!), but also to publish fixes and patches for your applications through Windows' built-in error reporting facilities. Further, by participating in the WinQual program, enterprises who deploy your app and who employ an in-house Windows Error Reporting system will be able to collect, report and receive patches for your app too. Another benefit is that by employing WinQual, you're one step closer to getting your app logo certified! Every OEM & ISV I've worked with who uses WinQual saves an ENORMOUS amount of effort and expense compared to rolling their own crash collection and reporting system. As much as I support WinQual: Microsoft still has a bad reputation with many users in terms of privacy. I know tons of people who would never, ever send one of those crash dumps to Microsoft while they would happily send them directly to the developer. Sometimes emotion trumps arguments, and rolling your own crash collection is the way to go. But that depends on the target audience of the app. Most people don't care enough to have an opinion about Microsoft's reputation. Many users would rather send a crash dump to Microsoft than to a company/entity that they don't recognize, know or trust. A LOT of malware trawls user data by popping up a warning message which the user clicks through and then authorizes in UAC. Next thing they know, the user's machine won't boot as it is now infected by installed malware. 90% of the time, it's better to support the OS's built-in error reporting infrastructure. Thank you for the suggestion, but WinQual is not relevant for us. ClrDump might help you out. ClrDump is a set of tools that allows you to produce small minidumps of managed applications. In the past, it was necessary to use full dumps (very large in size) if you needed to perform post-mortem analysis of a .NET application.
ClrDump can produce small minidumps that contain enough information to recover the call stacks of all threads in the application. I wrote an email to the author of ClrDump asking what MINIDUMP_TYPE parameters his tool uses to create dumps in 'min' mode. I posted his answer here: What is minimum MINIDUMP_TYPE set to dump native C++ process that hosts .net component to be able to use !clrstack in windbg
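For reference, the MINIDUMP_TYPE flags quoted in the answer above are bit flags from DbgHelp.h; in C# you would typically declare them in a [Flags] enum and pass the combined value to MiniDumpWriteDump via P/Invoke. Here is a quick sketch of the combination (the hex values are as I recall them from the Windows SDK headers; verify against your copy of DbgHelp.h):

```python
# MINIDUMP_TYPE flag values (from DbgHelp.h in the Windows SDK).
MiniDumpWithDataSegs               = 0x00000001
MiniDumpWithHandleData             = 0x00000004
MiniDumpWithUnloadedModules        = 0x00000020
MiniDumpWithPrivateReadWriteMemory = 0x00000200
MiniDumpWithFullMemoryInfo         = 0x00000800
MiniDumpWithThreadInfo             = 0x00001000

# The combination suggested in the answer above:
dump_type = (MiniDumpWithPrivateReadWriteMemory
             | MiniDumpWithDataSegs
             | MiniDumpWithHandleData
             | MiniDumpWithFullMemoryInfo
             | MiniDumpWithThreadInfo
             | MiniDumpWithUnloadedModules)

print(hex(dump_type))  # 0x1a25
```

This keeps private read/write pages (where managed heap and stack data live) and memory/thread metadata while skipping shared image pages, which is why it is so much smaller than WithFullMemory.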
STACK_EXCHANGE
I will try to make this post as light on mathematics as possible, but a complete in-depth understanding can only come from understanding the underlying mathematics! Generative learning algorithms are really beautiful! Like anything, they have their advantages and disadvantages. These algorithms are a bit "tough" to understand, but comparing them with the "easier" to understand discriminative learning algorithms should make things easier. Let's start by talking about discriminative learning algorithms. Discriminative Learning Algorithms: A discriminative classifier tries to model the class label directly from the observed data. It makes fewer assumptions on the distributions but depends heavily on the quality of the data (Is it representative? Is there a lot of data?). An example is Logistic Regression. A discriminative model learns the conditional probability distribution: P(y | x) – which should be read as "the probability of y given x". To predict the label y from the training example x, the model evaluates: f(x) = argmax_y P(y | x) The above equation means: choose the value of y which maximises the conditional probability P(y | x). Or in simpler terms – it chooses the most likely class y given x. This can be visualised as the model creating decision boundaries between the classes. To classify a new example the model checks which side of the decision boundary the example falls on. Now let's move on to generative learning algorithms. Generative Learning Algorithms: Generative approaches try to build a model of the positives and a model of the negatives. You can think of a model as a "blueprint" for a class. A decision boundary is formed where one model becomes more likely than the other. As these algorithms create a model of each class, they can also be used for generation. To create these models, a generative learning algorithm learns the joint probability distribution P(x, y). Now time for some maths! The joint probability can be written as: P(x, y) = P(x | y) .
P(y) ….(i) Also, using Bayes' Rule we can write: P(y | x) = P(x | y) . P(y) / P(x) ….(ii) Since, to predict a class label y, we are only interested in the argmax, the denominator can be dropped from (ii). Hence, to predict the label y from the training example x, generative models evaluate: f(x) = argmax_y P(y | x) = argmax_y P(x | y) . P(y) The most important part of the above is P(x | y). This is what allows the model to be generative! P(x | y) answers: what features x are likely given class y. Hence, with the joint probability distribution (i), given a y, you can calculate ("generate") its corresponding x. For this reason they are called generative models! Generative learning algorithms make strong assumptions on the data. To explain this, let's look at a generative learning algorithm called Gaussian Discriminant Analysis (GDA). Gaussian Discriminant Analysis (GDA): This model assumes that P(x|y) is distributed according to a multivariate normal distribution. I won't go into the maths involved, but just note that the multivariate normal distribution in n dimensions, also called the multivariate Gaussian distribution, is parameterized by a mean vector μ ∈ R^n and a covariance matrix Σ ∈ R^{n×n}. A Gaussian distribution is fit for each class. This allows us to find P(y) and P(x | y). Using these two we can finally find P(y | x), which is required for prediction. For a two-class dataset, pictorially what the algorithm is doing can be seen as follows: Shown in the figure are the training set, as well as the contours of the two Gaussian distributions that have been fit to the data for each of the two classes. Also shown in the figure is the straight line giving the decision boundary at which p(y = 1|x) = 0.5. On one side of the boundary, we'll predict y = 1 to be the most likely outcome, and on the other side, we'll predict y = 0. As we now have the Gaussian distribution (model) for each class, we can also generate new samples of the classes.
The features, x, for these new samples will be drawn from the respective Gaussian distribution (model). The above should have given you some intuition of how a generative learning algorithm works. The following discussion will help you see the difference between generative and discriminative learning algorithms much more clearly. Generative algorithms make strong assumptions on the data. GDA assumes that the data is distributed as a multivariate Gaussian. Naive Bayes (another generative algorithm) assumes that each feature is independent of the other features in the data. On the contrary, discriminative algorithms make weak assumptions on the data. Logistic regression assumes that P(y | x) is a logistic function. This is a weaker assumption than that made by GDA, because if P(x|y) is multivariate Gaussian then P(y|x) necessarily follows a logistic function. The converse, however, is not true; i.e., P(y|x) being a logistic function does not imply that P(x|y) is multivariate Gaussian. (Please carefully note the ordering of x and y in the above probabilities!) Generative models often outperform discriminative models on smaller datasets because their generative assumptions place some structure on your model that prevents overfitting. For example, let's consider GDA vs. Logistic Regression. The GDA assumption is of course not always satisfied, so logistic regression will tend to outperform GDA as your dataset grows (since it can capture dependencies that GDA can't). But when you only have a small dataset, logistic regression might pick up on spurious patterns that don't really exist, so the GDA assumption acts as a kind of "regularizer" on your model that prevents overfitting.
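To make the GDA recipe concrete, here is a minimal one-dimensional sketch (plain Python, two classes, shared variance, all names my own): it fits P(y) and P(x | y), then predicts via f(x) = argmax_y P(x | y) P(y), exactly as in equation (i) above.

```python
import math

def fit_gda(xs, ys):
    """Fit class priors P(y), per-class means, and a pooled variance (1-D GDA)."""
    classes = sorted(set(ys))
    n = len(xs)
    priors = {c: sum(1 for y in ys if y == c) / n for c in classes}
    means = {c: sum(x for x, y in zip(xs, ys) if y == c)
                / sum(1 for y in ys if y == c) for c in classes}
    # Shared (pooled) variance: one sigma^2 used for both classes.
    var = sum((x - means[y]) ** 2 for x, y in zip(xs, ys)) / n
    return priors, means, var

def gaussian(x, mu, var):
    """Density of N(mu, var) at x: our model of P(x | y)."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def predict(x_new, priors, means, var):
    """f(x) = argmax_y P(x | y) P(y)."""
    return max(priors, key=lambda c: gaussian(x_new, means[c], var) * priors[c])

# Two well-separated 1-D classes:
xs = [1.0, 1.2, 0.8, 5.0, 5.2, 4.8]
ys = [0, 0, 0, 1, 1, 1]
priors, means, var = fit_gda(xs, ys)
print(predict(1.1, priors, means, var))  # 0
print(predict(5.1, priors, means, var))  # 1
```

Because the class-conditional model is an explicit Gaussian, "generation" really is just sampling from it, e.g. `random.gauss(means[1], var ** 0.5)` produces a plausible new x for class 1.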
What this means is that generative models can actually learn the underlying structure of the data if you specify your model correctly and the model assumptions actually hold, but discriminative models can outperform them in case your generative assumptions are not satisfied (since discriminative algorithms are less tied to a particular structure, and assumptions are rarely perfectly satisfied anyway!). There's a paper by Andrew Ng and Michael Jordan on discriminative vs. generative classifiers that talks about this more. The crux of the paper is: - Generative models have higher asymptotic error (as the number of training examples becomes large) compared to discriminative models. - Generative models generally approach their asymptotic error much faster than discriminative models. Asymptotic error is the lowest possible error achievable by the model. These graphs from the paper help visualise the above two points: - Plots are of generalisation error vs. m (averaged over 1000 random train/test splits). The dashed line is logistic regression (a discriminative learning algorithm); the solid line is naive Bayes (a generative learning algorithm). As can be seen from the plots, if the amount of training data is small, generative algorithms generally give better results. One thing to keep in mind is that neither of the two types of algorithms is superior to the other! Both have their use cases. I hope this post has helped you understand generative learning algorithms and develop an intuition of how they compare to discriminative ones. If you have any doubts feel free to ask them, and if you find any errors or problems please feel free to contact me.
OPCFW_CODE
ECE R113 bicycle lighting rules (E-bike) Not sure if the ECE rules are freely available, but I quickly found an English version of ECE R113 on the web (I had already found it in German long ago) and you can have a look too. Update 2020-2-14: The ECE rules are freely available after all, only hidden by the unclear UN website... A reader who read my criticism of the behaviour of a few moderators on cpf (see my analysis of the stupidity there HERE) sent me a link for ECE R101-120, and then I had a closer look at the UN website and found how to navigate it and find the rest of the rules (without relying on direct URL manipulation), which also showed the links were not hidden links. The following shows you where to download them all: First go to https://www.unece.org/trans/main/welcwp29.html, then on the left menu click on "Agreements and Regulations", then on "UN Regulations (1958 Agreement)", then on "UN Regulations (Addenda to the 1958 Agreement)", and then finally you will see the choice for: Regs 0-20, Regs 21-40, Regs 41-60, Regs 61-80, Regs 81-100, Regs 101-120, Regs 121-140, Regs 141-160. Here is an information leaflet from Philips from 2011-11-22 about its ebike headlamp: It says that the lamp conforms to ECE R113 class C and can be used as a low beam by any 2-wheeled vehicle at any speed. So I just need to look for class C headlamps to know what e-bike headlamps need to conform to? Or did Philips set out to produce something more than just an ebike lamp from the start? What are other ebike lamps validated as? Need to look into this... It's a pity I never got my hands on one of these ebike lamps to see how good the light distribution is and to see what the light above the horizon does for e.g. lighting up traffic signs. From the R113 document: R113 means 'regulation no. 113'.
Starting to read: ECE R113 is about headlamps with exchangeable bulbs, for motorised vehicles. Somewhere else on the web I found that LEDs are considered exchangeable bulbs in ECE rules; I presume the exact LED type is specified, possibly also with the PCB used in those (car) headlamps? There are 4 categories of headlamp: A, B, C, D. Bulbs must be approved according to regulation 37 (so they mean ECE R37?). Headlamps of category A, B: any bulbs may be used as long as the headlamp produces less than 600 lm. Headlamps of category C, D: any bulbs may be used as long as the headlamp produces less than 2000 lm. What are classes A, B, C, D? In the table for class C, D are mentioned D > 125cc, C ≤ 125cc, so it seems motorcycles... Are all vehicles in R113 two-wheeled? I saw R112 mentioned on the web, which is presumably for cars and non-symmetrical light. Measurement is done at 25 m. Now we come to the interesting bits: Pictures of the points/lines etc. to come. Note to compare to StVZO: 1. StVZO allows for bicycles at most 2.0 lux above the horizon at 10 m, so the lux values should be multiplied by a factor of (25m/10m)^2 = 6.25. So 0.32 lux as below conforms to 2.0 lux in the case of bicycles... 2. StVZO allows for cars at most 1.0 lux above the horizon (measured at 25 m as in ECE). Class C & D: Fooking hell! They went crazy in defining a ridiculous number of test zones/points! This will take some time to digest. Perhaps it looks worse than it is :) For class C & D: Light distribution must be as smooth as possible in zones 1, 2, 3. Minimal lux values: class B, C: 32 lux, class D: 51.2 lux. Maximal lux values: class B: 240 lux, class C, D: 180 lux. Further analysis in progress. Last modified: 2015-2-2
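The distance conversion used in the StVZO comparison above is just the inverse-square law; a quick sketch of the arithmetic (values taken from the notes above):

```python
def scale_illuminance(lux, from_dist_m, to_dist_m):
    """Convert an illuminance limit measured at one distance to another
    distance using the inverse-square law: E2 = E1 * (d1/d2)**2."""
    return lux * (from_dist_m / to_dist_m) ** 2

# StVZO bicycle limit: at most 2.0 lux above the horizon, measured at 10 m.
# Expressed at the ECE measuring distance of 25 m:
print(scale_illuminance(2.0, 10, 25))   # 0.32 lux, i.e. factor (25/10)**2 = 6.25
```

So an ECE figure of 0.32 lux at 25 m corresponds to the 2.0 lux StVZO bicycle limit at 10 m, as stated in the note.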
OPCFW_CODE
Relationship between the material properties of an edge and the fringes behind this edge The double-slit experiment shows fringes on a screen. Closing one of the slits, there is still an interference pattern on the screen behind the slit. Making the slit wider, we still see fringes between the shadows and the exposed area. Even with single photons or electrons. So a single edge is enough to get an intensity distribution on a screen. On the other hand, using a polarisation filter it is possible to let through about 50% of the light (for equally distributed electric fields of the involved photons). Using a second filter, rotated 90° to the first, no light (of suitable wavelength) goes through. The amazing fact is that using a third filter between the other two – best at 45° – some light goes through. That means that there has to be an influence of the slits. The slits rotate the photon's electric field. But slits are made from edges, and an edge (a term from geometry) in reality is some material. So, to be precise, there has to be an interaction between the photons and the material of the edges. In the experiment with electrons, an electrostatic potential changes the fringe dimensions: These pictures of the intensity distributions were made by G. Möllenstedt (not available in the English Wikipedia) and H. Düker in a biprism experiment: And an explanation was given, with the help of an electrolytic trough model (Elektrolytischer Trog), of why the intensity distribution changes: According to Möllenstedt, Düker, Beobachtungen und Messungen an Biprisma-Interferenzen mit Elektronenwellen, Zeitschrift f. Physik, Band 145, 1956, S. 377 In an experiment with light, does the use of different materials for the slit plate, or the use of a material at different temperatures or different electrostatic potentials, change the fringe dimensions (widths and positions) too? Can you clarify the following in your question text? Which edges are you referring to?
What material are you referring to? The polarizer? I think that there are two different questions here. First of all, polarizers are not materials with slits in a given direction (it seems to me that you are suggesting this), at least not in the optical regime; they are materials that interact with light in such a way that they let pass light with a certain kind of polarization. Therefore, in the experiment with three polarizers what is happening is: let me call $\hat{x}$ and $\hat{y}$ the basis of the transverse plane. After the first polarizer we would have a field with polarization $\vec{E}_1 = E_0 \hat{y}$. The polarizer at $\theta=45^\circ$ would let pass the component of this field along its axis, which is the result of rotating the initial basis by this angle; in other words, in the second polarizer's basis, $\vec{E}_2^{rot} = E_0 \cos{45^\circ} ~ \hat{y}^{rot}$. Transforming back to the initial basis we would have $\vec{E}_2 = E_0 \cos{45^\circ} ( \cos{45^\circ} \hat{x} + \sin{45^\circ} \hat{y})$. Since the third polarizer is along the $\hat{x}$ axis, its output would be the x component of $\vec{E}_2$, i.e. $\vec{E}_3 = (E_0/2) \hat{x}$. To sum up, the polarizer in the middle of the other two rotates the initial polarization plane of the wave by taking its components in a rotated basis, and therefore permits a component normal to the original polarization plane, which would not happen otherwise.
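The algebra in the answer above can be checked numerically; a small sketch that projects the field onto each polarizer axis in turn (amplitudes in units of $E_0$, function names my own):

```python
import math

def project(ex, ey, angle_deg):
    """Project field (ex, ey) onto a polarizer axis at angle_deg from x,
    returning the transmitted field expressed back in the (x, y) basis."""
    a = math.radians(angle_deg)
    ux, uy = math.cos(a), math.sin(a)   # unit vector along the polarizer axis
    amp = ex * ux + ey * uy             # field component along that axis
    return amp * ux, amp * uy

# Start with E0 = 1 polarized along y (i.e. after the first polarizer):
ex, ey = 0.0, 1.0
ex, ey = project(ex, ey, 45)   # middle polarizer at 45 degrees
ex, ey = project(ex, ey, 0)    # final polarizer along x
print(ex)                      # approx 0.5 -> transmitted intensity E0^2 / 4

# Without the middle polarizer nothing passes the crossed pair:
ex2, ey2 = project(0.0, 1.0, 0)
print(ex2)                     # 0.0
```

The transmitted amplitude $E_0/2$ (intensity $E_0^2/4$) matches the $\vec{E}_3 = (E_0/2)\hat{x}$ result derived above, and removing the middle polarizer gives zero, as expected for crossed polarizers.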
About the two-slit experiment: as you suggest, different materials produce different results. If you heat up the air a lot you will see changes because of the change of the refractive index, but if you ask about changes only in the material, I would say that the only changes that could affect the experiment (apart from modifying the slits) would be to use materials with different transparency, because if you let pass a percentage of the light that would otherwise be blocked by a metallic surface, you will get interference not only between the light coming from the two slits but also with the background light that has passed through the areas of the material that were 100% reflective before. I don't quite buy the bit that the barrier material's only possible attribute is opacity. Rather, I should hope to be able to set up an experiment that will measure the beam in terms of its conservation. That means a reflective or photometric material would affect the result whether conservation is measured explicitly or implicitly, i.e. not at all.
STACK_EXCHANGE
When running the installation program, follow the instructions on the screen. Further, presentation of the results of your work will be impressive and easy to understand. Runoff Coefficient Calculations: Overlay land use or zoning data to determine area-weighted runoff coefficients C for use with the Rational Method. The modified version adds several new features, including more distributions, correlated random variables, sensitivity analysis, and the ability to run user-defined macros during simulation. A Monte Carlo simulation software solution for dynamically modeling complex systems in business, engineering and science. The Landsat and Landsat-like satellite images acquired from different sensors at different spatial resolutions and projections can be re-projected, co-registered, and orthorectified to the same projection, geographic extent, and spatial resolution using a common base image. This state-of-the-art model is extraordinarily powerful, capable of providing realistic modeling of complex three-dimensional emergency releases. Because there are a lot of research projects and publications related to them. It incorporates the effects of a number of important physical processes on the water body, including: currents, tides, winds, gravitational and Coriolis forces, bathymetry, friction, sources and sinks, and chemical reactions. Some of these are: lateral interpolation of W. This whole-farm model provides a tool for evaluating the long-term performance, economics, and environmental impacts of production systems over many years of weather. Drinking Water Parameter Database available to customers. The quantity and quality of water can both be modeled with ease from this comprehensive tool. Federal, state, and Indian tribal natural resource trustees can use the procedures in the model to develop a claim for compensation from potentially responsible parties.
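As an aside, the area-weighted runoff coefficient mentioned above for the Rational Method is a simple weighted average over land-use areas; a short sketch with hypothetical land-use figures:

```python
def weighted_c(areas_and_cs):
    """Area-weighted runoff coefficient C for the Rational Method:
    C = sum(A_i * C_i) / sum(A_i)."""
    total_area = sum(a for a, _ in areas_and_cs)
    return sum(a * c for a, c in areas_and_cs) / total_area

def rational_q(c, i, a):
    """Rational Method peak runoff Q = C * i * A
    (US customary units: i in in/hr, A in acres, Q in cfs)."""
    return c * i * a

# Hypothetical catchment: 10 ac of rooftops (C = 0.9), 30 ac of lawn (C = 0.2).
c = weighted_c([(10, 0.9), (30, 0.2)])
print(c)                        # 0.375
print(rational_q(c, 2.0, 40))   # 30.0 cfs for a 2 in/hr design storm
```

The land-use areas and C values here are illustrative only; in practice they come from the overlaid zoning data the text describes.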
The system is both a tool and framework that provides an easy to use interface to a variety of environmental hydraulic models. These contaminant sources may be adjacent or offset from each other. Each model is supported through the Hydrologic Modeling Module with a completely integrated interface for parameter input, job control, and output review. The model interfaces allow you to view and edit model input parameters quickly and easily. Those files can be accessed at:. Plans include Kuno's and Green's numerical sequential sampling plans, Wald's sequential probability ratio test for binomial sampling plans and Fixed-sample-size binomial sampling. It simulates the movement and weathering of oil spills on open water and in the surf, with an emphasis on shoreline interactions. FloodWorks is a modular software package for real-time simulation and forecasting of extreme hydrological and hydraulic conditions within river basins, drainage systems and the coastal zone. One or more landfills, buried waste, spills, or disposal ponds can be modeled. Models water distribution and urban drainage networks. Any parameter such as hydraulic conductivity or rainfall intensity may be interpolated from a set of scattered data points to the grid. Water — Surface Water Modeling Software Surface Water Modeling Software Navigate Following is a list of Surface Water Modeling Software. The setup program provides a software wizard which guides the user through the setup process. The system allows the engineer or scientist to develop numerical grids, perform hydrodynamic simulations, conduct single constituent pollutant transport and multiple constituent eutrophication studies in a geographic context all from one application. Calcium solubility is also calculated for both calcite and aragonite. Because they are supported by big institutions and scientific communities. Why are these software the best? 
Problem: The nation deals daily with the problems of urban flooding, stream erosion, and non-point source pollution due to urban runoff, construction activities, hydrologic modifications, and forestry, mining, and agriculture practices. Each location includes 18 climate files created using 9 general circulation models and 2 projected emission scenarios. Designed for operational use in the control room, FloodWorks is particularly suitable for real-time flood forecasting, warning and management for river catchments and coastal areas. PhreeqcI is a Windows-based user interface that allows defining input, running simulations, and plotting results. Software and related material data and documentation are made available by the U. PondPack for Windows is a comprehensive urban storm water solution capable of modeling gauged and synthetic rainfall, runoff hydrographs, culverts, channels, pond sizing, outlet structures with tailwater effects, interconnected pond modeling, diversions, and tidal outfalls - just to name a few. Images are one of the four basic object types supported in the Map module. For a directory of free Surface Water Modeling Software, check out. The Penn State Hydrologic Modeling System is open source software, freely available for download at this site along with installation and user guides. Hydrologic Modeling Module: The Hydrologic Modeling Module, sometimes referred to as the Tree Module, is the center for hydrologic modeling input, execution and output review. More information is available at. The number of packages incorporated, together with its open source code, makes it useful for exploring the possibilities of modeling several types of problems, including the addition of a reactive model. The local control volume contains all equations to be solved and is referred to as the model kernel. The channel is well-mixed vertically and laterally. It can be efficiently used for hydrologic, hydraulic and storm drain modelling.
We have looked at various free water resources software, checked their documentation and analyzed their advantages and deficiencies to get this top 12. The Dispersant Planner is a computerized model developed for the planning and implementation of an effective dispersant application program. Watershed Modeling System is available as a free download for Windows. An integrated software solution for simulating flows in rivers, in channels and on floodplains. The natural processes of runoff and precipitation are stochastically Monte Carlo simulated, and the respective time series are balanced with monthly water use requirements and reservoir storage changes. Its modeling capabilities let engineers streamline the hydraulic analysis and design of pipes, pumps, open channels, weirs, orifices, culverts, and inlets.
OPCFW_CODE
Sponsor: Do you build complex software systems? See how NServiceBus makes it easier to design, build, and manage software systems that use message queues to achieve loose coupling. Get started for free. In my previous post, I explored how the words and language used by users of our system in our domain space can have different meanings based on their context. This establishes which services own which behavior and data. In this post, I'm going to explore why services are autonomous and how we can communicate between them. This blog post is in a series. To catch up, check out these other posts: - Context is King: Finding Service Boundaries - Using Language to find Service Boundaries - Focus on Service Capabilities, not Entities - 4+1 Architectural View Model Autonomy is the capacity to make an informed, uncoerced decision. Autonomous services are independent or self-governing. What does autonomy mean for services? A service is the authority over a set of business capabilities. It doesn't rely on other services. We are constantly in a push/pull battle between coupling and cohesion. High coupling ultimately leads to the big ball of mud. What's unfortunate is that the move to (micro)services with non-autonomous services that rely on RPC (usually via HTTP) hasn't reduced coupling at all. It's actually made the problem worse by introducing an unreliable network, turning the big ball of mud into a distributed big ball of mud. Prefer Messaging over RPC We want services to be autonomous and not rely on other services over RPC, to reduce coupling. One way to do this is to communicate state changes between our services with events. When Service A has a state change, we publish that event to our message broker. Any other service can subscribe to that event and perform whatever action it needs internally. The producer of the event (Service A) doesn't care about who may consume that event.
Services that don't Serve This may seem completely counter-intuitive, since the definition of a service is an act of assistance. However, an autonomous service does not want to assist other services synchronously via behaviors, but rather to expose to other services things that have happened to it via asynchronous messaging. An example of this in our distribution domain is in the form of the Sales service and the quantity on hand of a product. Does Sales need the quantity on hand of a product? Sort of. You could assume, without knowing this domain, that you do not want to oversell. However, in my experience in distribution, overselling isn't really a sales problem so much as a purchasing problem. Sales wants to know the quantity on hand of a product, as well as what purchasing has ordered from the vendor but has not yet received. This is called Available to Promise (ATP) and is used by sales to determine if they can fulfill an order for a customer. Another interesting point is related to quantity on hand. The quantity on hand that is owned by the warehouse service is still not really the point of truth for the real quantity on hand. Whatever the quantity on hand for a product is in a database isn't the truth. The real truth is what's physically in the warehouse. Products get damaged or stolen, and this isn't immediately reflected in the system. This is why physical stock counts exist, which end up as inventory adjustments in our warehouse service. If we're using RPC, for the Sales service to calculate ATP it would need to make synchronous RPC calls to: - Purchasing Service to get what purchase orders have not yet been received. - Warehouse Service to get the quantity on hand. - Invoicing Service to determine what other orders have been placed but not yet shipped. However, if we want our Sales service to be autonomous, it needs to manage ATP itself. It can do so by subscribing to the events of the other services.
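A minimal sketch of that event-driven bookkeeping (event and handler names here are illustrative, not from any real codebase or broker API):

```python
class SalesService:
    """Keeps its own Available-to-Promise (ATP) figure per product by
    reacting to events published by Purchasing, Warehouse and Invoicing."""

    def __init__(self):
        self.atp = {}  # product_id -> quantity available to promise

    def on_purchase_order_placed(self, product_id, quantity):
        # Purchasing ordered stock from a vendor: more can be promised.
        self.atp[product_id] = self.atp.get(product_id, 0) + quantity

    def on_inventory_adjusted(self, product_id, delta):
        # Warehouse stock-count correction (delta may be negative).
        self.atp[product_id] = self.atp.get(product_id, 0) + delta

    def on_order_invoiced(self, product_id, quantity):
        # Stock is now spoken for: less can be promised.
        self.atp[product_id] = self.atp.get(product_id, 0) - quantity

# Events arriving from the broker, in order:
sales = SalesService()
sales.on_purchase_order_placed("sku-1", 100)
sales.on_inventory_adjusted("sku-1", -5)
sales.on_order_invoiced("sku-1", 20)
print(sales.atp["sku-1"])  # 75
```

The point of the sketch: Sales answers "can we promise this order?" from its own state, with no synchronous call to any other service at decision time.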
Sales can manage its own ATP for a product by subscribing to the various events. When a purchase order is placed, it will increase the ATP. When inventory is adjusted, it will increase or decrease the ATP. And finally, when an order is invoiced, it will decrease the ATP. More on all of these topics will be covered at greater length in other posts. If you have any questions or comments, please reach out to me in the comments section or on Twitter.
OPCFW_CODE
Crash when using C# 9 records Installed product versions Visual Studio: 2019 This extension: 2.5.5 Description When generating a unit test for a record, the extension fails. Steps to recreate Create a record: using System; public record TestRecord(String TestProp); Right click file: create Unit Test boilerplate Press: Create unit test class Current behavior We get the error "could not find class or struct" Expected behavior Generated test class Guess this is https://github.com/RandomEngy/UnitTestBoilerplateGenerator/blob/1214531aab3ccde4349f283b5d24722d5355581f/src/Services/TestGenerationService.cs#L165 Thanks. Looks like it needs some work to keep up with the new C# features. Been a bit short on time recently so I don't know when I might get to this. But always accepting PRs. I could give it a try, but the unit tests aren't running on AppVeyor? Testing is done through a custom debug mode tool. After you launch in debug mode there's a "UTBG Self-test" option in the Extensions menu. That runs the extension on code in the Sandbox solution and compares against expected values. Unfortunately that gives me: System.Windows.Markup.XamlParseException HResult=0x80131501 Message=The type initializer for 'UnitTestBoilerplate.ViewModel.SideBySideDiffModelVisualizer' threw an exception.
Source=PresentationFramework StackTrace: at System.Windows.Markup.WpfXamlLoader.Load(XamlReader xamlReader, IXamlObjectWriterFactory writerFactory, Boolean skipJournaledProperties, Object rootObject, XamlObjectWriterSettings settings, Uri baseUri) at System.Windows.Markup.WpfXamlLoader.LoadBaml(XamlReader xamlReader, Boolean skipJournaledProperties, Object rootObject, XamlAccessLevel accessLevel, Uri baseUri) at System.Windows.Markup.XamlReader.LoadBaml(Stream stream, ParserContext parserContext, Object parent, Boolean closeStream) at System.Windows.Application.LoadComponent(Object component, Uri resourceLocator) at UnitTestBoilerplate.View.SelfTestDialog.InitializeComponent() in D:\github\304NotModified\UnitTestBoilerplateGenerator\src\View\SelfTestDialog.xaml:line 1

Inner Exception 1: FileNotFoundException: Could not load file or assembly 'DiffPlex, Version=<IP_ADDRESS>, Culture=neutral, PublicKeyToken=1d35e91d1bd7bc0f' or one of its dependencies. The system cannot find the file specified.

That's weird. Do you see DiffPlex in the NuGet packages list? Did you run a NuGet package restore? Do you see DiffPlex.dll in /src/bin/Debug after you build?

Yes, I did a restore (also retried) and the package/bin is there. For some reason it all won't work the first time, but it does after a few tries. 🤣

Locally I get spaces instead of tabs. Is that because of one of my VS2019 settings? See https://github.com/RandomEngy/UnitTestBoilerplateGenerator/pull/10

I think you need to temporarily change your VS settings to use tabs in order to run the self-test. It seems it's no longer picking up the project settings when doing the auto-format.

C# 9 record support is available in 2.5.7.
Cumulus Linux supports the ability to take snapshots of the complete file system, as well as the ability to roll back to a previous snapshot. Snapshots are performed automatically right before and after you upgrade Cumulus Linux using a package install, and right before and after you commit a switch configuration using NCLU. In addition, you can take a snapshot at any time. You can roll back the entire file system to a specific snapshot, or just retrieve specific files.

The primary snapshot components include:
- btrfs — an underlying file system in Cumulus Linux, which supports snapshots.
- snapper — a userspace utility to create and manage snapshots on demand, as well as to take snapshots automatically before and after running apt-get upgrade|install|remove|dist-upgrade. You can use snapper to roll back to earlier snapshots, view existing snapshots, or delete one or more snapshots.
- NCLU — takes snapshots automatically before and after committing network configurations. You can use NCLU to roll back to earlier snapshots, view existing snapshots, or delete one or more snapshots.

Install the Snapshot Package
If you are upgrading from a version of Cumulus Linux earlier than version 3.2, you need to install the cumulus-snapshot package before you can use snapshots.

Take and Manage Snapshots
Snapshots are taken automatically:
- Before and after you update your switch configuration with NCLU.
- Before and after you update Cumulus Linux by running apt-get upgrade|install|remove|dist-upgrade.

You can also take snapshots as needed using the snapper utility. For more information, run snapper --help.

View Available Snapshots
You can use both NCLU and snapper to view available snapshots on the switch. net show commit history only displays snapshots taken when you update your switch configuration; it does not list any snapshots taken directly with snapper.
To see all the snapshots on the switch, run the sudo snapper list command.

View Differences between Snapshots
To see a line-by-line comparison of changes between two snapshots, run the sudo snapper diff command. You can view the diff for a single file by specifying its name in the command. For a higher-level view (for example, to display only the names of changed, added, or deleted files), run the sudo snapper status command.

Delete Snapshots
You can remove one or more snapshots using NCLU or snapper. Take care when deleting a snapshot: you cannot restore a snapshot after you delete it. Snapshot 0 is the running configuration; you cannot roll back to it or delete it, though you can take a snapshot of it. Snapshot 1 is the root file system.

The snapper utility preserves a limited number of snapshots and automatically deletes older snapshots after the limit is reached. It does this in two ways. First, snapper preserves 10 snapshots that are labeled important; a snapshot is labeled important if it is created when you run apt-get. To change this limit, set NUMBER_LIMIT_IMPORTANT. Keep NUMBER_LIMIT_IMPORTANT an even number, as two snapshots are always taken, one before and one after an upgrade. (This does not apply to NUMBER_LIMIT, described next.) Second, snapper deletes unlabeled snapshots; by default, snapper preserves five of these. To change this limit, set NUMBER_LIMIT.

You can prevent snapshots from being taken automatically before and after running apt-get upgrade|install|remove|dist-upgrade by editing /etc/cumulus/apt-snapshot.conf.

Roll Back to Earlier Snapshots
If you need to restore Cumulus Linux to an earlier state, you can roll back to an older snapshot. For a snapshot created with NCLU, you can revert to the configuration prior to a specific snapshot listed in the output of net show commit history by running net rollback SNAPSHOT_NUMBER.
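For reference, the snapper invocations described in this section can be sketched as follows. The snapshot numbers (10, 11, 12) and the example file path are placeholders, and the commands are meant to be run on the switch itself:

```shell
sudo snapper list                                  # show all snapshots on the switch
sudo snapper diff 10..11                           # line-by-line diff between two snapshots
sudo snapper diff 10..11 /etc/network/interfaces   # diff for a single file (example path)
sudo snapper status 10..11                         # changed/added/deleted file names only
sudo snapper delete 11                             # remove a single snapshot
sudo snapper delete 10-12                          # remove a range of snapshots
```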
For example, if you have snapshots 10, 11, and 12 in your commit history and you run net rollback 11, the switch configuration reverts to the configuration captured by snapshot 10. You can also revert to the previous snapshot by running net rollback last.

If you provided a description when you committed changes, specifying a description rolls the configuration back to the commit prior to the specified description. For example, given the commit history shown above, net rollback description turtle rolls the configuration back to the state it was in when you ran net commit description rocket.

Roll Back with snapper
For any snapshot on the switch, you can use snapper to roll back to a specific snapshot. When running snapper rollback, you must reboot the switch for the rollback to complete. You can also revert to an earlier version of a specific file instead of rolling back the whole file system, or copy the file directly from the snapshot directory.

Configure Automatic Time-based Snapshots
You can configure Cumulus Linux to take hourly snapshots by enabling TIMELINE_CREATE in the snapper configuration.

Caveats and Errata
You might notice that the root partition is mounted multiple times. This is due to the way the btrfs file system handles subvolumes, mounting the root partition once for each subvolume. btrfs keeps one subvolume for each snapshot taken, which stores the snapshot data. While all snapshots are subvolumes, not all subvolumes are snapshots.

Cumulus Linux excludes a number of directories when taking a snapshot of the root file system (and from any rollbacks):
- User data, excluded to avoid user data loss on rollbacks.
- The log file and Cumulus support locations, excluded from snapshots to allow post-rollback analysis.
- Temporary files; there is no need to roll back temporary files.
- The location where third-party software is typically installed.
- Data for HTTP and FTP servers, excluded to avoid server data loss on rollbacks.
- The directory used when installing locally built software, excluded to avoid reinstalling this software after rollbacks.
- Mail spools, excluded to avoid loss of mail after a rollback.
- The default directory for libvirt VM images. In addition to excluding this directory from the snapshot, Copy-On-Write (COW) is disabled for this subvolume, as COW and VM image I/O access patterns are not compatible.
- The GRUB kernel modules, which must stay in sync with the GRUB kernel installed in the master boot record or UEFI system partition.
Hi John & Gene!

About the spam problem: I get around it by using 4 different accounts, a Hotmail, a Yahoo, an Earthlink (my provider), and an Army one. Like you, John, I use my Hotmail account primarily for family. I learned that any time you request a catalog or download anything from CNET (or anything free, for that matter), you are going to get spam. When I was deployed, I lost internet access for about 5 months, which turned out to be a good thing, Hotmail-wise, as the spammers all got bounced out and quit trying to send me more messages. I was up to something like 30-40 a day. So I re-activated my account and voila: spam free, and all filters turned on. Same with Yahoo; it's free, has 10 megs, I can order catalogs online and use it for all my commercial traffic, and I think they have a pretty good filtering system, for free. I like free... free usually means spammers, and once in a while something sneaks past all the filters and into my inbox, and I report it, and that's it, never again. But I do get about 8-10 in my 'bulk' folder. I read the subject lines just in case one is from a list member, and if it is, I just add their address in. Easy, semi-simple, and free. My other accounts I use for work, newsgroups, and letters, but I'm going to lose the Earthlink account when I move next month and switch to a satellite system, as I'm moving to a more rural area. This was kinda long-winded (must be from the caffeine), but the solution: get another account, transfer your addresses, and let the old one die off. Don't use it, clean it out, or anything; let it bounce the spammers back to the rocks they crawled out from. Then re-open it after it reaches its limits... worked like a charm for me.

On-topic: I'm enjoying a 3-way blend after roasting 4 lbs with my new heat gun this week. I bought a Melitta 1-cup filter and cup from the DAV (Disabled Vets store) in town; really nice people there, and the money goes to a good cause.
Even if I don't need anything, I feel compelled to buy something... and I picked up 3 popcorn poppers for $2 each (as backups, you know) over the last month; I'm stockpiling them. I noticed something that escaped me before: I had my grinder set for espresso earlier, and ground without looking. I was not about to waste it, so I put it in a paper filter that I found in the back of my cupboards. I boiled some water on the stove, and when I added it to the open filter, slowly of course, it expanded a lot (the grounds poofed up). I then added the rest of the water, and it was the best coffee I had in weeks. Surprise, surprise, surprise... Now I think I'll get a French press, because I finally got hit over the head with what I've been missing, taste-wise, in my coffee.

From: Wandering John <John.S.Abbott>
Subject: Re: +Off Topic - SPAM

I shut down my personal domain about three weeks ago. I was getting 20 to 30 an hour on weekends. I have opened several Google mail accounts: this one for the coffee list, another for the rest of the world, and a third just for family. What amazes me is that I have NEVER used my Direcway account except to tell family the domain server was offline. In the past week I've received 25 pieces of spam on that account!! They all take the form "Re: " where the number changes. The sender is different for most of them. And good old Direcway (the most expensive way to the Internet) doesn't offer any spam filtering!

John - loving a pork free life in the slow lane

On Sat, 18 Sep 2004 14:39:43 -0500, Gene Smith wrote: <Snip>

The easiest way to find something lost around the house is to buy a replacement.

Gary
A Light Introduction to Text Analysis in R
Brian Ward
May 3

Working with Corpora, Document-Term Matrices, Sentiment Analysis, etc.

Introduction
This is a quick walk-through of my first project working with some of the text analysis tools in R. The goal of this project was to explore the basics of text analysis, such as working with corpora, document-term matrices, and sentiment analysis.

Packages used: tm, SentimentAnalysis, syuzhet. Others: tidyverse, SnowballC, wordcloud, RColorBrewer, ggplot2, RCurl.

Quick Look at the Data Source
I am using the job descriptions from my latest web-scraping project, which gathered about 5,300 job postings pulled from Indeed. We are going to focus on the job descriptions here, as they contain the most text and information. Let's take a look at our first job description to see what we're working with.

postings1$job_description

As you can see, it is a large string containing all of the text from the job listing.

Creating a Corpus
A corpus (pl. corpora) is just a format for storing textual data that is used throughout linguistics and text analysis. It usually contains each document or set of text, along with some meta attributes that help describe that document. Let's use the tm package to create a corpus from our job descriptions.

corpus <- SimpleCorpus(VectorSource(postings1$job_description))
# And let's see what we have
View(corpus)

You can see that our outermost list is of type list, with length 5299, the total number of job descriptions (or documents) we have. When we look at the first item in that list, we see that it is also of type list, with length 2. If we look at these two items we see there is content and meta. content is of type character and contains the job description text as a string. meta is of type list, with a length of 7. These are the 7 meta attributes that are automatically added to the simple corpus, even though I did not have any values for them.
- author = empty
- datetimestamp = another list, but empty for my data
- description = empty
- heading = empty
- id = '1' (automatically created by position)
- language = 'en' (the default of the tm package, I'm assuming)
- origin = empty

And there you have it; that's the general format of a simple corpus. Keep in mind that you can edit the meta attributes to include whatever you want.

Transformations: Cleaning our Corpus
Transformations in the tm package refer to the pre-processing or formatting of the text that we might want to do before any analysis. We are going to perform 5 quick transformations that will prepare our data for the analysis.

# 1. Stripping any extra white space
dfCorpus <- tm_map(dfCorpus, stripWhitespace)
# 2. Transforming everything to lowercase
dfCorpus <- tm_map(dfCorpus, content_transformer(tolower))
# 3. Removing numbers
dfCorpus <- tm_map(dfCorpus, removeNumbers)
# 4. Removing punctuation
dfCorpus <- tm_map(dfCorpus, removePunctuation)
# 5. Removing stop words
dfCorpus <- tm_map(dfCorpus, removeWords, stopwords("english"))

Most of these transformations are self-explanatory except for the stop-word removal. What exactly does that mean? Stop words are basically just common words that were determined to be of little value for certain kinds of text analysis, such as sentiment analysis. You can see the list of stop words that the tm package will remove with stopwords("english").

Now that we have transformed our job descriptions, let's take a look at our first listing again to see what has changed.

corpus[[1]]$content

Stemming
Stemming is the process of collapsing words to a common root, which helps in the comparison and analysis of vocabulary. The tm package uses the Porter stemming algorithm to complete this task. Let's go ahead and stem our data.

dfCorpus <- tm_map(dfCorpus, stemDocument)

And now let's take a look at our job description one last time to see the differences.

corpus[[1]]$content

Great, now all of the job descriptions are cleaned up and simplified.
Creating a Document-Term Matrix (DTM)
A document-term matrix is a simple way to compare all the terms or words across each document. If you view the data simply as a matrix, each row represents a unique document and each column represents a unique term. Each cell in that matrix is an integer count of the number of times that term was found in that document.

DTM <- DocumentTermMatrix(corpus)
View(DTM)

As you can see, the DTM is not actually stored as a matrix in R, but is of type simple_triplet_matrix, which is just a more efficient way of storing the data. You can get a better idea of how these are formatted here. For our purposes, it's better to think of it as a matrix, which we can see with the inspect() function.

inspect(DTM)

So, we can see that we have 5296 documents (three NAs were removed) with 41735 terms. We can also see an example matrix of what the DTM looks like. Now let's take a look at what the most frequent words are across all of the job postings.

Creating a Word Cloud of the Most Frequent Terms
To do this we are going to first convert the DTM into a matrix so that we can sum the columns to get a total term count across all of the documents. I can then pick out the top 75 most frequent words throughout the entire corpus. Note: I chose to use a non-stemmed version of the corpus so that we would have the full words for the word cloud.

sums <- as.data.frame(colSums(as.matrix(DTM)))
sums <- rownames_to_column(sums)
colnames(sums) <- c("term", "count")
sums <- arrange(sums, desc(count))
head <- sums[1:75,]
wordcloud(words = head$term, freq = head$count, min.freq = 1000,
          max.words = 75, colors = brewer.pal(8, "Dark2"))

So, nothing too crazy here, but we can get a good sense of how powerful this tool can be. Terms like support, learning, understanding, and communication can help paint a picture of what these companies are looking for in a candidate.
Sentiment Analysis
"Sentiment (noun): a general feeling, attitude, or opinion about something" (Cambridge English Dictionary)

Sentiment analysis is simple in its goal but complicated in the process of achieving that goal. Sanjay Meena has a great introduction worth checking out: "Your Guide to Sentiment Analysis" on Medium.

We are going to start out using the SentimentAnalysis package to do a simple polarity analysis using the Harvard-IV dictionary (General Inquirer), which is a dictionary of words associated with positive (1,915 words) or negative (2,291 words) sentiment.

sent <- analyzeSentiment(DTM, language = "english")
# We're going to select just the Harvard-IV dictionary results
sent <- sent[, 1:4]
# Organizing it as a data frame
sent <- as.data.frame(sent)
# Now let's take a look at what these sentiment values look like
head(sent)

As you can see, each document has a word count, a negativity score, a positivity score, and an overall sentiment score. Let's take a look at the distribution of our overall sentiment.

summary(sent$SentimentGI)

Okay, so overall our job descriptions are positive. The minimum score across all of the documents was 0.0, so it looks like the companies were doing a good job writing their job descriptions. Now, for fun, let's take a look at the top and bottom 5 companies based on their sentiment scores.

# Start by attaching the other data, which has the company names
final <- bind_cols(postings1, sent)
# Now let's get the top 5
final %>% group_by(company_name) %>% summarize(sent = mean(SentimentGI)) %>% arrange(desc(sent)) %>% head(n = 5)
# And now let's get the bottom 5
final %>% group_by(company_name) %>% summarize(sent = mean(SentimentGI)) %>% arrange(sent) %>% head(n = 5)

And there ya go. Now, this is obviously not a great use case for sentiment analysis, but it was a good introduction to understanding the process.
Emotions
One more fun thing we can do is pull emotions out of the job descriptions. We will do this with the syuzhet package using the NRC emotion lexicon, which relates words with associated emotions as well as a positive or negative sentiment.

sent2 <- get_nrc_sentiment(postings1$job_description)
# Let's look at the corpus as a whole again
sent3 <- as.data.frame(colSums(sent2))
sent3 <- rownames_to_column(sent3)
colnames(sent3) <- c("emotion", "count")
ggplot(sent3, aes(x = emotion, y = count, fill = emotion)) +
  geom_bar(stat = "identity") +
  theme_minimal() +
  theme(panel.grid.major = element_blank()) +
  labs(x = "Emotion", y = "Total Count") +
  ggtitle("Sentiment of Job Descriptions") +
  theme(plot.title = element_text(hjust = 0.5))

We already know that the job descriptions are mostly positive, but it is interesting to see trust and anticipation with higher values as well. It is easy to see how this could be applied to other types of data, such as reviews or comments, to distill large sets of textual data into quick insights.

Thanks for reading. I hope this walk-through might help some other beginners trying to get started with some of R's text analysis packages. I would love to hear any questions or feedback, as I am just getting started myself.
Add support for Docker secrets

Docker secrets is a Docker swarm feature. It mounts a file named /run/secrets/<secret_name> (by default) in the container, which contains the secret's value. The file can then be used by the container entrypoint to access sensitive configuration information such as passwords or keys, without storing them in an image layer, an environment variable, or the Docker stack file. Docker secrets can be managed manually through docker secret commands, or can be set to point to a file on the host machine.

Using Docker secrets for storing the API key would make it possible to remove that sensitive information from the stack file. Storing the API key in the stack file is a problem for people who want to version their stack file, which is a common practice in gitops organizations.

It would require making the entrypoint (run.sh) able to read the API key from a file rather than from an environment variable. Image writers usually do this by using another environment variable such as SEQ_API_KEY_FILE.

Official documentation on Docker secrets: https://docs.docker.com/engine/swarm/secrets/
Example Docker image supporting Docker secrets: https://hub.docker.com/_/mariadb/

Thanks for the suggestion 👍

Hey, I've done more testing on this topic and I came across an important piece of security information. When configuring a Docker container to use the gelf logging driver, the communication with the graylog endpoint comes from the Docker engine (at host level) and not from inside the container. The same applies to Docker swarms: logging is done by the node rather than the swarm.
This has multiple consequences:

- You can't access a sqelf container over an overlay network / using a container name, even if sqelf runs in Docker.
- You don't need to attach your source containers to a common network with the sqelf container to allow logging.
- Your sqelf container must be accessible from the host level, which you can do by binding port 12201 to the host.
- Since the graylog protocol has no authentication feature, the sqelf container must not be accessible from the network. You'll probably end up binding -p "<IP_ADDRESS>:12201:12201/udp".

This, purposely, can't be done with Docker swarm services. Swarm services are spread across multiple nodes and communicate on a virtual (overlay) network. You can't bind a swarm service to the lo interface, because if you were able to do so, the container would be unable to communicate with the rest of the swarm: you can't tell which part is hosted on the same node and which part was moved to another one. This would, in turn, cause problems with the Docker swarm load-balancing features. Docker swarm services are forcefully bound to the overlay network, either internal with no access from the hosts, or publicly accessible from the world.

Solving this problem is actually very simple. The graylog endpoint has to be thought of as an infrastructure concern. Sqelf must not run on the Docker swarm, which should host only business services. You would just deploy a non-swarm Docker stack to the local Docker engine, with a lo port binding on a sqelf container. You would configure all your Docker containers, including swarm services, to log with gelf towards udp://localhost:12201, which would point to your node's local sqelf instance. Whenever your container is moved across the swarm to another node, there would still be a locally accessible gelf instance listening on the same relative endpoint. The node's local sqelf instance would then forward logs to your Seq ingress port using the local API key authentication.
Each node could have a different API key, with custom filters, additional tags, etc. On the other hand, whether or not you run Seq on the swarm is up to you; only sqelf has to be local.

The conclusion of all this is: you can't use sqelf with Docker secrets (because secrets are a swarm feature). Loading the API key from a file is still a nice feature, I think.

Thanks for all of your input on this. We're open to revisiting, but will close this now because we're not expecting to push forward with it in the near future. Thanks!
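For what it's worth, the SEQ_API_KEY_FILE convention discussed above could look roughly like this in an entrypoint. This is a sketch, not the actual run.sh; the function name is invented, and the same pattern is what images like the official mariadb one use for their *_FILE variables:

```shell
#!/bin/sh
# If SEQ_API_KEY_FILE points at a readable file (e.g. a mounted
# /run/secrets/... entry), load the API key from it instead of
# requiring SEQ_API_KEY to be set directly in the environment.
load_seq_api_key() {
  if [ -n "${SEQ_API_KEY_FILE:-}" ] && [ -r "$SEQ_API_KEY_FILE" ]; then
    SEQ_API_KEY="$(cat "$SEQ_API_KEY_FILE")"
    export SEQ_API_KEY
  fi
}

load_seq_api_key
```

With this in place, a stack file would carry only the secret reference, and the key's value would never appear in versioned configuration.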
I’ve been primarily a sysadmin my whole career, and I’ve been on a skills-modernization journey over the past 18 months, extending my existing sysadmin skills to include automation and networking. Chances are, if you follow me on social media, your interests are similar.

Skills Modernization for the VI Admin – Redux [APP1349]
This is a Tech+ Session. Building on my skills modernization VMworld 2020 session, this is a demo of how I expanded my skills using the VEBA project. The session is designed to be 15 minutes of presentation and 15 minutes of Q&A. I will show how I am using Kubernetes and Knative to write event-driven PowerCLI functions. Learn a bit about VEBA, Kubernetes, and Knative, and discover how you can expand your skillset beyond the traditional sysadmin.

Tales from the Trenches – Real-World VMware Cloud on AWS Migrations [CODE2742]
This session is open to anyone. In this session, my colleague Steve Barron and I present two Flings: SDDC Import/Export for VMC on AWS and PyVMC. We will demonstrate real-world use cases for the Flings, showcasing ways our customers have leveraged them in their migrations to VMware Cloud on AWS.

In this session, my colleague Michael Fleisher and I demonstrate a potential career progression from sysadmin: the Site Reliability Engineer (SRE). I’m a Windows person, and Michael is all Mac. We will show you how to get your laptops ready to start learning the #1 skill any SRE needs: how to interact with APIs.

VMware Cloud on AWS Guided Workshop [GWS-HOL-2284-01-HBD]
These are Tech+ Sessions. I will be delivering some of these sessions: free hands-on labs with a VMware expert to guide you. This HOL gives you an introduction to VMware Cloud on AWS. Drop in if you want to see the fastest way to get your on-prem VMs into the cloud.

For my final suggestion, I’m steering you toward the world of networking (I am not delivering this one).
VMware NSX Advanced Load Balancer (Avi Networks) – Getting Started [GWS-HOL-2237-01-NET]
This is a Tech+ Session. I just started learning Avi and blogging about it. If you’re looking to expand beyond sysadmin knowledge, you might want to take this guided workshop. Every enterprise application is load balanced; understanding how load balancing works is critical. This workshop starts you off learning how to use the NSX Advanced Load Balancer (formerly known as Avi). Enjoy VMworld 2021!
Is the likelihood a valid statistic to assess a p-value?

The p-value is defined as the probability, under the assumption of hypothesis H, of obtaining a result equal to or more extreme than what was actually observed (Wikipedia). By "a result" is meant a particular statistic. For example, if I want to fit a distribution to data ($N$ items), I would choose the parameters that maximize the likelihood. Then, if I want to compute a p-value, I can use the Kolmogorov-Smirnov (KS) distance: I compute it between the CDF of the empirical distribution and the CDF of the fit; then I sample datasets of $N$ items from the fitted distribution, fit them, and compute the KS distance between each sample's empirical CDF and the CDF of its own fit. Finally, I estimate the probability that the KS distance of the data-fit pair is bigger than the KS distances from the samples, which would be the p-value.

Now, the KS distance is a distance between two distributions. Can I use the Kullback-Leibler divergence instead of the KS distance? In this case, the statistic would be the likelihood itself. So I would sample from the fitted distribution, apply maximum likelihood, and estimate the probability that the likelihood of the data with respect to the model is as extreme as the likelihood of the sampled datasets with respect to their models. What would be the problem with this approach? Is the $\chi^2$ analysis the same thing for Gaussian-distributed data? Can I do the same for other distributions on the basis of Wilks' theorem? If this is not clear, please tell me; I can expand/explain.

You lost me at computing the p-value: that's not how it is done when you fit data using maximum likelihood. You have abruptly changed your procedure, suddenly abandoning one approach (with its set of assumptions) and replacing it with another (with a completely different set of assumptions) that is, BTW, invalid. Given that you are mixing and matching such different and incompatible approaches, I can't imagine what you are referring to by a "$\chi^2$ analysis" or the application of Wilks' theorem.
To compute the p-value, shouldn't I sample the statistic that I measure (in this case the KS distance), given that the model is true? I think the approach has not changed; can you explain which would be the two approaches?

ML does not have either a null hypothesis or a statistic. It can be used for hypothesis testing when comparing ML fits of nested models. In any event, ML assumes a finitely parameterized distribution (it is "parametric"), whereas KS makes no such restriction (it is "nonparametric"). It sounds a little like you would like to run a goodness-of-fit test for an ML estimate, but your references to p-values, $\chi^2$ analysis, and so on make it hard to be sure.

Yes, so I would propose a parametric distribution and estimate its parameters through ML. Then I make a hypothesis test, where the null hypothesis is that the data is sampled from the distribution with the estimated parameters. In order to do that, I compute the p-value of this hypothesis. Do you think this is not a valid approach?
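The parametric-bootstrap procedure described in the question (fit by ML, measure the KS distance to the fit, then compare against KS distances of datasets drawn from, and refit to, the fitted model) can be sketched as follows. The normal family is an illustrative assumption, and only NumPy and the standard library are used:

```python
import math
import numpy as np

def normal_cdf(x, mu, sigma):
    # Normal CDF via the error function (no SciPy needed).
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_distance(data, mu, sigma):
    """KS distance between the empirical CDF of `data` and a
    Normal(mu, sigma) CDF, checking both sides of each step."""
    x = np.sort(data)
    n = len(x)
    cdf = np.array([normal_cdf(v, mu, sigma) for v in x])
    upper = np.max(np.arange(1, n + 1) / n - cdf)
    lower = np.max(cdf - np.arange(0, n) / n)
    return max(upper, lower)

def parametric_bootstrap_ks_pvalue(data, n_boot=200, seed=0):
    """Goodness-of-fit p-value as described above: refit each
    bootstrap sample so the statistic's null distribution accounts
    for the parameters having been estimated from the data."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    # ML estimates for a normal model: sample mean and (biased) sd.
    mu, sigma = data.mean(), data.std()
    d_obs = ks_distance(data, mu, sigma)
    exceed = 0
    for _ in range(n_boot):
        sample = rng.normal(mu, sigma, size=len(data))
        d = ks_distance(sample, sample.mean(), sample.std())
        if d >= d_obs:
            exceed += 1
    # Add-one smoothing keeps the estimate away from exactly zero.
    return (exceed + 1) / (n_boot + 1)
```

Note the refitting inside the loop: comparing against the plain KS null distribution (as if the parameters were fixed in advance) would make the test conservative, which is the same issue the Lilliefors correction addresses for the normal case.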
In my last post I showed you one view of the Apache Software Foundation: the relationships between projects, as revealed by the overlapping membership of their Project Management Committees (PMCs). After I did that post, it struck me that, with a very small modification to my script, I could look at the connections at the individual level instead of at the committee level. Initially I attempted this with all committers in the ASF. This resulted in a graph with over 3000 nodes and over 2.6 million edges. I'm still working on making sense of that graph; it is very dense, and visualizing it as anything other than a giant blob has proven challenging. So I scaled back the problem slightly and decided to look at the relationships between individual members of the many PMCs, a smaller graph with only 1577 nodes and 22,399 edges. Here's what I got:

As before, I excluded the Apache Incubator, Labs, and Attic, but looked at all other PMC members. Each PMC member is a dot in this graph, with a line connecting any two people who serve together on a PMC. The layout and colors emphasize communities of strong interconnection. An SVG version of the graph is here.

Each PMC is a "clique", a group that strongly interacts with itself. But aside from a small number of exceptions, which you can see at the top of the graph, each PMC has one or more members who are also members of other PMCs. In structural terms they are "between" the communities and help connect them. This could mean various things in social terms, from acting as a conduit of information, to a broker, or even a gatekeeper. The person who introduces you to new people at a party serves the same role as the person who tells the prisoner stories of the outside world. The context is different, of course, but in either case the structural position is one of importance.
A common way of quantifying the importance of the nodes that connect other nodes is via a metric called “betweenness centrality”, which you can think of as a measure of how many shortest paths between other nodes pass through a given node. If the shortest path is always going through you, then you have high betweenness and you’re helping to connect the disparate parts of the organization. Let’s draw the graph again and show each node with a size proportional to its betweenness. You can see more clearly now the position of the high-betweenness nodes and how they bridge sub-communities. Now of course, the structural role doesn’t necessarily equate to the actual social role. Someone could be inactive or lurking in multiple projects and not serve as a conduit of much of anything, though on paper they appear central. But Apache participants might take a look at this larger version of the chart, where I have labeled the nodes, and see how well it matches reality in many ways.
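Betweenness can be computed directly from the shortest-path definition above. Here is a brute-force Python sketch (my own, standard-library only) on a toy graph of two four-person “PMCs” that share a single member:

```python
from collections import deque
from itertools import combinations

def betweenness(adj):
    """Brute-force betweenness centrality: for every pair (s, t), each
    intermediate node is credited with the fraction of shortest s-t paths
    that pass through it."""
    score = {v: 0.0 for v in adj}
    for s, t in combinations(adj, 2):
        # BFS from s, recording distances and shortest-path predecessors.
        dist, preds = {s: 0}, {v: [] for v in adj}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
                if dist[w] == dist[u] + 1:
                    preds[w].append(u)
        if t not in dist:
            continue  # disconnected pair contributes nothing
        # Enumerate all shortest s-t paths by walking the predecessor lists.
        paths = []
        def walk(v, tail):
            if v == s:
                paths.append([s] + tail)
            for p in preds[v]:
                walk(p, [v] + tail)
        walk(t, [])
        for path in paths:
            for v in path[1:-1]:  # interior nodes only
                score[v] += 1.0 / len(paths)
    return score

# Two four-person "PMCs" sharing a single member, node 0.
edges = [(a, b) for pmc in ([0, 1, 2, 3], [0, 4, 5, 6])
         for a in pmc for b in pmc if a < b]
adj = {v: set() for e in edges for v in e}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

scores = betweenness(adj)
```

Node 0, the shared member, collects all the credit (one unit for each of the 3x3 cross-clique pairs, so 9.0 here), while every other node scores zero, which is exactly the bridging role described in the post.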
OPCFW_CODE
/**
 * @fileoverview
 * These are the packages, and their exports that are included in `bento.js`
 * Extension `bento-*.js` binaries will use these exports as provided by
 * `bento.js` from the `BENTO` global.
 *
 * We specify each export explicitly by name.
 * Unlisted imports will be bundled with each binary.
 */

const types = require('@babel/types');
const {parse} = require('@babel/parser');
const {readFileSync} = require('fs-extra');
const {relative} = require('path');

// These must be aliased from `src/`, e.g. `#preact` to `src/preact`.
// See tsconfig.json for the list of aliases.
const packages = [
  'core/context',
  'preact',
  'preact/base-element',
  'preact/compat',
  'preact/component',
  'preact/context',
  'preact/slot',
];

/**
 * @param {string} source
 * @return {string[]}
 */
function getExportedSymbols(source) {
  const tree = parse(source, {
    sourceType: 'module',
    plugins: ['jsx', 'exportDefaultFrom'],
  });
  const symbols = [];
  for (const node of tree.program.body) {
    if (types.isExportAllDeclaration(node)) {
      throw new Error('Should not "export *"');
    }
    if (types.isExportDefaultDeclaration(node)) {
      throw new Error('Should not "export default"');
    }
    if (!types.isExportNamedDeclaration(node)) {
      continue;
    }
    symbols.push(
      // @ts-ignore
      ...(node.declaration?.declarations?.map(({id}) => id.name) ?? [])
    );
    // @ts-ignore
    symbols.push(node.declaration?.id?.name);
    symbols.push(
      ...node.specifiers.map((node) => {
        if (types.isExportDefaultSpecifier(node)) {
          throw new Error('Should not export from a default import');
        }
        if (types.isExportNamespaceSpecifier(node)) {
          throw new Error('Should not export a namespace');
        }
        const {exported, local} = node;
        if (types.isStringLiteral(exported)) {
          throw new Error('Should not export symbol as string');
        }
        if (local.name !== exported.name) {
          throw new Error(
            `Exported name "${exported.name}" should match local name "${local.name}"`
          );
        }
        return exported.name;
      })
    );
  }
  return symbols.filter(Boolean);
}

let sharedBentoSymbols;

/**
 * @return {Object<string, string[]>}
 */
function getSharedBentoSymbols() {
  if (!sharedBentoSymbols) {
    const backToRoot = relative(__dirname, process.cwd());
    const entries = packages.map((pkg) => {
      const filepath = require.resolve(`${backToRoot}/src/${pkg}`);
      try {
        const source = readFileSync(filepath, 'utf8');
        const symbols = getExportedSymbols(source);
        return [`#${pkg}`, symbols];
      } catch (e) {
        e.message = `${filepath}: ${e.message}`;
        throw e;
      }
    });
    sharedBentoSymbols = Object.fromEntries(entries);
  }
  return sharedBentoSymbols;
}

module.exports = {getExportedSymbols, getSharedBentoSymbols};
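The export-scanning technique above translates to other languages. As an illustrative analogue (my own sketch, not part of this codebase), here is the same idea in Python using the stdlib `ast` module: collect the top-level names a module defines, and reject wildcard imports much as the JS version rejects `export *`:

```python
import ast

def get_exported_symbols(source: str) -> list[str]:
    """Collect the top-level names a module defines, analogous to the
    named-export scan above; wildcard imports are rejected the way the
    JS version rejects `export *`."""
    symbols = []
    for node in ast.parse(source).body:
        if isinstance(node, ast.ImportFrom) and any(a.name == '*' for a in node.names):
            raise ValueError('Should not "import *"')
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            symbols.append(node.name)
        elif isinstance(node, ast.Assign):
            # Only simple `name = ...` assignments, mirroring named exports.
            symbols.extend(t.id for t in node.targets if isinstance(t, ast.Name))
    return symbols

names = get_exported_symbols("VERSION = '1.0'\ndef parse(): pass\nclass Loader: pass\n")
```

As with the JS original, walking the parsed top-level statements rather than grepping the source means string literals and nested definitions can never be mistaken for exports.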
STACK_EDU
Nicole Godellas recently earned her PhD in Molecular and Integrative Physiology. Along the way, she joined the Beckman Institute’s outreach team and learned a few lessons about communicating science to the general public. I never anticipated joining a communications team during my graduate career. When I was a PhD student in the Department of Molecular and Integrative Physiology, I spent a lot of my time at the bench and in front of my computer. I always expected my journey to be quite linear: run experiments, analyze data, assemble manuscripts, repeat. However, this linear path took a detour when I interacted with Lexie Kesler, the Beckman Institute for Advanced Science and Technology’s outreach and communications specialist, and the rest of Beckman’s communications team. These interactions were initiated through the Graduate College’s Career Exploration Fellowship, which allows PhD students to explore atypical career paths. I facilitated science communications seminars, internal research presentation events, and K–12 programming with local schools. I was quickly hooked: what was supposed to be a temporary, semester-long interaction became an hourly position that extended through the rest of my time as a graduate student. My outreach work has undoubtedly shaped me into a better scientist. Here’s what I learned:

Style and substance go hand in hand

It is easy to get stuck in the rut of presenting your work to peers at a conference or to faculty members at a departmental seminar series. If it hadn’t been for my outreach work, I don’t know if I ever would have talked to a pre-K student about MRI or an eighth grader about non-Newtonian fluids. By understanding how and where to cut field-specific jargon, I have learned how to present not only my own work but also the general concepts of others’ research to individuals of all age groups and levels of expertise.
This is a skill set I'm still developing, but I have realized that the successful engagement of an audience should never be taken for granted.

An ‘Aha!’ moment goes a long way

Sometimes it only takes one interaction to influence the career trajectory of a young student. Think back to that “Aha!” moment that you may have had yourself. Sharing the ongoing research at the Beckman Institute and providing accessible programming for young audiences has shown me the lasting impact a single interaction can have on a student. Providing mentorship and guidance to those following in your footsteps is a beneficial way to give back, wherever you end up.

Focus on building bridges

Science can certainly be intimidating to non-scientists. And as a scientist myself, I know just how intimidating it can be to shape my work in a way that will grab the attention of an audience without any scientific background. Outreach at the Beckman Institute forges a bridge between researchers and the general public. To build this bridge, it was essential for me to connect with researchers and identify the core concepts of their work. Then, it was time to get the creative juices flowing. For example, what was the best way to explain the behavior of a non-Newtonian fluid to a group of middle schoolers? Making slime, of course. On the other hand, how could we effectively connect our researchers with the local community? Partnering with The Literary Book Bar in downtown Champaign for a research presentation about reading and the brain.

In conclusion ...

What I thought was a temporary interaction turned into one of the most influential experiences of my graduate career. Bridging the gap between the research world and the so-called real world was always something I was curious to explore. My work in outreach helped to turn that curiosity into reality. And, along the way, I’ve come to realize that it’s perfectly fine to take a few unexpected detours. Photo credit: Beckman Institute Communications Office.
OPCFW_CODE
In this article’s context, an enterprise refers to one whose main product is not software; rather, software plays a supporting role. A software enterprise might run its projects quite differently.

How agile is affecting project managers

With Agile becoming more popular, the role of a traditional project manager will undergo significant changes. To understand this, we need to look at the anatomy of an agile team. Agile promotes a self-organizing team consisting of a scrum master, product owner, devs and QAs. In this self-organizing team, many of the traditional project manager’s responsibilities will be undertaken collectively by the team.
- Planning: There is much less up-front planning in Agile. Planning is deferred until just before the start of each iteration, when the PO decides the next batch of most important user stories to be worked on. The whole team will break down user stories, estimate their sizes and schedule them accordingly.
- Project status tracking and reporting: This is built into the Agile practices, and is generally less formal. For example, the daily standup meeting and burn-down charts make the project status transparent.
- Resource planning: In an Agile team, the primary members of the team are dedicated for the duration of the project, so there is less need to do resource planning and allocation.
- Driving improvement: Agile has a quick feedback loop and stresses learning. At the end of each iteration, the team holds a retrospective to reflect on the good and bad of the last iteration, come up with improvement ideas and commit to them.
This doesn’t mean that with Agile there is no need for project managers. To understand why, we need to consider the disadvantages of Agile and the reality of an enterprise.

Disadvantages of Agile

I do not want to offend the enthusiasts who claim Agile is the best thing that has ever happened and that it can cure any dysfunctional team.
To avoid arguing in this regard, I admit the power of Agile and humbly propose that the way Agile is implemented today has the following disadvantages:

There is a risk that Agile teams will make local optimizations at the expense of global optimization. Agile teams strive to react to changes quickly, but not all changes should be welcomed. For example, the users an Agile team is working with may propose a change that improves their own work efficiency; but examined across the whole value-stream, this change might negatively impact other parts and result in an overall loss. The Agile team might be executing Agile practices perfectly, yet not be positioned to perceive the whole value-stream and understand the impact of their decisions on it. Lean has a principle called “mapping the value-stream” which can be applied to identify the big-picture value-stream (told you, I acknowledge the power of Agile), but that work is beyond the scope of an individual Agile team; some process should exist to support it.

Unpredictable project schedule and cost. This is not necessarily a bad thing. Traditional project management strives to deliver the scope within the cost and schedule, which we all know has many issues; those issues have led us to Agile. Agile, instead of striving for “doing the project in the right way”, strives for “delivering the right product”. Agile has short iterations; at the end of each iteration, users review and provide feedback. Based on the feedback, Agile teams make adjustments to scope and schedule, which will inevitably lead to schedule and cost changes. Changes in schedule and cost will also happen because of the following:
- Up-front planning is usually limited, but that doesn’t mean there should be no up-front planning. Without sufficient understanding of project requirements and risks, the project schedule and cost might be way off.
- If there is no adequate up-front architectural design, requirement changes might require extensive technical rework, ramping up project schedule and cost.
One might argue that the above causes of project schedule and cost deviation come down to bad Agile execution, which might be true, but pointing this out without suggesting a solution doesn’t help in running projects. The Agile methodology is not prescriptive; it won’t tell you how much up-front planning and design is sufficient. For an enterprise, it helps to have some supporting structure or process. Don’t fall into the trap of “if all you have is a hammer, everything looks like a nail”. Just because Agile is great doesn’t mean you should run every project as an Agile project. In the reality of an enterprise, it will be hard to find pure waterfall or pure Agile projects; most projects will be run somewhere between the two extremes. When choosing a project management approach, the factors that should be considered include:
- The uncertainty of requirements. Generally, the more uncertain the requirements, the more likely users are to change their minds, and the more Agile the project approach should be.
- The flexibility of cost and schedule changes. If there are rigid requirements on cost and schedule, strict control has to be exerted.
- The team’s capabilities. Agile is really, really hard. For me personally, I am still trying to figure out two things: how to do architecture in Agile and how to test in Agile. Agile calls for more high-skilled people. Just because you organize Scrum teams and have all the ceremonies of Scrum (daily standup, story grooming, etc.) doesn’t mean you are running Agile – in fact, a rigidly run Scrum is probably the worst way to start Agile.
- The regulatory requirements. If there are strict regulatory requirements, you might need to run the project in phases and have adequate documents.
- The scope and complexity of projects.
The more complex a project is, the more up-front planning and design is called for.
- The willingness or capacity of user involvement. If you can’t find users who are willing or able to participate in each iteration, you might need to come up with compromise ways to engage users.
- The commitment of upper management to bring about the culture changes that are necessary for Agile to flourish. Agile is a mindset shift, from following the plan rigidly to embracing changes and failures. Upper management must understand what they are walking into: Agile is not a silver bullet that will magically cure any dysfunctional project. To make Agile work, they need to know the trade-offs between Agile and control, they need to invest in training people, they need to break up the organizational silos to enable transparent communication and value alignment, and they need to learn to celebrate early failures.
For example, upgrading a server usually shouldn’t be run as an Agile project: it has a deadline (the upgrade must finish by the next weekend); a thorough impact analysis should be performed to understand who and what will be impacted; an emergency plan should be formed in case of upgrade failure; and the success criteria are usually clear. On the other hand, a project that emphasizes usability heavily might benefit from a more Agile approach. In a broad sense, choosing different project management approaches for different projects is more in the spirit of “being Agile”; rigidly following agile practices without consideration of context is definitely not.
The disadvantages of Agile and the reality of an enterprise do call for experienced project managers, but they will play a very different role than the traditional project managers. To understand this, we first need to understand the core value of Agile.
The core value of Agile and its implications

Agile has short iterations; in each iteration, the most valuable user stories are worked on, and at the end of the iteration, users review the work and provide feedback. Based on the feedback, agile teams reprioritize the backlog, reexamine features and start a new iteration. The whole point of this approach is to make sure teams are doing the right thing for the users, where the meaning of “the right thing” is defined by the users. Users are able to review the work early and make changes if what they see doesn’t satisfy them. Traditional project managers, by comparison, are focused on making sure the scope is finished within schedule and cost – the famous triangle. The core value of Agile is that it is “value-oriented”, compared with traditional project management, which is “process-oriented”. Companies everywhere are jumping onto the Agile bandwagon with mixed results; but even if nothing else sticks, “value-oriented” will stick, because what CEO doesn’t want to hear “we are using a methodology that will create value earlier”? The implications for a project manager are:
- He needs to focus on value. That doesn’t mean he should forgo schedule and cost; because value is seldom absolute, he needs to consider many factors to understand the optimal value.
- He needs to think systematically, to know how to maximize global value instead of achieving value locally.
- He needs to understand how changes impact value.
- He needs to find a way to consolidate data from projects run in different ways, and design reports that are consistent and easy to understand. It will be confusing to show reports saying this project has done 300 story points while that project is now in the QA stage; somehow, he needs to convert the data into a consistent form that is easy to understand for all stakeholders.
- He needs to exert a certain level of control without impacting the team’s agility.
- For him, the project might not end on the day it is released; he needs to continue to monitor and measure the value the project delivers in order to justify its existence and future investment.
Since the role of a traditional project manager is weakened significantly in an Agile team, the above implications could essentially mean that project managers need to move up to become program or portfolio managers. And with that move come other responsibilities, such as:
- Project selection
- Project prioritization
- Overall resource optimization
In short, an enterprise project manager will be facing a more complex and probably more chaotic environment. His goal, which before was to make sure projects were completed within the time and the budget, will change to creating maximum value, which requires a higher degree of systematic thinking.
OPCFW_CODE
Ever since Google Adwords dumped Gmane for reasons, Gmane hasn’t had any income. It doesn’t really matter that much, but I find it annoying. The Gmane web site has some traffic. About half a million page views per day. Surely there’s $$$ in that. Not that I like ads. I think ads are yucky. I signed up for Google Webmaster Tools to see if it had anything interesting to say. I’m going to digress a bit. Google sucks. Has anybody noticed? They make a bunch of things that are barely good enough, and then they don’t improve, in general. Take the Webmaster Tools signup process. Please. Gmane has a bunch of different sub-domains, and they all have to be signed up separately. Geez. So I signed them all up, which involves putting a special file in a special place on all the domains, which was kinda complicated, since none of the domains actually map onto actual files. But I got it done, and then removed the hacks to make it work. Uh-oh. If the files aren’t there, then they’re “unverified” after a while, so I stop getting stats. Anyway, I left the file in place for blog.gmane.org (by accident), so I’ve got some stats there: So, er… There are 15K queries. Which are… uhm… distinct things that people have asked and which have led to Gmane? Perhaps? And there are 4M “impressions”. “Displaying 60K”. Which means… OK, I had no idea, so I binged. After fifteen minutes, I’m still not sure, but some people seem to say that “impressions” means that Gmane was included in the search result on some page or other. The page displayed, perhaps? And “displaying 60K” is a mystery. It might be the things included in the list below, but I don’t know why they’re only displaying 60K, or what the criteria are. Then there’s “Clicks 200K”. So 200K people clicked on a link to Gmane? Again with the mysterious “Displaying”. Classic Google information quality: I love the Y axis here. Choosing 70K as the scale here instead of the more traditional 50K is a daring, avant-garde choice.
Or perhaps they just don’t care. Anyway, interesting as all this may be, it doesn’t really give me any money. So I signed up with the Amazon ad network. They have a wonderful product called “omakase”, where you just put the ad JS into your page, and then they spider your page and put relevant stuff on your page. Money will soon ensue! *a month passes* So let’s look at the relevant ads on gmane.linux.kernel now: Amazing. I didn’t know that so many Linux kernel hackers were into Jimmy Buffett and Jay-Z! But looking at the logs, I can’t really see any activity from the Amazon omakase spider, so perhaps it’s not so surprising after all. Anyway, it’s not that this is that important, but it’s… annoying.
OPCFW_CODE
2 issues related to scroll

Hi, I am facing 2 issues related to scroll:

1 - When the weekview has scrollType set to .pageScroll, there is a delay between page scrolls. For example, when I scroll to the previous week, I have to wait for the animation to end before being able to scroll again.

2 - When the weekview is scrolled to the top (at midnight), I can't scroll to the previous or next page; I have to scroll a little toward the bottom to be able to scroll to the previous or next week.

Is there a way to prevent these 2 issues? Regards

For the first issue, this is an existing issue. After refactoring the pagination logic, I found this issue: because only 3 pages are reused at the same time, you have to wait until the animation finishes before calling the reload. This is really annoying, I know; I will try to fix this one. For the second issue, I've had no time to try it yet. I will test it and get back to you.

For the first issue, loadPage() isn't triggered when the next page exists. It is because loadPage() isn't triggered on every drag, because of this row:

    if !decelerate { self.endOfScroll() }

If we remove the "if" and call self.endOfScroll() each time, loadPage() is called every time. Then I modified loadPagePageScroll like this:

    private func loadPagePageScroll() {
        let pageWidth = collectionView.frame.size.width
        let currentPage = Int((collectionView.contentOffset.x + pageWidth / 2) / pageWidth)
        if currentPage >= 1 {
            loadNextOrPrevPage(isNext: true)
        } else {
            loadNextOrPrevPage(isNext: false)
        }
    }

And it looks to work.

I'd prefer not to amend this

    if !decelerate { self.endOfScroll() }

because it may cause some pagination issues. I will look into this after the vertical scrollable range. It won't be too long.

I found a better way (still not perfect) to do it.
I modified these functions:

    open func scrollViewWillEndDragging(_ scrollView: UIScrollView, withVelocity velocity: CGPoint, targetContentOffset: UnsafeMutablePointer<CGPoint>) {
        // vertical scroll should not call paginationEffect
        guard let scrollDirection = self.scrollDirection, scrollDirection.direction == .horizontal else { return }
        paginationEffect(scrollView, withVelocity: velocity, targetContentOffset: targetContentOffset)
        perform(#selector(self.endOfScroll), with: nil, afterDelay: Double(0))
    }

    open func scrollViewDidEndDragging(_ scrollView: UIScrollView, willDecelerate decelerate: Bool) {
        // handle the situation of scrollViewDidEndDecelerating not being called
        if !decelerate { self.endOfScroll() }
    }

    // This function will be called when vertical scrolling ends
    open func scrollViewDidEndDecelerating(_ scrollView: UIScrollView) {
        //self.endOfScroll()
        self.scrollDirection = nil
    }

    /// Some actions need to be done when scroll ends
    @objc private func endOfScroll() {
        // vertical scroll should not load a page; handled in the loadPage method
        loadPage()
        self.scrollDirection = nil
    }

The problem is that perform(#selector(self.endOfScroll), with: nil, afterDelay: Double(0)) is triggered when the main thread finishes all actions, so there is still a delay when you want to scroll, but next or previous pages are created at the end of each scroll.

Any update on this issue?
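The page-index expression in loadPagePageScroll is just nearest-page rounding: adding half a page width before the integer division snaps the offset to the closest page boundary. A language-neutral sketch of that arithmetic (in Python, with hypothetical names, not code from this project):

```python
def current_page(offset_x: float, page_width: float) -> int:
    """Nearest page for a paged scroll view: offsets less than half a page
    past a boundary round down, offsets more than half a page round up."""
    return int((offset_x + page_width / 2) // page_width)
```

For example, with 320-pt pages an offset of 150 still maps to page 0, while 170 maps to page 1, which is why the Swift code above treats `currentPage >= 1` as "scrolled toward the next page".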
GITHUB_ARCHIVE
We’re pleased to announce that today we released a new version of the Vera Mobile app for Android, v. 7.40.354, with the following:

New features for Ezlo hubs (Ezlo Atom, Ezlo PlugHub):
- Local access version 1 - you can create scenes that are triggered by other devices and have the scenes run even if the Ezlo Atom or Ezlo PlugHub doesn’t have an internet connection. For example, you have a scene controller and push a button to run a scene -> the scene will run even if there is no Internet connection. Use scene controllers to trigger scenes.
- Ezlo VOI™ (Voice Orchestration Infrastructure) is a patent-pending platform that enables you to have any device connected to voice assistants like Alexa or Google do stuff for you. What that means is that now, with the Vera app, you can “CONTROL EVERYTHING”. But that’s not all: the magic is that you can now create scenes using all the triggers from the VeraMobile app, including the devices paired with your controller, and perform actions on devices that are controlled by Alexa or Google. You need:
- Ezlo Atom or Ezlo PlugHub
- VeraMobile app
- Alexa or Google Home
We posted a detailed step-by-step guide for Ezlo VOI™ here.

New features for Vera controllers (Vera Edge, Vera Plus, Vera Secure):
- Added support for VistaCam 702*
*VistaCam 702 will soon be available on our website.
Fixes for Ezlo Atom and Ezlo PlugHub controllers:
- fixed meter values which were not shown in the parent device for the Aeotec ZW078 device;
- fixed the color value that wasn’t saved in scenes if the user set a dimmer level for RGB bulbs;
- fixed the preset state and value from devices which were not saved in the action state in Scenes;
- fixed the random spinner that was shown over devices for a few seconds;
- fixed the issue where, if the user tried to create an RGB scene with the current value set as the action, the scene was saved with value 0 instead of the current value;
- fixed the issue with changing the color for RGB devices;
- fixed a crash when adding a pin code;
- fixed an app crash after the user tried to unpair a non-responsive smart switch device;
- fixed an app crash when the user tried to access the device control panel for a non-working device;
- fixed an app crash when the user tried to create a new room from the ADW "name your device" step;
- fixed an issue where a scene with preset values for RGB could not be created;
- fixed an issue where the user was unable to log in;
- fixed the issue with the RGB color picker that was missing from scenes and the device page;
- fixed a random app crash after the user created a few schedule scenes;
- fixed an app crash when the user tried to save a scene before the spinner on "add a room" disappeared;
- fixed an app crash when the user tried to save a scene with an RGB bulb added as an action;
- fixed an app crash when creating a scene with a “Whenever dimmer reaches” % value;
- fixed an app crash when the user tapped the Validate button without selecting a trigger;
- fixed the text added in Enter Command that overlapped the microphone icon;
- fixed a random app crash when the user reopened the app from the background on the Send Command page after disabling the Microphone permission from settings.
Fixes for Vera Edge, Vera Plus, Vera Secure controllers:
- fixed the Help content font, which was too big compared with the rest of the elements on the page;
- fixed the issue with saved restrictions, which weren’t displayed on the page after a restriction was saved;
- fixed the Control page, which wasn’t updated when a new restriction was added or removed;
- fixed an issue with the Plugin Version that wasn’t displayed correctly in the app;
- fixed the favorite devices which were not displayed on the Dashboard page;
- fixed the endless spinner that was displayed randomly over devices on the Devices page;
- fixed crashes reported by Crashlytics (Firebase);
- fixed a camera live view crash;
- fixed an app crash when going to the Device tab when there is a siren paired with the controller.
If you upgrade from 7.40.344 to v. 7.40.354, a reinstall of the app is needed. Please note the build is under Google review at the moment.
Hello. I'm a novice user of Audition on a MacBook Pro. I was working in Multitrack and it just crashed. I've saved many, many times, but the current version when I reopened the app is only showing the edits from a week and a half ago. The other tracks are there, on the index on the left, but all of the clips I had edited and placed in tracks have disappeared. Can you walk me through recovering any of the versions that I've saved? Anything at all would be helpful.

What you get back from a backup rather depends upon how you've set it up. One thing you have to bear in mind is that it's only the session file that is backed up, not any of the audio. The best scenario is the one where you have a backup Multitrack session file location within the session folder, and you let it back up automatically - mine's set to 3 minutes and there's a large 'max number of files' figure as well. You can set all that under the Auto Save settings in Edit > Preferences. If it's set like that, then you should be able to open the backup folder within your session, and you'll find all the backup files. Double-click on the most recent one, and that will get your session back to as close to where you were as you can get. Of course that's the ideal scenario. If you have actually suffered a real crash, then the situation might be somewhat different, depending upon what caused it...

What it means is that you are storing all of the backup information about your (presumably precious) session down the end of a metaphorical piece of wet string (the internet) in a place that you have no idea of the location of, and you're 100% reliant on being able to extract it from there. You need to change the setting in Preferences so that this data is stored with the session! As it is, I only have a limited idea of how you rescue it. You'd need to access your cloud storage location from the Desktop app, and look there.
I'm afraid that last time I looked at Adobe's cloud storage it wasn't exactly intuitive, but I get the impression that you have to open the Creative Cloud app, click on the 'Your work' tab and see if you can figure out which library, if any, the backup folder is in. Then you have to export a copy of it back to your machine, store it locally, and access it from there. Storing session file backups on the cloud is a seriously bad idea, I'm afraid. I'm quite surprised that Adobe even let you do it, because I think it could lay them open to consequential loss claims if anything important really went missing. Chances are though that they've included a get-out clause in the user/licence agreement to prevent this. In principle, though, I have this down as irresponsible behaviour on Adobe's part, because of the basic common-sense idea of how you should use cloud storage, and that's simply not to store anything on it that could become the only copy of something. And on those grounds alone it shouldn't be the default option for storing backup session files, which I believe it is.

Hi there - I'm having a similar recovery issue, hoping I'm missing something. My auto save settings are on the default (Multitrack every 3 mins, up to 10 files, backing up to a location within the session folder). The last backup file timestamp is close to the time I had to reboot; besides that, I manually save often as well. I had to force quit Audition and then shut down the whole computer. When I rebooted it, I don't recall the exact options offered by Audition, but I believe I chose to open the auto-recovery file, something like that. As with other people, this session does not reflect my editing at all. Is there another way to open something else that will restore the edits? And is there any chance this has to do with a file path issue? I get this error when opening the backup session file. The location is the backup file in the session folder.
(It's the "1060 clips" that "have no asset" that concerns me. They're all present in the session, but of course, no longer Multitrack edited. Halp please? Thanks!)

Okay! Reporting back on my own issue. It was a file linking issue! As per the error above & previous experience with InDesign, I realized the source file location wasn't open/doesn't open at the Audition restart. So I opened the folder location where they were, THEN opened my session file, and everything magically appeared in place and edited as it had been. Huzzah! So, follow your file paths! My minor complaint is that I only knew about this from prior experience, not because there's a clue about it in the searching/help etc. Perhaps if you're experienced with these kinds of programs you'd take that knowledge for granted, but we less-practiced people could use a better pointer for something this essential to working with this. Hope this helps any others crying in the corner after a reboot.

Hi, wym? Do I open the files in Adobe first or in Windows? Been searching for 6 hrs and having dread rn. Maybe I should've just done the work all over. Very novice, so plz assist.

Hello. My computer crashed and then I experienced this same issue. My auto save was set to back up to Creative Cloud. But when I checked, there were no backups. So the file in question lost all music and special FX settings. I was able to link the media, but not able to retrieve all work. After checking Creative Cloud files for backups, there were none. So I edited the auto save settings in Audition to save every 3 minutes, 100 files, to a backup file I created on my PC. That was 30 minutes ago and no backups. I cannot find a backup anywhere of the lost file. What to do?

So you created a specific location on your machine, restarted Audition, started a new session and no session files were stored every three minutes? (I bolded the bit you may have left out...)

So, I am rebuilding the file. What a tedious process.
😞

If you use cloud backup for a session file, which thinks that the audio is in the same location when it's not there (which of course it won't be, because the audio's not backed up), then if you use that backup rather than a local one, re-linking is inevitable. First rule of backing up sessions: Don't use the Creative Cloud option - or anybody else's cloud option, come to that. It's a sure-fire route to trouble. If it doesn't get you now, it will later.

I came across the same issue here: after an unexpected CRASH/RESTART, the saved multitrack files reverted to a version from a few hours ago. I'm certain that I had manually saved them just 1 minute before the crash! After reading through those extensive solutions from some audio masters, I realized they were completely ineffective! Audition, this stup*d app, is not as pro as you might expect. Unlike FCPX, it doesn't retain all render files for users; it only preserves the latest version! If you follow certain settings advice from experts to modify your file version and click save, then unfortunately, you'll permanently lose the version you want. Fortunately, I managed to recover it using my own method. Here's how.

1. Absolutely do NOT make any changes to the shown version in Audition after a crash or restart (in my case, a version from a few hours ago). All recently saved Audition files are in File > Open Recent. Audition doesn't save recent files with the same name in the folder temp/adobe/audition. Your latest version .sesx file is only saved in your defined folder, so:
2. Close the Audition app without any changes, and reopen the .sesx from the folder where you save. Normally by this step, the latest version (1 minute before the crash in my case) will appear. If not, then repeat step 2, or restart and then repeat step 2.

AVOID using auto-save settings! If you enable this feature, Audition may overwrite the older version with the latest one before you recover it, requiring you to start from scratch. Then it'll be a tragedy...
Just remember, Audition is not a smart app. Do not use any high-tech methods to retrieve the lost files. Just closing and re-opening will solve it. 🤣

The only thing that's not smart about Audition is that you aren't necessarily going to be offered the last session backup - and that trips up a lot of people. If you go to the place where your session files are stored, you will very likely find a more recent one than the one you were offered. No, I don't know why this happens, but it does. But if it says it's auto-saved, then it has. And auto-saves don't overwrite themselves - you can determine the number of them that you wish to save - any number up to the high thousands. And everybody else - don't avoid using auto-saves for session files - that's very bad advice. What you should do is use auto-saves, but also do manual saves. The auto-saves are a safety net, and you throw that away at your own peril.
My son presented the Arduino Data Logger he wrote for my circuits class to the Global Physics Department on 2013 May 15. The sessions are recorded, and the recording is available on the web (though you have to run Blackboard Collaborate through Java Web Start to play the recording). I thought he did a pretty good job of presenting the features of the data logger. Now that school is beginning to wind down, he’s started looking at making modifications to the data logger code again, and has updated it at https://bitbucket.org/abe_k/arduino-data-logger/ He’s down to only three classes now (US History, Physics, and Dinosaur Prom Improv), though he still has homework to catch up on in Dramatic Literature and his English class. He’s still TAing for the Python class also. On Thursday and Friday this week, he’ll be taking the AP Computer Science test and the AP Physics C: Electricity and Magnetism test. He’s having to take both tests in the “make-up” time slot, because we couldn’t get any local high school to agree to proctor the tests for him during the regular testing time. Eventually his consultant teacher convinced the AP coordinator to let her proctor the tests, but by then it was too late to register for anything but the makeup tests. We’re way behind schedule on the physics class, so he’s just going to read the rest of the physics book without working any problems before Friday’s exam—we’ll finish the book in a more leisurely fashion after the exam. He won’t be as prepared for the physics exam as I had hoped, but at least the CS exam looks pretty easy to him. One thing I didn’t realize is that schools can charge homeschoolers whatever the market will bear for proctoring the tests: - Depending on the reasons for late testing, schools may be charged an additional fee ($40 per exam), part or all of which the school may ask students to pay. 
Students eligible for the College Board fee reduction will not be charged the $40-per-exam late-testing fee, regardless of their reason for testing late. - Schools administering exams to homeschooled students or students from other schools may negotiate a higher fee to recover the additional proctoring and administration costs. We’re paying $145 per exam (not just the $89 standard fee and the $40 late fee), but I’m glad he gets to take the exams at all this year. Tomorrow he and I are doing another campus tour—this time at Stanford. He managed to get an appointment with a faculty member, but we noticed that the faculty member is scheduled to be teaching a class at the time of the appointment—I wonder what is going to happen with that. I’ll report on the visit later this week.
SCOTT SIMON, HOST: A rare two-headed copperhead snake has recently been found alive in Virginia. One snake. One long, slithering body but two heads and two mouths with venomous fangs. A remarkable discovery, really, so we're joined now by J.D. Kleopfer. He is the state herpetologist at the Virginia Department of Game and Inland Fisheries. He joins us from Williamsburg, Va. Thanks so much for being with us. J.D. KLEOPFER: Well, thank you for having me on. SIMON: And how did this snake make it to you alive? Seems to me we usually hear about these things when the snakes are, you know, former snakes. KLEOPFER: That is true. I'm equally as impressed that it's two-headed, as well as it's still alive. A lady up in Northern Virginia was coming out her front door and saw this small snake in her flower bed and realized that it had two heads and somehow got it into a bucket alive. And the rest has become social media history. SIMON: So two heads, I gather, but a common digestive system? KLEOPFER: Yes. It has two heads, but the rest of its anatomy is shared - digestive system, everything else. SIMON: So does one head eat and the other burp? KLEOPFER: Well, based on the X-rays, it appears that one head has a more well-developed esophagus, while the other head has a more developed throat. So based on that, we're just attempting to feed the one head to make sure that everything's OK. SIMON: And the other head just kind of like - what, does it frown or what? KLEOPFER: No, I don't think it's too unhappy. I haven't seen any tears come down its eyes yet. But they have a shared digestive system, so both heads are getting the nutrition they need. SIMON: What happens to a two-headed copperhead snake? KLEOPFER: Well, in the wild, they're extremely rare because of - they just don't live very long. They can't coordinate escaping from predators, and they can't coordinate capturing food. So they tend to not live. 
But in captivity, captive-bred two-headed snakes occasionally pop up, but that's usually the result of inbreeding. SIMON: Well, does this snake - or do we call it snakes? KLEOPFER: Singular would be fine. SIMON: Have much prospects for a long life? KLEOPFER: You know, it's a tough question because there's such few examples to work off of. But the latest I've heard from the individual who's caring for the animal is that he did get it to successfully feed over the weekend, and it appears to be doing fine. So we're keeping our fingers crossed that, you know, we're able to keep it going, and it lives a long, happy life. SIMON: Forgive me. I'm one of these people that attaches names to animals. SIMON: Maybe a herpetologist doesn't, but do you give the snake one name or two? KLEOPFER: Well, we've had a few folks that have asked if we're going to have some kind of naming contest. And at this point, I don't want to give it a name in fear that I'll jinx it, and it won't live very much longer after we give it a name. But eventually, if the animal continues to thrive and grow, we'd like to place it with a zoological facility somewhere within the Commonwealth of Virginia. And then, if they choose to do a naming contest, that's their prerogative. SIMON: J.D. Kleopfer is state herpetologist at the Virginia Department of Game and Inland Fisheries. Thanks so much for being with us. KLEOPFER: Thank you. NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.
Search the Community

Showing results for tags 'parent'. Found 4 results.

Hi everyone, I'm using Houdini 16 and I'm trying to parent a tube to the inside of the mouth of a dragon character that is animated. My goal is for the dragon to breathe fire and I want to use the tube as an emission source object. I've parented the dragon and tube together but I need the tube locked onto the dragon's mouth. At the moment, when the dragon moves its head, the tube isn't animated with it. I need the tube to inherit the animation of the dragon's mouth. Does anyone have any suggestions for how to achieve this? I'm guessing I need to constrain the tube to a section of the mouth? Thanks in advance, Chris (AUS)

I can't parent an object to one that's already been simulated; where do I parent what? Got a dynamic object flipping around, want to have a sphere inside it acting as an emitter for pyro. Don't want pyro on the flipping object, but on one inside it following the animation.

borbs727 posted a topic in Pipeline: I'm new to the Houdini pipeline and I'm starting to port a Maya pipeline tool over to Houdini. Everything is working great but the tooltip windows aren't appearing in the right positions. I've used Paul Winex's MSE and hqt as a reference to see how Houdini is working with windows; if there are any other learning resources out there, let me know! 1) The popups are supposed to lock onto the edge of the panel, but it looks like they're offsetting from the left edge of my center monitor (in a 3 monitor setup). No matter where I move the main Houdini window, the popups still appear in the same spots. 2) There's only 1 UI element that creates a popup correctly, but it gets hidden behind Houdini's main viewport. Tearing off my tool's panel doesn't resolve any of these problems.
I'm getting the parent window in Houdini like so:

class Build(QtGui.QWidget):
    '''Main functions to build UI.'''
    def __init__(self, parent=hou.ui.mainQtWindow()):
        super(Build, self).__init__(parent)

My tool works fine when getting the parent window in Maya like so:

ptr = mui.MQtUtil.mainWindow()

class Build(QtGui.QWidget):
    '''Main functions to build UI.'''
    def __init__(self, parent=wrapInstance(long(ptr), QtGui.QWidget)):
        super(Build, self).__init__(parent)

Example for problem 1): Example for problem 2):

Hi, does anybody know how to submit a "A Simple Parent-Child Example": http://www.sidefx.co...ationships.html Or how to submit a dependency chain out of Houdini? Like, Simulation --> Meshing --> Rendering. It's simple in Houdini on one machine, but how to take it over to HQueue? Cheers, nap
Remediation module for automatically adding an IP address to a Security Intelligence blacklist. The file contains a readme with more information.

I'm not able to upload the module; there is an error. Could you help me, please?

I have implemented the module that you have created. It appears to be working well and populates a blacklist. Once an IP address is blacklisted, I shouldn't see a corresponding Intrusion Event any longer, should I? Thank you in advance.

Thanks, needed something like this. Very cool local use (no need to spin up an external web host). I doubt this is getting any updates, but it would be nice to see a whitelist similar to the PIX shun module and an optional way to either set an expiration/timeout on a listed IP or something like a scheduled file deletion. It would also be nice if there was an option that adds a comment after an IP to say which correlation rule added it and a timestamp of when. Couple of tiny things I noticed:
- In both BlacklistLocal and BlacklistRemote you are returning 1 instead of 0, which causes a benign error msg in syslog and remediation status.
- Typo "rememdiation" in a warning msg on line 243.
- For local_dst_blacklist your default file names are using .txt and .md5 instead of html, per other defaults and the note about the local web server in the readme. (Could denote that .html is required for the local files fields in the template.)

Got it working by uploading the gz part. Can't seem to get any data on it in analysis. I see the file gets populated, but it's almost worthless if we can't make changes to remove devices from the blacklist. Is there an easy way to remove IP addresses from the local blacklist?

Not with the GUI. You can use the CLI by ssh'ing and editing the file it puts in /var/sf/htdocs/; just need to be mindful to do it swiftly in case it gets written to while you are trying to make changes.
(Note: making changes would also make the md5 file no longer match; you could probably generate a new one with the command used in the script: "md5sum /var/sf/htdocs/blockfilename > /var/sf/htdocs/md5filename". That is, if you are actually using it.)

Thanks for the info. I could probably script something. Once it is working, it works quite well. :) I was looking for something similar to the Cisco IPS host blocking. I wish they created the shun module for ASA so we could do something similar.

We've found the PIX module can work for ASA (in FireSIGHT 5.x). Just need to use SSH2 and edit the script to prefer SSH2. The PIX Shun module might not be exactly what you are looking for though, as again there is no GUI "no shun" option. Warning: since the PIX module is a default module, changes are reverted if you update the Defense Center and so must be re-applied after an update. Edit the read-only SSH.pm file in /var/sf/remediations/cisco_pix_1.1/ and change line 64 from:

$ssh = Net::SSH::Perl->new($host);

to:

$ssh = Net::SSH::Perl->new($host, protocol => '2,1');

This change will make it prefer SSH2, but it can still try SSH1 (though SSH1 and Telnet didn't seem to work with our ASA in testing; we did not dig deeper as to why, since SSH2 is preferred anyway.)

I'm not actively maintaining this, but some may benefit from the changes I made. The code comments have been updated. 1. Remediation Status shows proper Result Messages with custom values. (XML modifications) 2. I added the ability to limit the length (nothing with date or time) of a custom list. The list will be pruned FIFO if it exceeds the limit set in the instance configuration. Turning the restriction off allows infinite file size, as IPs are never removed from the list. You can also alter the size within the instance after creation. If you increase it, more IPs will be added to the list until the new limit is met. If you decrease it, the next remediation run will reduce the size.
As an example, if you were set to 1000 entries and were maxed out, and then change the limit to 800, the next run will take the oldest 200 entries, prune them from the file, and start maintaining the 800-IP limit. (XML and code changes) I use this module with two rules: one that looks for a scan and puts the IP in a limited list that will get pruned, and a second rule with tracking that looks for multiple scans in a time window and places the IP in a list that does not get pruned (repeat offender). Extract the file back to a blacklistIP_1.1.tar.gz that can be uploaded to the FMC.

Can you please describe how you set up your two rules? Thank you for "not" maintaining this module :D

Can someone make this module available for FSM 6 or 6.2, please? It's gone after upgrading.

"Not actively maintaining" just means I don't have plans to alter it beyond what I uploaded. Anyone can write code and change how it works. Please have a look at the attachment from my Cisco Live presentation. It has screenshots of most of my setup. The only difference being that I have a second rule/remediation with tracking (not shown) that uses a different html file and thus a different custom Security Intelligence feed. I have a subnet that I'm protecting and blocking all traffic from outside the US. You could have a similar block rule for a country of your choice or anything else for that matter. Hope that helps.

Everything worked except I am unable to modify the html file that is created. I need to change the permissions, but it looks like I would have to do this via the CLI. Did anyone else run into this?

@cvcucooper Are you trying to modify the file manually or do you have a correlation setup that fails to modify the file? I don't recall having to change any RW permissions on the HTML files. That file is created during the setup in the GUI. My HTML files are owned by root and have -rw-r--r-- or 644 permissions.
Just to clarify, you should not be manually editing the html file in the case of correlation and remediation.
Managing data from large-scale projects such as The Cancer Genome Atlas (TCGA) for further analysis is an important and time-consuming step for research projects. Several efforts, such as the Firehose project, make TCGA pre-processed data publicly available via web services and data portals, but this still requires managing, downloading and preparing the data for the following steps. We developed an open source and extensible R based data client for Firehose pre-processed data and demonstrated its use with sample case studies. Results showed that RTCGAToolbox can improve data management for researchers who are interested in TCGA data. In addition, it can be integrated with other analysis pipelines for subsequent data analysis.

Author: Mehmet Kemal Samur
Date of publication: None
Maintainer: Mehmet Kemal Samur <email@example.com>
License: GPL (>= 2)

CorResult-class: An S4 class to store correlations between gene expression...
DGEResult-class: An S4 class to store differential gene expression results
FirehoseCGHArray-class: An S4 class to store data from CGH platforms
FirehoseData-class: An S4 class to store the main data object from the client function.
FirehoseGISTIC-class: An S4 class to store processed copy number data. (Data...
FirehoseMethylationArray-class: An S4 class to store data from methylation platforms
FirehosemRNAArray-class: An S4 class to store data from array (mRNA, miRNA etc.)...
getCNGECorrelation: Perform correlation analysis between gene expression and copy...
getData: Export data from FirehoseData object
getData-methods: Export data from FirehoseData object
getDiffExpressedGenes: Perform differential gene expression analysis for mRNA...
getFirehoseAnalyzeDates: Get data analysis dates.
getFirehoseData: Get data from Firehose portal.
getFirehoseDatasets: Get list of TCGA cohorts.
getFirehoseRunningDates: Get standard data running dates.
getMutationRate: Make a table for the mutation rate of each gene in the cohort
getReport: Draws a circle plot into the working directory
getSurvival: Perform survival analysis based on gene expression data
hg19.ucsc.gene.locations: Gene coordinates for circle plot.
RTCGASample: A sample data object for sample codes.
RTCGAToolbox: RTCGAToolbox: A New Tool for Exporting TCGA Firehose Data
showResults: Export toptable or correlation data frame
showResults-CorResult: Export toptable or correlation data frame
showResults-DGEResult: Export toptable or correlation data frame
class Navigation {
  constructor() {
    this.facings = {'N': [0, 1], 'S': [0, -1], 'E': [1, 0], 'W': [-1, 0]};
    this.rightRotations = {'N': 'E', 'E': 'S', 'S': 'W', 'W': 'N'};
    this.leftRotations = {'N': 'W', 'W': 'S', 'S': 'E', 'E': 'N'};
  }

  blocksAway(directions) {
    let curFacing = 'N';
    const curCoords = [0, 0];
    const stepMatches = directions.match(/[RL]\d+/g);
    stepMatches.forEach((stepStr) => {
      const direction = stepStr[0];
      curFacing = direction === 'R'
        ? this.rightRotations[curFacing]
        : this.leftRotations[curFacing];
      // Parse the distance explicitly rather than relying on implicit
      // array-to-number coercion of the match() result
      const distance = parseInt(stepStr.slice(1), 10);
      curCoords[0] += this.facings[curFacing][0] * distance;
      curCoords[1] += this.facings[curFacing][1] * distance;
    });
    return Math.abs(curCoords[0]) + Math.abs(curCoords[1]);
  }

  firstDuplicateSpotBlocksAway(directions) {
    let curFacing = 'N';
    let curCoords = {x: 0, y: 0};
    const pastCoords = new Set();
    const stepMatches = directions.match(/[RL]\d+/g);
    for (const stepStr of stepMatches) {
      const direction = stepStr[0];
      curFacing = direction === 'R'
        ? this.rightRotations[curFacing]
        : this.leftRotations[curFacing];
      const distance = parseInt(stepStr.slice(1), 10);
      let nextCoords = curCoords;
      let foundMatch = false;
      // Walk one block at a time so a crossing mid-step is detected
      for (let d = 1; d <= distance; d++) {
        nextCoords = {
          x: curCoords.x + this.facings[curFacing][0] * d,
          y: curCoords.y + this.facings[curFacing][1] * d,
        };
        const key = `${nextCoords.x},${nextCoords.y}`;
        if (pastCoords.has(key)) {
          foundMatch = true;
          break; // Stop midway through - we've been here before
        }
        pastCoords.add(key);
      }
      curCoords = nextCoords;
      if (foundMatch) {
        break; // Break out of the outer loop too
      }
    }
    return Math.abs(curCoords.x) + Math.abs(curCoords.y);
  }
}
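For a quick sanity check of the rotate-then-walk logic, here is a compact standalone sketch of the same blocks-away calculation. The two sample strings are assumed test inputs, not taken from the class above.

```javascript
// Standalone sketch: turn right/left, then walk `n` blocks; answer is the
// Manhattan distance from the origin. Sample inputs below are assumptions.
function blocksAway(directions) {
  const facings = { N: [0, 1], S: [0, -1], E: [1, 0], W: [-1, 0] };
  const right = { N: 'E', E: 'S', S: 'W', W: 'N' };
  const left = { N: 'W', W: 'S', S: 'E', E: 'N' };
  let facing = 'N';
  let x = 0, y = 0;
  for (const step of directions.match(/[RL]\d+/g)) {
    facing = step[0] === 'R' ? right[facing] : left[facing];
    const dist = parseInt(step.slice(1), 10);
    x += facings[facing][0] * dist;
    y += facings[facing][1] * dist;
  }
  return Math.abs(x) + Math.abs(y);
}

console.log(blocksAway('R2, L3'));         // 5
console.log(blocksAway('R5, L5, R5, R3')); // 12
```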
permission_callback has no effect

WP version is 5.5.3. I have 3 API routes set in a plugin that is used in an admin dashboard page. One route is meant to be used "publicly". I have two very curious issues happening:

My 3 admin-centric routes do not specify permission_callback. I should be getting notices, but I do not, when the docs and WP core functions say it will throw a doing_it_wrong error. My 4th public route does have 'permission_callback' => '__return_true' set. I receive a rest_not_logged_in error code.

class My_Plugin {
    public function __construct() {
        add_action( 'rest_api_init', [ &$this, 'register_routes' ] );
    }

    public function register_routes(): void {
        register_rest_route( 'my-api-route', '/uri', [
            'methods'  => WP_REST_Server::READABLE,
            'callback' => [ &$this, 'api_get_available_stuff' ],
        ] );
        register_rest_route( 'my-api-route', "/uri/(?P<param>[a-zA-Z0-9-]+)", [
            'methods'  => WP_REST_Server::READABLE,
            'callback' => [ &$this, 'api_get_specific_stuff' ],
        ] );
        register_rest_route( 'my-api-route', "/uri/(?P<param>[0-9-]+)", [
            'methods'  => WP_REST_Server::EDITABLE,
            'callback' => [ &$this, 'api_update_specific_stuff' ],
        ] );
        register_rest_route( 'my-api-route', "/uri/(?P<param>[a-zA-Z0-9-]+)/load-more", [
            'methods'             => WP_REST_Server::READABLE,
            'callback'            => [ &$this, 'api_load_more_stuff' ],
            'permission_callback' => '__return_true',
        ] );
    }
}

// header approach
$.ajax({
    url: '/wp-json/my-api-route/uri/param/load-more',
    method: 'GET',
    headers: { 'X-WP-Nonce': '<?php echo wp_create_nonce('wp_rest'); ?>' },
    data: {
        'max_items': 5,
        'offset': 5 * current_count,
    },
});

// _wpnonce approach
$.ajax({
    url: '/wp-json/my-api-route/uri/param/load-more',
    method: 'GET',
    data: {
        '_wpnonce': '<?php echo wp_create_nonce('wp_rest'); ?>',
        'max_items': 5,
        'offset': 5 * current_count,
    },
});

My only conclusion could be that, despite seeing "Version 5.5.3" in the bottom corner of WP Admin, I might not actually be on 5.5.3.
"I should be getting notices but I do not when the docs and WP core functions say it will throw a doing_it_wrong error." Where does it say that? If you don't have a permissions callback, the route will be public. For reference: https://developer.wordpress.org/rest-api/extending-the-rest-api/adding-custom-endpoints/#permissions-callback "functions say it will throw a doing_it_wrong error" - yes, but only if you enable debugging, i.e. WP_DEBUG is true. 2. "I receive a rest_not_logged_in error code" - maybe that's being returned from your api_load_more_stuff() function - check for rest_not_logged_in in your callbacks. @SallyCJ load_more_stuff is running a query and returning the array. No other logic. Heading in a different direction currently to where this issue becomes irrelevant. Still would like to know why this is happening but being frank, I don't care enough when needing to keep moving. However, gonna keep checking back to see if others have thoughts. @JamesWagoner, could be a plugin/theme issue - try deactivating all plugins and/or switching to a default theme, and see if the problem persists. Btw, I'm keen in knowing about that "different direction"? Are you sure that a _doing_it_wrong notice isn't being issued? You won't see the notice visibly output on the page because that would break the JSON response. But if you look in the headers of the response you should see a X-WP-DoingItWrong header. It should also appear if you use a plugin like this to record developer notices: https://wordpress.org/plugins/log-deprecated-notices/ I did end up seeing a X-WP-DoingItWrong header but only in my dev instance. I also tail the error log during development and that is where I didn't see any logs when hitting endpoints that did not have permission_callback set. Though, as stated above, heading in a different direction where this is not a problem for me anymore. That doing it wrong did not write into debug.log file. It should do that. 
Upvote because I never knew about this until seeing this answer. The error only appears when using the new widget editor in WP 5.8.
I think I'm just gonna have a periodic torrid love-hate affair with Plan9, until I get the hang of it, at which point I'll jack in Linux. I start playing with it, am amazed at all the wondrous things it does that simply leave Linux and friends behind, then go back to Linux because learning a new OS is hard and effort and stuff. Sometimes I have sympathy for the people I've forced to use Linux. Sometimes I sit here and wonder how in hell I EVER managed to learn Linux without a permanent internet connection sitting on a machine nearby. Then I consider that if I can learn Linux having only used Windows and DOS, I can sure as hell use Plan9 having become familiar with a wide variety of other OSs, and having the internet sitting near me, with people who actually know answers to stupid questions.

"We have a fairly new policy where we collect a small fee to guarantee a seat in the emergency exit row. As you know, this area of the aircraft is very popular and affords more legroom. A passenger can request to be seated in this row but, only by paying the fee, can we absolutely guarantee that this row will be assigned."

You are a worthless bunch of money-grubbing turds. I always used to be able to "guarantee a seat in the exit row" the same way I could guarantee a seat anyplace else on your stupid flying metal box - by turning up, asking for a seat that's not yet assigned, and getting it printed on my ticket. Evidently, an exit row seat was in fact a free-for-all before, and it's only now, through a one-time fee, that I can guarantee that even though I've been assigned the seat, I'll actually get it? Please. Maybe they should put a 20 dollar premium on window seats, to "guarantee" that someone who's turned up and asked to see out the window will actually get the opportunity. You know what I find most offensive? That they can /possibly/ try to fob off a normal rational human being with a reason like that.
I think my fingers would physically rebel against me if I actually tried to type that statement. I was a paying customer, not just a dumb animal, and I normally appreciate it when I'm treated like one.

I think Orkut is a perfect example of why you should never, ever, even consider running a website on IIS, ASPX, and friends. I mean, let's face it - a group of the undisputedly best web engineers in the world can't make it work properly, and with one of probably the biggest clusters in the world, they can't make it run at a decent speed.

Dad liked his birthday presents. I think there's a theme going on with me and him. At Christmas, I gave him a set of lockpicks. He gave me marine flares. For his birthday, I gave him one set of average throwing knives, and one really, really nice throwing knife. Definitely. I have 5 months until next Christmas to work out some more completely irresponsible gifts to give him.

Forgot to say yesterday - there really is a platform 9 3/4 at London King's Cross. Part of my trip home involves going through it ["it" being King's Cross, not the Hogwarts platform :-)], and I happened to see it. It's just a sign on a random brick wall, and it's not between platforms 9 and 10 [there is no "between" platforms 9 and 10], but it's close.

Virgin Atlantic demonstrate flagrant capitalism at its most offensive: I arrive 4 hours early, as per usual, so I can scab an exit row seat by dint of being first in the queue and asking nicely. It's what I usually do, and it usually works. "Can I have an exit row seat, please?" "Lemme check for you... Yep, we have two left." "One for me, please." "That'll be seventy-five bucks." Uhm. Fuck that. What the fuck... I mean, I turn up with a valid ticket for this seat [exit row is still cattle class, remember?], and they won't let me have it without paying more? Yes, I had a really shitty flight. The seats fit people comfortably[ish] if they're skinny and 4'5" tall.
I think we can safely say that I'm neither particularly skinny, nor, at 6'3", am I at or below the requisite height. I failed miserably to get any sleep at all, and the movies were all ones I'd seen before [excepting Shaun of the Dead, which I can VERY highly recommend]. Bleh.

Nowadays, I kinda find it offensive. You know that, at the cost of inconveniencing a couple passengers, you'll still be able to make 75 bucks for each and every seat, because there are enough people that /are/ prepared to pay for it. So you do it. It'll piss off lots of customers, but it doesn't matter because they've already paid for the flight. And in a way, they can't /really/ complain, since it's just as if there were no seats left when they got there. Ugh. All people that make decisions like this should be forced to sit in seats for people a foot or two shorter than them for 12 hours. Starting a couple hours before bedtime, so they'll be really, really tired, but still unable to sleep.

I finally beat Zelda Wind Waker. Really, really great game. Pretty much warrants the purchase of a GameCube on its own merits, so that's nice. I bought Grand Theft Spiderman, so that'll probably keep me busy for a while.

I think we can safely say that Fahrenheit 9/11 isn't likely to improve

And I'm lit up like a Christmas tree. Last night, I finally got that together. I think I need to shoot the photographer, though. Now I'm working on a stainless steel bikini set, in a looser weave, and it's infinitely easier. I think I can safely say that making a substantial bit of 6-in-1 titanium maille is really quite a lot more effort than a bit of 4-in-1 stainless.

Mario Kart Double Dash still very much rocks. Great fun, and am still working on it in co-op with the rather lovely girlfriend, who rather conveniently actually enjoys computer games. http://www.penny-arcade.com/view.php3?date=2001-12-14 I think I've passed "the test" with flying colors. Currently, we have Gold in all but three of the cups.
Couple minor-ish points, though: 1) Fucking game fucking cheats. It's deliberately engineered to let you get away with some stuff sometimes, and not at others. But to the point where it's annoying. For example, sometimes it just decides that no way are we going to win a specific race, and goes out of its way to descend weapons onto us until a couple computer karts have gone past, just at the finish line. Kinda annoying when we're about to win a new cup, or something. 2) Mirror mode is possibly one of the filthiest hacks I think I've /ever/ seen in a computer game. They literally gaffered the 3D part of the engine in backwards, reversed the left and right controls to make up for it, set the computer on cheat-as-much-as-you-want mode, and hoped no-one would notice... Uck. Of course, it's still GREAT fun, and I'd really recommend that if you have a GC you pick it up. Doesn't really come into its own until you play with other people, but very, very cool, anyways.
OPCFW_CODE
Nordic provides a unique one-stop solution for asset tracking applications. We provide the hardware, software, connectivity, and the cloud services. Our low-power nRF9160 SiP has a multimode modem that supports both LTE-M and NB-IoT, as well as GPS. And we have plenty of great applications and samples in our nRF Connect SDK that work out of the box to support your asset tracking application. Developers also have the freedom to choose either our nRF Cloud or their own cloud platform. Either way, you can have access to our nRF Cloud Location Services for power-efficiently obtaining accurate location data.

In this blog, we will compare the power consumption of GPS and nRF Cloud's assisted GPS (A-GPS) feature. While regular GPS downloads the necessary assistance data via satellites (~50 bps), Nordic offers A-GPS to download the assistance data via LTE (~150 kbps). The data is provided in seconds instead of minutes, and the lower power consumption lets asset trackers either decrease in size or increase in battery life. Using real-life measurements from test scenarios outlined in this blog post, we will see how the A-GPS service helps minimize the power consumption in your application. Furthermore, these parameters will be taken into consideration for accurate asset tracking application designs, which will show how A-GPS can help extend your device's battery life.

The following table shows the significant parameters to consider when evaluating GPS service power consumption.

| Parameter | Description |
| --- | --- |
| TTFF | Time-to-first-fix. The device firmware cleans up all the GPS satellite data stored in the device and forces a cold start, as GPS fix searching takes the longest from a cold start. This helps evaluate the potential maximum TTFF under the measurement environment. |
| Floor current | The current consumption when the device is idle. |
| GPS fix searching current | The current consumption when GPS is searching for a fix. |
| A-GPS handling period | The period used for handling A-GPS data. |
| A-GPS handling current | The average current during the A-GPS handling period. |

We will perform two different tests to obtain all these parameters: TTFF measurements and power consumption measurements. TTFF measurements evaluate the device's GPS TTFF with and without assistance data. Power consumption measurements measure the power consumption during the idle state, GPS searching, and A-GPS data handling periods.

TTFF measurements

The GNSS sample in nRF Connect SDK demonstrates all the features related to the nRF9160 GNSS service. It provides an operation mode called TTFF test, which searches for a GPS signal until a fix is obtained and reports the time taken to get the fix, i.e. the TTFF. nRF Connect SDK samples can use overlay files for setting additional configurations for the firmware. The overlay-ttff-measurement.conf file is created to enable the TTFF test mode and also enable cold start, which makes sure the device does not contain any GPS data downloaded from a previous search. To enable the A-GPS service, the overlay-enable-agps.conf file is created to allow the device to access the nRF Cloud A-GPS service. The LTE network is only connected when A-GPS assistance data downloading is required. You will also need to follow Securely generating credentials on the nRF9160 | nRF Cloud Docs to provision the device to nRF Cloud to be able to access the nRF Cloud A-GPS data during testing.
# TTFF measurement configurations
CONFIG_GNSS_SAMPLE_MODE_TTFF_TEST=y
CONFIG_GNSS_SAMPLE_MODE_TTFF_TEST_COLD_START=y

# Enable to use nRF Cloud A-GPS service
CONFIG_GNSS_SAMPLE_ASSISTANCE_NRF_CLOUD=y
CONFIG_NRF_CLOUD_CLIENT_ID_SRC_INTERNAL_UUID=y
CONFIG_GNSS_SAMPLE_LTE_ON_DEMAND=y

For testing regular GPS, build with overlay-ttff-measurement.conf; for A-GPS, build with both overlay files. Connecting the device to serial monitor tools such as LTE Link Monitor will report the time to fix until a fix is successfully obtained.

[00:00:14.472,656] <inf> gnss_sample: Time to fix: 5
[00:00:14.484,924] <inf> gnss_sample: Sleeping for 120 seconds

| Test run | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Regular GPS TTFF [s] | 28 | 37 | 32 | 26 | 36 | 28 | 20 | 36 | 27 | 20 | 29 |
| A-GPS TTFF [s] | 1 | 1 | 3 | 1 | 1 | 2 | 2 | 1 | 1 | 1 | 1.4 |
| GPS TTFF / A-GPS TTFF | 28 | 37 | 10.6 | 26 | 36 | 14 | 10 | 36 | 27 | 20 | 20.70 |

Power consumption measurements

Power consumption measurements are performed with the overlay-low-power.conf file. It includes configurations that disable the logging system used for debugging. This allows us to measure the actual power consumption of the devices during the GPS fix searching and A-GPS data downloading processes. This new overlay file is added to the previous regular and A-GPS TTFF measurement firmware configurations, in addition to the other overlays, to generate new firmware files for the power consumption measurement.

# Allow device to achieve lowest power consumption
CONFIG_LOG=n
CONFIG_SERIAL=n
CONFIG_UART_CONSOLE=n
CONFIG_AT_HOST_LIBRARY=n

The Power Profiler application recorded the results of three TTFF tests for each configuration during the two measurements. The following pictures and tables show the results for the regular GPS searching process and the A-GPS searching process separately. The regular GPS fix searching process is straightforward, as it continues to search for a fix until it eventually obtains one. The table below shows the measurement results for the regular GPS fix searching process.
| Regular GPS | Fix searching current (mA) | Fix searching period (s) | Floor current (uA) |
| --- | --- | --- | --- |

The A-GPS TTFF test is slightly different, as it involves an A-GPS data handling process, allowing the device to connect to the LTE network and download A-GPS data. With the help of A-GPS data, the device can quickly obtain a fix, resulting in a much shorter GPS fix-searching period.

| A-GPS | A-GPS data handling period (s) | A-GPS data handling period average current (mA) | Fix searching period (s) | Fix searching current (mA) | Floor current (uA) |
| --- | --- | --- | --- | --- | --- |

Application power consumption evaluation

To illustrate the difference between A-GPS and regular GPS in an asset tracking application, we assume that the process of obtaining a fix is always the same and use the average values from the above measurements. As the fix searching current and floor current of A-GPS and regular GPS are quite similar, around 50 mA and 2.19 uA respectively, we will use these values as the input in the calculations in the next step. For the application design input, we use the battery parameters of the Nordic Thingy:91 and assume a total power loss rate of 20%. We will use different fix intervals, ranging from 10 seconds to 1 day, to cover the needs of various asset-tracking applications. The following table presents the overall input data derived from both the application design and the previous measurements.

| Fix interval (s) | Battery capacity (mAh) | Battery voltage (V) | Power loss rate | Power voltage (V) | Floor current (uA) | Fix searching current (mA) | Regular GPS TTFF (s) | A-GPS TTFF (s) | A-GPS handling period (s) | A-GPS handling current (mA) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |

Using the measurement data, we can calculate the average power consumption for A-GPS and regular GPS. By combining the application battery parameters and fix intervals, we have calculated the runtime of various theoretical asset tracking devices. The resulting values are shown in the table below.
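The average-current and runtime arithmetic just described can be sketched in a few lines. This is a minimal illustration, not Nordic's spreadsheet: the 1350 mAh battery capacity and the A-GPS handling figures (10 s at 30 mA) are placeholder assumptions, while the TTFF, fix-searching current, and floor current values are the averages quoted in this post.

```typescript
// Average current (mA) over one fix cycle, and the resulting battery runtime.
interface FixProfile {
  fixIntervalS: number;       // how often the application requests a fix
  ttffS: number;              // time-to-first-fix while searching
  fixCurrentMa: number;       // current drawn while searching for a fix
  floorCurrentUa: number;     // idle floor current
  handlingS?: number;         // A-GPS data handling period (omit for plain GPS)
  handlingCurrentMa?: number; // average current while handling A-GPS data
}

function averageCurrentMa(p: FixProfile): number {
  const handlingS = p.handlingS ?? 0;
  const handlingMa = p.handlingCurrentMa ?? 0;
  const idleS = Math.max(p.fixIntervalS - p.ttffS - handlingS, 0);
  // Charge drawn per cycle in mA*s, averaged over the whole cycle.
  const chargeMas =
    p.ttffS * p.fixCurrentMa + handlingS * handlingMa + idleS * (p.floorCurrentUa / 1000);
  return chargeMas / p.fixIntervalS;
}

function runtimeDays(batteryMah: number, lossRate: number, avgMa: number): number {
  return (batteryMah * (1 - lossRate)) / avgMa / 24;
}

// Hourly fixes, using the averaged measurements above (cold start assumed).
const gpsMa = averageCurrentMa({
  fixIntervalS: 3600, ttffS: 29, fixCurrentMa: 50, floorCurrentUa: 2.19,
});
const agpsMa = averageCurrentMa({
  fixIntervalS: 3600, ttffS: 1.4, fixCurrentMa: 50, floorCurrentUa: 2.19,
  handlingS: 10, handlingCurrentMa: 30, // placeholder A-GPS handling values
});
```

With these inputs the A-GPS profile averages roughly a quarter of the regular GPS current, so its runtime on the same battery comes out several times longer.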
| Application | Run time (day), A-GPS | Run time (day), Regular GPS | Average electric power (mW), A-GPS | Average electric power (mW), Regular GPS | A-GPS/GPS run time | Power saving percentage |
| --- | --- | --- | --- | --- | --- | --- |

The Excel file attached at the bottom of the blog post, 4621.nRF9160 GPS Application Power Consumption Evaluation.xlsx, covers the calculation process. Looking at the Power saving percentage column for average electric power, A-GPS saves on average 70.6% of the energy used by regular GPS on the same battery. Taking the Thingy:91 battery as an example, we can calculate the actual run time of an application before the battery is drained. The result shows that A-GPS can help asset tracking applications run at least 2.4 times longer, and on average 3.54 times longer, when we assume a cold start every time.

This blog explores the use of the nRF9160 SiP for asset tracking applications and how the A-GPS service in nRF Cloud can reduce power consumption and extend device battery life. Our real-world measurements show that A-GPS reaches a fix much more quickly than regular GPS. Power consumption evaluation of real-life asset tracking applications shows that the device battery life is on average between 2.4 and 4.3 times longer for A-GPS vs GPS, depending on how often you need a GPS fix. This evaluation is intended to demonstrate how A-GPS can help extend battery life compared to regular GPS. Actual GPS performance may vary under real conditions, so it is recommended to use the PPK2 for recording and analyzing data specific to an application.

- 4621.nRF9160 GPS Application Power Consumption Evaluation.xlsx
- Cloud Services - nordicsemi.com
- Getting started with current measurements on the nRF9160
- Using GNSS and GPS assistance | nRF Cloud Docs
- Field verification of GNSS on the nRF91 Series
OPCFW_CODE
Are Sophia's parents vampires?

The anime never really addressed why Sophia is a vampire, and her parents never commented on it. Is one or more of her parents also a vampire?

The light novels answer this. In short, no, neither of Sophia's parents is a vampire. And given that the light novels specify that very few people have Appraise, and that high-level Appraisal Stones are rare, it's entirely possible they're completely ignorant that Sophia is one. Sophia's parents are nobles, though, so it's possible they have access to an Appraisal Stone. It isn't clear whether they know Sophia is a vampire or not (I'll update if necessary, as I'm still reading volume 5).

The reason Sophia is a vampire is that Vampire was the unique skill assigned to her at birth. That each of the reincarnations was assigned a unique skill is also something the anime didn't really explain (at least as far as I can remember). Kumoko's was the skill Skanda, which increases her speed. Note that these unique skills are not necessarily one-of-a-kind, only unique among this reincarnation, as plenty of monsters are shown to have the Skanda skill.

Back to Vampire: this skill also gives Sophia the Vampire title, which comes with some really good skills like Undying Body. (Undying Body was evidently mistranslated as Immortality in the English version in volume 4, and corrected in volume 5 when they realized it wasn't the same thing. There is a difference: Undying Body only allows her to survive any attack at 1 HP, once per day, whereas Immortality lets one survive any kind of physical attack, with Abyss Magic or other attacks on the soul being the only way to fully kill someone who has it.) Another really interesting thing is that since Sophia was assigned Vampire from birth, she also gets the Progenitor title, which negates all negative effects of being a vampire.

Textual Evidence

Getting back to the main question: this all comes from volume 5 of the light novels.
From page 4 in Chapter 1: The Spider and the Vampire:

"Does that mean her parents are vampires or what? But according to Appraisal, the woman who's holding this baby is human. The lady's name is Seras Keren. Same last name as the baby bloodsucker. If you put two and two together, that means this lady is definitely the kid's mother. Her mother is human."

From page 6 in the same chapter:

"So by the process of elimination, her reincarnation bonus skill is... Vampire? Hmm? Hmmmm? Which means that the reason this kid is a vampire is because that's what she got for being a reincarnation? The description for the Vampire title did say that it gets added to your species when you get the skill."

From page 30 in Chapter 2: The Town:

"Actually, aren't they nobles? From what I saw earlier and all, I'm guessing the baby bloodsucker's father is in charge of the town. His name is John Keren. Race: Human. Human. I say it twice because it's important! Good for you, Baby Drac! You're a vampire who was born to human parents via some freak mutation! ...I don't know how vampires are treated in this world, but if an important noble suddenly has a vampire baby, that smells like it'll be trouble in the future. Well, they'll have to deal with that themselves."
STACK_EXCHANGE
I tried to search cs:Category:Údržba:Wikidata only in talk pages (I was discussing this category some years ago). I selected only talk namespaces, but there were 125 results, all in namespace 14, which was not selected. Bug: searching on talk pages. And a second "bug": the search URL is not easily copyable into wikitext.

Hi @JAn Dudík, thanks for the report! I created a ticket for the bug: phab:T195832 and we will look into it. I also created a ticket for the second issue, so that we have it on our radar - phab:T195833.

How to have a clean content tag

I want to know if it's possible to have defaultValue: ["Actif"] like for pages in these categories. Because today I use the Adding Fields to AdvancedSearch solution to have it. So I set defaultValue:['\\"Actif"\\'] in order to get a good result for the search, but now Actif is displayed like this: \"Actif"\ . So how can I get defaultValue: ["Actif"] in the "Pages in these categories" input box?

Hello, maybe I don't quite understand the documentation: Adding Fields to AdvancedSearch. I want to add a new field named extra (like the example); the search works but I have a problem with the label. I put "advancedsearch-optgroup-extra": "test" in i18n/fr.json and in every other language. In my wiki the result looks as if I hadn't put an ID. The same for the help message: ⧼advancedsearch-field-help-undefined⧽. Any help appreciated. Thank you.

@TheDJ, I have access to my server so I use the extension.json solution, but for the optgroup I use a trick, because it was undefined, so I just put it in as if it were my special label…

I have a question: if I understand correctly, putting !category:"music" should theoretically exclude the music category. But it gives me an error, and when I put !-category:music it works normally, like -category:"music". Your new interface is very good! -livrewikier

irice7350 - no results: strange, as people exist!!

I'm finding it crisper and easier to navigate

Is it possible to modify the namespaces selected by default?
I've tried the following, and various other forms, to add a namespace to the default selected list. Any help appreciated.

The namespace selected by default (as part of the default search namespaces) is defined by this variable: Manual:$wgNamespacesToBeSearchedDefault, independent of the AdvancedSearch extension. If you change that (e.g. by removing main and adding something else) you should get what you need. :-)

Thank you @Christoph Jauera (WMDE), that was my issue and solved my problem.

Search function does not work

It does not work... I only get the prompt "A database query error has occurred. This may indicate a bug in the software." Please advise.

Same problem! The search function doesn't work with any words other than headlines.

Hej, AdvancedSearch only provides a better interface for the search. For most of its features it depends on the underlying Help:CirrusSearch extension. So please first make sure that CirrusSearch is set up and works.

Could you kindly explain how it works? I could not figure it out.

I have been using this for a year and suddenly the search function doesn't work.

Same problem for me. I have not understood the Cirrus extension. Christoph, could you kindly explain?

General issues with the search results, or errors showing when executing a search, like stated above, are most likely not related to AdvancedSearch. If your wiki has CirrusSearch set up, they might be related to issues there. Please ask the maintainers on Extension talk:CirrusSearch. Otherwise you might also find help on the more general help page for the search feature, Help talk:Searching. Information on how to install CirrusSearch on your wiki and set it up correctly can be found here: Extension:CirrusSearch

Is it possible to sort by date desc? It's not yet possible to do this in the search interface, but you can do it by changing the URL. See here for the list of options: https://www.mediawiki.org/wiki/Help:CirrusSearch#Explicit_sort_orders

This interface is working well for me.
Thank you for your work on here. It's working for me here. So far so good. Just have to remember where everything is and when I should use it. :-)

Why the heck do I keep getting redirected here from the tree ID website!!! All I want to do is find out what sort of tree I got sold by the wrong name. It's definitely not a ribbonwood!
OPCFW_CODE
Why does Xcode scale vector images better than Illustrator or Photoshop?

I tried using the new PDF feature of Xcode that basically scales the image to 1x, 2x, and 3x. Unfortunately I'm also using SpriteKit, so I'd rather use SKTextureAtlases than the Asset Catalog. My problem is that the rasterized version of the PDF looks better than any exports from Adobe Illustrator (or Photoshop using Smart Objects). Here's a link to an Imgur album with examples. Specifically, the image exported from Illustrator is in 2 square sizes: 60px and 90px. The images in Xcode all have the same name but are in two different atlases, <EMAIL_ADDRESS> and <EMAIL_ADDRESS>. The PDF was exported at 30px square from Illustrator, and then Xcode scales it to the 2x and 3x versions. So why does the Xcode version look sharper (especially around the junction between the rounded corner and the flat side)?

your link doesn't go to an album, it just goes to Google

Thanks Ron. I fixed the link.

@user2280092: Maybe this is due to Photoshop/Illustrator settings? For this kind of job I create the atlas dynamically from vector images. I use PaintCode to create methods rendering PNG images, then add these images to the atlas.

@Domsware That's what I thought too. However I've tried changing all of the settings I can think of, but that hasn't worked. As for PaintCode, does it improve performance or is it about the same as using a normal atlas of PNGs?

PaintCode does not improve performance, but it reduces app size. It allows optimizing the way bitmaps are created. I don't have to worry about resolution, as PaintCode handles all of this. How were the pictures you show in the link obtained? In other words, how did you obtain the PNG created by Xcode? And please, could you insert the images here? Maybe in the future you'll delete the external files and then the images won't be available in this question.

@Domsware Both of the images are screenshots taken of the app running on an iPhone 4S. I took the screenshots through the Xcode "Devices" window.
I wish I could get the PNG generated by Xcode in its raw form, because then I'd just use that. And I tried to post the images here but I don't have enough reputation.

Let us continue this discussion in chat.

I think this could be a resolution issue: Xcode does not have the resolution needed, so it enlarges an image, causing glitches. When an SKSpriteNode is created without an indication of size, the size of the texture is used. Thus if your SKSpriteNode has a size of 30x30 points, you have to provide a 60x60 pixel image for @2x and a 90x90 pixel image for @3x. This may also be due to settings in Illustrator. To have a true comparison on screen, you can display two SKSpriteNodes with the same size of 30x30 points: the first gets its texture from the atlas generated by Illustrator; the second gets its texture from an image in the assets created via the PDF feature of Xcode. Note that for this test you don't even need an atlas, as an atlas is intended for rendering optimization.

Sorry I didn't get back to you sooner, but I did test it and it's working now. Thanks for your time.
STACK_EXCHANGE
[typescript] Standard Props

This PR adds a StandardProps type constructor, which is a uniform way to declare props for standard components. This moves className and style out of StyledComponentProps, because the withStyles HOC does not actually produce a component with these props. This addresses the problem mentioned in this comment. All props are changed to interfaces, which causes type errors when there are prop conflicts, whereas before we had to find out by trial and error about problems like #8618. A number of bugs of this nature were discovered and fixed in this PR.

For a given component Foo, the type FooProps now represents all of Foo's props, not just the non-style-related ones. The benefit of this is that components can refer to one another's props in their entirety for the purposes of forwarding. For example, before, TextFieldProps contained FormHelperTextProps?: FormHelperTextProps & StyledComponentProps<any>; which was not even correct, due to the any. Rather than duplicate here the logic of which ClassKeys should be passed to StyledComponentProps, if FormHelperTextProps is entirely self-contained, it can just be FormHelperTextProps?: FormHelperTextProps;

The StandardProps macro allows declaring: a base type to be extended, what classes are available for it, and any props that should be omitted from the base type because they will conflict with the extending props.

@pelotom @sebald Thanks! :)

@oliviertassinari (responding outside of outdated code snippet) I'm wondering: can an extra degree of freedom in this situation cause harm?

The only way it could cause harm is if one provided a component that expected some required props that it wasn't going to get:

const PlusOne = (props: { n: number }) => <div>{props.n} + 1 = {props.n + 1}</div>
<Modal BackdropComponent={PlusOne} /> // won't get the `n` prop it expects!

Since React.ReactType is defined as string | ComponentType<any>, it would allow this.
Since all of BackdropProps' fields are optional, this type would be OK too: BackdropComponent?: string | React.ComponentType<{}>; But it's kind of nice to indicate what props you actually might get, so I think the current typing is ideal.

@pelotom This makes sense, you are right, I think that it's better this way. Thanks for raising my awareness of the typing system.

Hmm, I think this PR broke my build if I try to use material-ui together with styled-components. I'm really not sure why, though. TypeScript spits out weird error messages:

rc/components/editor/comments.tsx(40,21): error TS2345: Argument of type 'ComponentType<CardProps>' is not assignable to parameter of type 'StatelessComponent<CardProps>'. Type 'ComponentClass<CardProps>' is not assignable to type 'StatelessComponent<CardProps>'. Type 'ComponentClass<CardProps>' provides no match for the signature '(props: CardProps & { children?: ReactNode; }, context?: any): ReactElement<any> | null'.

when the file just contains:

import MuiCard from 'material-ui/Card';

const Card = styled(MuiCard)` // Line with the error
  width: 100%;
`;

material-ui makes use of React.ComponentType<CardProps>, which is defined as type ComponentType<P = {}> = ComponentClass<P> | StatelessComponent<P>;, so TS should theoretically grok that the component is either stateless or not. Especially since it shouldn't be a problem, because the styled interface is declared as:

export interface ThemedBaseStyledInterface<T> extends ThemedStyledComponentFactories<T> {
  // yada yada
  <P>(component: React.ComponentClass<P>): ThemedStyledFunction<P, T>; // Should work
  <P extends { [prop: string]: any; theme?: T; }>(component: React.StatelessComponent<P>): ThemedStyledFunction<P, T, WithOptionalTheme<P, T>>; // throws error
}

Any idea why this happens?
@NeoLegends Hm, this works fine for me:

import MuiCard from 'material-ui/Card'
import styled from 'styled-components'

const Card = styled(MuiCard)`
  width: 100%;
`;

Using: <EMAIL_ADDRESS> <EMAIL_ADDRESS> <EMAIL_ADDRESS>

@pelotom Hmm, I thought it might have something to do with outdated dependencies, but I made a full component upgrade and I'm still facing the errors. I've got: <EMAIL_ADDRESS> <EMAIL_ADDRESS> <EMAIL_ADDRESS>

Seems like the folks over at styled-components face the problem as well, and they blame it on the React typings... https://github.com/styled-components/styled-components/pull/1281

@pelotom Did you try using your Component afterwards?

@NeoLegends I'm able to reproduce it now. I'm not sure where the blame lies, but it looks like styled can take either a React.ComponentClass<P> or a React.StatelessComponent<P>, but for some reason not a React.ComponentType<P> (which is defined as React.ComponentClass<P> | React.StatelessComponent<P>). All of material-ui's components are defined as React.ComponentType<P>, because it is an implementation detail whether the component is represented by a class or a stateless function. I'm not sure if it's covered by the bug report you linked, but this seems like a good thing to show to the styled-components people, because it seems like it should work:

import styled from 'styled-components'

declare const A: React.ComponentClass
declare const B: React.StatelessComponent
declare const C: React.ComponentClass | React.StatelessComponent

styled(A) // succeeds
styled(B) // succeeds
styled(C) // fails

This seems like the kind of thing that should "just work", but it may be a limitation of TypeScript currently, which styled-components should modify their typings to work around.

@pelotom Just for the sake of completeness, can you add the error message?
🙂 @sebald it's essentially the same error as @NeoLegends gave above: Argument of type 'ComponentClass<{}> | StatelessComponent<{}>' is not assignable to parameter of type 'StatelessComponent<{}>'. Type 'ComponentClass<{}>' is not assignable to type 'StatelessComponent<{}>'. Type 'ComponentClass<{}>' provides no match for the signature '(props: { children?: ReactNode; }, context?: any): ReactElement<any> | null'. Sorry ... didn't see that 😅 Great error, though!
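For reference, the shape of the StandardProps helper this PR describes can be approximated with modern TypeScript built-ins. This is an illustrative sketch, not the PR's actual definition (which predates the built-in Exclude type and used its own omission machinery); the Divider names below are hypothetical.

```typescript
// Style-related props that a withStyles-decorated component accepts.
interface StyleProps<ClassKey extends string> {
  className?: string;
  classes?: Partial<Record<ClassKey, string>>;
  style?: { [cssProperty: string]: string | number }; // stand-in for React.CSSProperties
}

// StandardProps: extend base props C with the style props for ClassKey,
// omitting any keys (Removals) that would conflict with the extension.
type StandardProps<C, ClassKey extends string, Removals extends keyof C = never> =
  Pick<C, Exclude<keyof C, Removals | keyof StyleProps<ClassKey>>> &
  StyleProps<ClassKey>;

// Hypothetical usage: a component's full props, style keys included.
interface DividerBase {
  absolute?: boolean;
  inset?: boolean;
}
type DividerProps = StandardProps<DividerBase, 'root' | 'absolute' | 'inset'>;

const props: DividerProps = {
  absolute: true,
  className: 'my-divider',
  classes: { root: 'divider-root' },
};
```

This mirrors the three inputs the PR description names: the base type, the available class keys, and the props to omit to avoid conflicts.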
GITHUB_ARCHIVE
// Home route handler: renders the landing page.
function HomeR(req, res) {
  res.render('home.ejs', {
    title: 'I Love my City',
    headline: 'Every city has its own personality'
  });
}

// City route handler: picks a title and headline based on the :city param.
function CityR(req, res) {
  var cityname = req.params.city;
  var titleValue;
  var headlineValue;

  if (cityname === 'newyork') {
    titleValue = 'New York';
    headlineValue = 'Business capital of the world';
  } else if (cityname === 'london') {
    titleValue = 'London';
    headlineValue = 'City of the Thames';
  } else if (cityname === 'paris') {
    titleValue = 'Paris';
    headlineValue = 'Fashion Capital of the world';
  } else if (cityname === 'delhi') {
    titleValue = 'Delhi';
    headlineValue = 'Capital of India';
  } else {
    // Fallback for unknown cities so the template never sees undefined.
    titleValue = 'Unknown City';
    headlineValue = 'We have not profiled this city yet';
  }

  res.render('city.ejs', { title: titleValue, headline: headlineValue });
}

module.exports.cityFn = CityR;
module.exports.homeFn = HomeR;
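The if/else chain above can also be expressed as a lookup table, which keeps the handler the same size as more cities are added. A sketch (written in TypeScript for the annotations; the fallback entry is an addition, not part of the original handler):

```typescript
interface CityInfo {
  title: string;
  headline: string;
}

// City metadata keyed by the :city route parameter.
const CITIES: Record<string, CityInfo> = {
  newyork: { title: 'New York', headline: 'Business capital of the world' },
  london: { title: 'London', headline: 'City of the Thames' },
  paris: { title: 'Paris', headline: 'Fashion Capital of the world' },
  delhi: { title: 'Delhi', headline: 'Capital of India' },
};

// Fallback so an unknown city never renders undefined values.
const DEFAULT_CITY: CityInfo = {
  title: 'Unknown City',
  headline: 'We have not profiled this city yet',
};

function cityData(cityname: string): CityInfo {
  return CITIES[cityname] ?? DEFAULT_CITY;
}

// The route handler then reduces to:
// function CityR(req, res) { res.render('city.ejs', cityData(req.params.city)); }
```

Adding a city becomes a one-line data change rather than another branch in the handler.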
STACK_EDU
[Issue]: 'WebBrowserClient' does not contain a definition for 'LogWarning'

What platform are you experiencing this issue on? Windows
What architecture is your platform? 64-Bit
What version of UWB are you using? Preview GitLab Package
What Unity version are you running? 2021.3.2f1

Describe what the issue you are experiencing is.

When I import the newest package following the instructions I receive this error:

Library\PackageCache\dev.voltstro.unitywebbrowser@7ab688c164\Runtime\WebBrowserUI.cs(333,39): error CS1061: 'WebBrowserClient' does not contain a definition for 'LogWarning' and no accessible extension method 'LogWarning' accepting a first argument of type 'WebBrowserClient' could be found (are you missing a using directive or an assembly reference?)

Provide reproducible steps for this issue.

1. Create a new 2021.3.2f1 project.
2. Follow the instructions in the readme https://github.com/Voltstro-Studios/UnityWebBrowser/blob/01c1020ffc41aedadacbd6b10053b646e1be85f1/README.md:
- Open up the package manager via Windows -> Package Manager
- Click on the little + sign -> Add package from git URL...
- Type https://gitlab.com/Voltstro-Studios/WebBrowser/Package.git#2.x and add it
- Type https://gitlab.com/Voltstro-Studios/WebBrowser/Package.git#engine/cef/base and add it
- Type https://gitlab.com/Voltstro-Studios/WebBrowser/Package.git#engine/cef/win-x64 and add it (if you need Windows support)
- Type https://gitlab.com/Voltstro-Studios/WebBrowser/Package.git#engine/cef/linux-x64 and add it (if you need Linux support)
3. Unity will now download and install the package.
4. Boom! Issue.

Any additional info you like to provide? No response

I think this issue has something to do with the package manager and the way it's fetching things off GitLab. I think the package manager is probably just grabbing an out-of-date package, because it looks like WebBrowserUI shouldn't be showing up in the package version of the project. OK!
That's the case -- https://gitlab.com/Voltstro-Studios/WebBrowser/Package is out of sync with the rest of the project (which makes perfect sense since it's still in development); the only problem is that it's also not functional right now. My workaround for now is to include a fork of the project as a submodule of my project and import the packages from there.

I just had a look at it; it is due to a call to a LogWarning method that no longer exists, but the issue only happens if you are using the old input system (if you switch to the new input system you won't have this issue), since it is only in that compiler define. The reason why this was never caught by me is that at the time, the Linux version only supported the new input system, and I do all my development on Linux, so when I did a refactor it must not have caught this. The GitLab package is quite out of date, and I don't want to push a new version to it, as I want to completely ditch using GitLab for package hosting for many reasons. If you want to mitigate this issue for now, either use the new input system, or build the version in the repo and copy the packages you need from /src/Packages to <Your Project Dir>/Packages, which Unity will then automatically add. You will also need to add the OpenUPM scoped registry for the new version, like so:

"scopedRegistries": [
    {
        "name": "OpenUPM",
        "url": "https://package.openupm.com",
        "scopes": [
            "org.nuget",
            "com.cysharp.unitask"
        ]
    }
]

Once the full release is done, it should be as easy as just adding my package registry; it will already have all the dependencies mirrored in the registry, as well as all the UWB packages, so you don't have to add them one at a time from a git URL.

Thank you! Adding that scoped registry worked perfectly!

Issue should now be fixed in the latest release.
Richard Epstein has a new report on intellectual property. Epstein is a brilliant legal theorist (seriously–several of his books are classics of libertarian scholarship) but unfortunately, I think his analysis of IP issues–especially technology-related IP issues–is hampered by his lack of familiarity with the underlying domain. Take this passage about open source, for example:

One ongoing question is how well open source stacks up against traditional proprietary software. Much depends on the scale of the enterprise. The decentralized methods for open source work well with small systems, but are difficult to maintain as the network expands–a problem that any proprietary system also faces in integrating backwards to existing products while introducing new products. In addition, loose cooperatives must organize to fend off outsiders claiming that the entire system incorporates their trade secrets or IP. The present SCO litigation, for example, puts the entire Linux system at risk on these grounds, prompting the formation of a litigation committee to coordinate the common defense. Right now at the heart of the movement lies a commercial joint venture spearheaded by well-established firms like IBM, Intel, and Hewlett-Packard, which develop service and proprietary programs that operate on top of an open source infrastructure. The new development gives ample testimony that no loose assemblage of voluntary contributors will be able to carry the day any longer.

To be honest, I'm not sure I follow the third sentence. I think that, to the contrary, in many ways open source scales better than proprietary development models, because it takes advantage of decentralized, spontaneous processes to solve problems rather than relying on hierarchical, top-down processes.
Of course, generally speaking, large corporations like dealing with other large corporations for their IT needs, so it's not surprising that IBM does a lot of business selling open source software (along with some of their own proprietary software) to Fortune 500 companies. But that's not because open source can't solve the technical problems of large companies. It's simply that "open source," as an idea, doesn't have a sales force and can't meet with corporate IT directors. IBM does, and can, so it tends to get the IT contracts. But most of the value was created by the volunteers who built the underlying software. He then claims that "at the heart of the open source movement" are IBM, Intel, and HP. He doesn't elaborate, but I assume he's equating "open source" with "Linux." This is misleading for several reasons. First of all, those companies might be spending the most money on Linux-related products, but they're hardly the core of the Linux community. Linux is still developed by a decentralized group of mostly-volunteer programmers from a wide variety of institutions, led by Linus Torvalds. They probably don't seem significant to Mr. Epstein because they don't have PR departments or billion-dollar balance sheets, but they're the ones who control the direction of the core product. The work of IBM, Intel, HP, and their ilk is largely focused on making Linux work better on their particular systems, as well as building software on top of Linux to meet the needs of particular clients. Obviously, that's often helpful to the overall project, but it hardly puts Big Blue "at the heart" of the Linux effort. But the broader point is that Linux is just one out of dozens of major, successful open source products that are used by millions of people every day–and most of them receive far less corporate support than does Linux.
Most of them are programs that Epstein has probably never heard of–projects with names like Apache, Samba, Perl, Python, gcc, MySQL, KDE, Gnome, FreeBSD, OpenSSH–but that make up the "plumbing" that makes the Internet work. Each of these projects has a core team made up of, well, "a loose assemblage of voluntary contributors." Some of them get corporate support, but that support is incidental to the projects' viability in most cases. I can't think of any recent developments that prove that the open source model will not "be able to carry the day any longer." To the contrary, the open source development model continues to demonstrate its vitality by churning out spectacular products without significant corporate subsidies. Now, obviously it wouldn't be fair to expect a 50-something law professor to be intimately familiar with products like gcc and FreeBSD. Linux is the product that gets the most press, and IBM is the Linux contributor that gets the most attention, so Epstein naturally assumes that IBM is the biggest driver of open source software. It's an understandable error, but these kinds of blind spots are dangerous when you're doing public policy analysis. If you misdiagnose the source of innovation, you're likely to misunderstand the institutions required to promote it. Computer geeks are the ones closest to the ground of high-tech innovation. When they're shouting from the rooftops about problems with our IP system, I think the law professors of the world ought to pay a bit more attention to what they have to say.
Thanks for the clarification. I'm really new to this, and do not know of interactive mode. I presume there's a command to turn it off and on, and possibly an example of how to do it. Is this method of operation with show() mentioned anywhere?

The unfortunate present use of show() is that it ties up the shell window, where I happen to have written program output. It's handy to put it there, since it's meant to be interactive. The user is keyboard-arrowing through images, and statistical data is placed on the shell window. At the same time he sees a plot of data relevant to the image. He needs to close the plot window before going to the next image. I can probably figure out how to kill the plot window when he does that.

My problem with using ipython is that the program I'm modifying is used with IDLE, and people have gotten used to it that way. I had nothing to do with that method of operation. I doubt any of the users would be agreeable to using ipython. None of them know Python. The next time the program is released, I may provide it in executable form.

I used MATLAB five years ago, for about two months. To see if it could help me understand MPL, I fired it up, and it's now working. Perhaps the interactive operation is explained there. I take it there is no show() there?

Interesting mention of "non-blocking". In the midst of this dilemma, I started getting socket errors. Using McAfee, I found pythonw was blocked. Would that be in any way associated with the use of show()? I've since changed it to outbound blocking.

On 2/9/2010 8:18 AM, John Hunter wrote:
On Tue, Feb 9, 2010 at 10:06 AM, Wayne Watson <sierra_mtnview@...209...> wrote:
the last line. "show" is meant to start the GUI mainloop, which is usually blocking, and raise all windows, so the behavior you are reporting is the intended behavior. When working interactively, as in Idle, you shouldn't need to use show if you turn interactive mode on.
We recommend using ipython in pylab mode when working interactively, because it is designed to make the correct interactive settings and override "show" to be non-blocking. You can obtain the right results in matplotlib using Idle if you are careful, but for "just works out of the box", ipython in pylab mode will be easier.

"Crime is way down. War is declining. And that's far from the good news." -- Steven Pinker (and other sources) Why is this true, but yet the media says otherwise? The media knows very well how to manipulate us (see limbic, emotion, $$). -- WTW
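For concreteness, here is a minimal sketch of the interactive-mode workflow John Hunter describes: with interactive mode on, plotting commands update the figure immediately and show() is not needed to see output, and the script can close the window itself rather than waiting for the user. The Agg backend is used here only so the sketch runs anywhere; in the poster's scenario a GUI backend such as TkAgg would show an actual window.

```python
import matplotlib
matplotlib.use("Agg")  # non-GUI backend so this sketch runs headless; use TkAgg etc. in practice
import matplotlib.pyplot as plt

plt.ion()  # interactive mode on: plotting commands take effect immediately, no blocking show()
fig, ax = plt.subplots()
ax.plot([1, 2, 3], [4, 1, 9])
ax.set_title("data for current image")
plt.draw()  # refresh the figure without blocking the script

# ... user arrows to the next image; the script, not the user, dismisses the plot ...
plt.close(fig)  # programmatically close the window
plt.ioff()      # interactive mode back off
```

This is what lets the shell window keep printing statistics while the plot stays up: nothing blocks on the GUI mainloop until the program decides it should.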
What were all the occasions where the Star Trek Captains have met each other?

According to Iszi, "Well, it would be the only time (IIRC) where Kirk & Picard were together" (speaking of ST: Generations). What are all the occasions - in movies or TV, or other licensed material - where the Star Trek Captains (Kirk, Picard, Sisko, Janeway, Archer, Alternate-Universe Kirk) have met or interacted with each other? (If they meet multiple times in the same movie/TV episode, that's one instance.)

@DVK I'm not quite sure this is a good fit for SE. It's a list question that covers a lot of territory, and it's an indefinite one at that - since there are still Star Trek productions in work.
@Iszi If we're talking only about the four captains - it's a short answer. It really depends on the question...
This really does seem like a list question to me.
@DVK You really need to work on this; it's a list question as is. Perhaps reduce it to the scope of your first comment.
Not every question whose answer is a list is a list question. There's only a problem if each answer is likely to list only one item. If you can clearly distinguish which answer is better (the one with the most complete answer, i.e. most complete list), then there's absolutely no problem with the question. I do agree that perhaps DVK should list exactly which characters he's interested in.
@Beofett It is finite... until the next movie/series/book comes out.
@Iszi ah, I missed the book portion. And I guess with the reboot of the movie franchise, just about any crossover is possible.
@Beofett Don't forget, too, that Michael Dorn has been wanting to do another series focused on Worf.
@Iszi - can you provide at least ONE more example aside from the 3 already mentioned in the question and 2 comments that would make it a "wide open list" above 3 elements?
@Iszi - isn't the new Star Trek still Kirk as the main figure?
(haven't seen the reboot yet)
@DVK The new Star Trek is effectively an alternate-universe Kirk - not really the same Kirk as from TOS.
@Jeff - small-scale, well-defined, single-franchise list questions aren't off-topic. As it stands, nobody could provide more than 3 examples for this "list" - all listed in a single answer.
DVK, I'm not quite sure I understand your query for "ONE more example". The problems with a question like this are many. First, the scope is not well-defined. You said "main figures", and you did happen to list captains, but there are many more characters in each series who may (or may not) be considered a "main figure" depending on who you ask. Second, for what parts of the question's scope are determinable, this question's scope is huge. For one person to provide an authoritative answer, they would have to have seen every movie & episode and read every comic & book. ... ... I personally doubt that there is one person who even has this entire collection, let alone has gone through all of it or remembers it all in enough detail to provide a complete answer. Third, the scope is still open-ended. There are still other Star Trek productions in progress and in concept which would be in-scope of this question. A prime example would be the reboot series. Others might be the Worf-centric series proposed by Michael Dorn, or any of a number of possible spin-off books or comics based on existing works. ... ... In short, it is very unlikely that any one answer here will be a complete and authoritative answer. Even if it is, it has a high potential to become outdated before long.
@DVK Given the ensemble-cast nature of ST, "Main Characters" was pretty ambiguous, even with your partial list. I actually took it for granted that characters like Spock and Data were considered "Main". Thank you for clarifying. That improves the question considerably. I've removed my -1, and I'm trying to decide if I should vote to reopen.
However, the whole issue of potential changes due to yet-to-be-released books, movies, and whatnot is still concerning to me. How good is a question where the answer may have to be updated every couple of years?
@DVK I see you did fix my problem #1. I edited the question further to specifically call out that you're looking for what I believe most Trekkies would call the "Star Trek Captains". As Beofett says, that is a considerable improvement to the question. However, problems two and three on my list still remain - with problem two being perhaps the most significant. Take "other licensed works" out of scope, and I may be more inclined to vote for re-open.
@Iszi - Not being familiar with DS9, I was not sure if Sisko is, indeed, a captain. If he is, the edit works.
@Beofett - IIRC it was discussed on Meta that "there are more works coming" is not a valid reason to close a question.
@Beofett - for example, 100% of Star Wars questions should be closed, since until Lucas dies there's a higher chance of him making more edits to invalidate any and every single one of their answers than of a new ST series introducing tons of cross-captain meetings.
@DVK Personally, I see the open-endedness as a much smaller problem now that the scope is limited to just the Star Trek Captains. I think if you would be willing to further limit it to just the canonical films and TV series, it would be a much more answerable question - it would get my re-open vote, at least.
@DVK I remember a recent brief discussion that touched on it, but I do not recall seeing any community consensus on meta stating that it is not a valid close reason. Is there another discussion that you are referring to?
@DVK That's a strawman. The amount of Star Wars questions that ask "list every instance where this happens" is very far from 100%.
@Beofett - Aside from "this answer will change", how is "there are more series coming" any more of a reason to dislike a list question than a non-list question?
And if the reason to dislike is "this answer will change", as I noted, a non-list SW question has a HIGHER likelihood of being invalidated by Lucas than this question by introducing new series.
@DVK it is more of an issue of "this answer will change repeatedly" than "this answer might change once". However, I agree that "there might be more movies/tv shows coming" isn't strong enough to VTC. I did so more based off of the myriad books, comics, and video games. Since those are no longer part of the question, I had already cast my vote to re-open.
@Beofett You're mistaken - those are still part of the question. It's the non-Captain "main cast" that's been pulled out.
@Iszi Ugh. I saw "non-canon" edited out, and missed that the "other licensed materials" was left in. How many books alone are there? How many comics? How much material exists in video games? That's a huge scope.
@Keen - I think you can remove all of mine... or better yet migrate the whole thread to chat and delete all of them.

Spock met Spock, and Spock had been a captain. Additionally, other characters became captains, such as Sulu. So depending on the requirements of your question, there are more possible answers. Additionally, if you count space stations, Picard visited Deep Space Nine and met Sisko.
@Jason - the original question was very specific to the "main" leader of each series (e.g. Kirk - but NOT Spock - for TOS. The subsequent edits kind of erased that precision). Picard meeting Sisko is exactly the kind of event I was looking for.

Kirk met Picard in "ST: Generations"
Janeway talked with Picard in "ST: Nemesis"
Picard met Sisko in "DS9: Emissary"
Admiral McCoy and Lt. Commander Data - TNG: Encounter at Farpoint.
Picard received a call (and orders, a diplomatic mission to Romulus) from Admiral Janeway in one episode. I just watched it a few days ago but I'm not sure which it was.
@Kevin: That would be the movie Nemesis (already on the list).
@jwodder so it was.
Evidently I forgot we pulled up a full movie instead of a series episode.
Why not time travel? There actually aren't many AFAIK that result in characters from completely different arcs meeting each other; the only one I can think of is Sisko and the DS9 crew meeting Kirk and the Enterprise crew in the TWT flashback. And technically, the Kirk/Picard meeting can easily be considered time travel, because the Nexus transcends time.
@KeithS How about Sulu and Janeway then?
I mentioned TWT in a comment above.
Oh Oh!!!! Tuvok and Sulu, obviously!!!!
Not sure if this one counts, but Crusher activates the Enterprise-E's EMH in First Contact, which, being a standard Mark I, is visually identical to The Doctor on Voyager. Not the same character, but the same actor.
This actually highlights a problem common to "list" questions... answers that are "here are the ones I can think of off the top of my head" or "this is probably missing a bunch, but..." are, by the very nature of the question being "what are all...", wrong.
@Beofett That's true, but if we're limiting ourselves to movies and TV it should die out pretty quickly, and I think it's fun if it doesn't get out of hand :)
@E.T. True, but if we wind up with a bunch of incomplete answers, it simply isn't quality content. I think new info in comments should be edited into this answer to make it as complete as possible.
The question very clearly indicated that this is only about the "main" character. Riker, Tuvok, etc. are all off-topic.
@all - the question wasn't about "list all crossovers", it was about "did the main character - usually the Enterprise's captain, except for DS9 - from each series meet each other". MUCH smaller scope.
Though I don't think it was shown, I think it's safe to assume that Janeway would have met Sisko, since Voyager's first episode started out on DS9.
What 3D framework for Flash should I use? Which one is the best? It seems like Papervision is no longer updated and Away3D is only for Flash Player 11. I need a good (maintained) free framework to start with, for Flash Player 10. Any suggestions?

Away3D is not only for Flash Player 11... there are already several instances of Away3D for AS3, though the team has moved on to Flash 11 3D stuff, and how much maintenance there is for the FP10 version of Away3D I do not know...

It all depends if you are aiming at the current tech (CPU rendering) or the next-gen Stage3D APIs (GPU rendering). FP11 has gone into beta, and it probably won't be too long before it's released into the wild - it opens up a lot of doors that prior versions simply couldn't. Whatever you do with the current-gen stuff, keep your eyes on Stage3D. It's a game-changer. At any rate - which one is the best? The one you know, or the one that makes the most sense to you. If you are just getting into Flash 3D, then it can't hurt to test several out.

GPU rendering (>= FP11)
Away3D (Broomstick) - is probably the right choice, given its community of users and documentation (it's not just FP11, btw; it's been around for several years). It has a great feature set and is pretty easy to get up and running. You cannot go wrong.
Alternativa3D - I don't have much experience with it (my understanding was that it wasn't a pure ActionScript solution - that may change with Stage3D), but the results are quite good.
Unity3D - apparently is going to be integrating SWF / AS3 support. It is a robust middleware platform, and its awesomeness cannot be overstated. It will support scripting in AS3 and export SWFs. How cool is that?

CPU rendering (<= FP10)
Papervision3D - although it is no longer being supported (or actively improved, anyway), I feel that it remains a quality solution. PV2.0 has a lot of really nice features and is straightforward to use.
Considering all work on non-Stage3D frameworks is likely to slow the hell down, PV2.0 is as good as any other choice (and arguably better).
Away3D is also quite good. It was a fork of Papervision, so if you have any experience with that, Away will be very familiar. The real bonus with Away3D is that when FP11 does release, you will be able to integrate with the new features pretty seamlessly, as the architecture of the central framework hasn't changed that much.
There are several others (Sandy3D, Five3D, Alternativa, etc.) but I have no experience with them. Hope that helps.

thx, seems like Away3D is a great choice :)
I've spent a lot of time with Papervision3D (I'm currently finishing up a PV project, actually, and still really like it). But at this point - Away3D is definitely the right choice. Glad to help.
Alternativa3D seems to be the winner these days.
Away3D is not just for Flash Player 11. The latest stable release can be found here: http://away3d.com/download/away3d_3.6.0
Understanding Bottom-up Dynamic Programming

I'm trying to understand the solution for LeetCode 983. I do understand the relation dp[day] = min(dp[day - 1] + costs[0], dp[day - 7] + costs[1], dp[day - 30] + costs[2]) and the fact that dp[i] represents the minimum cost of traveling until day i. What bugs me is that, say, day - 7 doesn't have to be a day we were traveling (i.e. it's not in days). It means (as I see it) that we can't just jump from that day to our current day. So I'm kind of missing how it works as expected. I do understand that we carry dp[day] = dp[day - 1] if day wasn't a traveling day, but I still don't understand why it's allowed. I think the following test case represents my confusion. On day 41, why are we allowed to look at day 41 - 30?

days = [1,2,3,4,41,42,43,44,45]
costs = [3, 500, 4]

Think of dp[d] as "the cost of meeting all your ticket requirements for journeys on or before day d". Do you agree that is a meaningful quantity when d itself is not a travelling day?
@slothrop, well yeah, because dp[lastDay] is what we want, if that's the definition of dp.
OK, so in your example, dp[34] has a coherent meaning even though day 34 is not a travelling day. It's not a problem then to use dp[34] in the calculation of dp[41].
@slothrop, honestly I'm still stuck with the thought that it's not valid, since 34 isn't a travel day and we might have carried dp[34] from an earlier day in which we couldn't buy a 7-day pass, since the gap is larger than 7 days.
Can you detail the scenario that you think would give incorrect results here? (btw, I should have said that the same argument goes for dp[11]. The calculation for dp[41] involves dp[40], dp[34] and dp[11], all of which are meaningful quantities.)
Actually, let me ask a question about your thinking. Obviously in DP we iteratively calculate dp[d] for each d in turn.
Are you thinking of that loop as a kind of day-by-day simulation, where in each iteration of the loop we make a decision about our strategy for a given day? That view doesn't really work, because for example we don't "know" whether we bought a 7-day pass on day 34 until we calculate dp[41]. The loop should just be thought of as a mechanical procedure for solving the mathematical recurrence relations that describe the scenario. @slothrop, sort of. What did you mean by "ticket requirements"? you mean that dp[i] promises that I'll cover all the days in days until the day i with the minimum cost? Exactly. So if a 7-day pass costs $Y, then the inference is "if I can cover my costs until day i-7 for $X, then I can cover my costs until day i for at most $X+Y". You can make similar inferences for the 1-day and 30-day passes. @slothrop, I think I'm sort of getting it through some examples. Like days=[1,100,101] and costs=[5,1,1000]. When evaluating dp[101] we find out that we could have bought a 7-days pass for both days 100 and 101 by "coming from" day 94 Exactly. The calculation at every point is purely backward-looking and doesn't use information about later days in the schedule. At the point you calculate dp[94] you don't need to know whether you will buy a 7-day pass on day 94: that just ends up being figured out in the later steps of the calculation. So you had maybe been thinking "when we calculate dp[94] we can't know whether we should buy a pass" - that's a correct observation, but it's not a problem - the overall calculation works through repeated backward-looking steps and never requires us to look forward in time.
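The recurrence discussed above can be sketched as a short bottom-up implementation (function name mine; costs = [1-day, 7-day, 30-day] pass prices, as in the problem):

```python
def mincost_tickets(days, costs):
    """Bottom-up DP for the minimum-cost-tickets problem (LeetCode 983).

    dp[d] = cheapest way to cover every travel day <= d. This is meaningful
    even when d itself is not a travel day.
    """
    travel = set(days)
    last = days[-1]
    dp = [0] * (last + 1)
    for d in range(1, last + 1):
        if d not in travel:
            # Not a travel day: no new ticket needed, cost carries over.
            dp[d] = dp[d - 1]
        else:
            dp[d] = min(
                dp[d - 1] + costs[0],           # 1-day pass for today
                dp[max(0, d - 7)] + costs[1],   # 7-day pass covering today
                dp[max(0, d - 30)] + costs[2],  # 30-day pass covering today
            )
    return dp[last]
```

Note how dp[34] in the question's example is produced simply by carrying dp[33] forward: it still means "cost of covering all travel days up to day 34", so using it inside the calculation of dp[41] is perfectly valid. The calculation is purely backward-looking, exactly as described above.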
ant executes within Eclipse but not from the command line

I have a really basic question about Ant. When I execute the build.xml file within Eclipse with Run as -> Ant Build, then everything works fine. However, when I try to run the same build.xml file from the command line like

ant -f build.xml

then for some classes I get errors like:

Error: package com.sun.image.codec.jpeg does not exist

Any ideas what I should do? Thanks

Check http://stackoverflow.com/questions/1906673/import-com-sun-image-codec-jpeg - as is clear from the error, it's saying that package com.sun.image.codec.jpeg does not exist, i.e. it is not present. What's happening in the case of Eclipse is that that jar/package is in the build path of the project, as a result of which it's getting included in the build process from Eclipse. As a solution, try to include that jar file in the build path of Ant, i.e. put that path in the Ant script and then try building the project from the terminal. It should work.

Yep, it worked. Thanks. I put "c:\Program Files (x86)\Java\jre7\lib" in the build.xml file, and it worked. Is it possible to give this path as an Ant argument on the command line? I don't want to have it in the build.xml file. Thanks

NO! Don't reference the JRE! Download the JDK from Oracle. Then point %JAVA_HOME% to that, and put %JAVA_HOME%\bin in front of your path before C:\Windows\system32. You want to make sure you're using the JDK!

Hi David, my system was already configured as you describe, but that does not solve my problem, as the library that I need, "rt.jar", is in the "lib" folder of the JRE, not the JDK. The problem is that Ant 1.9.2 does not include this library by default when building from the build.xml file. As Tushar pointed out, when building within Eclipse, Eclipse includes these libraries, but I don't know how to point Ant to include the library when executed from the command line.

Can't tell too much from your brief description.
The first question is whether this is an issue with Ant itself or your build.xml file. Eclipse installs its own version of Ant. I recommend you download the latest from the Ant project page. It's version 1.9.1 or 1.9.2. Now, let's do a simple test. Write a simple build.xml:

<project>
    <echo>Hello, world!</echo>
</project>

And run that. If this works, the problem may be your build.xml file. It might depend upon embedded Eclipse jars. However, looking up this particular error in Grep Code, I see it's a dependency upon the Java JDK itself. Again, Eclipse will come with an embedded JDK (it requires a JRE to run, but needs the JDK to compile). Do you have the Java 1.6 or Java 1.7 JDK installed on your system? Do you have it in your path? Do you have $JAVA_HOME set in your terminal pointing to it? Is $JAVA_HOME exported (if you're using Mac or Linux or Unix)? Try each of these things, then update your question with your findings.

It's not a problem with Ant; I already managed to build simple programs, and Ant is building a large part of my build.xml file. However, when it comes to the compilation part, I get this error. The question is rather whether I can add some input parameters to Ant such that standard Java libraries can be included.

You shouldn't have to include standard Java libraries that are part of the JDK itself. Do you have $JAVA_HOME pointing to a JDK and not a JRE? What version of Ant are you using? Is it one you installed, or part of Eclipse?