My first tidbit is actually based on something I borrowed from Quick Hits. In fact, if I remember correctly, it was described by Scott Sernau, who was one of my teaching mentors at IUSB. But I adapted it to my needs.
Simply put, I let students select topics to be covered during the semester. And I find it advantageous in some contexts.
The way I do it is quite simple. During the first week of class, I distribute a list of topics and each student has to select a limited number of them. The topics with the largest numbers of votes are added to the syllabus, and the coursepack is built based on this selection.
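The tally itself is simple enough to sketch. Here is an illustrative Python fragment — the function and variable names are mine, not part of the original process:

```python
from collections import Counter

# Illustrative sketch of the selection process described above: each student
# submits a limited number of topic picks, and the most-voted topics make
# the syllabus. Names and numbers are hypothetical.
def pick_topics(ballots, syllabus_slots):
    """ballots is a list of per-student topic lists."""
    votes = Counter(topic for ballot in ballots for topic in ballot)
    return [topic for topic, _ in votes.most_common(syllabus_slots)]
```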
I reserve the right to merge topics. So, two topics which seemed relatively unpopular may form parts of a broader topic which can reach more students. I put topics in a sequence after they’ve been selected, to avoid some bias effects on the selection process. And I typically have predetermined topics for the first few weeks of class, both as a way to make sure everyone is on the same page, especially about basic concepts, and because it gives me more time to prepare the readings for the rest of the semester.
Some things make this technique more practical. For instance, it works well in a small seminar but it’d be very hard to do in a large textbook-based class. Where I first used it (IUSB), it was possible to build the coursepack through the semester. In fact, the electronic reserve system even allowed me to bypass the coursepack format altogether. At another place where I’ve used it (Tufts), coursepacks took enough time to build that it could only be done for a later section in the semester. I had to start with prepared material before the semester started. In other cases, including Concordia, the coursepack system is such that it’d be very impractical to use this technique unless it’s possible to meet students weeks before the course starts.
I’ve noticed a number of advantages with this technique. One is that it pushes students to engage in those broad issues of course design which give them insight into the course as a whole. Not only does it mean that students are a bit less passive, but they get a behind-the-scenes look at what teaching involves and may understand diverse things about the way topics relate to one another.
A related advantage is that students can claim ownership for a dimension of the class. Even without discussing the effects of the selection process very specifically (it’s not my thing to say “you chose the topics, don’t blame me if you don’t like them”), there’s a clear sense that the course as a whole becomes a shared responsibility.
In fact, I’ve associated this with the typical seminar structure of having individual students “responsible for” individual topics. Though everyone has to understand all the topics, each student becomes more of an expert in a given topic, often doing a presentation about it. I should elaborate on this as a separate tidbit, but it’s a common format for seminars, in some contexts. The way it works with the collaborative syllabus is that people can choose, at the same time, a series of topics they want covered and a specific topic on which they want to work. I usually try to get the student’s expertise on that topic to carry through the semester, but that part hasn’t been too effective.
Yet another thing I’ve noticed with the collaborative syllabus is that the way I explain a topic may have a large effect on how students select topics. For instance, the first time I tried this method, in a seminar about linguistic anthropology, I had semiotics as a topic. When I explained it, I mentioned zoosemiotics and associated animal language with that topic. That semester, semiotics ended up being the most popular topic in the initial vote, something which I wouldn’t have expected, had I designed the syllabus by myself, without student input.
|
OPCFW_CODE
|
ESP8266 is not showing as a member on FTS Server
I have successfully connected to WiFi and the FTS server, and the steps below are working.
The ESP is sending the CoT every 10 seconds.
But I can't see the device in the member list and can't send commands to the device like "Roger" and "callRESET".
Step 1
The device connects via long TCP connection and sends a ping COT [t-x-c-t].
Step 2
Then after 10s it sends an update COT [a-f-G-U-C-I].
Step 3
Then for every 8 ping COTS 1 update COT will be sent in a loop as long as the TCP connection is active.
It should show the device at lat="0.00000000" lon="0.00000000", but the position can't update since there is no GPS module.
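The three steps above amount to a simple scheduling rule. Here is an illustrative sketch of it (my own function name and tick counter, not code from the actual .ino):

```cpp
#include <string>

// Illustrative scheduling of the CoT loop described in steps 1-3:
// tick 0 sends the initial ping, tick 1 the first update, and after
// that every 9th message is an update (i.e. 8 pings per update).
std::string cotTypeForTick(unsigned tick) {
    if (tick == 0) return "t-x-c-t";               // step 1: initial ping
    if ((tick - 1) % 9 == 0) return "a-f-G-U-C-I"; // step 2/3: update CoT
    return "t-x-c-t";                              // step 3: ping CoT
}
```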
What version of FTS and what version of the sketch are you using? In ESP8266_TAK_v<IP_ADDRESS>.ino I have a callRESTART function but don't have a callRESET function.
Check out the static void replyToServer(void* arg) method and the static void handleData(void* arg, AsyncClient* client, void *data, size_t len) method.
let me know if you still have an issue.
Hi, thanks for the troubleshooting. I'm using FTS 1.9.9; maybe there is a problem with sending CoT at the moment?
Using the v<IP_ADDRESS> sketch
The device is not on the map; I tried to find it with the WinTAK client and CIV on Android.
I checked the static void replyToServer(void* arg) method and changed all four lines to this
sprintf(message, "<?xml version=\"1.0\" encoding=\"utf-8\" standalone=\"yes\"?><event version=\"2.0\" uid=\"S-1-5-21-1568504889-667903775-1938598950-%d\" type=\"a-f-G-U-C-I\" time=\"%s\" start=\"%s\" stale=\"%s\" how=\"h-g-i-g-o\"><point lat=\"0\" lon=\"0\" hae=\"0\" ce=\"9999999\" le=\"9999999\"/><detail><takv version=\"<IP_ADDRESS>\" platform=\"WinTAK-CIV\" os=\"ESP8266 OS\" device=\"ESP8266 Device using WinTAK COT\"/><contact callsign=\"TFPU4-Relail-1\" endpoint=\"*:-1:stcp\"/><uid Droid=\"Droid_%d\"/><__group name=\"Red\" role=\"Team Member\"/><status battery=\"69\"/><track course=\"0.00000000\" speed=\"0.00000000\"/></detail></event>", IMEI_NO, TIME, START, STALE, IMEI_NO, IMEI_NO);
After that, the ESP is shown as a member on the FTS server, but it disconnects after a few seconds and does not connect again as a member.
Hmmm, it's working for me on FTS 1.9.9. Are you using it over LAN or on the actual internet? Port 4242 gets used on LAN for direct private chats.
If you try to trigger the events via public chat, it should work. Not sure why the ping is not updating, though. If it still doesn't work for you, then I'll upload the exact sketch I'm running now.
Hi, I'm using it via the internet, so all traffic should be sent over port 8087.
I tried triggering callRESTART or Roger over the "All Group Chat" and nothing happened. The device is not restarting and the internal LED is not blinking.
It would be nice if you could upload your current running version :)
I added the version I was testing
Thanks. With this version, the ESP shows up as a member for a second after it boots up.
After one second it shows as offline in ATAK, and if I send commands via the group chat nothing happens.
I think the device goes stale, so you will have to add the correct date via:
#define TIME "2022-04-12T13:31:00.000Z"
#define START "2022-04-12T13:31:00.000Z"
#define STALE "2022-04-12T13:31:00.000Z"
I will add an auto date function sometime in the future but I am busy with exams at the moment.
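Until an auto date function lands, the formatting half of it is easy to sketch in standard C++. This is my illustration, not code from the sketch — on the ESP8266 the time_t would come from NTP (e.g. via configTime()), which is not shown here:

```cpp
#include <ctime>
#include <string>

// Format a Unix timestamp into the CoT-style UTC string used by the
// TIME/START/STALE defines above. We only have whole seconds, so the
// milliseconds field is fixed at ".000".
std::string cotTimestamp(time_t t) {
    char buf[32];
    std::strftime(buf, sizeof(buf), "%Y-%m-%dT%H:%M:%S.000Z", std::gmtime(&t));
    return buf;
}
```

For STALE you would add some offset (say, 60 seconds) to the current time before formatting, so the server does not mark the device stale immediately.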
Regarding the commands, I think the problem is the msgStr = (char*)data; variable. It needs to use the (uint8_t*)data and len variables passed through handleData(void* arg, AsyncClient* client, void *data, size_t len), but I could only parse it with reference to the data as a char* using safe strings. The solution may be to use a better TCP library, or to find a way to get a string using (uint8_t*)data and len, but at the moment I have no idea.
It was working for me because I don't have much activity on port 8087, so my string variable was able to get the Roger and callRESTART commands, but the length could be anything, so it's not a good solution. The msgStr variable must use the (uint8_t*)data and len variables to get a reliable string reading. Strings are a known source of problems on embedded devices.
I will have to think about it, but perhaps in the meantime you could try to find a reliable solution?
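One reliable option is the bounded-copy idea: construct the string from the pointer *and* len together, so no NUL terminator is needed. This is a sketch of the approach, not the sketch's actual code:

```cpp
#include <cstddef>
#include <string>

// Build the message from the raw TCP payload using its explicit length,
// instead of casting to char* and hoping for a NUL terminator. The
// (pointer, length) constructor copies exactly len bytes, so the result
// is correct even when the payload is not NUL-terminated.
std::string messageFromTcpData(const void* data, std::size_t len) {
    return std::string(static_cast<const char*>(data), len);
}
```

On the Arduino side the same idea works with a memcpy into a fixed buffer followed by buf[len] = '\0' (after checking len against the buffer size).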
|
GITHUB_ARCHIVE
|
Due: *Monday* October 11th (paper due the following Monday.)
Note that October 11th looks like a Wednesday on normal Calendars, but it is a Monday in Bridgewater.
This is a modification of a lab from Rowan University. In this lab your robot will measure a cardboard box. You will use a GoPiGo robot to do the project. Your robot will have to sense the box and measure how far it has traveled when it believes that it has gone beyond the end of the box.
Each group has the standard robot kit in the lab. The bin contains:
A gopigo robot
with raspberry pi
1 Sonar Sensor
1 I2C distance sensor (provided they get here when I expect)
the builtin encoders
If you are missing something, you need to let me know ASAP - don't go scavenging.
Two cardboard boxes are also at the back of the lab for you to practice on. A new box will be used for demonstrations on Oct 11th.
As previously mentioned, the lab is available anytime that the Science Building is open except for during a class.
Comp 206 meets in this lab at the following times this semester:
Your task is to build and program a robot that can do one or both of the following tasks. Note that the first task can give you a maximum grade of 89 (B+) if done perfectly, while the second, harder task can get you a full 100% if done perfectly. However, if not done well, both projects can earn you much lower grades. It is better to get the first task working well and then start working on the second task, rather than just starting on the second task and not getting it working very well.
Task 1: Your robot must calculate the volume of the box, having been given the depth and the height of the box by the instructor, with your robot measuring the width of the box. You must have a way of entering the depth and height information into the robot. You can assume that the depth and height you are given will be integers; however, you should make no such assumption for the width that the robot measures (it should be a floating-point value). All measurements will be in centimeters. When your robot has traversed and measured the width of the box, it should display the volume of the box on its display.
Task 2: Your robot must calculate the volume of the box, having been given the height of the box by the instructor, with your robot measuring the width and depth of the box. As above, you need a way of entering the integer height of the box into the robot. Your robot's measurements of the width and depth are to be done as floating-point values, though. Again, measurements will be in centimeters. When your robot is done measuring, it needs to print out the volume of the box that it has calculated.
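As a rough sketch of how Task 1 could fit together — this is illustrative logic, not starter code. The step size, the sensor threshold, and the idea of passing the sensor read in as a callback are all assumptions; on the real robot the callback would wrap your distance sensor and each step would be an actual drive command:

```python
def measure_width(read_distance_cm, step_cm=0.5, box_threshold_cm=30.0):
    """Creep alongside the box in small steps; the width is the distance
    traveled while the side-facing sensor still sees the box."""
    traveled = 0.0
    while read_distance_cm() < box_threshold_cm:
        # On the real robot this step would be a short drive command.
        traveled += step_cm
    return traveled  # a floating-point value, in centimeters

def box_volume(depth_cm, height_cm, width_cm):
    """Depth and height arrive as integers; width is the measured float."""
    return depth_cm * height_cm * width_cm
```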
Sensor calibrations: the box will be cardboard and at least 6 inches high and at least 6 inches in both length and width. Beyond that, nothing is guaranteed.
Don't move the box. The box will be light and might move if your robot runs into it - this will give you incorrect readings for sure.
Have fun with it and I look forward to seeing your projects.
The project report is a report of what you tried to do, what you did, what you learned and what you accomplished. To make my correcting easier, let me give you guidelines on what I'd like to see in it. Make sure you use section headings to make each section easy to find.
This is where you explain the problem you were trying to solve and why it is relevant.
Here tell me what sort of robot you designed (in hardware). Tell me what worked and what did not work. Discuss what you learned based on what worked and what did not. Some of the robot is setup for you, but sensor placement is up to you and could make a great deal of difference.
Here discuss what sort of control program you built. Again tell me what worked and what did not. Discuss what you learned about robot control software from your experience. Discuss your approach and its relevance to both the current task at hand and the general problem of robots acting in the world.
Summarize what you learned. Consider the following target audience: next year's robotics students. In this section, summarize from the preceding sections all of the worthwhile dos and don'ts that you discovered in doing this lab. It is not really relevant that your robot did really great unless you tell the reader why. Think about what you would have liked to know when you first saw this lab, and if you have any insights after doing the lab, share them here.
|
OPCFW_CODE
|
Mudrod-132 Simplify command line log ingestion
This pull request should simplify the mudrod-engine command line tool used for log ingestion.
It includes a fix for https://github.com/mudrod/mudrod/issues/132 among other improvements made to the documentation/usage of the mudrod-engine command line tool.
I've updated the documentation slightly https://github.com/mudrod/mudrod/wiki/Command-Line-Interface
I still don't understand this directory structure thing or how these configuration parameters are used https://github.com/mudrod/mudrod/blob/master/core/src/main/resources/config.xml#L27
The reason is because my data directory looks nothing like
dataDir/
    /httplog
        /httplog
    /ftplog
        /ftplog
    /RawMetadata
        /1.json
        /2.json
I set my dataDir to -dataDir /Users/greguska/data/mudrod/201611/. That directory contains 2 files:
FTP.201611.gz
WWW.201611.gz
And the ingestion seems to work fine with this structure.
I think further clarification is still needed.
It seems like I'm almost there. I was able to run the full ingestion locally but it failed part of the way through:
2017-05-03 15:46:33,879 INFO pre.SessionStatistic (SessionStatistic.java:execute(66)) - Starting Session Summarization.
[Stage 10:========================> (42 + 4) / 96][Stage 10:==========================> (45 + 4) / 96]2017-05-03 15:46:44,796 ERROR executor.Executor (Logging.scala:logError(91)) - Exception in task 47.0 in stage 10.0 (TID 445)
java.lang.NullPointerException
at java.io.Reader.<init>(Reader.java:78)
at java.io.InputStreamReader.<init>(InputStreamReader.java:72)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:1407)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:1433)
at org.apache.commons.io.IOUtils.toString(IOUtils.java:585)
at org.apache.commons.io.IOUtils.toString(IOUtils.java:563)
at gov.nasa.jpl.mudrod.discoveryengine.MudrodAbstract.initMudrod(MudrodAbstract.java:71)
at gov.nasa.jpl.mudrod.discoveryengine.MudrodAbstract.<init>(MudrodAbstract.java:56)
at gov.nasa.jpl.mudrod.weblog.structure.RequestUrl.<init>(RequestUrl.java:43)
at gov.nasa.jpl.mudrod.weblog.pre.SessionStatistic.processSession(SessionStatistic.java:236)
at gov.nasa.jpl.mudrod.weblog.pre.SessionStatistic$2.call(SessionStatistic.java:157)
at gov.nasa.jpl.mudrod.weblog.pre.SessionStatistic$2.call(SessionStatistic.java:147)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:153)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:153)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
It seems to be having an issue reading elastic_settings.json from the classpath. I'll keep looking.
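The NullPointerException above is consistent with IOUtils.toString being handed a null InputStream, i.e. the classpath lookup for elastic_settings.json returning null. A hedged sketch of a guard — my illustration, not Mudrod's actual code; it uses InputStream.readAllBytes (Java 9+) instead of commons-io:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class ResourceLoader {
    // Fail fast with a clear message when a classpath resource is missing,
    // instead of letting a Reader constructor throw a bare NullPointerException.
    public static String readResource(String name) throws IOException {
        InputStream in = ResourceLoader.class.getClassLoader().getResourceAsStream(name);
        if (in == null) {
            throw new IOException("Resource not found on classpath: " + name);
        }
        try (InputStream stream = in) {
            return new String(stream.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}
```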
Again I am stuck.
Opened new issue https://github.com/mudrod/mudrod/issues/162
Issue https://github.com/mudrod/mudrod/issues/162 is resolved with @quintinali's latest commit https://github.com/mudrod/mudrod/pull/159/commits/819f54fe76624bc3ff2fec3dfa75e79a7c5b025a
However, there is a new problem https://github.com/mudrod/mudrod/issues/163
Issue https://github.com/mudrod/mudrod/issues/163 is not blocking log ingestion. It will be tracked separately from this PR.
I was able to run full log ingestion locally. I'm going to merge to master and run log ingestion on production.
|
GITHUB_ARCHIVE
|
Aiseesoft MTS Converter is the best MTS video converter, which can convert MTS files to MP4, MOV, WMV, AVI, MPEG and QuickTime with fast MTS conversion speed. Here we are again with the Broadway release of Land of The Lost. Quality is almost the same, and from the first few minutes of the film I got that the sound was pretty good, but the video is tilted and I saw a guy or two walk across the… File 983 - Free download as PDF File (.pdf), Text File (.txt) or read online for free. Node JS, Typescript, Express based reactive microservice starter project for REST and GraphQL APIs - ERS-HCL/nxplorerjs-microservice-starter Sample for how to use typescript linking in Angular - thinktecture/angular-packaging-ts-linking SignalR TypeScript declarations file generator. Contribute to muratg/SRTS development by creating an account on GitHub. cancerSCOPE, a python library for cancer diagnosis - jasgrewal/cancerscope
:iphone: Sample App for Loadsmart's React Native Components - loadsmart/blocks-storybook-app
I often need to download files using the Terminal. However, I am unable to find the wget command on OS X. How do I download files from the web via the Mac OS X bash command line? Some examples include web services, AJAX and remote application back ends. 16 Mar 2017: These data feeds are in a variety of formats (XML, JSON, CSV and custom flat files). In the rail industry we frequently have to download, parse and… Learn more about the documentation article 'TS-Socket Manual [Deprecated]' here. Sample Rippers - Nobody likes the 3 http://www30.zippyshare.com/v/6955848/file.html refined-realsteel-720p.sample.mkv http://bitshare.com/files/luxmryue/refined-realsteel-720p.sample.mkv Meditations_sur_les_mysteres_de_notre_sainte_foi…
EVE.nfo 1.54 KB Mad.Max.Fury.Road.2015.720p.HD.TS.x264.AC3-EVE.mkv 2.00 GB Sample1.mkv 20.65 MB Sample2.mkv 20.45 MB Sample3.mkv 21.99 MB
HDTV sample file (Transport Stream MPEG-2 video stream). TS file format. TS is a video stream file format that is used for storing video on DVDs. TS stands for H.264+AAC(A-V sync)/ 2006-01-21 04:33 - [ ] H264_artifacts_motion.h264 2008-07-25 18:06 3.8M [VID] HD-h264.ts 2005-06-06 21:35 121M [VID] HD2-h264.ts View various examples of .M3U8 files formatted to index streams and .ts media segment files on your Mac, iPhone, iPad, and Apple TV. I've noticed that others are mirroring these files on their own servers without a (Click to Download), Bitrate (Overall), Resolution, Codec, Profile, Level, Tier, File Finally, if you're looking for sample files to test your media streamer's ability to Sample M3U8 File for HLS Stream If the first .ts file takes too long to download (causing “buffering”, i.e. waiting for the next chunk) the video player will switch Description: Trailer for Big Buck Bunny (TS format); Total: 32.48 sec, 1.900 previous video several times; Download the following files along with bunny.mp4:. MP3 (MPEG-1 Audio Layer-3) is a standard technology and format for compressing a sound sequence into a very small file while preserving the original level of
A selection of sample recordings captured with HDPVRCapture and the HDPVR capturing device using various sources, various resolutions and various audio
26 Aug 2019 For example MKV, TS, and streaming-optimized MP4 files can use this Media Server\Plex Transcoder.exe" -i "C:\downloads\file.mp4" -t 120 The tsconfig.json file specifies the root files and the compiler options required to the compiler defaults to including all TypeScript ( .ts , .d.ts and .tsx ) files in the Download MediaInfo Audio: format, codec id, sample rate, channels, bit depth, language, bit rate. MPEG-PS (including unprotected DVD), MPEG-TS (including unprotected Blu-ray), MXF, GXF, LXF, WMV, FLV, Real. Read many video and audio file formats; View information in different formats (text, sheet, tree, HTML.
MPEG-2 Transport Stream Test Patterns and Tools. AVS HD 709 HD DVD compatible ready to burn on DVD-R or DVD+R .iso file (118,058,063 bytes) If you are looking for TS Files: you can check the following links: http://dveo.com/downloads/TS-sample-files/. 8 Sep 2019 Note: To download, hover over link: to download; For rest, just click HDR 10-bit HEVC 25.000fps (in TS, Astra DVB satellite capture sample; no A zipped collection of 1,000 empty movie files, with NFO files, poster, and You can download Media Files for testing the samples from the table below. 11, dec_bluray_avc, Decode H.264/AVC + LPCM within MPEG-2 TS file.
Free tech support on How to rip any TS file to 3GP, AVI, FLV, H.264, iPad, iPod, iPhone, iTunes, MP4, Flash, PSP, TS, audio, etc. WAV to MP3 Converter converts WAV to MP3 and vice versa in batch, and resamples WAV and MP3 files, and supports more than 150 audio and video files. Free download Video Repair Tool to help you repair your corrupt MP4, MOV, WMV, M4V, F4V, 3GP, 3G2 video files from various storage media. The Au file format is a simple audio file format introduced by Sun Microsystems. The format was common on NeXT systems and on early Web pages. Deploying sample MEAN project. Contribute to amalachirayil/sampleMEAN development by creating an account on GitHub. A grunt task to manage your complete typescript development to production workflow - TypeStrong/grunt-ts
- gta vice city mac free download full version
- freddie mercury tribute concert torrent download
- bit-by-bit controls cannot download GUI files to the Android phone
- multiple browser download
- download camp pinewood on android
|
OPCFW_CODE
|
The publication lists examples. Ultimately, each sort of value might also be characterized with respect to the specific types of interaction it requires. Wide and deep models are a type of ensemble.
Authentication AI ought to be able to adapt to fraudsters’ approaches. Supervised data mining techniques are appropriate when you have a target value that you want to predict from your data. Unsupervised learning doesn’t use labeled output information.
Key Pieces of Supervised Algorithm
Resize the box so you are able to observe the column details. To recognize the appearance of a particular person, the algorithm needs a specialized labeled sample collection. A probability-based model.
Machine learning algorithms are used more frequently than we can imagine, and there’s a good reason for that. Supervised learning is beneficial in scenarios where a property (label) is easily available for some dataset (the training set), but is missing and must be predicted for other cases. Reinforcement learning is a sort of machine learning algorithm that lets the agent learn, by studying, to determine the best action based on its own state.
Supervised Algorithm Options
There are particular machine learning algorithms. Another sort of unsupervised learning is known as clustering. The first step is to select the learning algorithm you will use to train the system.
Because the input data is labelled and known, the results generated from supervised learning approaches are more precise and dependable. The procedure can be repeated until all of the inputs are tagged. You have to learn how to interpret data and how to create proper data visualizations.
Now that you have the plan, we can settle on what approaches to use. Supervised data mining methods are appropriate once you have a target value to predict. Deep learning algorithms permit the processing of larger quantities of data more efficiently.
Top Choices of Supervised Algorithm
Affinity Propagation is a clustering algorithm. PageRank is among my favorite algorithms. It is a sort of ensemble machine learning algorithm, or bagging.
Whispered Supervised Algorithm Secrets
In layman's terms, a model is a mathematical representation of a business issue. Each of the above-mentioned categories can be related to a specific time frame. For these situations, it can’t give an answer.
Picking the k is extremely crucial. Clustering is used to discover similarities and differences. Clustering differs from classification since it does not rely on predefined sample classes.
Put simply, a model reaches convergence when additional training on the present data no longer improves the model. To carry out classification, the program takes two parameters for each of the classes. You need data to evaluate the model and the hyperparameters, and this data cannot be the same as the training set data.
What You Must Know About Supervised Algorithm
There’s a lot of hard work and wisdom in it. You will be able to find out many things and can find answers even while you’re working on your own. Don’t hesitate to ask questions; be sure to fully grasp the issue, the expectation of the outcome, the requisites and the key definitions.
Principal Component Analysis is among the dimensionality reduction algorithms, so it is easy to comprehend and use. Essentially, there are only a few types of machine learning algorithms. You will try to select the most suitable algorithms, and to check and compare results.
Testing is the procedure in which statistical tests are used to check whether or not a hypothesis is true using the data. It is vital to be aware that neither of those algorithms removes the traditional techniques of identifying security problems, such as correlation rules and expression matching. Regression is the procedure of using the trend of past data to forecast the outcomes for new data.
For instance, an SVM with a linear kernel is much like logistic regression. Broadly speaking, simple classifiers treat each input as separate from the remaining inputs. The KNN algorithm is really simple and powerful.
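To ground that last claim, here is a minimal k-NN classifier in plain Python — an illustrative sketch of the algorithm, not code from this article:

```python
import math
from collections import Counter

def knn_predict(train, point, k=3):
    """train is a list of ((x, y), label) pairs; point is an (x, y) tuple.
    Classify point by majority vote among its k nearest labeled neighbors."""
    neighbors = sorted(train, key=lambda item: math.dist(item[0], point))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```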
Indeed, it’s a fantastic number; it usually suggests that there’s just 2% of information. If the model isn’t evaluated, then the odds are that the result produced with new data isn’t accurate. In that instance, it has a huge amount of unlabeled data.
|
OPCFW_CODE
|
Flamegraphs In Depth 🔥🔥
Performance profiles of modern web applications usually produce flamegraphs of significant complexity.
In this tip, we'll look at more complex flamegraphs produced by the Chromium F12 Profiler and learn helpful techniques for reading them.
Note, although the Chromium Profiler technically produces icicle graphs, I will just refer to them as flamegraphs.
- You should have a trace collected of your web application.
- You should know the fundamentals of basic flamegraphs.
Tasks that are long and inefficient can degrade user experience by delaying the browser's ability to generate frames.
The shape of a flamegraph (or a subsection of a flamegraph) can provide great clues into CPU bottlenecks on your thread.
The first function on the callstack is represented as the base of the flamegraph, and the last functions on the callstack are represented at the tips.
If a flamegraph is wide from the base or other sub-sections, this indicates synchronous, slow, or heavy work taking place on the thread.
Here's an example of a wide flamegraph with a wide base and a wide subsection near the tip:
In general, I recommend starting from the base of wide flamegraph sections and tracing the graph towards the tips (working from top to bottom in the Chromium F12 profiler), following the widest bands as you go. This will help you find the largest areas of opportunity within that inefficient section.
Consider this example flamegraph:
If I was going to try and optimize this call stack, I would:
- Start looking at `function a()` at the base.
- Notice that it calls `b()`, which looks wider, so I'd investigate that next.
- Investigate what `d()` is doing, because it appears to be the widest band.
In my experience, the usual culprits of wide bands are:
- `for` loops with a high iteration count
- Highly computational work
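As a toy illustration of the first culprit (a hypothetical function, not from any real trace), a synchronous high-iteration loop like this shows up as one wide band:

```javascript
// Hypothetical wide-band culprit: a single synchronous function dominating
// its section of the flamegraph because of a high-iteration loop.
function sumOfSquares(n) {
  let total = 0;
  for (let i = 0; i < n; i++) { // high iteration count -> one wide band
    total += i * i;
  }
  return total;
}
```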
A flamegraph that resembles a narrow spike indicates that the time to execute is short, but the callstack is deep.
Here are some example narrow-shaped flamegraphs:
A narrow spike doesn't necessarily indicate a CPU bottleneck in isolation, but sometimes, narrow spikes in high frequency can produce bottlenecks. This usually manifests as a wide band in the profiler, topped with many narrow spikes.
Here's an example of many narrow spikes aggregating into a wide band, indicating a bottleneck:
The inefficient / interesting parts of a narrow spike are often near the tip of the spike:
In this example, each spike is executing some micro-operations of about 0.14ms each, like `stringify`, etc., and we can find this info at the tip of each spike.
What we are looking at is essentially the below example:
Notice in this example, `d()` is invoked in high frequency, which invokes `f()` in high frequency, creating a bottleneck in `f()`.
The usual suspects I find at the tips of narrow spikes often include:
- Browser APIs
- String operations (like URL parsing or `stringify`)
- `while` loops with a low iteration count
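A hypothetical sketch of that spike pattern (names are illustrative): a cheap micro-operation invoked at high frequency, which is what aggregates into a wide band topped with narrow spikes:

```javascript
// Hypothetical narrow-spike pattern: each JSON.stringify call is a short,
// deep stack (one narrow spike); the surrounding loop over many items is
// what turns the individual spikes into a visible bottleneck.
function serializeAll(items) {
  return items.map((item) => JSON.stringify(item));
}
```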
Consider this example below:
Script 1 gets colorized as Blue, and is at the base of the flamegraph. Script 2 is colorized as Green and is the callee of Script 1, lower in the flamegraph and at the tips.
At first glance, one might attribute this Task's CPU time to Script 1, because it's at the base of the flamegraph. However, because Script 2 clearly contributes to the bulk of the work (most of the flamegraph is Green, especially at the tips) we can infer that codepaths in Script 2 are the likely inefficient culprits in this Task.
If you see patterns or shapes that appear to be resulting from a particular color in high frequency, that can help you quickly identify which script or part of your application is contributing to the bottleneck.
In this example below, there's a clear pattern of a Green script invoking a call stack colorized as Brown that appears slow and run in high frequency.
There are also a set of reserved colors, attributed to certain browser tasks, that can help you spot inefficient invocations of browser APIs, such as Layout.
Selecting a call stack frame will show which script is executing in the Summary pane:
The Chromium Profiler will map each stack frame in a flamegraph to the name of the executing function:
In this example above, `a` is the name of the function, and it's found within the executing script.
Production web applications apply minification, so the names are often short and non-descriptive.
Follow this tip on scoping to codepaths in the profiler for details on how to scope to a particular codepath of interest in your flamegraph.
We have walked through some common real-world flamegraph patterns and shapes.
We've also looked at how the Chromium Profiler aids our analysis by colorizing and labeling call stacks.
You should see similar flamegraphs in your web application traces and can now understand what's going on in those complex flamegraphs.
Consider these tips next!
- The Chromium Main Profiler Pane explained
- Scoping to codepaths in the profiler
- The Browser Event Loop
- Code Splitting
|
OPCFW_CODE
|
StorPool is ultra-high-performance storage software installed on a cluster of standard servers. It creates a pool of shared storage from these servers (it can be thought of as a virtual SAN). Because StorPool is flexible and efficient, it can be deployed either on separate storage servers or on the compute servers themselves, alongside the applications (in a converged / hyper-converged setup). In each case StorPool delivers extremely fast, scalable and cost-efficient high-speed block storage. Featuring an advanced, fully distributed architecture, it is potentially the fastest and most efficient block storage software available on the market today.
StorPool combines the storage IOPS performance of all of the drives in the cluster and optimizes drive access patterns, to provide both low latency and extremely efficient handling of storage traffic bursts. All storage operations are distributed equally between all of the servers through data striping and sharing, which allows StorPool to boost performance and remove I/O bottlenecks.
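As a toy illustration of the striping idea described above (not StorPool's actual placement algorithm, which is not documented here), distributing fixed-size chunks round-robin across a cluster can be sketched as:

```python
def stripe(blocks, num_servers):
    """Assign each block to a server round-robin (toy model, not StorPool's algorithm)."""
    placement = {}
    for i, block in enumerate(blocks):
        # Block i lands on server i mod num_servers, spreading I/O evenly.
        placement.setdefault(i % num_servers, []).append(block)
    return placement

print(stripe(["b0", "b1", "b2", "b3", "b4"], 3))
# {0: ['b0', 'b3'], 1: ['b1', 'b4'], 2: ['b2']}
```

With every server holding an equal share of the blocks, reads and writes fan out across the whole cluster instead of hitting a single drive.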
StorPool delivers outstanding block storage performance, low latency and seamless scalability. Below you can see a performance test in which we measured 13.8M IOPS on a 12-node cluster built on standard off-the-shelf Intel servers. We configured and tested a 12-server KVM hyper-converged system on hardware graciously provided by the Intel® Datacenter Builders program. With these results, StorPool holds the world record for the highest data storage performance reached by an HCI (Hyper-Converged Infrastructure) solution.
StorPool is high-speed distributed data storage software running on standard server hardware. It uses minimal system resources in order to achieve outstanding performance. The performance highlights below were measured on a 12-node cluster built on standard servers, in a hyperconverged setup.
All-NVMe storage system
Summary: StorPool provides exceptional performance while uniquely delivering end-to-end data integrity and shared storage capabilities. In addition, StorPool maintains high performance (in terms of storage IOPS and bandwidth) and unrivaled efficiency (in terms of CPU and RAM usage), even while serving the demanding storage requests of many different, competing workloads.
These results come from a super-fast, hosted VDI solution with high utilization. One of the hard requirements of the NVMe-powered VDI Cloud project was very high single-thread performance, which necessitates the use of CPUs with a relatively small number of high-frequency cores. There were 39 servers, each with 12 cores. StorPool utilized just 2 CPU cores and 8 GB RAM per node and delivered a blazing-fast storage system, peaking at 6,800,000 IOPS and delivering latency of less than 0.15ms (!) under typical load. This allowed the customer to achieve unmatched performance and efficiency.
In these public cloud performance tests, StorPool’s aim was to assess the block storage offerings of a number of public clouds, including Amazon AWS, Digital Ocean, DreamHost and OVH vs. a number of StorPool-based public cloud offerings. We’ve selected VM instance types and everything else in the configurations to be identical because we are comparing storage systems/offerings, not other aspects.
The tests were performed between November 2018 and February 2019 by StorPool. All systems under test are in production clusters.
|
OPCFW_CODE
|
'No database selected' error during migration rollback with prefix specified
Environment
Elixir version: 1.4.1
Database and version: MySQL 5.7.17
Ecto version: 2.1.3
Database adapter and version: mariaex 0.8.1
Operating system: OSX
Current behavior
I bumped into this when using the rollback mix task, but suspect it shows up under other conditions too.
$ MIX_ENV=test mix ecto.rollback --repo TenantRepo.Repo1 --prefix sample_tenant_test
** (Mariaex.Error) (1046): No database selected
(ecto) lib/ecto/adapters/sql.ex:440: Ecto.Adapters.SQL.execute_and_cache/7
(ecto) lib/ecto/adapters/sql.ex:619: anonymous fn/3 in Ecto.Adapters.SQL.do_transaction/3
(db_connection) lib/db_connection.ex:1276: DBConnection.transaction_run/4
(db_connection) lib/db_connection.ex:1200: DBConnection.run_begin/3
(db_connection) lib/db_connection.ex:791: DBConnection.transaction/3
(ecto) lib/ecto/migrator.ex:246: anonymous fn/4 in Ecto.Migrator.migrate/4
(elixir) lib/enum.ex:1229: Enum."-map/2-lists^map/1-0-"/2
(ecto) lib/mix/tasks/ecto.rollback.ex:85: anonymous fn/4 in Mix.Tasks.Ecto.Rollback.run/2
We have a number of databases (with the same schema) and rely on prefixes extensively. We do not set the :database param in config.exs.
The error occurs when the rollback tries to delete from the schema_migrations table, so it should have very little to do with the specific migration being rolled back. To make matters worse, the rollback itself succeeds and the schema_migrations table ends up inconsistent with the database state.
I was able to trace the problem to a Repo.delete_all/2 call
https://github.com/elixir-ecto/ecto/blob/master/lib/ecto/migration/schema_migration.ex#L30
If I replace this call to Repo.delete_all/2 with independent get & delete calls, everything appears to work as intended.
I've also found that if I set the :database parameter in config.exs it appears to work as intended (but only for that particular database)
I chased this through the ecto code for a while, but I didn't find anywhere that the prefix was obviously being lost/ignored. The generated SQL appears valid:
DELETE s0.* FROM `sample_tenant_test`.`schema_migrations` AS s0 WHERE (s0.`version` = ?)
I suspect something is goofy with the prepared statement, but I'm far from a SQL expert.
Expected behavior
Migration rollback (and Repo.delete_all/2) work correctly with a prefix specified.
@hogjosh thank you for the report. Can you please provide a sample application that reproduces the error? It will be much easier for us to fix if we don't need to chase the exact scenarios that reproduce the issue. Thank you!
Done and done!
https://github.com/hogjosh/ecto2060
It looks like ecto 2.1.4 added error information when there's a problem with schema_migrations, but I'm not sure it's helpful in this scenario.
Let me know if I can be of further help.
@hogjosh it seems to be an issue with MySQL:
$ mysql -u root
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 73
Server version: 5.7.11 Homebrew
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> INSERT INTO `ecto_2060_dev`.`schema_migrations` (`version`,`inserted_at`) VALUES (1, "2017-05-05 10:10:10");
Query OK, 1 row affected (0.00 sec)
mysql> DELETE s0.* FROM `ecto_2060_dev`.`schema_migrations` AS s0 WHERE (s0.`version` =<PHONE_NUMBER>3256);
ERROR 1046 (3D000): No database selected
mysql>
For some reason it does not require a database for inserts, but it does require one for deletions.
@josevalim ah, I was running the sql from a graphical app, which was probably setting the database for me and is why it seemed to work there.
This appears to work:
mysql> DELETE FROM `ecto_2060_dev`.`schema_migrations` WHERE (`version` =<PHONE_NUMBER>3256); Query OK, 1 row affected (0.01 sec)
There is definitely some "weirdness" when using aliases in deletes and specifying the database further complicates it...
@hogjosh if that query works, it definitely adds more weirdness. The issue is that none of this seems to be documented, so I am reluctant to put in all of the work required to emit other queries while there is no guarantee this will work for other MySQL versions or in the future.
If you don't want to have it connected to any of your prefixes, I would recommend setting up a dummy database for connections but not use it for anything else.
Closing this as we believe it is a MySQL issue and I don't believe we should change our query strategy specifically to handle MySQL weirdness. If you don't want to have it connected to any of your prefixes, I would recommend setting up a dummy database for connections but not use it for anything else. Thank you!
|
GITHUB_ARCHIVE
|
I deleted the question for two reasons:
It's trying to compose a list; worse, it's asking for a list of favorite ways to find new routes. The Not Constructive close reason description reads:
This question is not a good fit to our Q&A format. We expect answers to generally involve facts, references, or specific expertise; this question will likely solicit opinion, debate, arguments, polling or extended discussion.
So instead of deleting it, I could have closed it instead, but closing could and probably should eventually lead to deletion, unless it serves as an example of questions we don't want. The reason we normally first close, wait and only then delete is so users can raise objections, edit the question to make it more constructive and have it reopened. So to do you a favor and give you a chance to fix it, I've undeleted and only closed it.
The second reason I deleted/closed it is that, in my not so humble opinion, this wasn't an example of questions we should be wanting on our site. While the first two answers (including yours) are mildly useful, the other answers were one-liners referring to some website, links to social services such as Runkeeper or simply not useful.
When I came upon this question last week I asked myself the question: does this question make a new visitor coming in from Google happy? Would that visitor, based on the quality of the question and answers, think to himself: "Geeh, this is a nice website, I think I'll browse around and see if they have more useful content." I believe the answer to that is a firm no. That's not to say a similar question, that's more focused couldn't be, just that this incarnation of the question is a bad one.
Because the question is undeleted, but closed now, you do have a chance to 'fix' the question. However, given our relatively small community of active users, most old questions are dead questions. Editing it would bring it back to the front page and might encourage someone to answer it in a better way, but it's quite unlikely. Still, another problem is that making the question less broad (I mean, finding running routes all over the world? That's a bit broad, isn't it?) would either invalidate a lot of answers, which would have to be updated (not going to happen) or deleted (already done in some cases).
This is related to a bigger issue, which Greg also tried raising in chat: people don't downvote enough. Heck, I'm guilty of it myself! The result is that unless a question gets a decent amount of views, there's barely a distinction between bad answers and good answers. Worse, because some answers don't get downvoted, those users might think there's nothing wrong with their answer and they keep posting similar answers (for example, users who continuously post one-liners that should have been comments, or links as answers).
So honestly, I don't think this particular question was worth the trouble of saving. I'd rather encourage you to ask a new question that's more focused and doesn't attract the same answers as the original did, and we all just forget about this one. I'm sorry that it cost you rep in the process, but I think it's for the greater good.
|
OPCFW_CODE
|
Active Directory password policy enforcement when creating accounts through C#
I'm creating an ASP.NET MVC website using C# which will allow a user to create an AD account through a front end web page. The idea being you enter the relevant details on the front end, hit submit and I then create the AD account behind the scenes through C# using the details you've entered.
Now, if the AD server has specific password policies (i.e. length, complexity etc) is there anyway to have the front end use those policies directly rather than attempting to recreate them through regex etc?
The reason is that if the AD policy is later changed then if they're not linked the front end is then out of sync and also needs to be changed.
I'm not that familiar with AD but perhaps there is some password policy entity or similar that I can use to check password details (even server side would be fine).
Does such a thing exist or if not how do people recommend getting around the out-of-sync issue?
The answer is just don't implement the policies in your front end at all.
AD will let you know if what the user entered does not comply with the password policy by means of error codes which you can handle and present to the user in a helpful way.
You don't want to duplicate processes. The process of verifying compliance with password policies is done by the domain controller and should stay there.
I see your point but in my specific case there's going to be a few other things happening under the hood before the AD account is created. I am anticipating a poor user experience if, after waiting for these other things to complete you only then get told your password needs to be longer. Normally this feedback occurs before you can even submit the form.
@FullTimeSkeleton Change the order of processes? After all, if the other processes rely on or are related to the created user, you should do that first anyway, as there are loads of other reasons the user might not get created. Maybe the name already exists, maybe the DC isn't available, maybe someone moved your destination OU, maybe the domain is in read-only mode or has run out of SIDs.
Fair enough. At the moment the proposal is that another entity needs to be created first and then the AD account created with details from that previously created entity. Technically we could reverse this though and create the AD account first. Thanks for the suggestion. The upshot of this is that it appears there is no way of querying the AD password policy separately from creating an account. This isn't ideal for me but if it's the only way it's the only way.
@FullTimeSkeleton The only "way" would be to parse the group policy xml files in the sysvol folder, but even then you'd need to know how/where they're applied to work out any overriding settings etc.
And that way sounds messy so it doesn't look like I can do this without relying on AD to provide the feedback. Thanks, I'll mark it as the answer.
|
STACK_EXCHANGE
|
import numpy as np
import os,random
import matplotlib.pyplot as plt
import pylab as pylab
import scipy.optimize
#SECTION: Functions
#This function takes the data and sums multiple points together and returns the new matrix
def dataReducer(data, reductionlevel):
    #Checks to see if the amount of data is divisible by the reduction level. If not, then this method will not work
if len(data) % reductionlevel != 0:
print("This won't work. The reduction level doesn't match the amount of points.")
return
p = 0
q = reductionlevel
new = []
for i in range(0,len(data)//reductionlevel):
new.append(np.sum(data[p:q]))
p+=reductionlevel
q+=reductionlevel
out=np.array(new)
return out
#This function takes the data and performs folding to remove a parabolic baseline.
def dataFolder(data, foldingpoint):
    #Note: this folds the spectrum about its midpoint; the foldingpoint argument is currently unused.
    newfold = []
for i in range(0,len(data)//2):
newfold.append(data[i]+data[len(data)-i-1])
return newfold
#This function converts the X axis to velocity and returns the new values.
def dataVelocity(data,zeroPoint,rate):
pos = []
neg = []
for i in range(zeroPoint,len(data)):
pos.append((i-zeroPoint)*rate)
for i in range(-zeroPoint,0):
neg.append((i)*rate)
    newVelocityMatrix = np.append(neg,pos)
    return newVelocityMatrix
#This function is wrapper for plotting.
def mossplot(datain,xval,title="Mossbauer Spectrum",xaxis="Velocity (mm/s)",yaxis="Counts"):
fig = pylab.plot(xval,datain)
pylab.grid(True)
pylab.legend()
pylab.title(title)
pylab.xlabel(xaxis)
pylab.ylabel(yaxis)
pylab.xlim(min(xval),max(xval))
pylab.show()
return
#This function calculates Doppler effect from velocity as an input and resonance energy
def dopplerEffect (velocity,resonanceEnergy):
    c=2.997924588E8
    #First-order Doppler shift: E' = (1 + v/c) * E0
    eDoppler = (1 + velocity/c)*resonanceEnergy
    return eDoppler
#This function calculates effective mossbauer thickness -- not to be implemented for a long time
def effectiveThickness (LMfactor,cross,abundance,density,thickness,molarmass):
N=6.022E23
    #Note: the cross-section parameter is not yet used in this formula.
    result= LMfactor*N*abundance*density*thickness/molarmass
    return result
#The parameters for Lorentzian fits.
def lorentzian(x,hwhm,cent,intense,back=0):
numerator = (hwhm**2 )
denominator = ( x - (cent) )**2 + hwhm**2
y = intense*(numerator/denominator)+back
return y
#Residual function for fitting.
def residuals(p,y,x):
    err = y - lorentzian(x, p[0], p[1], p[2])
return err
#Residual function for fitting multiple curves.
def multipleResiduals(p,x,yval):
parin= np.zeros(len(x))
for i in range(0,len(p),3):
p0=p[i]
p1=p[i+1]
p2=p[i+2]
        #leastsq needs a residual array of consistent length, so the penalty must be an array, not a scalar
        if p0>0.5 or p0<0: return np.full_like(yval, 1E8)
        if p2>-10: return np.full_like(yval, 1E8)
parin=np.add(parin,lorentzian(x,p0,p1,p2))
err = yval - parin
return err
#END SECTION: Functions
#Input parameters ideally not hard coded, but for testing and dev they will be.
filename = r"H:\Google\Research\Software\wmoss\TPNIOF14forwmoss.dat" #Test Data Path - Go Through GUI (raw string so backslashes aren't treated as escapes)
data = []
data = pylab.loadtxt(filename)
redNumber=4
channels=1024
midpoint= 512//redNumber
#End input parameters
data = dataFolder(data,channels)
new = dataReducer(data,redNumber)
newVelocity = dataVelocity(new,midpoint,16/channels*redNumber)
ind_bg_low = (newVelocity > min(newVelocity)) & (newVelocity < -3)
ind_bg_high = (newVelocity > +3) & (newVelocity < max(newVelocity))
x_bg = np.concatenate((newVelocity[ind_bg_low],newVelocity[ind_bg_high]))
y_bg = np.concatenate((new[ind_bg_low],new[ind_bg_high]))
ind_bg_mid=(newVelocity > -8) & (newVelocity < 8)
m, c = np.polyfit(x_bg, y_bg, 1)
background = m*newVelocity + c
y_bg_corr = new - background
#These are the test values for parameters to be fitted.
#It should be able to accept an unlimited number.
#It is set for 12 sets of parameters. It needs to be able to do 1,2,5,6,12
p = [0.34,-0.2,-800,0.34,-0.15,-750,0.34,0.11,-500,0.34,-0.25,-888,0.34,-0.2,-800,0.34,-0.15,-750,0.34,0.11,-500,0.34,-0.25,-888,0.34,-0.2,-800,0.34,-0.15,-750,0.34,0.11,-500,0.34,-0.25,-888] # [hwhm, peak center, intensity] #
pBest = scipy.optimize.leastsq(multipleResiduals,p,args=(newVelocity[ind_bg_mid],y_bg_corr[ind_bg_mid]),full_output=1)
fitsum=np.zeros(len(newVelocity))
for i in range(0,len(pBest[0][:]),3):
fit = lorentzian(newVelocity,pBest[0][i],pBest[0][i+1],pBest[0][i+2],background)
fitsum= np.add(fitsum,lorentzian(newVelocity,pBest[0][i],pBest[0][i+1],pBest[0][i+2]))
pylab.plot(newVelocity,fit,'r-',lw=2, label=i)
pylab.plot(newVelocity,new,'b-')
fitsum=np.add(fitsum,background)
pylab.plot(newVelocity,fitsum,'g-',lw=5)
pylab.legend()
pylab.show()
|
STACK_EDU
|
import remark from "remark"
import stripMd from "strip-markdown"
import { prune } from "underscore.string"
const defaultOpts = {
pruneLength: 140,
pruneString: "…",
}
export default function description(mdObject, opts = {}) {
opts = { ...defaultOpts, ...opts }
if (opts.pruneLength < 10) {
    console.warn(
      "You defined 'description.pruneLength' of content-loader " +
      "with a value lower than 10. This does not make sense, " +
      `so the default value ${ defaultOpts.pruneLength } has been used.`
    )
opts.pruneLength = defaultOpts.pruneLength
}
// Don't touch mdObject if there is a
// description field in frontmatter
if (mdObject.head.description) {
return mdObject
}
let description = remark()
.use(stripMd)
.process(mdObject.rawBody)
.toString()
description = prune(description, opts.pruneLength, opts.pruneString)
if (description && description.length > 0) {
description = description
.replace(/\n+/g, "\n") // Replace multiple new lines with one
.replace(/\n/g, " ") // Avoid useless new lines
.trim()
}
else {
description = null
}
return {
...mdObject,
head: {
...mdObject.head,
description,
},
}
}
|
STACK_EDU
|
.NET DESKTOP APPLICATION / WINFORMS DEVELOPMENT SERVICES
Freegan Tech Solution .NET desktop application development team is experienced in developing Windows GUI applications (Windows Forms or WinForms), Windows Console applications, Windows Smart Client apps using Windows Presentation Foundation (WPF), and Windows Store Apps for Windows 8.
We can build a traditional application that runs on Windows server, desktop or laptop computers. We love building this kind of app because of the power and flexibility that .NET WinForms and WPF provide. We have a wealth of application development experience gained over 30 years in the business and are able to cover requirements ranging from single-user desktop utilities to multi-user, enterprise-scale systems.
An application that runs on the Windows desktop is known as a WinForms application. Windows applications have a graphical user interface (GUI), which makes designing the application easy and fast. WinForms is short for Windows Forms, the name of the graphical class library included in the Microsoft .NET Framework. It is a platform on which you can write client applications for PCs, laptops, desktops and tablets. It is considered a replacement for the C++-based Microsoft Foundation Class Library (MFC). However, it only acts as the user-interface layer in a multi-tier solution.
- Windows Forms application is based on Microsoft's .Net platform. It is an event driven application.
- It does not work like a batch program, rather the user has to take initiative to perform tasks.
- It provides access to the native Windows user interface common controls by wrapping the Windows API.
- It provides a comprehensive abstraction over the Win32 API.
How WinForm can benefit you?
- Windows applications are easy to develop and offer more flexibility in both effects and views, so many users prefer an interface that is based on Windows.
- To run the application, you have to install the Windows-based app on your machine.
- It can be installed on tablets, desktops or laptops, as per the convenience of the user.
- A Windows application runs faster than a Web application.
- Installation and uninstallation are fast, saving a lot of time.
- Updates and upgrades are released at frequent intervals, and applying them is easy and takes less time compared to other kinds of applications.
- You might switch over to WPF, but if you need to work with a large existing code base, WinForms is often preferable.
- WinForms fits legacy systems (e.g., Windows 2000).
- Since WinForms has been on the market for a long time now, developers are more experienced and skilled in working with it.
- Other frameworks might ship ready-to-use in-box controls, but in WinForms the developer can customize and create new controls, and also bring in third-party libraries.
|
OPCFW_CODE
|
setItem or getItem returns no value (iOS 11)
Scenario:
I have a data returned from the server in [object Object] format.
I use const parsed = JSON.parse(JSON.stringify(data)), it will become:
{
access_token: '....',
refresh_token: '....'
}
I was able to alert the data like: alert(data.access_token) (showing me the string value) before actually storing the string of access_token like window.NativeStorage.setItem('access_token', parsed.access_token)
But it seems during getItem, nothing is returned (an empty value). It is either setItem or getItem, or both. So here's the output log from Xcode when actually calling window.NativeStorage.getItem('access_token'):
Error in Success callbackId: NativeStorage45281472 : TypeError: error is not a function. (In 'error(new NativeStorageError(NativeStorageError.JSON_ERROR, "JS", err))', 'error' is undefined)
2018-02-08 10:04:32.088407+0700 Qourt[4369:2011891] Error in Success callbackId: NativeStorage45281473 : TypeError: error is not a function. (In 'error(new NativeStorageError(NativeStorageError.JSON_ERROR, "JS", err))', 'error' is undefined)
2018-02-08 10:04:32.088495+0700 Qourt[4369:2011891] Error in Success callbackId: NativeStorage45281474 : TypeError: error is not a function. (In 'error(new NativeStorageError(NativeStorageError.JSON_ERROR, "JS", err))', 'error' is undefined)
2018-02-08 10:04:32.088579+0700 Qourt[4369:2011891] Error in Success callbackId: NativeStorage45281475 : TypeError: error is not a function. (In 'error(new NativeStorageError(NativeStorageError.JSON_ERROR, "JS", err))', 'error' is undefined)
2018-02-08 10:04:38.988020+0700
Is the problem caused by how I store the key? If so, what would be the recommended way to store them?
Dear josteph
Could you try to install the latest version from Github instead of the npm package?
The installation procedure is explained here.
@GillesC Thanks for the response!
Is there any difference, specifically for setItem or getItem methods?
Anyway, I will try to install from Github and will update you the result tonight.
There is no (there should not be any) difference for these methods.
Thank you for your feedback!
Hello. We might be experiencing the same issue. My app also uses NativeStorage to store a session token, and tried to load it immediately after device ready. The response from NativeStorage is error code 2, ie. item not found. However running the same code a few seconds later works just fine, so it seems the storage is not ready on device ready.
@Jckf Hmm my issue is not exactly the same. I'm using also OAuth2 Library, successfully returned the tokens in JSON format and stored using setItem. But got the JSON_ERROR instead when calling getItem on the credentials.
@GillesC The issue still remains, the setItem is using JSON.stringify(data) then JSON.parse(data), not sure if this is causing the issue? I will try to putString instead and will give feedback again.
Thanks.
OK, I've confirmed the issue here. The token is so long that it takes a noticeably long time before the getItem success callback is actually called. Other, shorter data seems to work fine though.
What's the recommended way to store a string of length 4096?
|
GITHUB_ARCHIVE
|
Electronic Submission Instructions
In a nutshell, you will be required to produce GIF images of each page
of your report and submit them electronically through a special command.
Please follow these instructions EXACTLY for producing and
submitting these images:
If any of this is unclear or you have trouble with any part of this,
please let me know as soon as possible.
Use any word processor that you want to write up the report. However,
you must use a word processor that will generate a Postscript (PS) file,
an ASCII file, or a GIF image of each page.
If you can't generate a PS file, an ASCII file, or a GIF image of a page,
then you can't submit
it using the technique described here. Your report can be either a single
PS file or multiple files. You may want to experiment with a small test
file before plunging in and doing the whole report on a word processor
that's going to cause trouble.
If you have generated the PS file on something other than an Engineering
Sun, you are responsible for figuring out how to upload it to your Sun account.
- Here are some hints on producing GIF files:
- You can convert an ASCII text file to GIF by using
~sjreeves/bin/text2gif infile outfile
If the file requires multiple pages, it will generate files called
outfile_1.gif, outfile_2.gif, etc.
- You can save Matlab plots to PPM files by typing
print -dppmraw outfile
at the Matlab prompt and then using xv to convert it to GIF.
- You can convert TIFF files to GIF using xv.
- You can convert a PS file to GIF by using
pstogif. To run this, type
~sjreeves/bin/pstogif file.ps file.gif
If the file has multiple pages in it, it will generate files called
file_1.gif, file_2.gif, etc.
- If all else fails and you are able to display your report on a Sun,
you can dump one page at a time by using the command
and then clicking on the window you want to dump to a gif file.
This command will only work if the window of interest does not overlap
any other windows either on top or underneath.
- You can also use xv to grab regions of the screen and
manipulate them and save them as images.
When you have converted everything to GIF, rename your files (including
Matlab plot files) so that they are in the order you want for your report.
The naming convention should be projX_n.gif, where X is the project number
and n is the page number. I will count off if you don't follow this convention.
I would suggest that you page through your files using an image viewer to
make sure everything is as you expect it to be. You can do this by typing
You can advance through the images by pressing the space bar.
(xv is not as good for this purpose, since it automatically
downsizes images if they're larger than a screen.)
- NOTE: Please use common sense in the size of each image. I will
count off if any image dimension is greater than twice the corresponding screen dimension.
NOTE2: I will count off heavily if you submit any image format other than GIF.
- Make sure the font is large enough to read comfortably after
the text has been converted to a bitmap.
To forward the files to me, type
~sjreeves/bin/submit_rpt X projX_*.gif
exactly as shown but with the project number substituted for X.
You will receive an email confirmation if the report is received
The report will be graded, annotated, and returned to you via email.
You will be notified by email how to set up your account to automatically
receive your graded report. If you have already set up your account to
receive your graded homework, you can ignore the email instructions.
|
OPCFW_CODE
|
"""
Report generation functions
"""
import json
import globalvars
def build_field_data(form):
"""
Creates a dictionary containing the fields' data
"""
field_list = []
for field in form.fields:
field_list.append({
"name": field.name,
"type": field.type,
"value": field.value
})
return field_list
def build_error_data(form, code, message):
"""
Creates a dictionary containing the form's and error's data
"""
return {
"fields": build_field_data(form),
"action": form.action,
"method": form.method,
"error": {
"code": code,
"message": message
}
}
def generate_json_report():
"""
Generates JSON report file from the data gathered
"""
if globalvars.JSON_REPORT_FILE is None:
output_dir = "output/report.json"
else:
output_dir = globalvars.JSON_REPORT_FILE
final_data = []
for url, form_data in globalvars.ERRORS.items():
final_data.append({
"url": url,
"forms": form_data
})
with open(output_dir, "w") as report_file:
json.dump(final_data, report_file)
def load_templates():
"""
Reads the contents of each HTML template file and returns them as a tuple
"""
with open("report-templates/report-main.html") as template:
main = template.read()
with open("report-templates/report-page.html") as template:
page = template.read()
with open("report-templates/report-form.html") as template:
form = template.read()
with open("report-templates/report-field.html") as template:
field = template.read()
return main, page, form, field
def generate_html_report():
"""
Generates HTML report file from JSON file
"""
templates = load_templates()
if globalvars.JSON_REPORT_FILE is None:
output_dir_json = "output/report.json"
else:
output_dir_json = globalvars.JSON_REPORT_FILE
if globalvars.GENERATE_HTML_REPORT:
output_dir_html = globalvars.HTML_REPORT_FILE
else:
output_dir_html = "output/report.html"
with open(output_dir_json, "r") as json_file:
json_report = json.loads(json_file.read())
with open(output_dir_html, "w") as report_file:
report_file.write(templates[0].replace("{pages}", fill_page_template(json_report, templates)))
def fill_page_template(pages, templates):
    """
    Substitute the URL and list of forms for all the pages
    """
    final_template = ""
    for page in pages:
        final_template += (templates[1]
                           .replace("{url}", page['url'])
                           .replace("{forms}", fill_form_template(page['forms'], templates)))
    return final_template


def fill_form_template(forms, templates):
    """
    Substitute the method, action, error code, message and list of fields for all the forms
    """
    final_template = ""
    for form in forms:
        final_template += (templates[2]
                           .replace("{method}", form['method'])
                           .replace("{action}", form['action'])
                           .replace("{code}", form['error']['code'])
                           .replace("{message}", form['error']['message'])
                           .replace("{fields}", fill_fields_template(form['fields'], templates)))
    return final_template


def fill_fields_template(fields, templates):
    """
    Substitute the name, type and value for all the fields
    """
    final_template = ""
    for field in fields:
        final_template += (templates[3]
                           .replace("{name}", field['name'])
                           .replace("{type}", field['type'])
                           .replace("{value}", field['value']))
    return final_template
Internationalizing time, also known as time I18N, is the process of adapting time formats so they are culturally and linguistically appropriate for different regions and languages. This can involve changing the way time is displayed, formatted, and even understood in different parts of the world.
One of the most common ways to internationalize time is to use the 24-hour clock instead of the 12-hour clock. The 24-hour clock is commonly used in Europe, Latin America, and many other parts of the world, and is often preferred because it eliminates confusion about AM and PM. In some cultures, the day is also divided into different time periods, such as morning, afternoon, and evening, and this can also be reflected in how time is displayed.
Another aspect of internationalizing time is the use of time zones. Time zones are geographical regions that have a standardized time, which is typically based on the local solar time. This allows people in different parts of the world to have a common understanding of what time it is, even if they are in different time zones. However, the use of time zones can also be complicated by factors such as daylight saving time, which can cause the time to shift by an hour in some regions.
In addition to using the 24-hour clock and time zones, internationalizing time can also involve adapting the formatting of time. This can include using different symbols and separators for hours, minutes, and seconds, as well as adapting the order in which these elements are displayed. For example, in some cultures, the hour is displayed before the minute, while in others, the minute is displayed before the hour.
Overall, internationalizing time is an important aspect of creating user interfaces that are accessible and culturally appropriate for people around the world. By adapting time formats to local cultures and languages, we can help ensure that users can easily understand and interact with digital products and services, regardless of where they are located.
Example of Internationalizing Time:
An example of internationalizing time would be adapting the time format used on a website or app to make it more culturally appropriate for users in different regions.
For example, if a website is designed for users in the United States, it might display time in the 12-hour clock format with AM and PM indicators. However, if the same website is being used by users in Europe, it would be more appropriate to display time in the 24-hour clock format.
In addition, if the website is being used by users in Japan, it would be important to display the time in the Japanese time zone, which is UTC+9. If the website is used by users in multiple time zones, it may also be necessary to include a drop-down menu or other interface element that allows users to select their local time zone.
Finally, the formatting of the time display may also need to be adapted to be more culturally appropriate. For example, in some cultures, the hour is displayed before the minute, while in others, the opposite is true. The use of separators and symbols may also vary, with some cultures using a colon (:) to separate hours and minutes, while others use a period (.) or a different symbol altogether.
By adapting the time format in this way, the website or app can provide a more culturally appropriate and user-friendly experience for users in different regions around the world.
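The adaptations described above can be sketched with Python's standard library. This is a minimal illustration, not a full localization solution; the particular instant chosen here is an arbitrary assumption, and real applications would typically rely on a locale library rather than hard-coded format strings:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# One instant in time, expressed in UTC
instant = datetime(2024, 3, 1, 14, 30, tzinfo=timezone.utc)

# 12-hour clock with AM/PM indicator (common in the United States)
us_style = instant.strftime("%I:%M %p")   # "02:30 PM"

# 24-hour clock (common in Europe and much of the world)
eu_style = instant.strftime("%H:%M")      # "14:30"

# The same instant shown in the Japanese time zone (UTC+9)
jp_style = instant.astimezone(ZoneInfo("Asia/Tokyo")).strftime("%H:%M")  # "23:30"
```

For fully locale-aware formatting (separators, element ordering, translated AM/PM markers), a CLDR-based library such as Babel is the usual choice, since it looks up the correct pattern for each locale instead of hard-coding one.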
Added new identifier support for stereo cameras
It should support all of the Blender naming conventions now in addition to the old 'left' and 'right' camera names.
Closes #21
I need some time for testing this on the latest Blender. Thank you! ^__^
No problem, go for it! ^_^
Hello! It doesn't work with my old files. Probably because my rig uses ".Left" and ".Right" suffixes. ^__^
Sample file - http://cloud.morevnaproject.org/seafile/f/36f7f6e1be/?raw=1
Ah, it's actually the middle camera d-cam-01 that's causing the issue. I'll fix this soon.
@morevnaproject Updated! I fixed that issue and did a bit of refactoring/optimization. Should be good to go now. Do you want me to add something in the docs about ordering for this?
@scribblemaniac I have tested and now it works perfectly! I did a merge and also added the possibility to render using native stereo multiview. ^__^
About the docs - this would be much appreciated! But where to put them? Currently I have several documentation-related pages on MorevnaProject website - https://morevnaproject.org/renderchan/documentation/
But I think this is not the best approach from a long-term perspective, because I would like to have some mechanism to quickly update illustrations and handle other maintenance routines, while also keeping it open for other contributors. I suspect ReadTheDocs could be a good option for us - https://read-the-docs.readthedocs.org/en/latest/getting_started.html#import-docs
But for now this is how it is. I will be happy to create an account for you at morevnaproject.org with write permissions, if you are willing to write a few lines about stereo 3D in RenderChan. ^___^
Yes a write access account with morevnaproject.org seems like it will work well for the short term. For the long term, I would probably recommend Github Pages. Read The Docs is good if we're going with something like Sphinx and you want to add API documentation, but from what I gather it does not look too useful for user-oriented documentation and probably wouldn't allow the same level of flexibility that Github Pages does.
OK, then I need to know your email to create an account. Please submit a request here - https://morevnaproject.org/about/contact/. Make sure to put some random text in the "Message" field. Then, create an issue at https://github.com/morevnaproject/RenderChan/issues and put the same random text as issue description. In this case I will know that request is really submitted by you. ^__^
I think it should suffice just to put the message here rather than making a completely new issue.
The message text is: CVx7TS8HdDjVX26MMvjh
Feel free to contact me through that email address if the need arises. It's my personal email so I don't like to have it public for spam and privacy reasons.
just don't like to have it public for spam and privacy reasons
Yeah, that's exactly why I have asked to submit a private request. ^__^
Welcome onboard! ^__^
Thank you very much ^_^
Novel – Fey Evolution Merchant
Chapter 343
He walked over to the spirit pool and found that Blackie was leisurely eating the Butterfly Shell flowers at the bottom of the pool.
This Bronze/Legend Oath Lily of the Valley was not something he would usually use, so there was no reason to take it out of the Spirit Lock spatial zone.
At that moment, Lin Yuan fished the Bronze Five Fortune Ranchus out of the spirit pool and placed the female ones that played well with Blackie into the aquarium, where the three Mountain River Eternal Life Carp were.
The Oath Lily of the Valley's appearance did not change much after evolving into Bronze/Legend. However, there was a tier of lustrous white light flowing among the white bell-like flowers that bloomed on the pagoda-shaped Oath Lily of the Valley the moment it reached Legend.
He did not know what changes would happen when the Dragon's Mouth Orchid was promoted from Epic to Legend.
It was as if all the white bell-shaped flowers lit up in a flash. It was like a tree of bells and nephrite with a bell-like form. There was a sense of sacredness.
The flowers that bloomed at Flawless were light blue, while those at Epic were dark blue.
Lin Yuan looked at the big black Legend Dragon's Mouth Orchid flower with great anticipation.
The Legend Oath Lily of the Valley's flowers were both a vessel for an oath and a curse that sealed a betrayer's Willpower Rune.
After it had reached Bronze, the Butterfly Shell flowers Lin Yuan fed it also reached another level.
The large black dragon-jaws-shaped flower was bigger than Lin Yuan's palm.
On seeing the splashes, Lin Yuan only felt that Blackie had greeted him the same way as the three Dragon-Phoenix Panorama Carps. He felt that if he had been at the pool's edge, the splashes would certainly have drenched him in water.
It was as if it was ready for a complete metamorphosis.
If it could be said that the Epic dark-blue Dragon's Mouth Orchid's stamen previously had a shiny, scaly wooden structure, there now seemed to be vibrant diamonds within this black Legend stamen.
The Spirit Lock spatial zone's resources were limited, so placing it there would only take up space.
Nonetheless, Lin Yuan did not expect that the flower of this Dragon's Mouth Orchid would actually turn black when it reached Legend.
This Legend Dragon's Mouth flower could certainly let Blackie cleanse its bloodline and greatly strengthen its trace of dragon-species fey bloodline. It could even fully suppress its fish-species bloodline, letting the dragon-species bloodline become its body's dominant bloodline.
It was a manifestation of wood crystallization.
Missing Publish option for F# dotnet core console applications
In Visual Studio Professional 2017 Version 15.5.6 the “Publish” option is missing for F# dotnet core console applications. The option is available neither from the project's context menu nor from the Build menu.
The option is available for C# dotnet core console applications.
This issue has been moved from https://developercommunity.visualstudio.com/content/problem/195889/missing-publish-option-for-f-dotnet-core-console-a.html
VSTS ticketId: 564517
These are the original issue comments:
(no comments)
These are the original issue solutions:
(no solutions)
Here are two screenshots. The first one shows the context menu of a C# project and the second one an F# project.
@TIHan @Pilchie Is the GUI gesture controlled in the project system?
cc @davkean, @BillHiebert who might know about what enables these (based on the fact they are recent contributors to https://github.com/dotnet/project-system :) )
I think we found where it was: http://vstfdevdiv:8080/DevDiv2/DevDiv/_versionControl?path=%24%2FDevDiv%2FOffcycle%2FWPT%2FWebToolsExtensions%2FMain/src/ManagedPublish/PublishProviders/Console/Provider/PublishProvider.cs
In WebToolsExtension. It should be as simple as adding "FSharp" string to the AppliesTo.
That capability string is a sign that we need to add a new capability for this...
Currently Publish specifically checks for capability - (CSharp|VB)&CPS. That is the reason it doesn't show up for fsharp projects.
Talked to @BillHiebert and we could enable Publish based on the presence of 'ManagedPublish' capability.
Don't combine capabilities; Managed and Publish already exist - so that would be Managed & Publish capability. However, Managed is overloaded because C++ uses it, let's come up with a new replacement for Managed (we need it for other reasons) and you can combine it with Publish.
Tagging this as external until we get a bug filed in the appropriate place
Sure, What ever capability name we decide we can add that.
@cartermp - This would be a VS feature request.
Two bugs:
change the provider's capability string
opt into said capability here: https://github.com/dotnet/project-system/blob/master/src/Microsoft.VisualStudio.ProjectSystem.Managed/ProjectSystem/DesignTimeTargets/Microsoft.Managed.DesignTime.targets#L70
@davkean @vijayrkn where should these be filed under?
First against us on project-system, second aganst web in VSTS. We're discussing internally the naming of "Managed" as we speak.
Closing as this is external - bug files on project system and @TIHan is filing a bug against Web.
Just a question, why is this related to Web? I thought this "publish" command is equivalent to the "dotnet publish" console command which "Packs the application and its dependencies into a folder for deployment to a hosting system."
This is independent of web or console or whatever, isn't it?
@chuchu VS tooling implementation is such that it is under the web tooling repository (which is internal and closed-source).
@cartermp Ok, thanks for clarification.
We've decided internally to use ".NET" capability as a replacement of CSharpOrVisualBasicOrFSharp usage.
Update staff_explode.lua
automatically create a new config file so the user doesn't have to
This should either be put in the configuration library as part of get_parameters or else this should not be there at all. I think scripts should behave consistently with regard to configuration.
The advantage of this change is:
The file gets created automatically making it easier for the user to find and edit it.
Disadvantage:
Scripts will litter the user's hard drive with config files without the user's knowledge.
Personally, I am in the "teach 'em to fish" camp. If a user is picky enough to want other than default behavior, they should learn how to manipulate config files for themselves. However, I am willing to be persuaded, provided there is some effort required from users to learn better how to control the monster on their desk (or lap).
Fair points, if a little strict, and this suggestion was only in response to a single complaint on the Facebook group. My counterargument is that it's sensible to save a new config file without the user's knowledge in a script wanting to retain user preferences, so is this materially different?
My impression is that the vast majority of Finale users already find installing RGPLua sufficiently daunting to prevent them going to the trouble in the first place, and creating little text files in tricky locations is now such an unfamiliar task that it's an even greater discouragement.
I'm not saying no to this. I just want more of our community to weigh in. Should all config files be created automatically?
I mean, as hard as I've worked on all this, I just don't have much patience for people who refuse to put in a little learning curve effort to save themselves hundreds of hours of miserable slogging.
Horse. Water. Drink.
If we can find a way to let users edit their configuration without opening up the config files (e.g., create a new script to do that for them), I'm all for saving the config files.
However, if we're just writing the config files to then directly read from, I think that's less than ideal. The only advantage I can see with that is that then when a user finds the config file, they know what the configuration parameters are.
If they don't go looking for it there are a few downsides:
We're creating files without any purpose
Users might be confused why we're writing files to their computer
Some people may even mistake this for something malicious and start to distrust Lua
I presume we're all using reasonable defaults for our configs. However, if we write all the config values down, we no longer are able to update those reasonable defaults as the script is modified since we have no idea if a user updated the config file or if a previous version of our script wrote the config file.
Users are accustomed to many hundreds of prefs files created or updated in their home folder without their knowledge including, as it happens, several by Patterson and TGTools plugins! (Plus JW's config files in the Finale folder). It's true that creating prefs file to reflect active user choices is more justifiable/reasonable, but otherwise the only thing I find concerning is an old prefs file overwriting revised script behaviour. For which I'd rename the prefs file to sync with the new script. If the user changed the original prefs they can also change the new one.
Based on a new pull request by @jwink75, I think I see a way out of this. We should distinguish between configuration and user settings. Configuration will continue to reside in script_settings, and only users will write to it. (Scripts never will.) User settings will reside in the user's settings directory and be fully writable at all times. Personally, I think @cv-on-hub should change his scripts that write settings to put the file in the user settings directory rather than the config directory.
That said, I think we need to have a naming convention for them, because writing random filenames in a preferences folder is very bad form. I would propose the following:
com.finalelua.<script-file-name>.config.txt
That all sounds very sensible. To clarify, by "user's settings directory" do you mean what MacOS calls ~/Library/Preferences/?
Should I now close or delete this PR?
I would just close it.
/* testvector.c ---
*
* Filename: testvector.c
* Description:
* Author: Bryce Himebaugh
* Maintainer: Yu Gao
* Created: Wed Oct 7 14:14:25 2015
* Last-Updated: Mar 23 2016
 * By: Yu Gao
 */
/* Code: */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include "machine.h"
#include "testvector.h"
#include "vector.h"
int dump_state(int vector) {
  char instr_name[5] = "????";   /* fallback so the name is never printed uninitialized */
printf("Test Vector Number %d\n",vector);
printf("Rn is R%d with value = 0x%08x\n",vectors[vector].rn,vectors[vector].rn_value);
printf("Rm is R%d with value = 0x%08x\n",vectors[vector].rm,vectors[vector].rm_value);
switch (vectors[vector].instruction) {
case ADCS:
strcpy(instr_name,"ADCS");
break;
case ADDS:
strcpy(instr_name,"ADDS");
break;
case SUBS:
strcpy(instr_name,"SUBS");
break;
case BICS:
strcpy(instr_name,"BICS");
break;
case ANDS:
strcpy(instr_name,"ANDS");
break;
case LSLS:
strcpy(instr_name,"LSLS");
break;
case ASRS:
strcpy(instr_name,"ASRS");
break;
default:
break;
}
printf("Expected operation is %s\n",instr_name);
printf("Expected Rn Value (result) = 0x%08x, Current Rn Value = 0x%08x\n",vectors[vector].expected_result,reg[vectors[vector].rn]);
printf("Expected Rm Value = 0x%08x, Current Rm Value = 0x%08x\n",vectors[vector].rm_value, reg[vectors[vector].rm]);
printf("PSR Before Instruction = 0x%08x\n",vectors[vector].previous_psr);
printf("Expected PSR After Instruction = 0x%08x, Actual PSR After Instruction = 0x%08x\n\n",vectors[vector].expected_psr,psr);
  return 0;
}
void test_instructions(int instr_type) {
int i;
int attempt_vector = 0;
int error_count = 0;
  for (i = 0; i < (int)(sizeof(vectors)/sizeof(vectors[0])); i++) {   /* element count, not a hard-coded byte size */
attempt_vector = 0;
// Load the operand registers with data
reg[vectors[i].rn] = vectors[i].rn_value;
reg[vectors[i].rm] = vectors[i].rm_value;
// Load the current state of the psr
psr = vectors[i].previous_psr;
switch (vectors[i].instruction) {
case ADCS:
if ((instr_type == ADCS) || (instr_type == ALL)) {
adcs(vectors[i].rn, vectors[i].rm);
attempt_vector = 1;
}
break;
case ADDS:
if ((instr_type == ADDS) || (instr_type == ALL)) {
adds(vectors[i].rn, vectors[i].rm);
attempt_vector = 1;
}
break;
case SUBS:
if (instr_type == SUBS) {
// subs(vectors[i].rn, vectors[i].rm);
attempt_vector = 1;
}
break;
case ANDS:
if ((instr_type == ANDS) || (instr_type == ALL)) {
ands(vectors[i].rn, vectors[i].rm);
attempt_vector = 1;
}
break;
case BICS:
if ((instr_type == BICS) || (instr_type == ALL)) {
bics(vectors[i].rn, vectors[i].rm);
attempt_vector = 1;
}
break;
case LSLS:
if ((instr_type == LSLS) || (instr_type == ALL)) {
lsls(vectors[i].rn, vectors[i].rm);
attempt_vector = 1;
}
break;
case ASRS:
if ((instr_type == ASRS) || (instr_type == ALL)) {
asrs(vectors[i].rn, vectors[i].rm);
attempt_vector = 1;
}
break;
default:
break;
}
if (attempt_vector) {
if (reg[vectors[i].rn] != vectors[i].expected_result) {
error_count++;
printf("----------------------------------------------------\n");
printf("ERROR: Result Value Error:\n");
dump_state(i);
}
if (psr != vectors[i].expected_psr) {
error_count++;
printf("----------------------------------------------------\n");
printf("ERROR: Status Flag Error:\n");
dump_state(i);
}
if (reg[vectors[i].rm] != vectors[i].rm_value) {
error_count++;
printf("----------------------------------------------------\n");
printf("ERROR: Rm modification Error:\n");
dump_state(i);
}
}
}
if (!error_count) {
printf("All Tests Passed!\n");
}
else {
printf("Error Count = %d\n",error_count);
}
}
/* testvector.c ends here */
Job market paper
Parents make choices that shape children's preferences and long-term outcomes. This paper presents the first experimental study of how parents make choices for their children in the domain of competition. A representative sample of more than 1600 Norwegian parents and adolescent children took part in an experiment where parents choose whether their child performs a task under a competitive or a noncompetitive pay scheme. The paper establishes a number of novel facts about parents' choices for children. First, parents choose more competition for boys than for girls. However, the gender gap in parents' choices is smaller than the gender gap in children's own choices. Second, two main mechanisms explain the gender gap in parents' choices: their beliefs about children's preferences and paternalistic behavior. Third, parents are more responsive to ability for boys than for girls, which implies that many high-ability girls do not enter competition. Fourth, parent gender matters: fathers are more likely than mothers to enter their child into competition. Finally, children are unaware of the gender bias in parents' choices and believe that parents will make the same choices for boys and girls. This set of findings sheds new light on the role of parents in determining children's long-term outcomes and on the intergenerational transmission of preferences.
Work in progress
Development, Family Background, and Gender Differences in Preferences (with Edward Miguel, data collection started)
In this project we create a unique data set of 10,000 Kenyan parents and children, which combines experiments on parents and children with a randomized intervention that substantially increased education and income levels 10 years later. We aim to contribute new insights to three main fields of research: (i) the relationship between socioeconomic status and gender differences in preferences; (ii) the intergenerational transmission of preferences; (iii) parental decision making.
Parents' Choices and Children's Educational Outcomes
I combine experimental data on how parents make competitiveness choices for their children with high-quality administrative data on parent income and education, and on children's education. Previous research finds that children's own competitiveness choices predict educational choices, and that controlling for competitiveness mitigates gender differences in education choices (Buser et al., 2014). This paper studies the role of parents' choices in predicting children's education choices.
Beliefs About Biases of Other People
This project explores the extent to which people are aware of biases and non-standard preferences of other people. I elicit beliefs for self and others for five established behavioral biases: loss aversion, overconfidence, naivete about present bias, projection bias and left-digit bias. Pilot results indicate that people are more sophisticated about biases of others than of themselves. In the second stage of the project I explore the extent to which knowledge of behavioral biases interacts with attitudes towards paternalism.
Umm, I don't know if you were being serious or trying to be funny, but I'm just going to assume you're being serious.
If you were 1000km below the Earth's surface (roughly 5300km away from the centre), imagine dividing the Earth into two separate pieces:
1. A sphere centered on the Earth's core, with a radius of ~5300km (i.e. everything "below" you)
2. The shell created by everything "above" you, with a hollow section in the middle (where the sphere from the point above would go)
Assuming that the Earth is a uniform sphere (i.e. no irregular shape and a spherically symmetric density, as close to a proper sphere as possible), then the shell described by point 2 would have no gravitational effect on you while you're inside it, because the pull it exerts on you from all directions cancels out exactly. You would only feel the gravity from what is "below" you (the sphere from point 1). Therefore, for any calculations we can disregard the outer shell that is "above" you and replace what is "below" you with a point mass (since, from your perspective, its gravity acts as if concentrated at a single point at the centre of the mass).
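This shell-theorem argument is easy to put into numbers. A minimal sketch, assuming a uniform-density Earth (the real Earth is denser toward the core, so actual interior values differ; the constants are standard approximate values):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24  # mass of the Earth, kg
R_EARTH = 6.371e6   # mean radius of the Earth, m

def gravity_at_depth(depth_m, M=M_EARTH, R=R_EARTH):
    """Gravitational acceleration at a given depth inside a uniform-density
    sphere. By the shell theorem, only the mass below radius r pulls on you;
    the outer shell cancels. With uniform density, M(r) = M * (r/R)**3, so
    g(r) = G * M(r) / r**2 = G * M * r / R**3."""
    r = R - depth_m
    return G * M * r / R**3

g_surface = gravity_at_depth(0)    # ~9.8 m/s^2 at the surface
g_deep = gravity_at_depth(1.0e6)   # 1000 km down: only the inner sphere pulls
```

Under this uniform-density assumption, gravity falls off linearly with depth, so at 1000 km down you would feel about 84% of surface gravity.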
^ Producing antimatter would be even less of a problem though.
It'll annihilate immediately unless sequestered, and it's pretty much impossible to produce in any significant quantity.
Since you people seem to be smarter than time, space, and several Steven Hawkins in a High School debate...
Can anyone explain to me what anti-matter is?
I always thought it was just the opposite of matter. A void substance if you will. I can't conceive why anti-matter would "explode." Why can't it be classified under the same elements? Why would anti-Oxygen explode?
^ Antimatter is formed from antiparticle analogs to matter (positron, antiproton, and antineutrons).
Each of those have equivalent or inverted properties, for example, same mass as their matter analogs, but opposite charge.
When they come into contact with their matter analog, they annihilate, converting their entire mass to energy and ceasing to exist (E=mc^2)
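The E=mc^2 conversion described above is straightforward to quantify. A minimal sketch; the one-gram figure is an illustrative assumption, not from the thread:

```python
C = 2.998e8  # speed of light, m/s

def annihilation_energy(antimatter_kg):
    """Energy released when a mass of antimatter annihilates with an equal
    mass of ordinary matter. Both masses convert entirely, so E = 2*m*c**2."""
    return 2 * antimatter_kg * C**2

# One gram of antimatter meeting one gram of matter:
energy_j = annihilation_energy(1e-3)   # ~1.8e14 J
kilotons = energy_j / 4.184e12         # ~43 kt of TNT equivalent
```

The factor of 2 is the key detail: the annihilating matter contributes just as much mass-energy as the antimatter itself.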
...So that's the reason they could theoretically be used for weapon purposes? The energy release generated in the "canceling out" process?
an antiparticle is a particle traveling backwards in time. light (ie photon) is the only substance that doesn't have an antiparticle because time is meaningless at the speed of light.
Originally Posted by Terasiel
In real life, antimatter would make a crappy weapon. First off, it's extremely difficult to create and store. It's only true advantage is that it's an efficient conversion of mass to energy (it has more power/mass compared to other weapons), and it's clean (when compared to nuclear or chemical explosives). However, since the delivery mechanism also has to incorporate a containment mechanism, that completely defeats the purpose of having a small but powerful weapon.
from SPARQLWrapper import SPARQLWrapper, JSON
import json
import requests
def setup_query(person_complete_name: str):
    """
    Return the SPARQL query for obtaining gender, birthdate and nationality (if available) of the given person from
    DBpedia
    :param person_complete_name: person whose metadata are of interest
    :return:
    """
    query_template = """
    SELECT *
    WHERE {{
        ?p foaf:name "{}"@en;
           foaf:gender ?gender;
           dbo:birthDate ?birthdate.
        optional {{ ?p dbp:nationality ?nationality_dbp }}
        optional {{ ?p dbo:nationality ?nationality_dbo }}
    }}
    """.format(person_complete_name)
    return query_template


def query_dbpedia_endpoint(person_complete_name, sparql):
    """
    Query the given SPARQL endpoint for obtaining metadata from the person of interest
    :param person_complete_name: person of interest
    :param sparql: SPARQL Wrapper that acts as an endpoint
    :return:
    """
    query = setup_query(person_complete_name)
    sparql.setQuery(query)
    sparql.setReturnFormat(JSON)
    query_results = sparql.query().convert()
    return query_results
def extract_metadata_from_query_results(query_results):
    """
    Given a SPARQL query result, extract nationality, gender and birthdate
    :param query_results:
    :return:
    """
    if query_results["results"]["bindings"]:
        raw_metadata = query_results["results"]["bindings"][0]
        gender = raw_metadata['gender']['value'].lower()
        birth_date = raw_metadata['birthdate']['value']
        if "nationality_dbp" in raw_metadata.keys():
            nationality = raw_metadata['nationality_dbp']['value'].lower()
        elif "nationality_dbo" in raw_metadata.keys():
            nationality = raw_metadata['nationality_dbo']['value'].lower()
        else:
            nationality = ""
        return birth_date, gender, nationality
    else:
        raise ValueError


def get_person_metadata(person_complete_name: str, endpoint: str):
    """
    Return a dictionary with gender, birth date and nationality of the person of interest
    :param person_complete_name: person of interest in the format "Name Surname"
    :param endpoint: which service to query
    :return:
    """
    if endpoint == "dbpedia":
        person_metadata = get_metadata_dbpedia(person_complete_name)
    elif endpoint == "wikidata":
        person_metadata = get_metadata_wikidata(person_complete_name)
    else:
        raise ValueError("Invalid endpoint")
    return person_metadata
def get_metadata_dbpedia(person_complete_name):
    """
    Return gender, birth date and nationality of the current person by querying DBpedia
    :param person_complete_name:
    :return:
    """
    sparql = SPARQLWrapper("http://dbpedia.org/sparql")
    query_results = query_dbpedia_endpoint(person_complete_name, sparql)
    try:
        birth_date, gender, nationality = extract_metadata_from_query_results(query_results)
        person_metadata = {"complete_name": person_complete_name,
                           "gender": gender,
                           "birth_date": birth_date,
                           "nationality": nationality}
    except ValueError:
        print("Could not get metadata for {}: is the person's name spelled correctly?".format(person_complete_name))
        person_metadata = {}
    return person_metadata


def get_wikidata_entities(person_complete_name):
    """
    Return all the plausible Entity IDs associated with the current person.
    IDs are ordered from the most likely to the least likely (according to Wikidata)
    :param person_complete_name:
    :return:
    """
    endpoint = "https://www.wikidata.org/w/api.php?action=wbsearchentities&search={}&language=en&format=json".format(
        person_complete_name)
    content = json.loads(requests.get(endpoint).content)
    entities = content['search']
    entities_ids = [entity['id'] for entity in entities]
    return entities_ids
def get_wikidata_properties(entity_id):
"""
Return birth date, gender and nationality of the given entity ID
:param entity_id: Wikidata Entity (e.g. Q10490 for Ayrton Senna)
:return:
"""
entity_endpoint = "https://www.wikidata.org/w/api.php?action=wbgetclaims&entity={}&format=json"
url_of_interest = entity_endpoint.format(
entity_id
)
content = requests.get(url_of_interest).content
content = json.loads(content)['claims']
# Birth date
birth_date = None
try:
birth_date = content['P569'][0]['mainsnak']['datavalue']['value']['time']
except KeyError:
print("Birth date not available")
except Exception as ex:
print(ex)
# Sex/gender
gender = None
try:
sex_entity = content['P21'][0]['mainsnak']['datavalue']['value']['id']
sex_entity_id_desc = {
"Q6581097": "male",
"Q6581072": "female",
"Q1097630": "intersex",
"Q1052281": "transgender female",
"Q2449503": "transgender male"
} # Source: https://www.wikidata.org/wiki/Property:P21
gender = sex_entity_id_desc[sex_entity]
except KeyError:
print("Gender not available")
except Exception as ex:
print(ex)
# Citizenship
citizenship = None
try:
country_entity = content['P27'][0]['mainsnak']['datavalue']['value']['id']
country_name_id = "P3417"
url_of_interest = entity_endpoint.format(
country_entity)
country_content = requests.get(url_of_interest).content
country_content = json.loads(country_content)['claims']
citizenship = country_content[country_name_id][0]['mainsnak']['datavalue']['value']
except KeyError:
print("Citizenship not available")
except Exception as ex:
print(ex)
person_metadata = {
"gender": gender,
"birth_date": birth_date,
"nationality": citizenship
}
return person_metadata
def get_metadata_wikidata(person_complete_name):
"""
Get birth date, gender and nationality (expressed with country name) for the given person
:param person_complete_name: Person you are interested in
:return:
"""
entities_ids = get_wikidata_entities(person_complete_name)
person_metadata = {}
for entity_id in entities_ids:
if not person_metadata:
try:
person_metadata = get_wikidata_properties(entity_id)
person_metadata['name'] = person_complete_name
except Exception as ex:
print(ex)
else:
break
if not person_metadata:
print("Could not get metadata for {}: is the person's name spelled correctly?".format(person_complete_name))
return person_metadata
|
STACK_EDU
|
- Finally found some time to get CloudFlare set up
- Qanon.news is sporting a fancy new SSL cert today.
- Retuned some logic to handle duplicate Q post numbers. With all the new threads, it was bound to happen.
- Fixed a few IE browser issues I had missed. Site should be IE compatible now.
- Back from being AFK
- Q helped me discover a bug in the scraper code having to do with a new thread on a private Q board. Fixed.
- Made some changes to the time search filter on the Q posts page. Before you could only search hh:mm. Now you can search hh: or :mm. Remember all times are in ZULU time - no matter what you have displaying!
- Known issue at our host has caused an annoying issue with the scrapes. Theoretically should be fixed up by EOB Monday.
- CORS on the API is back on in case that was what was causing the issue.
- Tons of bugfixes/tweaks
- Added markers for deltas on the Q/POTUS ScatterPlot. Get ya some Scatterposts
- Updated the full JSON Archive on the Archive page
- After noticing a lot of slowdowns, I implemented a caching solution. It should be running faster. The slowdown was causing the scrapes to run WAY longer than they had to, so I've been working on clearing that up.
- I built out some of the functionality to allow (you) to archive JSON or XML locally. Head over to the new page to try it out. You can save LocalViewer.html to your own hard drive and run it from there too.
- Get Everything will download everything. Get Latest will download everything you haven't downloaded already.
- Single breads can be downloaded by clicking the download icon.
- Various bugfixes/tweaks
- Added a new Analytic that shows all Q drops and POTUS tweets in a scatterplot. Hover for date/text. Get ya some Scatterposts
- Fixed a 'bug' that was causing me to miss the 17 second delta on #2246/2247. I'm now allowing for up to a -60 delta to account for network latency between twitter and 8ch.
- Fixed lost twitter deltas in archived posts.
- Added code to archive the TrumpTwitterArchive
- Removed /patriotsawoken/ from the scrape temporarily as it was causing problems.
- Added links to the 2 "Q -The Basics" PDFs
- Added a video randomizer on the homepage to cycle thru 3 jewels.
- Forgot about this changelog
- Updated a bunch of stuff, added links to archive.today for everything
- Fine tuned the wordbubble a bit
- Started working on a new analytic idea based on the wordbubble
- Fine tuned the code that finds post references to make it more reliable
- Planning on getting back on the db conversion
- Working on the 'hover over a referenced post and see the post in a thread archive' magic. It's not 100% cross breads since it's guessing but, it mostly works.
- Added this new 'splash' page rather than redirecting to the Q posts page. Includes the plan to save the world and 4 most recent /qresearch/ bumps.
- Decreased load time dramatically for the Archive page by dropping the JSON on the page rather than using the API.
- Removed the Custom google search on the Archive page and changed to a Lunr search instead.
- Lunr search on Archive does NOT include text of first post in index to speed up Index creation.
- Added the new paging feature to the Archive page to speed it up.
- Added the total archived reply count to the Archive page.
- Whitelisted Q's new tripcode '!A6yxsPKia.'
- Turned the twitter deltas [on] by default on the Q Posts page. Removed any deltas > 60. Highlight deltas [0, 1, 5, 10, 20]
- Fixed a nagging local bug where breads were not being archived due to post count discrepancies.
- Changed the image archive logic to archive all images from any bread that Q posts in. Rejiggered bread imageLinks src to archived in these cases.
- Still working on the database conversion project, but posts are being imported consistently now.
|
OPCFW_CODE
|
Nowadays, source control is a given in the software development world. But when you deal with database development, it can be a trickier question than it seems at first sight. In this post, I am going to show the two most common methods for version controlling database-related code: storing upgrade scripts or storing database state. After that, I will try to show the pros and cons of each solution.
Great old legacy way
Before we dig into the modern solutions, let's do a little flashback to the great old legacy ways. By legacy, I mean something that is not supposed to exist today, but that you have a great chance of facing when you work on old systems.
The worst possible: nothing at all
Joe was the person who took care of the database for 10 years. Joe left the company last week and nobody has information about the database at all. Maybe Joe saved some scripts on his already wiped hard drive.
That is the point when you are in trouble. There is no way to make this situation better. The only thing that can help you is to script out your current database state as soon as possible and save it. After you have made that kind of database structure back-up, you can decide how you will handle source control.
Congratulations, you became Joe. From this day, you are the database expert at your company!
Expectations from the version control system
There are several things that you, as a developer, can expect from your version control system:
- Able to restore a functional database on a computer after checking out the repository
- Able to restore an old state of the database by a given version or by date
- Review changes by using your source control system
- +1: it is possible to build a CI/CD pipeline based on your repository
Seed scripts deserve a special mention: these are basically insert scripts that fill tables with records that are necessary for your application to run.
Two ways: upgrade scripts or storing states
Upgrade scripts or change scripts: you store the exact SQL scripts that you are going to deploy. For example, when you want to add a new column to a table, there will be a script in your version control system that contains an alter table statement.
Storing states: your version control stores the current state of your database. When you deploy, some kind of software compares the previous state with the state that you want to reach and generates an upgrade script (often called a migration script) that will run during the release.
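To make the upgrade-script approach concrete, here is a minimal sketch (the table, column names, and versioning scheme are invented for illustration, not taken from any of the tools discussed below): versioned scripts are applied in order, and the database itself records the last applied version so a release only runs what is new.

```python
import sqlite3

# Hypothetical versioned upgrade scripts; in a real repository each would
# live in its own file under version control (e.g. 001_create_customer.sql).
MIGRATIONS = [
    (1, "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE customer ADD COLUMN email TEXT"),
]

def migrate(conn):
    # The database records which scripts have already run, so a release
    # only applies the scripts newer than the current state.
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version, sql in MIGRATIONS:
        if version > current:
            conn.execute(sql)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # re-running is a no-op: both versions are already recorded
```

Note that the exact alter table statement is fixed in the repository, which is the main selling point of this approach; a state-based tool would instead diff two schema snapshots and generate that statement for you at release time.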
Pros – upgrade scripts:
- Release scripts are versioned
- You can easily add rollback scripts
- Easy to support 24/7 databases

Cons – upgrade scripts:
- The actual database structure is hidden

Pros – storing states:
- The current database structure is always visible
- Easy to understand
- Can generate database schema comparisons based on version control information

Cons – storing states:
- Release scripts are auto-generated
Flyway: migration-script-based tool, available in free and commercial editions. Multiple databases are supported. Rollback scripts (undo migrations) are supported.
SQL Source Control: commercial software by Red Gate. Stores database states. Dedicated to SQL Server and has Management Studio integration. Supports shared and dedicated developer databases.
DbUp: .NET library that supports migration scripts. Free for commercial use. Supports multiple databases.
Worth mentioning:
SSDT (SQL Server Data Tools): Visual Studio integration. Generates its own XML schema from the database and stores database state. Free to use, SQL Server only.
Entity Framework: Open Source ORM tool created by Microsoft. It can generate up and down scripts with a CLI. Supports multiple databases. Supports both XML Schema mapping and POCOs with mapping attributes.
Sooner or later, if you have a database, you must take care of its version control.
Migration scripts are fine if rollback scripts and performance are musts. I would recommend using them in large enterprise applications where it is mandatory to minimize downtime and to be prepared to roll back in case an error occurs.
On the other hand, I would recommend a method that stores states and generates deployment scripts for start-ups or small projects. It is really easy to understand, but you lose the chance to fine-tune the exact script that will be deployed.
|
OPCFW_CODE
|
Borderless Window Covers Taskbar
I have a custom-made borderless window. When maximized, it covers the taskbar. This is not what I want. I have played with the WM_GETMINMAXINFO message. But, I have found that Windows 10 will then leave an extra 8-pixel gap along both the bottom and right side. It is an all-or-nothing proposition. Here is the first code that I tried:
case WM_GETMINMAXINFO:
    PMINMAXINFO pmm;
    pmm = (PMINMAXINFO)lParam;
    pmm->ptMaxSize.x = GetSystemMetrics(SM_CXSCREEN);
    pmm->ptMaxSize.y = GetSystemMetrics(SM_CYSCREEN);
    return 0;
The result of this is identical to what I had without hooking the WM_GETMINMAXINFO message. So, I knocked two pixels off of the bottom, so I could access the taskbar (which is in "autohide" mode):
case WM_GETMINMAXINFO:
    PMINMAXINFO pmm;
    pmm = (PMINMAXINFO)lParam;
    pmm->ptMaxSize.x = GetSystemMetrics(SM_CXSCREEN);
    pmm->ptMaxSize.y = GetSystemMetrics(SM_CYSCREEN)-2;
    return 0;
Suddenly, I have a 10-pixel gap on the bottom, and a new 8-pixel gap on the right side! This appears to be a Windows 10 thing, as it never happened with Win7. I have also tried SystemParametersInfo, calling SPI_GETWORKAREA (instead of GetSystemMetrics()). This yields the same results.
From what I gather, the problem is not with WM_GETMINMAXINFO. Instead, I need to put a command into my code, to keep the taskbar on top. I have searched through the windows styles. But, I have found nothing of help there.
Does anyone know how to fix this critical problem?
How about handling WM_WINDOWPOSCHANGING and adjusting the window size there? The documentation behind the link tells you how.
Maybe take a look at GLFW. It's a very nice window abstraction library that should probably handle this type of problem (https://github.com/glfw/glfw).
Windows adds an invisible border around non-maximized windows. You can get the size of this border using DwmGetWindowAttribute with the DWMWA_EXTENDED_FRAME_BOUNDS flag.
Well, I found the answer in a most unlikely place. Someone was trying to manipulate borders with Python code. From their attempt, I was able to devise a solution for a borderless window, in C++. Here is the result:
To start, I created a window with the WS_OVERLAPPEDWINDOW | WS_VISIBLE style, to enable all Windows functions. I then handled the WM_NCCALCSIZE message with this code:
case WM_NCCALCSIZE:
    {
        WINDOWPLACEMENT wp;
        LPNCCALCSIZE_PARAMS szr;
        wp.length = sizeof(WINDOWPLACEMENT);
        GetWindowPlacement(hWnd, &wp);
        szr = LPNCCALCSIZE_PARAMS(lParam);
        if (wp.showCmd == SW_SHOWMAXIMIZED) szr->rgrc[0].bottom -= (WFRAME+2);
        return 0;
    }
In the code above, I subtracted the width of the border from the bottom of the first rectangle. The extra 2 pixels were added, to expose the bit of the auto-hidden taskbar. The maximized window now acts as it should, allowing access to the taskbar.
To create my virtual client area in this borderless window, I added this bit of code to both the WM_CREATE and WM_SIZE handlers:
WINDOWPLACEMENT wp;
GetWindowRect(hWnd, &rWnd);
GetClientRect(hWnd, &rClient);
wp.length = sizeof(WINDOWPLACEMENT);
GetWindowPlacement(hWnd, &wp);
rClient.left += WFRAME; rClient.right -= WFRAME; rClient.top += (WFRAME+cyMenu);
if (wp.showCmd == SW_SHOWNORMAL) rClient.bottom -= WFRAME;
cxClient = rClient.right-rClient.left;
cyClient = rClient.bottom-rClient.top;
The element cyMenu is a space reserved for my virtual menu bar. It will contain a series of buttons, simulating the menu and min/max/close buttons.
I am glad you have got your solution, and thanks for sharing it. I would appreciate it if you could mark it as the answer; this will be beneficial to the rest of the community.
Yes, I will mark it as an answer. Unfortunately, when I answer one of my own questions, this site will not allow me to mark it as an "Answer", until after 24 hours.
You can get a monitor's dimensions: total and work area. You are looking for the work area dimensions.
|
STACK_EXCHANGE
|
More and more, when I share an interesting article on Twitter or Facebook, I have completely lost track of how I got there. That bothers me, because I think it is important to return the favor (however small) to those who enrich my reading.
I have about 20 tabs open in Chrome with articles to read. And then, I have a scary number of links stacked away in Instapaper and (OMG how will I retrieve them all) many more in my Twitter favorites.
My sources for reading this day? My facebook news stream, Twitter, Tumblr, the odd e-mail from my Dad (he’s the one who pointed me to the BBC piece on the Ugly Indians of Bangalore — check out my post about them — amongst other things). I’ve signed up for Summify and though I have barely set it up, I find good reading in the daily e-mail summary it sends me. I can also see that Flipboard is going to become a source of choice for me once I’m back in Switzerland and have normal data access on my phone. And of course, once I’m reading an article, I click interesting links in it and often find other interesting articles in the traditional “related” links at the end.
Why am I telling you all this?
I believe it’s important to give credit to those who point me to stuff interesting enough that I want to point others to it. The traditional “hat tip” or “via” mention. But I’m finding it more and more difficult to remember how I got to a particular page or article. Actually, most of the time, by the time I’m ready to reshare something, I have no clue how I arrived there.
This happened in the good old days when blogging was the only king of online self-expression, of course, but less often, I think. Our sources were more limited, concentrated in one place: the aggregator. Shared by fewer people, in a more "personal" way (how much personal expression is there in a tweet that merely states the title of an article and gives you the link?). When I click an article in my Facebook newsfeed, I don't often pay attention to who shared it. It's just there.
So, I wish my open tabs had some way of remembering where they came from. That, actually, is one of the reasons I like using Twitter on my phone, because the links are opened in the same application, and when I go back I see exactly which tweet I clicked the link from. Sadly, sharing snippets to Tumblr (something that’s important to me) does not exactly work well inside the mobile Twitter app.
Is anybody working on this? Is this an issue you care about too? I’d love to hear about it.
|
OPCFW_CODE
|
Unique Cape York Animals
Many Cape York animals are the same as the animals in the rest of north Queensland, and the rest of Australia.
But thanks to the recent land bridges between the tip of Cape York and the neighbouring New Guinea, there are some species of birds, plants and animals that are only found in Cape York and New Guinea, and nowhere else in the world.
Cape York is quite famous for that fact, and many people want to see those birds and animals on their trip.
The thing is, they are not as easy to see as some other, more common and more obvious animals, but if you want to put some effort into it and go spotlighting and bird watching, it is not impossible to see them.
Unique Cape York Animals - Mammals
Some of the most famous ones are the Cuscuses and Striped Possums.
The striped possum is found in Daintree rainforests as well as in the McIllwraith and Iron Range rainforests. Another very special animal is the cuscus - an animal that is found nowhere else than in Cape York and Papua New Guinea. Neither of them is easy to come across, unless you go spotlighting at night (or have some extreme luck at dusk or dawn).
Spotted Cuscus. By Michael Pennay via Flickr.com
Unique Cape York Animals - Reptiles
Another rare animal only found here is the green tree python, which may first look like the much more common green tree snake, which is found in tropical rainforests further south. Green Python is only found in Iron Range and McIllwraith Range rainforests.
It is not common to spot, but again, if you go spotlighting at night you may see it. I have spotted it in Iron Range National Park. It tends to climb on branches in the bush only about one metre above the ground. They seem to be territorial - the same individual can be found in the same place every night.
Green Tree Python. ©cape-york-australia.com
Unique Cape York Animals - Birds
Eclectus Parrots are one of the rare birds that are only found up here. Males are green and females are red - I have read that this is because females sit in the nest and their colour acts as a warning to any intruders, while males fly around foraging (and feeding the female), so the green colour is perfect camouflage.
They were the birds that were taken in the old days from the famous Smugglers Tree in Iron Range National park.
Eclectus parrots are only found in the rainforests in eastern Cape York.
Eclectus Parrot. ©cape-york-australia.com
More Unique Cape York Birds
Another species of bird that is found nowhere else in Australia is the famous Palm Cockatoo. They are by far the largest cockatoos in Australia. They are also known to be smarter than other cockatoos, and to have a more complicated social structure.
They are not very easy to see on a brief trip to Cape York. If you really want to see them, ask locals. They know the area around where they live, and often know where Palmies hang around (and what time of day). Like other birds, they are easiest to spot when they are most active - at dusk and dawn.
I have never actually seen one from close enough to get a good photo (yeah, looking at the one below.. :-). I seem to have the luck to spot them from distance, the one below was on the northern bank of Jardine River.
Palm Cockatoo. ©cape-york-australia.com
If you liked the books or this website, let others know about it!
Link to it from your website, your blog, your forum post... Share it on Facebook, Tweet about it...
Every link helps other travellers!
Thank you for doing the right thing and letting others know :-)
|
OPCFW_CODE
|
For those of you new to Queue, Kode Vicious is our monthly column devoted to the practice of programming. This is where our resident code maven responds to your questions about everything from debugging to denial-of-service attacks. Whatever your concern, Kode Vicious will break it down, sort it out, and, we hope, set you straight. Have a question for KV? E-mail him at email@example.com and let him know what’s bugging you. If we print your question, we’ll send you a special piece of Queue memorabilia.
I’ve done a one-day intro class and read a book on Java but never had to write any serious code in it. As an admin, however, I’ve been up close and personal with a number of Java server projects, which seem to share a number of problems:
Is there any data showing that Java projects are any more or less successful than those using older languages? Java does have heavy commercial support, as well as the noble aim of helping programmers reduce certain types of errors. But as professional programmers, we use sharp tools, and they are dangerous for exactly the reasons they are useful. Trying to protect everyone from “level 1” programmer errors seems very limiting to me.
I keep seeing projects to replace legacy apps start amid fanfare and hoopla—and with significant budgets—using the most “modern” techniques, only to end up being cancelled or only partially implemented.
Am I missing something?
Run Down With Java
Dear Run Down,
Having taken a course on Java and read a book on it, you’re actually ahead of old KV on the Java wave. I’m still hacking C, Python, and bits of PHP for the most part. Given your comments, perhaps I’m lucky, but somehow I doubt that. I’m rarely lucky.
I could almost reprint your letter without comment, but I think there are larger issues that you raise, and I really can’t let these things go without commenting or, perhaps, screaming and tearing my hair out. It turns out that shaving my head has helped with all those bald patches I got from tearing my hair out.
As a reader of KV, you’ve probably already realized that I rarely bash languages or make comparisons among them, and I’m going to stick to my guns on that, even in this response. I don’t believe the majority of the problems you’re seeing come from Java itself, but from how it is used, as well as the way in which the software industry works at this point in time.
The closest I’ve come to Java was to work on a project to build some lower-level code in C that would be managed by a Java application. There were two teams: one that wrote the systems in C, which could operate independently of the Java management application; and one that wrote in Java. Now, you would expect that the Java team and the C team would have met on a regular basis, and that they would have exchanged data and designed documents so that the most effective set of APIs could be built to manage the lower-level code efficiently. Well, you would be wrong. The teams worked nearly independently, and most of the interactions were disastrous. There were many reasons for this, some of which were traditional management problems; but the real reason for this “failure to communicate” was that the two teams were on two different worlds and no one wanted to string a phone line between them.
The Java team members were all into abstraction. Their APIs were beautiful creations of sugar and syntax that scintillated in the sunshine, moving everyone to gaze in wonder. The problem was that they didn’t understand the underlying code they were interacting with, other than to know what the data types and structure layouts were. They did not have a deep appreciation of what their management application (so-called) was supposed to manage. They made grand assumptions, often wrong, and when they ran their code it was slow, buggy, and crashed a lot.
The C team wasn’t perfect either. There was a certain level of arrogance, shocking I know, toward the Java team—and although information wasn’t hidden, it was certainly the case that if the C engineers thought the Java engineers didn’t “get it,” they would just throw up their hands and walk away. The C team did produce code that shipped and worked well. The problem was that the goal of the company was to build an integrated set of products that could be managed by a single application. Although the C team won the battle, the company lost the war.
Someone looking at the code as it was delivered might have thought, “Well, the Java programmers just weren’t up to the task; next time hire better programmers, or get better tools or...” The fact is, that’s not the real problem here. The problem isn’t Java; it was the fact that the people building the system could produce a lot of lines of code but didn’t understand what they were building.
I have seen this problem many times. It often seems that projects are planned like some line from an old Judy Garland/Mickey Rooney musical. One character says to the other, “Hey kids, let’s put on a show!” It always works in the movies, but as a project plan it rarely leads to people living happily ever after.
To build something complex, you have to understand what you’re building. The legacy applications you mention are another great example. Ever seen a company convert a legacy app? I hope not; it’s not very fun. Here’s the way legacy conversion goes: You have a program that works. It does something. You may have the source code, or you may not. No laughing now, I’ve seen this. When the legacy program runs, it does what it should, most of the time. Next the team comes in and tries to dissect what the program does and then reproduce it, with bug-for-bug compatibility, and they find that their modern techniques don’t reproduce the same bugs in the right way. So they get to a point where the program sort of works, or sort of doesn’t, and then they usually give up and reimplement whatever it was, from scratch.
One of the reasons such travesties can continue to occur is that unlike in any engineering discipline in the real world (think aeronautics or civil engineering), failure just means a loss of money.
Now, when I say “just,” that can be a big just. The overhaul of the IRS computer systems cost millions in overruns, as did the system developed for the Department of Motor Vehicles in California. There is a laundry list of such failed projects to choose from. These may make headlines for a while, but they’re not quite on the level of a bridge failing, like the Tacoma Narrows, or the space shuttle exploding, twice. People generally remember where they were when the space shuttle Challenger blew up, but they don’t remember where they were when they heard about an IRS computer cost overrun.
With more and more computers and software being put into mission-critical systems, perhaps this attitude will change with time.
Unfortunately, we’re going to need a few more spectacular failures, likely with a human instead of monetary cost attached, before people put more time into planning what they do and figuring out what their code is actually meant to be doing. Once we do that, the fact that we’re using Java or Perl or the language du jour will have a lot less effect and will probably be discussed a lot less as well.
KODE VICIOUS, known to mere mortals as George V. Neville-Neil, works on networking and operating system code for fun and profit. He also teaches courses on various subjects related to programming. His areas of interest are code spelunking, operating systems, and rewriting your bad code (OK, maybe not that last one). He earned his bachelor’s degree in computer science at Northeastern University in Boston, Massachusetts, and is a member of ACM, the Usenix Association, and IEEE. He is an avid bicyclist and traveler who has made San Francisco his home since 1990.
Originally published in Queue vol. 4, no. 9—
see this item in the ACM Digital Library
Follow Kode Vicious on Twitter
Have a question for Kode Vicious? E-mail him at firstname.lastname@example.org. If your question appears in his column, we'll send you a rare piece of authentic Queue memorabilia. We edit e-mails for style, length, and clarity.
|
OPCFW_CODE
|
Recent investigations show that a majority of crypto traders fail during their initial stages. One of the main reasons many traders lose money is falling into emotional traps and mismanaging their feelings. Traders tend to fall prey to unwanted market hype and FOMO trends. As the crypto market is volatile, it is necessary to conduct in-depth research before investing in any digital asset.
Let us help you to transform your trading performance from a beginner to an expert. Continue reading!
If you are a crypto trader, then you might know how challenging it is to suppress emotions while making crucial trading decisions. The anxiety and pressure in such circumstances are real. Traders tend to hold on to a failing position, leading to hasty and illogical trading behavior. However, there are some ways to overcome and control the overflow of trading emotions.
One of the best-yielding ways to minimize losses is to opt for automatic trading methods. In this technique, you have to choose a special computer software called “crypto trading bots.” However, if you are a newbie, you may fear that the bot you choose might turn out to be a scam. If so, we recommend you develop your own bot using an Automated trading bot development service.
How Does an Automated Crypto Trading Bot Work?
As we said earlier, novice traders are quite unsure about bots. Still, automated trading strategies like arbitrage trading and decentralized exchange trading are growing exponentially. However, you need to do a few things to extract the full potential of these automated bots.
Crypto bots are just computer software programs that constantly keep an eye on market conditions and execute trades according to predefined algorithms. By doing this, we can achieve high-frequency and automated trading. If you take the traditional financial market as an example, automated bots have been used there for decades. In fact, as per Deutsche Bank, 80% of cash-equity and 90% of equity-futures trades were conducted by automated algorithmic trading. Isn’t it enough to understand the growing adoption of trading bots?
Also, some trading bots can go even further by incorporating trading signals into their curriculum. With this, trading bots can improve their overall performance and precision. Additionally, some bots also perform copy trading to yield fruitful results.
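To illustrate what "predefined algorithms" means in practice, here is a toy sketch (the moving-average rule, its window sizes, and the price stream are all invented for this example, and are not trading advice or any real bot's logic): a rule-based bot is essentially a loop that evaluates a fixed rule against incoming prices and emits a decision with no emotion involved.

```python
def sma(prices, n):
    """Simple moving average of the last n prices."""
    return sum(prices[-n:]) / n

def decide(prices, short=3, long=5):
    """Predefined rule: buy when the short-term average is above the
    long-term average, sell when it is below, otherwise hold."""
    if len(prices) < long:
        return "hold"  # not enough data yet
    fast, slow = sma(prices, short), sma(prices, long)
    if fast > slow:
        return "buy"
    if fast < slow:
        return "sell"
    return "hold"

# Feed the rule a made-up price stream, tick by tick, as a live bot would.
prices = []
for price in [100, 101, 102, 101, 99, 97, 96]:
    prices.append(price)
    print(price, decide(prices))
```

A real bot wraps exactly this kind of rule in exchange API calls and risk limits; the point is that the decision function never panics, never gets greedy, and evaluates every tick the same way.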
How Do Crypto Trading Bots Address Emotions While Trading?
Unlike humans, bots are highly resistant to emotions. At the same time, they can work all day without tiring. So the trader doesn't need to monitor the market 24/7, because the bot does it for them. Ultimately, bots eliminate the negative impact of emotions like fear, anxiety, or greed on trading, making decisions based only on statistical research and predefined rules.
Above all, trading bots can investigate a massive amount of data in a short time, simultaneously keeping track of market happenings.
For some traders, automated trading might be the best-fitting way to take their trading performance from beginner to advanced level. New traders who have a basic understanding of market orders and research indicators can start trading with bots. With their high-performing ability, crypto bots might be the right place to start your trading journey as a beginner. You can develop your own trading bot with customized trading strategies with the help of a cryptocurrency trading bot development company.
|
OPCFW_CODE
|
#!/usr/bin/env bash
set -eo pipefail
info() {
local msg=$1
echo "==> ${msg}"
}
main() {
local user=flynn-test
local dir=/opt/flynn-test
local bin_dir=${dir}/bin
local build_dir=${dir}/build
local test_dir="$(cd "$(dirname "$0")" && pwd)/.."
local scripts_dir="${test_dir}/scripts"
info "Creating user ${user}"
if ! id ${user} >/dev/null 2>&1; then
useradd --system --home ${dir} --user-group --groups kvm -M ${user}
fi
info "Creating directories"
mkdir -p ${bin_dir} ${build_dir}
info "Mounting build directory"
if ! mount | grep -q "tmpfs on ${build_dir}"; then
mount_tmpfs ${build_dir}
fi
info "Building root filesystem"
if [ ! -f ${build_dir}/rootfs.img ]; then
${test_dir}/rootfs/build.sh ${build_dir}
fi
info "Copying apps and assets"
rsync -avz --quiet "${test_dir}/apps" ${dir}
rsync -avz --quiet "${test_dir}/runner/assets" ${dir}
info "Fixing permissions"
chown -R ${user}:${user} ${dir}
info "Installing Upstart job"
cp "${scripts_dir}/upstart.conf" /etc/init/flynn-test.conf
if [ ! -f "/etc/default/flynn-test" ]; then
cp "${scripts_dir}/defaults.conf" "/etc/default/flynn-test"
fi
initctl reload-configuration
info "Stopping current runner"
stop flynn-test 2>/dev/null || true
info "Installing test runner binary"
cp ${test_dir}/bin/flynn-test-runner ${bin_dir}
info
info "Setup finished"
info "You should edit /etc/default/flynn-test and then start flynn-test (sudo start flynn-test)"
}
mount_tmpfs() {
local dir=$1
local size=32G
mount -t tmpfs -o size=${size} tmpfs ${dir}
}
main
|
STACK_EDU
|
Neo4j query monitoring / profiling for long running queries
I have some really long running queries. Just as background information: I am crawling my graph for all instances of a specific meta path, for example, counting all instances of a specific meta path found in the graph.
MATCH (a:Content)-[:isTaggedWith]->(t:Term)<-[:isTaggedWith]-(b:Content) RETURN count(*)
In the first place, I want to measure the runtimes. Is there any possibility to do so, especially in the Community Edition?
Furthermore, I have the problem that I do not know whether a query is still running in Neo4j or whether it has already been terminated. I issue the query from a REST client, but I am open to other options if necessary. For example, I queried Neo4j with a REST client and set the read timeout (client side) to 2 days. The problem is that I can't verify whether the query is still running or whether the client is simply waiting for a Neo4j answer that will never appear because the query might already have been killed in the backend. Is there really no possibility to check from the browser or another client which queries are currently running, ideally with an option to terminate them as well?
Thanks in advance!
What would be the reason why a query would be killed in the backend? Personally I'm using the Bolt driver to do queries from a Java process, and measuring a query time would be as easy as recording the start and end time of the query.
For measuring long running queries I figured out the following approach:
Use a tmux (tmux crash course) terminal session, which is really very easy. Hereby, you can execute your query and close the terminal. Later on you can get back the session.
New session: tmux new -s *sessionName*
Detach from current session (within session): tmux detach
List sessions: tmux ls
Re-attach to session: tmux a -t *sessionName*
Within the tmux session, execute the query via the cypher shell, either directly in the shell or by piping the command into the shell. The latter approach is preferable because you can use the unix command time to actually measure the runtime as follows:
time cat query.cypher | cypher-shell -u neo4j -p n > result.txt
The file query.cypher simply contains the regular query, including the terminating semicolon at the end. The result of the query will be piped into result.txt and the runtime of the execution will be displayed in the terminal.
Moreover, it is possible to list the running queries only in the enterprise edition as correctly stated by @rebecca.
Measuring Query Performance
To answer your first question, there are two main options for measuring the performance of a query. The first is to use PROFILE; put it in front of a query (like PROFILE MATCH (a:Content)-[:isTaggedWith]->(t:Term)...), and it will execute the query and display the execution plan used, including the native API calls, number of results from each operation, number of total database hits, and total time of execution.
The downside is that PROFILE will execute the query, so if it is an operation that writes to the database, the changes are persisted. To profile a query without actually executing it, EXPLAIN can be used instead of PROFILE. This will show the query plan and native operations that will be used to execute the query, as well as the estimated total database hits, but it will not actually run the query, so it is only an estimate.
Checking Long Running Queries (Enterprise only)
Checking for running queries can be accomplished using Cypher in Enterprise Edition: CALL dbms.listQueries;. You must be logged in as an admin user to perform the query. If you want to stop a long-running query, use CALL dbms.killQuery() and pass in the ID of the query you wish to terminate.
Note that besides manual killing of a query and timeout of it based on the configured query timeout, unless you have something else set up to kill long-runners, the queries should, in general, not be getting killed on the backend; however, with the above method, you can double-check your assumptions that the queries are indeed executing after sending.
These are available only in Enterprise Edition; there is no way that I am aware of to use these functions or replicate their behavior in Community.
Thanks a lot for the detailed answer. But is my assumption correct, that listQueries() and killQuery() commands are only available in the enterprise edition?
And sorry for the noob question :) But from where do I actually call the listQueries commands etc.? I tried to do this via the Neo4j web browser as well as via the console in the Neo4j webadmin. The CALL command is not recognized.
@Janukowitsch you are correct, those are only available in enterprise. I have updated my answer to reflect this. Assuming that they are available though, you can use them anywhere that you can put in Cypher; at the web interface, over bolt, over the rest API...
|
STACK_EXCHANGE
|
// Copyright 2013, Beeri 15. All rights reserved.
// Author: Roman Gershman (romange@gmail.com)
//
#include "util/executor.h"
#include <atomic>
#include <event2/event.h>
#include <event2/thread.h>
#include <pthread.h>
#include <signal.h>
#include "base/logging.h"
#include "base/sync_queue.h"
#include "util/proc_stats.h"
#define PTHREAD_CALL(x) \
do { \
int my_err = pthread_ ## x; \
CHECK_EQ(0, my_err) << strerror(my_err); \
} while(false)
static constexpr int kThreadStackSize = 65536;
namespace util {
static pthread_once_t eventlib_init_once = PTHREAD_ONCE_INIT;
static Executor* signal_executor_instance = nullptr;
static void InitExecutorModule() {
CHECK_EQ(0, evthread_use_pthreads());
}
static void ExecutorSigHandler(int sig, siginfo_t *info, void *secret) {
LOG(INFO) << "Caught signal " << sig << ": " << strsignal(sig);
if (signal_executor_instance) {
signal_executor_instance->Shutdown();
}
}
class Executor::Rep {
event_base* base_ = nullptr;
base::sync_queue<std::function<void()>> tasks_queue_;
std::vector<pthread_t> pool_threads_;
pthread_t event_loop_thread_;
pthread_cond_t shut_down_cond_ = PTHREAD_COND_INITIALIZER;
pthread_mutex_t mutex_ = PTHREAD_MUTEX_INITIALIZER;
bool shut_down_;
std::atomic_bool start_cancel_; // signals worker threads that they should stop running.
uint32 poolthreads_finished_count_; // number of worker threads that finished their run.
// signals each time a worker thread finished.
pthread_cond_t finished_pool_threads_ = PTHREAD_COND_INITIALIZER;
static void* RunEventBase(void* me);
static void* RunPoolThread(void* me);
public:
Rep() {
shut_down_ = false;
start_cancel_ = false;
poolthreads_finished_count_ = 0;
base_ = CHECK_NOTNULL(event_base_new());
pthread_attr_t attrs;
PTHREAD_CALL(attr_init(&attrs));
PTHREAD_CALL(attr_setstacksize(&attrs, kThreadStackSize));
PTHREAD_CALL(create(&event_loop_thread_, &attrs, Executor::Rep::RunEventBase, this));
PTHREAD_CALL(setname_np(event_loop_thread_, "EventBaseThd"));
PTHREAD_CALL(attr_destroy(&attrs));
}
~Rep() {
StartCancel();
WaitShutdown();
event_base_free(base_);
}
event_base* base() { return base_; }
void StartCancel() {
start_cancel_ = true;
event_base_loopexit(base_, NULL); // signal to exit.
}
bool was_cancelled() const { return start_cancel_; }
void SetupThreadPool(unsigned num_threads) {
CHECK(pool_threads_.empty());
CHECK_GT(num_threads, 0);
pool_threads_.resize(num_threads);
char buf[30] = {0};
pthread_attr_t attrs;
PTHREAD_CALL(attr_init(&attrs));
PTHREAD_CALL(attr_setstacksize(&attrs, kThreadStackSize));
for (unsigned i = 0; i < num_threads; ++i) {
PTHREAD_CALL(create(&pool_threads_[i], &attrs, Executor::Rep::RunPoolThread, this));
snprintf(buf, sizeof(buf), "ExecPool_%u", i);
PTHREAD_CALL(setname_np(pool_threads_[i], buf));
}
PTHREAD_CALL(attr_destroy(&attrs));
}
void WaitShutdown() {
PTHREAD_CALL(mutex_lock(&mutex_));
// We do not use pthread_join because it cannot be used from multiple threads.
// Here we allow the flexibility for several threads to wait for the loop to exit.
while (!shut_down_) {
PTHREAD_CALL(cond_wait(&shut_down_cond_, &mutex_));
}
while (poolthreads_finished_count_ < pool_threads_.size()) {
PTHREAD_CALL(cond_wait(&finished_pool_threads_, &mutex_));
}
PTHREAD_CALL(mutex_unlock(&mutex_));
}
void Add(std::function<void()> f) {
if (was_cancelled())
return;
tasks_queue_.push(f);
}
};
void* Executor::Rep::RunEventBase(void* arg) {
Executor::Rep* me = (Executor::Rep*)arg;
int res;
while ((res = event_base_dispatch(me->base_)) == 1) {
pthread_yield();
}
VLOG(1) << "Finished running event_base_dispatch with res: " << res;
PTHREAD_CALL(mutex_lock(&me->mutex_));
me->shut_down_ = true;
PTHREAD_CALL(cond_broadcast(&me->shut_down_cond_));
PTHREAD_CALL(mutex_unlock(&me->mutex_));
return NULL;
}
void* Executor::Rep::RunPoolThread(void* arg) {
Executor::Rep* me = (Executor::Rep*)arg;
while (!me->start_cancel_) {
std::function<void()> val;
bool res = me->tasks_queue_.pop(5, &val);
if (res) val();
}
char buf[30] = {0};
pthread_getname_np(pthread_self(), buf, sizeof buf);
VLOG(1) << "Finished running ThreadPool thread " << buf << " with " << me->tasks_queue_.size();
PTHREAD_CALL(mutex_lock(&me->mutex_));
++me->poolthreads_finished_count_;
PTHREAD_CALL(cond_broadcast(&me->finished_pool_threads_));
PTHREAD_CALL(mutex_unlock(&me->mutex_));
return NULL;
}
Executor::Executor(unsigned int num_threads) {
pthread_once(&eventlib_init_once, InitExecutorModule);
rep_.reset(new Rep());
if (num_threads == 0) {
uint32 num_cpus = sys::NumCPUs();
if (num_cpus == 0)
num_threads = 2;
else
num_threads = num_cpus * 2;
}
rep_->SetupThreadPool(num_threads);
}
Executor::~Executor() {
}
event_base* Executor::ebase() {
return rep_->base();
}
void Executor::Add(std::function<void()> f) {
rep_->Add(f);
}
void Executor::Shutdown() {
rep_->StartCancel();
}
void Executor::WaitForLoopToExit() {
rep_->WaitShutdown();
}
void Executor::StopOnTermSignal() {
signal_executor_instance = this;
struct sigaction sa;
sa.sa_sigaction = ExecutorSigHandler;
sigemptyset (&sa.sa_mask);
sa.sa_flags = SA_RESTART | SA_SIGINFO;
sigaction(SIGINT, &sa, NULL);
sigaction(SIGTERM, &sa, NULL);
}
Executor& Executor::Default() {
static Executor executor;
return executor;
}
} // namespace util
|
STACK_EDU
|
Well, what can I say except wow, awesome, brilliant?
We’ve just wrapped up the first We <3 Games weekend, held in Rochester and it was, quite simply, incredible.
As readers of my blog will know, I often run half-day hands on labs and weekend camps around game development for the Xbox 360, Windows and Windows Phone 7. Usually, the weekend camps are entertaining and educational in nature, but when I visited Rochester Institute of Technology late last year and met with Professor Andy Phelps, I realized that wouldn’t work there. RIT’s game design department includes XNA as a compulsory first year course, so introducing the students there to, well, XNA, would likely not go down well.
So, we collaborated on a new plan. Professor Phelps was super keen to have a Microsoft event on campus, so we both really wanted something, but it was going to have to be something different.
The new plan? A 48 hour challenge to build a game over a weekend. There are other events like this around the world, the most famous one being Global GameJam but we wanted to make this one focused on Imagine Cup to promote participating in that greater competition. With the dates we settled on being February 11-13 so we leveraged the proximity of Valentine’s Day and came up with the theme of We <3 Games.
As is the case with every new thing I try, I was a little nervous about whether it would work, but it turns out this setup works brilliantly. The university provided a safe venue complete with machines with dual monitors and Xboxes connected to each, while Microsoft provided some prizes for incentive and took care of the catering so that students were fed. The We <3 Games event was fully populated – we had 60 places, and asked the students to register on the Imagine Cup website as formed teams before they showed up. The result: 56 students were there at 5pm on Friday ready to go (one team from Cornell had travel issues and had to bail at the last minute). Usually at free events you’ll have no-shows – sometimes up to 50%. So having everyone turn up that could – that’s impressive.
But what’s more impressive was that out of those 56 students, we ended up with 14 teams presenting to us on Sunday afternoon, 40 hours later, with games that had been created for Windows, Xbox and Windows Phone 7, and even a browser-based game. Most teams were made up of 4 students, so quick math should show that we had around 50-52 students with us right through to the end. Now THAT’S incredible.
Friday kick off at 5pm, Sunday judging at 2pm. In between, pretty much, was just students coding, designing, storyboarding, testing, creating graphics and audio, collaborating and bouncing ideas off each other and just being completely awesome. Extra bits:
- We ran an initial session talking about Imagine Cup and making sure everyone had the software.
- A Business Development session about Windows Phone 7, presented by the awesome Andy Beaulieu.
- An introduction to Windows Phone 7 Silverlight/Blend development session that showed off some very cool and very easy to use physics – in Silverlight – on the phone.
- A relaxation lounge with a Kinect and Dance Central which was – as it turned out – only used occasionally because the students were so focused on their games.
- A bazillion square miles of pizza, baker’s dozens of kaiser rolls, Oreo cookies, almost more soda than we could handle and Pocky for a little extra sugar hit.
So, how did they do? Pretty darned awesome. Fourteen video games of all shapes and sizes. Some highlights:
- One team from RIT built a Windows Phone 7 game entirely in the emulator until the last minute, when they borrowed my phone (none of them owned one themselves) to do their final testing – and got it working great. These guys came third, which earned them an HTC Surround phone of their very own to develop on.
- The team from Cornell built a four-player multiplayer game about the environment – multiplayer is a challenge that adds complexity, particularly when you only have 2 days to get it done. Second place for the Cornell team means they walked away with a Kinect to take back with them.
- The team from Ithaca decided to build an incredible health-themed game in HTML5 – a technology that none of them had looked at previously. When a team ties so solidly to the theme, has a solid presentation, has a game that’s fun with some nice innovation points, and showcases a talent for learning while doing – that deserved first place, which included an Xbox 360 + Kinect bundle they are planning to use in the computer gaming club back on campus.
In fact, that latter comment applies to quite a few of the students. A number of the teams were using this weekend as not only a chance to win some prizes but to try things they haven’t done before, to try technologies and techniques out and stretch their knowledge.
Other quick highlights:
- Shout out to “Team Jeff” – a single student who built a game entirely on his own.
- A team that built a game about cheat codes which was a really neat creative idea.
- Teams built 2D, 3D, RTS, Adventure game, platformers, shooters, and puzzle games.
- Unique game ideas that really intrigued me.
What have we learned? Without a doubt, this is the start of something beautiful and I’m already planning out what the next one will look like and how we might be able to scale it out to other locations around the country.
I’ll upload some more pics soon.
I really want to take the moment to thank all of the staff at Rochester Institute of Technology, and particularly Professor Andy Phelps for making this happen and even going to the point of designing some very cool shirts for the event, Professor David Schwartz for being the event king that he is and making this one of the smoothest events I’ve ever had the pleasure of being part of, and Professor Steve Jacobs for going the extra mile and taking me on a tour of the very very awesome National Museum of Play which includes a whole floor dedicated to the history of video games.
A reminder that we have a game camp at Pace (check my previous posts) on Feb 25-27 which is filling up fast.
|
OPCFW_CODE
|
If you are looking to discover Midland dating with singles who share your interests and desires, why not join thousands of Michigan singles and become a member of the site today? All that is left to do is prove you are the one for them. You can share the excitement of the chat rooms and embrace the flirty chats that happen all day long. Online Dating in Midland for Free: meet thousands of local Midland singles; as the world's largest dating site we make dating in Midland easy! Expand your horizon today and give online dating a try! Ruby, 25-year-old woman: Hi, my name is Ruby. I consider myself to be very outgoing, though a little on the shy side when I first meet someone; after I warm up to you a lil I'm fun to hang with. Here's where you can meet real members.
I am looking for a partner in life. Maria, 25-year-old woman: Hi, my name is Maria. When you join and activate your membership, you will immediately have a variety of attractive single people to choose from as you look for a woman to fill the gap in your life or a man to enjoy fun with.
Join Midland singles online dating right now. There are thousands of singles in Midland who are seeking the same thing as you and hoping to meet a partner, and many of them are online right now hoping to enjoy a chat today. Our matching method leverages over 29 Dimensions of Compatibility that narrow the field of single women in Midland to a smaller pool of highly suitable singles with personalities and lifestyles that are right for you. I love to bake; it's one of the things I love the most. I'm looking for a boyfriend in the same state.
I don't like writing about myself. I've been to a few other places but never really started a life anywhere else. Start meeting singles in Midland today with our free online personals and free Midland chat! Our website can help you to meet women in Midland and get into the dating scene. You visited the right dating site, one that offers true dating services for senior singles of 40, 50 and 60+.
Looking for mature singles in Midland? Someone who cares about me and loves me. Let us search for compatible single women in Midland for you.
I'm looking for a guy that's not afraid to settle down and have a serious relationship. We are committed to helping Midland singles discover love every day by narrowing the field from thousands of singles to a select group of compatible matches. I sing and breakdance, and pretty much like to do anything. Also, I have a strong personality.
Older women have always been my attraction. I have a very high intellectual capacity and am very interested in meeting someone I can connect with. I am single, never married. A woman who is born to be the most friendly woman and who knows how to make life as happy as it gets. There are thousands of active single men and women on Loveawake. Our service can help put you in contact with amazing people nearby who you would never have had a chance to meet just going about your day-to-day life. I am the kind of woman who knows how to make things compatible for living. I'm a single mother of the one and only little man in my life, Roman; he's my life. Start dating in Midland today! Tamara, 25-year-old woman: I'm a very likable girl.
Someone who I can have a solid conversation with, but I am not a therapist. Jeanette, 25-year-old woman: I am a really outgoing, quiet, simple girl. If you have a good sense of humor then we'll probably get along great.
|
OPCFW_CODE
|
Frequently Asked Questions About Java
Last updated November 07, 2022
Table of Contents
- What kind of skill sets are required to build Java applications on Heroku (JEE, Spring, etc)?
- Can I deploy standard Java web applications to Heroku?
- Do Java applications run in a JEE container on Heroku?
- Can I deploy an application packaged as a WAR file to Heroku?
- Can I run a stand-alone Java application (that is not a web application) on Heroku?
- Can I spawn and control threads?
- Can I read from and write to the file system?
- What happens to data written to standard out?
- Are there any constraints on using the core Java APIs?
- How do I specify which JDK I would like my application to use?
- Do I need to push my source code to Heroku?
- Are there benefits to pushing my source code to Heroku?
- Can I use other build systems than Maven?
- How do I build Force.com and Database.com Java applications on Heroku?
- What constraints should I be aware of when developing applications on Heroku?
- Can I Use Heroku’s Built in PostgreSQL Databases With Java?
What kind of skill sets are required to build Java applications on Heroku (JEE, Spring, etc)?
You can deploy any Java application on Heroku. It is not limited to JEE or other frameworks. You can deploy Java web applications packaged as WAR files using WAR deployment or you can deploy Maven based Java projects of any kind using the git versioning system. In both cases you are free to use any frameworks and libraries that you choose, including Spring and JEE components such as Servlets, JSPs, JDBC drivers, etc.
Can I deploy standard Java web applications to Heroku?
Yes. You can build and deploy Java web applications that use all the common APIs: Servlet, JSP, JDBC, taglibs, JSF etc. You can deploy Java web applications in two ways. You can build it locally and deploy as a WAR package to Heroku or you can set it up as a Maven WAR project that includes an embedded web app runner and deploy to Heroku using git. The former is a more familiar approach to most Java developers. The latter is better optimized for continuous delivery.
Do Java applications run in a JEE container on Heroku?
Applications deployed as WAR files run in a Tomcat container. Applications deployed using Git are not deployed to a container. Instead you bundle in a web server library like Jetty or embedded Tomcat and the application can execute as a self-contained unit. Frameworks like Thorntail do this for you. The latter approach gives you the most optimal setup for continuous delivery with the best control over changes to the environment. The former is a more common approach that most Java developers are familiar with.
There are some 3rd party buildpacks for installing JEE containers on Heroku.
Can I deploy an application packaged as a WAR file to Heroku?
Yes. See our article on deploying WAR files.
Can I run a stand-alone Java application (that is not a web application) on Heroku?
Yes. In fact, to Heroku there is no difference between a stand-alone application and a web application. They are both Java processes. The web application happens to listen for web requests on a TCP port and process them using some framework like the Servlet API.
Your application can be run either as a worker process (that never exits), a one-off batch script that you launch with the heroku run command, or as a scheduled process that gets executed by the Heroku Scheduler.
Can I spawn and control threads?
Yes.
Can I read from and write to the file system?
Yes. But the file system is ephemeral. Any data you write to the file system will not be available to other dynos of your application. Generally you can assume that a file written in the beginning of a request will be available for the duration of the request, subject to the previous constraint (your dyno may get restarted in the middle of a request). Do not rely on the local filesystem for any data you want to keep around.
What happens to data written to standard out?
Standard out is piped to the Heroku logging service. You can use the heroku logs command to retrieve this log stream (in streaming mode if you add the --tail option). You can also add syslog sinks that will retrieve the log stream and can store it in log files or index it for searching and reporting.
Are there any constraints on using the core Java APIs?
No. Your application will be executed using a recent version of OpenJDK with no modifications.
How do I specify which JDK I would like my application to use?
Heroku provides a declarative way to specify your Java version from within your application. See Specifying a Java version for more information.
Do I need to push my source code to Heroku?
No, Heroku will build your app for you if you push up your source code, but if you’d like to build your application to a WAR file on your own you can also deploy WAR files directly to Heroku.
Are there benefits to pushing my source code to Heroku?
Yes, there are numerous benefits to pushing the contents of your Git repository to the platform:
- Differences between the build environment and the runtime environment are a common source of bad deploys in traditional deployment environments. Doing the app’s build in the same environment that the app will later run greatly reduces this risk.
- Pushing code rather than builds gives you and your team greater visibility into what code is deployed where. For example, the command git diff production/master staging/master will show the exact code differences between staging and production.
- Git is highly optimized for transmitting only what has changed. This means that most code pushes (after the first one) will take only seconds, instead of many minutes that it may take to transfer a full build artifact.
- Deploying with revision control makes for smoother collaboration between team members with deploy rights. For example, it provides an overrideable safeguard against accidentally overwriting a more recent deploy with an older one.
Can I use other build systems than Maven?
If you’d like Heroku to build your application for you, then you can choose from Maven or Gradle, but Ant-based builds are available through 3rd-party contributed buildpacks like this one. When taking advantage of our Maven support you must specify a pom.xml file at the top level of your project. If you prefer, you can use Maven to kick off Ant scripts and other types of build scripts.
If you’re deploying a WAR file you can, of course, build it with any build system you’d like.
Any other type of build can be supported on Heroku via custom Buildpacks
How do I build Force.com and Database.com Java applications on Heroku?
Use the Database.com SDK for Java.
What constraints should I be aware of when developing applications on Heroku?
Here are some things to keep in mind when designing applications on Heroku
- Your application source code plus built artifacts must be less than 500 MB when compressed. Use .slugignore to prevent files in your git repo from being included in the deployed package
- You can have many process types in a single application, but only one of those types can be a web process that receives requests from the routing layer.
- The web process must listen on one and only one port. The port must be the one specified in the $PORT variable. If your process listens on other ports, it will be shut down by Heroku.
- The Heroku routing infrastructure does not support “sticky sessions”. Requests from clients will be distributed randomly to all dynos running your application.
- Individual dyno processes of the same process type (e.g. the web process type) cannot communicate directly with each other. For example, they cannot replicate state between them. Heroku is a share-nothing architecture where each node is completely isolated.
- A single dyno (for example, a single instance of your web process type) may be restarted by Heroku at any point in time. Your application must be designed to anticipate restarts without losing data or affecting the user experience in a material way. Your dyno will receive a SIGTERM signal before it is killed. You can trap this signal to perform an orderly shutdown.
- Any data you write to the file system will not be available to other dynos of your application. Generally you can assume that a file written in the beginning of a request will be available for the duration of the request, subject to the previous constraint (your dyno may get restarted in the middle of a request). Do not rely on the local filesystem for any data you want to keep around.
- Your application must boot in one minute or less.
- You can increase the memory setting for your application from the default specified in the JAVA_OPTS config variable. But performance will eventually suffer. You should always design for horizontal scale-out when possible instead of relying on increasing the heap. You can check the setting with
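As a sketch of two of the constraints above, the fragment below reads the port Heroku assigns through the $PORT variable (with a fallback for local runs) and registers a JVM shutdown hook, which runs when the dyno receives SIGTERM. The class name, default port, and messages are illustrative assumptions, not Heroku requirements:

```java
// Hypothetical bootstrap honoring the $PORT and SIGTERM conventions.
public class DynoBootstrap {

    // Heroku injects PORT at dyno boot; fall back to 8080 for local development.
    static int resolvePort(String raw) {
        return (raw == null || raw.isEmpty()) ? 8080 : Integer.parseInt(raw);
    }

    public static void main(String[] args) {
        int port = resolvePort(System.getenv("PORT"));

        // SIGTERM triggers JVM shutdown hooks: flush buffers and close
        // connections here so the dyno stops without losing data.
        Runtime.getRuntime().addShutdownHook(
                new Thread(() -> System.out.println("Shutting down cleanly")));

        System.out.println("Would bind the web listener on port " + port);
    }
}
```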
Can I Use Heroku’s Built in PostgreSQL Databases With Java?
Yes, you can connect to either a shared or Heroku Postgres database from Java. You can use JDBC or any other means of database connectivity that you’re used to. You can also connect to a Heroku Postgres database from your local machine to troubleshoot your application. The Dev Center has more information on Heroku Postgres.
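The DATABASE_URL config var that Heroku sets for a Postgres database is a postgres:// URL rather than a JDBC URL, so a common pattern is to translate it. A sketch (in Python for brevity; the sslmode parameter is an assumption, and the credentials are placeholders):

```python
from urllib.parse import urlsplit

def jdbc_url_from_database_url(database_url):
    """Translate a Heroku-style postgres:// URL into a JDBC connection URL."""
    u = urlsplit(database_url)
    return (
        f"jdbc:postgresql://{u.hostname}:{u.port}{u.path}"
        f"?user={u.username}&password={u.password}&sslmode=require"
    )

example = "postgres://alice:s3cret@ec2-1-2-3-4.compute-1.amazonaws.com:5432/mydb"
print(jdbc_url_from_database_url(example))
# jdbc:postgresql://ec2-1-2-3-4.compute-1.amazonaws.com:5432/mydb?user=alice&password=s3cret&sslmode=require
```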
|
OPCFW_CODE
|
Installing Nova Control Plane in Disaster Recovery Mode
Nova, when run in production, should be set up so that it can be resilient to regional failures. As Nova is a Kubernetes application in itself, it can work with your enterprise's DR solution for Kubernetes applications.
In this section, we illustrate how Nova can be deployed in an EKS hosting cluster on AWS and use the open-source tool Velero for DR. Please note that this procedure is specific to EKS. If you will be using Nova's DR mode on another cloud provider, please contact us at firstname.lastname@example.org and we will be happy to support you.
Installing Nova in DR Mode
These are the high-level steps to set up and operate Nova in DR mode:
1. Create the Nova Primary and Standby hosting clusters in 2 different regions.
2. Install Velero and configure it on both the primary and standby clusters. This includes setting up the necessary AWS IAM roles, service accounts, and Velero Helm charts. Any other Kubernetes backup tool can also be used to periodically back up stateful Nova control plane components.
3. Reserve static IPs for the API service component of the Nova primary and standby control planes. On EKS we will be using Elastic IPs.
4. Install the Nova control plane components on the hosting clusters. Nova can be installed via two methods:
Novactl command-line install
Advanced Install using manifests
For Nova setup in DR mode we will be using the advanced install, which will allow us to configure the API server’s IP address as well as customize the certificates used by Nova.
i) Create the API server service on Primary and Standby. The apiserver manifest file needs to be modified to include the static IPs reserved in Step 3.
ii) Generate certificates for Nova Primary and Standby. The novactl install certs command provides an additional flag to include the Elastic IPs of both the primary and standby service.
iii) Continue to deploy the remaining Nova control plane components using the typical manifest-based installation procedure on both the Nova primary and standby hosting cluster.
The figure below shows Nova operating under normal conditions in DR mode.
Standby Promotion during Disaster
These are the high-level steps to be followed during a disaster/failure of the Nova primary hosting cluster:
- Use Velero to run a “restore” operation on the Nova standby. This is used to restore the etcd statefulset PVs from the backup S3 bucket into the control plane.
- Reconnect workload clusters to Nova Standby. This is invoked by deleting the workload cluster secrets and service accounts on the control plane. New secrets will then be regenerated by the cluster registration controller. The Nova init-kubeconfig secret in the workload clusters should also be updated to the Nova Standby’s value, thereby allowing the workload clusters to communicate with the new control plane. Follow the procedure in this section: Install Nova agent.
The figure below shows Nova operating after standby promotion.
|
OPCFW_CODE
|
Cloud Build is a service that executes your builds on Google Cloud.
Cloud Build can import source code from a variety of repositories or cloud storage spaces, execute a build to your specifications, and produce artifacts such as Docker containers or Java archives.
Build configuration and build steps
You can write a build config to provide instructions to Cloud Build on what tasks to perform. You can configure builds to fetch dependencies, run unit tests, static analyses, and integration tests, and create artifacts with build tools such as docker, gradle, maven, bazel, and gulp.
Cloud Build executes your build as a series of build steps, where each build step is run in a Docker container. Executing build steps is analogous to executing commands in a script.
You can either use the build steps provided by Cloud Build and the Cloud Build community, or write your own custom build steps:
Build steps provided by Cloud Build: Cloud Build has published a set of supported open-source build steps for common languages and tasks.
Community-contributed build steps: The Cloud Build user community has provided open-source build steps.
Custom build steps: You can create your own build steps for use in your builds.
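For illustration, a minimal cloudbuild.yaml that runs two supported builder images in sequence (the app name and image tag are placeholders):

```yaml
steps:
  # Each step runs in its own container; steps share the /workspace volume.
  - name: 'gcr.io/cloud-builders/mvn'
    args: ['test']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']
images: ['gcr.io/$PROJECT_ID/my-app']
```

Such a config would typically be submitted with `gcloud builds submit --config cloudbuild.yaml .`.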
Each build step is run with its container attached to a local Docker network named cloudbuild. This allows build steps to communicate with each other and share data.
You can manually start builds in Cloud Build using the Google Cloud CLI or the Cloud Build API, or use Cloud Build's build triggers feature to create an automated continuous integration/continuous delivery (CI/CD) workflow that starts new builds in response to code changes.
You can integrate build triggers with many code repositories, including Cloud Source Repositories, GitHub, and Bitbucket.
Viewing build results
You can view your build results using the gcloud CLI, the Cloud Build API, or the Build History page in the Cloud Build section of the Cloud console, which displays details and logs for every build Cloud Build executes. For instructions, see Viewing Build Results.
How builds work
The following steps describe, in general, the lifecycle of a Cloud Build build:
- Prepare your application code and any needed assets.
- Create a build config file in YAML or JSON format, which contains instructions for Cloud Build.
- Submit the build to Cloud Build.
- Cloud Build executes your build based on the build config you provided.
- If applicable, any built artifacts are pushed to Artifact Registry.
Cloud Build uses Docker to execute builds. For each build step, Cloud Build executes a Docker container as an instance of docker run. Currently, Cloud Build runs Docker Engine version 19.03.8.
Cloud Build interfaces
You can use Cloud Build with the Google Cloud console, the gcloud command-line tool, or Cloud Build's REST API.
In the Cloud console, you can view the Cloud Build build results in the Build History page, and automate builds in Build Triggers.
You can request builds using the Cloud Build REST API.
As with other Cloud Platform APIs, you must authorize access using OAuth2. After you have authorized access, you can then use the API to start new builds, view build status and details, list builds per project, and cancel builds that are currently in process.
For more information, see the API documentation.
Running builds locally
If you want to test your build before submitting it to Cloud Build,
you can run your build locally using the
cloud-build-local tool. For
instructions on using this tool see the Building and Debugging
- Read the Docker quickstart to learn how to use Cloud Build to build Docker images.
- Learn how to build, test, and deploy artifacts in Cloud Build.
- Learn about different types of Cloud Build triggers.
- Read our resources about DevOps and explore our research program.
|
OPCFW_CODE
|
At Notre Dame, we recognize the full picture of what it takes to be successful in data science. Our multidisciplinary Online Master's in Data Science program gives students the edge they need to perform at the highest levels in the field by producing three-dimensional data scientists. A data scientist uses quantitative and computational skills to create value from data – transforming and organizing it; analyzing it using computing, mathematics, and statistics; and converting it into valuable knowledge. But a three-dimensional data scientist complements quantitative and computational data skills with the ability to communicate effectively and act ethically.
The Online Master's in Data Science program’s courses fit together as parts of an integrated whole, providing students with the technical skills, quantitative aptitude, and analytical insight required by industry. The 30-credit program is divided into 14 credit-earning courses; students take six credits per semester for five consecutive semesters. The program has a structured curriculum, so you won’t have to spend time navigating a complex electives model.
Probability & Statistics for Data Science
This first-semester course builds the statistical foundations for further work in data science, with a specific focus on statistical thinking in data collection, data quality analysis, probability theory, statistical inference, and modeling.
Systems and Technologies: R
This course outfits students with the technical and practical skills required for working with modern data systems and technologies. Students learn how to use the R programming language for data manipulation, data cleaning, visualization, and exploratory data analysis. Students will build on the skills developed in this course throughout the program.
Systems and Technologies: Python
This course introduces students to the Python programming language and its application in Data Science. Students learn the practical aspects of data manipulation and cleaning with Python and are introduced to libraries designed for data exploration and modeling. Students will build on the skills developed in this course throughout the program.
Introduction to Data Science
Building on the quantitative foundations established in the first semester, this course introduces students to the entire process and lifecycle of data science, including data acquisition, data visualization, data quality analysis, relevant machine learning methods, communicating results, aspects of deploying and monitoring the models, and the ethical considerations in managing and processing data. Throughout the course, students implement and experiment with the concepts and methods of the data science process, and apply them to real-world datasets.
This course trains students in applied linear regression modeling. Beginning with an introduction to fundamental concepts in regression model building and inference, the course then delves into advanced techniques such as ridge regression and lasso.
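As a small sketch of the idea (illustrative only, not course material): ridge regression adds an L2 penalty to least squares, which has the closed-form solution implemented below with numpy.

```python
import numpy as np

def ridge_fit(X, y, alpha):
    """Closed-form ridge solution: w = (X'X + alpha*I)^-1 X'y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=50)

w_ols = ridge_fit(X, y, alpha=0.0)    # alpha=0 reduces to ordinary least squares
w_ridge = ridge_fit(X, y, alpha=10.0) # the penalty shrinks coefficients toward zero
```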
Databases & Data Security
Calibrated to data science applications, this course focuses on effective techniques in designing relational databases and retrieving data from them using both SQL and R. It provides an introduction to relational databases, including topics such as relational calculus and algebra, integrity constraints, distributed databases, and data security. Students are introduced to database technologies utilized in industry, such as NoSQL, graph databases, and Hadoop. The course also introduces students to the fundamental concepts of cybersecurity and privacy relevant to data science.
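As a small illustration of relational retrieval (the table and data are hypothetical, using Python's built-in sqlite3 module rather than any course environment):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE grades (student TEXT, course TEXT, score REAL)")
conn.executemany(
    "INSERT INTO grades VALUES (?, ?, ?)",
    [("ana", "stats", 91.0), ("ana", "python", 88.5), ("ben", "stats", 79.0)],
)
# Relational retrieval with aggregation: average score per student.
rows = conn.execute(
    "SELECT student, AVG(score) FROM grades GROUP BY student ORDER BY student"
).fetchall()
print(rows)  # [('ana', 89.75), ('ben', 79.0)]
```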
Storytelling & Communications for Data Scientists
This course is designed to develop communication skills for data scientists working in industry and business contexts. Students master the art of clear, effective, and engaging scientific and technical communications, with attention to the business necessity of translating complex technical subjects into actionable insights for a lay audience. Students identify and analyze rhetorical situations in technical discourse communities; the course assists them in defining their purpose in writing and presenting information, and teaches them to design materials and deliver presentations that are properly targeted and appropriately styled.
Ethics and Policy in Data Science
Data-informed decision making has created new opportunities, e.g. personalized marketing and recommendations, but also expands the set of possible risks, e.g. privacy, security, etc.; this is especially true for businesses collecting, storing, and analyzing human data. Organizations need to consider the "should we?" question with regard to data and analytics, and not just be concerned with “can we?”. In this course, students will explore ethical frameworks, guidelines, codes, and checklists, and also consider how they apply to all phases of the data science process. Existing research ethics standards provide a necessary but insufficient foundation when doing data science and analytics. Together, we will wrestle with the rapidly-changing capabilities, conflicts, and desires that emerge from new data practices. Upon completion of the course, students will be able to identify and balance: what an organization wants to do from a business perspective, can do from technical and legal perspectives, and should do from an ethical perspective.
Behavioral Data Science
Behavioral Data Science provides students the opportunity to explore sources and types of behavioral data and empowers students to select and use appropriate tools for finding answers to questions about human behavior. Students will work with a variety of data models and theories, like factor analysis, item response theory, centroid clustering models, recommender systems, and topic models using a wide variety of data (e.g., traffic violations, crime, and video game data).
Statistical Learning for Data Science
This course focuses on advanced statistical learning methods and will build on earlier material on model building and machine learning. Topics covered include classification (discriminant analysis, Bayesian inference, density estimation), tree-based and ensemble methods (random forest, boosting, bagging), support vector machines, neural networks, unsupervised learning (principal component analysis, nearest neighbor, k-means clustering, hierarchical clustering).
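One of the listed unsupervised methods, principal component analysis, can be sketched via the SVD (synthetic data, illustrative only):

```python
import numpy as np

def pca(X, n_components):
    """Project centered data onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, Vt[:n_components]

rng = np.random.default_rng(1)
# 3-D points lying mostly along one direction, plus a little noise.
direction = np.array([[3.0, 1.0, 0.5]])
X = rng.normal(size=(200, 1)) @ direction + 0.05 * rng.normal(size=(200, 3))

scores, components = pca(X, n_components=1)
# The first principal component should align with the dominant direction.
```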
This course focuses on methods of visualizing data for exploration, reporting, and monitoring tools, such as dashboards. Students are introduced to computational tools for building interactive graphics as well as commercial visualization software. The role of visualization in storytelling will be emphasized.
Data Science Now: Industry, Cases, and Projects
This course teams groups of students with industry partners to solve real data science problems. Data Science Now asks students to solve data science problems in an integrated fashion as a simulation of the live conditions of work as a professional data scientist. Student teams carry out all steps of the data science process: data acquisition, modeling, analysis, and communication of results.
Generalized Linear Models
This course examines extensions and generalizations of the linear regression model. Specifically, methods for fitting and evaluating logistic, multinomial, and count response models are presented using examples from a wide variety of fields. Bootstrapping, cross-validation, and penalized estimation are woven throughout the coverage.
Time Series and Forecasting
Focusing on applications, in this course students study time series models and computational techniques for model estimation, model diagnostics, and forecasting.
Ready to Apply?
Our last application deadline for Fall 2022 is May 22; please reach out to our admissions team with any questions you have.
|
OPCFW_CODE
|
Authentication error in Tuya v2 integration
** Error description **
Authentication error in Tuya v2 integration
** Screenshots **
https://imgur.com/a/0mgrYqJ
** Home Assistant version **
2021.10.7
** Log **
2021-11-05 16:09:40 ERROR (MainThread) [custom_components.tuya_v2.config_flow] Login failed: {'code': 2406, 'msg': 'skill id invalid', 'success': False, 't':<PHONE_NUMBER>905}
** Additional context **
I tried different account credentials (both real and newly created), both for TuyaSmart and SmartLife.
I also read on the forums that there are some problems with certain mail providers; I tried @mail.ru and @hotmail.com.
Previously (a couple of months ago) it worked flawlessly, under the credentials that I checked now.
Same problem... The official HA integration works well, but I need a device that is only supported by this integration.
Same issue here - official integration allows me to log in but my PIR sensor never registers anything. V2 gives me the same authentication error as above
Same issue here for cover curtain
Same issue here. Official integration works well, tuya v2 gives me "invalid authentication" error during the login. I would like to try to control my devices locally...
+1 happening here as well
Has anyone solved the problem?
Same here. It used to work just fine for me but I had to change a password and there was no option to update that so I reinstalled. Could old files be causing the trouble? If so, any idea which files I should clear out?
Looking in the repository, I noticed that there are some commits in the tuy-v2-backup branch that were made after the 1.6.0 release. Maybe if you manually apply the latest commits they will fix the issue (the last one addresses a logging bug). I noticed this after I switched to Local Tuya, so I didn't try.
It has something to do with the country code. I just made a new account and put it as USA (and changed setting on the IoT platform too), then I added the integration (with country code 1)… Works just fine!
(Does it say somewhere that this isn’t legal? Why is that? I can’t see why it matters)
Actually… I take back the “works just fine”. Contact sensors do not update when opened or closed. They report accurately as whatever they are when the integration first loads, then to get them to update I have to restart HA (which obviously makes them entirely useless).
I’m surprised more non-US people aren’t having the same problem OR there’s a simple fix for this (or mistake we are making)… Can somebody help!?
I have the same problem. I can log in to the official app without a problem. I have tried several times on the V2 but get an authorisation error. I am 100% certain that my credentials are correct, except the country code. I am in Spain... so is this code ES or 34 or +34 or ESP or something else? The log file from today's efforts is below. HA and HACS are fully up to date as of today, 03.12.2021.
Thanks..
Logger: custom_components.tuya_v2.config_flow
Source: custom_components/tuya_v2/config_flow.py:139
Integration: Tuya v2 (documentation)
First occurred: 20:30:34 (8 occurrences)
Last logged: 20:57:10
Login failed: {'code': 1109, 'msg': 'param is illegal, please check it', 'success': False, 't':<PHONE_NUMBER>636}
Login failed: {'code': 1109, 'msg': 'param is illegal, please check it', 'success': False, 't':<PHONE_NUMBER>521}
Login failed: {'code': 1106, 'msg': 'permission deny', 'success': False, 't':<PHONE_NUMBER>739}
Login failed: {'code': 1106, 'msg': 'permission deny', 'success': False, 't':<PHONE_NUMBER>839}
Login failed: {'code': 1109, 'msg': 'param is illegal, please check it', 'success': False, 't':<PHONE_NUMBER>211}
I found that changing the Tuya or SmartLife app password to just alphanumeric with no symbols fixed it for me.
The Tuya account used to create the Cloud Project must be different from the Tuya account used to link devices to the mobile app.
If the same Tuya account used for the mobile app is used to create the cloud project, an error is received while adding the integration in Home Assistant (Login error (1109): param is illegal).
This is still an issue. I cannot use the v2 integration because it says Login failed: {'code': 2406, 'msg': 'skill id invalid', 'success': False
The recommended fix (make a new cloud project) does not fix the issue.
Any advice?
Same problem. It says "Invalid authentication". Creating a new cloud project does not fix the problem.
Changing the password in the app does not fix the problem.
I'm disappointed in TUYA
|
GITHUB_ARCHIVE
|
package main.swapship.systems.player;
import main.swapship.common.Constants;
import main.swapship.components.SpatialComp;
import main.swapship.components.VelocityComp;
import main.swapship.components.player.SpecialComp;
import main.swapship.components.types.BeamComp;
import main.swapship.components.types.PlayerComp;
import main.swapship.factories.EntityFactory;
import main.swapship.util.GameUtil;
import com.artemis.ComponentMapper;
import com.artemis.Entity;
import com.artemis.Filter;
import com.artemis.systems.EntityProcessingSystem;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.Input.Keys;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.math.Vector3;
public class InputSys extends EntityProcessingSystem {
// Camera used to unproject touch positions
private OrthographicCamera camera;
private ComponentMapper<VelocityComp> vcm;
private ComponentMapper<SpecialComp> scm;
private ComponentMapper<SpatialComp> spcm;
// Reused as a field so we don't allocate a new vector every frame
private Vector3 touchPoint;
public InputSys(OrthographicCamera camera) {
// Beam and Player should move the same way
super(Filter.allComponents(VelocityComp.class).any(PlayerComp.class,
BeamComp.class));
touchPoint = new Vector3(); // don't alias the shared Vector3.Zero; set() would mutate it
this.camera = camera;
}
@Override
public void initialize() {
vcm = world.getMapper(VelocityComp.class);
scm = world.getMapper(SpecialComp.class);
spcm = world.getMapper(SpatialComp.class);
}
@Override
public void process(Entity e) {
checkVelocity(e);
checkSpecial(e);
}
private void checkVelocity(Entity e) {
// Android controls
float xRate = GameUtil.roundTilt(-Gdx.input.getAccelerometerX());
float yRate = GameUtil.roundTilt(-Gdx.input.getAccelerometerY());
// Desktop controls
if (Gdx.input.isKeyPressed(Keys.LEFT)) {
xRate = -1;
} else if (Gdx.input.isKeyPressed(Keys.RIGHT)) {
xRate = 1;
}
if (Gdx.input.isKeyPressed(Keys.UP)) {
yRate = 1;
} else if (Gdx.input.isKeyPressed(Keys.DOWN)) {
yRate = -1;
}
VelocityComp vc = vcm.get(e);
vc.setXVel(xRate * Constants.Player.MAX_MOVE);
vc.setYVel(yRate * Constants.Player.MAX_MOVE);
}
private void checkSpecial(Entity e) {
if (!Gdx.input.justTouched()) {
return;
}
SpecialComp sc = scm.get(e);
SpatialComp spc = spcm.get(e);
camera.unproject(touchPoint.set(Gdx.input.getX(), Gdx.input.getY(), 0));
// Compare in world coordinates: left of the camera's center is the left half of the view
if (touchPoint.x < camera.position.x) {
// Too bad, no special left
if (sc.defensiveCount <= 0
|| !GameUtil.existsTargets(world, Constants.Groups.ENEMY)) {
return;
}
// Create defensive special
if (EntityFactory.createDefensiveSpecial(world, sc.defensive, spc.x, spc.y)) {
--sc.defensiveCount;
}
} else {
if (sc.offensiveCount <= 0
|| !GameUtil.existsTargets(world, Constants.Groups.ENEMY)) {
return;
}
// Create offensive special, only decrement if it was made
if (EntityFactory.createOffensiveSpecial(world, sc.offensive, spc.x, spc.y)) {
--sc.offensiveCount;
}
}
}
}
|
STACK_EDU
|
Add Help and Init
This pull request changes some of the core logic so that things run through commander.js, and I'll admit that I was just plowing through to get a working version, so if other users could fetch this branch and test things, I'd feel a bit better about things. This fixes #2
New features:
adds a help menu when you run plop by itself, or if you run it in a directory where there is no plopfile.js, or if you run it with the --help or -h flags
adds an init subcommand. plop init will generate a plopfile.js in the current directory and create a plop-templates directory.
there are two versions of the plopfile.js that you can install using plop init:
the default installs a plopfile.js with just the basic boilerplate (the file used is in /example/plopfile-bare.js). I included some links in the comments of the minimal version, but I honestly haven't tested plopfile-bare.js much yet.
if you add the verbose flag, plop init --verbose or plop init -v it will install the plopfile.js found in example/plopfile.js that has all the bells and whistles.
uses commander.js to handle all the option parsing and subcommand parsing
if you are in a directory that has a plopfile.js, commander parses the generators as subcommands too. So you can do things like plop mygenerator --help and see the help for the generator (which is just the description).
I'd like to add some tests to the repo too, so I might start work on that some time this weekend.
I pulled this down locally and there are definitely some things we need to address. The biggest issue I see right now is that running plop alone does not provide a list of generators like it did before. But I feel the options (as commander presents them) are not clear. I'll try to look at some of this soon and see if some of these issues can be resolved.
i think the ideal would be something like:
plop would bring up the list of generators if a plopfile is found, otherwise the "no plopfile found" message and help
plop -h or plop --help would bring up the help menu
plop -i or plop --init would kick off an inquirer-based process that would generate a plopfile in the current directory. This process would allow the developer to select what should be included. Examples would be things like "helper function", "add action", "modify action", "custom action function (sync)", "custom action function (async)", etc.
plop -v or plop --version would print the current version
plop _anythingElse_ would try to launch that generator. If no generator by that name is found, the error is shown along with the help screen.
thoughts?
Ya, this one is tricky. Thanks for outlining all those specs. It is giving me something to work off of. I'll update this pull request if I can get it working. Mind leaving this open for a bit?
sure, no problem. It can be a little tricky fighting with an opinionated tool. @EladBezalel is looking into LiftOff.js so the plopfile can easily be written in es6/coffeescript/etc. Not sure if there is necessarily an overlap between commander and liftoff, but it looks like there might be. So you may prefer to sit on this until that PR comes in. But I'll leave that up to you. :-)
hehe - been enjoying digging into it, but will be watching the issues.
thanks sir!
|
GITHUB_ARCHIVE
|
typedef void (* IceTGLDrawCallbackType)( void );
The icetGLDrawCallback function sets a callback that is used to draw the geometry from a given viewpoint. It will be implicitly called from within icetGLDrawFrame.
callback should be a function that issues appropriate OpenGL calls to draw geometry in the current OpenGL context. After callback is called, the image left in the frame buffer specified by icetGLSetReadBuffer will be read back for compositing.
callback should not modify the GL_PROJECTION_MATRIX as this would cause IceT to place image data in the wrong location in the tiled display and improperly cull geometry. It is acceptable to add transformations to GL_MODELVIEW_MATRIX, but the bounding vertices given with icetBoundingVertices or icetBoundingBox are assumed to already be transformed by any such changes to the modelview matrix. Also, GL_MODELVIEW_MATRIX must be restored before the draw function returns. Therefore, any changes to GL_MODELVIEW_MATRIX are to be done with care and should be surrounded by a pair of glPushMatrix and glPopMatrix functions.
It is also important that callback not attempt to change the clear color. In some compositing modes, IceT needs to read, modify, and change the background color. These operations will be lost if callback changes the background color, and severe color blending artifacts may result.
IceT may call callback several times from within a call to icetGLDrawFrame or not at all if the current bounds lie outside the current viewpoint. This can have a subtle but important impact on the behavior of callback. For example, counting frames by incrementing a frame counter in callback is obviously wrong (although you could count how many times a render occurs). callback should also leave OpenGL in a state such that it will be correct for a subsequent run of callback. Any matrices or attributes pushed in callback should be popped before callback returns, and any state that is assumed to be true on entrance to callback should also be true on return.
The callback function pointer is placed in the ICET_GL_DRAW_FUNCTION state variable.
icetGLDrawCallback is similar to icetDrawCallback. The difference is that the callback set by icetGLDrawCallback is used by icetGLDrawFrame and the callback set by icetDrawCallback is used by icetDrawFrame.
Raised if icetGLInitialize has not been called.
callback is tightly coupled with the bounds set with icetBoundingVertices or icetBoundingBox. If the geometry drawn by callback is dynamic (changes from frame to frame), then the bounds may need to be changed as well. Incorrect bounds may cause the geometry to be culled in surprising ways.
Copyright (C)2003 Sandia Corporation
Under the terms of Contract DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains certain rights in this software.
This source code is released under the New BSD License.
|
OPCFW_CODE
|
Next release of CMU SASL - update
Patrick Ben Koetter
p at state-of-mind.de
Sat Apr 11 15:16:24 EDT 2009
* Alexey Melnikov <alexey.melnikov at isode.com>:
> Patrick Ben Koetter wrote:
>> * Alexey Melnikov <alexey.melnikov at isode.com>:
>>> The date is approaching and I don't think we are quite ready. In
>>> order to keep things moving I will do weekly status reports with the
>>> list of bugs/issues I still need to look at. Here is my current list
>>> (in no particular order):
>>> 1). Remove extra (unused) mutex in libsasl
>>> 2). Merge my utils/pluginviewer.c changes
>>> 3). Investigate global callback updating in subsequent
>>> sasl_server_init() calls
>>> 4). Commit SQLite3 configure change. Test SQLite3 plugin.
>>> 5). Remove use of obsolete cmusasl... attributes
>>> 6). Strip trailing spaces from options during server configuration loading
>>> 7). Investigate fix for bug # 2822 (OTP does not work with prompts)
>>> 8). Review patch for bug # 3134 (Improved error reporting from
>>> 9). MacOS dlopen.c change (+ the libtool change?)
>>> 10). Merge Debian bugfixes
>> I stumbled over this a while ago and believe this should work differently:
>> Consider this:
>> mumble.conf in /usr/lib/sasl2/ and /etc/sasl/.
>> Currently, if mumble.conf is found in /usr/lib/sasl2/ it will be used instead
>> of /etc/sasl/.
>> I believe it should it be the other way around. If mumble.conf is found in
>> /etc/sasl/ it takes precedence over /usr/lib/sasl2/mumble.conf.
>> /usr/lib/sasl2/, to me, is the (old) default and fallback dir.
> I probably did this to preserve backward compatibility.
From my point of view it doesn't break backward compatibility; it keeps
backward compatibility and (!) gives forward compatibility too:
If someone decides to put config files in /usr/lib/sasl2/ and never even
bothers to put things in /etc/sasl/ things stay as they have always been.
If someone decides to use /etc/sasl/ things work - even if legacy config
files "hang around" in /usr/lib/sasl2/.
Both work. If /usr/lib/sasl2/ always overrides settings in /etc/sasl/ only
/usr/lib/sasl2/ works reliably.
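The precedence being argued for amounts to a simple first-match search, sketched here (the function name is hypothetical; the paths are the ones discussed in the thread):

```python
import os

# Preferred config dir first, legacy dir as fallback.
SEARCH_PATH = ["/etc/sasl", "/usr/lib/sasl2"]

def find_config(name, search_path=SEARCH_PATH, exists=os.path.exists):
    """Return the first matching config file, so /etc/sasl wins over /usr/lib/sasl2."""
    for directory in search_path:
        candidate = os.path.join(directory, name + ".conf")
        if exists(candidate):
            return candidate
    return None

# With mumble.conf present in both directories, /etc/sasl takes precedence;
# with it present only in /usr/lib/sasl2, the legacy copy still works.
both = {"/etc/sasl/mumble.conf", "/usr/lib/sasl2/mumble.conf"}
print(find_config("mumble", exists=both.__contains__))  # /etc/sasl/mumble.conf
```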
p at rick
All technical answers asked privately will be automatically answered on
the list and archived for public access unless privacy is explicitly
required and justified.
saslfinger (debugging SMTP AUTH):
More information about the Cyrus-sasl mailing list
|
OPCFW_CODE
|
How do I get powershell run from a scheduled tasks to pick up the newest environment variables
I have a scheduled task that runs a powershell script as the system user. That's all good, except from the part that it doesn't pick up the latest environment variables as it seem.
I have verified that the environment variable in question is a "System Variable" and not just a user variable for me only.
In the scheduled tasks I've specified PowerShell as the command and the provided arguments like:
-command "& 'myscript' 'my args'"
The script runs, but I fail to import a module since it seems like the scheduled task is using an old environment.
The "Local Service" user can see the updated variables, but not the system user.
How do you set your environment variable? Using setx?
You can use the following to interactively verify the variable:
From an elevated command prompt I used setx /m test testvalue
Then I used psexec to run PowerShell as the system user:
psexec -i -s powershell.exe -noexit
In the opened PowerShell, I can read the variable:
PS C:\Windows\system32> $env:test
testvalue
Update
I confirm the newest environment variables are not seen from the scheduler. But after a reboot, this is working.
A post on Super User suggests that killing all taskeng.exe processes should be enough, but this has not worked on my 2008R2 server.
My guess is that the registry values are read when the scheduler service starts and are not reloaded at each task run. Still, it does not explain why the variables are accessible to the Local Service account...
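That guess — a long-running service hands tasks the environment snapshot it captured at startup — can be illustrated with a rough Python analogy (not Windows internals; `TEST_VAR_DEMO` is a made-up variable name):

```python
import os
import subprocess
import sys

# Simulate a long-running service (like the scheduler service) that
# snapshots the machine environment once, at "startup".
service_env = dict(os.environ)

# Later the system-wide variable changes (think `setx /m TEST_VAR_DEMO ...`);
# the snapshot held by the already-running service does not see it.
os.environ["TEST_VAR_DEMO"] = "new-value"

# A task launched with the stale snapshot misses the update...
stale = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ.get('TEST_VAR_DEMO', 'unset'))"],
    env=service_env, capture_output=True, text=True,
).stdout.strip()

# ...while a freshly spawned process, inheriting the current
# environment, sees the new value.
fresh = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ.get('TEST_VAR_DEMO', 'unset'))"],
    capture_output=True, text=True,
).stdout.strip()
```

Here `stale` stays unset while `fresh` reports the new value — consistent with a reboot (restarting the service) being what finally picks up the change.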
As a workaround you should be able to read the env. value directly from the registry
The env variables were set when I installed PSCX. Does it make a big difference how the variables were set, if I can verify they are in the list of system variables?
I did verify that I get the environment variables when running PowerShell as system using psexec. But still no luck when it runs as a scheduled task.
Your problem is that you can't use PSCX from the system user, right? I have this module installed under my user account, but when I use Get-Module -ListAvailable from the system account, PSCX is not found. I guess you will have to install PSCX from the system account. Or maybe ask @KeithHill, who participated in that project.
When I run with psexec -i -s... I can see the module, since it is installed and added to the PSModulePath. The problem only occurs when running the scheduled task.
sorry, I have no more idea
thanks anyway. Got one upvote for trying :). It is really weird error though.
Yep. Can't you pass the env variable as a parameter to your PowerShell script? This works from the Run menu: powershell.exe -noexit -command "& c:\temp\test.ps1" %temp%
I could do that, but that seems like a hacky way to do it. I can't see a reason why the system user doesn't pick up the environment variables. The Local Service user does pick up the variables.
Let us continue this discussion in chat.
|
STACK_EXCHANGE
|
In Azure pipeline, how can Powershell make a task fail?
I am working on Azure Pipelines on a Windows Self hosted Agent.
I need my pipeline to run a PowerShell script; if the script is successful, the next stage does the deployment, otherwise the task fails, we have to fix something and resume the task.
I'll describe what the pipeline does since the start as it might help understand.
First, the pipeline calls a template with parameters:
stages:
- template: release.yml@templates
  parameters:
    dbConnectionString: ''
The template release.yml@templates is below:
parameters:
- name: 'dbConnectionString'
  default: ''
  type: string
There is the first stage that simply builds the project, works fine
stages:
- stage: Build
  jobs:
  - job: Build_Project
    steps:
    - checkout: none
    - template: build.yml
The second stage depends on the result of the previous one.
For some cases of the template, there is no DB to check, so I run the job only if a parameter is provided.
Then, I want to run the CompareFile script only if the DBCheck was successful or if there was no parameter.
- stage: Deploy
  dependsOn:
  - Build
  condition: eq( dependencies.Build.result, 'Succeeded' )
  jobs:
  - job: CheckDb
    condition: ne('${{ parameters.dbConnectionString }}', '')
    steps:
    - checkout: none
    - template: validate-db.yml@templates
      parameters:
        ConnectionString: '${{ parameters.dbConnectionString }}'
  - job: CompareFiles
    dependsOn: CheckDb
    condition: or( eq( dependencies.CheckDb.result, 'Succeeded' ), eq('${{ parameters.dbConnectionString }}', '') )
    steps:
    - checkout: none
    - task: PowerShell@2
      name: compareFiles
      inputs:
        targetType: filePath
        filePath: 'compareFile.ps1'
  - deployment: Deploy2
    dependsOn: CompareFiles
    environment: 'Env ST'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: PowerShell@2
            inputs:
              targetType: filePath
              filePath: 'File.ps1'
The next job is to compare the files; the CompareFile.ps1 file is below.
The file compareFileContent.ps1 tries to make the task fail or succeed, but I don't know PowerShell well enough.
I found somewhere that $host.SetShouldExit(10) could make the task fail, so I tried 10 for failure and 0 for success.
I also tried exit values, but for now, testing with $equal = $true, the stage "Deploy2" is skipped, so I am blocked.
[CmdletBinding()]
param ()
***
$equal = $true
if($equal) {
# make pipeline to succeed
$host.SetShouldExit(0)
exit 0
}
else {
# make pipeline to fail
$host.SetShouldExit(10)
exit 10
}
Would you have ideas why the deployment job is skipped?
Thank you for the edits, I do try to write correctly but I guess I was really tired. Sorry.
I was able to use these exit values to make the pipeline task succeed or fail:
if($equal) {
# make pipeline to succeed
exit 0
}
else {
exit 1
}
I used the PowerShell script in its own stage instead of in a job, and it worked: when the task fails, I can do the required manual actions and run the task again.
Cheers,
Claude
If you want to fail more *elegantly*, you could format an error message like
echo "##vso[task.logissue type=error]Something went very wrong."
exit 1
Basically, it allows you to print a correctly formatted error message in addition to failing the task.
And the explanation why Exit 1 is optional:
exit 1 is optional, but is often a command you'll issue soon after an
error is logged. If you select Control Options: Continue on error,
then the exit 1 will result in a partially successful build instead of
a failed build.
The source could be found here:
https://learn.microsoft.com/en-us/azure/devops/pipelines/scripts/logging-commands?view=azure-devops&tabs=bash#example-log-an-error
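Both answers hinge on the same contract: the agent marks a task failed exactly when the script's process exits non-zero. A minimal Python stand-in for that contract (`task_passed` is a made-up helper, and the child script strings are illustrative):

```python
import subprocess
import sys

def task_passed(script: str) -> bool:
    """Run a step in a child process and report success the way the agent
    does: exit code 0 means the task succeeded, anything non-zero fails it."""
    return subprocess.run([sys.executable, "-c", script]).returncode == 0

# Mirrors the $equal branch above: exit 0 on success, exit 1 on failure.
ok = task_passed("raise SystemExit(0)")
bad = task_passed(
    "print('##vso[task.logissue type=error]Something went very wrong.');"
    " raise SystemExit(1)"
)
```

The `##vso[task.logissue …]` line only affects how the error is rendered in the log; it is the exit code that flips `ok`/`bad`.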
|
STACK_EXCHANGE
|
Dear Eva, I will probably be in Houston (US) on 10 November for a tourist trip, and I would like to travel from Houston to Lima, Peru for tourism for five days. I have an Iraqi passport, and the problem is that we do not have a Peruvian Embassy or Consulate in our country. Can I receive the visa at the airport in Peru, or is there another way to acquire it online over the internet?
1 domestic animal (check the veterinary restrictions with the closest Peruvian consulate and/or SENASA)
Letter from the company sponsoring the business trip, indicating the purpose of the visit to Peru, the duration of the stay, and assurance that the business traveller has sufficient funds to last the duration of the trip
You can pay the US$1 fee, leave Peru, and return even within a few hours or days. Usually you will not encounter any trouble. But whether you will get the full 183 days again is up to the immigration officer you will have to deal with; there is no guarantee.
Entering Peru is very uncomplicated. Shortly before landing, the flight attendants on your plane will hand you the "Tarjeta Andina de Migracion" (TAM) and a customs declaration form. If you are entering by land, you can get the forms at the border.
Honestly, I can't tell you how certain it is for you and your friend to get a tourist visa. But I think, if you are able to fulfil the visa requirements, there shouldn't be a problem.
I'm an Indian citizen and I was just issued a 30-day visa for Peru at the Santiago consulate in Chile. I'm travelling to Peru between the 20th of December and the 4th of January. I wanted to verify that the thirty days start on entry to Peru and not from the day of issue, since all my documents point to me reaching Peru only on the 20th of December.
I am applying for a tourist visa for my visit to Machu Picchu for eleven days during my vacation. Awaiting your response. Best Regards,
So if you can fulfil all the other requirements the Peruvian Consular Section in India is asking for, I am sure your age shouldn't be an issue.
So best check with the airline what conditions they have about travelling with a one-way ticket.
I have a question. I have to go to Lima, Peru on Sept 15 to have an operation. But I don't have a US passport; I'm a permanent resident, so I have a green card.
Not sure how one gets a better bank statement. But to support your case you should have a round-trip ticket and, ideally, a hotel reservation or a booked tour package. Furthermore, you could include an invitation letter that has to be legalized by the Ministry of Foreign Affairs here in Lima. Sorry I couldn't help more.
Hello Eva, I'm from Cameroon and will be coming to Peru for a conference on LDCs and south-south cooperation by the 29th of October, and I need a visa. There is no Peruvian embassy near central Africa where I could get a visa. What do I do?
Eva, thank you soooo much. Your recommendation will work out well. I just called the airline and you are absolutely right; there are flights available to Bogota and then to San Salvador. Your assistance is greatly appreciated :)
|
OPCFW_CODE
|
No one mentions that lvm2 can multiply read and write speed (similar to raid0).
I personally use 3 identical disks with lvm2 in striped mode over them; read and write operations take 1/3 of the time, which is a big impact: the filesystem is three times faster on top of it.
I know: if any disk fails, all data on them becomes inaccessible; but that does not mean any loss, since backups are a MUST. Nothing like RAID, LVM2 or ZFS removes the need for backups, so I never use mirroring, raid5 and such; I always use striping (to get the topmost performance) and keep synced backups.
ZFS is great for on-the-fly compression, and with a copies parameter bigger than one it is like mirroring, but one thing ZFS has that no one else has is on-the-fly auto-recovery from bit rot (bits that spontaneously change while the disk is powered off). However, ZFS imposes a really big performance impact (calculating checksums, verifying them) and a major problem (adding more physical disks).
To sum up: I use ZFS only for my backups on external disks; multiple (two or three) SSDs with striped lvm2 for the OS (after upgrades I redo the clone of the OS; I tend to use an immutable OS); and multiple (six) spinning disks with striped lvm2 for data, like virtual machines (again, after any change I redo the backups). So after any disk failure I only need to replace it and restore the last backup. Nowadays I have near 1.8GiB/s write speed, so restoring one virtual machine from backup takes less than 30 seconds (32GiB per virtual machine disk).
So my answer is: do not use just one thing; be smart and use the best of each part. Striped lvm2 is faster than mdraid level 0, more so when using six spinning disks. One warning with striping SSDs: two or three are good, but four SSDs can degrade performance (my tests gave lower write speed when I used four identical SSDs in striped mode, no matter if lvm, mdraid0, etc.). It seems that SSD TRIM and write amplification can be the main cause of adding more SSDs to a striped volume lowering write speed.
Warning with SSDs, and any raid0 (striped volumes): align things perfectly and assign cluster sizes on the filesystem correctly, along with stripe size, etc., so nothing causes degradation. As a sample: if the disk sector is 2048 bytes, every read/write is at least 2K, so never use a filesystem with a 512-byte cluster; use a 2K or 4K cluster size instead. Now imagine you use 3xHDD, each with 2K sectors; at any read/write the optimum filesystem cluster would be 3x2K=6K, but that is not possible on many filesystems. Then think what happens if you use a 64K cluster size: 64K/6K=32/3, which is unbalanced, so not optimal, and so on. Do the maths to get the optimum cluster size.
My best results are: cluster size = stripe size * number of disks in the stripe; that way each read/write is of the exact size that makes all disks work, so the speed improvement is really great. An example: 192K cluster size for 3 disks with a 64K stripe size; another example: 192K cluster size for 6 disks with a 32K stripe size.
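That rule of thumb is easy to write down and check; a tiny sketch (`optimal_cluster_kib` is a made-up helper name, the numbers are the examples from the text):

```python
def optimal_cluster_kib(stripe_kib: int, num_disks: int) -> int:
    """Rule of thumb from the text: cluster size = stripe size * number of
    disks, so one filesystem cluster spans exactly one full stripe row."""
    return stripe_kib * num_disks

# Both examples from the text land on a 192K cluster:
three_disk = optimal_cluster_kib(64, 3)  # 3 disks, 64K stripe -> 192
six_disk = optimal_cluster_kib(32, 6)    # 6 disks, 32K stripe -> 192

# The mismatch warned about earlier: a 64K cluster over a 6K stripe row
# (3 disks x 2K sectors) does not divide evenly, hence unbalanced I/O.
unbalanced = (64 % 6 != 0)
```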
And always remember to test a single disk with 4K, 8K, 16K, 32K and 64K blocks; a lot of disks give really bad speeds with low block sizes like 4K, but are more than ten times faster at 64K, 128K or higher.
Yes, using big cluster sizes can waste a lot of space in the last cluster of each file (if you use millions of files of only 1 byte each); better to use an on-the-fly compaction/packing system on top of the filesystem. As a sample, a 4TiB disk with a 4K cluster size can only hold 4TiB/4K=1073741824 files of 1 byte each, which is just 1GiB of data if all files are 1 byte in size (4K cluster size); a bigger cluster size gives a worse ratio. But if the files are huge, like virtual machines (near 32GiB as a sample, or just a few megabytes), the loss is only in the last cluster; so for big files, a big cluster size is much better for performance, but beware how the virtual machine uses it.
No one will tell you this secret: inside the guest, do not use a 4K cluster size; use the same cluster size as the one where the virtual disk resides, or a multiple of it.
Yes, I am a maniac for getting the topmost speed out of guest disks. As I said, with 6 rotating disks I get near 1.7GiB/s; the SATA III bus speed is the bottleneck, not the disks themselves. I use high-end (not cheap) disks with 128MiB cache and a write speed of 283MiB/s each.
For you and for all people: it is much better to learn how cluster size, stripe size and block size must be related before doing any speed test; otherwise testing LVM2 or any other RAID (also ZFS) can give FALSE conclusions.
Just a sample: I tested my Linux boot times with 2x60MiB/s 2.5-inch 5400rpm SATA disks on a mainboard with SATA II ports, and then tested with 2xSSD SATA III (they can write more than 250MiB/s each if connected to SATA III ports). The boot only took two seconds less, just two seconds on a five-minute boot. Why? Because during most of the boot time the disks are not being used; the machine is doing things in RAM and CPU, not I/O.
Always test the real-day things you will do, not just crude speeds (in other words, max speed).
Max speed is good to know but not representative; you may not be using the disks at max speed 100% of the time. The OS and apps must do things in RAM and CPU without I/O, and during that time disk speed does not matter at all.
Everybody says SSDs improve Windows boot speed a lot; in my tests that is also FALSE, it only improves 28 seconds on a boot time of nearly eight minutes.
So if you do like me (Linux copy-to-RAM on boot), SSDs will not be better than rotating HDDs. I have also tested a USB 3.1 Gen2 stick (139MiB/s read); the boot time is only affected by a few seconds on a five-minute boot. Why? Easy: the read is done when copying to RAM; after that, the disk/SSD/USB stick is not used again for the rest of the boot, as the data is in RAM, like a RAM drive.
Now I am selling all the SSDs I have; they do not improve Linux copy-to-RAM at boot, but benchmarks say they are 5x faster... see, benchmarks give FALSE conclusions... yes, test and test real-day work.
Hope this makes things clear... LVM with bad cluster and stripe sizes hurts much more by far than the overhead of the layer.
|
OPCFW_CODE
|
Is there any reason to use final specifier with unions?
The final specifier can be used with classes or structs to forbid inheriting from them. In such case we only need to mark our class/struct as final:
class Foo final {
// ...
};
More interestingly, the same syntax is valid for unions:
union Foo final {
// ...
};
From cppreference.com:
final can also be used with a union definition, in which case it has
no effect (other than on the outcome of std::is_final), since unions
cannot be derived from.
It looks like the usage of the final specifier with unions is nonsensical. If so, why is it even possible to mark a union as final? Just for consistency? Or for some stuff concerning type_traits? Did I miss something, and are there situations where we need to use final with unions?
If you have a fetish for "no effect". Which is far from "nonsensical".
@ddriver Sorry, I didn't catch that. What do you mean by "fetish"?
I mean exactly what it means.
It's probably because a union is considered a class, so the final keyword applies to it.
@AndyG It basically means that the single reason is consistency...
More like poor type hierarchy and/or legacy stuff.
Like you said, a union can't be derived from, so the final specifier has absolutely no effect on it, other than on the outcome of std::is_final, whose behavior can be tested using the code below, where all assertions pass:
struct P final { };
union U1 { };
union U2 final { }; // 'union' with 'final' specifier
template <class T>
void test_is_final()
{
static_assert( std::is_final<T>::value, "");
static_assert( std::is_final<const T>::value, "");
static_assert( std::is_final<volatile T>::value, "");
static_assert( std::is_final<const volatile T>::value, "");
}
template <class T>
void test_is_not_final()
{
static_assert(!std::is_final<T>::value, "");
static_assert(!std::is_final<const T>::value, "");
static_assert(!std::is_final<volatile T>::value, "");
static_assert(!std::is_final<const volatile T>::value, "");
}
int main()
{
test_is_final <P>();
test_is_not_final<P*>();
test_is_not_final<U1>();
test_is_not_final<U1*>();
test_is_final <U2>(); // 'std::is_final' on a 'union' with 'final' specifier
test_is_not_final<U2*>();
}
You might be using third-party libraries (even from the same firm, but no write access) that depend on std::is_final<T>. A common example is a semi-properly written singleton pattern: you definitely don't want BaseSingleton and DerivedSingleton to be instantiated in parallel. A (notoriously bad) way to prevent this is to std::enable_if<> for std::is_final<T>. Thus you need to be able to specify it somehow. The question is, why unions are not final automatically.
|
STACK_EXCHANGE
|
One of the joys of blogging is that you occasionally discover people thoughtfully and politely reducing your arguments to shreds. I recently came across an article by William Caputo on the subject of my discussion with Ryan back in November.* I’ll try to summarize the original discussion:
- Ryan contended that using Python fundamentally changed the principles of OOP.
- I argued that the SOLID principles still held.
Now, in my original article, I accepted that dynamic languages helped ameliorate the sharp edges of statically typed languages. Importantly,
- Python’s constructor syntax means that any constructor is effectively an implicit abstract factory. (This advantage is unique to Python, Ruby is nowhere near as slick in this respect.)
- The dynamic nature of Python means that your interaction surface with another class is exactly those methods you call, no more no less.
Now, in certain respects, I assumed that some principles became less important simply because the language took on some of the burden. William, however, has pointed out that I was wrong.*
The thing is, we were both concentrating on one aspect of SOLID here: statically typed languages have fairly high friction related to their type system, which can render code brittle. We therefore have practices closely associated with the SOLID principles that are pretty much the only way to keep code flexible in languages like C#. These practices, such as always creating an interface to go with an implementation, are themselves a form of friction, which Ryan was arguing was unnecessary in Python.
As William points out, that’s a good benefit of SOLID; it’s not the whole story.
ISP Isn’t About Code
Imagine you’ve got a space station. This station gets visited by two kinds of ships: shuttles, which carry people, and refuelling tankers. Now, the requirements for the shuttle’s docking interface are quite large: you’ve got to be able to comfortably get a stable human shaped hole between the two for an extended period of time. Refuelling, on the other hand, is carried out by attaching a pipe to the tanker.
Now imagine that you were told that both ships needed to use the same connector. You'd end up with a massively overcomplex connector. Now, this metaphor works perfectly well if you consider the space station to be exposing a single IConnector interface and the ships to be consuming classes. However, William's first point is that it actually still holds for data feeds, web services, any interaction between two systems. Indeed, the ISP does, in fact, apply to space stations. In many ways, interfaces are cheap in code; in third-party integration they're expensive, and so the ISP is more important. Something to bear in mind the next time you try to reuse the webservice you built for the last client.
Just Because You Can, Doesn’t Mean You Should
Since I’m interviewing at the moment, I’m getting heartily sick of hearing the phrase “an interface is a contract”, but it’s relevant in this context. In a statically typed language the contract is fixed and enforced by the consumed class. Because of this friction, often you get an interface that is larger than it should be because it’s trying to be forgiving enough to handle multiple clients. ISP says you should be doing the opposite: having interfaces for each consumer. In a dynamic language, the consumed class can’t enforce the contract. However, that doesn’t remove the concern, it just rebalances the responsibilities.
Returning to the space station, imagine if you allowed a ship to attach itself to any part of the hull. That would certainly help with adding in new types of vessel to the mix. The problem would come when you wanted to change the space station itself. Maybe those solar panels aren’t very useful anymore and you’d like to get rid of them. Unfortunately, it turns out that there’s a visiting space monster that wraps its tentacles around the panels. You don’t want to upset the monster, so you end up leaving the useless panels on the station.
This is the danger in dynamic languages. In a statically typed language, the space monster wouldn't have been able to visit at all without work on the part of the station. However, if we observe the ISP, we still have to do the work. Equally, the space monster needs to be responsible and not just attach itself to anything that provides purchase. To put it more formally, the consumed class still needs to export an interface the consuming class is going to find useful, and the consuming class has to avoid taking unnecessary dependencies. The expression of the problem may be different, but the concerns and the principle remain.
I originally said that because Python automatically keeps interface surfaces exactly as small as what the developer actually uses, there wasn't much you could do about ISP in Python, but in fact that's not the case. Interaction interfaces between classes can still be made smaller, and they can still be made more role-specific. You can still attempt to create Unified Modelzilla in Python, and it will be as bad an idea as it was when you tried it in J2EE. In many ways, paying attention to ISP is more important in Python than it is in a statically typed language.
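Role-specific interaction surfaces can be made explicit in modern Python with typing.Protocol. This is a hedged sketch with hypothetical ship/station names echoing the space-station metaphor above, not code from the original discussion:

```python
from typing import Protocol

# Role interfaces: each one lists only the methods its consumer
# actually calls, which is the heart of the ISP.
class Dockable(Protocol):
    def dock(self) -> str: ...

class Refuelable(Protocol):
    def attach_fuel_line(self) -> str: ...

class Shuttle:
    def dock(self) -> str:
        return "crew transferred"

class Tanker:
    def attach_fuel_line(self) -> str:
        return "fuel flowing"

def receive_crew(ship: Dockable) -> str:
    # Station-side code depends only on the docking role, so removing an
    # unrelated fixture (the solar panels) can't break this consumer.
    return ship.dock()

def refuel(ship: Refuelable) -> str:
    return ship.attach_fuel_line()
```

Duck typing still works at runtime either way; the protocols simply document (and let a type checker enforce) exactly which surface each consumer is allowed to grab onto, so no space monster ends up clinging to the solar panels.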
*If you want to read it, William’s article is on his home page dated 21 November. I’m afraid I don’t have a permalink.
One thought on “More About Python and the Interface Segregation Principle”
William’s argument is from an older post (see Nov 09), but I’ll bite. I think William still misses my point somewhat, and we’re officially at the point of splitting hairs now. William says here, for example:

“I.e.: Duck Typing makes it unnecessary and even unmeaningful to think about ISP when writing code in dynamic languages as we get its benefits (easier to understand and maintain interfaces) for free.”

Now, I’m pretty sure I disagree with this, and for two reasons: 1) nowhere is it established that we get the true benefit of ISP, dependency management, for free, and 2) I’m not convinced duck typing absolves me of having to worry about how multiple clients interact with a single piece of code.

William, you and I are in a lot of ways talking about the same things in different terms. I’m not sure ISP is our only concern when having to “worry about how multiple clients interact with a single piece of code”; by itself it is only a component of that entire concern. I may blog about this yet, but a few days forming my thoughts more would help.
|
OPCFW_CODE
|
Handling log and configuration files when load balancing apache
I asked this question on Stack Overflow, then I realized that Server Fault would be more appropriate; I apologize if anyone stumbles upon both questions.
So, I am currently rebuilding my web platform from a single machine to a cluster of machines, and I will be using Apache load balancing to do this, but I have two questions that I need a good answer to before proceeding. I have Googled and searched here on SF, but didn't find anything good.
My setup will be one Debian machine running the Apache load balancing server (i.e. Apache with mod_proxy) and then any number of "slave" machines, that are balancing members. All of these are VPS inside a VMWare machine, so setting up new slaves as needed will be trivial.
Log Files
The first question is that of log files. In order to troubleshoot my platform, I sometimes need to analyze log files, both access logs and error logs, from Apache. Since the load is evenly distributed (I don't know if I'll even use sticky balancing; any host could probably handle any request at any time), the log entries will be spread across each slave Apache instance. Is there a way to consolidate these live, meaning that my live log analyzer could see the log files from all hosts? I certainly understand that doing so while the files are on several hosts would be difficult, so is there a way to make sure that all log files are kept on one server?
I'm thinking about two things myself, but I would greatly appreciate your input.
syslogd
The first is syslogd, wherein it would be possible for several hosts to write to one logging host. The problem with this is that in my current setup, each virtual host in Apache has its own log file. That could probably be fixed in some manner, though. My main usage for this is troubleshooting, not keeping separate logs for each host (albeit if both goals could be met, that would certainly be a bonus).
NFS
My next thought was about NFS, i.e. having a NFS share on the LAN where each slave can write to the same log file. I'm going to go ahead and assume that this will be difficult since slave 1 would open the log file and then slave 2 wouldn't be able to write to it.
As I said, your input is greatly appreciated since I feel I'm stuck in how to solve this.
Configuration files
This is another thing altogether. Each slave will respond to each request as if acting as one single server. That is the entire idea. But what about making changes to the apache configuration files, adding virtual hosts, setting up other parameters? What if I have ten slaves, or fifty? Is there a way to make sure that all these slaves are always in sync? I am already using a NFS export to make sure they all have the same files, but should I use the same approach with the configuration files? Or should I have these as some form of repository and then use rsync to copy them out to the slaves? One problem is that I have built an interface in my web platform that edits these configuration files (namely the file with the virtual hosts), and since that action would take place on one of the slaves, the most current copy of this file could potentially be on one slave.
I realize that this was a long and unwieldy post, and I apologize. I just wanted to make sure that all the parameters of my problem were expressed.
I hope someone out there can help me, as you have before! Thank you in advance!
Take a look at logresolvemerge.
You can combine rsync with incrond.
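The core of what logresolvemerge does, merging already-sorted per-slave logs into one chronological stream, fits in a few lines of Python. A hedged sketch (`merge_access_logs` and the sample lines are made up; it assumes each line starts with a sortable timestamp such as ISO 8601, which plain Apache common-log format does not use without a custom LogFormat):

```python
import heapq

def merge_access_logs(*slave_logs):
    """Merge already-sorted per-slave access-log streams into one
    chronologically ordered list, comparing lines by their leading
    timestamp -- the same idea logresolvemerge implements for Apache."""
    return list(heapq.merge(*slave_logs))

slave1 = ["2010-03-01T10:00:01 GET /index.html",
          "2010-03-01T10:00:05 GET /style.css"]
slave2 = ["2010-03-01T10:00:03 GET /login",
          "2010-03-01T10:00:04 POST /login"]
merged = merge_access_logs(slave1, slave2)
```

Because heapq.merge streams lazily, the same approach works on open file handles rather than lists, so a live analyzer can tail several shipped logs at once.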
|
STACK_EXCHANGE
|
Issue with indexing BGE-M3 (large dimensionality vectors)
Hello,
I am trying to use the BGE-M3 model within the library. I managed to tweak the library a bit to be able to use the model to do the inference and get the same embeddings as the one I get using the original BGE-M3 code.
I checked with index-free querying and it works great, but when trying to build an index, the results just become random. On the same dataset, I go from 'Hit@1': '0.41', 'Hit@3': '0.72', 'Hit@5': '0.82', 'Hit@10': '0.89', 'Hit@30': '0.96', 'Hit@50': '0.98', 'Hit@100': '0.98' with my index-free script, to 'Hit@1': '0.05', 'Hit@3': '0.12', 'Hit@5': '0.18', 'Hit@10': '0.28', 'Hit@30': '0.46', 'Hit@50': '0.62', 'Hit@100': '0.91' when using an index. I built indexes with ColBERT models just fine, so it has to do with the indexing of this specific model.
One hypothesis might be that the embedding size is too large (1024) to be compressed to 8 bits, but I can't manage to put higher nbits.
If anyone has an idea of why the indexing might be failing, I am really interested!
Edit: If anyone is interested, I can explain how I made BGE compatible for inference
I did some additional experiments which confirmed that the issue is in the compression process.
I computed the similarity between original vectors and decompression of the compressed version (np.diag(embs[:100] @ self.decompress(ResidualCodec.Embeddings(codes, residuals))[:100].T.cpu())).
This value is, for the original ColBERTv2 model 0.9995, whereas, for the BGE model, it is 0.010826.
In an attempt to understand why, I reversed every modification done to make BGE compatible with RAGatouille and identified one that strongly affects the score: the final normalization (after the linear layer) is L2 in the original ColBERT and L1 in BGE. Using L2 increases the reconstruction similarity to 0.394.
More surprising is that, by reducing the number of bits (setting nbits = 2 manually), the score did not decrease but increased to 0.9473. This means that the model is actually usable in this state, but it would be better to understand why:
Using L1 destroys the compression results. Although it is not a big deal, BGE has been trained with L1 and using L2 might hurt the results.
How 2 bits quantization can be better than 8 bits. Again, it is usable like so, but given that BGE is using a very large embedding size, it would be cool to use more bits to encode it
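For intuition about why normalization interacts with the quantizer, here is a toy, self-contained version of a bucket quantizer: synthetic random embeddings and a single global codebook built with np.quantile, in the spirit of the collection_indexer.py fix mentioned above, not ColBERT's actual ResidualCodec. With L2-normalized vectors and 8 bits, the reconstruction similarity stays close to 1:

```python
import numpy as np

rng = np.random.default_rng(0)
embs = rng.normal(size=(1000, 64))
embs /= np.linalg.norm(embs, axis=1, keepdims=True)  # L2-normalize rows

def quantize_reconstruct(values, nbits):
    # Build 2**nbits quantile buckets over all values, then map every
    # value to its bucket's mid-quantile representative weight.
    n = 2 ** nbits
    cutoffs = np.quantile(values, np.linspace(0.0, 1.0, n + 1)[1:-1])
    weights = np.quantile(values, (np.arange(n) + 0.5) / n)
    return weights[np.digitize(values, cutoffs)]

recon = quantize_reconstruct(embs, nbits=8)
# Mean of the diagonal of embs @ recon.T, as in the experiment above.
sim = float(np.mean(np.sum(embs * recon, axis=1)))
```

With unit-norm rows this diagonal is a cosine similarity, so values near 1 mean faithful reconstruction; with L1-normalized rows the same diagonal is no longer bounded that way, which is one reason the measured similarity collapses in the BGE setup.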
@NohTow Hi, thanks for the information. I'm interested in making bge-m3 compatible with this package.
Could you please explain some basic configuration, like where to modify the models and which files should be replaced?
Otherwise, I also plan to replace the default BERT model used in ColBERT with another BERT model; could you give me some idea of where I should change the model architecture?
Let's say I have another pretrained embedding model based on the BERT architecture; I want to cut off the final dense vector representation layer and take the vector of each token instead, plug it into the late interaction part, and then fine-tune.
Thanks in advance!
Hi @NohTow . I'm also very keen to user bge-m3 and would love to hear more about how you implemented it. Have you made any progress on the compression? I need to do RAG on a multilingual dataset, and bge-m3 seems like a good start.
Sorry for the delay.
Here are the modifications that I did:
Using np.quantile instead of the torch function in collection_indexer.py
# bucket_cutoffs = heldout_avg_residual.float().quantile(bucket_cutoffs_quantiles)
# bucket_weights = heldout_avg_residual.float().quantile(bucket_weights_quantiles)
# Very ugly fix to RuntimeError: quantile() input tensor is too large, see https://github.com/pytorch/pytorch/issues/64947
bucket_cutoffs = torch.tensor(np.quantile(heldout_avg_residual.float().detach().cpu().numpy(),bucket_cutoffs_quantiles.cpu().numpy()))
bucket_weights = torch.tensor(np.quantile(heldout_avg_residual.float().detach().cpu().numpy(),bucket_weights_quantiles.cpu().numpy()))
Removing [D] and [Q] special tokens because it has not been trained for these, by commenting these two lines in tensorize() from both query_tokenization and doc_tokenization
# batch_text = ['. ' + x for x in batch_text]
# ids[:, 1] = self.Q_marker_token_id
Remove [CLS] token from both query/doc before going into the linear layer because it is not used in BGE ColBERT scoring
Q = self.linear(Q[:, 1:])
D = self.linear(D[:, 1:])
Adding a bias to the linear layer of HF_Colbert and changing its size to 1024 (I think the size should come out right using the colbert_config)
# self.linear = nn.Linear(config.hidden_size, colbert_config.dim, bias=False)
self.linear = nn.Linear(config.hidden_size, colbert_config.dim, bias=True)
Translating the weights from the BGE checkpoint to a ColBERT checkpoint by renaming the linear layer from BGE to the one of ColBERT (c.f. the error message: Some weights of HF_ColBERT were not initialized from the model checkpoint at BAAI/bge-m3 and are newly initialized: ['linear.weight'])
Change the normalization to L1 for both query/doc
D = torch.nn.functional.normalize(D, p=1, dim=2)
And with that, I realized how painful it was to even load the model.
Unfortunately, I did not find any fix for the compression, except that you can use the params I said were working (although they do not make sense).
Hi there
I am getting the same warning message mentioned here. Specifically this:
Some weights of HF_ColBERT were not initialized from the model checkpoint at aubmindlab/bert-base-arabertv2 and are newly initialized: ['linear.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
I want to change the bert-base model to bert-base-arabertv2 and train a ColBERT model from there.
Could you please elaborate more on what I should do for this warning message?
Hello,
The problem is that the linear weights do not have the same name.
What I did was simply exporting the linear layer, renaming it, loading it in the BaseColBERT model and exporting it.
First, export the layer:
from FlagEmbedding import BGEM3FlagModel
import torch
model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True)
torch.save(model.model.colbert_linear.state_dict(), "linear_layer_bge_statedict.pt")
Then, load it into a BaseColBERT model and export the model.
from colbert.modeling.base_colbert import BaseColBERT
import torch
model = BaseColBERT(name_or_path="BAAI/bge-m3")
model.linear.load_state_dict(torch.load('linear_layer_bge_statedict.pt'))
model.save("export_bge_m3_correctlinear")
You can then use the model in RAGatouille:
from ragatouille import RAGPretrainedModel
RAG = RAGPretrainedModel.from_pretrained("export_bge_m3_correctlinear")
Please note that you can also load the BGE-M3 model, get the linear layer, load it into BaseColBERT and export it within a single script (without saving the intermediate .pt), but I prefer to give exactly the code I used so as not to introduce mistakes, and it might be less resource-intensive to not load both models at once.
Hope it helps.
Thanks for your detailed comment!
However, I am using a normal BERT checkpoint that does not have this linear layer, so which layer's weights should I load instead?
Hello,
Thank you for the update and the detailed explanation. I have followed your steps and used BGE-M3 with the direct ColBERT implementation, without RAGatouille. However, the output is not stable: for a k value of 10, only 3 or 4 passages come back, depending on the query. Can you please help me with this? Is there any setting that limits ColBERT from returning all passages up to the k value? I see the same issue with the RAGatouille implementation of BGE-M3. One more thing I have found: the problem does not occur when I use RAG.encode() to encode my passages, but that stores the encodings in memory. Can you please look into this bug? It would be great if ColBERT returned the top 50 passages with k = 50 using the BGE-M3 model.
|
GITHUB_ARCHIVE
|
What's in a name? These DevOps tools come with strange backstories
See it now: Ansible
Ansible is an open-source software provisioning, configuration management, and application deployment tool from Red Hat. Ursula K. Le Guin coined the word "ansible" in her 1966 novel Rocannon's World. The word was a contraction of "answerable," as the device would allow its users to receive answers to their messages in a reasonable amount of time, even over interstellar distances. The ansible has found its way into many other authors' science fiction stories as a device for faster-than-light communication.
Disclosure: ZDNet may earn commissions from some of the products featured in this gallery.
See it now: Capistrano
Capistrano is an open-source tool designed to remotely automate scripts for deploying web applications. Capistrano is the town in central Italy where Saint John of Capistrano was born. San Juan Capistrano in California is the home of a migratory phenomenon where, every spring, swallows migrate 6,000 miles from Argentina. In a chat log, the developers liked the name because Capistrano is "casually sophisticated, pleasant, and refreshing."
See it now: Docker
Docker is an open-source container-level operating system virtualization system. Since Docker is all about containers, it seems fitting that the company's logo looks like a stylized container ship -- and the ship/dock connection makes the name Docker all the more fitting.
See it now: Ganglia
Ganglia are the connecting structures between the peripheral and central nervous systems in the human body. In the DevOps world, Ganglia is an open-source distributed monitoring system, essentially creating a peripheral nervous system for systems, clusters, networks, and widely distributed environments.
See it now: Gradle
Gradle is an open-source build tool intended to improve on Apache Maven in terms of flexibility, performance, user experience, and dependency management. According to a forum post, Gradle, which sounds like an inverse portmanteau of cradle to grave, actually has no meaning. It just sounded cool. The logo refers to the effort and tenacity to perform a build, the effort an elephant can expend.
See it now: Icinga
Icinga is an open-source network monitoring system created as a fork of Nagios, intended to overcome perceived limits of Nagios. Icinga is a Zulu word meaning to look for or search for. It also sounds like icing, which makes us think of cake. Almost everything makes us think of cake.
See it now: Java
Java is a general purpose programming language that's pretty much become the foundation for our modern mobile-centric world. It's also now owned by Oracle, which has caused enormous amusement and employment for attorneys everywhere. As for its name, it was originally Oak, then Green, then -- from a whiteboard brainstorming session fueled by coffee -- Java.
See it now: Jenkins
Jenkins is a Java-based automation server. It was originally called Hudson, but was changed due to a dispute between the developers and Oracle, owner of the Hudson trademark. The Jenkins logo is an image of a butler, often referred to as Mr. Jenkins. World of Warcraft players can't help but think of the Leeroy Jenkins meme whenever they hear the word Jenkins.
See it now: Jira
Jira is a proprietary issue tracking product developed by Atlassian that allows bug tracking and agile project management. The product name is a truncation of Gojira, the Japanese word for Godzilla. That makes Jira, though a few degrees of separation, a reference to Bugzilla, a competing product.
See it now: Juju
Juju is an African spiritual practice involving objects and amulets. Juju is also an open-source application and deployment modeling tool. The core objects of Juju are called Charms, earning Juju (the software) our Cultural Appropriation Much? Award for this gallery.
See it now: Kubernetes
Kubernetes is an open-source orchestration system originally developed by Google, intended to automate application deployment and scaling. The name Kubernetes comes from the Greek for helmsman, pilot, or navigator.
See it now: Nagios
Nagios is an open-source system, network, and infrastructure monitoring service. Nagios is also a recursive acronym standing for "Nagios Ain't Gonna Insist on Sainthood." See, back before it was called Nagios, it was called NetSaint until it ran afoul of the trademark gods. As it turns out, Agios is also Greek for "saint." Can you just stand all that cleverness?
See it now: Perl
It's a little difficult to classify Perl as a DevOps language, but since it was so heavily used in the early days of web applications, it deserves a place in our list. It's also a cool name. Developer Larry Wall originally named it Pearl, but it turned out there was another language by that name. So Perl (without the "a") was born.
See it now: Prometheus
Prometheus is an open-source monitoring project that records real-time metrics as a service. Prometheus was also the mythical Titan who stole fire from the gods and gave it to humanity, leading to the birth of civilization. So, naming your software Prometheus is not pretentious in any way. Not at all. Plus, a lot of science fiction starships have been named Prometheus.
See it now: Puppet
Puppet is a so-called open-core, open-source product, meaning some of it is open source and some of it is all about commercialization. The software is designed to manage system configuration through a declarative language. Essentially, the software is intended to make systems perform like they're puppets on the end of a string.
See it now: Python
Python is a high-level programming language used in many network and web applications. Python was named after Monty Python's Flying Circus, earning it our Most Delightful Etymology Award for this gallery.
See it now: Ruby
Ruby is a general purpose programming language that's the foundation of the very popular Ruby on Rails framework. The jewel-styled name Ruby was inspired by Perl (which was briefly going to be called Pearl).
Ruby on Rails
See it now: Ruby on Rails
Ruby on Rails provides a rich web application framework for fast development and deployment. The "on Rails" portion of the Ruby On Rails name is because frameworks are designed to provide a clear, smooth, somewhat automatic path, like steel rails provide to trains.
See it now: Scala
Scala is a JVM-compatible language that is object-oriented and also supports functional coding, with the intent of producing more concise, easier-to-support code. Scala is a blend of scale and language, implying code designed to grow. Scala is also a nightclub in London, an English electronic rock band from the 90s, and the name of a theatre constructed on Charlotte Street in 1772.
See it now: Snort
Snort is open source intrusion detection software managed by Cisco. Snort, at its most basic, is a packet sniffer, so it doesn't take a pig flying to see the jump from sniff to snort, which explains why Snort's mascot is a pig.
See it now: Splunk
Splunk is a machine data analytics firm that provides intelligence, security, and analytics solutions for infrastructure and IT operations. The name Splunk is derived from spelunk, the practice of cave exploration for the pure fun and adventure of it.
See it now: Squid
See it now: Vagrant
Vagrant is an open-source tool for building and maintaining software development environments hosted on virtual machines. While the word "vagrant" implies someone down on their luck, a key part of the definition is someone without a fixed abode. Since Vagrant is designed to virtualize software development environments so they're portable, the name is rather appropriate.
|
OPCFW_CODE
|
package org.monarchinitiative.hpoworkbench.cmd;
import com.beust.jcommander.Parameters;
import org.apache.log4j.Logger;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.util.*;
import java.util.stream.Collectors;
import org.monarchinitiative.hpoworkbench.io.HPOParser;
import org.monarchinitiative.phenol.ontology.data.Ontology;
import org.monarchinitiative.phenol.ontology.data.Term;
import org.monarchinitiative.phenol.ontology.data.TermId;
import org.monarchinitiative.phenol.ontology.data.TermSynonym;
/**
* Make a CSV file representing the HPO hp.obo file
* Created by robinp on 6/23/17.
*/
@Parameters(commandDescription = "csv. Make a CSV file representing the HPO hp.obo file")
public class HPO2CSVCommand extends HPOCommand {
private static Logger LOGGER = Logger.getLogger(HPO2CSVCommand.class.getName());
/** name of this command */
private final static String name = "csv";
private Map<String,String> hpoName2IDmap=null;
public String getName() { return name; }
/**
*
*/
public HPO2CSVCommand() {
}
/**
* id: HP:3000067
name: Abnormality of lateral crico-arytenoid
def: "An abnormality of a lateral crico-arytenoid." [GOC:TermGenie]
synonym: "Abnormality of lateral cricoarytenoid muscle" EXACT []
xref: UMLS:C4073274
is_a: HP:0000464 ! Abnormality of the neck
is_a: HP:0003011 ! Abnormality of the musculature
is_a: HP:0025423 ! Abnormal larynx morphology
TODO def not working,
*/
private static final String header="#id\tname\tdef\tsynonyms\txrefs\tis_a";
/**
 * Perform the conversion of the ontology terms to a TSV file.
 */
@Override
public void run() {
Ontology ontology=null;
if (hpopath==null) {
hpopath = this.downloadDirectory + File.separator + "hp.obo";
}
HPOParser hpoparser=new HPOParser(hpopath);
try {
ontology = hpoparser.getHPO();
} catch (Exception e) {
System.err.println("[ERROR] could not parse hp.obo file.\n"+e.toString() );
System.exit(1);
}
Collection<Term> terms = ontology.getTerms();
try {
BufferedWriter bw = new BufferedWriter(new FileWriter("hp.tsv"));
bw.write(header+"\n");
for (Term t : terms) {
    String label = t.getName();
    String id = t.getId().getValue();
    String def = t.getDefinition();
    String synString = t.getSynonyms().stream().map(TermSynonym::getValue).collect(Collectors.joining("; "));
    Set<TermId> ancestors = ontology.getAncestorTermIds(t.getId());
    String parentsString = ancestors.stream().map(TermId::getValue).collect(Collectors.joining("; "));
    // TODO: def parsing and xrefs still not working; emit an empty xrefs column for now
    bw.write(String.format("%s\t%s\t%s\t%s\t%s\t%s\n", id, label, def, synString, "", parentsString));
}
bw.close();
} catch (IOException e) {
e.printStackTrace();
System.exit(1);
}
}
}
|
STACK_EDU
|
Honestly, gstreamer is the most powerful piece of video software out there, and also by far the most confusing! I've spent ages scouring all the forums, but alas, I have not reached any conclusions.
I did a standard install of gstreamer; the video would get to QGroundControl, but not Mission Planner. (On Windows 10, 64-bit, with the latest stable versions of QGroundControl and Mission Planner.)
I can only figure that is because of the gstreamer version.
So now I figure I need to install gstreamer version 1.9.2 on my raspberry pi.
I cannot figure out how to do this!
There is this page - installing on linux
but it seems like it would only install the latest gstreamer version. There are pages where you can download the version you need, but how do you install all the correct libraries and environment requirements?
I have had this working before on the same raspberry pi, I have no idea how i did it, i was following all sorts of blogs and installing all sorts of things, this time i am trying to do everything properly and document it. (i will happily make a video tutorial for this, and documents you can add to the main github guide book, as this is something that soaks up endless hours for newbies!).
So I haven't heard anything. So here's what I'm going to try.
I've downloaded the 1.9.2 version from here: https://gstreamer.freedesktop.org/src/gstreamer/
In the release notes it says you still need to download good, bad, ugly, etc.
So I'll start off installing the 1.9.2 version, and then have a go at installing the modules. (I'm not sure if they have to be 1.9.2 as well.)
I’m using the default gstreamer on a Raspberry Pi and have no issues streaming to Mission Planner and Qgroundcontrol.
What pipeline string are you using?
For reference, I have a RTSP server using gstreamer. Happily streams to everything - https://github.com/stephendade/Rpanion-server/blob/master/python/rtsp-server.py
Wait, there's a default gstreamer on the Raspberry Pi???
I've been going in circles for days!
I have to give this a try!
OK, I've found a straightforward install plan. Unfortunately it's still only getting to QGroundControl and not Mission Planner, but argh, I'm a bit over it now. I'll just stream it to a gstreamer script so I can view the video.
and here is the code to enter in the terminal in case the link disappears
sudo apt-get install gstreamer1.0-plugins-base -y;
sudo apt-get install gstreamer1.0-plugins-good -y;
sudo apt-get install gstreamer1.0-plugins-bad -y;
sudo apt-get install gstreamer1.0-plugins-ugly -y;
sudo apt-get install gstreamer1.0-libav -y;
sudo apt-get install gstreamer1.0-omx -y;
sudo apt-get install gstreamer1.0-tools -y
This is what I'm entering on the Pi to transmit video:
sudo raspivid -t 0 -h 720 -w 1024 -fps 25 -hf -b 2000000 -o - |
gst-launch-1.0 -v fdsrc !
rtph264pay config-interval=1 pt=96 !
udpsink host=192.168.1.106 port=5600
This is what I'm putting in Mission Planner:
udpsrc port=5600 buffer-size=90000 ! application/x-rtp ! decodebin ! queue leaky=2 ! videoconvert ! video/x-raw,format=BGRA ! appsink name=outsink sync=false
I also saw in another blog that pressing Ctrl-F will bring up the secret Mission Planner menu, which has a gstreamer option. Although the secret menu pops up, there is no option for gstreamer. (Mission Planner is on Windows 10, 64-bit, latest versions of everything.)
|
OPCFW_CODE
|
API: Support 'first message' authentication approach for WebSockets
Is your feature request related to a problem? Please describe.
The follows on from https://github.com/epinio/epinio/issues/871
Unfortunately the cookie approach doesn't work as well as hoped. As the cookie comes from a separate domain, it is counted as third-party, and third-party cookies are commonly blocked.
The final approach, and a good summary of the situation, is to try option 1 from https://websockets.readthedocs.io/en/latest/topics/authentication.html.
Describe the solution you'd like
There's a good breakdown on how this might work at https://devcenter.heroku.com/articles/websocket-security#authentication-authorization
This includes adding verification information associated with the token such as user, ip of requesting client, etc
Additional context
This would unblock the UI from showing application logs and eventually streaming events
The other remaining option is to create an independent Epinio UI backend which would proxy all HTTP requests (including the wss upgrade). I think eventually this is where we need to get to, but it would require a large investment of time. So if this issue is a similarly heavy investment, we need to re-evaluate the two.
Let's check if we can avoid implementing token based authentication and use the username+password combination as the first "auth" message. Otherwise, we are introducing a third authentication method (we already have basic auth and cookies) and we would probably have to move everything (e.g. the cli) to token based auth.
If I understand correctly, sending an Authorization header with the websocket upgrade request is deprecated.
How about an approach with a second endpoint and a JWT token:
- add another endpoint that issues JWT tokens using public/private key crypto. The server uses the private key to create a token. The token data contains a timestamp.
- the client uses basic auth or cookie authentication to get a JWT from the server
- the client initiates a websocket connection, sending the JWT in the URL as a query param
- the server can validate the request, because the token is signed with its own key
- the server can expire the token based on the timestamp
- no state needed in the server, unless we want to revoke tokens before they expire
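A minimal sketch of that issue/validate cycle (stdlib HMAC standing in for the public/private-key JWT signing proposed above; all names are illustrative):

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side signing key"   # stand-in for the server's private key
TTL_SECONDS = 60                      # token lifetime

def issue_token(user):
    """Sign a payload containing the user and a timestamp."""
    payload = json.dumps({"user": user, "ts": time.time()}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def validate_token(token):
    """Return the user if the token is authentic and unexpired, else None."""
    payload_b64, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                   # tampered signature
    data = json.loads(payload)
    if time.time() - data["ts"] > TTL_SECONDS:
        return None                   # expired
    return data["user"]
```

A real implementation would use an asymmetric JWT library so only the server ever holds the signing key; the stateless validate-by-signature and expire-by-timestamp steps are the same.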
More links:
https://stackoverflow.com/questions/64934740/websocket-connection-with-authorization-header/65001506
https://stackoverflow.com/questions/4361173/http-headers-in-websockets-client-api/4361358#4361358
* the client initiates a websocket connection, sending the JWT in the [URL as a query param](https://faqs.ably.com/is-it-secure-to-send-the-access_token-as-part-of-the-websocket-url-query-params)
The tokens will then fly unencrypted. I think that's the reason why authentication happens using headers usually. I'm not sure if this is acceptable in this case for some reason.
No the tokens will still be secured by https. We still upgrade a https connection to wss.
From the article I linked:
Connections default to being over TLS these days, so from the outside you can't access query params, nor can you access the contents of messages.
Traditionally it was considered poor practice to have credentials in query params because URLs can get stored in places such as logs for proxies, browser history, etc. However, neither of those concerns apply to websockets (a browser won't keep history of the connections made by a page), and proxies do not have access to the URL when there is a TLS tunnel. This concern arose when non-TLS interactions were the default. For comparison, most OAuth flows result in an endpoint access being made with an access_token query param.
We've had this debate on a previous product. Query params over https should be safe in transit but readable at the target server level, for instance they could appear in logs that record https requests. Though at that level I guess private things like headers, body, etc could also be logged.
I didn't know query params are also encrypted in transit. I'm happy with this plan then. We should be very careful with the websocket authentication. That is what we are going to use for the shell to the app feature as well, which gives access to a terminal directly.
|
GITHUB_ARCHIVE
|
The purpose of this project was to practice defining and using functions, as we did in last week's lab. But this time we complicated matters by grouping a set of similar functions into one Python file. Then we imported those functions into a new file, just as we do when we import the turtle package, thus avoiding the mess of looking at the functions' code when we use them.
Our task was to make an undersea scene using turtle. To make the scene I would need some basic shapes. I decided to define functions for a rectangle, circle, and triangle. These functions would take location, dimensions, and color as parameters. Here's a picture of a test of my three shapes using some random locations and dimensions:
As you can see, all of the triangles are drawn in the same orientation. That's because I made sure to rotate the pen back to 0° after it finished drawing every shape. You can see it at 0° in the bottom middle. If I didn't take it back to the same angle every time, the triangles and squares would come out tilted by whatever angle the pen had before starting the shape.
Next I combined sets of shapes into two different drawing functions, one for a seahorse and one for a fish.
Use your imagination. In defining their functions I added parameters for position and size. No matter where I want to draw them or what size, the proportions of their constituent shapes stay the same. I achieved this by making sure not only that their constituent shapes would resize appropriately, but also that the distances between shapes would scale proportionally. Here's a snippet of code from the fish() function defining the fish's lifelike tail:
triangle(x+98*scale, y+100*scale, 0, 100*scale, "darkorange")
I defined the triangle function earlier in the file. It takes 5 parameters: x position, y position, rotation (here 0), size, and color. The size, x position and y position all depend on the scale factor, a parameter of the fish() function. To make the tail appear in the right place it had to be 98 pixels to the right and 100 pixels up from the bottom of the circle, the point x,y. But if the scale went up then the 98 and 100 would have to go up proportionally. That's the derivation of the math there.
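The proportional-offset idea boils down to a tiny helper (the 98 and 100 offsets are the ones from my snippet above):

```python
def tail_position(x, y, scale):
    """Anchor point of the fish's tail triangle: offsets measured at
    scale 1 (98 right, 100 up from the point x, y) are multiplied by the
    same scale factor as the shapes, so the proportions never change."""
    return (x + 98 * scale, y + 100 * scale)
```

Doubling the scale doubles both the shape sizes and the distances between shapes, which is exactly what keeps the fish looking like a fish.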
Next I put some fish and seahorses together to make my underwater scene. Take a look:
I made this picture using the fish and seahorse functions, which in turn were made up of the triangle, rectangle and circle functions. The interesting part here was that I did not call on the fish and seahorse functions in the same file I defined them in. Instead I made a different file and used the import command to import the functions from my original file defining all these shapes. The import command makes all of the functions of that file available to me, but I don't have to see them. Thus I could program my underwater scene without clutter. The file where I scripted out this scene contained only 12 lines of code.
Next I drew a totally different picture, this one a scene from last summer: the Mojave Desert of California. I used the same process of defining shapes in my shapes.py file and using the shape functions in my main.py file. Here's the pic:
I also extended my code with a function to draw a random underwater scene. It didn't turn out very well because the fish and seahorses draw over one another in a jumbled mess. I would have used a for loop just like in the instructions, but I found that the random.random() function would spit out really small numbers, which would shrink my fish and seahorses to mere pixels. So instead of using a for loop I used a while loop, which allowed me more control over the state of the indexing number. Here's a snippet:
index = 1
while index < 5: # while the index is less than 5, the code below executes
x = random.random() # x is a random real number between 0 and 1
if x > 0.5:
index += 1 # adds 1 to the index
# this part then draws a fish of scale factor x in a random location
The index only increases if x is greater than 0.5, and fish only stop being drawn once the index reaches 5. Here's the result:
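Here's a runnable version of that loop, with the drawing call replaced by collecting the scale factors (and a seed so the run is repeatable):

```python
import random

random.seed(0)              # repeatable run for illustration
scales = []
index = 1
while index < 5:            # keep going until 4 fish have been accepted
    x = random.random()     # random real number between 0 and 1
    if x > 0.5:
        index += 1          # adds 1 to the index
        scales.append(x)    # stand-in for drawing a fish of scale factor x
```

Only scale factors above 0.5 ever get "drawn", so no fish shrinks to mere pixels.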
In this project I learned that it is organizationally attractive to put a bunch of similar functions in one .py file as a repository for other .py files to draw from. I also learned how to use a while loop to ensure certain conditions are fulfilled before the loop terminates.
|
OPCFW_CODE
|
PostgreSQL (Postgres) Jobs in Berkshire
Jobs 1 to 7 of 7
Reading, Berkshire -
Salary: £35000 - £50000 per annum + BenefitsPosted: Yesterday
Do you have solid experience in PHP / Perl / MySQL / PostgreSQL?... Knowledge of Python / XML / jQuery / MVC / Zend Framework would be desirable, but not essential... MySQL or PostgreSQL. RedHat & Debian Linux Distributions. Web Services (Json / Soap / RESTful) NoSQL databases (Memcache / Couchbase / MongoDb) Enthusiasm and problem solving skills... MVC Applications... jQuery / Ajax /...
Reading, Berkshire -
Salary: £45000 - £50000 per annum + benefitsPosted: 5 days ago
Responsibilities Your main responsibilities will include: Delivering high quality, unit-tested solution components using technologies such as RabbitMQ, Django, PostgreSQL, Redis, Cassandra and Kafka... DevOps activities using technologies such as Docker and Ansible... Experience of developing RESTful services using Django... Experience of working in a DevOps environment using technologies like...
Maidenhead, Berkshire -
Hitachi Consulting UK
Salary: From £50,000 to £90,000 per annum inc Bonus + BenefitsPosted: Yesterday
Python Developer, Django, Linux, LAMP, PostgreSQL, Flask, Ubuntu, RESTful APi's, Nginx and Uwsgi, Programming, Software Engineer, programming, Web Development, Mobile Application Development (Experience in ALL of the following NOT ESSENTIAL) wPython, Pillow, BeautifulSoup, NumPy, SciPy. Pygame, Pyglet, SQLAlchemy, Twisted, matplylib, pyGT / pyGtk or Scapy, Cassandra, GitLab, Ansible, Docker... Experience /...
Ascot, Berkshire -
Salary: £75000 per annum + Share optionsPosted: 5 days ago
Job Title: Technical Services Data Analyst (XML) Location: Slough... Define new XML datasets for future deployments... Strong understanding of the XML data structure of Patient Flow Systems. Strong XML analytical skills in organising, analysing significant amounts of data and scripts with attention to detail and accuracy. Strong experience in working...
Bluetown Online LTD
Salary: £25000 - £30000 per annumPosted: 22 days ago
Experience and Tech required: Extensive experience in NoSQL databases, especially Cassandra. Experience of PostgreSQL. Data Warehousing. Data Analysis. ETL and ETL Programing... Excellent Migration tools suite including experience in Shaping data for analysis and performance testing... Experience with Big Data collection technologies such as Spark, Solar. SAN (Storage Area Network)...
Salary: £65000.00 - £75000.00 per annum + BenefitsPosted: Yesterday
The System Administrator is a key function of the support team. They are responsible for the development and maintenance of key systems and infrastructure and will be required to maintain and support all key systems to ensure system reliability and performance. Must have proven experience in IT Support (1st & 2nd...
Newbury, Berkshire -
Russell King Associates
Salary: From £40,000 to £45,000 per annum Life Insurance, Company Pension SchePosted: 5 days ago
|
OPCFW_CODE
|
how to get dict of model objects keyed by field
Assuming I have Django model called 'Blog' with a primary key field 'id', is there a query I can run that will return a dictionary with keys of the id values indexing the Blog model instances?
in_bulk() seems like the kind of thing I want, but it requires a list of the specific id values in this case, e.g.
Blog.objects.in_bulk([1])
will give
{1: <Blog: Beatles Blog>}
The document says that if you pass an empty list you'll get an empty dictionary back, so is there any way I can get all values back?
It's just python
{x.pk:x for x in Blog.objects.all()}
EDIT:
Alb here, just adding that if you're using Python 2.6 or earlier you need to use this older style syntax:
dict((x.pk, x) for x in Blog.objects.all())
Thanks, I'm fairly new to python, what's the name of this feature so I can read more about it?
@Alb the {x.pk: x for x in ...} is called a dictionary comprehension. The Blog.objects.all() is a Django query to get all the rows
@Alb take a look at http://docs.python.org/2/tutorial/datastructures.html#dictionaries the very end of paragraph
@DmitryShevchenko Thanks, got caught out for a minute, as I was using python 2.6, I've edited your answer to give the solution for that too. Thanks again.
@DmitryShevchenko There's a profound reminder in "It's just python" to avoid limiting oneself to a framework's API. Sometimes it's easy to get lost in the Django docs and overlook a standard Python solution.
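To see the pattern outside Django, with plain objects standing in for model instances:

```python
class Blog:
    """Minimal stand-in for a Django model instance."""
    def __init__(self, pk, name):
        self.pk = pk
        self.name = name

blogs = [Blog(1, "Beatles Blog"), Blog(2, "Cheddar Talk")]
by_pk = {b.pk: b for b in blogs}   # same shape as the Django one-liner
```

The comprehension works on any iterable of objects with the attribute you want to key by.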
The id_list parameter of the in_bulk method is None by default, so just don't pass anything to it:
>>> Blog.objects.in_bulk()
{1: <Blog: Beatles Blog>, 2: <Blog: Cheddar Talk>, 3: <Blog: Django Weblog>}
In the result, the default key is the primary key. To override that use:
Blog.objects.in_bulk(field_name='<unique_field_name>')
NOTE: the key must be unique or you will get ValueError
FYI field_name is available in Django >=2.0 only https://docs.djangoproject.com/en/3.0/releases/2.0/#models
You can also try passing a ValuesListQuerySet object to the in_bulk method, like this:
blog_query = Blog.objects.values_list('pk', flat=True)
Blog.objects.in_bulk(blog_query)
Doesn't this make two DB hits?
|
STACK_EXCHANGE
|
#! /usr/bin/env python
"""
Extract and compare parameters from two ULog files
"""
from __future__ import print_function
import argparse
import sys
import re
from html import escape
from .core import ULog
#pylint: disable=unused-variable, too-many-branches
def get_defaults(ulog, default):
""" get default params from ulog """
assert ulog.has_default_parameters, "Log does not contain default parameters"
if default == 'system': return ulog.get_default_parameters(0)
if default == 'current_setup': return ulog.get_default_parameters(1)
raise Exception('invalid value \'{}\' for --default'.format(default))
def main():
"""Commande line interface"""
parser = argparse.ArgumentParser(description='Extract parameters from an ULog file')
parser.add_argument('filename1', metavar='file1.ulg', help='ULog input file')
parser.add_argument('filename2', metavar='file2.ulg', help='ULog input file')
parser.add_argument('-l', '--delimiter', dest='delimiter', action='store',
help='Use delimiter in CSV (default is \',\')', default=',')
parser.add_argument('-i', '--initial', dest='initial', action='store_true',
help='Only extract initial parameters. (octave|csv)', default=False)
parser.add_argument('-t', '--timestamps', dest='timestamps', action='store_true',
help='Extract changed parameters with timestamps. (csv)', default=False)
parser.add_argument('-f', '--format', dest='format', action='store', type=str,
help='csv|octave|qgc', default='csv')
parser.add_argument('output_filename', metavar='params.txt',
type=argparse.FileType('w'), nargs='?',
help='Output filename (default=stdout)', default=sys.stdout)
parser.add_argument('--ignore', dest='ignore', action='store_true',
help='Ignore string parsing exceptions', default=False)
parser.add_argument('-d', '--default', dest='default', action='store', type=str,
help='Select default param values instead of configured '
'values (implies --initial). Valid values: system|current_setup',
default=None)
args = parser.parse_args()
ulog_file_name1 = args.filename1
ulog_file_name2 = args.filename2
disable_str_exceptions = args.ignore
message_filter = []
if not args.initial: message_filter = None
ulog1 = ULog(ulog_file_name1, message_filter, disable_str_exceptions)
ulog2 = ULog(ulog_file_name2, message_filter, disable_str_exceptions)
ulog3 = ULog(ulog_file_name2, message_filter, disable_str_exceptions)
params1 = ulog1.initial_parameters
params2 = ulog2.initial_parameters
p2 = ulog3.initial_parameters
changed_params1 = ulog1.changed_parameters
changed_params2 = ulog2.changed_parameters
if args.default is not None:
params1 = get_defaults(ulog1, args.default)
args.initial = True
changed_param1_list = {i[1]: i[2] for i in changed_params1}
param_keys1 = sorted(params1.keys())
param_keys1_minusinflight = sorted(params1.keys() - changed_param1_list.keys())
param_keys2 = sorted(params2.keys())
k2 = sorted(p2.keys())
delimiter = args.delimiter
output_file = args.output_filename
version1 = ''
version2 = ''
if 'boot_console_output' in ulog1.msg_info_multiple_dict:
console_output = ulog1.msg_info_multiple_dict['boot_console_output'][0]
escape(''.join(console_output))
version = re.search('Build datetime:',str(console_output))
if version is not None:
version1 = str(console_output)[version.end():version.start()+36]
else:
version1 = ' Unknown'
if 'boot_console_output' in ulog2.msg_info_multiple_dict:
console_output = ulog2.msg_info_multiple_dict['boot_console_output'][0]
escape(''.join(console_output))
version = re.search('Build datetime:',str(console_output))
if version is not None:
version2 = str(console_output)[version.end():version.start()+36]
else:
version2 = ' Unknown'
if (version1 != ' Unknown') and (version2 != ' Unknown') and (version1 != version2):
output_file.write('\n')
output_file.write('New Firmware \n')
output_file.write('Build:')
output_file.write(version1)
output_file.write('\n')
output_file.write('↓')
output_file.write('\n')
output_file.write('Build:')
output_file.write(version2)
output_file.write('\n')
elif (version1 == ' Unknown') or (version2 == ' Unknown'):
output_file.write('\n')
output_file.write('Unknown Firmware: no version information recorded.\n')
output_file.write('Build:')
output_file.write(version1)
output_file.write('\n')
output_file.write('↓')
output_file.write('\n')
output_file.write('Build:')
output_file.write(version2)
output_file.write('\n')
if args.format == "csv":
if (set(param_keys2) - set(param_keys1)) or (set(param_keys1) - set(param_keys2)):
output_file.write('\n')
for param_key2 in set(param_keys2) - set(param_keys1):
output_file.write('New: ')
output_file.write(param_key2)
output_file.write(', ')
output_file.write(str(round(params2[param_key2],6)))
output_file.write('\n')
for param_key1 in set(param_keys1) - set(param_keys2):
output_file.write('Deleted: ')
output_file.write(param_key1)
output_file.write('\n')
output_file.write('\nChanged Pre-flight:\n')
for param_key1 in param_keys1_minusinflight:
for param_key2 in param_keys2:
if (param_key1 == param_key2) and (param_key1 != 'LND_FLIGHT_T_LO') and (param_key1 != 'LND_FLIGHT_T_HI') and (param_key1 != 'COM_FLIGHT_UUID'):
if isinstance(params1[param_key1],float):
if (abs(params1[param_key1] - params2[param_key2]) > 0.001):
output_file.write(param_key1)
output_file.write(': ')
output_file.write(str(round(params1[param_key1],5)))
output_file.write(' -> ')
output_file.write(str(round(params2[param_key2],5)))
output_file.write('\n')
else:
if (params1[param_key1] != params2[param_key2]):
output_file.write(param_key1)
output_file.write(': ')
output_file.write(str(params1[param_key1]))
output_file.write(' -> ')
output_file.write(str(params2[param_key2]))
output_file.write('\n')
matched_list = []
#last_matched_val = 0.0
output_file.write('\nChanged In-flight:\n')
for k2 in p2:
for i in changed_params2:
if (k2 == i[1]) and (k2 != 'LND_FLIGHT_T_LO') and (k2 != 'LND_FLIGHT_T_HI') and (k2 != 'COM_FLIGHT_UUID'):
if isinstance(p2[k2],float):
if (abs(p2[k2] - i[2]) > 0.001):
p2[k2] = round(i[2],5)
if not (i[1] in matched_list):
matched_list.append(i[1])
else:
if (p2[k2] != i[2]):
p2[k2] = i[2]
if not (i[1] in matched_list):
matched_list.append(i[1])
for i in matched_list:
output_file.write(i)
output_file.write(': ')
output_file.write(str(round(params2[i],5)))
output_file.write(' -> ')
output_file.write(str(round(p2[i],5)))
output_file.write('\n')

# Entry point: the excerpt defines main() but never calls it, so invoke it
# when run as a script.
if __name__ == '__main__':
    main()
|
STACK_EDU
|
iMacros doesn't recognize facebook comment textbox
I've been trying to automate a process of replying to comments on facebook. I have iMacros click on the "Reply" button successfully, but when it comes to the comment reply itself or uploading a photo in the comment, it doesn't recognize the tag at all. I actually have to manually click on the comment or photo button once and cancel it, in order for facebook to change the html state of the input to something imacros sees. I don't know why this is happening.
I've tried conventional recording mode without using ID selectors, and I've also tried conventional recording with the complete HTML tag, but the main issue is that it just doesn't see that comment section.
I know how to code in javascript and imacros, so if a JavaScript solution is out there then that would help.
Try using experimental recording and the EVENT command, and uncheck "Favor element IDs in selectors". Also, when you use the complete HTML tag, remove the unnecessary attributes.
Facebook uses randomized HTML attributes such as IDs, so that will "confuse" scripts.
If that fails, then try http://wiki.imacros.net/XPATH .
I did a little digging into this matter and came up with some good results.
The following iMacros code will reply to the comment and add a picture too.
All you have to do is replace "C:\1.jpg" on line 16 of the code with your image location, and edit the text "nice" on line 23 with your own text. You can also use CSV files for comments, and I can guide you through that if you want.
I tried this and it worked perfectly; just get a post where comment replies are allowed and start testing.
If you have any questions about the code, just ask and I will try my best to reply.
SET !EXTRACT_TEST_POPUP NO
SET !ERRORIGNORE YES
SET !EXTRACT NULL
TAB T=1
TAG POS=1 TYPE=form ATTR=CLASS:commentable_item<SP>autoexpand_mode EXTRACT=HTM
Set !VAR4 Eval("var exp = '{{!EXTRACT}}'.match(/ id=.(.*?)\" /); exp[1];")
SET !EXTRACT NULL
TAG POS={{!LOOP}} TYPE=a ATTR=CLASS:UFIReplyLink
TAG POS={{!LOOP}} TYPE=UL ATTR=CLASS:<SP>UFIReplyList EXTRACT=HTM
Set !VAR1 Eval("var exp = '{{!EXTRACT}}'.match(/UFICommentPhotoIcon.(.*?)class/); exp[1];")
Set !VAR1 Eval("var exp = '{{!VAR1}}'.match(/data-reactid=.(.*?)\"/); exp[1];")
TAG POS=1 TYPE=i ATTR=data-reactid:{{!VAR1}}
SET !EXTRACT NULL
TAG POS=1 TYPE=i ATTR=data-reactid:{{!VAR1}} EXTRACT=HTM
Set !VAR2 Eval("var exp = '{{!EXTRACT}}'.match(/ id=.(.*?)\"/); exp[1];")
TAG POS=1 TYPE=INPUT:FILE FORM=ID:{{!VAR4}} ATTR=id:{{!VAR2}} CONTENT=C:\1.jpg
SET !EXTRACT NULL
TAG POS={{!LOOP}} TYPE=textarea ATTR=title:Write<SP>a<SP>reply...
TAG POS={{!LOOP}} TYPE=textarea ATTR=title:Write<SP>a<SP>reply... EXTRACT=HTM
Set !VAR3 Eval("var exp = '{{!EXTRACT}}'.match(/ id=.(.*?)\"/); exp[1];")
wait seconds=3
TAG POS={{!LOOP}} TYPE=textarea ATTR=title:Write<SP>a<SP>reply... CONTENT=nice
EVENTS TYPE=KEYPRESS SELECTOR="#{{!VAR3}}" KEYS="[13]"
|
STACK_EXCHANGE
|
/**
* This script is run as part of the "Build Upgrade Testing" GitHub workflow
* (.github/workflows/upgrade-test.yaml) to generate upgrade data for testing
* Rancher Desktop upgrades.
*
* This will push changes to the "gh-pages" branch (for the upgrade manifest
* JSON file), as well as publish releases (or update existing ones) for the
* upgrade target.
*
* Note that this script intentionally blacklists the upstream repository (as
* defined in package.json) because it changes releases.
*
* Inputs are all in environment variables:
* GITHUB_TOKEN: GitHub access token.
* GITHUB_REPOSITORY: The GitHub owner/repository (from GitHub Actions).
* GITHUB_SHA: Commit hash (if creating a new release).
* GITHUB_ACTOR: User that triggered this, github.actor
* RD_SETUP_EXE: The installer (exe file) to upload.
* RD_SETUP_MSI: The installer (msi file) to upload.
* RD_MACX86_ZIP: The macOS (x86_64) zip archive to upload.
* RD_MACARM_ZIP: The macOS (aarch64) zip archive to upload.
* RD_BUILD_INFO: Build information ("latest.yml" from electron-builder)
* RD_OUTPUT_DIR: Checkout of `gh-pages`, to be updated.
*/
import crypto from 'crypto';
import fs from 'fs';
import path from 'path';
import { Octokit } from 'octokit';
import yaml from 'yaml';
import { simpleSpawn } from './simple_process';
import { defined } from '@pkg/utils/typeUtils';
/** Read input from the environment; throws an error if unset. */
function getInput(name: string) {
const result = process.env[name];
if (!result) {
throw new Error(`Could not read input; \$${ name } is not set correctly.`);
}
return result;
}
/** Given an input variable that expects a single file, return its path (descending one level into a directory if needed). */
async function getInputFile(name: string) {
const inputPath = getInput(name);
const stat = await fs.promises.stat(inputPath);
if (!stat.isDirectory()) {
return inputPath;
}
for (const dirent of await fs.promises.readdir(inputPath, { withFileTypes: true })) {
if (dirent.isFile()) {
return path.join(inputPath, dirent.name);
}
}
throw new Error(`Could not find input file for ${ name }`);
}
/**
* assetInfo describes information we need about one asset.
*/
type assetInfo = {
/** filepath is the (full) path to the asset file. */
filepath: string;
/** filename is the base name of the asset. */
filename: string;
/** length of the file */
length: number;
/** checksum is the checksum file contents of the file. */
checksum: string;
/** checksumName is the base name of the checksum. */
checksumName: string;
};
/**
* Given environment name, write checksum contents for the file.
* @param name Name of the environment variable that holds the file path.
* @returns File name and checksum data.
*/
async function getChecksum(name: string, filenameOverride?: string): Promise<assetInfo> {
const filepath = await getInputFile(name);
const outputName = filenameOverride || path.basename(filepath);
const stat = await fs.promises.stat(filepath);
const input = fs.createReadStream(filepath);
const hasher = crypto.createHash('sha512');
const promise = new Promise((resolve) => {
input.on('end', resolve);
});
input.pipe(hasher).setEncoding('hex');
await promise;
await new Promise<void>((resolve) => {
hasher.end(() => {
resolve();
});
});
return {
filepath,
filename: outputName,
length: stat.size,
checksum: `${ hasher.read() } ${ outputName }`,
checksumName: `${ outputName }.sha512sum`,
};
}
async function getOctokit(): Promise<Octokit> {
const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
try {
await octokit.rest.meta.getZen();
} catch (ex) {
console.error(`Invalid credentials: please check GITHUB_TOKEN is set. ${ ex }`);
process.exit(1);
}
return octokit;
}
async function updateRelease(octokit: Octokit, owner: string, repo: string, tag: string) {
const files = {
exe: await getChecksum('RD_SETUP_EXE', `Rancher.Desktop.Setup.${ tag }.exe`),
msi: await getChecksum('RD_SETUP_MSI', `Rancher.Desktop.Setup.${ tag }.msi`),
macx86: await getChecksum('RD_MACX86_ZIP', `Rancher.Desktop-${ tag }-mac.x86_64.zip`),
macarm: await getChecksum('RD_MACARM_ZIP', `Rancher.Desktop-${ tag }-mac.aarch64.zip`),
};
console.log(`Updating release with files:\n${ yaml.stringify(files) }`);
let release: Awaited<ReturnType<Octokit['rest']['repos']['createRelease']>>['data'] | undefined;
try {
({ data: release } = await octokit.rest.repos.getReleaseByTag({
owner, repo, tag,
}));
} catch (ex) {
console.log(`Creating new release for ${ tag }: ${ ex }`);
({ data: release } = await octokit.rest.repos.createRelease({
owner,
repo,
name: tag,
tag_name: tag,
target_commitish: getInput('GITHUB_SHA'),
draft: true,
}));
}
if (!release) {
throw new Error(`Could not get or create release for ${ tag }`);
}
console.log(`Got release info for ${ release.name }`);
await Promise.all(Object.values(files).map(async(info) => {
if (!release) {
throw new Error(`Could not get or create release for ${ tag }`);
}
const checksumAsset = release.assets.find(asset => asset.name === info.checksumName);
if (checksumAsset && release.assets.find(asset => asset.name === info.filename)) {
const existingChecksum = (await octokit.rest.repos.getReleaseAsset({
owner,
repo,
asset_id: checksumAsset.id,
headers: { accept: 'application/octet-stream' },
})) as unknown as string;
if (existingChecksum.trim() === info.checksum.trim()) {
console.log(`Skipping ${ info.filename }, checksum matches`);
return;
}
}
await Promise.all([info.checksumName, info.filename]
.map(name => release?.assets.find(asset => asset.name === name))
.filter(defined)
.map((asset) => {
console.log(`Deleting obsolete asset ${ asset.name }`);
return octokit.rest.repos.deleteReleaseAsset({
owner, repo, asset_id: asset.id,
});
},
));
await Promise.all([
octokit.rest.repos.uploadReleaseAsset({
owner,
repo,
release_id: release.id,
name: info.checksumName,
data: info.checksum,
}),
// We need a custom request for the main file, as we need to stream it
// from a file stream.
octokit.request({
method: 'POST',
url: release.upload_url,
headers: {
'Content-Length': info.length,
'Content-Type': 'application/octet-stream',
},
data: fs.createReadStream(info.filepath),
name: info.filename,
}),
]);
}));
console.log(`Release ${ release.name } updated.`);
return release.html_url;
}
async function updatePages(tag: string) {
const response = {
versions: [{
Name: tag,
ReleaseDate: (new Date()).toISOString(),
Tags: ['latest'],
}],
requestIntervalInMinutes: 1,
};
console.log('Updating gh-pages...');
await fs.promises.writeFile(path.join(getInput('RD_OUTPUT_DIR'), 'response.json'),
JSON.stringify(response),
'utf-8');
await simpleSpawn('git',
[
'-c', `user.name=${ getInput('GITHUB_ACTOR') }`,
'-c', `user.email=${ getInput('GITHUB_ACTOR') }@users.noreply.github.com`,
'commit', `--message=Automated update to ${ tag }`, 'response.json',
], {
stdio: ['ignore', 'inherit', 'inherit'],
cwd: getInput('RD_OUTPUT_DIR'),
});
await simpleSpawn('git',
['push'], {
stdio: ['ignore', 'inherit', 'inherit'],
cwd: getInput('RD_OUTPUT_DIR'),
});
console.log('gh-pages updated.');
}
async function main() {
console.log('Reading configuration information...');
const buildInfoPath = await getInputFile('RD_BUILD_INFO');
const [owner, repo] = getInput('GITHUB_REPOSITORY').split('/');
const packageURL = new URL(JSON.parse(await fs.promises.readFile('package.json', 'utf-8')).repository.url);
const [packageOwner, packageRepo] = packageURL.pathname.replace(/\.git$/, '').split('/').filter(x => x);
const buildInfo = yaml.parse(await fs.promises.readFile(buildInfoPath, 'utf-8'));
const tag: string = buildInfo.version.replace(/^v?/, 'v');
console.log(`Publishing ${ tag } from ${ owner }/${ repo } (upstream is ${ packageOwner }/${ packageRepo })...`);
if (packageOwner === owner && packageRepo === repo) {
console.error(`Cowardly refusing to touch ${ packageURL }`);
process.exit(1);
}
const octokit = await getOctokit();
const releaseURL = await updateRelease(octokit, owner, repo, tag);
const summaryPath = process.env.GITHUB_STEP_SUMMARY;
await updatePages(tag);
if (summaryPath) {
await fs.promises.writeFile(
summaryPath,
`# Usage instructions
1. Publish the release at ${ releaseURL }
2. Configure \`resources\\app-update.yml\` to contain:
\`\`\`yaml
upgradeServer: https://${ owner }.github.io/${ repo }/response.json
owner: ${ owner }
repo: ${ repo }
\`\`\`
`.split(/\r?\n/).map(s => s.trim()).filter(s => s).join('\n'),
{ encoding: 'utf-8' });
}
}
main().catch((err) => {
console.error(err);
process.exit(1);
});
|
STACK_EDU
|
I highly recommend that anyone interested in open source, open education, collaboration between universities, the future of higher education, Drupal, Moodle – innovation in general – watch the video presentation by Zach Chandler (Stanford University), Brian Wood (UC Berkeley), and Shaun DeArmond at DrupalCon Portland this week. (at the end of this blog post)
So incredibly sad to have missed DrupalCon this year; hoping for DrupalCon Austin in 2014.
“Unconsortium: The next phase of collaboration in Higher Ed” is a project very dear to my hippy-ed-tech-open-source heart. Every compelling piece feels like an echo of the goals we sought by launching Open Royal Roads.
We are all laboring in isolation on our various campuses, solving the same use cases over and over again – we’re not sharing code in any meaningful way with each other…. This is counter to academic communities work… we want to stand on the shoulders of our peers instead of reinventing the wheel over and over again.
Yes! And after launching Open RRU, within a week we found peers at the University of Montreal ready to contribute to two separate modules we were sharing. Within a month, they shared back our Moodle 2.1 module, upgraded to 2.3. The speed of sharing eclipses isolated development.
Perfect is the enemy of the good.
When we decided to release our RRU Moodle & Drupal code on GitHub, we did so knowing the code wasn't perfect. What we did know was that we wanted to remove customizations to core (Moodle) by having our code accepted into core, and we wanted to contribute back more often, but for various reasons (time and resources) we were holding onto code far too long. Waiting for perfection? Maybe. Release early, release often.
We share the same use cases.
Using Drupal Features allows us to share our solutions for those use cases, but I believe this project has the potential to overflow into an entirely new (and much-needed) ecosystem of sharing: sharing of ideas, design concepts, and research in technology development for higher education.
If you are working with Drupal in higher education you can REGISTER HERE to start participating in EDUDU. If you have ideas about how to best collaborate in this new community, we would appreciate some feedback HERE.
On a personal note, for those that don't already know, I have recently left RRU to pursue new challenges, in large part because I want to more fully participate in open initiatives (like this) while being challenged as a developer. RRU was very good to me; I learned a lot and am already missing so many colleagues, but it was the right time. If plans go well, I hope to be more involved in this project. I'll of course credit my ongoing contribution to Mozilla as being a life-changing influence on all of these goals. Anyway! Enjoying some time exploring new possibilities, and of course some extra time with my girls :)
|
OPCFW_CODE
|
Privacy questions on connected learning with students
Well, great session from Alec on Connected Learning. It brings a big question in my mind about privacy issue, that I've had for a while.
In BC, we have strict privacy laws, which prevent teachers and school boards from putting students' personal information on a server outside Canada (follow this link, article 30.1). This makes our life really complicated. It basically means that we cannot legally let students freely use services like Twitter, Blogger, Tumblr, Google Apps and other social media under their personal names for school use. As teachers, it also means we cannot use tools like teacherease for the gradebook or even managebac to manage IB classes, because these companies are based in the US, and their servers are as well. I know of many teachers, schools and school boards using these services, either because they don't know about the law, they have parental approval, they ignore it, or they have found another way around it (that I don't know of...).
On the other hand, I understand the reasoning behind this law and its importance. For us Canadians, the US Patriot Act is quite scary, and the fact that servers can be breached pretty easily by hackers to retrieve personal information makes you wonder about our actual privacy on the internet. To respect this law, our school board provides blogging and wiki platforms, similar to what is found on the web, that we can use with our classes and students. They host these servers in our offices and have staff directly managing them. Unfortunately, those services compare poorly with the functionality of Google Apps and other services, and it's definitely not Twitter. (It would actually be pointless to have a privately hosted Twitter service anyway, since what's nice about it is its openness to the world.) So what is offered by our school board is limited compared to what is available on the web.
At the same time, students create their own accounts on these services and often use them without their parents' knowledge. As an example, I compiled a list of over 40 students at my school with Twitter accounts, created even before we talked about it in my info-tech class. So we are at this crossroads where I cannot ask students to create an account for school use, but they can still create an account on their personal time for their own use (unless I get parental approval, but that's a big undertaking).
Also, what is personal information? Do we have to consider a homework assignment personal information? If we answer yes to this question, it means that a student cannot publish an assignment on the internet under their name for school use. My personal answer to this question is this: address, phone number and grades are the most important pieces of personal information about a student. Those are my primary concern, and I want them on a Canadian server. But the rest, really?
I personally do my best to respect this law. When I learned about it many years ago, I changed my hosting service for a Canadian one (I hosted and administered my own Moodle site for a while) and made sure their servers were in Canada. Also, when I have students use web 2.0 tools, they use pseudonyms (not their real names) or they share an account I created. But I find it difficult not to use web 2.0 tools with my classes, especially my info-tech classes, and not under their own names. If I want to open my class and have them learn from the world, they need their own identity on the web, and it needs to be as close to themselves as they are ready to share.
Do you share these concerns? How do you live with it? Shall we lobby for a change in the law, or should we find ways around it?
|
OPCFW_CODE
|
Once again, STF-3d set out to 3D-scan both tree trunks and (volcanic) rocks and cliffs at the coasts of Mauritius to enable you to reproduce the nature of this fabulous island (and many other tropical islands) as correctly as possible.
In addition to the meshes, we give you a complete landscape Material, a small ocean water shader, our own wind shading method, and "Impostor-Baker" billboards.
This Asset Pack is going to see further improvements; we will add and edit content depending on feedback we get, and we hope that the four months of hard work that went into this pack do show.
Mauritius - home to some of the world's rarest tropical plants and animals, including Filao and Teak trees, but also more versatile flora such as Coconuts, Bananas, Dracaenas, etc.
Added lots of additional content (textures, sugarcane, shaders) and polished existing content (all updates processed as of Feb. 12 2019, adding 4.17 version support as well).
- 3D Scanned and hand-modeled assets (trees, ground plants, boulders, grass)
- Heightmap-Generated Landscape
- Ocean water shader
- Vertex-Painted Wind Shader
- Lots of recognizable Mauritius-Based Tree and other Plant Life
- On-Site 3D-Scans from the coast of Mauritius
- 4096 x 4096 (4k) Resolution for Landscape and most of the Foliage Asset Diffuse Textures
- 2048 x 2048 - 4096 x 4096 for Boulder Assets
- 1024 x 1024 - 2048 x 2048 (clamped) for Billboard Textures
- Other Textures between 256 x 1024 and 8192 x 8192 Resolution.
Collision: Auto-Generated (simple) for Boulders and other solid objects; no collision for foliage.
Vertex Count on LOD0:
- Between 69 for smallest boulder up to 20259 for largest Teak Tree; on average, between 70 and 3800 for Boulders, between 2000 and 9000 for Coconuts, and 7000 and 20000 for very large Trees (Filao and Teak).
- Yes, between 3 and 4 LODs depending on asset complexity; auto-generated, but hand-made where necessary (including 9-sided Billboards made with EPIC Impostor Baker Tool).
Number of Meshes: 61
Number of Materials and Material Instances: 78 and 8
Number of Textures: 194
Supported Development Platforms: All Windows and Mac Based
Supported Target Build Platforms: All Windows and Mac Based
|
OPCFW_CODE
|
[R] Make 2nd col of 2-col df into header row of same df then adjust col1 data display
jrkrideau at inbox.com
Thu Dec 18 16:05:20 CET 2014
Of course, but why? As Brian S says you have not given us enough information to know exactly what you are after.
Have a look at https://github.com/hadley/devtools/wiki/Reproducibility or http://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example for some information on how to form a question for the list.
It is good that you provided some data but it is better to use dput() (see links above or ?dput) to supply the data as different R users have different settings on their systems and may not read that data in the same way.
Note that I have simplified your incredibly verbose names and put everything into lower case (see ?tolower) just to make life easier. Because R is case-sensitive, it is usually easier to keep to lower case as much as possible, particularly when posting to the list, and to use simple variable names where the actual variables are likely to be meaningless to the reader; long upper-case names just make for more typing.
In any case, here is a quick and dirty semi-solution using the reshape2 package, which I imagine you will have to install using install.packages("reshape2").
Depending on exactly what you need to know there may be, as Brian S says, many different and better approaches. While we really don't need the actual variable names, we need an overall idea of what you are doing in substantive terms and what the final results should be.
Anyway welcome to the R-help list
dat1 <- structure(list(id = structure(c(5L, 1L, 4L, 5L, 5L, 2L, 3L, 6L,
7L, 5L), .Label = c("1005317", "1007183", "1008833", "1012281",
"1015285", "1015315", "1015322"), class = "factor"), type = structure(c(1L,
2L, 2L, 2L, 4L, 3L, 3L, 3L, 5L, 5L), .Label = c("as.age", "hs.hours",
"ot.overtime", "rk.records_cl", "v.poster_other"), class = "factor")), .Names = c("id",
"type"), row.names = c(NA, -10L), class = "data.frame")
dcast(dat1, id ~ type)
#=======end code =======
Kingston ON Canada
> -----Original Message-----
> From: bcrombie at utk.edu
> Sent: Wed, 17 Dec 2014 19:15:14 -0800 (PST)
> To: r-help at r-project.org
> Subject: [R] Make 2nd col of 2-col df into header row of same df then
> adjust col1 data display
> # I have a dataframe that contains 2 columns:
> CaseID <- c('1015285',
> Primary.Viol.Type <- c('AS.Age',
> PViol.Type.Per.Case.Original <- data.frame(CaseID,Primary.Viol.Type)
> # CaseID’s can be repeated because there can be up to 14
> per CaseID.
> # I want to transform this dataframe into one that has 15 columns, where
> first column is CaseID, and the rest are the 14 primary viol. types. The
> CaseID column will contain a list of the unique CaseID’s (no replicates)
> for each of their rows, there will be a “1” under a column corresponding
> a primary violation type recorded for that CaseID. So, technically,
> could be zero to 14 “1’s” in a CaseID’s row.
> # For example, the row for CaseID '1015285' above would have a “1” under
> “AS.Age”, “HS.Hours”, “RK.Records_CL”, and “V.Poster_Other”, but have
> under the rest of the columns.
> PViol.Type <- c("CaseID",
> PViol.Type.Columns <- t(data.frame(PViol.Type)
> # What is the best way to do this in R?
> View this message in context:
> Sent from the R help mailing list archive at Nabble.com.
> R-help at r-project.org mailing list -- To UNSUBSCRIBE and more, see
> PLEASE do read the posting guide
> and provide commented, minimal, self-contained, reproducible code.
More information about the R-help
|
OPCFW_CODE
|
# -*- coding: utf-8 -*-
"""Untitled4.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1jf4Y6d9_9hy0bUGm0I-xlzdKOJO0YVag
"""
import numpy as np
import matplotlib.pyplot as plt
def perpendicular(x_val, y_val):
"""
This function gives the intercepts and slope of the perpendicular bisector of a given line segment.
Parameters
----------
x_val, y_val : coordinates of the two endpoints of the line segment (x_val = [x1, x2], y_val = [y1, y2]).
Return
----------
cx, cy : intercepts of the perpendicular bisector.
slope : slope of perpendicular bisector.
"""
slope = (y_val[1] - y_val[0])/(x_val[1] - x_val[0])
q = (y_val[1] + y_val[0])/2
p = (x_val[1] + x_val[0])/2
slope = -1/slope
cy = -slope * p + q
cx = p - q/ slope
return cx, cy, slope
def slope(x_val, y_val):
"""
This function returns the slope of a line segment.
Parameters
----------
x_val, y_val : coordinates of the two endpoints of the line segment (x_val = [x1, x2], y_val = [y1, y2]).
Return
----------
slope of line segment.
"""
return ((y_val[1]-y_val[0])/(x_val[1]-x_val[0]))
def base_circle_centre(m, y, c):
"""
This function returns the intersection of perpendicular bisector with the base line.
Parameters
----------
m : slope
y : y-cooordinate of the base line.
c : y-intercept of the perpendicular bisector.
"""
return (y-c)/m
def circle_intersection(x_val, y_val, x1, y1, y0, r):
"""
This function returns the leftmost point of the intersection of the line starting from A\' parallel to O\'A with the circle.
Parameters
----------
x_val, y_val : coordinates of the two points of the line segments(x_val = [x1, x2], y_val = [y1, y2]).
x1, y1 : centre of the circle
y0 : y-intercept of the required line
r : radius of the circle
Return
----------
xf, yf : coordinates of the intersection
deg: angle between y-axis and the line joining origin and (xf, yf).
"""
m0 = slope([x_val[0], x_val[1]], [y_val[0], y_val[1]])
a = m0*(y0 - y1) - x1
c = m0*m0 + 1
b = np.sqrt(np.square(a)-(x1*x1 + (y0-y1)**2 - r*r)*c)
xp1 = (-a + b)/c
xp2 = (-a - b)/c
if xp1 < xp2:
xf = xp1
else:
xf = xp2
yf = m0*xf + y0
deg = np.arccos(yf/np.sqrt(yf**2 + xf**2))*180/np.pi
return xf, yf, deg
def plot_circle_diagram(ax, i_A=11, i_B=100, pfA=0.2, pfB=0.4, w_o=18920, w_sv=27172.17, scaler = 1, x=0.5):
"""
This function draws the circle diagram of the induction motor based on the values entered.
Parameters
----------
ax : Matplotlib axis on which the diagram will be drawn.
i_A : no load current.
i_B : short circuit current.
pfA : power factor of no load test.
pfB : power factor of short circuit test.
w_o : power rating.
w_sv : power consumed.
x : rated cu loss factor.
Return
--------
None, the function draws the diagram on the axis in the window.
"""
    org = [0, 0]  # origin; the scaler parameter is applied to both currents below
iA = np.array([i_A * (np.sqrt(1-np.square(pfA))), i_A * pfA]) / scaler
iB = np.array([i_B * (np.sqrt(1-np.square(pfB))), i_B * pfB]) / scaler
lim = 1.5 * np.sqrt(iB[0]**2 + iB[1]**2)
ax.set_xlim([0, lim])
ax.set_ylim([0, lim])
ax.plot([org[0], iA[0]],[org[1], iA[1]])
ax.plot([org[0], iB[0]],[org[1], iB[1]])
ax.plot([iA[0], iB[0]],[iA[1], iB[1]])
ax.annotate('O\'', (iA[0], iA[1]))
ax.annotate('A', (iB[0], iB[1]))
ax.axhline(y= iA[1], xmin=iA[0]/lim, xmax=1, linestyle=':')
cx, cy, m = perpendicular([iA[0], iB[0]], [iA[1], iB[1]])
ax.plot([0, cx], [cy, 0], linestyle="--")
y1 = iA[1]
x1 = cx * (1 - (y1/cy))
    r = x1 - iA[0]  # radius of the circle
c = plt.Circle((x1, y1), radius= r, fill=False)
ax.add_patch(c)
ax.annotate('B', (iA[0]+2*r, iA[1]))
ax.annotate('C', (x1, y1))
ax.annotate('D', (iB[0], iA[1]))
ax.axvline(x=iB[0], ymax=iB[1]/lim)
ax.axvline(x=iB[0], ymin=iB[1]/lim, ymax=(iB[1]*(1+(w_o/w_sv)))/lim)
ax.plot([iA[0], iB[0]],[iA[1], (iA[1] + iB[1])*x])
ax.annotate('E', (iB[0], (iB[1]+iA[1])*x))
y0 = -slope([iA[0], iB[0]], [iA[1], iB[1]])*iB[0] + iB[1]*(1+(w_o/w_sv))
ax.plot([0,iB[0]], [y0, iB[1]*(1+(w_o/w_sv))])
ax.annotate('A\'', (iB[0],iB[1]*(1+(w_o/w_sv))))
xp, yp, deg = circle_intersection([iA[0], iB[0]], [iA[1], iB[1]], x1, y1, y0, r)
ax.plot([org[0], xp], [org[0], yp])
ax.axvline(x=xp, ymax= yp/lim)
ax.annotate('P', (xp,yp))
xq = xp
yq = ((iB[1]-iA[1])/(iB[0]-iA[0]))*(xp-iA[0])+iA[1]
ax.annotate('Q', (xq,yq))
xr = xp
yr = (((iB[1]-iA[1])*(1-x))/(iB[0]-iA[0]))*(xp-iA[0])+iA[1]
ax.annotate('R', (xr,yr))
ax.annotate('S', (xp,iA[1]))
ax.annotate('T', (xp,0))
iL = round(np.sqrt(xp**2 + yp**2), 2)
slip = round((yq-yr)/(yp-yr),3)
eff = round((yp-yq)/yp, 4) * 100
pf = round(yp/iL,2)
xt = lim - 40
yt = lim - 5
ax.text(xt, yt, 'Load Current = '+str(iL)+' A', size = 10)
ax.text(xt, yt-5, 'Slip = '+str(slip), size = 10)
ax.text(xt, yt-10, 'Efficiency = '+str(eff) + '%', size = 10)
ax.text(xt, yt-15, 'Power Factor = '+str(pf), size = 10)
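The geometric helpers above can be sanity-checked in isolation. Below is a small standalone sketch that restates the same perpendicular-bisector and line-circle-intersection formulas using only the standard library, with hand-checkable test cases (illustrative only, independent of the plotting code):

```python
import math

def perpendicular_bisector(x_val, y_val):
    """Intercepts and slope of the perpendicular bisector of a segment."""
    seg_slope = (y_val[1] - y_val[0]) / (x_val[1] - x_val[0])
    p = (x_val[0] + x_val[1]) / 2  # midpoint x
    q = (y_val[0] + y_val[1]) / 2  # midpoint y
    slope = -1 / seg_slope         # negative reciprocal
    cy = -slope * p + q            # y-intercept
    cx = p - q / slope             # x-intercept
    return cx, cy, slope

def line_circle_leftmost(m0, y0, x1, y1, r):
    """Leftmost x where y = m0*x + y0 meets the circle centred at (x1, y1)."""
    # Substituting the line into the circle equation gives a quadratic in x.
    a = m0 * (y0 - y1) - x1
    c = m0 * m0 + 1
    b = math.sqrt(a * a - (x1 * x1 + (y0 - y1) ** 2 - r * r) * c)
    return min((-a + b) / c, (-a - b) / c)

# Segment (0,0)-(2,2): bisector is y = -x + 2, so both intercepts are 2.
print(perpendicular_bisector([0, 2], [0, 2]))  # (2.0, 2.0, -1.0)

# Horizontal line y = 0 through the unit circle at the origin: leftmost x = -1.
print(line_circle_leftmost(0, 0, 0, 0, 1))  # -1.0
```
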
|
STACK_EDU
|
<?php
namespace app\models;
use Yii;
/**
* This is the model class for table "domicilio".
*
* @property integer $iddomicilio
* @property integer $idalumno
* @property string $calle
* @property integer $numcasa
* @property integer $piso
* @property string $dpto
* @property string $barrio
* @property integer $idlocalidad
* @property integer $iddistancia
* @property string $depto
* @property integer $codpost
* @property integer $codareaT
* @property string $telefono
* @property integer $codareaC
* @property integer $celular
* @property integer $convive
*/
class Domicilio extends \yii\db\ActiveRecord
{
/**
* @inheritdoc
*/
public static function tableName()
{
return 'domicilio';
}
/**
* @inheritdoc
*/
public function rules()
{
return [
[['idalumno'], 'required'],
[['idalumno', 'numcasa', 'piso', 'idlocalidad', 'iddistancia', 'codpost', 'codareaT', 'codareaC', 'celular', 'convive'], 'integer'],
[['calle'], 'string', 'max' => 120],
[['dpto'], 'string', 'max' => 4],
[['barrio'], 'string', 'max' => 20],
[['depto'], 'string', 'max' => 40],
[['telefono'], 'string', 'max' => 10]
];
}
/**
* @inheritdoc
*/
public function attributeLabels()
{
return [
'iddomicilio' => 'Iddomicilio',
'idalumno' => 'Idalumno',
'calle' => 'Calle',
'numcasa' => 'Numcasa',
'piso' => 'Piso',
'dpto' => 'Dpto',
'barrio' => 'Barrio',
'idlocalidad' => 'Idlocalidad',
'iddistancia' => 'Iddistancia',
'depto' => 'Depto',
'codpost' => 'Codpost',
'codareaT' => 'Codarea T',
'telefono' => 'Telefono',
'codareaC' => 'Codarea C',
'celular' => 'Celular',
'convive' => 'Convive',
];
}
}
|
STACK_EDU
|
KeyShot Single Click Submission Tutorial
This method requires the KeyShot submitter, which can be installed from the RenderShare application under the Plugins tab.
- Open the KeyShot scripting console and run "KeyShot Submitter", select a submission method, fill out the submission dialog, and submit the job.
- You can monitor job progress in your online panel. "View Jobs List" in the RenderShare App is a shortcut to the online panel.
- The output will sync to your machine frame by frame as soon as rendering starts. "Cloud Web" in the RenderShare App is a shortcut to the online cloud storage, which is the online instance of the local cloud folder.
KeyShot Submitter v1.0 (KeyShot 8)
What is KeyShot Submitter?
KeyShot Submitter is our own single-click submitter script that runs inside the KeyShot application and allows you to submit your project without uploading and submitting it manually through the online submission system.
What is the benefit of using KeyShot submitter?
Here are some of the most notable features of KeyShot Submitter:
- Automates the process of packaging the scene and submitting the job
- Performs a number of checks and validations, reducing human error
- Warns you about limitations and specific naming rules before you submit your job
- Allows you to submit multiple cameras, model sets, and every combination of the two, which speeds up the submission process significantly
- Allows you to submit more than one job at a time
- Ensures that your scene data is transferred to the render servers correctly
How does it work?
KeyShot Submitter v1.0 offers two general submission methods:
Submit New Job:
This option allows you to submit a new job based on the scene currently open in KeyShot (the active scene). It packages and extracts the scene data, uploads it, and submits it to the farm; the entire workflow from submission to final delivery is automated.
Use Existing Project:
This is what we call "Smart Submit" or "Re-Submit". As the name implies, this option lets you re-submit projects you have already submitted; it updates the submitted data with new inputs/settings.
Re-submit always reuses the already uploaded/exported package, so there is no need to create and upload a new package, which reduces the submission time to just a few seconds.
Re-submit relies entirely on the already-submitted files, so you can use this option even without opening the file; it can be run from an empty scene.
This option enables you to submit any of the packages/files that live in the Project_Files folder, which is the main repository for source files and is part of the sync folder.
General Submitter Parameters:
Image Width: overrides the output image width in pixels
Image Height: overrides the output image height in pixels
All other resolution-related settings are taken from the file
Output Format: overrides the image output format
Formats that are not listed in this drop-down are not supported
This defines the amount of render power you get for your job; more power also costs more. There is more explanation on the pricing page: Pricing-Discount
Start Frame: overrides animation start frame
End Frame: overrides animation end frame
Step Frame: enables you to render every nth frame of the animation
Combining a step frame with a lower resolution is a good way to run an animation preview before rendering every frame at final resolution
Custom Frame Range:
This option gives you the ultimate flexibility to set the desired frame range:
- Use a comma (,) to separate frames
- Use a hyphen (-) to separate frame ranges
- Single frame is accepted: [ ex: 1]
- Multiple single frames are accepted: [ ex: 1,25,50]
- Animation range is accepted [ ex: 1-15]
- Multiple animation ranges are accepted: [ ex: 1-50,50-300]
- Combination of single frames and ranges is accepted [ ex: 1,5-15,70,250-300]
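The accepted range formats listed above could be expanded by a small parser like the following (a hypothetical Python sketch for illustration, not the submitter's actual code; Python is KeyShot's scripting language):

```python
def expand_frames(spec):
    """Expand a frame-range string like '1,5-15,70,250-300' into frame numbers."""
    frames = set()
    for part in spec.split(","):
        if "-" in part:
            start, end = part.split("-")
            frames.update(range(int(start), int(end) + 1))  # inclusive range
        else:
            frames.add(int(part))
    return sorted(frames)  # duplicates from overlapping ranges collapse

print(expand_frames("1,5-8,70"))          # [1, 5, 6, 7, 8, 70]
print(len(expand_frames("1-50,50-300")))  # 300 (frame 50 counted once)
```

Note that with a set, an overlapping spec such as "1-50,50-300" yields each frame only once.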
Here is the list of available cameras in the scene; you can select multiple cameras and submit them all at the same time:
- For single-frame jobs, selecting multiple cameras still submits a single job, with each camera assigned to a task; each task creates a separate folder under the main job folder in the sync folder root and puts its renders there
- For animation jobs, selecting multiple cameras submits multiple jobs at the same time; each job counts as an independent job and creates a new folder in the sync folder root. Please note that only one package will be uploaded for all the cameras
Select Model Sets:
Here is the list of available model sets in the scene; you can select multiple model sets and submit them all at the same time.
- For single-frame jobs, selecting multiple model sets still submits a single job, with each model set assigned to a task; each task creates a separate folder under the main job folder in the sync folder root and puts its renders there
- For animation jobs, selecting multiple model sets submits multiple jobs at the same time; each job counts as an independent job and creates a new folder in the sync folder root. Please note that only one package will be uploaded
Combination of Cameras and Model Sets:
This is a powerful feature that can save you a lot of time, but PLEASE BE CAREFUL when using it.
Let us explain this with a simple example:
Suppose you have 5 cameras and 5 model sets in the scene. How many jobs will be submitted if you select all 5 cameras and all 5 model sets at the same time?
Well, the answer depends on the type of job:
- For single-frame jobs, selecting multiple model sets and cameras at the same time submits a single job whose task count is the number of cameras multiplied by the number of model sets; every possible camera/model-set combination is assigned to a task, and each task creates a separate folder under the main job folder in the sync folder root and puts its renders there. So the answer is a single job with a total of 25 tasks (5 cameras x 5 model sets)
- For animation jobs, selecting multiple model sets and cameras at the same time submits multiple jobs, one per camera/model-set combination; each job counts as an independent job and creates a new folder in the sync folder root. Please note that only one package will be uploaded for all combinations. So the answer is a total of 25 independent jobs (5 cameras x 5 model sets)
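The combination count in the example above is just a Cartesian product, which can be pictured with a short sketch (illustrative only, not the submitter's code):

```python
from itertools import product

cameras = ["cam%d" % i for i in range(1, 6)]     # 5 cameras in the scene
model_sets = ["set%d" % i for i in range(1, 6)]  # 5 model sets in the scene

# Each camera/model-set pair becomes one task (single-frame job)
# or one independent job (animation job).
combos = list(product(cameras, model_sets))
print(len(combos))  # 25
```
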
|
OPCFW_CODE
|
THE REAL-80 EMULATOR
REAL-80 - Real-Time Videogenie-I / HT1080Z Emulátor
Pictures in the documentation can be enlarged!
Starting the first time
When starting the emulator the first time, it will run according to the default settings:
The tape recorder
The tape recorder emulation works in real-time mode. It is independent of the file format, but there are some conditions: tape writing always starts with 255 00h characters. This makes it easier to find the beginning of the file when reading, as well as the signal pitch. There is actually another reason for this too: because of the write/read mode, at least 8 zero bits must be read continuously before the data. The file itself does not need to start with 255 00h characters; 2 are enough, but there can also be more.
The file extension can be ASM, BAS or CAS. Not every file includes this data, so the emulator supports some common file formats:
Inserting/removing a tape
Push the F2 key to step into the File Manager:
After inserting the tape, the Directory window will appear, where you can move with the cursor up/down keys and choose the desired file or directory with the Enter key. With the ESC key you can cancel this function.
Only the ASM, BAS and CAS file types will be visible in the list.
Return to the emulator with the F2 or ESC key, to open the Debugger push the F1 key.
With auto start, machine code programs can load faster. This function can load CAS and CMD file formats.
The steps are similar to inserting a tape; only this time you must choose the AUTOSTART function. After selecting the file, press Enter to load the program into memory; the emulator will set the PC to the start address, and a message will tell you whether loading was successful or there was a file format error. The program will start instantly after returning to the emulator. There is also the possibility to step into the debugger with the F1 key; this way the program can be debugged.
Inserting/removing a disk
Same as the tape insert, only now choose any of the DISK select functions. The system can only boot from DISK 0, here we place the system disk. If the DISK 0 is empty the emulator will turn off disk emulation and the system will start in Basic after reset.
The emulator supports the most common DSK formats up to 40 Track.
Screen snapshot with inserted tape and disc
Screen snapshot with hidden drives and clock
Loading/Removing ROM and character set
Same as the tape insert, only now choose the ROM or CHRGEN line. After choosing a ROM file with a maximum of 3780h or 1000h bytes length respectively, the new ROM or character set will be loaded. Also a reset will occur, this way the program always starts from the 0000h address. This way you can run not only the standard manufacturer ROM but your own ROMs as well.
The file extension for ROM will be '.ROM', for the character set '.CHR'. To return to the default ROM or character set press the DEL key.
Since the debugger also uses the system character set, it might become unreadable in the case of an inadequate code. When deleting the INI file the system will reset itself to the default settings.
The built-in Debugger
The emulator includes a built-in debugger, which can be reached
through the F1 key.
Push TAB to move between the windows.
When stepping into the debugger, the assembler list cursor will jump to the actual PC value. Sometimes the cursor is not visible, since the programmer used tricks where one command modifies another and the program jumps there. The debugger can also search backwards and displays the commands exactly.
Move the cursor with the up/down keys, move the pages with the PgUp/PgDn keys.
Change the cursor address with the Enter key.
F7 Trace Into - execute a command
F8 Step Over - execute a command; CALL and RST commands are executed as a whole
F4 Go To Cursor - run the program till it reaches the cursor position
F9 Run - run the program (ESC or F1 will do the same)
F6 Toggle Breakpoint - turn on/off the breakpoint at the cursor position
F10 New PC - the PC value will be the same as the cursor position
F5 Toggle between debugger and emulator screen.
While running, the debugger will change back to the emulator screen only after the first screen refresh (1/60 sec).
The memory will display 256 bytes in hexadecimal and ASCII format and also serves for editing the writable memory zone. Move the cursor with the up/down keys, move the pages with the PgUp/PgDn keys. Change the cursor address with the Enter key. You can also return to the assembler list by pressing the Backspace key. It will save the window address when leaving the emulator.
The CPU registers can be changed, except the PC, which can be changed only from the assembler list with the F10 key. The green color of the register values shows that they have changed since the last stop, or have been changed manually. Move the cursor with the up/down keys, return to the assembler list with Enter.
The flags can be changed. If the flag value is 1 its name will be displayed in red, this way you can keep track of it more easily. The flag value is green, if it has changed since the last stop, or has been changed. Move between the flag values with the left/right keys, return to the assembler list with Enter.
Shows the stack content, can not be changed. In case a change is necessary, this can be done via the memory dump.
Shows the breakpoint address; for easy identification a short name can be added. One can delete the name with the DEL key. On pressing the Enter key, the assembler list cursor will be placed onto the selected entry. On exiting, the emulator will save the list.
With Alt-F1 the visible screen will be copied into a BMP file. The file will be placed into the emulator folder; the name will be REAL####.BMP, where #### is the first unused number starting from 0000.
The memory size can be changed (16K/48K). If a change occurs the emulator resets itself.
2 3 4 5 6 7 8 9 0 : - left arrow
Break Q W E R T Y U I O P @
A S D F G H J K L ;
Z X C V B N M , . / up/down/left/right
Home - Clear
Enter - New Line
1. The program is the property of Mr.
2. The program can be used, copied and distributed by everyone.
3. The program is free, this way no fee may be asked for it by anyone.
4. The program can solely be altered by the author.
Comments or ideas can be sent to firstname.lastname@example.org
Thanks to Mr. Levente Szűcs and Mr. Attila Grósz.
|
OPCFW_CODE
|
Consistent Error Reporting discussion
λ> data A = A { a :: Bool, b :: Int } deriving (Eq, Show, Ord, Generic, Binary, NFData, Atomable)
λ> toAtomType (undefined :: Proxy A)
ConstructedAtomType "A" (fromList [])
λ> toAtom (A True 1)
ConstructedAtom "A" (ConstructedAtomType "A" (fromList [])) [BoolAtom True,IntAtom 1]
λ> fromAtom $ toAtom (A True 1)
<interactive>:33:1: warning: [-Wtype-defaults]
• Defaulting the following constraints to type ‘Integer’
(Show a0) arising from a use of ‘print’ at <interactive>:33:1-28
(Atomable a0) arising from a use of ‘it’ at <interactive>:33:1-28
• In a stmt of an interactive GHCi command: print it
*** Exception: improper fromAtomConstructedAtom "A" (ConstructedAtomType "A" (fromList [])) [BoolAtom True,IntAtom 1]
CallStack (from HasCallStack):
error, called at /root/project-m36/src/lib/ProjectM36/Atomable.hs:64:16 in main:ProjectM36.Atomable
λ>
λ> toAddTypeExpr (undefined :: Proxy A)
AddTypeConstructor (ADTypeConstructorDef "A" []) [DataConstructorDef "A" [DataConstructorDefTypeConstructorArg (PrimitiveTypeConstructor "Bool" BoolAtomType),DataConstructorDefTypeConstructorArg (PrimitiveTypeConstructor "Int" IntAtomType)]]
Oops. Sorry for the noise.
fromList [] means there is no type variable in the type constructor, and there's nothing wrong with that.
But I still got an error in my project with project-m36-typed.
2019-05-06 11:30:32.029902: [info] attempting rollback after encountering errorDbQRelError (TypeConstructorAtomTypeMismatch "User" (ConstructedAtomType "User" (fromList [])))
I'll report it if it's related to project-m36. Closed.
Sure. Also let me know in what ways the error reporting could be improved. I hit errors myself which are not always clear. I've been thinking of adding more context to the error reporting, but I don't have a plan for how to make it consistent.
Sure.
It turns out that I just forgot to call toAddTypeExpr (undefined :: User). Haha, free jokes.
I just figured that out when I traced where TypeConstructorAtomTypeMismatch is located.
I was trying to auto-generate those new datatypes from a user-defined Schema in project-m36-typed.
I was thinking which relation type can express DbRecord User well. "Should I store User as ConstructedAtomType or a plain relation with extended attributes as additional record information?"
So, when a TypeConstructorAtomTypeMismatch occurs, I wish to see something like: "ConstructedAtomType "User" (fromList []) is used in attribute (dbRecordRecord :: User) in relvar Users, but not listed in database datatypes: Int, Day, ...Maybe a.... Maybe you need to create it first before use."
To make a big wish, I hope for:
a consistent language/expression that can be used both in Haskell and tutd
(so that it can be divided and conquered when checking every part of an expression).
Every action/operator/expression has a clear road map / implementation steps.
If an error occurs, it shows what the context is, the steps needed, the steps finished, the step that went wrong, and (maybe) a how-to-fix recommendation.
If a simple expression can have this overall check, then it will boost confidence when debugging a more complicated expression like a big schema or nested expression.
--
The exception call traces in control-monad-exception look nice, though I haven't used it. Worth taking a look.
hi @agentm
I saw this today. They have implemented "Relation Types" in Haskell for a relational algebra C++ library.
2 03 Types for a Relational Algebra Library - YouTube
I guess it's worth mentioning here.
Thanks. I think you've summarized my thoughts on reasonable error reporting as well.
|
GITHUB_ARCHIVE
|
In this video you will learn how to use Eclipse to generate Java source code.
I will cover the following topics:
• Generate getter and setter methods
• Generate constructors
• Override methods
• Code template short-cuts
Please subscribe to this channel 🙂
Time – 00:00
Hi, this is Chad (shod) with luv2code.com. Welcome back to another tutorial on Eclipse. In this video, you’ll learn how to use Eclipse to generate Java source code.
Time – 00:11
We’ll cover the following topics. We’ll learn how to generate Getters and Setters. We’ll also learn how to generate Constructors. We’ll also learn how to override methods, and finally we’ll discuss code templates.
Time – 00:25
Okay, so let’s get started. All right, so let’s go and start by creating a simple Java class. I’m going to create a new Java class called Student and I’ll hit Finish. What I’ll do with this class is I’ll define three fields; private string, I’ll define the last name, first name, and also the age. Now that I have the fields defined, I can have Eclipse generate methods for me, the Getter and Setter methods. I can right click, I can choose Source and Generate Getter and Setters.
Time – 01:07
Eclipse will display a list of fields and you can choose the fields that you want Getters and Setters for. Here, I'm going to Select All. You can also tell Eclipse where to insert them. Here, we'll have it insert after the Age field, and then also the access modifier of Public, and then I'll just hit OK. Eclipse will go off and do a lot of work for us and generate these Getter and Setter methods. Note here, I have a pair for last name, the Getter and Setter for first name, and also the Getter and Setter for age.
Time – 01:41
We can also make use of Eclipse to generate constructors for us. In this example, I’m going to move up to the top and I will insert constructors. I will right click, I’ll move to Source, Generate Constructors using Fields. Okay, so it’ll give me a list of fields to initialize. They’re all selected, and I’ll go ahead and hit OK. Note here, it’s going to create a new constructor for me, so this is the new piece of code that was created. A constructor, same name as the class; last name, first name, age, and it assigns the values accordingly.
Time – 02:20
All right. Another thing we can do with Eclipse is that we can use Eclipse to override methods for us or help us override the methods. One of the methods that you’ll commonly override in your Java programming life is overriding the toString method. Again, we’ll just right click, we’ll go to Source and we’ll move to Override/Implement Methods. Eclipse will give you a list of methods that you can override. The one I’m interested in is toString, so highlight it, select the check box, and I’ll hit OK. Eclipse will generate a toString method for me. It’ll automatically add the override annotation and the stub. This portion here I can remove and I can add my own custom toString implementation.
Time – 03:07
Eclipse also has a collection of code template shortcuts that you can use. You can view these template shortcuts by going to your Preferences window, under Window, Preferences, and it'll give you a list of all of these various templates. The way it works effectively is that if you want to make use of a main template, then you'd simply type the word main and then you enter CTRL+Space, and then Eclipse will do code completion for you and give you this stub of code. All right, so let's try it out.
Time – 03:44
Now, I'm going to actually make use of that main code template. I'll type the word main and then I'll hit CTRL+Space on my keyboard. It'll give me a drop-down list and I'll choose the main method. Eclipse will go through and give me a basic main method, pretty cool.
Time – 04:00
You can also do a similar thing for a System.out.println. Instead of typing out System.out.println long hand, you can type sysout CTRL+Space. It'll give you System.out.println and you can fill in whatever information you want for the println. There is also support for foreach. You can type foreach and then hit CTRL+Space and it'll give you that item. Now, we have this "for" loop that's already entered for us, and I can go through and get the list of command-line arguments that are being passed in to this method. Again, I can use my sysout to print out that information, so sysout CTRL+Space, and then I can print out temp.
Time – 04:43
Finally, I'll show you one bonus shortcut. I entered my System.out.println and it's not formatted properly. I can format my code with a right click: I can go to Source and choose Format. This will line up all the indentation correctly for my program, and it'll fix that System.out.println I have for temp.
Time – 05:06
Okay, so let’s go ahead and wrap up. In this video, we learned how to use Eclipse to generate Java source code. Please subscribe through our channel to view more videos and clips in Java. Also, visit our website luv2code.com to download Java source code.
|
OPCFW_CODE
|
I am happy to announce that a new version of API Console is now available. It brings multiple changes and new tooling. Read more to learn what’s new in this release.
API Console is an application that automatically generates documentation for an API from a RAML or OpenAPI definition. It does this by generating an AMF data model with the AMF parser or the "webapi-parser" module. The console is built on top of web platform APIs so it can be easily integrated with any web-based application. It comes in two flavors: a stand-alone application and a web component. Depending on your use case, you may use either of them.
The stand-alone application allows you to quickly generate documentation for an API and run it on a web server as a separate application. We provide a CLI tool to generate the application from sources and a Docker image to simplify deployment. The web component is to be used if you want to integrate the console with an existing application. This requires more setup and development but allows for the most flexibility.
What’s new in version 6?
API Console is now WCAG compliant. During the development process we made sure that the console is accessible for users with disabilities. This version is tested via automated software and via manual testing.
The updated UI implements changes requested by our customers and the community. The navigation has a slightly different structure and now renders the full path to a resource, even when it exceeds the width of a single-line menu item. New examples and annotation widgets make the UI more user-friendly.
The request editor (aka “try it”) has been redesigned to simplify interaction with the API. It now only renders fields that require input from the user to make a successful request. This means, for example, if headers are not defined for an operation then they are not rendered.
With this release we added more security by sanitizing markdown data before it is rendered. User input in the request editor is also checked for invalid data.
Improvements to the console
Most improvements are related to performance. The size of the bundle has been reduced almost fourfold; the bundled API Console now weighs ~300 KB. This speeds up console rendering time and improves the memory footprint.
In the documentation view, complex types are now hidden by default. This reduces potential crashes when the documentation contains huge type declarations; in previous versions of the console this would lead to tabs crashing. Similarly, huge examples generated for such types are not highlighted by default. Previously this would require the browser to render an unreasonable amount of markup, which could hang the tab for minutes.
Being as close to standards as possible was rule #1 when designing the architecture for API Console and the build tools. The console now follows Open WC standards for development and building. Application-specific code related to the development and build process has been reduced to a minimum.
API Console has its own node module and CLI command to generate a bundle in a CI environment or manually. With this release, we offer a Docker image that bundles API Console, the AMF parser, and a web server. Developers just need to pass in an API project to generate a data model from it and run the image on their Kubernetes instance.
Reasons to choose API Console
API Console is and will remain in active development, as it is one of the core Anypoint Platform products while also being an open-source project. The console is tested in enterprise environments and runs in both public and private clouds. It fulfills all requirements for an application to run in a government cloud.
We have experience building API documentation products, backed by several years of development of API Console and other products like Advanced REST Client. We are open to suggestions, and over the years we have listened to the voice of the community and our customers. Today, we are proud of the result of our work and happy to share this experience with the community.
How to get started?
We’ve prepared a few demos with basic use cases for API Console. You can check it out at demo.api-console.io. For developers we have prepared a documentation page at docs.api-console.io.
Try API Console today and give us your feedback. We accept issue reports and feature requests on the console's GitHub page. If you are a MuleSoft customer, you can contact customer support and give us your feedback directly.
|
OPCFW_CODE
|
Why are updates downloading so slow?
All other connection speeds are normal, average download speed is about 2mbps but my updates are lucky to get to 100kbps. Updates and this website are the only things running any ideas what the problem could be?
You're certain you cancelled during "Getting new packages" and no other phase, such as "Installing the upgrades"?
It might be your local mirror being slow from the 12.04 attention. You might want to switch mirrors temporarily:
How can I get apt to use a mirror close to me, or choose a faster mirror?
How do I change which mirror I get updates and software from?
I have had local mirrors slow down for a number of reasons. In addition to external factors, there is an apt option which rate-limits download speed. You may want to check whether Acquire::http::Dl-Limit has been set.
I've been using Ubuntu a long time now and can tell you it's the mirror. Back in the day the Canadian mirrors were so slow it was almost impossible to install (with the "download updates while installing" option). So most of us just got in the habit of selecting "New York" as our time zone during install so that it would default to us.archive.... (the faster American mirror at the time, which is in the same time zone, so no harm no foul). Nowadays though I have noticed the same problems you are describing. During a weekday you will be lucky to get 100K/s. On the weekend around 3am it seems to get up around 800K/s (as fast as my modem goes, so it might be faster for others). So what I did to test was open up the sources and change the country code from "us.archive...." to "ca.archive....". The first thing it did was educate me that the Canadian mirrors have been upgraded and fly pretty well now (during the aforementioned weekend time). Alas, they still suffer during weekdays, though in my experience they are still faster than the US servers (around 100-200 K/s). Just keep changing the mirror until you find one that works. I haven't tested any but "us" and "ca", so I cannot offer any insight there.
Edit the sources file: sudo gedit /etc/apt/sources.list
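As a sketch of the country-code switch described above (shown here against a sample line; on a real system you would apply the same sed, with sudo and a backup, to /etc/apt/sources.list):

```shell
# Rewrite the mirror's country prefix, e.g. us.archive -> ca.archive.
line="deb http://us.archive.ubuntu.com/ubuntu precise main restricted"
echo "$line" | sed 's|//us\.archive\.|//ca.archive.|'
# -> deb http://ca.archive.ubuntu.com/ubuntu precise main restricted
```

After editing the real file, run `sudo apt-get update` so apt fetches the package lists from the new mirror.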
Apart from what Jorge suggested (about choosing the best servers), you can actually remove some extra third-party repositories (like Launchpad ones, which you might have installed for extra themes etc.) to make the process a LOT faster.
You can read about it here: How to remove a repository?
Let's say you want to download and install some extra plugin, say the Gmail video chat one. Having just that repository selected in this list before you do a
sudo apt-get update
will make it drastically faster, because you don't have to download all the other unnecessary lists.
This list file is fairly big (about 24 MB minimum for the default packages), and removing unwanted repositories will save you a minute or so. You can read more about this here: the size of apt-get update lists is too big
At the same place (where you select these repositories), you can "Select best server" to make download as fast as possible. Sometimes, when mirrors near you go down, this will automatically select the best server at that time. You can read the second answer here: How can I get apt to use a mirror close to me, or choose a faster mirror?
All those settings are under "Update Manager", so take a look at it, and happy tweaking!
I just read that apt-fast can make downloads faster than apt-get by opening multiple connections, using aria2 as the default download manager.
Here is the link to the article.
Welcome to Ask Ubuntu! Could you [edit] your answer and provide more information in the answer instead of just linking off-site? (copy-paste the Ubuntu instructions)
|
STACK_EXCHANGE
|
I wake up in the morning. I hit a switch that clears and collects acres of farmland. I feed my forty dogs. I go beneath the earth, which has been hollowed out like Swiss cheese. Classic Minecraft.
This is the result of me and my friends deciding to play Minecraft together over lockdown; exponentially, we grew more powerful. We created greater monuments to our strength: the artificial river which let us traverse the neighborhood faster, the railway that carried us across the horizon-eating farm, the flaming altar filled with thousands of chickens (which crashed anyone’s computer if they looked directly at it). We had refined every means with which to transform the world into neat and orderly stacks, then arranged them for our perfect benefit.
As Dan Olsen has pointed out, crafting and survival games have a tendency to push players towards colonial outcomes. This isn’t part of the intent of the games’ designers, but the systems in play unconsciously drive you towards it. In Minecraft, you have infinite time and leisure; the only obstacle is the world itself. To progress you need to craft better tools, which requires you to gather rare resources, and since your tools have durability, you need to gather even more resources to sustain your advance. You reach a point where hoarding anything you come across is the best method. If you see something, you might as well take it, because you’ll probably want it later… And it’s not like it’s doing anything sitting there; you could easily find a better use for it.
The optimal form of play is to collect, organize, and store everything you come across. Due to how these resources spawn, you’re encouraged to annihilate the shape of the landscape. The fastest and easiest way to gain resources isn’t to seek out ravines and search for ore with minimal disturbance. Instead, you’re better off hollowing out the ground beneath you into thin veins, turning everything beneath the ground into a perfect grid of strip-mines. This type of thinking is prevalent throughout the survival/crafting genre: your goal is to amass resources, optimise, and industrialise, and then rule over your harmonious factory. Except… this doesn’t happen in Muck.
The sun rises. I sprint to a fir tree and start cutting it down before I realise I’m also hitting the nearby birch, so I step back and maneuver myself to make as little impact as possible. I then dash to the nearby clump of mithril ore. I only need one chunk. I only destroy one chunk. By this point the sun is getting low. Classic Muck.
Muck is a game best described as “Minecraft + Risk of Rain”. Mostly made as a joke by YouTuber and developer Dani, this roguelike survival game follows the same general path of Minecraft: you awake in an unfamiliar land (an island, in Muck’s case), monsters spawn at night, and to survive you need to make tools. You get wood, iron, and mithril, climbing the tech tree and then chopping it to pieces. However, the roguelike elements lead to vastly different outcomes.
Firstly, Muck is hard. It’s not uncommon to die within two or three in-game days or to lose a good run in one or two unlucky hits. Like Risk of Rain, Muck only gets harder the longer you play, with you constantly running against the clock and trying to outscale your foes. This fundamentally shifts your relationship to the environment. In Minecraft, there is no penalty for taking extra (beyond inventory space); it can only result in a net positive. This means that you do not need to make any calculations of value, all resources are generally equal. Muck, however, is ruled by one resource alone: time. You can’t gather extra iron or risk wasting precious seconds on a resource with little payout. You are constantly asking “is this worth it?”, trying to find the path that takes the least from the world and the least from your time. Where Minecraft guides you towards optimizing the world, Muck pushes you towards optimising yourself.
These changes add up and cause you to play Muck with a sense of ecological awareness. In their book Designing For Hope, Dominique Hes and Chrisna du Plessis encourage us to recognise that “eco-systems are not just a collection of species, but are also relational systems that connect humans, as organic systems, with animals and plants”. This is the fundamental focus of ecological thinking: decentering the human perspective in order to “stimulate an increased understanding that the world is fundamentally interconnected and interdependent”. The addition of roguelike elements transforms how you think about your relationship to the world. Being good at Muck involves wholeheartedly embracing an ecologically conscious perspective. Take only what you need and leave as little of a footprint as possible. Try to leave the island without fundamentally altering it. Muck, like real life, is a scenario where constant extraction and manipulation of the environment will lead to your death. Fail to curb your appetite and the world will eat you. Even more so, adopting a playstyle where you try to care for and avoid harming the environment as much as possible will often save your life.
Muck is filled with terrible and powerful bosses, the most iconic being Big Chunk, an enormous, earth-shaking monstrosity that uproots oaks, shatters boulders, and crushes wildlife underfoot. Chunk’s might means he’s capable of killing you in a single hit. Luckily, this lethality can be lessened through careful cultivation. Every resource in Muck has HP, which also means that it can take blows instead of you. Chunk is best fought in dense forests, as the trees are able to absorb the impact of his attacks, stopping them before they crush your fragile flesh. If you leave trees and rocks undisturbed, they will in turn provide you with protection from Chunk’s house-sized club. Care for the island, and it will care for you.
What makes these effects most pertinent to ecological thinking is that they aren’t directly turned into allegory through mechanics. If Muck were to, say, give you a +15% health bonus for every tree untouched, the relationship would fall apart. Giving an exact number or specific benefit to ecological play would direct players towards finding the optimal threshold, looking for the perfect point where you can instrumentalise the world “just enough”. By obscuring the ways in which you benefit from your environment in Muck, there is no “safe” level of engagement. You must instead follow principles of care and try not to deviate. The goals, benefits, and relationship you have to the environment are never explicit, so you must assume empathy as a default mode.
This captures another aspect of ecological thinking: empathy with inhuman agents. There is a tendency for environmental causes to be anthropomorphised in order to generate action. The perspective of “mother nature”, of the world as an extremely human-oriented place that we should care for like a person. The problem here is that it makes the right to life and care conditional. It argues that our ecosystem deserves care because of its proximity to us, rather than asserting the importance of biodiversity for its own sake. Arguing that, for example, Koko the Gorilla is special because she could “speak” like humans, instead of acknowledging that the critically endangered western lowland gorilla matters regardless of language capacity. The environment in Muck doesn’t ask you to care for it through human-centered mechanics. It makes no clear statements and does not advocate for itself. In fact, the ecosystem of Muck is beautifully non-human: its cows stare at you with dead, vacant eyes and make no attempt at anything resembling human agency. They watch you like a silent jury and will not interfere (or even demonstrate anguish) regarding your actions.
I doubt that Dani intended for this to be the takeaway of Muck. In fact, there are several elements that complicate this reading. The Risk of Rain-style item chests that cover the island encourage you to constantly be opening them to out-scale your enemies, the gold needed to open them coming from ore or dropping from enemies. The constant hunt for gold ore or enemies to kill reaches the extent that you often need to summon enemies from shrines, which results in larger fights that destroy the environment around you. Survive for long enough and this accumulation of items leads to an exponential power creep, similar to the exponential gain of resources in Minecraft. The ecological care that fills the game is disregarded in the pursuit of wealth and power.
Even if the ecological outcomes of Muck are accidental and occasionally in conflict with other parts of the system, they still provide wonderful avenues to explore. The clash brought about by the roguelike elements produces a style of play that is unlike anything else in the survival genre I’ve seen. Part of the joy in finding these elements is because they are accidental: Muck unintentionally defies and denies the colonial DNA of the survival genre. It guides you towards a constant awareness of your place in the environment, which many other games try to silence. The majority of survival games create some kind of justification for your colonisation, the most common of these being the narrative of the stranded survivor (Subnautica, ARK, Don’t Starve, The Forest, Raft), where you must reclaim the wilderness in order to leave it and return to civilisation.
Muck shares this “narrative” (the only explanation of the situation being on its Steam page, plus a congratulations screen after beating the game), but never falls into justifying it. You do not need to instrumentalise the environment in order to survive. You can thrive without transforming the environment and terraforming it to your personal tastes. Even though Muck doesn’t take itself seriously, it provides an extremely serious critique of the survival crafting genre. What compels us to create games that reflect and incentivise colonial perspectives? More importantly, what compels us to play them? Muck shows us that the survival crafting genre can exist without these elements. More importantly, it shows us that a game without them can be goddamn fun.
|
OPCFW_CODE
|
fix CMakeLists.txt
oops, this doesn't work very well.
If you delete the git_version.h after you have run make, then the file will not be regenerated. And the build will fail
I did remove git_version.h. I also removed makefile. Once we use CMake, it generates a Makefile, which make considers only after the existing lowercase makefile. So the old makefile should be removed when CMake is adopted.
in my test, I have kept your code untouched, including the remove-makefile line
also, I think the file(REMOVE makefile) itself is too tricky
Once we use cmake, cmake will generate Makefile, which has lower priority than makefile when running make command.
to deal with this problem, the standard way is to create a build folder, and run cmake -S .. -B . inside that folder
to make it clear, here are my steps to reproduce the problem:
git clone https://github.com/wangyu-/udp2raw.git
cd udp2raw
git checkout ec6fad552b9cd # the commit with your change
mkdir build && cd build
cmake -S .. -B .
make
rm ../git_version.h
make
it shows:
yancey@yancey-ubuntu:~/ttt/udp2raw/build$ make
[ 5%] Building CXX object CMakeFiles/udp2raw.dir/main.cpp.o
[ 11%] Building CXX object CMakeFiles/udp2raw.dir/lib/md5.cpp.o
[ 17%] Building CXX object CMakeFiles/udp2raw.dir/lib/pbkdf2-sha1.cpp.o
[ 23%] Building CXX object CMakeFiles/udp2raw.dir/lib/pbkdf2-sha256.cpp.o
[ 29%] Building CXX object CMakeFiles/udp2raw.dir/encrypt.cpp.o
[ 35%] Building CXX object CMakeFiles/udp2raw.dir/log.cpp.o
[ 41%] Building CXX object CMakeFiles/udp2raw.dir/network.cpp.o
[ 47%] Building CXX object CMakeFiles/udp2raw.dir/common.cpp.o
[ 52%] Building CXX object CMakeFiles/udp2raw.dir/connection.cpp.o
[ 58%] Building CXX object CMakeFiles/udp2raw.dir/misc.cpp.o
[ 64%] Building CXX object CMakeFiles/udp2raw.dir/fd_manager.cpp.o
[ 70%] Building CXX object CMakeFiles/udp2raw.dir/client.cpp.o
[ 76%] Building CXX object CMakeFiles/udp2raw.dir/server.cpp.o
[ 82%] Building CXX object CMakeFiles/udp2raw.dir/lib/aes_faster_c/aes.cpp.o
[ 88%] Building CXX object CMakeFiles/udp2raw.dir/lib/aes_faster_c/wrapper.cpp.o
[ 94%] Building CXX object CMakeFiles/udp2raw.dir/my_ev.cpp.o
[100%] Linking CXX executable udp2raw
[100%] Built target udp2raw
yancey@yancey-ubuntu:~/ttt/udp2raw/build$ rm ../git_version.h
yancey@yancey-ubuntu:~/ttt/udp2raw/build$ make
Consolidate compiler generated dependencies of target udp2raw
[ 5%] Building CXX object CMakeFiles/udp2raw.dir/misc.cpp.o
/home/yancey/ttt/udp2raw/misc.cpp:7:10: fatal error: git_version.h: No such file or directory
7 | #include "git_version.h"
| ^~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [CMakeFiles/udp2raw.dir/build.make:202: CMakeFiles/udp2raw.dir/misc.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:83: CMakeFiles/udp2raw.dir/all] Error 2
make: *** [Makefile:91: all] Error 2
in my test, I have kept your code untouched, including the file(REMOVE makefile); when the problem happens, the original makefile is already gone
I see. Creating git_version.h should be performed during make, rather than cmake.
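One way to move the generation into the build step (a sketch only, with a hypothetical helper script scripts/gen_git_version.sh that writes the header from `git rev-parse HEAD`; the project may fix it differently) is a custom command whose OUTPUT is the header, so `make` recreates it whenever it is missing:

```cmake
# Sketch: regenerate git_version.h during `make`, not during `cmake`.
# The helper script and target wiring here are hypothetical.
add_custom_command(
    OUTPUT ${CMAKE_SOURCE_DIR}/git_version.h
    COMMAND ${CMAKE_SOURCE_DIR}/scripts/gen_git_version.sh ${CMAKE_SOURCE_DIR}/git_version.h
    WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
    COMMENT "Generating git_version.h"
)
add_custom_target(gen_git_version DEPENDS ${CMAKE_SOURCE_DIR}/git_version.h)
# Ensure the header exists before the udp2raw sources are compiled.
add_dependencies(udp2raw gen_git_version)
```

Because the custom command's OUTPUT is the header itself, deleting git_version.h between builds simply makes the next `make` regenerate it instead of failing.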
|
GITHUB_ARCHIVE
|
function createEquals(specificEquals) {
return function (m) {
var forElement = this;
if (forElement.modelKind === m.modelKind) {
return specificEquals.call(forElement, m);
}
else {
return false;
}
};
}
exports.createEquals = createEquals;
function createTypeEquals(specificEquals) {
return createEquals(function (type) {
var forType = this;
if (type.typeKind === forType.typeKind) {
return specificEquals.call(forType, type);
}
else {
return false;
}
});
}
exports.createTypeEquals = createTypeEquals;
function createExpressionEquals(specificEquals) {
    return createEquals(function (expression) {
        // Bind the receiver at call time, like createTypeEquals does,
        // so the expressionKind check compares the two expressions.
        var forExpression = this;
        if (expression.expressionKind === forExpression.expressionKind) {
            return specificEquals.call(forExpression, expression);
        }
        else {
            return false;
        }
    });
}
exports.createExpressionEquals = createExpressionEquals;
exports.containerEquals = createEquals(function (tc) {
var typeContainer = this;
if (typeContainer.name === tc.name && typeContainer.containerKind === tc.containerKind) {
if (tc.containerKind === 2) {
return typeContainer.parent.equals(tc.parent);
}
else {
return true;
}
}
else {
return false;
}
});
exports.containedEquals = createEquals(function (c) {
var contained = this;
return contained.name === c.name && contained.parent.equals(c.parent);
});
exports.protoClassEquals = createTypeEquals(function (pc) {
var protoClass = this;
return protoClass.instanceType.equals(pc.instanceType) && protoClass.staticType.equals(pc.staticType);
});
exports.constructableTypeEquals = createTypeEquals(function (c) {
var cls = this;
if (!cls.typeConstructor.equals(c.typeConstructor)) {
return false;
}
if ((cls.typeArguments && !c.typeArguments) || (!cls.typeArguments && c.typeArguments)) {
return false;
}
if (cls.typeArguments) {
if (cls.typeArguments.length !== c.typeArguments.length) {
return false;
}
for (var j = 0; j < cls.typeArguments.length; j++) {
if (!cls.typeArguments[j].equals(c.typeArguments[j])) {
return false;
}
}
}
return true;
});
exports.typeParameterEquals = createEquals(function (tp) {
var typeParameter = this;
return typeParameter.name === tp.name && typeParameter.parent.equals(tp.parent);
});
exports.compositeTypeEquals = createTypeEquals(function (ct) {
    var compositeType = this;
    var keys = Object.keys(compositeType.members);
    for (var i = 0; i < keys.length; i++) {
        var key = keys[i];
        var thisMember = compositeType.members[key];
        var thatMember = ct.members[key];
        if (!thatMember || !thisMember.type.equals(thatMember.type) || thisMember.optional !== thatMember.optional) {
            return false;
        }
    }
    // The index and call signatures belong to the type as a whole,
    // so they are checked once, outside the member loop.
    if (compositeType.index && (!ct.index || compositeType.index.keyType !== ct.index.keyType || !compositeType.index.valueType.equals(ct.index.valueType))) {
        return false;
    }
    if (compositeType.calls) {
        if (!ct.calls || compositeType.calls.length !== ct.calls.length) {
            return false;
        }
        for (var j = 0; j < compositeType.calls.length; j++) {
            if (!compositeType.calls[j].equals(ct.calls[j])) {
                return false;
            }
        }
    }
    return true;
});
exports.indexEquals = createEquals(function (i) {
var index = this;
return index.parent.equals(i.parent);
});
exports.memberEquals = createEquals(function (m) {
var member = this;
return member.parent.equals(m.parent);
});
exports.enumMemberEquals = createEquals(function (em) {
var enumMember = this;
return enumMember.name === em.name && enumMember.parent.equals(em.parent);
});
exports.primitiveTypeEquals = createTypeEquals(function (p) {
var primitiveType = this;
return primitiveType.primitiveTypeKind === p.primitiveTypeKind;
});
exports.decoratorTypeEquals = function (m) {
    var decoratorType = this;
    // Preserve the receiver when delegating to the function-type check.
    if (exports.functionTypeEquals.call(decoratorType, m)) {
        return decoratorType.decoratorTypeKind === m.decoratorTypeKind;
    }
    else {
        return false;
    }
};
exports.functionTypeEquals = createTypeEquals(function (ft) {
var functionType = this;
if (ft.parameters.length !== functionType.parameters.length) {
return false;
}
for (var i = 0; i < ft.parameters.length; i++) {
var thisParameter = functionType.parameters[i];
var thatParameter = ft.parameters[i];
if (thisParameter.name !== thatParameter.name || !thisParameter.type.equals(thatParameter.type) || thisParameter.optional !== thatParameter.optional) {
return false;
}
}
if ((functionType.type && !ft.type) || (!functionType.type && ft.type)) {
return false;
}
if (functionType.type && !functionType.type.equals(ft.type)) {
return false;
}
if ((functionType.typeParameters && !ft.typeParameters) || (!functionType.typeParameters && ft.typeParameters)) {
return false;
}
if (functionType.typeParameters) {
if (functionType.typeParameters.length !== ft.typeParameters.length) {
return false;
}
for (var i = 0; i < functionType.typeParameters.length; i++) {
var thisParameter = functionType.typeParameters[i];
var thatParameter = ft.typeParameters[i];
if ((thisParameter.extends && !thatParameter.extends) || (!thisParameter.extends && thatParameter.extends)) {
return false;
}
if (thisParameter.extends && thisParameter.name !== thatParameter.name && !thisParameter.extends.equals(thatParameter.extends)) {
return false;
}
}
}
return true;
});
exports.parameterEquals = createEquals(function (p) {
var parameter = this;
return parameter.name === p.name && parameter.parent.equals(p.parent);
});
exports.tupleTypeEquals = createTypeEquals(function (t) {
var tupleType = this;
if (tupleType.elements.length !== t.elements.length) {
return false;
}
for (var i = 0; i < tupleType.elements.length; i++) {
if (!tupleType.elements[i].equals(t.elements[i])) {
return false;
}
}
return true;
});
exports.unionOrIntersectionTypeEquals = createTypeEquals(function (t) {
var unionType = this;
if (unionType.types.length !== t.types.length) {
return false;
}
for (var i = 0; i < unionType.types.length; i++) {
if (!unionType.types[i].equals(t.types[i])) {
return false;
}
}
return true;
});
exports.typeQueryEquals = createTypeEquals(function (tQ) {
var typeQuery = this;
return typeQuery.type.equals(tQ.type);
});
exports.valueExpressionEquals = createExpressionEquals(function (vE) {
var valueExpression = this;
return valueExpression.value.equals(vE.value);
});
exports.primitiveExpressionEquals = createExpressionEquals(function (rE) {
var literalExpression = this;
return literalExpression.primitiveValue === rE.primitiveValue;
});
exports.arrayExpressionEquals = createExpressionEquals(function (aE) {
var arrayExpression = this;
if (arrayExpression.elements.length !== aE.elements.length) {
return false;
}
for (var i = 0; i < arrayExpression.elements.length; i++) {
var eq = arrayExpression.elements[i].equals(aE.elements[i]);
if (eq === undefined) {
return undefined;
}
else if (!eq) {
return false;
}
}
return true;
});
exports.objectExpressionEquals = createExpressionEquals(function (oE) {
    var objectExpression = this;
    var keys = Object.keys(objectExpression);
    for (var i = 0; i < keys.length; i++) {
        var thisProp = objectExpression[keys[i]];
        var thatProp = oE[keys[i]];
        // Check the counterpart exists before calling equals on it.
        if (!thatProp) {
            return false;
        }
        var eq = thisProp.equals(thatProp);
        if (eq === undefined) {
            return undefined;
        }
        else if (!eq) {
            return false;
        }
    }
    return true;
});
exports.classExpressionEquals = createExpressionEquals(function (cE) {
var classExpression = this;
return classExpression.class.equals(cE.class);
});
exports.classReferenceExpressionEquals = createExpressionEquals(function (crE) {
var classReferenceExpression = this;
return classReferenceExpression.classReference.equals(crE.classReference);
});
exports.functionExpressionEquals = createExpressionEquals(function (cE) {
return undefined;
});
exports.functionCallExpressionEquals = createExpressionEquals(function (cE) {
var callExpression = this;
var eq = callExpression.function.equals(cE.function);
if (!eq) {
return eq;
}
else {
if (callExpression.arguments.length !== cE.arguments.length) {
return false;
}
for (var i = 0; i < callExpression.arguments.length; i++) {
var eq_1 = callExpression.arguments[i].equals(cE.arguments[i]);
if (!eq_1) {
return eq_1;
}
}
return true;
}
});
exports.newExpressionEquals = createExpressionEquals(function (nE) {
var newExpression = this;
var eq = newExpression.classReference.equals(nE.classReference);
if (!eq) {
return eq;
}
else {
if (newExpression.arguments.length !== nE.arguments.length) {
return false;
}
for (var i = 0; i < newExpression.arguments.length; i++) {
var eq_2 = newExpression.arguments[i].equals(nE.arguments[i]);
if (!eq_2) {
return eq_2;
}
}
return true;
}
});
exports.propertyAccessExpressionEquals = createExpressionEquals(function (paE) {
var propertyAccessExpression = this;
var eq = propertyAccessExpression.parent.equals(paE.parent);
if (eq === undefined) {
return undefined;
}
else {
return eq && propertyAccessExpression.property === paE.property;
}
});
exports.enumExpressionEquals = createExpressionEquals(function (eE) {
var enumExpression = this;
return enumExpression.enum.equals(eE.enum) && enumExpression.value === eE.value;
});
exports.decoratorEquals = createEquals(function (d) {
var decorator = this;
if (!decorator.parent.equals(d.parent)) {
return false;
}
if (decorator.parameters && !d.parameters || (!decorator.parameters && d.parameters)) {
return false;
}
if (decorator.parameters) {
if (decorator.parameters.length !== d.parameters.length) {
return false;
}
for (var i = 0; i < decorator.parameters.length; i++) {
if (!decorator.parameters[i].equals(d.parameters[i])) {
return false;
}
}
}
return true;
});
//# sourceMappingURL=equals.js.map
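A self-contained usage sketch of the createEquals pattern defined above (the toy objects and the nameEquals helper are invented for illustration):

```javascript
// createEquals wraps a kind-specific comparison so that models of
// different kinds are never considered equal.
function createEquals(specificEquals) {
    return function (m) {
        if (this.modelKind === m.modelKind) {
            return specificEquals.call(this, m);
        }
        return false;
    };
}

// A toy comparison that only looks at the name field.
var nameEquals = createEquals(function (m) { return this.name === m.name; });

var a = { modelKind: 1, name: "x", equals: nameEquals };
var b = { modelKind: 1, name: "x", equals: nameEquals };
var c = { modelKind: 2, name: "x", equals: nameEquals };

console.log(a.equals(b)); // true: same kind, same name
console.log(a.equals(c)); // false: the kind check short-circuits
```

Each specific comparator in the module only has to handle objects of its own kind; the wrapper guarantees the kind check has already passed.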
|
STACK_EDU
|
SRP-6 vulnerabilities when N is small
I'm one of the developers of an application which uses SRP-6 as the authentication mechanism. The authentication part of the code is very old and uses N with only 256 bits (all arithmetic is done in modulo N). After receiving reports of stolen passwords we upgraded to SRP-6a with the size of N 1024 bits.
We are still investigating (both on the client and server side) how the passwords were stolen/broken. I know that SRP-6 with such a low N value is vulnerable to man-in-the-middle attacks and "two-for-one" guessing (SRP-6 Improvements and Refinements paper by Thomas Wu). The attacks were probably made only on the client side, but this made me very curious.
Would it be possible for an attacker to launch an offline dictionary/brute-force attack on the B public key:
B = (k*v + g^b) % N
N - 256 bits long
b - 152 bits long (random private key - generated using OpenSSL library)
Is it possible with modern technology? Could the attacker somehow predict or find out the random value b, extract v=g^x % N, then perform a discrete logarithm and find x?
Solving a 256-bit discrete log is absolutely doable, and quite quickly, these days; there are public tools that can do it, though they may require some expertise to use.
On that note, even a 1024-bit modulus is not particularly conservative: it is generally agreed that well-funded organizations today could break logs of that size as well, but at a very large cost. The current minimum recommended modulus size for RSA, Diffie-Hellman, SRP, etc is 2048 bits.
That being said, I would also put my money on a client or server-side break-in before arriving at the conclusion that the log was being broken.
Being able to solve the discrete logarithm in SRP-6 allows an eavesdropping attacker to dictionary attack the password. It will not directly reveal a strong password or its hash. It requires the attacker to observe a successful authentication, $B$ alone does not suffice.
The attacker eavesdrops $s$, $A = g^a$, $B$ and $M_1$.
The attacker solves $a$ from $A$.
For each password guess $P$, the attacker calculates $M_1$ just as the user would.
If it matches, the password guess was correct.
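The per-guess computation in the steps above can be sketched as follows (RFC 5054 style notation; the exact $M_1$ formula varies between implementations, so treat this as illustrative):

```latex
% Eavesdropped: salt s, A = g^a mod N, B, and the client's proof M_1.
% The attacker first recovers a from A by solving the 256-bit discrete log.
\begin{align*}
  u    &= H(A \parallel B) \\
  x'   &= H\bigl(s \parallel H(I \parallel \text{``:''} \parallel P)\bigr)
          \quad \text{for each password guess } P \\
  S'   &= \bigl(B - k\,g^{x'}\bigr)^{a + u x'} \bmod N \\
  M_1' &= H\bigl(A \parallel B \parallel H(S')\bigr)
\end{align*}
% The guess P is correct when M_1' equals the observed M_1.
```

Each guess costs one modular exponentiation, which is why a small $N$ turns an eavesdropped handshake into an offline dictionary attack.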
From the point of view of cracking the password, this attacker is in the same position as one who breaks into the server and steals $v = g^x$, but isn't able to break the DL. (The latter could additionally impersonate a server as Ricky Demer mentions.)
With both $v$ and a too small modulus, no guessing is needed to derive $x$, allowing authentication or impersonation of either party.
@RickyDemer Which attacker? The one who breaks the discrete log in $v$ to get $x$? That's all the information the server has.
@RickyDemer, oh, right, I was only thinking about logging in due to the OPs mention of stolen passwords, but I'll update the post to clarify.
|
STACK_EXCHANGE
|
Barrachina-Muñoz S, Wilhelmi Roca FJ, Bellalta B. Dynamic Channel Bonding in Spatially Distributed High-Density WLANs. IEEE Transactions on Mobile Computing
List of results published directly linked with the projects co-funded by the Spanish Ministry of Economy and Competitiveness under the María de Maeztu Units of Excellence Program (MDM-2015-0502).
The record for each publication will include access to postprints (following the Open Access policy of the program), as well as datasets and software used. Ongoing work with UPF Library and Informatics will improve the interface and automation of the retrieval of this information soon.
Barrachina-Munoz S, Wilhelmi F, Bellalta B. Performance Analysis of Dynamic Channel Bonding in Spatially Distributed High Density WLANs. arXiv preprint.
In this paper we discuss the effects on throughput and fairness of dynamic channel bonding (DCB) in spatially distributed high density (HD) wireless local area networks (WLANs). First, we present an analytical framework based on continuous time Markov networks (CTMNs) for depicting the phenomena given when applying different DCB policies in spatially distributed scenarios, where nodes are not required to be within the carrier sense of each other. Then, we assess the performance of DCB in HD IEEE 802.11ax WLANs by means of simulations. Regarding spatial distribution, we show that there may be critical interrelations among nodes – even if they are located outside the carrier sense range of each other – in a chain reaction manner. Results also show that, while always selecting the widest available channel normally maximizes the individual long-term throughput, it often generates unfair scenarios where other WLANs starve. Moreover, we show that there are scenarios where DCB with stochastic channel width selection improves the latter approach both in terms of individual throughput and fairness. It follows that there is not a unique DCB policy that is optimal for every case. Instead, smarter bandwidth adaptation is required in the challenging scenarios of next-generation WLANs.
- Spatial-Flexible Continuous Time Markov Network (SFCTMN), an analytical framework based on Continuous Time Markov Networks (CTMNs). https://github.com/sergiobarra/SFCTMN
- Komondor, a wireless networks simulator built on top of the COST library. https://github.com/wn-upf/Komondor
- arXiv pre-print: https://arxiv.org/abs/1801.00594
|
OPCFW_CODE
|
[capture-promotion] When checking if a (struct_element_addr (project_box box)) is written to, check that all of the operands are loads, instead of returning early when we find one.
I found this bug by inspection.
This is an important bug to fix since this pass runs at -Onone and the bug
results in the compiler hitting an unreachable.
The way the unreachable is triggered is that when we detect that we are going to
promote a box, if we see a (struct_element_addr (project_box box)), we don't map
the struct_element_addr to a cloned value. If we have a load, this is not an
issue, since we are mapping the load to the struct_extract. But if we have /any/
other non-load users of the struct_element_addr, the cloner will attempt to look
up the struct_element_addr and will be unable to find it, hitting an
unreachable.
rdar://32776202
(cherry picked from commit cf99e5c522ff2dfaabca8b3af2cf4e00ff8fdff7)
@swift-ci test
@atrick Can you review?
Build failed
Jenkins build - Swift Test Linux Platform
Git Commit - a598c669797b9a914f326e119c923c49f59a6b0f
Test requested by - @gottesmm
The linux failure is an integration testing failure from swiftpm maybe?
10:54:52 $ "rm" "-rf" <EMAIL_ADDRESS>
10:54:52 $ "mkdir" "-p" <EMAIL_ADDRESS>
10:54:52 $ "cp" "-R" <EMAIL_ADDRESS> <EMAIL_ADDRESS>
10:54:52 $ "rm" "-rf" <EMAIL_ADDRESS>
10:54:52 $ <EMAIL_ADDRESS> "build" "--package-path" <EMAIL_ADDRESS>
10:54:52 note: command had no output on stdout or stderr
10:54:52 error: command failed with exit status: 1
10:54:52 $ "tee" <EMAIL_ADDRESS>
10:54:52 # command output:
10:54:52 error: Unknown option --package-path. Use --help to list available options
Definitely not this PR. Let's try this again.
@swift-ci test linux platform
@gottesmm I see the same failure when I test this PR: #10263
Build failed
Jenkins build - Swift Test Linux Platform
Git Commit - a598c669797b9a914f326e119c923c49f59a6b0f
Test requested by - @gottesmm
@swift-ci test linux platform
Explanation: This patch fixes a bug, found by inspection, that can cause the compiler to hit an unreachable at -Onone. Since it occurs at -Onone, it could also cause SourceKit to crash.
Scope: This change only affects the capture promotion pass. It will not result in any language level changes (especially since we would have crashed the compiler before).
Radar (and possibly SR Issue): rdar://32776202
Risk: None. The change is very small, and if we were to hit this condition, the compiler would crash anyway.
Testing: I added a filecheck test.
Approved by Bob via email.
|
GITHUB_ARCHIVE
|
Stuff I've Done:
December, 2003: Winners of Rudolph's XSS Christmas, along with my own answers to the challenge! We received a great batch of answers this time. Thanks to all who played, and merry Christmas, y'all.
December, 2003: A Holiday-themed CRACK THE HACKER CHALLENGE, called Rudolph's XSS Christmas. Help Rudolph and Hermey save Christmas and get a chance to win a copy of my new book, Malware
. Special thanks to TechRepublic.com for hosting this challenge!
December, 2003: Winners of the Spinal Hack challenge, along with my answers, are here. Congrats to the brilliant folks who won.
November, 2003: The new book is out! Finally... Malware: Fighting Malicious Code by Ed Skoudis, with Lenny Zeltser.
The book includes a detailed look at all forms of malware including viruses, worms, RootKits, kernel manipulation, BIOS attacks, and the possibility of malware microcode. It
also includes a description of how to build your own malware analysis laboratory, along with three different exciting malware scenarios:
- A Fly in the Ointment
- Invasion of the Kernel Snatchers
- Silence of the Worms
November, 2003: My hiatus is over... the book is done. Here's a NEW Spinal Tap themed CRACK THE HACKER CHALLENGE called Spinal Hack. Answer the questions to win a copy of my new book, Malware!
November, 2003: An article on Combo Malware that I wrote for Information Security Magazine.
November, 2003: A malware analysis template to fill out while performing static and dynamic
analysis of malicious software. This form and the process surrounding its use are described in Chapter 11 of my Malware book.
July, 2003: Ever wonder what's going on deep inside of Windows? No one knows for sure, as depicted in this presentation on the Evolution of a Windows Forensics Guru by Rob Lee and me.
June, 2003: What's this? A new book on the way? Written by me and Lenny Zeltser. Stay tuned for November, 2003...
May, 2003: I get asked which are my favorite computer security books on a weekly basis. Not that I'm special or anything, but here is my list of favorite books (computer security and related). Please ignore the dorky picture!
May, 2003: WINNERS for the "When Trinity Hacked the IRS D-Base..." Challenge.
May, 2003: The Counter Hack Baby... Pictures of the cute little guy! And, no, it's not mine.
May, 2003: A Matrix-themed CRACK THE HACKER CHALLENGE, titled "When Trinity Hacked the IRS D-Base." Answer the questions to win a copy of my book!
February, 2003: A "Willie Wonka" themed CRACK THE HACKER CHALLENGE, titled
"Willie Wonka and the Chocolate Hackery." Answer the questions to win a nifty prize!
February, 2003: A presentation on
January, 2003: A "Back to the Future" themed CRACK THE HACKER CHALLENGE, titled
"Hack to the Future". Answer the questions and win a nifty prize!
December, 2002: Ever wonder what other things Snort could run on? Check out Unusual
Devices Running Snort.
December, 2002: A Holiday Grinch-themed CRACK THE HACKER CHALLENGE, "How the Grinch Hacked Christmas!" Answer the questions and win a prize.
December, 2002: A memo template to use for getting permission for conducting penetration tests. Some people call this a "Get Out of Jail Free" card. Remember to have your legal team review, tweak, and approve the language before getting it signed!
December, 2002: Need holiday gift ideas? How about Information Security Action Figures?
November, 2002: InfoSec's Worst Nightmares, an article on threats in Information Security
Magazine on the biggest attacks of the last 5 years and issues to worry about in the future.
November 2002: A Spider-Man themed CRACK THE HACKER CHALLENGE. Solve the
"Spider-Hack" challenge, and Win a Prize
(Sponsored by SearchSecurity.com)
October 2002: Ever wonder what would happen if Microsoft started writing hacking tools? Check out If Microsoft Had Written Nmap
October 2002: A Robin Hood-based CRACK THE HACKER CHALLENGE. Solve the "Robin Hack" challenge, and Win a Prize (Sponsored by SearchSecurity.com)
September 2002: A Princess-Bride-based CRACK THE HACKER CHALLENGE. Solve the "Princess Hack" challenge, and Win a Prize
(Sponsored by SearchSecurity.com)
August 2002: Music from conferences (including Kraftwerk!)
August 2002: Some slides on Format String Attacks. These slides show a picture of what's
happening on the stack during such an attack.
Article in Information Security Magazine, "Cracker Tools and Techniques... Faster Stealthier... More Dangerous", along with sidebar "The Worm Turns" and "Sneaking Past IDS"
July 2002: A Wizard-of-Oz-based CRACK THE HACKER CHALLENGE. Solve the "Crackers, Admins, and Sploits... Oh My!" challenge, and Win a Prize (Sponsored by
July 2002: How to Tell If You Are a Netcat Geek
June 2002: Silly Quotes from a Conference
June 2002: CRACK THE HACKER CHALLENGE. Solve the
"Star Hacks, Episode IV, A New Hack" challenge, and Win a Prize (Sponsored by SearchWebManagement.com)
June 2002: Some
on Cross-Site Scripting (XSS)
May 2002: CRACK THE HACKER CHALLENGE, Solve the
"Backdoor Shell Game Face Off", and Win a Prize (Sponsored by SearchWebManagement.com)
May 2002: Presentation on Latest Hacking Trends, Delivered to Infraguard Delaware Chapter
Night of the Living Wi-Fi's (A Security Parable for Our Times), a fun wireless scenario
March 2002: Counter Hack Briefing Slides
, Delivered at SoftPro books
Silence of the Worms, a fun worm scenario -- Learn from the mistakes of
On the Cutting Edge - The Year of the Worm, Information Security Magazine
August 2001: An Article on Newer Types of Ethical Hacks, (Web App, Client Side Components, and War Driving), written by Ed Skoudis and Chris O'Ferrell
August 2001: An Article on Security Organization Structures, written by Ed Skoudis and Mike Ressler
July 2001: Wireless LAN Security Policies, written by Ed Skoudis & John Burgess
NEW - Interactive CD-ROM
The Hack-Counter Hack Training Course: A Network Security Seminar
This CD-ROM contains:
- Over 4 hours of video lecture on computer attack tools
- Complete attack tool programs, with step-by-step guide to installation and use
- Directions on building your own hacker tool analysis laboratory
- Ideas for using tools in penetration testing
- Detailed defensive strategies
- Hands-on exercises to verify your understanding!
Buy the CD-ROM Package
Book -- Counter Hack: A Step-by-Step Guide to Computer Attacks and Effective Defenses, by
- Description of the most widely used attack tools
- Effective defenses for each type of attack
- Includes three in-depth attack scenarios using a variety of attack tools:
- Dial "M" for Modem
- Death of a Telecommuter
- The Manchurian Contractor
Buy the Book
|
OPCFW_CODE
|
You can migrate your projects from using the dbt-spark adapter to using the dbt-databricks adapter. In collaboration with dbt Labs, Databricks built this adapter using dbt-spark as the foundation and added some critical improvements. With it, you get an easier setup, requiring only three inputs for authentication, and more features such as support for Unity Catalog.
- Your project must be compatible with dbt 1.0 or greater. Refer to Upgrading to v1.0 for details. For the latest version of dbt, refer to Upgrading to v1.7.
- For dbt Cloud, you need administrative (admin) privileges to migrate dbt projects.
Previously, you had to provide an endpoint ID, which was hard to parse from the http_path that you were given. Now it doesn't matter whether you're using a cluster or an SQL endpoint, because the dbt-databricks setup requires the same inputs for both. All you need to provide is:
- hostname of the Databricks workspace
- HTTP path of the Databricks SQL warehouse or cluster
- appropriate credentials
The dbt-databricks adapter provides better defaults than dbt-spark does. The defaults help optimize your workflow so you can get the fast performance and cost-effectiveness of Databricks. They are:
- The dbt models use the Delta table format. You can remove any declared configurations of file_format = 'delta' since they're now redundant.
- Accelerate your expensive queries with the Photon engine.
- The incremental_strategy config is set to merge. With dbt-spark, however, the default for incremental_strategy is append. If you want to continue using incremental_strategy=append, you must set this config specifically on your incremental models. If you already specified incremental_strategy=merge on your incremental models, you don't need to change anything when moving to dbt-databricks; but you can keep your models tidy by removing the config since it's now redundant. Read About incremental_strategy to learn more.
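For instance, keeping the old append behavior after migrating might look like the following model config. This is an illustrative sketch; the model body and the stg_events ref are made-up names, not from this guide:

```sql
-- models/my_incremental_model.sql (illustrative)
{{ config(
    materialized = 'incremental',
    incremental_strategy = 'append'
) }}

select * from {{ ref('stg_events') }}
```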
For more information on defaults, see Caveats.
If you use dbt Core, you no longer have to download an independent driver to interact with Databricks. The connection information is all embedded in a pure-Python library called
Migrate your dbt projects in dbt Cloud
You can migrate your projects to the Databricks-specific adapter from the generic Apache Spark adapter. If you're using dbt Core, then skip to Step 4.
The migration to the
dbt-databricks adapter from
dbt-spark shouldn't cause any downtime for production jobs. dbt Labs recommends that you schedule the connection change when usage of the IDE is light to avoid disrupting your team.
To update your Databricks connection in dbt Cloud:
- Select Account Settings in the main navigation bar.
- On the Projects tab, find the project you want to migrate to the dbt-databricks adapter.
- Click the hyperlinked Connection for the project.
- Click Edit in the top right corner.
- Select Databricks for the warehouse
- Select Databricks (dbt-databricks) for the adapter and enter the:
- (optional) catalog name
- Click Save.
Everyone in your organization who uses dbt Cloud must refresh the IDE before starting work again. It should refresh in less than a minute.
Configure your credentials
When you update the Databricks connection in dbt Cloud, your team will not lose their credentials. This makes migrating easier since it only requires you to delete the Databricks connection and re-add the cluster or endpoint information.
These credentials will not get lost when there's a successful connection to Databricks using the
dbt-spark ODBC method:
- The credentials you supplied to dbt Cloud to connect to your Databricks workspace.
- The personal access tokens your team added in their dbt Cloud profile so they can develop in the IDE for a given project.
- The access token you added for each deployment environment so dbt Cloud can connect to Databricks during production jobs.
Migrate dbt projects in dbt Core
To migrate your dbt Core projects to the dbt-databricks adapter from dbt-spark:
- Install the dbt-databricks adapter in your environment
- Update your Databricks connection by modifying your profiles.yml file.
Anyone who's using your project must also make these changes in their environment.
Try these examples
You can use the following examples of the
profiles.yml file to see the authentication setup with
dbt-spark compared to the simpler setup with
dbt-databricks when connecting to an SQL endpoint. A cluster example would look similar.
The first shows what authentication looks like with dbt-spark; the second shows how much simpler it is with dbt-databricks.
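The original profiles.yml listings did not survive in this copy; the sketches below show the general shape of each setup. All hostnames, tokens, paths, and schema names are placeholders, not values from this guide:

```yaml
# dbt-spark (ODBC method): more pieces to assemble, including the
# driver path and an endpoint ID parsed out of the http_path.
spark_profile:
  target: dev
  outputs:
    dev:
      type: spark
      method: odbc
      driver: /opt/simba/spark/lib/64/libsparkodbc_sb64.so   # placeholder path
      host: dbc-XXXXXXXX.cloud.databricks.com
      endpoint: XXXXXXXXXXXXXXXX    # parsed from http_path
      token: dapiXXXXXXXXXXXXXXXX
      schema: my_schema

# dbt-databricks: just the hostname, the HTTP path, and credentials.
databricks_profile:
  target: dev
  outputs:
    dev:
      type: databricks
      host: dbc-XXXXXXXX.cloud.databricks.com
      http_path: /sql/1.0/warehouses/XXXXXXXXXXXXXXXX
      token: dapiXXXXXXXXXXXXXXXX
      schema: my_schema
      # catalog: my_catalog        # optional, with Unity Catalog
```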
|
OPCFW_CODE
|
Data mining is a critical process in the field of artificial intelligence (AI). It involves the extraction of patterns and knowledge from large volumes of data. The data sources can include databases, data warehouses, the internet, and other information repositories. The knowledge obtained through data mining can be used for applications ranging from business management and bioinformatics to web search, healthcare, and even national security.
Artificial intelligence, on the other hand, is a branch of computer science that aims to create systems capable of performing tasks that would normally require human intelligence. These tasks include learning, reasoning, problem-solving, perception, and language understanding. AI is an interdisciplinary field that uses tools and insights from a variety of disciplines including computer science, psychology, philosophy, neuroscience, cognitive science, linguistics, operations research, economics, and mathematics.
Understanding Data Mining
Data mining is a multidisciplinary subfield of computer science, with an overall goal to extract information (with intelligent methods) from a data set and transform the information into a comprehensible structure for further use. It is the computational process of discovering patterns in large data sets involving methods at the intersection of artificial intelligence, machine learning, statistics, and database systems.
The actual data mining task is the automatic or semi-automatic analysis of large quantities of data to extract previously unknown, interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection), and dependencies (association rule mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and used in further analysis or for example in machine learning and predictive analytics.
Types of Data Mining
Data mining can be classified into two categories: descriptive and predictive. Descriptive data mining involves finding patterns that describe the data in a concise, summarized manner. On the other hand, predictive data mining uses these patterns to predict unknown or future values of other variables of interest.
There are several core techniques used in the data mining process, including Association, Classification, Clustering, Prediction, Sequential patterns, and Decision tree. Each of these techniques serves a different purpose and is used to answer different types of questions.
Process of Data Mining
Data mining involves six common classes of tasks: Anomaly detection, Association rule learning, Clustering, Classification, Regression, and Summarization. Each of these tasks is explained in detail in the following sections.
Before the data mining process can begin, the target data must be assembled into a single data set. This data set is then cleaned and preprocessed to remove noise and inconsistencies. The next step is to apply the data mining algorithms to the data set. These algorithms identify patterns and relationships in the data. The final step is to interpret and evaluate the results.
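As a toy illustration of those steps (assemble, clean, mine, interpret), here is a minimal anomaly-detection sketch in plain Python. The data set and the 2-sigma threshold are illustrative choices, not part of the article:

```python
import statistics

def detect_anomalies(values, threshold=2.0):
    """Flag values lying more than `threshold` standard deviations
    from the mean of the (already cleaned) data set."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return [v for v in values if abs(v - mean) > threshold * stdev]

# Assembled and preprocessed toy data set: mostly ~10, one outlier.
readings = [10, 11, 9, 10, 12, 10, 9, 50]

# The "interpret and evaluate" step: the flagged record is the pattern
# of interest (an unusual record, i.e. anomaly detection).
print(detect_anomalies(readings))  # [50]
```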
Artificial Intelligence and Data Mining
Artificial Intelligence (AI) and data mining often employ the same methodologies to derive knowledge from data. The significant difference between the two lies in their purpose. You could say that AI is the broader concept where machine learning and data mining fit in. AI has a more comprehensive scope and is concerned with building smart machines capable of performing tasks that typically require human intelligence.
On the other hand, data mining is about finding valuable information in large volumes of data. With data mining methods, you can find patterns among data and use these patterns to predict future trends and behaviors. Data mining has a lot of practical applications in a variety of fields, such as healthcare, insurance, retail, and many others.
Role of AI in Data Mining
Artificial intelligence plays a crucial role in data mining by improving the accuracy of the results. AI algorithms can learn from the data and improve over time. This learning capability enables AI to adapt to changes in the data and produce more accurate results. Furthermore, AI can handle large volumes of data more efficiently than traditional data mining methods.
AI also makes it possible to automate the data mining process. This automation reduces the time and effort required to analyze large volumes of data. It also eliminates the possibility of human error, which can lead to inaccurate results. Therefore, the use of AI in data mining is becoming increasingly popular.
Applications of AI and Data Mining
Artificial Intelligence and data mining are used in various fields for different purposes. They are used in healthcare for disease prediction and diagnosis, in retail for customer segmentation and sales forecasting, in finance for credit scoring and algorithmic trading, in manufacturing for quality control and maintenance scheduling, and in many other areas.
AI and data mining are also used in social media analytics, where they help in understanding user behavior and trends. They are used in search engines where they help in providing accurate search results. They are also used in recommendation systems where they help in providing personalized recommendations based on user behavior.
Challenges in Data Mining and AI
Despite the numerous benefits of data mining and AI, there are also several challenges that need to be addressed. One of the primary challenges is the issue of privacy and security. Since data mining involves analyzing large volumes of data, it raises concerns about the privacy of the individuals whose data is being analyzed.
Another challenge is the quality of the data. If the data is incomplete or inaccurate, it can lead to incorrect results. Therefore, it is essential to ensure that the data is accurate and complete before it is used for data mining.
To overcome these challenges, several strategies can be used. For instance, to address the issue of privacy, data anonymization techniques can be used. These techniques remove or modify the personal information in the data to prevent identification of individuals.
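One such technique, pseudonymization, replaces direct identifiers with one-way tokens before the data is mined. The sketch below uses a salted hash; the field names, records, and salt are illustrative, not from the article:

```python
import hashlib

def anonymize(records, id_field="name", salt="demo-salt"):
    """Replace a direct identifier with a salted one-way hash so the
    analytical fields can be mined without exposing who is who."""
    out = []
    for rec in records:
        rec = dict(rec)  # leave the caller's data untouched
        token = hashlib.sha256((salt + rec[id_field]).encode()).hexdigest()[:12]
        rec[id_field] = token
        out.append(rec)
    return out

patients = [{"name": "Alice", "diagnosis": "flu"},
            {"name": "Bob", "diagnosis": "cold"}]
anon = anonymize(patients)
print(anon[0]["diagnosis"])                      # "flu" -- analytical value survives
print(any("Alice" in r["name"] for r in anon))   # False -- identifier is gone
```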
To ensure the quality of the data, data cleaning and preprocessing techniques can be used. These techniques help in removing noise and inconsistencies in the data. Furthermore, the use of robust data mining algorithms can help in handling incomplete and inaccurate data.
Future of Data Mining and AI
The future of data mining and AI looks promising. With the advancements in technology, the capabilities of data mining and AI are expected to increase. This will enable them to handle larger volumes of data and produce more accurate results.
Furthermore, the integration of AI and data mining is expected to lead to the development of more advanced systems. These systems will be capable of performing complex tasks that are currently not possible. Therefore, the future of data mining and AI is something to look forward to.
In conclusion, data mining is a critical process in the field of artificial intelligence. It involves the extraction of patterns and knowledge from large volumes of data. The knowledge obtained through data mining can be used for applications ranging from business management and bioinformatics to web search, healthcare, and even national security.
Artificial intelligence, on the other hand, is a branch of computer science that aims to create systems capable of performing tasks that would normally require human intelligence. AI is an interdisciplinary field that uses tools and insights from a variety of disciplines including computer science, psychology, philosophy, neuroscience, cognitive science, linguistics, operations research, economics, and mathematics.
|
OPCFW_CODE
|
Free Multitemporal, Multispectral Global Image Services
In 2008, the secretary of the US Department of the Interior (DOI), Dirk Kempthorne, addressed the audience at the Esri International User Conference and announced that as part of a larger US government initiative to make its data more available, all Landsat scenes in its archives would be available for free. This includes the Landsat Global Land Survey (GLS) datasets, which provide the best worldwide imagery data for how our earth is changing. Last year, Esri announced that it would make this imagery data accessible for free on ArcGIS Online.
Working in close collaboration with DOI, Esri is pleased to announce the release of the Landsat imagery services. These image services enable fast and easy access to 30 years of Landsat imagery as part of ArcGIS Online. Esri is providing this data (more than 8 TB) on ArcGIS Online and serving it as over 20 different dynamic, multispectral, multitemporal image services that provide access to the full image information content, along with change detection capabilities. In addition, Esri has created web maps and an interactive web application that leverage these image services, providing even greater access.
The image services being provided on ArcGIS Online are not just "pretty pictures." They provide dynamic access to all the spectral and temporal information in this massive collection of imagery. These dynamic image services represent on-the-fly processing of the original Landsat scenes that contain all the multispectral, multitemporal information available in the imagery. This enables all the data contained within the imagery to be immediately available for use in maps to provide greater understanding and analysis. Users can define what processing is to be performed on the imagery, and the server performs this directly on the source images, returning the information required for the area of interest. The services are available in different standard band combinations. These band combinations include false color (bands 4,3,2)—useful for vegetation studies and crop growth monitoring, natural color with atmospheric penetration (bands 7,4,2)—best suited for urban studies, and vegetation analysis (bands 5,4,3)—providing the most information for agriculture and forest management. Since these services are also multitemporal, users can turn back the clock and easily analyze how things have changed in their region over the past 30 years.
These image services are available as standard services through ArcGIS Online. Users will be able to build web maps utilizing this information and share their analysis for better understanding and collaboration. Esri has also provided a series of web maps in ArcGIS Online that highlight the usage of these new services and help explain the measurement of change over time and space using the Landsat data. For example, users interested in looking at the change in coastal landforms could access and use the Land Water service in ArcGIS Desktop or ArcGIS Online applications, then use the temporal slider or service properties to define the epoch of greatest interest. Directly using these image services removes the requirement for users to store, manage, or process these large datasets themselves; instead, they can directly use these services as if all the different image products were stored locally.
In addition to providing access to the image services, Esri has created an easy-to-use web-based Landsat viewer for visualizing, analyzing, and detecting change using these image services. This viewer accesses the Landsat dataset as image services. Informative screens make it simple for everyone to understand what they are looking at and how to navigate through the information. The interface enables one-click access to a wide range of various information products with the ability to quickly zoom and pan to anywhere in the world and visualize similar information and trends. Behind the scenes, ArcGIS Server performs all the required processing on the fly. ArcGIS API for Flex was used to build the application.
The Landsat viewer contains fast, easy-to-use change detection tools. Where change occurs, it is easy to visualize the differences by looking at them spectrally over different time periods. The change detection tools enable users to conduct multitemporal image analysis for change through image differencing. The Landsat viewer automatically calculates this information based on the user's selections and displays the resultant information on a change detection map. Users can quickly understand the change that has taken place by visualizing and interacting with it on the change detection map.
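The image-differencing idea behind such change detection tools can be sketched generically. This is not Esri's implementation; the tiny 3x3 "images" and the threshold are made up for illustration:

```python
def change_map(before, after, threshold=30):
    """Per-pixel differencing between two dates: a pixel is marked as
    changed when its value differs by more than `threshold`."""
    return [[abs(a - b) > threshold for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(after, before)]

# Single-band pixel values for the same area at two dates.
before = [[100, 102,  99],
          [101, 100,  98],
          [ 60,  61, 100]]
after  = [[101, 100,  98],
          [100, 140,  99],
          [ 62, 120, 101]]

for row in change_map(before, after):
    print(row)
# [False, False, False]
# [False, True, False]
# [False, True, False]
```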
People worldwide are trying to solve complex environmental challenges, and access to Landsat datasets through ArcGIS Online can make a difference.
For more information, visit esri.com/landsat.
See also "Using Landsat Image Services."
|
OPCFW_CODE
|
//
// CryptoSignTests.swift
// NaOH
//
// Created by Drew Crawford on 3/30/16.
// Copyright © 2016 DrewCrawfordApps. All rights reserved.
//
import Foundation
import CarolineCore
@testable import NaOH
private func signingKey() -> CryptoSigningSecretKey {
#if ATBUILD
var signingPath = "NaOHTests/signing.key"
#else
var signingPath = Bundle(for: CarolineEngineTests.self).path(forResource: "signing", ofType: "key")!
#endif
//fix the permissions on this key so we don't freak out the security goalie
//on iOS 9.3+, we can't edit the permissions of a file in our app bundle. Therefore, we have to copy them to a temporary path so the permissions are valid.
#if !os(Linux)
//copy item at path is unimplemented on Linux, but this feature isn't technically required there.
let newAlicePath = NSTemporaryDirectory() + "/signing.key"
let _ = try? FileManager.`default`.removeItem(atPath: newAlicePath)
try! FileManager.`default`.copyItem(atPath: signingPath, toPath: newAlicePath)
signingPath = newAlicePath
#endif
try! FileManager.`default`.setAttributes([FileAttributeKey.posixPermissions: NSNumber(value: 0o0600 as UInt16)], ofItemAtPath: signingPath)
return try! CryptoSigningSecretKey(readFromFile: signingPath)
}
class GenerateKey: CarolineTest {
func test() {
let tempPath = NSTemporaryDirectory() + "signing.key"
let _ = try? FileManager.`default`.removeItem(atPath: tempPath)
let key = CryptoSigningSecretKey()
try! key.saveToFile(tempPath)
let key2 = try! CryptoSigningSecretKey(readFromFile: tempPath)
self.assert(key.keyImpl__.hash, equals: key2.keyImpl__.hash)
self.assert(key.publicKey.bytes, equals: key2.publicKey.bytes)
}
}
class DeriveKey: CarolineTest {
func test() {
let key = signingKey()
let publicKey = key.publicKey
self.assert(publicKey.bytes, equals: [108, 24, 241, 240, 92, 36, 168, 1, 222, 148, 14, 236, 102, 246, 91, 139, 120, 223, 234, 172, 217, 119, 203, 48, 46, 137, 55, 107, 233, 167, 55, 93])
}
}
class TestSign: CarolineTest {
func test() {
let key = signingKey()
let signature = crypto_sign_detached(testMessage, key: key)
print(signature)
self.assert(signature, equals: expectedSignature)
}
}
class TestVerify: CarolineTest {
func test() {
let key = signingKey()
let _ = try! crypto_sign_verify_detached(signature: expectedSignature, message: testMessage, key: key.publicKey )
}
}
class TestBadVerify: CarolineTest {
func test() {
let key = signingKey()
do {
let _ = try crypto_sign_verify_detached(signature: expectedSignature, message: [0,1,2,3,4,5,6,7,9], key: key.publicKey )
self.fail("Verification succeeded.")
}
catch NaOHError.CryptoSignError { /* */ }
catch {
self.fail("Unknown error.")
}
}
}
private let testMessage :[UInt8] = [0,1,2,3,4,5,6,7,8]
private let expectedSignature :[UInt8] = [4, 244, 70, 136, 33, 148, 129, 144, 162, 31, 77, 0, 108, 2, 212, 156, 146, 10, 4, 210, 53, 100, 124, 2, 109, 49, 43, 19, 122, 38, 163, 140, 93, 73, 38, 110, 20, 114, 206, 69, 33, 228, 14, 216, 85, 62, 186, 88, 230, 241, 229, 103, 55, 224, 57, 143, 55, 73, 20, 93, 233, 71, 241, 8]
|
STACK_EDU
|
overlay animated path+points layer
Hi
would it be possible to make example9 work with a GEOjson file?
This is because usually, other than the animated path (polyline, loaded from gpx or kml), it would be useful to show an overlay layer with the single points (each one with a popup enabled to show date, location and fix number)... and it seems possible only with geoJson (or not?). I'm thinking of converting my gpx file to a geoJson (like attached, adding datetime values...) and then using:
var jsonLayer = L.geoJson(null, {
onEachFeature: function (feature, layer) {layer.bindPopup(feature.properties.Luogo);}
});
$.getJSON("data/tr.geojson", function(json) {
jsonLayer.addData(json);
});
..
and then
var overlayMaps = {
"GPX Layer": gpxTimeLayer,
"pt": jsonLayer
};
...but then I'll have 2 similar files with coords in my project... and that is not a good solution. All gps data should be in a single file that serves both the animated path overlay and the single points layer (the user can choose to enable both or just one).
Another solution would be to find a way to extract the single points from the gpx file you use and base my extra overlay layer on those... but is it possible?
tr.zip
Hi @tecnocoma75 ,
of course you can work with geojson files. You do not need to start from gpx or kml files (converted to leaflet layers through omnivore).
You can check examples 15 and 8 (but 8 is too complex).
Those geojson layers can be regular or time-dimensioned (created with L.timeDimension.layer.geoJson(geoJsonLayer)). With this second option, note that L.TimeDimension will modify your geoJson layer according to the time. Please read https://github.com/socib/Leaflet.TimeDimension#ltimedimensionlayergeojson
In some cases, the default behaviour of L.TimeDimension with geojson layers is not what we need. In example 15, it is overridden so that it does not modify features, only shows or hides them.
Hi,
if I create a json file with unix datetimes, everything goes well... but if I use a UTC datetime (like 2015-12-29T22:30:00Z) the script seems to stop working. Is that normal?
It accepts both timestamps and string datetimes. It should work.
Please, check the console to find any error message.
Or debug the times variable here
I managed to almost do what I was searching for, thanks to your advice.
I modified ex. 9 and ex. 15, having coords and times load from an external geojson file (with time added).
But my solution is not perfect... timedimension generates an error and the overlays don't appear on the map. There is an error in leaflet (but why?), even though the path is created from the geojson. Can you help me please?
Another question: is it possible to launch (and make work) the html directly from a pc, without having to load it on a webserver? (Now I use a xampp distribution, but I would like to distribute the project as a standalone.) It seems that if I run it from my browser (double click on example9.html, for example) and not from a local webserver (http://localhost:81/...), it doesn't work.
Thanx again,
Andrea
ex9.zip
Hi,
you are adding layers to the map before they are created. Keep in mind that $.getJSON is asynchronous: the function inside is executed when the data arrives, but the next javascript instructions are executed immediately.
That is due to security restrictions of the browser when loading local files.
You can run a webserver in that folder easily with python:
python -m SimpleHTTPServer
If you have further questions, can you create a jsfiddle with the problem? You can fork this: http://jsfiddle.net/bielfrontera/5afucs89/
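The asynchronous pitfall described above can be sketched without Leaflet at all; `fetchJSON` below is a made-up stand-in for `$.getJSON`, which likewise invokes its callback later rather than during the call:

```javascript
// Hypothetical stand-in for $.getJSON: the callback runs later,
// not during the call itself.
function fetchJSON(url, callback) {
  setTimeout(function () {
    callback({ type: "FeatureCollection", features: [] });
  }, 0);
}

var loaded = null;
fetchJSON("data/tr.geojson", function (json) {
  loaded = json; // runs later, when the "data" arrives;
                 // this is where jsonLayer.addData(json) belongs
});
// The next statement runs before the callback fires:
console.log(loaded); // null
```

Any code that depends on the layer's data (adding it to the map, fitting bounds, building overlays) therefore has to live inside the callback, not after it.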
Good, as you stated, I managed to solve the problem by using async Ajax... now the geojson loads!
But there are 2 problems with the animation:
I noticed that the effect was different from that of ex. 9 with GPX, where the animation draws a line while moving; now I just have a point moving without leaving a trace on the map. This is not good for my tracking project... how can I make the geojson animation behave like in example 9 (GPX/KML)?
I'd like to have an animated icon on the current position (a pulsing icon). With example 9 GPX there is no problem... my customized icon works thanks to the pointToLayer function
pointToLayer: function (feature, latLng) { if (feature.properties.hasOwnProperty('last')) { return new L.Marker(latLng, { icon: icon }); } return L.circleMarker(latLng); }
but now it seems that the 'last' point doesn't fire anymore... and circleMarker is used (ok, I can replace circleMarker with the icon... and it works... but I'd like to understand why). addLastPoint=true is used to mark the last point so it can have a custom icon, right?
can I use a custom icon for jsonLayer (the layer with points, not the path) and another one (e.g. a flashing icon) for the icon moving along the path in "jsonlayerTimeLayer" (which inherits properties and icon from jsonLayer)? Maybe using the addLastPoint option? Or something else... I'd like to avoid loading the same json twice just to have 2 customized icons...
http://jsfiddle.net/5afucs89/15/
Hi,
have you got an answer to my previous question? Thanks a lot
Your data does not represent a line (Polyline). It's just a collection of points, so you are seeing the correct behavior.
You copied the entire Cdrift example, I think you may not need the redefinition of _getFeatureBetweenDates.
If you want to keep older points, change the duration option or override _getFeatureBetweenDates with your own implementation.
Your code simplified and almost working: https://jsfiddle.net/5afucs89/17/
import calculate from './calculate';
it('returns arranged form of the operation if any operation button is pressed and there is no previous operation', () => {
expect(calculate({ total: '', next: '10', operation: '' }, '+')).toEqual({
total: '10',
next: '',
operation: '+',
});
});
it('returns result of previous operation if any operation button is pressed and there is a previous operation', () => {
expect(calculate({ total: '4', next: '10', operation: 'X' }, '+')).toEqual({
total: '40',
next: '',
operation: '+',
});
});
it('adds pressed button to next if any number is pressed ', () => {
expect(calculate({ total: '4', next: '10', operation: 'X' }, '2')).toEqual({
total: '4',
next: '102',
operation: 'X',
});
});
it('adds pressed button to next if dot button is pressed ', () => {
expect(calculate({ total: '4', next: '10', operation: '' }, '.')).toEqual({
total: '4',
next: '10.',
operation: '',
});
});
it('returns result of the previous calculation if equal button is pressed ', () => {
expect(calculate({ total: '4', next: '10', operation: 'X' }, '=')).toEqual({
total: '',
next: '40',
operation: '',
});
});
it('returns input if equal button is pressed and there is no previous calculation ', () => {
expect(calculate({ total: '', next: '10', operation: '' }, '=')).toEqual({
total: '',
next: '10',
operation: '',
});
});
it('returns one percent of next if percentage button is pressed ', () => {
expect(calculate({ total: '4', next: '10', operation: 'X' }, '%')).toEqual({
total: '4',
next: '0.1',
operation: 'X',
});
});
it('returns result of multiplying next with -1 if +/- button is pressed', () => {
expect(calculate({ total: '4', next: '10', operation: 'X' }, '+/-')).toEqual({
total: '4',
next: '-10',
operation: 'X',
});
});
describe('Dividing zero', () => {
it('returns Infinity if positive number is given', () => {
expect(calculate({ total: '4', next: '0', operation: '÷' }, '=')).toEqual({
total: '',
next: 'Infinity',
operation: '',
});
});
it('returns -Infinity if negative number is given', () => {
expect(calculate({ total: '-4', next: '0', operation: '÷' }, '=')).toEqual({
total: '',
next: '-Infinity',
operation: '',
});
});
it('returns NaN if 0 is given', () => {
expect(calculate({ total: '0', next: '0', operation: '÷' }, '=')).toEqual({
total: '',
next: 'NaN',
operation: '',
});
});
});
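The three "Dividing zero" cases follow directly from JavaScript's IEEE-754 division semantics, which never throw on division by zero:

```javascript
// Division by zero in JavaScript yields signed infinities,
// and 0 / 0 yields NaN -- no exception is raised.
console.log(4 / 0);  // Infinity
console.log(-4 / 0); // -Infinity
console.log(0 / 0);  // NaN
```

The calculator stores these results as the strings 'Infinity', '-Infinity', and 'NaN', which is why the 'Stop calculation' tests then freeze further input.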
describe('Stop calculation', () => {
it('returns input without change if total equals to NaN', () => {
expect(calculate({ total: '', next: 'NaN', operation: '' }, '3')).toEqual({
total: '',
next: 'NaN',
operation: '',
});
});
it('returns input without change if total equals to Infinity', () => {
expect(
calculate({ total: '', next: 'Infinity', operation: '' }, '3'),
).toEqual({
total: '',
next: 'Infinity',
operation: '',
});
});
it('returns input without change if total equals to -Infinity', () => {
expect(
calculate({ total: '', next: '-Infinity', operation: '' }, '3'),
).toEqual({
total: '',
next: '-Infinity',
operation: '',
});
});
});
it('resets everything if AC button is pressed', () => {
expect(calculate({ total: '3', next: '5', operation: '/' }, 'AC')).toEqual({
total: '',
next: '',
operation: '',
});
});
it('updates operation if next is empty string', () => {
expect(calculate({ total: '3', next: '', operation: 'X' }, '+')).toEqual({
total: '3',
next: '',
operation: '+',
});
});
it('returns zero if result of calculation is less than 5*10^-21', () => {
expect(calculate({ total: '1e-20', next: '3', operation: '÷' }, '=')).toEqual(
{
total: '',
next: '0',
operation: '',
},
);
});
Microsoft OneNote 2013 is used to store information in a central location where it can easily be shared, backed up, and searched. In this VTC course, author Brian Culp compares OneNote to a digital version of a spiral-bound notebook as he shows you how to quickly add text, pictures, video, and even handwritten notes. He also explains how OneNote integrates with other Office 2013 applications, especially Word, Excel, and Outlook. Additionally, Brian covers how to leverage powerful features such as the ability to link notes automatically, customize the interface, and quickly track down unread notes. To begin learning today, simply click on the movie links.
(SFX) Hi there and welcome to this Virtual Training Company course Microsoft's OneNote 2013. OneNote is part of the Office 2013 suite of applications and it is a digital notebook application. It replaces paper based notes and includes with it most all of the advantages that you can do with digital replacement of paper in terms of saving things, every note that you take is right there on your computer. You can cut, copy and paste easily and dozens and dozens of other advantages which we will explore in this course. As we start out here I'll launch the application so I'm going to hit the Start button on my computer. I'm running Windows 8, you might be running a different operating system but I would imagine that you know how to launch an application. I'm going to type One and by the time I get time typing One there is OneNote, I'll hit Enter and OneNote will launch. So we're ready to go over what's coming up in this course and in fact I've got some notes on what's coming up in the course. So as I mentioned my name is Brian I'll be your tour guide as we look at the features and functionality of this toolset and at the end of the course I will leave you my email address, contact information so that you can get a hold of me if you have any questions that come up as you're learning this application. Here's what we'll be covering. In the lessons to follow we'll be creating our first notebook and configuring the properties of the notebook. We will then write some notes and then once we've taken some notes, we will manage the notes. We'll manage them into different sections, into different pages and talk about even creating separate notebooks. We'll then format our notes so that possibly you can draw attention to one note versus the other, one note being the number one here. And then we'll talk about some advanced note taking, some note taking with handwriting for example. 
We'll customize the application, we'll customize how it behaves and then we'll talk about collaboration, getting your information from one application to another or sharing OneNote with other folks who might be in your network. So along the way you'll learn how to do cool stuff like this. A note strikes your fancy you'll be able to highlight that note so again just like we can do with pen and paper you can grab a highlighter and highlight something on your legal pad. You can do the same thing digitally and what's nice about the digital world of course is that I can change my mind and erase it with the you know single keystroke here and highlight something else and then change my mind about that. Can't do that with pen and paper of course because once you apply ink to a page it's there. So again, those are the broad topics that we'll be covering throughout the lessons in this course. Stay tuned I think you'll learn a lot.
- Course: Microsoft OneNote 2013
- Author: Brian Culp
- SKU: 34461
- ISBN: 978-1-61866-115-9
- Work Files: No
- Captions: No
- Subject: Business Applications
Subsonic Error (code 70) / Requested action getNowPlaying is not supported
Describe the bug
When I click on an artist and the albums of that artist are displayed, the error message is displayed for each album:
Subsonic Error (code 70)
Requested action getNowPlaying is not supported
To Reproduce
Steps to reproduce the behavior:
Go to 'Added Subsonic Server'
Click on 'Artists'
See the error message per album
Expected behavior
No Error PopUp-Message
Versions:
macOS [e.g. 13.5]: 13.4.1 (c)
Subsonic Server and Version [e.g. Navidrome 0.44, Subsonic 6.1.6]: Nextcloud 27.0.1, With Music v1.8.4 Plugin: (https://github.com/owncloud/music/releases/tag/v1.8.4)
Submariner Version [e.g. 2.2]: 2.3.1
Download Source [e.g. Mac App Store]: Mac App Store
Looks like Nextcloud Music doesn't support the "now playing" functionality (that shows what other users are listening to). It's not a big deal since it should only be touching that endpoint if the now playing sidebar is activated (even if hidden) though. However, there is an auto-refresh timer for it. Disabling that timer in the settings might help with this.
Unfortunately, there is no way to check for featureset beyond Subsonic API version numbers. It's a bit annoying to support multiple servers as a result.
Thanks for your reply. I see; unfortunately it doesn't help to disable the auto refresh for Now Playing. The now playing sidebar is closed. Is there any way not to display the error message in this case? Or e.g. display "not available with the current server" in the sidebar? Submariner is really a great app and I use it despite the error message, although it is annoying. :-)
Looking at the callsites, it does call now playing on each refresh (i.e. via Cmd+R).
Annoyingly, code 70 is also used for when the API exists but the requested object wasn't found. We'd have to keep track of what APIs returned code 70, and check what didn't work at each part that calls it, basically. This is more annoying than being able to just check if it wouldn't work from the beginning, but it'd work.
When I press Cmd+R, the error message 70 appears.
On a newly installed submariner + clean subsonic installation, I have the same issue.
I just got back from vacation, so I can take a look at this again.
I have just tried a few old versions. The problem appears from version 2.2 onwards. Version 2.1.1 runs without the error message code "Subsonic Error (code 70) / Requested action getNowPlaying is not supported". Might this help?
2.2 introduced refreshing the now playing view as part of refreshing normally.
FWIW, I've filed owncloud/music#1079 as a server-side solution for OC Music at least. On the Submariner side, it's a little bit annoying because servers can return different things for unimplemented routes - i.e. OC Music returns code 70 Subsonic responses, whereas Navidrome returns HTTP 404s.
Fixed this, will be available in the next release
Thanks for the fix! I'm already looking forward to the new release :)
Ute is a compact utility with time syncing, Windows shutdown control, wave file playing, delayed/controlled running, CD drawer control, file backup, and wallpaper changing capabilities. It is a collection of ACAPsoft utilities (iTimeSync, Slam and QWave) with four other functions added (Backup, Run, CD and Wallpaper). It is optimised for command line control, but the Wallpaper Changer and iTimeSync can also be operated in window mode.
(N.B. Ute is a 32 bit Windows program, not a command line program.)
All of the functions have logging and settings that are saved for default use. Many of these settings can be overridden or changed when run in Command Line Mode.
- Computer clock syncing via the internet
- Time can be checked or checked and changed by a single button press.
- Supports both the RFC-868 (TIME) and the RFC-2030 (SNTP) protocols.
- Comes with a list of 20 suitable sites. These can be changed to any sites you want.
- Auto advance to next server upon failure.
- Two manual IP inputs.
- Can be set to only update minutes and seconds.
- Average mode.
- Can be set to automatically sync and then quit when Windows starts.
- A positive or negative offset can be set for those that like fast or slow clocks, or for those that live in an obscure time zone.
- Supports logging of errors, successes or both.
- Controllable via the command line.
- Random Desktop wallpaper changer
- Can also be used to change the boot wallpaper.
- Supports BMP, JPG and/or GIF files.
- Searches directories and sub-directories for suitable files.
- It is possible to create three levels of image priority by using different directory names.
- Image is automatically proportionally rescaled (not stretched) to fit the screen and converted.
- The screen size can be automatically calculated or a size can be manually entered.
- The image is centred on the screen taking into consideration the taskbar size, position and settings.
- The wallpaper name and time can be logged.
- The desktop wallpaper or the boot wallpaper can be set via the command line.
- Individual images can be selected via the command line. (Useful as a boss mode!)
- Copy changed files to any selected directory. A database is kept of files to keep track of changes.
- Backed up files are easily accessible without any special software.
- Multi-staged backup allows up to 10 change states to be stored.
- Quickly play any common sound file with only a 5.5 KB program.
- Can be set to automatically work via the file context menu.
- Files can be terminated prior to completion via an icon in the system tray.
- Command line shutdown
- Supports shutdown, power off, restart, log off, hibernation and suspend.
- Variable time warning and confirmation mode.
- Ute can force a shutdown to close un-responsive programs or for emergency shutdowns.
- Remote shutdown (but not power off) or restart networked computers.
- Shutdown events can be logged.
- Run mode
- Give variable warning or delay prior to a program running.
- Ask permission prior to a program running
- Set the process priority of a program to Idle, Normal, High or Realtime.
- Log when a program runs.
- Open/Close CD drive
- Set up a command line shortcut to toggle the open/closed setting of a selected CD drive.
- Set up a command line shortcut to open or close a selected CD drive.
- Multi-user setups are supported and most functions of Ute will run without any problems on a low privilege system and Windows Vista/7. (The iTimeSync function requires Admin privileges to change the time.)
- Ute is written in 100% Assembly Language and is very small and memory efficient. The program itself is around 100KB!
- Ute can be run from a USB key and does not even need to be installed. Simply select "Extract" from the installer and then copy "Ute.exe" wherever you want! Our installer also supports installation on Windows Vista without UAC prompts, as well as installation on low privilege systems.
30 Day Unlimited Free Trial!
I use three parts of Ute: Slam for one-click shutdown of my PC, Qwave as an extremely fast starting audio file player and the new and unique DVD-Drive open/close feature bundled in one shortcut only. Absolutely reliable, small, causes almost no system drain. A typical ACAPSoft program with a cheap price I would heavily dislike to miss.
- Dr Mike
This useful program combines tools to adjust your system time to an Internet source, change wallpaper, and execute shutdown functions. Ute's basic, Windows-dialog-style interface presents its main tools in tabs, though most functions were designed to run at the command line....
.....Small, removable-drive-friendly, and helpful for many users, Ute is a decent addition to your toolbox.
- CNET (Rated 4/5)
In this article, we will discuss solutions for the modulenotfounderror: no module named 'torch' error, which is often encountered by programmers who are new to the Python programming language.
Table of contents
- What is PyTorch?
- Why does the modulenotfounderror: no module named torch occur?
- What are the causes of the no module named torch error?
- How to solve the modulenotfounderror: no module named torch error?
Before we start, let's first discuss what PyTorch is.
What is PyTorch?
PyTorch is a deep learning library that is compatible with different hardware configurations, such as the Central Processing Unit (CPU) and GPUs. The installation procedure for PyTorch differs slightly between hardware configurations.
Why does the modulenotfounderror: no module named torch occur?
The ModuleNotFoundError: No module named torch error usually occurs if you try to import the torch module but it is not installed in your Python environment, or the Python interpreter cannot find the installed torch module on your system.
What are the causes of the no module named torch error?
The "no module named torch" error usually occurs when the Python interpreter cannot find the PyTorch package/module in your PYTHONPATH environment.
Here are the multiple reasons which are possible causes for this error:
1. PyTorch is not installed:
If PyTorch is not installed in your Python environment, you cannot import it. You can install the PyTorch module using pip or conda.
2. Incorrect installation path:
If PyTorch is installed but its installation path is not on the PYTHONPATH environment variable, the Python interpreter will be unable to find it. Make sure the installation path is added to PYTHONPATH correctly.
3. Incorrect Python version:
PyTorch requires a compatible version of Python. Make sure that you are using a supported version of Python.
4. Virtual environment issue:
When you are using a virtual environment and PyTorch is installed outside of it, you must first activate the virtual environment or install PyTorch inside the virtual environment.
5. Name conflict:
If you have another module with the same name as PyTorch (for example, a local file named torch.py), it will cause a conflict and block PyTorch from being imported. Make sure that there are no naming conflicts in your environment.
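A name conflict like this can be reproduced in a few lines. `shadow_demo` below is a made-up module name, but the mechanism (a file in a directory earlier on `sys.path` wins) is exactly what happens when a stray `torch.py` shadows the installed PyTorch package:

```python
import os
import sys
import tempfile

# Create a decoy module in a temp directory and put that directory first
# on sys.path -- the decoy then gets imported instead of any real package
# of the same name, just like a local torch.py shadows installed PyTorch.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "shadow_demo.py"), "w") as f:
    f.write("WHO = 'decoy'\n")
sys.path.insert(0, tmp)

import shadow_demo
print(shadow_demo.WHO)  # 'decoy'
```

Renaming or deleting the shadowing file resolves this kind of conflict.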
6. Incorrect spelling:
Check the spelling of the module name "torch" in your import statement so that you can avoid the "no module named torch" error.
How to solve the modulenotfounderror: no module named torch error?
To solve the modulenotfounderror: no module named torch error, you can follow the steps below:
For Windows Installation
Step 1: Check if torch is installed
You can check whether torch is installed on your computer by running the following command in your terminal:
pip show torch
If the command reports that the package was not found, the torch module is not installed on your system, and you need to proceed to the next step.
Step 2: Install the torch package
If step 1 shows that the package is not installed, install the "torch" package with the pip manager by running the following command:
pip install torch
After you run the command above, it will install the torch package.
If you are using Python 3, you can use pip3 command instead of pip.
pip3 install torch
Step 3: Check whether torch is installed
If torch is already installed, make sure you are executing your code in the same environment where torch is installed. You can inspect the installed package in your environment with the following command:
pip show torch
If it is installed, the command will show information such as the name, version, summary, author, location, etc.
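You can also check from inside Python itself, which verifies the exact interpreter your code runs under. This is a generic sketch; it is demonstrated with the always-present `sys` module so it runs anywhere, and you would substitute "torch" to test your own environment:

```python
import importlib.util

def module_available(name):
    """Return True if the module can be found on this interpreter's path."""
    return importlib.util.find_spec(name) is not None

# 'sys' ships with Python, so this prints True everywhere;
# module_available("torch") tells you whether *this* interpreter sees torch.
print(module_available("sys"))
```

Running this inside your script or notebook rules out the "installed in a different environment" cause, since `find_spec` searches the same path the `import` statement would.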
If the error still continues, you can proceed to the next step, which is uninstalling and reinstalling.
Step 4: Uninstalling and reinstalling
If the error still persists, uninstall and reinstall the torch module with the following commands:
pip uninstall torch
After you run the command above, it will remove the torch module from your system and a message like this will appear:
Successfully uninstalled torch-1.13.1
Then reinstall it with pip install torch, as in step 2.
For Anaconda Installation
If you are using Anaconda, you can run the following command in your Anaconda prompt to install the torch package:
conda install pytorch torchvision torchaudio -c pytorch
After the command completes, conda will report the installed packages.
For Ubuntu Installation
If you are using Ubuntu, you can use the following command in your Ubuntu terminal to install the torch package with pip (PyTorch is not distributed as an apt package):
pip3 install torch
For Jupyter Notebook Installation
If you are using Jupyter Notebook, you can run the following command in a notebook cell to install the torch package:
!pip install torch
After the command runs, the notebook will show the installation output.
To conclude, we have discussed solutions for the error ModuleNotFoundError: No module named 'torch' on different platforms, including Windows, Ubuntu, Anaconda, and Jupyter Notebook.
15 questions linked to/from What, when and will we migrate questions to MO 2.0?
Voting to close a question because one thinks it is extremely specialised
My question was recently voted to be closed. One member said it was because the question is extremely specialised. I don't think it is extremely specialised. It's a criterion of a prime being ...
What qualify questions to be in mathoverflow and not in math.stackexchange?
I have read that https://mathoverflow.net/ is only for the very advanced mathematics, such as upper graduate level or research level, counter to https://math.stackexchange.com/ which is for any ...
Why are there only three sites to which a question can be migrated?
In voting to migrate a question, I am allowed only three sites to which it might be migrated: math.meta.stackexchange.com, stats.stackexchange.com, physics.stackexchange.com In particular, ...
I found MO has been included in SE network today
Is it okay to cross post questions on MO and MSE anymore? How about ask a question on MSE after posting on MO but not getting answers, or vice versa?
Upper Bound for Difficulty of Questions on Math.SE
I know that there has been previous discussion about the inappropriateness of certain questions for mathoverflow.net, such as questions pertaining to elementary notions of compactness or ...
questions asked simultaneously on other SE sites
I noticed today that a user had asked the same question both on math.SE and on cs.SE (see https://math.stackexchange.com/questions/181291/which-languages-are-decidable#comment417845_181291 and https://...
Exact duplicates of questions posted on MO
I wonder why we can't declare a question posted on MSE (like this) an exact duplicate of a question posted on MO (like this) and viceversa? (I'd find it useful if we could do this.)
MathOverflow 2.0 is a-comin'!
Not a question; more of a notification. For those who are unaware, MathOverflow is upgrading to the "new" StackExchange platform. A recent update on meta.mathoverflow.net: Anton Geraschenko 3 ...
Since when does MathOverflow become one of SE family? [closed]
I am really sorry, it has been a while since I am back on Math.SE, and since when when does MathOverflow become one of SE family? Won't Math.SE and MathOverflow overlap each other?
Moderator Supported (Official) Guidelines for "Legitimate" CrossPosting?
I recently posted a question in stats/cv that could just as easily be posted here. As a matter of fact, it feels like it should be in both places! I've searched meta and have seen some related ...
Is there a way to decrease cross-posting issues?
I know there are tons of meta questions about cross-posting already. This question isn't about the cross-posting etiquette per se. It might just be a personal bias or the questions I tend to click on, ...
Can questions be "shared" within sites?
Most questions on Math.SE clearly belong on this site, while others are perhaps better answered elsewhere. But there are questions that, it seems to me, really fit well in more than one site. An ...
When migrate a question to Mathoverflow?
I have some questions that received some upvote but no answer in many months. For someone of these I've started a bounty, but without success (one example was: Multitangent to a polynomial function, ...
Why can't Math StackExchange questions be flagged as belonging in a site other than math meta, stats, or physics? [duplicate]
I have noticed a few questions that don't belong on math StackExchange, math meta, stats StackExchange, or physics StackExchange. I cannot flag them as belonging in another site, so I just post a ...
What is a research level math question? (Ie what types of questions should be asked on MathOverflow and not here?) [closed]
I have some questions about material I learned in a graduate level mathematics course which are largely conceptual, and/or requests for relevant theorems/results. However, in general I am not sure if ...
SQL database definition differencing tool. Structure and data is defined in a DTD-enforced, human-readable XML format. Outputs transactional SQL statement files to apply your changes.
NOTICE: Due to dependency updates, DBSteward 1.4.0 was the last version to support PHP 5.3 and 5.4. Please upgrade your run-times to at least PHP 5.5 before upgrading to DBSteward 1.4.2+
Subscribe to the DBSteward Announce mailing list
Post your question to the DBSteward Users mailing list
What / who is DBSteward for?
Intended users are application developers and database administrators who maintain database structure changes as part of an application life cycle. Defining your SQL database in a DBSteward XML definition can greatly lower your release engineering costs by removing the need to write and test SQL changes.
Many developers maintain complete and upgrade script versions of their application databases. Upgrade headaches or data loss are reduced by only requiring a developer to maintain a complete definition file. Creating an upgrade from version A to B becomes a compile task, where you ask DBSteward to generate SQL changes by feeding it A and B versions of your database in XML.
Are you technical and tired of reading this FAQ already?
Using DBSteward to generate or difference a database definition: https://github.com/dbsteward/dbsteward/blob/master/docs/USING.md
Installing DBSteward with Composer / PEAR: https://github.com/dbsteward/dbsteward/blob/master/docs/INSTALLING.md
XML format examples and anecdotes: https://github.com/dbsteward/dbsteward/blob/master/docs/XMLGUIDE.md
Software development best practices: https://github.com/dbsteward/dbsteward/blob/master/docs/DEVGUIDE.md
Slony configuration management examples: https://github.com/dbsteward/dbsteward/blob/master/docs/SLONYGUIDE.md
Frequently Asked Questions
There can be nuances to working with DBSteward for the purpose of generating or differencing a database. Please review this FAQ to aid your development efforts when employing DBSteward.
1. What are these input and output files?
In the following examples, the definition file is someapp_v1.xml. For more information on the DBSteward XML format, see https://github.com/dbsteward/dbsteward/blob/master/docs/XMLGUIDE.md
When building a full definition ( dbsteward --xml=someapp.xml ), DBSteward will output a someapp_v1_full_build.sql file. This SQL file contains all of the DDL, DML, and DCL to create an instance of your database definition, with all operations in foreign-key dependency order.
When generating definition difference between two definitions ( dbsteward --oldxml=someapp_v1.xml --newxml=someapp_v2.xml ), DBSteward will output several upgrade files, segmenting the upgrade process, with all operations in foreign-key dependency order.
- Stage 1
- DDL ( CREATE, ALTER TABLE ) changes and additions to database structure, in foreign-key dependency order
- DCL ( GRANT ) apply all defined permissions
- Stage 2
- DML ( DELETE, UPDATE ) removal and modification of statically defined table data
- DDL cleanup of constraints not enforceable at initial ALTER time
- Stage 3
- DDL final changes and removal of any database structure no longer defined
- Stage 4
- DML ( INSERT, UPDATE ) insert and update of statically defined table data
2. How does DBSteward determine what has changed?
DBSteward's approach and expectation is that developers only need to maintain the full definition of a database. When run, DBSteward will determine what has changed between the definition XML of two different versions of the database, generating appropriate SQL commands as output.
DBSteward XML definition files can be included and overlay-composited with other DBSteward XML definition files, providing a way to overlay installation specific database structure and static data definitions.
DBSteward has 2 main output products of XML definition parsing and comparison:
- Full - output a 'full' database definition SQL file that can be used to create a complete database based on the XML definition.
- Upgrade - output staged SQL upgrade files which can be used to upgrade an existing database created with the first XML definition file, to be as the second XML file is defined.
DBSteward creates upgrade scripts as the result of comparing two XML definition sets. As a result, upgrade file creation does not require target database connectivity.
DBSteward is also capable of reading standard Postgresql pg_dump files or slurping a running Postgresql database and outputting a matching XML definition file.
3. Why use DBSteward to maintain database structure?
Maintaining database structure with DBSteward allows developers to make large or small changes and immediately be able to test a fresh database deployment against revised code. The updated definition is then also immediately useful to upgrade an older version to the current one. Being able to generate DDL / DCL / DML changes can greatly simplify and speed up database upgrade testing and deployment. At any point during a development cycle, a DBA can generate database definition changes instead of having to maintain complex upgrade scripts or hunt for developers who made a database change.
4. What SQL RDBMS output formats does DBSteward currently support?
DBSteward currently supports output of Postgresql 8 / 9, MySQL 5.5, and Microsoft SQL Server 2005 / 2008 compliant SQL commands. DBSteward has an extensible SQL formatting code architecture, allowing additional SQL flavors to be supported rapidly.
5. How do I get started?
To start tinkering with the possibilities, install DBSteward with Composer, following https://github.com/dbsteward/dbsteward/blob/master/docs/INSTALLING.md
You will also need the xmllint executable (available from libxml2) installed in your PATH.
You can also get a source checkout at git://github.com/dbsteward/dbsteward.git It is runnable in source-checkout form, as php bin/dbsteward.php
6. How do I convert an existing database to DBSteward definition?
7. I have an existing project; how do I migrate to using DBSteward?
Examples of structure and data extraction can be found on the Using DBSteward article https://github.com/dbsteward/dbsteward/blob/master/docs/USING.md
8. Can I define static data in DBSteward XML?
Yes, you can. Static data rows will be differenced, and change DML will be generated in the stage 2 and stage 4 .sql files. You can find examples of defining static data in the table user_status_list of the someapp_v1.xml sample definition. Be sure to keep your static data rows in each version: they are compared for changes, additions, and deletions each time you build an upgrade.
9. How do I define legacy object names, such as columns named order or tables called group, without getting 'Invalid identifier' errors?
Use --quotecolumnnames or --quoteallnames to tell dbsteward to use identifier delimiters on all objects of that type, allowing reserved words to be used as object names.
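For illustration, the delimiter characters involved differ by SQL flavor. This small sketch is not DBSteward's own code, and the flavor keys are labels chosen for the example, but the delimiter characters themselves are standard for each dialect:

```python
# Illustration only: identifier delimiters in the SQL dialects DBSteward
# targets. The flavor keys here are assumptions, not DBSteward identifiers.
DELIMITERS = {
    "postgresql": ('"', '"'),
    "mysql": ("`", "`"),
    "mssql": ("[", "]"),
}

def quote_ident(name, flavor):
    """Wrap an identifier so reserved words like 'order' or 'group' parse."""
    open_q, close_q = DELIMITERS[flavor]
    return open_q + name + close_q

print(quote_ident("order", "postgresql"))  # "order"
print(quote_ident("group", "mysql"))       # `group`
print(quote_ident("order", "mssql"))       # [order]
```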
10. Why are views always dropped and re-added?
SQL server implementations expand SELECT * and implicitly bind column types when creating view definitions from query expressions. Rebuilding these views ensures that the types and column lists in a view remain consistent with the dependent tables providing the data.
11. Where are my slonik files? Why aren't my slony configuration details being honored?
Slony slonik configuration files are not output during structure definition or diffing unless you use the --generateslonik flag. This is to streamline the development vs DBA replication staff roles in the development lifecycle.
12. Do I just pick a slonyId? What's the rhyme or reason with slonyIds?
slonyIds can be completely arbitrary, but it is recommended to allocate them in segments. Example: IDs 100-199 are reserved for user tables, IDs 200-299 are for forum relationships and post data, IDs 500-599 for forum full text search tables, and so on.
13. How do I define, replicate, and upgrade a database I have defined with DBSteward and want to replicate with Slony?
See the Slony slonik output usage guide https://github.com/dbsteward/dbsteward/blob/master/docs/SLONYGUIDE.md for examples.
14. What are some recommended best practices for the software development lifecycle?
See the DBSteward Development guide https://github.com/dbsteward/dbsteward/blob/master/docs/DEVGUIDE.md for detailed examples.
|
OPCFW_CODE
|
As part of Digital Labor: Sweatshops, Picket Lines, Barricades (#DL14), I am going to talk about my recent work on affective computing and facial recognition technologies in November at The New School, NY.
I am invited to present my recent work on affective computing at the Graduate Program in Media Studies at Pratt Institute, NY, on November 18. In this project I aim to investigate new forms of affective computing in relation to the history of regulating social affect, as part of the genealogy of techniques for categorizing human subjects, or “making up people,” as Ian Hacking calls it, referring to the role of nineteenth-century official statistics, through which different kinds of human beings and human actions came into being hand-in-hand with the invention of categories for labeling people and their behaviors. Within that genealogy, my research highlights how the coevolution of techniques of categorization and the social regulation of affect is interwoven with figures of European “others” (Jews, Muslims, and natives from the colonies) in order to legitimize the media techniques used for extracting typologies that create penal, medical, and moral norms.
As part of the 4S (Society for Social Studies of Science) 2013 meeting, I will be in San Diego presenting my recent work on the cultural history of affective computing. My presentation, titled “Education of Artificial Desire: Natural Language Algorithms and Crowdsourcing Sentiment Analysis,” is based on my research on the use of crowdsourcing as a new form of the division of cognitive labor and its application to sentiment analysis problems. The question that arises from this cluster of technologies is how we address formations of subjectivities in relation to algorithmic representations of affect that feed on the collective sentiment of networked crowds.
My first book chapter is coming out soon. Digital Labor is edited by Trebor Scholz and includes essays by Andrew Ross, Tiziana Terranova, Patricia Clough, McKenzie Wark, Christian Fuchs, Jonathan Beller, Michel Bauwens, Abigail De Kosnik, Ned Rossiter, Lisa Nakamura, Sean Cubitt, Jodi Dean, and Mark Andrejevic.
I have been awarded a Wikimedia Foundation Virtual Community Fellowship. I will study the dynamics of knowledge production in the Wikipedia virtual community. This will be a highly valuable contribution to my dissertation research, which focuses on crowdsourcing as part of the historical process of the automation of intellectual labor.
I am invited to the Summer Institute organized by the Consortium for the Science of Sociotechnical Systems. The Institute will bring together scholars to share ideas on ongoing research projects.
I will give a presentation at UCSB on February 4th. This will be my second visit to the beautiful UCSB campus three years after our Transliteracies workshop in 2007. Here is the link to the event announcement at UCSB Department of English: Lecture: Ayhan Aytes, “Crowdsourcing, Mechanical Turk and the Cultural History of Cognitive Labor Apparatus”
|
OPCFW_CODE
|
import pickle

import nltk
import pandas as pd
from nltk.util import ngrams  # only needed if the trigram filter is re-enabled

# Input: a pickled list of records ("inds"); each record's third field
# (ind[2]) is a list of POS-tagged Russian lemmas.
path = "poverty_clustering/"
ind_file = "june_inds_all.pkl"
output_file_name = path + "june_inds_poverty_by_kw"

with open(ind_file, "rb") as pkl_file:
    inds = pickle.load(pkl_file)

# Poverty-related search terms: [unigrams, bigrams, trigrams].
# Glosses: "low-income", "care/guardianship", "subsistence", "unemployment",
# "underprivileged"; the bigrams mean "monetary aid" and "material aid".
keywords = [
    ["малоимущий_ADJ", "попечение_NOUN", "прожиточный_ADJ", "безработица_NOUN", "малообеспеченный_ADJ"],
    ["денежный_ADJ помощь_NOUN", "материальный_ADJ помощь_NOUN"],
    []
]
# Optional stop lists, currently unused (see the commented-out checks below).
exclude = [
    [],
    []
]

# Keep every record whose token list contains at least one keyword
# unigram or bigram.
output = []
for ind in inds:
    if len(ind[2]) > 0:
        bigrm = [" ".join(b) for b in nltk.bigrams(ind[2])]
        #trigram = [" ".join(t) for t in ngrams(ind[2], 3)]
        if (
            (
                any(word in ind[2] for word in keywords[0])
                or any(b in bigrm for b in keywords[1])
                #or any(t in trigram for t in keywords[2])
            )
            #and not any(word in ind[2] for word in exclude[0])
            #and not any(b in bigrm for b in exclude[1])
        ):
            output.append(ind)
print(len(output))

# Save the matches both as a pickle and as a UTF-8 CSV.
with open(output_file_name + ".pkl", "wb") as output_file:
    pickle.dump(output, output_file)

df = pd.DataFrame(output)
df.to_csv(output_file_name + ".csv", index=False, header=True, encoding="utf-8")
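The core of the filter above, matching against keyword unigrams and adjacent-pair bigrams, can be sketched without nltk using zip. This is a minimal self-contained illustration; the token sequences below are made up for the example, in the same tagged-lemma format as the script:

```python
def matches(tokens, unigrams, bigrams):
    """Return True if the token list contains any keyword unigram,
    or any adjacent pair that joins into a keyword bigram."""
    pairs = [" ".join(p) for p in zip(tokens, tokens[1:])]
    return (any(u in tokens for u in unigrams)
            or any(b in pairs for b in bigrams))

# Hypothetical tagged-lemma sequences for illustration.
doc1 = ["семья_NOUN", "получать_VERB", "денежный_ADJ", "помощь_NOUN"]
doc2 = ["семья_NOUN", "помощь_NOUN", "денежный_ADJ"]  # words present, not adjacent

print(matches(doc1, [], ["денежный_ADJ помощь_NOUN"]))  # True
print(matches(doc2, [], ["денежный_ADJ помощь_NOUN"]))  # False
```

Note that bigram matching is order- and adjacency-sensitive, which is exactly why the script builds the joined bigram strings before testing membership.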
|
STACK_EDU
|
Never argue with idiots; they'll drag you down to their level and then beat you on experience.
A follow-up to my previous post; so these social dynamics exist. And sometimes there can be "earthquakes" that create even more mountains and valleys, shrinking the plateaus and pushing them closer to the margins. What form has that taken in BGG werewolf?
In mid-2015, something happened that was consequential in a number of ways; the original Cassandra site (the off-site resource where votes are tallied, werewolf players have chats with the mods, wolves make decisions, game statistics are recorded, etc.) crashed. In the medium to long term, this was frustrating to people like me who really care about the old archives and being able to browse past games; I and others spent a lot of time rebuilding it.
But in the short term, it was much less practical to play games at all, because we didn't have the automated "this is how many votes are for this player!" tally.
*Politics Metaphor Follows*
This should not be taken as indicative of my RL political views, because I believe that there are many steps the government can take to promote gun control and create a safer, healthier society. (If you disagree, this isn't the place for that discussion.)
Nevertheless, there's an idiom in the US that kind of applies here, which is "if you outlaw guns, only outlaws will have guns." To generalize this to the point of extreme vagueness, I would say, "if a certain tool disappears overnight, only the people who are very invested in keeping it around will find immediate substitutes."
I feel like, to an extent, this happened with Cassy. Smaller rolesets became easier for mods to manually handle; one of the classic small rolesets here is the "no reveal niner," a game that often leads to some loud counterclaim battles. Historically, it's often been run early in the day compared to some games, and sometimes tends to attract louder, more aggressive "mountain" players. This is anecdotal and I can't demonstrate it rigorously, but it feels like during those few months, there were relatively more games that catered to mountain players, whereas the valleys were more (in the aggregate) like "eh, Cassy's down, don't wanna make more work for my adorable mods, let's pass."
Eventually Cassy came back (and crashed again, and was rebooted again, the following year, by which time I'd drifted away and couldn't tell you the effect even if there was one). And around maybe the start of 2016, the valley type players were more, "hey, can we have a place where our touchy-feeliness gets a say?" There were games built that deliberately encouraged a "mod will intervene if things get out of hand, please speak up if your emotions are being affected" environment, and though this didn't specifically target a valley audience, it sort of wound up that way.
So the total effect was a larger separation between mountain (slope)s and valley (slope)s. Again, this is all anecdotal, but I feel like that timeframe gave some of these people more opportunities to avoid troublesome interactions--but at the cost of making it more difficult for the plateaus, because almost every game would feature at least one emotionally volatile person.
Madeline's thoughts on social deduction games, forum/community meta, and any other philosophical musings
|
OPCFW_CODE
|
Preamble: Excluded from this document?
How can I state that a thing is specifically excluded from a document in the preamble?
NOTE: This document does not address the population of the data prior to running a specific batch letter report, nor does this document address the process after printing has been completed.
Doesn't seem appropriate. Your input would be welcome.
-jjj
In what respect do you find this inappropriate?
I feel that the "NOTE:" element is the equivalent of a superscript reference, and should really feel more like:
"Caution, do not expect the following to be in the document."
Batch Letter Processing currently leverages Crystal Reports to generate a batch of reports on-the-fly.
These reports are then processed into the related member’s My Documents store on the Imaging server.
After those reports have been successfully imported, the letters are then sent to the Front Desk for printing and mailing the physical letter to the member(s) as appropriate.
NOTE: This document does not address the population of the data prior to running a specific batch letter report, nor does this document address the process after printing has been completed.
Taken from document preamble
The following may be more concise:
This document addresses neither the population of the data prior to a specific batch letter report run, nor the process once the printing is completed.
I like what you have here.
The idea is to use it as a statement for non-technical staff. Do you think there is a simpler way to put it, without seeming condescending? If you had administrative staff that were expected to read this line, would you expect them to understand right away? If that person was an intern, would you expect the same?
I've eliminated the repetition of "this document" and "address", simplified the construction and negatives. Actually, the 'warning' is pretty concise here. I don't find anything condescending. The two technical aspects are: 'population of data' and 'batch letter report run'. If they can understand these, there's nothing more complex in it, I think :-)
Batch Letter Processing currently leverages Crystal Reports to generate a batch of reports on-the-fly. These reports are then processed into the related member’s My Documents store on the Imaging server. After those reports have been successfully imported, the letters are then sent to the Front Desk for printing and mailing the physical letter to the member(s) as appropriate. NOTE: This document does not address the population of the data prior to running a specific batch letter report, nor does this document address the process after printing has been completed.
The idea is to use it as a statement for non-technical staff.
"Inappropriate" is putting it mildly. "This document" tells its audience a whole bunch of stuff they probably don't understand and probably wouldn't care about if they did, and fails to tell them the stuff they probably want to know. Let's try translating it into English.
HR has sent us their quarterly whine about errors and delays in the Member Reports, and we thought it might be helpful to tell you what actually happens here in IT.
We do these reports in batches, because it's a Real Pain for Real Programmers (even the ignoramuses HR keeps sending us) and Real Staff to drop their Real Work and run, format, print, or package a single document thirty times a week. So every Thursday at 3:00 (give or take fifteen minutes) we tell the system to process the names that have accumulated since the last run. Approximately a quarter of a second later (it can run as high as a third of a second on a busy week) the system dumps one copy of each report in the appropriate member's folder (assuming HR has actually remembered to create it) and another copy in a folder on Doris' desktop.
(Doris Collins is our senior secretary; she's got an MA in English from Vanderbilt, she's been here twice as long as anybody else, she's four times as efficient as anybody else, and makes a quarter as much. She's slated to retire in August '17, and when HR replaces her with a half-literate and wholly uneducated Business BA they'll have something real to whine about.)
Doris prints out the reports (this usually takes about forty minutes because our enormously expensive state-of-the-art ultra-high-speed color printer routinely chokes every four or five pages on the cheap pulp paper Purchasing assures us is 'rated' for this use) and hands it off to the latest intern HR has 'hired' (no pay, no benefits, and it's still more than they're worth) as a favor to somebody in Upper Management. The intern rolls his eyes and abandons pestering our (more-or-less) Real Programmers and carries the printouts over to the HR Front Desk.
What happens to them after that is out of our hands; we can't oversee HR's operations. And we can't oversee the data they put into the system either. We could probably help them avoid most of their errors with a better interface—if it weren't for the fact that this entire system was written in 1985 in dBase III (the programming equivalent of, say, the Latin of the Venerable Bede), and the last guy we had capable of understanding the code was bullied into early retirement by (who else) HR in the Great Cost-cutting Purge of '03.
You are a very funny person. This was just about brilliant. You forgot to mention that after they bullied him into retirement, they begged him to come back on contract for triple pay.
|
STACK_EXCHANGE
|