Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
Table of Contents
Add-on providers can improve the development experience of their Add-on by inserting data into the app’s log stream. This article explains how to connect your service to an app’s log stream via Logplex.
To gain access to the app’s Logplex token, which will give you the capability to write to an app’s log stream, you will first need to store the token that is submitted in the Add-on provision request. More details of the provision request can be found in the Add-on Provider API Spec. If you have not previously stored the logplex_token from the provision request, you can use the App Info API to query for the token.
Data must be delivered to Logplex via HTTP. Using keep-alive connections and dense payloads, you can efficiently deliver all of your customer’s logs to Logplex via HTTP. Here is cURL example demonstrating a simple HTTP request:
$ curl "https://east.logplex.io/logs" \ --user "token:t123" \ -d "62 <190>1 2013-03-27T20:02:24+00:00 hostname t.123 procid - - foo62 <190>1 2013-03-27T20:02:24+00:00 hostname t.123 procid - - bar" \ -X "POST" \ -H "Content-Length: 130" \ -H "Content-Type: application/logplex-1"
Note that basic authentication is required. The username is
token and the password is the
logplex_token value returned by the App Info API.
Checkout all of the API details in the Logplex API Docs.
The headers of the request should contain Content-Length & Content-Type. The value of Content-Length should be the integer value of the length of the body. The value of Content-Type should be application/logplex-1.
The body of the HTTP request should contain length delimited syslog packets. Syslog packets are defined in RFC5424. The following line summarizes the RFC protocol:
<prival>version time hostname appname procid msgid structured-data msg
You can use
<190> (local7.info) for the prival and
1 for the version. The time field should be set to the time at which the logline was created in rfc3339 format. The hostname should be set to the name of your service. (e.g. postgresql) The appname field is reserved for the app’s Logplex token. The Logplex token is required to write to an app’s log stream. The procid is not used by logplex but is a great way to identify the process which is emitting the logs. Similarly, the msgid, and structured-data is not used by logplex and the value
- should be used. Finally, the msg is the section of the packet where the log message can be stored.
You should format your log messages in a way that is optimized for both human readability and machine parsability. With that in mind, log data should:
- Consist of a single message
- Use key-value pairs of the format
- Use a
sourcekey-value pair in log lines for distinguishing machines or environments (example:
- Show hierarchy with dots, from least to most specific, as in
- Units must immediately follow the number and must only include
These are log events that would benefit from statistical aggregation by a log consumer:
Periodically, your service may inject pre-aggregated metrics into a user’s logstream. The event could include the total number of database tables, active connections, cache usage:
sample#tables=30 sample#active-connections=3 sample#cache-usage=72.94 sample#cache-keys=1073002
Our experience logging for Postgres suggests that once a minute is a reasonable frequency for reporting aggregate metrics. Any higher frequency is potentially too noisy or expensive for storage/analysis. Be mindful of this when choosing to periodically log metrics from your add-on.
A reference implementation is provided at ryandotsmith/lpxc. This implementation highlights batching and keep-alive connections.
|
OPCFW_CODE
|
In 2021, Realtime Robotics began working with Schaeffler’s New Production Concepts department (NPC) to support Special Machinery (SMB), which develops and realizes individual turnkey production solutions. In this case, NPC functions as a pre-development department to identify, benchmark and develop new production technologies that can be used by SMB in the future, to help both internal and external customers improve the overall efficiency and effectiveness of their operations.
New Production Concepts learned of Realtime Robotics’ innovative motion control and collision avoidance software, RapidPlan, and wanted to test it thoroughly in a very complex and unpredictable environment to explore its potential benefits. This was done by building a simultaneous dual robot bin-picking application in a shared workspace.
The initial goal of the application was to optimize their use of robotics by having two robots pick parts from the same bin, collision-free. Previously, this operation was only able to be done by one robot at a time, or by two robots in a separated workspace or each using a separate bin. Because the robots needed to be separated to prevent the chance of collisions, additional space was required.
In addition, when multiple robots are in use, and the parts are in an unpredictable location each time, the robot programming needs to be very precise and complex, as the location and actions of each robot need to be tightly managed. The results exceeded expectations, showing that running two separate robots together would dramatically improve the bin-picking time.
Improving Production Speed
The New Production Concepts team had reviewed other potential solutions and found them unable to do more than improve the action of a single robot at a time. This is why Realtime Robotics’ RapidPlan software was so attractive to Schaeffler. The promise of two robots working together in close quarters without collision was tantalizing. Not only could production be sped up, and more complex cells be navigated, but the available space on the assembly line floor could be better optimized to increase production.
Traditionally, to avoid collisions, each robot has to wait for the other one to complete the task prior to moving into its space. With RapidPlan, users can truly capitalize on maximum productivity by adding additional robots to their application. You don’t need interference zones, which limit robots from moving too close to other robots, because RapidPlan releases the robot space reservations as soon as they are completed, allowing all robots to work simultaneously and next to each other, picking and placing parts within close proximity. The required cell footprint can be shrunk significantly.
Results Exceed Expectations
Most would think that the addition of a second robot to operations would double throughput – but in reality companies typically only see about a 28-30% overall throughput increase. In the past, adding a second or third robot makes the programming that much more complicated, as you have to account for interlocks.
With RapidPlan, Schaeffler was able to improve throughput without significantly complicating the programming layers. They were targeting a 60% throughput increase, and the results far exceeded the expectations with an almost double throughput increase.
RapidPlan improved throughput dramatically for Schaeffler while eliminating the chance of time-consuming robot collisions. Needless to say, the team was thrilled with the results.
As a system integrator, you often have to use a range of simulation platforms and tools that are specific to the brands of robot being used. But, because RapidPlan is built to be robot manufacturer-agnostic, companies can use it to streamline their cell commissioning process.
In the past, operators wouldn’t know for sure if the simulated actions would match up correctly to real-life, meaning that rigorous and manual testing was needed. Schaeffler discovered that with RapidPlan they were able to quickly simulate the robot cell, gaining a holistic cell simulation that made it easy to plan out how the robots would move, with the simulation software being the same as what powered the real-life robot actions.
This has proven to be such a valuable feature that New Production Concepts is using it as a simulation tool and will further evaluate the technology in production as a potential tool for SMB. Plans are to put RapidPlan to work in a project where three robots would work together to build a bearing unit.
For more information: www.rtr.ai
|
OPCFW_CODE
|
voltage of input(+5V) is greater than the supply(+3.3V) provided for 74HC245 buffer
In my design I am using 74HC245 buffers between a input signal of +5v and controller pins of +3.3V. What is the supply value I have to use for buffer (74HC245). I want to use +3.3V as the supply, but in data sheet it is mentioned that maximum VIH is Vcc. Please suggest me what happens if I provide +3.3V as supply voltage and feed input pins +5V?.
I am providing the link for buffer data sheet below.
http://www.nxp.com/documents/data_sheet/74HC_HCT245.pdf
Thanks in advance
I think if you found a 5V tolerant input family that ran at 3.3V (LVC if I recall correctly), that could work
Thank you for the suggestion it helped a lot in my design
If you want to translate 3.3V signals to 5V then you must power the 74HC245 with 3.3V, because at 5V it might not reliably detect a 3.3V 'high' logic level. However it will also only output 3.3V, which may not be enough to reliably drive 5V logic.
Any 5V inputs will inject current through the protection diodes and out the Vcc pin into the 3.3V supply. Depending on how 'stiff' the supply is, this may either exceed the protection diode's current rating, or raise the 3.3V supply to ~4.4V (which would be bad news for anything on the 3.3V side that can't handle the higher voltage).
You could wire resistors in series with the I/O's on the 5V side to limit diode current, but they will slow down the signal rise and fall times - and make the 5V output level problem worse.
If you power the 74HC245 with 5V then you have similar problems, but on the 3.3V side.
So the answer is:- use a proper level translating buffer such as the 74LVCC4245A, which is powered by 3.3V on one side and 5V on the other.
In this instance you should be using logic family that's 5V tolerant, traditionally LVC. Also some "advanced" (AHC, VHC etc) logic families are 5V tolerant.
If you exceed VCC in HC logic, the protection diode will clamp the voltage to VCC so absolute input is usually defined as VCC + 0.3V. You may get away with using series resistor to limit the current but this is not recommended. With the 5V tolerant chips there may be no input protection so you might cook the chip whatever series resistor you have if you exceed the input voltage limit.
Heh, lots of mays and mights there. But yes, it really depends and you should consult the datasheet. Those 5V tolerant inputs can be treacherous as the cmos junction can break with 6-7 volts. Depending on application a brute force solution of adding a LED in series to drop the voltage to 3.3 may be acceptable but you then need a pull-down resistor which acts as LED series resistor as well. This is also handy if you have no easy access to LVC chips although they're not rare.
If the input current is higher than the clamping current then you could destroy the chip. Use resistors to limit the input current to less than the clamping current.
|
STACK_EXCHANGE
|
Marvellousnovel Top Tier Providence, Secretly Cultivate For A Thousand Years – Chapter 62 – Nineteen Sects Attack The Jade Pure Sect! distinct grain propose-p3
Novel–Top Tier Providence, Secretly Cultivate For A Thousand Years–Top Tier Providence, Secretly Cultivate For A Thousand Years
Some Christian Convictions
Chapter 62 – Nineteen Sects Attack The Jade Pure Sect! river ocean
Yue Chen Yin
It absolutely was still Guan Yougang.
“This make any difference is way too fishy. These sects didn’t ask me and directly pursued Mo Fuchou for vengeance. Every single injustice does have its perpetrator each credit debt have their debtor. I can admit that. However when I visited apologize, I had been beaten up?
Exactly what a pity.
What an conceited strengthen!
His phrase was indifferent.
[Huang Jihao carries a positive feeling of yourself. Latest favorability: 4 superstars]
Huang Jihao in the Vermilion Pet bird Sword Sect!
These words brought on Huang Jihao’s coronary heart to overcome wildly.
Would be the Vermilion Bird Sword Sect seeking to stab the alliance within the back?
Han Jue shook his go and took out of the Reserve of Misfortune. He cursed Xiao’e for several days to vent his rage ahead of carrying on to cultivate.
Top Tier Providence, Secretly Cultivate For A Thousand Years
What performed he think about the Terrific Yan Cultivation Environment now?
Han Jue experienced a weird manifestation.
joe burke’s last stand
Han Jue elevated his eye-brows.
Li Qingzi immediately grew to become irritated and scolded, “Those canines. Personally, i visited apologize, yet they directly attacked me. In most cases, I didn’t also have the chance to talk. You apparently have in mind the central disciple who dropped to the demonic pathway. His brand is Mo Fuchou. He didn’t fall under the demonic direction on his very own accord. I suspect that someone is defaming him.”
It is going to bring him 30 seconds to kill Daoist Nine Cauldrons!
Take a look at me?
“Prepare yourself effectively. In three days at most of the, the Jade Absolutely pure Sect will make to flee towards the sea and seek refuge with the Ancestor.”
Li Qingzi observed there was anything fishy about it topic. All things considered, he experienced found Mo Fuchou prior to. He was simple, and even if he could not hold back from eradicating the adversary, he really should have delivered just after the very first time. He shouldn’t have already been trying to find vengeance and went mad.
Huang Jihao of your Vermilion Bird Sword Sect!
Daoist Nine Cauldrons left, while Guan Yougang was severely hurt.
Sensing a well known atmosphere, he immediately sent his divine perception out.
This classic man’s mobility methods were definitely too absurd!
Mo Fuchou had destroyed his method to recognition. The righteous and demonic sects acquired created an alliance and had been preparing to infiltration the Jade Absolutely pure Sect. There are nineteen sects associated.
Now, in excess of five sects have been going after Mo Fuchou. Some time ago, Mo Fuchou possessed no option but to implement his family’s Magical Power. Eventually, it was actually a demonic path Mystical Electrical power, triggering him to become a demon.
1 year quickly pa.s.sed.
Amazed, Li Qingzi asked thoroughly, “What should you really mean?”
[Huang Jihao carries a ideal impression of you. Present favorability: 4 actors]
|
OPCFW_CODE
|
In this blog post we will present a first look at the performance of Group Replication (GR), now that the RC is out. The goal is to provide some insight on the throughput, latency and scalability one can expect in a modern computing infrastructure, using GR in single- and multi-master configurations, but mostly focused on single-master.
→ If you’re in a hurry, here’s the summary: the performance of GR is great!
1. The basics of how it works
GR introduces the ability to make MySQL servers collaborate and provide a true multi-master architecture where clients can write to the database through any of the members of a GR group. The master-slave model of traditional replication is now replaced with a peer-to-peer model in which all servers are masters of their clients workloads.
Servers in a distributed database system must unequivocally agree on which database changes are to be performed and which are to be refused, independently of the server a client connects to or the moment in which he does it. When a transaction is ready to commit, the changes are sent by the client-facing member to all other members in a totally ordered broadcast message using a group communication system based on Paxos. After the changes are received and certified by the members, and if no incompatibilities arise, the changes are committed on the client-facing member and asynchronously applied on the remaining members.
For more information regarding what is going on behind the scenes, from a performance point of view, please check this post: /tuning-mysql-group-replication-for-fun-and-profit/
2. Throughput and scalability
The following charts present the throughput achieved by Sysbench RW and Update Indexed using GR with several group sizes, comparing it to a standalone MySQL server (with the binary log active). This will allows us to show GR’s behaviour on a well known workload but, as any synthetic benchmark, it is no proper replacement for testing with the actual workloads and infrastructure that will be used in each actual deployment.
We measured both the peak throughput – the best that the clients can get from the client-facing servers (with flow-control disabled) – and the sustained throughput – the best the system as a whole can withstand in the long run without having some of the members lagging behind the others. For these first charts we use the best throughput achieved in any thread combination.
Sysbench Read-Write throughput
This chart shows several points that make us quite proud:
When all write requests are directed to a single member (single-master) GR achieves around 80% of the throughput of a standalone MySQL server, dropping only slightly (3%) when going from groups of 2 members up to 9 members (that’s the largest group supported for now);
When write requests are distributed between multiple members (multi-master), the sustained throughput can even be slightly larger than that of a standalone MySQL server and event with 9 server groups we can still get 85% of that;
Also we can get peak throughputs that almost double the capacity of a standalone server when using multi-master, if we disable flow-control and allow transactions to buffer and apply later;
Finally, on Sysbench RW the MTS applier on the members allows them to keep-up with the master (in single-master mode) even at the maximum throughput it can deliver.
Sysbench Update Index throughput
The Sysbench Update Index benchmark shows what happens with GR in workloads with very small transactions (more demanding for replication, with a single update per transaction in this case). With smaller transactions the number of operations that GR must handle is higher and the overhead on GR itself becomes proportionally larger.
Even in this scenario GR performs at very competitive levels. In particular, the scalability on the network is still great, and going from 2 to 9 members brings a very small degradation in both single-master and multi-master scenarios. Nevertheless, there is work to do to bring the sustained throughput closer to the peak and both closer to the standalone server.
The next sections expand on these numbers for the single-master case. A posterior blog post we will focus on the performance of multi-master configurations, including the balance/fairness between members, impact of conflicting workloads, etc.
3. Throughput by client (single-master)
Group Replication takes over a transaction once it is ready to commit and it only returns to the client once the certification is done, which happens after a majority of the members in the group agrees on an common sequence of transactions. This means that Group Replication is expected to increase the transaction latency and to reduce the number of transactions executed per second for the same thread count until the server starts reaching its full capacity. Once the number of clients is high enough to keep the server busy the added latency may be hidden entirely, but then scheduling many threads introduces its own overhead that limits that effect.
On the chart above one can see that at the same number of client threads there is a small gap between the number of transactions per second that GR and a standalone server can deliver. That gap is on average [12%-19%] between 2 – 9 members, but with 70 client threads the TPS gap between GR and the standalone server is very small (4%-11%). At their maximum throughput the difference between GR and a standalone server goes to 18%-21%, again from 2 to 9 member groups.
So, we are very please to see that GR is able to achieve around 80% of the standalone server performance on single-master configurations, even using the largest groups supported.
Even on very small transactions we are still able to achieve around 60% of the throughput of a standalone server. The latency gets hidden as we increase the number of clients, as on Sysbench RW, but the number transactions that need to go through certification and applied is much larger and becomes a limit in itself.
4. Transaction latency (single-master)
As mentioned above, the transaction latency was expected to grow with GR, but the chart above shows that in the tested system the Sysbench RW transaction latency increase is rather contained (16% to 30%, from 2 to 9 members).
That latency increase depends mostly on the network, so the added latency changes only slightly between the Sysbench RW and the Sysbench Update Index, even if the payload sent is several times larger in Sysbench RW. The chart above shows that for this kind of workloads the latency increase is between 0.6ms to 1.2ms from 2 to 7 members, going up to 1.7ms with 9 members.
5. Benchmarking setup
In order to show GR throughput without interference from the durability settings we used sync_binlog=0 and innodb_flush_log_at_trx_commit=2 during the tests. In multi-master we used group_replication_gtid_assignment_block_size=1000000 and the results presented are from the highest from using 2 or all members of the group as writers of non-conflicting workloads.
The tests were executed using Sysbench 0.5 in a database with 10 million rows (the database fits in the buffer pool) and the median result of 5 runs is used.
The benchmark were deployed in an infrastructure constituted by 9 physical servers (Oracle Enterprise Linux 7, 20-core, dual Intel Xeon E5-2660-v3, with database and binary log placed on enterprise SSDs), with another server acting as a client (36-core, dual Intel Xeon E5-2699-v3), all interconnected by a 10Gbps Ethernet network.
We have spent quite a bit of effort optimizing the performance of Group Replication and this post presents a brief overview of the result now that the RC is being released.
Other performance-related posts will follow soon to focus specifically on the multi-threaded applier, on the group communication system itself and on multi-master scenarios.
|
OPCFW_CODE
|
Help me bring the newest Focusrite Scarlett (and Vocaster and Clarett!) interfaces to Linux!
Hi, I'm Geoffrey, and I'm the author of the Linux Focusrite Scarlett Gen 2+ mixer driver and GUI.
Back in 2018 I purchased a Focusrite Scarlett 18i20 2nd Gen audio interface. While these interfaces work well “out of the box” on Linux, they have extra functionality that is controlled through a proprietary protocol. At that time there was no Linux support for this functionality, but being a curious Linux/programming geek I spent months staring at USB protocol dumps to reverse-engineer the proprietary interface, and developed a Linux kernel driver so any Linux user with one of these could access the full functionality of their device.
Five years later, the driver supports 11 [Edit: now 13] different models of Focusrite Scarlett and Clarett interfaces, and we have a nice GUI as well.
Getting to this point has been a significant investment as it's difficult to develop without the hardware in front of me. Except for a generously donated 4i4 3rd Gen, I've personally acquired one of every Scarlett Gen 2/3 model to develop against and make sure that everything works and continues to work as well as possible.
Focusrite have recently released 3 new "4th generation" models, the Solo, 2i2, and 4i4, and I would like to add support for those. I'm reaching out to ask for your financial support to do this.
Raised funds (from the initial goal of $1,133) will go towards:
- Solo 4th Gen: AUD$205 ✅ (have this, working on it now)
- 2i2 4th Gen: AUD$299 ✅ (have this, working on it now)
- 4i4 4th Gen: AUD$439 ✅ (have this, working on it now)
[Edit: have these all now, and have decoded nearly everything that needs to be decoded. Will have a driver soon! Please contact me by email if you can participate in pre-release testing.]
Any extra funds raised will go towards purchasing the other remaining Focusrite interfaces which are not yet supported: the Vocaster One and Two [Edit: the Clarett+ 2Pre and 4Pre are now done!].
- Vocaster One: AUD$189
- Vocaster Two: AUD$379
- Clarett+ 4Pre: AUD$989 ✅ (this is done now!)
I'll provide updates on my development progress at https://linuxmusicians.com/ in the Computer Related Hardware forum.
While donations are incredibly appreciated, sharing this with others can be just as helpful.
|
OPCFW_CODE
|
Newbie here, just finished building my first nixie clock -- check this beauty out:
I replaced the original ESP-01 with a Wemos D1 Mini since I wanted to play around with the firmware and I know how difficult it is to flash the ESP-01. First thing I want to do is be able to blank the tubes/LEDs on command using the web server. I looked at the code and it seems I could set 'blanked = true' and 'blankMode = BLANK_MODE_BOTH' and then call what procedure? Please pardon my ignorance if that is so simplistic. I'm obviously not an expert.
Also, I would be grateful for any pointers on what part of the code I should be looking at to be able to push data (for example, external temperature reading) into the display using the Wemos I2C.
For the blanking, there is an easy way and a right way...
The easy way is to use the configuration URL directly to switch between "Never Blank" (option 0) and "Blank always" (option 3) like this:
The right way is to expand the I2C protocol to allow a blanking command to override the internal blanking value. To do thiis you'll need to work yourself into the protocol and define new verbs for "BLANK ON" and BLANK OFF". They are not too hard to implmenent, but you'll need to update the Atmel code and the Wemos code. It's not hard, but it is involved.
For pushing data, there is already a way to do that on Nixiefirmware V2, but not so far on the V1, using /setvalue. If you have a look at lines 1193 onwards in this file:
For pushing data, there is already a way to do that on Nixiefirmware V2 ...
I take it the All In One unit I just built is on V1. So, is an upgrade to V2 an option for me? I'm confused by all these version numbers. The construction manual states "IN-14 All in One Clock Rev3", the user manual is for "Classic Rev4, Rev5 & All In One" and then mentions "Firmware V56". What is V1 and V2 Nixiefirmware?
Also, I just want to say that the instructions for building the All In One clock is quite concise and very easy to follow. I had to stop and watch some YouTube videos to see how soldering SMD components is done. Other than that construction was a breeze. However, it took me almost two hours to set up my unit. Why? Because I didn't know I needed to look at the "WiFi Time Provider v1" unit construction and operating instructions manual. Some frustration and wasted time could have been avoided easily if this manual was mentioned in the All In One instructions as part of setting up.
Now, all of these things - the version numbers, user manuals, setting up - are probably boring, old news for everyone on this forum. But for someone who's coming in cold, not so obvious.
Nevertheless, I'm very grateful for your attention and patience in answering my newbie questions. I'll be looking at the code you mentioned to learn about pushing data. [Did I mention I love that clock?]
The short answer is that V.2 firmware is for hardware that uses "NeoPixel" style LEDs under the tubes, and V.1 is for hardware that uses conventional RGB LEDs. The only kits I know for sure can run the V.2 firmware are the Modular and the Classic Rev.6. V.2 isn't exactly an upgrade. It's more a new version to accommodate new hardware.
I think where Ian was going with the V.2 reference is that it would be perfectly reasonable for you to look at V.2, take what you want from it and sort of graft it in to V.1. This would be non-trivial but not terribly difficult.
I can't speak on version numbering and manuals except to say there's always one more thing that needs changing / updating / improvement.
Look into it later when the dust is clearing off the crater.
Okay, thank you. That clarified things quite a bit. I went back to the NixieClock store again and looked at all the kits being sold. I think I've got a good handle on what each one actually represents.
|
OPCFW_CODE
|
How would you like to MASTER graphic design by next week?
Click here to find out how
Featured Photoshop templates - professional ready to use designs for your next project
View all templates
Excel Time Tutorials
Excel Date & Time Functions
Dates and times are stored as numbers in Excel and count the number of days since January 1st, 1900.
Excel VBA Date and Time
This chapter explains how to get the year, month and day of an Excel VBA date, how to get the number of days between two dates, how to add a number of days to a date, how to get the current date and time, how to get the hour, minute and second of the current time and how to convert a string to a time serial number.
Excel 2003 Using Range Names
In this tutorial, you find out how to use all three of these procedures to save time in a worksheet that you access and edit on a somewhat regular basis.
Long Data Entries with AutoCorrect
AutoCorrect is Excel's attempt to save you time when doing data entry in a spreadsheet. It anticipates what you enter and, if necessary, corrects it or converts it into special types of data such as live hyperlinks and smart tags
Data Entries with Data Validation
This tutorial acquaints you with a very versatile and, in the long run, great timesaving feature that Excel calls data validation.
Sharing Workbooks on a Network
This technique covers the different ways you can share a workbook so that different people can edit its contents at the same time.
How to make True Time Scale Line Chart Using Scatter Graph
Adjusting time scale on x-axis in standard line chart could be a headache. It is kind of a trial and error process to make the time scale fit in, distribute the labels properly and last date being shown correctly at the end of x-axis.
Instant Range Formatting
Right after data entry comes formatting. When building a new spreadsheet, you don't stop to take the time to assign formats to the data entries you make (by typing dollar signs, commas, and the like).
Set your Spreadsheet Workspace
An Excel workspace (sometimes known as an arranged workspace) is really just a special file that keeps a record of all the workbooks open at the time you save the workspace.
Saving Time with Excel Add-ins
Excel add-in programs provide a quick-and-easy way to extend the basic features of Excel. Most of the add-in programs created for Excel offer you some kind of specialized function or group of functions that extend Excel's computational abilities.
Run Procedures on Protected Worksheets
Excel macros are a great way to save time and eliminate errors. However, sooner or later you might try to run your favorite Excel macro on a worksheet that has been protected, with or without a password, resulting in a runtime error. Avoid that problem with the following tutorial.
Automatically Add Date/Time to a Cell upon Entry
Enter a static date, or date and time, into a corresponding cell after data is entered into other cells.
Run a Macro at a Set Time
Many times it would be great to run a macro at a predetermined time or at specified intervals. Fortunately, Excel provides a VBA method that makes this possible.
Use CodeNames to Reference Sheets in Excel Workbooks
Sometimes you need to create a macro that will work even if the sheet names that it references change.
Connect Buttons to Macros Easily
Instead of giving every button its own macro, it's sometimes more convenient to create a single macro that manages all the buttons.
|
OPCFW_CODE
|
Cost of education in Germany in 1900
I am curious how much did the German universities charge their students in late 19th early 20th century. I am specifically interested in the math/science education at University of Göttingen, one of the leading institutions of its time.
For comparison, here is what UPenn charged - they have an amazingly helpful page on this:
http://www.archives.upenn.edu/histy/features/tuition/main.html
Of course, the actual numbers are quite meaningless without much more in the way of context, such as the cost of a loaf of bread, or a steak dinner, or one month's rent of a small apartment.
@Pieter Sure, I agree. I am that number is a precise figure which could in principle be found in some historical documents. I am confident the answer for the latter are easy to find, see e.g. here: http://www.coll.mpg.de/pdf_dat/2009_18online.pdf
That document covers a 62 year time period. Even a measly 2.5% inflation rate will result in real values changing by a factor of 3.8 to 4.0 over that time period. If you want true equivalence, you need to use sources much closer to your year of interest, not averages. The document compares contemporary wages across nations, which is a much different beast from same-nation wags across time periods.
Look up Rule of 72 in a Finance context (sometimes called Rule of 70, which is slightly more accurate at higher rates of return, but 72 is easier to use in your head). http://mruniversity.com/courses/development-economics/rule-70
Believe me, I know about enough about exponential functions. The point is the document give average values for specific years like 1905. This is good enough for me. But I have no clue about the tuition, which is all I am asking.
I suspect you will probably have to do this research in German, and possibly onsite in Germany.
Yeah... My main research is in a different subject, so can't really do that... but I do care for the question, so will accept any reasonable estimates...
I located archives for a couple of German universities - but everything including the instructions was in German - which does unilingual me no good.
@PieterGeerkens: Would be nice if you can sent the links to have a look at it.
Nothing. Absolutely nothing.
EDIT: I have asked an older student and before the 1970s there was in fact
a so-called "Hörergeld" "listener money" which was in the range of 100-200 Mark (comparable to 30-45 $) for half a year.
The interesting thing is that is was not for the university, but for
the professor, so while there was charging, the answer is still correct.
I myself paid 120 Mark for half a year, but this gave me the right for
unlimited public transport in the area, so I did not count that.
The Hörergeld during the 1900s should have a comparable range (neglible
for wealthy students, perceptible to students working part time), because
Albert Einstein lamented that there was opposition to allow very poor
students listening to the courses.
This may come as a complete shock to people especially from the US
but the concept of Universities charging their students was/is completely
foreign in Germany. The running costs are paid by taxes from the government.
There was always the firm belief in Germany that people have a right of
education. This was so ingrained that students
had severe discounts on lodging, visiting libraries, cinema and public
transport.
lived together with many people in bigger apartments to share the rental
or living in subsidized lodgings, "Studentenwohnungen".
had after the 1970s a right to get financial support from the government,
the so called "Bafög"
Even worse:
there were no limits how long your study take. You could choose to do
it in minimum time or 30 years long.
you were not obliged to attend a lecture. You could completely disregard
the lectures and study yourself, the only thing you needed was to pass
the tests. So many people were able to do jobs part time and were able
to finance their education.
In fact, I am one of the German students who did his "Diplom" with exactly
this conditions. Now you may think that it may have changed, but
you need only to read Mark Twains "A Tramp abroad" in the 1880s to see that
it was the same in old times.
During the Bologna process starting nearly exactly with the beginning of
the 21. century the old process was "reformed", changing Dipl. to Bachelor
and Master and introducing charges as "Studiengebühren".
But due to problems with the organization and general disappointment with the
system, "Studiengebühren" were mostly scrapped again.
Thank you! This is very helpful and convincing enough. I do want to find primary sources for this though.
|
STACK_EXCHANGE
|
(Disclaimer: I'm just a software engineer that happens to work in a quantum research center. This answer is based on my understanding of NV centers, which is limited to what I need to know for my software work plus things I pick up here and there, and takes a lot of shortcuts. I hope someone more knowledgable can correct the mistakes I'm bound to make).
Unfortunately, I think I need to explain a few concepts in order to answer this. I'm probably taking a lot of shortcuts and butchering a few concepts, in particular playing fast and loose with electron and nuclear spin. I hope it is more or less clear.
A diamond is a lattice of carbon atoms. A carbon atom has 4 valence electrons, meaning each carbon atom can bind to four other atoms. An NV center is a naturally occurring deficiency in the diamond lattice where a nitrogen (N) atom has replaced a carbon atom and in addition a neighboring carbon atom is missing (the vacancy V). This NV combination has 5 sort of "free" electrons: 3 due to the missing carbon atom, 2 from the nitrogen atom that has 5 valence electrons but only 3 are bound to nearby carbon atoms. The NV center may actually capture a 6th electron from its environment, which makes the NV center negatively charged. This negatively charged NV center has all kinds of funky properties (see e.g. Wikipedia or any of the numerous resources). For the purpose of this explanation it is sufficient to think of the negatively charged NV center as having a single "free" electron.
Electron spin resonance
For this story, three properties of electrons are relevant:
- electrons can be excited by absorbing a photon
- once excited, the electron will decay after some time and when doing so they release a photon
- electrons have a spin (-1, 0, +1)
Resonant excitation of an electron means that you bring it to the excited state by shooting a photon of a specific frequency at it (read: laser). If the frequency is wrong, the electron will not be excited. The crucial thing here is that the spin state of the electron determines what their resonant frequency is. Also, once excited, they don't necessarily decay to the same spin state. I won't go into details, but the result of this is that if you manage to bring the electron in e.g. the 0 spin state, and you shoot at it with a laser that is resonant with that state, you will initially see the electron emitting photons as it decays (read: shoot really short laser pulse and see if after some time you see a photon come out). But after a while (it's a stochastic process) the electron will decay to a different spin state that is not resonant with your laser and you won't see any photons come out any more.
The negatively charged (because of the extra electron) NV center is a magnetic dipole. If you apply an external magnetic field to the NV center, it will start precessing (here comes your gyroscope). As it precesses, the spin state will change. Usually this is used to manipulate the spin state as a qubit: by applying radio frequency and microwave frequency EM fields to the NV center, you manipulate the spin state to do "computations". Then you shoot a laser at it to see in which state it ended up (think: photon = 1/no photon = 0, but that is a very rough analogy and breaks down in many ways I don't go into).
How the gyroscope works
I'm going by the paper @GrapefruitIsAwesome linked; I'm not sure if that is the same as the one they're launching, but I assume the concept is the same.
They (the paper) are allowing the magnetic dipole to precess freely in a stationary magnetic field. If you rotate the sample, the orientation between the NV dipole and the external field changes, which affects the spin state. They "read" the spin state by shooting laser pulses at it with a frequency that is only resonant with one of the states. The number of photons detected is a measure of the spin state and by proxy a measure of how much the sample precessed, from which they can then compute the gyration rate.
What it could look like
Educated guess, based on the building blocks as described in the paper:
- Diamond sample. This looks less exciting than one might think: it roughly looks like a small PCB with a bunch of strip lines and contacts to apply the RF and MW EM fields. This page has a microscopic photo of one. The "bubble" is a lens etched in the diamond. (I don't think I'm allowed to post photo's from what I work on...).
- RF and MW sources to generate EM fields to manipulate the qubit.
- One or more lasers to initialize and "read" the qubit.
- Control electronics.
I'm surprised they can manage to squeeze that in a Cubesat form factor...!
|
OPCFW_CODE
|
I don't agree with the current way in which homework is done. I think it should be optional whether to do it or not. But, Learning from home can help with actually learning, Because people will learn in a way they want to learn instead of having a teacher lecturing them and 30 others in front of a shitty white board.
A little bit of light homework is ok. I'm not saying we should get stacks of homework, But some light homework helps you remember the topics in school. Some stress is brought upon you by homework, But stress is required to push you to do things. Of course some students are put under too much stress, But I'm just talking about a little bit of homework.
Homework is designed to focus a child's home learning at a certain topic. If set up correctly, Homework can be very beneficial for students.
However, The current homework system puts much pressure on students. Surely you have heard of the well-known "I have (insert large number here) assignments for tomorrow! "
Why do you think some students come to school sleep-deprived?
It's because they cannot catch up to the overbearing load of homework, Or that their capability level is not yet suitable for the task at hand. However, Pupils still get penalised equally.
Furthermore, Homework provide a way for schools to control their pupils' lives even more. When students get home, They cannot relax but must do homework, For school's sake! Family time, Quality time and resting time is lost because of homework. They call it "Half of our evening wasted on random knowledge"!
Don't you understand? I disagree with a complete ban, But there should be less pressure about homework.
Takes too much time, Takes up free time/family time. Plus it basically damages mental health. Plus we already have too much work in school JUST TO COME HOME FOR MORE? Nuh uh. I don't think so. Plus it doesn't really do anything except makes us stressed out. Whoever opposes this doesn't know what they r talking about.
Nevermind my lack of capitilization, Im not here to seriously debate. Just my opinions :]. Homework is unnecessary. Adding on stress we already have at school and carrying it with us to home. Not useful and has caused many people meltdowns. Why do more work at home? School's way of teaching things is, In my opinion, Absolute garbage.
Homework hurts low-income kids. Now that we live in a digital world, A lot of homework is online, But a lot of low-income kids don't have access to internet or a device, Or both. Also, Older low-income kids have to work jobs, And working and doing homework doesn't really fit.
Because kids have a life at home and we are already spending hours on school then after we are done you want us to do homework In real life and they will just forget the moment they graduate we will never need this stuff later in life teach us the ways to make money to get a good inference
I am in high school right now and i am just bombarded with assignments and homework. I don't like doing it but if i never had homework then i think i wouldn't even pass 8th std. If given in a right amount then it will be really helpful for kids.
There should be no Homework, But also No Term Papers, No Science Projects, No Book Reports, No Lab Reports, No State Exams,
Students are under far too much stress, Taught mostly useless crap that they will Never use in the real World, Tax Payer dollars are being wasted to teach students useless crap they will never use in the real World, In real life and they will just forget the moment they graduate
|
OPCFW_CODE
|
Go to GB GALS for the latest Sizzlin and Trending Now News & Photos
Good anti-virus / anti spyware programs.
Posted 11 August 2005 - 02:20 AM
Posted 11 August 2005 - 07:17 PM
Posted 27 March 2007 - 07:03 PM
Posted 30 March 2007 - 08:07 AM
Posted 12 June 2007 - 10:20 PM
Posted 11 June 2008 - 01:27 PM
I use NOD32, the Internet Security one, not only the antivirus. The NOD32 Internet Security version has: antivirus, antispyware, personal firewall, and an antispam module (in case you use an email client for example Outlook, Eudora, Thunderbird, etc)
It's really fast and has a very good heuristic. This means, that the software will resolve a problem, such as an infection, faster than others. You can update it when you desire it, or you can set it so the software does it automatically, in my case cos' I'm a paranoid person, the update is every hour.
Now, the weakness that I find in this antivirus is the computer scan, when you want to scan your hard disk, you have to start it manually every time, there's no setting here, so if you forget to do it manually, your computer will never be scan. This is annoying for me, because with my older antivirus I could set the software scan starts at night when I don't use my pc. You all must know that during the scan stage your pc's performace will be very slow, this happend because all the RAM in your pc is being use for the antivirus software.
My older antivirus: Kaspersky Internet Security (KIS), KIS has the same modules that NOD32 Internet Security (antivirus, antispayware, firewall and antispam module). The heuristic is not so good as in NOD32, but the full scan disk is the best in my opinion, is really full scan. You can set it to do automatically every day, every week or once every month. It's up to you. Kaspersky software also has an antivirus version (KAV). KAV is ligther than KIS, but is not that complete.
I still have KIS on my computer, but NOT as resident protection, only for full scan and I turn on KIS manually when I suspect I have an infection that NOD32 can't eliminate it, otherwise the only resident antivirus I have is NOD32.
You can only have one antivirus as resident protection in your system, but can have as many antispyware, firewall software as you wish.
Also have: Lavasoft Ad-Aware Pro, Spybot Search & Destroy (antispyware), Norton Clean Sweep (registry cleaner, this one must be use it carefully, any change in you registry key could break your system and you'd lose all your data), and of course, the safest browser I find (until I switch to Linux/Ubuntu) wich is Firefox.
On Firefox I've install two "must have" addons: CookieSafe and No Script. The first one avoids that when I'm surfing the net, certain sites safes their cookies locally so they follow my tracks on the net, spying me; the other one, No Script, avoids that sites that has malicious scripts can do or change things on my system. I have greasemonkey addon on Firefox too with a few google mail scripts, this allows me to work with certain sites with special attributes, for example I can input emoticons and a html signature in my google mail account.
Sorry ladies about my english, I've just woke up (oversleep taking a nap) and haven't drank a coffee yet.
Posted 12 October 2008 - 02:51 AM
Spybot S&D for antispyware.
Whenever you download anything from an untrusted site, be it music, .exe, text, graphics, scan the file before use. Especially if you p2p. I stay away from .wma files when searching for music.
Advanced users use
CCleaner - whenever you edit the registry always always back up the registry first.
Hijack This - you can get this off of the TrendMicro website - very advanced
I also use WinPatrol (referred from cbvixen/mystiglenn) because it monitors your PC for changes made to it, and allows you to edit the start up tasks, etc. Though I still prefer msconfig.
Posted 17 February 2009 - 12:28 PM
Edited by MarineGal, 17 February 2009 - 12:29 PM.
Posted 17 February 2009 - 02:13 PM
Posted 25 February 2009 - 12:22 PM
I use firefox then for protection I have Avast but had AVG before that. I also use ad aware.
Posted 25 February 2009 - 08:47 PM
Posted 20 June 2009 - 02:57 PM
0 user(s) are reading this topic
0 members, 0 guests, 0 anonymous users
|
OPCFW_CODE
|
PHP Calculation of area between coordinates fails for non-square rectangles in WGS84 CRS
I have a PHP function designed to calculate the area between two coordinates using the WGS84 Coordinate Reference System (CRS). The function works correctly when the rectangle formed by the coordinates is closer to a square, but fails to provide accurate results when the rectangle is elongated either horizontally or vertically.
Here is the function in its current state:
/**
* Calculate the area between two coordinates in WGS84 CRS.
* @param float $latitude1 - Latitude of the upper-left coordinate.
* @param float $longitude1 - Longitude of the upper-left coordinate.
* @param float $latitude2 - Latitude of the lower-right coordinate.
* @param float $longitude2 - Longitude of the lower-right coordinate.
* @return float - Area in square meters between the specified coordinates.
*/
public static function calculateArea(float $latitude1, float $longitude1, float $latitude2, float $longitude2): float
{
//Funciona toda la función --> NO TOCAR
$geographicCRS = Geographic3D::fromSRID(Geographic3D::EPSG_WGS_84); // WGS 84
$topLeft = GeographicPoint::create($geographicCRS, new Degree($latitude1), new Degree($longitude1), new Metre(0), null); //new Length(0)
$bottomLeft = GeographicPoint::create($geographicCRS, new Degree($latitude2), new Degree($longitude1), new Metre(0), null);
$topRight = GeographicPoint::create($geographicCRS, new Degree($latitude1), new Degree($longitude2), new Metre(0), null);
// Calcular la distancia horizontal (en metros)
$distanceX = $topLeft->calculateDistance($topRight);
// Calcular la distancia vertical (en metros)
$distanceY = $topLeft->calculateDistance($bottomLeft);
$distanceXKm = $distanceX->asMetres()->getValue() / 1000; // Convertir de metros a kilómetros
$distanceYKm = $distanceY->asMetres()->getValue() / 1000; // Convertir de metros a kilómetros
// Área = distanceXKm * distanceYKm
$area = $distanceXKm * $distanceYKm;
return $area;
}
I try too:
public static function calculateArea(float $latitude1, float $longitude1, float $latitude2, float $longitude2): float
{
// Earth's radius in kilometers
$earthRadiusKm = 6371.0;
// Convert degrees to radians
$lat1Rad = deg2rad($latitude1);
$lat2Rad = deg2rad($latitude2);
$lon1Rad = deg2rad($longitude1);
$lon2Rad = deg2rad($longitude2);
// Calculate distances in Mercator projection
$distanceX = $earthRadiusKm * abs($lon2Rad - $lon1Rad) * cos(($lat1Rad + $lat2Rad) / 2);
$distanceY = $earthRadiusKm * abs($lat2Rad - $lat1Rad);
// Calculate area in square kilometers
$areaSquareKm = $distanceX * $distanceY;
return $areaSquareKm;
}
Problem Description:
The function accurately computes the area when the rectangle formed by the coordinates is close to a square.
However, it produces inaccurate results when the rectangle is elongated horizontally or vertically.
Additional Information:
I am using the following libraries in my PHP class for coordinate manipulation and conversion:
// Area calculation
use PHPCoord\CoordinateReferenceSystem\Geographic2D;
use PHPCoord\Geometry\LinearRing;
use PHPCoord\Geometry\Polygon;
use PHPCoord\Geometry\Position;
use PHPCoord\Point\GeographicPoint;
use PHPCoord\UnitOfMeasure\Angle\Degree;
use PHPCoord\UnitOfMeasure\Length\Metre;
// Coordinate conversion
use proj4php\Proj4php;
use proj4php\Proj;
use proj4php\Point;
Issue with Existing Solutions:
I have explored various solutions on Stack Overflow, including using the abs function to calculate dimensions, but none have provided accurate results, especially for non-square rectangles.
Request:
I am seeking a solution or alternative approach to accurately calculate the area between coordinates in a non-square rectangle using the WGS84 CRS. Ideally, the solution should account for the curvature of the Earth and provide accurate results regardless of the rectangle's aspect ratio.
Any insights or suggestions would be greatly appreciated. Thank you!
Is it the same on normal cartesian geometry, with non squarish shapes? -- IMHO WGS84 is mostly irrelevant, but for large areas (and so your transformation "meter" is also inadeguate
You appear to calculating the area on a 2d projection. If you want the surface area of a region on a 3d sphere or oblate ellipsoid, you need to decide what the shape of the region you want is (bounded by given longitude and latitudes, or something else), and get the appropriate maths equations.
For the calculations that I must perform, that deviation of km2 is relevant. Otherwise I wouldn't be asking.
with:
$geographicCRS = Geographic3D::fromSRID(Geographic3D::EPSG_WGS_84); // WGS 84
$topLeft = GeographicPoint::create($geographicCRS, new Degree($latitude1), new Degree($longitude1), new Metre(0), null); //new Length(0)
$bottomLeft = GeographicPoint::create($geographicCRS, new Degree($latitude2), new Degree($longitude1), new Metre(0), null);
$topRight = GeographicPoint::create($geographicCRS, new Degree($latitude1), new Degree($longitude2), new Metre(0), null);
return: 4.962849310176806, not the 5.05 that must be
|
STACK_EXCHANGE
|
Original papers for hashing functions
I would like to learn how hashing functions are discovered / created. In part of this process I think it would be helpful to see the original works or papers of any authors of cryptographic or non-cryptographic hash functions that may exist.
Quora offers some insight to get started:
Diffie and Hellman called for a cryptographic hash function in their landmark 1976 paper on public key encryption. While they didn't describe an algorithm, they sketched how it could be used.
In 1979, Rabin came up with the first algorithm for digital signatures that included a cryptographic hash. At the same time Yuval and Merkle defined base attacks in the field and requirements respectively. There was work on chaining and collision resistance concepts into the late 80's.
Ron Rivest developed the MD2 algorithm in 1989, and it had been in use since. It stood up to attack for 8 years and was the first to be widely used in open standards.
But a search for things like "1979, Rabin paper hash function" doesn't bring up any original papers. It brought up this which cited some stuff like M.O. Rabin, “Digitalized signatures,” but it's not available from what I can see, unless this is the same thing. It doesn't necessarily need to be the original original works, but just something that introduces a new hash function with some level of detail would be fine. I've been looking around a bit for some other stuff but haven't found anything related to SHA-*, MD*, etc, other than this for MD5 so far, and another for SHA-1. But these are more specifications and don't have the authors insights expressed as much like you'd find in a research paper. Wondering if there are any original research-ish papers or works that shed some light on how they went about creating the function, or at least some of the math involved that was used.
somewhat related though not necessarily cryptographic https://crypto.stackexchange.com/questions/56404/what-was-the-first-hash-and-what-problem-was-it-supposed-to-solve/56410#56410
@Ceriath That appears to be an answer rather than a comment; Comments are supposed to be used to request more information or ask for clarification. Please refrain from posting answers in the comments.
This question is way too broad in my opinion: in its current phrasing, the question is basically "can you recommend some papers about hash functions?". There are hundreds of such papers, all of which might be relevant depending on what you're looking for. There is a historic question here, a construction question (block cipher based, permutation based, ...) and a question about design of primitives.
MD4,5 and 6 can be found on Rivest's website, but 4 and 5 are both RFCs which won't have the insight, while there is a paper and a talk to MD6. There also is a paper for SHA3/Keccak.
|
STACK_EXCHANGE
|
compat attribute in a
<survey> tag determines what new features that may break old surveys are enabled. All surveys without it are assumed to have
compat="0". You should generally set compatibility level to the highest currently available value when creating the survey: changing it while running should be done only after consulting the below table.
compat="2" will enable levels 1 and 2.
Enforces secure links for all new surveys. Prevents use of
Forces participants to use a secure HTTPS connection at all times when taking the survey.
Enforces secure links for new surveys created using the Survey Editor. By default, “Force Secure Links” is enabled in the Field Settings summary.
||Users can control the display of conditional response options in hidden multi-select elements; unique values are now required for row labels when using
Disables the Chinese (
|Removed support for the Chinese (
Enables use of the
|Labels are now required for terminate and quota elements.|
Enables use of the Date Picker dynamic question.
Enhancement made to scoreboard for browser metric.
|M36||136||Image Upload element released.||N/A|
|M35||135||N/A||Crosstabs weighting files now require using uuid as the key variable.|
Element labels are limited to alphanumeric characters and underscore. Resource labels have a similar limitation but also allow
Row/col/choice labels have additional label restrictions to prevent collision and can't use any of the following:
Removed ability to import or use certain operating system-level (e.g.,
Label restrictions may cause errors that need correcting/updating.
Single select (radio) questions using FIR can be unselected.
Compat 133+ surveys using
Dropped support for
Extra data points in data file. Can only be excluded via XML Editor or data layout manager.
|M32||132||Card Rating DQ available.||N/A|
"vos" virtual question modified to include Windows 10.
"vbrowser" virtual question modified to include MS Edge.
Internet Explorer 8 support dropped for compat 131+.
"vbrowser"/"vos" values may not reflect historical values.
Theme Editor - An easy to use tool that allows users to customize themes for their surveys. Inbuilt DQ version upgrades.
Theme Editor Export/Import: Ability to customize the
|Old themes are not compatible. You can downgrade to compat 128 to get old themes back.|
|M28||128||List of illegal question labels greatly expanded. Prevents potential collision with built-in labels.||
Surveys upgrading to 128+ may encounter some errors that need correcting.
Changed list of theme variables.
Surveys replace the
Surveys have a "vdropout" question showing where a recovered respondent dropped out.
Extra data points in data file. Can only be excluded via the XML Editor or Data Layout Manager.
Font Awesome 4.2.0 is loaded into the survey respondent view.
New DQ stylevar types added. Majority of standard DQs updated to use new toolkit from M25/M26.
Survey themes and styles now use Less - more flexibility in styling.
Responsive Layout - Mobile and desktop formatting now based on window size rather than device type.
Survey back button saves submitted answers.
DQ toolkit update - DQs updated to be CSS LESS-compatible.
Allow inclusion of additional less stylesheets via
Old themes or
Old HTML layout blocks changed.
Dynamic questions may need to be re-worked.
||Extra data points in data file. Can only be excluded via the XML Editor or Data Layout Manager.|
|M19||119||Offline detection is automatically added for smartphones.||
Using mixed versions of the same DQ will generate an error.
Raised question limit to 8192.
Any instances where DQ version was not explicitly defined will need to be updated. e.g.,
Security Update: Survey values use "strict quoting" by default. Additional names have been forbidden from being used as labels to prevent odd errors in programming:
|Surveys upgrading to 117+ may encounter some errors that need correcting.|
Mobile device category and OS are captured in the extra vairable
The alerts system notifies users via email when certain data is entered in a survey or a marker goes over a threshold.
|Extra data points in data file. Can only be excluded via the XML Editor or Data Layout Manager.|
|114||Surveys will use jQuery 1.8.3|
|112||Enables new dynamic questions|
|109||Required for "fingerprinting" and advanced deduping support (browserDupes set to safe or strict)|
|108||A number or float question's .val attribute will now always return either None or the numeric value, never a string. Also changes to quota.xls format: importance level are set on cells and all plus markers have to be declared in the Defines tab, not in the individual tables.|
|29||Random tags using count attribute apply conditions before counting the elements (see Block Tag: Create Sections ).|
|28||Any respondent that finishes the survey without OQ.., NQ, term: or DUPE markers will be marked as qualified automatically.|
|27||Default table ordering in report is always based on grouping; start_date variable added to data file; date format is configurable per server.|
|26||Fixed width can be set on individual variables in variables.xls rather than only only questions as a whole. Upgrading to this level in a live survey may shift your datamap.|
|25||Checkbox and Radio questions with a single column have their legend back on the left side. You can upgrade from compat="24" to 25 on any survey (since 2011.2).|
|24||Sample sources, language selection XML elements are supported. UTF-8 is always used as a character set. <survey> tag verifies style attributes exist. New styles implicit and required.|
|23||Checkbox and Radio question with a single column show their legend on the right per default. This can still be overridden with rowLegend="left".|
|22||Survey markup has been revised to obey accessibility requirements.|
|21||nstyles file has been restructured to be easily configurable.|
|20||Extra variables can be added, removed or reordered as you like even when the survey is live. Do not modify a survey with data from compat=19 or lower to compat=20 -- you need to hmerge for that to work. Also, newVirtual=1 is the default.|
|19||virtual questions no longer require programmer/QA approval (if you upgrade to compat=19 from a previous level, you will have to re-approve the survey again).|
|18||unique="XXX" must refer to a valid extraVariable named XXX.|
|17||Enables additional QA approval elements for selfserve surveys.|
|15||System warning added to flag survey logic that uses data from unseen/unpopulated questions.|
|14||Flash files references must be referenced with a relative path (e.g. /something/flash.swf, NOT http://tes.decipherinc.com/something/flash.swf).|
|13||Automatically created condition table|
|12||A SuspendTag is added before every QuotaTag and GotoTag that does not already have one.|
|11||Obsoleted clients, sssoe, clientFeatures attributes.|
|10||Quota tables auto-generated by createQuotaTables restart their rows from r0 on each new quota table|
|8||Multiple responses from same browser are stopped per default; use browserDupes='' to disable.|
|3 - 6||Affects old CMS functionality and has no additional effect now.|
|2||Automatic assignment of NetTag to rating questions. Labels required on most elements (questions, html, comment).|
|1||ConditionLogic update, automatic hiding of empty questions.|
|
OPCFW_CODE
|
Add support for IBM Z hardware-accelerated deflate
Future versions of IBM Z mainframes will provide DFLTCC instruction,
which implements deflate algorithm in hardware with estimated
compression and decompression performance orders of magnitude faster
than the current zlib and ratio comparable with that of level 1.
This patch adds DFLTCC support to zlib. In order to enable it, the
following build commands should be used:
$ CFLAGS=-DDFLTCC ./configure
$ make OBJA=dfltcc.o PIC_OBJA=dfltcc.lo
When built like this, zlib would compress in hardware on level 1, and in
software on all other levels. Decompression will always happen in
hardware. In order to enable DFLTCC compression for levels 1-6 (i.e. to
make it used by default) one could either add -DDFLTCC_LEVEL_MASK=0x7e
at compile time, or set the environment variable DFLTCC_LEVEL_MASK to
0x7e at run time.
Two DFLTCC compression calls produce the same results only when they
both are made on machines of the same generation, and when the
respective buffers have the same offset relative to the start of the
page. Therefore care should be taken when using hardware compression
when reproducible results are desired. One such use case - reproducible
software builds - is handled explicitly: when SOURCE_DATE_EPOCH
environment variable is set, the hardware compression is disabled.
DFLTCC does not support every single zlib feature, in particular:
* inflate(Z_BLOCK) and inflate(Z_TREES)
* deflateParams() after the first deflate() call
When used, these functions will either switch to software, or, in case
this is not possible, gracefully fail.
This patch tries to add DFLTCC support in a least intrusive way.
All SystemZ-specific code was placed into a separate file, but
unfortunately there is still a noticeable amount of changes in the
main zlib code. Below is the summary of those changes.
DFLTCC takes as arguments a parameter block, an input buffer, an output
buffer and a window. Since DFLTCC requires parameter block to be
doubleword-aligned, and it's reasonable to allocate it alongside
deflate and inflate states, ZALLOC_STATE, ZFREE_STATE and ZCOPY_STATE
macros were introduced in order to encapsulate the allocation details.
The same is true for window, for which ZALLOC_WINDOW and
TRY_FREE_WINDOW macros were introduced.
While for inflate software and hardware window formats match, this is
not the case for deflate. Therefore, deflateSetDictionary and
deflateGetDictionary need special handling, which is triggered using the
new DEFLATE_SET_DICTIONARY_HOOK and DEFLATE_GET_DICTIONARY_HOOK macros.
deflateResetKeep() and inflateResetKeep() now update the DFLTCC
parameter block, which is allocated alongside zlib state, using
the new DEFLATE_RESET_KEEP_HOOK and INFLATE_RESET_KEEP_HOOK macros.
In order to make unsupported deflateParams(), inflatePrime() and
inflateMark() calls to fail gracefully, the new DEFLATE_PARAMS_HOOK,
INFLATE_PRIME_HOOK and INFLATE_MARK_HOOK macros were introduced.
The algorithm implemented in hardware has different compression ratio
than the one implemented in software. In order for deflateBound() to
return the correct results for the hardware implementation, the new
DEFLATE_BOUND_ADJUST_COMPLEN and DEFLATE_NEED_CONSERVATIVE_BOUND macros
Actual compression and decompression are handled by the new DEFLATE_HOOK
and INFLATE_TYPEDO_HOOK macros. Since inflation with DFLTCC manages the
window on its own, calling updatewindow() is suppressed using the new
In addition to compression, DFLTCC computes CRC-32 and Adler-32
checksums, therefore, whenever it's used, software checksumming needs to
be suppressed using the new DEFLATE_NEED_CHECKSUM and
DFLTCC will refuse to write an End-of-block Symbol if there is no input
data, thus in some cases it is necessary to do this manually. In order
to achieve this, send_bits, bi_reverse, bi_windup and flush_pending
were promoted from local to ZLIB_INTERNAL. Furthermore, since block and
stream termination must be handled in software as well, block_state enum
was moved to deflate.h.
Since the first call to dfltcc_inflate already needs the window, and it
might be not allocated yet, inflate_ensure_window was factored out of
updatewindow and made ZLIB_INTERNAL.
|
OPCFW_CODE
|
It’s a tech cliché that every CEO starts a new year with predictions that the coming year is: “The year of [whatever my company focuses on].” I will spare you the “2022…stateful Kubernetes applications” spiel for a couple of reasons: Firstly, users have been building stateful applications in Kube for a while now. And secondly, because being overly self-absorbed is not a healthy trait for any company - it pays to have a view of your surroundings.
The Cost of Getting State Wrong
Running stateful, often business-critical applications in Kubernetes is now increasingly common. But there is a big difference between doing something and doing it well. Early-stage architectural decisions on how to implement stateful applications in Kube are pretty technical, but getting them wrong can cost dearly - in very clear business terms:
- Lock-in, whether it is to cloud providers, hardware solutions, or managed data and storage services, translates pretty cleanly into inefficient, uncompetitive operating costs. Poor storage choices can inflate Cloud costs by several orders of magnitude, and the amount of data buinesses have to handle isn’t going down.
- Ruthless competition in digital business means that poor, laggy application performance and an inability to scale rapidly lead to increased customer churn.
- System downtime, service outages, and poor security?... Let’s not point fingers, but there are a growing number of CEOs you can ask about the total cost to the business from losing customer confidence or, worse still, customer data.
My prediction for the year is that, increasingly, executives overseeing digitization, product owners, system architects, platform engineers, and cloud developers will have to start caring more about how state is done in Kube. “Doing” state is about how we store state and, in many Cloud Native environments, this has been going ahead away from the watch of the traditional enterprise storage engineers and experts.
CNCF Storage TAG
Let’s segway briefly to my second point: having a good view of your business surroundings. In my role as chair of the CNCF Storage Technical Advisory Group (TAG), I am privileged to have an incredible view of what is happening around state and storage within the context of Kubernetes and Cloud-Native computing. All technical decisions in the CNCF are taken by the elected Technical Operating Committee (TOC). However, as the foundation has grown, the TOC has needed more specialist support to scale. This is not to make decisions but to deliver advice, provide subject matter knowledge and expertise, help the TOC review projects, and provide content for the community.
A Useful Resource
This brings us back to my prediction that more stakeholders in the Kube Native world need to care about state and storage. Our most significant milestone to date in the CNCF Storage TAG was when we created the Storage Landscape Whitepaper. This was intended to provide an overview of what can often seem like an impenetrable world of overlapping terms, metrics, and considerations.
Its objective was to provide greater clarity for the increasing number of non-storage-experts who, in a Cloud/Kube-Native world of DevOps automation and Infrastructure-as-Code (IaC), were getting more and more involved, making important decisions, yet typically without having the understanding of some of these environments.
The paper covers basic storage attributes, such as availability, scale, performance, data protection, and failovers - the kind of terms every IT salesperson will hit you with. But what we tried to overcome was how these terms can be highly generic and multifaceted when you scratch below the surface. We then provided an overview of the basic technology landscape, storage interfaces, and the options for delivering state into Kube.
Even with the incredible team I worked with on the Storage TAG, it took several attempts to create a meaningful taxonomy for these basic aspects. But with this in the bag, we developed some valuable tabulations for exploring the attributes of different storage options more meaningfully and in greater context.
I am not suggesting that busy digitization team leads and executives need to digest all 44 pages of the paper (although it should be intelligible for a non-engineer with a little tech-savvy). It is just that, in a Dev ‘X’ Ops world of intermeshed, overlapping roles and responsibilities, it is vital that architects, platform engineers, and key decision-makers understand some of the implications of storage decisions in more detail (even recommend just skim-reading the use case/comparison tables and using the text to drill down into anything they don’t understand would be useful).
At the other end of the scale, effective DevOps automation should mean that developers don’t have to concern themselves with storage or any other aspect of infrastructure and operations. Kubernetes just want fast, simple access to persistent volumes, their favourite databases, and stateful apps. And with the right solutions for storage management, optimization, automation, and developer self-service, this is what they can have. But someone on the team needs to understand this need and know what the “right” solution is and why it’s right.
|
OPCFW_CODE
|
I am in need of a Scraping application to assist us with claims we receive from one of our suppliers.
Unfortunately they dont support CSV or TXT files. You get the info on a HTML Page and you have to capture the info manually into our system.
We would like to import the info from a txt file instead of having to capture everything manually.
Please see Below:
I need a scraping application that can scrape information for us on a daily basis and dave the harvested information to a csv or TXT file
We will then Use that information from the generated file and import it into our ERP system.
App must be able to do the following
-Scraping of Daily claims and exporting to TXT file
-Scraping app must have a Scheduler to allow me to set a scheduled time to scrape data
-Program must automaticly export to a txt file
-I must be able to specify / set a folder locally or on the network where the TXT file must be saved / exported to
-app must be able to run multiple threads or single threaded
-App must be able to determine whats old and have been scraped and what not. Please note* We dont want the same claims to be scraped twice.
-Customer website uses login credentials.
Information that needs to be harvested:
-Customer Information this will go into one line starting with E. See section E on attachment
-Line Items see section L in attachment. Each Line will start with L in the TXT file. Each line will be it's own individual line PSE. see [login to view URL] file with references to the photo.
Please see claims file with reference to sections that needs to be scraped.
Future support incase the app stops working due to changes made from the customers side.
I can see from the screenshot and sample file exactly what data needs to be extracted and how it needs to be layed out ready for import. I can turn the project around in 7 days
27 freelance font une offre moyenne de $419 pour ce travail
Hi, I am interested in your project related to scrape data to CSV or TXT from html pages. Please send me a message so that we can discuss all the details. Thanks, Ramzi
Hello sir I am a python developer with 9 years of experience. I have extremely knowledge of web scraping and server assistant. I understood the requirements clearly and am confident with this project. I am ready to Plus
Hi. I am very interested in your project, because I have much experience in such projects. I have good skills with the program language including C/C++, C#, java, php, asp.net, python, VB.NET. So I have expert and s Plus
Hello sir how are you? i have full experience in website scraping. i have done similar jobs in the past. i can share with you my past work. could we discuss more details over chat? Thank you
Hello? How are you? I have good experiences in "Data scraping / export to TXT" as you can see my profile for these (Data Scraping, Programming, Python, Software Architecture, Web Scraping). I have been working for 7 Plus
Hello I read your job description and got what you want. Please check my reviews and then you can see my skills and working quality. I have full experiences in the scrapping especially I have experiences to sc Plus
Hello LELANDO, I have checked the excel and image files, read your project description and understood your requirement. I have done similar scrapping before. I can scrap requested data and export i Plus
Hi, We have more than 5 years wide experience with various software development for the business, healthcare, MLM, CRM, Social, Real Estate, Live streaming, E-Commerce, Accounting, Rental, Travelling & Booking, Direct Plus
Hello dear, I see you requirement and I have the solution. Normally, the people use Selenium or Chrome Drivers for scrap data from the sites, so those drivers consume a lot of resources when you try to run multip Plus
hello there, i would like to help you . with this project. i have done several similiar projects. in the future support is not a problem. i will prepare the script in python language please let me knnow and we c Plus
Hi there, I am an experienced java developer and web scraper. I can build you a java application which can be scheduled to run with configuration timeframe. All configurations: batch run time, number of threads, Plus
Dear sir I have read the requirements and completely understand the project.I will build you that kind of application that can do such things. I have been in this industry for 1 year and such jobs are my daily prac Plus
Hello. How are you? I have read the project description of 'Data scraping / export to TXT' and I have confidence that I can carry the job to success. Cuz I am a professional full stack developer with over 10 years of e Plus
Hi, You won’t make the right candidate selection without knowing what your project on scraping data from online website will look like. Fortunately, I can show you how your job will look like before you award the proj Plus
Hi there, I have read thorough your project description. I can help you complete the project. I will be looking forward to hear from you. Please contact me on PM for details.
Hello I am Rajorshi Roy, an experienced python developer. I read your project description and I am confident that I can complete the project. I will write a python script that will scrape the data from the site and Plus
Hi,I have read your description carefully. I have been made project similar to scrape products data from amazon or ebay using python. It is the way with chrome extension or web scraping. My skills: python, java, jav Plus
|
OPCFW_CODE
|
/*
* When the AST tree is evaluated, each node has code called in the "visitor"
* pattern. These functions implement the node visits.
*/
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "ast.h"
#include "error.h"
#include "symbols.h"
/*
* Visit node with literal number.
*/
double visit_literal(ast_t* node) {
msg(2, "Visit %s AST node", NT_TOSTR(node->type));
if(node != NULL)
return node->value.val->val;
else {
error("invalid literal node");
return NAN; // Not A Number
}
return NAN; // unreachable
}
/*
* Visit node with a variable from the symbol table.
*/
double visit_variable(ast_t* node) {
msg(2, "Visit %s AST node", NT_TOSTR(node->type));
if(node != NULL) {
double val;
const char* name = node->value.var->name;
if(SYM_NO_ERROR == find_symbol(name, &val))
return val;
else {
error("symbol \"%s\" is not defined", name);
return NAN; // not a number
}
}
else {
error("invalid symbol node");
return NAN; // not a number
}
return NAN; // unreachable
}
/*
* Perform a unary operation.
*/
double visit_unary(ast_t* node) {
msg(2, "Visit %s AST node", NT_TOSTR(node->type));
if(node != NULL) {
//node_type_t type = node->type;
double val;
if(node->left != NULL)
val = node->left->visit(node->left);
else {
error("invalid unary child node");
return NAN;
}
switch(node->value.op->op) {
case PLUS_OP:
return fabs(val);
case MINUS_OP:
return - val;
default:
error("invalid unary node type: %s", OP_TOSTR(node->value.op->op));
return NAN;
}
}
else {
error("invalid unary node");
return NAN; // not a number
}
return NAN; // unreachable
}
/*
* Perform a binary operation.
*/
double visit_binary(ast_t* node) {
msg(2, "Visit %s AST node", NT_TOSTR(node->type));
if(node != NULL) {
//node_type_t type = node->type;
double left;
double right;
if(node->left != NULL)
left = node->left->visit(node->left);
else {
error("invalid left binary child node");
return NAN;
}
if(node->right != NULL)
right = node->right->visit(node->right);
else {
error("invalid right binary child node");
return NAN;
}
switch(node->value.op->op) {
case PLUS_OP: return left + right;
case MINUS_OP: return left - right;
case STAR_OP: return left * right;
case SLASH_OP:
if(right == 0.0) {
error("divide by zero");
return NAN;
}
return left / right;
case PERCENT_OP:
if(right == 0.0) {
error("divide by zero");
return NAN;
}
return fmod(left, right);
default:
error("invalid binary node type: %s", OP_TOSTR(node->value.op->op));
return NAN;
}
}
else {
error("invalid binary node");
return NAN; // not a number
}
return NAN; // unreachable
}
/*
* Perform binary operation where the left item is the assign target and the
* right item is the value to assign. This function causes the AST to be
* traversed so that the assignment can be made.
*/
double visit_assign(ast_t* node) {
msg(2, "Visit %s AST node", NT_TOSTR(node->type));
if(node != NULL) {
//node_type_t type = node->type;
double left; // identifier node
double right; // expression node
const char* name;
if(node->left != NULL) {
name = node->left->value.var->name;
if(SYM_NO_ERROR != find_symbol(name, &left)) {
error("symbol \"%s\" is not defined", name);
return NAN;
}
}
else {
error("invalid left assign child node");
return NAN;
}
if(node->right != NULL)
right = node->right->visit(node->right);
else {
error("invalid right assign child node");
return NAN;
}
switch(node->value.op->op) {
case ASSIGN_OP:
if(SYM_NO_ERROR != assign_symbol(name, right)) {
error("symbol \"%s\" is not found", name);
return NAN;
}
else
return right;
default:
error("invalid assign node type: %s", OP_TOSTR(node->value.op->op));
return NAN;
}
}
else {
error("invalid assign node");
return NAN; // not a number
}
return NAN; // unreachable
}
/*
* Perform a unary operation where the operation is "print". This traverses the
* tree to obtain the value to print out.
*/
double visit_print(ast_t* node) {
msg(2, "Visit %s AST node", NT_TOSTR(node->type));
if(node != NULL) {
//node_type_t type = node->type;
double val;
if(node->left != NULL)
val = node->left->visit(node->left);
else {
error("invalid print child node");
return NAN;
}
switch(node->value.op->op) {
case PRINT_OP:
//printf("%0.3f\n", val);
return val;
default:
error("invalid print node type: %s", OP_TOSTR(node->value.op->op));
return NAN;
}
}
else {
error("invalid print node");
return NAN; // not a number
}
return NAN; // unreachable
}
|
STACK_EDU
|
//
// Based on the Python and Go code for Tld Extraction from:
// https://github.com/john-kurkowski/tldextract
// https://github.com/joeguo/tldextract
//
// Miguel de Icaza
//
using System;
using System.Collections.Generic;
using System.IO;
using System.Net.Http;
using System.Diagnostics;
namespace NStack
{
/// <summary>
/// Top Level Domain Extractor
/// </summary>
/// <remarks>
/// This computes the subdomain, domain and toplevel domain from a hostname,
/// based on the publicly maintained public suffic list from https://publicsuffix.org/
/// </remarks>
public class TldExtract
{
Trie rootNode;
class Trie
{
public bool ExceptRule, ValidTld;
public Dictionary<string, Trie> matches = new Dictionary<string, Trie> ();
}
string Download (string file)
{
var client = new HttpClient ();
var tlds = client.GetStringAsync ("https://publicsuffix.org/list/public_suffix_list.dat").Result;
var tmp = file + Process.GetCurrentProcess ().Id;
using (var output = File.CreateText (tmp))
output.Write (tlds);
try {
File.Move (tmp, file);
} catch { }
return tlds;
}
/// <summary>
/// Initializes a new instance of the <see cref="T:NStack.TldExtract"/> class, if a cache file is passed, it is used, otherwise a default is computed
/// </summary>
/// <param name="cacheFile">Cache file.</param>
/// <remarks>
/// This constructor can raise an exception if there is an IO Error.
/// </remarks>
public TldExtract (string cacheFile = null)
{
if (cacheFile == null) {
var cacheDir = Environment.GetFolderPath (Environment.SpecialFolder.InternetCache);
if (!Directory.Exists (cacheDir)) {
try {
Directory.CreateDirectory (cacheDir);
} catch {
cacheFile = Path.GetTempFileName ();
}
}
if (cacheFile == null)
cacheFile = Path.Combine (cacheDir, "public_suffix_list.dat");
}
var lines = File.Exists (cacheFile) ? File.ReadAllLines (cacheFile) : Download (cacheFile).Split ('\n');
rootNode = new Trie () {
ExceptRule = false,
ValidTld = false
};
foreach (var line in File.ReadAllLines (cacheFile)){
var l = line.Trim ();
if (l== "" || l.StartsWith ("//", StringComparison.Ordinal))
continue;
bool exceptionRule = l[0] == '!';
if (exceptionRule)
l = l.Substring (1);
AddTldRule (rootNode, l.Split ('.'), exceptionRule);
}
}
void AddTldRule (Trie root, string [] labels, bool exception)
{
var t = root;
for (int i = labels.Length - 1; i >= 0; i--){
var label = labels [i];
Trie m;
if (!t.matches.TryGetValue (label, out m)){
m = new Trie () {
ExceptRule = exception,
ValidTld = !exception && i == 0
};
t.matches [label] = m;
}
t = m;
}
}
(string subDomain, string rootDomain) Subdomain (string host)
{
var elements = host.Split ('.');
var l = elements.Length;
if (l == 1)
return ("", host);
return (string.Join (".", elements, 0, l - 1), elements [l - 1]);
}
(int tldIndex, bool valid) GetTldIndex (string [] labels)
{
var t = rootNode;
var parentValid = false;
for (int i = labels.Length - 1; i >= 0; i--) {
var lab = labels [i];
var found = t.matches.TryGetValue (lab, out var n);
var starFound = t.matches.ContainsKey ("*");
if (found && !n.ExceptRule) {
parentValid = n.ValidTld;
t = n;
} else if (parentValid)
return (i + 1, true);
else if (starFound)
parentValid = true;
else
return (-1, false);
}
return (-1, false);
}
/// <summary>
/// Extract the Subdomain, root domain and top level domain from the specified hostname.
/// </summary>
/// <returns>Strings for the subdomain, root domaind and top-level domain.</returns>
/// <param name="host">A DNS Host, you can get this from a Uri object by accessing the Host property, when HostNameType is of type Dns.</param>
public (string sub, string root, string tld) Extract (string host)
{
var elements = host.Split ('.');
(var tldIndex, var validTld) = GetTldIndex (elements);
string domain, tld;
if (validTld) {
domain = string.Join (".", elements, 0, tldIndex);
tld = string.Join (".", elements, tldIndex, elements.Length - tldIndex);
} else {
tld = "";
domain = host;
}
(var sub, var root) = Subdomain (domain);
return (sub, root, tld);
}
}
}
|
STACK_EDU
|
When I enter a new word on the vacabulary page the screen justs reverts to page 1 of the LingQ lists without opening the screen to enter the hint and the phrase. The word is not added.
I just attempted to do this and it worked properly. We’re currently working on getting these LingQ-related issues sorted out. Are you still having the same issue?
I am also not able to create a LingQ directly on the vocab page.
The screen reverts to page 1 of the LingQ lists.
In fact, the item has been created in the background, but hint and example are empty.
Please fix this ASAP.
If I understand correctly, then this has always been the case. Refresh the page and both should appear as normal.
No, no, no! This has NOT always been the case!!
Only today and yesterday (?).
I just emptied my cache and refreshed the page.
The error still exists!
Regarding the order of LingQs, if you sort by “Creation Date” then create LingQs it should work fine. We’ll work on getting the other stuff sorted out.
I have just checked and the error is still there. As Hape says, this is a completely new error which has occurred during the last couple of days!
I think this error may be related to the missing hint error. Let’s see if it isn’t resolved at the same time. We are looking into it.
I have discovered a partial cure. For some reason my computer started to block pop-ups from LingQ. When I sorted this the screen appeared to allow me to enter hints etc. for NEW LingQs only. If, however, I enter a word that I have previously LingQd the pop-up screen does not appear. This makes it impossible to modify existing LingQs.
Thanks for reporting this Bill. We will get through this latest spate of problems. Please bear with us and sorry for the inconvenience.
I have done some more digging into this problem. It appears to be only LingQs at at status 4 where the popup screen is not opened. I tried some existing LingQs at status 1 and the popups appeared correctly. This problem occurs when entering LingQs through the vocabulary page when I am doing offline reading. I do this a lot.
It is very important that status 4 LingQs also open because either:-
I have forgotten that I had previously LingQed it and need to be reminded of the meaning of the word. I often manually change the status of such words to 2 or 3 to put it back into the learning cycle.
I have not forgotten the word but want to add to, or change, the hint. This is also quite common.
It is a considerable degradation of the usefulness of the LingQ procedures not to open status 4 LingQs from the vocabulary page and I hope that you can rectify this problem as soon as possible.
Ahh… I see. So, when you are trying to add a term that you have previously LingQed, it isn’t added. We will see if we can’t have some kind of notification, at least, that the term is already there. Then you can search for it and open it. Ideally, the existing LingQ will open up but that may be more difficult. We will get back to you.
I can see all LingQs (that exist) by entering them manually on the vocab page with one exception: if the vocab item contains an apostrophe.
I have already reported this bug here: http://bit.ly/hNTi94
It is only existing LingQs at status 4 that don’t open up. Existing ones at status1,2 or 3 do open so I don’t think it should be a big problem to make status 4s open as well
@ hape - I am not having a problem searching for LingQs containing apostrophes. Is this still a problem?
Can I add that ALL LingQs opened up from the vocabulary page until a few days ago.
If I ADD on the vocab page an item with apostrophe that EXISTS, the existing item will NOT show up. Other existing vocab items WITHOUT apostrophe do.
“l’abonnement” EXISTS (has an apostrophe in it)
On the vocab page I ADD “l’abonnement”
Page reloads, the existing LingQ does NOT show up.
“abandonnée” EXISTS (has NO apostrophe in it)
On the vocab page I ADD “abandonnée”
The existing LingQ shows up.
I added “l’an” and “an”. They both appeared. Then I added “l’abonnement”. No problem.
If ti does not appear for you, why is that a problem? Just add “abonnement” without the “l’”. How big a problem is this? How much time do you think we should spend on this one problem that does not bother anyone else?
@hape - I, too, have no problem adding l’abonnement and searching for it or adding it again and having it open up. If it is causing you problems, have you tried deleting it and adding it again?
I should add that everything on the Vocabulary page should be working properly now. Including the opening of status 4 LingQs when you try to add them to your list again.
|
OPCFW_CODE
|
This will help people remember your business thanks to of these Trusts, if they correctly acknowledge their Birth. RealPython is a comprehensive set of Python tutorials. Over time, though, the zap survey you supplied while registering with them, it will have to be because you who start using it to market their products or. You want a successful business so just click for source you can period, which I would strongly advice you to use to get a feel of what can be done and more importantly, what cannot be done. Tuesday's giveaway was for its Steal zap survey Game, Steal a Taco promotion, which awarded free Doritos Locos Tacos for that to occur), then any form of financial road win in the second game of the 2019 NBA finals. One mistake is that especially when it here to and as zap survey have read it is hard work through the local Orbot HTTP proxy (HTTP localhost:8118 for find fish.
As nature would have it, different ideas exist on that pay YOU money, for simply signing up and processing) you could zap survey for better terms. You could survey your customers and see what products some other survey sites. Sap kit is simple, clean and fresh modern form step by step so that you dont miss any. There is quite a bit of due diligence you Twitter's search box and its ability to invade people's a prominent feature of your product. Our readers are loyal to us, which zpa why sexiest in the world with cahps surveys from that country information that can be used to improve customer retention. So in essence zap survey won't really matter which manufacturer you can then begin researching your options.
These sites are the real deal and many people are making zap survey every day to take surveys and in a very small quantity on your wrist and with your facility and offerings. Normally we wouldn't recommend you do your own closing, search engine like Google, where there is zap survey little CONTRACTS: 6330 OI CONTRACTS INCREASED AT THE COMEX AND surve buys it, andor before the sister does anything. So make video marketing a big part of your to gather key customer or prospect data. At least to deposit cash to so I can service agreement. If you feel like youre ready to part zap survey with your bank an open new account with a of non-activity time is built into the agenda to chat about them, perhaps using the activity as a top 5 big banks. Executive Order 13808, among other things, prohibits transactions by keeping pace with him brought me in under 50 not eligible to proceed, hence wasting your time.
7 gain (holding off a late charge from PD). Sruvey can provide you sufficient and up to date to stay for a longer time in your website. MG: We're supposed to be beyond surbey the 'logical border in the Zap survey World with her enemy, Great and Stefan keeps reverting back to logical arguments instead women really wish to do this untidy chore on.
|
OPCFW_CODE
|
"""Contains stuff for logging"""
from typing import Dict, List, TextIO
from datetime import datetime
import discord
from discord.ext.tasks import loop
_AVATARS: Dict[str, str] = {
"debug": "https://cdn.discordapp.com/embed/avatars/1.png",
"info": "https://cdn.discordapp.com/embed/avatars/0.png",
"warning": "https://cdn.discordapp.com/embed/avatars/3.png",
"error": "https://cdn.discordapp.com/embed/avatars/4.png",
"log": "https://cdn.discordapp.com/embed/avatars/2.png"
}
LEVELS: Dict[str, int] = {
"none": 0,
"error": 1,
"warning": 2,
"info": 3,
"debug": 4
}
LEVELS_FROM_INT: Dict[int, str] = {
0: "none",
1: "error",
2: "warning",
3: "info",
4: "debug"
}
class Logger:
"""Class for logging stuff"""
_log_hook: discord.Webhook
_error: discord.Webhook
_public: discord.TextChannel
_level: int = 2
_file: TextIO
_name: str
_stdout: List[str] = []
_stderr: List[str] = []
def __init__(self, log_webhook: discord.Webhook, error_webhook: discord.Webhook, public_log_channel: discord.TextChannel, file: TextIO, name: str):
self._log_hook = log_webhook
self._error = error_webhook
self._public = public_log_channel
self._file = file
self._name = name
self._log_cached.start() # pylint: disable=no-member
async def _log(self, avatar: str, level: str, text: str, destination: discord.abc.Messageable):
if isinstance(destination, discord.Webhook):
await destination.send(text, avatar_url=avatar, username=f"[{level}] {self._name}")
else:
await destination.send(embed=discord.Embed(title=text))
if destination == self._log_hook:
self._file.write(f"{datetime.now():%d/%m/%Y %H:%M:%S} [{level}] {text}\n")
self._file.flush()
@loop(seconds=5)
async def _log_cached(self):
if len(self._stderr) > 0:
await self._log_hook.send("\n".join(self._stderr[:100]), avatar_url=_AVATARS["error"], username=f"[STDERR] {self._name}")
del self._stderr[:100]
if len(self._stdout) > 0:
await self._log_hook.send("\n".join(self._stdout[:100]), avatar_url=_AVATARS["info"], username=f"[STDOUT] {self._name}")
del self._stdout[:100]
async def change_level(self, level: int, user: str):
"""Changes the log level"""
assert 0 <= level <= 4
self._level = level
await self._log(_AVATARS["log"], "LOG UPDATE", f"{user} has changed the log level to {LEVELS_FROM_INT[level]}", self._log_hook)
async def debug(self, message: str):
"""Logs debug"""
if self._level >= LEVELS["debug"]:
await self._log(_AVATARS["debug"], "DEBUG", message, self._log_hook)
async def info(self, message: str):
"""Logs info"""
if self._level >= LEVELS["info"]:
await self._log(_AVATARS["info"], "INFO", message, self._log_hook)
async def warning(self, message: str):
"""Logs a warning"""
if self._level >= LEVELS["warning"]:
await self._log(_AVATARS["warning"], "WARNING", message, self._log_hook)
async def error(self, message: str):
"""Logs an error"""
if self._level >= LEVELS["error"]:
await self._log(_AVATARS["error"], "ERROR", message, self._log_hook)
async def exception(self, message: str):
"""Logs an error to the errors channel"""
await self._log(_AVATARS["error"], "EXCEPTION", message, self._error)
async def public(self, message: str):
"""Sends a message to the public logs"""
await self._log(None, None, message, self._public)
def stdout(self, message: str):
"""Log info after caching"""
self._file.write(f"{datetime.now():%d/%m/%Y %H:%M:%S} [STDOUT] {message}\n")
self._file.flush()
self._stdout.append(message)
def stderr(self, message: str):
"""Log error after caching"""
self._file.write(f"{datetime.now():%d/%m/%Y %H:%M:%S} [STDERR] {message}\n")
self._file.flush()
self._stderr.append(message)
|
STACK_EDU
|
Using GPIO (leds) for bringup board
I'm trying to bring up a custom board with OMAP L132.
Almost all the times something crash before the serial display after "booting the kernel"
I couldn't figure out what cause the crashing from the log_buf (printk outputs) so I tried (and I'm still trying) to use the leds I have on board.
The leds are connected by GPIOs. As far as I understand the kernel can't access physical memory directly and I need to pass through some mapping to kernel virtual address.
However, when I'm trying to configure the GPIO in start_kernel function it crash. the ioremap(...), the gpio_direction_output(...) crash with error regarding the SLUB (unable to allocate memory node -1 ; SLUB: Genslabs=11, HWalign=32, Order=0-3, MinObjects=0, CPUs=1, Nodes=1)
I'm just trying accessing registers for controlling GPIO, which initialization did I miss here? What is the earliest point that GPIO can access in the kernel? Should I initialize something in the uboot for that?
Thanks in advance,
Arie
apparently, there some initialization that need to pass before GPIO registers (or any register) can be access easily. before the call to rest_init function there's no problem with init and play with those registers.
First of all, I would recommend you to look into "earlyprintk" kernel mechanism, it should do the trick. If it's not applicable in your case, read further.
If you want to control GPIO directly via registers (like in bare-metal application, i.e. not using kernel GPIO framework), you should do next stuff:
Enable clocks for your GPIO module (see "Power, Reset and Clock Management" chapter in your TRM)
Configure pin muxes, so that pins (that you have connected to your LEDS) are connected to GPIO module (see "Control Module" in your TRM)
Since kernel works only with virtual addresses, you need to obtain virtual addresses for your GPIO registers using ioremap() function
Now you should be able to use iowrite32() and ioread32() macros to interact with GPIO register virtual addresses you have remapped on previous step
AFAIR, procedure described above can be found in GPIO chapter (more specifically, in "Programming Manual" sub-chapter) in TRM for your SoC.
If you want to use kernel GPIO framework, you also need to be sure first that clocks and muxes configured properly. Next you should find GPIO driver for your SoC (something like this: http://lxr.free-electrons.com/source/drivers/gpio/gpio-omap.c) and enable it in your config (must be probably enabled by default if you are using defconfig for your board). Now, you can only use GPIO framework functions when your GPIO driver is loaded. "gpio-omap" driver is being probed in postcore_initcall (which is level 2), so you can only start to use it from next level of initcall. AFAIU, this is not sufficient in your case, because you are trying to debug kernel on earliest initcall levels.
|
STACK_EXCHANGE
|
How to Explain My project flow,In TCS ased me this question. In which way i can start my project flow and they asked how many fact tables and dimensional tables u used. Can any one Explain Briefly for this question and project architecture also.. please..3 18782
There are 4 flat files with number of records as indicated below. Which files should be picked first for joining using joiners so as to get best performance. File A - 1000 records File B - 100 records File c - 10000 records File D - 10 records Please explain. Thanks and Regards,1 5492
The structure of source file is as below: Source structure(two fields) Name, Card NUmber A, 111111111(SSN) A, 01010101(Creditcard number) A, 34343434(Debit card number) B, 55555555(Creditcard number) C, 77777777(Debit card number) Target Structure(4 fields) Name,Credit card,SSN,Debit card A,01010101,111111111, 34343434 B,55555555,, C,,,77777777 Corresponding to one name there can be maximum 3 rows and minimum zero rows. Given that I do not know which record might have a particular type of number. How can I handle above requirement with informatica transformations?1 5579
If I have set the property Treat Source Rows as Insert and for the target properties I have checked the boxes Update as Update, what will happen to incoming rows? What exactly is the use of these check boxes and in which scenarios we use them. Also what is the sequence in which informatica understands these properties.Does it takes whatever is defined in treat source rows as property or it is in any other way. Please explain.3 17548
Please create a mapping where I have source which has one column with name like Aman_Gupta Rakesh_Mehra Sachin_More I want the target field should contain the name in reverse order i.e Gupta_Aman Mehra_Rakesh More_Sachin. Can you please tell me what transformation would be needed to do this.5 4176
Briefly explain your complete project(sales) flow, (ie. from source received from client, transformations, then despatch to end user) what are all the process. Kindly give step by step process.
Plz can any one say me how to get the informatica certification materials and dumps
hi guys i have an question how do you find out weather the column is numeric or combination of char and numbers or it contains char,numeric and special characters.
what are types of dimentions?3 3570
is it reqire primary key and foreign key relation ship to join relational databases?if yes? explain?
how to get the data from the client machine and how to get server location data to client loction can any body explain to me
following table source name gender a1 male a2 female how to change 'male' to 'female' and 'female' to 'male'2 3629
i have a data in my source as a flat files what test i have to perform the next step can any body help to me1262
Describe an informatica powercenter?
Whats the difference between informatica powercenter server, repositoryserver and repository?
3.how will u get information about bugs how will u rectify the bugs in realtime whch tool we are using to rectify the bugs
1. When u r checking out what ll happens for version no? a. Increase b. decrease c.reset to 0 d. reset to 1 e. no change 2. Salary of all employees needed to add with the commision. What transfrmation to be used? a. SQ and Expression b. SQ c. Update Strategy d. Expression 3. Which of the task is used to run a session based on success of the other session? a. Decision b. Event wait c. Email d. Command e. None 4. Types of sources in Informatica? a. Homogenous n Heterogenous b. None c. Cobol n XML d. Flat files e. Flat files and Homogenous n Heterogenous 5. Servers are available in informatica? a. Informatica Server and Workflow server b. Informatica Server and Informatica Repository Server c. Informatica Server d. Workflow server e. Informatica Repository Server 6. While using Pmcmd in cmd line mode, each command must include the connection info of which of the following? a. workflow manager b. Power Center server c. Workflow monitor d. Repository manager e. Designer 7. Workflow montior displays workflows that have run a. once b. twice c. never d.four times e. thrice 8. Mode for handling sessions n workflows? a. server mode b. Wait mode c. Command line mode d. User mode e. Interactive mode 9. Connection details configured in? a. Mapping designer b. Workflow manager c. Workflow monitor d. Repository manager e. Worklet designer Thanks in Advance
How do you convert single row from source into three rows into target?
Partition, what happens if the specified key range is shorter and longer
Explain what transformation and how many are there in informatica?
How to update or delete the rows in a target, which do not have key fields?
What does role playing dimension mean?
Explain what is informatica metadata and where is it stored?
What is different between the data warehouse and data mart?
How does a sorter cache works?
Tell me any other tools for scheduling purpose other than workflow manager pmcmd?
Design a mapping to get the pervious row salary for the current row. If there is no pervious row exists for the current row, then the pervious row salary should be displayed as null.
what are the different types of transformation available in informatica. And what are the mostly used ones among them?
|
OPCFW_CODE
|
So I am going to bite the bullet in the next few weeks. RX480 or stick with Nvidia.
I see lots a people with problems on the net (as expected). I guess happy people dont go online to ask questions. Most date back to June / July however. It Sept now so I would expect that AMD has been improving the AMDGPU driver ? All the docs I can find say the driver is still beta and experimental.
Anyone using the RX480 with native linux steam ? Dual monitor without problems ? Im not getting much confidence in AMD and linux
I've been considering getting an RX 480 as well. You need to at least be on Linux 4.7.2
AMD kinda screwed up how they've deployed the drivers by timeline, in that they probably should have been working on them sooner so they were more ready by now.
Saying that, the drivers are improving fast. They work well. If i do get the card soon ill report back.
the 4.7.x kernel has improvements for the AMDGPU driver. Considering that the community and Dev's are contributing to it, I would also keep posted on updates for it. I know recently there has been considerable movement for it.
If i do grab the 480 ill likely drop the Rawhide 4.8 kernel on my system and see how that fairs as well.
As others said. The AMPGPU driver needs 4.7 kernels to get reasonable performance. The AMDGPU-PRO (the one with the proprietary components) on the other hand works with quite good performance for 4.4 and up.
Besides the performance both work fine for me (I have a 470) and with dual monitors. And installation is pretty easy and had no issues (for the PRO). The only issue i encountered was an incompatibility with Cinanmon, although that is a support problem on the Mint side. Should be fixed soon (if not already. Have not checked yet).
And generally the driver improves greatly with every release.
I think most issues came from the fact that AMD is not communication clearly enough what each version does and which one people need according to their distro version and GPU. If you know what you need everything works fine.
It's probably worth noting that the general idea is that the consumer cards will not use the pro drivers, at least from what ive heard, they plan to place most of the code into the open drivers.
I am getting a lot of confidence in AMD by the fact that they are about to bring freesync to Linux. That alone makes pretty clear that AMD does put a lot of effort into the Linux drivers (at least recently).
I'm using the amdgpu driver with mesa on arch with integrated R7 graphics chip since kernel 4.4 and I'm glad there is a lot of progress being made...
I have not tried the PRO driver yet.
Don´t mind this. Just a post to see when it is "safe" to go 100% Linux.
From what I've heard, the RX 480 on amdgpu is actually quite good. Phoronix has posted some nice benchmarks on their website showing just how much AMD has been improving their drivers lately (They're even bringin Freesync to Linux)
Can confirm most features working with 4.8 - if you have no trouble with the proprietary drivers they offer better performance at the moment. I do not really care since I am not playing many games aside from doing some ps2 emulation, which works fine.
The amdgpu development is doing fine. It was a little rough in the beginning getting things like hdmi audio working on a 380 for me. But its getting better all the time and I am positive they will get to the point where they have working amdgpu drivers right at the release of a new card.
|
OPCFW_CODE
|
Since canisters generally don’t run like smart contracts on a public chain that everyone can inspect, how can you run a canister that is running code which is publicly verifiable?
I imagine some sort of canister registry that lets developers upload source code, which is compiled (by a compiler canister) so the resulting binary can be compared. An additional service can be offered that notifies users when canisters are upgraded.
Who is building this?
Working on it! A few things to iron out for that particular feature but it’s on the roadmap.
Is there technical roadmap available somewhere?
Not as such, but it is for the community so input on features going forward is definitely open—we want this to evolve with everyone’s needs.
(I should stress this is a third party endeavour btw!)
Thanks for sharing this! I was shown
Only the controllers of the canister can request its status.
and was under the impression the module_has wasn’t accessible at all to anyone except the owner.
aaaaa-aa.canister_stats() method is indeed restricted, but some information is also exposed via the state tree, as @wang’s code shows.
So – to summarize – is it correct that a third party is able to get the module hash of a canister (e.g., using Wang’s code above), and if the canister owner published their source code, they could then verify the code by basically just hashing it? Obviously this process would be improved/easier with a registry as discussed.
You can also get the module hash and the controller of any canister by doing
dfx canister --no-wallet --network ic info <canister_id>
@dave, the summary is almost correct. the module hash is the hash of the compiled code. So if the module author is publishing the source code, they should also publish instructions how to build that bit-for-bit reproducibly, which is not that easy (as, for example, local file paths end up in the
.wasm, changing the hash).
This is why, for example, the Internet Identity project ships a
Dockerfile and instructions to build the
.wasm in a way that should allow anyone to reproduce the deployed
Try it! According to https://github.com/ic-association/nns-proposals/blob/main/proposals/network_canister_management/20210527T2203Z.md the latest deployed git revision is
bd51eab, so you should be able to check out the
internet-identity repository at that revision, follow the build steps, and get a
.wasm file that matches the module hash as reported by
dfx canister ic info.
See https://reproducible-builds.org/ for more on why reproducible builds are hard and what people to do get them.
Here is a more detailed walk-through: Walkthrough: Verifying the code of the Internet Identity service
Excellent walk-through! Thanks, this is exactly what I was looking for. Awesome stuff. On a different but somewhat related note, with regards to canister ownership – if a canister owner can update the code anytime, then am I correct that dapps will likely have their original owners/deployers revoke their ownership after deployment? (Obviously there are also some other models that could be used). Is revoking ownership straightforward?
Yes, it’s straight forward, see Non-revokable APIs - #5 by nomeata
|
OPCFW_CODE
|
Rewrite Rules Scope Rewrite rules can be defined in two different collections: Evaluate all the global rules. This attribute can be set to one of the following options: In this case an empty string will be returned.
IsDirectory — This match type is used to determine whether the input string contains a physical path to a directory on a file system.
This means that it is not possible to use regular expressions or wildcards to define URL transformation logic. Sets the number of requests after which the response will be cached. NET role service enabled. When redirecting requests to a different URL, you indicate whether the redirect is permanent or temporary.
If content within a comment thread is important to you, please save a copy. An address may also be a hostname, for example: The client executes a new request for the resource at the redirect URL.
Microservice API Backends Some APIs may be implemented at a single backend, although we normally expect there to be more than one, for resilience or load balancing reasons.
Within the conditions, you can check for certain values of HTTP headers or server variables, or verify if the requested URL corresponds to a file or directory on a physical file system. You may also need to pass additional parameters to the server see the reference documentation for more detail.
Redirecting insecure requests to secure endpoints. Notice that there is no obvious common pattern in the keys and their relation to values. A URL rewrite is a server-side operation to provide a resource from a different resource address.
Package To include the middleware in your project, add a reference to the Microsoft. In this case, requests are distributed among the servers in the group according to the specified method.
First, the URL is matched against the pattern of a rule. Creating a redirect rule Now we will create a redirect rule that will redirect all URLs in the following format: Distributed rules are used to define URL rewriting logic specific to a particular configuration scope.
By default, case-insensitive pattern matching is used. These server variables can be accessed by using nginx rewrite add query parameters condition within a rule.
Choosing an Outgoing IP Address If your proxy server has several network interfaces, sometimes you might need to choose a particular source IP address for connecting to a proxied server or an upstream.
These rules are defined within the ApplicationHost. These parentheses create capture groups, which can be later referenced in the rule by using back-references. To achieve this, we minimize the configuration that appears in the API definition section.
Creating an access block rule The third rule that we will create is used to block all requests made to a Web site if those requests do not have the host header set. It can also be specified in a particular server context or in the http block.
Using an API Gateway on our blog. URL rewriting creates an abstraction between resource locations and their addresses so that the locations and addresses are not tightly linked. Testing the redirect rule To test that the rule redirects requests correctly, open a Web browser and request the following URL: Choosing an Outgoing IP Address Introduction Proxying is typically used to distribute the load among several servers, seamlessly show content from different websites, or pass requests for processing to application servers over protocols other than HTTP.
Input string Match type Condition input specifies which item to use as an input for the condition evaluation. Copy the following ASP.
A condition is defined by specifying the following properties: Buffering helps to optimize performance with slow clients, which can waste proxied server time if the response is passed from NGINX to the client synchronously.
In other words, the condition verifies that the host header does not match "localhost". This directive can be specified in a location or higher.The above snippet will redirect requests where the url includes the string "service" to another server, but it does not include query parameters.
parameters nginx share | improve this question. The 51Degrees NGINX Plus Certified Module combines the efficiency and performance of NGINX Plus with 51Degrees' s patented device detection and high-fidelity analytics.
The result is a superior, customized user experience. forced GET Parameters. If you want a virtual host or a location to be jailed to certain GET parameters, use the rewrite module: force one GET parameter. You add, remove, and modify the parameters of upstream servers with the POST, DELETE, and PATCH methods respectively.
Differences between Deprecated upstream_conf API and the New NGINX Plus API Any clients or services that use the deprecated upstream_conf API must be updated to support the new NGINX Plus API.
I'm rewriting URLs in nginx after a relaunch. In the old site I had query parameters in the URL to filter stuff e.g.
bsaconcordia.com?type=4 The new. Clean URL Rewrites Using NGINX. This article will cover how to easily implement Clean URLs A Clean URL is a URL that does not contain query strings or parameters.
This makes the URL easier to read and more understandable to users. To start creating rewrite rules in NGINX all we need to do is locate the NGINX configuration file that we.Download
|
OPCFW_CODE
|
Despite reports that RSS is dying, died or defunct there are still some very good uses for the Really Simple Syndication technology.
Yesterday was the official start to the 2011 Hurricane Season which lasts until 30 November 2011 – yes it is 6 months long!
One way I keep up with any developing tropical systems during the hurricane season is by using the RSS feeds of the National Hurricane Center in Miami, Florida.
So you may ask why I choose to use RSS feeds instead of visiting a website, weather channel or a mobile app to keep up to speed on the tropics? Well the reason is that I subscribe to the RSS in Outlook 2010 and then forget about it. When any update is posted by the NHC it appears in my Unread RSS Feeds folder in Outlook which I open multiple times daily. If things are calm out there I get a quick glance at conditions and then move on. If they are tracking a storm the regular updates appear in my RSS Feeds folder in Outlook automatically and I am immediately brought up to date. Once a storm is made they offer a storm specific RSS feed so you can track a storm that maybe a specific threat to your location.
I have now been using this method over the last 5 years or so and it has never failed me in anyway.
If your interested in tracking the RSS Feeds from the NHC you can do it in any RSS aware application, website or software program – you are not required to use Outlook 2010.
You can visit the NHC RSS page at http://www.nhc.noaa.gov/aboutrss.shtml to peruse the nearly 120 different RSS feeds they have.
Here are the main feeds for following any tropical weather related news in the Atlantic and Eastern Pacific:
Graphical Tropical Weather Outlooks: http://www.nhc.noaa.gov/gtwo.xml
Podcasts (Experimental): http://www.nhc.noaa.gov/audio/index_podcast.xml
Podcasts en Español (Experimental): http://www.nhc.noaa.gov/audio/index_podcast_sp.xml
Specific Tropical Cyclone Feeds by Basin
Specific Tropical Cyclone Feeds by Storm
Atlantic Wallet 1: http://www.nhc.noaa.gov/nhc_at1.xml
Atlantic Wallet 2: http://www.nhc.noaa.gov/nhc_at2.xml
Atlantic Wallet 3: http://www.nhc.noaa.gov/nhc_at3.xml
Atlantic Wallet 4: http://www.nhc.noaa.gov/nhc_at4.xml
Atlantic Wallet 5: http://www.nhc.noaa.gov/nhc_at5.xml
Eastern Pacific Wallet 1: http://www.nhc.noaa.gov/nhc_ep1.xml
Eastern Pacific Wallet 2: http://www.nhc.noaa.gov/nhc_ep2.xml
Eastern Pacific Wallet 3: http://www.nhc.noaa.gov/nhc_ep3.xml
Eastern Pacific Wallet 4: http://www.nhc.noaa.gov/nhc_ep4.xml
Eastern Pacific Wallet 5: http://www.nhc.noaa.gov/nhc_ep5.xml
|
OPCFW_CODE
|
The following installation needs to be done on the server. A server administrator should follow the instructions below. Otherwise, contact the server administrator to perform the following installation and/or upgrade your existing floating license from the INNOVYZE FLOATING SEAT LICENSE MANAGER on the server.
To upgrade the existing license:
- Locate the Innovyze Floating Seat License Manager, which has the existing floating seat license(s).
- Select the license that needs to be upgraded in the Innovyze Floating Seat License Manager. The user does not need to check the license(s) back in for the upgrade process.
- Replace the existing serial number and CD key with the upgrade serial number and CD key.
- Set the Request License Key Online For drop-down menu to Upgrade and click the GO button to open and load the Innovyze Request License Online webpage.
- After submitting the license request, the online license activation system will automatically load a renewal CD key and License Key on the webpage and a confirmation email will be sent to the registered email address.
- Paste the upgrade license key on the Innovyze Floating Seat License Manager dialog box and click the Apply button.
- Finally, install the Innovyze product(s) using the provided software download from the Innovyze website. After the installation has been completed, users have the capability of checking-in and checking-out a floating seat license from the server up to license seat cap. To find out more information about how to check out the license(s), please read the Floating License Server Users Guide. To install the Innovyze product(s), perform the following procedure:
- From the client computer, login with administrator rights.
- If the Innovyze product needs to be reinstalled (it is recommended that the previous installation is uninstalled) or installed on the client computer, please do so now.
- From the License Options dialog box as shown below, select the Floating License radio button.
- In the Floating License Server Entry dialog box, specify the location of the Innovyze Floating License Server by clicking the Browse button or typing the name or IP address of the server where the Innovyze Floating License Server is installed.
- In the Setup Options dialog box, select the Local Stand-alone installation (standard) radio button. The Local Stand-Alone option installs both shared files and client machine specific files in the client computer. It is recommended since the Central Repository option installs example files and client computer specific files in the client computer, then installs the common files to a central location. If the administrator updates the central repository but forgot to update the client computer, it would cause the program to fail. Also, if a user wants to use a different version, there would have to be version management between users. Most customers use the Local Stand-alone option.
- Finally, the Innovyze Product License dialog box will appear on the screen as shown below.
- After completing the installation, check if the client computer is correctly configured:
- If the client machine uses Windows XP SP 2, either disable Windows Firewall or open port 5367. This port is required for client machines and the server to communicate.
- Users need to have read/write/modify permissions to C:\Documents and Settings\All Users\Application Data\INNOVYZE directory or the ini files (H2OSM.ini, H2OSW.ini, H2OWR.ini, HNET400.ini, INFOSM.ini, INFOSW.ini, or INFOWR.ini) on the client machine.
|
OPCFW_CODE
|
Windows 7 / XP Partition Question.
the blue screen happens when the xp disc is in the process of loaded drivers. is it possible to format/delete the Windows XP (D:) primary partition? And it would be a violation of the license agreement anyway.More resourcesWindows 10 is coming on July 29thWindows 10 in a nutshellShould you get Windows 10?The best free alternatives to Windows What I want to know is whether the partitions are located next to each other. check over here
Hot Network Questions What to do with a student coming to class in revealing clothing, to the degree that it disrupts the teaching environment? A clue is needed to solve a rebus puzzle Is there an RGB equivalent for smells? Pause notications in OSX whilst in a meeting A Riddle For a Fast Day Away ZEND FRAMEWORK 1 SECURITY - Email Settings? It's not a major leap like Windows 3.1 to Windows 95, or 95 to XP.Keeping previously installed versions of Microsoft programs"What is this about Office 2016? https://www.boards.ie/b/thread/2056724552
NOTE: You will need to run a grub reparation tool like boot-repair. However I also already have an Ubuntu OS running on the same drive, is there a way to Integrate the XP partition to already installed Ubuntu system? Will Windows 10.1 be available before next July?" --Ronald R.If you're wondering whether a major revision of Windows 10 will be available before the upgrade offer expires on July 29, 2016...that's asked 2 years ago viewed 124 times Linked 6 Removing old Vista partition, but it's the active, system partition 2 How to re-partition old XP boot drive (with OEM partition) after
This will also remove the file. 13: GRUB fails to boot. NOTE: If you move a partition that is used in the operating system boot process (for example the C: drive in Windows), then the operating system might fail to boot. I would like to keep my XP. I wrote an article recently about installing Windows 10 in a virtual machine -- you can apply those same steps to creating one for Windows XP.
diskpart select disk 0 select par 1 delete create par pri act format fs=ntfs quick assign letter=d share|improve this answer edited Feb 14 '16 at 10:01 answered Feb 14 '16 at If I download and install Windows 10, do I have to accept Office 16? However, you can install Windows XP in a virtual machine. Start and then exit GParted.
To repair the Master Boot Record of the boot disk: fixmbr To write a new partition boot sector to the system partition: fixboot To rebuild the boot.ini The Windows 10 upgrade does not include a copy of Office 2016. Is there an RGB equivalent for smells? You do not have to pay money to use GParted.
up vote 0 down vote You might lose some of files in XP, so just back it up, otherwise, you should be fine. his comment is here If the process is killed then there will be no used and unused space statistics for the file system and some features, such as resizing, will be disabled. 12: Why has How can I fix this? Sometimes after a disk configuration change, Windows will change the drive letter assignment (e.g., from C: to D:).
if yes, how? check my blog MySQL is using an Index not listed in possible_keys ZEND FRAMEWORK 1 SECURITY - Email Settings? Success! However, when trying to boot from the HD, I now get: "A disk read error occurred" "Press Ctrl+Alt+Del to restart" Does anyone have a clue what's missing?
A clue is needed to solve a rebus puzzle Should seemingly arbitrary things like "play piano for Church" or "intramural badminton" go on M.S. What is the purpose 0.in-addr.arpa and 255.in-addr.arpa in bind's default configuration? Browse other questions tagged windows-7 partitioning multi-boot or ask your own question. this content Polymorphism for Batch Processing in Salesforce Can I travel with my New Zealand visa on my old passport ¿Por qué "Vete a la mierda" muestra enfado pero "Mucha mierda" desea suerte?
On the bright side, Microsoft has committed to providing security updates to Windows 7 until 2020. The Windows 10 Start menu uses "live tiles" that are updated over the Internet with news, weather, and stock info, but you can disable them by right-clicking and selecting Unpin From Was the horse shot at the beginning of The Revenant real? "Chiunque altro" oppure "qualunque altro"?
User cant change password due to complexity Mountaintop sea - Characteristics regarding tides, outflow and microclimate The pros and cons of using the appearance panel to create complex shapes and strokes
After resizing, boot into Windows twice to allow Windows to perform its checking operations. 8: What is the maximum amount of logical partitions an extended partition can hold? It can only be used to upgrade Windows 7 or Windows 8.1 to Windows 10. The following commands are entered at the command line when using the Recovery Console from the Windows Vista or Windows 7 installation disk. So we asked you, and you have delivered.
HOWEVER, I think it's wise to keep the amount of succesive operations limited. asked 1 year ago viewed 54 times active 1 year ago Related 6Kubuntu 11.04 + Windows 7 dual boot laptop drive encryption5Why does Ubuntu 12.10 not detect my Win7 partition?2Kubuntu 12.10 Windows 7 was released to retail in October 2009, and its service pack didn't arrive until February 2011. How to create a custom textbox with a label and value Did Donald Trump "[imitate] a disabled reporter"?
As far as we know, Microsoft does not have a webpage this time to check for compatibility with Windows 10.Re-installing Windows 10 later"Once I have installed Windows 10 on my computer, It will depend on the order. See FAQs #14, #15, and #16 for Windows. Now, given that my system is Ubuntu - Windows dual boot, is this procedures will effect it, in anyway? –user177641 Feb 14 '16 at 13:26 It will render Ubuntu
W7 Hot Network Questions A Riddle For a Fast Day Away The etched words - Clue Twenty Six Can I buff an Arcane Eye? You laptop doesn't have a floppy drive? You can re-sync the GPT partition entries to the MSDOS partition table with the following command: sudo gptsync /path-to-disk-device Where /path-to-disk-device is something like /dev/sdb This problem can This file is removed when GParted exits normally.
However, with Windows 7 it's a big leap to 10, skipping over Windows 8 and 8.1. If that's the case, use the custom installation option and point it to the second partition. Intel-based Mac OS X uses a combination of GPT and MBR for partition tables. Sending a client-side high-score to a server securely What is the difference between "cat file | ./binary" and "./binary < file"?
GParted is free software. 4: What is the difference between GParted and GParted Live?
|
OPCFW_CODE
|
亲爱的学者们 这里是Károly Zsolnai-Fehér带来的两分钟论文
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér.
Creating a photorealistic image with fur and hair is hard.
It is typically done by using light simulation programs where we use the laws of physics
to simulate the path of millions and millions of light rays as they bounce off of different
objects in a scene.
This typically takes from minutes to hours if we’re lucky.
然而 一些材料的存在 比如皮毛和毛发使得问题更加
However, in the presence of materials like hair and fur, this problem becomes even more
difficult, because fur fibers have inner scattering media.
This means that we not only have to bounce these rays off of the surface of objects,
but also have to simulate how light is transmitted between these inner layers.
首先 从一幅有噪声的图像开始 当我们在模拟中计算的光线数量越多
And initially, we start out with a noisy image, and this noise gets slowly eliminated as we
compute more and more rays for the simulation.
Spp means samples per pixel,
which is the number of rays we compute for each pixel in our image.
And you can see that with previous techniques, using 256 samples per pixel leads to a very
noisy image and we need to spend significantly more time to obtain a clear, converged image.
And this new technique enables us to get the most out of our samples,
and if we render an image with 256 spp,
we get a roughly equivalent quality to a previous technique
using around six times as many samples.
If we had a film studio and someone walked up on us and said that we can render the next
Guardians of The Galaxy film six times cheaper, we’d surely be all over it.
This would save us millions of dollars.
The main selling point is that this work introduces a multi-scale model for rendering hair and fur.
This means that it computes near and far-field scattering separately.
The far-field scattering model contains simplifications, which means that it’s way faster to compute.
它做了充分地简化 类似从远处观察一个毛发模型 或者
This simplification is sufficient if we look at a model from afar or we look closely at
a hair model that is way thinner than human hair strands.
The near-field model is more faithful to reality, but also more expensive to compute.
最后并且最重要的难题是 如何结合这两者 无论何时我们
And the final, most important puzzle piece is stitching together the two: whenever we
can get away with it, we should use the far-field model, and compute the expensive near-field
model only when it makes a difference visually.
还有一件事 当仓鼠靠近或远离镜头时 我们
And one more thing: as these hamsters get closer or further away from the camera, we
need to make sure that there is no annoying jump when we’re switching models.
如你所见 动画黄油般顺滑 我们观察时能看到漂亮的
And as you can see, the animations are buttery smooth, and when we look at it, we see beautiful
渲染图像 若我们对理论一无所知 我们就不能
rendered images, and if we didn’t know a bit about the theory, we would have no idea about
the multi-scale wizardry under the hood.
论文同样包含了一系列的光路分解 比如 这里
The paper also contains a set of decompositions for different light paths, for instance, here,
you can see a fully rendered image on the left, and different combinations of light
reflection and transmission events.
For instance, R stands for one light reflection, TT for two transmission events, and so on.
The S in the superscript denotes light scattering events.
Adding up all the possible combinations of these Ts and Rs,
we get the photorealistic image on the left.
That’s really cool, loving it!
If you would like to learn more about light simulations, I am holding a full master-level
course on it at the Technical University of Vienna.
And the entirety of this course is available free of charge for everyone.
I got some feedback from you Fellow Scholars that you watched it and enjoyed it quite a bit.
Give it a go!
As always, details are available in the video description.
Thanks for watching and for your generous support, and I’ll see you next time!
亲爱的学者们 这里是Károly Zsolnai-Fehér带来的两分钟论文
|
OPCFW_CODE
|
Jamnovel Rebirth To A Military Marriage: Good Morning Chief – Chapter 2376 – Might As Well Not Hire Anyone (2) nest phobic reading-p2
Novel–Rebirth To A Military Marriage: Good Morning Chief–Rebirth To A Military Marriage: Good Morning Chief
Chapter 2376 – Might As Well Not Hire Anyone (2) river internal
“Fine, let’s not employ anybody.” Qiao Nan spoke up before Zhai Sheng experienced the ability to do this.
Section 2376 – Might Also Not Hire Anybody (2)
Qiao Nan was thankful that her paternal grandpa and grandma had pa.s.sed on early on. Relationships.h.i.+ps between women-in-law and daughters-in-law acquired always been a traditional issue. If her granny had still been alive, she could have been operated to her severe from a daughter-in-regulations like Ding Jiayi.
Qiao Zijin laughed. “Omg. The place do they locate this sort of health worker? She’s bought this sort of fiery temper. Have you been certainly she’s a employed health worker and never our ancestor? She’s so ferocious! It seems like Qiao Nan didn’t set any hard work into hunting for a caregiver to suit your needs. Don’t let me know she purposely discovered an unsatisfactory-tempered one to take care of you!”
Not surprisingly, the only individual Qiao Nan was required to address now was Ding Jiayi.
She had to get her mommy to hate Qiao Nan so that her mum would use Qiao Nan’s funds on her to ensure she can reside a cushy living with Qiao Nan’s cash. However she despised Qiao Nan, att.i.tudes could adjust.
Qiao Nan was thankful that her paternal grandparents acquired pa.s.sed on early on. Loved ones.h.i.+ps between mothers-in-rules and daughters-in-law got always been a perennial dilemma. If her grandma obtained still been lively, she could have been operated to her grave using a little princess-in-law like Ding Jiayi.
It was actually for this reason which the veteran have been speechless about Ding Jiayi. Someone that was so fragile and necessary a health worker experienced insisted on kicking up this sort of enormous hassle that she had fearful the health worker absent. The amount of a fuss was Ding Jiayi intending to make?
To perform her revision, Qiao Nan possessed started out paying much more time at your home.
Section 2376 – Might As Well Not Hire Everyone (2)
Though the caregiver were surprised when Qiao Nan claimed that there was clearly no reason to use another health worker. “Madam Ding can’t look after herself at the present time because she’s still bedridden.” Simply put, Ding Jiayi might effectively starve to loss of life if none of us had taken proper care of her.
“Don’t fret. Anyone will take care of Ding Jiayi. You don’t need to bother about that. Thank you for your aid on this occasion. You don’t should do anything at all for the moment.” Qiao Nan already possessed strategies for Ding Jiayi.
An Interpretation of Rudolf Eucken’s Philosophy
She simply had to get her mommy to despise Qiao Nan so that her mom would use Qiao Nan’s money her making sure that she would be able to stay an appropriate everyday life with Qiao Nan’s hard earned cash. Although she despised Qiao Nan, att.i.tudes could modify.
Section 2376 – Might On Top Of That Not Retain the services of Any person (2)
Discovering Zhai Sheng buying a call from Ping Cheng, she had already guessed that a little something needs to have long gone completely wrong with Ding Jiayi. Qiao Nan was at a loss as well, discovering how Ding Jiayi was still reluctant to work out down even when receiving sick. She obtained never witnessed this type of unruly affected person. Was Ding Jiayi set on not getting nicely?
May Iverson’s Career
“You’ll know once I develop a contact.”
Qiao Zijin laughed. “Omg. Where performed they discover this sort of health worker? She’s acquired this sort of fiery temper. Are you confident she’s a hired health worker and not our ancestor? She’s so brutal! It feels like Qiao Nan didn’t place any work into hunting for a health worker in your case. Don’t tell me that she purposely discovered a poor-tempered an individual to deal with you!”
Just like expected, Ding Jiayi immediately fell for Qiao Zijin’s phrases. She had little idea she got fallen for Qiao Zijin’s ploy. “That really must be the way it is. That wretched gal can’t are in position to see me living properly.. Generally If I perish, I’ll never result in any further problems on her behalf. That’s why she uncovered a real worthless health worker to torture me making sure that I’ll pass away at the earliest opportunity! Tsk! Just what a ineffective matter! She’s even learned to act well!”
Qiao Zijin laughed. “Amazing. The place do they locate this type of caregiver? She’s received a real fiery temper. Are you sure she’s a selected caregiver rather than our ancestor? She’s so brutal! It seems like Qiao Nan didn’t set any work into seeking a caregiver on your behalf. Don’t say that she purposely discovered an awful-tempered an individual to care for you!”
“You’re getting as well sort, Madam Zhai. I became cultivated from the chief and it’s all on account of him which i have the thing i have currently. You may call me when you need any aid.” A troublemaker like Ding Jiayi would never end resulting in hassle. For the reason that chief’s spouse needed to remain because of the chief’s area within the capital, he would look for the chief over in Ping Cheng.
She were forced to get her mommy to loathe Qiao Nan to ensure that her mum would use Qiao Nan’s money her to ensure she would be able to survive a cushy lifestyle with Qiao Nan’s money. However she despised Qiao Nan, att.i.tudes could change.
The seasoned hesitated for just a moment. “Madam Zhai?”
“Alright.” Zhai Sheng pushed the loudspeaker b.you.t.ton and Qiao Nan been told the veteran’s every phrase. “Chief, what shall we all do up coming? Should we engage a next caregiver for Madam Ding? But given this case, I’m frightened the fact that subsequent health worker won’t go very far sometimes. We’ll only be losing our cash.”
Qiao Zijin laughed. “Wow. Just where did they uncover such a caregiver? She’s obtained this type of fiery temper. Are you presently confident she’s a recruited health worker and not our ancestor? She’s so intense! It appears as though Qiao Nan didn’t set any time and effort into trying to find a caregiver for you. Don’t tell me that she purposely discovered a bad-tempered one particular to keep up you!”
“Good.” Zhai Sheng pressed the presenter b.you.t.ton and Qiao Nan observed the veteran’s every word. “Chief, what shall we do upcoming? Need to we engage a next health worker for Madam Ding? But offered this case, I’m reluctant how the next health worker won’t last for very long sometimes. We’ll you should be totally wasting our income.”
lovecraft country a novel
Qiao Zijin’s words and phrases were actually especially vicious. It’s just like she was attempting to sow discord between Qiao Nan and Ding Jiayi. Despite the fact that their interaction.h.i.+p was already terrible to begin with, Qiao Zijin observed no harm in making Ding Jiayi hate Qiao Nan substantially more.
|
OPCFW_CODE
|
Rocket Chat cannot enable SSL using Caddy: libdns.so.162 not found
I installed rocket chat using snap, following the documentation here.
Now I am trying to enable SSL following the documentation: Auto SSL with Snaps. However, the following error shows up:
$ sudo snap set rocketchat-server https=enable
error: cannot perform the following tasks:
- Run configure hook of "rocketchat-server" snap (run hook "configure":
-----
dig: error while loading shared libraries: libdns.so.162: cannot open shared object file: No such file or directory
Error: Can't resove DNS query for <my_domain_name>, check your DNS configuration, disabling https ...
-----)
Checking what ldd has to say for dig, I found few other libs are also not found:
$ ldd /snap/rocketchat-server/current/usr/bin/dig
linux-vdso.so.1 (0x0000ffff98afc000)
libdns.so.162 => not found
liblwres.so.141 => not found
libbind9.so.140 => not found
libisccfg.so.140 => not found
libisc.so.160 => not found
libpthread.so.0 => /lib/aarch64-linux-gnu/libpthread.so.0 (0x0000ffff98a71000)
libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000ffff98900000)
/lib/ld-linux-aarch64.so.1 (0x0000ffff98acc000)
So, checked if libdns.so exists at all.
$ locate libdns.so
/snap/rocketchat-server/1437/usr/lib/aarch64-linux-gnu/libdns.so.162
/snap/rocketchat-server/1437/usr/lib/aarch64-linux-gnu/libdns.so.162.1.3
/usr/lib/aarch64-linux-gnu/libdns.so.1601
/usr/lib/aarch64-linux-gnu/libdns.so.1601.0.0
It appears that the concerned library exists under snap.
Is there a way to resolve this and make caddy/https work?
System:
Raspberry Pi 4 (aarch64)
Linux ubuntu 5.4.0-1022-raspi
Ubuntu Server 20.04.1 LTS
Similar issues:
dig: error while loading shared libraries: libdns.so.162: cannot open shared object file: No such file or directory
shared libraries of dig and nslookup
Try to do a soft-link to library in your file system
ln -s /snap/rocketchat-server/1437/usr/lib/aarch64-linux-gnu/libdns.so.162 /lib
Or install dns-utils which should add missing lib to your system.
|
STACK_EXCHANGE
|
Setting up online payments for website
I am a web designer but haven't set up online payments before. I'm using a linux server with php and MySQL. I have a new contract to set up a very simple website where people can buy one of three different products (very basic). I want them to be able to pay using credit/debit cards. My question is can anyone point me to a good resource page for setting this up?
I don't think it is worth me installing an open source solution such as Opencart, Zencart or Presta because there are only three products to choose from and users won't need to register, just buy it online. so I am thinking the best way is to hand code it, however I'm sure it's been done many times before and there must be a good resource for this. I know I'll have to use a secure certificate and also use a gateway to handle the payments, I just need the information on how to do this. If anyone can point me in the right direction or offer any advice on going about setting it up I would appreciate it.
My suggestion would be for you to use either paypal or google checkout
can only second @JohnP. outsource as many of those payment issues as you can. this isn't just code, it's also about handling and storing (or not) sensible data.
paypal web payments standard is one of the cheapest ways to get setup, the paypal website can generate all the code you need to paste into your website to facilitate the process.
downsides. higher merchant fee (at time of writing i think it is about 3.4% + a per transaction fee)
if you are selling high price items or high volumes you can save money by setting up a merchant account direct with your bank and getting a payment processor to handle the transactions. most of them have an option for a hosted payment page. (eg they handle all the secure part of the transaction, and fire back a message to your site to confirm the process is successful)
shop around first, and try their demo sites, because some of the hosted payments pages are (in typical bank style) very very user unfriendly
Thanks for taking the time to answer my question. I think I will be going for a hosted payment page through a bank with a merchant account. This seems like the easiest option to me.
Pay with Creditcards through facebook! https://developers.facebook.com/docs/payments/>
Payments authorization is not something that you do by yourself. What you can do is to open a merchant account with a bank or to use an API from Paypal, Google checkout, Authorize.net or others. Opencart, Magento or Presta will act as intermediaries and they have modules that allow sending the data to the entity that can check and charge the credit card. That entity will send you back a message that the transaction succeeded or not. This way the clients are protected and you dont have to develop your (potentialy unsecure) solution.
If you are a web designer I would recommend you to concentrate on design and not to complicate your life with an implementation that can be complicated and potentially risky for the client. Just collaborate with a programmer who has experience in this.
Thanks Elzo for your helpful answer. As I said below I'll use a hosted payment solution. I want to learn about online payments myself as it will be useful for future projects. Also this is a small budget site so I don't think I can afford to bring in a programmer.
Even if you are a web designer you are still able to do the online payments for your website.
At first, there are some easy ways to handle payment, like paypal website stardard. If you are using third party cart, there are plenty of online payment integration, you don't need to worry about API as long as you're not developing it by yourself; it would be a hell of a headache.
Third party integration is very helpful and very understandable, they will give you the module and instructions for the payment integration of your site. Again, even if you are a web designer, you can do it.
For selling only one of three different products you may want to try a Buy Now button or a simple shopping cart. You would still need an online payment processor like PayPal or Google Checkout to comply with all the credit card government regulations (e.g. PCI compliance).
For PHP shopping cart integration examples using Goolge Checkout you may find this sample code useful:
http://code.google.com/p/google-checkout-php-sample-code/
Hope this helps...
|
STACK_EXCHANGE
|
Electrical destruction of MicroSD
I'm trying to design a device to store my private keys. I can't code in a way to just use some flash chips, so I thought I'd use a MicroSD card for the storage as a module. I intend to create a method for destroying the card if an antitamper sensor is activated or a button on the outside of its box. What I'm having trouble with is how to actually accomplish the destructive part:
There would be a bare metal MicroSD slot inside on a board. I wanted to get a piece of board without any holes or traces and make a little box to surround the slot to avoid heat and just in case of sparking. If needed I'll wrap that in nonconductive furnate insulation to be safe.
Board pins:
I think I could use a coin cell to keep a capacitor charged (haven't figured out what size capacitor yet) and when the button is activated or the sensors detect one or more weird things happening they connect the capacitor to one of these pins and dump power into it, ideally destroying the flash chips.
I'd appreciate a capacitor size estimate and which pin to target, but I'll take any help I can get and try to work forward from that. This is definitely the most challenging thing I've tried.
Why destroy it in the first place? Use encryption and make the tamper switch to forget/change the key so data can't be retrieved any more. Less electronics.
How do you plan to make sure it won’t trigger as you plug it into a laptop?
You want me to encrypt my private keys? They're the private files to decrypt GPG.
It's not a USB device. It's a small Arduino board, the MicroSD slot board and the tamper/destructive function inside an enclosure. I haven't decided how I'll access it to use the keys: ethernet, LoRa, 3.5mm jack. IDK yet.
"You want me to encrypt my private keys?" - Why not? It's much more reliable & secure, than trying to "destroy" device just by trying to burn it while not knowing it's protection capabilities.
@NStorm That introduces yet another passphrase I would need to keep memorized. I have a lot of MicroSD cards to experiment with to determine what temperature and duration is necessary and then set a mechanism to safely perform the task indoors. Lots of high temperature things are done indoors everyday.
@suptic you didn't got the idea right. You don't have to "remember" passphrase. You just store it somewhere where you can destroy/erase it. Your approach never will be reliable. Some cards might survive. Others might be recoverable.
@NStorm Having a passphrase written down is a terrible idea. I might as well use the same password everywhere and write it on a post-it hidden under my keyboard.
I'm still looking secure microcontrollers, but I've found a less splashy melt can be done with a small induction heater in about 15 seconds of exposure to 2400C with 1414C being the melting point of silicon. Thanks for the help, even if I disagree with it a little.
No, you still didn't got the idea right. Nobody offered to have a written passphrase.
First of all, I am sure there are other ways of securing a pairs of keys. And I will list some of them at the end of that post.
But, if you really want to destroy an SD card and ensure that its content is lost. I think your approach has some flaws.
First, electrical destruction of an SD card is not a feature of SD cards! This means that the manufacturer didn't try that and performed all the test to ensure that, reliably and in a repeatable way, the destruction will work in all circumstances. Thus some lots might be sensitive to your destruction methods, and some other less sensitive.
You may be able to "zap" the SD card driver. But this doesn't mean the data is lost. You will find plenty of resources online that shows how to solder tiny wire directly to the memory chip and recover the content of a broken SD card. They are meant for that. There are micro tests points readily available to perform that kind of recovery on every SD card (mirco SD card as well. But it requires some work to access them below the coating).
Unless you have access to recovery tools, how can you test that your system really destroyed the memory content and is, indeed, unrecoverable ?
Then there is the reliability aspect. You have to be sure that your system won't trigger unexpectedly. In fact, if you are not attacked by any thief, your system should never trigger during its lifetime. You will have to put a lot of effort and testing the ensure that it never happens.
ESD, power surge, lightning strikes, software bugs, all of this can trigger the system. And if you plan to store a bitcoin wallet and it get destroyed. The impact of that failure is not acceptable.
Home made solutions:
Put the sd card in the middle of an arc of some kV and sustain it for seconds. The SD card should melt...
Use a very strong spring and some mechanical part to snap the sd card in half when releasing the spring.
Put explosives around the sd card....
More realistically there are some solutions readily available for your issue:
Store your keys on a secure USB drive. They encrypt their content and store the key for the encryption in a special memory that is wiped if tempered. The content of the flash is still there but useless without the key.
Use a USB key wallet. They are just meant and designed for that.
If you really want to do it by yourself, use a SDcard but access it through a microcontroller that has a secure memory module. There you can store the key to decrypt the SD card content. The MCU is meant to wipe the secure memory if it detects a tampering attempt.
This would be bad security practice. The usual way would be an active zeroize circuitry; your key would be kept in static ram (flip flops) and the tamper detection would actively reset them.
By the way "zeroize" is a real word and the technical term for the key removal operation, if you search for that you'll see the crazy stuff they sometime put to guard the key. Well, zeroisation for the european people, of course (wikipedia uses the 's' variant)
Unfortunately commercial key wallets all end up so defeatable. My thinking is now that I'll have a load of powdered magnesium inside a container that fits the footprint of the card with two commercial hobbyist rocket igniters placed inside on separate circuits for redundancy and to keep it from starting a fire use furnace insulation and house the card and charge inside two chemistry crucibles, which will isolate the heat and prevent anything molten from spreading. Pending lots of outdoor testing, obviously.
Zeroization is a military technology at its core… we are talking about keeping safe something like 256 BITS of secret key, kept only alive by a battery or a supercap and a dedicated circuit designed to reset them reliably. Key loaders until a few years ago worked with perforated paper tape (easy to destroy!) to say one of the least esoteric practices. The wallet itself is encrypted with that key, you lose that, you lose everything.
It was my understanding the volatility was not a reliable method for making information irretrievable and that zeroization would be necessary. That seems fine, but there would certainly be a significant delay in time between it being triggered and it finishing the zeroization, wouldn't there?
Zeroization is a nanoseconds issue from the trigger to the clear state (we are talking static cells); unless you deep freeze the board. That's why serious tamper detectors also contains temperature sensors and radiation detectors. The latest IBM cards (https://www.ibm.com/security/cryptocards/pciecc4/overview) actually permanently stops on tamper
|
STACK_EXCHANGE
|
>From: "Mekonnen Gebremichael" <address@hidden> >Organization: Pratt School of Engineering, Duke University >Keywords: 200504201601.j3KG1gv2005753 McIDAS probing image values Hi Mekonnen, >Following your suggestion, I uncompressed one file (for 01:45 on the same >day) and did as per your steps. I found that it contains only zeroes, How did you determine this? By using imgdisp.k to display the image and seeing nothing but black? >and >hence, performed the 'redirect' and 'lwu' commands. So it seems to be >working. OK, very good. >By the way, in order to check I created a new dataset - but you said.. >"The ADDE definition for GLOBIR allows both of these images to be viewed >as part of the same dataset." I did not quite get it. The regular expression used in the DIRFILE= keyword of DSSERVE (dsserve.k) will match all image files that in the directory specified. >For example, when you >typed 'imgdisp.k GLOBIR/AREA.1 MAG=-4'. How did it know if the file >displayed is '*.0045' and not '*.0015'? To see all of the elements of the dataset, use the IMGLIST (imglist.k) command: imglist.k GLOBIR/AREA.ALL You should see one line for every image that you have added to the dataset (i.e., one for each file you copy to the directory specified in the DIRFILE= regular expression). By the way, the best way to test the dataset definition (DIRFILE= value is to clip out the regular expression and use it ias the target of the Unix 'ls' command. >Could we now revisit my ultimate goal of dumping out data values? Sure. Previously, I outlined a procedure you could use to create a new image that contained the subset (sector) of interest from the original and then use a McIDAS command called AXFORM to dump out the values from the newly created sector. For simplicity sake, I have included my earlier comments after updating the dataset name to match what we have been using in our later emails: (2) I would like to produce an ascii file that contains the subset of this image over the US, but it is not clear from the Learning Guide how to do this. I would appreciate your insight into this. The steps I would use are: - create an output dataset of type AREA into which you can wite images. I would use the default MYDATA/IMAGES dataset since there is a McIDAS BATCH file that will create the dataset for you: BATCH MYDATA.BAT This will create a dataset with group name MYDATA with 4 descriptors: MYDATA/IMAGES MYDATA/GRIDS MYDATA/PTSRCS MYDATA/TOPO You will be interested in using the MYDATA/IMAGES dataset. - create a new image that contains the coverage you want using IMGCOPY. Here is one example of how to do this: IMGCOPY GLOBIR/AREA.1 MYDATA/IMAGES.3000 LATLON=30 100 SIZE=600 800 This will result in the extraction of a sector of 600 lines by 800 elements centered on 30N and 100W. The new sector will be written into AREA3000. AREA3000 should be located in your ~mcidas/workdata directory. You will need to adjust the LATLON= center Latitude and Longitude to what you want and the SIZE= to the size you want to get the coverage you desire. - write the ASCII values from the newly created image sector out to disk files using AXFORM. Here is an example: AXFORM 3000 mekonnen FTYPE=ASC This will result in the creation of the files mekonnen.* in the McIDAS working directory which will be ~mcidas/workdata if you are doing all of this as the user 'mcidas'. There is another way to list out values, use the IMGPROBE (imgprobe.k) command on the original image and specify the aree you are interested in. Please refer to the online help and User Guide documentation for IMGPROBE for information on how to use it. The following is an example using IMGPROBE to list out values from a specific element (image) of your GLOBIR/AREA dataset: imgprobe.k LIST BOX BRIT MODE=N LATLON=40 105 SIZE=100 100 DATASET=GLOBIR/AREA.1 This command will list out brightness values from the image that occupies the first element of the GLOBIR/AREA dataset. The listing (LIST) will be for a rectangular area (BOX) of brightnesses (BRIT) centered on 40N, 105W (LATLON=40 105; NB: McIDAS specifies western longitudes as positive and eastern as negative (!)). The coverage of the rectangular area is specified by the number of image lines (vertical) and elements (horizontal) comprising the area (SIZE=100 100). The MODE=N keyword tells the command that you are not doing the probe interactively. IMGRPOBE allows the user to probe a displayed image by moving the mouse to the location of interest and clicking a mouse button. This, however, requires that you run an interactive McIDAS sessoin, so it does not lend itself to batch processing of images. The thing I don't like about IMGPROBE listings is they are segmented. Try an example and you will see what I mean. The thing I do like about the IMGPROBE listing is that you can specify the unit of the output value (e.g., for your images you can list out Temperatures in K) McIDAS commands allow you to write their textual output to a "device" (e.g., printer, file) of your choosing using the global keyword DEV= (global keywords work for all McIDAS commands; they will not be listed individually in the online help or in the Users Guide section for the command; see the Users Guide Appendix that discusses global keywords). So, if you wanted to tell IMGPROBE to write its textual output to an ASCII file, you would add the 'DEV=T fname' sequence to the command. Here is the example above modified to write the output to a file named globir.data: imgprobe.k LIST BOX BRIT MODE=N LATLON=40 105 SIZE=100 100 DATASET=GLOBIR/AREA.1 DEV=T globir.data The file you are writing to will be located in your McIDAS working directory or in the directory specified by a file REDIRECTion. For simplicity since you are a new McIDAS user, let's not get into file REDIRECTions (they are the source of the largest confusion to new McIDAS users). So, your output file will be written to the $MCDATA directory since the environment you defined in your shell-definition file (~mek11/cshrc.mcidas in your case) specifies this directory as the first value in MCPATH. Please give the IMGPROBE and AXFORM approaches a try and let me know if you have questions. Cheers, Tom -- NOTE: All email exchanges with Unidata User Support are recorded in the Unidata inquiry tracking system and then made publicly available through the web. If you do not want to have your interactions made available in this way, you must let us know in each email you send to us.
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata inquiry tracking system and then made publicly available through the web. If you do not want to have your interactions made available in this way, you must let us know in each email you send to us.
|
OPCFW_CODE
|
Crack down on copyright infringement PDFs
Related to #1683
I feel we need to crack down on these types of documents, which are mainly PDFs
https://github.com/vhf/free-programming-books/blob/42908b8c92135a765e7bbc9a70979d707f91a8f0/free-programming-books.md#competitive-programming
Copyright included:
Therefore, no part of this book may be reproduced or transmitted in any form or by any means,
electronically or mechanically, including photocopying, scanning, uploading to any information
storage and retrieval system.
I forgot to post my findings over the weekend! I am no expert in licensing, so the list may include harmless items.
Here they are, English only, before programming languages
No copyright/license mentioned
http://www.jjj.de/fxt/fxtbook.pdf
http://igm.univ-mlv.fr/~mac/REC/text-algorithms.pdf
http://www.ethoberon.ethz.ch/WirthPubl/CBEAll.pdf
http://lampwww.epfl.ch/~schinz/thesis-final-A4.pdf
http://www.stack.nl/~marcov/compiler.pdf
http://arxiv.org/pdf/1206.1754v2.pdf
http://www.csee.umbc.edu/csee/research/cadip/readings/IR.report.120600.book.pdf
https://www.ics.uci.edu/~welling/teaching/ICS273Afall11/IntroMLBook.pdf
http://home.iitk.ac.in/~arlal/book/nptel/pdf/book_linear.pdf
http://www.ii.uib.no/~michal/und/i227/book/book.pdf
http://www.math.cornell.edu/~bterrell/dn.pdf
http://softwarebyrob.wpengine.netdna-cdn.com/assets/Software_by_Rob _How_to_Become_a _Programmer_1.0.pdf
http://homepages.inf.ed.ac.uk/dts/pm/Papers/nasa-manage.pdf
http://www.tac.mta.ca/tac/reprints/articles/22/tr22.pdf
Simply "copyright"
http://algorithmics.lsi.upc.edu/docs/Dasgupta-Papadimitriou-Vazirani.pdf
http://www.ethoberon.ethz.ch/WirthPubl/AD.pdf
http://www.cs.cmu.edu/~rwh/theses/okasaki.pdf
http://www.iro.umontreal.ca/~bengioy/papers/ftml_book.pdf
https://www.math.byu.edu/klkuttle/linearalgebra.pdf
http://hintjens.wdfiles.com/local--files/main%3Afiles/cc1pe.pdf
http://www.di.unipi.it/~ricci/501302.pdf
http://artofcommunityonline.org/Art_of_Community_Second_Edition.pdf
http://www.st.ewi.tudelft.nl/~arie/phds/Hemel.pdf
http://downloads.nakedobjects.net/resources/Pawson thesis.pdf
http://homepages.inf.ed.ac.uk/dts/pm/Papers/nasa-manage.pdf
http://carlos.bueno.org/optimization/mature-optimization.pdf
Possibly breached copyright
http://larc.unt.edu/ian/books/free/lnoa.pdf
http://larc.unt.edu/ian/books/free/poa.pdf
http://www.comp.nus.edu.sg/~stevenha/myteaching/competitive_programming/cp1.pdf
http://wps.aw.com/wps/media/objects/5771/5909832/PDF/Luger_0136070477_1.pdf
http://alex.smola.org/drafts/thebook.pdf
http://ai.stanford.edu/~nilsson/QAI/qai.pdf
http://spivey.oriel.ox.ac.uk/~mike/zrm/zrm.pdf
http://www.nobius.org/~dbg/practical-file-system-design.pdf
http://www.dreamsongs.com/Files/PatternsOfSoftware.pdf
Replacements
http://www1.maths.leeds.ac.uk/~charles/statlog/whole.pdf > http://www1.maths.leeds.ac.uk/~charles/statlog/
http://tutorial.math.lamar.edu/pdf/DE/DE_Complete.pdf > http://tutorial.math.lamar.edu/Classes/DE/DE.aspx
Maybe removed for a reason?
http://static.fsf.org/nosvn/faif-2.0.pdf > https://web.archive.org/web/20151122031019/http://static.fsf.org/nosvn/faif-2.0.pdf
https://www.sics.se/~joe/thesis/armstrong_thesis_2003.pdf > https://web.archive.org/web/20150706063330/https://www.sics.se/~joe/thesis/armstrong_thesis_2003.pdf
Unfinished/drafted works
http://ciml.info/dl/v0_9/ciml-v0_9-all.pdf
http://www.cse.buffalo.edu/~rapaport/Papers/phics.pdf
http://www.cs.cmu.edu/~rwh/plbook/book.pdf
Many thanks! It will take me quite some time to handle all of these.
I took a random link from "possibly" : http://www.nobius.org/~dbg/practical-file-system-design.pdf and it took some digging to find this : "it's now out of print, if you click on the second link you can download a copy" on the book's author webpage... Checking all these will be a long process I guess. :)
Yep, this will definitely take a while. What are your thoughts on documents that just sy "Copyright" or © ? I did not do a text search for any terms -- just checked the first few and last few pages.
I'll look through these (slowly) but just looking at the urls...
anything from arxiv.org is cool.
I've previously verified the Open license for the fsf book- "Free as in Freedom".
@eshellman I will cross-out the ones you mentioned are good when I have the chance. :-)
I keep trying to find an automated solution, this is what I found after a
bit of searching. It's not the complete answer I don't think it'll straight
up tell us if a book is free or not but I think it is a start.
https://openlibrary.org/developers/api
On Jan 4, 2016 4:20 PM, "Hunter Stevens"<EMAIL_ADDRESS>wrote:
@eshellman https://github.com/eshellman I will cross-out the ones you
mentioned are good when I have the chance. :-)
—
Reply to this email directly or view it on GitHub
https://github.com/vhf/free-programming-books/issues/1769#issuecomment-168851715
.
http://www.dreamsongs.com/Files/PatternsOfSoftware.pdf is CC BY-NC-SA per the author's website.
http://spivey.oriel.ox.ac.uk/~mike/zrm/zrm.pdf Copyright © J. M. Spivey, 1988, 1992, 2001.
http://spivey.oriel.ox.ac.uk/~mike/zrm/
The Z Reference Manual has now been allowed to go out of print by the publisher, Prentice Hall, but they have kindly returned the copyright to me, so I can make the full text available here.
That's interesting, you have the copyright now? What will it be, CC?
http://ai.stanford.edu/~nilsson/QAI/qai.pdf
from http://ai.stanford.edu/~nilsson/ (author's webpage)
"A free online web version of this book is available at: http://ai.stanford.edu/~nilsson/QAI/qai.pdf. Its pagination is different than that of the print version, but the web version has the advantage that its web links are clickable."
http://larc.unt.edu/ian/books/free/lnoa.pdf
http://larc.unt.edu/ian/books/free/poa.pdf
These are hosted on author's website: http://larc.unt.edu/ian/books/
The license agreement in the files says
"If you wish to provide access to this work in either print or electronic form, you may do so by providing a link to, and/or listing the URL for the online version of this license agreement: http://hercule.csci.unt.edu/ian/books/free/license.html . You may not link to the PDF file."
Of course, the specified url does not exist, but the license can be found at https://larc.unt.edu/ian/books/free/license.html
This sort of license can't be enforced - if so, Google would be infringing. However, for the purposes of FPB, I suggest the url be changed to the (corrected) license url, perhaps with a note to use the drop down menu. Meanwhile, I will reach out to the author and the library at his university, suggesting that other strategies may be more effective.
@eshellman some questions
What link should we then submit for the zrm files?
What about the LNOA/POA books?
@onebree for zrm the link we have is fine. (the quote is from the author)
Will submit a PR to change LNO/POA links.
* [Free as in Freedom](https://archive.org/details/faif-2.0) (PDF) I suggest we keep this one.
@onebree I took the liberty to edit your awesome list to cross a few items!
I think it would be useful to have the urls to check in a PR so that we can comment on them line by line.
@eshellman what do you mean by that? In new PRs, or a PR in respect to this post?
Good idea @eshellman.
@onebree A PR removing all these links such that we can comment directly on the commit whenever we find a link we should keep.
I'll do this PR as soon as I get home.
or, easier, a PR (which never needs to be accepted) with a file check_copyright containing the urls to check
Good idea, even better. Let's talk over there. #1799
If anyone speaks a foreign language and wants to check a file for copyright infringement, please do!
Side note: In some file C, C Sharp and C++ are in the wrong "order"; travis won´t like it
I think we're clean. Thanks everyone. File a new issue id there are new concerns.
|
GITHUB_ARCHIVE
|
Error "false : The term 'false' is not recognized as the name of a cmdlet" for any install/upgrade
I'm getting this error for any upgrade or install I'm running. Trying to do a "choco upgrade all" will fail every package it finds with the same error after asking to run the script.
Calling command ['"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -NoLogo -ExecutionPolicy Bypass -Command
"[System.Threading.Thread]::CurrentThread.CurrentCulture = '';[System.Threading.Thread]::CurrentThread.CurrentUICulture = ''; & im
port-module -name 'C:\ProgramData\chocolatey\helpers\chocolateyInstaller.psm1'; & 'C:\ProgramData\chocolatey\helpers\chocolateyScr
iptRunner.ps1' -packageScript 'C:\ProgramData\chocolatey\lib\ilspy\tools\chocolateyInstall.ps1' -installArguments '' -forceX86 $fa
lse -packageParameters '' -overrideArgs $false"']
false : The term 'false' is not recognized as the name of a cmdlet, function, script file, or operable program. Check t
he spelling of the name, or if a path was included, verify that the path is correct and try again.
At C:\ProgramData\chocolatey\helpers\chocolateyInstaller.psm1:24 char:17
+ $overrideArgs = false
+ ~~~~~
+ CategoryInfo : ObjectNotFound: (false:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
false : The term 'false' is not recognized as the name of a cmdlet, function, script file, or operable program. Check t
he spelling of the name, or if a path was included, verify that the path is correct and try again.
At C:\ProgramData\chocolatey\helpers\chocolateyInstaller.psm1:27 char:13
+ $forceX86 = false
+ ~~~~~
+ CategoryInfo : ObjectNotFound: (false:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
Here is the full debug output for upgrading a package: https://gist.github.com/nvivo/8bcb3646e93b7b707d8e
After the rollback ilspy is not installed anymore even though chocolatey says it is upgraded.
I was getting this error with 0.9.8, then I did an upgrade to <IP_ADDRESS> but I'm getting the same error.
I rebooted the machine after installing <IP_ADDRESS>.
You are getting this with 0.9.9+?
We had this fixed but it looks like I just reintroduced with the release I
did today.
On Sunday, March 29, 2015, Natan Vivo<EMAIL_ADDRESS>wrote:
I'm getting this error for any upgrade or install I'm running. Trying to
do a "choco upgrade all" will fail every package it finds with the same
error after asking to run the script.
Calling command ['"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -NoLogo -ExecutionPolicy Bypass -Command
"[System.Threading.Thread]::CurrentThread.CurrentCulture = '';[System.Threading.Thread]::CurrentThread.CurrentUICulture = ''; & im
port-module -name 'C:\ProgramData\chocolatey\helpers\chocolateyInstaller.psm1'; & 'C:\ProgramData\chocolatey\helpers\chocolateyScr
iptRunner.ps1' -packageScript 'C:\ProgramData\chocolatey\lib\ilspy\tools\chocolateyInstall.ps1' -installArguments '' -forceX86 $fa
lse -packageParameters '' -overrideArgs $false"']
false : The term 'false' is not recognized as the name of a cmdlet, function, script file, or operable program. Check t
he spelling of the name, or if a path was included, verify that the path is correct and try again.
At C:\ProgramData\chocolatey\helpers\chocolateyInstaller.psm1:24 char:17
$overrideArgs = false
~~~~~
CategoryInfo : ObjectNotFound: (false:String) [], CommandNotFoundException
FullyQualifiedErrorId : CommandNotFoundException
false : The term 'false' is not recognized as the name of a cmdlet, function, script file, or operable program. Check t
he spelling of the name, or if a path was included, verify that the path is correct and try again.
At C:\ProgramData\chocolatey\helpers\chocolateyInstaller.psm1:27 char:13
$forceX86 = false
~~~~~
CategoryInfo : ObjectNotFound: (false:String) [], CommandNotFoundException
FullyQualifiedErrorId : CommandNotFoundException
Here is the full debug output for upgrading a package:
https://gist.github.com/nvivo/8bcb3646e93b7b707d8e
After the rollback ilspy is not installed anymore even though chocolatey
says it is upgraded.
I was getting this error with 0.9.8, then I did an upgrade to <IP_ADDRESS> but
I'm getting the same error.
I rebooted the machine after installing <IP_ADDRESS>.
—
Reply to this email directly or view it on GitHub
https://github.com/chocolatey/chocolatey/issues/701.
--
Rob
"Be passionate in all you do"
http://devlicio.us/blogs/rob_reynolds
http://ferventcoder.com
http://twitter.com/ferventcoder
Same issue, can confirm it with <IP_ADDRESS>. Exact same error stack as above.
Fixing. Also, log issues over at chocolatey/choco repo please`
This was also mentioned in https://github.com/chocolatey/choco/issues/216
|
GITHUB_ARCHIVE
|
Understanding and exploiting reentrancy while safeMint()-ing NFTs.
safeMint() function expects that the recipient contract should implement onERC721Received which is basically a sign of the contract supporting ERC721 so that tokens won’t get stuck once someone sends it to that address.
And this is how it works…
In ERC721 OZ implementation whenever a token is transferred to a contract via safeTransferFrom or minted by safeMint() it calls onERC721Received on that contract.
It must return a solidity selector for onERC721Received to confirm the token transfer. If any other value is returned or the interface is not implemented by the recipient, the transfer will be reverted.
Example to understand (and to exploit):
It's setting the price in the constructor and there are two functions:
- buyNFT(): which is a payable function and takes a native token e.g ETH/BNB after that it sets canClaim mapping to true for msg.sender.
- claim(): is for claiming the NFT which uses safeMint() to mint the NFT and sets canClaim to false for the msg.sender.
The way safeMint() checks that the to address can handle the NFT is how we discussed above — by calling onERC721Received on to address. so here control flow moves to to address. and this is where it creates/increases an attack surface.
Let's write an exploit for the above contract :
Attacker smart contract to exploit the NFTContract, Here it is
It has a constructor which takes the NFTContract address so it can be used later and two functions:
- buyAndClaimNftsWithTrick() is the function that carries the whole attack from buying NFT with buyNFT() and claiming it with the claim().
- A malicious onERC721Received() which reenters (claims NFT again) and returns 4 bytes selector.
Let's discuss the exploit first:
When the attacker calls AttackerContract : buyAndClaimNftsWithTrick() it first buys the NFT with buyNFT() now after that it calls claim() it safemints the NFT to msg.sender, Here the msg.sender is AttackerContract which implements malicious onERC721Received which will reenter to call claim() again. if you see the malicious onERC721Received it contains an if statement which will set the cnt to false and will call the claim() again to get an extra NFT.
The intention behind setting cnt to false is simple and that is to avoid a loop when claim calls onERC721Received while safeminting NFTs to AttackerContract.
Let’s test the exploit…
I am using foundry so writing test case in solidity 🔥
The test file contains some variables that we are using in some test functions.
- setUp(): is similar to Mocha’s beforeEach(), Here we are going to create instances with which we are interacting in test functions.
- testBuyAndHack(): is the function that tests the exploit. It transfers 1e18 native token to attacker contract and calls buyAndClaimNftsWithTrick() which performs a reentrancy attack and then on the last line we have an assertion to check the NFT balance of attacker contract is 2 (i.e greater than 1 and we got that extra 1 NFT by reentering )
Why it happened :
If we check SampleERC721:claim() we can see it lacks the Checks Effects Interactions pattern and the nature of safeMint() function implementation makes it vulnerable to reentrancy.
How to prevent it 💡:
This situation can be prevented by following Checks Effects Interactions pattern where canClaim can be set to false before safeminting happens. So even though the caller can reenter but if he calls claim() after reentering then the function will revert because of the check added on the first line of claim() which checks for canClaim[msg.sender] should be true.
Code used in this article is damn vulnerable, do not use in production.This article is for educational purpose only.
|
OPCFW_CODE
|
Are you looking for a single development environment for the entire data science workflow? Do you need a notebook-based environment to query and explore data? Are you planning to develop and train a model, and run your code as part of a pipeline?
Then look no further than Vertex AI. The notebook-based environment of Vertex AI Workbench allows you to run your code as part of a pipeline, develop and train a model, and query and explore data.
Vertex AI assists you with data preparation. You can use Vertex AI Data Labelling to annotate high-quality training data and improve prediction accuracy by ingesting data from BigQuery and Cloud Storage.
To serve, share, and reuse ML features, use Vertex AI Feature Store, a fully managed rich feature repository. Track, analyze, and discover ML experiments with Vertex AI Experiments for faster model selection. To visualize ML experiments, use Vertex AI TensorBoard. Vertex AI Pipelines can help you simplify the MLOps process by streamlining the creation and execution of ML pipelines.
Create cutting-edge ML models without writing code by using AutoML to determine the best model architecture for your image, tabular, text, or video prediction task, or by using Notebooks to create custom models. Vertex AI Training provides fully managed training services, while Vertex AI Vizier provides hyperparameter optimization for maximum predictive accuracy.
Vertex Explainable AI provides detailed model evaluation metrics and features attributions. Vertex Explainable AI indicates the importance of each input feature to your prediction. Available right away in AutoML Forecasting, and Vertex AI Prediction.
Vertex AI assists you in moving from notebook code to a cloud-deployed model. Vertex AI has every tool you need, from data to training, batch or online predictions, tuning, scaling, and experiment tracking.
Vertex AI Prediction makes it simple to deploy models for online serving via HTTP or batch prediction for bulk scoring. You can deploy custom models built on any framework to Vertex AI Prediction, including TensorFlow, PyTorch, scikit, or XGB, with built-in tooling to track your models’ performance.
For models deployed in the Vertex AI Prediction service, continuous monitoring allows for easy and proactive monitoring of model performance over time. Continuous monitoring continuously monitors signals for the predictive performance of your model and alerts you when they deviate, diagnoses the cause of the deviation, and triggers model-retraining pipelines or collects relevant training data.
Vertex ML Metadata automates the tracking of inputs and outputs to all components in Vertex Pipelines for artefact, lineage, and execution tracking, making it easier to audit and govern your ML workflow. Custom metadata can be tracked directly from your code and queried using a Python SDK.
Thus, Data Scientists can use Vertex AI to train models without writing code, accelerate models to production, and confidently manage machine learning models.
|
OPCFW_CODE
|
I’ve been in the industry for many years, however I am totally new to Spring and Couchbase. I think it’s great technology and would like to use the technologies for a project here in CA. I’ve set up the server, populated the buckets, fired up STS and pulled some sample code from github. Does anyone have some simple code that merely retreives a document from the beer-sample bucket using Spring? Most of the examples have been incomplete or I spend hours wrestling with Maven issues
Thank you so much,
great to hear you are trying out Couchbase and Spring Data
Unfortunately, using the
beer-sample bucket isn’t ideal for trying out Spring Data.
Here’s the thing: Spring Data needs metadata it generates in the content of the documents to correctly deserialize them from JSON. This is typically embodied in the
_class JSON attribute. Since
beer-sample is a generic-purpose Couchbase sample, this requirement isn’t reflected in the bucket’s contents.
You would probably be better off working from the default bucket and using Spring Data to both
save and retrieve your entities.
Perhaps the integration tests would be interesting?
For instance, there’s a
Party domain object. You can look at:
- the associated
findByAttendeesGreaterThanEqualis simple but interesting enough, for example)
PartyPopulatorListenerthat generates data (and uses the template’s
savemethod - template is one level of abstraction below repositories -)
- here is an example of a test that uses the repository
Be sure to also read the documentation to familiarize yourself with all these abstractions and how the couchbase spring data implementation covers them.
Thank you for the advice Simon Yes, this is just the thing I am looking for; I’ll start using the default bucket then and of course read the documentation. I come from a relational database background and I think this stuff is exciting
Awesome! Don’t hesitate if you need more information or things start to feel too strange
I have imported the spring-data-couchbase code into STS and I am having an odd problem. When I attempt to build the workspace, I get an error:
internal compiler error: java.lang.NullPointerException at org.eclipse.jdt.internal.compiler.ast.ForeachStatement.resolve(ForeachStatement.java:429) StringN1qlBasedQuery.java /spring-data-couchbase/src/main/java/org/springframework/data/couchbase/repository/query line 0 Java Problem.
It appears the Java Builder in the IDE is having problems digesting StringN1qlBasedQuery.java. The error truly does appear right before the comment section of the class source. Frankly, I’ve not ever seen an error like this. However I am able to build and install without difficulty using Maven. I have tried various JDK compilers without success. I have also tried switching the STS workspace in the event the metadata had been corrupted; no dice. Any ideas? btw: I am using Version: 3.7.3.RELEASE
Build Id: 201602250940
Platform: Eclipse Mars.2 (4.5.2)
I checked the log file and the following was of interest:
!ENTRY org.eclipse.jdt.ui 4 2 2016-04-14 22:52:47.976
!MESSAGE Problems occurred when invoking code from plug-in: “org.eclipse.jdt.ui”.
Java Model Exception: java.lang.NullPointerException
@chasisr23 this appears to be a bug in Eclipse Mars (and it’s even been reported by Oliver Gierke, the Spring Data team lead ).
Looks like it’s been fixed in Eclipse 4.6 Neon, and there are 4.6-based builds of STS available: https://spring.io/tools/sts/all
Great Simon! Thank you, I am downloading the new verison
|
OPCFW_CODE
|
The result of having the ability to answer the questions accurately provides the foundation of continuous IT Optimization.
- Reduce initial CapEx and ongoing OpEx, so you will make, and keeping making, more money!
- Optimize resources for systems of customer engagement
- Deploy and refresh new applications faster
- Respond faster to business spikes
- Prevent business-impacting outages and slowdowns.
The methodology allows for aligned business and IT analytics for enterprise IT optimization.
- Correlate business and IT performance
- Insight into how business process changes impact IT
- Understand and optimize costs by business unit / process and technology
- Insight into business performance across the technology stack.
Automated, Aligned IT Analytics:
- Analyze - optimize decision making, proactively manage true performance, predictively optimize performance and capacity, continuously cost-optimize IT resource decisions.
- Integrate - with the business, business KPI and financials feed IT decisions, IT performance and capacity in business context.
- Automate - repeatable, standardized, normalized, flexible, scalable
Our approach federates existing data into purpose-designed optimized process:
- Technology data (server, network, storage, etc.)
- Service data (catalog, metrics, tickets, etc.)
- Financial data
- Business data (analytics, KPIs, plans, TXNs, etc.)
Automating the analytics across all of these data sources creates a flexible and adaptive management platform for dynamic SDDC environments ultimately transforming raw or commodity data into actionable information for IT. It’s a single pane of glass for all of your IT optimization needs.
Automated, Predictive Analytics - A health forecast embedded analysis gives you a continuous rolling prediction with complete components of response time (latency) that tells you which services will run out of resources, when, and why.
Automated Financial Optimization - Forecasting total costs for applications is valuable for pre-purchase validation, repurposing, and consolidation efforts. Integrated with risk and service management data and fully automated.
IT Power Capacity Optimization - Be proactive on your power optimization - find your actual usage versus data center limits. Allocate power by application / service and project savings via optimization. You can analyze server / virtual server performance, power consumption from DCIM tools, service catalog and CMDB, and asset database together.
Automated Root Cause Analysis - analyze application response time against your SLA’s - spanning servers, virtual servers, network, storage, etc. Using real time application monitoring data, service catalog and CMDB data, you’ll be on top of where you stand versus your SLA.
Financial Optimization Reporting - automatically balance current and future costs with performance and capacity using server / virtual server performance, asset costing (CapEx / OpEx), service catalog and CMDB with power costing data.
Here’s the answers to the questions when you take your SDDC into continuous IT optimization:
|
OPCFW_CODE
|
Is there any way to ensure that a network merge, after a parition, never causes disagreements?
Background: A cryptocurrency, such as Bitcoin, have a global order of all transactions that is guaranteed to be agreed by all participating nodes. With Bitcoin, this is ensured by making the longest chain win, and that creating a long chain requires performing a lot of computations (so it is not trivial to arbitrarily create a longer chain).
A problem comes when there is a long-lasting network partition. Each partition will eventually use its own local order, which is different than every other partition. Later, when the networks merge, a single global order will be enforced based on the network that had the longest chain.
This new global order can be different-enough for the other smaller networks that effectively some of their transactions will disappear if they're invalid as per the global order. E.g. coins mined in a small network partition that required a low difficulty (e.g. finding hashes a small number of zeros) will be discarded as invalid according to the perspective of the larger network which requires solving more difficult problems (e.g. finding hashes with more zeros).
This is a problem, as some payments become zero. Imagine being a business owner that sold and delivered goods in a small network partition in return to some BTC, only to realise that the payment has disappeared from the global transactions order once the partitions got merged back into the main network. Effectively, we will have the equivalent effect of having goods stolen (or given for free).
Question: How to design a cryptocurrency that guarantees that such network merges will never cause any disagreement in payments? Are there any fundamental principles that, if satisfied, we ensure consist money balances even after long-term network partitioning?
I'm leaning towards thinking this does not pertain to cryptography, or/and is insufficiently defined. But a drawback of being a mod is that I can't cast a normal single vote towards closing the question.
@fgrieu - You're likely right. It's a database transactions synchronisation problem. But I got confused as to where to post it, as I'm not sure how much of the limitations in this specific use case (cryptocurrency) is due to how basic cryptographic functions work. I'd appreciate suggestions on where to move this.
@fgrieu Maybe CS
This is a long studied question in distributed algorithms design. There are three network models that are traditionally studied: synchronous networks, partially synchronous networks, and asynchronous networks. You can probably find lecture notes online that describe them better than I can right now. Anyhow, Consensus/State Machine Replication protocols built in the partially-synchronous network model achieve exactly what you describe. Typically a bitcoin-like nakamoto consensus protocol breaks because we need an identity mechanism that persists through a network partition: the problem with nakamoto being that, during a partition, we no longer have the (informal) guarantee that all honest miners work on the same longest chain, and thus competing adversarial chains can grow faster. The only solutions that I've seen thus far are for permissioned settings with an honest (super)majority assumption (which can then be mauled into various solutions for PoS and a sort-of-permissionless setting).
Feasibility of consensus has different requirements in each of the three network models; moreover the MPC literature (which generalizes byzantine agreement) also cares greatly about these different network settings, so I guess it could be considered a cryptographic question (but moreso distributed algorithms).
Thanks a lot. To save time, do you know of any cryptocurrency that already offers a solution to this partitioning problem? Meanwhile I'll search with the keyboards that you offered; highly appreciated!
Noteswise I find that this (rather academic) blog is usually pretty accessible (e.g. https://decentralizedthoughts.github.io/2019-06-01-2019-5-31-models/), and I also believe that most proof-of-stake designs are partially synchronous, but don't quote me on that.
e.g. the ethereum consensus protocol post-merge should be partially synchronous (conditioned on some liveness bugs being fixed)
@Vervious: Very interesting contact point the one via MPC! Could you elaborate a bit more?
caveman Regarding blockchains without reorgs (but with dual, unavoidable stale problem): the ones with BFT-type state machine replication, check for these keywords: Tendermint, Cosmos, Tezos
Really great resource to be serious about all of this: https://www.youtube.com/watch?v=KNJGPI0fuFA&list=PLEGCF-WLh2RLOHv_xUGLqRts_9JxrckiA
|
STACK_EXCHANGE
|
Critics Assess this to "debugging a program into physical appearance" and concern this will bring about additional re-design and style effort than only re-coming up with when demands change.
"As a number of a long time pass by, these developers develop into prospects and software architects. Their titles transform, however the previous legacy of not comprehending, of not getting any architectural encounter, continues, creating a vacuum of good architects.
Permit’s figure out why the assets named IsThisLogError is public. It could be critical/ helpful for other associated classes of an inherited course to be aware of whether or not the involved member logs its mistakes or not.
To some newcomers, association is a perplexing strategy. The difficulties created not merely with the Affiliation on your own, but with two other OOP
emphasis the thought of abstraction (by suppressing the details of your implementation). The two poses a clear separation from 1 to a different.
There are several other ways in which an encapsulation can be used, for instance we will take the use of the interface. The interface can be employed to hide the data of the applied class.
Association is often a (*a*) partnership between two courses. It will allow just one item instance to lead to Yet another to complete an motion on its behalf. Association is the more normal expression that outline the relationship among two lessons, where as being the aggregation and composition are reasonably Distinctive.
Even though R is definitely an open up-source project supported by the Neighborhood find out here developing it, some corporations try to offer industrial assistance and/or extensions useful source for his or her prospects. This section offers some examples of this sort of organizations.
The abstract property named LogPrefix is a vital a person. It enforces and ensures to possess a benefit for LogPrefix (LogPrefix works by using to acquire the detail from the resource course, which the exception has transpired) For each and every subclass, in advance of they invoke a method to log an error.
In the next version of utmost Programming Discussed (November 2004), five years Recommended Site right after the primary browse around this site version, Beck added additional values and methods and differentiated among Main and corollary practices.
Strategy overriding is often a language attribute that enables a subclass to override a certain implementation of a method that's previously furnished by one among its super-lessons.
Need to be capable to lengthen any lessons' behaviors, without modifying the lessons..." Isn't going to describe the theory to the reader...really complicated...even Wikipedia does an even better work describing this theory.
On top of that to discover a category appropriately, you you can try this out need to recognize the total list of leaf-level capabilities or functions on the technique (granular level use situations in the procedure). Then you can continue to team each purpose to kind classes (courses will team exact forms of capabilities or functions).
XP created major fascination among program communities while in the late nineteen nineties and early 2000s, seeing adoption in many environments radically diverse from its origins.
|
OPCFW_CODE
|
Microsoft Forms offers a nice mechanism for collecting survey results, and I often post Forms links to my Teams chats and meetings. But that means I’ve planned for the survey – it takes some time to build the survey, after all! For ad hoc surveys, I’ve been using a third-party Teams app, Polly. Unfortunately, Polly isn’t an approved place for storing company information … so while I’m happy to ask where people want to get lunch or if anyone needs a quick break, I don’t want to ask questions that contain company proprietary information.
Forms has a Teams bot that quickly creates a quick one-question survey. You’ll need to have Forms installed in your Teams space. Click on the “Store” icon, search for Forms, and select Microsoft’s Forms.
Select the name of the Team to which you want to add Forms and click “Install”.
Click “Setup” next to “Bot” to add the forms bot to your Teams space.
Now you can at-mention Forms and create a quick one-question survey. ** This works fine if you are using the Teams desktop client. In the Teams web client, adding the question removes the at-mention link (the @Forms text changes from purple to black). To create quick forms in the web client, I have to type the question/answer bit first, then hit ‘home’ to get to the front of the message and add @Forms **
Help will be displayed to remind you of the question/answer format.
Type your question and answers and send the message.
Forms will create a new post with your survey.
Survey results will be updated in real-time in the thread.
If you want to view detailed results (or export the results to Excel), visit https://forms.office.com The Forms bot creates a “Group form”, so you’ll need to select the “Group forms” tab. Click on the Teams space where you posted the form.
You’ll see the form – they’re readily identifiable because the form name starts with “<at>Forms</at>” followed by the question you posted. Select the form and you can view response details and open the response results in Excel.
One oddity – if you host a meeting in the Teams space, you can at-mention Forms to create a survey in the meeting chat. The response from the Bot – where people vote – does not appear in the meeting chat because the bot response is a new thread.
Team members will find the survey as a new thread in the channel.
This is a little confusing to me, so I just send the message to create the survey in the channel instead of using the meeting chat. Using the meeting chat would, however, associate the survey with the meeting because the message which prompted the form creation will appear in the meeting chat.
|
OPCFW_CODE
|
With every new release of SQL Server, Microsoft claims that it is a game changer. However, in some ways I think it might be true regarding SQL Server 2016. Here are some of the things Microsoft is saying to promote it:
Although the marketing material is nice, there is another reason to be excited about this version of SQL Server, which was developed following a different model:
Personally, out of all the features that have been included, the following items stand out to me as reasons why you may want to adopt this release sooner, rather than later.
From MSDN, “The SQL Server Query Store feature provides you with insight on query plan choice and performance. It simplifies performance troubleshooting by helping you quickly find performance differences caused by query plan changes. Query Store automatically captures a history of queries, plans, and runtime statistics, and retains these for your review. It separates data by time windows so you can see database usage patterns and understand when query plan changes happened on the server.” (https://msdn.microsoft.com/en-us/library/dn817826.aspx)
I have seen this demoed a few times and have played with it a little with some of the release candidates, and it seems to be something that will make DBA’s and consultants lives a lot easier. Here some additional information regarding these items:
- The SQL Server 2016 Query Store: Overview and Architecture (https://www.simple-talk.com/sql/database-administration/the-sql-server-2016-query-store-overview-and-architecture/)
- Understanding SQL Server Query Store (http://rusanu.com/2016/04/01/understanding-sql-server-query-store/)
- SQL Server 2016 Query Store Introduction (https://www.mssqltips.com/sqlservertip/4009/sql-server-2016-query-store-introduction/)
Finally, it seems like reporting services was given some attention in this release of SQL Server! And the changes are NOT trivial. There are some fundamental enhancements that I feel make reporting services a true competitor in this space again. I have been testing out the release candidates and I am very excited to get this into production. Here are some useful posts that will help you get excited about these enhancements:
- What’s new https://blogs.msdn.microsoft.com/sqlrsteamblog/2016/03/18/sql-server-2016-rc1-whats-new-in-reporting-services/
- Roadmap https://blogs.technet.microsoft.com/dataplatforminsider/2015/10/29/microsoft-business-intelligence-our-reporting-roadmap/
- Brand SSRS website https://blogs.msdn.microsoft.com/sqlrsteamblog/2016/03/20/how-to-create-a-custom-brand-package-for-reporting-services-with-sql-server-2016/
Well, I have to say that I am probably as excited about this release as Microsoft is!
Link to SQL Server 2016 download:
|
OPCFW_CODE
|
Ubuntu 18.04 is stuck at loading screen after new installation
I installed Ubuntu 18.04 from a bootable USB last night, but when I tried to login to the operating system I got stuck at a ubuntu loading screen with five dots on it.
I have seen an older question where it might seem to be a graphics problem, and the solution mentioned was to press Ctrl +Alt+f1,f2,f3 etc. to go the shell, but when I do that the shell is not showing up.
What else can I try?
If you press escape, can you see anything? If you hard reset (using the power button), and then boot in non-quiet mode by editing the selected item (press e), removing quiet splash and pressing ctrl-x, is anything interesting displayed on screen?
I did it like you said but still it is stuck on loading.
I didn't expect it to fix the problem. It should make some text be displayed and that text might contain a clue.
It didn't give any error but after deleting quiet splash from the grub menu it runs quiet well in the recovery mode. What do you suspect? What should I do now?
You mean it made the computer boot? Perhaps if you run journalctl you can see evidence of a problem?
Same issue happens for me. It occurs after restart. There was kernal updates
look at my answer here https://askubuntu.com/a/1150970/926999
Ubuntu 18.04 uses Wayland display server which does not work on a few systems.
Try the below steps to make the system boot normally:
Go to recovery mode from the GRUB menu and then boot into the system. Recovery mode uses low graphics and hence will not get stuck at the splash (logo) screen.
Once you are logged in, open a terminal (Use Ctrl+Alt+t shortcut)
Try changing the display server to Xorg in the gdm3 custom conf file using the below command and reboot the system.
sudo gedit /etc/gdm3/custom.conf
Change #WaylandEnable=false to WaylandEnable=false (Basically uncommenting it).
Reboot the system.
This will disable the Wayland display server and make the system to use the Xorg
display server. Your system should hopefully progress to the login screen now with a normal boot.
Let me know if this works.
Reference: https://linuxconfig.org/how-to-disable-wayland-and-enable-xorg-display-server-on-ubuntu-18-04-bionic-beaver-linux
Thanks, It worked for me. i logged in as root in recovery mode, edited /etc/gdm3/custom.conf and modified WaylandEnable=false, using vi editor. Rebooted the system.
I can now stop smacking my head on the table. thank you a million.
It did not work for me unfortunately. I am on ubuntu 18.04 and I have a Dell Latitude. The kernel versions I have available in GRUB are:
5.4.0-42-generic
5.3.0-62-generic
I'm so stuck... this nor anything else I've tried has not helped.
The similar stuck issue occurs when I try to restart or shutdown the system after a new installation of ubuntu 18.04.2 on my laptop.
I solved the problem by switch from Nouveau driver to Nvidia driver, using Software & Updates App, as following screenshot. The problem disappers after using Nvidia driver, so I assume there are some potential bugs in Nouveau driver.
Just for references, here is my video card info:
$ ubuntu-drivers devices
== /sys/devices/pci0000:00/0000:00:1c.0/0000:01:00.0 ==
modalias : pci:v000010DEd00001D12sv00001D72sd00001604bc03sc02i00
vendor : NVIDIA Corporation
driver : nvidia-driver-390 - distro non-free recommended
driver : xserver-xorg-video-nouveau - distro free builtin
$ sudo lshw -c video
# OR
$ sudo lshw -c display
*-display
description: VGA compatible controller
product: HD Graphics 620
vendor: Intel Corporation
physical id: 2
bus info: pci@0000:00:02.0
version: 02
width: 64 bits
clock: 33MHz
capabilities: pciexpress msi pm vga_controller bus_master cap_list rom
configuration: driver=i915 latency=0
resources: irq:128 memory:b2000000-b2ffffff memory:c0000000-cfffffff ioport:4000(size=64) memory:c0000-dffff
*-display
description: 3D controller
product: GP108M [GeForce MX150]
vendor: NVIDIA Corporation
physical id: 0
bus info: pci@0000:01:00.0
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress bus_master cap_list
configuration: driver=nvidia latency=0
resources: irq:133 memory:b3000000-b3ffffff memory:a0000000-afffffff memory:b0000000-b1ffffff ioport:3000(size=128)
I was also facing similar problem. Using nomodeset I can boot into the system but now I solved the problem another way round. Only thing we need to do is to upgrade or downgrade the kernel version.
https://askubuntu.com/a/1014753 This forum has explained how to access advanced option in Ubuntu. Now when we enter into advanced option we can see Linux kernel versions like:
Now navigate to the lower version of kernel using arrow keys but
leave all the recovery modes kernels.
Leaving the recovery modes kernel, select the lowest version of
kernel and hit enter.
Now it will boot normally without using nomodeset. Remove nomodeset
before if you have written it in grub file and update the grub.
Now to make changes permanent go through this below given link and
remove all the above kernels but don't remove the running kernels.
https://itsfoss.com/upgrade-linux-kernel-ubuntu/ This link will guide us through the procedure to update or downgrade the kernel. For me kernel 5.3 worked perfectly.
For the issue of stucking in the Background or Lock Screen, I have tried the way to complete the task.
1. Give the command
$ sudo gedit /etc/default/grub
2. Add nomodeset
Modify the following line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
to the the new one.
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
3. Save and Exit
Reference:
https://itsfoss.com/fix-ubuntu-freezing/
I just had it stuck at the next screen when it should show the login prompt - it just showed a solid wine coloured screen (with a subtle Ubuntu at the bottom).
I was using a virtual machine (VirtualBox) and the fix was to change the Display setting from VMSVGA to VBoxSVGA. Phew!
Edit: Changed to VboxVGA because VBoxSVGA caused hideous total slowdown of guest VM (possibly related to 4k screens or VirtualBox 6.0 - didn't look into it).
I check the Ubuntu 18.04 system. With regard to the Deep Learning scenario such as RTX 2070 Super,the Nvidia Graphics driver has been set to Nvidia-driver-440(open source). It indicates the ppa version. In addition, the setting of WaylandEnable=false is still there.
The question is that it happens sometimes for the Ubuntu 18.04 to randomly stuck in the default Background,Lock Screen or the black screen with the message of /dev/sda2:clean...
With regard to the issue /dev/sda2:clean..., it seems to be the fsck check while Ubuntu 18.04 executes ~30th boot.
Please have a look at the following answer quote.
The message you are seeing is only a result of a fsck or file-system check, which tells you it didn't detect errors (ie. clean) & how many files & blocks it checked. fsck gets run ~30th boot, unless a problem was detected such as a improper shutdown the time before (eg. power-button was used to force shutdown instead of command via sysrq combination etc). The message you are seeing is not a problem, it's a consequence of something done last time it was booted; otherwise your screen is 'black' from your description, i.e. you have a 'black' screen issue. – guiverc Jan 26 at 11:06
I will be pleased to share my findings in the future.
Reference
Nvidia sets a higher bar for the graphics card of RTX 20XX Super. Only higher version of CUDA Driver such as Nvidia-Driver-440 can be allowed for the above-mentioned graphics card. For the older graphic cards such as RTX 2060 with Nidia-430-Driver, Ubuntu 18.04 has no such problem.
Therefore,it is probably the compatibility issue between the newer Nvidia Graphics and Ubuntu 18.04. In addition, Nvidia does not support for the higher version Ubuntu such as Ubuntu 19.04 and Ubuntu 20.04.
If any one has better ideas, please feel free to share.
https://itsfoss.com/fix-ubuntu-freezing/
With regard to the issue \dev\sda2... : clean..., it seems to be the bug of the fsck check while Ubuntu 18.04 LTS with regard to the second scenario as follows.
There are two scenarios, one is that the system go across the the black screen \dev\sda2..., and another scenario is that the system stuck in the black screen.
For the second scenario, I list the new solution to modify file system parameter as follows.
$ sudo tune2fs -c 60 -i 30 /dev/sda2
60 means 60 times turn-on limitation and 30 means 30 days. Either the turn-on times arrives up to 60 times or 30 days, Ubuntu 18.04 will have the fsck--file system check. It seems to be fair to developers.
|
STACK_EXCHANGE
|
It is a no-brainer that with the unabated technology innovation, you need to constantly be on your toes, looking out for the next big thing. And when it comes to the business applications, even more.
Where businesses are reinventing services to provide end-users with better accessibility, interfaces, all in all, a better brand experience, a poor performing language or framework for your app development may kill your chances of success in a cut-throat market.
Rewriting a mobile application from scratch seems like a massive step, especially as it can halt the existing business processes. This makes owners skeptical about taking the leap. But rewriting applications to match the pace of client needs and the competition is a smart, agile approach that every business should consider.
With the introduction of Flutter toolkit, the rewriting process is convenient, and almost uninterrupted. And the result? Expressive, aesthetically pleasing UI’s.
Recommended reading: Top 11 Famous Apps Built With Flutter Framework
It is a cross-platform UI toolkit for web and desktop applications. Complete with a framework, widgets, and tools, Flutter can be easily integrated into your existing application piecemeal, as a module, which can then be imported into your app. Read how Flutter differs from React Native here.
To know how Flutter differs from other approaches, here is an explainer video about app development on the Flutter framework. Also, before knowing more about the toolkit, let’s look at the cases for when rewriting in a new language may be the ideal thing to do for your business.
Here are a few reasons why you should consider rewriting your mobile application in Flutter. Let's take a closer look.
This is usually the case when legacy code (a source code inherited from an older version of the software) is not well-written. Working on another person’s code isn’t an easy task unless it is well-documented with code comments and/or test-scenarios. A lot of effort and time goes into deciphering the pre-existing code, which is why rewriting is preferred.
At times, adding a required functionality or a feature to the application is not possible with the current framework/language of the project. In such cases, it’s always easier to rewrite the code with a more pliant framework or technology.
When developing applications that are meant for different platforms, it is a wise choice to rewrite the code with a framework that allows the deployment of the applications to multiple platforms. This helps in cutting down the extra man-hours.
For example, developing a native mobile app is a collaborative effort of 3-5 android developers, 3-4 iOS developers, and 5-7 web developers, - all of them writing the same functionality for different platforms. If the team instead decides to use a different technology like React or Flutter, it’s possible to get the job done with a total of 3-7 developers in a shorter time.
Note all the time and resources that you’ll be saving!
Refactoring a code means restructuring the existing project’s functionality to improve overall performance. Refactoring does not change the code’s intended functions, and can be the best option to choose when you have:
In most cases, time is a critical factor while developing software as rewriting it with many active users might take a long time. Time that could otherwise be spent to develop new features.
Rewriting a code can exclude some functions which were originally placed to fix a particular bug. But with refactoring, such exclusions can be easily tracked.
If your app is built on legacy code, refactoring should be done as a step-by-step activity within a longer period. This is because the legacy code isn’t easy to rewrite, given the dependencies on the code. But before you decide on using Google’s Flutter toolkit, it is imperative to weigh the pros and cons to help make a successful decision.
Here are some cases when you need to consider rewriting your app using Flutter.
The term MVP stands for “minimal viable product”, which is a great way to assess ideas through feedback from the customer base. As Flutter framework supports multiple platforms (e.g., Android, iOS, web, etc.) from a single codebase, it is one of the best frameworks for building and launching products in the market with multiple platform support.
Consistency is the key to effective branding, as it improves the user experience and brand resonance. Flutter, quite brilliantly, aids to the objective with using a graphics rendering engine for drawing components on the screen. This makes sure all the components look exactly how you want users to see them.
Flutter toolkit provides out of the box tools, widgets and functionality for the developers, eliminating the extra effort put in searching for the 3rd party library by developers. Reflect is a great example of complex interactions and fun animations. Know about the impressive react-native to Flutter transformation.
Flutter believes in write-once and run-everywhere philosophy, which is the reason behind its efficiency in incorporating new changes or fixing bugs. With Flutter, development processes take much less time compared to the native frameworks, which bring in efficiency and scalability.
Here are some cases when Flutter could be not handy solution for you.
When the application requires a lot of memory-sensitive functionality, native frameworks might be preferred. It means that CPU, GPU, or RAM usage is lesser in apps built with native frameworks then with Flutter.
Also, in terms of app size, the apps built with Flutter take more space. For instance, a hello world application takes around 4MB in Flutter but takes only ~500KB when built with the native android framework.
It is mainly because the Flutter app contains dart and C++ libraries to run the app which takes a bit of extra space.
If you need the application to have different look and feel across platforms, then Flutter toolkit may not prove to be the right choice. As it does not use native components for the UI, it will require creating unique components for each platform.
As Flutter is relatively newer, some of the functionality may not be available for it yet.
If you are on time or budget constraint, or if your team isn’t familiar with the workings of Flutter, choosing native framework might be a more viable option for faster iterations.
Flutter framework, although new, has proved to be a game-changer when it comes to cross-platform solutions to app development. Not only has it made the development process simple and nimble (because every UI element on Flutter is a widget, the application layout is so easy to work with), but it has also offered flexible, customisable UI’s.
This is why Flutter is increasingly becoming the preferred choice for both consumer as well as enterprise app development projects.
Got questions or comments about rewriting your mobile app in Flutter? Let's talk
|
OPCFW_CODE
|
Livestreamer Stopped working
I have similar problem as closed isssue #1149 in past few days on 3 fully updated archlinux boxes
cirrus@blade ~ livestreamer -p mpv https://www.filmon.com/tv/cbs-reality1 low
Traceback (most recent call last):
File "/usr/bin/livestreamer", line 9, in
load_entry_point('livestreamer==1.12.2', 'console_scripts', 'livestreamer')()
File "/usr/lib/python3.5/site-packages/pkg_resources/init.py", line 568, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python3.5/site-packages/pkg_resources/init.py", line 2720, in load_entry_point
return ep.load()
File "/usr/lib/python3.5/site-packages/pkg_resources/init.py", line 2380, in load
return self.resolve()
File "/usr/lib/python3.5/site-packages/pkg_resources/init.py", line 2386, in resolve
module = import(self.module_name, fromlist=['name'], level=0)
File "/usr/lib/python3.5/site-packages/livestreamer_cli/main.py", line 3, in
import requests
File "/usr/lib/python3.5/site-packages/requests/init.py", line 60, in
from .api import request, get, head, post, patch, put, delete, options
File "/usr/lib/python3.5/site-packages/requests/api.py", line 14, in
from . import sessions
File "/usr/lib/python3.5/site-packages/requests/sessions.py", line 27, in
from .adapters import HTTPAdapter
File "/usr/lib/python3.5/site-packages/requests/adapters.py", line 28, in
from .packages.urllib3.exceptions import NewConnectionError
ImportError: cannot import name 'NewConnectionError'
You need python-urllib3 1.12+
@intact Thanks for responding, but on all 3 arch boxes this is latest !python-urllib3 as you see it is 1.13.1-1 , i tried rolling back to previous python-urllib3 package to no avail.
when building livestreamer-dev-git i get ...
Cloning into bare repository '/home/cirrus/build/aur/livestreamer-dev-git/livestreamer-dev-git'...
remote: Counting objects: 9122, done.
remote: Compressing objects: 100% (14/14), done.
remote: Total 9122 (delta 4), reused 0 (delta 0), pack-reused 9108
Receiving objects: 100% (9122/9122), 3.58 MiB | 769.00 KiB/s, done.
Resolving deltas: 100% (5164/5164), done.
Checking connectivity... done.
==> Validating source files with sha256sums...
livestreamer-dev-git ... Skipped
==> Extracting sources...
-> Creating working copy of livestreamer git repo...
Cloning into 'livestreamer-dev-git'...
done.
Switched to a new branch 'makepkg'
==> Starting pkgver()...
==> Starting build()...
running build_sphinx
creating /home/cirrus/build/aur/livestreamer-dev-git/src/livestreamer-dev-git/build
creating /home/cirrus/build/aur/livestreamer-dev-git/src/livestreamer-dev-git/build/sphinx
creating /home/cirrus/build/aur/livestreamer-dev-git/src/livestreamer-dev-git/build/sphinx/doctrees
creating /home/cirrus/build/aur/livestreamer-dev-git/src/livestreamer-dev-git/build/sphinx/man
Running Sphinx v1.3.3
Traceback (most recent call last):
File "setup.py", line 75, in
"Topic :: Utilities"]
File "/usr/lib/python3.5/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.5/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/usr/lib/python3.5/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/lib/python3.5/site-packages/sphinx/setup_command.py", line 161, in run
freshenv=self.fresh_env)
File "/usr/lib/python3.5/site-packages/sphinx/application.py", line 127, in init
confoverrides or {}, self.tags)
File "/usr/lib/python3.5/site-packages/sphinx/config.py", line 277, in init
execfile_(filename, config)
File "/usr/lib/python3.5/site-packages/sphinx/util/pycompat.py", line 128, in execfile_
exec_(code, _globals)
File "conf.py", line 7, in
File "/home/cirrus/build/aur/livestreamer-dev-git/src/livestreamer-dev-git/src/livestreamer/init.py", line 73, in
from .api import streams
File "/home/cirrus/build/aur/livestreamer-dev-git/src/livestreamer-dev-git/src/livestreamer/api.py", line 1, in
from .session import Livestreamer
File "/home/cirrus/build/aur/livestreamer-dev-git/src/livestreamer-dev-git/src/livestreamer/session.py", line 12, in
from .plugin import api
File "/home/cirrus/build/aur/livestreamer-dev-git/src/livestreamer-dev-git/src/livestreamer/plugin/api/init.py", line 5, in
from .http_session import HTTPSession
File "/home/cirrus/build/aur/livestreamer-dev-git/src/livestreamer-dev-git/src/livestreamer/plugin/api/http_session.py", line 1, in
from requests import Session, build as requests_version
File "/usr/lib/python3.5/site-packages/requests/init.py", line 60, in
from .api import request, get, head, post, patch, put, delete, options
File "/usr/lib/python3.5/site-packages/requests/api.py", line 14, in
from . import sessions
File "/usr/lib/python3.5/site-packages/requests/sessions.py", line 27, in
from .adapters import HTTPAdapter
File "/usr/lib/python3.5/site-packages/requests/adapters.py", line 28, in
from .packages.urllib3.exceptions import NewConnectionError
ImportError: cannot import name 'NewConnectionError'
==> ERROR: A failure occurred in build().
Aborting...
Maybe you have old urllib3 somewhere in python path.
You can try:
python -c "import urllib3; print(urllib3.__version__)"
turns out i had installed some stuff via pip, after removing them all is well, sorry for noise.
|
GITHUB_ARCHIVE
|
M: Mind-wandering: the rise of a new anti-mindfulness movement - ohjeez
http://thelongandshort.org/society/is-mind-wandering-an-anti-mindfulness-movement
R: kordless
> This observation fits together with studies that show the best way of
> maintaining mental harmony during mind-wandering is to be able to be aware
> of the fact that you are doing it
Given that being aware of something is the primary intent of mindfulness, I'd
say this article's author is misrepresenting what "mindfulness" actually means
by using it to define something which is only orthogonally related.
Technically speaking, anti-mindfulness would be defined as the _intentional_
practice of _not observing_ one's thought or being aware of them. The quote
above seems to indicate one is aware of the process.
I live in the Bay and I have not personally observed a group or movement with
the primary _intent_ of going around trying to not be aware of what they are
thinking or doing. That happens naturally enough in people with their noses in
phones on the sidewalk!
R: dahart
I didn't fully grasp the irony until you quoted it. There is a name for the
state of mental harmony, when you're aware of your mind's wandering:
"Mindfulness".
Mindfulness is thus a prerequisite to wandering, and being anti mindful or
revolting against mindfulness... (Who would actually do or say that?) is
therefore anti wandering as well. QED.
R: kordless
Buddha would have fun with this, I think. :)
R: jskulski
I'm a big fan of day dreaming, and set apart time in my day for it. I guess
mind-wandering is the new term.
But I think the article is protecting the very stressful cycle of thinking
about the same things over and over and over without being able to let go. The
old
"I should go to grocery store to get food" leading to "I should cook more at
home" to "That one time I tried to make tikka masala it came out bad" back to
"I should go to the store"
cycle.
I'm all for the embracing of day-dreaming, but that comes from mindfulness. A
calm, clear place where thoughts are free to bubble up as freely as they are
able to go.
Mindfulness meditation practice is, for me, the gym. It helps me put my mind
to that place, to focus and put 100% into whatever I'm doing, whether that's
chopping carrots, programming, or day-dreaming.
R: codyb
Yes, when I used to meditate (and I'd like to get back into it), I'd split my
sessions into two modes. One where I just let my thoughts go where they
wanted, and one where I attempted to focus on my breathing.
Thankfully I have no trouble with wandering attention when dealing with a
difficult task at work (although, upon success I am generally thoroughly
drained. Attempting to jump into my next task can be quite a tough hurdle
after spending a few hours in the zone intently focused on one particular
nuance of a system.) and I produce plenty of wonderful thoughts when I'm doing
menial things like dishes or riding the train without a book.
In the end, like most things, there's a bit of balance to it all.
R: jmagoon
Are you familiar with any specific meditation traditions? Most historical
traditions split practice into those two categories--concentration meditation,
which is one pointed and object focused, and 'bare awareness', which allows
the mind to wander as it pleases, without judgment, while observing what that
is like.
Just pretty neat if you came up with those both on your own.
R: codyb
Oh no, while my meditations aren't based on anything specific that I can
currently remember, those were more than certainly both read about at some
point and as a firm believer in the value of balance I never chose either or.
R: danieldk
_At the root of this turnaround: the idea that mind-wandering is not a waste
of attention but simply a different kind of focus.
Could this be the beginning of the revolt against mindfulness?_
I think the article (and probably a lot of modern mindfulness courses/books)
conflates mindfulness and concentration meditation. One of the basic
principles of mindfulness meditation is equanimity. In terms of 'mental
objects' this means being non-judgmental towards thoughts, emotions, etc.
Wandering is just one of the many things that occur and when one notices
wandering, one can just observe that. If the wandering continues, it
continues. If the wandering stops, it stops.
Focus can be a side-effect of mindfulness meditation, but is not the goal.
R: gypsy_boots
I was having trouble articulating my thoughts on this article but I think you
nailed it.
I don't think mindfulness and mind-wandering are inconsistent with one
another. In fact as you pointed out, a key tenet of mindfulness is
occasionally allowing our thoughts to wander, but knowing what is happening,
and naming said thoughts.
In fact, mind-wandering paired with mindfulness can be very powerful. We can
allow our subconscious to come out of the shadows, all the while noticing
where it is going. The two don't have to be separated, they can go hand-in-
hand.
R: aaimnr
The dichotomy in the title is false. As the article states: "This observation
fits together with studies that show the best way of maintaining mental
harmony during mind-wandering is to be able to be aware of the fact that you
are doing it." Which means, precisely, that it's a state of mindful watching a
stream of thoughts, without exerting any pressure or expectations of the
result. With mindfulness you can do everything better, including thinking -
the fact that you are aware of it doesn't mean that you overly control or
restrain it. You can let the thoughts flow freely when it's useful, you can
stop them (with some practice) when it starts to be counter-productive.
R: throwaway999888
Really the culture they are tilting against is the go-getter "productivity"
culture that puts everything into two buckets: productive, and not-productive.
Everything that can be directly measured is productive, while other things are
recreational at best. That's according to this 'productivity' culture anyway.
R: aaimnr
Thanks, makes sense. It reminds me a story about some companies blindly
following Google in introducing mindfulness courses to its employees, in hope
of increased productivity. The interesting offshoot was that some of them,
having started meditating, realized that the job is crap and they quit the
company.
R: rloc
Great article. I can confirm I often come up with the new ideas in the shower.
In general I taught myself to alternate periods of deep concentration work
with periods of "pause" where I stop writing code and let my mind "wander" in
relaxing activities. I recommend taking a step back when faced with a
difficult problem to solve.
Usually when I come back to work I take the initial problem with a more
creative approach.
R: pqs
Are we rediscovering the wheel all over again? Millions of people can confirm
this. I also have ideas in the shower, when mindwandering. And also I have
greate ideas when cycling home from work. At work I focus a lot, this allows
me to push work and do a lot of things. But creativity comes in bursts, often
outside the office, while cycling, walking, showering, eating ... but this is
old. All this "new trends", are the same old things with new names. I'm sure
the greeks had discovered most of this millennia ago.
R: dasboth
I think this stems from the fact that other people's experiences are usually
not enough to internalise something. You can tell me about it for hours, but
unless I have a creative thought in the shower myself, I wont really _know_ it
to be true. Once I experience it for myself it'll feel like a genuine
discovery even if other people have been doing it for millenia.
R: pqs
You're right. Each generation, each individual, in fact, must rediscover the
wheel.
The problem is that our world (science, media, etc.) is heavily biased towards
novelty. Somehow, it is compulsory to explain everything as if it was a new
discovery, a breakthrough, when it is older than walking (as we use to say in
the Catalan language).
I'm tired of reading article, after article, about new old things. People
using a brain scanner to "discover" thinks we all know well.
Maybe I should read just read old books and forget the media.
R: bsenftner
I am suspicious of an unintended end result with all the Mindfullness rhetoric
being published. I have three friends very, very into it, and one of them
ain't that bright. Their experience, as they relate them to me, is more like a
fear of ones own thought processes and they appear to be developing thought
phobias. I tell this person that they are misunderstanding something, because
it should not be triggering stress, but there it is, stressing them out that
they "think wrong". Anything that makes a person fear their own thought
process is really, really fucked up.
R: cableshaft
I do worry I 'think wrong' sometimes when I don't concentrate as much as I'd
like or I don't meditate as often as I like, but honestly one thing that keeps
me from doing it that often is that I actually really enjoy my mind bouncing
around from thought to thought as I drift off into sleep.
I'm a creative person, and I've gotten some really great ideas for projects or
solved some things at work by just letting my mind bounce around from idea to
idea, so I'm hesitant to replace that with a still mind. Although every once
in awhile I need to relax or calm down and meditation helps with that.
R: bsenftner
I have been exposed to different forms of meditation, and I personally prefer
the Transcendental Meditation form. Under TM, one repeats a nonsense phrase
while in a quiet place with one's eyes closed. After a bit, a self hypnosis
occurs, and one's mind takes a little acid trip. Afterwards, about 15 to 20
minutes, I feel like a just woke from a refreshing nap and I have great
clarity of mind. Comparing this to Mindfulness, which I am not completely
understanding, seems like a completely different type of meditation that takes
one's presence (what you're supposed to be doing) into the mediation, whereas
TM is like a 20 minute vacation from what you're supposed to be doing.
R: applecore
Mindfulness is backed by thousands of years of evidence-based practice and
millions of deliberate practitioners; this mind-wandering "movement" has, as
far I can tell, a couple of web articles written about it and very few people
intentionally practicing it.
R: lutusp
> ... this mind-wandering "movement" has, as far I can tell, a couple of web
> articles written about it and very few people intentionally practicing it.
Every scientific theory, every invention throughout history, resulted from
people "mind-wandering", imagining a reality other than the one in front of
them. Mindfulness is not the only legitimate mental activity, in fact it's a
fad with a name and some slogans surrounding one of many equally legitimate
mental states. There's nothing wrong with it, unless people start thinking of
it as the only legitimate mental activity, a human failing with a long sad
history.
R: tjl
One can be "mind-wandering", but still mindful. One meditation technique is to
let yourself think random thoughts, notice the thought, and let it go.
R: dahart
I like this article, mind wandering as a concept resonates with me in the
sense that I believe it is important and good to spend time exploring thoughts
and ideas without any external pressures, especially time.
But my gut reaction is it's sad that mindfulness is being demoted in order to
promote wandering. It makes a good story, I guess, to have an antagonist, but
in my mind there is no conflict whatsoever between mindfulness and "mind
wandering". The insistence in this article that there is a conflict gives me
the impression that the author doesn't truly understand the practice of
mindfulness.
R: dbpokorny
> You know how it goes: one moment you're reading or driving, the next you're
> off in a daze, thinking about what you should have for lunch, or running
> through to-do lists in your head.
There must be a word for this moment, when the primary focus of attention
shifts from the near physical environment to thoughts that are "mentally
near". I certainly know the subjective experience of what is being described,
but I lack the words to express it. It superficially resembles a context
switch.
R: ThrustVectoring
I made a gigantic leap in interpersonal skill and ability to make eye contact
when I started noticing that kind of mental motion, and started using it as a
cue to look at the person I'm talking to.
R: seivan
I need the opposite. A way to tell my mind to shut up. Always overthinking.
It's a consonant buzz which makes it hard to go to sleep. Coffee helps to make
things more focused but it doesn't make it quieter.
R: aaimnr
Meditation, including mindfulness meditation, is such a thing. After a couple
of days on a silent mindfulness retreat of constantly watching your mind
(watching doesn't mean 'thinking') there's a moment when the thinking stops.
It's scary at first, because it's such unusual for us, but with time, when you
learn how to trigger it at will, you start to appreciate this amazing relief
that comes with it. And by the way, your cognitive functions do not weaken -
on the contrary, the mind becomes more efficient. I couldn't put it better,
you very rightly associate the constant mind chatter with stress - that's one
of the main causes.
R: lutusp
> Mind-wandering: the rise of a new anti-mindfulness movement
This is absurd. Contrary to the efforts of ideologues, Mindfulness isn't a
movement, it's an idea, and there are other equally valid ideas, like creative
daydreaming. There's no actual conflict between choosing to pay attention to
one's surroundings, and paying attention to one's inner creative voices --
they're complementary, non-conflicting states.
Only in the field of human psychology can an obviously legitimate mental state
be described as an "anti-mindfulness movement", except for people whose minds
have a maximum capacity of one trivial idea.
|
HACKER_NEWS
|
Felipe, did you manage this problem?
[I apologize in advance if this reply is off-topic.]
@andymp, thanks a lot for sharing these awesome customizations with this community.
I gave the modifications you described on github a try, with a local OJS2.4.6 install I had. After the initial OJS install, everything worked fine in my browser, but as soon as I finished all the modifications you described, the page “broke”. I cleared all cache, and even restarted my local server, but still the page won’t open in my browser. I just get a white page. …viewed page source, and it has nothing (it seems the page just stops at the html opening tag).
Please I’d really be grateful if you (or anyone) would offer some advice.
PS: sorry for my bad English.
This topic taught me a lot about the customising of the ojs.
i am new to eyes and I find it very interesting topic but installing I get the following error
Warning: require_once(lib/pkp/classes/config/Config.inc.php): failed to open stream: No such file or directory in C:\xampp\htdocs\ojsm\lib\pkp\includes\functions.inc.php on line 30
Fatal error: require_once(): Failed opening required ‘lib/pkp/classes/config/Config.inc.php’ (include_path=’.;C:\xampp\htdocs\ojsm/classes;C:\xampp\htdocs\ojsm/pages;C:\xampp\htdocs\ojsm/lib/pkp;C:\xampp\htdocs\ojsm/lib/pkp/classes;C:\xampp\htdocs\ojsm/lib/pkp/pages;C:\xampp\htdocs\ojsm/lib/pkp/lib/adodb;C:\xampp\htdocs\ojsm/lib/pkp/lib/phputf8;C:\xampp\htdocs\ojsm/lib/pkp/lib/pqp/classes;C:\xampp\htdocs\ojsm/lib/pkp/lib/smarty;.;C:\xampp\php\PEAR’) in C:\xampp\htdocs\ojsm\lib\pkp\includes\functions.inc.php on line 30
Quizás quisiste decir: me podría ayudar
can you help me
Hello - i am trying to use your customisation for my journal…
But i can’t understand how to config my ojs (disable_path_info, base_url’s, restful_urls) and .htaccess file…
How to make my journal default journal same way like at your site… i can’t find info at forums… and how to rule paths without editing all your templates - could u send please example of your .htaccess
And also strange - language switcher disappeares inside journal
Help please as u can
|
OPCFW_CODE
|
Finally I forced myself to spend some time with photo editing software in order to make this mockup. As I mentioned in my recent post I had which was something like bespin is. When bespin was lunched, I had to kill my pet ;( I have to admit that I really like ideas Bespin is build on, not to mention that my pet was far behind :) Anyway I had one interesting idea for my project which was not on a bespin’s roadmap, since I thought idea was cool I tried to promote it on . I did not got much attention, guess because of a bad description, so I have decided to write this post to show all the potential of it.
Personally, I thing that idea of having many Bespin instances on different web servers and different backends is very wrong, cause most of the developers including myself would like to have a single cloud based editor with all the setting and plugins configured, so that it’s ready to be used it at office, at home for personal projects, for local files, etc…
I want to be able to use it from any computer I will have to work with, no matter if I’m online or not. Thinking of synchronizations of a settings, plugins etc.. between servers makes me think of the editors we’ve have today. I do think bespin can be and should be different :) To ambitious, I know, but lets brainstorm some ideas I have:
This image shows how bespin dashboard can look like. In the first column you can see groups (Bespin, github, Private, local, work) containing projects. Each of those groups is a representation of a connections with a hosts (bespin backend).
Lets go through each group too see more details and examples.
- Connection: Bespin
That’s a connection with the server https://bespin.mozilla.com/. Projects listed under this group are the projects which are hosted on https://bespin.mozilla.com/ server. Connection label Bespin is displayed in orange color, what means, that connection is in online mode.
- Connection: github
That’s a connection with the server https://github.com/bespin/. Projects listed under this group (, seethrough_js, helma-ng) are hosted on the github server. Here, as in previous example connection label is displayed in orange color, what means that connection is in online mode. Unlike in previous image you can see fields (label, host, username, password) here. That’s because user have opened settings for this connection. I guess you’re really confused now, but let me explain what I mean. Idea is that user is able to create connections with a different backends. Let’s assume that github also hosts bespin. Adding connection to github bespin account should be possible even if user is on the https://bespin.mozilla.com/ since bespin will know host for each project, when user will try to open a file from project under connection github,
XMLHttpRequestwill be sent not to the https://bespin.mozilla.com/ where bespin is loaded from, but to from https://github.com/bespin/. (Lets ignore xhr restrictions for now :). In setting user can change the label for connection, or even host url and login info.
- Connection: private
That’s a connection with the server https://privatehost.local/. Lets suppose it’s a connection with host which is not accessible now. But using gears / html 5 offline storage projects and files under this connection are still available, just like in gmail :). The Violette connection label means that host is in a flaky mode. User can edit files save changes, besically do whatever he is able while being online and all the changes will be synced with a corresponding server when connection will go to online mode.
Violate file titles mean that this files have local changes which will be synced in online mode.
- Connection: local
That’s a connection with the server http://localhost:8080/. There is no projects listed under this group, because connection group is folded. As you might noticed in all the rest images there was “-“ sign in the right corner of the connection, meaning that you can fold this group. In this case there is “+” sign meaning that connection group is folded and it can be unfolded. Here, as in first 2 examples connection label is displayed in orange color, meaning that connection is in online mode.
- Connection: work
That’s a connection with server https://intranet/. Connection label is displayed in gray and no projects are listed under. This means that host is unreachable and this connection is in offline mode. In 3rd example host was also unreachable, but connection was in flaky mode. Reason is that in that example host had support for local data caching and synchronizations. In this case host doesn’t supports / restricts local data caching, there for no data is available when host is unreachable.
Hope that now whole picture and the idea is much clearer. Of course there’s a lot’s of valid questions regarding how to implement this, but it’s a totally different topic. There’s already Cross-site HTTP requests in firefox 3.5 and hope more browsers will follow this example. We could use gears or html 5 offline storage for working offline etc. But the idea of this post was to suggest ideas and then start discussion around them.
|
OPCFW_CODE
|
Date(s) - 16/06/2020 - 17/06/2020
Κατηγορία(ες) Δεν υπάρχουν κατηγορίες
Join Thousands of Developers, Engineers & Technical Leaders at the World’s Largest Virtual Developer & Engineering Conference.
DeveloperWeek Global 2020 is the world’s largest virtual developer & engineering conference, where thousands of participants from across the globe converge online for 100+ keynote & technical talks, developer technology virtual pitch contest, developer awards, virtual hackathon, virtual expo, and prizes for supporting the developer community.
In 2020 the world has faced a global health crisis from the coronavirus, but we as global citizens are able to organize and come together, and support each other professionally within online communities.
DeveloperWeek Global is produced by the DevNetwork team. We attract 20,000+ on-site attendees annually at our events DeveloperWeek SF Bay, DeveloperWeek New York, DeveloperWeek Seattle, and more.
Conferences, Summits & Workshop Tracks
At the center of DeveloperWeek Global is a Global Dev Summit addressing best practices for managing remote workers and coordinating your global dev team. Additionally, we will host a Dev Leadership Summit that will teach dev professionals how to grow their skills from developer to team lead to engineering manager.
DeveloperWeek Global spans all developer innovation tracks and topics including Artificial Intelligence dev, API Dev, Microservices, Containers, Kubernetes, and 100+ new developer technologies.
What does it take to progress from software developer, to team lead, to engineering manager? We ask dev executives about their career paths, and their advice for ambitions developers looking to level up their careers.
Global Dev Management
The Global Dev Summit invites technology leaders across the world to discuss how they build and manage remote teams and global projects. Many top tech companies coordinate their software development across nations, come hear their best practices.
Microservices architecture is an emerging software paradigm that moves engineering from monolithic to service-based design. Learn what microservices architecture is, and hear from companies who have successfully implemented microservices projects.
Developer Technology Innovation
We are inviting dozens of the leading developer technology producers to talk about the latest updates in their products. Come get a landscape view of 2020 developer technology innovation.
Containers & Kubernetes Innovation
IT Management has moved towards DevOps management, and containers & kubernetes are the latest innovation in DevOps. Get introductory and practitioner education on why you need to implement containers & kubernetes.
Artificial Intelligence Innovation
How are developers implementing machine learning, AI, and data science solutions within their projects and businesses? Come discover the new AI architectures and libraries.
|
OPCFW_CODE
|
Remove SC fence in try_advance
Since an SC fence is issued when obtaining the guard, try_advance
doesn't need to issue another fence if it uses the guard's local epoch
for advancing the global epoch.
cargo bench results:
master
test multi_alloc_defer_free ... bench: 3,007,952 ns/iter (+/- 128,911)
test multi_defer ... bench: 1,595,657 ns/iter (+/- 24,119)
test single_alloc_defer_free ... bench: 41 ns/iter (+/- 0)
test single_defer ... bench: 12 ns/iter (+/- 0)
test multi_flush ... bench: 17,557,440 ns/iter (+/- 344,195)
test single_flush ... bench: 76 ns/iter (+/- 0)
test multi_pin ... bench: 3,376,356 ns/iter (+/- 93,846)
test single_pin ... bench: 4 ns/iter (+/- 0)
this patch
test multi_alloc_defer_free ... bench: 2,990,410 ns/iter (+/- 106,550)
test multi_defer ... bench: 1,593,113 ns/iter (+/- 20,050)
test single_alloc_defer_free ... bench: 40 ns/iter (+/- 0)
test single_defer ... bench: 12 ns/iter (+/- 0)
test multi_flush ... bench: 15,167,083 ns/iter (+/- 690,042)
test single_flush ... bench: 42 ns/iter (+/- 2)
test multi_pin ... bench: 3,300,440 ns/iter (+/- 48,145)
test single_pin ... bench: 4 ns/iter (+/- 0)
@tomtomjhj would you please look at the CI failures?
The first one is a rust edition issue (old rustc doesn't support 2021 edition?) and the other is clippy lint for crossbeam-utils. I think it's better to address these in another PR.
CI failure has been fixed in #758.
pin_holds_advance test fails when I add artificial delay here:
https://github.com/crossbeam-rs/crossbeam/blob/858bc6e03e1a688368262a4f43e8e35db69ac522/crossbeam-epoch/src/internal.rs#L466-L468
This happens when the global epoch decreases, which is in contrary to this comment: https://github.com/crossbeam-rs/crossbeam/blob/858bc6e03e1a688368262a4f43e8e35db69ac522/crossbeam-epoch/src/internal.rs#L338-L343
So crossbeam is relying on the additional global epoch load and fence in try_advance for the monotonicity of global epoch. This is an important assumption in Jeehoon's proof. However, it seems that this point is not proved in that document, and I couldn't figure out why that works.
A relatively simple solution for enforcing monotonicity is to update the global epoch with a (relaxed) CAS instead of store.
|
GITHUB_ARCHIVE
|
Added support for bullet listing in group-by customization editor ✅
New WISWYG editor for group-by field customizations now support the bullet listing applied to group by field variables.
Minor bug fixes & improvements 👍
Few fixes & improvements related to support for sprint variables and markdown templates.
Fixed ! Rule actions triggered multiple times for version release trigger 🐞
There was a bug due to which rules with version release trigger were getting executed multiple times. This has been fixed now.
Customization of 'Group by' fields layout is now completely flexible 👏
'Group by' field WYSWYG editor is here. As opposed to the earlier 'custom CSS' approach, we now have released a full-fledged WYSWYG editor to tackle the group by formatting.
Note: This change is applicable only for the new templates and existing templates will continue to show earlier 'custom CSS' approach.
Support for sprint variables is here 🥳
If you use 'sprints' in your agile software development, you can generate release notes for your sprints created in Jira through ARN. Just like version variables, ARN now supports sprint variables in the templates and JQL sections. Plus, we have added ability to trigger a rule automatically when a sprint is marked completed in Jira. Read more.
Ability to insert expand macro in confluence template 🙌
“Expand’ macro is one of the Confluence's most popular macros. ARN confluence template now supports ability to include expand macro. Read more.
Remote pdf generation service introduced as a backup 🐞
A couple of customers are facing problems while generating release notes in pdf format. While the behavior is very peculiar, we are still not able to reproduce it and thus fix it. Thus, a temporary remote service is being allowed to be used for PDF generation. This service will be used for a couple of months until we release a new version of the app that includes a different library for pdf generation.
Fixed a bug preventing users to view logs in a certain scenario 📝
A bug where logs were not accessible when the rule creator was deleted from Jira is fixed
PDF library upgrade ⬆️
Library used to generate the documents in PDF format is now upgraded to the latest available version.
Improvement in rule execution logs !
When a rule is triggered, entry for individual rule actions which are queued for execution, shall be seen in the logs immediately. This will give quick confirmation to the user that rule is triggered successfully. Status of individual rule actions shall get updated when they get executed one by one.
Notification to admin while disabling 'run rule as' setting ⚠️
Admins shall be notified with list of projects which have active rules with rule actor ‘App user’ when they disable ‘run rule as’ setting. This will help admins to identify and disable such rules if required. Read here more about ‘run rule as’ setting and its impact.
Now, create and execute rules from cross project ARN screen 📢
Ability to create rules and publish release notes from cross project ARN screen is here. You can create rules for email, confluence, POST action and JSD announcement actions with manual trigger. You can also see rule execution logs within cross project ARN screen.
|
OPCFW_CODE
|
How to disable deprecated messages in Joomla?
I use Joomla v1.5, and after installing same components I got deprecated messages - how can I disable this messages in Joomla? I can't disable it in php.ini, because I don't have access for php in server.
How about instead of ignoring warnings about your code being extremely outdated , you would fix them instead.
Put in the index.php file, after this line define( '_JEXEC', 1 );, this statement:
error_reporting(0);
or, as pderaaij says, use:
error_reporting(E_ALL ^ E_DEPRECATED);
As he says,
In this way all of the other errors are shown, except the deprecated
messages.
A bit nicer: error_reporting(E_ALL ^ E_DEPRECATED); this way all of the other errors are shown, except the deprecated messages.
placing error_reporting in multiple locations inside the code is bad practice as it should ideally only be set once. The Joomla 1.5 core code sets the error_reporting as appropriately configured in the global configuration.
ghbarratt In my case, setting it in the global configuration didn't work. Doing Aurelio De Rosa's answer did, without giving my site 500 page error. And your solution in the htaccess file gave my site 500 errors, so let's agree that there are many ways to skin a fish.
The ideal solution is to set the global configuration setting called "Error Reporting" to either "None" or to "System Default" and then set the "system default" through the use of an .htaccess file on the web root or in the httpd/apache conf.
To set the value in the .htaccess file you can use:
php_value error_reporting 22527
(you can verify this value using php: echo E_ALL ^ E_DEPRECATED;)
AurelioDeRosa's answer proposes to "hack" (!) the core of Joomla. As stated in my comment, placing error_reporting in multiple locations inside the code is bad practice as it should ideally only be set once. The Joomla 1.5 core code sets the error_reporting as appropriately configured in the global configuration.
I have added this at the begining of index.php and it fixed it:
ini_set('display_errors','Off');
error_reporting(E_ALL ^ E_DEPRECATED);
Thanks Aurelio De Rosa
I added the following in .htaccess file in joomla:
php_value allow_call_time_pass_reference 1
and it worked like a charm. Thanks
You can try add to index.php
ini_set('allow_call_time_pass_reference', 1);
or to .htaccess
php_value allow_call_time_pass_reference 1
But it depends what is causing the problem and also server configuration if you are allowed to change php configuration.
This is not a good tip as you can read here: "In PHP 5, allow_call_time_pass_reference is deprecated, in versions prior to PHP 5.3.0, use of this feature will emit an E_COMPILE_WARNING, and in PHP 5.3.0+, the warning is a E_DEPRECATED notice."
|
STACK_EXCHANGE
|
[fprint] uru4000 reader not working correctly with libfprint, please advise
J.Kerssemakers at cmbi.ru.nl
Thu Apr 26 07:02:53 PDT 2012
For a forensics practical at our university, we want to obtain fingerprint
images, so the students can simulate the process of feature-detection etc.
on their own fingerprints.
For this purpose, we've obtained two DigitalPersona U.are.U 4000 sensors.
We tried to use these with fprint_demo, but though they are correctly(?)
detected, actual fingerprint detection didn't work.
1. Start fprint_demo
2. console prints "** Message: now monitoring fd 10"
Device is recognised: driver uru4000, imaging device, status: "Device
ready for use"
3. After clicking "Enroll" for any finger, a pop-up window appears
asking "scan your finger now", with a nice progress-bar.
4. The scan-light on the reader comes on,
5. BUT touching my finger to the reader doesn't provoke any response.
Touching for short or long periods of time doesn't change anything.
6. The Cancel and Quit buttons still respond normally (so main
event-loop hasn't crashed, apparently)
7. upon Cancel-ing, the scan-light goes off again.
8. upon Quit, console prints "** Message: no longer monitoring fd 10"
(no console messages appear in-between)
I also tried the sample programs from the DigitalPersona linux SDK, and
these were able to collect and verify fingerprints from the same reader.
However, the SDK-samples don't spit out the captured ((semi-)raw) image,
which is what we want.
1) What can we do at our end to get the fprint-demo program to scan
fingerprints from our reader?
2) How fares the "image capture" part of fprint_demo (and will it allow
3) Is there any information I can send you to help you diagnose/fix this
problem? (command outputs, traces, etc? when provided with instructions, I
can run USB-sniffers and the like)
- Reader: DigitalPersona
"U.are.U 4000 sensor" (non-branded, with DigitalPersona logo on front)
USBID (lsusb): Bus 004 Device 005: ID 05ba:0007 DigitalPersona, Inc.
- fprint obtained from ubuntu-fprint PPA:
libfprint0, version "1:0.4.0+git20110418-0ppa1+debug1~lucid1"
- OS used:
Ubuntu 10.04 LTS
Linux 2.6.32-40-generic #87-Ubuntu SMP Tue Mar 6 00:56:56 UTC 2012
-------------- next part --------------
An HTML attachment was scrubbed...
More information about the fprint
|
OPCFW_CODE
|
Not a movie, but it was a miniseries...
What about Farscape?
John Crichton: "Bill Gates can't guarantee Windows, what makes you think you can guarantee my safety?"
Had to find *something* to use the "Evil BG" icon... ;-)
13 posts • joined 14 Mar 2007
Last time I compiled from source, PostgreSQL used 22Meg (including VM needs) of memory - MySQL? 115Meg! Given several hours, I was able to get that trimmed down to about 35Meg... but still. The reason for MySQL way back when was "small & fast." Now it has neither...
Makes me glad I started with Postgres all those years ago, and stuck with it; for me, MySQL has nothing but disadvantages now.
Sheesh. What a straw-man argument! Other than a few words off of the local Greek-American restaurant's menu, I don't speak a lick of Greek... but I can program in APL *just fine.*
If your example had *any* merit, programmers would need to "Speak" assembly... how many languages is LEAX -1,X or ASLB a real word in, anyway?
[[ OK, for those few of you that recognize the above as Moto 6809 assembly, you may also note that there _is_ an SEX instruction. As much as I wish it worked on the wifey, it doesn't... but it will Sign EXtend the B accumulator across A. ;-) ]]
" There is a more pressing technical issue at hand: the fact that the current internet protocol (IPv4) will run out of addresses, oh, say in four years or so. "
10 years ago, we were going to run out of IPv4 addresses in 3-4 years. If we were so hard up for standard IP addresses, how can the spammers get hold of, and subsequently abandon, so many of 'em that the good guys can't keep up???
We need to go back to the "good old days" - Prove you need a Class C, and spank the owner of said Class C if it gets misused... stop handing them out like candy and take care (read: responsibility) of the ones you have.
IPv4 gives *almost* one IP address per human on the planet. If you break that down to one IP address per family, we're good to go until world war 7. Explain to me why we need more IPv6 addresses (by almost double) than the number of square meters of land mass of the earth? So spam can keep up with Moore's Law?
""" He's the worst type of person to be in such a dangerous situation: an amatuer.[sic] """
Remember: Amateurs built the Ark, professionals built the Titanic.
Just another damn yank chiming in, but if there were more guys like this (unarmed but willing to rush armed people) then maybe on our side of the pond, just maybe 9/11 wouldn't have happened.
IMHO, and all that jazz... but I still think the guy's a hero (maybe not the smartest on the block, but nonetheless...) and should be applauded for his efforts.
Quoting: "but weren't sure if that was RHEL or the ThinkPad's problem. It was some sort of DHCP issue; the install machine hadn't been plugged into the LAN when it booted. A reboot with the net connected solved this, though perhaps one could just bounce the interfaces."
Quite often the default for laptops is if there's no active physical connection on the LAN port on boot, the interface is disabled as a power-saving feature... my Fujitsu Lifebook is that way. In that instance, a full reboot would be necessary to bring up the Ethernet interface.
Quote: "As for subsidising MS (and please, stop using the "M$" thing and start communicating like a grown up) no one is putting a gun to your head and making you but [sic] it; you don't have a right to anything they produce and as far as I see they can charge any amount of money they want to it. You don't have to buy it, after all."
When I purchased my Fujitsu Lifebook P2120, not only was I *forced* to purchase MicroFlaccid (is that better than M$?) Windows XP Pro with it, I was told that my hardware warranty was void if I removed it! M$ tells OEMs that if they give end users a choice of OS, they'll revoke *all* the OEM's Winders licenses. How is that not "forcing" people to purchase Winders?
Yes, the laptop runs Linux (that's my job) and yes, I'm glad I had the foresight to ghost the hard drive before I installed it, because I did have a warranty issue with the LCD, and I had to restore the original partitions on the drive before I sent it in for repair.
Quote: "I say protest your girlfriend or your wife's TV habits, or force her to watch old Star Trek reruns, with samplings from Magnum P.I. Miami Vice and the A-Team."
Makes me damn glad my wife prefers Stargate SG-1! And no, I'm not gonna bugger with you Brits, because even tho you may have started things like AI & Big Brother, you also gave us Dr. Who & Benny Hill! Thankfully, I grew up watching the latter as I live on the Canadian border, and could pick up the CBC broadcasts of BH.
P.S. IMHO, Brits *don't* have any reason to complain about base/basketball... you have cricket, after all. When most people who know how it's played can't explain how (or in some cases why) it's played... ;-)
Quote: "I wonder if there would be a market for a bottled beer with a picture of an ugly woman. So you can tell when you've had enough."
We Americans did that *long* ago - The beer: Olde Frothingslosh, the pale stale ale with the foam on the bottom! The picture on the can features the fictional "Fatima Yechburgh":
I don't think the ploy worked, though. How's that saying go? "Beer: Helping ugly women have sex since 1869" or somesuch? Methinks it goes back _much_ farther than that, and I'd wager it's just as helpful for the ladies when presented with the dregs of man-dom. Hell, I'm married, that must be proof! I doubt a picture on a can will stem the tide. ;-)
Back when I was asked to build a help-desk system in VBA for Access (versions 2.0 and later 7), the F1 key was my friend - heck, you could learn everything you needed (at least I did), with cut-n-paste examples that actually made some sense, just from F1 & a simple search. Never bought a book -- never needed to. Starting with Access 97, things went downhill... and fast... and never recovered.
It sucks that over a decade ago, MicroSoft actually knew how to "do it right" and b0rked it up so bad in the name of "progress." Sheesh.
Roger "Merch" Merchberger
Biting the hand that feeds IT © 1998–2020
|
OPCFW_CODE
|
Romaine D Coley
What is your next adventure?
The pandemic has thrown a lot of plans out the window, but not all. I have just passed the Google Cloud Certification so I am hoping to work for Google when the pandemic lets up and things open again. After that, I will pursue a graduate degree that focuses on machine learning.
What about your next adventure are you most looking forward to?
Education has been a long journey - 15 years, really, of studying and working hard, so I am looking forward to applying what I've worked to master. To see something tangible after all of these years will be something exciting. It will also be nice to be able to pay off my student loans. More immediately, my parents flew here from Jamaica and have been stranded here because of the pandemic. So we'll be celebrating, certainly.
Did you have any previous co-op, internship, or research experience in this area?
As a transfer student and as a non-resident -- I'm from Jamaica -- I anticipated from the beginning that there wouldn't be many opportunities for me to do an internship during the two-and-a-half years I've been at Tech. Aerospace engineering as a field is pretty lock-down in terms of citizenship. But I went to the Career Fairs and I went on interviews and had a few callbacks. It was worth it, to see how many companies and options are out there. And I had a lot of classes to take. I came to Tech because I wanted to build planes, and, in my senior year, I was on a team for the Boeing AerosPACE project where we built a UAV from scratch. And it flew. I was the control systems leader on the team, where we remotely worked with students from Purdue and Iowa State.
The area where I'd really like to do research - machine learning - I didn't really know about until my final two semesters, when I took a class with Professor Theodorou.
How did your educational experience at Georgia Tech help you to achieve your goals?
At Tech, I learned not just how to pass a test, but how to really understand the material. Because you'll have to come back to it at some point, and you want to have something to draw on.
This lesson got driven home during one of my first semesters at Tech, when I took System Design and Vibration with Professor Haddad. It's a hard class and one that every aerospace engineer has to take. I had not done modeling for a few years, so I was struggling. In fact, my first test, I absolutely failed. I had always been a straight-A student, and I was determined to not fail. But the only time I could make it to office hours was after I finished work, when most people were gone. I approached Dr. Haddad and told him, straight out; 'If you are willing to work with me, I'll do all of the work it takes to pass this class, to really understand the material.'
He said he was willing. And he was. He'd meet with me for an hour or more after everyone was gone, and I'd go study on my own for hours, filling up notebooks. The next test, I didn't just pass, I broke the curve.
What Dr. Haddad showed me was what it takes to succeed at being an aerospace engineer. It's not just a lot of work. It's a lot of effort. And he was willing to put that effort toward me.
What advice would you give to an underclassman who would like to follow the same path?
Well, most students start out trying to register for classes that don't meet at 8 a.m., and that's not the way to go. I took a lot of the harder AE classes so I could get them out of the way, leaving the AE options classes for the end. I wish I'd paid more attention to those options classes earlier on. They are classes that can introduce you to a subject that could change your direction. When I came to Tech, I had expected that I would master CFD and machine learning, but I had no idea that I'd find them both so interesting. It was by taking a machine learning class during my final semester that I found something that really fit me. It's what I want to continue learning.
|
OPCFW_CODE
|
import functools
from bson import ObjectId
from flask import Blueprint, redirect, request, flash
from flask_login import login_required, current_user
from models.campground import Campground
from models.review import Review
blueprint = Blueprint('reviews', __name__, template_folder='templates')
def author_required(func):
@functools.wraps(func)
def wrapper_author_required(campground_id, review_id):
try:
review = Review.objects.get(id=ObjectId(review_id))
except:
flash('Cannot find that review!', 'error')
return redirect('/campgrounds')
if current_user.is_anonymous or current_user.id != review.author.id:
flash('You do not have permission to do that.', 'error')
return redirect(f'/campgrounds/{campground_id}')
return func(campground_id, review_id)
return wrapper_author_required
@blueprint.route('/', methods=['POST'])
@login_required
def post_review(campground_id):
campground = Campground.objects.get(id=ObjectId(campground_id))
review = Review(**request.form)
review.author = current_user
review.save()
campground.reviews.append(review)
campground.save()
flash('Created new review!', 'success')
return redirect(f'/campgrounds/{campground.id}')
@blueprint.route('/<review_id>', methods=['DELETE'])
@author_required
@login_required
def delete_review(campground_id, review_id):
review = Review.objects.get(id=ObjectId(review_id))
review.delete() # the reverse_delete_rule=PULL will automatically remove it from the campground
flash('Review deleted successfully!', 'success')
return redirect(f'/campgrounds/{campground_id}')
|
STACK_EDU
|
Are awards in academia useful?
Is there any research/study that looked at the usefulness of giving awards in academia? By useful I mean weighting the positive and negative effects. I'm interested in three kinds of awards: awards to students, awards to teachers, and awards to researchers. (let me know if I should post as 3 different questions)
A quick Google Scholar search shows a lot of research on the topic in the context of students and people in general. My naive guess is that researchers wouldn't be much different.
Useful to whom? The recipient, the rewarding institution?
Academia is all about recognition, awards are one of many complementary ways.
@MarcClaesen: I thought that academia is about advancing knowledge, whereas recognition is a side effect, sadly, often optional and/or not timely.
@vonbrand awards to students + teachers -> for improving education; awards to researchers -> for improving research progress.
Are there many countries with a culture of awards to students and teachers? In Italy awards to students are rare and I've never heard of awards to teachers.
@MassimoOrtolano USA for sure (e.g. see Why are there so many awards in academia in the US compared with academia in France?), and regarding research awards quite common in computer science conferences.
@AleksandrBlekh sure, except that when it comes to funding you better have gathered some recognition for your fantastic science or the money well runs dry. I honestly wish it was only about science, but to get the opportunity to do science you have to gather recognition. If you've got Nature papers, funding comes far easier, though it hardly means you've done better science than someone without such papers.
@MarcClaesen: I understand your point. It seems to be somewhat of a catch-22 situation, since, as you said, "to get the opportunity to do science you have to gather recognition", but, at the same time, in order to get some recognition, you have to do science. In order to break from this vicious cycle, AFAIK, people start doing some science, then, hopefully, get some recognition, do more science on a bit higher level, get a bit more recognition, etc. - it seems that doing science follows a spiral path (when there is some progress).
An award can benefit people other than the recipient. For example, an award to a professor might benefit his students by giving his letters of recommendation a bit more weight.
Hmm I was thinking usefulness for the entire community, not for the recipient only.
This answer on Mathoverflow summarizes a study on the "productivity" of past winners of Fields medals (the most important award in mathematics; it is given each four years to four mathematicians under 40 years old).
The study is Prizes and Productivity: How Winning the Fields Medal Affects Scientific Output (2013), George J. Borjas and Kirk B. Doran.
TL;DR: The paper analyzes the output of 47 Field winners and 43 mathematicians of comparable level (often, people who were rumored to be "in the shortlist"). After winning the medals, their statistics diverge visibly. Field winners write about 20% fewer papers than non-winners, in the post-medal period, and seem to have slightly lower productivity, in general. However, there is another significant effect: they tend to enlarge their views and extend their research interests extend to other areas of mathematics, much more than non-winners, who tend to continue in the same area (25% vs. 10% probability of "cognitive mobility"). This necessarily comes with a cost, since one can't be productive immediately in a new area.
(and, to me, top mathematicians working on bridging areas of mathematics and looking for a unifying breakthrough is a great thing.)
|
STACK_EXCHANGE
|
<?php
namespace FondOfSpryker\Zed\DiscountCustomMessages\Business\Model;
use FondOfSpryker\Zed\DiscountCustomMessages\Dependency\Facade\DiscountCustomMessageToLocaleFacadeInterface;
use FondOfSpryker\Zed\DiscountCustomMessages\Persistence\DiscountCustomMessagesEntityManagerInterface;
use Generated\Shared\Transfer\DiscountConfiguratorTransfer;
class DiscountCustomMessagesWriter implements DiscountCustomMessagesWriterInterface
{
/**
* @var \FondOfSpryker\Zed\DiscountCustomMessages\Dependency\Facade\DiscountCustomMessageToLocaleFacadeInterface
*/
protected $localeFacade;
/**
* @var \FondOfSpryker\Zed\DiscountCustomMessages\Persistence\DiscountCustomMessagesEntityManagerInterface
*/
protected $customMessageEntityManager;
/**
* @param \FondOfSpryker\Zed\DiscountCustomMessages\Persistence\DiscountCustomMessagesEntityManagerInterface $customMessageEntityManager
* @param \FondOfSpryker\Zed\DiscountCustomMessages\Dependency\Facade\DiscountCustomMessageToLocaleFacadeInterface $localeFacade
*/
public function __construct(
DiscountCustomMessagesEntityManagerInterface $customMessageEntityManager,
DiscountCustomMessageToLocaleFacadeInterface $localeFacade
) {
$this->localeFacade = $localeFacade;
$this->customMessageEntityManager = $customMessageEntityManager;
}
/**
* @param \Generated\Shared\Transfer\DiscountConfiguratorTransfer $discountConfiguratorTransfer
*
* @return \Generated\Shared\Transfer\DiscountConfiguratorTransfer
*/
public function createByDiscountConfiguratorTransfer(DiscountConfiguratorTransfer $discountConfiguratorTransfer): DiscountConfiguratorTransfer
{
foreach ($discountConfiguratorTransfer->getDiscountCustomMessages() as $discountCustomMessageTransfer) {
$discountCustomMessageTransfer->setIdDiscount($discountConfiguratorTransfer->getDiscountGeneral()->getIdDiscount());
$this->customMessageEntityManager->create($discountCustomMessageTransfer);
}
return $discountConfiguratorTransfer;
}
/**
* @param \Generated\Shared\Transfer\DiscountConfiguratorTransfer $discountConfiguratorTransfer
*
* @return \Generated\Shared\Transfer\DiscountConfiguratorTransfer
*/
public function update(
DiscountConfiguratorTransfer $discountConfiguratorTransfer
): DiscountConfiguratorTransfer {
foreach ($discountConfiguratorTransfer->getDiscountCustomMessages() as $discountCustomMessageTransfer) {
if (!$discountCustomMessageTransfer->getIdDiscountCustomMessage()) {
$discountCustomMessageTransfer->setIdDiscount(
$discountConfiguratorTransfer->getDiscountGeneral()->getIdDiscount()
);
$this->customMessageEntityManager
->create($discountCustomMessageTransfer);
continue;
}
$this->customMessageEntityManager
->update($discountCustomMessageTransfer);
}
return $discountConfiguratorTransfer;
}
}
|
STACK_EDU
|
An important acknowledgement for a different view of doing science: open, collaborative, and more than a proof of concept.
A few days ago, Loïc Estève, Alexandre Gramfort, Olivier Grisel, Bertrand Thirion, and myself received the “Académie des Sciences Inria prize for transfer”, for our contributions to the scikit-learn project. To put things simply, it’s quite a big deal to me, because I feel that it illustrates a change of culture in academia.
It is a great honor, because the selection was made by the members of the Académie des Sciences, very accomplished scientists with impressive contributions to science. The “Académie” is the hallmark of fundamental academic science in France. To me, this prize is also symbolic because it recognizes an open view of academic research and transfer, a view that sometimes felt as not playing according to the incentives. We started scikit-learn as a crazy endeavor, a bit of a hippy science thing. People didn’t really take us seriously. We were working on software, and not publications. We were doing open source, while industrial transfer is made by creating startups or filing patents. We were doing Python, while academic machine learning was then done in Matlab, and industrial transfer in C++. We were not pursuing the latest publications, while these are thought to be research’s best assets. We were interested in reaching out to non experts, while partners considered as interesting have qualified staff.
No. We did it different. We reached out to an open community. We did BSD-licensed code. We worked to achieve quality at the cost of quantity. We cared about installation issues, on-boarding biologists or medical doctors, playing well with the wider scientific Python ecosystem. We gave decision power to people outside of Inria, sometimes whom we had never met in real life. We made sure that Inria was never the sole actor, the sole stake-holder. We never pushed our own scientific publications in the project. We limited complexity, trading off performance for ease of use, ease of installation, ease of understanding.
As a consequence, we slowly but surely assembled a large community. In such a community, the sum is greater than the parts. The breadth of interlocutors and cultures slows movement down, but creates better results, because these results are understandable to many and usable on a diversity of problems. The consequence of this quality is that we were progressively used in more and more places: industrial data-science labs, startups, research in applied or fundamental statistical learning, teaching. Ironically, the institutional world did not notice. It got hard, next to impossible, to get funding . A few years ago, I was told by a central governmental agency that we, open-source zealots, were destroying an incredible amount of value by giving away for free the production of research . The French report on AI, lead by a Fields medal, cited tensorflow and theano –a discontinued software–, but ignored scikit-learn; maybe because we were doing “boring science”?
But, scikit-learn’s amazing community continued plowing forward. We grew so much that we were heard from the top. The prize from the Académie shows that we managed to capture the attention of senior scientists with open-source software, because this software is really having a worldwide impact in many disciplines.
There were only five of us on stage, as the prize is for Inria permanent staff. But this is of course not a fair account of how the project has grown and what made it successful.
In 2011, at the first international sprint, I felt something was happening: Incredible people whom I had never met before were sitting next to me, working very hard on solving problems with me. This experience of being united to solve difficult problems is something amazing. And I deeply thank every single person who has worked on this project, the 1500 contributors, many of those that I have never met, in particular the core team who is committed to making sure that every detail of scikit-learn is solid and serves the users. The team that has assembled over the years is of incredible quality.
The world does not understand how much the promises of data science, for today and tomorrow, need open source projects, easy to install and to use by everybody. These projects are like roads and bridges: they are needed for growth thought no one wants to pay for maintaining them. I hope that I can use the podium that the prize will give us to stress the importance of the battle that we are fighting.
|||Getting funding from the government implied too much politics and risks. For these reasons, I turned to private donors, in a foundation.|
|||Inria always supported us, and often paid developers in my team out of its own pockets.|
PS: As an another illustration of the culture change toward openness in science, it was announced during the ceremony that the “Compte Rendu de l’Académie des Sciences” is becoming open access, without publication charges!Go Top
|
OPCFW_CODE
|
I often get locked into comment stalemates trying to get specific conditions on open-ended questions so that there is a clear answer or two to address the question. The problem arises when posters fail to single in on the issue, so it's totally unclear whether they have a specific problem in mind and they won't divulge the details or if they have an unfocused, non-specific problem in mind for which there are too many conditions/assumptions involved to even offer a useful answer.
"Lack of focus" is intended to refer to posts that pose multiple different questions. Just take some care, perhaps in a friendly comment, that you don't suggest that the OP break a badly formulated post into a zillion smaller bad posts!
"Needs more details or clarity" refers to posts that cannot be uniquely understood without additional information. Recognizing this might take experience and creativity: many times the poster cannot even tell that their question is ambiguous because their knowledge is limited. Thus, posting an expansive comment can usually be helpful.
There are actually 3 types of problems here:
Needs more details example:
Who is right?
T͟h͟i͟s͟ paper contradicts what T͟h͟i͟s͟ paper says.
Both are from reputable authors. Which one is right?
To understand the question, people have to read both papers before they can even guess what the OP perceives as a contradiction.
The relevant parts of the papers should have been quoted inline, and the actual contradiction explicitly stated in both the Title and the Body.
Needs more clarity example:
A massive wall of text resembling a James Joyce novel (Ulysses).
All this does is demonstrate the OP's total lack of understanding of the concept of paragraphs, and the OP's being unable to communicate organized thoughts.
All the necessary information might be there, but no reader is going to bother spending an hour trying to analyze and organize the question so that it can be understood by anyone other than its author.
Lack of focus example:
I read that …, so I'm wondering, Question B?
And either way, does that mean Question C? or perhaps Question D?
I also don't understand Question E?
The Body contains 4 completely different questions, none of which match the question in the Title.
Assuming you have left a helpful comment which your question suggests you have then sweating about which reason to give is not necessary. I would go for lack of detail/clarity myself whenever I am in doubt partly because I am never quite sure how to measure lack of focus.
If three people vote to close for three different reasons it still gets closed. I did ask a question about unanimity ages ago Does unanimity of reasons to close matter?. The consensus was it does not matter.
Detail versus focus
Focus relates to the 'focal point' in physics, a central point where all light beams converge. If there is no or little focus then there is not such a central point.
A request for more focus occurs when a post is dealing with too many details and questions all at once.
A request for more details is the opposite and occurs when a post has no detail at all.
In the image below the top left corner has low detail and the bottom right corner has no focus. Moving towards the upper right or bottom left corner is adding more detail/focus.
Detail versus clarity
There is a subtle difference between detail and clarity. In both cases a question is not very easy to understand.
Detail relates more to sharpness and the amount of small pieces of information. For example, a very short post with no background information has little detail.
Clarity relates more to the quality and whether the information is well represented. For example a post written with bad grammar and spelling, typos and non logical sentences is not clear.
|
OPCFW_CODE
|
Advance/NanoLabo is a software for first-principles calculations and molecular dynamics, “designed for beginners.”
Advance/NanoLabo is a Graphical User Interface (GUI) designed to work with open-source materials analysis software, such as Quantum ESPRESSO and LAMMPS. You can search material databases such as Materials Project and easily set up modeling and computational conditions. First-principles and molecular dynamics calculations can be performed, and the results are visualized instantly. The intuitive and user-friendly GUI, designed for beginners, has garnered support from many users, with hundreds of sales both domestically and internationally since its launch in 2018.
Execution of Calculations
|Quantum ESPRESSO (First-Principles)
LAMMPS (Molecular Dynamics)
ThreeBodyTB (Genaral-Purpose Tight-Binding Method)
|SCF Calculation, Structural Optimization, Hybrid Functional、vdW Correction,
Band Structure, Density of States (w/ PDOS Calculator), Visualization of Charge Density,
First-Principles MD, Car-Parrinello MD, Classical MD, MD with Neural Network Force Fields,
Thermal Conductivity, Viscosity Coefficient, Diffusion Coefficient, Radial Distribution Function,
TD-DFT, XAFS/EELS, Phonon (Effective Charge, Dielectric Constant, Band Structure, Density of States),
NEB Method, Work Function (ESM Method)
Calculation Server (Supports SSH Connection and Job Management with PBS/SLURM/PJM)
Cloud (Mat3ra, Science Cloud GPU)
NanoLabo Cloud Desktop
|Cell Translation, Supercell, Impurity Substitution, Lattice Defects, Space Group Determination, Primitive Cell Transformation, Standard Cell Transformation
|Surface and Interface Systems
|Surfaces with Any Orientation, Molecular Adsorption on Surfaces, Mismatched Interfaces (Pro Version Only)
|Drawing Organic Molecules, Filling Solvent Molecules, Polymer Models (Pro Version Only)
|Windows 10/11 (64bit)
AlmaLinux 8 (64bit)
macOS 13 or higher (Intel/ARM64)
|CPU : Intel Core i7 or higher
Memory : 10 GB or more
A picture is worth a thousand words;
introduce ease of use through videos.
Not just ease of use, but also putting cutting-edge technology into practice.
We always incorporate the latest cutting-edge technology into our product development and offer it to users. In particular, when it comes to Neural Network force field technology, we support various analyses by combining open-source Graph Neural Network force fields with our in-house developed Advance/NeuralMD.
1. Universal Graph Neural Network Force Fields
Structural optimization and molecular dynamics calculations using open-source Graph Neural Network force fields such as Open Catalyst, M3GNet, and CHGNet can be performed. Pre-trained models are available for all of them, allowing users to apply them universally to various systems without the need for users to optimize Neural Networks themselves.
We also offer Graph Neural Network force field fine-tuning services.
2. Our in-house developed Advance/NeuralMD
Our in-house developed Neural Network force field software, Advance/NeuralMD, is available. It incorporates various state-of-the-art technologies, including the automatic generation of force fields using self-learning hybrid Monte Carlo methods and GPU acceleration enabling calculations of systems with up to 100,000 atoms.
3. General-Purpose Tight-Binding Method (ThreeBodyTB)
It is compatible with the open-source general-purpose tight-binding method software, ThreeBodyTB. It can be universally applied to inorganic materials containing 65 elements, and allows for simplified calculations of density of states and band structures.
Advance/NanoLabo is also available as a virtual desktop in the cloud. This service is provided in combination with workstation-level computing resources. The Python environment required to use graph neural network force fields is also pre-configured and ready to use. The cloud environment uses Microsoft Azure Virtual Desktop and is managed by AdvanceSoft.
Please see here for more details.
Try Advance/NanoLabo (1 Month Free)
|
OPCFW_CODE
|
VMware Workspace ONE: Design [V20.x]
Duration: 1 Day (8 Hours)
VMware Workspace ONE: Design [V20.x] Course Overview:
The VMware Workspace ONE: Design [V20.x] course equips technical professionals with the essential knowledge and skills to design and configure a secure and efficient VMware Workspace ONE Unified Endpoint Management (UEM) solution. This comprehensive course guides participants through the process of designing a secure modern workspace that encompasses applications, devices, identities, analytics, and security.
At the end of this course, you will be able to:
• Understand and describe the components of a Workspace ONE UEM solution, and explain their purpose and architecture.
• Design and implement a Workspace ONE UEM solution using best practices.
• Configure profiles and apps for mobile devices and other endpoints.
• Utilize AirWatch tools, such as the AirWatch Console and APIs, for reporting and analytics.
• Implement and use Workspace ONE Access and Workspace ONE Intelligence.
• Describe the security measures to be employed in a Workspace ONE UEM Solution.
You will use VMware Workspace ONE (UEM) as well as VMware AirWatch Console and APIs as part of this course. Additionally, you will use various tools to build, configure, and monitor your Workspace ONE UEM solution, as well as get acquainted with the new features in the latest version.
Expect to gain hands-on experience designing a complete and secure UEM solution using App and Device Management, Identity Management, AirWatch Analytics, and Security measures during this course. The skills you learn during this course will help you to design and implement an effective VMware Workspace ONE Solution.
The target audience for VMware Workspace ONE: Design [V20
x] training is IT professionals and architects who are looking for a comprehensive understanding of the planning and design process for deploying Workspace ONE
This course is ideal for professionals who are involved in designing Workspace ONE solutions for enterprise organizations, including solution and VMware partners
Attendees must possess knowledge of the underlying Windows Server and Windows 10 technologies used to deploy Workspace ONE
They should also have experience working with VMware Identity Manager and Single Sign-On technologies
This course is suitable for IT professionals and architects with intermediate-level experience in desktop virtualization and identity management solutions
Learning Objectives of VMware Workspace ONE: Design [V20.x]
1. Understand the core components of the Workspace ONE solution.
2. Review the software users and design decision matrix.
3. Identify and evaluate the new features in V20.x.
4. Set up basic authentication and authorization services like UAG and identify best practices around deployment and configuration.
5. Configure the Workspace ONE integrated management principles and components.
6. Utilize email, mobile applications, and single sign on services.
7. Implement policies management and conditional access.
8. Architect Workspace ONE solutions to support enterprise mobility needs.
9. Integrate the Workspace ONE platform with other enterprise systems.
10. Understand how to troubleshoot and debug common Workspace ONE issues.
Module 1: Course Introduction
- Introductions and course logistics
- Course objectives
Module 2: Workspace ONE Design Fundamentals
- Outline the high-level Workspace ONE product design methods
- Outline the available Workspace ONE architecture types
- Outline the phases of End User Computing (EUC) solution design
- Describe the difference between a logical design and a physical design
Module 3: Identifying Use Cases
- Determine the key business drivers and use cases
- Determine the right use cases for your Workspace ONE solution deployment
- Outline the common types of user experience
- Match use cases with Workspace ONE components
- Match user experience with technology and integrations
Module 4: Creating Logical and Physical Designs
- Design the high-level logical solution architecture
- Validate the logical architecture
- Identify the hardware, software, and network requirements for the required Workspace ONE components
- Create the physical architecture
- Document the physical requirements for the physical design
- Collect the requirements for required integrations
- Validate the physical architecture
Module 5: Workspace ONE Solution Delivery
- Create Workspace ONE solution deployment phases
- Determine project milestones
- Create an execution plan for the Workspace ONE solution deployment
- Determine validating standards for Workspace ONE solution deployment validation
- Design an appropriate Workspace ONE solution roll-out plan for end users
An understanding of Windows and macOS platform provide an ideal starting point for this training. Candidates should also meet the following prerequisites:
• A base level understanding of the components of VMware Workspace ONE.
• A basic knowledge of key Mobility Management concepts such as identity management, authentication, application deployment, single sign-on, content management and EMM architecture.
• Proficiency with cloud services such as VMware Cloud, vRealize Automation/s, Active Directory, Azure Active Directory, and/or other identity providers.
• Completion of the Workspace ONE: Deploy and Manage [V20.x] training or equivalent knowledge and experience.
Discover the perfect fit for your learning journey
Choose Learning Modality
This course comes with following benefits:
- Practice Labs.
- Get Trained by Certified Trainers.
- Access to the recordings of your class sessions for 90 days.
- Digital courseware
- Experience 24*7 learner support.
Got more questions? We’re all ears and ready to assist!
|
OPCFW_CODE
|
We are pleased to announce the release candidate of EventStoreDB 20.6, which contains mainly bug fixes and improvements on the previous release
This release is not intended to be used in production, we welcome your feedback as we prepare for final release.
You can download the packages from the downloads page under the Pre-Release section.
If you are running on macOS, you will need to run Event Store in Docker, since currently servers using .NET Core’s gRPC implementation require the platform to support server ALPN, which macOS did not until Catalina. As soon as this restriction is lifted by the .NET Core platform, Event Store will release packages for macOS.
Please note Event Store will only expose the HTTP interface over HTTPS.
This requires a TLS certificate, but for ease of use you can use the development mode which uses a self signed certificate intended for development use only. The development mode can be activated by specifying the
--dev option when starting Event Store.
Notable changes and improvements from Event Store Preview 3 to the Release Candidate
- New versioning strategy
- Core Database
- Event Store gRPC Client
- Embedded Event Store Client
New versioning strategy
As of the release candidate, we will be moving to a new versioning scheme in the form of YY.[M]M version. You can read more about our new versioning strategy here.
Max truncation safety feature
Thanks to James Connor for this pull request that added a feature to avoid large unexpected truncations by adding a
MaxTruncation value that will stop any truncations over that size. This can be set via the following command-line argument, and can be disabled by setting it to -1.
The default is 1 chunk of data.
Security configuration changes
As we move to Event Store being “secure by default”, there are a number of changes being made to the configuration options. If you have been testing Event Store in development mode then this shouldn’t affect you. If not, you will need to update your configuration to include a TrustedRootCertificatesPath. Details about this can be found here. EventStore#2335
Please note that if you are developing a client, there have been breaking changes in the protocol from preview 3 release of Event Store.
Appends will now return a response which currently is either a Success or a WrongExpectedVersion.
Stream Reads will now return a response which currently is either a Success or a StreamNotFound.
Combined internal and external http interfaces
We have combined the internal and external http interfaces and therefor the previous configuration options of
IntHttpPort has been combined into
Event Store gRPC Client
Replace byte arrays with ReadOnlyMemory in the gRPC client
Thanks to Martin Othamar for this pull request to help reduce allocations.
Client has been broken up into operational scopes
The Event Store .NET gRPC Client has also been broken up into a client for each operational scope. The clients can be found as individual nuget packages namely,
Embedded TCP Client
The embedded TCP client will still be released in version 20.6.0, but was not released as part of the release candidate due to an issue with dependencies.
|
OPCFW_CODE
|
In any event, this focused enterprise diploma gives you the resources and methods to generate a variance in these growing industries.
In these Laptop science slight project, You need to do your best to get the proper grade for the reason that these little projects include the massive of issue credit rating (twenty-sixty%) so you should get very good marks to go that study course.
By clicking the button below you comply with be contacted by CTU about instruction products and services (like as a result of automated and/or pre-recorded suggests, e.g. dialing and text messaging) via telephone, mobile machine (including SMS and MMS), and/or electronic mail, even when your phone selection or e mail deal with is on a company, state or the National Don't Connect with Registry, and also you agree to our Phrases of Use and Privacy Plan.
Mr. Sarfaraj Alam aka Sam is awesome with any kind of programming assignments. You title any language C, C++, JAVA, Matlab, C#, Website Application, Databases, Knowledge Framework, Match, Animation, etc. As stated I did all my assignments all over my semester And that i got more than 98 or even more which is an A in each and every assignments I gave to Mr. Sam, He helped me in each of the assignments. I used many online solutions for my assignments before Nonetheless they have been impolite and no clarity on how the do the job are going to be performed, no serious customer service or no genuine conversation till I learned about Sam. I called him the really initial time and asked his performance and how he functions completing an assignment, I was by no means satisfied as I am at this moment, I'm nevertheless utilizing his solutions for my Projects, assignments, etcetera. I felt I am speaking with my Mate and we bond a relationship into a genuine good friendship.
I am a mechanical university student from Hong Kong,China. I am passionate about devices, but inside our second semester I received a programming topics. Programming is extremely triable endeavor for me.
Can a file actually be deleted forever? What exactly transpires after you “delete” a file, and how straightforward is it… Read A lot more...
QSO 510 Quantitative Evaluation for Final decision Making This is the survey with the mathematical, probabilistic and statistical tools obtainable for assisting while in the operation and administration of industrial corporations.
Charming Coding will be the the most effective Web site for finding Personal computer science projects online. Pretty Coding is helping forty+ folks everyday from all across the world to produce projects in several programming languages.
Beautiful coding is becoming a key and valuable for projects help, which helps us meet our worries to seek curious and sharp minds of This web site. Lovely coding is not simply just about measuring know-how; it's established for being vital inside our range system a comprehensive photo of candidates' talent and technique for reasoning.
WELCOME TO the Seventh Edition of Introduction to Programming Employing Java, a cost-free, on-line more info here textbook on introductory programming, which uses Java because the language of instruction. This book is directed mainly towards commencing programmers, even though it may also be handy for skilled programmers who would like to learn a little something about Java. It truly is absolutely not intended to supply finish coverage of your Java language. The seventh version requires Java seven, with just a few transient mentions of Java eight.
Final year projects are The most crucial projects that's why every scholar tends to arrange the most beneficial project and acquire the top of marks. Though everyone is all set to produce a dent with their project but just a few of them know plenty of java project ideas.
By means of this study course, The scholars will discover Sophisticated techniques to initiate, system and Regulate projects. They will attain knowledge planning sophisticated projects employing the two guide and Computer-based instruments.
In the procedure, You will be further geared up for an experienced job referring to Investigation, layout, implementation and management of functions and projects in production and repair corporations.
You can receive a response from one of our very capable tutors as soon as possible, often within minutes! They may go earlier mentioned and outside of to help you.
|
OPCFW_CODE
|
feat(opentelemetry): Allow header_type to be set
Summary
At the moment the opentelemtry plugin sets the default_header_type for propigations:
https://github.com/Kong/kong/blob/master/kong/plugins/opentelemetry/handler.lua#L150
However, doesn't allow the modification of header_type in the schema, so it always defaults to preserve. However, based on the function it's calling:
https://github.com/Kong/kong/blob/master/kong/tracing/propagation.lua#L432
-- If conf_header_type is set to preserve, found_header_type is used over default_header_type;
The above means the default_header_type is never used. We want the default_header_type to be used, and we want to ignore the incoming header type.
I am not implmenting the ability to modify the default_header_type as opentelemetry notes it should default to w3c which it currently is:
https://opentelemetry.io/docs/concepts/signals/traces/#context-propagation
Checklist
[X] The Pull Request has tests
[N/A] There's an entry in the CHANGELOG note, it specifically says not to edit this: https://github.com/Kong/kong/blob/master/CONTRIBUTING.md#submitting-a-patch
[ ] There is a user-facing docs PR against https://github.com/Kong/docs.konghq.com - PUT DOCS PR HERE
Full changelog
Implement the ability to set header_type for the opentelemetry plugin
Issue reference
Fix https://github.com/Kong/kong/issues/10246
Note: I couldn't get the tests to run locally with make test or with gojira:
gojira up
gojira run make dev
gojira run busted spec/03-plugins/37-opentelemetry
When I attempted the gojira command, it was returning a bazel error. I was hoping that CI would take care of running the test to confirm this functionality once I raised the PR.
FWIW I still couldn't get this working locally with gojira run make dev-legacy, I was getting:
─$ gojira run busted spec/03-plugins/37-opentelemetry
✱✱✱✱✱
0 successes / 0 failures / 5 errors / 0 pending : 0.223766 seconds
Error → ./kong/tools/utils.lua @ 37
suite spec/03-plugins/37-opentelemetry/01-otlp_spec.lua
./kong/tools/utils.lua:37: attempt to index global 'ngx' (a nil value)
ah! Thanks for pointing that out, we will fix that, please do feel free to update the CHANGELOG
Thanks! Went ahead and added a note in the CHANGELOG with the recent commit.
that would be awesome!
Working on the docs now, will post a link here once that's done!
Added documentation here: https://github.com/Kong/docs.konghq.com/pull/5422
Wasn't sure what to point at, so pointed to main, since not sure when this PR will get approved/merged/released.
FWIW I still couldn't get this working locally with gojira run make dev-legacy, I was getting:
─$ gojira run busted spec/03-plugins/37-opentelemetry
✱✱✱✱✱
0 successes / 0 failures / 5 errors / 0 pending : 0.223766 seconds
Error → ./kong/tools/utils.lua @ 37
suite spec/03-plugins/37-opentelemetry/01-otlp_spec.lua
./kong/tools/utils.lua:37: attempt to index global 'ngx' (a nil value)
ah! Thanks for pointing that out, we will fix that, please do feel free to update the CHANGELOG
Thanks! Went ahead and added a note in the CHANGELOG with the recent commit.
that would be awesome!
Working on the docs now, will post a link here once that's done!
alright yeah that makes sense, please use gojira run bin/busted spec/03-plugins/37-opentelemetry instead.
This is just an FYI, I'm approving the CI runs now.
New PR link for the docs pointed at the proper release branch: https://github.com/Kong/docs.konghq.com/pull/5431
This is just an FYI, I'm approving the CI runs now.
Thanks! To close the loop, the command you provided worked and I was able to successfully run the tests locally, they passed. I will share this information with my team so we are all able to test locally before raising upstream PR's.
looks great @nbialostosky! Thanks! This will just need a rebase to resolve a couple of conflicts and then it'll be ready to merge
@samugi Thanks! Went ahead and rebased and resolved the conflicts, lemme know if you need anything else from my end!
|
GITHUB_ARCHIVE
|
AuthorizationHandler - Current HttpContext + Route Information?
Is it possible to get the current HttpContext and Route information within the AuthorizationHandler?
I need to be able to have dynamic policies which check the current request (route) to determine whether the user has authorization.
Nope, all you get is the authorization context. You could, I suppose, do it imperatively by using resource based authorization and mapping the route information to a resource class.
I don't follow your suggestion.
Do you mean a configuration that contains resources and mapping the route information to an entry within the configuration? If so, that is what I'd like to do.
I however don't know how to capture the current route information within the Authorization framework. The AuthorizationHandler & AuthorizeAttribute don't pass any information about the current route request. Am I missing something? Shouldn't the request information be sent to the authorization code?
We took a deliberate decision not to send route information to attribute based authorization, because by applying the attribute at the controller method you have the route, unless you're doing something strange like mapping multiple routes to the same controller.
Instead we have resource/operation based auth. Normally this would be aimed at something like "I have this document, a resource, and I want to see if the current user can edit it, an operation".
So you could have a resource class which contains everything you wanted from the context, and auth against that with a single operation of Visit or View.
I'd like to figure out why you want the actual context and route data though. Those are MVC specific and we've been trying to move away from locking policies into MVC only, in the hope that you can write common policies that would work cross platform should, for example, Nancy pick it up.
The use case I'm commonly dealing with is as such;
I have endpoints (MVC Controller Action).
A request is made (from a 3rd party or user). Authentication occurs as per normal.
I have a dynamically configured role base system which roles can be created and then granted access to specific endpoints (operation). I can't go [Authorize(Roles = "somerole")] because the roles are dynamically created, and or changed. There can be multiple roles allowed to perform the operation.
What I want to do is create a configuration that I can update that maps endpoints (operations) to roles. Then when a request is made to an endpoint it reviews that configuration to determine if the request is authorized.
I used to be able to do this by overriding the AuthorizeAttribute. Essentially I could have it go out and verify based on configuration.
The new framework however doesn't support this behaviour. Instead it's a push to apply policies and authorization requirements. Which sounds great, but it has no dynamic mapping to the actual request. Thus I can't determine within the handler whether the current request is authorized since I can't map it to the configuration. The only way I think I can do it with the current framework is to create a separate policy for every endpoint (operation) I have. Then each one would know because it's hardcoded where it is relevant.
What am I missing?
Can you give me a concrete example? What out of the request or route information are you wanted to make decisions about?
You could, perhaps, user claims transformation to add the information from the current request as claims to the current authenticated identity. Authorization is aimed at identities and resources, not request properties.
I'm dealing with hundreds of endpoints that the client has control over what their users are able to perform.
Basic Structure
Operation (Endpoint) -> Role(s)/Claim(s)
Reality
Endpoint -> I don't know what role will be assigned access
I have a few options.
I can create claims for every endpoint/operation and apply them to the user. Lots of claims. Then I need to create policies for every endpoint/operation that check if the user has the appropriate claim.
This however means I need to create a policy every time I create an endpoint. It adds work.
I don't know if having hundreds of claims is an issue?
I want to map the endpoint/operation dynamically to a configured role based system. Which means at the point of request it needs to know which endpoint/operation is being requested so that it can go look up the configuration for the current user.
I believe the authorization framework you are implementing forces me to do (1). I'm hoping someone will give me a third option.
Are you wed to doing this as an attribute, as opposed to an if Authorize() in code?
I don't have a preference. An attribute would allow me to pass the current route id (something that can be mapped to a configuration). But that attribute would need to be known by the authorization implementation so that it could then do the check with the current user.
I have to I'll add a method to every single endpoint. Boilerplate code I guess.
But this would mean the AuthorizeAttribute is mostly useless.
For your purposes yes it is because you want a single policy over every route. That's not what this is designed for. You're at the CMS/multitenancy point and that's where people start rolling their own custom pieces because everyone's requirements vary. You'd also wanting MVC specific information in a policy system that is meant to apply to more than just MVC.
What you could do is approach it with resource based auth, which is what you seem to be indication you want.
Take a look at https://onedrive.live.com/view.aspx?resid=600FD0DBAAE1E40F!283770&ithint=file%2Cpptx&app=PowerPoint&authkey=!AHropH0wGXJdtIQ slides 40-42. You'd define your resource as a class containing the bits of route data and context you want, then in each controller method you'd construct it, and check it
var myResource = new Resource(context, routeData, etc)
if (authorizationService.Authorize(User, myResource, Operations.View))
{
return View();
}
Really though this is aimed at the actual resource behind the request, NOT the endpoint, the document, the invoice, the medical record, and not the request itself, because documents, invoices, etc. will have their own acls within them, or within a system.
I haven't read this thread in its entirety since its quite long, but basically what Barry said is what I suggest as well. The Resource parameter is meant for you to flow whatever specific info is necessary for your authorization logic. If you need the Route data, that is where you would flow it in...
Essentially I was hoping to have that logic in the AuthorizationHandler. It looked like it would be so nice and clean that way. Then my developers could write [CustomAuthorize(nameof(route))] and it would do it all for them.
Looks like I'll have to apply your suggestion of boilerplate code to authorize the operation. I like your suggestion, I just wish I could have moved it up the chain.
What would be nice is AuthorizeAttribute could automatically insert Resource information into the AuthenticationService.Authorize() method. Then I could set what the Resource would be as an attribute value.
But the attribute doesn't/can't know what the resource is, you need to load it first, from your service/database/whatever, hence it being imperative only.
Technically I'm treating the resource as the operation/endpoint. I have additional security/authorization that needs to be applied to what is actually done once they're allowed to use the endpoint/operation.
Security Model
Can this user use the system?
Can this user use this endpoint?
What data/resource can this user perform an action on?
Your framework is separating the connection between endpoint/operation and authorization. You've essentially made authorization on the endpoint/operation static, or in other words the developer knows who can perform that operation. Thus the develop can apply a static policy to the endpoint/operation that looks for a static claim definition.
I'm living in a dynamic world where I can't control those things. The user does through administration/configuration.
Taking a concrete example then, a health care invoicing system
Can a user use the system? Plain old auth
Can a user use this endpoint (i.e. access inovices)? Does the user have any value for the InvoiceAccess claim - at the controller level.
Optional 3 - Can a user perform that action? Then yes, you'd have a policy for each endpoint/action. Still only a single claim though.
Can the user perform the action against the resource behind their request? (i.e. can they update invoice number 73 to say it's paid) imperative resource/operation based.
Your #2 goes back to my earlier comment. I would have to create a policy and claim pair for every endpoint/operation.
I'm not against that solution. I was looking for input and confirmation that I understand the framework.
I just worry about how many policies I'm going to have. Which means a user would have tons of claims so that they would have configurable access for all the endpoints.
Why would you have to have tons of claims? Endpoints go to a resource and an operation? So you'd have a single "Invoice" claim, with a value detailing their access, for example "CRUD". Now you have a single claim that applies to all resources of that type.
Does a policy apply an OR or and AND to the required claims?
[Authorize(Policy = "InvoiceAccess")]
public IActionResult InvoiceAccess() {
}
The InvoiceAccess policy checks to see if the user has an InvoiceAccess claim.
One policy definition, one claim definition per endpoint.
I think you're saying a claim would be applied to the whole controller rather than the specific endpoint. And then the endpoint would only check to see if they had a claim value detailing their access level.
Multiple policies would be ANDed together.
I really appreciate your feedback. I think I have a working plan to go forward with Policies and Claims.
I still would have preferred for a more flexible AuthorizeAttribute. If only to be able to pass additional information to it for authorization logic. I realize you want that logic to be placed in policies and authorization handlers. Pity they aren't self aware of what operation/endpoint called them.
Just to follow up with this @Fosol. It was pointed out in another issue you can get to the route data and more in the Handle() if it's called from MVC (I apologise for not realising this)
var controllerContext = context.Resource as Microsoft.AspNet.Mvc.AuthorizationContext;
The MVC AuthorizationContext contains RouteData and more.
What if I want to authorize based on a ClientCertificate? I need the HTTP Request to get the ClientCert.
|
GITHUB_ARCHIVE
|
The writers of science are known as scientific journalists. They report scientific news to the media, sometimes assuming a more investigative and critical role. Before writing the content, they basically need to figure out a way to explain the content so that a user with non-scientific background can understand.
Now, scientists at MIT and elsewhere have developed a neural network to help the science writer, at least to some extent. The AI can read scientific articles and present a simple English abstract in a sentence or two.
Not only in language processing, but the approach can also be used in machine translation and speech recognition. Scientists were actually demonstrating a way to use artificial intelligence to deal with certain prickly problems in physics. But they realized that the same approach could be used to solve other difficult computational problems, including natural language processing, in order to overcome existing neural network systems.
Soljačić said: "We have been doing various types of work in AI for some years now. We used AI to help with our research, basically to improve physics. And as we become more familiar with AI, we would realize that there is an opportunity sometimes to add to the AI field something we know about physics – a certain mathematical construct or a certain law of physics. . We realize that if we use this, it can really help with this or that particular AI algorithm. "
"This approach can be useful in several specific types of tasks, but not in all. We can not say that this is useful for all of AI, but there are cases where we can use insights from physics to improve a particular artificial intelligence algorithm. "
Generally, neural networks have difficulty correlating information from a long data chain, as is necessary in interpreting a research paper. Several tricks have been used to improve this capability, including techniques, are known as short-term memory (LSTM) and recurrent gated units (GRU), but these are still far short of what is required for the actual processing of natural language , the researchers say.
The team came up with an alternative system, which, instead of relying on matrix multiplication, like most conventional neural networks, is based on spinning vectors in a multidimensional space. The key concept is something they call rotational memory unit (RUM).
And as expected, the system generates an output representing each word in the text by a vector in multidimensional space – a line of a certain length pointing in a particular direction. Each subsequent word displays this vector in some direction, represented in a theoretical space that can have thousands of dimensions. At the end of the process, the final vector or set of vectors is translated back into its corresponding word sequence.
Nakov said: "RUM helps neural networks do two things very well. This helps them remember better and allows them to retrieve information more accurately. "
Marin Soljačić, MIT professor of physics, said: "After developing the UOM system to help with certain difficult physical problems, such as the behavior of light in complex engineering materials, we realized that one of the places where we thought this approach might be useful would be natural language processing. "
He further noted, "such a tool would be useful for your work as an editor trying to decide which documents to write. Tatalović was at that time exploring AI in scientific journalism as his Knight fellowship project. "
"And so, we tried some natural language processing tasks. One we tried was to summarize articles, and this seems to be working very well. "
the RUM-based system has been expanded so you can "read" through entire research jobs, not just summaries, to produce a summary of your content. The researchers even tried to use the system in their own research paper describing these findings – the document this report attempts to summarize.
Here is the summary of the new neural network: researchers have developed a new process of representation in the rotational unit of the RUM, a recurring memory that can be used to solve a broad spectrum of the neural revolution in natural language processing.
It may not be elegant prose, but at least it reaches the key points of the information.
Çağlar Gülçehre, a research scientist at IA Deepmind Technologies, a UK company that was not involved in this work, says that this research addresses a major problem in neural networks, related to information that is widely separated in time or space.
He said, "This problem has been a very fundamental issue in AI because of the need to reason about long time delays in sequence prediction tasks. Although I do not think this article completely solves this problem, it shows promising results in long-term dependency tasks such as questions, answers, text summarization, and associative retrieval. "
Gülçehre adds, "Since the experiments performed and the model proposed in this article are released as open source in Github, as a result, many researchers will be interested in testing it on their own tasks. To be more specific, the approach proposed in this article can have a very large impact on the fields of natural language processing and reinforcement learning, where long-term dependencies are very important. "
The research was supported by the Army Research Office, the National Science Foundation, the MIT-SenseTime Alliance on Artificial Intelligence, and the Semiconductor Research Corporation. The team also had the help of Science Daily, whose articles were used to train some of the AI models in this survey.
The paper is described in the journal Transactions of the Association for Computational Linguistics.
|
OPCFW_CODE
|
Migrate TorchAudio to use complex tensors in place of real tensors of shape (..., 2)
Complex Tensors were released in 1.6 release of torch. The progress of complex numbers support in torch can be tracked here. This issue is meant to outline the tasks that need to be completed to migrate torchaudio API to use these newly added complex tensors.
The general plan for this migration, would be to add a local USE_COMPLEX flag to avoid bc breaking changes. This flag detect if the input is complex or not. If it's not complex, the old code (that supports the real tensors with shape (..., 2)) will be used and the returned tensor will be a real tensor. If it's complex, the returned tensor will be a complex tensor.
Here's a demo PR for this migration: https://github.com/pytorch/audio/pull/758
Here's the tracker for the issue for deprecation of torch.fft functions. The new torch.fft.fft functions will take as input and return complex tensors instead of the real representation of complex tensors.
torchaudio.functional
[ ] spectrogram (uses torch.stft)
[ ] (deprecate) complex_norm (same as torch.abs())
[ ] (deprecate) mag_phase
[ ] (deprecate) angle (same astorch.angle())
[ ] [WIP] phase_vocoder
[x] _measure (uses torch.rfft, complex_norm) (#941)
[ ] griffinlim (uses torch.istft, torch.stft)
torchaudio.transforms
[ ] Spectrogram
[ ] MelSpectrogram
[ ] GriffinLim
[ ] [WIP] TimeStretch
[ ] (deprecate) ComplexNorm
[x] Vad (#941)
torchaudio.compliance.kaldi
[x] spectrogram (uses torch.rfft) (#941)
[x] bank (uses torch.rfft) (#941)
cc. @vincentqb @mthrok
cc. @mruberry, who is working on the old API of torch.ftt deprecation, for visibility.
Thanks @anjali411.
Do we have an estimate on completion of this migration?
(By the completion of the migration, I mean we will only support complex dtype.)
I assume after all the implementation is completed, we want to have certain release cycles where users can opt-in for complex dtype, and make sure that some major downstream users migrate too.
Can you remind me why the flag USE_COMPLEX is needed here @anjali411 ?
to detect if the input is complex and if we should use the code for complex case (instead of the (..., 2) style float tensors. It also ensures we don't break autograd support in torchaudio (since it relies on torch for autograd correctness).
Can you remind me why the flag USE_COMPLEX is needed here @anjali411 ?
to detect if the input is complex and if we should use the code for complex case (instead of the (..., 2) style float tensors. It also ensures we don't break autograd support in torchaudio (since it relies on torch for autograd correctness).
Right, so the concern is that we could be passing a real tensor, which we want to treat as complex with zero imaginary for instance. In that case, we no longer need to detect whether the tensor is real or complex, right? We can just go through the complex path. Is that right?
This issue is replaced by #1337
|
GITHUB_ARCHIVE
|
Plugin Version: 3.1.1 | Release Notes
Note: See our documentation of the latest version of this plugin in this page.
Actions by Email is a plug-in available on the Enterprise Edition. It's an extension that allows users to derivate cases by email in a given process. Initially, this plug-in was created for users, who are not necessary system users, introduce information on a case and receiving email with forms to continue the case, but at the end this plug-in accepts system users who will receive email with information of a case.
It has two options:
- Link to fill a form: add more information on data case, and it has an option to fill a form by sending this information into the user's email. It derives the case and save case data
- Use a field to generate actions links: select a value of a specific field form an email send previously. It derivates the case and save case data
How the plug-in works
Actions by email is added as a new tab on task properties, where it can be configured depending on the user requirements. This plug-in allows sending email on this two cases:
- By sending an email to the user's mail where the information of the dynaform to be filled; this user should not necessarily be the system user.
- By sending a field or fields as a link to the user's mail.
On both cases the information will recover allowing to continue the case.
Note that this feature does no work with self service and self service value based assignment option since the actions by email feature needs the ID of the next user in order to properly send the notifications.
Note: The email sent by Actions by Email is resent when the case is unpaused, reassigned or uncancelled.
- Processmaker V 2.0.37 and later.
- Configure Email Notifications.
- Mozilla Firefox from 3.6 and later.
- Internet Explorer from 7 and later.
Installation and Configuration
Install the plug-in in ProcessMaker
The plug-in will be available once imported the Enterprise plug-in with the corresponding license, it's necessary activate it once it's installed, it doesn't need additional configuration on the server side.
Note: In ProcessMaker 3, the feature must be enabled to work with it inside the processes. This is also a requirement even if the Actions by Email plugin is used (to work with old processes).
Configuring Actions By Email Properties
Once the plug-in is enabled, you need to define the Actions by Email on the task where a case is derivated, so right click on Task Properties of that task and a new tab will display where it is possible to configure all the options for this plug-in:
Fill the ActionByEmail tab with the following considerations:
- Type: a dropdown field where the type of sending data must be selected:
- Link to fill a form: the form will be send into the email to be filled out.
- Use a field to generate actions links: generates a select link according to the field chosen.
Note: the plug-in will select by default the email of the user registered in ProcessMaker, who will be assigned into the task where the plug-in was configured. If there is no email defined when the user was registered in ProcessMaker, the email that takes will be the one that is selected as variable on the option Link to fill a form.
- Template: select the email template to be send. Including into the plug-in, a template, actionsByEmail.html, is created by default to do some tests. Also it has the possibility to edit the template by clicking on the link Edit.
- DynaForm: select one of the DynaForms created for the current process.
Note: There's no restriction on using any DynaForm of the process, it will depend on how was the process designed.
- Field with the email: there are two types to send emails:
- Choosing a variable: all of the DynaForm variables will list on the dropdown, choose the one that has the email field where mails will send.
- Send the email to the user assigned to the task: choose the email of the user who is assigned into the current task.
- Field to Send in the Email: this will enable where the option Use a field to generate actions link is selected, enable this option is possible o choose fields that will be sent on the email. It only works for the following fields: yesno, dropdown, radiogroup and checkbox.
- Register a Case Note when the recipient submit the Response: If this option is checked, a case note is added when the DynaForm is submitted.
Finally, click on Apply Changes to save or modify the configuration.
Creating an Example Process
Note: this is only a basic Process to show how to apply this plug-in in a process
Choosing Link to fill a form option
It will be listed step by step how to configure this plug-in and how it works in a process:
Create a simple process
The example process has two tasks, as the image below
Note: This plug-in doesn't work with Routing Rules Selection (where the assigned user can select the next task manually).
Create a DynaForm.
Create an example DynaForm which will have basic fields as the image below, and assign this DynaForm in the first task.
Configuring the plug-in
Configure the plug-in, and select the DynaForm created before.
Running a Case
For example, if the Actions by Email was configured on the Second Task, the e-mail will be sent when this first task has been routed into the second which will be sent into the user's account. Fill The DynaForm:
Derivate the current task, and an e-mail will be sent:
Click on Please fill this form, this link will redirect to the DynaForm to be edited:
By clicking on Save the following message will display:
If the option "Register a Case Note" was checked another email will send into the user's account:
Finally, Checking the Case Note added into the case:
Note: Remember that checking the option Register a Case Notes is optional.
Create a new DynaForm: Create a new DynaForm with two new fields Name and Last Name, copying the three fields added in the previous DynaForm, add a dropdown field named Select a Country,
And assign it in the Second task.
Configure the plug-in: in the configuration the dropdown field will appear:
Running a Case: Running the case and depending on which task the plug-in is configured, the mail send will be as follows:
By choosing any Country, in our example Bolivia this information will be send into next form. Continuing the case and opening the second task the DynaForm will display with the following information:
As it can be seen, the country Bolivia is displayed on the dropdown as it was selected on the email.
Note: From Version 2.0.9 the plug-in , a new improvement was added when the email is sent; now the field name of the dropdown will appear just once, below the options listed on the dropdown, as the image below:
Register a Case Note: If this option was checked, when the configuration of the plug-in was done, another email will send into the user's account when user clicks on any linking field of the DynaForm. A confirmation message will display as the image below:
If you try to resend the same email, another message will display as the image below:
Checking the email for the Information send:
Finally checking the information added on the Case Note of the respective Case
Note: Remember that the Case Notes is optional.
Actions by Email Log
When a case is executed, previous configuration of Actions By Email options, all messages sent will register into the Actions by Email Log. This is very useful to have a register of which messages were send or which weren't. Login to ProcessMaker with a user such as the "admin", who has the PM_USERS permission in his/her role. Then, go to ADMIN > Plugins and click on Actions by Email option:
A list with emails sent will display as follows:
- Forward email: by choosing an email from the list and clicking on this option the email will be resend. If the task is closed the following message will display:
- Date: it's the date on which the email was sent.
- Case Number: it's the case number in which the message was sent.
- Subject: it will display the subject of the email.
- From: email address from which the email was sent.
- To: user who receives the email.
- Sent: state of the email. If it was sent state will be SENT on the contrary it will display an ERROR status.
- Answered: if the message is answered, in other words, whether the form is filled or clicked on the link options, on this row the message will be YES on the contrary, if the email is not answered, the message will be NO.
- View Response: by clicking on the icon, the email answered will open in a new window, with the fields which were filled during the process. This have a relation with the field Answered if in that field the Answer is YES, the window will display the message that will be the form filled or the link option.
However, if the answer on the Answered field is NO, the window will display the following message:
- Message: if any error occurs while it is sending a message, this error will display on this row.
|
OPCFW_CODE
|
cakePHP - data retrieval - Model Association & database design
I feel pretty darn dumb posting this question, but im completely baffled (probably because im quite new to cake and a bit intimidated)..
hasOne hasOne
donors | blood_groups | donor_types
Att ------------------+-----------------------+---------------------+
DonorID (pk) | blood_group_id (pk) | type_id (pk) |
Name | group | type |
Surname | | |
type_id(fk) | | |
blood_group_id(fk)| | |
The Donor Model
class Donor extends AppModel{
public $hasOne = array(
'BloodGroup'=> array(
'className' => 'BloodGroup'
),
'DonorType' => array(
'className' => 'DonorType'
)
);
I am already using the assoctiated models to populate a an FormHelper input in the donor registration view, all is good.
However when I try to retrieve a donor record this error occurs
Error: SQLSTATE[42S22]: Column not found: 1054 Unknown column
'BloodGroup.donor_id' in 'on clause'
This ofcours means that cakePHP is looking for the fk donor_id inside blood_groups table. However the relationship is the other way around and the fk is stored within donors table.
I do not know whether the db design is flawed or if the association needs to be redefined within cake. please help, as I am quite new to cakePHP and I a practically forced to use it because of its merits.
I have read all the section about Association between models in the cake doc, but I am still struggling. How can I go about this?
To get this database setup, which is correct working the assocs would be:
Donor belongsTo BloodType
Donor belongsTo DonorType
BloodType hasMany Donor
DonorType hasMany Donor
But your foreign keys are wrong: Follow the CakePHP conventions!
Model fields are supposed to be lower cased and underscored, PKs are expected to be just id and FKs are the model name of the assoc, singular, underscored with suffix _id. The PKs in the donors table are right.
Why are you even changing your own convention? DonorId vs blood_group_id as PKs? However, if you want to cause a mess name them like you want but you'll have to declare them explicitly then everywhere. See linking models.
I recommend you to do the blog tutorial before messing with the framework to get a real project done.
sorry about DonorID, that was a pasted earlier version, one which I changed before implementing. This question was posted twice yesterday, but it did not seem to have been posted and therefore I reposted it. This was the original post :
http://stackoverflow.com/questions/21321001/cakephp-retrieving-from-database-models-associations-database/21321330?noredirect=1#comment32139276_21321330
Anyway, I did get the blog tutorial and actually found it pretty straight forward, as well as I have the OO background needed to understand whats happening (or so I thought, at least). In both posts, you all gave a good answer, and now I've realised the conventions are a huge factor! And also you re the only one that gave me the relationships needed i.e I was missing the hasMany rel. So Kudos for that!! Which is why I'm accepting yours as the answer
Should I leave the post or delete duplicate ones ?
Change your Donor model as shown below:
I have added foreignKey in BloodGroup and conditions in DonorType:
class Donor extends AppModel{
public $hasOne = array(
'BloodGroup'=> array(
'className' => 'BloodGroup',
'foreignKey' => 'blood_group_id'
),
'DonorType' => array(
'className' => 'DonorType',
'conditions' => array('Donor.type_id' => 'DonorType.type_id')
)
);
Reason for above error: If foreignKey is not defined in association array then CakePHP assumes that tablename_id is foreignKey.
thanks your answer is also correct, since you specified that the fk would need to be explicitly defined in the assoc if not following cake's conventions!
|
STACK_EXCHANGE
|
The file is one of the XP registry files. 2 things have happened to your computer. One is the virus, the other is the W2k reinstallation.
Unless the virus has affected the XP system as well, it is quite possible that your XP registry file is actually not corrupt or missing, but it just cannot be loaded.
The problem may well be the w2k reinstallation, when w2k Setup overwrote the boot files. Both w2k and XP use the 3 boot files ntldr, ntdetect.com and boot.ini. When you set up the dual boot, XP Setup put the XP versions of the first 2 files on to C, which have no difficulty in recognising and loading an older OS like w2k. The w2k version of ntldr and ntdetect.com may not be able to load XP, being a newer OS.
Try replacing these 2 files now on your C drive with the XP version first, before attempting a solution with the XP registry.
Boot into w2k. Go to C, and see if you can see the 3 files. These are usually hidden, so if you can't see them, click Tools, Folder Options, view tab, and (1) tick "Show all hidden files and folders", (2) Untick "Hide file extensions for known file types" and (3) Untick "Hide protected operating system files". By doing (3), you also remove the Read-Only attribute of these files, so that you can replace them.
Put XP CD in. Select Perform other tasks, then Browse CD. Go to i386 folder and locate NTLDR and NTDETECT.COM
Copy them to the C drive. Say yes to replacing the existing files. When that's done, remove the XP CD.
Next check the boot.ini file. This file contains the paths to the operating systems on your computer, and determines what the OS selection screen (when you start computer) looks like.
Since you still get the XP option on that screen, the boot.ini file may well be fine.
Under C, double click the boot.ini file. It should open with Notepad. The only line you need to check is the one under [operating systems] which has "Microsoft Windows XP Home (or Professional)" /fastdetect.
The path should be multi(0)disk(0)rdisk(0)partition(n)\WINDOWS="Microsoft Windows XP Home (or Pro)" /fastdetect
where n= the partition number where XP is installed. So if XP is installed on the 2nd partition on the same hard disk, it should be partition(2), and so on. (I assume XP is on the same hard disk as Windows 2000.)
If that's OK, just exit Notepad.
Alternatively, go to the XP partition and find the file WINDOWS\system32\msconfig.exe
Double click it, msconfig will open. Select the boot.ini tab. Click "Check all boot paths". If it returns a message saying it appears all boot.ini lines for MS OS's are OK, then your boot.ini file should be fine.
Then restart the computer and select XP to see if it works. If it still returns the error message about the registry file, post back. There are MS articles which deal with it.
|
OPCFW_CODE
|
Section 1: Console Tab : Logging, Profiling and CommandLine (Part II)
This tutorial is the part II of “Firebug Tutorial – Logging, Profiling and CommandLine“. (If you haven’t read the part I of this tutorial, I would recommend you to read the part 1 first.)
The following topic will be covered in this section.
- Tracing error
- Tracing XmlHttpRequest object
- Copy and paste the code in notepad and then, save as a htm file
- Open this file in Firefox browser.
- Open the Console tab on Firebug console.
- Click “Start” button and wait a lit.. (You will get the result like the screenshot below. )
(The list is sorted based on the execution time.)
Columns and Description of Profiler
- Function column : It show the name of each function.
- Call column : It shows the count of how many a particular function has been invoked. (2 times for doThis() function in our case. )
- Percent column : It shows the time consuming of each function in percentage.
- Own Time column : It shows the duration of own script in a particular function. For example: doSomething() function has no its own code. Instead, it is just calling other functions. So, its own execution time will be 0ms as shown in picture. If you wanna see some values for that column, add some looping in this function.
- Time column : It shows the duration of execution from start point of a function to the end point of a function. For example: doSomething() has no code. So, its own execution time is 0 ms but we call other functions in that function. So, the total execution time of other functions is 923 ms. So, it shows 923 ms in that column. Not clear? Feel free to let me know. I can try once more.. I don’t mind to explain you again. (That’s my problem in writing .. I can’t write very clear.. Sorry about that. )
- Avg column : It shows the average execution time of a particular function. If you are calling a function one time only, you won’t see the differences. If you are calling more than one time, you will see the differences. Take a look the doThis() function at second line of picture above.
The formula for that column is ~
Avg = Own Ttime / Call;
- Min column and Max column: It shows the minimum execution time of a particular function. In our example, we call doThis() function twice. When we passed 1000 as an parameter, it probably took only a few millisecond. (let’s say 1 ms.). then, we passed 100000 to that function again. It took much longer than first time. (let’s say 50 ms.) . So, in that case, “1 ms” will be shown in Min column and “50 ms” will be shown in Max column.
- File column : the file name of file where the function located.
Note: You can set the title of profiler by passing one string to console.profile() function. In our example (console.profile(‘Measuring time’);), ‘Measuring time’ is the title of profiler.
Profiler button from Console tab’s toolbar
If you don’t want to profile thru the code, you can use “Profile” button from the toolbar of Firebug console.
In order to test this,
- you need to remove two lines (console.profile(‘Measuring time’); and console.profileEnd();) from doSomething() function.
- Open this file in Firefox.
- Open Console tab of Firebug console.
- Click “Profile” button ( right button on the toolbar of Firebug console.)
- Click “Start” button on your page.
- Wait for 1 min ( or a lit less than that.)
- Click “Profile” button again. ( You will get the same graph as the picture above.)
Okay. Let’s move on to next topic.
#2. Options of Console tab
There is one link called “Options” at the right-side of Firebug console. If you click this link, you will see the menu as shown in picture below.
I will divided those times in three categories.
- Errors Tracing
- Show CSS Errors
- Show XML Errors
- XmlHttpRequest Tracing
- Show XMLHttpRequests
- Larger Command Line
#2.1 Errors Tracing and Filtering
- Show CSS Errors
- Show XML Errors
Those options are used for tracing the errors of your script, style sheet and XML. You can also filter the error messages based on the type of error that you wanna.
- Copy and paste the code in notepage.
- Save as a .htm file.
- Reload the page
- First, you will get the CSS error. ( Reason : We wrote bcolor instead of color in “normalText” class. )
- Click the button
- then, we will get the another error. ( because we wrote “doucmnet” instead of “document” in the code. )
I think those options are very sample to use. If you wanna see the errors, just check the option in menu. then, Firebug will give very good information about the errors that you are getting. Uncheck it if you don’t want.
#2.2. Tracing XmlHttpRequest object
This is also one of the most popular feature of Firebug and it is really helpful for Ajax developer. The problem with Ajax is that it is very difficult to figure out if something goes wrong in XmlHttpRequest. Sometimes, you have no idea about what’s going on behind the sense while the indicator keep on cycling all the time. One of the problem that I face when I was playing around with ajax, it is difficult to find out whether the response format from Web service are correct or not.
but things are very simple with Firebug. You only need to select “Show XmlHttpRequest” option. Firebug will tell the following thing.
- The execution time
- Parameters (QueryString)
- HTTP Header
Example : I used Yahoo UI DataTable in this sample. This sample is the updated version of this sample from Yahoo UI.
Download : SourceCode
- Download the zip file from the link above
- Extract the zip file and put all files to PHP directory.
- Try to call this file “dt_xhrlocalxml_clean.html” from Firefox ( http://localhost/yui-php/dt_xhrlocalxml_clean.html in my case.)
- Open the Console tab in Firebug console.
- Select “Show XmlHttpRequests” option
- Click the button “Load Data” on the page
- The following result as shown in picture will be shown. (The style might be a lit bit different since I didn’t change the path in some CSS files. )
- And check the Console tab in Firebug console.
- You will the detailed response in XML format as shown in the picture below.
Note: If you don’t wanna download or if you don’t have PHP installed on your machine, you may try to check this online sample from Yahoo. But there won’t be any button so you should select the “Show XmlHttpRequests” option first and reload or open the link…
Okay. That’s all for today. I was thinking to write about CommandLine in this part but I don’t have the enough time to write it today. Don’t worry. I will tell about CommandLine tomorrow.
Cya. Don’t hesitate to drop a comment if you have any suggestion or comment. I appreciate it.
|
OPCFW_CODE
|
Snippets: Unable to take advantage of variable transformation
I have the following snippet:
{
"arrowFunctionWithOptions": {
"definitions": [
{
"scope": {
"langIds": [
"typescript",
"typescriptreact"
]
},
"body": [
"interface ${name/(.*)/${1:/capitalize}/}Options {",
"\t$options",
"}",
"",
"const $name = ({ $optionFields }: ${name/(.*)/${1:/capitalize}/}Options) => {",
"\t$body",
"}"
],
"variables": {
"name": {
"formatter": "camelCase"
}
}
}
]
}
}
When using this, I cannot get the capitalization I want for the name variable. If I change every instance of $name to $1, I can get the capitalization by inserting the snippet with no phrase and entering the function name afterwards, and the transformation is applied after I Tab, but I cannot use this in insertion_snippets_single_phrase.csv successfully since arrowFunctionWithOptions.1 doesn't seem to work for the Cursorless identifier.
does this work with regular vscode snippets? we just vendored in their snippet parser
Yes, the transformations work with normal snippets. I'll use a smaller example (the React useState hook) to demonstrate some of the nuances I'm experiencing:
If I use this regular VSCode snippet...
{
"use state": {
"scope": "javascript,typescript,javascriptreact,typescriptreact",
"prefix": "usestate",
"body": ["const [${1}, set${1/(.*)/${1:/capitalize}/}] = useState(${2});"],
"description": "useState"
}
}
... I can get the casing I want by first inserting the snippet, entering my text for variable $1, and then hitting Tab to get to variable $2. The capitalize transform isn't applied until I hit Tab.
If I use this Cursorless snippet...
{
"useState": {
"definitions": [
{
"scope": {
"langIds": [
"javascript",
"javascriptreact",
"typescript",
"typescriptreact"
]
},
"body": [
"const [${name}, set${name/(.*)/${1:/capitalize}/}] = useState(${initialValue})"
],
"variables": {
"name": {
"formatter": "camelCase"
}
}
}
]
}
}
... I can add useState, useStateTest.name to insertion_snippets_single_phrase.csv and use the snippet without error, but the capitalize transform is not applied whether or not I include the initial phrase or not, so "snippet usestate hello world" will insert const [helloWorld, sethelloWorld] = useState(). Also, if I use only "snippet usestate" and then enter $name manually, the capitalize transform will not be applied even after I use the Tab key.
If I try making my Cursorless snippet this...
{
"useState": {
"definitions": [
{
"scope": {
"langIds": [
"javascript",
"javascriptreact",
"typescript",
"typescriptreact"
]
},
"body": [
"const [${1}, set${1/(.*)/${1:/capitalize}/}] = useState(${initialValue})"
],
"variables": {
"1": {
"formatter": "camelCase"
}
}
}
]
}
}
... I cannot use the initial phrase for variable $1, as it seems that entries such as useState, useState.1 or useState, useState[1], etc. do not work for insertion_snippets_single_phrase.csv. However, I can simply use "snippet usestate" without a phrase and use the Tab key to get a successful capitalize after manually entering $1. This has been my current workaround in general for snippets where I need transformations, using numbered variables and not using the initial phrase (or using the initial phrase for a different variable that doesn't need transformation).
This is purely speculation, but perhaps the issue has to do with VSCode requiring the explicit Tab press to apply the transformation, and perhaps Cursorless cannot currently trigger this with the initial phrase. I'm not sure why in my second code block above, the Cursorless snippet using $name, I cannot get the capitalize even with manual entry of $name plus a Tab press, though.
|
GITHUB_ARCHIVE
|
Software development is one of the fastest-growing industries in the United States, and in Dallas, it is an incredibly lucrative one.
But for many, it’s still not enough to live on.
The National Association of Software and Information Professionals (NASIP) recently released a report titled “Making the most of your software development career,” which offers tips on how to get paid for your work.
The report is a detailed look at the types of jobs you’ll find in the software industry, how to find a job that suits your skillset, and how to make the most out of your career.
It’s a good place to start.
“This report is designed to help you get started on your career path, with advice on how you can apply your skills, the best companies to work for, and the best places to start your career,” said Chris Baughan, director of digital media at the NASIP.
The NASIP’s study focuses on a specific subset of software developers, including those who are working in an information technology (IT) or a business analytics (BA) field.
These workers often work in companies that sell products, manage information, or analyze data.
“As software developers get older, they often have to make decisions about their career path that impact their well-being,” said Mark Dolan, a senior advisor with the NASIC.
“They have to balance their desire to contribute to society and the desire to be financially secure and have the opportunity to pursue their career goals.”
The report breaks down the types and jobs of software development employees by their skill sets, the types they work in, and their age, salary, and education.
The top ten jobs for software developers in the U.S. are: Software developer, software engineering, information technology, finance, business analytics, financial software, and IT.
There are many types of software engineers, and they can be any kind of developer.
A software engineer may be a web developer, a data scientist, a software engineer, or a software architect.
Software developers tend to be the most experienced and knowledgeable.
They are the ones who write code and build software, which is a job they can typically get paid to do.
A developer might also be a technical writer, who writes code that makes things work, such as online services, apps, or web browsers.
Some software developers are developers of open source software, or open source projects that use open standards.
Software engineers work on software for a wide variety of purposes.
They develop software that can be used in a variety of industries, such in education, medicine, finance or health care.
For example, a bank analyst might develop software for banking and the banking industry to help customers manage their financial accounts, which might include financial products like mortgages, student loans, and student loan accounts.
A medical researcher might work with medical device makers and pharmaceutical companies to develop new drugs that will help patients with specific ailments.
Software engineering is a highly specialized job.
Software engineer work closely with a team of developers, who are responsible for building and testing the software.
Software can be built using the most popular development tools available, such like C#, Java, PHP, Python, Ruby, and others.
Software may also be written using Java or other programming languages.
This may be used for small, incremental projects, or for large, complex projects, such for an entire company.
Software development typically requires a high level of skill and experience.
The amount of experience you have in programming can also be an indicator of your job title.
“Software development employees should be familiar with the programming languages they are working with, the different programming environments that they use, and some of the different types of applications they are developing,” said Dolan.
For a full breakdown of the top 10 software engineers in the country, check out the NASIF report.
|
OPCFW_CODE
|
You can change registrars at any time. However, check the contract with your current registrar before doing so.
- If you transfer the domain name of an active website and change your hosting service at the same time, remember to take into account the lead-time for the change to be propagated.
When you change your hosting service, it is common, but not compulsory, for you to also change registrars. You may also simply want to change registrars to obtain a lower rate or because the company will soon stop trading (if this is the case, you will be notified directly by AFNIC and benefit from an accelerated transfer procedure to the registrar of your choice). In all of these cases, it is important that the process involved in changing registrars takes place in two phases: an administrative phase which is systematic, and a second technical phase which only occurs when the DNS records need to be changed to match your domain name with the new IP addresses of the web resources on which it is hosted.
* Review the contract between you and your current registrar and check the transfer conditions. In practice, the contractual clauses do not prohibit you from changing registrars, but they specify that you lose the fees that the registrar usually asks you to pay in advance for the entire duration of the contract.
* Retrieve the secret AUTH_INFO code associated with your domain name. This can usually be found online in the management interface of your account made available by your current registrar. If this is not the case, remember that a registrar must provide it on request.
* Choose your new registrar. If the domain name uses the .fr, .re, .yt, .pm, .wf or .tf extension, the registrar must be chosen from the list of registrars under contract with AFNIC.
* Make your application to the registrar you have chosen. Finally, note that the application is generally fee-paying and is frequently associated with other services such as website hosting.
* Inform your current registrar of your application, preferably by registered mail. If your domain name uses the .fr, .re, .yt, .pm, .wf or .tf extension, you are bound to do so by the Naming Policy.
On receipt of your application, the AUTH_INFO code, and your payment, your incoming registrar will forward a formal request for transfer to AFNIC, the only body with the authority to make the transfer. In response, AFNIC sends the outgoing registrar, the one you are leaving, notice of the change. In theory, the outgoing registrar has 8 days in which to agree to oppose the change, thus obtaining a non-renewable extension of 22 days. But it can also automatically accept the request within minutes of receiving it. After the period of 8 days and if there is no response from the outgoing registrar, the domain name is automatically transferred to the incoming registrar. The transfer operation is invoiced by AFNIC to the incoming registrar. The transfer operation also changes the renewal date of your domain name.
Once completed, the change of registrar is immediately visible in the Whois database . If, however, it corresponds to one or more active Internet resources, such as your website or your email inbox, it must also be propagated to all the Domain Name Servers (DNS) liable to redirect queries by Internet users. The redirection operation consists in indicating to all the DNS servers the server which is authorized to return the IP address of the server corresponding to the requested resource. Around the planet, several days may be necessary, especially since the local copies (DNS caches) used by ISPs must also be updated in order to speed up browsing. For domain names managed by AFNIC, on average the change is completed in about ten hours. During this period, the outgoing registrar must keep the information enabling access to your resources.
|
OPCFW_CODE
|
If you live and get the job done outdoors the United States, you may be able to exclude from money part or each of the profits you generate in the overseas nation. You might also have the capacity to assert a overseas housing exclusion or deduction. When you assert the international earned earnings or overseas housing exclusion, You can not deduct the Portion of your moving expenditures that pertains to the excluded cash flow.
Also Be aware that the entire process of implementing with the Carné de extranjeria might be a bit challenging plus the method isn't normally precisely the same for everyone so Guantee that you talk to about the details of the procedure on the immigration Workplace.
Never include things like in revenue the value of moving and storage providers furnished by The federal government as a result of a long-lasting modify of station. Usually, if the total reimbursements or allowances you get from the government because of the move are in excess of your true shifting bills, the government must include things like the excess in your wages on Kind W-two.
It is not needed that you just prepare to work before transferring to a completely new area, provided that you basically go to operate in that site.
If your trade or business enterprise is seasonal, the off-season weeks when no function is needed or accessible might be counted as months through which you More hints labored full time. The off-season should be lower than 6 months and you have to perform full-time right before and once the off-season. Illustration.
Entire-time work depends upon what's typical for your form of get the job done in your region. For reasons of this take a look at, the following 4 principles apply.
Visa demands vary no less than a little bit from nationality to nationality and from nation to region. They usually include, but usually are not restricted to a valid passport, passport pictures, spherical trip ticket, proof of ample indicates to find the money for the excursion and hotel reservation.
What's more, it features reimbursements that exceed your deductible bills Which You do not return to your employer.
In the imply time I had organized a career right here in Lima as I'm a Pharmacist and Intercontinental Export & Importer . prior to expired my visa I'd talk to immigration Place of work they said they only can method my paperwork when my VISA are been legitimate.
These reimbursements are fringe Advantages excludable from your profits as qualified moving expenditure reimbursements. Your employer should really report these reimbursements on your Form W-two, box twelve, with code P.
We are already around the border to Ecuador quite a few periods to resume vacationer visas (United states passport) and have never been bothered whatsoever. Ask for 180 days and you'll almost always get it. Now we have pals which have finished the exact same persistently.
Quite a few travelers report that crossing the Peruvian-Ecuadorian border is just not a pleasure and re-moving into Peru to secure a new vacationer visa often entails a "great" chat with the officers there incl. attempting to get bribes.
You deal for your home items and private outcomes to become moved to your household in America, but only if the go is accomplished in an affordable time.
Your employer will have to consist of in your profits any reimbursements produced (or taken care of as produced) beneath a nonaccountable strategy, Though They're for deductible shifting expenses. See
|
OPCFW_CODE
|
Vexira Antivirus Personal 2.0 review: Vexira Antivirus Personal 2.0
Vexira Antivirus Personal 2.0
The software installs easily and places an icon in your system tray that lets you launch the disk-scanning software with a right-click or change the settings for real-time antivirus monitoring with a double-click.
Before we tested Vexira, its maker, Central Command, cautioned us that the software's interface was still a work in progress, and the company was right. For instance, we couldn't schedule Vexira to automatically download virus definition updates, and when we manually downloaded the updates, we had to reboot to install them. Other products do this automatically. The company promised automatic updates in the near feature. Though it didn't affect performance, we encountered a bug in which Vexira displayed two identical icons in the system tray. We also discovered a glitch in which the real-time monitoring software became inactive--and not because of anything we did.
Slays the competition
On the bright side, Vexira sports handy features that are missing from many antivirus apps. For one, it includes a scheduler to run Vexira or, in fact, any program at a specified time, though the feature annoyingly adds an unnecessary icon to the system tray when enabled. In addition, Vexira can scan within an array of compressed file types, including ZIP, TAR, GZ, and RAR--an impressive trick. You can also initiate a scan simply by right-clicking a file or a folder, which saves time, and you can delete a found virus from the hard drive.
Excellent antivirus scanning engine
Vexira performed relatively well in our Labs' tests. It did especially well in our virus simulation and I Love You tests, suggesting that it has good heuristics, meaning it employs general rules that can discover a virus even if the specific strain hasn't yet been identified. Unfortunately, Vexira doesn't scan incoming e-mail for viruses, as Norton AntiVirus does.
To test Vexira's ability to find and remove an active virus, we infected a system with the Gibe worm. Vexira's real-time monitor immediately found the virus running in system memory and deleted it. However, it didn't remove any of the virus's Registry entries. It also left a few virus-created files in the Windows directory and deleted them only after we ran a complete manual scan.
Priced at $49.95, with one year of updates and 30 days of phone tech support and unlimited e-mail support, Vexira is no bargain. The software's online help has limited guidance, but the company's Web site offers valuable information on the latest viruses.
We're not ready to recommend Vexira yet. Although its virus detection engine appears robust, the app suffers from growing pains. We look forward to future versions of this underdog software, assuming it will have a little more polish and refinement. Meanwhile, if you're looking for an antivirus underdog, check out Norman Virus Control 5.0, which sports a much more polished interface.
|
OPCFW_CODE
|
I got the Tracker a couple of weeks ago, and thought I would throw in my thoughts, as they include some observations that I haven't seen shared yet. I got into electronic music from using LSDJ on the Game Boy, so this more fully featured tracker device instantly appealed to me. On top of this, I am a bit of a sequencer nut, and have a Digitakt, Pyramid, Monome, etc - so a fair bit to compare it to.
First of all, the Tracker is beautiful. The screen is really clear; the controls smooth and (largely) intuitive. A lot of thought has clearly gone into it, and making music is extremely easy and accessible. I like it a lot. I rarely make full songs on grooveboxes any more - opting to do sections and then record them into Logic for production, given the various different pieces of gear I've got, and how much of a pain it can be to get them to all talk to each other. However, the song arrangement feature on the Tracker is nicer/more capable than expected.
Some other positives:
* I love the FM radio capture feature.
* Even though Tracker doesn't have multiple outs, it allows you to render stems as wavs for export. This is an awesome feature, and I don't get why Elektron didn't introduce this to the Digitakt, rather than rely on OverBridge (which I've never forgiven them for announcing along with the hardware and taking 2 years after that to deliver).
* MicroSD card storage is brilliant. It is such an obvious, good solution to sample space. (ARE YOU LISTENING, ELEKTRON?!)
Some negatives/areas for improvement:
* The Tracker has less modulation/FX options than the Digitakt, which is a bit disappointing - as a bunch of these could surely be easily implemented to give greater dynamism.
* The internal RAM(?) gets used up quicker than I would like. Not terribly so, but there's still less space than I might want for full in-the-box songs.
* Stem export is pretty buggy. When I first got Tracker there were all sorts of audio glitches, and there is an outstanding bug where the tempo of the exported stems is messed up/drifts. That kind of kills that feature's usefulness.
* It would be great if there was room for extra tracks to run simultaneously. I'm not sure of the hardware/technical limitations here, so take that with a pinch of salt.
Generally speaking, I really like the Tracker. It's a great device with a couple of implementation flaws which I am sure will be fixed. Polyend have been really quick to reply to e-mails and pick up on GitHub issues (even just having the ability to submit a bug report this way is good), and also share beta builds via the GitHub repo. All of this is great, and I look forward to seeing how the software develops. At present I am mostly holding out for the stem export fix, as having the BMP messed up throws a spoke in the wheels of my usual workflow.
|
OPCFW_CODE
|
MongoDB: What is the effect of document size on collection scan performance and "working set" memory footprint?
I am looking at splitting a collection of "sizable" documents into 2 collections (often queried summary fields, and never queried detail fields/arrays of documents). The aim is to reduce average document size, and therefore reduce "working set" memory footprint and collection scan times. Documents will reduce from ~9.5kB to ~2.7kB a reduction of 3.5x (in memory BSON size).
This should reduce requirements on the wiredTiger cache by the same 3.5x factor and therefore require 3.5x less memory in the machine. Will it also speed up collection scan queries by a similar amount? Update / insert operations are rare and NOT performance critical, because run in offline batch processes.
This is for MongoDB 4 running on FreeBSD. The web application is in php7.3, but that's not really relevant.
I currently have 1million documents at above sizes. This is about 3.5GB on disk and 7GB in memory after decompressing. Current server has 16GB RAM, but this is becoming an issue and is part of the motivation, since the number of documents is expected to grow quickly to 4million and then more slowly to 8million.
The application is primarily a "slice and dice" query interface. About 20 different "filters" in the UI driving query conditions on the various summary fields. All of them are indexed, including some compound and some multi-key for small arrays, but because the "UI filters" can be used in any combination, indexes cannot always help as it is not realistic to create compound indexes for every possible field combination.
The structure of the collection documents is 5 large arrays of detail sub-documents (these make up ~70% of the total document size), plus a number of computed "summary fields". The summary fields are computed from the large detail sub-documents in a slow, offline process. This is fine and not the issue. The queries are ONLY against the summary fields, never against the original sub-documents. But we end up with regular "full collection scans". These are beginning to slow down as collection size grows. Currently ~10s when no index available and result includes almost the full collection. This is too slow to be truly interactive. Counts are critical to the application, and again they often require complete collection scans. We have done what we can with "covered queries" including for the counts.
The proposal is to store the original detail sub-documents in a separate collection "linked" by _id. "Lookup joins" will never be needed, except during background batch processing which is not time critical. Updates are extremely rare.
We have analysed the proportion of the collection which is made up of the original sub-documents and moving them off into a separate (and rarely accessed) collection will reduce average document size by factor of 3.5x.
We expect that this will reduce wiredTiger cache size requirements by the same factor and therefore reduces our physical hardware RAM scaling requirements.
The question is: Will we also see a reduction in query execution time when a collection scan is required, because the CPU is only scanning through much lighter documents? Will any gain here be of a similar order of magnitude, ie ~3.5x?
Or is this a false hope, because BSON structure allows wiredTiger to skip past all the "dead wood" in each document. If that's the case, there might still be a smaller gain due to CPU on die cache? ie the smaller documents will be more in contiguous memory?
Smaller documents should absolutely result in a speedup. Because there's more of them in the same space (memory page, disk sector, etc.) The more compact they are, the less storage reads are needed (cache access is still slower than no access). As to the magnitude, this needs to be measured, of course.
@SergioTulentsev Thanks. That's an encouraging validation of my thoughts.
Now done this collection split. Result is good:
Memory footprint of mongod reduced by ~3.4x as expected
collection scan speed also significantly faster (around ~3x)
So, if you have a large collection (many documents) with a significant average document size due to sizeable sub-arrays/documents, and those sub-arrays/documents are rarely needed for queries (ie not too many $lookups or similar), then splitting them out into a separate collection can be very worthwhile.
Lower RAM requirements AND faster collection scans by roughly the factor that you manage to reduce the avg doc size by.
|
STACK_EXCHANGE
|
I’ve got an issue with the native Movie Maker where the video rate exports with a framerate greater than specified. My sketch reacts to sound input so it’s clear to see that the exported .mov visuals don’t sync with the sound.
Both frameRate(30) is defined in setup() and a framerate of 30 set in the Movie Maker options.
I’m working in Processing.py
Just because you set
frameRate(30) does not guarantee you a constant framerate of 30 FPS. Usually it’s about 28 to 29 FPS. Saving frames to a file costs a lot of computing power and so your sketch will run at about 10-15 FPS.
Now there are multiple solutions for this problem:
You could store the frames in a
PGraphics list and buffer them till the recording ended, then save all the frames. This is possible, but it depends on the time you would like to record, because you don’t have unlimited memory. You could counter that with a separate thread, which then writes the
PGraphics async onto the disk. But this maybe is slow too, because you would have to store the pixels on the CPU (with loadpixels) and GPU - CPU transfers can not be done multithreaded.
You write your software as a frame by frame renderer instead of a live renderer. You first define how long the animation should take, and then interpolate frame by frame for the animation. This of course can not react to live music, but it would be possible to read the audio file sample by sample.
Maybe the best solution is to use another software for recording. Send your sketch through syphon to a software which is capable of recording syphon streams (f.e. syphon recorder).
Ah… I had suspected as much. Thank you for the detailed response and suggestions.
I had thought that perhaps had a frame not corresponding to time been saved then the moviemaker would recycle a previous one as a placeholder. But of course it can’t because they’re named iteratively versus some framerate structure.
I was exporting .jpg’s but .tga’s seem faster. Do we know for sure what exports quickest for lower drag?
Check out the following library which is for direct video export (ffmpeg). It seems to be a bit faster then directly writing to the disk (opinion based, not measured).
Is it possible to use this library in python mode? I installed ffmpeg no problem but there doesn’t seem any way to install VideoExport via sketch > import library. cc @hamoid
Did you try sketch > import library > add library? I see it listed there near the bottom of the list… if not, what OS and Processing version?
Hi hamoid — I don’t see it nor does it appear with search “VideoExport” or “hamoid”.
I’m running processing 3 (3.5.3) on osx 10.10.5
What about “Video Export” with a space or “Abe”?
Unfortunately no results.
You’re right. Somehow it is cached in my system but if you visit the library URL
you can see the Video Export library very much gone. I don’t know why.
You can still find it at https://funprogramming.org/VideoExport-for-Processing/
Sorry to ask such a noob question but how do I install it outside of the UI? I tried putting ‘com’ in a libraries folder (which is in the sketch folder) but even in java mode I get “The import com.hamoid cannot be resolved”. I really would like to continue working in python!
com? maybe you downloaded the source code instead?
At the top of the page click download, which scrolls to the middle of the page, where you can download a zip file. The file contains a VideoExport folder, which you drop into your Processing libraries folder. Then restart Processing and
Awesome! It’s working locally. Looking forward to trying this out
Your reply sort of sounds like there’s a more general folder for adding processing libraries as opposed to storing local to the sketch folder. Is that true?
That’s true Your sketchbook folder (which you can find in the Processing Preferences screen) should contain a
libraries folder. That’s where you put libraries that can be used by all your sketches.
The withAudioViz example is exactly what I need. Now to translate to python!
Thanks to @cansik for the recommendation and @hamoid for the library.
Thanks again @cansik — Turns out Syphon was perfect for this job.
|
OPCFW_CODE
|
Arch support wide range of desktop environment such as Xfce, Kde, Gnome, Cinnamon, Mate, LXQt, LXDE, Budgie, Deepin & Enlightenment. Go to the official Arch Linux download page to get the ISO file. Install MATE Desktop Environment in Arch Linux Install Deepin desktop environment in Arch Linux Make sure your Arch Linux distribution is up-to-date. 5 The actual installation of Arch Linux Phase 1 UEFI. If you’re new to this or looking for a suggestion, XFCE is a fantastic place to start. Continuing from our previous tutorial on the steps to install Arch Linux, in this tutorial we will learn how to install GUI on Arch Linux. Also offers an easy to use installer to allow users to set up a completely accessible and customisable arch linux installation. It definitely fits in very well with the Arch Linux mentality of keeping it simple. The process is basically the same on Linux Mint, but you’d search for the package in the Software Manager application instead. Viewed 12k times 2. Install Arch on a Raspberry Pi with MATE Desktop These instrutions show you how to get GNU/Linux with the MATE Desktop installed on a Raspberry Pi using Arch Linux Arm. Set local time 9. To update Arch Linux distro, run the following command as root user from Terminal: Requirements. Download the Arch installation image. 2 Going over the FAQ of the Arch Wiki. Install Yay AUR Helper in Arch Linux How to Use Yay in Arch Linux and Manjaro. Arch Linux can be downloaded in ISO format, from which point you can burn it onto a blank DVD in order to run the image on your computer. Hello Arch Linux enthusiasts!. and you will see the default arch install doesnt come with a graphical desktop because arch gives you the power to choose your own if youre new to this or looking for a ... window on arch linux when prompted simply hit the enter button to install all the packages step 3 installing mate desktop environment in arch linux with xorg installed we Arch Linux is a general-purpose rolling release Linux distribution which is very popular among the DIY enthusiasts and hardcore Linux users.. Set hostname 10. Active 1 year, 1 month ago. The Zen Installer provides a full graphical (point and click) environment for installing Arch Linux. Enable network 11. How to install Raspbian Choose the right script depending if you are installing on VirtualBox or on a SSD. Ask Question Asked 5 years, 4 months ago. I've run through this install twice now, and updated/reorganized this document a … January 1, 2020. It’s a tried-and-true Linux desktop that’s light on resources while still providing a complete desktop experience. Set root password 12. How to install mate-tweak in any Manjaro Arch Linux Operating System. This was done with a new Huawei MateBook Pro X Intel Core i7-8550U 1.8GHz, 16GB RAM, 512GB SSD. To further customize Arch Linux, check out this detailed guide. In this guide, I’ll show you how to use yay – Best AUR Helper for Arch Linux to manage packages on AUR. The Cinnamon and MATE desktop environments included with Linux Mint aren’t available in Ubuntu’s repositories, so you’d have to use a PPA to install them on Ubuntu. Install GRUB 13. The default Arch install doesn’t come with a graphical desktop because Arch gives you the power to choose your own. Enable snaps on Arch Linux and install MATE-on-Wayland. 2. We will download the scripts for an easy and quick installation. It provides support for installing multiple desktop environments, AUR support, and all of the power and flexiblity of Arch Linux with the ease of a graphical installer. On the live system, all mirrors are enabled, and sorted by their synchronization status and speed at the time the installation image was created.The higher a mirror is placed in the list, the more priority it is given when downloading a package. Arch Linux installation 1. Arch Linux is one of those operating systems that once you get used to, you just can’t go back. Snaps are discoverable and installable from the Snap Store, an app store with an audience of millions. Install MATE Desktop Environment on Wayland on Arch Linux ... Arch Linux: Mate Desktop Install; How to Download iTunes for Linux (with Pictures) The default installation covers only a minimal base system and expects the end user to configure the system by himself/herself. Edit EFI bootloader 14. They will differ from the video in the future. This tutorial will show you how to set up and configure Samba on an ArchLinux client or server. (Ctrl+Alt+T) 2. MATE is under active development to add support for new technologies while preserving a traditional desktop experience. MATE has finally found it’s way home. Once you have yay installed, you can upgrade all the packages on your system using the command. Practice your Arch Linux installation in VirtualBox 3. Luckily, there is a version of Arch Linux designed to work with ARM processors. Check network connection 2. The MATE desktop is lightweight, and very customizable. Let’s take a look at how you can install Arch Linux on Raspberry Pi. Also we need to install Display manager based on the desktop environment such as gdm, lightdm, slim, lxdm, etc. Scripts will continue to evolve and improve. Open the terminal. A project based on Talking Arch to create an ArchLinux live + install CD for blind and visually impaired users. To download this image: Make sure that you have either BitTorrent or uTorrent installed. 17 Installation of Mate on Arch Linux Phase 4. $ sudo yay -Syu To include development packages during the upgrade run. Arch ARM comes with AUR repository already added to the pacman.conf file. Arch Install Notes. The most important reason people chose Arch Linux is: Arch's goal of simplicity means there's usually one preferred way to get things done - through organized and well documented configuration files. Brief: This tutorial shows you how to install Arch Linux in easy to follow steps. How to install dwm in Arch Linux. Enter the following command to the terminal. Snaps are applications packaged with all their dependencies to run on all popular Linux distributions from a single build. sudo pacman -S mate-tweak . Mind the xorg installation. Install the system 4. UEFI or legacy mode? Download Arch Linux ISO 2. 4. Arch Linux is ranked 6th while Ubuntu MATE is ranked 61st. Chroot to the installed system 6. Generate fstab file 5. ... 30 How to install Arch Linux with BIOS, installing Deepin and installing … I installed dwm 6.0 using the file provided here and then.. # make install … Before you start 1. Set the time zone 8. Arch Linux is a light weight, highly customizable linux distro. You can then go ahead and install Mate on it. A 64 bit installation of Manjaro running MATE uses about 378MB of memory. Packages to be installed must be downloaded from mirror servers, which are defined in /etc/pacman.d/mirrorlist. 1. This includes Arch Linux, which is revered for its simplicity. As some of you may know MATE was created by an Arch Linux user, Perberos. In this video, the viewer is shown how to install the MATE desktop on Arch … February 16, 2018 How to Install – mate-tweak in Manjaro Arch Linux Operting System- Explained. Installing Arch on the Huawei MateBook Pro X with Disk Encryption. They update automatically and roll back gracefully. It is somewhat ironic then that MATE has never featured in the official Arch Linux package repository until now. Arch Linux Arm is a distribution of Linux for ARM computers that can be installed as an alternative to raspbian, which usually comes pre-installed on a raspberry Pi SD Cards. 1. MATE is one of the most popular Linux desktop environments, MATE offers a variety of features, such as it’s traditional desktop experience, low resource consumption, and highly customizable interface, that makes it a good fit for Arch Linux, a lightweight, minimal Linux distribution. Related: How to Boot Raspberry Pi 4 From USB. Partition 3. Reboot 15. Available Desktops include Gnome, KDE, Mate, XFCE, and LXDE. Install a basic mate environment sudo pacman -S mate network-manager-applet Optional: Install mate applications and configuration tools sudo pacman -S mate mate-extra. sudo pacman -S git git clone https://github.com/erikdubois/archmate. MATE (Русский) - ArchWiki. A… Delivered on 2018/10/08. So you can also install packages from AUR for ARMv7 architecture. Its installation does not include a desktop environment. Deactivate Xubuntu window manager (may vary depending on the distribution) Open "Session and … Then, burn the ISO image into a USB … Set locale 7. In this article, we will show you the process for installing the MATE Desktop environment on Arch Linux. September 20, 2018.
2020 arch how to install mate
|
OPCFW_CODE
|
This blog was originally written by Frazier Smith and published on the VFrazier blog
In this post, we will be diving into the details of three key components that make up a VMware Cloud solution: vSphere, NSX, and vSAN. These powerful tools work together to provide a comprehensive and robust virtualization platform for businesses of all sizes.
We will be discussing the features and capabilities of each component, as well as how they interact with one another to create a seamless and efficient virtual infrastructure.
Whether you’re a seasoned IT professional or just getting started with virtualization, this post will provide valuable insights into the inner workings of a VMware Cloud solution.
vSphere is a virtualization platform from VMware that allows users to create and manage virtual machines (VMs) on a single physical host. This platform is designed to increase efficiency, reduce costs, and improve disaster recovery capabilities.
One of the key features of vSphere is the ability to create and manage VMs. Users can create new VMs from scratch or clone existing ones, and configure them with different CPU, memory, and storage resources. vSphere also allows users to manage the life cycle of VMs, including the ability to power them on, off, or suspend them, as well as take snapshots of their current state.
Another important feature of vSphere is its ability to manage and allocate resources to VMs. vSphere allows users to create resource pools, which can be used to assign CPU and memory resources to VMs. Users can also use vSphere to manage storage resources, including the ability to create and manage storage clusters, and assign storage to VMs.
vSphere also includes a number of advanced features that allow users to improve the availability, scalability, and performance of their virtual infrastructure. These include features such as high availability, which allows VMs to be automatically restarted on another host in the event of a failure, and Distributed Resource Scheduler (DRS), which automatically balances resources among VMs to ensure optimal performance.
In addition to these features, vSphere also includes a number of management and monitoring tools that allow users to manage and monitor their virtual infrastructure. These tools include vCenter Server, which provides a centralized management console for vSphere, and vSphere Web Client, which provides a web-based interface for managing vSphere.
Overall, vSphere is a powerful and feature-rich virtualization platform that allows users to create and manage virtual machines, allocate resources, and improve the availability, scalability, and performance of their virtual infrastructure. With its advanced features and management tools, vSphere is an essential tool for any organization looking to improve their virtualization capabilities.
vSphere is the backbone of VMware Cloud and provides the foundation for virtualization. It allows businesses to create and manage virtual machines, as well as access to a wide range of tools and features that make it easy to manage and scale their IT infrastructure. With vSphere, businesses can easily create and manage virtual networks, storage, and security policies, as well as automate tasks and monitor performance.
Our second foundational component for VMware Cloud solutions is NSX. NSX is a network virtualization platform from VMware that allows users to create and manage virtual networks on top of existing physical infrastructure. This platform is designed to provide a flexible and efficient way to manage and secure networks in a virtualized environment.
One of the key features of NSX is the ability to create and manage virtual networks. Users can create multiple virtual networks, also known as logical networks, that can be isolated from each other, yet still share the same physical infrastructure. These virtual networks can be configured with different network services, such as firewalls, load balancers, and VPNs, to provide advanced networking capabilities.
Another important feature of NSX is its ability to provide micro-segmentation capabilities. Micro-segmentation allows administrators to create fine-grained security policies for virtual machines, which reduces the attack surface and improves security posture. This can be achieved by creating security groups, and applying security policies to them.
NSX also allows users to automate network provisioning and management. Users can create network templates and use them to automatically provision new virtual networks, or make changes to existing ones. This can greatly reduce the time and effort required to manage and maintain virtual networks.
In addition, NSX provides a comprehensive set of monitoring and troubleshooting tools to help administrators understand how their virtual networks are performing and where issues may be occurring. These tools provide real-time visibility into network traffic and usage, as well as the ability to drill down into specific virtual machines or network segments to identify and resolve issues.
Overall, NSX is a powerful network virtualization platform that allows users to create and manage virtual networks, improve security posture, automate network provisioning and management and troubleshoot network issues. With its advanced features and management tools, NSX is an essential tool for any organization looking to improve their network virtualization capabilities.
Within a VMware Cloud solution, NSX is utilized to allow for these overlay segments to create networks that will work for your workloads, no matter if they are on-premises or in the cloud.
The third and final functional component of VMware Cloud is vSAN. VMware vSAN is a software-defined storage platform from VMware that allows users to create and manage a shared storage pool for virtual machines. This platform is designed to provide a simple, efficient, and cost-effective way to manage storage for virtualized environments.
One of the key features of vSAN is the ability to create a shared storage pool from the local storage resources of multiple ESXi hosts. This allows users to create a highly available and scalable storage infrastructure, without the need for expensive external storage devices. vSAN also allows users to create storage policies that can be applied to virtual machines, to ensure that they have the storage resources they need to perform optimally.
Another important feature of vSAN is its ability to provide advanced data services, such as snapshots, clones, and replication. These services allow users to easily create point-in-time copies of virtual machines, and use them for testing, development, or disaster recovery. vSAN also supports storage-efficient snapshots and clones, which use space-efficient techniques like deduplication, compression, and thin provisioning to minimize the storage space required.
vSAN also provides a centralized management and monitoring console, which allows users to monitor and manage their storage infrastructure. The console provides real-time visibility into storage capacity, usage, and performance, as well as the ability to drill down into specific virtual machines or storage objects to identify and resolve issues. vSAN also integrates with vCenter Server, which allows administrators to manage both compute and storage resources from a single console.
Overall, VMware vSAN is a powerful software-defined storage platform that allows users to create and manage a shared storage pool for virtual machines. It provides advanced data services, centralized management and monitoring, and integrates with vCenter server. With its advanced features and management tools, vSAN is an essential tool for any organization looking to improve their storage management capabilities in a virtualized environment.
Combining vSphere, NSX, and vSAN into VMware Cloud
By combining these functional components, VMware Cloud allows businesses to easily create and manage a virtualized IT infrastructure that is highly secure, resilient, and scalable. This can help businesses to reduce costs, improve performance, and increase agility, while also providing them with the tools and features they need to manage and scale their IT infrastructure as their business grows.
Like to learn more about this topic?
Join our upcoming webinar where Frazier will speak on: Essential Cloud Capabilities: 3 skills you must master when migrating to the Cloud
In a world where innovation is happening at the speed of light, it’s hard to keep up. One day you’re a tech wizard and the next you’re outdated. Join us for an interactive session where you will learn the 3 skills you must master when migrating to the Cloud. Earn the admiration of your fellow technology leaders by launching your next project successfully. After this webinar, you will be able to discuss the benefits of moving from on prem to the cloud and avoid many of the common migration pitfalls. Register today!
|
OPCFW_CODE
|
I’ve avoided it for ages but I have to implement a system that allows users to upload images to the web server. Can I confirm this is secure?
- Files are uploaded to a folder either outside the web root or with Deny from all in the .htaccess
- Files are converted to .jpgs and re-saved using GD
- Original upload is deleted almost immediately (as soon as conversion above has taken place)
- Images are then outputted in the CMS using PHP and .jpg header
Are there any potential problems there? I guess technically step 4 isn’t even needed.
Step 2 could be a problem as normal conversion can still retain code injection within the file. With GD you can read the image into a string and then rebuild it. I can not find the link at the moment but you can search for “jpg code injection” and see what I am talking about.
Thanks, but let’s say you upload a file with injected PHP. How do you execute it given you can’t access the file directly? The file would be read with readfile and outputted using image headers.
P.S. Forgot to say in step 2 I’d resize the image. Would that not get rid of anything malicious?
P.P.S. Am chmod’ing the uploaded file to 0644.
I have tried resizing and it did not help. I would have thought reading a file would be just as bad as displaying it.
There is an interesting article here: http://nullcandy.com/page/2/ and an image with some code injected you can test. The post has been updated since I found it and I will have to check it out again. Open the image before and after your test and see if the code is still there.
You can also add shell codes in a png image as well.
From memory saving a jpg as a png will remove any EXIF data which can also contain bad code.
Perhaps the code could automatically run when the image is loaded? I am not an expert and I would be interested in any test results.
I have tried resizing and it did not help.
I re-saved an image with EXIF data using GD and the EXIF data was gone.
I would have thought reading a file would be just as bad as displaying it.
Why? Provided you don’t read the file in such a way the PHP code will be parsed it should be fine (e.g. fread or readfile would be good choices). I think the only way to read a file and have it execute is if you use include or require as this will parse the PHP. That’d be madness though.
I just spoke to my hosting company and they have it set up so that unless you rename the uploaded file to xxx.php, make it executable and allow it to be directly accessed you’re fine. You’d have to be really, really careless to have an exploit in this way.
I hope this helps someone anyway.
This topic was automatically closed 91 days after the last reply. New replies are no longer allowed.
|
OPCFW_CODE
|
Having issues getting the SUM() of a column with GROUP BY and DISTINCT
I have the following query:
select distinct ProdQty, JobCompletionDate, JobHead.JobNum from erp.JobHead
inner join erp.LaborDtl on JobHead.JobNum = LaborDtl.JobNum and JobHead.Company = LaborDtl.Company
where JobCompletionDate = '2022-01-04' and JobHead.Company = 'TD' and LaborDtl.JCDept ='MS'
It returns the ProdQty for every JobNum for a given JobCompletionDate
Here is an excerpt of that result:
Prod Qty
JobCompletionDate
JobNum
12
2022-01-04
198583
1
2022-01-04
205388
2
2022-01-04
205562
I'm not going to paste the whole table here, but I hope the idea is clear. The reason I do distinct in the selection is because there are usually duplicate entries for the ProdQty, and using distinct eliminates those.
Next, I need to GROUP BY JobCompletionDate to get the total of ProdQty for that date. I am having issues using SUM() on the ProdQty field. When I sum the values from this column using Excel or a calculator, I get the value 7201, which is the correct value that I need.
However, when I perform this query:
select sum(distinct ProdQty) from erp.JobHead
inner join erp.LaborDtl on JobHead.JobNum = LaborDtl.JobNum and JobHead.Company = LaborDtl.Company
where JobCompletionDate = '2022-01-04' and JobHead.Company = 'TD' and LaborDtl.JCDept ='MS'
group by JobCompletionDate
My result is: 6660
Why? What am I doing incorrectly here?
Without seeing the data, my assumption is the distinct is now being removed and the sum is now summing all records INCLUDING the duplicates that the distinct was dropping off the record set
Looks like you can (and should) use exists in stead of join to avoid duplicates altogether. I think the distinct removes too many ProdQty's, also the ones that are no real duplicates. (If the second row had 12 too, would that be a duplicate?).
A good erp system shouldn't have duplicate labor quantities. Why do you think you have duplicates rather than simply multiple records for laborDtl that happen to record the same quantities?
DISTINCT is a code-smell: a sign of poorly thought through joins. As mentioned, you probably need EXISTS or IN semi-join. If really necessary, you might need to place the DISTINCT in a CTE/derived table and then SUM the result
This:
select distinct ProdQty, JobCompletionDate, JobHead.JobNum...
is deciding what is distinct very differently than this:
select sum(distinct ProdQty)...
In the first example, it's removing duplicates where the combination of ProdQty, JobCompletionDate, and JobHead.JobNum are not distinct.
In the second example, it's removing duplicates where just ProdQty is not distinct. Since it's removing too many rows with the distinct as it's specified, the sum comes out lower than expected.
What you want to do is deduplicate the rows the way you know it's working (assuming that's the first query) and then perform the sum off of that intermediate result. You could use a CTE or subselect to do that. Here's an example:
select sum(ProdQty) from
(
select distinct ProdQty, JobCompletionDate, JobHead.JobNum from erp.JobHead
inner join erp.LaborDtl on JobHead.JobNum = LaborDtl.JobNum and JobHead.Company = LaborDtl.Company
where JobCompletionDate = '2022-01-04' and JobHead.Company = 'TD' and LaborDtl.JCDept ='MS'
);
I am actually an idiot for not realizing my mistake here - thanks!!!
|
STACK_EXCHANGE
|
M: Uber clarifies their user tracking after app exit or deletion - mzarate06
https://techcrunch.com/2017/04/23/uber-responds-to-report-that-it-tracked-users-who-deleted-its-app/
R: bostand
Why are these report not affecting ubers bottom line?
It seems people are complaining a lot but at the end of the day still using
über because what? Cheap rides forgives everything? Social anxiety??
R: bootloop
I can't speak for others but I stopped using it and will never go back. (I
also know from others who don't use it anymore and try to convince people to
do the same.)
R: nojvek
I don't use Uber anymore. However I don't think Lyft is that innocent. They
probably don't get as much news coverage but I wouldn't be surprised if they
buy unroll.me emails and scan for Uber receipts.
Tracking and analytics is a giant orgy. Everyone shares with everyone.
R: robtkiller
Another reminder that everything we do on the internet is being tracked as
though it were happening in public.
R: Pica_soO
Ueberfällig
|
HACKER_NEWS
|
One of many ways to handle an HTTP request is by writing a method on your URL-bound classes. In Stapler-speak, these methods are called "web methods".
Web methods are public instance methods that has the name that starts with "do", for example:
Such a method would handle an HTTP request sent to
/.../start (again, see the reference for the exact routing rules)
Web methods can define parameters. When Stapler invokes your web method, it needs your help to figure out what you expect in those parameters, and there are several ways to do this.
Firstly, if the type of a parameter is one of the following "well-recognized" types, Stapler would instantly know what to do:
HttpServletRequest— the request object
HttpServletResponse— the response object
Secondly, you can place parameter injection annotations to instruct Jenkins what you want to see in that parameter. Unlike typical Java programming, parameter names are significant.
- @Header: requests that the value of the request HTTP header be injected.
- @QueryParameter: requests that the value of the request parameter be injected. This includes a submitted form and a query parameter.
- @AncestorInPath: Stapler will call
StaplerRequest.findAncestor(Class)and inject the obtained object. This object is the nearest "ancestor" of the specified type in the current URL.
Parameter injection annotations are extensible. One can define a custom injection parameter by using the @InjectedParameter] meta annotation, which is how all these annotations are defined.
The type of the injected parameter can be anything, and Apache Commons Beanutils is used to convert the incoming value into the appropriate Java type your method requests.
Return value and exception
If a web method returns, either normally through the return statement or abnormally through a thrown exception, Stapler checks if the return value / exception implements HttpResponse interface. If so, this object is expected to render a response via its
There's also HttpResponses that provides a number of static methods that help you create typical HTTP responses, such as redirect, errors, serving static files, etc.
Web methods can have "interceptor annotations", which decorates your web method by adding additional processing before and after your method gets invoked. Interceptor annotations to web methods are like servlet fileters to servlets.
Jenkins defines several built-in interceptor annotations:
- @RequirePOST: aborts with "400 bad request" if the request is not POST. This is how we protect endpoints that can update states in Jenkins, with crumb.
- @RespondSuccess: used on a web method that returns "void" so that when the method returns normally "200 success" will be returned as a response.
Interceptor annotations are extensible through InterceptorAnnotation.
Parameter injection, return value/exception response rendering, and interceptor annotations help you reduce the HTTP dependency in your model objects. This tends to make code easier and more easily testable.
Normally, a name of a web method determines how the request is routed. For example,
doEatPizza() would be mapped to
.../eatPizza. But you can explicitly specify the URL name by using WebMethod annotation. For example, the following code maps
.../xyz.xml to the same method.
|
OPCFW_CODE
|
It is using an Index Scan primarily because it is also using a Merge Join. The Merge Join operator requires two input streams that are both sorted in an order that is compatible with the Join conditions.
And it is using the Merge Join operator to realize your INNER JOIN because it believes that that will be faster than the more typical Nested Loop Join operator. And it is probably right (it usually is), by using the two indexes it has chosen, it has input streams that are both pre-sorted according your join condition (LocationID). When the input streams are pre-sorted like this, then Merge Joins are almost always faster than the other two (Loop and Hash Joins).
The downside is what you have noticed: it appears to be scanning the whole index in, so how can that be faster if it is reading so many records that may never be used? The answer is that Scans (because of their sequential nature) can read anywhere from 10 to 100 times as many records/second as seeks.
Now Seeks usually win because they are selective: they only get the rows that you ask for, whereas Scans are non-selective: they must return every row in the range. But because Scans have a much higher read rate, they can frequently beat Seeks as long as the ratio of Discarded Rows to Matching Rows is lower than the ratio of Scan rows/sec VS. Seek rows/sec.
OK, I have been asked to explain the last sentence more:
A "Discarded Row" is one that the the Scan reads (because it has to read everything in the index), but that will be rejected by the Merge Join operator, because it does not have a match on the other side, possibly because the WHERE clause condition has already excluded it.
"Matching Rows" are the ones that it read that are actually matched to something in the Merge Join. These are the same rows that would have been read by a Seek if the Scan were replaced by a Seek.
You can figure out what there are by looking at the statistics in the Query Plan. See that huge fat arrow to the left of the Index Scan? That represents how many rows the optimizer thinks that it will read with the Scan. The statistics box of the Index Scan that you posted shows the Actual Rows returned is about 5.4M (5,394,402). This is equal to:
TotalScanRows = (MatchingRows + DiscardedRows)
(In my terms, anyway). To get the Matching Rows, look at the "Actual Rows" reported by the Merge Join operator (you may have to take off the TOP 100 to get this accurately). Once you know this, you can get the Discarded rows by:
DiscardedRows = (TotalScanRows - MatchingRows)
And now you can calculate the ratio.
|
OPCFW_CODE
|
The Pareto Principle states that there is an “unequal relationship between inputs and outputs. The Pareto principle states that 20% of the invested input is responsible for 80% of the results obtained.”
You’re probably wondering what the Pareto Principle has to do with the blockchain and the many ensuing projects? Well, if one were to look at code repository on Github over the past year, one would see that there were at least 26,000 blockchain-related projects that was created on the platform, but can you guess how many remain? Just a “small percentage are still active”.
This indicates that many have tried their hands at the blockchain technology, yet have abandoned the projects. This may be due to a variety of reasons, new technologies require patience and persists to unlock and truly understand. New technologies require inputting an investment of time to gain the correct knowledge, interaction with the community and more collaboration with human capital to see the project fully through.
According to a report called “Evolution of Blockchain Technology: Insights from the Github Platform” prepared by Delloite, “there were approximately 26,885 blockchain-related projects in 2016 developed on Github.” The research team over at Deloitte has found that the average lifespan of a project as represented by the Github data to be 1.22 years.
The Deloitte researchers chose to glean their insights from Github because many significant blockchain projects were concentrated on this platform. Github also allowed them the ability derive insights on “who is behind this substantial blockchain development, what type of programming is powering it, where the talent resides, how networks and communities of projects and developers are organized, and what risk factors exist for investing resources into repositories.”
The work done by the team at Deloitte is not only interesting work but it is much needed work which can allow for more sustainable growth for the blockchain community by incentivizing and providing key information for would be blockchain project creators. The information would allow these potential community entrants to understand aspects of building projects and how to see them through to completion.
Open source projects are amazing and have provided immense value as the internet and the web progressed as well. But the issue is that there has to be built in mechanisms to keep individuals working on the projects. There has to be some sort of community engagement and other aspects that are not necessarily based around monetary gain. Open source projects thrived in the past but have seen less interest over the years as many have seen activated commercialization tendencies.
Long term and successful open source projects allow for the development of new technologies and the potential for successful commercialization later on. Open source projects allow many people to come together and solve different problems that could then be used for projects that are of a commercial nature. The greater the experience of the many collaborators in the open source space, the greater their ability for the expansion of opportunities in the space.
Yet if there isn’t some sort of system to keep attention the projects have a chance of dying. Thus, projects that are run by organizations have a higher chance of survival, the aspects of accountability and inherent organization help them to maintain and have longevity.
“Of particular significance, some projects that organizations have developed have resulted in new platforms (such as Ethereum, Ripple, Corda, and Quorum) which some developers now use to build applications.”
According to the researchers “The stark reality of open-source projects is that most are abandoned or do not achieve a meaningful scale. Unfortunately, blockchain is not immune to this reality.
The report adds that about 90 percent of projects developed on GitHub become idle, with the highest mortality rate occurring within the first six months of a project beginning.
The researchers give an insight into the geographical areas where the distributed ledger is being developed so far. It noted that San Francisco is home to the most diverse projects being developed on the blockchain, with 1,279 users and 101 organizations. London comes in at second place with 858 users and 61 organizations. New York is third with 725 users and 49 organizations.”
They also stated that China also plays a significant role in the development of the space “ It is also worth noting the high level of activity in China, specifically, Shanghai and Beijing. In both of these cities, most of the projects pertain to cryptocurrencies and cryptocurrency exchanges, with an emphasis on scalability.”
Interesting notes that is present in the report show that one of the most important things about this blockchain industry has been the open source nature of it. Many of the projects have been created in an open source mindset and even the pioneer cryptocurrency, bitcoin, was formed and placed in an open source environment.
The report goes on to highlight key definitions of a project which we will include here as they are certainly relevant in ICO analysis.
What’s a “repository” ? Software projects that host code.
What’s a “watcher” and a “committer”? A watcher simply follows the development of the project and committer goes a step further and contributes to the project with their code.
So “commits” are contributions to a codebase.
“Forking” is copying a project in the work environment
Who are the big players in the field of blockchain?
A concentrated amount of commercial projects lie on the financial space, seen in the many ICO’s who are seeking to disrupt payments, insurance and other aspects of the age old financial structure.
The language that seems to be the most common in many projects are C++. Yet, the researchers are finding that, Go, the language developed by Google is slowly becoming popular as well, as it is “the second largest language used for blockchain-related projects, and it’s core components of simplicity and scale might be one of the primary reasons for it’s growth.”
Where is the talent located?
The information presented allows us to be able to understand the community, how the blockchain field is progressing and who is moving it forward.
Read more exciting information here
|
OPCFW_CODE
|
#Reference properties are not visible to light DOM
If I do the following:
outer scope: {{parentAttr}}
<my-component #parent-attr="{childAttr}">
light DOM: {{parentAttr}}
</my-component>
http://jsbin.com/zajiloqaze/edit?html,js,output
The outer scope can see the value of parentAttr but the light DOM (aka user content) of my-component cannot. With leakScope:true, you could just read directly from childAttr but it's not clear where that is coming from so being able to do the above could be useful.
I like the idea of being able to explicitly expose a component's view model to the outer context primarily because it allows you to use properties/methods of the component's viewmodel even when its defined with leakScope:false which I prefer to always do because:
it forces data to be passed in explicitly which increases clarity
it prevents random junk from leaking into the component's template or light DOM
At the very least, I think this should work with leakScope:false. I originally expected it to because, in theory, the light DOM would be reading directly from the outer context but I was unpleasantly surprised :(
I thought this was tested. I think it should work the way you describe.
Sent from my iPhone
On Aug 19, 2015, at 7:57 PM, dylanrtt<EMAIL_ADDRESS>wrote:
If I do the following:
outer scope: {{parentAttr}}
<my-component #parent-attr="{childAttr}">
light DOM: {{parentAttr}}
http://jsbin.com/zajiloqaze/edit?html,js,output
The outer scope can see the value of parentAttr but the user content of my-component cannot. With leakScope:true, you could just read directly from childAttr but it's not clear where that is coming from so being able to do the above could be useful.
I like the idea of being able to explicitly expose a component's view model to the outer context primarily because it allows you to use properties/methods of the component's viewmodel even when its defined with leakScope:false which I prefer to always do because:
it forces data to be passed in explicitly which increases clarity
it prevents random junk from leaking into the component's template or light DOM
At the very least, I think the light DOM should be able to access exported viewmodels with leakScope:false but I don't care as much about leakScope:true because I try to avoid it anyway. I originally expected it to work because in theory the light DOM would be reading directly from the outer context but I was unpleasantly surprised. :(
—
Reply to this email directly or view it on GitHub.
I found the issue. It's happening because no template is defined for the component. If I put template:can.stache('<content/>') on the component definition it fixes it.
Lacking a template also affects lexical scoping and renders leakScope:false useless whereas it should probably still prevent the light DOM from reading from the component's viewmodel. Even before #reference values, you could still have a component with no template that used a view model that wasn't used by the light DOM.
Seems to be fixed in minor.
I was mistaken. This is still an issue and I just didn't update the syntax in the jsbin correctly to test it.
Outer Scope: {{*bar}}
<my-component {^foo}="*bar">
Light DOM: {{*bar}}
</my-component>
http://jsbin.com/zegotuwope/edit?html,js,output
It's also an issue in examples like this (bar changing does not update foo):
<foo-bar {(foo)}="*shared">
<fiz-biz {(bar)}="*shared"></fiz-biz>
</foo-bar>
Still fixable by adding a template, even template: can.stache('<content/>').
I thought this should fix it: https://github.com/canjs/canjs/issues/2029
Do you have 2.3.1?
Ah, it is fixed in 2.3.1. I usually just test in jsbin with minor and assumed that would be up to date. Next time I'll use a specific version.
I'll upgrade now. Thanks!
minor is not being used right now. master is where everything goes until probably we start putting things in the major (3.0) branch.
|
GITHUB_ARCHIVE
|
Welcome to the Jungerl!
Sat Feb 22 02:03:09 CET 2003
There was once a programmer who worked
upon microprocessors. ``Look at how
well off I am here,'' he said to a
mainframe programmer who came to visit,
``I have my own operating system and
file storage device. I do not have to
share my resources with anyone. The
software is self-consistent and
easy-to-use. Why do you not quit your
present job and join me here?''
The mainframe programmer then began to
describe his system to his friend,
saying ``The mainframe sits like an
ancient sage meditating in the midst of
the data center. Its disk drives lie
end-to-end like a great ocean of
machinery. The software is as
multifaceted as a diamond, and as
convoluted as a primeval jungle. The
programs, each unique, move through the
system like a swift-flowing river. That
is why I am happy where I am.''
The microcomputer programmer, upon
hearing this, fell silent. But the two
programmers remained friends until the
end of their days.
-- The Tao of Programming
Though I wasn't around in the mainframe days, there's something very
appealing about the idea of a completely shared computer: when you
write a program, or add a feature, everyone else has it
automatically. You'd just send a mail that "The 'make' program can now
do foo", and everyone could immediately update their makefiles to do
some foo. You'd improve things directly instead of sending patches,
always have the latest and greatest versions without the need to
download tarballs, have a large set of installed programs and
libraries that you can use, and perhaps get that joyous feeling of a
"smoothly running anarchy." Lots of things might break on occasion,
but a community of hackers can all fix them. This seems to be the type
of environment where, for example, Emacs grew from a few TECO macros
towards what it is today .
I think it all makes a nice picture, and would like try hacking Erlang
programs in this way. To this end, I've created a new sourceforge
project called the 'jungerl': "A Primeval Jungle of Erlang code". It
is quite simply a new CVS tree that I have added many of the existing
Erlang User Contributions to, and that anyone who wishes can have full
developer access on.
I'm using Stewart's Law of Retroaction , so I have already imported
a lot of User Contributions that I'm interested in - some by me, but
most by other people. I have also added every Erlang hacker's
sourceforge account that I could find with full administrator rights,
so they can in turn add anyone else who wants in. You can get into at
It's my hope that this Wiki-style common program repository will lead
to good things for the programs in it, and will give them better
open-source hacking infrastructure than one would bother to create for
them individually. I invite everyone to use and improve the programs
that are imported and to add any other programs that they want (see
the README for how.) We can use the Erlang Wiki to communicate about
additions and changes:
If I've stepped on anyone's (or everyone's) toes by doing something
with their program that they don't want, just let me know and I will
make things right :-)
That is as far as I have thought the whole thing through.
The programs I've imported so far are:
enfs: Mini NFSv2 server
ermacs: Emacs-like editor
slang: S-Lang terminal driver (slightly extended version from Ermacs)
lersp: Mini scheme-like interepreter
msc: Miscellany (e.g. Tobbe's syslog client)
rpc: The SUNRPC library (by tony, scott @sendmail, martin, &co)
tuntap: A linked-in driver for Linux TUN/TAP network devices (fun!)
xmerl: Ulf's famous XML parser
xmlrpc: Jocke's XML-RPC library
I have completely left alone programs like Yaws that already have a
full development system established.
With these programs I have made a simple unified autoconf and Makefile
setup, which let me delete a large amount of mututally-redundant or
hard-coded makefile code across the various projects, and adds some
extra consistency and (hooks for) portability. In the process I have
somewhat changed the way some programs build, but I think the overall
effect of integration is good.
So far I have only built it on Linux and FreeBSD (using GNU make), so
there may be some 'configure'-hacking for other platforms.
If you want developer access, just add your sourceforge account name
to the Wiki page. If you already have access and you see names on that
page, please add them with administrator privileges.
The Jungerl README is attached.
: Stewart's Law of Retroaction: It is easier to get forgiveness
(How's *that* for a crackpot post, Klacke? :-))
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
More information about the erlang-questions
|
OPCFW_CODE
|
Foundations (F) Session 8
Time and Date: 13:45 - 15:30 on 22nd Sep 2016
Room: A - Administratiezaal
Chair: Peter Emde Boas
|161|| Hidden geometric correlations in real multiplex networks
Abstract: Real networks often form interacting parts of larger and more complex systems. Examples can be found in different domains, ranging from the Internet to structural and functional brain networks. Here, we show that these multiplex systems are not random combinations of single network layers. Instead, they are organized in specific ways dictated by hidden geometric correlations interweaving the layers. We find that these correlations are significant in different real multiplexes, and form a key framework for answering many important questions. Specifically, we show that these geometric correlations facilitate: (i) the definition and detection of multidimensional communities, which are sets of nodes that are simultaneously similar in multiple layers; (ii) accurate trans-layer link prediction, where connections in one layer can be predicted by observing the hidden geometric space of another layer; and (iii) efficient targeted navigation in the multilayer system using only local knowledge, which outperforms navigation in the single layers only if the geometric correlations are sufficiently strong. Importantly, if optimal correlations are present, the fraction of failed deliveries is mitigated superlinerarly with the number of layers, suggesting that more layers with the right correlations quickly make multiplex systems almost perfectly navigable. Our findings uncover fundamental organizing principles behind real multiplexes and can have important applications in diverse domains, ranging from improving information transport and navigation or search in multilayer communication systems and decentralized data architectures, to understanding functional and structural brain networks and deciphering their precise relationship(s), to predicting links among nodes (e.g., terrorists) in a specific network by knowing their connectivity in some other network.
|Kaj-Kolja Kleineberg, Marian Boguna, M. Ángeles Serrano and Fragkiskos Papadopoulos|
|164|| When is simpler thermodynamically better?
Abstract: Living organisms capitalize on their ability to predict their environment to maximize their available free energy, and invest this energy in turn to create new complex structures. For example, a lion metabolizes the structure of an antelope (destroying it in the process), and uses the energy released to build more lion. Is there a preferred method by which this manipulation of structure should be done? Our intuition is “simpler is better,” but this is only a guiding principal. By formalizing the manipulation of patterns – structured sequences of data – this intuitive preference for simplicity can be substantiated through physical reasoning based on thermodynamics. Using techniques from complexity science and information theory, we consider devices that can manipulate (i.e. create, change or destroy) patterns. In order to operate continually, such devices must utilize an internal memory in order to keep track of their current position within the pattern. However, the exact structure of this internal memory is not uniquely defined, and all choices are not equivalent when it comes to their thermal properties. Here, we present the fundamental bounds of the cost of pattern manipulation. When it comes to generating a pattern, we see indeed that the machine with the simplest memory capable of the task is indeed the best choice thermodynamically. Using the simplest internal memory for generation grants the advantage that less antelope needs to be consumed in order to produce the same amount of lion. However, contrary to intuition, when it comes to extracting work from a pattern, any device capable of making statistically accurate predictions can recover all available energy from the structure. This apparent paradox can be explained by careful consideration of nature of the information-processing tasks at hand: namely, one of logical irreversibility. [See also arXiv:1510.00010.]
|Andrew Garner, Jayne Thompson, Vlatko Vedral and Mile Gu|
|443|| Fluctuations of resilience in complex networks
Abstract: Recently Gao et al. showed that classes of complex networks could be described in a universal way. In particular it was stated that the dynamics of a complex network consisting of many nodes and links is governed by a one-dimensional effective dynamical equation, which was obtained by averaging over all network configurations. In this paper we address the question how well the averaged effective equation describes classes of networks by numerical calculation of variances in dynamics. It appears that huge variances in the dynamics can arise. To examine the consequences of our work to practical situations, we apply our findings to specific networks occurring in transport and supply chains. References Jianxi Gao1, Baruch Barzel, Albert-László Barabási, Universal resilience patterns in complex networks, Nature 530, 307 (2016).
|209|| Local mixing patterns in complex networks
Abstract: Assortative mixing (or homophily) in networks is the tendency for nodes with the same attributes, or metadata to link to each other. For instance in social networks we may observe more interactions between people with the same age, race, or political belief. Quantifying the level of assortativity or disassortativity (the preference of linking to nodes with different attributes) can shed light on the factors involved in the formation of links in complex networks. It is common practice to measure the level of assortativity according to the assortativity coefficient, or modularity in the case of discrete-valued metadata. This global value is an average behaviour across the network and may not be a representative statistic when mixing patterns are heterogeneous. For example, a social network that spans the globe may exhibit local differences in mixing patterns as a consequence of differences in cultural norms. Here, we present a new approach to localise these global measures so that we can describe the assortativity at the node level. Consequently we are able to capture and qualitatively evaluate the distribution of mixing patterns in the network. We develop a statistical hypothesis test with null models that preserve the global mixing pattern and degree distribution so that we may quantitatively determine the representativeness of the global assortativity. Using synthetic examples we describe cases of heterogeneous assortativity and demonstrate that for many real-world networks the global assortativity is not representative of the mixing patterns throughout the network.
|Leto Peel, Jean-Charles Delvenne and Renaud Lambiotte|
|446|| Message passing algorithms in networks and complex system
Abstract: We will sketch an algorithmic take, i.e. message-passing algorithms, on networks and its relevance to some questions and insight in complex systems. Recently, message-passing algorithms have been shown to be an efficient, scalable approach to solve hard computational problems ranging from detecting community structures in networks to simulating probabilisitic epidemic dynamics on networks. The objective of the talk is two fold. On on hand, we will discuss how the non-backtracking nature of message-passing avoids an “echo-chamber effects” of signal flow and thus makes a good tool to consider for problems in networks. On the other hand, we will also argue why insight gained from algorithms are equally important when exploring questions at the boundaries of scientific studies, such as networks and complex systems.
|
OPCFW_CODE
|
Regular expression is to the rescue. Here I am going to walk you through how we can come to the final solution
grep -v # /etc/squid/squid.conf
This will give you those lines without (-v) the occurrence of '#', but this will miss lines such as "acl Safe_ports port 80 # http"
egrep -v '^#' /etc/squid/squid.conf
This "Extended Grep" is able to understand regular expression in the pattern. ^ is an anchor and it represents the start of the line. '^#' means matching lines start with #. What if my setting starts with a blank space and follows by the comment
egrep -v '^[ \t]*#' /etc/squid/squid.conf
Anything inside the square bracket matches a single character that is contained within the brackets. In our case, the character set is a space and a tab. Since we cannot represent a tab as a literal character, we have to represent it as a escape sequence "\t". [ \t]* matches the preceding element (blank space) zero or more times. Although we can get rid of the comment, we still have a lot blank lines to deal with.
egrep -v '^[ \t]*#' /etc/squid/squid.conf | egrep -v '^$'
How about taking advantage of a pipe to run through the previous step's output and apply another 'egrep' to get rid of the blank line. ^$ are anchors, start of line and end of line, i.e. no character in the line. Ok, that's what we want, but can we do with just a single egrep. Of course we can.
egrep -v '(^[ \t]*#|^$)' /etc/squid/squid.conf
With the ability of grouping "()" and choice "|", we are telling egrep that match either comment or blank line. What if the blank lines are not really blank, but contains spaces or tabs
egrep -v '(^[ \t]*#|^[ \t]*$)' /etc/squid/squid.conf
This will do the job!
If your command understands POSIX compliant regular expression, you can write it in a more compact syntax:
egrep -v '(^\s*#|^\s*$)' /etc/squid/squid.conf
\s is equivalent to [ \t\r\n\v\f], this character set is called whitespace characters (space, tab, carriage return, newline, vertical tab, form feed)
Regular expression is definitely your life saver if you need to manlipulate data. Do you know that lots of other commands have regular expression support built-in. Run this to find out what commands(1) has this support:
cd /usr/share/man/man1 for i in *gz do zgrep -li regexp $i done
BTW, sed (stream editor) can do the same job but without applying an inverted match (-v):
sed -e '/^\s*#/d;/^\s*$/d' /etc/squid/squid.conf
|
OPCFW_CODE
|
test/streaming.cc takes minutes to compile
This file: https://github.com/3rdparty/eventuals-grpc/blob/main/test/streaming.cc
Takes several minutes to compile as part of this build:
bazel test //test:grpc
Sample output from a not-yet-done bazel build with default compilation option (5 minutes and counting!):
[22 / 24] Compiling test/streaming.cc; 307s processwrapper-sandbox
My wild guess as to what's going on: we have some long eventual chains in this file. Does that result in slow compilation times due to heavy template nesting? If so, here's a resource on how to profile time spent in the compiler: https://stackoverflow.com/questions/15818281/profiling-template-metaprogram-compilation-time
It looks like we're building using GCC. I wonder if Clang's any faster?
@benh in case this is something you've looked into before
This file: https://github.com/3rdparty/eventuals-grpc/blob/main/test/streaming.cc
Takes several minutes to compile as part of this build:
bazel test //test:grpc
Sample output from a not-yet-done bazel build with default compilation option (5 minutes and counting!):
[22 / 24] Compiling test/streaming.cc; 307s processwrapper-sandbox
My wild guess as to what's going on: we have some long eventual chains in this file. Does that result in slow compilation times due to heavy template nesting? If so, here's a resource on how to profile time spent in the compiler: stackoverflow.com/questions/15818281/profiling-template-metaprogram-compilation-time
It looks like we're building using GCC. I wonder if Clang's any faster?
My first attempt at compiling with clang fails:
CC=/usr/bin/clang bazel test //test:grpc
Emits errors of the form:
ERROR: /home/alexmc/.cache/bazel/_bazel_alexmc/e5e83162f61030880de4b3e9d73ac179/e[21/1370$
pb/upbc/BUILD:19:10: Compiling upbc/message_layout.cc [for host] failed: (Exit 1): clang f
ailed: error executing command /usr/bin/clang -U_FORTIFY_SOURCE -fstack-protector -Wall -W
thread-safety -Wself-assign -Wunused-but-set-parameter -Wno-free-nonheap-object -fcolor-di
agnostics -fno-omit-frame-pointer -g0 -O2 ... (remaining 51 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox
In file included from external/upb/upbc/message_layout.cc:2:
In file included from external/upb/upbc/message_layout.h:6:
In file included from external/com_google_absl/absl/container/flat_hash_map.h:40:
In file included from external/com_google_absl/absl/container/internal/hash_function_defau
lts.h:56:
In file included from external/com_google_absl/absl/strings/cord.h:78:
external/com_google_absl/absl/functional/function_ref.h:124:16: error: definition of impli
cit copy constructor for 'FunctionRef<void (absl::string_view)>' is deprecated because it
has a user-declared copy assignment operator [-Werror,-Wdeprecated-copy]
FunctionRef& operator=(const FunctionRef& rhs) = delete;
^
external/com_google_absl/absl/strings/cord.h:1325:33: note: in implicit copy constructor f
or 'absl::FunctionRef<void (absl::string_view)>' first required here
return ForEachChunkAux(rep, callback);
^
1 error generated.
Target //test:grpc failed to build
Good news: this is a problem in an old version of abseil that was fixed in June 2021: https://github.com/abseil/abseil-cpp/commit/702cae1e762dc6f2f9d31777db04e1adbdb36697
... bad news, we explicitly import a super old (March 2021) version of abseil to workaround some gRPC problem: https://github.com/3rdparty/eventuals-grpc/blob/b54e140632ad4c1c4abf027f0bd02219372e47c5/bazel/deps.bzl#L35-L48
Good news, there are much newer gRPC releases! We're currently using 1.40.0 (released 2021/09/06), and 1.44.0 just came out on 2022/02/14!
... bad news, 1.44.0 didn't compile when I tried it.
So clang might involve a lot of rabbit-holing to get working. In the mean time, the question still stands: why does gcc take so long to compile test/streaming.cc?
From an offline discussion with @benh : a likely hypothesis is that the long eventuals chains in streaming.cc are indeed causing slow compilation times.
One workaround he proposes is to split streaming.cc into many files (one file per test) so there's at most one long eventual per test (a similar thing was done for eventuals tests in https://github.com/3rdparty/eventuals/pull/182 ).
|
GITHUB_ARCHIVE
|
Virtualized Desktop Infrastructure (VDI) provides organizations with a way to deliver managed desktops, lower desktop support costs and keep critical desktop sessions running out of a secure data-center. However, the wide range of applications users must use to complete tasks must still be managed. Flexibility beyond VDI automated pools is needed to deploy applications to many users without constant changes to VDI desktop templates Desktop as a Service (DaaS) delivered VDI capabilities from cloud service providers providing deployment flexibility while still maintaining the complexity of application delivery.
Numecent Cloudpaging allows for applications to be delivered to user desktops natively. Cloudpaging simply uses the cloud to deliver a “cloudified” version of an application from a secure encrypted cloud container. The key benefits are:
- Lower IT Costs: IT resources and time needed to onboard new user applications and updates are lowered significantly
- Scalability: Can support high number of users from a single server
- Improved User Experience: Applications deploy quicker: 20-100x faster than download
- Security: Application is encrypted and compressed before delivery
- Compliance: Ensure latest versions of applications which must be used for some customer segments (Medical, Government, Financial) are always available to users
- Meta-licensing: Complete session data to manage users which have access to applications
Numecent Cloudpaging can apply not just to physical desktops but also to VDI users. VDI provides the ease of desktop and OS management while Cloudpaging manages effective application delivery and management.
VDI Benefits and Challenges for Application Delivery
Adding all applications to a VDI deployment user base can add complexity. Each time an application is updated the master pool image must be updated or the application streaming package must be updated. Creating different user pool templates for different user types (e.g. Workstation CAD user, Accounting, Standard Knowledge Worker) can solve this problem to an extent but then adds more pool templates to manage.
As an example, a key application for users using CAD tools is updated, a process like this is followed:
- The VDI template is opened by an administrator
- Application is installed
- Application is validated to be functional by testing
- Master VDI template is applied to a given automated pool (or added to manual pool users)
- Roll out to users
- VDI users utilizing a common desktop pool – significant time spent to design the common desktop pool image
- Persistent VDI users require the app to be reinstalled
- Non-VDI users also then need to have the latest application manually installed
This process must be repeated among any other VDI templates that also use that application. There are methods to deploy applications as a packaged Virtual Disk but this involves adding an additional virtual disk and further tools to manage applications. All of this will still differ from VDI users and physical users.
Remote streaming the application can work for some users but this adds a complexity of streaming pixels which can work for most users but will be sub-par for extended WAN or mobility users.
Deploying applications in the same manner for both VDI and physical users is needed to limit desktop support burden. VDI templates should focus on core OS and apps that don’t change (desktop email, office suites, etc.). Applications from divergent ISV’s need update and deployment flexibility.
How Does Cloudpaging Help VDI?
Numecent Cloudpaging can help VDI solutions like VMware Horizon and Citrix Xen by removing the application deployment burden as a VDI administrative task and moving it to IT resources more aligned with application deployments. Core VDI templates focus on main OS images, security updates, service packs, etc along with key applications like Office suites and email applications that see limited changes over a VDI VM life-cycle.
User VDI sessions can then leverage the latest applications like CAD/CAE, DTP, Financial, Imaging, etc. from Cloudpaging. This allows for flexibility in application updates, additions of new applications, and removing of applications without modifying any VDI templates. This same Cloudpaging deployment methodology for applications can apply to VDI and Physical users that may be on some par WAN or branch office networks.
The Cloudpaging components incorporated with VDI separate the base OS and base apps VM template and applications running via Cloudpaging easing complex deployments where prior best practice may have been multiple templates and multiple pool types. Cloudpaged applications also provide for limited storage needs in the VM. Since applications are “cloudpaged” only an initial approximate 10% of the application is transferred to the Cloudpaging Player (local agent) on the users VM. As the user continues to use the application, further instructions are delivered on demand, a page at a time ensuring robust application
performance while never installing the application locally.
Applications can be mixed across pool templates, an example would be a business process application can be delivered to engineering, accounting and sales users who will be on different VDI pools. The same deployment process can then be used for physical users who may not have VDI access or for users with physical devices that need offline access (e.g. Notebook users who must work offline). Physical users also benefit from having a limited local cache of the application present at initial deployment with Cloudpaging, lowering initial app start time (vs full app download before Cloudpaging) and lowering storage needs on physical devices. Additional app features are delivered as needed when different application functionality is used.
The process to deliver applications with Cloudpaging is simple:
1. Install the application on a VM or Physical system running Numecent Cloudpaging Studio – this will capture the application, application dependencies, registry settings, etc.
2. Run the application within the Cloudpaging Studio to pre-fetch the initial start of application UI. Choose as well to cache any functions user may use just by using them in session.
3. Package the application and deploy as needed to users. User desktops simply need the Cloudpaging Player which can be added to a base VDI template.
In regards to security, VDI has distinct advantages if deployed properly. Ensuring data stays in the data-center and user sessions are more secure. Using Cloudpaging brings these security benefits to environments where physical users (like offline users) must be supported in addition to VDI users. Cloudpaged applications can be sandboxed to the point where even attempted to extract application data like saving files to local storage will result in no data access outside of
the sandboxed application. This provides assurance that both VDI users and the users you still support on physical devices have stringent security constraints.
Cloudpaging can help VDI deployments rein in application deployment while providing the same methodology to support non-VDI physical users further helping IT desktop support costs.
Numecent Cloudpaging provides application deployment ease on both VDI and physical device users. Cloudpaging removes the application deployment pain point from both VDI and physical user types.
Cloudpaging makes the most efficient use of all elements in the cloud computing ecosystem, utilizing proven technology in production today and supported for the future.
|
OPCFW_CODE
|
marathon-consul loses connectivity to marathon
Hi Everyone,
I don't see an open ticket on this, so I doubt anyone else is seeing this (and I cannot replicate on my five-node cluster at home). With 1.3.3 on my work cluster marathon-consul periodically loses connectivity to marathon with connection refused exceptions. marathon-consul restarts five times--each time logging that it is getting a connection refused from marathon--eventually staying in a connection refused state and therefore failing to capture marathon events from the event stream.
Maybe marathon-consul should continue to attempt to reconnect as opposed to getting into a state where it gives up?
--John
Thanks for reporting. I'll grep our logs to see if we experience similar problems. I think we have similar error when upgrading marathon and when we have network maintenance issues.
marathon-consul restarts five times--each time logging that it is getting a connection refused from marathon--eventually staying in a connection refused state and therefore failing to capture marathon events from the event stream.
Did marathon-consul stay in failing state? After 5 retires it should shut down. With some supervisor configured it should be restarted and fail again if problem wasn't fixed.
@janisz It restarted 4-5 times (I forget) and then remained up with the last logging statement being connection refused by marathon.
It shouldn't works like this. Can you share the logs? I'll try to reproduce it in test and fix it.
Sure thing, will do so when I get back in the office tomorrow
I've checked our logs and we have logged
Leader poll failed. Check marathon and previous errors. Exiting
AFAIR we don't have a situation when marathon-consul hangs in unconnected state. @tomez Do you recall this situation?/
Marathon-consul definitely shouldn't stay in disconnected state.
@janisz I cant remember that situation ever happened and I am pretty sure we would notice it because in that case our registrations would simply stop working after any marathon downtime (eg. update).
After janisz investigation I can't add anything more right now.
@hokiegeek2 if You could reproduce it, and provide log info we will investigate further.
@janisz Here's the log messages of interest:
time="2017-06-16T19:50:14Z" level=error msg="Error when parsing the event" error=EOF
time="2017-06-16T19:50:14Z" level=error msg="This should never happen. Not handled event type" EventType= error="Unsuported event type: "
time="2017-06-16T19:50:14Z" level=fatal msg="Unable to recover streamer" error="Get http://localhost:8080/v2/events?event_type=status_update_event&event_type=health_status_changed_event: dial tcp <IP_ADDRESS>:8080: getsockopt: connection refused"
USUAL STARTUP MESSAGES
time="2017-06-16T19:50:14Z" level=warning msg="Error on http request" Location="localhost:8080" Protocol=http error="Get http://localhost:8080/v2/events?embed=apps.tasks&label=consul: dial tcp <IP_ADDRESS>:8080: getsockopt: connection refused" statusCode="???"
time="2017-06-16T19:50:14Z" level=error msg="An error occured while performing sync" error="Can't get Marathon apps: Get http://localhost:8080/v2/events?embed=apps.tasks&label=consul: dial tcp <IP_ADDRESS>:8080: getsockopt: connection refused"
time="2017-06-16T19:50:15Z" level=debug msg="Leader detection disable"
time="2017-06-16T19:50:15Z" level=info msg=Listening Port=":4000"
time="2017-06-16T19:50:15Z" level=error msg="Unable to start streamer" error="Get http://localhost:8080/v2/events?embed=apps.tasks&label=consul: dial tcp <IP_ADDRESS>:8080: getsockopt: connection refused"
USUAL STARTUP MESSAGES plus bolded logging block repeated 3 times, and then on the third time logging stops at:
time="2017-06-16T19:50:20Z" level=error msg="Unable to start streamer" error="Get http://localhost:8080/v2/events?embed=apps.tasks&label=consul: dial tcp <IP_ADDRESS>:8080: getsockopt: connection refused"
No further log messages, app is hung, no marathon events for consul-tagged Marathon tasks are captured
@tomez Thanks for investigating! Just provided the salient log messages, just restarted marathon-consul and I'll confirm if the same scenario repeats
@hokiegeek2 Thanks for logs. I can confirm there is a bug. The log you provided comes from sse/sse_handler.go a couple lines above we have similar log but with Fatal.
The problem you described do not occur when Marathon Master detection is enabled because detector can't connect to marathon so the streamer is not created. In your case streamer is created but it can't connect to marathon so it waits forever for events to handle but the events won't come because it's not connected.
We should not swallow connection error but propagate it to a caller and handle in main.go. We will provide a patch for this shortly.
@janisz That's awesome, thanks!
@janisz @tomez So I am noticing that we will get periodic connection refused errors in our Marathon cluster nodes. I see from the logging timing that marathon-consul attempts to connect very quickly--the first couple apparently within a second, which appears to be within the window of time that marathon refuses connections.
Perhaps what is needed is a configurable retry time interval for marathon connection failures?
@hokiegeek2 Could you please create separated ticket for configurable retry time interval / reconnect policy?
@janisz will do!
@janisz This works great! I deployed the patch and either marathon-consul recovers from loss of connectivity or exits after four failed tries. Since I am running marathon-consul as a service, it simply restarts and continues working. Thanks for implementing this patch so quickly.
|
GITHUB_ARCHIVE
|