Right now, I'm in Buenos Aires for IETF 95 where, amongst others, an Internet-Draft authored by Eric Vyncke, Antonios Atlasis and myself will be presented (and hopefully discussed) in two working groups. In the following I want to quickly lay out why we think this is an important contribution.
As some of you may remember, about two years ago we started an internal research project on the IPv6 "helper protocol" Multicast Listener Discovery (MLD) and its security properties. One outcome of this research project was Jayson Salazar's excellent thesis on the topic (the full document can be found here); another was the related talks we gave at DeepSec 2014 and at Troopers15.
When looking at MLD in practice (read: in a lab built with several Cisco devices and encompassing the main COTS host operating systems) we noted:
- that pretty much all of them had MLD enabled by default. We think the reason might be the somewhat ambiguous wording in RFC 4861, sect. 7.2.1, which states that "[j]oining the solicited-node multicast address is done [highlighted by ed.] using a Multicast Listener Discovery such as [MLD] or [MLDv2]". This could lead implementors to assume that MLD is strictly needed for the proper functioning of ND, which, as we showed here, might not be correct (see also RFC 6434, section 5.10, which seems to follow that interpretation).
- most OSs did not strictly adhere to all parts of the specifications and deviated from them here and there, e.g. by accepting MLD packets with a hop limit > 1. A combination of such issues meant that MLD packets could be sent to router interfaces from remote subnets, which is clearly not intended as per the core MLD specifications (RFCs 2710 and 3810).
- in general, MLD is susceptible to a number of attacks, incl. straightforward DoS attacks against devices (we managed to render a Cisco ASR 1002 unusable just by sending MLD packets from a single source) and amplification attacks on the local link. More details can be found in this post or the above sources.
Overall we think that some additional security controls might be needed to ensure network stability and performance in the face of MLD. We hence suggest
- to introduce a switch port based feature called “mld-guard” which blocks MLD queries (ICMPv6 type 130) from unauthorized sources, similar to RA guard.
- some modifications of MLD specifics, most notably no reception of MLD packets on unicast addresses (which RFC 3810 currently permits "for debugging purposes") and another way of handling the hop limit (send packets with hop limit 255 and discard all that don't have 255 upon reception).
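To make the two proposed receive-side checks concrete, here is a toy sketch (the function name and parameters are mine, not from the draft). The hop-limit check works because a packet originally sent with hop limit 255 can no longer have 255 after crossing even one router, so a value of 255 on receipt proves the packet originated on-link:

```python
# Toy sketch of the draft's two receive-side checks (names are illustrative).
def accept_mld_packet(hop_limit, dst_is_multicast):
    """Return True if an incoming MLD packet passes the proposed checks."""
    if hop_limit != 255:        # a forwarded packet cannot arrive with 255,
        return False            # so this proves on-link origin
    if not dst_is_multicast:    # no reception of MLD on unicast addresses
        return False
    return True

print(accept_mld_packet(255, True))   # → True
print(accept_mld_packet(64, True))    # → False: packet crossed a router
print(accept_mld_packet(255, False))  # → False: sent to a unicast address
```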
The draft can be found here and we’re happy to receive any type of feedback.
Have a great day
As is the case with most people involved in Open Source, I'm on IRC all day long. I can help people from around the world use some of the projects I've helped create, as well as some of the software that I use on a daily basis.
One of the major advantages that IRC has over your ‘traditional’ instant messenger clients is that, with a minimum amount of effort and hardware, you can create a setup that will remain perpetually¹ connected, even when you're not online. That means that you can keep logs of conversations, receive messages, catch up on what the current topic of discussion is when starting your day, and still be able to shut down your computer at night if you so choose.
There are a few ways of achieving the status of perpetual IRC denizen. Note, however, that almost any method requires that you have access to a remote server in addition to your local machine. It doesn't need to be anything special - a cheap VPS will do just fine, as long as you can install software packages and open the necessary ports in the firewall.
If you want to expend the least amount of effort, at the cost of some flexibility, the easiest method of achieving IRC immortality is with the use of a terminal multiplexer, such as screen or tmux (my personal favourite), and a console-based IRC client, such as irssi.
Don't know what a terminal multiplexer is? Here's a description, right from the screen homepage:
Screen is a terminal multiplexer: it takes many different running processes and manages which of them gets displayed to the user. Think of it as a window manager for your console or terminal emulator. With screen, you can have interactive processes running on your home computer and connect to them from anywhere else to check on their progress.
Let's assume that you've installed irssi on your remote server. What we want to do is start Irssi inside a screen session:
$ screen -S irc irssi # name the session `irc` and start `irssi` in it
At this point, feel free to browse the documentation on how to properly setup your preferred servers, channels and other configuration parameters for the IRC client. For a quick test, you can try the following at the Irssi prompt:
/set nick YourNickName
/connect irc.freenode.net
/join #some-interesting-channel
You can also automate the process, so that upon startup Irssi will auto-connect to the servers and channels you specify:
/server add -auto -network Freenode irc.freenode.net 6667
/network add -nick YourNickName Freenode
/channel add -auto #some-interesting-channel Freenode
If you have a registered nickname, you can have it auto-identify as well:
/network add -autosendcmd "/msg nickserv identify your_password_here ;wait 2000" Freenode
And you'll probably want to cut down on the noise:
/ignore #some-interesting-channel ALL -PUBLIC -ACTIONS
At this point, you should have a fully functioning IRC client inside of a screen session. You can detach from the current session by pressing Ctrl-a d and re-attach with:
$ screen -r
See where this is going? With this setup, you can simply maintain your IRC connection within a screen session on your remote server. When you want to "log on", you simply SSH into your remote server, run $ screen -r, and voilà! Perpetual IRC.
This approach, while simple, has some very obvious drawbacks:
- You must be on a machine that is able to SSH into your remote server.
- You are confined to using the command-line IRC client running in the remote session.
- Any scripts that attempt to interact with your 'local' desktop (e.g. Growl notifications), are painful to setup, if not impossible.
Okay, so it's not completely ideal. But it does the job - as long as your remote screen session remains operational, you'll be logged in to IRC.
In my next post, we'll look at how you can achieve the same results, but without any of the aforementioned drawbacks, through the use of a specialized IRC proxy daemon.
¹ Of course, if the remote server running the terminal multiplexer session goes down, it's not really ‘perpetual’.
Let us split this into two parts:
- It is important to understand the concept of matrices encoding transforms. This is not a particularly complicated thing once you get it. I know that this stuff looks intimidating, as most people have nothing to do with matrices in their daily work, but understanding this concept will make you much more flexible in what you do as a (technical) artist, not only in Python, but also in our scene nodes and even other products. But this is more of a tip from me.
- Your problem at hand: I am still not 100% sure what you want to do. If it is moving an axis, the topics How to Center the Axis of Objects and CAD Normal Tag flipped after Polygon merge (join) and axis repositioning - how realign? will likely give you a good starting point, as I provided commented code there.
If this still does not solve your problem at hand, I would ask you to provide a scene file, as that usually resolves these "explaining" problems the fastest (images also work).
Then when I see the 3rd image, I see the number "1" jump columns, which is the part that makes it confusing to me. I believe, from the dabbling I have done with matrices, that the default "1" is the scale, but I don't grasp why it doesn't just stay in one of the colored columns.
With 3rd image I assume you mean this here?
This is the identity matrix I talked about in my first posting; it encodes the world coordinate system and is what you get when you construct a matrix without any arguments. The stuff in red is the x-axis (1, 0, 0), green the y-axis (0, 1, 0), and blue the z-axis (0, 0, 1); these three vectors (red, green, and blue) form the basis of the transform. They are literally three axes.
Another time, when I read out the matrix of an object that had rotations on it, the matrix showed everything as 0 apart from those three default "1"s jumping columns.
So it appeared the matrix did not have any rotational values stored.
Again, this confused me.
When your object is not aligned with the world coordinate system, this is perfectly normal. So when, for example, the x-axis of your object is aligned with the y-axis of the world (and the object's z-axis is still aligned with the world z-axis), then the matrix will be
Matrix(v1: (0, 1, 0); v2: (-1, 0, 0); v3: (0, 0, 1); off: (0, 0, 0))
v1, the first component of the basis, i.e., the x-axis, is now equal to the world y-axis (0, 1, 0). The same applies to the y-axis of the object, which has now become equal to the inverse of the world x-axis (-1, 0, 0).
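If it helps, the example above can be checked with a few lines of plain Python. This is a sketch with hand-rolled vector math, not the c4d API; a matrix is just three basis vectors plus an offset, and transforming a point means weighting the basis vectors by the point's components:

```python
# Plain-Python sketch (not c4d): a transform as basis vectors v1, v2, v3 + offset.
def transform(point, v1, v2, v3, off=(0.0, 0.0, 0.0)):
    """Map a point from object space to world space: p' = x*v1 + y*v2 + z*v3 + off."""
    x, y, z = point
    return tuple(x * a + y * b + z * c + o
                 for a, b, c, o in zip(v1, v2, v3, off))

# The matrix from the posting: object x-axis equals the world y-axis, etc.
v1, v2, v3 = (0, 1, 0), (-1, 0, 0), (0, 0, 1)
print(transform((1, 0, 0), v1, v2, v3))  # → (0.0, 1.0, 0.0)
```

A point one unit along the object's x-axis lands one unit along the world y-axis, which is exactly what v1 = (0, 1, 0) encodes.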
To be open and honest here: I'm a visual artist who has started coding to make internal company tools, and while I have an OK understanding of math, some things like this I can't yet visualize for myself, so I don't fully understand it yet.
We understand that, and there is no rush from our side for you to come up to speed. However, we do not prioritize explaining such fundamental concepts here in the SDK Team, as this is a boundless topic: there are simply too many math and computer science concepts behind our APIs to explain them all. For now, this reduced form of the Matrix manual must suffice. If you follow one of the educational courses on linear algebra, they will usually also provide a geometric explanation of the math (i.e., something visual).
A lesson in “read the fine print”!
Last week I chronicled my frustrating journey to add a column to a DataFrame and fill the new column by extracting a float value from a string of text in another column. This week, using the same DataFrame about books from Machine Hack, which has 6237 entries with a target variable of ‘Price’ and 8 features, I decided to turn my attention to the ‘Ratings’ column.
Here’s a df.sample(3) of the DataFrame with the added column, ‘fReviews’, from last week:
My plan this week with the ‘Ratings’ column was to extract the number and put it into a new column, ‘iRatings’. As is the case with most of my coding ideas — easier said than done. I encounter two small problems right off: changing data types and using complex regex statements. The ‘Ratings’ column is of type object. Even when I try to change it — and set it equal to itself (an important lesson learned from last week!) — it still doesn't change.
Moving on to my other problem… My attempt to extract the number out of the ‘Ratings’ column fails because my regex skills are shaky when there is a comma in the number! Therefore, I’ll remove the commas in the ‘Ratings’ column to make the regex extraction easier.
In order to confirm that commas are successfully replaced, I first create a list of the ‘Ratings’ to see how many entries in the column have a comma in them:
Check to see if there are commas with my comma checker function:
Yes, indeed there are 20 ‘Ratings’ with a comma.
Indulge me a slight sidetrack at this point — use of str.replace() with a text string. The syntax is:
str.replace(old, new[, count])
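A quick illustration of the plain-string version (the sample string is made up):

```python
s = "1,234,567 ratings"
print(s.replace(",", ""))     # → '1234567 ratings'
print(s.replace(",", "", 1))  # → '1234,567 ratings' (count caps the replacements)
```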
Onward…the syntax for df.replace() is slightly different with more arguments:
DataFrame.replace(to_replace=None, value=None, inplace=False, limit=None, regex=False, method=’pad’)
My first, second, and third attempts to use df.replace() aren’t successful. (This is déjà vu from last week, and Einstein’s definition of insanity.) Here are my three attempts:
After a bit of searching on Stack Overflow, in small print towards the bottom in pink highlight, is a very helpful clue:
Adding the “.str” is the key! After making the small addition, replacing the commas with no space, creating a new list, and checking for commas finally works:
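A condensed sketch of the fix (not the post's actual code), using a tiny made-up frame in place of the real book data, with the column names assumed to match the post:

```python
import pandas as pd

# A tiny made-up frame standing in for the book data.
df = pd.DataFrame({"Ratings": ["5,642 customer reviews", "23 customer reviews"]})

# The fix: go through the .str accessor, and assign the result back.
df["Ratings"] = df["Ratings"].str.replace(",", "", regex=False)

# With the commas gone, a simple regex pulls the count out as an integer.
df["iRatings"] = df["Ratings"].str.extract(r"(\d+)", expand=False).astype(int)
print(df["iRatings"].tolist())  # → [5642, 23]
```

Note the reassignment (`df["Ratings"] = …`): the `.str` methods return a new Series rather than modifying the column in place, which is exactly the "set it equal to itself" lesson from last week.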
Now, I can simply extract the number from the ‘Ratings’ column and create a new column, ‘iRatings’.
And create a rudimentary histogram of values:
But using the df.describe() may be more helpful to see the data:
Actually seeing the distribution of the number of ratings for each book is not the goal. The goal is to continually improve my skills — mission accomplished!
|Description:||enhanced mode (graphics mode) video driver|
The Egfx module (pronounced "E-graphics") is the "enhanced-mode" driver for the 30x7 LED display. Egfx lets you work with any number of graphics buffers of arbitrary size, and map parts of them to parts of the physical screen. There is also support for color depth (shades of intensity). Drawing commands include drawing dots, lines, and rectangles in a variety of drawing modes, moving the cursor, moving the screen around within the buffers, and lots of other cool stuff. All commands are given through "graphics scripts", which can be sent on the fly or uploaded as resources. The graphics commands are documented here.
Before using Egfx, you probably want to read the Beginner's Guide to Egfx, which lays everything out a little more clearly.
cokerr Egfx_QueryVersion (char* versionstring)
returns the version string.
cokerr Egfx_SetMode (byte numColors, byte refreshRate, byte bufferRezID)
disables Egfx if numColors is 0. Otherwise, starts up Egfx with a color palette of numColors (must be 2 or greater), a refresh rate of refreshRate hertz, and an initial drawing buffer with resource ID bufferRezID. The buffer should be created beforehand with Egfx_CreateBuffer.
cokerr Egfx_RestoreMode ()
re-enables Egfx if it had been disabled. Unlike SetMode, which initializes parameters and the screen, RestoreMode simply gives Egfx control of the screen again, without performing initialization of any kind. The command will fail if SetMode hasn't been called in the first place.
cokerr Egfx_CheckBufferSpace (byte width, byte height)
verifies that there is enough free memory to create a graphics buffer of the given size. Fails if there is not enough memory.
cokerr Egfx_CreateBuffer (byte width, byte height, byte* query)
creates a graphics buffer of the given size and returns the resource ID of the new buffer in *query. You may free the resource yourself later if you are sure that it is no longer being used.
The following command is special and will be implemented in some special way in Libcoke.
special Egfx_ScriptInline (byte script)
executes an inline graphics script. The script must end with a zero byte; it terminates when a zero (or any other unrecognized command) is received as a command character.
cokerr Egfx_ScriptResource (byte rezID)
executes a graphics script stored as a resource. The script must end with a zero byte.
cokerr Egfx_StartAutoScript (byte rezID, word speed)
executes the given graphics script resource every (speed * 8) milliseconds. If this command is called while AutoScript is already active, it replaces the old AutoScript. (If you want multiple independent AutoScripts for some reason, use Coke_SetScriptTimer. In this case, be sure to use a command script that uses Egfx_ScriptResource to execute a graphics script. Don't get your scripts mixed up!)
cokerr Egfx_StopAutoScript ()
terminates the current AutoScript, if any.
SetColor (byte color)
sets the paint color to given value
SetDelta (signed byte deltavalue)
sets paint delta to given value
SetBaseline (byte baseline)
sets baseline to given value
SetDrawingMode (byte dmode)
sets drawing mode to the given value. dmode should be one of the following:
|DMODE_NONE||draw nothing; just move cursor|
|DMODE_COPY||copy the current paint color to pixel location|
|DMODE_BLEND||average the current paint color with existing pixel color|
|DMODE_SHADE||increase/decrease existing pixel color by delta value|
|DMODE_INVERT||invert pixel color between baseline and baseline + numcolors|
|DMODE_ERASE||copy baseline value to pixel location|
SetAutoMove (byte automove)
sets the action that should be taken after a drawing command (Dot, Xline, etc.) is completed. automove can be one of the following:
|AUTOMOVE_NONE||after drawing, do not move the cursor|
|AUTOMOVE_LEFT||after drawing, move the cursor to the left|
|AUTOMOVE_RIGHT||after drawing, move the cursor to the right|
|AUTOMOVE_UP||after drawing, move the cursor up|
|AUTOMOVE_DOWN||after drawing, move the cursor down|
|AUTOMOVE_SCRIPT||if automove >= AUTOMOVE_SCRIPT, the graphics script with resource ID (automove - AUTOMOVE_SCRIPT) is executed after each drawing command. The AutoMove script should not call any drawing commands, because each drawing command would invoke the AutoMove script again, resulting in an infinite loop.|
SetPan (byte pan)
sets the pan setting to pan, which should be one of the following:
|PAN_LEFT||move visible screen left (appears to shift right)|
|PAN_RIGHT||move visible screen right (appears to shift left)|
|PAN_UP||move visible screen up (appears to shift down)|
|PAN_DOWN||move visible screen down (appears to shift up)|
Xset (byte x)
sets the x position of the cursor to the given value
Yset (byte y)
sets the y position of the cursor to the given value
Xjump (signed byte deltax)
moves the x position of the cursor by the given amount
Yjump (signed byte deltay)
moves the y position of the cursor by the given amount
CursorStore ()
remembers the current cursor position so it can be restored later
CursorRestore ()
sets the cursor position to the stored position
CursorStoreNum (byte register)
stores the cursor position in one of four registers. register must be between 0 and 3. CursorStore () is equivalent to CursorStoreNum ( 0 ). The cursor storage registers are properties of the buffer, so each graphics buffer has its own four registers.
CursorRestoreNum (byte register)
sets the cursor position to one of the buffer's four registers. register must be between 0 and 3.
Dot ()
draws a dot at the cursor position in the current drawing mode. The cursor moves according to automove.
Xline (signed byte deltax)
draws a horizontal line of the given width from the cursor. The cursor is left at the end of the line, plus automove.
Yline (signed byte deltay)
draws a vertical line of the given height from the cursor. The cursor is left at the end of the line, plus automove.
Frame (signed byte width) (signed byte height)
draws a rectangular frame of the given dimensions. The cursor is left unchanged except for automove.
Rect (signed byte width) (signed byte height)
draws a filled rectangle of the given dimensions. The cursor is left unchanged except for automove.
Char (byte char)
draws the given char in the current drawing mode. (not implemented yet)
CharLine (byte char) (byte column)
draws a vertical slice from the given column of the given char. (not implemented yet)
DumpResource (byte rezID)
copies data from the given resource to the drawing buffer (not implemented yet)
DumpRectResource (byte rezID, byte width, byte height)
copies data from the given resource to a section of the drawing buffer (not implemented yet)
MoveRect (byte width, byte height, signed byte deltax, signed byte deltay)
moves a section of the drawing buffer to another location (not implemented yet)
RangeSet (byte startingColumn, byte width, byte x, byte y)
sets the top-left of the given portion of the screen to the given position in the current graphics buffer
Pan ()
moves the visible screen according to the current pan setting
PanThis (byte pan)
moves the visible screen according to the given PAN_x value
PanRange (byte startingColumn, byte width)
moves a portion of the screen (a range of scanlines) according to the current pan setting
I'm going to disagree with some of my colleagues and suggest that the bank's solution is much safer than your proposed alternative, though it's not ideal either. Let's look at each solution and its attack profile.
Address, SSN, DOB, etc.
We should all be able to agree that these are absolutely terrible ideas. Facts make horrible passwords and should never be used. No exceptions. Literally no exceptions. If someone can research it, then you can't use it to authenticate yourself. Period. Nearly all security questions fall flat on this rule. It's embarrassing. This technique is exploited frequently, often with high-profile targets. We should know better but we keep using it. This needs to stop.
Your website password, over the phone
While not ideal, this solves the research problem. It can still be used with secure hashing, since the support rep should have to type in the password verbatim to verify it rather than read it off the screen. Your potential attacks are: (a) someone listening can re-use it to impersonate you over the phone, and (b) someone listening can use it on the website. Still, if more companies did this, we'd be a lot safer. We can do better, but at least you don't have to worry about someone looking up what street you grew up on or the last digits of your credit card number and then using that information to take over your account.
A static call-in PIN
This is GoDaddy's solution. You have a 4-digit number on your account that you must provide when you call support. It's better, as it solves problem (b) above -- you can't use the call-in PIN to log in to the website. But problem (a) remains: the PIN doesn't change unless you reset it, so an attacker who hears your conversation can impersonate you. Aside from the approach below, other "call-in passwords" tend to fall into this category and are susceptible to replay attacks.
A single-use pass-code
This is PayPal's solution, and it's the best one available. When you click "contact us" on your PayPal account, you're given a single-use, time-limited 6-digit pass-code that you must provide over the phone. It expires after 60 minutes or after you use it once, so an eavesdropper can't impersonate you. And it's random, so it can't be predicted. If you're going to implement a solution yourself, this is the way to do it. PayPal has been doing this for many years and it's worked for them so far.
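A minimal sketch of such a scheme (this is my own illustration, not PayPal's implementation; the function names and the in-memory store are invented for the example). The key properties are randomness, an expiry, and consume-on-use:

```python
# Sketch of a single-use, time-limited call-in code (illustrative, not production).
import secrets
import time

_codes = {}  # account_id -> (code, expiry timestamp); a real system uses a DB

def issue_code(account_id, ttl_seconds=3600):
    """Generate a random 6-digit code that expires after ttl_seconds."""
    code = f"{secrets.randbelow(10**6):06d}"
    _codes[account_id] = (code, time.time() + ttl_seconds)
    return code

def verify_code(account_id, attempt):
    """Check a code; a valid code is consumed so it cannot be replayed."""
    entry = _codes.get(account_id)
    if entry is None:
        return False
    code, expires = entry
    if time.time() > expires or not secrets.compare_digest(code, attempt):
        return False
    del _codes[account_id]  # single use: delete on successful verification
    return True
```

An eavesdropper who hears the code gains nothing: by the time they call in, the code has already been consumed (or has expired).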
The "tell me some facts about yourself" way of authenticating over the phone, as you suggested, really shouldn't be allowed; it's a shame so many companies do this. Giving your website password over the phone is much better (to be fair, anything is better), but it presents some problems. Authenticating online and then getting a one-time secret code -- that's a solution I can respect.
Note that the bank's willingness to fall back to asking for your SSN or address, etc., is the real scandal here. If you refuse to authenticate yourself securely, you should not be given the option to provide some facts about the victim you're trying to impersonate.
Windows 8 Keyboard Shortcuts
01/03/13 10:30 Filed in: Windows
A list of the more common Windows 8 shortcut keys.
I’m feeling helpful today, so here’s a quick summary of Windows 8 shortcut keys that you’ll find useful. Personally, I always tend to try and work with the keyboard rather than swapping to the mouse, so knowing these keys speeds up my work-flow.
Some of the more common favorite ones I use are detailed below.
When I say WK+, I mean hit the Windows Key PLUS whatever follows. So, for example, hitting the Windows key on its own brings up the Start screen.
If you’re on a Mac and using Parallels/Fusion, then the equivalent Windows key will be the CMD key or possibly fn+CMD depending on how you have your virtual machine set up. Some of the more useful ones are listed below. For a real comprehensive list, please see here.
WK + C
This opens the charms bar to the right of the screen and offers up contextual options depending on your application.
WK + M
This minimises all of your windows.
WK + Shift + M
This restores all of the windows you previously minimised.
WK + Q
This starts the search for Apps functionality - an odd choice I think!
WK + W
This opens the search settings.
WK + F
This opens the search for files window.
WK + Numeric Keys
This is a particular fave - this opens an application on your taskbar. 1 being the app to the left etc. Very useful!
WK + T
You can cycle through the available applications on your taskbar with this.
WK + X
This opens the tools menu.
More complete list below:
WK + spacebar Switch input language and keyboard layout
WK + O Locks device orientation
WK + Y Temporarily peeks at the desktop
WK + V Cycles through toasts
WK + Shift+V Cycles through toasts in reverse order
WK + Enter Launches Narrator
WK + PgUp Moves Metro style apps to the monitor on the left
WK + PgDown Moves Metro style apps to the monitor on the right
WK + Shift+. Moves the gutter to the left (snaps an application)
WK + . Moves the gutter to the right (snaps an application)
WK + C Opens Charms bar
WK + I Opens Settings charm
WK + K Opens Connect charm
WK + H Opens Share charm
WK + Q Opens Search pane
WK + W Opens Settings Search app
WK + F Opens File Search app
WK + Tab Cycles through Metro style apps
WK + Shift+Tab Cycles through Metro style apps in reverse order
WK + Ctrl+Tab Cycles through Metro style apps and snaps them as they are cycled
WK + Z Opens App Bar
What are the TamperProtectionSource values?
I see with the PowerShell command Get-MpComputerStatus that TamperProtectionSource = Signatures, sometimes ATP, and sometimes "E5 transitioning".
Is there any documentation about the different sources?
The other point is that my AVD devices get a greyed-out tamper protection option (with tamper protection off) once they are onboarded in Defender for Endpoint. When I try to turn it on through MEM, I get "not applicable" on that antivirus policy.
How do I get tamper protection to the On state without turning it on in the advanced settings in the security portal?
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: f31e7b39-b7c3-188a-0b2a-02ff10a5be6d
Version Independent ID: 44940a13-0a2d-e281-560b-cb4d8a5d1ccf
Content: Protect security settings with tamper protection
Content Source: microsoft-365/security/defender-endpoint/prevent-changes-to-security-settings-with-tamper-protection.md
Product: m365-security
Technology: mde
GitHub Login: @denisebmsft
Microsoft Alias: deniseb
A value of 3 = "E3 transition", whatever that means :)
@yogkumgit I have opened a work item (6329673) for this issue. Will update this issue when I know more.
Hello @yogkumgit. We've been waiting on information from the PM. We're on hold until we hear back from him.
Hello @schrx1964 and thank you for posting your feedback here. I finally have an update! The sources for configuring tamper protection are as follows:
The Microsoft 365 Defender portal (turn tamper protection on or off, tenant wide) - See https://learn.microsoft.com/microsoft-365/security/defender-endpoint/manage-tamper-protection-microsoft-365-defender
Intune (turn tamper protection on or off, tenant wide) - See https://learn.microsoft.com/microsoft-365/security/defender-endpoint/manage-tamper-protection-microsoft-endpoint-manager
Configuration Manager (with tenant attach, you can configure tamper protection for some or all devices by using the Windows Security experience profile) - See https://learn.microsoft.com/microsoft-365/security/defender-endpoint/manage-tamper-protection-configuration-manager
Windows Security app (for an individual device used at home or that is not centrally managed by a security team) - See https://learn.microsoft.com/microsoft-365/security/defender-endpoint/manage-tamper-protection-individual-device
We are adding this information to our FAQ for tamper protection: https://learn.microsoft.com/microsoft-365/security/defender-endpoint/faqs-tamper-protection
I hope this helps! Apologies for this taking a while to address. Please let me know if you still have questions.
The articles don't answer the original question. I'm experiencing the same issue on AVD devices: TamperProtectionSource = Signatures and IsTamperProtected = False! Tamper protection is switched on, so it should be environment-wide, and I'd expect it to be on with the source shown as ATP. The AVDs are configured via MECM without tenant attach for the Endpoint Protection settings.
Contact: David K. Clarke – ©
In the United Kingdom and Australia dates are usually written day, month, year; for example: 26/12/1986.
I believe that in China and Japan they write year, month, day; eg. 1986/12/26.
The USA has the most foolish system; they write month, day, year; eg. 12/26/1986.
The most significant digit (the one that indicates thousands) comes first, the next most significant digit (indicating hundreds) comes next, etc. So, following the same arrangement, it is logical to place the year first, the month second, and the day last. This system is increasingly being used in computer applications. If dates written in this format are put through an alpha-numeric sorting process they come out in chronological order.
I have used this way of writing dates widely in my pages.
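The sorting claim above is easy to verify: with zero-padded year/month/day strings, plain alphanumeric order is chronological order (the sample dates below are made up):

```python
# Alphanumeric sort of year/month/day strings is chronological.
dates = ["1986/12/26", "2009/04/03", "1986/01/05", "1999/11/30"]
print(sorted(dates))  # → ['1986/01/05', '1986/12/26', '1999/11/30', '2009/04/03']
```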
A rational way of writing the date
The oriental system for writing dates, then, is the only one that is logical. Take, for example, the date '1986/12/26'. Starting from the left: 1986 is the year, 12 the month, and 26 the day, with the most significant element first.
Date and Time
Extending the logic one step further, you can add numbers to indicate the time of day, in the format year/month/day hour:minute:second.
Your time zone offset, as set on your computer, can then be used to convert local time to Universal Time (Greenwich Mean Time).
(-:Metric Time :-)
Of course, to be completely rational, we should abolish all the strange month lengths, seven days in a week, 24 hours in a day, 60 minutes in an hour and have a completely metric time system. It could be based either on a standard second or perhaps on a year. We already have milliseconds and microseconds; we could have kiloseconds (about 17 minutes), megaseconds (about 12 days), gigaseconds (about 32 years – I am near enough 2 Gsec old as I write this [2009/04/03]); or milliyears (about 9 hours), microyears (about 32 seconds), kiloyears (millennia), megayears and gigayears (the earth is about 4.5 gigayears old).
The length of a day (one complete rotation of the earth on its axis and relative to the Sun [a rotation relative to the stars is about four minutes quicker]) varies, so is not well suited as a unit of time. The day is getting steadily longer (due to the gravitational effects of Moon and Sun) and changes also due to the relationship between water/snow being stored at different altitudes and the conservation of angular momentum. For example, heavy snow falls in the Himalayas places increased mass at a point further from the centre of the Earth and slows the planet's rotation; the melting of mountain glaciers due to climate change would be causing the planet to rotate a very little faster.
Astronomer's time, Julian Day
In spite of the above spiel on the variability of the length of a day, astronomers use a form of date based on the day, called the Julian Day. According to this calendar, each new Julian day begins at Greenwich mean noon (because the astronomers who developed the system wanted the whole night to have the same day number; it's easier for them). Note that the Julian day (proposed by Joseph Justus Scaliger in 1583, and named for his father, Julius Caesar Scaliger) has nothing to do with the Julian calendar (named for that other Julius, Caesar).
If my calculations are correct, your computer clock at any moment can be converted into the corresponding Julian Day number.
Don't ask me why Joe decided to start his calendar on January 1st, 4713 BC, although it was probably chosen early enough that all known astronomical records would have positive Julian dates.
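The conversion itself is a one-liner in Python (a sketch that ignores the time of day: the constant shifts Python's day-1 ordinal of 0001-01-01 onto Scaliger's epoch, giving the Julian Day value at Greenwich midnight):

```python
from datetime import date

def julian_day(d):
    """Julian Day value at Greenwich midnight for a Gregorian date (sketch)."""
    # date.toordinal() counts days with 0001-01-01 = 1; the offset places
    # day zero at noon, January 1st 4713 BC (proleptic Julian calendar).
    return d.toordinal() + 1721424.5

print(julian_day(date(2000, 1, 1)))  # → 2451544.5 (noon that day is JD 2451545.0)
```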
|
OPCFW_CODE
|
Restarting a user crashes the frontend
Feb 01 16:13:01 thingpedia1 node[14342]: Child with ID XXX exited with code 1
Feb 01 16:13:01 thingpedia1 thingengine-child-XXX[14450]: Stopping engine of XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Feb 01 16:13:01 thingpedia1 thingengine-child-XXX[14450]: Engine stopped
Feb 01 16:13:01 thingpedia1 node[14344]: Socket to user ID XXX closed
Feb 01 16:13:01 thingpedia1 thingengine-child-XXX[14450]: Engine closed
Feb 01 16:13:01 thingpedia1 node[14344]: POST /me/status/kill 303 9.134 ms - 68
Feb 01 16:13:01 thingpedia1 node[14344]: /opt/thingengine/node_modules/transparent-rpc/node_modules/q/q.js:155
Feb 01 16:13:01 thingpedia1 node[14344]: throw e;
Feb 01 16:13:01 thingpedia1 node[14344]: ^
Feb 01 16:13:01 thingpedia1 node[14344]: Error: Socket closed
Feb 01 16:13:01 thingpedia1 node[14344]: at RpcSocket.call (/opt/thingengine/node_modules/transparent-rpc/rpc_socket.js:168:29)
Feb 01 16:13:01 thingpedia1 node[14344]: at RpcProxy.(anonymous function) [as closeConversation] (/opt/thingengine/node_modules/transparent-rpc/rpc_socket.js:356:37)
Feb 01 16:13:01 thingpedia1 node[14344]: at WebSocket.ws.on (/opt/thingengine/routes/my_api.js:283:44)
Feb 01 16:13:01 thingpedia1 node[14344]: at emitTwo (events.js:131:20)
Feb 01 16:13:01 thingpedia1 node[14344]: at WebSocket.emit (events.js:214:7)
Feb 01 16:13:01 thingpedia1 node[14344]: at WebSocket.emitClose (/opt/thingengine/node_modules/ws/lib/websocket.js:172:10)
Feb 01 16:13:01 thingpedia1 node[14344]: at Socket.socketOnClose (/opt/thingengine/node_modules/ws/lib/websocket.js:781:15)
Feb 01 16:13:01 thingpedia1 node[14344]: at emitOne (events.js:116:13)
Feb 01 16:13:01 thingpedia1 node[14344]: at Socket.emit (events.js:211:7)
Feb 01 16:13:01 thingpedia1 node[14344]: at TCP._handle.close [as _onclose] (net.js:561:12)
Feb 01 16:13:01 thingpedia1 node[14343]: Socket to user ID XXX closed
Feb 01 16:13:01 thingpedia1 node[14343]: Failed to retrieve cached modules: Invalid user ID
Feb 01 16:13:01 thingpedia1 node[14343]: Socket to user ID XXX closed
The call to closeConversation is now protected, and I have not seen this crash recently. Closing.
|
GITHUB_ARCHIVE
|
I'm at Dad's house and, in this insomnia, have been looking through drawers for a letter that I received, literally half a lifetime ago, from someone who has been in the news recently. I didn't find it but there are other places to look yet.
However, I did find some exercise books from the late '80s and early '90s with pencil-on-paper designs for games and game systems: some RPG character attribute systems, a few adventures, but mostly pencil-and-paper designs that I might have got round to programming in Sinclair BASIC - or, later, AmigaBASIC - some day. It's unclear that any of them would stand up on their own merits, but fun personal nostalgia all the same.
A frequent theme is comparing games to each other in slightly unnecessarily complicated ways. One notebook has one particular comparison scheme for ZX Spectrum games, iterated several times in 1988, a couple of times in 1989, an "end of the decade special" and a retrospective edition from, gasp, 1997. Would it be fun to go back and create one more edition, comparing memories of games I haven't played for twenty years? Would it be worthwhile to run them through emulators to refresh my memory? Perhaps if I still can't get to sleep in another hour or two. Nevertheless, it's cute to know how I thought of them at the time, even if giving reasons to the scores was beyond me.
It's also a reminder - hopefully salutary, though I doubt it - that my taste in games has changed little if at all over the last two-thirds of my life. Ever since I became fully aware of the depth of the football pyramid system, I have loved the idea of a game where you manage a football club that starts in a hypothetical league made up of teams representing parts of Middlesbrough, then in successive leagues covering larger and larger areas of population still, onto the national stage and beyond.
What have I been doing on and off for the last couple of days when concentration has permitted? Why, trying to work out which teams would be in which leagues so that I might produce a customised data pack for the (pro version of the) Football Chairman management game for iOS and actually play out a much better version of the game that I idly dreamt of back then. Given that my pencil-and-paper notes from the previous version of the exercise had Liverpool as top team in the land, they probably date from late 1988, give or take, at a guess.
If you'd told me then that I would still have been actively interested in the concept when three times my age, I think tiny!Chris would have been amused. He would also have been impressed that my taste in computer games still hasn't grown beyond games that cost £2.99.
Please redirect any comments here, using OpenID or (identified, ideally) anonymous posting; there are comments to the post already. Thank you!
|
OPCFW_CODE
|
AS3/Flex: How to make mxml files loaded via ViewStack see their parent's variables, etc.?
For a project I'm working on in Flex I decided to create several separate files for each 'theme' that can be used. Since each theme can and will require specific code, images, styles and virtually anything else, the classical css-only option was not really possible.
I have one problem with this method, and that's that each 'child' mxml file can't read the variables and such created in the parent application. Using Application.application solves half the issues, but any kind of global variable solution seems to fail for me. And the code doesn't get all that much cleaner because of this either.
I created a class that is loaded in the main application with a static variable that I was using as the AS3-equivalent of global variables. Sadly, accessing this from these 'child' mxml files is not possible, I can only re-initiate the class, or write a wrapper function in the main application that fetches these variables. This, again, is anything but perfect and still leaves me with no decent way of using methods from classes that were initiated in the parent application.
What's the best way to get this to work well?
The rundown of the application:
1) Main application loads up several classes/packages, initiates a few with the proper settings and such
2) Main application has a ViewStack which has each theme coming from an external file (themes/ThemeName.mxml)
3) Each theme needs to access at least 2 variables set by the main application (by use of the classes and methods that were loaded), and some may also need to directly access certain features that should be globally available for both the main application, as-well as the specific theme mxml.
I hope my explanation is a bit clear. Please ask any questions that might help you understand this more. Thanks so much in advance!
-Dave
Small update, for a bit more clarification: I have a class that allows me to easily create a camera view. I initialize and use that class in the main application, which then puts the new (web)cam(era) instance in a variable, ready to be used anywhere it's needed. The view file (themes/theTheme.mxml) then displays 2 cameras in whichever way it wants. I want the view file to use the camera(s) created in the main application, so that I won't have to ask the theme creation guys to implement all that over and over. It's one example of why I need this.
Thanks for helping me so far!
Coming from a php background, I see where you might be going wrong. It's not like php where each file is just a big hunk-o-code that can be included into other php files.
Each mxml file is a full fledged Actionscript class. You're not "including those mxml files", you're actually creating properties in your main mxml class, and those properties' types are of the "child" mxml objects.
If your child mxml components really need some information from the parent, you should pass that data into them.
But step back a little bit. Are you sure you want those child mxml files doing all that work? Could they just simply be dumb layouts that don't have any logic in them? Then you could let the main mxml manipulate them.
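The advice above — keep the children dumb and hand them what they need — can be sketched in plain ECMAScript (hypothetical names; in Flex you would do this with MXML properties or setters rather than this literal code):

```javascript
// Hypothetical sketch of the answer's advice: the main application owns
// the shared objects (settings, camera instances) and injects them into
// each theme, instead of themes reaching up via Application.application.
class Theme {
  constructor({ settings, cameras }) {
    this.settings = settings; // handed in by the parent
    this.cameras = cameras;   // the same camera objects the parent created
  }
  render() {
    return `${this.settings.title}: ${this.cameras.length} camera view(s)`;
  }
}

const settings = { title: 'Default theme' };
const cameras = [{ id: 'webcam-1' }, { id: 'webcam-2' }];
const theme = new Theme({ settings, cameras }); // injection happens here

console.log(theme.render()); // "Default theme: 2 camera view(s)"
```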
Oh, that sounds strangely logical. I wanted to make the template (child) items as dumb as I could, but for some reason it never occurred to me to 'inject' whatever I need from the main application into the child. I was kind of fixated on getting the child to read from the parent. I'll try this one, sounds very logical. Thanks!
Sounds like you're ready for a framework! You can use Dependency Injection to set things in your app as they are created.
Here are some frameworks that help with Dependency Injection.
Bifff - allows you to set things on any view by css selector
Glue - MVC framework based on Bifff
Mate - MVC framework
Note: I am the author of Bifff and Glue.
As an example, using bifff you can say the following (sets the "thePropertyFromMyFile" variable on any component with stylename="customStyleName")
<Selector match=".customStyleName">
<Set thePropertyFromMyFile="{myGlobalSettings.property}"/>
</Selector>
However, reading your description makes it seem like you are running straight into the need for a full framework for your application. MVC frameworks help you separate the data of your app (in your case, which display mode you are in) from the views, without requiring the views to call Application.application or a global singleton.
This will take quite a bit more research for you, but it is the right path. Any flex app can benefit from a framework, pick the one best for you.
Although I like the MVC approach, I'm not entirely sure if using a framework on top of Flex is the way to go for me (just yet). So far I'm still getting used to Flex, and so far the code feels anything but clean or comfortable. I'm a PHP guy myself, so that might have something to do with it. Thanks for the heads up, and I've bookmarked them all! I just can't believe there is no (easy) way for me to do this, surely I'm not the only one who has a setup like this :).
You can very easily do this. Think of it in OOP fashion: each mxml is a different class, and you want to call a method of one class from another class. If you want it to run in its original scope, make the method or variable static and call it with MXMLFileName.MethodName();
If you want to execute it in its own scope, make an object of the parent MXML and then call the method.
|
STACK_EXCHANGE
|
Please check the status of this specification in Launchpad before editing it. If it is Approved, contact the Assignee or another knowledgeable person before making changes.
Launchpad Entry: foo
To reduce the bottlenecks and time-zone based problems in obtaining Ubuntu membership from the Community Council, the Council will delegate membership approval to a number of regional bodies to approve members via weekly meetings.
The Community Council will be free to concentrate on project governance and will introduce a system of review for various areas of the project at each meeting.
The Ubuntu project is rapidly expanding and [http://www.ubuntu.com/community/processes/newmember the current process] for approval of new Ubuntu members is struggling to keep up with the increased participation. The list of pending membership applications is so long that the Community Council cannot focus on other issues. Also, it is often difficult or impossible for potential new members to attend Community Council meetings which do not coincide with their availability in a particular timezone.
This falls into two areas.
Three regional teams will be created for approving new Ubuntu members, in the following geographical areas:
- The Americas
- Europe, Middle East and Africa
- Asia / Oceania.
Each team should have 10 people on it and will hold weekly meetings in their own timezones (probably in the early evening), varying the day a little week by week.
A quorum of 3 or 4 people will be sufficient to hear a membership application. The CC will not hear membership applications except in contentious or disputed cases.
The result will be that meetings will be shorter and more convenient for potential new members to attend.
For the first six months, I propose we run it in a supervisory mode, where the results of the team meetings (approvals, that is) are passed up to us on this mailing list for approval. They would mail us a list of approved names, wiki and LP URLs, and one of us can look over it and approve the membership in LP. Issues can be flagged and passed back to the team. If that is going smoothly, then we can consider moving it to "production" mode where those teams can speak for us directly, approving memberships in LP.
JonoBacon or DennisKaarsemaker will be asked to draw up a regular review of project governance, identifying two or three areas of the project that the Community Council wants to talk about in each meeting. The leaders of the relevant teams will be invited in to the meetings for a public chat about their work. The Council will focus its attention on that, have community leaders write and blog about that part of the project, identify areas for improvement, and generally attempt to help that part of the project excel.
Equally, the CC will develop a much better understanding of the personalities and issues at play in each of those different aspects of the project.
|
OPCFW_CODE
|
Is pain comprised of the four great elements?
Is pain comprised of the four great elements? I believe it is, because it has hardness/softness, coolness/hotness, etc.
Pain is part of the Feeling (vedana) aggregate, while the 4 great elements are part of the Form (rupa) aggregate. More details about the Five Aggregates on https://en.wikipedia.org/wiki/Skandha
I think it is better to speak of awareness or sensation rather than pain. The brain gets information from the sense organs. Mind is at a macro level compared with bio-chemical (micro level) signals, so the mind feels hardness/softness, coolness/hotness, etc.
This is how the mind maps reality onto experience: without physical contact no real sensation arises. But in dreams and in trance the mind can imagine anything as real.
How telepathy and extra sensations work is beyond this topic; they may be lower-level communications.
Buddhism's main aim is not scientific analysis of the physical body, but how mind (consciousness) is created, processed and continued, and how actions ('kamma') affect it, studied using meditation as a tool,
as shown in the MahaSathipattana Sutta:
“This is a one-way path, monks, for the purification of beings, for
the overcoming of grief and lamentation, for the extinction of pain
and sorrow, for attaining the right way, for the direct realisation of
Nibbāna, that is to say, the four ways of attending to mindfulness.
Which four?
Here, monks, a monk dwells contemplating (the nature of) the body in
the body, ardent, fully aware, and mindful, after removing avarice and
sorrow regarding the world.
He dwells contemplating (the nature of) feelings in feelings, ardent,
fully aware, and mindful, after removing avarice and sorrow regarding
the world.
He dwells contemplating (the nature of) the mind in the mind, ardent,
fully aware, and mindful, after removing avarice and sorrow regarding
the world.
He dwells contemplating (the nature of) things in (various) things,
ardent, fully aware, and mindful, after removing avarice and sorrow
regarding the world.
In this context all physical things are considered as 'Rupa', and they are assumed to consist of the 4 'Maha Butha' ('Datu'): Patavi, Apo, Tejo, Vayo.
The main advantage of this method is that it reduces the complexity of identifying "The Body" exactly, not more, not less ("body as body").
Thus he dwells contemplating (the nature of) the body in the body in
regard to himself, or he dwells contemplating (the nature of) the body
in the body in regard to others, or he dwells contemplating (the
nature of) the body in the body in regard to himself and in regard to
others, or he dwells contemplating the nature of origination in the
body, or he dwells contemplating the nature of dissolution in the
body, or he dwells contemplating the nature of origination and
dissolution in the body, or else mindfulness that “there is a body” is
established in him just as far as (is necessary for) a full measure of
knowledge and a full measure of mindfulness, and he dwells
independent, and without being attached to anything in the world.
In this way, monks, a monk dwells contemplating (the nature of) the
body in the body.
In meditation, pain is taken as a mind object or experience. This can be considered a single object (as 'vedana' or 'chiththa') or a complex object (the 5 aggregates).
As shown in The Section on the Constituents (of Mind & Matter):
Moreover, monks, a monk dwells contemplating (the nature of) things in
(various) things, in the five constituents (of mind and body) that
provide fuel for attachment.
And how, monks, does a monk dwell contemplating (the nature of) things
in (various) things, in the five constituents (of mind and body) that
provide fuel for attachment?
Here, monks, a monk (knows): “such is form, such is the origination
of form, such is the passing away of form; such is feeling, such is
the origination of feeling, such is the passing away of feeling; such
is perception, such is the origination of perception, such is the
passing away of perception; such are (mental) processes, such is the
origination of (mental) processes, such is the passing away of
(mental) processes; such is consciousness, such is the origination of
consciousness, such is the passing away of consciousness”.
No! Patavi, Apo, Tejo, Vayo are elements of Rupa. Pain is a sensation caused by those elements coming into contact with the sense doors while the mind is present.
The 4 elements can only be sensed through the sense doors, hence from the standpoint of a meditator the 4 elements are sensations too. Without a living observer there cannot be any sense of the elements, hence consciousness, faculty, contact and sensation are essential, and any experience of the elements is always through sensations, or the characteristics of the elements which are felt.
As a side note, in case this question was motivated by trying to reconcile the Pa Auk and U Ba Khin methods: this is one of the potential perceived differences between the Goenka/U Ba Khin meditation method and the Pa Auk 4-elements meditation method, but at essence they are the same, though the 4-elements-based method is slightly more granular, further subdividing the sensation-based experiences according to the characteristics of the elements. Also see: The Dynamics of Theravāda Insight Meditation, The Ancient Roots of the U Ba Khin Vipassanā Meditation, and The Development of Insight – A Study of the U Ba Khin Vipassanā Meditation Tradition as Taught by S.N. Goenka in Comparison with Insight Teachings in the Early Discourses for Bhikkhu Anālayo's perspective on the matter.
I was wondering about this after reading about namarupa: Nama, that is feeling, perception, contact, intention and attention, and Rupa, which is the 4 elements. And I was trying to disassemble pain into these two parts, nama and rupa, because it seems possible to see pain as an object like a piece of metal or a piece of wood, or some substance with different degrees of hardness, hotness, cohesiveness and movement. But I don't know if this attempt misrepresents the Dhamma.
Feeling and the elements are 2 sides of the same coin: if you analyse feeling closely, it can be classified by its properties according to the elements, and the analysis of the elements is none other than an analysis of feeling.
@UrsulRosu Titth’ayatana Sutta might be an interesting read for you.
This question makes no sense. The four elements are material & not related to pain.
Pain is vedanā khandha rather than rūpa khandha.
Venerable sir, might there be another way in which a bhikkhu can be called skilled in the elements?”
There might be, Ānanda. There are, Ānanda, these six elements: the earth element, the water element, the fire element, the air element,
the space element, and the consciousness element. When he knows and
sees these six elements, a bhikkhu can be called skilled in the
elements.
But, venerable sir, might there be another way in which a bhikkhu can be called skilled in the elements?”
There might be, Ānanda. There are, Ānanda, these six elements: the pleasure element, the pain element, the joy element, the grief
element, the equanimity element, and the ignorance element. When he
knows and sees these six elements, a bhikkhu can be called skilled in
the elements.
http://www.yellowrobe.com/component/content/article/120-majjhima-nikaya/321-bahudhtuka-sutta-the-many-kinds-of-elements.html
|
STACK_EXCHANGE
|
Which tool to remove this bottom bracket nut - photo provided
I have an Apollo Transfer 21 inch frame and need to remove the bottom bracket. It seems to be a screw in bottom bracket.
I am new to bicycle maintenance by the way. I tried a 26mm spanner to remove the 6-sided nut in the bottom bracket non-chain side photo. I was able to remove but I think I tightened too much and need to remove again and using the 26mm spanner is not easy. So I am wondering if I am using the wrong tool.
What tool do I need to take off the nut in the bottom bracket non-chain side photo? A 25mm spanner is too small. A 26mm spanner is a bit too big.
The other side of the bottom bracket is as per the other photo - bottom bracket chain side. What tool do I need to remove that side? I have a feeling it might be a Park Tool HCW-4, but I'm not sure.
A related question, where do I find the exact manual for this bike? There is a generic Apollo manual available - but is very vague on everything.
The cup on the chain side is called the "fixed cup."
You should be aware that it is a left thread and will unscrew clockwise on this bike. For service, this is often left in place.
The non-chain side is the adjustable cup, used to set the preload on the bearing (adjust for play). There is a lockring around the outside. Generally, to set the preload, you loosen the splined lockring, use your spanner to gently set the preload, then lock off your lockring again. You can also remove this side to grease the bearings. I use a lockring tool to fit the lockring; Park make a few in different sizes (mine is a Lazer). I use a very high quality adjustable spanner for most of these, but sometimes the flat spanner made for the purpose happens to fit. Often it doesn't, or the cup uses a pin spanner. Anyway, if it's neither 25 nor 26mm, it's probably an imperial size.
There is no manual for the specific bike but any cycle repair manual eg Richard's or Zinn, even one from the 1980s and before, will cover this bb system.
Of course, it must be 1 inch which is 25.4mm
Noise has it right - the lockring on the non-drive side (ie the left side of the bike) needs to be loosened first - it will be "normal right hand thread"
You could use a hammer and punch in the notches but that tends to mushroom metal. I'd use large plumber's pliers, aka slipjaw pliers or waterpump pliers, or there's probably a correct tool I don't own.
Then the hex nutted part should be easier to undo. It's easy to round off the corners, so first clean the edges of paint and dirt to try and get a snug fit from your tool on the hex.
Be warned that there's probably loose/dirty ball bearings inside this, so once you get the cup off they may rain out.
Your drive-side cup doesn't appear to have any flats on it - I can't see how that would unthread but once the axle is removed out the left hand side, it may be more obvious. Drive side cup should be threaded into the frame with a left-hand thread which is opposite, so clockwise to undo.
Everyone should have a pair of these - a $10 set is quite adequate. You can use a rag in the jaws to reduce marring of parts.
Park HCW-5 is the correct tool you don't own.
@whatsisname thank you - yep I have the HCW-4 and the HCW-11 but not that one. https://www.parktool.com/en-int/product/crank-and-bottom-bracket-wrench-hcw-5
|
STACK_EXCHANGE
|
[Jordan] has been playing around with WS2812b RGB LED strips with TI’s Tiva and Stellaris Launchpads. He’s been using the SPI lines to drive data to the LED strip, but this method means the processor is spending a lot of time grabbing data from a memory location and shuffling it out the SPI output register. It’s a great opportunity to learn about the μDMA available on these chips, and to write a library that uses DMA to control larger numbers of LEDs than a SPI peripheral could handle with a naive bit of code.
DMA is a powerful tool – instead of wasting processor cycles on moving bits back and forth between memory and a peripheral, the DMA controller does the same thing all by its lonesome, freeing up the CPU to do real work. TI's Tiva C series and Stellaris LaunchPads have a μDMA controller with 32 channels, each of which can be assigned to one of four hardware peripherals or be used for software-initiated memory-to-memory transfers.
[Jordan] wrote a simple library that can be used to control a chain of WS2812b LEDs using the SPI peripheral. It's much faster than transferring bits to the SPI peripheral with the CPU, and updating the frames for the LED strip is easier; new frames of an LED animation can be called from the main loop, or the DMA can just start again, without wasting precious CPU cycles updating some LEDs.
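The library's internals aren't spelled out in the post, but the usual trick that makes SPI (and hence DMA) drive WS2812b strips is expanding each data bit into a fixed SPI bit pattern whose high time encodes a 0 or a 1. One common scheme runs SPI at ~2.4 MHz and maps each LED bit to 3 SPI bits: `100` for a 0, `110` for a 1. A language-agnostic sketch of that expansion (illustrative, not [Jordan]'s actual code):

```javascript
// Expand one 8-bit WS2812b color byte into three SPI bytes, using
// 3 SPI bits per LED bit: 0 -> 100, 1 -> 110. At a ~2.4 MHz SPI clock
// each SPI bit lasts ~417 ns, which matches the WS2812b pulse widths.
function encodeByte(b) {
  let bits = 0; // 24 SPI bits accumulated MSB-first
  for (let i = 7; i >= 0; i--) {
    const pattern = (b >> i) & 1 ? 0b110 : 0b100;
    bits = bits * 8 + pattern; // shift in 3 more bits
  }
  // Split the 24-bit value into three bytes for the SPI TX buffer.
  return [(bits >> 16) & 0xff, (bits >> 8) & 0xff, bits & 0xff];
}

console.log(encodeByte(0xff)); // [ 219, 109, 182 ]
```

A DMA channel can then stream the expanded buffer to the SPI data register with no CPU involvement per byte.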
[James] got engaged recently, in part thanks to his clever GPS Engagement Ring Box, and he sent us a brief overview of how he brought this project to life. The exterior of the box is rather simple: one button and an LCD. Upon pressing the button, the LCD would indicate how far it needed to be taken to reach a pre-selected destination. After carrying it to the correct location, the box would open, revealing the ring (and a bit of electronics).
Inside is a GPS antenna and a Stellaris Launchpad, which are powered by three Energizer lithium batteries to ensure the box didn’t run out of juice during the walk. To keep the lid closed, [James] 3D printed a small latch and glued it to the top of the box, which is held in place by a micro servo. Once the box reaches its destination, the microcontroller tells the servo to swing out of the way, and the box can then open. As a failsafe, [James] added a reed switch to trigger an interrupt to open the box regardless of location. It seems this was a wise choice, because the GPS was a bit off and the box didn’t think it was in the correct place.
Swing by his blog for more information on the box's construction and the wiring. We wish [James] the best and look forward to seeing his future hacks; perhaps he'll come up with some clever ones for the wedding like our friend Bill Porter.
When [antoker] is working on a microcontroller project, he often has to write short bits of test code to make sure everything in his circuit is working properly. This is a time-consuming task, and a while back he started on a small side project. It’s a command line interface for a microcontroller that allows him to send short commands to the uC over a serial connection to play around with the ADC, UART, and GPIO pins.
[antoker]’s tiny Unix-like environment is based on modules that can keep track of the time, print the current commands and stack to a terminal, and query things like the current speed of the uC and the available Flash and RAM.
This tiny shell also has scripting capabilities and a jump function, making this a true programming language, however minimal it is. Right now [antoker]’s work is available for the TI Stellaris and Tiva series microcontrollers, and a video of a scripted Larson scanner is available below.
Continue reading “A Shell For The Stellaris & Tiva”
If you’ve been coveting a piece of Texas Instruments hardware you should put in an order before September 30th. A coupon code for $25 off a purchase was posted to the Stellaris ARM Community forums and it should work until that date. Above is the overview of an order placed yesterday for two Tiva Launchpads (apparently TI has rebranded the Stellaris chips as Tiva for some odd reason). After applying the coupon code “National-1yr” the total price of [BravoV’s] order is just under one dollar (including shipping). The coupon code can be entered into a box on the right hand column of step #3 (payment) when placing an order.
UPDATE: There are now multiple comments reporting that the coupon code no longer works.
We’re pretty sure you can use this coupon code on anything in the TI store. But if you don’t have a Stellaris/Tiva Launchpad yet we highly recommend getting one. We picked ours up about a year ago. It’s a great way to try your hand at ARM programming. We have had some issues with how the breakout headers are organized — there’s some gotchas with multiple pins being connected (read the last five paragraphs of the project write up linked in this post for more). But for the price and ease of programming this will get you up and coding in no time. If you need some ideas of what to do with the board look at our posts tagged as “Stellaris”.
[Adarsh] needed a JTAG programmer to push code to a CPLD dev board he was working with. He knew he didn’t have a dedicated programmer but figured he could come up with something. Pictured above is his hack to use a Stellaris Evalbot as a programmer.
Long time readers will remember the Evalbot coupon code debacle of 2010. The kits were being offered with a $125 discount as part of a conference. We were tipped off about the code not knowing its restrictions, and the rest is history. We figure there's a number of readers who have one collecting dust (except for people like [Adam] that used it as a webserver). Here's your chance to pull it out again and have some fun.
A bit of soldering to test points on the board is all it takes. The connections are made on the J4 footprint which is an unpopulated ICDI header. On the software side [Adarsh] used OpenOCD with stock configuration and board files (specifics in his writeup) to connect to the white CPLD board using JTAG.
We think it’s pretty impressive to see a Stellaris Launchpad playing back Video and Audio at the same time with a respectable frame rate. It must be a popular time of year for these projects because we just saw another video playback hack yesterday. But for this project [Vinod] had a lot less horsepower to work with.
He’s using a 320×240 display which we ourselves have tried out with this board. It’s plenty fast enough to push image data in parallel, but if you’re looking for full motion video and audio we would have told you tough luck. [Vinod’s] math shows that it is possible with a bit of file hacking. First off, since the source file is widescreen he gets away with only writing to a 320×140 set of pixels at 25 fps. The audio is pushed at 22,400 bytes per second. This leaves him very few cycles to actually do anything between frames. So he encoded the clip as a raw file, interlacing the video and audio information so that the file can be read as a single stream. From the demo after the break it looks and sounds fantastic!
Continue reading “Video player built from Stellaris Launchpad”
Most microcontroller manufacturers give you some kind of free development toolchain or IDE with their silicon products. Often it’s crippled, closed source, and a large download. This is pretty inconvenient when you want to have firmware that’s easy to build and distribute. I’ve found many of these toolchains to be annoying to use, and requiring closed source software to build open source firmware seems less than desirable.
It’s possible to build code for most microcontrollers using command line tools. You’ll need a compiler, the device manufacturer’s libraries and header files, and some method of flashing the device. A lot of these tools are open source, which lets you have an open source toolchain that builds your project.
Setting up these tools can be a bit tricky, so I’m building a set of templates to make it easier. Each template has instructions on setting up the toolchain, a Makefile to build the firmware, and sample code to get up and running quickly. It’s all public domain, so you can use it for whatever you’d like.
Currently there’s support for AVR, MSP430, Stellaris ARM, and STM32L1. More devices are in the works, and suggestions are welcome. Hopefully this helps people get started building firmware that’s easy to build and distribute with projects.
|
OPCFW_CODE
|
I am going to show you how to use Excel functions to create a new table from your current table and have it dynamically sorted.
We have a list of people, scores and age. We want to create another table that dynamically sorts the data in descending order, updating as your main table data changes. Change anything in the original table and you will see your new table re-sort automatically!
Full article: Using Excel functions to dynamically sort data
Once you've created a spreadsheet or you get one from someone else, one of the more painful things you end up doing at some point is trying to work out which cells feed into any of the formulas throughout the workbook or alternatively, which cells are dependent on any given cell(s).
So how can you check this without spending hours looking at all the cells in your worksheet/workbook? This is where the FORMULA AUDITING tools come in.
Full article: Using the Formula Auditing tools
You may want some way of pausing or delaying VBA code execution, and you can do this with two functions called Wait and
Sleep. You can also do this using a loop, and we will look at that approach too.
Why would you pause the code? Maybe you need to wait for another task to finish, for instance if you made a call to a Windows API/shell function. Or you may want to wait for the user to update data in the sheet, or you just want to run a macro at a set time.
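As a minimal sketch of the two functions (the Sleep declaration targets the Windows API; the durations are arbitrary):

```vba
' Sleep comes from the Windows API and takes milliseconds;
' Application.Wait takes a time of day.
#If VBA7 Then
    Private Declare PtrSafe Sub Sleep Lib "kernel32" (ByVal dwMilliseconds As Long)
#Else
    Private Declare Sub Sleep Lib "kernel32" (ByVal dwMilliseconds As Long)
#End If

Sub PauseExamples()
    Application.Wait Now + TimeValue("0:00:02")  ' pause for 2 seconds
    Sleep 2000                                   ' pause for 2000 milliseconds
End Sub
```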
Full article: Pausing or delaying VBA using Wait, Sleep or a loop
Today's post is about the Goal Seek Method of the Range Object of the Excel Object Model.
I can use Goal Seek to manually find a value, but what if I need to find values for 12 different months? 52 Weeks? Some other scenario with 100's of desired outputs? Time for some VBA!
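A minimal sketch of that idea (the worksheet layout here - inputs in B2:B13, formulas in C2:C13, desired outputs in D2:D13 - is hypothetical):

```vba
Sub SolveMonthlyTargets()
    ' For each of the 12 months, find the input value in column B
    ' that makes the formula in column C hit the target in column D.
    Dim m As Long
    For m = 2 To 13
        Cells(m, "C").GoalSeek Goal:=Cells(m, "D").Value, _
                               ChangingCell:=Cells(m, "B")
    Next m
End Sub
```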
Full article: Begin with the end in mind
PivotTables are one of the most useful tools in Excel. They allow you to easily summarise, examine and present a complex list of data.
This blog post explores 5 advanced PivotTable techniques:
- Grouping fields by month and year.
- Calculating data as a percentage of the total.
- Using Slicers.
- Applying Conditional Formatting to PivotTable data.
- Creating calculated fields.
Full article: 5 advanced PivotTable techniques
The scope of a variable in Excel VBA determines where that variable may be used. You determine the scope of a variable when you declare it. There are three scoping levels: procedure level, module level, and public module level.
This article describes VBA variable scope, including examples.
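A minimal sketch of the three levels (the variable names are made up for illustration):

```vba
Public gAppName As String    ' public module level: visible to the whole project
Private mCount As Long       ' module level: visible to every procedure in this module

Sub Demo()
    Dim i As Long            ' procedure level: exists only while Demo runs
    i = 1
    mCount = mCount + i
End Sub
```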
Full article: Variable scope in Excel VBA
How do you know when a user has entered a value into a formula cell, essentially overriding your formula?
Starting with Excel 2013, we can use conditional formatting with the new
ISFORMULA function to highlight when this happens.
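A minimal sketch of such a rule, assuming the formula cells start at C2 (the range is hypothetical): select the cells, add a conditional formatting rule of the type "Use a formula to determine which cells to format", and enter:

```
=NOT(ISFORMULA(C2))
```

The rule then fires for any cell in the selection that no longer contains a formula, i.e. one a user has typed over.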
Full article: Formula override Conditional Formatting alert
This post shows a simple technique that will vastly reduce the number of errors in your VBA code.
The simple technique is the Assertion statement. It is simple to use and implement and will provide dramatic results. However don't be fooled by the simplicity of
Debug.Assert. Used correctly it is an incredibly powerful way of detecting errors in your code.
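A minimal sketch of an assertion guarding a precondition (the procedure is made up for illustration):

```vba
Sub ApplyDiscount(price As Currency, rate As Double)
    ' Execution breaks into the debugger the moment either condition is False.
    Debug.Assert price >= 0
    Debug.Assert rate >= 0 And rate <= 1
    ' ... rest of the procedure ...
End Sub
```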
Full article: How to reduce VBA errors by up to 70%
Today we look at Excel's built-in feature that flags inconsistent formulas, and see how that feature can call attention to potentially critical information lurking beneath the surface.
Excel's way of telling you that the formula underneath a cell is not like the others is to display a small green triangle in the upper left-hand corner of the cell.
Full article: Detecting inconsistent formulas
|
OPCFW_CODE
|
M: Haml & Sass 2.2 Released - chriseppstein
http://nex-3.com/posts/84-haml-sass-2-2-released
R: chriseppstein
Over a year in the making, this is a huge release for haml & sass. Both have
new websites: <http://haml-lang.com> and <http://sass-lang.com>
R: hachiya
Congratulations to the author, Hampton Catlin, and he has many appreciative
users who are excited about these releases. Sass makes CSS so much better.
R: mhartl
This release is mainly the work of Nathan Weizenbaum and Chris Eppstein, with
a bunch of contributions from other Rubyists. I don't think Hampton worked on
this release, and he hasn't been active in Haml/Sass development for a while.
That's no knock on Hampton; there's just a tendency to give most of the credit
to the initial author of _anything_ (think Perl, or Linux), and in this case
other people deserve a bunch of credit, too.
R: chriseppstein
Mainly Nathan. I had 64 commits. Nathan had 661. There were a dozen or so
other contributors.
R: Keyframe
I still haven't updated; can anyone tell me, is there a plan, or is it in as of
now, to have HTML attributes output in the order I put them in Haml, and not
alphabetically?
I started using compass/960 a month or two ago, best thing since sliced
bread!
R: nex3
Yes, we intend to do this as much as possible in the future.
R: sant0sk1
Does this mean we can finally use Compass without all the Haml gem dependency
problems (or the edge-gem which has been recently employed)?
R: chriseppstein
Yes. I just released a new gem:
[http://github.com/chriseppstein/compass/blob/master/CHANGELO...](http://github.com/chriseppstein/compass/blob/master/CHANGELOG.markdown)
R: Keyframe
I didn't have a chance yet to say thanks. I've asked about multiple load paths
watch on compass group and it works now, and cache issues (bugs) on windows
seem to be gone as of today - this is pure awesome!
R: chriseppstein
You're welcome! It's good to hear that windows is working better; send thanks
to Joe Wasson who submitted the patch that fixed it.
R: bradgessler
I can't say enough good things about Haml and Sass. If you haven't used them in
any of your projects yet, install them right now.
R: perezd
If you love SASS you should check out Compass as well, I use them together and
it makes my CSS so much easier to work with!!
<http://www.compass-style.org>
R: jimmybot
Haml, SASS, Compass all look really neat. Anyone have experience with
integrating them into Django or other Python framework?
R: sams
Sass makes managing css a breeze. Congrats to all involved
R: jpcx01
Awesome! Though... what happened to 2.1?
R: chriseppstein
Haml uses odd numbers for unstable releases and even numbers for stable releases.
2.1 became 2.2 when it was released.
|
HACKER_NEWS
|
Whenever you do maintenance work on a website it is advisable to show the visitors a nice message telling them politely to come back later, rather than a nasty error, or even worse: a big Yellow Screen of Death.
Since ASP.NET 2.0 came out of the labs of Microsoft, there has been a way to take a web application down using the “app_offline.htm” approach. You simply create an HTM file and upload it to the server, and if there is any request to this web application, IIS will automatically show the contents of the app_offline.htm file instead. Once maintenance is done, most people simply rename the file so that IIS no longer picks it up, and the site is back online.
This approach comes in really handy when you upload new files to the server and requests are already coming in while the upload is still in progress. (Ex: somebody accesses your website - ~/default.aspx - and you have some logic in this page that references a class that has not been uploaded yet.)
Where this approach simply doesn’t cut the mustard is when you want to restrict the access for visitors, but you still want to be able to access the site as an administrator (testing, problem investigation, general maintenance). The app_offline.htm approach is not good for this because simply everybody will get the content of the app_offline.htm file no matter what the requested page is.
In the quest for a good solution to this problem I had the following facts in mind:
- same sort of easy use like uploading the app_offline.htm (maybe a similar file to trigger the action)
- access granted based on the ip of the visitor (if AdminIP == RequestIP –> unrestricted surfing)
My current solution looks like this:
- The filter is triggered when a file named “offline.html” exists on the server. (This is practically the same idea as the app_offline.htm one)
- The web.config file contains an application settings key defining the Administrator IP
<add key="AdminIP" value="126.96.36.199" />
- To filter the unwanted IP addresses out, I’m using a HttpModule:
using System;
using System.Configuration;
using System.IO;
using System.Web;

/// This is how you take an ASP.NET application offline the
/// Arnold Matusz way with AppOfflineModule
public class AppOfflineModule : IHttpModule
{
    public void Dispose() { }

    public void Init(HttpApplication context)
    {
        context.BeginRequest += new EventHandler(context_BeginRequest);
    }

    void context_BeginRequest(object sender, EventArgs e)
    {
        HttpApplication application = (HttpApplication)sender;
        HttpContext context = application.Context;
        string offline = Path.Combine(context.Server.MapPath("~"), "offline.html");
        if (!File.Exists(offline)) return;
        string ip = context.Request.UserHostAddress;
        string adminIP = ConfigurationManager.AppSettings["AdminIP"];
        if (ip != adminIP)
        {
            // Serve the maintenance page and stop normal processing.
            context.Response.WriteFile(offline);
            context.Response.End();
        }
    }
}
As you can immediately see, on each Request, I’m performing a check for a file named offline.html with File.Exists(). If it exists I’m checking for the administrator IP in the web.config file to see if the request is coming for somebody that should not be restricted.
Adding your HttpModule is easy:
<add name="AppOfflineModule" type="AppOfflineModule" />
Once offline.html is uploaded, each “restricted” visitor will only see the content of offline.html.
To disable the whole functionality you only need to rename the offline.html file to anything else (I suggest a one-character difference so that it can easily be renamed back, e.g. offlin.html).
In a very small maintenance group this solution works just fine, but for future work I’d definitely create some functionality to define more IP addresses which can access the website while it’s taken “offline”. Defining an IP class would also be another good idea, particularly when the number of “administrators” is very big.
There is practically no limit to the ways you can identify who can and cannot access your website with this approach: you only need to program your logic into the BeginRequest event handler, and you are ready to go.
|
OPCFW_CODE
|
Hi. I bought the ER-X 1 month ago and flashed it with my own compiled LEDE 17.01.7 with the Qualcomm Fast Path patch for kernel 4.4 (Qualcomm Fast Path For LEDE). Since then, almost every day I have these errors in the System Log:
This causes the connection to the router to be lost for a few seconds until it recovers. Other times it stays "fried": the router cannot be accessed, and there is no communication between the ethernet ports until I restart it. Also, sometimes within a few hours of restarting it, I find this in the Kernel Log:
It seems like a problem with the switch. I don't know if the patch has anything to do with it, but since I have searched and found many others with the same problem, I discount this. Although there is no definitive solution, some say that disabling flow control solves it, but I cannot disable it with ethtool because it is not implemented in the driver. I tried to disable it on the equipment and the switch connected to it, but I think there is still a port (eth0) with flow control enabled, because I cannot deactivate it on the FTTH ONT.
Does anyone know how I could fix it once and for all? Any patch for the Ethernet driver?
Not yet, later I will install 17.01.7 without the patch and try a few days. If the error persists I will try the 19.07 snapshot. But I have no faith that it works, since there are different messages with the same problem even with 18.06.X:
Well, for now it has been running without errors on official LEDE 17.01.7 for two days (without patches), and without the kmod-sched-core module, which I have read causes problems on the MT7621 SoC and which I had in my build. I will wait 3 more days and then compile again with the Qualcomm patch but without kmod-sched-core.
Any specific reason you went for 17.01.7 instead of the 18.06.4 / 19.07 or just building from the master? Available packages must be pretty outdated, as well as fewer kernel options.
On my ERX I've switched to building from the master branch pretty quickly since first using the stable 18.06.x, then snapshots. In terms of stability, performance, etc - no negatives to report. There was a snapshot in July I think that caused kernel panic when hardware flow offload was enabled, but it had since been fixed. I've recently switched default congestion control to BBR - it's been working quite well.
I prefer a stable version to updating all the time. I also prefer 17.01.x to 18.06.x, mostly for aesthetic reasons in LuCI. The latter may also consume more resources (RAM). I don't like how the WAN interface is shown on the overview page, nor the action buttons (apply, cancel, add, delete...) that are not filled in, or the spacing between each line or element (for example the routes and ARP, connections, firewall rules, etc.). If I get 17.01.7 to work stably 24/7 I will continue with it for a long time.
17 is EOL and will receive no further updates (including no security updates, AFAIK). 18 is effectively in maintenance mode (nothing new here, nor many, if any bug fixes). 19 is already a thousand commits behind master and it hasn’t even been released yet.
LuCI can easily be installed on a snapshot, or added with the image builder.
After 4 days and 8 hours without any "transmit timed out" error in the system log, the router froze again and I had to power-cycle it, without being able to read the log. I will have to compile 19.07 or the master branch.
|
OPCFW_CODE
|
Handle foreign character web input
On Thu, Jul 4, 2019 at 7:08 AM Igor Korot <ikorot01 at gmail.com> wrote:
> Hi, Thomas,
> On Sat, Jun 29, 2019 at 11:06 AM Thomas Jollans <tjol at tjol.eu> wrote:
> > On 28/06/2019 22:25, Tobiah wrote:
> > > A guy comes in and enters his last name as R?nngren.
> > With a capital ? in the middle? That's unusual.
> > >
> > > So what did the browser really give me; is it encoded
> > > in some way, like latin-1? Does it depend on whether
> > > the name was cut and pasted from a Word doc. etc?
> > > Should I handle these internally as unicode? Right
> > > now my database tables are latin-1 and things seem
> > > to usually work, but not always.
> > If your database is using latin-1, German and French names will work,
> > but Croatian and Polish names often won't. Not to mention people using
> > other writing systems.
> > So G?nther and Fran?ois are ok, but Boles?aw turns into Boles?aw and
> > don't even think about anybody called ???????? or ????.
> As others pointed out - it is very easy to do transliteration especially if
> its' not a user registration that will be done.
> But I would simply not do that at all - create your forms in English and
> accept English spellings only.
> Most people that do computers this days can enter phonetic spelling
> of their first/last names (even in Chinese/Japanese/Hebrew).
> And all European names can be transliterated to English.
> Besides as the OP said - if someone comes to him and will
> try to enter the non-English name. The OP might not even have the appropriate
> keyboard layout to input such a name. And if this is an (time consuming) event
> all (s)he can do is ask for phonetic spelling.
> Thank you.
What you basically just said was "I wish all those ugly foreign names
would just go away". Honestly, that's not really an acceptable
solution; you assume that you can transliterate any name into
"English" in some perfect way, which is acceptable to everyone in the
world. And you also assume that this transformation will be completely
consistent, so you can ask someone his/her name and always get back
the same thing.
If you want to do a Latinization and accent strip for the sake of a
search, that's fine; but make sure you retain the name as people want
it to be retained. Don't be bigoted.
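For the search-only accent strip mentioned above, a minimal Python sketch (note that it only removes combining marks; letters like the Polish ł have no decomposition, so a dedicated transliteration library would be needed for those):

```python
import unicodedata

def ascii_fold(name: str) -> str:
    """Strip accents for indexing/search; always store the original name."""
    # NFKD splits accented letters into base letter + combining mark,
    # then we drop the combining marks.
    decomposed = unicodedata.normalize("NFKD", name)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

print(ascii_fold("François"))  # Francois
print(ascii_fold("Günther"))   # Gunther
```
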
|
OPCFW_CODE
|
My Verilog output test results in a value of x
This is my first assignment with Verilog and I can't figure out why my output keeps giving me an x value. My code is very simple; I doubt it needs very much explanation.
module Network_Router (P, Q, R, S, Output);
input P, Q, R, S;
output Output;
wire Output;
reg and1, and2, and3, and4, and5, and6, or1, or2, or3, or4, or5;
initial
begin
and1 = P & Q;
and2 = Q & R;
or1 = and1 | and2;
and3 = P & R;
and4 = S & R;
or2 = and3 | and4;
or3 = or1 | or2;
and5 = Q & S;
and6 = P & S;
or4 = and5 | and6;
or5 = or3 | or4;
end
assign Output = or5;
endmodule
and then my testbench file looks like this:
`include "netRouter.v"
module netRouter_tb;
reg P, Q, R, S;
wire Output;
Network_Router test(P, Q, R, S, Output);
initial
begin
//Dump results of the simulation to netRouter.vcd
$dumpfile("netRouter.vcd");
$dumpvars;
P <= 0; Q <= 0; R <= 0; S <= 0;
#5
P <= 0; Q <= 0; R <= 0; S <= 1;
#5
P <= 0; Q <= 0; R <= 1; S <= 0;
#5
P <= 0; Q <= 0; R <= 1; S <= 1;
#5
P <= 0; Q <= 1; R <= 0; S <= 0;
#5
P <= 0; Q <= 1; R <= 0; S <= 1;
#5
P <= 0; Q <= 1; R <= 1; S <= 0;
#5
P <= 0; Q <= 1; R <= 1; S <= 1;
#5
P <= 1; Q <= 0; R <= 0; S <= 0;
#5
P <= 1; Q <= 0; R <= 0; S <= 1;
#5
P <= 1; Q <= 0; R <= 1; S <= 0;
#5
P <= 1; Q <= 0; R <= 1; S <= 1;
#5
P <= 1; Q <= 1; R <= 0; S <= 0;
#5
P <= 1; Q <= 1; R <= 0; S <= 1;
#5
P <= 1; Q <= 1; R <= 1; S <= 0;
#5
P <= 1; Q <= 1; R <= 1; S <= 1;
end
initial
begin
$monitor("time=%4d: %b %b %b %b : Output = %b",$time,P, Q, R, S, Output);
end
endmodule
You have an initial block in the Network_Router module.
Replace it with an always @(*) block.
That initial block is executed once when the simulation starts and then never again. When the simulation starts, all your reg values are x.
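Applying that fix, the procedural part of the module becomes (everything else unchanged):

```verilog
// "initial" replaced by "always @(*)" so the intermediate regs are
// recomputed whenever any of P, Q, R, S changes.
always @(*)
begin
    and1 = P & Q;
    and2 = Q & R;
    or1 = and1 | and2;
    and3 = P & R;
    and4 = S & R;
    or2 = and3 | and4;
    or3 = or1 | or2;
    and5 = Q & S;
    and6 = P & S;
    or4 = and5 | and6;
    or5 = or3 | or4;
end
```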
|
STACK_EXCHANGE
|
Add dim to pytorch_lightning.metrics.PSNR
🚀 Feature
Add dim to pytorch_lightning.metrics.PSNR so that users can specify the dimension (or dimensions) for the mean squared error reduction if PSNR computation.
Motivation
Suppose we have two image pairs
(pred_image_1, target_image_1)
(pred_image_2, target_image_2)
pytorch_lightning.metrics.PSNR computes the PSNR as
squared_error_1 = (pred_image_1 - target_image_1) ** 2
squared_error_2 = (pred_image_2 - target_image_2) ** 2
mean_squared_error = (squared_error_1 + squared_error_2) / (squared_error_1.numel() + squared_error_2.numel())
psnr = -10.0 * log(mean_squared_error)
If pred_image_1.numel() >> pred_image_2.numel(), the quality of pred_image_1 may bias the output psnr.
It will be helpful if we can compute the psnrs of each image pair separately and average psnrs:
squared_error_1 = (pred_image_1 - target_image_1) ** 2
psnr_1 = -10.0 * log(squared_error_1.mean())
squared_error_2 = (pred_image_2 - target_image_2) ** 2
psnr_2 = -10.0 * log(squared_error_2.mean())
psnr = (psnr_1 + psnr_2) / 2
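To see the bias concretely, here is a small pure-Python sketch with made-up squared-error values (base-10 log, no data_range scaling, matching the simplified formulas above):

```python
import math

def psnr(mse: float) -> float:
    # Simplified PSNR as in the formulas above (base-10 log).
    return -10.0 * math.log10(mse)

# A large, high-quality image and a small, low-quality one.
big_errors = [0.01] * 1000
small_errors = [0.25] * 10

# Current behavior: one MSE pooled over all pixels.
pooled_mse = (sum(big_errors) + sum(small_errors)) / (len(big_errors) + len(small_errors))
pooled = psnr(pooled_mse)

# Proposed behavior: per-image PSNRs, then the average.
per_image = (psnr(sum(big_errors) / len(big_errors)) +
             psnr(sum(small_errors) / len(small_errors))) / 2

# The big image dominates the pooled value, so pooled > per_image here.
print(pooled, per_image)
```
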
Pitch
Add dim to pytorch_lightning.metrics.PSNR:
class PSNR(Metric):
def __init__(
self,
data_range: Optional[float] = None,
base: float = 10.0,
dim: Union[int, Sequence[int]] = (), # New.
reduction: str = 'elementwise_mean', # Unused?
compute_on_step: bool = True,
dist_sync_on_step: bool = False,
process_group: Optional[Any] = None,
):
# ...
self._dim = tuple(dim) if isinstance(dim, Sequence) else dim
self.add_state("sum_psnr", default=torch.tensor(0.0), dist_reduce_fx="sum")
self.add_state("total", default=torch.tensor(0), dist_reduce_fx="sum")
# ...
def update(self, preds: torch.Tensor, target: torch.Tensor):
psnrs = compute_psnr(preds, target, dim=self._dim) # New.
self.sum_psnr += psnrs.sum()
self.total += psnrs.numel()
def compute(self):
return self.sum_psnr / self.total
preds_nchw = torch.rand([32, 3, 224, 224])
targets_nchw = torch.rand([32, 3, 224, 224])
metric = PSNR(dim=(1, 2, 3))
average_psnr_of_n_images = metric(preds_nchw, targets_nchw)
Alternatives
If the original behavior should be kept, maybe we can do
class PSNR(Metric):
def __init__(
self,
data_range: Optional[float] = None,
base: float = 10.0,
dim: Optional[Union[int, Sequence[int]]] = None, # `None` for original behavior
reduction: str = 'elementwise_mean', # Unused?
compute_on_step: bool = True,
dist_sync_on_step: bool = False,
process_group: Optional[Any] = None,
):
# ...
self._dim = tuple(dim) if isinstance(dim, Sequence) else dim
if self._dim is None:
# Original behavior.
self.add_state("sum_squared_error", default=torch.tensor(0.0), dist_reduce_fx="sum")
else:
self.add_state("sum_psnr", default=torch.tensor(0.0), dist_reduce_fx="sum")
self.add_state("total", default=torch.tensor(0), dist_reduce_fx="sum")
# ...
Hi @manipopopo,
Thanks for your suggestion. I think it is a good idea to add a dim argument to PSNR. Would you be up for sending a PR with the enhancement?
Hi @SkafteNicki , I’d be happy to. Maybe I can send a PR tomorrow.
@manipopopo sound good to me :]
Please ping me in your PR.
|
GITHUB_ARCHIVE
|
This section provides a comprehensive explanation of key terms and concepts frequently used in our documentation and product.
Rookeries Development, also known as RookDev, is the legal entity that owns and operates the ROOK product.
A "Client" refers to the organization that integrates the ROOK product or the entity developing an application utilizing ROOK technology.
A "User" is an individual who leverages the services offered by the client. This person, essentially the "client's customer," connects their health data to the app through the ROOK solution.
"ROOK" in this documentation represents our comprehensive solution, including APIs, SDKs, and Webhooks. It empowers wellness, health, and fitness applications to interface with their users' health data. ROOK serves as a Health Data Collector and a data processing/insights platform that integrates into your application to simplify access and utilization of your users' health data.
"ROOK Connect" is the product responsible for collecting users' health data from multiple data sources. Learn more at ROOK Connect Quickstart.
"ROOK Score" is the product that generates a score based on the users' health pillars. Learn more at ROOK Connect Quickstart.
RookDev also provides RookMotion, a specialized offering for clients intending to develop a fitness app that utilizes heart rate sensors for training. If you require additional information about this product, please contact us. Shortly, ROOK's capabilities will be expanded to encompass training functionalities.
Health Data Sources
"Health Data Sources" encompass both Health Data Providers and Health Data Collectors. ROOK gathers health data from these sources, processes it, and makes it readily accessible for consumption, regardless of the source type.
Health Data Provider
A "Health Data Provider" is typically the manufacturer of sensors and wearables, such as Polar and Garmin, or other applications that collect users' health data through their devices or apps. Through our connections page, the user can permit us to access their data, which we then deliver to our client (embedded in their app) in a standardized, normalized, and unified manner. A user can connect multiple providers to ROOK.
Health Data Collectors
"Health Data Collectors" are products and providers that aggregate health data from Health Data Providers in a centralized manner. Often referred to as "Health Kits" or "Health Data Hubs," providers and products in this category include Apple Health, Health Connect, Samsung Health, Huawei Health, Xiaomi Health, and Huami.
Definition of Health
We perceive "health" as a state of wholeness embodied by a balance across four fundamental pillars: physical, body, sleep, and mental/social health.
"Health Metrics" pertain to any quantifiable indicators related to a user's health or wellness, such as step count or heart rate.
Health Data Pillars
"Health Data Pillars" refer to the four pillars encompassed in our definition of health: physical, sleep, body, and mental/social health.
Physical Health Data Pillar
The "Physical Health Data Pillar" includes all health data related to categories such as daily activities, exercise sessions, or user movement throughout the day.
Body Health Data Pillar
The "Body Health Data Pillar" contains all health data associated with categories like body size, physiological variables, and nutrition.
Sleep Health Data Pillar
The "Sleep Health Data Pillar" encompasses all health data related to sleep, recovery, or rest.
"Health Data" includes all data collected from a user's health metrics.
"Unstructured Data" refers to health data stored in its original form as received from Health Data Providers, without any processing by our technology. This data is generated by the user's wearable device, while ROOK gathers this data from the Health Data Provider.
"Structured Data" denotes health data that has undergone harmonization, standardization, cleaning, and normalization through our technology. ROOK collects and processes the user's health data, facilitating its meaningful use.
"Harmonized Data" presents health data in a consistent format with universal units.
"Standardized Data" ensures that health data is compatible across different Health Data Providers.
"Clean Data" provides clients with the most accurate single data set when duplicate data sets are discovered. This could occur if a user has multiple wearable devices or Health Data Providers connected or if the user connects both a Health Data Provider (like Garmin, Polar, Fitbit, etc.) and a Health Data Collector (like Apple Health, Health Connect, etc.).
"Normalized Data" represents health data that has undergone transformation to establish unified scales and ranges.
"Components" refer to the various technologies deployed in ROOK to offer the functionalities mentioned earlier. These components form part of the comprehensive technology solution that ROOK provides.
An "API," or Application Programming Interface, consists of a set of endpoints used to query health data or execute actions like generating tokens from ROOK.
A "WebHook" is a notification system where ROOK alerts the client about newly available health data from users.
The "Connections Page" is a webpage that you can integrate into your app where users can locate a list of available Health Data Providers and Health Data Collectors to connect with. It is also where they authorize us to access the health data of their chosen provider.
A unique identifier for each client, in UUID4 format. This UUID is generated by ROOK and provided to clients upon signing service agreements. Please reach out to us to create an account. The client_uuid remains consistent across every environment.
The client secret is a confidential string instrumental in authorizing access to API endpoints through basic authentication. ROOK assigns it to clients during account creation.
This is a unique identifier for your users. It is a flexible string that accepts numerals, emails, UUID4, names*, or whatever identifier you use internally for your users.
- Length ranges from 1 to 50 characters
- Valid characters are:
- Alphanumeric ("Aa-Zz", "0-9").
- Emails ("Aa-Zz", "0-9"), ("@" & "."), At least 2 characters for domain ending, e.g., ".co".
- UUIDs ("Aa-Zz", "0-9") + ("-").
If you are a HIPAA covered entity, please refrain from using names or emails as user_id. It is recommended to hash them or any Protected Health Information (PHI).
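A rough client-side sanity check for these constraints might look like the following (the regex is an assumption that only approximates the rules above; the server remains authoritative):

```python
import re

# Loose approximation: 1-50 chars drawn from alphanumerics plus the
# "@", "." and "-" needed for emails and UUIDs. It does not enforce
# email structure such as the 2-character domain ending.
USER_ID_RE = re.compile(r"^[A-Za-z0-9@.\-]{1,50}$")

def is_valid_user_id(user_id: str) -> bool:
    return USER_ID_RE.fullmatch(user_id) is not None

print(is_valid_user_id("9bc0757e-6e8d-4a80-b2fc-9d0f2ac8f7c1"))  # True
print(is_valid_user_id(""))                                       # False
```
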
We offer distinct environments for different development stages. Each environment possesses its unique resources, such as servers, databases, and domains. These environments include:
- Production Environment: This is the final stage where the product is released. The resources used in this environment are optimized for stability, scalability, and security.
- Development Environment:
This is where the majority of the development work happens. It is used for testing new features.
The corresponding domains for each environment are:
The api_url is composed of "api" + the environment domain. For instance, the api_url for the production environment would be api.rook-connect.com.
|
OPCFW_CODE
|
You must configure at least one external IP address on the NSX Edge to provide IPSec VPN service.
- Log in to the vSphere Web Client.
- Click Networking & Security and then click NSX Edges.
- Double-click an NSX Edge.
- Click the Manage tab and then click the VPN tab.
- Click IPSec VPN.
- Click the Add icon.
- Type a name for the IPSec VPN.
- Type the IP address of the NSX Edge instance in Local Id. This will be the peer Id on the remote site.
- Type the IP address of the local endpoint.
If you are adding an IP to IP tunnel using a pre-shared key, the local Id and local endpoint IP can be the same.
- Type the subnets to share between the sites in CIDR format. Use a comma separator to type multiple subnets.
- Type the Peer Id to uniquely identify the peer site. For peers using certificate authentication, this ID must be the common name in the peer's certificate. For PSK peers, this ID can be any string. VMware recommends that you use the public IP address of the VPN or a FQDN for the VPN service as the peer ID.
- Type the IP address of the peer site in Peer Endpoint. If you leave this blank, NSX Edge waits for the peer device to request a connection.
- Type the internal IP address of the peer subnet in CIDR format. Use a comma separator to type multiple subnets.
- Select the Encryption Algorithm.
AES-GCM encryption algorithm is not FIPS compliant.
- In Authentication Method, select one of the following:
PSK (Pre Shared Key)
Indicates that the secret key shared between NSX Edge and the peer site is to be used for authentication. The secret key can be a string with a maximum length of 128 bytes.
PSK authentication is disabled in FIPS mode.
Certificate
Indicates that the certificate defined at the global level is to be used for authentication.
- Type the shared key if anonymous sites are to connect to the VPN service.
- Click Display Shared Key to display the key on the peer site.
- In Diffie-Hellman (DH) Group, select the cryptography scheme that will allow the peer site and the NSX Edge to establish a shared secret over an insecure communications channel.
DH14 is the default selection for both FIPS and non-FIPS mode. DH2 and DH5 are not available when FIPS mode is enabled.
- In Extension, type one of the following:
securelocaltrafficbyip=IPAddress to redirect the Edge's local traffic over the IPSec VPN tunnel. This is the default value. For more information, see http://kb.vmware.com/kb/20080007.
passthroughSubnets=PeerSubnetIPAddress to support overlapping subnets.
- Click OK.
NSX Edge creates a tunnel from the local subnet to the peer subnet.
What to do next
Enable the IPSec VPN service.
|
OPCFW_CODE
|
By enabling the processing of parts with unprecedented levels of complexity over multiple length scales, additive manufacturing (AM) has opened new realms in terms of on-demand manufacturing for a wide class of engineered and biological materials. While transformative, AM technologies also raise formidable challenges with regard to material characterization, simulations for design and optimization, and certification and qualification. In particular, capturing the stochasticity introduced by processing conditions is a daunting challenge.
In a first project, supported by the National Science Foundation (under awards CMMI-1726403 and CMMI-1942928), we developed a methodology enabling the representation, sampling, and identification of spatially-dependent stochastic material parameters on complex structures produced by additive manufacturing.
Field of stochastic material parameter on a 3D-printed titanium scaffold
The modeling component relies on the combination of two ingredients. First, a fractional stochastic partial differential equation is introduced and parameterized in order to automatically capture the complex features of additively manufactured parts. Information-theoretic transport maps are subsequently introduced with the aim of ensuring well-posedness in the forward propagation problem. The identification of stochastic elasticity tensors on titanium scaffolds produced by laser powder bed fusion was subsequently discussed. To this end, we considered an isotropic approximation at a mesoscale where fluctuations are aggregated over several layers, and addressed both the calibration and validation of the probabilistic model by using different sets of physical structural experiments. Despite the high sensitivity of the forward map to applied boundary conditions, geometrical parameters, and structural porosity, it was shown that the calibrated stochastic model can generate non-vanishing probability levels for all experimental observations (see the figures below).
Results obtained for compression (left) and torsion (right) tests. Red circles represent experimental samples, while the blue lines are the probability density functions estimated with the model
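As a toy illustration of the SPDE route to random material fields (a generic sketch, not the identification methodology of the paper), one can sample a stationary Gaussian field on a periodic grid by spectrally filtering white noise through the Whittle-Matern symbol; the grid size, kappa, and the fractional exponent alpha below are purely illustrative choices:

```python
import numpy as np

def sample_matern_field(n=64, kappa=5.0, alpha=2.0, seed=0):
    """Sample a stationary Gaussian random field on an n x n periodic grid
    by spectrally filtering white noise, using the Whittle-Matern link to
    the fractional SPDE (kappa^2 - Laplacian)^(alpha/2) u = white noise."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((n, n))           # white noise on the grid
    freq = 2.0 * np.pi * np.fft.fftfreq(n)    # angular frequencies
    kx, ky = np.meshgrid(freq, freq, indexing="ij")
    # Fourier symbol of (kappa^2 - Laplacian)^(-alpha/2)
    filt = (kappa**2 + kx**2 + ky**2) ** (-alpha / 2.0)
    u = np.real(np.fft.ifft2(filt * np.fft.fft2(w)))
    return u / u.std()                        # normalize to unit variance

field = sample_matern_field()
```

Varying the fractional exponent alpha changes the regularity of the sampled field, which hints at how such an equation can be parameterized to match observed microstructural features.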
In a second project, we addressed the modeling of the geometrical perturbations induced by the AM technology. These perturbations play a critical role for highly porous structures as they dramatically affect load bearing capacity and ultimate failure of the component.
Realization of the field of geometrical perturbation on a 3D-printed gyroid structure
For further details, please refer to the following publications:
- S. Chu, J. Guilleminot, C. Kelly, B. Abar and K. Gall, Stochastic modeling and identification of material parameters on structures produced by additive manufacturing, Computer Methods in Applied Mechanics and Engineering, 387, 114166 (2021)
- H. Zhang, J. Guilleminot and L. Gomez, Stochastic modeling of geometrical uncertainties on complex domains, with application to additive manufacturing and brain interface geometries, Computer Methods in Applied Mechanics and Engineering, 385, 114014 (2021)
Every small or medium sized business has a unique IT environment, and each environment can present technical obstacles that may interfere with the smooth operation of data protection. While the Axcient backup, business continuity and disaster recovery service is designed for easy setup and use, it’s also equipped with tools that offer a little help in avoiding and troubleshooting connectivity problems, both onsite and offsite. These include:
- A script that helps with configuring Windows systems for agentless protection
- A connectivity test for accessing local devices
- A connection health check between the Axcient appliance and data center
- Tools for diagnosing networking issues
Let’s look at each of these individually:
1. Configuring Windows Systems for Agentless Protection
Configuration helpers can be downloaded from the Axcient appliance to ensure systems are properly configured for agentless protection. You’ll find them under System->Tools in the Axcient Unified Management Console (UMC).
One such helper is the Windows Configuration tool, which is an interactive script that verifies and sets up specific permissions and services, prompting you along the way. These include verifying user rights for the Axcient appliance to access the target Windows system, registering scripting engines that are utilized by backup and restore operations, and enabling file sharing so that data can be successfully backed up and restored.
Using the Windows Configuration tool is optional, but doing so can help avoid common set-up mistakes.
2. Testing Connectivity to Local Devices
Connectivity between the Axcient appliance and any server, desktop or laptop to be protected can be verified through the UMC. After selecting a device in the UMC, simply click the Test Access button. This will initiate three tests:
- “Connectivity Access” checks whether the device is reachable from the appliance
- “Data Access” verifies whether data on the device can be accessed by the appliance
- “Control Access” ensures data protection functions can be executed against the device
If you used the Windows Configuration helper tool described earlier, then these tests will most likely succeed. However, if any of the tests fail, corrective action will be required. The Axcient product documentation and online help provide guidance on what to check. The most common culprits are an invalid or inaccessible hostname or IP address, the inability to resolve a name using DNS, or authentication errors due to incorrect credentials for accessing the device.
3. Checking Connectivity Health between the Axcient Appliance and Data Center
Connectivity between the Axcient appliance and the Axcient data center can be tested through the UMC. Under System->Network Utilities, clicking the Connectivity Health button will execute a sequence of ping and nslookup commands. If the output shows any errors, then offsite operations, such as appliance registration and offsite backups, will not function properly. The most likely cause of an error is an improperly configured firewall. Check the appliance specifications in the Axcient product documentation or online help for the ports used by the Axcient service.
4. Networking Diagnostics
To aid in troubleshooting general connectivity issues, the Axcient appliance provides a user interface to several common networking utilities. These include ping for checking device connectivity, traceroute to help diagnose slow network connections, and nslookup for verifying proper device name resolution.
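As an illustration of what such diagnostics do under the hood (a generic sketch, not Axcient's actual tooling; the host and port are placeholders you would replace with your own), a minimal check combines name resolution with a TCP reachability test:

```python
import socket

def check_endpoint(host, port, timeout=3.0):
    """Mimic the two basic checks: name resolution (nslookup-style),
    then a TCP reachability test (ping-style, but over TCP so it also
    works on networks where ICMP is filtered)."""
    try:
        ip = socket.gethostbyname(host)        # DNS resolution
    except socket.gaierror:
        return {"resolved": False, "reachable": False}
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return {"resolved": True, "reachable": True, "ip": ip}
    except OSError:
        return {"resolved": True, "reachable": False, "ip": ip}
```

A failed `resolved` points at DNS configuration, while `resolved` without `reachable` points at firewalls or closed ports, mirroring the troubleshooting order suggested above.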
That covers our connectivity helpers. Next time we’ll look at helpers for configuring Microsoft applications for data protection. In the meantime, please be sure to take advantage of the Axcient documentation, including the PDF manuals, context sensitive online help, and Partner Portal knowledge base.
package gov.nist.juncertainty;
import java.util.Collection;
import java.util.Objects;
import com.duckandcover.html.IToHTML;
import gov.nist.microanalysis.roentgen.ArgumentException;
import gov.nist.microanalysis.roentgen.utility.BasicNumberFormat;
/**
* UncertainValue represents a number and an associated uncertainty (which may
* be zero.) UncertainValue is derived from {@link Number}.
*
* @author Nicholas W. M. Ritchie
*
*/
public class UncertainValue extends Number //
implements Comparable<UncertainValue>, IToHTML {
public static final UncertainValue ONE = new UncertainValue(1.0);
public static final UncertainValue ZERO = new UncertainValue(0.0);
public static final UncertainValue NaN = new UncertainValue(Double.NaN);
public static final UncertainValue POSITIVE_INFINITY = new UncertainValue(Double.POSITIVE_INFINITY);
public static final UncertainValue NEGATIVE_INFINITY = new UncertainValue(Double.NEGATIVE_INFINITY);
private static final long serialVersionUID = -7284125207920225793L;
public static double fractionalUncertainty(
final Number n
) {
return uncertainty(n) / n.doubleValue();
}
static final public boolean isSpecialNumber(
final Number n
) {
return n instanceof UncertainValue;
}
public static boolean isUncertain(
final Number n
) {
return (n instanceof UncertainValue) && ((UncertainValue) n).isUncertain();
}
public static double mean(
final Number n
) {
return n.doubleValue();
}
public static UncertainValue normal(
final double v
) {
return new UncertainValue(v, Math.sqrt(v));
}
/**
* Parses a string of the form "Double ± Double" or "Double", returning the
* result as an UncertainValue. The "±" character can be replaced with "+-" or
* "-+".
*
* @param str A string containing a text representation of an {@link UncertainValue}
* @return UncertainValue
*/
public static UncertainValue parse(
final String str
) {
final String[] pms = { "\u00B1", "+-", "-+" };
for (final String pm : pms) {
final int idx = str.indexOf(pm);
if (idx != -1) {
final double value = Double.parseDouble(str.substring(0, idx).trim());
final double sigma = Double.parseDouble(str.substring(idx + pm.length()).trim());
return new UncertainValue(value, sigma);
}
}
final double value = Double.parseDouble(str.trim());
return new UncertainValue(value);
}
public static UncertainValue toRadians(
final double degrees, final double ddegrees
) {
return new UncertainValue(Math.toRadians(degrees), Math.toRadians(ddegrees));
}
public static double uncertainty(
final Number n
) {
return n instanceof UncertainValue ? ((UncertainValue) n).uncertainty() : 0.0;
}
public static double variance(
final Number n
) {
return n instanceof UncertainValue ? ((UncertainValue) n).variance() : 0.0;
}
static final public Number unwrap(
final Number n
) {
if (n instanceof UncertainValue) {
final UncertainValue uv = (UncertainValue) n;
if (!uv.isUncertain())
return Double.valueOf(n.doubleValue());
}
return n;
}
public static UncertainValue valueOf(
final double val, final double unc
) {
return new UncertainValue(val, unc);
}
public static UncertainValue valueOf(
final double val
) {
return new UncertainValue(val);
}
/**
* <p>
* Computes the variance-weighted mean - the maximum likelihood estimator of the
* mean under the assumption that the samples are independent and normally
* distributed.
* </p>
*
* @param cuv A {@link Collection} of UncertainValue objects
* @return UncertainValue
* @throws ArgumentException When there is an inconsistency in the function
* arguments
*/
static public Number weightedMean(
final Collection<? extends UncertainValue> cuv
) throws ArgumentException {
double varSum = 0.0, sum = 0.0;
for (final UncertainValue uv : cuv) {
final double v = uv.variance();
// A zero variance would produce an infinite weight, so reject it explicitly.
if (!(v > 0.0))
throw new ArgumentException(
"Unable to compute the weighted mean when one or more datapoints have zero uncertainty.");
final double ivar = 1.0 / v;
varSum += ivar;
sum += ivar * uv.doubleValue();
}
return varSum > 0.0 ? new UncertainValue(sum / varSum, Math.sqrt(1.0 / varSum)) : UncertainValue.NaN;
}
final Double mValue;
final double mSigma;
/**
* Constructs an UncertainValue with uncertainty equal to 0.0.
*
* @param value The value
*/
public UncertainValue(
final double value
) {
this(value, 0.0);
}
/**
* <p>
* Constructs an UncertainValue equal to "value ± sigma".
* </p>
*
* @param value The value
* @param sigma The uncertainty (should be >= 0.0)
*
*/
public UncertainValue(
final double value, //
final double sigma
) {
mValue = Double.valueOf(value);
mSigma = Math.abs(sigma);
}
/**
* Constructs an UncertainValue from an instance of any Number derived class.
* If the Number is an {@link UncertainValue} then the result is a copy.
*
* @param n Number
*/
public UncertainValue(
final Number n
) {
this(n.doubleValue(), uncertainty(n));
}
/**
* First compares the values and then compares the uncertainties using
* Double.compare(...).
*
* @param uv1 The {@link UncertainValue} against which to compare
* @return int
*/
@Override
public int compareTo(
final UncertainValue uv1
) {
int res = mValue.compareTo(uv1.mValue);
if (res == 0)
res = Double.compare(mSigma, uv1.mSigma);
return res;
}
@Override
public double doubleValue() {
return mValue.doubleValue();
}
@Override
public boolean equals(
final Object obj
) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
final UncertainValue other = (UncertainValue) obj;
return Double.doubleToLongBits(mSigma) == Double.doubleToLongBits(other.mSigma)
&& Objects.equals(mValue, other.mValue);
}
@Override
public float floatValue() {
return mValue.floatValue();
}
public String format(
final BasicNumberFormat bnf
) {
if (mSigma == 0.0)
return bnf.format(mValue);
else
return bnf.format(mValue) + "\u00B1" + bnf.format(mSigma);
}
public String formatLong(
final BasicNumberFormat bnf
) {
return bnf.format(mValue) + "\u00B1" + bnf.format(mSigma);
}
/**
* Returns the fractional uncertainty.
*
* @return sigma/value
*/
public double fractionalUncertainty() {
return mSigma / mValue;
}
@Override
public int hashCode() {
return Objects.hash(mSigma, mValue);
}
@Override
public int intValue() {
return mValue.intValue();
}
/**
* @return true if the uncertainty component is non-zero
*/
public boolean isUncertain() {
return mSigma > 0.0;
}
@Override
public long longValue() {
return mValue.longValue();
}
public UncertainValue multiply(
final double k
) {
return new UncertainValue(k * mValue, k * mSigma);
}
@Override
public String toHTML(
final Mode mode
) {
return toHTML(mode, new BasicNumberFormat());
}
public String toHTML(
final Mode mode, final BasicNumberFormat bnf
) {
switch (mode) {
case TERSE:
if (uncertainty() != 0)
return bnf.formatHTML(mValue) + "±" + bnf.formatHTML(uncertainty());
else
return bnf.formatHTML(mValue);
case NORMAL:
case VERBOSE:
default:
return bnf.formatHTML(mValue) + " ± " + bnf.formatHTML(uncertainty());
}
}
/**
* Returns the one-σ uncertainty.
*
* @return double
*/
public double uncertainty() {
return mSigma;
}
/**
* Returns the variance = sigma<sup>2</sup>
*
* @return variance
*/
public double variance() {
return mSigma * mSigma;
}
@Override
public String toString() {
return mValue + "\u00B1" + mSigma;
}
}
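For readers who want to sanity-check weightedMean(...) numerically, the same inverse-variance weighting can be sketched in a few lines of Python (an independent illustration, not part of the library):

```python
import math

def weighted_mean(values, sigmas):
    """Variance-weighted mean of independent normal measurements: each
    value is weighted by the inverse of its variance, the maximum
    likelihood estimator implemented by weightedMean(...) above."""
    if any(s <= 0.0 for s in sigmas):
        raise ValueError("all uncertainties must be strictly positive")
    inv_vars = [1.0 / (s * s) for s in sigmas]
    mean = sum(w * v for w, v in zip(inv_vars, values)) / sum(inv_vars)
    sigma = math.sqrt(1.0 / sum(inv_vars))
    return mean, sigma

# Two measurements of the same quantity: 10 ± 1 and 12 ± 2
m, s = weighted_mean([10.0, 12.0], [1.0, 2.0])
```

The tighter measurement dominates: the result sits much closer to 10 than to 12, with a combined uncertainty smaller than either input.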
Bitcoin faucet bot 3.0
This is the code for "Watch Me Build a Trading Bot" by Siraj Raval on YouTube: llSourcell/Watch-Me-Build-a-Trading-Bot. Hope you guys enjoy this! If you enjoy this video, please like and share it, and don't forget to subscribe to this channel for more updates. Build a trading bot in Python: a world-class automatic crypto trading bot lets you copy traders, manage all your exchange accounts, use market-making and exchange/market arbitrage, and replicate or backtest your trading.
With Alpha Bot you can request charts for stocks, cryptocurrencies, forex, commodities, indices, and more. You can easily add indicators, change candle types, timeframes, and further customize the chart. Alpha Bot is already the premiere Discord bot for TradingView charts used by the biggest financial communities, but we also support TradingLite, Finviz, GoCharting, and Bookmap for advanced users. Stay up to date on price action with Alpha Bot’s price alerts for stocks and cryptocurrencies without the need to leave the Discord chat and switch apps.
Setting a price alert with Alpha is as easy as typing a quick command. A trading client for discretionary traders with a powerful order execution system for Binance, FTX, and Bybit. It’s built for traders who prefer a keyboard over clicking a mouse. Ichibot is a powerful tool that provides you the raw building blocks to design any kind of elaborate order you can think of.
Think of it like lego for cryptocurrency traders. Always know the price of almost any asset with Alpha. Alpha’s price feature is a lightning fast way to reference the price and percent change in price over the past 24 hours of thousands of assets. For times when additional information is needed, the info command helps you get a top down view about a particular stock or cryptocurrency.
- Elite dangerous data trader
- Eso best guild traders
- Gutschein trader online
- Lunchtime trader deutsch
- Amazon review trader germany
- Smart trader university
- Auszahlung dividende volksbank
Elite dangerous data trader
This is the code for „Watch Me Build a Trading Bot“ by Siraj Raval on Youtube. Use Git or checkout with SVN using the web URL. Work fast with our official CLI. Learn more. If nothing happens, download GitHub Desktop and try again. If nothing happens, download Xcode and try again. There was a problem preparing your codespace, please try again. This is the code for this video on Youtube by Siraj Raval called Watch me Build a Trading Bot.
Skip to content. This is the code for „Watch Me Build a Trading Bot“ by Siraj Raval on Youtube MIT License.
Eso best guild traders
Invest in all the cryptocurrencies that your exchange offers. Market makers are the best friend of every exchange or crypto project: now you can trade on the spread as well, and make the markets. A win-win for everyone. Our arbitrage tool is your brand-new friend. Produce your own technical analysis to get the best buy and sell signals from your method. Popular indicators and candle patterns are: RSI, EMA, Parabolic SAR, CCI, Hammer, Hanging Man, but we have a lot more.
Practice daring new techniques risk-free while mastering Cryptohopper’s tools. Even backtest your bot and your methods, so you can keep tweaking until it works. Our affiliate program allows you to earn a commission on a monthly basis as long as your customers are active. Join the fastest growing and most active social trading platform.
While there are a variety of cryptocurrency trading bots currently available, such as 3commas, Cryptohopper aims to empower traders by providing an easy-to-use and fully featured service that lets its users trade several cryptocurrencies while eliminating human frailties from their trading process. The semi-automated bot doesn’t guarantee profits and merely allows traders to make smarter trades based on algorithmically set trading strategies and external signals.
The platform is owned and run by Cryptohopper BV which is based in Amsterdam, The Netherlands, and the cryptohopper.
Gutschein trader online
Invest in all the cryptocurrencies that your exchange offers. Market makers are the best friend of every exchange or crypto project: now you can trade quickly on the spread as well, and make the markets. A win-win for everyone. Our arbitrage tool is your brand-new best friend. Create your own technical analysis to get the very best buy and sell signals from your strategy.
Popular indicators and candle patterns are: RSI, EMA, Parabolic SAR, CCI, Hammer, Hanging Man, but we have much more. Practice bold new techniques risk-free while mastering Cryptohopper’s tools. Even backtest your bot and your strategies, so you can keep tweaking until it works. Our affiliate program enables you to earn a commission on a monthly basis as long as your customers are active.
Join the fastest growing and most active social trading platform.
Lunchtime trader deutsch
We are professional traders and want to implement an automated trading strategy using a trading bot. Skills: Coding, Programming.
Hello, I have experience building trading bots using Python, and have worked with Binance, Cryptopia, BitMEX, Bittrex, Bitfinex and so on. I will work very hard and do my best for you. Best Regards. Hi, I read your full description and I am interested in your project. As you can see from my profile, I have top skills in automated trading software for cryptocurrency and forex as well as stocks.
I have experiences of Meta More.
Amazon review trader germany
Many people get lured to grid trading because of how simple it is to trade and profit from the market. What grid traders would typically do is buy and sell at each level, and then take profit at every other interval: as the market goes down they accumulate Long positions at each level, and as the market goes back up they unload their Long positions one by one and build up their Short positions.
For example, they have calculated their risk where their account would only wipe out if the market goes against them for X number of levels. You simply place your orders at every level, and when it hits a Take Profit level, you make money. The only time you lose is when you run out of money to hold your losing positions as the market goes against you.
So what traders will do is backtest this type of strategy and just curve-fit it to the historical data. But almost inevitably, traders who trade this way will have their accounts wiped out when trading live. A few years back, I had extensively researched and backtested such grid trading strategies, and many of them looked great when backtested.
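To make the mechanics concrete, here is a deliberately naive, long-only grid sketch (all prices, levels and the step size are made up; fees, position sizing and margin are ignored) showing how small profits accrue one step at a time while losing lots pile up in inventory:

```python
def run_grid(path, levels, step):
    """Toy long-only grid bot: open a one-unit buy whenever the price
    crosses down through a grid level, and close each open lot for +step
    of profit once the price climbs back to entry + step. Lots that never
    recover stay open -- the drawdown risk described in the text."""
    open_entries = []   # entry prices of lots still held
    realized = 0.0      # booked profit
    prev = path[0]
    for price in path[1:]:
        # fills on the way down: every level crossed opens a new lot
        for lvl in levels:
            if prev > lvl >= price and lvl not in open_entries:
                open_entries.append(lvl)
        # take-profits on the way up
        still_open = []
        for entry in open_entries:
            if price >= entry + step:
                realized += step
            else:
                still_open.append(entry)
        open_entries = still_open
        prev = price
    return realized, open_entries

# Price dips through two levels, then only partially recovers.
realized, losing_lots = run_grid([10.5, 9.5, 8.5, 10.5],
                                 levels=[9.0, 10.0], step=1.0)
```

On this path the bot books one small profit while still holding a losing lot, which is exactly the asymmetry described above: steady small wins until a sustained move against you exhausts the capacity to hold open positions.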
Smart trader university
Build a trading bot for crypto. Written by Beastlyorion. Followers can copy-trade on bots via an easy-to-use mobile app. News is the talk of the day, or more specifically building a Binance trading bot that buys or sells based on news sentiment. The cryptocurrency market is still quite young compared to the stock market, and therefore news is especially influential in the crypto world. NexFolio is an AI crypto trading bot to automate crypto trading.
Creators can build sophisticated bots in our browser-based Python editor. Trading bots, or trading algorithms, are programs designed to automatically place trading positions on your behalf, operating on a series of pre-defined parameters. These parameters can also be referred to as the logic which drives the buy or sell signals of the bot. "Build a Real-Time Crypto Trading Bot in under Lines of Code", posted on March 22 by coin4world: in this video, I show you how to use Python to receive cryptocurrency price data over websockets, how to apply TA-Lib indicators to completed candlesticks, and how to programmatically execute Ethereum orders based on these.
Minimize the risks and efforts, and trade with live signals and profitable trading strategies. They are incubated at SEMIA, a startup incubator located in Strasbourg. Trality is the platform for anyone to create and invest through automated crypto trading bots.
Auszahlung dividende volksbank
Build a trading bot, no code: a world-class automatic crypto trading bot lets you copy traders, manage all your exchange accounts, use market-making and exchange/market arbitrage, and mimic or backtest your trading. Trading bots, or trading algorithms, are programs designed to automatically place trading positions on your behalf, operating on a series of pre-defined parameters. There are a lot of components to think about, data to collect, exchanges to .
The online trading space is jam-packed with well-designed websites that claim to offer guaranteed forex profits through an automated bot. Such websites use hyperbolic terminology that promises fast and consistent returns by simply letting an AI trading bot buy and sell currencies on your behalf. On the one hand, the process of allowing an automated algorithm to trade autonomously is in fact a reality.
However, most bot providers in the forex arena are nothing short of marketing companies offering a service that they cannot honour. UK-based education and forex signal provider Learn 2 Trade argues that the most effective AI trading bots active in the market are those that rely on an element of human interaction. By this, the team explains that the bot is only as good as the individual that has programmed it.
The good news for those of you who wish to engage with an automated trading bot, but have virtually no knowledge of the underlying design process, is that there are now a number of established platforms that offer a simple drag-and-drop building process. Crucially, artificial intelligence, irrespective of where it is being utilized, is only as good as the underlying software.
This is because AI technology bases its decision-making processes on predefined conditions. Learn 2 Trade explains that in the multi-trillion-dollar world of forex, this means that automated bots must be designed to follow effective trading strategies.
The ranching community has unique needs currently not served by traditional auction or classified-ads websites. It isn’t easy to sell a cow on eBay, and Craigslist is limited by its city-based listings. Ranch Network is designed to fill that void. Ranch Network seeks to connect livestock producers, managers, employees, vendors, and service providers to create an open community with the goal of increasing profitability and efficiency. The site aims to be the one-stop shop for those in the industry looking to buy, sell, hire, advertise, and discuss current issues, by providing a simple and efficient platform while lowering transaction costs.
Ranch Network is a free platform that allows livestock producers to sell their livestock, ranchers to buy hay and supplies, service providers to advertise their companies and much more. Users can create listings, search by location and category, message sellers and discuss the issue of the day on the Ranch Network forum.
Ed Lipkins and Jesse Womack are the co-founders of Ranch Network. Ed and Jesse became friends at college. On a deer hunt in late 2011, they conceived the general idea of Ranch Network from what they identified as inefficiencies and biases in the current system.
Taoti Creative built the Ranch Network site with Drupal for three main reasons. First, we wanted to build the site so it could scale as it grew; we needed the ability to support thousands of listings and users. Second, we wanted to build the site quickly with a focus on cost: by utilizing community-built modules, we were able to shorten the development process and launch the site quicker and cheaper than with a custom or proprietary CMS. In particular, the integration of the Location module with the Google Maps API allowed us to provide location-based searching. Finally, we wanted to build a site that could be improved after launch, and we felt that Drupal was best suited to offer a stable live site that could be updated with new features and design after launch.
Ranch Network aims to connect buyers and sellers in the ranching community. It also hopes to connect employers with those looking for jobs, and people who want to discuss issues in the ranching community. The end goal is to be the go-to website for all things in the ranching industry.
Users can register for a free account and then post a listing for everything from cattle and hay, to job postings and ranching services. For users who are posting a listing, we use custom fields that adapt to a user’s choices to provide an intuitive user experience. These custom forms mean that a user never sees options that don't apply to his or her listing. This also allowed us to limit the number of content types while providing many different options for listings as diverse as livestock, supplies, hay and land sales.
Users also can contact other users through the site. This includes prospective buyers contacting sellers and job seekers contacting employers. We chose to integrate this functionality to protect our user’s privacy and limit their exposure to spam.
We have also built out a Ranch Network Forum. By using core Drupal functionality, we limited our initial investment in this feature. As users begin to utilize the forum we plan to add additional features and functionality.
Finally, we built a custom searching feature. Users can search by location, by keyword or by listing category. This powerful search feature provides users a simple and intuitive way to find the information or listings they are seeking.
Drupal proved to be an excellent platform to build this robust and complex website in a way that was simple and intuitive to users.
The coding was developed by Jon Diamond (strategictech).
This contribution was used to subscribe new users who checked a subscribe box to the client's MailChimp list.
Easily build a Progressive Web App for your Nuxt.js application to improve your app's performance.
Nuxt Style Resources - Share variables, mixins, functions across all style files (no @import needed)
Sentry module for Nuxt.js to help developers diagnose, fix, and optimize the performance of their code
Stylelint module for Nuxt.js. A mighty, modern linter that helps you avoid errors and enforce conventions in your styles.
Add Tailwind CSS to your Nuxt application in seconds with PurgeCSS included for minimal CSS.
Use markdown, JSON, YAML, XML or CSV in your nuxt application to easily write content fetched using a MongoDB like API.
It's really simple to start with Nuxt, but we can make it even simpler by adding nuxt-buefy.
vue-mq module for Nuxt.js. Define your breakpoints and build responsive design semantically and declaratively in a mobile-first way with Vue.
Module to join nuxt and vue-material framework so you can build well-designed apps with dynamic themes and components with an ease-to-use API.
Automatically optimizes images used in Nuxt.js projects (jpeg, png, svg, webp and gif).
Easily integrate Storybook in your Nuxt.js application to design, build, and organize your UI components in isolation.
OneSignal is a Free, high volume and reliable push notification service for websites and mobile applications.
Add Rollbar.js to your Nuxt.js app to automatically capture and report errors in your applications.
Import the StripeJS client script to accept payments, send payouts, and manage your businesses online.
vuex-router-sync module for Nuxt to effortlessly keep vue-router and vuex store in sync.
Nuxt module to create new _headers, _redirects and netlify.toml files for Netlify or to use existing ones
Nuxt.js module generate meta-tags for social networks - Facebook, Twitter and LinkedIn (and the rest uses OG tags, such as Discord etc.).
Add Matomo analytics to your nuxt.js application. This plugin automatically sends first page and route change events to matomo
ngrok exposes your localhost to the world for easy testing and sharing! No need to mess with DNS or deploy just to have others test out your changes
A Cross-browser storage for Vue.js and Nuxt.js, with plugins support and easy extensibility based on Store.js
Optimised images for NuxtJS, with progressive processing, lazy-loading, real-time resizes and providers support.
An image loader module for nuxt.js that allows you to configure image style derivatives.
nuxt-mobile-detect is a wrapper around mobile-detect.js for nuxtjs. It can be used client side and server side.
Nuxt.js module that uses netlify cache to speed up redeploy for Nuxt.js version < 2.14
A nuxt.js module that implements a universal api layer, same-way compatible between server and client side.
// Copyright 2012 Samuel Stauffer. All rights reserved.
// Use of this source code is governed by a 3-clause BSD
// license that can be found in the LICENSE file.
package metrics
import (
"math/rand"
)
type uniformSample struct {
reservoirSize int
values []int64
count int
}
// NewUniformSample returns a sample that randomly selects from a stream. It uses
// Vitter's Algorithm R to produce a statistically representative sample.
//
// http://www.cs.umd.edu/~samir/498/vitter.pdf - Random Sampling with a Reservoir
func NewUniformSample(reservoirSize int) Sample {
return &uniformSample{reservoirSize, make([]int64, reservoirSize), 0}
}
func (sample *uniformSample) Clear() {
sample.count = 0
}
func (sample *uniformSample) Len() int {
if sample.count < sample.reservoirSize {
return sample.count
}
return sample.reservoirSize
}
func (sample *uniformSample) Update(value int64) {
sample.count++
if sample.count <= sample.reservoirSize {
sample.values[sample.count-1] = value
} else {
r := rand.Intn(sample.count) // uniform index in [0, count)
if r < sample.reservoirSize {
sample.values[r] = value
}
}
}
func (sample *uniformSample) Values() []int64 {
return sample.values[:minInt(sample.count, sample.reservoirSize)]
}
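The same Algorithm R logic as the Go code above, sketched in Python for quick experimentation:

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Vitter's Algorithm R, mirroring uniformSample above: keep the first
    k items, then give the i-th item a k/i chance of replacing a random
    slot, so every item is retained with equal probability k/n."""
    rng = rng or random.Random()
    reservoir = []
    for i, value in enumerate(stream, start=1):
        if i <= k:
            reservoir.append(value)
        else:
            j = rng.randrange(i)   # uniform index in [0, i)
            if j < k:
                reservoir[j] = value
    return reservoir
```

When the stream is shorter than the reservoir, every item is kept, which corresponds to the `count < reservoirSize` branch of `Len()` and `Values()` in the Go version.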
Disclaimer: I don’t pretend to give scientific results based on accurate polling or any kind of scientific method (outside of observation). This is just a summary of my observations, so treat it as what it is meant to be: an opinion based on my interactions and experience. Furthermore, since my experience extends only to Senegal and Cote d’Ivoire, I’ll scope my remarks to these two countries even though the title says West Africa.
I was fortunate to have the opportunity to work remotely this summer in both Dakar and Abidjan and it was great to be able to experience my daily developer routine within the slower pace and context of these cities. I was also able to network with local developers and the companies that hire them and get feedback from both parties. It was mind opening and I’ve walked away with a new insight and passion for bettering certain aspects of the current situation I will dwell on shortly.
Well, I’ll start with the easiest: being home.
On top of being with my family, seeing my aging parents on a daily basis was invaluable. Being able to tread the same roads and alleys I’d been on as a teenager, with a new set of eyes and different experiences, was great. Seeing old faces, catching up on common acquaintances; in short, the whole experience of being back after a long time away is good. It’s like finding an old part of yourself that you thought was gone for good.
Getting past the nostalgic personal part, what was really good about banging out code locally?
- Internet connection: I opened an Orange 4G mobile prepaid account, as most people do, and to summarize my experience in Dakar, I never felt that the connection was lacking or unusable. The same went for the home DSL connection, where I was able to watch Netflix at my leisure without a noticeable drop in quality, so pushing to GitHub was definitely not an issue. It was not all rosy though, but I’ll expand on that as part of the Bad.
- Co-Working Spaces: Outside of the Coders4Africa offices, I worked most of the time from Jokkolabs, and I enjoyed the space and the professionalism of the staff. Whether it was the Internet connection or finding a private room for calls, everything was handled with the professionalism of a well-run operation. Besides the local startups working out of the space, I could pick out expats and other nomads working there just like me.
- The Developer Ecosystem: The constant among the majority of the developers I met was how eager they were to get better, asking for tips and feedback. There are actors making things happen in order to elevate the skill level (notably Leger Djiba and his A2DG initiative), with meetups and coding events being pretty frequent.
For every head there needs to be a tail, so I also have to talk about the negatives I either experienced or that were brought up during discussions with local actors.
- Bandwidth: As mentioned earlier, bandwidth was pretty good in Dakar, but Abidjan was a different story, with slower bandwidth in general and areas where I would just lose the connection. This was with an Orange SIM card as well. When it worked it was decent, and I was able to get my work done properly. Your mileage may vary depending on the country.
- Internet cost: If I am not mistaken, decent Internet access was declared a human right, but operators are still treating it as a privilege. The prepaid model is the de facto mode of Internet access in most West African countries, but it can run pretty expensive. If I remember correctly, 24 hours of 4G access (2 GB cap) was billed at about 2 dollars in Dakar, which adds up if you don’t pay attention. A home plan with unlimited data on 10 Mbps ADSL will run you about $65 a month.
- Skills Gap: I mentioned earlier that there is a young and eager local workforce, but what surfaced through my interactions (and I am talking here from my background and experience as a front-end developer) is that, for the most part, they are developing with “antiquated” toolsets, frameworks and processes. I want to clarify that this is not a case of me being a “let’s play with the latest industry toy” fanboy, but when you have developers sharing code over email instead of through a repository, you know you have a serious problem. This whole discussion deserves an article of its own to get into the other symptoms, causes and solutions, but there is an undeniable skills gap in general when it comes to software development as it is done by the best. This is one of the reasons why we originally started Coders4Africa in 2009: to create a pan-African community of developers while at the same time providing the training needed to eliminate that skills gap. This has evolved into Gebeya, launched last month in Ethiopia and the first in what we hope will be a network of professional training institutions focused on producing polished and skilled techies for the local markets.
Seriously, there is nothing I would qualify as ugly so I would just use this section to summarize an issue that was brought up quite often, both in Dakar and Abidjan, in an attempt to start a discussion. It goes like this:
You will hear complaints from local developers that they are not valued enough by local companies, with major (enterprise) projects being sent out to Northern Africa (Tunisia or Morocco) or France to be coded at a premium, therefore robbing local companies of much-needed revenue and developers of much-needed opportunities to build and scale major projects. On the flip side, you will hear horror stories from companies that tried local talent and ended up getting burned badly with awkward solutions that were poorly implemented, if implemented at all, and are a maintenance nightmare, making them resolute in exporting the coding work on their next project to make sure they end up with a quality product.
Now, before going any further, I am not implying that ALL local developers suck, and/or that ALL companies have had bad experiences using local talent to get software built. But based on my limited set of interactions, this was a recurring comment, which makes me think the issue is widespread enough to be noticeable by all parties. As I mentioned earlier, this deserves an article of its own that I will hopefully get to later; there are many moving parts to it and I want to do it justice. As they say, where others see problems an entrepreneur sees opportunities, and I am working with others to bring about a credible solution.
I would summarize this first post by saying that it was an eye-opening experience for me, and I hope to soon be one of the forces, with Gebeya, Coders4Africa and others I can’t yet talk about, that are working locally (and internationally) to improve the ecosystem so that African software development becomes a force to be reckoned with.
|
OPCFW_CODE
|
Implement webhook signature validation
First proposal for implementing signature validation as documented here: https://developer.atlassian.com/cloud/trello/guides/rest-api/webhooks/#webhook-signatures
To make this usable via the Automation Engine, the validation should probably be moved to ConvertJsonToWebhookNotification.
Also I wasn't sure where the best place to put the Trello secret would be.
Please let me know your thoughts and I will move things around accordingly.
also, thanks for this library :) so far, it has been very pleasant to work with.
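For reference, the scheme in the linked Atlassian docs boils down to a base64-encoded HMAC-SHA1 of the raw request body concatenated with the callback URL, keyed with the application secret and delivered in the X-Trello-Webhook header. A hedged, language-neutral sketch of the check in Python (the library itself is C#; function and parameter names here are illustrative, not the PR's actual API):

```python
import base64
import hashlib
import hmac

def is_valid_signature(json_body: str, signature: str, webhook_url: str, secret: str) -> bool:
    """Check a Trello webhook signature.

    Per the linked docs, Trello sends base64(HMAC-SHA1(body + callbackURL)),
    keyed with the application secret, in the X-Trello-Webhook header.
    """
    digest = hmac.new(
        secret.encode("utf-8"),
        (json_body + webhook_url).encode("utf-8"),
        hashlib.sha1,
    ).digest()
    expected = base64.b64encode(digest).decode("ascii")
    # Constant-time comparison avoids leaking match length via timing
    return hmac.compare_digest(expected, signature)
```

In the C# version, the equivalent constant-time comparison would presumably be something like CryptographicOperations.FixedTimeEquals rather than a plain string equality.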
Hi @compujuckel, Thank you for the PR, and Nice that you would like to contribute 😎
I've been meaning to include the signature validation but have not yet come around to it so it is a good addition 🙏
Regarding the Secret, I was thinking it might be better to add it to the TrelloClientOptions ... That way, every time you have a TrelloClient you also have the secret at hand, and you can better use User-Secrets if you dependency-inject the TrelloClient
As for use in the automation engine, that could also be a welcome addition... I could see the ProcessingRequest with the JSON getting the "signature" and "webhookUrl" as optional params, and as for reuse of the new ValidateSignature method, that could perhaps become a new static helper class (WebhookSignatureValidator.IsValid(string json, string signature, string webhookUrl))
So if you are up for these changes I would be most grateful for the addition 💪
Sure, sounds good!
One last question: What should happen in AutomationController.ProcessJsonFromWebhookAsync when the signature validation fails? throw an AutomationException?
It is a good question (and same question for the one you already implemented)...
My only worry about the direct return is if someone implements it wrong and does not understand why.
Yes, that's also my main concern. As an alternative, what do you think about adding a new parameter (e.g. EnableWebhookSignatureValidation) to TrelloClientOptions instead of implicitly enabling it when the API secret is set?
That way, you couldn't "accidentally" enable this feature and then wonder why nothing is working.
Gut feeling is that it would not really help with that and would just annoy. To me, the process of sending the signature and URL to the processor is the "enabling" of the feature... (Btw., if one does that but has not set the secret, it would probably be smart to throw an exception in that case)
So:
no secret and no signature/url = No validation
Secret and no signature/url = No validation
No secret and signature/url = Exception
Secret + signature/url (all set correctly) = success validation
Secret + signature/url (wrong data) = validation with silent return
If this causes issues for people implementing it, we can implement a debug mode in options later
@compujuckel Are you still up for completing this PR, or do you wish me to accept it as is and do the final tweaking?
I'll most likely have time to work on it again in a few days, so if you're fine with waiting until then I'll finish it
@compujuckel No worries; take your time. It was just if you did not feel like it, it would be a shame to abandon the good work so far 😊
Awesome :-) ... I will merge and add you to the credits in the changelog. I expect version 1.9.0 with this included out very soon (today or tomorrow)
Again, thanks for contributing 💪
|
GITHUB_ARCHIVE
|
Just as Facebook races ahead in the consumer social networking market and builds bridges to the business arena with Facebook pages, other powerful apps in the Enterprise sector are revolutionizing internal IT operations.
One of these you probably already know about: Microsoft SharePoint Server.
This wonderful collaboration tool allows companies to store, share and track documents online. Further email integration with Outlook and even Exchange speeds up the flow and transmission of information inside (and between) departments in large organizations.
And, yes, Virtual Internet offers the above in the form of a hosted service designed specifically to boost IT productivity. If Hero is an overused term, Superhero may more adequately describe the accolades poured on IT heads that successfully integrate company policies via SharePoint. It is that popular and that useful!
But, while SharePoint is an amazing product, one of Microsoft’s best, there is another tool also making waves in the Enterprise software collaboration market. Its name is Yammer.
This slick, highly addictive tool bears an uncanny resemblance to Facebook in terms of ease-of-use and social engagement. But, in this case it caters for Business Enterprises ONLY.
Within seconds of signing up under your company email address, you can invite your co-workers who will then hit the ground running in either an online portal or a task bar tool that allows you to share thoughts, files, post voting polls or generally share company information in a secure web environment. The impact on productivity cannot be overstated. Plus, you can do most of this using the free Yammer version.
But, as usual, when you are sometimes faced with two tools that may slightly overlap each other in terms of intent and functionality, you may decide to forgo testing one (or sometimes both of them), in a bid to avoid making a decision!
This hesitation is natural but not necessary, since in this case Yammer and SharePoint have teamed up to do what they do best: Collaborate.
This means that since the latter part of 2010, SharePoint customers can access Yammer feeds from their SharePoint portal site.
Watch Yammer Video
Watch SharePoint Video
Curious about the features? Here are some of them:
- Put a Yammer feed on virtually any SharePoint page.
- View and switch between Yammer feeds directly inside SharePoint.
- Post messages, links and files to Yammer directly from SharePoint.
- Works with Yammer External Networks – your employees can securely communicate with external parties from inside SharePoint.
- Access profile information for Yammer members within SharePoint.
- Administrators control where Yammer feeds appear using SharePoint’s built-in Web Part controls and templates.
- Configure a Yammer feed to appear as read-only, and optionally make it visible to your SharePoint users without them needing a Yammer account.
- Search integration – Yammer messages appear alongside SharePoint search results.
- Enable Single sign-on for easy access
- Document list integration – post files to Yammer directly from SharePoint document lists.
Ok, so there is one catch for Yammer users: It will cost $5 per user per month to upgrade to the package that allows this collaboration.
However, the rich feature sets provided under the Yammer Gold plan including keyword monitoring, single sign-on, custom branding and multiple domains may soften the blow for IT departments trying to squeeze every cent (or penny) out of each Dollar (or Pound).
Remember that Yammer is also becoming hyper-popular due to its rich mobile applications, which allow you to interact quickly with co-workers and keep track of important company conversations, instantly. This is an extra boon for both Microsoft Mobile and non-Microsoft mobile users. The reach of both products is magnified through the partnership between Yammer and Microsoft.
While by no means certain, there are rumors that Yammer may go one step further and integrate with Office 365 SharePoint as well.
Both these products are prime examples of cloud-based services that are magnifying the productivity gains for users that embrace online collaborative tools WITHOUT the prohibitive costs of hosting software in-house.
To get started on Yammer Gold click here
To get started on Virtual Internet Microsoft Hosted SharePoint click here
This article was brought to you by VI.net, for dedicated server hosting, cloud servers and 24/7 support visit our site here www.vi.net
|
OPCFW_CODE
|
The message is the message
I have been ranting about bad error messages, so in my own work, error messages had better be helpful. At least I try.
In the recent milestone 6 of the JDT, a significant novelty was actually mostly about the wording of a few error/warning messages issued by the null analysis of the JDT compiler. We actually had quite a discussion (no, I’m not expecting you to read all the comments in the bug :)).
Why did we go through all this instead of using the time to fix more bugs? Because the entire business of implementing more compiler warnings and in particular introducing annotation-based null analysis is to help you to better understand possible weaknesses of your code. This means our job isn’t done when the error message is printed on your screen, but only when you recognize why (and possibly how) your code could be improved.
So the game is:
when you see one of the new compiler messages
“what does it tell you?“
Both methods basically have the same code, yet lines 14-17 are free of any warnings whereas the corresponding lines 24-27 have one warning and even an error. What does that tell us?
Here are some of the messages:
line 10: Redundant null check: The variable val1 cannot be null at this location
line 12: Null comparison always yields false: The variable val1 cannot be null at this location
Before bug 365859 the second method would show the same messages, giving no clue why after the initial start all the same code gives different results later. The initial improvement in that bug was to update the messages like this:
line 20: Redundant null check: The variable val2 is specified as @NonNull
line 22: Null comparison always yields false: The variable val2 is specified as @NonNull
Alright! Here lies the difference: in the first method, compiler warnings are based on the fact that we see an assignment with the non-null value "OK" and carefully follow each data-flow from there on. Non-null definitely holds until line 15, where potentially (depending on where b takes the control flow) null is assigned. Now the check in line 16 appears useful.
By contrast, the warnings in the second method tell us that they are not based on flow analysis, but on the mere fact that val2 is declared as of type @NonNull String. This specification is effectual independent of location and flow, which has two consequences: now the assignment in line 25 is illegal; and since we can’t accept this assignment, line 26 still judges by the declaration of val2, which says:
line 25: Null type mismatch: required ‘@NonNull String’ but the provided value is null
line 26: Redundant null check: The variable val2 is specified as @NonNull
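The two methods themselves were shown as editor screenshots in the original post and are lost here. The following is only a reconstruction, inferred from the quoted messages and line numbers (the method shapes, the boolean b, and all names are assumptions); a local @NonNull stand-in is declared so the sketch compiles without the org.eclipse.jdt.annotation bundle:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Target;

class NullMessagesDemo {
    // Stand-in for org.eclipse.jdt.annotation.NonNull, so this compiles alone.
    @Target({ElementType.TYPE_USE, ElementType.LOCAL_VARIABLE})
    @interface NonNull {}

    // First method: JDT judges by flow analysis, following the value "OK".
    static String flowBased(boolean b) {
        String val1 = "OK";
        if (val1 != null) { }   // "Redundant null check: ... cannot be null at this location"
        if (val1 == null) { }   // "Null comparison always yields false: ..."
        if (b) {
            val1 = null;        // flow now admits null ...
        }
        if (val1 != null) {     // ... so this later check is genuinely useful: no warning
            return val1;
        }
        return "was null";
    }

    // Second method: JDT judges by the declaration, independent of flow.
    static String declarationBased(boolean b) {
        @NonNull String val2 = "OK";
        if (val2 != null) { }   // "Redundant null check: ... is specified as @NonNull"
        if (b) {
            val2 = null;        // JDT error: "Null type mismatch: required '@NonNull String'"
        }
        return val2;            // plain javac, knowing nothing of @NonNull, still runs this
    }
}
```

Only the JDT compiler with annotation-based null analysis enabled produces the quoted diagnostics; plain javac compiles the sketch silently.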
Communicate the reasoning
Three levels to a good error message:
- “You did wrong.”
- “Your mistake was …”
- “This is wrong because …”
line 31: Null type mismatch: required ‘@NonNull String’ but the provided value is null
line 32: Null type mismatch: required ‘@NonNull String’ but the provided value is specified as @Nullable
line 34: Null type mismatch: required ‘@NonNull String’ but the provided value is inferred as @Nullable
Line 31 is obvious.
Line 32 is wrong because in is declared as @Nullable String: null is a legal value for in, but since it’s not legal for tmp2, the assignment is wrong.
In line 34 we are assigning a value that has no nullness specification; we say, unknown has a “legacy” type. From that alone the compiler cannot decide whether the assignment in line 34 is good. However, using also the information from line 33 we can infer that unknown (probably) has type @Nullable String. In this particular case the inference is obvious, but the steps that lead to such a conclusion can be arbitrarily complex.
What does this distinction tell you?
The error in line 31 is a plain violation of the specification: tmp1 is required to be nonnull, but the assignment attempts to definitely break that rule.
The error in line 32 denotes the conflict between two contradictory declarations. We know nothing about actual runtime values, but we can tell without any doubt that the code violates a rule.
Errors of the type in line 34 are reported as a courtesy to the user: you didn’t say what kind of variable unknown is, thus normally the compiler would be reluctant to report problems regarding its use, but looking a bit deeper the compiler can infer some of the missing information. Only in this category does it make sense to discuss whether the conclusion is correct. The inference inside the compiler might be wrong (which would be a compiler bug).
Sources of uncertainty
Of the previous messages, only the one in line 31 mentions a runtime fact, the remaining errors only refer to possibilities of null values where no null value is allowed. In these cases the program might actually work – by accident. Just like this program might work:
While this is not a legal Java program, a hypothetical compiler could produce runnable byte code, and if the method is invoked with an argument that happens to be a String, all is well – by accident.
While we have no guarantee that things would break at runtime, we know for sure that some rule has been broken and thus the program is rejected.
What can we tell about this assignment? Well … we don’t know: it’s not definitely bad, but it’s not good either. What’s the problem? We need a @NonNull value, but we simply have no information whether unspecified can possibly be null or not. One of those legacy types again. After much back and forth we finally found that we have a precedent for this kind of problem: what does the compiler say to this snippet:
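The snippet itself was an image in the original and is lost here; a minimal reconstruction that provokes exactly the quoted warning (the method shape and names are assumptions, only the raw-typed argument named unspecified is taken from the text):

```java
import java.util.List;

class RawTypeDemo {
    // "unspecified" arrives with a raw type: legacy code without type arguments.
    static List<String> adopt(List unspecified) {
        List<String> strings = unspecified;  // <-- unchecked conversion warning here
        return strings;
    }
}
```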
Right, it says:
Type safety: The expression of type List needs unchecked conversion to conform to List<String>
meaning: we receive an argument with a type that lacks detailed specification, but we require such details on the left-hand side of the assignment. Whether or not the RHS value matches the LHS requirement cannot be checked by the compiler. Argument unspecified has another kind of legacy type: a raw type. To gracefully handle the transition from legacy code to new code with more complete type specifications we only give a warning.
The same for null specifications:
line 41: Null type safety: The expression of type String needs unchecked conversion to conform to ‘@NonNull String’
In both cases, raw types and types lacking null specification, there are situations where ignoring this warning is actually OK: the legacy part of the code may be written in the full intention to conform to the rule (of only putting strings into the list / using only nonnull values), but was not able to express this in the expected style (with type parameters / null annotations). Maybe the information is still documented, e.g., in the javadoc. If you can convince yourself that the code plays by the rules although not declaring to do so: fine. But the compiler cannot check this, so it passes the responsibility to you, along with this warning.
Tuning compiler messages
If you buy into null annotations, the distinction of what is reported as an error vs warning should hopefully be helpful out of the box. Should you wish to change this, please do so with care. Ignoring some errors can render the whole business of null annotations futile. Still we hope that the correspondence between compiler messages and configuration options is clear:
These options directly correspond to the messages shown above:
lines 31, 32: Violation of null specification
line 34: Conflict between null annotations and null inference
line 39: Unchecked conversion from non-annotated type to @NonNull type
The compiler does an awful lot of work trying to figure out whether your code makes sense: definitely, or maybe, or maybe not, or definitely not. We just decided it should try a little harder to also explain its findings. Still, these messages are constrained to be short statements, so another part of the explanation is of course our job: to educate people about the background and rationale of why the compiler gives the answers it gives.
I do hope you find the messages helpful, maybe even more so with a little background knowledge.
The next steps will be: what’s a good method for gradually applying null annotations to existing code? And during that process, what’s a good method for reacting to the compiler messages so that from throwing code at the compiler and throwing error messages back we move towards a fruitful dialog, with you as the brilliant Holmes and the compiler your loyal assistant Watson, just a bit quicker than the original, but that’s elementary.
|
OPCFW_CODE
|
import numpy as np
import theano
import theano.tensor as T

def no_noise(input):
    # needed because dnn pseudo ensemble code assumes each input / hidden layer gets noise
    return input

def dropout_noise(rng, input, p=0.5):
    srng = theano.tensor.shared_randomstreams.RandomStreams(rng.randint(999999))
    # Bernoulli(1-p) multiplicative noise
    mask = T.cast(srng.binomial(n=1, p=1-p, size=input.shape), theano.config.floatX)
    return mask * input

def beta_noise(rng, input):
    srng = theano.tensor.shared_randomstreams.RandomStreams(rng.randint(999999))
    # Beta(.5,.5) multiplicative noise
    mask = T.cast(T.sin((np.pi / 2.0) * srng.uniform(size=input.shape, low=0.0, high=1.0))**2, theano.config.floatX)
    return mask * input

def poisson_noise(rng, input, lam=0.5):
    srng = theano.tensor.shared_randomstreams.RandomStreams(rng.randint(999999))
    # Poisson noise
    mask = T.cast(srng.poisson(lam=lam, size=input.shape), theano.config.floatX)
    return mask * input
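The sin² transform in beta_noise is an inverse-CDF trick: if U is Uniform(0,1), then sin²(πU/2) follows the arcsine distribution Beta(1/2, 1/2), whose CDF is (2/π)·arcsin(√x). A quick NumPy sanity check of that claim, independent of Theano:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=200_000)
mask = np.sin((np.pi / 2.0) * u) ** 2  # same transform as beta_noise above

# Beta(1/2, 1/2) has mean a/(a+b) = 1/2 and variance ab/((a+b)^2 (a+b+1)) = 1/8
mean, var = mask.mean(), mask.var()
```

Unlike dropout's Bernoulli mask, this mask is continuous on [0,1] but piles most of its mass near 0 and 1, which is why it behaves like a softened dropout.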
|
STACK_EDU
|
This tutorial describes how to use the OpenStack Command Line Interface (CLI) tools on Mac OS X. For example, you will learn how to list your instances and volumes by using the CLI. You will also learn how you can launch a new instance by using the CLI.
For this tutorial you’ll need the following:
- A Fuga Cloud account
- A device running OS X
In case you’re running Windows or Linux, please check out the following guides:
Step 1 - Installing Python 3
The OpenStack command line tools need Python, and the Python that ships with OS X is utterly broken, but fortunately you can install your own, up-to-date version using brew. If you haven’t installed brew on your system yet:
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Using brew we’re going to install Python 3:
brew install python
Step 2 - Installing the OpenStack Command Line Tools
Now that Python 3 is installed, we can install the OpenStack command line tools:
sudo pip3 install python-openstackclient
Step 3 - Installing the configuration file
Now, follow the steps below to install the configuration file:
- Log in to the Fuga Cloud Dashboard
- Go to Compute → Access & Security → API Access
- Click on download OpenStack RC file. This file contains all necessary configurations for the clients.
- Save this file to the folder on the machine where you have installed the OpenStack CLI clients, for example:
Run the following command to use the configuration file:
Enter your Fuga password
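The command itself is elided above; all the RC file does is export a handful of OS_* environment variables into your current shell (which is why it must be source'd, not executed as a script). A self-contained sketch using a mock RC file (the variable values are illustrative; use the real file from the dashboard, which additionally prompts for your password):

```shell
# Create a mock RC file standing in for the one downloaded from the dashboard
cat > demo-openrc.sh <<'EOF'
export OS_AUTH_URL=https://identity.example.com/v3
export OS_USERNAME=demo
export OS_PROJECT_NAME=demo-project
EOF

# Must be sourced so the variables land in the current shell
source ./demo-openrc.sh

# The openstack client reads these OS_* variables for authentication
echo "$OS_USERNAME"
```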
Step 4 - Using the Fuga CLI
You have now installed and configured the OpenStack CLI client and you can start using the Fuga CLI. The following are some examples you can try. You can also call openstack help for all available commands. For more in-depth information, check out OpenStack command-line clients.
List your instances
$ openstack server list
List your volumes
$ openstack volume list
List the images
$ openstack image list
If you only enter the command openstack, you enter interactive mode. This mode lets you interact faster with the Fuga CLI.
While in interactive mode, enter help to get all the different commands.
If you only need to find a specific command enter:
$ openstack help | grep "<your command>"
For example:
$ openstack help | grep list
This will return all list commands.
Step 5 - Creating a new server with CLI
Start an interactive session by entering the openstack command on its own.
First, create a new key named, for example, test_cli_key, or use an existing key. Creating a key with the CLI can be done with the following command:
keypair create test_cli_key
This command returns the newly generated private key. Store it somewhere safe.
Now test if the key is created:
Next up is to list the images and flavors we can choose from:
image list
flavor list
In this example I chose to create a c1.small instance named cli_test, running Debian 9 and using the newly generated key.
server create cli_test --image 5a2a94e7-3364-4bf8-a66b-ac84bc2c92de --flavor c1.small --key-name test_cli_key
After a few seconds the server is created and built.
Step 6 - Deactivating the virtualenv
If you installed the CLI tools inside a Python virtualenv rather than system-wide, run deactivate when you’re done. You’ll see your prompt revert to normal.
Reactivating the virtualenv
Any time you want to use your virtual environment again, activate that virtualenv (for example, one named “fugaio”) first. This will put you into the appropriate virtualenv with all the necessary modules and dependencies. When you’re done, simply deactivate it.
In this tutorial, you’ve learned how to use the OpenStack Command Line Interface tools on Mac OS X. By using these tools you learned how to list your instances and volumes. Beside this, you also learned how to launch a new instance by using the CLI.
More tutorials about using the Command Line Interface can be found here.
|
OPCFW_CODE
|
Linksys Wireless G Access Point Default Ip Address. 3g Wireless Router. Advanced Wireless Communications
Linksys Wireless G Access Point Default Ip Address
- (Access Points) Designated areas and passageways that allow the public to reach a trail from adjacent streets or community facilities.
- Access Point is a rocky point immediately southeast of Biscoe Point and 2 miles (3.2 km) northwest of Cape Lancaster on the south side of Anvers Island, in the Palmer Archipelago. First charted by the French Antarctic Expedition under Charcot, 1903-05.
- In computer networking, a wireless access point (WAP) is a device that allows wired communication devices to connect to a wireless network using Wi-Fi, Bluetooth or related standards.
- IEEE 802.11g-2003 or 802.11g, is an amendment to the IEEE 802.11 specification that extended throughput to up to 54 Mbit/s using the same 2.4 GHz band as 802.11b. This specification under the marketing name of Wi-Fi has been implemented all over the world. The 802.
- A unique string of numbers separated by periods that identifies each computer attached to the Internet
- (Internet Protocol address) A number assigned to each computer's or other device's network interface(s) which are active on a network supporting the Internet Protocol, in order to distinguish each network interface (and hence each networked device) from every other network interface anywhere on
- (IP addresses) Internet Protocol addresses, the numeric identification number that refers to a specific machine on the Internet.
- An Internet Protocol address (IP address) is a numerical label that is assigned to any device participating in a computer network that uses the Internet Protocol for communication between its nodes.
- Linksys by Cisco, commonly known as Linksys, is a brand of home and small office networking products now produced by Cisco Systems, though once a separate company founded in 1995 before being acquired by Cisco in 2003 .
- Manufactures the WRT54G3G cellular router.
- A preselected option adopted by a computer program or other mechanism when no alternative is specified by the user or programmer
- loss due to not showing up; "he lost the game by default"
- act of failing to meet a financial obligation
- fail to pay up
- Failure to fulfill an obligation, esp. to repay a loan or appear in a court of law
|
OPCFW_CODE
|
package ru.otus.spring.hw.service;
import java.util.List;
import java.util.stream.Collectors;
import org.springframework.stereotype.Service;
import lombok.RequiredArgsConstructor;
import ru.otus.spring.hw.model.Author;
import ru.otus.spring.hw.model.Book;
@RequiredArgsConstructor
@Service
public class IOAuthorService {
    private final IOService ioService;

    private void printAuthor(Author author) {
        final StringBuilder sb = new StringBuilder();
        sb.append("id: ").append(author.getId());
        sb.append("; ");
        sb.append("name: ").append(author.getName());
        sb.append("; ");
        final var books = author.getBooks().stream().map(Book::getTitle).collect(Collectors.joining(","));
        sb.append("books: ");
        sb.append(books);
        ioService.print(sb.toString());
    }

    public void print(List<Author> authors) {
        authors.forEach(this::printAuthor);
    }
}
|
STACK_EDU
|
Possible Memory Leak in pd-linux-0.25TEST4.tar
est at hyperreal.org
Wed Mar 31 00:03:24 CEST 1999
Guenter Geiger discourseth:
> est at hyperreal.org writes:
> > Hmm, I don't even get that far. I've tried the last 3 versions of pd
> > on RedHat 5.2 (using all their latest updates) and every one of them
> > pretty quickly freezes the whole system in a Tk file-open dialog. I
> > can't even transfer to my character-based virtual terminals or reboot
> with control-alt-delete! I also tried the latest Tcl/Tk (8.0.5) with
> > the same results.
> Mhmm, do you mean version 0.23, 0.24, and 0.25 ?
I built and tried the following, the last one is from MP's site:
> First of all never call pd under user root. (We might add a
> switch for enabling high priority, as this happens quite often)
> The freeze of your machine should go away.
Unfortunately I haven't been calling it as root. :(
> Your problem might be related (puh, .. again ) to the audio drivers.
> (Or possibly the way I access them through very short audio buffers)
> This can cause a hang in the pd main process, which is most commonly
> the problem in these cases (Nothing to do with your Tcl/Tk version).
I doubt that's the problem in my case. Coincidentally, I've done a
lot of work with buffer configuration for my audio driver. :)
> You might try the -dac or -nosound switch on version 0.24 or
> it's corresponding -noadc in version 0.25.
OK..I'll give those a try.
> What soundcard, kernel version, sounddriver do you use ?
It's a PCI128 soundcard. My kernel is the latest update from redhat:
2.0.36-3. I *did* modify the sounddriver to fix a bug in the mixer
> As you probably realized, between version 0.25TEST3 for linux and
> 0.25TEST4, we made a switch of responsibilities for the linux
> sounddriver code. We therefore have two rather different versions
> (TEST3, which is on ftp://wonk.epy.co.at/pub/pd/linux/ and version
> 0.25TEST4, which is the original version from Miller).
> Both of them are considered to really be TEST versions, as in general
> it is hard to predict how the Linux sounddrivers behave under
> "realtime" conditions, and we need feedback there as we can't possibly
> test on all soundcards, with all drivers.
If I can get PD working here, I can definitely help. I'm working on
real-time sound software myself. :)
Observing and Interpreting Gravitational Waves
The Albert Einstein Institute in Potsdam hosts one of the world-leading research groups in searching for gravitational waves from compact-object binaries composed of neutron stars and/or black holes, and inferring astrophysical and physical information upon detection.
Neutron stars and black holes are amongst the most fascinating, and exotic, objects in the Universe. Formed after the death, and subsequent supernova explosion, of massive stars, neutron stars have a mass roughly comparable to our Sun but compressed down into a space roughly equal to that of Berlin. Neutron star matter exists in conditions that are completely impossible to reproduce here on the Earth and therefore the only way that it is possible to probe physics in such extreme regimes is by observing neutron stars in the Universe. Neutron stars have already been observed in our own Galaxy as “pulsars”, which are rotating neutron stars emitting beams of electromagnetic radiation along their magnetic poles. If these beams happen to intersect the Earth as the pulsar spins, we can observe these pulses of electromagnetic emission and therefore the pulsar. While many neutron stars have been observed as pulsars in this way, it is expected that many neutron stars either do not produce such strong electromagnetic emission, or rotate such that the beam does not intersect with the Earth, and therefore remain unobserved.
There is a limit to how massive a neutron star can be – somewhere between 2 and 3.2 times the mass of our Sun – before it is simply too massive to support itself and will collapse further, probably into a black hole. Black holes are dense enough that they warp spacetime so much that even light is unable to escape to be observed by a distant observer. This presents a problem for trying to observe black holes with conventional electromagnetic telescopes. Nevertheless, indirect observations of black holes have been made. For example, by watching stars near the centre of our Galaxy it was noticed that they orbit an invisible object over one million times heavier than the Sun, which surely must be a black hole. Additionally, in so-called X-ray binaries we observe stars being pulled apart by unseen companions and, by observing the accretion of material onto the companions, we strongly suspect the presence of black holes.
In 2015 the Advanced LIGO observatories directly detected the gravitational-wave signature emitted by merging pairs of black holes hundreds of megaparsecs from the Earth. Advanced LIGO and Advanced Virgo will regularly continue to observe black hole and neutron star mergers as the detectors reach their design sensitivity. With many sources to observe, our group focuses both on ensuring that such observations happen and in exploring what science we can extract now that gravitational-wave astronomy is establishing itself as a brand-new tool for observing the Universe!
Observing compact binary mergers
Observing the signature of compact binary mergers buried within the data taken from large-scale gravitational-wave observatories is not a trivial task. The gravitational-wave signals that we see from such compact objects are faint and often hidden within the noise of the interferometric detectors. However, using highly accurate models of the gravitational-wave signals of compact binary mergers in matched-filtering algorithms, we are able to extract possible signals from the noise. Matched filtering works by multiplying, for all times, the interferometer data by the expected gravitational-wave signal, weighting by the fact that the detector is more sensitive at some frequencies than others, and then looking for any point where the resulting statistic produces a large value.
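As a toy illustration of the matched-filtering idea (not the production search pipeline), the sketch below correlates a known template against noisy data. White noise is assumed, so the per-frequency sensitivity weighting reduces to a plain correlation:

```python
import numpy as np

def matched_filter_snr(data, template):
    """Slide the template over the data and return the correlation at every
    lag. A flat (white) noise spectrum is assumed, so the noise weighting
    described in the text reduces to an unweighted correlation."""
    n = len(data)
    # Frequency-domain trick: multiplying by the conjugate template spectrum
    # computes the correlation for all time shifts at once.
    data_f = np.fft.rfft(data)
    template_f = np.fft.rfft(template, n=n)
    corr = np.fft.irfft(data_f * np.conj(template_f), n=n)
    # Normalise by the template's own power so a perfect match gives ~1.
    return corr / np.dot(template, template)

# Toy example: a short windowed sinusoid buried at a known offset.
fs = 1024
t = np.arange(fs) / fs
template = np.sin(2 * np.pi * 50 * t[:128]) * np.hanning(128)
data = np.zeros(fs)
data[300:428] += template                 # injected "signal" at sample 300
rng = np.random.default_rng(0)
data += 0.1 * rng.standard_normal(fs)     # weak white noise

snr = matched_filter_snr(data, template)
# The statistic peaks at (or very near) the injection offset.
```

The same machinery carries over to the real analysis; the difference is that the template is a full waveform model and the correlation is weighted by the detector's noise power spectral density.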
Constructing template banks
The gravitational-wave signals that Advanced LIGO and Advanced Virgo observe are dependent on the masses and spins of the compact objects being observed. Therefore matched-filtering with a binary-neutron star waveform is not able to observe a binary black hole merger. We must therefore construct a template bank of filters covering the full range of source parameters so as to be able to observe any potential compact binary merger in the data. Investigating and developing methods for constructing such template banks is one of the many areas we are working in.
Improving search sensitivity
As well as analysing the data taken by Advanced LIGO to observe gravitational-wave sources, we continue to study techniques to make these searches more sensitive, especially to certain astrophysically interesting configurations. For example, in cases where the spins of the component compact objects are not aligned with the orbital angular momentum the plane in which the system orbits can precess. In many configurations this is difficult to observe, but sometimes this precession is clearly observable in the gravitational-waveform, and this can greatly help when inferring the underlying physical parameters of the system. Unfortunately our current search techniques also have poor sensitivity to such systems. This is also true for systems with non-negligible eccentricity or where the sub-dominant gravitational-wave modes contribute strongly to the observed signal. We are working to try to develop new techniques to fill this gap in compact binary merger searches.
Pulsar timing arrays
We are also involved in the effort to detect gravitational-wave signals from observing pulsars in our Galaxy. Pulsars are often called the most precise clocks in the Universe because of the extreme regularity of their rotation and the pulses that we observe. Gravitational waves passing by pulsars can shift these observed pulses by tiny amounts. Then, by observing many pulsars simultaneously in the Galaxy, we might hope to observe the long-lived signals of orbiting compact objects many millennia before they would merge. European efforts towards gravitational-wave observation through pulsar timing occur through the EPTA (European Pulsar Timing Array) and IPTA (International Pulsar Timing Array) collaborations, both of which we are members of.
Extracting source parameters from gravitational-wave observations
The other broad aim is to develop better methods for the extraction of physical information from compact binary mergers and to investigate how extracting this information informs our understanding of the formation and evolution processes of compact binary mergers. These measurement methods rely on a set of models of compact merger gravitational waveforms, developed and used in tight collaboration with the colleagues working on Source Modeling in the department. Such models are then combined with a model for the noise of the observatories to measure their quality of fit to the observed data. We use Bayesian statistical theory to report multidimensional probability density functions for the model parameters.
What is the recipe to obtain those probability density functions?
We start with a stretch of data to analyse, and assume a particular model for the gravitational-wave signal. The a posteriori probability of a given set of model parameters is then computed by:
- choosing some parameters to compute the probability for
- computing the signal using the assumed model for those parameters
- subtracting this signal from the data
- calculating the probability of obtaining this particular residual by chance under a Gaussian process following the instrument’s noise properties
- multiplying this value by the a priori assumption to obtain the a posteriori probability.
Step 1 is handled by Markov-Chain Monte-Carlo or Nested Sampling algorithms, which try to guess relevant values to use. Step 2 is in general the most computationally expensive, so we compute Reduced Order Models for the most complex signal models. Step 4 can include additional noise modelling features, and step 5 parametrizes our understanding of gravitational-wave sources before the observation.
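The five steps can be sketched for a toy model. Everything below is an illustrative stand-in: the "signal model" is a bare sinusoid rather than a compact-binary waveform, and the noise is white rather than the coloured noise of a real detector:

```python
import numpy as np

def log_posterior(theta, data, t, sigma, log_prior):
    """Log a-posteriori probability following the five steps above,
    for a toy sinusoidal signal model with white Gaussian noise."""
    amp, freq = theta                                    # step 1: parameters
    signal = amp * np.sin(2 * np.pi * freq * t)          # step 2: compute signal
    residual = data - signal                             # step 3: subtract
    log_like = -0.5 * np.sum((residual / sigma) ** 2)    # step 4: Gaussian residual
    return log_like + log_prior(theta)                   # step 5: add the prior

# Simulated data: a known signal plus white Gaussian noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
sigma = 0.2
data = 1.0 * np.sin(2 * np.pi * 2.0 * t) + sigma * rng.standard_normal(t.size)

flat_prior = lambda theta: 0.0  # improper flat prior, for illustration only
# The true parameters (amp=1, freq=2) are far more probable than a wrong guess.
better = (log_posterior((1.0, 2.0), data, t, sigma, flat_prior) >
          log_posterior((0.5, 3.0), data, t, sigma, flat_prior))
```

In the real analysis, a sampler proposes the `theta` values (step 1) and maps out the full multidimensional posterior rather than comparing two points.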
What can these observations teach us about the Universe?
When we accurately measure the various parameters that describe compact binary coalescences we can use this information to infer astrophysical formation mechanisms, and improve our understanding of stellar evolution. We also take advantage of the extreme settings that black-holes and neutron-stars provide to perform experiments that could not be done otherwise: we test Einstein’s theory of general relativity like never before, and with binary neutron stars and neutron star - black hole detections we will be able to measure the equation of state of matter at the highest densities physically possible.
As Cloud Economists, we’re often asked when it makes sense for an object to be in Amazon S3’s Intelligent-Tiering (“S3-IT”) storage class. The answer, as is unfortunately often the case in the world of consulting, is “it depends”.
There are two primary considerations before you jump into S3-IT: an object’s access pattern and its size.
S3-IT and object access patterns
S3-IT is an extremely handy way to make sure your objects are stored in the appropriate storage tier without having to write complicated lifecycle management policies or incur the cost of lifecycle transitions and minimum retention periods. Maintaining lifecycle management policies and making thoughtful choices about object tiering both require engineer time, which is a lot more expensive than the $0.0025 per 1,000 objects monthly management fee. For this reason, our Cloud Economists often recommend that clients treat S3-IT as the default unless their objects’ access patterns are extremely well-understood. In many cases, it’s cheaper to just let S3-IT figure out where to put your object.
There’s one additional S3-IT caveat customers should be aware of. The S3 Standard storage class is designed for 99.99% availability, while S3 Intelligent-Tiering loses a 9 from the end of that target to offer only 99.9% availability. At large scale, you’ll indeed start to see object retrieval failures more frequently on tiers other than S3 Standard.
S3-IT and object size
Since S3-IT is a good default option for most objects’ access patterns, let’s take that off the table and only look at the monthly storage component of the object’s total cost of ownership (TCO). How big does an object need to be in order for S3-IT to make more sense than the Standard tier from a storage cost perspective?
To come up with a concrete answer to this question, let’s make the simplifying assumption that an object is written once and never read or re-written thereafter. Let’s also make the simplifying assumption that a month is 30 days long, so we don’t have to do fractional math to compute the average cost of an object in GiB-days.
So, to calculate the TCO of this hypothetical object, we have to model the object’s movement through S3-IT’s various tiers over time.
In all different flavors of Intelligent-Tiering, a new object’s first three months of existence are the same: It spends the first month in the Frequent tier and the next two months in the Infrequent Access tier. Where it spends the rest of its existence depends on whether S3-IT’s Deep Archive or Archive tiers are enabled for that object. Therefore, there are three flavors of S3-IT to consider:
- “Vanilla” S3-IT: If neither Deep Archive nor Archive are enabled, the object spends the rest of its existence in the Archive Instant tier. This is the default option.
- S3-IT + Deep Archive: If only Deep Archive is enabled, the object spends three months in the Archive Instant tier and all subsequent months in the Deep Archive tier.
- S3-IT + Archive + Deep Archive: If both Archive and Deep Archive tiers are enabled, the object spends three months in the Archive tier and the rest in the Deep Archive tier.
To make things even more complicated, S3-IT tiers tack on an additional management overhead fee per object-month, and the Archive and Deep Archive tiers store some additional metadata that you also pay for: 8 KiB for the name of the object (billed at Standard tier rates) and 32 KiB for “index and related metadata” (stored at the Glacier and Glacier Deep Archive rates, respectively).
If we think back to high school math class, it sounds like storage cost in S3-IT as a function of time is a piecewise-linear function. Luckily for us, cost of ownership over a given fixed period of time is a linear function of object size.
Calculating S3-IT costs
Since cost of ownership over a fixed period of time is a linear function, we can describe the relationship between object size (x) and storage cost (y) over a fixed period like this:

y = y_1 + m * (x - x_1)

y_1 is how much it costs to store x_1 bytes for the given time period, and m is the marginal cost of storing an additional byte for that same time period.
Calculating S3 Standard costs
We also know that storage cost in the Standard tier is a linear function of object size, but it’s much simpler:

y = C * x

C is the cost per GiB-month of Standard storage in a given region multiplied by the length of the time period in months.
Calculating the Break-Even Point
So far, we have two equations that can tell us how much it costs to store an X KiB object for a given duration in both S3-IT and Standard. The “break-even point” for a given storage duration is the intersection between those two equations, or the value of x for which:

C * x = y_1 + m * (x - x_1)

We can solve the above equation for x, which yields:

x = (y_1 - m * x_1) / (C - m)
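If you’d rather not do the algebra by hand, the same intersection (the x satisfying C * x = y_1 + m * (x - x_1)) takes one line of code. The numbers in the example are abstract placeholders chosen so the arithmetic is easy to check, not real AWS prices:

```python
def breakeven_size(C, m, x1, y1):
    """Object size at which Standard and S3-IT cost the same over a fixed
    period: the x satisfying C * x == y1 + m * (x - x1)."""
    return (y1 - m * x1) / (C - m)

# Placeholder numbers: Standard costs 2.0 per unit of size; S3-IT costs
# 3.0 for the first unit plus 1.0 for each additional unit.
x = breakeven_size(C=2.0, m=1.0, x1=1.0, y1=3.0)
standard_cost = 2.0 * x
it_cost = 3.0 + 1.0 * (x - 1.0)
```

Each flavor of S3-IT just plugs in its own m, x_1, and y_1 for the chosen duration.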
For object sizes greater than that intersection point, it’s cheaper to store the object in S3-IT. For object sizes less than that point, it’s cheaper to store the object in Standard. Intuitively, the longer you store an object of a given size, the more benefits you accrue from S3-IT and the lower that break-even point should be. The plot below shows that break-even point for storage durations from one year to 20 years.
As you can see, for all three flavors of S3-IT, the break-even point starts to level off once the object’s lifetime surpasses about 10 years. The break-even point for S3-IT with colder tiers available is a little lower because the savings in storage costs in those colder tiers add up over time. In all cases, though, the object size at which you break even is very close to the 128 KiB minimum size, regardless of how long you store the object for.
S3-IT has come a long way since its introduction in 2018. At this point, it’s a good default option for most objects under most workloads. One big drawback to adopting S3-IT is that moving objects from other tiers into S3-IT is expensive: it costs $0.01 to transition 1,000 objects, which can add up quickly if you’ve got a lot of objects to transition. However, new objects created in the S3-IT tier aren’t subject to that transition fee, so adopting S3-IT for new workloads won’t cost you anything.
Because this sort of thing can get complicated, it’s worth clarifying that these equations are a function of x (object size) and also implicitly of T (storage duration in months). Solving the equation for the break-even point calculates it for a single value of T, and the plot that follows is a plot of that intersection over many values of T, not a plot of y=f(x).
9 a.m.–12:20 p.m.
Digital signal processing through speech, hearing, and Python
Why do pianos sound different from guitars? How can we visualize how deafness affects a child's speech? These are signal processing questions, traditionally tackled only by upper-level engineering students with MATLAB and differential equations; we're going to do it with algebra and basic Python skills. Based on a signal processing class for audiology graduate students, taught by a deaf musician.
One thing Python is great for is bringing "advanced" technical topics within the grasp of relative beginners. To illustrate, we'll be taking an upper-level electrical engineering course (Signals and Systems / Digital Signal Processing) that typically has 4-6 semesters of engineering/math/science coursework as a prerequisite... and teaching exactly the same concepts to an audience with only algebra and basic Python knowledge.
This workshop is based on a graduate course in signal processing for audiology doctoral students, and is being taught by a deaf engineering education researcher who is a musician, dancer, and polyglot. As such, the exercises and examples will be from the realms of speech, hearing, and music.
- First, we'll introduce the time and frequency domains and the Fourier transform by making equalizers and modeling different sorts of hearing loss. What does your favorite song sound like to someone with this hearing profile? What does speech sound like?
- We'll get into spectrograms and visualizations by introducing envelopes, impulse responses, and phonology. After discovering why an "A" sounds different from an "O" and what makes trumpets sound "brassy," we'll use visualizations to solve practical problems in speech and noise: for instance, a high-frequency loss makes it difficult to hear plosives (sounds like "p" and "b") but vowels are fine. Why? Can you predict which auditory situations will be more understandable?
- How to break sounds. We'll play with clipping, undersampling, aliasing, and other Bad Techniques most audio folks try desperately to avoid in order to find out why they make signals Sound Wrong.
- Fun With Filtering: if you need to fit an 8kHz signal into a 2kHz bandwidth, what can you do to bring information-rich parts of the signal into perceptual range? We'll experiment with various techniques for implementing auditory superpowers such as giving humans bat-like hearing powers while still retaining the ability to understand speech.
- Other topics and labs depending on time and audience interest, including discussion on pedagogy and how this approach could be used for other "advanced" topics in engineering education.
We will sing, dance, and make music. Bring headphones. If you play a portable instrument, bring it for an in-workshop jam session.
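As a taste of the filtering exercises, here is a hearing-loss simulator in a few lines of numpy. It is deliberately crude: a real audiogram is a sloping curve, not the brick-wall cutoff used below.

```python
import numpy as np

def simulate_highfreq_loss(signal, fs, cutoff_hz):
    """Crude model of a high-frequency hearing loss: zero out every FFT bin
    above the cutoff and transform back to the time domain."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

fs = 8000
t = np.arange(fs) / fs
# A "vowel-like" low tone plus a weaker high-frequency component,
# standing in for the energy that distinguishes plosives.
tone = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
heard = simulate_highfreq_loss(tone, fs, cutoff_hz=1000)
# The 220 Hz component survives; the 3000 Hz component is gone.
```

Playing `tone` and `heard` back-to-back through headphones makes the point viscerally: the low "vowel" is intact while the high-frequency information has vanished.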
Correction, it looks like a database view would be more appropriate if I wanted a persistent query. http://desktop.arcgis.com/en/arcmap/10.3/manage-data/gdbs-in-sql-server/views-in-geodatabase.htm
Assets with no inspections in GIS
I'm looking for the best way to determine which stormwater outfall structures have never had a sampling or inspection performed.
In ArcMap, the Lucity GIS toolbar provides a handy way to create a database view of the lucity samplings under Lucity Views> View Storm Sampling Results. Tool here: http://help.lucity.com/webhelp/latest/gis/#21980.htm
This performs an INNER JOIN between structures and the sample table, retrieving the most current inspection. This result returns only those features which have a successful join. Instead, I would like to either perform a FULL OUTER JOIN, or at least a LEFT OUTER JOIN to return all structures, not just those with an inspection, and see which do not have inspections (null values).
Default Lucity View - Storm Samplings:
"Creating view sql = CREATE VIEW LUCITY_USER.GIS_SMSSAMPL AS SELECT SMSSAMPL.*, SMVSTRUC.SN_ST_NO FROM (SMVSTRUC INNER JOIN SMSSAMPL ON SMSSAMPL.SS_SN_ID = SMVSTRUC.SN_ID) INNER JOIN (SELECT SS_SN_ID, MAX(SS_SAMP_DT) AS MAXDATE FROM SMSSAMPL GROUP BY SS_SN_ID) AS MHMAX ON MHMAX.SS_SN_ID = SMSSAMPL.SS_SN_ID AND SMSSAMPL.SS_SAMP_DT = MHMAX.MAXDATE"
I would like to stand up a simple web map to show which features still need to be inspected. A nightly routine could perform this query and recreate the feature but ideally I'd like to have it update on a more on-demand basis. As inspections are performed, users would see those structures change in appearance on the web map.
1) I can take this initial result, export it, and join it back to my full list of structures in GIS, however, it would be useful to skip this step.
2) I'm also considering making my own table view in our SDE database that mirrors this query but to my own specifications. http://desktop.arcgis.com/en/arcmap/latest/tools/data-management-toolbox/make-table-view.htm
3) Is there a better way?
I was also going to recommend a database view rather than a table view if storing something in the geodatabase was your goal. However, another option to explore is to create a Query Layer in ArcMap with your specifications, and then publish that as a map service. At that point, there's no need for a nightly routine because it acts similarly to a view but it's stored in your mxd rather than in your database. Just a possibility. Good luck!
Within the Lucity Storm Structures module there is actually a 'Last Inspection' date field. This field is automatically updated when inspection records are created within our system. This is true for most of our inventory modules.
You can join that field from the Lucity database to your feature class and then symbolize it so that it will display a different color for structures that have an inspection date and for those that don't. This would also allow you to possibly graduate the symbology based on how long ago it was inspected.
An easy way to join in our lucity data would be to use the Lucity Links tool. This is found in ArcMap on the Feature Class properties.
Let me know if you have any further questions about that.
@Jonathan Semones, Thanks for the suggestion with Lucity Field links, this looks promising and I'll keep it mind. Unfortunately it looks like only 101 out of our 852 outfalls (Structures where Monitoring Site = 1) have a value for "last inspection date" (SN_INSP_DT). According to my stormwater workers, we use the Samplings table for MS4 outfall inspections, which I show as a dashboard query "SMSSAMPL LEFT JOIN SMVSTRUC ON SMSSAMPL.SS_SN_ID = SMVSTRUC.SN_ID." Should a Sampling on a Structure also update the "Last Inspection Date" field or is this a workflow issue within our organization? Thank you.
from __future__ import annotations
import os
import shutil
from pathlib import Path
from typing import Callable
from typing import Tuple
import pytest
from click.testing import CliRunner
from planingfsi.cli import cli
VALIDATED_EXTENSION = ".validated"
RunCaseFunction = Callable[[str], Tuple[Path, Path]]
@pytest.fixture()
def run_case(tmpdir: Path, validation_base_dir: Path) -> RunCaseFunction:
"""A function which is used to run a specific validation case.
The case runner executes in a temporary directory with input files and results copied into it
from the base directory.
Args:
tmpdir: A temporary case directory in which to run.
validation_base_dir: The base path holding the validation cases.
Returns:
function: A function accepting a string which identifies the folder within
"validation_cases" to run.
"""
def f(case_name: str) -> tuple[Path, Path]:
"""Copy all input files from the base directory into the case directory and run the `mesh`
and `run` CLI subcommands.
Args:
case_name: The name of the case directory within the validation base directory.
Returns:
The paths to the original and new case directories.
"""
orig_case_dir = validation_base_dir / case_name
new_case_dir = Path(tmpdir)
for source in orig_case_dir.glob("*"):
if source.suffix == VALIDATED_EXTENSION:
continue # Don't copy validated files to the temporary directory
destination = new_case_dir / source.name
try:
shutil.copytree(source, destination)
except NotADirectoryError:
shutil.copyfile(source, destination)
os.chdir(new_case_dir)
cli_runner = CliRunner()
mesh_result = cli_runner.invoke(cli, ["mesh"], catch_exceptions=False)
assert mesh_result.exit_code == 0
run_result = cli_runner.invoke(cli, ["run"], catch_exceptions=False)
assert run_result.exit_code == 0
return orig_case_dir, new_case_dir
return f
@pytest.mark.parametrize(
"case_name",
(
"flat_plate",
"stepped_planing_plate",
pytest.param("flexible_membrane", marks=pytest.mark.slow),
pytest.param("sprung_plate", marks=pytest.mark.slow),
),
)
def test_run_validation_case(run_case: RunCaseFunction, case_name: str) -> None:
"""For each results directory marked with a '.validated' suffix, check that all newly calculated
files exist and contents are identical."""
orig_case_dir, new_case_dir = run_case(case_name)
for orig_results_dir in orig_case_dir.glob(f"*{VALIDATED_EXTENSION}"):
new_results_dir = (new_case_dir / orig_results_dir.name).with_suffix("")
for orig_results_file in orig_results_dir.iterdir():
new_results_file = new_results_dir / orig_results_file.name
assert_files_almost_equal(orig_results_file, new_results_file)
def assert_files_almost_equal(orig_file: Path, new_file: Path) -> None:
"""Read two files line-by-line, convert any float-like number to a float in each line, and then
compare each list of floats using pytest.approx."""
with orig_file.open() as fp, new_file.open() as gp:
for f_line, g_line in zip(fp.readlines(), gp.readlines()):
f_values, g_values = [], []
for f_str, g_str in zip(f_line.split(), g_line.split()):
try:
f_val = float(f_str)
g_val = float(g_str)
except ValueError:
continue
f_values.append(f_val)
g_values.append(g_val)
assert f_values == pytest.approx(g_values, rel=1e-3, abs=1e-6)
HTML5 article and section refresher
Despite being among the earliest of HTML5 elements, article and section elements still crop up in questions when I’m talking to people about HTML5. I think the reason people ask is because they imagine that these elements are more restrictive than they really are.
So to help clarify how these elements work, a quick example of some of the ways they can be used.
The article element is for stand-alone chunks of content. Imagine adding a 4px dashed border to the article in CSS, now get a pair of scissors, cut the entire article element out of the page along the dashed line and paste it on a blank piece of paper… does it still make sense on its own? It is a good use of an article element if the answer is yes.
The example given in the specification is a blog post. The blog post itself would be an article and each user-submitted comment could be an article nested within the blog post article. This is a good example, but ties the article element a bit too closely to a written “article”. It might be more useful to remember that a “loan calculator” widget could be enclosed in an article element – “cutting it out and using it in its own right” is a handy way to think about these things.
The section element works in two ways. You can use it to group things together and you can use it to separate things.
Imagine a page full of reviews. You might use section elements to group the “music” reviews together and the “film” reviews together. Each section would have multiple reviews. This is an example of grouping many things using a section.
- section (Music reviews)
  - article (Troublegum review)
  - article (Infernal Love review)
  - article (Crooked Timber review)
- section (Film reviews)
  - article (Yes Man review)
  - article (Mr Deeds review)
  - article (Elf review)
The flip-side use of a section would be to divide a single article into several chapters.
- article (Academic Brilliance)
  - section (Introduction)
  - section (Chapter 1 – brilliant stuff)
  - section (Chapter 2 – more excellence)
  - section (Epilogue)
Usually, in both of these cases, each article and each section would have a heading element that contained the text in brackets.
Of course, if you had a really substantial page, you might combine both of these and have sections containing articles containing sections – but not many pages are that big!
Part of understanding how to use these elements is to ignore the word “article” and instead think “stand alone”; the other part is to think of “section” also as “group”.
So here is an example HTML5 page outline…
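A minimal sketch of such an outline, with ellipses standing in for real content:

```html
<body>
  <header>…</header>
  <nav>…</nav>
  <main>
    <article>
      <header>…</header>
      <section>…</section>
      <section>…</section>
      <footer>…</footer>
    </article>
  </main>
  <aside>…</aside>
  <footer>…</footer>
</body>
```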
In this example…
The page itself has a header, nav, main, aside and footer. This is a pretty typical page layout. The header and footer are tops and tails for your page. The nav element contains your primary site navigation. The aside element is for information related to the content nearby, but not a part of that content. The main element (at the time of writing this is a new one) is for identifying explicitly the primary content that the page is there for – in some cases this can be determined by removing all the elements you know are NOT the main content, but in some cases that doesn’t work – so the main element makes it absolutely clear.
Inside the main element, we have an article that is divided into a couple of sections and topped and tailed with a header and footer. You would leave out the sections if there wasn’t a great deal of content in the article and just put the content directly after the header and before the footer. Equally, the footer may not be required if you don’t have anything to tail your article with.
25 thoughts on “Abusing HTTP Status Codes”
You know, I just had this idea earlier today. That’s awesome.
What isn’t awesome is that the link is dead. :(
Link is quite up.
this is a “soft opening” of our forums. Please be patient with us as we’re sure there will be issues.
After a very long time of people requesting it, here you go forums.hackaday.com
30 seconds in, and I have to say “cool”. Here’s the first test, verbatim:
It’s a dangerous web out there everyone. Don’t surf without protection.
Browsing in ‘incognito’ mode (or w/e depending on browser) is a must do nowadays.
Well, thats one way to kill further comments on an article.. just post a link to the new forum! :D
I’m assuming a new post will be made announcing the forum when its ready?
there will be a post about it. I just didn’t want to have EVERYONE go there only to have it crash and burn. I’d rather have a few people trickle in and find the issues slowly.
I expected more abuse and less…. status codes.
They can see if I’m logged in to a social networking site, OH NO!
The real issue is if you can use the JS to pull down the whole page and then regex query for a username/email/real name/address/ip address, then you should worry.
@Standard Mischief: same here. Surfing without NoScript is liking driving without seat belts.
A neat trick, but I am really straining to see how this is a serious concern for anyone.
His example at the top about being able to tell if you are logged into a porn site is really stretching it, since you need to adapt this trick to each and every site individually. Unless somebody is willing to go through and find URLs to check for every online service/forum in existence, there isn’t a whole lot to be worried about.
Check if the user is using gmail. open a popup with hidden JS (key tracking) and an iframe to log the user out of gmail. If the user then logs back in would it be possible for the hidden js to track keyboard input for that window?
Sorry, haven’t programmed JS/web in a while so not sure if it would work? any thoughts?
Who cares? CSRF like this has been around for AGES, and well known too. If I am not mistaken, gmail has had this “issue” for a long while, and was even pointed out here.
This is nothing new.
In chrome it tells me I’m logged into Twitter when I’ve never actually been to Twitter in recent memory. It does tell me I am logged into FB when I am though.
In FF it tells me I’m not logged into Twitter and just says “Checking” for facebook, even though I am logged in there.
Don’t really see the value of this, but it’s an interesting read.
I don’t think he knows what I’m logged into. Not much of a hack. Nor a new one.
This trick *could* be used to detect if someone is a mod or admin on a given site (thus when they visit the page they see different content). That could result in delayed moderation of malicious links because the mods wouldn’t be able to verify that the link is malicious.
Or it could be used to target/harass certain users or groups of users who can be uniquely identified by whether or not an image/page can be loaded.
Or it could be used to trick people into thinking that a malicious page is associated with a site they already have a “trusted” relationship with.
Using the “a:visited” CSS selector, the website scans your history for popular sites.
To the people asking what the point is: the point is ££££££ / $$$$$$$$.
If the bad guys (an advertising company) know which sites you visit and how often, they can target adverts at you. This means more money in the bank for them.
Maybe knowing which email provider you use doesn’t give them much ammunition, but knowing which forums you visit regularly will.
As mentioned above, using hidden hyperlinks on a page and then checking whether their colour turns to “visited” has already been used to track people’s internet habits for a while.
The internet is turning into a pretty f*cked up place to hang out…
Oh wait, ScriptBlock won’t let this trick work.
It’s just an example of how modern browser scripting is broken and needs to be fixed.
I hope everyone is using one browser for casual websites and another (one window or tab at a time) for the “risky” and “interesting” side of the web.
Surfing without script, history and cache control is asking for trouble.
Most of the suggested abusive uses people are coming up with here seem to be completely ignorant of existing blocks on cross-domain access.
Site X cannot load up a Gmail page and parse it for information because it’s in a different domain. Likewise, Site X can’t open up a popup to Gmail’s login screen with keylogging enabled because it’s a different domain.
CSFire protects you from these kinds of attacks on Firefox. I highly recommend it, though you do have to occasionally turn it off to make some sites work (or you can configure it to work with sites you use often, if it creates problems with them). Check it out: https://addons.mozilla.org/af/firefox/addon/csfire/?src=oftenusedwith
I blog about it here: http://albosure.blogspot.com/2010/04/plugging-privacy-leaks-with-csfire.html
I’ve seen this in the wild already for Twitter and Facebook.
Gotta love noscript on this one ~
What Is the BizTalk Interoperability Framework?
The BizTalk interoperability framework consists of an application model. It also specifies BizTalk Document, Message, and Schema formats.
The BizTalk application model includes three logical "layers": the application layer, BizTalk server layer, and data communications services layer. (Note that the BizTalk specification uses the lowercase "s" to denote the BizTalk server logical layer to distinguish it from Microsoft's BizTalk Server product, which sports the uppercase "S".) These layers support transmission, processing, and receipt of BizTalk Messages.
In a BizTalk environment, line-of-business applications communicate among themselves by exchanging documents through one or more intermediate BizTalk servers. Applications and BizTalk servers may communicate over various data communications services or protocols, such as the HyperText Transfer Protocol (HTTP); File Transfer Protocol (FTP); or a message-brokering protocol, such as Microsoft Message Queue Server (MSMQ).
Under the BizTalk application model, an application is any system that fits all three of the following criteria. First, it stores and executes line-of-business data or processing logic. Second, it can generate and/or consume XML-formatted Business Documents. Third, it can communicate with a BizTalk server via a data communications service or protocol. The application may use an adapter "glue" layer to process XML and communicate with BizTalk servers. The application may support XML as one of its native file formats, or it may interface with adapter software that converts between XML and one or more of the application's native file formats.
The line-of-business application, or its BizTalk adapter, generates XML-formatted Business Documents—according to the appropriate, application-specific XML schema defined outside the BizTalk Framework. The application-adapter then wraps Business Documents and any associated binary file "attachments" with the XML "BizTags," both header and trailer, which define a "BizTalk Document" (per BizTalk schema defined in the BizTalk Framework). Then, the application submits the BizTalk Document to an originating BizTalk server.
BizTags provide BizTalk servers with document handling and routing information, acting as an "envelope" for the business information to be transmitted. They are the set of XML tags (both mandatory and optional) that are used to specify the handling of a Business Document—which is contained in a BizTalk Document, and which is in turn contained in a BizTalk Message. All BizTags are defined within standard BizTag namespaces with Uniform Resource Identifiers (URIs) derived by extension from the prefix http://schemas.biztalk.org/btf-2-0/. The BizTags are added as an XML envelope or wrapper around a Business Document by an application or BizTalk application-adapter. BizTags are processed by the BizTalk server or by other applications that facilitate the document interchange.
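As a concrete illustration of this wrapping, consider the following hypothetical sketch of a BizTalk Document. The element names, namespace URI, and values here are invented for illustration and are not the normative BizTalk Framework 2.0 schema; the sketch simply shows BizTag-style headers (using the state-related fields discussed later in this article: messageID, sent, referenceID, handle, process) enclosing an application-defined Business Document:

```xml
<!-- Hypothetical sketch only; not the normative BTF 2.0 schema -->
<BizTalkDocument xmlns="urn:example:biztags">
  <Header>
    <!-- BizTags: handling and routing metadata processed by BizTalk servers -->
    <messageID>uuid:00000000-example</messageID>
    <sent>2000-05-10T08:00:00Z</sent>
    <state>
      <referenceID>PO-1001</referenceID>
      <handle>2</handle>
      <process>PurchaseOrderInterchange</process>
    </state>
  </Header>
  <Body>
    <!-- Business Document: application-specific schema defined outside BTF -->
    <PurchaseOrder xmlns="urn:example:purchase-order">
      <Item sku="A-100" quantity="3"/>
    </PurchaseOrder>
  </Body>
</BizTalkDocument>
```

A BizTalk server would read only the header block to route and track the message, leaving the application-specific PurchaseOrder payload to be interpreted by the destination application.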
BizTalk servers provide various processing services to applications: validating, mapping, translating, encoding, encrypting, signing, routing, storing, forwarding, and delivering BizTalk Messages and BizTalk Documents. Any compliant BizTalk server can process any BizTag defined under BizTalk Framework 2.0. By contrast, the tags used to mark up business information within the BizTalk Message body are determined by application-specific XML document schemas. Application-specific document tags within a Business Document are not BizTags, and are generally not processed directly by the BizTalk server.
A BizTalk server receives a BizTalk Document sent by an application and then wraps this document within an electronic envelope that defines a "BizTalk Message." The envelope of a BizTalk Message—in other words, the specific non-BizTag headers and trailers used to enclose a BizTalk Document—is specific to each network transport protocol, such as HTTP, FTP, or MSMQ. Microsoft has not yet published its promised implementation guidelines for transport-specific envelopes for BizTalk Messages.
The typical end-to-end flow of a BizTalk Message consists of five principal processing steps:
A commerce-relevant event occurs within an application, thereby triggering business rules that spur creation of one or more Business Documents and (optional) binary file attachments.
The application, or its BizTalk adapter, transforms these Business Documents into a BizTalk Document by wrapping them with BizTags defined in the BizTalk schema (per specifications at http://www.biztalk.org/) and XML tags defined in application-specific schemas (per specifications defined at other industry schema-repository sites, such as http://www.xml.org).
The originating application transmits the BizTalk Document to the originating BizTalk server.
The originating BizTalk server creates a BizTalk Message by wrapping transport-specific envelope information around one or more Business Documents. The originating server uses addressing information contained in BizTags to determine the correct transport-specific destination address or addresses. The originating server then transmits the BizTalk Message to the destination BizTalk server over the appropriate transport protocol.
The destination BizTalk server validates the BizTalk Message, extracts the BizTalk Document contained within, validates it, and routes it to destination applications. Destination applications extract the Business Documents and optional binary file attachments contained in BizTalk Documents. Applications then process these documents and attachments according to application-specific business rules.
BizTalk applications differ from each other in several critical respects. First, applications may differ in the set of business rules implemented at end-point applications as well as intermediate BizTalk servers. Second, they may differ in the contents, schemas, and formats of Business Documents and binary file attachments they exchange. Third, they may differ in the end-to-end workflow process parameters encoded in BizTags in their BizTalk Documents. Fourth, they may differ in the platform-specific processing context in which each application or server processes a particular BizTalk Message or Document.
Indeed, there is a broad range of implementation-specific issues that come into play when you're developing an end-to-end BizTalk application. The principal implementation-specific issues are described in the following sections.
BizTalk Server Functionality
The BizTalk Framework does not specify the precise set of services to be provided by a generic BizTalk server. The scope of server functionality depends on the particular BizTalk server vendor's implementation. However, the BizTalk Framework strongly implies that these services will be provided to end-to-end applications from a homogeneous set of BizTalk servers, such as Microsoft's BizTalk Server product.
The BizTalk Framework does not specify the physical deployment of applications and BizTalk servers. Applications and BizTalk servers usually reside on separate machines, connected over local- and wide-area networks. However, they may also run on the same machine and communicate via various protocols over its internal bus.
The BizTalk Framework does not specify the communications protocols that bind applications and BizTalk servers to each other. Applications and servers may use any data communications network or protocol to communicate among themselves.
The BizTalk Framework does not specify the software interfaces between the application, BizTalk server, and data communications layers. These interfaces depend on the application programming interfaces (APIs), programming languages, and object models supported on the platforms on which these components are deployed.
The BizTalk Framework does not specify mechanisms for authentication, access control, encryption, tamper-proofing, or nonrepudiation on BizTalk Messages and their contents. Security on end-to-end BizTalk transactions depends on implementation-specific agreements on these issues.
Application State Information
The BizTalk Framework does not specify the way applications and BizTalk servers are supposed to define and communicate state information on in-process interchanges. State information defines the context of a particular Business Document within an end-to-end e-commerce transaction. However, BizTalk Framework 2.0 provides several BizTalk Message header fields for input of implementation-specific state information. These header fields consist of messageID, sent, state, referenceID, handle, process, and description.
As noted previously, the BizTalk Framework does not specify the operating environments on which line-of-business applications, BizTalk application-adapters, and BizTalk servers run. Operating environments constrain the physical-hosting, transport-protocol, software-interface, and state-information options available to BizTalk-enabled, line-of-business applications. However, the framework strongly implies that BizTalk implementers will develop or optimize their applications to run on Windows 2000 and associated Microsoft server software, especially Internet Information Server (IIS), Commerce Server, SQL Server, Microsoft Transaction Server (MTS), and MSMQ.
Business Document Schemas
The BizTalk Framework does not specify the schema—or information model—implemented in the contents of Business Documents. Microsoft has deliberately and wisely chosen not to dictate the logical structure of application-specific or vertical-market documents. Instead, the BizTalk Framework defers to other industry initiatives to define the XML schemas of business documents. BizTalk Documents can contain documents defined in vertical-market initiatives, such as the RosettaNet program that has defined XML schemas for business documents and online catalogs supporting the information-technology industry's supply chain. Similarly, the XML/EDI group is mapping existing American National Standards Institute (ANSI) X12 transaction sets to XML schemas. Microsoft has established an industry clearinghouse at http://www.biztalk.org for others to post application-specific or vertical-market XML document schemas that can be encapsulated in BizTalk Documents and Messages.
Workflow Process Definition and Execution
The BizTalk Framework does not contain specifications for defining end-to-end workflows involving BizTalk Messages. As a result, the framework does not live up to its promise of being able to encapsulate "self-describing" workflows within any given message transmitted between trading partners. The BizTalk Framework simply defines the header syntax of individual messages and the documents contained within them. These headers contain state information that hints at a BizTalk Message's role in a larger "interchange," a term that is largely synonymous with workflow or business process.
However, the headers by themselves do not specify the end-to-end sequence of processing steps through which one or more linked BizTalk Messages is to pass. If companies wish to implement interorganizational applications involving messages passed between two or more vendors' BizTalk servers, they will have to cobble together a very implementation-specific approach to defining, executing, and tracking these workflows.
Microsoft provides a tool and framework—the Commerce Interchange Pipeline—for defining e-commerce workflows involving a complex sequence of messages processed within a particular BizTalk server (Microsoft's own BizTalk Server product).
Event Model and Error Messages
The BizTalk Framework does not specify the event model to be shared and the error messages exchanged between BizTalk servers, BizTalk application-adapters, line-of-business applications, and data-communications interfaces. The event model should include standard alerts for a server, adapter, or application's inability to validate or process a BizTalk Message, BizTalk Document, or Business Document. The error messages should be standard XML documents that can be processed by BizTalk servers and application-adapters. Until Microsoft defines the BizTalk event model and error messages, these important features will remain implementation-specific (in other words, proprietary features of Microsoft's BizTalk products and services).
Clearly, BizTalk's application model, as laid out in the BizTalk Framework 2.0 Independent Document Specification, provides only a general development framework. It is not a specification to which independent developers can write code without first addressing a broad range of implementation-specific issues. Consequently, a BizTalk application will not be easily portable to application environments other than the one for which it was written. As we've seen, Microsoft provides just such an application environment, the centerpiece of which is its BizTalk Server product.
How to implement a circular queue in C?
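The question above asks how to implement a circular queue in C. Here is a minimal, self-contained sketch of the standard fixed-capacity ring buffer; the names (`cq`, `cq_push`, `cq_pop`), the capacity of 8, and the `int` payload are arbitrary choices for illustration:

```c
#include <stdbool.h>

#define CQ_CAPACITY 8  /* fixed capacity chosen for this sketch */

typedef struct {
    int data[CQ_CAPACITY];
    int head;   /* index of the oldest element */
    int count;  /* number of elements currently stored */
} cq;

void cq_init(cq *q) { q->head = 0; q->count = 0; }

bool cq_is_empty(const cq *q) { return q->count == 0; }
bool cq_is_full(const cq *q)  { return q->count == CQ_CAPACITY; }

/* Enqueue at the tail; returns false when the queue is full. */
bool cq_push(cq *q, int value) {
    if (cq_is_full(q)) return false;
    /* Tail index is derived from head + count, wrapping around the array. */
    q->data[(q->head + q->count) % CQ_CAPACITY] = value;
    q->count++;
    return true;
}

/* Dequeue from the head into *out; returns false when empty. */
bool cq_pop(cq *q, int *out) {
    if (cq_is_empty(q)) return false;
    *out = q->data[q->head];
    q->head = (q->head + 1) % CQ_CAPACITY;
    q->count--;
    return true;
}
```

Tracking an explicit element count (rather than separate head and tail indices) keeps the full and empty states unambiguous without sacrificing a slot. Typical use: declare `cq q; cq_init(&q);`, enqueue with `cq_push(&q, 42);`, and dequeue with `int v; cq_pop(&q, &v);`.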
Closed environment vs open environment in brewing
One thing that is very confusing for me right now is the dynamics of when to isolate the contents and when to aerate them. As soon as you finish boiling, it's recommended to introduce a lot of oxygen by shaking or stirring in order to create a good environment for the yeast. But as soon as you do this, you put an airlock on the bucket for fermenting. I don't get this. I understand this is for preventing contamination by bacteria and the like. But when you 'injected' oxygen for the yeast earlier, didn't you also let in a lot of undesired microorganisms? An overall explanation of when it is desirable to have air in your beer and when it is not would be very welcome. Thanks!
The factors determining oxygenation are yeast health vs taste of the beer. Ever tasted a starter culture, or a bottle that was only bottled a couple days ago? They will be full of acetaldehyde, the cidery/green apple flavor of oxidized beer.
It is correct that the yeast needs some oxygen for replication and general health, but at the end of the ferment, we (the humans) want ethanol and not acetaldehyde. In the absence of oxygen the yeast (with alcohol dehydrogenase and a lot of other enzymes) will convert the acetaldehyde to ethanol.
In the presence of oxygen, the reaction will reverse as the yeast try to make energy from the ethanol. Given enough time all the ethanol would be converted to energy, with acetaldehyde being the first step in the conversion.
Regarding open fermentation, it works because foam and CO2 keep most of the oxygen out, and because of the Crabtree effect: in the presence of enough sugar, anaerobic metabolism happens anyway. The important thing is that the open-fermented beer gets put in a barrel before it's too late.
Actually, the Custers effect is not observed in Saccharomyces yeasts (though it is in Brettanomyces)
You're correct Franklin, I should have said 'Crabtree effect'
You've already understood the general practice. Air (really, oxygen) is needed in the wort prior to pitching so the yeast can grow and replicate. After pitching, a sealed environment is desired. I think the reason it's OK to aerate with air (vs., say, pure O2) is that healthy yeast will usually overwhelm anything else trying to grow in the fresh wort. However, once fermentation slows, other microbes could gain a foothold.
Also, and this is the crux of the answer: once the beer matures, O2 will hasten the breakdown (oxidation) of many of the nice flavor compounds that we all enjoy and lead to further off-flavors. Again, without the yeast to scavenge the oxygen, this is a risk.
All that said, there are breweries that ferment in open tanks. They generally have high krausen and/or use positive-pressure rooms to keep out beasties, but not all.
One minor comment: while injecting pure O₂ is – to my understanding – unlikely to introduce contaminants, best practices regarding aerating with environmental air is to use a filter to limit the introduction of airborne particles and microbes.
@jsled agreed unless you are shaking or stirring.
409 response?
Why return a 409 response if the document id already exists? The entire point of an upsert operation is that it should update/replace if the record already exists. This documentation is contradicted by https://azure.microsoft.com/en-us/blog/documentdb-adds-upsert/, which states the response will be 200 (if replace) or 201 (if insert). Please clarify.
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: 45869e83-3f4b-0c94-56f1-b24004a3e11f
Version Independent ID: 47cf385f-fa58-534d-adb3-7a3dec463a44
Content: DocumentClient.UpsertDocumentAsync Method (Microsoft.Azure.Documents.Client)
Content Source: xml/Microsoft.Azure.Documents.Client/DocumentClient.xml
Service: cosmos-db
GitHub Login: @erickson-doug
Microsoft Alias: douge
@kkurkowski That list of status codes appears to be the full list of possible codes for the DocumentClientException property. Upsert should not ever return a 409, though it can certainly be returned on an Insert.
I'm seeing a 409 on upsert:
com.microsoft.azure.documentdb.DocumentClientException: Entity with the specified id already exists in the system.,
RequestStartTime: 2020-04-29T13:58:13.5298355Z, RequestEndTime: 2020-04-29T13:58:13.5298355Z, Number of regions attempted:1
ResponseTime: 2020-04-29T13:58:13.5298355Z, StoreResult: StorePhysicalAddress: rntbd://dm5prdddc01-docdb-1.documents.azure.com:14128/apps/a8955ea6-38e2-4f3d-b7ae-bcbfaea92dce/services/730de375-bd04-42fc-a7f3-94eb34626553/partitions/4afd492a-3c42-4014-a020-0021600b6d24/replicas/132305339029504459p/, LSN: 2, GlobalCommittedLsn: 2, PartitionKeyRangeId: 0, IsValid: True, StatusCode: 409, SubStatusCode: 0, RequestCharge: 1.67, ItemLSN: -1, SessionToken: -1#2, UsingLocalLSN: False, TransportException: null, ResourceType: Document, OperationType: Upsert
, Microsoft.Azure.Documents.Common/2.10.0, StatusCode: Conflict
A 409 is not only about a unique ID. It's also about unique field constraints.
You can define a set of unique fields when you create a collection.
So you will get a 409 if you try to insert or replace an item with a value already used by another item in that collection within the same partition.
@vadim-kor Very true. That did not apply in my case. In fact, I saw through the console that ANY time I tried to submit ANY payload with an existing PARTITION id I saw a 409. That is, I could not have two records with the same partition id. I could also only have one record with NO partition id. This persisted until I deleted and recreated the container. It is lucky I did not experience this in production.
@vadim-kor apologies for reviving this old thread, but I'm having this exact same issue. As soon as I apply unique key constraints I start getting 409 errors despite there being no conflicting keys that I can see. I've even gone into the Portal and attempted to add a simple record with a single datapoint (along with an id and the partition key) and I still get a 409 exactly as @merric-rocketpartners mentioned.
|
GITHUB_ARCHIVE
|
Wow! Karl, that is a big difference the way I look at it now... you are right (as always) it is more pleasing to the eyes. I didn't realize that. I thought it was much better to have uniform sizes on all pages, but yeah, big difference.
mjau-mjau wrote: If I was to suggest some minor adjustments, I would perhaps go for a "narrow" layout-width for pages "about" and "contact", and perhaps switch the columns on the contact page. It would look something like this:
HOME button is now removed. Thanks for the tip.
Also, do you really need the "home" button in the main menu? Most visitors will click the logo if they want to get back to the home page. The more you can shave off the menu, the more intuitive it becomes.
I didn't notice that. Will check later on.
You have recently upgraded from an earlier beta version? It seems you may be missing the folder /content/custom/favicon/. Check the download X3 zip, and make sure this folder is in your content/custom/ folder. It contains the favicon, which you can customize.
Gotcha! I permanently redirected www. to non-www via cPanel on my website, then I deleted the entry for the www parts of my website.
Lastly, I noticed your website works for both http://larryanda.com.au AND http://www.larryanda.com.au. It is recommended that you choose one to use (and promote), and redirect one to the other. From the perspective of the internet (google, seo, etc), it considers them two separate websites, and you will have issues with "canonical" links, since you basically have two links for each page.
http://onlineincometeacher.com/tips/pic ... r-non-www/
mjau-mjau wrote: Good job!
Dennis kaczor wrote: Dennis Kaczor Photography
A minor suggestion: you should set a "narrower" width for the context module of NEWS and BLOG pages. Text-only content should be limited in width so it is comfortable for visitors to read.
Instead of this:
You would have this:
Also, when adding a preview-image like you have done in the Nikon_d500 page ... Instead of adding the image manually to content, try adding it as the preview image, by adding the [preview] tag to the page context settings:
This will automatically add it for you, it will look a bit nicer, and is the recommended method for a standard blog-post.
Ok. Thanks for your help.
mjau-mjau wrote: Ok, but why are you posting all these questions in our "X3 Showcase" thread? Your questions are entirely unrelated, and require long answers. Please post in a separate new thread, clearly stating your questions, and I will answer ASAP.
bphotos wrote: ...
|
OPCFW_CODE
|
The ChatManager ...
With this plugin you are able to change all chat messages and join/quit messages.
The plugin also adds a Teamchat for team members.
The permission for chatting in the Teamchat can be changed, or the Teamchat can be disabled, in the config.yml.
To use the Teamchat, write the team letter (changeable in the config.yml) before the message.
Plugin tested on Spigot 1.8.8 - Java 8
In order to show the player's name use [PLAYER].
To show the message from the player use [MSG].
- use /chatban [Player] [min] to chatban a player.
- use /chatunban [player] to chatunban a player.
You can disable autoban in the config.yml
Set up how many times players can write bad words before an automatic chatban.
Set up how long players will be banned.
- DelayTime can be changed
- Players with permission (changeable in config.yml) can still spam.
Added Word Blacklist:
- use /chat censor add [word] to add word to blacklist.
- use /chat censor remove [word] to remove word from blacklist.
- Players with permission (changeable in config.yml) can still write these words.
- use /msg [Player] [Message] to write private messages.
- To show the name of the receiver, use [RECIVER].
- The format and messages can be changed in the config.yml
- use /togglemsg to allow/deny getting PrivateMessages.
Players with permission (changeable in config.yml) can also use:
- /togglemsg [player] to allow/deny getting PrivateMessages for specific player.
- use /chat clear to clear the Chat of every Player.
To chatban a player:
Code (Text): ChatManager.chatban(player, bantime);
To chatunban a player:
Code (Text): ChatManager.chatunban(player);
To get a player's chatban time:
Code (Text): ChatManager.getChatbanTime(player);
To check whether a player is chatbanned:
All messages, permissions and settings can be changed in the config.yml.
Info: 'Use: [PLAYER] for PlayerName, [MSG] for the Message and `&` for ColorCodes'
ChatMessage: '&e[PLAYER] &8>> &f[MSG]'
ChatMessage: '&7Team: &e[PLAYER] &8>> &f[MSG]'
JoinMessage: '&e[PLAYER] joined the game'
QuitMessage: '&e[PLAYER] left the game'
NoPermission: '&cNo permissions ...'
Message: '&cYou are not allowed to write that !'
Message: '&cDont spam !'
Info: 'Use: [RECIVER] for the Player who get the message.'
Message: '&7Msg: &e[PLAYER] > [RECIVER] &8>> &f[MSG]'
NotOnline: '&cThe player is not online'
SelfWriting: '&cYou cant write with yourself'
ReceiverDeniedMsg: '&cThis player has denied PrivateMsgs'
ToggleToAllow: '&eYou will now get every PrivateMessage.'
ToggleToDeny: '&eYou wont get any PrivateMessages anymore.'
Info: 'Use: [TIME] for the Chatban time.'
Message: '&cYou are chatbanned for [TIME] minutes !'
ChatBanMessage: '&eYou have chatbanned [PLAYER] for [TIME] minutes'
ChatUnBanMessage: '&eYou have chatunbanned [PLAYER]'
- Anti Caps
- More features for chatformat
- Your ideas ?
Please rate the resource, report bugs and give feedback and suggestions for further updates. Thanks
Do you have a video of my plugin?
Send me the link and it will be shown here !!!
This is a free plugin, so there won't be many more updates coming ...
The next project is running ...
Thanks for 100 Downloads
ChatManager Pro [Everything Configurable with API] v1.6.1
Chatban - Change ChatFormat - PrivateMSG - Blacklisted words - AntiSpam - Adds the Teamchat
|
OPCFW_CODE
|
Over the past several years, there has been a rise in the number of courses known as "coding bootcamps". People are looking for ways to advance their careers without spending an exorbitant amount of money on a new college degree, especially in the ever-growing tech industry. To meet this demand, the number of coding bootcamps has risen, offering people a way to learn a new skill in a shorter amount of time and for a fraction of the cost of a degree. However, these coding bootcamps are not right for everyone, and for some are probably not worth the investment.
What exactly are coding bootcamps?
Coding bootcamps are technical training programs designed to teach people who may have little to no experience everything they need to know to get a job in coding. The programs typically last only a few weeks or months, as they are designed with speed in mind. Over the course of a few weeks, people enrolled in a coding bootcamp will learn all about the most in-demand coding skills and how to create applications from scratch. Some coding bootcamps require payment upfront, while others defer payment until after you land a job.
The benefits of a coding bootcamp
So why would a person attend a program like this? For starters, coding bootcamps can be completed in a much shorter time frame than getting a college degree. If you know you want to work in coding, and you don't want to spend 4 years sitting in a lecture hall, a coding bootcamp may appeal to you. In 3 months, you could have all the information and skills you need to get a well-paying coding job, without having to attend college.
Besides the quicker timeline, a coding bootcamp is much cheaper than going to school. Getting your Bachelor's Degree in the United States is a costly route to take, and leaves many people in debt for years after they graduate. While there are ways to make it more affordable – such as financial aid, getting a scholarship, or attending an in-state college – it is still typically more expensive than one of these coding bootcamps. Since you are only paying for a few months of training, rather than 4 years of schooling, you are saving yourself some money.
Coding bootcamps are great if you want to get a job coding, and you want to do it soon. They focus on the most in-demand coding languages, allowing you to enter the job marketplace with useful skills. There are a few drawbacks to these programs, and you should consider them before signing up.
Coding bootcamp drawbacks
While speed is one of the selling points for coding bootcamps, it’s also one of the drawbacks. What often separates a good coder from a great one is the ability to think for yourself. A great coder will be able to think outside the box to come up with better ways of doing things or finding solutions to problems. This ability comes when you have spent a decent amount of time working with coding languages, and is hard to develop over the course of a quick bootcamp. When you attend college, you have several years to work with different languages at a slower pace, which allows you to try new things and push your boundaries.
It’s for this reason that some people suggest coding bootcamps are better for people who already have coding experience. They are great if you want to learn the latest coding languages, but if you are going in green, some say it is impossible to learn everything you need to learn in just a few weeks. For those people who are going into coding bootcamps without any prior coding experience, be prepared for a steep learning curve, and to possibly have additional training to complete once you finish the program.
The other drawback is that you may have a harder time getting through the interview process – if the coding bootcamp is the only related education. Without a formal college degree on your resume, some companies may unfortunately be quick to write you off. Many organizations want to see that you have put the time in to study your field, and that you have a well-rounded education. Without the recognition of a higher education institution behind you, getting your foot in the door may prove more difficult for companies with strict criteria.
Is it right for you?
The question now becomes: is a coding bootcamp right for you? The answer really depends on how much passion you have for coding. If you have the drive and the passion to go through an intense training program, and are certain that you want to have a career in coding after you complete it, then perhaps it is right for you.
On the other hand, if you’re unsure about coding, and you’re simply looking for a way to improve your career options, then this may not be the best solution. While learning coding languages is a great way to boost your skill set, there are better ways to start off if you are not passionate about it yet. Consider taking some classes at a local college to try it out, or even teaching yourself through online materials. This will allow you to get a sense of what coding is like, and you can decide if your heart is really in it. If it isn’t, you haven’t wasted a ton of time or money entering yourself into a program you no longer wish to complete.
In the end, whether or not a coding bootcamp is worth the investment is up to you. The classes are expensive, but most graduates are able to land a well-paying job afterward. However, if you are on the fence about coding, the investment may not be worth the time and energy you'll have to spend doing something you don't enjoy, so really think it over before you enroll.
|
OPCFW_CODE
|
A guy that looks like Bill Gates plays Steve Jobs. The horror! youtu.be/IeOxo7o9T8Q
Line breaks are awesome!
Use Shift+Enter for new lines.
Earlier I saw a "Say" link but now it seems to have disappeared while the space it occupied is still there. Is this cached CSS/JS?
I wish for a MacBook Plus (14-inch screen). alexvking.com/12_inch_...
I tried out the new MacBook at the Apple store, it's a bit disappointing to be honest. Yes it's super light and thin but the keyboard is a bit disappointing -- it's so thin the typing is less comfortable than the MacBook Air
Rust is nice, but… Why can't anybody make a language with the exact syntax of Python, but compiled? It doesn't need extreme safety features or other buzzwords.
Does syntax matter that much?
Do you not like Go? I found it quite friendly to work with, though I'd understand if you found it a bit restrictive.
How does a social network for black and white photography sound? Will you use it? Will you pay for it?
So... Ello without color? :D Is this something you're building? If so I have complete faith it'll be a pleasure to use, but it might not get enough users to be worth it
While I'm not interested in it, I believe that if you can take the simplicity and styling of Sublevel to a market with more interest, you will be successful.
I released Monochrome 4 (lucianmarin.com/monoch...) with support for iPhone 6, Android and Windows Phone screen dimensions.
Makes me wish there was a good tiling window manager for OSX. It'd be my daily driver then.
Microsoft just announced Visual Studio Code for Windows, OS X and Linux at Build.
Visual Atom more like. I'll stick with Sublime.
I hope I could just separate IntelliSense from VS Code and use it on Sublime... If only it were as simple as it sounds.
Everyone dreams of going to space, but very few people realize that we are in space. Why do you want to go somewhere if you are already there?
Not sure what you mean. My goal to go into space is not so much a dream, but a bucket list item. I'm going to try to go up as an astronaut (hopefully for a Mars mission). Failing that, I'll just purchase a ticket as a space tourist.
Sorry Lucian, but this read like something straight from Jaden Smith's Twitter feed. imgur.com/gallery/mDTYVyZ
Will there be any changes to the rest of the UI or just the homescreen?
It's funny to see a commenter didn't get the joke (or fact).
Every time I see a site made with Bootstrap I run the other way! Using Bootstrap means a few things: it looks bad, it works bad, the programmers are lazy, they don't care about their customers, the product isn't trustworthy.
I use Bootstrap at work to harmonise 4 years of each developer for himself on the frontend. It's improved the design and UX considerably. Sorry for using the best framework for frontend design? I don't think it's lazy at all
It's too bad you can't contribute and make it better, it seems like they could use someone of your caliber to improve it. With as popular as it is, you could potentially impact thousands of sites in a positive way.
|
OPCFW_CODE
|
convert datasets to tibbles
in keeping with tidy data, should we convert the datasets to tibbles? the ae_attendances dataset already is a tibble, but the other two are not.
converting to a tibble shouldn't cause any issues as a tibble is a superset of the dataframe class, but it vastly improves (IMO) the EDA phase by not cluttering the screen when printing the data
@Lextuga007 / @chrismainey what are your thoughts?
I have no problems with that. I'm not familiar with using tibbles so should I just pass the final data frame through to a tibble format at the end of construction?
Hmmm... I'm not maybe as 'tidyverse' as many, and I don't think this is a huge priority, but I see @Lextuga007 as the owner of this dataset, so it's up to you. I don't explicitly use tibbles (although you do if you use dplyr), so I'm happy either way, but they may be more confusing for new R users who may not see why they both are and are not data.frames.
it would just be a case of running the dataframe into the as_tibble() function. I would actually disagree that tibbles are more confusing to new R users. If they are learning the tidyverse they will be used to tibbles, and probably just think of them as dataframes. But when you get a dataframe that blows past the console view size because it prints the first 1000 rows and all columns I would say that can cause confusion! tibbles are also far more safe in terms of subsetting (see https://blog.rstudio.com/2016/03/24/tibble-1-0-0/)
Yeah, I see your point. I find the opposite for personal use, and I'd normally talk to new users about data.frames as 'special' lists of vectors of the same length, then present tibble and data.table as 'modern extensions.' I often want to see more rows and columns than tibble prints and it irritates me, but I've no objection to changing to tibbles. They will presumably be slightly larger in size, but shouldn't be any sort of issue. It's just a philosophy thing for me. If it was about speed and storage, I'd go data.table, but if we are preaching tidyverse (which is no bad thing) tibble is the way. I'm already old fashioned it seems ;-) .
so file size shouldn't really be an issue: the difference between a tibble and a data.frame is that a tibble is a data.frame with two additional classes (tbl_df and tbl), and I think it stores the column types as well. This article does a really good job of explaining the benefits and disadvantages.
In terms of printing more rows, use the format options of the tibble print function. Or use glimpse(). Or View() in RStudio. :-)
I think this may actually need to be expanded into a separate issue for a style guide for datasets. It would be useful to have consistency across any dataset provided by the package, the downside being we would be imposing some rules on contributions.
Cool. I'm happy to go with tibbles. Will do that now, as I've got 10 mins spare.
|
GITHUB_ARCHIVE
|
Redux, Reselect and ImmutableJS causing unnecessary renders on child components
Based on all the Redux and Reselect docs I have just read and re-read the below selector should only do the thing.toJS() processing if the Immutable Map that getThing() returns is not equal to the previous one.
...
// Selector
import { createSelector } from 'reselect'
const getThing = (state, thingId) => {
return state.entities.getIn(['things', thingId])
}
const makeThingSelector = () => {
return createSelector(
[getThing],
(thing) => {
return thing.toJS()
}
)
}
export default makeThingSelector
...
// Container
const makeMapStateToProps = () => {
return (state, ownProps) => {
const { thingId } = ownProps
const things = select.makeThingSelector()(state, thingId)
return {
hasNoThings: things.length === 0,
things
}
}
}
const Container = connect(
makeMapStateToProps,
mapDispatchToProps
)(Component)
...
This holds true unless I have a child 'smart' component. In this case, when the parent triggers a render, the selector called in the child component's container always processes the value, regardless of whether the result is new or not.
I have been trying to encapsulate the ImmutableJS API inside my selectors, but this means that to avoid a re-render on these nested components every time their parents update, I have to do a deep equality check in the shouldComponentUpdate function. This is expensive and doesn't seem like a decent solution.
The app state is normalised so the updated part of the state tree is not a hierarchical parent to the part of the state that the child component is dependent on.
Am I missing something key here?
The code you posted looks correct and should behave as expected. Please provide the code which uses this selector and connects your Components to the store.
On every store update react-redux performs the following steps (putting all internal complexities aside):
Calls mapStateToProps and mapDispatchToProps.
Shallowly compares the resulting props.
Re-renders the Component in case the new props differ from the previous ones.
This way mapStateToProps will be called on every store update by design. So will the following line of code:
...
const things = select.makeThingSelector()(state, thingId)
...
As you can see, a new reselect selector will be created every time, effectively preventing any memoization (there is no global state in reselect; memoization happens per selector).
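The per-selector caching described above can be illustrated with a hand-rolled sketch (a minimal illustration of the idea, not reselect's actual source; `createMemoizedSelector` is a hypothetical name): each selector instance remembers only its last input and result, so building a fresh selector inside mapStateToProps throws the cache away on every store update.

```javascript
// Illustrative sketch of reselect-style memoization (not the real library).
// A selector instance caches only its last input and last result.
function createMemoizedSelector(inputFn, resultFn) {
  let lastInput;
  let lastResult;
  let computations = 0;
  const selector = (state) => {
    const input = inputFn(state);
    if (input !== lastInput) {          // reference equality, as in reselect's default
      lastInput = input;
      lastResult = resultFn(input);
      computations += 1;
    }
    return lastResult;
  };
  selector.recomputations = () => computations;
  return selector;
}

const state = { thing: { id: 1 } };

// One shared instance recomputes only when its input actually changes:
const selector = createMemoizedSelector((s) => s.thing, (t) => ({ ...t }));
selector(state);
selector(state); // cache hit: same input reference, no recomputation
console.log(selector.recomputations()); // 1

// A fresh instance per call (the bug in the question) never hits its cache:
let recomputes = 0;
for (let i = 0; i < 3; i += 1) {
  const fresh = createMemoizedSelector((s) => s.thing, (t) => ({ ...t }));
  fresh(state);
  recomputes += fresh.recomputations();
}
console.log(recomputes); // 3
```

This is why hoisting the selector out of mapStateToProps helps: the instance, and therefore its cache, survives across store updates.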
What you have to do is change your code so that one and the same selector will be used on every invocation of mapStateToProps:
const thingSelector = select.makeThingSelector();
...
const makeMapStateToProps = () => {
return (state, ownProps) => {
const { thingId } = ownProps
const things = thingSelector(state, thingId)
return {
hasNoThings: things.length === 0,
things
}
}
}
UPDATE: Also, I don't see any reason to use the factory-style makeThingSelector and makeMapStateToProps. Why not just go with something like:
...
// Selector
export default createSelector(
[getThing],
(thing) => thing.toJS()
);
...
// Container
const mapStateToProps = (state, ownProps) => {
const { thingId } = ownProps
const things = select.thingSelector(state, thingId)
return {
hasNoThings: things.length === 0,
things
}
}
const Container = connect(
mapStateToProps,
mapDispatchToProps
)(Component)
...
Since the redux state in this application uses ImmutableJS data structures, Reselect may not be necessary.
Firstly, ImmutableJS manipulates only the slice of the data structure affected by a change operation, and therefore changes elsewhere in the larger state may not affect the slice being passed to the container.
Secondly, the redux connect function returns a pure container by default, and upon encountering the same slice it will not re-render. However, mapStateToProps will still be invoked, since the whole state and possibly the ownProps have changed.
For finer control, the rendering of same container can be linked directly to changes to a particular slice of the state and ownProps by adding areStatesEqual and areOwnPropsEqual predicate properties to the fourth parameter of the connect function (better known as the options object).
const mapStateToProps = ({ entities }, { thingId }) => {
const things = entities.getIn(['things', thingId]).toJS();
return {
hasNoThings: things.length === 0,
things
};
};
const Container = connect(
mapStateToProps,
mapDispatchToProps,
undefined, {
areOwnPropsEqual: (np, pp) => np.thingId === pp.thingId,
areStatesEqual: (ns, ps) => ns.entities.get('things').equals(
ps.entities.get('things')
)
}
)(Component);
If both of these predicates are true, not only would the container and its children not re-render, the mapStateToProps would not even be invoked!
|
STACK_EXCHANGE
|
The open nature of the wireless medium leaves it vulnerable to intentional interference attacks, typically referred to as jamming. This intentional interference with wireless transmissions can be used as a launchpad for mounting Denial-of-Service attacks on wireless networks. Typically, jamming has been addressed under an external threat model. However, adversaries with internal knowledge of protocol specifications and network secrets can launch low-effort jamming attacks that are difficult to detect and counter. In this work, we address the problem of selective jamming attacks in wireless networks.
In these attacks, the adversary is active only for a short period of time, selectively targeting messages of high importance. We illustrate the advantages of selective jamming in terms of network performance degradation and adversary effort by presenting two case studies; a selective attack on TCP and one on routing. We show that selective jamming attacks can be launched by performing real-time packet classification at the physical layer. To mitigate these attacks, we develop three schemes that prevent real-time packet classification by combining cryptographic primitives with physical-layer attributes. We analyze the security of our methods and evaluate their computational and communication overhead.
Jamming attacks are much harder to counter and pose more security problems. They have been shown to actualize severe Denial-of-Service (DoS) attacks against wireless networks. In the simplest form of jamming, the adversary interferes with the reception of messages by transmitting a continuous jamming signal, or several short jamming pulses. Typically, jamming attacks have been considered under an external threat model, in which the jammer is not part of the network. Under this model, jamming strategies include the continuous or random transmission of high-power interference signals.
In this paper, we address the problem of jamming under an internal threat model. We consider a sophisticated adversary who is aware of network secrets and the implementation details of network protocols at any layer in the network stack. The adversary exploits his internal knowledge to launch selective jamming attacks in which specific messages of "high importance" are targeted. For example, a jammer can target route-request/route-reply messages at the routing layer to prevent route discovery, or target TCP acknowledgments in a TCP session to severely degrade the throughput of an end-to-end flow.
To launch selective jamming attacks, the adversary must be capable of implementing a "classify-then-jam" strategy before the completion of a wireless transmission. Such a strategy can be actualized either by classifying transmitted packets using protocol semantics, or by decoding packets on the fly. In the latter method, the jammer may decode the first few bits of a packet to recover useful packet identifiers such as packet type, and source and destination addresses. After classification, the adversary must induce a sufficient number of bit errors so that the packet cannot be recovered at the receiver. Selective jamming requires an intimate knowledge of the physical (PHY) layer, as well as of the specifics of upper layers.
- Network module
- Real Time Packet Classification
- Selective Jamming Module
- Strong Hiding Commitment Scheme (SHCS)
- Cryptographic Puzzle Hiding Scheme (CPHS)
|
OPCFW_CODE
|
Why Automation is the Next Frontier of Enterprise AI
A short history of enterprise usage of Data
When you look into the past, you realize that the vocabulary used to describe the intelligent usage of data has evolved a lot.
Everything started in 2012, with the term Big Data. Remember? It was when the planet understood that our society was going to generate gigantic amounts of data, due to smartphone usage, cloud apps, IoT, etc. … and something useful could maybe be done with it.
If you think about the essence of the name Big Data, it was about just collecting raw data, but not yet about processing it — and even less about knowing exactly for what purpose to use it. There was an intuition that something was happening with data, but nothing was concrete and really mature at this time.
Today Big Data is not used so much. Look at its Google searches: after a huge initial growth, it has plateaued and even seems to be starting to decrease.
The name Big Data was about just collecting raw data, but not yet about processing it — and even less about knowing exactly for what purpose to use it
The rise of Machine Learning
So what term came after Big Data, and made it old fashioned? Machine Learning. This expression exploded in 2016, and quickly exceeded Big Data in 2017.
The word Machine Learning is interesting because, for the first time, we were talking about the intelligent processing itself of the collected data. We started to do something tangible with data: predictions, recommendations, forecasts, decisions, etc. …
The art of using data in a smart way took on another incarnation with the term Data Science (and its associated job, data scientist, called at the time 'the sexiest job of the 21st century'), of which Machine Learning is a category.
But still Data Science and Machine Learning remain tech expressions, lacking clear business usage or outcome behind them.
The word Machine Learning is interesting because, for the first time, we were talking about the intelligent processing itself of the collected data.
AI and the use of data in end-to-end systems
Then came the rebirth of Artificial Intelligence. This expression was very popular 20 years ago (and even more in the '90s), was then forgotten between 2008 and 2016, and finally became fashionable again.
Now with Artificial Intelligence, we started to illustrate that data, and the fact of processing it with machine learning, could be used in complex systems to build a new generation of enterprise software. This was applicable to all types of industries: automotive, retail, finance, energy, education, … and all types of data: images, speech, documents, …
What’s more, the purposes and outcomes started to be better defined: improve an HR hiring process, increase a commercial sales efficiency, optimize a logistic network, …
However, there has probably been too much enthusiasm around Artificial Intelligence (and its incorrectly associated equivalent, Deep Learning). Yes, it can solve a lot of problems, but not all problems, and the problems really worth solving with AI / machine learning / data are actually not easy to spot.
As a consequence, over the past few months we have started to observe an AI and deep learning fatigue, clearly visible on the graph below. Meanwhile, data science is still getting traction, showing that the usage of data itself is not questioned, and is still future proof.
Over the past few months, we have started to observe an AI and deep learning fatigue
Using AI to automate enterprise processes
So what’s next after AI? We believe that automation is a good candidate to describe the next stage in enterprise usage of AI.
Indeed, when an AI system fully automates an enterprise process at scale (like e.g. automatically handling insurance subscriptions, without any human intervention) — as opposed to when it is not used to directly automate something (like e.g. monthly forecasting a logistic demand) — it must embed extra design attributes that make it actually more mature. This AI maturity, hard to achieve, can be illustrated by at least four characteristics: robustness, ethics & transparency, economic viability, and entry barrier.
When an AI system fully automates an enterprise process at scale, it must embed extra design attributes that make it actually more mature than others.
Robustness
By definition, the underlying predictive system used in an AI-automated industrial process must be 100% robust – especially as there is no human in the loop. Indeed, you can't rely on it if there is a risk of instability, or if it is not designed to be antifragile.
Ethics and transparency
When automated decisions are taken thousands and thousands of times per day, you must be sure that what you are doing is fair and allowed by regulation. If it is not, chances are low that your AI ends up automating anything and has real impact.
Economic viability
Using AI to automate a process has a cost, generally coming from software and/or computing resources subscriptions. The benefit of this automation must be significantly higher – 5X to 10X – than this cost, to get a clear ROI (Return On Investment). The outcome can be, for example, additional revenues or increased operational efficiency.
In this sense, the economic viability of an AI-automated process has to exist. Put another way, even the fanciest predictive system, without any demonstrated ROI, won't be used to automate anything in the long run.
Entry barrier
Finally, to be able to exist, an AI system automating a process must have a certain level of necessary complexity, not reachable with a trivial alternative. Indeed, if the process can be automated by simple decision rules (e.g. if age > 30, then …), and no – or weak – AI is needed, this is not where AI automation will be sustainable. Traditional software – or RPA (Robotic Process Automation, i.e. hard-coded decision rules) – will just do the job.
See you in 3 years?
Let’s have a final look at the trend of the word automation. After 10 years of decline, it seems to be rising again. What will happen next? Let’s meet in 3 years to see what automation has become, and what the next big trend is!
|
OPCFW_CODE
|
Grouping Sets of Rows with a Toggle Button
I'm attempting to hide groups of rows with a toggle button. In this instance, Rows 15 through 20, 22 through 25, 27 and finally 30 through 32.
The code I have so far works as intended.
Private Sub ToggleButton5_Click()
    Dim xAddress As String
    xAddress = ("15:20")
    If ToggleButton5.Value Then
        Application.ActiveSheet.Rows(xAddress).Hidden = True
        ToggleButton5.Caption = "Show Assets"
    Else
        Application.ActiveSheet.Rows(xAddress).Hidden = False
        ToggleButton5.Caption = "Hide Assets"
    End If
End Sub
How do I add multiple groups of rows to this code?
I tried
xAddress = ("15:20,22:25")
xAddress = "15:20,22:25"
xAddress = ("15:20 And 22:25")
and I tried individually
xAddress = ("15,16,17,18,19,20,22,23,24,25")
This last line works somewhat but runs into errors if more than maybe six row numbers are cited (going from memory on past attempts).
Use Range instead of Rows.
Application.ActiveSheet.Range(xAddress).Hidden = True
If you are using Range, make sure that the row reference is in the form row:row, e.g. 1:1, 2:2, 3:3 and not 1, 2, 3.
I generally steer clear of Rows. For example,
Debug.Print Rows("1,2,3").Address
returns
$123:$123
Not what you expect, right?
If you need a "toggle", then consider the implementation of a "radio-button-logic". It is either on or off, thus if it is not Hidden it should be Hidden and vice versa. Usually it is 1 line only:
Sub ToggleRowsVisibility()
    With ThisWorkbook.Worksheets(1).Range("15:20,22:25")
        .EntireRow.Hidden = Not .EntireRow.Hidden
    End With
End Sub
In the case of the code, it can be outside the If condition:
Application.ActiveSheet.Rows(xAddress).Hidden = ToggleButton5.Value
Requirements:
To hide/unhide a multiple selection (i.e. non-contiguous or multiple areas) of rows by the click of a button.
Target rows: [15:20], [22:25], [27] and [30:32] (row 27 is included to show a mixed combination of rows).
Non VBA Solution:
This can be achieved without VBA. Use the Range.Group method (Excel) manually to group the rows. Unfortunately, this method cannot be applied at once to a multiple selection, so you’ll have to apply the method to each range of contiguous rows separately.
Select the first range of rows to be grouped (i.e. [15:20])
In the Excel menu, click the [Data] tab then in the [Outline] group, click the [Group] option in the [Group] dropdown menu.
The rows selected are now grouped with a button beside the rows heading. Use this button to toggle the visibility of the respective grouped rows.
Repeat the action for the remaining group of rows.
The advantages of this method compared with the VBA method are:
The grouped rows are fixed, even if new rows are added or deleted. With the VBA method the rows are “hard-coded” and will lose focus when rows are inserted/deleted.
The visibility of the grouped rows can be toggled all at once using the buttons located at the top-left corner, i.e. 1 to hide and 2 to unhide.
The visibility of the grouped rows can be toggled independently from each other using each group's button located in the rows heading. VBA would require either independent buttons or additional variables.
VBA Solution:
If you must use VBA then I suggest using the correct syntax for multiple selections:
For a single row use: Rows(15).Select or Range("15:15").Select
For a range of contiguous rows use: .Rows("15:20").Select or Range("15:20").Select
For multiple selection of rows use the Range method as the Rows method doesn't work with multiple areas and when applied to a multiple selection returns only the rows of the first area.
For multiple selection of single rows use: .Range("30:30,32:32,34:34,36:36,39:39").Select
For multiple selection of non-contiguous rows use: .Range("15:20,22:25,27:27").Select
Proposed VBA Solution:
Following the above your code should be:
Private Sub ToggleButton5_Click()
    Dim xAddress As String
    xAddress = "15:20,22:25,27:27,30:32" 'Update as required
    With ToggleButton5
        Me.Range(xAddress).Rows.Hidden = .Value
        .Caption = IIf(.Value, "Show Assets", "Hide Assets")
    End With
End Sub
|
STACK_EXCHANGE
|
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
use App\Http\Requests;
use App\Shop;
class ShopsController extends Controller
{
public function index() {
return view('shops.index');
}
public function search() {
return view('shops.search');
}
public function result(Request $request) {
$keyword = $request->keyword;
$area = $request->area;
$category = $request->category;
$ar_method = $request->method;
// Filtering
// Keyword search
$shops = Shop::where('name', 'LIKE', "%{$request->keyword}%")->get();
// Area search
if(null != $area){
$shops = $shops->where('area',$area);
}
// Category search
if(null != $category){
$shops = $shops->where('category',$category);
}
// Payment method search
if(null != $ar_method){
// Prepare a container for the foreach loop
$methodColl = collect();
foreach( $ar_method as $method ){
// Merge each matching subset into the collection.
$methodColl = $methodColl->merge($shops->where($method, 1));
}
// Put the filtered result back into $shops
$shops = $methodColl;
// Remove duplicates
$shops = $shops->unique();
}
// End of filtering
// Center coordinates for the map; if nothing is selected, fall back to the current location.
$latlng = ['lat'=>35.6284, 'lng'=>139.736571];
switch ($area) {
case '新宿': // Shinjuku
$latlng = ['lat'=>35.68959, 'lng'=>139.69821];
break;
case '品川': // Shinagawa
$latlng = ['lat'=>35.6284, 'lng'=>139.736571];
break;
case '渋谷': // Shibuya
$latlng = ['lat'=>35.65803, 'lng'=>139.699447];
break;
default:
$latlng = null;
break;
}
return view('shops.result')->with(['shops' => $shops, 'latlng' => $latlng]);
}
// Choosing the navigation start point and travel mode
public function select($id){
$shop = Shop::find($id); // Record fetched from the DB by its id.
$latlng = ['lat'=>$shop->lat, 'lng'=>$shop->lon];
return view('shops.select')->with(['shop' => $shop, 'latlng'=>$latlng]);
}
// Navigation action
public function navi(Request $request, $id){
// Controller for route display.
// Get the lat/lng of the origin (the place chosen in the search) and the destination (the clicked shop id), and pass them to the view.
$shop = Shop::find($id); // Record fetched from the DB by its id.
$s_latlng = null;
if(is_numeric($request->startLat) and is_numeric($request->startLng)){
$s_latlng = ['lat'=>floatval($request->startLat), 'lng'=>floatval($request->startLng)];
} else {
$s_latlng = null;
}
$modeType = $request->modeType;
return view('shops.navi')->with(['shop' => $shop, 's_latlng' => $s_latlng, 'modeType'=>$modeType]);
}
}
|
STACK_EDU
|
//
// String+Tokenization.swift
// DownSwift
//
// Created by Stepan Usiantsev on 28.04.2021.
//
import Foundation
extension String {
/**
Tokenizes the given string into styled text areas.
- Parameters:
- config: A dictionary that stores text styles.
*/
func textAreas(config: [Character: [NSAttributedString.Key : Any]]) -> [TextArea] {
/**
An optional property that stores the state of the `TextArea` entity currently being built.
*/
var partialText: TextArea?
/// A property that stores an array of `TextArea` entities.
var textParts = [TextArea]()
/// A Boolean indicating whether the current character is escaped and should be kept as a literal symbol within the current text area
var isSkipped = false
/// A counter indicating if text area's open or closed.
///
/// `areaCounter = 1` - text area is open;
///
/// `areaCounter = 2` - text area is closed
var areaCounter = 0
/**
Determines the text area for a character, then either adds it to the existing text area or creates a new one.
- parameter character: The character that we need to tokenize.
*/
func tokenize(_ character: Character) {
if var symbol = partialText {
guard isLetter(character, config) || isSkipped else {
if character == "\\", !isSkipped {
return isSkipped = true
} else {
areaCounter += 1
textParts.append(symbol)
}
partialText = nil
if areaCounter < 2 {
partialText = newTextArea(character, config)
} else {
areaCounter = 0
}
return
}
symbol.string.append(character)
partialText = symbol
isSkipped = false
} else {
guard !isLetter(character, config) else { return partialText = newTextArea(character, config) }
if character != "\\" {
areaCounter += 1
partialText = newTextArea(character, config)
} else {
isSkipped = true
partialText = newTextArea(nil, config)
}
}
}
forEach(tokenize)
if let lastSymbol = partialText, !lastSymbol.string.isEmpty {
textParts.append(lastSymbol)
}
return textParts
}
}
/// Creates a new `TextArea` entity depending on the character.
///
/// - parameter character: The character that we need to tokenize.
/// - parameter config: A dictionary that stores text styles.
private func newTextArea(_ character: Character?, _ config: [Character: [NSAttributedString.Key : Any]]) -> TextArea {
/* If character is not tokenizing at the moment and we meet special symbols
then we define what specific zone of markdown we should parse.
We're creating an empty Text entity, where we'll store our markdown zone.
*/
if let character = character {
return config[character] == nil ? TextArea(areaSymbol: nil, string: "\(character)") : TextArea(areaSymbol: character, string: "")
} else {
return TextArea(areaSymbol: nil, string: "")
}
}
/// Checks whether the character is a plain letter, i.e. neither the escape character nor a text style area symbol.
///
/// - parameter character: The character that we need to check.
/// - parameter config: A dictionary that stores text styles.
private func isLetter(_ character: Character, _ config: [Character: [NSAttributedString.Key: Any]]) -> Bool {
if character == "\\" {
return false
} else {
return !config.contains(where: { $0.key == character })
}
}
|
STACK_EDU
|
import logging
import os
import re
import shlex
import shutil
import stat
import sys
from pathlib import Path
from pprint import pformat
import pytest
from pyscaffold import shell
from .helpers import uniqstr
def test_ShellCommand(tmpfolder):
echo = shell.ShellCommand("echo")
output = echo("Hello Echo!!!")
assert next(output).strip('"') == "Hello Echo!!!"
python = shell.ShellCommand("python")
output = python("-c", 'print("Hello World")')
assert list(output)[-1] == "Hello World"
touch = shell.ShellCommand("touch")
touch("my-file.txt")
assert Path("my-file.txt").exists()
def test_shell_command_error2exit_decorator():
@shell.shell_command_error2exit_decorator
def func(_):
shell.ShellCommand("non_existing_cmd")("--wrong-args")
with pytest.raises(SystemExit):
func(1)
def test_command_exists():
assert shell.command_exists("python")
assert not shell.command_exists("ldfgyupmqzbch174")
def test_pretend_command(caplog):
caplog.set_level(logging.INFO)
# When command runs under pretend flag,
name = uniqstr()
touch = shell.ShellCommand("touch")
touch(name, pretend=True)
# then nothing should be executed
assert not Path(name).exists()
# but log should be displayed
logs = caplog.text
assert re.search(r"run.*touch\s" + name, logs)
def test_get_executable(tmpfolder):
# Some python should exist
assert shell.get_executable("python") is not None
# No python should exist in an empty directory when the global $PATH is not included
assert shell.get_executable("python", tmpfolder, include_path=False) is None
# When using sys.prefix python should be sys.executable (+ version suffix)
python = Path(sys.executable).resolve()
bin_path = shell.get_executable("python", include_path=False, prefix=sys.prefix)
bin_path = Path(bin_path).resolve()
assert bin_path.stem in python.stem
assert bin_path.parent == python.parent
# Non existing binaries => None
assert shell.get_executable(uniqstr()) is None
def test_get_command():
python = shell.get_command("python", prefix=sys.prefix, include_path=False)
assert next(python("--version")).strip().startswith("Python 3")
with pytest.raises(shell.ShellCommandException):
python("--" + uniqstr())
def test_get_command_inexistent():
name = uniqstr()
inexistent = shell.get_command(name, prefix=sys.prefix, include_path=False)
assert inexistent is None
def test_get_command_with_whitespace(tmpfolder):
# Given an executable exists in a path with spaces
prefix = Path(tmpfolder, "with spaces")
if os.name == "posix":
executable = Path(prefix, "bin", "myexec")
executable.parent.mkdir(parents=True, exist_ok=True)
executable.write_text("#!/bin/sh\n\necho 42")
executable.chmod(stat.S_IMODE(stat.S_IREAD | stat.S_IEXEC))
elif os.name == "nt": # Windows
executable = Path(prefix, "Script", "myexec.bat")
executable.parent.mkdir(parents=True, exist_ok=True)
executable.write_text("@echo off\r\necho 42", encoding="ascii")
# ^ Let's use a basic encoding + CRLF for windows
else:
pytest.skip("Requires either POSIX-compliant OS or Windows")
return
# ----> helps when debugging
exec_path = shell.get_executable("myexec", prefix=prefix, include_path=False)
print("exec_path:", pformat(shlex.quote(exec_path)))
print("contents:\n", pformat(executable.read_text()))
assert exec_path is not None
assert Path(exec_path).exists()
# <----
# When we create a command with `get_command`
cmd = shell.get_command("myexec", prefix=prefix, include_path=False)
assert cmd is not None
# it should run without any problems
completed = cmd.run()
print("stdout:", completed.stdout)
assert int(completed.stdout) == 42
completed.check_returncode()
def test_get_editor(monkeypatch):
# In general we should always find an editor
assert shell.get_editor() is not None
# When there is a problem, then we should have a nice error message
monkeypatch.delenv("VISUAL", raising=False)
monkeypatch.setenv("EDITOR", "")
monkeypatch.setattr(shell, "get_executable", lambda *_, **__: None)
with pytest.raises(shell.ShellCommandException, match="set EDITOR"):
print("editor", shell.get_editor())
def test_edit(tmpfolder, monkeypatch):
vi = shutil.which("vim") or shutil.which("vi")
if not vi:
pytest.skip("This test requires `vim` or `vi` to be available")
# Given a file exists
file = tmpfolder / "test.txt"
file.write_text("Hello World", "utf-8")
assert file.read_text("utf-8").strip() == "Hello World"
# Then `shell.edit` should be able to manipulate it
monkeypatch.delenv("VISUAL", raising=False)
monkeypatch.setenv("EDITOR", vi)
shell.edit(file, "-c", ":%s/World/PyScaffold/g", "-c", ":wq")
# ^ a bit of vim scripting so it does not wait for the user to type
assert file.read_text("utf-8").strip() == "Hello PyScaffold"
def test_join():
# Join should work with empty iterables
assert shell.join([]) == ""
assert shell.join({}) == ""
assert shell.join(()) == ""
assert shell.join(x for x in []) == ""
# Join should accept Path objects
p1 = Path("a", "b", "c")
p2 = Path("d", "f", "g")
assert shell.join([p1, p2]) == f"{p1} {p2}"
# Join should be the opposite of split
args = ["/my path/to exec", '"other args"', "asdf", "42", "'a b c'"]
assert shlex.split(shell.join(args)) == args
def test_non_existent_file_exception():
cmd = shell.ShellCommand("non_existent_cmd", shell=False)
with pytest.raises(shell.ShellCommandException):
cmd()
|
STACK_EDU
|
I want to build some VM images with a few customisations. For the sake of a concrete example, let’s consider an Amazon AMI with zfs root, and I want to change the pool and filesystem layout the builder will create.
So the starting point before any customisation would be:
$ nix-build /data/foss/nixpkgs/nixos/release.nix -A amazonImageZfs.x86_64-linux
However, even this fails, with nixos-install inside the vm getting an error trying to allocate memory partway through building the new store.
It isn’t surprising that a zfs-using system needs more memory when doing intensive filesystem writes, but I guess I’m a little surprised that the config hasn’t been adjusted to give that installer vm more memory, since it fails quite consistently at least for me.
It’s also a little disappointing, because if it had been, it would probably have given me a clue about how to pass in other customisations, which is my main question. I can see the virtualisation.memorySize option, but it seems to be valid only in NixOS tests.
So, core questions:
in the first instance, how do I pass an option to give the vm used by nix-build in the above more RAM?
more generally, how do I pass other customisations to control the resulting build?
for extra credit, if there’s a way to make this nicer with flakes, I’m happy to go in that direction
I’ve looked over the code and there are clear constructions in there that seem to be intended to facilitate this, but I have no idea how to use or access them. Is there an example of how to use all this?
I generate custom VM images too. Some are larger and have run out of memory inside the VM run. My expressions use nixos/lib/make-disk-image.nix, which passes a hard-coded value (1024 megs) to Qemu/KVM for guest memory.
I don’t know how or where attributes from release.nix invoke a VM run, so I can’t say whether it is the same problem. Speculating a bit, maybe the typical VM run use cases don’t need more memory, so it hasn’t been a problem for many users or regular NixOS release activities.
One way to fix the make-disk-image function would be to add a memSize ? 1024 argument so that callers may override it as needed. I have worked around it in my private repo by making a custom-disk-image function that does that. If that is useful it could be merged upstream.
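A sketch of that wrapper approach (illustrative only; `make-disk-image.nix` is the real file, but the `memSize` argument shown here is the proposed change and only takes effect once the function is patched to accept it):

```nix
# custom-disk-image.nix -- a thin wrapper around make-disk-image.nix
# that threads through an overridable guest memory size (sketch only).
{ memSize ? 4096, ... }@args:
import <nixpkgs/nixos/lib/make-disk-image.nix> (args // {
  # Forwarded to the Qemu/KVM VM that runs nixos-install; this only
  # works if make-disk-image.nix is modified to accept `memSize`
  # instead of hard-coding 1024 megs.
  inherit memSize;
})
```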
If there are other workarounds, I’d love to learn about them too.
Not sure how to expose memSize for a specific image, or in general, without adding a parameter to each call site, e.g., the many ‘image’ expressions in nixos/modules/virtualisation. I’m hoping someone with more knowledge will advise on further improvements.
What would be neat is some singular knob, like the virtualisation.memorySize module option, to populate the value of memSize parameter for the dozen or so make-disk-image usage sites. I’m not sure if it is possible, but even if it were, it seems slightly weird to mix the guest configuration with the image-generating step.
|
OPCFW_CODE
|
A new technology has arrived at the KSC, and with it Kerbals will only get more Room to Maneuver! We’re excited to release Kerbal Space Program 1.7: Room to Maneuver, our newest content-filled update, which gives players new features and a wide variety of improvements to help Kerbals explore farther than ever before!
This update includes two useful new navigation upgrades, a revamp of all of the small maneuvering motors (including new variants for the Twitch, Ant and Spider), and a brand new galaxy texture map to keep our astronauts mesmerized by the beauty of the celestial vault. A great number of bug fixes have also been packed into this update, not to mention a new 3.75m nose cone, plus a 5m one for those extra-large fuel tanks in the Making History Expansion!
Let’s go through some of this update’s highlights:
Maneuver Mode
Probably the most impactful feature in this update, the Maneuver Mode is a new navigation tool that gives you access to useful orbital information in both Flight and Map mode, and lets you precisely and easily adjust maneuver nodes, all to help you fine-tune your interplanetary transfers.
Click here to see a video showcasing this feature!
Altimeter mode toggle
A long-requested quality-of-life feature that lets you toggle the altitude mode between Above Sea Level (ASL) and Above Ground Level (AGL) by simply clicking on the new icon next to the altimeter. Hopefully this will improve the KSC’s survival rates…
This time around, small maneuvering engines were the focus of our revamping efforts. The Ant, Twitch, Puff, Place-Anywhere 7, RV-105 Thruster Block, and Vernor Engines now look better than ever, plus the Twitch, Ant and Spider include new stunning variants. On top of that, we’ve added new tuning values on some of those engines.
Galaxy Texture Map Update
The game’s galaxy cubemap has been updated in Room to Maneuver. This environment map has been carefully crafted to reflect a nicer color palette and more defined celestial objects. With double the resolution, it will be impossible not to enjoy the view while exploring Kerbin’s star system.
Click here to see high res images.
Another small quality-of-life feature we’re adding to the game is a scrollbar in the Part Action Window. This will keep the PAW within the bounds of the screen when it gets too big, something that might come in handy for players who use mods and fill those windows up.
To learn more you can read the full Changelog here:
+++ Improvements
* Upgraded Galaxy Textures.
* Add new flight UI mode that includes in-depth orbital information.
* Add advanced maneuver node editor, allowing player to edit maneuver nodes more precisely.
* Add an Altitude toggle function to the Altimeter. The altitude modes can be switched between AGL and ASL.
* Part Action Windows (PAWs) now generate scrollbars and keep themselves within the bounds of the screen when there is more data than will fit the screen.
* Automatic AGL/ASL toggle values when in orbit. The altimeter value is set as ASL while in orbit, the AGL/ASL setting is preserved.
* The altimeter AGL behaviour when underwater calculates the vessel altitude from the sea floor.
* Update Addons and Mods external site link from Main Menu.

+++ Localization
* A localization tag is no longer displayed in the status section of the PAW of the Advanced Grabbing unit in the Asteroid Redirect Mission, Part 2 tutorial.

+++ Parts
Updated Parts (reskinned):
* 24-77 Twitch
* LV-1 Ant
* LV-1R Spider
* O-10 Puff
* Place-Anywhere 7
* RV-105 Thruster Block
* Vernor Engine

Color Variants:
* 24-77 Twitch (New "Orange" and "Gray and Orange" color variants)
* LV-1 Ant (New "Shrouded" and "Bare" variants)
* LV-1R Spider (New "Shrouded" and "Bare" variants)

Other Part changes:
* Add a 3.75m nose cone.
* Rebalanced the following engines: Twitch, Spark, Place-Anywhere 7, RV-105 RCS, Vernor.
* Fix IVA external cameras in Mk1 Command Pod, MK2 Lander Can, Mk2 Command Pod.
* Fix EVA range on Cupola, HECS2, RC-001S, RC-L01 science containers.
* Previously Revamped Parts moved to zDeprecated.

Parts revamped in 1.4:
- TR-2V Stack Decoupler -> TD-06 Decoupler
- TR-18A Stack Decoupler -> TD-12 Decoupler
- Rockomax brand decoupler -> TD-25 Decoupler
- TR-38-D -> TD-37 Decoupler
- TR-2C Stack Separator -> TS-06 Stack Separator
- TR-18D Stack Separator -> TS-12 Stack Separator
- TR-XL Stack Separator -> TS-25 Stack Separator
- ROUND-8 Toroidal Fuel Tank -> R-12 'Doughnut' External Tank
- Rockomax X200-8 Fuel Tank -> same name
- Rockomax X200-16 Fuel Tank -> same name
- Rockomax X200-32 Fuel Tank -> same name
- Rockomax Jumbo-64 Fuel Tank -> same name
- Mk 1-2 Pod -> Mk 1-3 Pod

+++ Bugfixes
* Fix PQS normals, stops planets from having mismatched seams when seen from orbit.
* Fix bug where symmetry would break animations on some parts.
* Fix input locks on Return to KSC button at top of altimeter in flight scene.
* Fix typographical error in the description of the vessel 'ComSat Lx'.
* Fix typographical error in the Suborbital Flight training tutorial description.
* Fix UI issue for purchase button on part tooltips.
* Fix vessel default name in rename vessel dialog displaying as an autoloc.
* Fix manufacturer localization on the Kerbodyne S4-512 Fuel Tank.
* Fix jitter in heading readout on Navball for vessels in prelaunch state.
* Fix "Learn More" text exceeding size of window in the Asteroid Redirect scenario descriptions.
* Fix flag decal on Male Kerbals Jetpack being off-center.
* Fix Kerbal falling off the launchpad flagpole causing the flagpole to explode.
* Fix bulkhead profile part filtering on MK-0, 2 & 3 fuel tanks, J-90 "Goliath", Communotron 16-S, RA-2, RA-15, and RA-100 relay antenna.
* Fix thermal overlay rendering on parts that have lights.
* Fix Debug tool saying LPE for orbit param when its applying Arg of Periapsis.
* Fix Atmosphere line appearing from the surface of planets.
* Fix Flags and Kerbals loading above terrain (flying) and flags being removed from game.
* Fix NaN bug in DV calcs when in orbit around CBs with no atmosphere.
* Fix site node waypoints duplicating every scene change.
* Fix service bays unable to click parts inside after jettison when part loads.
* Fix allow staging of interstage fairings after they are decoupled.
* Fix Mk3 cargo bay registering collisions and blowing up parts of vessels inside them.
* Fix Engine Plate handling in dV calculations.
* Fixed UI scale issue where setting high scale in UI, navball and altimeter would clip off some elements.
* Fix Kerbal helmet shadow rendering in "Simple" rendering setting.
* Fix decouple node function on docking ports in space.
* Fix shrouds being left attached to docking nodes when decoupled (now becomes separate debris).
* Fix Responsiveness audio setting appearing in red text in settings menu.
* Fix NRE on interstage procedural fairing in editor scenes.
* Fix AOORE when Kerbal leaves a command seat on a vessel that has an active ISRU.
* Fix issue where Delta-V app menu could become unresponsive in editor scenes.
* Fix Navball, funds, science and reputation gauges disappearing when UI scale set > 170% on some resolution settings.
* Fix NRE in resources app that could occur when moving the mouse over resources in the app.

+++ Mods
* Changed Animation behavior of ModuleDeployablePart, ModuleDeployableRadiator, ModuleDeployableAntenna and ModuleDeployableSolarPanel to be WrapMode.ClampForever instead of WrapMode.
* ModuleDecouple and ModuleAnchoredDecoupler rebased to a common class - ModuleDecouple.
* Add FXModuleAnimateRCS - handles emissives on RCS part modules.
* Add EmissiveBumpedSpecularMapped part shader.
* Make class DoubleCurve annotate Serializable.
* Fix version dependency checking for mods.

+++ Miscellaneous
* None at this time.

=========================
Making History v1.7.0 - Requires KSP v1.7.0
=========================

+++ Parts
* Added a 5m nose cone.

+++ Bugfixes
* Fix cursor disappearing behind Gilly while moving the camera around the planet in Mission Builder GAP.
* Fix export filename not updating when mission is renamed and re-exported.
* Fix user being able to select nodes from the left hand toolbox in the Mission builder even when that panel is hidden.
* Fix the Steam Select Craft to Load dialog window which generated NREs when using Menu Navigation on the Steam tab and launching in the VAB.
* Fix message dialog in Meet Me in Zero G stock mission.

+++ Missions
* The "Craft incompatible" text is no longer shown in the stock missions.
Kerbal Space Program 1.7: Room to Maneuver is now available on Steam and will soon be available on GOG and other third-party resellers. You will also be able to download it from the KSP Store if you already own the game.
BTW... You can download wallpapers of the Room to Maneuver art here:
|
OPCFW_CODE
|
Of ThinkPads and MacBooks
I was a Mac user from 2009. I was working with iOS development, and it made sense to have a MacBook for the SDK. I was curious too, because I'd been using Linux distros (Debian, then Ubuntu, then Gentoo when Ubuntu was getting too heavy for my old laptop) for some time and was a bit tired of making everything work. Losing control was uncomfortable at first, but having so many things work out of the box (like sleep and hibernation!) was worth it. And Mac apps were much more polished (oh, Garageband).
When I arrived at INPE I got a Linux workstation, the mighty Papera (all the computers there have Tupi names, Tupi being a language spoken by native peoples here in Brazil). And I tested some new things, like using Awesome1 as a window manager, and loved it. But it lasted just a few months, because the machines were swapped for some iMacs and Papera was assigned to another person. I missed a tiling window manager, but I also found Homebrew2, which helped a lot in setting up a dev environment on OSX (I know MacPorts and Fink existed, but writing a Homebrew formula is pretty easy; I even contributed one back), so no big problems in the transition.
But after some time I was getting uneasy. New OSX versions seemed to remove features instead of adding them (sigh, matrix-organized Spaces...). The lack of expandability on new laptops (despite the MacBook Air being an awesome computer) was pushing me away too, because a maxed-out one would cost way more than I was willing to pay. And I was spending most of my time in SSH sessions to other computers or using web apps, so why not go back to Linux?
At the end of 2012 I bought a used ThinkPad X220 with the dock and everything. When I was younger I always liked the look, with its black and red design, and the durability (MacBooks are pretty, but they are easy to scratch and bend). The X220 was cheap and in perfect state, and with a small upgrade when I went to PyCon (ahem, 16 GB RAM and a 128 GB SSD) it is a BEAST now. And all these benefits too:
I have Awesome again!
Updated packages thanks to pacman (I installed Arch Linux and I'm loving it)
When I need a new package it is as easy to write a PKGBUILD file as it was to write a Homebrew formula. I wrote some Debian packages in the past and they worked, but there were so many rules and parts that I don't think I want to write one again. I recognize that a lot of the rules and parts make sense with a project as big as Debian (and Ubuntu and everyone else), but it could be simpler.
Sleep works! Hibernation works! Except when it doesn't because your EFI is half full after the kernel wrote some stacktraces and the chip refuses to wake up.
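The PKGBUILD point above can be made concrete. A minimal PKGBUILD is just a handful of variable assignments and one function (this is a made-up toy package for illustration; a real one would also declare source= and checksums):

```shell
# PKGBUILD -- toy example (hypothetical package, for illustration only)
pkgname=hello-demo
pkgver=1.0
pkgrel=1
pkgdesc="Toy package showing how small a PKGBUILD can be"
arch=('any')
license=('MIT')

package() {
  # makepkg provides $pkgdir; install a one-line script into it.
  mkdir -p "$pkgdir/usr/bin"
  printf '#!/bin/sh\necho hello\n' > "$pkgdir/usr/bin/hello-demo"
  chmod 755 "$pkgdir/usr/bin/hello-demo"
}
```

Run `makepkg -si` in the directory containing the file and pacman installs the result.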
It isn't for the faint of heart, but I'm happy to be back =]
|
OPCFW_CODE
|
JangoMail can pull the data from a web database/web server in real time and then send a personalized mass email to email addresses in the database. Additionally, JangoMail can synchronize data (unsubscribes/bounces/clicks/opens and more) with your web database.
You can create a database connection that GETs or POSTs any arbitrary data via forms, query string, or headers to your server, as long as your server then responds with a JSON array of data. The JSON array returned (see example below) must include one email address column, plus anything else you want to include. If there are multiple email address columns, we will use the first one.
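For illustration, a valid response for a two-row dataset might look like this (the field names other than the email address column are hypothetical):

```json
[
  { "emailaddress": "jane@example.com", "firstname": "Jane", "plan": "Pro" },
  { "emailaddress": "joe@example.com", "firstname": "Joe", "plan": "Free" }
]
```

The email address column identifies the recipient; the remaining fields are available for personalization.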
How to Add a Database to JangoMail
- Set up a web server to allow JangoMail to connect.
- Publish some sort of API. This can be as simple as a text file.
- This API will set the parameters of connecting to and pulling data from the database.
- Once your web server/web database is set up, you can create a new database connection in JangoMail.
- Go to Lists → Databases →click ADD A DATABASE.
- Fill in the proper values:
- Click SAVE DATABASE.
- Click Test to verify everything is working.
Sample ASPX Page
<%@ Page Language="C#" %>
<%@ Import Namespace="System.Data.SqlClient" %>
<%@ Import Namespace="System.Collections.Generic" %>
<%@ Import Namespace="System.Web.Script.Serialization" %>
<script runat="server">
protected override void OnLoad(EventArgs e)
{
    if (Request.Headers["apikey"] != "XXXXXXXXXX")
    {
        Response.StatusCode = 403;
        Response.Write("No Soup For You");
        return;
    }
    if (Request.QueryString["type"] == "Users")
    {
        using (SqlConnection conn = new SqlConnection("Your connection string"))
        {
            conn.Open();
            SqlCommand command = new SqlCommand("Your SQL select statement", conn);
            var rdr = command.ExecuteReader();
            var ret = new List<Dictionary<string, object>>();
            while (rdr.Read())
            {
                // Copy each row into a dictionary keyed by column name.
                var dict = new Dictionary<string, object>();
                for (int i = 0; i < rdr.FieldCount; i++)
                    dict[rdr.GetName(i)] = rdr[i];
                ret.Add(dict);
            }
            Response.ContentType = "application/json";
            Response.Write(new JavaScriptSerializer().Serialize(ret));
        }
    }
    else
    {
        Response.StatusCode = 500;
        Response.Write("Invalid query type");
    }
}
</script>
- In our example, we secure our site with an apikey which is passed in as a Header. If the apikey does not match, we fail the connection with an error message.
- if (Request.Headers["apikey"] != "XXXXXXXXXX")
- We then pass in a Query string to tell the connection what query to run. If the query string does not match, we fail the connection with an error message.
- if (Request.QueryString["type"] == "Users")
- If both the headers and query strings match, then we run the query provided.
- SqlCommand command = new SqlCommand("Your SQL select statement", conn);
- With this setup, we can use the same site for multiple database connections by providing different headers and/or query strings to run different SQL statements.
|
OPCFW_CODE
|
How do I import a class in a package into a jsp file?
It seems like I have correctly imported the package and class, yet for some reason, my variable user is not found. User is an object of type String that is created in class AddTo.
<%@ page import= "package1.AddTo" %>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>JSP Page</title>
</head>
<body>
<p> Shopping Cart</p>
<%= System.out.println(user.name); %>
</body>
</html>
You shouldn't be writing bare Java code in JSPs. http://stackoverflow.com/q/3177733/139010
@MattBall It is indeed a rather archaic way of doing things, but it is still part of the J2EE First Cup tutorial, so this is a valid question.
@RudolphEst - He never said the question isn't valid. He just gave some friendly advice. The question hasn't been downvoted and nobody has voted to close it.
@jahroy Agreed, just didn't want the poster to think he was doing anything 'wrong' by attempting to learn how to use bare Java in JSPs.
@jahroy Shouldn't is a very strong word. Many new programmers need to learn the old ways in order to support the oodles of legacy code out there. I would think that phrases like try not to or it is not recommended would be more constructive. To each his own though. Opinions will differ in a community ;)
You only import a class from a package, not its fields.
That being said, you should add the AddTo object as a request attribute in some servlet, and access the attribute in the JSP using EL. You should not use scriptlets in new code.
In Servlet you do:
request.setAttribute("addTo", addTo);
then in JSP, you can access the user property of attribute addTo as:
<p> Shopping Cart</p>
${addTo.user} <!-- Access user property of addTo attribute, which is an object of type `AddTo` -->
See Also:
How to avoid Java code in JSP files?
Thanks for the reply. I guess I was not specific enough. My user variable is an object of type String, aka, String user = new String("blahblahblah"). This is created in my class AddTo.
@user2662437. Then access it using ${addTo.user}. Change the request attribute in servlets from user to addTo.
You don't include classes that are already in your package to your JSP. You simply reference them the long way.
So I'm assuming you need:
System.out.println(package1.AddTo.user.name);
(Although this doesn't make much sense, seeing the username should be in the session, right?)
What Matt Ball says about it being a bad idea to put much code in JSP is true, especially in situations where an error could reveal information that shouldn't be revealed: a JSP can expose that information in a compile error the user can see, for example if you put the code that makes the database connection directly in a JSP rather than calling a class.
Like this :
Class.forName(...);
Connection con = DriverManager.getConnection("jdbc:drivertype://server;errors=full;translate binary=true", "user", "pwd");
That's very bad. This would reveal the username and password if there was a compile error, or even a connection error like if the database was inaccessible; and the user would know how to hack your database, because by default it will show the user the offending lines of code!
So instead:
Connection con = package1.databases.createConnectionToDatabase();
You can also prevent errors from showing the user the actual lines of code by putting all the Java code in your JSPs inside try/catch blocks.
That's a lot of tangential information; I'm not convinced it's overly helpful. Also, there's almost never a good reason to throw Java code into JSPs, for many other reasons than security.
|
STACK_EXCHANGE
|
run-a-job commands
Ideas
We would like to add a new command that provides a simpler way to run compute through the CLI. This is a first design of how we would like this to look eventually; it still needs to be placed on the roadmap. The focus is to tackle the main problems we directly observed in one fell swoop:
Running a Job
nerd job run --name=my-job --input=my-ds1 --input=my-ds2 <image> [image args] --image-opts=1
Improvements on top of the current way of running
not required to first create a workload
allow the user to optionally provide a name for the job (otherwise a human-readable name is generated); this must be unique within the project (namespace)
all arguments after the first non-option are considered arguments to the job container; no more confusion around the double dashes
eventually we would like the default invocation to block until the container has actually started such that it gets the time to return feedback to the user.
optional "detach" (do not wait) to make it possible to submit many jobs into the system
make it possible to also stream the logs of the container until user cancels or job is done
make it possible to provide a mount point for datasets (like Docker volumes)
Listing Jobs
nerd job list
improvements on top of the current listing:
show the actual (useful) status of the pod: e.g. image missing, no node available for scheduling
make sure the status communicates whether it is consuming resources
Fixing problems
nerd job fix <job-name> --image=my-image
The job fix command allows updating the configuration of a job: it lets users fix a job by removing the existing one and starting a new one.
Getting Feedback
nerd job logs <job-name>
it would now be possible to fetch logs directly by providing the job name; this should cause less confusion.
Others
describe and delete remain the same as current implementation
nerd job describe <job-name>
nerd job delete <job-name>
Iteration 2
The first new CLI version will be a stepping stone to an improved experience. The following will be supported:
Running a Job
$ nerd job run --name=experiment2 quay.io/nerdalize/delft3d
job 'experiment2' was submitted, use `nerd job list` to see its status
Listing Jobs
$ nerd job list
PROJECT NAME STATUS
joost-default my-test pending: No node available for scheduling
joost-default experiment1 failed: Image 'xyz' does not exist
joost-default experiment2 RUNNING
Viewing Job Logs
$ nerd job logs experiment2
[12:33] Lorum ipsum
[13:34] Lorum ipsum
Nice! Can we have more consistent formatting of the status though ☺️?
Nice! A few suggestions:
instead of 'nerd job fix' use 'nerd job update' for a more positive tone?
in 'nerd job list' maybe use 'JOB' or 'JOB-NAME' instead of 'NAME'?
I like that the 'nerd job list' lists all jobs in all projects!
And maybe we can discuss the display of usage in 'nerd project usage'?
Give info on if the project is currently consuming capacity
'4 hours of 10 compute units' feels more clear to me
And I guess people have to specify the project in 'nerd project usage' or do they get a list of all projects?
And would they know whether the project is still in the free plan?
Nice suggestions:
'fix' was proposed to give more of a hint at what the command could be used for, and 'update' sounds boring, but I agree that 'update' applies more generically (what if nothing is broken but I want to change something anyway?)
Agreed on giving more context than just a name; "JOB" is sweet and short, I would say
Usage feedback is still rough and fully open. I would argue that it should fit the mental model our users have after having looked at our pricing or usage, so maybe display it the same way as over there?
|
GITHUB_ARCHIVE
|
I have a NodeJS app with Express and Babylon, and I want the user to watch some high-quality videos on a 180° stereoscopic video dome in VR.
The issue is that the video files seem to be loaded completely before they play, which causes very long loading times, so I tried to use the file system in NodeJS to pass some partial video data to the video dome function updateURL(). This seems to work fine in the browser, but when going into VR mode (using the WebXR API or an actual headset) the video doesn't seem to play until the entire video is available.
From my understanding the video dome is just an HTML video element, so I'm confused as to what's happening when entering VR.
This can't be replicated on the Playground I believe, but I simplified the code and created a GitHub repository.
To replicate: clone the repository, use npm install and then node app.js to run the local server, and access it under localhost:
Thank you in advance!
You are totally right. This is a basic html video element. I’ll assign it to myself and run a few tests. It might be a limitation in your environment, because in a standard browser it doesn’t happen this way.
I’m sorry but what do you mean by this, meaning the VR Headset browser might be the issue?
I’ve tested my app on the Pico 4 and with the WebXR API extension in chrome and firefox with the same results unfortunately
I’m referring to what you wrote before:
HTML video should play correctly, whether in WebXR or not. What I said is that if it works in desktop mode or on a desktop browser but doesn't work in XR, it might be a limitation of the system or the environment in which you are testing.
Also - this might be an issue with the server implementation itself, especially if you can’t reproduce that in the playground. I am not going to debug server-side code, but will be very happy to test a playground.
Oh okay, I don't really understand what limitation there could be on my end, since it worked fine before I implemented passing the video as HTTP 206 partial content and I'm not modifying the video element, so I thought there might be a limitation in Babylon not accepting partial content in VR or something, which doesn't make sense of course.
I still don't understand how it can work fine outside of XR and not work in XR, and of course I shouldn't expect anyone to debug my backend code, so I guess I should rather ask:
How can I prevent the videoDome from loading the entire video file when using updateURL(), and instead load the beginning first, maybe the first 10 seconds, and then the rest of the video?
Is this at all possible in Babylon alone?
Since this is a basic HTML element, anything that works in HTML should work in Babylon as well. Babylon doesn’t really do anything special with the video. So if it works with HTML video, it should work with babylon.
Is it reproduced on the playground? Can you provide a playground that doesn’t work? This way it will be much easier for me to find out what the issue is. If it is browser limitation, webxr issue, or a babylon bug that needs to be resolved.
Wild guess time. Might you need to deal with the layers WebXR feature, or at least turn it on?
I tried serving my videos as partial content with a PHP server instead of NextJS and it works fine now. Thank you both for your help, I just couldn't get it running with NodeJS.
Okay, never mind: the video stream works, but the videoDome isn't stereoscopic when accessing the video as 206 parts. When just using the path, i.e. loading the entire video when the user starts the app, it appears as stereo.
|
OPCFW_CODE
|
<?php
/**
* @package hubzero-cms
* @copyright Copyright 2005-2019 HUBzero Foundation, LLC.
* @license http://opensource.org/licenses/MIT MIT
*/
namespace Components\Menus\Helpers;
use Components\Menus\Models\Menu;
use Hubzero\Base\Obj;
use Hubzero\Access\Access;
use Submenu;
use Route;
use User;
use Lang;
use App;
/**
* Menus component helper.
*/
class Menus
{
/**
* Defines the valid request variables for the reverse lookup.
*
* @var array
*/
protected static $_filter = array('option', 'view', 'layout');
/**
* Gets a list of the actions that can be performed.
*
* @param integer $parentId The menu ID.
* @return object
*/
public static function getActions($parentId = 0)
{
$result = new Obj;
if (empty($parentId))
{
$assetName = 'com_menus';
}
else
{
$assetName = 'com_menus.item.' . (int) $parentId;
}
$actions = Access::getActionsFromFile(\Component::path('com_menus') . '/config/access.xml');
foreach ($actions as $action)
{
$result->set($action->name, User::authorise($action->name, $assetName));
}
return $result;
}
/**
* Gets a standard form of a link for lookups.
*
* @param mixed $request A link string or array of request variables.
 * @return mixed A link in standard option-view-layout form, or false if the supplied request is invalid.
*/
public static function getLinkKey($request)
{
if (empty($request))
{
return false;
}
// Check if the link is in the form of index.php?...
if (is_string($request))
{
$args = array();
if (strpos($request, 'index.php') === 0)
{
parse_str(parse_url(htmlspecialchars_decode($request), PHP_URL_QUERY), $args);
}
else
{
parse_str($request, $args);
}
$request = $args;
}
// Only take the option, view and layout parts.
foreach ($request as $name => $value)
{
if ((!in_array($name, self::$_filter)) && (!($name == 'task' && !array_key_exists('view', $request))))
{
// Remove the variables we want to ignore.
unset($request[$name]);
}
}
ksort($request);
return 'index.php?' . http_build_query($request, '', '&');
}
/**
 * Get the menu list for creating a menu module
*
* @return array The menu array list
*/
public static function getMenuTypes()
{
$rows = Menu::all()
->rows()
->fieldsByKey('menutype');
return $rows;
}
/**
* Get a list of menu links for one or all menus.
*
 * @param string $menuType An optional menu type to filter the list on; otherwise all menu links are returned as a grouped array.
* @param int $parentId An optional parent ID to pivot results around.
* @param int $mode An optional mode. If parent ID is set and mode=2, the parent and children are excluded from the list.
* @param array $published An optional array of states
* @param array $languages
* @return mixed
*/
public static function getMenuLinks($menuType = null, $parentId = 0, $mode = 0, $published=array(), $languages=array())
{
$db = App::get('db');
$query = $db->getQuery();
$query->select('a.id', 'value');
$query->select('a.title', 'text');
$query->select('a.level');
$query->select('a.menutype');
$query->select('a.type');
$query->select('a.template_style_id');
$query->select('a.checked_out');
$query->from('#__menu', 'a');
$query->joinRaw('#__menu AS b', 'a.lft > b.lft AND a.rgt < b.rgt', 'left');
// Filter by the type
if ($menuType)
{
$query->whereEquals('a.menutype', $menuType, 1)
->orWhereEquals('a.parent_id', 0, 1)
->resetDepth();
}
if ($parentId)
{
if ($mode == 2)
{
// Prevent the parent and children from showing.
$query->join('#__menu AS p', 'p.id', (int) $parentId, 'left');
$query->where('a.lft', '<=', 'p.lft', 1)
->orWhere('a.rgt', '>=', 'p.rgt', 1)
->resetDepth();
}
}
if (!empty($languages))
{
$query->whereIn('a.language', $languages);
}
if (!empty($published))
{
$query->whereIn('a.published', $published);
}
$query->where('a.published', '!=', '-2');
$query->group('a.id')
->group('a.title')
->group('a.level')
->group('a.menutype')
->group('a.type')
->group('a.template_style_id')
->group('a.checked_out')
->group('a.lft');
$query->order('a.lft', 'ASC');
// Get the options.
$db->setQuery($query->toString());
$links = $db->loadObjectList();
// Check for a database error.
if ($error = $db->getErrorMsg())
{
throw new \Exception($error, 500);
}
// Pad the option text with spaces using depth level as a multiplier.
foreach ($links as &$link)
{
$link->text = str_repeat('- ', $link->level).$link->text;
}
if (empty($menuType))
{
// If the menutype is empty, group the items by menutype.
$query = $db->getQuery();
$query->select('*');
$query->from('#__menu_types');
$query->where('menutype', '<>', '');
$query->order('title', 'asc')
->order('menutype', 'asc');
$db->setQuery($query->toString());
$menuTypes = $db->loadObjectList();
// Check for a database error.
if ($error = $db->getErrorMsg())
{
return false;
}
// Create a reverse lookup and aggregate the links.
$rlu = array();
foreach ($menuTypes as &$type)
{
$rlu[$type->menutype] = &$type;
$type->links = array();
}
// Loop through the list of menu links.
foreach ($links as &$link)
{
if (isset($rlu[$link->menutype]))
{
$rlu[$link->menutype]->links[] = &$link;
// Cleanup garbage.
unset($link->menutype);
}
}
return $menuTypes;
}
else
{
return $links;
}
}
/**
* Get associations
*
* @param integer $pk
* @return array
*/
public static function getAssociations($pk)
{
$associations = array();
$db = App::get('db');
$query = $db->getQuery();
$query->from('#__menu', 'm');
$query->join('#__associations as a', 'a.id', 'm.id', 'inner');
$query->whereEquals('a.context', 'com_menus.item');
$query->join('#__associations as a2', 'a.key', 'a2.key', 'inner');
$query->join('#__menu as m2', 'a2.id', 'm2.id', 'inner');
$query->whereEquals('m.id', (int)$pk);
$query->select('m2.language');
$query->select('m2.id');
$db->setQuery($query->toString());
$menuitems = $db->loadObjectList('language');
// Check for a database error.
if ($error = $db->getErrorMsg())
{
throw new \Exception($error, 500);
}
foreach ($menuitems as $tag => $item)
{
$associations[$tag] = $item->id;
}
return $associations;
}
}
|
STACK_EDU
|
A Few Suggestions
This first suggestion is extremely simple. I just downloaded the 5.2 package and unzipped it. I now have 44 files/folders on my desktop. I am glad I keep my desktop clear because for some that could be messy. Could you simply toss everything in a "Photopost Classifieds 5.2" folder? This will keep the folders and files organized for those who have unorganized desktops.
The second suggestion I have pertains to the feedback system. Could there be a couple of extra fields? I am specifically looking to have a field that allows feedback to be left for the seller or buyer. It would also be beneficial if there were a URL field that the user had to fill out. This would have to be a link to their ad page. The field should check and see if that URL exists (which it must when the feedback is left) and should link to a default "ad expired" page after the ad is either deleted or falls off. It should also ensure that no other feedback is using the same URL. This will help prevent people from artificially raising a friend's feedback.
It should be an option whether we want to allow people to change their feedback. I am personally removing that ability from my site immediately, because someone could get in a spat with someone they purchased from and then change their info even if it was a good transaction. Follow-up comments, like with eBay's system, would be great to replace people changing their opinion.
We can look at suggestion 2.
As far as suggestion 1, there is nothing I can do about that; that is merely how the download works. When you open the zip you see there is no directory, so the proper action is to place the unzipped files in their own directory; this is what I have always done since I first became a customer 9 years ago.
The way the members download script works is that I upload a source file which has a directory, say called pp-classifieds52, in there with everything. There is a script on our site which goes through and encodes your license number into all the files, and it is that script which compiles the final download and processes the code into what you see.
I will pass along your comment though on the download. If we did not have to worry about encoding license numbers into the files I would easily do something like that.
I am looking at the feedback system and specifically from what I see your comments are a little confusing.
You say you want people to edit their feedback and then next say you remove that ability, which is confusing to me. In our application you can not edit feedback after it is left. The admin can delete feedback etc. if he is moderating things, but that is the size of it. I do not believe users should ever be allowed to change feedback, just like the seller can not delete someone's feedback.
I will look at eBay to see what we can do, as I would need to try and visualize things. One thing I know I will be doing is separating moderation into ad, comment and feedback moderation permissions.
Just looking at expanding things and need some clarification on your ideas.
|
OPCFW_CODE
|
For the Data Frame Properties on a map in ArcMap, what is the difference between selecting a new coordinate system and using a Transformation?
Sometimes this is referred to as a "datum transformation". In ArcGIS I guess the datum part of the phrase is implied. Here's the first thing Google returns for me: earth-info.nga.mil/GandG/coordsys/datums/index.html I don't think ArcMap does any sort of vertical datum transformation (?) nauticalcharts.noaa.gov/csdl/learn_datum.html – Kirk Kuykendall, Jan 31, 2011 at 15:49
It's really very useful for me; I think it's right that we do a transformation in ArcMap if the datums are different. – user4152, Sep 7, 2011 at 7:42
A transformation should be used to "transform" between two systems such as a geographic coordinate system (GCS) and a projected coordinate system (PCS). There are other instances for its use also. Link #3 is the ESRI 9.3 help page that is pretty good about describing the difference.
To choose the correct transformation, and for the right application, see the Esri help.
The transformations he's referring to, in the data frame properties, are geographic transformations, i.e. they transform from one GCS to another GCS, not from GCS to PCS. The latter are referred to as projections rather than transformations. You probably know this but I wanted to clarify for the sake of other readers. Jan 25, 2013 at 15:20
@LarsH you would be incorrect. They do not only transform from GCS to GCS. PCS have a datum also. Any reprojection done can use transformation files if needed. Jan 25, 2013 at 17:07
Brad, if you could provide an authoritative reference, I would appreciate the correction. IIUC, PCS "have a datum" only in the sense that a PCS is based on a GCS, and the GCS has a datum. Geographic transformations can be involved in a reprojection if the reprojection includes unprojecting to one GCS, transforming to another GCS (that's the geographic transformation), and then projecting to another PCS. I'm looking at resources.arcgis.com/en/help/main/10.1/index.html#//… Jan 25, 2013 at 17:24
Currently, the projection engine in ArcGIS supports GCS-GCS transformations only. There are transformation methods that convert directly between two PCS or between PCS and GCS [that is not a projection algorithm] but we don't support them yet. I work on the projection engine. Does that make me an authoritative reference? b-> – mkennedy, Jan 25, 2013 at 17:47
ok my bad. I had always wondered why ArcMap puts in a default transformation when you select re-project from PCS to GCS (but the transformation reads backward), as the image above shows. I haven't used it in quite a while, but I surely have data out there at old jobs that is re-projected wrong. Jan 25, 2013 at 18:29
Any Projected Coordinate System has a Datum. If you are converting in between two different Coordinate systems, with different datums, you will need to use a transformation.
If the two coordinate systems are based on the same Datum, then a datum transformation is not needed.
In the ArcGIS world, a 'projection' converts between a geographic coordinate system (GCS) and a projected coordinate system (PCS). A geographic transformation, aka datum transformation, aka transformation, converts between two geographic coordinate systems.
When you set the data frame's coordinate system, you are defining the coordinate system (including the unit of measure) that you'll be working in. Any layers that are in a different coordinate system will be automatically projected (or 'unprojected') to this coordinate system. If a layer's coordinate system uses a different GCS, then you may see a warning that they're different. Whether you see the warning or not, you should decide whether you need to set a geographic transformation. Neglecting to do so when it's necessary can lead to data being up to a few hundred meters offset.
ArcMap only sets one automatically: NAD_1927_To_NAD_1983_NADCON, which transforms between NAD27 and NAD83 in the lower 48 states.
Std Disclaimer: I work for Esri.
This is the best answer. Jan 25, 2013 at 15:21
Note: Esri dropped the NADCON transformation as a default in ArcGIS 10.1 SP1. – mkennedy, Jan 25, 2013 at 20:08
|
OPCFW_CODE
|
Unexpected message after some time when connecting to a Siemens S7-PLC server implementation
The client is working well, but after some time the client blocks after writing the following log:
16:33:12.620 [warning] Unexpected message while :activated session: {:opcua_node_id, 0, :numeric, 395} %{
response_header: %{
additional_header: {:opcua_extension_object,
{:opcua_node_id, 0, :numeric, 0}, :undefined, :undefined},
request_handle: 10,
service_diagnostics: {:opcua_diagnostic_info, :undefined, :undefined,
:undefined, :undefined, :undefined, :undefined, :undefined},
service_result: :bad_session_id_invalid,
string_table: [],
timestamp:<PHONE_NUMBER>20976940
}
}
Hi! We are most probably not handling the ServiceFault response from the server; it would be nice to have a Wireshark trace.
What I am not sure I understand is that this seems to be a message the client receives while activating a session, and the client is in this state for only a very short time during the connection; I don't see how a connection could be in this state after some time. I'll have a better look at it!
Hi Sébastien,
Here are 2 captures (not filtered).
The commands sent are first: {ok, Client}=opcua_client:connect(<<"opc.tcp://<IP_ADDRESS>:4840">>).
And then some: opcua_client:read(Client, {opcua_node_id, 4, numeric, 34}, value).
Till I receive a
2023-02-27T18:43:04.900747+01:00 warning: Unexpected message while activated session: {opcua_node_id,0,numeric,395} #{response_header => #{additional_header => {opcua_extension_object,{opcua_node_id,0,numeric,0},undefined,undefined},request_handle => 5,service_diagnostics => {opcua_diagnostic_info,undefined,undefined,undefined,undefined,undefined,undefined,undefined},service_result => bad_session_not_activated,string_table => [],timestamp =><PHONE_NUMBER>80743880}}

I was about to look into this issue, but I can't find the captures you are talking about.
@egillet I pushed a commit in develop where the client session handles the ServiceFault message while activated, but it just makes the client fail, as I am not sure how these issues can be resolved.
What is quite strange is that the service fault error you get is BadSessionNotActivated, and the client gets it after successful activation...
Could you attach the Wireshark traces in the message through the GitHub web ? Maybe you sent them by mail but they got dropped ?
They are here:
traces.zip
This is strange, from the trace we can see the session was clearly activated, and read requests are working. The only difference between the working read request and the failing one is the request id...
Having a way to catch the error and reconnect should do the job.
|
GITHUB_ARCHIVE
|
import pandas as pd
import os

# Load the annotations and drop the first two point columns.
csv = pd.read_csv('dev.csv')
del csv['p1']
del csv['p2']

# Add a constant label column (0..5) for each of the six hands.
csv['hand_0'] = 0
csv['hand_1'] = 1
csv['hand_2'] = 2
csv['hand_3'] = 3
csv['hand_4'] = 4
csv['hand_5'] = 5

# Reorder the columns so each hand contributes four points followed by its label.
out = csv[['image_path',
           'p4', 'p3', 'p6', 'p5', 'hand_0',
           'p8', 'p7', 'p10', 'p9', 'hand_1',
           'p12', 'p11', 'p14', 'p13', 'hand_2',
           'p16', 'p15', 'p18', 'p17', 'hand_3',
           'p20', 'p19', 'p22', 'p21', 'hand_4',
           'p24', 'p23', 'p26', 'p25', 'hand_5']]
out.to_csv('hand.txt', header=False, index=False)

# Replace the 1st, 6th, 11th, ... comma of every line with a space, so the
# image path and each per-hand group of values are space-separated.
new_txt = []
index_out = [1, 6, 11, 16, 21, 26]
with open('hand.txt', 'r') as f:
    txt = f.readlines()
for i in txt:
    dot = 0
    ii = list(i)
    for j, jj in enumerate(i):
        if ii[j] == ',':
            dot += 1
            if dot in index_out:
                ii[j] = ' '
    new_txt.append(''.join(ii))

# Prefix every line with 'hand/' and write the final annotation file.
with open('all_hand.txt', 'w') as f:
    for i in new_txt:
        f.write('hand/' + i)

# Remove the intermediate file.
my_file = 'hand.txt'
if os.path.exists(my_file):
    os.remove(my_file)
|
STACK_EDU
|
Please forgive the long wording of this query - I'm struggling to put into words what I want to achieve; hopefully the description below gives the gist.
I’m using a separate table to store relationships between two other tables.
I have one table that stores a list of products and another table that stores a list of product categories. Products are assigned to one (or many) categories.
The product table looks like this:
table name: tblproducts
prodId INT (auto increment)
prodTitle VARCHAR 100
The category table looks like this:
table name: tblprodcategories
catId INT (auto increment)
catTitle VARCHAR 100
The relationship table looks like this:
table name: tblcatrelations
relId INT (auto increment)
relcatId INT (stores the catId from the tblprodcategories table)
relprodId INT (stores the prodId from the tblproducts table)
So my relationship table data looks like:
relId  relcatId  relprodId
1      2         2
2      2         3
3      2         5
4      3         1
5      4         1
6      4         2
…so that data shows that product ID 2 is in both category (ID) 2 and 4, product ID 3 and 5 are both only in category (ID) 2. Product ID 1 is in both categories 3 and 4.
I use the following MySQL Select Statement and it works fine (where the ? is a passed variable containing the catId):
"SELECT prodId, prodTitle FROM tblcatrelations LEFT JOIN tblproducts ON prodId = relprodId WHERE relcatId = ? ORDER BY prodTitle ASC"
This all works fine, but now I need to add another level of filtering: “sub categories”.
The sub category table looks like this:
table name: tblprodsubcategories
subId INT (auto increment)
subTitle VARCHAR 100
I’ve added another field to tblcatrelations:
table name: tblcatrelations
relId INT (auto increment)
relcatId INT (stores the catId from the tblprodcategories table)
relprodId INT (stores the prodId from the tblproducts table)
relsubId INT (stores the subId from the tblprodsubcategories table)
So now my relationship table data looks like:
relId  relcatId  relprodId  relsubId
1      2         2          0
2      2         3          0
3      2         5          0
4      3         1          0
5      4         1          0
6      4         2          0
7      2         0          1
8      2         0          2
9      3         0          3
10     0         2          1
11     0         1          1
The first 6 entries (through relId 6) are the same. Then relIds 7 and 8 show that sub categories 1 and 2 are related to category 2, and relId 9 shows that sub category 3 is related to category 3. Then relIds 10 and 11 show that products 2 and 1 are related to sub category 1.
What I need my MySQL Select statement to do is list the results so when both a category and sub category variable is passed (i.e. ?catId=2&subId=1) it searches for products where the category ID (relcatId) matches the querystring variable “catId” and the sub category ID (relsubId) matches the querystring variable “subId” but only lists products which match both queries. For example:
would just match product ID 2 (products 2, 3 and 5 match "catId" 2, and products 2 and 1 match "subId" 1, but only product 2 is in both lists/matches).
I’d love to give you an example of what I’ve tried but I don’t even know where to begin with this query - basically it’s making two searches and producing results of only items/products that are in both searches.
Hope someone can understand what I mean and help.
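One common way to express the "product must appear in both result sets" requirement is to join the relationship table twice, once for the category filter and once for the sub-category filter. Here is a minimal sketch of that idea, rebuilt in SQLite purely for illustration with the sample rows from above (the product titles are hypothetical; the same JOIN syntax works in MySQL):

```python
import sqlite3

# In-memory reproduction of the tables described above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblproducts (prodId INTEGER PRIMARY KEY, prodTitle TEXT);
CREATE TABLE tblcatrelations (
    relId INTEGER PRIMARY KEY,
    relcatId INTEGER, relprodId INTEGER, relsubId INTEGER
);
""")
conn.executemany("INSERT INTO tblproducts VALUES (?, ?)",
                 [(1, "Product 1"), (2, "Product 2"),
                  (3, "Product 3"), (5, "Product 5")])
# Same rows as the post: cat<->prod rows have relsubId = 0,
# sub<->prod rows have relcatId = 0.
conn.executemany("INSERT INTO tblcatrelations VALUES (?, ?, ?, ?)",
                 [(1, 2, 2, 0), (2, 2, 3, 0), (3, 2, 5, 0),
                  (4, 3, 1, 0), (5, 4, 1, 0), (6, 4, 2, 0),
                  (7, 2, 0, 1), (8, 2, 0, 2), (9, 3, 0, 3),
                  (10, 0, 2, 1), (11, 0, 1, 1)])

# Join the relationship table twice: alias c matches the category filter,
# alias s matches the sub-category filter. Only products with BOTH rows survive.
query = """
SELECT p.prodId, p.prodTitle
FROM tblproducts p
JOIN tblcatrelations c ON c.relprodId = p.prodId AND c.relcatId = ?
JOIN tblcatrelations s ON s.relprodId = p.prodId AND s.relsubId = ?
ORDER BY p.prodTitle ASC
"""
print(conn.execute(query, (2, 1)).fetchall())  # -> [(2, 'Product 2')]
```

If a product could be linked to the same category or sub category more than once, add DISTINCT to the SELECT to avoid duplicate rows.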
|
OPCFW_CODE
|
Self-Supervised Learning | Supervised Machine Learning | Unsupervised Machine Learning | Contrastive Learning | SimCLR
“If intelligence is a cake, the bulk is self-supervised learning, the icing on the cake is supervised learning, and the cherry on the cake is reinforcement learning.”
Yann André LeCun
Chief AI Scientist at Meta
Some “Musts” Before Starting
You must be familiar with deep learning architectures, including stacks of convolutional, recurrent, dense, pooling, average, and normalization layers using the TensorFlow library in Python 3+.
You must know how to develop, train, and test multi-layer deep learning models using the TensorFlow library in Python 3+.
You must know that this is a “100% Money Back Guarantee” course under Udemy rules.
My name is Mohammad H. Rafiei, Ph.D. I am honored and humbled to serve as your instructor.
I am a machine learning engineer, researcher, and instructor at Johns Hopkins University, College of Engineering, and Georgia State University, Department of Computer Science. I am also the founder of MHR Group LLC in Georgia.
Subject & Materials
This course teaches you “Self-Supervised Learning” (SSL), also known as “Representation Learning.”
SSL is a relatively new and hot subject in machine learning for dealing with repositories that have limited labeled data.
There are two general SSL techniques, contrastive and generative. This course’s focus is on supervised and unsupervised contrastive models only.
There are several examples and experiments across this course for you to fully grasp the idea behind SSL.
Our domain of focus is the image domain, but you can apply what you learn to other domains, including temporal records and natural language processing (NLP).
In every lecture, you can access the corresponding Python .ipynb notebooks. The notebooks are best run with a GPU accelerator. Watch the following lecture for more details.
If the videos are too fast or too slow, you can always change their speed. You can also turn on the video caption.
It is best to watch the videos of this course using 1080p quality with the caption on.
The lectures are created to work best on Google Colab with GPU accelerators.
The TensorFlow version used in these lectures is '2.8.2'. You may use %tensorflow_version 2.x in the very first cell of your Python notebook.
Machine learning libraries in Python, including TensorFlow, are evolving. As such, you must keep yourself updated with changes and modify your code accordingly.
Four Sections and ten Lectures:
Section 01: Introduction.
Lecture 01: An Introduction to the Course.
Lecture 02: Python Notebooks.
Section 02: Supervised Models.
Lecture 03: Supervised Learning.
Lecture 04: Transfer Learning & Fine-Tuning.
Section 03: Labeling Task.
Lecture 05: Labeling Challenges.
Section 04: Self-Supervised Learning.
Lecture 06: Self-Supervised Learning.
Lecture 07: Supervised Contrastive Pretext, Experiment 1.
Lecture 08: Supervised Contrastive Pretext, Experiment 2.
Lecture 09: SimCLR, An Unsupervised Contrastive Pretext Model.
Lecture 10: SimCLR Experiment.
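To give a flavor of the contrastive objective behind SimCLR (Lectures 09-10), here is a minimal NumPy sketch of the NT-Xent loss used in unsupervised contrastive pretext training. This is an illustration only, not code from the course materials; the course itself uses TensorFlow.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent over a batch of positive pairs (z1[i], z2[i]);
    every other sample in the doubled batch acts as a negative."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize embeddings
    sim = z @ z.T / temperature                       # scaled cosine similarities
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # The positive for row i is its other view at index (i + n) mod 2n.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()

a = np.eye(4, 8)                                      # four orthonormal "embeddings"
print(nt_xent_loss(a, a))                             # aligned views -> lower loss
print(nt_xent_loss(a, np.roll(a, 1, axis=0)))         # mismatched views -> higher loss
```

Training an encoder to minimize this loss pulls the two augmented views of each image together while pushing apart all other images in the batch.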
|
OPCFW_CODE
|
High Availability and Replication for Web Servers
I have a 3-layer web solution like this:
Frontend with load balancing + proxies + static content
Backend with 2 Apache web servers each one serving different sites
Publishing System that pushes content to the apache web servers
So I am working in a solution with High Availability for the web servers in the backend. My idea is to replicate the content between the backend servers and if one fails the other will serve all the sites (this could be manual or using Heartbeat).
Problem is that the sites are big in terms of total size and number of files. I try to replicate the contents between servers with rsync but it takes a long time. Also I thought of using NFS to share the contents but this is not an option for High Availability. Another way is for the publishing system to push content to both web servers, but what will happen if I put another web server in the backend?
Is there a better way to do this? I don't need both servers serving the same content at the same time, but having the same content synchronized is a must.
You really should consider DRBD (RAID-1 over TCP/IP) with a multi-node filesystem such as OCFS or GFS.
You can also consider getting a SAN on which you will be able to put any one of these filesystems as well.
Use a SAN instead of an NFS server; RAID will handle the high availability.
You could use HAProxy + Keepalived for the load balancers. For the replication, think about an optical link if Ethernet is not efficient enough for your needs. Rsync is very efficient IMO (the "-z" option, which compresses data, makes it even more so). Finally, if you want high performance, you could host the two Apaches as VMs on the same server and add some fast disks (15K rpm) with a good RAID card. That should provide the availability you are looking for.
I use Heartbeat2 on Debian Lenny for failover and it works very well. I have a web application served by one web server which will fail over to another if there is a problem (i.e. a 2-node active-passive cluster). The web application data is on the filesystem and also in a MySQL database. We use MySQL in Master-Master replication mode to handle the mirroring of the database application data. The rest is handled by rsync when we push an update live. This set-up has been working in production for the last 6 months and has worked well in real-life incidents. I think we have added another 9 to our overall uptime due to this.
I'm surprised your rsync is taking a long time, given that your web servers are presumably in the same datacentre or at least the same country, unless they are large files like ISOs. It might be worth checking what rsync options you're using to see if this can be optimized.
|
STACK_EXCHANGE
|
One of the most common problems in Rails applications is N+1 queries. As an example, let's use a simple blogging application with a Post model and a Tag model. When you visit the application you are presented with a snippet for the 10 most recent blog posts.
In our view we show the title for each blog post, as well as the tags associated with that post.
This code is problematic - we made a single query for all the posts, but for each post we render we are making an additional query to get the tags for that post. This is the N+1 problem: we made a single query (the 1 in N+1) that returned something (of size N) that we iterate over, performing another database query on each (N of them).
If you look at your logs you will most likely see something like this:
(I’m assuming in this example that we have a many-to-many relationship between posts and tags, which is why we would need the join table.)
Solving N+1 Queries with Includes
A quick Google search will probably tell you to use includes - this tells ActiveRecord to load all the tags as part of loading the posts.
Now your logs will look something like this:
Much more efficient! This has solved the N+1 problem for this scenario.
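Outside Rails, the effect of includes can be sketched in plain SQL terms: instead of one query per post, the tags are fetched in a single batched query with an IN clause. The following Python/sqlite sketch is illustrative only; the table names are assumptions, not the post's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE tags (id INTEGER PRIMARY KEY, post_id INT, name TEXT);
INSERT INTO posts VALUES (1,'a'),(2,'b');
INSERT INTO tags VALUES (1,1,'ruby'),(2,1,'rails'),(3,2,'sql');
""")

# N+1: one query for posts, then one per post for its tags.
posts = conn.execute("SELECT id FROM posts").fetchall()
n_plus_1 = 1 + len(posts)   # 3 queries for 2 posts; grows with N

# Eager loading: one query for posts, one batched query for all tags.
ids = [p[0] for p in posts]
tags = conn.execute(
    f"SELECT post_id, name FROM tags WHERE post_id IN ({','.join('?' * len(ids))})",
    ids).fetchall()
eager = 2                   # always 2 queries, regardless of post count
print(n_plus_1, eager)      # 3 2
```

This is essentially what includes does under the hood in this scenario: a constant number of queries instead of one per row.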
N+1 Queries for Count
Let’s continue with this example and build out a tag cloud for our blog. This page will show a simple alphabetical list of all the tags, as well as the number of posts associated with each tag.
Predictably, this will lead to another N+1 problem. The only difference is that the SQL query is a COUNT query this time.
We can try the same fix as before and use includes.
Let’s look at the generated SQL.
We might be tempted to look at the SQL and conclude that we have solved the N+1 problem. Unfortunately, we have actually solved the N+1 problem only to introduce another problem - memory bloat! We are loading all the Post objects (or at least all Post objects with at least one tag) into memory and then never using them, except to get the size of the associated collection for each tag. This will slow down this page and require an enormous amount of memory - keep in mind that there is no limit on the amount of Post objects we're loading here.
Before we get into the solution I want to point out another nuance with this code - note that in the template I am using the size function. The posts association has both a size and a count function. If I had used the count function instead, Rails would have ignored the loaded posts collection and made a COUNT SQL query. Put another way - calling count on an ActiveRecord association will always query the database, even if the association has already been loaded. If I had used includes in the controller and the count function in the template, I would have incurred the overhead of both the original N+1 query as well as the memory bloat of loading all the posts.
Solving N+1 Queries with Counter Cache
Showing the size of an association is such a common problem that Rails has a built-in solution - the counter cache. The idea is that we create an extra column in our tags database table and store the number of posts associated with each tag. We then no longer need to load or query the posts associated with each tag - the Tag model will have an attribute with the number of associated posts. Rails will do all the work of making sure this counter column is kept in sync as new posts are created or deleted.
Let’s start with adding the database column - by convention we will call this column posts_count.
Once we tell Rails that we are using a counter cache column it will keep the column updated as new posts are created and removed, but we need to manually seed the count. Rails has a built-in function called reset_counters to do this automatically, but it's not recommended that we use this inside migrations. To see why, let's see what that would look like.
There are 2 problems with using reset_counters inside this migration. Firstly, the migration will be much slower than a simple SQL query, potentially slowing down the next deployment ('slower' in this context usually means minutes instead of seconds). Secondly, we're referencing the Tag model in our migration, which goes against the best practices for migrations - if we ever decide to rename the Tag model to Hashtag, our migration would fail to run.
Now that we have the database column, we need to tell Rails to keep it in sync.
Now that we have this in place, solving the N+1 problem is trivial.
Solving N+1 Queries with Regular SQL
The counter cache column does solve the N+1 problem, but it’s not without drawbacks. Since the counter cache column is kept in sync by the ActiveRecord code, we have to make sure to always make updates through ActiveRecord - if we make updates directly to the database we would need to refresh the counter cache.
Another drawback is that the counter cache doesn't work with scopes. Let's introduce the concept of draft posts - each Post will have a published_at attribute, and we will only display posts where this attribute is populated. We also want to update our tags page to show the number of published posts for each tag. This means we can no longer rely on the counter cache column (since the counter will include both published and draft posts) - we are effectively back to the original N+1 and memory bloat problems. We can solve this by using plain SQL, since SQL already has a solution for this problem - GROUP BY.
The SQL query for getting the number of published posts for each tag looks like this:
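A runnable sketch of that query follows, here executed through Python's sqlite3 so it can be tried standalone; the posts/taggings schema and the sample rows are assumptions for illustration, not the post's actual data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tags (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE posts (id INTEGER PRIMARY KEY, published_at TEXT);
CREATE TABLE taggings (tag_id INT, post_id INT);
INSERT INTO tags VALUES (1,'ruby'),(2,'rails');
INSERT INTO posts VALUES (1,'2024-01-01'),(2,NULL),(3,'2024-02-01');
INSERT INTO taggings VALUES (1,1),(1,2),(2,3);
""")

# Count published posts per tag with a single GROUP BY query.
rows = conn.execute("""
SELECT taggings.tag_id, COUNT(*) AS posts_count
FROM posts
JOIN taggings ON taggings.post_id = posts.id
WHERE posts.published_at IS NOT NULL
GROUP BY taggings.tag_id
""").fetchall()
print(dict(rows))  # {1: 1, 2: 1} - the draft post (id 2) is excluded
```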
This will give us the number of published posts for each tag (although tags without any published posts won’t show up at all). We can recreate this query in ActiveRecord and make this data available to our view.
This will create a simple hash, mapping each tag id to the number of posts for that tag. All with a single, efficient query!
We’re not completely over the line though, since the tag_usage hash will not contain values for tags without any published posts. We can take care of this by using a default in our view.
This is not a great solution though, since our view is now aware of a very specific implementation detail (or implementation flaw) in how we construct the hash. An alternative solution would be to modify our SQL query to do a LEFT OUTER JOIN, which would ensure all tags are present in the resulting hash, but this would require us to write some custom SQL, which is less than ideal. A better solution would be to wrap the tag usage in a view model.
Now both the controller and view are straightforward, and the TagUsage view model can easily be tested.

This solves the N+1 problem and avoids any potential memory bloat. As an additional bonus it makes the code easy to read and test. I prefer this to the counter cache solution, even when there is no scope on the association. Happy coding.
|
OPCFW_CODE
|
#if !os(watchOS)
import CloudKit
import Foundation
public struct iCloudContainer: CapabilityType {
public static let name = "iCloudContainer"
private let container: CKContainer
private let permissions: CKContainer.ApplicationPermissions
public init(container: CKContainer, permissions: CKContainer.ApplicationPermissions = []) {
self.container = container
self.permissions = permissions
}
public func requestStatus(_ completion: @escaping (CapabilityStatus) -> Void) {
verifyAccountStatus(container, permission: permissions, shouldRequest: false, completion: completion)
}
public func authorize(_ completion: @escaping (CapabilityStatus) -> Void) {
verifyAccountStatus(container, permission: permissions, shouldRequest: true, completion: completion)
}
}
/// Checks the iCloud account status for the container and, when application
/// permissions are requested, verifies (and optionally requests) them.
private func verifyAccountStatus(_ container: CKContainer, permission: CKContainer.ApplicationPermissions, shouldRequest: Bool, completion: @escaping (CapabilityStatus) -> Void) {
container.accountStatus { accountStatus, accountError in
func completeWithError() {
completion(.error(accountError ?? CKError(.notAuthenticated)))
}
switch accountStatus {
case .noAccount: completion(.notAvailable)
case .restricted: completion(.notAvailable)
case .available:
if permission != [] {
verifyPermission(container, permission: permission, shouldRequest: shouldRequest, completion: completion)
} else {
completion(.authorized)
}
case .couldNotDetermine:
completeWithError()
case .temporarilyUnavailable:
completeWithError()
@unknown default:
completeWithError()
}
}
}
private func verifyPermission(_ container: CKContainer, permission: CKContainer.ApplicationPermissions, shouldRequest: Bool, completion: @escaping (CapabilityStatus) -> Void) {
container.status(forApplicationPermission: permission) { permissionStatus, permissionError in
func completeWithError() {
completion(.error(permissionError ?? CKError(.permissionFailure)))
}
switch permissionStatus {
case .initialState:
if shouldRequest {
requestPermission(container, permission: permission, completion: completion)
} else {
completion(.notDetermined)
}
case .denied: completion(.denied)
case .granted: completion(.authorized)
case .couldNotComplete:
completeWithError()
@unknown default:
completeWithError()
}
}
}
private func requestPermission(_ container: CKContainer, permission: CKContainer.ApplicationPermissions, completion: @escaping (CapabilityStatus) -> Void) {
DispatchQueue.main.async {
container.requestApplicationPermission(permission) { requestStatus, requestError in
switch requestStatus {
case .initialState: completion(.notDetermined)
case .denied: completion(.denied)
case .granted: completion(.authorized)
case .couldNotComplete:
completion(.error(requestError ?? CKError(.permissionFailure)))
@unknown default:
completion(.notDetermined)
}
}
}
}
#endif
|
STACK_EDU
|
Need Clarification on Execution Context
function a(){
b();
var c;
}
function b(){
var d;
}
a();
var d;
I would like clarification on the Execution Context for the code above. From what I understand, during the creation phase of Execution Context functions a and b are set as pointers to the location in heap memory and var d is set to undefined. During the execution phase, function declarations a and b are simply ignored.
What I'm confused about is: when we invoke function a during the execution phase, is the Global Execution Context still in its execution phase later, when we pop a()'s execution context from the stack, so we can process var d? Or is the GEC's execution phase over once we invoke a(), and then we somehow scan var d when the GEC is the only context left on the stack?
From what I understand, after the GEC execution phase, a() will be invoked and a new a() execution context will be put on the execution stack. Then after a()'s execution phase is done, we put new b() execution context on a stack. After we pop b()'s execution context, we can process var c and after we pop a()'s execution stack we can process var d of the global execution stack.
The biggest confusion is how the JS engine checks var c and var d if the execution phase for both contexts is already over. Is the execution context actually over, or is it still running for each context? Are we able to scan var c and var d because a Variable Object (VO) saves information about the current execution context, or because all previous execution contexts are still running their execution phase?
"after the GEC execution phase, a() will be invoked" - why "after"? a() is invoked during the execution of the global code.
Your question is a bit unclear since "execution phase" is not a standard term. What exactly do you understand by "phase"? What does it mean to you when "the phase is over"?
"after the GEC execution phase, a() will be invoked" - I wrote it wrong. My question was when the execution phase of particular execution context ends. To me phases are what makes execution context. There are two phases, compilation/creation phase and execution phase. I don't know whether execution phase is a standard term, but from arrays of articles and videos I've seen they all reference it as such. During execution phase, functions are invoked and values are assigned.
In the example above when we enter function a() execution context, I'm not sure whether execution phase of a() actually ends completely on function b() invocation, if so, what checks var c portion? OR does execution phase of a() still runs somehow in the background until we pop b()'s execution context and get back to var c with execution phase still running? Or does execution phase simply ends on function invocation and JS engine simply rechecks the remaining code var c inside of it. Would like some clarification on that.
After running my code through the https://ui.dev/ JavaScript visualizer I was able to get my answer :) but I would still love a more detailed explanation if someone can give one. I appreciate all the help in advance.
The "execution phase" is not a standard term. But if you consider the two phases to "make" the execution context, then the second phase would need to last as long as the execution context itself lasts. And it is kept on the stack of execution contexts (the "call stack") during function calls, so that it can resume its execution when the called function returns.
I guess the correct term would be the code execution phase. Thank you for clarifying it now I get it. You can write your comment as an actual answer and I will mark is as a correct one.
So, the call stack in JS is LIFO (last in, first out). Considering your example, the execution contexts will be pushed and popped in the following manner:
Initial
[gec]
invoking a()
[a execution context]
[gec]
invoking b()
[b execution context]
[a execution context]
[gec]
b() finished execution
[a execution context]
[gec]
a() finished execution
[gec]
I hope this helps you!!!
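The push/pop order above can be observed directly by logging on entry and exit of each function (a small sketch; the order array is just for recording):

```javascript
const order = [];

function a() {
  order.push("enter a");
  b();                   // a's context stays on the stack while b runs
  var c = "c assigned";  // processed only after b's context is popped
  order.push("exit a");
}

function b() {
  order.push("enter b");
  var d = "d assigned";
  order.push("exit b");
}

a();
console.log(order); // [ 'enter a', 'enter b', 'exit b', 'exit a' ]
```

Note that "exit a" is logged after "exit b": a's execution context resumes where it left off once b's context is popped.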
Thank you for your answer! It took me a while to understand the stack you tried to draw but I got it. I guess it helped me realize the answer I was looking for. I still do not truly understand when var c and var d are processed, I guess during 4 and 5 finished execution steps?
yes you are right it's processed during step 4 and 5
|
STACK_EXCHANGE
|
"""
This file does automatic calibration of a Type C W/Re thermocouple against a Type K Ch/Al thermocouple.
In iTools: mV40; Linear; mV; Display High/Low = Range High/Low = 40/-3
"""
from TPD.Eurotherm_Functions import *
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
""" Create Eurotherm instances """
"Create obj for reading and writing temperature to Eurotherm -- Type C"
port1 = 'COM4'
controlObj = Eurotherm(port1)
"Create obj for reading Room temperature from Eurotherm -- Type K"
port2 = 'COM5'
controlObj2 = Eurotherm(port2)
"For reading in Room Temp to correct Temp reading"
# typeC = thermocouples['C']
mV = np.array([])
typeK = np.array([])
i=0
back_to_the_future = 1985
start_time = time.time()
while back_to_the_future == 1985:
# mV = np.append(mV,controlObj.read_val())
# typeK = np.append(typeK, controlObj2.read_val(num_dec=2))
try:
time.sleep(0.2)
# print(controlObj.read_val(), controlObj2.read_val(num_dec=1))
print("%.3f" % (time.time()-start_time), controlObj2.read_val(num_dec=1), controlObj2.read_output())
# i+=1
# if i == 750:
# controlObj.close_me()
# controlObj2.close_me()
# break
    except KeyboardInterrupt:
        # Ctrl+C ends the logging loop so the data below can be plotted
        controlObj.close_me()
        controlObj2.close_me()
        break
    except Exception:
        # Ignore transient serial read errors and keep logging
        continue
print(mV)
print(typeK)
df = pd.DataFrame([typeK, mV]).T
df.columns = ['Temperature', 'mV']
plt.plot(df.Temperature.values, df.mV.values)
plt.xlabel('Temperature (K)')
plt.ylabel('mV from Type C')
plt.show()
|
STACK_EDU
|
import fs from "fs"
import path from "path"
import { promisify } from "util"
import mergeAndCompareTwoCSVFiles from "./utilities/merge_and_compare_two_csv_files"
const mergeFilteredNonFiltered = async (
notFilteredFolderPath: string,
filteredFolderPath: string,
outputFolderPath: string,
) => {
const readDir = promisify(fs.readdir)
const filteredFiles = await readDir(filteredFolderPath)
const notFilteredFiles = await readDir(notFilteredFolderPath)
  // Process only files present in both folders; a Set avoids the O(n^2)
  // nested scan, and awaiting each merge lets failures surface here.
  const filteredSet = new Set(filteredFiles)
  for (const file of notFilteredFiles) {
    if (!filteredSet.has(file)) continue
    const fFilePath = path.resolve(__dirname, filteredFolderPath, file)
    const nfFilePath = path.resolve(__dirname, notFilteredFolderPath, file)
    const outputFilePath = path.resolve(__dirname, outputFolderPath, file)
    await mergeAndCompareTwoCSVFiles(nfFilePath, fFilePath, outputFilePath)
  }
}
export default mergeFilteredNonFiltered
|
STACK_EDU
|
** There are NEW livestream videos about RapidMiner! Visit my Channel here **
Before you can begin with building your own AI Financial Market Model (machine learned), you have to decide on what software to use. Since I wrote this article in 2007, many new advances have been made in machine learning. Notably the python module Scikit Learn came out and Hadoop was released into the wild.
I’m not overly skilled in coding and programming - I know enough to get by - so I settled on RapidMiner. RapidMiner is a very simple visual programming platform that lets you drag and drop “operators” onto a design canvas. Each operator performs a specific type of task related to ETL, modeling, scoring, or extending the features of RapidMiner.
There is a slight learning curve, but it’s not hard to learn if you follow along with this tutorial!
The AI Financial Market Model
First download RapidMiner Studio and then get your market data (OHLCV prices), merge the series together, transform the dates, figure out the trends, and so forth. Originally these tutorials built a simple classification type of model that looked to see if your trend was classified as being in an “up-trend” or a “down-trend.” The fallacy was that they didn’t take into account the time series nature of the market data, and the resulting model was pretty bad.
For this revised tutorial we’re going to do a few things.
- Install the Finance and Economics, and Series Extensions
- Select the S&P500 weekly OHLCV data for a range of 5 years. We’ll visualize the closing prices and auto-generate a trend label (i.e. Up or Down)
- We’ll add in other market securities (i.e. Gold, Bonds, etc) and see if we can do some feature selection
- Then we’ll build a forecasting model using some of new H20.ai algorithms included in RapidMiner v7.2
All processes will be shared and included in these tutorials. I welcome your feedback and comments.
We’re going to use the adjusted closing prices of the S&P500, 10 Year Bond Yield, and the Philadelphia Gold Index from September 30, 2011 through September 20, 2016.
The raw data looks like this:
We renamed the columns (attributes) humanely by removing the “^” character from the stock symbols.
Next we visualized the adjusted weekly closing price of the S&P500 using the built in visualization tools of RapidMiner.
The next step will be to transform the S&P500 adjusted closing price into Up and Down trend labels. To automatically do this we have to install the RapidMiner Series Extension and use the Classify by Trend operator. The Classify by Trend operator can only work if you set the SP500_Adjusted_Close column (attribute) as a Label role.
The Label role in RapidMiner is your target variable. In RapidMiner all data columns come in as “Regular” roles and a “Label” role is considered a special role. It’s special in the sense that it’s what you want the machine learned model to learn to. To achieve this you’ll use the Set Role operator. In the sample process I share below I also set the Date to the ID role. The ID role is just like a primary key, it’s useful when looking up records but doesn’t get built into the model.
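Outside RapidMiner, the same up/down labeling can be sketched in a few lines of pandas. This is a hypothetical equivalent of what a trend-classification step produces, not the Series extension's actual algorithm; the closing prices are made up.

```python
import pandas as pd

# Hypothetical weekly adjusted closes; in the tutorial this column
# is SP500_Adjusted_Close.
close = pd.Series([100.0, 102.0, 101.0, 105.0, 104.0])

# Label each week by the direction of the next week's close.
# The final label is an artifact: there is no following week,
# so the NaN comparison falls through to "Down".
label = (close.shift(-1) > close).map({True: "Up", False: "Down"})
print(label.tolist())  # ['Up', 'Down', 'Up', 'Down', 'Down']
```

The resulting Up/Down column is exactly the kind of target you would assign the Label role to.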
The final data transformation looks like this:
The GSPC_Adjusted_Close column is now transformed and renamed to the label column.
The resulting process looks like this:
That’s the end of Lesson 1 for your first AI financial market model. You can download the above sample process here. To install it, just go to File > Import Process. Lesson 2 will be updated shortly.
_This is an update to my original 2007 YALE tutorials, updated for RapidMiner v7.0. In the original set of posts I used the term AI when I really meant Machine Learning._
Want to leave a comment?
If you want to give me some feedback on this post, please contact me via email or on Twitter.
|
OPCFW_CODE
|
''' Set of scripts developed for Sancathon 2021
    > GRABus <
* Script for managing the SQL database:
    The goal of developing this project in Python was to show the viability of
using an SQL database to store the data coming from the various networks that
collect information about both the bus stops and the bus lines. Keep in mind
that all of the project's scripts are beta versions that can be refined and
explored further in contexts beyond Sancathon.
    One of the main motivations for working with SQL is that it makes it
feasible to maintain a database of the size needed to feed the other components
of the project. Through this database-maintenance script it is possible to
insert and delete records (in a complete version of the project, further
operations would be implemented) easily and in a computationally efficient way.
    The motivation for using Python as the bridge to the database was
simplicity, since we understand that most users of the program (employees of
companies or public agencies) would be closer to the urban-mobility field and
would not have deep knowledge of software development. Python therefore offers
a high-level language that is easy for both the users and the developers.
    Finally, we chose SQLite, again for ease of implementation and because it
is open source, so anyone can access the project and run their own tests. In a
scenario where the project leaves beta, a more robust SQL server would be
needed; good options are MySQL, Oracle SQL, or Microsoft SQL Server. All of
them provide the functionality required by the project and can handle the
amount of data we expect to work with, but they are not open source and have a
usage cost.
'''
import sqlite3
# Function that operates on the bus-stop locations table of the database
def SQL_operations_Location(table, operation, adress, latit, longit):
    # conn = connection, representing the connection to the local database file
    conn = sqlite3.connect('GRABus.sqlite')
    # cursor for managing the open database
    cursor = conn.cursor()
    # creates the table with the requested name and the columns for a location
    cursor.execute(f'''
    CREATE TABLE IF NOT EXISTS {table} (adress TEXT, lat REAL, long REAL)''',
    )
    # For now only two operations are implemented: insert and delete
    if operation.upper() == 'INSERT':
        cursor.execute(f'''
        INSERT INTO {table} (adress, lat, long)
        VALUES (?, ?, ?)''', (adress, latit, longit)
        )
    if operation.upper() == 'DELETE':
        cursor.execute(f'''
        DELETE FROM {table}
        WHERE adress=?''', (adress,)
        )
    # Commit the operations to the database
    conn.commit()
    print("OPERATION SUCCESSFUL")
# Function that operates on the bus-line tables of the database
def SQL_operations_Lines(table, operation, weight, dpdb, dpdt):
    # conn = connection, representing the connection to the local database file
    conn = sqlite3.connect('GRABus.sqlite')
    # cursor for managing the open database
    cursor = conn.cursor()
    # creates the table with the requested name and the columns for a line
    cursor.execute(f'''
    CREATE TABLE IF NOT EXISTS {table} (weight REAL, dpdb REAL, dpdt REAL)''',
    )
    # For now only two operations are implemented: insert and delete
    if operation.upper() == 'INSERT':
        cursor.execute(f'''
        INSERT INTO {table} (weight, dpdb, dpdt)
        VALUES (?, ?, ?)''', (weight, dpdb, dpdt)
        )
    if operation.upper() == 'DELETE':
        cursor.execute(f'''
        DELETE FROM {table} WHERE weight=?''', (weight, )
        )
    # Commit the operations to the database
    conn.commit()
    print("OPERATION SUCCESSFUL")
def main():
    # Brief description for the user
    print('''Welcome to the GRABus SQL database manager
    A short description of the basic parameters:
    1. table: the "Location" table holds the coordinates of the bus
    stops; to reference a line, use "Linha" + the line number
    2. operation: the operation to perform on the database (insert, delete, etc.)
    ''')
    # loop so the user can easily perform several operations
    while True:
        # inputs
        table = input("Table: ")
        operation = input("Operation: ")
        # case for operating on the bus-stop table
        if table == "Location":
            adress = input("Address: ")
            latitude = float(input("Latitude: "))
            longitude = float(input("Longitude: "))
            SQL_operations_Location(table, operation, adress, latitude, longitude)
        # case for operating on other tables, such as each specific line
        else:
            weight = float(input("Weight: "))
            dpdb = float(input("People flow per bus: "))
            dpdt = float(input("People flow per time: "))
            SQL_operations_Lines(table, operation, weight, dpdb, dpdt)

# Invoke the main function
if __name__ == '__main__':
    main()
|
STACK_EDU
|
Single-node Kubernetes on Home Lab using MicroK8s, Metallb, and Traefik
MicroK8s: High availability, Low-ops, Minimal Kubernetes for All-Size Clusters
In my recent blog, I set up a single-node Kubernetes cluster on my home lab server using k0s. It is a good tool that makes it easy to set up a cluster in just one or two commands. However, as it is relatively new, I found a problem when starting the cluster: the worker node sometimes isn't up and running, and you need to restart the service.
Setup Single-node Kubernetes Cluster on a Home Lab server using k0s
Reducing your cloud bill and make use of your PC at home
In this blog, I will try another tool called MicroK8s, which has been around for some time and hopefully will be more stable than k0s, to set up a single-node Kubernetes cluster on Ubuntu Linux 20.04 on my home lab box.
- A public IP address from your internet provider, either dynamic or static.
- A domain name mapped to your static IP or a dynamic DNS domain name for dynamic IP which can be configured on your modem router to sync with the Dynamic DNS provider e.g. no-ip.
- You have a fresh install of Ubuntu 20.04 on your home lab server — see this blog for how-to.
- SSH has been configured for remote access on your home lab server— see my previous blogs on how to setup on home lab or Linode VM.
- kubectl, git, and helm are installed on your local machine
In this section, we will install MicroK8s on our Ubuntu server.
On your server, use snap to install the MicroK8s package.
sudo snap install microk8s --classic --channel=1.21
Add yourself into the microk8s group, gain access to the .kube caching directory, and refresh the session so the group update takes effect.
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER ~/.kube
su - $USER
Monitor the cluster provisioning status. This may take a few minutes until the cluster is ready.
microk8s status --wait-ready
Get nodes and services on the cluster.
microk8s kubectl get nodes
microk8s kubectl get services
Enable foundation addons
microk8s enable dns storage
To enable remote access to the api-server using kubectl, edit the file
/var/snap/microk8s/current/certs/csr.conf.template on your server and add the domain name and/or IP address (if static) of your server, as reachable from the internet, under the alt_names section.
[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
DNS.6 = your.dnsname.com
IP.1 = 127.0.0.1
IP.2 = 10.152.183.1
IP.3 = 192.168.1.xx
IP.4 = 123.456.789.0
Export kubeconfig to file.
microk8s config > admin.config
On your local machine, copy the kubeconfig file from the server to your machine.
scp email@example.com:~/admin.config .
Edit the admin.config file and update
clusters.cluster.server by replacing the IP and port with your domain name or public IP (if static). It is recommended to use a port other than 16443 for better security and then configure your router to forward the specified port to your server's port 16443.
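For orientation, the edited cluster entry in admin.config ends up looking roughly like this sketch (the domain name and forwarded port 26443 below are illustrative placeholders, not values from this setup):

```yaml
# Sketch of the relevant part of admin.config after the edit (hypothetical values)
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <unchanged base64 CA data>
    server: https://your.dnsname.com:26443   # was https://192.168.1.xx:16443
  name: microk8s-cluster
```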
Test the connection
kubectl get nodes
In this section, we will deploy an application named whoami for testing purposes.
First, clone this repository and apply deployment and service on the cluster.
git clone https://github.com/pacroy/whoami.git
kubectl create ns whoami
kubectl apply -f whoami.yml -n whoami
Once the pod is up and running, forward port to its service.
kubectl port-forward service/whoami 8080:80 -n whoami
Go to localhost:8080 on your local machine and you should see a response like this:
To expose a service with LoadBalancer, we need to enable metallb, which is a load balancer for bare metal.
microk8s enable metallb
It will ask you to input the IP address range allocated for load balancers. It is good to assign a small IP address pool within your subnet that is outside your DHCP range to avoid collisions.
In this case, I chose:
Enter each IP address range delimited by comma (e.g. ‘10.64.140.43–10.64.140.49,192.168.0.105–192.168.0.111’): 192.168.1.85–192.168.1.89
Update whoami service type to LoadBalancer.
kubectl apply -f service-lb.yml -n whoami
Test the load balancer by taking note of the EXTERNAL-IP and opening it in your browser (in this case it is http://192.168.1.85); you should see the same result as before.
Restore the service type to ClusterIP as we will expose it via Ingress in the next section.
kubectl apply -f whoami.yml -n whoami
Install Traefik Ingress
On your server, enable traefik ingress controller for external access.
helm repo add traefik https://helm.traefik.io/traefik
helm repo update
kubectl create namespace traefik
helm install traefik traefik/traefik -n traefik
Check the components and you should see that the traefik service is exposed via LoadBalancer on both port 80 and 443 by default.
Restore the whoami service type to ClusterIP and create an ingress which routes traffic from any host with URI /whoami to our service.
kubectl apply -f ingress.yml -n whoami
Configure your modem router to forward port 80 and 443 to your LoadBalancer IP address. Instruction varies by router’s brand and model and won’t be covered here.
Open your browser and go to your domain name or public IP and access URI
/whoami. In my case, it is http://microk8s.ddns.net/whoami.
You can access traefik dashboard by forwarding port 9000 from a traefik pod.
kubectl port-forward $(kubectl get pods --selector "app.kubernetes.io/name=traefik" --output=name -n traefik) -n traefik 9000:9000
Then access the traefik dashboard at http://localhost:9000/dashboard
UPDATE: You may want to check first whether your domain name works with Let's Encrypt by following what I did in this blog.
Now, if you try to access https://your.dnsname.com/whoami then you will get this warning page.
This is because Traefik is using a self-signed certificate, which is not trusted by the browser.
To make our whoami website trusted by the browser, we need to tell Traefik to get and use a certificate issued by a trusted certificate authority (CA), e.g. Let's Encrypt.
Delete the ingress as we will use IngressRoute instead in the next section.
kubectl delete ingress/whoami -n whoami
Instead of using the standard Ingress object with lots of annotations, Traefik also provides its own CRD called IngressRoute, which makes the configuration more readable and structured.
Let’s try IngressRoute. Edit the file
ingressroute.yml and update the hostname
microk8s.ddns.net with your domain name then apply.
kubectl apply -f ingressroute.yml -n whoami
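For reference, the ingressroute.yml being applied presumably resembles this sketch of a Traefik v2 IngressRoute (the names and the exact match rule are assumptions on my part; check the repository file for the real contents):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
spec:
  entryPoints:
    - web                # plain HTTP entry point
  routes:
    - match: Host(`microk8s.ddns.net`) && PathPrefix(`/whoami`)
      kind: Rule
      services:
        - name: whoami
          port: 80
```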
Try accessing whoami again at http://your.dnsname.com/whoami and you should see the same result.
Next, we need to upgrade Traefik to include a certificate resolver. Edit the file values.yml and replace
firstname.lastname@example.org with your email address. This will be used by Let's Encrypt to send a notification email when the certificate is about to expire.
Upgrade the traefik release.
helm upgrade traefik traefik/traefik -n traefik --values values.yml
Check if the arguments are applied correctly by looking at the Deployment's manifest.
helm get manifest traefik -n traefik
Check traefik’s log and you should see no error about the certificate resolver.
kubectl logs $(kubectl get pod -n traefik -o name) -n traefik -f
IngressRoute with TLS
Now, let’s apply a new IngressRoute that using the certificate resolver. Don’t forget to update the hostname to yours!
kubectl apply -f ingressroute-tls.yml -n whoami
Check if the certificate is created properly.
kubectl exec -it $(kubectl get pod -n traefik -o name) -n traefik -- cat /data/letsencrypt.json
NOTE: It may take a while before the certificate shows up for the configured domain.
Test accessing your application at https://your.dnsname.com/whoami and the browser should show the page with a valid certificate without warning.
This section will show you how to enhance security for our application.
TLS version 1.2
HTTPS should no longer support SSL, TLS v1.0, or TLS v1.1 as they are weak in terms of security. If we scan our endpoint with SSL Labs, we get a B rating.
We can make our Traefik ingress accept connections with a minimum of TLS v1.2 by creating a new TLSOption in the default namespace.
kubectl apply -f tlsoption.yml
This creates a default TLSOption that will apply to all Traefik routers by default (if not explicitly overridden). Scan the endpoint again with SSL Labs and you will now get an A rating.
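For reference, a TLSOption manifest of the kind tlsoption.yml applies typically looks like this sketch (using the Traefik v2 CRD; the actual repository file may differ):

```yaml
# Minimal TLSOption enforcing TLS 1.2 as the floor (sketch, not the repo file verbatim)
apiVersion: traefik.containo.us/v1alpha1
kind: TLSOption
metadata:
  name: default        # the name "default" makes it apply to routers implicitly
  namespace: default
spec:
  minVersion: VersionTLS12
```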
Redirect to HTTPS
So far, our whoami application serves HTTP traffic on port 80 and HTTPS traffic on port 443. What if we want to redirect all HTTP traffic to HTTPS? We can leverage the RedirectScheme middleware.
Let's create a new middleware in the default namespace.
kubectl apply -f middleware-https.yml
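The middleware manifest is presumably similar to this sketch (the name redirect-https is my assumption; check middleware-https.yml in the repository for the actual contents):

```yaml
# RedirectScheme middleware: send HTTP clients a permanent redirect to HTTPS
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: redirect-https
  namespace: default
spec:
  redirectScheme:
    scheme: https
    permanent: true    # issue a 301 instead of a 302
```

The `permanent: true` setting matches the `301 Moved Permanently` response shown in the curl output below.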
Then replace the IngressRoute with a new one that uses the middleware.
kubectl apply -f ingressroute-http.yml -n whoami
Test your HTTP endpoint and it should now redirect to HTTPS.
$ curl http://microk8s.ddns.net/whoami -D-
HTTP/1.1 301 Moved Permanently
Date: Mon, 12 Apr 2021 14:14:36 GMT
Content-Type: text/plain; charset=utf-8

Moved Permanently
|
OPCFW_CODE
|
tests instantiating components
@mcflugen, @mdpiper, @Elchin
Here's the copy of topoflow I'm working on. All of the components can be individually instantiated (without emeli), and most go through IRF without too much trouble, requiring very few outside variables.
The file component_testing/components.py does it for every component in topoflow/components except HIS_*.py. If any dummy variables were defined, they appear right before the call that needed them (so if they appear after .initialize() but before .update(), they were first needed by update). I made small modifications to a couple of files, and those are mentioned in the script.
Neither of the erode components got through the whole process. It appeared to be due to size differences between arrays (in different functions for erode_d8_local and _global), and it might be a result of missing or dummy variables. Both d8_local and d8_global ran, so the problem is probably with erode itself.
Thoughts?
@mcflugen, @Elchin
This is excellent; thank you, @mperignon! The tests you created in components.py were more enlightening than anything I'd read in the docs.
I have a few thoughts:
Should TopoFlow, and the tests in components.py, be added to Travis CI?
We're going to need Scott's input on things that don't work.
I'm now fairly certain that TopoFlow will not be very user-friendly in WMT, since every component allows some combination of custom (Rivertools?) file uploads for their inputs.
@mcflugen, @Elchin
@mdpiper, there is a set of tests included with TopoFlow with varying degrees of completeness. None of them (as far as I can tell) test the coupling of components, but they do something similar to what I did. Neither those nor mine check the results - they just check that nothing explodes. So I don't think that this set of tests should be added to Travis CI just yet, but I do think a new set of tests should.
I'm going to try to run the non-working components through emeli, as Scott intended, and see if they work. It really might just be that they are not getting the input files quite like they want them.
I think we need to make a decision about how much to modify TopoFlow to make it usable with WMT. It's a black box, even when using the components individually, and much of that is due to the use of Rivertools-specific file formats for inputs and outputs. There should be an easy way to just grab the numpy arrays that are being passed around, but the only place I can see to intercept them is when calling topoflow_driver.update(), which is the method that creates a gazillion files.
If we could bypass the normal inputs and outputs and just give it numpy arrays, then it would not be nearly as hard to make it user-friendly(er) in WMT.
ANUGA has really flexible functions for setting quantity values that take all kinds of data and redirect it down the appropriate path to turn them into one common data type. They wouldn't completely solve the problem, but they could allow each input in WMT to be a dropdown menu and a text field. Line 701, for example: https://github.com/GeoscienceAustralia/anuga_core/blob/master/anuga/abstract_2d_finite_volumes/quantity.py
Another issue: the BMI initialize method should allow no arguments, but (as @mperignon has shown) the TopoFlow components require a .cfg file. At least there are several .cfg files in the topoflow/examples subpackage to use as dummies for testing.
|
GITHUB_ARCHIVE
|
I am testing h264 mp4 video in pano2vr and want to share what I have learned so far. Any ideas and questions are very welcome.
Initially I used the patches (A) tool to cut an image patch from the pano. I used this patch to make sure that the video-camera in my 3d-software records video of the exact correct region.
I created the video and then changed the patch in the pano from image to video and selected the .mp4 file that should be played. I tried it with 2 patches and tested the result in different browsers on my WIN10 PC.
Firefox 75.0 plays both video clips at the correct position in good quality. I can move the pano around, the videos still play in a loop, and all is good.
Internet Explorer 11 doesn't play the videos but shows the patch's frame and something like a single pixel of the video stretched to the size of the frame.
EDGE 44 doesn't play the videos either and shows the same single stretched pixel in the patch's frame.
Chrome 81 plays one of the two videos, as long as I don't turn around in the pano. Once I move, both videos behave like in IE and EDGE.
Online-Example: http://www.lichtundschatten-3d.de/pano2 ... ge-pano/0/
I was interested in finding out more and tried 2 different setups.
Setup 1: Only one video that was initially a image patch and then got changed to video, this in the top one. Below, you see the same .mp4 just added to the pano directly with Videos (V).
Result: The video on top behaves the same as in the initial test; nothing changed here. The video below (the very same .mp4) plays in all four tested browsers.
Online-Example: http://www.lichtundschatten-3d.de/pano2 ... ge-pano/1/
Setup 2: I deleted the patch, put only the video via Videos (A) onto the pano, and moved it to the correct position where the deleted patch was.
Result: Firefox still works perfectly, as expected. IE and EDGE blur the video as if it didn't have enough resolution. Chrome plays the video and it looks good until I move the pano. Then the video seems to change resolution and looks blocky.
Online-Example: http://www.lichtundschatten-3d.de/pano2 ... ge-pano/2/
I have no clue so far why only Firefox plays everything fine. I will keep researching and let you know when I know more. Thank you for your feedback!
|
OPCFW_CODE
|
What is AntiSpyware 2008
Like other rogue anti-spyware programs, AntiSpyware 2008 tries to force its way onto your system by creating false alerts to make you believe that your system is infected with multiple spyware infections. It then offers you a trial version of AntiSpyware 2008 for download, or the full licensed version of this rogue anti-spyware application for purchase.
If you are not careful, you can get trapped by AntiSpyware 2008. Purchasing it is not only a waste of your money but may escalate the problem further, because AntiSpyware 2008 might come bundled with other spyware, trojans and other malware scripts. Its makers can also commit fraud against you if you purchase the licensed version of AntiSpyware 2008 and submit your credit card information.
Technical Details of AntiSpyware 2008
- Other names and clones: Antivirus 2008, Antivirus2008, WinX Security Center, DoctorAntivirus2008, VistaAntivirus2008, SystemAntivirus2008, Antivirus2009, Antivirus 2009, Antivirusxp2008, Antivirus xp 2008, AntiSpyware2008KB
- Date Appeared: July 2008
- Characteristic: Rogue security program
- URL: Not Known
Do I have AntiSpyware 2008
You can search your computer manually yourself, but this is not recommended unless you are a tech geek. To save time and effort, we recommend you download a FREE scanner.
How to Remove AntiSpyware 2008
The best way to remove AntiSpyware 2008 is to install a good-quality anti-spyware program and scan your system for any AntiSpyware 2008 infections. Automatic removal is more reliable and complete than attempting to remove AntiSpyware 2008 manually, which may sometimes lead to erroneous results if you are not completely aware of all the files and registry entries used by this rogue anti-spyware.
Instructions for the manual removal of AntiSpyware 2008
If you really want to remove the AntiSpyware 2008 infection from your system manually, then proceed as follows.
Step 1: Kill the AntiSpyware 2008 Processes – Learn how to do that
Step 2: Remove Following AntiSpyware 2008 files, folders and all associated AntiSpyware 2008 DLL files: Learn how to do that
%profile%\application data\microsoft\internet explorer\quick launch\antispyware-2008.lnk
Step 3: Delete following AntiSpyware 2008 registry entries: Learn how to do that
HKEY_CURRENT_USER\software\microsoft\windows\currentversion\explorer\menuorder\start menu\programs\antispyware 2008
|
OPCFW_CODE
|
Overnight jobs are failing: BUG: Attempt to do `checkout in Node ...
See original issue on GitLab
In GitLab by [Gitlab user @jjardon] on Apr 3, 2019, 01:01
Job #189317157 failed for 9605f798022a02ff92f089d54f031c12bfbe6a00:
[--:--:--][ ][main:core activity] BUG BUG: Attempt to do `checkout in Node(value={'checkout': Node(value='True', file_index=6, line=35, column=16), 'url': Node(value='gnome:libglnx.git', file_index=6, line=36, column=11)}, file_index=6, line=35, column=6)` test
In GitLab by [Gitlab user @jjardon] on Apr 3, 2019, 01:04
changed the description
In GitLab by [Gitlab user @jjardon] on Apr 3, 2019, 01:08
changed title from Overnigth jobs are failing: to Overnigth jobs are failing:{+ BUG: Attempt to do `checkout in Node ...+}
In GitLab by [Gitlab user @tristanvb] on Apr 3, 2019, 07:04
[Gitlab user @jjardon] This will be because the overnight tests from master are still using a bst-external which is only compatible with BuildStream 1.
[Gitlab user @danielsilverstone]-ct I think this is from your "BUG" assertion added in 3816dcf89; do you think we could improve this plugin-author-facing assertion message, please?
First, there is no need for the message to redundantly include the word "BUG", it is an unhandled exception so it will anyway be reported in the logs as a "BUG".
Secondly, I think that the message itself is confusing, we could do better to inform the plugin author that the in expression is not supported by the Node type (I think the plugin author would eventually figure it out, but we should make it easier and more obvious I think).
In GitLab by [Gitlab user @jjardon] on Apr 3, 2019, 13:06
mentioned in merge request !1275
In GitLab by [Gitlab user @danielsilverstone-ct] on Apr 4, 2019, 08:47
[Gitlab user @tristanvb] [Gitlab user @jjardon] Yeah, I can see that the message is bad. I'll put it on my list to improve today, pointing people at node_has_member() or similar
In GitLab by [Gitlab user @danielsilverstone-ct] on Apr 4, 2019, 11:56
I propose to update that error as follows:
assert False, \
"""
Code contains `{} in {}` test which is unsupported by YAML nodes.
If this is in a plugin, then please ask the author to use the new
`self.node_has_member()` method instead.
""".format(what, self)
How does that sit for you?
In GitLab by [Gitlab user @tristanvb] on Apr 4, 2019, 12:18
It looks like this is still going to format the self as something wonky like:
`checkout in Node(value={'checkout': Node(value='True', file_index=6, line=35, column=16), 'url': Node(value='gnome:libglnx.git', file_index=6, line=36, column=11)}, file_index=6, line=35, column=6)`
I doubt that it is valuable to show the whole stringified namedtuple for this case; the stack trace above the message should be enough to trace down where the invalid in statement originated from.
I think a simpler message here might be:
"Unsupported check for member '{}' in YAML Node, use Plugin.node_has_member() instead".format(what)
I wouldn't go as far as "If this is in a plugin", in any case it is a bug, either in BuildStream core or in a plugin. Stack traces from YAML node accesses are not really any more special than any other stack trace: we hope that users never see them, and if they do, we hope they file a bug somewhere; the stack trace will help us to determine where the bug originated from.
Beyond this, if Plugin.node_has_member() is going to exist, does it not make sense to simply call it instead of raising an error (and actually support the in statement)?
In GitLab by [Gitlab user @danielsilverstone-ct] on Apr 4, 2019, 13:33
We made the decision to not support in because when we did it tended to mask other parts of code which needed fixing. However I suppose now it might be better to simply support it, you're right there. I'll give it a bit of thought. I take your point about the stringified node though.
In GitLab by [Gitlab user @danielsilverstone-ct] on Apr 4, 2019, 14:11
mentioned in merge request !1280
In GitLab by [Gitlab user @danielsilverstone-ct] on Apr 4, 2019, 16:24
I've decided to agree with you and have submitted !1280 to Marge for merge (after James reviewed it).
[Gitlab user @jjardon] You'll find this message no longer exists, but any other issues it was masking may now come forward. Please keep an eye out and either transmute this report as needed, or close it and open fresh ones if necessary.
Thanks for your vigilance,
D.
In GitLab by [Gitlab user @jjardon] on Apr 5, 2019, 14:40
Thanks a lot [Gitlab user @danielsilverstone]-ct
Let's open a new one
In GitLab by [Gitlab user @jjardon] on Apr 5, 2019, 14:40
closed
|
GITHUB_ARCHIVE
|
Shell scripting is a powerful tool that allows users to automate tasks and execute commands in a Unix or Linux environment. It provides a way to write scripts using shell commands and programming constructs. In this article, we will explore the basics of shell scripting in easy language.
What is Shell Scripting?
Shell scripting refers to writing a series of commands for the shell to execute. The shell, which is the command-line interpreter, reads and executes the script line by line. It can be used as a standalone script or as part of larger programs.
Why use Shell Scripting?
Shell scripting offers several benefits. Firstly, it allows users to automate repetitive tasks, saving time and effort.
Secondly, it enables users to combine multiple commands into a single script, making complex tasks more manageable. Additionally, shell scripting provides flexibility and customization options that are not available with graphical user interfaces.
Getting Started with Shell Scripting
To start shell scripting, you need a text editor to write the script. Popular text editors include Vim, Nano, and Emacs. Once you have created your script file with a .sh extension (e.g., script.sh), you need to make it executable using the chmod command:
chmod +x script.sh
After making the file executable, you can run the script by typing its name preceded by "./":
./script.sh
Writing Shell Scripts
Shell scripts begin with a shebang line that specifies the interpreter to be used:
#!/bin/bash
This line tells the system to use the Bash shell for executing the script.
Variables in Shell Scripts
You can define variables in shell scripts using the following syntax (note: no spaces around the equals sign):
variable_name=value
To access the value stored in a variable, precede the variable name with a dollar sign ($):
echo $variable_name
Control Structures in Shell Scripts
Shell scripting provides various control structures to perform conditional and iterative operations. Some commonly used control structures include:
- If-Else: Executes a block of code based on a condition.
- For Loop: Executes a block of code multiple times.
- While Loop: Executes a block of code until a condition is no longer true.
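As a quick illustration, here is a small script (the values are my own, purely illustrative) exercising all three constructs:

```shell
#!/bin/bash
# Demonstrates if-else, for, and while in one short script.
count=3

# If-Else: branch on a condition
if [ "$count" -gt 0 ]; then
  echo "count is positive"
else
  echo "count is not positive"
fi

# For loop: run a block once per item in a list
for item in a b c; do
  echo "item: $item"
done

# While loop: repeat until the condition is no longer true
while [ "$count" -gt 0 ]; do
  echo "count: $count"
  count=$((count - 1))
done
```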
Command-Line Arguments in Shell Scripts
Shell scripts can accept command-line arguments that allow users to pass information to the script while executing it. These arguments are accessed using special variables:
- $0: Name of the script file.
- $1, $2, ..: Positional parameters.
- $@: All positional parameters as separate strings.
- $#: Number of positional parameters.
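A tiny hypothetical script shows these variables in action; `set --` is used here only so the example can simulate arguments without being invoked from the command line:

```shell
#!/bin/bash
# Simulate the script being called as: ./args.sh alpha beta
set -- alpha beta

echo "script name: $0"   # $0: name of the script file
echo "first arg:   $1"   # $1: first positional parameter
echo "all args:    $@"   # $@: all positional parameters
echo "arg count:   $#"   # $#: number of positional parameters
```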
Built-in Shell Commands and External Programs
Shell scripts can execute built-in shell commands or external programs. Built-in commands are provided by the shell itself, while external programs are standalone executables. Commonly used built-in commands include:
- echo: Prints text or variables to the screen.
- read: Reads input from the user and stores it in variables.
- cd: Changes the current directory.
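A short, self-contained demo of these built-ins (input for read is redirected from a here-document here so the example runs unattended; interactively, read would wait for the keyboard):

```shell
#!/bin/bash
# read normally waits for keyboard input; for an unattended demo we
# feed it a line from a here-document instead.
read -r name <<EOF
world
EOF
echo "hello, $name"   # echo prints text and variable values

cd /tmp               # cd changes the current working directory
echo "now in: $PWD"
```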
Shell scripting is a powerful tool for automating tasks and executing commands in a Unix or Linux environment. It provides flexibility, customization options, and the ability to combine multiple commands into a single script.
By mastering shell scripting, you can become more efficient and productive in managing your system. So start exploring the world of shell scripting and unleash its potential!
Remember to practice regularly and experiment with different commands and constructs to enhance your shell scripting skills. Happy scripting!
|
OPCFW_CODE
|
When four computers are connected to a hub, for example, and two of those computers communicate with each other, the hub simply passes all network traffic through to each of the four computers. By generating less network traffic when delivering messages, a switch performs better than a hub on busy networks.
What is difference between hub and switch in networking?
Hubs and switches are both network connecting devices. A hub works at the physical layer and retransmits any signal it receives out of its ports, whereas a switch establishes and terminates connections based on need. A hub works in the Physical Layer; a switch works in the Data Link Layer.
Should I get a hub or switch?
For a small network with fewer users or devices, a hub can easily handle the network traffic and is a cheaper option for network cabling. Once the network grows to around 50 users, it is better to use an Ethernet switch to cut down on unnecessary traffic.
Do I need a switch or a hub?
If you have only a few devices on your LAN, a hub may be a good choice for a central connection for your devices. If you have the need for more connections, an Ethernet switch may be a better option over a hub.
Are hubs still used today?
Hubs are now largely obsolete, having been replaced by network switches except in very old installations or specialized applications. As of 2011, connecting network segments by repeaters or hubs is deprecated by IEEE 802.3.
Which is more intelligent a hub or a switch?
A switch is more intelligent than a hub. Like a hub, a switch is the connection point for the computers (and other devices) in a network. However, a switch is more efficient at passing along traffic. It records the addresses of the computers connected to it in a table.
How does a network hub affect internet speed?
When a hub receives a packet of data, it broadcasts that data to all other connected devices. Additionally, network bandwidth is split between all of the connected computers. So, the more computers connected, the less bandwidth that is available for each computer, which means slower connection speeds.
What’s the difference between a hub, a switch, and a router?
Switches learn the location of the devices they are connected to almost instantaneously. The result is, most network traffic only goes where it needs to, rather than to every port. On busy networks, this can make the network significantly faster. A router is the smartest and most complicated of the three.
Why does a switch send data only to the destination port?
A switch sends out the data only to the destination devices for which the frames are meant. This switching operation reduces the amount of unnecessary traffic that would occur if the same information were sent to every port, as happens with a hub. This also improves the bandwidth of the network.
|
OPCFW_CODE
|
Voltage and Current for the Pi
There's a heap of these questions, so before you mark mine as duplicate I'll try and explain how mine is different.
I don't really know too much about voltage and current, wattage, resistance or ohms.
If I supply my Pi from a computer power supply versus a wall charger, how are its needs met? Will it continue to draw until the power source breaks, or will the Pi be under-provided? What if the power source is too great? I understand that too high a voltage will destroy the device, but what about too high a current? Will it draw only what it needs (which seems to be suggested everywhere, and also makes no sense to me), or will it break?
I'm probably the most confused with LEDs and drawing energy, because multiple sources say a resistor is in place to protect the Pi, and not the LED. To my knowledge, a resistor is because the current coming out is too high, but posts I've read talk about too much demand on the GPIO pins. What does that mean? Does the Pi try to provide the energy and can't (breaking the Pi)? or is the LED provided with too much current (breaking the LED)?
There is no such thing as too much current. See Raspberry Pi Power Limitations. What is your ACTUAL question (or is this just a rant)?
This has nothing to do with the Pi. You need to find a basic electricity tutorial.
Welcome -- but I do not see a coherent question here. Please take the tour to understand better how the site works, and read "What types of questions should I avoid asking?". Note that general questions about electricity belong on our larger sibling site, Electrical Engineering.
I will clarify one thing for you: "a resistor is because the current coming out is too high, but posts I've read talk about too much demand on the GPIO pins" -> Because in theory there is no limit on the amount of current that could come out if you, e.g., attached an output GPIO to ground driven high. However, in reality a short circuit will ruin the GPIO and/or Pi before infinity current is reached ;) The point is you must make sure the current is limited, based on resistance, to within the working limits of the pin (~20 mA). The pin will not do it all by itself.
It's not a rant; I'm just ill-informed. I've never done anything in electronics. How do you expect me to know? @Milliways
What about too high in Current? Will it draw what it needs or will it break?
About the amperage: a circuit will draw what it needs. The rule is to match the voltage of the source to the device and to have a power source rated for more amperage than the device needs.
If your first intuition were true (devices breaking), a lot of electronic devices wouldn't work! For example, consider a speaker amplifier. Its consumption varies over time, depending 1) on the gain setting: more volume, more consumption, and 2) on the sound given as input: amplifying silence consumes less than amplifying a sound.
Obviously, sound engineers don't change their power supply when they raise or lower the volume, nor at each drum kick!
Instead, they use a power supply big enough (in amperes) to work at high volume.
Does the Pi try to provide the energy and can't (breaking the Pi)? or is the LED provided with too much current (breaking the LED)?
Both may happen !
Take an example: consider a LED across the GND/V+ GPIO pins, without a resistor.
The formula I=U/R shows that we could get an almost infinite current draw (in amperes) when the resistance is negligible. This would cause both the LED to break and the polyfuse (an onboard over-current protection) to trigger. In this case, it's hard to predict which element breaks first; it's a race between the fuse and the LED.
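To make the resistor sizing concrete, here is a worked example (the 2.0 V forward voltage and 16 mA target are typical assumed values for a red LED, not measured ones):

```shell
# R = (V_gpio - V_led) / I
# GPIO pin: 3.3 V; assumed LED forward voltage: 2.0 V;
# target current: 16 mA, under the ~20 mA pin limit mentioned above.
awk 'BEGIN { printf "%.0f ohms\n", (3.3 - 2.0) / 0.016 }'   # prints: 81 ohms
```

In practice you would round up to the next standard resistor value, e.g. 82 or 100 ohms, which only lowers the current further.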
|
STACK_EXCHANGE
|
Microsoft To Discontinue Outlook Web Apps on iOS and Android
Microsoft will start the process of phasing out its Outlook Web App mobile client e-mail applications for iPhone, iPad and Android devices next month.
Microsoft announced the phase-out in a notice last week that used the old "Outlook Web App" term, even though the currently used descriptor is "Outlook on the Web." Outlook on the Web apps are browser-based e-mail client applications available to Office 365 subscribers and to users of the on-premises Office Web Apps Server (now called "Office Online Server").
In April, Microsoft plans to withdraw the Outlook on the Web apps for Android from the Google Play store. It'll also withdraw the iPad and iPhone Outlook on the Web apps from Apple's iTunes store in that month.
On May 15, Outlook on the Web (or Outlook Web App) mail client applications for Android, iPad and iPhone mobile devices will stop working, Microsoft's notice warned. On that date, end users still using those apps will get prompted with a message suggesting that they download the Outlook for Android or Outlook for iOS native apps.
End users will get recurring messages in April telling them that the Outlook on the Web apps won't work on May 15. However, if an organization uses Exchange Server on-premises, then IT pros will have to tell those end users that their Outlook on the Web apps are expiring. Microsoft explained that point in this "Microsoft OWA Mobile Apps" support document:
If your organization uses Exchange Server on-premises (that is, hosts its own Exchange servers rather than using Office 365) and you're using the OWA mobile app, you won't receive the in-app message. Your administrator should let you know that the OWA mobile apps will stop working on May 15, 2018 and that you can use Outlook for iOS or Outlook for Android instead.
Instead of using those retired Web apps, Microsoft wants mobile device users to use the "native Outlook app for iOS and Android devices." Setup instructions are linked at this page.
Microsoft gave no reason why it was ending its Outlook on the Web apps. It described the move as an effort to "streamline our mobile portfolio." Microsoft's Outlook mobile client portfolio has been confusing at best. A list of possible clients can be found here.
According to a Microsoft Tech Community explanation, the Outlook for Android and Outlook for iOS applications are based on Microsoft's acquisition of Acompli, and those reworked Acompli apps essentially became the official mobile apps going forward. At one point, the Outlook for Android and Outlook for iOS apps used Amazon Web Services (AWS) for caching, but that architecture was replaced with native Exchange Online support. That native support "means there is no mailbox data that is cached outside of Office 365," Microsoft explained in a TechNet document.
Unfortunately, it seems that the dying Outlook on the Web apps had the greater e-mail feature support. For instance, the Outlook on the Web apps support things like renaming folders, which the native apps can't do, according to this Microsoft comparison chart. The bulk of the feature support sits on the Outlook on the Web app side.
The "Microsoft OWA Mobile Apps" support document partially addressed the question of loss of features support when moving to the native clients. Here's how it characterized that change for end users:
Many features, including shared or delegate calendar access and the ability to view the Global Address List, are now included in the Outlook apps. Other features (like shared mailboxes) will be available by the end of 2018. In the meantime, the Outlook web experience is available from your browser on your mobile device.
It's not clear what Microsoft means by the "Outlook web experience." Microsoft's note perhaps suggests that there's another way to access Outlook in a browser. Meanwhile, comments in Microsoft's announcement, like "please stop deprecating functionality without providing an alternative," seem to be falling on deaf ears.
Kurt Mackie is senior news producer for 1105 Media's Converge360 group.
|
OPCFW_CODE
|
Reduce precision of "Last seen" to hours rather than minutes
The "last seen" information is on every user profile, and I would argue that in some cases it compromises user privacy and voting anonymity (though others clearly disagree). I think this is particularly an issue on smaller sites in the SE network.
Why does the "last seen" time need to be so precise? I'm fine with "last seen 2 hours ago", but how is it useful to know someone was "last seen 2 minutes ago"? It is useful sometimes to know if a user has been on the site in the past day, or perhaps the past hour, but I cannot think of a reason why anyone needs to know another user has been on the site in the past 2 minutes.
Furthermore, the last seen time is inaccurately precise. The time is not updated instantaneously, meaning that when it says "last seen 2 minutes ago" someone may have been active more or less recently. Which means that the site is displaying incorrect information some, maybe most, of the time.
I therefore propose that "last seen x minutes ago" be replaced with "last seen <1 hour ago" or similar. I feel this would provide just as much useful information, while providing better anonymity of site use and voting, and also more accurately represent user activity.
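The proposed coarsening could be sketched as a small formatting function. This is only an illustration of the idea, not Stack Exchange's actual code; the exact wording of the labels is an assumption.

```python
from datetime import timedelta

def coarse_last_seen(age: timedelta) -> str:
    """Render a "last seen" age with hour-level precision, as proposed."""
    hours = int(age.total_seconds() // 3600)
    if hours < 1:
        return "last seen <1 hour ago"
    if hours < 24:
        unit = "hour" if hours == 1 else "hours"
        return f"last seen {hours} {unit} ago"
    days = hours // 24
    unit = "day" if days == 1 else "days"
    return f"last seen {days} {unit} ago"
```

A visit 2 minutes ago and a visit 55 minutes ago would both render as "last seen <1 hour ago", which is the whole point: the display stays useful while individual visits become much harder to pin down.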
It's not that precise. That time only updates once every 15 minutes, so they could have been active for the entire past 15 minutes but the site could still say last seen 15 minutes ago. Guessing that someone voted based on the last seen time is an incredibly inaccurate guess.
I am well aware that the last seen time is not updated instantaneously, but based on my experience on small SE sites I strongly disagree that it is an "incredibly inaccurate guess".
You would have to go through all profiles to check the "last seen", then aggregate in 15 minute buckets (when do you start? At how many minutes past the quarter?) - your "suspects" are the people within a bucket. Even on a small site, this will be a number of people - meaning you can't reliably get any conclusions about any one of them.
I don't buy this. Is the site really so small that only one frequent voter is active at a time?
As far as I can tell, voting anonymity doesn't leak this way, but I would still be interested to see an explanation in the answer to this question. And no, a vague "the system takes care of that but the details are secret" wouldn't do. Anonymous votes don't look like the kind of thing where security by obscurity suffices.
Given the downvotes, I have reworked this question to be less about voting and more about the proposal
The issue on de-anonymising voters seems to be unsupported, but I am also in favour of an hour-based accuracy, not a minute-based accuracy. Updating in 15-minute blocks is a great method for putting coarseness into the data but the resulting data should be at least as coarse too.
As another regular user of the [genealogy.se] SE I agree that it seems relatively easy to guess who has downvoted on our site sometimes, and that "last seen <1 hour ago" would be likely to reduce the confidence that I have in some of my guesses, which would be a good thing.
Are down voters hunted down on that site @PolyGeo ?
@rene Not at all, in fact we don't get enough downvotes sometimes. But anything that potentially undermines voting anonymity should be reviewed. In any case, as I stated in a comment above, this post is not only about voting.
@HarryVervet I still have my down vote on this question despite your effort to clarify. I still get the impression that based on assumptions and hearsay you conclude anonymity is breached and because of that users tend to hold back on their actions. Do you have an idea of how many users hesitate in using the site for reasons of those anonymity concerns?
@rene No, I of course cannot possibly know this
@rene I've seen no evidence of any "hunting down" or payback voting on G&FH SE which I think has a very mature, robust and respectful dialog between its users. Guessing who may have voted in particular cases is not an issue, as far as I know, due to the relative anonymity of most votes.
@animuson Going back to your original comment, just looking at a user's profile and refreshing the page every few seconds, I saw the last seen time change from "just now" to "10 seconds ago" to "13 seconds ago" to "18 seconds ago", etc. I therefore don't buy your comment that it is updated only once every 15 minutes. On the contrary, the last seen time seems to be very precise.
@HarryVervet You misinterpret what I said. The timestamp for when they were last seen is only updated once every 15 minutes minimum. That's how the system works. What you're observing is merely how long ago that timestamp occurred. Of course that would change.
@animuson You're right, I'm quite confused about what "once every 15 minutes minimum" means. I'm surmising it was just luck that I hit the refresh button at exactly the time when the timestamps were updated. I just seem to be lucky in this way fairly often (it's not uncommon to see a last seen time of seconds).
I totally agree with this proposal. It could be that the "seen 2 minutes ago" is not accurate and the user was actually seen 20 minutes ago, but there is at least one user using that "seen 2 minutes ago" as evidence that a user down-voted him. Let's stop these guesses based on the supposedly exact information given by the site.
I am in favor of your proposal to make it a little harder to guess who voted, although it is almost impossible to do so now. I don't think you would go through the profiles of all users on a site (no matter how small, it would be quite a job to visit them all) just to find the one user who was online at the time and able to vote.
I think the most important argument in favor of this request is to stop users thinking they can guess who voted on them based on this single number. They can't say that any more if it just reads seen this hour.
One argument against this feature request is that it is a useful indicator whether a user who asked a question has actually seen your comment or stays with his question or not.
I think it is obviously a non-problem on large sites.
It is also a very minimal problem on the small ones - I can't imagine a site so small that really only a single user visits it in a given minute. Note, how would you even search by the "last seen" field? I think it may be an anonymity problem if the OP already has some suspicion about the possible downvoters.
Checking their "last seen" field, he only gets a stronger or weaker suspicion, possibly in the false direction.
I don't think it would be a problem. The solution against the legitimate downvotes is that you write enough good posts.
Against the illegitimate ones, well, there is no clear defense, but their possibilities are significantly limited by various precautions of the SE system. I don't think they could cause real harm, except a little psychological effect. Maybe it would be useful to make clear to users that it is their own task to not let themselves be affected by downvoting schemes.
Trying to find the downvoters by indirect means is a bad approach with a high chance of being misled. Users trying to do this cause more harm to themselves in the long term than to the suspected downvoters, and in my opinion, it doesn't really depend on the precision of the "last seen" field.
|
STACK_EXCHANGE
|
It can also communicate with any server program that is set up to receive an SMB client request. These features are still enabled by default, but if you do not have older SMB clients, such as computers running Windows Server 2003 or Windows XP, you can remove the SMB 1.0 features to increase security and potentially reduce patching. The same approach can be used for the Mac's screen sharing, which uses port 5900. However, SMB is more or less a Microsoft protocol.
In 2017, the WannaCry and Petya ransomware attacks exploited a vulnerability in SMB 1.0 to load malware on vulnerable clients and propagate it across networks. SMB Encryption should be considered for any scenario in which sensitive data needs to be protected from man-in-the-middle attacks.
Do not use any unencrypted protocols if you can't fully trust your network (see also this Q&A). The secure dialect negotiation capability that is described in the next section prevents a man-in-the-middle attack from downgrading a connection from SMB 3 to SMB 2 (which would use unencrypted access); however, it does not prevent downgrades to SMB 1, which would also result in unencrypted access. Secure dialect negotiation cannot detect or prevent downgrades from SMB 2.0 or 3.0 to SMB 1.0, so in some circumstances an administrator may want to allow unencrypted access for clients that do not support SMB 3.0 (for example, during a transition period when mixed client operating system versions are being used). SMB Encryption and the Encrypting File System (EFS) in the NTFS file system are unrelated, and SMB Encryption does not require or depend on using EFS. SMB continues to be the de facto standard network file sharing protocol in use today, but whenever Microsoft wants to disable support for a client, it is very likely that this client would stop working. For more information on potential issues with earlier non-Windows implementations of SMB, see the Microsoft Knowledge Base. One way to protect SMB traffic is to tunnel it over SSH; we assume that OpenSSH is already running on the Samba server. The tunnel command means: set up a tunnel using gate between localhost:9999 and smb-host:445, assuming that smb-host is known to and reachable from gate (it is not required for your computer to even know or connect to smb-host). Note that the window where the SSH tunnel was opened should be kept open: the tunnel lives as long as that window lives and dies when it is closed. WebDAV over SSH is very similar to HTTPS in this respect, but it may be preferable if you are going to handle private, essential files. Basically, a virtual network adapter is a software application that allows a computer to connect to a network.
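The gate tunnel described above can be sketched as an OpenSSH invocation. The snippet below only builds the argument list; the host names gate and smb-host come from the example in the text, and this is a sketch under those assumptions, not a hardened setup.

```python
def smb_tunnel_cmd(local_port: int, smb_host: str, gateway: str) -> list:
    """Build the ssh argv that forwards a local port to an SMB host
    via a gateway. Only the gateway needs to reach smb_host; the local
    machine just connects to localhost:local_port."""
    return [
        "ssh",
        "-N",                                  # no remote command, forwarding only
        "-L", f"{local_port}:{smb_host}:445",  # local_port -> smb_host:445 via gateway
        gateway,
    ]

# e.g. smb_tunnel_cmd(9999, "smb-host", "gate") yields the argv for
# "ssh -N -L 9999:smb-host:445 gate"
```

After running the resulting command you would point your SMB client at localhost:9999 instead of smb-host:445; the connection between you and gate is then encrypted by SSH.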
SMB Encryption offers an end-to-end privacy and integrity assurance between the file server and the client, regardless of the networks traversed, such as wide area network (WAN) connections that are maintained by non-Microsoft providers.
Thus, a client application can open, read, move, create and update files on the remote server. Microsoft subsequently released a patch, but experts have advised users and administrators to take the additional step of disabling SMB 1.0/CIFS on all systems. SSH is available (or can be installed) on pretty much all the main operating systems out there. If you want to enable SMB signing without encryption, you can continue to do this. SMB-over-SSH also seems to be simpler than throwing DLNA/UPnP over SSH (that setup looks almost impossible, as it sums up to solving the UDP-over-TCP problem).
You can deploy SMB Encryption with minimal effort, but it may require small additional costs for specialized hardware or software. If your CPUs are faster than your network, you might find it helpful to compress the data before the transfer (and decompress it afterwards). My experience with Samba has not been favorable so far.
Most file managers' user interfaces include extensions through which WebDAV files and folders are manipulated and presented. Unencrypted SMB traffic otherwise goes over the air in plain text. Since speed is important, you have to keep in mind: SSH is a great thing for everything connected to Unix/Linux and networks, but it is really slow compared to NFS, FTP or SMB. Most modern systems use more recent dialects of the SMB protocol. SMB is a file-sharing system; SSH is not. We're going to add a virtual adapter to our Windows computer and create an SSH tunnel over the virtual interface. In PuTTY's Tunnels options, in the "Source port" field type 10.0.0.1:139 (the IP we used in our example) and in the "Destination" field type 127.0.0.1:139.
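The PuTTY tunnel above has an OpenSSH equivalent. In this sketch the server name fileserver is a placeholder, the compress option maps to ssh's -C flag (as suggested earlier for CPU-rich, bandwidth-poor setups), and binding a low port such as 139 normally requires administrative privileges.

```python
def smb_putty_equiv(bind_ip: str, server: str, compress: bool = False) -> list:
    """OpenSSH argv equivalent of the PuTTY tunnel described above:
    bind port 139 on the virtual adapter IP and forward it to the
    server's own port 139 (127.0.0.1 as seen from the server)."""
    cmd = ["ssh", "-N"]
    if compress:
        cmd.append("-C")  # compress in transit; helps when CPU beats the link
    cmd += ["-L", f"{bind_ip}:139:127.0.0.1:139", server]
    return cmd

# e.g. smb_putty_equiv("10.0.0.1", "fileserver", compress=True) yields the
# argv for "ssh -N -C -L 10.0.0.1:139:127.0.0.1:139 fileserver"
```

With the tunnel up, Windows can map \\10.0.0.1\share as if it were a local server, while the traffic actually rides the encrypted SSH connection.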
In addition, some non-Microsoft SMB clients may not be able to access SMB 2.0 file shares or print shares (for example, printers with "scan-to-share" functionality).
An application's file-selection dialog will support entering the WebDAV URL and a local filename, along with the username and password needed to browse the WebDAV server. To get the best performance, you need to use Windows servers and clients. For more information, see The Basics of SMB Signing.
Older clients, such as computers running Windows Server 2003 or Windows XP, do not support SMB 2.0; and therefore, they will not be able to access file shares or print shares if the SMB 1.0 server is disabled.
Enabling SMB Encryption provides an opportunity to protect that information from snooping attacks.
SFTP (SSH-based) is, as the name suggests, a variant of FTP and a more secure way of using it; it is SSH-based file transfer.
Samba was intended for use with Windows systems, but nowadays it works great with Linux and Mac as well. NFS is still the fastest in plaintext, but again has a problem when combining writes with encryption. I'd expect ssh/scp to add some unnecessary overhead. Most clients and servers implement extended subsets of the standards involved.
|
OPCFW_CODE
|
Migrate existing Azure pay as you go subscriptions to Enterprise Agreement
At some point you may have to migrate an existing Azure pay as you go (PAYG) subscription to your Enterprise Agreement. Imagine your company bought a small and innovative startup which is working with Azure. However, they did not yet have an Enterprise Agreement with Microsoft and are paying their Azure bills by credit card. Not what you want in an enterprise environment.
Basically there are two options for integrating PAYG subscriptions into an existing Enterprise Agreement: either you move the Azure account with all associated subscriptions, or you move individual subscriptions.
Moving the account
You can add the account which is associated with the pay as you go subscription to the Enterprise Agreement. This will move all the subscriptions under this account and convert them to EA subscriptions.
If the existing account is moved with all its subscriptions to an EA enrollment, there is no impact on the subscriptions' service endpoints or identity endpoint, and RBAC role assignments are preserved. It's just a change on the billing side: your subscription is converted to the Microsoft Azure Enterprise offer and billing is tagged to the EA. Everything in your subscriptions keeps working as it is.
If you decide to add the account to the EA and automatically convert all associated subscriptions to EA, follow the steps below. Note that you probably need to enable Microsoft accounts.
- From the Enterprise Portal, click Manage
- Click the Account tab
- Click + Add an account
- Enter the Microsoft Account or Work or School Account associated with the existing account
- Confirm the Microsoft Account or Work or School Account associated with the existing account
- Provide a name you would like to use to identify this account in reporting
- Click Add
- You can add an additional account by selecting the + Add an Account option again, or return to the homepage by selecting the Admin button
- If you click to view the Account page, the newly added account will appear in a “Pending” status
Confirm Account Ownership
- Log into the email account associated to the Microsoft Account or Work or School Account you provided
- Open the email notification titled something like “Invitation to Activate your Account on the Microsoft Azure Service from Microsoft Volume Licensing”
- Click the Log into the Microsoft Azure Enterprise Portal link in the invitation
- Click Sign in
- Enter your Microsoft Account or Work or School Account and Password to login and confirm account ownership
Moving individual subscriptions
If you want to move only specific subscriptions from another account to an Enterprise Agreement you would have to work with Microsoft Support to complete the transfer.
Be aware that when moving a specific subscription from Account X, associated with Tenant A, to Account Y, associated with another Tenant B (which is under the EA enrollment), the existing role assignments will be lost, since in this scenario the subscription moves across tenants. However, if Accounts X and Y are both under the same Azure tenant, the RBAC role assignments won't be lost, since the subscription is moved between two accounts under the same tenant.
If you decide to move only specific subscriptions from PAYG account to an Enterprise Agreement, contact Microsoft Support.
|
OPCFW_CODE
|
Next week Firefox 35 will be in general release, and Firefox 37 will be promoted to the Developer Edition channel (aka Firefox Aurora).
HTTP/2 support will be enabled by default for the first time on a release channel in Firefox 35. Use it in good health on sites like https://twitter.com. It's awesome.
This post is about a feature that landed in Firefox 37 - the use of HTTP/2 priority dependencies and groupings. In earlier releases prioritization was done strictly through relative weightings - similar to SPDY or UNIX nice values. H2 lets us take that a step further and say that some resources are dependent on other resources in addition to using relative weights between transactions that are dependent on the same thing.
There are some simple cases where you really want a strict dependency relationship - for example two frames of the same video should be serialized on the wire rather than sharing bandwidth. More complicated relationships can be expressed through the use of grouping streams. Grouping streams are H2 streams that are never actually opened with a HEADERS frame - they exist simply to be nodes in the dependency tree that other streams depend on.
The canonical use case involves better prioritization for loading html pages that include js, css, and lots of images. When doing so over H1 the browser will actually defer sending the request for the images while the js and css load - the reason is that the transfer of the js/css blocks any rendering of the page and the high byte count of the images slows down the higher priority js/css if done in parallel. The workaround, not requesting the images at all while the js/css is loading, has some downsides - it incurs at least one extra round trip delay and it doesn't utilize the available bandwidth effectively in some cases.
The weighting mechanisms of H2 (and SPDY) can help here - and they are what is used prior to Firefox 37. Full parallelism is restored, but some unwanted bandwidth sharing still goes on.
I've implemented a scheme for H2 using 5 fixed dependency groups (known informally as leader, follower, unblocked, background, and speculative). They are created with the PRIORITY frame upon session establishment and every new stream depends on one of them.
Streams for things like js and css are dependent on the leader group and images are dependent on the follower group. The use of group nodes, rather than being dependent on the js/css directly, greatly simplifies dependency management when some streams complete and new streams of the same class are created - no reprioritization needs to be done when the group nodes are used.
This is experimental - the tree organization and its weights will evolve over time. Various types of resource loads will still have to be better classified into the right groups within Firefox and that too will evolve over time. There is an obvious implication for prioritization of tabs as well, and that will also follow over time. Nonetheless it's a start - and I'm excited about it.
One last note to H2 server implementors, if you should be reading this: there is a very strong implication here that you need to pay attention to the dependency information and not simply implement the individual resource weightings. Given the tree described above, think about what would happen if there was a stream dependent on leaders with a weight of 1 and a stream dependent on speculative with a weight of 255. Because the entire leader group exists at the same level of the tree as background (and by inclusion speculative), the leader descendents should dominate the bandwidth allocation due to the relative weights of those group nodes - but looking only at the weights of the individual streams gives the incorrect conclusion.
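That leader-vs-speculative example can be made concrete with a toy allocation model. The group names follow the post, but the group weights below are purely illustrative, not Firefox's actual values; real H2 schedulers also only redistribute idle shares, which this sketch ignores.

```python
# Toy model: split bandwidth down an H2-style dependency tree.
# A node is (name, weight, children); leaves collect their final share.

def allocate(node, bandwidth, shares):
    """Divide bandwidth among a node's children proportionally to their
    weights, recursing until each leaf stream has its share."""
    name, _weight, children = node
    if not children:
        shares[name] = bandwidth
        return
    total = sum(child[1] for child in children)
    for child in children:
        allocate(child, bandwidth * child[1] / total, shares)

tree = ("root", 1, [
    ("leaders", 200, [
        ("css-stream", 1, []),         # tiny individual weight
    ]),
    ("background", 1, [
        ("speculative", 1, [
            ("spec-stream", 255, []),  # huge individual weight
        ]),
    ]),
])

shares = {}
allocate(tree, 1.0, shares)
# css-stream ends up with roughly 200/201 of the bandwidth: the leaders
# group wins at the top of the tree, so its descendants dominate no
# matter how large the per-stream weight under speculative is.
```

A server that ranked streams by their individual weights alone would serve spec-stream first, which is exactly the incorrect conclusion the paragraph above warns about.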
|
OPCFW_CODE
|