Printing Error Error: invalidfont Ricoh Printer
I have a Ricoh Aficio MP 2000 printer and a Mac mini running macOS Big Sur. I can connect to the printer and print graphics, web pages, etc., but I have problems printing PDFs with embedded fonts: I always get Error: invalidfont.
I tried the following drivers.
Generic Postscript Printer
Driver provided by Ricoh
Ricoh Printer Drivers v3.0 for macOS
What else could I do?
From what I can find, the Ricoh MP 2000 is roughly 15 years old, and PostScript support seems to have been an option rather than standard.
However, invalidfont is certainly a PostScript error. It frequently occurs in Level 3 PostScript submitted to printers that only handle Level 2.
You could see if it will print a PDF from Adobe Reader or Acrobat using the "Print as Image" print option in those apps. This will RIP (rasterize) the page data on the computer instead of on the printer.
I don't know whether those Ricoh print drivers are capable of switching to PCL. You may need to check all the available options in the print dialog, and see if there's anything relating to fonts, or how the data is sent.
PostScript seems to be the default for this device; Ricoh only supplies the PPD as a driver.
@SebastianSemmler You could try adjusting the PostScript Level in the PPD down to Level 2, if it's not already.
I got it working. You were right, PostScript is not the default option for the device, the machine has to have a PostScript card installed, which it has not. By searching the printer via the IP and choosing the LPD protocol, I could choose a generic PCL driver and everything worked.
From the Finder, select Go > Go to Folder...
In the dialog box enter: /Library/Printers/PPDs/Contents/Resources
Open the PPD file for your printer in e.g. TextEdit and change the TTRasterizer setting to None, i.e. change
*TTRasterizer: Type42 to *TTRasterizer: None
Save the PPD file. If you're using the generic PPD file, just add the line *TTRasterizer: None after the LanguageLevel entry.
Install your printer using this new PPD.
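The edit above can also be scripted. A minimal sketch (file names here are placeholders; point the sed at your real PPD in /Library/Printers/PPDs/Contents/Resources, or at the in-use copy in /etc/cups/ppd), shown on a sample snippet so the result can be checked before reinstalling the queue:

```shell
# Create a two-line sample PPD snippet, then flip TTRasterizer to None.
printf '*LanguageLevel: "3"\n*TTRasterizer: Type42\n' > sample.ppd
sed 's/^\*TTRasterizer:.*/*TTRasterizer: None/' sample.ppd > patched.ppd
grep '^\*TTRasterizer' patched.ppd   # prints: *TTRasterizer: None
```

Run the same sed against your actual PPD file, then reinstall the printer with it (or restart CUPS if you edited the in-use copy).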
You could edit the PPD 'in use' at /etc/cups/ppd, without having to reinstall the printer.
It does not work. I also tried the options TrueImage, Accept68K, Type42.
My school has several MP 2000 in use and so far disabling internal rastering of included fonts always solved printing problems. Can you make one of your PDFs available for further inspection?
It is every file where the font is not installed on the machine. Sorry, I can't provide a file. I edited the file at /etc/cups/ppd to be certain I changed the right config. @slartibartfast could you provide your PPD file for the MP 2000?
| STACK_EXCHANGE |
Need help to update values in ACF via WP API
I'm trying to update an item, specifically a row created with the "rows" (repeater) feature in the WordPress ACF plugin, via PHP using the WordPress REST API.
The value I'm looking to change is "Monday".
Here's the json from the endpoint:
{
    "id": 765,
    "acf": {
        "contact": "2150",
        "sched": [
            {
                "field_611218a978d03": "Monday",
                "field_6112197d78d04": "00:00:00",
                "field_6112199f78d05": "17:00:00"
            }
        ]
    }
}
I can change the value of "2150" to "2151" with the following:
'body' => array(
'acf' => array (
'contact' => "2151",
),
)
But I can't change the value of "Monday". I've already tried the following:
'body' => array(
    'acf' => array(
        'sched' => array( array( "field_611218a978d03" => "Tuesday" ) )
    )
)
As well as:
'body' => array(
    'acf' => array(
        'sched' => array(
            0 => array( "field_611218a978d03" => "Tuesday" )
        )
    )
)
I've been searching and there seems to be no answer pertaining to this specific problem. I've been stumped for days now. Any help would sincerely be appreciated!
Edit:
To answer CBroe this is the code I'm working with:
$api_response = wp_remote_post( 'https://domain.com/wp-json/wp/v2/person/765', array(
    'headers' => array( 'Authorization' => 'Basic ' . base64_encode( 'user:pass' ) ),
    'body' => array(
        'acf' => array(
            'contact' => '2150',
            'sched' => array( array( "field_611218a978d03" => "Tuesday" ) ),
        ),
    ),
) );
$body = json_decode( $api_response['body'],true);
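One thing worth checking (an assumption, since the thread never confirms it): wp_remote_post form-encodes a nested 'body' array, and ACF's REST endpoint replaces a repeater's value wholesale rather than merging individual subfields. It is therefore often more reliable to send the full sched array, with every subfield of each row present, as JSON via 'body' => wp_json_encode( $data ) together with a 'Content-Type' => 'application/json' header. The encoded body would look like:

```json
{
    "acf": {
        "sched": [
            {
                "field_611218a978d03": "Tuesday",
                "field_6112197d78d04": "00:00:00",
                "field_6112199f78d05": "17:00:00"
            }
        ]
    }
}
```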
What endpoint? What actual functions are you calling?
@CBroe I've placed the whole code at the bottom, please be patient with me as I try to answer your questions
| STACK_EXCHANGE |
git: Make a clean history after merging multiple times from a feature branch that is subsequently rebased
Suppose I am working on some feature in a branch B. My feature depends on another feature that my colleague works on in a branch A.
I work closely with my colleague, so during development he will often update A with new stuff that I need in B. The way I get his changes in is to just merge with his branch. So what I do in B is something like the following:
git checkout master
git checkout -b B
...
git commit -m "Some work"
...
git commit -m "More work"
...
git fetch origin
git merge origin/A
...
git commit -m "Even more work"
...
git fetch origin
git merge origin/A
...
git commit -m "And even more work"
...
git fetch origin
git merge origin/A
...
This works very well. The problem is that we want to get this into master and to have a nice clean history. In particular we want to:
Clean up the history of A using some kind of rebase
Clean up the history of B using some kind of rebase
Merge first A and then B into master without all the extra merge commits above.
The only way I can come up with to do this is to:
Rebase A into master in the usual way
Cherry pick all the non-merge commits from B onto master.
One problem with this is that I manually have to cherry pick the non-merge commits.
Is there a better way?
Well instead of manually cherry-picking, you can automatically cherry-pick, i.e. rebase:
git rebase A B
git will automatically:
find the common ancestor (merge base) of A and B
go over all commits in B to be applied on top of A
figure out that some commits are already in A and do not need to be applied again.
However, you potentially might run into a lot of conflicts along the way.
I suggest that, if a clean history at the moment of merging is important to you, you adjust your workflow to use git rebase origin/A instead of git merge origin/A, which means your history will remain clean. You may also want to read up on git rebase workflows a little.
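A toy reproduction (all names and paths invented) of what `git rebase A B` does to a branch that has merged A along the way: the merge commits are dropped and only B's own commits are replayed on top of A.

```shell
set -e
rm -rf /tmp/rebase-demo && git init -q -b master /tmp/rebase-demo && cd /tmp/rebase-demo
git config user.email you@example.com && git config user.name you
echo base > base.txt && git add . && git commit -qm "base"
git checkout -qb A
echo a1 > a1.txt && git add . && git commit -qm "A: colleague's feature"
git checkout -q master && git checkout -qb B
echo b1 > b1.txt && git add . && git commit -qm "B: some work"
git merge -q -m "merge A into B" A        # the kind of merge we want to drop
echo b2 > b2.txt && git add . && git commit -qm "B: more work"
git rebase -q A B                         # replay only B's own commits onto A
git log --oneline                         # linear history, no merge commit
```

After the rebase, B contains four commits (base, A's feature, and B's two work commits) and zero merge commits.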
A common technique used in numpy development is to squash the feature branch into a single commit. This is probably not as effective as @Cimbali's answer for a smaller project, but it works really well for something the size of numpy, where the granularity of a single PR is very small with respect to the entire project. One advantage of doing the cleanup with a rebase is that you can move everything into a guaranteed fast-forwardable state well before doing any merging.
A standard command would be something like
git rebase -i master
Then select fixup for all commits except the first and let it roll.
Interactive rebase with the intertwined branch A / branch B history will be messy, but luckily the end result can be achieved directly with git merge --squash
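A minimal sketch of that squash route (branch names invented): all of B's intertwined history collapses into a single commit on master, without any interactive rebase.

```shell
set -e
rm -rf /tmp/squash-demo && git init -q -b master /tmp/squash-demo && cd /tmp/squash-demo
git config user.email you@example.com && git config user.name you
echo base > f.txt && git add . && git commit -qm "base"
git checkout -qb B
echo one >> f.txt && git commit -qam "B: work 1"
echo two >> f.txt && git commit -qam "B: work 2"
git checkout -q master
git merge --squash B > /dev/null          # stages B's changes, no commit yet
git commit -qm "Feature B (squashed)"
git log --oneline                         # just "base" plus the squashed commit
```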
| STACK_EXCHANGE |
How do I fix an error in a header file typedef struct (expected a ";")?
I downloaded this project from GitHub (https://github.com/fabvalaaah/rlec), and in bitmap.h there is an error on the typedef struct (expected a ";"). In the rest of the project there is an error on _bitmap (identifier is undefined). I am using Visual Studio 2019. How do I fix it?
#ifndef BITMAP_H
#define BITMAP_H
#include <stdlib.h>
#include <string.h>
#include <stdbool.h>
#include <stdint.h>
#include "common.h"
const uint8_t lineFeed;
const uint8_t imageEnd;
typedef struct __attribute__((__packed__)) _bitmap{ //Error is here on _bitmap
uint8_t magicNumber[2];
uint32_t size;
uint8_t reserved[4];
uint32_t startOffset;
uint32_t headerSize;
uint32_t width;
uint32_t height;
uint16_t planes;
uint16_t depth;
uint32_t compression;
uint32_t imageSize;
uint32_t xPPM;
uint32_t yPPM;
uint32_t nUsedColors;
uint32_t nImportantColors;
}
_bitmap;
void printHeader(_bitmap* image);
int RLECompression(FILE* ptrIn, FILE* ptrOut);
int RLEDecompression(FILE* ptrIn, FILE* ptrOut);
#endif /* BITMAP_H */
Please [edit] your question so you can copy-paste (as text) the full and complete error you get into it. Also please add a comment in the shown code where you get the error. Also please tell us what compiler you're using, because the shown code includes some non-standard and non-portable extensions that might not work on your compiler.
And wouldn't it be better to try and create a [mcve] which you can use to possibly report the problem to the project author instead?
Which compiler? The attribute packed is a gcc only non-standard extension, which other compilers won't recognize. Also, you should never declare variables in header files, for multiple reasons.
I edited my question. I am using Visual studio 2019 and the error is in typedef struct.
There's your error. As mentioned, the header file is using a non-standard and non-portable extension (the __attribute__ thing), which is not available in all compilers. Most notably, __attribute__ is not supported by Visual Studio; the Visual C++ equivalent is the pack pragma. You need to add some conditional compilation to choose which to use (and perhaps file a feature request with the project authors to add such).
The code is using gcc extensions. So the easiest way to compile it would be to just use the gcc compiler.
But what you can do is changing
typedef struct __attribute__((__packed__)) _bitmap{
to
typedef struct _bitmap{
It may work, or it may break the code, depending on what happens in other parts of the project. My guess is that it will work perfectly fine, but I cannot guarantee that the author of the code has not done anything "clever".
@viribus If your problem has been solved, feel free to accept an answer.
@viribus Note that pack may be important to get the exact fields layout.
typedef struct __attribute__((__packed__)) _bitmap
This is the way to define a struct without padding in GCC.
To prevent padding in MSVC, use #pragma pack:
#pragma pack(push,1)
typedef struct _bitmap{
uint8_t magicNumber[2];
uint32_t size;
uint8_t reserved[4];
uint32_t startOffset;
uint32_t headerSize;
uint32_t width;
uint32_t height;
uint16_t planes;
uint16_t depth;
uint32_t compression;
uint32_t imageSize;
uint32_t xPPM;
uint32_t yPPM;
uint32_t nUsedColors;
uint32_t nImportantColors;
} _bitmap;
#pragma pack(pop)
As you can see here https://godbolt.org/z/Gr-Rsw it compiles just fine.
Note that disabling padding is important to get the exact structure you want (first the 2 "magic number" bytes, then 4 bytes of size, etc.). In such a case you can read the first sizeof(_bitmap) bytes from the file and expect the layout to be exactly as defined in the struct.
Without the #pragma pack, the size of such a struct may vary across architectures. For example, here the size is 56 bytes, because the compiler adds padding after magicNumber[2] to align the next field to 4 bytes, while with the pack attribute it is 54 bytes (example).
When I see __attribute__((__packed__)) or #pragma pack(push,1) I think, "Bad code. Either too lazy or not given enough time to do proper, reliable, portable serialization. What else is going to be done sloppily like that?"
@AndrewHenle Totally disagree with you. Packing is usually required to comply with some standard protocol. Assume that the structure _bitmap is defined not by you but by the definition of the bitmap file header or a hardware block. Leaving the structure unpacked may cause padding on some architectures when, for example, compiling for best performance (-O3): the compiler may locate each field at an offset aligned to 64 bits.
"Packing is usually required to comply with some standard protocol." A standard protocol does not depend on platform-specific implementations such as packed structures. Using packed structures to serialize to or deserialize from a standard protocol is lazy and makes me wonder what other shortcuts were taken. If an implementation provides a packed structure, it's not portable and it's not standard.
@AndrewHenle On the contrary. Once you have a defined layout, either you copy field by field manually to fill in the fields of your structure (and such conversion has to be done in both directions, on receiving and on sending the data), or you define the structure without padding and then you can just memcpy it in either direction. I wouldn't call it lazy. Sometimes performance matters.
"You define the structure without padding and then you can just memcpy it in any direction" - and you just missed having to deal with endianness, which every standard protocol will address. You just made my point about bad code and making me wonder what else got missed.
@AndrewHenle So you are saying that working with "flat" data buffers of bytes would be better? Have you tried to maintain such a thing? Each change, field addition or removal will require a complicated code change in order not to break anything. Structs were introduced for exactly this.
So it's too hard to do things portably and reliably? Every single argument you're making only reinforces my belief that using packed structures is the lazy, non-portable solution. It's not guaranteed to work, even on x86. If you can't update serialization/deserialization code to add something like a network-byte-order uint32_t to a protocol, you can't safely write C code anyway.
@AndrewHenle The link you provided is irrelevant to packing; it is about breaking alignment. Packing is needed because sometimes (for example on a 64-bit architecture) even a struct with two uint32_t fields may be padded to force each field to be aligned to 64 bits for performance reasons.
"The link you provided is irrelevant to packing." Again, you miss the point: it is about breaking alignment. That's exactly what packing does - it breaks alignment, and causes failures. The fact that you apparently don't know this just reinforces my opinion of the use of packed structures - it's a lazy way bad coders do serialization/deserialization, and its use immediately makes me wonder what else is wrong in the code.
@AndrewHenle Anything that is done without thinking is bad coding. I never said to force packing everywhere. Each structure field has a minimal alignment requirement (based on its type). As long as those requirements are met, nothing bad will happen. For example, a struct with { uint32_t, uint16_t, uint16_t } will satisfy the alignment requirements even after packing, but packing, in this case, will ensure that the memory footprint (layout) of such a struct is exactly as defined.
| STACK_EXCHANGE |
Callback function of vscode.workspace.onDidCloseTextDocument is working at development but not after packaging to vsix
In my extension, I'm opening a local file using vscode.workspace.openTextDocument and then vscode.window.showTextDocument and vscode.workspace.onDidCloseTextDocument to close the file.
This seems to work fine if I run the extension from source.
Now I have created the vsix file using vsce package. Here comes the tricky thing:
Both vscode.workspace.onDidOpenTextDocument and vscode.workspace.onDidSaveTextDocument callbacks are called as expected. However, the
vscode.workspace.onDidCloseTextDocument callback is not called when I close the file in the UI. (I am facing this issue when the extension is installed from the vsix.)
Am I missing anything?
Steps to Reproduce:
1) Open the file using vscode.workspace.openTextDocument and then vscode.window.showTextDocument.
2) Listen for events in vscode.workspace.onDidCloseTextDocument.
3) Close the file in the editor.
4) Observe that the vscode.workspace.onDidCloseTextDocument callback is not called.
The code is in this git repo: https://github.com/akhilravuri1/hellovscode. Please clone it and create the vsix package by running vsce package. Install the extension from the vsix file and run the Hello World command; it opens a file, but when you close it, it just closes, whereas I expect a YES/NO selection at the command palette.
Thanks in Advance.
Check the dev tools console, there might be a stacktrace / error there.
Does it also happen when you copy the extension development directory to the VSC extension directory of the current user?
@rioV8 sorry to tag you. I have copied the code to C:\Users\rakhil\.vscode\extensions and tested it; it was working fine, but when I package it, it doesn't work.
@Gama11 sorry to tag you. In the log, the close function is not called at all. Is there something that I missed? If it is related to the workspace, then how is it opening the file but not detecting the close? Any ideas?
Use vsce ls to see which files are packaged and which are missing compared to the full directory. You can also use 7-zip to open the vsix file.
How can you possibly get a vsix file? I get 2 major ERRORs I have to fix before it is willing to create a vsix, and it reports a big WARNING.
When I put the vscode.workspace.onDidCloseTextDocument() listener in context.subscriptions.push(), it started detecting the file close, but now I am not able to retrieve the data in the file. The logs are strange: it opens the normal file, but while closing, the file name is .txt.git:
THIS IS PATH_FILE c:\Users\rakhil\VSCode_DevOps\hellovscode\ddl1.txt
close call c:\Users\rakhil\VSCode_DevOps\hellovscode\ddl1.txt.git
THIS IS VALUE (EMPTY)
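The context.subscriptions fix follows VS Code's Disposable pattern: event registrations return a Disposable, and anything pushed into context.subscriptions is kept for the extension's lifetime and disposed on deactivation. A tiny mock of those mechanics (invented for illustration; this is not the real vscode API):

```javascript
// Minimal stand-in for vscode's event/Disposable pattern.
class Emitter {
  constructor() { this.listeners = new Set(); }
  event(cb) {                              // subscribe; returns a Disposable
    this.listeners.add(cb);
    return { dispose: () => this.listeners.delete(cb) };
  }
  fire(x) { this.listeners.forEach(cb => cb(x)); }
}

const onDidCloseTextDocument = new Emitter();
const subscriptions = [];                  // stands in for context.subscriptions
const seen = [];

// Keep the Disposable alive for the extension's lifetime:
subscriptions.push(onDidCloseTextDocument.event(doc => seen.push(doc)));

onDidCloseTextDocument.fire("ddl1.txt");
console.log(seen);                         // [ 'ddl1.txt' ]

// On deactivate, VS Code disposes everything in context.subscriptions:
subscriptions.forEach(d => d.dispose());
onDidCloseTextDocument.fire("other.txt");  // no listeners left
console.log(seen);                         // still [ 'ddl1.txt' ]
```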
@rioV8 can you paste the warnings that you are getting.
I tried your extension.
I get the OnClose-event when the Welcome Screen text editor closes and ddl.txt is loaded.
The docs say you should use the vscode.window on-events, but those do not let you monitor a document close.
Very strange that the OnClose-event is fired on a language change (this happens). You can't tell whether it was a language change or a document close.
When I open the ddl.txt with File > Open and close it I get the OnClose-event.
I get the OnClose-event on all files opened with File > Open.
It looks like files opened with vscode.workspace.openTextDocument and vscode.window.showTextDocument do not fire an OnClose-event.
You should file a bug because these calls generate a normal TextDocument.
Thanks, @rioV8 I have raised an issue https://github.com/microsoft/vscode/issues/79110 but no response. Can you please make a comment that you are also facing the same issue? so that they might look into it.
@akhil Wait a day and look for a response.
@akhil The bug report is wrong. It is not even working during development for me.
It is not working for me either. Let me check it and update you.
| STACK_EXCHANGE |
Become a better investor
Incentive Stock Options, or ISOs, can feel pretty complex. This video aims to give a visual breakdown of how ISOs work and how they are taxed in a couple of scenarios.
Learn more about Java Wealth Planning:
To understand ISOs, there are three main events: first they are granted to you, then you exercise them, and finally you sell them.
So the first important thing: your ISOs are granted to you on a certain date, and two pieces of information from that grant are really important.
Once that happens, the next thing to do is really just stick around at your job, because most of the time ISOs come with something called a vesting schedule. It says you can't actually exercise or sell them until you've stuck around for a period of time. Once they have vested, you're able to actually act on these ISOs.
So how long do you have to wait in order to get that preferential tax treatment?
Well, it's all based on these three dates (grant, exercise, and sell dates).
So in this scenario, let's say we've done this. Let's also say we've lucked out, and, in that period of time, it's actually gone up from $20 to $25 per share. So what happens is that the entire amount from the $10 exercise price to the $25 fair market value of that stock, all of that gain, gets taxed as long-term capital gains.
Seeing this, you can see the tax benefit of doing this - where you exercise, and then you wait more than a year before you actually sell. As long as you satisfy these two time periods, then instead of this being taxed at, for example, 32%, you're taxed at 15%. So you can see that there can be a pretty big advantage to waiting that period.
So this all sounds good, right? Well, there is one catch, and it's called AMT (the alternative minimum tax).
Where AMT comes into play is in the scenario where you exercise, and you have a gain. If you don't sell it within that same calendar year, then this gain is considered a preference item for AMT. So there's a possibility that you actually have to owe a little, and you have to pay taxes on this based on the alternative minimum tax formula.
So, the bad part about that is you're actually paying an extra tax on something where you actually haven't sold it, and you haven't realized any of the money for it. You would have to pay that out of pocket, but the nice thing about it is that you would pay that in this tax year. Then in future tax years, you're generally able to recoup that through AMT credits.
Let's assume that we have passed the vesting period and now we have decisions to make. So the next part is the decision to exercise these options. What exercising means is that we are going to pay this price to turn this option into an actual share of stock.
So in our example, in order to exercise all of these shares, you would be putting up $10,000 in order to own 1,000 shares of this company's stock. In this case, we're saying that by the time we get to this point, the stock is actually worth $20 per share - it's a good deal because you pay $10,000 to buy something that is actually worth $20,000.
The first option after you exercise is to immediately sell it. The nice thing about that is that you lock in the amount of money that you've made. So from a tax perspective, this $10,000 that you gained gets taxed as income, so it gets taxed at your tax rate.
On the other hand, another option that you can do with ISOs that helps from a tax perspective is that you exercise it here, but then you actually wait, and you don't sell it yet.
Instead, if you exercise and then wait a period of time before you sell, there's a possibility that instead of this getting taxed as ordinary income (at your ordinary income tax rate), it gets taxed as long-term capital gains.
That could be a difference between being taxed at 32% and instead taxed at 15%, so you can see how there could be a pretty large tax benefit for you waiting, which sounds pretty nice.
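The 32% vs. 15% comparison can be made concrete with the numbers used throughout this example (illustrative arithmetic only, not tax advice; the rates are just the brackets assumed above):

```python
# Numbers from the example: 1,000 shares, $10 exercise price, $20 fair
# market value at exercise, $25 sale price after holding more than a year.
shares = 1_000
exercise_price = 10.00
fmv_at_exercise = 20.00
sale_price = 25.00

cost_to_exercise = shares * exercise_price                      # $10,000
gain_if_sold_now = shares * (fmv_at_exercise - exercise_price)  # $10,000
gain_if_held = shares * (sale_price - exercise_price)           # $15,000

tax_sell_now = gain_if_sold_now * 0.32   # taxed as ordinary income
tax_held = gain_if_held * 0.15           # qualifying disposition: LTCG

print(tax_sell_now)  # 3200.0
print(tax_held)      # 2250.0
```

Even though the held position has a larger gain ($15,000 vs. $10,000), it owes less tax than the immediate sale under these assumed rates.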
| OPCFW_CODE |
To isolate algorithm performance in the AI Challenge, the competition committee has restricted the robot models since ICRA 2019 (see the rule book linked below for details). Now in Team Ausdroid, two sets of standard robotic platforms are available in the Doug Mcdonell Building. Both robots were modified to carry the camera, Lidar and a computing device (J120 or Manifold 2). Apart from regular maintenance, not much fancy structural work needs to be developed, but there are some important development tasks in embedded programming. Some potential projects in the Hardware group besides routine maintenance are listed below. (NOTE: Considering the current COVID-19 situation, the robotic devices are no longer accessible until further notice. The projects available at this stage are marked with *.)
In the past two years, three different types of mini computing devices, all sharing the same core (TX2), were installed in the robots. This year, with the generous support from CIS, the computing devices for both robots were upgraded to the NVIDIA Jetson AGX Xavier, which is stronger in computing but bigger in dimensions. This task requires:
- Re-organize the internal space of the robot, including re-wiring.
- Design the mounting part for Xavier and install it inside the robot firmly.*
- Install other stuff inside the robot (if needed).
- Design a cooling system to guarantee the performance of the computing device.
Camera and Lidar Installation
As introduced above, the camera and Lidar are already well installed on the robot. However, there are still some minor points that can be improved. For example,
- Convenience of disassembly and assembly.
- Convenience of adjusting camera configuration (aperture and focal length).
The hardware controllers embedded in the RoboMaster AI robot are two RoboMaster Development Board Type A (STM32) boards. The source code for the embedded system is open source on GitHub. Recently, a patrol function (the gimbal rotates slowly to search for enemies while the chassis works normally) has been added; it can be called by switching the left toggle on the remote controller. The further tasks are:
- Develop the dodge function (the chassis rotates quickly to avoid attacks while the gimbal keeps stable to attack the enemy).*
- Develop communication protocols for both functions to enable automatic control.*
If interested, what's next?
The following documents might be useful for you to explore more about the robotic platform. For more information, please contact Murphy (firstname.lastname@example.org).
- AI Challenge Rule Book – EN version, CN version
- AI Robot User Manual – EN version, CN version
- AI Robot 3D Model – STEP file, SolidWorks file (SW16 or higher version)
- NVIDIA Jetson AGX Xavier – User Guide and Data Sheet
- SLAMTEC A3 – User Manual
- RoboMaster Development Board Type A – User Guide, PCB Layout, and Schematic
- Referee System User Manual and Remote Controller User Manual (CN)
- Motors User Manual (Google Drive Link)
| OPCFW_CODE |
GitHub probes worker's claims of hostile, sexist office culture
Programmer Julie Horvath quits as unnamed co-founder, spurned engineer put on leave
GitHub is investigating allegations of sexism and inappropriate behaviour towards its female employees after one code jockey quit and went public on Twitter.
Software engineer Julie Horvath left the VC-funded site last week, telling world+dog of an unpleasant culture at work.
Horvath used her Twitter profile to allege she had been “harassed by the leadership” of GitHub for two years.
In one day, all of the work I've done at that company to be a better place for women to work has come undone.— Julie Ann Horvath (@nrrrdcore) March 15, 2014
The programmer spoke to TechCrunch about, among other things, a strange series of run-ins with an unnamed co-founder of GitHub and his wife.
She also claimed to have experienced harassment from a male colleague whom she alleges ripped her code from projects after she turned down his romantic advances.
But, says Horvath, the act that made her leave was her female colleagues hula-hooping in the office to music while the men of GitHub reportedly ogled them “like something out of a strip club”.
GitHub chief executive and co-founder Chris Wanstrath said a “full investigation” had been launched.
Also, the “relevant founder” and the GitHub engineer whom Horvath accuses of harassing her have been put on leave.
GitHub is a six-year-old, venture-funded startup with 10 million code repositories. The company landed $100m in Series A funding from Andreessen Horowitz.
Horvath signed out after two years with GitHub. After resigning, she told an audience on Twitter that her only regret was not leaving or being fired sooner.
“What I endured as an employee of GitHub was unacceptable and went unnoticed by most,” she said.
“Don’t stand for aggressive behavior that’s disguised as ‘professional feedback’ and demand that harassment isn’t tolerated,” she told others on Twitter.
Horvath told TechCrunch that a person who she described as a "founder" of GitHub, but did not name, had invited her to drinks with his wife and that the wife subsequently made a range of claims: that she informed her husband’s decisions, was responsible for GitHub hires, and operated a network of spies through the company. Horvath claimed the woman warned her against leaving the company and writing anything negative about the startup.
The programmer took the matter to HR, with the result that the co-founder accused Horvath of threatening his wife and, according to Horvath, "chastised" her and "called her a liar".
Horvath claimed the wife then began showing up at the workplace and shadowing her around the company.
In yesterday's response, Wanstrath denied the unnamed founder’s wife had ever had hiring or firing power at the company and said she will no longer be permitted in the office.
Horvath also claimed to TechCrunch that a spurned colleague – another software engineer – had begun ripping her code contributions from projects after she turned down his advances.
The engineer claimed the "final straw" was when a pair of female fellow workers were hula hooping to music at the office and their male colleagues lined up "on one bench facing the hoopers and gawk[ed] at them". ®
| OPCFW_CODE |
The first in an occasional series about product design heuristics. The second part is about what "social" should really mean.
I generally work with startups, which means I'm working with companies that are still trying to figure out what they're building. I'm not a product manager, but over time I've assembled some heuristics that help figure out if a product is on track.
When you're building a user-driven product or adding a feature, the user must benefit and the company must benefit.
This sounds obvious but it's surprising how many companies put their own wishes above their customers'. It's an approach that can pay off in the short term, but at best you're growing an indifferent, surly customer base. At worst, you're driving people away. Human beings have very little patience. If you're not putting them first, they'll go somewhere else.
It's a heuristic – not a cast-iron rule – so it's not a disaster if you take a different path. But you are swimming against the tide, and you're going to have an uphill battle. You're going to have to explain to your users why they should do what you want, or create something so compelling they'll jump through your hoops. Equally, if users benefit but the business doesn't, make sure you have a plausible plan for the long run. Your business isn't going to implode, but it might limp along, struggle to attract users, or fail to evoke the passionate response you want.
A startup is building a Foursquare-like service, but instead of checking in to places users will write a short review. The startup thinks they'll gather deeper knowledge about the places you visit, and that they can monetise that database. But they're missing a key step: why will a user write a review? What does the user get out of it?
Writing a review is a lot of effort. Even if it's just a sentence or two, the user still has to figure out:
- What's good about this place?
- What's bad about it?
- Am I broadly for or against?
- How am I going to express that in words?
And don't forget the user's out in the world, probably with friends. Will they really ignore the people with them and tap out a review on their phone? This is starting to sound like a rather anti-social product. Users won't complain about it; they just won't use it. Check-ins are already an unnatural behaviour – something a user's persuaded into trying, rather than demanding – and the mandatory review step makes that even harder.
Carrot and stick
People use your features for two broad reasons:
- Good things happen if they do.
- Bad things happen if they don't.
The first motivation is infinitely preferable: the carrot is better than the stick. If you're lucky you can sometimes force things on your user, but everything flows more smoothly if the users want it themselves.
People shop on Amazon because they want cheap products conveniently delivered to them. They post statuses on Facebook so their friends click 'Like'. They sign up to Groupon to get big discounts in their inbox. Users actively want these things: they would complain if you took them away. But there's a benefit to the companies too: profit for Amazon, and engaged users for Facebook & Groupon.
The second option puts your company's needs first. It's a mild form of blackmail. You're putting an obstacle in front of your user, and hoping that they want your offering enough to put up with your bullshit3. You can spot these by looking for double obstacles: something you've added to make the first obstacle work4. YouTube's first obstacle was "Watch this advert before you see a video", but nobody wants to watch an ad. So they added a second obstacle that makes you wait 5 seconds before you can skip it. Groupon really wants to email you every day5, so as soon as you land on a Groupon page they show you an undismissable box demanding you sign up.
Nobody on the internet thinks to themselves "I really wish this web page would demand my email address the instant I visit it." Groupon's betting that the people it turns off – the people who say "Stuff this" and close the tab – are worth less to them than the email addresses from casual visitors who want to see the offer. You could argue that users benefit from this too, as they get offers in their inbox every day, but it's an arm-twisty way of getting users. The user didn't get the chance to see some Groupon deals and decide to sign up because they liked the look of them: they signed up because that's the only way to see the offer in the first place.
Streaming music is another example. Companies like Rdio, Pandora, and Last.fm generally limit how many times per hour their streaming radio products let you skip a track. It's because of licensing laws, and the costs of licensing music: labels charge less for "radio" plays than "on-demand" plays, and it's easier to convince the labels that you're in the former category if you have limitations. Users hate it, but the businesses think it's the only way they can survive6.
Magical sticks that turn into carrots
There are occasional circumstances where you obstruct your user for a good reason: for the good of the community. A real life example is airport security: nobody wants to have to queue for ages and get searched, but most people don't want bombs on planes. So we mildly inconvenience everybody so society avoids hostage situations. A tech example is Dattch, a lesbian dating app. There's a bunch of people on the internet who love to spam & harass lesbians. So Dattch insists you sign in with Facebook, and every profile is screened by a human to ensure it's real. It's not that they think their users want to wait for several hours as soon as they sign up – it's that it's better for the rest of the community if the company has a chance to screen out the weirdos.
Who's getting it right?
It's hard to give examples of companies that get it right, because it looks so damn obvious. Consider any online shop, pretty much: people want to buy things, and the company wants to sell things. As long as the customers are happy with the product and the business is making a profit, everybody's happy. Or consider Dropbox: users benefit from having their files available everywhere, and Dropbox benefits from power users who buy premium accounts (and from being the de facto way of syncing files across devices).
Marketplace leaders are good examples, too – the eBays & AirBNBs of the world. Their users are both buyers & sellers: buyers win by having easier access to more products/accommodation, and the sellers can reach a bigger audience or even sell something they couldn't before. The business wins by taking a commission.
And despite the way they haemorrhage money, music streaming services would argue that they benefit from the data generated when users listen. They can use that data to improve their personalisation services, and also sell aggregate information back to the music labels. It's still a cut-throat business, which demonstrates that even if you think you've ticked both boxes you're not guaranteed success.
When creating a product or adding a new feature, make sure that both the user and your company will benefit. Users are fickle; they won't do things solely because you want them to. There's got to be something in it for them. And likewise, you won't last long as a business if you give the users everything: there's got to be something in it for the company too. It might be a long-term payoff. It might not be direct. It's OK if it's not always there, but have a clear reason in your mind to forego it.
I enjoyed this mixed metaphor so I left it in. ↩
"Engaged users" means "People use your product", which means "We can sell things to them." Groupon sells to people directly. Facebook sells advertising space, which is an opportunity for someone else to sell to their users. ↩
Another way of spotting this is asking, "Did this request come from the advertising department?" ↩
If they email you every day, they can sell to you every day. If they sell to you every day, you're more likely to buy something. ↩
Spotify is taking a different approach: they're trying to get a huge audience before they run out of money. They have higher licensing costs, but if they can get enough users then economies of scale kick in and they'll be profitable. ↩
After my recent talk at Cocoaheads about Shopify’s mobile development and testing/deployment process, I wanted to summarize the tools and open source projects I mentioned…
Regarding the deployment and testing of our Rails applications, we have some open source components:
- Shipit – our production deployment tool
- EJSON – for securely storing secrets in source code
- Docker – containerization solution
For mobile development, our CI solution is BuildKite, which is a hybrid hosted CI solution. BuildKite provides the hosted management service for our automated testing, and handles all the webhook integration with GitHub and such. We provide the build machines which run BuildKite’s agent software.
Two interesting features of BuildKite: we can manage the build/test/deploy process using pipeline configuration files that live in our source code management system, and BuildKite needs only minimal access to our GitHub repos – it can even operate without direct read access to our source code.
Our BuildKite agents are Mac mini computers hosted at a colo facility. Each one currently runs 2 macOS VMs with Xcode, fastlane, BuildKite’s agent and our other required tools.
The toolchain for our iOS/Android CI system includes some open source tools:
- Packer – for automating the creation of our Mac VM images
- Fastlane – tools for automating builds, tests, code signing, submission to beta services, etc.
- Shenzhen – (mostly phased out) for automating builds and their distribution to HockeyApp and other hosting solutions
- FBSnapshot – library for iOS "snapshot" testing, which takes screenshots of views and screens and compares them against reference images
- OCMock – library for mocking Objective-C methods and objects
- OHHTTPStubs – library for mocking network requests using “canned” responses
We also use a hosted solution for deploying iOS simulator and Android emulator builds of our apps, which gives support staff and other non-developers the ability to run copies of these apps in a web browser. This is thanks to a service called appetize.io.
We’re able to run a fairly complete copy of the Shopify software stack on our local machines. We used to use Vagrant and VMware to run a complete Linux system on our Macs to develop with, but this solution had some issues.
Our current solution involves running all the shared services like Nginx, MySQL, Redis, and so on inside of a very lightweight Linux VM system, and running the Rails applications themselves directly on our Macs. We use a home-grown solution for managing our local system called dev and our VM system is called railgun.
My last slide was a gratuitous list of tools I use, some of which are not that well known…
- Tower – my favourite graphical Git client
- Kaleidoscope – the best diff tool I’ve found; it works much better than Git’s built-in diff and also has superb image support.
- Trailer – Mac menu-bar app that gives quick access to GitHub issues and PRs and desktop notifications on updates
- Charles Proxy – an invaluable tool for examining HTTP/HTTPS traffic
- Paw – for querying and testing HTTP API services
- SimPholders – menu-bar utility that helps you find your simulator builds in the Finder
The photo above shows a sonar sensor called the HC-SR04 plugged into a solderless breadboard.
The sonar sensor sends out a pulse of ultrasound, well above the hearing range of people and animals. When it hears the echo, it brings one of its pins from ground to +5 volts, for a period that corresponds to how long it took the echo to come back.
Since the HC-SR04 is designed to run at 5 volts, we can't simply power it from the Launchpad's 3.3 volt power supply. We have to find 5 volts somewhere.
We could simply use a second power supply, and connect the grounds together so the computer and the HC-SR04 both agree on what the zero volt level is. But it just so happens that the Launchpad gets 5 volts from the USB port of the computer it is attached to, and there is a hole in the circuit board we can solder a wire to that is connected to that 5 volts.
The hole in the circuit board is labeled TP1 (that stands for Test Point 1). It is easy to insert a wire into the hole and solder it in place on the back side of the board, because there is almost nothing you can damage or mess up on that side. Even a novice at soldering can safely solder a wire there.
In the photo above you can see the two holes next to the USB connector at the bottom right. The upper of the two holes is TP1.
The HC-SR04 puts out 5 volt signals, which the Launchpad can't read. We need to drop the voltage down to between 1 and 3.3 volts. The simplest way to do this is to limit the current with a 1,000 ohm resistor. Because there is less current, the voltage will drop to something the Launchpad can handle.
The HC-SR04 has four pins. We connect +5 volts to the pin labeled Vcc. We connect the pin labeled Gnd to ground. The Echo pin is the output that needs to have the 1,000 ohm resistor. The other side of that resistor goes to an input pin. In our example program, that will be Port 2 pin 1.
The remaining pin is the Trig pin. Setting that pin high tells the device to send out a pulse of ultrasound. The Launchpad can raise this pin to 3.3 volts, which is short of the 5 volts that the HC-SR04 expects, but is high enough to be accepted as a HIGH signal.
In our example program, the setup() function starts the serial port after a 5 second delay to allow us to bring up the Serial Monitor window. It then prints a startup message, and sets up the port pins.
In the loop() function, we get the average distance the sonar sensor sees in front of it, and print that out. Then we send out a beep to a speaker connected to Port 2 pin 3. The frequency of the tone corresponds to the distance between the sensor and what it is sensing.
We wrote our own beep() function, because the tone() function supplied in the standard library uses a timer that is also used by pulseIn(), and we will need to use the pulseIn() function to read the sonar sensor.
The get_distance() function is what does the sonar sensor reading. It sends out a 5 microsecond pulse to the sensor's Trig pin to tell the sensor to send out a sonar pulse. Then we time the resulting signal from the Echo pin using pulseIn(), which returns the length of the pulse in microseconds. We convert that time to centimeters by dividing by the round-trip time sound needs per centimeter of distance – twice the one-way travel time – which works out to 58.77 microseconds.
As we did with the LED proximity sensor, we filter the results by sorting samples and averaging those near the median.
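The conversion and filtering steps above can be sketched in plain Python. This is an illustration of the math only, not the Launchpad sketch itself; the function names and the 3-sample window are my own choices.

```python
# Illustrative sketch of the article's distance math and median filtering.
# 58.77 is the round-trip echo time in microseconds per centimeter of distance.
US_PER_CM_ROUND_TRIP = 58.77

def echo_to_cm(pulse_us):
    """Convert a pulseIn()-style echo duration (microseconds) to centimeters."""
    return pulse_us / US_PER_CM_ROUND_TRIP

def filtered_distance(samples_us, keep=3):
    """Sort the samples and average the ones nearest the median,
    throwing away outliers such as a missed echo."""
    cm = sorted(echo_to_cm(s) for s in samples_us)
    mid = len(cm) // 2
    lo = max(0, mid - keep // 2)
    window = cm[lo:lo + keep]
    return sum(window) / len(window)

print(echo_to_cm(587.7))                              # about 10 cm
print(filtered_distance([580, 590, 600, 5000, 585]))  # the 5000 outlier is discarded
```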
As you move your hand towards the sensor, the pitch of the beeps rises. As you move away, the pitch falls.
WPF ComboBox items (ItemsSource binding) are not visible
I am trying to bind a List<MyClass> to a ComboBox. The following is the simple code I implemented:
C#
cmbList.ItemsSource = DbMain.GetNameList();
XAML
<StackPanel Grid.Row="0" Orientation="Horizontal" >
<TextBlock Text="Names:" Margin="5,0,5,0" VerticalAlignment="Center" Width="50" Visibility="Collapsed"/>
<ComboBox x:Name="cmbList" Width="200" SelectionChanged="cmbList_SelectionChanged"
DisplayMemberPath="DisplayName" SelectedValuePath="DisplayName" Foreground="Black"/>
</StackPanel>
Problem
The values of List<MyClass> are retrieved from DbMain.GetNameList() and bound to the ComboBox, but they are not visible. When I perform SelectionChanged, I can access SelectedItem as well. The only issue is that the items are not visible.
Error in Output Window
System.Windows.Data Error: 40 : BindingExpression path error: 'DisplayName' property not found on 'object' ''MyClass' (HashCode=804189)'. BindingExpression:Path=DisplayName; DataItem='MyClass' (HashCode=804189); target element is 'TextBlock' (Name=''); target property is 'Text' (type 'String')
Check the output window, is there a binding error like 'cannot find property DisplayName'?
Are you providing the right DisplayMemberPath? Check for a misspelling.
@kennyzx Yes. There is a error there. System.Windows.Data Error: 40 : BindingExpression path error: 'DisplayName' property not found on 'object' ''MyClass' (HashCode=804189)'. BindingExpression:Path=DisplayName; DataItem='MyClass' (HashCode=804189); target element is 'TextBlock' (Name=''); target property is 'Text' (type 'String')
Then that is a classic binding error; you are expected to define a DisplayName property in MyClass.
I already defined it. I use this class in many other places and it's working fine. Class definition: public class MyClass { public int Id; public string DisplayName; } Is this because there is no get/set?
Yes, make it a property with getter and/or setter, instead of a field.
By using this binding expression, you are stating that there is a property named DisplayName in MyClass. At runtime there is no such property – you define DisplayName as a field – which is why the binding fails and the ComboBox shows blank items.
<ComboBox x:Name="cmbList"
          DisplayMemberPath="DisplayName" ... />
Unlike unhandled exceptions, this kind of binding error doesn't crash the application, but you can find its trace in the Output window while debugging.
Here you will find an overview of all KnightFight servers.
KnightFight DE Server
- Server 1: https://de1.knightfight.moonid.net
- Server 2: https://de2.knightfight.moonid.net
- Server 3: https://de3.knightfight.moonid.net
- Server 4: https://de4.knightfight.moonid.net
- Server 5: https://de5.knightfight.moonid.net
- Server 6: https://de6.knightfight.moonid.net
- Server 7: https://de7.knightfight.moonid.net
- Server 8: https://de8.knightfight.moonid.net
- Server 9: https://de9.knightfight.moonid.net
- Server 10: https://de10.knightfight.moonid.net
- Server 11: https://de11.knightfight.moonid.net
- Server 12: https://de12.knightfight.moonid.net
- Server 13: https://de13.knightfight.moonid.net
- Server 14: https://de14.knightfight.moonid.net
- Server 15: https://de15.knightfight.moonid.net (12.12.2019 Trutzberge)
There are also international servers that are available to everyone in the English language:
- International Server 1 (INT)
- International Server 2 (INT)
- International Server 3 (INT)
- International Server 4 (INT)
Besides the German servers, there are also non-German speaking servers. Some of the countries already have several servers, like Italy or Spain.
- Spain: https://es1.knightfight.moonid.net
- Italy: https://it1.knightfight.moonid.net
- England: https://uk1.knightfight.moonid.net
- USA: https://us1.knightfight.moonid.net
- Netherlands: https://nl1.knightfight.moonid.net
- France: https://fr1.knightfight.moonid.net
- Portugal: https://pt1.knightfight.moonid.net
- Turkey: https://tr1.knightfight.moonid.net
- Russia: https://ru1.knightfight.moonid.net
- Poland: https://pl1.knightfight.moonid.net
- Brazil: https://br1.knightfight.moonid.net
- Greece: https://gr1.knightfight.moonid.net
- Sweden: https://se1.knightfight.moonid.net
- Denmark: https://dk1.knightfight.moonid.net
- Romania: https://ro1.knightfight.moonid.net
- Czech Republic: https://cz1.knightfight.moonid.net
- Slovakia: https://sk1.knightfight.moonid.net
- Hungary: https://hu1.knightfight.moonid.net
- Bulgaria: https://bg1.knightfight.moonid.net
Questions about the servers
Here you can find frequently asked questions about the servers.
- No one can answer the question of when a new server will be started, not even the forum team. Whether a new server starts depends on several factors.
- The number of players has nothing to do with whether a server starts or not. (See also here)
Change of server
You cannot switch between servers with your account or have it imported to another server, because the servers are independent worlds in their own right and not connected to each other. Switching is therefore not possible.
Premium transfer between servers
This is also not possible. The only option is a premium exchange: give a player the code for server X and get a code for server Y from that player in return. KnightFight and Redmoon Studios are not liable for such an exchange!
Pass on MoonCoins
What is possible is to buy MoonCoins yourself and, under Payment, mark them "as a voucher" before completing the purchase. You will then receive a code by e-mail, and you can pass this code on.
- see also passing on MoonCoins
XML Tips from perfectxml.com [Page 3]
- What is XHTML?
The W3C is currently working on a proposed recommendation for an implementation of XML known as XHTML. This implementation
will obey all of the grammar rules of XML, while conforming to the vocabulary of HTML. The XHTML document that you create
may be sent to the client with a MIME type of HTML or XML, allowing the greatest flexibility in rendering and manipulation at
the client side. The XML DOM may then be used to manipulate the contents of the XHTML document directly.
- XML and Binary data
An XML document is a text document with tags and text data enclosed within tags.
If you want to send binary data as part of an XML document (for example: <BookImage>some binary image data</BookImage>),
you'll have to encode the binary data using base64 encoding.
Base64 processes data as 24-bit groups, mapping each group to four encoded characters.
It is sometimes referred to as 3-to-4 encoding.
On the receiving side, you decode the base64 data back to binary.
There are many ways to do this encoding/decoding (including using MSXML.dll);
if you need additional details and a code sample, please send mail to Darshan Singh.
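As a quick illustration of the encode-on-send / decode-on-receive round trip, here is a sketch using Python's standard library (rather than MSXML); the <BookImage> element name comes from the example above, and the byte string is made up:

```python
# Illustrative base64 round trip for embedding binary data in an XML element.
import base64

raw = b"\x89PNG\r\n\x1a\n"  # pretend these are image bytes

# Sending side: encode 24-bit groups of binary data as 4 text characters each
encoded = base64.b64encode(raw).decode("ascii")
xml = "<BookImage>%s</BookImage>" % encoded

# Receiving side: extract the element text and decode back to binary
inner = xml[len("<BookImage>"):-len("</BookImage>")]
decoded = base64.b64decode(inner)
print(xml)
```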
- What is ROPE?
Remote Object Proxy Engine, or ROPE, is a powerful COM component built to make life easier for SOAP
developers who are used to programming in Visual Basic. SOAP provides several distinct objects that can
be used by both the client and the server; the ROPE COM component abstracts all of SOAP's implementation details.
One of ROPE's objects can be used to encode binary data into HTTP-compatible (Base64) form.
The XML-RPC protocol defines a way for RPC requests and responses to be serialized into XML documents and sent across an
HTTP connection. One major difference between SOAP and XML-RPC is that XML-RPC uses a <methodName> element containing the method
name; SOAP, however, uses the method name as the name of the element itself. Parameters in XML-RPC are encoded with type-information
elements such as <string>, in contrast to SOAP, which allows the use of XML Schemas to describe the data types.
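For illustration, a minimal XML-RPC request might look like this (the method name and parameter value are made up):

```xml
<?xml version="1.0"?>
<methodCall>
  <methodName>books.getTitle</methodName>
  <params>
    <param><value><string>0-07-212220-X</string></value></param>
  </params>
</methodCall>
```

A SOAP request for the same call would instead use an element named after the method itself, with the parameter types described by an XML Schema.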
- XML Certification
Check out www.perfectxml.com/Certify - the complete and up-to-date resource center dedicated to IBM XML Certification Test 141
While everybody is studying hard to finish their MCSDs and MCSEs,
if you think you know XML well, why not sit the XML certification exam and
tell your friends/co-workers that you are an "XML certified" professional?
Check out more details at:
- What is UDDI?
UDDI, or "Universal Description, Discovery and Integration" initiative
is essentially a Web Service registry, implemented with
a programmatic SOAP interface that allows businesses to basically advertise
the existence of their web-based services or to query for the existence of
other web-based services. At the moment, at least, UDDI does not appear to
be a mechanism for describing specific SOAP services (as is the case with
SCL and NASSL), but rather a mechanism for describing where and how to access
SOAP service descriptions and the services they describe. Essentially the
Yahoo of Web Services.
For more details check out: UDDI.org
- XML And Special Characters
XML predefines five entity references for special characters that would
otherwise be interpreted as part of markup language. These five characters are:
&, <, >,
", and '.
Microsoft published a KB Article (http://support.microsoft.com/support/kb/articles/Q251/3/54.ASP)
on how to deal with these special characters,
but the code in that article is not that useful.
Here are the two functions that I wrote to deal with these special characters.
Function ReplaceXMLSpecChars(ByVal strSource)
    ' Check whether any special character is present at all
    lngPointer1 = InStr(strSource, "&")
    lngPointer2 = InStr(strSource, "<")
    lngPointer3 = InStr(strSource, ">")
    lngPointer4 = InStr(strSource, """")
    lngPointer5 = InStr(strSource, "'")
    If lngPointer1 = 0 And lngPointer2 = 0 And lngPointer3 = 0 And lngPointer4 = 0 And lngPointer5 = 0 Then
        ReplaceXMLSpecChars = strSource
    Else
        ' Replace & first, so the & in the other entity references is not re-escaped
        strNew = ReplaceSingleXMLSpecChar(strSource, "&", "&amp;")
        strNew = ReplaceSingleXMLSpecChar(strNew, "<", "&lt;")
        strNew = ReplaceSingleXMLSpecChar(strNew, ">", "&gt;")
        strNew = ReplaceSingleXMLSpecChar(strNew, """", "&quot;")
        strNew = ReplaceSingleXMLSpecChar(strNew, "'", "&apos;")
        ReplaceXMLSpecChars = strNew
    End If
End Function

Function ReplaceSingleXMLSpecChar(ByVal strSource, ByVal strSearchFor, ByVal strReplace)
    lngPointer = InStr(strSource, strSearchFor)
    If lngPointer = 0 Then
        ReplaceSingleXMLSpecChar = strSource
    Else
        strValidString = ""
        ' Walk through the string, replacing each occurrence in turn
        While Not lngPointer = 0
            strValidString = strValidString & Mid(strSource, 1, lngPointer - 1) & strReplace
            strSource = Mid(strSource, lngPointer + 1)
            lngPointer = InStr(strSource, strSearchFor)
        Wend
        strValidString = strValidString & strSource
        ReplaceSingleXMLSpecChar = strValidString
    End If
End Function
The Social Life of Neighborhoods: Data Preparation & Mapping Tutorials
Last updated: June 18, 2021
The following is a series of tutorials specifically designed for The Social Life of Neighborhoods (SOC 176/SOC 276/AFRICAAM 76B/CSRE 176B/URBANST 179) course. The course assignments and final story map require collecting and analyzing information about neighborhoods and other urban spaces. In the tutorials, you will be introduced to tools that will allow you to gather, process, and visualize data so that you can complete the assignments and create your own story map. No prior experience or familiarity with quantitative data or GIS/spatial software is assumed in the tutorials. If you have some prior experience, you will likely be able to skim over anything that looks familiar and focus on new techniques.
These tutorials are a work in progress. Recommendations for improving the content are welcome. If you run across typos, grammatical errors, or confusing language, please send your comments, questions, and feedback to Francine Stephens at firstname.lastname@example.org.
Each tutorial is a chapter in this website. You can navigate through the chapters using the sidebar. At the top of this page, you will see settings options in the navigation bar. Hover over the icons in the top bar to hide the sidebar or change the font and color scheme to your preference.
1.1 Table of Contents
Chapter 2 introduces Social Explorer and how to use it to collect demographic data from the U.S. Census.
Chapter 3 introduces the ArcGIS Online interface.
Chapter 4 explains the steps for creating a reference map for Assignment #3. The chapter introduces geocoding, map symbology, and printing static PDF maps.
Chapter 5 outlines the steps for analyzing crime data in Assignment #4. The chapter introduces spatial joins, clustering, heat mapping, hot spots, and disaggregating data using filters.
Chapter 6 focuses on building maps of racial composition and segregation for Assignment #5. The chapter introduces animated time-series maps, choropleth maps, thematic categorical maps, and key segregation measures.
Chapter 7 explains how to map gentrification for Assignment #7. The chapter defines and operationalizes gentrification, reviews how to create a thematic categorical map, and explains how to access historical Google Street View images. There is an optional section on configuring pop-ups for different types of spatial layers.
Chapter 8 orients the reader to the classic story map templates and how to prepare a story map. Examples are featured in the chapter along with specific tips for presenting the time series maps and incorporating design features into the story map.
If you're new to the world of CNC machining, the term "Axis CNC" might be confusing. So what is it exactly?
In short, Axis CNC refers to the number of directions in which a CNC machine can move. Most machines have at least three axes (X, Y, and Z), but some can have up to seven or more.
The more axes a machine has, the more complex and expensive it is. However, it also allows for more versatile machining, such as helical milling and contouring.
If you're just getting started in CNC machining, it's probably best to stick with a machine that has three axes. Once you're more familiar with the basics, you can upgrade to a more advanced machine if needed.
If you're new to the world of CNC machines, the term "axis" may be confusing. Here's a quick rundown of the basics:
CNC machines typically have 3 axes: X, Y and Z. The X and Y axes correspond to the length and width of the material being machined, while the Z axis corresponds to the depth.
Some CNC machines may also have additional rotary axes, such as A (rotation around the X axis) or B (rotation around the Y axis). However, these are not as common as the basic 3 axes.
Now that you know the basics of axes, you can start to understand how CNC machining works. Stay tuned for more tips and advice on all things CNC!
When it comes to CNC (computer numerical control) machining, there are three basic elements that every shop needs in order to get started: a CNC machine, a computer, and CAM (computer-aided manufacturing) software. Let’s take a closer look at each of these essential components.
A CNC machine is a machine that is controlled by a computer. The computer tells the machine what to do and when to do it. The machine can be anything from a lathe to a mill to a router.
The computer is the brain of the CNC operation. It is responsible for creating the programs that tell the CNC machine what to do. These programs are created using CAM software (more on that below).
CAM software is used to create the programs that tell the CNC machine what to do. Without CAM software, a CNC machine would be nothing more than a very expensive paperweight. There are many different CAM software packages on the market, each with its own strengths and weaknesses. Finding the right CAM software for your shop is an important decision that should not be taken lightly.
These are the three basic elements of CNC: the machine, the computer, and the CAM software. Together, these three things form the basis of any CNC operation.
There are a lot of different coordinate systems out there, and it can be confusing to keep track of all of them. But don't worry, we're here to help! In this blog post, we're going to give you a crash course in the different types of coordinate systems, and how to use them.
The first type of coordinate system is the Cartesian coordinate system. This is probably the most familiar coordinate system to most people. In a Cartesian coordinate system, the coordinates are defined by a pair of perpendicular axes. The point where the axes intersect is called the origin, and the coordinates are defined as the distance from the origin along each axis.
The next type of coordinate system is the polar coordinate system. In a polar coordinate system, the coordinates are defined by a radius and an angle. The angle is measured from the positive x-axis, and the radius is the distance from the origin.
The last type of coordinate system we're going to talk about is the spherical coordinate system. In a spherical coordinate system, the coordinates are defined by a radius and two angles: an azimuth angle measured from the positive x-axis in the xy-plane, and an inclination angle measured from the positive z-axis. The radius is the distance from the origin.
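To make the relationship between the systems concrete, here is a small sketch converting between polar and Cartesian coordinates (function names are mine; angles are in radians):

```python
# Illustrative conversions between Cartesian and polar coordinates.
import math

def polar_to_cartesian(r, theta):
    """theta is measured from the positive x-axis."""
    return (r * math.cos(theta), r * math.sin(theta))

def cartesian_to_polar(x, y):
    """Returns (radius, angle from the positive x-axis)."""
    return (math.hypot(x, y), math.atan2(y, x))

# A point 2 units from the origin at 90 degrees lies on the y-axis.
x, y = polar_to_cartesian(2.0, math.pi / 2)
r, theta = cartesian_to_polar(x, y)
```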
Now that you know the basics of the different coordinate systems, you should be able to use them to your advantage. If you're ever feeling lost, just remember that there's a coordinate system out there that can help you find your way.
There are many tools in the world, and each has its own path. As with anything else in life, the path of the tool is not always linear. Sometimes the tool will take a detour, or even backtrack, in order to complete its mission.
No matter what the path of the tool may be, one thing is for sure: the tool will always find a way to get the job done.
The ability to share canvas apps with guests of an organization will be made available in public preview shortly. Please use this thread to share feedback and questions. This thread will be linked to by the blog announcement coming soon.
Hello, @kennisdj051 .
1. External users will always require a license. However, very soon we'll be enabling a feature that recognizes the license in a user's home and guest tenants, so a license doesn't need to be assigned in the guest tenant.
2. Billing occurs on the number of licenses purchased and not who they're assigned to. There are multiple scenarios where who a license is assigned to may change, which is why https://admin.microsoft.com and https://portal.azure.com can be used to change license assignment.
3. Yes, licenses are purchased for an organization and then assigned to users within an organization. Billing occurs for licenses purchased for the organization. Since your application is only a one time use, I'm wondering if PowerApps Portal might be a better solution to your scenario. Feel free to send me a private message if you want to discuss your scenario in more detail.
Hello, @Sachinbansal .
This week. There is one outstanding update to make.powerapps.com happening in the government cloud this week which will allow sharing canvas apps with guests.
@seadude - This scenario isn't supported, guest access in AAD is a prerequisite for canvas app guest access.
If security folks want control over who can invite guests and what guests can access when they're part of the tenant then AAD external collaboration and AAD conditional access may provide the management capabilities desired by security folks.
As for sharing an app with users that change frequently, the app can be shared with an AAD security group and then new users can be added to the group (instead of having to reshare the app with every new user).
I've encountered an issue when trying to share an app with a guest that isn't a Google account (see Guest Access with Canvas App for the full story). In both cases, I've added the external user, added a PowerApps Plan 2 Trial license, and shared the app with that user. The Gmail account is able to run the app, but the Mailfence and Protonmail accounts return "The app didn't start correctly. Try refreshing your browser."
I've used different browsers (Firefox, Chrome) and I get the same error.
Session ID: 94b3641b-8fb7-4641-940b-bce6192df943
Power Apps per user plans can always be re-assigned in portal.azure.com or admin.microsoft.com. Power Apps per app plans are assigned based on apps being shared with users - to remove this assignment the app must be unshared with the user.
Ah. So there isn't currently any solution from Microsoft that supports an "auto-scale" approach that reclaims passes when a person stops using the app?
In other words, if you have 100 per-app licenses but only ever 50 concurrent users, you'd still need 100 licenses even if only 50 are ever active?
Bulk enrollment of Apple devices
You can enroll large numbers of iOS, iPadOS, and macOS devices in XenMobile in two ways.
Use the Apple Deployment Program to enroll the iOS, iPadOS, and macOS devices that you buy directly from Apple, a participating Apple Authorized Reseller, or a carrier. That support includes Shared iPads. XenMobile supports the Apple Deployment Program for Apple Business Manager (ABM) and Apple School Manager (ASM) for Education. This article describes how to integrate multiple devices with your ABM account. For information on enrolling in ABM and connecting your ABM account with XenMobile, see Deploy devices through Apple Deployment Program. For information about Apple School Manager accounts, see Integrate with Apple Education features.
For enrollment of macOS devices, XenMobile requires that the devices run macOS 10.10 or later.
You can also use Apple Configurator 2 to enroll iOS devices whether you purchased them directly from Apple or not.
With ABM:
- You do not have to touch or prepare the devices. Instead, you submit device serial numbers or purchase order numbers through ABM to configure and enroll the devices.
- After XenMobile enrolls the devices, you can give them to users who can start using them right away. When you set up devices with ABM, you can eliminate some of the Setup Assistant steps that users would have to complete when they first start their devices.
- For more information on setting up ABM, see the documentation available from Apple Business Manager.
With Apple Configurator 2:
- You attach iOS devices to an Apple computer running macOS 10.7.2 or later and the Apple Configurator 2 app. You prepare the iOS devices and configure policies through Apple Configurator 2.
- After you provision the devices with the required policies, the first time the devices connect to XenMobile, the devices receive policies from XenMobile. You can then start managing the devices.
- For more information about using Apple Configurator 2, see the Apple Configurator Help.
Open required ports for connectivity between XenMobile and Apple. For more information, see Port requirements.
Integrate your Apple Business Manager account with XenMobile
If you do not have an ABM account set up with XenMobile, complete the following steps in Deploy devices through Apple Deployment Program.
- Enroll in Apple Business Manager.
- Connect your Apple Business Manager account with XenMobile.
- Order Deployment Program enabled devices.
- Manage Deployment Program enabled devices.
Set a default server for bulk enrollment
To assign large orders of iOS, iPadOS, and macOS devices to an MDM server, you can set XenMobile as the default server.
- Sign in to Apple Business Manager using an administrator or device enrollment manager account.
- In the sidebar, click Settings > Device Management Settings.
- Choose an existing MDM server. Under Default Device Assignment, click Change. Select the default XenMobile server for each device type. Click Done.
Configure deployment rules of device policies and apps for ABM accounts
You can associate ABM accounts with different device policies and apps by using the Deployment Rules section under Configure > Device Policies and Configure > Apps. You can specify that a policy or app either:
- Deploys only for a particular ABM account.
- Deploys for all ABM accounts except the one selected.
The list of ABM accounts includes only those accounts with a status of enabled or disabled. If the ABM account is disabled, the ABM device doesn’t belong to this account. Therefore, XenMobile doesn’t deploy the app or policy to the device.
In the following example, a device policy deploys only for devices with the ABM account name “ABM Account NR”.
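The two rule types described above can be sketched as a small decision function. The rule shape and field names here are illustrative only, not XenMobile's actual data model:

```python
def should_deploy(rule, device_account):
    """Decide whether a policy or app deploys to a device, given a
    deployment rule of one of the two kinds described above.

    rule           = {"mode": "only" | "all_except", "account": <ABM account name>}
    device_account = {"name": <ABM account name>, "status": "enabled" | "disabled"}
    """
    # A disabled ABM account no longer owns its devices, so nothing deploys.
    if device_account["status"] == "disabled":
        return False
    if rule["mode"] == "only":
        return device_account["name"] == rule["account"]
    if rule["mode"] == "all_except":
        return device_account["name"] != rule["account"]
    raise ValueError("unknown rule mode")

# A policy that deploys only for the account "ABM Account NR":
rule = {"mode": "only", "account": "ABM Account NR"}
```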
User experience when enrolling an Apple Deployment Program enabled device
When users enroll an Apple Deployment Program enabled device, their experience is as follows.
Users start their Apple Deployment Program enabled device.
XenMobile delivers the Apple Deployment Program configuration that you configured in the XenMobile console to the Apple Deployment Program enabled device.
Users configure the initial settings on their device.
The device automatically starts the XenMobile device enrollment process.
Users continue to configure the other initial settings on their device.
In the home screen, users might be prompted to sign in to Apple App Store so that they can download Citrix Secure Hub.
This step is optional if you configure XenMobile to deploy the Secure Hub app using the device-based volume purchase app assignment. In this case, you don’t need to create an Apple App Store account or use an existing account.
Users open Secure Hub and type their credentials. If required by the policy, users might be prompted to create and verify a Citrix PIN.
XenMobile deploys any remaining required apps to the device.
To configure Apple Configurator 2 settings
You can configure and deploy iPhone and iPad devices in bulk using Apple Configurator 2 instead of Apple Business Manager.
In the XenMobile console, go to Settings > Apple Configurator Device Enrollment.
Set Enable Apple Configurator device enrollment to Yes.
The Enrollment URL to enter in Apple Configurator is a read-only field. This setting provides the URL for the XenMobile server that communicates with Apple. Copy and paste this URL when you configure settings in Apple Configurator 2. The enrollment URL is the XenMobile server fully qualified domain name (FQDN), such as mdm.server.url.com, or the IP address.
To prevent unknown devices from enrolling, set Require device registration before enrollment to Yes. Note: If this setting is Yes, you must add the configured devices to Manage > Devices in XenMobile manually or through a CSV file before enrollment.
To require users of iOS devices to enter their credentials when enrolling, set Require credentials for device enrollment to Yes. The default is not to require credentials for enrollment.
Note: If the XenMobile server is using a trusted SSL certificate, skip this step. Click Export anchor certs and save the certchain.pem file to the macOS keychain (login or System).
Install Apple Configurator 2 from the App Store.
Use a Dock Connector-to-USB cable to connect devices to the Mac running Apple Configurator 2. You can configure up to 30 connected devices simultaneously. If you do not have a Dock Connector, use one or more powered USB 2.0 high-speed hubs to connect the devices.
Start Apple Configurator 2. The configurator shows any devices that you can prepare for supervision.
To prepare a device for supervision:
Select Supervise devices if you intend to maintain control of the device by reapplying a configuration regularly. Click Next.
Placing a device into Supervised mode installs the selected version of iOS on the device, completely wiping the device of any previously stored user data or apps.
In iOS, click Latest for the latest version of iOS that you want to install.
In Enroll in MDM Server, choose an MDM server. To add a new server, click Next.
In Define an MDM server, provide a name for the server and paste the MDM server URL from the XenMobile console.
In Assign to organization, choose an organization to supervise the device.
For more information on preparing devices with Apple Configurator 2, see the Apple Configurator help page, Prepare devices.
As each device is prepared, turn it on to start the iOS Setup Assistant, which prepares the device for first-time use.
To assign devices from Apple Configurator 2 to Apple Business Manager
You can associate iPhone and iPad devices from Apple Configurator 2 with your Apple Business Manager account. When you add devices, they appear in the Devices section. These devices no longer include enrollment settings assigned through Apple Configurator 2. For more information, see Assign devices added from Apple Configurator 2 to Apple Business Manager.
Renew or update certificates when using the Apple Deployment Program
When the XenMobile Secure Sockets Layer (SSL) certificate is renewed, you upload a new certificate in the XenMobile console in Settings > Certificates. In the Import dialog box, in Use as, click SSL Listener so that the certificate is used for SSL. After you restart the server, XenMobile uses the new SSL certificate. For more information about certificates in XenMobile, see Uploading Certificates in XenMobile.
It is not necessary to reestablish the trust relationship between Apple Deployment Program and XenMobile when you renew or update the SSL certificate. You can, however, reconfigure your Apple Deployment Program settings at any time by following the preceding steps in this article.
For more information about the Apple Deployment Program, see the Apple documentation.
Renew your connection between the Apple Deployment Program and XenMobile
XenMobile displays a License Expiration Warning when your Automated Device Enrollment server token expires.
Replace the token from Apple School Manager/Apple Business Manager.
- In the XenMobile console, go to Settings > Apple Deployment Program to download a new public key.
- Sign in to Apple Business Manager to download the token.
- Open Settings and select the server from which you need a token. Click Edit.
- Under MDM Server Settings, upload the new public key you downloaded from XenMobile and save the changes.
- Click Download Token to download the new token.
- In Citrix XenMobile, go to Settings > Apple Deployment Program.
- Select the Deployment Program account, click Edit, and upload your server token file.
- Click Next and save the changes.
In this article
- Integrate your Apple Business Manager account with XenMobile
- Set a default server for bulk enrollment
- Configure deployment rules of device policies and apps for ABM accounts
- User experience when enrolling an Apple Deployment Program enabled device
- To configure Apple Configurator 2 settings
- To assign devices from Apple Configurator 2 to Apple Business Manager
- Renew or update certificates when using the Apple Deployment Program
- Renew your connection between the Apple Deployment Program and XenMobile
Welcome to Yahoo!. This is your legal agreement with Yahoo! (the "License") about your use of Yahoo! software. Yahoo! can update and change this License by posting a new version without notice to you. You can find the most recent version of this License by visiting the Yahoo! Info Center at http://info.yahoo.com. If you choose to use the Software after such change, it means you agree to such changes.
The Yahoo Terms of Service located at http://info.yahoo.com/legal/us/yahoo/utos/utos-173.html ("TOS") and this License cover your use of the Konfabulator Engine, and Yahoo! Developed Widgets (collectively, "Software") for non-developer, personal, consumer use.
The software, documentation, and files or materials you get or use after your installation (collectively, the "Software") are licensed to you on a worldwide (except as limited below), non-exclusive, non-sublicensable basis on the terms and conditions in this License. This License covers all updates, revisions, substitutions, and any copies of the Software made by or for you, which are all considered part of the Software. All rights not expressly granted to you are reserved by Yahoo! or their respective owners.
Here’s what you can do with the Software:
- Use the Software for noncommercial use or benefit.
- Make copies of the Software and distribute such copies to others within your employer's organization so long as each recipient agrees to be bound by this License too. If others within your organization do not have this opportunity to agree and you would still like to distribute copies to them, you may do so only if you have the legal right to bind your organization (and others within your organization) to this License. If you do not have this right and the recipients do not have an opportunity to agree to this License, you may NOT distribute the Software to them. Instead, have them download the Software themselves.
- Allow others to use the same copy of the Software, but regardless of whether or not authorized by you, you are responsible for ensuring that any and all uses of the Software comply with this License and all applicable laws.
- Install and personally use the Software in object code form on a personal computer or mobile device owned or controlled by you.
Here’s what you can NOT do with the Software:
- Decompile, reverse engineer, disassemble, modify, rent, lease, loan, distribute, or create derivative works or improvements from the Software or any portion thereof, or attempt to discover any source code (if the Software is compiled), protocols or other trade secrets in the Software.
- Obtain or attempt to obtain unauthorized access to the Yahoo! network.
- Incorporate the Software into any hardware or software device that is not your personal device
- Use, export, or re-export the Software in violation of applicable U.S. laws or regulations.
- Sell, lease, loan, distribute, transfer or sublicense the Software or access thereto or derive income from the use or provision of the Software, whether for direct commercial or monetary gain or otherwise, without Yahoo!'s prior, express, written permission.
- Use the Software in connection with acts of terrorism or violence, or to operate nuclear facilities, life support or other mission critical applications where human life or property may be at stake.
- Use the Software in any unlawful manner, for any unlawful purpose, or in any manner inconsistent with this License.
The Software is designed for consumer, personal use. It is not designed for enterprise, commercial, or human safety purposes. Its failure in such cases could lead to injury or damage for which...
- 8 years of experience in design, development, maintenance and support of Java/J2EE applications.
- Working knowledge of multi-tiered distributed environments and OOAD concepts, with a good understanding of the Software Development Lifecycle (SDLC) and working knowledge of Service-Oriented Architecture (SOA).
- Experience in analyzing, developing enterprise applications using SOA based architecture.
- Developed SOA applications using IBM Integration Developer 7.5 and WebSphere Process Server 7.5.
- Experience in using Applet and Swing.
- Extensive experience in Java/J2EE programming - JDBC, Servlets, JSP, JSTL, EJB.
- Expert knowledge over J2EE Design Patterns like MVC Architecture, Front Controller, Session Facade, Business Delegate and Data Access Object for building J2EE Applications.
- Experienced in working Core java applications using Multithreading and Garbage collection concepts.
- Experienced in developing MVC framework based websites using JSF and knowledge of spring framework.
- Extensive experience working in JSF Framework, O/R mapping Hibernate framework and spring framework.
- Extensive experience working in struts2, using struts2 tags and validations.
- Experienced in Object-relational mapping using Hibernate.
- Strong experience in XML related technologies SAX and DOM.
- Knowledge of developing and consuming Web services including different technologies and standards like DTD, XSD, SOAP, REST, WSDL, JAX-WS.
- Experience in installing, configuring, IBM Web Sphere, Web Logic, Apache Tomcat, JBOSS.
- Experience in building and deployment of EAR, WAR, JAR files on test, and stage systems in IBM Web sphere application server v7 and web sphere process server.
- Good Knowledge of using IDE Tools like Eclipse, RAD7.0, RAD7.5, WID6.2, IID7.5 for Java/J2EE application development.
- Expertise in database modeling, administration and development using SQL, PL/SQL, TOAD in Oracle 8i, 9i, 10g and 11g, DB2 and SQL Server environments.
- Experience in creating Roles in LDAP.
- Experience in developing Junit test cases and Junit automation using ant build script.
- Experience in using ANT for build automation.
- Having good hands on working with EJB 3.0 annotations to map POJOs to databases.
- Experience in using version control and configuration management tools like CVS and SVN, Clear Case.
- Experienced in using Operating Systems like Windows and UNIX.
- Proficient in software documentation and technical report writing.
- Extensive experience in developing Use Cases, Activity Diagrams, Sequence Diagrams and Class Diagrams using UML Rational Software Architecture RSA and Visio.
- Extensively used the LOG4j to log regular Debug and Exception statements.
- Versatile team player with good communication, analytical, presentation and inter-personal skills.
JAVA/J2EE Web Developer
New York Patient Occurrence And Reporting Tracking System NYPORTS
The New York Patient Occurrence Reporting and Tracking System NYPORTS is an adverse event reporting system implemented pursuant to New York State Public Health Law Section, Incident Reporting. For the purpose of NYPORTS reporting, an occurrence is an unintended adverse and undesirable development in an individual patient's condition occurring in a hospital. Most occurrences reported are meant to be tracked and trended as groups and are reported on a short form. More serious occurrences defined as patient deaths or impairments of bodily functions in circumstances other than those related to the natural course of illness, disease or proper treatment in accordance with generally accepted medical standards are investigated individually and require the hospital to conduct a root cause analysis.
- Participated in project planning sessions with business analysts and team members to analyze business IT Requirements and translated business requirements into working model.
- Designed UI spec documents to create web pages
- Develop web application using Struts 2 Framework
- Develop user interfaces using JSP, HTML and CSS
- Develop DAO design pattern for hiding the access to data source objects.
- Implemented MVC, DAO J2EE design patterns as a part of application development.
- Use GIT for software configuration management and version control
- Use Eclipse as IDE tool to develop the application.
- Deploy the application on WebLogic Server 11g.
- Worked heavily with the Struts tags- used struts as the front controller to the web application. Implemented Struts Framework according to MVC design pattern.
- Implemented validation framework for creation of validation.xml
- Used AJAX to make asynchronous calls to database and load web page.
Appeal Board System ABS
Appeal board system is Confidential system which helps Unemployment Insurance Appeal Board UIAB to apply and know the status of the case. ABS is an internet application that supports both New York state public to request a hearing case for their unemployment insurance and unemployment insurance appeal board in its effort to process and track hearings and appeal cases. ABS keeps track of all the cases and the hearings held and the participants who attended the hearing.
- Designed the physical models using rational tools such as Rational software architecture and generate the skeleton code.
- Designed the service interfaces involved in data model design.
- Created Schemas XSD for XML files.
- Created Web services using JAX-WS and JAXB.
- Developed the business rules using ILOG JRules Business rule management system.
- Created and implemented the complex DAO's for object relational mapping hibernate tool.
- Created connections to the oracle 11g database using data source.
- Created BPEL flows for the flow of a component in the business integration editor.
- Created, implemented and deployed the mediation flows in Enterprise service bus ESB .
- Developed Business Process Models BPM's as per the business requirements.
- Created and hosted multiple web services for various consumers, which are consumed by other domains and GUI projects.
- Implemented role based validations using LDAP configurations.
- Created and integrated GUI screens with JSF technology and IBM richfaces.
- Implemented core java concepts like multithreading.
- Created the ant build scripts to build the deployable artifacts.
- Involved in deployment process in multiple environments.
- Worked with team very closely to achieve or meet the project time lines.
- Conducted meetings and mentored the State Agency staff about the technology touch points and application.
- Involved in Clear Case setup for code repository.
- Troubleshooting the SOA environment application issues
- Configure and deploy applications in SOA websphere environment.
- Used ClearQuest to keep track of test cases.
- Implemented the Agile methodologies to achieve desired tasks.
- Created the mock up screens using the Balsamiq.
Core Java Developer
Chevron works to meet the world's growing demand for energy by exploring for oil and natural gas and by refining and marketing gasoline. This project involved an application used by Chevron employees: they can log into the application and retrieve, load, or plot data based on the privilege of the user. The application deals with underground rock structure; some users load data about the underground rock structure, and other users who want to read the rock structure can export the data and look into the seismic readings or plot them.
- Involved in development of user interface in core java using multithreading, garbage collection, applets and swing concepts.
- Implemented JDBC to interact with oracle database.
- Developed SQL queries and stored procedures to interact with Oracle 10 databases.
- Involved in debugging the bugs in the older application.
- Implemented make/gmake command compiling the legacy programs.
- Implemented TFS for the version controller.
- Involved in design and implementation of multithread process.
- Implemented File Import component to read third-party XML files and convert them to appropriate objects using SAX.
- Involved in the process of changing the passwords of the data accounts in the database.
- Involved in testing and debugging of the application.
- Developed the application on Eclipse.
Environment: Java 1.4, Applet, Swings, TOAD, PL/SQL, JDBC, XML, FTP, Oracle 10, Oracle 8, TFS, Visual studio, Windows Vista, Unix/Linux.
American Express employees can login into the site and can avail the packages and services applicable to them. The packages and services are dependent on the user and their group. The status of the services requested is tracked from service requisition until the service is provided. Based on the service requested, information is gathered from the users and passed to the respective service providers. The site is fully database driven and has features like Search, Bookmark, People Finder, and FAQ. Provided interfaces for service providers for adding, modifying and deleting services.
- Extensively involved in the implementation of MVC architecture using Java Struts.
- Involved in development of User Interface using JSF Java Server Faces and Ajax.
- Implemented persistence layer using Hibernate framework
- Involved in implementation JSON libraries.
- Developed the Application layer using Java Beans
- Involved in the implementation of J2EE Design Patterns such as Singletons
- Developed server SQL Queries and Stored Procedures to interact with Oracle9i Database
- Involved in debugging the system using LOG4J.
- Integration Using IBM Web Sphere Integration Developer and Process Server.
- Involved in the implementation of JMS API to create, send, receive, and read messages between application components
- Involved in the design and development of application using Web Logic
- Involved in the code reviews and conducting of reviews meetings and ensured that the other members follow the coding standards
- Involved in the testing of application using Web Sphere Test Server
- Environment: Java 1.5, Servlets, JSF, Spring, JMS, JDBC, XML, MVC, LOG4J, UML, JSON, Web Logic9.2
- This system connects all the BPL factories and branches all over the country. It helps the management to review the reports on sales and volume. It consists of four modules: Factory, Branch, Supplier, and Regional.
- Extensively involved in designing the database
- Involved in writing Hibernate queries, and Hibernate specific configuration and mapping files
- Coded JDBC programs for connection to the Oracle Database
- Developed Servlets and JSPs based on MVC pattern using Struts Action framework
- Involved in writing Business objects in EJB's
- Deployed into WebSphere Application Server
- Used Tiles for layout and Apache Validator Framework for Form validation
- Used Log4J logging framework to write Log messages with various levels
- Involved in fixing bugs and minor enhancements for the front-end modules
- Used Weblogic framework for writing Test Classes
- Used Ant for building and deploying the application
Environment: JSP1.2, Servlets2.1, Struts 1.2.4, Hibernate2.0, XML, UML, HTML, JNDI, CVS, Log4J, App server 5.1, Eclipse, Oracle 9i.
How does ingress filtering protect against legitimate spoof packets?
I am learning about Ingress Filtering where unauthorised packets are filtered out from entering the network.
To get a better understanding of it , I found an article which said:
the system examines all incoming packets to get information about their origins. The system compares this information to a database to determine if a packet is indeed from the place it says it is. If it appears to be a match, it can be allowed through. If there is a problem with the source, the system can hold the packet, keeping it out of the network and protecting any users who might be attached to the network.
I understand that if the spoofed packets are from a source address which is not in the database , it will not be allowed to enter the network.
HOWEVER
Assuming a hacker spoofs a source address which can be found in the database, then when the system compares this information to the database, a match will be found and the packet will enter the network even though it has been spoofed.
I am wondering if the system is able to detect packets which have been spoofed with an IP source address that can be found in the database, and if yes, how are such packets detected?
EDIT: It appears the terms "legitimate" and "illegitimate" have been causing a lot of confusion, so I have changed them to "source address found in the database" and "source address not found in the database". I hope this makes the question easier to understand.
Spoofing implies some level of undesirable action. As a Network Engineer, a packet coming from a network that hasn't been allocated to them is a bad thing. What legitimate reason would a downstream network have for sending a packet sourced from something other than what it's been assigned? As a customer network, why would you want traffic sourced from your network entering your network's edge?
@RyanFoley my apologies, what I meant was that a hacker spoofed a legitimate source address
@RyanFoley What I mean is: what if the hacker spoofs the source address so that the packets "originate" from, e.g., Google servers, and Google servers happen to be in the database?
Did any of the answers help you? if so, you should accept the answer so that the question doesn't keep popping up forever, looking for an answer. Alternatively you can answer your own question and accept the answer.
If a hacker spoofs packets which match your SRC/DST fields at any line in an ACL, then the firewall is going to allow the traffic.
The only exception to this is if you have a stateful firewall, and the spoofed packets don't make 'sense' for the given conversation the firewall is tracking. An example of this would be a hacker spoofing TCP SYNs to a host that's already mid-conversation with the real source IP. The firewall would see this behavior and drop the packet. This doesn't apply if there's no session already being tracked by the firewall, so again the cases in which a stateful firewall is going to completely block all spoofed packets are situational at best.
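The out-of-state SYN example can be illustrated with a toy connection tracker. This is a deliberately minimal sketch, not how any real stateful firewall is implemented:

```python
class StatefulFirewall:
    """Toy connection tracker: a bare SYN arriving for a flow that is
    already mid-conversation doesn't make sense, so it is dropped."""

    def __init__(self):
        self.established = set()  # tracked (src, dst, sport, dport) flows

    def track(self, flow):
        self.established.add(flow)

    def allow(self, flow, tcp_flags):
        # A new SYN for an already-established flow is out of state.
        if flow in self.established and tcp_flags == {"SYN"}:
            return False
        # Untracked flows fall through: the firewall has no state to
        # contradict, so spoofed packets are not caught here.
        return True

fw = StatefulFirewall()
flow = ("198.51.100.7", "203.0.113.10", 51515, 443)
fw.track(flow)  # the real host is mid-conversation on this flow
```

A spoofed SYN on `flow` would now be dropped, while an ACK continuing the tracked conversation, or any packet on an untracked flow, passes.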
There are exceptions for both stateful and stateless firewalls, but being general it's hard to give you more (useful) detail.
You can however design ACL's in way that can detect spoofed IP's in certain situations. For instance, you could have an edge firewall, and you know logically only public IP's will be coming from the outside to your network, so you could create ACL's blocking all of the private IP space from the outside, and that could stop some spoofing attempts.
In a similar vein, you could do this for networks you know will not be coming inbound to the interface you're making the ACL for, so poor spoofing attempts would be stopped.
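The edge-ACL idea of rejecting source addresses that could never legitimately arrive from the outside can be sketched like this (the range list is a common starting point, not a complete bogon list):

```python
import ipaddress

# Source ranges that should never appear on packets arriving from the
# internet: RFC 1918 private space, loopback, and link-local.
BOGON_SOURCES = [ipaddress.ip_network(n) for n in (
    "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",
    "127.0.0.0/8", "169.254.0.0/16",
)]

def permit_inbound(src_ip):
    """Return False for packets whose source address could not
    legitimately be coming from the outside interface."""
    addr = ipaddress.ip_address(src_ip)
    return not any(addr in net for net in BOGON_SOURCES)
```

This mirrors an ACL that denies the private IP space inbound before the final permit: it only catches crude spoofing, since a spoofed public address still passes.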
There are many tools that need to work in concert with a firewall to protect against spoofing attacks, but the best any engineer can do is look at the firewalls placement in their network and design ACL's that factor in which traffic should logically ever be allowed to pass through a given interface.
If you need more detail let me know and I'll reword the answer for you!
No more reviews for today, for close vote Why?
Today I am reviewing close votes. After casting some votes, the system showed me a message: You have no more close votes today; come back in 6 hours.
As far as I know, there are 40 reviews per day in the close vote section, but it gave me this message after only 38 reviews.
You can see the status here:
Questions:
Is this a bug in the system, or is there some other reason?
Have I made a mistake in my reviews?
Is it possible to hit this condition in other review queues? I have seen many times that in other review sections somebody has done 21 reviews in a day, but here it is the opposite for me! Why?
You sure are awfully quick at reviewing duplicates... 5 per minute?
@BoltClock I am there is such reason.
While you're here, we have some concerns about several of your recent reviews: http://stackoverflow.com/review/low-quality-posts/7473915 , http://stackoverflow.com/review/triage/7434206 . I think you need to slow down and take a little more time reading the things you are reviewing.
It looks like you have used all your close votes, between the close votes queue and 'organically' closing questions outside of the queue.
For example, if I went and reviewed 20 questions in the close vote review queue, and voted to close all of them, I would now have 30 close votes left for the day. If I then voted to close 30 questions outside of the queue, i.e. by finding them on the homepage, I would now have 0 close votes left.
The close vote queue knows this, and won't let me review any more - because I wouldn't be able to cast a close vote on those questions that I think should be closed, and that's the whole point of the queue. I would have only done 20 close vote reviews that day, but I still wouldn't be able to review any more.
This is what you're running up against, only with slightly different numbers.
It's not a bug. You're out of close votes. You get 50 per day, and you used them all. 37 of them you used in review.
It's quite possible that you've made multiple mistakes; as others have noted, you reviewed those 38 questions very quickly. 37 you voted to close, 1 you opted to leave open; we've been working hard to reduce the false-positive rate of questions entering close review, but a 97% close rate is significantly higher than average. You might want to slow down and think a bit harder about the questions you're reviewing before making your decision.
Yes, this scenario is possible in several other review queues: if you're out of flags, you can't Triage (because that would prevent you from flagging questions that needed it if you came across one), if you're out of votes you can't review in First Posts or Late Answers (because that would prevent you from voting up/down when the post warrants) and if you're out of reopen votes you can't review in the Reopen queue.
Thanks man, this is a really needful thing for me to learn, and I hope it helps others too.
97% close rate is... really? why don't you ban me, then? My rate is full 100% close (except for known good audits)
Are you honestly surprised to learn you're an outlier, @gnat? I mean... It's not like this is the only area where that observation applies...
I would be surprised to learn that I'm an outlier in preferring to focus strictly on close-worthy questions
You're in the 1% of close reviewers who've done more than 1000 reviews over the past month, @gnat. And out of that group, only 4 people skip tasks more often than you. You've likely found an effective strategy, but it's one that eludes most others.
ah, an outlier "by strategy", I see. That's... rather sad. Not surprising though, since there are no badges to help people learn about filtering and skipping. You can't seriously expect reviewers to learn about the right approach by searching for obscure meta posts
|
STACK_EXCHANGE
|
This chapter describes new traffic shaping features added to FortiOS 5.4.
Traffic shaping policy IDs added to traffic logs (303802)
As of build 1013, traffic shaping policy IDs are now displayed in traffic logs and IP sessions. This allows you to easily identify which shaping policy is applied to traffic, even with multiple shaping policies configured. The sample log below includes the shaping policy ID (shapingpolicyid) along with the related shaper fields:
date=2016-01-29 time=15:35:25 logid=0000000013 type=traffic subtype=forward level=notice vd=vdom1 srcip=192.0.2.2 srcname="A" srcport=43041 srcintf="port3" dstip=203.0.113.55 dstport=80 dstintf="port11" poluuid=bcd3b008-c6bd-51e5-0e2c-2002e7a5774d sessionid=18364 proto=6 action=close policyid=2 policytype=policy dstcountry="Reserved" srccountry="Reserved" trandisp=snat transip=192.0.2.2 transport=43041 service="HTTP" duration=205 sentbyte=747285 rcvdbyte=26382426 sentpkt=12887 rcvdpkt=17592 shapingpolicyid=1 shapersentname="shaper400" shaperdropsentbyte=0 shaperrcvdname="shaper200" shaperdroprcvdbyte=14065762 appcat="unscanned" devtype="Fortinet Device" osname="Fortinet OS" mastersrcmac=33:5b:0e:ca:dd:dc srcmac=33:5b:0e:ca:dd:dc
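If you need to pull the shaping fields out of such a log line programmatically, a naive key=value split works for fields whose quoted values contain no spaces (a sketch, not an official parser; fields like devtype="Fortinet Device" would need real tokenizing):

```java
import java.util.HashMap;
import java.util.Map;

public class ShapingLogParser {
    // Split a space-separated key=value FortiOS traffic log line into a map.
    // Quotes around values (e.g. srcintf="port3") are stripped. Quoted values
    // that themselves contain spaces are NOT handled by this simple split.
    static Map<String, String> parse(String line) {
        Map<String, String> fields = new HashMap<>();
        for (String token : line.trim().split("\\s+")) {
            int eq = token.indexOf('=');
            if (eq > 0) {
                String key = token.substring(0, eq);
                String value = token.substring(eq + 1).replace("\"", "");
                fields.put(key, value);
            }
        }
        return fields;
    }

    public static void main(String[] args) {
        String log = "date=2016-01-29 policyid=2 shapingpolicyid=1 "
                   + "shapersentname=\"shaper400\" shaperdroprcvdbyte=14065762";
        Map<String, String> fields = parse(log);
        System.out.println("shaping policy: " + fields.get("shapingpolicyid"));
        System.out.println("sent shaper:    " + fields.get("shapersentname"));
    }
}
```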
Traffic shaping GUI improvements (300055) - not necessarily a "feature", but something to be aware of.
Traffic shaping GUI updates (290083) - I am not sure it's worth documenting this change.
New Traffic Shaper Policy Configuration Method (269943)
Previously, traffic shapers were configured in Policy & Objects > Objects > Traffic Shapers and then applied in security policies under Policy & Objects > Policy > IPv4 . In FortiOS 5.4, traffic shapers are now configured in a new traffic shaping section in Policy & Objects > Traffic Shapers.
The way that traffic shapers are applied to policies has changed significantly in 5.4, because there is now a specific section for traffic shaping policies in Policy & Objects > Traffic Shaping Policy. In the new traffic shaping policies, you must ensure that the Matching Criteria is the same as the security policy or policies you want to apply shaping to. The screenshot below shows the new 5.4 GUI interface:
Traffic shaper support has also been added based on the following:
- Source (Address, Local Users, Groups)
- Destination (Address, FQDN, URL or category)
- Service (General, Web Access, File Access, Email and Network services, Authentication, Remote Access, Tunneling, VoIP, Messaging and other Applications, Web Proxy)
- Application Category
- URL Category
Creating Application Control Shapers
Application Control Shapers were previously configured in the Security Profiles > Application Control section, but for simplicity they are now consolidated in the same section as the other two types of traffic shapers: Shared and Per-IP.
To create an Application Control Shaper, you must first enable application control at the policy level, in Policy & Objects > Policy > [IPv4 or IPv6]. Then, you can create a matching application-based traffic shaping policy that will apply to it, in the new Traffic Shaping section under Policy & Objects > Traffic Shaping Policy.
New attributes added to "firewall shaping-policy" (277030) (275431)
The two new attributes are status and url-category. The status attribute controls whether the policy is enabled or disabled. The url-category attribute, when set to 0, applies the shaping policy to sessions that have no URL rating, i.e. sessions where no web filtering is applied.
config firewall shaping-policy
    edit <policy ID>
        set status enable
        set url-category [category ID number]
    next
end
New button added to "Clone" Shapers
You can now easily create a copy of an existing shaper by selecting the shaper and clicking the Clone button.
|
OPCFW_CODE
|
Packaging drawable resources with a JAR?
I've built a semi-push service (checkins for notifications from a centralized http server) that I'd like to distribute to a few friends and customers, so they can easily add push-like notifications to their apps without much hassle.
I've gotten far enough to package the JAR and include it in a few of my other projects, but now I'm hitting a brick wall with regard to resource packaging --
In my PushNotify project, I have a resource: R.drawable.goldstar
I draw it twice in the application, once by referencing R.drawable.goldstar, and once by referencing resources.getIdentifier('goldstar', 'drawable', 'com.com.pushnotify')
When I package PushNotify as its own APK, the gold star correctly appears in both places.
I right click in eclipse, Export Project, as JAR file, un-check "AndroidManifest.xml" from included files, and click "Finish".
I can verify that the size of this JAR file increases when I add more drawable resources.
When I then package DependantApp.apk, including PushNotify.jar in the build path, the gold star correctly appears in both places.
If I uninstall PushNotify.apk, the gold stars disappear from DependantApp.apk as if they were not packaged with the JAR; a 0 value is returned for getIdentifier('goldstar', 'drawable', 'com.com.pushnotify').
How can I distribute a JAR with drawable resources, and have them appear in the dependent apps? Apps including my JAR never need to access the icons I am distributing, only my own code does.
Despite much nay-saying in the search results I find, I am certain this is somehow possible, because a) my JAR file grows when I add new images, and b) I've included JAR files that come with icons before (Airpush does this, as does UrbanAirship).
Thanks in advance!
How can I distribute a JAR with drawable resources, and have them appear in the dependent apps?
That is not presently supported. Creating Android library projects that can distribute resources is on the tools roadmap and will hopefully come out in the not-too-distant future.
I have consumed JAR files that do exactly what I am talking about. Please read my last paragraph.
@linked: What you are doing is not presently supported. The only supported means of distributing resources as part of a reusable component is the Android library project. And those cannot be directly packaged as JARs. "At this time, library projects containing the jar file directly instead of the source code are not yet supported." (http://tools.android.com/recent/buildchangesinrevision14) And while that document indicates that R15 was supposed to add support for this, it did not, nor did R16.
@linked: While there are rough workarounds, none involve the resources actually being in the JAR at the present time. It is not even clear to me whether resources-in-the-JAR is part of what the Tools team is planning on doing, though I certainly got that impression. Java has had its own "resources" construct for longer than Android, and it supported stuff in JARs, but you would not be accessing them via R. constants -- despite the same names, "resources" in Android bear no relation to "resources" in the old Java style.
@linked: My best guess, therefore, is that the JAR files you cite are using the old Java resources (and therefore will have challenges when it comes to differing screen densities, etc.). Whatever you are doing with the project-export-as-JAR has never been supported, where by "supported" I mean "it is commonly known what will happen, is documented, etc.".
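For reference, the "old Java style" resource lookup mentioned above works like this in plain Java; the resource used here is the class's own .class file, purely because it is guaranteed to be on the classpath — any file bundled in a JAR is loaded the same way, addressed by path rather than via R constants:

```java
import java.io.IOException;
import java.io.InputStream;

public class JarResourceDemo {
    // Classic Java resource lookup: streams come from the classpath/JAR,
    // addressed by a path, not by generated R constants.
    static int resourceSize(Class<?> anchor, String path) throws IOException {
        try (InputStream in = anchor.getResourceAsStream(path)) {
            if (in == null) return -1;  // resource not found on the classpath
            int total = 0;
            byte[] buf = new byte[4096];
            for (int n; (n = in.read(buf)) != -1; ) total += n;
            return total;
        }
    }

    public static void main(String[] args) throws IOException {
        // Every compiled class file is itself a loadable resource:
        System.out.println(resourceSize(JarResourceDemo.class, "JarResourceDemo.class") + " bytes");
    }
}
```

Note the trade-off CommonsWare raises: resources loaded this way are plain files, so Android's density/locale qualifier selection does not apply to them.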
Thanks for your comprehensive answer (and followup). I am frustrated that this appears to be the most correct information available at this time, but you appear to be right. I will commence banging my head against the wall until the Android team decides to implement what I want, or maybe I will send a patch upstream ;) Until then, I guess I'm sending out .zip files. Cheers!
What about now? Is packaging resources in jars still not supported?
@RavitD: Correct. The new Gradle-based build system supports packaging Android library projects as AAR files, which do contain resources.
I don't think this is possible in the way you are trying to do it. Android "builds" resources inside its APKs. Resources in JAR files are (as far as I know) ignored.
You could, however, build it as an "android-library" where these XML layouts are included. If you don't want the whole code to be easily readable, you could put only these resources and the classes that use them inside the "android-library", while the rest of your library stays inside the .jar file.
This is not exactly what you were asking for, but here is an interesting option anyway:
Instead of getIdentifier('goldstar', 'drawable', 'com.com.pushnotify') use getIdentifier('goldstar', 'drawable', getApplicationContext().getPackageName())
And distribute the images/resources as a .zip to be unzipped, installed and compiled with the target app (the one that will use your shared .jar)
And don't include the images in the .jar.
This is not tested but that should work.
|
STACK_EXCHANGE
|
Custom Role and Membership Provider NotImplementedException in 4.x
I have a project using ASP.NET's built in role and membership provider management (using the aspnet_Roles and aspnet_Membership tables). The controls that Sitefinity will manage currently use this. On the 'Membership & Role Providers' Webinar it shows how to implement custom providers by implementing your own custom provider using the System.Web.Security.RoleProvider and System.Web.Security.MembershipProvider and referencing the assembly and adding settings in the web.config. Since I'm using ASP.NET's built in role and membership management directly I shouldn't need to do all these steps. I saw an example that achieves this using the System.Web.Security.SqlRoleProvider and System.Web.Security.SqlMembershipProvider since they implement the System.Web.Security.RoleProvider and System.Web.Security.MembershipProvider. I added settings to the web.config file to achieve this in 4.x but am getting a NotImplementedException for (Telerik.Sitefinity.Security.Data.RoleDataProvider.GetRolesForUser)
I'm using 4.X so am not sure if things are different in that version. From the exception it looks like I need to implement a Telerik RoleDataProvider, but I'm not sure if this is the issue or if some other settings need to be changed/added.
I've included the web.config settings I added as well as the exception I'm getting below. The connectionStrings section is not shown but is in the web.config. Any advice on the steps required to use the ASP.NET built-in role and membership management with Sitefinity 4.x for custom role and membership management would be appreciated!!!
<add name="SitefinitySiteMap" type="Telerik.Sitefinity.Web.SitefinitySiteMap, Telerik.Sitefinity" taxonomyProvider="OpenAccessDataProvider" pageTaxonomy="Pages" rootNode="FrontendSiteMap" pageProvider="OpenAccessDataProvider"/>
<roleManager enabled="true" cacheRolesInCookie="true" defaultProvider="SqlProvider">
type="System.Web.Security.SqlRoleProvider,System.Web, Version=220.127.116.11, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
type="System.Web.Security.SqlMembershipProvider, System.Web, Version=18.104.22.168, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
[NotImplementedException: The method or operation is not implemented.]
Telerik.Sitefinity.Security.Data.RoleDataProvider.GetRolesForUser(Guid userId) +123
The method which is called, GetUserLinks, is not implemented yet, and we throw this error explicitly. Currently our wrapper for the standard Role provider is not finished, and this is why you get this error.
the Telerik team
Do you think this will be in the RC?
The current plan is for October - November. We will try to release another version before the RC, but I cannot give you a certain date now.
the Telerik team
|
OPCFW_CODE
|
no bedtime reminder notification
Describe the bug
Really appreciate the bedtime and wake up feature! Unfortunately I don't receive a bedtime notification.
To Reproduce
enable bedtime reminder notification
no reminder rings
Expected behavior
Reminder should ring as set
App version
2.4
Device (please complete the following information):
Model: Google Pixel 5
OS: GrapheneOS (Android 14)
Additional context
battery optimisation disabled ofc
You have to enable bedtime alarm and wakeup alarm.
Thanks for your swift answer!
Yep I've done that already. The wake-up setting creates an alarm whereas the bedtime one doesn't if that helps.
Actually just got this:
type: crash
osVersion: google/redfin/redfin:14/UP1A.231105.001.B2/2024060500:user/release-keys
package: com.best.deskclock:2005
process: com.best.deskclock
processUptime: 198 + 342 ms
installer: com.machiav3lli.fdroid
java.lang.RuntimeException: Unable to start service com.best.deskclock.bedtime.BedtimeService@1f84ed4 with Intent { act=com.best.deskclock.action.LAUNCH_BEDTIME cmp=com.best.deskclock/.bedtime.BedtimeService }: java.lang.NullPointerException: Attempt to invoke virtual method 'boolean com.best.deskclock.data.Weekdays.isBitOn(int)' on a null object reference
at android.app.ActivityThread.handleServiceArgs(ActivityThread.java:5079)
at android.app.ActivityThread.-$$Nest$mhandleServiceArgs(Unknown Source:0)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2397)
at android.os.Handler.dispatchMessage(Handler.java:107)
at android.os.Looper.loopOnce(Looper.java:232)
at android.os.Looper.loop(Looper.java:317)
at android.app.ActivityThread.main(ActivityThread.java:8532)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:552)
at com.android.internal.os.ExecInit.main(ExecInit.java:50)
at com.android.internal.os.RuntimeInit.nativeFinishInit(Native Method)
at com.android.internal.os.RuntimeInit.main(RuntimeInit.java:359)
Caused by: java.lang.NullPointerException: Attempt to invoke virtual method 'boolean com.best.deskclock.data.Weekdays.isBitOn(int)' on a null object reference
at com.best.deskclock.bedtime.BedtimeService.getNextBedtime(BedtimeService.java:112)
at com.best.deskclock.bedtime.BedtimeService.scheduleBedtimeMode(BedtimeService.java:292)
at com.best.deskclock.bedtime.BedtimeService.startBedtimeMode(BedtimeService.java:268)
at com.best.deskclock.bedtime.BedtimeService.onStartCommand(BedtimeService.java:69)
at android.app.ActivityThread.handleServiceArgs(ActivityThread.java:5061)
... 11 more
I guess it's because no day of the week is checked?
Nope, Monday to Thursday and Sunday checked
@j-lakeman Could you please tell me step by step what you did to make this bug appear?
I can't reproduce this bug on my devices.
The issue here is starting the bedtime mode when the DataSaver hasn't regained its data, so basically not touching the app when bedtime launches or its reminder notification. There weren't any other errors because I gave them default values in DataSaver.
@BlackyHawky To fix this issue I would start by adding the line saver.restore(); as line 62 in BedtimeService. At the moment I can't test this, but that line should fix this issue.
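A minimal standalone sketch of the failure mode and the proposed fix; the class and field names below only mirror the app's DataSaver/Weekdays and are hypothetical simplifications, not the project's actual code:

```java
public class BedtimeSketch {
    // Hypothetical stand-ins for the app's Weekdays and DataSaver classes.
    static class Weekdays {
        boolean isBitOn(int day) { return day >= 0; }
    }

    static class DataSaver {
        Weekdays enabledDays;              // null until restore() has run
        void restore() { enabledDays = new Weekdays(); }
    }

    // Without calling saver.restore() first, enabledDays is still null and
    // calling isBitOn() on it throws the NullPointerException from the crash log.
    static boolean nextBedtimeScheduled(DataSaver saver, boolean restoreFirst) {
        if (restoreFirst) saver.restore();           // the proposed one-line fix
        if (saver.enabledDays == null) return false; // defensive fallback
        return saver.enabledDays.isBitOn(0);
    }

    public static void main(String[] args) {
        System.out.println(nextBedtimeScheduled(new DataSaver(), true));   // true
        System.out.println(nextBedtimeScheduled(new DataSaver(), false));  // false
    }
}
```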
Since I don't know how to reproduce this problem, I trust you.
Do the tests when you can and let me know. 😉
Sorry for the late reply (I totally forgot about this issue). Just done the tests; this commit should fix the issue.
@Nilsu11
Thank you very much.
I'll publish the new commit as soon as I have time with your modification 😉
|
GITHUB_ARCHIVE
|
M: On Android Cameras - cianclarke
http://cianclarke.com/blog/?p=184
R: slantyyz
OP:
>> But wait, what's this? Is that a home screen I see? But of course! There's
the memos app, and there's your contacts, and there's the web browser.
>> It turns out, this is just another unfortunate example of Samsung getting
it all wrong.
I think Samsung deserves a bit of slack on their first kick at the can. Look
at the other two Android cameras - the Polaroid and the Nikon - did they get
it right? If I was going to bet on who will get Android right on a camera
first, I'd pick Samsung over Nikon.
Give it a few software iterations, when everyone understands what Android's
fit is with a dedicated camera, and things will get better.
While it's easy to make an argument for Apple getting things right on the
first time, the bottom line is that Samsung isn't Apple and never will be.
R: cianclarke
Very fair point - I wasn't even aware Nikon had an attempt, thanks for
pointing this out! I agree, after a few iterations they likely will get it
right, and I'm looking forward to when they do! I just think they've had some
seriously massive oversights that would have been relatively easy to resolve.
R: slantyyz
The one thing that Samsung's camera division does well is listen to feedback.
I imagine they'll be seeing what the early adopters on this camera will say.
Because of user feedback, their NX camera line (which I think needs to go
Android) actually has one of the most complete (and affordable) lens
ecosystems for mirrorless cameras. It's a highly underrated platform.
|
HACKER_NEWS
|
How to recognize with just name and last name if the person is a political exposed person
First of all, I am not sure if this question is more about Machine Learning or Artificial Intelligence; if not, just let me know and I will delete it.
At my company we need to create a solution for banks, where a client comes in and wants to open a bank account.
They need to know if that person is a politician or politically exposed person (PEP); maybe they work in the European Commission, or they are family of a PEP, for example.
The business users has lots of data sources where to get these people, for example: http://www.europarl.europa.eu/meps/en/full-list/all
They want to train a model (Machine Learning) where the end user can enter a name, Bill Clinton for example, and the system returns the probability of that person being a politician or not.
Obviously some persons are 100% political and the percentage will be 100%.
But if they enter a name that is not in any of their data sources, how would I train a model to decide if it's a PEP or not?
quite confused
thanks
From just the name, even "Bill Clinton" is not going to be 100% political. In fact, probably less than 20%. Most names are repeated. According to Google, I am a well-known jazz musician in the USA, the author of a fishing book in Australia, a Microsoft developer in the UK.
good point indeed
In a certain sense, this is like asking "given just the last digit, write an AI to determine if a number is prime". One, you do not have enough information to start with, and two, you skipped the step of determining whether an AI is in fact the right tool for the job.
But if they enter a name that is not in any of their data sources, how would I train a model to decide if its pep or not?
Based on just a person's name and nothing else, the accuracy of this model is going to be very low. Consider that most first name, surname combinations in Europe are going to be repeated across the population.
However, the accuracy might still be slightly better than guessing. Some families and social classes could be more likely to be involved in political work, and a statistical model would pick up on that.
To train the model, take your positive names, and combine with a random selection of names of people that are known to be "not political" or similar enough. It doesn't matter if some of the names are the same provided you are confident in your data. Probably you could just take a phone directory or the electoral register or some other list of general names. Provided your "political" people are a small fraction of all people, this will work well enough even if you have some of them in the negative class.
Ideally you mix those name groups in the rough proportion that the bank expects to see "political" and "non-political" customers, so that your data set is a good representation of the target population.
Then you train a classifier on the names. As this is text and sequence data, you will need a solution for that. Possibly LSTM would be suitable architecture, but so might some feature selection from the names in a more simple ML model.
Remember to hold back some data (both positive and negative cases) for cross-validating and testing the model.
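The "more simple ML model" option can be sketched as a character-bigram naive Bayes classifier; the smoothing constant and training names below are illustrative assumptions, not recommendations:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class NameClassifier {
    // Character-bigram counts per class; "^"/"$" mark name boundaries.
    private final Map<String, Integer> pep = new HashMap<>();
    private final Map<String, Integer> other = new HashMap<>();
    private int pepBigrams = 0, otherBigrams = 0, pepNames = 0, otherNames = 0;
    private static final double V = 729.0;  // rough bigram vocabulary size for smoothing

    private static List<String> bigrams(String name) {
        String s = "^" + name.toLowerCase() + "$";
        List<String> out = new ArrayList<>();
        for (int i = 0; i < s.length() - 1; i++) out.add(s.substring(i, i + 2));
        return out;
    }

    public void train(String name, boolean isPep) {
        for (String bg : bigrams(name)) {
            if (isPep) { pep.merge(bg, 1, Integer::sum); pepBigrams++; }
            else { other.merge(bg, 1, Integer::sum); otherBigrams++; }
        }
        if (isPep) pepNames++; else otherNames++;
    }

    // Laplace-smoothed naive Bayes log-odds that a name is in the PEP class.
    public double pepLogOdds(String name) {
        double lo = Math.log((pepNames + 1.0) / (otherNames + 1.0));  // class prior
        for (String bg : bigrams(name)) {
            double p = (pep.getOrDefault(bg, 0) + 1.0) / (pepBigrams + V);
            double q = (other.getOrDefault(bg, 0) + 1.0) / (otherBigrams + V);
            lo += Math.log(p / q);
        }
        return lo;
    }

    public static void main(String[] args) {
        NameClassifier c = new NameClassifier();
        c.train("clinton", true);   // toy positive example
        c.train("smith", false);    // toy negative example
        System.out.println(c.pepLogOdds("clinton") > c.pepLogOdds("smith"));  // true
    }
}
```

With real data the two training loops would run over the positive list and the phone-directory negatives described above; as the answer warns, expect the resulting scores to separate the classes only weakly.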
Expect the accuracy of your model when testing to be low. Very low. I would not at all be surprised to find the end result unusable by itself.
If this is seriously to be part of some bank's account setup process, there needs to be additional data used later in the process. A gate for additional checks based purely on someone's name will perform very poorly in my opinion.
Thanks for the answer, that's why I asked. I am not an expert, but guessing something with just the name seems very difficult, if not impossible, to get a usable result.
@LuisValencia: Yes, unless you pick a target that is strongly correlated with the name, such as what page of a telephone directory they will appear on, or whether two people are related (and this second one is never something you really want to guess with no other data)
I'm not sure why this is upvoted. For a bank, a solution like this is criminally bad. Banks have a legal "Know Your Customer" obligation. Using something like this would be a reason for the supervising authority to summon the CTO for a chewing out. Seriously.
@MSalters: I agree the situation is bad if this was used for real. That is not what the question is about however, and out of scope for AI Stack Exchange. I think the caveat in the last paragraph covers all that is necessary here. There is no evidence that anyone is considering doing this for real other than a thought experiment, and even if there were I think the correct place to handle those concerns would be Workplace, Security or other Stack Exchanges. All this site can realistically offer is "statistically the result will be very poor".
@MSalters: If you can offer an improvement to the answer which covers concerns, but doesn't jump on the OP just for asking a "can AI really do this" kind of question (answer, as here: yes, sort of, but it would not be usable, or possibly: no, not really, but you might see the needle move) then I'd be happy to incorporate it provided it doesn't turn the whole answer into: "don't do this"
|
STACK_EXCHANGE
|
15 Essential Game Developer Skills For Your Resume And Career
With a better understanding of these skills, aspiring game developers can set themselves up for success. The stages of game development are pre-production, production, testing, pre-launch, launch, and post-launch. Unity allows users to develop games in 2D as well as 3D, in languages such as C#, Java, or Boo. A Unity 3D games developer designs and creates high-quality 2D and 3D games and applications using Unity software.
Whether in-person or not, you must speak clearly and politely without downplaying your message. Because they’re so adaptable to the needs of a project, technical artists are highly sought after by game studios and are paid accordingly. So if you’ve already mastered game development, but want to contribute more creatively to the games you make, becoming a technical artist could be the perfect career path for you. In the largest studios, lead game developers are making over $100,000 a year on average. It should be noted, however, that these metrics can vary from studio to studio, especially when it comes to Indie developers. The resources an Indie game project might have at its disposal are much smaller, and workers sometimes, though not always, fall below the $66,000 a year average.
Worked with animated sprites and basic Android layout, designing graphical user interface and game logic. Worked independently on designing and developing an application for IOS and Android platform for mobile devices using Unity3D. Designed the project’s path, and managed an android programmer, a graphic designer and a musician. Deep understanding of computer science fundamentals and design patterns such as MVC, MVVM, MVP. By interpreting design concepts, ideas, and needs into an entertaining and operational game. It is always better to set a budget for your game project by considering all the factors that will go into producing it, especially hiring.
Integrated Flash with proprietary operating systems, messaging system, JAVA server systems, and custom FlashPlayer build. Created over 500 illustrations for 8th grade math problems as well as assets for GUI and game characters. Helped facilitate original and modification math development with in-house statisticians.
Video game development has a lot in common with software development differing in the nature of teams that have to link with one another. Chasing heavyweight deadlines to get the product ready for market launch is always challenging. So too is going back to the drawing board and implementing code iterations on short notices.
Also mention any experience you have with designing game graphics or animation. Be sure to detail your passion for gaming and your excitement for creating new and innovative games. Video game development is typically seen as a highly coveted career, especially for those with an established love of video games. Let’s take a look at some of the educational recommendations and skills you can build that could make you more competitive in your job search. If you have a mind for mathematics, understanding codes, and learning how things work, consider looking into video game development. US-based company here to help the gaming community and individuals in all phases of video game development from the ground up.
What is a Video Game Developer?
Some developers also use their skills to help animate the 3D models once they’ve been made, even building playable demos to test how everything they’ve been coding is working. By observing these gaming trends, you won’t just learn the different aspects of games, but you will also get to know about the competition that exists in the gaming world.
In addition, it would be beneficial to showcase your portfolio of previous work during the interview. As a game developer, you will need to highlight your skills in several areas. First, you will need to be able to show that you have strong technical skills. This means being able to code well, and also being able to use game engines and other tools. Secondly, you will need to be able to design games that are fun and engaging. This means having a good understanding of game mechanics and level design.
User interface design
They might not be relevant to the problem at the moment, but the feedback can help in future to make improvements. Published Unity3D games and development technique tutorials on my website.
- So start developing one part of the gaming that you enjoy, and you could develop further.
- Quality assurance testers systematically test games for any flaws or bugs.
- It is important to put yourself in the shoes of the gamer and perceive the bigger picture.
- These are the skills you need for everyday interaction with people.
- Developed two iOS games focusing on providing unique educational values.
Whether you’re a graphic artist or a developer, resumes handed off to game studios are expected to have a portfolio of their work. Just make sure you fill your portfolio with work specific to game development itself. It’s good to show that you can do many different sorts of projects, but even better to show you can do one thing really well. You can learn how to create 3D models, use game engines, write in a programming language, and network all from the comfort of your own home. The only weakness with this method is that established colleges and universities carry some weight with employers on name recognition alone.
How to improve game developer skills
Every element in the game is handled by a team of game developers, illustrators, content curators, VFX aids, etc. The need for teamwork and cooperation is immense in this field. In addition to being passionate about games, it is also important to have strong technical skills. If you want to be a programmer, you need to know how to code; if you want to be a designer, you need to be proficient in level design tools like Unity or Unreal Engine.
Teaching Game Development
Our team adds vision, strategy, and hands-on efforts to position our clients for long-term success. Web developers, need to create visually appealing products that can engage players and immerse them in new worlds. Once everyone has settled in, the lead programmer will call everyone in the team for a meeting.
The field of gaming is usually looked upon as an easy one to develop, but it is one of the most challenging jobs. First, there is a requirement of creative insight for the game developer only through which there is an excellent foundation for the plot. Next, the characters, the product, their interaction needs to be well planned.
Then choose from 5+ resume templates to create your game developer resume. Both are equally important, like every other minute facet of the game development process. The game development process begins with steps that may vary depending upon the requirements of the game. If you notice someone is not getting involved during discussions, try asking them for their opinion. If they are uncomfortable, stop pressuring them and talk to them afterwards. Check the reason they’re not involved, e.g. engagement, anxiety or confidence.
Having the right technical skills on your resume can open up opportunities for work as a game developer. Two common skills you’ll find on job listings are experience with game development engines and the ability to code (often in C# or C++). To start, almost all game developers have at least a bachelor’s degree. Developers need to have an excellent understanding of computer science, know how to code in several programming languages and have some understanding of physics or software creation. In the worst cases, towards the end of a games development cycle, developers may have to work long shifts for weeks at a time.
Conversely, if you are a designer who knows nothing about programming, you will likely find it difficult to implement your ideas without the help of a programmer. In short, the more skills you have, the better off you will be as a game developer. A career in the game development industry can encompass many skills, both hard and soft.
This can be difficult at times, but it is essential if you want to succeed in the competitive world of video game development. A Unity game developer shoulders various responsibilities that we will discuss later in the blog. Unity, the third-party game engine, lets developers, artists, and designers create and operate experiences in real-time. It is extremely flexible, highly extensible, and well documented. It provides developers with the flexibility to create interactive prototypes for enhanced user experiences. Its physics, real-time 3D rendering, and animation are great tools that can help you explore and experiment with automation, simulations, and architecture.
Crunch does not always happen, but it is something to be prepared for if you wish to work in this field. Game engine development is the process of creating a game engine, which is a software framework designed for the creation and development of video games. A game engine is responsible for a game’s overall management and gameplay, including rendering, sound, physics, animation, networking, input/output and much more.
|
OPCFW_CODE
|
Why do the same numbers, written in slightly different form, produce two different outcomes in Java?
I successfully managed to solve a math problem using Java code. However, in doing so, I've also stumbled upon something weird.
In one of my calculations, I had to add 4 numbers: 13, 132, 320, and 201. I declared an int variable sum, and initialized it to 13 + 132 + 320 + 201.
int sum = 13 + 132 + 320 + 201;
When I printed the variable sum out, it returned a value of 666. Which makes sense, since adding those numbers on a calculator returns that value. However, I decided to then set the variable sum equal to something a little different. I decided to set sum equal to 013 + 132 + 320 + 201.
sum = 013 + 132 + 320 + 201;
However, when I printed this value out, I got 664. I decided to add one more zero to the left of 013.
sum = 0013 + 132 + 320 + 201;
And sum returned the same value, 664.
So basically, whenever I add the numbers just like that, without any unnecessary zeroes, sum returns the correct value. But when I add those unnecessary zeroes, sum returns a slightly different answer. Is there a reason why putting zeroes before a number causes a slightly different result?
Because when you added that zero, Java interpreted that number as an octal value, not decimal.
Octal. 013 is octal. Aargh, you lag; too late to the party
Just to give the math bit: 013 == 11.0 == 0b1011 == 0xB == 1.1e1
@r3mainer that's for python. This question is about Java
@phuclv I know, but the solution is identical.
@r3mainer no you can't close with a solution in a different language. There are already lots of duplicates in Java to close
In math, the base is not always decimal. So there must be a way to define numbers in base 16 (hexadecimal), base 8 (octal), base 2 (binary), and base 10 (decimal). In Java you can define those as below.
By adding the prefix 0b or 0B, you can define binary numbers (base 2):
byte b1 = 0b101;
By adding the prefix 0, you can define octal numbers (base 8):
int octal = 013;
By adding the prefix 0x or 0X, you can define hexadecimal numbers (base 16):
int hexaDecimal = 0x13;
You can define decimal numbers without using any prefix:
int decimal = 13;
Your question is basically this:
// Java interprets this as an octal number
int octal = 013;
// Java interprets this as a hexadecimal number
int hexa = 0x13;
// Java interprets this as a decimal number
int decimal = 13;
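Putting it together, the difference from the question can be reproduced with a small program (the class name is just for illustration):

```java
public class OctalSum {
    public static void main(String[] args) {
        int decimalSum = 13 + 132 + 320 + 201;  // all decimal literals
        int octalSum = 013 + 132 + 320 + 201;   // 013 is an octal literal, i.e. 11 in decimal
        System.out.println(decimalSum); // prints 666
        System.out.println(octalSum);   // prints 664 (11 + 132 + 320 + 201)
    }
}
```

The two sums differ by exactly 2 because octal 013 is decimal 11, not 13.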
So basically, adding zeroes before a number, while they mean the same thing in math, actually changes the base of the number? So 013 is 13 in base 8, and 0x13 is 13 in base 16?
|
STACK_EXCHANGE
|
Glossary - Index D
DDR: Double Data Rate - a type of Synchronous DRAM, or SDRAM. DDR SDRAM enables data transfers to occur on both edges of the clock cycle, thus doubling the memory throughput of the chip.
DDR RAM: An extension of SDRAM technology, DDR effectively doubles the available bandwidth by sending data on the falling edge of the clock cycle as well as on the rising edge.
Desktop: No, not the thing your keyboard and mouse are sitting on, but rather the main screen on your monitor where you find your icons, background wallpaper and maybe your screensaver.
DHCP: Dynamic Host Configuration Protocol - A method of automatically assigning temporary IP addresses and network configuration to computers on a network.
Digi board: Hardware used to build a RAS (remote access service) server.
DIMM: Dual In-line Memory Module - DIMM RAM is characterized by its 168 pins.
DIMM Slots: DIMM memory fits into special 168-pin slots located on the motherboard, usually adjacent to the processor.
DOCSIS: Data Over Cable Service Interface Specification - A standard for transferring internet data over cable lines.
Dot Pitch: Used to describe the horizontal size of pixels on CRT and LCD displays. The smaller the dot pitch (for example 0.25 mm) the better the resolution of the display.
Double Click: Two clicks of a mouse button in quick succession. If the program detects a double click, it will often open the selected application.
DSL: Digital Subscriber Line - High-speed internet connection offered by telephone companies over existing phone lines.
DVD: Digital Versatile Disc - Introduced in 1996, the optical discs share the same overall dimensions of a CD, but have significantly higher capacities - holding from 4 to 28 times as much data.
DVD Video: Popular format for high quality MPEG2 video and digital surround sound. Enables multi-language, multi-subtitling and other advanced user features.
DVD+RW: DVD ReWritable - It is the only rewritable format that provides full, non-cartridge, compatibility with existing DVD-Video players and DVD-ROM drives for both real-time video recording and random data recording across PC and entertainment applications.
DVD-Audio: An audio-only storage format similar to CD-Audio; however, it offers 16-, 20- and 24-bit samples at a variety of sampling rates from 44.1 to 192KHz, compared to 16 bits and 44.1KHz for CDs. DVD-Audio discs can also contain music videos, graphics and other information.
DVD-RAM: DVD Random Access Memory - A rewritable DVD disc endorsed by Panasonic, Hitachi and Toshiba. It is a cartridge-based, and more recently, bare disc technology for data recording and playback. DVD-RAM bare discs are fragile and do not guarantee data integrity. The first DVD-RAM drives had a capacity of 2.6GB (single sided) or 5.2GB (double sided). DVD-RAM Version 2 discs have double-sided 9.4GB discs. DVD-RAM drives typically read DVD-Video, DVD-ROM and CD media. The current installed base of DVD-ROM drives and DVD-Video players cannot read DVD-RAM media.
DVD-ROM: Read Only Memory - This read-only DVD disc is used for storing data and interactive sequences as well as audio and video. DVD-ROMs run in DVD-ROM or DVD-RAM drives, not DVD-Video players connected to TVs and home theaters. However, most DVD-ROM drives will play DVD-Video movies.
DVD-RW: DVD ReWritable - A rewritable DVD format that is similar to DVD+RW, but its capability to work as a random access device is not as good as +RW. It has a read-write capacity of 4.7 GB.
|
OPCFW_CODE
|
S/MIME (Secure/Multipurpose Internet Mail Extensions) extends MIME with security functions: it can encapsulate a MIME entity, together with digital signatures, encryption information, etc., into a secure object. RFC 2634 defines enhanced security services, such as signed receipts from the receiver, which ensure that the recipient cannot deny having received the message.
3GPP TSG SA WG3 Security — S3#34 S3-040557
6 - 9 July 2004, Acapulco, Mexico
Source: Ericsson
Title: MBMS Download Protection
Document for: Discussion and decision
Agenda Item: MBMS

1 Introduction

This paper discusses the protection of downloaded objects in MBMS.

2 S/MIME

As agreed in SA3#33, S/MIME was supposed to be the working assumption for protection of MBMS download. However, it seems that it does not suffice to use S/MIME alone, since S/MIME will not be able to protect the integrity of the file delivery table (FDT) in FLUTE. This implies that if S/MIME is to be used, it probably should be combined with another mechanism that also protects the FDT. The protection of the FDT is discussed in Section 4.

For the confidentiality part, S/MIME can be used with a pre-shared secret. This pre-shared secret should be used to wrap the actual encryption key, which is carried in the S/MIME container. A new attribute specifying the MSK ID and MTK ID could be specified in the FDT XML-schema to allow the UE to retrieve the correct keys. When it comes to integrity protection, there are several possibilities.

2.1 Signature

To protect the integrity of the S/MIME container, the protocol provides the use of signatures. But the use of signatures implies that there has to be a public key infrastructure in place. Furthermore, public key operations consume more resources, both computationally and bandwidth-wise. If the FDT is to be protected, it can be provided with a signature as well; e.g., the coverage of the S/MIME signature could be changed so that the FDT is covered too.

2.2 Message authentication code

S/MIME does not include integrity protection of the container by symmetric key methods, so to have this functionality the protocol must be extended with a MAC.

• The obvious approach would be to compute HMAC/SHA-1 over the S/MIME container using the MTK_I. The MAC would then be appended to the S/MIME container. This approach is not specified in any other specification, so it would be a pure MBMS extension.

• Integrity protection of the S/MIME container can also be achieved by, e.g., letting the HMAC described in Section 4 cover the container as well. This can be done by setting the URI attribute of the Reference element in SignedInfo equal to the URI in the Content-Location attribute in the FDT.

3 XML encryption

An alternative to using S/MIME is to use XML-encryption. If XML-signatures are chosen to protect the FDT, and possibly also the downloaded MIME object, it seems natural to also use XML-encryption to confidentiality protect the MIME object. The use of S/MIME for the confidentiality part only is a bit of a waste, since the integrity protection is not used. An example of the usage of XML encryption of the downloaded object is given below.

<?xml version='1.0'?>
<EncryptedData xmlns='http://www.w3.org/2001/04/xmlenc#' MimeType='text/plain'>
  <CipherData>
    <CipherValue>A23B45C56</CipherValue>
  </CipherData>
</EncryptedData>

This XML document would be transferred instead of the file. The actual file is present in encrypted form inside the CipherValue tag.

4 Protection of the FDT

An issue not previously discussed in SA3 is the protection of the FDT in FLUTE. The FDT does not really contain any sensitive information. If one has privacy concerns, it could be an idea to confidentiality protect the FDT (note that the MSK ID and the MTK ID must be left in the clear). The XML-encryption schema mentioned in Section 3 can be used to selectively encrypt everything but the key IDs. However, since the data is broadcast (or multicast), it is difficult (although maybe not impossible) to pinpoint a particular user that downloads a particular file. What is more of a concern is the integrity of the FDT. This is because an attacker could insert a fraudulent FDT and fool a user into downloading a file different from the one ordered (the attacker would also have to broadcast the fraudulent file).

Since the FDT is described as an XML schema, it is natural to use XML-signatures to protect the FDT. It should be noted that "signature" is used here in a wider sense than normal; it also includes Message Authentication Codes (MACs), such as HMAC/SHA-1. That is, an XML-signature does not necessarily have to be based on public key cryptography. This allows usage of symmetric key cryptography to integrity protect the FDT in a standards-adherent way. The following attributes of XML-signatures are useful to MBMS:

• SignatureMethod/DigestMethod: HMAC/SHA-1.

• The KeyName element of the KeyInfo element should be set to “MSKID:xxx… MTKID:yyy…”, where yyy… is the ID of the MTK used as input to the MAC and xxx… is the ID of the MSK used to protect the MTK with ID yyy…. The MTK is used to derive the integrity key MTK_I, which is used as input to HMAC/SHA-1.

The details of the other attributes must also be specified.

5 Conclusion

The problem handled in this paper is that S/MIME does not provide integrity protection using symmetric keys, and that the public key based signatures that it does provide do not cover the FDT. Confidentiality protection of the download can be achieved through S/MIME or XML-encryption. As mentioned in Section 2, there are ways to achieve integrity protection using S/MIME with public keys, or using symmetric keys if enhancements to the protocol are made. There is also the possibility to integrity protect the download using XML-signatures. It is noted that the FDT must also be considered to provide complete protection of the download.

6 References

Eastlake et al., "XML-Signature Syntax and Processing", RFC 3275, IETF
Housley, "S/MIME", RFC 3369, IETF
Eastlake et al., "XML Encryption Syntax and Processing", W3C
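To make the Section 4 proposal concrete, an HMAC-protected XML-signature over the FDT could look roughly like the sketch below. This is an illustration only: the Reference URI, digest/signature values, and key IDs are placeholders rather than values from any specification; the algorithm identifiers are the standard XML-signature URIs for HMAC-SHA1 and SHA-1.

```xml
<Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
  <SignedInfo>
    <CanonicalizationMethod Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
    <SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#hmac-sha1"/>
    <!-- URI matches the Content-Location of the delivered file (placeholder) -->
    <Reference URI="http://example.com/download/file.3gp">
      <DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
      <DigestValue>placeholder</DigestValue>
    </Reference>
  </SignedInfo>
  <SignatureValue>placeholder</SignatureValue>
  <KeyInfo>
    <!-- Tells the UE which MSK/MTK pair to use to derive MTK_I -->
    <KeyName>MSKID:xxx MTKID:yyy</KeyName>
  </KeyInfo>
</Signature>
```

The KeyName element lets the UE look up the correct keys, derive MTK_I, and recompute the HMAC to verify the FDT.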
|
OPCFW_CODE
|
Identify if a "Credit card" or "Debit card" by the card number
I can initially do a check to identify if I have a valid Payment Card Number by performing Luhn check algorithm.
But then I need to identify if it is a Credit card or a Debit card to perform the next task accordingly. I understand this depends on the first four characters, but I'm not sure about the exact ranges.
If someone can explain or provide with a link which explains this would be great. Thanks.
Edits...
In both these stackoverflow Links I don't see my question is answered. Therefore this can not be a duplicate for any of these.
In my case it doesn't read the card using a card reader; instead it uses the card number, CVV and expiry date to get the payment done (the user enters these things).
Also "yes" I can do a check to identify if it is a Visa card, Master Card or an American Express card. But no way to find if its a Credit or a Debit card.(For example if the card is a Visa card then how will I get to know that Visa card is a "credit card" or a "debit card". That's the exact question).
Did you check this ?
Hi The New Idiot, I checked that Wikipedia "Bank card number" article but it doesn't give any exact logic to differentiate these two card types. Also this is not going to be a duplicate of your link, as its accepted answer says to read the magnetic stripe to get more info. In my case it doesn't read the card; instead the user enters the card number, CVV and expiry date, and then I have to pass them all to the web service to get the payment done. Anyway, thanks for your quick response.
Hi mvp, does your link explain any logic for selecting the "credit" or "debit" card type? Yes, I can check if it is an American Express, Visa or MasterCard (your link also explains that), but I am still struggling to find the exact ranges that belong to the credit and debit types. Thanks anyway.
Normally you would ask the user who entered it if you need to know. That is what most web sites do.
Hi Peter Lawrey, yes it's a good way to go with. But according to the user requirement I have to check for a way to do it without allowing the user to select the option. BTW this is a mobile application in my case. Thank you.
Check this
http://stackoverflow.com/questions/72768/how-do-you-detect-credit-card-type-based-on-number
Hi, vishal mane. I can't find any solution in this post. For example think I have identified it as a Visa card. After that how will I check that particular visa card is a Credit or Debit card. That thing is not explained there in any of the answers. Thanks.
What about Ollie's answer?
The JavaScript link in Ollie's answer (http://www.eflo.net/mod10.htm) is, I believe, not accurate, as when I check with a UK Visa debit card it identifies it as a Visa credit card. I am analysing the Wikipedia answer.
FYI, the binbase database has worked flawlessly for me.
www.binlist.net is another service you can use and its free
@PeterKerr binlist.net doesn't give accurate results. I am from India, and when I tried my debit card number, it said that the card belongs to the USA.
from their website: "the database is very accurate, don't expect it to be perfect."
I think it is wrong to say this is a duplicate, since the question is about the card being credit or debit. This attribute is not something you can figure out from the number.
You cannot - unambiguously - tell the difference from just the PAN number. There is no official public database detailing this information and if the banks ever get together to make that happen they will be accused of collusion.
There are some resources online that could be used depending on what country you are in. Barclays offer a PDF document called "CARD IDENTIFICATION AND VALIDATION - Barclaycard" that is applicable to the UK but they will not offer any guarantees as to its accuracy. It is updated approximately quarterly to follow industry changes. You will have to google it as I cannot post a link to a pdf file.
By the way - just doing a LUHN check is not enough because the LUHN check is also used for many other numbers, EAN13 barcodes for example.
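For reference, the Luhn check mentioned in the question can be implemented in a few lines. This is a generic sketch (the class and method names are illustrative); as noted above, it only validates the checksum and tells you nothing about credit vs. debit:

```java
public class LuhnCheck {
    // Returns true if the digit string passes the Luhn checksum.
    // Walking from the right, every second digit is doubled; if the
    // doubled digit exceeds 9, 9 is subtracted. The number is valid
    // when the total sum is divisible by 10.
    static boolean isLuhnValid(String number) {
        int sum = 0;
        boolean doubleIt = false;
        for (int i = number.length() - 1; i >= 0; i--) {
            int d = number.charAt(i) - '0';
            if (doubleIt) {
                d *= 2;
                if (d > 9) d -= 9;
            }
            sum += d;
            doubleIt = !doubleIt;
        }
        return sum % 10 == 0;
    }

    public static void main(String[] args) {
        System.out.println(isLuhnValid("4111111111111111")); // true (well-known test PAN)
        System.out.println(isLuhnValid("4111111111111112")); // false (last digit changed)
    }
}
```

Input validation (digits only, sensible length) is omitted for brevity.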
Hi OldCurmudgeon, Thanks for your answer. Didn't find this before. Yes this application mainly will be using in UK but should be okay for international cards as well. I will have a deep look in this.
I disagree. I don't think they'd be accused of collusion if they made some way to identify debit vs credit cards. They already follow lots of standards, and they don't issue cards that can have the same numbers. We can identify VISA vs MasterCard based on the number alone. They could have used numbering schemes that overlapped but they chose to cooperate a little and keep the numbering unique.
|
STACK_EXCHANGE
|
When attempting to Dockerize my ASP.NET Core micro-service, I ran into an interesting issue. We use Docker’s Network feature to create a virtual network for our docker containers; but for some reason I wasn’t able to issue a curl request against the ASP.NET Docker container, it simply returned:
curl: (52) Empty reply from server
Well, crap. Docker was set up correctly, and ASP.NET Core applications typically listen on port 5000:
COPY $source .
ENTRYPOINT ["dotnet", "MyProject.dll"]
dotnet run output:
Hosting environment: Production
Content root path: /app
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.
Hm. It’s listening on localhost (the docker container), and I’ve exposed port 5000, right? Maybe my port settings are wrong when I spin up the container? Let’s check:
docker run -i -p 5000:5000 --name my-project -t my-project:latest
That checks out. The syntax for docker port mapping is HOST:CONTAINER, and I've made them 5000 on both sides just for testing, but I'm still not getting a response.
I found out that by default, ASP.NET Core applications only listen on the loopback interface; that's why it showed me localhost:5000. That doesn't allow it to be reached externally from the host. Sort of a bonehead moment on my part, but there you are. So to fix it, I can tell Kestrel to listen on other interfaces using the UseUrls() method:
public static void Main(string[] args) {
    var host = new WebHostBuilder()
        .UseUrls("http://*:5000") // Add this line
        .UseKestrel().Build();
    host.Run();
}
Now, it will listen on all network interfaces of the local machine at port 5000, which is exactly what I want.
I updated .NET Core 1.0.1 to .NET Core 1.1.0, and am no longer able to ‘Start Debugging’ (F5) from Visual Studio 2015.
When I tried to debug, I’d get the following error in the output console in Visual Studio:
The program ' dotnet.exe' has exited with code -2147450751 (0x80008081).
The program ' iisexpress.exe' has exited with code 0 (0x0).
The latest version of .NET Core is 1.1 (1.1.0 if you’re a package manager type), but Visual Studio 2015 only supports F5 Debugging with .NET Core 1.0.1.
You can still debug .NET Core 1.1.0 Applications in Visual Studio 2015, however. Here’s how:
Open the Package Manager Console (or use cmd.exe).
Run dotnet run from either console.
You should see the following prompt:
Go to the debug menu in Visual Studio and click “Attach to Process” (or use the Ctrl+Alt+P shortcut).
Look for dotnet.exe, and hold down shift while selecting all the dotnet.exe processes (it’s one of those three; and you can actually select all at the same time).
Click the “Attach” button.
You’ll know you’re debugging because set breakpoints will change from clear interiors to a red interior (indicating the breakpoints have been loaded).
I’ve been working on an ASP.NET Core application; and even though I constantly say I’ll never upgrade in a Dev Cycle, I did.
I updated from .NET Core 1.0.1 to .NET Core 1.1.0, and during the upgrade path, I updated the packages in Nuget using Visual Studio, and suddenly, everything stopped working, specifically I’d get the following error when trying to build:
Can not find runtime target for framework '.NETCoreAPP, Version=v1.0' compatible with one of the target runtimes:
It turns out, Nuget modifies the project.json file in Visual Studio in one specific crucial way. It changes the Microsoft.NETCore.App dependency, which was previously an object:

"Microsoft.NETCore.App": {
    "version": "1.1.0",
    "type": "platform"
}

into a plain version string. Notice the difference? The former JSON is an object, and Nuget replaces it with a string. To fix the error, simply make that entry look like the object it was previously, with its "type" : "platform" line restored.
This error should only happen if you try to upgrade in Visual Studio.
By default, the project output name when building your .NET Core project is the same as the name of the directory/folder that contains the project.json (which goes back to *.csproj as of Visual Studio 2017). So if you're like me, and you keep your projects like so:
- <snip>Lots of *.cs files and folders</snip>
Then you’re going to run into a problem because the name of your output file will be
src.dll, since the containing directory is
src. Not what you want.
To fix this, you can change a setting in the project.json (or the ProjectName.csproj file):
For project.json, add the outputName property under buildOptions:

"buildOptions": {
    "outputName": "MyProjectName" // add this
}

The string value of the outputName property will be the name emitted for your DLLs and settings.
For ProjectName.csproj in Visual Studio 2017, it's the same property.
This will allow you to name the project whatever you want, and not be dependent upon the convention of the folder name as project name.
Last week I went through Five Reasons Why I left the Startup Life; and I received a lot of good feedback on the post. Some of the feedback I received was that that post seemed negative, and looking back, of course it does (after all, it’s about why I left the startup life). But it doesn’t tell the whole story. While there are reasons why startups have more difficulties than other more established businesses, that isn’t to say you shouldn’t join a startup. In fact, I think if you have the risk tolerance, you should join a startup at least once. Here’s why:
You will learn far more in a startup than a corporate job. Since there is generally no support structure in a startup, you learn a lot very quickly. I had never programmed firmware before joining a startup, and my C would be best described as “non-existent” (I don’t consider a college course more than 14 years prior to be useful). Not only did I have to learn C, but I also had to learn how to ship firmware for a hardware product. That would never have happened in a corporate job. No one would have said, “Sure, let’s have this person who’s never shipped firmware before write our firmware.” In a startup, you often don’t have any other choice than to just do it.
You will shake your fear of shipping. There is a vast gulf between developers who can ship software and developers who can not. In a corporate job, it’s really easy to make changes to the software without moving the shipping needle at all. In a startup, if you don’t write software that directly contributes to shipping that software, it just won’t ship. It’s scary at first, but once you start shipping, you’ll be addicted to it and wonder how any software team can ever work any other way.
You will have no process to get in your way of shipping. Once businesses have shipped software, they assume risk for shipping software. If you don’t have any software, you don’t have any risk to shipping software. Your risk is purely the act of shipping. That’s part of the reason why startups don’t have any process around shipping. There’s no risk, because nothing is already shipped! In your corporate job, you know the word “change management”, and it probably makes you shudder. You have meetings upon meetings about change management, and you wish you could just ship software. In a startup, you will.
You control the culture of software and its architecture. While there are other parts of startup culture you can’t control, you will at least have control over the software aspects. Do you want to use AWS? Great. Want to use React? Sure. Want to make sure your software is open source? You can do that.
You will leave work fulfilled, and if you don’t, you only have yourself to blame. In a lot of businesses, there are multiple factors that affect happiness that you don’t control. Too much risk around using your favorite language, too much risk around shipping that neat feature. Every change takes the sign-off of three different people, two of which you don’t interact with when making that change. In a startup, it’s just you. No one can keep you from shipping software that solves your business problem. There’s a study that says people are happier when they exhibit control over their environment. Startups are as close as you’ll get to being in control of your environment (short of owning your own company).
There are lots of reasons to join a startup that I can't do justice to here, but I really think you should consider it at least once, and when you do, consider it with eyes wide open.
|
OPCFW_CODE
|
// Multi-level cascading select class
function linkageSelect(dom, opts) {
/*var initOpts = {
dataArr : [ProJson,CityJson,DistrictJson],
optionArr : [{value:"1",text:"Test 1"},{value:"2",text:"Test 2"},{value:"3",text:"Test 3"}]
}*/
var domOut = $(dom).children("select");
if (domOut.length === 0) {
return;
}
var domLength = domOut.length;
var firstSelect = domOut.eq(0);
$.each(opts.dataArr[0], function (index, value) {
var option = "<option value='" + value[opts.optionArr[0].value] + "'>" + value[opts.optionArr[0].text] + "</option>";
firstSelect.append(option);
});
for (var i = 0; i < domLength; i++) {
sf(i);
}
// Wire up the select at position `index` so that changing it
// repopulates the next select in the chain
function sf(index) {
var i = index;
// Fill the next select with options whose lastID matches selValue
function loadOption(selValue) {
var nextIndex = i + 1;
var curSelect = domOut.eq(i);
var nextSelect = domOut.eq(nextIndex);
var nextData = opts.dataArr[nextIndex];
if (nextSelect.length !== 0 && nextData) {
nextSelect.find("option:gt(0)").remove();
$.each(nextData, function (k, p) {
if (p.lastID == selValue) {
var option = "<option value='" + p[opts.optionArr[nextIndex].value] + "'>" + p[opts.optionArr[nextIndex].text] + "</option>";
nextSelect.append(option);
}
//if(nextSelect)
});
}
var curOp = curSelect.find("option:selected");
var nextOp = nextSelect.find("option").eq(1);
if (curOp.text() === nextOp.text()) {
nextOp.get(0).selected = true;
nextOp.trigger("loaded",[nextOp.val()]);
nextSelect.hide();
} else {
nextSelect.show();
}
}
domOut.eq(i).change(function () {
loadOption($(this).val());
});
domOut.eq(i).bind("loaded", function (e, data) {
loadOption(data);
});
}
}
|
STACK_EDU
|
Basic VM example
Hey, here is a UVM Basic VM. It is a REPL for a simple Basic dialect with a canvas that you can use to draw stuff.
I just tried your program. Sorry I didn't get back to you faster! I think your basic program is pretty great. Going to give you small feedback from a usability standpoint:
This is a nitpick/stylistic: the console says "type help for available c
ommands". I wish that commands was on its own line instead of having the c on a separate line.
I wish that there was an "EXIT" command. Imo people are likely to reach for that. Would make it more intuitive. May want to also support "QUIT".
I wish that the prompt started with a > or ] or $ character. Would make it easier to distinguish the commands I type from the printed output.
Could be nice to have a FILL command to fill the entire canvas with a given color? :)
There is no error message when an unsupported command is entered. It just says 0, then ready? I was expecting PRINT "HI" to work, but it didn't, and I wasn't sure why.
Supported color names in the help are in lowercase, but the color names you expect are actually uppercase RED, BLUE, etc.
I couldn't really figure out how to make an infinite loop happen with GOTO. Hmmm.
I'm being very nitpicky here. I think your basic program is already impressive, but there's definitely lots of little details of polish that could be added and together would make it even stronger and more intuitive to use.
I added a strncmp() function to include/string.h so you can make use of it if useful :)
https://github.com/maximecb/uvm/commit/4153cf10494c87fbb8e135b0afe9143942e8fa80
Appreciate the feedback
Will do/fix 1, 2, 3, 4, 6
For 5 when you enter an unrecognized command like say hello it will treat it as a variable and print its value.
so "LET a 1" will change the value of a to 1. Then typing a will print 1. So when you wrote PRINT, it thought that it was a variable, which had the value 0 since you did not initialize it.
For 7 you can have an infinite loop by loading the command
There are two ways of entering commands: a one off command that gets executed and never runs again and a "loaded" command. The latter can be done by prefixing the command with a number. After loading a command you can run using the RUN command.
For example
For 5 when you enter an unrecognized command like say hello it will treat it as a variable and print its value.
so "LET a 1" will change the value of a to 1. Then typing a will print 1. So when you wrote PRINT, it thought that it was a variable, which had the value 0 since you did not initialize it.
It's your basic VM so I respect your choices, but maybe looking up uninitialized variables should print an error instead of evaluating to zero? Would be more intuitive for new users? Or is that just the way basic typically works?
Ya, I can see how it could be confusing based on your feedback. Will update it to throw an error message.
I have completed everything we discussed. Let me know if I missed something.
Feel free to give more suggestions!
One thing that seems like it would be an improvement would be to update the UI every N iterations.
At the moment if you do a loop like:
10 "hi"
20 GOTO 10
You don't see anything update because it's an infinite loop, but this isn't how "old school" basic would behave. Same if you did a loop to plot things.
Another potential improvement would be to add a PRINT command. My understanding is that your basic basically doesn't need that, but I think people will expect it to be there, even if it's not strictly necessary.
Another thing would be to maybe have some random number generator 🤔
Those are 3 things I can think of. All of that being said, I'm happy to merge your current version if you want. It is already a very cool demo 👌 🙂
oh cool I see you merged it 🎉 🎉 🎉 🎉
I have been working on adding your ideas. They are all done. I have also added a SLEEP command that allows users to delay the time between commands.
Currently I am trying to figure out a good UX for help since there are just too many commands now that they won't fit on the screen.
I should be able to push something today. Will create a new PR since this one is merged.
I have also added a way to escape infinite loops by pressing Escape (you don't need to do that for the canvas to update, though).
Here is a cool demo!
Sweet!!! :D
|
GITHUB_ARCHIVE
|
If you are a Windows or Linux user doing cross-browser Web App testing, you often get into a situation where you need to get hold of a Safari-capable environment.
Whether you decide on using a Mac, iPhone, or iPad for cross-browser Web App testing, you usually have 3 choices:
- Get your hands on a real Apple hardware (Mac, iPhone, iPad)
- Use remotely through some free or paid services to other people’s hardware
- Virtualize the Safari capable environment
Using real Apple hardware is the most straightforward option, as well as the easiest one. But it's usually not available.
Using remote services is a very good solution, and many websites let you test your Web application/website on other people's hardware. Note that it usually involves paying for some plan/package or using limited trial options (which are sometimes enough, but not good for permanent and more serious cross-browser Web App testing).
Some of the options that have proved as good ones are:
- https://sizzy.co/ (a very nice parallel multi-device visual environment)
The limit of 1 minute per environment can be circumvented on Mac setups via a bug: changing the desktop resolution resets the timer.
This is the scenario we will use if nothing else suits our needs. We will use Microsoft Windows 10 as our host OS, Oracle VM VirtualBox as the virtualization software, and macOS Catalina 10.15 as the guest virtual machine.
Steps to Make Catalina VM
- Download macOS Catalina ISO
- Install Oracle VirtualBox on Windows PC
- Install VirtualBox Extension Pack
- Create and Configure a New Virtual Machine
- Configure VM through Command Prompt
- Start the Virtual Machine
- Create macOS Catalina Bootable Disk
- Perform a Clean Installation of Catalina
- Additional information
Download macOS Catalina ISO
Firstly, you need to download the ISO image for Catalina.
Install Oracle VirtualBox on Windows PC
Download the VirtualBox installation for Windows, and run the installation file.
[While you’re there, download the Oracle VM VirtualBox Extension Pack as well]
During the installation of VirtualBox, you might get a security warning about “Oracle Corporation”. You need to click Install here to proceed with the installation; if you click Don’t Install, the installation process will be terminated and you won’t be able to continue. If you see the security warning again, click the Install button again.
Install VirtualBox Extension Pack
This step is not strictly necessary, but it’s highly recommended: without it, macOS Catalina might not be fully compatible with VirtualBox and might cause problems during the installation.
Also, this ensures USB3.0 ports work normally.
Just execute the downloaded file. If VirtualBox Manager does not pick it up automatically, open VM VirtualBox Manager > File > Preferences > Extensions > click the + sign > select the downloaded file > OK.
To verify, open the VM VirtualBox Manager and go to File > Preferences > Extensions. You should see something like this:
Create and Configure a New Virtual Machine
Creating a virtual machine for macOS Catalina is much the same as creating one for Windows or Linux, with some minor changes. Follow the steps below to create a new virtual machine for macOS Catalina using an ISO file.
- Open up your VirtualBox application and click New
- Click Expert Mode and select the following options then click Create
- Type a suitable Virtual Machine Name
- Virtual Machine Location (a separate drive is recommended)
- Type (Mac OS X)
- Version (Mac OS X 64-bit)
- Memory 4 GB (recommended 8 GB or higher)
- Hard Disk: Select Create a virtual disk now
- In the Create Virtual Hard Disk window, choose the following options
- Disk Location: The default location should be fine unless you want to change to a new location
- File Size: you can specify the disk size here
- Hard disk file type: Select VHD (Virtual Hard Disk) format
- Storage on physical hard disk: Select the Dynamically allocated option. However, if you want to have a better performance disk then choose a fixed disk. The fixed disk will allocate the specified size from your host machine immediately.
- It’s time to edit the virtual machine to make it work. Select macOS Catalina VM and click Settings. Now, bring the following changes to macOS Catalina VM.
- Under System > Motherboard, increase the Base Memory to 12281 MB. You can use somewhat less memory if your system doesn’t have this much RAM.
- Uncheck Floppy from the Boot Order section.
- Under the Processor tab, increase the processors to 4 or higher. If your machine has only a low number of CPUs, it will still work with 2.
- On the Display window, increase the Video Memory to 128MB.
- From the Storage section, click the Empty > DVD icon, then click Choose a disk file and select the macOS Catalina ISO you downloaded earlier. Finally, click OK to close the macOS Catalina Settings window.
Configure VM Through Command Prompt
First, note your virtual machine’s name, because we will use it later on.
Important Note: We highly recommend quitting the VirtualBox program before executing the following code. If you don’t do it, your virtual machine might not proceed to the installation step.
Run the following commands line by line via Command Prompt (CMD). You can open the Command Prompt by pressing Windows+X in Windows 10 and selecting Command Prompt from the list. Or simply press the Windows key, type CMD, click Run As Administrator, and click Yes.
Replace the “VM Name” with your virtual machine name.
cd "C:\Program Files\Oracle\VirtualBox\"
VBoxManage.exe modifyvm "VM Name" --cpuidset 00000001 000106e5 00100800 0098e3fd bfebfbff
VBoxManage setextradata "VM Name" "VBoxInternal/Devices/efi/0/Config/DmiSystemProduct" "iMac19,1"
VBoxManage setextradata "VM Name" "VBoxInternal/Devices/efi/0/Config/DmiSystemVersion" "1.0"
VBoxManage setextradata "VM Name" "VBoxInternal/Devices/efi/0/Config/DmiBoardProduct" "Mac-AA95B1DDAB278B95"
VBoxManage setextradata "VM Name" "VBoxInternal/Devices/smc/0/Config/DeviceKey" "ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc"
VBoxManage setextradata "VM Name" "VBoxInternal/Devices/smc/0/Config/GetKeyFromRealSMC" 1
Now is a good time to fix the Catalina screen resolution. Later on, if you want to change the screen resolution, just quit VirtualBox, run the command with the desired resolution, and start the VM again. Beware: to avoid problems, do not put the VM in a saved state; shut it down before quitting VM VirtualBox Manager.
VBoxManage setextradata "VM Name" VBoxInternal2/EfiGraphicsResolution HxV
You have to change “VM Name” to your virtual machine name and HxV to a screen resolution such as “1280x720”. For more supported screen resolutions on VirtualBox and a reference on changing the screen resolution, download This File.
Start the Virtual Machine
Open the VirtualBox app and select your VM. Then, click the Start button.
Once you’ve started your virtual machine, you might have to choose which ISO to select. If you’ve attached only one ISO image you probably will not see the window below. Just make sure you’re using the right macOS Catalina ISO.
Create macOS Catalina Bootable Disk
Once you start your VM, a whole bunch of text will scroll by on the screen; don’t worry about it. Wait a few minutes, and you should see the macOS Catalina Language window. Select your language and click the Continue arrow.
Now, you’ll see the macOS Utilities window. From the list, select Disk Utility and click Continue.
Select your main VHD Disk and click the Erase button. Choose the following options and click Erase again.
- Name: Catalina Disk (you can type any name you want)
- Format: APFS (If you get an error, select Mac OS X Extended Journaled)
- Scheme: GUID Partition Map
Now, close the Disk Utility window.
From macOS Utilities, select Install macOS and click Continue.
Click Continue on the Install macOS Catalina screen.
Agree to the macOS Catalina License Agreement.
Select macOS Catalina Disk and click Install.
Then, wait for about 3 minutes and your VM will restart; macOS Catalina will be installed on your disk. You don’t need to do anything next; you’ll see the installation window for another 30 minutes.
Perform a Clean Installation of Catalina
Once it’s finished, you’ll see the macOS Catalina Welcome window. Select the following options. You can change most of the settings later on, so don’t worry about it for now.
- Select your Country and click Continue.
- Choose a Keyboard Layout and click Continue. If you’re not happy with default settings, you can customize the Settings.
- On the “Data & Privacy” window, click Continue.
- Select Don’t transfer any information now.
- Click Set Up Later, then Don’t Sign In, then Skip. You can add your Apple ID later.
- Click Agree to the Terms and Conditions and click Continue.
- Fill out the Full Name, Account Name, Password, and Hint. Then, click Continue.
- If you want to customize Express Set Up, you can click on Customize Settings, otherwise, click on Continue.
- On the Analytics window, click Continue.
- On the Screen Time window, click Set Up Later.
- Do not set up Siri for now. Just skip it.
- Select an appearance theme and click Continue. You can choose between dark/light mode and Auto mode.
- Wait a few seconds for your macOS to be set up.
- Finally, close Feedback Assistant for now and close the mouse & keyboard window.
Additional information
- When creating a hard disk for your VM, never go for less than 25 GB (or 30, to be safe) or you will have problems later on.
- Always quit VM VirtualBox Manager before changing screen resolution via code.
- If you have issues with your mouse, using an old mouse does the trick.
And that’s how you get Apple from Oracle for cross-browser Web App testing. Make sure to follow our blog for more tips & tricks.
In case you need any tech help, feel free to contact us.
|
OPCFW_CODE
|
|uk.telecom.broadband (UK broadband) (uk.telecom.broadband) Discussion of broadband services, technology and equipment as provided in the UK. Discussions of specific services based on ADSL, cable modems or other broadband technology are also on-topic. Advertising is not allowed.|
Dual Game connection + VoIP
I have a cable modem, a switch, and the Blueyonder Dual Gaming option
which gives me a second IP address. That is wired to a part of the house
where I'd like to put a VoIP phone.
I've plugged a Grandstream Handytone 486 Analogue Telephone Adapter into
it, and so far I can't get it to run. I've had it successfully connected
for testing on the Blueyonder primary IP connection which goes to my computer.
Is anyone aware of any functional difference between the IP addresses
that might account for this?
Dual Game connection + VoIP
|| I have a cable modem, a switch, and the Blueyonder Dual Gaming option
|| which gives me a second IP address. That is wired to a part of the
|| house where I'd like to put a VoIP phone.
|| I've plugged a Grandstream Handytone 486 Analogue Telephone Adapter
|| into it, and so far I can't get it to run. I've had it successfully
|| connected for testing on the Blueyonder primary IP connection which
|| goes to my computer.
|| Is anyone aware of any functional difference between the IP addresses
|| that might account for this?
Not that I've ever used the dual connection pack...
I don't suppose you've tried doing away with the switch BY have provided, and instead connecting your PC via USB and the phone to the Ethernet port on the cable modem? Just as a test, I'd suggest this.
|
OPCFW_CODE
|
Realistic rollerblade steam propulsion device, like something that might have been invented with 18th century technology. Is it possible?
This was an idea generated by my sister that I thought was very interesting. I intended to make it work by strapping rollerblades on a person's legs and utilize a compressed gas/steam-powered pumping device that would be worn on the back of the character. A gas pumper would push the character forward at high speed (kind of like a rocket backpack) while the rollerblades could be used for directional control. These two are connected to each other with wires and are worn like a suit. There should be something balancing weight (or do you have suggestions?) to keep the character balanced while moving.
Is it possible? If not then how can I improve it? Are there any similar alternatives?
Sounds like a steampunk device if I ever heard one.
Could it be developed? Without a doubt. Useful? Doubtful, as the steam-producing machinery would defeat its own jet power with its weight (machine, water, burner). Few "steam powered" ideas like this survive a mathematical check.
18th century (American revolution, 1765-1783), or 19th century (Stockton and Darlington Railway, 1825)?
Many issues here, all going against this:
Producing steam requires a lot of heavy machinery: you need to carry water, coal and the boiler. Are you sure you want all that weight in a backpack? How are you going to fuel the boiler on your back?
Rollerblades can be used when you have smooth pavement to roll on. Even a cobbled road would be a nightmare, and most roads in the 18th century were not even that fancy: just loose ground, dusty when dry, muddy when wet, with deep trenches carved where the cart wheels passed. Nothing suitable for rollerblades.
Steering: if you are going at high velocity and want to steer, you will put a high load on your wheels unless you take a very large turning radius. I doubt that rollerblades would survive the challenge.
I think you can't go any better than Stephenson's Rocket.
I will just note that there is an IRL segment of society mad/awesome enough that they build giant off-road rollerblades and use kite-surfing-kites and proceed to fling themselves along beaches and grassy fields at ridiculous speeds. They are called 'kite-skates' or (more aptly) 'doomwheels.' Anyway, so rollerblades on non-tarmac surfaces aren't a complete non-starter - at least for some people
First, you need bigger wheels. The reason carriages (17th-20th century) have such large wheels is that larger wheels "soften" bumps and irregularities in the road surface by allowing a longer travel distance to climb over the height change.
Then you need to use small, powerful steam engines on the skates to drive the wheels; this gets you many times the propulsion effect over the same amount of steam used as a jet or rocket, even with air entrainment or a ducted airscrew.
In order to make small steam engines powerful, you need to run your steam at high pressure, which means high temperature.
Now we have the basic building blocks -- a high temperature, high pressure boiler system, with spring-assisted articulated leg struts to carry the weight of the boiler and fuel tank (it'll have to be petroleum, like kerosene; that's the only high energy density, fast/clean burning, easy stoking fuel that can operate this kind of boiler), and wheels at least as large as those on a 1900 vintage road bicycle (not a penny farthing, but the kind the Wright Brothers built before they switched to airplanes).
In the end, it's going to look at lot more like a badly drawn horseless carriage -- but small enough to (more or less) wear instead of sitting in.
No chance
The engines did get more efficient over the course of the 18th century, but they tended to remain two-man operations (one to drive, one to shovel coal), and mostly they got bigger and heavier. And that's not counting the weight of coal and water: engines of the period aren't condensers, so you need to keep pouring water in at one end to allow for the steam coming out at the other.
So if you're happy for your roller skater to have a 4 ton engine operated by two men and a trailer load of coal following along behind him then go for it, but it doesn't feel like that's in the spirit of your problem.
It's not until the 19th century that you get the relatively compact and mobile traction engines that are more practical on the open roads but they still weigh in the region of 4.5 tons. Stephenson's Rocket was actually a nice lightweight engine at a mere 4 tons.
1829 is in the 19th century.
Perhaps the water runs through the wheels and uses the vibration and friction to generate the heat needed for a small amount of steam pressure.
Put that together with some kind of wind sail, perhaps some skis, and a hill, and you've got something to start the propulsion.
Conclusion
In the end, for the day, steam isn't going to propel a human on a device that can be carried by a person, but it could perhaps help the wheels spin for longer, decreasing the need to propel as often.
The OP has not been back in the last 4 years, so here is a short answer that can be expanded if ever of interest.
All the existing answers are wrong.
This is entirely doable.
The issues are mass & range, speed, and temperature & safety,
in about that order. Perhaps :-).
A flash boiler, alcohol- or kerosene-fired, with a burner and a water supply, will generate steam able to deliver power in the fractional-to-few-horsepower range, either with tiny lightweight reciprocating motors or by steam rocket.
Wheels are an issue, but sensible engineering can create wheels able to handle the road surfaces of the day. I'll leave that to others to fill out.
A "flash boiler" will convert thermal energy to steam at low to medium pressure at acceptable mass and energy levels in this context.
Hydrocarbon fuels have acceptable mass and volume energy densities.
A litre of ethanol or kerosene has an energy content of very roughly 10 kWh per litre or kilogram (higher for kerosene), with delivered energy as steam in the 1-3 kWh/litre range.
A "Flash Boiler" (FB) consists of a coil of pipe (usually copper) heated by an external source (here, a burner), with pressurised water introduced appropriately.
Even at 1 kWh/litre, 5 litres of fuel will provide 5 kWh: say 1 kW for 5 hours, or 5 kW for 1 hour, or ... .
The main limitation, once safe (if not sane) operation is achieved, is likely to be water capacity. Steam generation requires about a gram of water per 2 kW-seconds of steam generated, so a kg of water produces about 2000 kg-seconds of steam. If a rocket is used (see below), assuming delivered "vehicle power" in the 1%-20% range, that is 20 to 400 kg-seconds per kg of water.
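As a sanity check on the arithmetic above (the constants are the ones quoted in this answer: a low-end delivered steam energy of 1 kWh per litre of fuel, and a 2000 kg-second baseline per kg of water):

```python
# Back-of-envelope figures taken from the text above.
FUEL_STEAM_KWH_PER_LITRE = 1.0    # low end of the 1-3 kWh/litre range
BASE_IMPULSE_KG_S_PER_KG = 2000   # quoted baseline per kg of water

def fuel_energy_kwh(litres):
    """Delivered steam energy for a given fuel load."""
    return litres * FUEL_STEAM_KWH_PER_LITRE

def runtime_hours(energy_kwh, power_kw):
    """How long that energy lasts at a constant power draw."""
    return energy_kwh / power_kw

def steam_impulse_kg_s(water_kg, efficiency):
    """Thrust impulse per water load at 1%-20% delivered 'vehicle power'."""
    return water_kg * BASE_IMPULSE_KG_S_PER_KG * efficiency
```

This reproduces the ranges in the text: 5 litres gives 5 kWh (1 kW for 5 hours), and a kg of water yields 20 to 400 kg-seconds at 1%-20% efficiency.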
While small model reciprocating engines are well known (see below), powers in the HP+ range can be delivered directly by steam rockets. Achieving this does not require rocket science in practice, only in theory :-).
A steam rocket need be neither extremely large nor heavy. Being strong enough to deal with peak pressure out of the flash boiler "is a really good idea".
Flash steam to reciprocating engines for models delivering powers in the HP range are well known:
How to build flash steam generators and related steam engines here
Includes (pages 23-53) full detail for building a model aircraft engine weighing 3/4 ounce and delivering 1/4 HP to a propeller, at 60 psi and 3500 RPM :-). Steam generator weight not stated, but probably well under one pound. That's about 200 watts delivered, or about the power of a very low-end e-bike. Propeller optimisation would allow low-velocity vehicle operation.
Page 34: Hydroplane. 1 HP+ delivered to the prop. 30 mph max speed. Top models achieve 120 mph+.
Flash-steam model hydroplanes: 120 mph from a 20cc steam engine! here
Wikipedia - steam rocket
An HTP (high-test hydrogen peroxide) rocket can deliver about 300 kg-seconds per kg of HTP using silver-catalyst screens and 85% HTP (the upper limit for silver, due to screen melting). The early HTP rocket packs delivered 17 seconds of flight. A say 10 kg propellant load would deliver say 300 seconds at 10 kg thrust. Hydrogen peroxide could have been synthesised and concentrated "back then": first produced in 1818 (Wikipedia). 100% HTP delivers 440 kg-seconds per kg.
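The HTP figures check out the same way (a sketch using only the numbers quoted above):

```python
def burn_time_s(propellant_kg, impulse_kg_s_per_kg, thrust_kg):
    """Burn time at constant thrust: total impulse divided by thrust."""
    return propellant_kg * impulse_kg_s_per_kg / thrust_kg
```

A 10 kg load of 85% HTP at 300 kg-seconds per kg, held at 10 kg of thrust, burns for 300 seconds, matching the estimate above.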
Aluminum powered ramjet backpack.
Because we are talking about fiction,and we want physics to be firmly in the service of awesome.
Consider first Project Pluto: the nuclear powered ramjet.
https://en.wikipedia.org/wiki/Project_Pluto
The working fluid of this rocket is air. Like any other ramjet, it pulls air in through the front and shoots it out the back to provide thrust. Jet engines generate the heat by burning fuel; the nuclear ramjet generated heat with a fission reaction.
Having a fission reaction strapped to your back poses some difficulties. Burning aluminum is much safer! The mechanism is the same, except the core of the backpack is aluminum metal. It is started with a thermite fuse and, once going, air is used as the oxidant. The oxygen is consumed to produce aluminum oxide, but the nitrogen is heated greatly and vented out the rocket nozzle in back to provide thrust.
You would need to get up to speed to get air flowing past the aluminum, which you could do with a hill or by skating hard. Once flowing you could adjust thrust with a choke for the air intake. The air flowing by also keeps the backpack coolish.
Really this is just a jetpack full of burning near-molten metal and you are wearing rollerblades. As one does.
As L. Dutch said, steam engines are bulky. But air compression already existed in the 18th century, and it is just one step further to store the compressed air in bottles. You could use it to drive your wheels.
Take care to select the right size of wheels, as L. Dutch mentioned. Your whole rig is going to be bulky, but you might be able to carry it. You won't be able to go fast or far, and if you fall and the valve breaks, it's going to rip down your legs/arms/head; choose your target.
In general, I would not do it.
|
STACK_EXCHANGE
|
All entries for Sunday 14 August 2005
August 14, 2005
We will think about efficiency only if the estimators being compared are consistent. Thus, we have to compare them under circumstances where both are consistent. That is, the Theorem of the Weighted Estimators holds with Z=X and the objective function is E[q(W,theta)|X]. Alternatively, the following scenario must be true.
SCENARIO 5: a feature of the conditional distribution is correctly specified, R depends only on X, X is completely recorded, and some regularity conditions hold.
Given this scenario, it can be shown that, for the Weighted estimator, the asymptotic variance is the same whether the missing probabilities are estimated or known.
This result does not depend on whether or not a generalised conditional information matrix equality (GCIME) holds. Thus, if all conditions of this scenario are satisfied, the result holds even when there is heteroskedasticity of unknown form in the conditional variance in the context of least squares (see Wooldridge 2002).
At this point, we know that, in this scenario, both the Weighted and Unweighted estimators are consistent, and it does not matter whether we estimate the selection probabilities or use the known probabilities. (Then the question remains: should we weight or not?)
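For concreteness, the two estimators under discussion can be written in the standard inverse-probability-weighting form (the notation here is my addition in the style of Wooldridge-type M-estimation, not taken from this post):

```latex
\hat{\theta}_{\text{unweighted}}
  = \arg\min_{\theta}\; \frac{1}{N}\sum_{i=1}^{N} R_i\, q(W_i,\theta),
\qquad
\hat{\theta}_{\text{weighted}}
  = \arg\min_{\theta}\; \frac{1}{N}\sum_{i=1}^{N} \frac{R_i}{\hat{p}(Z_i)}\, q(W_i,\theta),
```

where R_i is the selection indicator (R_i = 1 when observation i is complete) and \hat{p}(Z_i) is the estimated response probability P(R=1|Z).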
However, if the GCIME also holds, we can show that the asymptotic variance of the Unweighted estimator is smaller than that of the Weighted estimator.
The GCIME holds for conditional MLE if the conditional DENSITY is correctly specified with the true variance = 1 (so both, say, the population conditional mean and the density have to be correctly specified).
So, in SCENARIO 5, it is likely that we will use the Unweighted estimator.
In SCENARIO 2 below, we say that we will use the Weighted estimator. However, if the GCIME holds, we might want to gamble on the specification of the mean function and use the Unweighted estimator.
COMPARING THE TWO APPROACHES
To compare them, we have to set out the situations where each can be used as an alternative to the other.
SCENARIO 1: R depends only on X, the covariates in the model. That is, there is no Z variable, and Assumption MAR holds as P(R=1|Y,X) = P(R=1|X). This is the most common situation in which both types of analysis can be used. It is also quite realistic, since in practice it will be difficult to find a Z that is not a subset of X and that makes R independent of (Y,X).
Advantages of the Weighted analysis:
(1) Does not require the correct specification of E(Y|X).
(2) Allows some other variables which are not in X to also affect R.
Requirements of the Weighted analysis:
(1) The model of R has to be correct.
(2) X has to be completely recorded, so that we can estimate the model of R.
(3) Only Y is allowed to be missing.
Advantages of the Unweighted analysis:
(1) Y and X can be jointly missing, as the conditioning variables in the model of R (X or a subset of it) can be missing.
(2) If some variables are important but incompletely recorded, they can be in X. In attrition, where some covariates are missing in the later waves of a study, such covariates can be included using this approach.
Requirement of the Unweighted analysis:
(1) Correct specification of the feature of interest.
AS CAN BE SEEN, which approach to use has to be considered case by case. NOTE that MAR vs NMAR is no longer a good criterion for dividing the missing-data literature (at least for econometricians). From the above elaboration, the circumstance where X is missing but R depends on X fits the setting of NMAR, and yet there can be cases where the Unweighted M-estimator is consistent.
SCENARIO 2: R depends only on X, and X is completely recorded.
Then we should use the Weighted analysis, as it allows for misspecification and for some other variables to affect the missing probability.
SCENARIO 3: R depends only on X, X is incompletely recorded, and the feature of interest is correctly specified.
Then we should use the Unweighted analysis, because the incomplete X can be in the model of R. (We know for sure that these incomplete X matter for R's model, and we do not have to worry about misspecification.) Thus, the Unweighted analysis yields consistency under a weaker assumption (= more variables in R's model to ensure the independence between Y and R).
However, there could be some restriction such that we cannot use these incomplete variables anyway (Wooldridge (2002) gives an example where the structure of the conditional expectation model prevents us from using incomplete variables). In that case, the Weighted analysis might be better.
SCENARIO 4: R depends only on X, and X is incompletely recorded.
Now we do not know whether, say, E(Y|X) is correctly specified or not. This case is also unclear. We might want to use the Unweighted analysis and gamble on the model specification. Or, if R does not depend significantly on the incomplete X, we may want to use the Weighted analysis.
Note that, in all of this discussion, R has to be independent of Y conditional on some variables anyway. (For the Weighted analysis this is Assumption MAR; in the Unweighted analysis it is not MAR, because the conditioning variables in the R model can be missing.)
Advantages of the Weighted analysis:
(1) Does not require the correct specification of the feature of Y|X (conditional mean, conditional median).
(2) Allows other variables (apart from those in X) in Z to affect R.
(3) Y and X can be jointly missing, as long as Z is fully recorded and Assumption MAR (using Z) is satisfied.
Requirements of the Weighted analysis:
(1) The model of R must be correctly specified.
(2) Z must be completely recorded.
(3) The response probability has to be positive (meaning that we cannot exclude a subsection of the population in the sampling process). (This may imply that the wage equation example is not valid here, because people who don't work are excluded completely.)
Advantages of the Unweighted analysis:
(1) Missing variables (except Y) can be allowed into the model of R, i.e., the missing mechanism, because we do not have to estimate the response probability. Thus, the Ignorability Assumption (which is not MAR) of R's model tends to be weaker than that of the Weighted analysis in general (since more variables can be conditioned upon, making the independence between Y and R more plausible).
(2) Y and X can be jointly missing even when we don't have any variable such as Z.
(3) The response probability can be zero for some subsets of the population.
Requirement of the Unweighted analysis:
(1) Requires correct specification of the conditional mean, conditional median, or conditional distribution.
|
OPCFW_CODE
|
I’m having issues with networking in WSL2, which makes me wonder how WSL2 is actually used by Docker Desktop when “Use the WSL2 based engine” is ticked.
When running any WSL2 distro I have no working network access, but when running a supposedly WSL2 backed container it works, see issue on GitHub: https://github.com/microsoft/WSL/issues/5862
Is there any difference in starting code (with a dev-container-configured repo) from within the WSL2 terminal vs. from the PowerShell terminal?
How is networking set up in making it possible to access external network from within a container in WSL2 but not from the docker-desktop distro itself?
Starting VSCode with a repository that is configured to be used with a development container, it asks if you want to restart in the container; when selecting yes, it shows “>< Dev Container …” in the bottom left.
Opening the docker desktop UI shows a container running: “vsc-vscode-remote-try-python…”
Using the built in terminal to check for network access is successful, able to ping 22.214.171.124 for example.
Looking at the documentation for Docker Desktop with WSL2 says you should start code from within the WSL2 distro terminal. All according to: https://docs.docker.com/docker-for-windows/wsl/#develop-with-docker-and-wsl-2
Once VSCode is up the bottom left indicates remote WSL: “>< WSL: Ubuntu 20.04”. Checking network access from the built in terminal by again pinging 126.96.36.199 fails.
Since I am opening the same folder as in the example above (but from the WSL2 distro terminal this time), I get the question whether I want to reload in the container. Doing this changes the connection info in the bottom left to “>< Dev Container …” once again. And if I take a look at the Docker Desktop UI, I can see a running container again, with the same name as when starting it from outside WSL2.
Once again testing external network access by pinging 188.8.131.52 is successful.
Attaching to the running WSL2 instance of “docker-desktop”:
  NAME              STATE     VERSION
* docker-desktop    Running   2
Trying to ping 184.108.40.206 fails, just as for any other WSL2 distro installed.
What makes the containers running within the docker-desktop WSL2 distro able to connect to external network but not the docker-desktop WSL2 distro itself? (I can see a lot of vpnkit* processes running)
|
OPCFW_CODE
|
From the real-time control of hardware by means of code, with origins in algebraic, logical, and to some extent musical notations, resulted a bewildering variety of languages, but with many design concepts in common.
One of the biggest innovations came with structured programming, which forced coders to untangle their spaghetti-balls of GOTOs and do something more cleanly organized around a backbone or main call sequence -- a more team-centric approach, more debuggable.
Then came the Object Oriented Paradigm, wherein code and relevant data formed islands, in networks and confederations, all passing messages to one another.
The procedural paradigm was absorbed (the objects needed methods after all), and this whole new data structure, called a Class Hierarchy, started guiding collaboration at another level.
Of course the second you say Class Hierarchy, ears prick, sensitive to possible cultural metaphors. What will OO programmers think of their fellow humans, if they're so class-conscious by trade?
But in OO, the meaning of class is a different one, closer to type, or species. These too come in hierarchies, but not with humans at the top so much as in a promising part of the fan. At the roots are more primitive organisms, complex in their own way, but very distant from human.
In OO, that's more the picture: the "higher" you go, the more "toward the root" you go, and the classes become less and less specific, more and more in need of descendants to get any real work done. The tip of the class hierarchy is buried somewhere in prehistory (to extend the metaphor fully).
Smalltalk really put the OO model out there, where people could work with it and study it. Other languages saw the advantages and changed. C morphed into C++, its struct syntax begetting a more polymorphous class syntax. My own language, Xbase (dBase II, III, IV, Clipper, Fox...), morphed into VFP (Visual FoxPro) under Microsoft management (which made sure Access wouldn't feel too threatened). Coders comfortable with C wanted to keep a lot of that syntax (unlike in the LISP or APL traditions), which is what Java and C# deliver.
.NET is in part a strategy to bring the huge army of Visual Basic programmers into a fully equipped Visual Studio, where other tools work harmoniously -- including Python (the one that works on the Nokia 60 series cell phones).
At our first meeting, yesterday at Powell's, when talk turned to sports, someone (not Derek) remarked about some vital college football quarterback getting mono, a non-fatal disease that leaves one bedridden or at least not able to play football. In a marketing sense, this dogged a new open source project: Mono. Except Mono means Monkey in its native namespace, and the monkey theme was already well-developed in Ximian, so the whole animal and jungle motif was making plenty of sense. So what if the connotations were more unfortunate in the English-speaking realm? Sometimes that couldn't be helped. Anyway, we pronounce Mono differently, with the "o" like in "owe." We don't say "mah-no" -- which sounds a lot like the word for "hand." Mono is an implementation of .NET for Linux.
So that pretty much brings the language picture up to date from the point of view of an OSCON attendee. There's a lot more that could and should be said in an academic vein, e.g. about Haskell and lambda calculus. But the point here is more to trace "math notation" from its paper and pencil beginnings, into the device control and infrastructure support matrix. That's where it rejoins mathematical modeling, in the sense of needing to control pressures, sense rates of change, confirm safety thresholds and so on. To launch, or not to launch? The code helps you decide intelligently, but human judgment remains at the switch.
The Framework of QlibRL¶
QlibRL contains a full set of components that cover the entire lifecycle of an RL pipeline, including building the simulator of the market, shaping states & actions, training policies (strategies), and backtesting strategies in the simulated environment.
QlibRL is basically implemented with the support of Tianshou and Gym frameworks. The high-level structure of QlibRL is demonstrated below:
Here, we briefly introduce each component in the figure.
EnvWrapper is the complete encapsulation of the simulated environment. It receives actions from the outside (policy/strategy/agent), simulates the changes in the market, and then returns rewards and updated states, thus forming an interaction loop.
In QlibRL, EnvWrapper is a subclass of gym.Env, so it implements all necessary interfaces of gym.Env. Any classes or pipelines that accept gym.Env should also accept EnvWrapper. Developers do not need to implement their own EnvWrapper to build their own environment. Instead, they only need to implement 4 components of the EnvWrapper:
- Simulator
- The simulator is the core component responsible for the environment simulation. Developers can implement all the logic that is directly related to the environment simulation in the Simulator in any way they like. In QlibRL, there are already two implementations of Simulator for single-asset trading: 1) SingleAssetOrderExecution, which is built on Qlib’s backtest toolkits and hence considers many practical trading details, but is slow; 2) SimpleSingleAssetOrderExecution, which is built on a simplified trading simulator that ignores many details (e.g. trading limitations, rounding) but is quite fast.
- State interpreter
- The state interpreter is responsible for “interpreting” states from the original format (the format provided by the simulator) into a format that the policy can understand. For example, transforming unstructured raw features into numerical tensors.
- Action interpreter
- The action interpreter is similar to the state interpreter. But instead of states, it interprets actions generated by the policy, from the format provided by the policy to the format that is acceptable to the simulator.
- Reward function
- The reward function returns a numerical reward to the policy after each time the policy takes an action.
EnvWrapper will organically organize these components. Such decomposition allows for better flexibility in development. For example, if the developers want to train multiple types of policies in the same environment, they only need to design one simulator and design different state interpreters/action interpreters/reward functions for different types of policies.
QlibRL has well-defined base classes for all these 4 components. All the developers need to do is define their own components by inheriting the base classes and then implementing all interfaces required by the base classes. The API for the above base components can be found here.
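To make the decomposition concrete, here is a minimal, generic sketch of how a simulator, the two interpreters, and a reward function compose into an environment-style step loop. All class and method names here are illustrative placeholders, not QlibRL's actual base-class APIs:

```python
class CountUpSimulator:
    """Toy simulator: raw state is a dict; actions add to a running value."""
    def __init__(self):
        self.raw_state = {"step": 0, "value": 0.0}

    def step(self, sim_action):
        self.raw_state["step"] += 1
        self.raw_state["value"] += sim_action


class StateInterpreter:
    def interpret(self, raw_state):
        # Turn the simulator's raw dict into a flat feature list a policy understands.
        return [raw_state["step"], raw_state["value"]]


class ActionInterpreter:
    def interpret(self, policy_action):
        # Map the policy's discrete action (e.g. 0 or 1) to a simulator-level quantity.
        return float(policy_action)


class RewardFn:
    def __call__(self, raw_state):
        return raw_state["value"]


class EnvWrapperSketch:
    """Composes the four components into a gym-style step() loop."""
    def __init__(self, sim, state_interp, action_interp, reward_fn):
        self.sim = sim
        self.state_interp = state_interp
        self.action_interp = action_interp
        self.reward_fn = reward_fn

    def step(self, policy_action):
        # action interpreter -> simulator -> state interpreter -> reward function
        self.sim.step(self.action_interp.interpret(policy_action))
        obs = self.state_interp.interpret(self.sim.raw_state)
        return obs, self.reward_fn(self.sim.raw_state)


env = EnvWrapperSketch(CountUpSimulator(), StateInterpreter(),
                       ActionInterpreter(), RewardFn())
obs, reward = env.step(1)  # obs == [1, 1.0], reward == 1.0
```

Because the policy-facing formats are isolated in the interpreters, training a different kind of policy in the same environment only needs new interpreters and a reward function; the simulator stays untouched.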
QlibRL directly uses Tianshou’s policy. Developers could use policies provided by Tianshou off the shelf, or implement their own policies by inheriting Tianshou’s policies.
Training Vessel & Trainer¶
As stated by their names, training vessels and trainers are helper classes used in training. A training vessel is a ship that contains a simulator/interpreters/reward function/policy, and it controls algorithm-related parts of training. Correspondingly, the trainer is responsible for controlling the runtime parts of training.
As you may have noticed, a training vessel itself holds all the required components to build an EnvWrapper rather than holding an instance of EnvWrapper directly. This allows the training vessel to create duplicates of EnvWrapper dynamically when necessary (for example, under parallel training).
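As a toy illustration of that design choice (names here are hypothetical, not QlibRL's API): because the vessel stores the ingredients of an environment rather than a finished environment instance, it can stamp out independent environments on demand, e.g. one per parallel worker:

```python
class VesselSketch:
    """Hypothetical sketch: the vessel holds a factory for building environments,
    not a single shared environment instance."""
    def __init__(self, simulator_factory):
        self.simulator_factory = simulator_factory

    def build_env(self):
        # Each call yields an independent environment, e.g. one per parallel worker.
        return {"simulator": self.simulator_factory()}


vessel = VesselSketch(simulator_factory=lambda: {"step": 0})
env_a = vessel.build_env()
env_b = vessel.build_env()
env_a["simulator"]["step"] = 5
# env_b is unaffected: the two environments share no state.
```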
With a training vessel, the trainer can finally launch the training pipeline through simple, Scikit-learn-like interfaces.
The API for Trainer and TrainingVessel can be found here.
The RL module is designed in a loosely coupled way. Currently, RL examples are integrated with concrete business logic, but the core part of RL is much simpler than what you see. To demonstrate the simple core of RL, a dedicated notebook for RL without business logic has been created.
Please let me know how to fix the Windows 10 login screen not appearing while I am trying to start the machine. If you have any ideas, please share them.
Microsoft develops Windows. The operating system can be defined as an interface between the user and the computer: it presents the machine's workings graphically on the screen, helping users interact with the backend code without the hassle of typing commands every time they perform day-to-day functions.
Windows offers many options to unlock the login screen.
One of the common problems which many users have come across is Windows 10 login screen not appearing. In this article, we bring to you the causes of this problem and how you can overcome them.
Note that these steps can be followed only if you can see the desktop. Otherwise, it is advised to boot into Safe Mode or a Clean Boot State, or to use the Advanced Startup Options for initial booting.
The solution steps to follow when the Windows 10 login screen is not appearing:
It is possible that you updated your system recently and there was some issue with the update, or the update was not installed properly. In that case, the system always creates a restore point that can be accessed.
Restore the system to a date/time before the system update and block the pop-ups for that particular update. You can also choose to delay the particular update if you want.
These keys are pressed simultaneously if you have the secure login option already enabled on your system. Here, after logging off, the OS adds a layer of security, keeping the session secure by asking the user to log in again with their credentials.
However, you can disable this function in the settings option.
In case you wish to check this setting, open the Run command (Windows key + R) and type "control userpasswords2". This will take you to the settings for managing the usernames and passwords of the accounts you have created.
You now have to uncheck the checkbox which says "Users must enter a username and password to use this computer". Apply the settings and click OK. You will be prompted to enter your password again. If you don't have a password, you can leave it as blank.
If it is already checked, unchecking the box allows you to log in without a username or password.
Sometimes the Fast Startup feature of Windows 10 interferes with the login process. Users can opt to disable the Fast Startup option from the Power Options in the Control Panel.
In case the process does not yield desired results, please consider reverting to the previous settings.
Windows 10 comes in handy for troubleshooting your problems in both online and offline mode. When you enter the Clean Boot State, the system automatically scans for any third-party software that might be interfering with existing login functionality.
This option is preferred when Windows 10 login screen is not appearing.
It may be possible that, while working in a particular session, the account got corrupted, which is a common reason for being unable to log in. As such an account is pretty difficult to revive, it is suggested that you create a new account and copy the data over later.
Running the startup repair function while booting would be a good way to troubleshoot this problem.
The last resort would be to reinstall the entire OS and improve the system's speed and overall workability.
How dangerous are long thumb nails
I came across a provocative individual with long thumb nails, on both hands; about 2cm extra length.
How dangerous could his nails be?
Eye gouging comes to mind. But is it more effective with that type of nails? What about neck tissue? Can it be pierced or slashed more easily? Would it be able to reach the carotid artery?
If you're a Bond villain, very
Relevant: Long nails can be beneficial in self-defence to help identify the assailant via prominent wounds/scars or DNA.
Generally speaking, human nails aren't much of a threat. There is more of a chance the individual with long nails will injure themselves trying to use them as weapons (e.g. painful tears and nail-bed trauma) than there is to an intended target.
You have already identified the eyes as particularly vulnerable (and honestly they are just as ready a target to someone without long nails), but painful scratches to the dermis are the likely outcome from a fingernail attack. As someone who has practiced both striking and grappling arts for decades, I see long nails as far more of a hindrance/risk than they are an advantage.
Nails aren't really all that dangerous, but they can be painful, especially when practicing things like grabs and releases. Personally, I've had my share of scratches from long nails in practice.
If anything, his nails are more of a threat to himself; they're probably fragile enough that they could break.
If you think long thumbnails are not useful in self defense then try this: press the edge of a long sharpened thumbnail into the corner of your eye and engage that ridge that runs almost completely around your eye orbit. You'll find out in a hurry how extremely painful that experience can be.
Yeah. Turning two patches of skin in opposite directions is painful as well. Would you recommend it in self-defense? Pain is much less of a factor when people are pumped with adrenaline. Also, you need to land a hit, or rather get meaningful pressure on the eyes, in a real fight against a rapidly moving target whose instincts protect vulnerable spots like the eyes. In my experience, people who think pain-compliance techniques work in self-defense are pretty much the same guys who never really trained full contact and/or were never in these situations.
Related, though not exactly the same: one of the people I studied Kali under for a while, a very old-school Filipino gentleman, said that when he was younger he kept his left thumbnail longer. He kept it trimmed with the outside edge squared off so he could use it to cut people when he was fighting. He said when he would get in close he would use it to cut across a person's face.
Finally it's here -> the new release of FPL!
So what has changed since the last release? Not that much; I just added a multi-monitor API, added support for using FPL as a dynamic link library, fixed some bugs, and updated the documentation a lot.
I know it's not that much, so why did it take so long?
There were several reasons for that:
- Changes in real life, resulting in less time to do any private coding
- I am still working on a tool to help me write documentation faster and more reliably
There are a lot of @IMPLEMENT tasks left which need to be completed. So my plan for the next version is simple:
- Finish up all the @IMPLEMENT tasks
- Finish up the doxygen-editor
- Finish up the documentation
After that, I will add Vulkan support and start on the network API.
As usual the release is tagged, but the documentation will be updated later (due to server issues).
Here is the full changelog:
## v0.9.3.0 beta
- Changed: Renamed fplSetWindowFullscreen to fplSetWindowFullscreenSize
- Changed: Replaced windowWidth and windowHeight from fplWindowSettings with windowSize structure
- Changed: Replaced fullscreenWidth and fullscreenHeight from fplWindowSettings with fullscreenSize structure
- Changed: Renamed macro FPL_PLATFORM_WIN32 to FPL_PLATFORM_WINDOWS
- Changed: Renamed fplGetWindowArea() to fplGetWindowSize()
- Changed: Renamed fplSetWindowArea() to fplSetWindowSize()
- Changed: Renamed fplWindowSettings.windowTitle to fplWindowSettings.title
- Changed: Reversed buffer/max-size argument of fplS32ToString()
- Changed: Renamed fullPath into name in the fplFileEntry structure and limit its size to FPL_MAX_FILENAME_LENGTH
- Changed: Introduced fpl__m_ for internal defines mapped to the public define
- Changed: FPL_ENABLE renamed to FPL__ENABLE
- Changed: FPL_SUPPORT renamed to FPL__SUPPORT
- Changed: [CPP] Export every function without name mangling -> extern "C"
- Changed: [Win32] Moved a bit of entry point code around, so that every linking configuration works
- Fixed: fplPlatformInit() was using the width for the height for the default window size
- Fixed: fplExtractFileExtension() was not favoring the last path part
- Fixed [Win32/MSVC]: Workaround for "combaseapi.h(229): error C2187: syntax error: 'identifier' was unexpected here"
- Fixed: [POSIX] Parse version string (isdigit was not found)
- Fixed: [X11] Added missing fpl*Display* stubs
- New: Added fplSetWindowFullscreenRect()
- New: Added fplGetDisplayCount()
- New: Added fplGetDisplays()
- New: Added fplGetWindowDisplay()
- New: Added fplGetPrimaryDisplay()
- New: Added fplGetDisplayFromPosition()
- New: Added fplGetDisplayModeCount()
- New: Added fplGetDisplayModes()
- New: Added fplGetWindowTitle()
- New: Added docs back-references from atomic functions
- New: Added support for compiling FPL as a dynamic library
- Changed: [Win32] fplSetWindowFullscreenSize does not use virtual screen coordinates anymore
- Changed: [Win32/POSIX] Store filename in fplFileEntry instead of the full path
- Fixed: [Win32] fplGetExecutableFilePath was not returning the last written character
- Fixed: [Win32] fplGetHomePath was not returning the last written character
- New: [Win32/X11] Use fplWindowSettings.fullscreenRefreshRate for startup when needed
- New: [Win32] Implemented fplSetWindowFullscreenRect()
- New: [Win32] Implemented fplGetDisplayCount()
- New: [Win32] Implemented fplGetDisplays()
- New: [Win32] Implemented fplGetWindowDisplay()
- New: [Win32] Implemented fplGetPrimaryDisplay()
- New: [Win32] Implemented fplGetDisplayFromPosition()
- New: [Win32] Implemented fplGetDisplayModeCount()
- New: [Win32] Implemented fplGetDisplayModes()
Poor autosave timing
First off, I have to say I'm loving this game. I'm ex Brit Army Air Corps and have been waiting for a game like this for years.
Having said that, after just buying this game and playing it solidly for about five days, I've come across my first niggle: the timing of some of the autosaves. I'm on the mission where I have to shuttle back and forth, picking up containers from the ship and dropping them off at the compound. I'm flying back to the ship to pick up the last container when I suffer the engine failure and have to auto-rotate to a landing.
The problem is it autosaves just 1 or 2 seconds before the failure, and at that time I'm only 130 ft AGL over heavy tree cover. I have nowhere to go. My only option is to start the whole mission from the start again. If the autosave occurred even 5 seconds before the engine failure, I could at least gain some height to give myself a fighting chance without needing to start from the beginning of the mission.
Jan 4 2012, 16:17
We've attempted some improvements in patch 1.03 (mainly not autosaving before imminent crashes), but still can see room for improvement.
In the meantime may I suggest using some manual saves in the cases you can see a troubling landing or other task coming? It's available from the interrupt menu (Esc).
Hi and thanks for your quick reply. I did exactly that when I restarted the mission after the above problem. I do make manual saves where I think I may need them. I just wasn't expecting this particular event on that last trip. Thanks again
After having experienced a similar issue to the original poster's (at the very same point in the same mission), I have had a very simple idea that could help avoid such issues without the application having to do any clever or 'heavy lifting' work.
Instead of initiating the auto-save instantly (at whatever event is used to trigger the auto-save), implement a short (5 seconds?) delay with simple user message indicating the count down. ("Auto saving in 5 seconds", "...4", "...3", "...2", "...1", "Auto-saving complete.")
If you wanted, you could get a little bit clever about it and allow the user to "(Press '<key>' to defer saving, or '<key>' to cancel)".
That way all the really clever decision making is left to the user and the computer is free to use all those precious clock cycles on rendering graphics, calculating flight dynamics and running those trojan viruses you 'picked up' from that website your mother would disapprove of.
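The countdown idea above can be sketched as a tiny state machine, ticked once per second by the game loop. This is a hypothetical design sketch, not the game's actual code:

```python
class DeferredAutosave:
    """Sketch of a countdown-based autosave: instead of saving instantly on a
    trigger event, show a visible countdown the player can defer or cancel."""

    def __init__(self, delay_seconds=5):
        self.delay = delay_seconds
        self.countdown = None   # None means no autosave pending
        self.saved = False

    def trigger(self):
        # An autosave event starts the countdown instead of saving immediately.
        self.countdown = self.delay
        print("Auto saving in %d seconds" % self.delay)

    def defer(self):
        # Player presses the defer key: restart the countdown.
        if self.countdown is not None:
            self.countdown = self.delay

    def cancel(self):
        # Player presses the cancel key: abandon the pending autosave.
        self.countdown = None

    def tick(self):
        # Called once per second by the game loop.
        if self.countdown is None:
            return
        self.countdown -= 1
        if self.countdown > 0:
            print("...%d" % self.countdown)
        else:
            self.countdown = None
            self.saved = True
            print("Auto-saving complete.")
```

The point of the design is that the expensive judgment ("is now a safe moment to save?") is delegated to the player, at the cost of a few printed lines and one integer of state.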
There's a similar issue with the mission where you have to replace the tower on top of the Space Needle. If you manage to collide with the tower, the mission decides you've managed to pick up the tower and triggers the auto-save, but this occurs just after collision detection has decided that you've damaged your cargo beyond repair. You therefore end up with an autosave which immediately ends the game due to damaged cargo.
My son, Melvin – 8 years old – plays soccer every Saturday. And guess who his biggest fan is? Yep, that’s me! If you have read a few of my earlier posts on this site, you’ll know my focus is on data and making sense of it. But you’ve probably also noticed I’m a bit all over the place sometimes.
The reason for that is me ;). And now again, a new initiative. Youth soccer analysis. I’m always looking for data that is close to me and with that trying to figure out how stuff works. So the question for this new private project is: “How do my son and his team play on the pitch?”
So I decided to create my own manual notation system: handling data in Excel after the game while watching the video I recorded, then capturing all ball attempts and handling by my son’s team, Veendam 1894 F1. Some will think it is a bit freaky, and you are right! But it’s worth the effort because of the insight I want to get out of this research: “How can soccer analysis be done?” I should add that I don’t have professional experience doing this.
So in the next few examples I will share some data visualizations and observations with you of the recordings and analytics I have been doing. (Captions, titles and labels on the charts are in Dutch; 8-year-old kids in Holland don’t understand English that well…)
Ball handling on the field – per position:
Here you can see the total actions over all games played, per position on the field. The yellow squares represent the total actions resulting in possession; the red diamonds represent the total actions resulting in a loss. These markers are plotted against the right-hand axis. The bars show the total actions on the pitch.
Productivity and involvement per match game:
This chart shows the productivity and involvement per action of the players during the games. The blue line shows the involvement of players, the yellow line shows the total actions, and the red line shows the successful actions during the game.
Improvement potential per playing theme:
This visual shows what the teams’ potential for improvement is. This is shown for themes from defensive to attacking plays.
Successful actions and ratios of the different action types:
This chart shows what actions during the game went well and which could be potential material for the training or line-up. This covers action types like: passing, tackles, dribbles, interceptions, clearance, shots, etc.
Possession and losses per action type:
This is a scatter chart showing the actions which result in a loss or possession of the ball during all the games played.
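As a rough illustration of the tallying behind these charts (the positions, action types, and outcomes below are made-up sample rows, not my actual notation data), the per-position totals and possession/loss counts can be computed like this:

```python
from collections import Counter

# Hypothetical sample of manually noted actions: (position, action type, outcome).
actions = [
    ("defender",   "pass",    "possession"),
    ("defender",   "tackle",  "loss"),
    ("midfielder", "dribble", "possession"),
    ("midfielder", "pass",    "possession"),
    ("striker",    "shot",    "loss"),
]

# Total actions per position (the bar series in the first chart).
totals = Counter(pos for pos, _, _ in actions)

# Actions resulting in possession vs. loss per position (the marker series).
possession = Counter(pos for pos, _, out in actions if out == "possession")
losses = Counter(pos for pos, _, out in actions if out == "loss")

for pos in totals:
    print(pos, totals[pos], possession[pos], losses[pos])
```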
These vizzes represent a small selection of what is available in the app I use for the analysis.
Luckily for me, the charts already reveal a huge amount of information that can be drawn from a soccer game. That helps me in not having to spend too much time typing out what I want to tell from this data. 😉 My research will continue from here with a focus on how to really make sense of this data. The data and its variety really make it a challenge for me to see how this can benefit the team, the trainers and myself.
If you are interested in future updates regarding my findings, just let me know in the comments.
For now, thanks for reading, and hopefully this also shows that, again, data from unexpected sources and activities can be very interesting and valuable.
import networkx as nx
import pandas as pd
from cobra.flux_analysis import find_blocked_reactions
class UmFinder:
def __init__(self, cobra_model, cc_method='fva', report=True):
self._model = cobra_model
if report:
print("===========================================================")
print("Initializing UmFinder Builder using")
print("Model: %s" % cobra_model.id)
print("- Nº of reactions: %i" % len(self._model.reactions))
print("- Nº of metabolites: %i" % len(self._model.metabolites))
print("\nChecking network consistency (may take some minutes)")
print("Finding blocked reaction using method: %s\n" % cc_method)
self._blocked_reactions = find_blocked_reactions(self.model)
self._gap_metabolites = UmFinder.find_gap_metabolites(self.model, self.blocked_reactions)
self._gap_graph = UmFinder.create_gap_graph(self.model, self._gap_metabolites, self._blocked_reactions)
unconnected_modules = nx.connected_components(self._gap_graph.to_undirected())
self._unconnected_modules = sorted(unconnected_modules, key=lambda x: len(x), reverse=True)
if report:
print("- Nº of blocked reactions: %i" % len(self._blocked_reactions))
print("- Nº of gap metabolites: %i" % len(self._gap_metabolites))
print("- Nº of unconnected modules: %i" % len(self.unconnected_modules))
if len(self.unconnected_modules):
df_ums = self.unconnected_modules_frame()
df_biggest_um = df_ums.node_type[df_ums.um_id == 1]
            rxns = df_biggest_um.index[df_biggest_um == 'rxn']
mets = df_biggest_um.index[df_biggest_um == 'met']
print("- N of reactions in the biggest unconnected module: %i" % len(rxns))
print("- N of metabolites in the biggest unconnected module: %i" % len(mets))
@property
def model(self):
return self._model
@property
def gap_metabolites(self):
return frozenset(self._gap_metabolites)
@property
def gap_graph(self):
return self._gap_graph
@property
def blocked_reactions(self):
return frozenset(self._blocked_reactions)
@property
def unconnected_modules(self):
return self._unconnected_modules
def update(self):
self._blocked_reactions = find_blocked_reactions(self.model)
self._gap_metabolites = UmFinder.find_gap_metabolites(self.model, self.blocked_reactions)
self._gap_graph = UmFinder.create_gap_graph(self.model, self._gap_metabolites, self._blocked_reactions)
        # nx.connected_component_subgraphs was removed in NetworkX 2.4;
        # use connected_components, matching __init__.
        unconnected_modules = nx.connected_components(self._gap_graph.to_undirected())
        self._unconnected_modules = sorted(unconnected_modules, key=lambda x: len(x), reverse=True)
def unconnected_module_subgraphs(self):
for um in self.unconnected_modules:
yield self.gap_graph.subgraph(um)
def unconnected_modules_frame(self):
columns = ['node_id', 'node_type', 'um_id']
data = {}
counter = 0
for i, um in enumerate(self.unconnected_modules):
for e in um:
if e in self.gap_metabolites:
e_type = 'met'
elif e in self.blocked_reactions:
e_type = 'rxn'
else:
e_type = None
data[counter] = (e, e_type, i+1)
counter += 1
return pd.DataFrame.from_dict(data, orient='index', columns=columns)
@staticmethod
def find_gap_metabolites(model, blocked_reactions):
gap_metabolites = []
for m in model.metabolites:
reactions = set([r.id for r in m.reactions])
if reactions.issubset(blocked_reactions):
gap_metabolites.append(m.id)
return gap_metabolites
@staticmethod
def create_metabolic_graph(cobra_model, directed=True, reactions=None, rev_rxn_label='reversible'):
graph = nx.DiGraph()
if not directed:
graph = nx.Graph()
if not reactions:
reactions = cobra_model.reactions
if not hasattr(reactions[0], 'id'):
reactions = [cobra_model.reactions.get_by_id(r) for r in reactions]
for r in reactions:
graph.add_node(r.id, label=r.id, text=r.id, node_class="rxn", node_id=r.id)
for m in r.metabolites:
if m.id not in graph.nodes():
graph.add_node(m.id, label=m.id, text=m.id, node_class="met", node_id=m.id)
(tail, head) = (r.id, m.id)
if r.get_coefficient(m) < 0:
(tail, head) = (m.id, r.id)
graph.add_edge(tail, head)
graph[tail][head][rev_rxn_label] = r.lower_bound < 0
return graph
@staticmethod
def create_gap_graph(model, gap_metabolites, blocked_reactions):
if hasattr(gap_metabolites[0], 'id'):
gap_metabolites = [m.id for m in gap_metabolites]
if hasattr(blocked_reactions[0], 'id'):
blocked_reactions = [r.id for r in blocked_reactions]
graph = UmFinder.create_metabolic_graph(model)
gap_graph = graph.subgraph(gap_metabolites + blocked_reactions)
return gap_graph
def main():
    # cobra.test ships the bundled test models used here.
    import cobra.test
    model = cobra.test.create_test_model('ecoli')
    um_finder = UmFinder(model)
    return um_finder


if __name__ == '__main__':
    main()
Infrastructure Symbols not working
Hello JMSML community
First and foremost, this is an amazing library and I will like to thank you all for your hard work. :)
I found some issues with symbols that were not available, but they were fewer than 0.5%.
Unfortunately, I started working with emergencies and I have encountered many symbols that are not available, mostly all of the emergency - infrastructure ones,
EFF-A----------
EFF-DB---------
etc. or the emergency - operations - installations.
EFO-DC---------
EFO-DDC--------
for example.
Perhaps I'm doing something wrong.
Could you kindly help me, or explain how I can add them to the library if I have the SVG file pieces?
Thanks and best regards.
Not sure what you are using to test, but if it is the demo app in this repo, then it's probably just a bug - they look like valid (2525C) symbols. I get the same (bad) results.
Hello csmoore,
then how can I proceed?
I do not know if I will be able to debug the code.
@Dash83UPV - I probably would not be able to debug either given the complexity of the datamodel/code - I believe this repo was primarily designed/tested with 2525D - but there are other tested repos/libraries you could use that do fully support 2525C.
Chris:
I haven't been following this discussion. What's the problem?
Bill McGrane
Chair, SSMC
DISA BDE4
Standards Management Branch
Comm:<PHONE_NUMBER>
DSN:<PHONE_NUMBER>
@wmcgrane - it's just a bug in the test app (screenshot above) / API when converting 2525D SIDCs/symbols to 2525C. We used the mapping table provided, so we know the data generally works for this - though we did have to make numerous manual changes to complete this mapping.
@Dash83UPV this is a 2525D repository, with some legacy mapping back to 2525C, but it's incomplete (as you found). Look for this code, and see if that's what you want: 20112200.
That said, any updates you want to provide to this repository are welcome, the XML is here: https://github.com/Esri/joint-military-symbology-xml/blob/dev/instance/Land_Installation.xml
As @csmoore said, depending on what you're trying to do, there are other repositories out there. So, you want to use 2525C emergency symbols in what platform? ArcGIS?
Joe:
A concern is the comment that Esri has found some incomplete mappings. Do you know which ones and how many? We'd like to be able to update and fix our spreadsheet.
|
GITHUB_ARCHIVE
|
Is there a way to track changes done to SQL Server Agent jobs?
If you had a reason to believe somebody is playing around with your job settings. How would you proceed to track who did it and from where the changes were made to the individual job?
I’m thinking along the lines of extended events.
How to solve :
For the sake of simplicity, I will assume that you want to track sysjobs, sysjobsteps and sysjobschedules. There may be other tables you want to monitor.
Option 1: SQL Audit (requires Enterprise Edition)
USE [master]
GO

-- Audit
CREATE SERVER AUDIT [jobs]
TO FILE (
    FILEPATH = N'PathToSomeFolder'
    ,MAXSIZE = 0 MB
    ,MAX_ROLLOVER_FILES = 2147483647
    ,RESERVE_DISK_SPACE = OFF
)
WITH (
    QUEUE_DELAY = 1000
    ,ON_FAILURE = CONTINUE
    ,AUDIT_GUID = 'e807469a-6c9d-43f1-af46-cf7b89ba898d'
)

ALTER SERVER AUDIT [jobs] WITH (STATE = ON)
GO

USE [msdb]
GO

CREATE DATABASE AUDIT SPECIFICATION [job_changes]
FOR SERVER AUDIT [jobs]
    ADD (UPDATE ON OBJECT::[dbo].[sysjobs] BY [public]),
    ADD (UPDATE ON OBJECT::[dbo].[sysjobsteps] BY [public]),
    ADD (UPDATE ON OBJECT::[dbo].[sysjobschedules] BY [public])
WITH (STATE = ON)
GO
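Once the audit is running, the captured events can be read back from the audit file. A sketch, assuming the same FILEPATH as above; the column list is a small subset of what sys.fn_get_audit_file returns:

```sql
SELECT event_time,
       server_principal_name,   -- who made the change
       object_name,             -- sysjobs / sysjobsteps / sysjobschedules
       statement                -- the UPDATE that was issued
FROM sys.fn_get_audit_file(N'PathToSomeFolder\*.sqlaudit', DEFAULT, DEFAULT)
ORDER BY event_time DESC;
```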
Option 2: Extended Events session
-- Step 1: extract object_id for the following tables
SELECT object_id FROM sys.tables
WHERE name IN ('sysjobs','sysjobsteps','sysjobschedules');

-- Step 2: use those object_ids in the following session:
CREATE EVENT SESSION [capture_job_changes] ON SERVER
ADD EVENT sqlserver.lock_acquired (
    SET collect_database_name = (0)
       ,collect_resource_description = (1)
    ACTION(sqlserver.client_app_name, sqlserver.is_system, sqlserver.server_principal_name)
    WHERE (
        [package0].[equal_boolean]([sqlserver].[is_system], (0))  -- user SPID
        AND [package0].[equal_uint64]([resource_type], (5))       -- OBJECT
        AND [package0].[equal_uint64]([database_id], (4))         -- msdb
        AND (
            [object_id] = 1125579048     -- sysjobs
            OR [object_id] = 1269579561  -- sysjobsteps
            OR [object_id] = 1477580302  -- sysjobschedules
        )
        AND (
            [mode] = (8)     -- IX
            OR [mode] = (5)  -- X
        )
    )
)
WITH (
    MAX_MEMORY = 20480 KB
    ,EVENT_RETENTION_MODE = ALLOW_MULTIPLE_EVENT_LOSS
    ,MAX_DISPATCH_LATENCY = 30 SECONDS
    ,MAX_EVENT_SIZE = 0 KB
    ,MEMORY_PARTITION_MODE = NONE
    ,TRACK_CAUSALITY = OFF
    ,STARTUP_STATE = OFF
);
GO

-- Step 3: add a convenient target to the session (file target?)
Regarding this second option, I wrote a blog post on a similar subject (tracking object usage) where I describe the details of the technique. Basically, you can consider IX/X locks as updates to the underlying tables.
This session captures the bare minimum, but you can add more fields/actions to it to capture the sql text or the computer name or whatever makes sense for you.
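As a sketch of the "Step 3" left open above, a file target can be attached and read back like this (the target filename is my own placeholder):

```sql
-- Attach a file target and start the session
ALTER EVENT SESSION [capture_job_changes] ON SERVER
ADD TARGET package0.event_file (SET filename = N'capture_job_changes.xel');
GO
ALTER EVENT SESSION [capture_job_changes] ON SERVER STATE = START;
GO

-- Read the captured events back as XML for inspection
SELECT CAST(event_data AS xml) AS event_xml
FROM sys.fn_xe_file_target_read_file(N'capture_job_changes*.xel', NULL, NULL, NULL);
```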
Note: Option 1 (SQL Audit) is the recommended method, as it is the one that was fully tested on our system.
Thank you 🙂
|
OPCFW_CODE
|
With the questions from the previous section churning in my head, I wrote a survey in late 2006 to ask why people contributed their time to write documentation. Although a lot of online documentation is on corporate or advertising-supported web sites (including those put up by O'Reilly Media), I explicitly excluded such sites, asking for responses only "from people who do this for non-monetary reasons." [Full text of survey]
The survey collected three types of information:
- What kinds of documentation people contribute
My goal was to cast the net as wide as possible. A visitor clicking through to the survey was greeted with the question, "Do you answer questions on mailing lists about how to use a software tool or language?" By starting with the most casual, intermittent contributions to online documentation, I signaled that visitors should be very inclusive in thinking about their contributions. The survey continues to talk not only about writing but about translation, editing, technical review, and administration.
- The projects to which the respondents contribute
I was hoping to learn whether different types of projects garner different types of help--in particular, to test the thesis mentioned in the previous section that free software is unique in its patterns of participation.
- The reasons for contributing
This section was the crux of the survey. It listed eight reasons that I thought were likely candidates for people writing online documentation, along with a text box where respondents could list "other factors."
Two of my reasons tested the mutual back-scratching mentioned in the previous section: mutual aid and community building. I also put in gratitude, which I considered another angle from which to view mutual aid, but which I thought many respondents would see as a separate motivation because of its emotional connotations.

Community building particularly interested me because I had mentioned in an earlier article on mailing lists that the key goal of a technical mailing list was to meet users' technical needs. A researcher in the field of democracy and policy bluntly informed me that I was wrong, and that the main goal of the list was to build community. This challenge to my purely instrumental approach intrigued me, but I wanted to test whether contributors had more directly self-rewarding goals as well.

I included two reasons that I thought would elicit self-seeking motives. The first was reputation building. I removed any ambiguity about its self-seeking basis, as I described in the previous section, by defining reputation building as follows:
Consultants, trainers, job seekers, authors, and others who hope to build a career go online to build up recognition and respect.
The other selfish reason I offered was personal growth. The inclusion of this reason drew on teachers' well-known observation that they learn as much by teaching as their students do.
Situated halfway between generalized, half-altruistic community building and directly self-seeking behavior was another reason: informal support. I could have tried to tie this reason directly to personal rewards by narrowing the definition to support given by product developers, as I defined in the previous section. But this would have been too hard to define, because as I pointed out there, it's hard to tell why someone is associated with a software project. I can't ferret out which people feel that they get a personal payback from offering project support and which do it altruistically. So this reason remains ambiguous as a motive for writing documentation.
Finally, I felt I had to include two reasons that didn't fit in with any particular research agenda, in order to capture all the motivations I think of. The first was enjoyment of writing. I know from my own experience, and that of my authors, that this must usually be present for successful writing, although in the field of technical documentation it would hardly be the primary motivation. The last reason I offered was thrills, which I defined as follows:
There's pleasure in seeing your insights turn up almost instantly on a forum with worldwide scope, as well as watching others succeed with your help and praise you for it.
This reason was prompted by an idea I drew from Joseph Weizenbaum's classic Computer Power and Human Reason and wrote up in another article, suggesting that a quest for power drives contributions to free software.
My documentation survey, after some editing by O'Reilly staff familiar with surveys, went up in January 2007. I contacted many leaders throughout the software field, asking them to promote participation among people with whom they had influence, and called on my fellow editors at O'Reilly to make similar appeals to leaders in their technical areas. Tim O'Reilly featured the survey on his popular O'Reilly Radar blog, and an announcement appeared on various O'Reilly Network sites.
I allowed the survey to run for three months, as long as new submissions were being added. Finally, noticing that additions had essentially come to a halt in April 2007, I shut down the survey.
I should make it clear that the sample was self-selected, so we have to be careful when drawing conclusions from the collection of responses. To keep the survey short and make it easy to respond, I decided not to collect demographic information that could have been interesting, such as age, gender, or national origin. Any survey that tried to remedy these deficiencies would have to be backed up by a much more costly and professional strategy for recruiting respondents.
[Complete results as a CSV file] My only changes to the responses were to remove a few phrases that identified individual respondents, in order to adhere to the survey's promise: "Individual responses are strictly confidential and names of participants in this survey will remain private."
Personal connections and coincidences determined where the 354 responses came from. Communities where I am well-known (such as Perl and GNU/Linux) or where leaders encouraged participation (GNU/Linux and GNOME) contributed the bulk of the responses. When leaders failed to act on our requests for publicity and communities didn't take much note of our postings, few responses emerged.
Therefore, one can't draw any conclusions from the pattern of responses from different projects. The most serious consequence of the skewed responses is that only a dozen respondents claimed to contribute documentation on proprietary software. This crippled my attempts to test for differences between free and proprietary software communities, as described in the previous section. [Breakdown of submissions by major projects]
The reasons for contributing documentation turned out to be the data that offered the most insights.
Providing a familiar four-point scale made it easy for people to fill out the survey, but it created a dilemma during interpretation. Suppose Respondent A uses a lot of 3s and 4s, whereas Respondent B has mostly 2s and 3s. Does Respondent A really care more about writing documentation than Respondent B? Should zealots have a greater weight in the results?
My reference to "zealots," of course, is a joke. The problem of weighting responses is endemic to any research based on language, because phrases such as "extremely important" mean different things to different people.
To resolve the dilemma, I use two measures for every calculation. The first measure is just the raw data chosen by each user: a 2 is always a 2 and a 4 is always a 4. If someone doesn't choose a category, the submission is rated a 0. The people who use consistently higher categories have a greater weight on results. I call this raw results.
The second measure adjusts the eight ratings made by each respondent so that every respondent has equal weight in the results. Calculating this measure is trivial: just add up all eight ratings by each respondent and divide each rating by the total. Any reason that a respondent leaves unrated, or rates at the lowest level ("Not important at all"), takes on a value of 0. These adjusted results for each respondent add up to 1.
Raw results are more fair, but they might not be more accurate. It all depends on how consistently respondents interpreted terms such as "extremely." I believe such phrases lead to a wide range of interpretations, so I trust the adjusted results more.
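As a small sketch of the two measures described above (the function name and sample ratings are my own, not from the survey data):

```python
# Each respondent rates eight reasons on a 0-4 scale; 0 means unrated
# or "Not important at all".

def adjusted(ratings):
    """Normalize one respondent's ratings so they sum to 1,
    giving every respondent equal weight in the totals."""
    total = sum(ratings)
    if total == 0:
        return [0.0] * len(ratings)
    return [r / total for r in ratings]

respondent_a = [4, 3, 4, 3, 0, 2, 4, 3]   # uses mostly 3s and 4s
respondent_b = [2, 1, 2, 2, 0, 1, 2, 1]   # uses mostly 1s and 2s

# Raw totals: respondent A dominates every reason.
raw = [a + b for a, b in zip(respondent_a, respondent_b)]

# Adjusted totals: each respondent contributes exactly 1 unit of weight.
adj = [x + y for x, y in zip(adjusted(respondent_a), adjusted(respondent_b))]
```

The difference between `raw` and `adj` is exactly the weighting question raised above: whether enthusiastic raters should count for more.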
Raw and adjusted results proved to be almost the same for the most basic measurement (Figures 1 and 2): what were the most popular reasons for contributing?
Figure 1. Total raw results
Figure 2. Total adjusted results
|
OPCFW_CODE
|
I found myself running into a wall in the past week as I attempted to figure out the best way to learn more about TEI and text markup. I'm excited about these methodologies and I've already considered using them in my thesis next semester. However, understanding theory will only go so far-- I need practice. There are several articles that teach theory but, when it comes to learning the programs that implement it, the field is pretty DIY. That all being said, to take a step toward solving my problem I've registered for a THATcamp taking place in Washington D.C. on the weekend of March 25th, and I'm very excited to meet others in the field. It'll be great to meet others interested in DH, and I'll certainly blog my experience afterwards!
This blog is going to be two parts this week. Because I'm piggybacking off of my last two blogs, the reading material I have for this post is one main article and two resources, which have been helpful to me in understanding the building blocks of text markup.
First up, the article! My main reading for this week was “The Text Encoding Initiative and the Study of Literature” by James Cummings.
Cummings introduces his article with a brief history of the Text Encoding Initiative (TEI) by introducing some of the guidelines and sponsors that together make up the initiative. He states the chapter's thesis as follows:
This chapter will examine some of the history and theoretical and methodological assumptions embodied in the text-encoding framework recommended by the TEI. It is not intended to be a general introduction...nor is it exhaustive in its consideration of issues...This chapter includes a sampling of some of the history, a few of the issues, and some of the methodological assumptions...that the TEI makes.

It is still fascinating to me that TEI is such a young endeavor. According to Cummings, it was formed at a conference at Vassar College in 1987, and very few of the principles established at that time have changed. This is exciting because the field is new and accessible-- the people who dive in are free to determine how the tools are used.
I've chosen this article because I feel it's important not only to have a grasp of the technologies, but also to understand the history. The article includes technical language relating to different markup languages, SGML (Standard Generalized Markup Language) and XML (Extensible Markup Language), explains the history of these languages, and describes how they are used. I was interested in Cummings's explanation of the transition from GML (Generalized Markup Language), a noted "milestone system based on character flagging, enabling basic structural markup of electronic documents for display and printing," to SGML, which was "originally intended for the sharing of documents throughout large organizations." As time went on, SGML proved not universal enough, and XML was adopted and is still used because of its flexible nature.
XML has been increasingly popular as a temporary storage format for web-based user interfaces. Its all-pervasive applicability has meant not only that there are numerous tools which read, write, transform, or otherwise manipulate XML as an application-, operating-system- and hardware-independent format, but also that there is as much training and support available.

Throughout the article Cummings highlights key points and goals of the TEI. The design goals section examines the standards set for the TEI to be as straightforward and accessible as possible for anyone interested in learning the text-encoding methodology. He examines the community-centric nature of the TEI and the emphasis on keeping the field open and collaborative. I'm excited to be coming into the academic world at this time because, although the DH field has a distinct technological learning curve, I'd rather face that curve in a community setting than in the traditional closed-off world of academic hazing.
Cummings also discusses the user-centric nature of the TEI. Due to the community-based nature of the field, it must deliver what users of all different disciplines need. This can be a challenge, but it also exemplifies the versatile nature of the beast. As I have explained, I'm interested in using text markup and the TEI in order to see what they can uncover about texts that have been close-read to death. In the field of literature, we all know close reading; we all know how to compare elements of a book. I want to take this to the next level: I want to see what technology can show me, and I want to learn how to use the programs.
Cummings explains that the TEI may have been influenced by New Criticism, a school of literary criticism with which I am quite familiar, and Cummings purports that the TEI, instead of reacting against this structuralism, as many poststructuralists might desire, in fact is compatible with New Criticism, as "the TEI's assumptions of markup theory as basically structuralist in nature as it pairs record (the text) with interpretation (markup and metadata)." This is something I would like to delve into further, because I can understand both sides of the New Criticism comparison argument.
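To make that record/interpretation pairing concrete, here is a minimal, hypothetical TEI fragment of my own devising (not taken from Cummings's article): the transcribed text is the record, and the elements wrapped around it are the editor's interpretation.

```xml
<!-- The text itself is the "record"; the markup is the "interpretation". -->
<lg type="stanza">
  <l n="1">Shall I compare thee to a <term>summer</term>'s day?</l>
  <l n="2">Thou art more lovely and more <term>temperate</term>:</l>
</lg>
```

Here `lg` (line group), `l` (verse line), and `term` are standard TEI elements; which words an editor chooses to tag as a `term` is already a structuralist reading of the text.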
I highly suggest reading this article, as Cummings successfully accomplishes his proposed thesis statement. I came away feeling as if I had learned the key points in the history of the TEI without being drowned in technical conversation. I am increasingly interested in learning to code, as I am amazed by the things we can achieve with computer programs.
If you are interested in the technical side, Stanford University Digital Humanities department website includes many helpful resources, particularly "Metadata and Text Markup," which further explains buzzwords and phrases in the field, and "Content Based Analysis," which explains more about text content mining.
Additionally, the TEI website has several helpful links that may take one down many rabbit holes. I got stuck for a long while going through project examples which use TEI encoding.
|
OPCFW_CODE
|
It saved me a few hours of setup at least and brought over settings that I'd have had to recreate. You should also create an image backup and store it onto an external drive. This will show you how to do an upgrade installation with an Upgrade version of Windows 7. You might for instance have an extensive amount of software installed, half of it missing original install media. I decided to let Windows 7 format the hard drive so I could start from scratch.
They initially had some difficulties during the install froze after Windows 7. No 32-bit version of Windows can be upgrade installed over to turn it into a 64-bit install with programs, data and settings still intact. For the first, it takes time; I've done this now almost 20 times and the shortest project took a bit over 5 hours. These names can be anything you want. You'll just have to move it manually. It depends on your computer hardware and the service pack you have installed with Windows Vista. This is different than the upgrade adviser.
You shouldn't use your computer while this process is running. No 32-bit version of Windows can be upgrade-installed over to turn it into a 64-bit install with programs, data and settings still intact. Great tutorial man, was very informative See ya!!! Due to performance reasons, it cannot run practically on old Pentium I class processors. What are my Windows 7 choices? If you upgrade your existing copy, you will be transferring over files that might not work or cause the system to fail. I fixed them by going to the Windows Live website and reinstalling the Live Essentials programs.
A well planned and done in-place upgrade on a well maintained system is not a bad alternative, and I for instance have never met any problems. Nice post and it's good to get another perspective beyond the two obvious paths to upgrade, namely the pave my machine or upgrade twice via Vista options. Depending on your system, this can take anything from half an hour to several hours. It shows you what user has what stuff. Because of this limitation, this method can only be used when upgrading to a 32-bit Windows 7. How old is too old? But you will save time.
Most likely to have fewest problems: Products from Microsoft and other major vendors. But is a pain-free process what you'll face if you make the move? A Google shopping search for windows 7 will produce plenty of choices. Application Data is now the abbreviated AppData. It spends some time 15 minutes or so in my case estimating just how much non-Program data is on the machine. Would this migration pose any issues you're aware of for when the full release version of Windows 7 becomes publicly available in the fall? Just like Vista, it is the same process. Well, the time has come for you to move on because , starting today.
Before you do that, you'll need to back up your data files and note your settings. Whichever way you decide to go, once you're done, do one last thing. The responses suggesting otherwise are incorrect! How difficult will the migration really be? You might also want to check I want to make Windows installation better. Desktop Linux: Unlike Windows 7 or 8, desktop Linux distributions like Ubuntu are completely free. See here for approved upgrade paths:. You need only purchase and use the upgrade pack. So if you have a Compaq like me and it says 512 or something technically you can't.
Disclaimer: I do work for Microsoft, but I don't work with the Win7 team so this is just one dude's opinion. Backup and fresh start is needed for such a transition, and it can be performed using the upgrade pack! Hit next and wait a while. Make sure your external drive is connected. Verify you can even put Windows 7 on that computer using this program:. Some versions won't even permit a paid upgrade and it must be done by clean install. If you want to save a lot of this hassle, you can get a program that will move all your data and your installed applications for you. Documents, Photos, Accounts, all brought over cleanly.
You'll want them all handy in a place outside the computer you're upgrading. Unfortunately, those discounts are done. You can also use another rig to download it. How to read the table? The table below shows possible scenarios. If it fails, your data is still in Windows. Windows 7 gives you options: Home, Work, or Public.
Seems to take a bit long to get this accomplished. Most of the ones constructed by Thomas Edison over 100 years ago are still working, but new ones die at least every two or three years. Why should users have to risk downloading malware to gain features that other operating systems come with? If you're human and you still have a few devices with issues, try looking at the Windows 7 Upgrade Adviser to see if the device and its new driver are listed. More detailed instructions are available at. It scans your system and makes an inventory of every installed program it can recognize, along with settings and data files. Windows 7 will put your data in a Windows.
If you don't have an external hard drive you won't be able to use Windows Easy Transfer. At this point, you'll be able to do things like set up a password, set security preferences, set time and date, etc. It's all there in a folder called Windows. For this reason, I'm going to use a lot of screenshots so you don't have to give up only because a new dialog is so strange you are not sure if you can continue without doing any harm. Then click Start, choose computer, and under hard disk drives, choose the external drive where you stored your transfer data. And you have device driver issues.
|
OPCFW_CODE
|
In the fast-paced world of API-driven development, securing communication between services using authentication and authorization mechanisms such as JSON Web Tokens (JWTs) is crucial. Developers often employ mocking tools to emulate API responses and behaviours, streamlining the development and testing process.
Many API testing and mocking tools excel at generating fake tokens for testing against live APIs with OAuth authorization already in place. However, developers often face challenges when building API consumers and the secured API endpoint is not yet ready. This gap in the market leaves developers searching for solutions that can accurately simulate secured API endpoints during development. Orbital, a mocking tool by Foci Solutions (https://orbitalmock.app/), steps in to address this need, providing a comprehensive mocking solution that allows developers to effectively test API consumers even when the secured API endpoints are still under development.
By validating JWTs, Orbital empowers developers to replicate real-world API situations, ensuring their applications manage authentication and authorization effectively, ultimately leading to more secure and reliable applications.
To set up JWT validation in Orbital, we’ll guide you through a simple three-step process using the Orbital Designer. This process will enable you to secure your mock APIs effectively, allowing you to simulate real-world scenarios with ease.
1. Configure the mock to use Token Validation: In the Orbital Designer, enable Token Validation when creating a new mock or editing an existing mock. This will allow your mock to perform JWT validation for incoming requests.
2. Configure the individual endpoints to use JWT validation: Next, navigate to each endpoint you wish to secure with JWT validation. From the ‘Validation Type’ drop-down list, select ‘JWT Validation’. This will ensure that each specified endpoint validates the JWT before processing the request.
3. Optionally, configure the endpoints to check JWT content: If you want to validate the content of a JWT in your mock API endpoints, you can configure specific token details to be checked. From the ‘Validation Type’ drop-down list, select ‘JWT Validation and Contents’. On the endpoint Request tab, specify the properties you want to check and their expected values. This allows the endpoint not only to validate the token but also to ensure that it contains the correct information.
In addition to configuring JWT validation in the Orbital Designer, you will also need to provide the OpenID discovery endpoint when starting the Orbital mock server. This endpoint allows Orbital to fetch the public keys required for validating a JWT in incoming requests.
To configure Orbital to use the discovery endpoint, follow these steps:
- Determine the discovery endpoint: Locate the discovery endpoint for your identity provider. This endpoint usually follows the pattern `https://[your_identity_provider]/.well-known/openid-configuration`. You can find the endpoint in your identity provider’s documentation or by contacting their support.
- Run the Orbital mock server Docker command with the environment variable set: When starting the Orbital mock server using the focisolutions/orbitalmock:latest image, pass the discovery endpoint as the value for the ORBITAL_PUB_KEYS__JWKS_ENDPOINT environment variable.
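As a sketch, that command might look like the following. The identity-provider hostname and the host port mapping are placeholders of my own, not values from Orbital's documentation; only the image name and environment variable come from the steps above:

```shell
docker run -d \
  -p 8080:80 \
  -e ORBITAL_PUB_KEYS__JWKS_ENDPOINT="https://your-idp.example.com/.well-known/openid-configuration" \
  focisolutions/orbitalmock:latest
```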
By following these steps and providing the appropriate configuration, you’ll be able to effectively set up JWT validation in your Orbital mock APIs, allowing you to create a more realistic testing environment for your applications.
|
OPCFW_CODE
|
- Cisco Employee,
VTP PRUNING – USEFUL or a PIA ??
First of all a quick note on what VTP Pruning is:
VTP ensures that all switches in the VTP domain are aware of all VLANs. However, there are occasions when VTP can create unnecessary traffic. All unknown unicasts and broadcasts in a VLAN are flooded over the entire VLAN. All switches in the network receive all broadcasts, even in situations in which few users are connected in that VLAN. VTP pruning is a feature that you use in order to eliminate or prune this unnecessary traffic.
To read more, please refer the following link:
In my experience in LAN Switching TAC, I have come across network connectivity problems that have taken a lot of time to solve. Symptoms observed are as follows
- Intermittent connectivity to hosts connected to an access switch
- One-way audio in IP telephony
- The default gateway cannot ping a few hosts; when traffic is initiated from a host, pings from the default gateway magically start working and all other devices can ping these hosts
- Clients connected to the same access switch can ping each other, but clients connected to an upstream switch cannot, and so on.
Typical setup is as follows
CORE MLS(gi1/1)-----trunk------Access Switch----Clients
In most cases CORE Switch is the default gateway for the Clients.
Typical Troubleshooting is as follows
1) Is the ARP complete ?
CORE#sh ip arp 10.10.1.144
Protocol  Address       Age (min)  Hardware Addr   Type  Interface
Internet  10.10.1.144   100        8cb6.4faa.8a41  ARPA  Vlan93
YES IT IS
2) Is the switch learning mac address ?
CORE# sh mac-address-table address 8cb6.4faa.8a41 <NOT LEARNING MAC ADDRESS>
That should not cause connectivity loss, as the packets will be flooded and will make their way to the clients
3) Let's check the spanning tree status for vlan 93
CORE # show spanning-tree vlan 93
<SNIP>
Gigabitethernet 1/1 shows forwarding
Well, spanning tree status is forwarding – so my packets are supposed to leave interface Gi1/1
Let's SPAN interface Gi1/1 and see if packets are leaving – Result: SPAN captures show no packets leaving the interface.
Crazy!! So it is the CORE switch that is the culprit – let's replace it??
NO WAY -- Are we sure it is a hardware issue – No
What have we missed? Hmm.. Let's add a static mac entry and see if that helps
CORE(config)# mac address-table static 8cb6.4faa.8a41 vlan 93 int gi1/1
CORE# ping 10.10.1.144
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.10.1.144, timeout is 2 seconds:
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
Let's remove the static mac address and now initiate traffic from the end client –
the mac address is learned and everyone in the network can reach the client
From the troubleshooting performed so far, the observations are as follows: we intermittently lose connectivity to the client; when traffic is initiated from the client, we are immediately able to re-establish connectivity; and the problem is seen when the mac address of the client ages out on the CORE switch.
So what feature can block unicast flooding?
switchport block unicast command on the interface - not configured here
VTP pruning – Ah ha!!
Let us check if any vlan is pruned
CORE#show int gi1/1 trunk
Port Mode Encapsulation Status Native vlan
Gi1/1 on 802.1q trunking 1
Port Vlans allowed on trunk
Port Vlans allowed and active in management domain
Port Vlans in spanning tree forwarding state and not pruned
CORE#show int gi1/1 pruning
Port Vlans pruned for lack of request by neighbor
-> All these VLANs are pruned because the neighbor did not request them
Port Vlan traffic requested of neighbor
-> This is what CORE Switch is requesting from its neighbor switch
So VTP pruning was the culprit – As the mac address of end client aged out, the switch would have to unicast flood the packet to all ports on
VLAN 93, which is not sent on Gi1/1 as VLAN 93 was pruned.
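The failure mode can be captured in a toy Python model (the two-port topology and method names are invented for illustration; a real switch's behavior is far richer):

```python
# Toy model of the behavior above: a switch floods an unknown unicast
# frame to every port carrying the VLAN, EXCEPT trunk ports on which VTP
# has pruned that VLAN.
class Switch:
    def __init__(self, ports, pruned=()):
        self.ports = set(ports)      # ports carrying VLAN 93
        self.pruned = set(pruned)    # trunk ports pruned for VLAN 93
        self.mac_table = {}          # mac -> port, learned dynamically

    def learn(self, mac, port):
        # traffic FROM the client (re)populates the table
        self.mac_table[mac] = port

    def age_out(self, mac):
        # the aging timer expires while the client is idle
        self.mac_table.pop(mac, None)

    def egress_ports(self, dst_mac):
        """Ports a frame destined to dst_mac is sent out of."""
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}   # known unicast: one port
        return self.ports - self.pruned        # unknown unicast: flood

core = Switch(ports={"Gi1/1", "Gi1/2"}, pruned={"Gi1/1"})
client = "8cb6.4faa.8a41"
core.learn(client, "Gi1/1")
print(core.egress_ports(client))   # {'Gi1/1'}: reachable while the entry exists
core.age_out(client)
print(core.egress_ports(client))   # {'Gi1/2'}: the flood skips pruned Gi1/1
```

As soon as the client transmits again, learn() repopulates the table and connectivity returns, which matches the intermittent symptom observed.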
Once VTP pruning is turned off, all connectivity issues are resolved.
When can such a scenario occur?
1) In environments that have switches in VTP server/client mode along with a switch that may be in VTP transparent mode
2) When a non-Cisco switch that does not understand VTP is connected to a Cisco switch which has VTP turned on (in most cases)
So please keep in mind that VTP pruning can be the cause of connectivity issues.
In an all-Cisco environment where all switches are configured to be in VTP server or client mode, you can turn on VTP pruning, as this will help limit unnecessary flooding in the network and is of great help.
After all, VTP pruning need not be a PIA
(PS: For those of you who are wondering what PIA stands for.. Don’t worry about it )
[cap-talk] MinorFs & ssh secret keys
capibara at xs4all.nl
Mon Jul 7 06:07:57 CDT 2008
Perhaps an example of MinorFs and 2rulethemall would be useful in order
for you all to understand the practical use and usage.
I would be very interested in any comments or feedback, both positive
and negative, and would also be interested to hear other people's
perspectives on this approach versus the usage of powerbox constructs, and
on if and when which approach would be more suitable.
This text is a small walkthrough of using MinorFs for storing ssh
private keys. The concept behind using MinorFs for ssh private key storage
is that MinorFs provides private storage for pseudo-persistent processes.
Normally when storing an ssh private key, not only ssh but any process
running with the same uid will be able to read the private key file.
People end up setting passwords on their private key files just to protect
the private key from the software they run themselves. With MinorFs, the
ssh client is promoted to the status of a pseudo-persistent process with
its own private disk storage.
Before we can start using MinorFs, we will need to set an administration
password for the 2rulethemall tool. 2rulethemall is a privileged program
that has a capability to the top-level CapFs directory for the invoking
user. On validation of the password, 2rulethemall will delegate this
capability to the user by disclosing the strong path to this top node.
So let's first protect the top-level node from malware by setting a
password for 2rulethemall:
2rulethemall NO PASSWORD SET !!!
Now we will need to introduce MinorFs to our pseudo-persistent ssh process
by invoking ssh. We will let ssh try to use the non-existent private key
in order to let MinorViewFs gain knowledge about our new pseudo-persistent
process. ssh will ask for a password, but pressing CTRL-C at that point
will be OK.
~> ssh rob at bogus.polacanthus.net -i /mnt/minorfs/priv/home/id_rsa
Warning: Identity file /mnt/minorfs/priv/home/id_rsa not accessible: No
such file or directory.
For security purposes you may want to disable your history at this point,
in order to avoid putting powerful strongpaths into the history file.
Now that MinorViewFs has created the private storage for our pseudo-persistent
ssh process, we must use our admin tool to locate and disclose
~> grep ssh
We have found that, in our case,
is a path that gives access to 'all' instances of ssh (with the same
parent chain and the same uid). For our '1st' instance we will need to use
the 'inst1' subdirectory.
We can now run ssh-keygen in order to generate the private key. Don't
provide any passphrase!
Generating public/private rsa key pair.
Enter file in which to save the key (/home/rob/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in
Your public key has been saved in
Now we are done with the powerful administration strongpaths, so it's safe
to turn the history back on.
We can now copy the id_rsa.pub file to the server:
~> scp id_rsa.pub rob at bogus.polacanthus.net:
id_rsa.pub 100% 395 0.4KB/s 00:00
~> ssh rob at bogus.polacanthus.net
rob at bogus:~% cp id_rsa.pub .ssh/authorized_keys
rob at bogus:~% cp id_rsa.pub .ssh/authorized_keys2
rob at bogus:~% logout
Now we are done, and we can invoke ssh without a password and without
a passphrase on our private key.
~> ssh rmeijer at xs2.xs4all.nl -i /mnt/minorfs/priv/home/id_rsa
rob at bogus:~%
You can check with ls that /mnt/minorfs/priv/home/id_rsa is not available
when any other process tries to access it;
only (the first instance of) ssh, and only with this particular chain of
parents, can access the secret key.
If you read different manuals on how to compile OpenCL software on Linux, you can get dizzy from all the LD parameters. Also, when installing the SDKs from AMD, Intel and NVIDIA, you get different locations for libraries, header files, etc. Now that GPGPU is old-fashioned and we are going for heterogeneous programming, the chances are higher that you will have more SDKs on your machine. Even if you want to keep things the way you have them, reading this article gives you insight into the design behind it all. Note that Intel’s drivers provide OpenCL support only for their CPUs, not for their GPUs.
As my mother said when I was young: “actually cleaning up is very simple”. I’m busy creating a PPA for this, but that will take some more time.
First the idea. For developers OpenCL consists of 5 parts:
- GPUs-only: drivers with OpenCL-support
- The OpenCL header-files
- Vendor specific libraries (needed when using -lOpenCL)
- libOpenCL.so -> a special driver
- An installable client driver
Currently GPU-drivers are always OpenCL-capable, so you only need to secure 4 steps. These are discussed below.
Please note that in certain 64-bit distributions there is not lib64, but only ‘lib’ and ‘lib32’. If that is the case for you, you can use the commands that are mentioned with 32-bit.
update: A new package “opencl-headers” installs exactly these files for you. Even better: ocl-icd-opencl-dev installs everything for OpenCL 1.2 (problems with OpenCL 2.0 still)
No more export CPPFLAGS="-I/some_directory/opencl_sdk/include" at last! All SDKs provide the OpenCL 1.1 header files originating from Khronos (or should).
We only need to put all headers found from the Khronos-webpage in /usr/include/CL/:
sudo mkdir CL
sudo wget http://www.khronos.org/registry/cl/api/1.2/cl_d3d10.h \
If you are on mobile, also get EGL:
sudo wget http://www.khronos.org/registry/cl/api/1.2/cl_egl.h
If you want 1.1 headers, do the following:
sudo mkdir CL
sudo wget http://www.khronos.org/registry/cl/api/1.1/cl_d3d10.h \
sudo wget http://www.khronos.org/registry/cl/api/1.1/cl_egl.h
Now you can be sure you have the correct header-files.
All vendors have their favourite spot to put their libraries, but a “just put your coat wherever you find a spot” approach is not the best idea. According to the best answer on Stack Overflow, the libraries should be in /usr/local/lib, but since these are shared libraries, Intel has found a good location: /usr/lib/OpenCL/vendors/. There was some discussion about “vendors”, but think of various wrapper libraries, IBM’s OpenCL Common Runtime, and such. So I agree with their choice.
update: Intel recently has completely changed the drivers for Linux. They now install in /opt/intel. Best is to copy all files from /opt/intel/opencl-1.2-x.x.xxxxxx to /usr/lib/OpenCL/vendors/intel to keep it orderly. If you choose not to, replace /usr/lib/OpenCL/vendors/intel with /opt/intel/opencl-1.2-x.x.xxxxxx.
The provided rpm can be converted to deb and then works if libnuma1 is installed:
apt-get install libnuma1
alien *.rpm
dpkg -i *.deb
Though they’ve put their libraries at a nice spot, they made a little mistake. They put their libOpenCL.so in /usr/lib or /usr/lib64 instead of using a symbolic link. Below I discuss everything around libOpenCL.so separately, since this is an important library. You need to copy it to the right directory. For 64 bit:
sudo cp /usr/lib64/libOpenCL.so /usr/lib64/OpenCL/vendors/intel/
For 32 bit systems:
sudo cp /usr/lib/libOpenCL.so /usr/lib/OpenCL/vendors/intel
It is very possible that if you install another OpenCL SDK later, this library gets overwritten. Not a real problem, as explained later, but now you know.
To make the libraries available, I created opencl-vendor-intel.conf in /etc/ld.so.conf.d with the content (64 bit):
echo "/usr/lib64/OpenCL/vendors/intel" > /etc/ld.so.conf.d/opencl-vendor-intel.conf
In case you need to have 32-bit libraries too, you can add the location at the end of that file. And for 32 bit systems:
echo "/usr/lib/OpenCL/vendors/intel" > /etc/ld.so.conf.d/opencl-vendor-intel.conf
Then run ldconfig to start using the new LD location.
Edit: as suggested by Steffen Moeller in the comments, installing the deb-files in http://wiki.debian.org/ATIStream is easier. Just check if the files are at the right place.
The AMD APP installer lets you choose where you want to put the SDK. Just put it somewhere you want the SDK to be. Go to the root of the AMD-APP-SDK and move the lib directory to /usr/lib(64)/OpenCL/vendors/. For 64 bit systems:
mkdir -p /usr/lib64/OpenCL/vendors/amd/
mv lib/x86_64/* /usr/lib64/OpenCL/vendors/amd/
And for 32 bit systems:
mkdir -p /usr/lib/OpenCL/vendors/amd/
mv lib/x86/* /usr/lib/OpenCL/vendors/amd/
Then we need to add them to ld-config. For 64 bit:
echo "/usr/lib64/OpenCL/vendors/amd" > /etc/ld.so.conf.d/opencl-vendor-amd.conf
And for 32 bit systems:
echo "/usr/lib/OpenCL/vendors/amd" > /etc/ld.so.conf.d/opencl-vendor-amd.conf
This is somewhat hard. You probably want to use CUDA too, so for that reason we leave the libraries in /usr/local/cuda/lib/ to avoid breaking software. Of course I prefer them to be tidied up under /usr/lib(64)/OpenCL/vendors/ but it is no use to make a symbolic link. Installer can be found here.
Then we need to add them to ld-config, if you haven’t done that. For 64 bit:
echo "/usr/local/cuda/lib64" > /etc/ld.so.conf.d/opencl-vendor-nvidia.conf
echo "/usr/local/cuda/lib" >> /etc/ld.so.conf.d/opencl-vendor-nvidia.conf
For 32 bit:
echo "/usr/local/cuda/lib" > /etc/ld.so.conf.d/opencl-vendor-nvidia.conf
This library handles selecting the platform (the vendor) and provides the correct libraries to the software that needs the functionality. It is located under /usr/lib or /usr/lib64. You need to select which vendor's version you want to use. I personally think this driver should be open sourced and not come from a specific vendor. Pick one (first line 64 bit, second line 32 bit) out of these 6. But... from my own experience, both AMD and Intel give you versions that work best with all 3 platforms, so I suggest you go for one of those.
Khronos open source
Get the “OpenCL 1.2 Installable Client Driver (ICD) Loader” from http://www.khronos.org/registry/cl/ and build the project (needs cmake). In the bin directory there will be a libOpenCL.so.1.2. Remove all files starting with libOpenCL.so* and copy libOpenCL.so.1.2 to /usr/lib/.
sudo ln -s /usr/lib64/libOpenCL.so /usr/lib64/libOpenCL.so.1.2
sudo ln -s /usr/lib/libOpenCL.so /usr/lib/libOpenCL.so.1.2
I use this myself. Will add the binaries later.
sudo ln -s /usr/lib64/OpenCL/vendors/amd/libOpenCL.so /usr/lib64/libOpenCL.so.1.2
sudo ln -s /usr/lib/OpenCL/vendors/amd/libOpenCL.so /usr/lib/libOpenCL.so.1.2
Strongly discouraged to use this libOpenCL-library!
sudo ln -s /usr/local/cuda/lib64/libOpenCL.so /usr/lib64/libOpenCL.so.1.1
sudo ln -s /usr/local/cuda/lib/libOpenCL.so /usr/lib/libOpenCL.so.1.1
sudo ln -s /usr/lib64/OpenCL/vendors/intel/libOpenCL.so /usr/lib64/libOpenCL.so.1.2
sudo ln -s /usr/lib/OpenCL/vendors/intel/libOpenCL.so /usr/lib/libOpenCL.so.1.2
Then we add libOpenCL.so.1, libOpenCL.so.1.0 and libOpenCL.so:
sudo ln -s /usr/lib64/libOpenCL.so.1.2 /usr/lib64/libOpenCL.so.1
sudo ln -s /usr/lib64/libOpenCL.so.1 /usr/lib64/libOpenCL.so
sudo ln -s /usr/lib/libOpenCL.so.1.2 /usr/lib/libOpenCL.so.1
sudo ln -s /usr/lib/libOpenCL.so.1 /usr/lib/libOpenCL.so
As libOpenCL.so.1.2 is/should be backwards compatible with libOpenCL.so.1.0 and libOpenCL.so.1.1, you can choose to make those symbolic links too. Only do this when you have wrongly linked software – link to libOpenCL.so.1 in your own software.
Be sure to link to libOpenCL.so.1.1 if you chose to use NVidia’s library.
Installable Client Drivers
important: If you have chosen to leave the files in the original locations and skipped most of this tutorial, be sure to put the whole path and not only the filename in the icd-files.
If you list the platforms available, you actually list the ICDs. If you have written your own compiler, then you can easily add it without interfering with others. Like you can access an Intel CPU via both the AMD-ICD and Intel-ICD.
All ICDs need to be put in /etc/OpenCL/vendors/. You’ll find them already there, or you have to create them. This is how they are provided now, but I omitted the library location (which was in nvidia.icd), since it still gives errors if the ldconfig steps were not done correctly.
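For example, tying this to the earlier warning about full paths: if the NVIDIA library were left under /usr/local/cuda, a hypothetical nvidia.icd would contain the single line below (the path is an assumption, not taken from an actual installation; verify where your installer put the library):

```
/usr/local/cuda/lib64/libcuda.so
```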
echo "libatiocl64.so" | sudo tee /etc/OpenCL/vendors/atiocl64.icd
echo "libatiocl32.so" | sudo tee /etc/OpenCL/vendors/atiocl32.icd
echo "libintelocl.so" | sudo tee /etc/OpenCL/vendors/intelocl.icd
echo "libcuda.so" | sudo tee /etc/OpenCL/vendors/nvidia.icd
You can pick any name for the icd-files. AMD might replace ‘ati’ by ‘amd’ in their libraries, so if it stops working when updating, you know where to look.
Hello, although I am a new member here, I have many years of experience on computers and I am the IT Manager for a large Stock broking firm.
My problems started 15 days ago when I decided to build a PC with the following configuration:
Asus Striker Extreme motherboard
2x EVGA 8800GTX in SLI
2GB OCZ 1066 Vida SLI Edition
2x Raptor 150 in RAID 0 + 2x Seagate 350 in RAID 0
Plextor 755 DVDRW + Lite-On 167T reader
Enermax Galaxy 1000W PSU
Stacker 830 case with five 120mm fans
The build was easy, and the only problems I had to begin with were sound crackling in FPS-intensive games and a difference of 10 degrees C between the two 8800s, which I initially attributed to slot location in the case.
After some testing, one of my two GTXs was 10 degrees hotter than the other no matter where I placed it, top slot or bottom. I took both cards out to check if one of them had the known faulty resistor issue, but both of them had 40C resistors, and on one of them, the coldest one, you could see that the resistor had been replaced.
I proceeded by applying AS5 only on the core of the warmer card; that made no difference in temps whatsoever, so the next day I went to my retailer and swapped the card for another. This is when my real problems began....
After installing the new card in my PC, I flashed it to the new EVGA BIOS and it was instantly recognized in Windows, and all benchmarks and games saw it within specs. After a whole day on the PC, while I was browsing some forums in IE, the PC froze without a BSOD. I hit the reset button, and after logging in I found an event ID 14 nv error in my event viewer. There was no information about the error anywhere on the net. This was with the 97.44 drivers.
The next day I loaded the new drivers that had just come out, and again while browsing the net I was greeted with another freeze with the same error in the event viewer. I was furious....
These are my findings:
Event ID 14 nv unknown error means that not enough power is reaching the card. After a lot of tries I managed to replicate the error by booting with only one cable on the GTX. Upon reboots the system worked each time without the error, but at some point 5 or 6 hours later it froze. Furthermore, if the PC was powered down for more than 20 minutes, the NVIDIA power sentinel would come up informing me that my cards were not receiving enough power.
I tried to test each card individually and let each card cool for an hour before booting up the pc with it.
What I found out was that my "new" card was the culprit as it was giving the error when cold.
Monday I will go to my retailer and swap the card again and hopefully this time things will work as they are supposed to.
I hope this helps someone to not go through all the rigorous testing I have gone through in order to pinpoint the exact problem.
Yes, I checked their output with a multimeter: it was 12.4V for all 4 of them, which is exactly what Everest reports. Only one of the two 8800GTXs would pop up the NVIDIA power sentinel error, and only when the card itself was cold, even when the rest of the system had been running for some time prior to installation.
I have 2 side case fans, as my Zalman 9700NT leaves no space for another one, 2 in the front for intake and one in the back for exhaust, plus my 2 PSU fans.
5.D. Feature skeletons provide important shape characterization
5.D.1. Grain boundary, cell wall, and fiber images
Broad thresholded lines can be thinned to single pixel width as was shown in an example in Section 4.A.2. This skeletonization is accomplished by erosion with rules that prevent removing the final line of pixels ( IP•Morphology–>Skeletonize ). Note that the skeleton is “8-connected” meaning that a pixel is assumed to touch any of its eight possible neighbors. But by this same criterion the cells or features that it divides would also be continuous, so it is often necessary to convert the skeleton to a “4-connected” line in which pixels touch only their four edge-sharing neighbors. These lines can separate the regions on either side.
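The two adjacency rules can be made precise with a few lines of Python (the helper function is illustrative, not part of the software described here):

```python
# Minimal sketch of the adjacency rules above: diagonally adjacent pixels
# touch under 8-connectivity but not under 4-connectivity, which is why an
# 8-connected skeleton line may fail to actually separate the regions on
# either side of it.
def touching(p, q, connectivity=8):
    dy, dx = abs(p[0] - q[0]), abs(p[1] - q[1])
    if connectivity == 4:
        return dy + dx == 1          # edge-sharing neighbors only
    return 0 < max(dy, dx) <= 1      # edge- or corner-sharing neighbors

print(touching((0, 0), (1, 1), connectivity=8))   # True
print(touching((0, 0), (1, 1), connectivity=4))   # False
```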
End pixels in skeletons have only a single neighbor, whereas most skeleton pixels have two neighbors and those at branch points or nodes have 3 or 4. For overlapped fiber images, the number of fibers can be determined as half the number of end points, and the mean length as the total length divided by number as shown in the example. This method is also correct for structures that extend beyond the image boundaries. A single end point is interpreted as one-half of a fiber in determining an average (the other end would be counted in another field of view).
Original XFibers image Skeleton
Counting the ends and measuring the length
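The counting rule above can be sketched in plain Python (the tiny test image and the function name are invented for illustration):

```python
# Sketch of the rule above: an end pixel of an 8-connected skeleton has
# exactly one neighbor, so fibers = ends / 2 and mean fiber length =
# total skeleton pixels / fiber count.
def fiber_stats(skel):
    h, w = len(skel), len(skel[0])
    ends = total = 0
    for y in range(h):
        for x in range(w):
            if not skel[y][x]:
                continue
            total += 1
            # count set pixels in the 8-neighborhood (excluding the pixel itself)
            neighbors = sum(skel[yy][xx]
                            for yy in range(max(y - 1, 0), min(y + 2, h))
                            for xx in range(max(x - 1, 0), min(x + 2, w))) - 1
            if neighbors == 1:
                ends += 1
    fibers = ends // 2
    return fibers, total / fibers if fibers else 0.0

# two separate horizontal 5-pixel "fibers" in a 5x7 image
img = [[0] * 7 for _ in range(5)]
for x in range(1, 6):
    img[1][x] = img[3][x] = 1
print(fiber_stats(img))   # (2, 5.0)
```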
Grain boundaries and cell walls form tessellations without end points, so pruning of branches with ends is an appropriate clean-up method. In the first example shown, the grain boundary tessellation is produced by IP•Morphology–>Pruned Skeleton .
Removal or measurement of short branches based on length also provides a powerful tool. IP•Morphology–>Skeletonize and Trim Branches was applied in the second example. Removal of branch points leaves the separate segments for measurement. The IP•Threshold–>Select Skeleton Components allows selection of any of these components. In the third example this was applied after skeletonizing to leave just the individual segments for measurement.
Thresholded Gr_Steel image Pruned skeleton
Original Branches2 image Skeleton with branches < 25 pixels removed
Original Root2 image Skeleton with nodes removed (fragment)
Measurement of the segment lengths
5.D.2. Measuring skeleton length, number of ends, number of branches for features
Values from skeletonized features provide basic topological (shape) information about structures, which will also be used in the discussion of feature measurement parameters. In the first example, the features are labeled with the number of skeleton end points, which measures the number of points in the original stars. In the second, the end points can be totaled (43) to count the gear’s teeth. The dilated end points are superimposed on the original image using Photoshop Layers. Note: the thresholded gear image was cleaned up with an EDM-based morphological closing.
Original Stars image Skeleton and label (number of end points)
Original Gear image Binary image
Skeleton End points superimposed on original
5.E. Using the Euclidean distance map for feature measurements
5.E.1. Distances from boundaries or objects
In Section 5.C.3 above, features adjacent to boundaries were selected. By assigning values from the EDM to features it is possible to select or measure features based on their distance from irregular boundaries. The assignment can be done by combining the images keeping whichever pixel is darker, or by using the binary image of the features as a mask placed on the EDM image. The EDM values can be calibrated as distance values using the density calibration routine, or just invert the EDM to measure the distance in pixels.
Original Distanc2 image Thresholded cell interior
EDM of cell interior (inverted) Thresholded features
EDM values assigned to features Measured distances from boundary
Sampling the EDM with the skeleton provides width measurement (mean, max, min, standard deviation) for irregular shapes. This method is used automatically by the feature measurement routines but can be applied manually when more complete statistics are required. Notice that the skeleton follows the central “ridge” in the EDM where the pixel values represent the radius of an inscribed circle. A histogram of the values along that ridge provides a comprehensive measurement of the feature’s width. The data are saved to disk by the IP•Measure Global–>Histogram routine and can be analyzed using Excel.
Original Width2 image Skeleton superimposed on the EDM
Histogram data in a spreadsheet
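A brute-force sketch of the width-measurement procedure above may help (the bar-shaped feature is invented, and real packages compute the EDM far more efficiently):

```python
import math

# Brute-force Euclidean distance map (EDM) for a tiny binary image, then
# sampled along a hand-placed skeleton to read off the local half-width.
# Each foreground pixel gets its distance to the nearest background pixel,
# i.e. the radius of the inscribed circle along the skeleton ridge.
def edm(img):
    h, w = len(img), len(img[0])
    background = [(y, x) for y in range(h) for x in range(w) if not img[y][x]]
    return [[min(math.hypot(y - by, x - bx) for by, bx in background)
             if img[y][x] else 0.0
             for x in range(w)] for y in range(h)]

# a 3-pixel-thick horizontal bar; its skeleton is the middle row (y = 2)
img = [[1 if 1 <= y <= 3 and 1 <= x <= 8 else 0 for x in range(10)]
       for y in range(5)]
dist = edm(img)
ridge = [dist[2][x] for x in range(1, 9)]   # sample the EDM along the skeleton
print(max(ridge), min(ridge))   # 2.0 in the middle of the bar, 1.0 at the ends
```

A histogram of the ridge values is exactly the width statistic the text describes saving to disk for analysis.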
The example below illustrates the use of the EDM and skeleton together to show a complex relationship between the length of each branch and how distal it is from the cell body. The nodes are removed from the skeleton to separate the segments. Then the ends are used as markers in a feature selection operation to keep just those segments that are terminal branches. Next the EDM of the region around the cell body is generated and the values assigned to the terminal branches. Finally a plot of the length of each branch vs. the minimum brightness value (the distance of the point nearest the cell body) is constructed to show the trend. Understanding this sequence (each individual step is not shown in the illustrations) is a good indicator of mastery of these tools.
Original Neurons image Skeleton
Terminal branches EDM of the region outside the cell body
EDM values assigned to branches Plot showing shorter lengths for more distal terminal branches
Plugin rate_limit timed out on hook disconnect
I want to set a limit on how many mails a particular user can send.
When I add the rate_limit plugin, it gives the following error:
[INFO] [-] [core] Loading plugins
[INFO] [-] [core] Loading plugin: auth/flat_file
[DEBUG] [-] [core] no timeout in auth/flat_file.timeout
[DEBUG] [-] [core] no timeout in plugin_timeout
[DEBUG] [-] [core] plugin auth/flat_file timeout is: 30s
[INFO] [-] [core] loaded 7185 Public Suffixes
[INFO] [-] [core] loaded TLD files: 1=882 2=5813 3=2287
[DEBUG] [-] [core] no timeout in auth/auth_base.timeout
[DEBUG] [-] [core] no timeout in plugin_timeout
[DEBUG] [-] [core] plugin auth/auth_base timeout is: 30s
[DEBUG] [-] [core] registered hook capabilities to auth/flat_file.hook_capabilities
[DEBUG] [-] [core] registered hook unrecognized_command to auth/flat_file.hook_unrecognized_command
[INFO] [-] [core] Loading plugin: spf
[DEBUG] [-] [core] no timeout in spf.timeout
[DEBUG] [-] [core] no timeout in plugin_timeout
[DEBUG] [-] [core] plugin spf timeout is: 30s
[DEBUG] [-] [core] registered hook ehlo to spf.hook_ehlo
[DEBUG] [-] [core] registered hook helo to spf.hook_helo
[DEBUG] [-] [core] registered hook mail to spf.hook_mail
[INFO] [-] [core] Loading plugin: dkim_sign
[DEBUG] [-] [core] no timeout in dkim_sign.timeout
[DEBUG] [-] [core] no timeout in plugin_timeout
[DEBUG] [-] [core] plugin dkim_sign timeout is: 30s
[DEBUG] [-] [core] Returning boolean true for main.disabled=true
[DEBUG] [-] [core] registered hook queue_outbound to dkim_sign.hook_queue_outbound
[INFO] [-] [core] Loading plugin: rate_limit
[DEBUG] [-] [core] no timeout in rate_limit.timeout
[DEBUG] [-] [core] no timeout in plugin_timeout
[DEBUG] [-] [core] plugin rate_limit timeout is: 30s
[DEBUG] [-] [core] registered hook connect to rate_limit.incr_concurrency
[DEBUG] [-] [core] registered hook disconnect to rate_limit.decr_concurrency
[DEBUG] [-] [core] registered hook connect to rate_limit.hook_connect
[DEBUG] [-] [core] registered hook rcpt to rate_limit.hook_rcpt
[NOTICE] [-] [core] Listening on :::587
[DEBUG] [-] [server] running init_master hooks
[INFO] [-] [core] [outbound] Loading outbound queue from /etc/haraka/queue
[INFO] [-] [core] [outbound] Loading the queue...
[NOTICE] [5DCD29C3-319D-4125-AEA4-AB8B73D93D89] [core] connect ip=<IP_ADDRESS> port=37797 local_ip=:: local_port=587
[DEBUG] [5DCD29C3-319D-4125-AEA4-AB8B73D93D89] [core] running lookup_rdns hooks
[DEBUG] [5DCD29C3-319D-4125-AEA4-AB8B73D93D89] [core] running connect hooks
[DEBUG] [5DCD29C3-319D-4125-AEA4-AB8B73D93D89] [core] running connect hook in rate_limit plugin
[INFO] [5DCD29C3-319D-4125-AEA4-AB8B73D93D89] [core] client [<IP_ADDRESS>] half closed connection
[DEBUG] [5DCD29C3-319D-4125-AEA4-AB8B73D93D89] [core] running disconnect hooks
[DEBUG] [5DCD29C3-319D-4125-AEA4-AB8B73D93D89] [core] running disconnect hook in rate_limit plugin
[CRIT] [5DCD29C3-319D-4125-AEA4-AB8B73D93D89] [core] Plugin rate_limit timed out on hook connect - make sure it calls the callback
[CRIT] [5DCD29C3-319D-4125-AEA4-AB8B73D93D89] [core] Plugin rate_limit timed out on hook disconnect - make sure it calls the callback
[INFO] [5DCD29C3-319D-4125-AEA4-AB8B73D93D89] [core] hook=disconnect plugin=rate_limit function=decr_concurrency params="" retval=DENYSOFT msg="plugin timeout"
[DEBUG] [5DCD29C3-319D-4125-AEA4-AB8B73D93D89] [core] running deny hooks
[NOTICE] [5DCD29C3-319D-4125-AEA4-AB8B73D93D89] [core] disconnect ip=<IP_ADDRESS> rdns="DNSERROR" helo="" relay=N early=N esmtp=N tls=N pipe=N txns=0 rcpts=0/0/0 msgs=0/0/0 bytes=0 lr="" time=45.018
Please guide me on how to set up the rate_limit plugin; I tried but did not get the results.
Your server can't resolve DNS... so you run into a timeout.
Use a working DNS server that can resolve the hosts, and it will work for you.
@smfreegard can be closed.
Hi, first of all I want to apologize. I had overlooked something: the timeout is triggered on both connect and disconnect.
Please update your config for rate_limit. The default values must be uncommented: https://github.com/baudehlo/Haraka/blob/master/config/rate_limit.ini
Hi there, I wanted to set a limit on how much mail a particular user sends, and for that I am using the rate_limit plugin. Please also guide me on how to do this.
Attaching my rate_limit.ini
; Example configuration file for the rate_limit plugin
redis_server =<IP_ADDRESS>
tarpit_delay = 10
[concurrency]
; NOTE: this limit is per server child and does not use Redis
; Limit an IP or host to a maximum number of connections
; Don't limit connections from localhost
127 = 0
; Freemail
; hotmail.com = 20
; yahoo.com = 20
; google.com = 20
; default = 5
[rate_conn]
; Maximum number of connections from an IP or host over an interval
127 = 0
; default = 5 ; no interval defaults to 60s
[rate_rcpt_host]
; Maximum number of recipients from an IP or host over an interval
127 = 0
; default = 50/5m ; 50 RCPT To: maximum in 5 minutes
[rate_rcpt_sender]
; Maximum number of recipients from a sender over an interval
127 = 0
; default = 50/5m
[rate_rcpt]
; Limit the rate of message attempts over a interval to a recipient
127 = 0
; default = 50/5m
[rate_rcpt_null]
; Limit the number of DSN/MDN messages by recipient
; default = 1
; Example configuration file for the rate_limit plugin
; redis_server = <IP_ADDRESS>
; tarpit_delay = 30
[concurrency]
; NOTE: this limit is per server child and does not use Redis
; Limit an IP or host to a maximum number of connections
; Don't limit connections from localhost
127 = 0
; Freemail
; hotmail.com = 20
; yahoo.com = 20
; google.com = 20
default = 5
[rate_conn]
; Maximum number of connections from an IP or host over an interval
127 = 0
; no interval defaults to 60s
default = 5
[rate_rcpt_host]
; Maximum number of recipients from an IP or host over an interval
127 = 0
; 50 RCPT To: maximum in 5 minutes
default = 50/5m
[rate_rcpt_sender]
; Maximum number of recipients from a sender over an interval
; set for sender <EMAIL_ADDRESS> 10 messages in 5 minutes
<EMAIL_ADDRESS>= 10/5m
127 = 0
default = 50/5m
[rate_rcpt]
; Limit the rate of message attempts over a interval to a recipient
127 = 0
default = 50/5m
[rate_rcpt_null]
; Limit the number of DSN/MDN messages by recipient
default = 1
<?php
namespace Oro\Bundle\DataGridBundle\Extension\InlineEditing\InlineEditColumnOptions;
use Oro\Bundle\DataGridBundle\Extension\Formatter\Property\PropertyInterface;
use Oro\Bundle\DataGridBundle\Extension\InlineEditing\Configuration;
/**
* Class RelationGuesser
* @package Oro\Bundle\DataGridBundle\Extension\InlineEditing\InlineEditColumnOptions
*/
class RelationGuesser implements GuesserInterface
{
const DEFAULT_EDITOR_VIEW = 'oroform/js/app/views/editor/related-id-relation-editor-view';
const DEFAULT_API_ACCESSOR_CLASS = 'oroui/js/tools/search-api-accessor';
/** Frontend type */
const RELATION = 'relation';
/**
* {@inheritdoc}
*/
public function guessColumnOptions($columnName, $entityName, $column)
{
$result = [];
if (array_key_exists(PropertyInterface::FRONTEND_TYPE_KEY, $column)
&& $column[PropertyInterface::FRONTEND_TYPE_KEY] === self::RELATION) {
$isConfiguredInlineEdit = array_key_exists(Configuration::BASE_CONFIG_KEY, $column);
$result = $this->guessEditorView($column, $isConfiguredInlineEdit, $result);
$result = $this->guessApiAccessorClass($column, $isConfiguredInlineEdit, $result);
}
return $result;
}
/**
* @param $column
* @param $isConfiguredInlineEdit
* @param $result
*
* @return array
*/
protected function guessEditorView($column, $isConfiguredInlineEdit, $result)
{
$isConfigured = $isConfiguredInlineEdit
&& array_key_exists(Configuration::EDITOR_KEY, $column[Configuration::BASE_CONFIG_KEY]);
$isConfigured = $isConfigured
&& array_key_exists(
Configuration::VIEW_KEY,
$column[Configuration::BASE_CONFIG_KEY][Configuration::EDITOR_KEY]
);
if (!$isConfigured) {
$result[Configuration::BASE_CONFIG_KEY][Configuration::EDITOR_KEY][Configuration::VIEW_KEY]
= static::DEFAULT_EDITOR_VIEW;
}
return $result;
}
/**
* @param $column
* @param $isConfiguredInlineEdit
* @param $result
*
* @return array
*/
protected function guessApiAccessorClass($column, $isConfiguredInlineEdit, $result)
{
$isConfigured = $isConfiguredInlineEdit
&& array_key_exists(Configuration::AUTOCOMPLETE_API_ACCESSOR_KEY, $column[Configuration::BASE_CONFIG_KEY]);
$isConfigured = $isConfigured
&& array_key_exists(
Configuration::CLASS_KEY,
$column[Configuration::BASE_CONFIG_KEY][Configuration::AUTOCOMPLETE_API_ACCESSOR_KEY]
);
if (!$isConfigured) {
$result[Configuration::BASE_CONFIG_KEY]
[Configuration::AUTOCOMPLETE_API_ACCESSOR_KEY]
[Configuration::CLASS_KEY]
= static::DEFAULT_API_ACCESSOR_CLASS;
}
return $result;
}
}
OASIS Web Services Security (WSS) TC
Delivering a technical foundation for implementing security functions such as integrity and confidentiality in messages implementing higher-level Web services applications
Table of Contents
- Technical Work Produced by the Committee
- External Resources
- Mailing Lists and Comments
- Additional Information
November 28th 2006
We are pleased to announce that a set of Errata has been approved by the OASIS Web Services Security (WSS) TC. The errata are as follows:
- WS-Security SOAP Message Security 1.1 Errata
- X.509 Token Profile 1.1 Errata
- Kerberos Token Profile 1.1 Errata
- SAML Token Profile 1.1 Errata
- SOAP With Attachments (SWA) Profile Errata
The specs and the related errata can be found here.
The formal announcement of this Errata from OASIS is available here.
February 21st 2006
The WS-Security 1.1 document set that was submitted for public review towards the end of last year has now been approved as a formal OASIS standard. Links to those documents can be found below in the Technical Work Produced by the Committee section.
The purpose of the OASIS WSS TC is to continue work on the Web Services security foundations as described in the WS-Security specification, which was written within the context of the Web Services Security Roadmap as published in April 2002. The work of the WSS TC will form the necessary technical foundation for higher-level security services which are to be defined in other specifications.
OASIS Standard 1.1
The following documents make up the WS-Security 1.1 OASIS standard. The links below are to the PDF versions of the documents. If you prefer HTML or Microsoft Word (.doc) versions of the documents, you will find those in the document repository.
- WS-Security Core Specification 1.1
- Username Token Profile 1.1
- X.509 Token Profile 1.1
- SAML Token Profile 1.1
- Kerberos Token Profile 1.1
- Rights Expression Language (REL) Token Profile 1.1
- SOAP with Attachments (SWA) Profile 1.1
Related schema files
Here are the links to the 1.1 and 1.0 schema files. Note that the 1.1 schema does not replace the 1.0 schema, rather it builds upon it by defining an additional set of capabilities within a 1.1 namespace.
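As an illustrative sketch only (not text from any of the specifications), the following Python builds a minimal wsse:Security header using the published 1.0 and 1.1 namespace URIs, showing how a 1.1 element sits alongside 1.0 content in the same header. The element choice and structure here are simplified assumptions; real messages also need nonces, timestamps, and password digests as defined in the Username Token Profile.

```python
import xml.etree.ElementTree as ET

# Published namespace URIs: the 1.1 namespace supplements 1.0 rather than replacing it.
WSSE10 = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
WSSE11 = "http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd"
SOAP = "http://schemas.xmlsoap.org/soap/envelope/"

def build_security_header(username: str) -> ET.Element:
    """Build a minimal SOAP header carrying a WS-Security UsernameToken.

    Illustrative sketch only; production messages require nonces,
    timestamps, and digests per the Username Token Profile."""
    header = ET.Element(f"{{{SOAP}}}Header")
    security = ET.SubElement(header, f"{{{WSSE10}}}Security")
    token = ET.SubElement(security, f"{{{WSSE10}}}UsernameToken")
    ET.SubElement(token, f"{{{WSSE10}}}Username").text = username
    # A 1.1 element (normally used in responses) can sit alongside 1.0 content:
    ET.SubElement(security, f"{{{WSSE11}}}SignatureConfirmation")
    return header

header = build_security_header("alice")
xml_text = ET.tostring(header, encoding="unicode")
```

Serializing the header makes the two namespaces visible side by side, which is the point of the note above: 1.1 adds capabilities in its own namespace without invalidating 1.0 documents.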
OASIS Standard 1.0
- Web Services Security: SOAP Message Security V1.0
- Web Services Security: Username Token Profile V1.0
- Web Services Security: X.509 Token Profile V1.0
- Web Services Security: SAML Token Profile V1.0
- Web Services Security REL Token Profile V1.0
- Schema files V1.0
Note that all OASIS Standards documents are available from the standards page.
1.0 Errata - Updated February 2006
The WSS TC has produced a set of non-normative errata to these 1.0 documents. Note that these documents are errata to the WS-Security 1.0 OASIS Standard documents but are not themselves documents that have gone through the full OASIS standards process. They are Committee Draft documents produced by the WSS TC. The errata material is listed below.
- Errata to the OASIS Standard 1.0
About our other documents/work in progress
The TC is currently working on a set of documents that will be the 1.1 version of the specification. The latest versions of those documents are in the document repository.
Although not produced by the OASIS WSS TC, the following information offers useful insights into its work.
OASIS WSS TC Approves Three Web Services Security Specifications for Public Review
Cover Pages, 9 Sept 2003
OASIS Web Services Security TC (WSS) Approves Committee Draft Specifications.
Cover Pages, 26 Jan 2004
OASIS Web Services Security Specification Approved as an OASIS Standard.
Cover Pages, 8 April 2004
OASIS Web Services Security TC Prepares Additional WSS Profiles.
Cover Pages, 13 Aug 2004
Companies Demonstrate Interoperability of WS-Security OASIS Standard
OASIS News, 20 Apr 2005
*To minimize spam, you must subscribe to this list before posting.
In general, the TC meets by phone every other week on Tuesdays at 7am Pacific Time (10am Eastern Time). Exceptions to this rule will occur from time to time and will be posted here. The full meeting schedule is now in the on-line calendar. From now on please use the on-line calendar to find meeting times, logistics and agendas.
You can access the on-line calendar here or by clicking on the schedule link in the menu at the top of this page.
Click here to see a summary of WSS TC liaison reps to other groups.
For technical assistance regarding this OASIS TC web page, contact email@example.com.
Providing Feedback: OASIS welcomes feedback on its technical activities from potential users, developers, and others to better assure the interoperability and quality of OASIS work.
How to Configure Firestarter to Allow VPN
Ubuntu Linux comes with a VPN client called "vpnc" which is an open source alternative for Cisco's VPN Client. It allows you to establish a VPN tunnel between you and a remote network that is gated by a Cisco Systems firewall or router.
Firestarter Was Blocking the VPN Tunnel
Although Firestarter 1.03 would allow vpnc to connect to the remote network, it wouldn't allow me to ping machines on the remote network. More specifically, I was trying to Remote Desktop (RDP) into a Microsoft Windows server using the Terminal Server Client that comes with Ubuntu; Firestarter would not allow the Terminal Server Client to connect. This made the Terminal Server Client appear to be hanging up. However, after I turned off the Firestarter firewall, the remote desktop session would start. After authenticating with the remote machine, I tried starting Firestarter again. This made the Remote Desktop session freeze immediately. Stopping Firestarter (again) made the session resume.
Unfortunately, I was unable to solve this using the graphical user interface of Firestarter 1.03. I tried adding policies in the GUI that would allow all traffic in both directions, but each time I'd restart the firewall, it would again freeze the remote desktop connection I'd established while the firewall was off.
Add "iptables" entries to "/etc/firestarter/user-pre" file.
Open a terminal from Ubuntu's "Applications" menu: Applications | Accessories | Terminal.
Copy the line below, and paste it into the terminal by right clicking in the terminal and selecting "Paste" from the context menu (the ctrl-v method won't paste in Terminal).
sudo nano /etc/firestarter/user-pre
If you're prompted for a password, enter your own account password (on Ubuntu, sudo asks for your password rather than a separate root password).
Now it is time to add the iptables entries to the user-pre file. In each entry that refers to your VPN peer/endpoint, use the IP address of the peer you connect to using vpnc; it appears in four of the iptables entries.
Copy these iptables entries (above)
Click back to the terminal we opened.
Paste the iptables entries by right clicking in the nano editor and selecting "Paste" from the context menu.
Hold down the Ctrl key and press o (Ctrl+O), then press Enter to confirm the filename. This will save the iptables entries to your user-pre file.
Exit the nano editor, but not the terminal (ctrl-x).
Now, restart the Firestarter firewall:
sudo /etc/init.d/firestarter restart
Now, you should be able to vpnc to your peer and maintain a remote desktop connection.
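The exact entries from the original article's text box are not reproduced here, so the following is a representative reconstruction only: a Cisco-style IPsec tunnel typically needs IKE on UDP port 500, NAT traversal on UDP port 4500, and the ESP protocol (IP protocol 50) allowed to and from the peer. This sketch generates entries along those lines for a hypothetical peer address (203.0.113.5, a documentation address); verify them against your own setup before use.

```python
def user_pre_entries(peer_ip: str) -> list[str]:
    """Generate representative iptables rules for a vpnc tunnel.

    Illustrative reconstruction for a typical Cisco-style IPsec setup
    (IKE on UDP 500, NAT traversal on UDP 4500, and ESP itself), not
    the original article's exact text. The peer IP appears in all four.
    """
    return [
        f"iptables -A INPUT -s {peer_ip} -p udp --sport 500 -j ACCEPT",
        f"iptables -A INPUT -s {peer_ip} -p udp --sport 4500 -j ACCEPT",
        f"iptables -A INPUT -s {peer_ip} -p 50 -j ACCEPT",  # ESP (IP protocol 50)
        f"iptables -A OUTPUT -d {peer_ip} -j ACCEPT",
    ]

entries = user_pre_entries("203.0.113.5")  # substitute your peer's IP here
```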
Our teams use a lot of tools and techniques to build, repair, strategize, and analyze. We want your web project to run seamlessly, look sharp on any device and present your company as industry leaders. One of the best ways to achieve this is to begin with Responsive Web Design. Responsive Web Design adapts the website to always display properly on devices and browsers of differing sizes. Currently, there are two techniques to ensure that your website responds flawlessly on any device: media queries and element queries.
Media queries, the most common method, start with the content on a narrow screen width and introduce breakpoints as dictated by the content as the screen gets wider. For example, on a narrow screen, most content is presented in a single column, but when the screen is wide enough, secondary content may move up next to the main content in a smaller sidebar column. This breakpoint should occur when it benefits the presentation, such as to prevent overly long lines of text. Unfortunately, the process ends up not being as simple as adding a breakpoint when a line of text reaches the desired maximum length. If the implied breakpoint should be at, say, 760 pixels wide, what happens at 761 pixels? If the layout changes to have a sidebar, suddenly, the main content area is significantly narrower than it just was, so the line length will have gradually increased to the desired maximum, then, at the 760 pixel breakpoint, it jarringly jumps back to a narrower width. Ultimately, breakpoints based on the width of the screen have to make compromises about how to display different content types and how to arrange them in such a way that the result, while not always ideal, is never broken.
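The jump at the breakpoint can be made concrete with a little arithmetic. The numbers below (16-pixel gutters, a sidebar taking one third of the inner width, a 760-pixel breakpoint) are assumptions chosen purely for illustration, not values from any particular site:

```python
def main_column_width(viewport: int, breakpoint: int = 760,
                      gutter: int = 16, sidebar_fraction: float = 1 / 3) -> float:
    """Width of the main content column for a given viewport width.

    Below the breakpoint the content spans the viewport (minus gutters);
    at the breakpoint a sidebar appears and the main column narrows.
    Gutter and sidebar sizes are illustrative assumptions.
    """
    inner = viewport - 2 * gutter
    if viewport < breakpoint:
        return inner  # single column
    sidebar = inner * sidebar_fraction
    return inner - sidebar - gutter  # main column next to the sidebar

# One pixel below the breakpoint the column is wide; one pixel above, it snaps narrower.
before = main_column_width(759)
after = main_column_width(761)
```

With these assumed numbers the main column drops from 727 pixels to 470 pixels across a two-pixel change in viewport width, which is exactly the jarring line-length jump described above.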
Element queries aim to remove the need to make those compromises by directly adapting the content to its container. While a media query says “if the screen is wider than a given size, apply these various CSS rules to all of the elements specified”, an element query would say, “if the content container is wider than a given size, apply these CSS rules to just that element.” That is to say, changes would only apply to the single element of content being evaluated for the given characteristics, ignoring all other elements and device characteristics. Continuing the example from above, instead of using a site-wide breakpoint to try to constrain line length, an element query could just add an automatically increasing right margin above the maximum width so the lines of text don’t continue to increase. Or center the column of text, or increase the font size to keep roughly the same number of characters on a line, or split the text into multiple columns, etc. When individual elements are being evaluated and affected with element queries, they allow the content itself to always be presented in the best way possible.
One very popular methodology of building responsive websites is Atomic Design, championed by Brad Frost. The general idea is that by building components from smaller discrete elements, an entire design system can be developed that results in a very cohesive design and predictable user experience. This approach can, however, be difficult to implement when building a site on a CMS, as it can be difficult to anticipate how content that can be placed virtually anywhere will be used and also ensure it will always be styled correctly. Element queries make a modular methodology like Atomic Design easier to implement because, when utilized correctly, they ensure that individual pieces of content will always display in the best fashion, regardless of where and how they are used on a page.
Despite some of the current limitations implementing element queries, they are an excellent option to help ensure responsive websites, especially those that have modular content, consistently display content in the best way possible.
Whether we are working in a Content Management System like Ektron to get custom results for your content, or migrating content from Ektron to Episerver for greater content personalization, or helping you leverage a Marketing Automation Platform, like HubSpot, we use the best industry tools and techniques to help you meet your goals. Have you used element queries before? Let us know in the comments below or over social media.
If you’re a mobile developer you’ve probably, at one point, had to work with databases to store your app data on a device. Surprisingly, your choice in databases is limited, with SQLite being the most commonly used. It’s surprising because despite the many new databases that have been created over the past decade, none have been focused on mobile. Bring on Realm: “a mobile database that runs directly inside phones, tablets or wearables.”
Realm is an open-source library that mobile developers can integrate into their app to store and query data. Data is queried from Realm’s internal storage engine (not yet open-sourced), which runs on your device and is built to get the best performance, both off- and online. The internal storage engine has been developed from the ground up to be memory-efficient, and provide the best performance to developers.
The company was founded by Alexander Stigsen and Bjarne Christiansen, who previously worked at Nokia’s R&D center in Copenhagen trying to figure out how to fit data in ever-smaller amounts of computer memory. Using their experience, and the idea that modern phones and tablets could equally benefit from data-fitting, they got accepted into the Summer ‘11 batch of Y Combinator and have been working on the problem ever since.
The company’s goal, according to VP of Product Tim Anglade, is to provide developers with a new start, allowing them to develop apps faster, launch and run their apps with less costs and make their apps more powerful.
Data in Realm is stored in, you guessed it, different realms (i.e. databases), with both disk and in-memory storage available. Disk storage is the default, allowing data to be persisted between app launches, saving developers extra calls to their APIs. This is also where Realm sees a lot of potential: saving on external storage costs.
As the app industry grows, so does the API industry, as developers turn to external APIs to store and query their data. This comes at a cost: renting servers, managing databases and paying for data traffic. With Realm’s on-device storage, for common use cases, developers can manipulate data locally without having to pay for, or manage, any external services.
Realm’s iOS implementation (both Objective-C and Swift) shows the library to be well-constructed, using common data structures like objects and arrays. According to the documentation, Realm provides important features, such as thread-safety, a simple API around data and easy linking of data (one-to-many, many-to-many), and also boasts better performance than other implementations (like Core Data).
All in all, Realm is an exciting new player in the field of mobile databases: an area that could use some more options for mobile developers.
Realm is now available for iOS, but a version for Android will be released soon.
As a former Microsoft Program Manager for Active Directory Security, I cannot over-emphasize the need to adequately protect your organization's foundational Active Directory deployment.
This is a vital IT security issue, and we ordinarily do not shed light on it in the public domain, but rather choose to inform our global customer base privately. However, if the U.S. government is willing to shed light on it in the public domain (which I don't think it should), I suppose it would be okay if I too shared a thought.
The Inspector General of Homeland Security recently published the findings of a security audit that covered the implementation of Active Directory at the U.S. Department of Homeland Security, and I highly recommend reading it.
Here's a snippet from the Executive Summary -
- The Department of Homeland Security uses Microsoft Windows Active Directory services to manage users, groups of users, computer systems, and services on its headquarters network. We reviewed the security of the Active Directory collection of resources and services used by components across the department through trusted connections. These resources and services provide department-wide access to data that supports department missions but require measures to ensure their confidentiality, integrity, and availability. The servers that host these resources must maintain the level of security mandated by department policy. Systems within the headquarters’ enterprise Active Directory domain are not fully compliant with the department’s security guidelines, and no mechanism is in place to ensure their level of security. These systems were added to the headquarters domain, from trusted components, before their security configurations were validated. Allowing systems with existing security vulnerabilities into the headquarters domain puts department data at risk of unauthorized access, removal, or destruction.
The fact of the matter is that virtually the entire U.S. government actually runs on Active Directory, and I would not be surprised if the foundational Active Directory deployments of other departments in the U.S. government are also inadequately protected (though I seriously hope that is not the case).
Comprehensive protection of an organization's foundational Active Directory deployment requires a first-hand understanding of the attack surface, of the various components involved, and of the risks associated with each of these components, along with the knowledge of which risks to mitigate, which to manage, and how.
It does NOT involve the mere deployment of fancy security applications, but in fact requires the deployment of a well thought out and well integrated set of security controls involving security policies, practices and tools/applications, which together provide trustworthy protection.
Formally speaking, it requires that an organization first perform a formal risk assessment of its Active Directory and then based on its findings, assess and deploy an adequate set of risk mitigation measures.
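As one small, hedged illustration of what such an assessment touches in practice: a first pass often inventories privileged accounts. The attribute names and matching-rule OID below are standard Active Directory LDAP, but the queries themselves are examples of mine for illustration, not a methodology from any report mentioned above:

```python
def privileged_account_filter() -> str:
    """LDAP filter for accounts protected by AdminSDHolder (adminCount=1).

    A common starting point when inventorying privileged access during an
    Active Directory risk assessment; a full assessment also examines ACLs,
    delegation, trusts, and group nesting.
    """
    return "(&(objectCategory=person)(objectClass=user)(adminCount=1))"

def disabled_privileged_filter() -> str:
    """Privileged accounts that are disabled (often forgotten, still risky).

    userAccountControl bit 0x2 = ACCOUNTDISABLE; the matching-rule OID
    1.2.840.113556.1.4.803 performs a bitwise AND in AD LDAP filters.
    """
    return ("(&(objectCategory=person)(objectClass=user)(adminCount=1)"
            "(userAccountControl:1.2.840.113556.1.4.803:=2))")
```

Filters like these would be run against the directory with any LDAP client; the point is that a risk assessment starts from concrete, enumerable facts about the deployment, not from the feature list of a security product.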
While at Microsoft, I had the privilege of performing an Active Directory Security Risk Assessment of Microsoft's global Active Directory infrastructure, so this work is second nature to what we do now at Paramount Defenses Inc. (While I will not divulge any details, suffice it to say that it took a 90-page report to document cursory findings, which was delivered to the highest offices at Microsoft.)
If your organization is running on Active Directory, I encourage you to please take a serious look at its security, and if needed, please enact appropriate risk mitigation measures to ensure its adequate protection.
As I sign off, I'll leave you with a simple mantra - Your Microsoft Windows Server based IT infrastructure is only as secure as its foundational Active Directory. (Please) Protect it.
PS: Link to the official report - Stronger Controls Needed on Active Directory Systems.
Say No to “Bugging”
In one of his classic pieces, “The Art of bugging: or Where Do Bugs come from”, Jerry Weinberg suggests that if debugging is the process of removing bugs, then bugging should be the process of putting those bugs in. He then lists a set of bugging possibilities in his typical funtastic way.
Taking his thought further, I think that a Tester’s job should not be just to find bugs (and humiliate Programmers); rather, one of the prime jobs of a Tester is to improve quality in whatever way possible. Other testers have suggested some ways, and I think helping the team adopt a policy of no “Bugging” can be a game changer too.
Taking Jerry’s examples, I have picked two types i.e. Gee-bugging and Ye-bugging. Let’s see what they are and how to tackle them.
“Gee-bugging is grafting of bugs into a program as part of a piece of “gee-whiz” coding—fancy frills that are there to impress the maintenance programmers rather than meet the specifications.”
There are many ways to avoid this. One is to help Programmers become experienced ones, in light of this blog post which outlines the responsibilities of the senior developer. One of the lines says:
“A senior developer will understand that this job is to provide solutions to problems, not write code. Because of that, a senior developer will always think of what they are doing in terms of how much value it brings to their organization and their clients vs how much effort they are putting in.”
So we have to make our Programmers experienced in providing solutions rather than merely writing beautiful code. Peer code reviews can help, and practicing Three Amigos will help Programmers understand the complexity of the client world and devise solutions for it.
“Ye-bugging is a mysterious process by which bugs appear in code touched by too many hands. Nobody knows much about ye-bugging because every time you ask how some bug got into the code, every programmer claims somebody else did it.”
In the modern world, teams are really distributed and code bases are shared. This can really become a problem unless we implement a world-class configuration management system built on Continuous Integration principles, such that code written by anyone is compiled continuously.
In our team we have a build process called firebug, in which the code builds every few hours. Anyone who has committed code between two firebug jobs gets an email on a successful or unsuccessful build. In this way, a code base touched by many hands is maintained without any problems.
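The firebug policy, "everyone who committed between two builds gets the email", can be sketched in a few lines. The commit records, field names, and addresses below are hypothetical; a real system would read them from the version-control log:

```python
from dataclasses import dataclass

@dataclass
class Commit:
    author: str     # committer's email address (hypothetical)
    timestamp: int  # seconds since epoch

def notify_list(commits: list[Commit], last_build: int, this_build: int) -> set[str]:
    """Everyone who committed between the previous and current firebug build."""
    return {c.author for c in commits
            if last_build < c.timestamp <= this_build}

commits = [Commit("dev-a@example.com", 100),
           Commit("dev-b@example.com", 250),
           Commit("dev-a@example.com", 300)]
recipients = notify_list(commits, last_build=200, this_build=400)
```

With a build at time 200 and another at 400, only the two commits in between trigger mail, and each author is notified once regardless of how many commits they made.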
“By removing what you don’t want, you don’t necessarily get what you want”
So let’s work with our teams and say No to “Bugging” and build what we want.
What other practices do you suggest to end “bugging”?
summer that is
me and my two new christmas friends
opengl 2007 wish list
- opengl3.0 lean + mean (but with a utility library available if the programmer wishes to use it thus immediate mode etc is still there)
- more posts from my two favourite posters korval + knackered, esp the amount of bickering has dropped off recently please make a better effort in the new year
I have no idea what you are holding, is it some bizarre New Zealand thing?
In 2007 I would like to do some actual OpenGL coding. It is a sad fact that I have only been doing console programming at my new company. (hence the lack of posts and lack of updates to GLIntercept)
I’m sure we can get knackered posting, just prod him a bit - or perhaps he is sulking over losing the Ashes -
Merry christmas, happy holidays, don’t be a scrooge.
Here, it’s http://www.saq.com
“The alcohol(ics) society of Quebec”
Been having a green Christmas until yesterday. It snowed 10 to 15 cm.
I wish GL to gain more ground. I wish people will continue to burn fossil fuel. I wish there will be no more winter. I wish I’ll have time in my busy schedule to do my own project.
PS : I hate winter
Happy New Year! (horns hooting, whistles whistling, confetti flying)
Don’t quote me on this, but I have it on questionable authority (my own) that we’ll see the unveiling of GL3 by Siggraph '07 (just a hunch).
P.S. New year’s resolution #1: Get out more often.
Originally posted by V-man:
PS : I hate winter
you’re not very precise in that point. ok, winter=cold, car not starting, etc.
but also: winter=snow=snowboarding=lots of fun & beer. unfortunately not here, at the moment: no snow, spring-like temperatures around 15°C
anyway, “guten Rutsch” (a good slide into the new year) to everybody!
Ladies and gentlemen, take his advice: pull down your pants and slide on the ice! (teehee… wooop!)
my two above friends are gone
but ive found another couple
friendship nowadays is so fickle
man u can see my scalp in the above foto, time to paint my scalp black!
9am 1st jan at the moment (no hangover which i rarely get anyways, only had a couple of liters of red wine though to help with sleeping mainly cause i knew the neighbours will be making lots of noise until the sun rose, needless to say my gf didnt sleep well, coupled with my snoring + the racket nextdoor)
- shouldnt this be in a blog, oh well its the one time of the year this is permissible i suppose
P.S. New year’s resolution #1: dont lead such a boring life
New year’s resolution #2: record an album (ive recorded 5 songs so far, but have hit a bit of a creative block, ive started recording ~10 others which is no good, focus zed)
just for curiosity: when i start google earth and zoom in to your coordinates, i see a lot of regularly arranged plants. a vineyard, i guess?
there are vineyards in the region but i believe those are appletrees (now those apple trees to the north + south have been pulled out in 2005 so it shows they dont update certain parts of nz very often)
this was the house we lived in there -41.2796087044, 173.091600087 (now owned by the former allblack prop
http://en.wikipedia.org/wiki/Greg_Feek btw his girlfriends a honey
) now we’re in town
Not to give you a fat head or anything, but New Zealand is probably the most beautiful place on Earth. If and when I ever travel abroad, that’s my first destination.
I would like to take this opportunity to wish you all a very happy new year.
Personally I couldn’t give a rats arse what happens to OpenGL in the new year, as I have finally had a nervous breakdown, lost my job, defaulted on my mortgage repayments and am now homeless and sleeping under a flyover with michagl (as he insists I call him).
I will still frequent internet cafe’s just to spout rubbish on this forum, pretending I’m a graphics programmer with ready access to the latest technologies.
I vote we all club together and buy leghorn a plane ticket to…anywhere. The poor untravelled sod.
well, so you had bad luck, but you can still make it cool. build yourself a shelter made from old mainboards and graphics cards (preferably nvidia, they seem more stable). spend the day meditating. mumble opengl commands and try to imagine how the output would look like if they were running on a computer. it will keep you mentally in touch with that ogl stuff. let your hair grow and don’t shave, this will make you look like a guru and improve your chances to get a new job when economy goes up again.
that’s what I tell michagl every night before the death squads do their final sweep of the evening. He just babbles something about progressive meshes.
I wish you new gardner, Zed
I also wish Korval’s post count breaking 3000.
Originally posted by 3k0j:
[QB] I wish you new gardner, Zed
the house is on the market
‘The bathroom consists of a wet area shower, hand basin, towel rail and toilet’
comeon its got a towel rail!! I can see this place getting snapped up quickly
the question is where do i live next, i might be joining u + michagl, knackered.
so, what’s the prize? and is a flight ticket included?
new zealand is full of zombies - I saw it in a documentary called Braindead or something.
really? and are you allowed to shoot them or will you be arrested for doing so? is the firearms law as liberal as in the US of A? i don’t think it will be big fun to kill a zombie with an air rifle, can i go into a shop and buy something really big…and automatic?
SATSx Hackathon 2023: A Celebration of Innovation and Collaboration in the Lightning Space.
03/28/2023 09:00 AM
The SATSx Hackathon successfully united 11 innovative projects, with two of them notably showcasing Bitcoin payments via wireless mesh networking. Although we had 148 RSVPs, approximately 50 enthusiastic participants attended and actively contributed to the event. Adding to the excitement, a camera crew from SXSW graced us with their presence to capture the action. Held concurrently at two unique venues, Bitcoin Commons and PlebLab, the hackathon provided participants with a choice of locations and the convenience of overnight accommodations to fully immerse themselves in the hacking experience. Additionally, we hosted two insightful workshops led by Justin Moon of Fedi and Santos of Zebedee, equipping hackathon participants with valuable guidance on utilizing their codebases as a solid foundation for their innovative projects.
The innovative projects presented during the hackathon included:
Badge Mint: Badge Mint is a project that aims to create an automated pipeline for badge awarding, allowing users to easily create and issue badges to people as a summary of their personality or achievements online. The badges can represent different aspects of a person's life, such as their interests or affiliations, and can be easily shared on social media platforms. The team envisions incorporating proof of work, social graph requirements, and other customizable features for issuing badges. Badges can significantly represent community associations, showcase achievements, and even parody social identities. They can also serve as a decentralized alternative to traditional certificates, allowing individuals to showcase their skills and experience through a peer-to-peer model. This enables a more transparent and open-source system for credentialing, fostering a web of trust in which users can verify the history and issuer of each badge. Find more about the project on GitHub.
Devstr: The goal of Devstr is to address the centralization issues and limitations associated with GitHub, which Microsoft currently owns. Devstr merges existing GitHub profiles into the Nostr network and allows users to stream events from GitHub into the Nostr network. Devstr tackled the authentication problem by linking users' GitHub profiles to their Nostr profiles quickly and easily. The platform generates a new profile for users, which includes information like contributions, recent activity, repositories, and programming languages used. While Devstr is still in the early stages, it has successfully connected GitHub and Nostr, built a front-end that generates maker profiles, and broadcasted simple MVP GitHub events on the Nostr network. In the future, Devstr plans to create constant streaming of GitHub events and set up a Devstr relay. Devstr aims to provide redundancy for GitHub profiles and eventually enable users to interact with GitHub accounts through their Nostr accounts. This could be a first step towards moving away from the centralized platform and creating a more decentralized developer experience. Find out more about the project on GitHub.
Auntie LN: Auntie LN addresses the need for a secure and verifiable messaging protocol for Lightning Network nodes by proposing using Nostr, a censorship-resistant and verifiable messaging system. The project aims to connect Lightning nodes to Nostr, facilitating effective communication among nodes. The team employs ThunderHub to create a Nostr profile and introduces a new Nostr kind for node announcements, which cryptographically links the Nostr profile to the Lightning node public key. Their demonstration showcases subscribing to node announcements, verifying authenticity, and sending messages to peers. As a result, Auntie LN establishes an alternative secure transport layer for Lightning, combining Lightning and Nostr to support messages backed by signatures. Potential future developments include:
Writing a Nostr Improvement Proposal (NIP) for gossip over Nostr.
Super Mario Sats: Super Mario Sats is a modified NES ROM of Super Mario that rewards players with satoshis (SATs) for collecting in-game coins. The modified game allows players to earn SATs transferred to their Lightning Network address. The demonstration involves audience members playing the game and receiving SATs in real-time, with one player receiving 23,000 SATs in a single Super Mario session. The team faced challenges while hacking the ROM and experienced issues with their Voltage node and Wallet of Satoshi. D++ mentioned that making this a shared experience would require rate-limiting and user authentication to prevent cheating. The current implementation is a fun and interactive way to engage with the Lightning Network and satoshis through classic retro gaming. Find out more about the project on GitHub.
Freeschool: Andrew and Michael, two students from Purdue University and high school, respectively, presented their project called Free School. Their goal is to tackle the challenges in modern schooling, such as the repetitive teaching of fundamental knowledge and the vast amount of online educational content that can overwhelm students. To address these issues, they proposed an online aggregator platform, a social media network geared explicitly toward education. The platform would allow users to collaboratively structure educational content, making it easier for students to follow a clear path and find the best materials online. Their prototype demonstrated the ease of navigating various educational topics using a graph structure. They also implemented a comment structure similar to Reddit or Stacker News to promote discussions and effectively filter content. They also explored the possibility of using the Lightning Network for micropayments, though they encountered technical challenges in implementation. They acknowledged the complexities in visualizing academic knowledge for subjects like history that may not have a linear structure. They plan to continue working on the project, targeting STEM subjects first. They expressed interest in collaborating with other projects, such as the credentials and badge system presented by Badge Mint.
Satoshi Jump: Enabling Bitcoin and Lightning Network transactions over radio waves without requiring internet access. The primary motivation behind Satoshi Jump is to expand the Bitcoin network's reach, particularly in areas lacking reliable internet connectivity, and enhance its censorship resistance. The team utilized dual-tone technology and digital signal processing to accomplish this objective, with Topher creating a browser-based offline module for the task. The demonstration revealed certain limitations, emphasizing the need for advancements such as error correction and faster transmission rates. Future developments for the project encompass:
Enhancing error correction capabilities.
Collaborating with various radio stack technologies.
Integrating the solution with mobile phones for broader accessibility.
Implementing a services layer to provide additional functionality and support.
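The dual-tone approach described above can be sketched in code. This is a hypothetical illustration only: Satoshi Jump's actual modulation parameters aren't given in the source, so the frequencies, sample rate, and symbol layout below are generic DTMF-style assumptions, not the project's real scheme.

```python
import math

# DTMF-style dual-tone signaling: each 4-bit symbol is rendered as the sum of
# one "row" sine tone and one "column" sine tone. All values are assumptions.
ROW_FREQS = [697.0, 770.0, 852.0, 941.0]       # Hz
COL_FREQS = [1209.0, 1336.0, 1477.0, 1633.0]   # Hz
SAMPLE_RATE = 8000                              # samples per second

def encode_symbol(nibble: int, duration: float = 0.05) -> list:
    """Render a 4-bit symbol as samples of two superimposed sine tones."""
    row = ROW_FREQS[(nibble >> 2) & 0x3]   # high two bits pick the row tone
    col = COL_FREQS[nibble & 0x3]          # low two bits pick the column tone
    n = int(SAMPLE_RATE * duration)
    return [
        0.5 * (math.sin(2 * math.pi * row * t / SAMPLE_RATE)
               + math.sin(2 * math.pi * col * t / SAMPLE_RATE))
        for t in range(n)
    ]

def encode_bytes(payload: bytes) -> list:
    """Split each byte into two 4-bit symbols and concatenate their tones."""
    samples = []
    for b in payload:
        samples += encode_symbol(b >> 4)
        samples += encode_symbol(b & 0x0F)
    return samples

samples = encode_bytes(b"tx")  # e.g. a fragment of a serialized transaction
```

A real over-the-air implementation would also need framing, error correction, and a matching decoder (Goertzel filtering is the usual choice for detecting DTMF tones), which is exactly where the team said future work lies.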
Meshtastic: Meshtastic is similar to Gotenna, allowing users to send text messages between radios. Ben's goal was to set up Lightning wallets on two devices and send transactions between them; due to the project's ambitious scope, however, he could only demonstrate the devices talking to each other. Ben showed how the Meshtastic devices are connected in a mesh network and demonstrated sending a text message between devices. The goal is to integrate this with the Bitcoin ecosystem by generating addresses and creating transactions. Although not fully functional, the project aims to create a lightweight, user-friendly system that can handle everything on the phone. This opens up new use cases, such as partially signed transactions spread among multiple users. The range of the Meshtastic devices is one to five miles, depending on the environment. Find out more about the project on GitHub.
Zapalytics: Zapalytics analyzes the usage of zaps on the Lightning Network. Zaps are a fast and fun way to tip people with Bitcoin, but they raise privacy concerns because most rely on custodial wallets. Ben analyzed around 170,000 zaps and discovered that 82.5% involved custodial wallets. He noted the need for self-hosted solutions to improve privacy and suggested using fake zaps to disrupt analytics. Ben also explained that improvements in Lightning privacy could be achieved with Bolt 12 and blinded paths. Find out more about the project on GitHub.
BTCPay Server - Galoy Plugin: This BTCPay Server plugin was developed by Nick's company, Galoy. It allows users to outsource the management of their Lightning nodes, making it easier to run a Lightning node with BTCPay Server. As banks continue to fail and centralize, running one's own bank becomes increasingly important for financial freedom. The Galoy plugin streamlines the process of using BTCPay Server with Lightning, enabling users to manage their on-chain Bitcoin while outsourcing the management of their Lightning nodes to Galoy or another Lightning service provider. Find out more about the project on GitHub.
Frenstr: Frenstr is a tool to help users make friends on Nostr, a social media protocol with no algorithms. The global feed on Nostr can be filled with spam for new users, making it difficult for them to connect with others. Frenstr uses ChatGPT to generate user descriptions based on their public data, making it easier for users to discover interesting people to follow. Sam demonstrated Frenstr by generating descriptions for various Nostr users, including one of the judges. The app works by requesting a user's 20 most recent events and creating a description, which is then broadcast to the Nostr network. Future developments for the project encompass:
Refining the user interface.
Adding tags to help users find others with similar interests.
Reducing the cost of using ChatGPT by making the generation process more efficient.
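The fetch step described above (asking relays for a user's 20 most recent events) can be sketched as a standard Nostr REQ subscription filter. Frenstr's actual code isn't shown in the source, so this is only a sketch of the conventional client request: kind-1 text notes, limited to 20, filtered by author. The subscription id and pubkey are placeholders.

```python
import json

def build_recent_notes_req(pubkey_hex: str, sub_id: str = "frenstr-fetch") -> str:
    """Build the JSON REQ frame a Nostr client sends over a relay websocket."""
    note_filter = {
        "authors": [pubkey_hex],  # whose events we want
        "kinds": [1],             # kind 1 = plain text notes
        "limit": 20,              # the 20 most recent events
    }
    return json.dumps(["REQ", sub_id, note_filter])

dummy_pubkey = "0" * 64  # placeholder hex pubkey, not a real key
req = build_recent_notes_req(dummy_pubkey)
```

The returned events' `content` fields would then be concatenated into the prompt sent to the language model, and the generated description published back as a new event.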
The #SATSx Hackathon 2023 concluded with a remarkable lineup of winning projects, showcasing the creativity and technical prowess of the participants. The Best Overall award went to Auntie LND, receiving 12,000,000 sats, while Supermario SATS claimed the Most Polished title with a prize of 2,600,000 sats. Freeschool earned the Most Ambitious distinction, also receiving 2,600,000 sats. Satoshi Jump took home the Project Privacy award with 2,600,000 sats, and Devstr secured the Project Nostr title, garnering 2,600,000 sats as well. The Hackathon's esteemed panel of judges included Nate, an engineer at Unchained and advisor at Zaprite; Ben Woosley, a Bitcoin developer with Austin LitDevs; Keyan Kousha, the founder of Stacker News; and Buck Perly, an engineer at Unchained and Austin BitDevs. See all the videos on our YouTube.
⚡️12,000,000 sats Auntie LND
⚡️2,600,000 sats Supermario SATS
⚡️2,600,000 sats Freeschool
⚡️2,600,000 sats Satoshi Jump
⚡️2,600,000 sats Devstr
We look forward to seeing you all at next year's SATSx Hackathon, where we expect even more groundbreaking projects and inspiring collaborations. A special thank you to our generous sponsors, without whom this event would not have been possible; your support has truly made a difference in fostering innovation and promoting the growth of the Bitcoin ecosystem.
Care about this article? Share it in your network!
Why was President Traoré of Mali 'furious' at the arrest of the Ivoirian Minister of Planning for embezzling funds?
In 1984, Thomas Sankara, a military officer, revolutionary activist, and President of Burkina Faso, was elected President of the CEAO, the Economic Community of West Africa. Under his administration, the largest financial scandal in the organisation's history broke out.
Mohammed Diawara, the Ivoirian Minister of Planning, was charged with embezzling 6.5 billion CFA francs of CEAO funds earmarked for famine relief. Sankara declared it was time 'to clean house' and put him on trial before a Popular Revolutionary Tribunal in Ouagadougou, the capital of Burkina Faso. He was convicted and imprisoned.
The Malian elite were incensed, and their President, Moussa Traoré, was said to be furious. This eventually led to Traoré provoking a senseless border war with Burkina Faso in late 1985.
Q. Whilst Mali and the Ivory Coast share a border, they are separate countries. Why then were the Malian elite, and in particular the Malian President, enraged by the arrest of an official of the government of the Ivory Coast?
Some background https://www.thomassankara.net/conference-de-presse-du-president-du-faso-lors-des-journees-de-solidarite-des-jeunesses-de-la-ceao-6-decembre-1985/
More background https://search.proquest.com/openview/e772a00092ab52f3940b21a044d14c57/1?pq-origsite=gscholar&cbl=1820943
While I really enjoyed researching the history of W. Africa (something that isn't often taught at school!), I hope that I'm not just repeating what is obvious in my answer. If you have done research on this already, it would help if you linked to the sources (like the two I mentioned in comments). It is good to know what you have already read, to avoid answers that merely repeat what you already have found out.
@JamesK: My research is in the background of my question. Moreover, the question was on what the sources didn't make explicit. Thus if an answer sticks to answering the question rather than beating about the bush then they're unlikely to repeat what I already know.
Yes. But please read [ask]. Including sources to what you already know gets you better answers! If we don't know the sources, how are we to know what they do and don't make explicit. However I hope you'll agree that my answer below doesn't beat around any bushes.
@JamesK: No, I've been on this site for some time and there's no need to be patronising.
You have been on this and other SE sites for a while, so your questions should be models of "show research effort, be useful and clear" This question suggests significant prior research: Mohammed Diawara is hardly a political celebrity! So the question is useful and clear (+1) But you choose not to show research effort, even though it exists. My actual purpose is to illustrate to new users how to write effective questions.
Because the Malian President and his wife were personally implicated in the affair.
Mariam Traoré was herself a prominent businesswoman and had extensive financial dealings with the "Bank of Africa-Mali", which had been set up by Diawara with funds taken from the Communauté Économique de l'Afrique de l'Ouest.
So the arrest and imprisonment of Diawara was not only an extraterritorial act, it was an attack on business associates (and probably personal friends) of the Traoré family, and hence an attack on, and insult to, the Malian President himself. Thus he was enraged.
Developer Product Briefs
Test Center Standard Edition and Connect ODBC 6.0
Products to lower the cost of application performance maintenance and to drive ODBC harder.
Lower Cost APM
Two months after releasing an application performance management (APM) suite that ranges in price from $30,000 to $60,000 for a starting implementation but can run into the mid-six figures, Linz, Austria-based dynaTrace Software is now offering a scaled-down version. dynaTrace Test Center Standard Edition has a good portion of the application testing features found in the core dynaTrace 3 suite, the company says, but costs about 20 percent of the suite's price.
That full-scale suite is designed to let developers trace transactions across geographically distributed systems with large, scalable virtualized server clusters for business-critical applications that require 24x7 uptime.
Like the larger version, the Standard Edition comes with Visual Studio, Visual Studio Team System Test Edition and Eclipse plug-ins. It diagnoses and isolates typical Web application issues, notably database performance problems and chattiness. The software also documents issues for developers, including SQL statements, bind values and various other transaction characteristics. It captures every transaction and displays the slowest-running ones or those that are broken. When the developer clicks on a transaction, he can view the whole path with all the context. If a developer sees hundreds of extra calls to a database, he can click on them and automatically open up the source code in Visual Studio.
Test Center Standard Edition
Price: $6,000 per developer license
Although not as well known as some of its rivals, dynaTrace says it has made inroads in the U.S. market over the past year with customers such as Bank of America, Fidelity Investments, LinkedIn and Macy's. It has 100 customers but has seen rapid growth in recent quarters, the company says. But dynaTrace, which is backed by Bain Capital and Bay Partners, is a much smaller player than market leader CA, whose Wily Technology is used by more than 1,000 customers.
Driving ODBC Harder
Looking to give a boost to applications that rely on Microsoft's Open Database Connectivity (ODBC) standard, DataDirect Technologies has updated its drivers.
The new DataDirect Connect for ODBC version 6.0 increases the speed at which data is loaded into an application or into a database, says Rob Steward, the company's vice president of R&D. It allows for exporting of data from one database into another and performs bulk transfers without having to use Microsoft's batch processing utility.
Existing batch processes will run faster without requiring changes to application code, he adds. Version 6.0 adds application failover: "The features will allow developers to better tune their applications," he explains.
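The article doesn't show DataDirect's actual bulk-load interfaces, but the general idea behind batched loading (one round trip per batch instead of one per row) can be illustrated with any DB-API driver. The sketch below uses Python's built-in sqlite3 module as a stand-in for an ODBC connection; the table name and data are invented for illustration.

```python
import sqlite3

# In-memory SQLite stands in for a target database reached through an
# ODBC-style driver; the batching pattern is the same either way.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

rows = [(i, i * 1.5) for i in range(1, 1001)]  # 1,000 rows to load

# executemany() binds and submits the whole batch in one call, rather than
# issuing a separate execute() per row -- the same principle that makes
# driver-level bulk load faster: fewer per-statement round trips.
conn.executemany("INSERT INTO orders (id, amount) VALUES (?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```

A driver that pushes this batching down to its wire protocol, as version 6.0 claims to, speeds up existing batch code without application changes, which matches Steward's description.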
In addition to Microsoft's SQL Server, DataDirect's ODBC drivers provide connectivity to databases from Oracle Corp., IBM Corp. (DB2 and Informix) and Sybase Inc., among others. With the new release, the company has added connectivity to PostgreSQL and Greenplum.
Connect for ODBC 6.0
Price: $4,000 per single core
DataDirect, a Bedford, Mass.-based subsidiary of Progress Software Corp., is among a handful of companies that provide ODBC drivers. Most leading database vendors offer their own ODBC drivers and there are a number of less-expensive, open source alternatives as well. DataDirect says its drivers are aimed at ISVs and large enterprises.
Written/compiled by the editors of Visual Studio Magazine.
This tutorial shows how to track and route events using RudderStack.
How to set up an event stream
Before you get started, make sure you understand these terms used in this tutorial:
- Source: A source refers to a tool or a platform from which RudderStack ingests your event data. Your website, mobile app, or your back-end server are common examples of sources.
- Destination: A destination refers to a tool that receives your event data from RudderStack. These destination tools can then use this data for your activation use cases. Tools like Google Analytics, Salesforce, and HubSpot are common examples of destinations.
The steps for setting up an event stream in RudderStack open source are:
- Instrumenting an event stream source
- Configuring a warehouse destination
- Configuring a tool destination
- Sending events to verify the event stream
Step 1: Instrument an event stream source
To set up an event stream source in RudderStack:
RudderStack's hosted control plane is an option for managing your event stream configurations. It is completely free, requires no setup, and has more advanced features than the open source control plane.
Once you've logged in to RudderStack, you will land on the main dashboard.
Assign a name to your source, and click Next.
That's it! Your event source is now configured.
Step 2: Configure a warehouse destination
Important: Before you configure your data warehouse as a destination in RudderStack, you need to set up a new project in your warehouse and create a RudderStack user role with the relevant permissions. The docs provide detailed, step-by-step instructions on how to do this for the warehouse of your choice.
This tutorial sets up a Google BigQuery warehouse destination. You don't have to configure a warehouse destination, but I recommend it. The docs provide instructions on setting up a Google BigQuery project and a service account with the required permissions.
Then configure BigQuery as a warehouse destination in RudderStack by following these steps:
On the left navigation bar, click on Directory, and then click on Google BigQuery from the list of destinations.
Assign a name to your destination, and click on Next.
- Choose which source you want to use to send the events to your destination. Select the source that you created in the previous section. Then, click on Next.
- Specify the required connection credentials. For this destination, enter the BigQuery Project ID and the staging bucket name; the docs explain how to obtain these.
- Copy the contents of the private JSON file you created, as the docs explain.
That's it! You have configured your BigQuery warehouse as a destination in RudderStack. Once you start sending events from your source (a website in this case), RudderStack will automatically route them into your BigQuery and build your identity graph there as well.
Step 3: Configure a tool destination
Once you've added a source, follow these steps to configure a destination in the RudderStack dashboard:
To add a new destination, click on the Add Destination button.
Note: If you have configured a destination before, use the Connect Destinations option to connect it to any source.
RudderStack supports over 80 destinations to which you can send your event data. Choose your preferred destination platform from the list. This example configures Google Analytics as a destination.
- Add a name to your destination, and click Next.
- Next, choose the preferred source. If you're following along with this tutorial, choose the source you configured above.
In this step, you must add the relevant Connection Settings. Enter the Tracking ID for this destination (Google Analytics). You can also configure other optional settings per your requirements. Once you've added the required settings, click Next.
Note: RudderStack also gives you the option of transforming the events before sending them to your destination. Read more about user transformations in RudderStack in the docs.
That's it! The destination is now configured. You should now see it connected to your source.
Step 4: Send test events to verify the event stream
Once you add the RudderStack JavaScript SDK snippet to your website's <head> section, RudderStack will automatically track and collect user events from the website in real time.
However, to quickly test if your event stream is set up correctly, you can send some test events. To do so, follow these steps:
Make sure you have set up a source and destination by following the steps in the previous sections and have your Data Plane URL and source Write Key available.
Start the RudderStack server.
The rudder-server repo includes a shell script that generates test events. Get the source Write Key from step 2, and run the following command:
./scripts/generate-event <YOUR_WRITE_KEY> <YOUR_DATA_PLANE_URL>/v1/batch
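If you'd rather see what that script is sending, here is a rough sketch of the request it builds, assuming the standard RudderStack /v1/batch shape: a JSON batch of track events, authenticated with HTTP Basic auth using the source Write Key as the username and an empty password. The write key, user id, and event name below are placeholders, and the actual network call is omitted.

```python
import base64
import json
from datetime import datetime, timezone

def build_auth_header(write_key: str) -> str:
    """Basic auth with the Write Key as username and an empty password."""
    token = base64.b64encode(f"{write_key}:".encode()).decode()
    return f"Basic {token}"

def build_batch_payload(user_id: str, event: str) -> dict:
    """A minimal one-event batch in the /v1/batch request shape."""
    return {
        "batch": [
            {
                "type": "track",
                "userId": user_id,
                "event": event,
                "sentAt": datetime.now(timezone.utc).isoformat(),
            }
        ]
    }

payload = build_batch_payload("test-user-1", "Test Event")
headers = {
    "Content-Type": "application/json",
    "Authorization": build_auth_header("<YOUR_WRITE_KEY>"),
}
body = json.dumps(payload)
# An HTTP POST of `body` to <YOUR_DATA_PLANE_URL>/v1/batch with `headers`
# would then deliver the event (network call omitted here).
```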
To check if the test events are delivered, go to your Google Analytics dashboard, navigate to Realtime under Reports, and click Events.
Note: Make sure you check the events associated with the same Tracking ID you provided while instrumenting the destination.
You should now be able to see the test event received in Google Analytics and BigQuery.
If you come across any issues while setting up or configuring RudderStack open source, join our Slack and start a conversation in our #open-source channel. We will be happy to help.
If you want to try RudderStack but don't want to host your own, sign up for our free, hosted offering, RudderStack Cloud Free. Explore our open source repos on GitHub, subscribe to our blog, and follow us on our socials: Twitter, LinkedIn, dev.to, Medium, and YouTube.
Micro Focus Server Express is the platform of choice for deploying e-business and distributed applications
Building Next Generation E-Business Applications
Specifically designed for performance and reliability to support high-volume transaction processing applications, Micro Focus Server Express™ is the platform of choice for deploying e-business and distributed applications. Server Express accelerates enterprise COBOL application performance to the next level, providing the fastest ever Micro Focus COBOL® product for UNIX. Server Express helps to dramatically reduce deployment costs and provides increased service levels through state-of-the-art capabilities like AppTrack and FaultFinder.
Scaleable Solution for Enterprise Development and Deployment
Server Express can be combined with Micro Focus Net Express® to provide a highly productive GUI development environment on Microsoft Windows with the outstanding scalability of both NT and UNIX for deployment. Applications can be developed easily with Net Express and then rapidly deployed using Server Express. When used with Server Express, Net Express simplifies UNIX deployment through publishing to UNIX and remote debugging.
Improved Productivity and Application Support
Remote debugging and production debugging enhancements enable advanced e-business and Web applications to be debugged with ease, directly on the deployment server, making it easier to develop and deploy robust, high performing Web applications.
The introduction of OpenESQL™ in Server Express gives database- and platform-independent SQL access through ODBC, with the simplicity of EXEC SQL statements. OpenESQL applications can be rapidly generated using the Net Express OpenESQL Assistant™ and easily deployed on the UNIX system of choice. This approach offers improved choice, increased productivity and greater flexibility.
Server Express provides unlimited scalability, supporting COBOL data file sizes in the terabytes, enabling large-scale application development to support business growth into the next millennium.
Quickest Path To 64-bit Application
Server Express was designed for today's 32-bit and 64-bit operating system architectures. With Server Express (64-bit Edition), you are just one simple step away from a 64-bit COBOL application!
Compatible with Existing Micro Focus UNIX Applications
Server Express was specifically engineered to be compatible with existing Micro Focus COBOL applications. Most applications only require recompilation to take advantage of the advanced performance and capabilities of Server Express.
State-of-the-Art COBOL Compiler
Server Express includes a state-of-the-art compiler that builds on COBOL's traditional strengths.
Server Express is built using the latest Micro Focus COBOL callable shared object technology. This new executable format provides fast execution and delivers immediate productivity gains. With Server Express, you can also take advantage of this powerful new format by recompiling your existing COBOL application, enabling your multi-user application to run faster and perform better.
Advanced Debugging Support Using Animator®
Server Express provides the debugging capabilities needed to find and resolve problems in code quickly, through Animator, its integrated debugger.
Once you have determined what caused an error in the program, you can edit it quickly and easily with a single keystroke - Server Express Animator is fully integrated with the Server Express full screen source code Editor.
COBOL For The Web
Server Express brings the power of COBOL to the Internet by enabling COBOL programs to process HTML forms dynamically using familiar ACCEPT/DISPLAY syntax or embedded HTML. This allows you to develop and debug portable Web server applications that can be targeted at many different production platforms. There's no need to invest in, or use, any other server scripting language -- you can do it all in COBOL!
High-Performance Web Applications
A key element in providing optimum performance on today's web servers is the ability to create executables that exploit all the performance capabilities of the production server. Server Express enables you to do this very easily by providing simple compile and link directives, so that, from a single source stream, an application can be built for deployment in a variety of executable formats.
Using COBOL with Java
Server Express provides the ability to deploy applications that support COBOL/Java interoperability, created using the wizards in Micro Focus Net Express. Applications can be created and deployed without the programmer having to use Java Native Interface (JNI) APIs directly.
When applications fail in production, you often cannot use your regular debugging techniques to locate the problem: you don't have the source code available, or you cannot easily reproduce the problem. FaultFinder is designed to provide the help you need in these critical situations. It provides a snapshot of your application at the point where it failed, along with fully configurable and comprehensive fault diagnostic information.
FaultFinder has been extended so it can be used before a problem has occurred. It can be invoked from an active program via a call statement or from the command line to be attached to an active process. In addition to the information listed above, FaultFinder also reports on memory allocation and usage by the various modules in the application.
©2002 HALLoGRAM Publishing, Aurora CO. All Rights Reserved
I've spent the holiday porting a C++ SDL roguelike to wasm using #emscripten. It's been really fascinating and challenging. The original game has a few different places that do infinite loops, which the browser environment doesn't tolerate. Fixing those is the hardest part.
"A small Massachusetts town has rejected an offer from Comcast and instead plans to build a municipal fiber broadband network."
Is public discontent with Facebook reaching a boiling point?
"Welcome to Magic School. Here is your schedule."
"This is just 'Ethics' and 'Human rights' and things like that."
"Correct, that's the first year curriculum."
"Do we have to learn all this?"
"Of course! What do you think this is, software engineering?"
#MicroFiction #TootFic #SmallStories
Representative democracy is probably the best way humans have found for governing ourselves anywhere approximating fairly, but I'm increasingly worried that it's going to stand in the way of us saving our own planet.
uspol, climate change Show more
Ok, so stay with me here: At what point does "not believing" in climate change when in a position to do something about it become genocide?
And I feel like genocide probably counts as "high crimes and misdemeanors" right?
I'm just saying.
Pretty ambitious plans by Barcelona! https://www.barcelona.cat/digitalstandards/en/init/0.1/index.html
I'd love to contribute to an open source project whose BDFL is a woman...
Good god in heaven we're building a hellscape of a world and grinning the entire damn time.
I guess the upside of humanity charging headlong at our own Great Filter is that we'll finally find out the answer to the Fermi Paradox.
Mobile Firefox needs a "No I don't want to download your app just so I can read a website" button clicker.
Exciting times - we're starting to see the first paid Matrix hosting providers emerging, starting with https://modular.im!! Read all about it at https://matrix.org/blog/2018/10/22/modular-the-worlds-first-matrix-homeserver-hosting-provider/ and check out the new Matrix Hosting project page! https://medium.com/@RiotChat/introducing-modular-awesome-hosting-for-riot-matrix-665a7a0c616
~ Black Mirror ~
What if there was a printer
But it works
Is gopher the new hipster www these days? Just trying to keep up with what the kids are up to.
Father, husband, policy technologist. Decentralization & infosec. Guardian of @matrix foundation. RT != Endorse
I speak some Esperanto.
Ted, your use of the term "qualified candidate" makes your editorial rather "soggy". What is a "qualified" candidate? Someone who fills the list of requisites? I find that logic much like internet dating - you look for someone with all "these" qualities - you find them - and then find out one added quality they didn't mention was that they are satan worshippers!
I once hired a "well-qualified candidate" in that he met every requisite listed. Within a few days of his work, I then discovered that (#1) if asked to do any work outside his "qualified comfort zone", he would whine and in fact get angry. (#2) His "certifications" and "qualifications" meant that he would waste hours telling us how we did not do something "right" (his opinion), and no matter how much I told him that we had to get things done, he would spend more hours (#3) 'researching' the best way to fix things I didn't need fixed. Sure, our work was not always perfect, but it kept our business chugging along - and who in this world can say their programs and data schemes are "100% perfect". Business doesn't work that way!
Funny thing is, after my "well-qualified candidate" flopped miserably, I hired an older guy - quick on his feet, eager, and willing to take on any challenge no matter what I threw at him. What a blessing this guy has been to us! The one thing he has that shines above all else is the enthusiasm and base talent. These days I often call him my "McGyver" - seems there is nothing I cannot ask for his help on that he doesn't 'attack' with gusto and drive, and he gets the job done 10 times faster than my former "well-qualified candidate".
In my opinion, the internet is not helping us at all with finding talent. In fact, I see job posting and job searching much like internet dating. That is, it's a joke. People "meet" the requirements, but in the end, it's the person, AS a person, that matters, and that is all just getting lost.
Job specialization? Well, consider that we are electing a new President soon here in the good old USA. Do we want someone with certifications? Specializations? ...or do we want someone who can think on their feet, dive in and attack problems and needs? I will take the latter, NOT the former. And in business, I have found this to be true as well.
Give me a McGyver any day, and you can keep your 'specialists' who I have found for the most part to be highly limited, unmotivated, and often, so 'expert' at something that we get nothing done.
There's no such thing as dumb questions, only poorly thought-out answers...
The web site I would like to revise for this class project is the Writing Studies Web site at http://www.writingstudies.umn.edu. I have been thinking about this site for quite some time, and while there are several things I'd like to change, I'll begin by selecting three pages.
Page 1: Home Page. http://www.writingstudies.umn.edu. I have mentioned this page in previous posts, but this page could change in many ways. For example, I would like to see a different header with a stronger U of MN color scheme. I'd like to see a different image on the header and words that make sense (rather than random letters). I'd also like to see different text on the home page. The home page text consists of a letter from the Chair that is now two years old. It is a "welcome" announcement to the new department. We no longer need this letter. And, as both Redish and the Yale Style Guide suggest, web page text needs to quickly get to the point. The Yale Style Guide suggests that there is no need for "welcome" messages from a CEO or chair. As to what would replace the text, I need to think further about that. The links on the left nav bar are one clue, as well as the links below that describe different entities such as the Center for Writing and First Year Writing.
Page 2. Here I am planning to select the "path" page for the Undergraduate Program. You can arrive at this page by going here: http://www.writingstudies.umn.edu/ugrad/. On this page there is a description of the major, a description of careers in the major, and then news and events. A few things that could change here: it might be useful to incorporate a small right nav bar with key links for the major. This appears in other pages but not here. We might reduce the textual description of the major to one paragraph and then list key links for more information. And the "news and events" could be a separate link rather than a scrolling option on the bottom of the page. News and events are a blog feed; perhaps the blog feed could reside on another page/link.
Page 3. Here I am planning to select the "path" page for the Graduate Program. It is available on this page: http://www.writingstudies.umn.edu/grad/. I think this page is in a bit better shape than the UG page, but there are still ways to improve it. The underlined text links in the first paragraphs could be in a right nav box, similar to what I'm thinking about for the UG page. The text could be edited a bit to get right to the point. Graduate news could be a separate link.
These are my ideas for now. I found the Redish reading very helpful (but overwhelming) and that generally the Yale Web Style Guide reinforced Redish's suggestions. A key starting point is writing in "topics" rather than "books" (from Redish, ch 5); the Yale Web Style Guide refers to this as "chunks". That is the starting point for me--deciding what "topics" are most important to readers. Redish offers helpful guidelines for organizing information, such as organizing according to time, task, or users' questions. I will think about what might be best and then continue thinking it through. Also, both readings for this week had a lot of information on style guides. The key points here, I think, are defining heading styles in terms of font, style, size, and spacing. It will be important to make a style guide and stick to it.
All for now.
|
OPCFW_CODE
|
What's the best way to create web pages nowadays?
I had the same background writing HTML sites. I recently had to do a simple web db entry form. I used Dreamweaver and I can't tell you how impressed I am with it. It's scalable with extensions and has great support for db work. From what I have seen so far, it has taken about 90% of the grunt work out of writing HTML pages.
Dreamweaver MX2004 is the way to go. It had some serious performance issues until a free update came out a few months back. The "2004" version improves the CSS handling over the old "MX" version. I use it for PHP/MySQL pages, and find it great for building forms/queries/etc, but I tend to tweak and reorganize some of the auto generated code, such as where it stores db connection info.
FrontPage 2003 seems to create standards compliant HTML and CSS and stuff for me, but I can't say I care enough to make sure. Every time I've had to update a Dreamweaver site without a copy of Dreamweaver it's been a horror, so bear in mind the people who'll inherit your code (it amazes me the number of people who think it's really important to create valid XHTML 1.0/1.1 when no browser could care less, but who don't mind spewing out crap on the server side).
TopStyle - http://www.bradsoft.com/topstyle/
I've always used Notepad, NoteTab, TextPad, or more recently Eclipse for JSP, and I've always been typing it in by hand, along with liberal cutting and pasting.
TopStyle and UltraEdit32 are my weapons of choice.
Jack of all
Max Belugin (http://belugin.newmail.ru)
I use EditPlus ( http://www.editplus.com/ ), the best text editor out there.
me, too. homesite.
Dreamweaver MX for what you are trying to do. Probably the latest version - 2004?.
Hmmm. The people complaining about Dreamweaver auto-generated code are missing the fact that whoever used Dreamweaver to create those files just used it wrong. I use Dreamweaver MX and I have tweaked the preferences (and not very much either) so that it does not rewrite my code, and the auto-generated HTML I get from the WYSIWYG editor is always just plain and simple HTML. No extra CSS, all tags are closed and even nicely indented. I have noticed also that you really just get what you ask for, or what you make it give you. If you want alt attributes, you must choose to put in an alt text for that image.
DW templates are pretty cool, but I never use them beyond a handful of pages before giving up on them. Just too much having to figure out how to get it to do what you want. But you can get it to produce clean code for templating if you can step back & try to figure out how to get it to do what you want, which isn't always easy.
oops, I opened a TH and closed a TD, but you get the idea.
I have always used notepad but I'm not _entirely_ sure why. Am I a masochist or just overly controlling?
|
OPCFW_CODE
|
X Windows on LSPro
Reasons for doing this
I have been attempting to get GDM running on the LSPro to allow me to do a remote X login.
The idea is fairly straightforward: I want to install an X client on a Windows XP machine that will allow me to connect to the LinkStation and have the LinkStation be the actual X server. *WARNING* X server/client terminology is awfully confusing. This is the convention I am using here:
1. X Server is a program running on the LinkStation that will allow an X client to connect to it. This is, typically, gdm or kdm controlled.
2. X Client is a remote PC, be it XP or Linux.
The X client I am using on Windows is Xming. I have been using it for some time at work to connect an X session within XP to a Debian-configured Linux box.
This is very much a work in progress, things aren't running 100%, but it seems to function to some extent.
Xming can be obtained from SourceForge. Install each of the packages.
Start the Xming launcher (XLaunch) and follow these steps for configuration:
1. At Select Display Settings, select 'One Window'. This is my preferred approach and I would suggest you use it until you have confirmed everything is working fine.
2. At Select how to start Xming, select 'Open sessions via XDMCP'.
3. At Configure a remote XDMCP connection, select 'Search for hosts (broadcast)'.
4. At Specify parameter settings, just select next.
5. Save the configuration to an easily accessible .xlaunch file. This is the file you will use to start Xming.
Install FreeLink as shown on the FreeLink page. My experience with this, after following the instructions to the letter, was very straightforward. I was much relieved when I saw the login prompt. Don't be scared! Just make sure you have a backup of everything, just in case.
I had just bought a DLink DNS-323 with 1TB storage that is used as a primary storage device, with the LinkStation being the backup device. After all, it *only* has 500GB.
The primary root volume does not have sufficient space to install gnome so I had to move it. I was reluctant to resize the partition due to the sheer amount of data that would have to be restored.
So, this is how I proceeded (be careful not to reboot during these steps until needed):
1. Copy everything from /usr to /mnt/disk1/, making sure to preserve all file permission information.
- cp -a /usr/* /mnt/disk1/
2. Modify /etc/fstab to mount /dev/sda6 on /usr. Remove the old /dev/sda6 fstab entry and replace it with the one below. This means you'll need to redirect all your shares if they are already set up. The fstab entry should be:
- /dev/sda6 /usr xfs defaults,acl,noatime,nodiratime 0 0
3. Rename /usr to /oldusr. There may be issues with processes that are still accessing /usr. I didn't have any problems with this, but you can use fuser -m /usr to find out which processes might be accessing that path.
- mv /usr /oldusr
4. Create a new mount point for the new /usr tree. I actually forgot this step. However, lb_worms [LS_PRO_modified_initrd] helped me recover from that.
- mkdir /usr
5. Reboot the LinkStation. Once the reboot is complete, you should see the normal Debian login prompt.
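Before rebooting in step 5, it can be worth sanity-checking the fstab entry from step 2: fstab lines need exactly six whitespace-separated fields (device, mount point, filesystem type, options, dump, pass), and accidentally merging the mount options into the type field is an easy way to break the mount. A quick check:

```shell
# A correct entry has exactly six fields; the type field must be just "xfs",
# with acl/noatime/nodiratime living in the options field.
entry='/dev/sda6 /usr xfs defaults,acl,noatime,nodiratime 0 0'
nfields=$(echo "$entry" | awk '{print NF}')
echo "$nfields"   # prints 6
```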
|
OPCFW_CODE
|
Amazing facts on snakes
Most species are non-venomous, some are mildly venomous and others produce deadly venom (a variety of toxins injected by a bite).
Here is a clever saying to help you differentiate between non-venomous and venomous snakes: "Red and yellow kills a fellow. Red and black is safe for Jack". (Note that this rhyme refers to North American coral snakes and their harmless mimics; it is not reliable in other parts of the world.)
All snakes are carnivores (eat meat).
They are ectotherms (cold-blooded), their body temperature is controlled by external factors. Snakes will bask in the sun to warm up and hide in shady places to cool down.
As ectotherms snakes are mainly found in tropical to temperate climate zones. Have you ever heard of a snake that lives in the snow?
Snakes as Pets
- Always buy a snake that has been bred in captivity.
- Buy a healthy looking snake
- Look for a rounded, firm body with shiny, smooth skin with no scabs or sores and moves smoothly with no tremors.
- Your snake should have clear eyes.
- The inside of its mouth should be pink, and check that it is not opening its mouth to breathe or gasping for breath.
What to do if your snake escapes?
Search high and low: under furniture, inside shoe boxes, in and behind cupboards, inside appliances, in every hole no matter how small, under beds, behind cushions, down the sides of couches, and inside clothing pockets. Also make sure to leave the cage door open and place its favourite food inside; the snake considers the cage home, so it will likely return to it.
Let's play the naming game.
Here are some fun and original names for pet snakes. You can come up with your own too.
Crusher, Chowdown, Mr. Crowley, Ms. Anaconda, Murphy, Banana, Lilith, Diablo,Dopey, Dragar, Drago, Earl, Eddie, Jelly, Hoover, Sir. Cornelius, Slithers, SirHiss, Sizzle, Slip, Slap, Slash, Slinkster, Slinky, Slithers, Slyder, Slyther, Smiles, Smoo, Thanatos, The Beast, The Strangler, Threat, Tiny, Tokie, Tootsie, Ziggy, Zippy.
Let's discover the most popular pet snakes: Corn Snakes, King Snakes and Milk Snakes!
Where are these snakes found in nature? What do they look like?
What do they eat? Where do they live in captivity?
How do they make babies? How do we care for them?
In the wild, snakes eat meat and will hunt, strike and kill their prey by constriction. This method involves the snake initially striking at its prey and holding on, pulling the prey into its coils or, in the case of very large prey, pulling itself onto the prey. The snake will then wrap one or two coils around the prey. Once the prey is dead, the snake swallows it whole, head first.
Terrarium: (A miniature landscape with living plants and small animals like snakes).
When fully grown, most snakes must be housed in a 20-25 gallon (about 75 liters) enclosure, also called a vivarium.
Snakes are excellent escape artists therefore the enclosure they are kept in must be well sealed. They will find a way to get out of even the smallest gap. Picking a solid cage is a necessity for proper snake care. A 20 gallon long enclosure makes a good sized cage for a snake. The most important part is to get a secure fitting lid that can be clamped down. Snakes will push at the lid with their noses looking for weaknesses so the fit of the lid is very important. A determined snake can push against screen or glass until it finds an opening big enough for its head; where its head goes, so goes its body.
Snakes must be housed separately or they may eat each other.
A good substrate fulfills all the following requirements: it looks attractive, it is easy to clean, there is no danger of the snake eating the shavings when it feeds and, if possible, it allows for burrowing. Popular substrate choices are reptile bark, astroturf, aspen shavings, mulch or paper towels, although the latter can look unattractive.
Snakes are reptiles, and like all reptiles they do not make their own body heat and rely on an outside source to heat their bodies, e.g. the sun or, in the case of a snake in captivity, an appropriate heating source. Correct temperature control will ensure the health of your snake and will enable proper digestion and effective immune function. Corn, Milk and King snakes require a temperature gradient between 70-75 °F (21-24 °C) on the cool side and 85-90 °F (29-32 °C) on the basking side. Place your heating source on one side of the tank only to ensure a gradient of temperature. Your snake will move around the tank to regulate its body temperature. Experiment with the available heating sources until you find one that works for your snake. Try a heating pad, heating tape, white light heat lamps or ceramic emitters. Do not use hot rocks; these will burn your snake.
Humidity should be 40-60%. This is usually achieved by keeping the water source topped up.
One important aspect of feeding that is often overlooked is the addition of hiding areas to the cage. Most snakes like to feel secure in their environment. One way of providing for this need is to put hiding spots in the enclosure. Hiding spots can be made of anything, as long as the snake can completely fit inside the area and hide itself from view. Old cardboard boxes are good for this, but so are many of the commercially manufactured hiding spots available in pet stores. A hiding spot should be placed both on the warm end and the cool end of the cage, so that the animal can feel secure in any spot. Snakes kept without appropriate hiding areas become stressed and may refuse to eat. Even a piece of bark can do if the substrate is something the snake can burrow into.
They are inquisitive and quite active, so they are great to watch when they explore their surroundings. Provide an interesting branch for climbing and resting, and make sure it is cleaned of bugs etc.
Shedding? As a reptile grows, its old skin becomes too tight and worn. A new skin awaits just below the old. As a snake gets ready to shed, its eyes will turn cloudy, the body color will start to dull and develop a whitish sheen, and the snake will become less mobile, even going off food until the shed is finished. Once the eyes have cleared, the snake is ready to shed. To assure proper hydration, soak the snake in warmish water after the eyes clear; this should enable the snake to shed easily within the next 24 hours. If you are lucky you will see your snake shed its skin. It starts by pushing its head against a rough surface (a rough rock should be provided for this) until the skin around its head separates; the snake then pushes its whole body through the opening and turns the shed skin inside out. The skin will be clear in color and should show every detail of the snake. If you have a healthy snake, the shed skin will be complete. You can also measure the shed skin to work out an approximate length of your snake. A snake sheds its skin about once every 2-3 months, although this is sometimes more frequent. A snake that is shedding may not want to eat for 3 weeks.
Handling & Care
After giving your snake a couple of days to settle in, begin picking it up and handling it gently. It may move away from you and it may anoint you with a smelly, musky substance from its vent. Be gentle but persistent. Daily contact will begin to establish a level of trust and confidence between you and your snake. When it is comfortable with you, you can begin taking it around the house.
- Do not keep a snake as a pet in a house that has children who are under the age of 5.
- Do not handle your snake more than once a week, unless you have a good, health related, reason to do so.
- Do not overfeed, it may shorten lifespan.
- Never hold the food by the tail so your snake can take it from you. Their eyesight is poor and they use heat, smell, and motion to locate their prey. They may come to see your hand as prey.
Hygiene and Safety
As a general rule, please see that you wash your hands with antibacterial soap before and after handling any animal, as most of them carry germs of some sort, and so do you.
|
OPCFW_CODE
|
Natural language processing python and nltk pdf
File Name: natural language processing python and nltk .zip
- Natural Language Processing A Quick Introduction Pdf Free
- Natural Language Processing with Python
- NLTK Tutorial in Python: What is Natural Language Toolkit?
ADSP is a specialist AI consultancy with a proven track record delivering natural language processing (NLP) solutions, including conversational agents, sentiment analysis and topic modelling. We deliver solutions using state-of-the-art deep learning technologies such as Transformers, and NLP Python libraries such as spaCy, Gensim and NLTK.
Natural Language Processing A Quick Introduction Pdf Free
Joshua Cason.

Imagine the possibilities! English Speaking Robots! English Speaking Cars! You need Python.

Preface
1. Language Processing and Python
2. Accessing Text Corpora and Lexical Resources
3. Processing Raw Text
4. Writing Structured Programs
5. Categorizing and Tagging Words
6. Learning to Classify Text
7. Extracting Information from Text
8. Analyzing Sentence Structure
9. Building Feature Based Grammars
10. Analyzing the Meaning of Sentences
11. Managing Linguistic Data

We will look at highlights in the book, but not every chapter will be highlighted. Type: 'texts()' or 'sents()' to list the materials. A concordance search for "monstrous" in Moby Dick produces output like this:

Displaying 11 of 11 matches:
ong the former , one was of a most monstrous size . Some were thick
d as you gazed , and wondered what monstrous cannibal and savage could ever hav
that has survived the flood ; most monstrous and most mountainous ! That Himmal
they might scout at Moby Dick as a monstrous fable , or still worse and more de
th of Radney . I shall ere l
ing Scenes . In connexion with the monstrous pictures of whales , I am strongly
ere to enter upon those still more monstrous stories of them which are to be fo
ght have been rummaged out of this monstrous cabinet there is no telling .

Presidential Inaugural Addresses: this corpus can be used to investigate changes in language use over time. Here are some good sentiment words.
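The concordance display above can be mimicked with a few lines of plain Python. This is only a sketch of the idea (find every occurrence of a word and show it with a fixed amount of surrounding context), not the NLTK API itself; the sample tokens below are made up for illustration:

```python
def concordance(tokens, word, width=30):
    """Return each occurrence of `word` with up to `width` characters of
    context on either side, mimicking a concordance display."""
    text = " ".join(tokens)
    lowered = text.lower()
    target = word.lower()
    lines = []
    start = 0
    while True:
        i = lowered.find(target, start)
        if i == -1:
            break
        left = max(0, i - width)
        right = min(len(text), i + len(word) + width)
        lines.append(text[left:right])
        start = i + len(word)
    return lines

# Illustrative tokens only, echoing the Moby Dick example above.
tokens = ["one", "was", "of", "a", "most", "monstrous", "size", "and",
          "wondered", "what", "monstrous", "cannibal", "could", "ever", "have"]
for line in concordance(tokens, "monstrous", width=20):
    print(line)
```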
This gets the computer a little closer to understanding the text. It will then compute all the parts of a sentence and how they work together (or should not). Using Python to abstract: we can make our own functions. We have seen what some basic tools of NLP look like and how to get started with them.
The End.
Natural Language Processing with Python
NLP is a subfield of computer science and artificial intelligence concerned with interactions between computers and human natural languages. It is used to apply machine learning algorithms to text and speech. For example, we can use NLP to create systems like speech recognition, document summarization, machine translation, spam detection, named entity recognition, question answering, autocomplete, predictive typing and so on. Nowadays, most of us have smartphones that have speech recognition. These smartphones use NLP to understand what is said. Also, many people use laptops whose operating system has built-in speech recognition.
Natural Language Processing with Python – Analyzing Text with the Natural Language Toolkit. Steven Bird, Ewan Klein, and Edward Loper. This version of the.
NLTK Tutorial in Python: What is Natural Language Toolkit?
This is a book about Natural Language Processing. By natural language we mean a language that is used for everyday communication by humans; languages like English, Hindi or Portuguese. In contrast to artificial languages such as programming languages and logical formalisms, natural languages have evolved as they pass from generation to generation, and are hard to pin down with explicit rules. We will take Natural Language Processing or NLP for short in a wide sense to cover any kind of computer manipulation of natural language.
If you are an NLP or machine learning enthusiast and an intermediate Python programmer who wants to quickly master NLTK for natural language processing, then this Learning Path will do you a lot of good. Natural Language Processing is a field of computational linguistics and artificial intelligence that deals with human-computer interaction. It provides a seamless interaction between computers and human beings and gives computers the ability to understand human speech with the help of machine learning. The number of human-computer interaction instances is increasing, so it's becoming imperative that computers comprehend all major natural languages.
When it comes to the field of natural language processing, it turns out that we are actually talking about a very broad number of related concepts, techniques, and approaches. Word vectors, dependency parsing, text classification, regular expressions, language models, speech translation: these can all be lumped together under the banner of NLP, though they are very different tasks and techniques.
|
OPCFW_CODE
|
Fully automated Breast Segmentation on Mammographies
But wait, let's look at some pictures first!
breast_segment takes a Mammography image and detects the largest, connected region (usually the breast).
It outputs a segmentation mask and the coordinates of the rectangular bounding box.
Now, the images are easily explained:
- First image: Input Image
- Second image: Segmentation mask (computed, boolean)
- Third image: Bounding Box (computed)
- Fourth image: Overlay visualization of the segmentation
breast_segment should be available on your python path.
You could achieve this like so:
    import sys
    sys.path.append('PATH_TO_BREAST_SEGMENT')
Then, import and use. Let's say that we've loaded an image into the variable im. It should be a NumPy array.
    from breast_segment import breast_segment
    mask, bbox = breast_segment(im)
Done! You may want to visualize it to check if it worked. Let's use matplotlib:
    from matplotlib import pyplot as plt
    %matplotlib inline
Visualize the original image (you probably know how to do that):
    f, ax = plt.subplots(1, figsize=(12, 12))  # adjust the figure size
    # set the correct window and color map; yours may differ.
    # radiologists like gray, understandably
    ax.imshow(im, vmin=0, vmax=4096, cmap='gray')
Have a look at the segmentation mask:
    f, ax = plt.subplots(1, figsize=(12, 12))
    ax.imshow(mask, vmin=0, vmax=1, cmap='inferno')  # use inferno colormap for dramatization
Create an overlay visualization like in the fourth image above:
    f = plt.figure(figsize=(12, 12))
    ax = plt.subplot(111)
    ax.imshow(im, vmin=0, vmax=4096, cmap='gray')
    ax.imshow(mask, alpha=.3, cmap='inferno')  # alpha controls the transparency
Showing the bounding box like in the third image above is a little more code but also easy. Check out the complete example (see below) for the full story!
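For reference, here is one way such a box can be drawn with matplotlib. Note that the (x_min, y_min, x_max, y_max) ordering and the dummy image and bbox below are assumptions for this sketch; check the notebook for the format breast_segment actually returns:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
from matplotlib import pyplot as plt
from matplotlib.patches import Rectangle

# Dummy stand-ins for the real image and bounding box. The
# (x_min, y_min, x_max, y_max) ordering is an assumption for this sketch.
im = np.zeros((64, 64))
bbox = (5, 10, 40, 60)

x_min, y_min, x_max, y_max = bbox
f, ax = plt.subplots(1, figsize=(6, 6))
ax.imshow(im, cmap='gray')
ax.add_patch(Rectangle((x_min, y_min), x_max - x_min, y_max - y_min,
                       fill=False, edgecolor='red', linewidth=2))
```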
If you want to tweak some parameters, look no further:
    mask, bbox = breast_segment(image, scale_factor=0.25, threshold=3900, felzenzwalb_scale=0.15)
| Parameter | Type | Description |
| --- | --- | --- |
| image | NumPy array, two-dimensional | The input mammography to segment. |
| scale_factor | float | Downscaling factor. The segmentation will be computed on this smaller image. Default: 0.25 (25% of original size) |
| threshold | integer | The maximum cut-off for noisy (scanned) mammography images. Values above are treated as noise and ignored. Default: 3900 (values > 3900 will be set to 0) |
| felzenzwalb_scale | float | Scale parameter for Felzenszwalb's algorithm. Check out the respective skimage docs for more information. Default: 0.15 (tested on DDSM) |

| Return value | Type | Description |
| --- | --- | --- |
| mask | NumPy array, two-dimensional, boolean | The computed segmentation mask. |
| bbox | 4-element tuple | The coordinates of the rectangular bounding box of the segmentation. The tuple consists of |
The above images were generated in an IPython Notebook which resides
Check out the full notebook for a complete walkthrough!
The Mammography image is from the Digital Database for Screening Mammography (DDSM).
To my knowledge, it's one of the biggest public mammography databases. However, you will
need some more tools to open and process the weird
LJPEG format and their associated
metadata, which I conveniently provide:
breast_segment is basically a simple application of the Felzenszwalb algorithm in skimage, with some tweaking of parameters and thresholding for noise reduction. Additionally, Felzenszwalb's algorithm is only applied to a downscaled version of the image to reduce computation time.
Edge cases and Fails
In some cases, breast_segment wrongly detects the background as "breast" and creates an inverted segmentation. There is a slightly hacky mechanism to detect that, in which case it simply selects the second largest region (instead of the largest) as the breast.
In other cases, it simply fails dramatically. Tweaking the threshold and Felzenszwalb scale to optimum values for the given mammography images, depending e.g. on the type of scanner, can help in some cases.
I had been trying to apply neural networks to mammographies by training them to differentiate between benign and malignant images from the DDSM database (spoiler: it didn't work). In trying to improve the preprocessing (reducing the noise), I developed this automatic segmentation. Feel free to [contact me] if you want to hear the full story!
As I'm currently moving away from Machine Learning, I decided to open-source it to spare you some time and nerves. Good luck!
Feel free to file an issue on this Repo or contact me (Oliver Eidel), I'm happy to help!
|
OPCFW_CODE
|
HTML5 Video Captioning
HTML is the markup language used to render almost every page on the web. HTML5 is the latest version, and it’s replete with incredibly useful features, including a universal video standard that lets developers add video to a web page without using any third party plugins, like Flash. The new standard also makes it much easier to publish accessible video through closed captioning.
This article provides an overview of how HTML5 will improve and standardize accessible video through captioning. Although HTML5 is still evolving, most browsers have already adopted the basic video features. The hope is that we will also be able to converge on a single web captioning format. Although we’re not quite there yet, this article examines the two caption formats being considered.
Why is video captioning so difficult in HTML?
In the current version of HTML, there is no standard for showing a video on a web page. Almost all videos are shown through plugins, like Flash, QuickTime, Silverlight, and RealPlayer. The problem with this approach is that there is no standardization across different browsers and devices. And although web publishers try to build redundancies and fallback provisions to maximize compatibility, it’s practically impossible to publish video that works universally. As a consequence, publishing closed captions has been difficult and unreliable because both the caption format and encoding method depend on the video publishing technology used.
How does HTML5 simplify web video and accessibility?
HTML5 is a major step forward for standardizing video across web browsers and devices, and thus simplifying closed captioning. The idea is that web video will be based on an open, universal standard that works everywhere. HTML5 natively supports video without the need for third party plugins. A video can be added to a web page using the video element, which makes it almost as simple as adding an image. The track element can then be used to display closed captions, subtitles, text video descriptions, chapter markers, or other time-aligned metadata.
The HTML code below shows how these elements work:
<video width="320" height="240">
  <source type="video/mp4" src="my_video_file.mp4">
  <track src="captions_file.vtt" label="English captions" kind="captions" srclang="en-us" default>
</video>
The attributes of the track element work like this:
- src: the address of the caption file.
- label: a user-readable title for the track, shown in the caption menu.
- kind: the type of text track (captions, subtitles, descriptions, chapters, or metadata).
- srclang: the language of the track text.
- default: enables this track by default unless the user has chosen another.
Will HTML5 include a standard caption format?
Currently there are two competing caption formats being considered. In part, this is because there are two groups collaborating on HTML5: The Web Hypertext Application Technology Working Group (WHATWG) and the World Wide Web Consortium (W3C).
WHATWG has developed and proposed the WebVTT (Web Video Text Tracks) caption format, which is a new, user friendly text format that consists of line numbers, timelines, and text with formatting options. WebVTT is similar to the widely established SRT format, but accommodates text formatting, positioning, and rendering options (pop-up, roll-on, paint-on).
W3C has proposed using TTML (timed text markup language), which is a widely established XML format supported in Adobe Flash and Microsoft Silverlight and used by sites like Netflix and Hulu.
To see how the two caption formats work, Microsoft built an HTML5 captioning prototype that demonstrates both formats in HTML5.
3Play Media has been participating in the development of captioning standards through the Web Media Text Tracks Community Group, which was created to advance this area of HTML5 and improve web captioning solutions.
Although the current HTML5 spec supports both caption formats, it appears that the WebVTT format is gaining ground on TTML. The hope is that we will converge on a single caption format, which would greatly simplify the process of publishing accessible video.
WebVTT caption format
The WebVTT caption format is a text file with a .vtt extension. The file begins with a header “WEBVTT FILE” followed by cues and their corresponding text. There are several parameters that allow you to control the line position, text position, and alignment. You can also add styling to the text within the cue itself. The example below demonstrates a bold <b> element.
WEBVTT

1
00:00:13.000 --> 00:00:16.100
<b>ARNE DUNCAN:</b> I'll start and then turn it over to you.

2
00:00:16.100 --> 00:00:20.100
It's so critically important that parents be actively engaged
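To make the cue structure concrete (an optional identifier line, a timing line containing "-->", then cue text until a blank line), here is a minimal, illustrative parser sketch in Python. It is not a spec-complete WebVTT parser and ignores cue settings, styling, and escaping:

```python
def parse_webvtt_cues(vtt_text):
    """Extract (start, end, text) triples from a simple WebVTT file.
    A minimal sketch: ignores cue settings, styling and escaping."""
    cues = []
    for block in vtt_text.strip().split("\n\n"):
        lines = block.strip().splitlines()
        if lines[0].startswith("WEBVTT"):  # skip the header block
            continue
        # The timing line is the one containing "-->"; an optional
        # identifier line may precede it.
        for i, line in enumerate(lines):
            if "-->" in line:
                start, _, end = line.partition("-->")
                cues.append((start.strip(), end.strip(), " ".join(lines[i + 1:])))
                break
    return cues

sample = """WEBVTT

1
00:00:13.000 --> 00:00:16.100
<b>ARNE DUNCAN:</b> I'll start and then turn it over to you.

2
00:00:16.100 --> 00:00:20.100
It's so critically important that parents be actively engaged
"""
for start, end, text in parse_webvtt_cues(sample):
    print(start, end, text)
```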
TTML caption format
<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <body>
    <div>
      <p begin="00:00:13.00" end="00:00:16.10">
        ARNE DUNCAN: I'll start and then turn it over to you.
      </p>
      <p begin="00:00:16.10" end="00:00:20.10">
        It's so critically important that parents be actively engaged
      </p>
    </div>
  </body>
</tt>
When will the HTML5 video captioning features be ready for web-wide use?
The W3C and WHATWG have developed specifications for how video and captions should work in browsers. Although these standards are still being refined, it’s now up to the browser developers (Microsoft, Google, Mozilla, and Apple) to adopt these standards and build in the functionality. That will take some time. Although there appears to be a lot of consensus around video standardization, there are still some open issues hampering universal adoption. The reality is that browser developers have their own technical, legal, and business agendas.
Although the new <video> element is already supported by most browsers, there has been no consensus on a single video format (MP4, WebM, and Ogg are being considered). Also, most of the advanced video features are not yet ready for use. Unfortunately this includes the <track> element, which is required to publish captions and subtitles.
On May 25, 2011 the W3C announced “Last Call”, which was an invitation for communities inside and outside of W3C to provide feedback on whether the HTML5 technical requirements have been satisfied. The recommended release was set for 2014 and the hope is that it will gain web-wide adoption over the subsequent few years.
|
OPCFW_CODE
|
Incorrect handling of ObjectId fields with lua scripts
Environment
VerneMQ Version: 1.11.0
OS: Debian 10 (buster)
Erlang/OTP version (if building from source): 22
VerneMQ configuration (vernemq.conf) or the changes from the default
Cluster size/standalone: any
Expected behaviour
Calling mongodb.find_one returns a document with ObjectId fields, for example userId: ObjectId('5ec51288ace1c90a347fb4a4')
Actual behaviour
2020-11-25 12:26:17.572 [error] <0.455.0>@vmq_diversity_script_state:handle_info:177 can't call function auth_on_register with args [{addr,<<"<IP_ADDRESS>">>},{port,47156},{mountpoint,<<>>},{client_id,<<"service5d0f1c5a">>},{username,<<"service:100">>},{password,<<>>},{clean_session,true}] in "/etc/vernemq/scripts/auth_tp.lua" due to {error,badarg}
In the source file vmq_diversity/src/vmq_diversity_mongo.erl I found that the document's _id field is processed by the function check_ids. But it does not process other ObjectId fields!
Field mapping and unmapping is done by vmq_diversity_utils:map and vmq_diversity_utils:unmap. Why aren't ObjectId fields processed in these functions?
There is also a need for a function that creates a value of ObjectId type from a string in a Lua script.
@mtaobiz thanks, I'm not sure what you are looking to achieve or how I can help. You might want to check the format of auth_on_register arguments in your script.
script is simple:
function auth_on_register(reg)
log.debug("auth wp1")
local svc = mongodb.find_one(pool, "svcs", {
id = 136
})
log.debug("auth wp2")
return true
end
and error is:
2020-11-25 15:23:06.361 [debug] <0.483.0>@vmq_diversity_lager:debug:41 auth wp1
2020-11-25 15:23:06.362 [error] <0.483.0>@vmq_diversity_script_state:handle_info:177 can't call function auth_on_register with args [{addr,<<"<IP_ADDRESS>
">>},{port,49322},{mountpoint,<<>>},{client_id,<<"servicee1cf29b5">>},{username,<<"service:9">>},{password,<<>>},{clean_session,true}] in "/etc/verne
mq/scripts/auth_tp.lua" due to {error,badarg}
svcs contains this doc:
{
"_id" : ObjectId("5f6b45ed5362be63fa8ed962"),
"id" : 136,
"type" : "account",
"userId" : ObjectId("5f6b45ed5362be63fa8ed960"),
"access" : "userservice",
"server" : "sandbox",
"active" : true,
"created" : ISODate("2020-09-23T15:56:13.477+03:00")
}
without userId all works fine
@mtaobiz have you been able to find out more here? maybe related to https://github.com/vernemq/vernemq/issues/1678 also?
No, it's not about that issue. I have already made a patch to use the ObjectId field needed for my project, but I don't know Erlang well and can't make a universal solution.
Please check my Lua script with the document from the comment above.
|
GITHUB_ARCHIVE
|
Unexpected resonance behaviour in LC tank resonator coupled to Cockroft Walton
I am creating a powerbase for an instrument that requires a potential difference that increases in discrete steps, and am attempting to power it with a Cockroft Walton (CW) voltage multiplier circuit.
With 15 stages and an 800V anode-cathode operating voltage the required input to the CW is 27V. However one of the design parameters is that the input to the PCB can only be 3.3V AC. I decided to bridge this gap using an LC tank resonator, as shown in the diagram:
Without the LC tank resonator attached, the CW behaves as expected with acceptable efficiency dropoff, measured from the end of each resistor at the stages in the diagram relative to the ground at the end of the CW (positive polarity).
When only L2 and C1 from the diagram are added, driven by a frequency generator in series with them, the CW performs unexpectedly: at some frequencies the middle stages sit at a higher potential difference relative to ground than the earlier stages. The graph below shows how some of the different stages (stage X is labelled DYX, stage 15 is 'anode') behave around the resonance point of the LC circuit, which, as expected from basic LC theory, resonates at 100 kHz.
I understand that this behaviour is all related to the interactions of the capacitors and inductor but can't find a mathematical model to explain it. When I take a particular CW stage and treat all the capacitors between it and the inductor as the single capacitance of the LC tank 'seen' by that stage (e.g. at R3 combining C2 and C3 (series) with C1 (parallel) into a single value), the new resonance frequencies are well below what I see in the tests. Simulations with LTSpice do not show any of this unexpected behaviour, and only pick up on the resonance of the initial LC circuit.
Based on this I am sure that my assumptions about the different resonance points are incorrect or incomplete, and I am confused as to how the middle stages could at any time be at a greater potential difference than the 15th stage. Because of this I am unsure how to fix the issues in the next iteration of the design. Any suggestions would be greatly appreciated.
Exactly how did you measure the output voltages? What does p_d./v represent?
@BruceAbbott I used a multimeter on the DC setting between the ground connection at the end of the CW and the test node above each resistor, as indicated on the circuit diagram. The p.d./V is the potential difference between these two points for different frequencies, measured in volts.
All the voltages on your graph seem very low. What is the input resistance of your multimeter? (if you don't know then what make/model is it?). What is the coupling factor of your coils?
@BruceAbbott I'm using the Fluke 289 multimeter. I scanned its datasheet and couldn't find a value; I had presumed (perhaps erroneously!) that for potential difference measurements it would be negligibly high. The coupling between inductors is interesting: I tried to estimate it by comparing to Spice sims with different values, but since the circuit generally does not behave according to Spice (e.g. the p.d. measurements) I expect any estimates I make to have a large error. I've also found the magnetic fields are interfering with the rest of the board, so I'm redesigning the coils.
Manual says 10 Megohms <100pF. According to my LTspice simulation, 10M is a significant load on the higher voltage stages. I suspect the meter is damping/detuning the CW by different amounts depending on which stage is being measured.
Your 'instrument' is a photomultiplier tube, right? What maximum anode current is expected?
@BruceAbbott I will run some simulations of my own to get an understanding of this, thanks for finding the resistance value. Yes the circuit is for a PMT, I expect the anode current won't exceed around 150uA when a detection is made.
Interesting approach. Here's a back-of-the-envelope calculation.
Think about the LC tank circuit you have designed. Empirically, we'll assume 15 volts on the resonant circuit since you have measured a peak voltage at about 15 volts. Using the equation for energy contained in a 680 pF capacitor, the peak energy in the tank circuit is 76.5 nanojoules.
So we'll be generous and pretend that we can deliver half of this energy 100,000 times per second at our frequency of 100 kHz. This yields 3.83 mW available from the resonant circuit. At your peak voltage, you will have roughly 250 microamperes to run your circuit.
This current must charge all of the capacitors in the string and charge them to a point where they can turn on the diodes. You simply do not have enough power to accomplish this.
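The arithmetic above can be reproduced in a few lines. This is a sketch: the 680 pF, 15 V, and 100 kHz figures are the empirical assumptions used in this answer, and the inductance at the end is merely what those assumptions imply.

```python
import math

C = 680e-12    # tank capacitance in farads (assumed from the answer)
V_peak = 15.0  # measured peak tank voltage in volts
f = 100e3      # observed resonant frequency in hertz

# peak energy stored in the capacitor: E = (1/2) C V^2
energy = 0.5 * C * V_peak**2            # ~76.5 nJ

# generously assume half the stored energy is delivered each cycle
power = 0.5 * energy * f                # ~3.83 mW

# current available at the peak voltage
current = power / V_peak                # ~255 uA

# inductance implied by f = 1 / (2*pi*sqrt(L*C))
L = 1.0 / ((2 * math.pi * f) ** 2 * C)  # ~3.7 mH

print(energy, power, current, L)
```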
An LC tank circuit does provide the option of producing a higher voltage, but at a price: there is no way to create additional energy from resonance alone. There is really no advantage over a design that runs directly from a step-up transformer, either to get you to your 27 volts or all the way to your 800 V. No matter what you do, you will have to provide, at 3.3 volts, all of the power needed to run your anode.
Good luck!
Thank you for the response, there is a lot here that I hadn't considered. Hopefully I can improve it at the next iteration. Will update when I've had a chance to build and test.
|
STACK_EXCHANGE
|
ASUS M3N-HT 780a SLI Review - ajmatson
For the extras section, I wanted to post the temperatures for the Mempipe. ASUS claims a decrease in memory temperatures up to 10C with the Mempipe installed. I ran the Sisoft Sandra XII memory tests for 30 minutes without the Mempipe on and took the temperature at the end of the 30 minutes. I then attached the Mempipe to the board and tightened it to the memory and again ran the Sandra tests for 30 minutes taking the temperatures at the end. The graphs show the Idle and load temperatures with and without the Mempipe installed. The idle temps were taken after one hour of no computer usage. The temperatures were taken using a Radio Shack Non-Contact Infrared Thermometer.
Express Gate is a nifty little feature that lets you boot into a Linux-based OS in a few seconds and IM, surf the Net, or make Skype calls without having to fully boot into the Windows operating system. This comes in handy if you need to download a driver to fix or install Windows and have no other computer to download the files, or if you want to quickly check your email or IM someone to tell them you are running late. Why boot into Windows when you can boot into Express Gate in a fraction of the time and away you go? The one downfall is that I could not get Express Gate to see my USB keyboards or mice, only the PS/2 ones. This could be a problem for users who only have USB peripherals.
NVIDIA Hybrid SLi Technology:
NVIDIA has two new technologies that are aimed at conserving power and increasing performance. First is HybridPower. HybridPower allows the system to throttle back power by using the motherboard GPU (mGPU) when running everyday applications and full graphics power is not needed, like when watching HD videos, but then unleash the total performance of a discrete GPU (dGPU) for demanding games and applications. This allows the system to reduce power consumption when maximum GPU power is not needed, instead of needlessly powering the discrete GPU. This feature is only available when the M3N-HT is paired with a Hybrid SLi ready 9800GTX or a 9800GX2 discrete graphics card.
The other new technology is GeForce Boost which takes the mGPU on the motherboard and combines it with a NVIDIA Hybrid SLi-enabled dGPU to increase performance. By combining the mGPU and the dGPU you can gain more video processing power to give you an edge in games and an increase in graphics performance without having to spend a lot of money on a more powerful video card. This feature is available only when used with a Hybrid SLi ready 8400GS or 8500GT.
|
OPCFW_CODE
|
Satellite broadband has been used for quite a while now by many people to get online, but there are many other applications of satellite technology that are very useful in everyday living. The job most commonly associated with a computer career is the computer programmer. However, today, if you are looking for a computer career as a programmer, you have options within the career itself. You can be an applications programmer, writing software to handle specific tasks, or a systems programmer, who controls how the software is used. Some employers want a programmer with a B.S. in Computer Science, but you can get started in a computer career as a programmer with a two-year degree or certificate.
At MIT, researchers began experimenting with direct keyboard input to computers, a precursor to today's normal mode of operation. Typically, computer users of the time fed their programs into a computer using punched cards or paper tape. Doug Ross wrote a memo advocating direct entry in February. Ross contended that a Flexowriter (an electrically controlled typewriter) linked to an MIT computer could function as a keyboard input device thanks to its low cost and flexibility. An experiment conducted five months later on the MIT Whirlwind computer confirmed how useful and convenient a keyboard input device could be.
As computer technology began to move from mainframes alone to the addition of minicomputers, these machines had a greater need to share data in a fast and efficient manner. Computer manufacturers began to create protocols for sharing information between two peer machines. The network in this case was also point-to-point, though the nature of the connection was peer-to-peer in that the two machines (e.g., minicomputers) would communicate with each other and share information as relative equals, at least in comparison to the earlier mainframe-to-terminal-controller type of connections.
Grades at SLKF College are generated using online grade books. Progress reports and report cards for first through seventh grades are distributed electronically. All school database, attendance, discipline, reporting, and billing information is recorded and monitored by software designed specifically for this purpose, RenWeb. RenWeb also includes individual webpages for teachers and their courses, and allows students and their parents to view homework and grades as well as download assignments, handouts, and any other necessary information from home.
People are often confused between a 3D virtual tour and an interactive app with VR support. It is really quite simple: a 3D virtual tour is like moving through a home while you remain aware that you're looking at a screen. With an interactive app with VR support, once you put on a headset, it is just like being in that apartment. Virtual reality is a rapidly growing technology and proves extremely useful for busy clients who may find it hard to visit a property in person due to their schedules; they can view it easily on their computers.
|
OPCFW_CODE
|
'use strict';
/** DataFlow JS v1.0.8 by Aleksandar Panic. License: MIT **/
!function(document, window) {
var inspect = {
isObject: isObject,
isString: isString,
isUndefined: isUndefined,
isFunction: isFunction
};
function isObject(val) {
return typeof val === "object";
}
function isFunction(val) {
return typeof val === "function";
}
function isString(val) {
return typeof val === "string";
}
function isUndefined(val) {
return typeof val === "undefined";
}
function noop() {}
// Binds one or more DOM elements (config.to, a selector or element list) to a
// path inside a DataSource, re-rendering through the configured renderer on update.
function BoundElement(config) {
this.element = inspect.isString(config.to) ? document.querySelectorAll(config.to) : config.to;
this.multiple = config.multiple || this.element.length > 1;
this.renderer = config.renderer;
if (!this.multiple && this.element.length) {
this.element = this.element[0];
}
this.dataSource = config.dataSource;
this.path = config.path;
this.as = config.as || 'text';
this._bound = true;
this._boundHandler = this.update.bind(this);
this.dataSource.bindHandler(this._boundHandler);
this.changeDetector = config.changeDetector || null;
this.onBind = config.onBind || noop;
this.onUpdate = config.onUpdate || noop;
this.onUnbind = config.onUnbind || noop;
this.onAfterRender = config.onAfterRender || noop;
this.onBind.call(this.element, this);
this.updateDisabled = config.updateDisabled || false;
this.update();
}
BoundElement.prototype.getValue = getValue;
BoundElement.prototype.setValue = setValue;
BoundElement.prototype.run = run;
BoundElement.prototype.shouldUpdate = shouldUpdate;
BoundElement.prototype.update = update;
BoundElement.prototype.unbind = unbind;
function getValue(defaultValue) {
return this.dataSource.get(this.path, defaultValue);
}
function setValue(value) {
if (!this._bound) {
throw new Error('Cannot call setValue of unbound instance.');
}
this.dataSource.set(this.path, value);
}
function update(setPath) {
if (this.updateDisabled) {
return;
}
if (this.shouldUpdate(setPath)) {
this.onUpdate.call(this.element, this.getValue(), this);
this.renderer.render(this);
}
}
function shouldUpdate(setPath) {
if (!this.changeDetector) {
return !setPath || !this.path || setPath === this.path;
}
return this.changeDetector(this.path, this.dataSource, this);
}
function unbind() {
this._bound = false;
this.dataSource.unbindHandler(this._boundHandler);
this.onUnbind.call(this.element, this);
}
function run(callback) {
callback.call(this, this.getValue());
this.dataSource.update();
}
function bind(renderer, dataSource, config) {
if (Array.isArray(config)) {
return bindArray(renderer, dataSource, config);
}
return bindSingle(renderer, dataSource, config);
}
function bindArray(renderer, dataSource, items) {
var results = [];
for(var i = 0; i < items.length; i++) {
results.push(bindSingle(renderer, dataSource, items[i]));
}
return results;
}
function bindSingle(renderer, dataSource, config) {
if (!config.dataSource) {
config.dataSource = dataSource;
}
if (!config.renderer) {
config.renderer = renderer;
}
return new BoundElement(config);
}
// Resolves a dot-separated path (e.g. 'user.name') inside an object, optionally
// creating missing intermediate objects; returns the owning object and final key.
function findByPath(object, path, createPathIfEmpty) {
if (!path) {
return {
result: object,
path: null
};
}
if (path in object) {
return {
result: object,
path: path
};
}
var parts = path.split('.');
var walker = object;
for(var i = 0; i < parts.length - 1; i++) {
var part = parts[i];
if (!(part in walker)) {
if (!createPathIfEmpty) {
return {
result: walker,
path: null
};
} else {
walker[part] = {};
}
}
if (i < parts.length - 1 && !inspect.isObject(walker[part])) {
var partName = parts.slice(0, i + 1).join('.');
throw new Error('Cannot traverse data in path ' + partName + ' since part of it is not an object.');
}
walker = walker[part];
}
return {
result: walker,
path: parts[parts.length - 1]
};
}
// Holds the bound data and the list of change handlers notified on update().
function DataSource(data) {
this._data = data || {};
this._bound = true;
this.handlers = [];
}
DataSource.prototype.set = setValue$1;
DataSource.prototype.get = getValue$1;
DataSource.prototype.run = run$1;
DataSource.prototype.update = update$1;
DataSource.prototype.unbind = unbind$1;
DataSource.prototype.bindHandler = bindHandler;
DataSource.prototype.unbindHandler = unbindHandler;
function getValue$1(name, defaultValue) {
var item = findByPath(this._data, name);
if (item.path) {
return item.result.hasOwnProperty(item.path) ? item.result[item.path] : defaultValue;
}
return defaultValue;
}
function setValue$1(name, value) {
if (!this._bound) {
throw new Error('Cannot call setValue of unbound instance.');
}
var pathObject = findByPath(this._data, name, true);
if (inspect.isFunction(value)) {
value(pathObject.result[pathObject.path], this._data);
} else {
pathObject.result[pathObject.path] = value;
}
this.update(name);
}
function update$1(name) {
for(var i = 0; i < this.handlers.length; i++) {
this.handlers[i].call(this, name);
}
}
function bindHandler(callback) {
if (this.handlers.indexOf(callback) === -1) {
this.handlers.push(callback);
}
}
function unbindHandler(callback) {
if (this.handlers.indexOf(callback) !== -1) {
this.handlers.splice(this.handlers.indexOf(callback), 1);
}
}
function unbind$1() {
this.handlers = [];
this._bound = false;
}
function run$1(callback) {
callback.call(this, this._data);
this.update();
}
var staticRenderers = {};
// Maps renderer names ('text', 'html', ...) to functions that write a bound value into an element.
function Renderer() {
this.renderers = {};
setupInitialRenderers(this);
}
Renderer.prototype.render = render;
Renderer.prototype.set = setRenderer;
Renderer.prototype.runOnBoundElement = runOnBoundElement;
Renderer.setStaticRenderer = function(name, callback) {
staticRenderers[name] = callback;
};
function render(boundElement) {
var rendererName = boundElement.as;
if (typeof rendererName === "function") {
runOnBoundElement(rendererName, boundElement);
return;
}
if (!(rendererName in this.renderers)) {
throw new Error('Unknown renderer: ' + rendererName);
}
this.renderers[rendererName](boundElement);
}
function setRenderer(name, callback) {
this.renderers[name] = runOnBoundElement.bind(this, callback);
}
function getDefaultRenderers() {
return {
html: function(boundElement, forElement) {
forElement.innerHTML = String(boundElement.getValue());
},
text: function(boundElement, forElement) {
forElement.textContent = String(boundElement.getValue());
}
};
}
function runOnBoundElement(callback, boundElement) {
if (boundElement.multiple) {
for(var i = 0; i < boundElement.element.length; i++) {
callback(boundElement, boundElement.element[i]);
}
} else {
callback(boundElement, boundElement.element);
}
boundElement.onAfterRender.call(boundElement.element, boundElement);
}
function setupInitialRenderers(instance) {
var renderers = getDefaultRenderers();
for(var name in renderers) {
instance.set(name, renderers[name]);
}
for(var name in staticRenderers) {
if (staticRenderers.hasOwnProperty(name)) {
instance.set(name, staticRenderers[name]);
}
}
}
// Entry point: wraps plain data in a DataSource, attaches a Renderer, and exposes bind().
function DataFlow(dataSource, renderer) {
renderer = renderer instanceof Renderer ? renderer : new Renderer();
dataSource = dataSource instanceof DataSource ? dataSource : new DataSource(dataSource);
dataSource.bind = bind.bind(dataSource, renderer, dataSource);
dataSource.renderer = renderer;
return dataSource;
}
DataFlow.bind = bind;
DataFlow.BoundElement = BoundElement;
DataFlow.DataSource = DataSource;
DataFlow.Renderer = Renderer;
window.DataFlow = DataFlow;
}(document, window);
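/* Usage sketch (illustrative; the element id and data are assumptions, not part
 * of the library): given a page containing <p id="greeting"></p>,
 *
 *   var flow = DataFlow({ user: { name: 'Ada' } });
 *   flow.bind({ to: '#greeting', path: 'user.name', as: 'text' });
 *   flow.set('user.name', 'Grace'); // every bound element re-renders
 *
 * set() resolves the dot-separated path via findByPath, then notifies each
 * BoundElement handler, which re-runs the configured renderer ('text' here). */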
//# sourceMappingURL=data-flow.js.map
|
STACK_EDU
|
The Case For Python
Python is an easy-to-understand and easy-to-work scripting language. Python in embedded systems relieves developers from many programming burdens offering some attractive benefits. Here are some of them.
- Automated testing. Python provides developers with automated testing due to its ability to control tools that send and receive information from the embedded system. Moreover, the language provides regression tests that can be used to check the system.
- Data analysis. The ability to receive and store information from the embedded system allows programmers to use Python to develop methods for real-time data visualization or to leave some of that information for later analysis.
- Easy-to-learn. Although it is a relatively new programming language, Python has become widespread due to its readability and comprehensibility. The language lets programmers complete sophisticated tasks without long hours of studying.
- Extensive libraries. Python offers rich libraries with a vast number of modules that significantly ease the process of development making it faster as well.
- Object-oriented. As an object-oriented programming language Python supports classes, inheritance, and objects. It’s easy to import code from any library or to extend the class.
- Unlike C/C++, Python doesn't use pointers, which makes it easier for developers to track down problems in the code.
- As follows from its other benefits, Python is a highly efficient language that makes the development process faster.
When it comes to shortcomings, the main issue that puts Python behind C/C++ is its slow speed. Python is an interpreted language, so the code is executed at runtime, slowing down performance in general. Due to its dynamic nature, Python also needs a lot of memory. What is more, some variables can be changed at runtime, causing unexpected errors.
Despite its cons, Python development services for embedded systems are expected to grow and become a serious contender for C/C++ in the nearest future.
The Case For C/C++
C is a programming language created in 1972 which was extended to C++ in 1985. Both C and C++ are used by such tech giants as Google, Oracle, and Microsoft giving some more trust in these programming languages. Here are some reasons why C/C++ have become so favored in embedded systems.
- Runtime speed. C and C++ are compiled languages. In comparison with Python, they don’t need to interpret the code at runtime. Thus C/C++ programs operate much faster.
- C/C++ code is easy to control, making the languages highly efficient. A C/C++ program built on one machine can run on another without any difficulties.
- By using C/C++ developers can create more compact programs. It is crucial as embedded devices usually have little memory and lower-powered CPUs.
- Object-oriented. Many companies use C/C++ development services as they offer OOP features such as classes and objects, abstraction, inheritance, and a rich-featured library.
Although C/C++ are chosen by a large number of developers, some specialists think that Python can overtake them and take the lead in embedded software development. Here are some C/C++ disadvantages that provoke such ideas.
The most serious drawback that deters developers from using C and C++ is their complexity. Programmers have to meet many challenges while dealing with them. Moreover, these languages can be difficult to learn even for experienced developers. And although C/C++ runtime is much faster than Python's, C/C++ development time lags well behind. What is more, as a result of the languages' complexity, it is also difficult to maintain already-built programs.
How to Improve Python Speed For Embedded Systems
Having considered the main aspects in Python vs C/C++ competition it becomes obvious that if Python achieves better runtime there is a chance for it to overtake such widespread and popular languages as C/C++. There are some options to do that.
- Optimizing extensions. Python provides developers with a big choice of optimizing extensions. One of them is Cython, which compiles the code to C/C++ and runs at their speed. However, using Cython can be complicated, as it requires deep knowledge of C/C++.
- Just-In-Time. A JIT compiler translates Python functions into highly efficient CPU and GPU code, allowing them to run faster. When it comes to using Just-In-Time compilers, it is crucial to have enough space, which is quite limited in embedded systems.
- Improved algorithms. Another way to accelerate Python’s execution is to improve code structure and algorithms. However, it is one of the most complicated ways.
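As a toy illustration of the "improved algorithms" option above (the data here is arbitrary), simply choosing a better data structure often buys more speed than any compiler trick:

```python
# Membership testing is O(n) on a list but O(1) on a set (hash lookup),
# so a hot loop doing lookups benefits greatly from the right structure.
data_list = list(range(10_000))
data_set = set(data_list)

def contains_all(container, items):
    """Return True if every item is found in the container."""
    return all(item in container for item in items)

queries = [1, 500, 9_999]
# identical answers, but the set version does far less work per lookup
assert contains_all(data_list, queries) == contains_all(data_set, queries)
```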
Using Python to Communicate With Embedded Systems
Python can be a mediator between the user and the embedded system. As mentioned before, one of its merits is sending messages to the embedded system and receiving them back. This feature opens up new testing possibilities, since Python can create various configurations and test all the possible scenarios. The data received from the embedded system can later be used for analysis.
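A minimal sketch of that mediator role, using only the standard library. The frame layout, function names, and checksum scheme here are hypothetical, not any particular device's protocol:

```python
import struct

def build_frame(cmd: int, value: int) -> bytes:
    """Pack a hypothetical command frame: 1-byte command id,
    2-byte big-endian value, and a trailing XOR checksum byte."""
    payload = struct.pack(">BH", cmd, value)
    checksum = 0
    for byte in payload:
        checksum ^= byte
    return payload + bytes([checksum])

def parse_frame(frame: bytes) -> tuple:
    """Validate the checksum, then unpack the command and value."""
    checksum = 0
    for byte in frame[:-1]:
        checksum ^= byte
    if checksum != frame[-1]:
        raise ValueError("bad checksum")
    return struct.unpack(">BH", frame[:-1])
```

In a real setup the frames would travel over a serial link (e.g. via pyserial), and a test harness could sweep command/value combinations to exercise every configuration of the target.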
Choosing What’s Right For You
Choosing between two rivals can be difficult. C/C++ provide significantly faster performance while Python stands out with its fast and easy development process. Python attracts developers being a modern and easy-to-learn language while C/C++ have gained popularity due to their consistency and reliability.
When opting for one of them it is necessary to concentrate on the main objectives and preferences of a specific project. A company can benefit from both Python and C/C++ if it focuses on their strengths.
|
OPCFW_CODE
|
We are starting off by making a simple structure to animate. We are going to
make a tank with a swinging turret. Make a box for the base and a cylinder
for the barrel. I have the start of the barrel directly above the centroid
of the box to make things easier.
Click any Selector tool. I find that the Hierarchy Joints menu is sometimes
unavailable if you are not using a Selector. Go into Edit > Hierarchy Joints.
In the Hierarchy Joints Menu hit the "+". Click on the 1 that appears in the menu.
Wait a second and it should go into rename mode. Type in "Base" then click anywhere
in the pop-up except for the button area. This joint will be used to rotate the
model relative to the model center. Since the barrel is going to be rotating
relative to the base, it needs to be in a subsection of the base. Click on the
joint Base if it isn't currently selected. To make things easier on yourself, you
might want to reposition the blue crosshair in the 2d views before making each joint.
Hit the "+". Rename the 2 to Shoulder. Make another joint off Shoulder and rename it
Barrel. The Shoulder is there so that there will be a joint at the start and end of
the limb (barrel). Renamed joints are an asset because they stop you from asking
yourself questions like: Was joint 5 a left arm or was it a right arm? If you
distribute your model, renamed joints are appreciated. Select the box on your model.
Click on "Base" in the Hierarchy Joints menu. One of the buttons in the pop-up
corresponds to attach. It is the sixth button from the left. Click it. The cube is
now known to OpenFX as "Base". Deselect the cube and select the cylinder. Click on
"barrel", then attach. Close the Hierarchy Joints menu.
Go into Edit > Skeleton > Reposition joints. You should see a light blue
square and three circles appear with lines connecting them. If you didn't reposition
the crosshair before making the joints, you may have trouble spotting the circles. All of
them will be where you left the blue crosshair. Press Shift-A to locate any lost
circles, remembering that some circles may be stacked on one point. Move the square
so it is in the middle of the cube. Move the first circle so it is above the square.
Move the next circle to the start of the gun barrel. Move the last circle to the other
end of the gun barrel. You have finished making the skeleton for your tank! This is a
very good point to save your work. You may never be able to get your model straight
again, so this save should be your base, unposed work. Save your model as something
else from here on in.
Go into Edit > Skeleton > Pose or hit Shift+A to go into Pose mode. Your tank
should appear as two boxes now with the skeleton visible in it. Click on the Square.
Dragging will rotate your entire model that you made around the square. Press Tab to
move the yellow axis indicator to a new axis and a new rotation plane. Click on the
first circle. This will rotate the base and everything below it on the hierarchy (like
the barrel). This differs from the square: if you added an extra shape to
your model, the square would rotate it, but the first circle wouldn't. You should note
at this time that the OpenGL Window that displays your model will not be updated into
the new position until you use the pan tool to move one of the other windows around.
(V1.0, possible bug) The final circle rotates the barrel and anything that would be
below it on the hierarchy (nothing on this model is below it). Press tab a few times
and you will see that the axis indicator has three positions on the previous circle and
one on the current circle. This fourth position is the spin axis. Note that
repositioning the joints will cause the rotation axes to move from their current vectors
to other, more confusing vectors.
If you were to add another cylinder to the end of the barrel, you could adapt the barrel
into an arm. Create another joint below barrel in the Joint Hierarchy and attach the new
cylinder to it (If you are smart you may want to move the blue crosshair to the end of
the new cylinder before making the new joint). Reposition the joint if necessary. This
will leave you with an arm with an elbow joint. Try making a second arm on the model as practice.
Have fun making posable models.
Download model here.
Tutorial written by Keith Kelly
|
OPCFW_CODE
|
Date of Award
Doctor of Philosophy (PhD)
Linear dynamical models have served as an analytically tractable approximation for a variety of natural and engineered systems. Recently, such models have been used to describe high-level diffusive interactions in the activation of complex networks, including those in the brain. In this regard, classical tools from control theory, including controllability analysis, have been used to assay the extent to which such networks might respond to their afferent inputs. However, for natural systems such as brain networks, it is not clear whether advantageous control properties necessarily correspond to useful functionality. That is, are systems that are highly controllable (according to certain metrics) also ones that are suited to computational goals such as representing, preserving and categorizing stimuli? This dissertation will introduce analysis methods that link the systems-theoretic properties of linear systems with informational measures that describe these functional characterizations. First, we assess sensitivity of a linear system to input orientation and novelty by deriving a measure of how networks translate input orientation differences into readable state trajectories. Next, we explore the implications of this novelty-sensitivity for endpoint-based input discrimination, wherein stimuli are decoded in terms of their induced representation in the state space. We develop a theoretical framework for the exploration of how networks utilize excess input energy to enhance orientation sensitivity (and thus enhanced discrimination ability). Next, we conduct a theoretical study to reveal how the background or "default" state of a network with linear dynamics allows it to best promote discrimination over a continuum of stimuli. Specifically, we derive a measure, based on the classical notion of a Fisher discriminant, quantifying the extent to which the state of a network encodes information about its afferent inputs. 
This measure provides an information value quantifying the "knowability" of an input based on its projection onto the background state. We subsequently optimize this background state, and characterize both the optimal background and the inputs giving rise to it. Finally, we extend this information-based network analysis to include networks with nonlinear dynamics--specifically, ones involving sigmoidal saturating functions. We employ a quasilinear approximation technique, novel here in terms of its multidimensionality and specific application, to approximate the nonlinear dynamics by scaling a corresponding linear system and biasing by an offset term. A Fisher information-based metric is derived for the quasilinear system, with analytical and numerical results showing that Fisher information is higher for the quasilinear (hence sigmoidal) system than for an "unconstrained" linear system. Interestingly, this relation reverses when the noise is placed outside the sigmoid in the model, supporting conclusions extant in the literature that the relative alignment of the state and noise covariance is predictive of Fisher information. We show that there exists a clear trade-off between informational advantage, as conferred by the presence of sigmoidal nonlinearities, and speed of dynamics.
Zachary Feinstein, Jr-Shin Li, Baranidharan Raman, Shen Zeng
|
OPCFW_CODE
|
Can't Open in new tab (Right click) on ui-sref
Hi All,
I use ui-sref on my project and it works well when I click the element.
But when I right-click on the rows, some options (Open link in new tab, Open link in new window, etc.) don't show up.
I use angular v1.3.0-rc.2 and I have upgraded angular-ui-router to the newest version.
Here is the code snippet:
<tr class="message-item" ng-repeat="message in messages track by message.id" ui-sref="message-detail({id: message.id})">
Any help would be much appreciated. Thank you
ui-sref adds an href attribute to an a element, and then the browser adds the extra options based on that href.
It won't work on a tr tag. You should add an a tag inside the tr.
<tr class="message-item" ng-repeat="message in messages track by message.id">
<a ui-sref="message-detail({id: message.id})">
Other markup in the tr goes here.
</a>
</tr>
Ah. It works. Thank you for your reply :)
@willhartman In my case I have sub-states/routes and ui-sref points to a sub-route, so the href becomes '//my-route', which results in a wrong URL in the new tab.
For example:
<a ui-sref="base.user.route">My Route</a>
when rendered in the browser it becomes:
<a ui-sref="base.user.route" href="//my-route">My Route</a>
State definition in code:
$stateProvider.state('base', {
url: '/:uid',
templateUrl: 'app/base/base.html',
abstract: true
}).state('base.user', {
url: '',
templateUrl: 'app/root/root.html',
abstract: true
}).state('base.user.route', {
url: '/my-route',
templateUrl: 'app/my-route.html'
});
do you have any thought on this?
The URL is constructed from your route definitions; nested states inherit URL segments from their parents.
So -
base: '/:uid'
base.user: ''
base.user.route: '/my-route'
Becomes -
'/:uid/my-route'
But you haven't specified what ':uid' parameter is, so the URL becomes -
'//my-route'
You need to specify what the :uid parameter is -
<a ui-sref="base.user.route({uid: '123456'})">
And then the URL will be generated correctly -
'/123456/my-route'
Ohh.. I see, so I need to pass uid for the routes.
Thanks for the help buddy.
Hi,
I also use ui-sref and it works well when I click the link.
But when I right-click on the link and choose open in new tab, it shows "Object not found" and goes directly to localhost rather than to my project's root folder.
<a ui-sref-active="active" ui-sref="dash.dashpage">Right Click</a>
state in code
.state('dash', {
url: '/dash',
templateUrl : "dashboard.php",
controller: 'dashController'
}).state('dash.dashpage', {
url: '/dashpage',
templateUrl : "dashpage.php",
controller: 'dashController'
})
Any help, please?
|
GITHUB_ARCHIVE
|
Likelihood to Recommend
As described in the use case, it is perfect for backup data storage where you do not expect to retrieve the data often. Think of it as a data dump; it is nice to know you have a backup, but it actually is expensive and somewhat difficult to retrieve everything.
[It's well suited for] duplication of files for transferring to a new machine or remote usage but I wouldn't use it if you needed to keep those places up to date all the time as it only works when you run it, not continuously.
- Cheap storage of backup data.
- Can be used as a part of the entire suite of tools from Amazon, without requiring you to leave the familiar stack.
- Quickly analyzes the drives and compares files and dates to give a complete picture
- Gives you complete control over what to do with files that are similar but may not match (same name, different size or date).
- Accessing data stored in Glacier is slow. That shouldn't be a surprise, but it is undesirable nonetheless.
- Retrieving a large amount of data can be expensive; Glacier's intended use is as an archive of rarely-accessed data.
- Some users regard Glacier with fear and uncertainty. Slow retrieval time and high retrieval cost are the greatest risks of using Glacier, and they are also the Glacier interaction that most users have the least experience with.
Since the rest of our infrastructure is in Amazon AWS, coding for sending data to Glacier just makes sense. The others are great as well, for their specific needs and uses, but having *another* third-party software to manage, be billed for, and learn/utilize can be costly in money and time.
Engineer in Engineering, Computer Software Company, 51-200 employees
SyncToy - this is better at keeping 2 locations identical if they need to be that way at all times as it runs in the background as a service and updates in real time.
Return on Investment
- We seldom need to access our data in Glacier; this means that it is a fraction of the cost of S3, including the infrequent-access storage class.
- Transitioning data to Glacier is managed by AWS. We don't need our engineers to build or maintain log pipelines.
- Configuring lifecycle policies for S3 and Glacier is simple; it takes our engineers very little time, and there is little risk of errant configuration.
- This has saved many hours of work because occasionally the flash drives die, break, fail or get left behind somewhere. The time to get all the files and software back in working order would take a day or two. By keeping several of them identical with GoodSync, work goes on without missing a beat.
|
OPCFW_CODE
|
import numpy as np
import torch
import time
from utils import timeSince
# from tqdm import tqdm
from sim_token_match import OneTokenMatch
def pickElmoForwardLayer(embedding, elmo_layer='avg'):
"""
Given a forward only ELMo embedding vector of size (3, #words, 512), pick up the layer
"""
assert elmo_layer in ['top', 'mid', 'bot', 'avg', 'cat']
if elmo_layer == 'top':
embedding = embedding[2]
elif elmo_layer == 'mid':
embedding = embedding[1]
elif elmo_layer == 'bot':
embedding = embedding[0]
elif elmo_layer == 'avg':
if isinstance(embedding, np.ndarray):
embedding = np.average(embedding, axis=0)
elif isinstance(embedding, torch.Tensor):
embedding = torch.mean(embedding, dim=0)
elif elmo_layer == 'cat':
if isinstance(embedding, np.ndarray):
embedding = np.reshape(embedding.transpose(1, 0, 2),
(-1, embedding.shape[0] * embedding.shape[2])) # concat 3 layers, bottom first
elif isinstance(embedding, torch.Tensor):
embedding = embedding.transpose(0, 1).reshape(-1, embedding.size(0) * embedding.size(2))
return embedding
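# A quick shape check of the layer-selection logic above, using a toy NumPy
# array in place of a real ELMo embedding (illustrative only):

```python
import numpy as np

# Toy stand-in for a forward-only ELMo embedding: (3 layers, 5 words, 512 dims).
emb = np.random.rand(3, 5, 512)

# 'avg': mean over the layer axis -> (#words, 512).
avg = np.average(emb, axis=0)
assert avg.shape == (5, 512)

# 'cat': concatenate the 3 layers per word, bottom layer first -> (#words, 3 * 512).
cat = np.reshape(emb.transpose(1, 0, 2), (-1, emb.shape[0] * emb.shape[2]))
assert cat.shape == (5, 1536)
assert np.allclose(cat[0, :512], emb[0, 0])  # first 512 dims are the bottom layer
```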
def simScoreNext(template_vec,
word_list,
ee,
batch_size=1024,
prevs_state=None,
prevs_align=None,
normalized=True,
elmo_layer='avg'):
"""
Score the next tokens based on sentence level similarity, with previous alignment fixed.
Input:
template_vec: template sentence ELMo vectors.
word_list: a list of next candidate words.
ee: an ``ElmoEmbedderForward`` instance.
batch_size: for ee to use.
prevs_state: previous hidden states.
prevs_align: aligning location for the last word in the sequence.
If provided, monotonicity is required.
normalized: whether to use normalized dot product (cosine similarity) for token similarity calculation.
elmo_layer: ELMo layer to use.
Output:
scores: unsorted one-token similarity scores, torch.Tensor.
indices: matched indices in template_vec for each token, torch.LongTensor.
states: corresponding ELMo forward lstm hidden states, List.
"""
sentences = [[w] for w in word_list]
src_vec = pickElmoForwardLayer(template_vec, elmo_layer)
if prevs_state is None:
assert prevs_align is None, 'Nothing should be passed in when no history.'
# beginning of sentence, the first token
embeddings_and_states = ee.embed_sentences(sentences, add_bos=True, batch_size=batch_size)
else:
# in the middle of sentence, sequential update
# start = time.time()
embeddings_and_states = ee.embed_sentences(sentences, initial_state=prevs_state, batch_size=batch_size)
# print('ELMo embedding: ' + timeSince(start))
embeddings, states = zip(*embeddings_and_states) # this returns two tuples
scores = []
indices = []
print('Calculating similarities ---')
# start = time.time()
embeddings = [pickElmoForwardLayer(vec, elmo_layer) for vec in embeddings]
scores, indices = OneTokenMatch(src_vec, embeddings, normalized=normalized, starting_loc=prevs_align)
# print('Similarities: ' + timeSince(start))
return scores, indices, list(states)
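# ``OneTokenMatch`` is imported from ``sim_token_match`` and not shown here.
# A hypothetical NumPy sketch of the matching step it is assumed to perform
# (cosine scoring with an optional monotonic ``starting_loc`` constraint):

```python
import numpy as np

def one_token_match(src_vec, cand_vecs, normalized=True, starting_loc=None):
    """Hypothetical reimplementation: for each candidate vector, find the
    best-matching template position at or after starting_loc, scored by
    (cosine) similarity."""
    if normalized:
        src = src_vec / np.linalg.norm(src_vec, axis=1, keepdims=True)
        cand = cand_vecs / np.linalg.norm(cand_vecs, axis=1, keepdims=True)
    else:
        src, cand = src_vec, cand_vecs
    sims = cand @ src.T                   # (#candidates, #template_words)
    if starting_loc is not None:
        sims[:, :starting_loc] = -np.inf  # enforce monotonic alignment
    return sims.max(axis=1), sims.argmax(axis=1)

template = np.eye(4)                          # 4 orthogonal "word" vectors
cands = np.array([template[2], template[1]])  # candidates for positions 2 and 1
scores, idx = one_token_match(template, cands, starting_loc=2)
assert list(idx) == [2, 2]  # second candidate is forced to a position >= 2
```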
def simScoreNext_GPT2(template_vec,
word_list,
ge,
bpe2word='last',
prevs_state=None,
prevs_align=None,
normalized=True):
"""
Score the next tokens based on sentence level similarity, with previous alignment fixed.
In particular, this function uses GPT-2 to embed the sentences/candidate words:
- Calculate the embeddings for each candidate word using pre-trained GPT-2 model, given the previous hidden states
- Calculate best alignment positions and similarity scores for each word
Note:
- GPT-2 uses a BPE tokenizer, so each word may be split into several different units
Input:
template_vec (torch.Tensor): template sentence GPT-2 embedding vectors
word_list (list): a list of next candidate words
ge (:class:`GPT2Embedder`): a `GPT2Embedder` object for embedding words using GPT-2
bpe2word (str): how to turn the BPE vectors into word vectors.
'last': last hidden state; 'avg': average hidden state.
prevs_state (list[torch.Tensor]): previous hidden states for the GPT-2 model
prevs_align (int): aligning location for the last word in the sequence.
If provided, monotonicity is required.
normalized (bool): whether to use normalized dot product (cosine similarity) for token similarity calculation
Output:
scores (torch.Tensor): unsorted one-token similarity scores
indices (torch.LongTensor): matched indices in template_vec for each token
states (list): corresponding GPT-2 past internal hidden states
"""
assert bpe2word in ['last', 'avg']
if prevs_state is None:
# beginning of sentence, the first token
assert prevs_align is None, 'Nothing should be passed in when no history.'
add_bos = True
else:
# in the middle of a sentence, sequential update
add_bos = False
embeddings, states = ge.embed_words(word_list, add_bos=add_bos, bpe2word=bpe2word, initial_state=prevs_state)
scores = []
indices = []
print('Calculating similarities ---')
# start = time.time()
scores, indices = OneTokenMatch(template_vec, embeddings, normalized=normalized, starting_loc=prevs_align)
# print('Similarities: ' + timeSince(start))
return scores, indices, states
"""
def simScoreNext_GPT2(template_vec,
bpe_encoding_grouped,
model,
bpe2word='last',
prevs_state=None, prevs_align=None, normalized=True):
'''
Score the next tokens based on sentence level similarity, with previous alignment fixed.
In particular, this function uses GPT-2 to embed the sentences/candidate words:
- Calculate the embeddings for each candidate word using pretrained GPT-2 model, given the previous hidden states
- Calculate best alignment positions and similarity scores for each word
Note:
- GPT-2 uses a BPE tokenizer, so each word may be split into several different units
Input:
template_vec (torch.Tensor): template sentence GPT-2 embedding vectors
word_list (list): a list of next candidate words
prevs_state (list[torch.Tensor]): previous hidden states for the GPT-2 model
tokenizer (pytorch_pretrained_bert.tokenization_gpt2.GPT2Tokenizer): GPT-2 tokenizer
model (pytorch_pretrained_bert.modeling_gpt2.GPT2Model): GPT-2 Model
bpe2word (str): how to turn the BPE vectors into word vectors.
'last': last hidden state; 'avg': average hidden state.
prevs_align (int): aligning location for the last word in the sequence.
If provided, monotonicity is required.
normalized (bool): whether to use normalized dot product (cosine similarity) for token similarity calculation
Output:
scores (torch.Tensor): unsorted one-token similarity scores
indices (torch.LongTensor): matched indices in template_vec for each token
states (list): corresponding GPT-2 hidden states
'''
assert bpe2word in ['last', 'avg']
device = next(model.parameters()).device
model.eval()
if prevs_state is None:
# beginning of sentence, the first token
assert prevs_align is None, 'Nothing should be passed in when no history.'
else:
# in the middle of a sentence, sequential update
assert prevs_state is not None, 'There should be history.'
embeddings = [] # word embeddings
states = [] # hidden states saved for sequential calculations
with torch.no_grad():
for bpe_encoding in bpe_encoding_grouped:
# bpe_encoding is a tensor of bpe unit ids
vec, past = model(bpe_encoding, past=prevs_state)
# vec: size (n, len(bpe_encoding), 768)
# past: a list of length 12, each of size (2, n, 12, len(bpe_encoding), 64)
# which records keys, values for 12 heads in each of the 12 layers
# where n is the number of words of the same len(bpe_encoding) in the word list
if bpe2word == 'last':
embeddings.append(vec[:, -1, :]) # size (n, 768)
elif bpe2word == 'avg':
embeddings.append(vec.mean(dim=1)) # size (n, 768)
else: # impossible
raise ValueError
past = torch.cat(past, dim=0) # size (2 * 12, n, 12, len(bpe_encoding), 64)
past = torch.split(past, 1, dim=1) # list of length n, each of size (2 * 12, 1, 12, len(bpe_encoding), 64)
states += past
embeddings = torch.cat(embeddings, dim=0) # size (#word_list, 768)
states = [torch.chunk(s, 12, dim=0) for s in states]
scores = []
indices = []
print('Calculating similarities ---')
# start = time.time()
scores, indices = OneTokenMatch(template_vec, embeddings, normalized=normalized, starting_loc=prevs_align)
# print('Similarities: ' + timeSince(start))
return scores, indices, states
"""
|
STACK_EDU
|
This online casino game is set up specifically for Canadian players. While it's free and the chips used to play the game have no cash value, we have set it to play with CAD as the virtual currency. States have varying levels of regulation for online blackjack involving any kind of real money. Sweepstakes casinos, where you buy credits to play with, are legal in most states except Washington. You can choose to play free games via an app, which will require a download, but you don't have to. If you'd prefer, you can use mobile casinos in the browser on your phone, or play online through the browser on your desktop.
- Enjoy all the popular free blackjack games here, with no sign-up and no download necessary.
- The machines are programmed to let you win when they want to, or through random sequences.
- It's easy to learn, fast-moving, and sees you square off against a single opponent to achieve a score of 21 or as close to it as possible.
- So, if you like blackjack but just can't seem to win at one particular game, there are plenty more that might be luckier.
Blackjack is beloved by gamblers worldwide because it's easy to learn but difficult to master. Of all the games offered by land-based and online casinos, blackjack has one of the lowest house edges. Players enjoy blackjack for its accessibility, strategic depth, and high win potential. This relative newcomer to online blackjack brings an exciting twist to the game.
Best Online Blackjack
The dealer will reveal their hidden blackjack card and must always hit if they have 16 or lower. The dealer gives you two blackjack cards and shows one of their own. The game is easy to play, but tough to master. If you are really stuck, please check the partner app!
Will I Need to Download a Casino App to Play Blackjack for Free?
However, be sure to read the terms and conditions of such offers, as most casinos don't allow players to use no-deposit bonuses on table games. We know from experience that Canadian players like proper blackjack rules. We want Canadians to use our online blackjack games to practice strategy and get accustomed to playing without worrying too much about losing real money.
Or perhaps you're planning to visit a land-based property and take a seat at a real blackjack table. Even if you know how to play, it's helpful to sneak in a few hands risk-free. Free blackjack is the ideal outlet for pre-casino practice. You can refresh your gameplay with a few hands of the free game before you're ready to bet for real. Like most other free casino games, online blackjack comes in numerous versions.
With 10 Variations of Blackjack & More Than 120 Games, Now Including Live Dealer Casino Games Like Blackjack
Enjoy the best blackjack games from online casinos with us, for free, with no registration or download required. The dealer will then play out their hand following a strict set of rules. If the dealer has 16 or less, they'll always hit. If the dealer has 18 or more, they'll always stand. In some online blackjack games, the dealer will stand on all 17s. In others, the dealer will hit if they have a "soft" 17, one that includes an ace that still counts as 11 points, and stand only on a "hard" 17.
Fortunate Stories Online Casino Review
Please also make sure to check out our page on free pontoon games, and discover where to play Vegas-style blackjack tournaments for free here. After each hand, you can play another round by clicking REBET & DEAL, or adjust your bets. You'll need Adobe Flash Player to use the free blackjack games. Usually, you don't need to register, but this depends on the operator. Golden Nugget Casino, for instance, lets you play for free without registration.
If your first two cards are of the same rank, you may split them into two separate hands. To do this, you must make a second, full bet for each hand. Each hand gets a separate second card and will be played independently. Those hands can potentially be split again, though there is usually a limit on how many times you can split in one hand.
|
OPCFW_CODE
|
using Blockcore.Vault.Storage;
using Microsoft.Data.Sqlite;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Blockcore.Vault.Tests.Storage
{
public class InMemoryDatabaseFactory : IDatabaseConnectionFactory, IDisposable
{
private readonly string dbConnection;
/// <summary>
/// A connection held open for the lifetime of the factory.
/// SQLite in-memory mode keeps the database alive only as long as at least one connection stays open.
/// https://github.com/dotnet/docs/blob/master/samples/snippets/standard/data/sqlite/InMemorySample/Program.cs
/// </summary>
private readonly InMemoryDatabaseConnection connection;
public InMemoryDatabaseFactory()
{
var tmpconn = Guid.NewGuid().ToString();
dbConnection = $"Data Source={tmpconn};Mode=Memory;Cache=Shared";
connection = new InMemoryDatabaseConnection { Connection = new SqliteConnection(this.dbConnection) };
// Keep this connection open so the shared in-memory database
// is not discarded between test operations.
connection.Connection.Open();
}
public bool Persistent => false;
public IDatabaseConnection CreateConnection()
{
return connection;
}
public void Dispose()
{
// Closing the last open connection drops the in-memory database.
this.connection?.Connection?.Dispose();
}
public void SetConnection(string connection)
{
}
}
}
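// The pattern this factory relies on (a shared in-memory SQLite database that
// lives only while at least one connection to it stays open) can be illustrated
// with Python's built-in sqlite3 module; a sketch of the principle, not part of
// this test suite:

```python
import sqlite3

# Shared in-memory SQLite database, analogous to "Mode=Memory;Cache=Shared":
# the database exists only while at least one connection to it stays open.
uri = "file:testdb?mode=memory&cache=shared"

keeper = sqlite3.connect(uri, uri=True)   # long-lived "anchor" connection
keeper.execute("CREATE TABLE t (x INTEGER)")
keeper.execute("INSERT INTO t VALUES (42)")
keeper.commit()

# A second connection to the same URI sees the same database.
other = sqlite3.connect(uri, uri=True)
row = other.execute("SELECT x FROM t").fetchone()
assert row == (42,)

other.close()
# Closing every connection (including the keeper) discards the database.
keeper.close()
```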
|
STACK_EDU
|
CDATA not included in my XML when using xsl
I am currently working on a small project to export data from MS Access into an XML format via VBA. I have a section where I am supposed to add code with the CDATA tag.
However, when I try to implement it, the CDATA part is missing in my code. This is what I've got so far:
Dim doc As New MSXML2.DOMDocument60
Dim rulescript As IXMLDOMElement
Dim code As IXMLDOMElement
Dim cdata As IXMLDOMCDATASection
'Append ruleScript
Set rulescript = doc.createElement("ruleScript")
doc.appendChild rulescript
'Append code
Set code = doc.createElement("code")
rulescript.appendChild code
'Create code and append it as CDATA section
Set cdata = doc.createCDATASection("code")
cdata.Data = "this is a dummy code."
code.appendChild cdata
XSLT:
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" indent="yes"
cdata-section-elements="code" encoding="UTF-8"/>
<xsl:template match="node() | @*">
<xsl:copy>
<xsl:apply-templates select="node() | @*" />
</xsl:copy>
</xsl:template>
</xsl:stylesheet>
And this is what it's supposed to look like:
<ruleScript>
<code><![CDATA[this is a dummy code.]]></code>
</ruleScript>
But unfortunately, it turned out like this, without the CDATA:
<ruleScript>
<code>this is a dummy code.</code>
</ruleScript>
I looked around a lot and could not find my solution, so I would appreciate any kinds of help.
EDIT:
After looking for a while, I realized that it wasn't the implementation of the code that was the problem. The problem is the XSLT I used to save the document:
For some reason, if I only use
Debug.Print doc.XML
it works just fine. I haven't figured out why exactly that is the case.
rootNode is just another node I had created before. I did not realize it may cause confusion. I will edit it
That works for me: Debug.Print doc.XML gives <ruleScript><code><![CDATA[this is a dummy code.]]></code></ruleScript>
Same as Tim, I tried doc.Save and the resulting file is correct too. Do you get the same problem if you just use this block of code?
Oh, it works for me too if I only use the blockcode. I guess it must be something else that isn't working.
I found the problem to why CDATA isn't showing and edited the question.
I'm not familiar with XML (in fact, I hardly use them!) So a quick google gives this. You probably want to update your question title to better reflect your current problem. @Mimi
You need to add a cdata-section-elements attribute that controls XML elements value when there is a need for a CDATA section. It is a space separated list, so you can add as many as needed element names.
Check it out.
<xsl:output method="xml" indent="yes" cdata-section-elements="code" encoding="utf-8" />
I added cdata-section-elements="code" but it still doesn't seem to work.
There is no need anymore to use `Set cdata = doc.createCDATASection("code")` in the VBA. It is the responsibility of the XSLT now.
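For comparison, building the same DOM in Python's standard library also keeps the CDATA section when serializing directly, which mirrors the asker's observation that Debug.Print doc.XML was already correct (a sketch, not the original VBA):

```python
from xml.dom.minidom import Document

# Build <ruleScript><code><![CDATA[...]]></code></ruleScript> as a DOM tree.
doc = Document()
rule = doc.createElement("ruleScript")
doc.appendChild(rule)
code = doc.createElement("code")
rule.appendChild(code)
code.appendChild(doc.createCDATASection("this is a dummy code."))

# Direct serialization of the DOM preserves the CDATA node.
xml = doc.toxml()
assert "<![CDATA[this is a dummy code.]]>" in xml
```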
|
STACK_EXCHANGE
|
Before the ink even dried on my musings about Microsoft.com’s revolving home page, the team turned around and previewed a shiny new home page. Believe it or not, I’m actually giddy to report that this home page actually improves on the few good things the old page provided — and there’s actually some promise lurking beyond the site’s front door.
Aesthetically speaking, I can sum up the latest home page in 3 words.
- Simple | Microsoft.com has gone on a serious image and content diet. Instead of packing the page with a blizzard of small images, only 3 are presented at any one time. The number of links on the page has also gone on a serious diet (not counting the fat footer), and there isn't a piece of content on the page longer than 1 sentence.
- Vibrant | It looks like Microsoft.com has embraced the primary color box theme with photos saturated with a matching color—and it works well. The eye bounces around the page absorbing the kaleidoscope of color throughout the page. Thumbs up.
- White Space | Vast margins of space around links, images, and text make it much easier to read the content and absorb the message. Visitors can finally see the images, find the links, and figure out what the content says.
And then there’s that mega menu
Of course no new home page is complete these days without some type of ode to the mega menu. In Microsoft’s case, simplicity can be taken too far. This new mega menu is unbalanced, boring, and as visually intriguing as a Gideon’s Bible. Oh, and then there’s one more bad thing. Once you use it, it sends you off into another Microsoft microsite that doesn’t even acknowledge the home page (much less the Microsoft.com mothership) exists.
Is there hope beyond the front door?
All the goodness (and disappointment) of the home page aside, I’m more intrigued about what this design shows beyond the front door. Microsoft might actually be pursuing (gasp!) a cohesive Website design strategy. Peek behind the “Windows”, “Windows Phone”, and “Internet Explorer” zones and you’ll see pages with very similar design elements to the home page. Simple layouts–vibrant photos—and lots of white space.
Is this a sign that Microsoft.com may someday morph into a single, cohesive site instead of a cotillion of microsites that play by their own rules? Could Microsoft.com be leaving its schizophrenic design strategy behind to create a solid visual brand? Although it’s still too soon to tell there’s no doubt that we’re seeing some green shoots that suggest this redesign might just be more than just a pretty face slapped over the same dysfunctional Website. Only time will tell.
Check out more blogs about Microsoft.com
|
OPCFW_CODE
|
Datathon 2: House Prices
OIDD 245 Tambe
In this datathon, you will compete in an entry-level, in-class Kaggle competition. You have until the end of class today to achieve the best score you can in this competition, which asks you to predict house prices. Real estate markets are a hot and sometimes controversial application of analytics.
Link to the Kaggle competition on predicting house prices
As a reminder, the general flow of these competitions is to build and refine a model using the provided ‘training’ data set, use the model you create to predict outcomes (e.g. House Sale Prices) on the provided ‘test’ data set, and then upload your predictions to Kaggle for evaluation and scoring. Keep in mind that you often get a limited number of submissions to work with so use them wisely (10 per day for this competition).
As discussed in class, there are two ways to improve a model: you can either use a better tool or improve your predictors. As in many competitions, we have limited knowledge about these predictors. We have not covered in class how to think about combining predictors into a smaller number of features or how to choose which predictors to keep in the model and which to omit, but do the best you can. You are not limited to using linear models. Much of the point of this exercise is simply to get a feel for some of the day-to-day challenges involved in data science tasks, and for the blend of art and science that goes into developing solutions to these problems.
In terms of what you can do to improve your model, you can:
- Try to make informed guesses on which variables will generate the best model.
- Try to use better models (e.g. classification trees, random forest, etc.).
- Transform data or clean up missing data.
- Combine or transform variables to create new variables.
If required, more information about the individual data fields is available here.
The deadline is at the end of class today.
There will be no presentations or voting this time. The winners will be the team with the lowest score (i.e. Private Leaderboard score) by the deadline.
To get you started
Here is some R code to get you started. The code below runs a simple but workable linear-regression-based prediction using one of the independent variables and creates a submission file that can be uploaded to Kaggle.
You can cut and paste this into an R-script and change the file paths as needed to get started. Remember that you should iteratively improve a model on your training data before making a submission.
library(readr)

# Step 1: Read in data
train = read_csv("~/Downloads/oidd245housinga/ames_train.csv") # train
test = read_csv("~/Downloads/oidd245housinga/ames_test.csv") # test

# Step 2: Try a basic linear regression model based on some variables
hp = lm(SalePrice ~ `LotArea`, data=train)

# Step 3: Predict on the test data
pred = predict(hp, newdata = test)

# Step 4: Output for uploading to Kaggle
output = data.frame(cbind(as.character(test$Id), pred))
colnames(output) = c("Id", "SalePrice")

# Use the output csv and submit to Kaggle
write_csv(output, "~/Desktop/lm_submission.csv")
Finally, submit the file you produced to Kaggle and you should receive a score and a position on the Leader board. Your goal is to improve the model in Step 2, either by using a different model or an alternative set of predictor variables. You are not restricted to linear models. Models of any type are acceptable.
Rather than upload predictions to Kaggle after every change of your model, a good strategy is to divide up your training data into a fabricated training and test portion (e.g. 70:30) and then to modify and test the performance of new models using those two data sets on your laptop. The performance of your model can be assessed by computing the mean square error of your predictions, which is the evaluation metric used for this competition.
There are a number of mean square error metrics available in R packages or you can just write the code yourself, e.g. if you were trying to compute the mean square prediction error for the training data, it might look something like this:
library(dplyr)

train$predict = predict(hp)
performance = train %>%
  mutate(diff_sq = (predict - SalePrice)^2) %>%
  summarise(mean(diff_sq))
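The same holdout idea can be sketched in Python, with synthetic data standing in for the Ames training set (the variable names and noise model here are illustrative assumptions, not the real data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the training data: price roughly linear in lot area.
lot_area = rng.uniform(5_000, 20_000, size=1_000)
sale_price = 50_000 + 12.5 * lot_area + rng.normal(0, 10_000, size=1_000)

# Fabricated 70:30 train/validation split of the "training" data.
idx = rng.permutation(len(lot_area))
cut = int(0.7 * len(idx))
tr, va = idx[:cut], idx[cut:]

# Fit a one-variable linear model on the 70% portion.
slope, intercept = np.polyfit(lot_area[tr], sale_price[tr], deg=1)

# Evaluate on the held-out 30% with mean squared error, the competition metric.
pred = intercept + slope * lot_area[va]
mse = np.mean((pred - sale_price[va]) ** 2)
```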
When you have sufficiently improved a model on your computer, you can run it against the true test sample and determine performance by uploading the predictions to Kaggle.
|
OPCFW_CODE
|