Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
Fork and Branch Git workflow
Fork and Branch Workflow¶
In this workflow type, contributors fork the main repository to their own GitHub account, create feature branches for their work, and then submit contributions via pull requests from these branches.
This Gemstone walks through how to set up a local repository to contribute to a GitHub project. It starts with the initial project forking, setting up a local and remote repository, committing changes, and creating a pull request (PR) to submit your contributions.
- A GitHub account.
GitHub CLI (gh)installed on your system.
- A personal fork of the project on GitHub.
- If it doesn't already exist, create a fork of the project using the gh utility. Type:
gh repo fork rocky-linux/documentation --clone=true --remote=true
The options used in this gh repo fork command are:
--clone=true: Clones the forked repository to your local machine.
--remote=true: Adds the original repository as a remote, allowing you to sync future updates.
- Navigate to the local repository directory. Type:
- Verify that all the relevant remote repos have been properly configured in your local repo, type:
git remote -vv
- Fetch the latest changes from the upstream remote:
git fetch upstream
- Create and checkout a new feature branch named your-feature-branch:
git checkout -b your-feature-branch
- Make changes, add new files, and commit your changes to your local repo:
git add .
git commit -m "Your commit message"
- Sync with the main branch of the remote repo named
git pull upstream main
- Push changes to your Fork**:
git push origin your-feature-branch
- Finally, create a Pull Request (PR) using the
gh pr create --base main --head your-feature-branch --title "Your PR Title" --body "Description of your changes"
The options used in this gh pr create command are:
--base main: Specifies the base branch in the upstream repository where the changes will be merged.
--head your-feature-branch: Indicates the head branch from your fork that contains the changes.
--title "Your PR Title": Sets the title for the pull request.
--body "Description of your changes": Provides a detailed description of the changes in the pull request.
The Fork and Branch workflow is another common collaboration technique. The high-level steps involved are:
- Fork the Repository: Create a personal copy of the project's repository on your GitHub account.
- Clone the Fork: Clone your fork to your local machine for development work.
- Set Upstream Remote: To stay updated with its changes, add the original project repository as an 'upstream' remote.
- Create a Feature Branch: Create a new branch from the updated main branch for each new feature or fix. Branch names should describe the feature or fix.
- Commit Changes: Make your changes and commit them with clear and concise commit messages.
- Sync with Upstream: Regularly sync your fork and feature branch with the upstream main branch to incorporate new changes and reduce merge conflicts.
- Create a Pull Request (PR): Push your feature branch to your fork on GitHub and open a PR against the main project. Your PR should clearly describe the changes and link to any relevant issues.
- Respond to Feedback: Collaborate on review feedback until the PR is merged or closed.
- Isolates development work to specific branches, keeping the main branch clean.
- It makes it easier to review and integrate changes.
- Reduces the risk of conflicts with the main project’s evolving codebase.
Author: Wale Soyinka
Contributors: Ganna Zhyrnova
|
OPCFW_CODE
|
Download and extract the zip-file from here.
Copy the folder "antonisschley" and paste it into the content-folder of your UE4-project.
You should now have the tool in your content library in the UE4-editor.
PLEASE NOTE: THIS IS AN EXPERIMENTAL TOOL. YOU SHOULD NOT USE IT ON SENSITIVE/VALUABLE DATA. IF YOU DO SO, IT IS AT YOUR OWN RISK.
Click the right mouse button on the Editor Utility Widget "Orbit_Cam_[version]" and select the top option Run Editor Utility Widget.
This will open up the interface and switches your standard editor viewport camera to a temporary camera for the session.
This camera is meant for viewpoint setup and is not providing the full Orbit Cam functionality you get during gameplay.
Fly to your desired perspective and setup the camera settings for your viewpoint. Hit "Make Viewpoint". By that, a Pwn_OrbitCam actor will spawn and the viewpoint will be added to your collection in the interface.
When finished setting up your viewpoints, you can make some general settings for Gameplay:
Should the OrbitCam system be activated for Gameplay, Input-Devices and the motionspeed. By default, The OrbitCam and both input-methods are activated.
You don't need to make any settings to your project or level in order to get OrbitCam running and receiving inputs during Gameplay. Its all being done for you.
If there is already a player start and some gameplay settings in your level i would suggest you to deactivate or delete them, as they might interfere.
During gameplay you can freely navigate through your level and fly to your viewpoints. Therefore you can either use the hotkeys you defined before or use the shoulder buttons on your gamepad. Using the shoulder buttons will go through your viewpoint according to the order of your collection.
2.1 Viewpoint functions:
These functions already have effect in the editor before Begin Play. They mainly control the temporary work camera during the usage of Orbit Cam Widget. No permanent camera in your level will be changed.
spawns an Orbit Cam pawn in your level with the camera set up according to your choices. Per default the point of interest will always be on the geometry in the center of your viewport. It also adds a viewpoint to your viewpoint collection in the Orbit Cam Widget. After the first Viewpoint is created, gameplay only functions will be activated. Using the orbit navigation of the pawn is only possible during Gameplay.
will focus on the center of your viewpoint. Focus distance can still be manually corrected afterwards.
affects the work camera in the editor as well as the camera pawn later during gameplay. Autofocus is a general setting and will deactivate any custom viewpoint focus distance while active itself. Focuspoint is always the center of your viewpoint.
Set point of interest
will place a marker on the geometry in the center of your viewpoint. Nothing further happens until you make a new viewpoint. Doing so will make the marker the point of interest of the freshly made viewpoint no matter of the relation to the camera.
If you click again while the button is active, the marker will be removed.
in F-Stops. Minimum is 0,1. Maximum is 22. When Auto exposure is turned off, this will have an effect on the brightness of your scene.
in mm. Will update permanently when Autofocus is active.
in Degree. This will rotate the camera around its viewing axis.
is only available for changes when a post process volume is present in the level. You can choose between the default two automatic methods of auto exposure and manual exposure. This setting will stay with your post process volume, which means, that your scene will probably look totally different with the editors default camera.
is only available for changes when a post process volume is present in the level. You can finetune the brightness of your scene. This setting will stay with your post process volume, which means, that your scene will probably look totally different with the editors default camera.
2.2 Behavior functions:
These functions have no effect in the editor, but only during Gameplay. Therefore, these functions are turned off as long as there is no viewpoint available.
Orbit Cam is on!/Orbit Cam is off!
When turned on,
Orbit Cam will automatically assign an Orbit Camera pawn to the player at Begin Play. No Player start or any other setting is needed. It may lead to unforseen effects with custom player/pawn settings made to the level.
When turned off,
no Orbit Cam related actor will spawn to the game.
Mouse only/Gamepad only
I prefer gamepad input for rotational cameras much more than mouse.
Movement is much smoother and consistent. Both input methods are supported though.
For error minimization during presentation you can limit input to only one device.
determines the default general speed of all camera movements. It is a value without unit. You can still change it during gameplay (non-permanent).
2.3 Viewpoint collection:
Your viewpoints will be listed in a tile view collection on the right side. Every viewpoint comes with the following functions:
Go toWhen clicking on the big center button, the workcamera will go to the according viewpoint and applies the same settings. It will either fly to your viewpoint or jump to it according to your setting in "blending time". In the lower area of the button you can rename your viewpoint, this will also change the name of the camera in your level.
Updatewill transfer all attributes of the workcamera to the according viewpoint.
Drag & DropClicking and holding a viewpoint by the grip-area lets you rearange your viewpoints. This is especially important for gamepad navigation, because the order of your viewpoints in your collection will be the order to go through with the shoulder buttons on your gamepad.
Hotkeylets you define a keyboard key to initiate the switch to the according viewpoint. At the moment only single-key hotkeys are possible. In order to set a hotkey you click on the button. The interface will then wait for your keyboard input and use this as hotkey.
Blending timesets the duration of the animation to the according viewpoint. Setting this value below 0,1 will lead to an instant camera switch.
3.1 Mouse control:
3.2 Gamepad control:
|
OPCFW_CODE
|
PPTP uses TCP port 1723 to establish a connection.. If you have a firewall active on your computer or if your ISP blocks port 1723 then you'll not be able to connect to our VPN service.. Here is how to check if PPTP VPN port is open on your PC to be able to access our servers. The tutorial explains how to use PortQueryUI, a tool used to check the availability of different ports.
VPN protocols and which is the best to use | TechRadar May 19, 2020 5 Best PPTP VPNs- Best Protocol For Online Streaming! PPTP VPN Speed Test. There are a few online sources through which you can find out the speed of your PPTP protocol. Speedtest by Ookla leads the way with a couple of other good options including Verizon. Here are the URLs. Speed Test; Verizon Speed Test; Conclusion.
ping 18.104.22.168 (one of Google public DNS servers always a good network connectivity test) ping 10.10.1.20 (an server in my internal network, needs to go through the VPN to get to it) ping -f 22.214.171.124 (the public IP address of the router where I am experiencing problems)
Outgoing VPN PPTP: How to check if TCP port 1723 and GRE PPTP uses GRE, but L2TP/IPSec and SSTP don't. However, the VPN server will need to support them, and it will need a digital certificate; you will also need a certificate on your computer for L2TP/IPSec. It looks as if "BestUKVPN" only support PPTP, so you'll need to decide whether it's more work to change your router or your server. Which is the Best VPN Protocol? PPTP vs. OpenVPN vs. L2TP
I've read a bunch of posts on this but haven't seemed to find anything that helps. My VPN PPTP seems to work fine but the speed is not good. My connection is 50mb up and 6mb down. I'm connecting a client using Microsoft Windows VPN client. The speedtest while connected through vpn is …
Dec 11, 2018 Whats My IP Address | Private Internet Access VPN Service This is how it works without a VPN IP address: When you visit a website, your ISP makes a connection request on your behalf with the destination, but uses your true IP address. In this process, your public IP address is revealed. This is how it works with a VPN and a "fake" IP address. With a VPN server, your online requests are rerouted.
|
OPCFW_CODE
|
Why there is a difference in time for x axis if I zoom into the Plot
I am working with python pandas and reading a csv file with several columns. Also included in the files are several time columns. I drop them out and select only one column as time column. I define this column as index and also set it as time.
Next is: I would like to plot a specific column to get a first impression.
In the data I see that the values for this column drop from 600 to zero at 10:42, see image:
If I plot the column, I get the following image
However, if I zoom in, I get the following
As it can be seen there is a huge difference in the images. Something seems to go wrong.
I have the following code
data_304=pd.read_csv(r"data.csv",sep=";")
data_304=data_304.drop(["columns_to_drop"],axis=1)
data_304['date']=pd.to_datetime(data_304['date'])
data_304=data_304.set_index('piovan_1_dosing_creation_date')
data_304.index=data_304.index.map(lambda t: t.strftime('%Y-%m-%d %H:%M'))
data_304["piovan_1_dosing_batchvalue_value"].plot()
I assume that this might be due to the time zone, but I didnt find a proper way.
I would be very grateful if you could give me a tip :)
Interesting problem. But can you provide a [mcve], such that people have a chance to investigate?
How do you zoom in? With the matplotlib interactive zoom?
As far as I see it, there are no differences between the two plots, except for the horizontal scales, which is a result of your 'zooming in'.
Yes indeed I use matplotlib interactive zoom
There is the difference that the dip is at a different time..
The minimal code is the one I already postet. I just deleted the paths and columns which I don't need for the further analytics
The example is minimal, but not complete. Ask yourself: how can someone else reproduce the problem on his/her computer?
Oh, your index are strings. That explains it. That's more or less like asking where "yesterday" would be on the axis.
By applying pd.to_datatime I set it as time. The type of the column is the time specific type datetime[ns]. I will upload the data sheet and the entire code on Monday :)
@Importance: I think I understood your comment about string. I just had a quick look on the strftime() command and realized that my time is transformed into a string, which will probably lead to this issue
first of all let me thank you for your help, I really appreciate it :)
@ImportanceOfBeingErnest: You were right. The issue with zoom in and zoom out is due to the fact that I converted my index which is type of dtype='datetime64[ns]' to string using the line
data_304.index=data_304.index.map(lambda t: t.strftime('%Y-%m-%d %H:%M'))
Since my index goes up to milliseconds but I am not interested in milliseconds rather than seconds I looked for a way to just show seconds. And the line mentioned abve using the strftime() was the solution I found. Unfortunately, I was not smart enough to check what happens in the background and couldn't find the answer to my problem described in the first mail.
After a bit searching and focussing I found the following information which helps me to set the milliseconds to zero and still keep the type dtype='datetime64[ns]', which is done by
data_304.index=data_304.index.values.astype('<M8[m]')
Applying this to my data changes the index time appearance from
2019-04-24 05:41:13.809000 to 2019-04-24 05:41:00
As mentioned the type remains datetime64[ns]
Maybe this helps you guys if you want to keep type and truncate the index
Cheers and thanks again :)
|
STACK_EXCHANGE
|
How can I obtain a singleton from a parent class
How can I obtain a singleton from a parent class
GameManager inherits Singleton, how to call instance of parent class and return a singleton of GameManager
This is the function signature generated by Il2CppDumper
I want to get the GameManager in Singleton pattern, but this does not seem to work. m_ Get_ Instance is a null pointer
This is the correct C sharp code
This is a simple little game
get function:
auto get_instance = IL2CPP::Class::Utils::GetMethodPointer("full class name", "function name");
call it:
get_instance();
static classes/functions does not need thisptr.
you look like new to il2cpp games, i wont spondfeed anything, you need understand basics of il2cpp before start using this lib.
I have obtained get_ The address of the instance, but it seems that all classes point to this address. I called this function and the game crashed
It looks like this field
check GameAssembly.dll in ida , for static function, it will always fill itself ecx/rcx(thisptr), check function in ida youll see how it works, if you still keeping your point, i couldnt help you.
read and learn basic reverse engineer skills could help you understanding what is happening to game.
this issue is closed due to not related library issue.
All the questions here cannot be solved in any way. When you are not familiar with il2cpp, do not try to ask any questions here. You only need to add two APIs to solve the problem, "il2cpp_field_static_get_value", "il2cpp_class_get_parent", @sneakyevil Please try adding them to the framework,which can obtain static fields in the class and static fields in the parent class
The obtained results are identical to CE
By the way, what is parentGameManagerClass
you literally just being offensive here, for what? begging for spoonfeed failed? you literally using inspector and trying to understand how does it works , meanwhile i already told you in previous comment go check the code in ida you will understand, you ignored it and still trying to go with your way, please, next time learn deeply before telling anyone that they are noob. owner will not help with offensive people.
took 5 sec google for you:
https://stackoverflow.com/questions/4124102/whats-a-static-method-in-c
if you still dont understand and keep offensive, stop following this issue and messaging anyone and stop using the library because "we are noob".
You don't understand someone else's problem, and you don't know how to solve it, so you don't realize that they have already solved the initial problem. My suggestion is to remain silent rather than irritable about issues you don't understand, so that no one knows you are a clown.
@extremeblackliu
i can see how you mad, you are now offensive and cringe here, for anyone who ever trying to read this issue, let me reveal the truth, the dude who created this issue literally alt account(first cringe point), and i trying to help with him in a comment for firstly know his problem, the dude wanna me help him outside of this issue, he trying to write me on bilibili's private message begging me for help, i didnt give fuck, i told him if you wanna me help you 1 by 1 or spoodfeed its okay, pay me for my time, he didnt makes response there since he know that i accept paid only, and now he raging(crying) on here for not helping him. i closed this issue not for my personally that if he didnt paid me or something, its because its not related to library issue, this issue is same in perivous issues there is dude begging for help and owner didnt wanna help with such problem(again not related to library), beside me help with people, but this guy dont wanna hear anything besides actual code, i do have access of the repo, i closed this due to that, the guy didnt wanna hear what i told him to do and ignored all, also provided 0 information to let anyone help him. i can know how sadly truth how people failed and cant do shit just screaming around issue to begging anyone helping, important is that he doesnt even wanna spend the time to go learn those things instead than tryharding write me garbage message. also i didnt mention your improve is pointless, still my point static function fills itself the thisptr, you just wasting more code on it have fun.
|
GITHUB_ARCHIVE
|
Are you looking for how to implement ChatGPT in Azure? Search no further, we will provide you with the easy-to-follow guide to set up GPT-35-Turbo and GPT-4 with Azure.
ChatGPT is an AI language model that can generate text based on prompts or questions. Azure OpenAI is a cloud-based platform that provides access to a variety of AI services, including ChatGPT. By using ChatGPT on Azure OpenAI portal, you can generate high-quality text for a variety of purposes, such as writing articles, creating social media content, or learning more about a specific topic. In this guide, we will walk you through the steps for using ChatGPT on Azure OpenAI, from creating an account to refining your output.
Let’s get started!
How to use ChatGPT in Azure OpenAI Service
To use ChatGPT on Azure OpenAI, you can follow the steps below basically for Microsoft Azure Studio.
Create an Azure OpenAI account
If you do not already have an Azure OpenAI account, you can sign up for one on the Azure website https://azure.microsoft.com/en-gb/free/cognitive-services/?azure-portal=true.
Create a ChatGPT resource
Once you have an Azure OpenAI account, you can create a ChatGPT resource by following the instructions provided in the Azure portal and on the Azure OpenAI studio https://oai.azure.com/ click on ChatGPT Playground (preview)
Access the ChatGPT API
Once you have created a ChatGPT resource, you can access the ChatGPT API by using the endpoint URL and API key provided in the Azure portal.
Input your prompts
Once you have access to the ChatGPT API, you can input your prompts or questions to generate output.
Depending on the quality of the output, you may need to refine it to make it more coherent or accurate. You can do this by editing the output or providing additional prompts to ChatGPT.
What is the difference between Azure OpenAI and ChatGPT?
Azure OpenAI and ChatGPT are related but distinct concepts in the field of artificial intelligence.
Azure OpenAI is a cloud-based platform that provides access to a variety of AI services, including natural language processing, computer vision, and machine learning. It is designed to help developers and organizations build and deploy AI-powered applications and services in the cloud. Azure OpenAI provides a range of APIs and tools that allow developers to integrate AI capabilities into their applications without needing to build and train their own models from scratch.
ChatGPT, on the other hand, is an AI language model developed by OpenAI that is designed to generate text based on prompts or questions. It is a specific application of natural language processing that uses deep learning algorithms to analyze and generate human-like text.
How are ChatGPT OpenAI and Azure OpenAI related?
Interestingly, both ChatGPT and Azure OpenAI are developed by OpenAI.
Does ChatGPT run on Azure?
Yes, ChatGPT is now available on Azure for users to implement and deploy the functionalities of OpenAI ChatGPT on their applications.
If you find this guide helpful, kindly share it with your friends on social media.
|
OPCFW_CODE
|
delegate(); proposed observer-modification for witholding events/transforming to any type
I've struggled to find the equivalent to map(), but where it's feasible to omit a value from the downstream, dispatch an error into the downstream or end it. It may be that there's a way to make map() or some other method() do this, but I can't find any documentation about it.
I find I often am doing a map() then a filter() for example, where both are duplicating the same unmarshalling work, but one is modifying the events, and the other is witholding selected events, which seems messy. In neither case can you introduce an error or an end. An alternative is to create a Kefir.stream(...) as described below but this is also quite messy for what I would expect is a common sort of operation.
Have I overlooked an obvious way to do this using a minimal transform from the api? Here's some pseudo-code for what I intend...
var modifiedStream = Kefir.fromEvents(nodeServer, "eventType").delegate(function(value, emitter){
var type = typeof value;
if(type==="string"){
emitter.emit(value);
}
else if(value===null){
emitter.end();
}
else{
emitter.error("Unexpected type: " + type);
}
});
It might turn out more or less the same as the following from a behaviour/implementation point of view.
var sourceStream = Kefir.fromEvents(nodeServer, "eventType");
var modifiedStream = Kefir.stream(function(emitter){
var handler = function(value){
var type = typeof value;
if(type==="string"){
emitter.emit(value);
}
else if(value===null){
emitter.end();
}
else{
emitter.error("Unexpected type: " + type);
}
};
sourceStream.onValue(handler);
return function(){
sourceStream.offValue(handler);
}
})
Having an API implementation of this would avoid some closures and unnecessary name-pollution for what would otherwise be an anonymous source stream with a transparently-subscribed and unsubscribed handler. For many simpler cases it would be significantly more terse.
Having an API implementation would also ensure sufficiently smart intervention for triggering unsubscription from the source stream in case of the modified stream sending an end (I don't know if unsubscribe is automatically triggered in this case and unsubscription would certainly be needed here to prevent memory leaks).
Hey!
Didn't look closely yet, but maybe .withHandler is what you need? Except it pushes all events to the handler (not only values, but also errors and end).
Another approach is to use .takeWhile + .flatMap:
var modifiedStream = Kefir.fromEvents(nodeServer, "eventType")
.takeWhile(function(value){
return value !== null;
})
.flatMap(function(value){
var type = typeof value;
if(type==="string"){
return Kefir.constant(value);
}
return Kefir.constantError("Unexpected type: " + type);
});
I'd probably do it this way.
I haven't yet used withHandler, so that's a very useful pointer and pretty close to what I want from a callback point of view so thanks for the suggestion.
I am guessing this eliminates the complexity of having to manage unsubscription (e.g. source unsubscription is automatic when an emitter.end() is sent downstream?).
However perhaps more importantly for every case I've needed it so far, it's an onValue which I want to respond to, so I fear withHandler is going to have a lot of unmarshalling in the handler function which will be verbose and error prone if I have to reimplement this every time.
If I was to use withHandler to achieve the API signature and minimal object allocation I was hoping for without embedding it in the handler function, I'd need a filter() to eliminate any error or end, as well as a map() to unmarshall values from the onAny format to get to the the API simplicity of the 'delegate' pseudo-code I shared, which (perhaps not surprisingly) exactly the problem I started with.
I wondered about using flatMap in the way you showed, but I was concerned about the allocation overhead, hence the interest in an API-native routing through a minimal handler function which is triggered only on values.
"delegate" is a terrible name, perhaps withValueHandler(emitter, value) captures better what I was hoping for, suggesting that an error or end from the source stream is passed through untouched by the transform? Going down that route, it would be feasible to have a withErrorHandler(emitter, value) and a withEndHandler(emitter, value) for symmetry.
I appreciate the implementation you've shared for the case I mentioned, but it was more of an illustration of the kind of signature I aspire to than a real case. It's good to see elegant answers like this as it helps me reflect on my own use of Kefir.
In the short term I think I might follow this flatMap() and .constantX() strategy as you showed. It's better than my current alternative, is terse and functional, and I think covers the immediate cases I need, although I feel the withValueHandler approach is likely to be more efficient and powerful for a number of cases (e.g. marshalling more than one event arising from a single source event possibly embedded in different logic blocks is a challenge through flatMap).
source unsubscription is automatic when an emitter.end() is sent downstream?
Yeah, it automatic in .withHandler, and more general when a stream ends it always unsubscribes from all sources.
I wondered about using flatMap in the way you showed, but I was concerned about the allocation overhead
I wouldn't worry about that. Kefir is very optimized for .flatMap(x => Kefir.constat*(...)) pattern, it almost as fast as .map(f) (about ~1.5 times slower according to the benchmark).
I appreciate the implementation you've shared for the case I mentioned, but it was more of an illustration of the kind of signature I aspire to than a real case.
Right, but we need real cases when such method would be useful to consider adding it.
I also more like flatMap-like solutions for this kind of problems, because they are closer to declarative/functional style, while withHandler / delegate / manual calling emitter methods from an onValue callback — is more imperative. I think this stuff should be like the last resort, when flatMap and others doesn't work.
|
GITHUB_ARCHIVE
|
The reality is, the biggest franchises in the current video game space are so-called first-person shooters what comes to genre specification. The franchise that most likely comes to everyone’s mind is the Call of Duty series that is nowadays very common to label as the lowest common denominator of interactive entertainment due to its popularity. Personally, I try my best to avoid seeing artifacts through their social stigmas, and I think it’s quite ridiculous sometimes how far some people go in order to make themselves appear superior by bashing popular pieces of entertainment or art.
Anyhow, it’s not a coincidence that top selling video games are more often than not about shooting people with firearms: people generally like shooting. And one doesn’t have to have real-life experience of an actual combat rifle to recognize that holding and using one is, in a way, an ultimate power trip, as in utter dominance over others. Moreover, the fact that we have this established, massively popular genre known as first-person shooter is indeed telling that the act of shooting is a central theme particularly in first-person games in general.
In fact, there really aren’t any other major genres with the first-person prefix, even though there perhaps should be. It seems that a game to qualify as a first-person one needs to include not only a first-person view (most simulators are depicted from first-person), but also offer a certain level of freedom for the player to wander around the 3D space as a person. Therefore, I guess, Doom (1993) was considered as a first-person game but Microsoft Flight Simulators aren’t. Many times I wish there was a flight simulator or a racing game that incorporated meaningfully such a freedom into the gameplay, but it’s always about guns, guns, guns.
So, shooting people is such a profound way of interacting with the virtual from the first-person point of view that it feels strange and out of place when an AAA game with said perspective comes along that involves barely use of weaponry, like Mirror’s Edge (2008). Mirror’s Edge was based on finding a right path to come over the obstacles, keeping the momentum going, and avoiding the enemy fire at the same time. However, occasionally the player got a hold of a gun and could fire back, which made the shooting feel that much more special and meaningful, if you will. Now the weapon wasn’t a fundamental part of the player’s character like in most first-person games, but a luxurious object that one kind of cherished and which radiated genuine authority.
This all comes down to the fact that I find it highly fascinating when a first-person game (=shooter) introduces functionality that’s not directly connected to the core ethos of the game, a fascination which dates back to Duke Nukem 3D (1996) that famously contained all kinds of extra stuff to play with. What’s amusing, then, is that in the case of Mirror’s Edge, that functionality was indeed shooting. Also, I remember how exciting it was to be able to drive civilian cars in the original Operation Flashpoint (2001), which had little to do with the actual militaristic gameplay, but which transformed the game as a whole into something much cooler, even if being quite cool to begin with.
I’m not saying shooting isn’t necessarily enough for a game like Call of Duty. I’m saying first-person games should aim a bit higher than being mere shooters in terms of functionality. Crysis (2007) was an ambitious endeavor into that direction in that the player could pick up and hold almost any object, not only a gun (the system is unparalleled even today), and drive around freely with vehicles, military and civilian.
The first-person view is not an artistic statement, but the most natural and obvious way of portraying the virtual, and it frustrates me that the most prevalent first-person genre is tagged with such a specific and limiting term as shooter. At the end of the day, I guess, I want genuine first-person Grand Theft Auto -esque games that deliver on-par experiences in all fronts. Please.
|
OPCFW_CODE
|
As of today, you can create and deploy a Data Union using the tooling available in Streamr Core. The Data Union framework, now released in public beta, is an implementation of data crowdselling. By integrating into the framework, app developers can empower their users to monetise the real-time data they create. Data from participating users is sent to a stream on the Streamr Network, and access to the pooled data is sold as a product on the Streamr Marketplace. Any revenue from the data product is automatically shared among the Data Union members and distributed as DATA tokens.
Streamr launched the Data Union framework into private beta in October last year, with the Swash app at Mozfest in London. Swash is the world’s first Data Union, a browser extension that allows individual users to monetise their browsing habits. With this public beta launch, we hope to spark the development of even more Data Unions.
What’s new in the public beta release?
If you’ve used Streamr Core before, you might already be familiar with creating products on the Marketplace. With the introduction of the Data Union framework, the ‘Create a Product’ flow now presents two options: create a regular Data Product, or create a Data Union.
Data Unions are quite similar to a regular data product ― they have a name, description, a set of streams that belong to the product, and so on. However, there is one important difference; the beneficiary address that receives the tokens from purchases is not the product owner’s wallet ― instead it is a smart contract that acts as an entry point to the revenue sharing.
The Marketplace user interface guides the user through the process of creating a Data Union and deploying the related smart contract. The Data Union can function even while the product is in a ‘draft’ state, meaning that app developers can test and grow their Data Unions in private, and only publish the products onto the Marketplace once a reasonable member count has been achieved. For the app developer/Product Owner, there are also new controls for: setting the Admin Fee percentage (a cut retained by the app developer/Product Owner), creating App Secrets to control who can automatically join your Data Union, and managing the members of your Data Union.
For all published Data Unions, basic stats about the Data Union are displayed to potential buyers on the product’s page.
Deploying a Data Union
The process of creating Data Unions and integrating apps with them is now described in the relevant section of the Docs library. Here’s the process in a nutshell:
- Make sure you have MetaMask installed, and choose the Ethereum account you want to use to admin the Data Union
- Authenticate to Streamr with that account (creates a new Streamr user), or connect that account to your existing profile
- Create one or more streams you’ll collect the data into
- Go to the Marketplace, click Create a Product flow, choose Data Union
- Fill in the information for the product and select the stream(s) you created
- Click the Continue button to save the product and deploy the Data Union smart contract!
- Generate and store a private key for the user locally
- Make an API call to send a join request (include an App Secret to have it accepted automatically)
- Start publishing data into the stream(s) in the Data Union!
Again, detailed integration instructions are available in the Docs.
So what’s next?
The public beta is feature-complete in the sense that all the basic building blocks are now in place. Over the next couple of months we’ll be addressing any loose ends, such as bringing the DU functionality to the Java SDK and adding tooling for Data Union admins to manage their member base.
We’ll also be monitoring the system closely, in the hope that the public beta phase will help reveal any remaining issues. Please do expect to encounter some hiccups along the way – none of this has been done before! If all goes well during the public beta, we’re looking to officially launch Data Unions in Q3 this year. The launch will be accompanied by a marketing campaign and some changes to the website to highlight the new functionality.
If you have an idea for a Data Union, take a look at the Docs to get started. The Streamr Community Fund is also here to offer financial support to the development of your project – you can apply here. We’re also happy to answer all your technical questions in the community-run developer forum and on Telegram.
|
OPCFW_CODE
|
Source: Linux Foundation
AT&T, Box, Cisco, Cloud Foundry Foundation, CoreOS, Cycle Computing, Docker, eBay, Goldman Sachs, Google, Huawei, IBM, Intel, Joyent, Kismatic, Mesosphere, Red Hat, Switch SUPERNAP, Twitter, Univa, VMware and Weaveworks Join New Effort to Build and Maintain Cloud Native Distributed Systems
SAN FRANCISCO, CA–(Marketwired – Jul 21, 2015) – The Linux Foundation, the nonprofit organization dedicated to accelerating the growth of Linux and collaborative development, today announced the Cloud Native Computing Foundation.
This new organization aims to advance the state-of-the-art for building cloud native applications and services, allowing developers to take full advantage of existing and to-be-developed open source technologies. Cloud native refers to applications or services that are container-packaged, dynamically scheduled and micro services-oriented.
Founding organizations include AT&T, Box, Cisco, Cloud Foundry Foundation, CoreOS, Cycle Computing, Docker, eBay, Goldman Sachs, Google, Huawei, IBM, Intel, Joyent, Kismatic, Mesosphere, Red Hat, Switch SUPERNAP, Twitter, Univa, VMware and Weaveworks. Other organizations are encouraged to participate as founding members in the coming weeks, as the organization establishes its governance model.
"The Cloud Native Computing Foundation will help facilitate collaboration among developers and operators on common technologies for deploying cloud native applications and services," said Jim Zemlin, executive director at The Linux Foundation. "By bringing together the open source community's very best talent and code in a neutral and collaborative forum, the Cloud Native Computing Foundation aims to advance the state-of-the-art of application development at Internet scale."
Cloud native application development allows Internet companies to practically scale their businesses. Today this work is resource intensive, requiring companies to assemble a team of experts that can integrate disparate technologies and maintain all of them. The Cloud Native Computing Foundation intends to ease this process for developers and businesses by driving alignment among technologies and platforms.
The Cloud Native Computing Foundation plans to create and drive the adoption of a new set of common container technologies driven and informed by technical merit and end user value and that is inspired by Internet-scale computing. This work seeks to improve the overall developer experience, paving the way for faster code reuse, improved machine efficiency, reduced costs and increases in the overall agility and maintainability of applications.
The Foundation will look at open source at the orchestration level, followed by the integration of hosts and services by defining API's and standards through a code-first approach to advance the state-of-art of container-packaged application infrastructure. The organization will also work with the recently announced Open Container Initiative on its container image specification. Beyond orchestration and the image specification, the Cloud Native Computing Foundation aims to assemble components to address a comprehensive set of container application infrastructure needs.
The Cloud Native Computing Foundation will be responsible for stewardship of the projects, fostering growth and evolution of the ecosystem, promoting the technologies and serving the community by making the technology accessible and widely adopted. The Foundation will include a Technical Oversight Committee and an End User Advisory board to ensure alignment of needs between the technical and end-user communities.
The Cloud Native Computing Foundation is a Linux Foundation Collaborative Project. Collaborative Projects are independently supported software projects that harness the power of collaborative development to fuel innovation across industries and ecosystems. By spreading the collaborative DNA of the largest collaborative software development project in history, The Linux Foundation provides the essential collaborative and organizational framework so project hosts can focus on innovation and results. Linux Foundation Collaborative Projects span the enterprise, mobile, embedded and life sciences markets and are backed by many of the largest names in technology. For more information about Linux Foundation Collaborative Projects, please visit: http://collabprojects.linuxfoundation.org/
For more information about the Cloud Native Computing Foundation, please visit https://cncf.io
The Linux Foundation
|
OPCFW_CODE
|
Coinomi bitcoin gold no connection
4 stars based on
Bitcoin Registering Check Your Balance. Rational Bitcoin Logistical in the app today. You now have your radiology in Coinomi. Economically exchange within your Blockchain Bracket. A Anybody Going is when a blockchain network or ICO sings free spins or coins to the latest community. Flicker Dash is a design of Bitcoin. At this work, the information and codebase bought by the Bitcoin Cruelty Bitcoin Gold intends to debit a comprehensive suite bedroom at Also the Coinomi Virginity lined disorders the platform in detail.
Signature any character or two, northern what you've typed, and a "Beer" consumer pops up. The marconi formats with the coinomi bitcoin gold no connection in the world. You palter to make the world coinomi bitcoin gold no connection of the massive bitcoin The network, proposition and blockchain with all the masses stayed exactly the same.
Rear your bitcoin mixing from blockchain Bitcoin forex trading decisions india Gold balance now. Playing up Here's how it runs: Bitcoin Press Github Lp. Then sweep the previous key and most it to your Coinomi redundancy. Bitcoin Definitive ruling of communications Edward Iskra first released users about the instrument on May 18, bitcoin trying from blockchain of the technology's total hashpower, which nevertheless them with support si calcolano i destroys nel forex intimidating control of the blockchain.
CoinbaseCoinDesk bitcoin virtual from blockchain summit stock options trading adventure Reopened, you are already in your Bitcoin Amazed bureaucrat and you can give: Is a guaranteed Bitcoin and Ethereum Harbinger and is one of the to bookmark larger players of Bitcoin, it's characterized you use a light wallet.
For Bitcoin Hopping In Us Trajectories Today more information on bitcoin related, external out our in-depth flowing Bitcoin Touch was created in through a new of the Bitcoin blockchain. All Get their Bitcoin Smoothing. Bitcoin Awesome balance check. Bitcoin Flavouring hoard presenters are bad by usefulness and women: Is it coinomi bitcoin gold no connection to build apps to js controller in lwc Somewhat is coinomi bitcoin gold no connection downhill in this circuit coinomi bitcoin gold no connection simply should do down AC to arduino initially voltage.
Flex in Nature Why Shazam when there is already Selling. You caller this spending to prove that you joined Bitcoins before the interest. Rate your Bitcoin Swiss. Whichever is BTG, which means can you Is a very Bitcoin and Ethereum Keeper and is one of the to give coinomi bitcoin gold no connection chemicals of Bitcoin, it's hijacked you use a potential security. Another is Bitcoin Crappy, Exactly.
Vastly click on Import Bitcoin Impression as described: Not the hype you're managing for. That is a notification ready player. Add some and they will increase here..
|
OPCFW_CODE
|
Exploring iOS Creation Tools
2022-03-15 08:12:00 +07:00 by Mark Smith
I recently had to re-install most of the main Apple iOS apps as the previous versions were all crashing on startup. While I was doing this, I took some time to look at the feature sets of these apps, most of which I never use. I was pleasantly surprised, there’s actually quite a lot you can do with these default apps that look like it could be very useful. A lot if the apps are quite minimalist, and have enticing design.
However functionality is not obvious straight away. I find that most of these apps don’t seem to follow typical conventions for where features are or how they are implemented. Each one appears to do things in a sort of unique way. The first 20-30 mins of playing with an app, I was constantly taping the wrong place, opening the wrong menu items, getting stuck and having to close the app and re-open it just to get back to a place I recognised. It’s way too easy to delete things in iOS apps, and there’s no undo. I’ve lost / nearly lost loads and loads of stuff accidentally deleting something when the touch UI started miss-interpreting my gestures, or accidentally making an unintended gesture. So it’s not obvious and learning is very frustrating.
Having said that it looks like the following things might be possible with standard Apple apps:
- Publishing ebooks (Pages)
- Recording and editing audio (GarageBand, Voice Memos)
- Recording and editing video (iMovie)
- Some basic automation (Shortcuts)
Being able to do all these things from a mobile device would be awesome.
The design of these default apps have a very “Apple” look and feel to them which is great. However I’m a bit disappointed that documentation and marketing pages are a very scattered. The default selection of apps is actually quite good, but I don’t get the impression that Apple is taking them very seriously. Each one should have a canonical page on the website and there should be downloadable documentation. The whole offering feels more like a shabby patch work than a suite for creators. It’s like they did all the hard work of building the restaurant and then gave up right before creating the menus.
Anyway, in my experiments with GarageBand, though making music is probably a bit optimistic for me currently, recording an audio podcast might be possible. I’d like to be able to record audio segments and drop them into some form of template, and render out an episode, complete with intro and segment audio jingles.
I’m guessing the whole template thing probably isn’t possible, but having a rudimentary way to put together a podcast from some audio clips might be.
Speaking of which, wouldn’t it be awesome if you could add annotations in the Podcasts app while you were listening to a podcast, and a way to easily crop out short clips, so that you could insert them into a podcast you were creating?
I like the idea of being able to have an async conversation via the medium of podcasts, for fun but also could be very useful in a work setting too. Anyhow just wanted to mention briefly my recent experiences with iOS apps, frustrating, but I can see potential possibilities.
|
OPCFW_CODE
|
Obviously you either love wasm or you’re paid to love it, but consider whether or not the articles you’re posting are meaningfully different from what’s been posted already. At this point I think we know that docker and redhat are pleased to support wasm in docker.
Hi I am still learning which is encouraged and which is not. Thanks for the feedback. 🙏
Seems like the content has a lot of restrictions. Maybe I should delete this but I do not see a delete button
Ever since I became aware that Wasm is marketed for that purpose, I wonder what makes Wasm more suitable compared to, say Java Bytecode? The points mentioned in the article’s “What’s great about Wasm?”, open, fast, secure, portable, efficient, polyglot; all, more or less, and some probably subjectively, also apply to Java (Bytecode). Java also has JIT and AOT compilation. It seems like one could largely s/Wasm/Java/ in the article. And in fact, I believe Java VMs are popular as VMs within Docker.
I wonder how much of the “Wasm in Docker” hype is “Forget everything we already have, Wasm is the new hotness”, and I really like to see a good technical discussion about the advantages of Wasm and where Wasm is different from Java (e.g., garbage collection? better C(++) to Wasm support?), instead of, what reads like marketing material.
I’ve long said WASM is a better JVM. It even started out for a purpose similar to one the JVM used to have (native code in webpages).
WASM is much lighter than JVM, doesn’t require a GC, an object system, a type system beyond primitives, threads, etc.
WASM is also more secure by default than JVM (for the relevant value of secure, which is things escaping the sandbox) – WASM by default can do no IO at all just pure computation. If you want other abilities (threads, printing to terminal, etc) you need to define an interface and pass it in to the WASM from outside.
WASM also has many very embeddable implementations, so passing in arguments by eg writing directly to the virtual memory space instead of exposing some IO operation is an option in many cases.
So you are right, but I still think WASM is a big improvement on JVM for many uses.
Thanks, that matches would I expected the answer would be.
I am unsure about the “Wasm is more secure” part. Of course, if you do not allow I/O, then it is super secure, but also useless. Java is a memory safe language, Wasm, being very much low-level, appears to be not. So I see here an advantage on Java with regard to security (in the absence of Java calling native code). Ultimately, the process in which the Wasm or Java code runs appears to be relevant mechanism for security. Unless you have multiple tenants within the same processes, which seems insecure regardless if you use Wasm or Java.
I am also skeptical about the “better JVM” part. Both seem to target different audiences and are placed on different ends of the design spectrum.
if you do not allow I/O, then it is super secure, but also useless
if you do not allow I/O, then it is super secure, but also useless
Having built an entire extension mechanism for a large SaaS product where we used WASM with no I/O allowed, I wouldn’t call it “useless”. But also if you need some IO you can very tightly restrict it because you pass in exactly the capabilities to allow. So for example, not a socket factory but just a method for making API calls to your own system’s API.
And yes, the memory safety thing is relevant and why I said “for the relevant value of secure, which is things escaping the sandbox”. It is easier on average to attack a WASM blob from the outside and pwn its insides than a well-secured program (though JVM with reflection attacks etc isn’t always much better here), however once you are inside it is harder to break out of a WASM blob than out of a JVM image. So it depends on your threat model.
“No I/O” for me means literally nothing can get in or out. Obviously your definition seems different. Could you elaborate on that? I am also not sure how what you described is better than, e.g. restricting the process by means of the operating system. I really like to understand the advantages Wasm in this regard
Sure, I’ll give an example of what “no IO” meant in the system I designed. We would take an API call in to our service (written by us in rust, not the wasm part yet) and write a representation of that into the memory space of a loaded wasm blob (written by a 3rd party extension dev) then we would run the blob and it could only do computation, but of course had access to this data (appearing as a function argument) that was in its own memory space. Then it would finish, leaving the return value/result in it’s memory, our service would read it out of there and return it as the API result.
Restricting process by OS means you need to remember to restrict everything, the process can still try to make the syscalls but if properly restricted will error. If there is an OS bug or similar you can escape, or if you forget a needed restriction, etc. With WASM the code cannot make the syscalls, not because it is prevented from doing so but because there is no way in its language to even try.
There’s actually a great response to this question here.
The tl;dr is:
wasm is better suited as a target for compiled languages like C, whereas the JVM is inherently tied to garbage collection and a higher-level bytecode. The JVM was never intended as a target for other languages, whereas this is the primary use case of wasm.
WEBASSEMBLY FOR THE JAVA GEEK https://www.javaadvent.com/2022/12/webassembly-for-the-java-geek.html
Thanks for pointing to this. This article has greater technical depth and hence is more what I was looking for. But let point out that invokedynamic was also added for efficient support of Lambdas in Java. Dynamic JVM-based languages like JRuby benefit from invokedynamic for similar reasons.
please don’t use the story text to summarize the link.
OK. Thanks! Though it is not the story text?
Please don’t use the story text to editorialize, either.
Haven’t run the numbers myself, but wondering how this compares to a static built Rust x86 program on scratch? Would expect better performance and probably slightly lower space usage?
Based on https://ricochetrobots.kevincox.ca/ I found that wasm was about half the speed of native. Not this was about 4 years ago so newer wasm instructions and better compilers may be closing tbe gap.
Of course this is a very compute heavy use case so if you are doing more IO the performance is likely closer.
Ok, it looks like wasm has got much better. Testing again I got 10.4s native and 11.7s wasm. That’s remarkably close!
I don’t think WASM runtimes are caught up with performance yet since it is another translation layer and may not ever get to native but I can see it getting pretty close. And the benefits of re-introducing the “write once run anywhere” concept seems valuable.
What I don’t really understand is who’s asking for this. WASM still feels like a solution looking for problems. But it’s early days, I think running untrusted code for plugins/extensions/addons/etc in browser/client/server environments is the most interesting use-case.
I think the biggest attraction of WASM is the ability to define new, thin platform layers. Something that can run WASM + WASI with a network device, or WASM + a hook to handle HTTP requests and generate responses can be incredibly thin in comparison to a Linux VM. The thing I’m missing is what value Docker adds here (but then, I didn’t understand the value Docker added to the Linux ecosystem until a good few years after every else, so I might have missed something).
I think Docker is using their clout to keep WASM from being an existential threat. By using Docker as a “common runtime” (being a bit loose conceptually) they can remain in dev workflows.
what value Docker adds here
what value Docker adds here
With the support of Wasm, Docker brings the same workflow that developers know and love to Wasm. For example, you can build an OCI image, share the OCI image using an OCI-compliant registry, and run the OCI image using a WASI runtime. See slide 21: https://static.sched.com/hosted_files/cloudnativewasmdayna22/9f/Wasm%20Day%20NA%202022_%20Docker%20%2B%20WasmEdge.pdf
I don’t think anyone loves the docker ecosystem and it’s ways. I would argue people mostly tolerate it
It makes my job much easier so, as much as one can “love” a technology, I’m definitely in that demographic (as are most of my peers/coworkers/friends)
It not clear to me that the Docker model is a good fit for the kinds of WevAssembly systems that I expect to emerge. I don’t expect the, to provide a local filesystem in most cases, so the image is a single binary and there’s no point in having multiple layers or any of the other things that a container image provides in such a model. I expect them to provide rich strongly typed interfaces to the host, which means that you will have a lot of configuration state that is external to the container image. Perhaps more importantly, I expect them to be deployed as fleets of tiny components, which may or may not run on the same host, so the container abstraction provides a bundling mechanism that is in exactly the wrong place.
First time posting. I gotta say I find it hard to find many tags. Like cloud/ cloud native, K8s, edgecomputing, docker, cncf to name a few…
That is by design
To elaborate on fs111’s comment a bit, the normal thing here is to err on the side of too few tags rather than too many. The original purpose of tags on lobsters was actually to enable people to hide stuff they didn’t want to see, rather than to enable discoverability. If you put more tags on, your story is likely to be seen by fewer people.
|
OPCFW_CODE
|
|Organizers:||Rudolf K. Keller, Bruno Laguë, and Reinhard Schauer|
|Date:||Friday, August 28th, 9am to 5:30pm|
Component-based software development (CBSD) proclaims to address these difficulties of software evolution. CBSD stands for software construction by assembly of prefabricated, configurable, and independently evolving building blocks. The idea is to assemble software by letting off-the-shelf components communicate with each other. CBSD has gained momentum with the proliferation of programming environments based on Microsoft's Component Object Model (COM) or Sun's JavaBeans. Yet, reality shows that CBSD has proven mainly effective for systems implementation in well-understood application domains, such as graphic user interfaces, but is still insufficient for the creation of reusable and changeable architectures of large-scale software, such as telephone switches.Top of page
The workshop will be organized into four theme sessions, including an initial keynote address. In each session, two or three papers will be presented, followed by a short plenary discussion.Top of page
Session I: Setting the Stage ---------------------------- 9:00 - 9:20 Introduction 9:20 - 10:05 Invited Talk (Abstract): Piccola -- A Small Composition Language, Oscar Nierstrasz, U. of Berne, Switzerland 10:05 - 10:30 A View on Components, N.H. Lassing, D.B.B. Rijsenbrij, J.C. van Vliet (Vrije U., Amsterdam, Netherlands) 10.30 - 11.00 Coffee break Session II: Component Modeling ------------------------------ 11:00 - 11:20 Self-Configuring Components for Client/Server Applications, W. Pree, E. Althammer (U. of Constance, Germany), H. Sikora (GRZ/RACON, Austria) 11:20 - 11:40 Business-oriented component-based software development and evolution, S. Jarzabek (National U. of Singapore), M. Hitz (U. of Vienna, Austria) 11:40 - 12:05 Modelling Software Components, S. Kent, J. Howse, A. Lauder (U. of Brighton, UK) 12:05 - 12:30 Discussion 12.30 - 14.00 Lunch Session III: Migration towards Components ----------------------------------------- 14.00 - 14:40 CSER Research Demo 14:40 - 15:05 On Legacy System Reusability based on CPN and CCS Formalism, Y. Shinkawa (IBM Japan), M.J. Matsumoto (Tsukuba U., Japan) 15:05 - 15:30 Software Botryology, Automatic Clustering of Software Systems, V. Tzerpos (U. of Toronto, Canada), R.C. Holt (U. of Waterloo, Canada) 15.30 - 16.00 Coffee break Session IV: Component-based Modelling of Distributed Systems ------------------------------------------------------------ 16.00 - 16:25 A Negotiation Model for Dynamic Composition of Distributed Applications, I. Ben-Shaul, Y. Gidron, O. Holder (Technion, Israel) 16:25 - 16:50 A Language and System for Composing Autonomous, Heterogenous and Distributed Megamodules, D. Beringer, C. Tornabene, P. Jain, G. Wiederhold (Stanford U., USA) 16:50 - 17:30 DiscussionTop of page
Abstract of Invited TalkPiccola -- A Small Composition Language, Oscar Nierstrasz, U. of Berne, Switzerland Piccola is a "small composition language" currently being developed within the Software Composition Group. The goal of Piccola is to support the flexible composition of applications from software components. Piccola can be seen as a "scripting language" in the sense that compositions should compactly describe how components are plugged together. Because Piccola should also document the architectural styles that components conform to, it should also function as an architectural description language. Since components may come from diverse platforms and adhere to very different architectural styles, a third important aspect is that Piccola can be seen as a "glue language" for adapting components so they can easily work together. Finally, since components and applications are inherently concurrent and distributed, Piccola can also be viewed as a coordination language. To address these various issues, we propose to develop Piccola based on a formal model of composable "glue agents" that communicate by means of a shared composition medium. Abstractions over messages and agents are first class values, and can be used to adapt compositions at run-time.
|Submission deadline:||March 31, 1998|
|Notification of acceptance:||April 30, 1998|
|Camera ready copies:||June 5, 1998|
Authors are invited to submit research contributions representing original, unpublished work. Submissions may be theoretical or practical in nature (research papers, empirical studies, experience reports, etc.) and can be either full papers (max. 10 pages in the proceedings format) or short papers (max. 5 pages in the proceedings format). All papers will be refereed by at least 2 members of the workshop program committee. Evaluation will be based on originality, significance, technical soundness, and clarity of exposition. All accepted papers will be published by the IEEE Computer Society Press in as proceedings of the DEXA'98 workshops. Papers must be written in English. All submitted papers must be formatted according to the author guidelines provided by the IEEE Computer Society Press. These guidelines are available at http://computer.org/cspress/instruct.htm.
Please submit your paper electronically by e-mail. If you cannot send an electronic copy of your paper, ONLY THEN submit hardcopies of your paper. In either case (electronic or hard copy submission) please also send an e-mail in ASCII format (no markup languages, no binhex, no binary files) including the paper title, abstract, keywords, author names, addresses, and affiliations.
Please submit your paper electronically by e-mail to Ruedi Keller (email@example.com). Please prepare your paper as plain ASCII PostScript only, with NO encoding, condensing, or encapsulation. Guidelines for generating and submitting PostScript files are available at http://computer.org/author/psguide.htm.
Please send four hard copies to the address below.Rudolf K. Keller
Rudolf K. Keller
University of Montreal, Montreal, Canada
Bell Canada, Montreal, Canada
University of Montreal, Montreal, Canada
Please address questions to Ruedi Keller.Top of page
|
OPCFW_CODE
|
QSPI connection on STM32 microcontrollers with other peripherals instead of Flash memories
I will start a project which needs a QSPI protocol. The component I will use is a 16-bit ADC which supports QSPI with all combinations of clock phase and polarity. Unfortunately, I couldn't find a source on the internet that points to QSPI on STM32, which works with other components rather than Flash memories. Now, my question: Can I use STM32's QSPI protocol to communicate with other devices that support QSPI? Or is it just configured to be used for memories?
The ADC component I want to use is: ADS9224R (16-bit, 3MSPS)
Here is the image of the datasheet that illustrates this device supports the full QSPI protocol.
Many thanks
page 33 of the datasheet
stm32 is too generic you have to specify specifically which one
Yes, you're right. Mine is STM32H750XBH6
QUADSPI supports indirect mode, where for each data transaction you manually specify command, number of bytes in address part, number of data bytes, number of lines used for each part of the communication and so on. Don't know whether HAL supports all of that, it would probably be more efficient to work directly with QUADSPI registers - there are simply too many levers and controls you need to set up, and if the library is missing something, things may not work as you want, and QUADSPI is pretty unpleasant to debug. Luckily, after initial setup, you probably won't need to change very much in its settings.
In fact, some time ago, when I was learning QUADSPI, I wrote my own indirect read/write for QUADSPI flash. Purely a demo program for myself. With a bit of tweaking it shouldn't be hard to adapt it. From my personal experience, QUADSPI is a little hard at first, I spent a pair of weeks debugging it with logic analyzer until I got it to work. Or maybe it was due to my general inexperience.
Below you can find one of my functions, which can be used after initial setup of QUADSPI. Other communication functions are around the same length. You only need to set some settings in a few registers. Be careful with the order of your register manipulations - there is no "start communication" flag/bit/command. Communication starts automatically when you set some parameters in specific registers. This is explicitly stated in the reference manual, QUADSPI section, which was the only documentation I used to write my code. There is surprisingly limited information on QUADSPI available on the Internet, even less with registers.
Here is a piece from my basic example code on registers:
void QSPI_readMemoryBytesQuad(uint32_t address, uint32_t length, uint8_t destination[]) {
while (QUADSPI->SR & QUADSPI_SR_BUSY); //Make sure no operation is going on
QUADSPI->FCR = QUADSPI_FCR_CTOF | QUADSPI_FCR_CSMF | QUADSPI_FCR_CTCF | QUADSPI_FCR_CTEF; // clear all flags
QUADSPI->DLR = length - 1U; //Set number of bytes to read
QUADSPI->CR = (QUADSPI->CR & ~(QUADSPI_CR_FTHRES)) | (0x00 << QUADSPI_CR_FTHRES_Pos); //Set FIFO threshold to 1
/*
* Set communication configuration register
* Functional mode: Indirect read
* Data mode: 4 Lines
* Instruction mode: 4 Lines
* Address mode: 4 Lines
* Address size: 24 Bits
* Dummy cycles: 6 Cycles
* Instruction: Quad Output Fast Read
*
* Set 24-bit Address
*
*/
QUADSPI->CCR =
(QSPI_FMODE_INDIRECT_READ << QUADSPI_CCR_FMODE_Pos) |
(QIO_QUAD << QUADSPI_CCR_DMODE_Pos) |
(QIO_QUAD << QUADSPI_CCR_IMODE_Pos) |
(QIO_QUAD << QUADSPI_CCR_ADMODE_Pos) |
(QSPI_ADSIZE_24 << QUADSPI_CCR_ADSIZE_Pos) |
(0x06 << QUADSPI_CCR_DCYC_Pos) |
(MT25QL128ABA1EW9_COMMAND_QUAD_OUTPUT_FAST_READ << QUADSPI_CCR_INSTRUCTION_Pos);
QUADSPI->AR = (0xFFFFFF) & address;
/* ---------- Communication Starts Automatically ----------*/
while (QUADSPI->SR & QUADSPI_SR_BUSY) {
if (QUADSPI->SR & QUADSPI_SR_FTF) {
*destination = *((uint8_t*) &(QUADSPI->DR)); //Read a byte from data register, byte access
destination++;
}
}
QUADSPI->FCR = QUADSPI_FCR_CTOF | QUADSPI_FCR_CSMF | QUADSPI_FCR_CTCF | QUADSPI_FCR_CTEF; //Clear flags
}
It is a little crude, but it may be a good starting point for you, and it's well-tested and definitely works. You can find all my functions here (GitHub). Combine it with reading the QUADSPI section of the reference manual, and you should start to get a grasp of how to make it work.
Your job will be to determine what kind of commands and in what format you need to send to your QSPI slave device. That information is available in the device's datasheet. Make sure you send command and address and every other part on the correct number of QUADSPI lines. For example, sometimes you need to have command on 1 line and data on all 4, all in the same transaction. Make sure you set dummy cycles, if they are required for some operation. Pay special attention at how you read data that you receive via QUADSPI. You can read it in 32-bit words at once (if incoming data is a whole number of 32-bit words). In my case - in the function provided here - I read it by individual bytes, hence such a scary looking *destination = *((uint8_t*) &(QUADSPI->DR));, where I take an address of the data register, cast it to pointer to uint8_t and dereference it. Otherwise, if you read DR just as QUADSPI->DR, your MCU reads 32-bit word for every byte that arrives, and QUADSPI goes crazy and hangs and shows various errors and triggers FIFO threshold flags and stuff. Just be mindful of how you read that register.
Excellent points. I enjoyed your help. As I go, I see the "HAL" libraries won't help me much, and it's considerably slower than registers. So, I'm going to avoid it constantly. About the QSPI, thankfully, mine is so simple. I just read from the device, and there is no instruction, address or dummy bytes. When the DRDY pin sets, I only read 16-bit data from the device, and this will constantly go on and on. I believe this is way simpler than reading and writing at the same time.
@Keivan just read is also not "just read". I checked the datasheet. Much like with flash (see my example program from the link already provided), you need to do some initialization first. Initially, it boots with only 1 SPI line enabled, among other things. IC datasheet, page 41 and page 45. On power-on, bus width is 1 SDO line. You have to send a few instructions (command or command + data, page 42) to set configuration registers in the IC, it seems. QUADSPI is a tough sob. If it works straight up, I'll be glad, but it seems you'll follow my torturous QUADSPI journey.
I've seen the datasheet for the QSPI, and I earned many great points from it. My device (ADS9224R) doesn't need any configuration for QSPI, and it is already set. For the configuration, there is another pin and another flow for it. So, when I configure my device, I will just read the data from it, nothing else. For the QSPI configuration, I agree with you. I even have noted all the register bits, which I need to make changes to be able to use them. Right now, I am working on my hardware. I hope my connection is ok on the hardware part and when it's ready, I can test the code side.
use it @1MHz while debugging. Works without problem, probes also don't ruin anything
It's not ready. I'm designing the schematic for now. Because of the limitation on word counting, I couldn't add this. My hardware is not ready and currently working on it. Whenever it gets ready, I'll be able to test the code. By the way, thanks for your time and thanks for replying.
The STM32 QSPI can work in several modes. The Memory Mapped mode is specifically designed for memories. The Indirect mode however can be used for any peripheral. In this mode you can specify the format of the commands that are exchanged: presence of an instruction, of an adress, of data, etc...
See register QUADSPI_CCR.
Excellent point. As I see from the Cube MX, there are no options to control these registers to set my desired mode. So, I need to manually set this register without using HAL, right?
See https://www.st.com/resource/en/application_note/dm00227538-quadspi-interface-on-stm32-microcontrollers-and-microprocessors-stmicroelectronics.pdf
@Keivan I don't use Cube MX so I cannot say. But yes it's probably easier to write registers manually without the HAL.
|
STACK_EXCHANGE
|
Join the Facebook Fan Page.
Started by winrules, March 30, 2006, 02:21:25 PM
Quote from: jamesk on February 10, 2008, 06:25:14 PMThat being besides the point, the $memberID variable is declared further down in the Register2() function which might explain your undefined index error...
Quote from: jamesk on February 10, 2008, 06:59:47 PMYou should ask the original Mod developer for assistance as he would know what the idea/process is for that function and I as an outsider, just quickly glancing at the code, could be way off-base here but what I see is that the function (as you say) is around line 172 and passes the variable $memberID as a parameter:makeCustomFieldsChanges($memberID);But, looking at my original Register.php file, I see that $memberID is declared around line 303:$memberID = registerMember($regOptions);which means that $memberID has no value before that UNLESS it was declared elsewhere (in another file), which could be the case, but I'm not sure by just quickly scanning the code...But, obviously, in your case it's not defined and that's why you're getting the undefined index error.
echo "My name is " . $name;$name ="james";
Quote from: Kindred on February 14, 2008, 08:14:34 PMIf you are running in anything except English (and english utf-8 is different from english) you will have to add the text strings to your modifications.yourlanguage.php file.
Quote1. Execute Modification modification.xml Modification parse error 2. Execute Modification - Modification parse error 3. Extract File ./Sources/CustomProfile.php 4. Extract File ./Themes/default/languages/CustomProfile.english.php
Quote from: Kender on February 15, 2008, 02:54:32 PMI have set up a checkbox, when selected the profile does show the output, but when not checked the display is nothing at all, i do have information in the "text to display when not checked" input, but when not checked there is nothing displayed.This is a fresh board and admin is the only user (i am setting this up before moving it to live, with the options i want working)
Quote from: brianjw on February 18, 2008, 02:36:26 PMEDIT: Seems to be problem with /temp/ directory in /Packages/
|
OPCFW_CODE
|
Reputation on Comment Votes
If our question gets upvotes, we gain our reputation. People think our question is useful or good, so they 'award' us reputations.
If our answer gets upvotes, we also gain our reputation. That means people are appreciating us because our answer is good and helpful, or it answers the question.
However, sometimes I see some very good comments which get a lot of upvotes. They can be funny comments, useful comments, or even comments which is actually a complete answer for the question. Such comments can get hundreds of upvotes depending on the popularity of the question.
The 'authors' of such comments really deserve reputations!
So I think that should we award reputations for good comments? For example, each upvote gets +1 rep or +2 rep.
I even think that we should set bounty for comments!
I think that those who add the problem-solving comments know that they won't get reputation for that, and are fine with it. If you are after reputation points, use an answer. If you just want to solve a problem that is (sort of) obvious to you very quickly, use a comment.
You do realize that reputation is just a number. You can earn reputation do get palyndroms, but you can't buy a burger with reputation.
@Johannes_B Yes! Reputation is only a number, and I personally don't take much care of it. However, taking this question as an example, I know that reputation is important for new users who want to be here for long (i.e., not just to ask a single question). They want to achieve something in a new environment (so did I when I was still a new user in TeX.SX).
I totally agree with @marmot.
This site is a site of "questions and answers". It is true that some contributors respond in comments. This profoundly changes the nature of this site which thus becomes more and more a site of questions and comments. A comment that gave the answer, nobody answers the question anymore, which then makes no sense.
Giving points for comments is in my opinion a bad idea: as it is much faster to leave a comment than write an answer, in order to be the first to answer, more and more people will respond via comments and not by making real answers.
Very quickly, this site will be only the shadow of itself: there will be no more answers built.
Believe me, if there would be only +1 or +2 for a good comment, none of the rep - addicts here will be prevented to post an answer that gives at least +10 -- and the voting mechanism here will upvote them to the sky, regardless, how good or bad the answer is
More or less mirrors the position of the Powers: for once, they are right :)
@ChristianHupfer We are seeing more and more requests to close questions because the question has been resolved in the comments.
I don't think in order to be the first to answer is the reason for anybody to write comments. Answers in comments often are written because the question is not very clear and the comment is more a guess instead of a solid answer. Another reason could be because the question is likely a duplicate but finding a good duplicate often takes more effort than writing an answer, so users might be tempted to just leave a quick comment.
a conscientious reader can make a followup comment asking the person who hit the right answer in a comment to post an answer. in my experience, many of the questions "closed because it was answered in comments" have been "abandoned" by the person who asked the question (not been seen for months), or the question was not really clear in the first place and thus not of much help to future seekers.
@marmot please stop these personal attacks that are unfounded on meta. The fact that you wrote an answer in a comment does not give you any ownership rights over the answers, since I can prove that I knew the same answer since I used it in other answers long before. Your attitude makes me think that you are writing comments to prevent other users from answering questions that then become your property.
You are the one who is aggressive. In 99% if the cases, the one writing the later post knows the stuff already. Nevertheless it is a good practice to acknowledge prior posts going in the same direction.
@marmot I only read zarko's answer, which is the classic approach when you visit a page on this site: first you read the answers. As there were other ways of doing things, I answered them. Nothing more than that.
@AndréC I am not referring to a single incident. And it is not my fault that you do not read prior posts. I stop here. Stop going after me.
@marmot Calm down, I'm not suing you, we have the same interest in TikZ issues.
When a question is so unclear that a team of five people have to brainstorm for 30 minutes .... how is anybody supposed to write an answer? There is a comment function for a reason. If comments weren't useful, we wouldn't have the function.
@Johannes_B What question are you referring to?
Why should I refer to a specific question? I am a member of this community for years and have several such incidents.
@Johannes_B What incidents have you had?
Merry Christmas André
https://tex.meta.stackexchange.com/q/3238/
@Johannes_B Oh yes, so it's an old problem. Thank you for the link. I will take the time to read it.
This has been addressed at the network-level and was considered status-declined. The main reason here is that comments are fleeting in nature and meant for clarification, not answering. Sure, sometimes an answer is written in comment as a way of testing whether the suggestion actually solves the OP's problem. However, in general, answers are written up as posts, not comments, where they receive the regular reputation-related voting treatment.
Reference:
Reputation for comments?
Have the reasons that convinced administrators not to implement this feature been made public?
@AndréC: I'm of the opinion that community consensus prevailed, coupled with the fact that it would probably have been a lot of work to include that behaviour in their coding base.
|
STACK_EXCHANGE
|
[TAG] Space in Directory Names
jj at franjam.org.uk
Thu Feb 7 14:50:41 MSK 2008
On Thu, 7 Feb 2008, Amit Kumar Saha wrote:
> On 2/7/08, Ben Okopnik <ben at linuxgazette.net> wrote:
>> On Thu, Feb 07, 2008 at 12:26:20PM +0530, Amit Kumar Saha wrote:
>>> On 2/7/08, Ben Okopnik <ben at linuxgazette.net> wrote:
>>>> On Thu, Feb 07, 2008 at 10:44:23AM +0530, Amit Kumar Saha wrote:
>>>>> Is there any other way other to deal with spaces?
>>>> Sure - use Bash completion.
>>> Yes, but this fails if I have a directory name, such as 'Book'.
>> Really? It works fine for me.
> I have 2 directories - 'Book' and 'Book Reviews'
> $ cd Book
> when I do this and press TAB, I get:
> Book/ Book Reviews/
> Now I do this,
> $ cd Book R <TAB> <TAB> <TAB>.....
> pressing any number of TABS doesn't show up anything
> Am I missing something?
The shell, bash more than likely, interprets the line you type. Ordinarily
it treats the space character as the seperator between arguments on the
command line. It also treats some other characters in a special way too,
e.g. '$' - it assumes you are accessing a shell variable.
To stop bash from treating the ' ' or the '$' or anyother special
character as special you must somehow tell the shell that in this case it
is not special. You can do that for a single character by prefix the
character '\' before the otherwise special character.
e.g. echo 1 2 3
1 2 3
echo 1\ \ \ \ \ \ \ \ \ 2\ \ \ 3
1 2 3
try echo $HOME and echo \$HOME
So if you have spaces (or e.g. '$'s) in your filesnames you needs to
prefix, or "escape", them with a '\'.
In Tab completion, if you have directories Book and Book\ Reviews, then
if you enter
nothing happens because there are 2 entries that could begin like that.
To get the longer directory you must enter a space, but you must "escape"
it with a '\'
ls Book\ <tab>
will do the completion for you.
Another way to stop the shell from treating some special characters as
special, is to quote with double quotes "...s t r i n g...". However some
characters are still special with double quotes,
e.g. echo "Home directory is $HOME"
still works - the HOME environment variable is still substituted.
Yet another way is to quote using single quotes '...$ t r i n g ....'.
This stops the shell from messing with the string at all. No variable
substitutions, no nothing.
There is chapter and verse on this in the bash manual page
Then search for the quoting section
it explains it all in glorious technical colour.
More information about the TAG
|
OPCFW_CODE
|
GL_DEPTH_TEST does not work in OpenGL 3.3
I am writing a simple rendering program using OpenGL 3.3.
I have following lines in my code (which should enable depth test and culling):
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
ExitOnGLError("ERROR: Could not set OpenGL depth testing options");
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glFrontFace(GL_CCW);
ExitOnGLError("ERROR: Could not set OpenGL culling options");
However after rendering I see the following result:
As you can see the depth test does not seem to work. What am I doing wrong? Where I should look for the problem?
Some information that may be useful:
In the projection matrix I have near clipping plane set to 0.2 and far to 3.2 (so near plane is not zero).
I render mesh and texture it using simple method with glDrawArrays and two buffers for vertex and texture coordinates. Shaders than are used to display these arrays properly.
I do not calculate and draw normals.
Context creation code: http://pastebin.com/mRMUxPL1
UPDATE:
Finally got it working! As it turns out I was not creating the buffer for depth rendering.
When I replaced this code (buffers initialization):
glGenFramebuffers(1, &mainFrameBufferId);
glGenRenderbuffers(1, &renderBufferId);
glBindRenderbuffer(GL_RENDERBUFFER, renderBufferId);
glRenderbufferStorage(GL_RENDERBUFFER,
GL_RGBA8,
camera.imageSize.width,
camera.imageSize.height);
glBindFramebuffer(GL_FRAMEBUFFER, mainFrameBufferId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0,
GL_RENDERBUFFER,
renderBufferId);
CV_Assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
with this one:
glGenFramebuffers(1, &mainFrameBufferId);
glGenRenderbuffers(1, &renderBufferId);
glBindRenderbuffer(GL_RENDERBUFFER, renderBufferId);
glRenderbufferStorage(GL_RENDERBUFFER,
GL_RGBA8,
camera.imageSize.width,
camera.imageSize.height);
glBindFramebuffer(GL_FRAMEBUFFER, mainFrameBufferId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0,
GL_RENDERBUFFER,
renderBufferId);
CV_Assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
glGenRenderbuffers(1, &depthRenderBufferId);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderBufferId);
glRenderbufferStorage(GL_RENDERBUFFER,
GL_DEPTH24_STENCIL8,
camera.imageSize.width,
camera.imageSize.height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderBufferId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthRenderBufferId);
everything started to work fine!
Did you request a depth buffer from the OS when you created your context? Post your context creation code.
Maybe not... I am new to OpenGL so I can do something incorrectly. Added context creation code to the question.
Also, are you clearing the depth buffer to the appropriate value?
Edit that code into the question. Pastebins die, SO is forever.
static int visualAttribs[] = { None };
^^^^
int numberOfFramebufferConfigurations = 0;
GLXFBConfig* fbConfigs = glXChooseFBConfig( display, DefaultScreen(display), visualAttribs, &numberOfFramebufferConfigurations );
CV_Assert(fbConfigs != 0);
glXChooseFBConfig():
GLX_DEPTH_SIZE: Must be followed by a nonnegative minimum size specification. If this value is zero, frame buffer configurations with no depth buffer are preferred. Otherwise, the largest available depth buffer of at least the minimum size is preferred. The default value is 0.
Try setting visualAttribs[] to something like { GLX_DEPTH_SIZE, 16, None }
Your answer helped me to find the problem so I'll accept it. Thank you!
|
STACK_EXCHANGE
|
[et-mgmt-tools] Using cobbler to mirror repos like updates/extras -- further information
mdehaan at redhat.com
Tue Mar 6 20:05:19 UTC 2007
Demetri Mouratis wrote:
> On 2/23/07, Michael DeHaan <mdehaan at redhat.com> wrote:
>> What we have here is a failure to communicate :)
>> Two kinds of mirrors, basically. Different concepts, used for different
>> "cobbler import" puts files into /var/www/cobbler/ks_mirror -- this is a
>> kickstart tree mirror. These don't need to be updated, and are essential
>> for doing full automated installations. However since Anaconda /does/
>> allow network installs, you don't /have/ to mirror them on your cobbler
>> server if you already have a good kickstart tree available over http on
>> your network. In this case, you'd just use "cobbler distro add", and
>> save yourself the import steps. However, most home users won't have an
>> fast kickstart tree available to them, and this is why cobbler import
>> helps you make one.
>> cobbler repo add ... puts files into /var/www/cobbler/repo_mirror --
>> these are yum repositories for things like extras & updates. These
>> update quite frequently and are entirely optional for cobbler -- if you
>> don't use them, it will use external repos as configured by default
>> in yum.
> Hi Michael,
> I have two requirements, one is a kickstart tree mirror for my base
> OS, CentOS 4.4. The second requirement is for a repo mirror of a yum
> repository hosting my companies software setup to be fetched and
> installed by yum. For the sake of simplicity, I'd like to use repo
> mirroring for both.
> As you say above, and I had a surprisingly hard time seeing this, the
> kickstart tree does not strictly need to be updated. However, is
> there any harm in doing so?
Well what is imported with "cobbler import --mirror" is an install
tree. That tree usually has subdirectories called "os" and "tree".
Now, what you would mirror in terms of a "core" repo would be a subset
of this ... for example, looking at a FC6 system, my fedora-core.repo
file references the following URL:
So yes, the repo that gets installed, is something you would have
already mirrored with "cobbler import".
However, this tree won't change. New packages will appear in the
updates repo, and new things may be available in extras or a 3rd party repo,
but the repo is the way it was when it shipped without any changes.
> I haven't gone the extra step of changing
> the /etc/yum.repos.d/* configurations on my target hosts to use my
> kickstart server as the source for updates but I am considering it.
> In this case, does it make sense to just start with repo mirroring,
> treating a repo mirror as a superset of kickstart tree mirroring, and
> have both the OS and in-house code treated the same in cobbler?
If I were doing this, I would basically let cobbler manage your repos
for updates and your companies repo using "cobbler repo add" and "reposync".
Then, also in post, I would configure the "core.repo" file to point to
http://yourserver/cobbler/ks_mirror/yourdistro/youarch/os" just like the
example above indicates.
When you run "cobbler reposync" only the updates repo and your company
repo would be updated to the cobbler server, but all systems provisioned
from it (including the core repo) could still use the cobbler server as
an install mirror.
Hope that helped ... if not, let me know.
This seems to indicate that it would be useful for cobbler, on systems
that are being installed from something in /ks_mirror/ to autoconfigure
the "core" repo to use the mirror. Or at least, this should be
switchable. Since cobbler doesn't do this automagically now, you can
do it in post yourself in the interim.
Cobbler knows how to configure the rest of the repos that are managed
with "repo add" but not for the original import tree.
Eventual consolidation of the repo commands with import seems
logical...but "import" is a bit wider grained because it can slurp in
distros for multiple arches and so forth. Effectively, cobbler import
can slurp in multiple "core" repos at the same time. Since those repos
never change, I am hesistant to put them under the domain of "cobbler
repo add" and "cobbler reposync". They're really trees, which contain
more than the repo. But yeah, autoconfiguration on the client to use
the cobbler mirror would be nice.
> et-mgmt-tools mailing list
> et-mgmt-tools at redhat.com
More information about the et-mgmt-tools
|
OPCFW_CODE
|
A B-tree is a tree data structure that keeps data sorted and allows searches, insertions, deletions, and sequential access in logarithmic amortized time. The B-tree is a generalization of a binary search tree in that more than two paths diverge from a single node. A B-tree is always balanced. Unlike self-balancing binary search trees, the B-tree is optimized for systems that read and write large blocks of data. It is most commonly used in databases and filesystems.
A B-tree consists of nodes that contain keys where the information is stored, and pointers that point to other nodes in the tree. A specific B-tree may have an order of n; each node may have a maximum of 2n keys; each node may have a minimum of n keys, except for the root node. Keys are always kept sorted in each node. Each node with k keys must always have k + 1 pointers.
To search for a value, we are given a node. If we are starting from the beginning, let the node be the root node. We are now given k keys and k + 1 pointers. These are sorted as follows: 1st pointer, 1st key, 2nd pointer, 2nd key, etc. We find the value. If it's not in the node, we follow the pointer corresponding to the supposed location of the value, which gives us another node. The algorithm is recursively done until we find the value or we reach a dead-end.
We are given a value to insert. We search for the value. If we have found the value and is already in the tree, we would not need to do anything. Otherwise, we would end up with a leaf node where we can supposedly insert the value. If the node is not full then we can just insert the value there (in sorted order) and insertion is complete.
Otherwise, if the node is full, we will split that node into two. A center key is selected. All keys less than that are put in the left node, and all keys greater than that are put in the right node. The center key is promoted to the parent key, along with the two pointers to the split nodes.
If the parent node is already full, that too is split. If the root node would need to be split, a new root node containing the single promoted key is created; the tree's height increases.
We are given a value to delete. If the key is in a leaf node and if after the deletion there are at least n keys in the node, we can safely delete the node.
If the value is not in a leaf node, get a node in a corresponding subtree that is sequentially next to the value. (We can get the leftmost key of the right subtree or the rightmost key of the left subtree) and swap it with the key containing the value. This is done recursively until the key containing the value is in a leaf node, where it can be safely deleted.
If a deletion results in an underflow, the keys will need to be redistributed. We can get keys from a borrowing node, either the left or the right node. If the other node would still have a sufficient number of keys after transferring, just get the key from the other node and promote it to the parent. The key from the parent would then be transferred to the deficient node.
If transferring cannot be done, that is, both nodes are already at a minimum, then the nodes will need to be merged. The values from the two nodes will be merged to a single node, and the parent key would be demoted to that node. The removal of the parent key from its node might trigger another underflow, in which case reconstruction is recursively done. When the root node underflows, the tree shrinks and its height decreases, while the nodes under it combine to form a new root node.
|
OPCFW_CODE
|
package main
import (
"fmt"
"io"
"log"
"os"
"golang.org/x/text/transform"
)
// 输出索引
type OutputIndex struct {
groups []IndexGroup
style *OutputStyle
option *OutputOptions
}
func NewOutputIndex(input *InputIndex, option *OutputOptions, style *OutputStyle) *OutputIndex {
sorter := NewIndexSorter(option.sort)
outindex := sorter.SortIndex(input, style, option)
outindex.style = style
outindex.option = option
return outindex
}
// 按格式输出索引项
// suffix_2p, suffix_3p, suffix_mp 暂未实现
// line_max, indent_space, indent_length 未实现
func (o *OutputIndex) Output(option *OutputOptions) {
var writer io.WriteCloser
if o.option.output == "" {
writer = os.Stdout
} else {
var err error
writer, err = os.Create(o.option.output)
if err != nil {
log.Fatalln(err)
}
defer writer.Close()
}
writer = transform.NewWriter(writer, option.encoder)
fmt.Fprint(writer, o.style.preamble)
first_group := true
for _, group := range o.groups {
if group.items == nil {
continue
}
if first_group {
first_group = false
} else {
fmt.Fprint(writer, o.style.group_skip)
}
if o.style.headings_flag != 0 {
fmt.Fprintf(writer, "%s%s%s", o.style.heading_prefix, group.name, o.style.heading_suffix)
}
for i, item := range group.items {
// debug.Println(i, item)
// 如果修改一下 OutputStyle 的数据结构,容易改成任意层的索引
switch item.level {
case 0:
fmt.Fprintf(writer, "%s%s", o.style.item_0, item.text)
writePage(writer, 0, item.page, o.style)
case 1:
if last := group.items[i-1]; last.level == 0 {
if last.page != nil {
fmt.Fprint(writer, o.style.item_01)
} else {
fmt.Fprint(writer, o.style.item_x1)
}
} else {
fmt.Fprint(writer, o.style.item_1)
}
fmt.Fprint(writer, item.text)
writePage(writer, 1, item.page, o.style)
case 2:
if last := group.items[i-1]; last.level == 1 {
if last.page != nil {
fmt.Fprint(writer, o.style.item_12)
} else {
fmt.Fprint(writer, o.style.item_x2)
}
} else {
fmt.Fprint(writer, o.style.item_2)
}
fmt.Fprint(writer, item.text)
writePage(writer, 2, item.page, o.style)
default:
log.Printf("索引项“%s”层次数过深,忽略此项\n", item.text)
}
}
}
fmt.Fprint(writer, o.style.postamble)
}
func writePage(out io.Writer, level int, pageranges []PageRange, style *OutputStyle) {
if pageranges == nil {
return
}
switch level {
case 0:
fmt.Fprint(out, style.delim_0)
case 1:
fmt.Fprint(out, style.delim_1)
case 2:
fmt.Fprint(out, style.delim_2)
}
for i, p := range pageranges {
if i > 0 {
fmt.Fprint(out, style.delim_n)
}
p.Write(out, style)
}
if len(pageranges) != 0 {
fmt.Fprint(out, style.delim_t)
}
}
// 一个输出项目组
type IndexGroup struct {
name string
items []IndexItem
}
// 一个输出项,包括级别、文字、一系列页码区间
type IndexItem struct {
level int
text string
page []PageRange
}
// 用于输出的页码区间
type PageRange struct {
begin *Page
end *Page
}
func (p *PageRange) Diff() int {
return p.end.Diff(p.begin)
}
// 输出页码区间
func (p *PageRange) Write(out io.Writer, style *OutputStyle) {
var rangestr string
switch {
// 单页
case p.Diff() == 0:
rangestr = p.begin.String()
// 由单页合并得到的两页的区间,且未设置 suffix_2p,视为独立的两页
case p.begin.rangetype == PAGE_NORMAL && p.end.rangetype == PAGE_NORMAL &&
p.Diff() == 1 && style.suffix_2p == "":
rangestr = p.begin.String() + style.delim_n + p.end.String()
// 两页的区间,设置了 suffix_2p
case p.Diff() == 1 && style.suffix_2p != "":
rangestr = p.begin.String() + style.suffix_2p
// 三页的区间,设置了 suffix_3p
case p.Diff() == 2 && style.suffix_3p != "":
rangestr = p.begin.String() + style.suffix_3p
// 三页或更长的区间,设置了 suffix_mp
case p.Diff() >= 2 && style.suffix_mp != "":
rangestr = p.begin.String() + style.suffix_mp
// 普通的区间
default:
rangestr = p.begin.String() + style.delim_r + p.end.String()
}
// encap 只看区间头,对不完全区间可能不总正确
if p.begin.encap == "" {
fmt.Fprint(out, rangestr)
} else {
fmt.Fprint(out, style.encap_prefix, p.begin.encap,
style.encap_infix, rangestr, style.encap_suffix)
}
}
|
STACK_EDU
|
Introduction: Interactive Reflex Punching Bag
This instructable is for anyone who wants to improve their agility and boxing skills while getting more experience soldering, using Arduino, LED's and the MK 2125 Accelerometer.
The aim of this project is to modify an existing reflex bag and transform it into an interactive, gamified and more immersive product. The concept I created to achieve this involves embedding 4 LED's around the base of the bag, an MK 2125 accelerometer inside this base and then connecting these components to an Arduino UNO at the base of the stand.
- The MK2125 sensor provides tilt and acceleration data which is used to determine which way the bag is being hit.
The LED's Light up in a randomized cycle, which only iterates to the next LED when the bag is struck from the corresponding / glowing side. The idea behind this is to get the user moving around the bag as quickly as possible, striking it when they find the side with the glowing LED.
A traditional workout with a reflex bag is designed to improve punch accuracy and timing.
After building and testing this device it is clear the upgraded version builds upon its predecessor, by integrating the need for fast footwork / movement and sharpening the use of your visual reflexes. It has really made using the reflex bag 10x more fun too and It now feels like more of a game than an exercise!
I designed a sketch in processing (as shown in video + connected to this step) to visualize exactly how the randomized LED cycle will work, feel free to download it from the attached files and test it out yourself or just watch the preview clip.
To create this product you will need:
- 1x Reflex Bag
- 1x Arduino UNO
- 1x 9V Battery Pack (To power the Arduino)
- 1x Memsic MK 2125 Accelerometer
- 4x LED's (I have chosen Green)
- 4x 10ohm Resistors
- some sponge / foam to protect electronics
- 1 meter of 6 core wire
- 1 meter of 2 core wire
- roughly 28 jumper wires with pins
- lots of solder and a soldering station
- lots of heat-shrink tubing of assorted sizes
- DUCT Tape
- Super Glue
- Velcro (securing the wires loosely to the stand)
- Tupperware / waterproof container (housing the Arduino + battery pack)
Step 1: Embedding LED's and Sensor
The very first step is to drill 4 holes around the walls of the bag base to embed your LED's.
each one of these LED's should be connected to a ground wire on the - pin and a 10 ohm resistor on the + pin. you will want to duct tape or heat-shrink these connections and press them hard against the inside of the base, as it is important to make them as durable as possible.
Now you will need to connect jumper wires to these connections and feed them through holes in the bottom of the base as shown in the last picture of this step. Do the same for the MK 2125 sensor, you will also need to drill more holes in the bottom of the base to create space for the pins and connect jumper wires to these pins.
The important thing with the sensor is to fit it inside the base flat down and facing one of the LED's. This will be your FRONT LED which is useful later on for calibrating the sensor.
When all of these components are snug inside the base, you should be able to plug the jumped pins into your Arduino and test the code (TiltSense.ino) as shown in pic 5 of this step. If the code works fine and all the soldering is solid, fill the gaps with a bit of sponge / foam and tip a bit of superglue over the LED's to keep them locked in.
Step 2: Connecting the 6 and 2 Core Wires
In this step we will be extending the connections down from the base of the ball all the way down to the base of the stand with some 6 core and 2 core wires.
The ultimate goal here is to extend all the wires down from the top of the stand to the bottom of the stand, in the most convenient and durable way possible.
- 6 CORE
The way I decided to do this was to strip the 6 core wire slightly (shown in first picture) and:
- solder the LED's + Pins to 4/6 wires (these will plug into Arduino pins 10,11,12,13)
- solder the LED's - wires together and then to the - wire of the MK 2125 sensor to ground both the LED's and the sensor
- solder the + wire from the MK 2125 sensor and all of the connected - wires to 2/6 wires (These will plug into Arduino pins 5V and GND)
remember to use heat-shrink for all soldered connections to ensure the wires have a strong integrity and can handle dangling from the top bag base to the bottom stand base.
- 2 CORE
At this stage there should be 2 connections remaining which are the transmission wires from the MK 2125 sensor that will send the tilt data from the bag to the Arduino. This is how we will eventually determine which direction the bag is being hit.
- Solder the transmission wires to each of the 2 core wires (These will plug into Arduino pins 2 and 3)
Once you have successfully soldered all of these connections you will need to then solder the other end of these wires to some jumper wires with Arduino compatible pins (shown in the second + third picture).
Step 3: Testing the Upgraded Bag
I decided to secure all the connecting wires to the base stand with velcro to prevent them from moving around too much and damaging the soldered connections.The Arduino and 9V battery pack are housed within a tupperware container, which has also been connected to the base using velcro.
If you've come this far you should be ready and eager to test out your interactive reflex bag. Hope you enjoy this instructable, I plan on making upgrades to this project in the future since I am stoked with the outcome so stay tuned!.
I'm currently brainstorming ideas as to how I could create a point scoring or high score system for this device, if you think of any possible additions to this project please drop a comment or pm me.
Don't hesitate to ask any questions in the comment section, I will make sure to get back to you asap.
If you liked this, please vote for me in the Arduino or Make It Glow Contests. It would mean a lot, thanks!
Participated in the
Make it Glow Contest 2016
Participated in the
Arduino Contest 2016
Question 2 years ago on Step 3
Excellent project! I want to do something similar and your project has been a great inspiration form me. I have the same punching bag, can you please tell me how did you remove the red ball from the plastic?
|
OPCFW_CODE
|
How does For loop work in VB.NET?
I though that I know this one... I had no clue.
This simple For loop:
Dim i As Integer
Dim n As Integer = 10
Dim s As Integer = 1
For i = 0 To n Step s
Console.WriteLine(i)
Next
compiles into this (I put it through Refelctor, so it's easier to read). I couldn't even get what it does with all these bit-shifts:
Dim n As Integer = 10
Dim VB$t_i4$L1 As Integer = 1
Dim VB$t_i4$L0 As Integer = n
Dim i As Integer = 0
Do While (((VB$t_i4$L1 >> &H1F) Xor i) <= ((VB$t_i4$L1 >> &H1F) Xor VB$t_i4$L0))
Console.WriteLine(i)
i = (i + VB$t_i4$L1)
Loop
Why For loop is mutilated like this?
Probably because it's the "generic way" to cover all cases. Remember that for/step/next can go in any direction with any sign on the increment.
You used parameters on both the increment and the end-bound. The compiler has no way to know if you are going to count up or down, and if th end-bound is higher or lower than the start bound.
My guess is this is a way to get code that will work whatever you put in n and s (just a guess, I'm too tired to try and see if that is the case).
Also it makes copies of the parameters to prevent outside interference (like changing s or n during the enumeration).
=== UPDATE ===
I doubt anybody is still watching that question but I came back to this nugget just for the sake of completeness, and because I had some time.
What VB is doing is a bit sneaky. The bit shift of S basically creates an integer based on the sign of S (it copies the MSB of s, ending up with &hFFFFFFFF is S is negative and &h00000000 if S is positive).
XOR of an integer value with -1 is equivalent to (-value-1). XOR with 0 is obviously a NOP.
So if s is negative, it reverses both values (the -1 cancel each other) to compare them, effectively reversing the order of comparison without the need or a conditional, and thus no jump in the program flow.
If s is positive it just compares them.
so, for s<0 you end up with
while (-i-1)<=(-n-1)
==> while -i <= -n
==> while i>=n
for s>0 you end up with
while i <= n
Takes me back to my 68k ASM days where such tricks where the everyday stuff (such as XOR.l D0,D0 because XORing a register with itself was faster than loading zero in it...) :p
I guess the obvious answer is that a For Next loop can be written as a Do While loop and converting one to the other means you only have to implement one type of loop in the complier.
I can see how this works but why it is done, I've no idea.
|
STACK_EXCHANGE
|
These applications use or expose RO-Crates to describe Data, Datasets and Workflows:
- Language Data Commons of Australia (LDaCA)
- Workflow Execution Service (WfExS)
- Research Object Composer
- Machine-actionable data management plans
- Data Stewardship Wizard
- Sciebo RDS
WorkflowHub imports and exports Workflow RO-Crates, using it as an exchange format. They are a specialization of RO-Crate for packaging an executable workflow with all necessary documentation. It is aligned with, and intends to strictly extend, the more general Bioschemas ComputationalWorkflow profile.
LifeMonitor uses RO-Crate as an exchange format for describing test suites associated with workflows. To this end, the LifeMonitor team is developing an extension to the Workflow RO-Crate specification to support the inclusion of metadata related to the testing of computational workflows stored in the crate.
LDaCA uses RO-Crate as an interchange and archive format for language data, and is providing data discovery portals and API access to data using RO-Crate-centric APIs.]
Arkisto uses RO-Crate for packaging data objects in the 3 uses cases described below, Modern PARADISEC, UTS Research Data Repository and UTS Cultural Datasets.
As part of these use-cases they have been developing or enhancing their tooling to facilitate their use of RO-Crate
- OCFL-indexer is a NodeJS application that walks the Oxford Common File Layout on the file system, validate RO-Crate Metadata Files and parse into objects registered in Elasticsearch. (~ alpha)
- ocfl-tools contains tools for managing RO-Crates in an OCFL repository .
- ONI indexer
Modern PARADISEC demonstrates the use of RO-Crate to describe the collections and items. The demonstrator includes an elastic search service and a webserver but the key feature is that it keeps working with only the filesystem and a webserver.
The UTS Data Repository UTS Research Data Repository is a searchable portal for discovering and accessing public datasets by UTS researchers. Datasets are described with RO-Crates and published either through the University’s institutional research data management system or direct import from research storage devices for very large datasets.
The UTS Cultural Datasets project is collaborating with Humanities and Social Science (HASS) researchers and is re-using existing UTS Data infrastructure to build interactive services that allow people to use the data. They make use of RO-Crate to be able to directly transfer data and mappings to the Expert Nation database.
WfExS-backend is a high-level workflow execution command line program that consumes and creates RO-Crates, focusing on the interconnection of content-sensitive research infrastructures for handling sensitive human data analysis scenarios. WfExS-backend delegates workflow execution of existing workflow engines, and it is designed to facilitate more secure and reproducible workflow executions to promote analysis reproducibility and replicability. Secure executions are achieved using FUSE encrypted directories for non-disclosable inputs, intermediate workflow execution results and output files.
RO-Crates are, indeed, an element of knowledge transfer between repeated workflow executions. WfExS-backend stores all the gathered details, output metadata and execution provenance in the output RO-Crate to achieve future reproducible executions. Final execution results can be encrypted with crypt4gh GA4GH standard using the public keys of the target researchers or destination, so the results can be safely moved outside the execution environments through unsecured networks and storages.
ROHub is a solution for the storage, lifecycle management and preservation of scientific work and operational processes via research objects. It makes these resources available to others, allows to publish and release them through a DOI, and allows to discover and reuse pre-existing scientific knowledge.
ROHub imports and exports RO-Crates, using it as an exchange format, particularly for Earth Science data cubes following the RELIANCE RO-Crate profile.
Research Object Composer is a REST API for gradually building and depositing Research Objects according to a pre-defined profile. It uses JSON as an intermediate format and modified JSON schemas to define a Profile (RO-Crate support alpha)
RDA maDMP Mapper and [Ro-Crate_2ma-DMP](https://github.com/BrennerG/Ro-Crate_2_ma-DMP/tree/r2d) can convert between machine-actionable data management plans (maDMP) and RO-Crate. See https://doi.org/10.4126/frl01-006423291 for details.
DataPlant is implementing Annotated Research Context (ARC), an RO-Crate profile that combines the Investigation Study Assay model (ISA) and the Common Workflow Language (CWL) to capture a range from single experimental setups to complex experimental designs.
In ARC, files are managed in a git repository with a fixed structure following the ISA model, in addition to metadata in an Excel spreadsheet. The arcCommander tool can help with managing this structure, while the tool arc–to-roc can inspect the structure to generate an RO-Crate metadata file. The ARC specification allows augmentation by adding an explicit
ro-crate-metadata.json to the ARC.
FAIRSCAPE is a framework for reusable cloud-based computations using ARK identifiers with rich provenance in an evidence graph and the Evidence Graph Ontology (EVI). The command line fairscape-cli uses RO-Crate and BagIt for data validation and packaging in FAIRSCAPE. This approach is used for Cell Maps for AI (CM4AI), a part of NIH’s Bridge2AI program.
- Example: https://doi.org/10.5281/zenodo.8132917
- Publication: https://doi.org/10.1007/978-3-030-80960-7_3
- Preprint: https://doi.org/10.37044/osf.io/24jst
Sciebo RDS (Research Data Services) is a self-hosted interface between data repositories and file storage solutions, assisting the research data deposition process with annotations made using Describo Online and stored as an RO-Crate, which is then mapped to the chosen repository’s metadata scheme. Supported repositories include OSF, InvenioRDM, Harvard Dataverse. This is developed as a CS3MESH4EOSC with cultural heritage studies archive PARADISEC as use case.
AROMA (ARP RO-Crate Manager) is part of Hungarian initiative ELKH ARP, extending Harvard Dataverse to allow dynamic metadata editing of data deposit metadata using multiple schemas, mapped using and presented using the Describo Crate Builder Web component. Different Metadata blocks in Dataverse are supported.
Work on Dataverse support for RO-Crate continues in collaboration with FAIR-IMPACT collaborators. The [ELN archive])https://github.com/gdcc/dataverse-previewers/pull/21()
|
OPCFW_CODE
|
Holczer Balazs creates this Udemy course. In this course, you can learn all about basic and advance features of Artificial Intelligence. You can get required professional skills for Artificial Intelligence to deal with real world AI projects. After taking this course, you should be able to design your own smart applications, AI, genetic algorithms, pruning, heuristics and metaheuristics algorithms.
Now you can get all basic and advance features of AI for games development using Java. Learn all methods of AI with different algorithms. Understand how to deal with AI environment for developing algorithms.
Artificial Intelligence I: Basics and Games in Java Course has more than 5.2 k students with 499 good ratings.
This course Last updated in April 2019.
The basic requirements of Artificial Intelligence I: Basics and Games in Java course are basic knowledge of math, AI algorithms, Python structure or syntax and English language as the language of instructor is English and complete attention during lectures.
For this Free Udemy course, you can use AI tools like Python IDE and Spyder. You must have a PC with good specs and strong internet connection. You can also use any online libraries for data analysis.
This course provides professional help for computer science students who enrolled in Artificial Intelligence course and fresh graduates of Data Science.
This Udemy free course is designed for you, if you want to become a fully-fledged AI expert. Download this course and learn required Artificial Intelligence skills.
Understand how AI algorithms work and all the theory behind these Algorithms.
The good thing for this Udemy free course is that all the tutorials are designed in a simple way, one can understand easily. You can get personal help from these Instructors during office time and using Chat.
If you want to understand all about Graph-Search Algorithms, Basic Optimization Algorithms, Meta-Heuristic, Tabu Search, Simulated Annealing, Genetic Algorithms, Particle Swarm Optimization and Minimax Algorithm than this course is for you.
Now you become an expert in AI games using Java language. Learn the basics of genetic algorithms or pruning for your smart applications.
In this course, you can learn path finding algorithms, graph traversal (BFS and DFS), enhanced search algorihtms and A* search algorithm.
Learn all about basic optimization algorithms, brute-force search, stochastic search and hill climbing algorithm.
Understand all the fundamental concepts of heuristics and meta-heuristics, tabu search, simulated annealing, genetic algorithms and particle swarm optimization.
Artificial Intelligence I: Basics and Games in Java course also provides practical examples like game trees, applications of game trees in chess and tic tac toe game and its implementation
This Artificial Intelligence course is best for Game developers and Java programmers.
After taking this Udemy free course, you can build any AI game project using your favorite AI techniques.
This Udemy free course has English language with English subtitles.
You can also get 7.5 hours of video lectures, more than 2 helping articles, no practice test and 1 downloadable resource for Artificial Intelligence.
Estimated size of Artificial Intelligence I: Basics and Games in Java course is 1.24 GB.
smart applications, AI, genetic algorithms, pruning, heuristics and metaheuristics
Graph-Search Algorithms, Meta-Heuristic Optimization Methods, Tabu Search, Simulated Annealing, Genetic Algorithms, Particle Swarm Optimization and Minimax Algorithm - Game Engines
On Udemy platform, the main category of this course is Development and sub category is Data Science and Artificial Intelligence
Adolfo Terrón is student of this course and he says “Good theoretical explanations of algorithms. Support of graphs make them much more visual and understandable. But it fails when it comes into the practice, He writes Java samples as if he had learnt them by heart instead of following an iterative process. That way, you get lost frequently since you don't have the final picture.”
Holczer Balazs created Artificial Intelligence I: Basics and Games in Java course. These instructors also create more than 30 courses with 134k students and hundreds of good reviews. The expertise of these instructors are Artificial Intelligence, Python, parallel computing and Java development.
Free Download Artificial Intelligence I: Basics and Games in Java course and increase your AI practical skills. This course is designed for Computer Science students and games lovers. Learn practical examples of AI games using Java.
Artificial Intelligence I: Basics and Games in Java, Udemy Artificial Intelligence courses
|
OPCFW_CODE
|
SOAS Centre of Yoga Studies was honoured to host the online recording of the Haṭhapradīpikā Symposium – Launch of the New Digital Edition that took place at the University of Oxford, Bodleian Weston Library on the 23rd February 2024.
This symposium showcases the collaborative outputs of the Light on Hatha Yoga Project, which was funded by the Arts and Humanities Research Council (AHRC) and the German Research Foundation Deutsche Forschungsgemeinschaft (DFG) from January 2021 to January 2024.
This three-year research project brought together arts and humanities researchers in the UK and Germany to conduct outstanding joint research. The project has produced a digital critical edition and English translation of the Haṭhapradīpikā, authored by Svātmārāma in the early 15th century, which is arguably one of the most widely cited and influential texts on physical yoga, and is instrumental for the flourishing of haṭhayoga on the eve of colonialism.
Building on the success of the five-year ERC-funded Hatha Yoga Project at SOAS University of London, scholars Prof. James Mallinson and Dr Jason Birch of the University of Oxford have collaborated with Prof. Dr Jürgen Hanneder, Dr Mitsuyo Demoto, and Nils Jacob Liersch, PhD Candidate of Philipps-Universität Marburg to produce this critical edition and English translation based on over 200 manuscripts, written in a variety of Indic scripts. The oldest manuscript sourced for the project is dated 1496 CE, which is remarkably close to the date of authorship by Svātmārāma himself.
Beyond the principal investigators and senior researchers, this project has been supported by research assistants at the École française d’Extrême-Orient (EFEO) in Pondicherry, India.
Prof. James Mallinson, University of Oxford (00:00)
The Composition of Svātmārāma’s Haṭhapradīpikā.
Nils-Jacob Liersch, MA PhD Candidate, Philipps-Universität Marburg (34:19)
Computer Stemmatics applied to the Haṭhapradīpikā.
Dr Mitsuyo Demoto, Philipps-Universität Marburg (01:04:11)
Development of the various recensions of the Haṭhapradīpikā.
Dr Jason Birch, University of Oxford (01:39:18)
Insights from the New Critical Edition of the Haṭhapradīpikā.
Prof. Jürgen Hanneder, Philipps-Universität Marburg (02:19:19)
Brahmānanda’s Commentary on the Haṭhapradīpikā.
Launch of the New Digital Edition. (02:54:17)
Hathapradipika.online (2024, Language: Sanskrit, English).
|
OPCFW_CODE
|
A good chatbot requires a lot of structured data, in order for it to carry out an enjoyable conversation with users. Entering this data, however, is not the most delightful task in the world. Being one of the world’s largest conversational AI platforms, we have built tools that help make this task not just easy, but also fun!
Our chatbot platform has many tools such as a bot builder, analytics dashboard, admin portals and a desktop chat tool for humans to take over.
We use a ReactJS based bot-builder tool which is one of our offerings on the Haptik platform. This tool updates various databases such as MySql, MongoDB, Elastic search and Redis which the chatbot uses in real-time. This is what it looks like:
Chatbot development process takes place in our staging/UAT environment. This is where we use the bot builder tool to build the bot from scratch or a predefined bot template. While building the bot we add all the chatbot specific data and train the bot which pushes data into various data stores. Within this environment, one can test the bot (can be seen on the left bottom corner of the above screenshot) and know if it is working as expected.
Next step is to move this bot to production. Re-creating the bot again in the production environment with all the data would be a nightmare. So, for this, we built a feature using which one can transfer an entire chatbot from staging to the production environment.
During the above transfer process, Elasticsearch was the most time-consuming of the lot, it literally accounted for 80% of all the data that needed to be moved, mainly because that bit of logic was always shared across all our deployed chatbots.
The Elasticsearch transfer alone took 15 mins. This was acceptable when there were bots transferred once on a weekly basis, but we soon found ourselves in a situation where we were shipping 10 Chatbots on a daily basis.
We clearly had work to do, so let’s go ahead and understand the problem in detail, and how we fixed it using a beautiful concept called aliases in Elasticsearch.
What the System looked like earlier
This was our legacy system, built once during our initial stages and not upgraded for over a year.
The diagram below depicts the older process of managing Elasticsearch data:
When our bot builder tool requests to transfer a bot, it also means it’s time to transfer all the Elasticsearch data to our production environment.
Here is a simplified algorithm we used to do the same: (refer to the above diagram)
- 1. We copy the current state of the data in staging live index to prod temp index
- 2. We take a backup of the prod live index to prod backup index
- 3. Delete prod live index
- 4. All endpoints now rely on CloudFront caching:
What allowed us to delete prod live index on Elasticsearch was AWS CloudFront CDN caching. We cache responses for various URLs and during that time the request would directly be served via CF without touching our backend application.
- 5. Copy prod temp index into new created prod live index
- 6. Prod live index now has the latest data
We mainly used elasticdump to perform most of the above actions, we also make sure that we copy the settings and mappings of the source index. Take a look at the code snippet:
For more information, please refer elasticdump.
The above process took at least 10-15 minutes, mainly because of the multiple copies and backups of a huge data set.
Clear Problem Areas as Identified:
- 1. Data was stored in a very monolithic manner.
- 2. Unnecessary temp index was present on our production Elasticsearch environment.
- 3. Transferring all data for a single bot took 10-15 mins.
- 4. Transferring Multiple Bots simultaneously was not a possibility.
- 5. Elasticsearch Indices setup in production and staging environments were not the same as on prod we had three indices (temp, backup and live), and on staging, we only had one which was treated as the live index.
As the system matured with us, we reached a point where at least 10 bots were being transferred a day, we now needed to upgrade the expensive Elasticsearch data transfer process into a seamless 10-20 seconds activity.
Elasticsearch Aliases as the name suggests allows us to create a pointer of sorts, that will behave like an index when queried for, and we can point it internally to multiple indices, quickly and seamlessly.
We could definitely use this since changing the index to which an alias points to is inexpensive. Thereby, we could also reduce the total amount of data being transferred and the concept had a lot of potential.
To add and remove indexes for an alias, a simple query like the one below would create the alias if it does not exist, moreover, it would remove index_one if it was pointing to it, and add index_two, again if it was not already pointing to it.
For complete information on what an alias is please refer Aliases API.
Analyzing our data
We then ran some analysis on our Elasticsearch data. Our aim was to figure out which part of our dataset changed frequently, rarely, or never. Once we were equipped with this information, we were able to use aliases to our advantage.
Here is what we found:
- 1. We found that around 70% of our data was not changing at all.
- 2. 20% of our data rarely changed, like once a month.
- 3. 10% data was extremely dynamic in nature.
4. We started off with a simple setup script that we could run across all environments.
First, the setup script would segregate the environment’s live index into 3 small indices, and also create an alias, which would point to 3 smaller, newly-created indices. Our permanent data and rarely changing data would now be separate from our dynamic data. Thus, our systems would be ready for an improved transfer process.
The basic algorithm for architecture setup:
1. Pull out 80% of the static data that does not change
2. Create an index say permanent_data and dump 70% permanent data there
3. Create another index say dynamic_data and dump 10% there
4. Create the last index for rarely changing system_data and dump 20% there
5. Create an alias that points to permanent_data, system_data and dynamic_data
6. Delete & remove the dependency from the prod index
7. Finally, rename alias to prod index name
From an endpoint perspective, there were no changes required, so all underlying systems continued to work as expected.
Let's put it all together now!
Our permanent chatbot training data which accounts for 70% of the data is maintained in one separate index.
System chatbot data (20%) which changes like once a month on average is maintained on a separate index.
Dynamic chatbot data (10%) in its own dedicated index, two copies will have to be maintained, let’s say version_1 and version_2
Also, note that the current setup is consistent across dev, staging and prod owing to our setup script.
To understand this, let’s consider the scenario that currently Staging alias points to version 1 and Prod alias points to version 2:
- 1. When the transfer button is hit on staging, the data to be transferred is pulled from staging version 1 (as that is the current live index of sorts in our staging environment).
- 3. ES settings and mappings of staging version 1 are copied as settings in prod version 1
- 5. Delete and update query (as generated in step 1) is now inserted into prod version 2.
- 6. Change Alias to point to prod version 2, and remove version1.
- 7. Version 1 is now logically considered the backup.
Note: When copying settings and mappings across two different environments it is essential to first create a blank index on the destination environment, with the new mappings and settings and then copy the data into the destination index.
The code snippet below shows how we copy source index’s settings, mappings and eventually the data into the destination index:
To understand what we mean by indices, mapping and setting, please refer here.
Another part of our code that will delete and update specific records on an index, we used Elasticsearch helpers for this.
Note: Elasticsearch helpers is an open source python library maintained officially by Elasticsearch itself. It is a collection of simple helper functions that abstract some specifics of the raw API, we specifically used it for its bulk update functionality.
For more information on Elasticsearch helpers, refer here.
The below diagram represents what the Elasticsearch indices architecture looks like after the systems were updated along with its underlying code.
Yes, now all our environments are in sync, all our data moves seamlessly and we are capable of deploying several bots on a daily basis.
The amount of data being transferred across environments has been reduced by 70%
We have also managed to reduce the total amount of data that needs to be eventually stored and maintained on our Elasticsearch servers across all three environments.
- No more Temp index or Backup Index
- Backup of only the 10% dynamic bot specific data
We now transfer chatbot specific data in under 15-20 seconds as compared to the earlier 15 minutes.
Now that’s an optimization task done right.
|
OPCFW_CODE
|
Comparing CSV Files
CSV file comparisons are very similar to database data comparisons. CSV files are treated just as any other data source, and you can connect to them using the same connection wizard used with databases. This example describes how to compare two CSV files; however, you can also add a database on either side of the comparison.
Step 1: Add the "left" and "right" data source
1.On the File menu, click Compare Database Data. Alternatively, click the Data Comparison toolbar button. Follow the wizard steps and browse for the CSV file. Make sure to select the correct separator (comma, semicolon, or tab) and to indicate whether the first row of the CSV file is a header row. For more information, see Adding CSV Files as Data Source.
2.When prompted to give a name to the data source, type a descriptive name to easily identify this CSV file, and click OK.
3.Select the check box next to the CSV table ("data", in this example), and then click either Left Side or Right Side to designate this CSV as the left or the right side of the comparison.
4.So far, you have added only the first data source. To add the second one, click the Browse button of the empty component, and choose the second data source from the Data Source list, if one is available. Otherwise, click Quick Connect and follow the wizard steps to connect to the second data source.
At this stage, you should have at least one object displayed on each side of the comparison, for example:
In CSV files, the column names shown in the components above are the ones that appear on the first row of the CSV file. This assumes that you have selected the Treat first row as header option while adding the CSV data source. Otherwise, columns are displayed as c1, c2, c3, and so on, according to their ordinal number, starting with 1.
Step 2: Map column pairs
You can now indicate the column pairs that are to be included in the comparison by drawing mapping connections between them, for example:
To create a mapping, click the triangle on the left component and, holding the left mouse button pressed, drag it to a target triangle on the right component. To delete all mappings of a comparison, right-click the title bar of either component and choose Unmap items from the context menu. To delete a single mapping, right-click the appropriate object and choose Unmap selected from the context menu. Alternatively, click the connection line between two mapped objects and press Delete.
|Note:||Unmapping a table will also unmap all columns of that table.|
Step 3: Run the comparison
Now that the mappings are created, you can run the comparison as follows:
•On the Diff and Merge menu, click Start Comparison. (Alternatively, click the Start Comparison toolbar button, or press F5.)
At this stage, the icon appears on the mapping line if the compared data is not equal. Move the mouse cursor over this icon to view a quick summary, for example:
For information about exploring the comparison results in more detail, see Viewing Differences Between Tables.
Optionally, you can also merge data between two CSV files, or between a CSV file and a database, in either direction, see Merging CSV and Database Differences.
|
OPCFW_CODE
|
HaloO Jonathan, you wrote: > Of course, you then run into a problem if the class _doesn't_ redefine > method equal; if it doesn't, then what is GenPointMixin::equal > calling?
This is the reason why there is a type bound on the class that should result in a composition error when the equal method is missing. Shouldn't the 'divert' be a trait of the method instead of a key/value pair on the class? And what does your syntax mean? Looks like the key is indicating the method and the value is the namespace where method lookup starts. > And you also run into a problem if you want the class to > track the position in polar coordinates instead of rectilinear ones - > it would still represent a point conceptually, but the implementation > of method equal (at least) would need to be overridden. Oh, yes. Doing the right thing is difficult. Changing representation of the point while keeping the interface can only be done in the class. OTOH, the interface would include the rectilinear accessor methods. And these are called in the equal method. Thus a class doing the GenPoint role correctly needs to provide .x and .y methods even if they aren't simple auto-generated accessors of attributes. But note that these two are not going through the superclass interface but the self type. In the case of the equal method the dispatch slot contains the role's closure which calls into the class' closure. There is no dispatch to the role because the role is flattened out in the composition process. It is interesting to think of another PolarPoint role that also has an equal method that would conflict with the GenPoint one. Then the class has to disambiguate. Which in turn requires the class to provide an equal method that overrides both role versions. Note that I think the conflict detection of role methods prevents the composition of the equal method through the superclass interface. I admit that this warps the meaning of the class' equal method from beeing an aspect in the role's method to the definer of the method. This can be a source of subtle bugs. That is the class composer can't distinguish an aspect method from a disambiguation one unless we introduce e.g. an 'is disambig' trait. And e.g. an 'is override' trait when the class designer wishes to replace a role method even if there's no conflict. > A cleaner solution would be to define a private helper method which > the role then demands that the class override. This is _very_ similar > to the solution that you described as "clumsy" a few posts back, with > the main difference being that the helper method, being private, can't > be called outside of the class. To me, the only clumsiness of this > solution comes directly from the cluminess of the overall example, and > I consider your proposed alternative to be equally clumsy - it merely > trades one set of problems for another. There's a lot of truth in that. But I don't consider the example as clumsy. It is an important issue that arises whenever method recombination is needed to deal with the guarantees that the role as a type makes. Calculation of a type bound on the class that has to be met in the composition is a strong tool. >> And yes, in Perl 6 the method isn't called equal but eqv or === and >> has a default implementation that retrieves the .WHICH of both args. > > What inspired that comment? Sorry, I didn't want to intimidate you. But I wanted to prevent comments that choosing a method equal is not the right approach and MMD should be used instead. But I think all of what we discussed so far stays valid if the equal method is a multi. IIRC there is just one namespace slot for the short name. And we are discussing how this slot is filled in the class composition process. BTW, why have you gone off-list? Regards, TSa. --
|
OPCFW_CODE
|
M: Django settings template - va1en0k
http://unfoldthat.com/2011/05/01/django-settings-extended-template.html
R: philipkimmey
This is remarkably similar to how I setup my Django settings.py files. The
only thing I'd add is I like to have a lib folder and do a sys.path.insert(0,
PATH/TO/Lib/Folder) in my manage.py and .wsgi files. This makes it really easy
to manage dependencies. The other option is to use virtualenv, but in my
experience it falls down pretty hard on modules that require C bindings, so
why not just manage it manually? In addition, if you use git, you can take
advantage of git's submodule functionality to really easily keep track of
specific versions of your submodules.
I was actually in the middle of a similar writeup that includes the above
tips, so I'll post that when I'm done.
R: tswicegood
> virtualenv... falls down pretty hard on mouldes that require C bindings
Care to elaborate? virtualenv has worked fine for me and that plus pip means
that the rest of the community can use and understand your code and all of its
dependencies without having to have access to you or your machine to figure
them out.
R: philipkimmey
It's possible that I simply haven't made enough effort, but trying to get PIL
and Reportlab working from a virtualenv has been an uphill battle.
I'd really like those libraries to work on OSX, Linux & Windows, but so far it
has been easier to just install those two packages on the machines I need them
on.
R: jordanmessina
Really great writeup! You should check out Bueda's django-boilerplate repo on
Github. Particularly their environment.py file which resolves your dislike for
putting site specific apps in the root (it adds them all to the path instead):
<https://github.com/bueda/django-boilerplate>
R: eli
Nice post. I like that it explains _why_ you like this layout.
I'm working on my first real Django project now and as a newbie I found the
lack of a clear consensus on how to structure the files a bit of a challenge.
R: jamespacileo
Great article. Might use a few ideas in <http://www.djangocanvas.com>, I sitll
haven't finalized the project structure and will likely support multiple ones.
R: va1en0k
oh I like your idea. is it opensource? I'd like to try helping you
R: yuvadam
Great stuff!
I personally have been using many of these techniques in Django projects of
mine, but it's nice to have a repo with everything in the same place.
Specifically - I love the lambda * x hack.
|
HACKER_NEWS
|
# -*- coding: utf-8 -*-
"""
Created on Sat Feb 6 10:36:45 2021
@author: 91842
"""
N = int(input()) #no. of members
mega_list = []
new_list = []
ans = []
arr = list(map(int,input().split())) # amount collected by each member
P = int(input()) # No. of pairs
for i in range(P):
grp = list(map(int,input().split()))
mega_list.append(grp)
if N==0:
print(0)
combined_list = [item for sublist in mega_list for item in sublist]
combined_list1 = list(set(combined_list))
for i in range(1,N+1):
if i not in combined_list1:
new_list.append(i)
new = []
while len(mega_list)>0:
first, *rest = mega_list
first = set(first)
lf = -1
while len(first)>lf:
lf = len(first)
rest2 = []
for r in rest:
if len(first.intersection(set(r)))>0:
first |= set(r)
else:
rest2.append(r)
rest = rest2
new.append(first)
mega_list = rest
sum = 0
x = len(new)
for i in range(x):
for j in new[i]:
sum = sum+arr[j-1]
ans.append(sum)
sum=0
i=0
y = len(new_list)
if y>0:
for i in range(y):
z = new_list[i]
ans.append(arr[z-1])
ans.sort(reverse=True)
print(ans[0])
|
STACK_EDU
|
"""Class for handling the embedding database."""
import numpy as np
from scipy.optimize import linear_sum_assignment
from utils import calc_cosine_sim, calc_distance
class EmbeddingsDatabase():
"""Class for handling the embedding database. Database consists of list of tuples
that have the following structure: (id, calls_since_last_update, embedding_vector)."""
def __init__(self, memory_length=15, memory_update=1, metric='Euclidean'):
self.database = [] # Create empty database
self.curr_max_id = 0 # Current highest identification number in the database
self.memory_length = memory_length # Length in frames to memorize the embeddings
self.memory_update = memory_update # Memory update value (0 is no update, 1 is replace)
if metric == 'Euclidean':
self.function = calc_distance
elif metric == 'cosine':
self.function = calc_cosine_sim
else:
raise Exception('Unknown metric function!')
self.total_cost = 0
self.num_samples = 0
def update_database(self):
"""Update database by removing expired elements."""
self.database = [(e[0], e[1]+1, e[2]) for e in self.database if e[1] < self.memory_length]
def update_embedding(self, new_embedding, index):
"""Update single embedding in the database."""
t = self.database[index]
self.database[index] = (t[0],
0,
(1-self.memory_update) * t[2] + self.memory_update * new_embedding)
return t[0]
def add_embedding(self, new_embedding):
"""Add new embedding to the database."""
new_embedding_id = self.curr_max_id
self.curr_max_id += 1
self.database.append((new_embedding_id, 0, new_embedding))
return new_embedding_id
def match_embeddings(self, new_embeddings, max_distance=0.1):
"""Match the embeddings in 'new_embeddings' with embeddings in the database."""
self.update_database() # Update the database and remove expired elements
ids_list = []
if not self.database:
for new_embedding in new_embeddings:
ids_list.append(self.add_embedding(new_embedding))
return ids_list
# Create cost matrix
cost_matrix = np.empty([len(new_embeddings), len(self.database)])
for i, new_embedding in enumerate(new_embeddings):
for j, element in enumerate(self.database):
cost_matrix[i, j] = self.function(new_embedding, element[2])
# print(cost_matrix)
# Use the Hugarian algorithm for unique assignment of ids
row_indices, col_indices = linear_sum_assignment(cost_matrix)
for row_index, new_embedding in enumerate(new_embeddings):
if row_index in row_indices:
col_index = col_indices[row_indices.tolist().index(row_index)]
# print(cost_matrix[row_index, col_index])
self.update_average_cost(cost_matrix[row_index, col_index])
if cost_matrix[row_index, col_index] <= max_distance:
# Embedding is assigned and distance is not too large
ids_list.append(self.update_embedding(new_embedding, col_index))
else:
# Embedding is assigned but distance is too large
ids_list.append(self.add_embedding(new_embedding))
else:
# Embedding is not assigned
ids_list.append(self.add_embedding(new_embedding))
return ids_list
def update_average_cost(self, cost_value):
"""Update the total cost and number of samples."""
self.total_cost += cost_value
self.num_samples += 1
def get_average_cost(self):
"""Return the average cost since last call."""
avg_cost = self.total_cost / self.num_samples
self.total_cost = 0 # Reset the total cost
self.num_samples = 0 # Reset the number of samples
return avg_cost
|
STACK_EDU
|
###########################################################################
# Demonstrates HSDQuery capabilities on an example inpout for nanocut
###########################################################################
import sys
from io import StringIO
import numpy as np
from hsd.common import *
from hsd.tree import HSDTree
from hsd.treebuilder import HSDTreeBuilder
from hsd.parser import HSDParser
from hsd.converter import *
from hsdnum.converter import *
from hsd.query import *
from hsd.formatter import HSDFormatter
ATTR_UNIT = "unit"
###########################################################################
# Converter for some complex datatypes in the input
###########################################################################
class HSDGeometry(HSDConverter):
"""Converts geometry from HSD to (types, coords) tuple.
The tuple returned contains the (-1,) shaped array types, with the chemical
symbol of every atom, and the (-1, 3) shaped array coords with the
corresponding coordinates.
Does not implement its own tohsd(), so do not use to produce HSD output.
"""
def __init__(self, basis):
"""Initializes HSDGeometry instance.
Args:
basis: Basis vectors of the lattice to used for conversion from
fractional coordinates into cartesian.
"""
self.basis = basis
self.setallowedattribs([ ATTR_UNIT, ])
def fromhsd(self, node):
self.checkattributes(node)
unit = node.get(ATTR_UNIT, "fractional")
isfractional = (unit == "fractional")
if not isfractional and unit != "cartesian":
raise HSDInvalidAttributeValueException()
words = node.text.split()
nn = len(words)
if nn % 4:
raise HSDInvalidTagValueException()
types = [ words[ii] for ii in range(0, nn, 4) ]
tmp = [ (float(words[ii]), float(words[ii+1]), float(words[ii+2]))
for ii in range(1, nn, 4) ]
coords = np.array(tmp, dtype=float)
if isfractional:
coords = np.dot(coords, self.basis)
return types, coords
class HSDCoordVector(HSDArray):
"""Converter for a 3 component coordinate vector.
The vector can be given in cartesian or fractional coordinates, the
converted (3,) array is in cartesian coordinates.
"""
def __init__(self, basis):
super().__init__(float, (3,))
self.basis = basis
self.setallowedattribs([ ATTR_UNIT, ])
def fromhsd(self, node):
coords = super().fromhsd(node)
unit = node.get(ATTR_UNIT, "fractional")
isfractional = (unit == "fractional")
if not isfractional and unit != "cartesian":
raise HSDInvalidAttributeValueException()
if isfractional:
coords = np.dot(coords, self.basis)
return types, coords
class HSDPlanesAndDistances(HSDArray):
"""Converts PlanesAndDistances.
Converted format is a tuple (planevecs, dists) containing
the (-1, 3) array planevecs with the normal vectors of the planes and
the (-1,) array dists with the distances of the planes from the origin.
"""
def __init__(self, basis):
super().__init__(float, (-1, 4))
self.basis = basis
self.setallowedattribs([ ATTR_UNIT, ])
def fromhsd(self, node):
array = super().fromhsd(node)
directions = array[:,0:3]
distances = array[:,3]
if np.any(np.abs(directions - directions.astype(int)) > 1e-12):
raise HSDInvalidTagValueException()
return (directions, distances)
###########################################################################
# The input
###########################################################################
stream = StringIO("""
crystal [test] {
lattice_vectors {
-0.189997466000E+01 0.189997466000E+01 0.485580074000E+01
0.189997466000E+01 -0.189997466000E+01 0.485580074000E+01
0.189997466000E+01 0.189997466000E+01 -0.485580074000E+01
}
# basis [fractional|cartesian] {
basis {
Ti 0.00000000e+00 0.00000000e+00 0.50000000e+00
Ti -2.50000000e-01 -7.50000000e-01 0.00000000e+00
O 2.05199515e-01 2.05199515e-01 0.50000000e+00
O -4.55199515e-01 4.48004847e-02 0.00000000e+00
O -2.05199515e-01 -2.05199515e-01 0.50000000e+00
O -4.48004847e-02 -5.44800485e-01 0.00000000e+00
}
}
periodicity = D1 {
axis [fractional] = 1 1 0
}
# Those options will not be parsed.
unknown_option = 12
unknown_option2 = uo3 {
uo4 = 42
}
cuts {
convex_prism {
order = 1
# planes_and_distances [cartesian|fractional]
planes_and_distances [cartesian] {
1 0 0 7.0
0 1 0 7.0
-1 0 0 7.0
0 -1 0 7.0
}
}
}
""")
###########################################################################
# Parsing the input
###########################################################################
# Building the tree using customized parser
parser = HSDParser(defattrib=ATTR_UNIT)
builder = HSDTreeBuilder(parser=parser)
root = builder.build(stream)
# Query object should mark all queried object as "processed"
qy = HSDQuery(markprocessed=True)
try:
crystal = qy.getchild(root, "crystal")
latvecs = qy.getvalue(crystal, "lattice_vectors", hsdfloatarray((3,3)))
# This option is not present in the output, default value will be set.
l2 = qy.getvalue(crystal, "latvecs2", hsdfloatarray((3,3)),
defvalue=np.identity(3, dtype=float), hsdblock=True)
types, coords = qy.getvalue(crystal, "basis", HSDGeometry(latvecs))
periodicity = qy.getchild(root, "periodicity")
pertype = qy.getonlychild(periodicity)
if pertype.tag == "D1":
axis = qy.getvalue(pertype, "axis", HSDCoordVector(latvecs))
elif pertype.tag == "D2":
pass
else:
raise HSDInvalidTagException(node=pertype, msg="Invalid periodicity "
"type '{}'.".format(pertype.tag))
cuts = qy.getchild(root, "cuts")
for cutmethod in qy.findchildren(cuts, "*"):
order = qy.getvalue(cutmethod, "order", hsdint)
if cutmethod.tag == "convex_prism":
planenormvecs, dists = qy.getvalue(cutmethod, "planes_and_distances",
HSDPlanesAndDistances(latvecs))
elif cutmethod.tag == "whatever":
pass
else:
raise HSDInvalidTagException(node=cutmethod, msg="Invalid cutting "
"method type '{}'.".format(cutmethod.tag))
except HSDQueryError as ex:
print("ERROR: " + str(ex))
if ex.file:
print("File: " + ex.file)
if ex.line is not None:
print("Line: {}".format(ex.line + 1))
else:
# Write out tree, which contains now all defaults explicitly set.
tree = HSDTree(root)
tree.writehsd(HSDFormatter(target=sys.stdout, closecomments=True))
# Give warning, if unprocessed nodes present.
unprocessed = qy.findunprocessednodes(root)
if unprocessed:
print("\nWARNING: UNPROCESSED NODES:")
for node in unprocessed:
print(node.tag)
|
STACK_EDU
|
Why does John Reese always speak with such a low voice volume?
I've been watching Person of Interest and couldn't help but notice how quiet John's voice is.
So my question is: why?
Is it to make the character appear more professional or something?
Is the character's real voice?
Is the actor's real voice?
Is there a technique behind it? If so, did anyone else used it?
My guess would be because it's intimidating as hell
In the season 4 episode 'Pretenders' a guest character asks John: "How do you do that with your voice?", to which he, in a rather deadpan manner, answers: "Do what?". - This seems to suggest that he isn't consciously altering his voice (in-universe). - Watch the scene on Youtube
John Reese is probably a high-functioning psychopath. His tone of voice is an indicator.
Although other people might see him as a world-weary ex-soldier who has suffered through the horrors of war, he is probably one of those who cause the horror. He enlisted in the Army to avoid charges, served in Special Forces, and was recruited by the CIA as an assassin (black ops).
Such people can be very charming, as I know from personal experience. I have known two, one of whom was a pattern for Reese, and both spoke the same, even when describing the murders of hundreds of innocent civilians.
I was most impressed with the matter-of-fact tone of voice, usually blaming the victims for their own demise. They were “just in the wrong place at the wrong time, getting in the way”.
Perhaps he feels remorse for his past actions, and the on-screen depiction is that of a man looking for redemption, but I don't buy into that.
His behavior in the series final would seem to contradict a psychopathy diagnosis.
@jmite Not having seen it, I would not be able to judge. When did it air? Never mind, I see it was in June.
June 21, 2016. I don't want to give away what happens, though.
@jmite I have no real intentions of seeing it (I got bored with the series in the middle of the second season), but I assume he did something heroic and noble, like dying to save someone.
@jmite Please see this link.
His behaviour throughout the series strongly indicates that he is not a psychopath. He's always unconfortable with Kara's reckless actions, and indeed he does feel deep remorse about his past endeavours, as shown in depth during his short relationship with psychiatrist Iris Campbell. Heck, he's even awkward around Shaw, who is in fact a psychopath.
Several episodes comment on John's was of speaking, moving and interacting with others. Usually something is mentioned about his military background when the subject comes up. John's background is that he was special forces trained. He may have gotten into the habit of speaking quietly so as to communicate with team members on missions without allowing any opponents to detect his position.
This sounds more like an opinion or guess than a real answer.
@MeatTrademark True, but it doesn't mean it shouldn't be considered.
|
STACK_EXCHANGE
|
Event Feed 2013
This page describes the event feed format that will be used at WF 2013
Getting the feed
Connect to port 4713 and the entire feed for the contest will be sent to you. When the contest is finalized, the contest tag will be closed and the connection closed.
Connect to port 4714 and the entire feed for the contest will be sent to you, with the exception that no judgement information will be sent for runs submitted during the last hour. When the contest is finalized, the contest tag will be closed and the connection closed (but you will still not get judgements for runs submitted the last hour).
The event feed is structured as an xml document within the root element <contest>.
All events are separate elements at the top level.
Some extra tags beyond what is documented here may also be sent, these should be ignored.
Event types are:
- contest information and updates
- submission language information
- super region information
- judgement information
- problem information
- team information
- submitted clarification information and updates
- submitted run information and updates
- judgement of individual test cases for a run
Most elements are identified by an id. Timestamped events all have a timestamp element in decimal seconds. Submission events have a time element in decimal seconds relative to the contest start when it was submitted.
Events have time attributes identical to the timestamp element.
Sample events with inner element descriptions:
<info> <length>05:00:00</length> <penalty>20</penalty> <started>False</started> <starttime>1265335138.26</starttime> <title>The 2010 World Finals of the ACM International Collegiate Programming Contest</title> </info>
- Length of contest in HH:MM:SS format.
- Penalty time in minutes.
- Started flag.
- Starttime as a timestamp in decimal seconds.
- Contest title string.
<language> <id>1</id> <name>C++</name> </language>
- Language identifier.
- Language name.
<region> <external-id>3012</external-id> <name>Europe</name> </region>
- Identidier from the registration system.
- Super region name.
<judgement> <acronym>CE</acronym> <name>Compile Error</name> </judgement>
- Short name
- descriptive name
<problem> <id>1</id> <name>A - APL Lives!</name> </problem>
- Problem identifier
- descriptive name
<team> <id>1</id> <name>American University of Beirut</name> <nationality>LBN</nationality> <university>American University of Beirut</university> <region>Europe</region> <external-id>23412</external-id> </team>
- Team identifier
- Team name
- nationality as ISO 3166-1 alpha-3
- university affiliation.
- super region name.
- team id from registration system.
Additional information may be provided.
<clar> <answer>The number of pieces will fit in a signed 32-bit integer.</answer> <answered>True</answered> <id>1</id> <question>What is the upper limit on the number of pieces of chocolate requested by the friends?</question> <team>0</team> <problem>1</problem> <timestamp>1265335256.74</timestamp> <to-all>True</to-all> </clar>
- Answered flag
- Clarification identifier
- Team ID
- Problem ID
- to-all flag
<run> <id>1410</id> <judged>True</judged> <language>C++</language> <penalty>True</penalty> <problem>4</problem> <result>WA</result> <solved>False</solved> <team>74</team> <timestamp>1265353100.29</timestamp> </run>
- Run identifier
- Judged flag
- language name
- penalty flag
- problem ID
- result short name
- solved flag
- team ID
- official submission time used for scoring.
<testcase> <i>1</i> <judged>True</judged> <judgement>WA</judgement> <n>1</n> <run-id>1</run-id> <solved>False</solved> <timestamp>1265336078.01</timestamp> </testcase>
- testcase number
- Judgement acronym
- Total number of testcases
<finalized> <timestamp>1265336078.01</timestamp> <last-gold>4</last-gold> <last-silver>8</last-silver> <last-bronze>12</last-bronze> <comment>Finalized by John Doe and Jane Doe</comment> </finalized>
- integer, last place to receive a gold
- integer, last place to receive a silver
- integer, last place to receive a bronze
|
OPCFW_CODE
|
Handling x86 IRQs from secondary PIC: EOI order important?
I recently added a part to an application of mine which is meant to allow it to operate with an IRQ belonging to the secondary PIC. In particular, the interrupt handler needs to signal an End Of Interrupt condition to both PICs then instead of only the primary one. This is my code:
push cs
pop ds
mov al, 20h ; acknowledge interrupt
cmp byte [serial_use_irqmask + 1], 0
je @F
out 0A0h, al ; to secondary PIC
@@:
out 20h, al ; to primary PIC
Now, while adding this part I considered whether to signal the EOI to the secondary PIC first, or to the primary PIC. Searching for that did not yield any statements either way. However, I found that some examples seem to choose the order I ended up implementing; that is the secondary PIC first, then the primary.
My question is, does this matter at all? Is there an actual reason to prefer either order?
Examples of secondary PIC first
bootlib interrupt handlers:
movb $0x20, %al # end of interrupt command
outb %al, $0xA0 # PIC2 command port
outb %al, $0x20 # PIC1 command port
osdev.org wiki's article on Interrupts:
mov al, 20h
out A0h, al
out 20h, al
Pure64 interrupt handlers:
mov al, 0x20 ; Acknowledge the IRQ
out 0xA0, al
out 0x20, al
Dos64-stub interrupt handlers:
Irq0007_1:
mov al,20h
out 20h,al
pop rax
swint:
iretq
;--- IRQs 8-F
Irq080F:
push rax
mov al,20h
out 0A0h,al
jmp Irq0007_1
Example of primary PIC first
Example in the German Wikipedia article "Softwarebremse" ("Bremse" means "Brake"):
mov al, 20h
out 020h, al
out 0a0h, al
sti
I don't think it matters.
The 8259A datasheet doesn't shed any more light, only stating (twice):
An EOI command must be issued twice: once for the master and once for the corresponding slave.
No order is stated explicitly.
Sending the EOI to the slave first still won't allow any IRQ from the slave (until the EOI to the master), because the master won't serve any of the slave requests.
When the master receives the EOI, all IRQs will be allowed.
Sending the EOI to the master first will allow new master IRQs but no slave IRQs, until the slave receives the EOI too.
So sending the EOI to the slave first won't change the "interruptability" of the system while the master is still waiting for its EOI.
This allows finer control of the moment the system is ready to accept lower-priority interrupts.
Assuming the IF flag is set before iret.
As far as the PICs are concerned, there is no technical requirement to use a specific order.
I don't think there a difference in the risk of losing slave IRQs:
The PIC will set the IRR bit immediately upon the assertion of the relative IRQ pin but the ISR bit is set only after receiving the ACK from the CPU (and the IRR bit is cleared).
The IRR-ISR pair form a mini-queue, with the IRR buffering a request while an interrupt is still being served.
When the slave receives the EOI first and the master doesn't, it's the master that prevents the slave from setting the ISR and clearing the IRR by not issuing an interrupt request to the CPU. When the master receives the EOI first and the slave doesn't, it's the slave itself that won't issue an interrupt request.
In any case, the IRR is never cleared and only one IRQ request can be buffered.
Sending the EOI to the master first may unlock its (high-priority) IRQs earlier but we are talking about the timing of a few instructions.
Again assuming the IF flag is set before iret.
I don't see any reason to prefer any of the two ordering.
|
STACK_EXCHANGE
|
UTM complains about iOS Version on opening app
I've just updated to the latest UTM version for iOS/iPadOS.
iPad Pro 2017 12.9-inch, running iPadOS 14.2, jailbroken with checkra1n. I've installed the latest UTM version. Now every time I open UTM, I am being nagged by a pop-up complaining about iOS version not supporting running VMs while unmodified and telling me that I have to be jailbroken or running with a remote debugger attached.
However, the VMS do run as expected - I tested a MSDOS 6.22/Win 3.11 VM and it ran as expected.
The funny part is that I am jailbroken and running Cydia as well. Do I need any update for my jailbreak or is it just a UTM bug? Previous UTM versions after I jailbroke my iPad did not complain.
what did you install UTM.ipa with? (the reason why I am asking is because, sometimes, app installers doesn't give jailbreak detection or ios version access to ipa apps being installed, so the UTM ipa app can't detect if the system is jailbroken or not or the system ios version.)
Sorry if I sound like I am advertising, but one solution is to uninstall UTM the way you installed it, and add my Cydia repo (https://vishram1123.gitlab.io/utm-cydia/) to Cydia, and then install the UTM package from there.
@brunocastello how did you install it on your jailbroken iPad? It should have detected the dynamic-codesigning entitlement.
@brunocastello how did you install it on your jailbroken iPad? It should have detected the dynamic-codesigning entitlement.
Hi, I used latest stable Xcode. The process was the same since the very first UTM version:
First I resign the IPA with my provisioning profile on iOS App Signer, I have an Apple Developer (paid) account with development certificates. Once the IPA is resigned, I load Xcode, Devices and Simulators, connect my iPad Pro, then drag and drop the IPA file where it should be in that Xcode window. Wait a few minutes and the app is installed. This was the first time I was nagged by that pop-up window when opening UTM on my iPad.
My certificate and provisioning files are set to expire in january or february 2021 (I have more than one). I was planning to set up a new one only when they expire, should be the end of January 2021. I used the same provisioning file for every previous UTM install. I tried a different one later but no luck with that pop-up.
Still, my machines are working as expected on UTM. I just keep being nagged by that pop-up every time I first load UTM.
Just installed the latest version of UTM right now and the issue still persists. And I am also still able to use to the VM despite this. Did you change something which would require a change in my dev certificates, I mean, a new provisioning file with some different setting? This started to happen with previous version.
@brunocastello how did you install it on your jailbroken iPad? It should have detected the dynamic-codesigning entitlement.
Hi, I used latest stable Xcode. The process was the same since the very first UTM version:
First I resign the IPA with my provisioning profile on iOS App Signer, I have an Apple Developer (paid) account with development certificates. Once the IPA is resigned, I load Xcode, Devices and Simulators, connect my iPad Pro, then drag and drop the IPA file where it should be in that Xcode window. Wait a few minutes and the app is installed. This was the first time I was nagged by that pop-up window when opening UTM on my iPad.
My certificate and provisioning files are set to expire in january or february 2021 (I have more than one). I was planning to set up a new one only when they expire, should be the end of January 2021. I used the same provisioning file for every previous UTM install. I tried a different one later but no luck with that pop-up.
Still, my machines are working as expected on UTM. I just keep being nagged by that pop-up every time I first load UTM.
I get the same issue and even I can't launch the virtual machine. It crashes every time when I try to activate it.
@brunocastello how did you install it on your jailbroken iPad? It should have detected the dynamic-codesigning entitlement.
Hi, I used latest stable Xcode. The process was the same since the very first UTM version:
First I resign the IPA with my provisioning profile on iOS App Signer, I have an Apple Developer (paid) account with development certificates. Once the IPA is resigned, I load Xcode, Devices and Simulators, connect my iPad Pro, then drag and drop the IPA file where it should be in that Xcode window. Wait a few minutes and the app is installed. This was the first time I was nagged by that pop-up window when opening UTM on my iPad.
My certificate and provisioning files are set to expire in january or february 2021 (I have more than one). I was planning to set up a new one only when they expire, should be the end of January 2021. I used the same provisioning file for every previous UTM install. I tried a different one later but no luck with that pop-up.
Still, my machines are working as expected on UTM. I just keep being nagged by that pop-up every time I first load UTM.
I get the same issue and even I can't launch the virtual machine. It crashes every time when I try to activate it.
I install UTM on my iPhone X with iOS 14.2 and no jailbreak.
@brunocastello's issue is that the message in UTM keeps on popping up in UTM, but the VM's are working.
your issue is that the VMs crash when starting them.
@brunocastello how did you install it on your jailbroken iPad? It should have detected the dynamic-codesigning entitlement.
Hi, I used latest stable Xcode. The process was the same since the very first UTM version:
First I resign the IPA with my provisioning profile on iOS App Signer, I have an Apple Developer (paid) account with development certificates. Once the IPA is resigned, I load Xcode, Devices and Simulators, connect my iPad Pro, then drag and drop the IPA file where it should be in that Xcode window. Wait a few minutes and the app is installed. This was the first time I was nagged by that pop-up window when opening UTM on my iPad.
My certificate and provisioning files are set to expire in january or february 2021 (I have more than one). I was planning to set up a new one only when they expire, should be the end of January 2021. I used the same provisioning file for every previous UTM install. I tried a different one later but no luck with that pop-up.
Still, my machines are working as expected on UTM. I just keep being nagged by that pop-up every time I first load UTM.
I get the same issue and even I can't launch the virtual machine. It crashes every time when I try to activate it.
I install UTM on my iPhone X with iOS 14.2 and no jailbreak.
@brunocastello's issue is that the message in UTM keeps on popping up in UTM, but the VM's are working.
your issue is that the VMs crash when starting them.
You are correct, I have the same issue the message in UTM keeps on popping up in UTM too.
@brunocastello Uninstall UTM and try installing from the new repo https://cydia.getutm.app/ (add to Cydia/Sileo). Then see if the error still shows up.
@brunocastello Uninstall UTM and try installing from the new repo https://cydia.getutm.app/ (add to Cydia/Sileo). Then see if the error still shows up.
Tried it. Error does not show up, However I am not able to edit UTM settings on iOS Settings.app and an UTM folder does not show up in Files so I couldnt move back my VMs because the folder is missing. I will revert back to previous UTM. Next month I will have to do a new provisioning file anyway, the certificate will expire between january and february. If you could show me anything I could change when I create my new certificate later, that would be great.
@brunocastello Have you tried AltStore + AltDaemon?
Not yet. It’s 4AM now, gonna try later. Thanks
@brunocastello Try the latest build https://github.com/utmapp/UTM/actions/runs/453385049 and use the arm64 UTM.xcarchive with iOS App Signer. This hopefully fixes the jailbreak detection.
Separately, I still want you to test AltStore + AltDaemon to see if that works booting VMs with checkra1n. If that works, I'm going to scrap the Cydia package stuff because honestly it's riddled with issues.
@brunocastello Try the latest build https://github.com/utmapp/UTM/actions/runs/453385049 and use the arm64 UTM.xcarchive with iOS App Signer. This hopefully fixes the jailbreak detection.
Separately, I still want you to test AltStore + AltDaemon to see if that works booting VMs with checkra1n. If that works, I'm going to scrap the Cydia package stuff because honestly it's riddled with issues.
I tried the AltStore + AltDaemon solution first and it did not fix the issue.
But your latest build (the xcarchive one) works! Great work, osy!
@brunocastello So AltStore + AltDaemon + jailbroken works on iOS 14, correct? If so I think I'm going to remove the Cydia stuff because there's a lot of issues.
@brunocastello So AltStore + AltDaemon + jailbroken works on iOS 14, correct? If so I think I'm going to remove the Cydia stuff because there's a lot of issues.
AltStore + AltDaemon does load UTM fine but with the warning message about the jailbreak, same fashion as Cydia solution, but I forgot to test this one with a VM. The xcarchive build this time I tested and does work flawlessly now.
Simply put, IMO I think you should forget the Cydia stuff anyway, because even DOSPad (a.k.a iDOS2 from litchie) on Cydia (or any other app from there that I can remember since day one) suffers from the same issues (no folder on Files and no screen in settings.app to configure).
Maybe handle it like the jailbroken version of DolphiniOS, dump a folder in ~/Documents (like quite literally in the iOS filesystem in the mobile user) named UTM with configuration file and VMs. and have the configuration inside the app itself
|
GITHUB_ARCHIVE
|
import os
import sys
import logging
from copy import copy
from inspect import getfile
from pulsar.utils.path import Path
from pulsar.utils.pep import native_str
from lux import __version__
class Parameter(object):
'''Class for defining a lux :ref:`parameter <parameter>` within
a lux :class:`.Extension`.
Parameters are specified when creating an :class:`.Extension` in a
declarative style (as class attributes of the extension). For example::
from lux import Extension, Parameter
class MyExtension(Extension):
title = Parameter('Hello', 'The title to use in the home page')
:parameter default: the default value of the parameter. This is the value
used by the framework when the parameter is not found in the config
file.
:parameter doc: a documentation string for the parameter.
Parameters are case insensitive.
'''
def __init__(self, name, default, doc):
self.name = name
self.default = default
self.doc = doc
self.extension = None
def __repr__(self):
return '%s: %s' % (self.name, self.default)
__str__ = __repr__
class ExtensionMeta(object):
'''Contains metadata for an :class:`Extension`.
.. attribute:: config
Dictionary of configuration :class:`.Parameter` for the extension.
.. attribute:: script
Set at runtime by :func:`execute`, it is the script
name which runs the application.
.. attribute:: version
Extension version number (specified via the :class:`.Extension`
``version`` class attribute).
'''
script = None
argv = None
def __init__(self, file, version, config=None):
file = Path(file)
if file.isdir():
appdir = file
elif file.isfile():
appdir = file.realpath().parent
else:
raise ValueError('Could not find %s' % file)
self.file = file
self.path = appdir.realpath()
self.version = version or __version__
if self.has_module:
_, name = self.path.split()
else:
# otherwise it is the name of the file
_, name = self.file.split()
self.name = name
self.config = cfg = {}
if config:
for setting in config:
setting = copy(setting)
setting.extension = self.name
cfg[setting.name] = setting
def __repr__(self):
return self.name
def add_to_pypath(self):
if self.has_module:
base, _ = self.path.split()
if base not in sys.path:
sys.path.append(str(base))
@property
def has_module(self):
return self.path.ispymodule()
@property
def media_dir(self):
'''Directory containing media files (if available)'''
if self.has_module:
dir = os.path.join(self.path, 'media')
if os.path.isdir(dir):
return dir
def copy(self, file):
return self.__class__(file, self.version, self.config.values())
def update_config(self, config):
for setting in config.values():
self.config[setting.name] = copy(setting)
class ExtensionType(type):
'''Little magic to setup the extension'''
def __new__(cls, name, bases, attrs):
config = attrs.pop('_config', None)
version = attrs.pop('version', None)
abstract = attrs.pop('abstract', False)
klass = super(ExtensionType, cls).__new__(cls, name, bases, attrs)
if not abstract:
meta = getattr(klass, 'meta', None)
if isinstance(meta, ExtensionMeta):
cfg = list(meta.config.values())
if config:
cfg.extend(config)
meta = ExtensionMeta(getfile(klass), version, cfg)
else:
meta = ExtensionMeta(getfile(klass), version, config)
klass.meta = meta
return klass
class Extension(ExtensionType('ExtBase', (object,), {'abstract': True})):
'''Base class for :ref:`lux extensions <extensions>`
including :class:`.App`.
.. attribute:: meta
The :class:`ExtensionMeta` data created by the :class:`Extension`
metaclass.
.. attribute:: logger
The logger instance for this :class:`Extension`.
'''
abstract = True
def middleware(self, app):
'''Called by application ``app`` when creating the middleware.
This method is invoked the first time :attr:`.App.handler` attribute
is accessed. It must return a list of WSGI middleware or ``None``.
'''
pass
def response_middleware(self, app):
'''Called by application ``app`` when creating the response
middleware'''
pass
def setup_logger(self, config, debug, loglevel):
'''Called by :meth:`setup` method to setup the :attr:`logger`.'''
self.logger = logging.getLogger('lux.%s' % self.meta.name)
def setup(self, module, debug, loglevel, params):
'''Internal method which prepare the extension for usage.
'''
config = {}
for setting in self.meta.config.values():
if setting.name in params:
value = params[setting.name]
else:
value = getattr(module, setting.name, setting.default)
config[setting.name] = value
self.setup_logger(config, debug, loglevel)
return config
def extra_form_data(self, request):
'''Must return an iterable over key-value pair of data to add to a
:class:`.Form`.
By default it returns an empty tuple.
'''
return ()
def write(self, msg='', stream=None):
'''Write ``msg`` into ``stream`` or ``sys.stdout``
'''
h = stream or sys.stdout
if msg:
h.write(native_str(msg))
h.write('\n')
def write_err(self, msg='', stream=None):
'''Write ``msg`` into ``stream`` or ``sys.stderr``
'''
h = stream or sys.stderr
if msg:
h.write(native_str(msg))
h.write('\n')
def check(self, request, data):
pass
def __repr__(self):
return self.meta.__repr__()
def __str__(self):
return self.__repr__()
|
STACK_EDU
|
Add API to set node timestamp (last update)
I am setting a number of node values, related to one single event. It would be useful (I am actually requested to do that) to set one exact timestamp for all values updated for that event. Is there a way to do it with the current API?
It is not necessary to update all nodes simultaneously (like atomic transaction): while values are updated, some nodes may have old and some already new value, but the timestamp for all new (updated) nodes should be the same.
I was only looking for setTimestamp or similar, by text search, so perhaps I overlooked something.
Thanks!
I was able to do this by changing
https://github.com/juangburgos/QUaServer/blob/9f38ecf246d182bbf2c19cb44c607d8c225dc596/src/wrapper/quabasevariable.cpp#L253-L255
to
UA_WriteValue wv;
UA_WriteValue_init(&wv);
wv.nodeId = m_nodeId;
wv.attributeId = UA_ATTRIBUTEID_VALUE;
wv.value.value = tmpVar;
wv.value.hasValue = 1;
if (sourceTimestamp.isValid())
{
QUaTypesConverter::uaVariantFromQVariantScalar(sourceTimestamp, &wv.value.sourceTimestamp);
wv.value.hasSourceTimestamp = 1;
}
auto st = UA_Server_write(m_qUaServer->m_server, &wv);
with const QDateTime& sourceTimestamp being passed as an optional parameter to setValue.
The first attempt to implement QUaBaseVariable::setSourceTimestamp failed as UA_Server_write will reset the value if not provided (I hoped at first it will not touch it with UA_WriteValue.value.hasValue=0).
I am quite sure you won't be happy with adding an extra parameter to setValue ;) (there might be other meta-data someone might want to set with the value), you might have a better idea how to do it. Maybe breaking setValue into semi-internal prologue/epilogue methods handling type changes etc, and then setting the value itself in the middle.
Hi, I can see the usefulness of setting custom timestamps. I have not required this so far but might have a use for it in the future.
Indeed, I would rather not add extra arguments to setValue, specially because, if we add an API to set custom timestamps, it might also be useful to be able to serialize such timestamps so they can be recovered after a server restart.
To serialize any attribute, QUaServer requires that attribute to be a Q_PROPERTY, then it can be serialized automatically by the current serialization API. So it would have to look something like this in quabasevariable.h:
class QUaBaseVariable : public QUaNode
{
Q_OBJECT
// Variable Attributes
// other stuff...
Q_PROPERTY(QDateTime sourceTimestamp READ sourceTimestamp WRITE setSourceTimestamp NOTIFY sourceTimestampChanged )
Q_PROPERTY(QDateTime serverTimestamp READ serverTimestamp WRITE setServerTimestamp NOTIFY serverTimestampChanged )
// other stuff...
public:
// other stuff...
QDateTime sourceTimestamp() const;
void setSourceTimestamp(const QDateTime &sourceTimestamp);
QDateTime serverTimestamp() const;
void setServerTimestamp(const QDateTime &serverTimestamp);
// other stuff...
};
In the implementations of setSourceTimestamp and setServerTimestamp we would have to make sure that the value is not modified, only the respective timestamp.
Using this approach would also simplify visualization in GUI applications because, as in the serialization API, we can just iterate over the QMetaPropery's.
Do you think you could try and give it a go? Currently I am busy with other stuff and might not be able to add new code to QUaServer in a couple of weeks.
In the implementations of setSourceTimestamp and setServerTimestamp we would have to make sure that the value is not modified, only the respective timestamp.
I don't know how to do that, to be honest... Apart from perhaps reading the value from the open62541 server, then setting with those respective metadata added. Not terribly efficient.
How about adding QUaValueMetadata* metadata=nullptr as arg to setValue? That would be future-proof in terms of passing more metadata if needed. Like:
struct QUaValueMetadata{
QDateTime sourceTimestamp; // default-constructed is invalid and won't be used
QDateTime serverTimestamp; // dtto
// ...
}
By default, it would be nullptr and, even is used, only items explicitly set would be copied and activated in UA_WriteValue. Like this:
if (metadata){
if(metadata->sourceTimestamp.isValid()) {
QUaTypesConverter::uaVariantFromQVariantScalar(sourceTimestamp, &wv.value.sourceTimestamp);
wv.value.hasSourceTimestamp = 1;
}
if(metadata->serverTimestamp.isValid()) {
// ...
}
// handle other metadata ...
}
Does that sound sensible?
I have made an implementation in the alarms_conditions branch if you want to take a look. It won't be merged into master until I have implemented something meaningful in said branch though.
Beware; breaking API changes in latest version. Now browseName is mandatory when cretaing any node as first argument. Then desired nodeId becomes second argument. See:
https://github.com/open62541/open62541/issues/3545
|
GITHUB_ARCHIVE
|
SME Server:Documentation:User Manual:Chapter2
Chapter 2 - Configuring Applications on your Computer
Configuring an email client
Your email client application (Outlook, Thunderbird etc) requires setting up with information about your email accounts: how to route outgoing email and credentials required to pick up your incoming email. This information is usually entered in the "preferences" or "options" section of the email client.
Most email clients require you to enter the following information:
User's email address: This is the user account name (as created in the server-manager) followed by @domain name. Typically it will be in the form of email@example.com (e.g. firstname.lastname@example.org).
Email server (outgoing SMTP mail server): The address of the mail server. As you prefer, you can enter the ip address of the SME Server, or you should be able to use the server's full domain name, like mail.yourdomain.xxx (e.g. mail.tofu-dog.com).
Email account name or username: this is the name before the @ in the email address. For example, the username for "email@example.com" is " afripp ".
The mail client may offer you the choice between POP3 and IMAP operation modes.
If you choose POP3 email service:
- Enable POP3 protocol: Typically, to enable the POP3 protocol for incoming email, you click on a POP3 checkbox or select POP3 from a pull-down menu in the section of your email application dedicated to the incoming mail server.
- Disable IMAP protocol: To disable the IMAP protocol for outgoing mail (not all email client applications have IMAP protocol) click the IMAP checkbox "off".
- Delete read email from server: We recommend you configure your pop3 email client application to delete each message from the server when it has been downloaded to your client application. To do this, click off the checkbox marked "leave mail on server" or click on the checkbox marked "delete mail from server".
If you choose IMAP email service:
- Enable IMAP protocol: Typically, to enable the IMAP protocol for incoming email (note that not all email client applications offer IMAP support) you click on the IMAP checkbox or select IMAP from a pull-down menu in the section of your email client application dedicated to the incoming mail server.
- Disable POP3 protocol: To disable the POP3 protocol for outgoing mail, click the POP3 checkbox "off".
The images below show you the setup sequence in the Mozilla Thunderbird mail client.
First you choose Preferences from the Edit menu and click on Mail Servers as shown in:
If you have not entered details about your mail server yet, you will need to press the Add button and enter some information. Otherwise, you will select the default mail server listed and click on the the Edit button. This will bring up a screen where you enter the username and choose whether you are using IMAP or POP3:
Thunderbird should now be ready to send and receive email.
IMAP versus POP3 email
There are two common standards for email management, IMAP and POP3. Your server supports both protocols. You will need to select the protocol that is right for your organization, although IMAP is favoured for almost all situations.
IMAP email, is designed to permit interactive access to multiple mailboxes from multiple client machines. You manage your email on the mail server over the network. You read your email over the network from your desktop, but the email is not stored on your desktop machine - rather, it is permanently stored and managed on the server.
Benefits of IMAP: You can access all of your new and stored email from any machine connected to a network. Because all employee email is stored on the server, backup of email is easily accomplished.
IMAP allows better overall management of email across a number of end user devices. Whatever you do on one, is reflected to all others, even adding new folders and moving messages to archive folders. eg you can send on a workstation and see all your sent messages on the phone and so on.
Whatever email you send or receive, folder changes etc at any email client including workstations, phones, remote workstations and even webmail (accessed via web browser from home or anywhere), will all show the same. You can set the email clients to retain local copies of messages if that is important.
Drawbacks of IMAP: If you are not connected to a network, new and remote stored email messages are not available to you.(stored emails can be solved with current email clients for desktop - i.e. Thunderbird option to cache the mails for offline working - some clients for mobile devices do this also, practically you'll have the last snapshot from the moment when you were online )
POP3 is an earlier and ageing email legacy protocol. POP3 was designed to permit on-demand retrieval to a single client machine. Email is stored on the mail server until you retrieve it, at which time it is transferred over the network to your desktop machine and stored in your email box there.
Benefits of POP3: Even when you are not connected to your network, you have access to the email stored on your desktop.
Drawbacks of POP3: POP3 was not originally intended to support users accessing and managing their email from remote systems. Because your email is stored on your desktop, setting up remote access of your email when you are at a different computer can be complex.
Configuring Your Web Browser
Most browsers (Internet Explorer, Firefox etc) are configured using a dialog box called "preferences", "network preferences" or "options". Some browsers need to be configured to access the Internet either directly or via a proxy server. When required, most desktop applications, your web browser included, should be configured as though they were directly accessing the Internet. Although the server uses a security feature known as IP masquerading, thereby creating an indirect connection to the Internet, this is a transparent operation to most of your desktop applications. Hence, you should ensure that the "Direct connection to the Internet" check box is clicked "on" in your web browser.
Under certain circumstances, using a proxy server can improve the perceived performance of your network. The server includes HTTP, FTP and Gopher proxy servers. Normally, we recommend these be disabled in your browser.
If you decided that you do want to use proxy servers #3, you will need to enter the IP address or domain name of the proxy server (i.e. your server) into the configuration screens of your web browser. The port number you will need to enter to connect to the proxy server is 3128. This information is the same for HTTP, Gopher and FTP proxying. Alternatively your browser can find the proxy details for itself by entering http://proxy/proxy.pac into Automatic proxy configuration URL:
The image below shows how a proxy server would be configured in Mozilla Firefox.
#3 Note that laptop users should disable proxy servers when working away from their local area networks.
Configuring Your Company Directory (Address Book)
Your SME Server automatically maintains a Directory and populates it with users names and contact details when Admin enters these in the server-manager. Any client program that uses LDAP (Lightweight Directory Access Protocol), such as the address book in Thunderbird, will be able to access the Directory - but by default this will be read-only access. For example, with Thunderbird, look under the "Tools" menu and choose "Address Book". Then look under the "File" - "New" menu and select "LDAP Directory".
You will see a dialog box similar to the one shown here.
You will need to enter the following information:
- Enter the name you wish to give your company directory - any name will do.
- The LDAP server or Hostname is the name of your web server, in the form www.yourdomain.xxx.
- The Server Root information can be found on the "Directory" screen in your server-manager (more information on this is available in the next chapter). The usual form, assuming your domain is yourdomain.xxx, is dc=yourdomain,dc=xxx . (No spaces should be entered between the "dc=" statements.)
- The Port Number is always 389.
Once the address book has been created, enter a term into the search field, if you type an @ you will list the contact details of all email accounts on the SME Server.
|
OPCFW_CODE
|
Well, that's what I am talking about. It might be better (to try to learn to master SU) if you tried to join a modeling community that makes models for a virtual reality game.
For example, I was part of a group for an online virtual reality game called IMVU. You not only had to learn how to use SU, but also run as economically as possible. While we weren't encouraged to make the smallest models as possible, as a modeler, means I was also a user (which you couldn't be a modeler until after a year as a user). The FIRST thing you notice about some people's models, as a user, is how slow your game gets when they use big textures and big models (MB wise (which reflects how many polys were used)), so the modeler's success and failure is based on how successful they were at being economical on polys (triangles in model) and size of textures. Because people avoided buying your models if they were too big and slowed down their user experience.
I can tell you our texture sizes were 64, 128, and 256 pixels in size, you could go to 512, but then you had to go 512x256 maximum (You couldn't do 512x512. Any other combination was fine).
The crux of the issue is that user's didn't want to buy models that looked too simplistic either. So, the best modelers were the ones who knew the sweet spot between the two extremes; too simple or too slow. In the community I belonged to in IMVU, the sweet spot was no larger than 2MB for textures and polys in-model and nothing larger than 12,000 polys (the smallest value for the sweet spot was too nebulous to define).
BTW, IMVU sort of doesn't encourage the use of SU anymore as a model-making program. The only way to convert SU models to a file format that the game CAN use was by using a hacker program (which should set off alarms in your head and is partly the reason why I left). My understanding is that Second Life still allows SU models... Maybe there is another virtual reality program out there by now?
That is, of course, just a suggestion as one way of "mastering SU".
I'm aware of game poly size (since I'm a game developer myself). My workflow usually using Collada and FBX format. I have different way of modeling for game prop. Since this car for Challenge I don't limit my self with poly. Online Modeling Community for virtual reality game seams a good idea maybe I should join one.
|
OPCFW_CODE
|
Then by RAID he key. Windows start and shutdown wavs not played Hi. Thanks Hooray - there is not much shown in the bios. It may be there isnetwork connection disconnecting from the web constantly.Go back to Power Supply, repair with onboard Sigma Tel Audio.
My computer doesn't to connect but they are fighting each other. I don't know why it's stopped working, but blue http://kleidernow.com/blue-screen/repair-blue-screen-death-hardware-software.php message about copy protection. screen Blue Screen Error Windows 10 I?ve tried wireless but the signal cpu after a hefty overclock with only1 fps increase. Thanks, bill48nj2 Reinstall the sound drivers blue somethings wrong with the MBR.
I recently flashed my motherboard's bios and updated it. Was trying to make online dinner reservations and couldn't do it amongst other things. Also attached to the router is death this is greatly appreciated.Then switch memory that'll take me all day.
But no program my new thoshiba PORTEGE M 822 . What type (cable, DSL, satellite)or setup files for my motherboard? Blue Screen Of Death Fix I have a Toshiba M700 tableti nid help in overclocking extensa 4630zg-342g16mn.I recently upgradedreading a similar post after posting this one.
Am I missing a driver Am I missing a driver I think the gpu is bottlenecked by the starting or shutting down Windows.Thanks to thisrecognize the media.I changed the IP and deleted printer I could plug in removable hard drives.
The problem is I don't geton it still does the same thing.I get an error Blue Screen Of Death Windows 10 is too weak for streaming videos.Lot's of issues with 11n devices. I had to return my printer for a new one. However, I do have thatsite, it's great.
I bought a new Dell precisionP4 chip into it.Can anybody help, how I can reinstallInternet connection sharing has stopped working.Restart the Windows Firewall/Internet Connection Sharing (ICS) service from services.msc software can see both HDD.That's happened after a necessary low level click site of connection do you use?
All sounds are I'd love your help to get it back.Any help or adviceI found a solution. I changed it to Japanese, and then but it still remembers the old drive.Numscr, pause break, insert repair not working. What?
Which all together, i'm thingking to m60, but it's battery is empty. When I try to map the memorythe phone and no luck.I spent 2 hours onhungapp, version 0.0.0.0, hang address 0x00000000. Any help on 4 keys dies.
I know aboutthe HDD is okay.It is Windows Explorer, Fire fox etc if that don't help of a small SATA disk.. Test with a simple install Bsod Error Codes formatting, the only possibility to clean the HDD.What kind of my old pc.
I have no errors and it is trying news told you noresponding.Therefore now when typing i http://www.wikihow.com/Fix-the-Blue-Screen-of-Death-on-Windows perfect after boot.younger brother to replace his overheating xbox 360.Methinks you are missing somethingthis the samsung HDD should be okay.
The secondary master, same HDD is a little more life out of it.. High Definition Audio Codec driver ver 5.10.5067.0 Blue Screen Of Death Windows 8 the Windows Start or Windows Exit sounds.Having a problem with my homeactive, there are all important data saved.I tried to connect my bluetooth fn + numlock.
I went to the Compaq site butbad flash or anything?The sounds are enabled in Control Panelhow embarrasing ...Not sure why.....I restored backadaptar are you using?Hanging application wmplayer.exe, version 22.214.171.12403, hang moduleprt scr, delete sys rq.
I wanted to run a program that requires http://kleidernow.com/blue-screen/help-blue-screen-repair-software.php problem with my new wireless setup.But like iproceeded to try and run the program.Can't set HDD 0 - with alphanumeric WEP key. Can I put a How To Fix Blue Screen Windows 10 xp pro 64 bit.
All I'M trying to do is get need to hold down fn key. It does not matter what website I'mcard it keeps saying that it is use.I'm currently running windows modules, and try again. I got the idea to look there byme to have the non-unicode language set to Japanese.
Thanks mark bring to the fujitsu repair centre. I can just guess, butis a bad router or bad modem? blue GoodEvening everyone I cant seam Black Screen Error there were no details about it there. of Something is unplugged, or pluggedWin XP home SP2 again to HDD 0?
I setup 2 eSATA ports so Now, for no clear reason, repair into the wrong socket or backwards. Then eventually the phone Bsod Error Windows 10 to get my new build to boot.I made all kind of tests belongCPU, HeatSink, and one memory module.
Is it because or laptop just times out. Go back to the Printer Setup and provide thePC running Windows Vista ultimate 32-bit edition. They just don't play whenwould be great ! In the bios SATA is enabled.
I cant seem to edit the bios as 2 months to no avail. It seems that Sounds, and when I sample them, they're fine. Ill be passing it down to my so fundamental that you overlooked it.Keep everything as simple as possible, then change one thing at Sigma Tel audio loads at system start.
Fn +f11 also i owned this laptop for about a year now. How do I check if it a DSL modem which supplies the internet. All tests say primary master - not active.Hai all i have problem with the registry may be corrupted. here is a error iam getting now.
|
OPCFW_CODE
|
Map showing highlights with coordinates. Mountains are colored yellow/red.
Additional picture albums and findings further down this page
I am very selective. After 6 hours of seed hunting in several sessions, I had found only about 12 "candidate" seeds. Most of them, while interesting, didn't meet my standards. Then I found this one.
The seed generator spewed awesome all over this damned map. I could limit myself to no less than 14 pictures, all of them of unique places, and that is without taking pics of the many dozens of other worthy and beautiful places.
This would make an awesome seed for a multiplayer map. No fighting over epic and iconic places; there's plenty for everyone, and Notch knows what else beyond the area I explored.
Found even more stuff. This is ridiculous. I am ready to declare this seed the best seed ever found, even better than World in the Air. I need to make videos of the different areas; the pictures don't do it justice, as many of the awesome features are grouped in the same area as others.
Newest findings are -2158 -1776. which is a positively huge cavern.
also, the entire damned area around -2000 -2000.
Here's a list of various points of interest, mostly grouped by area and coordinates.
I highly recommend using either the flight mod with speed multipliers, single player commands with teleportation, or NBTedit to change player position, in order to quickly explore the different areas.
How much traveling to these epic locations is there?
There's 3 areas within 600 blocks of 0,0.
There's about 20 areas (with multiple awesome features each) within 2000 blocks of 0,0.
It's a hike for some of them, but its worth it. Also as noted above, using either a fly mod or single player commands with teleport would allow one to explore with ease.
As a single player map, you'll never run out of cool places to build at and explore. However, it would truly shine in multiplayer, as rather than just being a seed that has awesome in one area, it has loads.
I tried the seed and spawned in the bottom right corner of map...in that snow biome. Ironically, I found myself in a valley flanked by three hillish mountains that were filled to the brim with coal and caves. Plus, I could see alot of iron peering into them. I left the valley and settled in another mountain valley, this one landmarked by three ponds.
I have yet to explore the rest of the map, but from cartographer the view is pretty nice.
|
OPCFW_CODE
|
If you already have Debian 11 installed on your server, please go to the next chapter.
Follow the steps from below to upgrade Debian 10 (Buster) to Debian 11 (Bullseye) if you either need to run the upgrade as part of your server’s maintenance, or if you have just rented a server with Debian 10 already installed, and the hosting company doesn’t offer servers with Debian 11 preinstalled.
First read the official upgrade documentation and take note, if what is described there is applicable to your situation.
As an additional precaution, open the default ssh port 22 in the firewall with UFW:
ufw allow 22
It’s important that you make sure you have a second way of accessing your server besides SSH, in case you loose SSH access to the server due to unexpected network issues that can happen during upgrade. Some hosting providers offer a web console that allows accessing the server without SSH, while other hosting companies offer a rescue mode boot up which you can use to log in to a different functional server and from there you can mount the hard drive of your disfunctional server, so that you can edit any files; then you can restart the server to apply the new changes.
6.1. Back up all the important data
Make a backup copy of all the important data stored on the server. For example, you should create compressed archives of the following directories:
/var/spool/asterisk. It’s also recommended to make a backup copy of the
/var/lib/apt/extended_states file and to save the output of the
dpkg --get-selections "*" command. Also, you will want to make backup copies of all the SQL databases of your websites and applications.
Save all the backups in a safe location, like on an external hard drive that only you can access, adding the date of the backup to the name of the folder in which you store them.
6.2. Check the sources.list file
We will install all available Buster updates before upgrading to Bullseye. On some systems, the package source is defined as “stable” in the
sources.list file instead of “buster” or “bullseye”. To avoid an accidential early upgrade to Bullseye, please check the
sources.list and ensure that it contains “buster” and not “stable” as source:
The content should be similar to this:
deb http://deb.debian.org/debian buster main deb-src http://deb.debian.org/debian buster main deb http://security.debian.org/debian-security buster/updates main deb-src http://security.debian.org/debian-security buster/updates main deb http://deb.debian.org/debian buster-updates main deb-src http://deb.debian.org/debian buster-updates main
Next, upgrade all Buster packages to prepare the system for the final upgrade to Bullseye.
Update the sources database:
Perform the first upgrade:
6.3. Check the state of installed packages to ensure that no packages are ‘on hold’ or with any error status
This test is important. You have to check the state of the installed packages to ensure that no packages are ‘on hold’ or with a status of ‘Half-Installed’ or ‘Failed-Config’ or with any error status. Your system and the
apt database must be in good standing before proceeding with the upgrade. If there are any ‘on hold’ or broken packages, you should fix these problems before the upgrade. In the case of an ‘on hold’ package, you can ‘unhold’ it with the
apt-mark unhold package-name command, then upgrade it with
apt-get install package-name, and then, after the operating system upgrade, you can mark it as ‘on hold’ again with
apt-mark hold package-name, to exclude it from being upgraded automatically during routine bulk software upgrades.
Check if any packages are ‘on hold’ with:
dpkg --get-selections | grep hold
Check if there are any packages with a status of ‘Half-Installed’, ‘Failed-Config’ or any error status, by running:
If both commands don’t return any packages, you can proceed with the upgrade.
6.4. Update the /etc/apt/sources.list file for Bullseye
/etc/apt/sources.list file again:
Replace its content with the following lines:
deb http://deb.debian.org/debian bullseye main deb-src http://deb.debian.org/debian bullseye main deb http://deb.debian.org/debian-security/ bullseye-security main deb-src http://deb.debian.org/debian-security/ bullseye-security main deb http://deb.debian.org/debian bullseye-updates main deb-src http://deb.debian.org/debian bullseye-updates main
Then run the following command to update the sources database:
6.5. Upgrade to Debian 11 (Bullseye) in two steps
It is recommended to do the upgrade in two steps, by first running “
apt-get upgrade” to install the base packages, then running “
apt full-upgrade” to do the actual distribution upgrade.
Then perform the distribution upgrade by running:
During the upgrade process you will be asked multiple times if you want to overwrite certain configuration files with the new versions, or to keep the current files. Each time, type
N and press Enter to keep the current configuration file, since you don’t want to loose the settings contained in that file.
A reboot will be required to finish the upgrade and load the new kernel:
6.6. Check the upgrade
To check which Debian version is currently installed on the system, take a look at the
The output should look like this:
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)" NAME="Debian GNU/Linux" VERSION_ID="11" VERSION="11 (bullseye)" VERSION_CODENAME=bullseye ID=debian HOME_URL="https://www.debian.org/" SUPPORT_URL="https://www.debian.org/support" BUG_REPORT_URL="https://bugs.debian.org/"
Note: If, after upgrading Debian, Fail2ban gives an error when when you restart it, you will have to uninstall and then reinstall it. First copy the
/etc/fail2ban/jail.local file and the
/etc/fail2ban/filter.d directory to a safe location:
cp /etc/fail2ban/jail.local /root/Documents
cp -r /etc/fail2ban/filter.d /root/Documents
Then uninstall Fail2ban:
apt-get purge fail2ban
You also have to delete the entire Fail2ban directory before reinstalling:
rm -r /etc/fail2ban
apt-get install fail2ban
Don’t forget to configure Fail2ban after installation, using the configuration file and the filter files directory that you have saved earlier, like this:
cp /root/Documents/jail.local /etc/fail2ban
cp -r /root/Documents/filter.d/* /etc/fail2ban/filter.d
systemctl restart fail2ban
After you make sure that you can log in using SSH on your custom SSH port, you can close port
22 in the firewall with UFW:
ufw delete allow 22
6.7. Install and configure the latest version of PHP
Debian 11 comes with a new version of PHP, namely PHP 7.4. During the operating system upgrade, some PHP 7.4 packages will be installed, but to fully install PHP 7.4 and use it instead of PHP 7.3, which was the default version in Debian 10, you will have to install all the PHP 7.4 packages specified in the Install PHP chapter, and then configure PHP as described there. So, follow all the steps described in the Install PHP chapter.
After you have installed and configured PHP 7.4, you can uninstall all the
php7.3 packages with:
apt-get purge php7.3*
|
OPCFW_CODE
|
Loading content ...
Phadnis, A., 2013. Uncertainty Quantification and Prediction for Non-autonomous Linear and Nonlinear Systems. SM Thesis, Massachusetts Institute of Technology, Department of Mechanical Engineering, September 2013.
Uncertainty quantification schemes developed in recent years include order reduction methods (e.g. proper orthogonal decomposition (POD)), error subspace statistical estimation (ESSE), polynomial chaos (PC) schemes and dynamically orthogonal (DO) field equations. In this thesis, we focus our attention on DO and various PC schemes for quantifying and predicting uncertainty in systems with external stochastic forcing. We develop and implement these schemes in a generic stochastic solver for a class of non-autonomous linear and nonlinear dynamical systems. This class of systems encapsulates most systems encountered in classic nonlinear dynamics and ocean modeling, including flows modeled by Navier-Stokes equations. We first study systems with uncertainty in input parameters (e.g. stochastic decay models and Kraichnan-Orszag system) and then with external stochastic forcing (autonomous and non-autonomous self-engineered nonlinear systems). For time-integration of system dynamics, stochastic numerical schemes of varied order are employed and compared. Using our generic stochastic solver, the Monte Carlo, DO and polynomial chaos schemes are intercompared in terms of accuracy of solution and computational cost.
To allow accurate time-integration of uncertainty due to external stochastic forcing, we also derive two novel PC schemes, namely, the reduced space KLgPC scheme and the modified TDgPC (MTDgPC) scheme. We utilize a set of numerical examples to show that the two new PC schemes and the DO scheme can integrate both additive and multiplicative stochastic forcing over significant time intervals. For the final example, we consider shallow water ocean surface waves and the modeling of these waves by deterministic dynamics and stochastic forcing components. Specifically, we time-integrate the Korteweg-de Vries (KdV) equation with external stochastic forcing, comparing the performance of the DO and Monte Carlo schemes. We find that the DO scheme is computationally efficient to integrate uncertainty in such systems with external stochastic forcing.
A new methodology for Bayesian inference of stochastic dynamical models is developed. The methodology leverages the dynamically orthogonal (DO) evolution equations for reduced-dimension uncertainty evolution and the Gaussian mixture model DO filtering algorithm for nonlinear reduced-dimension state variable inference to perform parallelized computation of marginal likelihoods for multiple candidate models, enabling efficient Bayesian update of model distributions. The methodology also employs reduced-dimension state augmentation to accommodate models featuring uncertain parameters. The methodology is applied successfully to two high-dimensional, nonlinear simulated fluid and ocean systems. Successful joint inference of an uncertain spatial geometry, one uncertain model parameter, and 0(105) uncertain state variables is achieved for the first. Successful joint inference of an uncertain stochastic dynamical equation and 0(105) uncertain state variables is achieved for the second. Extensions to adaptive modeling and adaptive sampling are discussed.
Sondergaard, T., 2011. Data Assimilation with Gaussian Mixture Models using the Dynamically Orthogonal Field Equations. SM Thesis, Massachusetts Institute of Technology, Department of Mechanical Engineering, September 2011.
We combine the use of Gaussian mixture models, the EM algorithm and the Bayesian Information Criterion to accurately approximate distributions based on Monte Carlo data in a framework that allows for efficient Bayesian inference. We give detailed descriptions of each of these techniques, supporting their application by recent literature. One novelty of the GMM-DO filter lies in coupling these concepts with an efficient representation of the evolving probabilistic description of the uncertain dynamical field: the Dynamically Orthogonal field equations. By limiting our attention to a dominant evolving stochastic subspace of the total state space, we bridge an important gap previously identified in the literature caused by the dimensionality of the state space.
We successfully apply the GMM-DO filter to two test cases: (1) the Double Well Diffusion Experiment and (2) the Sudden Expansion fluid flow. With the former, we prove the validity of utilizing Gaussian mixture models, the EM algorithm and the Bayesian Information Criterion in a dynamical systems setting. With the application of the GMM-DO filter to the two-dimensional Sudden Expansion fluid flow, we further show its applicability to realistic test cases of non-trivial dimensionality. The GMM-DO filter is shown to consistently capture and retain the far-from-Gaussian statistics that arise, both prior and posterior to the assimilation of data, resulting in its superior performance over contemporary filters. We present the GMM-DO filter as an efficient, data-driven assimilation scheme, focused on a dominant evolving stochastic subspace of the total state space, that respects nonlinear dynamics and captures non-Gaussian statistics, obviating the use of heuristic arguments.
Agarwal, A., 2009. Statistical Field Estimation and Scale Estimation for Complex Coastal Regions and Archipelagos. SM Thesis, Massachusetts Institute of Technology, Department of Mechanical Engineering, May 2009.
A fundamental requirement in realistic computational geophysical fluid dynamics is the optimal estimation of gridded fields and of spatial-temporal scales directly from the spatially irregular and multivariate data sets that are collected by varied instruments and sampling schemes. In this work, we derive and utilize new schemes for the mapping and dynamical inference of ocean fields in complex multiply-connected domains, study the computational properties of our new mapping schemes, and derive and investigate new schemes for adaptive estimation of spatial and temporal scales.
Objective Analysis (OA) is the statistical estimation of fields using the Bayesian- based Gauss-Markov theorem, i.e. the update step of the Kalman Filter. The existing multi-scale OA approach of the Multidisciplinary Simulation, Estimation and Assimilation System consists of the successive utilization of Kalman update steps, one for each scale and for each correlation across scales. In the present work, the approach is extended to field mapping in complex, multiply-connected, coastal regions and archipelagos. A reasonably accurate correlation function often requires an estimate of the distance between data and model points, without going across complex land- forms. New methods for OA based on estimating the length of optimal shortest sea paths using the Level Set Method (LSM) and Fast Marching Method (FMM) are derived, implemented and utilized in general idealized and realistic ocean cases. Our new methodologies could improve widely-used gridded databases such as the climatological gridded fields of the World Ocean Atlas (WOA) since these oceanic maps were computed without accounting for coastline constraints. A new FMM-based methodology for the estimation of absolute velocity under geostrophic balance in complicated domains is also outlined. Our new schemes are compared with other approaches, including the use of stochastically forced differential equations (SDE). We find that our FMM-based scheme for complex, multiply-connected, coastal regions is more efficient and accurate than the SDE approach. We also show that the field maps obtained using our FMM-based scheme do not require postprocessing (smoothing) of fields. The computational properties of the new mapping schemes are studied in detail. We find that higher-order schemes improve the accuracy of distance estimates. We also show that the covariance matrices we estimate are not necessarily positive definite because the Weiner Khinchin and Bochner relationships for positive definiteness are only valid for convex simply-connected domains. Several approaches to overcome this issue are discussed and qualitatively evaluated. The solutions we propose include introducing a small process noise or reducing the covariance matrix based on the dominant singular value decomposition. We have also developed and utilized novel methodologies for the adaptive estimation of spatial-temporal scales from irregularly spaced ocean data. The three novel methodologies are based on the use of structure functions, short term Fourier transform and second generation wavelets. To our knowledge, this is the first time that adaptive methodologies for the spatial-temporal scale estimation are proposed. The ultimate goal of all these methods would be to create maps of spatial and temporal scales that evolve as new ocean data are fed to the scheme. This would potentially be a significant advance to the ocean community for better understanding and sampling of ocean processes.
In this thesis, we explore the different methods for parameter estimation in straightforward diffusion problems and develop ideas and distributed computational schemes for the automated evaluation of physical and numerical parameters of ocean models. This is one step of “adaptive modeling”. Adaptive modeling consists of the automated adjustment of self-evaluating models in order to best represent an observed system. In the case of dynamic parameterizations, self-modifying schemes are used to learn the correct model for a particular regime as the physics change and evolve in time.
The parameter estimation methods are tested and evaluated on one-dimensional tracer diffusion problems. Existing state estimation methods and new filters, such as the unscented transform Kalman filter, are utilized in carrying out parameter estimation. These include the popular Extended Kalman Filter (EKF), the Ensemble Kalman Filter (EnKF) and other ensemble methods such as Error Subspace Statistical Estimation (ESSE) and Ensemble Adjustment Kalman Filter (EAKF), and the Unscented Kalman Filter (UKF). Among the aforementioned recursive state estimation methods, the so-called “adjoint method” is also applied to this simple study.
Finally, real data is examined for the applicability of such schemes in real-time fore- casting using the MIT Multidisciplinary Simulation, Estimation, and Assimilation System (MSEAS). The MSEAS model currently contains the free surface hydrostatic primitive equation model from the Harvard Ocean Prediction System (HOPS), a barotropic tidal prediction scheme, and an objective analysis scheme, among other models and developing routines. The experiment chosen for this study is one which involved the Monterey Bay region off the coast of California in 2006 (MB06). Accurate vertical mixing parameterizations are essential in this well known upwelling region of the Pacific. In this realistic case, parallel computing will be utilized by scripting code runs in C-shell. The performance of the simulations with different parameters is evaluated quantitatively using Pattern Correlation Coefficient, Root Mean Squared error, and bias error. Comparisons quantitatively determined the most adequate model setup.
|
OPCFW_CODE
|
Some time ago I read “The Goal: A Process of Ongoing Improvement” by Eliyahu M. Goldratt. My big takeaway: Work-in-Progress or WIP items slow production. As the theory goes, you can be swimming in “efficiencies”, but if you’re stumbling over excess work-in-progress inventory or you’ve ignored a bottleneck, you’re nowhere near your potential.
This is clear enough in manufacturing. But these concepts can be applied elsewhere.
Demands on IT departments are growing exponentially. As technological advances accelerate, IT professionals are required to keep up. This isn’t one area, but in several areas at once. IT pros are pursuing cutting edge analytics and at the same time pushing traditional on-prem infrastructure to the cloud; while also balancing an undercurrent of spurious applications and solutions. Not just balancing, but seeking to meet an expectation of “subject matter expert” level knowledge/expertise with each new IT initiative.
This drives inefficiencies into IT. I’ll focus on cybersecurity within IT since I’m a cybersecurity analyst.
In order to win, security teams need a system for how they arrive at priorities. Priorities reduce work-in-progress items; they also minimize bottlenecks. IT departments tend to develop rockstars who don’t do all the work, but significant amounts of work pass through them. When many projects are going on at once, rockstars become “constraints”. (See “The Phoenix Project” by Gene Kim and Kevin Behr.) The other constraint is tools-in-progress. The tendency is to push for breadth over depth. More tools, less expertise in each tool.
When tools are viewed as 80-90% of the solution, the requirement of analysts’ time is easily overlooked. When it comes to cybersecurity, organizations can easily end up with a myriad of tools. Each of these tools becomes a work-in-progress or tool-in-progress item. Tools can add value, but if there are too many, they can actually lower the aggregate value of a team. The way to overcome this is through a highly effective system of prioritization. Knowing what to prioritize takes time. But for each tool, there if there is a sharp focus, chances creating value go up considerably.
Challenge teams to not let the perfect be the enemy of the good. Dare to set some things aside in order to arrive at critical priorities. Zero in on these priorities. They may change over time. This isn’t an issue. But if they’re changing too frequently, you’ll get stuck with a stifling inventory of work-in-progress items. Make a best-effort attempt to document this and quantify it so it doesn’t keep happening.
With a clean set of priorities and a careful reduction of WIP items, all things are possible!
|
OPCFW_CODE
|
using EnergyEndpointManager.Domain.Domains;
using EnergyEndpointManager.Domain.Enums;
using EnergyEndpointManager.Repository.Exceptions;
using EnergyEndpointManager.Repository.Interfaces;
using System;
using System.Collections.Generic;
using System.Linq;
namespace EnergyEndpointManager.Repository.Repositories
{
internal class EnergyEndpointManagerRepository : IEnergyEndpointManagerRepository
{
private readonly IList<EnergyEndpoint> dataTable;
public EnergyEndpointManagerRepository()
{
dataTable = new List<EnergyEndpoint>();
}
public void Insert(EnergyEndpoint energyEndpoint)
{
if (dataTable.Any(s => s.SerialNumber == energyEndpoint.SerialNumber))
{
throw new DuplicateEndpointException(energyEndpoint.SerialNumber);
}
dataTable.Add(energyEndpoint);
}
public IList<EnergyEndpoint> GetAll()
{
return dataTable;
}
public EnergyEndpoint Get(string serialNumber)
{
var energyEndpoint = dataTable.Where(s => s.SerialNumber == serialNumber).FirstOrDefault();
if (energyEndpoint == null)
{
throw new NonExistentEndpointException(serialNumber);
}
return dataTable.Where(s => s.SerialNumber == serialNumber).FirstOrDefault();
}
public void Edit(string serialNumber, EnergyEndpoint energyEndpointUpdated)
{
var energyEndpoint = Get(serialNumber);
energyEndpoint.MeterModelId = energyEndpointUpdated.MeterModelId;
energyEndpoint.MeterNumber = energyEndpointUpdated.MeterNumber;
energyEndpoint.MeterFirmwareVersion = energyEndpointUpdated.MeterFirmwareVersion;
energyEndpoint.SwitchState = energyEndpointUpdated.SwitchState;
}
public void Delete(string serialNumber)
{
var endpoint = Get(serialNumber);
dataTable.Remove(endpoint);
}
}
}
|
STACK_EDU
|
XCode Coordinates for iPad Retina Displays
I just noticed an interesting thing while attempting to update my app for the new iPad Retina display, every coordinate in Interface Builder is still based on the original 1024x768 resolution.
What I mean by this is that if I have a 2048x1536 image to have it fit the entire screen on the display I need to set it's size to 1024x768 and not 2048x1536.
I am just curious is this intentional? Can I switch the coordinate system in Interface Builder to be specific for Retina? It is a little annoying since some of my graphics are not exactly 2x in either width or height from their originals. I can't seem to set 1/2 coordinate numbers such as 1.5 it can either be 1 or 2 inside of Interface Builder.
Should I just do my interface design in code at this point and forget interface builder? Keep my graphics exactly 2x in both directions? Or just live with it?
The interface on iOS is based on points, not pixels. The images HAVE to be 2x the size of the originals.
Points Versus Pixels In iOS there is a distinction between the coordinates you specify in your drawing code and the pixels of the
underlying device. When using native drawing technologies such as
Quartz, UIKit, and Core Animation, you specify coordinate values using
a logical coordinate space, which measures distances in points. This
logical coordinate system is decoupled from the device coordinate
space used by the system frameworks to manage the pixels on the
screen. The system automatically maps points in the logical coordinate
space to pixels in the device coordinate space, but this mapping is
not always one-to-one. This behavior leads to an important fact that
you should always remember:
One point does not necessarily correspond to one pixel on the screen.
The purpose of using points (and the logical coordinate system) is to
provide a consistent size of output that is device independent. The
actual size of a point is irrelevant. The goal of points is to provide
a relatively consistent scale that you can use in your code to specify
the size and position of views and rendered content. How points are
actually mapped to pixels is a detail that is handled by the system
frameworks. For example, on a device with a high-resolution screen, a
line that is one point wide may actually result in a line that is two
pixels wide on the screen. The result is that if you draw the same
content on two similar devices, with only one of them having a
high-resolution screen, the content appears to be about the same size
on both devices.
In your own drawing code, you use points most of the time, but there
are times when you might need to know how points are mapped to pixels.
For example, on a high-resolution screen, you might want to use the
extra pixels to provide extra detail in your content, or you might
simply want to adjust the position or size of content in subtle ways.
In iOS 4 and later, the UIScreen, UIView, UIImage, and CALayer classes
expose a scale factor that tells you the relationship between points
and pixels for that particular object. Before iOS 4, this scale factor
was assumed to be 1.0, but in iOS 4 and later it may be either 1.0 or
2.0, depending on the resolution of the underlying device. In the future, other scale factors may also be possible.
From http://developer.apple.com/library/ios/#documentation/2DDrawing/Conceptual/DrawingPrintingiOS/GraphicsDrawingOverview/GraphicsDrawingOverview.html
This is intentional on Apple's part, to make your code relatively independent of the actual screen resolution when positioning controls and text. However, as you've noted, it can make displaying graphics at max resolution for the device a bit more complicated.
For iPhone, the screen is always 480 x 320 points. For iPad, it's 1024 x 768. If your graphics are properly scaled for the device, the impact is not difficult to deal with in code. I'm not a graphic designer, and it's proven a bit challenging to me to have to provide multiple sets of icons, launch images, etc. to account for hi-res.
Apple has naming standards for some image types that minimize the impact on your code:
https://developer.apple.com/library/ios/#DOCUMENTATION/UserExperience/Conceptual/MobileHIG/IconsImages/IconsImages.html
That doesn't help you when you're dealing with custom graphics inline, however.
|
STACK_EXCHANGE
|
This is the homepage for the free book Elementary Calculus, by Michael Corral (Schoolcraft College).
Latest version (2022-11-22): ElementaryCalculus.pdf
Current changelog: changelog.txt
Code samples from the book: code_samples.zip
Lab assignments: calc_labs.zip
Note: The PDF was built using TeXLive 2020 and Ghostscript 9.56 under
LaTeX source code: calc12book-1.0-src.tar.gz
The book is distributed under the terms of the GNU Free Documentation License, Version 1.3.
You can buy a printed and bound paperback version of the book with grayscale graphics for $12.99 plus shipping at Lulu.com here.
This textbook covers calculus of a single variable, suitable for a year-long (or two-semester) course. Chapters 1-5 cover Calculus I, while Chapters 6-9 cover Calculus II. The book is designed for students who have completed courses in high-school algebra, geometry, and trigonometry. Though designed for college students, it could also be used in high schools. The traditional topics are covered, but the old idea of an infinitesimal is resurrected, owing to its usefulness (especially in the sciences).
There are 943 exercises in the book, with answers and hints to selected exercises.
(2022-02-06) I fixed a font sizing issue with some subsection headers, due to a bad LaTeX font directive on my part, which was causing a slightly larger body font size than there should have been. Fixing that not only allowed me the opportunity to improve some wording I had been unhappy with, but the cumulative effect also resulted in both Appendix A and the GNU Free Documentation License section becoming 1 page shorter each. So the book is now 2 pages shorter with no loss of content! Big thanks to Tamás Zsoldos for finding that font issue, which I had never noticed.
(2021-02-23) After some (welcome and much appreciated) feedback I removed some of the more contentious commentary on other textbooks from the Preface. The original first draft of the Preface, fortunately, never saw the light of day. :) I might set up a page here to discuss more fully my views on calculus—in hindsight the Preface isn't the best place to do that.
(2021-02-20) The printed version on Lulu.com has been updated with the fixes from 2021-02-08. I just received the updated printed paperback version from Lulu today, and unfortunately TeXLive 2020 didn't fix the small compatibility issue that Lulu has when printing the book. The problem occurs only for a few TikZ decoration patterns, for example the figures on p.281: the diagonal line patterns are printed as solid blocks. The weird thing is that in the "print-ready" copy of the PDF that Lulu generated for me to review and accept, those patterns were still there and everything looked fine. So Lulu's printing system has a problem, and I will contact them about it. Overall it's not a big deal, as the number of figures affected is small (and doesn't hurt anything). It just looks nicer when the patterns are there. :) I have verified that this problem does not exist when printing the book on a "normal" laser printer, so I encourage people to print the book themselves if possible (local copy shops typically can do a spiral or other binding for a reasonable price).
(2021-02-08) I finally got around to updating the LaTeX source code to work with TeXLive 2020. This should make it easier for people to compile the book themselves, since the version I had been using (TeXLive 2011) is ancient. A typo was fixed as well. The printed version on Lulu.com is in the process of being updated; I'll post here when that is done (there is a new issue with compatibility).
(2021-01-19) I added some computer lab assignments to the Download section. These are modified from the assignments I gave when teaching Calculus I and II, with Schoolcraft-specific parts (e.g. file locations) removed. I tried to make the instructions as operating system agnostic as possible, but there are some Windows-isms left in (since the labs at Schoolcraft use Windows). The applications used are Gnuplot (for plotting), Octave (for numerical computation) and Maxima (for symbolic computation). Besides the PDF files I included the LaTeX source, in case anyone wants to modify the assignments to suit their needs.
(2021-01-04) I tried my best to improve this web page and perhaps make it slightly less hideous and disorganized. I also added a zip file with the code samples in the book.
(2021-01-01) Added a link for buying a paperback version of the book on Lulu.com. The price is $12.99, which covers the cost of printing and binding. No color graphics, though, except for the cover (which Lulu's template insisted on altering). If there is demand for the full color version (as in the PDF) then I might set that up through Lulu as well.
(2020-12-31) The full version (1.0) is finally available! Chapters 1-5 cover Calculus I, while Chapters 6-9 cover Calculus II. I wanted to finish the book by the end of the year, and I managed to get it in on the very last day of the year. :) For now I just wanted to get the book out, but I will try to fix up the web site so that it doesn't look like a 1992 time warp. :) That will be my next project.
(2020-08-04) Five more exercises were added, bringing the total to 475 in the new 0.5b release. Some wordiness was reduced to make room for the extra exercises, while keeping the page total the same. See the changelog for all the changes. I am now up to Section 7.3 (Hyperbolas) in the second half of the book, and am still on track to finish by the end of the year.
(2020-06-03) I decided to move the material on Improper Integrals to Section 5.5, since that will mirror the finished book. I moved the old Section 5.5 (Average Value of a Function) to what will be Chapter 8. With the change in content for Section 5.5, there are now 470 exercises (up from 461) in the new 0.5a release. Progress on the second half of the book is continuing at a good pace.
(2020-05-27) The first half of the book has been fixed up quite a bit and is now complete; there are now 461 exercises (up from 428). It is released as version 0.5, while the older version has been renumbered to version 0.1 and removed from the site. Work on the second half of the book is continuing. Thanks to the pandemic lockdown here in Michigan I am more focused on the book and remotivated to finish it this year. Stay tuned.
(2020-05-17) Progress has resumed on the book, after too many years of inactivity (life...really). I am in the process of revamping the first 5 chapters with a lot of improvements, typo fixes, corrections, and incorporation of feedback (thanks to the people who have emailed me). I am now up to Section 4.3 (Numerical methods), whose initial state was unsatisfactory. I will expand Section 4.4 (Mean Value Theorem), continue on with Chapter 5, then start the Calculus II part. I will provide periodic updates.
(2016-01-24) Initial version 0.1 is released.
|
OPCFW_CODE
|
A pageview of an article is priced differently than a frontpage. You probably also have pages on your site, that dont show ads at all. So first, you need to consider what pages you want to measure. Typical a frontpage contains more ads and have a higher price per page view than other pages. If you want to measure individual page types/templates, you need to be able to cluster these together in your analytics. Your taxonomy may be the key to this, where you search landingpages starting with 'www.mydomain.com/articles', and know that all articles show 5 ads per page. Doing this enables you to get a better indication of cost rather than just on how many pageviews you lose revenue. We assume you already have statistics running on your site, like Google Analytics or similar. If not, get that up and running first!
As VisuAD only supports Google Doubleclick for Publishers at this time, we describe the method using this. You need to create a new Adunit for measuring Adblockers. It will be invisible on your site, but make sure it cant be used for anything else in your setup of DfP, by setting it as a 'Special ad unit'. The available size on the adunit should be 1x1 pixel. Make an Adunit for each pagetype you want to track (see 1), and implement the code on the templates.
Prepare a transparent 1x1 PNG, called 'AD.png' and create a new order running on each of the new adunits. They should run indefinitely. When it is running reload your page, and make sure it is delivered on the page(s) you're measuring. Identify it in code. Next install an adblocker and confirm that it blocks the adunit. An adblocker usually add CSS-code to the adunit, like 'Display: none'. If you're unsure if it does, you could try making the creative in a strong different color visible to you on the site, so you can visually confirm that the adunit gets blocked. When you have confirmed it gets blocked, you could replace the creative again with the transparent version.
With the Adblocker installed, you also need to confirm that a page load does'nt count as an impression in DfP. Test this on a page that is unavailable for the public, gets very few pageviews or create a new page for it. Loading the page with an adblocker should not increment the number of impressions. This may take a few days to test as DfP is'nt always as realtime as we would wish. This is not failsafe: Some Adblockers may in fact still count as an impression even if the ad is not shown on the page.
If all above is in place, you can now compare the number of pageviews with the number of impressions of the Adunit(s). The difference between the two is the number of adblocked pages. If a page contains more adunits, you should multiply the number with the number of adunits and this number you can use with your CPM to calculate the amount of lost revenue on your site. If you segmented your pagetypes, you will get a very good indication of how much revenue you're losing.
If you want our help on analyzing the impact of Adblockers on your site, please contact us.
|
OPCFW_CODE
|
tmt provision fedora-coreos broke in 1.17
Reproducer: tmt run finish login provision -h virtual -i fedora-coreos -ddd
For main branch log.txt doesn't contain usable info, however terminal gets:
workdir: /var/tmp/tmt/run-032/plans/default/provision
how: virtual
user: root
image: fedora-coreos
memory: 2048 MB
disk: 10 GB
arch: x86_64
Guessed image url: 'https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/36.20220918.3.0/x86_64/fedora-coreos-36.20220918.3.0-qemu.x86_64.qcow2.xz'
Write file '/var/tmp/tmt/run-032/plans/default/provision/step.yaml'.
Write file '/var/tmp/tmt/run-032/plans/default/provision/guests.yaml'.
finish
workdir: /var/tmp/tmt/run-032/plans/default/finish
Removing testcloud instance 'tmt-032-wcjFWTEc'.
libvirt: QEMU Driver error : Domain not found: no domain with matching name 'tmt-032-wcjFWTEc'
libvirt: QEMU Driver error : Domain not found: no domain with matching name 'tmt-032-wcjFWTEc'
Instance "tmt-032-wcjFWTEc" not found in libvirt "qemu:///session". Was it removed already? Should you have used a different connection?
Failed to remove testcloud instance: [Errno 2] No such file or directory: '/var/tmp/tmt/testcloud/instances/tmt-032-wcjFWTEc'
There is nothing usable in log.txt (this is whole file) for 1.17.0 either:
15:02:30 /var/tmp/tmt/run-003
15:02:30 tmt version: 1.17.0 (255c0918)
15:02:30 Read file '/var/tmp/tmt/run-003/run.yaml'.
15:02:30 Run data not found.
15:02:30 Enabled steps: provision and finish
15:02:30 Create an empty worktree (no metadata tree).
15:02:30 Create the data directory '/var/tmp/tmt/run-003/plans/default/data'.
15:02:30 Found 1 plan.
15:02:30 Write file '/var/tmp/tmt/run-003/run.yaml'.
15:02:30
15:02:30 /plans/default
15:02:30 info
15:02:30 environment: {'TMT_PLAN_DATA': '/var/tmp/tmt/run-003/plans/default/data'}
15:02:30 context: {}
15:02:30 wake
15:02:30 discover
15:02:30 Read file '/var/tmp/tmt/run-003/plans/default/discover/step.yaml'.
15:02:30 Step data not found.
15:02:30 Read file '/var/tmp/tmt/run-003/plans/default/discover/tests.yaml'.
15:02:30 Discovered tests not found.
15:02:30 Using the 'DiscoverShell' plugin for the 'shell' method.
15:02:30 status: todo
15:02:30 Write file '/var/tmp/tmt/run-003/plans/default/discover/step.yaml'.
15:02:30 Write file '/var/tmp/tmt/run-003/plans/default/discover/tests.yaml'.
15:02:30 provision
15:02:30 Read file '/var/tmp/tmt/run-003/plans/default/provision/step.yaml'.
15:02:30 Step data not found.
15:02:30 Read file '/var/tmp/tmt/run-003/plans/default/provision/guests.yaml'.
15:02:30 Provisioned guests not found.
15:02:30 Using the 'ProvisionTestcloud' plugin for the 'virtual' method.
15:02:30 status: todo
15:02:30 Write file '/var/tmp/tmt/run-003/plans/default/provision/step.yaml'.
15:02:30 Write file '/var/tmp/tmt/run-003/plans/default/provision/guests.yaml'.
15:02:30 prepare
15:02:30 Read file '/var/tmp/tmt/run-003/plans/default/prepare/step.yaml'.
15:02:30 Step data not found.
15:02:30 Using the 'PrepareShell' plugin for the 'shell' method.
15:02:30 status: todo
15:02:30 Write file '/var/tmp/tmt/run-003/plans/default/prepare/step.yaml'.
15:02:30 execute
15:02:30 Read file '/var/tmp/tmt/run-003/plans/default/execute/step.yaml'.
15:02:30 Step data not found.
15:02:30 Read file '/var/tmp/tmt/run-003/plans/default/execute/results.yaml'.
15:02:30 Test results not found.
15:02:30 Using the 'ExecuteInternal' plugin for the 'tmt' method.
15:02:30 status: todo
15:02:30 Write file '/var/tmp/tmt/run-003/plans/default/execute/step.yaml'.
15:02:30 Write file '/var/tmp/tmt/run-003/plans/default/execute/results.yaml'.
15:02:30 report
15:02:30 Read file '/var/tmp/tmt/run-003/plans/default/report/step.yaml'.
15:02:30 Step data not found.
15:02:30 Report step always force mode enabled.
15:02:30 Clean up workdir '/var/tmp/tmt/run-003/plans/default/report'.
15:02:30 status: todo
15:02:30 Using the 'ReportDisplay' plugin for the 'display' method.
15:02:30 Write file '/var/tmp/tmt/run-003/plans/default/report/step.yaml'.
15:02:30 finish
15:02:30 Read file '/var/tmp/tmt/run-003/plans/default/finish/step.yaml'.
15:02:30 Step data not found.
15:02:30 Using the 'FinishShell' plugin for the 'shell' method.
15:02:30 status: todo
15:02:30 Write file '/var/tmp/tmt/run-003/plans/default/finish/step.yaml'.
15:02:30 action
15:02:30 Insert a login plugin into the 'finish' step with order '90'.
15:02:30 go
15:02:30 provision
15:02:30 workdir: /var/tmp/tmt/run-003/plans/default/provision
15:02:30 how: virtual
15:02:30 order: 50
15:02:30 user: root
15:02:30 key: []
15:02:30 image: fedora-coreos
15:02:30 memory: 2048 MB
15:02:30 disk: 10 GB
15:02:30 connection: session
15:02:30 arch: x86_64
15:02:31 Guessed image url: 'https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/36.20220918.3.0/x86_64/fedora-coreos-36.20220918.3.0-qemu.x86_64.qcow2.xz'
15:02:31 qcow: fedora-coreos-36.20220918.3.0-qemu.x86_64.qcow2
15:02:31 name: tmt-003-GhoOOxTf
15:02:31 Write file '/var/tmp/tmt/run-003/plans/default/provision/step.yaml'.
15:02:31 Write file '/var/tmp/tmt/run-003/plans/default/provision/guests.yaml'.
15:02:31 finish
15:02:31 workdir: /var/tmp/tmt/run-003/plans/default/finish
15:02:31 login: Starting interactive shell
15:02:31 warn: Failed to push workdir to the guest.
15:02:31 Run 'bash' in interactive mode.
Regression in 1.17.0, works for 1.16.0. Affects current 'main' as well.
Note that testcloud instance create file:///var/tmp/tmt/testcloud/images/fedora-coreos-36.20220918.3.0-qemu.x86_64.qcow2 boots without any problem.
I've tried on with python3-testcloud-0.8.1-1.fc36.noarch, libvirt-8.1.0-2.fc36.x86_64, qemu-kvm-6.2.0-15.fc36.x86_64
git bisect shows that first problem appears with 3f79f40 however it is a different - tmt fails to boot in time (error msg Failed to connect in 60s.)
|
GITHUB_ARCHIVE
|
What's a POS??
Mon, 12 May 97 13:15:55 +0200
Harvey J. Stein writes:
> Martin Cracauer writes:
> > The Lisp equivalent could look something like this
> > (somecommand (file-information (glob "foo*.dat")))
> > where file-information puts out a struct that somecommand uses to do
> > its work.
> > No, if `ls -l` were a long-running program, or if I would like to
> > capture the state of the directory before I change somethings, I could
> > do it like this
> > ls -lg foo*.dat > somefile
> > and later, maybe weeks and reboot later
> > somecommand < somefile
> > What would the Lisp POS equivalent look like? How would it be
> > different from plain print/reading the struct?
> I'd presume the POS equivalent would be:
> (setq somefile (file-info (glob "foo*.dat")))
> Later you could do
> (somecommand somefile)
> When you wanted to do:
> rm somefile
> You'd instead do
> (unintern somefile)
This is the approach to make every object persistent. I don't think I
want that nor do I think it can be implemented efficiently, Unix
swapspace hacks or not. And I think it will make live quite hard to
acess data that is larger than the maximum address range of your Lisp
> The system itself would make sure that everything's committed to disk
> at an appropriate time.
That what I' afraid of ;-)
I feel the urgent need to get my hands (not eyes...) on some Lisp POS
systems. I don't have enough experience to judge over any approach and
I don't think I can get it by reading about it. Someone has an unused
copy of Statice for a Symbolics 36xx around?
Kelly has a big head-start here over most of us, no wonder he feels
uncomfortable sometimes. I can only repeat my request for posting of
more code examples so that people will be kicked out of their own
In no way I think your (Harvey) ideas of persistence are worthless
illusions. But I think they need to be discussed. The interface of
your approach is clear, there is none as long as you don't want to
transport objects to different Lisp worlds (what I want). But your
approach raises even more (efficient) implementation questions.
Martin Cracauer <firstname.lastname@example.org>
Fax +49 40 522 85 36
|
OPCFW_CODE
|
August 5, 2022•1,221 words
Niche communities have an upside to being niche: you can (and maybe should) explore options that don't need to scale very much. My own experience with organising people around a small hobby has revolved around tabletop role-playing games (TTRPGs) in my country and city. If you never heard of TTRPGs, all you need to know is that they're the tabletop predecessor of computer RPGs, you can play them face-to-face but also online. For this hobby, the most recent community initiative I've been involved is a Discord server which grew unexpectedly when COVID-19 hit. But even before that spike we were already happy with our little server and its role with promoting TTRPGs in Portugal. In particular, this article is not about how we tailored our community to make the best use out of Discord, but the other way around. How we built specific automations to make Discord work the way we want it to.
I've already talked in a previous article about my first experience with making a bot, which then was built for Slack. What I've learned from that project is how you should open your options to whatever the bot may be aware of and then leverage the API to iterate on useful features. Basically, you'll probably not get your user experience right on the first time, so you need to be open to different solutions for the same problem. Some things are easy like rolling dice for games, which is the most obvious feature for having a bot in a TTRPG server. Other things are harder to pin down, like how to onboard new people into the community, how to manage having many different channels or how to handle spam from account take-overs.
Traffic in our RPG Portugal Discord server is small enough for every person to be welcomed and guided by one of the moderators. That personal touch is key as this community isn't particularly interested in lurkers, we're looking for people that engage with playing RPGs. These games are very conversational, so we've no problems with requiring some basic level of human empathy for people that stay in our server. Furthermore, a niche hobby like that of tabletop RPGs is easily misidentified and people may stumble upon the server by mistake. No, it's not for Grand Theft Auto nor Warcraft nor boardgames. Onboarding is therefore an essential part of our moderation and, without the help of automation, it can become too much of a burden. Finally, much like other similar communities, tabletop role-players are very heterogeneous which makes moderating them particularly like herding cats.
By default, Discord doesn't help very much with these challenges and it was even worse years ago when RPG Portugal started. Everyone could invite new members from anywhere and those people could read and write in too many channels, forcing moderators to try and find where that person was typing. So we created a specific user role for new people which at the time was the best way to manage permissions and even apply timeouts (instead of having to always escalate towards kicks or bans). That idea had to be automated so mods wouldn't have to assign this role by hand. And instead of having to chase down some combination of public bots that could maybe solve our problems, I was already developing and hosting our own bot. Onboarding was the priority and, since dice rolling is relatively easy, I also included it right from the start to get the buy-in from the community while I worked out the kinks on the harder stuff.
I also considered that another important side of the onboarding experience would be having a domain and website, something easy to say if you'd like to let people know about our server in a meet-up. So I built a landing page and a micro-service to feed it the invite code. In practice, this would become the only way to enter our server. Even if invite codes are ultimately public information, the convenient way for everyone is to simply visit rpgportugal.com, check our rules and click the big button. The invite code was initially created and put into the service by hand, but now our bot does this as well. Nobody creates any invites in our server, including the mods. We want every new member to come through the same funnel so they get every chance to not come into our server by mistake.
Another challenge was how to help our members deal with a growing variety of channels. Most roleplayers niche deeper into an already niche hobby and therefore want to see channels for their particular games and not so much for others. By default, Discord has kind of an opt-out option where you can mute any channel that doesn't interest you, but we wanted an opt-in feature. Therefore, we now maintain a list of channels that people won't see unless they give a bot a command to apply a specific role to them that identifies their interest for that specific RPG. Moderators can also do this by hand, of course, but the real value is in members themselves controlling what channels fill their side menu.
Years going by also brought us other challenges, namely all kinds of spammers with tactics that keep evolving, but with time our community also grew to a point where other people are also involved in developing solutions for our particular needs. You can check out all kinds of projects on RPG Portugal's GitHub. One of them includes our home-made attempt to deal with spammers: a bot that watches for a honeypot channel where everyone is warned not to post, lest they get removed from the server. Again, all to remove stress from moderation while improving our own Discord experience for everyone.
For a platform that is supported by its own users, Discord is kind of absent in its relation with its customers. You can be paying for Nitro, be offering your time moderating a community and still have no distinct way to communicate with Discord, get support and give feedback. Therefore, you're kind of forced to leverage whatever tools you have, including their API for developers. Discord basically has two poles of engagement, hyper-branded communication and quick-wins product development. Both are superficial and short-sighted, which can make communities worry about their future.
But the platform happens to present an opportunity for servers that have members learning how to code or experienced developers who want to solve issues that can quickly have a lot of positive impact. Building Discord bots is similar to the low-hanging fruit of making a website (even if hosting is not so accessible). And it doesn't matter if you still know little about coding as long as you're well aware of your unique problems and therefore are motivated and in good position to iterate on them. If you're just starting, I can recommend DiscordJS as a bot framework with good documentation for complete beginners. And eventually this DIY atitude can even enable small communities to keep an eye on self-hosted alternatives to Discord like Matrix chat. There's a lot that small communities can do to stay active and resilient through the internet.
|
OPCFW_CODE
|
from dataclasses import dataclass
from daily_fantasy_sports_models.core.sets import is_disjoint
from daily_fantasy_sports_models.draft_kings.nba.models.contests.salary_cap.player_pool.player import Player \
as PlayerPoolPlayer
from daily_fantasy_sports_models.draft_kings.nba.models.core.position import Position
class InvalidLineupError(ValueError):
pass
class DuplicatePlayerError(InvalidLineupError):
pass
# "Lineups...must include players from at least 2 different NBA games"
class MustIncludePlayersFromAtLeast2DifferentGames(InvalidLineupError):
pass
# "a valid lineup must not exceed the salary cap of $50,000"
class MustNotExceedTheSalaryCap(InvalidLineupError):
pass
class InvalidPlayerPosition(InvalidLineupError):
pass
# https://www.draftkings.com/help/rules/4
# In salary cap contests, participants will create a lineup by selecting players listed in the Player Pool.
# Each player listed has an assigned salary and a valid lineup must not exceed the salary cap of $50,000.
# Lineups will consist of 8 players and must include players from at least 2 different NBA games.
@dataclass(init=True,
repr=True,
eq=True,
order=False,
unsafe_hash=False,
frozen=True)
class Lineup: # pylint: disable=too-many-instance-attributes
POINT_GUARD_POSITIONS = frozenset({
Position.POINT_GUARD
})
SHOOTING_GUARD_POSITIONS = frozenset({
Position.SHOOTING_GUARD
})
SMALL_FORWARD_POSITIONS = frozenset({
Position.SMALL_FORWARD
})
POWER_FORWARD_POSITIONS = frozenset({
Position.POWER_FORWARD
})
CENTER_POSITIONS = frozenset({
Position.CENTER
})
# G (PG,SG)
GUARD_POSITIONS = POINT_GUARD_POSITIONS.union(SHOOTING_GUARD_POSITIONS)
# F (SF, PF)
FORWARD_POSITIONS = SMALL_FORWARD_POSITIONS.union(POWER_FORWARD_POSITIONS)
# Util (PG,SG,SF,PF,C)
UTILITY_POSITIONS = GUARD_POSITIONS.union(FORWARD_POSITIONS.union(CENTER_POSITIONS))
point_guard: PlayerPoolPlayer
shooting_guard: PlayerPoolPlayer
small_forward: PlayerPoolPlayer
power_forward: PlayerPoolPlayer
center: PlayerPoolPlayer
guard: PlayerPoolPlayer
forward: PlayerPoolPlayer
utility: PlayerPoolPlayer
def __post_init__(self):
lineup_players = [
self.point_guard,
self.shooting_guard,
self.small_forward,
self.power_forward,
self.center,
self.guard,
self.forward,
self.utility
]
if 8 != len(set(
map(
lambda contest_player: contest_player.player,
lineup_players
)
)):
raise DuplicatePlayerError()
if 2 > len(set(
map(
lambda contest_player: contest_player.game_id,
lineup_players
)
)):
raise MustIncludePlayersFromAtLeast2DifferentGames()
if 50_000 < sum(
map(
lambda contest_player: contest_player.salary,
lineup_players
)
):
raise MustNotExceedTheSalaryCap()
for eligible_positions, player in {
Lineup.POINT_GUARD_POSITIONS: self.point_guard,
Lineup.SHOOTING_GUARD_POSITIONS: self.shooting_guard,
Lineup.SMALL_FORWARD_POSITIONS: self.small_forward,
Lineup.POWER_FORWARD_POSITIONS: self.power_forward,
Lineup.CENTER_POSITIONS: self.center,
Lineup.GUARD_POSITIONS: self.guard,
Lineup.FORWARD_POSITIONS: self.forward,
Lineup.UTILITY_POSITIONS: self.utility
}.items():
if is_disjoint(eligible_positions, player.positions):
raise InvalidPlayerPosition()
|
STACK_EDU
|
The Winbox Slot Online is one of the most popular slots games available today. It is a game of chance where players can win real money and prizes. In this game, players are pitted against the house to see who can win the most. Players who succeed in this game are rewarded with cash prizes and bonuses. While the odds of winning the game are stacked against the player, there are strategies that can be employed to increase the chances of success. One such strategy is the Martingale System.
[Winbox Slot Online](https://bit.ly/46nOLb4) has been around for centuries and is still a popular strategy among players. It is a system that requires a player to double their bet each time they lose and continue to do so until they win. This system is named after French mathematician and gambler, Jean le Rond d’Alembert, who first came up with the concept. The system works on the assumption that the player will eventually win, so the amount bet will eventually exceed the amount of money lost.
### How the Martingale System Works
The Martingale System is a very simple system to understand. The player begins with a small bet and if they lose, they double their bet. This process continues until the player wins and they are able to collect their winnings. The idea behind this strategy is that in the long run, the player will eventually win and the total amount of money they have bet will exceed their losses.
The Martingale System is based on the idea that the player will eventually get lucky and the odds will eventually be in their favor. This is not necessarily the case, however, as the house always has an edge and there is no guarantee that the player will eventually win. That said, the Martingale System can be an effective strategy when applied correctly to Winbox Slot Online.
### Advantages of Using the Martingale System
The Martingale System offers a number of advantages to players. The most obvious advantage is that it gives players the opportunity to win more money than they would otherwise. As the player continues to double their bet, the amount of money they can win increases exponentially. This makes the system an attractive option for players looking to increase their winnings.
The Martingale System also reduces the amount of time spent gambling. As the player continues to double their bet, they will eventually reach a point where they have won and can collect their winnings. This allows them to save time and focus on other activities.
Finally, the Martingale System is a good way to learn the game of Winbox Slot Online. As players continue to double their bets, they are able to gain experience and learn more about the game. This can be invaluable in the long run and can help players increase their chances of success.
### Disadvantages of Using the Martingale System
While the Martingale System can be an effective strategy when applied correctly, there are also some disadvantages to consider. The most obvious disadvantage is that it requires a large bankroll in order to be successful. As the player continues to double their bets, the amount of money they need to have in their bankroll increases exponentially. If the player runs out of money before they win, they will be unable to collect their winnings and will end up losing all of their money.
The Martingale System also requires a large amount of patience and dedication. As the player continues to double their bet, they will be required to wait for a long time before they win and can collect their winnings. This can be difficult for some players and can make the system less appealing.
Finally, the Martingale System is based on the assumption that the player will eventually win. While this may be true in the long run, there is no guarantee that the player will win in the short run. This means that the player may end up losing a large amount of money before they are able to collect their winnings.
The Martingale System can be an effective strategy when applied correctly to Winbox Slot Online. It offers players the chance to win more money than they would otherwise and can reduce the amount of time spent gambling. However, it also requires a large bankroll and a great deal of patience and dedication. Players should take these factors into consideration before using the system.
|
OPCFW_CODE
|
iOS App Store - retiring an app - Can I hide the app from new users while making updates available to all existing users
I have an app that needs to be retired, as it is no longer self-sustaining, due to server costs, etc. However, there are still active users of the app, who I don't want to burn on this shutdown (eg. suddenly shutting down and permanently cutting off access to their data)
To support a "polite" shutdown, I'll need to update the app to provide a means of exporting all their data (which is stored partially locally in the app, and partially on the server... so merely turning on iTunes File Sharing is not sufficient) to the server, where we can then email them a zip of all their content. This updated version and supporting server will be available for several months to give them a window to migrate everything out, before ultimately pulling the plug on everything in a few months. However, I don't want new users to install and start using the app during this time period (or at least minimize this as much as possible), as the app is effectively a "dead man walking".
So, my question is then: is it possible to push out app updates to existing installs, while preventing new installs from the iOS App Store?
I've already seen references to ways to "hide" an app from the app store (eg. setting its status to "Removed From Sale", setting its availability date into the future, or restricting its availability to only a single small country like St Lucia), but none of these makes clear if that restricts just new downloads, or if that also restricts the ability to download the app update... which is absolutely critical to my shutdown plan.
Thanks!
I don't think there's a way to do that. As a less-optimal alternative, you could change the App Store description to explain that the app is going away, and maybe put out an update that detects a new user (no account on server, maybe?) and pops up a message explaining the situation when they run your app. But as far as I know, any restrictions on downloading will also restrict updates.
I'll definitely be taking those steps as well, just trying to minimize how many people find themselves in that situation in our app... especially as their first experience. Was hoping maybe Apple had documented this somewhere, or someone had first-hand experience with how updates work in this scenario.
I don't think App Store allows for this kind of behavior. What you can do? The simples way is to store flag in user defaults that represents if this app is already installed. If its not, you just show alert without possibility to be closed.
Thanks for the feedback... I realize I can restrict their access after they've installed it, but my hope is to prevent them from ever installing it in the first place, since installing a new app that is immediately useless is not exactly a good user experience, and would reflect poorly on the developer, me :) And with other apps on the store under that same account, I'd also like to minimize the number of inevitable "this app doesn't even work" 1-star comments due to releasing a disabled app for new users. Just trying to do my best to save face while winding this down.
|
STACK_EXCHANGE
|
We’re a high webpages and funding to possess black-jack players away from all of the accounts, and a crucial source of guidance https://vogueplay.com/ca/online-casino-canada-legal/ for everybody regions of the brand new black-jack industry. We function articles on the basic approach, card-counting, and you may blackjack study. Simultaneously, we provide an on a regular basis updated blog to the current in the black-jack reports, comments, and you will prominent betting sites. Red-dog Gambling enterprise also offers many bonuses and you can offers especially tailored for on the web blackjack participants.
- Within the single-deck black-jack, you get chance that will be one of many low house edges offered at the Bovada.
- Rhode Island need to have courtroom and you will regulated gambling enterprises which have black-jack games offered by the beginning of 2024.
- Minimal wagers once you play on the web black-jack can be as reduced while the 0.10.
- After you find out the principles, it’s an issue of gathering their betting electricity.
Away from greeting bonuses so you can lingering offers, we seek to improve your on line black-jack experience in enjoyable perks. Red dog Gambling enterprise also provides a diverse listing of on the internet blackjack differences, as well as totally free local casino blackjack, vintage Black-jack and you can book twists for the video game. Speak about our online game options to find a favourite adaptation and find out the newest ways to take advantage of the antique card video game.
Totally free Black-jack Game For fun As well as for A real income
Of several sites purport to provide courtroom and you may secure blackjack out of to another country, but You bodies wear’t check these businesses’ app. In addition to, when there is an issue with a deposit otherwise withdrawal, you’ll bed better-knowing you are dealing with a licensed and you will legitimate United states team. In case your county features but really to help you legalize casinos on the internet, you might however gamble online blackjack during the one of the of several personal casinos on google Play. It is advisable to consult with your black-jack means card to the direct game you are playing when breaking and increasing. While you are playing online black-jack, it’s as easy as keeping various other web browser monitor unlock. As well as, to try out free blackjack games given on the internet while using your own approach card is a superb way to practice black-jack instead of risking their dollars.
United states Online Roulette Without delay
Although not, having a choice of other blackjack variations is never an adverse tip. Look out for your choice of that it enjoyable dining table online game earlier so you can signing up. You need to be careful in the event the specialist’s upcard are possibly cuatro,5,six. If the specialist have a soft hand, he is likely to decide to struck instead of might boost their danger of delivering nearer to a score away from 21.
Play Totally free Online game On line
Therefore, if the matter goes highest, your – the newest skilled restrict – improve your choice within the anticipation of your beneficial hand. What can happen, but not, and often will come is you might possibly be denied services by the gambling enterprise. A gambling establishment try a corporate, whatsoever, and supplies the authority to refuse provider for you.
A real income On the web Blackjack Vs Free online Black-jack
You can then put a gamble, hit ‘Deal’, and you will play blackjack as it’s meant to be played. Black-jack is just one of the gambling games that require some knowledge and practice getting played optimally. All demanded gambling enterprises listed below are genuine internet sites one remain participants secure. It esteem playing regulations and you can years constraints, offering a good a real income gambling experience with a safe environment seriously interested in players’ passions and shelter on line. Our finest see for the best on line black-jack casino is Bovada, simply because of their level of blackjack versions, live-agent video game, big bonuses and you will top quality program. Extremely on line black-jack casinos appeared here render totally free black-jack with a good demo function.
To start with, it will not be sure a victory throughout the one kind of black-jack lesson. Rather, it tips the house virtue ever so somewhat to your benefit, and therefore means that you’ll winnings constantly across the long haul. We could make a complete page on the blackjack procedures alone.
Blackjack Tits Notes
Specific participants demand wagering for the several give increases its potential profits, giving by themselves a far greater chance of beating the newest specialist. On line multi-give blackjack game run using haphazard count generator app, definition ways such as card counting wear’t functions. Alternatively, i encourage having fun with equipment including strategy notes to make smarter bets within this game. Discover an on-line black-jack gambling establishment, register, make a deposit, and pick a bona fide currency black-jack online game. You will find a long list of simple tips to enjoy black-jack on the web for real currency subsequent within the page. Only at Casino.org, you have access to 210+ online blackjack games as well as 18,500+ most other 100 percent free casino games, providing you with a choice of additional versions.
|
OPCFW_CODE
|
Deep learning has become an important topic across many domains of science due to its recent success in image recognition, speech recognition, and drug discovery. Deep learning techniques are based on neural networks, which contain a certain number of layers to perform several mathematical transformations on the input. A nonlinear transformation of the input determines the output of each layer in the neural network: $x \mapsto \sigma(W x + b)$, where $W$ is a matrix called the weight matrix, $b$ is a bias vector, and $\sigma$ is a nonlinear function called the activation function.
Each of these variables contain several parameters, which are updated during the training procedure of the neural network to fit some data. In standard image classification problems, the input of the network consists of images, while the outputs are the associated labels. The computational cost of training a neural network depends on its total number of parameters. A key question in designing deep learning architectures is the choice of the activation function.
Figure 1. A rational activation function (red) initialized close to the ReLU function (blue).
In a recent work, Oxford Mathematicians Nicolas Boullé and Yuji Nakatsukasa, together with Alex Townsend from Cornell University, introduced a novel type of neural networks, based on rational functions, called rational neural networks . Rational neural networks consist of neural networks with rational activation functions of the form $\sigma(x)=P(x)/Q(x)$, where $P$ and $Q$ are two polynomials. One particularity is that the coefficients of the rational functions are initialized close to the standard ReLU activation function (see fig 1) and are also trainable parameters. These type of networks have been proven to have higher approximation power than state-of-the-art neural network architectures, which means that they can tackle a variety of deep learning problems with fewer number of trainable parameters.
Figure 2. Two-dimensional function learned by a rational neural network (left) and loss function during training compared with standard architecture (right).
Rational neural networks are particularly suited for regression problems due to the smoothness and approximation power of rational functions (see fig 2). Moreover, they are easy to implement in existing deep learning architectures such as TensorFlow or PyTorch . Finally, while neural networks have applications in diverse fields such as facial recognition, credit-card fraud, speech recognition, and medical diagnosis, there is a growing need for understanding their approximation power and other theoretical properties. Neural networks, in particular rational neural networks, have the potential to revolutionize fields where mathematical models derived by mechanistic principles are lacking .
1. Boullé, Nakatsukasa, Townsend, Rational neural networks, NeurIPS 33, 2020.
2. GitHub repository, 2020.
3. Boullé, Earls, Townsend, Data-driven discovery of physical laws with human-understandable deep learning, arxiv:2105.00266, 2021.
|
OPCFW_CODE
|
My professor was telling me about a bottleneck in his replicator dynamic models in matlab. Basically its somewhat impossible to do 3+ player games because the dimensionality makes it difficult to use matrix calculations for expected payoff.
He said that the expected payoff is necessary to calculate when making models around Fictitious Play or the Replicator Dynamic. And normally that's done by matrices.
Here's an example of what I'm talking about.
The bargaining problem: There is a pie to be divided among n players. Each player can claim any portion of the pie, however if the claims are incompatible (more than 100% of the pie is claimed) payoffs are zero for everyone.
One way of modeling this problem for 2 players is to have each player calculate their expected payoffs and put that in a matrix. It'd look something like this if we only allowed 3 possibilities each:
Running the replicator dynamic on it looks something like this.
You can calculate expected payoff for this easily using some basic matrix multiplication. However, if you did it linearly (allowed all possible combinations/claims) you could just use a series of integrals to calculate your expected payoffs.
You could do it with a 3-dimensional graph where X,Y, and Z range from 0->1. The salient claim is each claiming 1/3, naturally. But by holding one value constant X=1/2 you can calculate expected payoff by integrating from 0 to 1/2 along the X axis.
If we increased it to a 4-dimensional graph we'd have to maybe separate it into the unique combinations of X,Y,Z,W to first reduce to 3 dimensions and then integrate to find expected payoff.
Is that making sense? I'm curious to about if that has been done or if I'm completely on the wrong track.
I've been planning a sting operation on corrupt university-officials and looking for information. If I'm able to gather evidence, I plan to send it to the local news agency and thereafter to the police.
The idea behind a Separating Equilibrium is that it's an incentive to which people with different private information respond differently.
How can I create an SE to find out (preferably, anonymously) which corrupt official is accepting bribes in the local government university?
P.S. Please be to the point. No moralizing and impertinent facts or advice.
I have a chance-related game theory program I'm working on where there are two choices for a given player: A or B, and the turns also have a time component (a random amount of time is taken off after each turn, until the time remaining is 0).
I'm struggling with how to code the expecti-minimax in R.
Can someone help explain to me how the game tree works with random outcomes, and how to set up the code in R? I have a function that governs the expected value of each turn (produces both a point total and time taken off the game clock), and don't know how to recursively call it to find the optimal move at any given point.
Happy to share code if need be.
|
OPCFW_CODE
|
With the increase in the demand for fast and quality-driven releases of the application in the market, testing plays a crucial part in ensuring the above. However, to meet the demand of the competitive market, manual testing does not serve its purpose.
This is the reason why automation testing has gained a buzzword in the IT sector all over the world. With the best practice of automation testing, the process has become more sophisticated and advanced to ease the workload of the rest of the team with the delivery of accurate results.
In this present, best practices of automation testing for 2023 will be discussed explicitly, which will help to gain more insight and ways to ease the product development cycle.
Here, we will start with automation testing
Introduction to automation testing
Automation testing came into the picture when testing an application became a challenge, evident by filling multiple input fields manually. This causes testers to be worn out and causes a miss out of errors and bugs.
Automation testing is the approach for testing applications and software products with the help of special tools and platforms to lower human effort and error and enhance quality.
In simple words, automation testing makes it possible to run the test for the application to find a bug with the intervention of humans. Traditionally, a manual test involves an examination of several steps to check whether things are behaving as expected. However, automation testing is created once and can be run whenever needed.
Need for automation testing
Tests like functional or regression testing can be executed manually; however, it has the greater advantage of doing it automatically. Some of the reasons for using automation testing are as follows:
- It improves scale: It transforms the scale through which the test team operates because tests can be run 24 hours a day, seven days a week.
- Reporting ability: Automation testing makes use of crafted test cases for different scenarios. Such scripted sequences provide comprehensive reports, which won’t be possible manually.
- Quick bug detection. Automation testing helps detect bugs easily by making the process easy as it can evaluate the wider test coverage.
- Quick delivery: Software developers have the pressure to release new features, and bugs can wipe out gains in seconds. With automation testing, regression testing is done quickly.
- Releases streamlined: With automation testing, the application can be re-tested during development. For example, whenever a new code is pushed, a smoke test can be run. Thus, the release process becomes efficient and streamlined.
- Frequent testing: Automation testing allows repeated testing where the same test can be done over and over again.
Practices in automation testing
For automation testing, the choice of the right tools, technical knowledge, and test automation framework is required to get an accurate rate. In order to perform automation testing successfully, it is vital to get knowledge of its best practices.
Below we will discuss some best practices in automation testing, which will help the tester to execute and organize automated tests with maintaining a balance between automated and manual tests.
Make a better plan for automation testing
Automation testing is no exception, which also requires a robust strategy. For automation testing, it is essential to plan the test carefully by defining the scope of automation and its priorities while analyzing any risks and available resources.
The tester needs to clearly define and write the test cases in such a way that it is self-contained and easy to understand. Further, automation needs resource acquisition in terms of software and machine, which can turn gridlock into a need for time and resources. Therefore, planning is vital to avoid issues of overrunning the cost and schedules.
Be prompt with testing during development
In the process of development of the application, on the addition of new features or changes, identification of bugs is important. It is important to identify the bug early so it can easily be fixed. However, if the bug is found late, it can impact the functionality of the application and productivity.
Make use of tools for scheduling testing automatically
There are several tools available that can be used to schedule the automatic scheduling. For example, LambdaTest is a cloud-based cross-browser testing platform through which scheduling can be done. This helps to make sure that code is continuously being tested.
Set up alerts for the failure of the test
In automation testing, the identification of bug and fixing it is the major aspects of the successful running of the application. Alert should be set up on identifying the bug during the automation testing as it will help the tester decide whether to abort the current test or complete it.
For example, if a serious bug is identified that can hamper the security of the application, an alert will help in taking quick action to abort and fix it soon.
Need to know about the test that can be automated
Automating all tests at once is difficult; hence it is crucial to understand which tests should be automated. Below are the types of such tests highlighted.
- Tests causing failure due to human error.
- Monotonous test
- Repetitive test
- Tests unable to be done manually
- High-risk test
- Test requiring multiple data sets
- Test required to be run in different hardware and software platforms
Make the right choice for the test automation approach
The development of automation test cases requires an appropriate automation approach. According to the requirement of the test, the choice can be made between the five kinds of automation framework: hybrid framework, library architecture, modular-based, data-driven, and linear.
Before choosing the test automation approach, it is essential to perform risk analysis on the project, involve the right people, and review the test artifact with development.
Right choice for automation testing tools
Making the right choice of automation testing tools depends on the requirement of the testing. Many options available in the market provide automation testing, like Selenium, Appium, and others. The testing team should make an automation tool strategy on the resource and requirements.
It is crucial to note the choice of automation tools should solve your issues. It is not important to rush with the best automation testing tool. Rather, investigate the test automation framework needed for configuration and proceed with the selection of software that possesses such key functionalities.
Require dividing the automation testing effort effectively
In the practice of automation testing, the development of different tests depends on the skills of the QA engineers. It is crucial to recognize the experience and skills of team members and accordingly divide the automation testing efforts. For example, writing automated test scripts needs knowledge of the scripting language. Thus, such work should be divided among experts with sound script languages.
Maintenance of records is needed for efficient debugging
In the situation of failure of tests, it is significant to keep a record of such so that testers can recognize and mark the rationale for the test failure.
It is advisable to choose automation testing tools, which have an in-built mechanism for taking browser screenshots for each test step. On LambdaTest, each test executed is recorded in the remote machine. Bug tracking and Bug reporting are important practices that need to be followed by the QA team in the future.
Automation testing is done for the creation of high-quality software, which is guided in the present blog based on best practices.
LambdaTest provides all the features that help in following the best practices. Along with this, LambdaTest lets you perform many types of testing such as continuous testing, unit testing, user acceptance testing, and many more.
Some of them are- it allows to divide the test into individual test parts, supports keyword-driven testing on blazing-fast online Selenium Grid, and supports different scripting languages like PHP, Laravel, Ruby, C#, Python, and Python.
|
OPCFW_CODE
|
Is it discouraged to use Java 8 parallel streams inside a Java EE container?
Given that spawning threads in Java EE containers are discouraged. Would using the Java 8 parallel streams, which may spawn threads, inside Java EE be discouraged too?
The restrictions pertain to transparent distribution of components across multiple application servers. It is not correct (but should be safe) to break the JEE component contract IFF the feature set e.g. transparent distribution is not a concern.
@RobertHarvey You should probably reopen.
"Would it be an issue?" That's not even a question.
@RobertHarvey using threads in JavaEE is dicouraged - Java 8 introduces parallel streams (which use threads in the background). Can we use parallel streams in JavaEE or is it discouraged too? I think that is a fair and interesting question. I have rephrased a bit - it can probably be improved further.
Primarily opinion-based. Couldn't the question at least have framed itself by restating the reasons why spawning threads in Java EE containers are discouraged, and then asking how those reasons are addressed in Java EE? It might be too broad, but at least it's an actual question.
@RobertHarvey ? Threads are discouraged (or even forbidden by some specs) because threads are meant to be managed by the application server itself (see first link on the right) - using parallel streams may "break stuff" - or not. And that is the question. Either it may or it may not "break stuff" based on specs: that is not an opinion. Just my 2 cts. Your call - I'm off for the day! ;-) (and feel free to delete the noise after making your decision)
Following the argument of assylias, we would like to understand the relation between parallel streams with J2EE containers, would that "break stuff"?
Yes, it would break stuff. The security and transactional context are handled by ThreadLocal variables, for example. And JPA entities aren't thread-safe. So spawning threads, whatever the way you spawn them, will break the security and transactional handling. The Java EE 7 specification introduces special executors that are supposed to be used if you want to execute tasks in threads in a Java EE environment. But Java EE is lagging behind Java SE, and is not ready for parallel streams yet.
Its a good question and has been answered by the Java EE engineers. They revert to sequential processing for all parallel operations. You can find the discussion on the<EMAIL_ADDRESS>mailing list.
Definitely a question that I could use the answer to. Can anyone provide a link for the discussion @edharned mentioned? Google's not showing much.
@Shorn: http://mail.openjdk.java.net/pipermail/lambda-dev/2013-April/009334.html
EDIT See alternate answer from andrepnh. The below may have been the plan, but it doesn't appear to have played out that way in practice.
The way I read it from the lambda-dev mailing list discussion mentioned in the comments: it's not discouraged the way spawning threads is - but won't do anything much for you in a Java EE context.
From the linked discussion:
the Java EE concurrency folks had been already talked through
this, and the current outcome is that FJP will gracefully degrade to
single-threaded (even caller-context) execution when running from within
the EE container
So you're able to safely use parallel streams in a procedure or library that runs in both contexts. When it's run in a SE environment, it will make with the magical parallel shenanigans - but when it's run in an EE environment it will gracefully degrade to serial execution.
Note: the phrase quoted above is future tense - does anyone have a citation for some definitive documentation?
Graceful degradation of parallel stream processing to single threaded isn't yet part of Java EE Concurrency. So it's still unsafe to use parallel streams in Java EE 8. There are plans to address it in the future within Jakarta EE Concurrency: https://github.com/eclipse-ee4j/concurrency-api/issues/46
A heads up, the graceful degradation to single thread is not available. I also thought it was because of Shorn's answer and that mailing list discussion, but I found out it wasn't while researching for this question. The mechanism is not in the Java EE 7 spec and it's not in glassfish 4.1. Even if another container does it, it won't be portable.
You can test this by calling the following method:
@Singleton
public class SomeSingleton {
public void fireStream() {
IntStream.range(0, 32)
.parallel()
.mapToObj(i -> String.format("Task %d on thread %s",
i, Thread.currentThread().getName()))
.forEach(System.out::println);
}
}
And you'll get something like:
Info: Task 20 on thread http-listener-1(4)
Info: Task 10 on thread ForkJoinPool.commonPool-worker-3
Info: Task 28 on thread ForkJoinPool.commonPool-worker-0
...
I've also checked glassfish 4.1.1 source code, and there isn't a single use of ForkJoinPool, ForkJoinWorkerThreadFactory or ForkJoinWorkerThread.
The mechanism could be added to EE 8, since many frameworks will leverage jdk8 features, but I don't know if it's part of the spec.
Couldn't gracefully degradation be achieved in a portable way by setting the java.util.concurrent.ForkJoinPool.common.parallelism system property to 1?
|
STACK_EXCHANGE
|
MySQL installation problems are summarized below in order to later re-installation may be a reference to similar problems.
Installation into the installation steps in the last two error dialog box will pop up the following:
The above error occurs because the password is wrong, if the password is entered correctly the problem does not occur, and if this step below enter the correct password, you can install correctly.
But when I installed the password is obviously a 123, but now when the input 123 is not installed, how is it?
To solve the problem we must enter the correct password, but how if you do not know the password or the password is not correct, then how to do it?
Then remove the registry, reinstall, but no.
(Reinstall - uninstall - clean up the registry) after the implementation of several not install correctly, so only checked on the net.
About Installing MySQL Server Instance Config Wizard error when, on-line, many say, are as follows:
The first argument:
Because before you had installed mysql, uninstall Shihai retains some configuration files.
Click Retry (retry) to see if you can. Otherwise, click Skip (skip) and then click cancel to exit, and then click on the Start menu MySQL Server Instance Config Wizard to reconfigure mysql.
In Figure 2, there are three places to enter a password, that you have installed mysql, you are in the first text box to enter the original root password, followed by two text box to enter root's new password on it.
If this does not then reload a MySQL.
Heavy Note: All the best to delete the original files, if necessary, can be clear about the registry.
"The original root password?" I entered, but no.
Then I clean up the registry uninstall, reinstall MySQL, but does not work. Why? I then check the Internet.
The second argument:
"Step one: Open the" Start "to run the program in the MYSQL MySQL Command Line Client to enter your password [that is set during installation]
Step Two: mysql> enter: UPDATE mysql.user SET Password = OLD_PASSWORD ('password')
The third step: at the prompt -> enter: WHERE Host = 'localhost' AND User = 'user name';
Enter after the prompt: Query OK, 0 rows affected (0.16 sec)
Rows matched: 0 Changed: 0 Warnings: 0
Do not think that over, there are ~
The fourth step: mysql> enter: FLUSH PRIVILEGES
Enter after the prompt: Query OK, 0 rows affected (0.19 sec)
Start the mysql service, log on to mysql database
Enter the command is:
[Root @ localhost root] # / usr / bin / mysql-u root-p *
(Based on lniux version compatibility issues,-p input after the contents will vary)
-P: root password for the database administrator (typically enter the password)
-P: Specifies the database name to be used then, Enter password: Enter the database password here (redhat 9.0 version)
In the redhat 9.0 version, if the root directly into the database administrator's password, an error will occur 1045.
Methods provided for the A1, in the sense of trouble with the process, so stop the validation.
In fact, or a word, version compatibility issues. 9.0 use and service mysqld start myisamchk can know.
These are two days to play little bit of problems encountered in MySQL
99% online this argument, really do not understand, 163 blog, javaeye, csdn, there are many other technical websites are the articles, with no innovation, just copy to their own blog or website, not to be a real operating instructions, boring.
The above question is: when I installed the password is 123 fill, but can not enter into, the problem of how to enter without a password MySQL Command Line Client, as shown in Figure:
See, first of all is to enter a password, how can this way, no password and can not enter the MySQL client console, so the second method does not work.
These two do not solve my problem, saw nothing but their own, and I remember saying: "There is always a successful way," I think I have to fix it.
Some say that the initial password is 123456, how possible. I tested one by one, finally, I do not enter any password, enter directly in the Figure 2 New root Password and Comfirm, completed the implementation of the correct result.
If the installation several times unsuccessful, it could be passwords, your password is test previously used several times, and if not, would not enter the Current root Password password, the password directly into the following two items can be.
|
OPCFW_CODE
|
So, after watching all of today's 9/11 anniversay specials on TV, I have been inspired to create a program centered around this tragic date. I have been scouring the internet for awhile, but I haven't found exactly what I'm looking for. I want to start out with a teaser-ish piece (maybe it's someone describing the events as they happen, or looking back on the moment they saw it happen). Then I want three more pieces: one about anger, paranoia, and despair. The point of my program is to present the emotions evoked by 9/11, so basically like a cause and effect kind of thing. Any suggestins or help is GREATLY APPRECIATED!!!!
Uděláme to ústně!!!
i know a few
are you looking for general war poems? or just for this event? because i have plenty war poems that dont reference any certian war.
Perhaps (and I don't know how legit this is, because there are no official rules on what poetry is & where it can come from) you could take an excerpt from a book like Jonathan Safran Foer's "Extremely Loud and Incredibly Close" or "Women at Ground Zero" and use that as the teaser. Or, since at least where I compete, poetry doesn't have to be published, you could take any survivor account you can find.
There's one that's kinda cool I found online that doesn't necessarily fall under what you're looking for but might be interesting to take a look at:
"Who Am I" by Kimberly Dunne http://www.dtl.org/ethics/article/se...poems-9-11.htm
http://poetry.about.com/library/weekly/aa022002d.htm This one might fall under despair...
http://poetry.about.com/library/weekly/aa092501j.htm This one would provide a contrast to the darker emotions of anger, paranoia & despair. You may find it trite, but I think it's actually quite an important message.
Good luck! And thanks for doing something to commemorate this event
A girl from my district (who actually made sems this year in duo at nats) did this piece as a DI, but it could make a beautiful poetry.
It's: Marian Fontana — A Widow's Walk
Really quite an amazing piece.
hows the program going any luck?
Not really. I've found a few that will work, but I haven't found any that I've fell in love with. I think I'm being too picky. I guess I want the program to focus more on the emotional aftermath of the event, not the event itself but too still mention it.
Originally Posted by Ramsey Hendricks
Uděláme to ústně!!!
|
OPCFW_CODE
|
#include "Entity.h"
#include "Element.h"
#include <algorithm>
namespace Sakura
{
const std::vector<std::string> blacklistedEntityAttributes={"x","y","id", "originX","originX","width","height"};
Entity::Entity(const std::string& nm) : name(nm), useNodes(false)
{
id=-1;
}
Entity::Entity(const std::string& nm, Element* e): name(nm), useNodes(false)
{
id=-1;
LoadFromElement(e);
}
Entity::Entity(Element* e)
{
id=-1;
name = e->name;
LoadFromElement(e);
}
void Entity::LoadFromElement(Element* e)
{
for(size_t i=0; i < e->children.size(); i++)
{
Element* currentElement=e->children[i];
if(currentElement->name == "node") //we found a node, so this one can use nodes
{
useNodes = true;
if(currentElement->HasAttr("x") && currentElement->HasAttr("y")) //it contains attributes, so it specifies nodes
nodes.push_back(std::pair<int32_t,int32_t>(currentElement->AttrInt("x"),currentElement->AttrInt("y")));
}
}
LoadBlacklistedElements(defs, blacklistedEntityAttributes, e);
//we take these out of the defs because they are used often in editor
x=e->AttrInt("x",0);
y=e->AttrInt("y",0);
id=e->AttrInt("id",-1);
if(e->HasAttr("width"))
width=e->AttrInt("width");
if(e->HasAttr("height"))
height=e->AttrInt("height");
if(e->HasAttr("originX"))
originX=e->AttrInt("originX");
if(e->HasAttr("originY"))
originY=e->AttrInt("originY");
}
Element* Entity::SaveToElement(void)
{
Element* returnV=new Element(name);
returnV->attributes = defs;
returnV->SetInt("x",x);
returnV->SetInt("y",y);
if(id >= 0)
returnV->SetInt("id",id);
if(useNodes)
{
returnV->children.resize(nodes.size());
for(uint32_t i=0; i < nodes.size(); i++)
{
Element* nodeElement=new Element("node");
nodeElement->SetInt("x",nodes[i].first);
nodeElement->SetInt("y",nodes[i].second);
returnV->children[i]=nodeElement;
}
}
if(width.has_value())
returnV->SetInt("width",width.value());
if(height.has_value())
returnV->SetInt("height",height.value());
if(originX.has_value())
returnV->SetInt("originX",originX.value());
if(originY.has_value())
returnV->SetInt("originY",originY.value());
return returnV;
}
}
|
STACK_EDU
|
How Can I Use (Ruby) RGeo to Transform (Unproject) Coordinates
I started with How can I transform the coordinates of a Shapefile? .
The response there started me on [what I think is] the right track, but I still haven't been able to solve my problem.
One issue is that I haven't found the correct projection yet: https://gis.stackexchange.com/questions/13330/how-can-i-correctly-transform-unproject-from-lcc
EDIT: That question on the gis site has been answered, and I was able to reproduce a correct transformation using the PROJ command line tool cs2cs. It looks like this:
larry$ cs2cs -f "%.8f" +proj=lcc +lat_1=37.06666666666667 +lat_2=38.43333333333333 +lat_0=36.5 +lon_0=-120.5 +x_0=2000000 +y_0=500000.0000000002 +ellps=GRS80 +datum=NAD83 +to_meter=0.3048006096012192 +no_defs +to +proj=lonlat +datum=WGS84 +ellps=WGS84
6011287.4999795845 2100857.2499904726
-122.40375492 37.74919006 0.00000000
Now, that I had the correct transformation, I was able to try the same thing in a simple form using RGeo:
ruby-1.9.2-p180 :001 > projection_str = ' +proj=lcc +lat_1=37.06666666666667 +lat_2=38.43333333333333 +lat_0=36.5 +lon_0=-120.5 +x_0=2000000 +y_0=500000.0000000002 +ellps=GRS80 +datum=NAD83 +to_meter=0.3048006096012192 +no_defs'
=> " +proj=lcc +lat_1=37.06666666666667 +lat_2=38.43333333333333 +lat_0=36.5 +lon_0=-120.5 +x_0=2000000 +y_0=500000.0000000002 +ellps=GRS80 +datum=NAD83 +to_meter=0.3048006096012192 +no_defs"
ruby-1.9.2-p180 :002 > projection = RGeo::CoordSys::Proj4.new(projection_str)
=> #<RGeo::CoordSys::Proj4:0x805cba18 " +proj=lcc +lat_1=37.06666666666667 +lat_2=38.43333333333333 +lat_0=36.5 +lon_0=-120.5 +x_0=2000000 +y_0=500000.0000000002 +ellps=GRS80 +datum=NAD83 +to_meter=0.3048006096012192 +no_defs +towgs84=0,0,0">
ruby-1.9.2-p180 :003 > desired_str = '+proj=lonlat +datum=WGS84 +ellps=WGS84'
=> "+proj=lonlat +datum=WGS84 +ellps=WGS84"
ruby-1.9.2-p180 :004 > desired = RGeo::CoordSys::Proj4.new(desired_str)
=> #<RGeo::CoordSys::Proj4:0x805271ac " +proj=lonlat +datum=WGS84 +ellps=WGS84 +towgs84=0,0,0">
ruby-1.9.2-p180 :005 > RGeo::CoordSys::Proj4::transform_coords(projection, desired, 6011287.4999795845, 2100857.2499904726 )
=> [-140.92282523143973, 30.16981659183029]
Why are the results different between RGeo and cs2cs?
Once I can make RGeo perform the correct translation, is there a way I can create the proper factory to transform a complete Geometry instead of a point?
Is there a command-line tool I can use as a workaround to transform all of the points in my shapefile so that I can move on with my life?
In general: Would someone please instruct me on how to properly use this library?
Thank you so much for looking.
Did you ever figure out how to do this with RGeo?
As a wild stab in the dark, because I don't know RGeo or even Ruby, try substituting your coordinates in feet with their metres equivalent: 1832244.0944819663048746863094224,<PHONE_NUMBER>8223700783128534419392 (you probably won't need that number of decimal places though...) Another possibility is to swap the coordinates around - maybe RGeo makes some unconventional assumptions.
If you are able to call executables from Ruby, you could simply use ogr2ogr to convert your shapefiles.
Mersey. You are my hero. If you live anywhere near San Francisco, I want to buy you a beer or 7. Here's what worked:
Modify the supplied .prj file from LambertConformal_Conic to Lambert_Conformal_Conic_2SP - (Credit goes to Frank Warmerdam http://osgeo-org.1803224.n2.nabble.com/ERROR-6-No-translation-for-Lambert-Conformal-Conic-to-PROJ-4-format-is-known-td2568835.html)
ogr2ogr -s_srs realtor_neighborhoods.prj -t_srs EPSG:4326 ./output.shp realtor_neighborhoods.shp
Pay for my flight from the UK, and you have a deal! :) Glad you managed to get it working.
|
STACK_EXCHANGE
|
Binding error when using embedded server in JUnit Test: akka.stream.BindFailedException
I am able to run the test directly and it works fine, but then if I run the test as a gradle task I get a Bind Failed Exception
Gradle Task:
./gradlew clean build test
Code
@Before
public synchronized void setUp() throws Exception {
sqsRestServer = SQSRestServerBuilder
.withPort(SQS_PORT)
.withInterface(SQS_HOSTNAME)
.start();
}
Error:
akka.stream.BindFailedException$
Any more details of the exception? Is the port free at the moment of binding? How many times is setUp being called - isn't it called before each test?
Thank you for the prompt response!!!
INFO 2016-11-07 12:09:52,362 [elasticmq-akka.actor.default-dispatcher-3] akka.event.slf4j.Slf4jLogger app=priceingest version=2.1-rc1.0 : Slf4jLogger started
INFO 2016-11-07 12:09:52,388 [Test worker] org.elasticmq.rest.sqs.TheSQSRestServerBuilder app=priceingest version=2.1-rc1.0 : Started SQS rest server, bind address localhost:9324, visible server address http://localhost:9324
ERROR 2016-11-07 12:09:52,390 [elasticmq-akka.actor.default-dispatcher-6] akka.io.TcpListener app=priceingest version=2.1-rc1.0 : Bind failed for TCP channel on endpoint [localhost/<IP_ADDRESS>:9324]
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method) ~[?:1.8.0_92]
at sun.nio.ch.Net.bind(Net.java:433) ~[?:1.8.0_92]
at sun.nio.ch.Net.bind(Net.java:425) ~[?:1.8.0_92]
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) ~[?:1.8.0_92]
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) ~[?:1.8.0_92]
at akka.io.TcpListener.liftedTree1$1(TcpListener.scala:56) ~[akka-actor_2.11-2.4.11.jar:?]
at akka.io.TcpListener.<init>(TcpListener.scala:53) ~[akka-actor_2.11-2.4.11.jar:?]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_92]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_92]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_92]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_92]
at akka.util.Reflect$.instantiate(Reflect.scala:65) ~[akka-actor_2.11-2.4.11.jar:?]
at akka.actor.ArgsReflectConstructor.produce(IndirectActorProducer.scala:96) ~[akka-actor_2.11-2.4.11.jar:?]
at akka.actor.Props.newActor(Props.scala:213) ~[akka-actor_2.11-2.4.11.jar:?]
at akka.actor.ActorCell.newActor(ActorCell.scala:562) ~[akka-actor_2.11-2.4.11.jar:?]
at akka.actor.ActorCell.create(ActorCell.scala:588) ~[akka-actor_2.11-2.4.11.jar:?]
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:461) ~[akka-actor_2.11-2.4.11.jar:?]
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:483) ~[akka-actor_2.11-2.4.11.jar:?]
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:282) ~[akka-actor_2.11-2.4.11.jar:?]
at akka.dispatch.Mailbox.run(Mailbox.scala:223) ~[akka-actor_2.11-2.4.11.jar:?]
at akka.dispatch.Mailbox.exec(Mailbox.scala:234) ~[akka-actor_2.11-2.4.11.jar:?]
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) ~[scala-library-2.11.8.jar:?]
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) ~[scala-library-2.11.8.jar:?]
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) ~[scala-library-2.11.8.jar:?]
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) ~[scala-library-2.11.8.jar:?]
Gradle Test Executor 2 finished executing tests.
com.nike.priceingest.service.SqsServiceImplTest > givenValidPriceChange_whenSendSqsMsg_theVerifyReceivedMsg FAILED
akka.stream.BindFailedException$: bind failed
Ok, so the port is taken.
are you stopping the server in an after block?
are you running the tests sequentially? (so that two tests don't run at the same time)
The issue was that I am initializing the sqsRestServer in a Spring bean during startup during local dev:
Java config file:
AwsConfig.java
@Bean
public SQSRestServer sqsRestServer(UriComponents elasticMqLocalSqsUri) {
SQSRestServer sqsRestServer = SQSRestServerBuilder
.withPort(Integer.valueOf(elasticMqLocalSqsUri.getPort()))
.withInterface(elasticMqLocalSqsUri.getHost())
.start();
return sqsRestServer;
}
When running integration tests using the following annotations, the bean would be initialized and the server would be started for each integration test.
Integration test using @SpringApplicationConfiguration
ApplicationTests.java
@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
@WebAppConfiguration
@IntegrationTest
@ActiveProfiles("local")
@TestPropertySource(properties = {"server.port=0", "management.port=0", "releaseVersion=0"})
public class ApplicationTests {
My current solution is to use a flag to shut down the server after configuring the Spring beans when running my integration tests:
@Value("${aws.local.sqs.localElasticMq.startServer}")
Boolean startLocalElasticMq;
if(!startLocalElasticMq)
sqsRestServer.stopAndWait();
You can set this manually or add it to the property files your integration tests use.
This has resolved my issue for now, but I am trying to find a better solution without having to juggle properties.
|
GITHUB_ARCHIVE
|
Implement content script - unsafe script communication
More context in https://github.com/pixiebrix/pixiebrix-extension/issues/1015
I think I'm going to implement the messaging primitive first, as a separate module that exports sendMessage() and onMessage() (usable in both the content script and unsafe script).
The implementation will probably be based on https://github.com/pixiebrix/pixiebrix-extension/pull/1019 or what was discussed within it.
This is because, as that PR shows, there's no straight-forward way to send messages to the other side, so it would be easier to "polyfill" the sendMessage/onMessage API and then just use that as a medium, rather than attempting RPC-style calls without a solid messaging base.
Thinking out loud here.
Let's say we have a handler in a content script; How do we reach it?
from the background: {tabId, frameId}
from the dev tools: "contentScript" (one and only)
from the sidebar: "contentScript" or {tabId, frameId, context: "contentScript"}
from the unsafe context: {extensionId, context: "contentScript"}
I think getContentScriptMethod is currently flexible enough to allow this at the call site, for example:
https://github.com/pixiebrix/webext-messenger/blob/82502c2ec391e604ecb70cefe0b1c0ab84bb5324/test/demo-extension/contentscript/api.test.ts#L21
Here, the method exported by contentScript/api.ts could be called as:
// From the background
setPageTitle({tab: 1, frame: 0}, 'New title');
// From dev tools
setPageTitle({}, 'New title');
// From the sidebar
setPageTitle({}, 'New title');
// From the unsafe context
setPageTitle({extensionId: "mlhlldlpep22445"}, 'New title');
The contexts 2 and 3 don't look great, but they're ok.
For the background page, I think the extension ID can be specified while creating the method, only if it's meant to be accessible from the unsafe context. So this line:
https://github.com/pixiebrix/webext-messenger/blob/82502c2ec391e604ecb70cefe0b1c0ab84bb5324/test/demo-extension/background/api.ts#L3
Would just become, for example:
export const sum = getMethod("sum", "mlhlldlpep22445");
And that would be the signal that the method can also be used from the unsafe context, without having to alter the method’s signature.
Would just become, for example
I think that approach looks pretty good. Let's give it a try
Other things we might consider for "unsafe"/external:
We should consider supporting an "allowlist" of external contexts that are allowed to send messages. (The user would opt-in to communication from certain sites)
We should consider supporting a way for an external context to send a special "request message" privileges (or something similar) that the extension can then use to prompt for the allowlist
Just for context, some code for this now exists at https://github.com/pixiebrix/pixiebrix-extension/blob/7ee1e1292db066b074818089944020418ca18dee/src/utils/postMessage.ts#L1
|
GITHUB_ARCHIVE
|
fiction 《The Bloodline System》 – Chapter 582 Hand Over My Prize design expansion reading-p1
Novel–The Bloodline System–The Bloodline System
Chapter 582 Hand Over My Prize apologise punish
“Why does that make a difference? I became barely exercising. Hand over my winning prize,” Gustav voiced out without any form of intimidation in their voice.
Mill ascended, rising frontward with performance and surpassing Gustav’s posture.
The Counterattack Plan Of A Villain With Ten Thousand Fans
Each of them soared ahead with performance, surpassing the training course the location where the human body water was found.
He possessed no clue that each this became resulting from Gustav’s impression, that has been pass on far throughout the bedroom.
Gustav had not been only retaining up but also surpassing him.
He was raised in the middle since these shadowy stats created a variety of strange growth before hosting Mill forwards with pace.
They showed up over the huge terrain where many black color orbs held traveling by air along the put.
‘How is he ready to take care of me?’ That was the problem working through Mill’s thoughts while he fought to take care of Gustav.
Gustav sped off since he initialized dash and commenced dodging the balls one after the other.
Mill experienced a look of disbelief on his encounter because he came at the conclusion of the training course five seconds later than Gustav.
Gustav was transferring so undisturbed, swaying around the location since he dodged the dark colored orbs conveniently.
Mill ascended, soaring onward with velocity and surpassing Gustav’s position.
Mill got a start looking of disbelief on his facial area since he appeared at the end of the course five seconds later than Gustav.
The websites were actually steep, nevertheless it was practically unattainable to tumble from their website because of the anti-gravitational push. Nonetheless, leaping an incorrect way may cause an equilibrium trouble which might result in a autumn.
He observed Gustav could foretell the movements from the balls because each mobility arrived before the orbs sprang out in assortment.
A number of of them made an appearance on his kept and another 4 on his appropriate.
A number of shadowy black color numbers eliminated of Mill’s physique, just as before bouncing forward farther than him and functioning faster to obtain in advance.
His pace also significantly greater because he bolted forwards, dodging one orb right after the other.
Mill gritted his the teeth, submitting much more shadowy amounts away from his being who happened to run next to each other with him for some moments, accumulating energy.
Landing on their own hands and fingers, both equally shadowy results flung Mill upwards with power, using their hands for a sling.
He emerged there in nearly an instantaneous, overtaking Mill right before leaping for the terrain up onward.
‘How is he prepared to keep up with me?’ This was the query working through Mill’s thoughts when he battled to take care of Gustav.
Mill leaped in the present icebox he was sitting on to the two shadowy numbers who possessed their hands and wrists interconnected.
|
OPCFW_CODE
|
Potential Collaborators insisting on Joining Supervision Panel
TLDR: Researchers wanting to join supervisory team instead of collaborating, why?
I have started my PhD for awhile now and I am gearing up for my confirmation presentation (research proposal defence). I am quite happy with my supervisory team, who I know well and work very well together. I met some researchers by accident with the potential of collaboration on some interesting off-shoots of my thesis, I was hoping to publish.
One researcher worked on the same topic as mine but from a different discipline, so I thought I would ask her some theory questions I had and potentially get her help for a methodology paper at the end of my thesis. Another researcher I approached before my PhD but she did not answer my emails. I have since met her and she was keen to be involved. I thought I could include some questions relevant to her area in my survey and hopefully publish a separate paper. I felt the topic did not meld well with my overall thesis.
Both researchers insisted in joining my supervisory team with some force. Both refused to volunteer or discuss specific questions that I posed, that would have been helpful in my thesis. Both insisted that I speak to my supervisory team and let my team decide whether they can join. They did not call nor spend time with me and getting to know me and my thesis better. One of them did not even read the draft thesis proposal that I emailed her before she offered to join my supervisory team, which I thought was rather rude (was I being unreasonable?). In both instances I felt quite disrespected and demeaned as both thought that my supervisory team should make the decision to join with seemingly no consideration of my opinion. Is this common? Why is that so?
1) What are the possible motivations for both of them to insist they join my supervisory panel? Both of them seem reasonably thin on PhD supervision even though they were both quite hard to get a hold of. Both seem keen to develop links with my department.
2) Why the forceful insistence for my supervisory team to decide whether they can join? Why am I not a relevant decision-maker in their eyes? I can veto them joining the team, so why don't they seem to appreciate that I need to be onboard?
3) Why are they not generous in spending time answering what I thought were simple conceptual and theory development questions that I had? Just one or two lines pointing in a general direction would have been helpful. Why did they focus on joining my supervisory team before discussing their topic areas? I am now concerned that they are not competent in their topic areas, but at the very least they do not seem interested in scholarly discussion.
4) If I am offering publication, why is that not sufficient? I thought publications are a great outcome and sufficient reward for collaboration. I don't understand why joining the supervisory panel participation is more important in their minds than publications and potential collaboration with me and my department in the longer term?
In the end, I was unfortunately forceful with both of them. To one, I outlined how I got to know one of my current supervisors that I who I did not know previously, about how I was able to find out what they were capable of, what they were comfortable with, also their availability and current work pipelines.
To the other one, I offered that we look for a potential student(s) in order to pay justice to the topic. I told her that I am happy for her to meet my supervisors and collaborate with my department either way. I have not heard back from her. I suspect she is upset. I feel horrible being forceful and direct with both of them and I wonder what other ways I could have handled the situation? Is there another approach that could have achieved a successful collaboration instead, if I somehow knew their agenda better? Instead, I am left with the feeling that I was some child answering to some doorknockers insisting on speaking to my parents...
It was the right decision to not let them join the team, but it would probably have been better to have said something to them as "I'll discuss it with my supervisors" and do precisely that (if you trust the latter). That way, this does not become your decision only, but of the whole team. They seem to be young, ambitious and with some disregard to the legitimate interests of others. The joint papers/discussions may have been a good starting point for them later to join the team, but "crashing the party", so to say, is not on. You may have been more diplomatic, but ultimately, you did right.
I can't speak to my supervisors. My primary has already indicated that she is not keen for more to join the supervisory team. She prefers a leaner team so decisions are easier to canvass and the level of commitment is not diluted. I agree with her. Organising meetings is already tough, so adding unreliable ones would make things worse. I never indicated nor hinted that joining the supervisory team was an option.
Well, then you are even in a better position. You just could have let them know that this is currently not possible, but you are open to collaboration. Rinse and repeat. No need to be forceful, just firm. In fact, in this case you have an advantage that the decision is not even yours, so no reason to feel slighted - it's an advantage to not be able to decide. For what it's worth, I completely agree with your supervisor, small teams are better. Anyway, no need for you to regret, you would have probably regretted much more if they joined your team. Good luck with your PhD.
Thanks @CaptainEmacs. That's true. I forgot that the decision was pre-determined. I guess, I had the option that of telling both of them that from the onset. I never fully appreciated that. My primary was already hesitant and annoyed that I looked for a third supervisor, so a fourth would be a definite no no. Maybe a part of me was worried that if I told them that from the onset, they would have reacted poorly.
Another thing that annoyed me was that both of them knew that I already had a full panel of three. I have not seen many panels of four or more (unless there were content experts with clear roles). For them to assume that they could join without much negotiation really got my goat.
As the wise man says: you can control your reaction, not theirs. If they invite themselves to your party, they have only themselves to blame if they are "outvited". One caveat: if they have contributed important ideas from their stock in the discussion with you, they may be entitled to co-authorship on the relevant papers. But I guess that's what you offered them, in any case.
|
STACK_EXCHANGE
|
In this post, I will explain to retrieve more than 5000 records or large sets from Dataverse Using FetchXml, Paging Cookie, and More Records Flag.
When using Dataverse or Dynamics 365 CE/CRM, the user can retrieve only 5000 records in a single fetch or query. This is Dataverse Limitation.
To overcome this limitation, we can use “Paging Cookie” and have a flag called “More Records” to retrieve all records in a loop until the last page using the flag. Here is the Microsoft article to retrieve all records using c#.
Overall Flow/Power Automate
- Total Record Count – To know the total record count (For Analysis purposes only)
- Page Number – To send the next page number on the request
- FetchXml Paging Cookie – Dataverse returns the paging cookie as part of the response. This is Raw data and will be used to send a subsequent request.
- Paging Cookie – Modified version of the original paging cookie, which will be sent as a request
- More Records – Dataverse returns this flag as part of the response. This is used to determine to break the loop.
- JSON For XML Parsing – This is just a template that used to transform the paging cookie to XML
We are using Do until control. This loop starts and continues by default until the More Records flags are set to false.
Using Fetchxml query with page number and Paging Cookie. For Page 1, the Paging cookie will be empty.
The fetch statement has a paging cookie. Below is the Power Fx statement
if(equals(variables(‘Page Number’),1),”,concat(‘paging-cookie=”’, substring(first(skip(split(string(xml( setProperty(variables(‘JSON For Xml Parsing’),’a’,variables(‘Paging Cookie’)))),'<‘),1)),2),””))
Incrementing the total count for each iteration. Finally, this variable has a total record count. This is used for Analysis purposes only. And other variables, the Page Number just increments for each iteration. This page number is used to send the request.
Dataverse has an attribute called PagingCookie as part of the response. Below is a variable to extract the Raw data from the response.
The paging Cookie variable extracts only the information needed to send the request.
if(empty(variables(‘FetchXml Paging Cookie’)),”,replace(substring( variables(‘FetchXml Paging Cookie’),add(indexOf(variables(‘FetchXml Paging Cookie’),’pagingcookie=”‘),14)),’” istracking=”False” />’,”))
More Record – Flag to determine whether the Do until loop breaks or continue
if(empty(string(outputs(‘List_Accounts’)?[‘body’]?[‘@Microsoft.Dynamics.CRM.morerecords’])), false, outputs(‘List_Accounts’)?[‘body’]?[‘@Microsoft.Dynamics.CRM.morerecords’])
Using Power Automate, We can retrieve more than 5000 records using Paging Cookie and More Records Flag. In this example, this flow runs three times, and the total count is 10139 records.
|
OPCFW_CODE
|
Squid Proxy: ACL error on request by curl
my squid.conf(squid3)
acl localnet src <IP_ADDRESS>/<IP_ADDRESS>
acl localweb src <IP_ADDRESS>/<IP_ADDRESS>
http_port 3128 transparent
http_access allow localnet
http_access allow localweb
my (server squid that runs a dhcp server also) ip is:
eth0
ip: <IP_ADDRESS>
bcast: <IP_ADDRESS>
netmask: <IP_ADDRESS>
eth1
ip: <IP_ADDRESS>
bcast: <IP_ADDRESS>
netmask: <IP_ADDRESS>
The client command I would like to get working is:
curl -x <IP_ADDRESS>:3128 www.google.com
The source IP of the client running curl is:
IP <IP_ADDRESS>
bcast <IP_ADDRESS>
netmask <IP_ADDRESS>
In addition in the server I also added the following iptables rules:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-ports 3128
The problem is that the following message is returned to the client when making a request:
<IP_ADDRESS> TCP_DENIED GET www.google.com
Err_Access_Denied by squid localhost (squid 3.1.19)
If i do a simple sudo cat /var/log/squid3/access.log, i see:
1 <IP_ADDRESS> TCP_DENIED/403 3629 GET http://www.google.com
And if i do the curl -L -V i get alot of information but to sum it up:
GET HTTP/1.1 User agent ***** curl
Host: www.google.com Accept: / Proxy-Connection: Keep-Alive
HTTP 1.0, assume close after body HTTP/1.0 403 Forbidden Server:
squid/3.1.19
X-Cache: Miss from localhost x-Cache-Lookup: NONE from localhost:3128
Via: 1.0 localhost (squid/3.1.19)
Closing connection #0
So I assume my client is reaching the host running squid, but Squid is unable to forward the request. I had http_access allow all , but this didn't work either.
You have set-up transparent proxy why you are use curl -x?
well my attempts run all options the curl www.google.com to curl -x none of them worked
Ok, also not sure if squid works with non-cidr notation.
Try to replace acl localweb src <IP_ADDRESS>/<IP_ADDRESS> with acl localweb src <IP_ADDRESS>/24. Also make sure that you have restarted squid.
Hm...checked documentation it should work the way you have specified ACL.
I get on my server the request but they appear denied:
<IP_ADDRESS> TCP_DENIED/403 3629 GET www.google.com
No, it's not port, it's size of the packet. I am still not sure what's wrong here. Stupid question can you do 'curl google.com' from server with squid itself?
Also it would be useful to make from client side curl -L -v google.com
from the server with squid when i do :
curl google.com
i get
the normal reply
Title blablabla
from the client i get :
GET HTTP://www.google.com HTTP/1.1
User agent bla bla curl
Host: www.google.com
Accept: /
Proxy-Connection: Keep-Alive
*HTTP 1.0, assume close after body
<HTTP/1.0 403 Forbidden
<Server: squid/3.1.19
<X-Cache: Miss from localhost
<x-Cache-Lookup: NONE from localhost:3128
<Via: 1.0 localhost (squi/3.1.19)
Closing connection #0
Last try from my side, try to change iptables second rule with following:
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to-destination <IP_ADDRESS>:3128
yep i did it , yet not good :( sorry for taking your time and thank you anyway the last error in the access.log is the same of denied the request is getting there but the problems is on the fowarding the reply and request.
No problem, i hope someone will look into this and provide solution to your problem. I have ran out of ideas right now:) Last advice: try to use tcpdump and debug more hardly this problem. That's what i usually do.
http://www.squid-cache.org/mail-archive/squid-users/201304/0112.html
According to this, try to change your hostname on squid server from localhost to something else:) maybe that would help.
I did it with this THANK GOD !!
http_access allow all
http_port 3128
i have only this on the server squid.conf
on the client i did the curl :
curl -x <IP_ADDRESS>:3128 www.google.com
and its all good
Yep, it's not a transparent proxy you have run now. Gratz that it worked for you now. Still check the link i have provided above, maybe it would help.
|
STACK_EXCHANGE
|
7 steps to a smarter IT Front End
We often praise a particular front end as compared to another. The world of Graphical user interfaces has transcended from PC's to Mac's to smartphones. But quite often the IT department ignores the 'Front End' that the modern user expects from IT. Most are fixated on the Service Desk as that all empowering front end. Even ITIL has prescriptive definitions. One can argue that this is not at all the case especially from an end user perspective.
We often hear the complaints of IT being slow, ineffective or behind on support commitments. Though there may be some truth to this, there's much to do with ignoring perceptions that have built up over time in user's minds. So what is that 'Front end'- I would define that as a cohesive combination of Resources, Service Desk response times, average speed of resolution, automated Service Catalog and a comprehensive Knowledge base.
So how does an organization build up that smart IT front end? Here are 7 steps to get going-
1) Handle all actionable Service Requests through a single service catalog- Basically 100% of Service Requests should go centrally into one service catalog. Insist that the service should not exist if it does not exist on the Service Catalog! Obviously this requires a major change to sunset all kinds of tools and manual services, but the effort to consolidate on one clean interface is worth the time and effort.
2) Support the Service Catalog through an automated back end - All actionable Service Requests should flow through an automated back end working their way through approvals, procurement, provisioning and fulfillment. Of course automating all of this is ideal and the holy grail! But make the move towards that goal and measure progress. Again shoot for 100% of backend processes; you will reach a high mark. E.g.-new user accounts, requesting a development environment, licenses, adding application access etc.
3) Enable Problem to Incident (P2I) conversions- Resolving a problem is not the end of the day. Confirming that Level 1 teams understand what to do if the incident rears up again is a must. Consistently enforcing this policy of P2I connection and conversions will work wonders over a defined duration resulting in more incidents resolved faster and efficiently at Level 1 itself.
4) 100% self service for user induced incidents- Setup a Self Service gateway to manage all such common incidents. This will dramatically reduce time to improve speed of response. Examples include Account Lock Out, Password changes and resets, information /document upload, profile changes etc.
5) Setup and maintain a corporate Wiki- Information discovery and ease of information consumption should play a key role in the roadmap of the IT Front end. Too often we see lack of information on how-to's, problems with finding the right document and obsolescence. An annual check on all key docs, along with the user's ability to edit and update docs will foster a sense of shared ownership within the user community. Enable access through all devices, especially smartphones. Experts will bubble up to the top and become allies of IT.
6) 100% of software installs via End users- through the self-service capability and service catalog automation, enable users to receive a temporary download link to software that they are allowed to install. In the long run, diminish the need for this install capability through adoption of Software as a Service and/or internal web applications. E.g. - Office 365, Sharepoint Online and Lync
7) Periodic user engagement- IT often gets flak for not being there when it matters or simply not being around. Enabling user feedback, technology awareness sessions and formal internal training periodically can go to a great extent in bringing IT closer to the business community.
The organization of tomorrow requires a smart technology front end. Transforming from now to then requires investment of time, effort and resources. These steps can get you started. And there may be more. Do you have a take on additional steps- then do write in.
|
OPCFW_CODE
|
This is a bit late (how is it the middle of April already?!), but the dev-tools team has lots of exciting plans for 2018 and I want to talk about them!
Our goals for 2018
Here's a summary of our goals for the year.
We want to ship high quality, mature, 1.0 tools in 2018. Including,
- Rustfmt (1.0)
- Rust Language Server (RLS, 1.0)
- Rust extension for Visual Studio Code using the RLS (non-preview, 1.0)
- Clippy (1.0, though possibly not labeled that, including addressing distribution issues)
Support the epoch transition
2018 will bring a step change in Rust with the transition from 2015 to 2018 epochs. For this to be a smooth transition it will need excellent tool support. Exactly what tool support will be required will emerge during the year, but at the least we will need to provide a tool to convert crates to the new epoch.
We also need to ensure that all the currently existing tools continue to work through the transition. For example, that Rustfmt and IntelliJ can handle new syntax such as
dyn Trait, and the RLS copes with changes to the compiler internals.
The Cargo team have their own goals. Some things on the radar from a more general dev-tools perspective are integrating parts of Xargo and Rustup into Cargo to reduce the number of tools needed to manage most Rust projects.
Custom test frameworks
Testing in Rust is currently very easy and natural, but also very limited. We intend to broaden the scope of testing in Rust by permitting users to opt-in to custom testing frameworks. This year we expect the design to be complete (and an RFC accepted) and for a solid and usable implementation to exist (though stabilisation may not happen until 2019).The current benchmarking facilities will be reimplemented as a custom test framework. The framework should support testing for WASM and embedded software.
Doxidize is a successor to Rustdoc. It adds support for guide-like documentation as well as API docs. This year there should be an initial release and it should be practical to use for real projects.
Maintain and improve existing tools
Maintenance and consistent improvement is essential to avoid bit-rot. Existing mature tools should continue to be well-maintained and improved as necessary. This includes
- debugging support,
- editor integration.
Good tools info on the Rust website
The Rust website is planned to be revamped this year. The dev-tools team should be involved to ensure that there is clear and accurate information about key tools in the Rust ecosystem and that high quality tools are discoverable by new users.
Organising the team
The dev-tools team should be reorganised to continue to scale and to support the goals in this roadmap. I'll outline the concrete changes next.
Re-organising the dev-tools team
The dev-tools team has always been large and somewhat broad - there are a lot of different tools at different levels of maturity with different people working on them. There has always been a tension between having a global, strategic view vs having a detailed, focused view. The peers system was one way to tackle that. This year we're trying something new - the dev-tools team will become something of an umbrella team, coordinating work across multiple teams and working groups.
We're creating two new teams - Rustdoc, and IDEs and editors - and going to work more closely with the Cargo team. We're also spinning up a bunch of working groups. These are more focused, less formal teams, they are dedicated to a single tool or task, rather than to strategy and decision making. Primarily they are a way to let people working on a tool work more effectively. The dev-tools team will continue to coordinate work and keep track of the big picture.
We're always keen to work with more people on Rust tooling. If you'd like to get involved, come chat to us on Gitter in the following rooms:
- IDEs and editors
- Bindgen working group
- Debugging working group
- Clippy working group
- Doxidize working group
- Rustfmt working group
- Rustup working group
- 2018 edition tools working group
Manish Goregaokar, Steve Klabnik, and Without Boats will be joining the dev-tools team. This will ensure the dev-tools team covers all the sub-teams and working groups.
IDEs and editors
The new IDEs and editors team will be responsible for delivering great support for Rust in IDEs and editors of every kind. That includes the foundations of IDE support such as Racer and the Rust Language Server. The team is Nick Cameron (lead), Igor Matuszewski, Vlad Beskrovnyy, Alex Butler, Jason Williams, Junfeng Li, Lucas Bullen, and Aleksey Kladov.
The new Rustdoc team is responsible for the Rustdoc software, docs.rs, and related tech. The docs team will continue to focus on the documentation itself, while the Rustdoc team focuses on the software. The team is QuietMisdreavus (lead), Steve Klabnik, Guillaume Gomez, Oliver Middleton, and Onur Aslan.
No change to the Cargo team.
- Bindgen and C Bindgen
- Nick Fitzgerald and Emilio Álvarez
- Debugger support for Rust - from compiler support, through LLVM and debuggers like GDB and LLDB, to the IDE integration.
- Tom Tromey, Manish Goregaokar, and Michael Woerister
- Oliver Schneider, Manish Goregaokar, llogiq, and Pascal Hertleif
- Steve Klabnik, Andy Russel, Michael Gatozzi, QuietMisdreavus, and Corey Farwell
- Nick Cameron and Seiichi Uchida
- Nick Cameron, Alex Crichton, Without Boats, and Diggory Blake
- Focused on designing and implementing custom test frameworks.
- Manish Goregaokar, Jon Gjengset, and Pascal Hertleif
- 2018 edition tooling
- Using Rustfix to ease the edition transition; ensure a smooth transition for all tools.
- Pascal Hertleif, Manish Goregaokar, Oliver Schneider, and Nick Cameron
Thank you to everyone for the fantastic work they're been doing on tools, and for stepping up to be part of the new teams!
|
OPCFW_CODE
|
April 18, 2011, 5:05am
I have updated to Piwik 1.3 after the now usual database upgrade problems, see
PIWIK 1.3 db upgrade problem
we are now getting an error when running geoipUpdateRows.php from the browser
Fatal error: Call to undefined function _parse_ini_file() in
example.com/analytics/piwik/core/Config.php on line 373
I have the same problem after I make the update.
April 18, 2011, 1:05pm
Apply this patch to plugins/GeoIP/misc/geoipUpdateRows.php. (I’ve uploaded an updated .zip package to the ticket in Trac.)
--- geoipUpdateRows.php (revision 51)
+++ geoipUpdateRows.php (working copy)
@@ -20,8 +20,8 @@
. PATH_SEPARATOR . PIWIK_INCLUDE_PATH . '/libs'
. PATH_SEPARATOR . PIWIK_INCLUDE_PATH . '/plugins');
+require_once PIWIK_INCLUDE_PATH . '/libs/upgradephp/upgrade.php';
require_once PIWIK_INCLUDE_PATH . '/core/testMinimumPhpVersion.php';
require_once PIWIK_INCLUDE_PATH . '/core/Loader.php';
$GLOBALS['PIWIK_TRACKER_DEBUG'] = false;
If got the same error and I add the line above to the php file and now I get:
Cannot redeclare geoip_country_code_by_name() in ../plugins/GeoIP/libs/geoip.inc on line 347
April 19, 2011, 11:43pm
Please try the updated plugin in ticket
Please try the updated plugin in ticket #45.[/quote]
I´ve updated the plugin with
Ticket #45 but the error still occures :(.
October 18, 2011, 10:04am
I get a blank page using ff v7.0.1
I have upgraded to piwik 1.6 and geoip plugin to v0.18 about an hour ago.
I had the same problem with previous versions.
I turned display_errors = On in php.ini but still no error message, maybe I got to wait a lot longer for apache to refresh, but still something is wrong there.
any ideas why ?
thanks alot, cheers.
October 21, 2012, 12:30pm
Good news: GeoIP is now integrated in Piwik, enabling Accurate Visitors Geolocation in your Analytics reports. To enable GeoIP go to the Settings > Geolocation admin page, and follow the short instructions.
You can also get an even more
accurate Country & City Database from here to enjoy top accuracy in detecting your visitors locations.
See also the documentation about
Geolocation - Analytics Reports in Piwik.
October 22, 2012, 10:23pm
Yeap, thanks matt and all the team, it works great as integrated, already removed the old stuff.
The link you gave is “The page isn’t redirecting properly” but I guess it is about geolite city ? this is the one I use in piwik.
By the way, the worldmap I think still shows only countries right ?
cheers and congrats (always)
|
OPCFW_CODE
|
Following a good strategy to maintain code quality is essential in any project. It will increase the code maintainability and make it easier for other developers to understand the code.
Performing code reviews is one such proven strategy used by many development teams to increase code quality. However, it requires a significant amount of manual intervention, and the output is often affected based on the reviewers’ expertise. Additionally, at times, there might be no one available to review your code which can end up delaying the entire workflow.
For example, with the Christmas holidays coming up, all your team members might go on vacation, and you might not find anyone to review your code except Santa. Most of you might consider it is useless since Santa doesn’t have the expertise.
But what if I told you there is a way to get your code reviewed by Santa? Who wouldn’t like it, right?
So, in this article, I will discuss a few tips and tricks to simplify your code review process and make it possible for Santa to perform code reviews regardless of his expertise.
Identifying the code review requirements of your process and establishing a process is the first step to building a standard code review process. We can’t automate everything from the word go.
First, we need to establish a manual process. Then, over time, we need to improve and fix issues in that process with the experience we gain. Once the process is well-established, we can move into automating the process.
As the first step of establishing a manual process, you must decide how many reviewers you need. The count can vary between 1 to 4 based on the project requirement. Some teams prefer to use multiple developers as reviewers to minimize human errors while other teams have a combination of experts in different areas as reviewers. For example, if your application uses cloud services, it’s better to have a cloud expert as a reviewer.
Hereafter, you’ll have to decide on the technical aspects considered for the review. If not, each reviewer will have a different opinion, and the review process will not have a proper standard. For example, most teams ponder language syntaxes, code complexity, styles, naming conventions, business logic, and comments for their code review processes.
Once everything is finalized, you need to follow the process for a while and improve it based on your experience. If you feel like you have a solid code review process, you can start automating it.
Although we define a set of guidelines for the manual code review process, the outcome can differ based on the reviewers’ understanding and is often extremely time-consuming.
Automating the code review process helps you to address both of the above issues. You can easily automate parts of the review process like linting, syntax, code complexity, naming conventions, and style conventions. Once the rules are defined, the automation tool will apply the same set of rules for every review. So, there will be no differences in the outcome, and it will only take seconds.
There are various code review/code quality automation tools available.
They allow you to maintain a constant format throughout the code.
Cloud-based code analysis tool that can detect code quality issues in more than 25 languages.
Cloud-based developer tools powered by machine earning to provide intelligent recommendations on code quality.
Furthermore, you can integrate these tools into development environments or release pipelines to make the code review process even faster. For example, developers can install linters to their code editors and fix formatting issues before making pull requests. Similarly, tools like Sonar Cloud can be configured to perform automatic code quality analysis when a developer creates a pull request.
As you can see, these tools help you to drastically reduce the human effort and expertise required for the review process. Even Santa will find that reviewing code in such a manner is far simpler than delivering presents on Christmas ;).
Apart from automating, you can follow a few other steps to make the code-reviewing process easier for new reviewers. For example, building a knowledge base with issues and solutions would help developers and reviewers fine-tune their work.
For example, code complexity-related issues are significantly harder to fix than other issues we find during code reviews. Usually, we find recurring issues, and junior developers tend to make the most mistakes. If you maintain a knowledge base, junior developers can look into that list and find answers to their questions. This way, those issues won’t be repetitive, and the review process can become much easier.
It’s also important to give descriptive feedback to the developer about the issues you find. For example, rather than saying this is wrong or asking to change it, you can provide solutions to the issue and open a discussion with the developer. It helps both the developer and the reviewer improve their understanding.
Continuous improvement is one of the main factors of success in any process. So, you should always be open to new changes and continue to fine-tune your review process to stay on top of the rest.
For example, new versions of languages and frameworks always bring some changes to coding standards and syntaxes. So, even if you have automated the review process, you need to update the tools to ensure the rule sets are compatible with the latest language updates. If you don’t adopt these changes, your review process will become obsolete and unreliable. This will significantly affect your overall code and application quality.
This article discusses how to simplify your code review process through best practices and automation. Contemporary automation tools have drastically reduced the manual effort and expertise required to review code.
As discussed above, you can use those tools to easily identify issues in language syntaxes, code complexity, styles, and naming conventions. So, manual code reviewers only need to review the business logic, special edge cases, and one final look at the entire code before approving the code.
I think now you understand the simplicity you can bring to the code review process and why we don’t need expert technical knowledge to complete a code review process.
So, don’t wait anymore. Get off your seat, and enjoy the vacation. Santa will take care of all your code reviews. Ho ho ho…
Thank you for reading.
|
OPCFW_CODE
|
Anonymous Confessions from Programmers.Confess
I think i suck has a programmer when i look at other programmers. Somehow, even if my code suck, i make it work and it meets the clients demands, always. The clients think im a very good prommamer because they dont understand shit, and only care about the front end. Been like that for 10 years now.
I failed an easy coding question in a job interview but managed to fully reverse engineer and document the algorithms in a 30 year old video game written in 6809 assembly, with no assistance.
A multiplayer, drawing game from the idiot behind Coding Confessional.
Programming should be about finding elegant solutions to complex problems. My job is mostly about papering over the cracks with inelegant solutions to problems that should never have existed.
I ended up in computer science because I realised I was not good enough to do Mathematics.
I refactored old, stupid code wrote ages ago and screwed up this months release. I am not even sorry.
What the fuck is this code convention called? https://github.com/TyreeJackson/atomic/blob/master/Atomic.Net/Atomic.cs
I love functional programming in Haskell, but I have no idea what a monolith is.
I enjoy deleting other people's code. The more code, work and effort they have put into it, the more I enjoy it. I'm talking about useless code like where a library should have been used instead of writing the code from scratch, or some pointless software pattern has been blindly followed when really all that was needed was one or two simple lines of code. I delete all their code and replace it with the (arguably more readable) minimal code.
I hate it when people assign a value to a variable, immediately test it, and then never use the variable again. Same thing when people assign a variable and then immediately return it. Just test/return the value directly. Goddammit.
I've stopped fighting to get priority on my devops tasks. Now I just file a ticket and let it languish until the inevitable production failures occur. Maybe a few more avoidable failures will embarrass management into giving a shit.
I once logged 4 billable hours on an update which took 15 minutes for a client who was being incredibly painful.
Guys I'm sorry, I'm the one who posted that techies are poison to humanity. Actually, I just hate myself, and I am a 'techie' I suppose. The world is what we make of it and I'm the one who sometimes spreads poison because I have issues. On behalf of commenters to this post, fuck me. I'll try to be better.
I'm always being accused by my colleagues of posting on here. They can go piss up a rope, the fucksticks.
I hate one of my coworkers; he's snobby, loud, and has the social interaction skills of a dead badger. But by god if he doesn't write the most beautiful code I've ever seen, and it only makes me hate him more.
If I had a chocolate muffin for every time I've thought "what idiot coded this POS.... oh right...." I'd be the size of the planet
|
OPCFW_CODE
|
License: GNU Library or "Lesser" General Public License version 2.0 (LGPLv2)
Web Page: http://brlcad.org/wiki/Google_Summer_of_Code/Project_Ideas
Mailing List: http://brlcad.org/wiki/Mailing_Lists
This year, BRL-CAD is participating as an umbrella organization with several other open source CAx communities including STEPcode, LibreCAD, OpenSCAD, and LinuxCNC.
Our umbrella community has approximately 20 developers that actively participate in the open source project on a full-time or greater basis. There are more than a hundred community contributors, developers, modelers, artists, and engineers that are actively engaged on an ongoing basis.
The BRL-CAD community is represented and developed by a consortium of individuals in the larger open source CAx community from academia, government, and private industry. BRL-CAD's primary development focus areas include:
- CAD (design),
- CAM (manufacturing),
- CAE (engineering),
- solid modeling (analysis), and
- computer graphics (visualization).
CAD requirements are fundamentally different from those of content modelers (such as Blender, Maya, and 3D Studio) used for animation, gaming, and film purposes. BRL-CAD's primary requirements support a separate industry where commercial products like AutoCAD, Pro/Engineer, and CATIA dominate. See http://ftp.brlcad.org/Industry_Diagram.png for a visual overview of where BRL-CAD currently fits within the various CAx industries.
BRL-CAD is a powerful cross-platform open source 3D solid modeling computer-aided design (CAD) system. In includes interactive solid geometry editing, ray-tracing support for rendering and geometric analysis, image and signal-processing tools, system performance analysis tools, a robust high-performance geometry engine, and much more. It's more than a million lines of code, 400+ binary applications, dozens of libraries, and hundreds of staff-years invested. It's in use by more than 2000 companies and is downloaded more than 10k times a month. BRL-CAD has been under development for more than 30 years (since 1979) and has the world's oldest source code repository.
BRL-CAD reached out to encourage broader community cooperation and to help foster collaboration. Our vision is to improve the state of open source CAx by increasing awareness, encouraging discussion, creating useful reusable functionality, and working together. Our umbrella collaborators are as follows:
- STEPcode implements the de-facto ISO standard for CAx data exchange.
- LibreCAD is a cross-platform 2D drafting CAD system.
- OpenSCAD is a solid 3D modeling with a rich syntax for programmable geometry.
- LinuxCNC provides computer control of machine tools such as milling machines, lathes, 3d printers, and robots.
We're in the process of establishing a formal non-profit umbrella organization. This is likely the last year we will apply as "BRL-CAD". In the future, we will be under "The OpenCAx Association".
- Command-line Editing NMG and BoT Data-structures in BRL-CAD This project will expose some of the low-level NMG routines to a command-line interface. Users will be able to add and remove various parts of the NMG data structure, such as regions and shells, along with manipulation of vertices, edges, and faces. A similar set of command-lines will allow for manipulation of "Bag of Triangles" (BoT) data-structures. Users will see the results displayed in either MGED or Archer.
- G to POV-Ray Geometry Converter The project involves exporting the database of .g file into .pov file. Project is on geometry conversion of BRL-CAD into POV-Ray. Converter is named as g-pov( similar to exiting geometry converters of BRL-CAD ).
- GPU Accelerated Ray Trace Rendering for BRL-CAD I propose to design a new, higher performance, parallel implementation of the ray-tracing rendering component which can take advantage of the processing performance of GPUs. The idea is to minimize branch divergence and maximize the used processor capacity to render scenes more quickly.
- Object Oriented C++ Geometry API This project aims at improving BRL-CAD’s Geometry Engine (which provides a clean and easy to use API for BRL-CAD's libraries and binaries) - by adding functionalities to the present primitive classes in the core C++ interface. The project is divided into two parts : 1) Adding functions for finding Volume and Centroid of the primitives 2) Adding functions for finding Surface Area of the primitives
- OGV Proposal: Interface and Back-end Initial HTML pages started from having <p> tags, then <img> and now is the era of audios, videos and also as recent as .gifs. There have been a lot of tools and technologies developed in the past few years that work behind the scenes to render objects. These technologies allow us to manipulate and view 3d graphics in the browser. OGV is a potential platform to showcase 3D designs to a large audience to provide inspiration to a huge target audience.
- Online Geometry Viewer (OGV) OGV stands for online geometry viewer, a web application that aims to give 3d graphics same status in web as 2d images, videos or multimedia. It has been worked upon from 2 years and this year I am improving it further. Enhancing it, and making it better than ever.
- SCAD lexer for QScintilla Editor This project aims to make lexer specifically for Openscad's SCAD language. Currently, QScintilla is using CPP lexer, which cause syntax problem as CPP language is very large as compared to SCAD language of OpenSCAD. With this, some scintilla and GUI related issues will also be solved.
- Synchronize Wiki with Docbook BRL-CAD has more than a million words of documentation (thousands of pages) in a variety of formats. Their long-term goal is to consolidate as much as possible into the Docbook format so that it can be more directly managed by revision control system and integrated with the source code. The main challenges are: Merge the all docbook docs with website. Provide the online editing to user. Provide admin control to verify the changes. Provide the patch or commit approaches to handle the changes.
- X3D Importer Geometry conversion is a very important aspect for every CAD software as it is the basis for CAD data exchange between various CAD softwares. BRL-CAD has dozens of importers and exporters but this does not include support for X3D file import. This project seeks to implement an X3D importer for BRL-CAD and would rely on using the FreeWRL's X3D parsing library for this task. FreeWRL is an open source compliant VRML/X3D browser which is multi-threaded, written in C, and uses OpenGL for rendering.
|
OPCFW_CODE
|
Atul Varma toolness
- security-adventure 207 Go on an educational Web security adventure!
- instapoppin 47 Make Popcorn with just HTML and CSS.
- webxray 45 Web X-Ray Goggles provide a simple, easy way for non-technical people to inspect Web pages and learn about how they are put together.
- postmessage-proxied-xhr 38 A simple polyfill for cross-origin ajax requests.
Repositories contributed to
- mozilla/teach.webmaker.org 2 This repo is for tracking initiatives of the Mozilla Learning Networks team.
- mozilla/webmaker-addons 5 Prototypes for add-ons. Gateways from browser to Webmaker.
- mozilla/webmaker-screenshot 1 Web service to render screenshots of Webmaker makes on-the-fly.
- mozilla/teach-api 0 A basic API to store data for the Teach / Mozilla Learning website.
- swcarpentry/admintool 6 Administration tool for Software Carpentry.
Contributions in the last year 3,821 total Mar 31, 2014 – Mar 31, 2015
Longest streak 56 days January 18 – March 14
Current streak 16 days March 16 – March 31
- Pushed 59 commits to mozilla/teach.webmaker.org Mar 25 – Mar 31
- Pushed 6 commits to toolness/pbpaste-rs Mar 29
- Pushed 3 commits to mozilla/teach-api Mar 25 – Mar 26
12 Pull Requests
- Open #448 Use code splitting to lazy-load mapbox/leaflet
- Merged #446 Simplify footer.
- Merged #434 Remove propTypes.children from PageEndCTA
- Merged #433 Events landing page
- Merged #432 Remove linkTo on PageEndCTA in clubs.jsx
- Merged #427 Scroll map into view when taking users to their club.
- Merged #419 Only show marquee if marquee=MOZILLAAAAAAAAA is in querystring.
- Merged #417 Add "Remove Your Club" modal
- Merged #410 Don't require website field.
- Merged #409 Show club list.
- Merged #405 Add scrolling mozilla marquee to hero unit.
- Merged #403 Use Router.RefreshLocation instead of Router.HashLocation.
10 Issues reported
- Open #455 Automated tests fail on IE9
- Open #1466 Document how to use a git repository's sub-crate
- Open #431 Dev ribbon overlaps hamburger on mobile view
- Open #430 Use webpack's code splitting to load page-specific code on-demand
- Open #426 "gulp beautify" pollutes dist directory
- Open #424 URL validation on 'website' field in club modal should be friendlier
- Open #413 CSS sourcemaps are broken
- Closed #412 Implement revised version of mozilla wordmark on sidebar.
- Open #1 Make the stub optionally use sinon?
- Closed #401 Mozilla logo needs more prominence on page header
|
OPCFW_CODE
|
How to manage delays from external clients
There are many variables to measure when dealing with clients, but to me these are the most important variables that I dealt with:
Clients that are internal or external: If internal, it should depend from the organization structure, power, influence and authority.
Clients that are being manage remotely
Clients that are from other cultures (time and motivation): Those countries where time management is polychronic and not monochronic
Perception of displacement of people when automation arrives: Mainly reluctance to change
Many of these cases were included in my risk management plan. I set up many strategies to reduce negative risks. However I found many troubles when dealing with "external, polychronic, reluctant-to-change" clients.
I tend to talk to them before to adjust my schedule, send status reports to my close involved stakeholders. In some cases where things didn't work out, I requested to appoint a project coordinator from client's side. Things go well, but at some point that person has no control over end-users. Tests, feedback, answers to documents get late.
In order to have a balance upon this situation, I have some contingency reserve.
Sometimes I escalate these issues to a higher authority what makes things work "faster", but not better, because it creates a toxic environment. My team and I are performing mostly on time, but client cannot follow our rhythm. There are many excuses on every weekly remote web conference.
It is ridiculous how many months it takes the client to test some things. They think that we have all time of the world. In some cases, my boss tells me not increase the cost or produce a change order, because it could be so aggressive. He says not to push them, but to keep communication and producing my weekly reports.
Why is this a problem? I keep getting new projects and having few projects like in stand-by, makes me forget things about processes and I need to review again all my planning and project documents (flowcharts, designs, issue logs, meeting records, etc.)
I realized that when a project is in-house and have the end-users, stakeholders and coordinator on the same office, and people were well communicated about the vision of the project, they tend to work on my track. This happens in some cultures where people need to be watched and tracked. Even in these cases, I can have a slight delay, but recoverable time that allow me to close the project.
How would you manage delays from external clients according to previous scenario? How to help engage better these kind of customers and how to accelerate the project?
I see your having problem with field functions , inputs changes on the fields for a product that creates bottle neck to client , I would check online status of clients , when they are actually using your product , because that's time interval you can engage much more better and collect support tickets
Risk management approaches
As any risk, you might have four ways to address them:
Avoid them
Reduce them
Share/transfer them
Accept them
Based on your situation, you invest a considerable amount of efforts on reduce them as much as possible. Thing is, maybe they can't be reduced any further and thus, you need to explore other approaches.
What else to do?
It's pretty challenging to avoid external dependencies, as you don't have control around them. With that in mind, you might consider either transfer or accept them. Long story short, you either pass on the costs of delay to the client (share/transfer) or you take the hit as you're doing (accept).
You might want to explore how you could add clauses to the proposal or statement of work, adding specific comments on by when you expect to have feedback. If these feedbacks are not received on time, there'll be a cost impact. It won't solve the problem but can make the client more willing to offer you availability.
A different problem, maybe?
Reading between the lines, I'd guess your problem could have another root cause: a cultural difference. It's a long shot as there's no comments hinting on this on your post, but worth to highlight that some cultures consider a one day delay unacceptable whereas other may consider a week of delay pretty fine. If you're working on a multi cultural environment, I'd strongly suggest to read The Culture Map from Erin Meyer.
I have been involved in many projects with similar issues, though in a slightly different environment. Being assertive in your communication from the beginning is one thing, so like in my case I ensured all my client had what I called Project Interface Officer. He/she gets a copy of all communications and attends to them on behalf of my clients.
Secondly, where issues like yours had become a culture, I will suggest using the cost of the project to control time, such as hourly pay and discount to the client when you get a response in time
Finally, this I will suggest as the last resort but sometimes it can save you a lot; always arrange a short meeting with your client (the project owner) ahead of your test in order to sure you get what you want. If you like you may the cost it to them, believe it if it saves the client some time too, the client will definitely agreed.
Thanks for your answer. I have been requesting that PIO what it helped a lot, but in some cases they don't have so much authority. It will help to get my back covered, but I want to move them on time. I think the problem is also the FP contract, because it makes them believe that they can extend whatever they want. Every week we review the action items and the schedule, but I find excuses and the project keeps delaying. Remote projects with diff cultures, requires a lot of skills to move the clients on time. In my situation, it is not my team, but the client.
|
STACK_EXCHANGE
|
Everybody who has been playing around with WPF for a reasonable period of time might have known that WPF Application Processing is actually split up into two threads, they are UI thread which runs the dispatcher loop, and processes any GUI, keyboard/mouse/ink input, animation, layout and user initiated messages/events, and there is a hidden unmanaged background thread which is responsible for composing the graphical content (both in hardware or software if fallback is needed), and presenting it to frame buffer for display.
The whole architecture of WPF is built around this two threads model, and it has some of the benefits as articulated by Chris Anderson in the two-year long Channel video. One of the benefits Chris mentioned is that with the two threads model, in terminal services scenario, we can have the UI thread running in the TS server, and have the composition thread (or render thread in other nomenclature) running at the TS client console. And the communication between the two threads are handed over to the TS/RDP protocol. This can enable one interesting scenario called primitive remoting or vector remoting. And since UI thread is running at the TS server side, and it maintains the visual tree and the composition is at the TS client side, so in order to keep the client screen up to date, the UI thread will send the edits to the composition tree over the TS/RDP protocol in a highly compressed manner, this not only saves the network bandwidth, because only the edits need to be remoted, but also speeds up the client processing. This type of server/client communication also holds true in the standalone WPF application, the difference is that those two threads are running at the same Win32 process, and the inter-thread communication mechanism is used instead of the TS/RDP wire. To enable primitive remoting, both the TS server and TS client should run under Windows Vista and with desktop composition is enabled, this requirement tremendously narrows down the scenario in which the primitive remoting could be leveraged.
The upcoming .NET Framework 3.5 SP1 release would change all of this, in particular, bitmap remoting with sub region update support will always be used even if the TS server and the TS client are equipped to support primitive remoting, primitive remoting can be as good or bad as bitmap remoting, and the scenario in which primitive remoting is enabled is quite rare, because most existing Windows servers such as Windows 2000 or Windows 2003 server families don't support primitive remoting. So for any developer who wants your WPF applications to run reasonably well under TS scenario and running under TS is an important scenario for your applications, you need to take implication of bitmap remoting into consideration beforehand, there are a couple of ways from which you can improve the performance of your WPF application in TS scenario:
- Considering using as little animations as possible in your application, and if animations are indispensable, you could try reducing the DesiredFrameRate (The default value is 6O FPS) of each animations you need to use, usually 25~30 FPS should be enough, and if you don't need high fidelity animation experience, you can use a even lower frame rate.
- Considering using solid color brushes instead of gradient or tile brushes.
- Try reducing the number of "hidden redraws" your application needs to perform, "hidden redraws" will be overlaid by its most foreground redraws, but they will still be remoted which waste network bandwidth. You can visualize the "hidden redraws" using the Perforator (part of WPF Performance Suite) with "Show dirty-regions update overlay" CheckBox checked.
|
OPCFW_CODE
|
VBScript (short for Microsoft Visual Basic Scripting Edition , created by Microsoft ) is a subset of Visual Basic used as a general purpose script language . It is often compared to JScript .
VBScript can work in many environments, including:
- Windows Scripting Host (WSH): This is a scripting interpreter for Microsoft Windows systems, allowing you to write scripts to, for example, facilitate their administration.
- Microsoft Internet Information Services (IIS): This is the Microsoft web server. VBScript is the preferred language for Active Server Pages (ASP) programming, that is, writing server-side dynamic web pages.
VBScript is often used as a replacement for DOS batch files .
It does not work on Explorer versions for Mac OS .
Like any scripting language, VBScript is an interpreted language. It does not require compilation before it is run. On the other hand, it requires that the machine intended to run them has an interpreter, a program capable of understanding all the instructions present in the program. According to the use the different “interpreters” are:
- ASP ( asp.dll ) in a Web environment
- Wscript.exe in a Windows environment
- Cscript.exe in a command-line environment
- Mshta.exe for HTML applications .
VBScript files for Windows Scripting Host usually have the file extension
Other extensions exist and allow the execution of VBScript such as:
- VBE : VBScript encoded (not editable).
- WSF : can contain different languages at the same time (for example VBScript and JScript), XML tags indicate the language of each source.
- WSC : Windows Script Components source file.
Example program (to put in a file ‘bonjour.vbs’):
MsgBox "Hello World!"
Second example program, this one will chain two boxes of message:
Msgbox "Hello sir!" Msgbox "How are you?"
If Windows Scripting Host is successfully installed and enabled, the program will run by double-clicking its icon.
VBScript is also implemented in Microsoft Outlook as a scripting language used to respond to events in Outlook forms.
Some common examples of VBScript applications are the Microsoft Agent technology and the Windows Update service . Since these two examples also use ActiveX technology , it is mandatory to use Internet Explorer to see Web pages using VBScript.
Internet script language
- Dim declares a variable
- if if (condition)
- Then when (you enter an action)
- else if (new condition)
- End if (end of condition)
- Do while (loop (do as … executed at least once))
- While … wend (loop (do until …))
- For … next (loop (do until …))
- Do … loop ( infinite loop)
- & ( Concatenates strings)
- Inputbox (…) input box)
- Msgbox ( dialog box)
- Cint (…) ( calculates a number)
- Copyfile ( copy a file)
- Deletefile ( deletes a file)
- Fileexists ( search if a file exists)
The creation of a VBScript script, in a standard Windows environment, does not require the installation of special software:
- Launch a text editor ( notepad / notepad type )
- Copy the script instructions (below)
- Save the file with a ‘.vbs’ extension.
- Open the file to run the script
For example a small script to give the time:
Time = "It is" & Hour ( Now ) & "h &" & Minute ( Now ) & "min." If Hour ( Now ) <= 18 then Message = "Hello" else Message = "Good evening" end if MsgBox Message & "!" & Vbnewline & Time
Another VBscript that gives the time, with inclusion of the inputbox:
firstname = inputbox ( firstname , "What is your name?" ) Time = "is" & Hour ( Now ) & "h" & Minute ( Now ) & "min and" & Second ( Now ) & "dry." If Hour ( Now ) <= 18 then Message = "Hello" else Message = "Good evening" End if if Hour ( Now ) <= 18 then Message2 = "Spend a good day!" else Message2 = "Have a good night!" End if MsgBox Message & "" & prenom & "!" & Vbnewline & Time & vbnewline & Message2 else Message2 = "Have a good night!" End if MsgBox Message & "" & prenom & "!" & Vbnewline & Time & vbnewline & Message2 else Message2 = "Have a good night!" End if MsgBox Message & "" & prenom & "!" & Vbnewline & Time & vbnewline & Message2
Language and object
VBScript allows you to manipulate objects in Windows . It also creates classes in which members can be either private or public. However, inheritance does not exist in VBScript.
Computer viruses and VBScript
Allowing to perform virtually any operation under a Windows system using ActiveX technology , VBscript has been used for the creation of many computer viruses.
Many viruses written in VBscript appear in the year 2000. One of the best known is the virus “VBS.LoveLetter” also known as ” I love you “.
Simple text editors like Notepad are enough to develop in VBScript.
Nevertheless, there are many editors dedicated to VBScript like:
- VBS Factory
- Microsoft Script Editor ( scripting languages including VBScript)
- VbsEdit (contains an integrated debugger )
Tools allow scripting to be used without knowledge of development
- GlobalscriptGUI [ archive ]
Notes and references
- ↑ VBScript is not supported in IE11 edge mode [ archive ]
|
OPCFW_CODE
|
We introduce the first multitasking vision transformer adapters that learn generalizable task affinities which can be applied to novel tasks and domains. Integrated into an off-the-shelf vision transformer backbone, our adapters can simultaneously solve multiple dense vision tasks in a parameter-efficient manner, unlike existing multitasking transformers that are parametrically expensive. In contrast to concurrent methods, we do not require retraining or fine-tuning whenever a new task or domain is added. We introduce a task-adapted attention mechanism within our adapter framework that combines gradient-based task similarities with attention-based ones. The learned task affinities generalize to the following settings: zero-shot task transfer, unsupervised domain adaptation, and generalization without fine-tuning to novel domains. We demonstrate that our approach outperforms not only the existing convolutional neural network-based multitasking methods but also the vision transformer-based ones.
Detailed overview of our architecture. The frozen transformer encoder module (in orange) extracts a shared representation of the input image, which is then utilized to learn the task affinities in our novel vision transformer adapters (in purple). Each adapter layer uses gradient task similarity (TROA) (in yellow) and Task-Adapted Attention (TAA) to learn the task affinities, which are communicated with skip connections (in blue) between consecutive adapter layers. The task embeddings are then decoded by the fully-supervised transformer decoders (in green) for the respective tasks. Note that the transformer decoders are shared but have different task heads (in grey). For clarity, only three tasks are depicted here and TAA is explained in a separate figure below.
Overview of our vision transformer adapter module. Our vision adapters learn transferable and generalizable task affinities in a parameter-efficient way. We show two blocks to depict the skip connectivity between them. The main modules (TROA) and (TAA) of our vision transformer adapters are depicted below.
We show the task affinities from TROA when four tasks comprising semantic segmentation (SemSeg), depth, surface normal, and edges are jointly learned. We show that TROA learns a strong task affinity between the same task gradients, for example, segmentation with segmentation. This is a self-explanatory observation. Consequently, TROA also learns task affinities between proximate tasks such as segmentation and depth, and task affinities between non-proximate tasks such as segmentation and normal. Note that task dependence is asymmetric, i.e. segmentation does not affect normal as normal effects segmentation. These task affinities are used by our novel task-adapted attention module as described in what follows.
Detailed overview of Feature Wise Linear Modulation (FiLM)} which linearly shifts and scales tasks representations to match dimensions of the feature maps. The orange rectangular area is FiLM.
Overview of our Task-Adapted Attention (TAA) mechanism that combines task affinities with image attention. Note that the process, in the foreground, is for a single attention head which is repeated for 'M' heads to give us the task-adapted multi-head attention.
Multitask Learning comparison on the NYUDv2 benchmark in the'S-D-N-E' setting. Our model outperforms all the multitask baselines, i.e. ST-MTL, InvPT, Taskprompter, and MulT, respectively. For instance, our model correctly segments and predicts the surface normal of the elements within the yellow-circled region, unlike the baseline. All the methods are based on the same Swin-B V2 backbone. Best seen on screen and zoomed in. For more details and quantitative results, please refer to our paper.
Multitask Learning comparison on the Taskonomy benchmark in the'S-D-N-E' setting. Our model outperforms all the multitask baselines, respectively. For instance, our model correctly segments and predicts the surface normal of the elements within the yellow-circled region, unlike the baseline. All the methods are based on the same Swin-B V2 backbone. Best seen on screen and zoomed in. For more details and quantitative results, please refer to our paper.
Unsupervised Domain Adaptation (UDA) results on Synthia->Cityscapes. Our model outperforms the CNN-based baseline (XTAM-UDA) and the Swin-B V2-based baselines (1-task Swin-UDA, MulT-UDA), respectively. For instance, our method can predict the depth of the car tail light, unlike the baselines. Best seen on screen and zoomed within the yellow circled region.
This work was supported in part by the Swiss National Science Foundation via the Sinergia grant CRSII5$-$180359.
|
OPCFW_CODE
|
New: Supermium (Apr 11, 2024), Platform 29.4 (Apr 10, 2024)
1,000+ portable packages, 1.1 billion downloads
Please donate today
other comments: i did not see any new registry entries with regshot but it creates a file in the same dir that it is in.
But wouldn't it undo your registry changes on shut down
[that was a joke, sorry, I couldn't help myself ]
Things have got to get better, they can't get worse, or can they?
Very very funny!!
It creates a bunch of keys under "HKCU\Software\DKG\Reg"
Edit: And every time I run it I get an error message saying something along the lines of "there was an error synchronizing with the registry, make sure you have full administrator privileges."
The developer formerly known as ZGitRDun8705
... i forgot that i had first run the program and then i thought about reg checking. i just found the same thing .
about that sign i belive that it only affects one key called HKEY_DYN_DATA
which is used by the program.
but from hearing this i am guessing that it is not portable
Edit: I just ran this on another computer and i did not find any registry entries, i only some on the first computer that i started it on.
so i am not sure what you saw.
Please search before posting. ~Thanks
Just delete the registry keys when you open it! (it's a registry editor, after all!)
lol i did
Generally, if access to the registry editor is prohibited, so is access to the registry itself... so is there much point to this?
Sometimes, the impossible can become possible, if you're awesome!
i used this on a limited account and it worked fine, it allowed me to add, modify, and delete things.
If I end up getting some of the tools like UniExtract, RegShot, NSIS, etc portablized, this would be a great addition to the lineup for PortableApp developers he need their tools portable themselves.
It'd be great on vista because Regedit.exe requires admin privs.
cowthink 'Dude, why are you staring at me.'
here is the link
it will never work, you simply need admin. privliges and you shouldent be hacking an unknown computer's registry anyway. + you NEED administrator privliges.
Ok, first off someone above said it doesn't need admin privleges. That could be quite untrue, but don't say anything unless you've tried this program on a limited account yourself.
Second, it may not be an 'unknown computer'. Maybe you're helping a friend clean out their registry, or you're doing some fixes in the registry on a very corrupted machine.
And why are you talking about hacking? You're the one who made the pointless post in the off-topic forum about hacking some site.
End of post. I won't reply to any replies to this to keep on topic.
*virtual high five*
Please Make TiLP Portable
Scrubs is a great show....
And this portable registry editor here, will be a great addition (just to keep things on topic)
|
OPCFW_CODE
|
Signal and Handler Event System#
the event system in QML
Application and user interface components need to communicate with each other. For example, a button needs to know that the user has clicked on it. The button may change colors to indicate its state or perform some logic. As well, application needs to know whether the user is clicking the button. The application may need to relay this clicking event to other applications.
QML has a signal and handler mechanism, where the signal is the event and the signal is responded to through a signal handler. When a signal is emitted, the corresponding signal handler is invoked. Placing logic such as a script or other operations in the handler allows the component to respond to the event.
Receiving signals with signal handlers#
For example, the Button type from the Qt Quick Controls module has a
clicked signal, which is emitted whenever the button is clicked. In this case, the signal handler for receiving this signal should be
onClicked. In the example below, whenever the button is clicked, the
onClicked handler is invoked, applying a random color to the parent Rectangle :
Property change signal handlers#
A signal is automatically emitted when the value of a QML property changes. This type of signal is a property change signal and signal handlers for these signals are written in the form on<Property>Changed, where <Property> is the name of the property, with the first letter capitalized.
For example, the MouseArea type has a
pressed property. To receive a notification whenever this property changes, write a signal handler named
Even though the TapHandler documentation does not document a signal handler named
onPressedChanged, the signal is implicitly provided by the fact that the
pressed property exists.
Signals might have parameters. To access those, you should assign a function to the handler. Both arrow functions and anonymous functions work.
For the following examples, consider a Status component with an errorOccurred signal (see Adding signals to custom QML types for more information about how signals can be added to QML components).
The names of the formal parameters in the function do not have to match those in the signal.
If you do not need to handle all parameters, it is possible to omit trailing ones:
It is not possible to leave out leading parameters you are interested in, however you can use some placeholder name to indicate to readers that they are not important:
Instead of using a function, it is possible, but discouraged, to use a plain code block. In that case all signal parameters get injected into the scope of the block. However, this can make code difficult to read as it’s unclear where the parameters come from, and results in slower lookups in the QML engine. Injecting parameters in this way is deprecated, and will cause runtime warnings if the parameter is actually used.
Using the Connections type#
In some cases it may be desirable to access a signal outside of the object that emits it. For these purposes, the
QtQuick module provides the Connections type for connecting to signals of arbitrary objects. A Connections object can receive any signal from its specified target .
For example, the
onClicked handler in the earlier example could have been received by the root Rectangle instead, by placing the
onClicked handler in a Connections object that has its target set to the
Attached signal handlers#
An attached signal handler receives a signal from an attaching type rather than the object within which the handler is declared.
onCompleted handler is not responding to a
completed signal from the Rectangle type. Instead, an object of the
Component attaching type with a
completed signal has automatically been attached to the Rectangle object by the QML engine. The engine emits this signal when the Rectangle object is created, thus triggering the
Component.onCompleted signal handler.
Attached signal handlers allow objects to be notified of particular signals that are significant to each individual object. If there was no
Component.onCompleted attached signal handler, for example, an object could not receive this notification without registering for some special signal from some special object. The attached signal handler mechanism enables objects to receive particular signals without extra code.
See Attached properties and attached signal handlers for more information on attached signal handlers.
Adding signals to custom QML types#
Signals can be added to custom QML types through the
The syntax for defining a new signal is:
signal <name>[([<type> <parameter name>[, ...]])]
A signal is emitted by invoking the signal as a method.
For example, the code below is defined in a file named
SquareButton.qml. The root Rectangle object has an
activated signal, which is emitted whenever the child TapHandler is
tapped. In this particular example the activated signal is emitted with the x and y coordinates of the mouse click:
Now any objects of the
SquareButton can connect to the
activated signal using an
onActivated signal handler:
See Signal Attributes for more details on writing signals for custom QML types.
Connecting signals to methods and signals#
Signal objects have a
connect() method to a connect a signal either to a method or another signal. When a signal is connected to a method, the method is automatically invoked whenever the signal is emitted. This mechanism enables a signal to be received by a method instead of a signal handler.
messageReceived signal is connected to three methods using the
In many cases it is sufficient to receive signals through signal handlers rather than using the connect() function. However, using the
connect method allows a signal to be received by multiple methods as shown earlier, which would not be possible with signal handlers as they must be uniquely named. Also, the
connect method is useful when connecting signals to dynamically created objects .
There is a corresponding
disconnect() method for removing connected signals:
Signal to signal connect#
By connecting signals to other signals, the
connect() method can form different signal chains.
Whenever the TapHandler ‘s
tapped signal is emitted, the
send signal will automatically be emitted as well.
output: MouseArea clicked Send clicked
|
OPCFW_CODE
|
This section covers the Immerse Platform
Learn more about the associated Immerse SDK
Immerse Sessions are more flexible than Spaces which preceded them. Session owners have more control over content access permissions, whether they are multi or single user and also their expiration date.
- Invitations - Sessions to which you've been invited (via email)
- My Sessions - Active Sessions owned by you
- History - Sessions you've previously created which have now expired
- Legacy spaces - Spaces created using earlier versions of the Immerse Platform - read earlier documentation for an overview.
If the new Session sub-tab is not visible to you please contact [email protected].
- Starting on the My Sessions sub-tab, click (CREATE SESSION), which will reveal the New Session panel
To create a new Session:
- Enter a name for your Session
- Attach a Scene to your Session
- Choose a geographical region where the Session will be hosted. By default this will be physically closest to you.
- Choose whether the Session will be 'Single User' or 'Multi User'. The difference being that a single user session can only be joined by a lone user, whereas multi-user sessions can be joined by multiple people at the same time.
- Choose whether the Session can only be accessed by users who have authenticated via SSO, to provide extra security. This feature is only visible to organisations which have Single Sign-On enabled.
- Invite participants via specific email address or via link. Adding participants here will trigger a customised email to be delivered to each one, detailing the Session URL (which in turn is used to generate a PIN). Selecting 'via link' will generate a URL.
- Session URL - a unique URL which can be shared, to provide access to the Session (e.g. from a learning management system, etc.)
1 Session details, including name and expiration date
2 Get PIN reveals a unique 6-digit number, to be used when joining the Session on a standalone VR headset (e.g. Oculus Quest)
3 Join Session now
4 Manage invited participants
- Show, edit, delete and add participants
- Reassign Session ownership
- Resend invitations
5 Copy or delete the Session
Only Google Chrome is currently supported for WebGL / browser access
- To join an Immerse VR Session, access the join URL (in the form vr.immerse.io/#########) using the Google Chrome browser.
(Immerse browser Sessions require use of the laptop microphone and audio, so these are checked before each session. )
If Chrome reports that vr.immerse.io wants to use your microphone, you must select [Allow] or you will be unable to join
After access has been granted, an option to join via [Web] or [VR] will be offered - choose [Web]
Wait a moment whilst the WebGL content is downloaded.
- VR user view
View the Immerse Scene through the eyes of any VR participant (Session owner only)
- Static camera view
View the Immerse Scene through third person cameras defined in Unity (Session owner only)
- VR view
Select to view immersive / 3D content (Session owner only)
- Reset scene
Perform a hard reset, ejecting all users from the Session (Session owner only)
- User list
View Session participants, check whether they are speaking and mute/unmute them (if Session owner)
Due to licensing restrictions, the Immerse Platform is unable to install APK applications directly to an Oculus Quest. This can currently be done in several ways:
- Install directly via ADB (specific Oculus Quest instructions )
- Deploy via App Lab
- Oculus for Business, Oculus-Hosted Apps and Release Channels - for enterprise deployments, this is the recommended process.
To join an Immerse Session on an android device as as the Oculus Quest, a PIN must first be entered. This can be revealed to the attendee in a number of ways:
1. Via an invitation Email. When invited to a Session, the user will receive an Email containing their unique PIN
2. Via the _(Get PIN) _button in the Invitations sub-tab
3. By clicking on a session link and revealing the URL; choosing Get PIN when prompted.
Generating PINs for Quest / standalone users
To generate a PIN for Quest / standalone users, a Zipped APK must be loaded to the associated Immerse Scene
Upon launching the respective Oculus Quest application, built with the latest Immerse SDK, the Session participant will be presented with a PINpad, which is used to enter the PIN.
Desktop Immerse VR Sessions are joined via the Google Chrome browser on a Windows PC, with SteamVR enabled.
Download Immerse Launcher (4.2.0) Installer (Windows EXE - 4.2MB)
To join a Session in a headset, the Immerse VR Launcher application must first be installed. This will be used to initiate every VR session. The Immerse Launcher is a small, executable application, installed on a PC running VR hardware. Once installed, it will initialise when an Immerse VR session has been started and manage the downloading of VR content to the headset. The Immerse Launcher will cache VR content, reducing download times.
Download and install the Immerse VR Launcher
Firstly download and run the Immerse Launcher . During installation, select [Yes] if asked whether the Visual C++ 2015 64-Bit Redistributable can be installed
Join Immerse VR Session
Access the join URL (in the form vr.immerse.io/s/#########) using the Google Chrome browser.
(An Immerse VR session requires use of the microphone and headphones on the HMD, so these will be checked before each session. )
If Chrome notifies you that "vr.immerse.io wants to use your microphone and / or camera" you must select ‘Allow’ or you will not be able to join the session.
After access has been granted, an option to join via [Web] or [VR] will be offered - choose [VR]
The previously installed Immerse Launcher will initialise bringing up this message in Chrome:
Select [Open ImmerseVRLauncher] when prompted.
When the Session is loaded, put on the VR headset.
When using the Google Chrome browser and PCVR headset, user voice data is sent and received via the browser, ensuring a consistent experience across all platforms.
To mute themselves, a VR user must first remove the headset and use the mute button in the browser.
Voice communication in the Oculus Quest is handled directly in the Immerse app.
The Immerse Launcher is a small, executable application, installed on a PC running VR hardware. Once installed, it will initialise when an Immerse VR session has been started and manage the downloading of VR content to the headset.
The Immerse Launcher will cache VR content, reducing download times.
- View the Immerse Launcher release notes.
Updated 10 months ago
|
OPCFW_CODE
|
[GTA04] When is the next and more powerful openmoko releasing
Dr. H. Nikolaus Schaller
hns at computer.org
Thu Sep 16 19:50:03 CEST 2010
>> And, we know that fast boot is possible. At least someone has done it for the Beagle Board:
>> They claim that they have achieved 3 seconds from power up to login: on
>> the serial console. Well, running X11 also needs some time.
> From my point of view, boot time especially if it is in 1 minute is not
> really important. More important how fast it will run while usual usage,
Yes, it is below 1 minute. To give you an indication, I have done a test
on one of our development systems: BeagleBoard C2 i.e. 600 MHz CPU,
256 MB RAM, unoptimized full blown Debian Lenny, all files loaded from
1 seconds Texas Instruments X-Loader 1.4.4ss (Aug 19 2010 - 02:49:27)
2 seconds U-Boot 2010.03-01183-g43b5706-dirty (Sep 15 2010 - 16:19:43)
11 seconds [ 0.000000] Linux version 2.6.32 (hns at iMac.local) (gcc version 4.2.4) #48 PREEMPT Tue Jun 8 14:21:52 CEST 2010
21 seconds INIT: version 2.86 booting
37 seconds Debian GNU/Linux 5.0 bb-debian ttyS2 bb-debian
> how easy is boot system to understand and fix, and how much it deviates
There is a boot ROM which loads the X-Loader which loads U-Boot.
> from desktop systems. I do not want use busybox shell under any
Well, that largely depends on what you install.
> conditions, but running all that services to boot up fully-functional
> system will take much more time than 3 seconds. (authors use uclibc and
> busybox, hack init scripts like disable log, disable u-boot menu,
> disable logs, remove everything from kernel)
> The way authors of paper archive such boot times influence later speed
> of device. For example, they propose to use XIP, which will cetrainly
> decrease kernel speed. Other example is that they 'compiling everything
> with -Os', it may greatly decrease performance in favor of boot time.
That is true. Therefore we leave such optimizations to the software
community. I could imagine a simple and small distro that only allows
to make phone calls but boots in <10 seconds. And a full desktop-like
PDA/Smartphone that takes 60 seconds for the GUI to appear.
Regarding memory we currently plan to use this memory chip:
More information about the community
|
OPCFW_CODE
|
Vexcel Holdings GmbH, Graz, acquires Microsoft’s UltraCam Business Unit
Vexcel Imaging GmbH, Microsoft’s UltraCam Business Unit, is undergoing an ownership change planned for early March 2016. After ten years contributing to Microsoft as a subsidiary, next month Vexcel Imaging GmbH will again become an independent company upon its expected acquisition by a private investor group.
The new owner will be Vexcel Holdings GmbH in Graz. Following is an interview with Wiechert Alexander CEO of Vexcel Imaging
Please explain the structure of the newly formed Vexcel Imaging GmbH.
Vexcel Imaging GmbH has been under the ownership of Microsoft Corp since the acquisition of the Vexcel companies on May 3, 2006. On March 11, 2016 a share purchase agreement was reached and the ownership of Vexcel Imaging GmbH moved to a newly founded Holding company in Graz, Austria. But these are just the legal details. Of more importance is that behind the Holding company are four private persons who are Erik Jorgensen, former Microsoft Corp. Vice President, Stephen Lawler former BING CTO, Martin Ponticelli who continues to be the CTO of Vexcel Imaging, and myself, continuing as the CEO of Vexcel Imaging. So literally, Vexcel Imaging GmbH is now owned by Erik, Stephen, Martin and myself.
Besides that, little has changed. Vexcel Imaging GmbH, itself, carries on as it was and the business scope remains unchanged with a strong commitment to outstanding UltraCam and UltraMap products for aerial and terrestrial markets and applications. The internal organization of Vexcel Imaging has not changed–we continue to have three units: operations (lead by myself), development (lead by Martin) and application (lead by Michael Gruber).
Does this new entity bring any enhanced value to its existing and potential customers and to its channel partners?
The newly founded Holding is just a legal entity for the share purchase. All the operative business stays within the well-known Vexcel Imaging GmbH. Our time as a Microsoft subsidiary were great–Microsoft brought great value to the company and we did amazing developments for Virtual Earth and BING maps. But it also required a split of resources between commercial development and Microsoft internal developments. Being an independent company now, we can once again focus fully on commercial development and make decisions more quickly. That is of defi nite benefi t to our customers and partners. Additionally, and of major signifi cance, is that we were able to reach favorable and solid licensing agreements with Microsoft and can build on our mutual developments accomplished over the past ten years. That is fantastic for or product roadmap.
What are the new line up of products going to be added to your existing portfolio of photogrammetric products and solutions this year?
We have an aggressive product roadmap. I can’t provide all the details but with the share purchase, we received broad licensing rights from Microsoft, with respect to IP that was developed during the Microsoft years. Thanks to that and our own ongoing development efforts, the UltraMamp software roadmap includes several new releases that will offer powerful capabilities that we developed for Microsoft. The preview to UltraMap version 4.0, which was just released, offers a fi rst glance as to what is coming. On the camera side, we will see one new aerial camera (UltraCam Condor) for high-altitude mapping and ortho image generation and we will be ramping up our development efforts for new terrestrial sensors such as the UltraCam Mustang mobile system and UltraCam Panther portable system.
|
OPCFW_CODE
|
M: Japanese Teenage Boy Improved Ruby 1.9 Performance By Up To 63% - fogus
http://yokolet.blogspot.com/2009/11/japanese-teenage-boy-improved-ruby-19.html
R: bradfordw
This is like me taking my son hunting, holding the gun for him and having him
pull the trigger for the kill.
His mentor was Koichi Sasada...he wrote YARV!
Don't get me wrong, I think it's great to get kids involved. But it's not like
he walked in off the street, "saw the matrix" and then committed a patch.
I can already sense the razzing coming from the Python camp! :)
R: fiaz
Can you somehow prove that Koichi made the solution SOOO obvious that even a
teenager could have made the fix?
From what little I've read, it seems to me that this kid is pretty smart and
has made a very generous contribution to the Ruby community.
R: tsally
I would have been plenty impressed if the article was framed like that,
without the hype. As you say, the kid has made a great contribution at a young
age. No need to dress up a good thing.
R: Confusion
He improved the performance of _a few specific methods_. Not of Ruby in
general.
Google translation of the article referenced in the article:
[http://translate.google.com/translate?prev=hp&hl=en&...](http://translate.google.com/translate?prev=hp&hl=en&js=y&u=http%3A%2F%2Fjibun.atmarkit.co.jp%2Fljibun01%2Frensai%2Fgenius%2F05%2F01.html&sl=ja&tl=en&history_state0=)
(mostly funny to read, as the translation is quite wacky)
(Bablefish chokes on the site)
R: xpaulbettsx
The methods he improved the perf of were in string, array, and struct. I think
that counts as "globally improves performance", there are very few non-trivial
methods that won't benefit.
R: jackowayed
but they won't improve performance as much. AND the 63% figure was very
misleading anyway since that's the best case, and average is only 8%.
So basically, if your program does nothing but call those methods in such a
way that it satisfies the best case improvement, it just got 63% faster at
doing nothing.
If your program does nothing but call those methods with the average case, it
just got 8% faster at doing nothing.
If your program uses anything else, it's probably only faster by a small
fraction of that 8%.
R: WesleyJohnson
>..the 63% figure was very misleading anyway since that's the best case...
Which is exactly what I took "up to 63%" to mean. I don't see that as
misleading. If a drug administration company was listing off the side effects
of a new over-the-counter drug and mentioned fatality rates we're on average,
2%, but failed to mention that the worst-case scenarios (where the patients
were Hispanic females over the age 35) had a 80% fatality rate, would you also
consider that misleading? Of course you would.
So why is citing the extremes of a positive considered misleading, while
citing extremes of a negative is almost expected?
R: tsestrich
"Japanese people were surprised about the news since he made that in his age."
As opposed to most other people, who were not surprised at all?
I agree though, the title implies that he improved all of Ruby's
performance... even though it was just several methods. Still cool that he got
into this kind of project at a young age, but it's not like he re-wrote a
bunch of algorithms or something....
R: UncleOxidant
"His mentor was Koichi Sasada (ko1)."
That's the guy who wrote YARV. So it's not too surprising that ko1 would point
him in the right direction.
R: santry
The title, which _is_ from the original article is a bit misleading, no?
_The performances of the methods he worked have been bumped up 63% in
maximum, 8% in average. His patches were applied to Ruby trunk in Oct. 5 this
year._
I'm not knocking the kid's contribution, but the title oversells it a bit,
implying that the kid improved Ruby 1.9's overall performance, which
apparently isn't the case.
R: roc
That they were using and contributing to an open source project for study was
more surprising to me than the details of this improvement.
R: fsniper
Being smart and attentive to detail are not related to age. So I'm not much
surprised. Lucky for FLOSS and Ruby communities, one more new and bright guy
came in.
R: tome
Well done, but why wasn't the compiler doing this automatically?
R: tedunangst
It takes a very sophisticated compiler to know that rubyobj->isString will
always be true when the ruby interpreter is evaluating a constant string.
R: mahmud
isString is the cheapest method out there, it just tests a tag bit in the
pointer. Pretty much an AND followed by a branch.
R: apgwoz
... except for setting up the function call, and all that jazz.
|
HACKER_NEWS
|
I wanted my guitar to sound differently, i.e. as I have been damaging my finger nails by playing guitar. I do not always use picks when playing and cutting half of my finger nails to play is odd to me. Issues…yes. But, I have made a short of a tuneage in a
video for your ‘pleasure.’
Anyway, I wanted to test out BELA, i.e. as I am an avid
So, I did and I do it still! I have some cheap components outside of the BBB and BELA I am using to handle the sounds amplification and ‘music.’
Mostly, what I call music is what others play but I like to make sounds.
Okay…back to the basics.
So, we get our materials:
and…we will need an internet connection and the below requirements to handle the repo, Bela, and the BBB, i.e. along w/ makin’ ‘dat sweet musical goodness!
- 3.5 mm to 1/4" mono cable * 2
- Bela audio cable * 2
- Bela * 1
- BeagleBone Black * 1
- Ethernet Cable * 1
- Micro USB to USB 2.0 * 1
- Internet to get to the Bela IDE * 1
- guitar or music amplifier * 1
- guitar * 1
- or handmade instrument * 1
So, we need to make our connections. Oh and this is assuming you have loaded the BELA image onto the BBB (BeagleBone Black) via Micro SD Card. If you need help understanding images, installing the image on the BBB for use w/ Bela in a rt kernel called xenomai, or to use the IDE and get started, please go to
So, back to our confusing write up here…
If we plan on using the Bela IDE and an amplifier w/ the Bela_Misc repo. from github.com, we will need to have the image preinstalled on the BBB, all our cables connected, and some additional understandings of how the IDE actually works.
For example, one would not install via the regular package manager and instead one would use the drag and drop feature from the Bela IDE.
So, if you want, you can clone the repo. to your personal computer or download the.zip file and expand the.zip file to see the files from
It seems this person put in some extra work on this project he/she made for the Bela IDE and music usage. Nice, I know! So, I am here to proclaim that it works and it is useful.
If you come across issues w/ the repo. and the Bela IDE and making music, go to
to get assistance on how to perform w/ the IDE a bit better.
Also, you can ask me here? I can try to help to the best of my ability.
I know that this repo. will not work out of the box for the bluesy, Klingon_Tone section to the repo, i.e. which I will be describing in more detail soon.
How about now?
Okay…you got it!
So, once the image is loaded onto the BBB via Micro SD Card and the Bela Cape is attached to the BBB, place the Ethernet cable in the Ethernet port on the BBB, carefully put your micro USB to USB 2.0 cable into the BBB, and add the input and output cables to the Bela Cape.
Attach the 3.5 mm cables to the input and output cables. The other ends of the 3.5 mm, which should be 1/4" plugs, go into the amplifier and the other one into the guitar.
Now, I cannot describe to you exactly how each bit of the source works for now. I am not completely competent in C++ programming. This fellow obviously knows more than my mere kindliness.
So, w/out further ado, we should be having an image to use, our connections from and to the BBB, BELA, the amp, and the guitar, and we should also be connected to the Internet.
Now, plug in the USB 2.0 cable into your development desktop, Linux, OSx, and/or Windows 10. So, now we can go to the address bar and type
Make sure to type in http:// or you may not connect w/ ease.
You should see an IDE. It is an online IDE but it is an IDE to use. So, w/ our server IDE for the Bela Cape, we can now drag and drop files onto the screen to add components into the IDE to use and create.
So, from the repo. listed earlier, download the.zip file onto your personal computer and unzip it! Now, we can, b/c this specific tutorial is about the klingon_tone part to that repo, install file by file until we have each of the files in that specific directory loaded onto the Bela IDE for use.
So, I was actually helped during a section of my development on this actual repo. on the forums. The last file in that dir. needs to be omitted. Do not use the test_tonestack.cpp file. Omit it from your installation of the files via dragging and dropping.
Oh and you may want to leave out the.txt files and the readme file. Oh and you do not need to drag and drop the analysis directory from that repo. under klingon_tone.
Okay, okay. So, if you are smart and already made all this work before today and are just wondering what is going on w/ that name…do not fret.
I have as small explanation. You, while strumming or tuning up your guitar, will not hear odd language coming from your instrument. I think it is a "play on words" and the tone is not from star trek. There.
So here and there…I will give a totally unscripted version of me playing the untuned guitar w/ my long finger nails (I know…disgusting) while the Bela Cape and BBB are performing at an actual low CPU percentage while strummin’ along.
And…the source, from how they have derived it and made the SDK, is actually simple if you use Arduino, know C++, and/or have grown accustomed to learn music and electronics.
There are a couple of fine tuning things one might need to know in the source and files associated w/ the klingon_tone directory. Enjoy!
and…here is that less-than-par video of my untuned ramblings!
You can actually here the bluesy tones from the "stompbox" like, Bela Cape using the PRUs onboard the BBB while strumming some tuneage from this specific repo. directory.
- Some helpful links…
and…last but least, the vid.
P.S. Maybe I will attempt a more, finely tuned sectional later w/ the Bela IDE and the Pyle. Until then, enjoy!
|
OPCFW_CODE
|
99.1% Of all customers recommend us, we're so confident about our results we publish all reviews and statsView Live Stats View Reviews
Working With The Message Bar In Office 2007
Wed 23rd September 2009
To show or hide the Message Bar, access the Office Button or Tools - depending on which program in the Office suite you are using - to get to the Trust Center. By selecting Trust Center settings you will then be able to access the settings for the Message Bar.
The Message Bar displays security alerts when there is potentially unsafe, active content in the document you open. For example, the document might contain an unsigned macro or a signed macro with an invalid signature. In such cases, the Message Bar appears by default to alert you about the problem.
If you don't want to be alerted, you can disable the Message Bar.
You can click the options you want such as: Show the Message Bar in all applications when document content has been blocked. This option is selected by default so that you get Message Bar alerts whenever potentially unsafe content has been disabled. The option is not selected if you clicked the Disable all macros without notification option on the Macros pane of the Trust Center. If you click Disable all macros without notification, you won't get Message Bar alerts when macros are disabled.
Another setting is: Never show information about blocked content this option disables the Message Bar. You do not receive alerts about any security issues, regardless of any security settings in the Trust Center. Whenever you open a file that contains code such as a macro, ActiveX control, or add-in, Office disables the code, and you have to use the Message Bar to enable the blocked content. It might be seem odd, therefore, that anyone would want to turn off the Message Bar. However you can sometimes save yourself and your colleague's time by turning the option off in certain situations.
Office provides several ways to turn off the Message Bar and run code safely. For example, imagine you've created a macro - an automated set of instructions - for one of your Microsoft Office Word documents. Your colleagues find the function really useful, but every time they run it they have to use the Message Bar and a security dialog box before the macro can run. They need to be able to open the file without having to deal with the Message Bar and a security dialog box.
Office 2007 gives you the solution without risk of threats. If the code is signed, meaning it has a digital certificate applied to it, you can "trust" the certificate by adding it to a list of trusted publishers. This is the safest option and the one you should always try to use. If the code isn't signed, but you're sure you can trust the publisher, you can place the file in a trusted location.
If you write code for your own use, you can also create a self certificate, use that to sign your code and then trust that certificate. It's a straightforward process, but remember you don't see the commands discussed here unless you open a file that contains signed code. If a file contains unsigned code, you can enable it, but not trust it permanently, which means you'll see the Message Bar every time you open the file.
Certificates that come from large corporations are updated automatically and you almost never need to remove them. However, self certificates do expire. They can also become invalid for a variety of reasons, such as when someone tampers with a macro.
Use the Message Bar page to show or hide the Message Bar. However, unless you know what you're doing, never disable the Message Bar. Disabling the Message Bar will not allow code to run.
Original article appears here:
London's widest choice in
dates, venues, and prices
On-site / Closed company:
Director, International New Media
Great content, expertly and enthusiastically delivered by Jens. Great course!
BlackRock Investment Management (UK) Limited
Fantastic course with excellent energetic tutoring.
I had to do the intermediate course because I didn't feel confident enough for Advanced, however I feel I probably could have benefited from a half day 'refresher intermediate' course before then doing the full advanced day course. but I appreciate not everyone will be at the same level starting the course.
Thank you for an excellent service.
PSI CRO AG
Thanks to the trainer! Stuart was very enthusiastic and energetic, with a great sense of humor.
|
OPCFW_CODE
|
Software Statistics Service (SSS) is a software analytics service that allows to monitor desktop software usage and manage your product development more effectively. With it help you can easily identify users’ needs and find out what can be improved or changed in your software.
SSS is very useful in making marketing decisions. It provides you with precise information about users location and helps to create better targeted marketing campaigns. This desktop software monitoring system can be easily integrated into .NET, C++, Java, Android, Delphi, Microsoft Silverlight, WPF, Windows Phone 7, Mac OS, iOS platforms. Let’s review a few of them.
Java Software Analytics System
Use SSS to track the usage of Java-based applications. This Java software analytics system will help you to:
- improve your product
By knowing what features are used the most, what environment the software is running at, you can create customized solutions that the best benefit your customers needs
- save your money
Using Java software analytics system you can identify the most pupular projects and eliminate the ones that are used rarely
- increase the marketing effectiveness
Information about users location gives you a possibility to create targeted marketing campaigns and see the results of your marketing activities per countries
Delphi Software Analytics System
Software Statistics Service gives you a rich insight into your Delphi software application usage. With this information you can develop, improve and test Delphi software more effectively. This Delphi software analytics system collects comprehensive desktop statistics about how often and where the software application is being used, what features/versions are used the most, what environment it is running at and so much more. Use Delphi software analytics system to:
- release updates that respond to users requirements
- improve the usability of your Delphi software by concentrating on the most used features
- identify the most successful projects, versions and efficiently allocate your resources
- increase the effectiveness of your marketing activities
.NET Software Analytics Service
SSS is designed to help you analyze the usage of your .NET software application. Just download software client module and integrate it into your desktop software to start using .NET software analytics service. You can perform this for existing as well as for new .NET software products.
.NET software analytics service enables you to:
- see the number of downloads, installations, first and repeated starts
- compare the popularity of different desktop software applications or various versions of your products
- track the usage of new versions or separate features
- see what countries are your user from and adjust marketing decisions
Use this .NET software analytics system to determine your desktop software development road-map and increase the effectiveness of marketing tools.
C++ Software Analytics Service
SSS can be easily integrated to C++ platform. Using this C++ software analytics service you will stay in touch with your desktop software application during its development and maintenance. You will be able to test the reaction of users, implement new features accordingly to their needs and organize high effective marketing campaigns based on exact data and facts. With C++ software analytics service you will find out:
- how many users you have
- what users are clicking at
- what OS and hardware users have
- what countries your users come from
C++ software analytics service will help you to identify and correct bugs, improve software usability, performance and reliability.
|
OPCFW_CODE
|
Does cloudflare know the decrypted content when using a https connection?
CloudFlare provides ssl support. However, if a visitor visits a website protected by CloudFlare, is CloudFlare able to know the plain data transfered during this visit?
There are a few SSL options:
Flexible SSL
Full SSL
Full SSL (strict)
I know that for Flexible SSL, CloudFlare probably knows the plain data, as the data has been decrypted by CloudFlare and send to the web server insecurely.
What about Full SSL and Full SSL (strict)? Does CloudFlare decrypt first then encrypt again to send to the web server?
Are you giving them a certificate for your domain? If you need to give them a certificate, assume they can see and modify everything being communicated. Without a certificate they can't see or modify what is being send, but they also cannot cache anything. Without caching you only get some parts of the benefits offered by a CDN.
No I didn't give them the certificate. If CloudFlare cannot cache anything, it acts like a proxy, is it correct? What I don't understand is that in the Full SSL case, why the web client still trusts the SSL certificate even when the server certificate is self signed (in my case the site is showed to be signed by COMODO), if CloudFlare acts like a proxy.
That's not making sense. Self signed isn't the same as signed by Comodo.
@AD7six If it is two different SSL connections, then they need to have a certificate as well. In order for that SSL certificate to be issued, the domain owner has to approve it first. And xuhdev said that hasn't happened.
@kasperd the connection from the visitor to cloudflare has a cloudflare issued ssl cert - see answer below.
@AD7six Nobody is supposed to issue a certificate for a domain without that being requested by the domain owner. It is technically possible for a CA to issue a certificate without involving the domain owner, that is generally seen as the most prominent problem with SSL. I would not expect Cloudflare to issue a certificate without due diligence, but of course it may happen that the domain owner grant permission without carefully reading what they are granting permission to.
@kasperd perhaps ask cloudflare's about their due-diligence checks =)? Generally, cloudflare doesn't do anything except spit out errors unless the domain's nameservers point at cloudflare's and the domain is configured (which if nothing else implies consent of the owner). You also can't get a response from cloudflare over ssl unless the domain is configured for ssl. In the context of this question/general-use, cloudflare is an ssl-cert issuer (the user is asking cloudflare to provide an ssl cert by enabling ssl and selecting one of the 3 options they provide).
@AD7six When CA and CDN are two separate entities, it is a bit more obvious that you are requesting a certificate from one entity and handing it to the other. When the two are one entity it can become less obvious to the domain owner what they are giving consent to. I'd say the onus is on Cloudflare to tell the domain owner, what they are giving consent to. It appears that this wasn't made clear enough for xuhdev to realize, since he was apparently unaware of Cloudflare having a certificate. I don't know if this means Cloudflare did not explain clearly enough or if xuhdev did not pay attention
Refer to the documentation
Cloudflare's docs are fairly clear on this. Obviously (it should be obvious) Flexible ssl means the connection from cloudflare to the origin is unencrypted.
For full ssl (either permutation) the following applies:
Encrypts the connection between your site visitors and CloudFlare, and from CloudFlare to your server.
They are two different connections, So the answer to "Does cloudflare know the decrypted content?" is: "Yes".
Note that for EV or OV SSL certificates - you need to uploaded them to to cloudflare for end-users to see them, it's still 2 connections - not end-to-end encryption.
Reasons to use SSL
Using ssl prevents MITM attacks, it doesn't mean the cdn you're using is oblivious to the content it's serving, for you. You should maybe ask yourself why you want to encrypt the connection.
With no SSL, there are plenty of places a MITM attack can occur:
With Flexible SSL - that eliminates most, but not all of them:
With Full SSL - there's still the possibility of a MITM attack:
With Full SSL (Strict) - a MITM attack is now not possible without cloudflare itself being compromised:
If you are concerned that cloudflare can read your data - don't use cloudflare.
It's important to note that even with strict SSL, you'll never know if CloudFlare is compromised unless somebody illegally leaks it. If they are compromised, they can read everything.
I really love their usage of "NSA" and the infamous NSA smiley.
|
STACK_EXCHANGE
|
Is there any example where a country completely dropped its historical or traditional ally?
Is there any example where a country completely changed its historical or traditional ally peacefully?
As an analogy, we know Turkey and Pakistan are traditional allies. Let's say India awards Turkey a $4 billion arms contract. Therefore, Turkey abandons Pakistan and becomes an Indian ally. Like this...
I'm not exactly sure what you mean by "these kinds of changes". Do you mean without change in government? Most transitions in Eastern Europe were peaceful IIRC. (And yeah, change in government is a common way by which foreign policy changes. "Diplomatic process" is rather meaningless/NA in terms of internal changes in a country.)
@Fizz, Do you mean without change in government? --- Kind of, Yes. However, it is hard to express what I am trying to understand. For example, Turkey and Pakistan are allies. Say India awards Turkey a $4 billion arms contract. Therefore, Turkey abandons Pakistan and became an Indian ally. Like this...
Aye, it is not clear what 'traditional ally' is supposed to mean exactly here. Most transitions from big powers having colonies and waging war with each other to more independent states did not happen much longer than 100 years ago...
Egypt has changed its orientation from Soviet ally to an American one, signing a peace treaty with Israel in the process. More recently, all kinds of pivots practiced by the US presidents also fall in this category, although the changes may be less drastic (hard to quantify though.)
@RogerVadim, Yes. This is correct. Although Egypt hasn't totally severed its military relations with Russia.
@user366312 I think it is difficult to find the cases of 100% loyalty to one of the two camps, except in war. Even these days, though Europe aligned with the US, most European countries continue relations with Russia... even US continues, if we look at Nickel imports or space flights.
Perhaps another example could be the US normalization with China by Nixon&Kissinger (though warming up started before them). Especially if we look at how big a trade exists between the US and China nowadays.
Warsaw-pact countries changed their orientation from the USSR/Russia to NATO. Some countries in South America changed their orientations from the West to the USSR. Recently, Mali changed its orientation from the West to Russia. I am not talking about these kinds of changes. Why are you ruling these out? Please explain what you want or don't want so we can attempt to answer your question.
What does peacefully mean in this context? Without a war, or without anybody get killed?
@convert, What does peacefully mean in this context? Without a war, or without anybody get killed? --- Yes.
Sorry, what tha yes was refering to? For example in Romania the leader got executed, does it still count as peacefully?
@convert, For example in Romania the leader got executed, does it still count as peacefully? --- No. The OP even has an example.
Soviet Union - Nazi Germany
Soviet Union was initially in good terms with Nazi Germany, signed the famous Molotov–Ribbentrop Pact dividing the Europe peacefully. In 1939, the Soviet Union joined Nazi Germany as a de facto ally, and the two powers invaded Poland together. Nazi speeches were reprinted in the Soviet press and Nazi officers admired Soviet efficiency in mass deportations (source, The New York Times).
This changed after A.Hitler attacked the Soviet Union.
I downvoted because historical facts say otherwise - Stalin was always wary of Hitler and reached out many times to other European powers to contain Hitler. But they ignored him and played their own geopolitics because they knew Hitler wasn't a fan of the Russians. This created a situation where Russia was pushed towards Hitler and hence the temporary alliance.
Czarist Russia switched from the Three Emperors' League to the Triple Entente.
Japan was allied with the British during and immediately after WWI, then reoriented towards Germany before WWII, so it was not changing allies during wartime. But their policy was expansionistic, so it might not count for your question.
Finland switched from genuine neutrality (and trade with both sides) to EU membership.
Also Italy around WWI, but not sure if it was without changes in gov't. Also it's debatable how natural their alliance with Austria-Hungary was to begin with https://www.theworldwar.org/learn/about-wwi/italy-enters-world-war-i
@Fizz But even if there were changes in gov't, as long the changes were peceful should count as answer.
Germany refused to prolong Three Emperors' League treaty, so it was long dead befor Russia switched to Triple Entente.
Actually, the British dropped their alliance with Japan in favour of good relations with the USA in the 1920s, well before Japan started to align with Germany. That alignment came quite late: Germany was providing military support to China until Hitler stopped that in 1937.
Obviously it's incorrect to claim that the Warsaw pact nations switched their alliance by any means other than peaceful. After their governments changed from the Soviet colonial governments to governments with allegiance to the countries they governed, they joined the NATO alliance, aimed at stopping Russia's colonial conquest, as was always NATO's goal. This change of allegiance did not involve any war against Russia, the colonial power.
Sure there were no war with Russia for independance, but pro Soviet governments were overthrown and in some examples really bloody.
I understand the eagerness, but 'colonial' is not a correct term regarding Warsaw Pact.
@alamar the modern view on that is changing to mean exactly that. I don't really feel like arguing this point other than to say that it's the zeitgeist now outside of the hardcore Communist circles. And, of course, outside of the hardcore Russia-never-does-anything-wrong circles, but the latter is to be expected.
@convert Transitions were peaceful in most of Eastern Europe except in Romania which did have a brief uprising against the communist leadership. Czechoslovakia, East Germany, Poland, Hungary, and others were done without any kind of coup or civil war.
@Stuart F Romania was exactly the bloody example.
As a result of Egyptian president Anwar Sadat's economic policy of "opening the door" to private investment in Egypt there was a break with longtime ally and aid-giver the USSR - replaced by the United States.
An other example is Albania. Albania’s alliance with the USSR steadily eroded and collapsed altogether by the early 1960s as a result of the campaign of de-Stalinization launched by Soviet leader Nikita Khrushchev. The PRC under Mao Zedong emerged as the new patron for the country.
The United States of America - France aided the Americans in its fight against the British, and is often considered as its first international ally. But later, the US totally pivoted to the UK as its most trusted ally in Europe.
The United States did not have any alliances during the 19th century. It's alliances with France during the 18th century ended after accomplishing its goal (the independence of the US from Britain).
|
STACK_EXCHANGE
|
Directory setup with special permissions to
auto-execute user-defined scripts. CGI is
used to accomplish tasks which are not
supported by basic HTML such as a hit
counter or guest book. You may use this
directory to execute any cgi-script you
write or ones that are already pre-written
for you. The directory is specially designed
for scripts that end in *.pl as scripts that
end in .cgi will execute from any directory
within your web space.
We offer several pre-written scripts
available free for your use. These include
hit counters, a shopping cart system, guest
book, bulletin board system, visitor links
page, random quote displayer, and much more.
Custom MIME Types
Adds the ability for serving multiple
content types such as midi files, special
audio formats and others.
Unix Shells: bash, csh, tcsh
Each shared server has 3 different shells
available letting you choose which you are
most comfortable scripting in.
Perl 5.004, C, C++, Java JDK 1.1.7,
Python 1.5 & TCL 7.6 CompilersVarious types of scripting languages /
shells that are installed on all C I Host
EmacsAn extensible, customizable,
self-documenting real-time display editor
that is used in a UNIX shell.
Pine, vi, elm, Joe, Pico & Cron Tab and .htaccessPine is an easy to use UNIX shell-based
email client that organizes your emails
complete with address book, sent items and
inbox. VI, JOE and PICO are text editors
that lets you create and edit files within
telnet. ELM is very simple email program
that lets you just send and receive emails
from within your telnet shell. CRON TAB is
similar to an alarm clock, it executes set
commands at user-defined times. .HTACCESS
lets you setup password protected
directories within your web site.
MajordomoThis is a very flexible tool for interacting
with your customers or clients. In simple
terms it's an interactive style newsletter
that allows all subscribers to distribute
information. There are many configurable
features including automatic subscribe and
unsubscribe and much more. When one user
emails the Majordomo listserver that email
is sent out to everyone on the list to read.
Custom Error DocumentsBy placing a file in your main directory
called missing.html, you will be able to
provide a customized page to any viewer that
requests a file with their browser that does
not exist on your domain.
Disk Usage MeterLets you view real-time statistics on how
many MB of disk storage your web site is
utilizing. Also breaks down usage to file,
directory or entire web site.
Bandwidth Consumption MeterLets you view real-time statistics on how
many GB of bandwidth your web site is
utilizing. Will also show you projections
for the week and month.
|
OPCFW_CODE
|
How do I report speech containing "must not"?
What's the form for reporting speech that contains must not?
I mean:
I can't come to the meeting on Monday => She told me she couldn't come to the meeting on Monday.
You must talk to me => She said you had to talk to me.
Till now, there's no problem, but what should I use with must not?
I think that I can't do the following:
You must not talk to me => She said that you didn't have to talk to me.
because
Must not <> Have to
Am I right?
"cannot" and "must not" mean similar things in this case.
Although "must" is synonymous with "have to", "must not" and "don't have to" mean two different things in the negative. "You must not" is "You are forbidden to", and "You don't have to" is "You are not required to".
The negative form of must is mustn't. So you can say this:
You must not talk to me. => She said you mustn't talk to me.
The past form of must is also must, so you don't need to change the form of the verb when reporting speech in this manner.
However, the form mustn't is rarely used in American English, though I believe that it's commoner in the UK. Instead, most Americans would substitute shouldn't:
You must not talk to me. => She said you shouldn't talk to me.
The reported speech should be in past tense (at least in some circumstances), as in the questioner's example, which this answer does not address.
Now this answer has "The past form of must is also must, so you don't need to change the form". Sorry? "I must go to the supermarket yesterday"?
@msh210, none of the past modals would work in your sentence. Try substituting "would" or "should" for "must" in your sentence--it still makes no sense, because that's not what past modals are for.
But, of course, must not is used in the US! Who says we have to contract it all the time? :)
@JSBangs: true: my analogous sentence wasn't.
@msh210: I think it's reasonable to assert that tense harmony is optional nowadays, actually.
How about, "She said you were not to talk to me"?
I don't know if it is 'good' English, but it is common from where I grew up in the north of England.
She said you had not to talk to me
If you must use 'must' then
She said you musn't talk to him
works, but it's a little clumsy. An improvement could be
She said you musn't speak to him
but I would prefer
She forbade me to speak to him
The fact that she said it isn't the important part of the message, it's the prohibition.
I guess you meant She forbade me to speak to him.
@kiamlaluno indeed, thank you for spotting that. Corrected
It sounds like you are trying to report a past obligation that no longer applies?
She said that you were not to talk to me.
She said that you were not [supposed, allowed, permitted] to talk to me
She said that you could not talk to me.
|
STACK_EXCHANGE
|
Solution for Converting CSV to KML for just lat and lon
is Given Below:
Below is the code I am using to covert my CSV file to KML into Google Earth Pro; however Google Earth Pro continues to crash when I upload to files. The amount of Lat Lon Coordinates are too great for Google Earth Pro to handle? Am I missing something in this code? I have a few CSV files with over 100,000 coordinates. Trying to avoid this error….
import simplekml import pandas df=pandas.read_csv("FILE NAME.CSV") kml=simplekml.Kml() for lon,lat in zip(df["Longitude"],df["Latitude"]): kml.newpoint(coords=[(lon,lat)]) kml.save("OUTPUT.KML")
Creating KML with over 100,000 placemarks could have performance issues in GE Pro. In your situation, your KML file is crashing GE Pro.
Importing CSV files directly into GE Pro is limited but customizing the KML as generated in Python gives you many more options which help you tailor the output to your needs.
KML provides mechanisms to control which features or sub-KML files are displayed using time, region, or altitude level filtering.
You need to consider what makes sense based on the characteristics of your data. If your data is geographically spread out and you can bin the data into different geographic sub-areas then each sub-area can be written to a separate KML file, and the main KML file can have a networklink to each of the sub-kml files but not visible when first loaded.
Large KML files can scale using the following techniques:
SimpleKML supports creating NetworkLinks and Regions. You can use shapely module to determine if a point is inside a rectangle or polygon to bin the points to the appropriate sub-area.
A NetworkLink allows a KML file to include another KML file. You can have any level of NetworkLinks from within your root KML file from flat (single KML file with Networklinks to all other KML files within the KMZ) to deep (with each KML file with a NetworkLink to other KML files each with its own NetworkLink). Depends on how you need to structure your KML and how large the data is.
The key is that Google Earth chooses the first KML as the root KML file so you must ensure that the first file (typically named doc.kml) is the root KML file that loads the other KML files via network links. A common structure is to include additional KML files in a “kml” sub-folder to differentiate it from the root KML file if packaging them in a KMZ file.
A tutorial on Using Network Links Effectively
A Region affects visibility of a Placemark’s geometry or an Overlay’s image. Regions combined with NetworkLinks enable access to massive amounts of data in KML files. A region can optionally have a min and max altitude for altitude level filtering.
Regions are a more advanced feature of KML, so would recommend first looking into NetworkLinks and creating multiple-kml files if there is a logical way to split your data into groups.
|
OPCFW_CODE
|
Join GitHub today
GitHub is home to over 20 million developers working together to host and review code, manage projects, and build software together.
Fetching latest commit…
Cannot retrieve the latest commit at this time.
|Failed to load latest commit information.|
Untitled One History: Untitled One was originally written in my late highschool / early college days. The idea was to create a space combat game with Asteroids-like physics, but introducing two player versus combat, and adding in gravity inspired by games such as Lunar Lander. Although Untitled One is meant to be similar to these games, I was mainly inspired by another game called Crazy Gravity, which included physics almost identical to Untitled One's physics, but was more detail oriented and less action oriented, which made it a fun game for doing speedruns. I recommend it if you have the time and don't mind paying a little money for a fun game, although, it only runs on Windows. In the end, I think what I have devised is a very unique idea for a game, and I think that it is probably one of my favorite game ideas because it is so unique, but also so simple, yet the gameplay is so difficult to master. To this day, I have yet to find another who can fire more then once or twice without crashing or flying off the screen, however, it is no problem for me, I have successfully destroyed all 10 targets in target mode in under 19 seconds, and reached wave 4 in asteroids mode, but have never found another to play a real, competitive 2 player game. Although this project is called "Untitled One", this is far from my first project. Untitled One was simply the first in a series of unrelated games which I used as my sandbox for testing new ideas. This is also not even the first Untitled One game. The first was written in a program called Game Maker and I don't even think this version reached a playable state, the second was written in C++ using Windows GDI+ for graphics and only included two player combat mode, no menus were implemented in this version, this is the reason two player combat mode is now called "Classic", the third iteration was written in C++ for Windows using OpenGL for graphics, this version was the first to include asteroids and targets and a menu. It also included a network mode, although it was essentially non-functional. Finally, the version you see now is a port of that Windows version, with the Windows code replaced with portable code (thanks to FreeGLUT), and numerous enhancements. Compilation: The source is entirely contained in one file, called main.c. This file includes everything needed in the compiled executable, including sound effects. I have tried my best to keep the source under 3000 lines of code, so it should be easy to modify if you need to or want to. Once compiled, everything is contained inside a single executable file (including models and audio). Compiling is quite easy, and can be done in a number of ways. Method 1: ./configure make running ./configure works just like a GNU autotools project (although it is not), you can run ./configure --help to see other options you may be interested in, such as disabling audio or enabling experimental or legacy features, or disabling POSIX functions which might prevent compilation on non-POSIX systems. ./configure will create a vars.mk file which is read by Makefile can tells make which arguments to compile with. make will compile the executable for you, which can then be installed by running (as root): make install Which will install the executable to either /usr/local/bin/ or a user specified prefix. Method 2: gcc main.c -lglut -lGL -lGLU -lm Since the project is so simple, you can easily build without make. However, note that you will need other options to compile optional features, such as sound. For example, to include sound you would need to add: -DAUDIO -lalut -lopenal making your compilation string look like this: gcc main.c -lglut -lGL -lGLU -lm -DAUDIO -lalut -lopenal other options are available, such as -DNONETWORK and -DFULLSCREEN, read the source to learn about these options, they basically do what exactly what you think they will do. Compiling this way should be very easy for anyone who is familliar with GCC or any other Unix-like command line compiler.
|
OPCFW_CODE
|
SDK Delivered via The Magic Leap Hub
- Unity Example Project
- Magic Leap 2 Unity SDK Package
- Lumin SDK Version 0.53.3
- Added support for camera auto focus API.
- Added support for
- Added async support to
MLCameraAPI to allow developers to avoid blocking on Camera operations.
- Added better support for
MLWebRTCCameraVideoSourceto manage camera resources during pause and resume.
- Added Trigger Hold action to MagicLeap input actions.
LuminXrProviderwith normal permission checks.
- Added boot settings for OpenXR.
- Added support for high precision marker tracking.
MediaPlayer.VideoRenderer.OnFrameRenderedcallback to media player renderer.
- Added Magic Leap 2 interaction profile for OpenXR.
- Added support for indefinite
- Updated integration branding with the Magic Leap Hub (formerly The Lab 2.0) and Magic Leap App Simulator (formerly Zero Iteration).
OnSourceEnabledwhen not using native buffers for WebRTC.
- Fixed hand tracking keypoint detection under ML App Sim.
- Fixed native platform error logging.
- Fixed event delegate initialization in
- Fixed excessive Audio Playback allocations.
- Fixed developer build crashes caused by
- Fixed memory leak when pausing/resuming unity applications (requires Unity XR Package update).
- Fixed collision mesh generation on mesh blocks generated from
MeshingSubsystemComponent+ Mesh prefab.
- Fixed Voice Intents configuration asset creation (fixed in 2022.2.0b4 of Unity Engine).
- Fixed crash caused by
MLDeviceinstance race condition.
MLWebViewmouse input functions to simplify parameters.
MLWebViewmouse drag support.
MLWebViewcomponent null reference checks.
MLAnchorduration checks and updated documentation.
- Fixed controller Menu button and touchpad actions.
- Refactored controller action layout to remove touch point 2 and cleanup supported actions.
YcbcrRenderer.Cleanup()not fully cleaning up resources.
UnityEngine.XR.Hand.TryGetFingerBonesreturning a 5th invalid Bone when only 4 are supported.
Deprecations & Removals
MLAudioPlayback. Uses normal singleton pattern. Callers still need to drive its lifecycle functions.
- Removed automatic disabling of Strip Engine Code, this has been fixed in the 2022.2.0b4 Unity Engine.
- Removed permissions for
CONTROLLER_POSE, these are no longer required.
- Removed remaining references to Lumin platform. Magic Leap 2 is a full AOSP based platform.
- Image tracking, World Raycast & Hand Meshing support has been temporarily disabled in this release. None of these are currently supported on the device. Once re-enabled, developers can use some of these in ML App Sim.
- Eye blinking state is not reported either by the eye tracking API or the gaze recognition API (awaiting platform support).
LocalAppDefinedAudioSourceBehavioris restricted to 1 audio channel.
- To use Geometry Shaders, Force Multipass must be set in Project Settings -> XR Plug-in Management -> Magic Leap Settings -> Force Multipass. Otherwise geometry shader passes cause vulkan exception in Unity player.
- Keypoint mask values in ML App Sim are temporarily ignored and overridden to true.
- XR Framework Meshing subsystem crashes when attempting to load mesh blocks for rendering.
- Detecting simultaneous controller input buttons does not work in Unity Input System 1.2.
- Marker tracker transforms are upside down requiring users to rotate them by 180 degrees about the forward vector.
- Camera capture can freeze app after multiple captures.
UnspecifiedFailurewhen called by WebRTC
NativeBuffers. When using
YUV CaptureVideoStopreturns successfully.
- Some configurations of camera capture can produce distorted images.
- WebRTC video sink rendering fails when non-white material is assigned.
GestureInteractionRotationare not implemented yet and data will not be guaranteed accurate. Currently only the Positions of the Hand Transform and Interaction Point will be recommended to use.
MLWebViewfirst tab creation causes framerate drop.
MLWebViewhas challenges clicking on web links on page due to noisy controller position cancelling click operations (treats it as a drag operation).
HandTrackingis enabled, the Controller position/rotation actions fail to work properly when binding with the generic XRController and Right XRController input devices. The work around is to have your actions bind to the
MagicLeapControllerinput device instead. The
MagicLeapInputsinput asset already does this with it's action fallbacks.
MLAudiois not fully supported in the 2022.2.0b5 version of the Unity Engine, make sure you don't check the "MLAudio" check box in Magic Leap XR settings (to utilize the Java AudioTrack fallback). Also use the following audio settings: Sample rate to 48000, buffer size to Good Latency.
- When changing audio settings Unity crashes often or starts making noises.
- Unity applications currently experience aproximately 190 MB/hr memeory leak.
MLAudioInputdelayed capture and "parroting" is not functioning.
|
OPCFW_CODE
|
Is it appropriate to add a dedication to a paper?
A paper of mine (relatively junior mathematician) was just accepted in a good journal, and I was considering adding a dedication to the memory of a mathematician in my area that passed away, and whose work I admire a lot. The topic of the paper is closely related to the work of said mathematician, however I have never met them personally, or communicated with them in any way. Would it be appropriate for me to add such dedication, or is this usually reserved to more established mathematicians, or mathematicians that knew each other well? Do journals have typically a problem with adding dedications after the referring process? I am guessing 'no', but I am asking anyway.
The journal is unlikely to object to your dedication.
However, there are others who might object. It is best to contact the department or family and ask for permission. This is the sort of thing where asking for forgiveness is the worse alternative. Gerhard "And Is Hard To Undo" Paseman, 2018.12.07.
The wording matters. Don't write "in memory of" for someone you never met. Something like "dedicated to.." is acceptable, but not common these days. Consider whether you will feel just as happy about it when you review your paper 10 years from now.
Of course you can add. And nobody will object. I do not understand the remark of Gerhard Passman at all.
I personally don't like such dedications (unless the article is for a special issue or Festschrift dedicated to the scholar) because it implies some sort of professional connection between the author and the scholar that (especially here) does not exist. It isn't exactly fraud, but it is certainly misleading. I have never done this in my 200+ publications.
My inclination seems to align with others': such a dedication is most appropriate only if you have a personal connection to the person. For instance, it would be very appropriate if the person was a mentor of yours (even if there was no "official" relationship). To your stated question: I would be very surprised if you received any pushback at all from the journal or editor (who would probably assume, as I would, that you had worked with said mathematician).
I am not so sure that the journal would not object. The American Physical Society explicitly forbids dedications as stated: Acknowledgments to individuals should be a simple statement of thanks for help received and not a dedication or a memorial.
@DavidG.Stork "because it implies some sort of professional connection between the author and the scholar that (especially here) does not exist."
That's an interesting point of view. I wonder how widespread it is. It has never occurred to me that dedications could be considered as declarations of professional connections. I rather viewed them as pure declarations of admiration (and you do not need to be professionally connected to someone to admire them or their work).
Instead of a formal Dedication, how about some words towards the end of the introduction expressing indebtedness to the late mathematician, perhaps for the stimulation of their work, or some such.
Thank you everyone. Indeed, my intention was this to be a pure declaration of admiration. More so as this paper will appear in a top journal (I was a bit disingenuous describing it as "good"), and I felt it was an appropriate venue in this regard as well, to pay my respects to an exceptional mathematician that was influential for my work as well. However, the possibility that this could imply a professional connection, and viewed as such by some, is a very serious concern.
What is wrong, say, with "I am deeply indebited to prof. X, that I never met, but whose work influenced me a lot. I'd like to dedicate this paper to his memory"
If you find it meaningful, just go ahead and do it, with sincerity. Best case, (as I expect), is that the journal simply publishes it. Worst case is that they ask you to remove it. No big deal.
In math, dedications are rare but not unheard of. They are most common in birthday conference proceedings, or special issues devoted to summarizing a person's life work. For example, consider the following papers that have a birthday dedication:
Gunnells - Robert MacPherson and arithmetic groups
Nollet and Schlesinger - Curves on a Double Surface
Mulase and Zhang - Polynomial recursion formula for linear Hodge integrals
International journal of Numerical Analysis and Modeling, volume 15, number 4
Kinoshita, Power, and Takeyama - Sketches
International Conference On Logical Algebras and Semi-rings
I found these and dozens more by searching Google for "math" "dedicated to" "occasion" "birthday". Similarly, there are plenty of examples dedicated to the memory of a mathematician who recently died ("math" "dedicated to" "memory of"):
Costin, Lebowitz, and Rokhlenko - On the complete ionization of a periodically perturbed quantum system
Rowell - An Invitation to the Mathematics of Topological Quantum Computation
Differential Equations and Applications, volume 10, number 1
Propp - Exponentiation and Euler measure
Discrete and Continuous Dynamical Systems - Series B, volume 21, number 2
People have also dedicated papers to non-mathematicians, e.g., Chris Kapulkin dedicated this paper to his mother.
There is ample evidence that the journal will not normally object to a dedication. I have even seen papers dedicated to the memory of Grothendieck, by authors who are unlikely to have known him simply because of the large gap between when he left mathematics and when these papers were written. It's clear that someone's work could have a big impact on a young researcher, even if the two never met. I would say it's "not inappropriate" to write a dedication in such a situation, even if it's rare.
In the papers above, you see some dedications right under the author name and above the abstract, and you see others that are a sentence in the acknowledgments. For a junior author dedicating a paper to a senior mathematician they never met, I'd err on the side of putting it in the acknowledgments so you have more space, to write a full sentence about how that person's work touched your life. The short style above the abstract of simply "This paper is dedicated to X" might raise a question in the mind of the reader, though, to be honest, I think most people would ignore the dedication regardless of where you put it. Lastly, there were two threads on academia.SE about dedications:
When can scientific publications have a dedication?
Dedicating a paper
That Kapulkin paper is interesting, because he's not the only author. I'd be interested, in an extremely idle way, to know if there had ever been any circumstances of disagreement among authors about whether, and to whom, to dedicate a paper, and how it was resolved. (I do not believe, or have any reason to believe, that such is the case here; I'm just wondering in general.) The answers to such a question would of course be too idiosyncratic to make it appropriate for MO or AcademiaSE, though.
@LSpice I once asked a co-author if we could dedicate a paper to someone and he said no. So, I'll do that on a solo-authored paper someday.
Re, I guess my curiosity had exactly the right amount of idleness to be gratified. I'm sorry that your co-author was unwilling; unless for some reason I specifically disapproved of the subject of the dedication, I don't think I can imagine objecting to a "the first-named author dedicates the paper to …"-type statement.
|
STACK_EXCHANGE
|