This blog post is the first in a series called “Research Highlights”. In this series I will put my research into context, reflect on what I did and why, and, most of all, make it comprehensible for everybody. I have only recently started thinking about this, and I would like to share some thoughts on why this feels like the appropriate thing to do.
Most of you probably know that one of the perks of being a PhD student is writing and publishing papers. This is the core of my job: without publishing papers, you cannot finish and defend your PhD thesis. On a more meta level, however, publishing can be seen as something philanthropic: knowledge increases in value when it is shared. A nice illustration, although I’m not sure it is actually true, concerns the knowledge needed to fabricate a computer mouse. For something so mundane, used day in day out, it is remarkable how complex such an unnoticed object actually is. The components of a mouse, such as the wire (if, like me, you don’t care for fancy Bluetooth mice), the wheel, and the hardware inside that transfers the signal to your computer, are all quite complex. They consist of different raw materials (plastics, copper, iron), which are harvested or created in different places in the world: no person on earth could make a mouse entirely by themselves, from harvesting the materials to screwing everything in place. Since the mouse plays an intricate role in modern society, this pleads the case for science communication: without sharing knowledge, we would not be able to operate a computer (in fact, the computer would not exist at all). Publishing your results is therefore important for the world to progress and for the knowledge pool to expand.
However, even though these publications are accessible to most people (let’s not dwell on the fact that they can be quite expensive to access), they can be quite hard to comprehend. Without enough prior knowledge, they might be too hard to follow, even when they are relevant for your daily life. This is the gap science communication tries to bridge. Role models like Ionica Smeets have recently sparked my desire to explain my research to the public. Moreover, the stories from Tom, my boyfriend, who actually studies science communication, and discussions in which I explain my work to friends, helped me decide that this should be a major part of my personal blog. My goal in this blog is therefore to bridge that gap, starting with my own work (since, of course, I know the most about my own work).
Writing this story down at least helps me to raise the bar: it would be a great source of irritation not to see it get a follow-up. It will take some time, and I will probably need a while to explore and discover what works. Still, I invite you all to follow my science communication adventure. Buckle up, I hope it’s worth the read…
|
OPCFW_CODE
|
The promotion is made up partly of actual sales, and partly of full-price Halloween things that Microsoft thinks you might enjoy.
Microsoft believes that the Surface devices offer more features and versatility, along with touchscreen, to interest existing MacBook users.
The Bing team explains some upcoming changes to the level of 3rd party support Bing will be enabling in the coming months to boost AI.
If you’re on the Fast ring of Windows Insiders, there’s a neat update to the Skype Preview app available for both Windows 10 PC as…
Microsoft Edge has stayed pretty much unchanged in terms of UI since its release over a year ago. But this might change soon with the Windows…
Today, the Azure Certified for IoT program boasts more than 100 partners and 175 different Azure Certified for IoT devices.
It wouldn't be a good Microsoft conference without some videos to give these announcements some flair. Here are our favorites.
Microsoft looks as if it is overhauling Windows Defender in the upcoming Windows 10 Creators Update.
Joe Belfiore is back at Microsoft full time.
More advanced customization settings have been requested since the release of Windows 10, and they might finally be here with the Creators Update.
Microsoft wrapped up its New York Windows 10 event a few hours ago and the two-hour keynote packed a lot of announcements about upcoming Windows…
The Windows Store might soon start selling customized Windows 10 themes.
Find directions to locations on Google Maps with the default Maps app on Windows 10 Mobile
By Abhishek Baxi
The Surface Ergonomic Keyboard is one of the new accessories Microsoft is rolling out next month and is already available for pre-order.
At Microsoft's Windows 10 Event in NYC, we have a demonstration of Paint 3D in action on a Surface Pro 4.
However, some people have some sharp eyes, and a redesigned Action Centre has been spotted.
Xbox is receiving some fun changes in the Creators Update with tournaments, easy interactive streaming via Beam, and audio improvements.
Microsoft's new 3D social community, Remix 3D, is already open for Windows Insiders to get in on the action.
Office 365 is getting its own updates in preparation for the Windows 10 Creators Update coming in 2017.
The Surface Dial is the peripheral which helps bring the Surface Studio and Microsoft's recently announced Windows 10 Creators Update, to life.
Here's a more in-depth hands-on look at Microsoft's new Surface Studio AIO, the Surface Dial, and some of the partner apps that will leverage both…
Microsoft is pushing the creation and sharing of 3D images from taking pictures to creating holographic images right before your eyes.
|
OPCFW_CODE
|
Changes to a suggested edit between approvals/rejections should reset any votes on it
On smaller sites it is possible for suggested edits to sit in the review queues for very long, so there is scope for manipulating the suggested edits after only one review has come in but before it gets approved/rejected. For instance, suggested edits on Music Fans SE take 48 hours on average to be fully reviewed. This is enough time for changes to be made to a suggested edit between approvals/rejections.
I have actually done this when I noticed that my suggested edit could do with some minor improvements. For example, consider this suggested edit of mine to a tag wiki on Music Fans Meta. I didn't notice that it contained a broken link when I submitted the edit. After it received one approval, I changed the broken link and noted the change in the edit summary — but, the first approval still remains. Now, the approver has no way to know that a change was made and that they approved a different version of the suggested edit. (I don't know whether the original version of the suggested edit is even saved anywhere for comparison.)
This is far from ideal. If a change is made to a suggested edit after it has received a vote, then that vote is meaningless: it is for a different suggested edit compared to the final version that is approved/rejected.
Feature request: Any updates to a suggested edit (by the user who suggested it) should reset any votes on the edit.
This idea has merit, but do note that on sites like Music Fans it may take even longer for an edit to be reviewed. Also, will the reviewer be able to review the edit again (I assume so - that would still be confusing to them.)
@Glorfindel I think it's not a problem if it takes longer - it's more important that an edit is reviewed for what it is, rather than what it was. Regarding the potential confusion for a reviewer who had reviewed it earlier, I'm not sure what a good solution would be.
It would seem that if a subsequent reviewer makes a change (Improve or R&E), prior votes are reset; it's a bit fuzzy whether the FAQ says that the OP's re-edits have the same effect, but the FAQ seems to imply that (even if it's not true). Either action should behave consistently; it's odd that trusted edits reset votes and untrusted ones do not.
@JamesA For the case I mentioned, I just navigate to the tag info page for the relevant tag, and click on "Edit tag info". This loads my previously suggested edit, and I can now make changes to it.
@Rob If a subsequent reviewer chooses to either "Improve Edit" or "Reject & Edit", then their edit becomes binding, but the prior votes are not reset, they are still logged in the review info.
|
STACK_EXCHANGE
|
package main

import (
	"fmt"
	"net/url"

	"github.com/koding/kite"
	"github.com/koding/kite/config"
)

// Setup
// go get github.com/koding/kite
// There is a service called kontrol which runs alongside your application Kites.
// It handles service discovery: all of your application services register with it
// so that clients can query the service catalog to obtain a service endpoint.
// kontrol stores its data in a backend such as etcd.
func main() {
	// We are creating our new Kite.
	// Two arguments: the name of our Kite and the service version.
	k := kite.New("math", "1.0.0")
	// Obtain a reference to the configuration and set this on our Kite.
	c := config.MustGet()
	k.Config = c
	// To be able to register our kite with service discovery, we have to set
	// KontrolURL to the correct URI for our kontrol server. Here we are using
	// the name that is supplied by Docker when we link containers together.
	k.Config.KontrolURL = "http://kontrol:6000/kite"
	// We are registering our kite with the kontrol server. We need to pass the
	// URL scheme we are using (HTTP in this instance) and the hostname, which
	// needs to be an accessible name for the application.
	k.RegisterForever(&url.URL{Scheme: "http", Host: "127.0.0.1:8091", Path: "/kite"})
	// Add our handler method with the name "Hello".
	// The signature for HandleFunc is very similar to that of the standard HTTP
	// library: we set up a route and pass a function which is responsible for
	// executing that request. Arguments arrive encoded with the dnode protocol.
	k.HandleFunc("Hello", func(r *kite.Request) (interface{}, error) {
		name, _ := r.Args.One().String()
		return fmt.Sprintf("Hello %v", name), nil
	}).DisableAuthentication()
	// Attach to a server on port 8091 and run it.
	k.Config.Port = 8091
	k.Run()
	// One of the nice features of Kite is that authentication is built in, and
	// it is quite common to restrict the actions of a particular service call
	// based upon the permissions of the caller.
	// Under the hood, Kite uses JWT to break these permissions down into a set
	// of claims: the key is signed, so a receiving service only has to validate
	// the signature of the key to trust its payload, rather than having to call
	// a downstream service.
}

// Code generation
// There is no code generation and there are no templates.
// Tooling
// etcd, which is used for your service discovery.
// etcd and kite are easily packaged into a Docker container.
// Maintainable
// Format
// Kite uses dnode as its messaging protocol.
// Patterns
// Service discovery is built into Kite with the kontrol application.
// The backend store for kontrol is not proprietary; it uses a plugin
// architecture and supports etcd, consul, and so on.
// Language independence
// Efficiency
// Quality
// Open source
// Security
// JWT
// Support
// Extensibility
// Summing up Kite
|
STACK_EDU
|
Technology Companies Team Up to Prevent Another Heartbleed Heartbreak
The Heartbleed bug broke the hearts of many a programmer. It also drew attention to the fact that few programmers had the time to scan the widely used open source software to find the bug that was discovered more than two years after its creation. Now, the Linux Foundation and several big names in technology including Google (NASDAQ:GOOG) (NASDAQ:GOOGL), Intel (NASDAQ:INTC), and Facebook (NASDAQ:FB) are providing support to fix the bug.
The Heartbleed bug existed in a piece of open source software called OpenSSL. Apache and nginx, open source web servers behind about half a million websites, used OpenSSL. The software was also used by a wide variety of websites, from Yahoo (NASDAQ:YHOO) to Reddit, and even operating systems, including certain versions of the Android software on mobile devices and some Linux operating systems, were affected.
Getting rid of the open source software was not an option, since it is so widely used in a variety of programming-based applications. Open source software is something Internet users come across every day via websites and other programs, which is part of why the Heartbleed bug affected such a large portion of the Internet. The web browser Mozilla Firefox is open source software, and entire libraries of open source software are available to programmers via GitHub, much of which was also affected by the Heartbleed bug.
Now, the companies behind some affected software are banding together to prevent another Heartbleed bug by creating and funding the Core Infrastructure Initiative, a multimillion dollar investment in open source software as opposed to depending on the free work of programmers when they had the time to spare.
A technology non-profit, the Linux Foundation is the group behind the open source Linux operating systems that are a popular alternative to Windows and OS X, and it will be hosting the program. The goal is to provide some much-needed oversight to the software that the Internet depends on to function. On its FAQ page, the Core Infrastructure Initiative explains the primary challenge of keeping such widely used software in check when it is not adequately funded. It notes that the OpenSSL project received only about $2,000 in funding, despite its widespread use.
Open source software is code worked on collaboratively by an unlimited group of programmers, made possible by making the source code publicly available. While the resulting product is often better and more secure than a piece of closed source software worked on by a smaller group of programmers, bugs and other errors can still be overlooked, because few people are looking at the code full-time.
That is what happened in the case of the Heartbleed bug. An error in the heartbeat function of the code caused it to leak up to 64 kilobytes of the data it was supposed to keep secure. In response, a patch was quickly created, and affected websites either used that patch or created their own to fix the error. Now, the Core Infrastructure Initiative allows companies to tackle such issues collaboratively, in the spirit of open source software.
|
OPCFW_CODE
|
#include <TM1637Display.h>

const int MIC_AO = 0;         // Microphone analog output
const int MIC_DO = 11;        // Microphone digital output
const int LED = 10;           // High-power LED signal
const int DISPLAY_CLK = 9;    // Display clock
const int DISPLAY_DIO = 8;    // Display digital input/output
const int BUZZER = 3;         // Buzzer pin
const int DISPLAY_INT = 0x00; // Display intensity (from 0 to 7)

int adc;
int dB, PdB; // the current and previous values read from the microphone
int counter = 0;
int noise_counter = 0;
int cooldown_timer = 0;

TM1637Display display(DISPLAY_CLK, DISPLAY_DIO); // set up the 4-digit display

void beep(unsigned char delayms) {
  tone(BUZZER, 523, delayms); // play a C5 note for delayms milliseconds
}

void setup() {
  display.setBrightness(DISPLAY_INT); // set the display to minimum brightness
  pinMode(LED, OUTPUT);
  pinMode(MIC_DO, INPUT);
}

void printDataScreen() {
  if (counter <= 0) {
    if (digitalRead(MIC_DO) == HIGH) {
      if (noise_counter < 255) {
        noise_counter += 5;
      }
      if (noise_counter == 255) {
        beep(100);
        noise_counter = 175;
      }
    } else {
      if (noise_counter > 0 && cooldown_timer <= 0) {
        noise_counter -= 2;
        cooldown_timer = 20;
      } else {
        if (noise_counter < 0) {
          noise_counter = 0;
        }
        cooldown_timer--;
      }
    }
    //int roundedVal = (noise_counter / 10.0) * 100;
    //display.showNumberDecEx(roundedVal, 0b00100000); // display the variable value
    display.showNumberDecEx(noise_counter);
    counter = 100;
  } else {
    counter--;
  }
}

void loop() {
  PdB = dB;                      // store the previous value of dB
  adc = analogRead(MIC_AO);      // read the ADC value from the amplifier
  dB = (adc + 83.2073) / 11.003; // convert the ADC value to dB using regression values
  printDataScreen();
  analogWrite(LED, noise_counter); // drive the LED brightness from the noise level
}
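As a sanity check, the regression in loop() can be evaluated directly. A quick sketch in Python; the coefficients are copied from the sketch above, and the 500-count reading is an arbitrary example:

```python
# Quick check of the ADC-to-dB regression used in the Arduino loop():
# dB = (adc + 83.2073) / 11.003. An ADC reading of 500 maps to ~53 dB.
def adc_to_db(adc):
    return (adc + 83.2073) / 11.003

print(round(adc_to_db(500), 1))  # 53.0
```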
|
STACK_EDU
|
Where can I find a list of language + region codes?
I have googled (well, DuckDuckGo'ed, actually) till I'm blue in the face, but cannot find a list of language codes of the type en-GB or fr-CA anywhere.
There are excellent resources about the components, in particular the W3C I18n page, but I was hoping for a simple alphabetical listing, fairly canonical if possible (something like this one). Cannot find.
Can anyone point me in the right direction? Many thanks!
The link you provided is the official registry. What is missing from that document?
@jorg-w-mittag - it might be naive, but I was hoping for a fairly full listing of the common combinations of that type, not simply the isolated sub-tags.
The most accurate list that I found is this
@byoigres - Helpful list -- I've saved it in the WaybackMachine for safe keeping. ;)
This keeps happening. Someday I want to bump into you in person.
Ditto, @Caleb, ditto. ;)
There are several language code systems and several region code systems, as well as their combinations. As you refer to a W3C page, I presume that you are referring to the system defined in BCP 47. That system is orthogonal in the sense that codes like en-GB and fr-CA simply combine a language code and a region code. This means a very large number of possible combinations, most of which make little sense, like ab-AX, which means Abkhaz as spoken in Åland (I don’t think anyone, still less any community, speaks Abkhaz there, though it is theoretically possible of course).
So any list of language-region combinations would be just a pragmatic list of combinations that are important in some sense, or supported by some software in some special sense.
The specifications that you have found define the general principles and also the authoritative sources on different “subtags” (like primary language code and region code). For the most important parts, the official registration authority maintains the three- and two-letter ISO 639 codes for languages, and the ISO site contains the two-letter ISO 3166 codes for regions. The lists are quite readable, and I see no reason to consider using other than these primary resources, especially regarding possible changes.
Thanks for the full explanation: you clearly understood my question, and explained why I (probably) won't find the answer I was hoping for! That itself is good to know.
It would be really great to have a canonical list of combinations that make sense for those of us who don't even know what Abkhaz and Åland are. Too bad this doesn't exist.
They may be readable, but they are also quite insufficient for many language tagging needs. Personally I'm hoping that the language listing in development at glottolog.org becomes a new standard…
Looks like the first link is broken (I can't connect to http://www.inter-locale.com)
FWIW, the IANA registry that the OP mentioned is part of BCP 47 (section 3).
Also, WRT "I see no reason to consider using other …," you may need to express writing scripts (ISO 15924) for traditional vs simplified Chinese (cmn-Hant vs cmn-Hans) or Latin vs Cyrillic in Serbian (sr-Latn vs sr-Cyrl), or you may want to refer to Spanish common to all of Latin America (es-419) which relies on UN M.49 codes.
Note: The three letter ISO639-2 codes (as maintained by loc.gov, second link above) are not used in BCP 47. BCP 47 uses ISO639-3 codes for its 3 letter codes; this registry is maintained at https://iso639-3.sil.org/.
There are 2 components in play here:
The language tag which is generally defined by ISO 639-1 alpha-2
The region tag which is generally defined by ISO 3166-1 alpha-2
You can mix and match languages and regions in whichever combination makes sense to you so there is no list of all possibilities.
BTW, you're effectively using a BCP47 tag, which defines the standards for each locale segment.
"...there is no list of all possibilities." More or less what I've worked out, and this is the "executive" summary of Jukka's fuller explanation, I suppose. Still seems to me a list of the common combinations might be a helpful thing to have available, but OTOH, it seems like I might be a bit isolated in feeling that way! :)
Unicode maintains such a list:
http://unicode.org/repos/cldr-tmp/trunk/diff/supplemental/index.html
Even better, you can have it in XML format (ideal for parsing the list), together with the usual writing systems used by each language:
http://unicode.org/repos/cldr/trunk/common/supplemental/supplementalData.xml
(look in /LanguageData)
@s-f These links are available in the Wayback Machine (fortunately).
The Likely Subtags page may prove useful too. It provides the most likely language and script for a given region, and vice versa.
One solution would be to parse this list, it would give you all of the keys needed to create the list you are looking for.
http://www.iana.org/assignments/language-subtag-registry/language-subtag-registry
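The registry is a plain-text file of records separated by %% lines (plus a File-Date header and continuation lines, which this sketch ignores). A minimal Python sketch of the parsing idea, with a few abbreviated sample records standing in for the full download:

```python
# Minimal sketch: parse IANA language-subtag-registry style records and
# build language-region tags. The sample abbreviates a few real entries;
# in practice you would fetch the full registry file.
SAMPLE = """\
Type: language
Subtag: en
Description: English
%%
Type: language
Subtag: fr
Description: French
%%
Type: region
Subtag: GB
Description: United Kingdom
%%
Type: region
Subtag: CA
Description: Canada
"""

def parse_registry(text):
    records = []
    for chunk in text.split("%%"):
        record = {}
        for line in chunk.strip().splitlines():
            if ": " in line:
                key, _, value = line.partition(": ")
                record[key] = value
        if record:
            records.append(record)
    return records

records = parse_registry(SAMPLE)
languages = [r["Subtag"] for r in records if r["Type"] == "language"]
regions = [r["Subtag"] for r in records if r["Type"] == "region"]

# Every language-region combination is syntactically valid BCP 47,
# even though most make little practical sense.
tags = [f"{lang}-{reg}" for lang in languages for reg in regions]
print(tags)  # ['en-GB', 'en-CA', 'fr-GB', 'fr-CA']
```

This only yields the raw cross-product; as the answers above note, deciding which combinations are actually meaningful still requires something like the CLDR data.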
I think you can take it from here http://www.unicode.org/cldr/charts/latest/supplemental/territory_language_information.html
List of primary language subtags, with common region subtags for each language (based on population of language speakers in each region):
https://www.unicode.org/cldr/charts/latest/supplemental/language_territory_information.html
For example, for English:
en-US (320,000,000)
en-IN (250,000,000)
en-NG (110,000,000)
en-PK (100,000,000)
en-PH (68,000,000)
en-GB (64,000,000)
(Jukka K. Korpela and tigrish give good explanations for why any combination of language + region code is valid, but it might be helpful to have a list of codes most likely to be in actual use. s-f's link has such useful information sorted by region, so it might also be helpful to have this information sorted by language.)
Thanks for posting -- as this list is arranged in {language} {country} order, IMO it makes the most sense, and it is very easy and intuitive to convert to BCP 47.
This can be found at Unicode's Common Locale Data Repository. Specifically, a JSON file of this information is available in their cldr-json repo
This should be the link:
https://github.com/unicode-org/cldr-json/blob/main/cldr-json/cldr-localenames-full/main/en/languages.json
or
https://github.com/unicode-org/cldr-json/blob/main/cldr-json/cldr-localenames-modern/main/en/languages.json
We have a working list that we work off of for language code/language name referencing for Localizejs. Hope that helps
List of Language Codes in YAML or JSON?
|
STACK_EXCHANGE
|
import { RegexService } from '../classes/regex.service';
describe('RegexService', () => {
let service: RegexService;
beforeEach(() => { service = new RegexService(); });
it('Regex query should return null if search query is a normal string', () => {
expect(service.regexQuery("hello")).toBe(null);
});
it('Regex query should return null if search query is a normal string, even with slash at front', () => {
expect(service.regexQuery("/hello")).toBe(null);
});
it('Regex query should return null if search query is a normal string, even with slash at end', () => {
expect(service.regexQuery("hello/")).toBe(null);
});
it('Regex query should return regex expression if search query is enclosed /.../, with no flags', () => {
expect(service.regexQuery("/exp/")).toEqual(/exp/);
});
it('Regex query should return regex expression and flags if search query is enclosed /.../flags', () => {
expect(service.regexQuery("/this/gi")).toEqual(/this/gi);
});
});
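The RegexService source isn't shown here, but the behavior these specs exercise can be sketched as follows (in Python, purely as an illustration; the function name and flag handling are assumptions, not the actual Angular service):

```python
import re

# Illustrative sketch (not the actual service): parse a search query of
# the form "/pattern/flags" into a compiled regex, returning None for
# plain strings or strings with only a leading or trailing slash.
FLAG_MAP = {"i": re.IGNORECASE, "m": re.MULTILINE, "s": re.DOTALL}

def regex_query(query):
    match = re.fullmatch(r"/(.+)/([gims]*)", query)
    if match is None:
        return None
    pattern, flags = match.groups()
    compiled_flags = 0
    for flag in flags:
        # JavaScript's 'g' flag has no re-module equivalent; ignore it here.
        compiled_flags |= FLAG_MAP.get(flag, 0)
    return re.compile(pattern, compiled_flags)

print(regex_query("hello"))             # None
print(regex_query("/hello"))            # None
print(regex_query("/this/gi").pattern)  # this
```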
|
STACK_EDU
|
I gave a number of talks this spring on jQuery and especially on some of the recent additions made in jQuery 1.4. Below are all the slides and demos that I’ve given.
The conferences / meetups that I spoke at (or will speak at, in the case of MIX), and the talks that I gave, are as follows:
- Webstock (Wellington, NZ) (Introduction to jQuery Workshop, Things You Might Not Know About jQuery)
- Future of Web Apps (Miami, FL) (Introduction to jQuery Workshop, Improve Your Web App with jQuery)
- jQuery Boston Meetup (Boston, MA) (Things You Might Not Know About jQuery)
- MIX (Las Vegas, NV) (Improve Your Web App with jQuery)
Introduction to jQuery Workshop
This workshop starts with an introduction to the fundamentals of jQuery (1 hour) and continues on with two pieces of hands-on coding (Todo list, 30 min, Social Networking Site, 1.5 hours).
The first hands-on piece was an Ajax-y todo list; the second was converting a functional social networking site into a one-page application (making significant use of jQuery UI).
Source Code Reset Demo Edit Demo
Source Code Reset Demo Edit Demo
Things You Might Not Know About jQuery
A variety of things that people don’t know about in jQuery – including new things added in jQuery 1.4 (and newer), data bindings, custom events, and special events.
For the first jQuery Boston Meetup I built a game using the avatars of everyone in attendance, a sort of space-shooter-style game in which you need to kill the advancing hordes of users. I used this game as a way of demonstrating how to construct an application that makes use of custom events, data binding, and an event-centric architecture.
Improve Your Web App with jQuery
A different restructuring of the previous talk that emphasizes a more holistic approach to improving your web applications with jQuery.
I’ve been messing around with a new piece of presentation software that I wrote for these talks. It’s still terribly crude and buggy (pretty much just got it working enough in order to run my talks in Firefox 3.6 and Chrome) – you’ve been warned. I hope to refine it at some point and release it for general consumption.
Guido (March 4, 2010 at 4:59 pm)
You’re going to New Zealand but not Australia? That’s a shame. Anyway, enjoy the trip
JohnJ (March 4, 2010 at 5:25 pm)
Are you in fact going to be in Miami? This conference is apparently in Europe.
“Future of Web Apps (Miami, FL) (Introduction to jQuery Workshop, Improve Your Web App with jQuery)”
John Resig (March 4, 2010 at 7:55 pm)
@Guido: I was in Sydney as well – I went to the Sydney jQuery meetup:
@JohnJ: I was already in Miami – the web site is already updated to reflect the next event (apparently in Europe).
Addy Osmani (March 4, 2010 at 9:50 pm)
Thanks for posting these, John! Fantastic work.
I just have one question about your Social Networking Demo – I can’t help but think looking at some of the screens that they could have been accessed quicker using some jQuery/Modal box type dialogs (for example, the send message/delete friend actions) rather than the page reloading each time. Do you have any plans on enhancing the demo some time in the future? :)
Tarik Guney (March 5, 2010 at 2:23 am)
What about Irvine CA :(
R64 (March 5, 2010 at 4:24 am)
Any chance videos of your talk will be posted online, sometime somewhere? This sounds really interesting but I can’t cough up the money to travel to see you :(
John Resig (March 5, 2010 at 10:36 am)
@Addy: I do use a modal dialog – you need to view the completed demo (click the url with ?action=done in it to view the completed one – it’s the one that I link to off of the screenshot).
@Tarik: That’s a very specific location you have there :) Any conferences in Irvine?
@R64: I know that there will be video of some of these talks coming (in particular the jQuery Meetup talk). I’ll post links to them on Twitter when they are released.
Phil Derksen (March 5, 2010 at 4:17 pm)
Thanks for posting these John. I was just looking over the “Improved Creation” slide in “Things you might not know…”, and just tried it out.
Small correction: I discovered that you need to specify “class” instead of “addClass”.
Compare http://jsbin.com/azafo3/2 and http://jsbin.com/azafo3/3
Thanks for the tips!
Dan (March 17, 2010 at 2:25 pm)
Really diggin’ the presentation tool; I look forward to its release!
Nick Tulett (March 22, 2010 at 11:12 am)
Don’t know if you’ve tried to view these presentations on a netbook (1024×600) in Chrome 4.1 on Win 7 but you lose the last line of every “page” and if you try to compensate with Zoom > Smaller, you end up in an endless loop of repainting and history injection.
Ajay Patel (May 25, 2010 at 6:47 am)
|
OPCFW_CODE
|
Transaction Simulation can unlock profit potential for traders and can help protocols intercept malicious transactions before they get confirmed on-chain. Today, transaction simulation is used by sophisticated, well-financed trading operations to help them see into the future.
Like gas prices, slippage, confirmation order, and more, transaction simulation is rooted in the mempool. Simulation reveals all of the internal calls that execute inside a transaction. While these internal calls make up the majority of Ethereum transaction activity, many users do not understand how they work and how they are settled.
Read this latest installment of our Mastering the Mempool series to learn more about how the outcome of an internal transaction is determined and why understanding these concepts can help you transact with confidence.
Internal transactions 101
As we covered in our previous Internal Transaction Monitoring Is The Missing Puzzle Piece To Understanding Your Smart Contract Activity post:
Internal transactions refer to interactions from one smart contract (sometimes called an 'internal address') to another. This type of transaction is widely used in the Ethereum ecosystem, especially as smart contracts become building blocks for more sophisticated interactions. But internal transactions make it challenging to understand when your address is party to a transaction.
A single transaction on a smart contract can result in dozens or even hundreds of internal transactions that interact with numerous other smart contracts, or simply distribute value to a host of wallets via an airdrop.
While internal transactions have real consequences for account balances, surprisingly, the internal transactions themselves are not stored on-chain. To see internal transactions, you have to run the transaction and trace the calls that it makes. While some contracts do log events to the chain that record internal activity, many do not, because doing so requires additional gas.
So tracking the outcome of internal transactions often leaves users in the dark about when their address was involved.
How internal transactions are settled
Before looking into the details of transaction simulation, let's first recap how Ethereum transactions operate:
- When new transactions are included in a block, they must be run on the Ethereum Virtual Machine (EVM).
- This is done to determine the impact of each transaction on the global state trie.
- For every Ethereum address, the global state trie includes ETH balances, token balances, and any other information a contract chooses to store.
When a transaction calls a contract, the transaction execution on the EVM is determined by:
- The smart contract code: which does not change after the contract is deployed on-chain.
- The transaction parameters: which are unique to each transaction.
- The state trie: which is determined by all previous transactions since genesis and updated globally after each new block.
The global state trie update occurs when the entire block is accepted — or 'hashed' by a miner. As the new block propagates through the network, each node independently executes the transactions in the block and updates its state trie appropriately. This process ensures that all nodes in the network at the same block height maintain an identical global state.
When miners create block templates, they specify the transaction order. The order of transactions in a block is typically determined by the gas price of each transaction. The higher the gas price, the earlier in the block the transaction should appear. The updated global state created by each new block, as well as the success of each transaction, is therefore determined by the previous block's global state and the exact ordering of each transaction within the new block. For more on gas prices and transaction complexity, read our ETH Gas 101 Guide.
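The gas-price ordering described above can be sketched as a simple sort. This is illustrative only: real block building also has to respect per-sender nonce order and the block gas limit, and the transactions here are made up.

```python
# Illustrative sketch: order pending mempool transactions into a block
# template by gas price, highest first. Real miners also have to respect
# per-sender nonce ordering and the block gas limit.
pending = [
    {"hash": "0xaa", "gas_price_gwei": 40},
    {"hash": "0xbb", "gas_price_gwei": 120},
    {"hash": "0xcc", "gas_price_gwei": 75},
]

block_template = sorted(pending, key=lambda tx: tx["gas_price_gwei"], reverse=True)
print([tx["hash"] for tx in block_template])  # ['0xbb', '0xcc', '0xaa']
```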
With this high-level understanding of the inner workings of Ethereum internal transactions, we can now tackle the subject of simulating internal transactions.
Simulating an Ethereum transaction
To simulate a single transaction, you must run the transaction on the EVM, executing all smart contract method calls with the specific transaction input parameters and a known state trie. The state can be represented by the current chain head – that is, the most recently agreed-upon state across the network.
Running a smart contract this way provides an accurate accounting of how the internal transaction will settle under current conditions. Since the transaction is run against a specific chain state, and the state may not be the same as when the transaction is actually confirmed, simulation is also referred to as 'speculative execution.'
To make the transaction simulation useful, the results of the speculative execution need to be traced so that it can be converted into a series of smart contract method calls and their associated parameters. Tracing involves traversing all of the executed op codes, looking for calls to other contract methods, and inspecting the calls to extract the parameters.
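As a hedged sketch of what this looks like in practice: Geth exposes speculative execution plus call tracing through the `debug_traceCall` JSON-RPC method with its built-in `callTracer`. The node URL and transaction fields below are placeholders, and a node with the debug API enabled is assumed:

```python
import json
import urllib.request

# Sketch: ask a Geth node to speculatively execute a call and trace the
# internal calls it makes, via debug_traceCall with the built-in callTracer.
# NODE_URL and the transaction fields are placeholders.
NODE_URL = "http://localhost:8545"

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "debug_traceCall",
    "params": [
        {
            "from": "0x0000000000000000000000000000000000000001",  # placeholder sender
            "to": "0x0000000000000000000000000000000000000002",    # placeholder contract
            "data": "0x",                                          # ABI-encoded method call
        },
        "latest",                  # simulate against the current chain head
        {"tracer": "callTracer"},  # return the internal-call tree, not raw opcodes
    ],
}

req = urllib.request.Request(
    NODE_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would return a nested call object under
# "result": {"from", "to", "value", "calls": [...], ...}
```

The nested `calls` array in the result is exactly the tree of internal transactions the article describes.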
With the current node client implementations, this process can take substantially longer than the actual EVM execution. For traders, this time can often be the difference between acting on a trade and missing an opportunity.
The challenges of ETH transaction simulation
Simulating new transactions entering the mempool is not without challenges, especially if the simulation is done using a single node client. These challenges include:
- Ensuring your node remains properly synced at the time of simulation.
- Capturing all pending transactions propagating through the mempool. Individual nodes frequently miss pending transactions, particularly during periods of network congestion.
- Detecting new pending transactions as rapidly as possible.
- Knowing which transactions are likely to be included in the next block – and thus are candidates to be simulated against the current block state.
- Performing the simulation quickly to maximize the time the simulation results are actionable.
- Interpreting the simulation to see how address balances are shifting.
- And more.
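To illustrate the interpretation step, here is a small sketch that walks a call tree and tallies ETH value flows per address. The trace shape mimics Geth's callTracer output, but the addresses and values are made up for the example:

```python
from collections import defaultdict

def tally_value_flows(call, balances=None):
    """Walk a callTracer-style call tree and tally ETH (wei) flows per address."""
    if balances is None:
        balances = defaultdict(int)
    value = int(call.get("value", "0x0"), 16)
    if value:
        balances[call["from"]] -= value
        balances[call["to"]] += value
    for sub in call.get("calls", []):
        tally_value_flows(sub, balances)
    return balances

# Made-up trace: a router contract forwards 1 ETH (0xde0b6b3a7640000 wei) to a pool
trace = {
    "from": "0xalice", "to": "0xrouter", "value": "0x0",
    "calls": [
        {"from": "0xrouter", "to": "0xpool", "value": "0xde0b6b3a7640000", "calls": []},
    ],
}
flows = tally_value_flows(trace)
# flows["0xpool"] gains 10**18 wei; flows["0xrouter"] loses the same amount
```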
While transaction simulation is a powerful technique, simulating quickly and at scale is a challenge for even the most well-resourced teams. Keep this in mind as you consider incorporating real-time transaction simulation into your protocol operations and/or trading strategy.
Transparency in transaction simulation matters
Ethereum is a public blockchain network where each participant has visibility into what has happened and what is about to happen. Except in the case of internal transactions. Sophisticated teams leverage the techniques detailed above to see into the future, while others are left in the dark. We at Blocknative are committed to leveling the playing field in all matters relating to the mempool – including these.
We have launched our Simulation Platform to make transaction simulation accessible to every Ethereum ecosystem participant. Our Ethereum Simulation Platform computes and summarizes the likely results of every marketable pending transaction against the current state of the chain – in real-time, at scale, and with low latency. You can go hands-on with Simulation Platform in Mempool Explorer today.
Connect with us on Twitter @Blocknative or join our Discord Community to be the first to know when we publish new research and announce new functionality.
Blocknative's proven & powerful enterprise-grade infrastructure makes it easy for builders and traders to work with mempool data. Visit ethernow.xyz
With container orchestration, users can deploy, manage, scale, and network containers automatically. This is a significant time-saver for companies and hosts depending on the efficient deployment and management of Linux containers.
Container orchestration can be utilized wherever and whenever teams need to employ containers. One benefit of container orchestration is that it allows for the deployment of a single application throughout multiple environments, without it having to be reworked.
Furthermore, container microservices make orchestrating such key aspects as networking, storage, and security simpler.
Containers offer any apps based on microservices a fantastic deployment unit and a self-contained environment for execution. This enables teams to run several independent elements of an app as microservices on one piece of hardware, while enjoying better control over the individual components and their lifecycles.
Managing containers’ lifecycles with orchestration helps DevOps teams to integrate it with CI/CD workflows. That’s why containerized microservices are fundamental for cloud-native applications, along with APIs (Application Programming Interfaces).
Why teams work with container orchestration
Teams can take advantage of container orchestration for the automation and management of:
- Allocating resources
- Scheduling and configuring
- Finding available containers
- Provisioning & deployment
- Routing traffic and balancing loads
- Scaling/taking out containers according to variable workloads
- Tracking health of containers
- Maintaining security between interactions
- Configuration of applications based on the respective containers chosen to run them
As you can see, container orchestration has the power to streamline processes and save considerable time.
The right tools for container orchestration
Container orchestration tools offer a framework with which to manage any containers as well as microservices design at scale. Various container orchestration tools are available for management of container lifecycles, such as Docker Swarm, Kubernetes, and Apache Mesos.
In a comparison of Apache Mesos vs Docker Swarm vs Kubernetes, the last is by far the most popular.
Kubernetes was originally created and built by Google engineers as an open source project. Google donated Kubernetes to the Cloud Native Computing Foundation back in 2015. This tool enables teams to run application services spanning several containers, schedule containers across a cluster, scale those containers, and manage their health over time.
This tool does away with a lot of manual tasks required to deploy and scale containerized applications. You also have the flexibility to cluster host groups, virtual or physical machines, and run Linux containers. Helpfully, Kubernetes presents users with a platform for efficient, simple cluster management.
Furthermore, this tool helps teams to implement and depend on container-based infrastructure within production spaces. These clusters may be placed across multiple clouds, whether private, public, or hybrid. That’s why Kubernetes is such a terrific platform to host cloud-native apps which demand fast scaling.
Kubernetes helps manage workload portability and balancing loads through movement of applications with no need to redesign them at all.
The key elements of Kubernetes
Kubernetes consists of:
- Cluster: a number of nodes, including one or more master nodes and multiple worker nodes.
- Master node: the machine responsible for controlling Kubernetes nodes; all task assignments come from here.
- Kubelet: a service that runs on each node and reads container manifests to ensure the relevant containers start and keep running.
- Pod: a group of one or more containers deployed to an individual node. These containers share an IPC, IP address, and host name (along with additional resources).
How container orchestration functions
Any team that leverages container orchestration tools (including Kubernetes) will describe an application's configuration in JSON or YAML files. The configuration file tells the container management tool where container images are located. It also specifies how networking is established and where logs should be placed.
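As an illustrative sketch (the names, image, and port are placeholders, not from the original article), a minimal Kubernetes Deployment configuration file might look like this:

```yaml
# Minimal Kubernetes Deployment manifest (illustrative; names are placeholders)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3                  # the orchestrator keeps three copies running
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: registry.example.com/example-app:1.0   # where to pull the image
          ports:
            - containerPort: 8080
```

The orchestrator reads this declared state and continuously reconciles the cluster toward it, restarting or rescheduling containers as needed.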
When implementing a new container, the container management tool automatically schedules the deployment to a designated cluster. It also locates the right host, taking any specific requirements or limitations into account. After this, the orchestration tool manages the container's lifecycle according to the specifications laid out in the configuration file.
Teams can utilize Kubernetes patterns for management of container-based applications or services, across configuration, lifecycle, and scaling. A Kubernetes developer depends on these repetitive patterns to build a complete system.
Container orchestration may be leveraged in a setting which requires utilization of containers, such as for on-site servers or private/public cloud processes.
fix/calendar-heatmap
This fixes up the heatmap calendar a little.
Adds the tooltip back
Adds the react-tooltip package for the tooltip
Reworks the function that creates the data for the map
[x] I have read freeCodeCamp's contribution guidelines.
[x] My pull request has a descriptive title (not a vague title like Update index.md)
[x] My pull request targets the master branch of freeCodeCamp.
[x] None of my changes are plagiarized from another source without proper attribution.
[x] All the files I changed are in the same world language (for example: only English changes, or only Chinese changes, etc.)
[x] My changes do not use shortened URLs or affiliate links.
I'm hoping it can close these issues:
https://github.com/freeCodeCamp/freeCodeCamp/issues/35916
https://github.com/freeCodeCamp/freeCodeCamp/issues/17299
https://github.com/freeCodeCamp/freeCodeCamp/issues/17822
https://github.com/freeCodeCamp/freeCodeCamp/issues/22031
I guess you'll need to remove package-lock.json ?
I'm not sure - I thought it should be kept on when adding a package - did you get a chance to test it out @thecodingaviator?
Yes, on doing npm i, I get this:
C:\Users\hp\Desktop\Web\freeCodeCamp>npm i
><EMAIL_ADDRESS>postinstall C:\Users\hp\Desktop\Web\freeCodeCamp
> npm run bootstrap
><EMAIL_ADDRESS>bootstrap C:\Users\hp\Desktop\Web\freeCodeCamp
> lerna bootstrap --ci
lerna notice cli v3.13.1
lerna info versioning independent
lerna info ci enabled
lerna info Bootstrapping 9 packages
lerna info Installing external dependencies
lerna ERR! npm ci exited<PHONE_NUMBER> in '@freecodecamp/client'
lerna ERR! npm ci stderr:
npm WARN prepare removing existing node_modules/ before installation
WARN tarball tarball data for<EMAIL_ADDRESS>(sha512-UmATFaZpEQDO96KFjB5FRLcT6hFcwaxOmAJZnjrSiFN/msTqylq9G+z5Z8TYzN/dbamDTiWf92m6MnXXJkAivQ==) seems to be corrupted. Trying one more time.
npm ERR! path C:\Users\hp\Desktop\Web\freeCodeCamp\client\node_modules\react-ga\dist\react-ga.js
npm ERR! code EPERM
npm ERR! errno -4048
npm ERR! syscall unlink
npm ERR! Error: EPERM: operation not permitted, unlink 'C:\Users\hp\Desktop\Web\freeCodeCamp\client\node_modules\react-ga\dist\react-ga.js'
npm ERR! { [Error: EPERM: operation not permitted, unlink 'C:\Users\hp\Desktop\Web\freeCodeCamp\client\node_modules\react-ga\dist\react-ga.js']
npm ERR! cause:
npm ERR! { Error: EPERM: operation not permitted, unlink 'C:\Users\hp\Desktop\Web\freeCodeCamp\client\node_modules\react-ga\dist\react-ga.js'
npm ERR! type: 'OperationalError',
npm ERR! '$error': '$error',
npm ERR! cause:
npm ERR! { errno: -4048,
npm ERR! code: 'EPERM',
npm ERR! syscall: 'unlink',
npm ERR! path:
npm ERR! 'C:\\Users\\hp\\Desktop\\Web\\freeCodeCamp\\client\\node_modules\\react-ga\\dist\\react-ga.js' },
npm ERR! isOperational: true,
npm ERR! errno: -4048,
npm ERR! code: 'EPERM',
npm ERR! syscall: 'unlink',
npm ERR! path:
npm ERR! 'C:\\Users\\hp\\Desktop\\Web\\freeCodeCamp\\client\\node_modules\\react-ga\\dist\\react-ga.js' },
npm ERR! isOperational: true,
npm ERR! stack:
npm ERR! 'Error: EPERM: operation not permitted, unlink \'C:\\Users\\hp\\Desktop\\Web\\freeCodeCamp\\client\\node_modules\\react-ga\\dist\\react-ga.js\'',
npm ERR! type: 'OperationalError',
npm ERR! '$error': '$error',
npm ERR! errno: -4048,
npm ERR! code: 'EPERM',
npm ERR! syscall: 'unlink',
npm ERR! path:
npm ERR! 'C:\\Users\\hp\\Desktop\\Web\\freeCodeCamp\\client\\node_modules\\react-ga\\dist\\react-ga.js' }
npm ERR!
npm ERR! The operation was rejected by your operating system.
npm ERR! It's possible that the file was already in use (by a text editor or antivirus),
npm ERR! or that you lack permissions to access it.
npm ERR!
npm ERR! If you believe this might be a permissions issue, please double-check the
npm ERR! permissions of the file and its containing directories, or try running
npm ERR! the command again as root/Administrator (though this is not recommended).
npm ERR! C:\Users\hp\AppData\Roaming\npm-cache\_logs\2019-05-14T08_06_08_395Z-debug.log
lerna ERR! npm ci exited<PHONE_NUMBER> in '@freecodecamp/client'
npm ERR! code ELIFECYCLE
npm ERR! errno<PHONE_NUMBER>
npm ERR<EMAIL_ADDRESS>bootstrap: `lerna bootstrap --ci`
npm ERR! Exit status<PHONE_NUMBER>
npm ERR!
npm ERR! Failed at the<EMAIL_ADDRESS>bootstrap script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! C:\Users\hp\AppData\Roaming\npm-cache\_logs\2019-05-14T08_06_09_758Z-debug.log
npm ERR! code ELIFECYCLE
npm ERR! errno<PHONE_NUMBER>
npm ERR<EMAIL_ADDRESS>postinstall: `npm run bootstrap`
npm ERR! Exit status<PHONE_NUMBER>
npm ERR!
npm ERR! Failed at the<EMAIL_ADDRESS>postinstall script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! C:\Users\hp\AppData\Roaming\npm-cache\_logs\2019-05-14T08_06_09_975Z-debug.log
so you're thinking that's from the package-lock file?
@moT01 I'm taking a look at this now. Is there an easy way to recreate those issues locally?
so you're thinking that's from the package-lock file?
I'm not sure why else would it be happening, I'll try deleting the file and retrying once
Honestly, no @ojeytonwilliams - I couldn't recreate most of the issues anywhere - so, I'm not sure what to do with it. I added the tooltip back in, which must have got removed at some point - and reworked the logic that creates the map to hopefully make it a little more robust.
@moT01 Understood. I think the issues are all with a very old version of this, anyway, so some might have been fixed in the move to react-calendar-heatmap.
To be a little bit more clear - when I added this tooltip in - the calendarValues variable only included data for dates which had activity from the user - and the tooltip would show like "null" for the items and date with no activity - so I added an empty object and filled it with all the dates starting from six months ago to give the tooltip some info for those dates. And the logic got reworked mostly because I needed that.
Yes, I like those suggestions @ojeytonwilliams - I obviously started at the beginning of six months ago because that's how it was, maybe there was a reason for that. But it seems to make more sense, to me anyway, doing it how you suggested - so I made those changes
@moT01 Perhaps so - this looks nice though. Thanks for making those changes, it LGTM now.
@ahmadabdolsaheb Does everything look okay to you?
Yes, I'm fine with these changes @ValeraS, but I had a thought... maybe we should change all the "item/items" to "point/points" - since that is how it's described above the heatmap (see picture) - also, I noticed that the "current streak" defaults to 1 - since it's right there in this file, maybe we should change that to default to zero in this PR.
Let's rename to point/points and change defaults for both streaks to 0.
Shuriken USB (USB Gecko SE clone)
Basically it's a USB Gecko SE, but built using a CPLD instead of an FPGA. I have tested it on a Wii using GeckoOS and GeckodNet / WiiRD.
eBay kicked me off, something about enabling the copying of games. Guys, don't use this to copy games; it's supposed to be a debug / hacking tool. I am selling these for £16 + p&p (£4 untracked world / EU). Scroll to the bottom of the page for the PayPal button. I am also running a support thread at GBAtemp.
Updated CPLD code
To be compliant with the GPL licence, or at least the spirit of it, I should release the modified VHDL source code, as this is a derived work (the original work was done by Ian Callaghan).
I have sold about 90 of these devices now. At some point they are going to start going wrong, and as they are all hand soldered by me, I can't guarantee how long they are going to last. Here is the schematic for Shuriken USB; hopefully it will come in handy if your device finally breaks and you want to fault find / repair it yourself.
You will need to go to the FTDI website and download the D2XX direct drivers for the FT245 USB to serial chip here's the link
Note: I am getting reports that the latest driver doesn't work with GeckodNet or WiiRD. If you find this to be the case, please use 2.10.00 (I know that version works, as it's what I have installed).
I will host the older driver here (just in case FTDI remove it).
Version 1 (this unit is no longer sold)
I have completely dropped support for this unit, I have sent out replacements for those who bought a V1 so everyone should now have a V2 unit.
Version 2 issues (Currently selling these units)
I am not tracking any issues with this unit; it should be a 1:1 clone of the USB Gecko SE device, and all software that works with the USB Gecko SE should work with Shuriken USB V2.
Tools / software list
I have collected the following links / programs from around the web (there may be more):
wiiload (version 0.5.1 tested with homebrew channel version 3.9)
GeckodNet (version 0.66.8)
GeckoOS (click the download link right hand side of GeckoOS web page)
SwissServer (win32 exe and source, rebuilt to work with Shuriken USB)
GeckoOS_1.06d (game cube version untested by me)
swiss (as a COM port) for debugging (untested by me)
libOGC debug stub (untested by me)
The standard case is black; if anyone fancies a different case colour or wants a modified funky case, the 3D case files are below (the design work was done in OpenSCAD).
Sadly, as of 26th October 2017 Xilinx have discontinued the XC9500 series CPLD, which is used inside the Shuriken USB device, with no direct replacement, so I have decided to stop selling the devices. At the time of writing (8 December 2017) I have enough parts to make only eight more devices, after which the Shuriken USB will be no more.
Digital Research GSX
GSX Screen Shots
GSX is a display independent graphics library developed by Digital Research
for their CP/M-80 and CP/M-86 operating systems. It was also ported to
MS-DOS. GSX supports various sized displays, plotters, graphics printers
and mice. GSX uses vector based drawing, which permits images to scale
to different size or aspect ratio screens.
It abstracts the input and output devices into installable device drivers.
A vendor could create a unique video display card, simply provide a driver
for it, and all GSX applications would automatically work without program modification.
This was extremely important in the pre-IBM PC days, as every vendor's
hardware was different and incompatible. A software vendor might want to
create a graphical application for as many hardware platforms as possible,
but would then be faced with the task of implementing support for hundreds
of systems and video options, only to fall behind whenever a new system appeared.
However, there were only a few major commercial applications developed
for GSX: Digital Research's DR-Draw, DR-Graph, and DR-Logo. (If anyone knows
of any more, or has a copy of DR-Logo they would like to share, please
let me know.)
The GSX system was used as the foundation for Digital Research's GEM.
GSX installs itself as a resident program in memory. The application
and GSX binary files are independent. In theory an application should
not need to be recompiled to use different GSX versions or drivers.
If you have read about the history of Microsoft Windows, you may have
heard that it started off as a project called "Interface Manager" and it
was described as an "Installable device driver". That is essentially what
GSX is, and in all probability is what Microsoft was trying to mirror.
GSX is compatible with PC-DOS 1.1 and MS-DOS 1.x.
Digital Research DR-Graph is a chart creation program that can create
high quality business graphics on plotters or graphics printers.
Similarly, DR-Draw is a shape based drawing program that can create
high quality output on plotters or graphics printers.
In a way, running these on IBM CGA doesn't really do these programs
justice. IBM was stuck with CGA while other machines such as the NEC APC
or TI Professional Computer had higher resolution graphics.
DR-Draw and DR-Graph were available for CP/M-80 and CP/M-86 on numerous
machines. They were also ported to MS-DOS including non-IBM hardware compatibles.
In fact the above DR-Graph version is actually for the TI Professional
Computer (a non-IBM hardware compatible MS-DOS machine) but simply switched
to using IBM GSX drivers found with DR-Draw.
GSX does not define user interface controls. It is purely a graphics library.
In fact, DR Graph uses plain old text mode for its menus and data entry.
DR Graph switches to graphics mode when plotting a chart. It only uses
the mouse for certain selection options.
An interesting feature of DR Graph is that you may output your graph
to two different "displays". For example, on an IBM PC you may choose between
monochrome CGA and 4-color CGA.
Each "display" uses a different driver and need not be the same video
card. In theory one might build their graph using normal CGA but output
it on a secondary monitor attached to a Hercules Monographics card or other
third party high resolution video device.
DR-Draw is a shape based graphics program rather than bit-mapped. This
enables your drawings to be rendered at a higher resolution on a plotter,
printer, or different display device.
DR-Draw supports drawing lines, filled polygons, circles, arcs, bars,
and text. Objects may be assigned a color, but the appearance depends on
the output device.
This version of DR-Draw is missing the font disk and additional drivers.
Unlike DR-Graph, DR-Draw runs entirely inside a GUI.
It presents a menu at the top of the screen. Items are not selected with a
cursor; instead you move the mouse left and right and it highlights the
current option. Messages, input, or sometimes a second menu are shown on a line below the menu.
While drawing or selecting objects, the mouse cursor appears as a "+".
GSX for DOS is compatible with the MS-DOS Microsoft Mouse driver. Depending
on the implementation, it can also use keyboard keys or other input devices
to move the cursor.
Here is an example of DR Draw running with a VGA driver.
John Elliott, who brought us VGA for Windows 1.0x, also backported some
GSX-86 1.3 drivers, including this VGA driver, from the published
GEM source code.
However, these drivers are buggy when used with DR Draw and DRGraph.
DR-Graph will not display the text menu screens, and DR-Draw will not draw
the menu fonts quite right.
GSX supports a number of video cards. Off hand, there are native drivers for the cards listed below.
Interestingly, GSX drivers are designed so the same driver binary may operate
under both CP/M-86 and MS-DOS.
IBM CGA Monochrome
IBM CGA Color
Plantronics PC+ Colorplus Adapter
Hercules Graphics Card
Artist 2 Graphics Card
NCR Decision Mate V
TI Professional Computer
And possibly others.
Here is an example of DR-Graph outputting a graph to a VGA display.
It doesn't seem to handle colors right though.
There are also some interesting screen shots of GSX applications running
in emulation and outputting to a simulated graphics terminal here:
So in conclusion, GSX is not really much of a GUI, and not widely used,
but I believe it was influential in its time.
Digital Electronics 1A: Additional Supervision Questions
Robert Mullins

1. Inside a flip-flop and the reason for setup and hold times

A schematic for a D flip-flop is shown in Figure 2. T1 and T2 below are tristate buffers; when EN=0 the output is in a tristate condition (not driven). We can build a tristate buffer from an inverter followed by a transmission gate, as shown in Figure 1. The two components can be combined into a single gate, as shown on the right. The D flip-flop is constructed from two D latches, master and slave, as discussed in your lecture notes. When the CLK is low, any value on input D will be propagated to node Z (Z will become equal to D). When the CLK goes high, the current value of node Z will be latched by the first of the two latches.

What delay does the setup time account for? The hold time ensures the output of the first transmission gate is held stable until the transmission gate is switched off.

Figure 1: Tristate Buffer
Figure 2: D flip-flop

2. The Missing Inverter Puzzle

A friend asks you to build a simple piece of electronics that inverts three digital input signals. Let's call the inputs A, B, and C and the outputs A', B', and C'. You go to your hardware bench and look for three inverters (the simple solution!). Unfortunately, you only find two, but discover lots of AND and OR gates. Can you still complete the task? (This problem is described in "Automated Reasoning: Introduction and Applications", Larry Wos et al., Prentice-Hall, 1984.)

*** Warning *** Solution on next page!

3. CMOS Logic Gates

(i) Draw the truth table for the circuit shown in Figure 3. What logic function is this?
(ii) Sketch a transistor-level circuit for the logic gates listed below. Remember we only ever use P-types to pull up and N-types to pull down (N-types can't pass logic 1 well and P-types can't pass logic 0 well; remember why?). For non-inverting gates we add an inverter on our output.
(a) 2-input NAND gate
(b) 3-input NOR gate
(c) A gate whose output is ab+c

Figure 3: Mystery logic gate

Missing Inverter Puzzle: a possible solution
2or3 = (A&B)+(A&C)+(B&C)
0or1 = !(2or3)
3ones = A&B&C
1one = 0or1 & (A+B+C)
1or3 = 1one + 3ones
0or2 = !(1or3)
0ones = 0or2 & 0or1
2ones = 0or2 & 2or3
A' = 0ones + (1one & (B+C)) + (2ones & B.C)
B' = …
C' = …
There are other approaches, I believe.

4. Boolean Algebra

Simplify the following expressions (where A' = not(A)):
(i) F = A.B'.C' + A'.B.C' + A'.B'.C + A.B.C
(ii) F = (X+Y).(X'+Y+Z).(X'+Y+Z')
(iii) F = (A.D + A'.C).(B'.(C + B.D')) [from 2011, Paper 2, Qu. 1]
(iv) F = (A+B'+A'.B).(A+B').A'.B
(v) F = (A+B'+A'.B).C' [from 2008, Paper 2, Qu. 2]
You could also attempt Qu. 1 from Paper 2, 2007: http://www.cl.cam.ac.uk/teaching/exams/pastpapers/y2007p2q1.pdf

5. State Machine Question

Attempt Question 2 from Paper 2, 2007: http://www.cl.cam.ac.uk/teaching/exams/pastpapers/y2007p2q2.pdf

6. Design Challenge

Design the control logic for a vending machine (draw the state machine and derive minimised expressions for the next-state functions). The machine dispenses drinks that cost 70p each and accepts fifty, twenty, and ten pence pieces. A new coin is accepted on each cycle until enough money has been deposited for a drink to be dispensed and change to be returned. The outputs your FSM will generate are Dispense, Return10p, Return20p, Return30p, and Return40p.

I am of course happy to mark past examination questions at any point in the year.
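The two-inverter construction above can be checked exhaustively. Here is a small Python sketch (not part of the original sheet) that implements the puzzle solution, with B' and C' filled in by symmetry, and verifies it over all eight input combinations:

```python
from itertools import product

def two_inverter_not3(A, B, C):
    """Invert three signals using only AND, OR, and exactly two NOT gates."""
    two_or_three = (A and B) or (A and C) or (B and C)
    zero_or_one  = not two_or_three          # inverter #1
    three_ones   = A and B and C
    one_one      = zero_or_one and (A or B or C)
    one_or_three = one_one or three_ones
    zero_or_two  = not one_or_three          # inverter #2
    zero_ones    = zero_or_two and zero_or_one
    two_ones     = zero_or_two and two_or_three
    A_inv = zero_ones or (one_one and (B or C)) or (two_ones and B and C)
    B_inv = zero_ones or (one_one and (A or C)) or (two_ones and A and C)
    C_inv = zero_ones or (one_one and (A or B)) or (two_ones and A and B)
    return A_inv, B_inv, C_inv

# Exhaustive check over all 8 input combinations
for A, B, C in product([False, True], repeat=3):
    assert two_inverter_not3(A, B, C) == (not A, not B, not C)
```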
Saturday, February 02, 2008
I've been blogging the HERC2/OCA2 story a fair amount. It seems this genomic region is the locus of main effect for variation of eye color in Europeans, in particular blue vs. non-blue eyes. But I also pointed out that this locus has also been connected to variation in skin color, and while that variation is additive in effect, the variation on eye color exhibits strong dominance/recessive dynamics. My inference here is that it is more plausible that selection occurred on skin color, while eye color was a tissue specific expression pattern which emerged as a byproduct. Peter Frost has an objection to this:
The correlation between eye color and skin color may simply be an artefact of geographic origin. Europeans vary clinally for both eye color and skin color along a north-south and west-east gradient, so if the pool of subjects is geographically heterogeneous you will almost certainly get a correlation between eye and skin color. But this doesn't prove a cause and effect relationship.
Fair enough. Spurious associations driven by cryptic population substructure are one of the main reasons Structure was developed. I responded to Peter here, here and here. The short of it is that I don't know of any analysis within an admixed population, like African Americans, which would settle the matter, but there are plenty of other points which suggest that we should look at the skin color trait (and, to be fair, if substructure exists at the level of British Isles origin samples we really need Structure!).
But there was something that has been bothering me: eye color difference exhibits a lot of dominance/recessive dynamics in expression. The skin color data here does not, and aside from KITLG (which is dominant for light skin) all the other loci seem additive and independent (the report of epistatic effects here & there don't seem reproduced very often). One of the main reasons that I am favoring a skin color model as the phenotype driving selection is that if it is additive it is exposed to selection immediately at low frequencies. In contrast, recessive traits at low frequencies have the problem that most copies of the allele which increases fitness are still in heterozygotes which mask them from selection. It came to my mind that the different assumptions about dominance would matter in terms of long term evolutionary dynamics and how that would be realized in terms of results from tests for selection. So I found this paper, Directional Positive Selection on an Allele of Arbitrary Dominance. It says:
...fixation of a beneficial allele leaves a signature in patterns of genetic variation at linked neutral sites. If this signature is well characterized, it can be used to identify recent adaptations from polymorphism data. To date, most models developed to characterize the effects of positive directional selection (termed "selective sweep") have assumed that the favored allele is codominant. In other words, if the fitnesses of the three genotypes are given by 1, 1 + sh, and 1 + s (where s is the selection coefficient), then h = 1/2....
For skin color, h would be 1/2 for HERC2/OCA2: one copy has half the effect on the trait value. Assuming proportional selection based on the character value, two copies would be better than one copy, which would be better than no copies. In contrast, for eye color h would be between 0 and 1/2, and probably closer to 0 because of the predominant recessivity in expression of blue eyes. That means the fitness of those with one blue-eye copy would be much closer to those with no blue-eye copies than to those with two; to the homozygote recessives would go all the benefit. On to the results:
...when h is small, most of the sojourn time is when the allele is at low frequency in the population. During this phase, the allele will have the opportunity to recombine onto other backgrounds. In other words, the favored allele will tend to increase in frequency on multiple backgrounds, preserving more of the diversity that existed when it first arose. In contrast, for dominant alleles, most of the sojourn time is spent at higher frequency, when there is less opportunity for the favored allele to recombine onto other backgrounds. This results in a wider signature of a fixation event for larger h-values.
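The effect of dominance on sojourn time at low frequency is easy to see with the standard single-locus recursion from the quoted model (genotype fitnesses 1, 1+sh, 1+s). This Python sketch is illustrative only; s = 0.05 is an arbitrary choice, not an estimate for OCA2:

```python
def generations_to_reach(target, p0=0.01, s=0.05, h=0.5):
    """Deterministic allele-frequency recursion for a beneficial allele
    with selection coefficient s and dominance coefficient h."""
    p, gens = p0, 0
    while p < target:
        q = 1 - p
        # mean fitness under genotype fitnesses 1+s, 1+sh, 1
        w_bar = p * p * (1 + s) + 2 * p * q * (1 + s * h) + q * q
        # standard viability-selection recursion for the next generation
        p = (p * p * (1 + s) + p * q * (1 + s * h)) / w_bar
        gens += 1
    return gens

codominant = generations_to_reach(0.5, h=0.5)  # h = 1/2, like the additive skin-color model
recessive  = generations_to_reach(0.5, h=0.0)  # h = 0, like recessive blue eyes
# The recessive allele spends far longer at low frequency, giving
# recombination more time to move it onto multiple haplotype backgrounds.
```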
Why the bolded parts? From A Map of Recent Positive Selection in the Human Genome:
Some of the strongest signals of recent selection appear in various types of genes related to morphology. For example, four genes involved in skin pigmentation show clear evidence of selection in Europeans (OCA2, MYO5A, DTNBP1, TYRP1). All four genes are associated with Mendelian disorders that cause lighter pigmentation or albinism, and all are in different genomic locations, indicating the action of separate selective events. One of these genes, OCA2, is associated with the third longest haplotype on a high frequency SNP anywhere in the genome for Europeans....
I don't know if my connection of inferences here is valid, and the paper I originally referenced makes clear that it is important to frame these sorts of assumptions within their statistical context; just because something is less likely does not mean it is impossible. I've sent out emails about OCA2 and skin color, and will report back, but at this point I suspect that the final proof in the pudding will have to be admixture analysis in a group like African Americans. But I think the above makes it more likely that whatever was going on 10,000 years ago did not express as a recessive phenotype.
[VC-36032] Log the client-id when VenafiCloudKeypair authentication is used
In https://github.com/jetstack/jetstack-secure/issues/549 @hawksight wrote:
...we are missing one important piece [of logged information] when connecting to TLSPK on VCP, the client-id
I've added client-id to the log messages when the configuration resolver selects Venafi Cloud Authentication.
But in the interests of reducing the overwhelming quantity of log messages, I've also changed that log message to only be shown at debug log level --log-level=1.
I've also associated this PR with [VC-36032] AGENT: Enhanced troubleshooting through improved logging.
Testing
$ go run ./ agent --install-namespace default --private-key-path /dev/null --client-id foo --agent-config-file examples/cert-manager-agent.yaml --output-path /dev/null -v1
I1120 15:50:50.273235 185320 run.go:59] "Starting" logger="Run" version="development" commit=""
I1120 15:50:50.280681 185320 config.go:410] "Authentication mode" logger="Run" clientID="foo" privateKeyPath="/dev/null" mode="Venafi Cloud Key Pair Service Account" reason="--client-id and --private-key-path were specified"
I1120 15:50:50.280752 185320 config.go:428] "Using deprecated Endpoint configuration. User Server instead." logger="Run"
E1120 15:50:50.280799 185320 root.go:53] "Exiting due to error" err=<
While evaluating configuration: 2 errors occurred:
* the venafi-cloud.upload_path field is required when using the Venafi Cloud Key Pair Service Account mode
* period must be set using --period or -p, or using the 'period' field in the config file
> exit-code=1
exit status 1
$ go test -v ./pkg/agent/... -run Test_ValidateAndCombineConfig | grep "Authentication mode"
config.go:410: I1120 15:54:40.370580] Authentication mode mode="Jetstack Secure OAuth" reason="--credentials-file was specified without --venafi-cloud"
config.go:410: I1120 15:54:40.415486] Authentication mode venConnName="venafi-components" mode="Venafi Cloud VenafiConnection" reason="--venafi-connection was specified"
What is that default --log-level set to? Or is it not set on deployment?
I am just trying to think how a user would enable this, either directly through helm or venctl.
I added some notes to the Helm chart values and README in #627
https://github.com/jetstack/jetstack-secure/blob/98afe3b70434d1f64881d7c1b97e5e9a1cbedece/deploy/charts/venafi-kubernetes-agent/values.yaml#L148-L159
Those updated values will eventually find their way into the docs at:
https://docs.venafi.cloud/vaas/k8s-components/c-vka-helmvalues/#extraargs
I thought I’d have a go at changing the icon for a generic switch (Fibaro FBR-223) so it doesn’t appear as a lightbulb icon.
It was actually fairly straightforward - with the caveat that since I hacked the existing BinaryLight1 json file it will probably get reverted on upgrade; however, it probably doesn’t break anything even if that happens, because it should go back to looking like a bulb.
Full disclosure - I’ve had this working for all of 24 hours…
0. Do a backup!
1. Make sure your device is category 3, sub-category 0.
2. In Apps - Develop apps - Luup files: download and save D_BinaryLight1.json
3. Edit that file to change the first two img values (associated with subcategory_num = 0) to the names of your images (in my case binary_switch_off.png and binary_switch_on.png)
4. Upload the file to replace the original
5. Now upload the necessary images - my examples are attached; they need to be PNG, 60x60, with appropriate transparency etc., I guess, to look OK
6. Use SCP (e.g. WinSCP) to connect to the Vera using the normal root login
7. Drop the two PNG images into /www/cmh/skins/default/img/devices/device_states
8. Restart Luup, hard-refresh the browser, and it should work.
For things which actually are lights, use sub-category 1 or 2.
Annoyingly, the implementation on mobile apps seems to be random - iOS shows sub-categories 0 and 2 as remote-control switches, whereas Windows shows them as a bulb and a generic device. This change makes no difference.
There was also some discussion of this particular issue, but I can’t find it right now. I think amg0 had some thoughts on a clever way to work around the Mios servers to get icons from your unit if you used the AltUI. You could try using Google to search for it.
Thanks - I tried AltUI but don’t really like it. UI7 has absurd amounts of white space and poor screen usage, lack of configurability, and the performance is poor, but I still prefer the design and simplicity compared to AltUI.
The remote access thing doesn’t bother me much anyway - I use my own proxy by preference, so the icons do work - it was more for info of anyone else using this.
But having proper icon support so they also appear on iOS or other mobile apps would surely be a sensible feature. Even if it was a limited but much larger choice of icons it would be a big improvement.
As it is, I’m seriously thinking about moving to something like RaZberry - I didn’t go that way because I wanted a simple out-of-the-box experience - hah, I should have done my research better. MCV should really take these forums more seriously; they’re surely the only thing keeping Vera alive.
I have a Raspberry Pi with a UZB stick running openLuup/Z-way/AltUI at a different location than my Vera (UI5). Cudanet has put together some good turnkey images for this, but I created my own system to learn about it. It would have been very difficult to have started directly with it, because so much is taken from the Vera that you need it as a base. And in fact you need the Vera to get some of the plugins over to openLuup. Others have hybrid systems with both openLuup and the Vera co-located. openLuup and AltUI appear to be more stable, and features and bugs are quickly acted upon. But you also have the Z-way stack in there too, which is pretty good but adds another layer of complexity. Once you let your mind wander, you can start contemplating Homeseer or OpenHAB too.
Note: You are currently viewing documentation for Moodle 3.1. Up-to-date documentation for the latest stable version of Moodle is probably available here: lighttpd.
Lighttpd (a.k.a. Lighty) is a lightweight webserver with a small memory footprint that works well with PHP accelerators like eAccelerator and Alternative PHP Cache (APC). It is an alternative to Apache and IIS particularly well suited to systems with low resources, especially RAM -- ideal when using virtual private server (VPS) hosting services. Even on hosts with plenty of resources, Lighty is as fast as Apache if not faster and worth considering.
Note that while Lighttpd has all the basic features that Apache does, it does not have the depth and versatility of Apache's more advanced features. There are one or two small incompatibilities as well and it does not share the same level of knowledge and support. You are encouraged to be certain it meets your needs before deploying in a production environment.
Installation varies according to platform, so if the following example doesn't apply to your server configuration, check the official instruction page here. You will also need to install PHP (preferably with an accelerator) and FastCGI according to these instructions.
A notable difference between Lighttpd and Apache is the expires directive that controls the caching of content. Apache allows you to set the expiry by file type (e.g. making a jpg image stay in the browser cache for a week), whereas older versions of Lighttpd specify folders to be cached. Versions with ETag generation support provide full expiry mechanisms and control.
Fedora 6, 7, 8 & 9 Example
On the latest versions of Fedora, lighttpd is a part of the official repository and can be installed using yum to meet all of Moodle's base requirements, including the PHP accelerator Alternative PHP Cache (APC), using the following command.
yum install lighttpd lighttpd-fastcgi mysql php php-mysql \
    php-pecl-apc php-gd php-mbstring php-xmlrpc php-pdo
Edit /etc/lighttpd/lighttpd.conf to point to the Moodle root directory and the PHP-CGI area to activate and point to PHP's fastcgi executable, under the respective sections:
## a static document-root example
server.document-root = "/home/moodle-1.9/"

#### fastcgi module
fastcgi.server = ( ".php" =>
  ( "localhost" =>
    ( "socket"   => "/tmp/php-fastcgi.socket",
      "bin-path" => "/usr/bin/php-cgi" )
  )
)
Since Moodle maintains its own logs and reporting tools, you can disable lighttpd's server-level logging by commenting out (put a # mark in front of) the following lines in /etc/lighttpd/lighttpd.conf, as follows under the respective sections.
## modules to load
#  "mod_accesslog",

#### accesslog module
#accesslog.filename = "/var/log/lighttpd/access_log"
If your server has CPU cycles to spare, enabling compression improves network speed and the student's perceived response time, especially for off-campus and distance learners. Compression should be separated at the HTTP and PHP levels to maximize CPU cycles (especially on multiple-CPU or multi-core systems). First uncomment the compression module in /etc/lighttpd/lighttpd.conf, create a directory for the compressed file temporary store (in this example, it's /home/compress.tmp) and enter it along with the types of files to compress (do not include PHP mimetypes here), as follows under the respective sections:
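The configuration fragment referenced here did not survive on this page; a typical lighttpd 1.4 fragment (directive names from the standard mod_compress module, the cache directory and file types taken from the text above) would look like:

```
## modules to load
server.modules += ( "mod_compress" )

#### compress module
compress.cache-dir = "/home/compress.tmp/"
compress.filetype  = ( "text/plain", "text/html", "text/css", "text/javascript" )
```

Note that PHP mimetypes are deliberately absent from compress.filetype, as PHP output is compressed at the PHP level instead.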
Second, edit /etc/php.ini to enable compression on the PHP output and consider increasing the maximum server submission size (individual courses can still be limited lower than and up to this limit within Moodle itself, if needed).
zlib.output_compression = On
post_max_size = 32M
Basic Setup can be found under [Settings] > [System] > [General] > [Basic Setup].
|Root Web Directory|
This value represents the full server path to the web server's document root directory (e.g. /home/user/public_html/). This is not necessarily the path to the directory Blesta is installed under.
This setting is used when the web server cannot provide the path to your Blesta installation, such as when the cron is run via CLI. This typically occurs for URLs constructed for emails that are sent by cron via CLI.
The path to the document root directory may vary depending on your web server configuration. Generally speaking, any directories in the absolute path to your Blesta installation that do not appear in the URL should be included in your Root Web Directory setting. For example (hypothetical paths): if Blesta is installed at /home/user/public_html/billing/ and reached at https://example.com/billing/, the Root Web Directory is /home/user/public_html/.
The use of Virtual Directories on your web server may interfere with this value. If the cron is run via CLI, it does not use the web server, and therefore is unaware of your virtual directory aliasing, resulting in incorrect URLs in emails sent by cron. To resolve this issue, either remove the virtual directory alias to your Blesta installation, or set the cron to run via wget, which will use the web server, and thus be aware of the virtual directory alias.
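As a concrete (hypothetical) example of the wget approach, a crontab entry along these lines runs the cron through the web server, so virtual directory aliases are honored; the exact cron URL and interval for your installation are shown in Blesta's Automation settings:

```
*/5 * * * * wget -q -O /dev/null "https://example.com/billing/cron/"
```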
|Temp Directory||The full server path where Blesta should write temporary files. This directory must be writable by the server's web user and cron user.|
|Uploads Directory||The full server path where Blesta should write uploaded files. This directory must be writable by the server's web user and cron user.|
|Logs Directory||The full server path where Blesta should write log files. This directory must be writable by the server's web user and cron user.|
The number of days to retain logs.
Rotation Policy controls the amount of time to retain most log data, cron log being the exception as that is controlled by the Blesta.cron_log_retention_days configuration.
Options that control what logs to delete can be found in the configuration. They include:
GeoIP Settings can be found under [Settings] > [System] > [General] > [GeoIP Settings].
Enabling GeoIP will allow certain features to take advantage of location services. GeoIP requires the GeoLite City binary database, which can be obtained from your account at https://www.maxmind.com/. The file should be unzipped and uploaded to your Uploads Directory and placed in a folder named system.
|Enable GeoIP||Check to enable GeoIP features.|
Maintenance can be found under [Settings] > [System] > [General] > [Maintenance].
|Enable Maintenance Mode|
Check to enable maintenance mode.
When in maintenance mode:
|Reason for Maintenance||Enter the reason for the maintenance. This will be displayed to users that access the system when maintenance mode is enabled.|
License Key can be found under [Settings] > [System] > [General] > [License Key].
|License Key||This is your Blesta license key. If you receive a new license key, enter it here.|
Payment Types can be found under [Settings] > [System] > [General] > [Payment Types]. Payment types allow manual payments to be recorded.
Adding a Payment Type
To add a payment type click Create Payment Type on the payment type listing page.
Editing a Payment Type
To edit a payment type click Edit next to the payment type on the payment type listing page.
|Name||The name of the payment type.|
|Type||Debit or Credit. When set to debit, transactions using this payment type are considered income-based while credit is non-income-based.|
|Use Language Definition|
Check if the value entered for Name has a language definition.
Add your custom language definitions to the _custom.php.
Deleting a Payment Type
To delete a payment type click Delete next to the payment type on the payment type listing page.
How to make something interesting
Do you remember those days when something just seems dead boring? You start getting a little sleepy, and sink deep into the state of mind I would call boredom.
Some things are fun, and some are just boring and uninteresting. But what if we could make the boring things close to as interesting, or maybe even as interesting, as the things you actually find fun to do? Maybe there is some special key. A secret nobody has ever told you, that magically turns everything boring and demotivating into something fun and exciting.
I believe there is such a "key" to making "things" interesting. It's actually quite obvious. The key to finding interest in maybe even the most boring "things" is....
If you think about it, how can it be a bad thing that your credit card got stolen? I believe that there isn't any kind of law that says "every time something happens to you which you don't like, it will always be a bad thing". The reason it is "bad" to lose your credit card is because we don't want it to happen. Not because the incident was "bad", but because we didn't like the way it made us feel, and therefore we categorized it as "bad". Often things aren't as bad as they look, and the point of the whole "bad thing" explanation is that the only factor that decides whether or not the incident that just happened was "good" or "bad" is our own judgment, which is controlled by our feelings, which aren't always very reliable. The same goes for making something interesting. If the subject, task, speech, etc... has been categorized as boring, how can it then be interesting? I somewhere saw a very simple explanation, which was quite similar to "incident + reaction = perception". So next time you need to read your advanced science book, react as if you were going to do something you really wanted and looked forward to doing. Or maybe not...
If that doesn't sound like you try:
- Think positively of the thing you want to make interesting, or at least stop thinking "I would rather go to hell than do '......'"
- If you don't mind seeming a bit like a weirdo (don't worry, only you will know), try to fake enthusiasm, and keep it going for a few weeks; if you've done it well, it should pay off
- Try linking the boring thing you have to do with something you love, like "cleaning the house while dancing to your favorite song"; be creative
- Do the boring thing while having a fresh mind; it will make it easier and therefore more enjoyable, or less painful at least
- Generally have an "I feel great" attitude when you wake up, and keep it the whole day. If you can stay happy for a long time, your mind will actually adapt, so you'll stay happy without trying. Sweet, isn't it?
Try it out. One step back to make sure you make things interesting often results in ten steps forward in the end.
- How to use HyperTerminal Terminal Emulator to configure, monitor or manage a Cisco Router or Switch
- Monitoring traffic with Cisco port monitoring.
- terminal monitor
- How to see console output on a Cisco SSH session?
How to use HyperTerminal Terminal Emulator to configure, monitor or manage a Cisco Router or Switch

To use commands of this module, you must be in a user group associated with a task group that includes appropriate task IDs. If the user group assignment is preventing you from using any command, contact your AAA administrator for assistance.

archive-length: To specify the length of time that logs are maintained in the logging archive, use the archive-length command in logging archive configuration mode. To return to the default, use the no form of this command. Its argument is the length of time, in weeks, that logs are maintained in the archive (the range starts at 0). Any logs older than this number are automatically removed from the archive. This example shows how to set the log archival period to 6 weeks.

archive-size: To specify the amount of space allotted for syslogs on a device, use the archive-size command in logging archive configuration mode. Its argument is the amount of space, in MB, allotted for syslogs (the range starts at 0). Use the archive-size command to specify the maximum total size of the syslog archives on a storage device. If the size is exceeded, then the oldest file in the archive is deleted to make space for new logs. This example shows how to set the allotted space for syslogs to 50 MB.

clear logging: To clear system logging (syslog) messages from the logging buffer, use the clear logging command in EXEC mode. Use the clear logging command to empty the contents of the logging buffer. When the logging buffer becomes full, new logged messages overwrite old messages. Use the logging buffered command to specify the logging buffer as a destination for syslog messages, set the size of the logging buffer, and limit syslog messages sent to the logging buffer based on severity. Use the show logging command to display syslog messages stored in the logging buffer.
Related commands: logging buffered specifies the logging buffer as a destination for syslog messages, sets the size of the logging buffer, and limits syslog messages sent to the logging buffer based on severity; show logging displays syslog messages stored in the logging buffer.

device: To specify the device to be used for logging syslogs, use the device command in logging archive configuration mode. If the device is not configured, then all other logging archive configurations are rejected. Similarly, the configured device cannot be removed until the other logging archive configurations are removed. It is recommended that the syslogs be archived to the harddisk because it has more capacity. This example shows how to specify disk1 as the device for logging syslog messages.

discriminator: To create a syslog message discriminator, use the discriminator command in Global Configuration mode. To disable the syslog message discriminator, use the no form of this command. The match1, match2, and match3 keywords specify the first, second, and third patterns that syslog messages must match; the nomatch1, nomatch2, and nomatch3 keywords specify patterns that syslog messages must not match. A string, when matched in the syslog message, is included by the discriminator. If the pattern contains spaces, you must enclose it in quotes (" ").
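Pulled together, the archive examples mentioned above would read something like the following (the prompt format and device choice are illustrative; the command names and values are the ones documented here):

```
RP/0/RSP0/CPU0:router# configure
RP/0/RSP0/CPU0:router(config)# logging archive
RP/0/RSP0/CPU0:router(config-logging-arch)# device harddisk
RP/0/RSP0/CPU0:router(config-logging-arch)# archive-length 6
RP/0/RSP0/CPU0:router(config-logging-arch)# archive-size 50
```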
Monitoring traffic with Cisco port monitoring.
As I've begun learning Cisco networking, there is one feature that I've fallen in love with -- the Port Monitor. Essentially, you can take whatever ports you want and "mirror" them to another, allowing the computer at the other end to receive traffic not originally intended for it, much like how a hub operates. If you are going to do this, I recommend you actually read up about it at Cisco's site. I know my way around it, but truth be told, I have little experience thus far in Cisco. In these examples, I am using a Cisco series layer 2 switch. Your results may vary, but I know these are correct for this series. The hostname of the switch is Rohan. You should know how to connect to it by yourself; if you have any questions on doing this, bug your higher-up system admin. Once connected, type "ena" to enter enable mode. You will be asked for the enable account's password. Type it in. Choose which interface you want your traffic mirrored to. Here comes the fun part: you can either specify to monitor a single vlan (the monitor port must be on the same vlan as the ports it is monitoring!) or individual ports. Type "wr" to save your current running configuration as your startup config so you don't lose all your hard work after a reboot. You should see traffic from all of the ports you specified get mirrored to your current machine. If not, recheck your steps. Having a monitor port has proven beyond useful when it comes to debugging problems at the network level or catching people trying to torrent. They're also fun for just watching what your computer is doing. Idea: plug the monitor port into a server running RemoteApp and set Wireshark up as an app that only Domain Admins can run. That way, anywhere you are on the network, you can see exactly what's going on no matter where you are. Thanks for the walkthrough. I just happen to have a series sitting on a bookshelf collecting dust. Might as well collect data.
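For reference, a SPAN setup of the kind the walkthrough describes reads roughly like the following on a Catalyst layer 2 switch (the switch model and interface numbers are made up; check the monitor session syntax for your IOS version):

```
Rohan# configure terminal
Rohan(config)# monitor session 1 source interface FastEthernet0/1 - 4
Rohan(config)# monitor session 1 destination interface FastEthernet0/24
Rohan(config)# end
Rohan# wr
```

The computer on FastEthernet0/24 then receives a copy of all traffic on ports 1 through 4, ready for Wireshark.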
I have two of these switches linked; is it possible to monitor the whole vlan (both switches) via a port on just one? Ashley: you should be able to. I'm not sure how the guy did it, but prior to me taking over networking at Bates, the prior Cisco tech had set up a vlan for every room in the building and a single monitor port that can capture traffic from every vlan. I've looked at his config, but it's freaking huge and based off a
Monitoring traffic with Cisco port monitoring.
Syntax description: session-number -- (Optional) Specifies that the selected session will be shut down for monitoring. Command default: none. Command modes: Global configuration mode. Command history: the limit on the number of egress TX sources in a monitor session has been lifted.

Usage guidelines: To ensure that you are working with a completely new session, you can clear the desired session number or all SPAN sessions. When you configure more than two SPAN sessions, the first two sessions are active. During startup, the order of active sessions is reversed; the last two sessions are active. For example, if you configured ten sessions (1 to 10) where 1 and 2 are active, after a reboot, sessions 9 and 10 will be active. To enable deterministic behavior, explicitly suspend sessions 3 to 10 with the monitor session session-number shut command. Port-channel interfaces can be configured as egress sources.

Related commands: session-number -- the SPAN session to create or configure (the range starts at 1); all -- applies configuration information to all SPAN sessions; type -- (Optional) specifies the type of session to configure; description -- adds a description to identify the SPAN session; a corresponding show command displays SPAN session configuration information.
Let's cover this syntax so that you can speak the same language as me. A VTY is a term used by Cisco to describe a single terminal, whereas Terminal is more of a verb or general action term. The following is conjecture on my part; I have no actual proof that this is true, but it seems highly likely. Teletype refers to the days when computers were programmed via character-based printers -- literally, a keyboard attached to a printer. When you pressed a key, the character was printed on paper. Screens to display characters were invented much later. I guess Cisco never got around to changing it. Is that just cheap or good for customers? Hard to say. If you made a mistake you had to delete the entire line, type it again and hope it was right. Terry Slattery talks about it here. Here is an excerpt: He told me that he needed the ability to change the configuration at a trade show, so he added a quick hack to allow him to type the configuration into a buffer, which was passed to the function that parsed the TFTP file. You entered all the commands and when you pressed CTRL-Z, the file was parsed and any errors were displayed. Greg Satz, who told me of this change, was pleased to note that I had just barely noticed the change. The change reported errors as soon as you entered them, not after the entire buffer had been typed, so it was a good change. This change would have happened sometime before late. There was still no command history, interactive help, or command editing capability. If you configure more, you can (depending on IOS version) have more. Using different terms can cause errors and mistakes. Get over it. Network Break is a round-table podcast on news, views and industry events. Join Ethan, Drew and myself as we talk about what happened this week in networking, in the time it takes to have a coffee. Syntax: herewith is the official Ethereal Mind definition for each term. Do not use any other meaning. Then we all mean the same thing.
Novel–Divine Emperor of Death–Divine Emperor of Death
Chapter 1614 – Respect shame confuse
Ancestor Ezekiel Alstreim’s expression held an ounce of frustration before he pounced on Davis and hugged him tightly.
“The war isn’t over yet, and this is currently the most secure location in the Alstreim Family’s Territory. Do you want to endanger them and give me yet another headache?”
Grand Elder Elise Alstreim’s body shivered.
Davis couldn’t help but blink.
Entering the sealed lightning space, Davis crossed into the terrain inside. In this area and the outer region, the skies were always clouded with stormy clouds, and as a consequence, no natural light could enter. Despite the yin energy overflowing in this space, the atmosphere here was quite stable for the living, thanks to the lightning energy, which is intrinsically yang, counteracting it.
Davis blinked his eyes as he saw Grand Elder Elise Alstreim supporting Sophie and Niera, her palms placed on both their backs, aiding them in their cultivation through utter concentration and effort, which normally even someone close wouldn’t be willing to do.
Meanwhile, Davis kept consoling them as he caressed their backs. Their ceaseless trembling made him feel worried, making him wonder how much they could’ve endured to come to this place and improve their cultivation, even going so far as to harm their own bodies.
As Davis traveled through the underground cave, he couldn’t help but ask.
“Grandfather, go away! You can’t be like this just when Davis recovered and came back for us…!”
‘Should I come after…?’
Looking at them, he saw that Sophie and Niera were not at a critical junction but were just circulating their essence energy amidst the intense flames and lava beneath them, appearing to almost burn their butts; yet it didn’t, thanks to their own energy and Grand Elder Elise Alstreim’s energy protecting them.
Nevertheless, he was astonished to see that they had grown from Low-Level Law Dominion Stage to Peak-Level Law Dominion Stage in these short seven months. This was too quick a growth in his books, as they had almost no anchors to support their cultivation growing this fast.
Davis nodded as though it was a given.
“Sigh, she’s completely taken on the role of your wife, not caring about her own people.”
Davis inwardly mused before he met up with three figures. All three of them were facing the front; however, only two of them were cultivating, while the one behind was helping them cultivate by protecting them from the intense heat that threatened to burn them.
Sophie and Niera took some time to finish their circulation, but Grand Elder Elise Alstreim turned to look back with wide eyes, her purple pupils trembling as she saw Davis examining them with his raised brows.
Sophie reacted oddly, causing Davis to heave a sigh, sensing that his assumption had come true.
Nevertheless, he was no righteous person, nor did he try to build a better world by choosing the option of caring for those children, as he knew that people who try to create a perfect, hate-free world would only end up succeeding in destroying it.
If they were growing up into wicked path cultivators, then they were better off gone so there would be fewer innocents harmed eventually, although it was counterintuitive to his belief system, in which he didn’t prefer to kill children because he knew that they still hadn’t done anything wrong that could justify them being killed.
‘Clara must’ve taken them home once she collected them…’
He coughed, trying to get their attention.
Ancestor Ezekiel Alstreim inwardly gulped. This lass, she had completely fallen for Davis; she had even berated him. But what could he do? He had expected this to happen at some point, and it happened to be today: a joyous day on which he couldn’t help but deeply smile.
“Your mother was here, but she later went to meet her husband at the Purple Thunderflame Mountain. A mother will only endure loneliness somewhat after experiencing her beloved son’s death, sigh…”
Davis amusedly smiled while Grand Elder Elise looked horrified, as if she had seen a ghost. Simultaneously, Sophie and Niera practically jolted from their meditative positions and looked at him with utter disbelief in their eyes.
“You…! Don’t tell me practicing the laws of death makes it possible for you to become a ghost?”
Sophie, Niera, and Mo Mingzhi didn’t know that Shirley ‘revived’ him, unlike Evelynn, who took breaks once in a while after a massacre, listening to Nadia and Isabella’s requests for her to return. However, she declined every time, claiming she wasn’t the old Evelynn and that she was a slaughterer, a really dangerous woman, and whatnot, all the while hiding behind that spider shell of hers.
Davis quickly arrived in front of the underground cave and saw Ancestor Ezekiel Alstreim, whose jaw dropped.
Great Elder Elise Alstreim’s physique s.h.i.+vered.
"What are you looking at? I won't pay you for helping them."
Ancestor Ezekiel Alstreim's expression held an ounce of fury before he pounced on Davis and hugged him tightly.
"You brat, you gave us a right scare when you fell. I thought your women were all crazy to keep you without burying your body, but it seems as if I was the fool! It's a good thing that I'm the fool!"
Novel: Divine Emperor of Death
mysqlimport is the command-line client that imports text files into their corresponding tables using LOAD DATA INFILE. It reads a range of data formats, including comma- and tab-delimited files, and it can load tables managed by local or remote servers from files located on either the client host or the server host.
To import into a remote server, pass the host name with --host=hostname (or -h hostname); the default is localhost. To specify a port number explicitly, use --port (or -P). The --local option tells mysqlimport that the data file lives on the client machine where the command is invoked; without it, the server tries to read the file from its own filesystem, which is the usual cause of errors such as "mysqlimport: Error: 2, File 'tickets' not found (Errcode: 2 - No such file or directory)". Also note that, without --force, mysqlimport exits as soon as it encounters a table that does not exist.
Remote connections often fail with "mysqlimport: Error: 1045 Access denied for user 'root'@'ip-address-of-remote-host' (using password: YES)". This error is about account permissions, not about mysqlimport itself. The MySQL account must be granted access from the client's host (or from a wildcard host), and the server's firewall must allow connections to port 3306 from that client. On shared hosting this is usually configured through the control panel (the "Remote MySQL" section in cPanel, or the firewall rules editor in Parallels Plesk); on your own server, log in over SSH and grant the privileges yourself.
Before importing, verify the account by connecting with the plain client: mysql -h HOST -u USERNAME -p. If you get a MySQL shell, run SHOW DATABASES to check that you have the right privileges from the remote machine.
Two caveats: putting the password on the command line (-ppassword) is insecure, and MySQL prints a warning when you do so; and the connection to a remote server is not encrypted unless you enable the SSL options. Finally, if the remote instance does not accept outside connections at all (common on shared virtual hosting plans), mysqlimport cannot help from your machine; load the file on the server side instead, for example with LOAD DATA INFILE in a script running on the host.
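Putting the pieces together, a hypothetical end-to-end run might look like the following. All names are placeholders (db.example.com, appuser, the client IP 203.0.113.7, the database test, and employee.txt), and the GRANT shown uses the pre-MySQL-8 syntax:

```shell
# On the server: allow the account to connect from the client machine
# (MySQL 5.x syntax; on MySQL 8, CREATE USER first, then GRANT).
mysql -u root -p -e "GRANT ALL ON test.* TO 'appuser'@'203.0.113.7' IDENTIFIED BY 'secret';"

# On the client: import employee.txt into the `employee` table of database
# `test` (mysqlimport derives the table name from the file's base name).
# --local sends the client-side file to the server; without it, the server
# would look for employee.txt on its own filesystem (Errcode 2).
mysqlimport --host=db.example.com --port=3306 --user=appuser -p \
    --local --fields-terminated-by=',' --lines-terminated-by='\n' \
    test employee.txt
```

This is a sketch of the command shapes, not something to run verbatim; substitute your own host, account, and firewall rules.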
This project shows how to copy a file while displaying the percentage processed.
The project opens the source file in binary mode and transfers its contents to the destination file (also opened in binary mode).
Here is the code:
Private Declare Function SHGetPathFromIDList Lib "shell32.dll" Alias "SHGetPathFromIDListA" _
    (ByVal pidl As Long, ByVal pszPath As String) As Long
Private Declare Function SHGetSpecialFolderLocation Lib "shell32.dll" _
    (ByVal hwndOwner As Long, ByVal nFolder As Long, pidl As ITEMIDLIST) As Long
Private Declare Function SHBrowseForFolder Lib "shell32.dll" Alias "SHBrowseForFolderA" _
    (lpBrowseInfo As BROWSEINFO) As Long

Private Type SHITEMID
    cb As Long
    abID As Byte
End Type

Private Type ITEMIDLIST
    mkid As SHITEMID
End Type

Private Type BROWSEINFO
    hOwner As Long
    pidlRoot As Long
    pszDisplayName As String
    lpszTitle As String
    ulFlags As Long
    lpfn As Long
    lParam As Long
    iImage As Long
End Type

Private Const NOERROR = 0
Private Const BIF_RETURNONLYFSDIRS = &H1
Private Const BIF_DONTGOBELOWDOMAIN = &H2
Private Const BIF_STATUSTEXT = &H4
Private Const BIF_RETURNFSANCESTORS = &H8
Private Const BIF_BROWSEFORCOMPUTER = &H1000
Private Const BIF_BROWSEFORPRINTER = &H2000

'This code copies the file named in the Source text box. The file size is
'calculated and the file is copied to the destination path while the
'progress bar advances at the same time.
Function CopyFile(Src As String, Dst As String) As Single
    Static Buf$
    Dim BTest!, FSize!              'declare the needed variables
    Dim Chunk%, F1%, F2%
    Const BUFSIZE = 1024            'set the buffer size

    If Len(Dir(Dst)) Then           'check to see if the destination file already exists
        'prompt the user with a message box
        Response = MsgBox(Dst + Chr(10) + Chr(10) + _
            "File already exists. Do you want to overwrite it?", _
            vbYesNo + vbQuestion)
        If Response = vbNo Then     'if the "No" button was clicked
            Exit Function           'exit the procedure
        Else                        'otherwise
            Kill Dst                'delete the existing file, and carry on with the code
        End If
    End If

    On Error GoTo FileCopyError     'in case of error go to this label
    F1 = FreeFile                   'returns an available file number
    Open Src For Binary As F1       'open the source file
    F2 = FreeFile                   'returns an available file number
    Open Dst For Binary As F2       'open the destination file
    FSize = LOF(F1)
    BTest = FSize - LOF(F2)
    Do
        If BTest < BUFSIZE Then
            Chunk = BTest
        Else
            Chunk = BUFSIZE
        End If
        Buf = String(Chunk, " ")
        Get F1, , Buf
        Put F2, , Buf
        BTest = FSize - LOF(F2)
        'advance the progress bar as the file is copied
        ProgressBar.Value = (100 - Int(100 * BTest / FSize))
    Loop Until BTest = 0
    Close F1                        'close the source file
    Close F2                        'close the destination file
    CopyFile = FSize
    ProgressBar.Value = 0           'return the progress bar to zero
    Exit Function                   'exit the procedure

FileCopyError:                      'file copy error label
    MsgBox "Copy Error!, Please try again..."  'display a message box with the error
    Close F1                        'close the source file
    Close F2                        'close the destination file
    Exit Function                   'exit the procedure
End Function

'This code extracts the filename the user provided in the Source text box.
'The filename is extracted and passed back via the string SpecOut, and is
'then appended to the destination path provided by the user.
Public Function ExtractName(SpecIn As String) As String
    Dim i As Integer                'declare the needed variables
    Dim SpecOut As String
    On Error Resume Next            'ignore any errors
    For i = Len(SpecIn) To 1 Step -1    'what follows the last backslash is the filename
        If Mid(SpecIn, i, 1) = "\" Then
            SpecOut = Mid(SpecIn, i + 1)    'extract the filename from the path provided
            Exit For
        End If
    Next i
    ExtractName = SpecOut           'return the filename extracted from the path
End Function

Private Sub Browsedestination_Click()
    Dim bi As BROWSEINFO            'declare the needed variables
    Dim rtn&, pidl&, path$, pos%
    bi.hOwner = Me.hWnd             'centre the dialog on the screen
    bi.lpszTitle = "Browse for Destination..."   'set the title text
    bi.ulFlags = BIF_RETURNONLYFSDIRS            'the type of folder(s) to return
    pidl& = SHBrowseForFolder(bi)   'show the dialog box
    path = Space(512)               'set the maximum characters
    T = SHGetPathFromIDList(ByVal pidl&, ByVal path)   'get the selected path
    pos% = InStr(path$, Chr$(0))    'find the end of the path in the string
    SpecIn = Left(path$, pos - 1)   'set the extracted path to SpecIn
    If Right$(SpecIn, 1) = "\" Then 'make sure that "\" is at the end of the path
        SpecOut = SpecIn            'if so, do nothing
    Else                            'otherwise
        SpecOut = SpecIn + "\"      'add the "\" to the end of the path
    End If
    'merge the destination path and the source filename into one string
    Destinationpath.Text = SpecOut + ExtractName(Filepath.Text)
End Sub

Private Sub Browsefile_Click()
    Dialog.DialogTitle = "Browse for source..."  'set the dialog title
    Dialog.ShowOpen                 'show the dialog box
    Filepath.Text = Dialog.filename 'set the target text box to the file chosen
End Sub

Private Sub Cancel_Click()
    Unload Me                       'exit the program
End Sub

Private Sub Copy_Click()
    On Error Resume Next            'ignore any errors
    If Filepath.Text = "" Then      'make sure that a source file is specified
        MsgBox "You must specify a file and path in the text box provided", vbCritical
        Exit Sub                    'and exit the procedure
    End If
    If Destinationpath.Text = "" Then   'make sure that a destination path is specified
        MsgBox "You must specify a destination path in the text box provided", vbCritical
        Exit Sub                    'and exit the procedure
    End If
    'if all is OK then copy the file
    ProgressBar.Value = CopyFile(Filepath.Text, Destinationpath.Text)
End Sub

Private Sub FilePath_Change()
    Destinationpath.Enabled = True  'enable the destination path text box
    Browsedestination.Enabled = True    'enable the browse button
    Destinationpath.SetFocus        'put the cursor in the destination path text box
End Sub

Private Sub Form_Load()
    Move (Screen.Width - Width) \ 2, (Screen.Height - Height) \ 2  'centre the form on the screen

    'This project was downloaded from
    'http://www.brianharper.demon.co.uk/
    'Please use this project and all of its source code however you want.

    'UNZIPPING
    'To unzip the project files you will need a 32-bit unzipper program that
    'can handle long file names. If you have a recent copy of WinZip installed
    'on your system then you may use that. If you don't have a copy, visit my
    'web site, go into the files section, and follow the WinZip link to their
    'site to download a copy of the program. You will then be able to unzip
    'the project files while retaining their proper long file names.
    'Once unzipped, load up your copy of Visual Basic and go to
    'File/Open Project. Locate the project files wherever you unzipped them,
    'then click Open. The project files will be loaded and are now ready
    'for use.

    'THE PROJECT
    'I created this project in order to try and spice up a menu system I was
    'once working on. I needed to copy files between disks and needed some
    'indication of how long it would take and how it was doing. Using a
    'percent bar in the project was ideal. Percent bars are now used as a
    'common method of indicating how a procedure is doing. They might not be
    '100% accurate but they are the next best thing. After hours of research
    'and many hours of debugging, I finally came up with an easy-to-use
    'executable using a percent bar while copying a file, which was ideally
    'suited to what I needed.

    'NOTES
    'I have only provided the necessary project files with the zip. This keeps
    'the size of the zip files down to a minimum and enables me to upload more
    'project files to my site.
    'I hope you find the project useful in whatever you are programming. I
    'have tried to write a small explanation of what each line of code does
    'in the project, although most of it is pretty simple to understand.
    'If you find any bugs in the code then please don't hesitate to email me
    'and I will get back to you as soon as possible. If you need help on a
    'different matter concerning Visual Basic then please email me as well,
    'as I like to hear from people and hear what they are programming.
    'My email address is:
    'Brian@brianharper.demon.co.uk
    'My web site is:
    'http://www.brianharper.demon.co.uk/
    'Please visit my web site and find many other useful projects like this.
End Sub
Contribution submitted by Oscar Di Criscenzo
What are Voice Assistants on Windows?
Voice assistant applications can take advantage of the Windows ConversationalAgent APIs to achieve a complete voice-enabled assistant experience.
Voice Assistant Features
Voice agent applications can be activated by a spoken keyword to enable a hands-free, voice driven experience. Voice activation works when the application is closed and when the screen is locked.
In addition, Windows provides a set of voice-activation privacy settings that give users control of voice activation and above-lock activation on a per-app basis.
After voice activation, Windows will manage multiple active agents properly and notify each voice assistant if they are interrupted or deactivated. This allows applications to manage interruptions and other inter-agent events properly.
How does voice activation work?
The Agent Activation Runtime (AAR) is the ongoing process in Windows that manages application activation on a spoken keyword or button press. It starts with Windows as long as at least one voice-activated application is registered on the system. Applications interact with AAR through the ConversationalAgent APIs in the Windows SDK.
When the user speaks a keyword, the software or hardware keyword spotter on the system notifies AAR that a keyword has been detected, providing a keyword ID. AAR in turn sends a request to the Background Service to start the application with the corresponding application ID.
The first time a voice activated application is run, it registers its app ID and keyword information through the ConversationalAgent APIs. AAR registers all configurations in the global mapping with the hardware or software keyword spotter on the system, allowing them to detect the application's keyword. The application also registers with the Background Service.
Note that this means an application cannot be activated by voice until it has been run once and registration has been allowed to complete.
Receiving an activation
Upon receiving the request from AAR, the Background Service launches the application. The application receives a signal through the OnBackgroundActivated life-cycle method in
App.xaml.cs with a unique event argument. This argument tells the application that it was activated by AAR and that it should start keyword verification.
If the application successfully verifies the keyword, it can request to be brought to the foreground. When this request succeeds, the application displays its UI and continues its interaction with the user.
AAR still signals active applications when their keyword is spoken. Rather than signaling through the life-cycle method in
App.xaml.cs, though, it signals through an event in the ConversationalAgent APIs.
The keyword spotter that triggers the application to start achieves low power consumption by using a simplified keyword model. This allows the keyword spotter to be "always on" without a high power impact, but it also means the spotter will likely produce a high number of "false accepts," where it detects a keyword even though none was spoken. This is why the voice activation system launches the application in the background: to give the application a chance to verify that the keyword was spoken before interrupting the user's current session. AAR saves the audio from a few seconds before the keyword was spotted and makes it accessible to the application, which can run a more reliable keyword spotter over the same audio.
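The background-activation flow described above might be sketched in an app's App.xaml.cs roughly as follows. This is an illustrative sketch, not a complete sample: ConversationalAgentSession is the entry point the ConversationalAgent APIs expose, while VerifyKeywordAsync stands in for the app's own, more reliable keyword verifier and is hypothetical:

```csharp
// Sketch: handling a voice activation signal from AAR in App.xaml.cs.
protected override async void OnBackgroundActivated(BackgroundActivatedEventArgs args)
{
    // The session object is how the app communicates back to AAR.
    var session = await ConversationalAgentSession.GetCurrentSessionAsync();

    // Run the app's own verifier over the audio AAR saved from just before
    // the keyword was spotted (VerifyKeywordAsync is a hypothetical helper).
    bool confirmed = await VerifyKeywordAsync(session);

    if (confirmed)
    {
        // Ask to come to the foreground; on success, show UI and continue
        // the interaction with the user.
        await session.RequestForegroundActivationAsync();
    }
}
```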
import { Color } from './Color';
import { FontVariant } from './FontVariant';
import { LineMode } from './LineMode';
import { LineStyle } from './LineStyle';
import { LineType } from './LineType';
import { LineWidth } from './LineWidth';
import { TextLine } from './TextLine';
import { TextProperties } from './TextProperties';
import { TextTransformation } from './TextTransformation';
import { Typeface } from './Typeface';
describe(TextProperties.name, () => {
let properties: TextProperties;
beforeEach(() => {
properties = new TextProperties();
});
describe('background color', () => {
it('return undefined by default', () => {
expect(properties.getBackgroundColor()).toBeUndefined();
});
it('return previously set background color', () => {
const testColor = Color.fromRgb(1, 2, 3);
properties.setBackgroundColor(testColor);
expect(properties.getBackgroundColor()).toBe(testColor);
});
});
describe('color', () => {
it('return undefined by default', () => {
expect(properties.getColor()).toBeUndefined();
});
it('return previously set color', () => {
const testColor = Color.fromRgb(1, 2, 3);
properties.setColor(testColor);
expect(properties.getColor()).toBe(testColor);
});
});
describe('font name', () => {
it('return undefined by default', () => {
expect(properties.getFontName()).toBeUndefined();
});
it('return previously set font name', () => {
const testFontName = 'someFont';
properties.setFontName(testFontName);
expect(properties.getFontName()).toBe(testFontName);
});
});
describe('font size', () => {
const testFontSize = 23;
it('return default font size', () => {
expect(properties.getFontSize()).toBe(12);
});
it('return previously set font size', () => {
properties.setFontSize(testFontSize);
expect(properties.getFontSize()).toBe(testFontSize);
});
it('ignore invalid value', () => {
properties.setFontSize(testFontSize);
properties.setFontSize(-42);
expect(properties.getFontSize()).toBe(testFontSize);
});
});
describe('font variant', () => {
it('return Normal by default', () => {
expect(properties.getFontVariant()).toBe(FontVariant.Normal);
});
it('return previously set font variant', () => {
const testFontVariant = FontVariant.SmallCaps;
properties.setFontVariant(testFontVariant);
expect(properties.getFontVariant()).toBe(testFontVariant);
});
});
describe('overline', () => {
const testLineColor = Color.fromRgb(1, 2, 3);
const testLineMode = LineMode.SkipWhiteSpace;
const testLineStyle = LineStyle.Wave;
const testLineType = LineType.Double;
const testLineWidth = 13.37;
const expectedLine: Readonly<TextLine> = {
color: testLineColor,
mode: testLineMode,
style: testLineStyle,
type: testLineType,
width: testLineWidth,
};
it('return undefined by default', () => {
// Assert
expect(properties.getOverline()).toBeUndefined();
});
it('return line with defaults if no parameters are passed', () => {
// Arrange
const expectedDefaultLine: TextLine = {
color: 'font-color',
mode: LineMode.Continuous,
style: LineStyle.Solid,
type: LineType.Single,
width: LineWidth.Auto,
};
// Act
properties.setOverline();
// Assert
expect(properties.getOverline()).toEqual(expectedDefaultLine);
});
it('return previously set line', () => {
// Act
properties.setOverline(
testLineColor,
testLineWidth,
testLineStyle,
testLineType,
testLineMode
);
// Assert
expect(properties.getOverline()).toEqual(expectedLine);
});
it('ignore invalid value', () => {
// Arrange
properties.setOverline(
testLineColor,
testLineWidth,
testLineStyle,
testLineType,
testLineMode
);
// Act
properties.setOverline(testLineColor, -0.1);
// Assert
expect(properties.getOverline()).toEqual(expectedLine);
});
it('remove previously set overline', () => {
// Arrange
properties.setOverline(
testLineColor,
testLineWidth,
testLineStyle,
testLineType,
testLineMode
);
// Act
properties.removeOverline();
// Assert
expect(properties.getOverline()).toBeUndefined();
});
});
describe('text transformation', () => {
it('return `None` by default', () => {
expect(properties.getTextTransformation()).toBe(TextTransformation.None);
});
it('return previously set text transformation', () => {
const testTextTransformation = TextTransformation.Uppercase;
properties.setTextTransformation(testTextTransformation);
expect(properties.getTextTransformation()).toBe(testTextTransformation);
});
});
describe('#getTypeface', () => {
it('return `Normal` by default', () => {
expect(properties.getTypeface()).toBe(Typeface.Normal);
});
it('return previously set typeface', () => {
const testTypeface = Typeface.BoldItalic;
properties.setTypeface(testTypeface);
expect(properties.getTypeface()).toBe(testTypeface);
});
});
describe('underline', () => {
const testLineColor = Color.fromRgb(1, 2, 3);
const testLineMode = LineMode.SkipWhiteSpace;
const testLineStyle = LineStyle.Wave;
const testLineType = LineType.Double;
const testLineWidth = 13.37;
const expectedLine: Readonly<TextLine> = {
color: testLineColor,
mode: testLineMode,
style: testLineStyle,
type: testLineType,
width: testLineWidth,
};
it('return undefined by default', () => {
// Assert
expect(properties.getUnderline()).toBeUndefined();
});
it('return line with defaults if no parameters are passed', () => {
// Arrange
const expectedDefaultLine: TextLine = {
color: 'font-color',
mode: LineMode.Continuous,
style: LineStyle.Solid,
type: LineType.Single,
width: LineWidth.Auto,
};
// Act
properties.setUnderline();
// Assert
expect(properties.getUnderline()).toEqual(expectedDefaultLine);
});
it('return previously set line', () => {
// Act
properties.setUnderline(
testLineColor,
testLineWidth,
testLineStyle,
testLineType,
testLineMode
);
// Assert
expect(properties.getUnderline()).toEqual(expectedLine);
});
it('ignore invalid value', () => {
// Arrange
properties.setUnderline(
testLineColor,
testLineWidth,
testLineStyle,
testLineType,
testLineMode
);
// Act
properties.setUnderline(testLineColor, -0.1);
// Assert
expect(properties.getUnderline()).toEqual(expectedLine);
});
it('remove previously set underline', () => {
// Arrange
properties.setUnderline(
testLineColor,
testLineWidth,
testLineStyle,
testLineType,
testLineMode
);
// Act
properties.removeUnderline();
// Assert
expect(properties.getUnderline()).toBeUndefined();
});
});
});
Transferring iOS app with iCloud enabled
When going to transfer my iOS app to another developer, I got this message
You can't transfer this app because of the following reasons:
iCloud enabled
You can only transfer apps that aren’t iCloud enabled.
Deleting and Transferring Apps Documentation
The documentation states:
Make sure the app uses only technology and content that can be transferred.
No version of the app can use an iCloud entitlement.
Since a version of my app used iCloud, is there literally no way I can transfer it? If there is a way, how should I proceed?
How about handing over the developer account as well?
The app must be deleted and re-created with a new SKU/Bundle ID.
Remove the app from sale by going to Pricing > Select Territories > Deselect All
Delete the app under More (to the right of Prerelease, Pricing etc) > Delete App
Create the app under the developer account as a new app with the same name with a new SKU/Bundle ID.
This will delete any reviews/ratings, gamecenter data, iCloud data, and any other data linked to that app. You'll have to recreate any in app purchases you had.
Does it mean that the previous users can update the app? Thanks
Unfortunately no. Users must re-download the app as if it were new.
I'm actually going to stop using iCloud for this very lame reason. If you're ever thinking of transferring the app eventually (buy-out?) then don't use iCloud to begin with.
Had to remove iCloud Keychain usage from my app before I publish it for this very reason. Lame.
If you have a popular app that has been around for a long time this will destroy your search rankings and reviews.
Should we still do this in 2021 for a solution? Is there still no other solution?
This is so bad! Why does this criteria exist? What are the security reasons?
WARNING: Never use iCloud Entitlements in an app. It's not worth it because it makes your app untransferable and therefore unsaleable forever!
I had enabled iCloud entitlements in some previous build to play around with NSUbiquitousKeyValueStore, and now I can never transfer that app... This is so so so very bad. I have an Android version of the app, ran into no issues with Google Play.
For June 2022+: Apple has updated the rules in 2022. Now you can transfer your app the regular way even if it uses iCloud. They write about limitations in their documentation now in "Apps Using iCloud" paragraph.
Same here. I was just using NSUbiquitousKeyValueStore to allow the user to share some settings across devices. Now all my users who paid for the app will not receive any more updates, my 4.5 star ratings will be gone, etc. I am super frustrated and this will cost me a lot of money.
I have written an article about this on medium.com, so feel free to share it in your professional networks if you feel this could help fellow developers to not run into this trap.
The demand for data extraction from websites is growing. We often need to collect data from websites when working on data-related tasks like price monitoring, business analytics, or news aggregation. Copying and pasting information line by line, however, has become outdated. In this blog, we'll show you how to accomplish web scraping using Python so you can become an "insider" in scraping data from websites.
Web scraping is a technique for extracting huge amounts of data from websites. But why is it necessary to acquire such large amounts of data from websites? Let's have a look at several web scraping applications to learn more about this:
Web scraping is used by many firms that utilize email as an advertising medium to obtain email IDs and then send mass emails.
To figure out what's popular, web scraping is utilized to extract information from social media platforms like Twitter.
Web scraping is a technique for gathering large amounts of data (statistics, general information, temperature, and so on) from web pages, which is then processed and used in surveys or R&D.
Details about job vacancies and interviews are gathered from several websites and then compiled in one spot for easy access by the user.
Flexibility: Python is an easy-to-learn language that is very productive and dynamically typed. As a result, people can easily update their code and keep up with the pace of website changes.
Powerful: Python comes with a huge number of mature libraries. Beautifulsoup4 may, for example, assist us in retrieving URLs and extracting data from web pages. By allowing web crawlers to replicate human browsing behavior, Selenium could help us escape some anti-scraping tactics. Furthermore, re, numpy, and pandas may be able to assist us in cleaning and processing the data.
Let us start with web scraping using Python.
Web scraping is a method for converting unstructured HTML data into structured data in a spreadsheet or database. Some large websites, such as Airbnb or Twitter, make APIs available so that developers can access their data. An API (Application Programming Interface) is a way for two applications to communicate with one another. For most users, using an API is the most efficient way to get data from a website.
The majority of websites, however, lack API services. Even if they provide an API, the data you can receive may not be what you need. As a result, writing a Python script to build a web crawler is a more powerful and flexible option.
1. In this blog, we will scrape reviews from Yelp. BeautifulSoup from bs4 and urlopen from urllib.request will be used. These two libraries are frequently used in Python web crawler development. The first step is to import these two modules into Python so that we can make use of their functionality.
2. Extracting the HTML from the web page
We need to get information from "https://www.yelp.com/biz/milk-and-cream-cereal-bar-new-york?osq=Ice+Cream". Let's start by storing the URL in a variable named URL. Then, using the urlopen() function from urllib.request, we can retrieve the content at this URL and save the HTML in "ourUrl."
We will then apply BeautifulSoup to parse the page.
We could use a function called prettify() to clean the raw data and output it to view the hierarchical structure of HTML in the "soup" now that we have the "soup," which is the raw HTML for this website.
The next step is to locate the HTML reviews on this page, extract them, and save them. A unique HTML "ID" would be assigned to each element on the web page. We'd have to INSPECT them on a web page to check their ID.
We could examine the HTML of the reviews after clicking "Inspect element" (or "Inspect" depending on the browser).
The reviews in this case can be found underneath the tag "p." To discover the parent node of these reviews, we'll first use the find_all() function. Then, in a loop, we find all elements with the tag "p" under the parent node. We'll put all of the "p" elements in an empty list called "review" as we find them.
We now have access to all of the reviews on that page. Let's check how many reviews we've gotten thus far.
You should notice that some unnecessary text remains, such as "<p lang='en'>" at the start of each review, "<br/>" in the middle of the reviews, and "</p>" at the end of each review. A single line break is indicated by "<br/>". We won't require any line breaks in the reviews, so they'll be removed. Also, "<p lang='en'>" and "</p>" are the opening and closing HTML tags, respectively, and must be removed.
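The extract-and-clean step above can be sketched without any third-party dependencies using Python's standard-library html.parser (the post itself uses BeautifulSoup; this is just an illustrative alternative). The sample HTML below is invented for illustration and is much simpler than real Yelp markup:

```python
# A minimal sketch of extracting <p> review text and dropping tags
# like <br/> using only Python's standard library. The sample HTML
# is made up for illustration -- real Yelp markup differs.
from html.parser import HTMLParser

class ReviewParser(HTMLParser):
    """Collects the text content of every <p> element."""
    def __init__(self):
        super().__init__()
        self.in_p = False
        self.reviews = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.in_p = True
            self.reviews.append("")

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_p = False

    def handle_data(self, data):
        # Only text inside a <p> is kept; tags such as <br/> are skipped.
        if self.in_p:
            self.reviews[-1] += data

html = ("<div><p lang='en'>Great ice cream!<br/>Will return.</p>"
        "<p lang='en'>Too sweet for me.</p></div>")
parser = ReviewParser()
parser.feed(html)
print(parser.reviews)  # ['Great ice cream!Will return.', 'Too sweet for me.']
```

With BeautifulSoup you would typically reach the same result more directly via each element's get_text() method, which strips tags for you.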
If you are in search of a simple web scraping process, you can contact ReviewGators today!
|
OPCFW_CODE
|
(function() {
const Shard = { //Shard bot
init: function() {
console.log("Setting up Shard...");
console.log("Checking setup...");
let check_list = ['send_message'];
let flag = false;
for (const name of check_list) {
if (!this.find(name))
flag=true;
}
if (flag)
console.log("Could not start Shard");
else {
console.log("Starting Shard...");
EVENTS.bind('on-message',this.parse.bind(this)); //Extra .bind to keep this bound to Shard
}
}, //Initialization and debugging
find: function(name) {
if (this[name]) {
console.log("Found "+name);
return this[name];
}else {
console.log("Could not find "+name);
return undefined;
}
},
commands: {
'-echo':{auth:'user',run:function(payload) {
this.send_message({text:payload.args.join(' '),room:payload.origin.room});
}},
'-setrank':{auth:'admin',run:function(payload) {
let name = payload.args[0];
let rank = payload.args[1];
if (this.setRank(name,rank))
this.send_message({text:"Set rank of "+name+" to "+rank,room:payload.origin.room});
else
this.send_message({text:"Failed to set rank of "+name+" to "+rank,room:payload.origin.room});
}},
'-mute':{auth:'admin',run:function(payload) {
let nick = payload.args[0];
let time = payload.args[1];
this.muted.push(nick);
setTimeout(() => {this.muted = this.muted.filter(n => n !== nick); this.send_message({text:nick + " has been unmuted",room:payload.origin.room});},time*1000); //Remove nick from the muted list after <time> seconds
this.send_message({text:"Muted "+nick+" for "+time+" seconds",room:payload.origin.room});
}},
'-help':{auth:'user',run:function(payload) {
this.send_message({text:["Type one of the commands below to use Shard",
"User Commands:",
"-help : Shows the help page",
"-about : About Shard",
"-echo <msg> : Echos the message",
"",
"Admin Commands:",
"-mute <time> : Prevents user from using Shard for <time> seconds",
"-setrank <nick> <rank> : Sets the rank of a nick"].join('\n'),room:payload.origin.room});
}},
'-about':{auth:'user',run:function(payload) {
this.send_message({text:"Shard is a general purpose chatroom bot.\nFor commands, type -help\nSoure code can be found here: https://github.com/Blackwerecat/Shard",room:payload.origin.room});
}}
},
muted: [],
ranks: {}, //Authorization
hierarchy: ['user','admin'],
setRank: function(name,rank) {
if (this.hierarchy.includes(rank)){
this.ranks[name]=rank;
return rank;
}else
return undefined;
},
isAutherized: function(rank,auth) {
let rank_val = this.hierarchy.indexOf(rank);
let auth_val = this.hierarchy.indexOf(auth);
if (rank_val>-1 && auth_val>-1)
return rank_val>=auth_val;
else
return undefined;
},
parse: function(payload) {
let nick = payload.nick;
let text_array = payload.text.split(' ');
let cmd = text_array.shift();
if (cmd.startsWith('-')) {//prefix
let command = this.commands[cmd];
if (command) {
if (!this.ranks[nick]) // if the user doesn't have a rank
this.ranks[nick] = 'user';
if (!this.muted.includes(nick) && this.isAutherized(this.ranks[nick],command.auth))
command.run.bind(this)({args:text_array,origin:payload});
else
this.send_message({text:"You do not have permission to use this command",room:payload.room});
}else
this.send_message({text:"Could not find command: "+cmd,room:payload.room});
}
}
};
window.EVENTS = EVENTS; //global
window.shard = Shard;
})();
|
STACK_EDU
|
I have a crate that contains some data types for my project. They're serialized with serde to JSON. I've noticed that with this crate in particular, rust is generating a
target/debug/incremental/<crate-name>/<uuid>/query-cache.bin file which is approximately 5GB whenever I do a build. This is filling up my hard drive pretty quickly!
The complete source code of this crate is defined here.
For comparison, the
query-cache.bin files for other crates in this project are on the order of 1MB. Am I doing something particularly dumb in this crate which is causing it to grow explosively in size? This is rustc 1.67 under macOS 12.6.
On an unrelated topic,
pub struct Coins(pub u32); seems like a good candidate for #[repr(transparent)].
The only thing catching my attention is the amount of monomorphization. Maybe that's triggering an explosion of code.
But that is pure speculation.
#[repr(transparent)] is unnecessary as this codebase doesn't do anything unsafe on
Coins. It's also not necessary to enforce calling convention within Rust code, as single field structures with a Rust layout are essentially
repr(transparent) (Structs and tuples - Unsafe Code Guidelines Reference) when interacting with Rust code.
Note that this isn't guaranteed, while for the current compiler single field structures are essentially
#[repr(transparent)], this may change in a future version of Rust - so this shouldn't be relied on by
unsafe code. With that said, I doubt Rust developers would make a change that would make calling convention for cases like this worse.
Very true. The reason we don't give any guarantees for the default
repr is that we want to make sure that we can always give safe code -- which doesn't care -- the fastest possible thing. We want the freedom to switch next year to something that hasn't even been invented yet.
(Now, for a single-field struct of an ordinary type like
u32, it's unlikely that anything clever can be invented, but that's the general rule.)
I assumed from the long list of derives that the author intends
Coins to be functionally the same as
u32. Is my assumption incorrect?
On the potentially severe end of mitigations, you could turn off incremental compilation.
If you want that, you'd write it as:
type Coins = u32;
and in that case indeed it would be functionally same as
u32, and it would have identical representation.
If you make it a wrapper type (pub struct Coins(pub u32);),
then that's because you don't want it to be functionally identical to
u32, and then typically there is no need to worry about whether representations are identical unless you're doing something low-level that requires it.
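To make the alias-versus-newtype distinction concrete, here is a small self-contained sketch (the CoinsAlias name is ours, chosen so both forms can coexist in one file):

```rust
// Type alias: just another name for u32, fully interchangeable with it.
type CoinsAlias = u32;

// Newtype wrapper: a distinct type. The compiler stops you from mixing
// it with a plain u32 unless you unwrap the inner field via .0.
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct Coins(pub u32);

fn main() {
    let a: CoinsAlias = 10;
    let c = Coins(10);

    // The alias accepts u32 arithmetic directly...
    let a2: CoinsAlias = a + 5;

    // ...while the newtype requires going through the inner field.
    let c2 = Coins(c.0 + 5);

    assert_eq!(a2, 15);
    assert_eq!(c2, Coins(15));
    println!("alias: {a2}, newtype: {c2:?}");
}
```

The deliberate friction of the newtype is the point: it prevents accidentally passing an arbitrary u32 where a Coins value is expected.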
I was able to get my crate down from 45,000 lines of generated code to 31,000 (per
cargo expand) by removing the use of the
enum_iterator crate on a particularly large enum -- it generates an exponential amount of code for each enum case! The rest seems to be
serde boilerplate. This crate on its own generates more code than the rest of my project combined.
|
OPCFW_CODE
|
Ontology meeting 2015-02-12
Attendees: Harold, Paola, David H, David OS, Heiko, Tanya
- 1 Enzyme Binding
- 2 Disjoints in ChEBI
- 3 Regulation Chains (punt to next week when Judy is attending)
- 4 Release process: relaxing equivalent class (intersections) to subclassof
- 5 Defining has_regulation_target and related relations
- 6 Jira tickets
- 7 Follow-up: Sources of db_xrefs
- 8 results_in_fusion_of
We have decided that we will include families. How we define families will need to be addressed. Is it only if there is more than one similar gene in the same organism? The bottom line here is that a protein is not binding the activity, it is binding the molecule.
Disjoints in ChEBI
David OS to report back on today's meeting with Janna.
Punt until Chris is around. We will set up a Jenkins job to send to them about disjoints. We can make a branch file with high-level disjoints that we can edit ourselves and test. We can then feed this to them. Perhaps we can set up a Jenkins job to run ChEBI and find violations that will become obvious to address.
Regulation Chains (punt to next week when Judy is attending)
David H will go over Nikolai's results chaining regulates relations and introducing direct and indirect regulation.
- Related (may want to punt to next week):
* Inference resulting from implementing regulation chain reasoning
* results_in relations - how does this fit? Currently no axiomatisation linking this to regulates. Shouldn't there be?
Release process: relaxing equivalent class (intersections) to subclassof
During our current release process, we do not generate subClassOf axioms (relationships) from EquivalentClass axioms (intersections). This means that they need to be added by hand to show up. (Failure to do this has led to a few SF requests that really should be unnecessary e.g. https://sourceforge.net/p/geneontology/ontology-requests/11465/).
I assume that this is not done because we don't want all the whole zoo of relations we use in equivalent class axioms to show up in the regular GO release files. But these two issues really should be decoupled. They already are for other OBO ontologies using OORT & owltools for their release process. (see FBbt, CL...).
Please could fixing this be made a high priority?
The reasons might be that we have a step that removes relations so that they do not end up in the processed files. Can we have a release that only filters out the relations that we don't want, but keeps the ones that are allowed? We will ask Chris about this.
(see email subject: [go-ontology] Proposal for has regulation target)
- regulation MF has_regulation_target(UniProtKB:protein Y) protein X regulates the kinase activity of protein Y
regulates o mediated_by -> has_regulation_target
- regulation BP has_regulation_target(UniProtKB:protein Y) protein X regulates the localization of protein Y to the membrane
- regulation BP regulates_transport_of(UniProtKB:protein Y) protein X regulates the localization of protein Y to the membrane
regulates o 'transports or maintains localization of' -> regulates_transport_of:
- Review other uses for has_regulation_target and define relevant regulates over relation
Need to review the situation with Chris. (Paola has a list of tickets that need special attention. For my own reference, they're in an email thread "GO tickets in the EBI Jira instance".)
Can we clean these up? We should all look at our items.
Much discussion about synonyms. AI: David will ping Judy about synonyms.
Follow-up: Sources of db_xrefs
Action items from last meeting? http://wiki.geneontology.org/index.php/Ontology_meeting_2015-02-05#Sources_of_db_xrefs
results_in_fusion_of
Should this be a subproperty of results_in_organisation_of?
More broadly - can we plan some time to review the position of results_in_fu relations in the hierarchy? I suspect more can be pushed under 'has participant' &/or results in organisation of.
|
OPCFW_CODE
|
The BEAM has means to load code at runtime. But that's all it does: load arbitrary code.
All the additional requirements like replacing running processes, dependency management and such are afaik still handled at compile time when building the release to update to. That’s the data in the appup file mentioned in places.
These points may not matter for plugins in a CMS. At best, loaded files are completely independent of each other and have no compile-time dependencies on each other, but there's nothing in Elixir or on the BEAM to enforce that at runtime.
As a thought experiment, there might be a convoluted and risky way. I’m not remotely suggesting you actually do this, but I’d be interested to hear from knowledgeable folk here about what I have right here (& not).
You can add dependencies at runtime using Mix.install (as commonly used in LiveBook).
But Mix.install can’t be used within a Mix project (it works by dynamically creating a new one). So you’d need to do something like host and run elixir scripts, maybe similarly to LiveBook (I don’t know how that works). But the packages installed within those scripts wouldn’t be available to regular Phoenix projects, which obviates the point. You could perhaps work around that by running scripts which each set up their own Phoenix-like pipeline (eg. as a starting point example phoenix.exs · GitHub). Or (more realistically perhaps) create a custom plugin API allowing the CMS’s Phoenix process to call on these added scripts functionality via message passing.
[sorry @dimitarvp , didn’t mean to step on your toes]
For installing a custom template, developers can only replace certain pre-specified files.
By the way, it is not a real replacement: the CMS just loads some pre-specified files from their own Hex package. I save their package module name in the database and in state.
The state is used to look up the developer's module name on every page load. The database stores the developer's module name and related information; for example, if the state process gets into trouble, it restarts itself and reloads that information from the database.
I have written about my idea:
It is like Joomla and WordPress: the core of the CMS has no responsibility for bugs created by developers.
I’d suggest diving deep into how hot code updates work. Doing a hot code update should involve all the low level parts you’d need to do for handling a plugin architecture using elixir/erlang code. You probably won’t find much like a documentation or tutorial about this given this is not a usual way people would setup an elixir project (for all the drawback enumerated in this and other threads about elixir cms’s/plugins here on the forum).
Hmm, I have researched this several times; as you said, I have not found a reference I can understand. The only way I can see to do my task is not hot code updating or loading at runtime, but creating different nodes. I have not tested this approach yet, just some simple Phoenix apps.
By the top plugin I mean MsihkaSocial: I added login icons for GitHub and Google, which let users log in and register with social networks, without changing my CMS core. So it is basically a Hex package, and users should be able to install it from the graphical admin dashboard. For now, though, they are forced to add it to the mix file and configure it in application.ex manually.
In this section, by a component I mean a complete project like a restaurant manager or a store builder like Shopify.
Hence, developers can create a system like Shopify, add it to my CMS, and call some functions, for example my user manager, to connect their project to my CMS.
It should also be noted that their components may have plugins of their own. Still, for now developers are forced to put the complete project into my CMS, stop the server, and restart it again.
If I do not add a graphical admin dashboard that lets users install plugins and components easily, clients and developers will not be attracted to my CMS. Please forget Joomla or WordPress for a moment; just imagine I am creating a project that does not let users easily manage their own facilities.
Because of these problems, and because hot code reloading is hard to understand without good documentation, I want to choose creating different nodes, if that approach has no problems.
I have been developing my CMS with Phoenix LiveView and Elixir for more than 16 months; I cannot cancel this open source project now. It will support the English language in 4 months, and so far I can say no one has created a CMS like mine with Elixir. It has many APIs and its dashboard is very user friendly.
So I have to find a way to add some standard features to persuade users that it can be a good alternative to Joomla or WordPress. Even if it is not, we still offer some standard features, and in a different paradigm we can do better.
If you allow them to load Elixir code, then your CMS security is affected, because then they can run any arbitrary code they want, even if you only allow html templates. Please remember that we can run Elixir code on templates.
To be clear your CMS will become a target of attackers via the plugin system, just like in the Wordpress and Joomla plugins.
An attacker can be what you think it is a legit user or can trick a legit user to install the plugin, because the plugin is awesome and free, and/or is a version of a paid plugin. Does this ring bells from Wordpress and Joomla? If it doesn’t then I am really concerned that you aren’t aware of such issues on those platforms.
I believe that when you give developers a space to start their projects faster, because many essential pieces have already been built for them, it becomes an excellent base for building things for clients: they do not need to implement everything themselves and can focus on a specific feature.
After years of the CMS’s development, it will become a framework.
In the end, I have another question: why were Phoenix and Phoenix LiveView created, when anything anybody builds can become a security issue?
Even if plugins create security issues, those problems can help make the CMS safer than before, and I will try to improve it.
Thank you for your comment, all the comments you send can help me to improve myself.
|
OPCFW_CODE
|
38 Open Source Graph Classification Software Projects
Free and open source graph classification code projects including engines, APIs, generators, and tools.
Awesome Graph Classification 4337 ⭐
A collection of important graph embedding, classification and representation learning papers with implementations.
Simgnn 450 ⭐
A PyTorch implementation of "SimGNN: A Neural Network Approach to Fast Graph Similarity Computation" (WSDM 2019).
Graph_nn 296 ⭐
Graph Classification with Graph Convolutional Networks in PyTorch (NeurIPS 2018 Workshop)
Appnp 288 ⭐
A PyTorch implementation of "Predict then Propagate: Graph Neural Networks meet Personalized PageRank" (ICLR 2019).
Seal Ci 187 ⭐
A PyTorch implementation of "Semi-Supervised Graph Classification: A Hierarchical Graph Perspective" (WWW 2019)
Graph_datasets 215 ⭐
A Repository of Benchmark Graph Datasets for Graph Classification (31 Graph Datasets In Total).
Ppnp 235 ⭐
PPNP & APPNP models from "Predict then Propagate: Graph Neural Networks meet Personalized PageRank" (ICLR 2019)
Malllabiisc Asap 71 ⭐
AAAI 2020 - ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations
Ggnn.tensorflow 41 ⭐
Tensorflow implementation of Gated Graph Neural Network for Source Code Classification
Dgcnn 48 ⭐
A PyTorch implementation of DGCNN based on AAAI 2018 paper "An End-to-End Deep Learning Architecture for Graph Classification"
Benedekrozemberczki Feather 30 ⭐
The reference implementation of FEATHER from the CIKM '20 paper "Characteristic Functions on Graphs: Birds of a Feather, from Statistical Descriptors to Parametric Models".
Leviborodenko Dgcnn 20 ⭐
Clean & Documented TF2 implementation of "An end-to-end deep learning architecture for graph classification" (M. Zhang et al., 2018).
Graph Embedding Techniques 27 ⭐
It provides some typical graph embedding techniques based on task-free or task-specific intuitions.
Pdn 40 ⭐
The official PyTorch implementation of "Pathfinder Discovery Networks for Neural Message Passing" (WebConf '21)
Jinheonbaek Gmt 43 ⭐
Official Code Repository for the paper "Accurate Learning of Graph Representations with Graph Multiset Pooling" (ICLR 2021)
Phc Gnn 27 ⭐
Implementation of the Paper: "Parameterized Hypercomplex Graph Neural Networks for Graph Classification" by Tuan Le, Marco Bertolini, Frank Noé and Djork-Arné Clevert
Orbitalfeatures 10 ⭐
A sparsity aware implementation of "Biological Network Comparison Using Graphlet Degree Distribution" (Bioinformatics 2007)
Ashleve Graph_classification 12 ⭐
Training GNNs with PyTorch Lightning: Open Graph Benchmarks and image classification from superpixels
Ehgnn 18 ⭐
Official Code Repository for the paper "Edge Representation Learning with Hypergraphs" (NeurIPS 2021)
|
OPCFW_CODE
|
There are many ways to keep your computer secure. Your own behavior affects it a lot. But there are also many tools that can improve your security even if that wasn't their initial purpose. Melissa and Sean described how you can use separate browsers to lower the risk of human errors. Virtualization is another technology that can improve security as a side effect. It's like the separate browsers idea, but takes it a lot further. Read on to learn more.
Virtualization in computing means to simulate something with software. What we talk about here is creating a whole virtual computer inside a real computer. It's complex under the hood, but there are luckily easy products that can be used by almost anyone. This technology is, by the way, used extensively in the software industry. Huge numbers of virtual computers are used to process data or test software. A large portion of the Internet is also provided by virtual servers.
But how can this improve my security? Most malware is made for profit, and interfering with your on-line banking is a common payload. But what if you run your on-line banking on a separate computer? Buying another machine costs money and consumes space, but that can be solved by using a virtual computer instead. That virtual machine would only be used for banking, nothing else. A malware infection could happen if your guard is down and you open a malicious file in the mail. Or surf to a site which is infected with a drive-by download. Both cases could infect your real computer, but the malware can't see what you are doing with the bank inside the virtual machine. One could also use the opposite strategy. Use a virtual machine when doing something risky, like looking for downloads on shady servers. A previously made snapshot can easily be restored if something bad hits the virtual machine.
An additional benefit is that this gives you an excellent opportunity to play around with different operating systems. Install Linux/Windows/OS X just to become familiar with them. Do you have some hardware which driver won’t work in your new machine? No problem, install a virtual machine with an older operating system.
OK, sounds like a good idea. But can I do it? Here’s what it takes.
- You need a fairly new and powerful computer. Especially the amount of RAM memory is critical. You are usually OK with 8 GB, but more is desirable. This is probably a bad idea if you have less. (This depends a lot on what operating system you are running and what you want to run in the virtual machines.)
- You need to download and install a virtualization product. Two good alternatives are VirtualBox by Oracle (free) and VMWare Player by VMWare (free for personal use).
- You need to have an installation media for the operating system you want to run in the virtual machine. This is easy for Linux as you can download the installer freely from the net. Hint: Google: download linux.
- You need to know how to install an operating system. This is not as nerdy as it sounds. Modern operating systems have easy installers that most people are able to use. And don’t worry if you make a mistake. It’s just a virtual machine and you can go back to the beginning at any time without losing anything (except some time).
I’m not going to provide detailed instructions for this. That depends too much on which virtualization product and operating system you use. And it would beside that be like reinventing the wheel. You will find plenty of step-by-step instructions by Googling for what you want to do, for example “install Linux in VirtualBox”.
But for your convenience, here’s an overview of the process.
- Select one of the virtualization products and ensure that your computer meets its system requirements.
- Download and install the virtualization product.
- Ensure that you have an installation media for the operating system you want to use and any keycodes etc. that may be needed during installation. The media can be a physical disk or USB-memory, or a disk stored as an image file. The virtualization software can mount disk image files as a device in the virtual machine and there’s no need to burn a disk for this purpose.
- Now follow the instructions you found on the net. They will help you create the virtual machine, mount the installation media in it and go through the operating system installation.
- After this you can use the virtualization product's console to start the virtual machine when needed. It shows up full-screen or in a window depending on the settings. Inside it you can do what you want: install programs, surf the net, etc.
- For the banking virtual computer you just need to install the browser of your choice, make sure it’s updated and patched and make your bank the home page. Don’t install anything else unless it really is needed for the banking connection and don’t use this virtual machine for anything else.
- You can create multiple virtual machines, but be careful if you try to run them at once. Your computer may not have what it takes. As said, RAM memory is the critical resource here.
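The steps above can also be scripted. As a sketch, here is roughly what creating a dedicated banking VM looks like with VirtualBox's VBoxManage command-line tool; the VM name "banking", the OS type, the sizes, and the ubuntu.iso file name are placeholders of our own choosing, and most people will simply click through the same steps in the GUI instead:

```shell
# Create and register a new VM named "banking" (names/sizes are examples).
VBoxManage createvm --name "banking" --ostype Ubuntu_64 --register
VBoxManage modifyvm "banking" --memory 2048 --cpus 2

# Create a virtual disk (~20 GB) and attach it via a SATA controller.
VBoxManage createhd --filename banking.vdi --size 20000
VBoxManage storagectl "banking" --name SATA --add sata
VBoxManage storageattach "banking" --storagectl SATA --port 0 \
  --device 0 --type hdd --medium banking.vdi

# Attach the operating system installer image, then boot and install.
VBoxManage storageattach "banking" --storagectl SATA --port 1 \
  --device 0 --type dvddrive --medium ubuntu.iso
VBoxManage startvm "banking"
```

After installation, detach the installer image and the VM boots from its virtual disk like any ordinary computer.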
Edited to add: It is of course a good habit to exercise the same basic security measures inside virtual machines as in real computers. Turn on the operating system's update function, install your anti-virus program and make sure your browser is kept up to date. Doing just banking with the virtual machine reduces the risk a lot, but this is good advice even in that case. And needless to say, the virtual machine's armor is essential if you use it for high-risk tasks.
|
OPCFW_CODE
|
Special Keyframe Types
We’ve already learned about linear and easy ease keyframes, so now it’s time to look at some more exotic keyframe types – hold, auto bezier and roving. As we mentioned before, the graph editor has icon buttons to turn keyframes into any of these types (except roving).
Hold is a special keyframe type that means there is no transition at all. Instead, the property stays at the value of the hold keyframe, then when the next keyframe comes it jumps abruptly to the new value:
Hold keyframes are visualized as a rectangle, and like all keyframe types, both the left and the right half of a keyframe can be a hold (or not). When you turn a keyframe into a hold keyframe, by default only the right half (i.e. outgoing movement) will be set to hold. If you start adding more keyframes after a hold keyframe, both their left and right halves will become hold.
Hold keyframes have priority compared to the other keyframe types. So, if either side of a movement is a hold keyframe, the move will perform a hold, no matter what the keyframe on the other side says.
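The hold-versus-linear behavior described above can be sketched numerically. This toy Python function is our own illustration of the two interpolation modes, not anything taken from After Effects:

```python
def interpolate(k0, k1, t, hold=False):
    """Value at time t between keyframes k0=(t0, v0) and k1=(t1, v1).

    A hold keyframe keeps v0 until the next keyframe's time, then jumps
    abruptly to v1; a linear keyframe blends v0 toward v1 in proportion
    to the elapsed time."""
    (t0, v0), (t1, v1) = k0, k1
    if hold:
        return v0 if t < t1 else v1  # no transition, abrupt jump at t1
    frac = (t - t0) / (t1 - t0)      # fraction of the move completed
    return v0 + (v1 - v0) * frac

# Halfway between keyframes at t=0 (value 0) and t=10 (value 100):
print(interpolate((0, 0), (10, 100), 5))             # 50.0 (linear)
print(interpolate((0, 0), (10, 100), 5, hold=True))  # 0 (hold)
```

The linear variant is halfway through its value change at the halfway time, while the hold variant is still sitting at the first keyframe's value.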
Auto Bezier Keyframes
Next to the icon for Linear keyframes, the graph editor also provides a button for Auto Bezier keyframes. Auto Beziers are represented by a circle shape, and are used to make sure that the speed change from one move to the next is smooth. So far, you’ve learned that Easy Ease In and Easy Ease Out create a smooth start and stop for a movement. However, if you have keyframes in the middle of a longer move, making them Easy Ease would cause a stopover, since they will always do a smooth transition to and from zero speed. By contrast, Auto Bezier keyframes create a smooth speed change between the speed of the keyframe’s incoming move and the speed of the outgoing move. Here’s a comparison:
When you look at the dots indicating the position at each frame, you can see that Auto Bezier transitions smoothly from very tightly placed dots (i.e. slow movement) to bigger spaces between the dots (fast movement). In the Linear variant, this change happens abruptly, right at the middle keyframe. For the Easy Ease variant, you can see that when approaching the middle keyframe the dots are even closer together, instead of being further apart. This is because the Easy Ease stops at each keyframe, and in order to ease into this stop it first needs to slow down. This is even more obvious when we look at the speed graph. While the Auto Bezier curve is a smoothed version of the Linear curve, the Easy Ease curve looks very different, dipping down to zero speed at each stopover.
Also note that Auto Bezier is a term used for both temporal interpolation - what we’re talking about here - but also spatial interpolation. For more details on the latter, see the Spatial Interpolation section.
You’ve seen how Auto Bezier keyframes are useful for smoothing out speed changes. Often, you want to go one step further and create keyframes that don’t influence the timing at all, which is exactly what Roving keyframes are. We’ve talked so much about easing now that it’s easy to forget that the timing of an animation depends on two factors: how the keys are eased, and at what time they’re placed. While Auto Bezier keyframes adjust their easing to create smooth speed curves, they will never move forward or backward in time unless you move them manually. By contrast, Roving keyframes adjust their easing and their time to create perfectly seamless speed curves. In this example, all but the first and last keys are roving. When we adjust the easing of the first or last key, all the roving keyframes update accordingly:
This is super useful when your motion paths have highly complex shapes with tons of keyframes (say, something moving along the contours of your logo) but while you are easing the moves you don’t want to worry about all those keyframes. This video gives a quick overview of roving keyframes.
In the next video, I first try to ease a complex motion path with Auto Bezier, and then with Roving keyframes. This will also give you an idea of how roving keyframes behave in the graph editor.
|
OPCFW_CODE
|
/**
* Module dependencies
*/
var _ = require('lodash');
var modelHasNoDatastoreError = require('../constants/model-has-no-datastore.error');
var modelHasMultipleDatastoresError = require('../constants/model-has-multiple-datastores.error');
var constructError = require('./construct-error');
/**
* validateModelDef()
*
* Validate, normalize, and mix in implicit defaults for a particular model
* definition. Includes adjustments for backwards compatibility.
*
* @required {Dictionary} originalModelDef
* @required {String} modelIdentity
* @required {Dictionary} hook
* @required {SailsApp} sails
*
* @returns {Dictionary} [normalized model definition]
* @throws {Error} E_MODEL_HAS_MULTIPLE_DATASTORES
* @throws {Error} E_MODEL_HAS_NO_DATASTORE
*/
module.exports = function validateModelDef (originalModelDef, modelIdentity, hook, sails) {
// Rebuild model definition to provide a layer of insulation against any
// changes that might tamper with the original, raw definition.
//
// Model settings are determined using the following rules:
// (in descending order of precedence)
// • explicit model def
// • sails.config.models
// • implicit framework defaults
var normalizedModelDef;
// We start off with some implicit defaults:
normalizedModelDef = {
// Set `identity` so it is available on the model itself.
identity: modelIdentity,
// Default the table name to the identity.
tableName: modelIdentity,
// Default attributes to an empty dictionary (`{}`).
// > Note that we handle merging attributes as a special case below
// > (i.e. because we're doing a shallow `.extend()` rather than a deep merge)
// > This allows app-wide defaults to include attributes that will be shared across
// > all models.
attributes: {}
};
// Check for any instance methods in use. If there are any, log a deprecation
// warning alerting users that they will be removed in the future.
_.each(originalModelDef.attributes, function deprecateInstanceMethods(val, attributeName) {
// Always ignore `toJSON` for now.
if (attributeName === 'toJSON') {
return;
}
// If the attribute is a function, log a message
if (_.isFunction(val)) {
sails.log.debug('It looks like you are using an instance method (`' + attributeName + '`) defined on the `' + originalModelDef.globalId + '` model.');
sails.log.debug('Model instance methods are deprecated in Sails v1, and support will be removed.');
sails.log.debug('Please refactor the logic from this instance method into a static model method or a helper.');
}
});
// Next, merge in app-wide defaults.
_.extend(normalizedModelDef, _.omit(sails.config.models, ['attributes']));
// Merge in attributes from app-wide defaults, if there are any.
if (!_.isFunction(sails.config.models.attributes) && !_.isArray(sails.config.models.attributes) && _.isObject(sails.config.models.attributes)) {
normalizedModelDef.attributes = _.extend(normalizedModelDef.attributes, sails.config.models.attributes);
}
// Finally, fold in the original properties provided in the userland model definition.
_.extend(normalizedModelDef, _.omit(originalModelDef, ['attributes']));
// Merge in attributes from the original model def, if there are any.
if (!_.isFunction(originalModelDef.attributes) && !_.isArray(originalModelDef.attributes) && _.isObject(originalModelDef.attributes)) {
normalizedModelDef.attributes = _.extend(normalizedModelDef.attributes, originalModelDef.attributes);
}
// If this is production, force `migrate: safe`!!
// (note that we check `sails.config.environment` and process.env.NODE_ENV
// just to be on the conservative side)
if ( normalizedModelDef.migrate !== 'safe' && (sails.config.environment === 'production' || process.env.NODE_ENV === 'production')) {
normalizedModelDef.migrate = 'safe';
sails.log.verbose('For `%s` model, forcing Waterline to use `migrate: "safe"` strategy (since this is production)', modelIdentity);
}
// Now that we have a normalized model definition, verify that a valid datastore setting is present:
// (note that much of the stuff below about arrays is for backwards-compatibility)
// If a datastore is not configured in our normalized model def (i.e. it is falsy or an empty array), then we throw a fatal error.
if (!normalizedModelDef.connection || _.isEqual(normalizedModelDef.connection, [])) {
throw constructError(modelHasNoDatastoreError, { modelIdentity: modelIdentity });
}
// Coerce `Model.connection` to an array.
// (note that future versions of Sails may skip this step and keep it as a string instead of an array)
if (!_.isArray(normalizedModelDef.connection)) {
normalizedModelDef.connection = [
normalizedModelDef.connection
];
}
// Explicitly prevent more than one datastore from being used.
if (normalizedModelDef.connection.length > 1) {
throw constructError(modelHasMultipleDatastoresError, { modelIdentity: modelIdentity });
}
// Grab the normalized configuration for the datastore referenced by this model.
// If the normalized model def doesn't have a `schema` flag, then check out its
// normalized datastore config to see if _it_ has a `schema` setting.
//
// > Usually this is a default coming from the adapter itself-- for example,
// > `sails-mongo` and `sails-disk` set `schema: false` by default, whereas
// > `sails-mysql` and `sails-postgresql` default to `schema: true`.
// > See `lib/validate-datastore-config.js` to see how that stuff gets in there.
var referencedDatastore = hook.datastores[normalizedModelDef.connection[0]];
if (!_.isObject(referencedDatastore)) {
throw new Error('Consistency violation: A model (`'+modelIdentity+'`) references a datastore which cannot be found (`'+normalizedModelDef.connection[0]+'`). If this model definition has an explicit `connection` property, check that it is spelled correctly. If not, check your default `connection` (usually located in `config/models.js`). Finally, check that this connection (`'+normalizedModelDef.connection[0]+'`) is valid as per http://sailsjs.com/docs/reference/configuration/sails-config-datastores.');
}
var normalizedDatastoreConfig = referencedDatastore.internalConfig;
if (_.isUndefined(normalizedModelDef.schema)) {
if (!_.isUndefined(normalizedDatastoreConfig.schema)) {
normalizedModelDef.schema = normalizedDatastoreConfig.schema;
}
}
// Return the normalized model definition.
return normalizedModelDef;
};
[Fwd: Bug#291946: freemind: Installs with java1.3 but won't works]
new maintainer, I've received the bug report below.
I'm a bit perplexed here:
- my current Depends is "Depends: j2re1.4 | java2-runtime, j2re1.4 |
- can I put a version dependency on a virtual package? (does then the
version come from the version of the real package to which the virtual
one is attached?)
- if no, what should I do?
- if yes, should I put it on java2-runtime, on java-virtual-machine, or on both?
Or, to make it more direct, should I change my "Depends" to:
A) Depends: j2re1.4 | java2-runtime (>> 1.4), j2re1.4 |
java-virtual-machine (>> 1.4)
B) Depends: j2re1.4 | java2-runtime (>> 1.4), j2re1.4 | java-virtual-machine
B) Depends: j2re1.4 | java2-runtime, j2re1.4 | java-virtual-machine (>> 1.4)
C) something else (precise)
D) nothing to do, forget about it...
Ah, ah, going through the policy, I find (7.4 Virtual packages):
--- BEGIN ---
If a dependency or a conflict has a version number attached then only
real packages will be considered to see whether the relationship is
satisfied (or the prohibition violated, for a conflict) - it is assumed
that a real package which provides the virtual package is not of the
"right" version. So, a Provides field may not contain version numbers,
and the version number of the concrete package which provides a
particular virtual package will not be looked at when considering a
dependency on or conflict with the virtual package name.
It is likely that the ability will be added in a future release of dpkg
to specify a version number for each virtual package it provides. This
feature is not yet present, however, and is expected to be used only
--- END ---
Does this mean that answer D above is the right one!?
Thanks in advance,
-------- Original Message --------
Subject: Bug#291946: freemind: Installs with java1.3 but won't works
Resent-Date: Mon, 24 Jan 2005 06:18:03 UTC
Resent-From: Pierre Ancelot <email@example.com>
Resent-CC: Eric Lavarde <firstname.lastname@example.org>
Date: Mon, 24 Jan 2005 07:02:15 +0100
From: Pierre Ancelot <email@example.com>
Reply-To: Pierre Ancelot <firstname.lastname@example.org>, email@example.com
To: Debian Bug Tracking System <firstname.lastname@example.org>
I have j2re1.3 (blackdown)
freemind installed anyways, even though it requires j2re1.4
j2re1.3 certainly provides java2-runtime (not verified) which is
required by freemind
this is maybe why it installed.
Get a message when starting freemind that it requires j2re1.4 which i
have not... (not on sarge at all)
-- System Information:
Debian Release: 3.1
APT prefers testing
APT policy: (500, 'testing')
Architecture: i386 (i686)
Kernel: Linux 2.6.8-486
Locale: LANG=C, LC_CTYPE=C (charmap=ANSI_X3.4-1968)
Versions of packages freemind depends on:
ii j2re1.3 [java2-runtime] 1.3.1.02b-2 Blackdown Java(TM) 2
ii sablevm [java-virtual-machin 1.1.6-6 Free implementation of Java
-- no debconf information
Gewalt ist die letzte Zuflucht der Inkompetenz.
Violence is the Last Resort of the Incompetent.
Gwalt jest ostatnem schronieniem niekompetencji.
La violence est le dernier refuge de l'incompetence.
~ Isaac Asimov
Problems using `sudo make` to install wireless card driver
I have a problem where my computer often (several times per hour) suddenly disconnects from wifi. I then have to disable and re-enable the wireless connection before the network I was disconnected from becomes available again.
I suspected that something must be wrong with my wireless card (AWUS036H), so I investigated how to install a proper driver and found this. However, when I open a terminal as root and follow the instructions (sudo make), it outputs this.
I use Debian Jessie with the 3.16.0-4-amd64 kernel. I have build-essential and linux-headers-3.16.0-4-amd64 installed.
Any help?
realtek is a waste of time - a chipset costing cents, used by manufacturers to save money by cutting corners. Buy another chipset. Read my answer here, please. http://unix.stackexchange.com/questions/252210/wi-fi-problems-using-asus-usb-n13-adapter I would also guess you are trying to compile a module that is not appropriate for your kernel version.
Thanks for your advice. I don't have much knowledge of wireless cards and I don't think I would be able to identify whether a wireless card is atheros or realtek based, but from your answer I deduce that a TP LINK wireless card would do the job, right?
Your card is realtek based/rebranded, a simple google search shows it, and you are linking to the realtek site for the source code, hence my answer. I recommend a ralink based chipset; search on aliexpress for "300Mbps Dual Band 2.4GHz / 5.8GHz Ralink RT5572N WiFi USB Adapter" or something similar.
I was planning to go to the nearest mall and buy a chipset right now. I will see if I can find a non-realtek based one (I will try to find a ralink based one). I asked about the brand because I guess some brands prefer certain chipsets.
Portugal, right? The no-brand €15 one I am referring to must be around €40 in Europe. atheros and ralink are usually good choices for linux; however, google the specific model online to see what people say about it, and/or check the linux compatibility database before buying it. It also does not help that some USB brands and models ship different chipsets depending on the hardware version, so keep your eyes open and check beforehand online. Do you live in Lisbon?
I live in Spain. I have bought a TP-LINK model TL-WN822N that I'm going to return because it also has a Realtek chipset (I googled it before buying and found that some versions of the hardware use Realtek while others use Atheros, and I also asked before buying if returns were accepted). For now I will continue dealing with the constant disconnections. Help appreciated!
I usually buy things in aliexpress. http://www.aliexpress.com/item/2T2R-300Mbps-Dual-Band-2-4GHz-5-8GHz-Ralink-RT5572N-WiFi-USB-Adapter-Black/32364412439.html
The Realtek download page that you link to describes the available driver as
Linux driver for Kernel 3.0.0/3.1.0/3.2.0
You have a much more recent kernel (3.16), so the kernel headers have changed significantly, and compiling fails.
However, the direct-from-Realtek driver probably won't help. It was last updated in 2012 (to support kernel 3.x). The same source files were included in the "staging" area of the kernel where they were maintained for a few years. (See the changelog). That directory was deleted in 2014 with the following explanation:
There is a "real" driver for this hardware now in drivers/net/ so remove the staging version as it's not needed anymore.
That means that the kernel developers consider the stock driver, that you were already using, to be more suitable than the old one you've downloaded and are trying to compile. And even if you wanted to use the old one, you'd be more successful trying to compile the last staging version.
As commenters have mentioned, it's more likely the hardware that's the problem.
It is more complicated than that, I am afraid. The chip is a horrible mess, and the official stance of realtek is telling you to use an old, more "stable" (read: hacked by them) module version and an old hostapd version compatible with that. I managed to compile and use their module on kernel 3 as of October 2015; however, I would stress that it is not worth the effort... it is indeed more stable, but with random lock-ups at most every 2 weeks. And the chipset gets hotter and slower with their hacks.
I ended up giving the chipset to the cat. I also used a newer "official" version. The results are also not that satisfactory, I am afraid, and you still need module parameters to work around the energy-saving hardware bugs. Friends do not let friends buy realtek.
@RuiFRibeiro Do you have a link for that "official stance" (in which they advise people to use the old but still long-term-supported 3.2.x)? If so, I'll edit it into my answer, as it's possibly more use than hacking about in staging.
It seems dependent on the chipset model... some still have drivers for 3.x kernels, some do not. However, it is not a particularly good sign. I am afraid I read that almost a year ago, after reading too much stuff about how to get my chipset working. Looking at this page, it is quite easy to see that some models still have links for the 3.x line of kernels http://www.realtek.com/downloads/downloadsView.aspx?Langid=1&PNid=48&PFid=48&Level=5&Conn=4&DownTypeID=3&GetDown=false&Downloads=true#RTL8192CU%C2%A0 while others do not.
It is well known that even a very cheap dual frequency ASUS AP (N900) that is based on realtek locks up every couple of days... it is a disgrace.
using Models;
using Shouldly;
using Xunit;
namespace JaFosteTests.ModelsTests.models
{
public sealed class TodoItemsTests
{
[Fact]
public void TwoEqualTodoItemInstances_ShouldBe_Equal()
{
var todo1 = new TodoItem {Id = 1, Name = "TodoItem", IsComplete = false};
var todo2 = new TodoItem {Id = 1, Name = "TodoItem", IsComplete = false};
todo1.ShouldBe(todo2);
}
[Fact]
public void TwoEqualTodoItemInstances_ShouldBe_Equal_WithSameHashCode()
{
var todo1 = new TodoItem {Id = 1, Name = "TodoItem", IsComplete = false};
var todo2 = new TodoItem {Id = 1, Name = "TodoItem", IsComplete = false};
todo1.GetHashCode().ShouldBe(todo2.GetHashCode());
}
[Theory]
[InlineData(-1)]
[InlineData(0)]
[InlineData(1)]
[InlineData(int.MinValue)]
[InlineData(int.MaxValue)]
public void SetIdWithValue_Should_Return_SameValue(int value)
{
var todo1 = new TodoItem {Id = value, Name = "TodoItem", IsComplete = false};
todo1.Id.ShouldBe(value);
}
[Theory]
[InlineData("Nome")]
[InlineData("Outro Nome")]
public void SetNameWithValue_Should_Return_SameValue(string value)
{
var todo1 = new TodoItem {Id = 1, Name = value, IsComplete = false};
todo1.Name.ShouldBe(value);
}
[Theory]
[InlineData(true)]
[InlineData(false)]
public void SetIsCompleteWithValue_Should_Return_SameValue(bool value)
{
var todo1 = new TodoItem {Id = 1, Name = "TodoItem", IsComplete = value};
todo1.IsComplete.ShouldBe(value);
}
}
}
HP Support Forums
Join in the conversation.
09-21-2011 04:41 PM
That is an IPv6 internet driver in your Networking Wizard... I think.
Recently I noticed my ISP is IPv6, while this old Presario had IPv4. Now I have
all these IPv6 things, and I see that I did not install them myself.
You can open your internet connection, go to Properties, and look at TCP and its Properties.
You can check with your ISP and just set up the networking connection the way they tell you...
09-25-2011 01:54 PM
I tried some of the video instructions & found I probably didn't need updates on some things. I tried to use HP Help & Support (on computer) & it comes up but freezes & won't react when clicking on anything & won't even close, have to use Windows Task Manager to get rid of it. Is that because my computer is too old & is not under warranty any more? Going to the HP online was helpful, though. I'm supposed to be automatically getting Windows & HP updates. I went to Windows online & got some software updates that were not automatic & then my Firefox browser wouldn't open & had to do a system restore to the day before to get rid of them. I guess one or more of them were not compatible with Firefox but could not find out specifically. I like a lot of things about Firefox but there seems to be so many Windows things that it is not compatible with!
12-30-2011 04:46 AM
Oh, nice thread. I am using an HP Pavilion DV9000Z laptop, and when I need to find a new driver I get HP Driver Support from iGennie.
01-14-2012 07:40 AM
When I was told to do a clean install and start the updating, not to install the Realtech and Nvidia drivers that windows update sends as an optional download. Also told not to just hit the update drivers by hp techs as well or get 3rd party drivers either. None show up when I use the hp updater. I am running windows 7 on a pavillion desktop and they said the drivers are made just for THIS model pc that are pre installed and never use outside drivers, programs such as antivirus etc.. I know there are newer drivers and I wanted to know about updating the bloatware as well such as cyberlinkDVD deluxe suite etc..? sounds as though they try to force things on you so you dont improve your pc so they can sell new ones which I was offered. never again will an hp be in my home when this thing dies.
01-14-2012 05:32 PM
I always use Windows updates and I get very few updates offered in HP Update; I have always used Norton Internet Security and other software programs without any problems.
I also have a Pavilion Elite HPE-150F desktop, plus an HP G62 laptop and a Photosmart Plus all-in-one printer (B209a). Granted, I have had fewer problems with the laptop than with the desktop, but I would keep trying different techs until you get the problem solved; I have had to do this a few times.
01-14-2012 05:38 PM
Imaginary_Self0 wrote: When I was told to do a clean install and start the updating, not to install the Realtech and Nvidia drivers that windows update sends as an optional download.
Hello Imaginary_Self0, I would also agree with not downloading and installing the Windows Update Hardware drivers.
I suggest going to the manufacturer's web site and download and install their latest drivers.
Just check the Device Manager and determine the video device and the audio device and go to the hardware provider's site and locate and download their latest drivers.
It has been reported, and I have experienced issues with some Windows hardware drivers.
Just some thoughts.
Please click the White Kudos star on the left, to say thanks.
Please mark Accept As Solution if it solves your problem.
Git and GitHub for Writers
The first Git and GitHub class specifically for writers!
More and more, writers are being asked to use Git and GitHub for their documents. This is part of a philosophy called “Docs Like Code”, where documentation is created using the same tools and processes that code is. The problem is that Git and GitHub were designed specifically for developers, and most classes about them don't work as well for writers.
This class differs from other Git and GitHub classes in that:
It explains concepts in ways that are meaningful to writers
All example files are documents rather than code
It talks about how files are used to create documentation
This course is for technical writers, project managers, and anyone who writes who needs to use version control tools like Git and GitHub. It covers:
What version control is
What “Docs Like Code” means
How to use Git to manage file versions
How to use GitHub for pull requests and forking
How to handle difficult problems
How Git is used for documentation
In addition to videos, this course contains 14 hands-on exercises that lead you step-by-step in using Git and GitHub. All PowerPoint presentations are available as resources.
Getting Started with Git and GitHub
Introduction to what the course will cover.
Treating documentation like code, and how Git and GitHub are used for this.
Explains what version control is.
What is Git and how to install it.
What is GitHub and how to get started with it.
How to use the command line, which you will be using to interact with Git.
Covers the Git concepts of unstaged, staged, committed, and pushed files.
Git and GitHub Basics
How to add a file to the repository using Git and upload it to GitHub.
How to make changes using Git and "push" them up to GitHub.
How to delete and rename files using Git.
Explains why Git is designed with the four stages.
How to go back to previous versions of your files using Git.
Follow the instructions in the resource to do an exercise to go back to a previous commit.
Tag, Pull, Branch, and Stash
How to add a tag to a commit so it's easier to find later.
How to pull changes from GitHub to your local machine, merge content, and handle conflicts.
Why Git and GitHub are good for collaboration, and how to create branches and switch between branches.
How to create branches, change to them, and delete them.
How to use git stash to temporarily put away changes you are working on.
How to merge branches, both with no conflicts and with conflicts.
How to clone repositories to create local copies.
More Advanced Features and Next Steps
How to rebase a branch.
How to handle difficult-to-solve problems in Git.
How to tell Git to ignore files so that they are not considered part of the repository.
How to create documentation that uses Git and GitHub, and what are the next steps to learning more.
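The core workflow this outline describes can be sketched at the command line. This is a minimal local sketch only: the file name, branch name, commit messages, and tag are illustrative, not taken from the course, and the final push to GitHub is shown as a comment because it requires a configured remote.

```shell
# Create a local repository for a documentation project
mkdir docs-demo && cd docs-demo
git init -q
git config user.email "writer@example.com"   # identity is required before committing
git config user.name "Demo Writer"

# Stage and commit a new file (unstaged -> staged -> committed)
echo "# User Guide" > guide.md
git add guide.md
git commit -q -m "Add first draft of the user guide"

# Work on a branch, then merge it back into the branch we started on
main=$(git symbolic-ref --short HEAD)   # "main" or "master", depending on your Git defaults
git checkout -q -b edit-intro
echo "Welcome to the product." >> guide.md
git commit -q -am "Expand the introduction"
git checkout -q "$main"
git merge -q edit-intro

# Tag the result so this state is easy to find later, then inspect history
git tag v0.1
git log --oneline

# Pushing to GitHub would then be:  git push origin "$main" --tags
```

The same four-stage model from the "Getting Started" section is visible here: editing `guide.md` leaves it unstaged, `git add` stages it, `git commit` records it locally, and `git push` would send it to GitHub.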
Post by mechanic Post by Bob J Jones
Where to get official copies of Microsoft Windows assuming you
have a license key on the side of your computer (which almost
Paul found this for Windows 7 (in the Windows counterfeit jail
thread). Official download for Windows 7 disc image iso files
But where is the Windows XP ISO file official download location?
But why would you want it? It's many versions out of date.
If you use that, and use the output of WsusOffline 9.2.1,
you can bring it back up to date. WsusOffline can prepare
a set of patches for it, you can blast into it, in a batch.
There are probably some WinXP patches that aren't properly
recorded in Windows Update. The SMBV1 patch or ones from
a related era. Those will take some research with your
browser to track down.
If you had a WinXP Gold disc ("SP0"), these are examples of
Service Packs. Microsoft had to re-issue SP1 after the
Java legal settlement (by removing MSJava). WsusOffline
isn't likely to have all these. If you're lucky, maybe
WsusOffline will have access to the SP3 one. There
are various opinions on "how Cumulative" these are,
with perhaps different problems showing up when
slipstreaming, versus command line usage. WsusOffline
uses only official MS URLs, and is not a file server.
WindowsXP.exe 140,440,152 bytes xpsp1_en_x86
xpsp1a_en_x86.exe 131,170,400 bytes (version with MSJava removed)
WindowsXP-KB835935-SP2-ENU.exe 278,927,592 bytes
WindowsXP-KB936929-SP3-x86-ENU.exe 331,805,736 bytes
If there was a ready source of WinXP ISO9660 downloads, for
discs released with the Service Pack already in it,
I'd have them :-) And I've never managed to snag some.
I'm talking about sources where there's some notion
of where they came from, not torrents. Even if the
discs had official checksums, that would be a start
at a trustworthiness check. (Maybe some torrented
MSDN discs would be as close as you get. Only trustworthy
if you have SHA1/SHA256 checksums.)
The OS still works, and it's the "right weight" for
an older computer. Like, a machine with a 512MB max
on memory, would be OK. A lot of Linux distros can no
longer run in 512MB. Puppy would likely work. There's
also a lot of missing video card support. Whereas the
box your video card came in, that CD probably has a
video card driver for WinXP.
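As the post says, a published SHA-256 checksum is the minimum trustworthiness check for a downloaded disc image. A quick sketch of the mechanics with GNU coreutils, using a fabricated stand-in file rather than a real ISO:

```shell
# Fabricate a stand-in file just to demonstrate the mechanics
echo "pretend this is an ISO" > xpsp3-demo.iso

# The publisher's side: compute the checksum and publish it alongside the file
sha256sum xpsp3-demo.iso > xpsp3-demo.iso.sha256
cat xpsp3-demo.iso.sha256

# The downloader's side: verify the file against the published checksum.
# On a match this prints "xpsp3-demo.iso: OK" and exits 0;
# on a mismatch it reports FAILED and exits non-zero.
sha256sum -c xpsp3-demo.iso.sha256
```

The check is only as trustworthy as the channel the checksum came from: a SHA-256 fetched over the same untrusted link as the ISO proves integrity, not provenance.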
Chris says "Over the past couple of days, I've heard more than one of my friends complain about the lack of proper attribution - but I've seen the laziness unfold, time and again. Who knows who found what first? Here's the problem: getting a scoop isn't what it once was. Everyone gets the scoop, and everyone gets the exclusive.
I've stumbled upon gems over and over again, only to see them surface on more popular sites without necessarily stating that I was the originating source (with timing being of primary significance)."
Well, I completely agree on this one... Here on Geekzone we live by press releases, or material supplied directly by the developer/manufacturer. If we don't get a press release or if we don't receive the material directly from the original source, then we post an attribution. And our policy is not to publish rumours (that makes our work a lot lighter and more reliable).
But I have received a few e-mails asking why we don't link to the source in all posts. Hmmm. Do people really want to go and read the full press release, with its "XWA, Inc, the world leader in always up to date hot cakes in your mobile device, today launched the most exciting product ever"? Hmmm. Nope, I didn't think so. Unsurprisingly, the people who complain most about the lack of links to press releases are... other website owners. Of course, it makes their lives so much harder, since they can't find the press release and don't want to spend time going through XWA's website looking for the press page, right?
We do post links to companies that are new entrants in the market, otherwise some people wouldn't be able to find them and their services. But if we post about Microsoft launching a new initiative, shall I link to their press release, if there's no specific page for their new stuff?
Now comes the interesting thing: in the last couple of years we have posted lots of things that, purely because of timing, are posted on other sites simultaneously - most of us read the press releases or receive embargoed material at the same time. But it surprises me that sometimes (and it happened a couple of times this week alone) we find an interesting press release that went unnoticed for a week (in one case up to twenty days). A few hours after it flows into our RSS feed, other sites publish the same item, as if they were the finders.
And then, of course, smaller sites (and big networks too) will spread the word, without attribution to the source that found the item in the first place, or attributing the item to the wrong source - because this wrong source didn't give attribution to the scoop originator in the first place.
Is it possible that everyone missed the first release at the same time, and then everyone found the same release at the same time, but twenty days later?
Chris continues with "But being "first" is no longer important, as evidenced by all these damned memetrackers that I'm getting sick of hearing about. I don't visit Memeorandum on pure principle - I'm f*cking sick of the echo chamber. We all want to be on top, we all want to win - and sometimes in our quest to find the one ring to rule them all, we forget about giving credit where credit might be due (even if that comes in the form of a simple hyperlink or name-drop)."
This week I looked through 365 RSS feeds (out of almost 500 I had when 2005 came to an end) and cleaned the list down to 296. This is going to save so much time of my daily work of checking what's going on.
I blame Chris on this one. His Gnomedex conference seems really cool, and I once imported the OPML with the participants in 2005. I have to say that 1/3 of those feeds are now "out of business". Either their domains don't exist anymore, or their last posts are dated September 2005 (and if a blog is not updated for three months, well, it's not worth it). Another 1/3 of them seem to be part of the "echo chamber" Chris talks about. Any feed I found to be in the "echo chamber" was removed. Only 1/3 of the feeds were original, worthy content, which I kept in my subscriptions.
I mean, just look at Digg Technology and you will see some stories repeated over and over, instead of a single entry with diggs being directed at it. But no, it's like a lottery: everyone will post their own preferred site (their own), in the hope it will make it to the digg frontpage and get a few page views, and fame. Too much wasted bandwidth, too much noise.
As for this year's Gnomedex, once again I am not going - an addition to the family is due very soon, and I wouldn't be able to make a trip away in the first few months.
To finish, Chris says "And for goodness sake, please don't comment on this entry... or perform a trackback... or write about this post elsewhere. If you do, please don't credit me.".
Why isn't Vert.x being more used?
[It is not a dev related question]
I have been using Vert.x since 2017 and I think the framework is great. It has a better performance when compared to Spring Boot, more requests per second and less cpu usage.
It works great for event driven programming and concurrent applications.
However, I don't see the community growing. Does anyone know what cons are keeping developers away from Vert.x? I'm about to start a new application and I'm worried that Vert.x is dying.
This is undoubtedly an interesting question, but it is primarily opinion-based, which makes it a poor fit for Stack Overflow. I am not sure it would be on-topic anywhere on the SE network, unfortunately.
@halfer Why should the first programming-oriented forum only offer snippets of code to copy/paste without thinking, rather than providing richer debate about what we're doing? I think that kind of answer is counterproductive.
@Idriss: it has been firmly established over many years that debate does not work well in the Q&A format here - indeed that would be even more off-topic than this one. However, you can always raise it on Meta if you wish.
@halfer Thanks, I think I'll take the time to raise a debate on Meta. Anyway, I think it's really a shame to block that kind of question, and I wonder if I'm the only one who thinks like that.
@Idriss, no problem. Note that this discussion has been had a lot before, so do please do a thorough search on Meta, and see if you have new ideas to add. You aren't the only one to say this, but still in the minority, I think.
Disclaimer: I work for Red Hat in the Vert.x core team.
Thanks for sharing your good experience with Vert.x.
There is no secret sauce behind community growth: you need marketing money and a dedicated evangelist team. Vert.x has neither of these BUT:
rest assured the project is not dead (we're releasing 4.0 in the coming months and Vert.x has become the network engine for Quarkus)
the community is still very strong and vibrant (users helping each other on the forum and significant features are actually contributions)
for a few years now Red Hat has offered commercial support
Rome wasn't built in a day: I first heard about Spring a few months after starting my career in IT 15 years ago...
I think that over the past 20 years (maybe more), the most-used technologies have been those that let the developer stop thinking for himself and produce a large number of features as quickly as possible.
In other words, it's mainly the frameworks that handle everything for you: JSF and Struts, which hide the frontend complexity from backend devs who were not qualified for it, or Spring, which takes care of hiding all the problems of exposition and resiliency behind a mountain of annotations and abstraction layers. We could observe the same thing in the PHP world with Zend, Symfony, Laravel and whatever. And lately we can say the same thing for the frontend devs with Angular.
Using a toolkit like vert.x, in my opinion and even if we find it simple, requires a better understanding of what we're doing. We need to be aware of the reactor pattern, the asynchronous paradigm, reactive programming, single-threaded and concurrent programming, etc. We need to stop designing standard blocking restful APIs to solve all the issues. We need to have better control of communication issues and failover through our microservices. Even if toolkits like akka, vert.x, quarkus and micronaut have made lots of effort to provide good documentation, industrialization tools, and more libs around them that handle many things for you... there is still an entrance ticket that management sometimes considers (wrongly in my mind) as an obstacle to production.
Finally, I think that when a toolkit seems to answer your need exactly, and when there is a strong community behind it (it doesn't have to be the biggest, but it should be made up of available experts and great OSS companies like Red Hat), you shouldn't wait to give it a try. It's often a better answer than big frameworks that handle too many things in the same box.
|
STACK_EXCHANGE
|
import { Attribute, Keys, RADIOBUTTON_A11Y_ROLE, RADIOGROUP_A11Y_ROLE } from '../../common/consts';
import {
FIRST_RADIO_BUTTON_ACCESSIBILITY_LABEL,
RADIOGROUP_ACCESSIBILITY_LABEL,
RADIOGROUP_TEST_COMPONENT_LABEL,
SECOND_RADIO_BUTTON_LABEL,
} from '../consts';
import RadioGroupPageObject from '../pages/RadioGroupLegacyPageObject';
// Before testing begins, allow up to 60 seconds for app to open
describe('RadioGroup/RadioButton Legacy Testing Initialization', () => {
it('Wait for app load', async () => {
await RadioGroupPageObject.waitForInitialPageToDisplay();
expect(await RadioGroupPageObject.isInitialPageDisplayed()).toBeTruthy(RadioGroupPageObject.ERRORMESSAGE_APPLOAD);
});
it('Click and navigate to RadioGroup Legacy test page', async () => {
await RadioGroupPageObject.navigateToPageAndLoadTests(true);
expect(await RadioGroupPageObject.isPageLoaded()).toBeTruthy(RadioGroupPageObject.ERRORMESSAGE_PAGELOAD);
await expect(await RadioGroupPageObject.didAssertPopup()).toBeFalsy(RadioGroupPageObject.ERRORMESSAGE_ASSERT); // Ensure no asserts popped up
});
});
describe('RadioGroup/RadioButton Legacy Accessibility Testing', () => {
/* Scrolls and waits for the RadioGroup to be visible on the Test Page */
beforeEach(async () => {
await RadioGroupPageObject.scrollToTestElement(await RadioGroupPageObject._firstRadioGroup);
});
it('Validate RadioGroup\'s "accessibilityRole" defaults to "ControlType.List".', async () => {
expect(
await RadioGroupPageObject.compareAttribute(RadioGroupPageObject._firstRadioGroup, Attribute.AccessibilityRole, RADIOGROUP_A11Y_ROLE),
).toBeTruthy();
expect(await RadioGroupPageObject.didAssertPopup()).toBeFalsy(RadioGroupPageObject.ERRORMESSAGE_ASSERT);
});
it('Validate RadioButton\'s "accessibilityRole" defaults to "ControlType.RadioButton".', async () => {
expect(
await RadioGroupPageObject.compareAttribute(
RadioGroupPageObject.getRadioButton('First'),
Attribute.AccessibilityRole,
RADIOBUTTON_A11Y_ROLE,
),
).toBeTruthy();
expect(await RadioGroupPageObject.didAssertPopup()).toBeFalsy(RadioGroupPageObject.ERRORMESSAGE_ASSERT);
});
it('Set RadioGroup "accessibilityLabel" prop. Validate "accessibilityLabel" value propagates to "Name" element attribute.', async () => {
expect(
await RadioGroupPageObject.compareAttribute(
RadioGroupPageObject._firstRadioGroup,
Attribute.AccessibilityLabel,
RADIOGROUP_ACCESSIBILITY_LABEL,
),
).toBeTruthy();
expect(await RadioGroupPageObject.didAssertPopup()).toBeFalsy(RadioGroupPageObject.ERRORMESSAGE_ASSERT);
});
it('Do not set RadioGroup "accessibilityLabel" prop. Validate "Name" element attribute defaults to current RadioGroup label.', async () => {
expect(
await RadioGroupPageObject.compareAttribute(
RadioGroupPageObject._secondRadioGroup,
Attribute.AccessibilityLabel,
RADIOGROUP_TEST_COMPONENT_LABEL,
),
).toBeTruthy();
expect(await RadioGroupPageObject.didAssertPopup()).toBeFalsy(RadioGroupPageObject.ERRORMESSAGE_ASSERT);
});
it('Set RadioButton "accessibilityLabel" prop. Validate "accessibilityLabel" value propagates to "Name" element attribute.', async () => {
expect(
await RadioGroupPageObject.compareAttribute(
RadioGroupPageObject.getRadioButton('First'),
Attribute.AccessibilityLabel,
FIRST_RADIO_BUTTON_ACCESSIBILITY_LABEL,
),
).toBeTruthy();
expect(await RadioGroupPageObject.didAssertPopup()).toBeFalsy(RadioGroupPageObject.ERRORMESSAGE_ASSERT);
});
it('Do not set RadioButton "accessibilityLabel" prop. Validate "Name" element attribute defaults to current RadioButton label.', async () => {
expect(
await RadioGroupPageObject.compareAttribute(
RadioGroupPageObject.getRadioButton('Second'),
Attribute.AccessibilityLabel,
SECOND_RADIO_BUTTON_LABEL,
),
).toBeTruthy();
expect(await RadioGroupPageObject.didAssertPopup()).toBeFalsy(RadioGroupPageObject.ERRORMESSAGE_ASSERT);
});
});
describe('RadioGroup Legacy Functional Testing', () => {
/* This resets the RadioGroup state by clicking/selecting the 1st RadioButton in the RadioGroup */
beforeEach(async () => {
await RadioGroupPageObject.scrollToTestElement(await RadioGroupPageObject._firstRadioGroup);
await RadioGroupPageObject.resetRadioGroupSelection();
});
it('Click on a RadioButton. Validate that it changes state from unselected to selected.', async () => {
/* Validate the RadioButton is not initially selected */
expect(await RadioGroupPageObject.isRadioButtonSelected('Second')).toBeFalsy(
'Expected the first RadioButton to be initially selected, but the second RadioButton was initially selected.',
);
/* Click on the RadioButton to select it */
await RadioGroupPageObject.click(RadioGroupPageObject.getRadioButton('Second'));
/* Validate the RadioButton is selected */
expect(
await RadioGroupPageObject.waitForRadioButtonSelected('Second', 'Clicked the second RadioButton, but it failed to be selected.'),
).toBeTruthy();
expect(await RadioGroupPageObject.didAssertPopup()).toBeFalsy(RadioGroupPageObject.ERRORMESSAGE_ASSERT);
});
it('Press forward "Arrow Key" on a RadioButton. Validate adjacent RadioButton is newly selected.', async () => {
// Presses the ArrowDown key while the first (A) RadioButton is selected
await RadioGroupPageObject.sendKeys(RadioGroupPageObject.getRadioButton('First'), [Keys.ARROW_DOWN]);
/* Validate the RadioButton is selected */
expect(
await RadioGroupPageObject.waitForRadioButtonSelected(
'Second',
'Pressed "Down Arrow" on the first RadioButton, but the second RadioButton failed to be selected.',
),
).toBeTruthy();
expect(await RadioGroupPageObject.didAssertPopup()).toBeFalsy(RadioGroupPageObject.ERRORMESSAGE_ASSERT);
});
it('Press forward "Arrow Key" on a RadioButton adjacent to a disabled RadioButton. Validate disabled RadioButton is skipped.', async () => {
// Presses the ArrowDown key while the second (B) RadioButton is selected
await RadioGroupPageObject.sendKeys(RadioGroupPageObject.getRadioButton('Second'), [Keys.ARROW_DOWN]);
/* Validate the RadioButton is selected */
expect(
await RadioGroupPageObject.waitForRadioButtonSelected(
'Fourth',
'Pressed "Down Arrow" on the second RadioButton, but the fourth RadioButton failed to be selected. The third RadioButton is disabled so it should be skipped.',
),
).toBeTruthy(); // It should skip RadioButton 3 since it is disabled
expect(await RadioGroupPageObject.didAssertPopup()).toBeFalsy(RadioGroupPageObject.ERRORMESSAGE_ASSERT);
});
});
|
STACK_EDU
|
One of the purposes of RISC OS Pyromaniac is to provide a means by which RISC OS tools and modules can be run on other systems. Whilst usually this is done by starting a RISC OS environment and running those tools, it is occasionally useful to be able to run host tools, and call in to RISC OS to run the tools. For example, using cross-compiling tools to build a component, and calling the RISC OS environment to test it.
The Pyromaniac command server provides one way of doing this, with a persistent RISC OS environment within which commands may be run. With the command server, a RISC OS environment can be provisioned, and when needed commands can be run within it.
There are two components to the command server: the pyro-server tool and the pyro-client tool.
The pyro-server tool works in a similar way to the pyro.py tool, but instead of exiting once the system has completed booting and running its commands, it remains running, and instead runs either a specified host command or your current shell. When the command or shell exits, the RISC OS environment will be terminated.
The pyro-client tool can be run whilst within the command or shell started by pyro-server. It will communicate with the server to run the commands supplied to it within the RISC OS environment. Input and output will be supplied to the RISC OS command, so interactive tools and pipes can be used. Errors will be reported with a return code of 1, and if a return code is set explicitly it will be returned. The use of ctrl-C will send an escape character to the currently running process.
Every command executed is executed within the same environment. This means that if you load modules, set environment variables or change any of the state of the system, it will persist from run to run.
For interactive invocation, you might start the server with just a simple invocation:
$ scripts/pyro-server
This will invoke a shell within which the pyro-client tool can be run. Exit the shell with exit, ctrl-D, or any equivalent for the shell.
pyro-server tool may be invoked with any of the configuration commands accepted by the
pyro.py tool. For example, the command line may include the loading of modules, the setting of system variables, or any configuration commands. It may be most useful to bundle all the configuration into a configuration file and use the
--config-file switch. The options for Pyromaniac are terminated by
--, and may be followed by a command to run in place of the shell.
For example, to run a BASIC program you might use a sequence like:
$ scripts/pyro-server --load-module modules/BASIC,ffa
$ scripts/pyro-client /MyProgram
$ exit
Commands run through the pyro-client tool can be passed through pipes.
$ scripts/pyro-client help modules | grep Internet
RISC OS commands which return an error, or which set a non-zero return code, will return that non-zero return code on the host system.
Most invocations of the server will not use the interactive form, but instead will wish to invoke another process. For example, to invoke the Make tool, the following server invocation might be used:
$ scripts/pyro-server --load-module modules/BASIC,ffa -- make
The server has all the same configuration options that the standard pyro.py tool has, together with a few configuration options of its own.
The default configuration is set up to be hopefully the most useful settings for interactive or automated use.
There are currently 3 configuration options in the pyroserver group.
When the client is used, it can report the return code from the command that was executed when the RISC OS command exits. This is taken either from the return code, if one has been set, or set to 1 if the command run returned an error. This return code will be set as the status from the client itself by default. However, this can be disabled by setting the pyroserver.report_rc option to no, or it can be replaced by a textual representation of the return code by setting the option accordingly.
The port used by the server is configurable. It defaults to 18794, but if you wish to run multiple servers this can be changed.
When the client is run, it will check the directory that you are in and attempt to mirror this directory within the RISC OS environment. This means that if you cd into another directory and then run a client command such as cat, you will see the directory you have changed to in the host. This only works when the directory is actually within the tree that RISC OS can see, obviously.
If this feature is not required, it can be disabled with a configuration option.
|
OPCFW_CODE
|
What if you could pay by the second for all of your computing needs? That’s a question Amazon has answered by offering Elastic Compute Cloud (EC2) since it launched way back in 2006.
Easily one of the most well-known service offerings from Amazon (perhaps second only to Simple Storage Service or even Amazon.com), EC2 provides an IT infrastructure that runs in the cloud and operates like a data center you have running at your own headquarters. It’s ideal for companies that need performance, flexibility, and power all at the same time.
Reduce your AWS EC2 costs now
Find instance deals from sellers in the Reserved Instance Marketplace (RIM). ReservedMarket has hundreds of third party reserved instances at any time. Purchasing Reserved Instances (RIs) is one of the best ways to shave big bucks off your AWS bill. They do the heavy lifting to identify the best bargains in the marketplace.
EC2 is relatively easy to define, but it also has many related services, product offerings, and partners that can seem overwhelming. At its core, EC2 is a service that allows you to rent a virtual server remotely for running your applications. It’s much more than that, of course -- which is why it’s important to define a few related terms as a way to describe EC2 and its value.
One term that is helpful to understand initially is instance. This word describes a single virtual computing environment made up of CPU, memory, cloud storage, and networking capacity. In the old days, Information Technology personnel might have used an entire server to run applications, but it’s better to understand cloud computing and EC2 in terms of an instance because it runs on a virtual server -- essentially, one portion that is provisioned for your applications.
A second important term related to EC2 is Amazon Machine Image (AMI). This is the provisioned part of a virtual computing environment -- essentially a preconfigured template you use as part of your virtual infrastructure. You could say the computing instance runs on top of the AMI. Once you have an instance configured for the AMI, it means you have defined the computing power, storage, memory, and networking you need.
As you might guess, an instance can run more than a business app for employees and more than a mobile app that runs on an iPhone. An instance is flexible enough to run just about anything. The word “elastic” in the name Elastic Compute Cloud is really all about the flexibility and scalability of the environment and is also related to the pay model. As mentioned at the outset, EC2 is elastic in the sense that you pay only for the compute instances you use.
An instance can contain web applications, mobile apps, a cloud database and the data used by your apps, the configuration files for a Big Data project, code libraries, and even the configuration for your computing environment. How you define and use the computing environment is up to you, and it’s not limited in terms of what you can run, for how long, the size of the applications, or even whether you run the application on the instance at all. This type of flexibility in how you start using EC2, what you can do, and how you can scale is what makes it so powerful.
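To make the "pay by the second" model concrete, here is a small, hypothetical cost calculation. The hourly rate below is illustrative only, not an actual AWS price, and the 60-second minimum is the commonly cited billing floor for per-second pricing:

```python
def ec2_cost(rate_per_hour: float, seconds: int) -> float:
    """Estimate the cost of an instance billed per second.

    Per-second billing commonly carries a 60-second minimum charge,
    so very short-lived instances are billed as if they ran a minute.
    """
    billable = max(seconds, 60)  # assumed 60-second minimum charge
    return rate_per_hour / 3600 * billable

# A hypothetical $0.036/hour instance run for 10 minutes:
print(round(ec2_cost(0.036, 600), 4))  # 0.006
```

The point of the sketch is the elasticity: a ten-minute test run costs fractions of a cent rather than a full hour of server time.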
Benefits of using EC2
In business, there’s a concept called “lift,” which is a good descriptor for why EC2 has become so popular and powerful. Lift is the idea that you can scale and reach more customers without as much friction. In the end, what EC2 ultimately provides for any company is lift. It’s the ability to scale and grow without having to wait for the technology to keep up.
With Amazon EC2, any discussion about the benefits and advantages has to start with the cloud itself. In some ways, EC2 and the cloud are synonymous these days -- with apologies to Google, Microsoft, and many other cloud-focused companies. It's not an exaggeration to suggest that companies like Netflix, Airbnb, Uber, and Pinterest might not exist if they weren't using EC2, or that they would at least exist in some other form or without the same reliability.
That’s because EC2 has flexibility and scalability, but also a long list of features, partner relationships, supported infrastructures, security, and reliability. One example of this is the service level agreement for EC2. Amazon guarantees 99.99% availability spread out over three separate zones according to the region where you are using it.
Another example of the computing power available is that there are 275 instance types available. These types are defined by pre-configured templates, so there might be an instance type that is optimized for networking speed, memory capacity, or server performance.
Perhaps one of the most important benefits -- apart from the scaling and flexibility, the cost structure, and the instance types available -- is that any company can get started on EC2, not just the massive companies with enterprise-level needs. Even a small startup can sign up to start using EC2 and create only a single instance for their new web application. There’s no partiality in terms of who can use EC2 and what you can accomplish with it.
- Move your website to the cloud with the best cloud hosting.
|
OPCFW_CODE
|
Linux Lite images for VirtualBox and VMware - OS Boxes
Did you select the option to burn as an image rather than copy to the disc? Did you do an md5 checksum on the ISO of Linux Lite you downloaded before burning it to the DVD? Linux Lite is a Linux operating system which is freely available to download.
14 Best Lightweight OS for Old Laptop & Netbook in 2017
Linux Lite is free for everyone to use and share, and is suitable for people who are new to Linux or for people who want a lightweight environment. Linux Lite 2.6 is a great operating system for replacing Windows 7 and has exceptional tools, including the control centre and installer.
After five months of development work, Linux Lite 3.6 has been released. ISO images are a very efficient way to download a distribution. It is a full-featured operating system that lets you get down to serious business right out of the box. This page provides the links to download Kali Linux in its latest release.
Linux Lite 3.6 | Confessions of a Technophobe
Linux Lite is a beginner-friendly Linux distribution that is based on the well known Ubuntu LTS and targeted at Windows users.
TheFearlessPenguin: my experiences and tips on using Linux-based operating systems for the average computer user. The list of the best lightweight operating systems and Linux distributions: fast and stable, powerful enough to give life back to your old, low-resource laptops and desktops. I have reviewed this Ubuntu-based distro, Linux Lite, before and concluded that it could be a great distro to start off with Linux and stay forever. Linux has a reputation for being designed for geeks only, and it has even more accommodating hardware requirements than the already-lightweight Zorin. Based on Ubuntu 12.04 LTS, this brand-new distribution uses the lightweight Xfce desktop and offers five years of support.
LXLE Linux - Revive that old PC! < The LXLE Desktop
Free Linux Downloads - Softpedia Linux
In a computing world distracted by distro overload, Linux Lite is a lightweight Linux OS that has no trouble handling a heavy workload. The latest release of the Linux Lite operating system is now available for download.
Review Of Linux Lite 2.6 - Websetnet - Technology Blog
Linux Lite 2.2 "Beryl" Review: Good lightweight XFCE LTS spin
The ultimate list of the best lightweight Linux distros for 2017.
Built on an Ubuntu Linux foundation, Zorin OS runs on the same open source software that powers everything from the U.S. Department of Defense. Linux Lite 3.6 comes with lots of improvements and changes since the 3.4 release.
Zorin OS: Your Computer. Better.
Linux Lite Alternatives and Similar Software
Linux Lite 1.0.8 - LQ ISO - LinuxQuestions.org
There is a taskbar at the bottom with a menu, launch bar and system tray. Here are the five best lightweight Linux distros of 2016. Linux was developed by Linus Torvalds at the University of Helsinki in Finland.
Linux – The Top 5 Lightweight Distros of 2014
Main Page - Linux Mint
Linpus Linux Lite Download - Softpedia Linux
Linux Lite 2.4 Release notes & Upgrade steps | 2daygeek
Download Linux Lite (VDI, VMDK, VHD) images for VirtualBox and VMware, and run Linux Lite on your primary (Linux, Mac, Windows) operating system. Popular alternatives to Linux Lite for Linux, Self-Hosted, BSD, Windows, Mac and more.
|
OPCFW_CODE
|
This guide will help you resolve the 'Signtool.exe Not Found' error that you might encounter while signing your files using the SignTool utility in Windows. Follow the step-by-step instructions to fix the issue and ensure successful signing of your files.
Table of Contents
- Step 1: Verify the Installation of the Windows SDK
- Step 2: Locate the Signtool.exe File
- Step 3: Add the Signtool.exe Directory to the PATH Environment Variable
- Step 4: Verify that the 'Signtool.exe Not Found' Error is Resolved
SignTool is a command-line tool that digitally signs files, verifies signatures in files, or time-stamps files. It is included in the Windows Software Development Kit (SDK) and can be used by developers to sign their applications and ensure their integrity. However, you might encounter the 'Signtool.exe Not Found' error while trying to sign your files using this tool. This guide will help you resolve this issue and successfully sign your files.
Before you proceed with the troubleshooting steps, ensure that you have the following prerequisites in place:
- A Windows-based computer with Administrator privileges
- Windows Software Development Kit (SDK) installed
Step 1: Verify the Installation of the Windows SDK
The first step is to ensure that the Windows SDK is installed on your computer. To do this, press Win + X and click on 'Apps and Features.'
Scroll through the list of installed programs and look for 'Windows Software Development Kit.' If it is not installed, download and install the latest version of the Windows SDK.
Step 2: Locate the Signtool.exe File
- After verifying the installation of the Windows SDK, locate the Signtool.exe file on your computer. Open File Explorer and navigate to the following directory:
C:\Program Files (x86)\Windows Kits\10\bin\
Inside the 'bin' folder, you will find folders named after different Windows build numbers (e.g., 10.0.19041.0). Open the folder corresponding to the installed Windows SDK version.
Inside this folder, locate the 'x86' or 'x64' folder, depending on your system architecture. Open the folder, and you should find the 'signtool.exe' file.
Note: The exact path to the 'signtool.exe' file may vary depending on the Windows SDK version and system architecture.
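Because the build-numbered folder name changes with each SDK release, it can be convenient to locate the newest installed build programmatically. The sketch below is illustrative only (it assumes the directory layout shown above and is not part of the SDK); the version-sorting helper is the important part, since sorting folder names as plain strings would rank 10.0.9600.0 above 10.0.19041.0:

```python
from pathlib import Path

def newest_sdk_build(folder_names):
    """Pick the highest SDK build folder from names like '10.0.19041.0'.

    Versions are compared numerically, component by component,
    so '10.0.19041.0' correctly outranks '10.0.9600.0'.
    """
    versioned = []
    for name in folder_names:
        try:
            versioned.append((tuple(int(p) for p in name.split('.')), name))
        except ValueError:
            continue  # skip folders that aren't dotted version numbers
    return max(versioned)[1] if versioned else None

def find_signtool(bin_dir=r'C:\Program Files (x86)\Windows Kits\10\bin',
                  arch='x64'):
    """Return the path to signtool.exe under the newest installed SDK build,
    or None if the SDK layout isn't found (e.g. on a non-Windows machine)."""
    root = Path(bin_dir)
    if not root.is_dir():
        return None
    build = newest_sdk_build([p.name for p in root.iterdir() if p.is_dir()])
    if build is None:
        return None
    exe = root / build / arch / 'signtool.exe'
    return exe if exe.is_file() else None
```

For example, `newest_sdk_build(['10.0.19041.0', '10.0.9600.0', 'arm64'])` picks `'10.0.19041.0'`, skipping the non-version folder entirely.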
Step 3: Add the Signtool.exe Directory to the PATH Environment Variable
Press Win + X and click on 'System.'
In the 'System' window, click on 'Advanced system settings' in the right sidebar.
In the 'System Properties' window, click on the 'Environment Variables' button.
In the 'Environment Variables' window, under 'System variables,' scroll down and select the 'Path' variable. Click on the 'Edit' button.
In the 'Edit environment variable' window, click on 'New' and paste the directory path where the 'signtool.exe' file is located (e.g.,
C:\Program Files (x86)\Windows Kits\10\bin\10.0.19041.0\x86).
Click 'OK' to save the changes, and close the remaining windows.
Step 4: Verify that the 'Signtool.exe Not Found' Error is Resolved
Open a new Command Prompt or PowerShell window.
Type signtool and press Enter. If the 'Signtool.exe Not Found' error is resolved, you should see the SignTool utility's command reference.
You can now use the SignTool utility to sign your files without encountering the 'Signtool.exe Not Found' error.
Q1: How do I check the version of the Windows SDK installed on my computer?
To check the version of the Windows SDK installed on your computer, navigate to the directory
C:\Program Files (x86)\Windows Kits\10\bin\ and check the folder names inside the 'bin' folder. The folder names correspond to the Windows SDK build numbers (e.g., 10.0.19041.0).
Q2: Can I use SignTool on a non-Windows platform, like macOS or Linux?
SignTool is part of the Windows SDK and is only available on Windows. On macOS or Linux, you would need to run it inside a Windows environment (such as a virtual machine), or use a different signing tool.
Q3: How do I determine whether my system is x86 or x64?
To determine your system architecture, press
Win + X and click on 'System.' In the 'System' window, under 'Device specifications,' check the 'System type' field. It will display either '32-bit Operating System, x86-based processor' (x86) or '64-bit Operating System, x64-based processor' (x64).
Q4: Can I have multiple versions of the Windows SDK installed on my computer?
Yes, you can have multiple versions of the Windows SDK installed on your computer. Each version will have its own folder inside the
C:\Program Files (x86)\Windows Kits\10\bin\ directory, named after the corresponding build number.
Q5: What is the difference between the SignTool utility and other signing tools like openssl?
SignTool is a Windows-based utility specifically designed for signing files, verifying signatures, and time-stamping files. It supports multiple signature algorithms and is compatible with various file formats. On the other hand, OpenSSL is a more general-purpose cryptography library and toolkit that provides various cryptographic functions, including signing and verifying files, but may not have the same level of compatibility and support for specific file formats as SignTool.
|
OPCFW_CODE
|
We spent a fair amount of time arguing about what to name this type. In the end, we decided it was better to follow the pattern we established with Int16, Int32, and Int64, that is IntSize. In this case Size is the size of a pointer, which is 32 on a 32-bit machine and 64 on a 64-bit machine.
Even though it has a custom signature like Int32 and Double, it is fair to say that IntPtr is not a first class base datatype. Many programming languages such as C# and VB.Net do not support literals of type IntPtr, nor do they support arithmetic operations even though such operations are supported in the IL instruction set. In addition, some areas of the runtime don't support IntPtr. For example it cannot be used for the underlying type of an enum. I think this is a reasonable design decision, even in light of the transition to 64-bit computing, because it turns out IntPtr is not a good replacement for Int32 in most cases as many programs do need to know the size of the data type they are working with.
IntPtr is a super common type for interop scenarios, but is not used very frequently outside of interop. As such it would have been better to have this type in the System.Runtime.InteropServices namespace.
While common for interop scenarios, IntPtr also gets a fair workout in the code generation (Reflection.Emit and the new Lightweight Code Generation [LCG] APIs in CLR 2.0) namespaces. We can use it to represent things like method code pointers for IL instructions like "calli". You can call RuntimeMethodHandle.GetFunctionPointer(), which will hand you back a System.IntPtr.
It's also found under the hood of the delegate type implementation.
When you use IntPtr to represent an operating system handle (HWND, HKEY, HDC, etc.), it's far too easy to be vulnerable to leaks, lifetime issues, or handle recycling attacks. The .NET Framework's System.Runtime.InteropServices.HandleRef class can be used in place of IntPtr to address lifetime issues; it guarantees that the managed object wrapping the handle won't be collected until the call to unmanaged code finishes. But to help you battle all three issues, look for a new type to be added to the .NET Framework in the future called SafeHandle. Once this is available, it should be used wherever you used IntPtr or HandleRef to represent a handle.
IntPtr is of course the bare minimum type you need to represent handles in PInvoke calls because it is the correct size to represent a handle on all platforms, but it isn't what you want for a number of subtle reasons. We came up with two hacky versions of handle wrappers in our first version (HandleRef and the not-publicly exposed HandleProtector class), but they were horribly incomplete and limited. I've long wanted a formal OS Handle type of some sort, and we finally designed one in our version 2 release called SafeHandle.
Another interesting point is a subtle race condition that can occur when you have a type that uses a handle and provides a finalizer. If you have a method that uses the handle in a PInvoke call and never references this after the PInvoke call, then the this pointer may be considered dead by our GC. If a garbage collection occurs while you are blocked in that PInvoke call (such as a call to ReadFile on a socket or a file), the GC could detect the object was dead, then run the finalizer on the finalizer thread. You'll get unexpected results if your handle is closed while you're also trying to use it at the same time, and these races will only get worse if we add multiple finalizer threads. To work around this problem, you can stick calls to GC.KeepAlive(this); in your code after your PInvoke call, or you could use HandleRef to wrap your handle and the this pointer.
Of course, the SafeHandle class (added in version 2) solves this problem and five others, most of which can't be fully appreciated without understanding thread aborts and our reliability story. See my comments on the AppDomain class for more details.
|
OPCFW_CODE
|
You’d be hard pressed to find a carpenter who didn’t own a hammer, or a painter that didn’t have a couple of brushes kicking around. Some tools are simply so fundamental to their respective craft that their ownership is essentially a given. The same could be said of the breadboard: if you’re working with electronics on the hobby or even professional level, you’ve certainly spent a decent amount of time poking components and wires into one of these quintessential prototyping tools.
There’s little danger that the breadboard will lose its relevance going forward, but if [Andrea Bianchi] and her team have anything to say about it, it might learn some impressive new tricks. Developed at the Korean Advanced Institute of Science and Technology, VirtualComponent uses augmented reality and some very clever electronics to transform the classic breadboard into a powerful mixed-reality tool for testing and simulating circuits. It’s not going to replace the $3 breadboard you’ve got hiding at the bottom of your tool bag, but one day it might be standard equipment in electronics classrooms.
The short version is that VirtualComponent is essentially a dynamic breadboard. Holes in the same row are still electrically linked like in the classic breadboard, but with two AD75019 cross-point switch arrays and an Arduino in the base, it has the ability to virtually “plug in” components at arbitrary locations as selected by the user. So rather than having to physically insert a resistor, the user can simply tell the software to connect a resistor between two selected holes and the cross-point array will do the rest.
What’s more, many of those components can be either simulated or at least augmented in software. For example, by using AD5241 digital potentiometers, VirtualComponent can adjust the value of the virtual resistor. To provide variable capacitance, a similar trick can be pulled off using an array of real capacitors and an ADG715 digital switch to connect them together; essentially automating what the classic “Decade Box” does. In the demonstration video after the break, this capability is extended all the way out to connecting a virtual function generator to the circuit.
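The decade-box trick lends itself to a quick sketch. Assuming a hypothetical binary-weighted capacitor bank (the values below are invented, not taken from the VirtualComponent paper), firmware could pick which switches to close so the paralleled capacitors sum to roughly the requested value, since capacitors in parallel simply add:

```python
# Hypothetical binary-weighted capacitor bank, in picofarads (illustrative values).
BANK_PF = [10, 20, 40, 80, 160, 320, 640, 1280]

def select_caps(target_pf):
    """Greedily choose which switches to close so the paralleled
    capacitors sum as close to target_pf as possible (from below)."""
    chosen, total = [], 0
    for c in sorted(BANK_PF, reverse=True):
        if total + c <= target_pf:
            chosen.append(c)
            total += c
    return chosen, total

caps, total = select_caps(470)  # e.g. ask for a 470 pF "virtual capacitor"
```

With binary-weighted values the greedy pass is exact for any multiple of the smallest capacitor, which is why decade boxes are built this way.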
The whole system is controlled by way of an Android tablet suspended over the breadboard. Using the tablet’s camera, the software provides an augmented reality view of both the physical and virtual components of the circuit. With a few taps the user can add or edit their virtual hardware and immediately see how it changes the behavior of the physical circuit on the bench.
People have been trying to improve the breadboard for years, but so far it seems like nothing has really stuck around. Given how complex VirtualComponent is, they’ll likely have an even harder time gaining traction. That said, we can’t help but be excited about the potential augmented reality has for hardware development.
|
OPCFW_CODE
|
XcodeServerEndpoints.endpointURL tests
I've created tests for the endpointURL method from the XcodeServerEndpoints class. I set the access level of the XcodeServerEndpoints.endpointURL method to internal, as we discussed in issue #79.
I also renamed EndPoints -> Endpoints in the XcodeServerEndPoints name, to keep the names of files, classes and variables consistent.
Please review.
Result of Integration 1
Duration: 1 minute and 4 seconds
Result: Perfect build! All 56 tests passed. :+1:
Test Coverage: 54%.
This is some great stuff, @pmkowal, thanks so much! I added some comments, I'll have to look into the rev stuff. Also, why do you build up multiple params and test them all for the same thing, like in here: https://github.com/czechboy0/XcodeServerSDK/pull/80/files#diff-e671f822957fa3443397d33de2b2235eR105 ? Just curious :)
Thanks for comments @czechboy0 :smiley: The tests were modeled on the endpointURL method only. I know that there is a folder called routes in Xcode app package so I will also take a look at it :wink:.
Also, why do you build up multiple params and test them all for the same thing, like in here: https://github.com/czechboy0/XcodeServerSDK/pull/80/files#diff-e671f822957fa3443397d33de2b2235eR105 ? Just curious :)
I wanted to check all possibilities, in case additional/random parameters could affect the path creation.
Btw. In the meantime I will move the common sets of params to the setUp() method.
Result of Integration 2
Duration: 48 seconds
Result: Perfect build! All 56 tests passed. :+1:
Test Coverage: 53%.
Btw. Weird thing about test coverage - I moved the common params from each method to setUp() method and test coverage changed from 54% to 53% - it looks like test coverage depends on the amount of code :smile:
Hmm yeah, I guess it's a percentage of tested code / all code? Including tests probably. Xcode 7 Code Coverage still needs some love I think :)
Ok, I made the code more explicit. See #82. I made the endpoints ONLY put in the rev when deleting a bot. This way you can remove a bit of the test code, where you're making sure the right values get put in when rev is in the params. Sorry that the code made it seem like rev was always needed, I guess I just never caught this case. Which just proves how crucial tests are here. Thanks again and sorry for the added work :blush:
Feel free to fix it whenever you have time :wink:
Ok, thanks! After I make fixes to the old tests I will merge #82 and improve tests :wink:
Btw. I'm currently checking routes and testEndpointURLCreationForBotsBotRevIntegrations test - is it possible to have this kind of path: /api/bots/:bot_id/:rev_id/integrations?
No, that's what I meant by guarding rev. Now rev ONLY gets put in if it's a DELETE request, which looks like DELETE /api/bots/:bot_id/:rev_id. Technically, it would be put in even if we had another DELETE request that used the bots endpoint as a prefix, but that's not the case right now so let's not worry about it. So you can remove all the mentions of rev apart from the one where we delete bot (which I added a test for in #82.)
Result of Integration 3
Duration: 48 seconds
Result: Perfect build! All 44 tests passed. :+1:
Test Coverage: 50%.
Result of Integration 4
Duration: 45 seconds
Result: Perfect build! All 45 tests passed. :+1:
Test Coverage: 51%.
@czechboy0 I made a couple of fixes:
left only the tests which test success paths
added a let expectation to each test to make them more readable
the "/api/bots/bot_id/rev_id" path uses rev only when the method is DELETE
Please review.
Great, this is much better, thank you! (And just for the future, you don't have to bother with XCTAssertEqual's text as the third argument. The first two arguments are descriptive and when the test fails, the generated error basically says that they were not equal. Just to save you some time.)
So basically
XCTAssertEqual(url!, expectation, "endpointURL(.SCM_Branches) should return \(expectation)")
and
XCTAssertEqual(url!, expectation)
give you the same information, so you can just use the second one.
Thanks so much, @pmkowal! :+1:
Cool, thanks @czechboy0 !
👏
Officially, we've got a new contributor! 🎉
:fireworks:
:smile:
|
GITHUB_ARCHIVE
|
pip freeze does not show all installed packages
I am using a virtualenv. I have fabric installed, with pip. But a pip freeze does not give any hint about that. The package is there, in my virtualenv, but pip is silent about it. Why could that be? Any way to debug this?
Are you using the pip from the virtualenv?
I just tried this myself:
create a virtualenv in to the "env" directory:
$virtualenv2.7 --distribute env
New python executable in env/bin/python
Installing distribute....done.
Installing pip................done.
next, activate the virtual environment:
$source env/bin/activate
the prompt changed. now install fabric:
(env)$pip install fabric
Downloading/unpacking fabric
Downloading Fabric-1.6.1.tar.gz (216Kb): 216Kb downloaded
Running setup.py egg_info for package fabric
...
Successfully installed fabric paramiko pycrypto
Cleaning up...
And pip freeze shows the correct result:
(env)$pip freeze
Fabric==1.6.1
distribute==0.6.27
paramiko==1.10.1
pycrypto==2.6
wsgiref==0.1.2
Maybe you forgot to activate the virtual environment? On a *nix console type which pip to find out.
Oooooops. I was doing (silly me) pip freeze | grep fabric (I have lots of packages installed). I could swear I had scanned the whole list manually, but somehow I skipped that. Strange that it is installed as Fabric, but anyway, not a very good performance on my side.
grep -i is my friend too :)
yep! Usually I would run grep -i before trying anything else, but in this case I was so convinced that it should be fabric that I did not even consider it. I mean, who on the fabric team came up with the idea of breaking a years-long tradition of using lower case for package names? :) Especially when the package is called fabric on PyPI.
Maybe the same people who register django as Django?
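For anyone else bitten by this, case-insensitive matching sidesteps the capitalization trap entirely (the package list here is just a mock of pip freeze output):

```shell
# Mock pip freeze output: `Fabric` registers with a capital F,
# so a case-sensitive grep for "fabric" finds nothing.
printf 'Fabric==1.6.1\nparamiko==1.10.1\npycrypto==2.6\n' | grep -i fabric
```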
You can try using the --all flag, like this:
pip freeze --all > requirements.txt
Although your problem was specifically due to a typo, to help other users:
pip freeze doesn't show the dependencies that pip depends on. If you want to obtain all packages you can use pip freeze --all or pip list.
If you have redirected all the pre-installed packages in a file named pip-requirements.txt then it is pretty simple to fix the above issue.
1) Delete your virtualenv folder or create new one (I am giving it a name as venv)
rm -rf venv && virtualenv venv
2) Install all the requirements/dependencies from the pip-requirements.txt
pip install -r pip-requirements.txt
3) Now you can check the installed packages for your Django application
pip freeze
4) If you had forgotten to update your requirements file(pip-requirements.txt), then install fabric again (Optional Step)
Note: After installing any dependency for your Django app, always update the requirements in any file as follows (make sure your virtualenv is activated)
pip freeze > pip-requirements.txt
That's it.
Adding my fix in addition to the above fixes:
I was also facing the same issue on Windows: even after activating the virtualenv, pip freeze was not giving me the full list of installed packages. So I upgraded pip with the python -m pip install --upgrade pip command and then used pip freeze.
This time it worked and gave me the full list of installed packages.
This might be stupid but I have got the same problem. I solved it by refreshing vs code file directory (inside vscode there is a reload button). :)
If none of the above answers works for you, then, as in my case, you might have a problem in your venv and pip configuration.
Go inside your venv/bin, open pip, and look at its 2nd line:
'''exec' "path/to/yourvenv/bin/python3" "$0" "$@"
See if this line is correctly pointing inside your venv or not
For example in my case.
I initially named my virtual environment as venv1
and later just renamed it to venv2.
In doing so my pip file 2nd line had: '''exec' "venv1/bin/python3" "$0" "$@"
which, to work properly, should have: '''exec' "venv2/bin/python3" "$0" "$@" (notice "venv2", not "venv1", since venv1 is now renamed to venv2).
Because of this, venv2's pip was still pointing at the old venv1 interpreter, throwing errors or not working as desired.
For those who added Python modules via the PyCharm IDE after generating a virtual environment from the command prompt, good luck! You will need to rebuild the requirements.txt file manually: first run pip3 freeze, then add whatever PyCharm installed that is missing.
I highly suggest switching to Visual Studio Code.
|
STACK_EXCHANGE
|
At Bitlancer, we track our billable hours using FreshBooks, a popular, cloud-based SMB accounting system. The legacy version of FreshBooks that Bitlancer and many other SMBs are using (for a number of reasons, we have yet to upgrade to the latest version) allows us to create projects and set a maximum number of project hours… but there’s no way to limit the actual hours billed. Further, contractors working with us can’t even see the project time limits they need to adhere to.
Say you set up a FreshBooks project and estimate that it will require 40 hours to complete. A third party working on that project can track 50 hours with no alerts going off in the software. They will have no clue that there’s a 40-hour cap on their time.
After a few experiences with contractors logging more hours against projects than we had estimated (yes, feel free to blame this on poor management rather than the software!), plus running into challenges managing billing for multiple contractors across multiple projects, we decided to build an integration to FreshBooks that would give us the capabilities we needed.
Our new, open source Freshbot makes it easy to see how many hours your team has left on active FreshBooks projects. Here’s a peek at the bot’s output (project names are grayed out):
And here’s a sample of what you’ll see in FreshBooks:
Freshbot is a Slack bot written in the Go programming language that runs as a serverless Amazon Web Services (AWS) Lambda function. This design makes Freshbot easy to set up and maintain. It is publicly available, along with documentation, at https://github.com/Bitlancer/freshbot.
Now our contractors can simply run /hours on Slack to see how many hours are left on projects and how many hours they can invoice. And we can check project status in FreshBooks at a glance.
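Freshbot itself is written in Go, but the bookkeeping behind a /hours reply is simple enough to sketch in a few lines of Python. The project records and field names below are invented for illustration; the real bot pulls this data from the FreshBooks API:

```python
# Invented project records; the real Freshbot reads these from FreshBooks.
projects = [
    {"name": "Acme migration", "budget_hours": 40.0, "billed_hours": 33.5},
    {"name": "Widget Co ops", "budget_hours": 80.0, "billed_hours": 81.25},
]

def hours_report(projects):
    """Build the kind of per-project summary a /hours command might post."""
    lines = []
    for p in projects:
        left = p["budget_hours"] - p["billed_hours"]
        flag = "OVER BUDGET" if left < 0 else f"{left:g}h left"
        lines.append(f'{p["name"]}: {p["billed_hours"]:g}/{p["budget_hours"]:g}h ({flag})')
    return "\n".join(lines)

print(hours_report(projects))
```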
This helps Bitlancer improve project management and overall efficiency in a number of ways:
- We can see right away when the hours billed against a project exceed the project estimate. (Prior to building Freshbot, the only way we knew that was to log into FreshBooks, check the hours against the estimate and realize, “Oh, crap…”)
- Contractors can easily figure out how many hours they’ve billed on a project, in relation to the total hours they can bill.
- We know when we need to ask clients for new deposits because it’s now much easier to see how close we are to the agreed project time limit.
- We have “no excuse” for accidentally going over budget on projects because both project managers and contractors can easily see when they’re getting “close to the edge.”
We hope our open source Freshbot will help your business, too! Take it for a spin today.
|
OPCFW_CODE
|
Novel: War Sovereign Soaring The Heavens
Chapter 3117 – Qi Tian Ming
EndlessFantasy Translation

After three days, a guest who had traveled from afar finally arrived at the Simplicity Celestial Sect’s estate. The visitor was an old man with a medium build who was dressed in a long grey robe. His hair was completely white, and his complexion was ruddy. All in all, he looked healthy and energetic despite his age. When he arrived at the sect’s estate, he entered the residence of Sun Liang Peng, the Sect Leader of the Simplicity Celestial Sect, undetected by the patrolling elders and disciples. He hovered in the air above Sun Liang Peng’s residence as he called out, “Little Peng.”

Duan Ling Tian and Sun Liang Peng were sitting at a stone table in the courtyard, in the middle of a conversation, when they heard the old man’s voice. Neither of them had sensed the old man’s presence before he spoke.

When Duan Ling Tian shifted his gaze back to Sun Liang Peng, he saw a hint of embarrassment on Sun Liang Peng’s face at being addressed as Little Peng.

Actually, Sun Liang Peng was feeling rather embarrassed at this moment after hearing his nickname. His face was slightly red when he noticed Duan Ling Tian looking at him strangely. After a beat, he quickly rose to his feet to greet the old man and said with a wry smile on his face, “Senior uncle, can you please stop using that nickname? It’s harmful to my image as the sect leader…”

Duan Ling Tian naturally followed suit when he saw Sun Liang Peng rising to his feet. However, an invisible energy surged out of the old man’s body and prevented him from getting up. “You must be Duan Ling Tian, right? There’s no need for you to act like Little Peng. You can dispense with the formalities. Just be casual.”

“Yes.” Duan Ling Tian nodded.

Sun Liang Peng glanced at Duan Ling Tian before he shifted his eyes to the grey-clad old man and said, “Duan Ling Tian, this is my senior uncle, Qi Tian Ming. He’s also a former Grand Elder of the Simplicity Celestial Sect. You can address him as Senior Qi.”

‘The ten inspectors of the Profound Nether Mansion are all Ten Directions Celestial Kings?’ Duan Ling Tian was taken aback when he heard Sun Liang Peng’s words. He knew the ten inspectors of the Profound Nether Mansion were stronger than the various sect leaders and clan leaders of the Three Sects and Two Clans, but he had been unaware that all of them were Ten Directions Celestial Kings.

“The ten inspectors of the Profound Nether Mansion are all Ten Directions Celestial Kings. This isn’t a secret. What’s wrong? From whom did you hear about this matter? You seem unaware of this,” Sun Liang Peng replied.

‘So, this means Bihai Mingfeng is also a Ten Directions Celestial King…’ Duan Ling Tian had met another inspector of the Profound Nether Mansion before meeting Qi Tian Ming. That other inspector was Bihai Mingfeng, the Third Sect Leader of the Coupling Celestial Sect. He had met Bihai Mingfeng at the entrance to the lower world of the Southern Heaven Ancient Realm. Bihai Mingfeng also happened to be Huang Jia Long’s idol.

Duan Ling Tian inevitably thought of his friend when he thought of Bihai Mingfeng. ‘I wonder how Huang Jia Long has been doing in the Coupling Celestial Sect?’ Huang Jia Long was one of the few close friends Duan Ling Tian had made after ascending to the Spirit Overarching Heaven. Huang Jia Long had become a very important friend to Duan Ling Tian over the time they had spent together. One had to know that he was not someone who would simply accept another person as a close friend. He thought to himself, ‘After joining the Profound Nether Mansion, if I have the time, I should visit Jia Long at the Coupling Celestial Sect.’

At this moment, Sun Liang Peng looked at Qi Tian Ming and asked with a smile, “Senior uncle, I’m sure there’ll be a big uproar in the Profound Nether Mansion when you bring Duan Ling Tian back, right?”

Without waiting for Duan Ling Tian’s reply, the old man continued to say, “Use the law of space and attack me with all your might.” After he finished speaking, a surge of energy rose from his body.

Blue energy surged from Qi Tian Ming’s body and shrouded the stone table and stone chairs, the ground, and the courtyard before it seemed to envelop Sun Liang Peng’s entire residence.

‘The law of water! Moreover, it… it seems like he cast seven to eight profundities all at once,’ Duan Ling Tian thought to himself, slightly taken aback. He mobilized his Celestial Origin Energy and enhanced it with the law of space’s Elemental Profundity, the Space Elemental Profundity, after circulating it through his 99 Divine Veins.

“Hmmm… The Territory Profundity, the Restraining Profundity, the Distortion Profundity, and the Cross-Dimensional Slash Profundity…” Although the law of space was formidable, it could not threaten Qi Tian Ming. He easily deflected Duan Ling Tian’s attack and readily discerned the number and names of the profundities Duan Ling Tian had cast. He did not bother mentioning the Space Elemental Profundity since it was a given that Duan Ling Tian would have had to comprehend the Space Elemental Profundity before being able to use the other profundities of the law of space.

Qi Tian Ming’s eyes flashed as he looked at Duan Ling Tian. Suddenly, he waved his hand. A water dragon seemed to appear out of thin air and charged toward Duan Ling Tian. The water dragon grew in size as it sailed through the air. When it drew near Duan Ling Tian, it suddenly opened its mouth as if it intended to devour him.

Before Qi Tian Ming’s attack landed, Duan Ling Tian vanished into thin air like a ghost.

“This…” Sun Liang Peng widened his eyes when he saw Duan Ling Tian disappearing into thin air. After a beat, he found that Duan Ling Tian had appeared far behind Qi Tian Ming.
|
OPCFW_CODE
|
IT - Software Developer
Are you a Software Developer that can develop new features in a very dynamic environment?
Implement and continuously improve an integral set of infrastructure, tools and services to efficiently serve the ASML software engineering processes.
As a Software Developer you will work on a set of tools that operate in the Software Build environment. These tools are tailor-made for ASML to deliver the needed functionality and achieve the best performance of the SW Build environment.
Deliverables:
-Designs for the needed changes
-Implementation of the needed redesign and/or features
At least a Bachelor's degree in Software Development.
-Relevant and proven SW development experience in large technical environments
-Experience in Test-driven development
-Experience in Ruby programming language (or the ability to quickly learn a new programming language)
-Perl, Python scripting experience is a nice-to-have
-Basic understanding of Linux
-Experience in working in Scrum/Agile teams
-Excellent communication skills are a must
-Able to handle stress when issues from demanding customers disturb planned work.
-Shows initiative and a problem-solving attitude.
-Pragmatic, practical, flexible and committed to quality.
-Proficiency in the English language.
Context of the position
The IT division supports information management, infrastructure and key business processes across ASML. The ICT infrastructure, hardware and applications are absolutely mission-critical for almost all of ASML's internal and external activities.
A sub-department within the IT organization is Competence Center Product. This department is responsible for the infrastructure and tooling for ASML's development organization. Within this competence center there are three solution teams: mechanical, electrical and software engineering. The Software Developer will be active in the software engineering infrastructure team.
The software engineering infrastructure (SEI) group is responsible for the complete software development environment that is used by more than 1000 product software developers within ASML. This environment consists of an integrated development, build, test and release environment and supports the lifecycle and configuration management of the ASML product software.
This infrastructure is critical for the timely delivery of high quality software to the ASML customers and availability and performance issues directly influence the software development productivity and efficiency.
The environment is developed and maintained in close cooperation with the ASML software development community. Key challenge is on one hand to have a state of the art environment to support the newest product developments and on the other hand safeguard the stability, reliability and long term support for the growing installed product base at our ASML customers.
ASML creates the conditions that enable you to realize your full potential. We provide state-of-the-art facilities, opportunities to develop your talents, international career opportunities, a stimulating and inspiring environment, and most of all, the commitment of a company that recognizes and rewards outstanding performance.
|
OPCFW_CODE
|
How to merge git commits in the develop branch to a feature branch
I have a develop branch and a feature branch in my git repo. I added a commit to develop and now I want that commit to be merged to my feature branch. If I do this
git checkout feature
git merge develop
I end up with a merge commit. Since I'll be merging new commits on develop to my feature branch frequently, I'd like to avoid all these unnecessary merge commits. I saw this answer that suggested doing a git rebase develop but it ends up rewinding my branch way too far and the rebase fails.
Update:
What I ended up doing was
git checkout feature
git merge develop # this creates a merge commit that I don't want
git rebase # this gets rid of the merge commit but keeps the commits from develop that I do want
git push
Update: I just noticed that the original commit on develop gets a different hash when I merge then rebase to the feature branch. I don't think that's what I want because eventually I'll merge feature back into develop and I'm guessing this won't play nice.
Hmm, well I know that you can "squash" your commits together when you rebase, as a way to not have so many commits on your branch. Check out http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html.
Rebasing is the answer. If it's not working for you, the real question to ask is why it isn't working.
To integrate one branch into another, you have to either merge or rebase. Since it's only safe to rebase commits that aren't referenced anywhere else (not merged to other local branches; not pushed to any remote), it's generally better to merge.
If your feature branch is purely local, you can rebase it on top of develop. However, it takes time to understand how rebase works, and before you do, it's quite easy to accidentally produce duplicated or dropped commits. Merge commits might look noisy but merging is guaranteed to always be safe and predictable.
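The "different hash" observation in the question's update is inherent to rebasing: a rebase copies commits onto a new base rather than moving them. A throwaway repo makes this easy to verify (branch and file names below are illustrative; git init -b needs Git 2.28 or newer):

```shell
# Throwaway demo: rebasing gives the feature commit a brand-new hash.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main                      # `main` plays the role of develop here
git config user.email demo@example.com
git config user.name demo
echo base > f.txt && git add f.txt && git commit -qm base
git branch feature
echo dev >> f.txt && git commit -aqm on-develop
git checkout -q feature
echo feat > g.txt && git add g.txt && git commit -qm on-feature
before=$(git rev-parse HEAD)             # feature's commit before the rebase
git rebase -q main
after=$(git rev-parse HEAD)              # same change replayed, new hash
```

The old commit still exists in the object store (reachable via the reflog); the branch simply points at a copy. That copy is exactly what causes trouble once the original has been pushed or merged elsewhere.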
For a better view, try logging everything together in a graph:
git log --all --graph --oneline --decorate
It's also worth considering whether you really need the commits on develop merged into feature. Often they're things that can be left separate until feature is merged into develop later.
If you regularly find you do need develop code on feature then it might be a sign that your feature branches are too long-running. Ideally features should be split in such a way that they can be worked on independently, without needing regular integration along the way.
If you only want one commit from the develop branch you can cherry-pick it in your feature branch:
git checkout feature
git cherry-pick -x <commit-SHA1>
The commit will be applied as a new one on top of your branch (provided it doesn't generate a conflict), and when you'll merge back the feature branch Git will cope with it without conflicts.
This is arguably a better alternative to a merge because it keeps the history cleaner when the feature is merged back into develop.
|
STACK_EXCHANGE
|
Solved: Windows XP Reinstall Hangs On "Setup Is Installing Windows"
SIA performs the following tasks: Identifies the hardware on your system and prepares Sun server drivers for the OS install. Installation documentation for these racks is included in the X4500-J Slide Rail Installation Guide (820-1858-10) and is shipped in the orderable rail kit box and is available online. Slot 0 5. Otherwise redirection does not work.
HELP!!! PBR, I have been working on this for a while now. I looked at the partition for XP and all it has is F6 in all the bytes of the entire
Workaround: Reset the ILOM Service Processor using the GUI, CLI, IPMI, or SNMP interface.
system prints, Windows could not start because of a computer disk hardware configuration problem.... A Hyper Transport sync flood error occurred on last reboot ... command request queue parity error A PCI or internal memory error occurred while accessing the request queue. Connect the disk using cfgdisk # cfgdisk -o connect -d sata1/5 At this point, the disk is still reported as Disconnected or not present.
The target disks were partitioned by fdisk under RHEL5/SLES10.
Unconfigured Disk Appears in SP/IPMItool (6512915): If you unconfigure a disk and remove it, it still appears in IPMItool.
to appear. If it won't format using the quick way, more than likely the hard drive has problems, as I've found out a few times. Once the drive was replaced with a new one, I
Some or all of the other conditions might occur transitionally when a disconnection occurs. For example, if you booted from HDD0, and the amber LED on HDD1 lights up, you can use cfgadm to unconfigure HDD1, and when the blue LED lights, you can replace
Option ROM memory space exhausted. The HDD access cover should be closed as soon as any HDD service action is complete. Latest Firmware Updates The latest firmware updates for the Sun Fire X4500 server are available on the Tools and Drivers CD supplied with your system. The feature in the field-programmable gate array (FPGA), which can be used to detect such events, is functioning as expected.
Workaround: Current workaround is to change the system's root device in /boot/grub/menu.lst to /dev/sdy1.
Does it contain any service pack (SP1 or SP2 or SP3)?
Install the hd utility for the Solaris OS from: http://www.sun.com/servers/x64/x4500/support.xml 4.
This is normal and indicates that your system is in a power-saving mode.
This makes me think that it's still trying to run RAID instead of just one single drive.
For E-stepping Opteron Processors (model 252 or later): Choose the Hardware option, which enables the hardware to do memory remapping.
How to disable RAID in BIOS on MSI 648 Max - Started by GrandslammerBD, Jul 04 2009 04:26 PM
Boot from CD runs through "setup is loading files", hangs on "setup is installing windows" - failed install. Gonzosez posted 12-05-2003, 07:12
Edit the ifcfg file that corresponds to the device /etc/sysconfig/network-scripts/ifcfg-ethX. Make sure the Solaris Volume Manager sees the remote iso image. 5. This might be due to the system having boot drives in different slots than those of the original system.
On non-bootable drives, unconfigure the failed drive.
Try not. That would be my WD 1TB SATA Drive. These messages are informational only and do not indicate a failure: Sep 7 03:49:11 scsi: [ID 107833 kern.warning] WARNING: /[email protected],0/pci1022,[email protected]/pci1022,[email protected],1/[email protected]/[email protected],0 (sd0): Sep 7 03:49:11 Error for Command: read(10) Error Level: Fatal Change the lines for Scanning OPROM on PCI-X slots 0 and 1 to Disabled. 4.
FIGURE 2 Sun Fire NIC Naming Conventions Hardware, Firmware and BIOS Issues The following issues apply to the Sun Fire X4500 server hardware, firmware, service processor (SP) or BIOS: Recommended Racks Back to top #7 techextreme techextreme Bleepin Tech BC Advisor 2,125 posts OFFLINE Gender:Male Location:Pittsburgh, PA Local time:11:48 AM Posted 05 July 2009 - 12:05 PM What SATA Raid card Disable automatic reboot. my review here ILOM CLI Cannot Properly Parse Values Surrounded by Quotation Marks (6559544) When entering a value that contains spaces for the property binddn under /SP/clients/ldap, the value is incorrectly parsed and results
This can cause confusion and potential mishandling of the devices. Thus, some of the information in that document does not apply the the SunFire X4500 server. There are also optional software tools available on the Tools and Drivers CD (suncfg, HERD, cfggen, Disk Control and Monitoring utility). This might be caused by programmatic operator action (using cfgadm) or physically unplugging the device or by the device being reset or by poor signal integrity.
udev uses scsi_id program that gets the serial number of the device by sending a inquiry ioctl command (0x12). Never remove more than one hard disk drive, even temporarily, from a system that is running. A brief power interruption causes the host to hang rather than reset and restart. RHEL 4.x sata_mv driver Does Not Set 64-bit DMA Mask Causing "Out of IOMMU Space" (6752388) On a system running RHEL 4.x, when thousands of I/O intensive threads are run, the
These instructions apply to the Sun 10-Gigabit Ethernet PCI-X adapter card to work on the RHEL4 Update 4 64-bit. This issue was fixed in software release 1.1.8. It will find it fine. >-----Original Message----- >Trying to install XP Pro. >Boot from CD >runs through "setup is loading files" >hangs on "setup is installing windows" > > >. > SMF ©2014, Simple Machines - Theme ©2014 Micro-Star Int l Co.,Ltd.Mobile My Account | Log Out | Advertise Search: Home Forums About Us Geek Culture Advertise Contact Us FAQ Members List
This problem is transient. If this becomes necessary, a message will prompt you. The files and the instructions can be found here: http://www.siliconimage.com/support/search...d=28&cat=15Hope this helps, Techextreme"Admire those who attempt great things, even though they fail." -- Seneca Back to top #10 DaChew DaChew On some operating systems which do support it, PowerNow!
This causes the blue LED to light, signalling that you can replace the drive. Workaround: Use the interactive PXE installation method, which lets you select the appropriate ethernet device during run time. Use the ’backup’ command to restore the primary label. This indicates a hardware failure.
ZFS Data Synchronization Slows System Performance (6430480) ZFS data synchronization of data slows system performance. Functionality To Disable Write Cache Option on mv_sata Linux Driver Is Needed (6625187) The sudden powering off of system results in hardware errors.
|
OPCFW_CODE
|
How to integrate on-premises Microsoft Active Directory with AWS Managed Microsoft AD.
- Prepare On Premise AD
- Prepare AWS Directory
- Set up Trust
On Premise AD Setup
1. Configure your on-premises firewall so that the following ports are open to the CIDRs for all subnets used by the VPC that contains your AWS Managed Microsoft AD.
- TCP/UDP 53 – DNS
- TCP/UDP 88 – Kerberos authentication
- TCP/UDP 389 – LDAP
- TCP 445 – SMB
2. Ensure that Kerberos pre-authentication is enabled (that is, ensure the “Do not require Kerberos preauthentication” option is not set)
3. Set up domain forwarding by adding the FQDN of the AWS Directory Service domain and the IP addresses of its domain controllers, and replicate this setting to all DNS servers
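As a quick sanity check of the firewall rules above, a small script can probe whether the AD ports are reachable from the on-premises side. This is a sketch: it covers TCP only (UDP 53/88 cannot be probed this way), and the domain controller address is a placeholder.

```python
# Sketch of a TCP reachability probe for the AD ports listed above.
import socket

AD_PORTS = [53, 88, 389, 445]  # DNS, Kerberos, LDAP, SMB

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_dc(host: str, timeout: float = 2.0) -> dict:
    """Map each required port to its TCP reachability for one domain controller."""
    return {port: reachable(host, port, timeout) for port in AD_PORTS}

# Example (placeholder IP of an AWS Managed Microsoft AD domain controller):
# print(check_dc("10.0.0.10"))
```

Run this from a host on the on-premises network; any port reported False points at a missing firewall rule.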
AWS Directory Setup
Permit traffic from your on-premises network. This involves:
- select the AWS created security group for <yourdirectoryID> directory controllers.
- open around 15 port ranges for inbound traffic (UDP/TCP/ICMP)
- open up for all outbound traffic
Trust Setup
- Ensure that Kerberos pre-authentication is enabled (that is, ensure the “Do not require Kerberos preauthentication” option is not set)
- Log on to the on-premises AD
- Set up two way Forest Trust
- Set up forest-wide authentication
- Set up Trust password and save for later use on AWS Directory
- Connect to AWS Directory Service
- Select Add Trust Relationship, then enter the FQDN of the on-premises directory service, the trust password, and choose Two-way as the type of trust relationship
- Add IP Addresses of the two on-premise DNS servers as conditional forwarders
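The AWS-side half of these steps can also be driven through the SDK. Below is a sketch that assembles the parameters for the boto3 `ds` client's `create_trust` call; the directory ID, domain name, password, and DNS addresses are placeholders, and the actual API call is shown commented out so the sketch runs without AWS credentials.

```python
# Sketch: build the request for a two-way forest trust with conditional
# forwarders, mirroring the console steps above. All values are placeholders.

def build_trust_request(directory_id: str, remote_domain: str,
                        trust_password: str, dns_ips: list) -> dict:
    """Assemble the keyword arguments for ds.create_trust."""
    return {
        "DirectoryId": directory_id,
        "RemoteDomainName": remote_domain,       # FQDN of the on-premises AD
        "TrustPassword": trust_password,         # must match the on-prem side
        "TrustDirection": "Two-Way",
        "TrustType": "Forest",
        "ConditionalForwarderIpAddrs": dns_ips,  # on-premises DNS servers
    }

params = build_trust_request("d-1234567890", "corp.example.com",
                             "trust-password", ["10.0.0.10", "10.0.0.11"])
# import boto3
# boto3.client("ds").create_trust(**params)
```

Passing the conditional forwarder addresses here covers the last step above in the same call.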
Use Cases for Hybrid Directory
- when AWS services require access to on-premises resources
- when on-premises services or users require access to AWS resources
- when transitioning services from on-premises to AWS
Use Cases for not using Hybrid Directory
- when access is needed only for a short period of time and an AD Connector can be used
- when access to AWS services from on-premises can be managed by a role
Advanced Features for Hybrid Directory
- deploying additional domain controllers increases the redundancy, which results in even greater resilience and higher availability. This also improves the performance of your directory by supporting a greater number of AD requests
- use Active Directory Migration Toolkit (ADMT) along with the Password Export Service (PES) to migrate users from your self-managed AD to your AWS Managed Microsoft AD directory. This enables you to migrate AD objects and encrypted passwords for your users more easily.
- create an access URL for AWS services. An access URL in the format <yourgloballyuniquealias>.awsapps.com is used with AWS applications and services, such as Amazon WorkSpaces, to reach a login page that is associated with your directory.
- You can also enable single sign on, so the user does not need to log in again to use the Amazon service.
Several AWS services are integrated with AWS Directory Service, so you can use this hybrid setup to give on-premises users access to:
- Amazon Chime
- Amazon Connect
- Amazon FSx for Windows File Server
- Amazon QuickSight
- Amazon Relational Database Service
- Amazon WorkDocs
- Amazon WorkMail
- Amazon WorkSpaces
- Amazon WorkSpaces Application Manager
|
OPCFW_CODE
|
We have great news for administrators of Aiven for PostgreSQL and all application developers using it: Your life just got easier with PostgreSQL 13.
The new version of Postgres, one of the all-time greats of open source databases, includes housekeeping operations that keep your database size more manageable.
In this post, we’ll cover the most important improvements and how you can reap the benefits of them.
B-tree indexes are deduplicated
Duplicate entries - which occur when you index non-unique data but also when updating rows with unique indexes - are found in B-tree indexes more commonly than most people think. This results in larger indexes than needed, which slows down performance and increases storage costs.
PostgreSQL 13 solves this problem by merging duplicate key values in B-tree indexes into a single posting list tuple so that the column key values only appear once. In indexes where duplicate values often occur, this can reduce index size dramatically. It also increases query throughput and makes routine vacuuming faster.
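One caveat worth knowing: deduplication applies to indexes created (or rebuilt) under version 13, so indexes carried over by an in-place upgrade only benefit once reindexed. A sketch, with an illustrative index name:

```sql
-- Rebuild a carried-over index so it picks up deduplication,
-- then check its on-disk size:
REINDEX INDEX CONCURRENTLY idx_orders_customer_id;
SELECT pg_size_pretty(pg_relation_size('idx_orders_customer_id'));
```

`REINDEX ... CONCURRENTLY` avoids blocking writes to the table during the rebuild.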
And speaking of vacuuming…
Parallel vacuuming of indexes
This is the feature we’ve all been waiting for!
The VACUUM command removes rows that are no longer visible due to updates or deletions, among other things. In previous versions, vacuuming could take a long time for large tables with multiple indexes. In Postgres 13, processing times are drastically shorter now that a worker can be allocated to each index and they can be vacuumed in parallel with a single vacuum process.
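The degree of parallelism can also be requested explicitly; the table name and worker count below are illustrative:

```sql
-- Ask for up to 4 parallel workers; PostgreSQL caps this at the number of
-- indexes on the table and at max_parallel_maintenance_workers.
VACUUM (PARALLEL 4, VERBOSE) orders;
```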
Improved partitioning support
Partitioning enables you to accelerate queries, improve bulk loads and deletion, and manage large tables with ease, and Postgres 13 ships with several partitioning enhancements that give you more ways to split tables. Partition pruning is now allowed in a wider variety of cases and partitions can more often be directly joined, improving query performance.
Incremental sorting
One of PostgreSQL 13’s most prominent new features is incremental sorting. When a group of data sets is already sorted by some column(s), the result can be used as a basis for further sorting later, thereby reducing the volume of data that needs to be sorted with each query. Of course, the efficiency improvement depends on the data in question, so the PostgreSQL 13 optimizer heuristically decides whether to use incremental sorting or not.
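You can see the new behavior in a query plan; the table and column names here are illustrative:

```sql
-- With an index on (customer_id) already providing order on that column,
-- PostgreSQL 13 can sort only within each customer_id group:
EXPLAIN SELECT * FROM orders ORDER BY customer_id, created_at;
-- When chosen, the plan shows an "Incremental Sort" node with
-- "Presorted Key: customer_id".
```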
For a full list of version improvements, check out PostgreSQL's release notes.
Ready to clear the clutter with managed PostgreSQL 13?
You’ll need to upgrade to PostgreSQL 13 to take advantage of these features. If you already use Aiven for Postgres, you can easily run an in-place upgrade to migrate to the newest version.
Not using Aiven for PostgreSQL yet? Try out the latest PostgreSQL version with our free 30-day no-commitment trial. Get started by signing up and choosing version 13 when creating your Postgres service. Alternatively, you can also use our Terraform tooling, REST API or CLI for service creation. To find out more about PostgreSQL, read our Introduction to PostgreSQL.
PostgreSQL 13 is packed with next-level features that enable you to improve query performance, rapidly deduplicate data, vacuum indexes in parallel, and sort data more effectively. Head over to the Aiven console and get the upgrade started, so you can enjoy these improvements.
Not using Aiven services yet? Sign up now for your free trial at https://console.aiven.io/signup!
|
OPCFW_CODE
|
Sharing application data and functionality over the Internet to external divisions and partners requires trust between two applications in different identity domains. Establishing this trust in user-machine interactions is challenging, and harder still in machine-to-machine SOA and cloud environments.
For a client application in one domain to request information from a Web service residing in a different domain, the client will need to present proof of its identity using a credentialing authority trusted by the Web service. The receiving service will need to be able to understand and evaluate the presented credentials to assess an identity’s validity, while also having evidence that the credentials were not tampered with or spoofed during transit. The challenge therefore is in finding a way to both federate identity and establish trust between machines in disparate identity domains.
Several identity federation products have been introduced in recent years based on a Security Token Service for handling identity mapping and secure token generation. However, these products tend to focus on Web Single Sign-on and Web federation since they implicitly leverage Web browsers for handling trust (through user inputted credentials), client-side cookie or token caching and address redirection. Since there is no browser analogue in Web services, the problem of trust, token acquisition, token caching and token transmission is more complicated.
To enable interactions between client applications and Web services residing in different identity domains, both the client application and the Web service must be able to establish trust with one another and exchange identity information that has meaning in both domains. In machine-to-machine SOA and cloud interactions this will require some kind of PKI-based mechanism for establishing trust between a client application and a Web service. Moreover, to reconcile identity information, both the client and service will need to interact with a trusted Security Token Service (STS) that can handle SAML token generation, translation and validation between identity domains. For Web services clients this will require both an ability to generate digital certs and an ability to request from an STS a secure token that provides proof of identity in the Web service’s domain, package it into a signed SOAP call, and transmit the secured SOAP message to a Web service. For a Web service this requires an ability to consume and process the secure token generated by the STS and then use it to make authentication and authorization decisions, along with generating new credentials for downstream transmission. Given the diversity of token types, the multi-vendor STSs needing support, the complexity of PKI, and the evolution of Web services security standards like WS-Trust and WS-Federation, the problem of enabling secure Web services federation is likely too challenging for developers to handle themselves.
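For a sense of what such a token request carries on the wire, here is a minimal WS-Trust 1.3 RequestSecurityToken asking an STS to issue a SAML 2.0 token. This is an illustrative fragment only: the endpoint address is a placeholder, and a real request would be carried inside a signed SOAP envelope.

```xml
<wst:RequestSecurityToken
    xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512"
    xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
    xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <!-- Ask for a SAML 2.0 assertion -->
  <wst:TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</wst:TokenType>
  <wst:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</wst:RequestType>
  <!-- The Web service the token will be presented to (placeholder address) -->
  <wsp:AppliesTo>
    <wsa:EndpointReference>
      <wsa:Address>https://service.example.com/orders</wsa:Address>
    </wsa:EndpointReference>
  </wsp:AppliesTo>
</wst:RequestSecurityToken>
```

The STS responds with a RequestSecurityTokenResponse containing the issued SAML assertion, which the client then embeds in the WS-Security header of its call to the target service.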
Layer 7 is the only XML security vendor to offer enterprises a solution for managing Web services federation from client application to Web service without programming, as well as to provide a built-in SAML-based Secure Token Service. The Layer 7 Web service federation solution can integrate with leading identity management, federation and security token services. The Layer 7 SecureSpan XML Firewall and the SecureSpan SOA Gateway also provide customers a flexible SAML-based Security Token Service (STS) appliance for consuming, validating, creating and transforming security tokens including Kerberos, SAML 1.1 and 2.0. Likewise, the SecureSpan XML VPN Client provides an admin-configurable tool for establishing PKI-based trust on a client application, managing token requests from an STS (3rd party or Layer 7), and packaging a token into a secure SOAP call. The SecureSpan XML VPN Client automatically manages token negotiation using standards like WS-Trust and WS-Federation, and packages SOAP calls on the client application using WS-Security and WS-I Basic Security Profile, among other standards. All this is accomplished with zero upfront code and no downtime for policy updates.
|
OPCFW_CODE
|
While the ‘first wave’ of programmers were those who found keyboard-shortcuts a time-saver, this ‘second wave’ will need to draw from the pool of visual thinkers, as well. A truly visual IDE would provide numerous advantages for that group, and it would aid comprehension and engagement with non-programmers.
The rest of your team trickles onto the work-chat-channel, and each of you reviews the docket. A few of them begin asking each other questions, their QA appearing in branches of the chat-history. You watch the right side of your screen, where whiteboard-thumbnails display some others’ wiring-drawings for different parts of the project.
You click on one programmer’s whiteboard-thumbnail, to zoom-in and watch them code live for a moment. They have been drawing boxy circles, with labels, and different kinds of arrows, also labeled, between them. The main functional blocks and flow-control are taking shape. You settle your cursor on one of the arrows, to see a pop-up box with your coworker’s comments about that arrow’s meaning and purpose. You right-click, and go to — the video-scroll at the bottom of your screen leaps to the point where your coworker wrote the comment. You can start filling-in the details, there.
You zoom into the arrow and the two boxy circles it joins. You say “yes” to the prompt, before it finishes asking you if you are starting a new branch. You scribble your branch name atop your whiteboard-view, while explaining what you are doing: “These are the internal operations that pass outputs, their outputs’ types, conditions for their use, and the functions receiving that data.” Inside the outputting circle, you lay smaller circles, naming them out loud as you go along. You label the data types, and draw conditionals’ arrows among them. You finish wiring outputs, writing comments, linking mentions of functions in those comments to the functions themselves, for clarity, and responding to the prompts that ask you about garbage collection and tests. You click done, to push the branch.
You move to another coworker’s whiteboard. They are taking a list-display of input files, and operating on them manually, creating action-samples to construct a filter. After manually filtering a few files, they highlight the action-sample snippet of one of those operations on their video-scroll, and click generalize. You hear their translator-bot explain the generalization they are performing, as they label and generalize each file’s qualities. “Like this file, but generalize to any capitalized name preceding the dot.” “Like this file, and only within directories named in this other file…”
You scroll back in their video, to their manual operations, click new branch, and leave a flag. You name the flag “an example that may break your generalization, here.” You link ‘here’ to the point in their video where they mentioned ‘filtering within directories’, and write a comment on it. Skipping back to the list-display of inputs, you write an example filename, and draw its parent directories. You explain out loud, while drawing a box with the source filename, “If your directory was added to the source file’s list, but its parent directory was also added…” Your coworker must have seen the flag, and watched as you were doing all this — their message appears in the corner of your screen: “Ah, thanks — generalizing that now!” You relax back, and click ghost branch, with a link to their message as your final comment.
Emacs saves you precious seconds, according to its tutorial, by providing keyboard shortcuts for everything. This, apparently, keeps your hands away from the mouse, which would waste time. Yet, shell manual pages are organized alphabetically, rather than by operation, so you must scroll through and read each one, to find the option you seek. (If I know the meaning I want, and I need to find the options that have that meaning, I should look in a thesaurus, not a dictionary.) There is a disconnect, here.
Computers should do what they are good at, so that humans can do what we are good at. A computer could automatically generate the fields of syntax for a command, so that you don’t have to spend a few minutes reviewing the man page. Computers are good at that kind of thing. As soon as you type ‘grep’, your IDE could generate ghostly fields to the right of your cursor, each labeled according to grep’s syntax: ‘options’ ‘pattern’ ‘patterns from file’ ‘applied to files…’. A right-click on the ‘options’ field would display all the options available to grep, arranged under headers declaring their function, like in a thesaurus. That would save a lot more time than keyboard shortcuts.
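The idea above can be sketched in a few lines: a catalog that maps a command to its labeled syntax slots, plus a thesaurus-style grouping of its options under functional headers. The catalog entries below are a hand-written stub, not a real parser of man pages.

```python
# Toy sketch of the "ghost fields" idea for a visual IDE.

SYNTAX = {
    "grep": ["options", "pattern", "patterns from file", "applied to files..."],
}

OPTIONS_BY_FUNCTION = {
    "grep": {
        "matching":  ["-i (ignore case)", "-w (whole words)", "-E (extended regex)"],
        "output":    ["-n (line numbers)", "-c (count only)", "-l (names only)"],
        "traversal": ["-r (recurse directories)"],
    },
}

def ghost_fields(command: str) -> list:
    """Return the labeled syntax fields an IDE could render after the cursor."""
    return SYNTAX.get(command, [])

def options_menu(command: str, function: str) -> list:
    """Return the options filed under one functional header, thesaurus-style."""
    return OPTIONS_BY_FUNCTION.get(command, {}).get(function, [])
```

Typing ‘grep’ would call ghost_fields("grep") to draw the labeled slots; right-clicking the ‘options’ slot would render the headers of OPTIONS_BY_FUNCTION["grep"] as a menu.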
Additionally, a visual IDE could rely on a palette of commands, so that functions are laid within code using a drag-and-drop. In shell, you could highlight a snippet of code, and drag it to an empty palette box, to create an alias. Drag a copy of it from the palette, into another command’s syntax, to say ‘the output of this aliased command is fed as input to this syntax field’. People who are familiar with mouse+keyboard operations, from games, photo and video editors, would have an environment adapted to their style. That’s why Mac OS is supposed to be superior, right? [:
Most People are Visual Thinkers
Most of the people needed for our ‘second wave’ of programmers will be visual thinkers. And, with a visual IDE, code would benefit from the strengths of visual thinkers. If a visual IDE lets a user drag outputs to their inputs, typo-bugs disappear. (You could script “If this, then that, else this-other,” and leave the inputs vague for a while. Later, you drag the appropriate section’s output into the appropriate field, telling your IDE what this, that, and this-other are… If a function is re-named, its drag-and-drop wiring is preserved!) Scripting layout and comments would be unified at the IDE-level, and then matched to the respective language by the IDE itself. (Commenting with /* or ## becomes irrelevant — and should be irrelevant! That’s something the computer ought to fit to your work, while you declare comments in a comment-box, like a sticky note…) Bugs related to flow, dependencies, garbage collection, and namespaces would be easy to catch.
Chris Granger’s Light Table, and Bret Victor’s talk “The Future of Programming”, are early steps toward this visual IDE. Festo, the German robot manufacturer known for its biology-inspired designs, relies upon the same principle (at 3:00min onward, in that video). There are games for learning to code, and interfaces that let you write simple instructions with a drag-and-drop palette, but none of these is sufficient for visual thinkers to write production-quality software.
Whoever develops a visual IDE opens programming to a swath of new designers. And, with a visual interface, non-programmers would be more able to see and understand what code is doing. Instead of discovering too late that their client wants something else to happen, programmers could avoid costly and time-consuming misunderstandings by displaying the code as whiteboarded wirings, and generating visual demonstrations of that code in operation. Time saved with keyboard shortcuts pales in comparison to time lost reading dysfunctional manual pages, and correcting simple bugs. If you are interested, I’d be glad to work with you!
|
OPCFW_CODE
|
mysql swap file crash on freebsd
I have a low-memory VPS on which mysql and the swap file crash out every morning at exactly 0300.
There are no cronjobs on the system that have been configured. The server is a basic LAMP development server and all settings are defaults.
cat /var/log/messages|grep -i mysql
Jun 25 20:51:07 vader sshd[72946]: error: PAM: authentication error for mysql from <IP_ADDRESS>
Jun 28 03:01:34 vader kernel: pid 848 (mysqld), uid 88, was killed: out of swap space
Jun 28 03:01:34 vader kernel: pid 93947 (mysqld), uid 88, was killed: out of swap space
Jun 29 03:01:32 vader kernel: pid 98578 (mysqld), uid 88, was killed: out of swap space
Jun 29 03:01:33 vader kernel: pid 2586 (mysqld), uid 88, was killed: out of swap space
My swap file is 1 gig. I tried 2 gigs, the same pattern of crashing begins after a week.
ls -l /home/sw*
-rw------- 1 root wheel<PHONE_NUMBER> Jun 21 13:19 /home/swap0
Even worse, I can't re-initialize the swap file without a reboot
swapoff -a
mdconfig -a -t vnode -f /home/swap0 -u 0 && swapon /dev/md0
mdconfig: ioctl(/dev/mdctl): Device busy
I don't have much memory to work with:
# vmstat
procs memory page disks faults cpu
r b w avm fre flt re pi po fr sr vt0 md0 in sy cs us sy id
1 0 3 1709M 491M 44 0 0 0 54 29 0 0 4 135 94 0 0 100
But I shouldn't need it if mysql was running correctly.
Two questions.
no. 1) How do I reinitialize a swap file after it crashes so I don't have to reboot (I would just like to know because everything I find on google fails)?
no. 2) How do I stabilize mysql so that it doesn't burp at 0300 for massive amounts of memory?
1. There's a scheduled task somewhere.
2. You can't; your system ran out of memory and it's in an unstable state at best. Reboot.
3. Maybe you should get more RAM added to your VPS.
This issue still exists. The only error in the logs was a DHCP query; after removing DHCP and setting a static IP, same issue. :(
I was just seeing this behaviour as well, except in my case the mysql server was dying on a weekly basis instead of a daily basis. It happened at the same time as the weekly periodic tasks run (see /etc/crontab and /etc/periodic/{daily,weekly}/). From your log it looks like it happens at 03:01, which is during the daily periodic task run (it starts at 03:01 by default). There are a few possible solutions:
Avoid running the periodic tasks, or at least identify which task is eating up memory. In my case it was the makewhatis command that runs weekly
Add more swap. Past a certain point there should be enough memory for the server to coexist with the periodic tasks
Add another periodic task that runs after the others and restarts the server
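A minimal sketch of that diagnosis, assuming the stock FreeBSD layout (paths and the whatis knob name may differ by version):

```shell
# Illustrative sketch: the stock FreeBSD /etc/crontab fires the daily
# periodic run at 03:01, which lines up with the crash timestamps above.
crontab_line='1	3	*	*	*	root	periodic daily'
echo "$crontab_line"

# To see which daily/weekly script is the memory hog, run them by hand:
#   periodic daily
#   periodic weekly
# and watch memory with top(1). If makewhatis is the culprit, disable its
# weekly rebuild (variable name assumed from /etc/periodic/weekly/320.whatis):
#   printf 'weekly_whatis_enable="NO"\n' >> /etc/periodic.conf
```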
|
STACK_EXCHANGE
|
I2G - VIDEO PRESENTATION - GUIDELINES
PLANNING FOR CONTENT
Your video presentation will be streamed at the public I2G event and will be (optionally, depending on partner approval) available for later download or streaming by everyone. Therefore, your video MUST BE APPROVED BY YOUR INDUSTRY PARTNER for public release.
1. Ask your industry partner ASAP what content is allowed for public release and what content MUST NOT be shown (if any).
2. Write the short summary (follow instructions on content upload for your class).
3. Plan the script and presentation for the video.
4. Submit to client for approval.
5. Develop your video.
6. Submit to client for approval.
7. If partner does not approve, revise it till approved.
Problems or help? Please contact your instructors or email to: email@example.com
CHECKLIST (TECHNOLOGY AND PREPARATION):
● Test Windows Movie Maker, on Mac iMovie (both free) for stitching video
● Check if you can run the Zoom Virtual Background on your computer
● Test the approved I2G virtual background https://ucmerced.box.com/s/rvd24ng4hyptg27rp5cposeo0b8ad6rl
● Prepare a pointer or similar means to indicate what you are referring to on a slide, software demo, or screen.
● Test audio, and make sure that the volume of all voices will be balanced.
● Check for possible background noises that you don't hear but show up in zoom recording
● Use the same equipment setup (computer, microphone, location) that you will be using during Innovate to Grow Q&A session.
● Check if camera has any washout
● Plan ahead what you want to present, and record the content in different sessions. Later on, just stitch the recordings together: on Windows Movie Maker, on Mac iMovie (both free)
• When making the video presentation, consider your audience. In the case of the final presentation your audience is technical and/or business oriented: faculty, TAs, industry partners, judges etc. The content and delivery should be chosen accordingly.
• Plan for delivering the content within the maximum time allowed (depends on the Class - refer to specific guidelines by your instructor).
• Please make sure to check with your mentor/client all materials for submission regarding confidentiality concerns. You must make sure your client agrees with making that information public. If there are any concerns on this topic, please contact us or explain them as a comment in your submission. Remember, your video will be streamed to a wide audience (and beyond), so take this seriously!
• Please make sure to include the following in your first and last slides:
o The project title as specified by the industry partner (these can be found in the files and announcements shared to your specific Class).
o Team name chosen by you (follow Guidelines from your Class).
o Your team number (follow Guidelines from your Class).
o Your industry partner information (Company Name).
o List of all your names and (optional but recommended) your contact info, or LinkedIn for your professional contact. Some teams may add photos, github etc.
• Place a footer in all other slides with the following information:
o Team name (and team number in parenthesis) – see the example above.
o Project title – as specified by the industry partner.
o Industry partner name – E.g. Veracruz Ventures or Cisco Systems.
o Slide number
• In your presentation, include a pause of a couple of seconds in the first and last slide to give viewers time to take notes of the information being shown.
• In the presentation, remember to end on the last slide as discussed with project and team info (not on a “Thank you!” or “Questions?” or End-of-Presentation-blank-default-screen)
These steps are important because
- Judges may have to enter your team reference at any time during your presentation
- potentially interested attendees can note down your contact information
- you have more opportunity to promote yourself.
In your video recording, please remember, after your closing statements on the last slide of project / team info, to continue the video recording **while resting on that last slide, muted, for 10-15 seconds**. This will allow the technical hosts of Zoom to pause the video and start the Q&A while your team’s complete info is on screen, reinforcing the comments above.
IMPORTANT: Judges will have an online questionnaire and need to input the team name and number when judging the teams. This process can (and often does) spill over to the subsequent presentation. This is why it is very important to keep the context of team number in the slides.
VIDEO PRODUCTION AND I2G EVENT GUIDELINES
• We encourage you to use video editing software to have a polished final product. PC users can try Windows Movie Maker while Mac users can try Mac iMovie. Both are free to use and allow video splicing and basic editing. More powerful features are available in screencasting software such as OBS studio, which is free and compatible with all common operating systems. PowerPoint also allows you to record your presentation as an HD video including a voiceover.
• You do not need to record live from Zoom. In fact, we encourage you to not follow this approach since the quality of the final product is low and there may be glitches. Instead, consider having each person record their part separately and splicing all contributions together.
• You can also consider having each person record only their voice narrating their corresponding sections and then superimpose these over the slides in post-production. You can use free software like Audacity to edit/improve your audio recordings.
• Test the audio of each person before recording to make sure they are all at the same level.
• Please try to use the Zoom virtual background during all your I2G video calls. You can find the approved I2G background at https://ucmerced.box.com/s/rvd24ng4hyptg27rp5cposeo0b8ad6rl.
• If showing yourself in the presentation video or the I2G event, dress professionally, as if you were presenting on stage. No tux or gown needed (nor PJs!).
• Please use a virtual pointer or some other means (such as PowerPoint animations) to indicate to your viewer what you are referring to at any moment. Remember that your audience is not in front of you, so they may get easily lost.
• When recording your presentation, try to use the same equipment setup (computer, microphone, location) that you will be using during Innovate to Grow Q&A.
PRESENTATION DO'S AND DON'TS
DO:
• Plan ahead for the content of your presentation and its delivery. Improvisation is not a method. Plan out what you want to present, and record the content in different sessions, and then splice the recordings together (Windows Movie Maker, Mac iMovie, OBS, Audacity, etc.).
• Clearly explain and focus on the problem/project, and the design and value of your specific solution/design (spend minimal time on standard stuff, if any).
• Clarify constraints and limitations of the problem-project.
• Describe the problem first, then your solution.
• Make sure when you show a demo of your product, the actual screen of the demo is full screen and legible.
• Explain acronyms unless widely known.
• Use the approved I2G virtual background available at https://ucmerced.box.com/s/rvd24ng4hyptg27rp5cposeo0b8ad6rl.
• Make sure there is no background noise: alert anyone surrounding you to not be making noises when recording starts.
• Make sure the volume of all voices is balanced.
• Use a pointer when talking about certain items in the slides or demo.
• When starting the video on the first slide, pause for a couple of seconds as audience focuses, then introduce the team and members, then industry partner, then the project.
• When ending the video, pause a second on the last slide after finishing the presentation, and before closing the video.
DON'T:
• Waste time on the history of the client, rather than the value of your project.
• Waste time showing standard app stuff like authentication log in and out, which diverts attention from unique functionality. Instead, say that you are using the secure, state of the art, proven authentication.
• Include in your screencasting your browser showing tabs, toolbars, menus etc. taking screen space and showing private information.
• Describe the solution without first explaining the problem.
• Speak in a soft, robotic, or otherwise dull voice. Instead, practice ahead of time to ensure a fluid and natural speech.
• Speak without any pauses – it can be hard to follow.
• Use slang like “to up the quality”. This may work in interactive speech, but not in pre-recorded presentations.
• Say something like “This app will only be good for business, not for personal use”. Why say that? You are only shooting yourself in the foot. How do you know it will/won’t be viral in something else?
• Use speech referring to objects on a busy screen. To what are you referring?!
• End with “and ... that's it!”. Instead, it is better to “thank” or “call for action”.
• Allow random external sounds to interrupt your video. If someone flushes the toilet or a car passes by, simply stop for a few seconds, start over from before the interruption, and then delete the bad parts using video editing software.
• Say things like “I did this” or “I designed that”. This is a team effort. Use “we” all the time.
• Display your slides in edit mode. They look too small and it is unprofessional. Make sure your slides are in full-screen presentation mode when recording.
• Use acronyms that are not obvious to everyone.
• Try to decide in real time who will be speaking next or what you will be saying. Plan ahead!!!
|
OPCFW_CODE
|
Hello, I have 6 domain controllers and over 500 Windows clients. Currently they are using just two domain controllers as DNS servers. I'm wondering if this is best practice, and what the benefit of the other 4 DC servers is. Has anyone had this scenario before, and what would best practice be here?
Different computers on the network would have different DNS servers. Not sure if it is needed; I am just trying to distribute the load on DNS.
Don't do that. You will make things more complicated than necessary. The load on DNS is nominal.
The other DCs are still reachable as logon servers and have the sysvol share available, at a minimum. Don't forget that while DNS being on a DC is (nearly) required, it's a totally separate role.
6 DCs for 500 clients seems like quite a lot to me, assuming we're talking about a single geographic location. You can add some redundancy by having the DNS servers point to each other in a ring, but it's a limitation of Windows that clients tend to handle only two DNS servers reliably.
DNS is an optional function of a domain controller (actually of a windows server, doesn't need to be a DC) - the primary function is to act as the authority for things like logons/access requests and controlling the policy imposed on objects within the domain.
You said you have 500 workstations, are you at just one site? Are all of your DCs at the same location?
Yes, there is good reason to have more domain controllers than DNS servers, and you don't even need to host DNS on a DC, or even on a Windows server; it just makes things more manageable in some ways. For example, if you have DHCP and DNS roles in your environment, you can ensure that names inside your domain resolve consistently and DNS entries stay updated.
You can have 500 PCs point to 2 DNS servers with no issues. You can also have 500 PCs point to different DNS servers depending on their DHCP scopes and network settings. It's up to you; both ways of doing it will work and neither is wrong. With AD-integrated DNS, it all works itself out.
There is little point in having DCs that are not being pointed to as DNS servers. If all of your DNS servers are down, your AD is down too, so what are the other DCs being used for? If you have multiple sites with local DCs, then why wouldn't you want to use the local DC for DNS as well? Everything else is an edge case. Some edge cases are wider than others, but I would see them as edge cases nonetheless.
IMHO you should point to the other DCs for DNS also, or demote them unless you have specific edge cases where they are useful.
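The advice above (use every DC for DNS, or demote it) can be turned into a quick audit. A hypothetical Python sketch; the server names and role sets are invented for illustration:

```python
def dcs_missing_dns(dc_roles):
    # Return domain controllers that are not serving DNS: per the
    # advice above, these are candidates to either add as DNS servers
    # for clients, or to demote.
    return sorted(dc for dc, roles in dc_roles.items() if "DNS" not in roles)

roles = {
    "DC1": {"AD DS", "DNS"},
    "DC2": {"AD DS", "DNS"},
    "DC3": {"AD DS"},
}
print(dcs_missing_dns(roles))  # ['DC3']
```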
|
OPCFW_CODE
|
I just checked back to see how much I’d written about MMFs (minimum marketable features). This is a technique I use and talk about a lot, so I thought I’d written more than I have.
I’ll provide here a few of the ways I use MMFs and why I feel that they are so helpful when devising incremental delivery strategies.
So, first things first. MMFs are really a business tool: a simple technique for devising and expressing the business's strategy for delivering some outcome. The sweet spot for this technique is in providing an escape from classical big-project thinking.
An MMF simply describes the smallest set of features that could achieve some outcome. This is only useful if you have decided to operate in an incremental manner. If you would like to generate benefit (read: cold hard cash) early and make decisions based on real data from experience gained in the marketplace, then MMFs are for you.
An MMF may focus on:
- providing a workflow, e.g. subscribing to a newsletter
- satisfying the need of some stakeholder, e.g. generating a report
- satisfying some persona, e.g. an advanced user
The key to an MMF is that it naturally provides a scope test. Since the first “M” stands for minimal, we should include no feature that could be removed without impact on achieving the benefit. This rule need not be applied dogmatically. However, where I have seen the technique used to greatest effect, the challenge was often levied: “What if we didn’t have that feature, do we still achieve our goal?”
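The scope test can be modelled in a few lines. A toy Python sketch; the feature names and the benefit mapping are invented for illustration:

```python
def minimal_feature_set(candidates, needed_for_benefit):
    # Apply the scope test: keep only features whose removal would
    # impact achieving the benefit.
    return [f for f in candidates if f in needed_for_benefit]

candidates = ["signup form", "email confirmation", "profile themes"]
mmf = minimal_feature_set(candidates, {"signup form", "email confirmation"})
# "profile themes" fails the "do we still achieve our goal without
# it?" question, so it is cut from the MMF.
```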
Using MMFs effectively relies on the understanding that there will be a series of MMFs. I’ve seen dysfunctional behaviour where an organisation had a habit of planning multiple releases but delivering only release 1.
We should remember that an MMF is not a promise to release. Rather, it is a recognition that some benefit could be achieved. There are many business reasons for not releasing the minimum that could work, and business motivations change over time and as development progresses. Two examples are:
- A team with fixed release dates
This team identifies an MMF that fits comfortably into the development period, plus a set of desirable additions beyond the minimum. The team commits to delivery of the MMF and typically delivers a small number of additional features from the desirable list. This benefits the business since the delayed delivery of the MMF is offset by the improved predictability.
- An organisation trailing in the market and keen to make a splash.
This organisation developed the product using MMFs. Each delivered MMF represented a deployment opportunity as well as a chance to solicit feedback. The MMFs helped provide focus throughout the project on achieving a viable product. However, the business waited until the product was competitive in the marketplace before the true go-live.
Technorati Tags: MMF
|
OPCFW_CODE
|
University of Delaware, Gao Lab in Geospatial Data Science and Human Dimensions of Global Change, United States
We are seeking a postdoc researcher at the University of Delaware, to work on an interdisciplinary project examining spatiotemporal patterns of the persistent application of cover crops (as an example of high-priority, sustainable agricultural land use practices) across the continental U.S. The goal is to understand how individual-level sustainable land use decisions aggregate to form large-scale landscape patterns that can yield environmental benefits. Starting with an exploratory data analysis of publicly available spatial data on cover crops and related natural and human systems, the project will generate new insights identifying associations between persistence of the sustainable practice and related environmental and social factors. Using spatial modeling and machine learning methods, we aim to explain the mechanisms that drive the landscape evolution. Creative extensions and co-development of new directions are encouraged.
The position offers an attractive salary and benefit package. Initial contract is for two years with the potential to renew.
The researcher will be expected to (1) identify, obtain, and manage appropriate datasets, (2) conduct exploratory data analyses, (3) build, calibrate, and validate empirical models of spatiotemporal patterns, (4) design and implement data-driven experiments, (5) carefully document data metadata and analytical procedures for reproducible science, (6) analyze, visualize, and interpret modeling and experiment results, (7) work collaboratively with a multidisciplinary team to incorporate diverse feedback, and (8) publish in peer-reviewed scientific journals.
The position is funded by an NSF HEGS grant and is also related to the NSF-EPSCoR Project WiCCED (Water in the Changing Coastal Environment of Delaware). The researcher will be mentored by Dr. Jing Gao (leading the Geospatial Data Science and Human Dimensions of Global Change lab) and Dr. Kent Messer (directing the Center for Experimental & Applied Economics), while working with interdisciplinary experts. There will be diverse opportunities to engage with the UD Data Science Institute (DSI) and the Center for Behavioral & Experimental Agri-Environmental Research (CBEAR) , a USDA Center of Excellence. The University of Delaware is a tier-1 research university and ranks among the top 100 universities in federal R&D support for science and engineering.
(1) PhD in a related field before starting, (2) solid background in geospatial analysis and modeling using quantitative and computational methods, (3) attention to detail with data manipulation and analyses, (4) proficiency with one or more scientific programming languages (e.g., Python, R), (5) excellent written and oral communication skills with demonstrated ability to publish scientific manuscripts, and (6) strong motivation and work ethic.
Research experience/familiarity with (1) geospatial applications of machine learning, data science, and spatial statistics, (2) publicly available datasets on U.S. agricultural, environmental, and socioeconomic variables, (3) data-driven spatiotemporal analyses integrating diverse data sources, and (4) causal inference for agricultural or environmental issues, especially agricultural land use practices. Experience with grant proposal writing is welcome, but not required.
Email a CV, a research statement highlighting relevant experiences and skillsets, unofficial transcripts, an example publication, and contact information for three references to Dr. Jing Gao ([email protected]), with the subject line “Cover Crop Postdoc Application – [Full Name]”.
Review of applications will begin on May 30, 2022, and will continue until a suitable candidate is identified. Shortlisted candidates will be contacted and interviewed virtually.
Start date is negotiable (ideally no later than September 1, 2022).
The University of Delaware is an Equal Opportunity Employer. Individuals from under-represented backgrounds are strongly encouraged to apply.
|
OPCFW_CODE
|
Aurigma Image Uploader 6.5 Dual
Installation and Deployment Notes
This topic provides instructions on how to install and uninstall Image Uploader SDK on the development machine. It also describes the files that should be deployed with an application which uses Aurigma Image Uploader 6.5 Dual.
Download the Image Uploader SDK (the ImageUploader.exe file) from http://www.aurigma.com/Products/DownloadFile.aspx?ID=105. Then run this file and follow the wizard steps. On one of these steps you specify the folder where all Image Uploader SDK files will be installed. Typically this is C:\Program Files\Aurigma\Image Uploader 6.5 Dual. After the installation completes you get a number of folders and files organized in the following structure:
/Image Uploader 6.5 Dual
    /ImageUploaderPHP
    /Samples
    /Scripts
    Aurigma.ImageUploader.dll
    ImageUploader.chm
    ImageUploader37.cab
    ImageUploader37.jar
To uninstall Image Uploader SDK open Control Panel, click Programs and Features, then select Aurigma Image Uploader 6.5 Dual and click Uninstall. After that click Yes, and follow the wizard instructions.
Image Uploader SDK contains many files and folders, including Image Uploader itself, special embedding tools, demo application sources, and documentation. Of course, you do not need to copy all these files to your production server. The files to be moved to the production server depend on the way used to embed Image Uploader into your Web application. There are three tools intended for this purpose:
Image Uploader ASP.NET Control
Image Uploader ASP.NET control is intended primarily for ASP.NET developers and enables them to use Image Uploader in a straightforward way, just like other ASP.NET server controls. See details in the Inserting Image Uploader ASP.NET Control into ASP.NET Page topic.
If you use this approach, you need to copy the Aurigma.ImageUploader.dll file to the /bin folder of your Web application.
Image Uploader PHP Library
The Image Uploader PHP library can be used with the PHP platform only and has nearly the same functionality as the ASP.NET control. Read more about this tool in the Inserting Image Uploader PHP into Page topic. If you embed Image Uploader using the PHP library, you need to deploy the /ImageUploaderPHP folder along with the other files of your application.
Image Uploader Embedding Scripts Library
When deploying an application which uses Image Uploader, do not forget to set sufficient permissions on the folder you save files to.
- On Windows NT/2000/XP you should grant the modify permission to the Internet guest user (IUSR_<machinename>).
- On Windows 2003 Server you should grant the modify permission to the NETWORK SERVICE group.
- On Windows Vista and 2008 Server you should grant the modify permission to the account your site application pool is running under, Network Service by default.
- For *NIX systems you should specify read and write permissions.
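Whichever platform you deploy on, it is worth verifying the permissions actually took effect. A generic pre-deployment check (plain Python, not part of the Aurigma SDK) that the account running your application can write to the upload folder:

```python
import tempfile

def upload_folder_writable(path):
    # Try to create (and automatically delete) a temporary file in
    # the folder. Run this as the same account your web application
    # pool uses, since permissions are granted per account.
    try:
        with tempfile.NamedTemporaryFile(dir=path):
            pass
        return True
    except OSError:
        return False
```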
|
OPCFW_CODE
|
As Ben and I discussed during the review of the initial CRC series,
inode allocation needs to log the entire inode to ensure the
replayed create transaction results in an inode with the correct
CRC. This means that the logging overhead of inode create doubled
for 256 byte inodes, and is close to 5x higher for 512 byte inodes.
Ben suggested that having a transaction to initialise buffers to
zero without needing to log them physically might be a way to solve
the problem. It would solve the problem, but I already have a
patchset from a few years back that introduces a new inode create
transaction that doesn't require any physical logging on inodes at
all.
This patch series is a forward port of my original work from 2009
(hence the SOBs being from david@xxxxxxxxxxxxx) with a couple of
more recent patches that will also help reduce inode buffer lookups
and hence improve performance.
The first two patches are for reducing the number of inode buffer
lookups. When we are allocating a new inode, the only reason we look
up the inode buffer is to read the generation number so we can
increment it. This patch replaces the inode buffer read with randomly
calculating a new generation number, resulting in an inode
allocation being a purely in-memory operation requiring no IO. There
is a caveat to that - for people using noikeep, we still need to
ensure the generation number increments monotonically so we only
take the new path if that mount option is not set. This reduces
buffer lookups under create heavy workloads by roughly 10%.
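A toy sketch (Python, not the actual kernel C code) of the generation-number choice described above; the function name and arguments are invented for illustration:

```python
import random

XFS_GEN_MAX = 2**32 - 1  # the generation number is a 32-bit field

def new_inode_generation(prev_gen, noikeep=False):
    # With the 'noikeep' mount option the generation must keep
    # incrementing monotonically, which is why the old value has to
    # be read from the inode buffer.
    if noikeep:
        return (prev_gen + 1) & XFS_GEN_MAX
    # Otherwise just pick a random generation: no buffer read, no IO.
    return random.randrange(XFS_GEN_MAX + 1)
```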
The second patch removes a buffer lookup and modification on unlink
that was added for coherency with bulkstat back when bulkstat did
non-coherent inode lookups. bulkstat is using coherent lookups
again, so the code in unlink is not necessary any more.
The remaining 5 patches are the new icreate transaction series. The
first patch introduces ordered buffers. These are buffers that are
modified in transactions but are not logged by the transaction. They
have an identical lifecycle to a normal buffer, and so pin the tail
of the log until they are written back. This enables us to log a
logical change and have all the physical changes behave as though
physical logging had been performed. This is used for the inode
buffers by the new icreate transaction.
The rest of the patches are simply mechanical - introducing the
inode create log item, the changes to transaction reservations (uses
less space in the log), converting the code to selectively use the
new logging method and adding recovery support to it.
Right now the code will use this transaction if the filesystem is
CRC enabled. Given that CRC enabled filesystems are experimental at
this point, adding a new log item type should not be a major problem
for anyone using them - just make sure the log is clean before
downgrading to an older kernel...
The patchset passes xfstests on non-CRC filesystems without new
regressions and the initial two patches are resulting in a ~10%
improvement in 8-way create speed and a ~15% improvement in 8-way
unlink speed. I don't have any numbers on CRC enabled filesystems as
I've been working on the userspace CRC patchset and getting that
into shape rather than testing and benchmarking kernel CRC code...
Comments, thoughts, flames?
PS. I'm working on an equivalent patchset for unlink that logs
the unlinked list as part of the inode core for CRC enabled
filesystems. That's a little bit away from working yet, though...
|
OPCFW_CODE
|
|Home | GridMPI | GridTCP | Publications | Download|
Current Release: GridMPI-2.1.3
Project in Brief
GridMPI is an implementation of the MPI (Message Passing Interface) standard designed for high performance computing in the Grid. It establishes a synthesized cluster computer by binding multiple cluster computers distributed geographically. Users are able to seamlessly deploy their application programs from a local system to the Grid environment for processing a very large data set, which is too large to run on a single system.
GridMPI aims to make global communications efficient by optimizing the behavior of protocols over links with non-uniform latency and bandwidth, and to hide the details of the lower-level network geometry from users. The GridMPI project is working to provide variations of collective communication algorithms, an abstract layer to hide network geometry, and an interface to the TCP/IP communication layer to make it adaptive to those algorithms.
PSPacer is a precise software pacer of IP traffic for Linux, which regulates bandwidth and smooths bursty traffic. It is implemented as a Linux loadable kernel module, yet it controls traffic at a very high precision (less than a microsecond!) that was previously possible only with special hardware. PSPacer is a standalone module and is not bound to GridMPI. Its applications vary widely, for example high-bandwidth TCP/IP streaming and traffic control of low-bandwidth lines.
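According to the project's publications, PSPacer achieves this precision by transmitting dummy "gap" packets (realized as IEEE 802.3x PAUSE frames) between real packets, so that the gap length, rather than a timer, determines the transmission rate. A simplified sketch of that arithmetic, ignoring framing overhead:

```python
def gap_bytes(packet_len, link_bps, target_bps):
    # After each real packet of packet_len bytes, insert this many
    # bytes of gap traffic so the real packets average target_bps
    # on a link_bps wire (simplified: framing overhead ignored).
    return int(packet_len * (link_bps / target_bps - 1))

# 1500-byte packets on a 1 Gbit/s link, paced down to 500 Mbit/s:
print(gap_bytes(1500, 10**9, 5 * 10**8))  # 1500
```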
GridMPI and PSPacer are open-source free-software. The software is downloadable from the download page.
PSPacer version 3.0 is Released on Mar 31, 2010. This is a major release and includes dectd 1.0.1. For more information, see the PSPacer Project page.
Dectd version 1.0.0 is Released on Dec 22, 2009. This is an extra utility package of PSPacer. Dectd (dynamic execute and configure traffic control command daemon) is a daemon program that automatically configures the Linux traffic control mechanism to support IP flow-based traffic control. The traffic control mechanism of the Linux operating system is basically based on class-based QoS. To achieve flow-based QoS on top of the class-based QoS mechanism, the high operational cost when the number of flows is large must be addressed. In this system, the system administrator specifies simple rules (e.g., ranges of IP addresses and port numbers) in advance. The system detects IP flows by interposing system calls and automatically configures flow queues (a combination of a qdisc and a filter) according to the specified rules. In this way, the operational cost is significantly reduced. For more information, see the PSPacer Project page.
PSPacer version 2.1.2 is Released on Aug 19, 2009. This is a minor release and fixes the installation problem on x86_64 machines. For more information, see the PSPacer Project page.
GridMPI version 2.1.3 is released on Mar 17, 2009. This minor release includes some bug fixes. GridMPI version 2.1.3 is fully tested in a heterogeneous environment for combinations of Linux/IA32, AIX/Power, and Solaris/SPARC64V.
GridMPI version 2.1.1 is released on Apr 10, 2008. This minor release includes some bug fixes. GridMPI version 2.1.1 is fully tested in a heterogeneous environment for combinations of Linux/IA32, AIX/Power, and Solaris/SPARC64V.
GridMPI version 2.1 is released on Mar 31, 2008. This minor release includes some enhancements and bug fixes. GridMPI version 2.1 is fully tested in a heterogeneous environment for combinations of Linux/IA32, AIX/Power, and Solaris/SPARC64V.
GridMPI version 2.0 is released on Nov 10, 2007. This is a major release for the first time in one and half years. GridMPI version 2.0 is fully tested in a heterogeneous environment for combinations of Linux/IA32, AIX/Power, and Solaris/SPARC64V.
PSPacer version 2.1 is Released on Mar 15, 2007. PSPacer version 2.1 provides libnl (netlink library) support and minor bug fixes. For more information, see the GridTCP Project page.
PSPacer version 2.0.1 and 1.2 are Released on Apr 17, 2006. PSPacer version 2.0 supports a new feature: dynamic pacing mode. PSPacer version 1.2 is a stable release. And both releases include pspd (PSPacer control daemon) support. The source tarball and the binary RPM for FedoraCore 5 are available. For more information, see the GridTCP Project page.
GridMPI version 1.0 is released on Apr 13, 2006, after eleven minor releases of version 0.x series. GridMPI version 1.0 is fully tested in a heterogeneous environment for combinations of Linux/IA32, AIX/Power, and Solaris/SPARC64V. It now includes major MPI-2.0 features.
PSPacer version 1.1 is Released on Sep 20, 2005. Some bugs are fixed. If you use the older version, please update it. The source tarball and the binary RPM for FedoraCore 4 are available. For more information, see the GridTCP Project page.
PSPacer version 1.0.2 is Released on Jul 25, 2005. A new version of PSPacer is available. The binary RPM for FedoraCore 4 is also available. For more information, see the GridTCP Project page.
PSPacer version 1.0.1 is Released on Jun 22, 2005. A new version of PSPacer is available. The binary RPM for FedoraCore 4 is also available. For more information, see the GridTCP Project page.
PSPacer version 1.0 is Released on Jun 6, 2005. Precise Software Pacer (PSPacer) achieves precise network bandwidth control and smoothing of bursty traffic without any special hardware. PSPacer is implemented as a Linux loadable kernel module. Therefore, it is independent of the device driver, and kernel re-compilation is not required for the installation.
GridMPI version 0.2 is Released on Nov 7, 2004. The first release of GridMPI is now public. It is a full MPI-1.2 implementation and supports both cluster systems and collections of multiple clusters. It supports the IMPI (Interoperable MPI) protocol for connecting multiple clusters over TCP/IP. GridMPI is an extension of YAMPI, an MPI library for cluster systems, which is independently developed by the Yutaka Ishikawa Laboratory of the University of Tokyo.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, OF SATISFACTORY QUALITY, AND FITNESS FOR A PARTICULAR PURPOSE OR USE ARE DISCLAIMED. THE COPYRIGHT HOLDERS MAKES NO REPRESENTATION THAT THE SOFTWARE, MODIFICATIONS, ENHANCEMENTS OR DERIVATIVE WORKS THEREOF, WILL NOT INFRINGE ANY PATENT, COPYRIGHT, TRADEMARK, TRADE SECRET OR OTHER PROPRIETARY RIGHT.
LIMITATION OF LIABILITY
THE COPYRIGHT HOLDERS SHALL HAVE NO LIABILITY TO LICENSEE OR OTHER PERSONS FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL,CONSEQUENTIAL, EXEMPLARY, OR PUNITIVE DAMAGES OF ANY CHARACTER INCLUDING, WITHOUT LIMITATION, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES, LOSS OF USE, DATA OR PROFITS, OR BUSINESS INTERRUPTION, HOWEVER CAUSED AND ON ANY THEORY OF CONTRACT, WARRANTY, TORT(INCLUDING NEGLIGENCE), PRODUCT LIABILITY OR OTHERWISE, ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE OR DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
This work is being done at the National Institute of Advanced Industrial Science and Technology (AIST), Japan, under the contract with the National Research Grid Initiative, NAREGI, Project.
GridMPI is a registered trademark in Japan.
($Date: 2010-03-31 09:40:27 $)
|
OPCFW_CODE
|
Thank you for your interest in the picoCTF 2020 competition (“picoCTF 2020” or the “Competition”) organized by Carnegie Mellon University (“CMU”).
IF YOU ARE A TEACHER THAT FACILITATES A REGISTRATION ON BEHALF OF A MINOR STUDENT, YOU REPRESENT THAT YOU HAVE CONSENT FROM SUCH CHILD USER’S PARENT OR LEGAL GUARDIAN FOR YOU TO CREATE A PICOCTF ACCOUNT AND TO PARTICIPATE IN THE COMPETITION CHALLENGES UNDER THESE COMPETITION RULES, INCLUDING THE RIGHT FOR YOU TO PROVIDE THE INFORMATION NECESSARY TO CREATE AN ACCOUNT FOR SUCH STUDENT.
What is picoCTF?
picoCTF is a computer security game targeted at middle and high school students. The game consists of a series of challenges centered around a unique storyline where participants must reverse engineer, break, hack, decrypt, or do whatever it takes to solve the challenge. The challenges are all set up with the intent of being hacked, making it an excellent, legal way to get hands-on experience.
What is the 2020 Competition? When does it start/end?
The Competition involves trying to solve a designated number of problems within a period of 699 hours. The Competition begins on 1 October 2020 at 12:00pm Eastern Daylight Time and continues until 30 October 2020 at 3:00pm Eastern Daylight Time (the “Competition Period”).
Who is eligible to participate in the Competition?
Each individual who participates in the Competition (“Participant”) must:
be at least 13 years old;
if under 18, have the consent of their parent or legal guardian to participate;
- Note: in order to create a picoCTF account, you will need to provide certain information to register, which may include, but is not limited to, a username, an email address, your country of residence and your status (e.g., middle/high school student, teacher, etc.), and information about your school (if applicable). In addition, you may choose to provide certain optional information (including but not limited to your gender or racial/ethnic identity). If you are under 18, a parent or legal guardian must provide their email address as part of your account registration to indicate their consent to your account registration, unless you have a student account created by a teacher through a teacher batch registration.
How do I participate in the Competition?
To participate, you must have created a picoCTF account before or during the Competition Period and work on the Competition challenges during some and/or all of the Competition Period.
You do not have to enter the Competition by the first day of the Competition Period. However, the later you enter, the less time you have to work on the challenges during the Competition Period.
NOTE- NOT EVERY PARTICIPANT IN THE COMPETITION IS ELIGIBLE TO WIN PRIZES. PLEASE SEE DETAILS BELOW.
Collection/Use of Your Information.
How are the winners determined?
The participant that solves the most problems within the allotted time will be the winner. If more than one participant solves all of the problems, then the participant that solved the problems in the shortest amount of time will be the winner. As described in more detail below, winners (including any tie-breakers, questions about eligibility, etc.) are determined by CMU in its sole discretion.
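The rule above (most problems wins, shortest total time breaks ties) can be expressed directly. A hypothetical sketch; the field layout and participant names are invented:

```python
def pick_winner(results):
    # results: (name, problems_solved, total_seconds) tuples.
    # Most problems wins; among ties, less time ranks higher.
    return max(results, key=lambda r: (r[1], -r[2]))[0]

standings = [
    ("alice", 40, 90_000),
    ("bob", 40, 85_000),   # ties alice on problems, but faster
    ("carol", 39, 60_000),
]
print(pick_winner(standings))  # bob
```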
What are the other rules and terms for picoCTF 2020?
CMU may display one or more Competition leaderboards and/or scoreboards. These boards may be available to other Participants and/or the public, and may show the Participant’s username, the country of residence identified by the Participant, and/or the name of the School identified by the Participant.
While there are no limitations on the resources or tools that Participants can use, only the eligible Participants may solve challenges as part of the Competition. Advisors may help facilitate the Participant's work, such as by helping set up tools or providing resources, but may not provide direct assistance on any problems.
Participants may not interfere with the progress of other Participants, nor with the operation of the Competition's infrastructure. More specifically, attacking the scoring server, other Participants, or machines not explicitly designated as targets is cheating. This includes both breaking into such machines and denying others access to them or the ability to solve problems (for example, by altering a key or ping-flooding). Sharing keys with other Participants, or providing them overly-revealing hints, is cheating, as is being directly assisted by personnel outside the Team (using tools from the internet is OK; asking people on the internet to help solve the problem is not). We encourage Participants to solve problems in novel and creative ways using all available resources, but we do require that Participants solve the problems themselves.
Answers to problems may not be publicly posted or otherwise shared with anyone outside of your Team members until after the Competition is over.
All information provided to establish an account must be true and correct. You are responsible for keeping such information up-to-date. Failure to keep your account up-to-date may, among other things, jeopardize your eligibility for prizes.
You are solely responsible for keeping your account names and passwords confidential. You are responsible for activity that takes place under your account (including but not limited to activity that may affect your eligibility for prizes). If you believe that your account (or an account you have created on behalf of your child or a student) has been or may be compromised, you must notify CMU by contacting firstname.lastname@example.org as soon as possible.
You understand that your School may be informed about your participation in picoCTF and/or may be asked to verify your enrollment in order to verify your eligibility to receive prizes. To the extent your School cannot legally release such information about you without your consent and you do not provide such consent, you agree that you may be ineligible for certain prizes if CMU cannot verify your eligibility.
CMU will determine participation eligibility, declare winners (including but not limited to in the event of a tie), and award prizes in its sole and absolute discretion. You agree that such decisions are final and are not subject to review or reconsideration, and that Participants are not entitled to be informed of other Participants’ results.
Winning Participants may be asked to produce written solutions for several challenges before receiving prizes. Winning Participants will need an adviser or faculty member at their School to serve as a point of contact.
Any prizes awarded are non-transferable, and any non-cash prizes are not exchangeable for cash. CMU’s ability to offer any and all prizes is subject to applicable laws and regulations.
Competition problems or other content on the picoCTF site remains the property of CMU (and/or its content providers). CMU and its relevant content providers reserve any and all of their respective rights in such materials. You are authorized to access and use such materials solely with respect to registration for and/or participation in picoCTF 2020 by you (or on behalf of your child or a student, where applicable). You may not use the picoCTF 2020 site or any materials on it (including but not limited to the Competition problems) for any unauthorized purpose.
To the maximum extent permitted under applicable law, you agree to indemnify and hold harmless CMU and PicoCTF Parties from and against any and all claims, suits, actions, losses, expenses, damages, penalties, and costs, including reasonable attorneys’ fees resulting from any actual or alleged violation of these Competition Rules by you and/or your participation in the Competition.
These Competition Rules shall be enforceable to the maximum extent permitted under applicable law. If any portion of these Competition Rules is determined by any court or governmental agency of competent jurisdiction to violate applicable law or otherwise not to conform to requirements of law, then the rest of these Competition Rules will remain in effect and the parties will substitute a suitable and equitable provision for the invalid/unenforceable provision in order to carry out the original intent and purpose of the original Competition Rules.
The picoCTF 2020 site or materials may link to and/or refer to third party websites and/or services. CMU does not control or endorse such sites. You are responsible for determining the suitability of those sites or services.
You may not assign, delegate or transfer any of your rights or obligations under these Competition Rules. Any attempted assignment, delegation or transfer shall be void.
September 14, 2020 version
Anyone intending to conduct formal business while in the country must apply for a business visa; it doesn't matter whether you stay an hour, a day, a week, or a month.
Please note that I am not an immigration lawyer and have no professional expertise in Peruvian or US tax law. So if you want or need a watertight answer, you should contact a specialist. But in my opinion it is legal as long as you really just want to spend your holidays in Peru and don't intend to live here on a tourist visa.
Then go to DIGEMIN and apply for a Permiso Especial para Firmar Contratos (Turista). It is a simple and straightforward procedure that only takes a few hours. With this permit you are allowed to officially sign contracts in Peru even though you are here on a tourist visa. So as soon as you have this permit you are allowed to buy a car, sign the contract before a notary, and get it registered in your name.
Sometimes "short-term" students and participants in study-abroad programs don't have to apply for a student visa. You should contact the nearest Peruvian consulate to check whether this exception applies to you.
Please note that it is your nationality, and not a possible residence permit abroad, that is the determining factor in whether you need a visa or not!
Tena says: January 15, 2016 at 3:53 pm I will be moving my home to NH from NJ this year. As I understand it, I will have to file two NJ state forms in 2016: a part-year resident and a part-year non-resident. I will continue employment with my NJ firm, spending some of my time working remotely from NH and the rest of the time traveling to the NJ office, as required.
Yes, I can help. But as the answer to your questions exceeds the space we have here, I moved your comment to our discussion forum under "Living legally in Peru" ("") and answered it there.
In my research, it seems that some states, including New York, are taxing non-residents for monies earned from sources in New York. Here is a link to an article I found:
I am Hernan, a Filipino. I am traveling to Peru on the 24th; how long will a tourist visa be valid?
The Andean Migration Card is available in Spanish and English. Fill out your personal details and, after disembarking, hand it together with your passport to the immigration officer at immigration control. If you are not obliged to apply for a visa at a Peruvian consulate in your home country before entering Peru, the officer will stamp your passport and write a number on or beside the stamp indicating the number of days you are permitted to stay in the country.
Also note that the process of applying for the Carné de Extranjería can be a bit challenging and the procedure is not always the same for everybody, so make sure you ask about the details of the process at the immigration office.
Not sure what you mean by "entry fee". Generally speaking, passport holders of the various EU countries don't have to apply for a visa before coming to Peru.
A tourist visa is only for tourism purposes and does not permit you to work in Peru. If you want to live and work legally in Peru you have to enter as a tourist (as an Indian passport holder you have to apply for a tourist visa at a Peruvian consulate), find a job and an employer that sponsors a work visa for you (the most difficult task; take a look at our forum ("") for more information) and then change your immigration status (time-consuming and nerve-racking, but doable).
I want to know the procedure for extending a business visa after my Peruvian multiple-entry business visa has expired.
The Hsuehshan ("Snow Mountain") Tunnel is the longest tunnel in Taiwan. The main road connects the city of Taipei to the northeastern county of Yilan. Travel times range from 30 minutes up to 2 hours. The challenge is to decrease travel time during busy periods. We used data provided by the Taiwanese government: a column description for the traffic XML and coordinate information (WGS84) for the traffic data CSV. We analysed this data in order to offer an engineering and a software-based solution.
The Hsuehshan Tunnel primarily has two, sometimes three, lanes in each direction and three to four in/out links in each direction. The provided data covers 30.2 km. The throughput of the tunnel is 3600 cars per hour in each direction. Based on the data we created two charts: the first shows average speed and the second the intensity of traffic flow, both broken down by day of the week.
The two charts show that the highway has the same intensity in both directions, but the travel time and the period of congestion towards Taipei last much longer. Based on those charts we can define critical points for traffic flow.
We designed an app that visualizes traffic flows in order to better understand the reasons for decreasing average speeds. The app presents the data visually, so anyone can use it to make a more informed decision about their commute.
Platform: iOS (prototype for iPhone)
Size of the app: 1 MB
Description: Visualizes historical data from April to June and shows the average speed for the current day of the week as well as the current average speed of the entire simulation.
The app highlights the main problem: a decrease in speed on the way to Taipei. A big contributing factor is the first 10-15 km of the highway, where the cars are in the tunnel. Here speed can fall to 5-10 km/h at times. Solutions to this problem are not as easy as they may seem, because the highway is completely busy in both directions: simply reversing the lanes wouldn't work, and using secondary roads isn't an option because they take much longer to reach Taipei. We suppose that the engineering solutions below, which aim to prevent phantom traffic jams, could improve the traffic situation:
- During congested times from Taipei, the speed in the tunnel should be changed manually to 60 km/h.
- During very congested times from Taipei, the speed should be changed to 60 km/h throughout the whole highway.
- During very congested times to Taipei, the speed should be changed to 40-50 km/h in the tunnel.
- There should be signs to keep distance and no passing.
- Based on the information from flow detectors, one could build an automatic system for changing the speed limit on different parts of the highway, and especially in the tunnel.
- Unfortunately there is no solution based on redistributing car flows throughout the day, because the highway is overloaded with cars for many hours in a row. In this case, limiting access by even and odd license-plate numbers during very congested times each week can improve speed flows and push people to use public transport more. This rule would apply only to small private cars.
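The automatic system based on flow detectors could, as a rough sketch, map the measured flow at each detector to a speed limit. The thresholds and the `pick_speed_limit` helper below are hypothetical illustrations, not values from the analysis; a real system would calibrate them against the historical data described above.

```python
def pick_speed_limit(flow_cars_per_hour: float, in_tunnel: bool) -> int:
    """Pick a speed limit (km/h) for one highway segment from measured flow.

    Thresholds are illustrative only. Lower, uniform speeds under heavy
    flow are intended to damp the "phantom traffic jam" effect.
    """
    if flow_cars_per_hour < 2000:      # free flow
        return 80 if in_tunnel else 90
    if flow_cars_per_hour < 3000:      # congested
        return 60
    # very congested: slow the tunnel further
    return 40 if in_tunnel else 60
```

Feeding each detector's reading through such a rule every few minutes, and displaying the result on variable speed-limit signs, would be one way to automate the manual adjustments listed above.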
Using this application, people can also see what the current situation is and how it has been historically, possibly suggesting a better time to commute.
What's next? One or more of the solutions, engineering or software, could be implemented. The app could be augmented with real-time data and gamification. With gamification, people can receive points by solving problems together and improving the traffic situation. For example, involving more people in the app and following recommendations gives you points, which in turn improves traffic flow and gives users satisfaction for their involvement.
I prefer chiclet keyboards. I haven't done any scientific analysis, but I'm confident that my typing speed in words per minute is higher when I'm using a keyboard that has chiclet keys. Amongst developers this is an unpopular opinion, with many developers preferring mechanical keyboards (don't be that guy smashing a mechanical keyboard in an open plan office!).
Chiclet keyboards have keys that do not need to depress as far in order to register. They also have an evenly sized gap between each key, making it more difficult for you to fumble the keyboard and hit the wrong key. They are typically found on laptop keyboards.
With that in mind, I needed a new keyboard to replace the one I take into client offices and leave there whilst on a contract. I was previously using an old, fairly standard Dell USB keyboard that was becoming embarrassingly tatty - most of the key letters were completely worn off. It also seemed to be forever caked in a layer of dirt that no amount of cleaning could remove.
My requirements were fairly simple:
- It must be comfortable. This will be used for long periods of time (6+ hours a day) and I've found myself experiencing some discomfort in the later hours when using my bog standard keyboard.
- It must be USB - I understand the need for a wireless mouse, but a wireless keyboard is an unnecessary luxury for my day to day desk based work. Also, the less interacting with bluetooth, the better.
- It must not be garish - I don't want to demo things to clients and my keyboard be a huge distraction because it is letting off a luminous glow.
- It must have chiclet keys for the productivity and preferences outlined above.
- It must have a numeric keypad (it does make me wonder how developers and other creatives work for large amounts of time without a numeric keypad)
The keyboard I found that fits the above is the Cherry KC 6000, which is nicely priced at £35 on Amazon. (The linked product incorrectly claims it is the Cherry KC 600, but this is just a typo - the Cherry KC 600 does not exist, and having taken delivery of this item, I can confirm that it is indeed the Cherry KC 6000.)
On the whole, I am very happy with this keyboard, and would give it 4 out of 5 stars:
Pro: Super comfortable to type on
This is by far the most important factor on any keyboard! The keys have a really nice weight and feel to them, and typing for a long amount of time on this keyboard is comfortable and does not result in any straining pains that I would sometimes get on my previous bog standard keyboard.
Pro: It is aesthetically pleasing and not too garish
This keyboard has no crazy backlighting and comes in two fairly neutral colours - a silver body with white keys, or a black body with black keys. Some may view the lack of backlighting as a negative, but this isn't a problem for me. I don't type in the dark as I don't have the eyes for it, and I touchtype.
Pro: It has a slim, low profile
This helps with having a general feel of neatness on your desk. The keyboard has only moderate bezels and has no ridges where dust and other crap can get stuck.
Con: The F11 and F12 keys are not directly above the backspace button
This is a small irritation as it just takes some getting used to. On most keyboards, the function keys are laid out in banks of 4, with a bigger space between every 4th function key. This space is gone on the Cherry KC 6000, and the saved space is given to two additional buttons - one to open your default browser, and another to lock your machine. I don't mind having these extra buttons, but annoyingly they are right above the backspace key, so it will take some getting used to not being able to naturally travel to the F11 key to go fullscreen, or the F12 key to open my Guake terminal.
Con: There is a backspace key in the numeric keypad
Again, this is another one of those small things that will take you a day or so to get used to. You'd normally only find one backspace key on a keyboard and would not expect to have one on the numeric pad. This one is positioned where the minus key normally is, so I've found myself accidentally deleting characters rather than putting in the minus character a few times.
Other reviews online suggest that the keyboard is not banked enough towards the user (the way most keyboards have legs that you can flip up or down). The keyboard did initially look a little flat on my desk when I first set it up, but I've found that it has not impacted my typing at all.
Conclusion - a productivity win!
To conclude, I'm happy with the Cherry KC 6000 keyboard. It has made me more productive, and is comfortable to use for long typing stints (think 6+ hours of programming!).
Demo 148 - Error in Model Update
@choper725 lets continue here with:
https://github.com/abap2UI5/abap2UI5/pull/778#issuecomment-1890745772
the demo 148 is big, which makes finding the error difficult. i created a separate minimal example to check the binding:
https://github.com/abap2UI5/abap2UI5-samples/blob/main/src/z2ui5_cl_demo_app_153.clas.abap
i think i fixed the issue:
https://github.com/abap2UI5/abap2UI5/pull/783
but if not can you change the structure here to reproduce your error case:
https://github.com/abap2UI5/abap2UI5-samples/blob/031f56d4a1466479d63a918df3a5bc0eab27fbc8/src/z2ui5_cl_demo_app_153.clas.abap#L20
i fixed this position, but i am sure we need this somewhere else so i believe i created a new error now:
https://github.com/abap2UI5/abap2UI5/blob/83393c3fcf8a256073c5855f88172c91b5ddeb60/src/00/z2ui5_cl_util_func.clas.abap#L649
these binding issues are always complicated to identify, but step by step we will make it.
hi @oblomov-dev
i still dont get the border_width and border_radius on update model.. :\
Hi @choper725,
For me this still looks like a problem at the frontend, in the request body the attributes are written without underscore:
But when you check the ms_db-t_attri table of abap2UI5 you see that the framework expects the attributes with underscore:
I added a demo 153 (z2ui5_cl_demo_app_153) where i make the binding with nested structures and tables, there you can see that it works:
Demo 148 is very big: t_attri has a lot of entries and the payload of the body has a lot of entries too. If possible, try to reproduce your problem with demo 153, which makes it a lot easier to debug and to identify the root cause.
Let me know if this works for you or you need more assistance.
Best regards.
with latest PR in samples repository i updated demo 153 with the logic i use in the chartjs demo 148..
updated the child table with a new property (new_type), and bound the table with pretty_name = 'X'.
Added additional logic for binding in pretty mode. The demo 153 works now, you can try again, and please just extend demo 153 if you run into further problems.
hi @oblomov-dev
it was still not working on chartjs,
so what i saw is that it's another level down the nested route,
so i made another nest on top of the new_type field in demo 153,
now the demo is getting an abap error..
thank you for updating the demo. made a fix https://github.com/abap2UI5/abap2UI5/pull/788
demo works now again on my side
well its still not working on chartjs...
i know its hard to debug demo148, but im out of ideas and maybe you can point me to where should i look,
the break is on the setConfig method, before any frontend logic,
on first load:
see borderWidth and borderRadius are there,
on update, they are gone,
Thank you for the update of 153. Fixed the binding again with https://github.com/abap2UI5/abap2UI5/pull/791
Works now on my side.
yes! now its working..! thanks for the quick fixes @oblomov-dev !
ill continue working on implementing chartjs library, ill leave this for a bit to see if i encounter any more binding issues
hi @oblomov-dev
after the latest pull, nothing is working on my side.. not really sure what it is exactly,
just getting this: (4 times on each chart)
yes already saw it here:
https://github.com/abap2UI5/abap2UI5/pull/796#issuecomment-1902038676
But popups work on my side, only chart.js is not working...
EDIT: there was only a change in the compress functionality, so maybe you test again with client->_bind_edit( ... compress = abap_false) in case you use this. but no other changes were made.
yes i did use it as true, not false
yes, every variable typed with abap_bool is now not compressed anymore and values are sent to the frontend as "false".
If you still get problems after deactivating compress, it might be because your data model now contains initial values and the positions of arrays etc. might have changed.
means that every type abap_bool is being sent to the frontend with value false unless it's set to true ??
yes, abap_bool becomes true or false at the frontend and is not initial anymore; otherwise certain UI5 controls don't work properly when compress is activated.
you also have the option to deactivate compress or is this not working?
another option can be that we extend the framework with another parameter like "compress_without_booleans".
or last option: you can test with using type "xflag" instead of "abap_bool". i deactivated compress for this with the last PR:
https://github.com/abap2UI5/abap2UI5/blob/4a57ef5ef7b2da15edce925c880ec179845778fe/src/00/z2ui5_cl_util_func.clas.abap#L327
thx for the quick reply,
XFELD is acting weird, some boolean values i change are not getting sent to the frontend...
i think the most elegant solution is to create another parameter like full_compress which makes it fully compress all types
ok great that this works for you, looks very nice:
yes, the weird behaviour is the behaviour of /ui2/cl_json: when a structure is completely empty, even with false, it is not created. i mean i could extend the /ui2/cl_json class again, but i think for now it is fine like this.
the XFELD approach was just for testing, when this works for you i would recommend the following. we use a new constant and set this as an importing parameter for the _bind methods:
constants:
begin of cs_compress_mode,
full type string value `FULL`, "the way you need it for chart.js
full_w_booleans type string value `FULL_W_BOOLEANS`, "default value
non type string value `NONE`, "old way in case the others are not working
end of cs_compress_mode.
Do you think this will work for you?
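For illustration only, here is a hypothetical Python analogue of the three proposed compress modes (the mode names follow the cs_compress_mode constants above; the actual /ui2/cl_json behaviour is more involved and this sketch is not the framework's implementation):

```python
def compress(data: dict, mode: str = "FULL_W_BOOLEANS") -> dict:
    """Drop 'initial' values from a flat structure before JSON serialization.

    FULL            -> drop every initial value, including false booleans
    FULL_W_BOOLEANS -> drop initial values, but always keep booleans
                       (so UI5 controls receive an explicit true/false)
    NONE            -> keep everything, no compression
    """
    if mode == "NONE":
        return dict(data)
    result = {}
    for key, value in data.items():
        if isinstance(value, bool):
            # booleans survive FULL_W_BOOLEANS even when false
            if mode == "FULL_W_BOOLEANS" or value:
                result[key] = value
        elif value not in ("", 0, None):
            result[key] = value
    return result
```

This makes the chart.js case visible: under FULL, attributes like borderWidth set to false vanish from the payload, while FULL_W_BOOLEANS keeps them as explicit false.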
seems good idea,
please check demo 153, i think there is an issue with the binds and compress
153 looks correct to me. The new compress logic is used which sends values typed with boolean to the frontend in any case.
specifically, abap_bool = 'X' (abap_true) means true at the frontend and everything else becomes false, which means '-' is false here at the frontend.
Normally I would not use abap_bool with values except abap_true or abap_false or why are you doing this?
abap2UI5 updated now with https://github.com/abap2UI5/abap2UI5/pull/799
looking good,
natively in class /ui2/cl_json, in case compress is activated:
when sending abap_false it does not create the structure,
but when sending '-' it creates the structure with false as the value.
you overcame this in the framework by checking the types explicitly.
but knowing that '-' creates a false value for a key in the json with compress can solve the checking problem
(1) yes '-' becomes 'false', its done here:
https://github.com/abap2UI5/abap2UI5/blob/70da7816d89cd0e0e55b5a823a1f29954e73e502/src/00/z2ui5_cl_util_func.clas.abap#L302
(2) yes, it is the default behaviour of /ui2/cl_json: when it is abap_false and the rest of the structure is also initial, it does not get created, and then it does not matter which compress mode you choose.
/ui2/cl_json ignores initial values with compress; hence when the type is abap_bool and the value sent is '-', it is mapped to false.
that's why in demo 153 i used '-' to show that the value is created in the json.
and when it's set to abap_false it doesn't (which is what you changed in the demo)
ah ok i think i got your point now, instead of using abap_false we can use '-' or something else except 'X' to make /ui2/json do what we need. mhh yes thats right!
but to me this is still a bug of the /ui2/cl_json class, and i am not sure if it's a good idea to make it work with a trick. additionally, i do not see a way to map abap_false to '-'. for example, when someone binds a table having a column with abap_bool, abap2UI5 throws it directly into the /ui2/cl_json class; adding an extra loop for boolean checks here would cost a lot of extra runtime and does not seem like a good idea to me...
but still wondering, mhh maybe we will find a better way to solve this compression problem.
how does abap2UI5 handle abap_false binding? doesn't it check the types and values of each field??
Batch input take 2
Batch input
This PR builds on insights taken from @whoahbot's branch, and makes all input sources return a list of elements rather than a single one.
This enables the ability for input sources to batch items.
The batching of input elements is added to our own KafkaInput, making it possible to avoid IO with Kafka for every single message.
Two parameters are used to control the behaviour: batch_size and timeout.
The difference from the previous approach to the api is that batch_size is not a Dataflow.input parameter, but something specific to the input source. This allows us to do eager batching, which in the case of our own KafkaInput makes a lot of difference, since we avoid a roundtrip between python and ffi, but it means each input will have to handle this on its own rather than having a single parameter in the .input operator.
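The eager batching described above can be sketched roughly as follows. This is a simplified stand-in, not the real KafkaInput implementation; `poll_one` is a hypothetical callable that returns the next message, or None when nothing is ready:

```python
import time


def next_batch(poll_one, batch_size: int, timeout: float) -> list:
    """Collect up to batch_size items, bounded by a timeout in seconds.

    Stops early when poll_one() returns None (nothing ready). Returning a
    list lets the input source amortize IO/FFI overhead across many
    messages instead of crossing the boundary once per item.
    """
    deadline = time.monotonic() + timeout
    batch = []
    while len(batch) < batch_size and time.monotonic() < deadline:
        item = poll_one()
        if item is None:
            break
        batch.append(item)
    return batch
```

With batch_size=500000 and timeout=0.1 as in the benchmark, the source hands the dataflow large slabs of messages at once while still flushing partial batches promptly when traffic is light.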
I created a tool to benchmark the impact of the changes, and created a specific benchmark to test only kafka input throughput. To try this I used a custom NullOutput that does nothing on write.
As we can see from the benchmark, the current bytewax version can ingest ~100k messages per second without lagging if the dataflow doesn't do anything else, but struggles at 500k messages per second.
This is the repository of the benchmarking tool: https://github.com/Psykopear/bytewax-kafka-throughput
This is the plot of the results of processing 100k (first) and 500k (second) messages per second with:
bytewax v0.16.2 with default kafka connectors
bytewax at this branch, batching messages with batch_size=500000 and timeout=0.1
The code for the 2 dataflows is really similar, the one at this branch just adds the 2 parameters to the KafkaInput initialization:
class NullSource(StatelessSink):
    def write(self, item):
        pass

    def close(self):
        pass

class NullOutput(DynamicOutput):
    def build(self, worker_index, worker_count) -> StatelessSink:
        return NullSource()
flow = Dataflow()
flow.input(
    "kafka_in",
    KafkaInput(
        BROKERS,
        [CONSUME_TOPIC],
        add_config={
            "enable.auto.commit": True,
            "group.id": GROUP_ID,
        },
        # This is specific to this branch
        batch_size=500000,
        timeout=0.1,
    ),
)
flow.output(
    "null_output",
    NullOutput(),
)
100k messages per second
500k messages per second
So we do get a noticeable improvement, but my impression is that the bottleneck on the output is probably more relevant than the one at the input, since using the KafkaOutput instead of the NullOutput, current bytewax struggles with ~10k messages per second. I still think this change in the api is a good one to have, since it only expands the capabilities of the input source.
Notes
When checking each message, I emit the batch if the partition reaches eof, but discard all the messages in the batch when we raise a RuntimeError. Is that ok, or should I still emit the rest of the batch in case of an error?
We could make use of batching in other sources, but right now I just made the existing ones return a single item list
Ok, reverted the commit with the StopIteration to return None change, and applied the suggested fixes
thank you very much for sharing your thoughts and for kicking off a conversation.
I've been doing some work on www.podman.io:
This work has some specific goals and assumptions, which I think we
should discuss before I put a lot more time into them, and before we
also start on a re-templating of the site. Here's my priorities:
1. Make the site more accessible to first-time users
2. Make it easier to contribute to the site itself
3. Look & Feel improvements
2. and 3. are simple and non-controversial; Tuomas and I will go over
the site and set up proper Jekyll templating so that anyone in the
project can easily add new pages in markdown format. We'll also add
some tests for the site (as soon as I get my podman config figured out).
1) is where we need discussion. My thinking is that, at this time, the
majority of folks who come to podman.io will be new to PodMan, and as
such content aimed at the New User role should be the most prominent in
the menus and core pages.
I agree and imagine the site to be very difficult to navigate for new users. It would be great to have some learning material where new users can inform themselves and get introduced to the world of containers.
The other two roles are "Experienced Kubernetes Admin" and
"Contributor", and my plan would be to target improvements for those
roles after doing the "New User" role, and actually after tackling the
Buildah site as well.
One of the corollaries to this is that I think that all user
documentation (as opposed to contributor/developer documentation) should
be moved from the Libpod repo to the podman.io repo. My reasons for
this are as follows:
A. better discoverability; MD pages in github repos have chronically low
search ranks, and pages with fixed URLs on Jekyll sites do better.
B. reduced confusion; right now users click a link on the "podman" page
and get dumped into a github repo called "libpod", where they have to
scroll down before they see the docs they're looking for.
C. easier acceptance of user doc contributions: they will no longer be
libpod PRs, so doc updates can be accepted with less scrutiny, opening
the door to getting some doc-only contributors.
However, this will mean changing where everyone *maintains* those docs,
so we need consensus on it. Comments?
Having documentation on podman.io
would be a great improvement. However, I suggest to keep the docs in the upstream repositories and copy them over to podman.io
for new major releases. This way, we can update the docs with the code changes in one PR and don't publish docs of unreleased features.
What is the expected response to "What's up?"
When somebody asks me "What's up?" I answer "I am well, thank you."
Is that the expected answer, or should I answer something else?
What does a native speaker understand when I reply like that?
Ah, this phrase is all about context. The meaning of "What's up?" and expected responses depend on the circumstances in which the question is asked.
From what I remember, the phrase is derived from "What's the update?" which is basically checking up how things are going. It has however fallen into common usage both in the US (I think) and UK.
As a greeting:
"What's up?" or here (West Midlands of England) commonly just "sup" is a general greeting, you can response with answers like "Not much", "Nothing", "Alright" etc.
In this context, the response is just a return of the greeting, or a confirmation that all is going normally. This phrase is similar to "Hello" or "How are you" in common usage.
Example:
Person 1: "What's up man?"
Person 2: "F*** all mate" (my typical response to friends, this means nothings going on and I'm bored because of it :^) )
As an enquiry
In this context, "What's up?" can be when the asker of the question may have observed someone having some trouble, or is distressed at something.
It's a polite, non-intrusive way of checking all is relatively okay or if they need assistance. A similar phrase would be "What's the matter?" or "What's the problem?".
When facing criticism or disapproval of something, a common phrase is "What's up with it?" meaning the asker is not sure what they have done wrong and wants to know what said issue is.
Example:
Person 1 notices Person 2 with their head in their hands at their desk
Person 1: "What's up?"
Person 2: "Nothing, just tired."
So, to properly answer your question after rambling a bit: the idea behind "I am well" is sort of right - you are confirming that all is well and normal. So in this case "Nothing" or "Not much" or "Same Old" are all fine, and will be understood by a native speaker.
Personally, if I was speaking to a non-native English speaker and heard your response I wouldn't think anything of it - it's just a throwaway question so unless something really is up/wrong, the response is irrelevant.
"What's up?" doesn't derive from "What is the update?". As per this 1853 citation, [something] is up means something is happening. It's just another way of saying *What's happening? / What's going on [that might interest me the speaker]?
I am curious, what if you DO NOT wish to reply to the question, what's a good way to counter? For example a reply to "Hello" is a variation of "Hello" and does not require further thought. "What's up" begs the recipient for either a status or an update. How do you respond if you do not wish to nor care to provide either?
Don't want to reply? Just give a head nod, up and down slightly.
Unless you're afflicted with a "Liar Liar"-style curse, or under oath in a very pedantic court, it's okay to just say "Not much" or "All good" even if it's not technically true.
"What's up?" means "What is happening?" or "What events are taking place?" or "What news do you have to tell me?"
The most common reply is "Nothing much" or something along those lines. If something special is happening, you might relate it. For example, if someone at work asks you "What's up?", you might reply "We won the XYZ contract" or "Bob was fired" or something relevant happening at the company.
Like most polite greetings, the asker rarely expects any sort of in-depth answer, and any polite response would be considered appropriate. "What's up?" "Oh, hi Sally". It doesn't answer the question at all, but few would think it strange.
"Same old"
Some statement of the current state of affairs. It's a greeting, but it's also a question about news. Mention anything important that happened recently, or give a noncommittal answer that says "no news", e.g. "Same as always", "The usual", or if you want to be facetious, say "the sky" or "the roof" depending on whether you're outdoors or indoors.
Specifically, it's different from "How are you" - it's not just about you but things that concern you too. So, answering "Sally is pregnant" is a perfectly good answer if that's the current news.
In addition to Felix Weir's excellent answer, you can also use other responses based on the situation and your mood.
Walking out of a frustrating meeting with your boss?
Coworker: What's up?
You: My blood pressure.
Feeling sarcastic? Some responses to "What's up?" might be the following (note: use sparingly, as this can get annoying really fast if overused):
The sky.
A preposition.
The ceiling.
The lights.
The stars.
Gas prices.
These are all very passive-aggressive sarcastic responses to what is usually a friendly greeting. If I said "What's up?" to a colleague and they replied with any of those responses, I would assume they were being unfriendly.
@Matt you're right. Maybe I should have stated that those should only be used if you are feeling sarcastic
The sky, the ceiling, and the lights are usually pretty well received – when uttered amongst boys between the ages of 12 and 15. I'd avoid such responses in general, unless you want to sound like one of them.
It probably adds nothing to what others have suggested, but I thought I'd add an addendum to the existing answers.
As others have explained, people don't really expect a detailed answer, but it could be a good opportunity to start a conversation. If you don't have anything to say, 'Not much' is a perfectly acceptable response. You don't have to explain what you're currently doing or how you feel.
Apart from what others have suggested, you could also say:
Not a hell of a lot
Hey X (name)
Just chillin'
Just hangin' out
Hey, how goes it
|
STACK_EXCHANGE
|
SystemVerilog support in Tagbar vim plugin
Can anybody help me add SystemVerilog language support to the Tagbar vim plugin?
I tried the things below, but they didn't work for me:
1) Created ~/.ctags and copied the code from https://github.com/shaohao/config.d/blob/master/ctags
2) Created an ftplugin directory in ~/.vim and added systemverilog.vim from https://github.com/shaohao/vimfiles/blob/master/bundle/verilog_systemverilog/ftplugin/systemverilog.vim
3) cd'd to the project directory and ran ctags -R *
I got the warning below, though:
ctags: Warning: Unknown language specified in "langmap" option
Below is some output from ctags:
ctags --list-languages
ctags: Warning: Unknown language specified in "langmap" option
.
.
systemverilog
ctags --list-kinds=systemverilog
ctags: Warning: Unknown language specified in "langmap" option
e clocking
i constraint
l covergroup
o class
t function
A interface
G module
J package
M program
W task
But still, when I open an SV file in gvim and use :TagbarToggle, the Tagbar window is blank :(
Please help.
I've introduced some improvements to the verilog_systemverilog vim plugin that I made available on Github. You should have proper Tagbar support if you use this development version of exuberant-ctags together with my vim plugin and the following Tagbar configuration:
let g:tagbar_type_verilog_systemverilog = {
\ 'ctagstype' : 'SystemVerilog',
\ 'kinds' : [
\ 'b:blocks:1:1',
\ 'c:constants:1:0',
\ 'e:events:1:0',
\ 'f:functions:1:1',
\ 'm:modules:0:1',
\ 'n:nets:1:0',
\ 'p:ports:1:0',
\ 'r:registers:1:0',
\ 't:tasks:1:1',
\ 'A:assertions:1:1',
\ 'C:classes:0:1',
\ 'V:covergroups:0:1',
\ 'I:interfaces:0:1',
\ 'M:modport:0:1',
\ 'K:packages:0:1',
\ 'P:programs:0:1',
\ 'R:properties:0:1',
\ 'T:typedefs:0:1'
\ ],
\ 'sro' : '.',
\ 'kind2scope' : {
\ 'm' : 'module',
\ 'b' : 'block',
\ 't' : 'task',
\ 'f' : 'function',
\ 'C' : 'class',
\ 'V' : 'covergroup',
\ 'I' : 'interface',
\ 'K' : 'package',
\ 'P' : 'program',
\ 'R' : 'property'
\ },
\ }
Background: TagBar won't use your tags file; it queries ctags and reads its output directly from stdout.
I believe the problem is how the --langmap is defined in your ~/.ctags. AFAIK, the comma is used to separate langmaps, while different extensions are just put one after the other without separators:
--langmap=foo:.foo.fo.oo,bar:.bar.ba
I think line 2 of your ~/.ctags file should look like this:
--langmap=systemverilog:.sv.svh.svp
Thank you :) With the above change the warning is gone! But what about adding support in Tagbar?
TagBar is supposed to honor the definitions in your ~/.ctags when running ctags against your code. If you still have problems I suggest you try TagBar's issue tracker.
I didn't get you? Tagbar is not working for me with the above changes, even with the warning fixed.
Yes, I got you, but this is not tagbar's issue tracker.
|
STACK_EXCHANGE
|
Markdown Preview does not preserve scroll position
VSCode Version: 1.10.2
OS Version: Ubuntu 14.04
Steps to Reproduce:
Open a Markdown file.
Open the preview (ctrl+shift+v).
Close the Markdown file tab (not the preview).
Scroll a few times in the preview tab.
Open any other file.
Go to the preview tab again. Notice that the scrollbar is reset to top.
My suggestion is that the scroll position of the Markdown preview tab should follow the corresponding raw Markdown tab's position only if the raw Markdown tab is present. If it is not present, the preview tab should preserve the last scroll position.
VSCode Version: 1.11.1
OS Version: Ubuntu 14.04
Steps to Reproduce:
Open a Markdown file, place cursor anywhere in file other than the top
Open the Preview (ctrl+shift+v).
More generally than above, the "Preview" tool now (unlike previously) always resets to the top of the markdown file when you open it or go to its tab and it refreshes (or you press ctrl+shift+v). It no longer goes to the current edit (cursor) location in the raw file. It is inconvenient to repeatedly scroll down to the current edit location in long files, to preview the changes.
I'm pretty sure I skipped a couple of version updates until I installed 1.11.1 today (4/9/2017), so I'm not sure when the behavior changed, but it was fairly recent; my version was not too outdated, no more than two or three updates behind.
Same behavior
VSCode Version: 1.12.0-insider
OS Version: Ubuntu 16.04
Re https://github.com/Microsoft/vscode/issues/22279, I ran into this issue while following the guidance of the project.
@rebornix Can we try debugging through the issue on your machine this afternoon?
I have the same problem as outlined by @slfuqua.
VS Code 1.11.2
macOS 10.11.6
It worked okay in earlier versions of VS Code (I believe it worked okay in 1.10.x).
For anyone seeing this problem, #24985 just added some basic logging to help investigate. The change will be in the next insiders build. Once that is released, please download it and:
Set "markdown.trace": "verbose"
Reproduce the issue
Go to the output pane and share the output from the Markdown section
This should help me investigate what may be going wrong here
@mjbvz Thanks very much for looking into this. Below is my output after setting markdown.trace to verbose in VS Code 1.12.0-insider. The first entry is when I click somewhere in the middle of the markdown document. The second is when I use the preview option (which still shows the very top of the document instead of where my cursor was in the actual markdown document).
[Log - 11:43:07 AM] updatePreviewForSelection
{
"markdownFile": "markdown:/Users/Charles/Develop/Personnel/BlocNotes/aurelia-snippets.md.rendered?file%3A%2F%2F%2FUsers%2FCharles%2FDevelop%2FPersonnel%2FBlocNotes%2Faurelia-snippets.md"
}
[Log - 11:43:10 AM] provideTextDocumentContent
{
"previewUri": "markdown:/Users/Charles/Develop/Personnel/BlocNotes/aurelia-snippets.md.rendered?file%3A%2F%2F%2FUsers%2FCharles%2FDevelop%2FPersonnel%2FBlocNotes%2Faurelia-snippets.md",
"source": "file:///Users/Charles/Develop/Personnel/BlocNotes/aurelia-snippets.md",
"line": 0,
"scrollPreviewWithEditorSelection": true,
"scrollEditorWithPreview": true,
"doubleClickToSwitchToEditor": true
}
I see that the "line" entry is zero. Maybe the issue is related to that?
Thanks @chaskim!
From your log, I think there's a bug in the markdown preview where using Open Preview instead of Open Preview to Side causes the initial line not to be set. Looking into a fix
Potentially dup of https://github.com/Microsoft/vscode/issues/22420
|
GITHUB_ARCHIVE
|
UwU Hello there VRChat user!
I created this website to tell you about Signal Private Messenger (available in the Google Play Store and the Apple App Store).
In my opinion, it is one of the best pieces of software ever created by mankind. I'm not even joking. Here are some features:
- It's fully end-to-end encrypted. The server would have to pull off an active attack the first time people communicated if they wanted to read people's messages. Doing this on even a small scale would open them up to a high chance of getting caught and people never using Signal ever again. So in short, you don't need to trust Signal's servers to not be evil or to not get hacked because Signal goes to extensive lengths to make sure that even they can't read your messages. Discord on the other hand...they can read everything and who knows who else can too.
- It's not just messages that are secure. Everything is private. Group chats, stickers, pictures, video calls, voice calls, user profile names, user profile pictures, file transfers, emoji reactions, etc.
- Signal can become your default texting app (if you want) so you don't have to switch between as many messengers. And when one of your contacts gets Signal, messages will be automatically encrypted.
- Signal Technology Foundation is an independent 501c3 nonprofit that works in the public interest. You don't have to pay for anything at all and there are no ads or in-app purchases. They run completely on donations.
- The code behind Signal is fully free-as-in-freedom and open source. That means technical people in the community can review Signal's design to ensure there are no bugs or backdoors.
- Signal's design is excellent and it comes with properties other than end-to-end encryption. It has perfect forward secrecy, so if your cryptographic keys are compromised after you've had a conversation, it'll still stay private. It also comes with deniability. When you send a message to someone, Signal "signs" it to prove it came from you and not an attacker. But in the very next message, it gives away a code that lets the signature on the first message be forged by anyone. This makes it harder for people to prove (say, in court) that messages were actually sent. And with a new feature called Sealed Sender, Signal reduces the data they receive about who even sent a message (sender names are encrypted so only the recipient can be completely sure who a message came from).
- Edward Snowden (the guy who leaked the US Government's illegal spying activities) loves Signal and recommends it to everyone.
If you want someone to try Signal with, I'd be more than happy to chat with you. Ask me in game for my phone number.
Oh, and the reason my name is "u.nu/reporthate" is because it used to be Report Hate. I didn't want to confuse any of my friends so I changed it to a near-identical name when I made this website.
And since you're still here, also check out Tor Browser. It can be a bit slow, but that's because it goes to extensive lengths to protect your privacy. There really is no better browser for privacy. Period.
|
OPCFW_CODE
|
According to Channel Advisor, a software platform for retailers, retailers lose 4% of a day's sales for each hour a website is down. From H&M to Home Depot and Nordstrom Rack, all of them experienced either downtime or intermittent outages during the holiday season of 2019. And one thing we know for sure is that there is no such thing as 100% uptime. But there has to be a way to make our systems resilient and prevent outages for eCommerce sites. We at Unbxd pay the utmost attention to the growth of our customers. We ensure that they are up and running most of the time and do not suffer any business loss. And we do this by making our ecosystem robust, reliable, and resilient.
How is network resilience tested?
Resilience is the ability of the network to provide and maintain an acceptable level of service in the face of various faults and challenges to normal operation. Since the terms "services" and, more recently, "microservices" made their way into common usage, application developers have converted monolithic APIs into simple, single-function microservices. However, such conversions come with the cost of ensuring consistent response times and resilience when specific dependencies become unavailable. For example, a monolithic web application that performs a retry for every call is potentially resilient, as it can recover when certain dependencies (such as databases or other services) are unavailable. This resilience comes without any additional network or code complexity. However, each invocation is costly for a service that orchestrates numerous dependencies. A failure can lead to diminished user experience and higher stress on the underlying system attempting to recover from the failure. And that is what we at Unbxd work towards - providing a seamless shopping experience for our customers across verticals.
Let us consider a typical use case where an eCommerce site is overloaded with requests on Black Friday. The vendor providing payment operations goes offline for a few seconds due to heavy traffic. The users then begin to see extended wait times for their checkouts due to the high concurrency of requests. These conditions also keep all application servers clogged with threads waiting to receive a response from the vendor. After a long wait, the result is a failure. This leads to either abandoned carts or users trying to refresh or retry their checkouts, thereby increasing the load on the application servers—which already have long-waiting threads, leading to network congestion.
This is where circuit breaker patterns can be useful!
A circuit breaker is a simple structure that constantly remains vigilant, monitoring for faults. In the scenario mentioned above, the circuit breaker identifies long waiting times among the calls to the vendor. It fails fast, returning an error response to the user instead of making the threads wait. Thus, the circuit breaker prevents users from having suboptimal response time.
This is what keeps my team excited most of the time - finding a circuit breaker pattern that is better and more efficient to form an ecosystem that can survive outages and downtimes without impact or at least with minimal impact.
Martin Fowler says, "The basic idea behind a circuit breaker is very simple. You wrap a protected function call in a circuit breaker object, which monitors for failures."
Once failures reach a certain threshold, the circuit breaker trips, and all further calls to the circuit breaker return an error without the protected call being made; usually, you'll also want some monitoring alert if the circuit breaker trips. Recovery time is crucial for the underlying resource, and having a circuit breaker that fails fast without overloading the system ensures that the vendor can recover quickly. A circuit breaker is an always-live system keeping watch over dependency invocations. In case of a high failure rate, the circuit breaker stops calls from getting through for a short period, responding with a standard error instead.
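The tripping behaviour described above can be sketched in a few lines. The class below is a deliberately simplified, illustrative JavaScript sketch (all names are ours for illustration; it is not the Hystrix-based setup we actually run): after `threshold` consecutive failures the circuit opens, calls fail fast to a fallback, and once the sleep window elapses a single trial call is allowed through.

```javascript
// Simplified, illustrative circuit breaker. Not production code: no metrics,
// no rolling failure-rate window, just consecutive-failure counting.
class CircuitBreaker {
  constructor(protectedCall, fallback, { threshold = 3, resetMs = 5000 } = {}) {
    this.protectedCall = protectedCall; // the risky downstream call
    this.fallback = fallback;           // e.g. serve from a cache
    this.threshold = threshold;         // consecutive failures before opening
    this.resetMs = resetMs;             // sleep window while open
    this.failures = 0;
    this.openedAt = null;               // non-null while the circuit is open
  }

  async invoke(...args) {
    if (this.openedAt !== null) {
      if (Date.now() - this.openedAt < this.resetMs) {
        return this.fallback(...args);  // open: fail fast, don't touch downstream
      }
      this.openedAt = null;             // sleep window over: allow one trial call
    }
    try {
      const result = await this.protectedCall(...args);
      this.failures = 0;                // success closes the circuit
      return result;
    } catch (err) {
      if (++this.failures >= this.threshold) this.openedAt = Date.now();
      return this.fallback(...args);    // degrade gracefully instead of erroring
    }
  }
}
```

Because the trial call's failure count is not reset, a failed trial reopens the circuit immediately, which is the usual "half-open" behaviour.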
We at Unbxd are always working towards building the most accurate version of an Ideal Circuit Breaker. A unified system is one where we have an ideal circuit breaker, real-time monitoring, and a fast recovery variable setup, making the application genuinely resilient.
And that is what we are creating for our customers. Unbxd has many client-facing APIs. Out of the few downstream services, Catalog Service is the most important. A failure of this service implies a failure of client services as well. Failure need not always be an error, as an inability to promptly serve the request is equivalent to failure for all practical purposes. The problem for client services has been to make them resilient to failures of this service and not to bombard it if it is already down. We identified a circuit breaker as an ideal solution to the problem our customers were facing. We zeroed in on Hystrix, an open-source implementation of the Circuit Breaker pattern by Netflix. All the calls to Catalog Service are wrapped in Hystrix functions. Any timeout or error downstream forces the request to be served by an alternate fallback strategy. Our team identified that the remaining problem was building an alternate way to get us the catalog. A cache was needed to serve the purpose. This can be seen in the sequence of images below
We can see that the cache hit rate rises as the circuit breaker kicks in and response success is established. Once the system is restored, the cache hit rate decreases, and the circuit breaker is back in a closed state. An LRU (Least Recently Used) cache was implemented, backed by Aerospike.
LRU was chosen to go by the 80-20 rule (80% of the requests are for 20% of the products). Currently, Aerospike does not have an out-of-the-box LRU implementation. To create LRU behavior, data entry/retrieval in Aerospike is done through Lua scripts that run on the Aerospike nodes.
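The eviction policy itself can be illustrated with a minimal in-memory sketch (illustrative JavaScript only; as described above, our production cache lives in Aerospike behind Lua scripts). A JavaScript Map iterates keys in insertion order, so re-inserting a key on every access keeps the least recently used key at the front:

```javascript
// Minimal in-memory LRU cache sketch. The first key in the Map is always
// the least recently used, so it is the one evicted at capacity.
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
  }

  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);       // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }

  set(key, value) {
    if (this.map.has(key)) {
      this.map.delete(key);
    } else if (this.map.size >= this.capacity) {
      // Evict the least recently used entry (first key in insertion order).
      this.map.delete(this.map.keys().next().value);
    }
    this.map.set(key, value);
  }
}
```

With the 80-20 access pattern mentioned above, the hot 20% of products keep getting refreshed to the back of the Map while cold entries are evicted first.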
Now all the successful requests are also being cached in Aerospike, and requests are served from the cache when there is a failure (either timeout or error). If the failure persists for a few seconds above a threshold percentage of the requests, the circuit becomes open so that all the requests are served from the cache only. The system keeps on actively checking for stability downstream after a sleeping window. Whenever it is stable, the circuit becomes closed, and the overall system returns to a normal state.
Using the example of the eCommerce site from above, with a resilient system in place, the circuit breaker keeps an ongoing evaluation of the faults from the payments processor. It identifies long wait times or errors from the vendor. In such occurrences, it breaks the circuit, failing fast. As a result, users are notified of the problem, and the vendor has enough time to recover. In the meantime, the circuit breaker keeps sending one request at regular intervals to evaluate if the vendor system is back online. If so, the circuit breaker closes the circuit immediately. This allows the rest of the calls to go through successfully, thereby effectively removing the problem of network congestion and long wait times. And this is how we are building a resilient system for our eCommerce customers and preventing cascading downstream failures from happening.
In the future, we aim to improve and further fine-tune our cache admission strategy. We plan to use the frequency information of an LRU to implement the same. We currently use a single successful execution to close the circuit, but we intend to use a configurable number of data points to make a more intelligent decision.
Our ideal vision is to prevent a system from any outage or possible downtime with a fully robust and resilient system in place and reduce the occurrence of any such incidents to zero.
Book a demo with Unbxd and learn how we can help reduce downtime on your eCommerce website.
|
OPCFW_CODE
|
I am using a Windows 7 machine with an 8-channel audio card to provide
multichannel outputs from a dedicated software application. The
application plays sampled pipe organ audio in response to Midi commands
from musical keyboards. Basically it's a virtual pipe organ. The
problem I'm having is that the processor (quad core AMD) speed changes
due to various things that can happen, such as overheating, etc. This
results in the audio board putting out a click or pop when the processor
speed is adjusted. I have changed many parameters and have improved the
situation considerably, however, 1 pop or click is not acceptable. I
know many of the parameters must be adjusted in the BIOS, but I'm sure
there are some things that W7 does that affects this problem. Anyone
have any ideas?
Are you using the "Always On" power scheme? Visit the Power
control panel, and look for a scheme that keeps the CPU at
full speed all the time.
Even if the processor was running at its lowest speed under Cool N' Quiet,
it would probably be enough to service the sound card. But what is
bad though, is P-state changes, which might take 100 microseconds or so.
I don't know if the CPU is available to execute instructions during
that transition. During a P-state change, VCore is adjusted and the
multiplier is changed, and the two changes are sequential. (And it
depends on whether the CPU is speeding up or slowing down, as to which
change is done first in the sequence.) When playing a video, I think
AMD systems have been known to make 30 P-state changes per second.
I don't know if that's the nature of where your click or pop is
coming from - a buffer underrun due to that 100 microsecond outage
sounds pretty unlikely to be enough to do it. The buffer is probably
a lot bigger than that, and the threshold should leave plenty of
time for it to get serviced when it needs to be filled up.
Other possibilities, are activities on the computer which are
not even visible as such, from the OS. Such as System Management
Mode or SMM. You can check for SMM activity, in an indirect
measurement way, using DPCLat. Certain Gigabyte boards show
spikes in DPCLat, implying long periods of time spent in SMM.
And, SMM could not be disabled. SMM is used for things, like
adjusting multi-phase VCore designs, while the system is running.
The change to the number of phases being employed, is done via
BIOS code running under SMM. The OS is blissfully unaware it
has been booted from the processor by SMM. (It's possible if you
were in SMM long enough, you could miss a clock tick interrupt.)
And DPCLat, uses the service time of Deferred Procedure Calls, to
indirectly determine something like SMM is happening.
I have no idea what OSes that program supports.
If you test out that program, and verify you don't have an SMM
problem, look in Task Manager, and see if you've acquired a
"LtcyCfgSvc.exe" process. It's possible that got installed
on my system when using DPCLat. I'm not really sure, but that's
about all I can associate it with on my main machine.
Typically, people who build audio workstations, test with DPCLat
to see if the motherboard is going to be a problem or not.
Gigabyte has released updates to the BIOS, that reduce spikes in
DPCLat, so it is possible to make improvements of the worst
cases (like, blowing a clock tick), caused by long SMM runtime.
(Some background on SMM, here.)
This would be an example of a motherboard with a serious problem.
The green bits are good. The red spikes are not. The original
photo is no longer available, and all I can get is this crappy copy.
Note that, there are some transitions on a PC, where there
are unavoidable long delays. I've noticed, entering or
exiting a 3D game, causes a large spike in DPCLat. So if you're
working in a recording studio, with an audio workstation,
don't run off and play Quake while you're recording a live act.
Stick to playing Solitaire.
|
OPCFW_CODE
|
At the quoted price, the system includes Windows XP Media Center Edition and Microsoft Works Suite 2006 software, a standard Internet keyboard and optical mouse, and some rather flimsy 2.1 speakers. You can save a couple of bucks by foregoing the neon light kit, which thankfully comes with a back-mounted on/off switch.
Interestingly, while you can configure your new Value Ultra to within an inch of its life, you can't do anything to the warranty. However, the warranty isn't terrible to begin with: three years for labor, one year for parts protection, and lifetime toll-free technical support. That's more than most vendors offer as a standard warranty, but iBuyPower doesn't allow you to upgrade the warranty to match the ironclad four- and five-year optional warranties some of the big guys offer. Moreover, phone support is available only during business hours, five days a week.
Benchmark chart (longer bars indicate better performance): BAPCo SysMark 2004 overall rating; SysMark 2004 Internet-content-creation rating; SysMark 2004 office-productivity rating.
Find out more about how we test desktop systems.
Dell Dimension E510
Windows XP Media Center Edition 2005 SP2; 3.0GHz Intel Pentium 4 531; Intel 945G chipset; 512MB DDR2 SDRAM 400MHz; 128MB ATI Radeon X300 SE (PCIe); Maxtor 6L160M0 160GB 7,200rpm Serial ATA
Windows XP Media Center Edition 2005 SP2; 2.2GHz AMD Athlon 64 3500+; Nvidia Nforce4 chipset; 1,024MB DDR SDRAM 400MHz; integrated Nvidia GeForce 6100 graphics chip using 256MB shared memory; Western Digital WD2000BB-22GUCO, 200GB, 7,200rpm, ATA/100
Windows XP Media Center Edition 2005; 3.06GHz Pentium 4 519; Intel 915G chipset; 512MB DDR2 SDRAM 533MHz; integrated Intel 915G graphics chip using 128MB shared memory; Seagate ST3160023AS 160GB 7,200rpm SATA
iBuyPower Value Ultra
Windows XP Media Center Edition 2005 SP2; 2.2GHz AMD Athlon 64 3500+; Nvidia Nforce4 chipset; 1,024MB DDR SDRAM 400MHz; 256MB Nvidia GeForce 7600GT PCIe; Western Digital WD2000JS-00PDB0 200GB 7,200rpm SATA
Lenovo 3000 J105
Windows XP Professional SP2; 2.2GHz AMD Athlon 64 3200+; VIA VT8237 chipset; 512MB DDR SDRAM 400MHz; integrated VIA S3 Unichrome Pro graphics chip using 64MB shared memory; Western Digital WD800JD 80GB 7,200rpm Serial ATA
|
OPCFW_CODE
|
How should you go through an online video course? Skip ahead? Miss sections? Play videos at two times speed? Let’s talk about the pros and cons of doing that today.
Again, we’re talking about how you should go about going through an online video course. There’s lots of different ways of doing it. Some people prefer to go through the video from start to finish. So they’ll watch each and every video in its entirety, they’ll go through all the challenges, coding exercises, get to the end and they’re finished. Now, that’s got an obvious advantage, of not missing anything. Very importantly you’ve gone through everything, you’ve done all the examples, your coding exercises and challenges and hopefully you’ve got a lot out of the course.
But for some other people, particularly those who may have some existing experience in the particular technology being taught in the course, perhaps it's a good idea to start skipping some sections. So, you might look at, for example, the introductory side of things, the first two or three sections perhaps of a course that go through the really basic things, skip those, and go to a particular area.
What I’m going to suggest in general, is that’s not a good idea. The reason is, over the years, I’ve found that programmers, even if you have picked up skills in a particular language or framework, or whatever it is, if you picked up those skills elsewhere, you can find that when you go to another course, that’s taught in a slightly different way, or, now perhaps there was something omitted in that other training that you find in the new course. So if you skip ahead and just assume that you know all that already. You’re opening yourself up for the possibility of failure.
Now, sometimes I still go through a video course, and I like watching them in their entirety, and believe it or not, even after 35 years as a programmer, I will still sometimes find things and go, "Wow! I didn't know that!" or, "Wow, I didn't realize that." I'm still learning every day. I would suggest you don't skip ahead. The same also applies to playing videos faster. Some people like to play videos at 1.5 times the normal speed, or two times, and just sort of fast forward through it, in a rush to finish. The other thing I really wanna point out here is that you shouldn't be treating this as a race. It's not a race to get to the end of the course quickly. The idea of taking the course, the idea of spending your money, is to learn something, to hopefully be able to take those skills and to get yourself a job in the future.
So, if you're instead hell-bent on finishing the course as quickly as possible, chances are pretty high you're going to miss something. So I would suggest you take your time and allocate a block of time to finish the course, knowing it's not gonna happen overnight. It's not a race; take your time, go through it all from start to finish. Again, as I pointed out earlier in the video, do all the coding exercises and challenges et cetera and complete the course in its entirety, and you'll end up a better programmer in the long term, because you fully understood, hopefully, the material that's been taught in that course.
I hope that helped. If you’ve got any questions, feel free to leave a comment and I’ll get back to you.
|
OPCFW_CODE
|
How does this promise without an argument work in this mutation observer?
I was looking at this mutation observer in some TypeScript code and I can't figure out how the promise works in it. I've not seen a promise without an argument in this style before:
const observer = new MutationObserver((mutations: MutationRecord[]) => {
Promise.resolve().then(someNextMethod)
this.somethingThatTakesAWhile()
})
observer.observe(HTMLtarget)
When the observation is triggered, someNextMethod runs after this.somethingThatTakesAWhile() has run. How is this possible in this instance? I don't understand how the Promise gets passed any arguments and knows how to resolve in this case. Could someone explain the internal mechanics of this code please? I'm at a bit of a loss as to how it runs in that order. Thanks!
the promise does nothing, but forces the code to wait asynchronously to "resolve" that promise. It's basically a Promise version of the old setTimeout(doSomething, 0); hack.
Here is a great talk explaining how these types of event loop tricks work in JavaScript: https://www.youtube.com/watch?v=cCOL7MC4Pl0
@RobinZigmond Not quite, the promise version will place it on the microTask queue..
@Keith - yes I'm aware of that, I just decided it wasn't worth elaborating on that difference in a comment. But it was in my mind when I said "a Promise version of".
A little off-topic, but the this.somethingThatTakesAWhile() doesn't sound like something that should be executed on the main thread. Running blocking code freezes the browser until it finishes.
Sorry for the confusion, but none of this explains why the code waits for somethingThatTakesAWhile to finish before someNextMethod is run; that's the part I'm having difficulty understanding. Why does somethingThatTakesAWhile run and finish first?
Relevant: https://stackoverflow.com/questions/27647742/promise-resolve-then-vs-setimmediate-vs-nexttick
@CafeHey Because someNextMethod won't have a chance to execute until all synchronous code has finished. The method gets pushed into a queue, and before the next event-loop task is processed, whatever is in the microtask queue will execute.
The main point of something like this:
Promise.resolve().then(someNextMethod)
is just to call someNextMethod() after the current chain of execution finishes. In a browser, it is similar to this:
setTimeout(someNextMethod, 0);
though the Promise.resolve() method will prioritize it sooner than setTimeout() if other things are waiting in the event queue.
So, in your particular example, the point of these two statements:
Promise.resolve().then(someNextMethod)
this.somethingThatTakesAWhile()
is to call someNextMethod() after this.somethingThatTakesAWhile() returns and after the MutationObserver callback has returned, allowing any other observers to also be notified before someNextMethod() is called.
As to why this calls someNextMethod() later, that's because all .then() handlers run no sooner than when the current thread of execution completes (and returns control back to the event loop), even if the promise they are attached to is already resolved. That's how .then() works, per the Promise specification.
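This ordering can be reproduced in a small standalone sketch. The function names below mirror the question's pseudo-code, and the busy loop is just a stand-in for synchronous work:

```javascript
// Order of execution: all synchronous code first, then microtasks
// (.then callbacks), then macrotasks (setTimeout), even at 0 ms delay.
const order = [];

Promise.resolve().then(() => order.push('someNextMethod')); // microtask queue
setTimeout(() => order.push('timeout'), 0);                 // macrotask queue

function somethingThatTakesAWhile() {
  for (let i = 0; i < 1e6; i++) {} // stand-in for blocking, synchronous work
  order.push('somethingThatTakesAWhile');
}
somethingThatTakesAWhile(); // runs to completion before any queued callback

setTimeout(() => {
  // prints: somethingThatTakesAWhile,someNextMethod,timeout
  console.log(order.join(','));
}, 10);
```

Note that the .then callback beats the 0 ms setTimeout even though setTimeout was registered earlier: microtasks drain completely before the event loop moves on to the next macrotask.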
Why exactly someone would do that is context dependent and since this is all just pseudo-code, you don't offer any clues as to the real motivation here.
thank you so much @jfriend00, in this context it was to disconnect the observer until the code had run and then reconnect it, in case somethingThatTakesAWhile() triggered the observer and caused an infinite loop.
This:
Promise.resolve().then(someNextMethod)
Is equivalent to this:
Promise.resolve().then(() => someNextMethod())
Working backward is equivalent to:
const myNewMethod = () => someNextMethod()
Promise.resolve().then(myNewMethod)
Defining a function inline or pointing to a function is just a syntactical difference. All of the arguments passed through then will be passed to the referenced function. But in this case, there isn't any as it's a promise with an empty return value.
In other words, the method doesn't need any parameters. In this instance it's actually just an old JS hack/trick to get the code to run at the end of the call stack.
"there isn't any" - actually there is, the value undefined. A promise cannot have an "empty" result.
Sorry, but that doesn't bring somethingThatTakesAWhile into the flow, how come somethingThatTakesAWhile runs before someNextMethod?
Because promises are asynchronous. Code in JS does not necessarily execute in order. Asynchronous code is code which is queued for future computation. Usually it's used for things like network requests. Let's just pretend this is one for the sake of argument. With a promise that represents a network request, the callback would run when the response comes back. In your example, the callback is queued straight away because there's nothing to wait for.
You might wonder why it still runs later rather than immediately. The reason is that in JS, all promise callbacks are queued to run after the current call stack completes. Even if a callback is queued for execution "as soon as possible", in the context of async programming that still means at the end of the current call stack, when control returns to the event loop.
|
STACK_EXCHANGE
|
"""
TEKTRONIX OSCILLOSCOPE EXAMPLE
Shows how to connect to a TekTronix oscilloscope over USB using pyvisa, and download all visible channels to an H5 file
"""
from telepythic import find_visa
from telepythic.library.tekscope import TekScope
# look for USB instrument (will fail if there is more than one)
instr = find_visa('USB?*::INSTR')
# connect to the instrument as an oscilloscope
scope = TekScope(instr)
print('Connected', scope.id().strip())
##### download the channels #####
import pylab as pyl
import numpy as np
import h5py
# create a new h5 file with the data in it
with h5py.File("scope.h5", "w") as F:
    # find out what channels this scope has
    chans = scope.channels()
    for ch, col in zip(chans, 'bgrkym'):
        # if the channel is enabled, download it
        if chans[ch]:
            print('Downloading', ch)
            wfmo, T, Y = scope.waveform(ch)
            # save it to the file
            D = F.create_dataset(ch, data=np.vstack([T, Y]).T)
            D.attrs.update(wfmo)
            # plot it
            pyl.plot(T, Y, col)
pyl.show()
|
STACK_EDU
|
A Python project to perform basic temperature conversions between Kelvin, Celsius, and Fahrenheit entirely by voice, using libraries such as SpeechRecognition, pyttsx3, and gTTS.
INTRODUCTION & FUNCTION:
To perform temperature conversions in Python there is no direct method; they must be computed from the standard formulas. But with the help of different Python libraries, it is possible to make this process completely voice-based, without having to input any details by hand.
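For reference, the conversions themselves come down to a few standard formulas. A minimal sketch follows; the function names here are illustrative, not taken from the project's actual source:

```python
# Illustrative sketch of the conversion formulas; `convert` and
# `to_celsius` are hypothetical helper names, not the project's own.
def to_celsius(value, unit):
    """Convert a temperature in the given unit to Celsius."""
    if unit == 'celsius':
        return value
    if unit == 'fahrenheit':
        return (value - 32) * 5 / 9
    if unit == 'kelvin':
        return value - 273.15
    raise ValueError(f"unknown unit: {unit}")

def convert(value, from_unit, to_unit):
    """Convert between Kelvin, Celsius, and Fahrenheit, going via Celsius."""
    c = to_celsius(value, from_unit)
    if to_unit == 'celsius':
        return c
    if to_unit == 'fahrenheit':
        return c * 9 / 5 + 32
    if to_unit == 'kelvin':
        return c + 273.15
    raise ValueError(f"unknown unit: {to_unit}")

print(convert(100, 'celsius', 'fahrenheit'))  # 212.0
```

Going via Celsius keeps the code to two formulas per unit instead of one for every pair of units.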
In this project we use the different libraries mentioned below in the requirements section for the following uses:
pyttsx3 is used for text-to-speech conversion. We use it to turn the computed result into voice, which allows the system to convey the result through an audio interface.
SpeechRecognition is used for speech-to-text conversion, with the help of attributes like pause_threshold and adjust_for_ambient_noise. This allows the program to convert the audio into text that the rest of the program can process.
gTTS is both a CLI tool built on the Google Translate API and a Python library.
Within the code, the speak(audio, language) function lets the system speak by converting the provided text into audio and playing it back; this is the main means of interaction in the project.
Within the code, the myCommd(param) function takes voice input from the user, with a set pause threshold of 0.5 seconds.
1. Python 2.6 or above installed.
2. PyAudio installed as per the system specification.
3. Further requirements:
3.a. pyttsx3: CODE for installing-- pip install pyttsx3
3.b. SpeechRecognition: CODE for installing-- pip install SpeechRecognition
3.c. gtts: CODE for installing-- pip install gtts
3.d. googletrans: CODE for installing-- pip install googletrans
Operand Operation1 Connector Operation2
Operand: any number
Operation1: the From unit, i.e., the unit from which the operand is to be converted
Operation2: the To unit, i.e., the unit to which the operand is to be converted
Connector: the word "to" (no particular condition applies to it)
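A voice command in this format could be split apart with a small helper like the following; parse_command is an illustrative name, not taken from the project's source:

```python
def parse_command(text):
    """Parse 'Operand Operation1 Connector Operation2',
    e.g. '100 Celsius to Fahrenheit'.
    (Illustrative sketch, not the project's actual parser.)"""
    parts = text.lower().split()
    if len(parts) != 4 or parts[2] != 'to':
        raise ValueError(f"expected '<number> <unit> to <unit>', got: {text!r}")
    value, from_unit, _, to_unit = parts
    return float(value), from_unit, to_unit

print(parse_command("100 Celsius to Fahrenheit"))  # (100.0, 'celsius', 'fahrenheit')
```

In the real project the `text` argument would come from the SpeechRecognition transcription rather than being typed in.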
The above-presented image shows how the interface works. The audio input follows the above-mentioned format.
This image indicates the situation where the user gives no audio prompt to the program. The line "...." is a message displayed to the user indicating that the system is waiting for their response; it is also how the program is terminated properly.
The output speaks the answer of the conversion for the input provided and then asks whether one wants to continue with further conversions. Upon answering no, the termination sentence for the program is displayed, as in images img 1.1 and img 1.2.
Submitted by Pusuluri Sidhartha Aravind (aravindpusuluri)
Download packets of source code on Coders Packet
|
OPCFW_CODE
|
Quote quiz: who said this? (No fair looking it up). I have modified the original quotation slightly, by making a handful of word substitutions to bring it up to date:
It might be argued that the human race would never be foolish enough to hand over all power to AI. But we are suggesting neither that the human race would voluntarily turn power over to AI nor that AI would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on AI that it would have no practical choice but to accept all of the AI’s decisions. As society and the problems that face it become more and more complex and as AI becomes more and more intelligent, people will let AI make more and more of their decisions for them, simply because AI-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the AI will be in effective control. People won’t be able to just turn the AI off, because they will be so dependent on it that turning it off would amount to suicide.
I’ll post the answer, and the unedited original quotation, next week.
UPDATE: Here's the answer.
I googled it. So, it's that.
When you post the answer, can you also say what your purpose was? Is this intended to cast doubt on the view it expresses by associating it to the larger document it's drawn from, or boost the original by saying that this bit of it seems sensible, or something else?
I can say my purpose now, before I give the answer. I'm glad you asked, because people tend to make assumptions.
My purpose is neither to cast doubt on the views expressed here nor to boost their source. It's just a piece of intellectual history. I think it's interesting that someone had this view at a particular time and place, and in a particular context. It's interesting to think about what evidence they had that might have led them to this view, and what evidence they clearly didn't have (e.g., because it hadn't happened yet) that therefore couldn't have been part of what led them to this view. I think when we trace the history of ideas, and see how far back they go, we learn something about the ideas themselves, and the arguments that led to them.
holy shit... he said this.
I had no idea he was a pre-dune butlerian, I thought it was a more general aversion to general societal capability progress.
Uh, I now consider him to be an ordinary member of the ranks of destructive anarchists - those who would destroy the power of centralized authority rather than construct a network of caring resistant to centralization's harmful impacts, who I would call constructive anarchists (but there may be name conflicts with this use of the word, suggestions for better naming are welcome.)
As what I would call a constructive anarchist, I at the same time cannot deeply fault the views of those who choose violence, because I cannot stop them except by constructing solutions to the wounds that lead them to choose violence to retain or achieve empowerment-of-selfhood. destructiveness is an understandable, though unacceptable, response, and I cannot say that violence against violence can ever be disallowed, even though it is terrible and not a true solution to the problem of violence. the disease of conflict spreads through conflictons, and it is slowed somewhat when the conflictons reflect instead of being emitted at someone not involved, and yet what I want is to end the emission of new conflictons... (This is a paragraph I felt would be interesting to toss into metaphor.systems, and it sure was; the suggested search is also interesting and very different)
but geez. what a mess we're in.
before I see the results, my guesses:
8%: EY (early in career)
18%: dude who first brought up superintelligence in that one paper, name not available in brain
20%: minsky or contemporary from the early ai capabilities work, before 1990
39%: all my guesses were wrong
all my guesses were wrong; closest match "contemporary of minsky" but he wasn't an early ai researcher himself
|
OPCFW_CODE
|
My name is Sri Harsh Amur, and I am a developer at Apps for Tableau. I am originally from India and came to the Netherlands in 2019 to do my Bachelor’s in Computer Science at the University of Twente.
I have been passionate about computer science and programming since high school. From learning various languages to participating in competitive coding challenges, I was naturally attracted to computers and technology. They really excite me as I get to think creatively and make new software products from scratch.
When I saw that Apps for Tableau was looking for developers, I thought it was quite an exciting opportunity. I was familiar with building web applications in browsers, but building it inside BI tools like Tableau or Power BI was something new to me. I decided to apply for the position as it allowed me to use my existing skills to build products and challenged me to adapt and grow my skills.
My First Project
I spent my first couple of weeks learning Tableau and how to build extensions in it. The first extension I made could calculate the KPI values of Tableau worksheets. The idea behind this was that tools like Excel have various built-in functions and formulas to display KPI values given some data, but it was next to impossible if the same had to be performed in Tableau.
The prototype I had built helped me understand the Extensions API of Tableau and the differences between a regular browser like Chrome or Firefox and Tableau’s extension browser. I also got to learn how to build React applications within Tableau.
Overall, it was a fun project to get started with extension development.
Teams Integration PoC
I started by making a prototype where users could create visualisations of a worksheet using a custom-built Viz Editor with Tableau's brand-new Viz API features. Once we built the prototype extension, Merlijn asked me to investigate how it could be connected to Teams.
One way was to use a channel's webhook URL to send messages and vizzes to a Teams channel. Although this fulfilled the basic requirement, it lacked many features, like tagging your team members in a message or choosing a team or chat from within the extension.
The second way was to let the user log in to Microsoft and get access to Teams using SAML. This opened up a lot of features that were previously not available with webhook URLs.
I got to learn a lot while building this extension. I got comfortable with reading API documentation and came to understand how APIs work. I was pleasantly surprised by the positive response on platforms like LinkedIn. I am delighted to have been given the opportunity to create this prototype! The Teams integration will be released in Q1 of 2022! Want to see a demo? Check it out on our LinkedIn page!
We love to make solutions and help Tableau users to do things more efficiently. If you have feedback, ideas, questions or need support, please share them with us! Also, don’t forget to follow us on social media for our latest news and updates.
|
OPCFW_CODE
|
Data Flow Diagramming by Example. Process Modeling Techniques for Requirements Elicitation (2015)
The Power of Data Flow Diagrams
Questions answered in this chapter:
§ Why should I draw a Data Flow Diagram?
§ What does a fully balanced DFD look like?
§ What value does a DFD fragment provide?
What Does the Data Flow Diagram Do for You?
From the perspective of the one wearing the BA hat, the act of creating a data flow diagram is an awakening. Drawing the diagram forces you to ask questions that you might otherwise overlook. It is also an awakening for members of the business community whose process you are depicting. The people in the trenches and those managing them quite often have never seen a picture of their process and a picture activates parts of the human brain that words cannot. As a result, the phrase, “I see” takes on a whole different meaning when you are presented with a picture of your process. For that reason, I recommend drawing a DFD just to get everyone involved on the same page.
Once you have a DFD, exploding a process and balancing the data inputs and outputs between the levels often reveals missing data flows.
After all, no one can think of everything at once. If the tool finds a single missed data flow, it is probably well worth the time it took to draw the diagram and apply the technique. The same is true of horizontal balancing to reveal missing data elements. If we asked IT to automate a process with a missing data flow, we most likely will end up with an application that does not meet the business needs.
IT professionals are generally extremely good at their job and they will most likely recognize that they are missing something at some point in the development process. The problem is the timing of the discovery and the related cost when the omission is discovered. Adding a missing process late in the project is a relatively simple step, but missing data often affects a multitude of processes, making it one of the most expensive errors for IT projects. The simple act of identifying data elements and ensuring their completeness allows you to recognize and resolve these issues before you involve developers. In my experience, that is one of the most powerful arguments for spending time to develop and analyze a data flow diagram.
A Fully Balanced DFD
To recap, a completely balanced (levelled) data flow diagram starts at the top with a context diagram consisting of one or more processes that are in scope for your project and all external entities with which those processes exchange data.
Each of those Level 1 processes explodes to a Level 2 data flow diagram depicting the detailed processes inside the Level 1 process with all data flows and data stores that are internal to the exploded process. Each process on the Level 2 diagram would either explode further to a Level 3 DFD (and from Level 3 to Level 4, etc.) or be described in detailed process specifications. Each data flow and each data store on the lowest level DFD would explode to a list of the contained data elements.
Creating a DFD Fragment
Although balancing a completely levelled DFD reveals data discrepancies and disconnects, it may not be necessary for your project. Many people (in particular on projects following an Agile approach to delivering technology) only need a small fragment of a DFD to understand the inner workings of a specific process. The time required to create a completely balanced diagram is not justified if a developer only needs to know how the CREDIT DEPARTMENT establishes the credit limit for a new customer. In that case, a DFD fragment might suffice.
The following is an example of a DFD fragment based on an exercise that we use in our instructor-led classes. To test your understanding of the concepts presented, you might want to take this opportunity to draw a DFD fragment using the project Scope Statement and the Interview Notes that follow before peeking at our solution.
Scope Statement: This project will enhance our web-based Policy Maintenance System by allowing policyholders to interact directly with their insurance policies or claims. The system will support web-based policy payments and allow prospects to apply for temporary coverage pending underwriting rate approval. Once the application is received by Underwriting, it will follow standard Underwriting procedures.
Interview Notes: In the future, a prospect will submit his/her application via our website. If the prospect does not yet have a policy with us, the site will request a credit check web service and either reject or approve the application directly. If the request is from one of our current customers in good standing or approved via the credit check, the site will provide a temporary proof of insurance certificate that the prospect can print out and use to register his/her vehicle. In any case, the request will then be forwarded to underwriting for normal processing, which will either lead to acceptance (the norm), modification (overriding a web rejection) or rejection (bad risk). If the request is approved, a policy will be issued and sent to the customer via standard mail.
Here is an example of the diagram that many of our students have produced for this scenario.
Note that this data flow diagram shows a business process at some indeterminate level of detail. Some of the processes might be very high-level whereas others are very specific. If you need to understand how any of these processes works in detail, you could “explode” it to see its internal processes.
Creating a Data Flow Diagram is an extremely revealing and rewarding step in the analysis of a business process. I have never used any other tool that is as effective at triggering animated discussions amongst the stakeholders about how a business process works and how it could be improved. Obviously, creating the diagram is just the first step. The diagram opens the door to a series of specific business analysis techniques that will help the business community recognize how their actions impact other downstream processes. You can also identify problem areas, timing anomalies, and error handling issues that can lead to missing requirements.
It is important to note that the diagram is a snapshot in time. Once you present the business community with this versatile visual aid, they may immediately start to make changes. Because of the cumulative effect of those changes, you should never assume that a diagram you created a few months or even years ago is still valid. If you really need to understand the current business process, you are best served by starting from scratch, as we demonstrated. The problem you face is, of course, the effort required to flesh out all of the details presented in the balancing section. Is it really worth the time?
As a tool that benefits the project or reduces the risk of potential project failure, a data flow diagram can be worth its weight in gold. We recommend against spending project resources developing one just for the sake of having a picture.
“I think by drawing, so I'll draw or diagram everything from a piece of furniture to a stage gesture. I understand things best when they're in graphics, not words.”
- Robert Wilson
|
OPCFW_CODE
|
Original question from Quora:
How can I get a telecommute job as a programmer?
I’m a full time remote (telecommute) programmer at a successful startup, so I have some experience on how to land a remote job. It’s not as easy or obvious as a traditional local job, but it’s possible if you put the effort in.
Here is how I got into remote work…
A couple years ago I was sort of unhappy at my job. I had this crazy idea…
Remote work meant I could work for an east or west coast company while living in the Midwest. I could get paid well while my cost of living stayed reasonable.
More importantly, there is far more demand for Rails developers on the coast than in the midwest.
So, I used my usual system for finding a job.
I found sites like We Work Remotely that posted remote jobs and every time a reasonable job popped up, I applied. If possible, I did one a day, every day. Some days there were more available, and some days no jobs were posted.
The important thing was to have a habit of applying for jobs. To land a job, you need to apply.
My first job lasted about a year, until suddenly the company folded. It sucked.
Even still, I had good telecommuting skills at that point, so I did the one thing I knew I could to get a remote job.
I took massive action.
I applied to every remote job I could. I think I applied for 50–100 in the first couple weeks. I was pushing 10+ applications out a day. It was exhausting.
I also contacted all of my developer friends to see who was hiring and I applied at those jobs too.
Guess what? After a week or so, I had a job offer that was even better than my previous job.
The company I work for doesn’t do a lot of remote positions, but they made an exception for me because I have the skills they want and the experience working remotely.
That’s not an accident. I’ve spent a decade developing my skills as a professional programmer. I’ve worked on some semi-famous software projects. I put the work in.
But, now that I put the work in to having the skills that are in demand, taking massive action when the time came made all the difference.
So, how does this apply to you?
If you want to telecommute, you need to apply to any sensible jobs you find until you get interviews and a job offer. Make it a habit.
Apply to at least one job a day.
If you don’t have the skills that are in demand right now, take the time to learn them and make yourself more marketable.
Remote work isn’t any harder to get than any other job if you put the effort in. Most people aren’t putting the effort in to land those jobs.
P.S. Have you subscribed to Code Career Genius yet?
|
OPCFW_CODE
|
Tasks allow members to create, assign, track, and collaborate on tasks. Creating a task list can help to better evaluate the scope of a project and manage tracking of individual assignments within a project team. Tasks can be assigned to any content type, such as requesting changes to a document or tracking actions from a meeting. Personal tasks can be used to remind you, or others, of outstanding actions due for completion. In addition, tasks can be sorted and filtered into one consolidated dashboard.
Easily track To dos, takeaways, action items, review cycles, or assignments from your last meeting.
Features and functionality
Our platform is simple and intuitive but that doesn’t mean its capabilities are limited. There are a number of ways you can configure our platform to do exactly what you want. Below is a list of all additional features found within this particular feature or function.
- Title: Give your channel a name that describes the tasks that will be housed within.
- Description: This field will display a brief description for people to learn more about the purpose of the task channel.
- Location: This feature allows you to choose where the task channel will reside in your digital workplace.
- Hide from navigation: When this feature is selected, no one will be able to see this channel. However, you can still access the channel through the site manager or by its URL.
The following options have their own dedicated support documents. Please follow the links for more information:
How to create a task channel
Follow the steps below to add a Task Channel to your digital workplace.
- How you start depends on your role in the digital workplace:
- Select + Add on the Site Manager/Navigation page and select Task from the list.
- In the Add Task Channel window, complete the following fields:
- Title: Enter a name for the task channel.
- (Optional) Description: Enter a description of the task channel.
- Location: Select where to place the task channel.
- You can only place task channels under pages or spaces.
- Only pages and spaces to which you have at least Read access are visible.
- To select a page or space:
- Select the Location dropdown.
- Search for a Page or Space by its name. This search will return up to 100 results that match your search query. These results also display the locations above matching pages and spaces.
- Select a page or space from the list of search results.
- (Optional) Hide from navigation: Select to prevent the task channel from being shown in any navigation menus.
- Select Add.
After creating the task channel, you should go to it and configure the following:
How to create a task list from a template
Follow the steps in this article to create the task list template, and once complete follow these steps to apply it:
- Select Channel Template from the Edit menu in your new task channel.
- Select and apply the task list template.
Frequently asked questions
Can I assign tasks to a group?
No. Tasks are only available to assign to an individual user. This ensures that an individual is responsible for either completing a task or ensuring another team member completes it by acting as a task leader.
I have a repeated list of tasks that I am using as a part of a project space. Is there a way to keep this list populated so they don’t need to be entered each time?
Tasks have a Channel Template feature. You can create a list of tasks and save it as a template. You are able to save multiple task channel templates to use depending on the type of project you may be doing.
Initial access rules
When you first create a task channel, it will inherit any cascading, anonymous, and author access rules from the page or space above it. However, if you create this channel at the root of your digital workplace, the channel will be given the same cascading specific access rules as your digital workplace's current homepage. At the same time, the channel's anonymous access rule will receive a value of No Access, while the author access rule will receive a value of Full Access. Since the specific access rules are placed on the channel, not inherited, they will persist even if you move the channel to a different location. To change the access rules on a channel, select Actions followed by Access to navigate to its Access page.
Stay up to date
Get an email notification when you receive a task or the status changes. And, the userbar keeps a handy running total of all the tasks assigned to you.
When you mark a task as private, only you and the person it is assigned to can see it. They can be used for things like confidential requests, or for tracking edits to publicly available content.
Your personal tasks are kept private. They’re great for remembering to file expense reports, book travel for a client meeting, or pick up pizza on your way home.
Every task can be broken into subtasks to help break work up into smaller parts, or divide it between team members.
When viewing a document, wiki, blog post, or a forum topic, you will find a Tasks bar at the bottom of the page. Creating a task here, following the method above, will assign the task to this specific piece of content.
|
OPCFW_CODE
|
No matter how much hardware you throw at a problem, unless the solution is designed to use that resource, it won't be optimised for it. And MSFS is not designed to use 16 cores / 32 threads. Some problems are inherently not parallelizable and cannot be put on separate cores. If you force them to run on multiple cores, performance can actually decrease. I am not saying that MSFS is as optimised as it can ever be; there is definitely room for improvement. But it probably won't be solved by having a lot of cores.
The other aspect is that optimization takes time and expertise. Being an enterprise programmer, I know that it takes a lot of both to optimize a large, diverse codebase. Plus, since the MSFS core is based on the FSX codebase, which was not written or maintained by Asobo, it's expected to take even more time for them to understand the bottlenecks in the system and optimize them. The expertise needed for this is expensive and not easy to come by. You will find a lot of developers who can write good code in JS or C# or Python. Nothing against those developers, but it's much harder to find developers who write good C/C++ code.
So, the gist of the matter is that optimization will come as Asobo's engineers become more and more familiar with the FSX part of the codebase and replace it part by part with their own. But as customers we just have to accept that reality and stop throwing hardware at it with the expectation of a magic fix.
I have noticed a minor performance drop overall, but not to that extent. And performance is pretty inconsistent for me from run to run, so I try to restart the sim after every flight or every flight load when testing performance. Make sure you are not having an issue like that. For me, there is a more annoying issue, which results in a periodic frame time spike that causes a stutter every 2 seconds or so. This usually happens if I quit out to the main menu and then load the flight again. I am not sure if this used to happen before or not, but because of the parking-spot-being-changed-to-runway bug, I am having to do this a bit more frequently.
Just installed the beta yesterday. About -10% FPS in my default scenario compared to live version.
Stutters are occurring related to SimConnect/FSUIPC apps. I updated to the latest FSUIPC, but some frequent stutters still occur after about 2 hrs into the flight. They go away when closing tracking apps that read data through SimConnect or FSUIPC. I was not able to verify where exactly they come from, but something's going on there…
So how come, guys, if this is a known performance issue that the beta gives -10% FPS, the beta would be closed and released? Someone very smart on this page told me the whole point of a beta is to prevent issues going into the stable version! So what I see is the stable update is coming out next week and we still have a -10% performance issue. That is not good, guys! Before release we should have improved performance, not decreased it!
Dude, chill. Just because some are experiencing performance issues doesn't mean we all do. Most of us are seeing amazing performance in this beta, and that's with everything complex. Just wait until the final version is released, then see if you have great performance.
Lots of things were added this beta. The simulation in the background gets more complex with every update, so of course there will be a performance impact. Performance is still great for me, just a few fps less: when I say 10%, I mean 45 instead of 50 in the same scenario.
The bigger issue is the stutters with FSUIPC/SimConnect. But that will be sorted soon, I assume.
As far as the human eye can tell, anywhere between 20-30 FPS is playable, 30 being about the eyes’ max/optimum. So unless you’re hitting below 20-30 FPS in 9/10 scenarios, there’s no reason to really be upset.
I’ve been a part of every single beta since day 1. I know when to remove the mods. I don’t believe my mods have any performance impact, since they all work fine in SU11. SU12b, on the day I posted, definitely did not perform well.
Those of us testing the SU12 beta are aware of FSUIPC, and I even have the “fixed” version of FSUIPC. No issues today, but a couple of days ago the SU12 beta kept stuttering and dropping FPS significantly. I wasn’t even using a mod plane… so something was definitely going on. I DDU’d my driver and reinstalled a previous one, and that has given me more stable performance today.
As with any type of software testing, it is always best to remove any third-party mods or plugins that were not created in-house, so that you have a sanitized starting point from which to determine where issues may arise.
My comment was meant more as a procedural starting point to help you out.
Good to know that you have a more stable performance.
Sure, send me the $ for the copay. I never said it’s the same. I said that around 30 is about the eye’s natural processing speed; anything above 30 is just a faster version of what is already pretty much fluid. I also said it’s playable. Also, plenty of hardcore gamers have argued that higher FPS only really benefits first-person shooter type games. Lots of ‘pro’ gamers in FS lock their frames at 30 FPS because that’s about the standard for a smooth experience. Anything higher than 30 is nice but not needed.
import numpy as np
import numpy.ma as ma
import pandas as pd
import jenkspy
import netCDF4
from netCDF4 import Dataset
from scipy.signal import savgol_filter
from multiprocessing import Pool, cpu_count
import swepy.downsample as down
def get_array(file, downsample=True):
"""
Take 19H and 37H netCDF files, open and store tb
data in np arrays
Parameters
-----------
file: str
filename for 19H or 37H file
    downsample: bool
        if True and the file is high-resolution (EASE2_N3.125km),
        block-average the grid down by a factor of 2 in x and y
"""
fid = Dataset(file, "r", format="NETCDF4")
tb = fid.variables["TB"][:]
if downsample and fid.variables["crs"].long_name == "EASE2_N3.125km":
tb[tb.mask] = 0.00001
tb = down.downsample(tb, block_size=(1, 2, 2), func=np.mean)
fid.close()
return ma.masked_values(tb, 0.00001)
else:
fid.close()
return tb
def pandas_fill(arr):
"""
Given 2d array, convert to pd dataframe
and ffill missing values in place
Parameters
----------
arr: np.array
Ideally time vector of swe cube
"""
    df = pd.DataFrame(arr)
    df = df.ffill()  # forward-fill NaNs down each column
    return df.values
def vector_clean(cube):
"""
    Clean erroneous spikes out of 37GHz cube
Parameters
-----------
cube: np.array(t,x,y)
np array time cube of 37GHz tb data
Note: "cube" can be used with other arrays but is looking for patterns in 37H files
"""
cube[cube == 0] = np.nan
for i in range(np.shape(cube)[0]):
arr = cube[i, :, :]
mask = np.isnan(arr)
idx = np.where(~mask, np.arange(mask.shape[1]), 0)
np.maximum.accumulate(idx, axis=1, out=idx)
cube[i, :, :] = arr[np.arange(idx.shape[0])[:, None], idx]
return cube
def __filter(cube):
"""
    Apply a Savitzky-Golay filter from scipy to the time vectors of the cube
Parameters
-----------
cube: np.array(t,x,y)
np array time cube of swe for passive microwave data
"""
shapecube = np.shape(cube)
smooth_cube = np.empty((shapecube[0], shapecube[1], shapecube[2]))
    if shapecube[0] == 1:
        raise ValueError("Cannot smooth a cube with time vector of length 1.")
    elif shapecube[0] < 51:
        # short time series: shrink the window to fit and keep it odd
        window = shapecube[0] - 1 if shapecube[0] % 2 == 0 else shapecube[0]
        poly = 3 if window > 3 else window - 1
else:
window = 51
poly = 3
for x in range(shapecube[1]):
for y in range(shapecube[2]):
pixel_drill = cube[:, x, y]
pixel = pandas_fill(pixel_drill)
yhat = savgol_filter(np.squeeze(pixel), window, poly)
yhat[yhat < 2] = 0
smooth_cube[:, x, y] = yhat
return smooth_cube
def apply_filter(cube):
"""
    Apply the __filter function in a parallel fashion.
    Makes use of a Pool to process on every available core
    Parameters
    -----------
    cube: np.array(t,x,y)
        numpy array of data, must be 3 dimensional
    """
    cpus = cpu_count()
    try:
        swe_parts = np.array_split(cube, cpus, axis=2)
    except IndexError:
        raise ValueError(
            "Array provided does not have a 3rd axis to split on. Please provide a 3 dimensional cube."
        )
    with Pool(cpus) as p:
        parts = p.map(__filter, swe_parts)
    try:
        return np.concatenate(parts, axis=2)  # recombine the split cube
    except ValueError:
        raise ValueError(
            "Array provided is smaller than the number of cores available."
        )
def mask_ocean_winter(swe_matrix, day=0, nclasses=3):
"""
Use a winter day to mask ocean pixels out of coastal imagery in arctic.
There is a clear difference between winter land pixels and ocean pixels
that classification can sort out for us using a simple jenks classification.
Data should have already moved through "vector_clean" and "apply_filter"
Parameters
----------
swe_matrix: np.array
swe time cube
day: int
julian date of time series to use for classification (should be winter)
nclasses: int
number of classes to use in jenks classification, defaults to 3
"""
winter_day = swe_matrix[day, :, :]
    breaks = jenkspy.jenks_breaks(winter_day.ravel(), nclasses)
    # assume the lowest jenks class is ocean; mask everything at or below its upper break
    mask = winter_day <= breaks[1]
    winter_day[mask] = -8888
matrix_mask = np.zeros(swe_matrix.shape, dtype=bool)
matrix_mask[:, :, :] = winter_day[np.newaxis, :, :] == -8888
swe_matrix[matrix_mask] = -8888
return swe_matrix
def safe_subtract(tb19, tb37):
"""
Check size of each file, often the 19 and 37
    matrices are one unit off of each other.
Chops the larger matrix to match the smaller matrix
"""
shape1 = np.shape(tb19)
shape2 = np.shape(tb37)
s1 = [shape1[0], shape1[1], shape1[2]]
s2 = [shape2[0], shape2[1], shape2[2]]
if s1[1] < s2[1]:
s2[1] = s1[1]
elif s1[1] > s2[1]:
s1[1] = s2[1]
if s1[2] < s2[2]:
s2[2] = s1[2]
elif s1[2] > s2[2]:
s1[2] = s2[2]
tb19 = tb19[:, : s1[1] - 1, : s1[2] - 1]
tb37 = tb37[:, : s2[1] - 1, : s2[2] - 1]
tb = tb19 - tb37
return tb
def save_file(metafile, array, outname):
"""
Save processed array back out to a new netCDF file
    Metadata is copied from the un-processed file, everything but TB
Parameters
----------
metafile: str
old file to copy metadata from
array: np.array
processed TB array
outname: str
name for output file
"""
toexclude = ["TB"]
# Open old file and get info
with netCDF4.Dataset(metafile) as src, netCDF4.Dataset(
outname, "w"
) as dst:
        # copy global attributes all at once via dict
dst.setncatts(src.__dict__)
# copy dimensions
for name, dimension in src.dimensions.items():
dst.createDimension(
name, (len(dimension) if not dimension.isunlimited() else None)
)
# copy all file data
for name, variable in src.variables.items():
if name not in toexclude:
dst.createVariable(
name, variable.datatype, variable.dimensions
)
dst[name][:] = src[name][:]
# copy variable attributes all at once via dict
dst[name].setncatts(src[name].__dict__)
dst.createVariable(
"TB", src.variables["TB"].datatype, src.variables["TB"].dimensions
)
dst["TB"][:] = array[:]
return outname
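The row-wise forward-fill idiom inside vector_clean is compact enough to deserve a standalone illustration. The sketch below replicates that indexing trick on a small array (it does not import the module above; names here are only for the demo):

```python
import numpy as np

# Small 2D array with NaN gaps, standing in for one time slice of the cube.
arr = np.array([[1.0, np.nan, 3.0],
                [np.nan, 2.0, np.nan]])

# Same idiom as vector_clean: for each row, replace every NaN with the most
# recent non-NaN value to its left. A leading NaN has nothing to its left
# and stays NaN.
mask = np.isnan(arr)
idx = np.where(~mask, np.arange(mask.shape[1]), 0)
np.maximum.accumulate(idx, axis=1, out=idx)
filled = arr[np.arange(idx.shape[0])[:, None], idx]
```

The `maximum.accumulate` carries the column index of the last valid value rightward, so the final fancy-indexing step gathers each row from its own "last seen" positions without any Python-level loop.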
Upgrade VCR to latest and Webmock to latest
This is necessary groundwork for tackling #304
Builds on #303 so merge that first.
Coverage increased (+0.06%) to 96.446% when pulling ba1018e6239fe1b5cab5746699acf2cb232e8bb4 on samphilipd:upgrade_webmock_and_vcr_v2 into fb31976794c2ca87af767d626c7b8edeafeca704 on piotrmurach:master.
Coverage increased (+0.06%) to 96.446% when pulling 61ba1a6c689b69b967aed5c43a5cb16f01c471fe on samphilipd:upgrade_webmock_and_vcr_v2 into fb31976794c2ca87af767d626c7b8edeafeca704 on piotrmurach:master.
Coverage increased (+0.06%) to 96.446% when pulling 40caae2190a448de5fdd94455661c217b5740827 on samphilipd:upgrade_webmock_and_vcr_v2 into fb31976794c2ca87af767d626c7b8edeafeca704 on piotrmurach:master.
I have manually merged these changes due to me messing around with the dependencies setup and rerecording some of the feature tests. Do you want me to release a new version with the projects api, or do you have time to look into the cards and columns apis as well?
@piotrmurach I can implement both of those this week.
That would be sweet! The only thing I would add is that the smaller the PRs the better, it's much easier to review and merge.
It should be much smoother to progress now that the testing dependencies have been upgraded 😄 I really want to upgrade to RSpec 3, and there is a tool that I've been using to slowly convert old specs to the new syntax - transpec. If you fancy moving some specs from spec/client over to the unit/client folder for now, that would be great.
@piotrmurach yes I agree about PR size. Now that the tests are sorted it should be pretty straightforward.
I'll see what I can do about the specs.
@piotrmurach I'd like to use projects to track my progress on the following tasks:
reactions
issues.timeline
migration
orgs.outside_collaborators
orgs.blocking
pull_requests.review_requests
users.gpg_keys
users.blocking
repos.invitations
repos.traffic
repos.community
Any chance you could grant me permissions to create one?
dual booting kubuntu and Opensuse
Willy K. Hamra
w.hamra1987 at gmail.com
Thu May 21 15:48:24 UTC 2009
On Wednesday 20 May 2009 18:39:38 Goh Lip wrote:
> Willy K. Hamra wrote:
> > anyone here has such a setup? i just would like to know how others
> > configure grub. right now i have a separate boot partition for ubuntu,
> > and suse has its own partition. is it ok to let suse use the same boot
> > partition as its own boot partition as well? keep suse's boot directory
> > on its root partition? what about grub? so far, i'm very accustomed to
> > ubuntu's grub and like it. i don't care much about suse's fancy eye-candy
> > grub, but if i can have the eye candy with ubuntu's grub i don't mind.
> > i just want to see how people on this list configured their booting with
> > suse (or another distro if it's the same)
> Normally, the last distro to be installed will be the 'overriding' grub
> to be used as its grub will set itself to be 'root' (grub root, that
> is.) If you want Kubuntu grub to be used after Suse has been set up,
> when you are in Kubuntu, (by entering Kubuntu at Suse's grub menu) do
> the following.....
already done these. after getting a few boot errors, i realized suse's grub
doesn't like my savedefault, so i booted kubuntu manually, and restored my
grub on the device, and i always have a copy of my menu.lst backed up under a
million different names ;)
eventually, i edited SUSE's fstab, and removed the boot partition from there,
and let it use the boot directory in its root partition. i kept ubuntu's
menu.lst and grub, copied the entries from suse's menu.lst over to ubuntu's
file, edited them of course to reflect correct disk usage.
so now, any editing suse does to its /boot directory is quite useless, as grub
is reading from ubuntu's separate /boot partition.
i also noticed how suse always keeps a link called vmlinuz pointing to the
newest kernel, so i edited menu.lst to actually point at that link, so any
future kernels will always be booted without any problems. same goes for
initrd. all in all, i'm quite satisfied with the setup. the only disadvantage
is that suse can't properly cooperate with ubuntu so they can both edit the
menu.lst fairly, so suse's access to its own menu.lst is useless, and so is
KDE's access to it, but oh well, that's not really needed.
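For reference, a menu.lst stanza of the kind described above, booting SUSE through its vmlinuz/initrd symlinks so future kernel updates keep working, might look something like this (the partition and root device shown are only examples, adjust them to your own layout):

```
title     openSUSE
root      (hd0,4)
kernel    /boot/vmlinuz root=/dev/sda5 ro
initrd    /boot/initrd
```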
thanks for your help Goh :)
> o at a terminal "sudo update-grub"
> check at the grub menu.list (/boot/grub/menu.lst) that it is indeed
> updated with the Suse entry. Modify if you want, especially
> the sequence.
> o then at terminal,
> sudo grub
> >grub will appear. (type in after the >grub..)
> >grub find /boot/grub/stage1
> you will find 2 entries, (or more if you have more distros)
> something like
> (hd0,4) or whatever
> This will represent the partition that you have set up Suse and Kubuntu,
> say sda3 (hd0,2) is Kubuntu and
> sda5 (hd0,4) is Suse
> at >grub type
> >grub root (hd0,2)
> >grub setup (hd0)
> >grub quit
> You're done.
> Willy, since you have a dedicated partition for your Kubuntu grub, if
> you want to set this partition as a first grub, and don't worry what
> else you put into any of the partitions, let me know.
> All you need to do when you install a new distro or new Kubuntu is
> remember to set up grub at that / partition itself and point to that
> same partition. Remember when installing Kubuntu? Under advanced, at
> point of naming partition for setup? It will ask you where you want to
> set up grub and if should point to (hd0), Point to (hd0,x) where it is
> the Kubuntu partition. Remember (hd0,x) is sda(x+1); ie, sda5 is (hd0,4).
> Goh Lip
Willy K. Hamra
Manager of Hamra Information Systems
Co. Manager of Zeina Computer & Billy Net
Microchip and The Things Industries (TTI) are in cahoots together and they want to bring in developers to their secure circle. The idea is to provide secure authenticated communication as well as secure deployment.
The solution is built around Microchip’s ATECC608A-MAHTN-T secure element chip (Fig. 1). This I2C device is a cryptographic coprocessor with hardware-based secure key storage. It can store up to 16 keys or certificates. It supports a range of encryption methods, including FIPS SP800-56A Elliptic Curve Diffie-Hellman (ECDH), the NIST standard P-256 elliptic curve, SHA-256 and HMAC hashing with off-chip context save/restore, and AES-128 with support for encryption/decryption and Galois field multiply for GCM.
1. Microchip’s ATECC608A-MAHTN-T secure element incorporates factory-installed keys that support secure registration and authentication.
The standard ATECC608 comes with no keys installed, but they’re included with the LoRa variant. This is designed to work with TTI’s LoRa network support. LoRaWAN is a low-speed, long-distance wireless network protocol used for the Internet of Things (IoT).
Typically, a user generates a public/private key pair for use in a public key infrastructure (PKI) and registers the public key with TTI. The private key is programmed into a LoRa device. Communication between the TTI servers on the internet and the device can be authenticated. The process is a bit involved and works for an individual developer, but this is too cumbersome when deploying hundreds to millions of devices.
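The core authentication idea, a device proving possession of a secret without ever sending it, can be sketched with a simple HMAC challenge-response in Python. This is a conceptual stand-in only: the actual ATECC608A performs ECDH over the P-256 curve inside the chip, and every name below is invented for illustration.

```python
import hashlib
import hmac
import secrets

# Stand-in for key material provisioned into the device at the factory.
# (The real part keeps its private key inside the secure element.)
device_key = b"factory-provisioned-secret"

def device_respond(challenge: bytes, key: bytes) -> bytes:
    """Device side: prove possession of the key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    """Join-server side: recompute the MAC and compare in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)           # server's random nonce
response = device_respond(challenge, device_key)
assert server_verify(challenge, response, device_key)
```

Only the MAC of the challenge crosses the wire; an eavesdropper learns nothing reusable, which is the same property the secure-element flow provides with asymmetric keys.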
Working with TTI is one way to manage the process, and larger organizations could do this, but the two companies have made the process easy regardless of the number of devices a company plans to deliver. The secure key provisioning system starts with ATECC608A-MAHTN-T chips that are programmed at the factory with private keys that have a matching Manifest file containing the associated public key information (Fig. 2). The Manifest file is signed by Microchip and can be recognized by TTI when it receives the file. It then adds the key to its secure join server database. At this point, the device with the secure element can communicate securely with the secure join server.
2. A customer can order an ATECC608A-MAHTN-T (1). Microchip sends back a digitally signed Manifest file and the chip (2). Then the customer provides the file to TTI (3) so that the device can securely communicate with the secure join server (4).
The secure element will actually have a second private key because LoRa implements a dual security system. One is for the network and the other is for the service associated with the device. This allows for secure communication with the network and another internet-based server where the service can use distinct and independent authentication mechanisms. The additional key storage on the secure element is available for other application-dependent uses.
The private keys in the secure element are never revealed at any step of the process. This is a key (pun intended) aspect of the approach. Microchip and TTI hold private keys that are likewise never revealed. This prevents keys from being compromised at any point in the delivery of a LoRa solution.
The pre-provisioned solution comes with one year of TTI Join Server service. It also supports re-keying should a device need to migrate to another join server.
In addition, Microchip provides a LoRa protocol stack that supports the ATECC608A-MAHTN-T. The secure element can work with most any host capable of using the I2C interface.
The ATECC608 is designed to defend against a range of attacks, including microprobing, timing attacks, emissions analysis, fault or invalid command attacks, and power cycling and clock glitches. There’s an active shield over the entire chip and all memories are internally encrypted. Data-independent crypto execution is included, and the system uses randomized math operations. The system is designed with internal state consistency checking.
The chip uses voltage tampers and isolated power rails as well as an internal clock. Furthermore, Microchip applies secure test methods without using JTAG; no debug probe points or test pads are on the chip.
A company can also deploy its own secure join server. The same, pre-programmed secure element will work with either server. The ATECC608A has received a high Joint Interpretation Library (JIL) rating that’s defined by the Common Criteria security standard.
3. The ATECC608A can be placed into an AT88CKSCKTUDFN-XPRO module (right) and combined with a LoRaWAN radio (center) and host (left).
Microchip provides LoRa client hardware and software that’s compatible with this solution (Fig. 3). The secure element is used in a removable module (AT88CKSCKTUDFN-XPRO).
Lately I’ve had a few computer malfunctions in my life. The laptop I used for work was stolen, and the hard drive on my computer at home had a crash that even SpinRite couldn’t fix. I lost some documents I was working on, but thankfully I’d been saving most of my important documents to a shared work drive. Since these debacles I’ve been making sure I save in multiple places, and I even invested in a service called Mozy to back up my files at home.
I wanted to share with you what tools I’ve been using to help offset another computer disaster:
Dropbox – I’ve been saving any current documents I’m working on into Dropbox. I can access these files from any computer, and it’s allowed me to add project ideas to documents through my phone, which I find very useful. I’ve also shared folders with my husband for home stuff and with my coworkers for projects we are currently working on. Lastly, I’ve put a copy of the staff directory there so I have access to my coworkers’ phone numbers in case I need to call someone from home or while I’m on a school visit.
Evernote – I’ve been using this service to archive all my meeting notes and handouts. I can easily tag the documents and find them later if I need to reference any ideas discussed at a previous meeting. I’ve also found this helpful to store the emergency manual materials so I can have access to them on my phone or at home in case an emergency arises.
Screenshots – I’m a visual person, so after having to set up my preferences again on my computer, I took a screenshot to help me “put things back” in case I have to reinstall my browser addons or favorite programs, or just want to remember what I have installed on my different computers.
Mozilla Sync – This is an addon for the Firefox browser, but it’s allowed me to keep all of my bookmarks the same in every browser I log into.
Toread.cc – I’ve added this service to my bookmarks so I can send myself links for todos while I’m home, in a meeting, or just away from my desk.
Google Calendar Sync – When my laptop was stolen, all my archived emails and calendar entries were lost. While not the most tragic thing, I lost some of the timelines for system-wide projects I work on. After getting a new laptop, I set Outlook and Google to do a two-way sync so all my information is saved online in Google.
Binders – I’ve also started a binder for each major project I work on, containing the information anyone would need to know about the project in case I’m on vacation or another employee takes the lead on the project in the future.
What do you use to help keep yourself organized?