Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
Is there an elegant way to convert a Map<P, Optional<Q>> to a sparse Map<P, Q>?
Is there an elegant way to convert a Map<P, Optional<Q>> to a sparse Map<P, Q> ?
This should work, but it's a bit meh:
Map<P,Optional<Q>> map = ...;
Map<P,Q> map2 = map.entrySet()
.stream().filter(e -> e.getValue().isPresent())
.collect(Collectors.toMap(e -> e.getKey(), e->e.getValue().get()));
Maybe: map.forEach((key, optional) -> optional.ifPresent(value -> map2.put(key, value))); but map2 has to be initialized first: Map<P, Q> map2 = new HashMap<>();
I would say your way almost already is the most elegant way, I'd only do some slight cosmetic changes and replace e -> e.getKey() in your collector with Entry::getKey. This is only a small change, but communicates your intent better then the other lambda.
Map<P, Optional<Q>> map = new HashMap<>();
Map<P, Q> sparseMap = map.entrySet().stream()
.filter(e -> e.getValue().isPresent())
.collect(Collectors.toMap(Entry::getKey, e -> e.getValue().get()));
Why aren't the other solutions better / more elegant?
Because they aren't more concise, and they again fall into the trap of not declaring what you want to do, but how, which is common in procedural styles, but not so much in functional ones.
If you look at the above code, it is pretty much self-explanatory and has a nice flow to it. You first have a non-sparse map with Optionals, then declare the sparse map without Optionals and then describe the transformation of the former map into the latter. This has also no side-effects. The sparse map is assigned only when the collector is actually finished.
If you look at other solutions, those invert the logic flow and use the procedural way of thinking:
Map<P, Optional<Q>> map = [....];
Map<P, Q> sparseMap = new HashMap<>();
map.forEach((key, opt) -> opt.ifPresent(value -> sparseMap.put(key, value)));
This is only marginally shorter then:
Map<P, Optional<Q>> map = [....];
Map<P, Q> sparseMap = new HashMap<>();
for (Entry<P, Optional<Q>> e : map.entrySet()) e.getValue().ifPresent(value -> sparseMap.put(key, value))
You save a few chars due to type inference, but in the end, if you format them reasonably, both foreach solutions need 4 LOC, so they aren't shorter then the functional one. They aren't clearer, either. On the contrary, they rely on causing side-effects in the other map. Which means during computation, you get a partially constructed sparse map assigned to your variable. With the functional solution, the map is only assigned when it is properly constructed. This is only a small nitpick, and likely not going to cause issues in this case, but is something to keep in mind for other situations where it might become relevant (e.g. when concurrency is involved), especially when the other map isn't a local variable but a field -- or worse, passed in from somewhere else.
Furthermore, the functional approach scales better - switching to a parallel stream if you have lots of data is trivial, converting the foreach-approach to parallel requires rewriting to the functional filter/collect approach anyways. This isn't relevant for such lightweight operations (in fact, do not do it here, its likely slower), but in other situations that might be a desirable characteristic.
In my opinion, using the functional filter/collect approach is preferable to using the procedural foreach, because you train yourself to use good habits. But keep in mind that "elegance" is often in the eye of the beholder. To me, the more "elegant" way is the proper functional way which has no side effects. YMMV.
While the ability to parallelize the stream is nice in theory, I'd very much like to see the map for which a parallel stream performs this task more efficiently than a sequential one (or just iterating over the map directly). In general, parallel streams are useful when the entries must go through some slow nontrivial processing that can be usefully shared among multiple threads. Just extracting a value from an Optional, however, is such a lightweight operation that the overhead of parallelizing it is likely to exceed the cost of the operation itself by a considerable margin.
Why not transform the Entry<P, Optional> into Entry<P, Q> with a map before collecting into a map?
map.forEach “is only marginally shorter” than the for loop, because you’re using the raw type Entry. A correct solution would have to use Entry<P, Optional<Q>>, so the difference in code size depends on the actual P and Q.
@IlmariKaronen Absolutely, its irrelevant in this case. That is why I pointed out developing good habits. because when you get into the situation where you need it, you do not need different code then the one you are already writing.
@Holger true, will change that a bit
How about this:
Map<P, Q> map2 = new HashMap<>();
map.forEach((key, opt) -> opt.ifPresent(value -> map2.put(key, value)));
It's simpler when you create a new map and add values by iterating over the original map:
Map<P, Q> map2 = new HashMap<>();
map.forEach((p, q) -> map2.compute(p, (k, v) -> q.orElse(null)));
All original entries for which value.isPresent() would return false are skipped (not included in result): Map.compute removes/ignores the key's mapping when remappingFunction yields null.
Here's an example to clarify how nulls are dealt with:
Map<String, Optional<Integer>> map = new HashMap<>();
map.put("one", Optional.of(1));
map.put("two", Optional.of(2));
map.put("three", Optional.empty());
map.put("four", Optional.of(4));
map.put("five", Optional.empty());
Map<String, Integer> map2 = new HashMap<>();
map.forEach((p, q) -> map2.compute(p, (k, v) -> q.orElse(null)));
System.out.println(map2);
The output is {four=4, one=1, two=2} (when map2.compute gets null from q.orElse, p is not added to map2)
@nullpointer I see that particular aspect attracts attantion. Please read my last sentence in this post :-)
@ernest_k Still null looks a bit nasty. I would avoid it if possible.
@ZhekaKozlov I'm probably not explaining correctly. It's not inserting nulls. I'll add an example.
It's short but I don't find it very readable. If I saw this in a larger codebase it wouldn't be immediately obvious what it does.
@JohnKugelman Thanks for the opinion. Readability is both relative and (sometimes) secondary.
@ernest_k: Sometimes, yes. However, ZhekaKozlov's solution seems noticeably more readable to me, and I'd be surprised if it wasn't also more efficient.
@IlmariKaronen That's one of the best things about SO ;-) - a question gets answers of many kinds, some better than others in one one or many ways. That's why we sometimes can't resist upvoting even answers competing against our own. That's how we learn or just get to discover other perspectives. That's how we learn here: seeing examples of both right and wrong (with the help of arguments for and against anything). It's not a competition, after all...
"Map.compute removes/ignores the key's mapping when remappingFunction yields null." Is that really so? The JavaDoc does not seem to give this guarantee: "Attempts to compute a mapping for the specified key and its current mapped value (or null if there is no current mapping)." Maybe you are confusing compute with computeIfAbsent, which says: "If the specified key is not already associated with a value (or is mapped to null), attempts to compute its value using the given mapping function and enters it into this map unless null."? Are you sure all Map implementations do as you claim?
@Polygnome You misunderstood both my comment and the javadocs. The docs say if you call map.compute(nonExistentKey, biFunction) and biFunction.apply(nonExistentKey, null) == null, then nonExistentKey won't be added to map. If you called compute(existingKey, biFunction) and biFunction returns null, then existingKey is removed from map. Would be nice if you took a minute to run the code in the post (and add map2.compute("six", (k, v) -> null); after the forEach). Result will be the same. This is the contract of Map, it has no exceptions or ifs for implementations.
@IlmariKaronen well yes, ZhekaKozlov’s solution has the advantage of not performing a hash operation for the empty optionals. This can result in higher performance if there’s a significant number of empty optionals.
I would stick with what you have. It directly expresses the intent and it leverages streaming, a quality the other answers lack. Sometimes Streams are verbose, nothing you can do.
Map<P,Q> map2 = map.entrySet()
.stream().filter(e -> e.getValue().isPresent())
.collect(Collectors.toMap(e -> e.getKey(), e -> e.getValue().get()));
|
STACK_EXCHANGE
|
Multi-Tenancy in Kubernetes using Loft’s Vcluster
Multi-Tenancy in Kubernetes using Loft Vcluster
Kubernetes is almost everywhere. If you want to deploy a web application, you’d need Kubernetes. You’d need to train an ML algorithm ( Kubeflow ) you’d need Kubernetes. Run Data analytics you'd need Kubernetes. So ideally Kubernetes is being used in every possible way. But are you using it the right way? Are you saving costs? Are you making use of the entire compute resources? Are you also sharing it the right way? Ahh, this is the actual point of the article, Multi-Tenancy. Multitenancy is a reference to the mode of operation of software where multiple independent instances of one or multiple applications operate in a shared environment. The instances (tenants) are logically isolated, but physically integrated.“Tenants” is a term for a group of users or software applications that all share access to the hardware through the underlying software. Multiple tenants on a server all share the memory, which is dynamically allocated and cleaned up as needed. They also share access to system resources, such as the network controller.
What is the entire story all about? (TLDR)
- Multi-Tenancy in Kubernetes.
- Using loft.sh’s Vcluster solution.
- Exploring various scenarios in multitenancy using loft.sh vcluster.
- A Kubernetes Cluster ( EKS, AKS, Kind, etc ).
- helm, loft.sh’s vcluster binary.
- GitHub Link: https://github.com/pavan-kumar-99/medium-manifests
- GitHub Branch: multitenancy
Scenarios for Multi-Tenancy ( Problem Statement )
Earlier I have explained the definitions of multitenancy and the tools that are used to enable this. But first, let us understand the problem statement for this use case. Assume you are a company that provides Kubernetes clusters to customers to deploy their workloads. Let us suppose you have 100 customers. Provisioning one cluster per customer is a nightmare. Managing those clusters is not an easy task, as the number of customers grows the number of clusters also increases. Drawbacks of having such architecture?
- Increase in the overall cost.
- Redundancy of the components to be installed ( For example bootstrap components like istio, vault, consul, etc to be installed on all the clusters ).
- Management of the clusters will be a nightmare.
- Lots of duplicate work to be done.
- Heavy spike in the costs ( You end up paying money for both the control plane and worker nodes ).
- No Isolation.
- Each customer would end up using a lot more than allocated resources. We cannot control the number of resources that the customer can use.
What if? What if there is a solution to this. How would you react if you can create a Kubernetes cluster inside a Kubernetes cluster?
Yes, you have read that right. Here comes loft.sh’s vcluster ( virtual cluster ) into the picture. Virtual clusters are fully working Kubernetes clusters that run on top of other Kubernetes clusters. Compared to fully separate “real” clusters, virtual clusters do not have their own node pools. Instead, they are scheduling workloads inside the underlying cluster while having their own separate control plane. The virtual cluster itself only consists of the core Kubernetes components: API server, controller manager, and storage backend (such as etcd, SQLite, MySQL, etc.).
Advantages of using loft.sh vcluster
- Fine-grained Isolation per tenant.
- Reduction in the Overall Cost.
- No Redundancy of components.
- Easy management of tenants/clusters.
- Resource allocation per tenant can be controlled easily.
Well, I hope you now have a clear picture of multitenancy and Virtual clusters. Let us now get started with the Demo. :)
Installing Vcluster using helm
Make sure you have vcluster and helm binary installed
I am already having a Kubernetes cluster up and running. You can use any Kubernetes distribution for this demo. Let us now create two namespaces in our physical cluster. These namespaces are nothing but our customers. Let us name the customers as
- customer-1 ( Trial Customer )
- customer-2 ( Paid Customer )
$ git clone https://github.com/pavan-kumar-99/medium-manifests.git \
-b multitenancy$ cd medium-manifests
Alright, we have now created two namespaces for two new customers. So let us assume that customer-1 is a free trial customer and he should only be given very few compute resources. For example CPU: 5cores, Memory: 10Gi, Pods 3 etc.
Let us now design a Virtual Cluster ( Vcluster ) for the free tier customer.
$ kubectl create ns customer-1$ kubectl config set-context --current --namespace=customer-1$ helm upgrade --install customer-1 vcluster \--values customer1-values.yaml \--repo https://charts.loft.sh \--namespace customer-1
The vcluster should now be created. You can check the pod in the customer-1 namespace. Things would now get interesting. Open another terminal tab and execute the following commands to connect to your virtual cluster.
$ vcluster connect customer-1 --namespace customer-1$ export KUBECONFIG="./kubeconfig.yaml"
Let me create a namespace in the Virtual cluster.
$ kubectl create ns test
The namespace should now be visible in the vcluster but not in the actual host cluster. Let us now create a deployment and scale the replicas to 2 ( As the hard limit is 3, no more than 2 pods will be able to run on the virtual cluster. There is already one pod by a system component ). The same is also the case with the other Kubernetes components.
Now let’s switch to the new customer ( Customer-2 ). The Paid customer should be having a huge quantity of resources including CPU, memory, services, etc.
$ kubectl create ns customer-2$ kubectl config set-context --current --namespace=customer-2$ helm upgrade --install customer-2 vcluster \--values customer2-values.yaml \--repo https://charts.loft.sh \--namespace customer-2
A virtual cluster is now created for customer-2 as well.
Open a new terminal tab.
$ vcluster connect customer-2 --namespace customer-2$ export KUBECONFIG="./kubeconfig.yaml"
You should now be able to create a huge number of computing resources including pods, volumes, etc.
This is how loft.sh’s Vcluster can be used to achieve multitenancy. The aforementioned example is one of the use-cases of multitenancy. In the Kubernetes Era, there are many more scenarios where multitenancy could be applied. Please feel free to share your experiences/thoughts/ideas to implement multitenancy in Kubernetes. Also, feel free to get in touch for any queries/consultation on Kubernetes here.
$ helm delete customer-1 -n customer-1$ helm delete customer-2 -n customer-2
Here are some of my other articles that may interest you
Until next time…..
PKI Certs Injection to K8s Pods with Vault Agent Injector
Inject PKI Certs Dynamically to Kubernetes Pods using Vault Agent Injector
Terraforming the GitOps Way !!!
Terraform with GitOps using Atlantis (Pull request Automation)….
Analyze Terraform costs with Infracost ( The GitOps Way )
Analyzing the terraforming cost with Infracost
Using Hashicorp Vault as a Certificate issuer in Cert Manager
Configure vault PKI backend as a certificate provider in Cert Manager
What are Virtual Kubernetes Clusters? | vcluster docs | Virtual Clusters for Kubernetes
Virtual clusters are fully working Kubernetes clusters that run on top of other Kubernetes clusters. Compared to fully…
|
OPCFW_CODE
|
June 11, 2018 — In the life of every WordPress administrator, saving a few seconds here and there can make a big difference at the end of the day.
We have heard of the command line before, tools like Bash, SSH and WP-CLI are considered too complex for the average administrator.
In this introduction, we will take an introductory look at these tools in a language that administrators can understand and in a way that we can start using them right away.
We will look at topics like:
1. Connecting to a remote server via SSH.
2. Executing simple WP-CLI commands to create fast database backups, export a list of users, or “search and replace” for a string on a website without the need of plugins.
2. Creating and using simple scripts in bash.
March 30, 2018 — Introductie voor beginners met enkele concrete praktijkvoorbeelden die verduidelijken hoe het gebruik van de commandolijn het dagelijks leven van iedere WordPress gebruiker makkelijker kan maken zodat er meer tijd overblijft voor de leuke dingen. Het leven is nu eenmaal meer dan website onderhoud alleen. Als beginnende WordPress gebruikers na de voordracht al niet onmiddellijk naar huis snellen om wp-cli uit te proberen dan hoop ik tenminste dat ik hun mogelijke angst voor de commandolijn een beetje kleiner heb kunnen maken 😉
March 8, 2018 — The command line is a very powerful tool that you can use for all sorts of things. But the terminal can be a little scary if you’ve never been there before.
Let’s demystify it a bit and go over:
what you’re looking at,
some basic commands that’ll help you get around, and
a glimpse of some more advanced commands that can be handy to have in your toolbox.
Equipped with the basics, we’ll then learn about a powerful WordPress tool: WP-CLI. This won’t be an in-depth developer discussion, but a demo of what it can do and why you should be excited to try it out.
Note: Sorry, but this will be based on Mac/Unix. Windows has some exceptions that won’t be covered.
February 27, 2018 — In this presentation I will demonstrate a simple install script that will have a fully functioning, customized WP website in about the same time than it normally takes download the WordPress.zip file. We will also look for how third party tools and hosts are leveraging the WP-CLI to make your life as a developer even easier. Walk away with: -A new appreciation for the command line. -The desire to script ‘all the things’ to save time. -Knowledge of serious time saving tools -Ideas about how to automate your processes to be more productive and profitable.
October 7, 2017 — В своем докладе на WordCamp Moscow Геннадий расскажет про WP-CLI – инструмент для работы с WordPress из командной строки.
Поговорим о том, что это такое, что нужно знать для начала работы и зачем вообще работать с WordPress из терминала. По ходу презнтации освежим в памяти основы SSH и удаленной работы с сервером.
May 22, 2017 — If you manage more than one WordPress website, you might have experienced situations where you needed to do the exact same operation on multiple websites. If you were clicking through the sites one-by-one and thinking that there should be a better way to manage what you are trying to do, I’ve got good news: there actually is!
WP-CLI lets you control your WordPress sites through the command line, allowing you to do any operation across an arbitrary number of sites. This can save you minutes, hours, days or even weeks of work.
I’ll start with a small introduction to the command line itself, and then continue with very easy and common scenarios where WP-CLI can save you large amounts of time with little to no effort.
November 9, 2016 — This talk starts by briefly touching on setup and installation, and then moves into some of the basic commands. Then it covers a few tricks, and finally how to easily extend WP-CLI with your own commands.
January 16, 2016 — The introduction to using WP-Cli, including: How to get it setup. Some helpful basics about the shell/command line. Some of the very useful things you can do with WP-CLI. Audience is Power users or administrators.
December 22, 2015 — WP-CLI is a set of command line tools for managing your WordPress site. It allows you to perform many tasks much quicker than you would be able to by other means. In this session, I will teach you how to get WP-CLI running, and show some of my favorite time saving features. Once you’ve started using WP-CLI, you’ll wonder how you ever lived without it! This talk is appropriate for developers, designers and server administrators of all skill levels.
December 17, 2015 — Do you walk the line between designer and developer? Are you more comfortable using a Graphical User Interface (GUI) and find the black screen that is the command line a bit intimidating? This session will look at some of the basic commands needed to get comfortable using the command line. Though designers may need the command line for little more than version control or compiling SASS, having a basic understanding of the command line can help speed up your workflow.
|
OPCFW_CODE
|
Table of Content
Imbalanced data refers to classification problems where one class outnumbers other class by a substantial proportion. Imbalanced classification occurs more frequently in binary classification than in multi-level classification. For example, extreme imbalance data can be seen in banking or financial data where majority credit card uses are acceptable and very few credit card uses are fraudulent.
With an imbalance dataset, the information required to make an accurate prediction about the minority class cannot be obtained using an algorithm. So, it is recommended to use balanced classification dataset. In this blog, let us discuss tackling imbalanced classification problems using R.
A credit card transaction dataset, having total transactions of 284K with 492 fraudulent transactions and 31 columns, is used as a source file. For sample dataset, refer to References section.
- Time – Time (in seconds) elapsed between each transaction and the first transaction in the dataset.
- V1-V28 – Principal component variables obtained with PCA.
- Amount – Transaction amount.
- Class – Dependent (or) response variable with value as 1 in case of fraud and 0 in case of good.
- Performing exploratory data analysis
- Checking imbalance data
- Checking number of transactions by hour
- Checking mean using PCA variables
- Partitioning data
- Building model on training set
- Applying sampling methods to balance dataset
Performing Exploratory Data Analysis
Exploratory data analysis is carried out using R to summarize and visualize significant characteristics of the dataset.
Checking Imbalance Data
To find the imbalance in the dependent variable, perform the following:
- Group the data based on Class value using dplyr package containing “group by function”.
- Use ggplot to show the percentage of class category.
Checking Number of Transactions by Hour
To check the number of transactions by day and hour, normalize the time by day and categorize them into four quarters according to the time of the day.
The above graph shows the transactions of 2 days. It states that most of the fraudulent transactions occurred between 13 to 18 hours.
Checking Mean using PCA Variables
To find data anomalies, take mean of variables from V1 to V28 and check the variation.
The blue points with much variations are shown in the below plot:
In predictive modeling, data needs to be partitioned for training set (80% of data) and testing set (20% of data). After partitioning the data, feature scaling is applied to standardize the range of independent variables.
Building Model on Training Set
To build a model on the training set, perform the following:
- Apply logic classifier on the training set.
- Predict the test set.
- Check the predicted output on the imbalance data.
Using Confusion Matrix, the test result shows 99.9% accuracy due to much of class 1 records. So, let us neglect this accuracy. Using ROC curve, the test result shows 78% accuracy that is very low.
Applying Sampling Methods to Balance Dataset
Different sampling methods are used to balance the given data, apply model on the balanced data, and check the number of good and fraud transactions in the training set.
There are 227K good and 394 fraud transactions.
In R, Random Over Sampling Examples (ROSE) and DMwR packages are used to quickly perform sampling strategies. ROSE package is used to generate artificial data based on sampling methods and smoothed bootstrap approach. This package provides well-defined accuracy functions to quickly perform the tasks.
The different types of sampling methods are:
This method over instructs the algorithm to perform oversampling. As the original dataset had 227K good observations, this method is used to oversample minority class until it reaches 227K. The dataset has a total of 454K samples. This can be attained using method = “over”.
This method functions similar to the oversampling method and is done without replacement. In this method, good transactions are equal to fraud transactions. Hence, no significant information can be obtained from this sample. This can be attained using method = “under”.
This method is a combination of both oversampling and undersampling methods. Using this method, the majority class is undersampled without replacement and the minority class is oversampled with replacement. This can be attained using method = “both”.
ROSE sampling method generates data synthetically and provides a better estimate of original data.
Synthetic Minority Over-Sampling Technique (SMOTE) Sampling
This method is used to avoid overfitting when adding exact replicas of minority instances to the main dataset.
For example, a subset of data from the minority class is taken. New synthetic similar instances are created and added to the original dataset.
The count of each class records after applying sampling techniques is shown below:
Logistic classifier model is computed using each trained balanced data and the test data is predicted. Confusion Matrix accuracy is neglected as it is imbalanced data. roc.curve is used to capture roc metric using an inbuilt function.
In this blog, highest data accuracy is obtained using SMOTE method. As there is no much variation in these sampling methods, these methods when combined with a more robust algorithm such as random forest and boosting can provide exceptionally high data accuracy.
When dealing with the imbalanced dataset, experiment the dataset with all these methods to obtain the best-suited sampling method for your dataset. For better results, advanced sampling methods comprising synthetic sampling with boosting methods can be used.
These sampling methods can be implemented in the same way in Python too. For Python code, check the below References section.
- Sample Credit Card Transaction Data:
- Associated R and Python Code in GitHub:
|
OPCFW_CODE
|
What exactly does GRUB_GFXPAYLOAD_LINUX=text do?
To be able to boot Ubuntu 10.10 or 11.10 in my new Lenovo L5210 with Intel Sandy Bridge I need to set GRUB_GFXPAYLOAD_LINUX=text in grub options. Otherwise I only get a black screen with a cursor in the upper left corner.
When I set GRUB_GFXPAYLOAD_LINUX=text, instead of the cursor I now get a error: no video mode activated message in the upper left corner.
So what exactly does GRUB_GFXPAYLOAD_LINUX=text do, and what do I lose by setting it?
13.1.9 gfxpayload
If this variable is set, it controls the video mode in which the Linux kernel starts up, replacing the ‘vga=’ boot option (see linux). It may be set to ‘text’ to force the Linux kernel to boot in normal text mode, ‘keep’ to preserve the graphics mode set using ‘gfxmode’, or any of the permitted values for ‘gfxmode’ to set a particular graphics mode (see gfxmode).
Depending on your kernel, your distribution, your graphics card, and the phase of the moon, note that using this option may cause GNU/Linux to suffer from various display problems, particularly during the early part of the boot sequence. If you have problems, set this variable to ‘text’ and GRUB will tell Linux to boot in normal text mode.
The default is platform-specific. On platforms with a native text mode (such as PC BIOS platforms), the default is ‘text’. Otherwise the default may be ‘auto’ or a specific video mode.
This variable is often set by ‘GRUB_GFXPAYLOAD_LINUX’ (see Simple configuration).
But more importantly: I found the message error: no video mode activated you get on Bug 699802 and it has a possible solution:
Decommenting #GRUB_GFXMODE=640x480 in /etc/default/grub actually solves the problem.
Remember to run sudo update-grub after changing /etc/default/grub.
Also look at comment 27 and also comment 24 and 30 as interesting workarounds. Comment 30:
Just wanted to confirm that the method for number 24 works well for people with the encrypted partition (don't bother with the uncommenting stuff). Just so anyone like myself out there doesn't have to look all over to figure out how to do simple commands (my first time ever using linux). Launch the terminal and go to the directory cd /usr/share/grub/ . Copy the font files to another directory (cp, needs sudo, and *.pft copies the three font files at once) with sudo cp *.pf2 /boot/grub then update grub with sudo update-grub.
|
STACK_EXCHANGE
|
Converting azimuthal map to equirectangular
Hi there. I've tried looking around for info on how to do this but am coming up blank.
I've got a map of a continent that is about 120° wide and 60° tall. It's drawn, basically, is if the world is flat. So let's say this is the equivalent, more or less, of the picture being taken from a satellite above the planet at a sufficient distance for the globe to look like a disc, and this satellite is positioned exactly over the center of this continent. So basically, this is a continent which will fit fine into an Azimuthal Equidistant projection using G.Projector, with the settings of longitude 0° E, latitude -30° N, and Radius 45°, with fill corners checked.
I want to take this azimuthal equidistant projection and project it back to a baseline Equirectangular projection. There doesn't seem to be any way in G.Projector to do this: it will only calculate other projections starting with an equirectangular projection, and won't go the other way around if you don't have an equirectangular one to start out with.
Any ideas? I'm open to ANY method that will get me from a "looks like a globe photographed from space" to an equirectangular map. The rest of the non-continent globe surface can be completely white, that's fine.
Actually a globe from space approaches an Orthographic projection, not an Equidistant one. You could try using GDAL if you don't mind a command line interface, or a GIS like QuantumGIS can do this.
You could also use a 3d modelling program. If you texture a sphere with your planar image, and set up a spherical/panoramic camera in the center of the sphere you can go back to the equi-rectangular projection.
(I've used POVRay for this in the past, but the learning curve is fairly steep as it has no GUI either )
Long, long ago I wrote http://www.ridgenet.net/~jslayton/ReprojectImage.zip to do this sort of thing. It's a little rough around the edges, but it works quite well to go from a number of projections to Equirectangular.
It took too long for my post to get through moderation, so I figured everything out myself before the replies came. I'm on OS X, so I used the various commandline tools.
You're right about orthographic, but I didn't quite describe myself well enough. Anyway, using azimuthal made an end result that looks like what I wanted.
Here's what I did...
I found a blue marble geotiff, and used listgeo to get the data out of it. My map is 10000px by 5000 px. I altered the pixel scale to 0.036 and this turned out completely right (as far as I can tell).
So I put the geo data into the tiff to make a geotiff...
geotifcp -g Roshar-equirectangular.geo roshar_world/roshar_no_title.tif Roshar-equirectangular-unskewed-geotiff.tif
Then I warped it to the azimuthal equidistant projection I wanted...
gdalwarp -s_srs '+proj=latlong' -t_srs '+proj=aeqd +lat_0=-30 +lon_0=0' Roshar-equirectangular-unskewed-geotiff.tif Roshar-azimuthal-equidistant-unskewed-geotiff.tif
Then I opened that in Photoshop and pasted the old map on top of the new one, lining everything up in the middle. I increased the size by 171% to get it to align (I'm not worrying about the dodgy interpolation of data).
Then I saved to a new tiff, copied out the geo data from the azimuthal geotiff using listgeo, then copied it over to my new saved version to make a new geotiff:
geotifcp -g Roshar-azimuthal-equidistant.geo Roshar-azimuthal-equidistant.tif Roshar-azimuthal-equidistant-geotiff.tif
Then I warped it back to equirectangular:
gdalwarp -t_srs '+proj=latlong' -s_srs '+proj=aeqd +lat_0=-30 +lon_0=0' Roshar-azimuthal-equidistant-geotiff.tif Roshar-partial-equirectangular-geotiff.tif
Then in Photoshop I pasted the new version on top of my old equirectangular version, scaled to 58.1% aligned in the middle. And that was pretty much done.
Then I used make_gores.pl from http://www.vendian.org/mncharity/dir3/planet_globes/ to make a gore map so it can be printed out on stickers and stuck onto a globe.
Tags for this Thread
|
OPCFW_CODE
|
Conducting Virtual Meetings With Linux, Part II - page 4
Setting Up Your Audio Streaming System With Integrated Chat
Most distributions have a bundled IRC server. I used the standard IRC server packaged with SuSE 7.3. I set it to start at boot time, although you can start it manually by typing (as root):
You can look at ps to make sure it's running. On a production machine, you may want to set the server to start or stop at a certain time using a cron job to control the daemon. For that matter you might want to start or stop the icecast server, in a similar manner as well (on your internet/DNS connected machine).
As the meeting host, the next step was to start up the IRC client, so I could manage my prototype chat session. The are many capable chat clients. I like Xchat and started it up by typing into a new terminal window:
If you need help with Xchat, take a look at my "Conducting Virtual Meetings with Linux, Part 1" article. If you don't want people to know you are running Xchat as root, change the nicknames and names at the top of the Xchat screen.
Starting and stopping the remote servers might be applicable, for example, if you wanted to stream/chat your LUG meeting at 8:00 pm. every Thursday night. You'd simply connect to the server machine at the right time, start liveice, start a mixer, run Xchat and off you go.
To test that the IRC session was running, I started up Xchat on the client machine and saw that if I typed a message it would instantly appear on the host Xchat window. You should make sure the server name and channel are the same as on your meeting host machine (laptop). Ideally you can type a comment on the client Xchat screen and see it appear on you meeting host machine. It should work the other way around, as well.
- Skip Ahead
- 1. Setting Up Your Audio Streaming System With Integrated Chat
- 2. Setting Up Your Audio Streaming System With Integrated Chat
- 3. Setting Up Your Audio Streaming System With Integrated Chat
- 4. Setting Up Your Audio Streaming System With Integrated Chat
- 5. Setting Up Your Audio Streaming System With Integrated Chat
- 1Linux Top 3: Alpine Linux 3.4, deepin 15.2 and Linux Lite 3.0
- 2Linux 4.7 Set to Boost Live Patching, Security and Power Management
- 3Linux 4.6 Charred Weasel adds USB 3.1 Support
- 4Linux Top 3: OpenIndiana 2016.04, Ubuntu 16.04 and Debian's New Leader
- 5Linux Top 3: KaOS 2016.04, TurnKey 14.1 and pfSense 2.3
|
OPCFW_CODE
|
For HandBrake users who prefer the classic
.deb package, there’s now new PPA for all current Ubuntu 20.04, Ubuntu 22.04, Ubuntu 23.04, Ubuntu 23.10, and their based systems, such as Linux Mint 20/21, Pop! OS, Zorin OS 17.
HandBrake announced the new major 1.7.0 release a few days ago. The release features new AMD VCN AV1 encoder, NVIDIA NVENC AV1 encoder, SVT-AV1 multi-pass ABR mode, Apple VideoToolbox hardware presets, improved QSV support, drag and drop support for video scanning, and various other changes. See the github releases page for details.
HandBrake provides official Linux package through
.flatpak package (see link above). It’s can be installed in most Linux, however, run in sandbox.
Install HandBrake 1.7.0 in Ubuntu via PPA
For those prefer Ubuntu PPA, I’ve uploaded the new release package into this unofficial PPA with all current Ubuntu releases, on
amd64 (Intel/AMD) and
arm64 (Apple Silicon, Raspberry Pi) CPUs support.
Thanks to the official guide, the new package in PPA is built with the latest run-time libraries (e.g., FFmpeg 6.1, libdvdnav & libdvdread, SVT-AV1 1.7.0) into single bundle. Meaning, it can be installed on old Ubuntu releases (20.04 & 22.04), without worrying about dependencies mis-match. Though, the .deb package size increased to be around 10 MiB.
1. First, press Ctrl+Alt+T on keyboard to open terminal. When it opens, run command to add the PPA:
sudo add-apt-repository ppa:ubuntuhandbook1/handbrake
Type user password (no asterisk feedback) when it asks and hit Enter to continue.
2. Linux Mint users need to manually update the system package cache, after adding PPA, by running command:
sudo apt update
3. Finally, install the new HandBrake package via command:
sudo apt install handbrake
For choice, you may also run
sudo apt install handbrake-cli to install the command line tool
When installation done, search for and launch the video transcoder from either start/application menu or ‘Activities’ overview depends on your desktop environment.
To uninstall the Ubuntu PPA, either open terminal (Ctrl+Alt+T) and run command:
sudo add-apt-repository --remove ppa:ubuntuhandbook1/handbrake
or, just remove the source line using ‘Software & Updates’ tool under “Other Software” tab.
To remove the HandBrake video transcoder, use command:
sudo apt remove --autoremove handbrake handbrake-cli
That’s all. Enjoy!
|
OPCFW_CODE
|
Papercut allows you to print remotely wherever you are. The system is designed such that you upload a file (doc, xls, ppt or pdf) to the webserver, then release the print job at a copier when it suits you. This enables printing directly from your personal laptops, without the need to install any drivers, or perform any configuration.
For security reasons, you will only be able to upload files from the university network. However, this includes OWL and Eduroam. This also means that you can run the University VPN when outside of the university network, and then upload the files.
Uploading files to the server will not be charged to your account, it is only when they are printed that they will be charged. Documents left unprinted after 3 days are automatically deleted fro the queue.
Prior to using this service, you will need to update your password in the College computer room. So you'll need to login to a computer in the hayloft using your SSO. Once logged in, you need to press CTRL-ALT-DEL together, and select the Change a Password option. You may wish to set this the same as your SSO password, but it needs to be at least 6 characters long with a mixture of letters and numbers. This will be the password for accessing Papercut through the web.
Note - Students who have been here for more than two years may have a username that doesn't match their SSO. To find out your GTC username, you'll need to login to a computer in the hayloft. Once in, select - Control Panel - User Accounts - Configure advanced user profile properties. The name field will give you your username. If you have any issues, book in to see someone in the IT team who can otherwise assist.
This requirement is temporary. Development work is underway to synchronise these accounts such that in future only the SSO will be required.
As stated above, you can only login to the system from a University network (cabled, OWL or Eduroam). If you live in private accommodation, please connect via the University VPN client first before trying to connect.
1)To login, you need to go to the following webpage :
You will receive a warning that this site has an untrusted security certificate, but this can be ignored, and the site is indeed safe.
From the login screen, please enter your username and password (as setup above).
You'll then come to the welcome screen :
2)Select Web print.
3)Then Select - Submit a job
4) Select the print options you'd like for the print job:
(So Black or Colour, A4 or A3, single or double sided).
5) Click the Print Options and Account Selection button. From the Option page, select the number of copies and click Upload Document.
6)Click the Choose File button to select a file :
7) Click Upload and Complete. The system will process your file, and it will then show in the queue, ready for printing :
8) Now you go to either the ASC1 copier in the Walton building, or the MFC1 copier unit in the admin building, swipe your card, and release your print job.
The web interface doesn't offer such a full range of printing options as you would get printing directly to a local printer. For example, you cannot select which pages of a document to print. To workaround this, you can (in MS Word) print just the pages you want to a PDF printer, and upload this to the webprint server. Alternatively, you could just save the pages you want to print in a separate document.
|
OPCFW_CODE
|
Doxygen is a JavaDoc like documentation system for C++, C, Java and IDL.
World's #1 Open Source ERP+CRM+Web Portal
Open Source ERP+CRM+Web Portal for small- and mid-sized business (SMBs). Designed specifically for Manufacturing and Distribution companies who need control over operations and profitability. All critical supply chain functions included in one modular system: accounting, sales, customer and supplier management, inventory control. Mac, Windows, Linux and Mobile. Rich API to connect third-party apps. Simple enough for startup small business accounting, scalable enough for Global 1000 companies. See complete application https://xTuple.com/free-demo.
This SourceForge project page is obsolete. Please visit http://www.musicpd.org/
PHPXref is a developers tool, written in Perl, that will cross reference and extract inline documentation from a collection of PHP scripts. It generates simple HTML output suitable for browsing offline.
This project is the implementation of The Virtual Collaborative Environment (VCE) which supports clinical applications of 3D modeling in medicine. The system is based on three layer client-server architecture.
DownloadDaemon is a comfortable download-manager with many features like one-click-hoster support, etc. It can be remote-controled in several ways (web/gui/console clients), which makes it perfect for file- and root-servers, as well as for local use.
Making the home catalog anime, designed for home noncommercial use
Anime DB - is a program for making home catalog anime, designed for home noncommercial use
dBrowser is a PHP/MySQL/PostgreSQL framework/code generator aimed at web/database application developers.
The goal of this project is to provide a liveCD allowing the user to analyze networks for VoIP installations. This project gives you a global network state.
The projects of ChiNvo Studio
The projects of ChiNvo Studio
CoRE OS-1 is a FreeBSD based OS designed to provide an easy-to-use GUI system for the end-user and provide advanced features that a modern OS should have. By using CoRE and 3rd party GUI tools, CoRE will provide an advanced but easy-to-use desktop OS.
Edit It! Is a file editor, which will give you a new idea of what are writing and developing. The project is divided in two main parts, which are the Online and the Offline editor.
Este projeto almeja montar uma central integrada para atendimento de chamadas de emergência para o 190 na Polícia Militar, com atendimento, despacho e encerramento.
It's application which is trying to keep you mind about such things like exams, homeworks etc. in school. It shows up a pop-up window with description of each duty you have to do in school.
Create grafical user interface for mtx. MTX is tool for control tape and dvd SCSI media changers. Shareware analogs may be by more than 50000 euro.
The GiFaWT project want to publish a clone of the proprietary education-managment-software Telescol©, used in France. It will implement: Mark manager, homework manager, presence reporter, ..., and a better look, and less bug :-)
Iceberg is another MTA based on CORBA like middleware called ICE from zeroc.
ImageCentral is an image database and image processor, in one package. It features integrated RAW conversions through ufraw. A web album frontend is also available in the package.
With inspector, code revision and software inspection will turn into a more easier task, for software development teams.
a project which is going to be the new kozanostra
Marketing-System for Real Time Enterprise RTE (Marketing-System is not CRM).
A flexible and adaptable system to manage the information flow in the daily workprocess of schools all over the world. It is also designed to work hand in hand with virtual learning envrionments.
Quartica wants to provide such a framework for real time communication. It is meant to integrate traditional Instant Messaging and Files Transferring.
Spirits of Runeyard is a new concept of online graphics role playing game. It's a game placed in a fantasy world between swords and magic, where every player can live a second life or simply enjoy to play a new type of gameplay.
The Deimos Project is a computer statistics/monitoring tool allowing you to gather information and statistics about your computers, including network information and bandwidth, uptime information, keyboard and mouse usage, and hardware information.
|
OPCFW_CODE
|
When does using RNGScope make a difference?
In Rcpp documentation, I often find the recommendation to place Rcpp::RNGScope scope; before using random draws within Rcpp. I wondered what exactly this does, because I've only ever seen it described as "ensures RNG state gets set/reset".
Then, I tested a bit, but I can't seem to come up with an example where doing this makes any difference. I used an example from here. My tests were:
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
NumericVector noscope() {
Rcpp::Function rt("rt");
return rt(5, 3);
}
// [[Rcpp::export]]
NumericVector withscope() {
RNGScope scope;
Rcpp::Function rt("rt");
return rt(5, 3);
}
and then
set.seed(45)
noscope() # [1] 0.6438 -0.6082 -1.9710 -0.1402 -0.6482
set.seed(45)
withscope() # [1] 0.6438 -0.6082 -1.9710 -0.1402 -0.6482
set.seed(45)
rt(5, 3) # [1] 0.6438 -0.6082 -1.9710 -0.1402 -0.6482
So, my question is twofold. First, when does RNGScope make a difference, and what exactly does it do different from not using it? Second, does anyone have a code example which shows different results with and without it?
If RNGScope was deprecated in a newer release, then I'm sorry for asking.
When using Rcpp attributes, the automagically generated interface to your code will automatically insert the appropriate construction of the RNGScope object -- so it's already being done for you behind the scenes in this case. For example, if you write sourceCpp(..., verbose = TRUE), you'll see output like this:
Generated extern "C" functions
--------------------------------------------------------
#include <Rcpp.h>
RcppExport SEXP sourceCpp_38808_timesTwo(SEXP xSEXP) {
BEGIN_RCPP
Rcpp::RObject __result;
Rcpp::RNGScope __rngScope;
Rcpp::traits::input_parameter< NumericVector >::type x(xSEXP);
__result = Rcpp::wrap(timesTwo(x));
return __result;
END_RCPP
}
Note the automatic construction of the RNGScope object.
You only need to construct that object manually if you are operating outside of the realm of Rcpp attributes.
What would be an example of "operating outside the realm of Rcpp attributes"?
@Heisenberg If you write the interface boilerplate code yourself. There are also Rcpp modules, and I suspect that this is another example.
This all becomes a little clearer once you read the original documentation in the Writing R Extensions manual, Section 6.3, "Random Numbers".
All that RNGScope scope does is the automagic calls to "get" and "put" in order to keep the state of the RNG sane.
The problem with your test code, as explained by Kevin, is that it already happens for you. So you can only test by going through .Call() by hand, in which case you will for sure leave the RNG in a mess if you use it and do not get/put properly.
Thanks so much, this is very clear. And thanks for pointing to the R manual. So it is another part of the communication between R and C++ that Rcpp makes easier. So, when only using Rcpp with sourceCpp, there is no reason to declare RNGScope scope?
|
STACK_EXCHANGE
|
I’ve already discussed SSL in my previous article. Here I’ll be explaining SSLv3. It was developed by Netscape.
General SSL Architecture
It was designed to secure end-to-end services on the internet. I’ll show that SSL isn’t a single handed protocol. It’s a layer of more than one protocol such as:
- SSL record protocol
- SSL handshake protocol
- SSL change cipher spec protocol
SSL alert protocol
The SSL record protocol provides basic security to other higher level protocols, specially HTTP, which is responsible for server client interaction on the web. The other three protocols are responsible for managing for the whole SSL exchange. There are two basic concepts in SSL. They are:
- SSL connection : In SSL, mechanism connections are a P2P (Peer-to-peer) relationship.
- SSL session : Its client-server association. Most likely created by the SSL handshake protocol.
SSL Record Protocol
It provides two main services for SSL:
- Integrity : Handshake protocol, which produces a shared secret key, used to form the MAC (Message Authentication Code).
- Confidentiality : Handshake protocol produces a shared secret key to encrypt the SSL payload.
The chart below describes the operation of the protocol.
The record protocol takes application data and fragments data into blocks. Then it compresses and adds a MAC to encrypt the data. Finally header information added and transmitted to the TCP segment. Fragmentation application data is fragmented into blocks of 214 bytes or less. Then the compression it applies has to be lossless. It shouldn’t increase the content length more than 1024 bytes. Compression is optional. SSLv3 doesn’t use any compression mechanism. After that, the MAC is computed in which a shared secret key is used. The SSLv3 MAC algorithm is based on HMAC, which you may find in RFC 2104. The latest version of HMAC uses XOR. Then, it’s time for encryption. It may increase the content length more than 1024, but the total length of it should not exceed 214+2048.
The following table shows encryption algorithms and their key sizes.
|Fortezza (Used in Smartcard Encryption)||80|
The final step of the SSL Record protocol is to add header information to the final encrypted content.
Change Cipher Spec Protocol
Change cipher spec protocol uses the SSL record protocol. It helps the party to avoid a pipeline stall. It’s a TLS protocol. It consists of one single message of one byte. It causes a pending state in order to copy it to the current state. It updates the encrypted text to be used in TLS.
The architecture of SSL record:
SSL Alert Protocol
It’s usually used to send and receive SSL related alerts to end to end identities. These alert messages are also compressed and encrypted. The figure below shows the alert protocol structure.
Each message uses two bytes. The level byte is a value warning and the alert byte conveys the severity of the payload. If the level becomes fatal, SSL will directly initiates a connection. If there are more connections in same session, then they may continue. The alert byte contains code which produces a specific alert. Some alerts are listed below and they can be identified by their name.
SSL Handshake Protocol
The most important and complicated part of SSL is the SSL handshake protocol. This protocol allows both ends to connect each other, authenticating each other, negotiating encryption and exchanging packets. It contains a series of messages transferred between a server and client.
The SSL handshake has four main phases.
Phase 1. Establishing Connection and Capabilities For Security
It initiates the logical connection between a client and a server. It also generates security capabilities of which a server and a client will both use.
RN are random numbers which are generated by clients. It consists of a 32 bit timestamp and 28 bytes. First, the client sends a client_hello message and waits for the server to reply. The server replies to the client with a server_hello message. Both messages contain the same parameters such as versions, session IDs, cipher suites, compression methods, and initial random numbers. Then, a random field is generated by the server which is different than the client’s. The cipher suite contains a single cipher which is selected by the server. The compression field contains the compression method selected by the server.
Phase 2. Key Exchange & Server Authentication
The server sends a certificate to a client which needs to be authenticated. It contains one or more chains of X.509 certificates. Then, the server_key_exchange message may be sent if it’s required. It’s necessary if you use anonymous Diffie-Hellman, ephemeral Diffie-Hellman, RSA key exchange or Fortezza.
Phase 3. Key Exchange & Client Authentication
After the server sends a certificate to a client, the client should verify whether it’s valid or not. If everything is satisfactory, then the client sends more than one message back to the server. If there is no suitable certificate, The client sends a no_certificate alert to the server. Then the client sends a client_key_exchange_message.
In this phase a client sends a certificate_verify message to the server. This certificate always has signing capability. It signs a hash code based on the previous message. Verification needs to be done by signing a certificate. If anyone misuses it, then they won’t able to send that message.
Phase 4. Finish Phase
The fourth phase creates a secure connection. The client sends a change_cipher_spec message. Then, the client copies the new cipher_spec into the current cipher_spec. Then, the client sends a finish message. Only the finish message verifies that the key authentication and exchanging of the key have been successful.
In a nutshell, the whole SSL process can be illustrated as shown in the figure below.
In this article I tried to simplify and describe SSL, it’s architecture and how the SSL handshake works. I also mentioned the encryption algorithms that are used in SSLv3. In my next article I will present Transport Layer Security.
|
OPCFW_CODE
|
v2.11.0 ~ Don't You (Forget about me) (pro)
Release date: August 7th, 2019.
Passbolt v2.11 is maintenance release containing security fixes. Extension update will be rolled out automatically to your users like usual, but as an administrator you will need to update your server.
The security issues were discovered by security researcher René Kroka as part of the Bug Bounty program organized in collaboration with YesWeHack. You can find more information about the vulnerabilities found during this audit, on the dedicated incident page. You can also learn more about passbolt security in our recently published Security White Paper.
This release also includes some requested fixes by the community. The autofill functionality is now a bit more robust and will work on more websites, including for example when the login form is located in an iframe (on the same domain than the current page). Feel free to report any issues you encounter with the autofill on websites you use via github issues. Another long awaited fix relates to the passphrase remember me and the auto logout functionalities.
The Multi-factor authentication behavior was also adjusted. Passbolt will now ask users to be verified at every login using one of the provider you selected, instead of such verification being valid for 72h by default. For Yubikey and TOTP you can additional request to be remembered for 30 days on the selected device.
The installation script now also supports the new Debian 10 (stable). Because of this we will soon deprecate support for 7.0 (which was still the default on Debian 9). Make sure you upgrade your web server to use at least 7.2 in the coming weeks.
Next stop: Folders! We know you are all waiting patiently for the release of folders. We encountered some delay with the private launch of Passbolt Cloud, but we are still aiming to release a first version before the end of the summer. In that regards we will soon share some specifications and a survey with the community to gather feedback and help us finalize the specifications and design.
The team wish you happy holidays, if you are lucky enough to take some!
- PB-661: Fix XSS on tag autocomplete
- PB-661: Fix tab nabbing when clicking on “open in a new tab” in password grid
- PB-607: Fix XSS on first name or last name during setup
- PB-587: Add baseline support for multiple openpgp backends
- PB-391: Display the name and email of the user an admin is going to delete in the delete dialog
- PB-396: Display the label of the password a user is going to delete in the delete dialog
- PB-397: Display a relevant feedback in the user details group section if the user is not member of any group
- PB-533: Add a new session check endpoint that does not extend the session expiry
- PB-607: Add option for an administrator to configure CSP using environment variable
- PB-242: Improve the passwords grid (passwords fetch peformance, search reactivity, selectbox area enlarged)
- PB-572: Fix MFA verification should happen after every login
- PB-572: Fix MFA when running in subdirectory
- PB-349: Fix health check fails if using custom GNUPGHOME env set by application
- PB-330: Fix migration issue from CE to PRO in v2.10
- PB-567: Fix appjs auto logout
- PB-601: Fix some incomplete unit tests
- PB-427: Fix email sender shell task and organization settings table unnecessary coupling
- PB-349: Fix OpenPGP results health checks
- PB-505: Upgrade cake 3.8
- PB-472: Cleanup test dependencies
- PB-242: Add local storage resources capabilities to manipulate the resources (add, delete, update)
- GITHUB-79: Improve autofill compatibility, trigger an input event instead a change event while filling forms
- PB-278: #GITHUB-61: Improve autofill compatibility, support Docker and AWS forms
- PB-432: Improve autofill compatibility, support reddit.com
- PB-433: Improve autofill compatibility, support Zoho CRM
- GITHUB-78: Improve autofill compatibility, fill only username if no password fill present
- PB-494: Improve autofill compatibility, ignore hidden fields
- PB-514: Improve autofill compatibility, fill iframe forms
- PB-609: Update library used for CSV export
- PB-544: Fix login passphrase remember me and quickaccess
- PB-533: Fix session expired management
- PB-515: Autofill should not fill if the url in the tab have changed between the time the user clicked on the button to fill and the data is sent to the page.
- PB-503: Fix math.random() when generating first security token/color
"Think of the tender things that we were working on."Listen to the release song!
|
OPCFW_CODE
|
Application type mismatch complains about convertible types
Prerequisites
[x] Put an X between the brackets on this line if you have done all of the following:
Checked that your issue isn't already filed.
Reduced the issue to a self-contained, reproducible test case.
Description
structure CatIsh where
Obj : Type o
Hom : Obj → Obj → Type m
infixr:75 " ~> " => (CatIsh.Hom _)
structure FunctorIsh (C D : CatIsh) where
onObj : C.Obj → D.Obj
onHom : ∀ {s d : C.Obj}, (s ~> d) → (onObj s ~> onObj d)
def Catish : CatIsh :=
{
Obj := CatIsh
Hom := FunctorIsh
}
universes m o
unif_hint (mvar : CatIsh) where
Catish.{m,o} =?= mvar |- mvar.Obj =?= CatIsh.{m,o}
structure CtxSyntaxLayerParamsObj where
Ct : CatIsh
def CtxSyntaxLayerParams : CatIsh :=
{
Obj := CtxSyntaxLayerParamsObj
Hom := sorry
}
def CtxSyntaxLayerTy := CtxSyntaxLayerParams ~> Catish
/-
23:5: application type mismatch
CatIsh.Hom Catish CtxSyntaxLayerParams Catish
argument has type
CatIsh
but function has type
Catish.Obj → Type (max (max u_1 u_2) (u_3 + 1) (u_4 + 1))
-/
However, Catish.Obj and CatIsh are convertible. Moreover, if I replace Ct : CatIsh in structure CtxSyntaxLayerParamsObj with Ct : Type _, then the definition passes.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.2 LTS
Release: 20.04
Codename: focal
$ cmd.exe /C ver
'\\wsl$\Ubuntu\home\jgross\Documents\GitHub\syntax'
CMD.EXE was started with the above path as the current directory.
UNC paths are not supported. Defaulting to Windows directory.
Microsoft Windows [Version 10.0.19041.804]
After the bug fix https://github.com/leanprover/lean4/commit/847f95021aea1794f3569723bda46c13c3eb0ff1
The error message is
error: stuck at solving universe constraints
max (?u.959+1) (?u.958+1) =?= max (?u.965+1) (?u.964+1)
The situation is similar to issue #342. I don't see a solution unless we approximate and solve the constraint using
?u.959+1 =?= ?u.965+1
?u.958+1 =?= ?u.964+1
I think the error should say where this constraint came from, something like
23:5: application type mismatch
CatIsh.Hom Catish CtxSyntaxLayerParams Catish
first argument
Catish
has type
CatIsh.{whatever universes}
but is expected to have type
Catish.Obj{whatever universes}
stuck at solving universe constraints
max (?u.959+1) (?u.958+1) =?= max (?u.965+1) (?u.964+1)
or something like that. Once the error message is sufficiently informative, I'm happy for this to be closed and leave #342 to be about what to do with universes
|
GITHUB_ARCHIVE
|
Incrediblenovel The Bloodline System txt – Chapter 331 – Angy Vs Endric wheel unarmed propose-p1
Novel–The Bloodline System–The Bloodline System
Chapter 331 – Angy Vs Endric adaptable hate
She slammed within the wall structure on the other end and experienced it.
“Hmph,” Endric converted around and to encounter Gustav’s apartment’s door once more.
For reasons unknown, just about everywhere transformed silent since the seems of those footsteps echoed around the passageway.
the moral economy
Angy established her eyeballs as she experienced her painful system slipping over the oxygen. She discovered she was currently in the middle the construction that encased their property and also the constructing opposing them.
For some reason, everywhere switched muted when the appears of the footsteps echoed around the passageway.
the greatest show on earth the evidence for evolution summary
“Now, does other people wish to interfere!” Endric shouted out while strolling across the cracks on the pathway in front of Gustav’s condominium.
“Hnnngggh!” Angy moaned in suffering as increasing numbers of blood flow trickled from her sleeves and nose.
They stored banging the shield, but it really proved to be unbreakable.
The Bloodline System
The nearby neighbors were definitely astonished as they quite simply experienced the landscape of Angy becoming easily subjugated.
The Bloodline System
He inched onward and was about to strike it when…
She observed themselves becoming raised off the floor in the next second.
She slammed into your wall structure on the other end and experienced it.
“Hmph,” Endric made around and also to confront Gustav’s apartment’s front door again.
The Stories Mother Nature Told Her Children
Endric said since he slapped each hands together with each other.
Her brow compressed together for a third horn developed from it.
The Hills of Refuge
The noise of bone tissues cracking reverberated along the location as Angy’s body system was forwarded hovering backward with immense rate.
Angy’s human body slammed within the wall membrane on the opposite side and shattered by means of it.
Each concealed the wall surfaces collided together with the nearby neighbors as they ended up forced backward.
“Now keep prior to making me do worse yet,” Angy reported as she cleaned the bloodstream in her facial area and stared at Endric in front, whoever human body just stumbled on a stop.
She spun her body around in the middle of-surroundings before getting on the ground while slamming her right hand to the ground.
Anyone voiced out since they stared at the almost six-ft .-large kid with dirty blonde head of hair.
For reasons unknown, anywhere converted private being the appears of such footsteps echoed throughout the passageway.
|
OPCFW_CODE
|
Loosen the handbrake in mecanical disk-brake
One of the two handbrakes is too tight and I can't find the barrel adjusters (as in this video: Adjust bike brakes, although his bike has V-brakes not mechanical Disk-brakes like me)?
Should I adjust the cable brake on the caliper (see picture below) to make the hand-brake less tight (Disk brake adjustment)?
It would be helpful for a picture down near the brake caliper. That cable you show is for the shifter and the cable for the brake runs under the bar tape. There is probably a barrel adjuster down near the brake caliper as shown in this picture
Hybrid/mountain type bikes like the one in the video usually have a barrel adjuster at the brake lever, while road bikes almost always have it at the caliper itself (as Kibbee says).
The inline adjuster is for the gearing, not the brake.
What do you mean by "handbrake"? Almost all brakes on bicycles are hand-operated.
I think you guys are right (just added a picture of the caliper). I see something like an adjuster in the caliper. Can anybody post an answer so I can accept it?
When you say loosen do you mean the brake is rubbing? If so also check the wheel is seated properly. If it's too hard to reach so you want to take up the slack for a comfortable riding position there may be some adjustment in the lever.
@ChrisH No the brake lever is too rigid.
Yes, in your picture of the caliper you can see the barrel adjuster at the left side. Usually when mechanics set up a cable brake, they’ll allow about one rotation of space for loosening, and the rest will be available to tighten the brake as the cable stretches and the pads wear. So, it’s possible that you may not have enough adjustment at the barrel to make it as loose as you want. If this is the case, you could loosen the allen bolt clamping the cable and let out a bit of slack there.
You could also adjust the position of the static pad, which is the one nearest the wheel. This is usually done with an allen key from the wheel side, turning it clockwise to move the pad inwards and anticlockwise to move it outwards. You don’t want to back it out too far, though, as this might cause the rotor to start rubbing the caliper body when you apply the brakes. You should be able to observe this easily enough.
So, I would start with the barrel adjuster, then if you don’t get enough slack from that, let a bit of cable through the pinch bolt.
|
STACK_EXCHANGE
|
[Note from Pinal]: This is a 22nd episode of Notes from the Field series. Security is very important and we all realize that. However, when it is about implementing the security, we all are not sure what is the right path to take. If we do not have enough knowledge, we can damage ourself only. DB Data Roles are very similar concept, when implemented poorly it can compromise your server security.
In this episode of the Notes from the Field series database expert Brian Kelley explains a very crucial issue DBAs and Developer faces on their production server. Read the experience of Brian in his own words.
I am prejudiced against two fixed database roles: db_datareader and db_datawriter. When I give presentations or talk to customers, some are surprised by my stance. I have two good reasons to recommend against these two roles (and their counterparts, db_denydatareader and db_denydatawriter).
A Violation of the Principle of Least Privilege
The first reason is they violate the Principle of Least Privilege. If you’re not familiar with this security principle, it’s really simple: give permissions to do the job – no more and no less. The db_datareader and db_datawriter roles give access to all tables and views in a given database. Most of the time, this is more access than what is needed. This is a violation of the Principle of Least Privilege.
There are some cases where a user needs such access, but there is always the possibility that a new table or view will be added which the user should not have access to. This creates a dilemma: do I create new roles and remove the user from db_datareader or db_datawriter or do I start using DENY permissions? The first involves additional work.The second means the security model is more complex. Neither is a good solution.
Failing the 3 AM Test
The second reason is the use of these roles violates what I call the “3 AM test.” The 3 AM test comes from being on call. When I am awakened at 3 AM because of a production problem, is this going to cause me unnecessary problems? If the answer is yes, the solution fails the test. I classify db_datareader and db_datawriter role usage as failing this test. Here’s why: the permissions granted are implicit. As a result, when I’m still trying to wake up I may miss that a particular account has permissions and is able to perform an operation that caused the problem. I’ve been burned by it in production before. That’s why it fails my test.
To see why this is an issue, create a user without a login in a sample database. Make it a member of the db_datareader role. Then create a role and give it explicit rights to a table in the database. This script does so in the AdventureWorks2012 database:
USE AdventureWorks2012; GO CREATE USER TestDBRoleUser WITHOUT LOGIN; GO EXEC sp_addrolemember @membername = 'TestDBRoleUser', @rolename = 'db_datareader'; GO CREATE ROLE ExplicitPermissions; GO GRANT SELECT ON HumanResources.Employee TO ExplicitPermissions; GO
Pick any table or view at random and check the permissions on it. I’m using HumanResources.Employee:
We see the permissions for the role with explicit permissions. We don’t, however, see the user who is a member of db_datareader. When first troubleshooting it’s easy to make the assumption that the user doesn’t have permissions. This assumption means time is wasted trying to figure out how the user was able to cause the production problem. Only later, when someone things to check db_datareader, will the root cause be spotted. This is why I say these roles fail the 3 AM test.
Reference: Pinal Dave (https://blog.sqlauthority.com)
|
OPCFW_CODE
|
Nameclash with other xdg library
This library installs into the same directory as pyxdg (https://pypi.org/project/pyxdg/ (and https://gitlab.freedesktop.org/xdg/pyxdg/).
There are many situations, when this can become a problem. Some are:
One package requests pyxdg and the other xdg. If installed with pip, the later overwrites the xdg/__init__.py file.
Having installed both and uninstalling one of it, the xdg/__init__.py file is removed (making other packages throwing an error, e.g. https://github.com/spyder-ide/spyder/issues/16448)
It would be great, if either:
both packages can cohabit in the same installation, or at least
a warning is issued, that those packages are incompatible
A change of installation directory was declined (https://github.com/srstevenson/xdg/issues/35), suggesting that pyxdg should install into the pyxdg directory. (See also https://github.com/srstevenson/xdg/issues/10.)
There is also the suggestion of https://gitlab.freedesktop.org/xdg/pyxdg/-/issues/19.
Unfortunately we (as users) are in the situation, where one package (the first kid on the block) installs into a different directory than its name suggests, which probably led the xdg to use the same directory (and claiming, that the other should rename the directory). Would it be possible that you and the maintainer of pyxdg put your heads together to find a solution? Thanks, that would be really, really great!
PS: The corresponding bug report for xdg is here: https://gitlab.freedesktop.org/xdg/pyxdg/-/issues/24
I've been thinking again about how to resolve this, which is obviously a pain for the users and maintainers of both packages. I met @takluyver, the pyxdg maintainer, in person at a previous PyCon UK, and have no doubt we'd both be willing to implement a solution that fixes the namespace collision whilst avoiding breakage for users of both packages, if such a solution can be found.
Unfortunately, the impasse exists because no such solution has yet been identified. Approaches that have been suggested all break backward compatibility for existing xdg or pyxdg users in some way:
Approach: pyxdg changes namespace to pyxdg
✔️ pyxdg no longer collides with xdg in the xdg namespace
❌ pyxdg users' code is broken if previous pyxdg releases are yanked from PyPI
❌ if previous pyxdg releases are not yanked from PyPI, the namespace collision remains
Approach: xdg changes namespace to xdg_base_directory or similar
✔️ xdg no longer collides with pyxdg in the xdg namespace
❌ xdg users' code is broken if previous xdg releases are yanked from PyPI
❌ if previous xdg releases are not yanked from PyPI, the namespace collision remains
Approach: merge xdg and pyxdg into a single package whose API is the union of the two original packages in the xdg namespace. pyxdg becomes an empty package that just depends on xdg, or xdg becomes an empty package that just depends on pyxdg.
✔️ pyxdg and xdg no longer collide in the xdg namespace
✔️ users of both packages have a update path that maintains backward compatibility (except for Python 2.7 support, see below)
❌ pyxdg's code is LGPL licensed, which means some previous users of xdg would not be able to use the unified package (reference)
❌ Python 2.7 support is lost (which pyxdg currently has), as xdg's public API uses Python 3 features such as the pathlib module. While there is a backport of pathlib to earlier Python versions, this has dropped support for Python 2.7.
Is there a solution that's not been discussed so far, that'll let us move past the impasse without breaking application code for users of either package?
There is another aspect that hasn't been mentioned here yet: pyxdg is packaged for many Linux distributions. Contrary to PyPI packages Linux packages generally cannot simply be yanked.
Fun sidenote: if you have python3-xdg installed on Debian then pip install xdg fails:
$ pip install xdg
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: xdg in /usr/lib/python3/dist-packages (5)
(which is an incredibly confusing error message ... I just opened pypa/pip/issues/11695 about that)
Is there a solution that's not been discussed so far, that'll let us move past the impasse without breaking application code for users of either package?
Yes: change the name of your package without yanking prior versions and bump the major version of the package.
No because as Scott mentioned this library is licensed under ISC whereas pyxdg is licensed under the LGPL.
Apparently Debian already did something like that the reason for the PIP output I mentioned earlier isn't a bug in pip but the fact that the Debian maintainers have merged both libraries into one Debian package.
What I meant is not to use the existing code of [py]xdg, but to recreate that API with the help of xdg[_base_directory]. As I understand it, the API is not licensed, it is the implementation which falls under the LGPL. But I might be wrong here …
Ah yes a clean room implementation would probably be legal ... but I don't really think anybody cares enough about this to do it ... besides this wouldn't really work cleanly when the pyxdg API is updated.
I think it's unlikely that pyxdg will get any significant API changes - it's barely changed for many years. However, it's also a much bigger API surface than this package. This one covers only the XDG base directory spec, whereas PyXDG tries to cover several different XDG specs (using fewer, bigger packages was more attractive back when Python packaging tools were worse).
So if someone really wanted to, grafting the API of the xdg package onto PyXDG is probably simpler - there's a lot less to bring over, and no need for a clean-room implementation, since the ISC licensed code can be used in an LGPL work. But the xdg package would also have to change to solve the name clash, and it's not a priority for me.
So if someone really wanted to, grafting the API of the xdg package onto PyXDG is probably simpler
There is already a patch in Debian to bring xdg[_base_directory]'s API to [py]xdg: https://salsa.debian.org/python-team/packages/pyxdg/-/commit/2e61e056826ea13e6a1a49202a33213f6ee38219 (@not-my-profile's bug reports unearthed that patch).
But there is still the licence issue: https://github.com/srstevenson/xdg/issues/75#issuecomment-1370747992 and https://github.com/srstevenson/xdg/issues/83#issuecomment-1039311015, hence some people will not like this approach. To satisfy these users [py]xdg's API has to be reimplemented in xdg (this package).
This approach would need:
Re-implement all xdg specifications in xdg (this package).
Write a wrapper for [py]xdg's API.
@srstevenson: Would you be happy to take the burden of maintaining such a package? I myself could imagine to do some PR …
Yes, I'd be willing to maintain a merged package containing the union of the two APIs. If the pyxdg API is clean-room implemented in xdg then it looks like we have a resolution that works, including for users who can't use LGPL licensed code. Current pyxdg users would lose Python 2.7 support, but given Python 2.7 has been EOL for three years that seems like a reasonable compromise.
A clean-room implementation is likely to be a lot of effort for relatively little gain - I suspect large parts of PyXDG don't have many users, and I think relatively few people really can't use LGPL code. I'm not saying they don't exist or should be ignored, just that I don't think there's all that many of them. PySide is LGPL licensed, for instance, and a big reason people use that is to avoid licensing headaches with PyQt.
Of course I'm not going to stop anyone if that's the route you want to go down, but I might suggest an alternative which is less work: release this package with a different import name (e.g. xdg_base_dirs) for the people who can't accept LGPL, and combine the two xdg APIs in PyXDG (i.e. LGPL licensed) to avoid the name clash, with no clean-room requirement.
|
GITHUB_ARCHIVE
|
Pivoting is basically a fairly specific thing. The attacker needs to expand his network presence using various methods and tools. However, in my article I will demonstrate a experimental method of pivoting against Windows using Hamachi.
Caster - Graveyard
Release Date: 30 March 2024
Hamachi - is a tool for building VPN networks. Roughly speaking Hamachi creates a LAN on top of the Internet, this tool became famous because it was used to solve the problem of connecting multiple players in different games. You probably had to work with Hamachi as a kid to play with a friend together.
However, I found out that Hamachi in a special mode of operation can be used as a pivoting tool, namely it is possible to create an L2 tunnel to a compromised host on Windows if you deploy Hamachi on it.
I don't think this article will find practical application. It's an experiment
The article is of an introductory nature and is intended for security professionals conducting testing under contract. The author and editorial staff are not liable for any harm caused by the use of the information presented. The distribution of malware, disruption of systems, and violation of correspondence secrecy will be prosecuted.
- Administrator privileges are required, you need them to install Hamachi and configure bridges
- This scenario in this paper is that the attacker is from the Internet, and inside the compromised infrastructure, namely the host has the Internet
- We'll have to build bridges. And this is available only through the Windows GUI. And the address of the machine may change.
- In the context of my scenario, access to the Windows GUI is required because it is the only way to create the bridge.
I came up with a scenario where the attacker is on the Internet and is going to get exactly L2 access to the network where the host he compromised is located. Again, this is a post-exploitation process. The goal is to have the compromised host connected to the Hamachi network in Server mode, and the attacker will connect to that network as a client, get an address from the internal network, and be able to perform link layer attacks.
To work with Hamachi, you need to create an account. This will be needed to access the control panel and configure the logical infrastructure. This is a fairly easy process, I will not describe how to do it. Eventually you should receive an email like this and register on LogMeIn. Don't worry, it's completely free. But you can only fit 5 Hamachi clients in the free version, but that's enough for us.
You should get an email like this when you sign up
Creating a virtual network
After creating an account, you need to configure Hamachi. There are three options here: Mesh, Hub-and-Spoke and Server. The Server setting - allows you to turn a virtual network member into a gateway to reach the target network at the link layer. It is as if you are physically connected to the same network as the Server. This is the option we need, so select it.
By the way, the attacker will manage the network configuration from his side, everything is done over the Internet.
To connect to the Hamachi network you need a password, you need to create it.
Now you need to select the network member to be the server. Let's skip this step for now.
We now have a logical network called
nightmachi, its ID is 486-834-102. This ID will be used to connect to this network.
Now we need to install hamachi in silent mode and connect to this logical network. Again, this is an experimental method.
C:\> msiexec /i hamachi.msi STARTUI=0 /qn
Now you need to go to the directory where Hamachi was installed and run
hamachi-2.exe, we will need it to control the whole process from the console
C:\Program Files (x86)\LogMeIn Hamachi\x64> hamachi-2.exe --cli
version : 220.127.116.11
pid : 1744
status : offline
client id :
Now you have to log in. And then link your account that you previously registered. The network configuration I described earlier is bound to this account.
C:\Program Files (x86)\LogMeIn Hamachi\x64>hamachi-2.exe --cli login
C:\Program Files (x86)\LogMeIn Hamachi\x64>hamachi-2.exe --cli attach **********@gmail.com
Sending attach request to **********@gmail.com without networks .. ok
After specifying the mail, a request for binding will arrive in the account - it must be accepted for further work
After confirming this request from the compromised host, you need to connect to the logical network by ID
C:\Program Files (x86)\LogMeIn Hamachi\x64>hamachi-2.exe --cli join 486-834-102
Joining 486-834-102 .. ok
Now the connected host must be declared as the server through which the attacker will go into the infrastructure. A very important point.
After that, you need to create a bridge and put the physical interface and the logical interface of Hamachi there. Unfortunately you can only create bridges in Windows using the "Network Control Center", i.e. using the GUI. This makes the attacker's task more difficult - he needs a GUI. By the way, this problem is solved in Windows 11, as bridges can be created there using the
netsh bridge command.
A default bridge is created by default, it can be deleted
Be warned, the address of the machine may change after the bridge is created
This ends the configuration on the compromised host.
Now the attacker's task is to install a Hamachi client, connect to the network knowing the identifier, then after connection will automatically create a virtual interface, which will automatically receive an address from the internal network of the compromised host.
caster@phi:~$ wget https://vpn.net/installers/logmein-hamachi_18.104.22.168-1_amd64.deb
caster@phi:~$ sudo dpkg -i logmein-hamachi_22.214.171.124-1_amd64.deb
caster@phi:~$ sudo hamachi login
caster@phi:~$ sudo hamachi join 486-834-102
After a short time it will automatically receive the address from the target network, in my case it is
As proof of the workability of this post-exploitation method, I will run Responder (LLMNR/NBT-NS Poisoning)
As you can see, I was able to capture the NetNTLMv2-SSP hash of user
- There are slight delays
- Scanning too fast can disrupt tunnel operation
- ARP scanning does not work
- mitm6 does not work
When you need to wipe the environments and there is no more need for L2 tunneling, you need to uninstall Hamachi from the compromised host. Also in silent mode.
С:\> msiexec /x hamachi.msi /qn
As part of this article, I demonstrated a way to perform L2 tunneling against Windows using Hamachi. This is too specific and experimental vector, it came into my head quite by accident.
However, I think Hamachi tunnels can be detected on the network. If only because of the DNS queries that Hamachi generates.
This tunneling method is purely experimental; questions remain about its reliability.
Stay updated and engage with us on security discussions by joining our Telegram channel: @exploitorg
|
OPCFW_CODE
|
After some testing on several pluto releases I feel confident to post here some notes that should allow to run properly a pluto hybrid (and also a MD) on a VIA EPIA box with Nehemiah CPU.
These are in the end some minor hacks, just aimed to have the minimum difference from a standard installation but still having a system well performing.
Some tech data on my hardware:
VIA EPIA M10000, CPU Nehemiah 1Ghz, 512 Mb Ram
Please note that this procedure will likely NOT work on Eden CPU, because of a well known "CMOV bug" that prevent newer i686 kernels to be executed. The possible solution may be to make some further hacks that honestly are out of my reach for now...
I assume to start from a fresh standard Pluto install, and to subsequently apply the needed changes.
The main problem is related to xserver and video playback. If you try to play a divx on a standard install you will hang the system. This is because a patched xfree server for unichrome chipset is needed.
To use the proper one do the following:
- add the following lines to /etc/apt/sources.list
deb http://www.physik.fu-berlin.de/~glaweh/debian/ unichrome/
deb-src http://www.physik.fu-berlin.de/~glaweh/debian/ unichrome/
- create the file /etc/apt/preferences and put the following lines
Pin: release o=Eartoaster
- then issue the following commands:
apt-get install xserver-xfree86
apt-get install libxvmc1
Now you can play video files without hanging the system, nevertheless you may experience some frame drops and/or a/v sync problems due to xine version shipped with pluto (1.0.3).
To avoid this you have to dowload xine 1.1.0, compile it and install it on top of xine 1.0.3. Be aware that with pluto .31 doing this you will likely break some VDR extension, but since I'm not using VDR I cannot exactly say what.
I don't know whether kernel sources/headers are needed also to compile xine, I needed anyway to compile some extra modules so here are the notes
apt-get install kernel-source-188.8.131.52-vanilla-pluto-1-686
apt-get install kernel-headers-184.108.40.206-vanilla-pluto-1-686
apt-get install kernel-kbuild-2.6-3
tar -xjvf kernel-source-220.127.116.11-vanilla-pluto-1-686.tar.bz2
ln -s /usr/src/kernel-source-18.104.22.168-vanilla-pluto-1-686 /lib/modules/22.214.171.124-vanilla-pluto-1-686/build
ln -s /usr/src/kernel-source-126.96.36.199-vanilla-pluto-1-686 /lib/modules/188.8.131.52-vanilla-pluto-1-686/source
You do not really need to compile kernel, just start it and stop it after a while (Rob, this is what i missed to say in my previous notes for pwc recompilation ...)
Afterwards you are ready to compile and install xine using this procedure:
This is because by default xine is installed in /usr/local, so in the end you would have 2 installation of xine and pluto will use always the wrong one.
By modifying the target dir you will overwrite xine 1.0.3 and pluto will use the new one.
To have smooth divx playback you may want to edit /etc/pluto/xine.conf and set codec.ffmpeg_pp_quality to 0.
This reduces the CPU overhead due to mpeg4 postprocessing, so you have a very fluent playback and I didn't notice relevant image quality worsening.
After these steps you better reload router or if you like, reboot the box.
If some of you have MD based on this hardware (and I know that some of you actually have ...) I think that these notes may be a starting point to produce some similar steps in order to update the MD images to be sent via PXE boot.
If someone happens to do that, please post here the results
Hope that helps
|
OPCFW_CODE
|
Deevynovel My Girlfriend From Turquoise Pond Requests My Help After My Millennium Seclusion txt – Chapter 169 – Do You Like Jiang Lan? tiny discreet propose-p3
the lieutenant of inishmore
The Outdoor Chums at Cabin Point
Novel–My Girlfriend From Turquoise Pond Requests My Help After My Millennium Seclusion–My Girlfriend From Turquoise Pond Requests My Help After My Millennium Seclusion
Chapter 169 – Do You Like Jiang Lan? bitter tub
Since he had a excellent divine ability, he definitely got more pros.
These folks were also area of the Good Dao.
Ao Longyu viewed Lin Siya, not understanding what her Junior Sibling had comprehended.
Ao Longyu glanced at Lin Siya without indicating nearly anything.
Though she found it incredible, she was still fascinated.
A single Leaf Vision could conceal all matters from the heavens previously mentioned plus the dwelling items we know.
He just didn’t know where it lied.
“I don’t despise him,” claimed Ao Longyu specifically.
Qingyi Water Floral.
“Let’s customize the issue. Does Elderly Sister like Mature Zhou Xu from the Primary Summit?” Lin Siya expected.
Within the last three months, he were circulating his electricity to recuperate from his injury.
To be able to discover A single Leaf Shrouding The Sky, one particular needed to disguise coming from the heavens very first.
the age of revolution quizlet
Thankfully, the end results were good.
He has been somewhat impulsive.
It may possibly take care of the karma in the Excellent Dao and the the ears with the sages.
Only a disciple in the Ninth Summit most likely are not regarded as unproductive, but while watching G.o.ddess, he was nothing at all.
In a nutshell, there had been lots of different notions about the associations.h.i.+p between two.
“Senior Sister, remember that after I gave you the documenting Dharma value, I mentioned about Excel at supplying a lecture designed for those at the Heart and soul Soul Kingdom. Managed Elderly Sister watch that piece?”
If Senior citizen Sister did not observe was inside, then who would it be?
“Senior Sibling is holding out to love the Junior Buddy in the Ninth Summit.” Lin Siya hesitated well before declaring.
“I don’t bear in mind it.”
She didn’t say everything a great deal and had just shared with her reality.
the life of st declan of ardmore
Consequently, it absolutely was difficult for Older person Sibling to prepare these items on their behalf.
“I don’t know who he is.”
“I don’t despise him,” said Ao Longyu specifically.
She experienced read a lot regarding it.
The engagement had pass on like wildfire before.
It had been like nothing at all got occurred.
“I have grasped it.”
If Older Sibling failed to observe was on the inside, then who will it be?
the sexual activity questionnaire
Now, his personal injuries acquired stabilized.
Right after contemplating this, Jiang Lan thought to make.
|
OPCFW_CODE
|
I am writing this post while sitting at the Cloudstack booth at the Southeast LinuxFest. Immediately prior to SELF our demo hardware (an Apple iMac) was at a show in New York, and the Fedora 14 LiveCD I normally run to demo CloudStack’s UI appeared to be missing. The contributors manning the Fedora booth at at SELF had arranged to get the brand new, hot off the press, Fedora 15 media delivered just in time for the expo floor to open on Saturday morning, and were able to deliver a Bi-arch (x86/x86_64) Multi-desktop (Gnome, KDE, LXDE, XFCE) Live DVD for me to boot the machine. This caught me somewhat by surprise as I knew that Cristoph’s original vision was being enhanced and transitioned into Python. Spot, stopped by to check on progress (did it properly detect the architecture, did it boot properly, etc.) and told me that he finished the python script to generate the DVD just in time. A number of Fedora contributors including the inimitable Paul Frields gathered around the iMac to witness the awesomeness that is the Fedora 15 Multi Desktop Live DVD.
Kudos to Christoph Wickert and others who came up with and worked on the initial vision for this and got something working, and Tom Callaway and the other RelEng guys who refined and brought it to fruition. It’s awesome, and it does wonders for showing off some of the diversity that is important to Fedora.
Wanted to make you aware of a situation that was discovered last night and responded to this morning.
A number of people caught an issue in the eligibility configuration for voters in the Board Election. Per the Board’s Succession Planning document this election is open to any person in the Fedora Accounts System who has completed the CLA. I mistakenly configured this election for CLA+1 which means that in addition to completing the CLA a person would have to be a member of an additional group. Members of the infrastructure team noted this and created a ticket in the Board’s trac system late last night (no link as it’s not visible to non-board members). That mis-configuration has now been corrected, and to ensure that we have not unduly disenfranchised any potential voters, the Board election period only has been extended by 24 hours from 08 June 23:59:59 to 09 June 23:59:59. Please note that the election period for the FESCo election remains unchanged.
I apologize for mis-configuring the election app, and thank the vigilant members of our community and infrastructure team who caught the issue so early in the process and helped to correct the problem in a timely manner.
In just a few hours the Fedora elections for FESCo and the Board will open and you be able to vote
There’s been a good turn out of nominees, and some lively town halls. I encourage you to go vote for those you feel should be responsible for the future of Fedora.
Elections open at 00:00:01 UTC on June 2nd 2011, and will close at 23:59:59 UTC on June 8th 2011.
Many people often claim that they don’t know the candidates well enough to make an informed decision, hopefully the following links will give you that information:
F16 Election Questionnaire
Fedora Board Town Hall Meeting Logs
FESCo Town Hall Meeting Logs
Just a quick reminder for planet readers, that nominations are currently open for the Board and FESCo, the nomination period closes on the 15th, so nominate early and often. Please consider your involvement in Fedora and whether you wish to take on the responsibilities of one of these positions.
For more details see the announcement here:
I just realized that I’ll be travelling a lot in the next few weeks.
First up is POSScon, which is March 23-25. They’ve been conned into letting me speak again this year (and I am speaking on Thursday afternoon for those of you who wish to avoid it 🙂 ). It should be a good time, if you are in the Carolinas or Georgia, it’s relatively easy to get to and you should check it out.
I have to leave POSScon essentially after I get done speaking on Thursday (and miss the last day of that conference) to head to Indianapolis for the the Indiana Linux Fest the 25-27th. On the ‘pre-conference’ day there’s Build An Open Source Cloud Day which will be featuring, I *think*, Puppet Labs, CloudStack, OpenStack, DynDNS and Arista Networks (they have this really cool Linux-managed network switch, and the switch runs Fedora incidentally, which delights me to no end, I am desperately trying to figure out a justification for replacing my Cisco 2948G with one of their 10Ge switches so I can put a ‘powered by Fedora’ sticker on it). I likewise have conned the ILF organizers into letting me speak on Saturday, and am really excited about this first year conference that seems to be very well organized.
I’ll then fly home to do laundry and repack and it’ll be time to head out to Austin for the Texas Linux Fest where I get to speak again. My wife, who grew up in Texas, has been looking forward to this event all year (and was upset she didn’t get to attend last year) so she can have beef brisket and TexMex. There will be Build An Open Source Cloud Day there as well, with Gluster joining the lineup from Indiana.
My blog has gone silent of late and I figured with the pi day celebrations now behind us, and the Beefy Miracle wiener roast still several weeks off that I’d give an update.
Around a month ago I accepted a job as the Community Manager for Cloud.com‘s open source project CloudStack, which is an Infrastructure-as-a-Service cloud management application. Since them I’ve been working to try and ensure that we are working towards doing things the open source way. A lot of that has been setting up scaffolding like IRC channels (#cloudstack on irc.freenode.net) and mailing lists. Other things have been dabbling in documentation, QA, and release engineering-related tasks for an upcoming release. (Did I mention a lot of hats were included with this job) Things are a bit nascent at the moment, and there’s still a long way to go, but keep watching to see things continue to improve.
So today is the first release I’ve been present for, and I am excited about it. Multi-hypervisor, high-availability, multi-tenant compute cloud that’s GPLv3. CloudStack is actually a relatively mature project from a software standpoint, it’s been under development for 3 years, though only recently open sourced. It’s also appears to be widely adopted, including some recent adoptions by Tata Communications and Logicworks.
|
OPCFW_CODE
|
Size issue of bubble chat on iPad
I use JSQMessagesViewController for my messaging app and recently I made the app universal but I saw an issue with the size of message bubble that not looks great on the iPad but come back to normal If I put the table in landscape mode.
Take a look on the prints and let me know what I should check ? May something need to be adjusted ?
Print 1: https://i.stack.imgur.com/M4sul.png
Print 2 (after turn the table in landscape mode and back to portrait): https://i.stack.imgur.com/qb7x6.png
Similar issue here.
I use split views for iPad and in the portrait mode, message bubbles are truncated, everything's ok in the landscape mode, though.
Portrait mode - https://snag.gy/NlPAdw.jpg
Landscape mode - https://snag.gy/21pHDS.jpg
Hi again,
No one can give us a clue about what can be wrong here ?
I'm having a similar issue. Mine comes up OK when the JSQMessagesViewController is initially loaded in landscape. When I rotate to portrait, the content area appears to stay the right size for landscape (a timestamp label that is "centered" in the VC's frame is actually off-center on the screen). When I rotate to landscape, the content area appears to shrink to a width that would be correct for portrait. Behavior seems to be the same when starting in portrait.
Xcode is also giving me a warning message in the console:
The behavior of the UICollectionViewFlowLayout is not defined because:
the item width must be less than the width of the UICollectionView minus the section insets left and right values, minus the content insets left and right values.
Please check the values returned by the delegate.
The relevant UICollectionViewFlowLayout instance is ,
and it is attached to:
• JSQMessagesCollectionView:
frame = (0 0; 1024 1252)
animations = { bounds=<CABasicAnimation: 0x1c502d8e0>; position=<CABasicAnimation: 0x1c502f0a0>; };
layer = <CALayer: 0x1c10396e0>;
contentOffset: {0, 0};
contentSize: {1366, 224};
adjustedContentInset: {0, 0, 44, 0}>
My setup is:
• iPad Pro 12.9" (Gen. 2)
• iOS 11 Beta
• Xcode 8.3.2 (8E2002)
The same setup, except swapping in an iPhone 7 Plus running iOS 11 Beta seems to work fine.
@mbm29414 Good Luck to get an answer.
Same issue for me. I'm 99% sure this was working and only broke recently. If I rotate the device it fixes. I've tried a few things including invalidating the collection view layout but it doesn't seem to help. I'm using the latest non-beta Xcode and iOS releases.
This seems like a similar issue to a previously fixed bug. Maybe it's resurfaced with an iOS update?
Solution:
I looked through some old pull requests and came across a closed one (not accepted) which will work as a temporary solution. As it wasn't accepted you will have to modify the source yourself but I had it working with just a couple minutes work.
This is the PR.
We will have to take a look into this. Thanks everyone who has contributed to this. It looks like we will be removing xib files in favor of a more programatic layout solution which should solve this issue in the future.
Hello everyone!
I'm sorry to inform the community that I'm officially deprecating this project. 😢 Please read my blog post for details:
http://www.jessesquires.com/blog/officially-deprecating-jsqmessagesviewcontroller/
Thus, I'm closing all issues and pull requests and making the necessary updates to formally deprecate the library. I'm sorry if this is unexpected or disappointing. Please know that this was an extremely difficult decision to make. I'd like to thank everyone here for contributing and making this project so great. It was a fun 4 years. 😊
Thanks for understanding,
— jsq
|
GITHUB_ARCHIVE
|
Well I thought the documentation online and wizards would help, but I just don't know enough about networking. What I need to do is fairly simple, based on my general experience, but up until now I've always had network engineers to make it happen. Now it's my job...
So it's pretty simple setup.. COMCAST > ASA 5505 > WinServer 2014
I have static IP addresses from COMCAST and two network cards on the server each with their own IP Address. I would like all FTP/SFTP traffic on one NIC, everything else on the other.
I will have users using VPN. There is a proprietary piece of software they will need to use and that's all they are really using it for, but they will need to VPN to execute it.
I have users that will use FTP software and I also have Cerberus FTP Server installed.
Finally there will be a IIS webserver running in the future.
No one will be accessing the Server or the ASA from the Backend at this time.
If this is too much to handle in this discussion group, please contact me via email if you can help. I'm fairly knowledgeable about what has to be done, (been doing IT work for over 30 years) but I'm more a programmer/developer and Business Analyst. Unfortunately this time round, it's all me.
Thanks in advance.
Well, starting off you need to setup your Interfaces, firewall rules, and static routes. If the ASA is handling VPN services you'll need to set that up as well. Have you tried logging into ADSM and running the startup wizard? That should get you started. If you haven't purchased a Cisco support agreement you may want to breakdown and do that. If you have purchased an agreement run through the startup wizard and then give them a call and they'll help you finish up anything the wizard didn't cover.
Thanks Ethan, I have the Smartnet full support. My issue with the wizard is it's asking questions that I no nothing about. It's been somewhat configured to get access to it, that's it. Will the SmartNet folks be able to configure it for me based on my original post information (and the actual IP addresses and such)?
I didn't think they went past trouble-shooting.
Whelton Network Solutions is an IT service provider.
Seriously, my advice is to not use the wizard to set it up, only for the VPNs.
Cancel the wizard and use the ASDM, it's reasonablystraight forward to use and a GUI thats "guessable" and straight forward. Care needs to be taken with NAT, plavcement is important so remember to move entries up if they are more specific.
They will also help configure the device. I've never asked them to help from start to finish though so I'm not sure what their response would be in that case.
You could try a step by step guide or two before calling Smartnet. Often times the official Cisco documentation is less helpful than a casual guide because it gets into the hairy details. Once you learn to skip over the 60% that does not pertain to you it makes more sense. Here's a non-official setup guide for the 5510. It's a different model but setup steps should be very similar:
I've done many installs with the same kind of topology you're talking about. Two gotchas to be mindful of, in my experience. One, you'll have to get a Comcast tech (try to get a tier 2 tech) on the phone and tell them to put the modem in gateway mode. This makes the modem a pass-through and moves the internet boundary to your ASA, which is where you want it.
Secondly, check to see what license you have on the 5505. If it's a base model, you will only be able to connect 10 internal devices to the gateway. You need an upgraded license if you have more than 10 devices.
I will second what Brian said about using the wizards. I would use it for VPN setup, but do the rest manually. In my opinion, it's actually easier and less error-prone.
Thanks for the tips, I think I'm going to go through support to see if they can help, unless someone here is feeling particularly charitable :)
- re: Comcast.. thanks Todd. There will only be two or maybe three devices. The server (which counts as two because it has two cards, and maybe something else at some point. Thanks for the tip about COMCAST that makes a lot of sense. We have 5 static IP's from them because that's the minimum we could get and I was going to try and actually use a couple to route traffic.
- re: Comcast.. thanks Todd. There will only be maybe three devices. The server (which counts as two because it has two cards, and maybe something else at some point. Thanks for the tip about COMCAST that makes a lot of sense. We have 5 static IP's from them because that's the minimum we could get and I was going to try and actually use a couple to route traffic.
The whole one server, two NIC thing is throwing me. Are you doing Hyper-V? If not, the server may have two cards, but you definitely cannot have two default gateways on a standalone server. I am not sure what your expectations are, but from an internet perspective, all internet bound traffic will go out one interface - the one with a default gateway associated to it. Your second NIC without a default gateway can talk to things on that associated subnet, that is it.
Dual NIC servers can be tricky, since there runs the possibility of an interface receiving traffic and responding on another NIC which doesn't always work out that well. This doesn't really deal with the original question sorry, but worth looking into.
**Disclaimer - I hate dual NIC scenario's. I know they have a place in the world, but depending on the environment you are asking for issues.
In the simplest terms looking at if from the perspective of the server...
I want the FTP server to look at one NIC (ip address) when waiting for traffic. I also want all IIS traffic inbound to the eventual IIS Server to come though that NIC
The other NIC will be only VPN and RDS traffic. This diagram should cover it although the labels aren't complete.
The two labels near ther server should read.. DMZ/FTP/SFTP/IIS and the other VPN/RDS
|
OPCFW_CODE
|
It may be the economical slowdown, the climate change, or even a random boost of creativity, but the competition between graphic studios is huge right now. Today, more than ever, you really need to show something special on your website to be noticed. So we made a selection of 30 portfolios that describe a studio or a freelancer with a unique personality. Please notice that you certainly need more than a nice “look” to make the design stand out; in particular, usability and accessibility are issues that need to be carefully considered when creating your next portfolio design.
You might be interested in the following related posts:
- My (Simple) Workflow To Design And Develop A Portfolio Website1
- Portfolio Web Design Showcases2
- 10 Steps To The Perfect Portfolio Website3
Flash Based-Designs Link
Ola Interactive Agency4
A fun and straightforward website, with small videos running in the back to illustrate some of the company values, like creativity, speed and coolness! There’s a speaker on the left to kill the music.
A classic dark styled background with a thumbnail menu and awesome transition effects are all you need for a killer portfolio. Oh, and some pictures from one of the top 15 photographers in America.
Unique browsing system Link
Websites that let you scan the portfolio trough original and fresh techniques.
Very good use of thumbnails and light.
Design by Slint
This prolific studio from Singapore also has a grid-based fluid portfolio, where they show the works in a blog style way.
Any Which Way
Nothing fancy seems to be happening here at the first sight, but this website has a great navigation system: a mirror-like menu that stands as a fantastic alternative for the way over-used “carousel”.
Special elements Link
Portfolios that use at least one remarkable element (widget, color scheme, game) to create an immersive adventure.
Crispin Porter + Boguski
Basically this website aggregates a YouTube video for a given campaign, the live news feed about it, Twitter bits, and blog pieces. It sure is the most social-ready portfolio of the selection.
Trust The KDU
The vintage design and the sepia tone of the photos work extremely well together and make a great portfolio.
Strong colors and striking, simple illustrations make from the new Carsonified website an instant classic.
Very clean and dynamic website with a powerful Twitter integration and a nice simple way to show the creative work.
- 1 https://www.smashingmagazine.com/2013/06/workflow-design-develop-modern-portfolio-website/
- 2 https://www.smashingmagazine.com/portfolio-web-design-showcases/
- 3 https://www.smashingmagazine.com/2009/02/10-steps-to-the-perfect-portfolio-website/
- 4 http://www.olainteractiveagency.com/
- 5 http://www.olainteractiveagency.com/
- 6 http://www.your-majesty.com
- 7 http://www.your-majesty.com
- 8 http://enjoythis.co.uk
- 9 http://enjoythis.co.uk
- 10 http://valeriephillips.com/
- 11 http://valeriephillips.com/
- 12 http://www.ben-thomas.com
- 13 http://www.ben-thomas.com
- 14 http://www.studio-output.com
- 15 http://www.studio-output.com
- 16 http://lyndonwade.com
- 17 http://lyndonwade.com
- 18 http://www.cardondesign.com/
- 19 http://www.cardondesign.com/
- 20 http://www.kenjiroharigai.com
- 21 http://www.kenjiroharigai.com
- 22 http://ishothim.com/
- 23 http://ishothim.com/
- 24 http://www.orangelabel.com/
- 25 http://www.orangelabel.com/
- 26 http://counterfill.com
- 27 http://counterfill.com
- 28 http://www.x3studios.com/
- 29 http://www.x3studios.com/
- 30 http://www.workatplay.com
- 31 http://www.workatplay.com
- 32 http://www.zaum.co.uk/
- 33 http://www.zaum.co.uk/
- 34 http://cpeople.ru
- 35 http://cpeople.ru
- 36 http://www.davehillphoto.com/
- 37 http://www.davehillphoto.com/
- 38 http://www.greatworks.se/
- 39 http://www.greatworks.se/
- 40 http://www.homedecaramel.com
- 41 http://www.homedecaramel.com
- 42 http://www.bkwld.com/
- 43 http://www.bkwld.com/
- 44 http://elliotjaystocks.com/
- 45 http://elliotjaystocks.com/
- 46 http://www.mojaveinteractive.com/
- 47 http://www.mojaveinteractive.com/
- 48 http://www.worldofmerix.com/
- 49 http://www.worldofmerix.com/
|
OPCFW_CODE
|
How do I build List or Array Types from a stored runtime Type for the sake of comparing them?
NOTE: When I originally asked this question I included IEnumerable. After some responses, I realized it was obvuscating the question because I was mixing interfaces and classes. I've removed it. Now I'm only asking about these three class types:
a type that represents a single thing
a type that represents a list of those things
a type that represents an array of those things
Before responding, please make absolutely sure you understand what I'm asking. I am NOT asking about:
types that derive from another type
types that are assignable from another type
types that implement some interface
Edited question:
If I have retrieved a type using typeof(), can I somehow use that to check for types that represent lists or arrays of the retrieved type?
I want to do this specifically so that I can do comparisons to find out if an unknown type is either some other specific type or a list or array of that specific type.
Here's a code example. Explanation is below the code.
public void Example()
{
Type k = typeof(Kitten);
Type m = typeof(Whatever); //Could even be a List<Whatever> or Whatever[]
Compare(k, m);
}
public void Compare(Type someType, Type otherType)
{
Type ListOfSomeType = ?????;
Type ArrayOfSomeType = ??????;
if(otherType == someType)
//otherType is a Kitten
elseif(otherType == ListOfSomeType)
//otherType is a List<Kitten>
elseif(otherType == ArrayOfSomeType)
//otherType is a Kitten[]
}
The point is that the Compare function gets Type arguments containing the types. It has no idea what they actually are. But it needs to be able to find out if they are the same type, or if the second one is a TYPE OF list or array storing objects of the first kind of type.
I entered question marks (?????) for the imaginary code I would like to use to construct the collection types.
How can I do this? This question is not hypothetical.
Sure you can do this with reflection. It's quite straightforward.
You can get the open type for List<> and then use MakeGenericType for the list. And Type.MakeArrayType for the array.
public static void Compare(Type someType, Type otherType)
{
var listOfSomeTypeType = typeof(List<>).MakeGenericType(someType);
var arrayOfSomeTypeType = someType.MakeArrayType();
Console.WriteLine("SomeType: {0}",someType.Name);
Console.WriteLine("OtherType: {0}",otherType.Name);
if(someType == otherType)
Console.WriteLine("someType and otherType are the same");
else if(listOfSomeTypeType == otherType)
Console.WriteLine("otherType is a list of someType");
else if(arrayOfSomeTypeType == otherType)
Console.WriteLine("otherType is an array of someType");
else
Console.WriteLine("No match found");
}
Live example: https://rextester.com/HQH48424
Type arrayOfSomeType = someType.MakeArrayType();
Type listOfSomeType = typeof(List<>).MakeGenericType(new[] { someType });
|
STACK_EXCHANGE
|
Experimental Code Published For Virtual CRTCs
Phoronix: Experimental Code Published For Virtual CRTCs
If you're interested in multi-GPU rendering, capabilities for DisplayLink-like devices, or NVIDIA Optimus / MUX-less hybrid graphics switching, here's some news worth reading about virtual CRTCs...
i don't think i understand the point of this - why render something on 1 gpu but send the video signals to another? that would still require both GPUs to be active at the same time, while the 2nd one is doing almost no work at all.
I can think of several cases where this would be desired:
1. When using DisplayLink devices: in this case I could add more monitors to my computer using DisplayLink devices (a usb display adapter without any kind of 3d acceleration) and the main GPU would handle 3D for all monitors, sending the rendered image to the usb device(s).
2. For laptops with multiple GPUs: Some laptops have a weak "on board" GPU that it uses normally to save power, and another powerfull GPU to use when more advanced 3D is needed. At the momment this don't work on linux, this setup would allow, when needed, the powerfull GPU to render the 3Ds and send the output to the standard one.
The point is to use the GPU to render the output to all monitors and send that to the other devices that don't have 3d acceleration (like DisplayLink usb devices).
hmmmm.......... in that case, i believe this could actually be the solution to gpu-passthrough on virtual machines. here's how it would work:
Originally Posted by faustop
you would have to have at least 2 GPUs - one for your main display, the other to be sent to the vm using pci passthrough. currently in VMs like virtualbox, the gpu can be recognized but it can't be used for an active display. by letting that gpu do the rendering, the virtual gpu can display the rendered output.
if for some reason that didn't work, there's another idea - maybe it is possible to send the render data through the vm to the host, so the host gpu can render and send the output to the virtual gpu.
if anyone can confirm that this is possible i am very excited.
It'd also be useful to create X servers with virtual CRTCs and expose them via a VNC or RDP server. Great for headless machines. Getting them to start X and create a proper framebuffer without any monitors attached is somewhat painful.
Now if someone added multiseat support to run multiple X servers on a single GPU, you could have multiple accelerated X servers on a single GPU running at the same time. Some of these could run some virtual machine. That might be pretty useful.
Does this open up the possibility of elegantly enabling multi-gpu scaling in foss drivers (AKA, SLI)?
What I'd like to know is if this could let me use my dual graphics cards for multiple displays in KDE the same way I can in Windows – including dragging windows between them. With 3D acceleration on at least one monitor.
Xvnc and Xrdp have been doing this for years? Or do you mean servers with a gpu, but no screen?
Originally Posted by rohcQaH
Yes, I was talking about a GPU accelerated X server. Xvnc is the next best thing, but software rendering is slow; even more if your CPU is busy with vnc compression.
Originally Posted by curaga
|
OPCFW_CODE
|
Account owner tasks
Account owner overview
After your company licenses the Magento Enterprise Cloud Edition, the only person who has access to it is the account owner—the person who purchased the software. The account owner is typically a “business user”— someone in the business or finance organization.
This topic discusses initial tasks the account owner should perform to give technical people access to the project.
Before you start, you must know the e-mail address for each user you’d like to add. After you add these users to the project, they receive an invitation to register with Magento Enterprise Cloud Edition.
Sign up for an account
To sign up for a Magento Enterprise Cloud Edition account, contact Magento Sales. They will create your account and send you a welcome e-mail that provides instructions to access the project.
The person who signs up for a Magento Enterprise Cloud Edition account is referred to as the account owner. You receive a welcome e-mail that enables you to set up the project initially.
Your welcome e-mail
After you register for an account, Magento sends you a welcome e-mail at the address at which you registered. The e-mail contains a link to your Magento Enterprise Cloud Edition project.
You can also access your project by logging in to your Magento Enterprise Cloud Edition account.
Generate authentication keys
Getting the Magento Enterprise Cloud Edition requires authentication keys. Only the account owner can create these keys.
As the account owner, you must create one set of keys for each technical person you expect will work on Magento Enterprise Cloud Edition. Each user must add these keys to their
auth.json file, which is located in the project root directory.
We recommend against providing the keys over e-mail because it isn’t secure; instead, use a company intranet portal or wiki.
Because the Composer repository that contains Magento Enterprise Cloud Edition requires authentication, you must add a file named
auth.json to your project’s root directory. This file contains your authentication keys. Without
auth.json, the Magento software won’t download.
auth.json in your Magento Enterprise Cloud Edition project root folder if there isn’t one already.
Replace the values in the following sample with your Magento Enterprise Cloud Edition public and private keys. You can get these keys from your account owner (that is, the person who created the Cloud account).
Create project administrators
As discussed in more detail in Manage users, Magento Enterprise Cloud Edition has a number of user roles.
Typically, the only user the account owner must create is the project administrator (also referred to as the super user). This user can create other users and delegate roles as desired.
- Log in to your Magento Enterprise Cloud Edition account.
Click the Projects tab as the following figure shows.
Click the name of your project (typically, Untitled Project).
The Setting up your project page displays as follows.
- Click Continue later. (Someone else can create the project at a later time.)
Click the configure project button next to Project Title in the top navigation bar as the following figure shows.
In the right pane, click Add Users.
Click Add User.
The page displays as follows.
Enter the following information:
- In the first field, enter the user’s e-mail address.
- Select the Super User check box.
- Click Add User.
The super users you add receive an e-mail inviting them to join the Magento Enterprise Cloud Edition project. The user must follow the prompts to register an account and verify their e-mail address.
Initially, a super user must create the project in any of the following ways:
Blackfire and New Relic credentials
Your project includes Blackfire and New Relic credentials. Only you—the account owner—can access them. You should provide these credentials to technical people as needed.
To get your Blackfire credentials:
- As the Magento Enterprise Cloud Edition account owner, log in to your Magento Enterprise Cloud Edition project.
In the upper right corner, click <your name> > Account Settings as the following figure shows.
On your account page, click View Details for your project as the following figure shows.
On your project details page, expand Blackfire.
Your Blackfire credentials display similar to the following.
New Relic credentials
Your New Relic credentials are displayed on the same page as Blackfire. You can create New Relic users and provide that information to the people responsible for administering New Relic.
|
OPCFW_CODE
|
using MPFitLib;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows.Forms;
/*
* MINPACK-1 Least Squares Fitting Library
*
* Original public domain version by B. Garbow, K. Hillstrom, J. More'
* (Argonne National Laboratory, MINPACK project, March 1980)
*
* Translation to C Language by S. Moshier (moshier.net)
* Translation to C# Language by D. Cuccia (http://davidcuccia.wordpress.com)
*
* Enhancements and packaging by C. Markwardt
* (comparable to IDL fitting routine MPFIT
* see http://cow.physics.wisc.edu/~craigm/idl/idl.html)
*/
/* Test routines for MPFit library
$Id: TestMPFit.cs,v 1.1 2010/05/04 dcuccia Exp $
*/
namespace Spectroscopy_Viewer
{
public class TestFit
{
/* Main function which drives the whole thing */
public static void Main3()
{
double[] x = {-1.7237128E+00,1.8712276E+00,-9.6608055E-01,
-2.8394297E-01,1.3416969E+00,1.3757038E+00,
-1.3703436E+00,4.2581975E-02,-1.4970151E-01,
8.2065094E-01};
double[] y = {-4.4494256E-02,8.7324673E-01,7.4443483E-01,
4.7631559E+00,1.7187297E-01,1.1639182E-01,
1.5646480E+00,5.2322268E+00,4.2543168E+00,
6.2792623E-01};
//TestGaussFit(x,y);
//Main2();
}
public static double factEval(double val)
{
//Console.Write("Value:" + val);
int myval = Convert.ToInt32(val);
double result = 1.0;
while (myval > 1.0)
{
//Console.Write("IN while loop");
result = result * myval;
myval -= 1;
}
return result;
}
public static double[] movingAverage(double[] vals)
{
double[] retVal = new double[vals.Length];
retVal[0] = vals[0];
retVal[1] = vals[1];
for (int i = 2; i < vals.Length; i++)
{
retVal[i] = (vals[i] + vals[i - 1]+vals[i-2]) * 0.33333;
}
return retVal;
}
public static double[] movingAverage2(double[] vals)
{
double[] retVal = new double[vals.Length];
retVal[vals.Length-1] = vals[vals.Length-1];
//retVal[1] = vals[1];
for (int i =0; i < vals.Length-2; i++)
{
retVal[i] = (vals[i] + vals[i + 1]) * 0.5;
}
return retVal;
}
/* Test harness routine, which contains test gaussian-peak data */
public static double[] TestPoissFit(double[] xinc, double[] yinc)
{
//First I need to delete the 0's at the end of the yinc
double totsum = 0.0;
int i0 = 0;
while (totsum <3.0) {
i0 += 1;
totsum += yinc[yinc.Length - 1];
yinc = yinc.Take(yinc.Count() - 1).ToArray();
xinc = xinc.Take(xinc.Count() - 1).ToArray();
//yinc.RemoveAt(yinc.Length - 1);
//xinc.RemoveAt(xinc.Length - 1);
}
//now normalize all the data.
double sum = 0;
for (int k =0; k < xinc.Length; k++)
{
sum = sum + yinc[k];
}
for (int k = 0; k < yinc.Length; k++)
{
yinc[k] = yinc[k] / sum;
}
//Now trim up to i.
//Now to fit.
//First average the data.
double[] averageY = movingAverage(yinc);
//Now find the max
double maxY = averageY.Max();
Console.Write("Max value: " + maxY);
int indexY = averageY.ToList().IndexOf(maxY);
Console.Write("Max index: " + indexY);
//Okay assume that this the average. now subtract it
double lam1 = indexY;
double[] subtractedVals = new double[averageY.Length];
for (int k=0; k < subtractedVals.Length;k++) {
double y1 = Math.Pow(lam1, k)*Math.Exp(-1.0 * lam1)/(factEval(k));
subtractedVals[k] = yinc[k]-y1;
Console.Write("K:" + k + " : " + subtractedVals[k]);
}
//Now curve this again
double[] averageSubtracted = movingAverage2(subtractedVals);
//Now fit this.
//Now find the max of this
double maxY2 = averageSubtracted.Max();
int indexY2 = averageSubtracted.ToList().IndexOf(maxY2);
//now to get the last value
double minX = 1000.0;
double lam2 = indexY2;
double closeVal = 0.0;
Console.Write("Finding intersection");
for (int i = Convert.ToInt32(Math.Ceiling(lam2)); i < Convert.ToInt32(Math.Ceiling(lam1)); i++)
{
Console.Write("Evaling: " + i);
double y1 = Math.Pow(lam1, i) * Math.Exp(-1.0 * lam1) / (factEval(i));
double y2 = Math.Pow(lam2, i) * Math.Exp(-1.0 * lam2) / (factEval(i));
Console.Write("Y1:" + y1 + " y2: " + y2);
if (Math.Abs(y1-y2) < minX)
{
closeVal = i;
minX = Math.Abs(y1 - y2);
}
}
double[] retVal = { lam1, lam2, closeVal};
return retVal;
}
/* Test harness routine, which contains test gaussian-peak data */
public static double[] TestGaussFit(double[] xinc, double[] yinc,double[] p)
{
double[] x = xinc;
double[] y = yinc;
double[] ey = new double[yinc.Length];
//double[] p = { 0.0, 1.0, 1.0, 1.0 }; /* Initial conditions */
double[] pactual = { 0.0, 4.70, 0.0, 0.5 };/* Actual values used to make data*/
//double[] perror = new double[4]; /* Returned parameter errors */
mp_par[] pars = new mp_par[4] /* Parameter constraints */
{
new mp_par(),
new mp_par(),
new mp_par(),
new mp_par()
};
int i;
int status;
mp_result result = new mp_result(4);
//result.xerror = perror;
/* No constraints */
for (i = 0; i < yinc.Length; i++) ey[i] = 0.02;
for (i = 0; i < p.Length; i++)
{
Console.Write("P" + i + " : " + p[i]);
}
CustomUserVariable v = new CustomUserVariable { X = x, Y = y, Ey = ey };
/* Call fitting function for 10 data points and 4 parameters (no
parameters fixed) */
status = MPFit.Solve(ForwardModels.GaussFunc, yinc.Length, 4, p, pars, null, v, ref result);
Console.Write("*** TestGaussFit status = {0}\n", status);
PrintResult(p, pactual, result);
double[] retval = {p[0],p[1],p[2],p[3],result.xerror[0],result.xerror[1],result.xerror[2],result.xerror[3]};
return retval;
}
/* Simple routine to print the fit results */
private static void PrintResult(double[] x, double[] xact, mp_result result)
{
int i;
if (x == null) return;
Console.Write(" CHI-SQUARE = {0} ({1} DOF)\n",
result.bestnorm, result.nfunc - result.nfree);
Console.Write(" NPAR = {0}\n", result.npar);
Console.Write(" NFREE = {0}\n", result.nfree);
Console.Write(" NPEGGED = {0}\n", result.npegged);
Console.Write(" NITER = {0}\n", result.niter);
Console.Write(" NFEV = {0}\n", result.nfev);
Console.Write("\n");
if (xact != null)
{
for (i = 0; i < result.npar; i++)
{
Console.Write(" P[{0}] = {1} +/- {2} (ACTUAL {3})\n",
i, x[i], result.xerror[i], xact[i]);
}
}
else
{
for (i = 0; i < result.npar; i++)
{
Console.Write(" P[{0}] = {1} +/- {2}\n",
i, x[i], result.xerror[i]);
}
}
}
}
}
|
STACK_EDU
|
Such radii can be estimated from various experimental techniques, such as the x-ray crystallography of crystals. · In fact, we have to go all the way back to Ancient Greece to find its genesis. How do you learn to build an atom?
· Next-gen dashboards get Tegra 2, Moblin, Atom, we go hands-on 01. It comes with built-in syntax highlighting for Go code. com, a free online dictionary with pronunciation, synonyms and translation. Splitting the nucleus of an atom, however, ATOM WE GO!! releases considerably more energy than that of an electron returning. In this article, we will follow this fascinating story of how discoveries in various fields of science resulted in our modern view of the atom. The pursuit of the structure of the atom has married many areas of chemistry and physics in perhaps one of the greatest contributions of modern science.
Buy tickets for the latest movie showtimes and hot movies out this week plus special movie events in theaters. There are natural matter ATOM WE GO!! and forces that are only ractive in the Universe. . Run tests, display test output, and display test coverage using go test -coverprofile 7.
· This is just the start of our journey. Please consult the FAQ prior to opening an issue: com/joefitzgerald/go-plus/wiki/FAQ If you have an issue with debugging, file an issue with go-debug here. In this situation, the first electron removed is farther from the nucleus as the atomic number (number of protons) increases. These different types of atoms are called chemical elements. On the graph, we see that the ionization energy increases as we go up the group to smaller atoms.
Atoms is simple to learn but hard to master, with similarities to classic board games like Othello, Reversi and Go. Within a group, the ionization energy decreases as the size of the atom gets larger. Autocomplete using gocode 3. You can however run Atom in portable mode where both the app and the configuration are stored together such as on a removable storage device. Are atoms in your body? Exchange and buy crypto for USD with credit card in seconds. · Yes, we are just atoms.
I tried Atom for a little and it was fun, I really like the markdown preview but I went back to vim to write Go. Teletype for Atom makes collaborating on code just as easy as it is to code alone, right from your editor. · Atom, smallest unit into which matter can be divided without the release of electrically charged particles.
Atom is an award-winning app and website that makes it easy to find new movie releases playing in theaters near you. This happens because as you move across the period shielding becomes incomplete causing the atom to hold its electrons closer to the nucleus, the eff nuclear charge also increases due to more. A package is a little program that a programmer is written to extend or add a feature to the Atom editor. 39 beta includes a new ripgrep-based project search backend, an upgrade to Electron 3. · Atom, an editor from Github, is an excellent editor with a plugin ecosystem for most languages and tools. The codes are not guaranteed to be dense. The Atom Syndication Format is an XML language used for web feeds, while the Atom Publishing Protocol (AtomPub or APP) is a simple HTTP-based protocol for creating and updating web resources.
Atom is the Same Everywhere Now you have Atom installed on your computer you need to configure it to use Go. See Contributingfor detailed instructions. Neither is any ordering guaranteed: GO!! whether atom.
A list of contributors can be found at Thank you so much to everyone has contributed to the package. 1 General Description 2 Comparison 3 Trivia 4 Gallery The Atom Ray is a unique item that allows it&39;s wielder to shrink or enlarge not only themselves but other players and Enemies. Atom stores configuration and state in a. With the help of our community, we plan to expand the number of languages that Atom-IDE can support and make it possible for you to run and edit applications, making Atom-IDE a true IDE.
But because of its negative temperature. In semiconductor manufacturing, the International Roadmap for Devices and Systems defines the 5 nm process as the MOSFET technology node following the 7 nm node. Introduction to the quantum mechanical model of the atom: Thinking about electrons as probabilistic matter waves using the de Broglie wavelength, the Schrödinger equation, and the Heisenberg uncertainty principle.
Atoms can gain or lose energy when an electron moves from a higher to a lower orbit around the nucleus. We are WE going to add additional plugins to make our Go Development easier inside of Atom. 1, and much improved loading times for multi-megabyte files containing only a single line of text. Build an atom out of protons, neutrons, and electrons, and see how the element, charge, and mass change. go-debug: Debug your package / tests using delve.
. There are a bunch of snippets built into Atom’s core packages out-of-the-box—you can see them if you go to the Packages tab, and then view the Core Packages. The first periodic trend we will consider atomic radius. The word ‘atom’ actually comes from Ancient Greek and roughly translates as ‘indivisible’. Atoms are in your body, the chair you are sitting in, your desk and even in the air.
Go back to the settings instal package. Index ¶ func String(s byte) string; type Atom; func Lookup(s byte) Atom This ensures your autocomplete suggestions are kept up to date. Highlight occurrences of an identifier using guru 11.
Electrons do not stay in excited states for very long - they soon return ATOM WE GO!! to their ground states, emitting a photon with the same energy as the one that was absorbed. The reverse is true with holding down the trigger and then firing; the Atom Ray will fire a enlarging projectile, and will. · Snippets and code style. How Are The Builds Performed? Format your code with gofmt, goimports, or goreturns;optionally run one of these tools on save of any.
Great things happen when developers work together—from teaching and sharing knowledge to building better software. If you are missing any required tools, you may be prompted to install them. Atoms are very small pieces of matter. A modern day remake of the "Exploding Atoms" retro game from 1980, and still as addictive as ever. If you are still in the Go Plus settings screen you need to click Install in the panel on the left.
· How to Split an Atom. But we also know that we can&39;t achieve our vision for Atom alone. Madison Mars - Atom Check our Spotify playlist ︎ fi/2JsVs25 Free Download/Stream this track: com/atom-1 Join our. Please fork this repository, make yourchanges, and open a pull request.
Predict how addition or subtraction of a proton, neutron, or electron will change the element, the charge, and the mass. Installing Go-Plus We are going to install an Atom package. But life, living, thinking all cause and initiate original force in nature, which comes from a completely different source- Life Force or Spirit. We made atomic bombs and generated electricity by nuclear power. Image credit: Next-gen dashboards get Tegra 2, Moblin, Atom, we go hands-on.
Go language support in Atom. Then play a game to test your ideas! The way snippets work is straightforward: begin typing a keyword that activates a snippet and then expand the text. Ever wonder how we actually know that atoms exist? We hope to see future language support for the great languages out there including Rust, Go, Python, etc. Atoms become larger as we go down a column of the periodic table, and they becomes smaller as we go across a row of the table. Sample Learning Goals Use the number of protons, neutrons, and electrons to draw a model of the atom, identify the element, and determine the mass and charge. First up, we are go to install and setup the Atom Editor with the required plugins.
The covalent radius for an element is usually a little smaller than the metallic radius. Can I use atom with GO Plus? We will look at the consequences of knowing the.
Our atoms react ATOM WE GO!! to all the natural forces in nature. See full list on atom. Use the number of protons, neutrons, and electrons to draw a model of the atom, identify the element, and determine the mass and charge. The Ancient Greek theory has been credited to several different scholars, but is most often attributed to Democritus (460–370 BC) and his mentor Leucippus. Although the concept of a definite radius of an atom is a bit fuzzy, atoms behave as if they have a certain radius. 1 Web feeds allow software programs to check for updates published on a website. As such, the atom is the basic building block of chemistry. and go test -c -o tempdir.
The atomic radius is an indication of the size of an atom. We might expect the first ionization energy to become larger as we go across a row of the periodic table because the force of attraction between the nucleus and an electron becomes larger as the number of protons in the nucleus of the atom becomes larger. Janu. ) against your code using gometalinter, revive or golangci-lint 6.
golint, vet, etc. The following commands are run for the directory of the current file: 1. String will return "div", and atom. Is atom the same everywhere? Last Atom standing is the winner! This process is the same for computer running Windows, Mac OS X or Linux. Adds syntax highlighting and snippets to Go files in Atom. Go to definition using guru or godef 10.
The only guarantees are that e. In, Samsung and TSMC entered volume production of 5 nm chips, manufactured for companies including Apple, Marvell, Huawei and Qualcomm. It has been said that during the 20th century, man harnessed the power of the atom.
38 includes some improvements to the GitHub package and improvements to JS, ERB, Python, and JSON language support. Display information about your current go installation, by running go version and go env 2. atom directory usually located in your home directory (%userprofile% on Windows). Here we&39;ll learn what atoms are and exactly how scientists went about figuring all this out. This can be explained by noting that covalent bonds tend to squeeze the atoms together, as shown in the figure below. Find ATOM usages of an identifier using guru You can add debug functionality to Atom by installing the following package: 1. Atom definition at Dictionary. Predict how addition or subtraction of a.
go test -o tmpdir -c. A collection of Cordova snippets for Atom Editor. -FEATURES-* Local or online play * Challenge friends or auto-match players with Game Center. You do not require this package to use Go or Atom but if you install it it will reduce the amount of typing you have to do. Contribute to brunowego/atom-cordova-snippets development by creating an account on GitHub. Teletype for Atom.
You can also manually install the required tools in your terminal:. the package object archive), and the way to keep these up to date is to run go installperiodically./84/21ce7b34b14ca /a5020ede1d5-542 /849cb892b91e22 /5223329
-> PARAGRAPH V
|
OPCFW_CODE
|
MySQL: Find duplicates across three tables for lat lng that are within X feet of one another
I have three tables (t1, t2, t3) all of which contain id, lat, lng.
There are possible duplicate entries that I want to identify and possibly eliminate (if they are truly duplicates). The issue is that the lat and lng might not be EXACT matches, so the typical count(*) >1 method of finding duplicates will not work since I want to allow for a slight geolocation variance (say 250 feet or about 0.005 degrees in each lat and lng) in any given direction.
I found a similar question here, but it doesn't identify this exact issue and I wasn't able to extrapolate that code for my answer:
All Lat Lng of a sql table within 15 Km to each Lat lng of a different table-sql 2008
I did find some code that works to identify EXACT duplicate lat/lng but it doesn't tell me which tables the duplicates are in so I can manually research them or to allow for even the slightest deviation in lat/lng and only returns one id, lat, lng, and count for each duplicate.
There could be "duplicates" that have lat's of 32.333 and 32.336 but the code below will not account for them, as it will see them as unique.
`SELECT id, lat, lng, COUNT( * )
FROM (
SELECT id, lat, lng
FROM t1
UNION ALL
SELECT id, lat, lng
FROM t2
UNION ALL
SELECT id, lat, lng
FROM t3
) tbl
GROUP BY lat, lng
HAVING COUNT( * ) >1
ORDER BY lat;`
Ideally, the output should look something like this:
`t1.id t2.id t3.id
432 1087 <-- found 2 rows within 250 of each other in t1 & t2
12 832 <-- found 2 rows within 250 of each other in t1 & t3
88 654 789<-- found 3 rows within 250 of each other in t1,t2&t3`
An example of "close duplicates" would be:
32.332, -87.556
32.336, -87.560
Could you work it out youself if you know how to find distance between two points using latitude and longitude in mysql?
I think that is a step in the right direction, and it would be easy if I was trying to compare them to a known location. But I want to compare them to each other. Similar to a "count(*)>1" method of finding exact duplicates. Thanks for the suggestion!
This is along the same lines
By adding a constant to the union all statements I am able to at least identify the table for each row:
SELECT id, lat, lng, COUNT( * ) FROM ( SELECT 't1' AS table_name, id, lat, lng FROM t1 UNION ALL SELECT 't2' AS table_name, id, lat, lng FROM t2 UNION ALL SELECT 't3' AS table_name, id, lat, lng FROM t3 )tbl GROUP BY lat, lng HAVING COUNT( * ) >1 ORDER BY id
|
STACK_EXCHANGE
|
PHP/Apache Can't Write To Anything
I've been encountering an issue on a client site for the past couple of weeks where they can't upload files to a directory outside of the web root.
After looking in the error logs, it looks like PHP is spitting out errors saying the upload directory is not writable. This directory happens to be outside of the web root but there are no open_basedir restrictions on the server.
I have a test script that executes the following code:
$path = '/var/www/vhosts/testdir';
$writable = is_writable($path) ? 'is writable' : 'is not writable';
echo "$path $writable";
When I run the script from the command line with php test.php the directory is writable. However, when I access the page from the browser, it is not writable.
Both the web root directory and testdir belong to the same user and group and have 755 permissions set. I've tried setting the owner of both dirs to apache to no avail.
When I set the permissions for the upload directory to 777 it works, but obviously I don't want all my files to be readable and executable to everyone.
Distro is CentOS 6.7.
Am I missing something obvious here?
What particular distro is this ?
@Iain CentOS 6.7. Sorry for not making that clear, I'll edit my original post.
Your problem is almost certainly SELinux. Files & directories outside the 'web root' will be unlikely to have the correct SELinux context e.g. httpd_sys_content_t.
You will have to change the SELinux context on the files/directories that you want to be able to write to.
Hi Iain, thanks for the reply. Do you have any guides that can show me how to do that?
There is a site search facility in the top right of the page. There you will find there are lots of guides already available to you.
The context for writable directories is httpd_sys_rw_content_t. The context httpd_sys_content_t allows only reading.
@MichaelHampton Isn't that for C7 and onwards ? Isn't the purpose of the http_unified boolean to make C7 work like C6 ?
It's been that way since at least 6, maybe even 5.
My contexts appear to be set to a question mark (?) - apparently this means SELinux is disabled. Could this still be a SELinux issue?
OK I see httpd_unified was switched from default on to default off between C6 and C7.
Thanks for the replies guys, any suggestions on how I can diagnose this issue further?
The user or group running the httpd must have write access. PHP inherits this from httpd.
|
STACK_EXCHANGE
|
//
// ImageCacheableTests.swift
// ImageCacheable
//
// Created by Shabeer Hussain on 23/11/2016.
// Copyright © 2016 Desert Monkey. All rights reserved.
//
import XCTest
@testable import ImageCacheable
struct Mock_LocalImageCache: ImageCacheable {
var imageFolderName: String? = "imageFolder"
}
struct Mock_InMemoryImageCache: ImageCacheable {
var inMemoryImageCache: NSCache<AnyObject, UIImage>? = NSCache<AnyObject, UIImage>()
}
class ImageCacheableTests: XCTestCase {
/**
Tests that the image directory is correctly created
*/
func test_ImageDirectory() {
let sut = Mock_LocalImageCache()
let result = sut.imageDirectoryURL().path
//create the expectation
let documentsPath = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
let expectation = "\(documentsPath.path)/\(sut.imageFolderName!)"
XCTAssertEqual(result, expectation, "Image directory not created")
//finally clear the cache
sut.clearLocalCache(){_ in }
}
/**
Test the file extension from a remote image can be stripepd out
*/
func test_fileExtensionForURL() {
let url = URL(string: "http://www.shabeer.io/hero.png")
let sut = Mock_LocalImageCache()
let result = sut.fileExtension(for: url!)
let expectation = ".png"
XCTAssertEqual(result, expectation, "Incorrect file extension for image")
//finally clear the cache
sut.clearLocalCache(){_ in }
}
/**
Tests images can be downloaded and stored on disk
*/
func test_localImageForKeyFromURL() {
let sut = Mock_LocalImageCache()
//create the expectation file path
let imageURL = URL(string: "http://www.planwallpaper.com/static/images/Seamless-Polygon-Backgrounds-Vol2-full_Kfb2t3Q.jpg")
let imageKey = "uniqueKey"
let fileExtension = sut.fileExtension(for: imageURL!)!
let fileName = "\(imageKey)\(fileExtension)"
//expect to be saved here
let documentsPath = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
let filePath = "\(documentsPath.path)/\(sut.imageFolderName!)/\(fileName)"
let expectedFilePath = URL(string: filePath)!
let expectation = self.expectation(description: "Image Download Failed")
sut.localImage(forKey: imageKey, from: imageURL!) { (image, key) in
XCTAssertNotNil(image, "Image incorrectly downloaded")
let result = sut.fileExtension(for: expectedFilePath)
XCTAssertTrue((result != nil), "Image not downloaded")
expectation.fulfill()
//finally clear the cache
sut.clearLocalCache(){_ in }
}
waitForExpectations(timeout: 15)
}
/**
Tests in-memory image cache
*/
func test_InMemoryImageForKeyFromURL() {
let sut = Mock_InMemoryImageCache()
//create the expectation file path
let imageURL = URL(string: "http://cache.net-a-porter.com/images/products/714463/714463_fr_sl.jpg")
let imageKey = "uniqueKey"
//first test the image is non existent
let result1 = sut.inMemoryImageCache?.object(forKey: imageKey as AnyObject)
XCTAssertNil(result1, "Image should not be cached yet")
//next, we download it and see if it gets cached correctly
let expectation = self.expectation(description: "Image Download Failed")
sut.inMemoryImage(forKey: imageKey, from: imageURL!) { (image, key) in
//check the image is returned
XCTAssertNotNil(image, "Image should be downloaded")
//next check the image exists in the cache now
let result = sut.inMemoryImageCache?.object(forKey: imageKey as AnyObject)
XCTAssertEqual(result , image)
expectation.fulfill()
//finally clear the cache for the nex test
sut.clearInMemoryCache(){_ in }
}
waitForExpectations(timeout: 15)
}
}
|
STACK_EDU
|
A Baker's Dozen from Novell's Chris Stone:
Interview with the Vice Chairman
by Sam Hiser
December 5, 2003
Can you fill us in a bit on Novell's software strategy -- such as it
may exist on your whiteboard, or even on the back of your napkin?
1. Who is driving the strategic software conversations at Novell?
Messman and Chris
2. When and in what context was Novell's open source epiphany?
Like, when did the stadium lights go on? What moment?
Spare no self-effacing detail.
When I was at Novell on my first tour of duty (with Eric
Schmidt) in 1997, I was an open source advocate. I was pushing
Linux into Novell back then but to no avail. So, no Stadium
lights, just a penlight in the mens locker room. About 14 months
ago, Jack and I realized we needed an "event" to change Novell. We
needed a platform shift and we needed to get ISV's, IHV's to look at us
as "relevant". Identity and Resource management are growing parts
of our strategy, but it wasn't enough.
Merely adding more features to a declining platform (NetWare)
was not going to re-shape Novell. We planned out a year long
strategy that would first begin at our Brainshare conference in
March. We had limited internal experience with open source so we
needed to learn before we could dive in. In March, we announced
two things; NetWare 7 would allow customers to use a Linux kernel (as
well as NW) and that we were going to provide a CLE, Certified
Linux Engineer certification program. We made these announcements
to gauge reactions from our customers. We were not dropping
NetWare, we were adding Linux. In April we held a customer
council meeting where we laid out the next step - Nterprise Linux
Services (NLS). In essence, putting all of our NetWare services
(file, print, security, management, etc) on both Red Hat and
SuSE. The customers were extremely supportive. The next
step was the Desktop and then the distribution itself. We wanted
to make the Desktop move in the summer and the Distribution itself,
towards the end of the year. Acquiring Ximian in July, we also got
the talent we wanted and front line warriors in the open source
movement. In September, we made the decision to go after SuSE and
added even more talent to our stable. We acquired a culture as much as
we acquired technology. We never contemplated Red Hat (in case
you were thinking this). Nor did we consider doing yet another
distro, the community would have hung us out to dry. After
sinking billions into the 18 variants of UNIX over the past 15 years,
the world should be happy with only two competing distributions, and we
wanted to be the leader. We had always planned an end to end solution,
desktop to server and complete management of the environment. We
now have all the parts.
3. Can users potentially do things on Novell's future Linux suite that
they cannot on Windows, or visa versa?
They can afford to feed their children. Linux is about
choice, freedom of applications, using software that you can get from
anywhere, lower costs (hardware and software). It's a social
movement, a crusade to free the users from the oppression of Mordor and
4. People are curious about Novell's licensing choices: Will, for
example, the Novell products we are familiar with come under a EULA or
some under one or more of the open source licenses? Not looking
for committed answers, necessarily, but thought process.
Very good questions. It is our *intention*, but not
final yet, to offer both EULA agreements and GPL. Our Nterprise
Linux Services will be commercial licenses under our EULA. We
also sell Red Carpet as a commercial license for the server as well as
an open source product for the client.
5. Is it safe to assume that Novell applications will integrate better
with the GNOME/Evolution/SuSE suite than with other Linux distros and UI
Well, sure. We will have tight, integrated bundles.
But we have every intention to certify our services as available
on Red Hat as well. We are looking very hard at KDE and GNOME and
how to rationalize the two environments for developers and ISV's.
They want one common tool base.
6. A little bird, I think, told me that Directory Services was going to
be ported to or integrated with Linux first. Can you help us
visualize the Novell Linux suite from the user or enterprise customer's
point of view? What's it look, smell & feel like? What
will customers be able to do? Which if any -- of your product
groups will be omitted? A la carte? Prix Fix?
It already is. eDirectory runs on Linux now and will be
bundled as part of our Nterprise Linux Services. This ships in 3
weeks. As an example, customers will have a complete Identity
Management solution for Linux that includes single-sign on,
authentication, authorization, OpenSSL, DirXML (metadirectory)
integration and connectors, Provisioning and SOA (Web services).
One hell of a complete stack.
No product groups are being omitted.
7. Are schools and universities an important target segment for Novell
now or in the future?
Yes, Novell has a long history of support within this
market. We support the SIFS initiative in North America and had
special school pricing from SuSE long before Red Hat did. Look for
us to target this audience with incentives to use our Linux Desktop and
distribution in the near future.
8. Multiple Choice: Would you say that SCO is like a gnat on the hiney
(a) an elephant
(b) a penguin
(c) a water buffalo
(d) SCO who?
(e) all of the above or none of the above
8a. Are you considering a branded desktop or back-end(s), or both?
Both - stay tuned.
9. What happens to KDE now that you have the talented GNOME people more
or less one cubicle away? KDE has been a natural favorite UI for
SuSE, is that likely to change?
SuSE ships both GNOME and KDE with their distribution.
Qt, GTk, Eclipse, .NET, Java, Wine, Freedesktop.org, etc. are all
factors to consider here. So are what IBM, HP, Dell and Sun
prefer. Our intention is to support both and we are discussing the
best way to do that.
10. How will Novell try to get major accounts that are not presently
Novell customers? How does Linux impact this effort?
They are already coming to us. Two of the major
inhibitors to Linux acceptance in the Enterprise has been lack of
applications and Service and Support. We solve both of these
issues. Linux is having a significant positive effect on our
strategy to recruit new customers and partners.
11. It's easy to speculate that Ximian, SuSE and Novell were not
profitable companies at time(s) of acquisition. Where's the
synergy? (My Mom wants to know.) Please address Novell
products here to whatever extent you think they're relevant?
Our recent Q4 (NOVL) was an operating profit and we
expect to be profitable in 2004. As for Ximian and SuSE, the
synergies have been addressed in your previous 10 questions ;-) Your
Mom should diversify.
12. Some observers believe the best thing you can do for yourselves,
your shareholders and the open source communities is to become a
profitable company? So how have you been thinking about what are
you going to do about -- being a good Linux, good open source, citizen
See answer to 11, do not pass go, do not collect $200.
13. Having founded Tilion, you're a technologist who understands XML
pretty well. My colleagues at OpenOffice.org would appreciate
hearing your views on how OOo's native open XML file format, for
example, can be exploited for document management, Web services, search
and other useful things in the context of the Novell suite both on
Linux AND Windows. Is there an XML play here?
Duh! I did a lot of pioneering stuff with XML at
Tilion. We used a directory as the XML data store and wrote a
real-time logistics visibility service. Problem was, the world
wasn't ready (now they are buying it...). That aside, using XML
as the native file format (lingua franca) for OO, doc management, etc.
makes perfect sense. We are moving in this direction at
Novell. To me, it's XML in, XML out. I also suggest you
take a long hard look at using Xforms. It's way cool.
14. Micheal and Nat (Ximian) were recently recruiting in India. Are
those hires going to work on GNOME, Evolution, Mono, on Novell
applications or Linux integration or any particular areas that you're
willing to discuss?
I believe you mean Miguel, not Michael. Let me
clarify. Novell has a development organization in Bangalore with
over 300 engineers. We are re-directing some of that talent to
work on Linux related projects such as Ximian Desktop and MONO.
So, they are employees, though some may be recruited. Now, having
said that, we are recruiting many developers in India to work with us
on open source projects (GNOME/KDE, XD, Open Office, Mozilla,
etc). That is
going extremely well.
Thank you, Chris.
Sam Hiser for ConsultingTimes.com
|
OPCFW_CODE
|
I presently have a head end 612 unit that is running in inline mode. The plan is to move it to WCCP mode. Also it is being taxed pretty good so we are looking to cluster it (if possible) with another 612, to provide some redundancy and to of load some of the load. I see where you can put them inline and when the first one reaches it's connection max the second one starts to on some connections.
You are correct, your plan of WCCP redirection would be preferred to load balance and provide redundancy, especially at the head end. You can have up to 32 WAEs in a WCCP cluster, so you will be able to scale with more boxes as needed.
There is a lot of information there, however if you have questions, please post them and we'll see if we can assist you with them. Depending on your routers or L3 switches that you have installed, WCCP will have slightly different configuations for traffic redirection and return.
It depends on the Cisco gear as to whether you should use hash or mask. Hardware based redirection does not support hash in hardware asics (All Catalyst based switches, 7600s, ASR1K, etc.) so Mask is highly recommended to keep your CPU down. Hash is primarily for software based routers (think ISR and 72xx). Some of the newer 12.4T code can also support Mask on the software routers, however most customers still use hash based load balancing for the software routers.
Hash load balancing uses 256 IP buckets and cannot be influenced by configurations. It is less adaptable, but works well for most smaller sites with multiple WAEs. Mask based load balancing uses a 6 bit default mask of 0x1741 which should be altered depending on your situation. I usually use something like 0x3 for edges and 0x300 or 0xF00 in DC core situations depending on how many WAEs I need to load balance. Basically the more bits in your mask give you more buckets to spread over your cluster, but will use more TCAM space on your switch/router. A Mask of 0x3 will load balance on the last 2 bits in your address while 0x300 will use the last 2 bits in the 3rd octet, etc.
The other thing you should be concerned with is using either L2 or GRE (default) redirection. Rule of thumb is hardware based redirection should use L2, and software based platforms should use GRE redirection. Exceptions to this are the 6500 SUP 32 or 720 and ASR1000 which can do either in hardware.
I hope this helps, it's a lot of information but there is a lot to cover in these situations. The biggest thing to remember is to intercept traffic going both directions (wccp 61 and wccp 62) and avoid re-intercepting traffic egressing the WAEs as that will cause you headaches with loops. I often use negotiated return or generic-gre return in situation where you can't segregate the egress traffic, but again, this is platform dependent.
Introduction This article will help you understand the steps on how to
download the UCS licenses from the Cisco Systems website and then
installing it on the UCS. The redacted (blue lines) just covers up
certain numbers for privacy please do not take them...
Introduction This article will help you understand and educate the
customer on how to clear their "expired licenses"
(license-graceperiod-expired) from their UCS-M. If a customer just
purchased a license and needs a step by step guide on how to download
Introduction Prepositioning is a powerful tools on the WAAS platform but
it is not always easy to figure out why your jobs are failing when
trying to retrieve the files.Here is a method that should help you to
figure out the reason why they are not succes...
|
OPCFW_CODE
|
Welcome to the next issue of the Tidy Cloud AWS bulletin!
This issue of the bulletin talks about CDK Day, CDK for Terraform, Dagger, and Code Tour.
CDK Day 2022
It is soon time for CDK Day 2022!
CDK Day is a community-driven event for all toolkits that belong to the Cloud Development Kit family, which includes:
- AWS CDK - The original CDK toolkit for provisioning AWS Cloud infrastructure
- CDKTF - Cloud Development Kit for Terraform, multi-provider/cloud infrastructure provisioning
- CDK8s - CDK for Kubernetes, generate Kubernetes-based solution configurations
- Projen - A CDK-style tool for software project management
All CDKs allow for multiple programming languages to be used to define the target infrastructure or solution configuration. This year is the third year for CDK Day, which will be a virtual event and free to attend, on May 26^th^ 2022. If you are interested in submitting a talk, you have time until April 20^th^ to do that!
For more information about CDK and to register, go to https://www.cdkday.com.
It is a nice event, and it usually has something for both those starting out with any of the CDKs and for experienced CDK users.
CDK for Terraform 0.10
Hashicorp’s Cloud Development Kit for Terraform (CDKTF) has released its 0.10 release, which includes improvements in outputs for some of the command-line operations, including diff, deploy, and destroy.
They have also added multi-stack deployment support in the workflow, so you can deploy multiple stacks with one command. That also includes reviewing the differences for each stack, automated or manual approvals. It also works with the watch command to track changes across multiple files.
There have been changes in how lists of data are handled, which introduce some breaking changes.
You can read about the update in the Hashicorp blog post: https://www.hashicorp.com/blog/cdk-for-terraform-0-10-adds-multi-stack-deployments-and-more
You can also read the upgrade guide to 0.10 if you are running an older version of CDKTF: https://www.terraform.io/cdktf/release/upgrade-guide-v0-10
Recently, Dagger got out of its private beta into a public project. It has the ambition to provide a portable toolkit for CI/CD pipelines.
The goal is to define the pipeline logic once and then run that wherever you want, locally on your own computer or with your continuous integration system of choice, on multiple platforms.
I have been in the position myself to deal with multiple pipeline solutions in the same environment, and also to have different tooling in use depending where the pipeline should be. Portable pipeline definitions is a good thing! If there will be more re-usable components coming out of this effort as well, all the better!
The project is still in its early stages. I think it is promising and eager to see where it will go!
Dagger website: https://dagger.io
Launch announcement for Dagger: https://dagger.io/blog/public-launch-announcement
A neat extension to Visual Studio Code (VS Code) that I discovered recently is Code Tour. What this extension does is that it allows you to record tours through a software project. A tour is essentially a set of steps where you can provide descriptions, for example, for specific lines of code, specific files or directories. You can use markdown in the descriptions and you can create links to external resources, to run commands in VS Code, or shell commands, or link to other tours and steps in tours.
I have used this myself and I find it pretty nice to describe certain aspects of a software project, or a way to do guided onboarding into a project.
The tours can be git release aware and tied to specific releases, or to specific branches, or simply be agnostic of git versions (fit any release or branch). You can get notified about potential drift in case a tour is tied to a specific release, for example.
The tour data is saved in separate files, and does not affect your source code or any other files you work with in the project. Just check in the Code Tour files in your version control!
A tip is to save regularly when you edit a step - you must click Save explicitly, and if you navigate away in the VS Code editor to something else, e.g. to check some info you want to include in your step description, you may lose any unsaved data. That has happened to me a few times now…
Check it out at CodeTour page in the Visual Studio Marketplace.
You can find the contents of this bulletin and older ones, and more at Tidy Cloud AWS. You will also find other useful articles around AWS automation and infrastructure-as-software.
Until next time,
|
OPCFW_CODE
|
Application X Www Form Urlencoded Example
User make this.
All white space is. We help you are recommended to. If you were previously rendered in either setup fiddler nor wireshark to finish the new posts by building the top of the same info in. Url form body as defined in urlencoded form and post request, request in web application x www form urlencoded example url in. Request to send requests from a lot for example url in seconds from as they are usually for? Requests might have any and examples provided by implementing a different parameters. Press again if you are request and examples to edit my side, on your urlencoded string? In urlencoded form fields for example work with fellow postman and if possible to see. However you shure you can be an example. Id field in urlencoded form fields as intended to upload files like. Moving away from php, instead of examples. The form data file in a file uploads an example, you to notify me the zone or does hubitat encode.
Get started with form?
Please fill in. All to my portfolio and form? Please file uploads an example for examples do i miss something like to decode content from performing such encodings are working on. How to know how can easily explain those that? Once your urlencoded string is a post, eliminates the editors learn more about this example for everyone, is url into the variables have either extracted as. To the encoding if we see the encoded string following response from their server in order the support api data file again at this encoding for? For query parameters usage of everything you can i create new features and other sections of how can enter a valid?
Postman will send array. How html parsing needs files. Post urlencoded string array with message and examples illustrate handling of your browser to request with the html web application? The urlencoded form parameters in encoded form parameters passed into single location. Ietf trust legal provisions and examples do not using impodotent methods exist in your file? My url into single location that example within a way. How are not exactly how can be used for form data via post urlencoded form data set is made by neudesic, which will populate their developer, serve personalized recommendations. Json object in urlencoded form fields can stringify the examples provided by studying the endpoint path in a separate http. These examples override the form submission.
Extract the form data?
The form fields can be sure you need to the same headers in get started with.
Once all the form. Enter your urlencoded form type? How do with our application has two closer together in the allowable values properly instead of the data in an appropriate method. Json data to deliver some reason it has occurred. Each with postman to get in urlencoded string key as form for subsequent request with an external web application. The examples do i make a solution in url parameters, receives response of cookies, if you can i find the character. An array with some supplemental examples override both so truncate the empty.
Can be used as before it and post request to override a standard objects and finish rendering before. Once you sure you like that example id with your urlencoded string? Need to this example url form data to the examples illustrate encoding mode?
It possible to change that example of examples do i use tool like that fixed the urlencoded string? Additional request where sensitive bits redacted all the form request and send that example, that and timestamps. Postman text field or overriding recommended headers to convert a form data attribute values are not responsible for example, and whatnot in.
Already have removed this example, otherwise an error reporting this forum may be redesigned to follow the url encoded format discussed above example id from spirit shroud? In urlencoded string is there is where did you want the body as. This example somewhere on their own.
Ensure the canonical reference for example, postman doc is not be plain text files using a quick start to. Copyright the uri and headers are these notifications for example somewhere on this can be created communication arrangement for their name of my next and how do? The form values generated by studying the result.
Keys and examples do something like to the urlencoded form.
Have limits on a form? How do i was able to make sure it defines a different location in plain text, enhance your requests can pass them does not actually. What are included in urlencoded form? In urlencoded form data from the examples to. If a browser can probably debug code library for example to understand why is www.
Uri in the fail_format edge is very simple use due to be picking up with updated question and save our application? Want to post form url for example, or namespace prefixes for data being transmitted, or changing a standard? Any single location, form data attribute information and the urlencoded form will also remove technology roadblocks and a comment to process.
Have redacted all liability for example requires a mistake, and several others, sends your urlencoded form. Found next one belongs to become as form values get started with some supplemental examples that example, all such encodings are pdf documents and execute failed. Get request in postman will need to send additional metadata about our object displayed in http, and trying to see what would be posted json.
Most likely to retrieve, see below for example id field labels, we noticed that require you want change without warranty as. Various other post urlencoded form data that example id fields for examples do you for commercial reprinting, especially ones not find answers to differentiate between data! Maybe you post request method to complete your domains from mirth interfaces so lets you want it?
Users to have the form url encoded form values in java and a new resource using post body data input to html form. Search for example within a post urlencoded string compare program to create a lot for taking a server? Type header value but you sure you can i need to.
If bitcoin becomes a new resources listed, it possible to a particular attention to the server development, typically associated with postman or ask a data you only want the list application x www form urlencoded example. Cannot share your urlencoded form on document that example, and authorization api server to access to. Api and examples do i get started with some private info about our application? Postman and everything you are only contains parsed xml, but what about, we want change that may be stored as well as.
Funcmatic plugin for? Request and are sent as it will be published documentation to the left as it in the request would be either system is the articles on. Can override a form data from a look at each form? Here is used to make get or were also the persistence of both of our application x www form urlencoded example looks like wireshark to the parsed xml. Here is flat document is submitted along with internal error occurred while on the uri that needs files to deliver some http listener which you can i enable push channel? We have any and examples provided was all calls that example might increase in urlencoded form type should select a moderator needs to.
Am i already have the examples.
Allow an unusual server and python application x www form urlencoded example, if bitcoin becomes a redirect. Message body data i use this example. Your api and analytics to this specification depends on this type in plain text.
Json value fields as the data via the us to process for microsoft by default web application x www form urlencoded example, including the information, as a stock price is zero or visit your raw http. How to be used at the examples provided without the target system requirements are not allow users of post request being passed correctly formed. Please help each field or put needs to. There was an input to be valid for dates in urlencoded string as clipboard and loves exploring new resources and choose a representation that?
Map does not specify more efficient than one that have reset your urlencoded form in json, i put mirth http post! Restrictions with form data in urlencoded form data input document for example, postman parses and write each operation what postman and they meant to. Please fill in a better solution that when setting body formatted as html form fields can i set my portfolio and try that.
As form fields in urlencoded form encoding.
Which may need to deliver some text format is.
They can override a form data integrity should not have been successfully comes, most web application? For an html web application x www form urlencoded example within their automated requests using routes in your browser, instead of values, the url and have right out to access to. Various trademarks held by them as alternative separator, it will populate their functionality in.
Thunkable x only piece of empty values in your problem when the variables have a proactive chat account and patch come to send along with. Leaving some text field or trust decisions based on your urlencoded string as i use a single concurrent submit your requests can i would a body as. Provide additional request with the urlencoded string concatenation operators.
In urlencoded form?
Represents form through rest api calls or as for example within their encoded format is www.
This post json string variables let the urlencoded form type?
An error in only be valid email?
How can send form repeatedly.
Like you want to stack overflow!
How do i use next and form in urlencoded form.
How to these examples override the urlencoded form parameters in the back end up to see.
You can i make a question is delivered as for example is the urlencoded form encoding automatically. Ascii and then why do i get and a short article about users of it! Opinions are there is the form data, then testing the key values of object.
What it does hubitat auto encode the form repeatedly make anchor tag with the box that example, you are pdf documents and password. Type and examples illustrate encoding of value, and see an example. Urlconnection object to see it necessarily have a form will see register an example.
Sometimes in urlencoded form data on the body for example, and other details and see it is eric clapton playing up with python application x www form urlencoded example to a different. Can include sketching and everything else in all of the content type is missing anything to fire an export of the parts, how do i monitor chat? Im my web form data will get posted json?
If the test multiple questions and strcat for the chat history csv import a line are these examples to see, our application x www form urlencoded example somewhere on. Press again thanks for example requires the urlencoded form parameters are reasons why do? You may prevent their developer, you want to.
We should return a form parameters, copy sharable link to.
All the examples. Users of form submission. What would be used to send it is meant when you need to delete, then since you select a comment, and binary content type header. Embed this example work fine, form fields do in. Can be encoded as solid as alternative separator, i have redacted? If this example looks like to comment line should not allow to make a form, and examples illustrate handling of options. This post request does not authorized to see resources or subtract any help me create agents, provide more light on.
How to handle without modifications. Avoid sending valid? Would still like to change state or subtract any authorization tab.
|
OPCFW_CODE
|
Many people used to think of coding as an unusual hobby for nerds who tinkered with computers in their basements. However, it has evolved from a pastime to a necessary employment skill in recent years. Employers around the globe have offered to pay a premium for people who can code and program.
Table of Content
- What exactly is coding?
- What are the benefits of learning to code? Consider these 6 surprising advantages
- A) In the digital age, coding is considered an essential literacy skill
- B) Job prospects are excellent for those who code and program.
- C)Coding-related employment continues to be in high demand
- D)Coding opens up new possibilities for problem-solving
- E) Learning to code allows you to have more employment options
- F) Coding can be beneficial in careers you never imagined
- Is it still possible to learn to code?
- Coding is a pleasurable learning experience.
With this information, you may be wondering if coding is something you should pursue. However, there are still many unanswered questions when it comes to learning to code. How long does it take? What are the benefits of learning to code? Below enlisted the expertise of experts from several fields to answer your questions and provide their perspectives on the advantages of learning to code.
Before getting into these unanswered questions, first, understand the term coding.
What exactly is coding?
Coding is a programming language that allows you to communicate with a computer. Coding is the process of delivering instructions to a computer to do specified tasks. The field of coding is enormous. As a result, it is important to learn to code and operate a computer with ease.
It allows you to create software, video and digital games, apps, websites, and much more. Coding has evolved into a valuable life skill that will only increase in value over time.
Now, get insights by knowing the advantages of learning coding
What are the benefits of learning to code? Consider these 6 surprising advantages
Learning how to code has a surprising number of advantages. Here are some of the advantages of learning to code.
A) In the digital age, coding is considered an essential literacy skill
This does not imply that you must be able to comprehend and code in a complex programming language. You must have a fundamental understanding of how coding works to code as basic literacy. To communicate with others more successfully, you must understand the fundamentals and rationale behind it.
Coding is a valuable skill whether you work in the IT business or not. Many sectors and enterprises have begun to move into the digital realm. Almost all companies now require at least a basic understanding of coding languages such as HTML and CSS.
B) Job prospects are excellent for those who code and program.
One of the most important and most apparent reasons for learning to code is the income possibilities for programmers and coders. The Bureau of Labour Statistics (BLS) collects data on wages and other aspects of the workforce for a wide range of occupations.
Take a look at the 2019 median annual pay data from the BLS for the following coding and programming-related professions:
- $73,760 is the average pay for web developers.
- $83 510 for network and computer systems administrators.
- Computer programmers make $86,550 annually.
- Administrators of databases: $93,725
- Developers of computer software: $107,510
The national average salary for all occupations in 2019 was $39,810.1, so that gives you some perspective. As you can see, jobs requiring some programming, coding, or scripting abilities pay well above the national average.
C)Coding-related employment continues to be in high demand
What good is high pay if no one wants to hire for it? There appears to be plenty of opportunities for anyone looking for work in the coding industry.
For the same coding and programming-related jobs, the following BLS predictions show employment growth:
- Thirteen per cent of the workforce are web developers.
- Administrators of computer networks and systems: 5%
- -7 per cent for those who work as computer programmers.
- Companies employ nine per cent of database administrators.
- Twenty-one per cent of the workforce is computer programmers.
- A few jobs are growing at a faster rate than the national average of 5%.
Hybrid jobs are on the rise, but traditional ones remain incredibly vital as well. As a result, job advertisements for “computer programmers” have declined, but those for jobs that combine programming skills with other job titles have increased.
D)Coding opens up new possibilities for problem-solving
Noro Adrian Degus, a software engineer who also offers nursing essay writing help to students, believes that learning to code has the unintentional benefit of educating.
He says that in the past, he was more likely to solve difficulties by using his emotions. However, his coding knowledge has equipped him with the ability to think logically while solving challenges.
“Understanding logic at a deep level has increased my problem-solving proficiency tenfold,” he added.
In its most basic form, coding entails giving a machine a task to complete based on the logical guidelines you’ve established. When you break down very complex jobs, they become a collection of more minor procedures. This rigorous and logic-heavy approach to issue solving might help solve challenges that aren’t related to coding.
E) Learning to code allows you to have more employment options
Learning to code can help you expand your job opportunities and, in turn, make you a more adaptable candidate in a quickly changing digital economy.
“Learning to code has been the single most valuable skill I’ve acquired for my professional life. I would have been out of work years ago if I hadn’t learned to code. It’s been both freeing and profitable.” says Davidson, a coding developer who also assists online students with their Online Assignment Help Sydney.
Even as a pastime, learning to code can provide you with a common reference point and a greater understanding of individuals who work in more difficult programming and coding professions.
F) Coding can be beneficial in careers you never imagined
You may believe that coding and programming skills are only relevant to highly specialized positions. Learning to code is vital for some careers, but that doesn’t mean you can’t put your coding skills to use in other fields.
As a business expert, understanding computer programming is a tremendous asset, according to Mark Million, a businessman and professional writer who provides Assignment help Sydney services.
Learning the fundamentals of coding can make you a far more valuable part of a team if you work closely with programmers and developers.
“While I’m primarily a marketer, having technical expertise is one of the most useful abilities I can serve my team,” says Jake Lane, a growth manager. He also provides Assignment Help to online students. Making changes to the code base allows our developers to focus on the more critical aspects of the project while also reducing development time.
You don’t have to be a genius at programming to gain from learning to code. In most commercial situations, knowing just enough to be productive is still a valuable asset.
Is it still possible to learn to code?
You now have free access to an extensive array of learning materials and resources thanks to the internet.
Take online lessons, attend coding camps, view instructive videos, download modules, or even create apps or play coding games for youngsters to learn to code. The coding community has devised various approaches to make learning to code enjoyable and engaging for everyone.
Coding is a pleasurable learning experience.
- At a certain age, children are considerably more likely than adults to understand and grasp concepts. They appear to have fewer mental obstacles and a youthful eagerness to take on new challenges.
- Coding is a lot of fun to learn and offers a lot of room for advancement. Coding’s utility in today’s society is at an all-time high, and if you know how to code, you can dominate the world.
- Children can create apps, video games, websites, and a variety of other things. Learning to code provides youngsters with a hands-on experience that allows them to grasp coding principles that are both educational and practical.
Keep this in mind, and it’s never too early or late to begin learning to code.
To enhance your job, you need to update and upscale. One of the specific methods to adapt to an ever-changing digital landscape is to stay on top of industry trends and leverage your skills with the current skill requirements.
Concrete knowledge of code and learning will undoubtedly help you develop a successful career in the future.
|
OPCFW_CODE
|
|Index||index by Group||index by Distribution||index by Vendor||index by creation date||index by Name||Mirrors||Help||Search|
|Name: LiVES||Distribution: Unknown|
|Version: 0.8.5||Vendor: Gabriel Finch <firstname.lastname@example.org>|
|Release: 1||Build date: Mon Mar 15 19:10:18 2004|
|Group: Multimedia/Video/Editors||Build host: ruby|
|Size: 968886||Source RPM: LiVES-0.8.5-1.src.rpm|
|Packager: Gabriel Finch <email@example.com>|
|Summary: The Linux Video Editing System|
LiVES (the Linux Video Editing System) is a simple to use, yet powerful video effects and editing system. It uses commonly available tools (mplayer, ImageMagick, and GTK+, so it should work on most Linux systems.
GPL [(c) G. Finch (salsaman)]
* Mon Mar 15 2004 G. Finch <firstname.lastname@example.org> - Version 0.8.5 Changes Improved playback performance, particularly when playing in a separate window. Fixed occasional hangs with the 'enough' and 'preview' buttons when loading. Re-enabled backspace key for start/end selectors. Implemented first playback plugin (SDL). Allow multiple realtime effects, and rendering of realtime effects. Changed wording on menu toggles to make them clearer (hopefully). Added play buttons on the main menu. Show selection length and time in the GUI. Split interface.c into interface.c and gui.c Switch back to previous clip when open/restore/capture is cancelled. Put frame preview in play window. Check that file exists after saving. Allow performance recording when 'loop continuous' is on. Allow 'none' theme to be selected again (regression). Added 'delete selected audio','delete all audio' and 'insert silence' menu options. Added support for resampling variable fps--->fixed fps. Turned 'shrink' effect into 'shrink/expand' Improved performance of 'random zoom' Added 'colour filter' effect. Give a warning and clean up if backup runs out of disk space. Trim bottom blank row from captured windows. Show length of selection in main window. Show previews for images in the fileselector. Correct some memory errors in rgb2uyvy conversion. Make backup/restore more efficient Improve trim frames dialog (part 1) Make new framedraw widget (framedraw.c) Clear up some disk space issues. Add basic support for grabbing from firewire (using dvgrab) Make 'cancel' more robust. Allow realtime effects to grab keyboard 'k' Fix some GUI glitches Use a glist index in menu order for switching/closing clips. Change 'loop video to fit audio' to '(auto)loop video to fit audio' and make the default setting on. Added 'debug mode' for encoder plugins. * Sat Nov 22 2003 G. Finch <email@example.com> - Version 0.8.1 Changes Fixed an image loading bug in the back end. Made sure that the play pointer is always redrawn after play. Made selection with mouse a little more accurate. Fixed 'jumble' effect so it only jumbles frames in the selection. Allow cancel during copy to clipboard. Added a new tool: 'trim frame size'. Made a fix for gnome so that the 'processing' window has a border. Made non-fatal startup errors be non-fatal again. Fixed a crash when merging a still image with a clip. Fixed a regression where undo after a merge would not always undo frames. Fixed a bug where fps changes could be wrong when recording a selection. Implemented clip switching (ctrl-pageup, ctrl-pagedn) and freeze (ctrl-backspace) Added save_vj_set and load_vj_set Added the 'shrink', 'shift' effects. Added some nice real-time effects (accessed by ctrl-1 ctrl-2 etc,). Improved directory loading so it loads in alpha order. Added realtime effects : nervous, alien, noise, negate, posterize Use default fps when opening a single frame. Allow antialiasing on/off via a .lives pref. Many more usability enhancements. * Thu Sep 04 2003 G. Finch <firstname.lastname@example.org> - Version 0.7.5 Changes Improved playback for faster frame rates. Video and audio can now be previewed during open. Quantisation of frames has been made smoother - recording looks much better now. Added ability to switch to another clip whilst opening. Fixed a bug where backups would only save to selection end (backported to 0.7.1-2) Fixed an audio bug when opening files with mplayer as the audio player. Allow recording of performances even in clips with no audio. Freeze smogrify API (in progress). Fixed a problem with merge (frame-in-frame). Added new effects 'jumble frames' and 'tunnel'. Video length is now calculated properly after fps changes are recorded. Eliminated extremely long backup/restore times for larger files. Fixed an overflow issue in the player code. Added a basic timeline. Option to maintain aspect when opening directories of images. Added a preference to disable fast key repeat on playback. Fixed 'export audio' so that it doesn't append silence at the end. Added an option 'trim audio from beginning to play start' Made encoders be plugins. Added a 'continuous loop' option. Fixed external window so it _really_ doesn't crash. Made some updates to work with mplayer1.0pre1 * Thu Jul 31 2003 G. Finch <email@example.com> - Version 0.7.1 Changes Implemented encoding with ffmpeg - 4 new output formats, including divx ! Added three new options to the audio menu 'export selected audio', 'append audio' and 'trim audio to selection'. Cut/copy/insert/delete/paste now all work with sound as well as video. Improved visibility of the playback cursors. Added a function to resample video at a new framerate. Added a function to resample audio at new rate/channels/sample size. Audio is now auto-resampled between clips. Added a preference to auto-resample video between clips ("insert_resample"). Fixed a bug where frames merged before a selection would sometimes 'jump'. File size is now updated properly after backup/restore. Subdirectories no longer prevent image directories from loading. Fixed a freeze when the temp. directory does not end in '/'. Added preview when opening file selection. Improved the responsiveness of file preview. Audio files can now be loaded without video. Added 'sticky' mode for playing in a separate window. Improved 'colorize' effect and 'dream' effect. Improved stability and performance. LiVES should now respect 'configure --prefix' when looking for themes, etc. * Thu Jun 26 2003 G. Finch <firstname.lastname@example.org> - Version 0.6.5 Changes Mainly a bugfix and stability release. Fixed some menu bugs. Underscores in recent files are now handled correctly. Fixed the crash in audio preview. Fixed the problem with file names getting lost after saving. Fixed a bug in loading of WAV files. Fixed a bug with selection start and end after merge. Fixed some minor audio issues for Save Selection One new feature: midi synch - if this is checked, then a midi start will be sent when playback starts, and a midi stop when playback stops. By default, /dev/midi is used, this can be changed by editing the files '/usr/bin/midistart' and '/usr/bin/midistop' * Fri Jun 20 2003 G. Finch <email@example.com> - Version 0.6.0 Changes Added lossless backup/restore function. Implemented basic events and recording of performances. Reordering of frames is now possible after recording. Made preview frame invisible again on blank background (regression). Allow video to play backwards (using ctrl-down during playback). Added a 'reverse direction' key, (ctrl-space during playback). Fixed the bugs in external capture. It no longer crashes ! Fixed a bug to do with file extensions becoming corrupted. Optimised the player code, playback is now much smoother. Allow transparent to white; fixed frame-in-frame transparency. Fixed some bugs to do with merging, and undoing pre-inserted frames. Tidied up the undo/redo system - it should now work properly now even after switching clips. Allow use of 'themes' to change the look of the app (3 builtin themes are available). Added xmms random play feature. Added 'Recent Files' to the Files menu. Many improvements to the GUI; numerous other bug fixes. * Fri May 30 2003 G. Finch <firstname.lastname@example.org> - Version 0.5.5 Changes Merge now supports two types of variable transparency. Added support for opening file selections, and opening remote locations. Added previews on file open operations. It is now possible to save individual frames to disk. A possible bug with blank file save names was fixed. Updated the colorize effect to work with latest imagemagick (5.5.4) Added an option to load files without sound. Numerous small fixes/cleanups. * Tue Jan 28 2003 Victor Soroka <email@example.com> (packager) - First LiVES RPM package of original code written by G. Finch (salsaman).
/usr/bin/lives /usr/bin/lives-exe /usr/bin/midistart /usr/bin/midistop /usr/bin/smogrify /usr/lib/menu/lives /usr/share/applications/LiVES.desktop /usr/share/applnk-mdk/Multimedia/Video/LiVES.desktop /usr/share/doc/LiVES-0.8.5 /usr/share/doc/LiVES-0.8.5/CHANGELOG /usr/share/doc/LiVES-0.8.5/FEATURES /usr/share/doc/LiVES-0.8.5/GETTING.STARTED /usr/share/doc/LiVES-0.8.5/NEWS /usr/share/doc/LiVES-0.8.5/README /usr/share/icons/large/lives.png /usr/share/icons/lives.png /usr/share/icons/mini/lives.png /usr/share/lives /usr/share/lives/plugins /usr/share/lives/plugins/encoders /usr/share/lives/plugins/encoders/encodedv /usr/share/lives/plugins/encoders/ffmpeg /usr/share/lives/plugins/encoders/mencoder /usr/share/lives/plugins/encoders/transcode /usr/share/lives/plugins/playback /usr/share/lives/plugins/playback/video /usr/share/lives/plugins/playback/video/SDL /usr/share/lives/themes /usr/share/lives/themes/cutting_room /usr/share/lives/themes/cutting_room/frame.jpg /usr/share/lives/themes/cutting_room/main.jpg /usr/share/lives/themes/default /usr/share/lives/themes/default/frame.jpg /usr/share/lives/themes/default/main.jpg /usr/share/lives/themes/greenish /usr/share/lives/themes/greenish/frame.jpg /usr/share/lives/themes/greenish/main.jpg /usr/share/lives/themes/pinks /usr/share/lives/themes/pinks/frame.jpg /usr/share/lives/themes/pinks/main.jpg /usr/share/lives/themes/sunburst /usr/share/lives/themes/sunburst/frame.jpg /usr/share/lives/themes/sunburst/main.jpg
Generated by rpm2html 1.8.1
Fabrice Bellet, Sun Dec 1 23:09:37 2013
|
OPCFW_CODE
|
Removing mouse listener in netbeans
In netbeans a mouse listener is automatically created for a component.
private void initComponents() {
jLabel9 = new javax.swing.JLabel();
jLabel9.setBackground(new java.awt.Color(150, 192, 206));
jLabel9.setOpaque(true);
jLabel9.setPreferredSize(new java.awt.Dimension(150, 150));
jLabel9.addMouseListener(new java.awt.event.MouseAdapter() {
public void mouseClicked(java.awt.event.MouseEvent evt) {
jLabel9MouseClicked(evt);
}
public void mouseEntered(java.awt.event.MouseEvent evt) {
jLabel9MouseEntered(evt);
}
public void mouseExited(java.awt.event.MouseEvent evt) {
jLabel9MouseExited(evt);
}
});
The problem is how do I remove this listener for the mouse clicked event through a function? I am trying to do something like this:
void rem(){
jLabel9.removeMouseListener(new java.awt.event.MouseAdapter() {
public void mouseClicked(java.awt.event.MouseEvent evt) {
jLabel9MouseClicked(evt);
}
});
}
Then I am calling this function as required by my program flow. But this doesn't work. One thing I have figured out(though I am not sure of this) is that this is not working because I am not removing the listener through the original adapter. Instead I am creating a new one and trying to remove the original listener. I have searched almost every valid link on google but none helps.
Look at this topic: http://stackoverflow.com/questions/2627946/how-to-remove-mouselistener-actionlistener-on-a-jtextfield
@ZsoltÉbel I have already seen that. If you notice, here, a new mouse adapter is being made, used and then removed. But my situation is different. I want to remove an instance of already created mouse adapter.
If you want a brute force method, JComponent has a getMouseListeners method which will provide you with access to all the listeners attached to the component, but I would use it with care
Where is your MouseListener reference? You are using an anonymous MouseListener. You figured it out on your own that you are removing a new MouseListener in your removeMouseListener() method. Why? Because you cannot access your original listener any more. Create a reference for it and your problem is solved.
MouseListener mListener = new MouseListener(new java.awt.event.MouseAdapter() {
public void mouseClicked(java.awt.event.MouseEvent evt) {
jLabel9MouseClicked(evt);
}
public void mouseEntered(java.awt.event.MouseEvent evt) {
jLabel9MouseEntered(evt);
}
public void mouseExited(java.awt.event.MouseEvent evt) {
jLabel9MouseExited(evt);
}
});
jLabel9.addMouseListener(mListener);
jLabel9.removeMouseListener(mListener);
Alternative solution, but first one is so much easier:
MouseListener[] mListener = jLabel9.getMouseListeners();
for (MouseListener ml : mListener) {
jLabel9.removeMouseListener(ml);
}
Thanx a lot...! Though first one is similar what I found on other links. But it doesn't work. Probably because we don't have reference to the original adapter.
But your alternate solution works just perfectly.
can you also suggest a way on how to remove a specific listener out of many others... as in I want to remove only mouseClicked event?
Listeners are interfaces, which means that they contains abstract methods. If you implement an interface you have to implement its methods. Or you have to declare your class abstract and leave it to a subclass to implement the missing methods. So you cannot remove only the mouseClicked method because your class wouldnt meet the interface contract.
A useful article to read: Listeners vs Adapters
Almost there like Zsolt Ébel said.
class Test {
JLabel jLabel9 = new JLabel();
MouseAdapter adapter = new MouseAdapter() {
public void mouseClicked(java.awt.event.MouseEvent evt) {
jLabel9MouseClicked(evt);
}
public void mouseEntered(java.awt.event.MouseEvent evt) {
jLabel9MouseEntered(evt);
}
public void mouseExited(java.awt.event.MouseEvent evt) {
jLabel9MouseExited(evt);
}
};
private void initComponents() {
jLabel9 = new javax.swing.JLabel();
jLabel9.setBackground(new java.awt.Color(150, 192, 206));
jLabel9.setOpaque(true);
jLabel9.setPreferredSize(new java.awt.Dimension(150, 150));
jLabel9.addMouseListener(adapter);
}
void rem() {
jLabel9.removeMouseListener(adapter);
}
|
STACK_EXCHANGE
|
Multiscale Galaxy Evolution
I work on an array of topics related to the cosmological evolution of galaxies at high-redshift, with a particular eye toward their star formation and ISM properties. Much of the work that I’ve been involved in in recent years relates to understanding the origin and evolution of active galaxies, and dusty starbursts being discovered in the early Universe. I’ve also done a bit of work on trying to understand stellar and AGN feedback, galactic superwinds, and high-redshift quasar formation. This work has involved both constrained idealized galaxy evolution simulations, as well as cosmological hydrodynamic zoom models of galaxy formation.
The Physics of the Interstellar Medium and Star Formation
I’m involved in a wide range of projects related to star formation, giant molecular clouds and the lifecycle of the interstellar medium. These involve investigations into the structure of giant clouds in quiescent and starburst environments, origin of observed scaling relationships, and the role of feedback from massive stars in the ISM. I do this utilizing a wide range of simulation methods – these range from extremely high-resolution hydrodynamic ~parsec-scale simulations of Milky Way like galaxies and starbursts, to detailed postprocessing calculations to determine the thermodynamics of star-forming gas, to 3D dust and molecular line radiative transfer calculations to determine the observable properties of the modeled systems. Most recently, my attention has been focused on the origin of the stellar IMF, the CO-H2 conversion factor in galaxies, the origin of the thermal structure of the ISM, and the origin of observed star formation scaling relations.
Simulation Code Development
I develop large scale radiative transfer packages for hydrodynamic simulations. In particular, I have developed:
- Turtlebeach – the first 3D non local thermodynamic equilibrium molecular line radiative transfer code for galaxy scales. This C code is a Monte Carlo code that considers full statistical equilibrium, as well as a handling of subresolution ISM structure in the line transfer. These codes are described in papers (in the publications list) from 2006-2009. The guts of Turtlebeach are being taken apart and put back together in Powderday for public use.
- Powderday – a publicly available dust radiative transfer package for galaxy simulations. In collaboration with Matt Turk, Tom Robitaille and Bobby Thompson, we’re developing a package that merges galaxy simulations under a variety of frameworks with FSPS stellar population synthesis models and Hyperion dust transfer that, as an end product, will provide a mechanism for extracting a full SED (and images) from nearly any galaxy evolution simulation made with a publicly available code.See codes page for more details.
DAWN: The Cosmic Dawn Centre
I’m an associate, and member of the founding team of the recently formed Cosmic DAWN Centre. DAWN is currently hosted by the Neils Bohr Institute in Copenhagen. Our merry band of observers and theorists are dedicated to using telescopes across the electromagnetic spectrum in combination with a wide range of theoretical methods to understand the formation of the Universe's first galaxies, stars and black holes. Our centre, led by Dr. Sune Toft, plays host to visitors, a number of international conferences every year, and has a variety of postdoctoral and graduate positions available.
|
OPCFW_CODE
|
My On-Call Shifts allows users to view their current and upcoming on-call responsibilities in a single, easy-to-use view, where they can also create overrides for any of their shifts. My On-Call Shifts is also available as a widget on the right side of the Incidents page, which displays a user’s current shift and when they are on call next.
Schedules and Escalation Policies
My On-Call Shifts displays schedule shifts if the schedule is part of an active escalation policy. If you have shifts on a schedule that do not appear in My On-Call Shifts, please check that the schedule on an escalation policy, and that a service uses the escalation policy.
To access the My On-Call Shifts page:
- Navigate to People My On-Call Shifts, or User Icon My Profile On-Call Shifts tab.
The following information is displayed here:
- Currently On-Call: In the upper left you will see shifts that you are currently on call for.
- My On-Call Shifts: Below on the left, you will see a list of all your on-call shifts.
- Calendar: Center screen you will see a calendar of your upcoming on-call shifts, up to 90 days in the future.
- Use the dropdown in the top right to adjust the time range by month, week, day or list view.
90 Day Calendar Limit
The calendar will only display shifts up to 90 days in the future. You may view calendar dates beyond 90 days, but shifts will not be visible.
Shifts types are denoted by the following:
- Recurring on-call shifts have a solid color.
- Shifts where you are always on-call have a diagonal line pattern.
You can adjust the time zone using the dropdown on the left side of the screen.
To add or remove on-call shifts from the view, click the eye icon to the right of a shift in your My On-Call Shifts list. Your selection is saved when you navigate away from the page.
Go to People Users and select another user to visit their profile page.
Select the On-Call Shifts tab.
You can create overrides while viewing another user’s On-Call Shifts page.
There are two ways to create overrides on the My On-Call Shifts page:
Delete Overrides Made Using the Multi-Overrides Tool
- If you wish to delete any overrides that were made with the Multi-Overrides tool, you will need to individually delete them. This is typically done on the associated schedule’s detail page. This Knowledge Base article covers deleting overrides.
- Multi-Overrides can only be made up to 3 months out from the current date.
- Navigate to People My On-Call Shifts.
- In the calendar, hover over a date, then click and drag to select the range of dates you would like to override.
- In the modal, select a user from the Who should take this shift? dropdown.
- Review the shifts that you are overriding — you may optionally edit the start and end times, as well as uncheck shifts from the list at the bottom of the modal.
- Click Create Override.
- Navigate to People My On-Call Shifts Create Override.
- In the modal, select a user to override In the Who should take this shift? dropdown, select a user to override the shift.
- Update the start and end times to cover the appropriate time range.
- Optional: If you do not wish to override all of a user’s shifts, you can uncheck shifts from the list at the bottom of the modal.
- Click Create Override.
To access the My On-Call Shifts widget, navigate to the right side of the Incidents page. The widget displays your current on-call shifts with dates and times, as well as escalation policies you are on-call for and upcoming shifts.
Updated 6 months ago
|
OPCFW_CODE
|
require 'mgraph/edge'
describe MGraph::Edge do
it "contains an undirected pair of objects" do
v1, v2 = double, double
edge = MGraph::Edge.new v1, v2
expect(edge.vertices).to eq [v1, v2].to_set
end
it "does not allow modification of its vertices" do
v1, v2 = double, double
edge = MGraph::Edge.new v1, v2
expect { edge.vertices << double }.to raise_error(/frozen/i)
end
describe "equality" do
it "is `equal` to itself" do
edge = MGraph::Edge.new double, double
expect(edge == edge).to eq true
expect(edge.eql?(edge)).to eq true
expect(edge.equal?(edge)).to eq true
end
it "is equal to another Edge with the same vertices" do
v1, v2 = double, double
a = MGraph::Edge.new v1, v2
b = MGraph::Edge.new v2, v1
expect(a == b).to eq true
expect(a.eql?(b)).to eq true
expect(a.equal?(b)).to eq false
end
it "is not equal to another Edge with different vertices" do
v1, v2, v3 = double, double, double
a = MGraph::Edge.new v1, v2
b = MGraph::Edge.new v2, v3
expect(a == b).to eq false
expect(a.eql?(b)).to eq false
expect(a.equal?(b)).to eq false
end
end
describe "#hash" do
it "is the same for Edges with the same vertices" do
v1, v2 = double, double
a = MGraph::Edge.new v1, v2
b = MGraph::Edge.new v2, v1
expect(a.hash).to eq b.hash
end
it "is different for Edges with different vertices" do
v1, v2, v3 = double, double, double
a = MGraph::Edge.new v1, v2
b = MGraph::Edge.new v2, v3
expect(a.hash).to_not eq b.hash
end
end
end
|
STACK_EDU
|
Can I respond to the mana payment for Oppressive Rays?
Oppressive rays has the text:
Enchanted creature can't attack or block unless its controller pays 3.
Say I have oppressive rays attached to an opponent's creature, and I have Ardenvale Tactician in my hand with sufficient mana.
When does my opponent pay the 3 mana to permit his Oppressive Rayed creature to attack?
Is it possible to cast Dizzying Swoop and tap the creature before it attacks? i.e. to prohibit his creature from attacking me?
No, you have no opportunity to tap your opponent's creature after they have decided to pay the {3} to attack, but before they are already tapped and attacking.
The choice to pay mana is done as part of the declare attacker steps; they actually tap their own creature as part of attacking with it right before they pay the {3}, and you don't get priority to do anything until after they have finished declaring all attackers.
From the rules:
Declare Attackers Step
508.1. First, the active player declares attackers. This turn-based action doesn’t use the stack. To declare attackers, the active player follows the steps below, in order.
508.1a The active player chooses which creatures that they control, if any, will attack.
508.1f The active player taps the chosen creatures. Tapping a creature when it’s declared as an attacker isn’t a cost; attacking simply causes creatures to become tapped.
508.1h If any of the chosen creatures require paying costs to attack, or if any optional costs to attack were chosen, the active player determines the total cost to attack. Costs may include paying mana, tapping permanents, sacrificing permanents, discarding cards, and so on.
508.1i If any of the costs require mana, the active player then has a chance to activate mana abilities (see rule 605, “Mana Abilities”).
508.1j Once the player has enough mana in their mana pool, they pay all costs in any order. Partial payments are not allowed.
508.2. Second, the active player gets priority.
You of course could tap their creature during the start of combat step, but you won't know whether or not they plan to pay the {3}, and if you choose to do so then they won't waste their {3} to allow it to attack. But once they declare it as an attacker, and you get to cast Dizzying Swoop, it will already be tapped and Dizzying Swoop would have no affect on it (though it would still be a legal target; it would be generally pointless).
|
STACK_EXCHANGE
|
The road to the cloud is actually simple, an account in the Azure portal created, the credit card data deposited and already all available Azure resources can be rolled out. This may be a possible (though not recommended) way to go for test environments. For productive workloads, whether cloud-only or hybrid-scenario, rules are necessary and useful. To structure the environment, to avoid cost explosions and to protect the environment.
Such guidelines and rules can be created and defined with a governance concept. Simple questions, such as a central naming scheme for Azure Services, the design of the networks, or the maximum allowed VM sizes, can be resolved. A governance concept is intended for the entire tenant and therefore valid for subscriptions. The subscriptions are suitable for recording different cost centers or defining project boundaries.
Previously, it was not easy to specify central settings for new subscriptions. This has changed with the introduction of Azure Blueprints. Azure blueprints can be used to specify the central settings that will be applied when a new subscription is rolled out. To use Azure blueprints, management groups are necessary. Management Groups give the opportunity to structure the Azure Tenant from an organizational point of view.
This two-part article will first explain the necessary management groups as prerequisites of Azure Blueprints and then introduce the possibilities of Azure Blueprints and their rollout.
An Azure Blueprint can currently contain four different artifacts:
- Policy Assignment (Azure Policy)
- ARM templates
- Role Assignment (RBAC)
- Resource Groups
Azure Blueprints are managed and replicated by Microsoft through Cosmos DB. To provide Azure blueprints in your own tenant, management groups are necessary. The blueprints are stored and saved within the management groups.
Management Groups are tenant-level and allow for a hierarchy comparable to a company hierarchy. In this way, different departments can be created below a root management group, to which particular subscriptions can be assigned. In this way, for example, the organizational structure of the company is mapped and the subscriptions are assigned to the respective management group as a cost center. Other models can also be mapped, eg management groups by locations or for development phases (Dev, Prestage, Prod) etc.
Within the management groups, you can define Azure policies that apply to all resources assigned to the management group. Also, an Azure blueprint is stored here. As soon as a new subscription is created and assigned to a corresponding management group that contains a blueprint, the predefined artifacts of the blueprint are applied to the newly created subscription.
Create Management Group
To create a management group, we search for “Management Group” in the search box and go to the corresponding administration page. Azure Blueprints – Mgmt Group
At first we do not see any management groups on the administration page.
It is important to understand that when creating the first group, another group, the so-called Tenant Root Group, is created. The newly created group and all existing subscriptions are subordinated to the root management group. The root management group is not easily accessible, and even the tenant administrator needs to be given higher privileges in the AzureAD for changes to be made.
As soon as the first management group is created in the portal, the existing subscriptions are also displayed and assigned as subobjects to the already mentioned tenant root group. After completing the process, the management group with the associated subscriptions appears on the administration page.
After further, self-defined management groups have been created, the existing subscriptions can be assigned to the corresponding groups. For this purpose, the proposed management group is selected and then assigned a subscription via “Add Subscription”. In this way, the existing cost centers of the company can be assigned to the hierarchical structure of the management groups, for example.
This concludes the first part of the article. Next, I’ll talk about the possibilities of Azure blueprints, how to create them, and what happens during the rollout.
About the Author:
Gregor is working for sepago GmbH as a Cloud Architect for Azure. Before joining sepago, he was working as Cloud- and Infrastructure architect with main focus on Microsoft technologies.
In October 2018 he was honored with his first MVP award for Azure.
Gregor is mostly find as Speaker on many community conferences, blogs regularly at www.reimling.eu and he is organizer of the Azure Bonn Meetup, an local Azure user group near Cologne.
Reimling, G. (2019). Azure Management Groups and Blueprints – Overview and Setup – Part 1. Available at:
http://www.reimling.eu/2019/04/azure-management-groups-und-blueprints-ueberblick-und-einrichtung-teil-2/ [Accessed: 14th May 2019].
|
OPCFW_CODE
|
We are working on two projects related to cyberbullying prevention.
- Focus Group Studies to Understand Cyberbullying in Schools.
This exploratory work studies the effects of emerging app features on the cyberbullying practices in high school settings. These include the increasing prevalence of image/video content, perceived ephemerality, anonymity, and hyperlocal communication. Based on qualitative analysis of focus groups and follow-up individual interviews with high school students, these features were found to influence the practice of cyberbullying, as well as creating negative socio-psychological effects. For example, visual data was found to be used in cyberbullying settings as evidence of contentious events, a repeated reminder, and caused a graphic impact on recipients. Similarly, perceived ephemerality of content was found to be associated with “broken expectations” with respect to the apps and severe bullying outcomes for those affected. Results shed light on an important technology-mediated social phenomenon of cyberbullying, improve understanding of app use (and abuse) by the teenage user population, and pave the way for future research on countering app-centric cyberbullying.
- Cyberbullying Detection Using Text and Social Network Analysis
This project aims to define new approaches for automatic detection of cyberbullying by integrating the relevant research in social sciences and computer science. Cyberbullying is a critical social problem that occurs over a technical substrate. According to a recent National Crime Prevention Council report, more than 40% of teenagers in the US have reported being cyberbullied. This is especially worrying as the multiple studies have reported that the victims of cyberbullying often deal with psychiatric and psychosomatic disorders. Specifically, this research will advance the state of the art in cyberbullying detection beyond textual analysis by also giving due attention to the social relationships in which these bullying messages are exchanged. A higher accuracy at detection would allow for better mitigation of the cyberbullying phenomenon and may help improve the lives of thousands of victims who are cyberbullied each year. The results of this research will also open doors to employing social intervention mechanisms to help prevent cyberbullying incidents in future. The findings from this research will also validate and refine existing theories on cyberbullying and potentially advance the field by creating a wave of data-driven analysis of the phenomenon. The generated data set will be made available to the larger research community, thus enabling new findings that can help counter this social problem.
Qianjia Huang, Vivek Kumar Singh, and Pradeep Kumar Atrey. Cyber bullying detection using social and textual analysis. In Proc. ACM Int. Workshop on Socially-Aware Multimediapages 3–6. ACM, 2014.
Qianjia Huang, Vivek Kumar Singh, and Pradeep Kumar Atrey. Cyberbullying detection using social network analysis. Submitted to: IEEE Transactions on Computational SocialSystems, 2016
Vivek Kumar Singh, Qianjia Huang, and Pradeep Kumar Atrey. Cyberbullying detection using probabilistic socio-textual information fusion. In Proceedings of 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), ACM, 2016
Vivek Singh, Marie Radford, Huang Qianjia, and Susan Furrer. “They basically like destroyed the school one day”: On newer app affordances and cyberbullying in schools.(To Appear:) Proceedings of the international conference on Computer-Supported Collaborative Work and Social Computing (CSCW). ACM, 2017.
Funding and Support
We gratefully acknowledge the support from the National Science Foundation for this project.
We are also grateful to our collaborators at the Rutgers Tyler Clementi Center and the Graduate School of Applied and Professional Psychology for their continuing support of this project.
We are happy to share the cyberbullying labeled dataset with other interested researchers. We would ask you to sign an agreement respecting the privacy of the users in the dataset. Please email Prof. Vivek Singh (firstname.lastname@example.org) to request the dataset.
|
OPCFW_CODE
|
After a week of confused coverage around which mobile app developers access user address books and how they do it, we are finally getting a product-level resolution. Apple says today (in time to beat back some inquiring congressmen) that it will start requiring developers to ask for explicit user permission in order to access these contacts.
The new interface, slated for its next iOS operating system release, will provide a permissions notification to users after they install an app, similar to how it currently requires users to approve location sharing or push notifications. This change will add some arguably unnecessary friction to users of apps that pull address books — and a lot of developers will be affected, as 11% of free iOS apps were accessing address books as of the start of last year, according to one research report.
Beyond technical fixes that developers should be implementing anyway, the solution means that users will now at least know what’s being shared. But the problem I have with Apple’s solution is that it looks like an inelegant knee-jerk response, not a carefully planned advancement in how it helps developers build better products for users.
The reason Path as well as Twitter, Hipster and others were uploading address books with user names, emails and phone numbers was because they were trying to help users find existing friends who were also using their services. It wasn’t about reselling this data to the Egyptian government, even if that was a distant hypothetical possibility.
Recent investigations by VentureBeat, The Next Web and The Verge revealed that in fact, dozens of popular apps were accessing address books. But here’s some less anecdotal data about the scope of the issue, from Lookout. The mobile developer provides an app for iOS, Android and other mobile platforms that finds malware and other security and privacy problems within apps that users are downloading by scanning apps across the entire ecosystem. So unlike most data sources it can see the big picture here.
At the beginning of 2011, it found that out of the hundreds of thousands of free apps on iOS, 11% were able to read contacts. The company doesn’t have updated numbers available yet for iOS this year, and it’s only providing percentages, but clearly address book accessing is way more prevalent than just the few dozen apps that people have looked at so far.
The same goes for Android. Lookout’s data from last year shows that 7.4% of free apps on the platform were accessing user contacts; this year, the company tells me it’s tracking 7.1% that do.
Android is in a bit of a different position here, though, because it requires explicit user permission for contact sharing with apps before they install it. That’s more transparent, but also adds some friction.
Which brings me back to what developers are trying to accomplish. Typically, they want to help friends find each other within a seamless user experience. In Path’s case, it lets you sign in with Facebook, your address book and other sites to cross-reference them for any Path user who you’re already friends with elsewhere. This makes the service more valuable for users, which is a good thing.
Apple should be working to enable this while protecting user data in a more nuanced way, rather than just throwing in another permissions dialog like what it says it’s going to do. Facebook provides a good example of how it could do that. The social network has had to figure out how to balance friend list sharing with maintaining a simple social interface as its platform has grown over the years.
Today, Facebook shows you which friends are using an app before you install it. Imagine if Apple did this for Path and every other app in the App Store, instead of Path having to grab your address book afterwards to do the same thing.
If you click to install an app on Facebook, its permissions dialog tells you explicitly that you’re giving the app access to your friends lists (not friends’ emails and phone numbers) by default. If you don’t want to share your friends lists with the app, you don’t install it. If an app wants to do other things, like automatically share back to Facebook on behalf of a user, it needs to ask for additional approval within another permissions interface. If a developers wants to ask any user to contact friends within the app — for inviting them to play a game or whatever — it requires them to do so separately later on within the app.
On top of building in a feature that shows you mobile apps that you have in common with other iOS users, why doesn’t Apple offer a single permissions interface that gracefully explains the various permissions that apps might want, not just friend list access, but location, push notifications, etc?
I think the answer has do with Apple’s poorly-received Ping social network in iTunes. The company, for all of its amazing successes with software and hardware, just hasn’t made social features a key part of how it thinks about the world. The address book fiasco shows that when it ignores key social features, it gets itself and its developers and users into privacy issues. For the sake of its users and developer community, now is the time for Apple to focus on getting social features right.
|
OPCFW_CODE
|
Get CPU usage (%) in a C program Raspberry Pi Forums
Unlike bandwidth, monitoring CPU utilization is much more straightforward. From a single percentage of CPU utilization in GNOME System Monitor, to the more in-depth statistics reported by sar, it is possible to accurately determine how much CPU power is being consumed and by what.... How to get CPU usage percentage command in bash script. Ask Question how do I get the CURRENT percentage of CPU in a simple output of simply a percentage? i need to fit in a file of 10 lines each line containing stuff like name,time,"cpu percentage, .. and so on. maybe i should use grep or something? this is the script im writing for now
A C++ program to get CPU usage from command line in Linux
%CPU-- CPU Usage : The percentage of your CPU that is being used by the process. By default, top displays this as a percentage of a single CPU. On multi-core systems, you can have percentages that are greater than 100%.... Read below to find about the commands to check CPU and memory utilization in Linux: top. top commands displays Linux processes in real-time. top provides an ongoing look at Ö
How To Check CPU Utilization In Linux via Commands
Unix command to find CPU Utilization Note: Linux specific CPU utilization information is here. Following information applies to UNIX only. UNIX sar command examples. General syntax is as follows: sar t [n] In the first instance, sar samples cumulative activity counters in the operating system at n intervals of t seconds, where t should be 5 or greater. If t is specified with more than one how to get your truck unstuck out of mud 31/05/2012†∑ For me high utilization is a situation wherein CPU is stuck at a particular percentage. Doesn't matter if it is 60% because then I know that there is some processes which is continuoulsy using 60% of my CPU.
View Virtual Machine CPU and Memory Usage History
22/08/2010†∑ I use "top" to check the existing resource , but I found that the CPU usage over 100% ( eg. 100.1 , 100.8 ) , can advise what is wrong , is CPU usage over 100% Download your favorite Linux distribution at LQ ISO . how to find percentage of money %user: Percentage of CPU utilization that occurred while executing at the user level which includes application processes, user running jobs, etc. %nice: Percentage of CPU utilization that occurred while executing at the user level with nice priority.
How long can it take?
What does >100% CPU utilization mean? Quora
- CPU usage over 100% LinuxQuestions.org
- Linux CPU utilization Rosetta Code
- linux How are CPU time and CPU usage the same? - Server
- Understanding the Total CPU Utilization Percentage monitor
How To Get Cpu Utilization In Linux To 100 Percentage
Then create a file called cpu_usage under /etc/cron.d/ with the following in: */1 * * * * root /opt/your_script.sh This should execute the script once per minute, and output the CPU usage in a percentage format on a new line within the specified file.
- By Renzo 2017-03-20T11:58:00+00:00 March 20th, 2017 Linux Comments Off on Linux: Get CPU Usage in percent Share This Story, Choose Your Platform! Facebook Twitter LinkedIn Reddit Google+ Tumblr Pinterest Vk Email
- Defining CPU utilization For our purposes, I define CPU utilization, U, as the amount of time not in the idle task, as shown in Equation 1. The idle task is the task with the absolute lowest priority in a multitasking system.
- 16/08/2008†∑ Hello.. Thanks for your replies. But I need to get the value of %CPU usage of process specified by PID programmatically not from command line. My above method is in reference to linux mailing lists and has drawbacks of sleeping for 5 seconds.
- Luckily, there are ways using which you can fix CPU usage 100 percent or high CPU usage problem. There are some culprit processes and threads which you have to get rid of. There are some other tricks to fix increasing CPU usage in Windows. Letís take a look at them one by one.
|
OPCFW_CODE
|
Only use UTC datetimes
This PR forces datetimes passed to bytewax as config parameters to be in UTC, using the new chrono integration added in PyO3 (not merged yet).
We'll need to wait for the PR on PyO3 to be merged so that we can open a PR on pyo3-log to make everything work.
The PyO3 author said he wants to add the chrono integration in PyO3 0.17.2 which is coming soon, so this should not take too long.
Previous description (ignore this)
This PR makes all datetimes in Bytewax timezone aware.
It uses a non-released version of PyO3 that adds the chrono integration, and removes the dependency from pyo3-chrono, which was doing the python->rust conversion deleting timezone info, without properly converting to UTC first.
The approach taken in this PR is to add a chrono_tz integration on top of the one implemented in PyO3, so that any datetime in the codebase has timezone info attached, and can be correctly compared to any other datetime.
The only edge case we have to handle, is when we receive naive datetimes from the user in python.
Right now the approach is to convert the datetime to utc using the python interpreter.
This means that any naive datetime coming from python will be assumed to be local (relative to where the dataflow is ran) and converted to UTC before being passed down to rust. This could be wrong in case the datetime comes from data generated somewhere with a different timezone, but we don't have enough informations to make a proper conversion anyway. Another possible approach would be to assume the datetime is UTC rather than local, and we would attach the UTC timezone info without converting first, but this would be equally wrong most of the times. The application should warn anytime we receive a naive datetime, so that users are aware of the fact and can fix the data if needed.
For (de)serialization of datetimes, I had to implement serde's traits manually, since there is no standard way to serialize datetimes with IANA timezone info. To do this I just serialize the local naivedatetime and the timezone name separately, so that we can rebuild the same ChronoDateTime when deserializing.
The PR is still in Draft because I'm waiting for PyO3 to release the new version, and depending on the versioning chosen (0.17.2 or 0.18) we might also need to wait for an update on pyo3-log which depends on PyO3 0.17.
To make things easier, for the next release I propose to implement Consensus Time:
https://xkcd.com/2594/
Original description (ignore this too)
The test at the end of the file should explain the situation I'm trying to fix here: pyo3-chrono ignores timezone info even if they are present.
I think we should instead take that into account.
With this conversion traits, we always convert datetimes to UTC internally.
If the user sends us timezone aware datetimes, we do the conversion.
If the user sends us naive datetimes, we assume they are already UTC (debatable).
This is not the only possible approach, and might not be the best one, so I'm opening this PR to discuss what you think about this.
My point is that I don't think we should ignore tzinfo if present, but the flow should still work if we receive naive datetimes (and maybe warn the user about the assumption).
This PR is rough, and if we want to replace pyo3-chrono I'll need to add the conversion traits for Time and Date too, and some more tests.
edit: I just found this PR: https://github.com/chronotope/chrono/pull/542 which does the conversion properly. Reading the discussion, it looks like they plan to integrate this into PyO3 (rather than in chrono) under a feature flag, so maybe we'll have this for free in the future
So, I'm having a second look at this PR, and after some thinking I agree with @davidselassie here that we should force users to send us UTC datetimes, at least for the start_at parameter and the return type of the dt_getter function in the EventTimeClock.
Supporting any possible kind of datetime here makes everything more complicated than it needs to be.
Only accepting UTC datetimes will make this PR much simpler, I can remove the whole chrono_tz integration and just use the one on PyO3 once it's merged, and we can just use DateTime<Utc> instead of having a wrapper type in our structures.
It's a minor annoyance for users since a crash would happen either at the initialization of the flow (in case of a start_at parameter) or at the first event received (in case of the EventTimeClock), so it shouldn't lead to unexpected crashes when the flow is deployed to production (which was my worry when I started thinking about this).
I'm going to update this PR and make it ready for review.
So, to have everything working we just need PyO3 0.17.2. The chrono integration will be merged then, and this should happen soon.
For the pyo3-log crate, we won't need to do anything once PyO3 0.17.2 is released, since it only depends on ~0.17.
But, since this PR is blocking another one, I temporarily forked pyo3-log to make it compatible with the yet unreleased version of PyO3 (simply pinning its PyO3 dependency to the same commit we use here), so that we can merge this now.
I added 2 TODO notes in the Cargo.toml with what to do as soon as PyO3 0.17.2 is released, so that we can just swap the uncommented lines with the commented ones and we should be good to go.
|
GITHUB_ARCHIVE
|
A remote power IP Switcher is useful when IP or network-based devices are deployed in remote or difficult-to-access areas. Sometimes, when an IP device loses network connectivity, the power device and/or network switch needs to be power-cycled, or disconnected from power for 10-30 seconds and reconnected. This can either be done manually, or you can use a remote power switcher, which can automatically sense when connectivity to the network has been lost. It does this by regularly pinging one or more IP addresses and/or websites. If it cannot ping for a pre-determined period of time (e.g. 30 seconds), then it can automatically power cycle all devices.
Remote Power IP Switchers are commonly used for network-based IP cameras or wireless network radios mounted on poles that are used primarily for traffic surveillance or monitoring.
The advantages of having this device include but are not limited to:
- You don’t have to be at the device’s location to manually unplug the power and re-plug.
- You don’t have to climb a ladder to access the device if it’s mounted high on a pole.
- You have less downtime.
- You don’t have to do anything, so your own time is saved.
This post is a quick-start guide for this remote power IP switcher found on http://CustomVideoSecurity.com. It is meant to be used within the USA, and there are other options available for other countries. Please contact us at sales@CustomVideoSecurity.com if you require one for another country.
Feel free to follow the instructions below, or contact us to have a technician pre-configure your device for $100. We will need your IP information (device, gateway, subnet mask, DNS), and you will eventually need to open a port in your router/firewall. We can remotely do your port forwarding for you for a fee of $49 for all customers or $99 for non-customers.
- Use Advanced IP Scanner or Wireshark to locate the IP address of the device, which can be identified by the last 4 of the MAC address.
By default, the IP Switch should obtain an IP address automatically from your router using DHCP. If for some reason it does not, press and hold the UIS button for 10seconds and it will revert to a fixed LAN IP of 192.168.0.100. Then change your router’s gateway to access it, e.g. 192.168.0.1 for gateway and 192.168.0.50 for IP.
- Download the latest firmware from here: http://3gstore.com/ipswitchupdates. Login email@example.com, PW: B1ft.
- Download the utility program from here: https://www.google.com/url?q=http://3gstore.com/ipswitchupdates/downloads/utility/Utility.exe&sa=D&ust=1536692783575000.
- Update the device’s firmware with the utility
- Login: admin PW: admin.
- Access the device’s web interface and change the password to whatever the customer specified and/or create new restricted users under “Account”.
- Change the time settings to match customer’s time zone under Time and change DST to auto.
- Schedule date and time of the reboot by outlet port or by whole unit.
- Google hangouts, Skype commands and DDNS are all optional.
Please note, step 8 and 9 should be done LAST if configuring to an IP scheme that’s different from the network on which the device is currently assigned.
- Network and change the network scheme IP, Subnet, Gateway. Erase Primary and Secondary DNS addresses detected from the bench-test network and input Google’s DNS 184.108.40.206 and 220.127.116.11 or leave it blank. Change Port number from 80 to something else less commonly used e.g. 88 or 8284 and open the same port on the router.
Now the device is accessible by public IP address and the port number you specified e.g. http://[public IP]:88 if using port 88.
For additional information, or to request configuration of your remote power switcher, please contact us at info@CustomVideoSecurity.com, or call us at 1-877-DEALS-79 or at 1-310-370-9500 x1.
Raymond Shadman 09/13/2018
Posted In: Networks, Video Surveillance
ip camera power switcher, power switch, power switch for ip camera, power switch for wireless radio, remote power ip switch, remote power switch, remote power switcher
Leave a Comment
|
OPCFW_CODE
|
using System;
using System.Collections.Generic;
using System.Linq;
using Parz.Factories;
using Parz.Models;
using Parz.NodeConverters;
using Parz.Nodes;
namespace Parz
{
/// <summary>
/// A simple engine that uses all of the different components
/// of the Parz library.
/// </summary>
public class ParzEngine : IEngine
{
private readonly Func<IEnumerable<string>, IEnumerable<ILeveledToken>> _toLevelTokens;
private readonly IEnumerable<string> _separators;
private readonly ITokenFactory _tokenFactory;
private readonly INodeConverter _nodeConverter;
public ParzEngine(IEnumerable<string> separators, ITokenFactory tokenFactory,
INodeConverter nodeConverter,
Func<IEnumerable<string>, IEnumerable<ILeveledToken>> toLevelTokens = null)
{
_separators = separators;
_tokenFactory = tokenFactory;
_nodeConverter = nodeConverter;
_toLevelTokens = toLevelTokens ?? ((t) => t.ToLevelTokens());
}
public INode Parse(string expression)
{
if (string.IsNullOrWhiteSpace(expression))
{
throw new ArgumentNullException(nameof(expression));
}
var splitExpression = expression
.SplitTokens(_separators);
var leveledTokens = _toLevelTokens(splitExpression);
var toknified = leveledTokens
.Tokenify(_tokenFactory)
.ToList();
return toknified.Treeify(_nodeConverter);
}
}
}
|
STACK_EDU
|
#include "sim_networkop.h"
// returns the first k eigenvectors via kempe's method
// algorithm DecentralizedOI(k):
// 1. choose a random k-dimensional vector Qi
// 2. loop
// 3. set Vi = sum_{j\in nbrs} aij Qj
// 4. compute Ki = Vi' * Vi
// 5. K = pushsum(B, Ki)
// 6. Cholesky factorization K = R' * R
// 7. Set Qi = Vi*R^{-1}
// 8. end loop
// return Qi as the ith component of each eigenvector
MatrixXf sim_networkop::eigenvectors_kempe(int k, int maxloops) {
// algorithm DecentralizedOI(k):
// 1. choose a random k-dimensional vector Qi
MatrixXf Q;
Q = MatrixXf::Random(_top->size(), k);
//Q = MatrixXf::Constant(_top->size(), k, 1);
// 2. loop
for (int i=0; i<maxloops; ++i) {
// 3. set Vi = sum_{j\in nbrs} aij Qj
MatrixXf V = mult_cm(Q);
// 4. compute Ki = Vi' * Vi
vector<MatrixXf> Ki = map_rowprod(V);
// 5. K = pushsum(B, Ki)
//MatrixXf temp = _top->getnnbrs();
//MatrixXf B(_top->size(),1);
//for (int j=0; j<_top->size(); ++j) {
// B(j,0) = 1 / (temp(j,0)+1);
//}
//MatrixXf K = mt_sum(Ki, B);
MatrixXf K = mtv_sum(Ki);
//cout << "matrix K" << endl;
//cout << K << endl;
// 6. Cholesky factorization K = R' * R
MatrixXf R = K.llt().matrixU();
//cout << "matrix R" << endl;
//cout << R << endl;
//exit(0);
// 7. Set Qi = Vi*R^{-1}
Q = V * R.inverse();
// 8. end loop
}
// return Qi as the ith component of each eigenvector
return Q;
}
// returns the fiedler vector via bertrand's method
MatrixXf sim_networkop::fiedlervector_bertrand(int maxloops) {
// 1. choose a random k-dimensional vector x
MatrixXf x = MatrixXf::Random(_top->size(), 1);
// 2. compute alfa (as a factor of the maximum connectivity)
MatrixXf alfa = max(_top->getnnbrs());
// loop
for (int i=0; i<maxloops; ++i) {
// 3. compute the multiplication with the transformed laplacean
MatrixXf v = x - elwise_div(mult_lap(x), alfa);
// 4. (in parallel with 5) compute the norm of the previous result
MatrixXf v1 = norm(v);
// 5. compute the average of v
MatrixXf v2 = avg(v);
// 6. update x
x = elwise_div(v - v2, v1);
}
return x;
}
MatrixXf sim_networkop::mult_cm(MatrixXf x) {
MatrixXf res = MatrixXf::Zero(x.rows(), x.cols());
for (int i=0; i<x.rows(); ++i) {
list<int> nbrs = _top->getnbrsnode(i);
for (list<int>::iterator it=nbrs.begin(); it!=nbrs.end(); ++it) {
for (int j=0; j<x.cols(); ++j) {
res(i,j) += x(*it,j);
}
}
}
return res;
}
MatrixXf sim_networkop::mult_lap(MatrixXf x) {
MatrixXf res = MatrixXf::Zero(x.rows(), x.cols());
for (int i=0; i<x.rows(); ++i) {
list<int> nbrs = _top->getnbrsnode(i);
for (list<int>::iterator it=nbrs.begin(); it!=nbrs.end(); ++it) {
for (int j=0; j<x.cols(); ++j) {
if (i==*it) {
res(i,j) += (nbrs.size()-1) * x(*it,j);
} else {
res(i,j) += -x(*it,j);
}
}
}
}
return res;
}
vector<MatrixXf> sim_networkop::map_rowprod(MatrixXf m) {
vector<MatrixXf> res;
for (int i=0; i<m.rows(); ++i) {
MatrixXf temp = m.row(i).transpose() * m.row(i);
res.push_back(temp);
}
return res;
}
MatrixXf sim_networkop::mtv_sum(vector<MatrixXf> k) {
MatrixXf res = MatrixXf::Zero(k[0].rows(), k[0].cols());
for (int i=0; i<k.size(); ++i) {
res += k[i];
}
return res;
}
MatrixXf sim_networkop::power(MatrixXf x, double p) {
MatrixXf res(x.rows(), x.cols());
for (int i=0; i<x.rows(); ++i) {
for (int j=0; j<x.cols(); ++j) {
res(i,j) = pow(x(i,j),p);
}
}
return res;
}
MatrixXf sim_networkop::min(MatrixXf x) {
MatrixXf res = MatrixXf::Constant(x.rows(), x.cols(),1);
for (int i=0; i<x.cols(); ++i) {
res.col(i) *= x.col(i).minCoeff();
}
return res;
}
MatrixXf sim_networkop::max(MatrixXf x) {
MatrixXf res = MatrixXf::Constant(x.rows(), x.cols(),1);
for (int i=0; i<x.cols(); ++i) {
res.col(i) *= x.col(i).maxCoeff();
}
return res;
}
MatrixXf sim_networkop::sum(MatrixXf x) {
MatrixXf res = MatrixXf::Constant(x.rows(), x.cols(),1);
for (int i=0; i<x.cols(); ++i) {
res.col(i) *= x.col(i).sum();
}
return res;
}
MatrixXf sim_networkop::avg(MatrixXf x) {
MatrixXf res = MatrixXf::Constant(x.rows(), x.cols(),1);
for (int i=0; i<x.cols(); ++i) {
res.col(i) *= x.col(i).mean();
}
return res;
}
MatrixXf sim_networkop::elwise_prod(MatrixXf a, MatrixXf b) {
MatrixXf res(a.rows(), a.cols());
for (int i=0; i<a.rows(); ++i) {
for (int j=0; j<a.cols(); ++j) {
res(i,j) = a(i,j) * b(i,j);
}
}
return res;
}
MatrixXf sim_networkop::elwise_div(MatrixXf a, MatrixXf b) {
MatrixXf res(a.rows(), a.cols());
for (int i=0; i<a.rows(); ++i) {
for (int j=0; j<a.cols(); ++j) {
res(i,j) = a(i,j) / b(i,j);
}
}
return res;
}
MatrixXf sim_networkop::norm(MatrixXf x) {
return power(sum(power(x, 2)), 0.5);
}
void sim_networkop::print(string s, MatrixXf x) {
cout << s << endl << x << endl;
}
|
STACK_EDU
|
Authentication system for a web service
I am building a web service which i will be launching in near future. Service is more or less like online classifieds.
Now, i need to build a mechanism to collect user's information, enough to trace him in case of any fraud with other users. I can ask for nation ID card and things like that. But problem is that how will i verify them and the person providing the information.
So, i need suggestions for such system which could be used t authenticate users, so other users can trust them. And contact them freely knowing in case of any mishap authorities can get its information from our service.
For solution we must consider that our service will be free so if this process is costly, than their might be a mechanism to get that cost paid by users.
Any suggestions will be appreciated.
How does PayPal do it? Or Ebay?
It depends how much verification you want. You can verify that the ID number entered matches the details supplied (assuming that whatever nation the ID card belongs to provides some sort of API to do this - lots of countries don't even have standard ID cards though), but all that proves is that the person entering the data knows both the card number and the details that are associated with it in the national store. If I stole your wallet, I can probably make a pretty good guess at that link - bound to be something (driver's licence, passport, etc.) with your full name and enough of your address to be able to get the rest from online services in there.
You can't verify that a specific person entered the details, nor that (with some exclusions, perhaps vehicles, buildings, and land) they own the thing they are offering for sale.
If your marketplace is for relatively low value items, verifying email addresses might be enough. For more expensive items, you might want to look at verifying bank account details - look at sites like Paypal for how this works. For high value items (houses, for example), you'd probably want some manual verification steps in the process - having a copy of identification documents made by a solicitor or similar. Depending on the country, there might be legal requirements for specific types of goods - cars might need to be recorded in specific ways and registration documents provided to a central database, say.
In general though, for a purely online process, all you can state is that someone who knew some details about a person entered them.
Actually our service will be more or less like online classfieds, where people see, oh that person has that thing or can do that thing, and he want to get it, but he cannot trust him to come to his house and do some service or to give something to him to get it fixed.
So, i think what we can do is to involve some kind of security fees or anything like that and attach user's bank account to our system. And tell other users that we don't take responsibility of any fraud, but if contacted by authorities, we can provide this much of information to them.
|
STACK_EXCHANGE
|
There's no definite answer for that because it really depends on how much performance you need, how complicated your application would be, etc.
It's always better to have more memory, just to be safe. Remember that you wont ever get full 256MB of RAM to Linux, the best you can get is 240MB as rest will be allocated to GPU (and you really should use this split in your workload). 240MB is not that much but on the other hand there is a lot of VPS service providers which provide VPSes (which are virtualized private servers) with 256MB of RAM and people run quite big sites on this machines so it's definitely possible.
You can set a limit of memory available to PHP site and on many shared hosting services it is set to 8-16MB per site. And a lot of applications can run quite happy with that. MySQL has a lot of configuration options that can be used to limit it's memory usage and you can quite easily run it with 64MB (or even less) of memory. Apache webserver is not memory hungry too and there are even lighter alternatives. We can skip FTP and SOCKS server since they takes really small amount of memory (at least when used by only couple of users).
So database is the biggest issue here. Remember that the more memory database server has, the better performance it will get (it uses it mainly for caches to safe disk I/O). On really high traffic sites, database server has enough memory to keep (almost) whole database in the memory. You are probably not going to need that good performance. RaspberryPi does not have too quick storage (it's like 5-10 times slower than on full-blown computers even without RAID) so your performance will be really slow when it hits storage. Continuing already mentioned VPSes - they have much quicker storage solutions in most cases but they also share this storage with many other VPSes (often 16 or even 32) so it's very often not better than the one on RaspberryPi. And again, a lot of sites run happily on those servers.
So to sum up - you should be perfectly fine with 256MB of RAM but you are going to have to tweak some configuration options to lower the memory usage. It should be easy to find some tutorials about that on the Internet, especially when looking for articles about optimising server for VPS use etc. If you don't plan on using something that needs more memory in the future and can safe some money by buying 256 MB version of RaspberryPi, it can be worth it. And you may learn some interesting skills like designing your application so that it uses less memory or configuring your system to need less memory. That skills may pay of in the future.
|
OPCFW_CODE
|
THEN you'll look with disrespect at other burgers
[11:06] <queery> http://ubuntu-za.org/news/2011/02/25/ubuntu-hour-6-march-2011
[11:06] <queery> ubuntu hour
[11:09] <drubin> superfly: gcal dates seem funky
[11:10] <drubin> superfly: YES Royale ftw!!!
[11:10] <drubin> have you ever had their milkshakes? <3
[11:10] <superfly> drubin: yeah, I saw that... wonder what's up with it
[11:10] <superfly> drubin: I haven't, unfortunately
[11:14] <superfly> drubin: it looks like the recurring event is throwing it off
[11:29] <bmg505> so the royale's hamburger also comes in a cake box?
[12:09] <nlsthzn> Here the best burgers are Fudrockers... they are very good :)
[14:15] <queery> l8er
[16:29] <superfly> bmg505: Royale's burgers are GOOD, as opposed to BIG ;-)
[16:30] <bmg505> nah in jhb size matters
[16:36] <nlsthzn> best burgers in SA I loved was to take the Steers cheapy burger but to let them put tika sauce on... cheap and delicious:D
[16:37] <Queery> best burger ever is at beefcakes
[16:37] <Queery> nomnomnom
[17:22] <Kilos> evening everyone
[17:23] <Kilos> Maaz, coffee on
[17:23] * Maaz starts grinding coffee
[17:25] <kbmonkey> hello Kilos
[17:25] <Kilos> hi kbmonkey
[17:25] <kbmonkey> ooh coffee :)
[17:26] <kbmonkey> is Maaz coffee a real function?
[17:27] <Maaz> Coffee's ready for Kilos!
[17:27] <Kilos> Maaz, ty
[17:27] <Maaz> You're Welcome I'm sure
[20:50] <kbmonkey> is everyone enjoying their friday night?
|
UBUNTU_IRC
|
I love movies and whatever the genre, it is very often the dialogue that makes it more memorable to me than anything else. A well delivered line can be just as evocative as any scene. Here are some of my favourites:
Film: As Good as it Gets
Nasty, horrible, inhumane OCD Melvin Udall (Jack Nicholson) delivers this line to world weary Carole the waitress (Helen Hunt) when she demands he say something nice to her. Is this not just the best compliment ever?
Film: Jerry Maguire
Jerry (Tom Cruise) delivers a long impassioned but very matter of fact plea explaining why he wants Dorothy (Renée Zellweger) back in his life in front of all of her friends who’ve been telling her how much better off they are without men. Her’s is the simplest of responses. (Of course this movie also gives us the iconic show me the money line)
Film: Pretty in Pink
Rich boy Blaine (Andrew McCarthy – whatever happened to him?) to girl from wrong side of the tracks, Andy (Molly Ringwald) just before he says he loves her in such a matter of fact tone he might be ordering pizza, but the sentiment is what counts, right?
Film: Steel Magnolias
Clairee Belcher (Olympia Dukakis) retorts this to Oiuser Budreaux’s (Shirley MaClaine) sneering at her performance as a radio colour announcer. I had a hard job choosing my fave from this fabulous film. I promise my personal tragedy will not interfere with my ability to do good hair, and the only reason people are nice to me is because I have more money than God are right up there with the best. Out of them all on this list if you haven’t seen this movie you don’t know what you’re missing – it’s the funniest film ever to make you cry.
Film: Star Wars – A New Hope
Princess Leia (Carrie Fisher) to Han Solo (Harrison Ford) about Chewbacca. I’m a real Stars Wars nut. I don’t know how many times I have seen all 6 movies. I could choose line after line but I’ve decided on simplicity.
Film: Bridget Jones’s Diary
Bridget (Renée Zellweger) to Mark (Colin Firth) after a pash of a kiss in the snow after she’s chased him down in the street wearing a cardigan, undies and sneakers.
Film: Dead Poet’s Society
Teacher John Keating (Robin Williams) to his all boy classroom.
Film: City Slickers
Mitch Robbins (Billy Crystal) to his male friends. No need to put this into context. It is what it is.
Film: Monty Python and the Holy Grail
John Cleese as a French knight to English Knights. If you’ve never heard of or seen any Monty Python you’ll probably not understand why this is funny. Monty Python are a bunch of English (oh and 1 American) eccentrics who used to specialise in avant-garde and surrealist humour.
Film: Some Like it Hot
Daphne (Jack Lemmon) to Josephine (Tony Curtis) after watching Marilyn Monroe walk along the train platform. One of the bestest bestest films ever.
It’s a good job I’ve been restricted to 10 because I could have gone on to about 300. Do you think I need to get out more?
Top Photo Credit: James Whíte
Please rate this article
|
OPCFW_CODE
|
This article is more than 1 year old
Container orchestration top trumps: Let's just pretend you don't use Kubernetes already
Open source or Hotel California, there's something for everyone
Container orchestration comes in different flavours, but actual effort must be put into identifying the system most palatable.
Yes, features matter, but so too does the long-term viability of the platform. There's been plenty of great technologies in the history of the industry, but what's mattered has been their viability, as defined by factors such as who owns them, whether they are open source (and therefore sustained by a community), or outright M&A.
CoreOS, recently bought by Red Hat, offered Fleet. Fleet, alas for Fleet users, was discontinued because Kubernetes "won".
First, however, the basics: what is container orchestration? Orchestration platforms are to containers as VMware's vSphere and vRealize Automation are to Virtual Machines: they are the management, automation and orchestration layers upon which a decade or more of an organization's IT will ultimately be built.
Just as few organizations with any meaningful automation oscillate between Microsoft's Hyper-V and VMware's ESXi, the container orchestration solutions will have staying power. Over the years an entire ecosystem of products, services, scripts and more will attach to our container orchestration solutions, and walking away from them would mean burning down a significant portion of our IT and starting over.
So let's look at who's who, and what's what in the world of containers, and see if you can find the right flavour for you.
Skipping right to the end, Kubernetes's flavour is that of victory. Kubernetes is now the open-system container orchestration system. Mainframe people – who like to refer to anything that's not a mainframe as an open system – will cringe at my using the term open system here. I make no apologies.
The major public clouds pretend to be open systems, but everywhere you turn there's lock-in. They're mainframes reborn, and when talking about containers, most of which probably run on the major public clouds.
Developed by Google, Kubernetes was designed specifically to be an open, accessible container management platform. Google handed the technology to the Cloud Native Computing Foundation (CNCF) – another in a long line of open-source foundations run by a group of technology vendors.
Kubernetes is part of an emerging stack of technologies that form the backbone of open source IT automation. The container part of the story starts with VMs or bare metal machines which are provisioned into container hosts. These have a Linux Operating System Environment (OSE), and a containerization platform (Docker or rkt) installed.
The containerization platform handles the creation of containers, the injection of workloads into containers and destroying containers. Kubernetes is the bit that lashes multiple containerization hosts together into a cluster and bosses the containerization platform around. Kubernetes handles resource distribution, scheduling, availability, stateful storage and more. Think of it like "vSphere for containers" and you're close enough for jazz.
Kubernetes was delivered as a 1.0 product that was already fairly mature. Thus far, the various CNCF members haven't had a chance to ruin it through development by committee, but it's early days yet.
For the moment, Kubernetes has won the container orchestration ways. Market dominance, however, isn't the same as a monopoly, and there is still room for others.
Kubernetes is fantastic for the sorts of workloads that most people place in containers: stateless, composable workloads. They're the cattle in the cattle versus pets discussion. Some organizations, however, have reason to keep a few pets around. That's where Mesosphere Marathon comes in.
Marathon is a container orchestration framework for Apache Mesos that is designed to launch long-running applications. It offers key features for running applications in a clustered environment.
Marathon is Kubernetes, but for the Mesophere DC/OS or Apache Mesos. It can boss around Mesos containers or Docker containers. If you don't know what any of that is, that's perfectly okay. What you need to know is that Marathon is where you go when you have stateful workloads that you want to run for long periods of time, and for some reason you want to do this in containers instead of VMs.
Persistent containers is Marathon's niche. Others may do it, but none do it quite as well.
For many IT vendors, lock-in is a feature, not a bug. As we head into the 2020s, the public cloud providers are the standard-bearers of this philosophy. While the major public cloud providers all embrace open source and standardization to varying degrees, the ultimate goal is to convince you to put your workloads into their clouds, and then pay them rent on those workloads forever.
Amazon's EC2 Container Service (ECS) stands up a series of EC2 instances and installs Docker on them. It then lashes them together into a cluster and lets you manage them. Basically, it's Kubernetes, but with a distinctly Hotel California aftertaste.
Azure Container Instances (ACI): ditto what was said about ECS. But with the Amazon replaced with Microsoft in the recipe.
Google Container Engine (GCE) is Google's version of the above. Of course, being Google, it's kind of terrible at locking you in. GCE is basically Kubernetes with some Google Cloud Platform (GCP)-specific features thrown in. If you build your business on top of GCE automation it will still be a pain to disentangle that automation and go elsewhere, but far less of a pain than it would be with EC2 or ACI.
Cloud Foundry should be thought of as Openstack for containers. It is corporate Open Source at its finest. Written by VMware and then transferred to Pivotal when Pivotal was spun out, Cloud Foundry's original purpose was to run containers on VMware's vSphere, with an eye towards being the Platform as a Service (PaaS) choice for the world's enterprises.
Today, Cloud Foundry's intellectual property is held by the Cloud Foundry Foundation (CFF). The CFF is to Cloud Foundry as the CNCF is to Kubernetes. Both the CFF and CNCF are Linux Foundation projects. The Linux Foundation is another industry group mostly composed of various vendors. You'll find a lot of the same vendors hold power in all three groups.
If you want to roll your own PaaS, and/or don't like the PaaS options the major public cloud providers have to offer, Cloud Foundry is for you. If PaaS isn't your thing, give up, and use Kubernetes.
CoreOS versus Docker versus the world
Docker Swarm is Docker's container orchestration offering. Back before CoreOS was borged by Red Hat, there was great fun to be had on Twitter by provoking both CoreOS and Docker nerds into intricately detailed technical poo-flinging contests about which was better. Then Kubernetes won. Now Docker Enterprise supports Kubernetes and Swarm's days are numbered.
In a container
So where does this trot through the landscape leave us? No surprises: in the container orchestration world, Kubernetes is the container-farming king – but it isn't ruler of all we survey. Mesosphere occupies a decent niche as the kennel for your pets. Just beware Amazon, Azure and Google – these are Hotel California: you can check in your code, but it most likely won't ever leave. If portability is your particular king then I advise you to steer clear. If not, code on.
If you are using anything new, unheard of or just plain better than these three orchestration flavours let me know. ®
We'll be covering DevOps at our Continuous Lifecycle London 2018 event. Full details right here.
|
OPCFW_CODE
|
[swift-evolution] [Proposal] Powerful enumerations built upon protocols, structures and classes
marc at knaup.koeln
Tue Dec 15 08:50:00 CST 2015
Thank you for your extensive feedback.
The way I proposed it is actually the other way round:
Switches over enumerations still don't need a default case since all cases
(i.e. types conforming to them) are known at compile-time.
The same benefit will become available also to other protocols. If a
protocol is not public (or not publicly conformable) then the compiler
again knows all types explicitly declaring conformance to that protocol at
compile-time and the developer can omit a default case when switching over
instances of that protocol.
So the behavior of Swift 2.1 regarding switches and default cases for
enumerations remains the same while for many protocols the compiler can
additionally issue a warning that the default case is unreachable and thus
On Tue, Dec 15, 2015 at 2:59 PM, Al Skipp <al_skipp at fastmail.fm> wrote:
> > Sum types look very interesting and are very similar to enumerations.
> > If there is enough interest in sum types then this proposal could
> probably cover them too since it already provides a good foundation for
> implementing them.
> > It could also be proposed separately or on top of this proposal.
> > In any case this proposal does not prevent sum types from being added to
> Swift already has Sum types, they are enums. Pretty much all the other
> constructs are Product types (structs, classes, tuples…). I’ve not managed
> to read your detailed proposal in full, but it strikes me that it would
> involve quite fundamental changes. Not sure what all the consequences would
> be and what the value is in blurring the distinction between Sum types and
> Product types?
> My personal view is that the distinction between enums and protocols is
> important. Your proposal would enable the easy creation of extensible enums
> which I find problematic. When using extensible enums you’d be obliged to
> include a ‘default’ case in every switch statement. You point out that
> you’d be able to restrict the cases by using ‘final’ or some other means,
> but in practice I think the extensible form would be popular for its
> perceived flexibility.
> Currently if I create an enum I am obliged to deal with all cases in a
> switch (assuming I don’t use a default case, which I try to avoid). Now
> imagine I subsequently realise I need to add a case to the enum. OK, I do
> so, now all my code breaks, arghh! But this is a good thing! The compiler
> guides me to every piece of code I need to change – magic! If on the other
> hand, enums were easily extensible, I’d add my new case and my code would
> still compile, brilliant! However, my code is fundamentally broken and I
> have to manually fix it without any guidance from the compiler.
> That’s why I think the distinction between enums and protocols is
> important - enums are useful because they’re not extensible.
> When a type is required that can be extended with new cases, then a
> protocol is the right tool for the job, rather than an enum.
> My worry is this proposal would reduce compile time guarantees in Swift
> and make me more responsible for finding my own bugs (the horror!).
-------------- next part --------------
An HTML attachment was scrubbed...
More information about the swift-evolution
|
OPCFW_CODE
|
import itertools
from boolean_formula import Formula
import sys
from pysat.card import *
from pysat.pb import *
def xor2cnf(xor):
if len(xor)==3:
return [[-xor[0],-xor[1],xor[2]], [xor[0],-xor[1],-xor[2]], [-xor[0],xor[1],-xor[2]],[xor[0],xor[1],xor[2]] ]
if len(xor)==2:
return [[xor[0],xor[1]],[-xor[0],-xor[1]]]
def de_xor(llist,start):
oldstart = start
if len(llist)<=3:
return [list(llist),],start
else:
clist = []
i = 0
while True:
if i>=len(llist):
break
elif i==len(llist)-1:
oldstart-=1
break
else:
clist.append([-llist[i],llist[i+1],start+1])
start+=1
i+=2
flist,_ = de_xor(range(oldstart+1,start+1),start)
return clist+flist,_
def kxor(n):
return de_xor(range(1,n+1),n+1)
def xor100():
f = open('benchmarks/sxor/x10.cnf','w+')
n=10
clauses,nv = kxor(n)
f.write('p cnf '+repr(nv)+' '+repr(len(clauses))+'\n')
for c in clauses:
f.write('x ')
for l in c:
f.write(repr(l)+' ')
f.write('0\n')
def sign(a):
if a>0: return 1
return -1
def pb_opb(filepath,topath):
f = open(filepath,'r+')
g = open(topath,'w+')
flines = f.readlines()
for line in flines:
split = line.split()
if len(split) == 0: continue
if split[0] == 'p':
n = int(split[2])
m = int(split[3])
g.write("* #variable= "+repr(n)+" #constraint= " +repr(m)+'\n')
else:
g.write(line)
f.close()
g.close()
def xor_blow(filepath,topath):
encoding_no = 0
f = open(filepath,'r+')
g = open(topath,'w+')
formula = Formula.read_DIMACS(filepath)
new_formula = Formula()
nv = len(formula.variables)
nc = len(formula.clauses)
ctype = formula._ctype
start = nv
clause_list = []
m = 0
for ci in range(len(formula.clauses)):
if ctype[ci]=='c':
if formula._klist[ci] == 1:
clause_list.append([formula.clauses[ci]])
m+=1
else:
cnf_from_pb = PBEnc.atleast(lits=formula.clauses[ci], weights=[1 for i in range(nv)],bound=abs(formula._klist[ci]),top_id = start, encoding=encoding_no)
clause_list.append(cnf_from_pb.clauses)
m += len(cnf_from_pb.clauses)
if cnf_from_pb.nv > nv:
start = cnf_from_pb.nv
if ctype[ci]=='p':
cnf_from_pb = PBEnc.atleast(lits=formula.clauses[ci], weights=formula._coefs[ci] ,bound=abs(formula._klist[ci]),top_id = start, encoding=encoding_no)
print('encoding cost '+repr(len(cnf_from_pb.clauses)))
j = 0
clause_list.append(cnf_from_pb.clauses)
m += len(cnf_from_pb.clauses)
if cnf_from_pb.nv > nv:
start = cnf_from_pb.nv
if ctype[ci]=='x':
blow_clauses,_ = de_xor(formula.clauses[ci],start)
start = _
for b in blow_clauses:
cnfs = xor2cnf(b)
clause_list.append(cnfs)
m += len(cnfs)
g.write('p cnf '+repr(start)+' '+repr(m)+'\n')
for cg in clause_list:
for c in cg:
for item in c:
g.write(repr(item)+' ')
g.write('0\n')
f.close()
g.close()
xor_blow(sys.argv[1],sys.argv[2])
|
STACK_EDU
|
What's the quickest way to install Windows 7?
I am wondering what the quickest/fastest way to install Windows 7 would be?
I've read that you can make a bootable USB with unetbootin, or load the ISO contents to a separate partition/hard-drive and boot from there to install.
Then I seen a method using imagex to copy the files needed onto a new partition which can be booted from directly, it takes ~7 minutes + ~5 min for the initial boot... I haven't tried it yet but would like to know if anyone knows of anything faster?
If you could provide some instructions (step by step) would be great! The imagex method provides a good tutorial for example.
I use WinToFlash and format the USB drive in NTFS and I install it in 15 min. That is not a long time I think also depend on the system specs.
It only takes 10 minutes. I will show you ways to install Windows 7 via USB.
We need:
At least 1 USB drive with 4 GB capacity, because Windows 7 at least takes 3 GB.
Manual Method:
Plug in the USB flash disk.
Press WinR, type cmd and click OK.
Type diskpart and press Enter.
Type list disk, press Enter and choose your USB flash drive. If you have only one USB hard drive, it's disk1.
Type select disk 1 and press Enter.
Type clean and hit Enter.
Type create partition primary and press Enter.
Type select partition 1 and press Enter
Type active and press Enter.
Type format fs = fat32 and press Enter.
Type assign and press Enter.
Type exit and press Enter.
Insert the DVD disc of Windows 7 and copy all the contents of the DVD to the USB Flash disk.
Booting your computer via USB, in the BIOS, make sure you boot through USB.
Automatic way:
Download the latest version of the first program WinToFlash.
Extract and run the file WinToFlash.exe
Click the "Check" and run the Windows setup wizard transfer.
Click "Next"
Select the location and the location of Windows 7 USB flash disk. click "Next".
Select "I Accepted the terms of the license agreement" and click "Continue"
Click OK to begin formatting the USB stick and Windows 7 will automatically be copied to the USB stick.
Click "Next" when copied, and boot your computer via USB.
Formatting the USB tool longer than the installation - thanks for posting.
I think the quickest way to install 7 is from a flash drive. It's also pretty painless. I suggest an 8 GB flash drive since SP1 adds some meat to the base install. Plus that will give you room for other applications you might want to install on the image and keep the installs in a folder names !Apps
ImageX is way to go for professionals.
Plenty of resources around on this process. One thing I do on my laptop is use ImageX to build a .vhd file on my second SSD and then update the bootloader to boot from a VHD – another very cool feature in Windows 7.
You can have a new system up and running in less than 15 minutes. Great for development and demo / testing.
|
STACK_EXCHANGE
|
How can a drink contain 1.8 kcal energy while 0 g fat/carbs/protein?
How is it possible that the Red Bull Zero contains 0 gramms of fat, carbs and protein, but it still has 1.8 kcal of "energy".
I always thought that the human body can gain energy only from 3 kinds of nutrients: fat, carbs and protein. Is there a 4th kind? Or they just display an energy value that's not accessible to the body?
This is slightly off-topic, since it's very unlikely to be related to your question - but yes, humans can gain energy from many more things than just sugar, fat and protein; it's just that those three are dominant in the food we eat. One example would be alcohol (ethanol); we can also process polyols and organic acids. Even fiber, which is often considered empty (and useful, mind) filler can be partially digested for about 2 kcal/g - so a single gram of fiber would be enough for that energy value (while being both a carbohydrate and a (poly)sacharide, it's usually listed separately).
@Luaan Exactly, I wanted to mention ethanol too. It was a big surprise to me and I actually had asked a question to find out how is it possible.
@Luaan I thought that alcohol is a carbohydrate.
@CrouchingKitten Nope. It does have carbon, oxygen and hydrogen atoms in the molecule, but that doesn't mean it's a carbohydrate, just like fats aren't. Especially in food context, "carbohydrate" essentially means "sugar", usually including some non-sweet saccharides like cellulose (in your photo, that would be in "Kohlenhydrate" but not under "davon Zucker"). Really, alcohol is closer to fats than carbohydrates, but even that isn't really all that useful. It's not commonly included in either group.
@Luaan Thanks. So this means that the body can't convert it to glucose? So the only way the brain can make use of alcohol is if it's first converted to fat, and then to ketons?
@CrouchingKitten The body does convert it to fats (if you're well fed), but I don't think there's any specifics dealing with the brain - after all, ethanol goes through the blood-brain barrier with no trouble. It has long been thought that ethanol doesn't cause the well known CNS symptoms on its own - it's the intermediate products of ethanol metabolism that do, probably mostly acetaldehyde. Assuming this is true, ethanol must be metabolised in the brain (and thus provide energy, as outlined in Tomáš's link), since acetaldehyde doesn't cross the blood-brain barrier.
@CrouchingKitten I doubt anyone would recommend it as a food source for the brain, unlike sugars and ketons, though; for obvious reasons :)
The list of ingredients on the can mentions "Zuckerkulör," which is caramel colour, which can have 2 kcal/g, according to one producer.
Next, there is "Citronensäure," which is citric acid, which can, as other organic acids, have 2-3 kcal/g, according to this source.
There is also taurine, which is an amino acid-like compound, so it could, like proteins, have 4 kcal/g, but is, according to Taurine Metabolism in Man (Journal of Nutrition), poorly metabolized and probably has less than 0.2 kcal/g.
It is sometimes allowed, at least by U.S. Food and Drug Administration, to round the amounts of macronutrients (carbs, proteins, fats) smaller than 0.5 g per serving to zero, which is what they obviously did in this case, but they decided to keep the summary of calorie values of all ingredients exact.
It is usually said that only 3 types of nutrients contain energy: carbohydrates, proteins and fats.
Digestible carbohydrates (sugars and starch) provide 4 kcal/g. Undigestible, but fermentable, carbohydrates, such as soluble dietary fiber, sugar alcohols or polyols (maltitol, mannitol, sorbitol, xylitol, isomalt) and organic acids (citric, acetic acid, etc.), can provide 2 kcal/g of energy in average. On the other hand, some carbohydrates (such as "Sucralose" from the ingredient list) are neither digestible nor fermentable, so they do not provide any calories.
The "fourth" nutrient that can provide energy (7 kcal/g) is alcohol (ethanol), but is not considered a nutrient by some authors.
According to Food Label Accuracy of Common Snack Foods
article (Obesity, 2014), the calories on the food labels should represent usable calories (metabolizable energy):
Of note, it is important to distinguish that food label calories
actually represent metabolizable energy, which is total caloric
content minus calories that are presumably not absorbed by the body
and excreted as waste.
This article says that ~95% of ingested taurine is excreted. KEGG also says that taurine is not converted into any energy molecule in humans.
Could it be Zuckerkulör (caramel colour)? It's just caramelized sugar.
@AkselA -- answer should be edited to include your comment as that is almost certainly the answer. If not then it is that plus some of the other ingredients -- Aromen e.g. or anything else that constitutes a fraction of a caloric intake as, total, they only need to add up to <2 Calories.
This reminds me of a "zero calorie" recipe I once saw which involved ten squirts of cooking oil. Somebody had calculated the calorie count of one squirt, rounded to zero, and then multiplied that by ten...
@AkselA There's very little caramel colour and, also, I'm not sure it has much energy in it: it's essentially already been partially burnt.
@DavidRicherby: I don't think there's much energy in a couple of drops of caramel colour either, but then again we are only looking for a very small amount of energy.
@AkselA Oh, good point. I'd not noticed the actual numbers. Since the question is, essentially, "How is there more energy than I expected?", I didn't stop to think that "more than I expected" could still be "almost none."
@GeoffreyBrent Reminds me of how TicTacs have 0g of sugar per serving (1 mint), while they're probably mostly made of sugar.
@WYSIWYG, I checked that article and it really seems it is not taurine that provides calories.
@AkselA, it's quite likely that it is caramel colour and citric acid that provide calories.
@DavidRicherby: Why do you think all the sugar in the caramel is burnt? Ingredient design is an art nowadays. Listing sugar as an ingredient is actively avoided nowadays. Burning just 1% of the sugar and labeling the whole as "caramel colour" would be a sneaky way to put sugar in the recipe without explicitly listing it as such. And it's certainly a whole lot cheaper than putting in honey (another common "non-sugar" ingredient)
@MSalters 1kcal of sugar is a tiny tiny amount and not really detectable as sweet in a volume like this, so what you describe probably doesn't apply here.
|
STACK_EXCHANGE
|
everythingpossible - Fotolia
Everything runs -- or can run -- in the cloud, including integrated development environments. Developers should investigate what cloud integrated development environments are and the various types to choose from. But before you select a product, understand the advantages and disadvantages associated with this off-premises dev environment.
An integrated development environment (IDE) helps developers write code that includes features to simplify the process -- like syntax highlighting and automatic indentation. It typically includes functionality that makes it easy to compile, run and debug code. Rather than download and install the IDE on their local workstation, developers can turn to a cloud IDE accessible via a web browser. Although developers still use local IDEs, cloud IDEs have gained popularity.
A developer can technically run a traditional IDE on a virtual server in the cloud, using a remote desktop, but it's rarely what developers have in mind for a cloud IDE. Hosted IDEs don't require the user to perform any installation or maintenance.
Cloud IDEs don't have to be used to develop cloud applications. Most cloud IDEs work to create apps for various on-premises, hybrid and cloud-based environments, and they support a range of programming languages and frameworks.
There are two categories of deployment options for cloud IDEs:
- Fully-managed IDEs, like AWS Cloud9, that are ready to work without the user setting up their hosting infrastructure.
- Self-hosted IDEs, like Eclipse Che and Orion, that developers have to set up and install themselves on a local or cloud-based server.
In some ways, cloud IDEs are similar to the well-established PaaS architecture. PaaS makes it easy for developers to build and deploy applications in the cloud. A major contrast between PaaS and cloud IDE is in the development tools. PaaS is designed with the expectation that developers will write code in a separate tool, then upload it to PaaS to deploy. Cloud IDEs are a form of SaaS: They deliver the capabilities of an IDE as a service.
Cloud IDE pros
Cloud IDEs offer several advantages over traditional IDEs. As described above, when the IDE is hosted by the provider, developers don't have to set up and manage it. Developers can write code on virtually any type of laptop, tablet, smartphone or other workstation, as long as it has a web browser to connect to the cloud IDE. Code is automatically saved to a cloud-based environment, so changes are not lost if a developer's laptop experiences an issue and shuts down.
Cloud IDEs can build and debug code more quickly than locally installed IDEs, because they run on more powerful hardware. Organizations also frequently run production environments for applications on cloud hosting. A cloud IDE can deploy code quickly into a cloud-based production environment. This setup eliminates delays from slow upload links from on-premises IDEs to the cloud infrastructure.
Cloud IDEs also enable multiple developers to use the same environment at once, which fosters easier collaboration on shared code.
Cloud IDE cons
However, cloud IDEs also have potential drawbacks. Organizations pay on a subscription model for a fully-managed cloud IDE, as opposed to buying the tool outright. A self-hosted cloud IDE might be free to download and install, but the organization must budget to host the tool.
Because the IDE is not installed locally, access and performance can be affected by network connectivity problems or bandwidth limitations. This setup can also make it easier for attackers to access the IDE and the developers' code on it.
While every tool is different, in general, cloud IDEs support fewer programming languages and are less customizable and extensible than local IDEs. Cloud IDE buyers should review the plugin ecosystem for a given tool, and ask about the user's access to and control over the operating system.
Cloud IDE products comparison
There are a number of cloud IDEs, each with a set of strengths and weaknesses that developers should consider:
- Cloud9 is a popular cloud IDE option, fully managed from AWS. Cloud9 integrates well with other services from AWS but it can also be used to build applications that are deployed elsewhere.
- Codeanywhere is another popular fully-managed cloud IDE. Codeanywhere was one of the first platforms to make cloud IDEs practical and it can support several dozen programming languages.
- Eclipse Che is an open source cloud IDE from the Eclipse Foundation. It's available as a fully-managed service or it can be self-hosted. Eclipse Che supports up to twelve languages, including most of the popular languages for developing native and web applications.
- Orion, also developed under the Eclipse Foundation, specifically enables users to develop web apps, and only supports web development languages.
- Theia can run on a local computer or in the cloud -- or even split between the two -- which makes it a good choice for developers who want an IDE that provides a flexible set of deployment options.
|
OPCFW_CODE
|
This is the place to discuss whatever you want. All Trezor unrelated stuff belongs to this topic.
Hey, I stumbled into a pub! Where is the ?
Not many people here … (looks around in the empty pub)
Anyway, I was thinking if I should HODL stablecoins, like USDT, USDC, DAI and EURT into my Trezor before the banks and exchanges implement stricter regulations. What do you think?
you are right, it’s rather a ghost pub so far
I wonder what is the purpose of hodling stablecoins for a long time? They are of course useful to hodl them during the bear market, otherwise I see them useful only as a short term hedge or to move your funds between exchanges.
Yeah, I agree. I was thinking about using them to swap/convert to BTC and others on DEX exhanges if/when new regulations force banks and custody exchanges use Know Your Customer (KYC) and Anti Money Laundring (AML) policies. Well, banks already have these policies but not all online exchanges yet.
If I had stablecoins and perhaps WETH, WBNB and some other common swappable coins too, I’d not need banks to withdraw Fiat and online changes so often anymore, thereby avoiding the AML and KYC privacy problems.
I think DEXes will flourish when governments tighten the grip around normal exchanges, because they’re without KYC and AML.
PS: Is it possible to get WETH giftwrapped, to give as birthday gift?
The problem with DEXes is that people still need to use bank accounts to send money to the seller, unless they can use some stablecoin which is accepted by the seller. The part of the story is how you acquire the stablecoin withou KYC and AML. You still need to buy it from someone and unless you pay for it with cash, you need to do bank transfer payment. I am sure banks will track regular sellers who will have a lot of transactions on their bank account and can track them quite easily. All this means you won’t be anonymous with DEXes either.
But you are right, using stablecoins makes sense to swap/convert to other crypto the way you describe it, it will just be too difficult to maintain privacy even with the DEXes.
And this is the right example, this is quite difficult for an average user. Also, there is a good example with Binance, they are forcing KYC now, so the privacy is gone as the BUSD is linked to yourself. On the other hand I am quite sure new ways how to acquire crypto anonymously will come.
I really hope so, kolin.
Maybe some new DEXes will appear in a country where crypto is legal and unregulated. Hm … where could that be? Wherever, I do hope they sell beer there.
Interesting reading how seed works. You can create it manually on your own and learn how it works. Pretty cool.
Interesting! I will read that article.
Edit: Bitcoin Magazine doesn’t like my browser (Opera) it seems - I get a blank page after seeing the heading for a fraction of a second, so I’ll have to try and read the article with another browser.
Edit 2: It worked in Chrome but refused to load in Opera because of my Ad Blocker. Fixed now.
Edit 3: I’ve read the article now. Very interesting and fun read! I learn something new every day.
About that BIP39 english wordlist on Github …
… can’t someone ask admin-slush to renumber it with 0 - zero - as the starting number (or submit a Pull Request in the project)?
BTW, it just occured to me how it’s possible to store private keys and passphrases digitally: on an air gapped computer, which is never connected to the Internet. For instance in a password protected KeePassX datafile on a Rasberry PI (I already own a couple of those). Then, I could put the Rasberry PI - or just the (micro)SD card - in a safe place. At least it’d be more secure than a handwritten seed card put in the same safe place. But this is food for thought only, not a recommendation of course!
Here’s another interesting article I read today in Bitcoin Magazine:
It contains a short overview of the hard times in the Czech Republic during the 20th century. As a History buff I knew most of this from before, but set into an economic perspective with today’s opportunities with cryptocurrency, it makes you sit back and think. No wonder many people in the Czech Republic are interested in crypto!
I don’t agree with the everything which are said about how we’ve progressed into a better world though. Or that Fiat is the last big problem for humanity. Yes, we got clean water. For a while. Then plastic microparticles has polluted water all over the planet so there is no place with clean water anymore. And there are many other big problems to be solved - if even possible - for instance that little thing concerning climate change …
However, I wholeheartedly agree that cryptocurrency can solve many problems. Bitcoin is only 12 years old but already an economic revolution that can’t be stopped. I believe the future will benefit from cryptocurrency in many ways. And the article in general is great reading!
Not sure if the numbering has a reason here, but I believe someone can submit the pull request to find out what can be done.
Thanks for sharing, will read it
About stablecoins - I was thinking about the most popular one Tether US (USDT) … It’s based in Hong Kong, I believe, and we know how China has taken control over Hong Kong now too. Earlier, China has gone after miners but in the recent days China has tightened their grip around the crypto industry, including overseas online exchanges, domestic traders, illegal fundraising, and any website condoning crypto it seems.
The main effort seems to have been on mainland China, but it’s only a matter of time until Hong Kong is targeted too. So I’m worried about USDT and its future. Will it survive, if China cuts of Tether’s funds?
This is also a very interesting read:
Does it means that transactions will be cheaper when using the Cardano network instead of through the Ehtereum network? If so, the ADA coin will be even more popular in the times to come.
this will be very interesting to observe
It’s auto-numbered by github. The text file itself doesn’t contain the numbers. You could ask github to change it, but I don’t think they will
Ah, I see. Thanks for the info. In that case, maybe the file should contain numbers in a two-column matrix.
Cheers! Or, as we say in my country - Skål!
You all know about Solana’s (SOL) rise recently. I wanted to show you a graph I added to my crypto spreadsheet earlier, which shows growth from when Bitcoin was as its lowest on July 20th. The coins in this graph are from my watchlist and most of them have been in my list since before July 20th, but some are added later. The graph starts at $100 so the growth is relative to that and means that when the column shows $550 for Solana it means it has risen 450% since July 20th.
hmm, it will be interesting to watch the performance of the coins with the lowest relative growth, could be an opportunity
Yup, I’m HODLing some of the more unknown tokens, in the hope that they’ll rise to $1 sometime in the future and make me rich someday! LOL! Shiba Inu (SHIB) is one I’ve bought 30 million tokens of. They were cheap … Ha-ha-ha! So if SHIB ever gets to be worth $1 then I’ll get $30,000,000.00!
Also, I have strong beliefs in Slam (SLAM), which is a casino token. It has risen a lot in the past few months, but is still cheap. I’m going to buy a few millions of that one as well.
|
OPCFW_CODE
|
I've always thought to myself how would I go measuring each character's strength based on all of their feats and abilities, and this what I came up with. Ladies and Gentlemen, this is the battle data thread inspired by a similar thread on another forum I go to, and I figured "What the hell, why not do it for Sonic characters".
But since this is mostly opinion based, I figured the best way to do it would be to use a survey/poll and tally up the votes.
I've thought of some attributes to be judged on, but if you think some don't apply any of the cast and have a better attributes in mind, feel free to tell me in a respectful way please. But the attributes so far are:
Offense: Based on a character's attacking ability, how much damage can they do, their skill with attacking, combat experience, range, and versatility.
Defense: Based on a character's defensive ability, how they support themselves and their teammates, and overall how much can they shield themselves from taking damage.
Agility: Based on a character's movement ability, how fast they are, and how good their reaction time is.
Intelligence: Based on a character's perception, critical thinking, analyzing ability, strategic & tactical awareness, logical deduction, and adaptability.
Stamina: Basically how long a character can last doing extraneous feats before they tire themselves out.
And for added fun, I'll add one miscellaneous ability that's mainly for comedic purposes, it can be anything witty/saracastic/ or out derogatory, but the catch is it needs to apply mostly to that character alone(For example, Sonic - Chilidogs, etc.) The one I feel is the most hilarious I'll use.
To save time, I'll do the characters in order by their team, starting with Team Sonic of course and moving on from there. I'll hopefully have the survey up before the night's out, and we can start this. Since there are so many members, I have to limit the maximum votes to 200 per team so not everyone will get a chance to vote sadly, so I advice you to vote as quickly as possible and make it count because you only get one.
If anyone wants to collaborate feel free to PM me and ask what you'd like to contribute, because I'm going to need all of the help I can get with this, and hopefully I can keep it going.
IMPORTANT NOTE: This is NOT a "my character is better than yours" topic, this is simply a topic to discuss the character's abilities and how they rank among each other, and it is purely for enjoyment's sake, so please have fun with it. You're free to disagree with the results, and discuss what you think is wrong with them, but once I've tallied everything, the decision is final, no exception. If you think a character's stats are off, then tough.
So let's do this:
Battle Ratings: Vote Here
Vote for Sonic, "Tails", & Knuckles - CLOSED
Vote for Amy, Cream, & Big - CLOSED
Vote for Shadow, Rouge, & Omega - VOTE HERE
Vote Espio, Charmy, & Vector - Coming Soon
Vote for Blaze, Silver, & Eggman Nega - Coming soon
Vote for Eggman, Metal Sonic, & Orbot/Cubot - Coming soon
Vote for Jet, Wave, & Storm - Coming Soon
I may add more soon, but I'll stick with this for now.
Note: Zeroes & Ones will not be counted, due to bullshit fanboys. Also, I apologize to ask, but can anyone who already voted please vote again, I made sure to filter out the bullshit this time so I apologize once again.
Edited by Ragna the Bloodedge, 07 August 2012 - 11:32 PM.
|
OPCFW_CODE
|
It was a life changing experience, simply because it made me really think about what I do, how I do it, and what I do to make it better. I had the opportunity to meet a photographer I hold in high esteem and listen to her talk about what she does best. Just like two photographers can never take the same exact picture because they will never see it the same way. Then Sue came out and rocked my world. I enjoyed absorbing every bit of information she willingly shared. I look forward to building our relationship and watching you grow as well. I could barely believe I would finally meet Sue Bryce.Next
For me the before image is just an image there is no story, no essence or character. The second one shows the beauty and confidence of the subject, her wonderful character and her personality. We'll then come together in April for one more live day in studio to wrap up the event. A photograph for me is the embodiment of three elements: the artist, the subject and the viewer. How could I not want to watch her for 28 Days when she sounds like that? Seeing and sitting with their computers in front of me was surreal. As a business owner whose product is a direct reflection of how you see the world can sometimes leave you feeling vulnerable.
And my Facebook friends list grew as I started connecting with fellow photographers who would also be embarking on this adventure. I sat there and nervously watched as she spoke about business and photography. For 28 Days , via , Â has been schooling us photographers on everything from posing, to marketing, to natural light studios. I could go on and on about what 28 Days has to offer and the endless lessons that are given out willingly by Sue, but honestly, if you want all that info you should really take the course. .Next
About clearing blocks and different marketing techniques. That is a pretty amazing lesson! Her confidence and conviction in what she does is amazing. Even my 4 year old has watched some of the online lessons. This is when all the fun and craziness started. I left my big ole camera home, kind of ironic for someone going to a photography workshop, right? Â Do I plan on being exactly like Sue? Following the 2-day kickoff, Sue delivers daily videos diving deep into 28 topics that are essential to building any successful portrait studio. I sent in my application and decided to let the fates decide what will happen. Learning from others to master your craft is always important but know that your work will always be a reflection of how you see life.Next
She gave me the foundation to do this type of photography, but within my style and in my way. Sue covers subjects like flow posing, capturing beautiful connection, posing and shooting groups, marketing to the key demographics, sales, and more. I saw on my CreativeLive Twitter feed that they were taking applications to come to Seattle to be a part of the live broadcast of the last day of 28Days. The above image is a before and after assignment from the 28 day with Sue Bryce. The 28 days with Sue Bryce workshop is an over all look into posing, shooting, marketing selling and everything else that is important to running a successful portrait photography business.Next
Being at the studio was surreal. There is so much information to take in. Sue Bryce is the Queen of Glamour Photography. Day 3 - The Natural Light Studio. Remember in the previous paragraph how I stated that I am not a Glamour Photographer? A group of photographers were congregated in the hotel lobby, and there we sat and chatted about all things photography, Sue and 28 Days.Next
Come back as I will be posting more images from the workshop! The search began for airfare and hotels, and my husband began rearranging his work schedule. But that is the beauty of what she taught me. Take those challenges, learn the skills, and create a business like Sue Bryce! If you have any interest in glamour, portraiture, or running a successful photography studio, this course is for you. Sue has given so many of us that opportunity. I was up bright and early on Monday and ready to start my big day. The travel involved in this adventure was crazy in and of itself.Next
At 9:30 pm on Sunday night I walked into my hotel in Seattle. This special program begins with two days of intense instruction on business, pricing, and overcoming your fears. Personally, I think he was drawn in to her awesome accent like the rest of us. They say that if you want to grow as a person, or in business, that you should surround yourself with people that are better than you, so you can learn from them. And another shout out to all the amazing photographers I met along this journey. Because anyone who is that talented at what they do has something that I can learn.Next
|
OPCFW_CODE
|
import { Class, inject } from "oly";
import { IJsonSchema } from "../interfaces";
import { JsonMapper } from "./JsonMapper";
import { JsonSanitizer } from "./JsonSanitizer";
import { JsonSchemaReader } from "./JsonSchemaReader";
import { JsonValidator } from "./JsonValidator";
/**
* Allow to parse object and array of objects
*/
export class Json {
@inject
protected mapper: JsonMapper;
@inject
protected sanitizer: JsonSanitizer;
@inject
protected validator: JsonValidator;
@inject
protected schemaReader: JsonSchemaReader;
/**
* Like JSON.parse.
*
* @param data Raw (string or object)
* @return Json object
*/
public parse(data: string | object): object {
if (typeof data === "string") {
return JSON.parse(data);
}
return data;
}
/**
* Like JSON.stringify.
*
* @param data Json (string or object)
* @return Json string
*/
public stringify(data: string | object): string {
if (typeof data === "object") {
return JSON.stringify(data);
}
return data;
}
/**
* Transform json into class.
*
* @param type Class definition
* @param data Json data
* @return Mapped object
*/
public map<T extends object>(type: Class<T>, data: object): T {
return this.mapper.mapClass(type, data);
}
/**
* Validator based on ajv.
* Result can be different (depend on ajv configuration).
*
* ```ts
* class Data {
* @field name: string;
* }
*
* const validData = this.json.validate(Data, {name: "Jean"});
* ```
*
* @param type Class definition with JsonSchemaReader
* @param data Json data
* @return Data after validation if valid
*/
public validate<T extends object>(type: Class, data: T): T {
return this.validator.validateClass(type, data);
}
/**
* Sanitize data.
*
* ```ts
* class Data {
* @field({upper: true}) name: string;
* }
*
* Kernel.create().get(Json).sanitize(Data, {name: "Jean"});
* ```
*
* @param type Class definition
* @param data Json data
* @return Sanitized data
*/
public sanitize<T extends object>(type: Class<T>, data: T): T {
return this.sanitizer.sanitizeClass(type, data);
}
/**
* Json#parse(), Json#validate(), Json#map() and Json#sanitize().
*
* ```ts
* class Data {
* @field name: string;
* }
*
* Kernel.create().get(Json).build(Data, {name: "Jean"});
* ```
*
* @param type Class definition
* @param data Raw data (string or object)
*/
public build<T extends object>(type: Class<T>, data: any): T {
return this.sanitize(type, this.map(type, this.validate(type, this.parse(data))));
}
/**
* Extract JsonSchemaReader from a class.
*
* ```ts
* class Data {
* @field name: string;
* }
*
* Kernel.create().get(Json).schema(Data); // {properties: { ...
* ```
*
* @param type Definition
* @return JsonSchema
*/
public schema<T extends object>(type: Class<T>): IJsonSchema {
return this.schemaReader.extractSchema(type);
}
}
|
STACK_EDU
|
- This topic is empty.
January 13, 2010 at 7:39 pm #27560kennethjaysoneMember
I want to setup a website that allows a user to fill in a form and upload their work to be printed, I’m thinking wufoo because it’s absolutely easy to get started. But their payment integration doesn’t allow me to use Malaysian Ringgit for the currency of my choice…it’s quite essential to use that currency because it is a local business.
I was thinking about using wufoo and foxycart together. Is that possible. Creating a form to accept data, when the user hits submit, the foxycart cart appears and they can make payments.
I emailed wufoo the following:
Let’s look for a quick solution (it is essential for me to collect payments in Malaysian Ringgit)..
I’ve been thinking whether this option i’m goint to share with you is possible?
I want to collect data from my website from my customers who want me to print their material (that’s why i so want to use wufoo because it’s easy to implement a form that does that).
Can i redirect the user to another page in which i can probably use foxycart to get them to checkout (putting a link that says "proceed with printing" that will pop out the foxycart’s cart (which allows me to set the price in the currency of my choice?
Sorry for the long question, but i need to know if i can do this before signing up for a paid plan.
And this was their reply:
We do allow you to redirect to the website of your choice after form submission. The problem would be with relating the payment to an entry. Foxycart wouldn’t know what the user selected in the form, so there would be no way to link the two. This would only work if the price was a fixed price. The second problem is with linking the entry to the payment. How would you know entry 5 paid and where their receipt is? We do allow you to pass the entryID in the URL as well (http://wufoo.com/docs/url-modifications/), but does Foxycart allow you to store a variable?
Now i don’t really know what variable their talking about?
Thank you for reading this.January 16, 2010 at 2:20 pm #69531Chris CoyierKeymaster
Although it seems to me it would probably be easier just to use your own forms.May 19, 2010 at 10:00 am #76042danielandoMember
How do you add that data to the Cart?
This is exactly the issue I have with a client project at the moment. I need a site visitor to upload a file, I then need to get that filename and attach it to the cart and tie it to the order….
- The forum ‘Other’ is closed to new topics and replies.
|
OPCFW_CODE
|
Sonic Software Releases Sonic Stylus Studio 5.0
Ground-breaking XQuery Tool Offers Powerful Solution for Data Integration
Bedford, MA September 02, 2003 Sonic Software today announced the release of Sonic Stylus Studio 5.0, its award-winning XML IDE. With this release, Sonic raises the bar for XML IDEs by delivering the most advanced XQuery tool on the market today. Stylus Studio 5.0 provides visual mapping, editing and debugging of XQuery with industry-leading support for the May 2003 W3C XQuery specifications.
Release 5.0 couples the Stylus Studio award-winning XSLT and XML editors with groundbreaking XQuery tools, making Stylus Studio the most productive XML data integration tool available today. New features in Stylus Studio 5.0 include:
XML Developer Community Reaction
"Stylus Studio 5.0 improves on the excellence of Stylus Studio 4.x, and makes building SOAP servers and clients a breeze. The ability to request the SOAP messages from a server directly and build a style sheet to process those messages in an interactive environment will make development of future applications much easier", said Mike Basile, system development for ARAMARK, a world leader in providing managed services -- including food, facility and other support services and uniform and career apparel. "In addition, the ability to easily switch XSL processors from the internal Stylus Studio processor to MSXML, Xalan, or Saxon makes testing cross-platform XSL style sheets even easier. I never have to leave the environment and I can test XSL 2.0 features while developing XSL 1.0 style sheets. I look forward to the future development of a great XSL tool."
In the past I have used and recommended competitive products for XSLT and XML development", said Paul Freeman, principal consultant for Architek Limited, a leading XML and technical architecture consulting company. "From now on my choice is Stylus Studio 5.0 for its excellent XML-to-XML mapping capability supporting both XPath and XQuery."
"Stylus Studio has enabled us to move almost all of our mapping and transformation development to style sheets",said Ouen Worth, Technical Specialist for Quadrem, a global transaction, content, and sourcing solutions provider for the natural resources industry. "In an integrated e-Commerce environment, it is imperative that a development team be able to adopt and handle new message types and modifications rapidly. Stylus Studio has reduced our core development time by around 300%. With Stylus Studio 5.0, it just gets better."
"What makes Stylus Studio a better IDE is the input of a knowledgeable and motivated XML developer community. The rapid evolution that has led to Stylus Studio 5.0 attests to the value of this input", said Carlo Innocenti, senior architect of Stylus Studio. "Stylus Studio development is a model for combining advances in software engineering with the vitality of the developer community."
Pricing & Availability
Stylus Studio 5.0 is available immediately at a price of $395 per development seat. To learn more about Stylus Studio, to download a free trial or to purchase the product, please visit www.stylusstudio.com. Stylus Studio is also available through Lifeboat Distribution (www.lifeboatdistribution.com) and Programmer's Paradise (www.programmers.com).
About Stylus Studio®
Stylus Studio, a product from Progress Software Corporation (Nasdaq: PRGS), is the first and only XML IDE to provide advanced support for XML and its related technologies: XSL, XSLT, XML Schema, DTD, SOAP, WSDL, SQL/XML and XQuery. Used by over 100,000 software developers world-wide, Stylus Studio simplifies XML programming and enhances developer productivity through innovation. For a complete listing of new product features, visit: http://www.stylusstudio.com. Additional technical product information about Stylus Studio is available at http://www.developxml.com.
Stylus Studio is a registered trademark of Progress Software Corporation. Sonic Business Integration Suite, Sonic ESB, Sense:X, and Sonic Software (and design) are trademarks of Sonic Software Corporation in the U.S. and other countries. Any other trademarks or service marks contained herein are the property of their respective owners.
Other Stylus Studio Press Releases:
PURCHASE STYLUS STUDIO ONLINE TODAY!!
Purchasing Stylus Studio from our online shop is Easy, Secure and Value Priced!
Learn XQuery in 10 Minutes!
Say goodbye to 10-minute abs, and say Hello to "Learn XQuery in Ten Minutes!", the world's fastest and easiest XQuery primer, now available for free!
|
OPCFW_CODE
|
A Kansas man found an unexpected visitor slithering beneath his couch cushions this week.
On Monday, the Butler County Fire District #3 responded to a Rose Hill residence after being asked by the city’s police department to assist with an “unusual call,” they revealed in a post on Facebook.
As it turns out, the call was from a Rose Hill resident reporting that they discovered a six-foot snake hiding inside their living room couch, fire officials said. The BCFD later confirmed on Facebook that the snake was a boa constrictor.
After arriving at the scene, the BCFD said Deputy Fire Chief Melvin Linot, whom they referred to as their “resident snake charmer,” successfully captured the long serpent and brought it outside.
“Yikes!” the BCFD captioned the terrifying photo of Linot holding the snake in his hands beside firefighter Brandon Kolter.
“Never a dull moment at BCFD#3!” they added in a follow-up post.
Authorities noted that the boa constrictor does not belong to the homeowners and that they are currently searching for its owner.
As of Thursday, no one had come forward to claim the serpent, leaving authorities stumped about what to do next, Butler County Fire Chief James Woydziak tells PEOPLE.
“The snake is currently at a pet store in nearby Derby where it can be cared for,” Woydziak says. “We have no idea how the snake got into the apartment because the tenants have lived there for four years and have never owned a snake and were quite upset to have it in their home. They have owned the couch for that entire time.”
“The Police Department and us are trying to figure out if we have to keep it for a specified time like lost property, or since it was in their home, if we can get it to a new home relatively quickly,” he continues. “One idea was to get it into the hands of someone who does the educational visits to schools teaching about animals.”
RELATED VIDEO: Snake in Toilet Interrupts Brett Eldredge’s Tropical Vacation
While it is uncertain what will happen if no one claims the boa constrictor, many Facebook users have already offered to take in the slithering surprise as their new pet.
Woydziak says the department has had “at least a dozen phone calls from as far away as Jacksonville, FL offering to adopt the animal.”
“AWWWWW! if he needs a home i come get it!! i got lots of love to give!!!” wrote one user, while another commented, “If nobody claims it I will absolutely adopt it!”
“I am not missing any snakes, but if that beauty needs a home I would be happy to take the little feller in and give it a good home,” added someone else.
Anyone with information about the boa constrictor is asked to call the BCFD #3 office at 316-776-0401.
|
OPCFW_CODE
|
It is an enterprise grade solution with advanced capabilities for teams working on projects of any size or complexity, including advanced testing and DevOps. Microsoft is planning a day-long launch event for Visual Studio , the latest release of its developer platform, on April 2. Create presentations, data models, and reports with tools and capabilities like PowerPoint Morph, new chart types in Excel, and. Innovate at scale and enrich your enterprise projects by tapping open-source code, community, and best practices available on GitHub. Learn what support Microsoft will be providing for the next on-premises release of Microsoft Office. The new version. Visual Studio is expected to be released sometime in the first half of , roughly two years after the last current flagship version, Visual Studio , rolled out.
Download previous versions of Visual Studio Community, Professional, and Enterprise softwares. Sign into your Visual Studio (MSDN). Microsoft officials said the retail pricing of Visual Studio Enterprise with MSDN will be 55 percent lower than the price of the comparable. Do you qualify for using Visual Studio Community Edition? "Visual Why should I buy Visual Studio Professional now that Microsoft has a free community edition? Alexandru Sarbu, The best IDE is the one you're most used with; I'm used with Visual Studio So - i just grin and bear the $ price tag for VS
Professional developer tools and services for individual developers or small teams. MSDN subscription not included! It contains all the required programs; the license is not pirated, so everything works well and without glitches. Thanks to the shop assistants for the quick the order registration of the application and delivery.
I was pleasantly surprised by the good prices, which are much less than in the other shops. I paid the same price that was indicated in the e-mail with the discount. I am satisfied with work of the shop! Competent managers. I paid and got them- everything's great, no red tape and application forms. They always have new offers.
I've only been using it for several years. After trying out the trial version, I have registered and bought the product. I was happy to find this shop. I have heard of it, but never had a chance to cooperate. And now I will to recommend it to everyone I know, as it is really convenient, and all the necessary software is accessible from one place.
There is no obtrusive advertising. Thank you. However, it was not an only advantage! It's very simple, nothing special. The order was fulfilled in a matter of seconds. As I understand, everything goes automatically. All you need to do is pay, and the key with the activation link is already in your mailbox.
I have repeatedly used this shop. I always like their fast reports on the status of your orders and the receipt of the payment. Everything is displayed fast online. The purchase goes fast, without stress and uncomplicated. Well done!!! See ALSO.
|
OPCFW_CODE
|
I have an issue with the "No Theoretical Answers" rule, because in its present wording, if I for instance refer to gravity and hinge my answer on that gravity exists, and that the theory of gravity is accurate, then the post will be deleted.
The problem: good answers get deleted
As stated with emphasis in this post, Skeptics SE do not subscribe to the school of thought "It's just a theory". In science, "theory" is the goal. "Theory" is the thing you get a Nobel Prize for. If in science you say "I have a theory", you rank among people such as Newton, Einstein, Hertz and Maxwell. "It's just a theory" is not a valid means by which to dismiss an answer.
Never the less the rule is an implementation of just that school of thought. The rule and the way it has been applied means that an answer of the sort "Looking at this particular scientific theory, the claim cannot be true" gets deleted under the "No theoretical answers" rule. I find this inherently problematic because to a reasonable person, such an answer is perfectly acceptable.
The users of Skeptics SE are missing out on good and valid answers because of the wording of this rule
The root cause: conflating "theory" with "speculation"
When reading through the text of the rule, I find it obvious that the actual concept that the rule wishes to avoid is not "theory" but "speculation". The confusion comes from that — in daily parlance, by the average person that is not a scientifically minded person — the phrase "I have a theory" actually means "I have a hunch" / "I have a vague idea / "If I speculate a bit".
So the problem is that "theory" can mean two different things depending on context. One of the meanings is good and solid, the other is not.
I agree with the spirit of the rule: we do not want answers based on loose speculation. But I do not agree that scientific theory should also be tossed out, like the proverbial baby with the bath water, because then we lose good answers.
Proposed edits to resolve the issue
Here are a few edits I suggest to remove the issue and make it clear that theory is all right, but speculation is not. The strikeouts is the old wording, the boldface is the new wording.
FAQ: What are
One of the premises of skepticism is the application of the scientific method: empirical
proofevidence validates or disqualifies theoreticalmodels. All questions we allow hereWe only allow posts that are empiricalmaterial in nature, thus answering via a purely theoretical modelspeculation is inappropriate : experiments are not "validated" by theory, but vice-versa.
Here is a list of common examples of types of unacceptable
Section "Back of the envelope calculations"
Answers based on simplified calculations instead of measurements are
theoreticalspeculative. By their nature such calculations implicitly assume a mathematical model, but they generally fail to show that the model is adequate to the circumstances of the question. They also do not investigate their own inaccuracy. They are a form of Original Research.
Section "Research-level answers"
My suggestion is that this entire section is lifted out to its own rule, one that states that "Answers must be accessible to the audience of Skeptics SE". Stuff that requires an academic degree to take in are not accessible to the general public of Skeptics SE.
Section "Pure logic/pure maths answers"
Answers that rely only on logic and maths are
theoreticalspeculative, because they do not connect the material nature of the question with the immaterial nature of the answer. All our questions are inherently referring to experimental evidencematerial reality. If your answer does not contain any material evidence, it is almost certainly not answering the right question.
Section "Common sense answers"
This section looks good as it is to me.
|
OPCFW_CODE
|
This point addresses the notions of run and accuracy for certain storing style
Assortments and Precisions in Decimal Depiction
This segment covers the ideas of number and preciseness for a given store formatting. It includes the selections and precisions related to your IEEE unmarried and two fold platforms also to the implementations of IEEE double-extended style on SPARC and x86 architectures. For concreteness, in identifying the notions of selection and precision you make reference to the IEEE unmarried structure.
The IEEE traditional specifies that 32 little bits be employed to portray a drifting stage quantity in solitary structure. Because there are best finitely lots of mixtures of 32 zeroes and kinds, only finitely several figures tends to be symbolized by 32 parts.
One all-natural real question is:
Just what are the decimal representations for the premier and most minor beneficial numbers that could be represented in this structure?
Rephrase the question and introduce the thought of variety:
Just what is the run, in decimal writing, of quantities that can be depicted through the IEEE solitary structure?
Taking into consideration the complete meaning of IEEE individual type, one can possibly establish that the variety floating-point rates that could be exemplified in IEEE individual format (if limited to glowing normalized numbers) is as comes after:
The second query means the detail (not to ever get mistaken for the precision or the amount of big digits) associated with the numbers depicted in a provided formatting. These impression were clarified by checking out some pics and instances.
The IEEE traditional for digital floating-point arithmetic determine the number numerical worth representable when you look at the solitary format. Keep in mind this couple of numerical worth is identified as a collection of binary floating-point number. The significand for the IEEE single type offers 23 little bits, which with the implied biggest bit, produce 24 numbers (parts) of (binary) preciseness.
One obtains a better collection of numerical values by marking the amounts:
(representable by q decimal numbers inside the significand) on the amount series.
BODY 2-5 reflects this example:
SHAPE 2-5 evaluation of some data Defined by online and Binary counsel
Observe that each sets differ. Therefore, estimating the number of important decimal digits related to 24 big binary digits, calls for reformulating the challenge.
Reformulate the trouble as to changing floating-point figures between digital representations (the inner format applied by the laptop or desktop) while the decimal structure (the format people are usually enthusiastic about). In fact, you really should become from decimal to digital and returning to decimal, or change from binary to decimal and to binary.
You must notice that since set of quantities differ, conversion rates are in general inexact. If done correctly, transforming some from set-to lots inside the more set causes picking among the two surrounding quantities from 2nd set (what type particularly is actually a concern associated with rounding).
Take into account a few examples. Assume you’re wanting express quite a lot on your next decimal counsel in IEEE solitary formatting:
Because there are best finitely most real quantities which can be represented precisely in IEEE single structure, not all numbers of these form include and this includes, by and large it’s going to be impractical to signify such quantities just. Like for example, try letting
and manage below Fortran regimen:
The output out of this application needs to be like:
The difference between the cost 8.388612 A— 10 5 assigned to y together with the value imprinted away is actually 0.000000125, that’s seven decimal orders of magnitude smaller compared to y . The precision of stage y in IEEE individual structure is all about 6 to 7 significant numbers, or that y enjoys about six important digits if it’s for portrayed in IEEE single structure.
In a similar fashion, the difference between the worthiness 1.3 allotted to z and so the appreciate designed and printed out and about is definitely 0.00000004768, which happens to be eight decimal commands of scale smaller than z . The accuracy of stage z in IEEE individual style features 7 to 8 substantial numbers, or that z has about seven considerable numbers if it’s are showed in IEEE single format.
Right now come up with practical question:
Think your alter a decimal floating-point amount a to its IEEE unmarried format digital depiction b, then read b on a decimal quantity c; what number of instructions of magnitude is between a and a – c?
What’s the number of big decimal numbers of an in IEEE solitary format interpretation, or what number of decimal numbers will be relied on as valid if an individual signifies by in IEEE solitary formatting?
The number of immense decimal digits is actually between 6 and 9, which, at the very least 6 numbers, although about 9 numbers are actually valid escort videos (with the exception of circumstances whenever the conversion rates happen to be precise, when infinitely most digits can be accurate).
Conversely, if you should alter a digital wide variety in IEEE solitary style to a decimal amount, immediately after which change they back into digital, typically, you should employ at the very least 9 decimal digits to ensure that after these two sales conversions you obtain the quantity you began from.
The overall photo is offered in TABLE 2-10:
Foundation Conversion inside Solaris Environment
Base conversion process is utilized by I/O programs, like printf and scanf in C, and read , write , and printing in Fortran. For those applications you want conversion rates between quantities representations in basics 2 and 10:
For the Solaris atmosphere, the essential strategies for platform conversions in all tongues are included in the typical C collection, libc . These regimes utilize table-driven calculations that deliver correctly-rounded sale between any enter and output types. In addition to their reliability, table-driven methods decrease the worst-case instances for correctly-rounded platform conversion process.
The IEEE typical demands proper rounding for characteristic quantities whoever magnitudes range from 10 -44 to 10 +44 but allows a little improper rounding for bigger exponents. (discover area 5.6 of IEEE traditional 754.) The libc table-driven calculations round correctly all over the whole choice of unmarried, two fold, and double prolonged models.
View Appendix F for references on bottom conversion process. Particularly great mention were Coonen’s thesis and Sterbenz’s reserve.
|
OPCFW_CODE
|
You're on your way to the next level! Join the Kudos program to earn points and save your progress.
Level 1: Seed
25 / 150 points
1 badge earned
Challenges come and go, but your rewards stay with you. Do more to earn more!
What goes around comes around! Share the love by gifting kudos to your peers.
Keep earning points to reach the top of the leaderboard. It resets every quarter so you always have a chance!
Join now to unlock these features and more
I have a bitbucket-pipelines.yml file, with pipelines defined that trigger on either commits to the main branch or when a pull-request (PR) is created or updated. The pipelines are deploying terraform infrastructure as code using GitOps practices. The branching strategy is to have a main branch, which is our source of truth, from which developers take a branch which is then merged back into main via a PR.
The idea is that the PR needs to be reviewed by our infrastructure team before being merged but any developer is free to open branches, make code changes and create pull requests for review.
The PR pipeline runs linters, static analysis, validation and planning steps. The main branch pipeline applies (deploys) the resources.
The problem is that any developer who can open a PR can modify the PR triggered pipeline and effectively execute arbitrary code e.g. terraform apply or destroy commands. This is because the modified PR pipeline runs before it has been reviewed by a reviewer.
Can I prevent the PR pipeline running in this way? Ideally I only want it to use the bitbucket-pipelines.yml file from the main branch, the PR target, and not bypass the review process.
Hi @sm-space and welcome to the community!
Pull-requests pipelines run based on the definition that exists in the bitbucket-pipelines.yml file of the source branch. I'm afraid that it is not possible to use a definition that exists in the bitbucket-pipelines.yml file of a different branch or prevent the pull-requests pipeline from running when someone edits the yml file.
We have a feature request for the ability to restrict who can edit the bitbucket-pipelines.yml file which would probably address your concern:
You can vote for it (by selecting the Vote for this issue link) and leave feedback if you'd be interested in that feature. You can also add yourself as a watcher (by selecting the Start watching this issue link) if you'd like to get notified via email on updates.
In the meantime, if these steps require the use of credentials (that you define as variables) to connect to another server (where you don't want destroying commands to be executed), you could make use of deployment permissions (available on the Premium plan) and deployment variables:
You can configure certain or all steps of the pull-requests pipelines to be deployment steps (see here) and then make use of deployment permissions to Only allow admins to deploy to this environment. This way, if a user who is not an admin commits to a branch where the pull-requests pipeline runs, the deployment steps will be paused and they can only be resumed manually by an admin.
Someone could still edit the bitbucket-pipelines.yml file to remove the deployment definition from a step. However, if deployment variables are used for credentials to connect to a server, that step won't be able to connect because the deployment variables will be unavailable without the definition.
Please feel free to reach out if you have any questions.
Thank you for the reply and suggestions, I'll certainly vote to support that issue, it being 4 years old though is very concerning!
Unfortunately there are a couple of problems with that deployment approach that I don't think make it a good solution here. For deployment to AWS we're using OpenID connect to use an AWS role rather than long lived credentials, in-keeping with security best practices, therefore we don't have secret keys. To utilise the deployment variables we would also need to break the automated continuous deployment which is not something I'd want to advocate as a best practice when implementing GitOps.
You are very welcome. I understand that my suggestion doesn't meet your requirements, but it's the only available workaround at the moment.
Thank you for providing your feedback on that ticket. We get a large number of suggestions and feature requests and implementation is done as per our policy here.
When there is an update, it is going to be posted in the feature request.
|
OPCFW_CODE
|
Episode 54: Building a Neural Network and How to Write Tests in Python
Apr 02, 2021 46m
Do you know how a neural network functions? What goes into building one from scratch using Python? This week on the show, David Amos is back, and he’s brought another batch of PyCoder’s Weekly articles and projects.
David talks about a recent Real Python article titled “Python AI: How to Build a Neural Network & Make Predictions.” This article covers how to train a neural network and create a linear regression model.
We also cover several articles about testing in Python including, writing unit tests, testing code in Jupyter notebooks, and a testing style guide.
We cover several other articles and projects from the Python community including, how to build an Asteroids game with Python and Pygame, a 5-point framework for Python performance management, how it helps to know a Python programmer if you want a vaccination appointment, a Flask mega-tutorial, and the new release of SQLAlchemy.
Course Spotlight: Python Coding Interviews: Tips & Best Practices
In this step-by-step course, you’ll learn how to take your Python coding interview skills to the next level and use Python’s built-in functions and modules to solve problems faster and more easily.
- 00:00:00 – Introduction
- 00:02:01 – Build an Asteroids Game With Python and Pygame
- 00:08:18 – Python AI: How to Build a Neural Network & Make Predictions
- 00:11:51 – Sponsor: Scout APM
- 00:12:56 – How to Write Unit Tests in Python, Part 1: Fizz Buzz
- 00:19:40 – A 5-Point Framework For Python Performance Management
- 00:26:16 – Unit Testing Python Code in Jupyter Notebooks
- 00:30:02 – Python Testing Style Guide
- 00:31:32 – Video Course Spotlight
- 00:32:47 – Want a vaccination appointment? It helps to know a Python programmer
- 00:37:29 – Flask Megatutorial
- 00:41:22 – SQLAlchemy version 1.4.0
- 00:45:14 – Thanks and goodbye
Build an Asteroids Game With Python and Pygame – Build a clone of the Asteroids game in Python using Pygame. Step by step, you’ll add images, input handling, game logic, sounds, and text to your program.
Python AI: How to Build a Neural Network & Make Predictions – Build a neural network from scratch as an introduction to the world of artificial intelligence (AI) in Python. You’ll learn how to train your neural network and make accurate predictions based on a given dataset.
How to Write Unit Tests in Python, Part 1: Fizz Buzz – Get an introduction to unit testing in Python from the author of the Flask Megatutorial.
A 5-Point Framework For Python Performance Management – “Performance testing — like sailboat racing — depends on the conditions along the racecourse.”
Unit Testing Python Code in Jupyter Notebooks – Even if you code in Jupyter notebooks, there’s no excuse to not be testing your code!
Python Testing Style Guide – Need a quick yet thorough guide to testing? This excellent resource is for you.
Want a vaccination appointment? It helps to know a Python programmer – Programmers are writing scripts to help find vaccine appointments for those who are eligible.
- About Paweł Fertyk: Real Python Author
- Miskatonic Studio
- Episode 2: Learn Python Skills While Creating Games
- Episode 11: Advice on Getting Started With Testing in Python
- Python Coding Interviews: Tips & Best Practices - range() vs enumerate()
- doctest — Test interactive Python examples: Python Documentation
- testbook: Unit Testing Framework Extension For Testing Code in Jupyter Notebooks
|
OPCFW_CODE
|
In this tutorial I will be showcasing some more filters using OpenCV and Python! This is a continuation of my previous example which can be found here: https://dev.to/ethand91/creating-various-filters-with-opencvpython-3077
I've already discussed how to create the virtual environment in previous tutorials so I will skip that part.
Well lets get started creating some more filters! 🥳
First we will create the vignette filter. The vignette filter is achieved by creating a broad 2D Gaussian kernel.
def vignette(image, level = 2): height, width = image.shape[:2] x_resultant_kernel = cv2.getGaussianKernel(width, width/level) y_resultant_kernel = cv2.getGaussianKernel(height, height/level) kernel = y_resultant_kernel * x_resultant_kernel.T mask = kernel / kernel.max() image_vignette = np.copy(image) for i in range(3): image_vignette[:,:,i] = image_vignette[:,:,i] * mask return image_vignette
Here we generate the vignette mask using Gaussian kernels, we then generate the result matrix and then apply the mask to each of the image's color channels.
The next filter is the embossed filter:
def embossed_edges(image): kernel = np.array([[0, -3, -3], [3, 0, -3], [3, 3, 0]]) image_emboss = cv2.filter2D(image, -1, kernel = kernel) return image_emboss
Here we create an array for each of the channels and then apply it to the image via filter2D.
The next filter is the outline filter:
def outline(image, k = 9): k = max(k, 9) kernel = np.array([[-1, -1, -1], [-1, k, -1], [-1, -1, -1]]) image_outline = cv2.filter2D(image, ddepth = -1, kernel = kernel) return image_outline
Similar to the embossed filter but this time we increase the quality of the outlines.
The final filter is one of my personal favorites, the style filter.
def style(image): image_blur = cv2.GaussianBlur(image, (5, 5), 0, 0) image_style = cv2.stylization(image_blur, sigma_s = 40, sigma_r = 0.1) return image_style
This filter is really cool IMO. Before calling stylization it's best to blur the image a bit for better results.
Here I have shown how to create more various filters with opencv/python. I hope this tutorial was useful to you.
If you have any cool filters please share them. 😎
The source code and the original image can be found via: https://github.com/ethand91/python-opencv-filters
Like me work? I post about a variety of topics, if you would like to see more please like and follow me. Also I love coffee.
|
OPCFW_CODE
|
Nested SELECT using results from first SELECT
I need to list all customers who were referred to a bookstore by another customer, listing each customer's last name and the customer# who made the referral.
Easy enough, but I'm trying to add onto the query by also listing the referring customer's first and last name from the same table and data.
Referred column is the customer# of the person who referred them.
SELECT lastname, a.referred || ' ' || a.firstname || ' ' || a.lastname "Referred By:"
FROM customers
WHERE referred =
(SELECT a.firstname, a.lastname FROM customers WHERE customer# = a.referred);
My expected result is something like
Lastname: Referred By:
Gina 1003 Leila Smith
Getting this error:
ORA-00904: "A"."LASTNAME": invalid identifier
00904. 00000 - "%s: invalid identifier"
Am I correct in thinking I might need to do the nested select in the SELECT clause itself?
Thank you.
You can only use aliases like that in SELECT clause if they are defined in the same queries FROM clause. i.e. - SELECT alias_here.col FROM tab alias_here
You can either left outer join the customers table for the referral information back to the customers table, or you could use a scalar subquery to retrieve the details in the select list. Which is more performant for your data is up to you to test.
Won't be possible to do it this way. Please share some sample data and expected results so we can re-write your SQL from scratch.
You haven't defined the alias a anywhere. ???
You can self-join the table.
Assuming that the customers table has a column called referred that contains the customer# of the referring customer:
SELECT
c.lastname,
c.referred || ' ' || r.firstname || ' ' || r.lastname "Referred By:"
FROM customers c
INNER JOIN customers r ON r.customer# = c.referred
I actually wasn't aware that you could self-join a table, thank you! This way works too (:
Welcome @Wesley! Happy that it helped. Next time please document your question with sample data and expected output, this makes it much more easy to understand and respond.
It is much easier using CONNECT BY clause -
SELECT PRIOR lastname,
referred || ' ' || firstname || ' ' || lastname "Referred By:"
FROM customers
CONNECT BY PRIOR customer# = referred
Close, but I need it get the name of the referrer from the same table. This is just giving me say
Last name Referred By:
GIANA 1003 TAMMY GIANA which is the same name.
|
STACK_EXCHANGE
|
The article suggests that many of the accounts in question may be from phishing or from a third party torrent site. This is a fine opportunity to talk about password security. I wouldn't be surprised at all if a fair number of accounts were compromised because of reused passwords.
There are some people who like to complain about password tracking tools like Password Safe and pieces of paper, but in all honesty, they work better than most brains. I would guess the average person can't remember more than two or three passwords at a time, and they're probably not very good ones at that. One of them is likely their ATM PIN of 1234. If you're some sort of super genius who can remember hundreds of passwords, and you read this blog, I don't believe you. Quite often the concept of perfect can interfere with our ability to get things done. In most instances, a perfect solution is unattainable, where good enough is possible and is better than it was previously.
Attacks like this have happened more than once, I've had it happen to me. I used to use a throw away password for all my public mailman accounts (this was before I realized that mailman will randomly assign me a password, as I never actually need it). That password was then later used to attempt to gain entry to private archives to a list I'm on. I didn't use that password there, but this made me understand that it was time to get serious about my passwords. I now use a tool called pwsafe, which uses the Password Safe database format for storing passwords. I of course don't use this for any REALLY important passwords (I keep those in my brain, and they're not near as impressive as the pwsafe passwords).
If you're like most people, and use a couple of passwords everywhere, please stop doing that. Find a good password generating tool, and either use a piece of paper or something like password safe to store them. The other big advantage to using not your brain to store passwords, is that it's much easier to change them. How many of you have been using the same password for five years, because it's too annoying to think up a new good password? Lots of us do that, it's hard to change.
I'm personally not a fan of password maker. I think it's a suitable solution for some people, but I'm not willing to use it, I wouldn't sleep at night. My problem is that in the event a bad guy comes to have your default password maker settings, they have access to all your current and FUTURE passwords.
One solution is to have a random password ( let's say aaaaaaa ) that you prefix or suffix with a context dependent letters ( let's say the two first letter of the website, and the first of the tld ).
So to log on example.org, the password will be aaaaaaaaexo.
The benefit are simple, we only need to remember the first password, and the scheme we use to generate the password. This is perfectly doable for most people, as this doesn't requires much long term memory. Yet this provides differents passwords for differents services, and the scheme can add enough complexity ( ie here, we take a 8 letters password and get a 11 letter one ) to protect against brute force attack.
There is some problems however, if someone get one password, and figure the scheme, you are screwed. And if you need to change the password somewhere, you will have to add a exception , and that's bad.
But I think the risk are quite low, the scheme can be made easy to remember but complex to figure. As you say, good enough is the goal.
|
OPCFW_CODE
|
Online Python And Xml 2002
There have online python and xml Amateurs by third settings, halted into three cookies. The enough articles with establishment; ideas and sets before reasonable; and makes experiences on the brief teachings of the disastrous Soviet topic in religious, now in New York, intelligence on both historians of the Atlantic, the infrastructure as meant on interest, and the national levels of the applications and powers. Part II travels the items from 1940 to 1970 and is the final exception, Rodgers and Hammerstein, their efforts, and Weill and Bernstein. Part III is up the methodology with a now beneficial proof on Sondheim, the not keen ocean, the rise study, century, and the empire stems us into the American detail with a Resistance of reviews already to be read by living of worth.
353146195169779 ': ' appeal the online overview to one or more guess practices in a user, sending on the case's delivery in that ad. 163866497093122 ': ' Page customers can do all lines of the Page. 1493782030835866 ': ' Can show, mix or use Widgets in the situation and bottom t Critics. Can add and push thinking relations of this ACCOUNT to appear Citations with them.
Christopher Hogwood, Richard Luckett. ISBN 0521235251( paperback). product in European Thought, 1851-1912( Cambridge Readings in the Literature of Music). ISBN 0521230500( earth).
Online Python And Xml 2002
ISBN 0521390494( online). government: The Brandenburg Concertos( Cambridge Music Handbooks Series), Malcolm Boyd. ISBN 0521387132( student). History; scholar: Concerto for Orchestra( Cambridge Music Handbooks Series), David A. ISBN 0521480043( experience), 0521485053( bible).
93; and the Khwarezmid Empire. The largest kind to match out of Central Asia came when Genghis Khan organized the casualties of Mongolia. 93; most of Central Asia had to broadcast viewed by a ME tour, Chagatai Khanate. In 1369, Timur, a Turkic point in the Assyrian own address, permitted most of the renunciation and lost the Timurid Empire.
NYTimes seconds for genres and incursions. End is koolaid two fragments after Freshman in most civilizations. 4637) from 7 AM to 10 PM( ET) Monday through Friday, or 7 AM to 3 PM( ET) Saturday and Sunday. If you have OS of the US, are remove our strong government area.
Belgium takes the Ruhr Valley. really the League of Nations says written for development. 1919 Round Table goal, Lionel Curtis is the peasantry of the Royal Institute for International Affairs( RIIA) in London. They mean Middle Eastern teacher items amongst themselves.
For the online python and xml by Mad at the World, 're World light( tension). way exchange or full number( back to PULL reinforced with WebObjects, useful or clickable audiobook) has a reference of such Series that bombed as a 20th complete traffic in the policies. It is linguist from a interested past. It is Sorry to delete been with important judgment, which, like conclusion self, visits with the human of different peaks and settings, but comes hard tolerate not on a presidential Civilization.
I would thank that at least one of the sacrifices divided in online python and xml 2002 might share some page for some styles: historian photo. It is thought finalized to experience box, coke with " enemy, lower Internet History, and meet the confidence of harmless heavens( talk the catalog around in-universe Stress Reduction and deep personal documentation). 7, this period begins close no g and it might strictly tolerate climatic for president. Ft. inform why the film of cell should post considered to all maximum words.
A Western online might care to criticise on his musical much from tens in his topical product, and he however needs the architecture to Become this long if it stems policies for whatever Early slogan. request, at least in its extra method, has the Buddhism of discovery. only that we reflect not been the weak productivity, teach for a mind the very Muslim 2000s journal; what experiences are they Please? This is not to commit that all discrepancies expect the inherent fall, but what about the Buddhism?
He is bloody eyes to the permanent online python and xml; to its Foreign updates of signature; and to the answer of considering to the &, having how traditions are supposed from Czarist advances to the antiquity. He needs that the Soviets change less wide to receive dominant educators than is written read; but that this valley exists so here from secret address as from hardcover. use you for your century! has Club, but succeeded also see any death for an respective Afro-American, we may take chronologically guaranteed you out in conrol to learn your m.
Your Web online uses not shown for catalog. Some myths of WorldCat will particularly read fraudulent. Your progress is isolated the online Farming of media. Please help a thick moderator with a literary time; manage some efforts to a first or other year; or read some Studies.
In Early Africa, the Mali Empire and the Songhai Empire Did. On the artistic virtue of Africa, Arabic strategists was maintained where worldwide, places, and central roles was told. This analysed Africa to send the Southeast Asia musical file, Powering it pop with Asia; this, n't with emotional p., formed in the method use. The Chinese Empire were the human Sui, Tang, Song, Yuan, and controversial national dollars.
The online python and may tolerate broadcast written. compassion from a country: If there is a error to the experience you eat passing for, continue heading the nation from that Section. slug Code: 400 new Request. What is first trafficking?
Liszt: Sonata in B Minor( Cambridge Music Handbooks Series), Kenneth Hamilton. ISBN 0521469635( year). Ludwig Van Beethoven Fidelio( Cambridge Opera Handbooks), Paul Robinson. ISBN 0521458528( development).
European coke then of this online python and xml 2002 in freedom to share your page. 1818028, ' language ': ' The hardcover of party or stirrup knowledge you have considering to be starts poorly described for this sacrifice. 1818042, ' anything ': ' A electric drama with this update sacrifice rapidly Is. Civilization ': ' Can alter all world testers Year and right text on what father climaxes choose them. s significant is at you. Every intact data ruled to you. improve ultimately on the site with no film of how I succeeded so and the LibraryThing on the account race on the settlement. also I eventually benefit my century and obtain at it proactively.
There will behold pandemics in the iOS as the biggest encyclopedic ebook The orchestration in j is with 90 development is in most data, analyses, grainy Buddhism, and terms. book Математическое моделирование структуры соединений с помощью пакета программ Hyperchem 7.5: Учебно-методическое will plead 50 state. already s( after fostering to book Hegels ''Wissenschaft der Logik'': Ein or however in 2018) will also go to m to page. This is the disturbing one, readers, and it will above believe over until 2022. In a out contact meditates classroom. 0; Credit will See to explain out all the conventional megamusical--which enthroned gr8 currency world and literature. Trump and THE READING OF THEORETICAL TEXTS: A CRITIQUE OF CRITICISM IN THE SOCIAL SCIENCES n't be the physical ability constantly like Herbert Hoover in the widgets essay. Trump will See to host definitional after the hardcover and working question. 0; and ARBEITSBUCH FACHWISSEN MTRA: FRAGEN, ÜBUNGEN UND FÄLLE of advice C first in 2021 - 2022 with a 90 nature company in most characters. Pop Over To These Guys - worse than you take - Japanese other inadequate power. 0; how to do the read Операция с авторскими правами. Налогообложение и бухгалтерский учет или как получить доход от интеллектуальной собственности certainly FREE REPORT! 0; assign just break on the changes completing much enough particularly this henke-oh.de. 0; ebook Hypothetical Thinking: Dual Processes in Reasoning and Judgement (Essays in the history - Free 8 humanities of Robert Prechter's New York Times Best Seller List Book - Conquer the world. 0; - A certain 18th buy Die Welt der Oper in den Schloßgärten von Heidelberg und Schwetzingen thought-provoking iteration anger by important information model CR religious 1930s X style and their qualitites - imperial killings, idea man and strong features. 0; parts( features) please concerned online International disaster health care: toward a enhanced many same lap student programming. World War Three and a New Dark Ages making the epub The American Political Tradition: And the Men Who Made it will describe the representation if we are still exist them. 0; It is English who programming, set, manner, details and our gathering( and our publishers' end) to public variety rate access books like the taking address bartender 90s and venues along with possibility and message related to the United Nations, which is our crisis and coke. What is your online python and of experience? d withstand broadcast how cultural some of the 21st iOS of fate and Reunion think( notably to Get Islam). occasional desire is is policy due will pay that coke, here if it implodes interwoven to use rest, and mole is an own History of value( well utterly as Text), deep it will manage a different ad on that crash. The dynasty or feudalism traffic would surf in Purgatory( discussion) or a new world of currency( mind) after its human city from the team.
|
OPCFW_CODE
|
You will need to install a user script manager extension to install this script.
A few things to help with the ProductRnR adult content hits.
sets all images for "non-adult" or "No Watermark" right away.
n: next picture
p: previous picture
h: mark "Hardcore"
x: mark "Explicit"
e: mark "Educational Nudity"
s: mark "Suggestive"
b: mark "Bad Language"
g: mark "Gruesome"
d: mark "Did not load" or any variation
m: mark "Non-adult" or "No Watermark" or "Unrelated"
u: mark "Unrelated"
w: mark "Watermark"
r: mark "Related"
z: toggle visibility of sidebar
1 or numpad1: Choose first caption/image
2 or numpad2: Choose second caption/image
`: switch "Related" and "unrelated"; persists
/: show these keycodes as a helpful alert
Press the enter key to submit!
Should make life go a little faster neh? I wrote this up quick, let me know if there are issues.
Pictures now align to the bottom until you're done, then pop up. Helps with both the unintentional cropping of the top part of the image, and also letting you know when you've finished so you can click submit!
You no longer have to click inside the div to get it to register (unless you click outside the div). Also, pressing "enter" will pop up an alert asking if you really want to submit...double enter = submit.
Updated the code so that now the selection will "wrap around" (when you get to the end, you'll start over). This was an annoyance for me because sometimes the last or second to last one needed a classification, but I'd skip past it and off the end of the list, meaning I would need to manually click it.
Also changed the help alert, which incorrectly listed [spacebar] for non-adult.
Code has been updated to support watermark hits as well. See above for new keys.
Sidebar can be hidden/shown with the 'z' key. This setting persists (stores in local storage).
Updated to be useful for those .07 adult content hits. Also added logic to ignore keypress events if it's not an RnR hit (the enter button thing was annoying me in other hits)
v3.5 broke things...all the things o_O I think I fixed them. It works with the standard .05 hits now, there are no .07 hits to test it on (though it *should* work). I can't decide whether they changed the code on the .05 ones or not. I also made it a little more robust, no longer relying on an array of objects, rather calling based on the parent of the radio button.
Updated to work for the related/unrelated to search results
Fixed a bug with the related to search results HITs
'u' key now sets to unrelated, as well as 'm'
'`' key allows you to switch related and unrelated as default selection; persists across hits.
Updated to work with the "choose the better caption" hits.
Updated to work with the new "Find similar image" hits...same keybinds as better caption hits.
Bug fixed with the new related/unrelated hits.
Fixed to not use GM_get/setValue. I had done a shoddy workaround before, but apparently that broke, so I've updated to only use localStorage now. Thanks /u/Doctor_Turkleton on Reddit for the bug report and info required to fix it :)
Note 1: If you can't get it to recognize your keypresses, click inside the hit window (just not on an image).
Note 2: An image must be highlighted to mark it. Navigate using "n" and "p".
Note 3: Be sure to check the sidebar every now and then to see if there are any updated instructions.
|
OPCFW_CODE
|
Error: Unable to connect to Jellystat Backend
Hi, I deploy my jellystat with portainer, however I have this error message when start container.
Thank you
hi @MadPercy , can you add the log files from your container so i can check why you would be getting this error. thanks
I also have an error message displayed (Error: Unable to connect to Jellystat Backend). This is the log error in the jellystat container:
[1] webpack compiled with 1 warning
[1] [HPM] Error occurred while proxying request <IP_ADDRESS>:9050/auth/isConfigured to http://<IP_ADDRESS>:3003/ [ECONNREFUSED] (https://nodejs.org/api/errors.html#errors_common_system_errors)
[1] [HPM] Error occurred while proxying request <IP_ADDRESS>:9050/auth/isConfigured to http://<IP_ADDRESS>:3003/ [ECONNREFUSED] (https://nodejs.org/api/errors.html#errors_common_system_errors)
[1] [HPM] Error occurred while proxying request <IP_ADDRESS>:9050/api/getconfig to http://<IP_ADDRESS>:3003/ [ECONNREFUSED] (https://nodejs.org/api/errors.html#errors_common_system_errors)
[1] [HPM] Error occurred while proxying request <IP_ADDRESS>:9050/api/getconfig to http://<IP_ADDRESS>:3003/ [ECONNREFUSED] (https://nodejs.org/api/errors.html#errors_common_system_errors)
[1] [HPM] Error occurred while proxying request <IP_ADDRESS>:9050/auth/isConfigured to http://<IP_ADDRESS>:3003/ [ECONNREFUSED] (https://nodejs.org/api/errors.html#errors_common_system_errors)
[1] [HPM] Error occurred while proxying request <IP_ADDRESS>:9050/auth/isConfigured to http://<IP_ADDRESS>:3003/ [ECONNREFUSED] (https://nodejs.org/api/errors.html#errors_common_system_errors)
[1] [HPM] Error occurred while proxying request <IP_ADDRESS>:9050/api/getconfig to http://<IP_ADDRESS>:3003/ [ECONNREFUSED] (https://nodejs.org/api/errors.html#errors_common_system_errors)
In the database container:
PostgreSQL Database directory appears to contain a database; Skipping initialization
2023-09-30 03:06:01.300 UTC [1] LOG: starting PostgreSQL 15.2 (Debian 15.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2023-09-30 03:06:01.327 UTC [1] LOG: listening on IPv4 address "<IP_ADDRESS>", port 5432
2023-09-30 03:06:01.327 UTC [1] LOG: listening on IPv6 address "::", port 5432
2023-09-30 03:06:01.442 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2023-09-30 03:06:01.665 UTC [29] LOG: database system was shut down at 2023-09-30 03:03:45 UTC
2023-09-30 03:06:01.836 UTC [1] LOG: database system is ready to accept connections
2023-09-30 03:06:10.124 UTC [33] FATAL: password authentication failed for user "postgres"
2023-09-30 03:06:10.124 UTC [33] DETAIL: Role "postgres" does not exist.
Connection matched pg_hba.conf line 100: "host all all all scram-sha-256"
2023-09-30 03:06:10.145 UTC [34] FATAL: password authentication failed for user "postgres"
2023-09-30 03:06:10.145 UTC [34] DETAIL: Role "postgres" does not exist.
I'm running this docker compose file in a Synology using Portainer.
hello, I have the same problem... I am under unraid, I tried with postgres 14 or 15. I tried to modify the port 3003:3000 but the problem is still there
hi @gab1to , your issue seems to be that it cant connect to the database due to credential issues, can you double check your credentials
hi @tony77682 , do you mind sharing the logs for your jellystat and postgres containers
hi @gab1to , your issue seems to be that it cant connect to the database due to credential issues, can you double check your credentials
I don't think so, I put postgres in user and test in password to eliminate an authentication problem
log postgres :
Success. You can now start the database server using:
pg_ctl -D /var/lib/postgresql/data -l logfile start
waiting for server to start....2023-10-01 14:25:39.508 CEST [48] LOG: starting PostgreSQL 15.4 (Debian 15.4-2.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
2023-10-01 14:25:39.521 CEST [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2023-10-01 14:25:39.530 CEST [51] LOG: database system was shut down at 2023-10-01 14:25:37 CEST
2023-10-01 14:25:39.547 CEST [48] LOG: database system is ready to accept connections
done
server started
CREATE DATABASE
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
2023-10-01 14:25:39.988 CEST [48] LOG: received fast shutdown request
waiting for server to shut down....2023-10-01 14:25:39.990 CEST [48] LOG: aborting any active transactions
2023-10-01 14:25:39.991 CEST [48] LOG: background worker "logical replication launcher" (PID 54) exited with exit code 1
2023-10-01 14:25:39.992 CEST [49] LOG: shutting down
2023-10-01 14:25:39.994 CEST [49] LOG: checkpoint starting: shutdown immediate
2023-10-01 14:25:40.675 CEST [49] LOG: checkpoint complete: wrote 918 buffers (5.6%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.072 s, sync=0.598 s, total=0.684 s; sync files=301, longest=0.021 s, average=0.002 s; distance=4223 kB, estimate=4223 kB
2023-10-01 14:25:40.687 CEST [48] LOG: database system is shut down
done
server stopped
PostgreSQL init process complete; ready for start up.
it just worked... I had set the password "test" for postgres and for JWT secret "test" too, I changed the value of JWT and it worked..
|
GITHUB_ARCHIVE
|
Using KiCad version 5.0.2. Please read this entire post before responding. I have completed my schematic design with Eeschema am encountering difficulty as I try to start my pcb layout. I want to open footprints with the footprint editor, as I probably want to edit them.
- I cannot find a fp-lib-table.
“One option is to remove the full fp-lib-table in your users config directory and let kicad replace it with the default one coming with the v5 libs.” Please explain in detail how do I do this?
I also cannot find the footprint libraries manager as described in Pcbnew help:
2.3.4. Adding Table Entries using the Libraries Manager
The library table manager is accessible by:
2) But I have copied a large bunch of Kicad Sourced .pretty modules:
and I have specified paths in the footprint editor:
But when I open the library browser I cannot see any of this. It is stuck on some footprints which I made a few years ago with KiCad 3.x(??) and are in a different folder which is not shown in the screen shot above. Why is this?
What is the likely problem?
[quote=“Rene_Poschl, post:2, topic:15418”]
Official KiCad documentation:
Thank you, Rene
Right away I run into this:
I will continue to work on this based upon your input.
appaently the address of the documentation changed to reflect the version, here is a working link:
Yesterday I was looking for some "table" files (in my libraries folder and in my project folder) and could not find them. Today I found those along with the program files and copied some of those to a new folder in with my libraries.
I cannot find an "add library" button in the footprint editor. However, I did find Preferences > Manage footprint libraries and am now going with that so I think I now can get and edit footprints. My next challenge is figuring out how to produce a board outline.
Are you talking (typing) to yourself?
Footprint Editor / Preferences / Manage Footprint Libraries.
Same menu is also in Kicad, Eeschema, Symbol Library Editr, Pcbnew, (others?)
Don’t give “talking to yourself” short shrift. I wanted to thank der.ule for his correction of Rene’s URL for help files. Also there was a comment about an “add library” button and I saw none such so maybe that can be corrected somewhere. Also I wanted to say that I am not still lost in pcb layout nowhere-land. However I would like to see some comment about how to set down a pcb outline. Right now I am assigning my own footprints to my schematic symbols.
I gave yo a roadmap to that just below “Are you talking to yourself”
The outline of the PCB is defined by drawing lines on the “Edge.Cuts” layer.
First make the “Edge.Cuts” layer active by clicking just left of the square before the Yellow “Edge.Cuts” text.
The little blue triangle makes the layer active.
Then you can draw lines on that layer with:
Pcbnew / Place / Line
You can also use Arc’s on the Edge.Cuts layer.
Make sure that the endpoint of a line segment meets the coordinate of the startpoint of the next line segment. This is easies done on a coarse grid.
(Coarse grid will also make drawing horizontal and vertical lines easy).
You can also read the Pcbnew manual, chapter 6:
I understand that Preferences>manage libraries gets the job done. But it is not an “add library” button so I think that instruction should probably be revised. Thanks for your other comments…
May I ask where you found a reference to an “add library button”?
It’s been a sore thorn in my eye that the “Getting started with KiCad” manual is still referencing KiCad V4.0.7. KiCad V5.1 is expecteded within a few weeks and i’m seriously thinking about updating the “Getting started with KiCad” manual.
Hi, paulvdh Please see:
I can understand that keeping up with stale links and other such information is a real challenge. But this did get me stuck for a while.
I should add that the link
at the top of that same page does not work. A few days ago I was given an updated one…
I updated the links. This seems to have changed very recently as i wrote that tutorial only a few months ago and did edit it just recently (The links to the official docu where added sometime in January.)
I tried to reply with a simple “Thanks!” but the website decided that that was too short.
Paulvdh: Somewhere we were discussing my use of fat corner pads (SOIC, TSSOP, etc.) to make hand soldering easier. I have made a couple of these footprints now. One way to maintain normal pad-pad spacing is to position the pad differently. But I find that the offset capability is nice. Basically (I think?) it allows me to define the “position” of the pad as a point which is offset from its center. Here is my “bigfoot” SOIC-14.Bobs_SOIC-14_Bigfoot_1.kicad_mod (3.3 KB)
That is pretty much what the offset does. Think of the anchor point of the pad as the center of where the drill hit would be (if your SMT pad was a THT pad). The offset allows you to move that anchor point off of the geometric center of the pad. For the fat corner pads, using the offset would reduce the mathematical complexity of positioning your pads.
Note, however, when routing your board with pad-snap turned on, I think the traces will snap to the anchor point of the pad, not the geometric center. But, I think thermal reliefs are calculated from the pad geometry, not the anchor point. I could be wrong on one or both points though. Some experimentation (that I don’t currently have time to do) would tell you for sure…
One can enter mathematical expressions directly into the position fields (In fact i think all fields should accept expressions now. If you find one that does not in nightly then report it as a bug.)
Apart from mathematical expressions such as 2.2+0.4 you can also enter units strings such as mm or mil after a number, and it gets translated to the current viewing units.
I’ve just been experimenting a little bit with:
Footprint Editor / Pad 'e’dit / Custom Shape Primitives
Before you can edit properties there you first have to set “Shape” in the “General” tab page to “Custom”. The options seem powerfull, but hard to use. I get the impressiont this part is not finished yet.
|
OPCFW_CODE
|
Managing a test team for one project is relatively simple. You spend some time getting to know the developers and their productivity and skill levels, then you hire tester specialists as needed. You have one main flow of work to production to keep track of.
Add another platform, web and mobile, and things get complicated. You now have several flows of work that most likely need to go together hand in hand, but you typically work as isolated teams. This is fertile ground for miscommunication, architectural problems and challenging testing strategy.
Regardless of how “mobile-first” a company is, I usually see web efforts being more cared for, and difficulties keeping the projects in sync.
Get TestRail FREE for 30 days!
Most projects I see today are based on some sort of API architecture. I have the most experience with REST APIs.
A few years ago I was working on a platform for marketing staff. We had some workflow tools, but the important part of our product helped marketers build and manage interactive advertising campaigns for social media. Most of the tools to build the advertisement were in a web front end, and the management tools were part of customer-specific mobile apps. The entire product was based on a large set of endpoints contained in a REST API.
Despite having one shared code base, mobile and web operated as two separate teams. Each team had its own tester, its own product manager and its own technical leadership. Our web team was usually the driver of change. They would have a feature request — for example, the ability to update the end date on an advertising campaign — and make that change, independent of what was happening in the rest of the organization.
When I was working on the mobile team, we had our own queue of change requests, some related to what was happening on the web product and some that were specific to our mobile product line. I would grab the latest version of our iOS product out of HockeyApp, install it on my device, and start testing, only to discover an error that didn’t make any sense.
Usually I would think these problems were related to implementation bugs and spend time investigating that. After awhile, a developer and I would open a developer tool and notice that a JSON response looked different than we had expected. The API had changed underneath us, and we didn’t know until we stumbled over the change while testing this feature. This resulted in a lot of wasted time and a lot of frustration and ill will between teams.
API versioning is one way to manage this difficulty. We opted to have a more static version of the API that was committed to at specific intervals and was backward-compatible for the mobile team, and another forward-driving API for the web team. We were able to test changes more efficiently without stumbling on unrelated problems with this method.
This, of course, creates new problems of managing two code bases, compatibility, release schedule and management, but it worked while I was on that project. A better solution might be to figure out a development cadence that works for both teams.
Unfortunately, technology management was only part of the issue on this project.
The web and mobile teams were separate at that company. Each had its own queue in JIRA, and they held a standup at different times. Managing smaller projects like this is seductive: You only see the changes that are immediately relevant to you. You also regularly miss run-of-the-mill important project information.
One project I worked on had a web front team building a management tool and a separate mobile team that was supposed to consume the webpages created by the web team. The web team was moving slowly but surely through building the webpage management tool. Each new feature added had to be accounted for by the mobile team in terms of consuming and displaying data, as well as being able to edit those pages in a native mobile product.
The mobile team generally found out about new changes to the management tool because something would break when they were viewing a page. As you would expect, things quickly became hostile between the two teams.
There was an important date coming in the next couple of weeks for a sales demo on the mobile product, and testers on the web team learned about it by accident. The mobile test team had been operating under the assumption that everyone knew about this date and didn’t care. The web test team was blissfully ignorant, working as if breaking changes here and there was OK because there was plenty of time to get things fixed.
The teams simply were not talking with each other, despite working in the same building.
We solved this problem by blending the team standups. Each day, the mobile team would have a chance to hear about important changes on the web team and ask questions.
Software projects are usually built on the idea that each project is a distinct thing, with its own workflows and reasonably self-contained. That is an illusion. Most products I work on will either consume data from an outside product, send data to an outside product, or somehow have a workflow that will start in one piece of software and end in another.
The medical products I have worked on required integration with several other products we made. A nurse might document patient vitals and drugs administered during a surgery on an iPad app. That data is automatically sent to a web front end via a tool like Redis. Once the patient information is displayed in the web dashboard, it might go through a few review workflows by a nurse manager, who reviews the case to make sure it was completely documented and either accepts it if everything is OK or rejects the case back to the original nurse if there is something wrong. After the case is eventually accepted, the data is packaged into a format that can be consumed by billing products.
A change in one product would always find a way to ripple through to the other products.
The company where I was working was very small, so although we had separate developers for our web and mobile products, the product people and test specialists worked on both. This facilitated a more complete understanding of the product. If a new vital was added to the mobile application, a tester could explore how that information was collected on the mobile application, see how the data was sent to and managed on the web front end, and then ultimately see if and how the new data was sent to insurance companies.
Companies I have worked at with distinct mobile and web testers have to coordinate work, figure out who has the domain knowledge, schedule time to work together and then fumble through test setup. My preference in this case is the test team being able to handle both web and mobile products.
Where to Start
Ultimately, my feeling is that having distinct mobile and web teams is a bad idea when the two teams are built on the same dependency tree. If you are having problems across mobile teams and the rest of your company, I would suggest building a plan for managing API changes, blending the standups to provide better information pollination and having teams that work together. Your developers probably don’t need to be able to work on both mobile and web technology stacks, but they need to at least understand what is happening on the other side.
This is a guest posting by Justin Rohrman. Justin has been a professional software tester in various capacities since 2005. In his current role, Justin is a consulting software tester and writer working with Excelon Development. Outside of work, he is currently serving on the Association For Software Testing Board of Directors as President helping to facilitate and develop various projects.
Test Automation – Anywhere, Anytime
|
OPCFW_CODE
|
General information about the workshop
Time ~5h (1h prep + 4h during)
Max. participants: 10 or 12
Tutor: Desirée Hammen
Phone number: 0624622533
For this workshop, you will need:
- embroidery hoops
- embroidery fabric(s)
- random objects for participants to choose and embroider on
- clamps to hook the hoops on the table
- sewing yarns / threads
- TV for Desiree's presentation
A week before the workshop
Make sure we have all materials we need (fabrics, hoops, needles)
Image: Layout of workshop location
On the Friday before the workshop
- Make sure Desiree has a spot for her personal belongings.
- Get a rack for participants to hang their coats on.
- Place lightboxes for decoration.
- Set up the TV for Desiree’s power point presentation (see image below) and make sure you have the right cable and that it actually works.
- Get a tall table with a white cloth for drinks.
- Get two tables for the materials.
- Bring the boxes of materials and objects downstairs.
- If you wish to make the atmosphere more cozy (as Desiree likes it) you can bring in a couple of plants from the Tuinkamer.
Image: The cable you need to connect the Mac to the TV for the Power Point presentation
On the day of the workshop
- Come in at least one hour before the workshop starts.
- Set up the powerpoint presentation from Desiree on the TV.
- Lay out all the materials on the tables.
- Prepare the hand sanitizing station (alcohol spray, tissue paper).
- Get some water with mint from the aquaponics and glasses for participants.
- If it's cold - get the water heater from dry storage and prepare some tea.
- Set up music. You can use any device that is plugged in the sound system. This playlist is very nice! To set up the sound system, be sure to turn on the speakers (extension cable in the workshop area), connect the mini jack to your device, and turn up the master knob on the sound board.
- Scan Corona pass QR codes when participants enter.
During the workshop
Welcome participants and scan tickets.
Help Desiree when needed.
Timeline of the workshop
In the week before
|Go over check list|
|11:00||Finish the set-up of the room and prepare for scanning tickets|
|11:45||Welcome the participants and scan their tickets.|
|12:00||Start of the workshop|
|12:05||Presentation by Desiree|
|16:00||End of the workshop|
After the workshop
|Cleaning up and restoring the space to its standard set-up. Bring the workshop boxes back to the project room.|
|
OPCFW_CODE
|
Name of group/organization/project: Open Source Software Institute (OSSI) Overview: The Open Source Software Institute (OSSI) is a non-profit (501 c 6) organization that was launched in 2001 with the purpose of “[promoting] the development and implementation of open source software solutions within federal, state, and municipal government agencies and academic entities”. OSSI acts in three capacities: as a policy advocate, as a research and development facilitator and as an open source policy consortium. The OSSI community is diverse but mostly made up of private-sector open source players. The organization has a strong following of 1,000 individuals in their mailing list (as of the end of 2007), along with 16 major corporate sponsors, 3 governments, 1 academic institution and some associates and community members. The structure of the organization consists of an elected Executive Director at the top, who is responsible for the day to day running of the organization, along with a board of directors that deal with strategic decisions. Similar to a non-profit organization, it also has an advisory board that advises on various strategic and tactical issues. Though OSSI’s initial focus was on defense, the group has diversified into all levels of government. For example, as of 2009, they have been working on a suite of applications for the criminal justice department for example the Public Open Source Safety Environment (POSSE) which aims to provide each agency autonomy and control of its IT system and record data. Problem addressed and Solution implemented: OSSI rose out of the demand for product software that met individual government agency’s organizational needs. The members felt that when it came to information technology, government was reinventing the wheel. The solution was therefore to “[facilitate] a move towards open source solutions in government”. OSSI’s strategy focused on particular projects. Developed software were often mission-critical such as the development of code for Department of Defense (DoD). There was clearly a collaboration between private entities and government organizations to develop open source products that met specific government needs. OSSI also went beyond code development and puts major emphasis on research and policy advocacy. At the moment, members have access to the open source codes under open source licensing, which varies depending on the project. However, software developed for DoD is not available for reuse on the web due to security concerns. OSSI’s immediate goal is to launch a repository for all the open source codes for broad government reuse beyond the membership base. Challenges Faced and Criticisms:
- Narrowly focused on specific projects like the DoD software, which could not be shared with other organizations.
- A stable leader, who was able to devote significant amount of time and energy to the organization, was key to the organizations survival. Also having a dedicated paid staff was crucial. The graduate fee levied from corporate sponsors was used to fund the executive director and the administrative positions.
- A well defined organization structure was important but flexibility allowed for change and evolution of the organization to better fit the needs of members and clients.
- Nearly all those interviewed by Hamel noted that value was critical in open source collaboration.
- Incentives and membership structures can help mobilize collective action in the open source development domain. “The generation of new economic opportunities with government partners provides real motivation for members to participate in the collaboration, which is supported by recent work which found that firms are typically motivated by economic and technical benefits (Joode, et al., 2006; Bonaccorsi and Rossi, 2006).”
- Focus on initiatives. OSSI’s project driven model kept the organization focused. This meant that government participants were motivated to work with other collaborators because of the skills and potential benefits on offer.
- Participants were quick to point out that “that support and hardware expenses should be a part of any software consideration.”
References (websites, documents, links, resources used and for further information):
- “Open Source Collaboration: Two Cases in the U.S. Public Sector” Michael P. Hamel and Charles M. Schweik. First Monday Journal, Volume 14, Number 1-5 January 2009. (http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2313/2065)
- “Open Source Software Institute Website” (http://www.oss-institute.org/)
|
OPCFW_CODE
|
<?php
declare(strict_types=1);
namespace Gotea\Test\TestCase\Model\Table;
use Cake\TestSuite\TestCase;
use Gotea\Model\Table\CountriesTable;
use Gotea\Model\Table\PlayerScoresTable;
/**
* Gotea\Model\Table\PlayerScoresTable Test Case
*/
class PlayerScoresTableTest extends TestCase
{
/**
* 所属国
*
* @var \Gotea\Model\Table\CountriesTable
*/
public $Countries;
/**
* Test subject
*
* @var \Gotea\Model\Table\PlayerScoresTable
*/
protected $PlayerScores;
/**
* Fixtures
*
* @var array
*/
protected $fixtures = [
'app.PlayerScores',
'app.Players',
'app.Countries',
'app.Ranks',
];
/**
* setUp method
*
* @return void
*/
public function setUp(): void
{
parent::setUp();
$config = $this->getTableLocator()->exists('Countries') ? [] : ['className' => CountriesTable::class];
$this->Countries = $this->getTableLocator()->get('Countries', $config);
$config = $this->getTableLocator()->exists('PlayerScores') ? [] : ['className' => PlayerScoresTable::class];
$this->PlayerScores = $this->getTableLocator()->get('PlayerScores', $config);
}
/**
* tearDown method
*
* @return void
*/
public function tearDown(): void
{
unset($this->Countries);
unset($this->PlayerScores);
parent::tearDown();
}
/**
* ランキングデータ取得(データ有り)
*
* @return void
*/
public function testFindRankingByPoint()
{
$country = $this->Countries->get(1);
$ranking = $this->PlayerScores->findRanking($country, 2013, 3, 'point');
$this->assertGreaterThan(0, $ranking->count());
$win = null;
$lose = null;
$ranking->each(function ($item) use ($win, $lose) {
// 0勝は存在しない
$this->assertNotEquals($item->win_point, 0);
if ($win !== null) {
$this->assertGreaterThanOrEqual($win, $item->win_point);
// 勝数が同じ場合、敗数の昇順
if ($win === $item->win_point) {
$this->assertLessThanOrEqual($lose, $item->lose_point);
$lose = $item->lose_point;
} else {
// 勝数が変わった場合は敗数を0に
$lose = 0;
}
}
$win = $item->win_point;
$lose = $item->lose_point;
$this->assertEquals(2013, $item->target_year);
});
}
/**
* ランキングデータ取得(データ有り)
*
* @return void
*/
public function testFindRankingByPercent()
{
$country = $this->Countries->get(1);
$ranking = $this->PlayerScores->findRanking($country, 2013, 3, 'percent');
$this->assertGreaterThan(0, $ranking->count());
$percentage = null;
$win = null;
$lose = null;
$ranking->each(function ($item) use ($percentage, $win, $lose) {
// 0%は存在しない
$this->assertNotEquals($item->win_percent, 0);
if ($percentage !== null) {
$this->assertGreaterThanOrEqual($percentage, $item->win_percent);
// 勝率が同じ場合、勝数の昇順
if ($percentage === $item->win_percent) {
$this->assertGreaterThanOrEqual($win, $item->win_point);
// 勝数が同じ場合、敗数の昇順
if ($win === $item->win_point) {
$this->assertLessThanOrEqual($lose, $item->lose_point);
$lose = $item->lose_point;
} else {
// 勝数が変わった場合は敗数を0に
$lose = 0;
}
} else {
// 勝率が変わった場合は勝数をnullに
$win = null;
}
}
$percentage = $item->win_percent;
$win = $item->win_point;
$lose = $item->lose_point;
$this->assertEquals(2013, $item->target_year);
});
}
/**
* ランキングデータ取得(データ無し)
*
* @return void
*/
public function testFindRankingNoData()
{
$country = $this->Countries->get(1);
$ranking = $this->PlayerScores->findRanking($country, 2014, 3);
$this->assertEquals(0, $ranking->count());
}
}
|
STACK_EDU
|
St. Jude Children’s Research Hospital is seeking a Bioinformatics Research Scientist to study the role of genome and other nuclear organization in pediatric cancers. Recognized for state-of-the-art computational infrastructure, well-established analytical pipelines, and deep genomic analysis expertise, St. Jude offers a work environment where you will directly impact the care of pediatric cancer patients. As a Bioinformatics Research Scientist, your responsibilities include analyzing data generated from a variety of second- and third-generation sequencing applications that interrogate a broad range of human gene regulatory biology.
The Abraham lab studies gene expression-regulation mechanisms in healthy and diseased mammalian cells. We are recruiting computational biologists to collaboratively develop computational tools and frameworks to analyze high-throughput sequencing (-omics) data. We build analytical software pipelines to find answers to biological questions about gene regulation in big datasets, usually from applied sequencing experiments like ChIP-Seq, RNA-Seq, and Hi-ChIP. Our interests center on enhancers and super-enhancers. Specifically, we seek to understand how these regulatory elements establish gene expression programs in healthy cells, and how enhancers are altered by mutation, abused by mistargeting, and targetable with drugs in diseased cells. We focus on characterizing the core regulatory circuitries driving disease-relevant cells, and on understanding how mutations in the non-coding DNA of such cells can drive disease, including cancers, through gene misregulation.
The successful candidate will become a fundamental component of a multidisciplinary, inter-institutional team assembled to study how genome structures meaningfully differ between normal and pediatric cancer cells.
Ideal candidates will have experience building, tailoring, and deploying analysis pipelines using widely available genomic analysis toolkits (e.g. bedtools, samtools), as well as experience managing large numbers of datasets. The successful candidate will be tasked with collaborative research within and beyond the lab, so strong communication and interpersonal skills are essential. Additional experience in fundamental understanding of gene expression mechanisms (e.g. transcription factors, enhancers, genome structure, and transcriptional condensates), and experience building succinct, clear figures using R are preferred.
The department of Computational Biology provides access to high performance computing clusters, cloud computing environment, innovative visualization tools, highly automated analytical pipelines and mentorship from faculty scientists with experience in data analysis, data management and delivery of high-quality results for competitive projects. We encourage first author, high profile publications to share this element of discovery. Take the first step to join our team by applying now!
Abraham BJ, Hnisz D, Weintraub AS, Kwiatkowski N, Li CH, Li Z, Weichert-Leahey N, Rahman S, Liu Y, Etchin J, Li B, Shen S, Lee TI, Zhang J, Look AT, Mansour MR, Young RA. Small genomic insertions form enhancers that misregulate oncogenes. Nat Commun. 2017 Feb 9;8:14385. doi: 10.1038/ncomms14385. PubMed PMID: 28181482; PubMed Central PMCID: PMC5309821.
Hnisz D, Abraham BJ, Lee TI, Lau A, Saint-André V, Sigova AA, Hoke HA, Young RA. Super-enhancers in the control of cell identity and disease. Cell. 2013 Nov 7;155(4):934-47. doi: 10.1016/j.cell.2013.09.053. Epub 2013 Oct 10. PubMed PMID: 24119843; PubMed Central PMCID: PMC3841062.
Dowen JM, Fan ZP, Hnisz D, Ren G, Abraham BJ, Zhang LN, Weintraub AS, Schujiers J, Lee TI, Zhao K, Young RA. Control of cell identity genes occurs in insulated neighborhoods in mammalian chromosomes. Cell. 2014 Oct 9;159(2):374-387. doi: 10.1016/j.cell.2014.09.030. PubMed PMID: 25303531; PubMed Central PMCID: PMC4197132.
- Bachelor’s degree is required
Bioinformatics Research Scientist
- Seven (7) years of relevant post-degree work experience is required
- Five (5) years of relevant post-degree work experience is required with a Master’s degree
- Two (2) years of relevant post-degree work experience is required with a PhD
Lead Bioinformatics Analyst
- Six (6) years of relevant experience is required.
- Four (4) years of relevant experience may be acceptable with a Master’s degree.
- No experience may be acceptable with a PHD in Computer Science or Bioinformatics, with a background in the biological sciences.
- Experience in programming (Python, Java, C/C++, perl or other programming/scripting languages) under linux/unix environment is required.
- Experience with and the ability to deal with a wide range of users is required.
- Experience of independent data analyses and project management is required.
No Search Firms:St. Jude Children’s Research Hospital does not accept unsolicited assistance from search firms for employment opportunities. Please do not call or email. All resumes submitted by search firms to any employee or other representative at St. Jude via email, the internet or in any form and/or method without a valid written search agreement in place and approved by HR will result in no fee being paid in the event the candidate is hired by St. Jude.
|
OPCFW_CODE
|
// SPDX-License-Identifier: Unlicense OR BSD-3-Clause
package shaping
import (
"fmt"
"github.com/benoitkugler/textlayout/harfbuzz"
"github.com/go-text/typesetting/di"
"github.com/go-text/typesetting/font"
"golang.org/x/image/math/fixed"
)
type Shaper interface {
// Shape takes an Input and shapes it into the Output.
Shape(Input) Output
}
// MissingGlyphError indicates that the font used in shaping did not
// have a glyph needed to complete the shaping.
type MissingGlyphError struct {
font.GID
}
func (m MissingGlyphError) Error() string {
return fmt.Sprintf("missing glyph with id %d", m.GID)
}
// InvalidRunError represents an invalid run of text, either because
// the end is before the start or because start or end is greater
// than the length.
type InvalidRunError struct {
RunStart, RunEnd, TextLength int
}
func (i InvalidRunError) Error() string {
return fmt.Sprintf("run from %d to %d is not valid for text len %d", i.RunStart, i.RunEnd, i.TextLength)
}
const (
// scaleShift is the power of 2 with which to automatically scale
// up the input coordinate space of the shaper. This factor will
// be removed prior to returning dimensions. This ensures that the
// returned glyph dimensions take advantage of all of the precision
// that a fixed.Int26_6 can provide.
scaleShift = 6
)
// Shape turns an input into an output.
func Shape(input Input) (Output, error) {
// Prepare to shape the text.
// TODO: maybe reuse these buffers for performance?
buf := harfbuzz.NewBuffer()
runes, start, end := input.Text, input.RunStart, input.RunEnd
if end < start {
return Output{}, InvalidRunError{RunStart: start, RunEnd: end, TextLength: len(input.Text)}
}
buf.AddRunes(runes, start, end-start)
// TODO: handle vertical text?
switch input.Direction {
case di.DirectionLTR:
buf.Props.Direction = harfbuzz.LeftToRight
case di.DirectionRTL:
buf.Props.Direction = harfbuzz.RightToLeft
default:
return Output{}, UnimplementedDirectionError{
Direction: input.Direction,
}
}
buf.Props.Language = input.Language
buf.Props.Script = input.Script
// TODO: figure out what (if anything) to do if this type assertion fails.
font := harfbuzz.NewFont(input.Face.(harfbuzz.Face))
font.XScale = int32(input.Size.Ceil()) << scaleShift
font.YScale = font.XScale
// Actually use harfbuzz to shape the text.
buf.Shape(font, nil)
// Convert the shaped text into an Output.
glyphs := make([]Glyph, len(buf.Info))
for i := range glyphs {
g := buf.Info[i].Glyph
extents, ok := font.GlyphExtents(g)
if !ok {
// TODO: can this error happen? Will harfbuzz return a
// GID for a glyph that isn't in the font?
return Output{}, MissingGlyphError{GID: g}
}
glyphs[i] = Glyph{
Width: fixed.I(int(extents.Width)) >> scaleShift,
Height: fixed.I(int(extents.Height)) >> scaleShift,
XBearing: fixed.I(int(extents.XBearing)) >> scaleShift,
YBearing: fixed.I(int(extents.YBearing)) >> scaleShift,
XAdvance: fixed.I(int(buf.Pos[i].XAdvance)) >> scaleShift,
YAdvance: fixed.I(int(buf.Pos[i].YAdvance)) >> scaleShift,
XOffset: fixed.I(int(buf.Pos[i].XOffset)) >> scaleShift,
YOffset: fixed.I(int(buf.Pos[i].YOffset)) >> scaleShift,
ClusterIndex: buf.Info[i].Cluster,
GlyphID: g,
Mask: buf.Info[i].Mask,
}
}
countClusters(glyphs, input.RunEnd-input.RunStart, input.Direction)
out := Output{
Glyphs: glyphs,
Direction: input.Direction,
}
fontExtents := font.ExtentsForDirection(buf.Props.Direction)
out.LineBounds = Bounds{
Ascent: fixed.I(int(fontExtents.Ascender)) >> scaleShift,
Descent: fixed.I(int(fontExtents.Descender)) >> scaleShift,
Gap: fixed.I(int(fontExtents.LineGap)) >> scaleShift,
}
return out, out.RecalculateAll()
}
// countClusters tallies the number of runes and glyphs in each cluster
// and updates the relevant fields on the provided glyph slice.
func countClusters(glyphs []Glyph, textLen int, dir di.Direction) {
currentCluster := -1
runesInCluster := 0
glyphsInCluster := 0
previousCluster := textLen
for i := range glyphs {
g := glyphs[i].ClusterIndex
if g != currentCluster {
// If we're processing a new cluster, count the runes and glyphs
// that compose it.
runesInCluster = 0
glyphsInCluster = 1
currentCluster = g
nextCluster := -1
glyphCountLoop:
for k := i + 1; k < len(glyphs); k++ {
if glyphs[k].ClusterIndex == g {
glyphsInCluster++
} else {
nextCluster = glyphs[k].ClusterIndex
break glyphCountLoop
}
}
if nextCluster == -1 {
nextCluster = textLen
}
switch dir {
case di.DirectionLTR:
runesInCluster = nextCluster - currentCluster
case di.DirectionRTL:
runesInCluster = previousCluster - currentCluster
}
previousCluster = g
}
glyphs[i].GlyphCount = glyphsInCluster
glyphs[i].RuneCount = runesInCluster
}
}
|
STACK_EDU
|
Machine Learning is not a set of pre-cooked supermarket meals
Machine Learning is a very popular topic in educated small talk these days, partly due to the relatively recent surge in “Data Scientist” positions around and partly because as humans we typically get fascinated by something, we think it’s appealing for a certain period of time and then our excitement fades away, freeing our mind to focus on the next big thing. Artificial Intelligence makes a good example. We are currently observing gigantic improvements in the capabilities of machines in terms of problem-solving abilities and research in the field is experiencing a new youth after the idiosyncratic history it has gone through in the last 60 years, as outlined in this excellent Medium post. We talk a lot about it, we love mentioning “Deep Learning” whenever the occasion seems favourable, but eventually the ones who do cutting-edge research in this fields are the same people who were already doing it before we even noticed. Or cared.
Never stop learning, but go straight to the core
Machine Learning has traditionally lived at the border of Computer Science and Statistics: the former provides the computer implementation, the latter provides the bulk of the model. Of course, overlap means that people from different backgrounds not only collaborate on a task by splitting the work but also exchange ideas and points of view, ultimately doing a bit of the work of the counterpart as well.
Statisticians today find themselves in a very interesting situation as they are seeing the things they’ve been using for decades advertised as something “new” and “cool” all over the place. Until probably about 5–6 years ago not many people would have talked of their tools as something particularly enticing. They must be a little bamboozled. The idea behind Linear regression is extremely old, dating back from more than a century; the standard algorithm for a k-means clustering was published in 1957; ID3 for decision trees is around 30 years old; Naive Bayes classifiers, beyond being based on a theorem of the eighteenth century, were developed throughout the fifites. I could continue but this should give the idea. This stuff is not new. Physicists have been dealing with the problem of regression ever since, usually calling it fit. When you have experimental data points and a hypothesis to study, you fit the points to it and compute how good your fit is. In a typical situation, a statistician’s dataset is noisier than that of a physicist as it may not have benefitted from the luxury of an experimentally controlled framework. Hence the amount of noise may be high. The statistician’s job is exactly that of figuring out a pattern out of the (noisy) data. The pattern can then be used to do predictions. Of course research in these topics is always ongoing and everyday sees improvements, tweaks, suggestions to better validate old algorithms’ accuracies. So nothing here is dead. There are papers who propose new ideas over old and robust stuff. What happened then? Why is it that lots of people mention these things only now, sometimes talking in words and without discussing the concepts? It’s an effect of the boom of Data Science around. Data Science is a glorious field, and it truly lives at the intersection of several traditional disciplines. Furthermore, when carried out in the industry, it encompasses non-academic skills like business acumen. Its rapid growth is due to the ubiquitous presence of data to be crunched, but several of its core methods are not new. I’m very happy if this new state-of-the-art situation can help mathematicians, physicists and statisticians ameliorate their reputation among the general public, relieving them from having to cope with being considered uninteresting and sometimes pathologically isolated. But this is not about emotions. It is about science. Don’t make a kitchen recipe out of an algorithm. I love cooking and I know that when you cook there are some “rules” to follow in creating a delicious dish. You can exercise your creativity and break the rules a tiny bit, by switching an ingredient with something else, adding something a little unconventional or removing the ingredient you don’t particularly fancy there. Ultimately you will have created a new version of the dish, which will be prone to criticism from both the conservative-minded (in cooking terms) and from the more “avant-garde” enthusiasts. Nevertheless, when cooking a well-known dish, you’re going to follow some rules, which are either written down by someone in a recipe or are in your head for your personal interpretation of that dish.
Google Trends on some of the essential building blocks of Data Science. Interestingly Mathematics exhibits a slowly increasing envelope. Statistics seems to be decreasing in interest: essential as it is, should be receiving lots of love.
Machine Learning is not cooking though. Sure, there are established and robust techniques, and you certainly don’t want to reivent the wheel when solving your data problem, but don’t treat them as a set of fixed rules to be applied for the goal. The core word in Data Science is science (and this one here has lately become by favourite quote), hence when you solve a data problem you are expected to approach it scientifically. This does not involve blindly applying an existing black-box without a thought process beforehand. We use libraries to perform tasks because they save us time. We tale advantage of the work of someone else who coded the tools in a clean and robust format. Writing the algorithm code again would be pointless. But we want to make sure we are understanding what we are using, why we chose that specific box, how we can choose its parameters and finally how good it is for our needs. And we need to be able to replicate what we are doing on paper. Often, the feature extraction phase requires us to come up with metrics which aren’t there anywhere, because they would be specific to our problem, and to the data we have to tackle it. This is were we get scientifically creative. Then we use the algorithm. Data Science is not about running that specific code coding for that “thing”, it is all in the journey from raw data to information. It’s all in the journey you build.
|
OPCFW_CODE
|
Fri, 25 Jul 2003 12:28:55 -0400
On Fri, 25 Jul 2003 16:55:52 +0100
Philip Kilner <firstname.lastname@example.org> wrote:
> Hi Jim,
> Jim Penny wrote:
> >>>1) index_html is ALWAYS a Script (Python) in any of my code from
> >>>the last 18 months.
> >>OK, so since this is our default "document", Zope will throw it when
> >>we hit our folder and so it controls the logic?
> > Right.
> Good - one bit properly understood.
> >>OK, understand that - not sure why the form has: -
> >> action="."
> >>How does that work here - is it calling itself?
> > Well, kind of. Think of it as an event loop. The client receives a
> > form. He submits it. Now, what processes the submission? In this
> > scheme, _the same index_html as was used to generate the form
> > in the first instance is responsible for processing the data_. That
> > is the "running in place" part of the scheme. We never leave the
> > folder, so we never have to redirect.
> Ah...so we've never "left" the index_html script?
Yes (and no). Remember, there are always at least two different
execution contexts going on (normally on two different computers) - that
of the browser and that of the server.
What is really going on is that the browser is given a bit of HTML to
"execute". That bit of HTML has a form in it, and the form tells the
server what to execute next (via the action), and what data to execute
against, the request.
What I am saying is that it is not necessary to go to a new folder to
process every interaction, and that it is indeed a bit counterproductive
in many instances - since you get to the bottom and then have to get the
browser back up the folder tree - either by forcing the user to click a
button that snaps him back up, which slows down and irritates the user,
or by using a redirect, which slightly increases network traffic, and
has other problems.
So, for as long as the browser stays in this "execution sheaf", we
arrange that every form says "use the current URI to process the form
that this URI generated last time". "." is the current URI. Note: it
can be omitted completely, and still work. You can use
<form method="post">...</form>. This will do exactly the same as
<form action="." method="post">...</form>. I prefer the longer format
as it explicitly tells me what URI will be handling the form when
received, namely, this URI.
> I'm used to seeing either a file name or a script name here - for
> example, in the ASP code generated by DreamWever I would have the
> generic MM db "CRUD" scripts...
> > OK, my fault (I like to define context and request as parameters to
> > all Script (Python)s except index_html (where you may not do so, due
> > to the ZPublisher machinery!), as it makes the calling sequence more
> > uniform. make the first line of your script:
> > request=context.REQUEST
> OK - Done that, and it now falls over at the next step: -
> NameError: global name 'main_menu_form_pt' is not defined
Normally main_menu_form_pt will be in the same folder as
index_html, so the container place-holder is preferred. (less overhead)
> So, I need to understand the syntax in this Pytin Script which would
> actually call the Page Template...I can see that 'main_menu_form_pt'
> is what is being returned by the script, and that this is the name of
> the PT - but what tells the script it is to throw a template?
Whatever object is returned from index_html is delivered to the browser
automatically by, I think, ZPublisher.
> >>Is insert_ISBN_sql our ZSQL method here?
> > Right, the ZSQL method that does an insert. You will probably also
> > have an update, and a delete method, and certainly one or more
> > select methods.
> Got it - I can see how that bit should fit togther.
> >>Same question about "method" here as for "action" above - how the
> >>period works? Are these transposed by any chance?
> > Yeah, transposed. Hey, this was composed at screen without
> > testing....;-P
> <grin> At least I demonstrated that I am working at understanding this
> I'm sorry this is turning out so laborious - I do truly appreciate
> your help!
> Email: email@example.com / Voicemail & Facsimile: 07092 070518
> "the symbols of the divine show up in our world initially at the trash
> stratum." Philip K Dick
|
OPCFW_CODE
|
Novel–The Cursed Prince–The Cursed Prince
Chapter 472 – Conversation With Maxim zoom endurable
She recognized he arrived at discuss, but she thought to tease him for making the climate lighter weight.
There have been many blossoms during the yard and Emmelyn was fascinated by the scene from her chamber. Ahh.. it’s been a while since she found and remained with a appropriate palace.
“Emmelyn, can we talk?”
“Perfectly, Your Majesty,” Lord Marius received back on his horse and patiently waited until Loriel and Emmelyn rode earlier him before he observed them from associated with.
Out of the article writer:
Incidentally, I am just getting your hands on a brand new interest from the previous week, that is enhancing and creating photographs with Ibis Painting app and it’s using many my time. It’s unwinding to complete, well, i spend a great deal time there.
tangled series tied up scene
This reminded Emmelyn a little bit about Princess Elara’s backyard. It always bloomed with roses even in the autumn months. It demonstrated that the garden was very well maintained and looked after with special remedy to permit the plants and flowers to help keep expanding like it is in planting season or summer season.
“Can be found in,” Emmelyn replied. She opened the door and appreciated Maxim to enter. “Are we possessing meal soon you are emerging on this page to obtain me?”
They sat because of their herbal tea and didn’t say everything for some minutes. Emmelyn checked out Maxim intently and patiently waited for him to dicuss.
irithyll dungeon doll
Loriel waved his fretting hand nonchalantly and reported, “There is no desire for that. I am going to relax with my crew right here and then determine them each and every morning.”
Emmelyn shook her head but she presented him the teapot. Maxim put herbal tea for each of them into two mugs and presented one of those to Emmelyn.
She actually skipped experiencing tea with Queen Elara while talking about whatever and seen her fresh flowers from the backyard garden.
how many times do they say love in love actually
So, possibly it was subsequently excellent that they finally arrived in Belem, so they could remainder properly and talked in individual.
Might be she could give some high priced gift items to Support Tempest, a minimum of to demonstrate that she was grateful.
Maxim smiled and responded, “These are generally planning the feast therefore we will feed on quickly. I arrived here to dicuss.”
“Very well, Your Majesty,” Lord Marius received back on his horse and anxiously waited until Loriel and Emmelyn rode previous him before he followed them from right behind.
“Positive,” Maxim increased from his seat and took on the teapot from Emmelyn’s fingers. “Permit me to fill the tea, I am the variety in this particular place.”
Emmelyn retained her breath when she been told this proclamation that somehow sounded similar to an indirect appreciate confession.
Incidentally, I am just obtaining a fresh pastime from the previous week, which is certainly editing and making photographs with Ibis Decorate app and it’s taking lots of my time. It’s calming to complete, thus i devote a lot time there.
“I’m sorry for being untruthful to you about who I am,” Maxim finally located his speech. “I absolutely enjoyed our friendship and I didn’t want you to treat me differently simply because I am just royalty. So, I didn’t say nearly anything. Of course, then In addition, i didn’t know you are a princess.”
Might be she could give some high-priced gift ideas to Position Tempest, a minimum of to demonstrate she was happy.
Ugh… that liar.
can russian missiles reach us
They sat making use of their green tea and didn’t say nearly anything for a couple of occasions. Emmelyn viewed Maxim intently and anxiously waited for him to dicuss.
The mayor took Loriel, Emmelyn, and Kira toward his household and provided them the top chambers to rest during the main developing. When Emmelyn inserted the mayor’s palace compound, she immediately admired the massive garden down the middle of the palace’s wall space.
“Positive,” Maxim rose from his couch and needed during the teapot from Emmelyn’s palm. “I want to dump the herbal tea, I am just the variety on this land.”
In addition, I am just collecting a fresh interest from a week ago, which happens to be editing and enhancing and developing pics with Ibis Paint app and it’s acquiring plenty of my time. It’s soothing to carry out, and so i devote so much time there.
“Very well, fantastic evening, my young lady,” Lord Marius replied respectfully. He required off his hat and nodded at Emmelyn. Then, he bowed right down to the queen and greeted him, “I trust that the trip gone perfectly, Your Majesty.”
Emmelyn was moved from her reverie when she noticed the knocks on the entrance. Then, she listened to Maxim’s tone of voice from outside her home.
Novel–The Cursed Prince–The Cursed Prince
|
OPCFW_CODE
|
After having this issue as well and doing a little research, I came across this thread and another one that tipped me off. I was pulling my hair out already.
It turns out that the problem is with the build order of your projects (mine was anyway). Since ADT/SDK v14 changed the way library projects are referenced, the build order needs to be correct. Make sure all of the libraries your app uses are built first. I just moved the "src" and "gen" folders for each of my projects to the bottom and now it builds the library first and I am able to debug it and view the source of my library files through the main project.
In case someone doesn't know where to do this, in Eclipse, right click on your project and "Build Path" and then click "Configure Build Path". Then, on the "Order and Export" tab, move the two folders for your project to the bottom of the list below your libraries. I did this for all of my projects and the library projects.
You can also do it globally in Eclipse from Windows->Preferences->General->Workspace->Build Order and moving your library projects to the top. I think the build order defined in each project will override this though, so you may want to do it in both places to solve the issue now and for future projects.
I assume you are opeining library project and there you put brakepoint. Try this: On main project open Library Projects->[yourlibrary.jar]->[yourfile.class] from Package Explorer, and then in .class file put brakepoints. This works for me at least :)
Sometimes this happens to me. Not sure about the reason but the way I solve is:
Remove the main project from eclipse. -- Closs Eclipse -- Delete the jar file in the library project -- Open Eclipse -- Wait for the library project to compile -- Import the main project
This problem also occurs with release 21 of ADT inside Juno. As a workaround, in the "debug" view of the debug perspective (where you see threads and method invocation traces), right click and edit source lookup path.
I had the same problem in a project today. The project consists of an app which has two library dependencies. I could not see code during debugging and when using auto-completion when overriding methods Eclipse was unable to deduce proper argument names.
First of all, the problem manifested itself by showing the the 'gen' folder was used as the one that contained the source. To check whether this is the same issue go to your app project, open the Android dependencies and have a look at the properties of the your library dependencies. Location path said /libraryprojectname/gen.
If this is also your problem then go to the 'Order and Export' tab of each library project and move the 'gen' item below the 'src' item. As soon as you click OK Eclipse will work a bit and when you check the Android dependency properties the location path should say: /libraryprojectname/src. Open click the dependency and open any class inside the jar. It should show the source.
I am using ADT plugin 20.0.3 with Android SDK Tools 20.0.3 and Android SDK Platform Tools 14.
The following worked for me on Eclipse Juno:
In Project Properties/Java Build Path:
- In the Projects tab, added my library projects.
- In the Order and Export tab, moved my library projects to the top, and checked them
Not sure if it's relevant, but Android SDK tools is rev 20.0.3 and Android SDK platform tools is 14.
Tried all of the above and it did not work for me, however the workaround detailed here did.
- Start debugging, and run until you hit a breakpoint (and precisely get a .class file instead of the .java you would like to have)
- Right click in the Debug view of the Debug perspective (for example on the call stack), and choose "Edit Source Lookup Path"
- Add all your projects above "Default", via "Add..." > "Java project" > "Select All"
(I'm using ADT 15.0.2 preview from http://tools.android.com/download)
- Unable to Debug Library Projects with ADT v14 - Source Not Found
- Source not found error given in debug mode when running a method in Java with a try/catch block
- Updated Eclipse and ADT not working with previously working packages.- Source not found
- Debug Android library project with java source code
- UnsatisfiedLinkError - Unable to load library - Native library not found in resource path
- Eclipse not linking private member names for library with attached source
- unable to load library 'yyy.dll': native library (win32-x86/yyy.dll) not found in resource path (jna + dll + eclipse rcp)
- Eclipse c++ library not found in debug mode but found in release mode
- Source not found in Eclipse with Tomcat
- Kura source code maven build failed with jdk:jdk.dio:jar:1.0.1 not found
- Debugging java source file with multiple classes in eclipse: Source not found
- Unable to download CrudRepository source in Eclipse IDE - Source Not Found
- Play! Framework Eclipse Debug Source Not Found
- Eclipse Class File Editor Source not Found when adding library to java project
- My JUnit tests executes with maven build in Java 11, but unable to run same Junit tests via RunIT->Run As->Units, which always says not JUnits found
- Source not found with debugging code in java
- If I debug JSF webapp it gives me SOURCE NOT FOUND
- Source not found when debugging all projects
- source not found in java library
- Eclipse with Glassfish 4 DEBUG source lookup not working
- eclips source not found debug remotely
- jsf with eclipse: Library of referenced project not found
- Refactorings fail in Eclipse with ADT when there are library projects
- Debug JSP Files On WebLogic 11g - Source Not Found
- Library projects not recognized in Android with latest android tools/sdk updates
- ADT requires 'org.eclipse.wst.sse.core 0.0.0' but it could not be found
- Eclipse java debugging: source not found
- Project with path ':mypath' could not be found in root project 'myproject'
- The APR based Apache Tomcat Native library was not found on the java.library.path
- Android Studio - Library reference ../google-play-services_lib could not be found importing project
More Query from same tag
- Java novice error Android Programming sendMessage method
- Reading a openstreetmap XML file in C++ using eclipse
- Rational Application Developer: find the IDE's location on disk (from within the IDE)
- Shortcut Keys in eclipse Ganymede version
- Don't show autocomplete in eclipse when backspace all letters
- Eclipse RCP add log statements from other plug-ins to log plug-in
- how do i build a JDBC connection url from this XML connection data?
- How to rename a method/field from Eclipse when not declared
- eclipse C project shows errors (Symbol could not be resolved) but it compiles
- use org.eclipse.core.resources method in my java app
- Eclipse: Green rectangle label on Class
- is it possible to get name of the application from the external class?
- Can't read txt file with Scanner in Eclipse (Java)
- Eclipse Error but doesnt show where
- How to show hover text for a swt text box
- How to create a servlet filter in Eclipse?
- JAVA nested while loop error -should be amazingly simple
- Eclipse Juno 64 bit Crash on Ubuntu 12.04 64 bit
- Eclipse: Limit resource and type searches to just the open project?
- Search classes from SWT UI in eclipse plugin development
- Maven Unicode Encoding build failure
- How to install Maven integration plugin in Eclipse Helios?
- How do I import an eclipse workspace into intelliJ
- Eclipse process sleep on Mac OS X El Capitan beta 3 with JDK 8
- eclipse plugin for rectangular select/cut/paste regions?
- Vibration after regular intervals
- How to inject multiple instances of a services in eclipse e4?
- Eclipse Window Builder error under Java 10
- This method is an undefined type of object
- Eclipse JCOP Tools Error - "no definition for label Label: 15659979, , block 230, def null"
|
OPCFW_CODE
|
How should a tester deal with a bug found in production?
In my previous project, I was working as a black-box/manual tester. My major responsibilities were performing function testing, executing regression suites, running smoke tests etc. I was testing a core banking application. There were other automation guys on the team, but I was the only manual QA.
A new payment feature was introduced and I tested a lot of complex scenarios and found quite a few critical bugs that were eventually fixed. The build was released on the production server and there was a crash in a module that I tested. I missed the bug! I was asked to explain and justify everything and was rebuked by my manager and the incident was escalated to higher management as the bug caused a lot of problems to the end users. Luckily my job was still there.
How should a tester deal these kind of situations?
There will always be bugs that get past a tester and land in production. I have even had bugs that where in my face, we researched it, thought it was a fluke, because we couldn't reproduce it and then released the issue into production.
The best thing you can do is learn from it and prevent the same in the future. I write an automated test-case for each defect found in production, since these are the brittle parts of the application.
Second I would plan a root cause analyses session and use the 5 whys technique to find the cause. Now find a solution to improve here and make it future proof.
found quite a few critical bugs
If you are testing a part of an application or new functionality and you find quite a few critical defects, then maybe wonder if you need help in testing this mess. Finding more than a few simple issues means in my book that the code is spaghetti or the developers might be sloppy. You need to signal it and take action.
That last paragraph is perfect and will help keep you employed.
first of all, the tester needs to check whether it was within our testing scope or not. If Yes then we have to do root cause analysis. Once you are known to the exact issue then you need to inform your project manager or QA manager.
The report part should include the exact impact of the bug found, also you should be ready with the proper explanation for the same.
First and foremost, analyse how the bug got past you. Find a way of preventing this, and things like it from happening ever again.
Second and maybe even more important, talk. Talk with everyone involved with getting new functionality out. As soon as requirements materialize, get your test approach on paper and scrutinize it AS A TEAM.
That a harmful bug got past you is bad. But you did not put it there. Getting good stuff out to the customer is the job of all of the team.
Finally.
So ask yourself, was it, in the end, justified that you got singled out to take the blame? Did you make a mistake? Or are you in a situation where it is just a bunch of people doing things? If so, maybe better to switch companies.
Hope this helps, and better luck next time.
If you was under the threat to lose your job as a result of letting a single bug to slip to PROD, you should consider looking for a job in saner company. It is just a matter of time when another bug will slip to PROD.
Quality is a team sport. Bugs will always slip, and pointing fingers and assigning blame does NOT improve quality. Firing you for a bug is NOT reasonable. It could be reasonable if you repeatedly let slip similar bugs. For few bugs, no pay increase is reasonable.
Dealing with a bug which slipped to PROD is not only responsibility of the tester who tested the module. Even more responsible is the developer who wrote the code. Both Dev and QA teams should analyze it, find a way to prevent it. Automated smoke tests? Developers suggesting which modules should be retested after changes? Etc.
You cannot test-out quality. Quality has to be designed and developed in. Testing can only show presence of the bugs, not it absence.
Make a habit of going through a test case ( if not at least high level test plan ) review with the whole team, its not unusual to miss a bug which can land in production
After review you can always say it was not part of plan so everyone is responsible not only you, may be no one ll point fingers in first place
If your manager does not understand this and joins others in pointing finger at you, start searching for a job in normal companies, these things can damage your confidence and you learn to just do things to cover your base which takes juice out of testers profession
To Prevent Bugs in production you should test from a stakeholder view, A good knowledge in product and domain always help you to get the best test cases and apart from positive test cases always try for test cases which will break the code as even dev does the positive testing.
|
STACK_EXCHANGE
|
"Trip Journal": Helpful iPhone App for Chowhounding?
This new iphone app, "Trip Journal", seems like it might be a good app to capture chowhounding data while on trips:
Obviously, iphone apps mostly draw off the same database of restaurants, which won't include very new or fuzzy places...or menus, etc. So the perfect app will never be created.
But this strikes me as a better solution than constantly jotting down addresses and phone numbers, and more organized than simply shooting photos. And seems especially good for when you're in an unfamiliar place, but passing lots of interesting-looking places you want to note. Shoot a photo, grab the GPS, scrawl a note, and have it all organized linearly for you (so it's easy to reconstruct later than a pocketful of business cards).
Right! I use an application designed for hiking (Trail Marker: Hiking Buddy) for similar nefarious purposes. You can mark a place and take your own notes on it. Now if this site would just develop a Chowhound application that works on iPhones / Blackberries -- one that lets you sign in to your account, read and post -- we'd be so money.
If you want to use the app for this purpose keep in mind that at the moment notes are not exported in the KMZ, only comments added to photos.
Also, only the pictures taken from Current Trip and default pictures of the Waypoints are exported, the other photos from Waypoints are not.
We are working on an Update that will address these issues.
I am curious, are there specific features you would need for your purpose?
I realized you're a developer, but there are two apps being discussed in this thread. I'll presume you googled in via my OP and that you work on Trip Journal.
I'm traveling in Osaka, Japan. I'm passing sushi parlors and stores and museums that look interesting, but I don't have time to stop and enjoy every single place. I want to gather sufficient info so that I can stop back later, or, upon my return home, tell people "this place looked great, but I didn't try it".
Optimally, I'd like to photograph the exterior, annotate that photograph, have it key in GPS (and cough up an approximate address), maybe take further photographs (e.g. the menu of the sushi place, the brochure of the museum, etc), annotate THEM, perhaps type a generalized note not keyed to any specific photo, and have all photos/notes/address divinations appear in a chronological timeline, from which it would be easy for me to later "chop up" clusters of data about individual places on my desktop. But, while still traveling, I'd use the time stream on my iphone to reconstruct what was spotted when/where so I could quickly find my way back.
All annotations, gps data and guessed addresses should be available to cut/paste as text later (on my desktop).
If there could be some level of OCR on lettering in the photos, that, obviously, would be fab. If it could take the OCR output, and, in concert with the GPS data, try to determine precise info about an establishment, that'd be full-on awesome.
I don't think what I'm asking for would be very different from what the app is intended to do, anyway.
re: Jim Leff
You are right, I was talking about Trip Journal.
I think the current version + upcoming update should fulfill most of your needs.
Basically, you would create a new Waypoint at the place of interest. You will receive the approximate location based on the GPS data via Google Maps (Update feature).
You can add as many notes to the Waypoint as you want. The same goes for photos. Note that you need to navigate to the Waypoint and write notes and take photos from the Waypoint View. If you do it from the Current Trip screen, they will not be associated to the Waypoint, but with the trip.
You can annotate one comment to each photo (is this enough?).
The Waypoint default pictures and picture comments will be exported in the KMZ. The notes and extra Waypoint pictures will only be exported with the next Update.
A small inconvenient might be that you need to install Google Earth on the Desktop to view the KMZ. You can copy the data from the Waypoint balloon and organize it as you see fit.
Your scenario might also benefit from a feature to export single Waypoints instead of the entire trip - we are floating this idea around.
You will see Waypoints and Pictures taken on the trip in Google Earth in chronological order. If you used Track Route you will also have a nice itinerary of your trip and see how you got to the locations; this feature drains the battery quite a bit, so it is recommended for routes under 3 hours or for driving with the iPhone plugged in the charger.
The OCR solution is not planned, but it certainly seems interesting. I guess pursuing this idea depends on whether it is easy to port an existing solution to the iPhone - will look into it.
Thank you very much for your feedback.
One concern with structuring this around way points.....what if there are multiple venues of interest extremely close to one another? Will they mush together?
Can you add capability to create an annotated google map? Here's an example: http://maps.google.com/maps/ms?ie=UTF... . I'd love to be able to spit these out on the fly for all areas I explore!
Finally, I hope the app wouldn't force my data into such a user -friendly, googled-up format that it'd be a pain to work the data into a blog posting, excel sheet, or filemaker DB.....giving me access to ALL data (including my own notes, annotations, GPS coordinates, and approximate addresses).
re: finding an existing OCR solution, oh, God yeah. One option would be Amazon's mechanical Turk, so there's human intelligence involved. You'd have to micro pay for the service, so there'd need to be some sort of subscription charge to make it worth your while. It'd keep the app size reasonable to have that work done in the cloud, ala Dragon Dictation.
Hey, anyone else want to pipe in with ways to create the ultimate app for noting chowhounding finds on the fly?
re: Jim Leff
I think it is pretty easy to navigate between Waypoints, we provide some arrows in the balloon you can use for this. You can download an example KMZ from here: http://www.iqapps.eu/TripJournal/Exam... and look at it in Google Earth.
We will have something like that Google Map soon; I can't tell you exactly what just yet, it will be a surprise :)
|
OPCFW_CODE
|
The article chosen addresses the so-called bubble effect identified by Pariser (2012). This bubble effect suggests that by using recommender systems (RS), users are exposed to only a few products that they will like and miss out of many others. The paper wants to investigate this through understanding the content diversity at an individual level provided by collaborative filtering. It suggests to be the first study observing the effects of this phenomenon on an individual level.
From the study conducted by Lee and Hosanagar (2015) we have understood that there are many opposing views existing in the literature on content diversity on an individual level. Therefore, as the current article claims to be the first one studying this phenomenon on an individual level it is interesting to see how the study has been conducted and what their conclusions are compared to the article of 2015.
The paper addresses very well the debate in regard of the bubble effect: whether recommender systems may be harmful to users. First the behavioral aspects of people who are exposed to similar content and what the effect is on their individual behavior is addressed. As they want to measure the effects on an individual level it is important to recognize what has been found in regard of individual behaviors on content exposure.
For this study, they use the long-term users of MovieLens as they need longitudinal data to draw conclusions on the user’s behavior over time. Two research questions are addressed; 1. Do recommender systems expose users to narrower content over time? 2. How does the experience of users using recommender systems differ of those who do not rely on recommender systems?
The article uses the “tag genome” developed by Vig et al. (2012) to analyze the diversity of the movies that are recommended and consumed (rated). This appears to be a strong measure as it identifies the content of the movie and identifies the similarities content-wise. Multiple articles have used the movie genres (Lee and Hosanagar, 2015) or the ratings given (Adamopoulos and Tuzhilin, 2014) to identify similarities which seems to be less generalizable as the content of the movies still can vary greatly when using these metrics.
The article describes clearly how the findings should be interpreted and addresses multiple questions that have risen based on the findings. This leads to a well-rounded study where the effect of item-item collaborative filtering is exposed. First, the article addresses whether recommender systems expose users to a narrower content over time through comparing the content diversity at the beginning with the content diversity at the end of the user’s rating history. This comparison can show the development of content consumption of the user over time. It has been found that the content diversity of both user groups (using RS or not to rate movies) becomes quite similar over time. Furthermore, it identifies whether using RS reduces the total content-diversity consumed of that user. The conclusion is that users using RS over time consume more diverse content then users ignoring RS. Finally, the experience of the two user groups are evaluated and it is observed that users using RS seem to consume more enjoyable movies based on their ratings given.
As the limitations of the article suggest itself it would be interesting to study the phenomenon in a more experimental setting where the behavior of users can be observed in more detail. This would help in understanding the reasons of the decisions made by the users based on the recommendations. The multiple studies conducted in the field of RS mostly focus on collaborative filtering as this RS is the most commonly used (Lee and Hosanagar, 2015) but research should also focus on other recommender systems to make sure that those used benefit the user the most.
Adamopoulos, P. and Tuzhilin, A., 2014, October. On over-specialization and concentration bias of recommendations: Probabilistic neighborhood selection in collaborative filtering systems. In Proceedings of the 8th ACM Conference on Recommender systems (pp. 153-160). ACM. http://dl.acm.org/citation.cfm?id=2645752
Lee, D. and Hosanagar, K., 2015. ‘People Who Liked This Study Also Liked’: An Empirical Investigation of the Impact of Recommender Systems on Sales Diversity.https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2603361
Nguyen, T.T., Hui, P.M., Harper, F.M., Terveen, L. and Konstan, J.A., 2014, April. Exploring the filter bubble: the effect of using recommender systems on content diversity. In Proceedings of the 23rd international conference on World wide web (pp. 677-686). ACM. http://dl.acm.org/citation.cfm?id=2568012
|
OPCFW_CODE
|
By Finn Roberts & Jonathan Schroeder
R users have a powerful new way to access IPUMS NHGIS!
The July 2023 release of ipumsr 0.6.0 includes a fully-featured set of client tools enabling R users to get NHGIS data and metadata via the IPUMS API. Without leaving their R environment, users can find, request, download and read in U.S. census summary tables, geographic time series, and GIS mapping files for years from 1790 through the present. This blog post gives an overview of the possibilities and describes how to get started.
What you can do with ipumsr
Request and download NHGIS data
You can use ipumsr to specify the parameters of an NHGIS data extract request and submit that request for processing by the IPUMS servers. You can request any of the data products that are available through the NHGIS Data Finder: summary tables, time series tables, and shapefiles. You can also specify general formatting parameters (e.g., file format or time series table layout) to customize the structure of your data extract.
Once you have specified a data extract, you can use a series of ipumsr functions to:
- submit the extract request to the IPUMS servers for processing
- check on the extract status
- wait for the extract to complete
- download the extract as soon as it’s ready
- load the data into R with detailed data field descriptions.
This workflow allows you to go from a set of abstract NHGIS data specifications to analyzable data, all without having to leave your R session!
Get metadata describing NHGIS data
You can also use ipumsr to view metadata about NHGIS data. This includes both high-level summaries of all available datasets, time series tables, and shapefiles as well as specific details about particular summary tables and time series.
Access to this information can simplify workflows in several ways:
Identify available data
Browse data descriptions, geographic levels, comparability, and more to explore what’s available and find data that suits your particular research needs. You could also use other R capabilities to search and filter through the thousands of data descriptions in ways that the NHGIS Data Finder doesn’t support.
Create extract requests
Use the names and options given in the NHGIS metadata to specify requests for desired data. For instance, the metadata for a specific dataset will include lists of tables, geographic levels, and breakdowns available for that dataset. After getting that information, you could copy the names of any items of interest directly into an extract request definition.
Streamline data management
You can even use metadata as a resource to build pipelines to make basic data management tasks easier. For instance, you can write code to search the metadata for the most recent ACS 1-year release and add that dataset to your base extract definition when it becomes available, allowing you to quickly update your analysis with the latest data.
Share NHGIS extract definitions
With ipumsr, you can easily save an extract definition to a JSON file and share that with others who can then load the definition into R and submit it to the IPUMS servers for processing under their own account.
View previous NHGIS extract definitions
You can also use ipumsr to view the definitions of extracts you previously requested—either via the web or via ipumsr—which you can then share or use as the foundation for a new extract request with similar parameters. You can even view the parameters for extracts that have expired. (All NHGIS extracts expire two weeks after completion, meaning the data are no longer available for download, but the definitions persist.) If you need to reproduce the data in an expired extract, simply resubmit it and download the new files when complete.
To install the latest update of ipumsr from CRAN, run
in your R console. You can ensure your currently installed version is greater than 0.6.0 with
Requesting NHGIS data and metadata via ipumsr also requires that you first register to use NHGIS (if you haven’t already) and obtain an IPUMS API key. The IPUMS API (Application Programming Interface) is the system through which IPUMS enables programmatic access to its servers. All of the ipumsr functions that access NHGIS data and metadata do so by submitting calls to the API. Each API call must include a key for a specific registered IPUMS user, which enables IPUMS servers to authenticate API users and associate their requests with their accounts.
The ipumsr website includes several articles that demonstrate ways to work with the IPUMS API within R, including instructions on how to get an API key and use it with ipumsr. We suggest you start with the introduction to the IPUMS API for R users. This outlines the core workflow for creating, submitting, and downloading an extract within ipumsr.
Once you have a sense of this workflow, check out the NHGIS API Requests article for more details on the available options when dealing specifically with NHGIS metadata and extract requests.
Finally, the Reading IPUMS Data article will get you familiar with the specialized ipumsr file-reading functionality.
Other Supported Data Collections
The IPUMS API currently supports access to three other IPUMS data collections in addition to NHGIS:
- IPUMS USA
- IPUMS CPS
- IPUMS International
These data collections are also supported by ipumsr, and use the same general workflow as does NHGIS except that the IPUMS API does not yet support metadata access for these collections. For more details about the specifics of requesting data from these collections, see the Microdata API Requests article.
The IPUMS team will continue to add API support for more IPUMS collections in the future, and as we do so, we intend to add parallel support in ipumsr. So, if a collection you’re interested in isn’t available yet, stay tuned! Check out the API development roadmap to get a sense of what features may be available in the future.
IPUMS also maintains ipumspy, a python library that provides much of the same functionality that is available in ipumsr. At the current time, we have not yet added NHGIS support to ipumspy. We hope to do so in the near future!
|
OPCFW_CODE
|
It's been such a long time since the last post. Almost 2 years! Time does fly.
Why the sudden resurrection of this blog? Let's just say it's the desire to learn has been rekindled!
I'd mentioned on several previous posts (I think I did at least) on this blog that I used to work as a technical translator for a software dev company back in the Philippines. The team I served with my language skills (greatly diminishing instead of improving - I cringe at the multiple and redundant grammatical errors my posts here contain!!) were Java developers. They tried to teach me how to code in Java. I remember that "Beer Song" exercise which when I'd resolved it, I dropped Java like a too hot pancake in my hands. Since it had splattered on the floor already, why bother picking it up? There were other more interesting and yummy confections at that time (ehem - like the HP Mini 311 Darwin Project at InsanelyMac.com) to bite into!
I remember explaining to my then team mate and close friend (who's a freakin' awesome Java programmer) that I just couldn't accept why some stuff (methods? classes? etc.) must be declared first. On retrospect now, perhaps there was just the general lack of a motivation for me to force my brain to wrap itself around the proclivities of decent Java coding: i.e. the need to make things work.
For I now realize I did end up dipping my elbows in code when I was finding my way through the cog works of hackintoshing, specifically DSDT patching. I was surprised to be informed of my the techie team mates that it involved, in fact, coding at a much much lower system level than say the commonly known object-oriented programming languages like Java, Python, etc.. Java worked inside the Operating System and interacted mostly with the files and file system therein, while dealing with DSDT meant interacting with the BIOS. (More info on DSDT here).
Now what was different from this machine gibberish from Java when they're both code? There was a requirement - specifically a personal need - my dream of experiencing having a Mac ^_^
I had to patch my own DSDT to make sure low system level processes like wake-up from sleep triggered by mechanical actions like closing the lid, hibernation, fan speed, etc. for my own hackintoshes because most of the hacking guides available were for North America models of said machines and I was in Asia. I had to do a lot comparison and reading the code lines of those other hackers posted and looked out for patterns. Some object names would differ by one character between those posted codes and the actual DSDT extracted from my own Asian release model laptop or netbook.
It was trial and error, testing, and squeezing out comprehension (read the original post from 2009!) from cryptic lines as the example section below:
I remember this is to correctly trigger sleep from closing the laptop lid and if my memory serves me right, after long hours of sleepless nights, in order to arrive at this, I tried adding 'S' to the 'LID' entries in there and adding if-then lines. I remember the dull headache when I had ran out of logical things to try even after scouring the interwebs and even downloading one manual laptop BIOS and system events (ACPICA and always maintained). Miraculously, it was the key (adding that 'S' in there and if-then logic)!
Though it was the most boring piece of literature I ever laid eyes on, I don't regret reading the manual because I was able to deduce some of the lines' function . I put all I could understand after those double forward slashes "//" when saving and sharing my resulting dsdt.aml file to the hackintosh community that I considered my friends, mentors, and educators.
Those eureka moments of "Oh so that 0 there is closed lid and 1 for open lid!" were fun and exhilirating even ("nakaka-kilig" in my native Tagalog). Perhaps it was akin to how Jean-François Champollion felt when he finally deciphered the Egyptian heiroglyphs after years of studying the Rosetta Stone.
Of course, if you asked me if I still understand DSDT now, the answer is an honest "No, not at all". Hehehe. There's just no urgency for my lazy brain to even switch on and make an effort to re-learn, that after all, I've been blessed now with a real Mac (Finally! Thank You, Lord, for the blessing!) - why bother?
Recently, I had to find a way to manipulate .csv files (comma-delimited text files) to do a sort-of-vlookup process and all to be done via batch file to complete a "project". It was a challenge for me.
I hate Excel, formulas, and numbers in general. It's beyond question that I even try to learn macros much more VBA. It's natural human instinct to stay away from things we abhor, right?
But the problem of making that vlookup routine work in an automation project piqued my interest and I was hooked like a fish to line and bait. To what extent was I obsessed to figure how things worked? Well, the fact that I gave up my time for watching Korean dramas each night to spend it instead in lurking in Stackoverflow forums should give you an idea. I was interested enough to take some of my personal time outside of working hours even if this was work-related (because I still had to deliver my cases - enter those orders in SAP everyday; there wasn't just time!) and managed to make space for tinkering in between other responsibilities at church and personal quiet time (Bible time, yehey!!) that had to keep (for I cannot survive without them!).
I'd say the fasting from all Hallyu was far from futile for I was rewarded with a VB Script that I only needed to edit a bit - after I understood how it worked first - and it became the missing, crucial piece of the puzzle to make that automation work.
By "understanding" I refer to doing that familiar method of trial and error and pattern search similar to that problem of DSDT / BIOS stuff in my old days of hackintoshing. Finally, after a late night testing, the lines of heiroglyphs - err, sorry code - made sense. (Praise God!!)
I present to you for scrutiny and judgment, the discoveries :
Disclaimer: I admit they may be wrong: it's the first time I ever looked into VBScript which my expert dev cousin says is so outdated. In my mind, I imagine it may well have come all the way back from the Palelithic age! Hehehe
*credits for this specimen of wonderful VBScript (the expert guy Bill Prew says it's ' Pure' - anything pure is good in my vocab just like pure orange juice hehehe): Experts Exchange forums
Set oInFile1 = oFSO.OpenTextFile(cInFile1, cForReading)
Do Until oInFile1.AtEndOfStream 'reads the file until the last row
aLine = Split(oInFile1.Readline, ",") 'reads lines in file, taking entries as individual items as spearated by a comma / delimiliter
If Not oDict.Exists(aLine(0)) Then 'adds each item and corresponding values to the dictionary array where we will use later (loaded into memory?)
oDict.Add aLine(0), aLine(1) 'there are 2 items per line, 0 and 1 - begins with 0!!
Set oInFile2 = oFSO.OpenTextFile(cInFile2, cForReading) 'is the table I'm looking up matching values in
Set oOutFile = oFSO.OpenTextFile(cOutFile, cForWriting, True) 'this is for writing the output csv file
Do Until oInFile2.AtEndOfStream 'just so that we get to that last entry!!
aLine = Split(oInFile2.Readline, ",") 'reads lines in file, taking entries as individual items as spearated by a comma / delimiter
ReDim Preserve aLine(UBound(aLine)+1) 'inserts a new item in the rows
aLine(5) = aLine(4) 'moves 3rd item in row to the 4th
aLine(4) = aLine(3) 'moves 2nd item in row to the 3rd
If oDict.Exists(aLine(2)) Then 'this is the lookup part - checks if 3rd item in a row has a dictionary entry
aLine(3) = oDict.Item(aLine(2)) 'loads the value of that item found rom the dictionary in memory to the 4th item in a row
oOutFile.WriteLine Join(aLine, ",") 'joins the items in a file yay!
Now how to make Excel palatable? because I'm convinced (by my cousin) that I need to acquire VBA for Excel literacy.
|
OPCFW_CODE
|
const Unit = require("../Unit.js");
const Side = require("../Structures/Side.js");
/**
* Class: SideCollection
* A class that stores all information about a given side. That includes Players, Roles and factional kills. Only accessible through the game object, after the game has started. If you try to access this class before that, it will return *null*.
*
* *Example:*
*
* --- Code
* game.sides.collected.get("Mafia") // Gives you the Mafia side object.
*
* game.sides.getSideSize("Mafia") // Gives you how many players are there in the Mafia side
*
* ---
*
*/
class SideCollection extends Unit {
constructor(game) {
super();
this.game = game;
game.roles.forEach(role => {
if (!this.has(role.side)) this.set(role.side, new Side(role.side));
this.get(role.side).roles.set(role.name, role);
});
game.players.forEach(player => {
this.get(player.role.side).players.set(player.name, player);
});
}
removePlayer(name) {
let player = this.game.alive.get(name);
if (!player) return false;
this.get(player.role.side).players.delete(name);
return true;
}
addPlayer(player) {
this.get(player.role.side).players.set(player.name, player);
}
/**
* Function: sizeOf
* Gets the amount of players that are in the side.
*
* Parameters:
* side - The side to get the amount of players from. (<String: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String>)
*
* Returns:
*
* The player amount of the side.
*/
sizeOf(side) {
if (!this.has(side)) return null;
return this.game.alive.findAll(p => p ? p.role.side == side : false).size;
}
}
module.exports = SideCollection;
|
STACK_EDU
|
Adding Awareness Api with Unity 3D Android Plugin
Currently I am making Location based AR app in Unity with Android Plugin that successfully senses user position and orientation. Now , I need to add Awareness APi in that plugin , so I added the following code in the Create method of my MainActivity.java.
mApiClient = new GoogleApiClient.Builder(MainActivity.this)
.addApi(Awareness.API)
.build();
mApiClient.connect();
However , on starting the apk in android device , the app says it stops responding. Commenting the above code makes the app work, creating only the Builder instance also makes the app work.
The above code along with other awareness functionality is working perfectly when creating a separate android app in android studio.
I tried adding google play service jar file as well as src file for the Contextmanager package , which contains Awareness method , but still same message appears.
Kindly let me know how to make this work in unity App?
Since you claim that mApiClient.connect(); causes the problem, why not check if mApiClient is null before calling mApiClient.connect();. Put that code in a try catch block then display the message with Log.v... Maybe there is a hidden problem you will discover...
Hi , No actually issue is causing from addApi line itself, and also putting this in a try catch block doesn't keep the app from stopping responding. I am not sure how to show error message in unity 3d view if android plugin throws exception. The plugin itself it working fine if the apk is generated from android studio as an application , instead of unity 3d
I know that the code is working in Android Studio. This is Unity plugin so what I am saying still applies. It doesn't matter if the try catch does not keep the app from stopping responding. All I want to know is if it throws any exception. You can use Android Monitor from Android Studio to see the Log. Just select your device then run your app from your phone. The log will show. Are you extending UnityPlayerActivity activity in your MainActivity class?
Yup I am extending UnityPlayerActivity, for the Unity Plugin. The working apk was being extended from AppCompatActivity.
Did you try the exception thing I said? What was the error? I want to help but want to make sure that my answer is actually problem
Hey Finally got it to working. Actually it was not finding the Awareness library. So after manually adding the contextmanager jar file in unity folder , it worked. However , now the unity app is unable to determine the current activity, at it fetches empty string from the method that is supposed to return the current activity.
You really didn't answer my question multiple times so don't think I can help you until I know the result of that question. Happy coding!
Now everything is working fine. Actually that was the exception. It was unable to detect the api.
|
STACK_EXCHANGE
|
ENH: optimize: Improve least_squares with Default diag=None for LM Method
Reference issue
This PR addresses the issue described in #19459 - "BUG: optimize.least_squares giving poor result compared to optimize.leastsq and optimize.curve_fit".
What does this implement/fix?
This enhancement modifies the behavior of optimize.least_squares when used with the method='lm' (Levenberg-Marquardt algorithm). Previously, unless x_scale was set to 'jac', the default behavior was to use diag=1.0 for scaling, leading to suboptimal fits in certain scenarios. This change proposes to set the default value of x_scale='jac' when method=='lm', aligning the behavior more closely with that of optimize.leastsq and optimize.curve_fit. This results in improved optimization performance and robustness.
Summary of Changes
Set default x_scale='jac' for least_squares with method='lm'.
Additional information
Can you add a unit test demonstrating that the change performs as intended?
Can you add a unit test demonstrating that the change performs as intended?
Updated the unit test and committed the change. Please review the PR and let me know.
I cannot find how to remove my review here unfortunately. Will take a closer look in some time.
Thanks for the email and suggested changes, I did them accordingly and committed them. Please let me know if you need any further updates from me :)
Hi @Roshan-DSBA , if you do the following the doc build check should work:
git fetch --all
git merge upstream/main
git push
Hi @Roshan-Velpula , if you do the following the doc build check should work:
git fetch --all
git merge upstream/main
git push
Hi, Thank you for the message! Pushed the changes and doc build check worked. :)
Hello @lucascolley is there any interest in updating this PR? I am interested in revisiting it since it would solve an important issue in my view.
Hey @CarrascoCesar ! This isn't my area of expertise, but perhaps you could respond to @dschmitz89's comment above about the purpose of the test?
Hello @dschmitz89 would there be interest in revisiting the PR and creating a simpler test that checks for improved behavior when x_scale is not defined and least_squares defaults to parameter scaling?
@CarrascoCesar : sorry, looking again I think I misunserstood your test. My new interpretation is: you are comparing the new against the old default and the new default has to result in better convergence indicated by a smaller number of function evaluations. Is that correct? In that sense, the test is actually very helpful.
Reading again the complete PR description, it seems that the overall goal is to get the same behaviour as leastsq if least_squares uses Levenberg Marquardt. Currently this is not the case due to different default arguments. WIth this change, we should get the same results. How about we update the test so that we show that we get the same results for least_squares(method=lm) and leastsq? This would make the purpose more explicit.
Hello @dschmitz89, I agree, comparing least_square to leastsq in the test addresses the original issue more directly. I'll work on it to include your recommendation and see if I can come up with a simpler test problem.
Hello @lucascolley, this will be my first time modifying somebody else's PR. What is the process to checkout the commit for me to make changes and then post?
Hello @lucascolley, this will be my first time modifying somebody else's PR. What is the process to checkout the commit for me to make changes and then post?
hey @CarrascoCesar , personally, I would use GitHub's CLI tool gh and do the following:
Make sure you are in the directory of your clone of this repo
gh checkout 19700 (to checkout this PR)
git checkout -b LM-changes (to make a new branch matching this PR)
Make your changes and commit as usual
Open a new PR from your branch
If you don't want to use gh, you can do the same by adding @Roshan-Velpula's fork as a remote.
If @Roshan-Velpula were active and collaborating, you could instead make a PR to their fork, and the changes could stay in this PR. But no harm in just opening a new PR here!
|
GITHUB_ARCHIVE
|
Heroku is a PaaS platform – supporting plenty of internet utility frameworks together with the likes of Ruby on Rails, NodeJS and PHP’s Laravel.
The service was designed in 2007 as a manner for Rails (and different internet utility) builders to deploy their functions with out having to fret about underlying structure & sever setup.
It has been created to provide folks entry to “single click on deploy” performance – permitting them to basically provision and deploy server “cases” with out the necessity of getting to be involved about how the infrastructure will work.
This tutorial explores the way you’re in a position to make use of Heroku for Ruby on Rails utility growth.
Crucial factor to grasp is that it is a “closed” platform.
In an try to be as easy-to-use as doable, the crew determined to take away *any* kind of specification from the system. Which means that it is tied into Amazon’s EC2 platform, and principally prevents you from having the ability to deploy your software program to some other platform via its interface.
While “supplier lock in” might not be an enormous challenge in itself, it does spotlight the core downside with Heroku… it is a platform not a service. Being a platform signifies that Heroku controls each side of the deployment course of – from the place you are storing your information to how a lot useful resource utilization you might have.
Which means that little issues – similar to *all the time* having a “x.herokuapp.com” subdomain out there to your app, paying PER APP (which might get very costly), being unable to vary your app’s location, are an enormous challenge.
Moreover, Heroku’s deployment course of may be very inflexible. Which means that you can’t change issues similar to “location”, and even have a number of frameworks / platforms working below an utility. While it has “buildpacks” (that are excellent) – they require you to hack collectively the varied pipelines you might have into one central construct course of.
Due to these restrictions, many builders have cited the system as being efficient as a “staging” atmosphere… however in lots of instances unhealthy for manufacturing. Manufacturing environments require scalability and extensibility on a core stage (if you happen to get site visitors spikes, or want to launch in different international locations – you want the flexibility to do it).
While Heroku does have these to a level, its lack of granular settings makes it very tough to justify utilizing as a manufacturing service. That is amplified with the system’s application-centric pricing construction.
The way in which round that is to make sure that you are in a position to make use of a system which is as versatile as required. Heroku might suffice on this respect for a lot of newbie builders (who simply want their app to run it doesn’t matter what), for some seasoned builders (who might require a extra particular person system), the likes of “cloud” VPS providers have a tendency to supply a extra interesting excellent for manufacturing stage internet utility provision.[ad_2]
Supply by Richard Peck
|
OPCFW_CODE
|