Without any optimization option, the compiler’s goal is to reduce the cost of compilation and to make debugging produce the expected results. Statements are independent: if you stop the program with a breakpoint between statements, you can then assign a new value to any variable or change the program counter to any other statement in the subprogram and get exactly the results you would expect from the source code.
Turning on optimization makes the compiler attempt to improve the performance and/or code size at the expense of compilation time and possibly the ability to debug the program.
If you use multiple -O options, with or without level numbers, the last such option is the one that is effective.
The default is optimization off. This results in the fastest compile times, but GNAT makes absolutely no attempt to optimize, and the generated programs are considerably larger and slower than when optimization is enabled. You can use the -O switch (the permitted forms are -O0, -O1, -O2, -O3, and -Os) with gcc to control the optimization level:
-O0
No optimization (the default); generates unoptimized code but has the fastest compilation time. Note that many other compilers do substantial optimization even if 'no optimization' is specified. With gcc, it is very unusual to use -O0 for production if execution time is of any concern, since -O0 means (almost) no optimization. This difference between gcc and other compilers should be kept in mind when doing performance comparisons.
-O1
Moderate optimization; optimizes reasonably well but does not degrade compilation time significantly.
-O2
Full optimization; generates highly optimized code and has the slowest compilation time.

-O3
Full optimization as in -O2; also uses more aggressive automatic inlining of subprograms within a unit (see Inlining of Subprograms) and attempts to vectorize loops.

-Os
Optimize space usage (code and data) of the resulting program.
Higher optimization levels perform more global transformations on the program and apply more expensive analysis algorithms in order to generate faster and more compact code. The price in compilation time, and the resulting improvement in execution time, both depend on the particular application and the hardware environment. You should experiment to find the best level for your application.
Since the precise set of optimizations done at each level will vary from release to release (and sometimes from target to target), it is best to think of the optimization settings in general terms.
See the ‘Options That Control Optimization’ section in
Using the GNU Compiler Collection (GCC)
for details about
-O settings and a number of
-f options that
individually enable or disable specific optimizations.
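For example (the file names here are placeholders; the switches are the standard gcc ones discussed above):

```shell
gcc -c -O2 pkg.adb              # full optimization
gcc -c -O2 -O0 pkg.adb          # multiple -O options: the last one (-O0) wins
gcc -c -O2 -fno-inline pkg.adb  # -f switches toggle individual optimizations
```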
Unlike some other compilation systems, GNAT has been tested extensively at all optimization levels. There are some bugs which appear only with optimization turned on, but there have also been bugs which show up only in 'unoptimized' code. Selecting a lower level of optimization does not improve the reliability of the code generator, which in practice is highly reliable at all optimization levels.
Note regarding the use of -O3: the use of this optimization level ought not to be automatically preferred over that of level -O2, since it often results in larger executables which may run more slowly.
See further discussion of this point in Inlining of Subprograms.
|
OPCFW_CODE
|
A declared foreign key (i.e., one enforced by the database engine) cannot tie to multiple other tables. So
group_device cannot be a foreign key to all three device tables.
You have a few options:
Have a unique table linking each type of device to the appropriate group (e.g. a group_device_typeN table per device table).
You can create a
group_device view that's basically the same as the current
group_device table. You wouldn't have a true
id column (the other three columns should uniquely identify a row). You could find a way to create a unique ID across the actual
group_device_* tables if absolutely required, but that's a bit more work.
This does involve a certain amount of maintenance pain - when a new device type is added, you have to create two tables (device_typeN and group_device_typeN), plus you have to modify the group_device view.
Multiple columns in
group_device (don't use)
You could create the group_device table with a unique column for each possible device type (device_typeA_id, device_typeB_id, device_typeC_id, etc.). Then, add a trigger (as shown in this answer) to ensure that all but one of these values are NULL.
Maintenance requirements are similar to the first suggestion: when a new type is added, you have to create the
device_typeN table; add the
device_typeN_id column to the
group_device table, and modify your trigger.
However, this approach is generally frowned upon. You're trying to use the built-in foreign key mechanism - but you're also trying to work around it. The trigger's code will be repetitive enough that it should be easy to maintain - but it's also easy to screw it up and not notice. And then, you could wind up with rows with multiple ID columns filled in, which is likely to cause some confusion in joins and such (if not outright bad data). It's also easy to have a typo that results in assigning a single
id value to the wrong column, with similar confusing results.
I mention it more for the sake of completeness than for any other reason.
Use a "virtual" foreign key
(This is the option that Rick James' answer covers - refer to that for more detail).
Keep group_device as you have it defined now, where the foreign key is one column that identifies a table (
type_device) and one that identifies a row in that table (
device_id). Create code to confirm that the
device_id is a valid ID from the table in question, and to delete the appropriate
group_device row if the underlying device is deleted. It's possible this can be done in triggers (insert/update trigger on
group_device to validate new/changed data; delete trigger on each device table, to check for and remove any
group_device rows when a device is deleted).
Maintenance should simply be creating the delete trigger when you create the device table, and updating the insert/update trigger on
group_device to work with the new table.
If using this solution, I recommend that you either use the full table name for
type_device; have a
type_device table listing the
device_type* tables with an ID, and use that ID in
group_device; or (at a minimum) establish a strict naming convention for
device_typeN tables, so you know that
CONCAT('device_', type_device) will always yield the correct table name.
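To make the "virtual" foreign key concrete, here is a minimal sketch using SQLite (via Python's sqlite3) rather than MySQL; the table and trigger names are illustrative, and a real setup would also need an update-validation trigger and one delete trigger per device table:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE device_typeA (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE group_device (
    group_id    INTEGER NOT NULL,
    type_device TEXT    NOT NULL,  -- names which device table the row points at
    device_id   INTEGER NOT NULL
);

-- Insert validation: the referenced device row must exist.
CREATE TRIGGER group_device_ins BEFORE INSERT ON group_device
BEGIN
    SELECT RAISE(ABORT, 'no such device')
    WHERE NEW.type_device = 'device_typeA'
      AND NOT EXISTS (SELECT 1 FROM device_typeA WHERE id = NEW.device_id);
END;

-- Delete trigger on the device table: remove orphaned group_device rows.
CREATE TRIGGER device_typeA_del AFTER DELETE ON device_typeA
BEGIN
    DELETE FROM group_device
    WHERE type_device = 'device_typeA' AND device_id = OLD.id;
END;
""")

db.execute("INSERT INTO device_typeA (id, name) VALUES (1, 'thermostat')")
db.execute("INSERT INTO group_device VALUES (10, 'device_typeA', 1)")

rejected = False
try:
    db.execute("INSERT INTO group_device VALUES (10, 'device_typeA', 999)")
except sqlite3.DatabaseError:
    rejected = True  # the trigger aborted the insert: device 999 does not exist

db.execute("DELETE FROM device_typeA WHERE id = 1")
rows = db.execute("SELECT COUNT(*) FROM group_device").fetchone()[0]
# rows is 0: the cascade trigger removed the orphaned group_device row
```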
|
OPCFW_CODE
|
![Workshop photo](https://i.imgur.com/lLNvSIA.png)

### Openness and transparency: preprints and data sharing in research communication

Presentation (20 minutes): https://osf.io/zshgp/

1. Who we are at COS
2. Our strategies to make science more open
3. Using the Open Science Framework
4. Preprints

Link to meeting agenda: http://eventos.scielo.org/viireuniaoscielo/programa/

----------

## Workshop

Transparency in the scientific workflow in order to:

1. Accelerate communication
2. Strengthen authors' control
3. Improve the quality of manuscripts

On the occasion of the VII SciELO Annual Meeting, and taking advantage of the visit of David Mellor, PhD, Project Manager of the Center for Open Science, with whom the SciELO Program is implementing the SciELO Preprints server and a methodology for referencing data, methods, materials and code, we are organizing a workshop with David to better inform ourselves, deepen our knowledge, and discuss issues related to the editorial practices that SciELO is adopting as part of the implementation of its lines of action on professionalization, internationalization and sustainability, aligned with the practices of open science. The workshop will be in English and will take place on the same day as the VII SciELO Annual Meeting, from 2:00 p.m. to 4:30 p.m. at the FAPESP headquarters.

----------

## Workshop Agenda

Workshop handouts can be found on the files page: https://osf.io/k3re6/files

### 1. Tools

- How editors can use the Open Science Framework
- Review an "open research workflow" from study planning to publication.
- Specific actions that editors or researchers can take at each step:
  - Creating an account
  - [Making a project](http://help.osf.io/m/projects)
  - [Uploading a file](http://help.osf.io/m/files)
  - [Using view-only links for peer review](http://help.osf.io/m/links/l/524049-create-a-view-only-link-for-a-project)
  - [Using "non-bibliographic" contributors on a project](http://help.osf.io/m/projects/l/526695-edit-contributor-permissions)
  - [Marking a file as a preprint](http://help.osf.io/m/preprints/l/627729-share-a-preprint)
  - [Searching across many preprint servers](https://osf.io/preprints/discover)
  - [Moderating preprint submissions](http://help.osf.io/m/preprints/l/806116-submitting-to-a-preprint-service-that-uses-moderation)

### 2. Policies

- Transparency and Openness Promotion (TOP) Guidelines: https://cos.io/top (summary table: https://osf.io/56n89/)
  - Level 1: Disclosure
  - Level 2: Require
  - Level 3: Verify
- Data, code, and materials sharing
- Reporting guidelines
- Preregistration, including the $1,000,000 Preregistration Challenge
- [Registered Reports](https://cos.io/rr)
- Replication studies
- [Open Science Badges](https://cos.io/badges)

Link to workshop website: http://eventos.scielo.org/viireuniaoscielo/workshop-preprints/
|
OPCFW_CODE
|
HotRod ClientListener not getting an updated view of the HotRod servers?
I have 2 HotRod servers running in REPL_ASYNC mode. I am trying to connect to them using the HotRod client by providing the HotRod server addresses and ports.
I am trying to implement functionality similar to Near Cache. The reason for not using Near Cache is to avoid the RPC call; we want to have control over the remote calls done by Near Cache.
I implemented all the logic along with a notification listener.
For that, I am trying to attach a ClientListener to the RemoteCache and then take action on event notifications. It works as expected when all the servers are running, but the client does not get an updated server view when one of the HotRod servers is stopped or a new server is added. When I run the HotRod client without the ClientListener, I do get an updated view of the servers.
If anyone has an idea about this, please share; I have tried a lot of things with no success.
Update: I get an updated view whenever I do a get operation, but if no get operation is performed, I do not get an updated topology view.
Configuration used :
final ConfigurationBuilder configurationBuilder = new ConfigurationBuilder();
configurationBuilder.clustering().stateTransfer().awaitInitialTransfer(true);
configurationBuilder.clustering().stateTransfer().fetchInMemoryState(true);
configurationBuilder.clustering().sync().replTimeout(15000);
configurationBuilder.clustering().cacheMode(CacheMode.REPL_ASYNC);
configurationBuilder.dataContainer().compatibility().enable();
configurationBuilder.transaction().transactionMode(TransactionMode.TRANSACTIONAL);
configurationBuilder.storeAsBinary().enable().storeKeysAsBinary(false).storeValuesAsBinary(false);
configurationBuilder.jmxStatistics().enabled(true);
configurationBuilder.eviction().strategy(LIRS);
configurationBuilder.eviction().maxEntries(25000);
configurationBuilder.expiration().lifespan(-1);
Near cache, as functionality, has been available for Java Hot Rod clients out of the box since Infinispan 7.1. Have you tried using that?
Yes, in my initial solution I used near cache, but with that I don't have control over the RPC call.
I was able to resolve this issue by applying the workaround below; it might help someone having a similar issue:
In the HotRod server I had a default cache, and on top of that I had created my own caches. On the client side I attached the listener to my cache, but Infinispan internally was using the default cache when doing the ping operation to fetch the topology view. As I had attached the listener to my own cache rather than the default cache, it was not able to get the updated view on the ping operation.
To solve this I made one of my caches the default cache using the code below:
new HotRodServerConfigurationBuilder().defaultCacheName("cacheName")
I hope this will help someone who has a similar issue.
This is odd, can you post your server configuration? What Infinispan version are you using?
I have updated the question with the configuration used. I am using Infinispan version 7.2.5.Final.
Hmm, interesting. Could you try with latest Infinispan 9.2.0.Final and see if the same issue still happens there?
It might have been resolved in a later version, but I don't have the option to go beyond 7.2.5.Final since our project is on Java 7. I haven't tested with a higher version.
|
STACK_EXCHANGE
|
Late last year I got the opportunity to read an advance issue of Medieval Warfare, and since it was a chance to keep up to date with historical literature since graduation, I was delighted. A couple of issues were sent to W.U.HSTRY, and Lilly (W.U.HSTRY ruler) sent this one over to me as it kept in line with my interests as a historian, including art history, the Hundred Years War, and my curiosity about medieval weaponry. My initial reaction on receiving this issue, which was released in January 2017, was to enjoy how much effort has gone into the layout of the magazine and to wish I had the ability to draw medieval landscapes and images with such skill. The key theme of this issue was the ideology of man, specifically those of the lower orders of society, trying to act like God through violence and war in order to settle their respective scores, and the German Peasants' War of the sixteenth century was an apt choice to represent this theme throughout. The editor of Medieval Warfare, Peter Konieczny, gave a short introduction to this month's theme by identifying the main contributors to January's edition, such as the eminent medieval scholar Kelly DeVries, some of whose work I enjoyed reading during research for my undergraduate History degree. The heavier analytical treatments of the German Peasants' War are followed by the lighter-hearted tale of a cow stopping a siege.
I could sit here and analyse the whole magazine, but I thought it would be more suitable to choose the highlights. As expected from a magazine with warfare in the title, there is a strong tilt towards weaponry, armour, military tactics, and the roles the lords and peasants played against each other during the German rebellion. The first article, Kelly DeVries' 'Lucifer and his Angels', debates why peasants would revolt in the first place. The abstract introduces the Marxist opinion that peasant oppression by their lords meant that rebellion was always 'simmering'. DeVries initially states that peasant revolts were infrequent, of varying size, and never successful. This is a good start to looking into why and how the sixteenth-century German peasants revolted and why this is particularly interesting to medieval historians. Throughout the article are images of armour worn during the war and maps presenting the breadth of the revolt in the German provinces.
The next couple of articles include text by Erich B. Anderson, who looks at an army that swept through Upper Swabia in 1525, and Jean-Claude Brunner's 'Siege of Salzburg'. They both look in depth at specific episodes of German history within the different aspects of the Peasants' War. Another interesting part was an excerpt in Sidney E. Dean's article on the 'Knight of the Iron Hand', Götz von Berlichingen, where Dean looked specifically into the mechanics of Berlichingen's literal iron hands and whether they were efficient or useless in their role. Each article offers opportunities for further reading, which is useful for amateur and academic historians alike.
The best article, in my opinion, is Iason-Eleftherios Tzouriadis' 'Death, Violence and Sex', which looks into anti-war propaganda art created during the sixteenth century as a response to the wars encircling Europe in the late middle ages. This is of particular interest to the art geek in me. Art had seen only limited use as an aid in studying the military itself and its equipment. Tzouriadis references Hale's 1990 work Artists and Warfare in the Renaissance, which offers extra insight into how historians have started to critically analyse illustrations to inform their research. The article shows several examples to back up both Tzouriadis' and Hale's analysis.
It is always good practice as historians to look for parallels between the medieval and modern eras. Dahm looks into the socio-economic and political similarities between medieval Germany and 1850, when an eminent piece of medieval warfare scholarship was published. The last part of the magazine is dedicated to the Hundred Years War, as an increasing interest in the logistics of medieval warfare is appearing in historical literature, and to a weapon that never existed.
All in all, this is a fascinating issue that introduced an element of history I was unfamiliar with and happy to get acquainted with. The whole issue is 60 pages, packed with information, illustrations, and snippets of relevant detail. There is a coherency between the articles, with a strand on the role of peasants in history, the logistics of each revolt, war, rebellion, siege or catastrophe, and finally their representation in the media. I found nothing to argue with but a lot to research as a new interest to add to my bookshelves, and by the end of the magazine you will want to rewatch Monty Python and the Holy Grail (the subject of the last article).
Medieval Warfare can be bought at https://www.karwansaraypublishers.com/shop/medieval-warfare/subscriptions.html
|
OPCFW_CODE
|
Balancing optimization efforts across processes, technologies, and teams.
Greg Brown's new book, Programming Beyond Practices, is a thoughtful exploration of how software gets developed.
Watch highlights covering open source, open data, architecture, the business of open source, and more. From OSCON in London 2016.
Deciding whether to use microservices starts with understanding what isn’t working for you now.
Use smart pointers and move semantics to supercharge your C++ code base.
Discover lesser-known Python libraries that are easy to install and use, cross-platform, and applicable to more than one domain.
Learn the basics for setting up a continuous delivery pipeline in Jenkins, from modeling the pipeline to integrating the software.
This free webcast provides a quick lesson on accessing, storing, and updating relational data. Oct. 26, 2pm PT.
Learn how to get more out of your data with real-world examples using the powerful F# programming language.
Abraham Marín-Pérez explains how 10 coding guidelines can work in a real-life environment, considering the typical issues, together with the hidden traps that programmers can fall into when trying to apply them.
Erlang/OTP is unique among programming languages and frameworks in the breadth, depth, and consistency of the features it provides for scalable, fault-tolerant systems with requirements for high availability.
Learn Linux diagnostic and recovery tasks so you can jump in and fix a system problem when your site goes down.
Lukasz Langa uses asyncio source code to explain the event loop, blocking calls, coroutines, tasks, futures, thread pool executors, and process pool executors.
RxJava is a powerful library, but it has a steep learning curve. Learn RxJava by seeing how it can make asynchronous data handling in Android apps much cleaner and more flexible.
San Francisco, CA
Engineering the Future of Software
As the Internet of Things expands to include computers in our bodies and computers we put our bodies into, the question of open and closed takes on a new urgency.
Simon Wardley examines the issue of situational awareness and explains how it applies to the world of open source.
Solomon Hykes explores the container ecosystem and shares lessons learned from managing successful open source projects.
Rapper, singer, producer, and songwriter Sammus performs at OSCON in Austin 2016.
Mark Bates is the founder and chief architect of the Boston, MA based consulting company, Meta42 Labs. Mark spends his days focusing on new application development and consulting for his clients. At night he writes books, raises kids, and occasionally he forms a band and “tries to make it.” Ma...
After graduating with a M.Sc degree in Computer Science in 1998 at the Royal Institute of Technology Henrik Engström has been working as a consultant up until his Typesafe employment in 2011. Henrik ...
Carin started off as a professional ballet dancer, studied Physics in college, and has been developing software for both the enterprise and entrepreneur ever since. She has a strong background in Ruby...
“Hacking through a project will get it done, but learning the why and how of a technology gives you information that will have an impact beyond the current situation.”— Rachel Roumeliotis, Director of Content Strategy for Programming at O'Reilly Media
|
OPCFW_CODE
|
Angular momentum partial components of a $k$-dependent pairing potential
I am going over this review on pairing in unconventional superconductors: http://arxiv.org/abs/1305.4609v3
which on page 21 states that for a "regular" function $U(\theta)$, partial components $U_l$ of angular momentum $l$ scale as $\exp(-l)$ for large $l$. I tried to prove this statement but am not satisfied with my answer, and would greatly appreciate some insight. Below is what I did so far.
I assume here that "regular" means infinitely differentiable. The partial component $U_l$ is defined as:
$$U_l = \int_{0}^{\pi} U(\theta) P_l(\cos \theta) \sin \theta \, d\theta$$
such that
$$U(\theta) = \sum_{l=0}^{\infty} U_l P_l(\cos \theta),$$
where $P_l(\cos \theta)$ is the $l$-th order Legendre polynomial. Let us make use of Rodrigues' formula:
$P_l(\cos \theta) = \frac{1}{2^l l!} \frac{d^l}{dx^l} [(x^2-1)^l] |_{x=\cos \theta}$.
The highest-order term of this polynomial is $\frac{1}{2^l l!} \frac{(2l)!}{l!} \cos^l \theta$. So in the development of $U(\theta)$, the contribution to the term of order $\cos^l \theta$ coming from the $l$-th order Legendre polynomial is $ U_l \frac{1}{2^l l!} \frac{(2l)!}{l!} \cos^l \theta$. There are also other contributions to this order coming from the higher-order Legendre polynomials, but they will be proportional to some $U_k$ with $k>l$. As we want to prove that $U_l$ is exponentially small as $l$ gets large, we can neglect these contributions for now.
Let us now try to find an equivalent of $ U_l \frac{1}{2^l l!} \frac{(2l)!}{l!}$ as $l$ goes to infinity. We can make use of Stirling's formula :
$$l! \sim \left(\frac{l}{e}\right)^l \sqrt{2 \pi l},$$
which gives us
$$U_l \frac{1}{2^l l!} \frac{(2l)!}{l!} \sim U_l \, 2^l \frac{1}{\sqrt{l \pi}}.$$
If we want $U(\theta)$ to be a regular function, we need its high-order components in $\cos^l \theta$ to get smaller and smaller as $l$ gets large, as $\cos^l \theta$ behaves in a singular manner when $l$ goes to infinity. Thus, we need to have
$U_l \sim a^{-l}$, with $a>2$, as $l$ goes to infinity.
Why I am not happy with this answer:
I neglected higher-order components in the contribution to $\cos^l \theta$
Maybe the $U_l$ could behave in a complicated oscillating way to make the $l$-th order term converge, without being exponentially small.
Does anyone have an alternate way of proving the fact that $U_l$ has to be exponentially small as $l$ gets large, or a way to complete the above proof? Thanks for your help.
Just focusing on the $\cos^l\theta$ term is probably not going to get you anywhere, since $\cos^l\theta$, being a completely analytic function, is by no means singular (and following your argument you get exponential growth of $U_l$, not decay).

It is a fairly well-known fact that for analytic functions (this is what "regular" means: roughly speaking, the function is equal to its Taylor expansion. It is a stronger condition than being infinitely differentiable, and here we require analyticity in a finite region of the complex plane), the expansion coefficients with Legendre polynomials (also Chebyshev, etc.) decay exponentially. The proof is not that trivial; as you can see from the requirement of complex analyticity, it uses contour integrals. You can find the proof in many textbooks, for example
Philip J. Davis, Interpolation and Approximation, Dover, 1975
The proof for Legendre polynomials is at page 313.
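As a quick numerical sanity check (not a proof), one can project a sample analytic function onto Legendre polynomials and watch the coefficients fall off. A sketch assuming NumPy, with the arbitrary test function $U(x) = e^x$, i.e. $U(\theta) = e^{\cos\theta}$:

```python
import numpy as np

# Project U(x) = exp(x) onto Legendre polynomials with Gauss-Legendre
# quadrature and watch the coefficients decay.
x, w = np.polynomial.legendre.leggauss(200)  # nodes/weights on [-1, 1]

def coeff(l):
    Pl = np.polynomial.Legendre.basis(l)(x)
    # U_l = (2l+1)/2 * integral of U(x) P_l(x) dx, so that U = sum_l U_l P_l
    return (2 * l + 1) / 2 * np.sum(w * np.exp(x) * Pl)

c = np.array([coeff(l) for l in range(12)])
# c[0] equals sinh(1); successive coefficients fall off faster than any
# fixed geometric rate, consistent with exponential decay for analytic U.
```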
I meant $U_l \sim a^{-l}$ at the end of my post, not $a^l$, I just edited it. Thanks a lot for your reference, I'll look into it.
|
STACK_EXCHANGE
|
Things to Consider
- Convert pixels to percentages using the formula target ÷ context = result
- Carry the decimal of your result over two places to the right to get your percentage
- Do not round up your percentages, no matter how ugly or long the decimal is!
The grid that the Smells Like Bakin website was built on used the tool Gridulator to create a 1000px wide, 12 column grid. Each column is currently 65px with a gutter width of 20px.
Using the same overall width of 1000px, rewrite the grid to have each column instead be 54px wide with a gutter width of 32px.
Then, using the formula target ÷ context = result, convert the new column widths to percentages.
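As a sanity check on the exercise, here is a small Python sketch of the target ÷ context rule; note that 12 columns of 54px plus 11 gutters of 32px again fill exactly 1000px:

```python
# Recompute the 12-column, 1000px grid with 54px columns and 32px gutters,
# then convert each grid_N width to a percentage via target / context.
TOTAL = 1000        # context: overall grid width in pixels
COL, GUTTER = 54, 32

def column_width(n):
    # a grid_N element spans n columns and the n - 1 gutters between them
    return n * COL + (n - 1) * GUTTER

widths = {n: column_width(n) for n in range(1, 13)}
percents = {n: round(widths[n] / TOTAL * 100, 1) for n in range(1, 13)}
# e.g. grid_11 = 11*54 + 10*32 = 914px -> 91.4%
```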
[Website Waters Island 2: Stage 2: Creating a Fluid Foundation with Allison Grayce]
Hey guys and welcome back to Website Waters Island 2.
Let's review what we learned in the previous stage.
We learned how to use the formula target/context = result
to convert font sizes from pixels to EMs.
We also learned the basic differences between fixed, fluid,
adaptive, and responsive web design.
Finally, we learned that the key to a responsive website is building it on a fluid foundation.
That's just what we'll do.
In this stage, we'll convert the fixed grid you worked on in the previous Island
into a fluid grid. Let's get started.
At this point, all of your font sizes in the Smells like Bakin' website
should be converted to EMs if they aren't already.
You might not see a lot of change visually when you view the website in the browser,
but know that it's on its way to becoming a much more scalable and fluid website.
Now go ahead and open up the grid style sheet in your text editor
so we can convert the grid from a fixed to a fluid layout.
The Smells Like Bakin' website was designed at a total width of 1000 pixels.
These 1000 pixels were divided up evenly into a 12-column grid.
Each column is 65 pixels wide with a gutter of 20 pixels between them.
Designing around a grid, while it can be tricky at first to get used to,
is great, because it results in a solid visual and structural balance.
It improves legibility and hierarchy and provides us with the good kinds of constraints.
If we look at the CSS, these 12 classes are the widths of our columns in the grid.
Right now they're in pixels, and we need to change them to percentages.
Below our columns towards the bottom of the style sheet is our website's container div.
Currently, it's set to a width of 1000 pixels.
Now, this is a fixed width, which is fine,
but the problem is that this won't scale with the view port size.
So, we want to change this value to percentages so it scales.
Let's change this width to 90%.
There is no science or formula to this number.
It just makes sure that we have a little bit of buffer or padding between
the browser window and the content of our website.
We're also going to give it a max width of 1000 pixels,
so that our website doesn't continue scaling up past this size.
Now we're going to focus our attention back to our columns
and change the value of the largest column, grid_12, to 100%,
since it should take up 100% of the width of the container.
Now it's time to revisit our favorite formula: Target / Context = Result.
Focus on grid_11.
Using the column width 915 pixels as our target,
divide by the width of the entire website--1000 pixels,
which is our context--and we get 0.915.
Now, to convert this number to a percentage,
we need to move the decimal over two places to the right,
so our percentage width is 91.5%.
Consider taking note of the pixel width of the column to the right
by commenting it out with a forward slash and an asterisk,
and then closing it out with an asterisk and a forward slash, like this.
This can be helpful for those of us who are more comfortable with pixel values.
Let's do the same thing for grid_10.
830 pixels divided by 1000 pixels equals 0.83.
Carry the decimal over, and we get 83%.
Continue to work your way down the grid until they're all into percentages.
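The conversion the video walks through can also be scripted. A small Python sketch (the grid_N class names follow the grid stylesheet) that prints each column's percentage alongside its old pixel width:

```python
# Original fixed grid: 65px columns, 20px gutters, 1000px total width.
COL, GUTTER, CONTEXT = 65, 20, 1000

def pct(n):
    # a grid_N element spans n columns and the n - 1 gutters between them
    px = n * COL + (n - 1) * GUTTER
    return px, px / CONTEXT * 100  # target / context = result

for n in range(12, 0, -1):
    px, p = pct(n)
    print(f".grid_{n} {{ width: {p:.1f}%; }}  /* was {px}px */")
```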
Allison is a freelance web designer and UX consultant with experience working at small interactive agencies and large advertising firms.
|
OPCFW_CODE
|
How to refactor this code to obey the ‘open-closed’ principle?
The UML is listed below. There are different products with different preferential strategies. After adding these products into the shopping cart, the caller needs to call the checkout() method to calculate the totalPrice and loyaltyPoints according to its specific preferential strategy. But when a new product comes with a new preferential strategy, I need to add another else if. This currently breaks the open-closed principle.
class ShoppingCart {
    // ...
    public Order checkout() {
        double totalPrice = 0;
        Map<String, Integer> buy2Plus1 = new HashMap<>();
        int loyaltyPointsEarned = 0;
        for (Product product : products) {
            double discount = 0;
            if (product.getProductCode().startsWith("DIS_10")) {
                discount = product.getPrice() * 0.1;
                loyaltyPointsEarned += (product.getPrice() / 10);
            } else if (product.getProductCode().startsWith("DIS_15")) {
                discount = product.getPrice() * 0.15;
                loyaltyPointsEarned += (product.getPrice() / 15);
            } else if (product.getProductCode().startsWith("DIS_20")) {
                discount = product.getPrice() * 0.2;
                loyaltyPointsEarned += (product.getPrice() / 20);
            } else if (product.getProductCode().startsWith("Buy2Plus1")) {
                if (buy2Plus1.containsKey(product.getProductCode())) {
                    buy2Plus1.put(product.getProductCode(), buy2Plus1.get(product.getProductCode()) + 1);
                } else {
                    buy2Plus1.put(product.getProductCode(), 1);
                }
                if (buy2Plus1.get(product.getProductCode()) % 3 == 0) {
                    discount += product.getPrice();
                    continue;
                }
                loyaltyPointsEarned += (product.getPrice() / 5);
            } else {
                loyaltyPointsEarned += (product.getPrice() / 5);
            }
            totalPrice += product.getPrice() - discount;
        }
        return new Order(totalPrice, loyaltyPointsEarned);
    }
}
The knowledge which product type corresponds to which discount policy should not be coded into the checkout routine. You should represent this either in the product class (a product understands its own discount properties) or in a separate discount managing module (which understands both product and discount types).
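To illustrate that idea, here is a rough sketch in Python (for brevity; the class and registry names are made up for the example): the product-code-to-policy mapping lives in one registry, so the checkout routine stays closed against new policies.

```python
# Each policy knows its own discount and loyalty rules; checkout() only
# looks the policy up. Adding a product type = registering one new policy.

class PercentagePolicy:
    def __init__(self, rate, points_divisor):
        self.rate = rate
        self.points_divisor = points_divisor

    def discount(self, price):
        return price * self.rate

    def points(self, price):
        return int(price // self.points_divisor)

DEFAULT_POLICY = PercentagePolicy(0.0, 5)

# The registry replaces the if/else chain; checkout() never changes.
POLICIES = {
    "DIS_10": PercentagePolicy(0.10, 10),
    "DIS_15": PercentagePolicy(0.15, 15),
    "DIS_20": PercentagePolicy(0.20, 20),
}

def policy_for(code):
    for prefix, policy in POLICIES.items():
        if code.startswith(prefix):
            return policy
    return DEFAULT_POLICY

def checkout(products):  # products: list of (code, price) pairs
    total, points = 0.0, 0
    for code, price in products:
        p = policy_for(code)
        total += price - p.discount(price)
        points += p.points(price)
    return total, points

# A stateful policy such as Buy2Plus1 would carry its own per-checkout
# state object instead of a map living inside checkout().
```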
I took the liberty of improving the wording of this question. Please double-check that I got your intentions right.
In most reasonable real-world systems, I would expect the business people to be able to change the discount strategy, the loyalty points strategy, and the list of available products with their product codes, without asking a programmer for each new product. That, however, would require a more general approach. So please clarify: what is the expectation of the business here in your case? Or is this just a learning example for you?
There is no specific expectation. I just think this implementation is bad, so I want some advice. I have an idea: maybe I can make Product an interface and let the products with different preferential strategies implement the interface. Am I right? Do you have other advice?
@Abner: I think you did not get my point, because what to do depends a lot on the specific context of your case, which you did not tell us. But maybe my answer can clarify things.
@Abner we (almost everyone here) wonder if the refactoring is only limited to ShoppingCart (something I deem insufficient) or can be extended to Product.
Your if-branches are almost exactly the same; start by DRYing up your code. You'll get a parameterized method and no if-statements. Turn that method into an object (DiscountStrategy or something), and make the parameters into fields. Now either associate different instances with each product (product.getDiscountStrategy()), or create a factory (a simple static function) that takes a product, reads the product code and returns a discount strategy on the fly (inject the factory into the ShoppingCart class). I know this is rather condensed, but create a git branch and do an experiment.
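The comment's suggestion can be sketched in a few lines. This is a hypothetical condensation, not the OP's code: the near-identical branches collapse into one parameterized strategy object, and a small static factory maps a product code to an instance on the fly (the Buy2Plus1 policy is stateful and would need a richer strategy, so it is left out here).

```java
public class FactoryDemo {
    // The parameters of the former if-branches become fields of one object.
    public record DiscountStrategy(double discountFactor, double loyaltyDivisor) {
        public double discount(double price) { return price * discountFactor; }
        public double loyaltyPoints(double price) { return price / loyaltyDivisor; }
    }

    // Simple static factory: reads the product code, returns a strategy on the fly.
    public static DiscountStrategy forCode(String productCode) {
        if (productCode.startsWith("DIS_10")) return new DiscountStrategy(0.1, 10);
        if (productCode.startsWith("DIS_20")) return new DiscountStrategy(0.2, 20);
        return new DiscountStrategy(0.0, 5); // default rule from the original code
    }

    public static void main(String[] args) {
        DiscountStrategy s = forCode("DIS_20_TV");
        System.out.println(s.discount(100));      // 20.0
        System.out.println(s.loyaltyPoints(100)); // 5.0
    }
}
```

With the factory injected into ShoppingCart, a new product code only touches the factory (or the data it reads), not the checkout loop.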
@Abner have a look at the visitor and strategy patterns. These will help you decouple the various preferential strategies from the products. All of this is simpler and more flexible when the preferential strategies are first class citizens in your object model and the products are "ignorant" of them.
@Laiv no, not limited to ShoppingCart
I tried to write an answer and then I realised that I would just be writing one (of many possible) pseudo-solutions with very little value for the community. I also realised that, unless I can make the relationships Product - Discount and Product - Loyalty plan dynamic, I would just be moving those IFs from one side to another. In the end, I remembered a word @candied_orange uses a lot in these cases, which has worked for me many times: procrastination. Just don't resolve the discount and the loyalty earned too soon in ShoppingCart. Abstract it and solve the concrete details later.
@Laiv: moving those "if"s from one side to another can be meaningless or meaningful, depending on what those "sides" are and if there are different organizational units responsible for the "one side" or "the other". That is the whole point of my answer. Unfortunately, the OP did not give us any clue so far what the organizational units are in their case, and against whose "changes" the code should be closed, and for whose extensions it should be opened. Because of the missing response, I suspect they are just trying to treat the OCP as some braindead "best practice" without really knowing why.
The OCP is about allowing "team 1" to provide a black-box framework containing classes like Product, Order, and ShoppingCart, and "team 2" to change the list of products and the discount strategy without asking team 1 for a change in their code.
There are usually three major situations to consider:
"Team 2" is a second development team. "Team 1" has to provide "injection points" for team 2 where they can provide new discount strategies along with new products.
"Team 2" are the business people. They don't want to ask the programmers, but simply add new products and discount calculation rules at run time through some nice GUI or configuration files.
Team 1 and 2 are identical (there is no "team 2"). So whenever the business people add a new product or product code which does not fit to the current discount logic, they ask the devs of team 1 to implement this, for each and every minor change.
For case 1, there are several options. The goal is to allow team 2 to provide new discount rules without changing the original ShoppingCart or Product code. Team 1 may provide an abstract interface PriceCalculator, let the checkout method take an object of that type, and let team 2 pass a concrete PriceCalculator object as a parameter. Team 1 may let team 2 add new product codes into some list or database table. Alternatively, one could decide to let team 2 provide a list of DiscountStrategy objects as a parameter to checkout. Each of those DiscountStrategy objects could have a method Discount calcDiscount(Product), which returns null in case the strategy does not apply, or a discount object (with the discount and loyalty points) in case it applies. Then the checkout method can simply iterate over the DiscountStrategy objects and stop when the first call to calcDiscount(Product) returns something different from null.
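A runnable sketch of that case-1 chain, using the names assumed in the paragraph above (Discount, DiscountStrategy, calcDiscount); the checkout method would iterate exactly like discountFor does here:

```java
import java.util.List;

public class DiscountChainDemo {
    public record Product(String code, double price) {}
    public record Discount(double amount, double loyaltyPoints) {}

    public interface DiscountStrategy {
        // Returns null when this strategy does not apply to the product.
        Discount calcDiscount(Product p);
    }

    public static class PriceCalculator {
        private final List<DiscountStrategy> strategies;

        public PriceCalculator(List<DiscountStrategy> strategies) {
            this.strategies = strategies;
        }

        public Discount discountFor(Product p) {
            for (DiscountStrategy s : strategies) {
                Discount d = s.calcDiscount(p);
                if (d != null) return d; // stop at the first applicable strategy
            }
            return new Discount(0, p.price() / 5); // fallback: no discount, default points
        }
    }

    public static void main(String[] args) {
        DiscountStrategy dis20 = p -> p.code().startsWith("DIS_20")
                ? new Discount(p.price() * 0.2, p.price() / 20)
                : null;
        PriceCalculator calc = new PriceCalculator(List.of(dis20));
        System.out.println(calc.discountFor(new Product("DIS_20_X", 100)).amount()); // 20.0
    }
}
```

Team 2 only appends elements to the strategy list; ShoppingCart itself stays closed for modification.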
For case 2, one needs to parametrize all kinds of discount calculation rules, store those parameters in some database or file, and implement a generic PriceCalculator module which can evaluate those parameters (along with some UI for the business people to change them). In your contrived example, this looks pretty simple: introduce a parameter object with three attributes ProductCodePattern, DiscountFactor and LoyaltyPointsFactor. In reality, however, discount strategies will often be more complex, requiring different formulas for different product codes.
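The parameter object with the three attributes mentioned above might look like this. All names are illustrative, and in a real system the rule list would be loaded from a database or configuration file rather than hard-coded:

```java
import java.util.List;

public class ParameterizedRulesDemo {
    // One row of the rules table the business people would edit.
    public record DiscountRule(String productCodePattern,
                               double discountFactor,
                               double loyaltyPointsFactor) {}

    // Generic evaluation: the first rule whose pattern matches the code wins.
    public static double discountFor(List<DiscountRule> rules, String code, double price) {
        return rules.stream()
                .filter(r -> code.startsWith(r.productCodePattern()))
                .findFirst()
                .map(r -> price * r.discountFactor())
                .orElse(0.0);
    }

    public static void main(String[] args) {
        List<DiscountRule> rules = List.of(
                new DiscountRule("DIS_10", 0.1, 0.10),
                new DiscountRule("DIS_20", 0.2, 0.05));
        System.out.println(discountFor(rules, "DIS_20_TV", 50)); // 10.0
    }
}
```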
In case there is more flexibility required, the PriceCalculator could be some small "DSL" (domain specific language)" interpreter, and the business people specify their discount rules using that DSL.
For case 3, applying the OCP is probably overdesign and not necessary. Nevertheless, it may be a good idea to extract the price and discount calculation from the shopping cart into some PriceCalculator class, for increased testability. Since the discount calculation depends mainly on the product, it probably fits better into the Product class itself. But beware: inheriting from the Product class and providing a different discount calculation rule for each subclass is most probably the wrong approach here.
So in short, for applying the OCP most sensibly, one needs to know the organizational context. The OCP is not an end in itself, you need to know which parts of your code should be open for changes to whom, and which should be closed to whom, and for what purpose.
Let's forget the "Open-Closed Principle" for one second and concentrate on what you actually want. As far as I can tell, you have an implicit "non-functional" requirement, that adding a Product should be fast and not require modification of other code.
Ok, that "obviously" means that code that you would add to the checkout should be in the Product. This would mean the Product should take a little bit of "responsibility" instead of being just data.
To do this, you'll need to come up with proper behavior of the Product, maybe a checkout method in the Product itself? This would work unless you have policies which apply to the cart itself and not to individual products.
I suspect the OP wants adding of new products not even require the modification of code in the Product class, but simply adding a new product record in some database. But without getting more context from the OP, it is hard to tell.
In addition to not being consistent with the "Open Closed Principle", Order has feature envy, knowing too much about a product.
To address both from Order's point of view, give Product its own getDiscount and loyaltyPointsEarned methods. As a result, Order is now closed to Product changes.
class ShoppingCart {
    // ...
    public Order checkout() {
        Map<String, Integer> buy2Plus1 = new HashMap<>();
        int loyaltyPointsEarned = 0;
        double totalPrice = 0;
        for (Product product : products) {
            totalPrice += product.getPrice() - product.getDiscount(buy2Plus1);
            loyaltyPointsEarned += product.loyaltyPointsEarned(buy2Plus1);
        }
        return new Order(totalPrice, loyaltyPointsEarned);
    }
}
And then Product would be instantiated as
Product product = new Product("DIS_10", .1, 10);
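One way that constructor could be backed is sketched below. The field names are assumed, the price is fixed just to keep the sketch runnable, and the stateful Buy2Plus1 bookkeeping is omitted for brevity:

```java
import java.util.HashMap;
import java.util.Map;

public class ProductDemo {
    public static class Product {
        private final String code;
        private final double discountFactor; // e.g. .1 for "DIS_10"
        private final double loyaltyDivisor; // e.g. 10: one point per 10 currency units
        private final double price = 100;    // fixed here just for the sketch

        public Product(String code, double discountFactor, double loyaltyDivisor) {
            this.code = code;
            this.discountFactor = discountFactor;
            this.loyaltyDivisor = loyaltyDivisor;
        }

        public double getPrice() { return price; }

        // The map is passed in so stateful policies like Buy2Plus1 could keep counters.
        public double getDiscount(Map<String, Integer> buy2Plus1) {
            return price * discountFactor;
        }

        public int loyaltyPointsEarned(Map<String, Integer> buy2Plus1) {
            return (int) (price / loyaltyDivisor);
        }
    }

    public static void main(String[] args) {
        Product product = new Product("DIS_10", .1, 10);
        System.out.println(product.getDiscount(new HashMap<>()));         // 10.0
        System.out.println(product.loyaltyPointsEarned(new HashMap<>())); // 10
    }
}
```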
The only variable here is the discount. So if you factor that out by having some discount provider/selector class, that hopefully gets values from some outside maintainable data store, then this class should not change again because of it.
Most online shops have a separate PriceCalculationService that calculates prices depending on order quantity (quantity discounts), current stock level, voucher codes, and so on.
This obeys the open-closed principle: if you change the price calculation strategy, there is no need to change the ShoppingCart implementation.
|
STACK_EXCHANGE
|
She started by distinguishing two perceptions of time: linear and pragmatic. Linear time is what we usually envision time to be - a straight line from the past into the future. There are a lot of problems with this view and we spent some time talking about them as a group. I thought we got bogged down in this area a bit. Then she introduced pragmatic time that incorporates a multitude of different paths from the past into a multitude of future paths. Getting away from the single line of progress seemed to be the ultimate upshot and was a useful message to promote.
Adaptive action is the core of her work and she summarized a simple three step process to help analyze situations.
First, what patterns do you observe? She offered three possible meta-patterns to classify observations: organized, self-organizing, and unorganized. Organized patterns appear familiar, predictable, reducible, replicable, stable, etc. Self-organizing patterns are constantly changing, irreducible, not replicable, emergent, and interactive: a familiar whole with surprising parts. Unorganized patterns are like a hot gas: constantly surprising, totally ambiguous, unpredictable, and unstable. This reminds me of classifying cellular automata à la Stephen Wolfram, from whom she has no doubt borrowed some ideas, although she didn't mention him.
Second, so what does the situation demand? How should we intervene? Again she used the same typology for three types of intervention. If more control and predictability is needed then an organized intervention such as policy, procedures, team building, visioning, clear goals, branding, or six sigma will be appropriate. A more active response prompts self-organizing interventions: increase or decrease control, stand and watch, or jump in and play. Unorganized interventions such as telling stories, collecting histories, gathering data, anxiety containment, relationship building, or enjoying innovation may promote more random exploration.
Third, now what will you do to shift the conditions for self-organizing? You can change the:
- Containers that hold the system together until patterns form. Greater organization could come from fewer, stronger, or smaller containers; less organization from more, weaker, and larger.
- Differences establish the pattern and build tensions to motivate change. Again organization can be fostered by having fewer, clearer, and smaller differences. The reverse would lead to more, fuzzier, and larger differences.
- Exchanges connect agents together within the container and across differences. Tighter or looser exchanges move along the organized/unorganized continuum.
I liked the whole presentation. A lot of good food for thought. The container, differences, and exchanges typology sounded particularly interesting.
I had some questions:
- How do you mediate between group and individual perspectives on pragmatic time? The number of histories that need attention grows quickly as the number of group members grows. This led me to thinking about Dunbar's number and expanding the boundaries of our awareness beyond biological limits. The same goes for working memory, 7 ± 2.
- I was also reminded of the problems Anthony Giddens raises in The Consequences of Modernity regarding trust and self-reflexivity. Complexity analysis is good but hard to do in conditions of bounded rationality.
|
OPCFW_CODE
|
We work only with properly qualified authors who have all the essential skills to prepare school writing of high quality. It does not matter to us whether you need a simple everyday essay or a PhD thesis. They've all been there themselves! If you remembered that you have to do a paper for tomorrow, call us right now and ask us to create a paper for you. We also supply critical services that you might need to use at least once in the course of studying at high school, university or college. You can even produce your university or college admissions essay or improve your résumé.
It is quite natural to resolve to find someone who can reliably represent you in these situations.
Is it illegal to start a homework help website?
Write-my-homework services are supplied at inexpensive prices to students from around the globe, so that every scholar can enjoy such a fantastic online writing help service. Our highly trained staff is always present to help you through the full procedure, making sure that you are kept informed while your homework is being done. We have the solution for you! Academic papers that are delivered promptly, and 5homework solutions that are totally free from plagiarism and copy-pasted work, are considered ideal.
Programming has become one of the most important pillars of finance, work, and entertainment. Corporations ask for our support with Microsoft Office based jobs where familiarity with VBA or PowerShell is essential. Our Python programming assignments are solved by skilled programmers who are well versed in this kind of programming. With their 24-hours-a-day, seven-days-a-week online customer care, you can get in touch any time you want for help. Students frequently struggle with the class because they don't possess the basics of programming, leading to poor results.
DO YOUR Homework
Do you find geometry problems too hard? Students need to learn the proofs for many geometry theorems, since this is an enormous part of geometry. It's an innovative solution in the shape of an online academic writing service focused on solving geometry, algebra, statistics, and IT problems. Besides, we offer help with all geometry-related proofs, proper in-text illustrations and adequate referencing, whenever needed. Many students are industrious, but their professors frequently overload them with assignments. The result of our dedicated work is a massive number of satisfied customers who have used us for more than two years.
Do you need professional school help in ALGEBRA HOMEWORK?
Our platform provides you with the security of your work. In that connection, our website enjoys repeat economics customers who come back every so often. The professors can assign essays, case studies, assignments and even dissertations as homework. Economics deals with a multitude of topics at both microeconomic and macroeconomic levels. With these few words, our portal Your Homework Help will move ahead to help you. Whether the difficulty is in math, science, or English, we provide immediate homework help to make a demanding class simpler and much more manageable. Get pre-algebra homework help with 5Homework, where the mutual interaction between a teacher and a student is constant.
With a group of trained expert authors and our round-the-clock customer care service, we make certain that whenever you seek algebra homework help from us, we are working our hardest to produce the greatest results for you. That's why their thoughts keep circling around ways to complete the math assignments they're given.
|
OPCFW_CODE
|
Software Engineering Intern
Manhattan, NY | Fall 2017
- Building software infrastructure for the Financial Analytics & Verticals team
- Deployed Cassandra and Spark on Kubernetes enabling several teams to more easily scale database clusters as a part of the internal database-as-a-service system
- Reduced up to 55% of CPU and RAM requirements without performance loss by leveraging Kubernetes to share idle resources and by co-locating processes to reduce network latency
- Automated deployment of Kubernetes clusters integrated into the Bloomberg ecosystem
Software Engineering Intern
Cupertino, CA | Winter 2017
- Enabled a 30% increase in automatic data processing, resulting in $200,000/year in savings, by building a data entry interface for product reliability testing in overseas factories
- Improved data processing speed by 50% by rearchitecting the ETL pipeline to use Spark
- Designed a machine learning classifier (SVM) to adjust reliability specs based on product returns
Production Engineering Intern
Ottawa, Canada | Summer 2016
- Mitigated the impact of upstream dependency outages by proposing and developing a reverse proxy using NGINX and Chef to cache dependencies
- Built several features for the developer automation tool to improve development workflow
- Architected a package management system to modularize functionality for Shopify’s developer automation tool, built using Ruby and Bash
Calgary AB / Waterloo ON | Summer 2015 – Summer 2016
- Led a small team of developers to build a web app for scraping and classifying job postings
- Implemented a naïve Bayes classifier to automatically categorize scraped jobs
- Raised $40,000+ in an initial funding round and published 3,000+ postings
Data Engineering Intern
Waterloo ON | Summer 2014
- Created a data processing tool using Python to sanitize millions of rows of oil pipeline data
- Built dashboard graphs and visuals in D3.js to plot processed data
Published on Velocity Site
Project Source Code
- exploration of consensus protocols
- video synchronization using WebRTC and an implementation of the Paxos algorithm
- git hooks are used to pre-compile static html using ruby
- custom designing the html and css for the site, repetitive components were modularized
- content is defined using
- converts file trees of markdown files to a website
- general crawling tool that emulates a browser, recursively following links using in-order traversal and fetching their assets
- parses and uses HTTP request headers, and stores cookies for authentication
- processes and inserts into SQL data stores, or zips locally for offline browsing of web pages
Ruby on Rails Application that implements a voting based system for crowd-sourced answers and collaboration on practice exams and homework.
- Conway's Game of Life is implemented on the 128x32 LCD screen, written in C
- includes sensor integration and hardware output
bmp files into pixel arrays for viewing on Texas Instruments' LaunchPad microprocessor using the Orbit BoosterPack hardware add-on
- generates a solution using pattern recognition, smart permutation checks and primitive algorithms
- graphically displays the cube's geometric net
- this program currently generates solutions of approximately 100 quarter turns
Parses and displays polynomial functions, through a command line interface. Able to factor any polynomial and do arithmetic operations on polynomials.
- user can input an enemy lineup of up to 5 heroes and Dota will return a list of all remaining heroes in order of statistical success against the input team
- scrapes aggregated data over millions of data entries for winrate statistics
- analyzes matchups for each hero and suggests good hero matchups
Computes the farthest pair of points in a set of points, implementing convex hull finding and anti-podal analysis.
Interactive Mandelbrot set generator with corresponding Julia sets, using a complex number implementation. The view is zoomable, displaying closer snapshots of the sets. They are displayed using an RGB color scale.
The project titled "The Germ-inator" examined the comparison of natural and synthetic anti-microbials. An innovative technique was used to gather precise measurements for bacterial reduction. By taking digital photographs of a petri-dish, a pixel to mm ratio could be determined. Then, specific fine measurements on the dish could be determined through digital analysis.
Energy Innovation Award
Sponsored by Shell; awarded for winning team in case study competition.
Community Volunteer Scholarship
This award was for outstanding contributions to the community of Laurelwood through involvement with the Laurelwood Neighbourhood Association.
Awarded to students applying to the University of Waterloo with an average of 95% or higher.
3x Gold Medalist
Canadian Computing Competition Certificate of Distinction (Top 25%)
Winner of Waterloo Region Engineering Robotics Competition
DECA Silver Medalist
2X Award of Merit, Waterloo Wellington Science and Engineering Fair (WWSEF)
2X Best of Division Award, WWSEF
Sir Isaac Newton Award, WWSEF
The Ontario Ministry of Research and Innovation Stepping Stone Award, WWSEF
|
OPCFW_CODE
|
Top forms on product manifolds come from top forms on factors
Let $M^n$ and $N^m$ be oriented manifolds and give $M\times N$ the product orientation. Let $\pi_1:M\times N\to M$ and $\pi_2:M\times N\to N$ be the projections onto each factor. I am asked to show that every compactly supported $(n+m)$-form $\chi$ is of the form $\chi=h\,\pi_1^*\omega\wedge\pi_2^*\eta,$ where $h:M\times N\to\mathbb R$ is smooth, and $\omega,\eta$ are top forms on $M$ and $N$ respectively.
This seems like it should be straightforward, but I'm stuck in a rut. I want to just do it locally and paste the answers together, but it's not working. I cover the support of $\chi$ by charts $U\times V$ on which $\chi=h_{U,V}\,d\bar x_U^1\wedge\cdots\wedge d\bar x_U^n\wedge d\bar y_V^1\wedge\cdots \wedge d\bar y_V^m$ (here if $(U,x)$ and $(V,y)$ are charts on $M,N$ respectively, then $(U\times V,\bar x\times \bar y)$ denotes the induced chart on the product). I then take partitions of unity $\Phi$ and $\Psi$ on $M$ and $N$ respectively, which are both subordinate to this cover, and write $$\chi=\sum_{U,V}\phi_U\cdot\psi_V\cdot h_{U,V}\,d\bar x_U^1\wedge\cdots\wedge d\bar x_U^n\wedge d\bar y_V^1\wedge\cdots \wedge d\bar y_V^m.$$ However, I can't break this sum up in a way that will get $\chi$ into the form I want. Any suggestions for how to save this solution, or should I try some other method of attack?
The trick is that you can actually work globally instead of locally; all the local-to-global work can be done just on $M$ and $N$ themselves using their orientability. So, just let $\omega$ and $\eta$ be any nowhere vanishing top forms on $M$ and $N$ (these exist since $M$ and $N$ are orientable), so $\pi_1^*\omega\wedge \pi_2^*\eta$ is a nowhere vanishing top form on $M\times N$. Any other top form is then a scalar multiple of $\pi_1^*\omega\wedge \pi_2^*\eta$ at each point, and thus can be written in the form $h\pi_1^*\omega\wedge \pi_2^*\eta$ for some smooth function $h$.
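To spell out the pointwise step in the last sentence: at each $p \in M\times N$, the space $\Lambda^{n+m}\big(T_p^*(M\times N)\big)$ is one-dimensional, and $(\pi_1^*\omega\wedge\pi_2^*\eta)_p$ is a nonzero element of it, so there is a unique scalar $h(p)$ with $$\chi_p = h(p)\,\big(\pi_1^*\omega\wedge\pi_2^*\eta\big)_p.$$ This defines $h$ as a function on $M\times N$; that it is smooth is then checked in charts.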
(If you want $\omega$ and $\eta$ to be compactly supported as well, then you can choose them to have large enough supports so that $\pi_1^*\omega\wedge \pi_2^*\eta$ is still nonzero on the entire support of $\chi$.)
This answer is perfect. Very simple. Can you remind me why $h$ is forced to be smooth here? I guess it's just true that for a fixed top form $\omega$ every other top form can be written as $f\omega$ for some smooth $f,$ but I don't see exactly why.
Just look in local coordinates; both top forms can be written as some smooth functions times $dx_1\wedge\dots\wedge dx_n$, so you can just divide those smooth functions to get $h$ (and you aren't dividing by $0$ since the one you're dividing by is nonvanishing).
Great. Thanks a ton. You've been a great help.
|
STACK_EXCHANGE
|
Chris Mary wrote:Hi,
Below is the code
The problem is that after the image is written, the font size is small when I open it
Brian Cole wrote:Math.floor() is your friend.
[edit: Many edits. Replaced the original naive implementation with an ugly-ish one. Fought with the syntax highlighter.]
This does detect, to 3 places, that:
3.1756 is the same as 3.175
-3.1756 is the same as -3.175
0.0 is not the same as Double.NaN
2.3e304 is not the same as 2.4e304
Campbell Ritchie wrote:Agree. I hope there isn't some lecturer giving their students that sort of question.
Tim Holloway wrote:The whole question is bogus. . . . .
Harold Tee wrote:Methinks the string approach is simpler.
Mind you, you still have that age-old problem with math operations which amplify the storage precision,
e.g. isEqual((2.3 / 10.0), 0.23) will not be equal.
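A hedged sketch of such an isEqual (the method name is just the one used in the quote above): both operands are rounded toward zero to 3 decimal places via a BigDecimal built from the double's decimal string, which reproduces the comparisons listed earlier in the thread.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DoubleCompare {
    // Compare two doubles to 3 decimal places, truncating toward zero.
    public static boolean isEqual(double a, double b) {
        if (Double.isNaN(a) || Double.isNaN(b)) {
            return false; // NaN compares unequal to everything, including itself
        }
        BigDecimal x = new BigDecimal(Double.toString(a)).setScale(3, RoundingMode.DOWN);
        BigDecimal y = new BigDecimal(Double.toString(b)).setScale(3, RoundingMode.DOWN);
        return x.compareTo(y) == 0;
    }

    public static void main(String[] args) {
        System.out.println(isEqual(3.1756, 3.175));    // true
        System.out.println(isEqual(-3.1756, -3.175));  // true
        System.out.println(isEqual(0.0, Double.NaN));  // false
        System.out.println(isEqual(2.3 / 10.0, 0.23)); // false: 0.2299... truncates to 0.229
    }
}
```

The last line shows exactly the precision-amplification problem from the comment: 2.3 / 10.0 is stored as 0.22999999999999998, which truncates down to 0.229 rather than 0.230.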
Campbell Ritchie wrote:No, the cast rounds towards 0, so you get 5678.
value [meaning the property returned by getValue()] is the most recent valid content of the field, which may not be the current content of the field if an edit is in progress. Because of this, it is usually wise to call the field's commitEdit() method before calling getValue().*
*Especially if the focusLostBehavior property is set to PERSIST. In other cases it is likely that the field lost focus (causing its value to be committed or reverted) when the user clicked on the button or other GUI element that resulted in this call to getValue(). Calling commitEdit() is easy enough that it makes sense to do it unless you're sure that it isn't required.
Pressing the mouse on top of a button makes the model both armed and pressed. As long as the mouse remains down, the model remains pressed, even if the mouse moves outside the button. On the contrary, the model is only armed while the mouse remains pressed within the bounds of the button (it can move in or out of the button, but the model is only armed during the portion of time spent within the button). A button is triggered, and an ActionEvent is fired, when the mouse is released while the model is armed - meaning when it is released over top of the button after the mouse has previously been pressed on that button (and not already released).
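The armed/pressed life cycle can also be driven programmatically, which makes the rule easy to see: the ActionEvent fires when "pressed" ends while the model is still armed. A small sketch:

```java
import javax.swing.DefaultButtonModel;
import javax.swing.JButton;

public class ButtonModelDemo {
    public static void main(String[] args) {
        System.setProperty("java.awt.headless", "true"); // allow running without a display

        JButton button = new JButton("OK");
        button.addActionListener(e -> System.out.println("triggered"));

        DefaultButtonModel model = (DefaultButtonModel) button.getModel();
        model.setArmed(true);    // mouse pressed inside the button's bounds
        model.setPressed(true);  // mouse still down
        model.setPressed(false); // released while armed -> ActionEvent fires
    }
}
```

Had the mouse left the button before release, setArmed(false) would precede setPressed(false) and no event would fire.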
Tim Holloway wrote:I ended up getting a big fat O'Reilly book on Swing and discovering the Sun Swing tutorials. Hopefully the book has been updated, since I just checked and it's for Java version 1.2!
Campbell Ritchie wrote:That could have shown return this; or return new Test3();
By the way: which section in the JLS?
Campbell Ritchie wrote:You can still write rubbish like...and it will both compile and run with the expected result.
it would have been clearer to just write Animal.staticMethod(...) and be done with it
Ricky Bee wrote:but let me rephrase my previous question (and steer it a little from the inheritance of a static method problem which originated it in the first place):
Ricky Bee wrote:is such a structure a common-place in Java programing? I mean, is it normal to instantiate an object from a superclass and then pass it an object from a subclass?
|
OPCFW_CODE
|
How to use approval-tests WinMergeReporter w/ ncrunch
I've been using approval-tests for a while with the WinMergeReporter and it is working well with the standard NUnit runner executable.
I am trying out NCrunch and the approval.Verify fails (as expected) for a new approval.
However, WinMerge does not startup.
I get the failure
ApprovalTests.Core.Exceptions.ApprovalMissingException : Failed Approval: Approval File "...\mytest.approved.txt" Not Found.
at ApprovalTests.Approvers.FileApprover.Fail()
I can run the same code in the NUnit runner and WinMerge starts up.
What's the secret sauce for NCrunch to bring up the WinMergeReporter?
There is @remco-mulder, the author of [nCrunch]; maybe he could help you.
And @llewellyn-falco, the author of [ApprovalTests], as well.
This is actually by design, as having winmerge pop up every time NCrunch fails gets very annoying very quickly. Especially as it steals focus.
However, here's why it works and how to change it, if you so desire (you can always change it back)
Approval Tests has a MultiReporter system that front-loads a reporter from the assembly to implement the Gang of Four "Chain of Responsibility" pattern.
It will act as if there is a
[assembly: FrontLoadedReporter(typeof(NCrunchReporter))]
This does not actually have to be there; Approval Tests will assume it as the default if nothing is actually present.
So if you wanted to turn it off you could just do
[assembly: FrontLoadedReporter(typeof(AlwaysFailingReporter))]
Except that reporter doesn't exist (although it would be trivial to make one :-)
So you might just want to do
[assembly: FrontLoadedReporter(typeof(WinMergeReporter))]
Happy testing!
Thanks for the quick response. You are correct that I probably don't want the WinMergeReporter running all the time, but there are times it comes in handy to have this 'just work'.
On my dev machine, I've set up WinMerge to use an running instance (setting is inside WinMerge) and I have WinMerge open on a different monitor. So, if I get a lot of approvals, they all line up within one WinMerge session as different tabs. So far, it works pretty well
Nice, I didn't realize WinMerge had that option, I'll have to try that out.
It wasn't completely clear to me, so I'll mention that this goes in the AssemblyInfo (or any file) of the test assembly, not the SUT. WinMerge also has a single-instance switch, which you may want to enable or disable as you prefer. If you uncheck it and you have multiple failures, you'll get one WinMerge window for each mismatch.
|
STACK_EXCHANGE
|
New @ChildRequest model injector
Added new @ChildRequest annotation to substitute for @ChildResource when a child model object requires a SlingHttpServletRequest to adapt from.
Included a refactor of the tests for @SharedValueMapValue to bring in line with the structure of other annotation tests.
Pull Request Test Coverage Report for Build 4524
87 of 94 (92.55%) changed or added relevant lines in 3 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage increased (+0.1%) to 53.014%
Changes Missing Coverage | Covered Lines | Changed/Added Lines | %
bundle/src/main/java/com/adobe/acs/commons/util/OverridePathSlingRequestWrapper.java | 24 | 27 | 88.89%
bundle/src/main/java/com/adobe/acs/commons/models/injectors/impl/ChildRequestInjector.java | 52 | 56 | 92.86%
Totals
Change from base Build 4514: 0.1%
Covered Lines: 13570
Relevant Lines: 25597
💛 - Coveralls
Can we rather aim to fix this in Sling?
https://issues.apache.org/jira/browse/SLING-8279 is another sling issue that could be resolved to address the issue that the new @ChildRequest annotation is solving, but that would be much more far reaching. I'm not opposed to this being fixed in Sling, but given that SLING-5726 has been open for years and SLING-8279 has far reaching implications, my thought is there is no harm in this annotation being made available in ACS Commons in the meantime, and deprecating it later if Sling eventually has a solution.
And I know @ChildRequest may seem a bit of an odd name, but it's named as such since it is analogous to @ChildResource but instead acts on a Request instead of a Resource.
BTW, the reason I'm pushing for this in ACS Commons is:
This is necessary for any codebase currently using WCM Core components where models have children to inject as models.
This is leveraged by an AEM component code generator that Bounteous has open sourced at https://github.com/Bounteous-Inc/bounteous-aem-commons/tree/master/aem-component-generator
Can we not use the @ChildResource annotation and implement another Injector that would support the Sling request? With a different service.ranking? Just thinking out loud here :)
@niekraaijmakers perhaps - the logic is already all in the ChildRequestInjector class, which could potentially be coded to handle cases of @ChildResource, but to me it seems safer to have a dedicated annotation since it is adapting a request instead of a resource. However, I'm open to discussion on it if we see a fundamental problem with this new annotation/injector.
@justinedelson @davidjgonzalez WDYT?
I'm leery about "replacing" the @ChildResource annotation. If this is changed in the future then there is an onus on this project to update, and it also creates confusion since it's not obvious to developers which impl is being used (i.e. a new developer on the project googles @ChildResource, finds Sling's docs, but it's not behaving the way it's documented... confusing).
FWIW the name @ChildRequest was unclear to me... it made it sound like there are multiple HTTP requests, which doesn't make sense :).. IIUC what this is really doing is more @ChildResourceWithRequest ... where that Resource is adaptable from either a Request or Resource? .. ultimately, the intent here is to return a collection of Sling Models, adapted from either a Resource or Request, right? so maybe @ChildResourceAsSlingModel or @ChildResourceAsRequestAdaptableSlingModel for the verbose...
Totally cool with @ChildResourceWithRequest. Will update and address other review comments when back in the office next week.
Looks good - I'm curious more than anything about when non-existing resources would be used, and/or when the requestPathInfo resourcePath is relative. Would be good to make a note of why in the comments so someone in the future doesn't try to "clean it up" by accident.
The log.warn in adaptTo still seems applicable since your code is deciding to return null in one condition. (All others are delegated as you note.) Code looks good tho!
Thank you!
Thank YOU!
Just for the record: I opened https://issues.apache.org/jira/browse/SLING-9674 for having such a capability in Sling some day.
|
GITHUB_ARCHIVE
|
Saturday, January 13, 2007
Arson, Algebra, Spongy Chicken
Last week I shared just a little bit of my college writing, but tonight I'm taking you all the way back to my senior year of high school. Since I have an entire notebook and then some to work from, I might do this once a week or so.
AP English Journal
Week of August 16 - August 22, 1998
Today in class all of the talk about the psycho guy in the book made me recall my crime of choice. I decided a while back that if I had to be a criminal I would set stuff on fire. I like to watch stuff burn. I'm not really insensitive enough to set someone's home on fire, though. That would be a lot more fun than selling drugs, stealing or killing.
I'm always telling people I'm not a math and science person, I'm an English and social studies person. This is still true; however, somehow I'm actually finding Algebra II more interesting than Government. That's pretty bad. Government is almost as bad as Physics, but not quite. I don't know what should be done about Clinton. I don't want to write to a congressman once a year, let alone once a grading period.
Week of August 23 - August 29, 1998
Since I haven't been struck by any better ideas, I think I will build on last week's stuff. First, however, I would like to introduce my new system for referring to people. I'm going to use variables, like in algebra. (I'm the last person I expected to pull math into this.) I'll say things like "friend g", "classmate m", "co-worker c", "friend/enemy k", and so on. I hope this will not be confusing.
Today I have a lot of questions from lunch. I do not, however, have even one answer. How do they really make chicken nuggets and strips? Why do the chicken strips look like old sponge? Why do the chicken nuggets look like soft, new sponge? Why is the cafeteria rest room so small? Why are the eating utensils placed in the line before the place where you decide what to eat? Why did I try to walk off without my change? What was I thinking? Why do I only really need a napkin when I forget to get one? Why did they make the holes in the top of the trash cans so small? Why does the new water and juice machine say Pepsi?
By the way, thanks to everyone who's been checking in, even though I haven't actually posted in a week!
Labels: old school
|
OPCFW_CODE
|
|Perl: the Markov chain saw|
Thank you ELISHEVA for putting this together, and ++.
I can't offer any doctrine but I do have some personal views on some of the ideas we're exploring here.
First and foremost, I vehemently disagree with those who feel we should change the "look and feel" of the Monastery, in pursuit of something "flashy" or "modern." "Don't fix what ain't broke" is valid advice to those who concern themselves solely with appearance. Being au courant is perhaps a good thing if one is peddling widgets, but it requires an unnecessary investment for a community dealing in knowledge, like this one. Further, being "up-to-date" (another phrase often used by advocates of a new look) is transitory. Today's up-to-date is tomorrow's passe.
And if "up-to-date" or "modern" means AJAX, Flash, fancy graphics, and so on, using those almost guarantees longer download times; not a major issue for those with reasonably high speed internet access, but a huge stumbling block for users who rely on (bottom-tier) satellite or dialup.
OTOH, I strongly agree that "content is king."
We might make PM's content a bit more regal, if we develop a mechanism for hiding worthless or off-topic commentary on threads of more than some agreed-upon age (30 or 60 days?), so that the reader of any older thread could be confident that what's initially presented is, by consensus of the Saints or some similar group, the gold standard answer(s). It probably wouldn't take a lot more effort to give visitors the per-thread option of also viewing ALL the nodes in the initial thread, but even that would not be trivial. Overall, though, such a mechanism would be complex to develop, even were we to come to a consensus that this is a "good idea" and consensus on who gets to rate something as "gold standard."
Markup? I suspect (but can't prove) that a large proportion of our newest users are more apt to have some passing acquaintance with .html markup than with POD markup.
Lack of markup: I suspect the only reliable way to cure the appearance of hopelessly unformatted nodes is to develop a procedure for distinguishing among code, data, and narrative and to then invoke that procedure at the "preview" stage of node creation. In my (perhaps unworkable) vision, if the procedure finds markup deficiencies, the "create" option would remain blocked and the author would be given a message about the identified deficiencies. Wash, rinse, repeat.
Accessibility for Beginners: As one who came here with no CS background, and little experience with what one might characterize as *nix-ish documentation (which, in my continuing view, is generally highly valuable for readers who already have a broad general grasp of any particular topic and unspeakably obtuse for the newbie), I found much of the standard documentation (perldoc -f ..., perldoc modulename) obtuse in the extreme.
Welcoming to newcomers? I'd stand this one on its head. How can we better help newbies to avoid the pitfalls that sometimes win them caustic corrections?
Duty calls. Stay tuned for updates!
Update: Upvotes in this thread will not necessarily reflect agreement with any particular proposal nor endorsement of that proposal's feasibility but rather will be cast for well-reasoned arguments.
|
OPCFW_CODE
|
Using pair<int, int> or string as map<> key, which is more efficient?
I want to use a map to store key-value pairs.
The key of the map should contain information about the coordinates (int) of one point.
One possibility is to convert the ints to a string. For example, coordinate (x,y) can be represented as "x#y", and this string "x#y" stored as the key.
Another possibility is to use a pair to store the coordinates, as pair<int, int>, and use this pair as the key.
Which is the better approach, and why?
How do you store your coordinates? What form are they in in the rest of the program?
I would go with the pair. Either way, if you're worried about performance you should consider using unordered_map.
This depends on your definition of efficient, and we very quickly devolve into what might be considered premature optimization. There are a lot of things at play, and by the way you phrased your question I think we should take a very simplistic look:
Your primary considerations are probably:
Storage: how much memory is used by each key
Speed: how complex a key comparison is
Initialization: how complex it is to create a key
And let's assume that on your system:
int is 4 bytes
a pointer is 8 bytes
you are allocating your own memory for strings instead of using std::string (which is implementation-dependent)
Storage
std::pair<int,int> requires 8 bytes
your string requires 8 bytes for the pointer, plus additional memory for the string-representation of a value (up to 10 bytes per integer) and another byte for the separator
Speed
Comparing std::pair<int,int> requires at most two integer comparisons, which is fast on most processors
Comparing two strings is complex. Equality is easy, but less-than will be complicated. You could use a special padded syntax for your strings, requiring more storage, to make this less complex.
Initialization
std::pair<int,int> initialization is simple and fast
Creating a string representation of your two values requires memory allocation of some kind, possibly involving logic to determine the minimum amount of memory required, followed by the allocation itself (slow) and the actual numeric conversion (also slow). This is a double-whammy of "bottleneck".
Already you can see that at face value, using strings might be crazy... That is, unless you have some other important reason for it.
Now, should you even use std::pair<int,int>? It might be overkill. As an example, let's say you only store values that fit in the range [0,65535]. In that case, std::pair<uint16_t,uint16_t> would be sufficient, or you could pack the two values into a single uint32_t.
And then others have mentioned hashing, which is fine provided you require fast lookup but don't care about iteration order.
I said I'd keep this simplistic, so this is where I'll stop. Hopefully this has given you something to think about.
One final caution is this: Don't overthink your problem -- write it in the simplest way, and then TEST if it suits your needs.
First, coordinates can be floating-point numbers, so pair<double, double> might be the better choice.
Second, if you really must choose between an int pair and a string key, pair<int, int> is the better choice, since a string will usually allocate more capacity than its actual length.
Basically, you will lose some unused memory for each string key: string.length() can be equal to or less than string.capacity().
|
STACK_EXCHANGE
|
More networks and groups for government web content managers.
Web Content Managers Listserv
On This Page
Our Web Content Managers listserv is open to web content managers from any level of U.S. Government: federal, state, local, and tribal. Since the purpose of this group is to exchange ideas among those of us who are in these roles, we do not admit contractors or other private individuals. Learn more about the Web Content Managers Forum.
Register for our Web and New Media Community (Web Content Managers Forum). As part of the registration process, if you are a U.S. Government employee, you will automatically be subscribed to the listserv.
Any listserv member can send a message to the group by using this email address: CONTENT-MANAGERS-L@listserv.gsa.gov. Once you send the message, you'll be asked to verify your original message before it's distributed to the list. This is to reduce viruses and other unwanted emails being sent to the listserv.
To verify a message for distribution:
1. REPLY to the confirmation email message
2. Type the word "OK"—without the quotes—in the body of the message
3. Leave off any other text in the body of the message, including signature blocks
Alternatively, click the link provided in the confirmation email message.
When sending a message to the entire listserv, consider that your message is going to more than 2,200 individuals on the list. If you have a suggestion or a response to the person who sent the original message, you should email that person directly rather than sending your message to the entire list. But if your message or question is intended for the full community—by all means—send it to the full list. We want to keep the listserv manageable and relevant. But we also don't want to stifle frank, open conversation, which is what the listserv is all about.
Think of the listserv when you are setting your "out of office" messages. Remember to create a rule to not send the "out of office" message to listserv groups or only to people who've sent you a personal email. Ask your IT folks if you are unsure of how to do this.
If you do get "Out of Office" messages from people who forget, you can set up your email software to filter out messages that have "Out of Office" in the subject line. You should consult with your IT support staff to see how to filter these messages, since it will depend on the email program.
Following are some of the common commands you can use as a subscriber of this listserv. When using these commands, always:
- Send the command in an email message addressed to firstname.lastname@example.org
- Type the command in the body of the message on one line, not in the subject line (the subject line should be left empty)
- Remove any other text from the body of your message, including signature blocks
- Remove yourself from the list*: signoff content-managers-l
- Receive a daily digest of plain text posts (recommended for users with plain text email systems**): set content-managers-l digest
- Receive the digest in MIME format (recommended for Lotus Notes users): set content-managers-l digest mime nohtml
- Receive the digest in HTML format (recommended for MS Outlook users): set content-managers-l digest html
- Receive messages one-by-one as they are posted (recommended for all users): set content-managers-l nodigest
* You will also be unsubscribed from the listserv when you cancel your membership on the Forum networking site.
** Each day after midnight, you'll get an email with a list of each of the messages that has been generated the previous day, grouped by topic. Depending on your mail client, you can either click on topics to read them or read by scrolling through the digest.
As a subscriber, you will receive a copy of your own posts to the listserv (this is the default setting). Some prefer to only receive messages from others and a short message from the server indicating that their own message was distributed to the list. To turn off receipt of your own messages and just receive an acknowledgement:
- Turn off receipt of your own messages and receive an acknowledgement: set content-managers-l ack,norepro
- Restore the default setting: set content-managers-l repro,noack
Members of the CONTENT-MANAGERS-L listserv can access a searchable, browseable online archive of previous messages, and also modify their listserv settings. To access the listserv archives, first register a password. After you register, you will receive an auto-confirm email. Follow the instructions in that email to complete your registration and receive a password.
Once you have your password, you can login to the archive to search the listserv archives by keyword, browse the listserv archives by month, or modify your listserv subscription settings.
|
OPCFW_CODE
|
Good service. Fast shipping. Warranty. All products are checked for quality & condition. Invoice. Wolnosci 150, 58-500 Jelenia Góra, Poland. +48 500 322 334, +48 75 64 32 113 ext. The price includes 0% VAT. If you are NOT a company with a valid VAT number of an EU member (example valid VAT ID: DE999999999), you must pay an additional 23% VAT.
DELL COMPELLENT SC8000 STORAGE CONTROLLER ARRAY. 1 x DELL COMPELLENT RAID CONTROLLER PCI-E 512MB CACHE W/BATTERY. 1 x DELL QLOGIC 45GPC 8GB QUAD PORT QLE2564 FC HBA. 2 x DELL 6GBPS 4 PORT SAS PCI-E HOST BUS ADAPTER. 1 x DELL POWEREDGE R720/R720XD/SC8000 SYSTEM BOARD. Two 2.5GHz 6-core Intel® Xeon® processors per controller. 64GB per controller (128GB per array). Dell Storage Center OS (SCOS) 6.1 or greater.
6/960 per array, more in federated systems4. 3PB per array (SSD or HDD), more in federated systems4. 3PB per array with optional FS8600 6PB in single namespace (with FS8600 and multiple SC8000 arrays). SAS and NL-SAS drives; different drive types, transfer rates and rotational speeds can be mixed in the same system SSDs: write-intensive, read-intensive HDDs: 15K, 10K, 7.2K RPM.
Mix and match from the following options SC200 (12 x 3.5 drive slots, 6Gb SAS) SC220 (24 x 2.5 drive slots, 6Gb SAS) SC280 (84 x 3.5 drive slots, 6Gb SAS). 7 per controller 4 full-height (cache card consumes one) 3 low-profile Any slot may be used for either front-end network or back-end expansion capacity connections. Fibre Channel (4Gb, 8Gb, 16Gb), iSCSI (1Gb, 10Gb), FCoE (10Gb) Simultaneous interface support.
16 (Fibre Channel), 10 (1Gb iSCSI), 10 (10Gb iSCSI), 10 (FCoE) per controller Note: SC8000 controller can support up to 16 FC front-end ports with 4-port low-profile SAS back-end IO option. SAS (6Gb, 3Gb), Fibre Channel (2Gb, 4Gb, 8Gb).
16 (Fibre Channel), 20 (SAS) per controller Note: No SATA ports, FC and SATA enclosures are connected to FC8 IO card. Fibre Channel and SATA expansion enclosures connect to SC8000 via Fibre Channel IO expansion card. All-flash, hybrid or HDD arrays.
Block (SAN) and/or file (NAS) from same pool5. The item "DELL COMPELLENT SC8000 STORAGE CONTROLLER ARRAY" has been on sale since Tuesday, November 5, 2019. This item is in the category "Computer, Tablets & Netzwerk\Firmennetzwerke & Server\Netzwerkspeicher-Disk-Arrays\NAS-Disk-Arrays". The seller is "www_compan_info" and is located in Jelenia Góra. This item can be shipped worldwide.
|
OPCFW_CODE
|
Aquatic Organisms, Ecosystems, and Stressors
Aquatic Organisms, Ecosystems, and Stressors
Freshwater and marine ecosystems are facing increasingly frequent and severe anthropogenic perturbations.
These stressors can be very diverse (e.g. heat waves, chemical and plastic pollution, habitat changes) and can independently and/or synergistically affect the biology of aquatic organisms and the functioning of aquatic ecosystems.
I am particularly interested in the effects of stressors on key ecological processes, e.g. predator-prey dynamics and resilience loss, and on critical life-history transitions, e.g. bottlenecks and important developmental steps such as metamorphosis. I use a variety of aquatic study systems, but I tend to use reef fishes and freshwater ciliates as my model organisms.
I like to use experimental and integrative approaches, from the molecule to the ecosystem, and automated and reproducible methods to investigate the inner biological mechanisms explaining how broader ecological processes can be affected by multiple stressors.
I am currently a Lecturer at Sorbonne Université (France), where my research and teaching activities are primarily focused on marine ecotoxicology using integrative, experimental and automated approaches.
From 2020 to 2022, I was a Research Associate at the School of Biological Sciences, University of Bristol, where I investigated the impacts of multiple stressors on the resilience of aquatic ecosystems using laboratory systems, gantries, AI, and experimental arenas.
From 2018 to 2020, I was a Postdoctoral Research Fellow at the IAEA Environment Laboratories (Monaco), where I studied the effects of microplastics on fish ecophysiology, behavior and histology, using radio-nuclear techniques.
In 2018, I was awarded the Young Research Prize from the Bettencourt-Schueller Foundation, for my work and research project examining the impacts of anthropogenic stressors on larval fish sensory development and survival via endocrine disruption.
I did my PhD (2014 – 2017) at CRIOBE, PSL Research University (French Polynesia) on the metamorphosis of coral reef fishes, its inner molecular mechanisms, sensorial and ecological importance, and sensitivity to stressors. My PhD was also conducted within the Institut de Génomique Fonctionnelle (Lyon, France) and the OOB (Banyuls-sur-Mer, France), with a fellowship from the ENSL (Lyon, France).
I seek to uncover the effects of stressors on the biology and ecology of aquatic organisms using laboratory systems. To do so, I like to build arenas, mesocosms, and to use automated measurements and functional treatments to investigate how specific biological and ecological processes respond to stressors.
I aim at understanding the inner mechanisms underlying the effects of stressors on aquatic ecosystems functioning. For this, I use integrative approaches, from the molecule to the behavior and species interactions, to examine how the biology of aquatic organisms can explain their ecological response to stressors.
I look at how multiple stressors such as habitat/temperature changes and pollutants can impact the biology and ecology of aquatic systems. I perform in situ and laboratory experiments to assess stress levels in the environment & organisms, and to scrutinize how they can disrupt key biological and ecological processes.
This project aims at understanding the effects of multiple stressors and the resilience of ecological communities, using protist microcosms, automated gantries and AI.
This project is part of my Research Associate position within the School of Biological Sciences, at the University of Bristol, in collaboration with Dr. Chris Clements (University of Bristol) and the University of Sheffield.
This project examines the effects and toxicology of microplastics on the physiology of aquatic organisms, using radio-nuclear techniques.
This project was part of my postdoctoral work at the IAEA in Monaco (2018-2020), and continues since then in collaboration with Dr. Marc Metian and Dr. Peter Swarzenski (IAEA).
This project investigates the importance of thyroid hormones in reef fish metamorphic and recruitment processes, as well as the sensitivity of the thyroid signaling to anthropogenic stressors.
This project was part of my PhD (2014-2017) and first postdoc (2018) within the CRIOBE, and continues since then in collaboration with Dr. David Lecchini (CRIOBE) and Dr. Vincent Laudet (OIST).
This project explores the effects of acidification on a key life-history transition in the reef fish life cycle.
This project started during my postdoc at the IAEA (2018-2020) and continues since then in collaboration with Dr. Marc Metian (IAEA), and Dr. Laetitia Hedouin and Dr. David Lecchini (CRIOBE).
This project is part of the monitoring of sea turtles migrations, populations and nesting sites in French Polynesia operated by Te mana o te moana association, led by Dr. Cécile Gaspar.
|
OPCFW_CODE
|
You are taking a sabbatical year backpacking through Asia. You want to find some place to work in exchange for food and accommodation. After doing a Google search you find the site http://www.helpx.net and decide to give it a try. You will soon notice that the website offers a lot of information, but its structure has a lot of room for improvement. The navigation is also not fully intuitive. What if there were an easier way to access the information without getting lost?
HelpX.net is a website for exchanging help between Hosts (providing accommodation and often also food) and Helpers (providing help with various tasks of the host), and for enabling the two groups to get in contact with each other. The website has a lot of valuable information available for both parties, but it could be more easily accessible. In this evaluation, we will concentrate on the usability of the Helper role in order to limit the project, even though the site is designed for both groups. Our main focus is to redesign the main page and the navigation required to find a suitable host, in order to make the user journey towards a host as clear as possible for a helper. Since many of the Helpers are backpackers who want to get free food and accommodation (thus saving money) in exchange for helping other people out, they will be our target users.
Our usability evaluation consisted of interviews with two possible and one actual users of the site. All interviewees were presented to the same scenario:
“You are taking a sabbatical year backpacking through Asia. You want to find some place to work in exchange for food and accommodation. After doing a Google search you find the site http://www.helpx.net and decide to give it a try.”
And asked the following questions:
What are the most important things for a backpacker to find on this site?
When viewing a host, what are the main things you want to know about them?
Find one host in China that has room for 2 helpers and is an organic farm. (observe them and take notes as they do so. Film with your phone if it is OK).
What difficulties did they find?
What difficulties did you observe?
After the interviews, we proceeded with compiling a list of problems with the site based on the answers:
Borders or buttons would be preferable to ordinary links according to one interviewee.
Non-clickable photos (pics cannot be seen in larger sizes) when users expected them to be clickable.
Entire presentation, design layout was not considered attractive.
The size of the website changes when you navigate, which was not considered consistent.
One interviewee considered the comment box in the middle of the home page to be misleading and not expressing what the site is actually about.
Location maps for Asia proved really hard to find, interviewees didn’t see them during navigation.
There is no map with all hosts in the world. Have to click different location maps.
There is no search option in home page. As such there is no way to jump directly to what a person is looking for.
Not able to filter host by what time of the year they are hosting or the number of helpers they accept. Other filtering (reviewed hosts only etc.) is available only to logged in users, which could at least be expressed to those not logged in.
Contact details regarding a host can be found if and only if the helper has registered, something that some interviewees disliked. However, this is probably done in order to create an incentive to register as a premium host; otherwise no one would ever pay for the service. A similar service, WWOOF (http://www.wwoofchina.org), shows even less host info to non-paying users.
Need to login in order to see reviews. Same reasons as above. An option to consider would however be to show the average number of stars received by previous helpers.
The mechanism for contacting the admin about FAQs or other issues was not well designed: it is cumbersome to click through and type into a new page. The contact option should be visible at the top of every page rather than at the bottom.
|
OPCFW_CODE
|
[http-plugin] ignoreIncomingPaths type error in TS
Please answer these questions before submitting a bug report.
What version of OpenTelemetry are you using?
v0.2.0
What version of Node are you using?
TS
What did you do?
added ignoreIncomingPaths: [/\/healthz/], to the config
What did you expect to see?
No type error
What did you see instead?
Type '{ enabled: boolean; path: string; ignoreIncomingPaths: RegExp[]; }' is not assignable to type 'PluginConfig'.
Object literal may only specify known properties, and 'ignoreIncomingPaths' does not exist in type 'PluginConfig'.
Additional context
If there is another way to config when using TS please add this to docs! Thanks
Should we use ignoreUrls instead?
Can you add a minimal code sample reproduction?
Sure!
Complains in TS:
import { JaegerExporter } from '@opentelemetry/exporter-jaeger';
import opentelemetry from '@opentelemetry/core';
import { NodeTracer } from '@opentelemetry/node';
import { SimpleSpanProcessor } from '@opentelemetry/tracing';
const tracer = new NodeTracer({
plugins: {
http: {
enabled: true,
path: '@opentelemetry/plugin-http',
ignoreIncomingPaths: [/\/healthz/],
},
},
});
const exporter = new JaegerExporter({ serviceName: 'nearby-rideshare' });
tracer.addSpanProcessor(new SimpleSpanProcessor(exporter));
opentelemetry.initGlobalTracer(tracer);
Does not complain:
import { JaegerExporter } from '@opentelemetry/exporter-jaeger';
import opentelemetry from '@opentelemetry/core';
import { NodeTracer } from '@opentelemetry/node';
import { SimpleSpanProcessor } from '@opentelemetry/tracing';
const tracer = new NodeTracer({
plugins: {
http: {
enabled: true,
path: '@opentelemetry/plugin-http',
ignoreUrls: [/\/healthz/],
},
},
});
const exporter = new JaegerExporter({ serviceName: 'nearby-rideshare' });
tracer.addSpanProcessor(new SimpleSpanProcessor(exporter));
opentelemetry.initGlobalTracer(tracer);
Furthermore, there is a TS error when trying to set logLevel
Type 'LogLevel.ERROR' has no properties in common with type 'PluginConfig'.ts(2559)
PluginLoader.d.ts(23, 5): The expected type comes from this index signature.
I will edit code above
To begin with, this is not valid:
const tracer = new NodeTracer({
plugins: {
logLevel: opentelemetry.LogLevel.ERROR, // ERROR HERE
},
});
The log level is a configuration, not a plugin. You can do either of the two following:
const tracer = new NodeTracer({
logLevel: opentelemetry.LogLevel.ERROR,
plugins: {
// http, grpc, etc
},
});
const tracer = new NodeTracer({
plugins: {
http: {
enabled: true,
path: '@opentelemetry/plugin-http',
logLevel: opentelemetry.LogLevel.ERROR,
}
},
});
For the ignoreIncomingPaths issue, there is no ignoreIncomingPaths on a plugin config object https://github.com/open-telemetry/opentelemetry-js/blob/master/packages/opentelemetry-types/src/trace/instrumentation/Plugin.ts#L50
The typing may need to be updated to allow plugin specific configs
Thanks @dyladan , logLevel non-issue was me being dumb. I will edit title and code sample accordingly to just mention the ignoreIncomingPaths issue.
What made you think ignoreIncomingPaths is a valid option? This is not a bug, but rather an issue with whatever documentation told you to put that there I think
Fair enough, it was indeed from the docs. I think I grepped for ignore; one of these inspired me:
https://github.com/open-telemetry/opentelemetry-js/search?q=ignoreincomingpaths&unscoped_q=ignoreincomingpaths
FWIW it does work in regular JS... so it's a bit confusing!
I use it here https://github.com/naseemkullah/k8s-job-dispatcher/blob/master/lib/tracer.js#L12 which is my goto reference (you've since added the Getting Started guide which is amazing btw).
So we should use ignoreUrls or something else?
@naseemkullah well that depends... ignoreUrls is correct according to the typing, but is actually ignored. The reason it works in js is because the config object just happens to be passed directly to the plugin unmodified, and the http plugin expects ignoreIncomingPaths. The answer is probably neither. We will need to make a better config story. For now I would use ignoreIncomingPaths with a // @ts-ignore comment to suppress compilation errors, and you should track #585 to see a real solution.
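The interim workaround looks like this (a sketch based on the earlier samples in this thread; the plugin options are the ones discussed above):

```typescript
import { NodeTracer } from '@opentelemetry/node';

const tracer = new NodeTracer({
  plugins: {
    http: {
      enabled: true,
      path: '@opentelemetry/plugin-http',
      // @ts-ignore -- honored by the http plugin at runtime, but missing
      // from the shared PluginConfig type until #585 is resolved
      ignoreIncomingPaths: [/\/healthz/],
    },
  },
});
```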
Thanks @dyladan, that is insightful, as I am trying to incorporate OTel into some of our TS apps as a pilot project.
Finally, back to the question of LogLevel, I am getting this:
logLevel: core_1.default.LogLevel.ERROR,
^
TypeError: Cannot read property 'LogLevel' of undefined
From this
import opentelemetry from '@opentelemetry/core';
import { NodeTracer } from '@opentelemetry/node';
// Create and configure NodeTracer
const tracer = new NodeTracer({
  logLevel: opentelemetry.LogLevel.ERROR,
  plugins: {
    http: {
      enabled: true,
      path: '@opentelemetry/plugin-http',
      ignoreUrls: [/\/healthz/],
    },
  },
});
any ideas as to why?
import * as opentelemetry from '@opentelemetry/core';
Thanks!
For http/https config, you can look here https://github.com/open-telemetry/opentelemetry-js/tree/master/packages/opentelemetry-plugin-http#http-plugin-options
Default config (interface) should be removed, since it was discussed during the SIG that we don't want one unique config shared by all plugins. @mayurkale22, correct me if I'm wrong.
Alright // @ts-ignore works for now! Will continue to track #585
|
GITHUB_ARCHIVE
|
Programmers need to work with both signed and unsigned values, but the calculator is currently optimized for signed values only. Evidence or User Insights: for example, while the calculator works on INT64 numbers, if I try to paste in a UINT64 value, e.g. 18403114778001080163, it shows as invalid input.
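The signed-vs-unsigned limitation described above can be checked directly. This is a minimal sketch assuming standard two's-complement 64-bit ranges; it is not tied to any particular calculator app.

```python
# Standard 64-bit integer ranges (assumed two's-complement representation).
INT64_MAX = 2**63 - 1
UINT64_MAX = 2**64 - 1

v = 18403114778001080163          # the pasted value from the report above
print(v <= INT64_MAX)             # False: out of range for signed 64-bit
print(v <= UINT64_MAX)            # True: fits as unsigned 64-bit
# The same 64-bit pattern reinterpreted as a signed two's-complement value:
print(v - 2**64)
```

A calculator that only models signed 64-bit values therefore has no way to represent this input, even though it is a perfectly valid unsigned word.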
Floating-point numbers are only supported for base 10. Programmer Calculator comes with a simple but pretty design and makes your decimal, hexadecimal and binary dreams come true with just a few taps of your smartphone or tablet screen!
In this example, you will learn to create a simple calculator in C programming using the switch statement and break statement. CalcTastic is also a full-featured Programmer's Calculator with a dedicated Binary Display and support for Signed & Unsigned Integers (from 8 to 64 bits).
Programmers Calculator comes to your aid to compute hash algorithms, encode strings, convert bases, and encrypt text. The provided layout is plain and clear-cut, divided into two separate sections. In this example, you will learn to create a simple calculator in C programming using the switch statement and break statement. 2. Calculator – Pad Edition: if you're simply looking for the iPhone calculator app on your iPad, this is one of the best apps for that.
For one, it’s a free app …
Welcome to Dyson Sphere Program - Calculator!
A command line calculator made for programmers working with multiple number representations and close to the bits. A programmer calculator can be indispensable for writing and debugging computer programs. Evaluating the results of your code can be done quickly and efficiently with calculator functions that support: multiple modes such as fixed point and floating point operations; a variety … I've never heard the words "programmer calculator"; perhaps you mean "programmable calculator"? That's a calculator where you can write a program by remembering a sequence of operations, and then apply the whole sequence with one button.
Programmer Calculator is a powerful tool for conversions between decimal, hexadecimal, octal and binary number bases, with a unique view for entering ASCII keys, showing binary, hexadecimal and decimal representations at the same time. Bitwise operations including AND, OR, XOR and NOT are available, in addition to functions not usually found in a programmer calculator, such as square and factorial.
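The core operations named above (base conversion plus bitwise AND, OR, XOR and NOT on a fixed-width word) are easy to sketch. The 8-bit width and the input values here are arbitrary examples:

```python
# Base conversion and bitwise operations on a fixed-width unsigned word.
def to_bases(n, bits=8):
    mask = (1 << bits) - 1
    n &= mask                              # treat as an unsigned fixed-width value
    return {'dec': n, 'hex': format(n, 'x'),
            'oct': format(n, 'o'), 'bin': format(n, 'b')}

a, b, bits = 0b1100, 0b1010, 8
mask = (1 << bits) - 1
print(to_bases(a))                         # 12 -> hex 'c', oct '14', bin '1100'
print(a & b, a | b, a ^ b)                 # 8 14 6
print(~a & mask)                           # NOT confined to 8 bits -> 243
```

Masking after NOT is what makes the result match a hardware-style 8-bit register rather than Python's unbounded integers.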
Programmer Calculator free download - Moffsoft Calculator, Simple Calculator, Biromsoft Calculator, and many more programs. Programmer Calculator is an Android Tools app developed by Fidias Ioannides and published on the Google Play store. It has already got around 50,000 downloads so far, with an average rating of 4.0 out of 5 in the Play Store. Programmer Calculator is a simple and easy-to-use calculator especially designed for programmers.
Description: A powerful calculator designed for programmers and math lovers. You can do all kinds of conversions and calculations with this calculator, including inter-converting between Binary, Octal, Decimal and Hex, with support for 64-bit binary.
|
OPCFW_CODE
|
NetEye & EriZone User Group
Challenges and opportunities for IT Management 4.0
Connectbay, Mantova, Thursday, October 19, 11:00 – 17:00
We are pleased to invite you to the NetEye & EriZone User Group on October 19. The event will offer you a unique opportunity to discover the latest developments in IT System & Service Management, identify the requirements needed to comply with the GDPR (General Data Protection Regulation), and actively take part in defining the next evolutionary phase of our solutions.
Windows Server Update Services (WSUS) is an application developed by Microsoft that enables administrators to manage the distribution of updates for Microsoft products to computers in a corporate environment.
The first version of WSUS was known as Software Update Services (SUS) and was created in 2005. Only after 2008 was it distributed as an installable server role.
WSUS manages the update catalog for Windows components and other Microsoft products, the approval cycle, as well as the distribution of updates on a local network. However, it has no control over when and how such updates are applied to the target computers: even with this limit, WSUS is the ideal solution because it is free and easier to manage than the System Center Configuration Manager, a product that can both force and centrally control the distribution of updates.
If you are using our Asset Management module integrated into NetEye, you probably already know about the potential of OCS Inventory and GLPI. However, often users are not aware of all the functionalities available in Life Cycle Asset Management. So let’s highlight some of the most important features to manage the entire life cycle of your assets:
NetEye & EriZone User Group
Challenges and opportunities in IT Management 4.0
Connectbay, Mantova, October 19, 11:00 – 17:00
We are glad to invite you to attend the NetEye & EriZone User Group. This yearly event for our customers will offer you the possibility to discover innovations in the IT Service Management field, to identify modern approaches to performance monitoring, and to participate in the definition of our solution roadmap.
Machine learning and anomaly detection are being mentioned with increasing frequency in performance monitoring. But what are they and why is interest in them rising so quickly?
From Statistics to Machine Learning
There have been several attempts to explicitly differentiate between machine learning and statistics. It is not so easy to draw a line between them, though.
For instance, different experts have said:
- “There is no difference between Machine Learning and Statistics” (in terms of maths, books, teaching, and so on)
- “Machine Learning is completely different from Statistics.” (and the only future of both)
- “Statistics is the true and only one” (Machine Learning is a different name used for part of statistics by people who do not understand the real concepts of what they are doing)
In short we will not answer this question here. But for monitoring people it is still relevant that the machine learning and statistics communities currently focus on different directions and that it might be convenient to use methods from both fields. The statistics community focuses on inference (they want to infer the process by which data were generated) while the machine learning community puts emphasis on the prediction of what future data are expected to look like. Obviously the two interests are not independent. Knowledge about the generating model could be used for creating an even better predictor or anomaly detection algorithm.
|
OPCFW_CODE
|
We find skills and curriculum mapping fascinating, and after our dive into the basics of skills mapping and a conversation with an excited group of skills mappers, it’s clear that you do, too. In “Future Proof,” the Lab covers everything you need to build your curriculum for the future, tackling big concepts, key drivers in the space, and the tools and resources needed for you to get it done.
21st century skills — otherwise known as soft skills, human skills, or mobility skills — are the future of work. The last time we dove into skills mapping, we aimed to help our partners better understand it as a key tool for designing educational programs that better equip learners for the needs of an evolving workforce. Lab partners IBM and Goodwill, for example, have begun using skills maps to build their own internal curriculum for employees to further their skill sets. Nimble universities are using skills maps to reassess their educational offerings and rebuild from scratch (read: WGU Skill Mapping Approach).
So, what do you do if you're not able to quickly and drastically redesign your curriculum to meet employers' skill needs?
Meet 21st century skills curriculum mapping. While skills mapping uses the skills the employment sector is seeking to develop curriculum, curriculum mapping allows faculty, staff, and administrators to use their existing curriculum as a foundation to tag 21st century and other in-demand technical skills. As a result, campuses are better able to equip learners with an identifiable skill set.
Skills mapping uses employer-driven skills as a foundation for building educational programs, while 21st century skills curriculum mapping uses a campus’ existing program to identify where top 21st century skills are already being taught.
What’s a 21st century skills curriculum map?
A traditional curriculum map illustrates how a program’s courses and requirements introduce and reinforce student learning outcomes. A 21st century skills curriculum map is a map or grid that identifies which foundational 21st century skills a learner must develop by the end of their program and where those skills are embedded. This often requires a translation of learning outcomes to skills and competencies (emphasize: skills should be employer driven).
What’s an example of a 21st century skills curriculum map?
A 21st century skills curriculum map for any institution might start with both cataloguing the course work and requirements for one undergraduate major and acquiring a list of foundational employer-needed skills (think: a skills map or the Lab’s T-Profile). 21st century skills curriculum maps may also address a singular course. Working with both sources of data, institutions would cross-walk them: Where am I already teaching 21st century skills? Where could I better embed these skills into my course or program?
How are colleges and universities using 21st century skills curriculum maps?
Palo Alto College in San Antonio, Texas, is offering the Lab’s Collaboration micro-credential as part of their Advanced Manufacturing Certificate, a program that is intentionally seeking to develop collaborative learners. After working together to outline a 21st century skills curriculum map, Palo Alto and the Lab identified a single course (BMGT301 Supervision, as seen below) in the certificate as the clearest hub for Collaboration. Using their map, Palo Alto was able to elevate the existing course content to make the competencies behind Collaboration more transparent.
An example of a 21st century skills curriculum map: Palo Alto College mapped their Supervision (BMGT 1301) course to the competencies that make up the Lab’s Collaboration Badge, identifying clear gaps and areas to strengthen.
How can I get started on my campus?
Download the Lab’s prototype 21st century skills curriculum mapping tool to start identifying employer-driven skills in your program. We first tested this early revision with a group of faculty at the Summer Academy for Adult Learning & Teaching (SAALT) for the University of Maine System last week and are working on a higher fidelity interactive prototype that will allow you to map and track the development of these skills throughout your program.
Apart from the Lab’s work, other players in the space are developing curriculum mapping tools to illuminate gaps and show you what you’ve got. Coursetune is a digital platform that allows you to map skills across an entire program. Another, eLumen, helps manage your curriculum mapping process, even allowing you to sync with third-party skill libraries for a more seamless skill tagging experience.
Interested in giving the Lab’s 21st century skills curriculum mapping tool a try? Let us know what you think and how we can build a better version 2.0 by emailing our team at email@example.com.
|
OPCFW_CODE
|
from django.core.exceptions import ImproperlyConfigured
import redis
from rq import Queue


def get_connection(name='default'):
    """
    Returns a Redis connection to use based on parameters in settings.RQ_QUEUES
    """
    from .settings import QUEUES
    queue_config = QUEUES[name]
    if 'URL' in queue_config:
        return redis.from_url(queue_config['URL'], db=queue_config['DB'])
    return redis.Redis(host=queue_config['HOST'],
                       port=queue_config['PORT'], db=queue_config['DB'],
                       password=queue_config.get('PASSWORD', None))


def get_queue(name='default'):
    """
    Returns an rq Queue using parameters defined in ``RQ_QUEUES``
    """
    return Queue(name, connection=get_connection(name))


def get_queues(*queue_names):
    """
    Return queue instances from specified queue names.
    All instances must use the same Redis connection.
    """
    from .settings import QUEUES
    if len(queue_names) == 0:
        # Return "default" queue if no queue name is specified
        return [get_queue()]
    if len(queue_names) > 1:
        connection_params = QUEUES[queue_names[0]]
        for name in queue_names:
            if QUEUES[name] != connection_params:
                raise ValueError(
                    'Queues in a single command must have the same '
                    'redis connection. Queues "{0}" and "{1}" have '
                    'different connections'.format(name, queue_names[0]))
    return [get_queue(name) for name in queue_names]


def enqueue(func, *args, **kwargs):
    """
    A convenience function to put a job in the default queue. Usage::

        from django_rq import enqueue
        enqueue(func, *args, **kwargs)
    """
    return get_queue().enqueue(func, *args, **kwargs)


"""
If rq_scheduler is installed, provide a ``get_scheduler`` function that
behaves like ``get_connection``, except that it returns a ``Scheduler``
instance instead of a ``Queue`` instance.
"""
try:
    from rq_scheduler import Scheduler

    def get_scheduler(name='default'):
        """
        Returns an RQ Scheduler instance using parameters defined in
        ``RQ_QUEUES``
        """
        return Scheduler(name, connection=get_connection(name))
except ImportError:
    def get_scheduler(name='default'):
        raise ImproperlyConfigured('rq_scheduler not installed')
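The selection logic inside `get_connection` can be sketched without Django or Redis installed. Here a plain dict stands in for `settings.RQ_QUEUES`; the queue names and connection values are illustrative assumptions, not part of the module above.

```python
# Stand-in for settings.RQ_QUEUES (illustrative values only).
QUEUES = {
    'default': {'HOST': 'localhost', 'PORT': 6379, 'DB': 0},
    'remote': {'URL': 'redis://cache.example.com:6379', 'DB': 1},
}

def connection_kwargs(name='default'):
    """Mirror get_connection(): prefer 'URL' when present, else HOST/PORT."""
    cfg = QUEUES[name]
    if 'URL' in cfg:
        return {'url': cfg['URL'], 'db': cfg['DB']}
    return {'host': cfg['HOST'], 'port': cfg['PORT'],
            'db': cfg['DB'], 'password': cfg.get('PASSWORD')}

print(connection_kwargs('remote'))
print(connection_kwargs('default'))
```

This shows why `get_queues` insists on identical configs: each queue name resolves independently to a connection, so mixing configs in one command would silently talk to different Redis servers.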
|
STACK_EDU
|
As a language learner, one thing I often crave is a good message board to read, because it seems that they strike a nice balance between the formality of literary language and the difficult colloquialness of spontaneous speech, and what’s more, they offer interactivity. I know if I was trying to learn English and stumbled across the SDMB I’d probably pass out from happiness. I was thinking maybe we could compile a list of some good message boards conducted in other languages; they could be large or small-- anything is fine as long as they’re fairly interesting. Any language will do…I’m sure some Doper is learning it. So have at! I’m particularly hoping to get some interesting replies from non-native-English-speaking dopers.
Good idea. I know of some Dopers who had quite a rudimentary grasp of English at the beginning of their tenure, whose skills improved drastically while posting and reading here. I believe Aldebaran was one.
I’m going to interject myself in here to ask that posters please do not quote content from any foreign language message boards they may suggest. We ask that all posts be in English, per post #15 in the FAQ. Thank you.
Whoops, sorry if my topic precipitated that.
Well, it did. But that’s fine - don’t worry about it. I just would like it if nothing gets posted here that I don’t know what is being said.
I’ve been browsing this board in Chinese today… it’s a rather big board about pets.
Mind you, I know absolutely no Chinese but it’s interesting to look. I can still see extremely cute pictures.
May we provide a translation?
I wouldn’t mind finding something like the SDMB in French–I want to seriously improve my French skills. A board with an SDMB-like tradition of intellectual liberty and rigour, with a culture tolerant of second-language speakers, and which would help them learn, would be a great thing.
I was going to post a link to gxangalo.com for the Esperanto-speakers among us, but it appears that they have chat rooms and blogs, but not a message board like this one. Pity. Seems it would be the perfect mix of the two.
You mean some of the original text and a translation? I’m inclined to say no, but if you can justify it, you could change my mind. What purpose would it serve that a simple link wouldn’t?
Interesting stuff so far, y’all.
I’m going to bump this before it disappears, because I too would like to find some foreign language message boards (particularly in German, Italian, and Spanish). I don’t know any myself, but I’ll look around and let you know if I find one.
Would it be considered improper to top this thread? I was extremely interested in finding out people’s responses, and I think the thread may have just been around at the wrong time. I think it still has the potential to yield interesting and helpful output.
There’s http://www.forodeespanol.com/. It’s really oriented for English speakers looking to learn Spanish, but there seems to be some decent Spanish language discussion going on, at least from the POV of a not-very-good Spanish speaker, like myself.
My website at http://www.metrodemontreal.com/ has a forum which is in French, but it’s pretty much geared towards transit enthusiasts.
Swedish forum for discussing music:
And its sister forum, for discussing literature:
I give you Big Boards.
I discovered this website via the SDMB, so I'm surprised it hasn't been mentioned yet. The link is to their list of the largest message boards on the internet. Their criterion for size is the number of posts, which isn't necessarily the best but it'll do. Just from the top 200, here are what I think are the most promising general discussion foreign-language boards (note: the extent of my experience with each of these was a quick check to make sure it was work-safe):
PinoyExchange (mixed English and Filipino, mostly English)
Kafegaul (Indonesian) Note: Big Boards link to this site is broken, but this one works.
Flashback (looks like Swedish to me)
Forum TR (Turkish)
There are a number of other foreign language specialty boards, but most of them have general discussion or off-topic forums. Here’s a list of some other languages available with a handful of examples:
Gamesradar.it (video games, with lots of off-topic)
Chiquititas (according to Big Boards, this is a Hebrew forum for an Argentinian soap opera. I can't read it, so I can't confirm or tell how much off-topic is available. Also note the Big Boards link to this is dead, but this one works.)
HardMob (PC gaming, with a little off-topic) Note: link from Big Boards is dead, but this one works.
IOFF.de (television and off-topic)
Hardware.fr (computer hardware, with some off-topic)
Forocoches (cars, with some off-topic)
Prohardver (computers and hardware, looks like very little off-topic)
Sure, but it's a provocative pick. The site and forum is very left-wing and also drug liberal. I still prefer the two I linked to by a huge margin.
Wow. That’s the kind of listing I was looking for…
Thanks for the info. I hadn’t looked into any of those enough to get a really good feel for them, so input from anyone who actually knows what they’re talking about is welcome :).
|
OPCFW_CODE
|
What is open data?
Open data is data that is available for everyone to access, use and share. It is generally published by governments on freely accessible portals and might include information about local areas, or statistics on topics such as the economy, health, and the environment.
Open data is also a movement. Supporters of open data believe that some kinds of data should be publicly accessible for anyone to make use of. It's a viewpoint that has accompanied the explosion of data in today's digital world.
With modern technology collecting more and better quality data on our world than ever before, it makes sense to open this up as much as possible for social, economic, public and institutional gain.
A history of open data in the UK
Open data was popularised by the Obama administration in the late 2000s. At this time, city authorities across the world began to look into publishing any data they had that could be open — i.e. that wasn't personally or commercially sensitive — in a single place online.
The Greater London Authority, for example, created the London Datastore in 2010 as a repository for its data, along with that of other public sector organisations operating in the city. It now has 118 different data publishers, ranging from Transport for London to the London Fire Brigade.
Open data platforms such as this, therefore, give a wealth of information to any interested parties - from individuals to private businesses - about things like the demographic makeup of boroughs, transport use, and institutional spending.
This helps to provide transparency to the work of authorities in the public sector, opening it up to the media for scrutiny and investigation. It also encourages innovation. Open access to data is extremely important for researchers and innovators looking to firstly understand, and secondly, build solutions to the problems people face.
Why is data quality important in open data?
The Open Data Institute (ODI), co-founded by world wide web inventor Sir Tim Berners-Lee in 2012, is just one of the many organisations pushing the open data agenda. According to the ODI, open data is defined as 'data that is available for everyone to access, use and share.' But that's not all.
Open data should also be easy to access, structured, stored in a non-proprietary open format, clearly labelled, and linked to other data for context. Fulfilling all of these criteria represents the ideal for open data according to Tim Berners-Lee's 5-Star Open Data plan, a ranking system designed to improve the quality and usability of data.
Data quality and usability is an extremely important part of the open data conversation. As is becoming ever more widely accepted, more data doesn’t always equal better outcomes. In fact, when large amounts of data are collected and stored, it can cause severe problems, so it’s crucial to follow rigorous governance procedures and data standards.
What is public data?
The terms open and public data are often found together, and whilst they might sound similar, they are actually very different.
As we saw above, there are certain standards associated with open data that make it easy to both access and use. This isn’t the case with public data, which simply means any data that is in the public domain. Public data, therefore, includes datasets and documents that can only be accessed with a freedom of information request, as well as data that isn’t in machine readable format.
Public data is definitely not always open data.
The open data agenda
Advocates for open data continue to push for more government data to be made accessible: across the world, still fewer than one in five government datasets are open to the public.
Initiatives such as the Open Data Barometer — which tracks these trends — want to see progress in the form of making government data open by default; better data infrastructure and data management practices; and working with stakeholders to solve more challenges with open data.
We will also continue to see debate about data ownership and privacy increase over the next few years, as larger scale data projects are pursued by governments in an effort to link up and improve public sector services.
Building trust is key to the effective use of data in the public sector, with a 2019 ODI survey finding only 31% of people trusted government to use their data ethically. This is something that public bodies must address for society to reap more of the benefits of data. It's certainly on the agenda in government, with the National Data Strategy, for example, aiming to empower people to control how their data is used, but also 'to strengthen the existing understanding that aggregated data about people — used responsibly and fairly — can have public benefits for all.' Tied to this is the important task of improving transparency over the use of algorithms in big data, with the Data Strategy again promising to work closely with the Centre for Data Ethics and Innovation to develop the right kind of governance for these technologies.
Our recent insights
The government has launched a new Incubator for AI and readiness should be an important consideration to ensure its success.
Generative AI offers efficiency but poses unique cybersecurity risks. Traditional measures fall short; a new paradigm is needed
Technology plays an important role in social housing. So why are we relying on outdated technology that doesn’t meet the needs of people in a vital service? It’s clear that standardisation will be a crucial ingredient in the quest to make housing tech work better for everyone involved.
|
OPCFW_CODE
|
If you are playing on GeForce NOW and have recently encountered error code 0x000001fb, chances are you are seeing it as a result of a server outage or server maintenance on NVIDIA's end.
There can be other reasons which might lead to this error as well. But after going through quite a number of online forums and community threads related to it, we have come to the conclusion that, in any case, such an error is primarily related to a server issue.
You have probably already come across a number of guides and articles explaining various troubleshooting methods to fix the issue, so it would only be fair that we do not repeat those solutions here and waste both your time and ours.
Instead, we have compiled the solutions reported across those forums and threads to give you a list of fixes from users who actually faced and solved this particular issue.
Fixes For Geforce now error code 0x000001fb
Fix 1: Check Server Status
If you are seeing this error while playing a game, checking the GeForce server status should be the first thing you do.
If the issue turns out to be server maintenance or a server outage, there is not much you can do other than wait it out until the problem is dealt with on the server end.
You can check for the status of all the GeForceNow servers here.
Fix 2: Change the server datacenter
Of all the solutions reported by users in various forums, changing the server datacenter from the current one to a different one seemed the most effective and most widely reported.
Simply changing the NVIDIA datacenter from one location / region to another seemed to do the trick for many users. Hence, switching the server location from your current one to a different one that does not produce the error is one way to solve it.
Fix 3: Switch to a different bandwidth
There may be times when the specific bandwidth setting you are using also causes this error.
In such a case, the problem might not lie entirely with the server; it may have something to do with your connection at home, or with something between the client end and the server end.
Hence, as mentioned above, set your bandwidth level according to what is available on your connection and see if doing that helps with the error.
Fix 4: Clear browser cache & cookies
You can also try clearing your default browser's cookies and cache to get rid of any cached data or cookies that might conflict with the game or the GFN client and cause this error.
Simply log out of your account, and in your browser's History section, clear its cookies and cache.
Below, I have listed a few of the most popular browsers out there, which some of you might be using, as well as provided links to guides on how to clear cache and cookies for each of the browsers.
- Steps to clear up cache for Google Chrome
- Steps to clear up cache for Mozilla Firefox
- Steps to clear up cache for Opera
- Steps to clear up cache for Safari
- Steps to clear up cache for Microsoft Edge
Fix 5: Contact Customer Support
If none of the above solutions work and you have been getting the error for quite some time without any luck, it would be best to contact NVIDIA's official customer support and discuss the problem you are facing with them, to come up with a solution that works for you.
|
OPCFW_CODE
|
Kalman Filter | Why Shift the Belief Distribution
I am looking at the video lecture on the Kalman Filter. I understand that if the robot moves, we also shift the environment belief model accordingly (by the same mean amount the robot moved). I am quite confused about why we move the belief model of the environment. The environment itself is stationary, so if the robot moves, the doors will remain in place and their distribution should not be shifted. In the following snapshot, the doors are our measurements and we form the belief of where the robot is with respect to the doors. When the robot moves, the doors stay stationary. Why do we move the posterior model as shown in the image below?
The idea is that the robot is trying to figure out where it is. The only information it can use is:
Its door sensor which says either you're next to a door or you're not next to a door.
Its map of where the doors are.
That's how we go from the "maximum confusion" (uniform) plot to the one with three humps. This uses the first you're next to a door measurement.
Then, the robot knows it moves to the right. Therefore it must update its probability with a similar movement to the right. However, its motion sensors are a little uncertain, so this has the effect of smoothing the three bumps (well, the whole distribution) a little bit. So the three bumps move to the right and they spread out a little.
The peaks of the three spread out humps are still the robot's best guess as to where it is on its map.
Now we get the second you're next to a door measurement. We multiply our prior (the slightly spread out three-humped distribution) with a new three humped distribution (from our known map). The prior second-door location and the map second-door location will align, so that will generate the largest peak.
Yes, I understand the process. I am only confused about why we shift or move the probabilities to the right, as you mentioned: "So the three bumps move to the right and they spread out a little." The three bumps define the probability of the doors, which are stationary. So if the robot wants to localize itself with respect to the doors, the probability of the doors should remain stationary, or move left relative to the robot, since the robot moved right relative to the doors.
The probability is the probability of where the robot is. Not where the doors are. If the robot moves to the right (with some uncertainty) then the probability also moves to the right (and the uncertainty smooths the prior a little).
Ahh makes sense now. I thought the three humps represent where the doors are with respect to the robot. My bad.
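The process discussed above is the histogram (discrete Bayes) filter, and it can be sketched in a few lines. Note that it is the belief over the robot's position that shifts in the move step, exactly as the answer says. The door positions, probabilities, and step size below are illustrative assumptions, not values from the lecture:

```python
# Minimal 1D histogram (discrete Bayes) localization sketch.
# The world is a ring of 9 cells; doors at cells 1, 4 and 7 (illustrative).
n = 9
doors = [1, 4, 7]

def normalize(p):
    s = sum(p)
    return [x / s for x in p]

def sense(belief, saw_door, p_hit=0.6, p_miss=0.2):
    # Measurement update: multiply the prior by the likelihood of the
    # reading at each cell, then renormalize.
    out = []
    for i, b in enumerate(belief):
        agrees = (i in doors) == saw_door
        out.append(b * (p_hit if agrees else p_miss))
    return normalize(out)

def move_right(belief, shift, p_exact=0.8, p_under=0.1, p_over=0.1):
    # Motion update: shift the whole belief right by `shift` cells.
    # Motion noise (under/overshoot by one cell) spreads the humps out.
    out = []
    for i in range(n):
        out.append(p_exact * belief[(i - shift) % n]
                   + p_under * belief[(i - shift + 1) % n]
                   + p_over * belief[(i - shift - 1) % n])
    return out

belief = [1.0 / n] * n           # "maximum confusion": uniform prior
belief = sense(belief, True)     # first door reading -> three humps
belief = move_right(belief, 3)   # the humps move right and spread out
belief = sense(belief, True)     # second reading reinforces cells matching the map
```

The humps in `belief` are always hypotheses about the robot's position on its map, never about where the doors are.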
BMC Helix Chatbot provides a self-service solution for users to resolve their issues. It provides support by interacting with users through natural language, understanding the conversation context, and performing tasks on behalf of the user.
This section contains information about enhancements in version 19.11 of BMC Helix Chatbot.
Real-time translation of chatbot conversations
To localize chatbot conversations, administrators can enable real-time translation by leveraging Google Cloud Translation Services or Microsoft Translator Speech API. Enabling real-time translation provides the following benefits:
- Support for a larger number of locales when using real-time translation.
- Eliminate the need to create multiple Chatbot Skills for each language. If you have localized Chatbot Skills, you can continue using them. To support new locales, you can enable real-time translation.
- Select chatbots for which you want to enable real-time translation.
- Easily test real-time translation for chatbot before implementing it in the production environment.
For more information, see Localizing chatbot conversations by using real-time translation.
Ability to transfer conversations to another chatbot
When you have multiple chatbots for each line-of-business or locale, you can enable automatic transfer of conversations between chatbots, resulting in quicker resolution of issues for the end users.
The following image is an example of how a conversation can be transferred from the IT Chatbot to the HR Chatbot:
Administrators must train the IBM Watson Assistant to enable transfer between chatbots.
Transferring a conversation between chatbots provides the following benefits:
- End users do not have to switch to a different chatbot UI.
- End users do not have to repeat the messages that they sent to the original chatbot.
- When transferred to a support agent, the agent receives a transcript of the chat conversation from the latest chatbot.
For more information, see Configuring the dialog nodes to enable transfer between chatbots.
Support for Automation Anywhere Robotic Process Automation (RPA)
Automation Anywhere Robotic Process Automation (RPA) enables you to create your own bots to automate any business process. As a developer or an application business analyst, you can leverage Automation Anywhere RPA by using the Automation Anywhere connector to automate any task within your application or business process.
The following image illustrates the workflow between the chatbot, the end user, and the IT Service Management application to reset a password by using Automation Anywhere RPA:
The Automation Anywhere RPA provides the following capabilities:
- Reduces the operating costs.
- Scales on demand and increases business agility.
- Requires no changes for existing business processes while implementing Automation Anywhere RPA.
- Handles the manual, repetitive processes.
- Provides accurate and error-free execution.
For more information, see .
Automatic mapping of changes to BMC Helix Digital Workplace Catalog services in BMC Helix Chatbot
End users can leverage services in BMC Helix Digital Workplace Catalog to create service requests from BMC Helix Chatbot. When services are updated and re-imported from BMC Helix Digital Workplace Catalog, administrators no longer need to map the services again; the changes are automatically updated in BMC Helix Chatbot without any additional steps.
For more information, see Importing chat-enabled services from BMC Helix Digital Workplace Advanced.
Improvements to chatbot greetings and rating scale order
Administrators can now configure the following chatbot features in the BMC Helix Chatbot UI:
- Chatbot feedback rating scale order—Administrators can reverse the default order from ascending to descending, as shown in the following image:
- User name display—Administrators can configure BMC Helix Chatbot so that after an end-user logs in to BMC Helix Chatbot, the chatbot can greet the user by their first name, such as Hello, Britney! The default is the first name and last name.
For more information, see Setting up chatbots for your line of business.
- Show All menu—The Show More menu has been renamed to Show All, as shown in the following image:
India Plays Central Role in Prometric’s Global Test Development Solutions
As a central hub in Prometric’s global Test Development Solutions (TDS) business, Prometric’s Gurgaon operation streamlines workflow from India to Ireland, the United States and Japan.
Prometric’s business model leverages a cycle of innovation and technology automation tools to continually reach for higher quality and cost savings. Prometric utilises proven methodologies to define what needs to be measured, and works with qualified subject matter experts to develop and review test content delivered to candidates in a fair, flexible and secure manner.
For example, since 2009, Prometric has worked in close collaboration with the Indian Institutes of Management (IIMs) to develop a psychometrically sound and fair Common Admission Test (CAT). The test design, structure and item (question and answer) profiles of the CAT are first defined by the IIMs. Prometric then coordinates with IIM professors and external subject matter experts to author test items based on these guidelines. The test items are reviewed and edited thoroughly with IIM professors to ensure fairness, clarity, completeness and accuracy before they are placed into exam forms (versions of the test). At this stage, Prometric and the IIMs review and approve the forms for content balance before final release to test publishing. Like other world-class management and Master's-level entrance exams, the CAT utilises multiple exam forms. To ensure fairness and equivalency across the forms, Prometric employs an industry-standard psychometric process called "equating".
“Prometric works closely with its clients in India and abroad to develop and deliver fair and secure exams,” said Soumitra Roy, Managing Director, Prometric India. “Our methodologies provide the highest level of accuracy in testing, scoring and reporting in the industry, but we are never satisfied.”
Prometric continually invests in the expansion of its test development capabilities to provide better, faster and more cost effective solutions to its clients globally. Through its global, 20-hour test development operation, Prometric helps many of the world’s most recognised companies, governments and organisations develop tests that accurately measure the knowledge and skills of millions of test-takers around the world.
Prometric Testing Private Limited (Prometric India) is a wholly-owned subsidiary of Prometric Inc. and has been present in India since 1997. Our operations span nine locations and employ over 200 people. Prometric India delivers over 650,000 exams annually through a network of close to 450 testing centres. India represents a core part of Prometric’s growth strategy, both as a regional operations location and as a market for its globally distributed testing services.
Prometric, a wholly-owned subsidiary of ETS, is a trusted provider of technology-enabled testing and assessment. As the global standard in professional competency measurement, Prometric reliably delivers 10 million tests per year on behalf of 400 clients in the academic, financial, government, healthcare, professional, corporate and information technology markets. Through innovation, workflow automation and standardisation, Prometric achieves customer-inspired advances that are better, faster and at less cost for its clients, helping to put the right people in the right jobs at the right time. Prometric delivers tests flexibly via the Web or by utilising a robust network of more than 8,000 test centres in more than 160 countries.
For more information, please visit www.prometric.com.
I am starting out with audio recording using my Android smartphone.
I successfully saved voice recordings to a PCM file. When I parse the data and print out the signed, 16-bit values, I can create a graph like the one below. However, I do not understand the amplitude values along the y-axis.
What exactly are the units for the amplitude values? The values are signed 16-bit, so they must range from -32K to +32K. But what do these values represent? Decibels?
If I use 8-bit values, then the values must range from -128 to +127. How would that get mapped to the volume/"loudness" of the 16-bit values? Would you just use a 16-to-1 quantisation mapping?
Why are there negative values? I would think that complete silence would result in values of 0.
If someone can point me to a website with information on what’s being recorded, I would appreciate it. I found webpages on the PCM file format, but not what the data values are.
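For reference, a raw capture like this can be parsed in a few lines. The sketch below assumes little-endian, signed 16-bit, mono samples (the usual Android AudioRecord output); the file path is whatever you saved:

```python
import struct

def read_pcm16(path):
    """Read a raw little-endian signed 16-bit mono PCM file into a list of ints."""
    with open(path, "rb") as f:
        data = f.read()
    count = len(data) // 2  # 2 bytes per sample
    return list(struct.unpack("<%dh" % count, data[:count * 2]))
```

Each value returned is one sample in the range -32768..32767, which is exactly what the graph on the y-axis is plotting.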
Think of the surface of the microphone. When it's silent, the surface is motionless at position zero. When you talk, you cause the air around your mouth to vibrate. Vibrations are spring-like, with movement in both directions: back and forth, up and down, or in and out. The vibrations in the air cause the microphone surface to vibrate as well, moving up and down. When it moves down, that might be sampled as a positive value; when it moves up, that might be sampled as a negative value (or it could be the opposite). When you stop talking, the surface settles back to the zero position.
What numbers you get from your PCM recording data depend on the gain of the system. With common 16 bit samples, the range is from -32768 to 32767 for the largest possible excursion of a vibration that can be recorded without distortion, clipping or overflow. Usually the gain is set a bit lower so that the maximum values aren’t right on the edge of distortion.
8-bit PCM audio is often an unsigned data type, with the range from 0..255, with a value of 128 indicating “silence”. So you have to add/subtract this bias, as well as scale by about 256 to convert between 8-bit and 16-bit audio PCM waveforms.
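The bias-and-scale conversion just described can be written out explicitly. This is only a rough sketch; real converters would add dithering and more careful rounding, which this skips:

```python
def u8_to_s16(sample_u8):
    """Convert one unsigned 8-bit PCM sample (0..255, 128 = silence)
    to signed 16-bit (-32768..32767): remove the bias, then scale by 256."""
    return (sample_u8 - 128) * 256

def s16_to_u8(sample_s16):
    """Convert one signed 16-bit PCM sample back to unsigned 8-bit,
    discarding the low 8 bits of precision."""
    return (sample_s16 // 256) + 128
```

So 8-bit silence (128) maps to 16-bit 0, and the 8 low-order bits lost on the way back down are the reason 8-bit audio sounds noisy.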
The raw numbers are an artefact of the quantization process used to convert an analog audio signal into digital. It makes more sense to think of an audio signal as a vibration around 0, extending as far as +1 and -1 for maximum excursion of the signal. Outside that, you get clipping, which distorts the harmonics and sounds terrible.
However, computers don't work all that well with fractions, so the range is mapped onto 65,536 discrete integer steps (for signed 16-bit samples, -32768 to 32767). In most applications like this, +32767 is considered the maximum positive excursion of the microphone's or speaker's diaphragm. There is no correlation between a sample point and a sound pressure level, unless you start factoring in the characteristics of the recording (or playback) circuits.
(BTW, 16-bit audio is very standard and widely used. It is a good balance of signal-to-noise ratio and dynamic range. 8-bit is noisy unless you do some funky non-standard scaling.)
Why are there negative values? I would think that complete silence would result in values of 0.
The diaphragm on a microphone vibrates in both directions and as a result creates positive and negative voltages. A value of 0 is silence as it indicates that the diaphragm is not moving. See how microphones work.
Small clarification: The position of the diaphragm is being recorded. Silence occurs when there is no vibration, when there is no change in position. So the vibration you are seeing is what is pushing the air and creating changes in air pressure over time. The air is no longer being pushed at the top and bottom peaks of any vibration, so the peaks are when silence occurs. The loudest part of the signal is when the position changes the fastest which is somewhere in the middle of the peaks. The speed with which the diaphragm moves from one peak to another determines the amount of pressure that’s generated by the diaphragm. When the top and bottom peaks are reduced to zero (or some other number they share) then there is no vibration and no sound at all. Also as the diaphragm slows down so that there’s a greater space of time between peaks, there is less sound pressure being generated or recorded.
I recommend the Yamaha Sound Reinforcement Handbook for more in depth reading. Understanding the idea of calculus would help the understanding of audio and vibration as well.
The 16bit numbers are the A/D convertor values from your microphone (you knew this). Know also that the amplifier between your microphone and the A/D convertor has an Automatic Gain Control (AGC). The AGC will actively change the amplification of the microphone signal to prevent too much voltage from hitting the A/D convertor (usually < 2Volts dc). Also, there is DC voltage de-coupling which sets the input signal in the middle of the A/D convertor’s range (say 1Volt dc).
So, when there is no sound hitting the microphone, the AGC amplifier is sending a flat-line 1.0 Vdc signal to the A/D convertor. When sound waves hit the microphone, they create a corresponding AC voltage wave. The AGC amp takes the AC voltage wave, centers it at 1.0 Vdc, and sends it to the A/D convertor. The A/D samples (measures the DC voltage at, say, 44,000 times per second), and spits out the signed 16-bit values of the voltage. So -32768 = 0.0 Vdc and +32767 ≈ 2.0 Vdc. A value of +100 ≈ 1.00305 Vdc and -100 ≈ 0.99695 Vdc hitting the A/D convertor.
+Values are above 1.0 Vdc, -Values are below 1.0 Vdc.
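Put as a formula, this mapping is linear. The 0–2 V range and 1.0 V midpoint are the illustrative figures from this answer (real hardware varies); a signed 16-bit sample spans -32768..32767, so one count is about 2/65536 ≈ 30.5 µV:

```python
FULL_SCALE_V = 2.0        # illustrative ADC input range from the answer above
MID_V = FULL_SCALE_V / 2  # silence sits at the midpoint, 1.0 Vdc

def sample_to_volts(s16):
    """Map a signed 16-bit PCM sample linearly onto the 0..2 V ADC input range."""
    return MID_V + (s16 / 32768.0) * MID_V
```

With this mapping, sample 0 is 1.0 Vdc (silence), -32768 is 0.0 Vdc, and +100 comes out to about 1.003 Vdc.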
Note, most audio systems use a log formula to curve the audio wave logarithmically, so a human ear can better hear it. In digital audio systems (with ADCs), Digital Signal Processing puts this curve on the signal. DSP chips are big business; TI has made a fortune using them for all kinds of applications, not just audio processing. DSPs can apply very complicated math to a real-time stream of data that would choke an iPhone's ARM7 processor. Say you are sending 2 MHz pulses to an array of 256 ultrasound sensors/receivers; you get the idea.
Lots of good answers here, but they don’t directly address your questions in an easy to read way.
What exactly are the units for the amplitude values? The values are signed 16-bit, so they must range from -32K to +32K. But what do these values represent? Decibels?
The values have no unit. They simply represent a number that has come out of an analog-to-digital converter. The numbers from the A/D converter are a function of the microphone and pre-amplifier characteristics.
If I use 8-bit values, then the values must range from -128 to +127. How would that get mapped to the volume/"loudness" of the 16-bit values? Would you just use a 16-to-1 quantisation mapping?
I don’t understand this question. If you are recording 8-bit audio, your values will be 8-bits. Are you converting 8-bit audio to 16-bit?
Why are there negative values? I would think that complete silence would result in values of 0.
The diaphragm on a microphone vibrates in both directions and as a result creates positive and negative voltages. A value of 0 is silence as it indicates that the diaphragm is not moving. See how microphones work
For more details on how sound is represented digitally, see here.
package uk.co.benjaminelliott.spectrogramandroid.storage;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import uk.co.benjaminelliott.spectrogramandroid.preferences.DynamicAudioConfig;
import android.graphics.Bitmap;
import android.location.Location;
import android.media.ExifInterface;
import android.os.Environment;
import android.util.Log;
/**
* Class that helps in writing a capture to disk by saving the bitmap as a geotagged JPEG and the audio as
* a WAV, as well as serialising all of the captured data as a {@link CapturedBitmapAudio} object.
* @author Ben
*
*/
public class AudioBitmapConverter {
private static final String TAG = "AudioBitmapConverter";
private final double decLatitude;
private final double decLongitude;
private final String filename;
private final int[] bitmapAsIntArray;
private final byte[] wavAudio;
private final int width;
private final int height;
private CapturedBitmapAudio cba;
private Bitmap bitmap;
public AudioBitmapConverter(String filename, DynamicAudioConfig dac, Bitmap bitmap, short[] rawWavAudio, Location loc) {
this.filename = filename;
this.bitmap = bitmap;
if (loc != null) {
decLatitude = loc.getLatitude();
decLongitude = loc.getLongitude();
} else {
decLatitude = 0;
decLongitude = 0;
}
wavAudio = WavUtils.wavFromAudio(rawWavAudio, dac.SAMPLE_RATE);
width = bitmap.getWidth();
height = bitmap.getHeight();
bitmapAsIntArray = new int[width * height];
bitmap.getPixels(bitmapAsIntArray, 0, width, 0, 0, width, height);
cba = new CapturedBitmapAudio(filename, bitmapAsIntArray, wavAudio, width, height, decLatitude, decLongitude);
}
/**
* Write the bitmap to a JPEG file and geotag it, then write the audio to a WAV file.
*/
public void storeJPEGandWAV() {
writeBitmapToJpegFile(bitmap, filename);
geotagJpeg(filename, decLatitude, decLongitude);
writeWavToFile(wavAudio, filename);
}
/**
* Creates a JPEG from the provided bitmap and saves it under the provided filename.
* @param bitmap - the bitmap to use to create the JPEG
* @param filename - the name of the file under which it should be stored
*/
private static void writeBitmapToJpegFile(Bitmap bitmap, String filename) {
if (isExternalStorageWritable()) {
File dir = getAlbumStorageDir(DynamicAudioConfig.STORE_DIR_NAME);
FileOutputStream fos = null;
try {
// keep incrementing filename until one is found that does not clash:
int suffix = 0;
File bmpFile = new File(dir.getAbsolutePath()+"/"+filename+".jpg");
while (bmpFile.exists()) {
bmpFile = new File(dir.getAbsolutePath()+"/"+filename+"_"+suffix+".jpg");
suffix++;
}
fos = new FileOutputStream(bmpFile);
// save the bitmap into the file:
bitmap.compress(Bitmap.CompressFormat.JPEG, DynamicAudioConfig.BITMAP_STORE_QUALITY, fos);
} catch (FileNotFoundException e) {
Log.e(TAG,"Unable to create file",e);
} finally {
// fos may still be null if FileOutputStream construction failed:
if (fos != null) {
try {
fos.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
else
Log.e(TAG,"External storage is not writable.");
}
/**
* Geotag the JPEG at the specified filename with the specified latitude and longitude
* @param filename - the filename of the file in the captures directory to geotag
* @param decLatitude - the latitude (in decimal degrees)
* @param decLongitude - the longitude (in decimal degrees)
*/
private static void geotagJpeg(String filename, double decLatitude, double decLongitude) {
File dir = getAlbumStorageDir(DynamicAudioConfig.STORE_DIR_NAME);
String jpegFilepath = dir.getAbsolutePath()+"/"+filename+".jpg";
try {
ExifInterface exif = new ExifInterface(jpegFilepath);
// add latitude and longitude to JPEG's EXIF:
// latitude in degrees-minutes-seconds format:
exif.setAttribute(ExifInterface.TAG_GPS_LATITUDE, convertDecToDMS(decLatitude));
// North or South:
exif.setAttribute(ExifInterface.TAG_GPS_LATITUDE_REF, latitudeRef(decLatitude));
// longitude in degrees-minutes-seconds format:
exif.setAttribute(ExifInterface.TAG_GPS_LONGITUDE, convertDecToDMS(decLongitude));
// East or West:
exif.setAttribute(ExifInterface.TAG_GPS_LONGITUDE_REF, longitudeRef(decLongitude));
exif.saveAttributes();
} catch (IOException e) {
Log.e(TAG,"Error finding JPEG file for tagging: "+jpegFilepath);
e.printStackTrace();
}
}
/**
* Write the supplied data (header and samples) to a WAV file.
* @param data - the data to write
* @param filename - the filename under which the audio should be stored
*/
private static void writeWavToFile(byte[] data, String filename) {
FileOutputStream fos = null;
if (isExternalStorageWritable()) {
File dir = getAlbumStorageDir(DynamicAudioConfig.STORE_DIR_NAME);
try {
File audioFile = new File(dir.getAbsolutePath()+"/"+filename+".wav");
int suffix = 0;
while (audioFile.exists()) {
audioFile = new File(dir.getAbsolutePath()+"/"+filename+"_"+suffix+".wav");
suffix++;
}
fos = new FileOutputStream(audioFile);
fos.write(data);
Log.d(TAG,"Audio file stored successfully at path "+audioFile.getAbsolutePath());
} catch (FileNotFoundException e) {
Log.d(TAG,"Unable to save audio file: "+dir.getAbsolutePath()+"/"+filename);
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} finally {
// fos may still be null if the file could not be opened:
if (fos != null) {
try {
fos.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
}
/**
* Return a "S" if latitude is south or "N" if north.
* @param latitude
* @return
*/
public static String latitudeRef(double latitude) {
return (latitude < 0) ? "S" : "N";
}
/**
* Return a "W" if longitude is west or "E" if east.
* @param longitude
* @return
*/
public static String longitudeRef(double longitude) {
return (longitude < 0) ? "W" : "E";
}
/**
* Convert latitude or longitude expressed in degrees, to a degrees-minutes-seconds string.
*
* INSPIRED BY http://stackoverflow.com/questions/5280479/how-to-save-gps-coordinates-in-exif-data-on-android
* @param decDegreeCoord
* @return
*/
public static String convertDecToDMS(double decDegreeCoord) {
//decimal degree coordinate could be latitude or longitude.
// see http://en.wikipedia.org/wiki/Geographic_coordinate_conversion#Conversion_from_Decimal_Degree_to_DMS
if (decDegreeCoord < 0) decDegreeCoord = -decDegreeCoord;
//Degrees = whole number portion of coordinate
int degrees = (int) decDegreeCoord;
//Minutes = whole number portion of (remainder*60)
decDegreeCoord -= degrees;
decDegreeCoord *= 60;
int minutes = (int) decDegreeCoord;
//Seconds = whole number portion of (remainder*60)
decDegreeCoord -= minutes;
decDegreeCoord *= 60;
int seconds = (int) (decDegreeCoord*1000); // convention is deg/1, min/1, sec/1000
return degrees+"/1,"+minutes+"/1,"+seconds+"/1000";
}
// --------------------- FROM ANDROID DEV DOCUMENTATION
public static boolean isExternalStorageWritable() {
String state = Environment.getExternalStorageState();
return Environment.MEDIA_MOUNTED.equals(state);
}
public static File getAlbumStorageDir(String albumName) {
// Get the directory for the user's public pictures directory.
File file = new File(Environment.getExternalStoragePublicDirectory(
Environment.DIRECTORY_PICTURES), albumName);
// mkdirs() returns false when the directory already exists, so also check isDirectory():
if (!file.mkdirs() && !file.isDirectory()) {
Log.e(TAG, "Directory not created");
}
return file;
}
// ---------------------
/**
* Returns the provided bitmap's pixels as a new integer array.
* @param bitmapAsIntArray
* @param width
* @param height
* @return
*/
public static int[] getBitmapPixels(int[] bitmapAsIntArray, int width, int height) {
int[] ret = new int[width*height];
for (int i = 0; i < bitmapAsIntArray.length; i++) {
ret[i] = bitmapAsIntArray[i];
}
return ret;
}
/**
* Serialise the user's capture to file using a {@link CapturedBitmapAudio} object.
* @param cba
* @param filename
* @param directory
*/
private static void writeCbaToFile(CapturedBitmapAudio cba, String filename, String directory) {
if (AudioBitmapConverter.isExternalStorageWritable()) {
File dir = AudioBitmapConverter.getAlbumStorageDir(directory);
FileOutputStream fos = null;
ObjectOutputStream oos = null;
try {
// keep incrementing file suffix until file does not clash:
int suffix = 0;
File cbaFile = new File(dir.getAbsolutePath()+"/"+filename+CapturedBitmapAudio.EXTENSION);
while (cbaFile.exists()) {
cbaFile = new File(dir.getAbsolutePath()+"/"+filename+"_"+suffix+".cba");
suffix++;
}
fos = new FileOutputStream(cbaFile);
oos = new ObjectOutputStream(fos);
oos.writeObject(cba);
} catch (FileNotFoundException e) {
Log.e(TAG,"Unable to write to file: "+dir.getAbsolutePath()+"/"+filename, e);
} catch (IOException e) {
Log.e(TAG,"Unable to write to file: "+dir.getAbsolutePath()+"/"+filename, e);
} finally {
// close the streams if they were opened (closing oos also closes the wrapped fos):
if (oos != null) {
try {
oos.close();
} catch (IOException e) {
Log.e(TAG,"Error when closing object output stream");
}
} else if (fos != null) {
try {
fos.close();
} catch (IOException e) {
Log.e(TAG,"Error when closing file output stream");
}
}
}
}
}
public void writeThisCbaToFile(String filename, String directory) {
writeCbaToFile(cba, filename, directory);
}
public int getWidth() {
return width;
}
public int getHeight() {
return height;
}
public CapturedBitmapAudio getCBA() {
return cba;
}
}
Version 1.3 of WooCommerce User Role Pricing adds full support for WPML/WCML, including multi-currency support and optimizations. If you are not using the Multi-Currency feature, there is nothing extra you need to configure in the settings. However, even if you don’t use Multi-Currency, you should still read over the sections below to ensure that your custom prices get properly configured for every translation of your products, and that custom text strings are also translated for each language.
Full multi-currency support, including manual currency prices, is only possible with the new filter hooks added in WCML version 3.8, so you need to be running at least that version to use Multi-Currency with custom prices. Version 1.3 of User Role Pricing was created and tested using versions 3.4.1 of WPML Multilingual CMS, 3.8.2 of WooCommerce Multilingual, 2.3.9 of WPML String Translation, 2.2.1 of WPML Translation Management, and 2.1.22 of WPML Media.
When WPML/WCML is active on your site, there will be a new tab “WPML Integration” on the WooCommerce>User Role Pricing page.
If you have Multi-Currency enabled, there will be a check box “Save min/max currency prices”. This is enabled by default and will save the min & max variation custom prices for each active currency in the post meta table for the Variable parent product. If you have a lot of Variable products displayed in your shop, this may help speed up the page load time as the displayed price range will be shown by getting those saved min/max values from the database, as opposed to looping through every child variation of the Variable product, calculating the custom price for each, and determining the min and max. If you disable this feature, custom price min/max ranges for Variable products will always be calculated by looping through all variations and calculating the price for each.
After Multi-Currency has been enabled and you have added the additional currencies you want to use, you should run the Sync Variable Products function again so that default values can be set for every currency. This should be done whether or not you are using saved min/max currency prices. After that, Variable products will automatically sync (individually) any time that you edit them.
You shouldn’t have to change anything for User Role Pricing to work with product translations. The product meta fields for each custom price you create are set to be automatically copied to translations.
However, you need to make sure to create and translate the products the correct way for this to work. I strongly suggest that you use the WPML recommended way of using their own “WPML Translation Editor” to translate products, instead of using the “Native WooCommerce product editing screen” to edit your translations. In short, you should always duplicate your products after you create the original product in the main site language and edit the text translations using the product WPML Translation editor. Alternately, you can go directly to the Translation Editor and create translations of existing products there. Using one of those two ways, the prices and all other product fields set to “copy” will always be copied to all translations, and you won’t have to worry about prices getting out of sync between translations. If you ever need to change prices, you use the regular WooCommerce product editor, while your admin area is set to the site default language, and then just change the prices on only that version of the product. When you hit update, those prices will be copied to any other language versions.
To summarize, if you are creating a new product, make sure you are creating it in the default language of your site. After you enter all the info and publish the product, you have two ways to create the translations. Right after the product is published, you will see some new options in the Language meta box above the Publish box. You can either check the boxes for the “Duplicate” option, which will simply duplicate the product with the default language text to each translated language (you can edit the text for the other translations later), or you can click on the + signs to create the translation and open the Translation Editor for that product to edit the text right then. The third option is to do nothing, which will then leave that product in only the default language, and you can come back to it later to create the translations.
Unless you have a very strong reason to, it is NOT recommended to use the Native WooCommerce product editing screen and then create an independent product copy for each language (you can do that if you are editing the product in a non-default language and you set the “This is a translation of” select box to “none”). If you do that and save the product, then that particular language product becomes its own separate product, with its own independent price fields (the fields will become unlocked after you publish it). At that point, you will need to remember that your different language versions of that product are no longer synchronized, so if you change the prices of one language, you need to remember to change the prices for the other language versions as well.
If you are using at least version 3.8 of the WooCommerce Multilingual plugin, then your custom prices will work fully with Multi-Currency, with either automatic conversions, or manual setting of prices for each currency.
If you already have custom prices entered for your products in the default language, and you enable multi-currency mode and add some additional currencies, then by default, all prices, including custom prices, will be automatically converted to the other currencies based on the currency exchange rates you set when you set up the additional currencies. There is nothing additional you need to do for this to work. However, if you have Variable products, you should run the Sync Variable Products function again so that the min/max variation prices can be calculated for each variable product for the shop price range display (they will be calculated with the exchange rates you set). Alternately, you can disable the saving of min/max variation prices for each currency (as described above), and those price ranges will be re-calculated on every page load.
If you don’t want to use automatic currency conversion, you can manually enter prices for each product or variation in each currency. WCML adds a couple of radio buttons on the product/variation editor pages where you can set if prices are calculated automatically, or entered manually. This is set on a per product, or per variation, level, so you can create manual prices for just some products or variations, but still use automatic conversion for others. If you set it to manual, you will then be presented with a set of price fields for each additional currency, including all of your custom price fields. For each currency, you can manually enter your custom prices, or leave them blank. If you leave them blank for one or more currencies, then the custom prices for each blank currency will be automatically calculated based on the exchange rate you entered and the specific custom price you set for the default currency. Thus, “manual” currency prices only apply if you actually enter a custom price for that specific currency, otherwise it will still be calculated automatically. In other words, it’s not possible to not have a custom price for only some currencies… it’s all or nothing.
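The fallback rule described above amounts to a small piece of logic. The sketch below is only an illustration of that rule; the function and field names are made up for clarity and are not actual WCML API:

```python
def resolve_custom_price(default_price, manual_prices, rates, currency):
    """Illustrative sketch of the resolution rule described above:
    use the manually entered price for `currency` if one was set,
    otherwise convert the default-currency price by the exchange rate."""
    manual = manual_prices.get(currency)
    if manual is not None and manual != "":
        return manual
    if default_price is None:
        return None  # no default-currency custom price -> no price at all
    return round(default_price * rates[currency], 2)
```

So a "manual" currency price only takes effect if a value was actually entered for that specific currency; every blank field silently falls back to automatic conversion.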
The only exception/quirk to the manual prices is that if you leave a custom price blank for the default currency, you can still manually enter custom prices for one or more other currencies. In that case you will only see custom prices (and only be able to add-to-cart) when viewing in one of the currencies where you did enter a price. However, if you have the User Role Pricing plugin set to hide products without a price, then that product will never show up in the shop loop no matter what currency you are currently set to, since the query that filters products without prices runs against the main (default currency) custom price meta field. I’m not sure why you would ever want to set up any products that way, but it is possible.
Translation of custom Messages and Text
As of version 1.3, User Role Pricing has two custom text strings that can be defined by admin on the General section page of the User Role Pricing settings page. These will need translation for other languages. These custom strings are registered through the plugin so that they can be translated easily from the “String Translation” page of WPML. Note that if you leave some of those fields blank in the User Role Pricing settings, those fields will NOT show up on the String Translation.
The easiest way to find and translate those custom strings from the WPML String Translation page is to select “WooCommerce User Role Pricing” from the “Select strings within domain:” select box and hit the search button. You can then edit each string translation.
As organizations strive for digital transformation, they are in reality seeking to reinvent their business, modernize their processes and push the boundaries of existing IT infrastructure. To address that last point, we frequently see customers exploring alternatives to CPU-centric system architectures, where software running on a central CPU directly controls a set of hardware peripherals that offer static functions and/or acceleration capabilities. Supporting this trend is a new class of devices, evolved from SmartNICs but lacking cohesive standards even when it comes to naming: NVIDIA and Marvell call their offerings Data Processing Units (DPUs), while Intel refers to their technologies as Infrastructure Processing Units (IPUs).
Red Hat firmly believes that this transformational change in compute architecture is only possible through open source and broad partner collaboration. No individual company or a “walled garden” approach could steer these efforts in the right direction, essentially requiring the creation of an open source-based architecture and the associated ecosystem behind it. Today, as a culmination of our efforts to structure a community around these new technologies, we are announcing that Red Hat is joining the Open Programmable Infrastructure (OPI) Project under The Linux Foundation, as a founding Premier member.
We believe there is an exciting opportunity to develop a fully programmable open infrastructure model across software stacks and DPU/IPU-like hardware devices. Red Hat has been driving the formation of this community for almost a year, closely collaborating with several industry partners and customers, such as Dell Technologies, F5, Intel, Keysight Technologies, Marvell, NVIDIA, and others. The OPI Project aims to explore and expand the concept of programmable infrastructure using community-driven, standards-based, open ecosystem for next generation architectures and frameworks based on DPU/IPU-like technologies.
The common trait for the emerging generation of DPU/IPU devices is that they employ an easily programmable multi-core CPU, a state-of-the-art network interface(s), and a powerful set of networking, storage and security accelerators that can be programmed to perform multiple software-defined, hardware-accelerated functions.
DPU/IPU-like devices provide a platform to enable a broad range of services across management, network and storage domains, and represent the first wave of devices supporting a programmable infrastructure concept, where intelligent subsystems are matched to hardware components and orchestrated using Kubernetes-like patterns. As this segment matures, we are likely to see additional form factors, more evolved architectural implementations and new use cases.
Customers and partners alike value choice in their technology implementations, and choice is a key benefit of open source solutions. They need well-balanced, optimized infrastructure in their hybrid cloud implementations, rivaling what hyperscalers such as Amazon, with their proprietary Nitro architecture, offer. This new class of DPU/IPU-like devices, in combination with open source software like general-purpose Linux operating systems and cloud-native Kubernetes, democratizes access to disaggregated composable systems and programmable infrastructure.
Given Red Hat’s role in delivering open source-based enterprise-class software that brings greater consistency across multiple hardware implementations, we are working closely with partners and customers to help this new class of devices adhere to key ecosystem standards. Standardization is foundational to the success of this vibrant ecosystem, and that necessitated the establishment of the OPI project, which aims to provide clear guidelines for defining the applicable category of devices, create a vendor-agnostic framework and architecture for DPU/IPU-based software stacks, define or re-use a set of common APIs, and provide implementation examples to validate those architectures and APIs with adequate performance levels.
However, we cannot do this alone. While the OPI project is still in its infancy, it needs your help in establishing common frameworks and architectures that are applicable to any vendor’s hardware solution for this new ecosystem.
Whether you are a customer who wants to easily integrate Red Hat’s software and standardize infrastructure across various hardware architectures and solutions, or a partner looking to develop software solutions that will run on a multitude of hardware implementations, we encourage you to participate in the OPI Project.
Kris Murphy is a Senior Principal Software Engineer in Red Hat’s Office of the CTO leading the Emerging Tech Computational Infrastructure team. Kris' team focuses on emerging hardware and architectures that may influence Red Hat's future products. Current focus areas include ARM based edge devices, DPUs/IPUs, and composable compute. The team works closely with Red Hat product teams, open source communities, partners and customers.
+, -, *, /, ^, % are the usual signs for addition, subtraction, multiplication, division, raising to a power and modulo. The precedence is like that used in common mathematics (* binds stronger than + etc.), but you can change this behaviour with parentheses: 2^(1/12) returns 2 raised to the power 1/12 (= the 12th root of 2), while 2^1/12 returns 2 raised to the power 1, with the result then divided by 12.
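For readers who want to check the arithmetic, here is the same pair of expressions in Python (which writes exponentiation as ** rather than ^); the precedence behaves the same way:

```python
twelfth_root = 2 ** (1 / 12)   # like 2^(1/12): the 12th root of 2
divided      = 2 ** 1 / 12     # like 2^1/12: (2^1) / 12

print(round(twelfth_root, 6))  # 1.059463
print(round(divided, 6))       # 0.166667
```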
abs(x) returns the absolute value of a number.
&& and || are the symbols for logical "and" and "or". Note that here, too, you can use parentheses to define the precedence, for instance: if (ival1 < 10 && ival2 > 5) || (ival1 > 20 && ival2 < 0) then ...
! is the symbol for logical "not". For example: if (kx != 2) then ... would serve a conditional branch if variable kx was not equal to '2'.
cpsmidi converts a MIDI note number from a triggered instrument to the frequency in Hertz.
cpsmidinn does the same for any input values (i- or k-rate).
Other opcodes convert to Csound's pitch- or octave-class system. They can be found here.
Csound has no opcode of its own for converting a frequency to a MIDI note number, because this is a rather simple calculation. You can find a User Defined Opcode for rounding to the nearest possible MIDI note number, or for the exact translation to a MIDI note number with a cent value as the fractional part.
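As a reference for the calculation such a UDO performs, here is a sketch in Python (not Csound code), using the standard MIDI convention that note 69 = A4 = 440 Hz:

```python
import math

def freq_to_midi(freq, round_to_nearest=True):
    """Frequency in Hz -> MIDI note number (69 = A4 = 440 Hz).

    With round_to_nearest=False, the fractional part of the result
    expresses the deviation in semitones (multiply by 100 for cents).
    """
    midi = 69 + 12 * math.log2(freq / 440.0)
    return round(midi) if round_to_nearest else midi

print(freq_to_midi(440))     # 69 (A4)
print(freq_to_midi(261.63))  # 60 (middle C)
```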
cent converts a cent value to a multiplier. For instance, cent(1200) returns 2, cent(100) returns 1.059463. If you multiply this by the frequency you are referring to, you get the frequency of the note which corresponds to the cent interval.
ampdb returns the amplitude equivalent of the dB value. ampdb(0) returns 1, ampdb(-6) returns 0.501187, and so on.
ampdbfs returns the amplitude equivalent of the dB value, according to what has been set as 0dbfs (1 is recommended, the default is 15bit = 32768). So ampdbfs(-6) returns 0.501187 for 0dbfs=1, but 16422.904297 for 0dbfs=32768.
dbamp returns the decibel equivalent of the amplitude value, where an amplitude of 1 is the maximum. So dbamp(1) -> 0 and dbamp(0.5) -> -6.020600.
dbfsamp returns the decibel equivalent of the amplitude value relative to what has been set by the 0dbfs statement. So dbfsamp(10) is 20.000002 for 0dbfs=1, but -70.308998 for 0dbfs=32768.
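The dB/amplitude formulas behind these converters are simple enough to write out. Here is a Python sketch of the math (not Csound code):

```python
import math

def ampdb(db):
    """dB value -> amplitude, with 0 dB = 1 (like Csound's ampdb)."""
    return 10 ** (db / 20.0)

def dbamp(amp):
    """Amplitude -> dB, the inverse of ampdb (like Csound's dbamp)."""
    return 20.0 * math.log10(amp)

def ampdbfs(db, zerodbfs=1.0):
    """dB value -> amplitude relative to the 0dbfs setting."""
    return zerodbfs * 10 ** (db / 20.0)

print(round(ampdb(-6), 6))           # 0.501187
print(round(dbamp(0.5), 4))          # -6.0206
print(round(ampdbfs(-6, 32768), 1))  # 16422.9
```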
Scaling of signals from an input range to an output range, like the "scale" object in Max/MSP, is not implemented in Csound, because it is a rather simple calculation. It is available as User Defined Opcode: Scali (i-rate), Scalk (k-rate) or Scala (a-rate).
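The calculation underlying such a scale UDO is just a linear mapping. A Python sketch of the math:

```python
def scale(value, in_min, in_max, out_min, out_max):
    """Linearly map value from [in_min, in_max] to [out_min, out_max],
    like the 'scale' object in Max/MSP."""
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

print(scale(0.5, 0, 1, 100, 200))   # 150.0
print(scale(64, 0, 127, 0.0, 1.0))  # ~0.504 (e.g. MIDI controller -> 0..1)
```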
pyinit initializes the Python interpreter.
pyrun runs a Python statement or block of statements.
pyexec executes a script from a file at k-time, i-time or if a trigger has been received.
pycall invokes the specified Python callable at k-time or i-time.
pyeval evaluates a generic Python expression and stores the result in a Csound k- or i-variable, with optional trigger.
pyassign assigns the value of the given Csound variable to a Python variable possibly destroying its previous content.
getcfg returns various Csound configuration settings as a string at init time.
dssiinit loads a plugin.
dssiactivate activates or deactivates a plugin if it has this facility.
dssilist lists all available plugins found in the LADSPA_PATH and DSSI_PATH global variables.
dssiaudio processes audio using a plugin.
dssictls sends control information to a plugin's control port.
vstinit loads a plugin.
vstmidiout sends midi data to a plugin.
vstnote sends a midi note with a definite duration.
vstinfo outputs the parameter and program names for a plugin.
vstbankload loads an .fxb bank.
vstprogset sets the program in a .fxb bank.
vstedit opens the GUI editor for the plugin, when available.
We are very excited to see the community identify useful tooling and applications for the network, and propose solutions that benefit the long-term health of the network. On-chain governance is new for us, and we’re collectively establishing norms with each proposal and vote. That’s why it’s important to have an open discussion on processes and best practices.
Figment has done work on best practices for governance proposals in the Cosmos ecosystem. Having considered recent proposals, here are my suggestions for how to think about spending the community pool.
Observations about proposals:
- Off-chain discussion with numbers: A one-week decision period is too short for voting on proposals that require longer discussion. I think we should have more off-chain / forum discussion before proposals are submitted as network proposals. Once a proposal is posted on the forum, it should include a budget, not just the proposal itself. This allows the community to react and discuss proactively.
- Milestone-based approach: When applying for community funding, I encourage ecosystem participants who are submitting these proposals to define milestones, and I urge that funds be distributed based on completion of those milestones. For example, suppose a proposal has 3 milestones (the last being completion of all tasks); if the proposal is approved, the first payment is made after approval and the other disbursements are conditional on completion of the subsequent milestones. This ensures that teams have an incentive to follow through on their work and that community funds are used properly, while still providing the team an initial budget to start their work. I realize that the current community spending module does not accommodate a milestone-based approach. To overcome this, we can consider two approaches:
- Submissions can be made for small amounts and shorter milestones, with a longer vision, budget and time horizon in mind. For example, I could ask for 100K SCRT for the first milestone of a project for which I have proposed a total budget of 300K SCRT. I would outline what I want to achieve in milestones 2 and 3, but get very specific about milestone 1.
- Funding can go to an independent organization that distributes the funds as milestones are achieved.
- Detailed, time-specific description of milestones / deliverables: I do not see timelines in some of the applications. It would be great to tie deliverables to milestones and set time estimates around those deliverables. Without timelines, it is very hard to assess the success of the process. For example, if a product takes 2 years to build, it’s probably not the best use of community spending at this moment (just an illustrative example).
- Use-of-funds: It would be great to have more information on:
- How will the funds be used?
- How long will the funds last?
- How does the team continue to provide value to the network / maintain the project after the funds are used?
- Type of project and funding mechanism: How do we distinguish between projects that purely serve the community (open-source tools like a testnet) and product / application ideas that are for-profit? While the success of an application is valuable to the entire ecosystem, it’s different from a community tool. Do for-profit entities have a way to contribute back to the community pool? Maybe the secret contract collects funds and some of them are directed back to the community pool. Do we want the community pool to be seed funding for application ideas, or a small grant to get a project going and give the team the opportunity to make progress and raise external funding?
Observations about voting:
- Currently all the votes we have are yes. This may be because participants feel it’s not worthwhile to vote no, or because they don’t feel comfortable saying no in a way that’s visible to the network and could hurt relationships with the proposers. To me, this shows how important it is for us to integrate the Secret Voting module in the future.
- When I am voting, I personally ask whether it is something I would have paid for (at least partially) out of pocket. This is how we should think about these proposals, because the community pool consists of staking rewards that validators are foregoing.
My main feedback is that we need a better process to make sure the community pool is spent in the best way possible. These funds belong to the community, and we should ensure that proposals pass a high standard of assessment before getting approved.
These are community funds and it’s up to the community to decide. I don’t mean to discourage anyone. I am just offering my feedback. Please let me know what you think, if you disagree with me let me know. This is our community and together we will make it stronger!
Does having a trailing slash make a url different than the same url without the trailing slash?
webestate last edited by
Highland last edited by
Google will generally see these as the same but, as Matt Cutts says here, there is a slight preference for the trailing slash
CleverPhD last edited by
My two cents, even though Google "generally" sees them the same, I would not allow this to happen so that I can be as specific as possible for Google.
A separate reason to use slashes at the end of the URL is that this is what tools like Google Analytics use to group and sort traffic results. There are default drill down reports that look at the ending slash to determine traffic to a given directory. If you are inconsistent with the use of the slash, then you may have some traffic allocated to one directory versus another.
I prefer use of the slash for this reason as well.
webestate last edited by
Thanks for the responses. I found a good explanation here also
http://googlewebmastercentral.blogspot.com/2010/04/to-slash-or-not-to-slash.html
This post from Google Webmasters says that either one is fine. Just make sure you use one or the other consistently.
Regarding your root domain -
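Whichever form you pick, the normalization itself is a one-line rewrite plus a 301 to the canonical form. A sketch of the "pick one and redirect the rest" rule in Python (the file-extension heuristic here is an assumption for illustration, not anything Google prescribes):

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_trailing_slash(url):
    """Canonical form chosen here: directory-like paths end in '/',
    file-like paths (last segment has an extension) do not.
    Everything else would be 301-redirected to this form."""
    parts = urlsplit(url)
    path = parts.path or "/"
    last = path.rsplit("/", 1)[-1]
    if "." not in last and not path.endswith("/"):
        path += "/"
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, parts.fragment))

print(normalize_trailing_slash("https://example.com/blog"))
# https://example.com/blog/
```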
I'm redesigning example.com on a subdomain of my own site, so at example.mysite.com. As part of the redesign, I am optimizing the site's images. I used Wordpress Importer to get the content to the development site, but I did not import the images. Instead, I added the images to the development site by copying and moving over the contents of example.com's uploads folder. The posts at example.mysite.com are showing the images, but they are pulling them from the original location. I tried adding the following code to wp-config.php under the (misunderstood?) impression that the image URLs would use the development site's domain:
1 define('WP_HOME', 'http://example.mysite.com');
2 define('WP_SITEURL', 'http://example.mysite.com');
I am not seeing any change and the images are still pulling from the original site. How can I test the images on the current site without actually changing the URLs in the database? (If I understand correctly, I could search and replace, but that is not what I am trying to achieve.) The original domain is not changing with the redesign, so there is no need to actually change the URLs. I just need to test the images, as I will be removing those that are not being used as well as optimizing the remaining images before moving the redesigned site over to the original domain.
Hi to all,
what is the best URL structure: to have all words in the URL, or to trim the URL as Yoast suggests? If we remove some words from the URL (not the focus keyword, but stop words and other keywords) to get a shorter URL, will that impact search rankings?
example.com/one-because-two-for-three-on-four - long url, moz crawl error, yoast red light
example.com/one-two-three-four - moz ok, yoast ok
Where one is the focus keyword.
We are going to be promoting one of our products offline, however I do not want to use the original URL for this product page as it's long for the user to type in, so I thought it would be best practice in using a URL that would be short, easier for the consumer to remember.
I could replicate the product page and put it on this new short URL, but this would create a duplicate content issue. Would it be best practice to use a canonical on the new short URL pointing to the original URL, or to use a 301?
Thanks for any help.
My client has messy URLs. does it make sense to write new clean URLs, then 301 redirect all old URLs to the new ones?
Thanks for reading!
Followup question to rand(om) question: Would two different versions (mobile/desktop) on the same URL work well from an SEO perspective and provide a better overall end-user experience?
We read today's rand(om) question on responsive design. This is a topic we have been thinking about and ultimately landing on a different solution. Our opinion is the best user experience is two version (desktop and mobile) that live on one URL.
For example, a non-mobile visitor to http://www.tripadvisor.com/ will see the desktop (non-responsive) version. However, if a mobile visitor (e.g. on iOS) visits the same URL, they will see a mobile version of the site, still on the same URL. There is no separate subdomain or URL; instead, the page dynamically changes based on the end user's user agent.
It seems this would simultaneously solve the problems mentioned in the rand(om) question and provide an even better user experience. Using this method, we can create a truly mobile version of the website that is similar to an app, which matters because mobile users and desktop users have very different expectations and behaviors while interacting with a webpage.
I'm redesigning a site's structure from the ground up, and am having issues with the URLs. I'd love to have them be perfect, but kept finding conflicting advice online.
1. For my services blog, is it best to have it set up like www.example.com/services/keyword or
There seems to be conflicting advice as to keep it short and keep the keyword as far to the left as possible, but also that including the word services would help with long tail phrases and site organization.
2. For my blog section, is it best to have it set up like
It's similar to the first question, but also adds the question of including the entire post title in the URL or just the keyword.
Your help would be greatly appreciated!
The development team and I are having a heated discussion about one of the more important things in life, i.e. URL structures on our site.
Let's say we are creating a AirBNB clone, and we want to be found when people search for
apartments new york.
As we have both houses and apartments in all cities in the U.S., it would make sense for our URL to at least include these, so
but the user is also able to filter on price and size. This isn't really relevant for Google, and we all agree that clone.com/Apartments/New-York should be canonical for all apartment/New York searches. But how should the URL look for people filtering on a max price of $300 and 100 sqft?
or (We are using Node.js so no problem)
The developers hate URL parameters with a vengeance, think the last version is the preferable and most user-readable one, and say that as long as we use a canonical on everything to clone.com/Apartments/New-York it won't matter for good old Google.
I think the URL parameters are the way to go, for two reasons. One is that Google might figure out on its own that the price parameter doesn't matter (https://support.google.com/webmasters/answer/1235687?hl=en), and it is also possible in Webmaster Tools to explicitly tell Google not to worry about a parameter.
We have agreed to disagree on this point, and let the wisdom of Moz decide what we ought to do. What do you all think?
We are moving one website to a different domain and would like to know the best way to do it without hurting SEO.
The website we want to move, let's say www.olddomain.com, has a low-quality backlink profile; in fact it received a manual notification from Google of unnatural links detected, but the home page has a PR of 3. We want to move it to a different domain, let's say www.newdomain.com. We would like to know whether it's better to do a 301 redirect to the new domain in order to transfer the link juice, or a 302, taking into account that a 302 won't pass any link juice, so it would be like starting from scratch with the new domain.
Thanks for your help.
Titan FTP Server Review 2021:
Are you looking for the most secure FTP server available? If so, then we have found a great solution. FTP servers are used in many industries for the purpose of transferring files over the internet. But due to the latest technological advancements, it has become easy for hackers to gain access to sensitive data and use it for their own purposes.
If you are looking for a reliable and secure FTP server, we recommend Titan FTP Server. In this review, we are going to take a deep dive into Titan FTP and determine how secure it really is.
What is Titan?
Titan is an FTP server that provides some of the most secure file transfers available. There are over 25,000 servers installed worldwide, evidence that it is a trustworthy and robust SFTP server. Titan is easy to use and relatively simple to install.
Everything about this FTP server just works well. Cisco recommends Titan as the preferred SFTP server to back up their entire Unified Communications Suite.
The Server for an IT Pro:
Titan FTP Server is suited for IT professionals. Compared to its competitors, it has the most complete feature set. Titan allows for granular configurations so that you can customize your server to fit your needs. The admin panel is also simple to work with, and Titan also allows for automation and logging. IT professionals won’t find a better FTP server than Titan.
Web User Interface:
Titan’s web user interface is a great feature that makes it easy to upload or download a file. You don’t need to procure any additional software or plugins for downloading/uploading.
One of the main highlights of the web interface is that it is multi-platform. You can easily use it on Windows, Linux, or Mac devices.
- Cross-Browser Compatibility
If you are using any modern browser, then you can easily use Titan FTP Server. It works on Safari, Internet Explorer, Firefox, and also on mobile browsers.
- Secure File Transfer
Another great thing about Titan FTP Server is that it allows for secure file transfer through the HTTPS protocol. This strong encryption ensures that your files are always safe.
- Drag & Drop
Using Titan FTP Server, you can easily move files and folders using simple drag and drop. This makes it easier to manage all your files.
- Thin Client
Titan boasts high transfer speeds, as most of the processing is done on the server side. So your device doesn’t have to do the heavy lifting.
Titan supports most of the available file transfer protocols. Some of the main protocols supported are:
- SFTP
Titan supports the SSH transfer protocol from version 3 to 6. All transfers are done through an encrypted channel.
- FTPS
The transfers are done through an encrypted channel and support SSL v3.0 and TLS v1.0. Either implicit or explicit FTPS is used to secure the transfers.
- HTTP & HTTP/S
You can also transfer files over HTTP or HTTP over SSL. This is possible due to the presence of the web interface.
If you want to configure servers efficiently, then Titan should be your top choice. You can configure the servers at the user level as well. This makes it very easy to configure settings for all users.
- Remote Administration
You only need a computer with Internet access to set up configurations. User and group account info can be accessed directly from the Windows domain. If you make any changes to a Windows NT user or group, those changes will apply to the Titan server as well.
- Settings Customization
Server configurations can be fine-tuned at the individual, group, and user level to handle special cases.
- Custom Authentication
You can easily create users and groups; everything is designed to be very user-friendly.
- Account Expiration
Accounts can be manually enabled and disabled. You can also set an expiration date for individual or group accounts.
- Administrator App
All configuration is handled through the Titan Administrator. Using this light interface, you can easily configure the server through your browser.
One of the major benefits of Titan is its high level of security. Titan offers a variety of features to prevent abuse, thwart hackers, and restrict access.
- Access Restrictions
Titan lets admins easily restrict IP addresses via blacklisting/whitelisting. Anonymous access can also be disabled so that the data remains protected at all costs. You can also restrict access to things like downloading, uploading, deleting, directory listings, file renaming, etc.
- Password & Account Security
If a user enters the wrong password numerous times, then that account is disabled. Titan also offers more advanced password restrictions, such as preventing DoS attacks without any effect on the connections. If a user enters bad commands, then he is automatically banned. You can set the bans to be permanent or temporary. The server can also be configured in a way that nobody can access the hidden system files.
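The lockout behaviour described above follows a common pattern: count consecutive failures and ban on a threshold, either temporarily or permanently. A sketch of that pattern in Python; the thresholds and ban durations are illustrative assumptions, not Titan's actual defaults:

```python
import time

class LoginGuard:
    """Count consecutive failed logins per user; ban on a threshold."""

    def __init__(self, max_failures=5, ban_seconds=600):
        self.max_failures = max_failures
        self.ban_seconds = ban_seconds  # None means a permanent ban
        self.failures = {}              # user -> consecutive failure count
        self.banned_until = {}          # user -> expiry timestamp or "permanent"

    def is_banned(self, user, now=None):
        now = time.time() if now is None else now
        until = self.banned_until.get(user)
        return until == "permanent" or (until is not None and now < until)

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        self.failures[user] = self.failures.get(user, 0) + 1
        if self.failures[user] >= self.max_failures:
            self.banned_until[user] = (
                "permanent" if self.ban_seconds is None else now + self.ban_seconds
            )

    def record_success(self, user):
        """A successful login resets the failure counter."""
        self.failures.pop(user, None)

guard = LoginGuard(max_failures=3, ban_seconds=600)
for _ in range(3):
    guard.record_failure("alice", now=1000.0)
print(guard.is_banned("alice", now=1001.0))  # True  (inside the temporary ban)
print(guard.is_banned("alice", now=1700.0))  # False (ban expired)
```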
Titan also provides different automation and reporting tools. These tools are designed to ensure the best workflow and to save time. Titan provides automation of:
- Event Handling
You can automate more than 100 events: custom logs, logins, uploads, trigger emails, etc.
- Component Object Model
Titan provides you with a COM API that can be used to control the server with programming languages such as C++, Java, C#, etc.
- Command Line Interface
You can also automate the command-line interface with Titan. This can be used for modifying permissions, adding users and groups, etc.
Titan also provides robust reporting tools. Some of the tools are:
- Log Formats & Message Levels
You can get detailed logging info by enabling verbose logging. The log files are written in plain text or W3C. The log fields can be customized to only gather required info.
- Database Logging
Titan supports database logging so you can track specific statistics.
- Activity Monitor
One of the best things about Titan is that it allows you to monitor all the server activity in real-time. This also includes all user activity.
- Fast & Versatile
- Robust Automation
- Great Reporting Tools
- Secure File Transfers
- Intuitive User Interface
- Granular Controls
- Real-Time Monitoring
- There is no Unix/Linux version. Titan FTP Server runs on Windows only.
If you are an IT professional looking for the most robust FTP server, then Titan is the best option available. It offers all the features of an FTP server along with additional granular controls. The server is also quite fast and configurable. Their customer support is top-notch and regularly receives 5-star ratings. We didn't find any major drawbacks to Titan. All in all, we can easily recommend Titan FTP Server for its customizable nature and secure file encryption.
I thought that was the only problem with the script, but I guess I was wrong. It will be next to impossible to write scripts unless you get a console, which you can do by going ‘into’ the app through right-clicking and exploring the contents, then navigating to the Blender folder and running the ‘internal’ Blender app. Sorry about being rather vague about this.

Mar 18 · Blender Tutorial – Basic Python Programming – Part 5: Part 5 of the Basic Python Programming tutorial for new and intermediate Blender users.

Apr 27 · Blender is out! Jonathan from www.durgeon.com was so kind to create a quick screencast at Vimeo showcasing some of the new and very neat improvements and new tools in Blender. Links: Blender Release Notes, Blender Download Link, Screencast Overview at www.durgeon.com.

Python for Blender 2.63

How to change an Operator's label in Blender depending on the context? The script uses Blender's custom-properties feature to write custom tags to the output file. The model typically consists of multiple 'parts' (Blender mesh objects that are parented to form a tree, with one 'parent' and multiple 'child' objects).

Apr 28 · So wow, Blender was knocked out the other day. If you don't pay attention to Blender development it is remarkably easy to miss some of the cooler additions. BMesh is like a breath of fresh air, the introduction of which should satisfy most modelling needs.

Apr 27 · Blender is an integrated application that enables the creation of a broad range of 2D and 3D content. Blender provides a broad spectrum of modeling, texturing, lighting, animation and video post-processing functionality in one package.

One of Blender's powerful features is its Python API. This allows you to interface with Blender through the Python programming language. The Python interface allows you to control almost all aspects of Blender; for example, you can write import or export scripts for meshes and materials of various formats.

Blender is a very powerful open-source, multi-platform 3D CG application. Especially noteworthy is that everything can be controlled with Python: adding a new cube, applying a texture to it, moving it, copying it, rendering, and other operations normally performed through the GUI…

The BMesh features exposed closely follow the C API, giving Python access to the functions used by Blender's own mesh editing tools.

Hey everyone. First, I am a total newbie, so this might be a stupid question. I installed Blender and it installs some files from Python…

Hey there, I'm trying to copy one object's world position to another. I already figured out a script but it doesn't seem to work: import bge; sce = logic…
Blender 2.63 BGE Tutorial Python Random #13, time: 8:08
Guest Wireless Access with External DNS While Maintaining Access to the Local DMZ
Every network administrator has heard this question from their users before: “why does guest wireless access not work like my connection at home?”. What they really mean is “why can’t I access all of InsertYourCompany’sName external websites/webmail when I am on the guest wireless network?”. Typically our guest networks share the same internet connection/outside ASA interface as our DMZ and internal networks do, forcing us to segregate the networks. This is typically done on the ASA where all these networks “terminate” and the NAT translation out to the internet is performed.
The simplest way to segregate your three networks is to set the DMZ network to the lowest security level of the three (50 for this example), set the guest wireless network security level a bit higher (70 for this example) and internal to the highest possible (100). This allows internal and guest clients access to the DMZ network while keeping hosts on the guest and DMZ networks from accessing the internal network, creating a basic security barrier. This is where things begin to get problematic. Typically we would assign our guest clients our internal DNS servers so they can resolve our DMZ servers via their private DMZ addresses. However, besides the general security problems of our guest clients having access to our internal DNS servers, the guest clients can also be subject to the problem of split-horizon DNS. Split-horizon DNS is often used by companies to grant internal clients additional access to web resources, or to point them at internal private server VLANs while presenting outside clients with the DMZ servers. For example, as an employee on-site I might go to CorpWebSalesServer and the DNS entry would send me to 10.100.0.120, but if I am an outside customer connecting to CorpWebSalesServer from home I would be given the server's public address in response to my DNS query. So, in order for our clients on guest wireless to access CorpWebSalesServer (remember they are technically internal clients) they would need to have access to our internal servers instead of our DMZ, which creates a massive security hole. To solve this you could simply set the guest clients to an external DNS server such as Google’s public DNS (8.8.8.8 and 8.8.4.4), and because the guest network is a lower security level they would not be able to access our internal networks, just the DMZ and Internet. However, this too creates a problem. Google will return the public IP address of the DMZ web server back to the guest client, which then tries to access the web server by its public IP address.
By default the Cisco ASA will not allow packet redirection back out the same interface (outside), which is exactly what happens when the guest client tries to access the DMZ server by its NAT’d public IP address. There is however an awesome solution to this: DNS doctoring.
DNS doctoring intercepts the response returning from the outside DNS server and converts the address in it to the private DMZ IP address accessible by the guest client. For example, a guest client wants to get to YourCorp’s web server and asks Google’s DNS server for its IP address. The public IP address returned by Google for YourCorp hits the firewall on its way back to the guest client. The firewall sees the public IP address in the response and promptly replaces it with the private DMZ address of YourCorp, 10.50.0.110. The guest client now knows that YourCorp is at 10.50.0.110 and is happily able to access the server.
DNS doctoring is supported on the Cisco ASA firewall and is very simple to setup.
Step 1: Pour yourself some Dark Horse Raspberry Ale, great summer beer that goes down a little too easy.
Step 2: Load up the good ‘ole ASDM Select Configuration -> Firewall -> Objects ->Network Objects/Groups
Step 3: Double click on the Network Object for the server in question to edit it and click on the NAT header to expand the Object window and click the Advanced button.
Step 4: In the Advanced NAT Settings windows check “Translate DNS replies for rule” and hit OK a couple of times.
Step 5: That’s it, sit back, enjoy your Dark Horse Ale while the ASA does all the doctoring work for you.
Pro-tip: Just remember to limit access to 80 and 443 from the guest network to the DMZ network for web access only, you don’t want any guest users attempting to RDP to your DMZ boxes.
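If you prefer the CLI over ASDM, the “Translate DNS replies for rule” checkbox corresponds to the `dns` keyword on the object’s static NAT rule. Here is a rough sketch on a post-8.3 ASA; the object name, interface names, and all addresses below are made-up examples, not anything from a real config:

```
! Hypothetical DMZ web server object (names/addresses are examples only)
object network DMZ-WEB-SERVER
 host 10.50.0.110
 nat (dmz,outside) static 203.0.113.10 dns
!
! Pro-tip ACL sketch: guests (10.70.0.0/24 here) get web-only access to the DMZ box
access-list GUEST_IN extended permit tcp 10.70.0.0 255.255.255.0 object DMZ-WEB-SERVER eq 80
access-list GUEST_IN extended permit tcp 10.70.0.0 255.255.255.0 object DMZ-WEB-SERVER eq 443
access-group GUEST_IN in interface guest
```

Keep in mind an applied ACL ends in an implicit deny, so a real guest ACL would also need permits for DNS and general internet traffic; this fragment only shows the DMZ-specific lines.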
In this post, I talk about the current development state of my Deep Space D-6 Alexa adaptation. This is the first post of a series that is exclusively available to my Patreon members.
After I released Dungeon Roll and Desolate, a few people asked me how they can support my work so I can keep working on board game adaptations for Alexa. I decided to create a Patreon account and one of the benefits that my patrons get are exclusive development diaries while I am working on the games. You can find the first diary entry below.
The Development Log
First, I would like to welcome the folks that became patrons after I posted the previous welcome message. Thank you for supporting this project and I hope you will find the stuff that I am working on worthwhile!
Now, here’s the development progress update for the past week.
At the start of the week, all I had was rough skeletons and notes on how I will structure my code. It basically comes down to having a main class that keeps track of the game state and the player's ship, and one more object that manages the threats and their actions.
In Deep Space D-6, during the player’s turn you assign your crew to various stations on the ship so they can perform an action. The crew is represented by crew dice, and you have 6 of them. So, for every round you have a different set of crew dice to work with.
The first thing I wanted to implement was the weapons and the ability to fire them at enemy threats. But, before I could work on the weapons, I needed to create some basic enemies to fire at. I went through the card list and created objects for all the enemies that have a single basic attack, with no special abilities.
Now that I had some sitting ducks to shoot at, I moved on to implement the weapons on the ship. There are two different weapons on the ship provided with the free print-and-play version of the game: a stasis beam that disables enemy threats for a round, and the standard lasers that do damage. The stasis beam can be operated by science crew, while the standard weapons are activated by using tactical crew. When you activate your standard weapons for the first time in the round, they deal a single point of damage, but subsequent activations deal two damage. So, if you assign three tactical units to the weapons, you will deal a total of five damage.
I decided to use a single command that players will provide to Alexa to fire the weapons: “Fire weapons”, and Alexa then asks you to provide a target if you did not include one with your command. When a valid target is provided, one available tactical unit on your ship activates the weapons and deals damage to the provided enemy. If this was not the first activation of the weapons, two damage is dealt to that enemy. Nothing too complicated, right? Well, that’s what I thought until I realised that something was off with this approach a few days later.
Next, the stasis beam was pretty straightforward. You just use the fire stasis beam command and provide your target. If the target is valid, it gets disabled for this round and cannot trigger its abilities when it is the enemy’s turn.
Besides using the stasis beam, the science crew can also recharge the ship’s shields. This was the functionality I implemented next which completed all of the science crew’s abilities.
Next up was the engineering crew. The engineering unit has a single action, and that is to repair hull damage on the ship. If you use a single engineer per round, you can repair one damage to the ship. Just like the weapons, subsequent ship repairs will repair two points of damage to the ship. So, if you repair the ship twice in the round, you will repair three damage to the hull.
The medical crew has two abilities. The first one is quite thematic and intuitive, which is to heal crew that is stuck in the infirmary. Some threats may cause some of your crew dice to be sent to the infirmary, and these dice are not available until you use a medic to heal them. The second action that a medical unit can do is remove a threat die from the scanners. Now what does this mean?
When you roll your crew dice at the beginning of a round, you may roll threats. These immediately get locked in the ship’s scanners. When there are three threats locked in the scanners, a new enemy is drawn from the threat deck. So, if you have played Dungeon Roll, it is similar to how the dungeon dice that roll Dragons get locked in the Dragon’s Lair.
The main problem with the locked threats is that these dice stay locked until you either remove them using a medical crew, or the scanners fill up with three threat dice and a new threat is drawn from the threat deck. They do not reset at the end of a round, so you will have a smaller number of crew dice to roll. As I mentioned earlier, the medical crew being the one to remove threats from the scanners is not very thematic or intuitive, but I guess the other crew types were already too occupied with the other actions they can do.
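To make the scanner mechanic concrete, here is a minimal Python sketch of how locked threat dice could accumulate and trigger a draw from the threat deck. The names and structure here are my own guesses at an equivalent, not the actual skill code:

```python
class Scanners:
    """Threat dice locked here persist across rounds until a medic
    removes them or until three of them trigger a new enemy draw."""

    THRESHOLD = 3

    def __init__(self):
        self.locked = 0

    def lock_threats(self, rolled_threats, threat_deck):
        # Dice that rolled threats get locked; every third locked die
        # draws a new enemy and frees those three locked dice.
        drawn = []
        self.locked += rolled_threats
        while self.locked >= self.THRESHOLD and threat_deck:
            self.locked -= self.THRESHOLD
            drawn.append(threat_deck.pop(0))
        return drawn

    def remove_threat(self):
        # The medical crew action: unlock one threat die.
        if self.locked > 0:
            self.locked -= 1
```

Since the dice do not reset between rounds, the `locked` count simply carries over, shrinking the pool of crew dice the player rolls next round.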
Finally, the last crew member functionality I implemented was the commander. In the free print-and-play version of the game, the commander has two actions: transform another available crew member to a type of your choice, or reroll all the available crew dice. In the retail game, the ability to reroll the dice is removed, and I think the main reason for this is probably that it was rarely used. I have played the print-and-play version multiple times, and I have never used this ability. So, at the moment, I have only implemented the ability to transform crew members from one type to another.
With that, I finished implementing the crew actions. At least that’s what I thought until I realised that there was a bug in the weapons code. The game rules say that you can split the damage you are about to deal however you want between multiple targets. The problem with the approach I took for firing the weapons was that when you fire the weapons for the second, third, or fourth time, you deal two damage to the provided target, regardless of whether that target only has one health remaining. So, it was time to refactor and come up with a new approach.
The solution I decided to go with was the following: first, the user asks Alexa to assign a number of tactical units to the weapon systems. Once that request is validated, Alexa tells the user what the total damage is, and the user then needs to give a separate command to open fire. When you fire the weapons now, you need to provide two pieces of information: the first is the target, just like before, and the second is the amount of damage you are going to deal to that target. This approach solved the problem of splitting the damage between multiple targets. The main drawback is that to open fire, you now need to issue two commands instead of just one.
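The refactored flow can be sketched like this in Python (again, names and structure are my own illustration of an equivalent design, not the actual skill code):

```python
def total_damage(tactical_units):
    # The first activation deals 1 damage, each subsequent one deals 2,
    # so e.g. three tactical units yield 1 + 2 + 2 = 5 damage.
    return 0 if tactical_units <= 0 else 1 + 2 * (tactical_units - 1)


class WeaponSystems:
    def __init__(self):
        self.pool = 0  # damage assigned but not yet dealt this round

    def assign(self, tactical_units):
        # Step 1: "assign N tactical units to the weapon systems".
        self.pool = total_damage(tactical_units)
        return self.pool

    def fire(self, target, damage):
        # Step 2: "fire at <target> for <damage>"; the pool can be
        # split across several targets in separate fire commands.
        if damage > self.pool:
            raise ValueError("not that much damage assigned")
        self.pool -= damage
        target["health"] -= damage
```

Because the player names the damage amount per target, a one-health enemy can be finished off with exactly one point while the rest of the pool goes elsewhere.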
With that solved, it was time to add some logic to the threat manager and give an attack ability to the enemies. The enemy’s turn is quite simple. You roll a six-sided die, and depending on the result, the enemies get activated. At the moment, I only have generic enemies that have no special abilities, and all they do is open fire when they are activated. Enemies that are disabled with the stasis beam are ignored during the enemy activation phase until the next round.
Coding the enemy attacks also required coding how damage to the player ship is processed. When the player ship is attacked, the damage first goes to the shields, and if your shields are brought down to zero, you start taking hull damage. There are also some enemies that can ignore your shields and cause damage to your hull directly.
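Damage processing on the player's side could look roughly like this; the starting hull and shield values below are illustrative placeholders, not the game's actual numbers:

```python
class PlayerShip:
    def __init__(self, hull=8, shields=4):
        self.hull = hull
        self.shields = shields

    def take_damage(self, amount, ignores_shields=False):
        # Shields soak damage first unless the enemy pierces them;
        # once shields hit zero, the remainder comes off the hull.
        if not ignores_shields:
            absorbed = min(self.shields, amount)
            self.shields -= absorbed
            amount -= absorbed
        self.hull -= amount
        return self.hull > 0  # False means the ship is destroyed
```

The `ignores_shields` flag covers the shield-piercing enemies, and the science crew's recharge action would simply add back to `shields`.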
So there you have it, the first progress report of Deep Space D-6 for Alexa. I’m currently a bit stuck with how to implement the ability to send your crew on missions as some enemies can be defeated by sending different crew types on missions instead of destroying them with your weapons. After I figure that out, I am planning to work on adding the remaining enemies that have special abilities and implement all of their functionalities. If all goes as planned, I should be done with this by the end of next week. At that point, we should have a fully playable game, so I will add some rules to teach you how the game is played and publish the beta for you to try it out. While the game is in beta, I will work on adding sound effects to improve the experience.
If you read this far, thanks for reading, and feel free to let me know if you have any comments or questions.
Until next time, over and out!
Posts like the one you just read are exclusive to my patrons. A lot of time and effort goes into developing the adaptations of these games and releasing them for free. If you like what I do and enjoy playing these games, or you just want to support something that makes the hobby more accessible to a wider audience by making these games playable for blind and visually impaired players, you can become a patron on Patreon. In return, you will be able to get early access to the beta versions of the games while they are still in development and help shape the final product. You will also be able to participate and vote on the future games that I will be adapting next.
There sure was a lot of talk about the new ad campaign featuring Bill Gates and Jerry Seinfeld. It was supposed to be hilarious. It wasn’t. They are complete duds. There’s not a chance anyone, and certainly not even Bill Gates, would have a heart rate issue watching these ads, ’cause they’re really boring. Sleep inducing. About the only thing you might be having trouble with is keeping your hand from scratching your head as you try to reconcile what Bill, Jerry, Microsoft and some ad agency were thinking. The only good part of these ads for Microsoft is that everyone is talking about how crappy they are, so at least they can attribute the marketing spend to “share of voice”.
It’s been rumored that Microsoft will spend $300 million on this campaign with a new ad firm, Crispin Porter + Bogusky. And Seinfeld is getting a cool $10M for his spot. Now, there’s some heart-valve-clogging material for you.
In case you’ve been a little out of touch with seeing any good TV ads, Microsoft is seriously losing a branding popularity battle with Apple – even though the Apple ads only refer to “PC”. It could be Intel, HP, Dell, or Microsoft they’re actually talking about, but everyone knows it’s an on-going jab at Microsoft. And, Microsoft thought, “let’s just take our top visionary and a ton of money, and we’ll beat those Applewholigans”.
So, enter Jerry Seinfeld. The head comic, writer & producer of one of the most successful TV sitcoms (at least in my generation). You know we’ll be watching Seinfeld re-runs til eternity, about as long as M*A*S*H and the Brady Bunch. We know Jerry for his comedy, his writing, his unbelievable wealth. Why on earth risk your reputation with the fuddy-duddies of Redmond? It couldn’t have been for the pay-day. Did Jerry have some sick debt to Gates with no chance of repayment? And we’re subject to the aftermath?
It’s completely beyond comprehension why a man of Bill’s stature needs to be involved with such a terrible spot on TV. Is this his last gasp to inject positive vibes into the Microsoft brand? Did he really buy off on this? Please don’t tell me it was actually his idea (read = you better do what Bill says). You’re talking about a person that’s done hundreds, if not thousands, of appearances. Mostly all very serious and thought provoking, and I’m sure he’s left a few scripted chuckles along the way. But, other than having a great, geeky smile, he’s not someone we rely on to hit the funny bone. Surely, if you were able to play a roaring practical joke on him, it would be outrageously funny, but unlikely anyone would sign up for that. Love him or hate him, Bill certainly can never be replicated, duplicated, or in any way replaced, but doesn’t mean we need to see him do the robot or shake his booty on TV.
In a web2.0 world, where Ray-Ban, Nike, and Levis are kicking butt with viral videos and ads using a combination of YouTube, print ad, and TV, why is Microsoft simply left in the dust? Because in the web2.0 world, it’s not about outspending. It’s about outmaneuvering, creativity, and novelty. With all the money in the world – literally – it’s truly amazing Microsoft could compile such a horrible attempt.
Microsoft would have been better off paying a bunch of college grads $2 million to come up with whacked content and rough-cut video; the result would have been better viral branding and positive messaging than the waste they put on TV and now subject us to during national sporting events.
Deck of Every Card spades puzzle
Getting an X of Spades card from Deck of Every Card yields a string of letters (letter count equals X).
For new players, the string is a substring of the common string "UJLWTTLEJCCQYOIYMWESZWPWFOBUNKWGULYKDQNADDBWXUWKRRRTBIHXBUMWAIZGEDTHJGGUGEWCCCICOPJTNM". It is coded with the Vigenère cipher; the key is "ABCDEFGHIJKLMNOPQRSTUVWXYZ". The deciphered string reads "THIS ONE WAS RELATIVELY EASY GO ASK GRANDPA SEA MONKEE ABOUT STAGE TWO FOR YOUR PERSONALIZED CHALLENGE", without spaces.
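As a sanity check, the stage-one decode can be reproduced in a few lines of Python. Note that the key "ABC...Z" only matches the published plaintext if A counts as a shift of 1 (i.e. each letter is shifted back by its key letter's position in the alphabet); that convention is inferred from the data, not an official statement:

```python
CIPHERTEXT = "UJLWTTLEJCCQYOIYMWESZWPWFOBUNKWGULYKDQNADDBWXUWKRRRTBIHXBUMWAIZGEDTHJGGUGEWCCCICOPJTNM"


def decode_stage_one(ciphertext):
    # Shift position i back by (i % 26) + 1: A=1, B=2, ..., Z=26.
    return "".join(
        chr((ord(c) - ord("A") - (i % 26) - 1) % 26 + ord("A"))
        for i, c in enumerate(ciphertext)
    )
```

Running `decode_stage_one(CIPHERTEXT)` yields the stage-one plaintext with spaces removed.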
After a player asks Grandpa about "stage two", the cards start giving substrings of a different string. The new string is different for each player, but always the same for a given player. It is coded with the Playfair cipher; the key is the player's name. The deciphered string reads "GREAT WORK YOU ARE READY FOR YOUR FINAL CHALLENGE ASK ABOUT ??????????? FOR THE LAST STAGE BE SURE TO DESTROY WHEN YOU ARE DONE READING", without spaces, where ??????????? is replaced by a user-specific password. The password is usually (but not always) an 11-letter-long English word (such as "copywriters").
After the player asks Grandpa about the player-specific stage two password, the cards start giving substrings of the final string, common to all players. The string is "KQAESGDOJDBQCBYQVRSEFATCBMJORWXKCKBVSUYNLICNYBGKUYDIIOOGSHXLEKWROPXYKYSURVCJGYVZCH". It is coded with the Vigenère autokey cipher; the key is the second half of the plaintext. The deciphered string reads "CONGRATULATIONS YOU HAVE COMPLETED THIS PUZZLE GO ASK GRANDPA ABOUT DEEP SPADING FOR YOUR REWARD", without spaces.
Drawing an X of Spades card after this gives you a substring of "YOUAREAGREATSPADE".
- Stage one was decoded on Jul 6, 2015; at that time only the first 3 letters of the ciphertext were unknown.
- Stage two was publicly decoded on Jul 13, 2015.
- Stage three was first publicly decoded on Jul 16, 2015.
- You need to unlock stage two only once.
- You don't need to draw any cards or own the deck to complete stage one and unlock stage two. However, this seems pointless, since you still have to be able to draw enough cards, and have enough of them be Spades cards, to get enough of the text for the stage two password to be revealed or at least deduced.
- HotStuff confirmed that the stage two password is not always 11 letters long, and that due to an error, some passwords that start with T will actually start with "a weird XT".
- Known passwords are as short as 5 letters and as long as ?.
- It's possible to find the stage two password without discovering the whole ciphertext: encrypt the known cleartext with the password part replaced by placeholder characters, then locate and decrypt the sequences that don't match the resulting ciphertext (it's also possible to guess the password knowing only a part of it). Note that the ciphertext part after the password differs for even-length and odd-length passwords. There's an even clearer explanation of that HERE.
- It's possible an alternate encoding is dropping, as documented on the Discussion tab.
- You can't skip steps by just entering the stage one and three passwords (the ones which are the same for everybody), you need to first unlock them properly.
Allen-Bradley, Rockwell Automation, TechConnect, RSLogix, PowerFlex, Kinetix, SMC Flex, Connected Components Workbench. 1336 PLUS II drives. Associated with any particular installation, the Allen-Bradley Company cannot assume. 33 or 34 minus the value of Parameter 35, to the skip frequency plus the. Allen-Bradley, CompactLogix, ControlLogix, DriveLogix, FLEX IO, Kinetix. PowerFlex 700S, PowerFlex 753, PowerFlex 755, PowerFlex 7000, PLC-2, PLC-3. 1336 PLUS, 1336 IMPACT, SMC, SMC FLEX, SMC Dialog Plus, RSBizWare. Abstract: No Functionality - HAP Programmer Only 2 Allen-Bradley 1336 REGEN. Software 1336 Plus II Drive, 1336 Plus II User Manual, 1336 PLUS-5. The information below summarizes the changes to the 1336 PLUS II. If you are not familiar with static control procedures, reference A-B. Conversion Assistance: help with converting a 1336 Plus drive to an updated model. Allen-Bradley's collection of manuals for the 1336 Plus drive. Allen-Bradley Home. Fault Finder: diagnose any fault from the 1336 Plus II drive. Allen-Bradley's collection of manuals for the 1336 Plus II drive. Faxback Document 3031. Web location: Http:WWW. COM. BYPASS. PLC is a registered trademark of the Allen-Bradley Company, Inc. Must be specified to ensure shipment of the appropriate User Manual. A-B Control, Flexibility and Performance AC Drive No. A-B User Manual 1336 PLUS 5. T74100895. AB 1761-Lxxxxx MicroLogix 1000 Programmable Controller Install.pdf. Allen Bradley 1336 Plus II Quick Start Guide 7-98.pdf. Allen Bradley. Allen-Bradley, CompactLogix, ControlLogix, DriveLogix, FLEX IO, Kinetix, MessageView. 1336 PLUS, 1336 IMPACT, SMC, SMC FLEX, SMC Dialog Plus, RSBizWare Batch, and. Reconfigure an IO Module via a Message Instruction. This release of the 1336-5.0 User Manual contains some new and updated information.
Carrier and your nearest Allen-Bradley Area Sales/Support. Provides voltage surge protection and phase-to-phase plus phase-to-ground. With Allen-Bradley SMC and SMP power products, the 1305 drive, the. Of the 1336 PLUS II User Manual, publication 1336 PLUS-5.3, must be adhered to. A complete listing of 1336 PLUS discrete spare parts can be found at: www. comsupportabdrives13361336-65
Pages, activate Document Assembly view in the View menu or by clicking the. PDF Enhancer handles the most common document assembly and. Allows you to flawlessly create digital editions from press-ready PDFs quickly and easily. 6 Document security allows/does not allow document assembly. PDF is the standard format for business document collaboration. Yet, most users lack adequate PDF.
Allows users to combine files and remove or replace pages with drag-and-drop ease. Creating a single, cohesive PDF file from a collection of documents in. And allows the reader to use the information in a more efficient manner. Home > Editing and Managing PDFs > Document Security. When this permission is granted, the following 3 permissions: Document Assembly, Comments. Learn how the new tools allow you and your staff to easily create smart. See demonstrations of the hot newest document assembly tools like.
The web, share templates and answer files on servers, and work with PDF forms. Feb 7, 2012. Extraction, content copying, signing, filling in fields, and document assembly. Feb 23, 2010. Simply drag and drop. May 28, 2014. When I export a Writer document as a PDF file and have no security settings at all, it creates a PDF file with the properties: Document Assembly. GroupDocs document assembly service allows you to generate custom.
Allows you to assemble both PDF forms and Word documents with merge. Jan 4, 2010. Document assembly seems to be a hot topic these days, especially when combined with the power of SharePoint. Think of this feature as allowing for a collection of related content.
To convert a PDF file to a Word document. Our Document Assembly tool allows you to assemble electronic files from different sources into a single PDF document for printing, electronic storage, and. With A-PDF Password Security, you can set a PDF file to need a password to open. Printing, Document Assembly, Allow Form Fill-in or Signing, Comments. Oct 14, 2014. And allows the reader to use the information in a more efficient manner.
Sep 20, 2012. The web, share templates and answer files on servers, and work with PDF forms. Allow Reader users to save data that they enter into your forms. Save data in interactive or fillable forms: File > Save As > Reader Extended PDF > Enable. In Acrobat XI, close Form Editing, then File > Save As Other.
There is a setting inside the PDF file that turns on the allow-saving-with-data permission. Learn how to save data you enter into a fillable PDF form in Adobe.
You can save data typed into this PDF form if the author allowed you to do this in Adobe. If you have a PDF form you would like to distribute and collect responses, you can also use the PDF to enable recipients to answer questions and save them as. Extend Editing Rights to Reader Users. Click the PDF file for which you want to enable saving rights, then click Open. In order to allow users to save a filled-in PDF with Acrobat Reader, the.
You are editing the form, click Close Form Editing in the Forms task pane. Fortunately there is a free PDF viewing program that allows you to fill out fillable forms and save the changes, to be edited later if need be. Using Adobe Acrobat ver 9 to make a fillable PDF. You can save data typed into this form - Enable Reader users to save form data if appropriate. Ordinarily.
You can save a Word file to PDF format in Word 2007 and later. Word 2013 is the first version that allows you to open a PDF, edit it, and resave it as a PDF.
Editing a PDF file was, until now, an entirely different matter. Method 2 of 4: PDF Editing Software. Edit a PDF File Step 1.
<?php namespace Cmsable\PageType;
use DomainException;
use OutOfBoundsException;
use Illuminate\Container\Container;
class ManualRepository implements RepositoryInterface{
public $loadEventName = 'cmsable.pageTypeLoadRequested';
public $filledEventName = 'cmsable.pageTypeFillCompleted';
protected $pageTypes = [];
protected $pageTypesByRouteName = [];
protected $pageTypesLoaded = FALSE;
protected $eventDispatcher;
protected $eventFired = FALSE;
protected $prototype;
protected $currentPageConfig;
protected $app;
public function __construct(Container $container, $eventDispatcher=NULL){
$this->app = $container;
$this->prototype = new PageType;
if($eventDispatcher){
$this->setEventDispatcher($eventDispatcher);
}
}
public function getPrototype(){
return $this->prototype;
}
public function setPrototype(PageType $prototype){
$this->prototype = $prototype;
return $this;
}
public function getEventDispatcher(){
return $this->eventDispatcher;
}
public function setEventDispatcher($dispatcher){
if(!method_exists($dispatcher,'fire')){
throw new DomainException('EventDispatcher has to have a fire method');
}
$this->eventDispatcher = $dispatcher;
return $this;
}
public function get($id){
$this->fireLoadEvent();
if(isset($this->pageTypes[$id])){
return $this->pageTypes[$id];
}
throw new OutOfBoundsException("No PageType found with id '$id'");
}
public function has($id){
try{
$type = $this->get($id);
return TRUE;
}
catch(OutOfBoundsException $e){
return FALSE;
}
}
/**
* {@inheritdoc}
*
* @param string
* @return PageType|null
**/
public function getByRouteName($routeName)
{
$this->fireLoadEvent();
if (isset($this->pageTypesByRouteName[$routeName])) {
return $this->pageTypesByRouteName[$routeName];
}
}
public function add(PageType $pageType){
$this->pageTypes[$pageType->getId()] = $pageType;
if (!$routeNames = $pageType->getRouteNames()) {
return $this;
}
foreach ($routeNames as $routeName) {
$this->pageTypesByRouteName[$routeName] = $pageType;
}
return $this;
}
public function fillByArray(array $pageTypes){
foreach($pageTypes as $typeData){
$pageType = $this->createFromArray($typeData);
$this->add($pageType);
}
if ($this->eventDispatcher) {
$this->eventDispatcher->fire($this->filledEventName, $this);
}
}
public function createFromArray(array $pageTypeData){
if(isset($pageTypeData['class'])){
$pageType = $this->app->make($pageTypeData['class']);
}
else{
$pageType = clone $this->prototype;
}
if(isset($pageTypeData['id'])){
$pageType->setId($pageTypeData['id']);
}
else{
throw new OutOfBoundsException('A PageType needs an id');
}
if(isset($pageTypeData['controller'])){
$pageType->setControllerClassName($pageTypeData['controller']);
}
foreach(['singularName','pluralName','description','category',
'formPluginClass','routeScope','targetPath',
'controllerCreatorClass','routeNames'] as $key){
if(isset($pageTypeData[$key])){
$method = 'set'.ucfirst($key);
$pageType->$method($pageTypeData[$key]);
}
}
return $pageType;
}
public function all($routeScope='default'){
$this->fireLoadEvent();
$pageTypes = [];
foreach($this->pageTypes as $id=>$pageType){
if( $pageType->getRouteScope() == $routeScope || !$pageType->getRouteScope()){
$pageTypes[] = $pageType;
}
}
return $pageTypes;
}
public function byCategory($routeScope='default'){
$categorized = array();
foreach($this->all($routeScope) as $info){
if(!isset($categorized[$info->category()])){
$categorized[$info->category()] = array();
}
$categorized[$info->category()][] = $info;
}
return $categorized;
}
public function getCategory($name){
return new Category($name);
}
public function getCategories($routeScope='default'){
$categoryNames = array_keys($this->byCategory($routeScope));
$categories = array();
foreach($categoryNames as $name){
$categories[] = $this->getCategory($name);
}
return $categories;
}
protected function fireLoadEvent(){
if($this->eventDispatcher && !$this->eventFired){
$this->eventDispatcher->fire($this->loadEventName, $this);
$this->eventFired = TRUE;
}
}
public function currentPageConfig(){
return $this->currentPageConfig;
}
public function setCurrentPageConfig($pageConfig){
$this->currentPageConfig = $pageConfig;
return $this;
}
public function resetCurrentPageConfig(){
$this->currentPageConfig = NULL;
return $this;
}
}
On 6/8/11 4:36 PM, Vikraman wrote:
> I'm working on the `Package statistics` project this year. Till now, I
> have managed to write a client and server to collect the following
> information from hosts:
Excellent, good luck with the idea! I think that better information
about how Gentoo is actually used will greatly help improve it.
> Is there a need to collect files installed by a package ? Doesn't PFL
> already provide that ?
Well, PFL is not an official Gentoo project. It might be useful, but I
wouldn't say it's a priority.
> Please provide some feedback on what other data should be collected, etc.
In my opinion it's *not* about collecting as much data as possible. I
think it's most important to get the core functionality working really
well, and convincing as large a percentage of users as possible to enable
reporting the statistics (to make the results - hopefully - accurately
represent the user base). Please note that in some cases it may mean
collecting _less_ data, or thinking more about the privacy of the users.
For me, as a developer, even a list of packages sorted by popularity
(aka Debian/Ubuntu popcon) would be very useful.
Ah, and maybe files in /etc/portage: package.keywords and so on. It
could be useful to see what people are masking/unmasking, that may be an
indication of stale stabilizations or brokenness hitting the tree.
Anyway, I'd call it an enhancement.
> Also, I'm starting work on the webUI, and would like some
> recommendations for stats pages, such as:
> * Packages installed sorted by users
> * Top arches, keywords, profiles
And percentage of ~arch vs arch users?
> * Most enabled, disabled useflags per package/globally
Also great, especially the per-package variant. It'd be also useful to
have per-profile data, to better tune the profile defaults.
I took a quick look at the code. Some random comments:
- it uses portage Python API a lot. But it's not stable, or at least not
guaranteed to be stable. Have you considered using helpers like portageq
(or eventually enhancing those helpers)?
- make the licensing super-clear (a LICENSE file, possibly some header
in every source file, and so on)
- how about submitting the data over HTTPS rather than HTTP, to better help
protect the users' privacy?
- don't leave exception handling as a TODO; it should be a part of your
design, not an afterthought
- instead of or in addition to the setup.txt file, how about just
writing the real setup.py file for distutils?
|
|
In your typical "big-ball-of-mud" monolith, both (horizontal) layers and (vertical) subdomains become intertwined. Architectural constraints are needed to prevent this from happening. Microservices is one way to enforce those constraints, but if what you're really struggling with is modularity then this might be a case of using a hammer to crack a nut.
Metamodels provide a solution to the horizontal layering problem. In the same way that an ORM such as Hibernate uses a metamodel to separate the domain model from its persistence model, so too can a metamodel be used to - among other things - separate out the application and presentation layers.
Splitting vertical subdomains is a different challenge again, but DCI - an evolution of the well-known MVC pattern - is an architectural style that helps achieve that goal. Distinguishing between data (what the system is), interaction (what the system does), and context (what the user is trying to do), it sits somewhere between aspect-oriented and object-oriented approaches. It's the aspect-oriented nature that allows functionality to be partitioned into vertical submodules but nevertheless be presented in a coherent fashion to the end-user.
So much for the theory. In this live coding session we'll look at a framework that marries both of these techniques, resulting in applications with clean separation both vertically and horizontally.
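As a loose illustration of the data/context/interaction split described above (the class and method names below are invented for the example; this is not Apache Isis code):

```java
import java.math.BigDecimal;

// Data: what the system *is* -- a plain, stable domain object.
class Account {
    BigDecimal balance;
    Account(BigDecimal balance) { this.balance = balance; }
}

// Context: what the user is trying to do -- one use case, which casts
// plain data objects into the roles the interaction needs.
class TransferContext {
    private final Account source; // plays the "source" role
    private final Account sink;   // plays the "sink" role

    TransferContext(Account source, Account sink) {
        this.source = source;
        this.sink = sink;
    }

    // Interaction: what the system *does*, kept out of the data objects.
    void execute(BigDecimal amount) {
        source.balance = source.balance.subtract(amount);
        sink.balance = sink.balance.add(amount);
    }
}
```

Because the transfer logic lives in the context rather than in Account, each use case can be packaged as its own vertical module while the data model stays shared.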
Leveraging Metamodels and DCI to Build Modular Monoliths
Dan is a freelance consultant, developer, writer and trainer, specializing in domain-driven design, agile development, enterprise architecture and also REST, on the Java and .NET platforms. Dan is known as an advocate of the naked objects pattern, and is the lead committer to Apache Isis, a Java framework that implements the naked objects pattern. He also works for several clients on enterprise and mobile apps, built on top of or leveraging Apache Isis.
|
|
There are limitations to what can be accomplished via automated website testing. Understanding these can help you make better use of Sitebeam and other tools.
Computers can’t solve open-ended problems
Currently computer software is not intelligent, and can only follow rules that have been defined for it. This means that the computer is only as accurate as the rules it is given to follow.
Most of the problems faced by Sitebeam are open-ended, i.e. it is impossible or impractical to define a set of rules which would perfectly test for every given set of criteria. As a simple example, consider spell checking. A naive computer program might look like this:
- Find all words on a page
- Check them in a dictionary
- Mark words which are not found as misspelt
In reality, although this appears ‘correct’, it is hopelessly impractical. None of the following, for instance, appear in a standard English dictionary:
- 1,234,567 – a random number
- 02/07/1979 – a date
- 385th – an ordinal
- www.example.com – a web address
- example.com – a web address, or perhaps two words with no space around the period
- NATO – an acronym, immediately apparent due to capitalisation
Adding these words to a dictionary is possible, but impractical. Adding every possible number, date and ordinal for instance, introduces an infinite set of entries to our dictionary. So instead we add rules that detect these ad-hoc anomalies, and ignore them.
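As a loose sketch of this rule-based approach (the dictionary and patterns below are toy placeholders, not Sitebeam's actual rules):

```python
import re

# Toy dictionary for illustration; a real checker would load a word list.
DICTIONARY = {"the", "cat", "sat", "on", "mat"}

# Ad-hoc rules that ignore tokens a plain dictionary lookup would flag.
IGNORE_PATTERNS = [
    re.compile(r"^[\d,./]+$"),             # numbers and dates: 1,234,567 or 02/07/1979
    re.compile(r"^\d+(st|nd|rd|th)$"),     # ordinals: 385th
    re.compile(r"^(www\.)?\S+\.\w{2,}$"),  # web addresses: www.example.com
    re.compile(r"^[A-Z]{2,}$"),            # acronyms: NATO
]

def misspelt(words):
    """Return words neither found in the dictionary nor matched by a rule."""
    flagged = []
    for word in words:
        if word.lower() in DICTIONARY:
            continue
        if any(p.match(word) for p in IGNORE_PATTERNS):
            continue
        flagged.append(word)
    return flagged
```

Each new class of false positive (mixed languages, context-dependent capitalisation) needs yet another rule, which is exactly why the rule set tends towards infinity.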
Other words have subjectively correct spelling, depending on case and context in a sentence. E.g.
- I said hello is correct
- i said hello is not
- Hello David is correct
- Hello david is not
To confuse matters further, it is possible to intentionally misspell something, in a way only a human would realise:
- You should always write “London” not “london”.
And the mixture of languages means some words can be spelt correctly in one, but not another – and even mixed between sentences:
- He was well known for his joie de vivre – correct
- le cat sat on de mat – garbage mix of English, French, German
In fact, this apparently simple problem soon requires an almost infinite set of rules to implement, is different between languages, and always changing.
Yet the spell checker is not a redundant or impossible piece of technology, merely an inherently flawed one. The same difficulties exist with any major spell-checking application: computers cannot perform flawless checking against an infinite rule set. They can however get close enough to be very useful: pointing out possible misspellings, and giving human beings the ability to apply context themselves on top – for example, adding a word to a dictionary.
Similar problems are exhibited by GPS devices, facial recognition software and fingerprint scanners.
Software can never be completely tested
Consider, as a simple example, a program which chooses which page to show:
- Check the day of the week
- If the day is Friday, go to thank-god-its-friday.html
- Otherwise go to just-another-day.html
A more complex example would break this model further:
- Ask the user for their birthday
- Go to date.html, where date is their date of birth expressed dd/mm/yy
In this example, Sitebeam would need to understand the input parameters, their boundary conditions (e.g. no more than 31 days a month) and convert this into a series of potential permutations, each resulting in a new page. If any of this was incorrect then the permutations returned would be incorrect.
With real software, it is common to encounter programs with infinite inputs, outcomes and execution paths (Google Maps, for instance). Variables can alter during execution and include mouse position, button presses and complex interactions. Sitebeam therefore adopts a simpler model which is nevertheless effective in the majority of instances. Given our first example, Sitebeam simply checks for outputs it understands (“go to page…”), and assumes they are all possible – it can’t know for sure. This would leave us with:
- thank-god-its-friday.html
- just-another-day.html
Which is correct. For the second example, it would simply fail to find anything, as the sequence is indeterminate without complete analysis. For the vast majority of cases this is sufficient.
Simulation of browsers is approximate
Sitebeam simulates web browser behaviour to test pages. Because web browsers are immensely complex and their behaviour varies, it is not possible or practical to simulate all browsers perfectly. In fact, one of the greatest challenges of modern web browsers is getting them to behave consistently themselves.
Accordingly there will be small variations between the behaviour of Sitebeam, Internet Explorer, Firefox, Safari, Opera and other browsers which will shift over time. In the case of Sitebeam, because we don’t aim to render pages like browsers, our simulation can afford some degree of laxity. Rare subtleties in CSS parsing and browser-specific hacks can create problems however.
As a general rule, Sitebeam aims to simulate a standards-compliant browser (specifically, Firefox) with some additional behaviour to address Internet Explorer specific hacks.
|
|
[FOM] Re: Absoluteness of Clay prize problems
dmytro at mit.edu
Mon Aug 23 15:11:20 EDT 2004
Harvey Friedman wrote:
>In fact, in my opinion, the essence of mathematics is mostly Pi01 and
>predominantly at most Pi02.
I do not agree with this. I think that mathematics can be roughly separated
into three large areas: the study of finite structures or integers, the study
of real numbers and certain well-behaved functions on real numbers, and general
or set theoretical mathematics.
Natural mathematical theorems tend to avoid many alternating unbounded
quantifiers, so in number theory and related areas, the vast majority of
theorems can naturally be expressed as Pi-0-2 statements. Commonly used
functions whose domain is a subset of R tend to be continuous or otherwise
Delta-0-2. Statements about analysis tend to be expressible (after some
coding) as Pi-1-2 statements.
Many theorems in analysis--such as the theorem stating compactness of the unit
interval--are Pi-1-2 statements and are not reducible over RCA_0 (a weak base
theory) to arithmetical or Pi-1-1 statements. Of course, they are reducible
to arithmetical statements over ZFC, but the easiest way to achieve the reduction
may be to prove them outright. The graph minor theorem is a Pi-1-1 statement that
is (as far as I know) not implied by any consistent arithmetical statement
over Pi-1-1-CA_0 (a theory that tends to suffice for ordinary as opposed to
"set theoretical" mathematics).
Pi-1-2 statements (provably) have the same truth value in all inner models and
generic extensions of V, so "ordinary" mathematics largely escapes the
independence phenomenon.
It should be noted that much of general mathematics is general primarily for
convenience and that the use of uncountable sets can be partially avoided. For
example, one can study countable fields instead of arbitrary fields with only a
partial loss of content.
Also, regarding the Continuum Hypothesis, the problem is an important basic
mathematical (or, some would say, metamathematical) problem in its own right,
so study related to CH is important even if it has no known (non Sigma-0-1)
Pi-0-1 or commercial applications. We still know relatively little and
there are many places where a compelling answer to CH could
come from. The theory of sets of subsets of omega_1 that are reasonably
definable using subsets of omega_1 as parameters is largely unexplored, and it
could be that the only plausible theory of such sets implies the CH. Based on my
beliefs about unity of knowledge, human potential, and importance of
basic study, I would conjecture that the eventual solution to the Continuum
Hypothesis will have important practical applications.
|
|
Soldering tip is made of copper enclosed in other metals
Why is a soldering tip made of copper enclosed in other metals?
Why is it not just simply solid copper?
Copper is used for the core of the iron tip because it's an excellent conductor of heat, so does a good job transferring energy from the heating element to the tip.
However copper is also affected by solder/tin/flux. If you apply solder to a pure copper tip and keep heating it, the copper will be eaten away by the solder and acidic flux (it literally dissolves) - you end up with lots of small pits in the tip which keep growing until it becomes useless.
By coating the copper in a thin layer of more resistant metals such as iron, you prevent the solder from eating away at the copper core, increasing the lifetime of the tip whilst maintaining the good thermal conductivity of the core.
Not just "essentially" dissolves, it literally dissolves. Copper is very soluble in tin. Lead-free solder alloys often contain additives (often nickel, or a controlled amount of copper to begin with) specifically to reduce the solubility of copper.
@Hearth I was going to go with literally originally, but not being a chemist, I chickened out.
I have used solid copper for soldering. What would happen is the point would become duller and duller (like a pencil) until eventually, there was essentially no point at all, just a rounded end of the copper.
Copper corrodes/oxidizes almost immediately when heated. It is coated with more stable metals so that the tip lasts a long time and you don't have to change them out every few minutes.
Copper tips should be immediately tinned with solder upon heating. This protects them from corrosion and oxidation. However, the copper will eventually dissolve in the solder, with the tip point becoming ever more "dull" and rounded.
I still have old copper soldering irons which are basically a block of copper 1" square and 3" long formed to a point at one end on a handle.
These have been used to solder terminals to large copper cables (like the cables to starter motors on cars) and have lasted for years.
Kept clean, not overheated and used with flux they still work well. I don't find a massive rate of acidic corrosion as suggested in the other answers.
Cu alone makes for a good soldering tip. The addition of Fe makes it semi-crap because at temperature, the tip oxidizes really fast and then it won’t wet with solder. Cu will not last forever but it won’t dissolve before your eyes, either.
Copper will actually "dissolve before your eyes." I used to have an old, cheap soldering iron for which I could not buy new tips. I made them myself out of heavy copper wire. You could see pits forming in the tip while using it. After a few hours, I'd have to take the tip out and hammer and file it back into shape. Good tips for good irons last months if not years.
I have used pure copper soldering tips. No, the tip won't dissolve immediately, but the tip will need to be replaced after a day of continuous use. Copper wire tips can be re-pointed with a lathe if one is available.
|
|
// ets_tracing: off
import type { NonEmptyArray } from "../Collections/Immutable/NonEmptyArray/index.js"
import * as Tp from "../Collections/Immutable/Tuple/index.js"
import { accessCallTrace } from "../Tracing/index.js"
import type { _E, _R, ForcedTuple } from "../Utils/index.js"
import { map_ } from "./core.js"
import type { Managed } from "./managed.js"
import { collectAll, collectAllPar, collectAllParN_ } from "./methods/api.js"
export type TupleA<T extends NonEmptyArray<Managed<any, any, any>>> = {
[K in keyof T]: [T[K]] extends [Managed<any, any, infer A>] ? A : never
}
/**
* Like `forEach` + `identity` with a tuple type
*
* @ets_trace call
*/
export function tuple<T extends NonEmptyArray<Managed<any, any, any>>>(
...t: T & {
0: Managed<any, any, any>
}
): Managed<_R<T[number]>, _E<T[number]>, ForcedTuple<TupleA<T>>> {
const trace = accessCallTrace()
return map_(collectAll(t, trace), (x) => Tp.tuple(...x)) as any
}
/**
* Like tuple but parallel, same as `forEachPar` + `identity` with a tuple type
*/
export function tuplePar<T extends NonEmptyArray<Managed<any, any, any>>>(
...t: T & {
0: Managed<any, any, any>
}
): Managed<_R<T[number]>, _E<T[number]>, ForcedTuple<TupleA<T>>> {
return map_(collectAllPar(t), (x) => Tp.tuple(...x)) as any
}
/**
* Like tuplePar but uses at most n fibers concurrently,
* same as `forEachParN` + `identity` with a tuple type
*/
export function tupleParN(n: number): {
/**
* @ets_trace call
*/
<T extends NonEmptyArray<Managed<any, any, any>>>(
...t: T & {
0: Managed<any, any, any>
}
): Managed<_R<T[number]>, _E<T[number]>, ForcedTuple<TupleA<T>>>
} {
return ((...t: Managed<any, any, any>[]) =>
map_(collectAllParN_(t, n, accessCallTrace()), (x) => Tp.tuple(...x))) as any
}
|
|
This paper is part of the ImagingLab Robotics Library for DENSO Reference Guide and assumes you are familiar with the basic use of the library. For a review on these topics and an introduction to the library, follow the preceding link to the reference guide.
The bridge between the vision system and the robot is a shared coordinate system. The vision system finds a part and reports the location, but, to instruct the robot to move to that location, the system must convert the coordinates into units the robot accepts. Calibration allows the vision system to report positions in real-world units such as millimeters, which are used with the robot’s Cartesian coordinates. A common method for calibration is using a grid of dots. For more in-depth information on image calibration, refer to the NI Vision Concepts Manual. You can use this grid of dots to calibrate the vision system as well as calibrate the robot with the vision system. When calibrating the vision system, you must select an origin to define the x-y plane. Usually one of the corner dots is selected as the origin and then either a row or column is defined as the x-axis.
The robot’s method for creating a coordinate system is similar, so you can use the same dot on the calibration grid you select for the vision system’s origin as the robot’s origin. Simply move the robot to that dot, store the location as a position variable on the robot controller, move the robot in the x- and y-axes, and store those locations as position variables. You can store position variables by using either LabVIEW or the DENSO teaching pendant.

Once you have completed this, you can use the DENSO teaching pendant to autocalculate a coordinate system or work area based on the three stored positions. To use the autocalculate tool on the DENSO teaching pendant, navigate from the home screen to Arm»Auxiliary Functions»Work»Auto Calculate. Once in the work autocalculate menu, simply select the position variables that correspond to the origin, x-axis, and x-y plane position variables stored as previously mentioned. You can input the calibrated vision system’s positions directly into the robot’s Cartesian move VIs once this autocalculation is complete and the recently created work is set as the current one used. See Figure 10 for a simple example of passing the coordinates. The results passed directly to the robot move VIs also include the angle of the match, which is an output of the vision system’s geometric pattern matching results.

Other methods of calibration are to input the parameters of the coordinate system directly rather than using the built-in autocalculation on the teaching pendant. You can do this with the DENSO teaching pendant or LabVIEW. See the DENSO manual for more information on calibration methods for the robot.
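As an illustrative sketch (not NI or DENSO code; all coefficients below are invented), the mapping such a dot-grid calibration produces can be thought of as an affine transform from image pixels to robot millimetres:

```python
# Affine map from image pixels to robot millimetres, of the kind a
# dot-grid calibration produces. The coefficients here are made up.
def make_pixel_to_mm(a, b, tx, c, d, ty):
    """Return a function mapping (px, py) pixels to (x, y) millimetres."""
    def convert(px, py):
        return (a * px + b * py + tx, c * px + d * py + ty)
    return convert

# Suppose calibration found 0.2 mm/pixel, no rotation, and the grid's
# corner dot (the shared origin) at (50 mm, 30 mm) in robot coordinates.
pixel_to_mm = make_pixel_to_mm(0.2, 0.0, 50.0, 0.0, 0.2, 30.0)
```

Once the robot's work coordinate system is built from the same origin and axes, the converted (x, y) values can be fed to the Cartesian move VIs directly.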
3. Relative and Absolute Movements
In most cases, a camera has a fixed location relative to the robot, meaning that once you have completed a calibration for the robot and vision system, you can directly input the calibrated position output by the vision code into the robot move VIs. This approach uses absolute movements. In other cases, you can fix the camera to the end effector or some other moving device. Because the view of the camera changes, you must either update the calibration with the view or provide relative movements. When using relative movements, the target position is given as a relative offset from the current position; for example, the camera may require the target to be in the center of the frame on each acquisition. When the target is off-center, the image results command the robot to move a relative amount until the target is recentered. You also can use a hybrid system in situations where the fixed camera gives absolute movements to pick up parts and move to the assembly area. You use the second camera for precise guidance to the final position.
4. Parallel Processing
The ImagingLab Robotics Library for DENSO has a sequential API in which commands are executed in the order they are programmed, but in many applications, the robot control code may not be the only code that needs to execute. Vision acquisition and processing, HMIs and user interfaces, alarms management, control of feeding devices, and other communication processes are all tasks that can run in parallel with the robot control. LabVIEW features many architectures, such as a Master-Slave or Producer-Consumer architecture, that are capable of handling various tasks, and you can select an architecture based on your specific application. The image acquisition and processing does not need to be in-line with the robot commands when implementing vision-guided robotics applications, so using a parallel processing architecture allows these functions to occur at the same time as the robot commands and other control processes. This keeps the robot in continual motion because it does not have to wait for the vision code to complete before moving. Once the robot has left the viewing area, you can take a new image and implement the geometric pattern matching and inspection while the previous part is assembled or moved to its place location. Hence, once the previous part has been placed, the new part’s location is ready to be sent to the robot move VIs. The following is a sample flowchart for the vision code that runs in parallel with the robot VIs.
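The parallel split described above can be sketched (in Python stand-ins rather than LabVIEW, with hypothetical function names) as a classic producer-consumer pair:

```python
# Sketch of the producer-consumer split: the vision thread keeps producing
# part locations while the main loop consumes them, so the robot never
# waits for image processing. The "vision" and "robot" calls are stand-ins.
import queue
import threading

locations = queue.Queue(maxsize=1)

def vision_loop(frames):
    # Producer: acquire and process images, publish each part location.
    for frame in frames:
        locations.put(frame)  # stand-in for a pattern-match result

def robot_loop(n):
    # Consumer: command the robot with whatever location is ready next.
    done = []
    for _ in range(n):
        pos = locations.get()
        done.append(pos)  # stand-in for a Cartesian move VI call
    return done

t = threading.Thread(target=vision_loop, args=([1, 2, 3],))
t.start()
result = robot_loop(3)
t.join()
```

The bounded queue mirrors the whitepaper's pipeline: a new image is processed while the previous part is being placed, and its location is already waiting when the robot becomes free.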
5. Next Step
View the data sheet, product information, and webcasts on calibration and programming pick and place movements at the following link.
|
|
History of Roger's Community Garden
Reasons for Founding RCG
Roger's Community Garden started in 2008 after our initial student founders secured a grant leasing the property in the Southern Eucalyptus Grove of UCSD's Revelle College. At first, RCG was a small student community garden, a place where students could come together and grow fresh, organic food on campus. Soon, several other gardens also popped up in each of the residential colleges, enabling more students to take up gardening.
During the period between 2013 and 2019, RCG began to expand the scope of gardening activities. We installed irrigation pipelines and additional garden beds to support a much greater volume of produce. RCG now donates all extra produce from public plots to the Triton Food Pantry and Triton Food Recovery Network to combat hunger among students who don't have the time, money, or interest to garden themselves. We also branched out into more novel, small-scale agricultural projects such as aquaponics systems and gourmet mushrooms.
RCG is partnered with the Bioregional Center for Sustainable Research, Planning, and Design within the Urban Studies and Planning Department at UCSD. We work in tandem with faculty and administration to ensure transparency and cooperation with university guidelines.
Computer Science for Agriculture
Computer Science for Agriculture (CSA) exists to create a tie-in for computer science students, organizations, and faculty looking to integrate their computer science/engineering skills into environmental and food-based projects. CSA works broadly within the computer science field with students and groups dedicated to in-lab research. Many of our projects haven't been done on a widespread scale, so students are learning how to build, code, and design hardware and software packages for problems rarely worked on before.
Programming: C++, Python, Linux Kernel
Hardware: Raspberry Pi, Arduino, Sensors, Solenoids, Relays
Furthermore, students can also work in in-field research: deployment of sensors, mapping, data collection, and troubleshooting with the other various student projects. At CSA, the focus is not only on collecting useful data, but also on developing web/app-based methods to relay that data and make it coherent. Eventually, there are goals to start designing machine-learning algorithms to automate portions of the garden.
RCG is working on expanding and better integrating with the UCSD community at large. At RCG, we hope to design small spaces on campus to utilize UCSD's micro-climate to grow crops and provide relaxation and study spaces for students.
We also work with our sister garden, Ocean View Growing Grounds, to ensure a larger segment of the San Diego population has continuing access to fresh produce. Projects are first proposed and designed at RCG, and once proven, are built on a larger and more cost-effective scale at OVGG.
Realizing the problems caused by a rapidly increasing global population, University of California President Janet Napolitano has set up two system-wide goals: the Global Food Initiative and the Carbon Neutrality Initiative.
Furthermore, RCG hopes to make an impact on UCSD's climate goals. The UC Office of the President has asked campuses to reduce carbon emissions in a bid to become carbon-neutral by 2025. While there are some major ways to cut down on carbon, we simply cannot reach a net zero without taking into account food.
That's why RCG is proud to say we have both Carbon Neutrality and Global Food Initiative Fellows working to solve the problem of food waste and post-consumer waste emissions.
|
|
A schema is a logical name-space within a database, to which database objects may belong: each table, view, and domain in the database is identified with a particular schema, which is specified (or implied) when the object is created. Each schema in a database must have a unique name; however, the database objects contained within different schemas may have the same names, since each can be identified uniquely by specifying the schema to which it belongs.
Creating a Schema
The CREATE SCHEMA statement adds a schema to a database. The user executing this statement becomes the owner of the schema, and has full privileges for creating, altering, manipulating, and dropping objects within it. The CREATE SCHEMA statement has an optional AUTHORIZATION clause, used to specify an existing database user as the schema owner. A CREATE AUTHORIZATION statement may also add a schema to the database if it specifies a default schema that does not exist; the default schema is created with the new authorization as the owner. (Note that only a user with DBA privileges may issue CREATE AUTHORIZATION statements.)
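For example, the following statement creates a schema named s1 owned by the existing user u1 (the names are illustrative):
CREATE SCHEMA s1 AUTHORIZATION u1;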
Besides user-created schemas, each SAND database also contains two special schemas by default: the SYSTEM schema, which holds system-created tables dedicated to storing information about the structure of the database itself; and the PUBLIC schema, which contains system-created views based on selected tables in the SYSTEM schema.
The SYSTEM Schema
The SYSTEM schema contains the SAND CDBMS system tables, which store information about all objects in a database, as well as information relating to internal system management. The information stored in the system tables is continuously updated by SAND CDBMS to reflect changes in the database state. Only users with DBA privileges may access the system tables. Refer to Appendix C for more information about the system tables.
The PUBLIC Schema
The PUBLIC schema contains system views which any user may query to retrieve information about database objects they own or upon which they have privileges. The system views provide information about the following database objects:
Refer to Appendix C for more information about the PUBLIC views.
If no default schema is specified in a CREATE AUTHORIZATION statement, the initial schema is set to PUBLIC when the user connects to the database. Since the user DBA and the PUBLIC user have no default schema defined when the database is created, they initially start in the PUBLIC schema when connecting to the database.
When a user establishes a connection with a database, the current schema for that session is initially set to the user's default schema. This default is defined by the user DBA (or a user with DBA privileges) by means of a DEFAULT SCHEMA clause included in the CREATE AUTHORIZATION statement that creates the user authorization. A user's default schema can be set, as well as changed, through an ALTER AUTHORIZATION...SET DEFAULT SCHEMA statement.
If the default schema did not exist prior to the execution of the CREATE AUTHORIZATION statement, the schema is created with the new user as owner. If the default schema already existed, with another authorization as owner, the schema is established as the initial schema for the new user, but no actual privileges on the schema or its contents are conferred. In this case, if the new user requires the ability to add, drop, or manipulate objects in the schema, these privileges will have to be granted to the user separately by the DBA or another user with the appropriate privileges.
Schemas are much like operating system directories, in that a database user is considered always to be situated in a current (working) schema. Any reference to tables, views, and domains in an SQL statement is implicitly interpreted in the context of the current schema, unless the object names are qualified with the name of another schema. Tables, views, and domains in the current schema can be accessed without explicitly specifying the schema name.
The following statement, for example, drops table t1 from the current schema; this is an example of an unqualified table reference:
DROP TABLE t1;
The following statement drops table t2 from schema s2. The table name is qualified, and so the statement drops table t2 from schema s2 regardless of the current schema:
DROP TABLE s2.t2;
The present working schema can be changed using the SET SCHEMA statement.
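For example, the following statements make s2 the current schema and then drop its table t2 using an unqualified reference (the names are illustrative):
SET SCHEMA s2;
DROP TABLE t2;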
The DBA, and users with DBA privileges, can access information about all schemas in the database by querying the system table SYSTEM.SCHEMAS. Users without DBA privileges may query the system view PUBLIC.SCHEMAS for information about the schemas. Refer to Appendix C: SAND CDBMS System Tables/Views for more information about system tables and views.
|
|
Some error responses from AssemblyLine are HTML pages, which causes issues when we try to parse them as JSON
In some cases, AssemblyLine returns an error response with an HTML page instead of a JSON body. In at least one of these cases, it's because nginx is rejecting the request before AL itself has a chance to construct a JSON response.
For example:
Caused by: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('<' (code 60)): expected a valid value (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (String)"<html>^M
<head><title>413 Request Entity Too Large</title></head>^M
<body>^M
<center><h1>413 Request Entity Too Large</h1></center>^M
<hr><center>nginx/1.19.2</center>^M
</body>^M
</html>^M
"; line: 1, column: 2]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1851)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:707)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportUnexpectedChar(ParserMinimalBase.java:632)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._handleOddValue(ReaderBasedJsonParser.java:1947)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:776)
at com.fasterxml.jackson.databind.ObjectMapper._initForReading(ObjectMapper.java:4664)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4513)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3468)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3451)
at ca.gc.cyber.ops.assemblyline.java.client.clients.AssemblylineClient.extractApiErrorMessage(AssemblylineClient.java:419)
at ca.gc.cyber.ops.assemblyline.java.client.clients.AssemblylineClient.lambda$checkForException$19(AssemblylineClient.java:396)
at reactor.core.publisher.MonoCallable.call(MonoCallable.java:91)
at reactor.core.publisher.FluxSubscribeOnCallable$CallableSubscribeOnSubscription.run(FluxSubscribeOnCallable.java:227)
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:68)
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:28)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
... 1 common frames omitted
I think the best way to handle cases like this is to catch the JsonParseException and return the response body verbatim. If specific non-JSON responses need additional special handling in the future, we can cross that bridge when we come to it.
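A minimal sketch of that fallback (the `parseJsonMessage` helper here is a hypothetical stand-in for the client's Jackson-based parsing, not the actual AssemblylineClient code):

```java
public class ErrorBodyFallback {

    // Hypothetical stand-in for the Jackson-based parse used by the client;
    // throws IllegalArgumentException when the body is not JSON.
    static String parseJsonMessage(String body) {
        String t = body.trim();
        if (!t.startsWith("{") && !t.startsWith("[")) {
            throw new IllegalArgumentException("not JSON");
        }
        return t;
    }

    // Proposed handling: if parsing fails (e.g. an nginx HTML error page),
    // return the response body verbatim instead of propagating the exception.
    static String extractApiErrorMessage(String body) {
        try {
            return parseJsonMessage(body);
        } catch (IllegalArgumentException e) {
            return body;
        }
    }
}
```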
... I should have made sure I was using the latest version of the client first...
|
GITHUB_ARCHIVE
|
Upgrading 4.3.2 -> 4.4.1 causes mypy type checking to fail
Hi,
to keep it short, when I upgraded 4.3.2 -> 4.4.1, mypy started throwing the following errors:
Skipping analyzing "troposphere": module is installed, but missing library stubs
or py.typed marker [import]
from troposphere import Ref
^
Skipping analyzing "troposphere.ecs": module is installed, but missing library
stubs or py.typed marker [import]
from troposphere.ecs import Environment as EnvironmentVariable
^
Skipping analyzing "troposphere": module is installed, but missing library stubs
or py.typed marker [import]
from troposphere import (
^
Skipping analyzing "troposphere.applicationautoscaling": module is installed,
but missing library stubs or py.typed marker [import]
from troposphere.applicationautoscaling import (
^
Skipping analyzing "troposphere.cloudwatch": module is installed, but missing
library stubs or py.typed marker [import]
from troposphere.cloudwatch import Alarm, MetricDimension
^
Skipping analyzing "troposphere.ec2": module is installed, but missing library
stubs or py.typed marker [import]
from troposphere.ec2 import SecurityGroup, SecurityGroupEgress, Securi...
^
Skipping analyzing "troposphere.ecs": module is installed, but missing library
stubs or py.typed marker [import]
from troposphere.ecs import (
^
Skipping analyzing "troposphere.efs": module is installed, but missing library
stubs or py.typed marker [import]
from troposphere.efs import (
^
Skipping analyzing "troposphere.elasticloadbalancingv2": module is installed,
but missing library stubs or py.typed marker [import]
from troposphere.elasticloadbalancingv2 import (
^
mypy config is
[tool.mypy]
python_version = "3.11"
strict = true
pretty = true
show_error_codes = true
warn_unused_ignores = false
plugins = ["pydantic.mypy"]
Pydantic version is "==1.10.12"
Mypy version is mypy = "1.5.1"
Please let me know if more information is needed.
There was an issue with py.typed being included in Release 4.3.2, which was fixed. This is related to this earlier issue describing why it should not be included yet. For now, exclude troposphere from the mypy checks.
It's probably a mypy related question, but I'd rather bring it up here, as this whole mypy "which modules to ignore and how" business is pretty confusing (seemingly to many).
I've tried to add both below to the mypy.ini, but neither of them helped:
[mypy-troposhpere.*]
ignore_missing_imports = True
[mypy-troposhpere]
ignore_missing_imports = True
Still getting error: Skipping analyzing "troposphere": module is installed, but missing library stubs or py.typed marker [import-untyped] when running mypy on the selected files.
Putting it to the top level makes running mypy ignore the issue, but that's not ideal as it affects other packages as well:
[mypy]
ignore_missing_imports = True
This results in Success: no issues found in 82 source files.
(It's another issue though that with this setting I'm still getting Stub file not found for "troposphere" from PyLance in VSCode.)
I'd really appreciate any recommendations on how to get this working properly; I've spent at least an hour on this, specifically because of troposphere.
@g-borgulya I was able to reproduce with your example. I believe the issue was the spelling of "troposphere" in the section names. Here's my results correcting that issue.
$ cat m.py
import troposphere
print(troposphere.__version__)
t = troposphere.Template()
print(t.to_json())
$ python m.py
4.5.3
{
"Resources": {}
}
$ cat mypy.ini
[mypy]
[mypy-troposphere.*]
ignore_missing_imports = True
[mypy-troposphere]
ignore_missing_imports = True
$ mypy m.py
Success: no issues found in 1 source file
$
Let me know if this works for you. I do need to create a tracking issue and branch to get better typing support implemented.
I so much appreciate it, @markpeek ! Thank you. Yes, mypy works with the fixed settings.
|
GITHUB_ARCHIVE
|
Single assignment is really an example of name binding and differs from assignment as described in this article in that it can only be done once, usually when the variable is created; no subsequent reassignment is allowed.
Now book tokens for fuel and other items with a single click. One of the best Java project ideas to undertake and impress teachers.
Ph.D. qualifications from reputed universities across the world. We have assignment help experts for each and every subject, and we are constantly growing the team by hiring the best assignment writers to provide quality assignment help.
This system will help catering companies manage their businesses effectively. They can take care of their resources, available people, and timings well. This system will make sure that the right number of people and staff is allocated to each event.
You reply, "I can take a shot at it" but accidentally end up including a clumsy word (sh*t). Oops. We will create a Python program that detects curse words, and saves clumsy e-mail writers from embarrassing moments.
Our services not only make learning easier but also bring out your hidden abilities, and in a very modern way. Our management tutors are highly proficient and always geared up to serve you better, and in a way that fascinates you. Our services are relevant for all management related subjects such as finance, marketing, operations, and so on.
Government hospitals can use this system to see that all the reports produced by doctors are accessible from one window.
One system that takes in all the data and prepares bills and usage allowances accordingly. This one system manages things very well for companies and for individual users.
We are the leading online dissertation writing service provider in the US, and students can request our online dissertation help to learn how to write a perfect dissertation.
Building a system that keeps a record of all the new jobs in the pipeline will not only help you get good marks but will also help you understand how the online world works.
A one-stop shop that allows people and institutions to store all identity-related information with great ease. You can always use this system to make their lives better and easier.
You'll pick up some wonderful tools for your programming toolkit in this course! You will: start coding in the programming language Python;
Upload your management assignment or homework on our website, or alternatively you can mail us at our e-mail ID, i.e. email@example.com. Our tutors will go through your assignment thoroughly and once they are 100% sure of the answer, we will get back with the best price quote.
Knowledge is your reward. Use OCW to guide your own life-long learning, or to teach others. We do not offer credit or certification for using OCW.
|
OPCFW_CODE
|
'use strict';

var
    _      = require('underscore'),
    model  = require('../model/db'),
    moment = require('moment');

function LeaveRequest(args) {
    var me = this;

    // Make sure all required data is provided
    _.each(
        [
            'leave_type', 'from_date', 'from_date_part',
            'to_date', 'to_date_part', 'reason'
        ],
        function(property){
            if (! _.has(args, property)) {
                throw new Error('No mandatory '+property+' was provided to LeaveRequest constructor');
            }
        }
    );

    // From date must not be later than To date
    if (moment.utc(args.from_date).toDate() > moment.utc(args.to_date).toDate()){
        throw new Error( 'From date should be before To date at LeaveRequest constructor' );
    }

    _.each(
        [
            'leave_type', 'from_date', 'from_date_part',
            'to_date', 'to_date_part', 'reason', 'user'
        ],
        function(property){ me[property] = args[property]; }
    );
}

LeaveRequest.prototype.as_data_object = function(){
    var obj = {},
        me  = this;

    _.each(
        [
            'leave_type', 'from_date', 'from_date_part',
            'to_date', 'to_date_part', 'reason', 'user'
        ],
        function(property){ obj[property] = me[property]; }
    );

    return obj;
};

LeaveRequest.prototype.is_within_one_day = function(){
    return moment.utc(this.from_date).format('YYYY-MM-DD')
        ===
        moment.utc(this.to_date).format('YYYY-MM-DD');
};

LeaveRequest.prototype._does_fit_with_point = function(leave_day, point_name){
    var return_val = false;

    if (
        (
            moment.utc(leave_day.date).format('YYYY-MM-DD')
            ===
            moment.utc(this[point_name]).format('YYYY-MM-DD')
        )
        &&
        (! leave_day.is_all_day_leave())
        &&
        (String(this[point_name+'_part']) !== String(model.Leave.leave_day_part_all()))
        &&
        (String(leave_day.day_part) !== String(this[point_name+'_part']))
    ) {
        return_val = true;
    }

    return return_val;
};

// Check whether the start or end date of the current object "fits" with the
// provided leave_day instance. By "fitting" we mean the days are the same and
// both of them are half-day leaves covering different halves of the day.
//
LeaveRequest.prototype.does_fit_with_leave_day = function(leave_day){
    return this._does_fit_with_point(leave_day, 'from_date')
        ||
        this._does_fit_with_point(leave_day, 'to_date');
};

LeaveRequest.prototype.does_fit_with_leave_day_at_start = function(leave_day){
    return this._does_fit_with_point(leave_day, 'from_date');
};

LeaveRequest.prototype.does_fit_with_leave_day_at_end = function(leave_day){
    return this._does_fit_with_point(leave_day, 'to_date');
};

module.exports = LeaveRequest;
|
STACK_EDU
|
Website Development Services
Maximize your digital potential with our expert website development services, delivering top-notch web applications that are user-friendly, scalable, and optimized for search engines.
Request a Quote
Expert Website Development Solutions
MVS's Website Development Methodology
Design and Planning
Maintenance and Support
Our Website Development Services
Web Application Development
E-commerce Web Development
Website UI/UX Design
Content Management Systems
Website Maintenance and Support
Your Website Development Partner
Top Reasons for Choosing MVS for Website Development
Excellent Customer Support
Your Questions Answered (FAQ)
Website Development Services-FAQ
Q. What is web app development?
Web app development is the process of creating web applications that can be accessed through the internet using a web browser or a mobile device. These apps are developed using programming languages, frameworks, and libraries to provide users with a seamless and responsive experience.
Q. What programming languages does MVS use for web app development?
Q. What are front-end technologies used in web app development?
Q. What are back-end technologies?
Back-end technologies are the tools and frameworks used to create the server-side functionality of a web application, including database management, user authentication, and business logic. Common back-end technologies include programming languages like Java, Python, and PHP, runtimes and frameworks like Node.js, Django, and Laravel, and platforms like Joomla, WordPress, OpenCart, and Magento.
Q. What types of web apps does MVS develop?
MVS develops a wide range of web apps, including but not limited to e-commerce sites, social media platforms, content management systems, and custom software solutions.
Q. What is the difference between front-end and back-end development?
Front-end development is focused on building the user interface and user experience of a web application, while back-end development is focused on managing the server side of a web application. Both front-end and back-end technologies work together to create a complete web application.
Q. What is responsive web design?
Responsive web design is an approach to web design that focuses on building websites that can adapt to different screen sizes and devices. This ensures that users have a seamless experience regardless of the device they are using to access the website.
Q. How long does it take to develop a web application?
The time it takes to develop a web application depends on the complexity of the project, and the number of features required. A simple web application can be developed in a few weeks, while a more complex application can take several months to a year or more.
Q. What is the MVS pricing model for web app development?
MVS pricing model for web app development is based on the scope and complexity of the project. We provide a detailed project proposal and cost estimate to our clients before starting development.
Q. What is the MVS development process for web applications?
The MVS development process typically involves several stages, including discovery and planning, design and prototyping, development and testing, deployment, and ongoing maintenance. MVS works closely with our clients throughout the process to ensure their needs and goals are met at every stage.
Q. Does MVS provide maintenance and support for web applications after they are launched?
Yes, MVS offers ongoing maintenance and support services for our web applications to ensure they continue to function optimally over time. This may include security updates, bug fixes, and feature enhancements as needed.
Q. Can MVS integrate our web application with third-party services or APIs?
Yes, MVS has experience integrating web applications with a wide range of third-party services and APIs, including popular platforms like Salesforce, Shopify, and Stripe, as well as custom integrations with specific software systems.
Q. How does MVS ensure the security of web applications?
MVS takes security very seriously and implements a range of best practices to ensure the safety and privacy of our clients' data. This may include data encryption, user authentication and access control, regular security audits, and more.
Q. Can MVS develop web applications that are optimized for mobile devices?
Yes, MVS can create web applications that are optimized for mobile devices using responsive design techniques, which allow the app to adjust its layout and functionality based on the user's device and screen size.
Q. What is front-end development?
Q. What are the most popular front-end development technologies?
Q. What is a front-end framework?
A front-end framework is a pre-built set of tools, components, and libraries used to simplify and speed up the front-end development process. Examples of popular front-end frameworks include Tailwind, Bootstrap, Foundation, and Materialize.
Q. What is responsive design?
Responsive design is a design approach that ensures that a website or application is optimized for different devices and screen sizes, such as desktops, laptops, tablets, and smartphones. It involves using CSS media queries and flexible grid systems to adapt to different screen sizes.
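As an illustrative sketch of the media-query approach described above (the `.grid` class name is hypothetical), a flexible grid that collapses on narrow screens can be as simple as:

```css
/* Hypothetical example: a fluid grid that adapts to screen size. */
.grid {
  display: grid;
  grid-template-columns: repeat(3, 1fr); /* three equal columns on desktop */
  gap: 1rem;
}

/* Below 600px wide, stack the columns for phones. */
@media (max-width: 600px) {
  .grid {
    grid-template-columns: 1fr;
  }
}
```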
Q. What is version control?
Version control is a system for managing changes to files over time. It allows developers to keep track of changes, collaborate with others, and revert to previous versions if necessary. Popular version control systems for front-end development include Git and SVN.
Q. What is the development process followed by MVS?
Our agency follows an agile development process that involves regular client communication, wireframing, design, development, testing, and deployment. We ensure that the project is delivered on time and within budget.
Q. What is the role of a front-end developer in web development?
Q. What is cross-browser compatibility?
Cross-browser compatibility is the ability of a website to function and display consistently across different web browsers, such as Chrome, Firefox, and Safari.
Q. What is web accessibility?
Web accessibility is the practice of designing and developing websites that can be used by people with disabilities. This includes making sure the website is usable by people with visual, auditory, motor, and cognitive impairments.
Q. What is Backend Development?
Backend Development is the process of building the server side of a website or web application. It includes creating and maintaining the database, server-side logic, and APIs that communicate with the front end.
Q. Why is backend development technology important for my business?
Backend development technology is essential for your business because it is responsible for handling the data processing and storage, security, and overall performance of your website or application. It helps ensure that your website or application is efficient, secure, and scalable.
Q. What is PHP?
PHP is a server-side scripting language that is widely used for web development. It is popular because it is easy to learn, has a large developer community, and is supported by most web hosting providers.
Q. What is the difference between a programming language and a framework?
A programming language is a set of instructions that are used to create software applications, whereas a framework is a collection of tools, libraries, and conventions that help developers build software applications more quickly and efficiently.
Q. What is an API?
An API (Application Programming Interface) is a set of protocols and standards used for building and integrating software applications. It allows different applications to communicate with each other.
Q. What is RESTful API?
RESTful API is a type of web service that uses the HTTP protocol to allow different software systems to communicate with each other. It follows a set of constraints and principles that make it easy to use, scalable, and maintainable.
Q. What is Cloud Computing?
Cloud computing is the delivery of computing services, including servers, storage, databases, networking, software, analytics, and intelligence, over the internet.
Q. Which Cloud services are popular for backend development?
Some popular cloud services used for backend development are Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and IBM Cloud.
Q. What is a Framework?
A framework is a set of pre-written code that can be used to build software applications. It provides a structure for developers to create their applications.
Q. What are the benefits of using Backend Development Technologies?
Benefits of using Backend Development Technologies include faster development, scalability, security, and better performance. They also allow for the creation of APIs that can be used by different clients, such as mobile apps and other web applications.
|
OPCFW_CODE
|
Off grid solar PV system for developing countries
I am designing an off-grid system for developing countries for charging cellular phones, tablet pads, and notebook computers. The design aim is to use low cost, simple, mature (slightly older), and mass-manufactured technologies. Your ideas and contributions are most welcome.
I plan to use ready made modules from ebay, etc.
Is a monocrystalline solar panel better for this task? What points should I take care of when choosing the PV panel and controller? 100W panels seem to be the mainstream size.
For notebook charging, I plan to use an off-the-shelf 12V-to-19V simple boost converter, 6 to 8A. There is much excellent information in this post: Why do many laptops run on 19 volts?
To what extent is 19V 'standardized'? Some posts mentioned that some
brands use a special '3 wire' charger that communicates between the
notebook PC and charger, so that non-brand chargers will not work.
To what extent is the power connector size standardized? Do I need
to get one of those multiple-size adapter kits?
https://en.wikipedia.org/wiki/DC_connector
For charging of 'slightly older' phone (USB 2, 5V, 0.5A), I plan to use an off the shelf 12 to 5V USB simple buck converter module.
How about those latest phones and notebooks that use USB 3.1, USB-C,
Quick Charge, or Power Delivery mode, which are higher current, voltage,
or both? Are they backward compatible? What will they do when
connected to a simple (no intelligent communication) buck converter
supplying 5V?
For use in developing countries, there is no need to chase the
cutting edge. However, if it is low cost and easy to do, what are
the prospects of something better than the basic "USB 2 at 5V 0.5A"?
Please don't ask so many, quite different, questions in one question. It's hard to impossible to provide an answer which will answer all of your questions in a good manner.
Sorry, will note in future. It is a single project
There are several companies producing this type of power system either for lights or vaccine fridges - a bit of research should help...
There are quite a few questions here. Lets try to get them in order.
MonoCrystalline PV panels are just a type of panel. They have their own intrinsic properties and for a long time (and still to a certain extent) had/have a better photon efficiency (more current per unit light). PolyCrystalline are just fine as well; they tend to be cheaper to make and buy. As to your project, I don't see a particular reason why you'd NEED one over the other, so either would be fine.
Boost converters are likely not the way to go for laptops. Most require a decent AC line for their power bricks, and even if you wanted to charge them with DC (which you can), there is a range of voltage tolerances and not every laptop charges at "19V." There are plenty of manufacturers, and chargers vary between 19V, 19.5V, and 20V. Charging a laptop at the wrong voltage will damage it and may cause a fire. For your convenience and for the convenience of anyone using your solar station, find an inverter that can put out a few hundred watts and has two standard wall outlets for whichever nation you're taking this to. This also helps in that you don't have to have 15 different DC barrel jacks lying about to keep track of.
USB 3.1 chargers are 5V @ 900mA. USB charging on older phones could range up to 5V @ 2.1A. USB-C is complicated. The Power Delivery standard uses the CC pins to negotiate the voltage AND the current. These range from 5V @ 500mA all the way to 20V @ 5A. Quick Charge (Qualcomm) is proprietary and so is Dash Charge (OnePlus). You'd likely run into some issues implementing those. Since you asked for a suggestion, 5V @ 2.1A with a USB-A port is the best bet and supports the most phones. Anything other than that is rarer in use, and that is another good reason to have an inverter for AC lines. If you have those, you can have people bring and use their own chargers for their own phones and not run into any issues with charging speed, intellectual property, or other mishaps.
As Dave notes, the voltage is not totally standardised. It could be 18V or 20V or something else. The connectors are not standardised at all. Some laptops, including DELL, use a third wire. This allows the laptop to communicate with the charger, to make sure they are compatible.
|
STACK_EXCHANGE
|
package mobilize

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
	"net/url"
)

type ListEventsRequest struct {
	OrganizationID  int
	TimeslotStart   string
	TimeslotEnd     string
	ZipCode         string
	MaxDistance     int
	GroupDateFormat string
}

type ListEventsResponse struct {
	Count    int
	Next     string
	Previous string
	Data     []Event
}

// ListEventsByDate fetches all events for an organization from the Mobilize
// API, following pagination, and groups them by formatted start date.
func ListEventsByDate(req ListEventsRequest) (map[string][]Event, error) {
	listURL, _ := url.Parse(fmt.Sprintf("https://api.mobilize.us/v1/organizations/%d/events", req.OrganizationID))

	params := url.Values{}
	if req.TimeslotStart != "" {
		params.Add("timeslot_start", req.TimeslotStart)
	} else {
		params.Add("timeslot_start", "gte_now")
	}
	if req.TimeslotEnd != "" {
		params.Add("timeslot_end", req.TimeslotEnd)
	}
	if req.ZipCode != "" {
		params.Add("zipcode", req.ZipCode)
	}
	if req.MaxDistance != 0 {
		params.Add("max_dist", fmt.Sprintf("%d", req.MaxDistance))
	}
	listURL.RawQuery = params.Encode()

	timeFormat := req.GroupDateFormat
	if timeFormat == "" {
		timeFormat = "Monday, January 2, 2006"
	}

	events := make(map[string][]Event)
	next := listURL.String()
	for next != "" {
		response, err := http.Get(next)
		if err != nil {
			return events, err
		}
		data, err := ioutil.ReadAll(response.Body)
		response.Body.Close() // avoid leaking connections across pages
		if err != nil {
			return events, err
		}
		var listResponse ListEventsResponse
		// Report malformed bodies instead of silently ignoring them.
		if err := json.Unmarshal(data, &listResponse); err != nil {
			return events, err
		}
		for _, e := range listResponse.Data {
			// Skip events with no timeslots to avoid an index panic.
			if len(e.Timeslots) == 0 {
				continue
			}
			date := e.Timeslots[0].StartDate.Time().Format(timeFormat)
			events[date] = append(events[date], e)
		}
		next = listResponse.Next
	}
	return events, nil
}
|
STACK_EDU
|
Java is an extensively used programming language among developers and programmers. At this time, it holds the top position on the list of most popular programming languages. Because of its great importance in modern technologies, computer science coursework encourages pupils to craft many assignments in Java. But owing to the intricacies that are part of Java, students often struggle to learn this language and apply it in the practical domain.
You should also try to be as independent as possible from particular platforms, hardware, and software. Otherwise your project becomes incident-prone and you could spend a lot of time waiting for someone to fix something without being able to do anything useful. Also reflect on the following note:
Java programming is just not everyone's cup of tea... and being expected to create a program that will say the alphabet backwards calls for more help than not. So what can you do? Look into these three ways you can beat the odds and get homework done with ease.
It is the trendiest language among programmers on account of its speed and procedures. If students try to build an exceptional iOS application, then they had better understand this language and put its application into practice. Despite being easy to learn, students encounter problems when understanding the basic coding concepts.
Be it computer architecture, cryptography, internet programming, or error detection and code writing, our experts have successfully delivered assignments to various learners and helped them fare well.
Draft a work plan as quickly as possible and try to keep to the schedule. A work plan is not final: change it when you need to. Its most important purpose is to provide feedback on how much time you will still need, and to prevent you from being too thorough at the start and dropping important issues at the end due to a deadline. Get all changes to your work plan checked with your supervisor.
This is where our experts come into the picture. Owing to their strong familiarity with computers, our experts are capable of helping students throughout their academic career. Before placing an order with us, take a close look at our computer science experts.
Computer science is a broad field of knowledge, as it is divided into a number of sub-fields. Being a reputed assignment writing service provider in the United Kingdom, we provide complete assignment help in all areas of computer science.
This process should be so easy that you can start it in the background while working, without needing to think about it. Long-term backup (zip, preferably a separate hard disk drive or even on floppy discs), remote (another machine).
We not only offer relevant, good quality content, but also deliver every assignment promptly. Our aim is to help students achieve their assignment goals successfully. So, we provide timely mentoring and efficient writing solutions.
As they have years of experience in solving computer science assignments, they are likely to take less time compared to novices. Students can use the solutions crafted by experts as a guideline for their subsequent projects.
Computer science is a practical and scientific approach to computation. The students of computer science ordinarily have to deal with a range of technical processes, programming languages, and coding practices.
Ask us a question that you're stuck on and we'll connect you with a tutor who knows how to solve your problem. Meet up with a tutor in the online classroom and be able to go over your issue in real time through our interactive whiteboard.
One appealing use of a small mobile robot is to go up to people in a room, and offer to provide them with information. For example, at an Admissions Open Day such a robot could provide them with information about where to find the lecture theatre, or how to find a particular school. We have a suitable small robot (), which includes a Kinect sensor to allow it to sense its environment, and a netbook for its 'brain'.
|
OPCFW_CODE
|
I don't think the essay was a defense of C. It seemed to be a complaint that Rust doesn't optimize for what C programmers expect. His primary criticism is absence of a spec.
Indeed. Beware of the fallacy that code is complicated because the problem is complicated. While there are always irreducible complexities in software, my experience indicates that the bulk of software complexity stems from complying with the needs of other components in the system. Rarely is the problem being solved so complex that one or two people can't understand it in its entirety.
@vertigo Yeah! The difference between incidental and essential complexity is an important one.
I guess if you're thinking of replacing most or all of the components, then you have a chance to remove some of the complexity that is essential to one existing component (for interop) but incidental within the larger scheme.
@natecull @Shamar @mala
At the same time, he's got a graduate degree in sociology & was raised in a very literature focused setting. Xanadu was very much influenced by established ideas about publishing, and focused on the scholar.
When XOC spun off, the deal was that Ted would have no control over the tech but would own the flagship publishing platform, taking subscriptions & basically doing what Medium is doing.
@natecull @Shamar @mala
Ted's a special case, & not quite in the same circle as Stewart Brand. You gotta understand: he's the child of a silent film star & a TV producer, & is just slightly too old to have been at the peak of psychedelia. (He took Tim Leary's classes right before Leary found mescaline, & filmed Lilly's dolphin research before Lilly started self-experimentation.) For him, computers are rightly show-business.
Certainly the Stewart Brand - Steve Wozniak - Steve Jobs type of Silicon Valley 1970 technohippies wouldn't quite have used the word 'people struggles' because they weren't really coming at things from a classical left type of perspective where everything is a 'struggle'.
But they would and did think in terms of concepts like 'decentralisation' and not all of that was purely marketing rhetoric.
I'm interested in how that vision lent itself to centralisation.
From my perhaps warped perspective, I think part of the problem is the right-libertarian idea that basically you don't need to care about anyone else because the Market will just magically sort everything all out for you. All you need to do is improve your own (life, body, mind, soul, etc) and you'll get healthy/calm/rich and it's all about Personal Responsibility.
I actually believe in part of the psychedelic vision... but I think it needs collective action.
Yup. Success is a great way to make yourself miserable, because nothing can ever live up to the kind of satisfaction one associates with a seemingly unachievable goal (including achieving it).
We are already ruled by “private governments,” and they suck. The “private governments” created by companies such as Amazon, Google, and Facebook are as stupid and corrupt as conservatives think our real government is.
Somebody call Tim Leary; I think he's needed again: https://www.theguardian.com/technology/2019/mar/26/acid-test-how-psychedelic-virtual-reality-can-end-societys-mass-bad-trip
Alan Kay brings the shade: https://www.quora.com/What-will-Silicon-Valley-do-once-it-runs-out-of-Doug-Engelbarts-ideas
It's funny how they managed to write a whole article about this hypothesis without ever citing Adam Smith's #InvisibleHand which stands at the root of #Capitalism justification: https://en.wikipedia.org/wiki/Invisible_hand
People subdued by Capitalism's #Hegemony cannot think of anything that could effectively challenge its position.
As #Freire wrote, the oppressed internalize the oppression and protect the oppressors.
An instance that aims to be welcoming to queer, feminist and anarchist people, as well as their sympathizers. We are mainly French-speaking, but you are welcome whatever your language.
|
OPCFW_CODE
|
Implement Fine-Grained Security, or Get Left Behind
The biggest shift in security over the last decade has been from perimeter-based approaches to zero trust. Multicloud and remote work have accelerated this trend. But what people don’t talk about is how zero trust requires a similar shift from coarse-grained to fine-grained security.
Coarse-Grained Security Is Old School
Coarse-grained security deals with networks, users and applications. If you’re on a corporate network, you have network access to everything on that network. If you’re authenticated as a user, you have login access to a particular set of applications. And if you can log in to an application, you have role-based access depending on the roles or groups you’re part of.
Security logs and observability are likewise coarse-grained. Traditional firewalls, identity providers and applications tell you who logged in and when, but often don’t capture any application-specific data beyond that.
Fine-Grained Security Is What’s Next
A zero-trust mindset requires a shift to fine-grained security. Here are a few elements of a fine-grained, zero-trust approach to application security.
Policy-Based Access That Defaults to ‘Deny’
For organizations to get a handle on access control, the authorization logic needs to be lifted out of applications and described in an authorization policy. These policies should default to denying access, and only allow access when a user, workload or device passes authorization criteria.
Decoupling the authorization policy from the application code allows security teams to own the life cycle of that policy and evolve it in a safe and predictable way. Treating policy as code and storing it in a source code repository provides a full audit trail for policy evolution. Building policy code into immutable, signed images allows security teams to deploy trusted versions and roll back to previously known good images, enabling a secure software supply chain for authorization policies.
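To make the default-deny principle concrete, here is a minimal sketch in Python. This is a toy evaluator, not a real policy engine such as OPA, and the rule names and request shape are invented:

```python
# Toy default-deny evaluator: a request is allowed only if at least one
# policy rule explicitly allows it. The rules live outside the app code.

def is_admin(request):
    return request.get("role") == "admin"

def is_owner(request):
    return request.get("user") == request.get("resource_owner")

POLICY_RULES = [is_admin, is_owner]  # versioned as code in practice

def authorize(request, rules=POLICY_RULES):
    # no matching allow rule means no access (default deny)
    return any(rule(request) for rule in rules)
```

In a real deployment these rules would be expressed in a policy language and shipped as signed, immutable images, as described above.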
Enforcing the Principle of Least Privilege
Applications that define coarse-grained roles make it really hard for application administrators to give users just enough access, but no more than is strictly necessary. Moving to a fine-grained access control model involves one or more of the following techniques:
Defining Fine-Grained Permissions as the Basis for Access Control
Applications should have permission checks before every access to a protected resource. But instead of checking whether a user is in a role, an application should check whether the user has a discrete permission. This way, the mapping between roles and permissions can be done outside of the application. This enables easily assigning new permissions to existing roles, creating new roles based on customer requirements and even allowing customers to define custom roles that aggregate a set of application permissions.
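A sketch of what this looks like, with invented role and permission names; the point is that the role-to-permission mapping sits outside the permission check:

```python
# Hypothetical role -> permission mapping, maintained outside the app,
# so new roles can be composed without changing application code.
ROLE_PERMISSIONS = {
    "viewer": {"document.read"},
    "editor": {"document.read", "document.write"},
}

USER_ROLES = {"alice": {"editor"}, "bob": {"viewer"}}

def has_permission(user, permission):
    # the application checks a discrete permission, not a role
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))
```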
Disentangling ‘Roles’ and ‘Groups’
A role is a collection of permissions, and a group is a set of users. Yet most applications conflate the two. Modern authorization systems enable the creation of groups which aggregate users, and then allow an administrator to assign these groups into roles. This way, an application administrator can control how they group their users and have a single place to manage the mapping of roles (and transitively permissions) to those groups of users.
Fine-Grained Access Control over Application Resources
Applications have a resource hierarchy, also known as a domain model. For multitenant SaaS applications, “tenants” are at the top of this resource hierarchy. For these applications, the simplest authorization policy ensures that users who have a relationship with a particular tenant (e.g., viewers, admins) should only have access to resources scoped within that tenant.
Most applications further organize their tenant-scoped resources into a hierarchy: teams, projects, lists, folders and even individual items. Ultimately, customers want fine-grained access control for each layer in the hierarchy. This type of access control is increasingly known as “Relationship-based Access Control” or ReBAC.
ReBAC adopts RBAC’s concept of roles (aka “relations”) as a mechanism to aggregate a set of permissions but provides the ability to create relations between subjects (users/groups) and objects at any level of the resource hierarchy, which results in much finer-grained access than traditional RBAC.
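A minimal ReBAC sketch, using an invented in-memory tuple store: relations link subjects to objects, and a check walks the resource hierarchy up to the tenant:

```python
# (object, relation, subject) tuples; "parent" links form the hierarchy
RELATIONS = {
    ("tenant:acme", "admin", "user:alice"),
    ("project:web", "parent", "tenant:acme"),
    ("doc:spec", "parent", "project:web"),
    ("doc:spec", "viewer", "user:bob"),
}

def parent(obj):
    for o, rel, subj in RELATIONS:
        if o == obj and rel == "parent":
            return subj
    return None

def can_view(user, obj):
    # direct viewers see the object; tenant admins see everything below
    while obj is not None:
        if (obj, "viewer", user) in RELATIONS or (obj, "admin", user) in RELATIONS:
            return True
        obj = parent(obj)
    return False
```

Zanzibar-style systems evaluate much richer relation rewrites than this, but the shape of the check is similar.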
Utilizing Attributes in Addition to (or in Place of) Group Membership
Modern applications often want to make access decisions based on dynamic user attributes, such as what department the user belongs to, or whether they are a manager. When the user moves to a different department, the application still evaluates permissions correctly without the admin having to manually deprovision and provision the user into a new group. This pattern is an instance of “Attribute-based Access Control” or ABAC.
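A toy ABAC check (the attribute names and the rule are invented): because the decision reads live attributes, a department change flows through without regrouping the user:

```python
def can_approve_expense(user_attrs, amount):
    # finance managers may approve expenses up to their limit
    return (user_attrs.get("department") == "finance"
            and bool(user_attrs.get("is_manager"))
            and amount <= user_attrs.get("approval_limit", 0))
```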
Organization-Aware Access Decisions
Management relationships are inherent in every organizational structure. Many applications want to enable managers to perform actions on their reports, which requires an understanding of a very specific type of user attribute which describes management relationships.
For example, instead of creating a group that contains all of an executive’s department and having to manage it explicitly, an access control policy can be constructed that walks the organizational graph and determines whether a user is in that executive’s organization.
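Sketched in Python with an invented reporting map, such a policy walks the management chain instead of consulting an explicitly maintained group:

```python
MANAGER_OF = {"carol": "dave", "dave": "erin"}  # employee -> manager

def in_org(user, executive):
    # walk upward through the management chain
    while user is not None:
        if user == executive:
            return True
        user = MANAGER_OF.get(user)
    return False
```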
Some applications need to behave differently based on where a user is logged in from, what region the application is running in or the day/time an operation is invoked. This is often due to regional differences in service levels or compliance requirements (such as GDPR). Access control policies should make it possible to combine environment-based attributes with other fine-grained access control techniques.
Fine-Grained Security Logging and Events
A sizable portion of the security industry focuses on capturing and processing security events in product categories such as security information and event management (SIEM), and security orchestration, automation and response (SOAR).
Here, too, the coarse-grained security mindset of the past is giving way to the fine-grained security approach. Rather than merely recording logins, we need decision logs for every access check that an application makes over the course of a user’s session. This helps find and fix over-provisioned or misconfigured permissions much more effectively.
This transformation has to start with applications. Developers need an easy way to incorporate fine-grained security into their applications. This requires lifting all of the authorization logic out of the application and delegating authorization calls to a service that makes authorization decisions based on an externalized policy.
This fine-grained security service needs to reason about user, resource and environmental attributes. It also needs to collect and forward all of the decision logs that the application generates to a central system.
For organizations that have the right levels of resources and expertise, there are some open source building blocks that can be helpful in creating a custom fine-grained security service:
- The Open Policy Agent is a general-purpose decision engine that can be used to make ABAC-style authorization decisions based on user and resource attributes.
- The Google Zanzibar paper has given rise to multiple open source projects for storing and evaluating user permissions over a ReBAC object graph.
- The Beats log shippers and lumberjack protocol make it possible to ship and aggregate decision logs.
With that said, building a solution based on these components isn’t trivial, and most of them don’t include an end-to-end solution that covers attribute-based access control (ABAC), relationship-based access control (ReBAC), org-based access control and log aggregation that provides guaranteed ordering and delivery. In other words, even with the help of open source projects, there’s still a lot to build.
Authorization has to happen close to the application because it’s in the critical path of every application request. But fine-grained security requires real-time access to user attributes and resource relationships. Getting those attributes to the application without incurring a costly network call in the critical path of the application request is a distributed systems problem and is the hardest challenge to address when building a custom solution.
The same is true for authorization policy: The whole point behind extracting access control code out of an application and into an authorization policy is that you want to manage that policy centrally but have it available right next to the application. Building an automated policy-as-code workflow from source commit to policy distribution is also far from trivial.
Authorization has to be done locally, but management has to be done centrally in order to operate successfully. All of those local authorizers need to be managed by a control plane, which is the central control point for user data, relationship/resource data, policies and decision logs. This way, security and operations teams have one place to manage all the artifacts across the set of applications and microservices that they are responsible for.
Once again, creating a control plane for fine-grained security takes considerable effort. Companies like Google, Netflix, Airbnb and Intuit have done it, but they have teams dedicated to running those services.
Fine-Grained Security Platforms
For organizations that don’t want to invest the time and effort to build it all themselves, there are alternatives. What should you look for in a fine-grained security platform?
- OPA-based authorization for ABAC-style policies.
- ReBAC directory for fine-grained user, group, and resource hierarchy relationships.
- Modeling of management relationships in the directory.
- Built-in integrations to systems of record for identity and directory data.
- Fine-grained decision log capture and aggregation.
- Distributed architecture for authorizing locally and managing centrally.
Aserto is the fine-grained security platform for developers. It offers a complete end-to-end solution that makes it easy for developers to incorporate into applications, and for security and operations teams to manage at scale. Create a free account today!
|
OPCFW_CODE
|
How do I get the Instagram "Lux" effect without using Instagram?
I am kind of fascinated with the new Lux effect of Instagram. I want to "get" the same effect myself, during post processing. This is what I was able to get to, after playing with Levels, Contrast, Saturation and Shadows in iPhoto.
My questions:
Is it possible to get such an effect using basic tools like Picasa or iPhoto? If so, how?
Is it possible to get such an effect in Photoshop/PS Elements/Lightroom (and the like)? If so, how?
Original Photo:
With the Instagram Lux effect and "Low Fi" filter:
What I could get:
(at best, with my limited knowledge)
I'm not familiar with Instagram; is it possible to show the effects of the Lux effect and the Lo-Fi filter separately? I don't know where one ends and the other begins, and it might be easier to deconstruct each on its own. (Someone previously analyzed the lo fi filter here).
@mattdm - Typically, filters in Instagram can only be applied one at a time, and Lo-Fi is the filter here; only one filter can be applied per photo. Lux is a secondary option that isn't a "filter" according to Instagram; it's more like a button that brings out the vibrancy of any filter. So to answer your question, yes it would be possible, but they have stacked the two here in the examples.
So, here's what I got in just a few minutes using two basic tools: Curves, and Unsharp mask:
I used Gimp, but this is basic stuff any decent image editing software will have. Here's all I did. First, I used the curves tool to dramatically increase the black point, increasing shadow contrast:
Then, I pulled the curve upwards to brighten the (new) midtones:
I didn't mess with the color channels at all; this is all the global "value" curve. I made these adjustments by eye, watching the tone of the house as I worked.
Having done that, I resized to 612×612 (the size of your Instagram example here), and then used an Unsharp Mask with a radius of 10 pixels and a very high strength.
This doesn't look exactly like your image, but I think we're in the ballpark.
There's a sort of glow over the lower part of the house that's missing, and I couldn't replicate this with global adjustments without destroying the tones in the sky and the detail on the tree branches on the left; I suspect that the filter applies a graduated vignette/glow/"light leak" effect somewhere in the pipeline here. If you compare the top half of my attempt to the Instagram output, you'll see they're really close; the difference is in the lower part.
The original has flat lighting; this fake burst is part of what adds dynamic interest, but which also feels a little bit like cheating: Instagram is not just capturing what's there with a funky filter, but altering the reality of the scene.
Update: this is with just an unsharp mask with radius 100 and a strength in Gimp of 2.0 (Photoshop measures strength differently, but basically about 10× higher than one would normally use when going for a natural-looking image).
The curves approach gives a lot more control and it's still what I recommend, but for quick and dirty replication of the effect, this might be all you need.
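For the curious, here is a pure-Python sketch of the unsharp-mask idea on a 1-D strip of pixel values (real editors work in 2-D and usually use a Gaussian blur; Pillow exposes this as ImageFilter.UnsharpMask): blur, then push each pixel away from its local average. With a large radius this becomes local contrast enhancement rather than edge sharpening.

```python
def box_blur(pixels, radius):
    # simple box blur: average over a window of 2*radius+1 samples
    blurred = []
    for i in range(len(pixels)):
        lo, hi = max(0, i - radius), min(len(pixels), i + radius + 1)
        blurred.append(sum(pixels[lo:hi]) / (hi - lo))
    return blurred

def unsharp(pixels, radius, strength):
    blurred = box_blur(pixels, radius)
    # amplify each pixel's difference from the local average, clamp 0..255
    return [max(0, min(255, round(p + strength * (p - b))))
            for p, b in zip(pixels, blurred)]
```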
This doesn't look exactly like your image, but I think we're in the ballpark.: Man this is what I wanted. I did not want to create the same as Instagram, I wanted to know what/how that thing was being done. Thanks for explaining step by step... I learnt many things today. Awesome!
@StanRogers It's true; that might be all there is to it. The effect in the example is really strong, and I was hesitant to bring the unsharp mask strength up that high.
I borrowed your image and one from another question like it, to try it in Image View plus more 3, which I made myself so I know all the underlying algorithms.
I think this is pretty close, albeit the colours may be a bit different (my weakness as I am colour deficient).
What I did was:
Local contrast enhancement. Adobe calls this "clarity". It is similar to the unsharp mask with a very large radius.
Boost saturation. My program does this in the Lab space. Other software may do this in other spaces (e.g. HSV) which can result in different colours. It may also be a selective saturation boost like a proprietary "vibrance".
Your image seems to have undergone a bit of a contrast boost as well, while the tattoo image hasn't. This points towards some autolevels.
But I would conclude it is these operations that are incorporated into "Lux": a local contrast enhancement, a saturation boost of some sort, and autolevels.
I wrote Michael Nielsen's algorithm in Matlab.
Only the first step: an unsharp mask with a very large radius.
The result was as expected.
code:
imsharpen(image,'Radius',100)
This answer is only a comment on Michael Nielsen's.
Are you intending to share the code? Or intending to address the saturation aspect? Any answer needs to include steps to produce the effect, not just your output.
This appears to basically be a comment to this answer
@MikeW code added.
|
STACK_EXCHANGE
|
Sending a file from my application (Indy/Delphi) to an ASP page and then onto another server (Amazon S3)
I have a need to store files on Amazon AWS S3, but in order to isolate the user from the AWS authentication I want to go via an ASP page on my site, which the user will be logged into. So:
The application sends the file using the Delphi Indy library TidHTTP.Put (FileStream) routine to the ASP page, along with some authentication stuff (mine, not AWS) on the querystring.
The ASP page checks the auth details and then if OK stores the file on S3 using my Amazon account.
Problem I have is: how do I access the data coming in from the Indy PUT using JScript in the ASP page and pass it on to S3. I'm OK with AWS signing, etc, it's just the nuts and bolts of connecting the two bits (the incoming request and the outgoing AWS request) ...
TIA
R
Look at the Request.BinaryRead() method.
An HTTP PUT will store the file at the location given in the HTTP header - it "requests that the enclosed entity be stored under the supplied Request-URI".
The disadvantage with the PUT method is that if you are on a shared hosting environment it may not be available to you.
So if the web server supports PUT, the file should be available at the given location in the the (virtual) file system. The PUT request will be handled by the server and not ASP:
In the case of PUT, the web server handles the request itself: there is no room for a CGI or ASP application to step in. The only way for your application to capture a PUT is to operate on the low-level, ISAPI filter level
http://www.15seconds.com/issue/981120.htm
Are you sure you need PUT and can not use a POST, which will send the file to a URL where your ASP script can read it from the request stream?
Are you sure about that? I have ISAPI app that handles PUT just fine.
@Runner: ISAPI <> ASP - I cited that the ISAPI filter level is the only way to capture a PUT
OK, Ive got a bit further with this. Code at the ASP end is:
var PostedDataSize = Request.TotalBytes ;
var PostedData = Request.BinaryRead (PostedDataSize) ;
var PostedDataStream = Server.CreateObject ("ADODB.Stream") ;
PostedDataStream.Open ;
PostedDataStream.Type = 1 ; // binary
PostedDataStream.Write (PostedData) ;
Response.Write ("PostedDataStream.Size = " + PostedDataStream.Size + "<br>") ;
var XML = AmazonAWSPUTRequest (BucketName, AWSDestinationFileID, PostedDataStream) ;
.....
function AmazonAWSPUTRequest (Bucket, Filename, InputStream)
{
....
XMLHttp.open ("PUT", URL + FRequest, false) ;
XMLHttp.setRequestHeader (....
XMLHttp.setRequestHeader (....
...
Response.Write ("InputStream.Size = " + InputStream.Size + "<br>") ;
XMLHttp.send (InputStream) ;
So I use BinaryRead, write it to a binary stream. If I write out the size of the stream I get the size of the file I POST'ed from my application, so I reckon the data is in there somewhere. I then call a routine (with the stream as a parameter) which sets up the AWS authentication/signing and does a PUT.
The AWS call returns no errors and a file of the correct name is created in the right place, but it has a size of zero! InputStream.Size has a value the same as the stream parameter passed to the routine - i.e. the size of the original file.
Any ideas?
POSTSCRIPT. Found the problem. It's caught me a few times with streams, this one. When you write data to a stream, don't forget to reset the stream position back to zero before trying to read from the stream again. I.e. just before the line:
XMLHttp.send (InputStream) ;
I needed to add:
InputStream.Position = 0 ;
My thanks for the interest and suggestions.
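The same pitfall is easy to demonstrate in Python terms with an in-memory stream: after a write, the position sits at the end, so a read returns nothing until you rewind.

```python
import io

buf = io.BytesIO()
buf.write(b"file contents")
assert buf.read() == b""     # position is at the end: nothing to read
buf.seek(0)                  # the equivalent of InputStream.Position = 0
assert buf.read() == b"file contents"
```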
note that you can accept your own answer to indicate the problem is solved successfully
|
STACK_EXCHANGE
|
The Greatest Guide To programming assignment help
When you're really stumped for programming ideas, try building something generic like a to-do list manager.
Backus's paper popularized research into functional programming, but it emphasized function-level programming rather than the lambda-calculus style now associated with functional programming.
The principle of embracing change is about not working against changes but embracing them. For example, if at one of the iterative meetings it appears that the customer's requirements have changed dramatically, programmers are to embrace this and plan the new requirements for the next iteration.
In the second edition of Extreme Programming Explained (November 2004), five years after the first edition, Beck added more values and practices and differentiated between primary and corollary practices.
Coding can also be used to figure out the most suitable solution. Coding can also help to communicate thoughts about programming problems. A programmer dealing with a complex programming problem, or finding it difficult to explain the solution to fellow programmers, may code it in a simplified way and use the code to demonstrate what he or she means.
the function. Here is another illustration of this aspect of Python syntax, for the zip() function, which
Extreme programming (XP) is a software development methodology which is intended to improve software quality and responsiveness to changing customer requirements.
def z
try {
    def i = 7, j = 0
    try {
        def k = i / j
        assert false    // never reached due to Exception in previous line
    } finally {
        z = 'reached here'    // always executed even when Exception thrown
    }
} catch (e) {
    assert e in ArithmeticException
    assert z == 'reached here'
}
An important part of our mission as the resistance is to prevent becoming enslaved. To accomplish this we must educate ourselves. A good example is this book by a former professional in mind control. Get it here and avoid being enslaved.
Later dialects, such as Scheme and Clojure, and offshoots such as Dylan and Julia, sought to simplify and rationalise Lisp around a cleanly functional core, while Common Lisp was designed to preserve and update the paradigmatic features of the many older dialects it replaced.
When *args appears as a function parameter, it actually corresponds to all of the unnamed parameters of
Arrays can be replaced by maps or random access lists, which admit purely functional implementation, but have logarithmic access and update times. Thus, purely functional data structures can be used in non-functional languages, but they may not be the most efficient tool, particularly when persistence is not required.
hard to develop realistic estimates of the work effort needed to provide a quote, because at the beginning of the project no one knows the entire scope/requirements
Authors in the series went through various aspects of XP and its practices. The series included a book that was critical of the practices.
|
OPCFW_CODE
|
UBS Financial Services IT Software Engineer (MOAP) in Zürich, Switzerland
Job Reference #:
Are you a software engineer with broad technical skills and experience in the whole development lifecycle? Do you want to apply your skills by providing application templates and blueprints enabling teams to develop high-quality software? Would you like to bring your ideas into a compact agile team to further extend its services for the UBS developer community?
We are looking for someone like you to:
extend our collection of application templates currently featuring reference implementations for Java front-ends, web service providers, mobile hybrid apps and RESTful service providers
be part of an agile team, flexibly taking on roles in a Scrum or Kanban setup
deliver software that complies with the highest standards of design and code quality, applying industry good practices for build, test and deployment automation
write software documentation and user guides, and support teams in making use of the provided application templates and blueprints
collaborate closely with our colleagues from CTO to ensure compliance with the latest UBS architecture principles
give presentations and lead workshops to promote the use of the team's services within the developer community
IT Software Engineer (MOAP)
Country / State:
Switzerland - Zürich
Information Technology (IT)
What we offer:
Together. That’s how we do things. We offer people around the world a supportive, challenging and diverse working environment. We value your passion and commitment, and reward your performance.
Why UBS? Video
Take the next step:
Are you truly collaborative? Succeeding at UBS means respecting, understanding and trusting colleagues and clients. Challenging others and being challenged in return. Being passionate about what you do. Driving yourself forward, always wanting to do things the right way. Does that sound like you? Then you have the right stuff to join us. Apply now.
UBS HR Recruiting Switzerland
Disclaimer / Policy Statements:
UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
You’ll be working in the IT development of Wealth Management, Private & Corporate Clients in Zurich. We plan and lead IT projects in the area of Channels & Workbenches and create solutions for our clients and client advisors.
The MOAP (Mother Of All Projects) team engineers templates and blueprints that help to create new and enhance existing applications. Templates can be instantiated quickly, provide working examples, demonstrate good practices at UBS IT and come with extensive documentation. Blueprints provide working examples and a description on how to solve a specific problem.
Your experience and skills:
a graduate degree in software engineering
a deep understanding of frameworks such as Spring, AngularJS, React and Hibernate
a strong understanding of the software development lifecycle and agile methodologies
experience with software development in enterprise environments
fundamental experience in designing secure, robust and scalable frontend and Java backend applications
successfully performed pair programming and code reviews
worked with continuous integration build servers
implemented automated unit and integration tests
know fundamental web standards such as HTTP, and concepts such as REST
experience with XML, JSON and associated technologies
worked on and developed for various platforms (e.g. Windows, Linux, iOS, Android)
a good understanding of software versioning tools (e.g. Git)
an experienced software engineer with high dedication to quality code, proper documentation and automation
able to cope with complex challenges in software engineering and interested in exploring and evaluating new technologies
communicative, client-oriented and feel comfortable giving presentations and holding workshops
Expert advice. Wealth management. Investment banking. Asset management. Retail banking in Switzerland. And all the support functions. That's what we do. And we do it for private and institutional clients as well as corporations around the world.
We are about 60,000 employees in all major financial centers, in almost 900 offices and more than 50 countries. Do you want to be one of us?
|
OPCFW_CODE
|
Python distributed tasks with multiple queues
So the project I am working on requires a distributed task system to process CPU-intensive tasks. This is relatively straightforward: spin up Celery, throw all the tasks in a queue and have Celery do the rest.
The issue I have is that every user needs their own queue, and items within each user's queue must be processed sequentially. So if there is a task in a user's queue already processing, wait until it is finished before allowing a worker to pick up the next.
The closest I've come to something like this is having a fixed set of queues and assigning them to users, then having each user's tasks picked off by Celery workers fixed to a certain queue with a concurrency of 1.
The problem with this system is that I can't scale my workers to process a backlog of user tasks.
Is there a way I can configure celery to do what I want, or perhaps another task system exists that does what I want?
Edit:
Currently I use the following command to spawn my celery workers with a concurrency of one on a fixed set of queues
celery multi start 4 -A app.celery -Q:1 queue_1 -Q:2 queue_2 -Q:3 queue_3 -Q:4 queue_4 --logfile=celery.log --concurrency=1
I then store a queue name on the user object, and when the user starts a process I queue a task to the queue stored on the user object. This gives me my synchronous tasks.
The downside is when I have multiple users sharing queues causing tasks to build up and never getting processed.
I'd like to have say 5 workers, and a queue per user object. Then have the workers just hop over the queues, but never have more than 1 worker on a single queue at a time.
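To make the desired behaviour concrete, here is a toy single-process simulation of that scheduling (plain Python, not Celery): on each tick, up to `workers` users get one task executed, so each user's tasks run in order while many users make progress in parallel.

```python
import collections

def run_round_robin(user_queues, workers=5):
    # user_queues: {user: [callable, ...]} -- tasks in submission order
    queues = {u: collections.deque(ts) for u, ts in user_queues.items()}
    finished = collections.defaultdict(list)
    while any(queues.values()):
        # at most one task per user in flight per "tick"
        active = [u for u, q in queues.items() if q][:workers]
        for u in active:
            finished[u].append(queues[u].popleft()())
    return finished
```

In a real Celery setup this would correspond to some coordinator (e.g. a per-user lock or database flag) that stops two workers from consuming the same user's queue at once.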
Do you have any code from you attempts you can share?
There isn't much to it, but I've added some more information to the original question.
I use a chain (see the docs) to execute tasks in a specific order:
chain = task1_task.si(account_pk) | task2_task.si(account_pk) | task3_task.si(account_pk)
chain()
So, for a specific user I execute task1; when it's finished I execute task2, and when that finishes, task3.
It will spawn on any available worker :)
For stopping a chain midway:
self.request.callbacks = None
return
And don't forget to bind your task :
@app.task(bind=True)
def task2_task(self, account_pk):
Can you dynamically add to a chain? An example being I am a user and I have a chain with a long running task currently being executed, but I want to run another task. Instead of creating a new chain that will be picked up by another worker, can I just push the new task onto the old chain?
Don't do that; it will be unmaintainable in the future. What you can do is add some logic in task2_task or task3_task, so that based on the result of task1_task you run different code in task2 or task3. If you want to stop a chain midway, just add self.request.callbacks = None and return; it will stop the chain without an error message.
The thing is, my tasks are never created in an environment where they can be chained.
Basically a user makes a change in the front end editor, the change is pushed to the server, the server pushes out a task to render the new changes, and the user gets pushed the result via websockets. But the user can make multiple changes at varying times, creating multiple tasks. Each task needs to be executed synchronously for that user, with the results coming back in the order they were sent.
So what you need is not chain. Store every user change request and its task_id in the database, and after a task is executed check whether another task is waiting. For a new task, check whether the current one has already executed and finished with celery.result.AsyncResult(task_id), or do nothing (the running task will pick up the new one after it finishes).
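The per-user ordering idea can be modeled without Celery at all. Below is a minimal stdlib sketch (the class and method names `UserQueues`, `enqueue` and `step` are made up for illustration, not Celery API): one FIFO per user, and each scheduling round runs at most one task per user, so a user's tasks always complete in submission order while many users still make progress in parallel.

```python
from collections import defaultdict, deque

class UserQueues:
    """Toy model: per-user FIFO, at most one task per user runs per round."""
    def __init__(self):
        self.pending = defaultdict(deque)  # user -> queued callables

    def enqueue(self, user, task):
        self.pending[user].append(task)

    def step(self):
        """One scheduling round: run exactly one queued task per user."""
        results = []
        for user, queue in list(self.pending.items()):
            if queue:
                task = queue.popleft()
                results.append((user, task()))  # synchronous stand-in for a worker
        return results

q = UserQueues()
q.enqueue("alice", lambda: "render-1")
q.enqueue("alice", lambda: "render-2")
q.enqueue("bob", lambda: "render-1")
print(q.step())  # [('alice', 'render-1'), ('bob', 'render-1')]
print(q.step())  # [('alice', 'render-2')]
```

In a real deployment, the `pending` dict would live in the database (rows of user, task_id, status) and `step` would be replaced by workers claiming the oldest unclaimed task for any user with nothing in flight.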
Here is a quick sketch of what I'm trying to achieve. The above example is what I have currently, and it is working, it just doesn't scale very well. The example below that is what I'd like to achieve, but I've been unable to find any resources of anything similar. https://i.imgur.com/jlCbApN.png
What is your real problem? I think it's not having two tasks for one user executed in parallel, so using a database to store task_id, task_status and user is the best solution. Celery is for asynchronous jobs; you want to tweak it for your purpose, so you will never have a 100% good solution.
I don't quite understand your database solution. Are you suggesting I store all tasks in the database, and use something else to manage the tasks? You are right in saying that my issue is not allowing a user to have two tasks running at the same time, but I'd still like to have 10 users each running a single task at the same time.
|
STACK_EXCHANGE
|
How to update a seven-year-old question?
An old question asks, master's or masters? The accepted answer (and the only viable candidate) says it has to have an apostrophe. But Academia SE has a tag definition for masters that shows the usage is more nuanced:
Queries related to a master's degree, sometimes referred to as a post-graduate degree.
A master's degree is an academic degree granted to individuals who have undergone study demonstrating a mastery or high-order overview of a specific field of study or area of professional practice.
The duration of a masters degree is about one to two years which may vary because of different educational systems' policies. Masters students may have to conduct a research project as their thesis at the end of their masters level.
(The linked wikipedia entry, on the other hand, consistently uses the apostrophe.)
I'd like to see an answer that acknowledges the use of the no-apostrophe version, and explains the borders of acceptability, for example, when does master's look too stuffy? When does masters look too informal?
So, may I write a new question? If that's not considered kosher, the only alternative I can think of is to set a bounty.
Note that when I googled
holding a master's degree
(without quotes), Google prominently displayed the accepted answer to the seven-year-old ELU question.
Also, what's the best way to draw attention to either a new question or a bounty among the Academia crowd? Although I believe the question belongs on ELU, it would be interesting for Academia too. The way I've seen this handled at Spanish SE is to put "ELU meets Academia" at the beginning of the title. But that doesn't necessarily catch Academia's attention.
And finally, "interdisciplinary" was the closest I could come up with. Is there a better way to express the overlap?
My primary question is: may I write a new question? Please consider the other two as bonus content.
FWIW: I've always seen and used master's. Is it possible that their tag description needs an update?
OP here: my original question was simply asking for a prescriptive ruling on whether the apostrophe should be used. In my view, it should therefore stay unchanged, although I would be happy to edit it if the majority felt otherwise. If things have changed since then, I think a new question is appropriate.
Your question seems different enough in scope from the old question to be considered a non-duplicate. What are essentially follow-up questions generally seem to be acceptable here; for example master or master's follows up on the same 2010 question. This is preferable to editing an existing question to go beyond what the original questioner wanted and/or in a way that "breaks" existing answers. In this case, the older question seems to be asking for a prescriptive "ruling" on whether an apostrophe should be used or not; you are asking about whether there may be systematic reasons for omitting the apostrophe.
If you do ask a new question, you should note the older question and its answer and then quote the tag text and any other apostrophe-free examples you've run across and ask your more nuanced, descriptive question about casual use.
As far as bringing it to the attention of Academia SE, maybe post a link in their chat or even ask a question in their Meta? If you think the tag description is in error you could ask about that, with a link to the related question here.
Can you point out the quotation you're saying violates the plagiarism rule? Maybe I am looking in the wrong place. At first glance I see nothing that isn't attributed and fair use.
Every single word after "Read the following article for more details:" is from the linked article. Word for word, with no excerpting or summarizing or interpretation. The quotes are all appropriately attributed...by the original author. It's the equivalent of showing someone else's video on your YouTube channel, with a five second "thought you might like this video" intro slapped on the front. That's what I mean about the spirit, if not the letter.
I missed the forest for the trees. Yeah, this is a problem.
@MetaEd Okaay, the copy-and-pasted answer (posted in Oct 2010) has been deleted. Wouldn't it have been easier to simply edit it and use block quotes throughout? A 5- or 10-second operation. BTW the OP has reformatted his answer properly, according to SE guidelines, which were formalized when exactly? Oh, the bit about editing is also relevant to 1006a who has the necessary Rep.
@Mari-LouA in re editing: I considered it, but see my point about not wanting to draw attention to the question. I also didn't flag it or ask that it be deleted, and wouldn't have mentioned it at all if it hadn't come up in Meta; I just thought it could languish in 7-year-old obscurity along with lots of other questions and answers that no longer meet site standards but, presumably, were fine when they were posted. If it were an active question by a new user I would have handled it differently, but this is an inactive question by a 20k+ user.
Ah, tchrist has undeleted it. Good man! The problem with mods deleting stuff is that they are the only ones who can undelete posts. Us mere mortals have to sit and wait around until then.
@Mari-LouA I did some digging. Though there was agreement at the beginning of SE's history that academic dishonesty, including plagiarism, needed to be discouraged, the formalization of a policy seems to have begun when the question “Plagiarism should be addressed specifically in the FAQ” was posted early in 2011. FAQ texts were drafted as answers there. This also ultimately led to the addition of “How to reference material written by others” to all site help centers in late 2013.
|
STACK_EXCHANGE
|
SP to update 3rd table using data in first 2 tables
For example, I have table1 and table3 below. The 'Counts' field in table2 should be updated based on the Valuess field in table1 and table3; i.e. 23 appears 4 times across table1 and table3 and 45 appears once, and table2 should be updated with those counts.
table1
Id | Data | Valuess
1 | rfsd | 23
2 | fghf | 45
3 | rhhh | 23
table3
Id | Data | Valuess
1 | rfsd | 23
2 | tfgy | 23
table2
Id | Fields | Counts
1 | 23 | 4
2 | 45 | 1
I am using the below stored procedure to achieve this.
WITH t13 AS (
SELECT Id, Data, Valuess FROM Table1 UNION ALL SELECT Id, Data, Valuess FROM Table3),
cte AS (SELECT Valuess,COUNT(*) AS Count2 FROM t13 GROUP BY Valuess)
UPDATE t2
SET t2.Counts = cte.Count2
FROM Table2 t2 JOIN cte ON t2.Fields = cte.Valuess;
QUESTION
Now, instead of the table data above, I have the data below.
table1
Id | Data | Valuess
1 | rfsd | 004561
2 | fghf | 0045614
3 | rhhh | adcwyx
table3
Id | Data | Valuess
1 | rfsd | 0045614
2 | tfgy | 004561
table2
Id | Fields | Counts
1 | 0045614 | 4
2 | adcwyxv | 1
So here we have alphanumeric data in the Valuess field of table1 and table3, as well as values like '004561' and '0045614'.
I want to clip off the 7th character of the field, i.e. keep only the first six characters, and compare on that. That is, 004561, 004561 and adcwyx will be taken from table1; 004561 and 004561 will be taken from table3; and these are compared with 004561 and adcwyx from table2 (we need to clip off the 7th character in table2 first) to produce the counts.
The final result should be as shown in table2.
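The clipping-and-counting logic can be sanity-checked outside SQL before writing the stored procedure. This is a small Python sketch (values copied from the question's tables) using `collections.Counter` to mimic the UNION + GROUP BY:

```python
from collections import Counter

# Values copied from table1 and table3 in the question
table1 = ["004561", "0045614", "adcwyx"]
table3 = ["0045614", "004561"]

# "Clip off the 7th character" = keep only the first six characters,
# then count occurrences across both tables (UNION ALL + GROUP BY)
counts = Counter(v[:6] for v in table1 + table3)

# table2's Fields get clipped the same way before the lookup (the JOIN)
for field in ["0045614", "adcwyxv"]:
    print(field, "->", counts.get(field[:6], 0))
# prints:
# 0045614 -> 4
# adcwyxv -> 1
```

The output matches the expected table2 Counts, which confirms the SUBSTRING-on-both-sides approach in the answer below.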
SQL-Server != MySQL.
How does adcwyx as a value in table1 become adcwyxv in table2?
sqlzim - they are 2 different data values, not dependent on each other.
SUBSTRING should do it.
WITH t13 AS (
SELECT Id, Data, SUBSTRING(Valuess,1,6) AS [Values]
FROM Table1
UNION ALL
SELECT Id, Data, SUBSTRING(Valuess,1,6) AS [Values]
FROM Table3
)
, cte AS (
SELECT [Values],COUNT(*) AS Count2
FROM t13 GROUP BY [Values]
)
UPDATE t2
SET t2.Counts = cte.Count2
FROM Table2 t2 JOIN cte ON SUBSTRING(t2.Fields,1,6) = cte.[Values];
|
STACK_EXCHANGE
|
The 5 Best Audio Merger And Splitter Instruments For MP3 Information
Thanks to HTML5 technology, files do not need to be uploaded, so the opening speed is faster than other web apps, and the processing is quicker, with no waiting for file upload and download time. Another tool came with a myriad of extra crap that installed extra toolbars on browsers and caused the antivirus program to work overtime protecting my computer from threats.
For a free device purely for merging audio, this is superb at what it does. While the site advertises unlimited joins, the more you add, the longer it takes to affix them. That’s fantastic however be ready to attend a short time at peak times. It is easy, works with multiple audio codecs, permits you to crossfade and modify levels as you see match.
Audacity is also a highly regarded MP3 merger which runs on Windows, Mac and Linux. You can use it to split any audio into as many pieces as you want, or to merge as many MP3 files as you need. It also has other editing options like audio filters and effects which can help you deal with music files that are problematic in some way. Furthermore, Audacity is a widely used open-source audio editing and recording program. If you'd like to merge a bunch of audio tracks into one file, AVS Audio Editor is always ready to help, even if your input files are of different formats.
This app is very simple, though: you just pick a start and end time, then export that selection as a separate audio file. Like mp3DirectCut, Mp3Splt can work on an audio file without having to decompress it first, leading to a quick workflow and no impact on audio quality. PC startup, Merge MP3 startup, or use of an associated software function can result in merge errors.
MP3 Splitter & Joiner allows you to split your MP3 tracks into equal segments, either by number of segments or by time. Moreover, you have the option to add a small overlap into the next or previous track. The application will fully analyze your audio file and can select the suitable split mode automatically.
Kapwing lets creators trim their audio (to focus on the chorus or one soundbite, for example) and specify when in a video the music should start playing. Do you have lots of separate music files saved in a Windows 10 folder? If so, it may be better to merge some of these files together so that you can play multiple music tracks from a single file. You get unlimited access to all 9 FilesMerge tools online, and you can import directly from YouTube or upload from a computer or phone.
The Online Audio Combiner lets you convert your music file to a desired format and use crossfade between your merged songs. If you need to concatenate MP3 files using NAudio, it is fairly easy to do. I recommend getting the very latest source code and building your own copy of NAudio, as this will work best with some of the changes in preparation for NAudio 1.4.
It has broad compatibility and supports virtually all common formats such as MP3, WAV, VOX, GSM, WMA, AU, AIF, FLAC, AAC, M4A, OGG, AMR, and many others. WavePad can also be used directly with the MixPad Multi-Track Audio Mixer. WavePad is another audio merger that can handle numerous audio files. It lets you delete, insert, automatically trim and compress imported audio.
Nevertheless, don't be fooled: it can quickly turn into a high-tech tool for professional users. GiliSoft Audio Recorder Pro is a good tool for novices and regular audio software users who want to record certain sounds. Select the tracks in the merge list you'd like to insert silence into, then click 'Silence' to launch a settings window as shown on the right side.
After opening a number of selected files of any format with the "Add tracks" button, you tune the interval you are interested in. Online Audio Joiner lets you accurately set the sound intervals with simple sliders. Only the selected fragments will be glued together.
This free MP3 joiner supports a large number of input audio formats including MP3, WMA, WAV, AAC, FLAC, OGG, APE, AC3, AIFF, MP2, M4A, CDA, VOX, RA, RAM, TTA and many more as source formats. Any audio files and audiobooks can be joined into the most popular audio formats such as MP3, OGG, WMA, WAV, and others.
Free Merge MP3 is a lightweight and easy-to-use program, designed to help you join a number of audio files into a single track, with custom quality settings. It lets you add the desired songs to the processing list, simply sort them into the rendering order, then set the quality options and let the software merge the files.
|
OPCFW_CODE
|
Joining Maersk will embark you on a great journey with career development in a global organisation. As Senior Java Engineer, you will gain broad business knowledge of the company’s activities globally, as well as understand how the complexity of IT supports the transport and logistics business.
You will be exposed to a wide and challenging range of business issues through regular engagement with key stakeholders across all management levels within Maersk.
You will work and communicate across geographical and cultural borders that will enable you to build a strong professional network. We believe people thrive when they are in charge of their career paths and professional growth. We will provide you with opportunities to broaden your knowledge and strengthen your technical and professional foundation.
By choosing Maersk, you join not only for the role, but for a career. From here your path may take you towards extended responsibilities within Product Service and Engineering, IT Delivery or IT Leadership.
We aim to be a world-class professional IT organisation that delivers business value through automation, standardisation and innovation. We believe in empowerment where each of us takes ownership and responsibility for developing and implementing new ways of working.
• Contribute to implementing highly efficient applications, with focus on code quality and performance.
• Implement quality code with focus on reusability and good code coverage.
• Be a part of Agile teams and help deliver sprint goals.
• Collaborate with scrum masters and project managers to identify and mitigate risks and issues, and to find innovative ways to improve application development.
• Embrace emerging technologies and solutions to ensure our online experience continually evolves to deliver functionality that our customers need.
• Solid written and verbal communication skills and able to articulate technical complexity to be understood by both technical and non-technical personnel.
• Ability to manage multiple tasks and conflicting priorities.
• Good critical reasoning and analytical skills; takes ownership and sticks to the problem until it is solved.
• Customer-focused, whether responding to support queries or developing new features and functionality.
• Ability to work independently and with others in a team environment.
• Able to provide constructive feedback and effectively review code, guiding other Java engineers in the right direction
• Sound knowledge of Java (8 and above).
• Experience in developing RESTful microservices with Spring Boot.
• Sound understanding of Spring modules and ORM tools like JPA and Hibernate.
• Experience with SOAP-based web services.
• Experience working on low latency, highly scalable applications.
• Experience in build tools like Maven, Node etc.
• Experience working with databases – MS SQL Server, Oracle and/or Cassandra or something similar.
• Ability to review code and mentor junior developers as well as partners.
Nice to have
• Hands-on experience of using a front-end development framework, such as Angular, React or Vue.
• Experience working with Cloud technologies or a keen interest in learning the same.
• Proven knowledge of Behavioural Driven Development (BDD).
• Proven knowledge of Test Driven Development (TDD).
|
OPCFW_CODE
|
Process list of files to format filenames for web (easy)
I realize this is dead simple but I don't write many scripts so risking stupid pride I'm hoping to learn a thing or two by asking such a basic question.
Given UNIX (Mac) how might you approach turning list (.txt) of filenames:
P4243419.JPG
P4243420.JPG
P4243423.JPG
...continues...
into .html something like:
<img src="http://imgs.domain.com/event/P4243419.JPG" title="Image File P4243419.JPG" />
<img src="http://imgs.domain.com/event/P4243420.JPG" title="Image File P4243420.JPG" />
<img src="http://imgs.domain.com/event/P4243423.JPG" title="Image File P4243423.JPG" />
...continues...
I know Ruby...but I would value additional language examples for such a simple task. What I'm not sure of is how to parameterize each line of the txt file (or filename in a directory) into the input for processing. The output is simple enough.
I don't understand - do you want a bash script, Ruby script, or a different language?
this is a completely independent script so open to any approach...curious to see how languages I may not be familiar with approach the same problem.
puts Dir['*.JPG'].map{ |f| "<img src='#{f}' title='Image File #{f}' />" }
Edit: Sorry, I misread. So you have a file with a bunch of filenames in it?
IO.read('myfile.txt').scan(/\S+/).map{ |f| "...#{f}..." }
In Bash:
Using a directory list:
for a in *.JPG; do echo "<img src=\"http://imgs.domain.com/event/$a\" title=\"Image File $a\" />"; done
From a file (file called list):
cat list | while read a; do echo "<img src=\"http://imgs.domain.com/event/$a\" title=\"Image File $a\" />"; done
This will be not the most effective solution, but easy to understand:
Make a file img2link.sh
#!/bin/sh
# "for file" with no "in" list iterates over the script's arguments
for file
do
    cat "$file" | grep -i jpg | while read image
    do
        echo "<img src=\"http://imgs.domain.com/event/$image\" title=\"Image File $image\" />"
    done
done
you can use your new command:
sh img2link.sh filename_with_images.txt another_filename_with_images.txt
The grep ensures that you will not process empty lines in the given files.
for bash scripting, I see a couple of answers with cat -- not needed
format_string="<img src='http://imgs.domain.com/event/%s' title='Image File %s' />\n"
while read f; do
printf "$format_string" "$f" "$f"
done < filename.txt
perl -lnwe 'print "<img src=\"http://host/$_\">"' filelist.txt
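Since the question invites examples in other languages, here is a Python take on the same task (the function name `to_img_tags` is made up for this sketch). It works on any iterable of lines, so you can pass it an open file handle for the .txt list:

```python
TEMPLATE = '<img src="http://imgs.domain.com/event/{0}" title="Image File {0}" />'

def to_img_tags(lines):
    """Turn an iterable of filename lines into <img> tags, skipping blank lines."""
    return [TEMPLATE.format(name.strip()) for name in lines if name.strip()]

# For a real list file: print("\n".join(to_img_tags(open("list.txt"))))
for tag in to_img_tags(["P4243419.JPG\n", "P4243420.JPG\n", "\n"]):
    print(tag)
```

The blank-line filter plays the same role as the grep in the shell version above.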
|
STACK_EXCHANGE
|
# Author: Davide Balzarotti <davide.balzarotti@eurecom.fr>
# Creation Date: 04-04-2017
import random
import threading
import time
from avatar2.message import RemoteMemoryReadMessage, BreakpointHitMessage, UpdateStateMessage
from avatar2.targets import Target, TargetStates
class _TargetThread(threading.Thread):
"""Thread that mimics a running target"""
def __init__(self, target):
threading.Thread.__init__(self)
self.target = target
self.please_stop = False
self.steps = 0
def run(self):
self.please_stop = False
# Loops until someone (the Dummy Target) tells me to stop by
# externally setting the "please_stop" variable to True
while not self.please_stop:
self.target.log.info("Dummy target doing Nothing..")
time.sleep(1)
self.steps += 1
# 10% chances of triggering a breakpoint
if random.randint(0, 100) < 10:
# If there are not breakpoints set, continue
if len(self.target.bp) == 0:
continue
# Randomly pick one of the breakpoints
addr = random.choice(self.target.bp)
self.target.log.info("Taking a break..")
# Add a message to the Avatar queue to trigger the
# breakpoint
self.target.avatar.queue.put(BreakpointHitMessage(self.target, 1, addr))
break
# 90% chances of reading from a forwarded memory address
else:
# Randomly pick one forwarded range
mem_range = random.choice(self.target.mranges)
# Randomly pick an address in the range
addr = random.randint(mem_range[0], mem_range[1] - 4)
# Add a message in the Avatar queue to read the value at
# that address
self.target.avatar.queue.put(RemoteMemoryReadMessage(self.target, 55, addr, 4))
        self.target.log.info("Avatar told me to stop..")
class DummyTarget(Target):
"""
This is a Dummy target that can be used for testing purposes.
It simulates a device that randomly reads from forwarded memory ranges
and triggers breakpoints.
"""
def __init__(self, avatar, **kwargs):
super(DummyTarget, self).__init__(avatar, **kwargs)
# List of breakpoints
self.bp = []
# List of forwarded memory ranges
self.mranges = []
self.thread = None
# Avatar will try to answer to our messages (e.g., with the value
# the Dummy Target tries to read from memory). To handle that
# we need a memory protocol. However, here we set the protocol to
        # ourselves (it's a dirty trick) and later implement the send_response
# method
self.protocols.remote_memory = self
# This is called by Avatar to initialize the target
def init(self):
self.log.info("Dummy Target Initialized and ready to rock")
# Ack. It should actually go to INITIALIZED but then the protocol
# should change that to STOPPED
self.avatar.queue.put(UpdateStateMessage(self, TargetStates.STOPPED))
# We fetch from Avatar the list of memory ranges that are
# configured to be forwarded
for mem_range in self.avatar.memory_ranges:
mem_range = mem_range.data
if mem_range.forwarded:
self.mranges.append((mem_range.address, mem_range.address + mem_range.size))
self.wait()
    # If someone wants to read memory from this target, we always return
    # the same value, no matter which address is requested
    def read_memory(self, *args, **kwargs):
        return 0xdeadbeef
    # This allows Avatar to answer our memory read requests.
    # However, we do not care about the answer
def send_response(self, id, value, success):
if success:
self.log.debug("RemoteMemoryRequest with id %d returned 0x%x" %
(id, value))
else:
self.log.warning("RemoteMemoryRequest with id %d failed" % id)
    # We let Avatar write to our memory.. well.. at least we let it
    # believe so
def write_memory(self, addr, size, val, *args, **kwargs):
return True
    # We keep track of breakpoints
def set_breakpoint(self, line, hardware=False, temporary=False, regex=False, condition=None, ignore_count=0,
thread=0):
self.bp.append(line)
    def remove_breakpoint(self, breakpoint):
        # Remove the breakpoint address if we know about it
        if breakpoint in self.bp:
            self.bp.remove(breakpoint)
# def wait(self):
# self.thread.join()
def cont(self):
if self.state != TargetStates.RUNNING:
self.avatar.queue.put(UpdateStateMessage(self, TargetStates.RUNNING))
self.thread = _TargetThread(self)
self.thread.daemon = True
self.thread.start()
def get_status(self):
if self.thread:
self.status.update({"state": self.state, "steps": self.thread.steps})
else:
self.status.update({"state": self.state, "steps": '-'})
return self.status
# Since we set the memory protocol to ourself, this is important to avoid
# an infinite recursion (otherwise by default a target would call
    # shutdown on all its protocols)
def shutdown(self):
pass
def stop(self):
if self.state == TargetStates.RUNNING:
self.thread.please_stop = True
self.avatar.queue.put(UpdateStateMessage(self, TargetStates.STOPPED))
return True
|
STACK_EDU
|
I am not sure if it's right to automatically checkout submodules for the root package. In general, SwiftPM avoids performing git operations on the root package (there might not even be a git repository).
An alternative solution might involve using URL rewriting provided in Git. We use this on developer machines to rewrite all SSH connections to HTTPS for GitHub to deal with restrictive proxies.
.netrc support being discussed here also: SPM support basic auth for non-git binary dependency hosts.
This is such a great solution/suggestion/workaround. For the first time my private SwiftPM dependencies resolve in my CI. A quick before_script step in my GitLab CI configuration to call git config and it works.
swiftpm_build:
  stage: build
  before_script:
    - "git config --global url.$CI_SERVER_PROTOCOL://gitlab-ci-token:$CI_JOB_TOKEN@$CI_SERVER_HOST/.insteadOf git@$CI_SERVER_HOST:"
    - swift package resolve
  script:
    - swift build -c release
  tags:
    - swift-5.0
Then the dark times...
xcodebuild? Because that's how you run SwiftPM tests on iOS. I do not make the rules. I just work here.
xcodebuild:
  stage: build
  variables:
    DESTINATION: platform=iOS Simulator,name=iPad Pro (9.7 inch),OS=10.3.1
  before_script:
    - "git config --global url.$CI_SERVER_PROTOCOL://gitlab-ci-token:$CI_JOB_TOKEN@$CI_SERVER_HOST/.insteadOf git@$CI_SERVER_HOST:"
    - xcodebuild -resolvePackageDependencies
  script:
    - xcodebuild -enableCodeCoverage YES -scheme "$XCODE_SCHEME" -destination "$DESTINATION" build-for-testing
  tags:
    - swift-5.0
    - iOS-10.3.1
Resolve Package Graph
Fetching email@example.com:group/dependency.git
Resolved source packages:
  SwiftPMProject: /Users/buildbot/SwiftPMProject
xcodebuild: error: Could not resolve package dependencies:
  The repository could not be found. Make sure a valid repository exists at the specified location and try again.
xcodebuild does its own package resolution outside of Git.
So close to nirvana.
Regardless, thanks @monocularvision
xcodebuild should work here. By default, Xcode uses its own SCM subsystem for fetching packages, but this option makes it use the one in libSwiftPM, so it should behave similarly to swift build at that point.
@NeoNacho thank you; that did work.
Though, for any intrepid reader that makes their way here: the -usePackageSupportBuiltinSCM option did not work on Xcode/xcodebuild 11.3.1. On 11.3.1, when the tests are run, apparently the dependency is not on the
It may work on some other version, but I just jumped to 11.5.0 and it worked there.
|
OPCFW_CODE
|
There has been a lot of debate following my initial post about Really Simple Subscription. Shortly after I suggested using the feed URI scheme, NetNewsWire developer Brent Simmons came out in favor it, and in the comments to my post we heard that Safari RSS (part of Apple’s upcoming Safari 2.0) will use feed:// quite heavily.
But other solutions have been proposed, such as the Universal Subscription Mechanism (USM) authored by Randy Morin. I’ve spoken with Randy about USM, and he knows that I have a number of issues with the “reflexive auto-discovery” mechanism. However, part of his proposal includes convincing feed producers to provide the correct Content-Type header for their feeds, and I’m 100% in favor of this. Although having the correct Content-Type doesn’t entirely solve the problem, it would take us a big step in the right direction.
If you’re interested in this topic, be sure to read all the comments beneath Brent Simmons’ post, including Danny Ayres’ comment that he “can’t actually see [much] conflict between these approaches, implementing one doesn’t rule out implementing the other.” I agree. We’d all be better served if we realized this isn’t either/or situation and stopped endlessly debating which solution is better. I’m in favor of feed:// because it’s simple for everyone to implement, but I’m also in favor of evangelizing feed producers to use the correct Content-Type. And once it’s fleshed out some more and answers my concerns about privacy, I may like Dave Winer’s solution as well.
9 thoughts on “Really Simple Subscription, PII”
Hurrah for compromise and collaboration.
I have one really dumb question about the feed:// protocol that I haven’t been able to find an answer to — are the double slashes really necessary at all?
From what I read, feed:http://some.site.with/feed.xml is permissible and makes a lot of sense to me. This form is just like mailto: is to email, so it doesn’t take much to see how a link of this type will behave the same way — it launches a local program and fires off an action. What does feed:// do differently then?
I guess the idea of an HTTP protocol alias is what sticks in my craw ever so slightly because of the precedent it sets. Not to be argumentative — I think getting MIME types right and giving users a consistent and simple way to subscribe to feeds is a very good idea. I’m just curious as to the thinking behind them and to find out what I’m missing.
Timothy, I believe feed:// is just a shorter way of saying feed:http://. If no protocol is defined, then HTTP is assumed.
Timothy, the “//” characters are necessary for URIs that include “authority” components (such as web hosts). It’s written this way so that a URI parser can understand the structure of a URI without knowing a specific scheme (such as feed).
I’d be interested to hear why you don’t like “reflexive auto-discovery” (I don’t either).
Nick, the RSS feed link in the right sidebar is returning
Could I ask a question or two? Is this a TypePad thing? If so, how would someone fix this? I'm compiling instructions for users.
Thanks for the reference. Much appreciated.
Robert, I just don’t see reflexive auto-discovery as being reliable – there are too many situations where it would fail. If you were using FeedDemon and cared nothing about the guts of RSS, how would you react if FeedDemon told you that you can’t subscribe to a feed because FeedDemon couldn’t figure out the URL?
Randy, I actually sent a support question to TypePad about this earlier today, asking how to change the content-type to application/rss+xml. At this point I don’t know how it’s done, but I’m assuming it’s possible.
Adding feed:// is unnecessary and it will only add another level of confusion. The best and simplest solution, IMHO, is to add the MIME attribute to the hypertext link. This works fine with the Auto Discovery, why don’t we follow suit and use the same method for the Auto Subscribe rather than inventing some thing new? Please see the following proposal:
RSS Auto Discovery & Auto Subscribe
Nick, any progress with Typepad’s Content-Type issue?
|
OPCFW_CODE
|
ONVIF: Which wsdl files are needed for Profile S
I am trying to write an ONVIF client in C++ using gsoap. The wsdl2h executable will generate the needed header, and the rest I think I understand.
My question:
Which wsdl files will I need, if I want my client to work with a device that supports ONVIF Profile S (let's say the mandatory specs)? Most importantly, how do I find that out? Is there a one-to-one link? Also, because I am behind a proxy and I can't seem to get that to work, can I somehow download all the needed wsdl files in a bunch?
Here's a list of .wsdl files I found...
https://www.onvif.org/ver10/device/wsdl/devicemgmt.wsdl
https://www.onvif.org/ver10/events/wsdl/event.wsdl
https://www.onvif.org/ver10/media/wsdl/media.wsdl
https://www.onvif.org/ver20/media/wsdl/media.wsdl
https://www.onvif.org/ver10/recording.wsdl
https://www.onvif.org/ver10/display.wsdl
https://www.onvif.org/ver10/receiver.wsdl
https://www.onvif.org/ver10/deviceio.wsdl
https://onvif.org/onvif/ver20/ptz/wsdl/ptz.wsdl
https://www.onvif.org/onvif/ver10/search.wsdl
https://www.onvif.org/ver10/replay.wsdl
https://www.onvif.org/ver10/advancedsecurity/wsdl/advancedsecurity.wsdl
https://www.onvif.org/ver20/imaging/wsdl/imaging.wsdl
https://www.onvif.org/ver10/analyticsdevice.wsdl
https://www.onvif.org/ver10/thermal/wsdl/thermal.wsdl
Hope this helps!
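If you're behind a proxy and want to grab several of these in a bunch, a small shell loop works; wget and curl honor the standard proxy environment variables. This is only a sketch — trim the URL list down to the services you actually need:

```shell
# Export your proxy first if needed, e.g.:
#   export https_proxy=http://user:pass@proxyhost:3128
WSDLS="
https://www.onvif.org/ver10/device/wsdl/devicemgmt.wsdl
https://www.onvif.org/ver10/events/wsdl/event.wsdl
https://www.onvif.org/ver10/media/wsdl/media.wsdl
https://www.onvif.org/ver20/ptz/wsdl/ptz.wsdl
https://www.onvif.org/ver20/imaging/wsdl/imaging.wsdl
"
for url in $WSDLS; do
  echo "fetching $url"
  # wget -q -x "$url"   # uncomment to actually download (-x keeps the path layout)
done
```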
There is not a single WSDL file that automatically includes all the WSDL files you may need.
If you check the ONVIF Profile S page, you will find the Profile S specification. As you can see from the PDF, some functions are mandatory for a client to be conformant, others are conditionally mandatory (you have to implement them if you want to claim support for those features), and some are optional.
After you choose what you want to support, you need to include the WSDL files for the services that you must implement.
Thank you for your answer. How do I know which wsdl file corresponds to a specific function?
Each function belongs to a service, and each service has a dedicated WSDL file.
Where is this service file found?
ONVIF specifications are available here: https://www.onvif.org/profiles/specifications/
|
STACK_EXCHANGE
|
Our analysis will use data covering the time frame from January 2020 to December 2020 and will be based on four data sources:
1. Qualitative policy coding
Based on published news articles online we will collect mitigation measures in each federal state in Germany and those of the federal government as we move through the various phases of the COVID-19 crisis. The product will be a dataset that can be used as a stand-alone piece of information or in conjunction with other data sources in the project.
2. Survey Data
For the first phase of the crisis we will use existing survey data from the GESIS Panel and other data collections, as we assume this phase will be over by the time this project starts. During the project we will conduct online surveys in three waves using a panel, in order to be able to follow the same individuals over the course of the project. The first survey will be conducted in August 2020 (month 2 of the project) and the others at six-month intervals (February 2021 and August 2021), with the final survey in month 13 of the project.
3. Discourse analysis of public speeches and statements of societal actors
Selected speeches and statements by societal actors like trade associations, unions, civil society associations, religious communities in the three phases are analyzed. The same coding book as for the Twitter analysis is used.
4. Discourse analysis of social media, namely Twitter
The continuous collection of Twitter data at GESIS has accumulated roughly 10 Bn tweets since 2013 and has recorded about 10 M tweets per day during the current corona crisis. This amounts to an accumulated dataset of about 1 billion tweets within the period 01–03/2020 alone, a majority of which reflect the public perception and discourse of various stakeholders on COVID-19 and related measures and factors. Figure 1 shows the daily frequency of tweets explicitly referring to the SARS-CoV-2 virus in the period 01/10/2019 to 31/03/2020, with a total number of 3,072,177 tweets. The tweets identified in this way can be examined, for example, with regard to the temporal evolution of the sentiment, connotation, or relevant corona-associated topics. Beyond this explicit mention of SARS-CoV-2, a significant part of the discourse on Twitter in the above-mentioned period addresses relevant topics, effects, or measures such as #curfew, #LockDown, #HomeOffice, #WFH ("working from home"), face masks, and panic buying. The qualitative and quantitative analysis of such discourse facilitates a dynamic understanding of solidarity and trust within society and of the effects of media or political events on solidarity and discourse.
The figure shows the daily frequency of tweets that explicitly refer to the SARS-CoV-2 virus in the period from 1 October 2019 to 31 March 2020, with a total of 3,072,177 tweets.
|
OPCFW_CODE
|
For example, if the red line is at 0.63 on the BTC/USDT chart, it means the start of the 12 AM hour of a day is the best hour to buy (all based on
It's just for the 1-hour time frame, but you can test it on other charts.
IMPORTANT: You can change the time zone in the strategy settings to get the real hours in your local timezone.
IMPORTANT: For now it's tuned for BTC/USDT, but you can optimize and test it for other charts...
IMPORTANT: Green and red background colors are calculated to show the user the best places to buy and sell (green: positive signal, red: negative signal).
timezone: the time zone we choose for the indicator to match our geographic location
source: the source series on which the rate of change is calculated
Time Period: the period of the ROC indicator
1- We first get a plot that just shows the current hour as a zigzag plot.
2- We then use an indicator (Rate of Change) to express chart movements as positive and negative numbers. In my tests ROC worked best, but you can try close-open, another indicator, etc.
3- To observe the effect of all previous data, we compute indicator_cum, which is just the full running sum of the indicator values.
4- Now we need to split this effect across hours and find out which hour is the best place to buy and which is the best to sell. We just calculate hour*indicator for each bar and take the full sum of it (indicator_mul_hour_cum), so:
5- We divide this number by indicator_cum: (indicator_mul_hour_cum) / indicator_cum
6- Now we have the best hour to buy! For the best hour to sell, we just reverse (negate) the ROC indicator and recalculate the best hour for it!
7- Green and red background colors show the user the best places to buy and sell, changing dynamically with the green and red plots (green: positive signal, red: negative signal). When the green plot is at 15, the background turns green at hour 15 of each day; if over some days the plot rises to 16, the green background moves to 16 dynamically.
You can just wait for a green vertical background color to buy and a red bg color to sell :)
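The steps above can be sketched in Python (a sketch only — the actual script is in Pine; `closes` and `hours` are assumed parallel per-bar lists, and a zero indicator sum is not handled):

```python
def best_hour(closes, hours, period=9):
    """Steps 2-5 above: a ROC-weighted average of the hour of day."""
    # Step 2: Rate of Change as positive/negative numbers.
    roc = [0.0] * len(closes)
    for i in range(period, len(closes)):
        roc[i] = (closes[i] - closes[i - period]) / closes[i - period] * 100
    # Step 3: full sum of the indicator values.
    indicator_cum = sum(roc)
    # Step 4: full sum of hour * indicator.
    indicator_mul_hour_cum = sum(r * h for r, h in zip(roc, hours))
    # Step 5: the weighted "best hour" (step 6: negate roc for the sell side).
    return indicator_mul_hour_cum / indicator_cum
```

With all positive change concentrated on one hour, the weighted average lands on that hour; mixed signs can produce the negative or odd results the author mentions for V3.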
How to use it:
You see two lines (red and green) that show you the best time to trade (green for buy, red for sell) in the UTC+0000 timezone! You should convert the last values of these two lines to your timezone, for example:
If the green line is at 3.5 and I'm in Tehran, Iran, with time zone +4:30, I add 4.5 hours to it: 3.5 + 4.5 = 8
This number means I will buy at 8:00 AM.
IMPORTANT: 8:00 AM is not exactly the best hour to buy! It just means that the average hour of most positive changes in past days happened at 8 on the RSI and MFI indicators.
This strategy is based on the idea of the circular mean with atan2.
Circular average calculation for 24 hour daily periods
This calculation fixes the problem of the normal average over 24-hour daily periods. To understand what we want to do, consider this: if we average two hours on a 24-hour clock with the normal mean, the result can be wrong: (2+22)/2 = 12, but we know periodic quantities do not average like that, and we need another method to compute the mean of such numbers. In this article we use Python code to fix this problem and generate a realistic average of hours.
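A minimal Python sketch of that fix (my own illustration, not the author's exact code): map each hour to an angle on the 24-hour circle, average the unit vectors, and recover the mean hour with atan2:

```python
import math

def circular_mean_hours(hours):
    """Circular mean of hours on a 24-hour clock."""
    # Map each hour to an angle (24 h = 2*pi radians) and sum unit vectors.
    sins = sum(math.sin(2 * math.pi * h / 24) for h in hours)
    coss = sum(math.cos(2 * math.pi * h / 24) for h in hours)
    # atan2 recovers the mean angle; map it back to hours in [0, 24).
    return (math.atan2(sins, coss) * 24 / (2 * math.pi)) % 24

# (2+22)/2 gives 12, but the circular mean of 2 and 22 is midnight:
# a value within floating-point error of 0 (or, equivalently, 24).
print(circular_mean_hours([2, 22]))
```

Unlike the arithmetic mean, this never places the average on the wrong side of the clock.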
We used the normal mean (arithmetic mean) formula in Trade Hour V3. I haven't updated V3 to V4 because V3 still performs better than V4, but V4 works without negative or undefined results and uses a more standard formula than V3.
For more information see these links:
In true TradingView spirit, the author has published this script open-source so that other traders can understand and verify it. Hats off to the author! You may use it for free, but reuse of this code in a publication is governed by our House Rules. You can favorite it to use it on a chart.
The information and publications are not meant to be, and do not constitute, financial, investment, trading, or other types of advice or recommendations supplied or endorsed by TradingView. Read more in the Terms of Use.
|
OPCFW_CODE
|
I can't wait for Windows 8 to arrive, not really for the new features but because it's an excuse to start over with my computer, to lose all the crud that builds up over time to slow it down, crud that maybe software developers themselves could prevent.
In particular, I'm annoyed with all the programs that load themselves when I start my computer. They're like guests you invite over for a day who instead decide to move in permanently. Go home! Or don't come over again until you're invited!
I'm a longtime Windows user, and this problem has been around with the operating system for as long as I can remember. It may be a similar problem with the Mac, but for whatever reason, I don't notice it as much there. Suffice it to say, to the degree it happens on the Mac, I don't like that, either.
For this column, I'll focus on Windows as an example of the issue. Let's start with my task bar, shown over there on the right. This is the most easily accessible, visible reminder of programs that have decided they need to run all the time.
Some of these make sense. The cloud icon represents SkyDrive, Microsoft's service that allows me to automatically synchronize files on my computer with Microsoft's storage service. I do want that always running. But many of those other icons represent software that I don't need constantly on, and turning these off can be a challenge.
Saying no to auto-run, when you can
Those headphones in the task pane? They represent Google Music, which is always watching to see if I have new music on my computer that should be uploaded to the Google Music servers. I don't mind that this is always running. It's also easy to disable, if I want:
But others don't make it so easy. That icon in my task pane with two arrows? That's for Live Mesh, Microsoft's other file sync service that I used to use before SkyDrive. I no longer need Live Mesh, but I couldn't find any way to stop it loading at start-up within the program itself, other than to uninstall it entirely.
That little camera icon? That's the Adobe Photo Downloader, which installed itself as part of Photoshop Elements 6. I know, that's a way old program. I needed to dig it out recently for when I was experimenting with Photoshop Elements 10. But there was no easy way I could find to stop it from loading, not within the application itself.
Speaking of Photoshop Elements 10, installing that seemed to add that little red A icon to my task pane, the Adobe Application Manager. What's that all about? It's constantly checking for updates to Adobe applications.
Do I really need that running all the time? Why can't Adobe just check when I actually run one of my Adobe products to see if there's an update? As it turns out, it probably can:
The screenshot above shows how I can disable the Adobe Application Manager. It's nice that it does have this option. But "Notify me of new updates in the menu bar" doesn't clearly explain that the program will run each time Windows starts. More important, why flip that on by default?
Not shown in the task pane is Zune, software that keeps launching without me asking it to. Digging into it, as best I can tell, it thinks my external hard drive is a device that requires waking it from its slumber at random points during the day. At least Zune has an option that I hope will prevent this from happening in the future:
Somewhat similarly, Motorola Mobility decided that it really needed to have MotoConnect software running all the time on my computer. I don't know that it was ever necessary when I did have a Motorola Android phone. As I don't have one now, I know I don't need it running. Unfortunately, the expanded task pane that lists it running doesn't allow me to prevent it loading at start-up:
I can hide the icon. Oh boy! But I can't just turn it off from the task pane. Wouldn't that be nice? Nor does the software have an off-switch. Instead, I had to uninstall it entirely.
There are plenty of other things running at start-up that I probably don't need. My favorite long-standing tip for disabling unnecessary programs is to click Start, then go to the search box at the bottom of the Start menu and enter "msconfig" to launch the System Configuration window. Then, using the Startup tab, you can untick to prevent some programs from launching that might not show themselves in the task pane:
Look, there's Amazon Unbox Video deciding it really needed to run, even though I never need it. Tick, and it's off. You can do the same for other programs listed. The downside is that you really have to know what's essential or not when you do this. I wish Windows gave you better advice, but it doesn't.
Looking through the CNET archives, I found a piece that mentions the Autoruns utility from Microsoft, which sounds promising, especially when the utility itself promises the "most comprehensive knowledge of auto-starting locations of any startup monitor." But have a look:
That's just a sample of all it reveals. Comprehensive, yes. But it's also cryptic and not enlightening. Click on an entry, and you're taken to the Windows registry, which doesn't help you know if you're going to disable some essential program or not.
Of course, there's also the trusty Task Manager that can show all running processes in Windows. But I pity anyone who then goes to the Web to find out whether csrss.exe or distnoted.exe are essential processes or not. The first is from Microsoft, the second from Apple, and I've hit Web pages describing both as malware or Trojan software.
In the end, I come back to thinking developers themselves should really think twice before deciding that any process needs to run at start-up. I sure hope that Windows 8 provides some better control of the situation. And to the degree it's a problem on the Mac, here's hoping for the same solution.
|
OPCFW_CODE
|
The purpose of this procedure is to create selection matrices for dynamic selection of item number, material quantity, sales price or drawing measurement when a product is configured.
The procedure also includes a check of the selection matrix through simulation.
Before you can begin to create a selection matrix, the following prerequisites must be met:
Follow the steps below to create and check a selection matrix.
Start 'Selection Matrix. Open' (PDS090). Include panels E and 1 in your panel sequence.
Enter a new selection matrix and selection matrix type. Select 'New'.
Fill in or change the information in the selection matrix header in panel E. When creating a type 2 matrix (drawing measurement), you can regulate the number of possible responses in the matrix. This is done by using names in each of the response columns; a column without a name will not be used by the system. At least response column 1 must have a name for matrix type 2. A selection matrix can be minimized by using ranges for columns containing only digits. To do this, check the field Range (Ran). The product configuration reads the selection matrices from left to right. This means that the relative position of the selection columns can play a major role in the complexity of the matrix and the order in which features are displayed in a product configuration.
Specify the selection matrix lines in panel 1 and then create them. The size of the selection matrix can be minimized simply by specifying only the lines that should give a result differing from a standard result. To create a standard line, create one or more lines where one or more selection columns do not contain a value. The system then processes these matrix lines as standard lines and uses them when no other line in the matrix matches the current values. Different levels of standard responses can be used, with more or fewer selection columns filled in (an extreme case is when none of the selection columns are filled in). However, these standard lines must have a pyramid structure, where a blank value in one selection column always has a value filled in the selection column immediately to the left of it (unless it is the first selection column). Note that blank values exclude the possibility of using the matrix to limit valid options. To change the position of two selection columns, you must first delete all matrix lines.
A simulation of the selection matrix can be performed by selecting 'retrieve values' (13). Enter appropriate values for simulation and begin the simulation.
Note that whilst different elements are available to product and procurement using selection matrix configuration, no system-enforced segregation exists between them. Where the procurement configuration elements Attribute and Formula are used in product configuration, or in other areas where the matrix is used, a blank or zero key value, and the corresponding matrix value where one exists, will be returned.
A selection matrix is created which can then be used in a number of product structures to select item number, quantity or drawing measurement.
A matrix can have the following design:
| 11 | Color | Height meas. | Valid from | Item number |
This matrix results in the following:
| Red | UTB292 (regardless of height) |
|
OPCFW_CODE
|
How did Google perceive our SEO migration to blablacar.fr?
A website migration is always a risky task. Just as a reminder, we changed our brand name in France from covoiturage to BlaBlaCar in April 2013. We decided to switch the domain name only at the beginning of June 2015, because we thought it was finally time to do it in France: blablacar is now much more popular than the term covoiturage. The main goal was to unify our brand domain names everywhere; the SEO challenge was to make sure traffic loss would be kept to a minimum.
What did we do?
- Set up 301 redirects for all pages from covoiturage.fr to blablacar.fr (e.g. http://www.covoiturage.fr/trajets/rennes/ to https://www.blablacar.fr/trajets/rennes/), and update all internal links.
- Warn Google that the domain has moved in Google Webmaster Tools.
- Update external links on the most important websites. The first thing was to update the links on social platforms and partner websites. Of course, you can't update everything, and contacting all websites can be a very long (and in certain cases useless) task.
- Monitor the changes in terms of crawling, indexation, rankings, traffic and links.
How did we proceed to migrate from a technical point of view?
Massive redirects at scale
We started by checking the capacity of our redirector farm to handle a lot of HTTP(S) redirects, and we ensured we were able to do reverse-proxy caching of 301 responses for future covoiturage.fr requests.
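As an illustration only (BlaBlaCar's actual redirector configuration isn't shown here), a path-and-query-preserving 301 at the reverse-proxy level can look like this nginx fragment:

```nginx
server {
    listen 80;
    listen 443 ssl;
    server_name www.covoiturage.fr covoiturage.fr;
    # $request_uri keeps the path and query string intact, e.g.
    # /trajets/rennes/ -> https://www.blablacar.fr/trajets/rennes/
    return 301 https://www.blablacar.fr$request_uri;
}
```

Caching these 301 responses at the reverse proxy keeps the load off the redirector backends.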
Scaling up our infrastructure is easy because we can bootstrap new machines with a Chef role, and machines quickly become able to serve extra requests; that is the power of infrastructure automation!
We then added www.blablacar.fr to our platform SSL certificates and deployed them across all our delivery nodes.
After that, we added blablacar.fr to the HSTS browser preload lists to register this domain as a full HTTPS one: on the member side, it makes the website faster by avoiding extra server-redirect round-trips, i.e. the browser calls HTTPS URLs directly instead of HTTP ones.
The next step consisted in serving the website using an HTTP Host header with value www.blablacar.fr.
From our load balancers/reverse proxy cache levels, we had before migration:
- www.covoiturage.fr routed to our application backends farm
- www.blablacar.fr routed to our redirector backends farm
So, the steps to serve www.blablacar.fr were:
- continuing to serve www.covoiturage.fr with the application backend farm
- routing www.blablacar.fr to application backend farm at a reverse proxy level (no incoming traffic for the moment)
- decreasing in advance www.blablacar.fr and www.covoiturage.fr DNS TTL, then updating www.blablacar.fr to point to our application infrastructure IPs
- purging the reverse proxy on www.blablacar.fr
- routing www.covoiturage.fr to our redirectors
- purging www.covoiturage.fr
This is how it behaved during migration:
1. www.covoiturage.fr => 200, www.blablacar.fr => 301 to www.covoiturage.fr
2. www.covoiturage.fr => 200, www.blablacar.fr => 200 (during a few minutes)
3. www.covoiturage.fr => 301 to www.blablacar.fr => 200
How did Google react in terms of crawling?
Right after the migration, Google massively crawled the pages on the blablacar.fr domain and slowed down once they had all been discovered. We've now recovered the same crawl volumes as we had before the migration.
As predicted, Google’s crawl on covoiturage.fr is decreasing with time but will certainly never stop because there will always be some remaining websites linking to covoiturage.fr. That’s why redirects should always stay active on covoiturage.fr.
How did Google react in terms of indexation?
- Despite the redirects, some URLs of covoiturage.fr and blablacar.fr were indexed & positioned in Google at the same time.
- During a short period, we also saw some URLs of our old platform, which had disappeared more than a year ago, come back into the search results! Some URLs of our mobile website also appeared in Google's desktop results.
- It took 1 week to get our sitelinks back on the brand name.
- We now have all new URLs indexed in Google. There are still a lot of covoiturage.fr URLs in Google’s index because it requires time to clean them.
Impact in terms of ranking?
- On our top axis pages, it took about 3 weeks to get everything back to normal on keywords “ridesharing + city” (e.g. covoiturage paris) & “ridesharing + axis” (e.g. covoiturage paris nantes).
- Same thing for axis-only queries (paris nantes); we even got better results after the migration:
Impact in terms of traffic?
- SEO traffic on the whole website didn’t decrease during the migration, which is definitely the most satisfying result we could encounter.
- SEO traffic on city & axis pages actually decreased in the two weeks after the migration because of the loss of rankings on some queries, but traffic on those pages got back to its normal growth trend.
We were pretty confident about this migration, but we actually didn't expect so many (strange) moves in Google's results. Of course you can always make assumptions about why it behaves this way, but you really have no clue what exactly is going to happen.
The lesson is that even if you do the maximum of required optimizations, you always have to expect some disturbances in your SEO when you do a website migration. Monitoring the changes with data is the best way to understand what’s really going on and react if things don’t go the right way.
|
OPCFW_CODE
|
Turkeys in Trees
It’s near Thanksgiving in the United States, and so it seems appropriate to be writing about turkeys. But today I don’t want to talk about the turkey on your plate, next to the stuffing and all the fixins. I want to talk about turkeys in trees.
It’s commonly understood that turkeys are flightless birds. I happen to live in a neighborhood that is home to a band of roving wild turkeys, and I can confirm they sure look like a fairly gravity-bound lot. So when I was out for a jog one day and spotted a bunch of them gobbling about high in the branches of a nearby tree, I was surprised. (This is an understatement. Real reaction: How the f_ck did that turkey get in that tree!?)
This is apparently a habit of these turkeys. I now frequently spot them up in that tree. But one day the mystery of how they got there was solved. It turns out these birds are not completely flightless, they just can’t fly very far or very high at one go. I watched as this awkward bird flapped up onto the roof of a low-slung bungalow near the tree. And from there, it made the final aerial leap up into the boughs, where it came to rest and presumably enjoyed the view.
This is an inspiring bit of thinking on the part of the turkey, which hitherto I had thought to not be a very bright animal. “I want to get there,” it thinks, “but first I will have to go there.” Baby steps. Incrementalism.
And this is the part where I bring it back to software (you hoped that was coming, right?). Anyone who has dabbled in writing code understands the principle of breaking problems down. But in practice, this is hard. Like, really hard. You stare at that “quickstart” tutorial in the repo, or try to digest the man page. But you just can’t grok it. I have been here many times.
When this happens to me, I make a todo list, usually in a fresh new text file. Sometimes, I get so micro that the steps are painfully obvious (“Open up text editor. DONE!"). But it gets me moving. And if I get stuck, it’s only on the tiny-step problem (e.g. “Import the data”), rather than freezing up on the whole project (“Make an interactive visualization of the timeseries”).
There’s a reason why this works. To borrow Kent Beck’s language
on software testing:
a green bar feels completely different from having a red bar.
Or as Sacha Chua put it in a post on “sequencing”:
If I focus on writing tiny chunks that start off as “just good enough” rather than making them as elaborate as I can, that lets me move on to the rest of the program and get feedback faster.
The nice thing about having the list is that at the end of the session, even if the only output I have is something very minimal, I can look back at the steps it took to get there and feel some sense of greater progress.
Recall the turkey: I may not be in the tree yet, but the fact that I am off the ground is a flippin’ miracle.
|
OPCFW_CODE
|
Studies in Corpus-Based Sociolinguistics illustrates how sociolinguistic approaches and linguistic distributions from corpora can be effectively combined to produce meaningful studies of language use and language variation. Three major parts comprise the volume focusing on: (1) Corpora and the Study of Languages and Dialects, in particular, varieties of global Englishes; (2) Corpora and Social Demographics; and (3) Corpora and Register Characteristics. The 14 peer-reviewed, new, and original chapters explore language variation related to regional dialectology, gender, sexuality, age, race, ‘nation,’ workplace discourse, diachronic change, and social media and web registers. Invited contributors made use of systematically-designed general and specialized corpora, sound research questions, methodologies (e.g., keyword analysis, multi-dimensional analysis, clusters, and collocations), and logical/credible interpretive techniques. Studies in Corpus-Based Sociolinguistics is an important resource for researchers and graduate students in the fields of sociolinguistics, corpus linguistics, and applied linguistics.
Table of Contents
1 Corpus approaches to sociolinguistics: Introduction and chapter overviews
Eric Friginal and Mackenzie Bristow
Part 1: Corpora and the study of languages/dialects
(Varieties of global Englishes)
2 Using large online corpora to examine lexical, semantic, and cultural variation in different dialects and time periods
3 Using corpus-based analysis to study register and dialect variation on the searchable web
Douglas Biber, Jesse Egbert, and Meixiu Zhang
4 Variation in global English: A collocation-based analysis
Tony Berber Sardinha
5 Indian English: A pedagogical model (even) in India?
Part 2: Corpora and social demographics
7 A corpus-based analysis of the pragmatic marker you get me
Eivind Torgersen, Costas Gabrielatos, and Sebastian Hoffmann
8 just, actually at work in New Zealand
9 Exploring the intersection of gender and race in evaluations of mathematics instructors on ratemyprofessors.com
Nicholas Close Subtirelu
10 Attitudes towards autism of parents raising autistic children: Evidence from "mom" and "dad" blogs
A. Cameron Coppala and Jack A. Hardy
11 Social functional linguistic variation in conversational Dutch
Jack Grieve, Tom Ruette, Dirk Speelman, and Dirk Geeraerts
Part 3: Corpora and register characteristics
12 A corpus-driven investigation of corporate governance reports
13 A corpus-assisted discourse study (CADS) of representations of the ‘underclass’ in the English-language press: Who are they, how do they behave, and who is to blame for them?
Jane H. Johnson and Alan Partington
14 ‘had enough of experts’: Intersubjectivity and the quoted voice in microblogging
15 Linguistic variation in Facebook and Twitter posts
Eric Friginal, Oksana Waugh, and Ashley Titak
Eric Friginal is Associate Professor of Applied Linguistics at the Department of Applied Linguistics and ESL and Director of International Programs, College of Arts and Sciences, at Georgia State University, USA.
|
OPCFW_CODE
|
There is a learning curve in switching from your local SQL Server database to SQL Azure. It was a painful but instructive experience for me, and this post will hopefully help you get on that road.
Let’s assume you have created a SQL Azure database, which is pretty straight-forward, and want to now connect to it from a client.
Here’s what you will need…
You can connect to SQL Azure from within SQL Manager provided you have the SQL Server 2008 R2 edition. If you do not, you can reportedly still connect through a query window (based on some posts I read, though I'm not sure), so remember to upgrade to the R2 edition if possible, or download the free R2 Express edition.
To connect to Azure, you will need changes at the server and the client.
At the server side in Azure, you will need to open the firewall to allow connections from an IP range. Azure makes this very easy if configured from the client machine, because when you add a new rule, it displays your current IP, which makes it easier to specify a range.
In the end, you might end up with a few rules like this:
The client side could end up being one of the most frustrating aspects of using Azure. You need outbound port 1433 open to connect to SQL Azure. Period. There is no workaround for that whatsoever. If you're in a corporate network, this might be an issue. If you're on a wireless network, this might be an issue because many public ones only have port 80 and a few others open.
You might get the following error if your port is not accessible.
You might have to work with your corporate IT or ISP to open port 1433.
To check if your SQL Azure database is accessible from your machine, telnet into it. Here's how: depending on your Windows installation, you might have to install telnet first. Go to Control Panel – Programs and Features, open the Windows features list, check Telnet Client, and install it. Then go to a command prompt and type in telnet, and you will get a telnet prompt. To attempt to connect to your SQL Azure db, enter o xyz.database.windows.net 1433 and you should get a connecting prompt.
Something like this:
Otherwise, you’ll get a meaningful error message that you could troubleshoot.
When done, enter q to quit telnet.
Once you’re able to connect to your db in the cloud, it’s time to use it in your application.
To point your app to the SQL Azure db, it is just a matter of replacing your (local) connection string with an Azure connection string. This is made really convenient by the Azure folks because they provide a connection string in the online admin tool that you can just copy over.
Go to the Databases tab, select your database and click the Connection Strings button and you get a useful pop-up.
One gotcha if you make a mistake entering your string (which I did): you might get an error stating "Format of the initialization string does not conform to specification starting at index NN". This points to a malformed connection string (often in the config XML), so double-check your string.
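For reference, the ADO.NET string the portal gives you generally follows this shape; every name below is a placeholder, not a real server:

```
Server=tcp:xyz.database.windows.net,1433;Database=mydb;User ID=myuser@xyz;Password=myPassword;Trusted_Connection=False;Encrypt=True;
```

Note the tcp: prefix, the explicit 1433 port, and Encrypt=True, which SQL Azure requires.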
I think that covers the basics of connecting to your database in the cloud. Have fun!
No log output on Debian-based distros
Describe the bug
In #1697 we set StandardOutput=null in scripts/rpm/falco.service to avoid logging to /var/log/messages in Red Hat-based distros. In #2242 we moved away from distro-specific unit files and added scripts/systemd/falco-kmod.service. This file has StandardOutput=null.
When running Falco using scripts/systemd/falco-kmod.service on Ubuntu, I'm getting no output in journalctl -u falco-kmod. Removing StandardOutput=null from the unit file fixes the problem.
Based on https://github.com/falcosecurity/falco/pull/1697#issuecomment-1281044891 I suspect the logging to /var/log/messages by default doesn't affect Debian-based distros. I'm not sure how to tackle this but IMO not seeing logs in journalctl is counterintuitive to Debian users. We may want to do one of the following:
Provide a default which works for most common distros.
Provide commented-out alternatives in the systemd unit files and document distro-specific configuration.
Find a different way to handle the Red Hat-specific problem. Do all systemd units without StandardOutput=null log to /var/log/messages on Red Hat-based distros? 😕
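The second option could look something like this sketch inside the unit file (the exact comment wording is just an illustration, not an agreed-upon text):

```ini
[Service]
# On Red Hat-based distros, keep this line to avoid duplicating Falco output
# into /var/log/messages.
# On some Debian-based distros, comment it out if you want Falco's stdout
# to show up in journalctl.
StandardOutput=null
```

This keeps the current default while making the trade-off visible to anyone editing the unit.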
How to reproduce it
Run Falco using the provided scripts/systemd/falco-kmod.service on Ubuntu (I used 18.04.6 but I don't think the precise version matters).
Expected behaviour
Logs show up in journalctl.
Screenshots
Environment
Falco version:
Thu Jun 15 14:03:01 2023: Falco version: 0.35.0 (x86_64)
Thu Jun 15 14:03:01 2023: Falco initialized with configuration file: /etc/falco/falco.yaml
{"default_driver_version":"5.0.1+driver","driver_api_version":"4.0.0","driver_schema_version":"2.0.0","engine_version":"17","falco_version":"0.35.0","libs_version":"0.11.2","plugin_api_version":"3.0.0"}
System info:
Cloud provider or hardware configuration:
OS:
NAME="Ubuntu"
VERSION="18.04.6 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.6 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
Kernel:
Linux <redacted> 5.4.0-1109-azure #115~18.04.1-Ubuntu SMP Mon May 22 20:06:37 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Installation method:
DEB
Additional context
Hi! thank you for reporting this, we will take a look ASAP!
/cc @leogr and, if i remember correctly, @therealbobo was the last one looking into this.
@FedeDP, what a memory! I practically opened the same PR some time ago: https://github.com/falcosecurity/falco/pull/2499 If the common opinion is to keep StandardOutput=null, I strongly suggest documenting somewhere the reason for it. 😄
Sorry for the delay, I tried just now on an ubuntu machine:
6.2.0-1010-aws #10~22.04.1-Ubuntu
and with StandardOutput=null I can see logs in journalctl. So I would say that each distro has its own behavior.
Provide commented-out alternatives in the systemd unit files and document distro-specific configuration.
Probably this is the most reasonable solution at the moment :thinking:
I was opening a PR to add comments to our systemd units, but probably the better place is our website; we should add something in our systemd section. This is what we know at the moment:
Ubuntu 18.04: remove this line to see logs with journalctl
Ubuntu 22.04: keep this line as it is to see logs with journalctl
RHEL distros: keep this line as it is to avoid polluting /var/log/messages
When running Falco using scripts/systemd/falco-kmod.service on Ubuntu, I'm getting no output in journalctl -u falco-kmod. Removing StandardOutput=null from the unit file fixes the problem.
IIRC, regardless of StandardOutput=null, alerts should be directly forwarded by Falco's syslog_output, which is enabled by default :point_down:
https://github.com/falcosecurity/falco/blob/b2374b3c196cafb1bc2c2dce3b92b2d6db7af371/falco.yaml#L339-L343
So, I'd expect to see logs show up in journalctl even with StandardOutput=null on any distro. Does it make sense?
cc @Andreagit97 @johananl @FedeDP @therealbobo
If this is not the case, the root cause may be somewhere else, and not a documentation problem :thinking:
I can't reproduce this now on Ubuntu 18.04.6, namely I can see logs in journalctl.
I suspect I may have used the wrong systemd unit name in the first instance 🙈 Since I can't tell for sure, let's assume a mistake on my end. I'll report here again in case I am certain there is an issue.
I'll leave it to you folks to decide whether to close the issue or update the docs just in case.
Thanks for your help anyway!
Hey @johananl, I haven't been able to reproduce it either. I've observed the same behavior (despite a difference in the color of the journald entry) in both 18.04 and 22.04.
To get the behavior you reported, syslog_output would have to be disabled as well, so it could be that, when trying different options, one of them misconfigured Falco.
Thank you all guys for the help, i will close this for the moment, feel free to reopen if you face new issues :)
Do Protestants believe that the New Testament is the final covenant?
Do Protestants believe that the New Testament is the final covenant/testament? For example, many groups that branched off from Christianity believe the Bible is not complete and requires another revelation/modification from Jesus Christ (e.g. Mormonism, JWs).
If so, what are the arguments / biblical basis used to support this view?
Article VI of the Articles of Religion of the Church of England, which is accepted by most of the Anglican Communion, provides in part, that
HOLY Scripture containeth all things necessary to salvation: so that whatsoever is not read therein, nor may be proved thereby, is not to be required of any man, that it should be believed as an article of the Faith, or be thought requisite or necessary to salvation. ...
Lutherans hold to the same viewpoint.
This is a bit sparse, but it's on the right track. Pretty much every major Protestant confession of faith ever says something along these lines. It might be worth expanding this answer to show that this isn't just a few samples. Also as I mention in this comment noting the difference between groups that this holds true for (reformers, i.e. Protestants) vs. ones it does not (reconstructionists, i.e. LDS, SDA, JW, etc.) might be useful.
@Caleb, while I understand that all Christians who self-identify as Protestant agree with what I cite from the Articles of Religion of the Church of England, I only feel qualified to answer for Lutherans and Anglicans, as these are the groups with which I have familiarity. The question also involves a different one: who gets to decide who is, and who is not, a Protestant. Some who characterize themselves that way do not agree that others are properly characterized that way. My own thoughts on the matter are that those who believe the Bible is incomplete are not Protestants.
Revelation 22 says this:
18 I warn everyone who hears the words of the prophecy of this scroll:
If anyone adds anything to them, God will add to that person the
plagues described in this scroll. 19 And if anyone takes words away
from this scroll of prophecy, God will take away from that person any
share in the tree of life and in the Holy City, which are described in
this scroll.
The context makes it clear that this warning only applies to the book of Revelation itself. However, since the book of Revelation occurs last in most Christian Bibles, and the Apostle John was the last of the Apostles to die and the words and writing of the Apostles are given greater weight than those of other believers, it is commonly assumed by many Christians that this prohibition applies to the whole Bible: no more additions. This has the force, not of doctrine, but of a strong impression that most Christians accept, or at least give credence to.
Document Conversion: A persistent problem and possible solutions from dots Software
In the following text we have a look at the problem of document conversion from a print operator's and an end user's perspective. We will show you possible solutions provided by dots Software.
Making application data fit for printing presents a continuous challenge. According to an InfoTrend study from 2010, only 25% of all incoming jobs arrive in the desired format: PDF. If we add print jobs that arrive in PostScript format, which can also be handled easily without any conversion, roughly every third job seems to be OK. But the remaining ⅔ need conversion. That is a problem. And even PDF sometimes requires adjustment...
From a printer's perspective the solution seems clear: Increase the use of PDF. But you cannot force the desired format upon common users for a variety of reasons, waste of productivity chief among them. And if we talk about in-house printing, meaning corporate printing, the skills and tools for creating printable PDF files are often missing among end users.
In short: It is very difficult to make the PDF format mandatory. So if neither the users nor the machines can be forced to process data in a certain format, you need some magic in between.
And that magic is provided by software!
There are a couple of solutions:
- send and transform the data by way of a virtual printer
- transform the data during upload
- check incoming PDFs for their printability
dots Software offers all 3 of those solutions, let's have a closer look.
This is a solution much favored by in-house printers: after installation of either the JT Printer (for Printgroove JT Man) or the JT Web Printer (for Printgroove JT Web & Printgroove JT Suite) on the user's device, a virtual printer is added to the FILE | PRINT menu of the user's application(s). These printers convert the print job into printable data and submit it for further processing to JT Man or JT Web.
Advantage: Printable data is created without user intervention including all required resources like fonts, logos, images etc.
Disadvantage: The installation of virtual printers may interfere with security or IT policy.
Converting data on the server
Printgroove JT Web and Printgroove JT Suite offer an additional option called JT Document Converter with the following workflow: A user creates a document using a common application and saves it to the file system. With JT Document Converter installed on the server, the upload dialog in JT Web changes and allows for more upload-able formats. JT Document Converter then converts the native file formats into printable data on the server.
Advantage: No local installation, no conflict with IT or security policies here.
Disadvantage: In order to accept all necessary file formats, a version of each source application whose file format has to be accepted needs to be installed on the server. If an application is not available on the server, its file format cannot be accepted.
Checking incoming PDF files
Even if users submit print files in PDF format, problems may still occur during printing due to missing fonts, images with too low resolution, or included transparencies. To solve these problems, Printgroove JT Web and Printgroove JT Suite offer a basic preflight check covering the three issues mentioned above.
Advantage: The most common problems can be avoided.
Disadvantage: You need incoming PDF files. There's no conversion included and discovered problems are not automatically solved. This has to be done by either the print operator or the user.
File conversion is here to stay
The need for file conversion will not go away, but the complexity of the process can be greatly reduced by software. Using software solves a lot of common problems for both the end user and the print operator, generating user acceptance on both sides of the transmission and increasing productivity.
Getting devices to users is harder than handing them a tablet or laptop and saying "go nuts". You need to prepare the device, update it, join it to the domain, load any software, etc. We're actually hosting a video meetup with Spiceworks tomorrow on how you can make your device deployment easier, but before then, some of the most important things you'll need to consider are:
- Will you need to wipe the laptop clean from any vendor-provided software?
- How are you going to image the OS for all of the devices coming in?
- What software are you going to need to load onto the devices?
- How will you secure all of your devices?
What do you like to use to help deploy your computers, laptops, tablets, etc.? And do you have a formal checklist made or do you handle most of it on a case-by-case basis?
I'd love an answer on this one if anyone has any insight?
Essentially, right now I definitely do it on a case-by-case basis. I have a list of job roles and their software requirements, with all the hyperlinks for the software embedded, and I work through this list based on the user's job role. Complications arise with things like Office 365 installs, as I essentially need to do this "as the user" and then change their password. It's very labour-intensive.
I use and manage most of your bullet points with a product called "SmartDeploy", which I deploy either via a cloud storage drive (as one of my sites does for new setups) or from an internal setup server for my deployments.
And I have a set of required software that gets loaded into each golden image, varying a little based on the department or need.
And for securing all of our devices, we currently use Windows BitLocker, and eventually we will be moving to Sophos Encryption.
Brand Representative for SmartDeploy
From our standpoint, if you’re managing more than 50 computers we recommend using an imaging solution. You’ll need a Microsoft volume license and we suggest creating a hardware-independent golden image captured from a VM reference machine for maximum flexibility and cost savings.
SmartDeploy has been used to deploy millions of computers, tablets, and servers around the world. Our goal is to make computer provisioning faster and easier so IT can focus on more strategic projects. For those teams managing more than a few hardware models, we make this easy by managing device drivers for you. All you’ll need to do is create and manage a single golden image to deploy to any model.
There are a bunch of guides and best practices on our YouTube channel, with information like how to include O365 in your master image and things to know about reimaging rights. Brittany's Dell event looks like it will be good, too!
Each layer in a window can have the opacity of that layer individually set from zero opacity to 100 percent opaque. Layers that are partially transparent will allow objects, labels and pixels from layers underneath them to be partially visible. Opacity is controlled by settings in the Layers pane.
The illustration above shows a map with two layers, a vector drawing layer of buildings shown above an image layer from a Bing satellite image server. Both layers are at 100% opacity, so the upper, buildings layer does not allow any part of the layer below it to show through the area objects used to show buildings in that drawing.
To change the opacity of a layer in a window, open the window and then choose the Layers pane. Double-click into the % opacity value for the layer and change the number to whatever level of opacity is desired, and press Enter to accept the new value.
For example, we can double-click into the opacity value for the buildings layer and change it from 100% to 50%.
The result is that the buildings layer becomes about 50% opaque, that is, it blocks about 50% of the visibility of layers below, allowing the image layer to be partially visible through the area objects that represent buildings.
100% opacity (default) means the contents of the layer are opaque. 0% opacity means completely transparent: the contents of the layer will be invisible, because they are totally transparent. When entering opacity numbers, there is no need to enter the % percent character: simply enter a number from 0 to 100 and press ENTER.
Layer opacity will be combined with whatever opacity is already defined within the component. For example, RGBA images may have per-pixel transparency enabled via the A alpha channel. Using layer opacity with such an image layer will apply the layer opacity throughout the entire RGBA image in addition to any per-pixel transparency.
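As a rough sketch of that combination (my own illustration, not the product's internal rendering code), the effective alpha of a pixel is the product of its own alpha and the layer opacity:

```python
def effective_alpha(pixel_alpha: float, layer_opacity: float) -> float:
    """Combine per-pixel alpha (0.0 to 1.0) with layer opacity (0 to 100 percent)."""
    return pixel_alpha * (layer_opacity / 100.0)

# A pixel that is already half transparent, shown in a layer at 50% opacity,
# ends up only 25% opaque.
print(effective_alpha(0.5, 50))  # 0.25
```

So per-pixel transparency is never overridden by layer opacity; the two effects multiply.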
Changing the opacity of a layer does not make any changes to the component involved. It only changes how that component is displayed in the layer stack in that particular window. If we have an RGBA image participating as a layer in a window, when we change the opacity for that layer there is no change to the alpha channel values for each pixel, and there will be no change in the opacity value of layers in other windows if that image also participates as a layer in other windows.
Layer opacity works with any layer that may be displayed in a window, except the North Arrow, Scale Bar, Legend, and Grid virtual layers that show a north arrow, a scale bar, a legend, or a grid or latitude and longitude reticule over the window. The opacity of virtual layers cannot be varied.
Labels layers, for example, may also be made partially transparent in windows as shown above.
In the Layers pane we have set opacity for the Labels layer to 50%.
Layer opacity works just as transparency does when stacking layers of transparent plastic film. The results are visually combined and are not simply arithmetically additive.
Suppose we have a map with three drawing layers, each of which contains a blue square. The screen shot above shows the window. Two of the layers have layer opacity set to 50% and in these two layers the blue squares have been positioned to overlap each other. The third layer has 100% opacity. The Background has been turned off so we see the checkerboard pattern indicated when there is no background color.
As expected, the square in the layer with 100% opacity appears completely opaque. The checkerboard background is not visible through it at all. The squares in the partially opaque layers have been rendered partially transparent so that the checkerboard background may be seen through them. Where the squares overlap the background is less visible because it is seen through two layers of partially transparent "material."
If we turn on the default, white background in the Layers pane we can see how the objects appear against a white background. This illustrates an effect that is occasionally confusing: the color of the region where both squares overlap is lighter (because of opacity) than the completely opaque square. We may be surprised by that if we think that if one square has 50% opacity and the other square has 50% opacity then a sight line through both squares should be 100% opaque.
In reality, opacity is more a multiplicative effect than an additive one. When we see through one square that is 50% transparent, the sight line results in a combined color that is 50% from the square and 50% from the background. If we now place another 50% filter of the same color into that sight line, the result is a color that is 75% the color of the squares and 25% the color of the background. Thus, an overlap of two filters, each of which is 50% opaque, has the same effect as one filter with 75% opacity, and is not the same as 100% opacity.
When the Save as Image dialog is used to create an image from a map, the saved image will be a four channel image that includes an alpha channel to correctly display all opacity effects resulting from the combination of various layer opacity settings in the map.
Layer opacity is a property of a Map layer and not of the drawing or other component that is shown in that layer. The same drawing, for example, can appear as a layer in more than one map and in each map the layer containing the drawing can have a different opacity setting. The opacity used within each map for that layer appears in the mfd_meta table as a Property of that map.
Changing the opacity of a layer does not make any change to the component shown in that layer. Layer opacity simply changes how the window shows the component in that layer.
Transparency and opacity are two terms that mean the same concept viewed from different directions. When something is completely opaque it is not at all transparent. When something is perfectly transparent it may be said to have zero percent opacity.
Which word is used depends on the discussion. When imagining layers stacked up above each other like transparent sheets it might be more natural to use the word transparency. When discussing a specific percentage of light transmission to be applied in a dialog most applications use the word opacity.
The convention in the graphics arts editing software industry is to adjust layer opacity as a number from 0% to 100% opacity, so that an image with 100% will be fully opaque and will not allow any view of an image underneath it. Manifold follows this convention.
The problem is that the DirectX binaries on Windows have become the most wart-ridden messes of unexplained, undocumented behavior; implementing the API intuitively means almost nothing works.
Originally Posted by werfu
Microsoft had the habit (and still does) of patching Windows to fix bugs in games. Device vendors do this a lot too, it is why many AAA titles require day-one gpu drivers from AMD / Nvidia.
Wine is so huge, in part, because it has to find and implement all the bugs in DirectX beyond just the functionality. And even a DX9 state tracker would have to do the same, because DirectX isn't an API, it is an implementation; and as a developer, conforming to implementations makes me want to perform ritual sacrifice of goats.
No reason I can see that it wouldn't, so long as it's using an up-to-date Mesa stack. As this state tracker looks to be Mesa 10 I would imagine that you would get everything you get from Mesa 10 (including all radeonsi improvements) plus the D3D9 st.
Originally Posted by ChrisXY
Are you trying to make a point, or promote NVIDIA's proprietary driver? I fail to see any reason for the latter in this thread...
Originally Posted by pinguinpc
My only problem with fglrx and Wine was Guild Wars 2 and osu!, and both seem to work great for me now. On radeonsi, both games also worked, but were slightly slower. Can't say I care how they perform on NVIDIA since I don't own such hardware (and won't for political reasons). But giving the impression Wine is next to useless on AMD in comparison to NVIDIA just isn't true.
LOL. Yeah rite... So Wine has to do all these things? Maybe they need to hire more people for their marketing division based on this patch? Maybe they need more salesmen? Gosh, the burden...
Originally Posted by Gusar
I can't take you seriously. You see, unlike you, i am a pro programmer... And i know lies when i see them...
They don't have to. Anyway, it is not like they do stellar QA for the rest of Wine...
Originally Posted by zxy_thf
But they don't have to enable it by default. They can just accept it, provide a compile-time option, and let the community maintain it. But of course, that can't happen on an "open source" project, right?
So let me get this straight: If i have not given money to the developers, i am not allowed to review their product and criticize their decisions? Especially an "opensource" product? SERIOUSLY? I don't know if you are a developer, but you don't seem too bright to me...
Originally Posted by justmy2cents
I never said i don't like Wine in general. I don't like the attitude of Crossover employees... That is my right, and i can express it since we are not in N. Korea...
Their decision is purely based on the fact that most of their money comes from Mac users. I am willing to bet Linux users don't usually pay for Crossover... That is the reason they don't want Linux-specific enhancements. There is no technical reason; this patch is so simple. If it was complicated, I might have accepted their argument. But it is so simple...
PS: They won't get that much "bug noise" from supporting the d3d9 state tracker. They don't have to enable it in Crossover. They can let it as an unsupported Linux compile time feature...
criticize? yea, you can. that is your basic right. same as developers have the right of choice... or don't they? you damn sure fight for your rights, but deny basic right to others. right of choice is most basic part of OSS. but, what you're doing is not criticizing. example, "wine developers LIED...", i mean how can you lie that you don't like something and don't plan on supporting it? it was a CHOICE, not a CLAIM and AFAIK choice cannot be lie. same for your comment, is not criticism, it is a claim
Originally Posted by TemplarGR
oss project rejecting inclusion? never happens... or does it? look at kernel. they will flat out reject inclusion if it doesn't fit in their plan or structure, their agreed writing style...
in OSS both developer and user are right. but
- developer has the right to decide on not accepting some solution into their project or take another direction
- user has the right to say screw them, fork and prove them wrong
not much noise? let's see http://appdb.winehq.org/objectManage...tion&iId=13667
right now it is pretty decent and clean how versions work. imagine all this duplicated since it might run for someone with tracker and not for someone without. why would they need to go trough that if they don't want it? and this never happens, i guess http://www.phoronix.com/scan.php?pag...tem&px=MTMyNDU
even if you leave it outside of crossover, people usually simply write "wine appname" in google. now, go figure what articles they will see. some random templargr bragging it works at x fps. imagine the surprise when they install it just to realize it doesn't work. what will they do first... flood the wine support
keeping side project outside wine, where you simply take each tarball and release with patches would be painless, not require any work (your words, not mine). that's how distros do binary blobs for example
You're not "criticizing their decisions". You're making demands, you're throwing out accusations, hefty ones at that (they're "lying", seriously?), you're claiming they have an agenda, you're also throwing out insults and cheap shots at other forum participants, and you're doing it all with a really stinky attitude. All the while proclaiming how trivial supporting the tracker would be.
Originally Posted by TemplarGR
You're a "pro programmer", *show* me how trivial it would be! Why do you refuse? With one supposedly easy gesture you could shoot down my arguments completely. Why do you not take that chance? Well, the answer is obvious.
Lol, you do realise the main users of Wine are Linux users, right?
Originally Posted by werfu
And if adding something that "only the minority of Linux users will be able to use" is bad, then why did the Wine devs add that OSX specific X11 replacement some time ago?
Face it, it's all ego issue. Someone developed something such that the whole DX-GL translation layer can now be obsoleted, but the old Wine devs, who think more about their legacy than the actual benefit of users, want to keep this layer and reject any alternatives, even if they are clearly superior.
Privacy risks and threats arise and surface even in seemingly innocuous mechanisms. We have seen it before, and we will see it again.
Recently, I participated in a study assessing the risks of the W3C Battery Status API. The mechanism allows a web site to read the battery level of a device (smartphone, laptop, etc.). One positive use case may be, for example, stopping the execution of intensive operations if the battery is running low.
Our privacy analysis of Battery Status API revealed interesting results.
Privacy analysis of Battery API
Battery readouts provide the following information:
- the current level of battery (format: 0.00-1.0, for empty and full, respectively)
- time to a full discharge of battery (in seconds)
- time to a full charge of battery, if connected to a charger (in seconds)
Those values are updated whenever a new value is supplied by the operating system.
What might be the issues here?
Frequency of changes
The frequency of changes in the readouts reported by the Battery Status API potentially allowed the monitoring of users' computer use habits; for example, it potentially enabled analysis of how frequently the user's device is under heavy use. This could lead to behavioral analysis.
Additionally, identical computer deployments in standard environments (e.g. at schools, work offices, etc.) are often behind a NAT. In simple terms, the NAT mechanism allows a number of users to browse the Internet with an externally seen single IP address. The ability to observe differences between otherwise identical computer installations potentially makes it possible to see (and target?) particular users.
Battery readouts as identifiers
The information provided by the Battery Status API does not always change fast. In other words, the values are static for a period of time, which may give rise to a short-lived identifier. At the same time, users sometimes clear standard web identifiers (such as cookies). But a web script could analyze the identifiers provided by the Battery Status API, which could then possibly even lead to the recreation of other identifiers. A simple sketch follows.
An example web script continuously monitors the status of identifiers and the information obtained from the Battery API. At some point, the user clears (e.g.) all the identifying cookies. The monitoring web script suddenly sees a new user with no cookie, so it sets new ones. But battery level analysis could provide hints that this new user is, in fact, not a new user, but the previously known one. The script's operator could then conclude that this is a single user, and resume tracking. This is an example scenario of identifier recreation, also known as respawning.
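The respawning scenario above can be sketched roughly as follows. This is a toy illustration of the idea; the field names, data structures, and tolerance are invented for the example, not taken from any real tracking script:

```python
def match_returning_user(new_reading, known_sessions, tolerance=0.01):
    """Guess whether a cookie-less visitor matches a recently seen session
    by comparing battery level and discharge-time readouts."""
    for session in known_sessions:
        if (abs(session["level"] - new_reading["level"]) <= tolerance
                and session["discharging_time"] == new_reading["discharging_time"]):
            # Battery state matches a known session: candidate for respawning.
            return session["old_cookie"]
    return None  # looks like a genuinely new user

sessions = [{"old_cookie": "abc123", "level": 0.43, "discharging_time": 8640}]
print(match_returning_user({"level": 0.43, "discharging_time": 8640}, sessions))  # abc123
```

The short-lived stability of the readouts is exactly what makes such matching plausible within a tracking window of a few minutes.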
Recovering battery capacity
This was surprising! It turned out that in some circumstances it was possible to approximate (recover) the actual battery capacity in raw form, in particular on Firefox under Linux. We made a test script which exploited the overly verbose readout (e.g. 0.932874 instead of a sufficient 0.93) of the battery level and combined this information with the knowledge of how the battery level is computed by the operating system before it is provided to the web browser. It turned out that it was possible to recover even the battery capacity and use it as an identifier. For more information, please refer to the paper report.
The study achieved an impact.
- a W3C standard is updated to reflect the privacy analysis
- Firefox browser shipped a fix
- the work received some
Trackers use of battery information
Expected or not, battery readouts are actually being used by tracking scripts, as reported in a recent study. Some tracking/analysis scripts (example here) access and recover this information.
Additionally, some companies may be analyzing the possibility of monetizing access to battery levels. When the battery is running low, people might be prone to decisions they would otherwise not make. In such circumstances, users may agree to pay more for a service.
As a response, some browser vendors are considering restricting (or removing) access to battery readout mechanisms.
Even the most unlikely mechanisms can bring unexpected consequences from a privacy point of view. That's why it is necessary to analyze new features, standards, designs, architectures and products from a privacy angle. This careful process yields results, decreasing the number of issues, abuses and unwelcome surprises.
|
OPCFW_CODE
|
using System;
using System.Collections.Generic;
namespace MCCoinLib.RandomEngines
{
public class Random2DEngineService
{
private Dictionary<SamplingMethod, IRandom2DEngine> _dicEngines = new Dictionary<SamplingMethod, IRandom2DEngine>();
public Random2DEngineService()
{
_dicEngines.Add(SamplingMethod.Halton, HaltonSequence2DEngine.Create());
_dicEngines.Add(SamplingMethod.RandomUniform, RandomSequence2DEngine.Create(seedX: (int)DateTime.Now.Ticks, seedY:(int)DateTime.Now.Ticks/2));
_dicEngines.Add(SamplingMethod.RandomUniformWithFixedSeeds, RandomSequence2DEngine.Create());
}
public IRandom2DEngine GetEngine(SamplingMethod method)
{
return _dicEngines.TryGetValue(method, out IRandom2DEngine engine) ? engine : null;
}
}
}
|
STACK_EDU
|
Novel–Young Master Damien’s Pet–Young Master Damien’s Pet
649 To Have You Next To Me - Part 2
The Lord nodded his head. "I have asked one of my men to find the record of the people who came to take the test. He should get it out of the office in a few hours. That way we can go through the names and trace back where and how they came to the examination."
He knocked on the door, the metal knocker hitting hard against the wood before he let go of it. In time, he heard footsteps arrive from the other side of the door, which was opened by the butler.
If there was one thing a person could rely on when it came to Lord Nicholas, it was the man's knack for looking into troublesome matters. The last time they had met was before councilman Creed's death, and it made him smile. Damien didn't have to ask to confirm that it was Nicholas who had killed the corrupt councilman.
When he had sent Penny to take the test, he had known she would do well; she was one of the fated stars bound to him through the soul. Perhaps it was that confidence which had let Damien allow her to participate in the examination, and seeing her alive set his heart at ease.
"There were only two of them still alive in the whole forest. Lady Penelope is there, along with another whom I believe to be her companion. She has three broken ribs, a sprained hand, and the muscle in her leg has been torn quite badly. I thought it would be right to call you here before the rest of the council decided to snoop around," said the man as they walked down the corridor, leaving the girl to sleep in her room.
nero my existence is perfect
Damien then asked, "Did Reuben speak with you about this?"
Nicholas could only guess that if Damien knew about it, he had already dealt with the matter at hand. Answering the question that had not been asked, Damien stated, "The parchment was burnt recently."
"'Course he was," Damien muttered under his breath. "Where is the girl? The one with the blonde hair." His meeting with Lord Nicholas could wait, as he had other things to attend to; the one and only important thing was to find out whether Penny was in a good state.
Damien gave a curt nod and, getting up from the bed, walked out of the room.
Nicholas nodded his head. "He did. But I wasn't aware of the participants who were taking part in the test. It was quite a surprise when I reached the site." Receiving a questioning look from Damien, the man continued, "One of the local men was repairing the tower bell when he caught sight of fire in the forest. He went to the magistrate right away, and when the magistrate arrived he found the scene and came looking for me. Did you know there was a ritual there?" Hearing this, Damien's eyes narrowed sharply.
"We dispatched a couple of witches so that we could catch one of the black witches who were involved." At least, that was why they had been sent: so they could get some answers from the black witches. But now Damien didn't care about being without a black witch in hand and was instead glad to have Penelope safe.
"I thought I might find you here." It was Lord Nicholas himself who had appeared at the door. "Let her sleep. She must be exhausted from the practical exam." He could tell that Lord Nicholas wanted to speak with him away from Penny, so that she could get the rest she needed.
The old butler paused his footsteps and then changed the direction of his walk, taking Damien to the room where Penny was fast asleep.
The man would come to know at some point in time, and since they were on the same side, he clarified, "She is a white witch." Lord Nicholas raised his brows. "What happened to the other contenders? There were forty of them." Damien furrowed his brows as he heard the Lord answer,
Damien noticed that she was wearing a different set of clothes, not the ones they had had tailored for her. Walking to the bed, he took a chair beside it to watch her sleeping. Brushing her hair gently, he heard a voice from the door,
"We will have to wait for them to wake up," Damien said, as it was Penelope and the other witch who knew what had gone on down there. For now, they needed rest.
|
OPCFW_CODE
|
Posted: Mon Jan 09, 2006 11:32 am Post subject: Reinstalling Windows 2000 on Fujitsu Stylus 3500
I own a Fujitsu/Siemens Stylus 3500 with a 15GB hard drive. I bought the device with preinstalled Windows 2000. A few weeks ago I had some severe problems with the Windows installation (several system files were missing while booting the device) and I decided to format the hard drive and reinstall Windows. Well, and that's my basic question: how the heck can I reinstall Windows on that beauty?
The main problem is that I can't get the thing to boot from a floppy drive or CD-ROM. USB-attached devices are not recognized as boot devices, and neither is a floppy drive connected to the floppy port of the tablet PC itself.
After these first tries I pulled out the hard drive, installed it in my laptop (which of course has a built-in CD-ROM), booted from the Windows CD and installed Windows. Everything went fine until I reinstalled the hard drive in the tablet PC -> Windows crashed in the middle of the boot process. Seems like it's a problem when you install Windows on a different machine.
So, can anyone give me some advice on how to get Windows onto that thing?
Windows installs are tied to the specific hardware, i.e. you can't just switch the HDD into a different machine. Certain hardware, the motherboard for sure, isn't "plug and play" and will require a fresh reformat.
I would recommend trying to get your hands on a bootable USB drive. Try checking some other sites that may cover that, or if you're like me, go to Best Buy, buy one, use it and then tell them it [b]wasn't[/b] bootable and return it =P Of course I got lucky: the first one I tried was bootable, and the person working there didn't know what was going on, so they just took it back easily. _________________ [b]James Volodymyr R.[/b]
[url=http://www.GoDivine.net]Divine Business Solutions[/url]
IC XC NIKA
Although I move OSes to other machines several times a year, I don't think it's necessarily doable in this case. The trick is to boot the setup CD and run an in-place repair install after the move. It's probably best to do a fresh install anyway; I usually just can't be bothered.
|
OPCFW_CODE
|