I only have one minute…
Here’s a quick video.
I like it…
Cool. Magic Number is available on the Mac App Store for $9.99.
It runs beautifully on OS X, including the latest El Capitan.
You can take the usual route to see the full overview.
Or we can look at it differently.
If Magic Number is a drill, we will look at ways to make better holes.
But we are lazy. We will show you the lazy ways to do math.
Don’t take the calculator literally.
The keypad is really for helping you to get familiar with the app, and in particular, the shortcut.
If you are a keyboard lover, you can do everything without touching the mouse.
Magic Number is not a physical calculator.
You can have it any size.
And feel free to go buttonless.
Editing at its simplest
You can click and type.
Or use the ◀ and ▶ keys.
Of course you can go further.
You can’t go fast with square wheels.
The QWERTY keyboard is never meant for math.
Math symbols are second class citizens.
To multiply, you have to hunt for *.
To add, you have to press Shift with = just to get +.
Of course there is a better way: you can cure this in seconds.
If you switch between a calculator and another app a lot, treat yourself to the global shortcut ⌃Space. You can use it in any app.
Press it to show Magic Number. And when you’re done, press again to hide it.
Do more for less while switching
Sometimes you want to calculate, copy the result, and switch back.
You can let Magic Number do the copying for you.
R is for result.
For safety, the result of your most recent calculation is automatically saved. You can access it with a click or by pressing R.
It’s convenient for calculation too.
Let’s say I’m calculating my total journey time:
My distance is 15 km. To calculate the speed, I can simply enter
This means you have fewer things to worry about:
- No need to copy the result beforehand.
- No need to discipline yourself to store 1:15 as a variable.
- No need to use the prehistoric M+, MR, etc.
List – A mini spreadsheet
Get the sum, tax, and statistics for a list of numbers.
Click the below image for a quick demo.
Here are some prices for the Apple Watch.
Let’s compare the 38mm models.
Difference (549 – 349)
% increase (349 ➝ 549)
(Conclusion: My brain thinks the Sport is better. But my heart disagrees.)
History – Exploit the past
You can double-click to insert your past calculation —
whether it’s for correction or for doing a similar calculation.
Select something to see the sum, average, or other summary.
So instead of typing one long expression, e.g. ‘(349 + 649) + (80 – 25%)’,
you can combine shorter ones.
Embrace the unknown
Sometimes you are too tired to think.
Too tired to do the elementary algebra in your head.
Something ÷ 17 = 19
Something is the unknown. Similar to ‘x’ in elementary algebra.
÷ 17 = 19
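Behind the scenes this is just elementary algebra. Here is a tiny Python sketch of what solving "something ÷ 17 = 19" amounts to (my own illustration, not the app's code):

```python
# Solve: something / 17 = 19
# Rearranging (multiply both sides by 17) gives the unknown directly.
something = 19 * 17
print(something)  # 323

# Sanity check: plugging it back in satisfies the original equation.
assert something / 17 == 19
```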
Here is an example involving %:
Click here to learn what Magic Number can solve for you.
Compound interest — The lazy way
Let’s start with simple interest.
If we borrow $ 2000 with 10% interest, we will pay back:
For convenience, we type:
If the interest is compounded yearly for 3 years, our payment is:
We can simply type:
You can read it as ‘2000 with 10% compounded 3 times’.
Here’s the answer.
Let’s play a bit of what-if’s.
What if we want 3000 instead of 2662? What rate would that be?
The best rate we can get is 12%. How many years does it take to reach 3000?
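Under the hood this is ordinary compound interest arithmetic. A rough Python sketch of the same calculations (the variable names are mine, not the app's):

```python
import math

principal = 2000
rate = 0.10      # 10% interest
years = 3

# Simple interest: one interest payment on the principal.
simple = principal * (1 + rate)              # 2200.0

# Compound interest: '2000 with 10% compounded 3 times'.
compound = principal * (1 + rate) ** years   # 2662.0

# What-if: what rate turns 2000 into 3000 over 3 years?
required_rate = (3000 / principal) ** (1 / years) - 1   # about 14.5%

# What-if: at the best available rate of 12%, how long to reach 3000?
years_needed = math.log(3000 / principal) / math.log(1.12)  # about 3.6 years
```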
I hope you find this easier than the normal math or the spreadsheet’s future value function. You can learn more about compounding here.
Need to know more?
The home page gives you a full overview of features.
If you have any questions, feel free to tweet me.
Oh, my name is Dennis Liu and I am the developer of Magic Number.
Have a nice day.
--- [Source: OPCFW_CODE] ---
iPhone APIs for locking the screen and more
I am brand new to developing for the iPhone. It looks like I need to create an app for my kids' iPhones. If you are familiar with MMGuardian, that's what I am after. I want to create an application that, based on trigger times, will display a lock screen and not allow my children access during that time frame. I can't seem to find anything like that on the market, so I figured I might take a stab at writing one myself. My intent is to lock the phone during school hours. I can't take it away, as they need it after school in case I am running late or they have an issue or anything like that.
That being said, are there APIs that I can leverage to create a global application which will provide (or reuse) a lock screen? Further, MMGuardian has a nice lock and unlock feature based off of SMS text messages.
Any and all guidance would be much appreciated!
Thanks in advance...
Apple APIs don't usually give users controls like that.
Looks like me asking a question on how to do something is frowned upon. Not sure why; Stack Overflow has been invaluable in helping me in the past, either by responding to my questions or, more often, through searching for answers and finding other users with similar questions. My guess is there may be someone else out there with similar questions. Anyway, my apologies for offending Wain, Abizern, Bass Jobsen, mhwombat, and Abbas; certainly that was not my intent. To those who actually offered some feedback, thank you very much! While the answer isn't what I wanted to hear, I am extremely grateful!
Android fully opens its API for just about any task. Apple wants to protect its users from any malware; to them there is no justification for being able to lock out an iPhone. Imagine developing this: one bad move and you lock yourself out of your iPhone. Even if that weren't the case, functions that are built into iOS are usually prohibited. As to why the question was closed, I am guessing it's because it's a "will Apple let me" question, which can't be answered objectively.
This is not possible with the iOS SDK. Consider using parental controls in iPhone settings.
Thank you for your feedback. I did look into those, but unless I missed something obvious those controls will not meet my needs. But thank you once more!
What you're looking for is Mobile Device Management, or MDM. Search Google for iOS MDM, and you'll find more information about it. I don't think there are any solutions out there right now that are geared towards parental control, it's more for corporations managing their devices in use by employees. But I think that ones geared toward parents are being worked on, which would allow you to remotely manage and monitor different aspects of the phone.
The iOS SDK doesn't contain anything that would allow you to implement something like this on your own.
--- [Source: STACK_EXCHANGE] ---
Oracle: How can I get a value 'TRUE' or 'FALSE' comparing two NUMBERS in a query?
I want to compare two numbers. Let's take i.e. 1 and 2.
I've tried to write the following query but it simply doesn't work as expected (Toad says: ORA-00923: FROM keyword not found where expected):
SELECT 1 > 2 from dual
DECODE is something like a switch-case, so how can I get the result of an expression evaluation (i.e. a number comparison) into the select list?
I have found a solution using a function instead of an expression in the SELECT list, i.e.
select DECODE(SIGN(actual - target)
, -1, 'NO Bonus for you'
, 0,'Just made it'
, 1, 'Congrats, you are a winner')
from some_table
Is there a more elegant way?
Also how do I compare two dates?
I've found a solution using FUNCTIONS instead of an EXPRESSION in the SELECT list, i.e. DECODE(SIGN(actual-target), -1, 'NO Bonus for you', 0, 'Just made it', 1, 'Congrats, you are a winner').
Is there a more elegant way?
And how do I compare two dates?
Please edit your question to include relevant info, instead of leaving a comment.
Possible duplicate of Oracle: comparison between integer in select list
The SIGN() function is indeed probably the best way of classifying (in)equality that may be of interest to you if you want to test a > b, a = b and a < b, and it will accept date-date or numeric-numeric as an argument.
I'd use a CASE statement by preference, rather than a DECODE.

SELECT
  CASE SIGN(actual - target)
    WHEN -1 THEN ...
    WHEN 0 THEN ...
    WHEN 1 THEN ...
  END
Using SIGN for this was necessary when all we had was DECODE, but with CASE, I think it just makes the code less clear. You can simply write out the comparison: CASE WHEN a>b THEN ... WHEN a=b THEN ... WHEN a<b THEN ... END.
Possibly so, in particular if the comparison is very brief (a > b) ... I should think that SIGN() would then be more useful if you had some awfully long expression to work with that you don't want to repeat (for example, to work out whether something was before, on, or after the first day of the fiscal year, which can be a bit wordy) or some expensive function to call to evaluate the expression.
There is no boolean type in SQL (at least in Oracle).
You can use CASE:
SELECT CASE when 1 > 2 THEN 1 ELSE 0 END FROM dual
But your solution (decode) is also good, read here
The SQL standard does define a boolean datatype, but - as you correctly mentioned - Oracle SQL does not support it (unlike other DBMS). Inside a PL/SQL procedure you can define boolean variables though.
You can compare two dates with SQL.
METHOD (1):
SELECT TO_DATE('01/01/2012') - TO_DATE('01/01/2012')
FROM dual -- gives zero
METHOD (2):
SELECT CASE
when MONTHS_BETWEEN('01/01/2012','01/01/2010') > 0
THEN 'FIRST IS GREATER'
ELSE 'SECOND IS GREATER OR EQUAL' END
FROM dual
Sorry, I can't format the code; the formatting toolbar disappeared!
Does anyone know why?
SELECT (CASE
WHEN (SIGN(actual - target) > 0 ) THEN
'NO Bonus for you'
ELSE
'Just made it' END)
FROM dual
--- [Source: STACK_EXCHANGE] ---
Understanding the polynomial maps between two affine varieties.
I am reading Fulton's book on algebraic curves. I am currently in Chapter 2. After defining coordinate rings, they define polynomial maps.

If $V\subset \mathbb A^n$ and $W\subset \mathbb A^m$ are varieties, then $\varphi:V\to W$ is called a polynomial map if there exist $T_1,T_2,\dots,T_m\in K[X_1,X_2,\dots,X_n]$ such that $\varphi(a_1,a_2,\dots,a_n)=(T_1(a_1,\dots,a_n),\dots,T_m(a_1,\dots,a_n))$ for all $(a_1,a_2,\dots,a_n)\in V$.

Now, they say that if $f$ is a polynomial function from $W\to K$ and $\varphi:V\to W$ is a polynomial map, then $f\circ \varphi$ is a polynomial function from $V\to K$. So this gives rise to a ring homomorphism $\tilde\varphi:\Gamma(W)\to \Gamma(V)$. I want to understand the role of a polynomial map in the study of algebraic curves and what I should appreciate about these maps. What are the key points/facts about polynomial maps that I need to digest in order to proceed further? Can someone provide me some motivation, as I do not have much exposure to this topic? As far as I understand, polynomial maps are the correct maps/morphisms between two affine varieties.
The proposition in section 2.2 on polynomial maps actually says that polynomial maps are precisely those that correspond to $K$-algebra homomorphisms $\Gamma(W) \to \Gamma(V)$. Much of chapter 2 is about setting up a dictionary between finitely generated reduced $K$-algebras and affine varieties, and the definition of polynomial maps (somewhat tautologically) is the analog of algebra homomorphisms.
The fact that this correspondence is also compatible with composition is the first exercise, so I suggest looking at the other exercises for some motivating properties and examples.
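For concreteness, here is a standard example of the correspondence (not taken from the book's text). Take $V = \mathbb A^1$ and let $W = V(Y - X^2) \subset \mathbb A^2$ be the parabola. Then
$$\varphi : \mathbb A^1 \to W, \qquad t \mapsto (t, t^2)$$
is a polynomial map, and the induced homomorphism is
$$\tilde\varphi : \Gamma(W) = K[X,Y]/(Y - X^2) \to \Gamma(\mathbb A^1) = K[T], \qquad \bar X \mapsto T, \quad \bar Y \mapsto T^2.$$
Here $\tilde\varphi$ is an isomorphism of $K$-algebras, which reflects the fact that $\varphi$ is an isomorphism of varieties: the parabola is "the same" affine variety as the line, even though they sit inside different ambient spaces.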
What do you mean by finitely generated 'reduced' $K$-algebra?
A ring or algebra is reduced if the only nilpotent element is $0$. The relevant algebras $\Gamma(V)$ have this property since if $f^n(x) = 0$ then $f(x) = 0$, but eg $K[X]/(X^2)$ does not, so that cannot be the ring of functions of an affine variety.
Also by finitely generated I mean finitely generated as an algebra (and not necessarily as a module/vector space), which is sometimes instead termed of finite type.
--- [Source: STACK_EXCHANGE] ---
The Achilles’ heel of modern web applications
Modularity is crucial for rapid software development, so it is no surprise that today’s web applications can contain as little as 10% of custom code. The rest is scaffolding: frameworks, libraries, and external components that do the heavy lifting so application developers can focus on business logic. This is especially important in agile development, where work is divided into smaller parts that need to be completed in a matter of weeks.
The shift from monolithic to modular applications has also affected application security. The growing awareness that security needs to be an integral part of the software development lifecycle (SDLC) means that custom application code usually goes through some sort of security testing to catch at least the most obvious security flaws. But since custom code is only one part of the total attack surface, attackers are increasingly turning their attention to weaknesses in the underlying technologies. This is where the reliance on convenient ready-made components becomes a liability.
Today’s web applications, including commercial products, combine custom code and open-source components, with the latter commonly making up from 70% to 90% of the codebase. The problem is that as many as 79% of applications use libraries that are never updated after being included in the code, not even for critical security updates. Many of these components are used in thousands of applications worldwide, making it easy for attackers to find installations that use known vulnerable libraries. This is the web application aspect of supply chain vulnerabilities that were called out in Joe Biden’s recent presidential order as a priority area for cybersecurity.
Finding the many ways you can be vulnerable
Dynamic application security testing (DAST) solutions probe a running application for weaknesses. To accurately mimic the actions of real-life attackers, advanced products such as Invicti don’t stop at scanning your application for vulnerabilities but also detect its underlying technologies. Apart from using a cutting-edge heuristic scanning engine to detect previously unknown vulnerabilities, Invicti also has its own vulnerability database for reporting out-of-date libraries, frameworks, and other application components.
The vulnerability database (VDB) brings together all of Invicti’s security checks and serves as a repository of known web technologies along with their versions and security status. The VDB is carefully curated by Invicti’s security researchers and periodically updated based on multiple sources of vulnerability intelligence to ensure that the latest web application CVEs and other known issues are covered. VDB updates are automatically reflected in the Invicti user interface and available for use in your scan policies.
3 steps to technology identification
Identifying web application technologies and components is not a trivial task, so Invicti uses a variety of techniques to extract as much accurate intelligence as it can. Some products explicitly advertise their presence in response headers, so header analysis is the first and easiest step. For popular web applications such as WordPress, the product can also be reliably identified based on known headers, cookies, and directory structures. Invicti gathers all this information during the crawling phase. The scanner also runs more advanced pattern recognition on responses received during crawling to identify web servers, application servers, frameworks, programming languages, and more.

Having discovered the version, Invicti then queries its vulnerability database to see if a newer version is available and whether the current version has known vulnerabilities. Depending on the result, you will then get a warning about an out-of-date or vulnerable technology.

All the reports and warnings about identified technologies are presented in the Technology section in the main menu for easy management. Here you can see all the technologies used in all your applications. For each one, you see the number of sites that use the technology, the number of identified versions, the version branches, the number of out-of-date versions, and the number of issues found. The overall security status for a technology corresponds to the highest severity vulnerability found across all installed versions.
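As an illustration only (this is not Invicti's actual implementation), the header-analysis step can be sketched in a few lines of Python. The header names are real HTTP conventions, but the parsing is greatly simplified:

```python
import re

def fingerprint_headers(headers: dict) -> dict:
    """Extract naive product/version hints from HTTP response headers."""
    findings = {}
    for name in ("Server", "X-Powered-By", "X-AspNet-Version"):
        value = headers.get(name)
        if not value:
            continue
        # e.g. "Apache/2.4.41 (Ubuntu)" -> ("Apache", "2.4.41")
        m = re.match(r"([A-Za-z\-\.]+)/([\d\.]+)", value)
        if m:
            findings[m.group(1)] = m.group(2)
        else:
            findings[value] = None
    return findings

print(fingerprint_headers({"Server": "Apache/2.4.41 (Ubuntu)",
                           "X-Powered-By": "PHP/7.4.3"}))
# -> {'Apache': '2.4.41', 'PHP': '7.4.3'}
```

A real scanner would cross-reference these hints against a vulnerability database rather than stop here; this sketch only shows why the header step is the cheap part.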
Get the full picture
A quality automated solution for dynamic security testing is a must-have in any modern web development workflow, especially for agile software projects. Invicti’s advanced heuristic vulnerability scanner with Proof-Based Scanning can reliably detect a wide variety of security issues in your entire web application as executed, covering vulnerabilities in both your custom code and external dependencies.
Technology fingerprinting takes this a step further to help you manage your entire web application stack and avoid vulnerabilities introduced by outdated third-party components, even if you are not actively using a specific vulnerable function or module in your applications. You can also use the Technologies page to find abandoned servers and installations that needlessly increase your overall attack surface. Combined, these results give you a complete picture of your web application security posture – all from one reliable solution.
For a detailed description of all the available information, see our support page for the Technologies feature.
--- [Source: OPCFW_CODE] ---
ConfigMap change does not trigger bean refresh
I am doing some training with spring-cloud-kubernetes on Red Hat Minishift. I managed to define a ConfigMap which is correctly found, and the configured message ${service.message} is shown.
However, when I change the value in the ConfigMap, the RestController in the @RefreshScope seems not to be refreshed and the message stays the same. Only a restart of the app updates the message.
According to the documentation I assumed this should work. Did I miss some additional step, or is this not working yet?
@Controller
@RefreshScope
public class RestController {
private final String message;
public RestController(@Value("${service.message}") String message) {
this.message = message;
}
/**
* Say hello to earth
*
* @return First word from the moon
*
*/
@GetMapping("/hello")
@ResponseBody
public String hello() {
return message + " (" + LocalDateTime.now().toString() + ")";
}
}
The ConfigMap
apiVersion: v1
data:
application.yml: |-
service:
message: Say hello to the world
kind: ConfigMap
metadata:
creationTimestamp: '2019-06-02T17:35:24Z'
name: openshift-spring-boot-service
namespace: dsp-dev
resourceVersion: '67065'
selfLink: /api/v1/namespaces/dsp-dev/configmaps/openshift-spring-boot-service
uid: cd7a8217-855c-11e9-8dc5-00155d010b07
I would suggest you take a look at this integration test. My guess is that you probably haven't enabled restart.
Thanks for the hint. I activated the config reload like this:
spring:
cloud:
kubernetes:
reload:
enabled: true
But then I received the following error on startup:
Description:
Parameter 2 of method configurationUpdateStrategy in org.springframework.cloud.kubernetes.config.reload.ConfigReloadAutoConfiguration$ConfigReloadAutoConfigurationBeans required a bean of type 'org.springframework.cloud.context.restart.RestartEndpoint' that could not be found.
The following candidates were found but could not be injected:
- Bean method 'restartEndpoint' in 'RestartEndpointWithIntegrationConfiguration' not loaded because @ConditionalOnClass did not find required class 'org.springframework.integration.monitor.IntegrationMBeanExporter'
- Bean method 'restartEndpointWithoutIntegration' in 'RestartEndpointWithoutIntegrationConfiguration' not loaded because @ConditionalOnEnabledEndpoint no property management.endpoint.restart.enabled found so using endpoint default
Action:
Consider revisiting the entries above or defining a bean of type 'org.springframework.cloud.context.restart.RestartEndpoint' in your configuration.
Then I decided to activate refresh/restart and to expose all actuators as shown below. This way it works. However, it feels a bit strange to expose the restart endpoint even though I left the value spring.cloud.kubernetes.reload.mode at its default (event).
management:
endpoint:
refresh:
enabled: true
restart:
enabled: true
endpoints:
web:
exposure:
include: "*"
@spencergibb @ryanjbaxter is this consistent with the rest of Spring Cloud?
No it is not consistent. Why would the restart endpoint need to be enabled when we are using refresh by default?
This is done as part of #440, merged into 1.0.x. I would close this one.
--- [Source: GITHUB_ARCHIVE] ---
I don't understand why Sun decided to make this so difficult. They could have just provided the full scalable Japanese fonts for free. But you don't need them anyway! There are two Japanese (well, kanji), non-scalable fonts provided with EVERY VERSION OF SOLARIS 2.x, and indeed most modern Unixen.
Anyway, here's how to get java to use the kanji fonts.
Directions for Java 1.2 and 1.3 are lower down.
Then add entries for serif, sansserif, monospaced, but most importantly, dialog. You should be able to just add the following lines at the top:
# Japanese font, presumably jis, here. This is a "lets grab something" game
# The "3" should actually be changed to whatever does not conflict
# with what is already present in the file
dialog.plain.3=*jisx0208.1983-0
# If you want to make sure the above is obnoxiously big, try one of:
#dialog.plain.3=kanji24
#dialog.plain.3=*--24-*-jisx0208.1983-0
#
# And here's the "magic" that makes the above actually work
# make sure the stuff between "fontcharset" and "=" matches up.
fontcharset.dialog.plain.3=CharToByteX11JIS0208
Note1: If this doesn't work, use "CharToByteJIS0208", instead of "CharToByteX11JIS0208"
Note2: It doesn't matter if the other lines say
fontcharset.xx.x=sun.awt.CharToByteXXXX
Just write it like I show above, skipping "sun.awt" and the like.
Repeat as needed, for serif, dialoginput, monospaced, etc. However, jdrill by default only needs the dialog font entries.
[This actually stands for "Solaris 5.7 version", but seems to be the best general version for all UNIXen.]
Copy "font.properties.UTF8.5.7" to "font.properties", in the same directory. I specify UTF8.5.7, vs plain UTF8, because the plain UTF8 file seems to have some uncommented font entries in it.
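Assuming a typical JDK directory layout (the exact path varies by JDK version and vendor, so treat it as an assumption), the copy step looks something like this:

```shell
# Path is an assumption; adjust to wherever your JDK keeps its
# font.properties files (commonly $JAVA_HOME/jre/lib).
cd $JAVA_HOME/jre/lib
cp font.properties.UTF8.5.7 font.properties
```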
Or with jdk1.3 and solaris 8, copy "font.properties.ja", which seems to not have any missing font complaints. If you have ALL the optional fonts installed, you shouldn't get any complaints about "Cannot convert string". But if you are using XFree86, you may have to stick to font.properties.UTF8.5.7
Once you have copied the right font.properties file, you should be able to see Japanese chars (and possibly other interesting ones), provided you have the "optional fonts" installed. This should work without you having to do backflips with your locale. But see the note below. (If, on the other hand, you normally use your computer in a locale that matches one of the other font.properties.XXX extensions, you might want to copy the "UTF8.5.7" file to that file, instead.)
Oddly enough, this does not seem to work with jdk1.3 if your "LC_ALL" variable is set to en_US.ISO8859-15, or is unset altogether. So do

LC_ALL=POSIX
export LC_ALL

and the above method should work for you.
--- [Source: OPCFW_CODE] ---
GaadiKey blog is a one-stop portal for all the latest in the automobile industry. We are a bunch of people who are passionate about bikes and cars. We share experiences, reviews, and other stuff connected to gaadis.
Here is our team.
- Chethan Thimmappa
- Nikhil Thorvat
- Rahul R
- Akhilesh Chetty
- Anand Chaudhuri
- Ganapathi BR
- Gautam Doddamani
And this list keeps growing as we grow…
Website and GaadiKey Network statistics
- July 2014 – GaadiKey.com website launched.
- August 2014 – blog.gaadikey.com launched.
- August 2014 to January 2015 – Development work on GaadiKey Cross-platform mobile application.
Website Traffic Stats:
| Month | Page views |
|---|---|
| February 2015 | 2,500 |
| March 2015 | 4,500 |
| April 2015 | 6,000 |
| May 2015 | 9,000 |
| June 2015 | 16,000 |
| July 2015 | 34,000 |
| August 2015 | 42,000 |
| September 2015 | 46,000 |
| October 2015 | 70,000 |
| November 2015 | 81,000 |
| December 2015 | 80,000 |
| January 2016 | 87,000 |
| February 2016 | 73,000 |
| March 2016 | 88,000 |
| April 2016 | 1,00,090 |
| May 2016 | 1,15,000 |
| June 2016 | 1,20,000 |
| July 2016 | 1,30,000 |
| August 2016 | 1,16,500 |
| September 2016 | 1,19,237 |
| October 2016 | 1,42,072 |
| November 2016 | 1,12,000 |
| December 2016 | 1,52,000 |
| January 2017 | 1,83,000 |
| February 2017 | 1,70,000 |
| March 2017 | 3,36,244 |
--- [Source: OPCFW_CODE] ---
Canny described what has since become one of the most widely used edge finding algorithms. The first step taken is the definition of criteria which an edge detector must satisfy, namely reliability of detection, accuracy of localization, and the requirement of only one response per edge. These criteria are then developed quantitatively into a total error cost function. Variational calculus is applied to this cost function to find an ``optimal'' linear operator for convolution with the image. The optimal filter is shown to be a very close approximation to the first derivative of a Gaussian, i.e., in one dimension, a filter proportional to -x exp(-x^2 / 2 sigma^2).
Non-maximum suppression in a direction perpendicular to the edge is applied, to retain maxima in the image gradient. Finally, weak edges are removed using thresholding. The thresholding is applied with hysteresis. Edge contours are processed as complete units; two thresholds are defined, and if a contour being tracked has gradient magnitude above the higher threshold then it is still ``allowed'' to be marked as an edge at those parts where the strength falls below this threshold, as long as it does not go below the lower value. This reduces streaking in the output edges.
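The stages described above can be sketched in one dimension. This is an illustrative simplification (real implementations work in 2-D and apply non-maximum suppression between filtering and hysteresis), and the threshold values here are arbitrary:

```python
import numpy as np

def deriv_gaussian(sigma, radius=None):
    """First derivative of a Gaussian: the near-optimal Canny filter."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    return -x / sigma**2 * np.exp(-x**2 / (2 * sigma**2))

def hysteresis(mag, low, high):
    """Keep weak responses (> low) only if connected to a strong one (> high)."""
    strong = mag > high
    weak = mag > low
    keep = strong.copy()
    # Propagate 'strong' along contiguous runs of 'weak' samples (1-D).
    for i in range(1, len(mag)):
        if weak[i] and keep[i - 1]:
            keep[i] = True
    for i in range(len(mag) - 2, -1, -1):
        if weak[i] and keep[i + 1]:
            keep[i] = True
    return keep

signal = np.concatenate([np.zeros(20), np.ones(20)])   # an ideal step edge
response = np.convolve(signal, deriv_gaussian(sigma=2.0), mode="same")
edges = hysteresis(np.abs(response), low=0.05, high=0.15)
```

The double-threshold trick is what reduces streaking: a contour that dips below the high threshold survives as long as it stays above the low one and remains connected to a strong response.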
The Gaussian convolution can be performed quickly because it is separable and can be implemented recursively. However, the hysteresis stage slows the overall algorithm down considerably. While the Canny edge finder gives stable results, edge connectivity at junctions is poor, and corners are rounded, as with the LoG filter.
The scale of the Gaussian determines the amount of noise reduction; the larger the Gaussian the larger the smoothing effect. However, as expected, the larger the scale of the Gaussian, the less accurate is the localization of the edge.
Canny also investigated the synthesis of results found at different scales; in some cases the synthesis improved the final output, and in some cases it was no better than direct superposition of the results from different scales.
Finally, Canny investigated the use of ``directional operators''. Here several masks of different orientation are used with the Gaussian scale larger along the direction parallel to the edge than the scale perpendicular to it. This improves both the localization and the reliability of detection of straight edges; the idea does not work well on edges of high curvature.
Deriche uses Canny's criteria to derive a different ``optimal operator''; the difference is that the filter is assumed to have infinite extent. The resulting convolution filter is sharper than the derivative of the Gaussian.
This is also implementable as a recursive filter for speed.
Shen and Castan describe another related linear operator, based on a symmetric exponential smoothing filter of the form exp(-a|x|).
Like the Deriche filter, this is implemented recursively and has an infinite support region. The filter is even sharper than that of Deriche; the argument presented is that the larger the scale of the Gaussian, the more planar the central region, giving rise to an ``unnecessary'' reduction in edge localization. Hence the filter contains a discontinuity at x=0, and information very close to the centre of the filter is given more weighting (and not less, as is usual) than that from slightly further out. However, it has been suggested that the discontinuity can induce multiple edges. The Shen filter is not separable (in two dimensions), so an approximation to the optimal function must be made.
In practice, the first derivative of the Gaussian, and the Deriche and Shen operators all give very similar results when applied to real images.
Monga et al. extend two-dimensional linear filters (in particular, the Shen and Deriche filters) to find edges in three-dimensional data such as nuclear magnetic resonance scans.
--- [Source: OPCFW_CODE] ---
Kelvin "4 Wire" Resistance PCB Design Questions
I am designing a PCB that will utilize 4 wire measurement for 40 or so test points.
A little background here: day to day I troubleshoot PCBs to component level. I'm an electronics technician, not an engineer, and I'm okay with that. 95% of the time I can find an issue with a circuit based on how the DVM measures resistance and its behavior. My boss wants me to automate my process using LabVIEW, so I need to make a PCB that does just 4-wire resistance testing. I have an existing pogo pin block and a PCB that mounts to it; I just need to re-spin the PCB and write the LabVIEW program.
Are there any special PCB design details I need to consider when doing this very basic design?
Please be gentle; this is my first engineering project.
Put your high-impedance sense element as close as possible to the sense connection.
Oh god, for everyone's sanity, don't use LabVIEW. Use a saner language, like Python or something.
To be honest I really wish I could, but that was squashed by my supervisor :(. He spent half of last year's budget to get everyone in the department trained in using it (except me, I wasn't here yet) and now, in his own words, he "wants to see a return on the company's investment". It's an awful programming language.
The nature of the Kelvin measurement is that you have a separate current path and measurement path so that no current (except leakage and bias currents) flow through the measurement conductors.
Thus, the PCB layout (unless the design itself is faulty) is remarkably easy, and I don't expect you'll have any troubles, since series resistance hardly matters.
In the above schematic, the exact value of the resistors R1, R2, R3, and R4 hardly matter, provided they are reasonably low. R4 affects the common-mode voltage the instrumentation amp sees, R1 and R2 affect errors due to input offset current (and noise) a bit, but really the PCB layout is not very important until currents and voltage drops start to become significant wrt the common mode range of U1. So you want to keep the voltage drop across R4 reasonably low (not a problem typically unless you use very thin traces and/or very high currents).
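To put rough numbers on why series resistance hardly matters in a Kelvin measurement, here is a small Python sketch. The resistor values are hypothetical, chosen just for illustration:

```python
# Hypothetical numbers illustrating the 4-wire advantage: the sense
# path carries (almost) no current, so the drop across the sense
# leads is negligible and lead resistance drops out of the reading.
I_force = 1e-3        # 1 mA test current through the force leads
R_dut   = 1.0         # 1 ohm device under test
R_lead  = 0.05        # 50 mOhm per lead (traces, pogo pins, wiring)

# 2-wire: lead resistance adds directly to the reading.
V_2wire = I_force * (R_dut + 2 * R_lead)
R_2wire = V_2wire / I_force     # reads 1.10 ohm, a 10% error

# 4-wire: the meter senses only the drop across the DUT.
# (Assuming an ideal voltmeter; real input bias currents are tiny.)
V_sense = I_force * R_dut
R_4wire = V_sense / I_force     # reads 1.00 ohm
```

For the megohm-range measurements the lead resistance is lost in the noise either way; it is the 1-ohm points where the 4-wire scheme earns its keep.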
It's also a good idea to make R1 and R2 about 10k, with a 0.1 uF to ground at each input of the IA. This prevents common mode noise from getting into your high-gain amplifier. If you're feeling paranoid, match both the resistors and the capacitors. 1% is good enough. Use good layout - be careful where your load currents run, because even a few milliohms of unintended current path can give you errors.
If you follow what @WhatRoughBeast suggests, adding resistors to the line resistances R1/R2, you can put another (bigger) capacitor between the +/- input lines. His suggested values don't have much roll-off at mains frequencies, so matching is likely not critical, but you can put a big fat 1uF cap between the +/- inputs of the instrumentation amplifier without harm (slows down the response a bit, that's all) and it helps to kill noise. All caps preferably film types (or NP0 ceramic). Resistors can be metal film precision types.
Does trace thickness matter? I have a few measurements that are 1 ohm most are in Meg ohms
Not really. If the circuit is good it won't matter much, if it is bad it won't help much.
Maybe I should have also explained that the measurement equipment I am using for this project is from National Instruments. Is the circuit provided above, along with all the comments such as R1, R2, and caps to ground for common-mode noise, intended for obtaining the measurement as accurately as possible without NI equipment, or should I also be considering these components on the PCB that I am re-spinning?
They're probably in the NI boards if you're using something like the PCI-4462, but read the manuals.
|
STACK_EXCHANGE
|
Since we use the Raspberry Pi for most of our IoT projects, I wondered: is the Raspberry Pi only for IoT projects, or are there other ways to use the device? The answer is definitely yes, and I have already started using the Raspberry Pi for many other things. Looking at the device, you know you can use it for many projects, but you may not be sure how. Let me list some of the top uses of the Raspberry Pi 3. You may be using the Raspberry Pi 3 or other boards like the Raspberry Pi Zero, Arduino, or ESP8266 NodeMCU for your projects.
The simplest use for a Raspberry Pi is as a desktop computer. Depending on which model you buy, the Raspberry Pi is one of the world's least expensive and most versatile computers, with 512MB to 1GB of RAM and an SD card for storage. A good desktop computer can be useful, particularly for work-related tasks, but for many people space is a problem. What better computer to turn to than the box-sized Raspberry Pi? Connect the Pi to your TV through an HDMI cable, attach a keyboard and mouse, connect to Wi-Fi, and you have a fast-booting computer. You also get applications like LibreOffice (an alternative to Microsoft Office), the Claws email client, the Chromium browser, and more.
You need to install Raspbian or a different OS to set up your Raspberry Pi.
Refer to the tutorial "raspberry pi as desktop pc / can you use raspberry pi as a desktop".
We will learn how to set up the Raspberry Pi as an entertainment-center solution (Kodi, formerly XBMC) with the right accessories and software. Kodi is a free and open-source media player application developed by the XBMC Foundation, a non-profit technology consortium. Kodi is available for multiple operating systems and hardware platforms, with a 10-foot user interface for use with televisions and remote controls.
Turn a Raspberry Pi into a media center in under 30 minutes. The Raspberry Pi is a perfect choice for a home theater PC: small, quiet, and inexpensive. Before you even hook up your Raspberry Pi to your TV, you'll need to install Kodi on the normal Raspbian OS, or you can use a separate OS such as OpenELEC or LibreELEC, which is designed specifically for media-center use. I will write a separate tutorial on how to set up and configure the operating system for a media center.
You can convert your Raspberry Pi into a gaming system. Do you believe it? You should.
Welcome to RetroPie. RetroPie allows you to turn your Raspberry Pi into a retro-gaming machine. It builds upon Raspbian, EmulationStation, RetroArch, and many other projects to let you play your favorite arcade, home-console, and classic PC games with minimal setup. For power users, it also provides a large variety of configuration tools to customize the system as you want.
An emulator is software that makes a computer behave like another computer, or, in the case of RetroPie, like a video game console such as the Super Nintendo. The RetroPie SD image comes pre-installed with many different emulators, and additional emulators may be installed from within RetroPie.
Apache is a popular web server application you can install on the Raspberry Pi to allow it to serve web pages. On its own, Apache can serve HTML files over HTTP, and with additional modules it can serve dynamic web pages using scripting languages such as PHP.
You can set up your own website on the Raspberry Pi using the Apache web server, PHP, and WordPress. You can use No-IP so you don't need to worry about your public IP changing.
Install Apache using these simple commands. First, update your package lists:
sudo apt-get update
Then, install the apache2 package with this command:
sudo apt-get install apache2 -y
This article explains how to install the web server: https://www.raspberrypi.org/documentation/remote-access/web-server/apache.md
In future posts, I will share a tutorial on how to set up WordPress on the Raspberry Pi.
We can build a home security system using a Raspberry Pi with a PIR sensor and Pi Camera. The system detects the presence of an intruder and quickly alerts the user by sending an alert email. The mail also contains a picture of the intruder, captured by the Pi Camera. The Raspberry Pi controls the whole system.
- Raspberry Pi
- Pi Camera
- PIR Sensor
- Bread Board
- Resistor (1k)
- Connecting wires
- Power supply
Refer to the tutorial on a simple home-security email alert using the Raspberry Pi.
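To give an idea of the control flow, here is a minimal sketch of just the alert decision for the PIR setup above. GPIO, camera, and email handling are deliberately left out, and the cooldown value is an assumption of mine, not something from the referenced tutorial:

```python
# Sketch of the intruder-alert logic only; reading the PIR pin and
# emailing the Pi Camera picture would be wired in around this.
ALERT_COOLDOWN_S = 60  # assumed value: avoid sending a mail on every motion event

def should_alert(motion_detected, last_alert_time, now, cooldown=ALERT_COOLDOWN_S):
    """Alert only when motion is seen and the cooldown has elapsed."""
    return motion_detected and (now - last_alert_time) >= cooldown
```

In the main loop you would call this with the PIR reading and the current time, and only capture a picture and send the mail when it returns True.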
|
OPCFW_CODE
|
Error: error:24064064:random number generator:SSLEAY_RAND_BYTES:PRNG not seeded
I ran into this error when using karma. I followed the instructions here:
https://groups.google.com/forum/#!topic/angular-dart/hYY8WN_5OQc
I wasn't sure if this was a known issue or something to do with my config. I'm running karma with requirejs, mocha, and coverage. Let me know if you need more details or if this is part of another plugin.
Same issue here
karma start
INFO [karma]: Karma v0.12.31 server started at http://localhost:9876/
INFO [launcher]: Starting browser Chrome
ERROR [karma]: [Error: error:24064064:random number generator:SSLEAY_RAND_BYTES:PRNG not seeded]
Error: error:24064064:random number generator:SSLEAY_RAND_BYTES:PRNG not seeded
at Manager.generateId (/home/default/Projects/requirejs-tpl/node_modules/karma/node_modules/socket.io/lib/manager.js:735:12)
at /home/default/Projects/requirejs-tpl/node_modules/karma/node_modules/socket.io/lib/manager.js:790:21
at Manager.authorize (/home/default/Projects/requirejs-tpl/node_modules/karma/node_modules/socket.io/lib/manager.js:916:5)
at Manager.handleHandshake (/home/default/Projects/requirejs-tpl/node_modules/karma/node_modules/socket.io/lib/manager.js:786:8)
at Manager.handleRequest (/home/default/Projects/requirejs-tpl/node_modules/karma/node_modules/socket.io/lib/manager.js:593:12)
at Server.<anonymous> (/home/default/Projects/requirejs-tpl/node_modules/karma/node_modules/socket.io/lib/manager.js:119:10)
at Server.EventEmitter.emit (events.js:98:17)
at HTTPParser.parser.onIncoming (http.js:2108:12)
at HTTPParser.parserOnHeadersComplete [as onHeadersComplete] (http.js:121:23)
at Socket.socket.ondata (http.js:1966:22)
at TCP.onread (net.js:525:27)
These are all dependencies:
"devDependencies": {
"phantomjs": "~1.9.16",
"requirejs": "~2.1.16",
"karma": "~0.12.31",
"jasmine": "~2.2.1",
"jasmine-core": "~2.2.0",
"karma-jasmine": "~0.3.5",
"karma-requirejs": "~0.2.2",
"karma-phantomjs-launcher": "~0.1.4",
"karma-chrome-launcher": "~0.1.7"
}
Strangely enough this only happens on one project I'm working on, can't seem to figure out why.
Any ideas?
Just had the same issue ... did
rm -rf node_modules
npm install
and everything suddenly worked. (make sure your package.json contains all dependencies before doing this)
Hope this helps.
rm -rf node_modules
npm install
Worked for me too! Thanks!
|
GITHUB_ARCHIVE
|
AuthVOD (AUTHenticated Video On Demand) is a monetization package where viewers must authenticate (that is, enter a username and password) in the Brightcove Beacon app to watch videos included in the package. The videos may or may not show advertisements, depending on how you configure the package.
See the Understanding Monetization Options document for more information on AuthVOD, as well as all monetization packages you can use with Brightcove Beacon.
Configuring an AuthVOD package overview
Here are the high-level steps in configuring an AuthVOD package:
- In Brightcove Beacon, create an unpublished, cost free package.
- If ads are desired, supply an ad provider when creating the package.
- In Video Cloud Studio, add data to videos' custom fields to assign them to the AuthVOD package.
- In Brightcove Beacon, ingest the videos and clear the cache.
Creating a package in Brightcove Beacon
Here are the detailed steps to follow in Brightcove Beacon to create an AuthVOD package:
- Click the Commerce tab along the top of Brightcove Beacon.
- Select SVOD/AuthVOD Plans from the left navigation.
- Click the Add New Plan button.
- On the Package tab supply the internal name and be sure to leave the Status as Unpublished.
- If you wish to use advertising, you can click the Yes radio button for Advertisement then choose an Ads Provider from the dropdown.
- On the Textual Data tab enter the name visible to viewers, and a headline about the package.
- If desired, supply an image that will display with the package information.
- On the Streams tab, enter values for the maximum number of devices that can stream at the same time, and the maximum number of devices that can have an active connection.
If you wish finer control of the package availability, click the Yes radio button for Advanced Streams?, then complete the form with finer availability options.
Add videos to the package in Studio
To add content to the package, you need to use Video Cloud Studio to assign values to custom fields. The two custom fields that you must assign values to for an AuthVOD package are beacon.rights.<counter>.packageName and beacon.rights.<counter>.type.
The steps below detail the process of assigning values to those custom fields.
For each video you wish to be in the package, complete the following steps:
- Log in to Video Cloud Studio.
- In the Media Module, click on the video name to view that video's properties.
- Click the CUSTOM FIELDS' Edit button.
- For the beacon.rights.<counter>.packageName, enter the name of your AuthVOD package created earlier.
- For the beacon.rights.<counter>.type, enter SVOD.
The following screenshot shows an example of actual values used in the custom fields. In this case, the counter is zero (highlighted in red) and the package name is AuthVOD Test (highlighted in yellow). Remember when implementing AuthVOD, the beacon.rights.<counter>.type is always set to SVOD.
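If you prefer to script these custom-field updates instead of clicking through Studio, the request body can be built as in the sketch below. The CMS API endpoint shown in the comment is an assumption based on the standard Video Cloud CMS API and is not covered by this document:

```python
# Hypothetical helper: build the custom_fields body for one video.
def build_custom_fields(package_name, counter=0):
    return {
        "custom_fields": {
            f"beacon.rights.{counter}.packageName": package_name,
            f"beacon.rights.{counter}.type": "SVOD",  # always SVOD for AuthVOD
        }
    }

payload = build_custom_fields("AuthVOD Test")
# This body could then be sent as a PATCH to the CMS API, e.g.
# https://cms.api.brightcove.com/v1/accounts/{account_id}/videos/{video_id}
# with an OAuth bearer token (details depend on your account setup).
```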
Ingest videos into Brightcove Beacon
Changes that are made to videos in Video Cloud Studio need to be ingested into Brightcove Beacon. This is done automatically every hour, but you can also ingest manually on demand. The broad steps will be detailed below.
- In Brightcove Beacon, in the top menu, click on the Tools (wrench icon) tab.
- From the left navigation, be sure the Ingestion option is selected (it is the default option).
- Click the Update Brightcove Videos button.
- Once the ingestion is finished, you can check to be sure your videos have been assigned to your AuthVOD package by first clicking in the top menu the Commerce tab.
- From the left navigation, be sure the Packages option is selected (it is the default option).
- Click the name of your AuthVOD package.
- Click on the Content tab to see the assets assigned to this AuthVOD package. An example of this tab is shown here:
- To start cache clearing, in the top menu, click on the Tools (wrench icon) tab.
- From the left navigation, click the Cache option.
- Click the Cache Purge button.
Now, any viewers who are authenticated to the Brightcove Beacon app will have access to AuthVOD content. If a viewer tries to play a video while NOT authenticated, a message appears in the bottom-left of the video prompting them to sign in, as shown here:
Unpublishing or deleting a package
If a package is no longer needed you have two options:
- Unpublish: You can unpublish a package that will be needed again. For instance, you have a special package associated with a holiday that you will use year after year. This is a good use case for unpublish.
- Delete: You can delete a package and it is not recoverable. For instance, you have an introductory package for a new set of content, but that content will not be new again, so you will not need the package again. This is a good use case for delete.
There are two places you can unpublish/delete a package:
- From the SVOD/AuthVOD Plans section check the package(s) you wish to unpublish/delete then click the appropriate button.
- If you are editing the package, from the Package tab you can both unpublish and delete. Use the Status to unpublish or the Delete button to delete.
Active Subscriptions info in Beacon Studio
Note that when an AuthVOD package is implemented, it is not directly shown in the Registered Users module in Beacon Studio. The Active Subscriptions information is reported as N/A. Here is an example:
|
OPCFW_CODE
|
This section describes problems that can affect the Support for Oracle RAC framework resource group.
If a fatal problem occurs during the initialization of Support for Oracle RAC, the node panics with an error message similar to the following:
panic[cpu0]/thread=40037e60: Failfast: Aborting because "ucmmd" died 30 seconds ago
Description: A component that the UCMM controls returned an error to the UCMM during a reconfiguration.
Cause: The most common causes of this problem are as follows: A node might also panic during the initialization of Support for Oracle RAC because a reconfiguration step has timed out. For more information, see Node Panic Caused by a Timeout.
Solution: For instructions to correct the problem, see How to Recover From a Failure of the ucmmd Daemon or a Related Component.
The UCMM daemon, ucmmd, manages the reconfiguration of Support for Oracle RAC. When a cluster is booted or rebooted, this daemon is started only after all components of Support for Oracle RAC are validated. If the validation of a component on a node fails, the ucmmd daemon fails to start on the node.
The most common causes of this problem are as follows:
An error occurred during a previous reconfiguration of a component of Support for Oracle RAC.
A step in a previous reconfiguration of Support for Oracle RAC timed out, causing the node on which the timeout occurred to panic.
For instructions to correct the problem, see How to Recover From a Failure of the ucmmd Daemon or a Related Component.
Perform this task to correct the problems that are described in the following sections:
For the location of the log files for UCMM reconfigurations, see Sources of Diagnostic Information.
When you examine these files, start at the most recent message and work backward until you identify the cause of the problem.
For more information about error messages that might indicate the cause of reconfiguration errors, see Oracle Solaris Cluster Error Messages Guide.
For more information, see Node Panic Caused by a Timeout.
Only certain problems require a reboot to solve. For example, increasing the amount of shared memory requires a reboot. However, increasing the value of a step timeout does not require a reboot.
For more information about how to reboot a node, see Shutting Down and Booting a Single Node in a Cluster in Oracle Solaris Cluster System Administration Guide .
This step refreshes the resource group with the configuration changes you made.
# clresourcegroup offline -n node rac-fmwk-rg
Specifies the node name or node identifier (ID) of the node where the problem occurred.
Specifies the name of the resource group that is to be taken offline.
# clresourcegroup online -eM -n node rac-fmwk-rg
|
OPCFW_CODE
|
What transfer protocols do you support? HTTP? FTP?
Input: we support HTTP, HTTPS, FTP, SFTP, Rackspace Cloud Files, Google Cloud Storage, Azure, and Amazon S3.
Output: we support FTP, SFTP, Rackspace Cloud Files, Azure, Google Cloud Storage, and Amazon S3. We do not currently support general HTTP uploads.
Cloud Files users: You can specify the region to use (DFW, ORD or UK) by adding it to the protocol, like
cf+ord://username:api_key@container/object. Currently the only Cloud Files regions supported are DFW, ORD and UK. The DFW region will be used by default.
Need another transfer method? Let us know.
Where can I put my original files for transcoding?
We can pull in files from anywhere via HTTP, HTTPS, FTP, or SFTP. We can also pull files from an Azure container, Google Cloud Storage or S3 bucket, including private buckets, and from Rackspace Cloud Files.
http://example.com/path/to/file.avi
http://s3.amazonaws.com/bucket/file.avi
s3://bucket/file.avi
gcs://bucket/file.avi
cf://username:api_key@container/file.avi
azure://account-name:account-key@container/file.avi
https://example.com/path/to/file.avi
ftp://example.com/path/to/file.avi
sftp://example.com/path/to/file.avi
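As a rough illustration, the source-URL formats above can be recognized with nothing more than the standard library. The SUPPORTED set below simply mirrors this FAQ and is not an official list:

```python
from urllib.parse import urlparse

SUPPORTED = {"http", "https", "ftp", "sftp", "s3", "gcs", "cf", "azure"}

def source_scheme(url):
    """Return the base transfer scheme of a source URL, or None if unknown."""
    scheme = urlparse(url).scheme.lower()
    base = scheme.split("+", 1)[0]  # cf+ord://... is a Cloud Files region variant
    return base if base in SUPPORTED else None

print(source_scheme("s3://bucket/file.avi"))                          # s3
print(source_scheme("cf+ord://username:api_key@container/file.avi"))  # cf
```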
Can you transfer files securely?
Sure. Use SFTP or HTTPS and include a username and password in the URL, like this: sftp://username:password@example.com/path/to/file.avi
Can you pull files from a private S3, Google Cloud Storage, or Cloud Files bucket?
Are there any file size limits?
No. We don't have any file size limits, and in fact, we chose not to charge extra for large input videos to encourage customers to send us their best copies. The higher the quality of videos you send us, the higher the output quality we can give you.
There is, however, a limit of 24 hours on the duration of the source video.
Where do the finished videos go?
You have a few options for where we will send the videos.
We can send the files to an FTP or SFTP server, to an S3 or Google Cloud Storage bucket, or to Rackspace Cloud Files. See the url option in our API docs for more info.
Or you can specify no output destination, and we'll hold on to the file for 24 hours. We'll provide you with a URL that you can use to download the video. We'll remove the video after 24 hours, so don't forget to download it.
Starting with API version 2, if we are unable to upload files to your server we will instead upload them to S3 and provide you with a URL for all files.
How do I set the ACL permissions if I send encoded files to S3?
Use our API. See the API docs on access_control for more info.
Can I ship a hard drive with my videos on it to Zencoder?
If you have a large number of videos to encode but don't want to transfer them over the internet you can use Amazon's AWS Import/Export. You'll ship a hard drive with your videos to Amazon, who will upload them to S3 for you.
Amazon has simple instructions for getting your drive ready to ship, where to send it, and more.
Zencoder runs on Amazon EC2 instances, so there's a wide range of IP addresses that we use for File and Live transcoding. Amazon regularly adds new ranges to the list, which can be found via the AWS IP Range API.
Notifications are currently sent from
sg-77f03012, but this could change at any time. In order to account for this, we've added a
X-Zencoder-Notification-Secret header to all notifications delivered. This can be found on the API dashboard, where you can see your current secret and generate a new one if necessary.
This header is delivered with every request and can be used to guarantee that the notification was delivered from your Zencoder account. Note: In order to keep your notification secret secure, make sure to deliver notifications to HTTPS endpoints!
Note: Servers may move to new datacenters, and IP addresses may change without notice.
|
OPCFW_CODE
|
How to find out which songs in a directory are loudest?
Backstory:
I have about 400 odd songs that I use for background music in my wikipedia audiobooks. I had thought I had normalized them all properly so their volume would not overwhelm the speech, but a few bad ones got through.
For example: https://youtu.be/VVlWWs7Fq0U
Now I need to figure out which songs are the loudest so I can fix or remove them.
Questions:
How can I get a value for overall loudness of an audio file?
How can I get a numerical value for peak loudness of an audio file?
Thanks.
Testing this SO answer:
$ sox /usr/share/example-content/Ubuntu_Free_Culture_Showcase/Jenyfa\ Duncan\ -\ Australia.ogg -n stat
Samples read: 21199104
Length (seconds): 240.352653
Scaled by: 2147483647.0
Maximum amplitude: 0.963440
Minimum amplitude: -0.957550
Midline amplitude: 0.002945
Mean norm: 0.094807
Mean amplitude: 0.000000
RMS amplitude: 0.131004
Maximum delta: 0.531006
Minimum delta: 0.000000
Mean delta: 0.012794
RMS delta: 0.021026
Rough frequency: 1126
Volume adjustment: 1.038
It looks like you could use the Maximum amplitude (for peak) and either the Mean or RMS amplitude (for overall loudness).
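A small script can batch this over a whole directory. The helper below just shells out to sox and parses its stat report; treat it as a sketch (it assumes sox is installed and prints its report on stderr, as in the run above):

```python
import re
import subprocess

def parse_stat(stat_text):
    """Parse the report of `sox <file> -n stat` into a dict of floats."""
    values = {}
    for line in stat_text.splitlines():
        m = re.match(r"([A-Za-z ]+):\s*(-?\d+\.?\d*)", line.strip())
        if m:
            values[m.group(1).strip()] = float(m.group(2))
    return values

def loudness(path):
    """Return (RMS amplitude, Maximum amplitude) for one audio file."""
    result = subprocess.run(["sox", str(path), "-n", "stat"],
                            capture_output=True, text=True)
    stats = parse_stat(result.stderr)
    return stats.get("RMS amplitude"), stats.get("Maximum amplitude")
```

Running loudness() over every file and sorting by the first value should surface the offending songs.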
As this answer demonstrates, SOX works for this.
However, it is painfully slow and does not offer to correct (i.e. normalize) the offending files in the same process, therefore:
I love easyMP3Gain for normalizing MP3s !
Unfortunately, there seems to be no package for 17.10 Artful, but luckily the GUI packages for 16.04 Xenial are compatible and you can find them here!
This is probably not best practice but what I did was..
wget <your favorite ubuntu repo server here>libqt4pas5_2.5-15_amd64.deb
wget <your favorite ubuntu repo server here>easymp3gain-data_0.5.0+svn135-6_all.deb
wget <your favorite ubuntu repo server here>easymp3gain-qt_0.5.0+svn135-6_amd64.deb
Start with the libqt dependencies but install all the packages like this :
sudo dpkg -i libqt4pas5_2.5-15_amd64.deb
you will have to run sudo apt-get --fix-missing
and sudo apt-get --fix-broken install once or twice to get all the libqt dependencies, and then rerun the dpkg -i commands
Once you are able to run easymp3gain you can get the source for mp3gain from sourceforge here, untar it. Change directory to the extracted files and make sure you have the build tools installed
sudo apt-get install build-essential
then build it with
sudo make
sudo make install
You might need to copy the binary, at least I had to do that
sudo cp -p mp3gain /usr/local/bin/
Now you can run easymp3gain-qt ! Just select the folder with your MP3s and sort by Volume !
Screenshot of easyMP3Gain sorted by volume
I hope this helps !
I use another GUI application, QtGain. The GUI applications will install OK, but they may not pull in the dependency package 'mp3gain' needed to scan MP3 files.
If you can't find the mp3gain package in the Ubuntu repositories, there is an additional repository here: https://launchpad.net/~flexiondotorg/+archive/ubuntu/audio
|
STACK_EXCHANGE
|
Posted: 2015-02-14 22:39
Great work with WinSCP, I really like its scripting capabilities.
I have an idea to further improve the console output.
In case WinSCP.com is used to synchronize large files there is no information about how long it will take for the current file to be transferred.
Currently only the following information is displayed:
filename | transferred KBytes | speed | mode | percentage
The field with the number of transferred KBytes could be replaced with the file size of the currently transferred file, because the percentage should already provide enough information about the progress in many use cases. If the file size format is changed to KB/MB/GB accordingly, there might be enough space left to add an ETA field with the remaining time to completion, as already implemented in the GUI. Another option would be to display the transfer mode in the console output only once before the status line, because it does not change during transfer anyway.
filename | filesize | percentage | speed | ETA
What do you think?
Location: Prague, Czechia
Thanks for your feedback. Will consider this.
Another option would be to display the transfer mode on the console output only once before the status line, because it does not change during transfer anyway.
It does. In "automatic" mode.
Posted: 2015-02-24 03:06
I'm not entirely sure that percentage transferred would provide enough information.
Let's say a 99Gb file is being transferred and the percentage shown is 99.99%, for example. That means there is approximately 10Mb left to transfer.
Let's say the target disk is getting short of space, which happens often enough to be a concern for my work transferring to customer disks. You need a target to work out how much disk space to free. It would be useful to know whether one needs to free 5Mb or 10Mb, let's say because it is usually easier to find 5Mb of files to remove than 10Mb...
Also, if only "filename | filesize | percentage | speed | ETA" were displayed, then in this example only the ETA, not the percentage, would give an accurate picture of what remains. That is, the ETA would have to be damned reliable and accurate! When I'm doing file transfers, during the final part of the transfer, I find that the most reliable countdown is not the ETA but the number of kilobytes left to transfer!
Sure, the precision of percentage transferred could be changed from 99.99% to 99.999%, but that's shifting the problem.
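To make the arithmetic in this thread concrete, here is the remaining-data calculation that a rounded percentage display implies (decimal units, matching the 99Gb/10Mb figures above):

```python
def remaining_bytes(total_bytes, percent_done):
    """Bytes still to transfer, given the displayed percentage."""
    return total_bytes * (1 - percent_done / 100.0)

total = 99 * 10**9  # a "99Gb" file, decimal gigabytes
print(remaining_bytes(total, 99.99) / 10**6)  # roughly 9.9 (MB left)
```

So a display rounded to two decimal places leaves the user unsure of the remaining amount to within about 10 MB on a file of this size, which is exactly the objection raised here.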
|
OPCFW_CODE
|
The impact of the Lidar survey for the AEC industry
Lidar technology will continue to impact the AEC industry in the years to come by providing affordable and quick surveys. Architects, engineers, and landscape architects will embrace new tools and techniques for error-free projects with more precise and up-to-date digital elevation models.
What is Lidar?
Lidar is a method of exploring a sensor's surroundings using light detection and ranging. In recent years, Lidar technology has been employed in self-driving vehicles and in surveying terrain, forests, and underwater environments. The technique has become highly reliable and widespread, and it has revolutionized the way countries, authorities, and the building industry do terrain surveying.
How does a Lidar sensor work with the surveying of topography?
A Lidar topography survey can be done with a Lidar sensor placed on a ground vehicle. If it is done from a drone, helicopter, or airplane, it is called an airborne Lidar survey. The Lidar sensor is paired with an RGB camera and GPS for precise positioning and sends thousands of pulses per second. Vegetation, terrain, water, roads, and buildings reflect the Lidar pulses, and their positions and signal parameters are saved in a database.
The range and frequency of the Lidar pulses allow the survey to go through the forest and vegetation and create a point cloud with several layers.
Processing of the Lidar point cloud is critical for usability.
A generic point cloud scanned by a Lidar sensor gives each point a precise 3D location. What is extraordinary about this surveying method is the possibility of classifying each point with a color and a class. Post-processing with specific software allows the classification of ground points, different types of vegetation, roads, buildings, bridges, etc. The classes used by surveyors and by the software that processes Lidar clouds are defined in the Lidar standards.
Why are Lidar surveys more affordable than land surveys?
Land surveys require foot access to the terrain, line of sight, and manual placement of equipment. A land surveyor can take several days to complete a job in complicated landscapes with dense vegetation or steep hills. A drone survey can be deployed along a software-defined route and perform an automatic Lidar scan of the area in less than 20% of the time a land survey requires.
The greater affordability of drones and Lidar scanners, driven by the demand for Lidar sensors in the automotive and survey industries, is allowing more firms to buy capable equipment.
How accurate is a Lidar survey?
The accuracy of a Lidar survey depends on the sensor, assuming optimal weather conditions. Today's sensitive and precise sensors can provide accuracy within 1 cm and very dense point clouds. The USGS National Geospatial Program defines different quality and tolerance classes, with class 0 being the most accurate. The maximum horizontal and vertical deviations are in the region of two to three centimeters.
What is the density of a Lidar airborne survey?
There are denser and sparser Lidar point clouds, depending on the quality of the survey. A sparse Lidar cloud can have one point per square meter, while a dense Lidar cloud can have 6 points per square meter or more.
A dense Lidar cloud defines variations in the landscape in much more detail, making it possible to represent the difference in levels in the section of a street and its sidewalks, and giving us a precise idea of the pitch of a roof or of small objects on a building.
Generate Revit terrain: contour lines versus Lidar point clouds.
A dense point cloud from a Lidar survey provides a tight net of points that allows precise modeling of a Revit digital elevation model using interpolation.
The creation of digital elevation models from contour lines, on the other hand, can only work if the height difference between the curves is small. Contours provide the exact placement of points along predefined heights, but the terrain features between two iso-lines are not represented. If a Revit terrain is to be precise and generated from contours, the height between the lines should be 50cm or less.
Contours are, however, an excellent method for describing the features of the landscape in a plan, and they help architects and engineers position structures. Whether generating a digital elevation model from contours or from Lidar point clouds, the AEC industry is unlikely to abandon the tradition of using contours. The Greeks built amphitheaters that follow contours facing beautiful landscapes, and the Romans built roads along contours for centuries. So there is a practical need for contours now and in the future.
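The interpolation step mentioned above can be illustrated with a minimal inverse-distance-weighted (IDW) sketch. IDW is just one common interpolation scheme, not necessarily the one Revit or archi topography actually uses:

```python
def idw_elevation(points, x, y, power=2):
    """Interpolate elevation at (x, y) from scattered (px, py, pz) ground points."""
    num = den = 0.0
    for px, py, pz in points:
        d2 = (px - x) ** 2 + (py - y) ** 2
        if d2 == 0:
            return pz  # query point coincides with a sample
        w = 1.0 / d2 ** (power / 2)  # closer points weigh more
        num += w * pz
        den += w
    return num / den

pts = [(0, 0, 10.0), (1, 0, 12.0), (0, 1, 11.0)]
print(idw_elevation(pts, 0.5, 0.5))
```

The denser the point cloud, the closer the nearest samples are to any query point, which is why a dense Lidar cloud yields a more faithful digital elevation model than widely spaced contour points.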
How are national authorities adopting Lidar surveying?
Lidar is currently the most precise technique for scanning the Earth's surface. The affordability, precision, and availability of airborne Lidar are leading national authorities to sign large contracts to scan their territories. Denmark was one of the first countries to complete a national survey of its territory, and many other European countries are following by creating national height databases. The US has several nationwide and federal survey projects, and topography and bathymetry programs currently exist in Norway, to the delight of the AEC and surveyor community. One can expect this technology to boom in the coming years.
These surveys will help create more detailed and up-to-date representations of each country's topography through digital elevation maps and cartography. They will also help monitor forests and agricultural land and help identify and prevent natural disasters.
The most exciting aspect for the AEC industry is that the national authorities also make Lidar datasets available to the public domain. A nationwide public Lidar point cloud has a tremendous potential to reduce the building project costs in two ways: savings with the initial survey costs and avoiding costs resulting from building errors due to incorrect terrain representation.
Can Lidar be used directly with Revit?
Yes, Lidar datasets can be used directly in Revit to generate terrain. The result is a high-quality terrain with the detail and precision you would expect from a land survey. You can find more information about how to generate terrain from a Lidar dataset in Revit using archi topography on the following page.
|
OPCFW_CODE
|
Previously, I had only been running Blazor applications on the root of a site. This made me completely miss the point of the
<base href="/" /> element, which is added by default to Blazor applications. That element isn’t required until you run your application in a subdirectory of your site.
That is why I started to look into GitHub Pages. GitHub Pages allows you to host static content on GitHub. You can publish any GitHub repository to a project site in GitHub Pages. A project site will be published to a subdirectory under your account. An example of that is the sample application I wrote for this article, which you can find running at mikaberglund.github.io/hosting-blazor-apps-on-github-pages. The application is published in the
/docs folder in the master branch of the repository.
Considerations for GitHub Pages
Blazor is a SPA Framework, which typically only has one physical page that acts as the starting point for the application. The application provides a routing mechanism that loads the view associated with the current route (the path in the browser’s address bar).
This is all good as long as the Blazor application with its routing engine is loaded. But what if you reload the browser window? Or if you copy the address in your browser, and send it off to a friend? Then the request will go all the way to the server. Since there is no corresponding folder or file that would match the request, the server returns a 404 Not Found error message.
ASP.NET Core handles this by loading the application’s startup page, if no better matching resource or route is available.
But since GitHub Pages only hosts static content, no server-side processing is possible. One solution to this is to leverage custom 404 (not found) pages in GitHub Pages. The basic idea in this solution is that the custom 404 page will store the current route (URL) in the browser’s history, and redirect the browser to
index.html (the application’s startup page) with the original route as parameters.
index.html will then rewrite the URL so that Blazor’s routing engine will understand it. Read more about this solution on the Microsoft Docs site.
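To make the round trip concrete, here is a sketch of the transformation as plain functions. On a real GitHub Pages site this logic is a few lines of JavaScript inside 404.html and index.html (the Microsoft Docs article linked above has the actual script); the function names and the ?p= parameter below are hypothetical simplifications, and the base path is taken from the sample application in this article.

```python
# Illustrative sketch only: on GitHub Pages this logic lives in small
# JavaScript snippets inside 404.html and index.html. Names here are
# hypothetical; only the shape of the round trip matters.

BASE = "/hosting-blazor-apps-on-github-pages/"  # must match <base href="...">

def encode_404_redirect(requested_path: str) -> str:
    """What the 404 page does: turn the unmatched request path into a
    redirect target on the startup page, carrying the original route
    as a query-string parameter."""
    if requested_path.startswith(BASE):
        route = requested_path[len(BASE):]
    else:
        route = requested_path.lstrip("/")
    return f"{BASE}?p=/{route}"

def decode_startup_redirect(query_p: str) -> str:
    """What the startup page does: rebuild the in-app route so Blazor's
    router can navigate to it once the application has loaded."""
    return BASE.rstrip("/") + query_p
```

The important property is that the round trip preserves the route: decoding `"/counter"` yields the same path the user originally requested, so Blazor's router ends up on the right view.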
If you publish your application on a project site, you need to modify the
<base /> element in your
index.html file. My sample app associated with this article runs in the /hosting-blazor-apps-on-github-pages folder. This means that I need to change the element as shown below.
<base href="/hosting-blazor-apps-on-github-pages/" />
A Simplified Solution
The solution described above is pretty cool. You can probably apply that to any kind of SPA Framework hosted on any kind of service that supports static content with customizable 404 pages.
However, this got me thinking whether the solution could be simplified. If the custom 404 page “catches” any incoming request that does not have a corresponding physical file, why could that 404 page not be the application’s startup page too? Why would you need to modify the incoming request and redirect to index.html when you could serve the application directly from 404.html?
So I published the sample application to the /docs folder, renamed index.html to 404.html, and pushed my changes to the remote repository. As you can see in the /docs folder, there is no index.html, only a 404.html. Still, the application works just fine.
There is one downside to this though. When you load your application, GitHub Pages will return the HTTP status 404 (not found), even if your application loads completely fine. So, if this becomes an issue for you, then you need to do it the way it is described on the Microsoft Docs site.
Running the application directly from the 404.html page, instead of doing some tweaking and redirecting to the index.html, is obviously a simpler solution. However, so far I have only tested it with the sample Blazor application I wrote for this article. That sample application is pretty simple, so I’m going to add more advanced routing scenarios to it later on to be more certain. And, as I said, you will get a 404 response status for every request to your application.
I already published the Blazor Bootstrap Showroom application on GitHub Pages. You can find it here. I wrote about the Blazor Bootstrap component library a while ago here on my blog. If you want to learn more about this project and how you can use it in your Blazor application, have a look at the Blazor Bootstrap Wiki.
If you manage to publish your SPA application directly from the custom 404 page on GitHub Pages, I’d be happy to hear about it. Feel free to leave a comment below and let me know.
In today’s digital era, the role of software developers has become increasingly vital. But what exactly does a software developer job entail? In this article, we’ll delve into the world of software development, exploring the roles, responsibilities, and opportunities that come with this profession.
Roles and Responsibilities of a Software Developer
As a software developer, your primary responsibility is to design, create, and maintain software applications or systems. This involves collaborating with stakeholders, such as clients or other team members, to understand their needs and develop solutions that meet those requirements. Some common tasks performed by software developers include:
- Writing code in various programming languages
- Testing and debugging software applications
- Collaborating with cross-functional teams to ensure seamless integration
- Analyzing user feedback and making necessary improvements
- Keeping up with industry trends and advancements
To excel in this role, software developers must possess a combination of technical skills, problem-solving abilities, and excellent communication skills.
Educational Background and Training for Software Developers
While there is no one-size-fits-all educational path to becoming a software developer, a strong foundation in computer science or related fields is generally recommended. Many software developers hold a bachelor’s degree in computer science, software engineering, or a related discipline.
However, formal education is not the only route to success in this field. Some aspiring software developers choose to pursue coding bootcamps, online courses, or self-study to gain the necessary knowledge and skills. Continuous learning and staying updated with the latest technologies and programming languages are crucial for long-term success in the industry.
Career Opportunities for Software Developers
Software developers are in high demand across various industries and sectors. From healthcare to finance, e-commerce to entertainment, almost every industry relies on software applications or systems. As a result, software developers can find employment opportunities in:
- Technology companies
- Financial institutions
- Healthcare organizations
- Government agencies
- Startups and entrepreneurial ventures
Furthermore, software developers can specialize in different areas, such as web development, mobile app development, database management, or artificial intelligence. This diversity allows individuals to choose career paths that align with their interests and passions.
Frequently Asked Questions (FAQ) about Software Developer Jobs
What is the average salary of a software developer?
Software developers are well-compensated for their skills and expertise. The average salary varies based on factors such as experience, location, and industry. According to recent statistics, the median annual wage for software developers in the United States is around $110,000.
How can one become a software developer without a degree?
While a degree can provide a solid foundation, it is not the only path to becoming a software developer. Many successful developers have entered the field through coding bootcamps, online courses, or self-study. Building a strong portfolio and gaining practical experience through internships or freelance projects can also help individuals kickstart their careers without a formal degree.
Is coding experience necessary to become a software developer?
Yes, coding experience is essential for software developers. Proficiency in programming languages, such as Python, Java, or C++, is crucial for writing code and developing software applications. However, it’s important to note that coding is a skill that can be learned and improved with practice and dedication.
What programming languages are most commonly used by software developers?
The choice of programming language depends on the specific project requirements, industry trends, and personal preferences. Some popular programming languages among software developers include:
- Python: Known for its simplicity and versatility, used in web development, data analysis, and artificial intelligence.
- Java: Primarily used for creating enterprise-level applications and Android app development.
- C#: Mainly used with Microsoft technologies for desktop and web application development.
- Swift: Specifically designed for iOS and macOS app development.
Can software developers work remotely?
Yes, software development is a field that offers remote work opportunities. With the advancement of collaboration tools and cloud-based technologies, many companies now embrace remote work arrangements. Remote software developers can work from anywhere, as long as they have a stable internet connection and the necessary equipment.
In conclusion, a software developer job involves designing, creating, and maintaining software applications or systems. With the increasing reliance on technology in almost every industry, software developers play a crucial role in shaping the digital landscape. Whether you pursue a formal degree or take alternative paths to gain expertise, the software development field offers a wide range of career opportunities. By continuously learning and honing your skills, you can thrive as a software developer and contribute to the ever-evolving world of technology.
Pythagoras Theorem Calculator
How To Use Pythagoras Theorem Calculator
Let’s first understand what the Pythagoras theorem is and how we can use it.
The Pythagoras theorem is a fundamental relation in Euclidean geometry among the three sides of a right triangle. It states that the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares on the other two sides.
The theorem can be written as an equation relating the lengths of the sides a, b and c, with c the hypotenuse, often called the Pythagorean equation:$$a^2+b^2=c^2$$
Different forms of the Pythagorean equation
Converse of the Pythagoras theorem
The converse of the theorem is also true,
For any three positive numbers a, b and c such that \(a^2+b^2=c^2\), there exists a triangle with sides a, b and c, and every such triangle has a right angle between the sides of lengths a and b.
1. If \(a^2+b^2=c^2\), then the triangle is right.
2. If \(a^2+b^2>c^2\), then the triangle is acute.
3. If \(a^2+b^2< c^2\), then the triangle is obtuse.
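As a quick sketch, the three comparisons above translate directly into code. The helper names below are made up for illustration; classify sorts its inputs so that c is always the longest side:

```python
import math

def hypotenuse(a: float, b: float) -> float:
    """Length of the hypotenuse from the two legs: c = sqrt(a^2 + b^2)."""
    return math.hypot(a, b)

def classify(a: float, b: float, c: float) -> str:
    """Classify a triangle by comparing a^2 + b^2 with c^2,
    where c is the longest of the three sides."""
    a, b, c = sorted((a, b, c))          # ensure c is the longest side
    lhs, rhs = a * a + b * b, c * c
    if math.isclose(lhs, rhs):
        return "right"                   # a^2 + b^2 = c^2
    return "acute" if lhs > rhs else "obtuse"
```

For example, hypotenuse(3, 4) gives 5.0, classify(3, 4, 5) gives "right", and classify(2, 3, 4) gives "obtuse".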
A Pythagorean triple consists of three positive integers a, b, and c such that \(a^2+b^2=c^2.\) Such a triple is commonly written as (a, b, c). Some well-known examples are (3, 4, 5) and (5, 12, 13). Others include:
(7, 24, 25), (8, 15, 17), (9, 40, 41), (11, 60, 61), (12, 35, 37), (13, 84, 85), (16, 63, 65), (20, 21, 29), (28, 45, 53), (33, 56, 65)
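Triples like these can be produced with Euclid's classical formula: for integers m > n > 0, the numbers a = m^2 - n^2, b = 2mn, c = m^2 + n^2 always satisfy a^2 + b^2 = c^2. A minimal sketch:

```python
def euclid_triple(m: int, n: int) -> tuple[int, int, int]:
    """Euclid's formula: for m > n > 0, (m^2 - n^2, 2mn, m^2 + n^2)
    is always a Pythagorean triple."""
    if not m > n > 0:
        raise ValueError("require m > n > 0")
    return (m * m - n * n, 2 * m * n, m * m + n * n)
```

euclid_triple(2, 1) gives (3, 4, 5) and euclid_triple(3, 2) gives (5, 12, 13); some listed triples come out with the legs swapped, e.g. euclid_triple(4, 1) gives (15, 8, 17) for (8, 15, 17).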
To calculate the hypotenuse of a triangle using the Pythagoras theorem calculator above, just enter the values in the input boxes and press the calculate button; the result will be displayed.
Git blame: A reliable rat
A great benefit of version control systems is that they make it possible to see who introduced substantive changes in the past. For example in Git,
git blame <file> will reveal who last edited each line of code in that file.
Despite the cheeky name, the greatest value of
git blame isn’t so much blaming others for their mistakes, as identifying who to confer with when proposing changes. The last developer to touch a line of code may have an interest in its current state, can answer questions about it, and may have valuable perspective that will improve your proposed changes.
Slow standards adoption? Blame blame.
Unfortunately, this is an obstacle to the adoption of consistent code standards in an open source project like WordPress.
Any patches you make to legacy code whose sole purpose is applying coding standards, without introducing substantive changes, will make you appear as the last author in git blame, losing valuable information about whoever made the last substantive changes. Thus, this type of edit is discouraged.
As a result, WordPress’ adoption of its own coding standards in core code slows way down.
This is a bummer, because there would be dozens of people happy to make core contributions strictly to apply code standards. It’d be a great way for newbies to learn the ropes while making incremental improvements to code quality.
How about “minor commits” that blame is blind to?
Wouldn’t it be nice if you could indicate that a change is a minor edit when you commit it? git blame would then skip over these minor edits and display only substantive edits from older, non-minor commits. Obstacle to code standards adoption solved.
This could look something like
$ git commit --minor <file to commit>.
Implementation considerations (wherein I wade way out past my depth)
For this to work, I’m aware of at least three things that would need to change in Git’s internals:
- Implement the --minor flag (or whatever) in git commit.
- Extend the data model in commit blobs (the files where Git stores its object data) to include optional metadata that means “this is a minor edit”.
- Make git blame aware of the “this is a minor edit” metadata and crawl as far up the tree as needed to encounter an edit that is not minor.
Number 3 would add a bit of performance overhead to running
git blame. I could be way off here, but I doubt that’s a deal breaker.
Number 2 might be, though. The structure of commit blobs is super lean — just a reference to a tree describing the current file structure, the commit’s author, the commit message, and a reference to parent commit object(s). Nothing more. Thus, adding metadata to support this type of feature could increase every commit’s size by a significant percentage, and that would add up when applied to an entire repository’s object graph. Would that be justified by the limited utility that a minor edit functionality would add?
Perhaps this isn’t such a big issue, as that “minor” metadata flag could either be set to true, or be nonexistent and implied to be false. This would only take up more disk space in the cases where minor = true, instead of with 100% of commits.
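To make the proposal concrete, here is a toy model (emphatically not Git's real data structures) of what the blame-side change would do: for each line, walk from the newest commit that touched it toward older commits, skipping any commit flagged as minor. Note how the minor flag is stored only when true, mirroring the space-saving idea above.

```python
# Toy model of the proposed behavior -- not Git's actual internals.
# `history` is ordered newest-first; each commit records which line
# numbers it touched. The optional "minor" key is present only when true.

def blame_line(history, line_no):
    """Return the author of the newest non-minor commit touching line_no."""
    for commit in history:
        if line_no in commit["lines"] and not commit.get("minor", False):
            return commit["author"]
    return None  # line predates recorded history

history = [
    {"author": "newbie", "lines": {1, 2, 3}, "minor": True},  # style-only sweep
    {"author": "alice",  "lines": {2}},                       # substantive change
    {"author": "bob",    "lines": {1, 2, 3}},                 # original author
]
```

With this model, blaming line 2 returns "alice" rather than "newbie": the style-only commit is invisible to blame, which is exactly the point of the proposed --minor flag.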
Applying this to WordPress
I wrote this up with Git examples because I’m much more familiar with it, but WordPress still uses SVN for core development, and probably will for some time.
So until and unless WordPress completely migrates to Git, we’d also need an equivalent new “minor edit” feature added to SVN if we were to benefit fully in WP developer land.
Next Steps in Mozilla’s Ongoing Efforts to Put People in Charge of Their Privacy
As Mozilla’s new privacy lead, there are a number of new and existing initiatives that I will be tackling. This month, in particular, will be extra busy with comments due to both the FTC and Commerce, Data Privacy Day, as well as a number of internal activities underway. I will be using this blog to post updates on our work and seek community input, as well as to share my experiences as a privacy officer in Silicon Valley.
Mozilla has a long history of taking privacy seriously. The topic is well grounded in Mozilla’s principle-over-profit mission to build an Internet where the individual is respected and has choices. We approach privacy from the perspective of putting people in control and advocating for their ability to shape the future of the web. This comes through our commitment to support a vibrant add-on ecosystem with powerful third party tools like Adblock Plus and Ghostery, our work on privacy icons and making privacy policies not suck, leadership on geolocation privacy, and, among other examples, convening open forums with the community to collaborate on privacy and security solutions. I’m fortunate to be working with a number of people here who have strong professional credentials and personal commitments to online privacy. Working together to engage with the broader Mozilla community on fostering greater user transparency and choice will be one of my primary roles.
As I begin my second week with Mozilla, one of my first tasks is to finalize and roll out Mozilla’s Privacy & Data Operating Principles to inform our data handling practices and product decisions. In the rapid pace of development that defines today’s Web, we believe grounding our work in a set of guiding principles will be vital to maintaining internal vigilance, as well as enhancing privacy-related considerations in the development process.
Following an internal privacy review last summer that looked at a broad range of privacy-related organizational risks and controls, Mozilla formed a working group comprised of representatives from across the organization to develop a set of guiding principles. Drafts underwent a number of iterations based on input generated through open meetings and presentations.
I am sharing them now, in draft form, to seek broader input from the community. The current draft is focused on these six objectives:
- No Surprises. Only use and share information about our users for their benefit and as disclosed in our notices.
- Real Choices. Give our users actionable and informed choices by informing and educating at the point of collection and providing a choice to opt-out whenever possible.
- Sensible Settings. Establish default settings in our products and services that balance safety and user experience as appropriate for the context of the transaction.
- Limited Data. Collect and retain the least amount of information necessary for the feature or task. Try to share anonymous aggregate data whenever possible, and then only when it benefits the web, users, or developers.
- User Control. Do not disclose personal user information without the user’s consent. Advocate, develop and innovate for privacy enhancements that put people in control over their information and online experiences.
- Trusted Third Parties. Make privacy a key factor in selecting and interacting with partners. (Updated)
Questions for your consideration and input: Are these the right principles? Do they cover the areas that you care about? Will they drive us to develop better products and features? Are we missing anything critical? How do we think about guidelines, policies or standards to best guide our decisions without hampering the course and speed of innovation?
Once finalized, we will translate these principles into various communications, training and implementing tools to support the work of our teams across Mozilla. I expect a number of new projects to follow in the areas of online notices, user choices, security and data governance, not to mention a variety of privacy enhancing features and tools implemented in our great software products and services.
I’m excited to be a part of Mozilla and look forward to hearing your comments on these principles, as well as working with you in this new year and beyond.
Unable to include "large" code in lyx files
Following this question I was able to insert (Java) code into my lyx document. However, when I try to visualize (export it as a .pdf) the following error comes up:
I can't work with sizes bigger than about 19 feet. Continue and I'll use the largest value I can.
The related questions I found, this one and this one are related to including images into a document. Why does lyx show the same error when working with code snippets?
The issue appears in the following case:
The error message:
A shortened example can be found here
Can you please post a minimal example? see http://wiki.lyx.org/FAQ/MinimalExample
I actually found out why. When copy/pasting from IntelliJ, the whitespace is discarded and the code appears to LyX as one very long string that it doesn't like.
@scottkosty No, I was wrong, I'm updating the question. I cannot really post a minimal example since the error occurs when the code to be inserted is of a certain length.
I see. Are you able to upload an example to dropbox or google drive or somewhere else?
Yes, I'm adding the document now.
@scottkosty Added it now.
For me the example you posted compiles fine to PDF.
Yes, it does compile but the lines do not split accordingly. I mean my code is halfway outside the page.
Ah yes I can reproduce but in your question you say you get errors. Is this a separate question?
I get errors when I have lines that are too long.
@scottkosty nvm, I've fixed it by using the smallest available font size, adding new lines after each line of code, shortening each line to about 100 characters, and checking the Break long lines option in the Settings menu.
OK please add an answer to this question and accept it. Glad you got it figured out!
You seriously want some kind of automatic line breaks added to your code.
No. I mean you can't break up code and still expect it to run (definitions, method contracts and invocations may be affected). The initial version did not display the lines at all once they reached the right margin of the page.
I've managed to fix it by splitting all lines of code with new lines (when copied from IntelliJ, all '\n' characters had been removed, and this caused a problem as the source was one overly long line).
Then, by right clicking the source code ("Program Listing") and going to "Settings", I've changed the following:
in the Style menu box, picked the smallest font size (not tiny)
changed the family type to "Roman"
checked the break long lines box
It is important that each line has no more than about 100 characters, otherwise when converting to .pdf (luatex) the lines might not fit within the pages of the exported .pdf.
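For reference, the LyX settings above correspond roughly to the following options of the LaTeX listings package (an approximate sketch; the exact preamble LyX generates may differ):

```latex
% Approximate listings equivalent of the LyX settings described above.
\lstset{
  basicstyle=\scriptsize\rmfamily, % smallest practical font, Roman family
  breaklines=true,                 % break long lines at the page margin
  breakatwhitespace=true           % prefer breaking lines at spaces
}
```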
microsoft authentication using firebase
I am working on Microsoft authentication using Firebase. It's a web project built with Vue.js 2. I have followed this documentation step by step for the Firebase and code sections, and also followed this documentation for creating an account in the Azure portal, but I am getting this error:
error FirebaseError: Firebase: Error getting verification code from microsoft.com response: error=invalid_request&error_description=Proof%20Key%20for%20Code%20Exchange%20is%20required%20for%20cross-origin%20authorization%20code%20redemption.&state=AMbdmDnE2TjhyB-T1hIHqYTh73Za9GIrASM-9NFz4trUb4QSLmP6W_qIFNCSl2fmUyq0tTvTNeB3Yg1a3XmOHg93aDItLCJTEEf9B-6EdpPLzR-_mkV9bI3QLoTyT3JQl9Pldczh3BfRlTZQ2KwKfV8IxgpHoXxKJByVzaB-M1wxWO9ESh7Ap_2BvNYHrq2tSFQHbK9D70l7xzi292de6G4rbGUgKmtuTtND4B671A1sxhD2-1WTWaCXkLMv_R7q5JTiWmfqn12ZipA_RWnMBDkPRhglBVReg6jBCRWKv1PvWN2dVQOQfjIoTKRfUs8VK4KfMDR6rYAVst8UStsO79nPN27_32yBjoU9pdl3 (auth/invalid-credential).
at _errorWithCustomMessage (vendors~app~._node_modules_@firebase_auth_dist_esm2017_index-1679a2b2.js~8334e211.js:568:20)
at _performFetchWithErrorHandling (vendors~app~._node_modules_@firebase_auth_dist_esm2017_index-1679a2b2.js~8334e211.js:1085:23)
at async _performSignInRequest (vendors~app~._node_modules_@firebase_auth_dist_esm2017_index-1679a2b2.js~8334e211.js:1100:29)
at async _signInWithCredential (vendors~app~._node_modules_@firebase_auth_dist_esm2017_index-1679a2b2.js~8334e211.js:4706:22)
at async PopupOperation.onAuthEvent (vendors~app~._node_modules_@firebase_auth_dist_esm2017_index-1679a2b2.js~8334e211.js:7965:26)
Please suggest what could be a possible fix for the above issue.
I was able to fix this problem with the two steps below.
Step 1: I had created an SPA platform in the Azure portal, but it should be a Web platform, so I deleted the SPA platform and added a Web platform to fix this problem.
To configure application settings based on the platform or device you're targeting, follow these steps:
In the Azure portal, in App registrations, select your application.
Under Manage, select Authentication.
Under Platform configurations, select Add a platform. Under Configure platforms, select the tile for your application type (platform) to configure its settings.
Step 2: we have to add the Application secret in the Firebase console, and it needs to be copied correctly from the Azure portal.
Basic steps to create and add a client secret:
In the Azure portal, in App registrations, select your application.
Select Certificates & secrets > Client secrets > New client secret.
Add a description for your client secret.
Select an expiration for the secret or specify a custom lifetime
Select Add.
Record the secret's value for use in your client application code. This secret value is never displayed again after you leave this page.
Please read the last step carefully: you have to copy the secret's value before leaving the page. If you leave the page, the value will be hidden with ***; in that case, just delete the key, add a new client secret, and copy its value (the "Value" field, not the "Secret ID" field).
Now just add that key to your Firebase console in the Application secret field.
Note: try to follow these documentations properly: the Firebase documentation and the Microsoft Azure documentation.
Why did Agni serve as the priest for two kings?
In my question here, I discussed an excerpt from the Aitareya Brahmana of the Rig Veda which lists various kings who have performed the Rajasuya Yagna, the ritual to become an emperor. Now here is another excerpt from the Aitareya Brahmana which lists various kings who have performed the Rajasuya Yagna, along with the priests who officiated:
This food Rama Margaveya proclaimed to Vishvanta Saushadamana... This also Tura Kavasheya proclaimed to Janamejaya Parikshita; this Parvata and Narada proclaimed to Somaka Sahadevya, to Sahadeva Sarnjaya, Babhru Daivavridha, Bhima of Vidarbha, Nagnajit of Gandhara; this Agni proclaimed to Sanshruta Arimdama and to Kratuvid Janaki; this Vasishta proclaimed to Sudas Paijavana. All of them attained greatness having partaken of this food. All of them were great kings; like Aditya, established in prosperity, they gave heat obtaining tribute from all the quarters. Like Aditya, established in prosperity, he gives heat, from all the quarters he obtains tribute, dread his sway and unassailable, who as a Kshatriyas when sacrificing partakes thus of this food.
"Proclaiming the food" is a description of one of the parts of the Rajasuya Yagna, by the way. In any case, some of these kings are regognizable; Janamajeya son of Parikshit was Arjuna's great-grandson, Nagnajit of Gandhara was Shakuni's uncle as I discuss here, and Sudas Paijavana was the victor of the famous Battle of Ten Kings as I discuss here.
But my question is about Sanashruta Arindama and Kratuvid Janaki. Who are these two kings, and why would Agni the fire god serve as the priest for their Rajasuya Yagnas?
Kratuvid Janaki might be related to Sita's father Janaka. But do any other scriptures mention Sanashruta and Kratuvid, and/or describe their interaction with Agni the fire god?
I think Rig Veda verse one answers your question: "Agnimile purohitam", which literally means salutations to Agni who is the purohita. Purohita is a term for priest, so that is why he served as a priest.
@Yogi That's referring to the fact that he's the priest of the gods. It has nothing to do with him serving as priest for two human kings.
Then the human kings might have had divine qualities, or they might have obtained something more than the devas, like brahmajnana.
@Yogi Yeah, I assume the answer is that they were great kings who did something to please Agni, but I'm looking for the details.
Interesting question!
The challenge that the project addresses
Supporting students from poverty-stricken families. The results of the above-mentioned school have been declining from year to year, especially in science and mathematics. Among other problems, the students in this school come from poor families who cannot fully support them financially to get study resources that would enhance their performance. Students are demotivated, and teachers are also demotivated, making it hard to perform well in the last year of study that prepares students for universities and colleges.
What is your project doing to respond to this challenge?
After a recurring decline in the grades obtained by learners at this school, it became worrisome to me as a former mathematics, physics and integrated science teacher. The questions I asked myself were: What could be the problem? What can be done to help improve the results? I approached the principal to allow me to work with the students and willing teachers to motivate students and help them develop content knowledge in mathematics and physics; to give them hope in studying on their own irrespective of their background; and to instill a spirit of competition to strive to always be the best performers in class. I visited students in their classes and talked to them about how to become the best achievers irrespective of their family, socioeconomic and school background. I gave motivational talks. I prepared tutorials and responses to tutorial questions. I tried to liaise with the mathematics and physics teachers, giving advice on content knowledge presentation, but the interaction was so poor that I had to stop. I sponsored two top achievers' tuition fees for 2022. I am willing to sponsor more top achievers in the coming years.
Describe the project's impact
Some students improved their content knowledge, seen through their reasoning ability in answering tutorial questions. Some students kept asking for more work to be done, and showed improved willingness to work on their own and interest in studying. As someone based in Thaba-Tseka district, which is quite far from Maseru where the school is based, it is not easy for me to pay students regular visits, so I involved ex-students willing to help in any way, whether in the form of motivation, career guidance or content development in their areas of expertise. I invited ex-students to contribute small amounts to keep the fire of sponsoring top achievers burning. I also asked for advice on how the small contributions can be made directly to the school and kept for sponsoring 2022 top achievers and maybe teachers who perform quite well in 2022.
The grant will sponsor the tuition fees for 2023 of the two high performers who are in grade 10 this year. It will also sponsor the two teachers with the highest number of quality grades in the LGCSE results of 2022. The rest of the money will be used for traveling costs to the school for motivational talks and content knowledge development in the areas of mathematics and physics. It will also go toward buying at least one desktop computer for easy internet access for students in the last year of secondary school.
Read a magazine article about the project.
Does ColdFusion support REST API URIs with a dynamic token in the middle of the URI?
I've been playing with ColdFusion 11's REST API support and am wondering if it's possible to have it support a URI with a dynamic token in the middle of the URI instead of only at the end. That is, it very easily supports URIs like:
/rest/users/12345
where the 12345 is dynamic (in this case, the user's userID). But I haven't been able to find a way (without a tremendous amount of URI hacking) to support URIs like:
/rest/users/12345/emailAddresses
So, is it possible to do this in ColdFusion (11 or 2016)? If not, is it supported in Taffy (I didn't see where it is but I could be wrong)?
TIA
What happens when you call /rest/users/12345/emailAddresses?
It seems to be supported. See the section 'Specifying subresources' here - https://helpx.adobe.com/coldfusion/developing-applications/changes-in-coldfusion/restful-web-services-in-coldfusion.html
@Miguel-F Depends on the code. Using the built-in ColdFusion REST support, you build CFCs for each endpoint in your API. To "register" a specific CFC as an endpoint, you would use something like:
`<cfcomponent rest="true" restpath="/users">`
This CFC would be called via /rest/users. In the CFC, you configure functions to accept URL params, as in:
`<cffunction name="getUser" ... restPath="{id}">
<cfargument name="id" ... restArgSource="Path" />
</cffunction>`
This would allow you to call /rest/users/12345. I can't see how to have anything after the "12345".
@Miguel-F Sorry if I'm being thick here but I don't see anything in that section that would allow me to support dynamic tokens in the middle of the URI. Can you elaborate on that a bit? Thanks.
I'm a bit unfamiliar with CF's built in but I don't see a way to do it as simple as Taffy. If I'm reading the docs right though if you use the Subresource locator you can technically do what's in the first example in the subresource section but it seems like it would be more cumbersome. Again, I'm less familiar with it, but it seems possible however I think you'd have to have some messy code. I know it's not your preferred, but if you have some standard you could use URL Rewriting. Very little work as long as you have some common paths.
I misunderstood your question, sorry for the confusion. Perhaps you should look at using query string parameters instead. Something like this - http://stackoverflow.com/a/16134855/1636917
@Leeish Thanks for the comment. I was kind of hoping that I wouldn't need to use URL Rewriting because CF would have it built-in. But it looks like that's not the case and that I may need to go with URL Rewriting.
@Miguel-F Thanks for the link. My hope is to use query string parameters to restrict the scope/order/etc. of what is returned but use the path parameters to identify the resource. The big problem with doing all of this by query string is that it would necessitate the code on the other end of the URI to be able to handle all of the different "resources" that are defined on the query string. In other words, 1 big CFC to handle everything instead of several smaller CFCs each with a specific purpose and/or for a specific resource. It's looking more and more like I'll have to go with URL Rewriting.
It's been a while and I wanted to provide the answer in case anyone else has this same question...
ColdFusion, when defining a CFC for a REST endpoint, allows you to specify wildcards/variable names in the restpath attribute to both the <cfcomponent> and <cffunction> tags. You would then define <cfargument> tags for each one of these variables so that you can access them within your function. For example:
<cfcomponent rest="true" restpath="/users/{userId}/pets" ... >

    <cffunction name="getPets" access="remote" httpMethod="GET">
        <cfargument name="userId" type="numeric" required="true" restargsource="Path" />
        <!--- Called with a path like /users/123/pets/ --->
        <!--- do stuff using the arguments.userId (123) variable --->
    </cffunction>

    <cffunction name="getPet" access="remote" httpMethod="GET" restpath="{petId}">
        <cfargument name="userId" type="numeric" required="true" restargsource="Path" />
        <cfargument name="petId" type="numeric" required="true" restargsource="Path" />
        <!--- Called with a path like /users/123/pets/456/ --->
        <!--- do stuff using the arguments.userId (123) and/or arguments.petId (456) variables --->
    </cffunction>

</cfcomponent>
The keys here are using the restpath attribute with each variable written as a name in curly braces, and then declaring those variables as arguments to the function with the restargsource attribute set to "Path".
I hope this helps.
|
STACK_EXCHANGE
|
|Oracle® x86 Servers Diagnostics Guide For Servers Supporting Oracle ILOM 3.0.x|
Use the Immediate Burn-in Testing menu option to run burn-in test scripts on your server. Immediate Burn-in tests include full server-level tests and component-level tests. You can use predefined tests or you can create and run your own tests.
Three scripts have been created for testing your server during Manual mode operations:
Note - Each of these scripts tests the operating status of your entire server. If you want to test only a certain percentage of your server’s hard drives, refer to To Test the Hard Disks of the Server to change the test options.
quick.tst – This script performs a high-level test of all hardware components, including those components that require user input, as well as a more in-depth memory test. You must interact with the Pc-Check utility to progress through these interactive tests. The tests cannot be run unattended and do not contain "timeout" facilities. The interactive tests wait until you provide the correct input.
noinput.tst – This script is used as a first triage of any hardware-related problems or issues. The script performs a high-level test of most hardware components, excluding those components that require user input (keyboard, mouse, sound, video). This test does not require user input.
full.tst – This script performs the most detailed and comprehensive test on all hardware components, including those components that require user input. This script contains a more in-depth memory test than quick.tst, as well as external port tests (which might require loopback connectors). You must interact with the test utility to progress through these interactive tests.
Note - The memory tests in Pc-Check detect single-bit error-correcting code (ECC) memory failures and report them down to an individual memory module (DIMM).
When you select the Immediate Burn-in Testing menu option, the Continuous Burn-in Testing window is displayed. The screen includes the list of options shown in Test Menu Options for running the tests. When a quick.tst, noinput.tst, or full.tst script is loaded, the defaults indicated in the third column are automatically loaded.
Table 3-3 Test Menu Options
To load one of the scripts available to test the devices on your server, follow these steps:
The top portion of the window lists the options described in Test Menu Options , and the bottom portion of the window lists the Immediate Burn-in menu options.
A text box is displayed.
- To use a pre-written test, enter one of the following: quick.tst, noinput.tst, or full.tst.
- To use a script that you have created and saved, enter d:\testname.tst, where testname is the name of the script that you have created.
Opens the Burn-in Options menu, which enables you to modify the various options listed in Test Menu Options for the currently loaded test script.
Opens a listing of the tests available for your server configuration and the currently loaded test script.
Runs the currently loaded burn-in test script.
|
OPCFW_CODE
|
The first way is to find an appropriate whizz-danger of a data type and write a diddly little dongle that does the job. So smooth...
But for 95% of the computation using that data type is rather overkill as far as efficiency is concerned.
The second way is to choose an even more appropriate data type, use that for the 95% of the calculation, and then with a flick of my wrist do the last 5% of the job with the necessary whizz-danger data type.
Now what to do? YAGNI tells me to do the first option: I just have to print a few numbers on a screen and I'm done.
Now my dark past tells me to have pride in producing an efficient result, look at the quality, feel the width.
Which to choose?
Another point: to get one of the numbers computed, I can use some simple Noddy Programmer Maths, OR... being brilliant, as well as modest, I can perform a wonderful magical mathematical trick. But a Noddy Programmer might not get it without some tedious explanation, which would probably baffle him anyway. Ok, it's not my fault he didn't do his sums well at school and that sort of thing, and I would definitely like to show I'm A1 at maths, if not at coding this stuff.
BTW I guess you can read Newbie for Noddy, but there is a distinction.
Any more subtle hints?
Are you really, really sure about this? In the topsy-turvy world of self-optimizing compilers, stochastic processes and hand-tuned libraries, such "common sense" cannot always be relied on.
It's almost a "zen" thing:- clearing your mind of all thoughts of efficiency leaves you free to design a truly simple solution, and can often actually lead to more efficient code
Originally posted by Barry Gaunt:
Another point: to get one of the numbers computed, I can use some simple Noddy Programmer Maths, OR... being brilliant, as well as modest, I can perform a wonderful magical mathematical trick. But a Noddy Programmer might not get it without some tedious explanation which would probably baffle him anyway.
Well, I'm a liberal arts graduate, and I was able to figure out wonderful magical mathematical trick #1 on this assignment on my own.
And when I saw wonderful magical mathematical trick #2 on this assignment, I exclaimed "So, that's what that's for!"
So any more subtle hints?
Google is failing me so I'll quote from memory now, and edit later when I can look this up from hardcopy at home.
"The value of fortunetelling is that it shows how perfectly useless it is to have the correct answer to the wrong question." -- Ursula LeGuin, The Left Hand of Darkness, loosely, from memory
In jest, Michael
I sassed you as a hoopy frood who knew where his towel was.
Where's my towel?
Is this the one where you show that signed whatsit is missing, then write it again so it works?
The 95%-5% approach didn't work here either !
Then it becomes an incredible shrinking program - And it's optimised; there's nowt left !
Minimalism, don'tcha love it ?
Sure, every once in a while you have to write some code for the Hubble telescope that has to run on a space-hardened 386, in 768 bytes of memory...
But usually, you want to write code that's easy to debug, easy to enhance, and that the next guy / gal will enjoy reading. 'Write your code as if the next person who has to work with it is a homicidal maniac, with a short temper, who knows your home address.'
|
OPCFW_CODE
|
Mindfulness before Kafka
Everyone uses Kafka, so let's use it!
If you have any ideas for using Kafka as part of your stack, please don't adopt it just because "other company" is using it and you feel you have to use it too. Also, successfully running Kafka on your local computer doesn't mean it will behave the same in the development or even the production environment.
You need to think it through for production use cases and environments. Start with: What if? Who's doing it? What are the implications?
There are a lot of articles about Kafka's do's and don'ts you can read. If your use case fits Kafka, then great!
The next question is how much data goes through Kafka. If the data is tiny, ask yourself "is there another way to do this without Kafka?", and if the answer is yes, you are better off without Kafka. It will not be worth the hassle.
But if your use case is a perfect fit and you will have a lot of data going through Kafka, great! The next question is "How should your Kafka cluster be sized?" You need to start planning the sizing of your CPUs, RAM, and storage capacity.
A few example mistakes
Kafka has a lot of configurations: broker, producer, consumer, and topic configs all need to be set appropriately so the cluster can run and serve you well. Below are a few among many examples.
For example, at the start it's easier to just set auto.create.topics.enable to true, so producers can immediately start pushing data to Kafka without needing to check whether the topic exists and, if not, create it before pushing. This is a fine basic setup for exercising at the start, but in a production environment you need to disable auto.create.topics.enable and even add ACLs on top of it for authentication and role management.
The next example is topic partitions: on a Kafka broker, you can set many partitions for one topic. Unfortunately, there will never be a simple rule for topic partitions, because there are a lot of use cases out there. A simple guideline I can offer is never to use too many partitions in one topic, because a big partition count can hurt producer throughput and memory usage.
The next one is the retention period; the default setting is a week. But do you need to keep the topic data that long? Or do you need more retention? Set this in the topic-level config so each topic can have a different configuration. The retention period is directly tied to the storage you have for the brokers.
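As a sketch of the last two points (the topic name, broker address, and sizes below are placeholders, assuming the standard Kafka CLI tools):

```shell
# Create a topic with an explicit, modest partition count
# instead of relying on auto-created defaults.
kafka-topics.sh --bootstrap-server broker1:9092 \
  --create --topic orders \
  --partitions 6 --replication-factor 3

# Override retention at the topic level: keep data for 3 days
# (259200000 ms) instead of the broker-wide default of 7 days.
kafka-configs.sh --bootstrap-server broker1:9092 \
  --alter --entity-type topics --entity-name orders \
  --add-config retention.ms=259200000
```

A per-topic retention.ms set this way overrides the broker-wide retention default for that topic only.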
Monitoring? For what? It already works!
A Kafka cluster is a distributed system, and there are a lot of ways for things to go wrong with it. In my personal opinion, observability for a Kafka cluster is a must.
To start, you need to monitor the metrics of the Kafka cluster; you can start by reading up on JMX and the Kafka Exporter. It is essential to have tools for this effort. There are a lot of Kafka monitoring tools out there, from open-source to enterprise licenses, so you can choose depending on your needs and budget.
You can monitor the health of your cluster, consumer lag, throughput, and many more useful details you can take advantage of. You can set an alert if a broker is down or low on storage.
What I find most useful is monitoring throughput, because sometimes you rush into topic creation without a proper configuration, and that always affects the performance of your Kafka cluster. When you exercise this in the dev environment, you can tune the topic configuration before you mark the config as production-ready.
Let’s upgrade the version! It better!
It’s common thing for us to try out the latest version of the software. But if the version jumps from 1 to 2 or 2 to 3 there is always a fundamental change. And of course, the step for installing might be the same but for upgrading might be different.
One thing worth checking is always the data structure. Is it affected? Or even they upgraded something so the data need to be converted in some way to be readable in the new version?
Kafka upgrades usually involving a rolling restart of the broker, when this happens it will affecting the producers and consumers that connect to the cluster without the bootstrap server list (only one IP). So you need to make sure that all the application connected to Kafka is always using the bootstrap server list.
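A sketch of the client-side setting (the host names below are placeholders):

```properties
# Fragile: a single address; if that one broker is restarting
# during a rolling upgrade, the client cannot bootstrap at all.
# bootstrap.servers=10.0.0.1:9092

# Better: list several brokers so the client can bootstrap from
# whichever ones are currently up.
bootstrap.servers=broker1:9092,broker2:9092,broker3:9092
```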
Please use the dev environment to exercise the upgrade, but your dev environment needs to have the same configuration as the production environment. A common mistake is that the dev environment config differs from production. This can cause you serious trouble.
So yes, an upgrade is better, but only when you have prepared for it!
This article I wrote is from personal experience, and yes, it's very high level. The main reason I wrote it up is that I have encountered a few companies using Kafka for use cases you can achieve without Kafka, and most of them use Kafka for a very small amount of data.
And some of them run Kafka in Docker for production with one broker in it. Come on, dudes!
So, if you need to use Kafka inside your technology stack, please "take a deep breath, calm down, relax, and think it through" before you decide to use it.
There's a lot of effort in using Kafka, especially if you decide to self-manage it. If you want peace of mind and want to avoid the hassle, you can choose a managed Kafka service.
|
OPCFW_CODE
|
the problem is that when you delete a project, sts may only close it. try view menu --> uncheck closed projects. now you will see all closed projects; simply delete them.
make sure it is really not in the workspace, and also that there aren't any other projects with the same name. if not, just delete the .metadata folder or create a new workspace.
check if you still have the project in the workspace folder on disk. you may have deleted it in sts without checking 'delete on disk', so the project may still be there in the workspace folder even though it is deleted in sts.
i get this issue from time to time. usually i just open a new workspace, but it sounds like you don't want to lose other projects.
i simply open the .project file in my project and change the name of the project in the name tag.
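for reference, a minimal .project file looks roughly like this (the project name, builder, and nature below are just an illustrative java example):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<projectDescription>
    <name>my-renamed-project</name>
    <comment></comment>
    <projects></projects>
    <buildSpec>
        <buildCommand>
            <name>org.eclipse.jdt.core.javabuilder</name>
            <arguments></arguments>
        </buildCommand>
    </buildSpec>
    <natures>
        <nature>org.eclipse.jdt.core.javanature</nature>
    </natures>
</projectDescription>
```

changing the text inside the name tag renames the project the next time eclipse/sts reads it.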
probably when you 'accidentally deleted' your project, you only deleted it from the eclipse workspace, but not from the actual workspace folder on your hard drive (as other people pointed out, eclipse can arbitrarily map workspace projects to files on disk, so it is possible for a project to be 'deleted' from your eclipse workspace but still exist on disk).
the good news is the files you deleted are actually still there.
instead of importing your project from a zip, you may just want to import those files from the workspace folder back into your eclipse workspace.
generally this kind of problem should not occur. you can go to the project menu, run clean, and then restart sts. maybe sts is not synced with the latest configured project.
when you launch spring tools suite, it will ask you to select directory as workspace as below:
if the directory you selected here (i.e., the workspace directory) is the same as the directory where the project that you are going to import resides, then you will get an error that some projects cannot be imported because they already exist in the workspace.
therefore, to solve the issue,
- close spring tool suite
- create a new directory
- launch spring tool suite again
- and, select that as your workspace
- launch the application and you would be able to import as you mentioned in your question
it solved my problem. hope it helps..
the workspace in sts/eclipse is not automatically the same as the file structure that you have on disc in your workspace directory. you can have projects in this workspace folder or somewhere else on disc.
to get them into your project explorer (and access them from inside sts/eclipse), you need to import them (import existing projects into workspace). then you can select the folder where those projects are located in. in case you have those projects already in your workspace folder on disc, you can choose the workspace folder as root folder in the wizard. it will show all the projects that exist on disc in that folder and grey those out that are already imported/referenced in your workspace in eclipse.
|
OPCFW_CODE
|
"Tweet others... as you would have them tweet you."
Ok, I've been following some of my innovative friends who are major advocates for Twitter. It's not merely my 'monkey-see-monkey-do' mentality -- sometimes I just don't understand things right off, but I greatly respect the person strongly recommending it. [You ever do that?]
So I signed up at Twitter.com (as 'indychristian' -- what else?). And I've toyed with it from time to time, not really catching on just yet. And frankly, I'm STILL not positive I know how the Lord would want me to use it in my own personal information strategy. But again, it's because I so highly respect these guys ahead of me. So sometimes I somewhat blindly stand on their shoulders. 'Faith', huh? But not BLIND FAITH... they have a reputation of coming-through for me. [Got any friends like that?]
And btw, in the past, that's usually worked out really really well. Choose your friends wisely, my parents always told me. When I've done that, it's worked well. When I haven't, it hasn't.
Ok, so Twitter is incredibly simple. Too simple even, for me to catch onto very quickly as to why it could POSSIBLY be so valuable. If you go there and sign up, you'll see it pretty much allows you to make a one-line message available to anyone who 'follows' you... ie, your group of friends. Think of it perhaps as a status-update. "I'm headed to http://UBcafe.com", for instance. Or... "We just went live at http://AskAnythingSaturday.TV." These 'tweets' can even go through to my friends' cell-phones, if they've set it up to do so.
Seems innocuous enough. Nobody has to 'follow' you if they don't want to. So really, it's only for those who really MIGHT, for who knows what reason, WANT to 'follow' me. Funny thing though -- innovators LIKE to follow others, to see what's fresh, new and should be tried. After all, the speed of life has accelerated -- and the only good way to stay up on important matters is to collaborate with others who prioritize similarly... and share their lives with you.
Hmmm. There's bound to be a Bible lesson here somewhere.
Anyway... today perhaps I understand a little more about the value of Twitter as a quick communication tool among social-connected innovative types. I realized that it MIGHT be a great tool for quickly spreading an important message, and helping it to then spread virally further. I'll tweet a short message and cite a particular site to go to. And whoever believes it's important enough to pass it on, does so.... and adds their link at the bottom of the page being tweeted.
Example: Today I tweeted... "Tweet Others... http://cityreaching.pbwiki.com/Community+TV"... alerting them to the Community TV concept that just went LIVE... [and it's using national collaborative wiki so we could work TOGETHER on it.] If anyone cares, they can likewise pass it on, and add their link at the bottom, effectively endorsing the concept. AND... it's an indication of who are our most 'collaborative' types who like working together to reach our cities for Christ.
In fact... visit "TweetOthers.com" to follow the crowd to whatever site might be spreading virally at the moment. [You'd like to help?]
Oh, oh oh oh oh.... here's a Bible lesson...
"Jesus, thank you for taking the weight of my sins and letting me stand on your shoulders as the only way I could ever hope to reach heaven. May I someday learn and be able to emulate your self-sacrificial nature. Amen."
|
OPCFW_CODE
|
Asyncify your code. Everybody's doing it. (Chicks|Dudes)'ll dig it. It'll make you cool.
Pretty much everything I build these days is asynchronous in nature. In SoapBox products we are often waiting on some sort of IO to complete. We wait for XMPP data to be sent and received, database queries to complete, log files to be written, DNS servers to respond, .NET to negotiate Tls through a SslStream, and much more. Today I'll be talking about a recent walk down Asynchronous Lane: the AsynchronousProcessGate (if you don't like reading just download the package for source code goodness).
I ran into a problem while working on a new web application for Coversant. I needed to execute an extremely CPU- and IO-intensive process: creating and digitally signing a self-extracting compressed file -- AKA The Package Service. This had to happen in an external process, and it had to scale (this application is publicly available on our consumer-facing web site). Here's a basic sequence of the design I came up with:
Do you notice the large holes in the activation times? That's because we're asynchronous! The BeginCreatePackage web service method the page calls exits as soon as the BeginExecute method exits, which is right when the process starts. That means we're not tying up any threads in our .NET thread pools at any layer of our application while a task is executing. That's a Good Thing™.
At this point I'm used to writing highly asynchronous/threaded code. However, I still wouldn't call it easy. Why do it? I'd say there are three main reasons.
- To provide a smooth user experience. The last thing a developer wants is for his/her software to appear sluggish. There's nothing worse than opening Windows Explorer and watching your screen turn white (that application is NOT very asynchronous).
- To fully and most appropriately utilize the resources of the platform (Runtime/OS/Hardware). To scale vertically, you might call it.
- Because it makes you cool. AKA: To bill a lot more on consulting engagements.
Microsoft recommends two asynchronous design patterns for .NET developers exposing asynchronous interfaces. These can be found on various classes throughout the framework. The event-based pattern comes highly recommended from Microsoft and can be found all over new components they build (like the BackgroundWorker). Personally I think the event-based pattern is overrated. The hassle of managing events and not knowing if the completed event will even fire typically steers me away from this one. However, it is certainly easier for those who are new to the asynchronous world. This pattern is also quite useful in many situations in Windows Forms and ASP.NET applications, leaving the responsibility of the thread switching to the asynchronous implementation (the events are supposed to be raised in the thread/context that made the async request -- determined by the AsyncOperationManager). If you've ever used the ISynchronizeInvoke interface on a WinForms Control or manually done async ASP.NET pages, you can really appreciate the ease of use of this new pattern.
The second recommended pattern, and usually my preference, is called the IAsyncResult pattern. IAsyncResult and I have a very serious love/hate relationship. I've spent many days with my IM status reading "Busy - Asyncifying" due to this one. But, in the end, it produces a simple interface for performing asynchronous operations and a callback when the operation is complete (or maybe timed out or canceled). Typically you'll find IAsyncResult interfaces on the more "hard core" areas of the framework exposing operations such as Sockets, File IO, and streams in general. This is the pattern I used for the Asynchronous Process Gate in the Package Service.
The Package Service has a user interface (an AJAXified asynchronous ASP.NET 2.0 page) which calls an asynchronous web service. The web service calls another asynchronous class which wraps a few asynchronous operations through the AsynchronousProcessGate and other async methods (i.e. to register a new user account) and exposes a single IAsyncResult interface to the web service.
Confused yet? Read that last paragraph again and re-look at the sequence. In order to make this whole thing scale it had to be asynchronous or we'd be buying a whole rack of servers to support even a modest load. Also because of the nature of the asynchronous operation (high cpu/disk IO) it had to be configurably queued/throttled. I went through a few possible designs on paper. But in the end I chose to push it down as far as possible. The AsynchronousProcessGate, quite simply, only allows a set number of processes to execute simultaneously, the number of CPU's reported by System.Environment.ProcessorCount by default. It does this by exposing the IAsyncResult pattern for familiar consumption. The piece of magic used internally is something we came up with after writing a lot of asynchronous code: LazyAsyncResult<T>.
LazyAsyncResult<T> provides a generic implementation of IAsyncResult. It manages your state, your caller's state, and the completion events. It also uses Joe Duffy's LazyInit stuff for better performance (initializing the WaitHandle is relatively expensive and usually not needed).
Using the asynchronous process gate is straightforward if you're used to the Begin/End IAsyncResult pattern. You create an instance of the class and call BeginExecuteProcess with your ProcessStartInfo. When the process is complete you will get your AsyncCallback, or you can also wait on the IAsyncResult.WaitHandle that is returned from BeginExecuteProcess. You then call EndExecuteProcess and the instance of Process that was used is returned. If an exception occurred asynchronously, it will be thrown when you call EndExecuteProcess.

The Begin Code (the notepad launch call here is reconstructed to match the pattern described above):

static void StartProcesses()
{
    AsynchronousProcessGate g = new AsynchronousProcessGate();

    // Keep twice as many queued as we have CPUs.
    // For a real, CPU- or IO-intensive operation
    // you shouldn't do any throttling before the gate.
    // That's what the gate is for!
    if (g.PendingCount < g.AllowedInstances * 2)
    {
        g.BeginExecuteProcess(new ProcessStartInfo("notepad.exe"),
            ProcessCompleted, g);
    }
}

The End Code:

static void ProcessCompleted(IAsyncResult ar)
{
    AsynchronousProcessGate g = (AsynchronousProcessGate)ar.AsyncState;
    try
    {
        // EndExecuteProcess rethrows any asynchronous exception here.
        using (Process p = g.EndExecuteProcess(ar))
        {
            Console.WriteLine("Exited with code: " +
                p.ExitCode + ". " +
                g.PendingCount + " notepads pending.");
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine("Error (" + ex.GetType().Name + ") - " + ex.Message);
    }
}
Phew! After all that, the end result for SoapBox: a single self extracting digitally signed file someone can download. Oh, and a simple library you can use as an Asynchronous Process Gate! Enjoy. Look, another download link so you don't even have to scroll back up. How nice am I?
|
OPCFW_CODE
|
WIX installation fails when installing COM dll to GAC
I am creating an installer which will use a .NET COM component for our Access application. When I installed the COM dll to INSTALLDIR, it worked fine: I left it to WiX to do the COM registration by running heat to harvest both the dll and the tlb. But now we want to install the dll to the GAC, with only the tlb file installed in INSTALLDIR. Our target is that different versions of our software (it is OK to install them on the same machine) can use the same COM version, and that after uninstalling one version of our software, the others will still work (this can't be achieved when we install the dll into INSTALLDIR, am I right? If my way is wrong, please correct me).
Here comes the problem: in order to install it into the GAC, I guess I am supposed to add Assembly=".net" to the dll's File declaration. However, during the installation, I get this error:
"A problem was encountered in error handler: Automation error The system cannot find the file specified." This happens when I call one COM method (via the Access reference that we created) in the commit phase, i.e., after the COM is registered. Apparently my COM was not registered successfully, but I don't really know why. As I mentioned, the only change I made was to add Assembly=".net". Before that, the COM registration was OK and I was calling the method successfully.
Any help would be appreciated. Thanks!
This will not work as the GAC is for .NET-based DLLs and will not accept COM dlls. To see for yourself, try to drop the dll into the folder C:\Windows\assembly; you will get an error saying it doesn't have an assembly manifest.
What you should do is keep the Component GUID the same for all versions of your installer for that particular COM dll. This way Windows Installer will keep note of which applications are using it and only uninstall it once the last one is removed. I would also put that into its own installation folder, otherwise you will get an application installation folder left over when the first installed version is uninstalled.
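A minimal WiX sketch of a shared component with a pinned GUID (all IDs, names and the directory reference here are illustrative, not from the question; only the fixed Guid and the stable component layout matter):

```xml
<DirectoryRef Id="SharedComDir">
  <!-- Keep this Guid identical across all installer versions so Windows
       Installer reference-counts the shared COM dll and removes it only
       when the last product that uses it is uninstalled. -->
  <Component Id="SharedComDll" Guid="PUT-A-FIXED-GUID-HERE">
    <File Id="SharedComDllFile" Source="MyCom.dll" KeyPath="yes" />
  </Component>
</DirectoryRef>
```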
Another option is to separate it into its own installer and use a bootstrapper to install them together.
Thanks again. Regarding the COM registration: in order for Windows Installer to track the COM components, which GUIDs should I keep the same and which should I let WIX assign? As you mentioned, the Component GUID of the COM dll should always be the same. But there are some others: the one for the ClassInterface of my COM interface class (I assume I should keep them the same for different versions), and the ones for the tlb interfaces, which have the same interface name as the dll interfaces with a preceding underscore (_). Should I keep them the same as well?
Yes, anything related to the COM+ lib should stay the same. When you use heat to harvest the dll you can use the "-gg" switch and that will generate all the required GUIDs for you.
Yes, I know the -gg switch, and it is actually why I am asking: it generates a new GUID every time heat is run, which I guess is the opposite of "keeping GUIDs the same". Currently I am running heat every time my installer project is built. I remember somebody said heat should be run only once, or at least more statically, but they did create the MSBuild task HeatFile, which I am using, so heat will run every time the installer is built. Is it desirable to generate a new GUID in this case?
And I am using an XSL to modify the GUID to keep the DLL component's GUID always the same. So I can do that to other GUIDs as well, but it just doesn't sound like the right way in my opinion.
Yeah for the COM dll's I only ever run heat once and then save the wxs file in my project. In my case it will very rarely change and isn't worth setting up to run every build. The only things I use heat for on every build is things that change regularly, for example we package a third-party's database creation scripts and as these are released once a month I use heat to generate on each build.
Thanks for sharing the experience. I'll see what suits us the best. We don't separate the development and build for different developers, so I have to make the installer as simple as possible and everybody can build it.
No probs. In that case I would suggest coding everything in wxs files, heat is a bit confusing if you don't know what is going on, at least with static files people can see them! :)
|
STACK_EXCHANGE
|
from itertools import chain
import numpy as np
import pandas as pd
from pandas.api.types import union_categoricals
from ..progress import Progress
from ..result import QueryResult
class NumpyQueryResult(QueryResult):
"""
Stores query result from multiple blocks as numpy arrays.
"""
def store(self, packet):
block = getattr(packet, 'block', None)
if block is None:
return
# Header block contains no rows. Pick columns from it.
if block.num_rows:
if self.columnar:
self.data.append(block.get_columns())
else:
self.data.extend(block.get_rows())
elif not self.columns_with_types:
self.columns_with_types = block.columns_with_types
def get_result(self):
"""
:return: stored query result.
"""
for packet in self.packet_generator:
self.store(packet)
if self.columnar:
data = []
# Transpose to a list of columns, each column is list of chunks
for column_chunks in zip(*self.data):
# Concatenate chunks for each column
if isinstance(column_chunks[0], np.ndarray):
column = np.concatenate(column_chunks)
elif isinstance(column_chunks[0], pd.Categorical):
column = union_categoricals(column_chunks)
else:
column = tuple(chain.from_iterable(column_chunks))
data.append(column)
else:
data = self.data
if self.with_column_types:
return data, self.columns_with_types
else:
return data
class NumpyProgressQueryResult(NumpyQueryResult):
"""
Stores query result and progress information from multiple blocks.
Provides iteration over query progress.
"""
def __init__(self, *args, **kwargs):
self.progress_totals = Progress()
super(NumpyProgressQueryResult, self).__init__(*args, **kwargs)
def __iter__(self):
return self
def __next__(self):
while True:
packet = next(self.packet_generator)
progress_packet = getattr(packet, 'progress', None)
if progress_packet:
self.progress_totals.increment(progress_packet)
return (
self.progress_totals.rows, self.progress_totals.total_rows
)
else:
self.store(packet)
def get_result(self):
# Read all progress packets.
for _ in self:
pass
return super(NumpyProgressQueryResult, self).get_result()
class NumpyIterQueryResult(object):
"""
Provides iteration over returned data by chunks (streaming by chunks).
"""
def __init__(
self, packet_generator,
with_column_types=False):
self.packet_generator = packet_generator
self.with_column_types = with_column_types
self.first_block = True
super(NumpyIterQueryResult, self).__init__()
def __iter__(self):
return self
def __next__(self):
packet = next(self.packet_generator)
block = getattr(packet, 'block', None)
if block is None:
return []
if self.first_block and self.with_column_types:
self.first_block = False
rv = [block.columns_with_types]
rv.extend(block.get_rows())
return rv
else:
return block.get_rows()
|
STACK_EDU
|
Metal Oxide Varistor Identification / Replacement? Component marking IOE241 99P
Could anyone identify the component shown in the image? It is marked 'IOE241 99P'. It is ~12mm in diameter. I believe it is an MOV. The image shows the location on the Power PCB it was removed from. The power board is from a Brother Sewing Machine CS6000i (this machine draws 0.65A at 120V)
This component was destroyed when the board was connected to a 240V / 50Hz AC supply in error. The board expects 120V / 60Hz AC input. I have spent many hours searching for 'IOE241 99P' (and sub-strings of the text) but found nothing. If I cannot identify this component, is there some reasonable substitute component that would work in its place?
I am working my way through the board testing/replacing components as needed. There are no replacement boards available. (The component marked C101 in the image will be replaced too - it is a polypropylene capacitor Class X2 0.1uF - Okaya LE104. The component marked F101 is a fuse - this has already been replaced). Thanks!
The PCB symbol is a diac.
Hi serpentinite, The 10E241, a 240V MOV, has been damaged in carrying out its protective task of blowing the fuse. In normal circumstances it would have only quenched voltage spikes and lasted longer. I do hope there is no further damage.
Nowadays that symbol is used for DIACs, but it was the original MOV symbol I'm told. Not sure when it switched or how much overlap there was between the two. There are actually similarities between the two, in that at a certain voltage their resistance drops significantly, in both polarities.
Here is a paper on choosing a varistor - [https://m.littelfuse.com/~/media/electronics_technical/application_notes/varistors/littelfuse_selecting_a_littelfuse_varistor_application_note.pdf][1]
Littelfuse has a ton of stuff that will work for you!
Just pick one the same diameter in mm and rated for your proper 120VAC mains voltage. It will have roughly the same ability to absorb a transient (joules).
I would not be surprised at all if other things have been killed by the 240VAC though. Particularly, semiconductors such as high-voltage transistors or MOSFETs.
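As a rough sanity check on "rated for your proper 120VAC mains" (a rule of thumb of mine, not stated in the thread): the replacement MOV must only start clamping above the mains *peak* voltage, not the RMS value.

```python
import math

mains_rms = 120.0  # VAC, the board's intended supply
peak = mains_rms * math.sqrt(2)  # instantaneous line peak, about 170 V

# Rule of thumb (assumption): pick an MOV whose maximum continuous AC
# rating sits above the line RMS voltage with some margin (e.g. a
# 130 VAC-rated part for a 120 VAC line), so it clamps only on
# transients above the ~170 V peak.
```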
Thank you all for the timely advice. It helped me to determine the following:
The MOV '10E241' is a 10 mm diameter, 240 V, high-'E'nergy component. I found two datasheets for such a component:
https://cms.nacsemi.com/content/AuthDatasheets/WPRDD00172-76.pdf
https://www.mouser.com/datasheet/2/315/ERZ-E10%20Datasheet-1196708.pdf
I also found it available to buy at multiple sites, e.g.
https://www.mouser.com/ProductDetail/Panasonic/ERZ-E10E241?qs=%252B9%2Fcbd0IE0TBcvddNuha9A==
If I am in error, please comment, otherwise - thanks!!
Anytime, serpentinite!
|
STACK_EXCHANGE
|
Introduction: Hexagon Insect Hotel
I got the idea when a couple of small wild bees settled on my balcony. Unfortunately, they chose something that sooner or later I had to put away for space reasons. So I had to find a new home for the little bees and beetles.
The results after a short search at Thingiverse unfortunately did not satisfy me, so I decided to create something myself. I've done a few things with Tinkercad® before, but I've always wanted to try out the Codeblocks thing and it seemed like a good project for it.
You need a 3D printer, or at least access to one through a colleague, makerspace, the internet...
Tools I have used to clean the printed hotel:
- square rasp
- round rasp
- drill bits in different sizes
Step 1: Stacking Up the Code Blocks
I really played around with the code blocks for some time. The production of some cubes and cylinders in Tinkercad® Codeblocks is simple and has a scale / side length of 20 mm.
But creating a polygon and changing it parametrically was a real problem at first. The polygon object in Tinkercad® is a hexagon with a side length of 20 mm. The width or height can only be changed by calculating and then adjusting the scale of the object.
I had to calculate the incircle and the circumcircle radius, then the width and height, and everything backwards... Luckily for me, the calculation for a hexagon is not that complicated and everything is well documented (thanks to Wikipedia). :)
If someone really wants to know:
- We already know the side length(s) of 20mm, therefore the circumradius is s and the apothem is s * 1/2 * √3.
- If we have the apothem (for the holes) ri and want to know the side length s, we have to do it the other way round: s = ri * 2/3 * √3.
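The two conversions above can be sketched in a few lines of Python (the function names are mine, not from the Codeblocks project):

```python
import math

def apothem(s):
    # Apothem (inradius) of a regular hexagon with side length s:
    # ri = s * 1/2 * sqrt(3)
    return s * 0.5 * math.sqrt(3)

def side_from_apothem(ri):
    # Inverse: s = ri * 2/3 * sqrt(3)
    return ri * 2.0 / 3.0 * math.sqrt(3)

# A 20 mm side (the Tinkercad polygon default) gives an apothem of about
# 17.32 mm, and the round trip recovers the side length exactly.
```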
Step 2: Parameterise the Code
I wanted the insect hotel to be fully customizable by changing just a few parameters in the code. That was the real difficulty.
If everything were static, you could hard-code and tweak everything until it fits. Instead, I had to do it accurately without hard coding anything.
Step 3: Printing
I don't have my own printer, but luckily I have a nice colleague who prints the models for me. :)
The colleague told me that the wood filament is not ideal for such small holes because a thicker nozzle has to be used. But if someone has the perfect nozzle for the filament, you could try it.
It is best to use natural colors for printing. I have seen many bright blue or red 3d printed insect hotels on the internet, this is not ideal and does not attract insects either.
If someone also wants to try out my "generator", I have published it for free on Tinkercad® for everyone (not commercially). If you do: Please share the results!! :)
Step 4: Cleaning
It is really important to remove any loose filaments from the holes, as these would hurt the insects when they crawl in.
- Use the cutter to remove big chunks of filament
- Use the rasp to clean the large holes
- Also clean the top of the hexagons with the rasp
- The smaller holes can be cleaned with the drills
Step 5: Roundup
It was a little more difficult than I thought, but it was also fun to build. I have learned a lot and am optimistic that further projects will follow. ;)
Step 6: First Bookings After Less Than a Month :)
20 days after my small hotel opened for the insect world, the first wild bee has moved in. The sunny beginning of spring naturally favors this. :) :) :)
We will see what the month of May will bring. Hopefully a lot of sunshine and new guests in my first hexagonal hotel. ;)
Participated in the
3D Printed Contest
|
OPCFW_CODE
|
Compilation takes too long
Hi,
Currently trying to use SOAP for fine-tuning HF base model, but compilation takes too long. Is this expected?
Sample code
import jax
import jax.numpy as jnp
from transformers import FlaxBertForSequenceClassification, BertConfig
from transformers import BertTokenizer
from flax.training.train_state import TrainState
from transformers.models.bert.modeling_flax_bert import FlaxBertModel, FlaxBertModule
import optax
import soap
def create_train_state(model, learning_rate=1e-5):
"""Creates initial `TrainState` for the model."""
learning_rate_fn = optax.join_schedules(
schedules=[
optax.linear_schedule(0.0, 0.001, 1000),
optax.linear_schedule(0.001, 0.0, 5000 - 1000),
],
boundaries=[1000],
)
opt = soap.soap(learning_rate=learning_rate_fn, b1=0.9, b2=0.999, eps=1e-8, weight_decay=0.01, precondition_frequency=5)
module = FlaxBertModule(model.config)
state = TrainState.create(
apply_fn=module.apply,
params=model.params,
tx=opt
)
return state
@jax.jit
def train_step(state, batch):
"""Single training step."""
def loss_fn(params):
outputs = state.apply_fn(
{'params': params},
batch['input_ids'],
attention_mask=batch['attention_mask'],
deterministic=False,
rngs={'dropout': jax.random.PRNGKey(0)}
)
logits = outputs.last_hidden_state
loss = jnp.mean(logits)
return loss
grad_fn = jax.value_and_grad(loss_fn)
loss, grads = grad_fn(state.params)
# Update parameters
new_state = state.apply_gradients(grads=grads)
return new_state, loss
# Initialize model and tokenizer
config = BertConfig.from_pretrained(
'bert-base-uncased'
)
model = FlaxBertModel.from_pretrained(
'bert-base-uncased',
config=config
)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# Create dummy batch
text = ["This is a positive review!", "This is a negative review!", "This is a neutral review!", "This is a mixed review!", "This is a review!"]
labels = jnp.array([1, 0, 2, 3, 4])
# Tokenize
encoded = tokenizer(
text,
padding=True,
truncation=True,
max_length=128,
return_tensors='np'
)
# Create batch
batch = {
'input_ids': encoded['input_ids'],
'attention_mask': encoded['attention_mask'],
'labels': labels
}
# Initialize training state
state = create_train_state(model)
# Perform single training step
new_state, loss = train_step(state, batch)
print(f"Loss: {loss}")
It's expected that it takes a while to compile; each parameter update requires a loop over the preconditioners. It's possible that I can use https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.scan.html to reduce the compilation time, I'll take a look at that.
Thanks for providing some example code.
The long compile time makes parameter sweeps very difficult. Please consider using scan to avoid the loop.
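For reference, `jax.lax.scan` compiles its body function once and reuses it across iterations, instead of tracing every step of a Python loop into the XLA graph. Its carry/output semantics can be modeled in plain Python (illustrative only; the real `jax.lax.scan` operates on traced arrays, and real SOAP state is a pytree, not an int):

```python
def scan(f, init, xs):
    # Pure-Python model of jax.lax.scan's semantics: thread a carry
    # through f, collecting one output per step.
    carry = init
    ys = []
    for x in xs:
        carry, y = f(carry, x)
        ys.append(y)
    return carry, ys

def step(carry, x):
    # Stand-in for one per-preconditioner update.
    total = carry + x
    return total, total  # (new carry, per-step output)

final, partials = scan(step, 0, [1, 2, 3, 4])
# final == 10, partials == [1, 3, 6, 10]
```

In JAX the equivalent call is `jax.lax.scan(step, 0, jnp.arange(1, 5))`; because the body is compiled once, compile time stays roughly constant as the number of scanned steps grows.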
|
GITHUB_ARCHIVE
|
Presto follows the SQL Standard faithfully. We extend it only when it is well justified,
we strive to never break it and we always prefer the standard way of doing things.
There was one situation where we stumbled, though. We had a non-standard way of limiting
query results with
LIMIT n without implementing the standard way of doing that first.
We have corrected that, adding ANSI SQL way of limiting query results, discarding initial
results and – a hidden gem – retaining initial results in case of ties.
Limiting query results #
Probably everyone using relational databases knows the
LIMIT n syntax for limiting query
results. It is supported by e.g. MySQL, PostgreSQL and many more SQL engines following
their example. It is so common that one could think that
LIMIT n is the standard way
of limiting the query results. Let’s have a look at how various popular SQL engines
provide this feature.
- DB2, MySQL, MariaDB, PostgreSQL, Redshift, MemSQL, SQLite and many others provide the ... LIMIT n syntax.
- SQL Server provides SELECT TOP n ... syntax.
- Oracle provides ... WHERE ROWNUM <= n syntax.
And what does the SQL Standard say?
SELECT * FROM my_table FETCH FIRST n ROWS ONLY
If we look again at the database systems mentioned above, it turns out many of them support the standard syntax too: Oracle, DB2, SQL Server and PostgreSQL (although that’s not documented currently).
And Presto? Presto has supported LIMIT n since 2012. In Presto 310, we also added FETCH FIRST n ROWS ONLY support.
Let’s have a look beyond the limits.
Tie break #
FETCH FIRST n ROWS ONLY syntax is way more verbose than the short
LIMIT n syntax Presto
always supported (and still does). However, it is also more powerful: it allows selecting rows “top n,
ties included”. Consider a case where you want to list top 3 students with highest score on an exam.
What happens if the 3rd, 4th and 5th persons have equal score? Which
one should be returned? Instead of getting an arbitrary (and indeterminate) result you can use
FETCH FIRST n ROWS WITH TIES syntax:
SELECT student_name, score FROM student s JOIN exam_result e ON s.id = e.student_id ORDER BY score FETCH FIRST 3 ROWS WITH TIES
FETCH FIRST n ROWS WITH TIES clause retains all rows with equal values of the ordering keys (the
ORDER BY clause) as
the last row that would be returned by the
FETCH FIRST n ROWS ONLY clause.
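The WITH TIES semantics can be illustrated with a small Python sketch (a hypothetical helper, reusing the students-and-scores example above and sorting top scores first):

```python
def fetch_first_with_ties(rows, n, key):
    # Emulate FETCH FIRST n ROWS WITH TIES: take the first n rows in
    # sort order, then keep any further rows whose ordering key equals
    # that of the n-th row.
    rows = sorted(rows, key=key, reverse=True)  # highest score first
    if len(rows) <= n:
        return rows
    out = list(rows[:n])
    cutoff = key(rows[n - 1])
    i = n
    while i < len(rows) and key(rows[i]) == cutoff:
        out.append(rows[i])
        i += 1
    return out

results = [("alice", 90), ("bob", 85), ("carol", 80),
           ("dave", 80), ("erin", 75)]
top3 = fetch_first_with_ties(results, 3, key=lambda r: r[1])
# Four rows come back: carol and dave tie on the 3rd-place score of 80.
```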
Per the SQL Standard, the
FETCH FIRST n ROWS ONLY clause can be prepended with
OFFSET m, to skip
m initial rows.
In such a case, it makes sense to use the FETCH NEXT ... variant of the clause – it's allowed both with and without OFFSET, but definitely looks better with that clause.
SELECT student_name, score FROM student s JOIN exam_result e ON s.id = e.student_id ORDER BY score OFFSET 5 FETCH NEXT 3 ROWS WITH TIES
As an extension to SQL Standard, and for the brevity of this syntax, we also allow
SELECT student_name, score FROM student s JOIN exam_result e ON s.id = e.student_id ORDER BY score OFFSET 5 LIMIT 3
Concluding notes #
FETCH FIRST ... ROWS ONLY,
FETCH FIRST ... WITH TIES and
OFFSET are powerful and very useful clauses
that come especially handy when writing ad-hoc queries over big data sets. They offer certain syntactic freedom beyond
what is described here, so check out documentation of OFFSET Clause and
LIMIT or FETCH FIRST Clauses for all the options.
Since semantics of these clauses depend on query results being well ordered, they are best used with
ORDER BY that
defines proper ordering. Without proper ordering the results are arbitrary (except for
WITH TIES) which may or may
not be a problem, depending on the use case.
For scheduled queries, or queries that are part of some workflow (as opposed to ad-hoc), we recommend using query
predicates (where relevant) instead of
OFFSET. Read more at
|
OPCFW_CODE
|
from app import redis_instance
from redis import RedisError
class CMS(object):
hash_key = 'yacms_settings'
@classmethod
def get(cls, key=None):
if key:
# safe to assume it will only return one value because when adding fields to this hash,
# I never add more than one (I hope)
return redis_instance.hmget(cls.hash_key, key)[0].decode()
return redis_instance.hgetall(cls.hash_key)
@classmethod
def set(cls, field, value):
return redis_instance.hset(cls.hash_key, field, value)
"""
when creating a CMS object, I populate it with a form's fields (and values). to actually save those values in
the redis instance, I iterate over all the attributes of this object and save each attribute in redis.
"""
def save(self):
try:
for set_attr in self.__dict__:
if set_attr not in ['submit']:
self.set(set_attr, getattr(self, set_attr))
msg, cat = 'Settings saved successfully', 'success'
except RedisError:
msg, cat = 'Error saving settings', 'error'
return msg, cat
|
STACK_EDU
|
from torch import Tensor
from torch.autograd import Variable
from torch.optim import Adam
from .networks import MLPNetwork, policy, critic
# from .networks import policy_orca as policy #For orca one may use this network
# from .networks import critic_orca as critic #For orca one may use this network
from .misc import hard_update, gumbel_softmax, onehot_from_logits, soft_update
from .noise import OUNoise
import torch
MSELoss = torch.nn.MSELoss()
class MADDPG(object):
"""
General class for DDPG agents (policy, critic, target policy, target
critic, exploration noise)
"""
def __init__(self, num_in_pol, num_out_pol, num_in_critic,
lr=0.01, discrete_action=True, agent_i=1):
"""
Inputs:
num_in_pol (int): number of dimensions for policy input
num_out_pol (int): number of dimensions for policy output
num_in_critic (int): number of dimensions for critic input
"""
self.policy = policy(num_in_pol,num_out_pol)
self.critic = critic(num_in_critic,2*num_out_pol)
self.target_policy = policy(num_in_pol,num_out_pol)
self.target_critic = critic(num_in_critic,2*num_out_pol) # this take state and action dimension
self.agent_i = agent_i -1
hard_update(self.target_policy, self.policy)
hard_update(self.target_critic, self.critic)
# self.critic.load_state_dict(torch.load("critic"+str(agent_i)+"_.pth"))
# self.policy.load_state_dict(torch.load("agent"+str(agent_i)+"_.pth"))
# self.target_critic.load_state_dict(torch.load("critic"+str(agent_i)+"_.pth"))
# self.target_policy.load_state_dict(torch.load("agent"+str(agent_i)+"_.pth"))
self.policy_optimizer = Adam(self.policy.parameters(), lr=lr)
self.critic_optimizer = Adam(self.critic.parameters(), lr=lr)
self.tau = 0.01
self.gamma=0.95
if not discrete_action:
self.exploration = OUNoise(num_out_pol)
else:
self.exploration = 0.3 # epsilon for eps-greedy
self.discrete_action = discrete_action
def reset_noise(self):
if not self.discrete_action:
self.exploration.reset()
def scale_noise(self, scale):
if self.discrete_action:
self.exploration = scale
else:
self.exploration.scale = scale
def step(self, obs, explore=False):
"""
Take a step forward in environment for a minibatch of observations
Inputs:
obs (PyTorch Variable): Observations for this agent
explore (boolean): Whether or not to add exploration noise
Outputs:
action (PyTorch Variable): Actions for this agent
"""
action = self.policy(obs)
# print('after policy',action)
if self.discrete_action:
if explore:
action = gumbel_softmax(action, hard=True)
# print('after gumbel',action)
else:
action = onehot_from_logits(action)
else: # continuous action
if explore:
action += 0.3*torch.randn(action.shape)#Variable(Tensor(self.exploration.noise()),requires_grad=False)
action = action.clamp(-1, 1)
return action
def get_params(self):
return {'policy': self.policy.state_dict(),
'critic': self.critic.state_dict(),
'target_policy': self.target_policy.state_dict(),
'target_critic': self.target_critic.state_dict(),
'policy_optimizer': self.policy_optimizer.state_dict(),
'critic_optimizer': self.critic_optimizer.state_dict()}
def load_params(self, params):
self.policy.load_state_dict(params['policy'])
self.critic.load_state_dict(params['critic'])
self.target_policy.load_state_dict(params['target_policy'])
self.target_critic.load_state_dict(params['target_critic'])
self.policy_optimizer.load_state_dict(params['policy_optimizer'])
self.critic_optimizer.load_state_dict(params['critic_optimizer'])
def get_policy(self):
return self.target_policy, self.policy
def get_critic(self):
return self.critic
def update(self, sample, oppo_target_policy, oppo_policy, parallel=False, logger=None,iter=5):
"""
Update parameters of agent model based on sample from replay buffer
Inputs:
sample: tuple of (observations, actions, rewards, next
observations, and episode end masks) sampled randomly from
the replay buffer. Each is a list with entries
corresponding to each agent
parallel (bool): If true, will average gradients across threads
logger (SummaryWriter from Tensorboard-Pytorch):
If passed in, important quantities will be logged
In the code below, discrete is adapted for soccer and continuous is for CarRacing
"""
obs, acs, rews, next_obs, dones = sample
self.critic_optimizer.zero_grad()
# if self.alg_types[agent_i] == 'MADDPG':
if self.discrete_action: # one-hot encode action
if self.agent_i ==0:
all_trgt_acs = [onehot_from_logits(pi(nobs)) for pi, nobs in
zip([self.target_policy,oppo_target_policy], next_obs)]
else:
all_trgt_acs = [onehot_from_logits(pi(nobs)) for pi, nobs in
zip([oppo_target_policy,self.target_policy], next_obs)]
# all_trgt_acs = [onehot_from_logits(pi(nobs)) for pi, nobs in
# zip([self.target_policy,oppo_target_policy], next_obs)]
else:
if self.agent_i ==0:
all_trgt_acs = [pi(nobs) for pi, nobs in
zip([self.target_policy,oppo_target_policy], next_obs)]
else:
all_trgt_acs = [pi(nobs) for pi, nobs in
zip([oppo_target_policy,self.target_policy], next_obs)]
# all_trgt_acs = [pi(nobs) for pi, nobs in zip(self.target_policy,
# next_obs)]
trgt_vf_in = torch.cat((*next_obs, *all_trgt_acs), dim=1)
if self.discrete_action:
target_value = (rews[self.agent_i].view(-1, 1) + self.gamma *
self.target_critic(trgt_vf_in) *
(1 - dones[self.agent_i].view(-1, 1))) #change after
else:
target_value = (rews[self.agent_i].view(-1, 1) + self.gamma *self.target_critic(trgt_vf_in)*(dones.view(-1, 1)))
vf_in = torch.cat((*obs, *acs), dim=1)
actual_value = self.critic(vf_in)
vf_loss = MSELoss(actual_value, target_value.detach())
vf_loss.backward()
torch.nn.utils.clip_grad_norm_(self.critic.parameters(), 0.5)
self.critic_optimizer.step()
self.policy_optimizer.zero_grad()
if self.discrete_action:
curr_pol_out = self.policy(obs[self.agent_i])
curr_pol_vf_in = gumbel_softmax(curr_pol_out, hard=True)
else:
curr_pol_out = self.policy(obs[self.agent_i])
curr_pol_vf_in = curr_pol_out
all_pol_acs = []
if self.discrete_action:
if self.agent_i == 0:
all_pol_acs.append(curr_pol_vf_in)
all_pol_acs.append(onehot_from_logits(oppo_policy(obs[1])))
else:
all_pol_acs.append(onehot_from_logits(oppo_policy(obs[0])))
all_pol_acs.append(curr_pol_vf_in)
else:
if self.agent_i == 0:
all_pol_acs.append(curr_pol_vf_in)
all_pol_acs.append(oppo_policy(obs[1]))
else:
all_pol_acs.append(oppo_policy(obs[0]))
all_pol_acs.append(curr_pol_vf_in)
#
# for i, ob in zip(range(self.nagents), obs):
# if i == self.agent_i-1:
# all_pol_acs.append(curr_pol_vf_in)
# elif self.discrete_action:
# all_pol_acs.append(onehot_from_logits(self.policy(ob)))
# else:
# all_pol_acs.append(self.policy(ob))
vf_in = torch.cat((*obs, *all_pol_acs), dim=1)
pol_loss = -self.critic(vf_in).mean()
pol_loss += (curr_pol_out**2).mean() * 1e-3
pol_loss.backward()
total_norm=0
for p in self.policy.parameters():
param_norm = p.grad.data.norm(2)
total_norm += param_norm.item() ** 2
total_norm = total_norm ** (1. / 2)
torch.nn.utils.clip_grad_norm_(self.policy.parameters(), 0.5)
self.policy_optimizer.step()
def update_all_targets(self):
"""
Update all target networks (called after normal updates have been
performed for each agent)
"""
soft_update(self.target_critic, self.critic, self.tau)
soft_update(self.target_policy, self.policy, self.tau)
|
STACK_EDU
|
Day 1: Introduction to Python, Installation, and Configuration for DevOps
Getting Started with Python for DevOps: Day 1
Table of contents
Welcome to the first day of our Python for DevOps blog series! Over the next few days, we will explore how Python, a versatile and powerful programming language, can be a valuable tool in the world of DevOps. In today's installment, we'll lay the foundation by introducing Python, guiding you through its installation and configuration, and helping you write your first Python program.
🔶 Introduction to Python and Its Role in DevOps 🔶
🔸 What is Python?
Python is a high-level, interpreted programming language known for its simplicity and readability. It is widely used in various fields, including web development, data analysis, artificial intelligence, and, importantly, DevOps. Python's straightforward syntax makes it an excellent choice for automating repetitive tasks, managing infrastructure, and building tools for continuous integration and deployment.
🔸 Python in DevOps
Python plays a crucial role in the DevOps ecosystem. DevOps is all about streamlining and automating processes, and Python is a perfect fit for this purpose. With Python, DevOps professionals can create scripts, build automation tools, and develop applications to manage infrastructure, monitor systems, and facilitate the continuous delivery of software.
🔶 Installing Python and Setting Up a Development Environment 🔶
Before you can start using Python for DevOps, you need to install Python and set up a development environment. Follow these steps to get started:
🔸 Installation on Windows
Visit the official Python website (https://www.python.org/downloads/).
Download the latest Python installer for Windows.
Run the installer, making sure to check the "Add Python to PATH" option during installation.
🔸 Installation on macOS
- macOS typically comes with Python pre-installed. Open a terminal and type python3 to check the installed version. If Python is not installed, you can download the macOS installer from the official Python website.
🔸 Installation on Linux
- Most Linux distributions come with Python pre-installed. You can use the terminal to check if Python is installed and which version is available. If needed, you can install Python using your package manager (e.g., apt for Ubuntu).
🔶 Setting Up a Development Environment 🔶
Now that you have Python installed, it's time to set up a development environment. You can choose between text editors like Visual Studio Code, and PyCharm, or command-line editors like Nano or Vim. Some of these IDEs offer built-in support for Python and DevOps-related extensions and plugins.
🔶 Writing our First Python Program 🔶
Let's dive into writing our very first Python program. We'll keep it simple and create a classic "Hello, World!" script.
Open your chosen text editor or integrated development environment.
Create a new Python file with the extension ".py." For example, you can name it "hello.py."
In the file, type the following code:
# This is a Python program
print("Hello, World!")
- Save the file.
To run your program:
Open a terminal or command prompt.
Navigate to the directory where you saved your "hello.py" file.
Type python hello.py and press Enter.
You should see "Hello, World!" printed on the screen. Congratulations, you've just written and executed your first Python program!
This marks the end of our first day in the Python for DevOps series. In the coming days, we will explore more advanced Python concepts and their applications in the DevOps world. Stay tuned, and keep practicing your Python skills!
Note: I am following Abhishek Verraamalla's YouTube playlist for learning.
Happy Learning :)
Stay in the loop with my latest insights and articles on cloud ☁️ and DevOps ♾️ by following me on Hashnode, LinkedIn (https://www.linkedin.com/in/chandreshpatle28/), and GitHub (https://github.com/Chandreshpatle28).
Thank you for reading! Your support means the world to me. Let's keep learning, growing, and making a positive impact in the tech world together.
Did you find this article valuable?
Support CHANDRESH PATLE by becoming a sponsor. Any amount is appreciated!
|
OPCFW_CODE
|
|Reviews for Death Star Surprise|
| WPS82 chapter 2 . 7/17/2019
If the first was a flesh wound; were the others just scratches? I'm glad you followed up so quickly!
| Brievel chapter 2 . 7/17/2019
Uh, maybe the wrong person to nominate, Vader...
| moonstarwhite chapter 2 . 7/17/2019
I just love Padmé’s reaction so much!XD
| Mike chapter 2 . 7/17/2019
Just a flesh wound? LOL. Monty Python and the Holy Grail. Killer bunnies next? Nice chapter. I'm sure you can squeeze out more than one chapter. LOL! Good job!
| Wfest chapter 2 . 7/17/2019
Monty Python and the Holy Grail! (The fight with the Black Knight). Hilarious! Just like your very enjoyable stories! I especially love the chorused finishing of sentences by Luke and Leia.
| Nightwing5123 chapter 2 . 7/17/2019
I like the way you had Vader make his wife the Empr.
| Me chapter 2 . 7/17/2019
| HannahKathleen chapter 2 . 7/17/2019
Haha, Princess Bride! Cool chapter! Love Padmé's surprise!
| Morriganna chapter 2 . 7/17/2019
HAHA! Vader pulled a fast one there! He did all for Padme! So sweet!
| kelwin chapter 2 . 7/17/2019
another good fic. only problem is i want more. more background more moree more lol. i must say i enjoy the different ways you have them all meeting up.
| Jarjaxle chapter 2 . 7/17/2019
Empress Padme...Anakin...Padme is going to Skin you alive for this! XD
| sunmoonwindandstars chapter 2 . 7/17/2019
Yes, yes, yes, yesssss! More! This is great! :D
| brandonack96 chapter 2 . 7/17/2019
I bet padme wasn't expecting to become empress
| Joeclone chapter 2 . 7/17/2019
Sounds so familiar...Monty Python?
| TheVampireStrahd chapter 1 . 7/13/2019
This was awesome!
Really loved that bit with Leia giving Vader a nasty Force push and her choking of Tarkin.
The constant "he is more machine than man...twisted and evil" from Obi Wan was perfect. The fact that Leia and Luke would just parrot those words was hilarious.
|
OPCFW_CODE
|
package anonymize;
import java.io.BufferedInputStream;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.text.Normalizer;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;
/**
* Class that processes XML files.
* @author kajiyama
*
*/
public class StAXParser {
private static boolean inTag = false;
private static String tagName = "";
private static int lineNum = 0;
/**
* Parses the XML with StAX and writes only the character data to a text file.
* @param inputFileName : File
* @param outputFile : File
* @throws XMLStreamException
* @throws IOException
* @author kajiyama
*/
public static void parse(File inputFileName, File outputFile) throws XMLStreamException, IOException {
// The first argument specifies the text data to be used as training data
String tagName = "text"; // name of the tag whose contents are extracted
// Preparation for using StAX
XMLInputFactory factory = XMLInputFactory.newInstance();
BufferedInputStream stream = new BufferedInputStream(new FileInputStream(inputFileName));
XMLStreamReader reader = factory.createXMLStreamReader(stream, "UTF-8");
BufferedWriter bw = new BufferedWriter(new FileWriter(outputFile, true));
String word;
for (; reader.hasNext(); reader.next()) {
int eventType = reader.getEventType();
// detect the start of the target tag
if (eventType == XMLStreamConstants.START_ELEMENT) {
// System.out.println(reader.getLocalName());
if (reader.getLocalName().equals(tagName)) {
inTag = true;
} else if (inTag) {
tagName = reader.getLocalName();
System.out.println("tagName = " + tagName);
}
}
// detect the end of the target tag
if (eventType == XMLStreamConstants.END_ELEMENT) {
if (reader.getLocalName().equals(tagName)) {
inTag = false;
}
}
if (eventType == XMLStreamConstants.CHARACTERS) {
word = reader.getText();
if (inTag) {
bw.write(word);
}
}
}
bw.close();
reader.close();
System.out.println("Finished writing to " + outputFile); // confirm completion
}
public static void parse(File inputFileName, File outputFileName, boolean print) throws XMLStreamException, IOException {
// The first argument specifies the text data to be used as training data
String tagName = "text"; // name of the tag whose contents are extracted
// Preparation for using StAX
XMLInputFactory factory = XMLInputFactory.newInstance();
BufferedInputStream stream = new BufferedInputStream(new FileInputStream(inputFileName));
XMLStreamReader reader = factory.createXMLStreamReader(stream, "UTF-8");
BufferedWriter bw = new BufferedWriter(new FileWriter(outputFileName, true));
String word;
for (; reader.hasNext(); reader.next()) {
int eventType = reader.getEventType();
// detect the start of the target tag
if (eventType == XMLStreamConstants.START_ELEMENT) {
// System.out.println(reader.getLocalName());
if (reader.getLocalName().equals(tagName)) {
inTag = true;
}
}
// detect the end of the target tag
if (eventType == XMLStreamConstants.END_ELEMENT) {
if (reader.getLocalName().equals(tagName)) {
inTag = false;
}
}
if (eventType == XMLStreamConstants.CHARACTERS) {
word = reader.getText();
if (inTag) {
bw.write(word);
}
}
}
bw.close();
reader.close();
if (print) {
System.out.println("Finished writing to " + outputFileName); // confirm completion
}
}
/**
* Parses the XML with StAX, writing the character data to a text file along with the section titles.
* @param inputFileName : File
* @param outputFileName : File
* @throws XMLStreamException
* @throws IOException
* @author kajiyama
*/
public static void parseWithTitle(File inputFileName, File outputFileName) throws XMLStreamException, IOException {
// The first argument specifies the text data to be used as training data
boolean inTextTag = false;
boolean inTitleTag = false;
String textTag = "text"; // name of the tag whose contents are extracted
String titleTag = "title";
// Preparation for using StAX
XMLInputFactory factory = XMLInputFactory.newInstance();
BufferedInputStream stream = new BufferedInputStream(new FileInputStream(inputFileName));
XMLStreamReader reader = factory.createXMLStreamReader(stream, "UTF-8");
BufferedWriter bw = new BufferedWriter(new FileWriter(outputFileName, true));
String textWords;
boolean first = true;
for (; reader.hasNext(); reader.next()) {
int eventType = reader.getEventType();
// detect the start of the target tags
if (eventType == XMLStreamConstants.START_ELEMENT) {
// System.out.println(reader.getLocalName());
if (reader.getLocalName().equals(textTag)) {
inTextTag = true;
if (!first) {
bw.newLine();
} else {
first = false;
}
}
if (reader.getLocalName().equals(titleTag)) {
inTitleTag = true;
}
}
// detect the end of the target tags
if (eventType == XMLStreamConstants.END_ELEMENT) {
if (reader.getLocalName().equals(textTag)) {
inTextTag = false;
}
if (reader.getLocalName().equals(titleTag)) {
inTitleTag = false;
}
}
if (eventType == XMLStreamConstants.CHARACTERS) {
textWords = reader.getText();
if (inTextTag) {
textWords = Normalizer.normalize(textWords, Normalizer.Form.NFKC);
textWords = textWords.replaceAll("^\\s+$", "");
textWords = textWords.replaceAll("\"\\d{10}$", "");
textWords = textWords.replaceAll("^\r\n+$", "");
textWords = textWords.replaceAll("^\t{4,}$", "");
bw.write(textWords);
}
if (inTitleTag) {
// section headings: Problem / Chief complaint / Findings / Current disease (diagnosis) / Plan
if (textWords.equals("問題") || textWords.equals("主訴") || textWords.equals("所見") || textWords.equals("現疾患(診断内容)") || textWords.equals("計画") ) {
bw.newLine();
bw.write("\""+ textWords + "\"");
bw.newLine();
}
}
}
}
bw.newLine();
bw.close();
reader.close();
}
public static void parseWithTime(File inputFileName, File outputFileName) throws XMLStreamException, IOException {
// The first argument specifies the text data to be used as training data
boolean inTextTag = false;
String textTag = "text"; // name of the tag whose contents are extracted
String timeTag = "effectiveTime";
// Preparation for using StAX
XMLInputFactory factory = XMLInputFactory.newInstance();
BufferedInputStream stream = new BufferedInputStream(new FileInputStream(inputFileName));
XMLStreamReader reader = factory.createXMLStreamReader(stream, "UTF-8");
BufferedWriter bw = new BufferedWriter(new FileWriter(outputFileName, true));
String textWords;
boolean first = true;
for (; reader.hasNext(); reader.next()) {
int eventType = reader.getEventType();
// detect the start of the target tags
if (eventType == XMLStreamConstants.START_ELEMENT) {
if (reader.getLocalName().equals(textTag)) {
inTextTag = true;
if (!first) {
bw.newLine();
} else {
first = false;
}
}
if (reader.getLocalName().equals(timeTag)) {
if (reader.getAttributeValue(0) != null && !reader.getAttributeValue(0).isEmpty()) {
bw.write("timeValue==" + reader.getAttributeValue(0));
bw.newLine();
}
}
}
// detect the end of the target tag
if (eventType == XMLStreamConstants.END_ELEMENT) {
if (reader.getLocalName().equals(textTag)) {
inTextTag = false;
}
}
if (eventType == XMLStreamConstants.CHARACTERS) {
textWords = reader.getText();
if (inTextTag) {
textWords = Normalizer.normalize(textWords, Normalizer.Form.NFKC);
// textWords = textWords.replaceAll("^\\s+$", "");
textWords = textWords.replaceAll("\"\\d{10}$", ""); // patient id
textWords = textWords.replaceAll("\"\\d{8}$", ""); // patient id
textWords = textWords.replaceAll("\\R", "");
textWords = textWords.replaceAll("\t{5,}", "");
bw.write(textWords);
}
}
}
bw.newLine();
bw.close();
reader.close();
}
}
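The methods above all follow the same pattern: track whether the cursor is inside the target element and emit only the CHARACTERS events seen while inside it. A self-contained sketch of that idea (a hypothetical `StAXDemo` class, not part of the original code, operating on an in-memory string instead of a file) might look like this:

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

public class StAXDemo {
    // Collect only the character data that appears inside <text> elements,
    // mirroring the in-tag flag used by StAXParser.parse.
    public static String extractText(String xml) throws XMLStreamException {
        XMLStreamReader reader = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        StringBuilder sb = new StringBuilder();
        boolean inText = false;
        while (reader.hasNext()) {
            int event = reader.next();
            if (event == XMLStreamConstants.START_ELEMENT
                    && reader.getLocalName().equals("text")) {
                inText = true;
            } else if (event == XMLStreamConstants.END_ELEMENT
                    && reader.getLocalName().equals("text")) {
                inText = false;
            } else if (event == XMLStreamConstants.CHARACTERS && inText) {
                sb.append(reader.getText());
            }
        }
        reader.close();
        return sb.toString();
    }

    public static void main(String[] args) throws XMLStreamException {
        String xml = "<doc><title>note</title><text>hello world</text></doc>";
        System.out.println(extractText(xml)); // prints "hello world"
    }
}
```

Because StAX is a pull parser, the whole document never needs to fit in memory, which is why the class above can process large XML files with a constant-size state (a few booleans and the writer).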
|
STACK_EDU
|
Download GATE CS / IT Book By Kanodia Free Download
GATE CS / IT Book By Kanodia Part 1 to 4
|GATE CS / IT Book By Kanodia PART 1||Download|
|GATE CS / IT Book By Kanodia PART 2||Download|
|GATE CS / IT Book By Kanodia PART 3||Download|
|GATE CS / IT Book By Kanodia PART 4||Download|
THEORY OF COMPUTATION: Regular languages and finite automata, Context-free languages and Push-down automata, Recursively enumerable sets and Turing machines, Undecidability, NP-completeness.
DIGITAL LOGIC: Logic functions, Minimization, Design, and Synthesis of Combinational and Sequential circuits, Number representation and Computer arithmetic (fixed and floating point).
COMPUTER ORGANIZATION AND ARCHITECTURE: Machine instructions and Addressing modes, ALU and data-path, CPU control design, Memory interface, I/O interface (Interrupt and DMA mode), Instruction pipelining, Cache and main memory, Secondary storage.
PROGRAMMING: Functions, Recursion, Parameter passing, Scope, Binding, Abstract data types, Arrays.
DATA STRUCTURES AND ALGORITHMS: Analysis, Asymptotic notation, Notions of space and Time complexity, Worst and Average case analysis, Design, Greedy approach, Dynamic programming, Divide and Conquer, Tree and Graph traversals, Connected components, Spanning trees, Shortest paths, Hashing, Sorting, Searching.
COMPILER DESIGN: Lexical analysis, Parsing, Syntax directed translation, Runtime environments, Intermediate, and target code generation, Basics of code optimization.
OPERATING SYSTEM: Processes, Threads, Inter-process communication, Concurrency, Synchronization, Deadlock, CPU scheduling, Memory management and Virtual memory, File systems, I/O systems, Protection and Security.
DATABASE: ER-model, Relational model (relational algebra, tuple calculus), Database design (integrity constraints, normal forms), Query languages (SQL), File structures (sequential files, indexing, B and B+ trees), Transactions and Concurrency control.
INFORMATION SYSTEMS AND SOFTWARE ENGINEERING: Information gathering, Requirement and Feasibility Analysis, Data flow diagrams, Process specifications, Input/Output design, Process life cycle, Planning and Managing the project, Design, Coding, Testing, Implementation, Maintenance.
COMPUTER NETWORKS: ISO/OSI stack, LAN technologies (Ethernet, Token ring), Flow and Error control techniques, Routing algorithms, Congestion control, TCP/UDP and Sockets, IP(v4), Application layer protocols (ICMP, DNS, SMTP, POP, FTP, HTTP), Basic concepts of Hubs, Switches, Gateways, and Routers.
ENGINEERING MATHEMATICS: Mathematical Logic: Propositional Logic, First Order Logic. Probability: Conditional Probability; Mean, Median, Mode, and Standard Deviation; Random Variables; Distributions: Uniform, Normal, Exponential, Poisson, Binomial. Set Theory & Algebra: Sets, Relations, Functions, Groups, Partial Orders, Lattice, Boolean Algebra. Combinatorics: Permutations, Combinations, Counting, Summation, Generating Functions, Recurrence Relations, Asymptotics. Graph Theory: Connectivity, Spanning Trees, Cut Vertices & Edges, Covering, Matching, Independent Sets, Colouring, Planarity, Isomorphism. Linear Algebra: Algebra of Matrices, Determinants, Systems of Linear Equations, Eigen Values and Eigen Vectors. Numerical Methods: LU Decomposition for Systems of Linear Equations, Numerical Solutions of Non-Linear Algebraic Equations by Secant, Bisection and Newton-Raphson Methods, Numerical Integration by Trapezoidal and Simpson’s Rules. Calculus: Limit, Continuity & Differentiability, Mean Value Theorems, Theorems of Integral Calculus, Evaluation of Definite & Improper Integrals, Partial Derivatives, Total Derivatives, Maxima & Minima.
Disclaimer: This website is not the original publisher of this book. The PDF was downloaded from another website and was already available on the internet. All rights are reserved to the original publisher of the book.
|
OPCFW_CODE
|
const
{ sum, length, values } = require('portable-fp'),
expect = require('chai').expect;
// curryPermutations :: [1, 2, 3] -> [ [[1], [2], [3]], [[1, 2], [3]], [[1], [2, 3]], [[1, 2, 3]] ]
function curryPermutations(args) {
if(args.length === 1) return [[[args[0]]]];
const
last = args[args.length-1],
bases = curryPermutations(args.slice(0, -1)),
appendNewSet = arr => arr.concat([[last]]),
extendLastSet = arr => (arr[arr.length-1] = arr[arr.length-1].concat([last]), arr);
return []
.concat(bases.map(appendNewSet))
.concat(bases.map(extendLastSet));
}
// remainder :: [[a]] -> n
function remainder(argSet) {
return argSet.map(args => args.length).reduce((a,b) => a+b);
}
// testCurrying :: fn -> [arg] -> * -> testImplementation
function testCurrying(fn, args, result) {
const
argSets = curryPermutations(args); //.filter(arr => arr.length > 1);
return function() {
const partials = [];
argSets.map(function(copy) {
const set = copy.slice();
let step = fn;
while(set.length > 1) {
partials.push(step = step.apply(null, set.shift()));
expect(step, 'arity matching remaining args').lengthOf(remainder(set));
// const nextArgs = set.shift();
// console.log('adding args', nextArgs);
// step = step.apply(null, nextArgs)
// console.log('step', step.toString());
// partials.push(step);
}
const lastArgs = set.shift();
const end = step.apply(null, lastArgs);
if(typeof result == 'function') return result(end);
if(result !== undefined) expect(end, 'end result').eql(result);
else expect(end, 'end result').not.a('function');
//console.log('executed');
});
// We should have accumulated a partial every time we stepped through currying without completing
expect(partials).lengthOf(sum(argSets.map(length)) - argSets.length);
partials.map(fn => expect(fn).a('function'));
//console.log('done');
}
}
describe.skip('[curry test generator]', function() {
it('generates sets of args to pass', function() {
expect(curryPermutations([1, 2, 3])).eql([
[[1], [2], [3]],
[[1, 2], [3]],
[[1], [2, 3]],
[[1, 2, 3]]
]);
});
it('runs every arg set', function() {
const
argNames = ['a', 'b', 'c'],
partialRuns = [],
finishRuns = [];
const curryMock = (prior = {idx: 0, a: 0, b: 0, c: 0}) => function(a, b, c) {
prior.idx++;
if(a && !prior.a) prior.a = prior.idx;
if(b && !prior.b) prior.b = prior.idx;
if(c && !prior.c) prior.c = prior.idx;
const set = [];
for(let i = 1; i <= prior.idx; i++) {
set.push(argNames.filter(key => prior[key] == i).join(''));
}
const target = c ? finishRuns : partialRuns;
target.push(set);
if(!b) return curryMock(prior).bind(null, a);
if(!c) return curryMock(prior).bind(null, a, b);
return a + b + c;
}
testCurrying(curryMock(), [1, 2, 3], 6)();
});
});
function sparse() {
const arr = [1, undefined, 3];
arr[5] = 5;
arr[9] = 9;
return arr;
}
function pack(list) {
// packed (hole-free) equivalent of sparse(): every index present, same values
return [1, undefined, 3, 5, 9];
}
// The set of inputs both distinct amongst themselves and matcheable by equals,
// for use by tests of equals and other functions expected to follow its spec.
const variety = {
a: null,
b: 0,
c: 1,
d: '1',
e: undefined,
f: String,
g: NaN,
h: new Date(2017, 1, 1),
i: new Date(2018, 1, 1),
j: new String(),
k: new Boolean(true),
l: new Boolean(false),
m: new Number(Infinity),
};
// Duplication of variety, replacing objects with primitive equivalents
// where possible (which should still be considered equal)
const dup = {
a: null,
b: 0,
c: 1,
d: '1',
e: undefined,
f: String,
g: NaN,
h: (new Date(2017, 1, 1)).valueOf(),
i: (new Date(2018, 1, 1)).valueOf(),
j: '',
k: true,
l: false,
m: Infinity,
};
exports.testCurrying = testCurrying;
exports.sparseList = Object.freeze(sparse());
exports.packedList = Object.freeze(pack(exports.sparseList));
exports.varietyObj = Object.freeze(variety);
exports.varietyObjCopy = Object.freeze(dup);
exports.varietyList = Object.freeze(values(variety));
exports.varietyListCopy = Object.freeze(values(dup));
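`testCurrying` asserts that every partial application is itself a function whose `length` equals the number of arguments still owed. A minimal curry helper of the kind it is meant to exercise could look like this (an illustrative sketch, not portable-fp's actual implementation):

```javascript
// Sketch of a curry that keeps `length` accurate at every step,
// as the `arity matching remaining args` assertion requires.
function curry(fn) {
  return function curried(...args) {
    if (args.length >= fn.length) return fn(...args);
    const next = (...rest) => curried(...args, ...rest);
    // Function.length is configurable, so we can report the remaining arity.
    Object.defineProperty(next, 'length', { value: fn.length - args.length });
    return next;
  };
}

const add3 = curry((a, b, c) => a + b + c);
console.log(add3(1)(2)(3));     // 6
console.log(add3(1, 2)(3));     // 6
console.log(add3(1)(2).length); // 1 argument still owed
```

Without the `defineProperty` step the partials would all report `length` 0 (rest parameters do not count), and the arity assertion in `testCurrying` would fail.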
|
STACK_EDU
|
I want to create a website, and I know how to get a domain and everything, but I don’t know where to go from there.
I just want to play around with the Idea to get used to it, but i have no idea where to start :(. Eventually I will be doing a major overhaul to our teams website, but right now I know nothing about it so I am trying to gain experience!
How about just starting a blog at a site like Wordpress.com or Blogspot.com. You could easily accomplish what you are trying to do, unless the intent is to learn how to build the website with things like PHP, or ASP.Net, or some other development language.
Ya, I intend to learn how to build sites from scratch through things like PHP… I have looked into blogspot.com sorts of things, and they don’t look as fun as it would be to create it from scratch (and what do you learn from just using someone else’s site?)
For team 1323’s website, we use Dreamweaver. I like Dreamweaver because you don’t have to fully know HTML; as long as you know some, you’re fine. But for us, we get free stuff due to the fact that our school buys a ton of Autodesk and Adobe products.
No. The Microsoft products are geared to their technologies, ASP and .NET.
There are many freebie webhosts or you can opt for a low cost one. I suggest the low cost route (site5 , 1&1, Godaddy)
You will have issues pointing a domain name to a freebie site.
Get a linux/apache package with cpanel. Stay away from a windows server.
John, I have to disagree with you, while I think Linux servers are wonderful Microsoft servers are also quite handy if you know how to set them up. Where I work we have servers that have not had a minute of unscheduled down time in a year. When we do have downtime it is mostly for upgrades (hardware and software) and those times are known over a month in advance.
My suggestion, before paying for a server, use XAMPP (google it) to test out your code on a local machine. Depending on what language you want to do I would suggest different software. If you want to use .NET, Visual Studio is the recommended app. For PHP development I would recommend Notepad++ (or your code editor of choice). For HTML development, Notepad++ again is my editor of choice on Windows. On OS X I use TextMate for all HTML/PHP development.
Only once you have learned the basic skills would I rent space on a server and buy a domain name. No point to be paying for it to be sitting out there unused.
I’m developing our new team site in an Ubuntu 9.04 Server setup with mod-wsgi & python on Django framework. Though, where you should start in order to learn is through basic semantics of HTML, and then learn basic PHP. Once you grasp solid understanding (note: not knowledge) of PHP, you should be able to crank some cool stuff out with documentation handy. Check out tizag.com and W3’s website.
Yeah, a good path to take is: Learn HTML&CSS&JS (The static stuff), then move on to dynamic stuff like Python or PHP (or ASP) and then move on to web framework stuff (this isn’t really necessary, but you may be able to crank out really cool stuff).
Well you have a domain and a webserver, good job. that’s about half the battle right there for most new web designers (:
for me, the first thing i learned when i developed my first site was PHP, and some basic MySQL queries. trust me, if you can understand that and make something basic (like a database-driven news page), you will very quickly learn how to move along from there. I sort of learned html on the side as i learned php, when i realized i needed to format stuff.
If you want ASP.NET, you dont need a windows server! you can use apache with the mono mod on linux.
For an ASP.NET editor, definitely Visual Web Developer Express. For PHP, Eclipse PDT (http://www.eclipse.org/pdt/). It has syntax highlighting; auto-completion for PHP, HTML, JS and CSS; PHP debugging (I find this hard to get working); and a few other nice features, like being able to install add-ons such as Subversion.
You can also use Microsoft Publisher to create the web site for you. It’s pretty easy to use if you have no experience at all. You can choose from templates with color schemes and have something up and running in a few hours that looks professional. You can edit the html, but then the next time you create the site, it overwrites your changes, so it’s really for creating a web site from their user interface. It also handles clicking on other links, moving from page to page, etc.
If you want to learn html, using notepad.exe is very basic, and you’ll spend a lot of time figuring out html syntax. You will spend a lot of time getting things to look decent. This is for hard-core html learners.
If you want to learn ASP.NET, using Microsoft Visual Studio, you can create a web site very quickly, using their user interface, but you will need to learn VB.NET or C# to be able to do any processing of user input. If you’re just displaying status, showing images, etc., you probably won’t need any code behind except for moving from one page to another. There are plenty of examples of code for anything you want to do by googling it.
|
OPCFW_CODE
|
Avoiding boilerplate when using typeclass-based polymorphism
I'm finding that my code frequently looks a little like this:
trait Example {
def getThing1[A, O <: HList](a: A)(implicit g1: GetThing1[A] { type Out = O }): O = g1(a)
def getThing2[A, O <: HList](a: A)(implicit g2: GetThing2[A] { type Out = O }): O = g2(a)
def combineThings[T1 <: HList, T2 <: HList, O <: HList](t1: T1, t2: T2)(implicit
c: CombineThings[T1, T2] {type Out = O},
): O = c(t1, t2)
def getCombinedReversed[A, T1 <: HList, T2 <: HList, C <: HList, O <: HList](a: A)(implicit
g1: GetThing1[A] {type Out = T1},
g2: GetThing2[A] {type Out = T2},
c: CombineThings[T1, T2] {type Out = C},
r: Reverse[C] {type Out = O},
): O = r(combineThings(getThing1(a), getThing2(a)))
}
This is actually more complex than a stand-alone getCombinedReversed method that uses implicits only and does not call the getThing1, getThing2 or combineThings methods:
def getCombinedReversedStandAlone[A, T1 <: HList, T2 <: HList, C <: HList, O <: HList](a: A)(implicit
g1: GetThing1[A] {type Out = T1},
g2: GetThing2[A] {type Out = T2},
c: CombineThings[T1, T2] {type Out = C},
r: Reverse[C] {type Out = O},
): O = r(c(g1(a), g2(a)))
I have no particular problem with this, but it does bloat out my code a bit, so I thought I'd check that there's no obvious solution. Obviously calling the getThing and combineThings methods without asserting that the correct implicit is in scope isn't possible.
Thanks for any assistance.
In implicit parameters of a method you can prefer Aux-types rather than type refinements (you can automize generating Aux types with a macro annotation from AUXify). Also in return type of a method you can prefer path-dependent type rather than additional type parameter (to be inferred).
def getThing1[A](a: A)(implicit g1: GetThing1[A]): g1.Out = g1(a)
def getThing2[A](a: A)(implicit g2: GetThing2[A]): g2.Out = g2(a)
def combineThings[T1 <: HList, T2 <: HList](t1: T1, t2: T2)(implicit
c: CombineThings[T1, T2]
): c.Out = c(t1, t2)
def getCombinedReversed[A, T1 <: HList, T2 <: HList, C <: HList](a: A)(implicit
g1: GetThing1.Aux[A, T1],
g2: GetThing2.Aux[A, T2],
c: CombineThings.Aux[T1, T2, C],
r: Reverse[C]
): r.Out = r(combineThings(getThing1(a), getThing2(a)))
def getCombinedReversedStandAlone[A, T1 <: HList, T2 <: HList, C <: HList](a: A)(implicit
g1: GetThing1.Aux[A, T1],
g2: GetThing2.Aux[A, T2],
c: CombineThings.Aux[T1, T2, C],
r: Reverse[C]
): r.Out = r(c(g1(a), g2(a)))
Besides that, regarding necessity to repeat implicit parameters please read
How to wrap a method having implicits with another method in Scala?
Pass implicit parameter through multiple objects
Generally speaking, writing your code like you described seems conventional. Implicit parameters help to understand the logic what method does (this surely demands some skill). If you start to hide implicits then your code can start to look less conventional :) If you repeat the same set of implicit parameters many times this is a signal to introduce a new type class.
import com.github.dmytromitin.auxify.macros.{aux, instance}
import shapeless.DepFn1
@aux @instance
trait GetCombinedReversed[A] extends DepFn1[A] {
type Out
def apply(a: A): Out
}
object GetCombinedReversed {
implicit def mkGetCombinedReversed[A, T1 <: HList, T2 <: HList, C <: HList](implicit
g1: GetThing1.Aux[A, T1],
g2: GetThing2.Aux[A, T2],
c: CombineThings.Aux[T1, T2, C],
r: Reverse[C]
): Aux[A, r.Out] = instance(a => r(c(g1(a), g2(a))))
}
def foo1[..., A, A1, ...](implicit ..., gcr: GetCombinedReversed.Aux[A, A1], ...) =
f(..., gcr(a), ...)
def foo2[..., A, A1, ...](implicit ..., gcr: GetCombinedReversed.Aux[A, A1], ...) =
g(..., gcr(a), ...)
In Scala 3 you can write
def getCombinedReversed[A, T1 <: HList, T2 <: HList, C <: HList](a: A)(using
g1: GetThing1[A],
g2: GetThing2[A],
c: CombineThings[g1.Out, g2.Out],
r: Reverse[c.Out]
): r.Out = ???
so type refinements or Aux-types are needed more rarely, although sometimes they are still necessary. I'll copy my comments from here:
def foo(using tc1: TC1[tc2.Out], tc2: TC2[tc1.Out]) = ???
doesn't compile while
def bar[A, B](using tc1: TC1.Aux[A, B], tc2: TC2.Aux[B, A]) = ???
and
def baz[A](using tc1: TC1[A], tc2: TC2.Aux[tc1.Out, A]) = ???
do.
thanks - I actually found a statement from your other answer you linked to said it well: "If you repeat the same set of implicit parameters many times then idiomatic solution is to introduce your type class (or just single implicit) instead of that set of implicits and use this type class."
On my answer from a couple of days ago somebody said that the Aux pattern also helps the compiler infer types and so avoids the over-refined implicit problem. I had previously thought they were useful only to reduce boilerplate. Are they also helpful for type inference?
@Chrisper Well, this helped in that specific case. Generally, Aux-types are intended to be more or less equivalent to type refinements.
@Chrisper Similarly, path-dependent return types and return types with additional type parameter should be more or less interchangeable although in older versions of Scala they were not and there can be some specific situations now as well (I'll try to find the link).
@Chrisper I found. See "And thirdly..." in my answer there and the discussion in comments under MateuszKubuszok's answer.
@Chrisper For future readers I'll put here the link to your question you've mentioned where as you said "the Aux pattern also helps the compiler infer types and so avoids the over-refined implicit problem" https://stackoverflow.com/questions/64814539/why-is-this-implicit-resolution-failing
thank you. If the Aux pattern reduces boilerplate and may sometimes help with type inference then it seems to be worth making a habit of.
|
STACK_EXCHANGE
|
REDMOND, Wash., and DORTMUND, Germany, Jan. 29, 2001 — Microsoft Corp. and MATERNA GmbH Information & Communications today announced a business and technical alliance to provide enterprise customers with cutting-edge mobile solutions based on Microsoft® platform technologies, including the new Microsoft Mobile Information Server. As part of this alliance, MATERNA, which is the first mobile software and service provider to offer a totally integrated Windows® 2000-based WAP Server solution, intends to offer customers a mobile add-on solution to Mobile Information Server, which is due in the first half of this year.
MATERNA, which is the first mobile ASP to use the Windows 2000 Datacenter Server for its new WAP Server hosted in the Anny Way Information Center, will work closely with Microsoft to enable mobile access to corporate and hosted services such as Microsoft Exchange 2000 Server. The strategic alliance with MATERNA, and MATERNA’s add-on offering for MIS, will provide Mobile Information Server enterprise customers with increased mobility. The MATERNA technology, obtainable with the market availability of Mobile Information Server, will take the form of a service and a server software product. The service, offered by the Anny Way Information Center, will extend Mobile Information Server’s mobile SMS notification reach into any and all European and Asian mobile operators’ systems using one secure connection from the enterprise. The software product, MATERNA’s Anny Way WAP Server, will extend Mobile Information Server’s single password logon and secure browse access to all Internet and corporate intranet data sources. The Anny Way WAP Server can cache key user and device credentials, simplifying the life of mobile-data users because they are not forced to log on each time access is attempted on a new mobile service.
“Our relationship with MATERNA is part of a shared vision to provide the core mobility infrastructure for enabling enterprise customers to gain access to corporate data from any device,”
said Paul Gross, senior vice president of the Mobility group at Microsoft.
“Given MATERNA’s proven expertise and technology depth in the mobility space, we are very excited to be working with them to make that vision a reality for customers.”
“As mobility issues assume a more significant role in our business and daily life, it has become one of MATERNA’s chief objectives to support technologies for mobile communication,”
said Helmut an de Meulen, CEO of MATERNA.
“Our relationship with Microsoft and especially the integration of the Microsoft Mobile Information Server with MATERNA’s software solutions positions us perfectly for future engagement in this rapidly evolving industry. Moreover, MATERNA’s technologies are also an interesting link for other products and services from Microsoft. As we share our knowledge and work together in the area of wireless information technology, the user will profit from this alliance between two global players.”
About MATERNA Information & Communications
The MATERNA Group is one of Germany’s leading software distributors for information and communications technology. MATERNA currently employs more than 1,250 people throughout Europe. In 2000, the company earned revenues of 175 million euros. In addition to its headquarters in Dortmund, the company has branch offices throughout Germany as well as in France, Austria, Belgium, the Netherlands and Hong Kong. Its sphere of activity covers products, solution and services for e-solutions, mobile solutions and unified messaging.
About the Anny Way Information Center
The Anny Way Information Center (AIC) was developed in cooperation with the leading mobile network operators. The platform consists of a module that provides several integrated value-added services as a part of a scalable and comprehensive system. The selected value-added service modules are hosted in one of the AICs. Services may also be obtained on the basis of a license structure.
Detailed information on Anny Way Information Center can be found at
About Microsoft Mobile Information Server
Mobile Information Server is a new mobile applications server product that extends the reach of Microsoft .NET applications, enterprise data and intranet content into the realm of the mobile user. Mobile Information Server, which is scheduled to be available in the first half of 2001 in the United States, Asia and Europe, will bring the power of the corporate intranet to the latest generation of mobile devices so users can securely access their e-mail, contacts, calendar, tasks or any intranet line-of-business application in real time, wherever they happen to be.
Detailed information on Mobile Information Server can be found at http://www.microsoft.com/servers/miserver/ .
Founded in 1975, Microsoft (Nasdaq “MSFT”) is the worldwide leader in software, services and Internet technologies for personal and business computing. The company offers a wide range of products and services designed to empower people through great software — anytime, any place and on any device.
Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corp. in the United States and/or other countries.
The names of actual companies and products mentioned herein may be the trademarks of their respective owners.
Anny Way is a registered trademark of the MATERNA Group.
Note to editors: If you are interested in viewing additional information on Microsoft, please visit the Microsoft Web page at http://www.microsoft.com/presspass/ on Microsoft’s corporate information pages.
|
OPCFW_CODE
|
Mathematics for Data Science 2 (2020)
Note that this is the OLD course (S2-2020). The current course is in UQ's Blackboard System
This course has a heavy focus on some of the mathematics used in data science applications. It is a linear algebra course that also incorporates some multi-variable calculus. The foundations of linear algebra are studied and explored with the help of numerical examples from the Julia programming language. The assignments and final project also celebrate these 12 data science use cases:
- Convergence proof for the perceptron
- Least squares data fitting
- Least squares classification
- Multi-objective least squares and regularization
- Multiple ways for evaluating least squares
- Linear dynamical systems and systems of differential equations
- Covariance matrices and joint probabilities
- Multi-variate Gaussian distributions and weighted least squares
- Cholesky decomposition for multi-variate random variable simulation
- Analysis of gradient descent and extensions
- Principal component analysis (PCA)
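The first of these use cases, least squares data fitting, can be sketched in a few lines. The course itself works in Julia; the Python sketch below solves the simplest case, a straight-line fit via the normal equations, with made-up data chosen to lie exactly on y = 2x + 1:

```python
# Least squares line fit via the normal equations. A minimal sketch in
# Python (the course uses Julia); the data points are invented and lie
# exactly on y = 2x + 1, so the fit recovers those coefficients.
def fit_line(xs, ys):
    """Return (slope, intercept) minimizing the sum of squared residuals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
slope, intercept = fit_line(xs, ys)
```

The same idea generalizes to the matrix form covered in [VMLS], where the normal equations become A'A x = A'b.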
The prerequisite for the course is knowledge comparable to MATH7501. This includes basic discrete mathematics, calculus, and elementary manipulation of vectors and matrices. Feel free to use the MATH7501 course reader to brush up as needed. If you haven't done MATH7501, you may also use the mathematics first-year learning centre to get help with elementary (MATH7501-ish) items such as basic matrix/vector operations, basic calculus, and basic understanding of mathematical notation. It is also recommended that you read the following sections from the [VMLS] book as background: 1.1, 1.2, 1.3, 1.4, 3.1, 3.2, 6.1, 6.2, 6.3, 6.4, 7.1, 7.2, 7.3, 10.1, 10.2, 10.3.
The course makes use of the following resources and materials:
References to [CLAWJ], [DSUC], [JULIA], [VMLS], [LALFD], [ILA], [3B1B], and [SWJ], are frequently made during the course and in each week students are requested to cover selected material from these sources.
The course assessment includes two assignments, three quizzes, and a final project:
Quiz, assignment, and project submission information: All assessment items are to be submitted to email@example.com (this account should not be used for any queries - only for submissions). Submissions should include two files, a PDF file and an audio clip. The submission must adhere to the following guidelines:
- Submit a single PDF file, with pages of uniform size, and a file size that does not exceed 8MB (you can use a pdf compression utility if needed). The name of the PDF file should be FFFF_LLLL_SN-IIII.pdf where FFFF is your first name, LLLL, is your last name, SN is your student number, and IIII is "Quiz1", "Assignment1", "Project", etc.
- Do not submit code - instead format your code into the PDF file.
- Both handwritten and typed notes are acceptable; however, typed (LaTeX/Jupyter) mathematics is preferable.
- All graphs, plots, source code, and other figures must be clearly labeled.
- All questions/items must appear in order.
- A recorded audio clip in any standard format with a minimum duration of one minute and a maximum duration of two minutes. The file size may not exceed 4MB. In your recording, state your name and describe your experience with this assignment. Mention the resources that you used to carry out the assignment, and if valid, indicate that you did not plagiarize. Name the audio clip in the same way that your PDF file is named, but with the valid audio format extension.
Lecture and practical recordings are here
Other material from the lectures and practicals is in this GitHub repo
Here is the schedule:
- Julia linear algebra
- Ass 1 guidance
- Quiz 1 + 1 hr
- Quiz 1 Sol
- Ass 1 guidance
- Quiz 2 prepare
- Ass 1 Due: Sep-10
- Quiz 2 + 1 hr
- Quiz 2 Sol
- Ass 2 guidance
- Ass 2 guidance
- Quiz 3 prepare
- Ass 2 Due: Oct-15
- Quiz 3 + 1 hr
- Quiz 3 Sol
Project due: Nov-14.
Re: [Usability] Screensaver and idle time
- From: William Jon McCann <mccann jhu edu>
- To: Joachim Noreiko <jnoreiko yahoo com>
- Cc: Gnome usability <usability gnome org>
- Subject: Re: [Usability] Screensaver and idle time
- Date: Wed, 01 Mar 2006 12:17:34 -0500
Joachim Noreiko wrote:
This caused us a number of headaches in the UI review,
and the more I think about it, the more illogical it
seems, for a number of reasons.
My recollection was that the difficulty in the review was related to:
1. This feature having a cross-module scope
2. The specific language to use for the label text (session and idle)
By cross module scope I mean that this is one example of a different
kind of problem then we may be used to. As the desktop becomes more
integrated it will become increasingly difficult to maintain preference
panels that are per-application. For example, gnome-screensaver and
gnome-power-manager are two distinct modules with fairly well defined
roles at this point but the policy/preferences UI needs to become more
integrated. Exactly how this integration will occur I don't know.
Firstly, we couldn't work out a good way to label the
controls. That should be a red flag to begin with.
The order that springs to mind is:
[*] Use the screensaver
- start it after [-------] minutes
- lock the screen
That's the way the user conceptualizes it: do I want
the saver or not? Supposing I do, how do I want it?
The current way is quite simply backwards, and the
logical way is not possible because of what the
controls actually do.
Second, I am currently writing the docs for this.
What do I say about this slider?
"Use the slider to set the screensaver delay time. The
screensaver will" ... um, no, it won't!
"Use the slider to set the session idle time"...
meaningless jargon that users don't understand.
"Use the slider to set the computer idle time"... I
don't want my computer to be lazy!
"Use the slider to set the time that your computer
must be unused to count as "... to count as what?
Again, I am depending on jargon.
Right, we haven't come up with a good way to describe this. This is
partly due to the problem I described above in that this function is
scoped wider than just screensaver stuff.
In the UI review we just gave up. We probably should have tried harder
to get this right. Sometimes there just isn't enough time to get it right.
Perhaps we could have used "desktop" instead of "session" and "inactive"
instead of "idle" to avoid *new* jargon.
Or perhaps if we had this slider in the same dialog as the
gnome-power-manager screen and computer suspend sliders, and we don't
care to advertize a desktop wide idle/away setting we wouldn't have to
make the distinction that it is the baseline idle time at all.
So, do we care to have an integrated desktop-wide idle setting? I think
it is useful. I'd like some other opinions on this though. There are
lots of technical reasons why we'd want this which for the most part are
either presence/communication related or boil down to a contract with
the user that after this time applications should be free to do stuff
that might otherwise piss the user off (eg. run backups, rebuild
databases, etc). But this is a usability list... So why might this be
nice for people to have. Well, there would only be one place that
someone has to set this instead of in a screensaver, in a power-manager,
in a chat client, in ekiga, in a backup client, etc. I think it also
makes the Desktop experience more personal in that it gets closer to
knowing what you think. We already can detect presence by proximity
using bluetooth without using something as crude as a time slider bar.
However, until we can ask the user "Is there anything else I can help
you with for now?" we need to use a slider bar and try to detect the
human activity from input devices. So, that time slider bar has to go
somewhere and it has to be described somehow.
If you can't explain something cleanly and simply,
then there is probably something wrong with it.
Probably more accurate to say there may be something wrong with it.
The user opens the Screensaver prefs to do one thing:
set the screensaver properties. When the screensaver
starts, what it shows, and whether to lock the screen.
We should not be overloading these settings with other concerns.
I agree. This is partly due to the name of the menu entry, yes.
Thirdly, there is no logical reason to tie screensaver
activation to session idleness.
The user may want several things to happen when the
computer is unused. I can think of:
- mark IM as away
- show screensaver
- power down monitor
I think we disagree here. I think there is a clear reason for relating
screensaver activation and session/desktop idleness. In fact, that has
always been the case. Just because there may be other things that are
also interested in the session/desktop idleness doesn't mean that it
shouldn't be related to screensaver activation.
The last two have to go together. But why tie the first one to them?
If I set my monitor to power down after 10 minutes,
and my screensaver to show after 15, then I won't get
to see it. Tough.
No not tough - wrong. Why on earth should we allow settings that are
meaningless? That is super confusing.
But the first two have no predefined order.
For example, suppose I run BOINC as a screensaver. I
want it to start early on, say 2 minutes. But I might
be in front of my PC reading paper documents while
that is happening. I want GAIM to be able to return me
to the desktop if one of my contacts comes online or
messages me, so I don't want GAIM to mark me as idle.
I don't mean to insult you but I hardly think this is normal behavior.
Most people don't sit and watch screensavers. Perhaps it should set the
away message as "I'm really here but I'm watching my screensaver" ;)
There are some other subtle points here regarding the effect of IM
messages and DPMS suspend that I won't get into here. Some more thought
needs to go into how IM and messaging will work on a desktop appliance.
It's obviously too late to do something for 2.14, so
I'll try my best to come up with something for the
documentation for this as it stands.
But please could this be reconsidered?
Almost everything can be reconsidered. As always, I welcome and
appreciate more people being thoughtful about this stuff. And thanks
for working on the docs.
I wrote a monstrous document, circa 2015, about natural gas pipeline scarcities driving electricity prices higher and how to model the behavior of individual power generation firms that operate in tightly coupled natural gas and electricity markets. That document—my doctoral dissertation—is part linear programming, part economics, part policy analysis, and nearly two hundred pages long. Here, I’ve distilled that work into its most basic ideas for the curious.
The term “electricity market” tends to surprise people who have never heard it before. While not all electricity systems operate as markets, many, including most of the large systems in the United States, do. At any given instant, power generation firms want to sell electricity at different prices that largely depend on their costs, and consumers want to buy electricity for a wide array of end uses. The price that firms want to sell at and the price that consumers are willing to pay do not always match; firms that only want to sell electricity at high prices will tend to find few willing consumers, and consumers that only want to buy electricity at low prices will tend to find few willing suppliers. Electricity markets coordinate the behavior of generation firms and large consumers connected to the same transmission network by matching as many buyers and sellers as possible.
In practical terms, electricity markets coordinate the behavior of generation firms and consumers by determining the “marginal electricity prices” that buyers and sellers will trade with one another at throughout the day. These prices implicitly coordinate activity on the network because generation firms typically will not want to lose money by selling electricity when the marginal price falls below their cost, and consumers will not want to lose money by paying the marginal price for electricity when it exceeds the value that they would obtain from using that electricity.
Generation firms sell electricity at different prices because their costs to operate a power plant vary greatly depending on each plant’s technology. Some technologies, such as nuclear plants, have high, upfront investment costs and low fuel costs. Other technologies, such as natural gas, tend to have lower initial fixed costs but higher fuel costs. In addition to monetary differences, power plant technologies also vary widely with respect to the emissions that they create and their physical capabilities to start up, shut down, and operate somewhere in between these two states. All of these characteristics play an important role in determining how much electricity a firm can sell from each power plant. A lightbulb offers a succinct analogy to understand differences in power plant technologies. Consider two lightbulbs, one incandescent and one LED, both capable of producing the same amount of light. The incandescent lightbulb may initially cost less than its LED counterpart, but the incandescent lightbulb also costs more to operate per hour. Money aside, incandescent lightbulbs can also be easily dimmed over a large range of their total possible brightness, whereas LED bulbs have a smaller operating range and may require special dimming hardware. Diversity of power plant technologies allows a power system to balance operational reliability with cost, to adapt to changes in demand, and to react to unexpected supply interruptions.
Natural gas-fired power plants, one example of a fossil fuel power plant technology, operate by burning natural gas to generate heat, boil water, generate steam, and turn a turbine to generate electricity. Although the United States has abundant natural gas supplies, pipeline capacity to transport natural gas into some regions such as New England remains quite scarce because pipeline investment has historically required multi-decade agreements between industrial consumers, utilities, and pipeline operators. To date, power generation firms have benefited from the excess natural gas transport capacity that these other large-scale consumers created. However, over the last few years, as the power sector displaced all other sectors as the largest consumer of natural gas in the United States, investment in pipeline capacity slowed. This shift coupled the electricity market in regions with scarce pipeline capacity to the natural gas market. In particular, in New England, this coupling led to electricity reliability problems because 1) natural-gas fired power plants could not always acquire the fuel that they needed; and 2) due to economic and environmental factors, few alternative technologies remain that could substitute in for natural gas plants during pipeline scarcity events.
To explore the implications of such a tightly coupled natural gas and electricity market on potential pipeline investment and electricity reliability, I developed a computationally tractable, mathematical model of the decisions that a power generation firm must make in natural gas and electricity markets over a timescale ranging from a few hours to the next few years, taking into consideration uncertainty from future electricity demand, natural gas prices, pipeline availability, and unexpected power plant failures.
Fortunately, I did not have to start from scratch. The field of electric power systems contains a few canonical mathematical models that describe optimal, welfare-maximizing decisions such as when power plants should turn on and off, how much electricity each plant should generate, and how much additional capacity should be built for each technology. Of course, these models are subject to strong assumptions, and they are necessarily always wrong in some manner. Yet, they are useful tools for exploration and discussion, so long as their results are interpreted with the correct skepticism and perspective.
Many electric power system models employ two central economic ideas. First, assuming a single central planner tasked with the goal of deciding everything—which power plants should turn on and off; how much each plant should generate; and which consumers will consume and how much—the central planner’s objective is to maximize the aggregate welfare of the electric power system. Welfare is defined as the revenue that firms earn less their cost, plus the utility that consumers gain less their cost. Second, under the assumption of perfect competition, a firm’s profit-maximizing decisions are identical to its decisions under a central planner’s welfare-maximization problem. With these economic ideas and assumptions, we can start our exploration of how to model the behavior of generation firms in a tightly coupled natural gas and electricity market.
I chose the canonical “unit commitment” model as the foundational building block for my dissertation because of its relatively fine temporal resolution. Broadly, the unit commitment problem answers the following question: given a power system’s power plants, their operating characteristics, costs, and the hourly electricity demand over a 24-hour period, when should each power plant turn on and off, and how much electricity should each plant generate in each hour to maximize welfare? Because the unit commitment problem contains a single, welfare-maximizing decision maker that knows everything about the power system, this type of unit commitment problem is often also called the “central planner’s unit commitment” problem.
The central planner’s unit commitment problem can be mathematically expressed as a “mixed integer linear program.” Linear programs are special math problems that computers can adeptly solve; the autolayout of graphical user elements for iOS apps is just one example of a common linear programming application in everyday life. The “mixed integer” modifier for “mixed integer linear programs” indicates that some decisions are binary/integer in nature. For example, a power plant can be “off” or “on,” but not somewhere in between. Importantly, linear programs can guarantee under specific conditions that a solution is optimal with respect to a particular objective, and solutions to linear programs automatically yield useful economic information in the form of marginal prices. This is exactly how system and market operators in real electric power systems both calculate and justify their marginal electricity prices (which implicitly decide which firms can sell electricity on any given day). The cartoon diagram below shows a stylized version of a unit commitment’s outputs, given information about total electricity demand and power plant costs.
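The way marginal prices implicitly decide which firms sell electricity can be illustrated with a toy merit-order dispatch. This is a sketch only; the plant names, capacities, and costs below are invented, and the real model is the MILP described above, where prices emerge as dual values rather than from explicit sorting:

```python
# Toy merit-order dispatch: plants run cheapest-first until demand is met,
# and the cost of the most expensive running plant sets the marginal price.
# A stylized sketch, not the unit commitment MILP itself.
def dispatch(plants, demand):
    """plants: list of (name, capacity_mw, cost_per_mwh).
    Returns (generation schedule, marginal price)."""
    schedule, remaining, price = {}, demand, 0.0
    for name, cap, cost in sorted(plants, key=lambda p: p[2]):
        gen = min(cap, remaining)
        schedule[name] = gen
        remaining -= gen
        if gen > 0:
            price = cost  # the last dispatched plant sets the price
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return schedule, price

# Hypothetical fleet: cheap baseload nuclear, mid-cost gas, expensive peaker
plants = [("nuclear", 400, 10.0), ("gas", 300, 45.0), ("peaker", 100, 120.0)]
sched, price = dispatch(plants, 600)  # gas is marginal, so it sets the price
```

Here nuclear runs at full capacity, gas covers the remaining 200 MW, the peaker stays off, and every seller receives the 45.0 marginal price, which is exactly why low-cost plants earn margin and high-cost plants sit idle.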
While most real-sized unit commitment models analyze one or two days at hourly intervals, generation firms must make a vast array of decisions that can span multiple years. Many of these decisions may require a firm to engage in a contractual agreement with another party before knowing what will actually happen in the future. To make well-informed decisions—for example, to balance the risk of paying too much to secure sufficient pipeline capacity ahead of time against the possibility of not needing that capacity in the future—generation firms must somehow take into consideration uncertainties such as future electricity demand, pipeline capacity availability, natural gas commodity price, and plant availability. The figure below shows the frequency and timescale of the primary decisions that I considered in the model developed for my dissertation.
A straightforward approach to model a firm’s multi-year and annual decisions would entail extending the hourly unit commitment model from twenty-four to tens of thousands of hours, as shown in the figure below. New decision variables corresponding to the appropriate long-term decisions described above could be introduced spanning the appropriate number of hours. Unfortunately, this brute-force approach also requires computing a solution to the hourly unit commitment problem for tens of thousands of hours, which quickly turns into an intractable problem for any real-sized power system.
To work around the computational intractability problems, I restructured my embellished-and-extended unit commitment formulation into a series of hierarchical optimization problems based on a dimensionality reduction approach first described in “A New Approach to Model Load Levels in Electric Power Systems With High Renewable Penetration” by Wogrin et al. in 2014. In that paper, the authors solve the unit commitment problem by first approximating it using system states. Rather than make one set of plant commitment and generation decisions per hour, they bin each hour into one of K-means clustered states, and then rewrite the unit commitment problem as a mixed integer linear program operating over only those states. The figures below illustrate how this state-based approximation works.
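A drastically simplified, one-dimensional version of that binning step might look like the following. The hourly demand numbers are invented, and the actual method of Wogrin et al. clusters richer system states, but the dimensionality reduction idea is the same:

```python
import random

# Bin 24 hypothetical hourly demand values (MW) into K system states using
# a tiny one-dimensional K-means. A simplified stand-in for the clustering
# step described above; the demand numbers are invented for illustration.
def kmeans_1d(values, k, iters=50, seed=0):
    """Return (centroids, labels) clustering scalar values into k states."""
    rng = random.Random(seed)
    centroids = rng.sample(values, k)
    labels = [0] * len(values)
    for _ in range(iters):
        # assign each hour to its nearest centroid
        labels = [min(range(k), key=lambda j: abs(v - centroids[j]))
                  for v in values]
        # move each centroid to the mean of its members
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return centroids, labels

demand = [310, 300, 295, 290, 300, 340, 420, 500, 520, 510, 505, 500,
          495, 490, 495, 510, 560, 600, 590, 550, 480, 420, 370, 330]
centroids, labels = kmeans_1d(demand, 3)
```

The commitment problem is then solved over the 3 states, each weighted by the number of hours assigned to it, instead of over all 24 hourly periods.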
This state-based approximation, though not without its limitations, substantially reduces the amount of time required to solve a time-extended unit commitment problem. To analyze the multiyear, annual, and hourly behavior of generation firms, I constructed a hierarchical model that first solves for longer term decisions by approximating the shorter term problems using system states, and then reintroduced those longer term decisions as fixed parameters into the remaining shorter term problems. This approach allows longer term decisions to take short-term dynamics and uncertainties into consideration, while also allowing longer term decisions to influence a firm’s short-term choices. The figure below illustrates the final, hierarchical structure of the optimization model that I constructed to study the behavior of generation firms across multiple timescales.
Finally, to take uncertainties about electricity demand, natural gas commodity price, and available pipeline capacity into consideration, I converted my embellished-and-extended unit commitment problem into a “deterministic-equivalent” mixed integer linear program. This modified the overall optimization program by forcing the solver to choose decisions that would maximize the weighted sum of welfare across all scenarios. That is a bit of jargon; pared down, it means that instead of pretending to know what the future would look like at an hourly level for the next three years, the firm faced several possible future scenarios for electricity demand, natural gas commodity price, and pipeline availability.
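Pared down even further, the deterministic-equivalent idea can be shown with a toy two-stage decision: reserve pipeline capacity now, then earn scenario-dependent revenue later. All numbers below are invented for illustration, and the real model optimizes many coupled decisions in one program rather than by enumeration:

```python
# Toy deterministic-equivalent sketch: a firm reserves pipeline capacity
# before knowing which scenario occurs, maximizing probability-weighted
# (expected) profit. All numbers here are hypothetical.
scenarios = [
    # (probability, gas volume usable for generation, margin per unit)
    (0.5, 100, 30.0),   # mild winter
    (0.3, 200, 60.0),   # cold winter
    (0.2, 300, 90.0),   # extreme cold, scarce pipelines
]
RESERVATION_COST = 25.0  # first-stage cost per unit of firm capacity

def expected_profit(capacity):
    profit = -RESERVATION_COST * capacity              # paid in every scenario
    for prob, need, margin in scenarios:
        profit += prob * margin * min(capacity, need)  # scenario revenue
    return profit

# Enumerate candidate reservations; a deterministic-equivalent MILP lets a
# solver pick this in one shot instead of by brute force.
best = max(range(0, 301, 50), key=expected_profit)
```

With these numbers the best reservation is 200 units: capacity beyond that only pays off in the extreme scenario, whose probability-weighted margin (0.2 × 90 = 18) no longer covers the 25-per-unit reservation cost.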
I wrote all of the actual, computable models for my dissertation in GAMS (General Algebraic Modeling System) and solved them with CPLEX, a commercial solver for linear and mixed integer linear programs. Inspired by the polling and prediction tools that I built for my NextBus Delay Tracker, I also built a research system, based on MySQL, Python, Git, and bash scripting, that automatically logged code changes, could reliably select and recreate scenarios, and would import and generate visualizations of my model’s outputs. The figure below shows a high level view of the different components involved. This research system saved me an incredible amount of time and allowed me to conduct, log, review, and reliably reproduce the results of hundreds of optimization runs (and I had a great time building it!).
Over the course of my dissertation, I ran over 400 different optimizations to explore how firms changed their behavior subject to different electricity demands, natural gas costs, and pipeline availabilities. I also explored how firms might react to different policies by holding all but one constraint constant, rerunning the optimizations, and then comparing how the objective function and decisions changed from run to run. While I did study New England as a case study for regions with tightly coupled natural gas and electricity markets, more generally, I built a set of tools to study the behavior of generation firms over multiple time scales with fine grain temporal resolution.
11-17-2011 01:30 PM - edited 11-17-2011 01:34 PM
I've tried everything to make Flash work properly on my computer...I have the new updates (11.1), switched browsers, and, yet, I keep getting a message about Flash not responding. I've been to the Adobe support forums and there are tons of folks with older computers (like me) and new computers...all experiencing freezing and crashes of Adobe Flash Player. Some folks with NEW computers are even getting "blue screens"!
I found if I right click on a video, it will bring me to the Flash Player Settings Manager. I am NOT a tech and the info they give isn't for the average user.
Is it necessary to allow access to my computer via the Settings Manager for both Flash Player components and third party websites?
I was having such a nightmare with Flash, that I had to walk away from my computer for a few days!
Finally, I clicked "allow" for the Global Settings to store info on my computer up to 10 MB and was even forced to make my email account AND Adobe Flash Manager my home pages, so I could easily see what was happening with Flash.
I was able to watch videos with no problem for a couple of days. As I said in another message, I always clear my history and run CCleaner, after I sign off AND run Norton full scan overnight, every night. I've been doing a "workaround", by leaving a blank tab up to clear the history every hour or so. It worked for awhile...then I started getting error messages that only a computer programmer would understand.
I went back (yet again) to the Flash Settings help page, and saw that SOME third party websites are abusing that access to our computers. So, I disabled it and now am back to square one...getting the "flash not responding" error message!
My question is, should I set the Flash Global Settings to store ANYTHING on my computer? As I said, I disabled it today, cleared the addon cache and I'm back to square one..."flash is not responding" with ONLY an option to "stop" or "continue".
Since disabling the "save to my computer" option, I can't even open emails?!!
Adobe really needs to realize that not all of us are techies! I'm no dummy, but the "help" they give is very lame...and really gives no guidance as to whether it is NECESSARY to allow access! The only thing the help page says is..."some websites may not work if you disable websites to store data on your computer".
Please give we average users some guidance as to how to set up the Flash Player's "Settings Manager"!
Solved! Go to Solution.
11-17-2011 06:05 PM
Generally, you do not need to allow sites to store Flash content on your computer in order to view Flash videos. In fact, it is generally recommended that you NOT let sites store flash content because so-called Flash Cookies can be placed there which will make it difficult to remove the tracking cookies that some sites place in your browser (and Norton removes). You definitely do not want to allow third-party sites to store anything.
Flash has nothing at all to do with email, so it sounds like something else may be going on there. Try browsing through the troubleshooting suggestions at Adobe to see if you can fix the Flash issue. If not try uninstalling Flash Player using the Uninstaller, which you can download from that site as well. Then reinstall Flash and see if it behaves better.
11-17-2011 08:26 PM
Thank you, SendofJive! I've waited all day to hear from Adobe and still have no reply.
I did what you suggested, uninstalled and reinstalled the Flash Player. Also, to be sure the old version was gone, I ran that uninstaller also (version 10+).
My email seems to be working again...but I've been through this before. All is okay for a while, then it starts acting up.
I think once I get my badly needed extra RAM ordered (next week) and installed, I may not get these errors. Although, as I said, I saw on the Adobe web site all sorts of folks having problems with Flash freezing and crashing...even those with new computers.
I greatly appreciate your quick reply! I now know not to allow any access to my computer. I cleaned out the addon info, ran CCleaner, ran a MBAM scan, a Norton scan and all looks okay.
I've left my email up and no problems in the past hour or so...it may have done the trick.
Thanks again and all the best~Donna
PS Since the version 10+ uninstaller did run and acknowledge deletion of Flash (and I had uninstalled via programs in Windows, prior to installing 11.1 before), perhaps the old version had some components left behind and they were bumping heads with each other?!
11-17-2011 09:57 PM
I hope that solves the Flash issue for you. Using the standalone Uninstaller often works for issues like you describe. As long as you can watch YouTube videos with no problems, then your Flash Player is working - your email would be something else.
11-18-2011 12:34 PM
You answered my main concern (and your response was fast...I didn't have to wait and I appreciate that greatly).
My concern was that Flash kept prompting me to allow access to my computer by both Flash and third parties, and then I saw a disclaimer (when I looked into it) that stated some third parties were using their access in a way to gather information. That concerned me greatly, and it seems Adobe is either understaffed or puzzled as to the Flash problem...due to the huge amount of folks having problems all across the spectrum. Thanks again!
I was able to watch my Youtube videos for a long time last night, with no problems AND I denied all access to my computer via Flash Settings Manager. It leads me to think that Adobe is getting paid by these third party entities for the encouragement by Adobe to allow access. I caught it early (thanks to you) and was able to clean the data stored on my computer.
CCleaner is a tool I've used for years, as it cleans all history, cookies and across all browsers in one sweep. Again, I want to stress to all average users like me, DO NOT use the registry tool...I never have in the 4 or 5 yrs I've been using it. I never changed the settings, and just use the main cleaner. NOT a good idea to mess with the registry, if you don't know what you're doing. But CCleaner cleared the data stored by Flash for me, plus I ran the "clear data stored by addons" in my browser to be sure it was off my computer.
As to my email, that is a strange one. I kept getting an error message that stated "the video running is causing instability and may cause a crash..do you want to continue?" I had no videos or any media running...I was just browsing my email! the error was tied to Flash, though. I'm leaving my email open today, as I open other tabs, to see if it happens again. I browsed Adobe Flash support and see no email references, at all.
After clearing totally the old 10+ version of Flash with the uninstaller, the performance of Youtube was fast and I had no problems (knock wood).
You may have saved my old computer, SendofJive, as I had no reply by Adobe and thanks to your quick reply, I cleaned up all data storage on my computer, and all is running fine.
It's misleading on Adobe's part, to lure users into using Flash's Setting Manager and blindly making settings to allow potentially harmful access to their computers. I'm wondering if those with new computers (on the Adobe support site) are getting the "blue screens" and having problems BECAUSE they are using that Flash Settings Manager.
I can't thank you enough, SendofJive, for addressing the issue so quickly!
All the best~Donna
P.S. I also removed the plugin version of Flash 11.1, since I'm using IE now...maybe they were bumping heads, too. I'll go back to Firefox, after I get my RAM...but IE is the one that runs the best for me now (probably because I have low RAM, XP and haven't updated the browser or Windows yet).
11-18-2011 03:19 PM
Is it possible that when you were looking at email you had a tab in a browser that was open to a site running Flash content? This could have been going on in the background and caused the notice you saw.
11-18-2011 06:22 PM
No, I had this site open or just the email tab alone! The error message is "a script in this movie is causing Flash Player to become unresponsive" do you want to stop it or continue?
I think I know why the email problem is happening. The system requirements for Adobe Flash 11.1 require 128 mbs of graphic RAM. Seems it all comes back to getting more RAM, which I'm going to order next Wed.
BUT the crucial thing, in my mind, is the push by Adobe to encourage folks to open access to their computer to both Flash and third parties. You saved me on that issue.
I finally heard back from Adobe, and they have all the usual blah, blah, blah. Turn off your hardware accelerator (which I did a while ago), and to a woman with a new laptop with a blue screen, they sent links for her to send what she has on her computer as an error message and report it as a "bug". She clearly said there was no message and took a picture of her laptop with a blue screen and writing...another lady had a screen all squiggly, with colored lines. They weren't helpful at all.
Gave kudos to this site, for helping me so quickly, as it was 2 days before I even heard from them AND they say nothing about not allowing access to your computer.
The problem is that the update is forced upon those who don't have matching system requirements, rather than offering a compatible version. I'm fortunate, I know my system requirements...not everyone does, and many just update and blindly allow access. Not very good practice on Adobe's part, imo.
It's my RAM, as to the email, SendofJive...and as I said, I'm going to order more next week. Hugh helped me GREATLY by helping me find my computer and the site to order it from.
Thanks again for your quick response, I am very grateful!
All the best~Donna
/**
* RocksDBWrapper.h
* Simple RocksDB wrapper
*
* @author valmat <ufabiz@gmail.com>
* @github https://github.com/valmat/rocksserver
*/
#pragma once
#include <string>
#include <vector>
#include <memory>
#include <rocksdb/db.h>
#include <rocksdb/write_batch.h>
// Project configuration types (IniConfigs, DefaultConfigs) are declared elsewhere in RocksServer
namespace RocksServer {
// forward declaration
class Batch;
class RocksDBWrapper
{
public:
/**
* Constructor
* @param IniConfigs
* @param DefaultConfigs
*/
RocksDBWrapper(const IniConfigs &cfg, const DefaultConfigs &dfCfg) noexcept;
~RocksDBWrapper()
{
delete _db;
}
/**
* Cast to a rocksdb::DB pointer
*/
operator rocksdb::DB * () const
{
return _db;
}
rocksdb::DB* operator->()
{
return _db;
}
/**
* Set value by key
* @param string key
* @param string value
*/
bool set(const rocksdb::Slice &key, const rocksdb::Slice &value)
{
_status = _db->Put(rocksdb::WriteOptions(), key, value);
return _status.ok();
}
/**
* commit batch
* @param RocksDB write batch
*/
bool commit(rocksdb::WriteBatch &batch)
{
_status = _db->Write(rocksdb::WriteOptions(), &batch);
return _status.ok();
}
/**
* commit batch
* @param RocksDB write batch
*/
bool commit(Batch &batch);
/**
* Get value by key
* @param string key
* @return string value or an empty string (if the key does not exist)
*/
std::string get(const rocksdb::Slice &key) const
{
std::string value;
_status = _db->Get(rocksdb::ReadOptions(), key, &value);
if (!_status.ok()) {
return "";
}
return value;
}
/**
* Get array values by array keys
* @param keys
* @param statuses
* @return values
*/
std::vector<std::string> mget(const std::vector<rocksdb::Slice> &keys, std::vector<rocksdb::Status> &statuses) const;
/**
* Get array values by array keys
* @param keys
* @param statuses
* @return values
*/
std::vector<std::string> mget(const std::vector<std::string> &keys, std::vector<rocksdb::Status> &statuses) const;
/**
* Fast check if a key exists
* @param string key
* @param string value. If the value exists, it can be retrieved, but there is no guarantee that it will be
* @param bool value_found
* @return bool (true if the key exists)
*/
[[gnu::deprecated("Use keyExist(const rocksdb::Slice &key, std::string &value) instead")]]
bool keyExist(const rocksdb::Slice &key, std::string &value, bool &value_found) const;
/**
* Fast check if a key exists
* @param string key
* @param string value. If the value exists, it can be retrieved, but there is no guarantee that it will be
* @return bool (true if the key exists)
*/
bool keyExist(const rocksdb::Slice &key, std::string &value) const;
/**
* Fast check if a key exists
* @param string key
* @return bool (true if the key exists)
*/
bool keyExist(const rocksdb::Slice &key) const
{
std::string value;
return keyExist(key, value);
}
/**
* Remove key from db
* @param string key
*/
bool del(const rocksdb::Slice& key)
{
_status = _db->Delete(rocksdb::WriteOptions(), key);
return _status.ok();
}
/**
* Get last query status string
*/
std::string getStatus() const
{
return _status.ToString();
}
/**
* Get last query status state
*/
bool status() const
{
return _status.ok();
}
/**
* Increment value
* @param string key
* @param incval, default: 1
*/
bool incr(const rocksdb::Slice& key, const int64_t& incval)
{
_status = _db->Merge(rocksdb::WriteOptions(), key, std::to_string(incval));
return _status.ok();
}
/**
* Increment value
* @param string key
* @param string incval
*/
bool incr(const rocksdb::Slice& key, const rocksdb::Slice& incval = "1")
{
_status = _db->Merge(rocksdb::WriteOptions(), key, incval);
return _status.ok();
}
/**
* Get new Iterator
*/
std::unique_ptr<rocksdb::Iterator> newIter() const
{
return std::unique_ptr<rocksdb::Iterator>(_db->NewIterator(rocksdb::ReadOptions()));
}
private:
// DB pointer
rocksdb::DB* _db;
// Last operation status
mutable rocksdb::Status _status;
};
}
As you develop your integration, it's important to keep the rate limits Finix has set for our APIs in mind.
A rate limit, in simplest terms, limits the number of API requests you can make in any given period. Rate limits play a crucial role in ensuring the stability and performance of an API.
It's important to keep these limits in mind while developing your integration and making sure your application doesn't programmatically create unnecessary requests.
Finix can change limits to prevent abuse or enable high-traffic applications. If your application needs different rate limits, reach out to your Finix point of contact or email the Finix Support team.
By controlling the number of requests your application makes, you can avoid hitting these limits and keep the performance of your application consistent and stable.
Rate Limits in Finix's API
Finix rate limits read and write requests at the:
- Application level : Requests made by any credentials under your application.
- IP Address level : Requests from any individual IP address (e.g., 184.108.40.206).
| Read | GET API requests used to retrieve data/resources. |
| Write | POST/PUT/PATCH/DELETE API requests used to create or update data/resources. |
Treat these individual limits as a guide to how many requests your application should make at any given time. Develop your application so it doesn’t create unnecessary requests or load and works within these limits.
To prevent abuse, rate limits based on the IP address requests originate from are also in place. These limits are less restrictive than the public application rate limits and shouldn't affect legitimate customer requests.
Exceeding Rate Limits
Any requests over these limits get rejected and return a 429 HTTP Error. If a 429 HTTP Error is returned, your application won't be able to get the data it needs from responses to process transactions.
See Handling Rate Limits for tips on how to develop your application to work within these limits and handle 429 HTTP Errors.
If you suddenly see a rising number of requests get rate limited, please reach out to Finix Support.
Handling Rate Limits
Understanding and developing within rate limits is important to ensure the stability and performance of your application.
To avoid exceeding these rate limits, optimize your application so it makes the fewest requests needed. By optimizing your code and monitoring usage, you can develop an application that's both efficient and effective.
Here are some ways you can optimize your application:
- Caching responses can reduce the number of requests your application needs to make to process transactions.
- If your application exceeds rate limits, make sure you have error handling in place to catch the 429 HTTP Error and handle it appropriately. Error handling can include retrying the request later or displaying an error message to the user.
- Use exponential backoff to handle errors. When your application exceeds a limit and receives a 429 HTTP Error, employ exponential backoff and gradually increase the time between requests. Exponential backoff helps the API recover before the subsequent request gets made.
- Regularly monitor how many requests your application makes and ensure you stay within limits. This enables you to identify any issues that come up early and make changes as needed.
- Instead of using GET requests to retrieve data, set up webhooks to be programmatically alerted of responses and changes to resources you manage.
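The exponential-backoff advice above can be sketched in a few lines. This is a generic illustration, not part of any Finix SDK; the function and parameter names are made up, and `send` stands in for whatever callable issues your HTTP request and returns a `(status_code, body)` pair.

```python
import random
import time

def request_with_backoff(send, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Call `send()` until it returns a non-429 status, backing off exponentially.

    `send` is any zero-argument callable returning (status_code, body).
    """
    for attempt in range(max_retries):
        status, body = send()
        if status != 429:
            return status, body
        # Exponential backoff: 1s, 2s, 4s, ... plus a small random jitter
        sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
    raise RuntimeError("Rate limited: retries exhausted")
```

Wrapping a sender that returns 429 twice and then succeeds would wait roughly 1 s, then roughly 2 s, before the third call returns; the jitter keeps many clients from retrying in lockstep.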
“I hate MS Access, especially developing with it. You can’t do anything useful with it.”
Wrong, wrong, wrong!
There are a few useful things I have been able to script to make developing with it passable. I still would rather use Visual Studio, but this improves the experience a fair bit.
Access files are binary (I use the ADP/ADE file format, but I believe other Access file formats have the same problem), so you can’t diff them to see what has changed. This is bad.
However, there is a solution to this. A tool called Access SVN, which can be downloaded from here, gives you a way to extract all the forms and reports that are in Access to text files. Before every commit, I would manually run this tool on my ADP file and extract to text files, then I would commit these text files to source control and could easily see what had changed in each commit.
Despite the name Access SVN, the tool is not tied to Subversion. You can use any source control system (I use git).
Also included in this tool is a way to do this from the command line, so you can make this a build step on your build server. I have not used this extensively yet, but the syntax is fairly simple:
asvn.exe e “path to Access file” “path to txt files” “*.*”
The filter at the end, *.*, allows you to specify what to extract, so you could extract all forms/reports starting with D with “*.D*”. I had trouble using *.* because the names of my forms/reports contain characters not allowed in a Windows file name. I am sure there is a way round this but I haven’t had the chance to look into it further yet.
Surely testing is not possible with MS Access! I would have agreed with that statement until the other day, when I found a reliable way of testing whether a feature is enabled.
Firstly, a bit of background. I develop using MS Access 2003 because the design view is far easier to use. However, because it is out of support, all my users use MS Access 2010. MS Access 2010 has a feature called Tabbed Documents, which allows all forms and reports to open in new tabs so you can easily switch between them. This feature can only be enabled in MS Access 2010 and has no effect if opening with MS Access 2003.
If you use Access SVN on your Access file with Tabbed Documents turned on and off, you will see UseMDIMode: 0 and UseMDIMode: 1 show up in the Database properties file. UseMDIMode: 0 means that Tabbed Documents are turned on.
In PowerShell, I can now write a test to see if UseMDIMode: 0 can be found in the database properties file:
If the test passes, True will be returned. If it fails, False will be returned.
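The original PowerShell snippet is not included here; as a stand-in, the same check can be sketched with grep (the file name below is illustrative, written to the current directory rather than extracted from a real ADP):

```shell
# grep-based equivalent of the PowerShell check described above.
# Create a stand-in properties file, then test for the tabbed-documents flag;
# grep -q's exit status drives the True/False result.
printf 'UseMDIMode: 0\n' > dbprops.txt   # stand-in for "Database properties.dbp.txt"
if grep -q 'UseMDIMode: 0' dbprops.txt; then
  echo "True"
else
  echo "False"
fi
```

In PowerShell the equivalent idea is a `Select-String` match against the extracted properties file.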
On my build server, I scripted the extraction of Database properties.dbp.txt from the ADP file with asvn.exe before running this test. While not strictly needed, as Database properties.dbp.txt should be in source control, it is possible that someone could forget to extract the text files from the ADP; with this step you are always testing what is enabled in the binary file.
While developing with MS Access, I often swap the database connection to point to my local machine or a build server. I always try to remember to only ever commit with this set to the live database to avoid possible problems.
The other day I found on StackOverflow a way to script this. I love this! I can include this step in my deployment process, and it will overwrite whatever connection string is in source control with what your production environment needs.
All you need to run this step is (note it is spaces between the parameters, not commas):
cscript connect.vbs Project.adp “ServerName” “DatabaseName”
The contents of connect.vbs can be found in the StackOverflow article. It is also possible to pass a username and password if your environment requires this.
The last useful thing I do with MS Access is convert my ADP file into the compiled ADE version. To do this manually, there is an option in the Tools menu.
To automate this I run:
cscript createADE.vbs “path to ADP” “path to ADE”
The contents of createADE came from this forum post. The only change I made was to comment out some of the echo statements so it would run silently as part of my build process. It should be noted that cscript and wscript are roughly identical and either will run these scripts. However, cscript is preferable in a command-line environment, and wscript should be used in a windowed environment.
I am really surprised at how much I have managed to do in terms of scripting the build and deployment process for MS Access. I still don’t like developing with Access, but this has definitely improved things.
[quote]Originally posted by sizzle chest:
Actually, there have been several machines between the 8600 and these next-next-next generation ones, that will run OSX. You could have purchased one of those. And if those aren't good enough, buy one of the new ones coming out in the next month or so.
There will ALWAYS be bleeding-edge, early design stage stuff we'll hear about, that's faster than anything we can actually buy. It doesn't mean the stuff we can buy isn't worth buying, just because the bleeding-edge stuff is coming eventually.
I'm in a better position than sc, since in addition to my 8600, I have an iBook 466. However, my 8600 is still my main machine, mainly due to screen size, so I feel for sc. My 8600 is going to be five years old, and I would love to have OS X on the desktop. And while there are many capable machines out there one could purchase, why not look instead at the logic of the situation?
As another poster opined, anyone who bought a G4 early on or at the halfway point in its progression made a good purchase. The reason is the G4 didn't progress all that far - an increase of 500MHz with a loss of IPC efficiency. Now when I bought my 8600 to replace my Quadra 650, my 8600 had a more efficient processor with nearly ten times the clock speed. If Apple had made the same strides with the G4 that it made between my 650 and my 8600, we would have multi-GHz G4s right now.
Yet, as the MHz gap turned into a GHz gap, Mac users woke to the unfortunate truth that the G4 is woefully inadequate. Now IBM has just announced a chip that promises to blow the G4 away, a leap that should even dwarf the comparative difference between my 650 and 8600. Realize we're not simply talking about moving from a G4 @1000MHz to a G4 @1200MHz (which is probably all we'll get in the short term). We're talking instead about a huge leap in technology. The G4 has been holding Apple back; it will be dwarfed by this modern IBM chip. With this in mind, who could contemplate buying one of Apple's current desktop offerings? If Apple is going to use this new IBM line (and that's probably the only plausible inference to draw), then I'm waiting for the new POWER Macs, even if they're another year off or beyond.
[quote]Originally posted by tiramisubomb:
I don't think IBM will supply their Power4 chips for the Mac. The key issue here is the cost. Power requirements and heat dissipation will also lead to design problems.
I think you may have overlooked the basis of this particular thread.
From what we know now, I believe the IBM G5 is practically guaranteed.
[ 08-08-2002: Message edited by: Big Mac ]
Listening or viewing non-fiction/non-art (eg lectures, presentations) at realtime speed is tiresome. I’ve long used rbpitch (but more control than I need or want) or VLC’s built-in playback speed menu (but mildly annoyed by “Faster” and “Faster (fine)”; would prefer to see exact rate) and am grateful that most videos on YouTube now feature a playback UI that allows playback at 1.5x or 2x speed. The UI I like the best so far is Coursera’s, which very prominently facilitates switching to 1.5x or 2x speed as well as up and down by 0.25x increments, and saving a per-course playback rate preference.
HTML5 audio and video unadorned with a customized UI (latter is what I’m seeing at YouTube and Coursera) is not everywhere, but it’s becoming more common, and probably will continue to as adding video or audio content to a page is now as easy as adding a non-moving image, at least if default playback UI in browsers is featureful. I hope for this outcome, as hosting site customizations often obscure functionality, eg by taking over the context menu (could browsers provide a way for users to always obtain the default context menu on demand?).
Last month I submitted a feature request for Firefox to support changing playback speed in the default UI, and I’m really happy with the response. The feature is now available in nightly builds (which are non-scary; I’ve run nothing else for a long time; they just auto-update approximately daily, include all the latest improvements, and in my experience are as stable as releases, which these days means very stable) and should be available in a general release in approximately 18-24 weeks. You can test the feature on the page the screenshot above is from; note it will work on some of the videos, but for others the host has hijacked the context menu. Or try it on something that really benefits from 2x speed (which is not at all ludicrous; it’s my normal speed for lectures and presentations that I’m paying close attention to).
Even better, the request was almost immediately triaged as a “[good first bug]” and assigned a mentor (Jared Wein) who provided some strong hints as to what would need to be done, so strong that I was motivated to set up a Firefox development environment (mostly well documented and easy; the only problem I had was figuring out which of the various test harnesses available to test Firefox in various ways was the right one to run my tests) and get an unpolished version of the feature working for myself. I stopped when darkowlzz indicated interest, and it was fun to watch darkolzz, Jared, and a couple others interact over the next few weeks to develop a production-ready version of the feature. Thank you Jared and darkowlzz! (While looking for links for each, I noticed Jared posted about the new feature, check that out!)
Kudos also to Mozilla for having a solid easy-bug and mentoring process in place. I doubt I’ll ever contribute anything non-trivial, but the next time I get around to making a simple feature request, I’ll be much more likely to think about attempting a solution myself. It’s fairly common now for projects to at least tag easy bugs; OpenHatch aggregates many of those. I’m not sure how common mentored bugs are.
Back to playback rate, I’d really like to see anything that provides an interface to playing timed media to facilitate changing playback rate. Anything else is a huge waste of users’ time and attention. A user preference for playback rate (which might be as simple as always using the last rate, or as complicated as a user-specified calculation based on source and other metadata) would be a nice bonus.
Starting storm nimbus command doesn't work
I have zookeeper servers, and I'm trying to install storm using those zk servers.
My storm.yaml file looks like:
storm.zookeeper.servers:
- "ZKSERVER-0"
- "ZKSERVER-1"
- "ZKSERVER-2"
storm.local.dir: "/opt/apache-storm-2.2.0/data"
nimbus.host: "localhost"
supervisor.slots.ports:
- 6700
- 6701
- 6702
- 6703
I tested ping with those ZKSERVERs, and it worked fine.
However, when I start nimbus with ./storm nimbus command, it doesn't show any error, but it doesn't end either.
root@69e55d266f5a:/opt/apache-storm-2.2.0/bin:> ./storm nimbus
Running: /usr/jdk64/jdk1.8.0_112/bin/java -server -Ddaemon.name=nimbus -Dstorm.options= -Dstorm.home=/opt/apache-storm-2.2.0 -Dstorm.log.dir=/opt/apache-storm-2.2.0/logs -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib:/usr/lib64 -Dstorm.conf.file= -cp /opt/apache-storm-2.2.0/*:/opt/apache-storm-2.2.0/lib/*:/opt/apache-storm-2.2.0/extlib/*:/opt/apache-storm-2.2.0/extlib-daemon/*:/opt/apache-storm-2.2.0/conf -Xmx1024m -Djava.deserialization.disabled=true -Dlogfile.name=nimbus.log -Dlog4j.configurationFile=/opt/apache-storm-2.2.0/log4j2/cluster.xml org.apache.storm.daemon.nimbus.Nimbus
The terminal just shows the above logs, and that doesn't change until I run control+C.
What could be a problem here?
Can you share the log of the nimbus?
Generally, the nimbus will be in a running state until you stop it, or it faces an error. If you want to be sure about your nimbus status, you can check the log of your nimbus (./logs/nimbus.log).
This is the usual behavior when running the ./storm nimbus command: the process starts and stays in the foreground, as shown in your example.
If you want to run the storm in the background, try to run it with the nohup command
nohup ./storm nimbus > storms.log &
I am using FrameMaker release 2017 (version 188.8.131.521) and the welcome screen is displaying like this:
I'm using a work computer so I don't know if they have customized any setting. Here are some of my computer specs I do know:
Any idea on how I can fix this issue?
Did it just start doing this? Could be a messed up C:\Users\user_name\AppData\Roaming\Adobe\FrameMaker\version folder - try renaming it to like \2017_old and restart FM. It should recreate it.
I previously was on a Windows 7 machine and I got the same thing. I assumed the issue would be resolved once I switched to Windows 10 because I have Windows 10 at home and it looks fine on my personal subscription.
I don't even get an option for AppData following the path above.
It's a hidden folder, so you'll need to change your File Explorer View settings
App Data is hidden by default:
Just Uncle Bill Gates' attempt to protect us from ourselves
Thanks. I'm able to see the folder and tried renaming - but still no luck (also restarted computer).
Ok, another thing you can try is looking at the properties of the FM shortcut - there's a compatibility setting in there that talks about high DPI displays? You can play with that to see if it has any impact on how FM looks after relaunching.
How would I go about doing that?
Right-click on the shortcut that launches FM and have a look at the Properties.
The Welcome screen is just an HTML page.
By default FrameMaker uses Internet Explorer to open this page (I think).
Which browsers do you have installed? You might change your default browser.
What happens, when you open the welcome page with various browsers?
In FM 2019 it's here:
c:\Program Files\Adobe\Adobe FrameMaker 2019\fminit\dws\resources\welcomeScreen\welcome.html
Re: Winfried_Reng's point about the FrameMaker default browser configuration setting, I'm getting a similar distorted display of the Welcome Screen when I launch FrameMaker 2019. My Windows 10 Desktop PC has Google Chrome Version 81.0.4044.138 (Official Build) (64-bit) installed as the default browser, but when I right-click on the welcome.html file, a Windows IE Explorer icon is displayed on the File Properties display. Where would I modify the browser setting in FrameMaker 2019? Thank You!
To get to the AppData (Roaming) directory, you can type %appdata% into the address bar of a File Explorer window.
Pro tip: I save the Roaming directory to my shortcuts by dragging the folder into the File Explorer sidebar.
Not yet. I'm experiencing some other problems with the software, so it could be a firewall issue at our end. I'll update once we test that.
having the same issue and wondering if you ever found a resolution?
FrameMaker uses Internet Explorer browser support for the Welcome screen HTML.
Could you please review your security settings in IE?
and are you able to see Welcome screen present at "c:\Program Files\Adobe\Adobe FrameMaker 2019\fminit\dws\resources\welcomeScreen\welcome.html" correctly in IE?
I'm experiencing the exact same thing. For me, welcome.html displays fine in IE, Edge, Firefox and Chrome. What I see in FrameMaker is very similar to the image posted by the OP.
No, actually 2019.
Hmm, any other messed up screen effects in FM? If so, it might be the Windows screen resolution settings. If not, maybe it's something with the way FM renders IE doing the welcome page. Try running as Admin - any difference?
I don't have admin privileges. I'd have to put in an IT request and it's probably not worth it. I don't really use the welcome screen and I usually just X it out. I did notice that the date/time stamps on all the files in the welcomeScreen folder changed to 1/23/2020. That's around when the problem started. I can't do anything with the files so I don't know how that happened. And it doesn't explain why the page renders correctly in all other browsers.
It might be something to do with the way FM is invoking IE to display the page in your locked-down environment - I don't know.
We are having the same issue: FrameMaker 2019 v184.108.40.2068
Open FrameMaker and it's abnormally GYNORMOUS in the Welcome page.
Opened as Administrator: same result
We have verified it in the folders: C:\Program Files\Adobe\Adobe FrameMaker 2019\fminit\dws\resources\welcomeScreen\
FM 2019; like phyllisPEPY, the welcome screen just started showing HUGELY blown up and unresponsive.
I bet that most of you have faced this common problem: “How to upload large size files (above 4MB) in ASP.NET?” As you know, the default maximum file size allowed in the framework is 4MB and this is to prevent service attacks like submitting large files, which can overwhelm the server resources. In this blog post I am going to show you how to overcome this limitation, as well as how to validate the file size and type on the client before submitting it to the server.
All it takes to overcome the framework limitation is to make several modifications to the project's web.config file. To enable large file uploads you need to change the values of the following attributes on the system.web/httpRuntime configuration element: maxRequestLength (the request size limit, in KB) and executionTimeout (the allowed upload time, in seconds).
For example, the web.config configuration allowing uploads of files up to 100MB and upload periods of up to 1 hour should look like the following:
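The example configuration itself appears to have been lost here; a sketch of that shape, with the values computed from the 100 MB and 1-hour figures in the text (maxRequestLength is in KB, executionTimeout in seconds), would be:

```xml
<configuration>
  <system.web>
    <!-- 102400 KB = 100 MB; 3600 seconds = 1 hour -->
    <httpRuntime maxRequestLength="102400" executionTimeout="3600" />
  </system.web>
</configuration>
```

Note that on IIS 7 and later, request size may additionally be capped by the requestLimits element's maxAllowedContentLength attribute (in bytes) under system.webServer, which is a separate setting to check in your environment.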
Sometimes you want to limit the file type and size which the user is allowed to upload on the server. This might be related to the application’s security or overall performance.
Here are some validation options you can choose from:
The main disadvantage of the standard file upload control is that it does not offer client-side validation. Therefore the file needs to first be uploaded to the server and then validated by some custom implementation of such functionality. Here is a simple server-side validation scenario involving the ASP.NET FileUpload control:
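As a hedged sketch of such a server-side check (the control IDs, the 1 MB limit, and the JPEG-only rule are illustrative assumptions, not taken from the original article), code-behind for the standard FileUpload control might look like:

```csharp
// Code-behind sketch for an .aspx page assumed to contain:
//   <asp:FileUpload ID="FileUpload1" runat="server" />
//   <asp:Button ID="UploadButton" runat="server" OnClick="UploadButton_Click" Text="Upload" />
//   <asp:Label ID="StatusLabel" runat="server" />
protected void UploadButton_Click(object sender, EventArgs e)
{
    if (!FileUpload1.HasFile)
    {
        StatusLabel.Text = "Please select a file.";
        return;
    }

    // The file has already reached the server at this point; it can only be rejected after the fact.
    bool sizeOk = FileUpload1.PostedFile.ContentLength <= 1024 * 1024;   // 1 MB cap (illustrative)
    bool typeOk = FileUpload1.PostedFile.ContentType == "image/jpeg";    // JPEG only (illustrative)

    if (sizeOk && typeOk)
    {
        FileUpload1.SaveAs(Server.MapPath("~/Uploads/") + FileUpload1.FileName);
        StatusLabel.Text = "Upload successful.";
    }
    else
    {
        StatusLabel.Text = "Only JPEG files up to 1 MB are accepted.";
    }
}
```

The key point is that the bandwidth for the rejected upload is already spent, which is exactly the drawback the client-side approaches below avoid.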
The File API comes with the new web standard HTML5 and is supported by all modern browsers. Its main advantages are that it offers client-side validation and additionally supports the following features:
Let’s explain a bit about each of the modules:
The IFrame and Flash modules upload the selected file(s) using a normal HTTP POST request. The IFrame module uses the <input type="file" /> tag for file uploads, whereas Flash uses the Flex object in order to upload files.
It is very important to know that the Flash module allows you to validate both file type and size on the client side. The files are uploaded using Post HTTP request in absolutely the same manner as the normal <input type="file" />. Since both modules upload a file with a single request, in case of uploading a file larger than the 4MB ASP.NET default limitation, you need to modify your web.config file as shown in the first section.
Browser support: IE9,8,7.
In contrast, the Silverlight upload is designed in a different way so that it divides the file to be uploaded on the client side in many chunks, each of which is 2MB large. It then starts uploading the chunks one after another subsequently. It does support file type and size validation on the client.
Browser support: Firefox < 3.6, IE9,8,7.
If you’re using a third-party control, such as Telerik’s Asynchronous ASP.NET upload control, all of the above modifications will be taken care of for you.
RadAsyncUpload works with modules based on the above information in order to handle the upload process regardless of the current browser version. The module with the highest priority is File API. If the browser does not support File API, the module automatically falls back to Silverlight. If Silverlight is not installed on the client machine, RadAsyncUpload will utilize the Flash module. If neither Flash nor Silverlight is installed, the IFrame module is used.
RadAsyncUpload helps you overcome the 4 MB file size upload limitation in ASP.NET by dividing the large files into smaller chunks and uploading them sequentially. You can control the size of the chunks and thus the number of requests to the server required to upload the file, which can improve your application's performance. No need to modify the configuration file and override different properties. Just set the ChunkSize property to the desired value and you are ready to go!
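As a markup sketch (the control ID and the chunk size value are illustrative; consult the control's documentation for the property's exact units and defaults), setting the ChunkSize property might look like:

```xml
<telerik:RadAsyncUpload ID="RadAsyncUpload1" runat="server" ChunkSize="1048576" />
```

Each chunk then arrives as a request well under the default 4 MB limit, so no web.config changes are required.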
After reading this article you should now be able to:
Boyan Dimitrov is a Support Officer at Telerik’s ASP.NET AJAX Division, where he is mainly responsible for RadScheduler and the navigation controls. He joined the company in 2012 and ever since he has been working on helping customers with various RadControls scenarios and improving the online resources. Boyan’s main interests are the web, mobile and client-side programming. In his spare time he likes participating in sport activities, walking in the nature and reading criminal stories.
New Tools for Managing Large Tables in HANA and for SAP BW on HANA
In a ‘scale-out’ environment of SAP HANA, there may be many servers with InfoCubes and DSOs. Some of these can be so very large and accessed so very frequently that it can lead to bottlenecks, even for HANA. To fix this, you can partition large tables and move them across multiple servers to provide load balancing. In this blog, we look at some of the basic HANA partitioning options and new tools under development specifically for BW on HANA.
By Dr. Berg
When monitoring the system, you can sometimes see tables that have grown so large in HANA that it makes sense to split them ‘horizontally’ into smaller partitions. This is possible for column tables in HANA by default. This is really a nice way to manage high data volumes. The SQL statements and any data manipulation language (DML) statements do not need to know that the data is partitioned. Instead, HANA manages the partitions behind the scenes automatically. This simplifies the access and front-end development and also gives the administrators a key tool to manage disks, memory, and large column stores.
In a distributed (scale-out) HANA system, you can also place the partitions on different nodes and thereby increase performance even more since there will be more processors available for the users. In fact, this may become the standard deployment method for extremely large systems with tens of thousands of users.
Currently, HANA supports up to 2 billion rows in a single column table. In a partitioned schema, you can now have 2 billion rows per partition, and there is virtually no limit on how many partitions you can add. It becomes a hardware and landscape architecture issue, not a database limitation. From an admin standpoint, there are three different ways you can create partitions in HANA: by range, by hash, and by round-robin. While more complex schemas are possible with multi-level partitioning, these three options cover the basics used in the higher-level options. So let’s take a look at the fundamental partitioning choices.
Option 1: Partitioning Column Tables by Range
If you know your data really well, you can partition the data by any range in your table. While the most common is date, you can also partition by material numbers, postal codes, customer numbers, or anything else.
A partition by date makes sense if you want to increase query speed and keep current data on a single node. Partitioning by customer number makes sense if you are trying to increase the speed of delta merges, since multiple nodes can be used at the same time during data loads. So you have to spend some time thinking about what benefits you want to achieve before undertaking any partitioning scheme. It should be noted that the maintenance of range partitions is somewhat higher than for the other options, since you will have to keep adding new partitions as data outside the existing partitions emerges (i.e., next year’s data if you partition by year now). Partitioning is done in SQL, and the syntax for range partitioning is simply:
CREATE COLUMN TABLE SALES (sales_order INT, customer_number INT, quantity INT, PRIMARY KEY (sales_order))
PARTITION BY RANGE (sales_order)
(PARTITION 1 <= values < 100000000,
 PARTITION 100000000 <= values < 200000000,
 PARTITION OTHERS)
This creates a table with three partitions. The first two cover 100 million sales order values each, and the last takes all other records. There are some basic rules, though. First, the field we partition on has to be part of the primary key (i.e., sales_order). Second, the field has to be defined as a string, date, or integer. Finally, we can only partition column stores, not row stores.
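Behind the scenes, HANA routes each row to a partition by comparing the key against the range boundaries. As a rough illustration of that routing (not HANA’s actual implementation), the lookup for the table above can be sketched in Python:

```python
import bisect

# Upper bounds of the two explicit range partitions from the SQL above.
BOUNDS = [100000000, 200000000]

def range_partition(sales_order):
    """Return the partition a given sales_order value would be routed to."""
    if 1 <= sales_order < BOUNDS[-1]:
        # bisect_right counts how many bounds the value has passed.
        return "PARTITION_%d" % (bisect.bisect_right(BOUNDS, sales_order) + 1)
    return "PARTITION_OTHERS"

assert range_partition(42) == "PARTITION_1"
assert range_partition(150000000) == "PARTITION_2"
assert range_partition(987654321) == "PARTITION_OTHERS"
```

Values below 1 or at or above the last bound fall through to the OTHERS partition, mirroring the `PARTITION OTHERS` clause.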
Option 2: Partitioning Column Tables by Hash
Unlike partitioning by ranges, partitioning column stores by hash does not require any in-depth knowledge of the data. Instead, partitions are created by an internal algorithm that the system applies to one or more fields in the database. This is known as a hash. The records are then assigned to the required partitions based on this internal hash number. The partitions are created in SQL, and the syntax is:
CREATE COLUMN TABLE SALES (sales_order INT, customer_number INT, quantity INT, PRIMARY KEY (sales_order, customer_number))
PARTITION BY HASH (sales_order, customer_number) PARTITIONS 6
In this example we are creating six partitions by sales order and customer number. There are some rules, though. If the table has a primary key, it must be included in the hash. If you partition on more than one column and your table has a primary key, all the fields you partition on also have to be part of the primary key. If you leave off the number (6), the system will determine the optimal number of partitions itself based on your configuration; this is therefore the recommended setting for most hash partitions.
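Conceptually, hash partitioning just maps the key fields through a hash function and takes the result modulo the partition count. A small Python sketch of the idea (illustrative only; HANA’s internal hash function is not public):

```python
import zlib

NUM_PARTITIONS = 6  # matches the SQL example above

def hash_partition(sales_order, customer_number):
    """Assign a row to a partition by hashing its key fields."""
    key = ("%d:%d" % (sales_order, customer_number)).encode()
    return zlib.crc32(key) % NUM_PARTITIONS

# The same key always lands in the same partition, and every
# bucket index stays within range.
assert hash_partition(1, 100) == hash_partition(1, 100)
assert all(0 <= hash_partition(s, c) < NUM_PARTITIONS
           for s in range(20) for c in range(20))
```

Because the assignment is deterministic, lookups by key only need to touch one partition, while inserts spread evenly across all of them.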
Option 3: Partitioning Column Tables by Round-Robin
In a ‘round-robin’ partition, the system assigns records to the partitions on a rotating basis. While this makes for efficient assignments and requires no knowledge of the data, it also means that removing partitions in the future will be harder, as both new and old data will be in the same partitions. The partitions are created in SQL, and the syntax is:
CREATE COLUMN TABLE SALES (sales_order INT, customer_number INT, quantity INT)
PARTITION BY ROUNDROBIN PARTITIONS 6
Here, we are creating six partitions and assigning records on a rotating basis. If you change the last statement to PARTITIONS GET_NUM_SERVERS() the system will assign the optimal number of partitions based on your system landscape. The only requirement here is that the table does not contain a primary key.
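Round-robin assignment is the simplest of the three; it can be sketched in Python with a rotating counter (again, just an illustration of the concept):

```python
from itertools import cycle

NUM_PARTITIONS = 6
rotation = cycle(range(NUM_PARTITIONS))  # yields 0, 1, ..., 5, 0, 1, ...

# Incoming rows are handed out to partitions in strict rotation,
# regardless of their content.
placement = {"row-%d" % i: next(rotation) for i in range(8)}

assert placement["row-0"] == 0
assert placement["row-5"] == 5
assert placement["row-6"] == 0  # the rotation wraps around
```

This is why round-robin gives perfectly even data distribution but offers no way to prune partitions by key or date later on.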
Moving Files and Partitions for Load Balancing
You can periodically move files and file partitions for column tables to achieve better load balancing across hosts. Redistributions are particularly useful if you are adding, or removing, a node from the system, creating new partitions or load balancing existing ones that have grown very large. However, before you start, make sure you save your current distributions so that you can recover in case you make a mistake. If you have the system privilege Resource Admin, you can open the administration editor in SAP HANA and choose landscape –> redistribution and click ‘save’. Then select ‘next’ and ‘execute’. You have now saved the current distribution and can recover if anything goes wrong.
Once this is done, you can go to the Navigator pane in Studio and select the Table Distribution Editor. From here, you can see the catalog, schemas, and tables. Select the object you want to display, and choose Show Table Distribution. You can also filter to a single host as needed. This will display the first 1,000 tables in the area you selected. If more are available, you’ll see a message box.
In the overview lists, you can now select any table you want to analyze, and the details are displayed in the Table Partition Details area. You can move the table to another host by right-clicking it and selecting Move table. If you want to move a partition instead of a table, you can select the partition instead and do the same. This may be very useful if you want to load-balance large tables across multiple hosts or consolidate the partitions to single hosts. For detailed recommendations on load balancing, see SAP Note 1650394 for large table management.
A New HANA Partitioning Tool for SAP BW
SAP is also working on a tool to help you automate the partitioning and merge tasks for SAP BW. It is currently planned for release to non-pilot customers in SP1.x for 7.3. The first part of the new partitioning tool for BW on HANA allows you to repartition DSOs and InfoCubes.
The second, more advanced part of the tool also allows you to merge, split, and move partitions in an admin interface. You can also schedule this to run as background jobs with your own parameters, and even pick the partitioning schemes we discussed in options 1, 2, and 3 above.
Both of these capabilities are still under development at SAP and may change before being released to non-pilot customers. But it is really great to see that SAP is taking the capabilities outlined in the general partitioning discussion above and placing them into a BW tool that makes the tasks of creating and managing partitions much more manageable.
SAP HANA’s capabilities keep evolving, and new tools keep being developed at a high pace. In this blog we looked at the core capabilities for managing very large tables, and at new tools that may be available in the very near future to help you do this with BW on HANA.
Next time, we will explore more of the admin features of SAP HANA.
Currently Krita only imports the brush texture from an .abr file; you have to recreate the brushes by adding appropriate values for size, spacing, etc. They can be modified/tagged in the brush preset editor.
How do I import bundles into Krita?
To import a bundle, click the Import Bundles/Resources button on the top right side of the dialog. Select the .bundle file format from the file type if it is not already selected, browse to the folder where you downloaded the bundle, select it, and click Open.
What brushes to use in Krita?
For sketching the simple round brush is enough, for painting I would use something with texture (“Dry” or “Chalk” brushes). Tip: you can right-click on a brush and add a flag to it, and then filter all brushes for that flag – that way you’ll see only brushes you chose in a docker.
How do I make my Krita brush pressure sensitive?
- Make sure all your drivers are updated — check your tablet desktop client and your windows updates.
- Make sure your tablet connection works. ( …
- Open Krita.
- On the toolbar, mouse over to ‘Settings > Configure toolbars… >
- Make sure that ‘mainToolBar <Krita>’ is selected in ‘Toolbar:’
Can you download more brushes for Krita?
You can download the mix-brushes.bundle file directly here (in a zip; extract it after download) or from this folder (source git here). Open Krita, go to Settings, then Manage Resources, and then click on the Import Bundle/Resources button. Select the mix-brushes.bundle file.
Where are Krita brushes stored?
Krita’s brush settings are stored into the metadata of a 200×200 PNG (the KPP file), where the image in the PNG file becomes the preset icon. This icon is used everywhere in Krita, and is useful for differentiating brushes in ways that the live preview cannot.
How do I install Krita plugins?
Go to Tools ‣ Scripts ‣ Import Python Plugin…, find the *.zip file and press OK. Restart Krita. Go to Configure Krita ‣ Python Plugins Manager, find the plugin and enable it.
Can you import pictures into Krita?
Import the image to the layer stack by going to Layer menu > Import/Export > Import as paint layer. Then keep this newly imported layer below the layer you want to add the image to as a mask, right-click on the mask image, go to the Convert section, and click Convert to Transparency Mask.
Can you trace on Krita?
For tracing your line art you can import it into your document (or copy-paste it), then add a paint layer above it by pressing the Insert key. Then select the line-art scan layer and reduce its opacity, OR right-click on it and uncheck the blue and green channels (this will make it blueish), and then select the newly added layer and …
Is Krita good for beginners?
Krita is one of the best free painting programs available and includes a great variety of tools and features. … Since Krita has such a gentle learning curve, it’s easy – and important – to familiarise yourself with its features before diving into the painting process.
How do I get all the brushes in Krita?
Load the brushes into Krita.
Use the second alternative for brushes imported from GIMP or Photoshop. If you have a .brush file, click Settings > Manage Resources > Import Bundle/Resource to load its contents. Use Edit Brush Presets to import or delete brushes from Krita’s library of brushes.
In my role as a Spring library developer at Neo4j, I spent the last year – together with Gerrit – creating the next version of Spring Data Neo4j. Our name so far has been Spring Data Neo4j⚡️RX, but in the end it will be SDN 6.
Anyway. Part of the module is our Neo4j Cypher-DSL. After working with jOOQ, a fantastic tool for writing SQL in Java, and seeing what our friends at VMware are doing with an internal SQL DSL for Spring Data JDBC, I never wanted to create Cypher queries via string operations in our mapping code ever again.
So, we gave it a shot and started modeling a Cypher-DSL after openCypher, but with Neo4j extensions supported.
You’ll find the result these days at neo4j-contrib/cypher-dsl.
Wait, what? This repository is nearly ten years old.
Yes, that is correct. My friend Michael started it back in the day. There are only a few things where you won’t find him involved. He even created jequel, a SQL-DSL, and was an author of the paper On designing safe and flexible embedded DSLs with Java 5, which in turn influenced jOOQ.
Therefore, when Michael offered that Gerrit and I could extract our Cypher-DSL from SDN/RX into a new home under the coordinates
org.neo4j:neo4j-cypher-dsl, I was more than happy.
Now comes the catch: it would have been easy to just delete the main branch, create a new one, dump our stuff into it, and call it a day. But I actually wanted to honor history – the original project’s as well as ours. We always tried to have meaningful commits, put a lot of effort into commit messages, and I didn’t want to lose that.
Adding content from one repository into an unrelated one is much easier than it sounds:
# Get your self a fresh copy of the target git clone git@wherever/whatever.git targetrepo # Add the source repo as a new origin git remote add sourceRepo git@wherever/somethingelse.git # Fetch and merge the branch in question from the sourceRepo as unrelated history into the target git pull sourceRepo master --allow-unrelated-histories
But then, one does get everything from the source. Not what I wanted.
The original repository needed some preparation.
git filter-branch to the rescue.
filter-branch works with the “snapshot” model of commits in a repository, where each commit is a snapshot of the tree, and rewrites these commits. This is in contrast to
git rebase, which actually works with diffs. The command will apply filters to the snapshots and create new commits, creating a new, parallel graph. It won’t care about conflicts.
Manisch has a great post about the whole topic: Understanding Git Filter-branch and the Git Storage Model.
For my use case above, the build in
subdirectory-filter was most appropriate. It makes a given subdirectory the new repository root, keeping the history of that subdirectory. Let’s see:
# Clone the source, I don't want to mess with my original copy git clone sourceRepo git@wherever/somethingelse.git # Remove the origin, just in case I screw up AND accidentally push things git remote rm origin # Execute the subdirectory filter for the openCypher DSL git filter-branch --subdirectory-filter neo4j-opencypher-dsl -- --all
Turns out, this worked well, despite that warning:
WARNING: git-filter-branch has a glut of gotchas generating mangled history
rewrites. Hit Ctrl-C before proceeding to abort, then use an
alternative filtering tool such as ‘git filter-repo’
(https://github.com/newren/git-filter-repo/) instead. See the
filter-branch manual page for more details; to squelch this warning,
set FILTER_BRANCH_SQUELCH_WARNING=1.
I ended up with a rewritten repo containing only the subdirectory I was interested in as the new root. I could have stopped here, but I noticed that some of my history was missing: the filtering only looks at the actual snapshots of the files in question, not at the history you get when using
--follow. As we had already moved those files around a bit, I lost all that valuable information.
Well, let’s read the above warning again and we find filter-repo.
filter-repo can be installed on a Mac for example with
brew install git-filter-repo, and it turns out it does exactly what I want, given that I vaguely know the original places of the stuff I want to have in my new root:
# Use git filter-repo to make some content the new repository root git filter-repo --force \ --path neo4j-opencypher-dsl \ --path spring-data-neo4j-rx/src/main/java/org/springframework/data/neo4j/core/cypher \ --path spring-data-neo4j-rx/src/main/java/org/neo4j/springframework/data/core/cypher \ --path spring-data-neo4j-rx/src/test/java/org/springframework/data/neo4j/core/cypher \ --path spring-data-neo4j-rx/src/test/java/org/neo4j/springframework/data/core/cypher \ --path-rename neo4j-opencypher-dsl/:
This takes a couple of paths into consideration, tracks their history, and renames the one path (the blank after the
: makes it the new root).
With the source repository prepared in that way, I cleaned up some meta and build information, added one more commit and incorporated it into the target as described at the first step.
I’m writing this down because I found it highly useful and also because we are gonna decompose the repository of SDN/RX further. Gerrit described our plans in his post Goodbye SDN⚡️RX. We will do something similar with SDN/RX and Spring Data Neo4j. While we have to manually transplant our Spring Boot starter into the Spring Boot project via PRs, we want to keep the history of SDNR/RX for the target repo.
Long story short: While I was skeptical at first ripping the work of a year apart and distributing it on a couple of projects, I’m seeing it now more as a positive decomposing of things (thanks Nigel for that analogy).
iDeators offers highly-rated data science course programs that will help you learn how to visualize and make use of new data, as well as develop new, innovative technologies.
Whether you’re interested in Data Science, Machine Learning, the R language, Python, or Deep Learning, iDeators has a course for you.
Data Science has been creating a lot of buzz around the world, and Data Science professionals are more in demand than ever. With that, the demand for good mentors who can help them apply their skills in the real world has also risen significantly.
In such a scenario, the role of institutions offering data science courses in Mumbai, or any other city for that matter, is of utmost importance. There are numerous courses available at the entry level and even the undergraduate level, so even entry-level individuals bring a lot of expertise on board, ultimately raising the bar of expectations. Deep Learning, Big Data, and other similar courses in Mumbai and other areas make these entry-level professionals industry-ready.
In spite of this advanced situation, there is still a large gap between academic training and real-world practice. This gap leads to many questions: What project can be assigned to an entry-level hire? How can a manager define an entry-level professional’s job role?
Develop skills and knowledge
As well as qualifications, you’ll need to demonstrate specific skills and knowledge.
Many people pursue a master’s degree in data science, but there are other routes you can take, such as e-learning courses, to acquire the relevant knowledge. Depending on the requirements of the role you want, you may need:
- To know how to code with a language such as Python or C#
- To know how to use SQL
- Experience with Hadoop or similar platforms
- Experience in machine learning and AI
- To be able to visualize and present data with software or platforms such as ggplot, d3.js, or Tableau
To answer these queries and more, here’s a list of few things suggested by the experts:
- Cleaning Data: Data scientists spend nearly 80% of their time cleaning and sorting data and the remaining 20% analyzing it. Given that ratio, a new hire’s first few weeks should be spent learning to clean and sort data, as it will help them understand the data sets and their features.
- Know Your Domain: A newcomer should be encouraged to work with diversified data. They should also be pushed to learn as many statistical programming languages as possible.
- Speed Up on SQL: SQL is very significant for an entry-level job profile. It is generally not taught during courses, but in the practical world it is commonly used and very much in demand. So a manager should invest time in making sure the new hire is well-versed in SQL.
- Define Their Roles: Define their roles and make them understand how their contribution will help the organization attain its larger goal. This will not only help them understand their contribution but also encourage them to push the envelope.
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
use App\Http\Requests;
use Laracasts\Flash\Flash;
use DB;
class AccionesController extends Controller
{
public function index()
{
$file_local = 'up_'.date("Ymd").'.sql';
$file_server = 'fcec_'.date("Ymd").'.gz';
$mensaje = 'Presione el botón';
if (env('APP_ENV')=='local')
{
$name_host = 'LOCALHOST';
$fileDown = $file_local;
$fileUp = $file_server;
}else{
$name_host = 'SERVER';
$fileDown = $file_server;
$fileUp = $file_local;
}
return view('admin.acciones.index')
->with('name_host',$name_host)
->with('fileDown',$fileDown)
->with('fileUp',$fileUp)
->with('mensaje',$mensaje);
}
// Backup del localhost
public function exportLocal($file_name)
{
$cmd = 'C:\wamp\www\fcec\mysql_upload.bat';
exec($cmd,$output,$return_value);
if ($return_value == 0){
//$file_name = 'up_'.date("Ymd").'sql';
$mensaje = 'Back Up en: C:\\wamp\\www\\fcec\\public\\' . $file_name ;
Flash::success('Archivo creado: '.$mensaje);
return redirect()->back();
}else{
$file_name = '';
$mensaje = 'Error en grabar archivo de Back Up.';
Flash::error($mensaje);
return redirect()->back();
}
}
// Backup del server
public function exportServer()
{
// https://voragine.net/weblogs/como-hacer-copias-de-seguridad-de-bases-de-datos-con-php-y-mysqldump
// variables
$dbhost = env('DB_HOST');
$dbname = env('DB_DATABASE');
$dbuser = env('DB_USERNAME');
$dbpass = env('DB_PASSWORD');
$backup_file = $dbname .'_'. date("Ymd") . '.gz';
// comandos a ejecutar
$command = "mysqldump --opt -h $dbhost -u $dbuser -p$dbpass $dbname | gzip > $backup_file";
system($command,$output);
// Verifica creacion del backup
if ($output == 0){
$file_name = public_path() . '/' . $backup_file ;
// Descarga del backup creado
if(!$this->downloadFile($file_name)){
$mensaje = "No se pudo descargar el archivo " . $file_name;
}
}else{
$file_name = '';
$mensaje = 'Error en Back Up.';
}
//return $mensaje;
}
// Restore en el localhost
public function importLocal()
{
}
// Restore en el Server
public function importServer()
{
}
public function ExportSQL()
{
if (env('APP_ENV')=='local')
{
$name_host = 'LOCALHOST';
$cmd = 'C:\wamp\www\fcec\mysql_upload.bat';
exec($cmd,$output,$return_value);
if ($return_value == 0){
$file_name = 'up_'.date("Ymd").'.sql';
$mensaje = 'Back Up en: C:\\wamp\\www\\fcec\\public\\' . $file_name ;
// echo 'Back Up en: C:\\wamp\\www\\fcec\\public\\' ;
}else{
$file_name = '';
$mensaje = 'Error en Back Up.';
// echo 'Error en Back Up.';
}
}else{
$name_host = 'SERVIDOR REMOTO';
// https://voragine.net/weblogs/como-hacer-copias-de-seguridad-de-bases-de-datos-con-php-y-mysqldump
// variables
$dbhost = env('DB_HOST');
$dbname = env('DB_DATABASE');
$dbuser = env('DB_USERNAME');
$dbpass = env('DB_PASSWORD');
$backup_file = $dbname .'_'. date("Ymd") . '.gz';
// comandos a ejecutar
$command = "mysqldump --opt -h $dbhost -u $dbuser -p$dbpass $dbname | gzip > $backup_file";
system($command,$output);
// Verifica creacion del backup
if ($output == 0){
$file_name = public_path() . '/' . $backup_file ;
// Descarga del backup creado
if(!$this->downloadFile($file_name)){
$mensaje = "No se pudo descargar el archivo " . $file_name;
}else{
$mensaje = "Archivo descargado: " . $file_name;
}
}else{
$file_name = '';
$mensaje = 'Error en Back Up.';
}
}
return view('admin.acciones.index')
->with('name_host',$name_host)
->with('file_name', $file_name)
->with('mensaje', $mensaje);
}
protected function downloadFile($src)
{
if(is_file($src)){
$finfo = finfo_open(FILEINFO_MIME_TYPE);
$content_type = finfo_file($finfo, $src);
finfo_close($finfo);
// Nombre sin PHP_EOL (un salto de línea corrompe la cabecera)
$file_name = basename($src);
$file_size = filesize($src);
// Comillas dobles/concatenación para que las variables se interpolen
header('Content-Type: '.$content_type);
header('Content-Disposition: attachment; filename='.$file_name);
header('Content-Transfer-Encoding: binary');
header('Content-Length: '.$file_size);
readfile($src);
return true;
} else {
return false;
}
}
public function DownData()
{
return view('errors.000');
}
}
Why a Dangerous Security Flaw in USB Devices Is Putting Computers Everywhere at Risk Thumb drives and other USB devices are vulnerable to malware that could let an attacker take over a user's computer.
Opinions expressed by Entrepreneur contributors are their own.
USB devices and the computers their users connect them to are vulnerable to malicious code that can totally take over a user's computer, manipulate files stored on the drive and redirect Internet traffic.
Security researchers Karsten Nohl and Jakob Lell first demonstrated the attack this summer at the Black Hat security conference in Las Vegas where they showed a large crowd how their malware embeds itself in the firmware that allows USB devices to communicate with computers, Wired reports.
Nohl and Lell did not publish their code, called BadUSB, for fear that it would be used for nefarious purposes; but now two other researchers have opened Pandora's Box.
At last week's Derbycon hacker conference, two other researchers, Adam Caudill and Brandon Wilson, demonstrated that they'd reverse-engineered the BadUSB malware and then published it on Github for anyone to see.
"The belief we have is that all of this should be public. It shouldn't be held back. So we're releasing everything we've got," Caudill said at Derbycon. "If you're going to prove that there's a flaw, you need to release the material so people can defend against it."
Caudill's statement highlights a philosophical split among security researchers: those who elect to keep the flaws they find under wraps in order to protect the public directly, and others, who believe publishing their software exploits is the best way to put pressure on the industry to fix security flaws quickly.
In an interview with Wired, Caudill said even if this particular flaw isn't being used by garden variety hackers already, he believes well-funded organizations, like the NSA, may already have the capability and are using it.
"You have to prove to the world that it's practical, that anyone can do it … That puts pressure on the manufactures to fix the real issue," Caudill said. "If this is going to get fixed, it needs to be more than just a talk at Black Hat."
Because the malware is stored on the device's firmware, which controls the basic functionality of the device, it's very difficult to detect and can't even be deleted by clearing the storage contents. Caudill also demonstrated how the malware can be used to hide files and secretly disable password-protected security features.
Before last week's demonstration Nohl told Wired that he considered this exploit to be basically unpatchable. In order to mitigate against these types of attacks, he said, the entire security architecture would have to be rebuilt from the ground up with code that cannot be changed without the manufacturer's signature. Even then, he said, it could take more than a decade to get rid of vulnerable devices and smooth out all the new bugs.
Both research teams reverse engineered the firmware from USB devices made by Phison, a Taiwanese company and one of the largest USB device makers. Even if you don't use Phison devices yourself, your computer is still vulnerable, especially if you swap files with other users or happen to pick up a new free thumb drive at a business conference.
|
OPCFW_CODE
|
The latest version of Twitter In An App. is now available!
There are quite a few changes in this version which helps to improve your experience, and whilst it’s a stable build introducing better support for those pesky non-rendering videos, it’s not 100% perfect and I’ll continue working on it!
Labelled version 0.3.12, this update brings the following changes:
- Fixed ‘Find URLs’ function no longer working (introduced in 0.3.10, possibly also 0.3.06)
- Fixed ‘lack-lustre’ client by adding ‘Copy Image’, which copies an image to the clipboard
- Added ‘Browser URL’ to launch selected URL in external browser
- Fixed ‘Enable Browser Cache’ when disabled not supporting cookies. Now cookies (Remember me) will work even with Browser Cache Disabled!
- Fixed ‘Enable Browser Cache’ when disabled not supporting Chromium options ‘disable GPU’, ‘allow-running-insecure-content’, ‘enable-media-stream’ and ‘debug-plugin-loading’
- Added ‘Enable Menu Confirmations’ option which, when turned off, will not display message boxes after clicking menu items (such as ‘Copy URL’, ‘Save Image’ etc..)
- Fixed notification icon when hovering mouse / new notifications arriving causes notification window to animate in reverse incorrectly
- Fixed notification window(s) not updating with new notifications when already visible/animating
- Completely re-engineered video URL extraction using ‘Offscreen’ browser (fixes ‘has stopped working’ error ‘0x4000001f’ in ceflib.dll)
- Fixed inadvertent Window flashing introduced by previous video URL parser
- Fixed short GIF image looping can cause strange lock-up / exception message
- Opt-out option for Video support added, videos will not attempt to render unless supported by Chromium
- Pre-parse URL option added (can now choose to parse video URL on-demand, less memory intensive, but longer waiting time)
- Fixed ALL links opening in internal browser, now delegates to external browser where appropriate
- Moved some options around in the options screen!
- Added version number to ‘Options’ screen for support purposes!
- Added Video URL ‘pre-parsing’ when Tweets appear, caches video links in memory
- Added support for Vine videos in popup player
- Fixed Media Player ‘Close’ button cursor so button looks like it’s clickable
- Added proxy panel for mouse tracking to only show toolbar after 500ms of being hovered over it [extra 4px space dedicated for this]
- Reduced ‘Remove Conversation Link’ timing to 250ms (removes the link quicker!)
- Added TweetDeck Tweak ‘Include Hashtags In Reply’ (defaults to on) to always include the original hashtags when replying to a tweet
You can grab the new version from the downloads section now!
|
OPCFW_CODE
|
i run on windows xp pro sp3
so for no real reason yesterday my computer, after a video game crashed, decided to freeze. so i rebooted and when i logged back on... it wasn't loading anything... really...
no ati ccc, no sound devices, no desktop icons, it wasnt responding to any action, mouse was moving fine however but clicks went responding only like 1 minute later...
so i go in safe mode and it runs ok, so i decide to clean the disk and defrag. took 20hours to defrag... it was almost at 90% fragmented. weird since last times i did defrags it wasnt in any way bad
so i do that and boot back up. same problem
i go back safemode with networking and run a kaspersky online scan. it didnt detect anything
i tried to uninstall/reinstall graphic drivers since they weren't loading and thought it might be the issue... i DL'd the file from amd's website but when i try to install... i get the error saying it cannot find the proper driver for the hardware or OS. even though i am 100% sure its the right card (hd4850) and os (xp pro 32bit)...
driver sweeper doesnt change anything it wont install after a cleanup insafe mode. so im gva driver-less now lol
i wanna clear out that it isnt running slow. it is running slow AND not working.
takes 10+ min to load most of the programs of the task bar, the others dont load, and opening a .jpg file froze my computer (not the mouse but nothing was responding)
i uninstalled several video games since it was maybe the cause<they did cause awkward crashes a long time ago> and nothing changed... still slow/unusable normal mode xp
i am writing this (in safe mode) and thinking the only thing i can do is reformat since i cant find a way to "repair" windows...
i am clueless...
my datas/musics are on a 2nd physical drive
the drive running the os has 2 partitions: the 2nd isnt used or formatted(it would be free to put a dif OS on)
...help me please xD
Edit: in safe mode it takes 25 seconds for the icons, start bar, etc. to show after I click "yes" on the "You are in safe mode..." bar thingy when Windows loads.
In normal mode, half the bar and icons show, but only 1/3 have picture-icons. And even after 15 minutes I couldn't click on anything; it was frozen, but not the mouse.
I'm guessing it's driver problems, but I don't understand why? I mean, nothing changed besides updating anti-virus software, programs and games... however, one game did crash a lot because of errors loading files from the game's folder (but the problem is in the game itself, not Windows).
Edit 2: I tried to repair Windows with the disk. It didn't change a thing.
Edit 3: I put Windows XP Pro SP3 on the said partition and it is fine, and probably the best thing I could do until I get an answer =P
Edited by Shuh, 14 October 2009 - 12:10 PM.
|
OPCFW_CODE
|
Make pg/redshift database verification case-insensitive
Database names are case sensitive in Postgres (and other databases).
When running dbt I got the following:
ERROR: Cross-db references not allowed in postgres (stg_rammeraal_82136 vs STG_rammeraal_82136)
To avoid this, please apply the following patch:
```diff
index af27aef2..f442281e 100644
--- a/plugins/postgres/dbt/adapters/postgres/impl.py
+++ b/plugins/postgres/dbt/adapters/postgres/impl.py
@@ -23,10 +23,8 @@ class PostgresAdapter(SQLAdapter):
     def verify_database(self, database):
         if database.startswith('"'):
             database = database.strip('"')
-        else:
-            database = database.lower()
         expected = self.config.credentials.database
-        if database != expected:
+        if database.lower() != expected.lower():
             raise dbt.exceptions.NotImplementedException(
                 'Cross-db references not allowed in {} ({} vs {})'
                 .format(self.type(), database, expected)
```
This may be an issue with other database vendors as well.
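Outside of dbt's adapter plumbing, the patched check boils down to something like this standalone sketch (a generic `ValueError` stands in for dbt's `NotImplementedException`; the function name mirrors the patch, everything else is illustrative):

```python
def verify_database(database, expected):
    """Compare a referenced database name against the configured one.

    Quoted identifiers have their quotes stripped; the comparison
    itself is case-insensitive, so STG_foo and stg_foo match.
    """
    if database.startswith('"'):
        database = database.strip('"')
    if database.lower() != expected.lower():
        raise ValueError(
            'Cross-db references not allowed ({} vs {})'.format(database, expected)
        )

# Mixed-case references to the same database now pass:
verify_database('stg_rammeraal_82136', 'STG_rammeraal_82136')
```

A genuinely different database name still raises, so cross-db references are still caught.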
Thanks for the report @rpammeraal! Please feel free to submit a pull request for this change -- it's a very good one :)
We've recently added a CLA to the dbt contribution process. This CLA ensures that contributions to the dbt codebase are "original work that does not violate any third-party license agreement." As such, we're not well-suited to copy/paste code into patches to the dbt-core repository. If you're able to submit a PR for this, I'd be very happy to merge it! If not, we can retitle this issue to "Make Postgres database comparisons case-insensitive" and prioritize it on our own accord. Let us know!
I looked at the requirements on how to create a PR -- it is quite a bit of work. Also, some other changes need to be made in the underlying Postgres modules to ensure that database (and other entity) names are always compared in lower case in Postgres -- however, I don't have the bandwidth to set up, debug and test all this.
Thanks,
Roy
I don't believe the fix in #1918 fully addresses this issue. While the cross-db error message has indeed been addressed, dbt now fails with an error like:
Compilation Error in model debug2 (models/debug2.sql)
When searching for a relation, dbt found an approximate match. Instead of guessing
which relation to use, dbt will move on. Please delete "mydb"."test_schema"."debug2", or rename it to be less ambiguous.
Searched for: "MyDB"."test_schema"."debug2"
Found: "mydb"."test_schema"."debug2"
Here, my database is named MyDB on Postgres. We can reproduce this failure mode with:
-- models/my_model.sql
select 1 as id
And a profile configured with:
debug:
target: dev
outputs:
dev:
type: postgres
host: localhost
user: drew
pass: password
port: 5433
dbname: MyDB
schema: test_schema
threads: 8
Followed by:
dbt run
dbt run
The first dbt run will succeed while the second dbt run will fail.
|
GITHUB_ARCHIVE
|
In this video, we're going to talk about an important part of having users on a machine, and that's working with passwords. Passwords add security to our user accounts and machines; they make it so that only Marty knows the magic secret to access her account and no one else does, not even the admin of the computer. When setting up a password, you want to make sure that you and only you know that password. Remember, if you're managing other people's accounts on a machine, you shouldn't know what their password is. Instead, you want the user to enter the password themselves. To reset a password in the GUI, let's go back to our Computer Management tool. Under Local Users and Groups, we're going to right-click on a username, like this account Sarah. Let's click on Properties. Then from here, we're just going to check this box that says "User must change password at next logon", then Apply and hit "OK." Then, when the user logs into the account, they'll be forced to change their password. If they forgot their password, you have the option to set a password for them manually, by right-clicking and selecting Set Password. This has some caveats though, like losing access to certain credentials. You can read more about this option in the supplemental reading right after this video. To change a local password in PowerShell, we're going to use the DOS-style net command. There's a native PowerShell command that can be used to set the password, but it's a little more complicated. It requires a bit of simple scripting to use. For now, we'll stick to the simpler, less powerful net command. net does lots of different things; changing local user passwords is just one of them. If you want to learn more about what the net command can do, take a look at the documentation in the supplementary reading for the command. Since this is an old DOS-style command, you can also use the /? parameter to get help on the command from the CLI.
To change a password for a user, the command is net user, then the username and password. The best way to use this command is to use an asterisk instead of writing your password out on the command line. If you use an asterisk, net will pause and ask you to enter your password, like so. Why is this approach better? Imagine you're changing your password and right at that moment someone walks behind you and glances over your shoulder. Your password isn't a secret anymore. You should also know that in many environments, it's common that the commands folks run on the machines they use are recorded in a log file that's sent to a central logging service. So it's best that passwords of any kind are not logged in this way. Do you notice a problem with the asterisk approach, though? That's right. If I change a password for someone else using this command, I would know their password, and that's not good. Instead, we're going to do what we did in the GUI and force the user to change the default password on their next logon using /logonpasswordchg:yes. So I'm just going to force Victor to change his password on the next logon: net user victor /logonpasswordchg:yes. The /logonpasswordchg:yes parameter means that the next time Victor logs into this computer, he'll have to change his password. Sorry, Victor.
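Side by side, the two forms of the command from this section (victor is just the example account name):

```
net user victor *
net user victor /logonpasswordchg:yes
```

The first form prompts you to type the new password without echoing it to the screen or the logs; the second leaves the password alone and instead forces Victor to pick a new one at his next logon.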
|
OPCFW_CODE
|
- Extensive development experience in design, development, maintenance and support of .net applications and jobs built on Asp.Net, .Net Core, WebApi, MVC and Angular, with Oracle, PostgreSQL and SQL Server, and deploying code using Git CI/CD pipelines.
- Implemented solutions using Rest API, SOAP and WCF Services to fulfill business requirements.
- Experienced in writing integration tests with acceptance test-driven development (ATDD) and unit tests using NUnit, MOQ and Selenium frameworks.
- Experience in working with Configuration Management tools like Git, SVN, TFS, Visual SourceSafe version control systems.
- Working knowledge in processing large sets of structured, semi-structured and unstructured data and supporting systems application architecture.
- Working knowledge in multi-tiered distributed environment, OOAD concepts, good understanding of Software Development Lifecycle (SDLC) and AGILE Methodologies.
- Possess good business knowledge of the Banking & Finance, Transportation, Entertainment and Communications domains, and experience in translating business needs into technical requirements.
- Strong experience in XML related technologies.
- Hands-on experience with the ETL tool SSIS and the reporting tool SSRS.
- Experience in assessing business rules, collaborate with stakeholders and perform source-to-target data mapping, design and review.
- Excellent analytical and problem-solving skills; able to understand business requirements and work independently as well as part of a team.
Frameworks: Asp.Net, MVC, .net Core, Angular, REST, SOAP, Swagger API
Unit Testing: NUnit, Selenium, MOQ
Build Tools: GIT CI/CD
Database: Oracle, SQL server, SyBase, Postgres, MySQL
Tools: Visual Studio IDE, SSRS, SSIS, GIT, SVN, Jira, Rally, HP ALM, Postman, Google Rest Client, Visio, Vault, WIX.
Development Methodologies: Iterative, Agile Scrum, Waterfall
Confidential, Englewood, CO
- Integrated Angular 7 application with .net core apis and used HTTP Client to perform HTTP Requests and Response.
- Implemented Responsive Design using CSS4 and Media queries for cross screens.
- Implemented Angular Router for single page application for navigation.
- Maintained environment variables to differentiate environment builds.
- Established Angular lazy loading strategy to optimize application performance.
- Helping with production support pre- and post-deployment of solutions with stakeholders/teams.
- Used Angular CLI for tasks such as minifying, auto reloading, deploying.
- PostgreSQL and SQL Server query optimization, queries for reporting using joins, group by and aggregate functions.
- Added and maintained Stored Procedures, triggers and functions in SQL Server.
- Mentored junior developers to pick up applications and environments quickly.
- Applied core Angular features like HTTP, Data Binding, Forms, Services and Dependency Injection, Lazy Loading, Route, Interceptors, Pipes, directives.
- Usage of TypeScript for Services, Models, testing and other based resources.
- Developed http requests using RxJs observer / observables to send / receive requests and responses.
- Usage of built in and custom Pipes for data transformations. Mentoring junior team members on the process and technology.
Environment: .Net core, Angular, Redis, PL/SQL, GIT, PostgreSQL, REST, HP ALM, Rally, Visual Studio, Swagger, Google Rest Client, Postman and Scrum methodologies.
- Modernization of an online account opening system using Asp.Net, C#, Web API, MVC and Angular as a full stack developer. Using annotations for input validation.
- Create design diagrams and architecture diagrams based on the business requirements. Development and maintenance of APIs and Micro services to fulfill business requirements.
- Development of ETL (extract, transform and load) SSIS jobs to process Confidential Bank's RMI and RMO interest rates.
- Performing Unit testing and Integration testing in Dev, Dev-Int environments. Creating unit test cases using MOQ framework.
- Consuming SOAP based services for credit check and Address verification
- Providing access to users and supporting production issues. Used WIX for bundling and deployment. Mentoring junior team members on the process and technology.
- Deploying blueprints and releases to Azure servers after tear down process in Dev, Dev-Int, SIT, Pre Prod environments
Environment: Asp.Net, MVC, C#, SQL server, Web api, Angular, SSIS, JQuery, REST, Swagger, Postman, SoapUI, Jira, GIT, Rally and Visio
- Creating UI pages in Asp.Net MVC as per the requirement and doing validation using JQuery. Designing grids with KendoUI and JQuery.
- Writing Controller and Model code in C# and Entity Framework. Creating LINQ-based data access using C# and doing unit testing.
- Performance tuning in both UI and server code. Using annotations for input validation.
- Developed reports using SQL Server Reporting Services. Mentoring junior team members on the process and technology.
- Created Silverlight control to fix fast tabbing issue with KendoUI grid.
- Used Version One and JIRA for project tracking and defect. Developed code according to test cases developed (TDD) prior to development.
- Created logs in code analyzing logs to find issues and debug them.
Environment: Asp.Net MVC, C#, SQL Server, Version One,Entity Framework, SSRS, JQuery, Team Foundation Server, Silver Light, Kendo UI
- Responsible for development, maintenance of PRISM application which tracks software usage of Confidential employees and spend report in Asp.net, C# and SQL Server.
- UI design, database schema design and development for the new requirements using Asp.Net C#.
- Doing weekly data refreshes and monthly reports using custom SQL queries. Pulling data, processing delimited data files, and performance tuning SQL queries.
- Generating custom reports using stored procedures and queries. Build and deployment activities using RFC creation using Hermes.
- Provisioning access to users. Troubleshooting and resolving production incidents and updating status. Created logs in code analyzing logs to find issues and debug them.
Environment: Asp.Net, C#, SQL Server, Hermes, JIRA
- Understanding requirements and developing UI and data access code using Asp.Net, C#, Ajax and SQL server.
- Developed drag and drop modules using web parts. Developing SSIS packages for Extract, Transform and Load data from various data sources.
- Developing reports using SSRS. Doing hands-on work in BizTalk Server using various adapters for automation.
- Creating stored procedures and inline queries for data access.
- Unit testing the code developed and peer reviewing teammates' modules. Defect fixing and production incident fixes.
- Designing low-level database schemas and ER diagrams using SQL Server. Creating stored procedures and writing C# code for data access.
- Used Ajax control Toolkit and Ajax libraries for avoiding entire page refresh and better user experience. Creating windows services to run scheduled jobs.
- Used third party map controls and grids for the UI design. Created Calendar control to bind events in the front end.
- Using HTML/CSS designs in .net code and customizing the UI by modifying HTML and CSS as per the requirements. Used .net Themes and skins.
- Optimizing tables using indexes by analyzing the queries with SQL tools.
- Unit testing and peer reviewing the code. Assisting the production support team by giving training to the team.
|
OPCFW_CODE
|
React: filter array: Can I work with elements that didn't pass the test?
I am working on a React app. In my React app, I have a price list in a "Price" popup window, which I coded so that I can edit, add, or delete each item in the price list. However, I need to modify the delete function to change the status of a button from disabled to active.
I set it up so that if I "Add" a price in the "Price" popup window, the "Add" button becomes disabled to prevent adding more entries, since the price entry is supposed to be limited to one entry per day.
Therefore, the way my delete function is coded, it filters through the price list and removes the price from the price list.
handleDeletePrice = deletePrice => {
// const { date } = this.props;
this.setState(prevState => ({
// spreads out the previous list and delete the price with a unique id
priceArr: prevState.priceArr.filter(item => item.date !== deletePrice)
}));
};
What I am attempting to do is check whether the deleted price's date is the same as today's date. If the deleted price's date is today, then I will activate the "Add" button in the "Price" popup window so that I can add another price for today.
Here's my attempt that's not working:
handleDeletePrice = deletePrice => {
// const { date } = this.props;
this.setState(prevState => ({
// spreads out the previous list and delete the price with a unique id
priceArr: prevState.priceArr.filter(item => {
if (item.date !== deletePrice) {
return item;
}
if (deletePrice == todaydate) {
return buttonDisabled: true;
}
})
}));
};
The buttonDisabled is a state.
constructor(props) {
super(props);
this.state = {
priceArr: this.props.pricelist,
showPricePopup: false,
addPricePopup: false,
todaydate: new Date().toLocaleDateString(),
date: props.date || "",
number: props.number || "",
buttonDisabled: this.props.buttonStatus
};
}
Do I need to use a different method to go through the price array, such as forEach, to push the deleted price into another array and then filter through the deleted array to see if it matches today's date, in order to activate the button?
You can check out my CodeSandBox demo at https://codesandbox.io/s/github/kikidesignnet/caissa
You can update your buttonDisabled flag based on whether deletePrice is equal to today's date or not. If it is equal to today's date, then buttonDisabled is false, and vice versa. I tried this in the CodeSandbox and the add button gets enabled again.
Also, the deletePrice variable could be renamed deletePriceDate for better readability.
handleDeletePrice = deletePrice => {
const { todaydate } = this.state;
console.log("deletePrice", deletePrice);
this.setState(prevState => ({
// spreads out the previous list and delete the price with a unique id
priceArr: prevState.priceArr.filter(item => item.date !== deletePrice),
buttonDisabled: !(deletePrice === todaydate)
}));
};
Also, regarding your original question about working with the elements that fail the filter test: the lodash library does have a reject function, but there is no built-in JavaScript method for that.
Thanks! It's now so simple when I was thinking of the solution from a complicated perspective...
Just a slight change in data structure might solve your problem. You can use javascript Map instead of priceArr array. Disabled state of your button depends only on one entry of prices list. Using Map you can just check if entry for particular date exists or not. It will also help in deletion and you can avoid iterating over same array again and again. Please let me know if you need more help with code.
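To sketch the Map idea (illustrative values only, not the app's actual state shape):

```javascript
// Keyed by date string, so "is there already an entry for today?"
// becomes a single lookup instead of a filter over an array.
const prices = new Map();
const today = new Date().toLocaleDateString();

// Adding today's price disables the Add button...
prices.set(today, { date: today, number: 100 });
const addDisabled = prices.has(today); // true

// ...and deleting it re-enables the button, no filtering needed.
prices.delete(today);
const addEnabled = !prices.has(today); // true
```

Deletion by date is also O(1) with `prices.delete(date)`, so the filter pass over the whole array goes away entirely.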
|
STACK_EXCHANGE
|
My G+ Metastasis System Described
Short Version: I have cobbled together a rickety system to originate content in G+ and have it appear in Twitter, Facebook and/or my blog – all controlled by hashtags.
Very Long Version:
I've grown weary of the fragmentation of my social media existence. I've been looking for a way to have one single point of entry for my content. My original idea was to post everything to my blog in a special "microblog" category and have things push from there. The holdup is that G+ is not only hard to write to, but apparently the SMS posting loophole has been closed:
I opted to start with the sharing system as described here:
I'm using a large chunk of that for my system, so Mike gets the bulk of the credit for this. The key point there is the steps that use Pluss Feed Proxy for G+:
then piped through Feedburner. (I'm experimenting with not laundering through Feedburner and using the Pluss Atom feed directly.) Once that is done, an RSS (or Atom) feed exists of all your Google+ posts. Given that, there are a lot of things that can happen. Mike Elgan uses ManageFlitter as the posting system, which I set up. I have also now set up basically the same thing using only IFTTT rules, all driven by hashtags.
If the post contains #twt, then tweet it:
If the post contains #fb, then post to Facebook:
If the post contains #blog, then post to my blog:
The thing I like least about this is the way the title in the Atom or RSS feed comes from the G+ post. I'd like to have it truncate at the newline if the post begins with a short paragraph, rather than just run on until it runs out of characters. The Pluss Feed Proxy server code is open source. I considered getting my own copy, making that change and running my own instance, but I think I'd rather submit a patch and have that go into the server that is already there.
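The title rule I have in mind is roughly this (a hypothetical Python sketch to make the idea concrete — not the proxy's actual code, and the 100-character cutoff is just a placeholder):

```python
def feed_title(post_text, max_len=100):
    # If the post opens with a short first line, use that as the
    # feed title; otherwise fall back to a hard cut at max_len.
    first_line = post_text.split("\n", 1)[0]
    if len(first_line) <= max_len:
        return first_line
    return first_line[:max_len]
```

So a post that starts with a short paragraph gets that paragraph as its title, and only a wall-of-text post gets the run-on truncation the proxy does today.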
As mentioned above, I'm trying this with and without Feedburner in the middle. Mike Elgan cites the cleanup of the feed as the reason to use it. It does also add a lot of latency and one more link in this chain. Since the whole thing is a brittle Rube Goldberg machine, every one of those you can eliminate is one fewer place for it to break.
For now, I'm going to run with this a while and see how it works. This will be a sizable post on the blog by the time it pushes there, and too long for Facebook to get the whole thing. It will be an interesting experiment just to see the different levels of truncation and how everything handles it.
I'm interested in any feedback people have. If you use this, or improve it, let me know your experience. I got it from Mike and made some twists, so let me know what twists you make, please.
|
OPCFW_CODE
|
Relationship data listed wrong
Bug description
When calling the findMany method with an include, the relationship data is listed wrong.
How to reproduce
description in prisma information
Expected behavior
Relationship data is listed correctly
Prisma information
[
{
topic_id: 1,
formatId: 51, // <--
format: {
id: 20, // <--
}
},
{
topic_id: 1,
formatId: 20, // <--
format: {
id: 20, // <--
}
},
{
topic_id: 1,
formatId: 65, // <--
format: {
id: 20, // <--
}
}
]
model format {
id Int @id
format_recommendation format_recommendation[]
}
model format_recommendation {
topic_id Int
format format? @relation(fields: [formatId], references: [id])
formatId Int? @map("format_id")
}
return this.prisma.format_recommendation.findMany({
where: {
topic_id: data.topic_id,
formatId: {
not: null,
},
format: {
deleted: false,
},
},
include: {
format: true,
},
orderBy: {
rank: 'asc',
},
});
Environment & setup
OS: mac
Database: pg
Node.js version: 16
Prisma Version
prisma : 3.10.0
@prisma/client : 3.10.0
Current platform : darwin
Query Engine (Node-API) : libquery-engine 73e60b76d394f8d37d8ebd1f8918c79029f0db86
Migration Engine : migration-engine-cli 73e60b76d394f8d37d8ebd1f8918c79029f0db86 (at node_modules/@prisma/engines/migration-engine-darwin)
Introspection Engine : introspection-core 73e60b76d394f8d37d8ebd1f8918c79029f0db86 (at node_modules/@prisma/engines/introspection-engine-darwin)
Format Binary : prisma-fmt 73e60b76d394f8d37d8ebd1f8918c79029f0db86 (at node_modules/@prisma/engines/prisma-fmt-darwin)
Default Engines Hash : 73e60b76d394f8d37d8ebd1f8918c79029f0db86
Studio : 0.458.0
Hi @endroca! Unfortunately I wasn't able to reproduce this with the information present in the report (neither with the latest Prisma, nor with 3.10.0).
This sounds like it might be a bad bug, and we would love to investigate and fix it, but there is not enough context in this issue for us to figure out what's happening: the result of the query is quite different from what you get with the schema you provided (note that I had to infer and add a few fields for the query to be valid), and this functionality is tested quite extensively. Would you be able to come up with a working reproduction, ideally as a GitHub repo?
Thank you!
We're closing this issue due to inactivity. Please leave a comment below, ideally with a reproduction repository, if you have more actionable feedback about this issue. Thanks!
|
GITHUB_ARCHIVE
|
ASP.NET MVC & SQL Server vs. PHP & MySql
I'm going to develop a strategy browser-based game with ASP.NET MVC and SQL Server.
However, as I explored the web to get some ideas and patterns from existing games, I found out that none of them were developed with .NET tools.
So this raised a few questions for me about the reasons behind this:
1) Is there any big draw back for microsoft technologies in this area?
2) Why are all the games in this field developed with PHP & (maybe?) MySQL?
3) Is there any performance issue with above mentioned microsoft technologies in this area?
Thanks in advance
Your questions are a bit too open ended to be reasonably answered. You should edit your question to narrow the scope. 1) The drawbacks and their importance, depend on how you're using the software and how it aligns with your requirements. 2) As Alex says, that's a pretty big generalization, and difficult to answer, since we don't know for sure why someone decided to use X instead of Y. 3) Again, this depends on how you're using it, how well you implement your functionality and what kind of performance you require.
I'm voting to close for a combination of what-tech-to-use and not-gamedev specific. For a browser game, the details of the web server that sends and records data are almost irrelevant. Anything that provides a data service will do, all equally well. The point of data over http is that the source doesn't matter. And likewise, storing data on a server is not a game dev question. The data structure and logic...maybe. But not the operating environment.
@Seth Battin - I asked this Q because I don't know PHP at all (or any specifics of it), and I didn't find anywhere else to ask this Q... So I was really worried about starting my project with Asp.Net MVC and SQL Server. In fact, my main Q was #2, and I got my answer - Thanks to eBusiness
@SethBattin Actually, if you look at the gorilla vs. shark article http://blog.stackoverflow.com/2011/08/gorilla-vs-shark/ which lists all the good reasons for not allowing technology choice questions I don't see a single one of them really being relevant to this question. 1: OP did in fact need to know if there is anything holding ASP.NET back. 2: The playing field in this case is games, it could have been narrowed further, but it still does allow a reasonably concrete debate. 3: It doesn't seem to be a lesser resource for learning than a lot of other questions.
4: Of course we need to limit the amount of technology questions, but that seems to be reasonably handled by the duplicate questions rule. The next PHP vs. ASP.NET question we get can probably be closed as a duplicate of this one.
There are some differences between ASP.NET and PHP; mostly they suit different programming styles. They both use a similar request/response pattern designed for generating old-school web pages. They seem about equally useful.
I think the trends have more to do with market powers than technical differences, until a few years ago you couldn't get a good free software stack for ASP.NET, so hobbyists mostly went with PHP.
Both ASP.NET and PHP are geared towards treating each incoming request as a separate entity. If you want to share data between the processing of different requests you are more or less forced to do so through a database, and that can be a huge bottleneck.
What tool should you actually use? Well, I'm glad you asked. Node.js provides easy-to-use HTTP server functionality without making any assumptions about what threading structure you would like to use or how you want to share data internally. For a basic website the setup is a bit more complicated than PHP or ASP.NET, but for a game server you'll be rid of a lot of needless constraints. Send whatever data needs to persist in case of a crash to the database, and keep everything else in process memory.
Just to add a bit: if you choose the ASP.NET approach, look into SignalR. It allows you to keep things in memory and avoid the constant DB approach you mentioned in this answer. Instead, you only query the DB at certain key points (i.e. start/end of a battle and not every round of it). Similar to Node. http://www.asp.net/signalr
|
STACK_EXCHANGE
|
Garry's Mod - Props won't spawn after launching Lua script
I have a test server for GMod. I've coded a script that works great when I launch it, but there are a lot of downsides to it.
I've tried to code a script that will simply change the user's speed if they type a command like "!speed fast" or "!speed normal". It looks like this:
table = {}
table[0]="!help"
table[1]="!speed normal"
table[2]="!speed fast"
table[3]="!speed sanic"
hook.Add("PlayerSay", "Chat", function(ply, message, teamchat)
if message == "!speed normal" then
GAMEMODE:SetPlayerSpeed(ply, 250, 500 )
elseif message == "!speed fast" then
GAMEMODE:SetPlayerSpeed(ply, 1000, 2000 )
elseif message == "!speed sanic" then
GAMEMODE:SetPlayerSpeed(ply, 10000, 20000)
elseif message == "!help" then
for key, value in pairs(table) do
PrintMessage( HUD_PRINTTALK, value)
end
end
end)
As you can see, the script changes the user's speed if they type "!speed normal", "!speed fast" or "!speed sanic" in chat. The script also contains a table of every command, which is shown if the user types "!help" in chat.
When I launch the script it works great, but if I try to spawn a prop after I've launched it, the prop won't spawn. Even when I spawn a prop first, then launch the script and try to "undo" the prop, the "undo" function won't work! The script makes the Sandbox gamemode completely useless, because you can't even spawn props!
I've tried to search around on the internet, but I haven't stumbled across something like this yet, so I hope someone has the solution! Please help.
You should use a text editor that is able to highlight keywords, so you won't overwrite Lua's essential libraries with your own stuff. From your code snippet I also don't see any reason why your table has to be global. Use local variables wherever possible.
My guess is that this is happening because you are overwriting the global table. The table library contains helper functions for tables. Try renaming your table to something else, like commands. I would also suggest you declare it as local commands so it does not replace any other global and does not interfere with anything else, like other scripts or libraries.
Also, as an extra tip, Lua tables are conventionally indexed from 1, so you could declare your renamed table as:
local commands = {
"!help",
"!speed normal",
"!speed fast",
"!speed sanic",
}
You could then iterate over it with a normal for:
for index = 1, #commands do
PrintMessage(HUD_PRINTTALK, commands[index])
end
This makes it a bit cleaner, in my opinion.
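Putting both suggestions together, the hook from the question might look like this (an untested sketch, same commands and speeds as before):

```lua
-- Local table, so Lua's global `table` library is left alone.
local commands = {
    "!help",
    "!speed normal",
    "!speed fast",
    "!speed sanic",
}

hook.Add("PlayerSay", "Chat", function(ply, message, teamchat)
    if message == "!speed normal" then
        GAMEMODE:SetPlayerSpeed(ply, 250, 500)
    elseif message == "!speed fast" then
        GAMEMODE:SetPlayerSpeed(ply, 1000, 2000)
    elseif message == "!speed sanic" then
        GAMEMODE:SetPlayerSpeed(ply, 10000, 20000)
    elseif message == "!help" then
        -- 1-based iteration over the renamed, local table.
        for index = 1, #commands do
            PrintMessage(HUD_PRINTTALK, commands[index])
        end
    end
end)
```

With `table` no longer clobbered, the Sandbox spawn/undo code that relies on the global table library should work again.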
|
STACK_EXCHANGE
|
Apache 2.2 htaccess Require Password and IP Address
In htaccess using Apache 2.2.x, is there a way to require a password and a certain IP address, and block outright everyone else?
I've tried all the Allow/Deny/Require/Satisfy combinations I could find. Maybe someone here has the answer? I did an extensive search, but everyone is looking to bypass a password for certain IP addresses, not demand both an IP and a password.
Is there any specific reason why you want to block the ip directly from the server and not through an app?
Yes, this is a pre-authorization setup to allow access to a WordPress wp-login.php file. Currently I have it set up to require a login/password in Apache before people get to the WordPress login, to block WP brute-force attacks, but for some domains I want to restrict it even further, to just my IP address.
As a side note... I did find a way to accomplish this but not just using Apache. I use nginx (running as a proxy) to block all but certain IP addresses before passing the proxy through to Apache which then requires the password. Problem solved, kinda.
I would suggest setting up your configuration to require just a password first and, once you have that working as intended, add the correct Allow from directive to the htaccess file. For example
Allow from <IP_ADDRESS>
You should not need to add anything else as Satisfy All is the default, but if you are still having problems add this as well.
If you are still having problems show us the htaccess file and check the rest of your config for overriding configuration.
New config based on discussion below:
<Location />
Order allow,deny
AuthType Basic
AuthName "Restricted Files"
AuthBasicProvider file
AuthUserFile /path/to/htpasswd
Require valid-user
Satisfy all
Allow from <IP_ADDRESS>
</Location>
Additional info: you don't need the <Location> block if this is in htaccess.
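As a sketch, the same rules dropped straight into an .htaccess file would look like this (assuming Apache 2.2 and that AllowOverride permits these directives; <IP_ADDRESS> and the htpasswd path are placeholders):

```apache
# Deny everyone except <IP_ADDRESS>, then also require a valid password
Order allow,deny
AuthType Basic
AuthName "Restricted Files"
AuthBasicProvider file
AuthUserFile /path/to/htpasswd
Require valid-user
Satisfy all
Allow from <IP_ADDRESS>
```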
Sorry it doesn't work since it works off a first rule wins method. You can't both require an IP address and a Password... it only allows one or the other. My solution works fine though of using nginx with it.
It does work, and I've used it several times myself. The whole point of Satisfy All is to allow you to require multiple conditions for authorisation/authentication. If your configuration is not working then you have either done it wrong or you have conflicting configuration somewhere. The fact that you have it working in nginx suggests your distro distributes initial apache configuration files which are "broken" in some way.
I think you'll find it doesn't work if you are denying everyone except one IP, and also require a password from that IP. Everyone except that IP should be denied without a password prompt. Using Satisfy All in the case where you are denying everyone gives them a password prompt instead of denying them. Therefore, using Satisfy All will not work in this case. This is why I said "...block outright everyone else" in my question... "everyone else" should not even get a password prompt. Thanks for trying though.
Just added a config to my answer that does what you require. Someone not from <IP_ADDRESS> gets denied immediately, otherwise asked for a password. If I've misunderstood, please elaborate.
Hey that works!
I've been using it to try and add security to a block and for some reason it doesn't work inside that, but it does in a location block. Hmm.....
<LocationMatch> may be what you need then. If you're having problems with the configuration in a <Files> block it may be a merging problem. You can read about merging here: http://httpd.apache.org/docs/2.2/sections.html#mergin
|
STACK_EXCHANGE
|
#!/usr/bin/env ruby
require 'RightScaleAPIHelper'
require 'yaml'
require 'base64'
require 'json'
@debug = true
def read_config(config_file, environment)
begin
raw_config = File.read(config_file)
@APP_CONFIG = YAML.load(raw_config)[environment]
debug "Read the configuration file"
rescue Exception => e
puts "Failed to read the configuration file"
puts e.message
puts e.backtrace.inspect
exit 1
end
end
def init(options={})
# This is the initial connection to RightScale
begin
@rs_conn = RightScaleAPIHelper::Helper.new(@APP_CONFIG[:account_id], Base64.decode64(@APP_CONFIG[:username]),
Base64.decode64(@APP_CONFIG[:password]), format='js' )
debug "Connect to RS API complete"
rescue Exception => e
puts "Failed making the connection to RightScale"
puts e.message
puts e.backtrace.inspect
exit(1)
end
# TODO : SHAZBOT : Need to add the connector for AWS.
# Working on one side at a time.
end
def debug(message)
if @debug == true
puts message
end
end
def get_servers
begin
resp = @rs_conn.get('/servers')
# 200 Success :: anything else is failure
unless resp.code == "200"
#raise "Error requesting server list. Error code #{resp.code}"
# Not sure that I want to raise an exception. Will give back raw data.
return {code: resp.code, body: resp.body}
end
# Convert the output to json
server_list = JSON.parse(resp.body)
# server_list.each do |server|
# begin
# puts server
# puts server["nickname"]
# ref = server["href"]
# resp = @rs_conn.get("#{ref}/settings")
# puts "Response code: #{resp.code}"
# puts resp.body
# rescue Exception => e
# puts e.message
# puts e.backtrace.inspect
# exit 1
# end
# end
rescue Exception => e
raise e
end
return {code: resp.code, body: server_list}
end
def get_volumes
begin
resp = @rs_conn.get('/ec2_ebs_volumes')
unless resp.code == '200'
puts "Failed to gather ec2_ebs_volumes. Error code #{resp.code}"
return nil
end
return JSON.parse(resp.body)
rescue Exception => e
puts "Error getting ec2_ebs_volumes"
puts e.message
puts e.backtrace.inspect
end
end
def generic_get (query)
begin
resp = @rs_conn.get(query)
unless resp.code == '200'
puts "Failed to run query: #{query}.\n Error code #{resp.code}"
puts "Body: #{resp.body}"
return nil
end
return JSON.parse(resp.body)
rescue Exception => e
puts "Error running RightScale request."
puts e.message
puts e.backtrace.inspect
end
end
begin
# First thing is to read the configuration file.
read_config(ARGV[0], ARGV[1])
# now we initiate our connections
init
#server_list = generic_get '/servers'
#get_servers
server_arrays = generic_get('/server_arrays')
count = 0
icount = 0
server_arrays.each do |array|
#puts "#{count} : #{array['nickname']} : #{array["active_instances_count"]}"
array_instances = generic_get(array["href"] + "/instances")
array_instances.each do |instance|
puts "#{icount} : #{instance['nickname']} : #{instance['resource_uid']} : #{instance['cloud_id']}"
icount += 1
end
count += 1
end
puts "Total arrays : #{count}"
puts "Total servers : #{icount}"
#server_list.each do |server|
# puts server
#end
#volume_list = get_volumes
#volume_list.each do |volume|
# puts volume
#end
#puts "Listing individual machine"
#server_info = generic_get "https://my.rightscale.com/api/acct/44210/servers/869991001"
#puts "server_info is of type #{server_info.class}"
#server_info.each do |server_item|
# puts server_item
#end
rescue
# Don't die on me
end
|
STACK_EDU
|
So the NYT came out with an editorial detailing how Silicon Valley firms should become more diverse.
It seems wrong to blame firms for their hiring decisions in this case, however. Firms are rational players, and they won't (nor should they) hire more diverse employees as long as the applicant pool itself doesn't change.
Let me illustrate this with a very simple model. Suppose a firm's output Y is a function of the average ability of its workers, A-bar, and of its workers' diversity, D: Y = F(A-bar, D). Additionally, assume simply that output is increasing in both of these arguments, i.e. F_A-bar > 0 and F_D > 0.
These are some very benign assumptions right here. Furthermore, suppose that mean ability is a function of diversity, A-bar = g(D), without any restrictions on the direction of this relationship. Under this set of assumptions, suppose a firm chooses its level of diversity in order to maximize output: max over D of F(g(D), D). After maximization, the first-order condition implies the following relationship at the optimum: g'(D) = -F_D / F_A-bar < 0. The inequality follows from our reasonable assumptions that the marginal products of both mean ability and diversity are positive.
So with this simple model, under a very loose set of assumptions, we can show that the mean ability of workers at a firm is decreasing in diversity. Therefore, if a firm wishes to increase its level of diversity, it will have to sacrifice some mean ability, i.e. it will have to hire some workers with lower abilities.
Another way to see this is via a more concrete example. Suppose there are 10 workers applying for a job, and the firm wants to hire 5 of them. 80% of them are majority workers (e.g. men, White, Asian), 20% of them are minority workers (e.g. women, Black, Latino). Within both groups the ability distribution follows a bell curve-like distribution.
So the 8 majority workers have abilities of: A, B, B, C, D, E, E, F. Here, ability is measured from A to F, A being the best.
The 2 minority workers have abilities of A and F.
If the firm hires by ability, they will hire the two A ability workers (one majority, one minority), and three more majority workers (the ones with abilities B, B, and C). So altogether they will hire 4 majorities and 1 minority, exactly the 80-20 split we see in the applicant pool.
If the firm hires by diversity (making ability a secondary concern), and instead decides to hire both minority workers, then they will hire the minority with ability F and ditch the majority with ability C. Mean ability clearly declines.
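The arithmetic in this example is easy to check with a few lines (a sketch only; mapping the letter grades onto an arbitrary 0–5 numeric scale is my own assumption):

```python
# Toy hiring example from the text: grades A..F mapped to numbers (A best).
GRADE = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1, "F": 0}

majority = ["A", "B", "B", "C", "D", "E", "E", "F"]
minority = ["A", "F"]

def mean_ability(hired):
    """Average numeric ability of a list of hired workers' grades."""
    return sum(GRADE[g] for g in hired) / len(hired)

# Hire 5 purely by ability: both A's, then B, B, C (4 majority, 1 minority).
by_ability = ["A", "A", "B", "B", "C"]

# Hire both minority workers, dropping the majority C for the minority F.
by_diversity = ["A", "A", "B", "B", "F"]

print(mean_ability(by_ability))    # 4.2
print(mean_ability(by_diversity))  # 3.6
```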
These results hold as long as the ability distribution within the minority group is the same as (or worse than) the distribution in the majority group. In these cases, if only 20% of the applicants are minorities, then hiring more than 20% minorities will hurt mean ability.
Of course, a minority group with a superior ability distribution (say Asians) can be hired in higher shares. But minority groups with the same or worse distributions (which likely include women, Blacks and Latinos) cannot be hired in a higher fraction than their number in the applicant pool without hurting mean abilities.
In other words, firms like Google cannot hire more than 20% women if only 20% of computer science graduates are women without decreasing the mean ability of their hires. And if the ability distribution of women within the computer science field is actually worse than that of men (for whatever reasons), firms would actually need to hire even less than 20% women in order to maximize mean ability.
Now, I’m not saying this situation is desirable or unchangeable or anything. It is just a fact. It shows that if one wants change, it probably shouldn’t start with Google and co. We should get more women to study the fields Silicon Valley needs, and we should ensure that female graduates of these fields are as competitive (in terms of ability) as males. If we can do that, Silicon Valley will hire more “minorities”.
I’m also not saying that we should necessarily incentivize women to enter into fields like computer science. This is just an option if we want more gender equality and whatnot in Silicon Valley. But there are theories implying that women in countries with more gender equality will self-select more into traditionally female fields for evolutionary reasons (if anybody knows of peer-reviewed empirical research on this, do let me know). So the lack of women in computer science may not be a big problem.
Finally, I’m not saying that there is no discrimination in the tech industry. It’s just that its role/magnitude may be much smaller than what certain people (would like us to) think.
|
OPCFW_CODE
|
Updated: Aug 1
The rapid expansion of communication technologies has led to a significant increase in data traffic, particularly in the area of streaming services such as Netflix, YouTube, and Twitch. This growth in popularity means users expect seamless, high-quality content delivery. To meet these demands, the development of more sophisticated and efficient methods to streamline streaming traffic is crucial. This article delves into how cutting-edge AI switching systems can reshape communication technology and enhance streaming traffic optimization.
Advanced AI Switching Systems
Cutting-edge AI switching systems represent a breakthrough in communication technology. They employ artificial intelligence and machine learning algorithms to intelligently analyze, direct, and optimize streaming traffic (1). These systems use advanced algorithms to dynamically allocate resources based on current network conditions, user preferences, and content types being streamed (2).
Reinforcement Learning (RL) is one such algorithm that allows the AI system to learn and adapt to varying network conditions, thus improving its decision-making process over time (3). This enables advanced AI switching systems to continuously optimize their performance, providing an improved streaming experience for users.
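As a loose illustration (not taken from the cited papers), the flavor of RL-based switching can be sketched with a toy Q-learner that picks a streaming bitrate from observed network load; all states, rewards, and numbers here are invented for the example:

```python
import random

def train_q(steps=2000, seed=0):
    """Toy tabular Q-learning for a two-state bitrate switcher.

    States: 0 = light network load, 1 = heavy load.
    Actions: 0 = low bitrate, 1 = high bitrate.
    """
    random.seed(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]
    alpha, gamma, eps = 0.5, 0.9, 0.1

    def reward(state, action):
        # High bitrate pays off under light load but causes buffering
        # (negative reward) under heavy load.
        if state == 0:
            return 1.0 if action == 1 else 0.2
        return -1.0 if action == 1 else 0.5

    state = 0
    for _ in range(steps):
        if random.random() < eps:                    # explore
            action = random.randrange(2)
        else:                                        # exploit current estimate
            action = max((0, 1), key=lambda a: q[state][a])
        r = reward(state, action)
        next_state = random.randrange(2)             # load fluctuates randomly
        q[state][action] += alpha * (r + gamma * max(q[next_state]) - q[state][action])
        state = next_state
    return q

# Greedy policy after training: which bitrate to choose in each load state.
policy = [max((0, 1), key=lambda a: row[a]) for row in train_q()]
```

In production systems the state would of course be a rich feature vector (bandwidth, buffer level, content type) and the learner a deep network, but the feedback loop is the same.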
Advantages of Advanced AI Switching Systems in Communication Technology
Enhanced Quality of Service (QoS): Advanced AI switching systems can identify and prioritize various types of streaming traffic, ensuring that high-priority content is delivered without interruptions (4). This leads to an improved QoS for users, as buffering and lag times are minimized.
Effective Resource Allocation: Intelligent resource allocation by advanced AI switching systems ensures that available bandwidth is used more effectively (5). This allows for better resource distribution among multiple users and helps prevent network congestion.
Scalability: As the number of users and streaming services continues to rise, advanced AI switching systems can easily adapt and scale to accommodate increasing traffic demands (6). This makes them a future-ready solution for communication networks.
Decreased Latency: Advanced AI switching systems can reduce latency by making real-time decisions and optimizing traffic flow (7). This results in a smoother streaming experience for users, with minimized delays and buffering times.
Real-World Applications and Future Prospects
Advanced AI switching systems hold the potential to transform communication technology and provide a more efficient, reliable, and scalable solution for streamlining streaming traffic. Some practical applications of this technology include:
Smart Cities: Implementing advanced AI switching systems in urban communication networks can significantly improve streaming traffic management, leading to enhanced public services such as surveillance systems, real-time traffic updates, and emergency response (8).
Entertainment and Media: As the demand for high-quality content increases, advanced AI switching systems can ensure that streaming services deliver an optimal user experience with minimal buffering and high-quality video playback (9).
Telecommunication Infrastructure: Integrating advanced AI switching systems into telecommunication networks can help optimize resource usage and reduce latency, resulting in improved performance for mobile and broadband internet users (10).
In conclusion, advanced AI switching systems represent a promising development in communication technology, with the potential to significantly improve streaming traffic optimization. These systems can provide users with a superior streaming experience while efficiently utilizing available network resources.
1. Lee, S., & Kim, H. (2019). Efficient streaming traffic management using AI-based network systems. IEEE Communications Magazine, 57(6), 52-58.
2. Samarakoon, S., Bennis, M., & Saad, W. (2018). Ultra-Reliable Low-Latency V2V Communications through Federated Learning.
3. Mnih, V., Kavukcuoglu, K., Silver, D., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529-533.
4. Jiang, J., Zhang, N., Han, K., et al. (2020). Balancing user-experience and energy efficiency in fog networks. IEEE Transactions on Wireless Communications, 19(4), 2756-2769.
5. Wang, T., Zhang, C., & Liu, Z. (2017). Application of machine learning in Internet of Things: a comprehensive survey. International Journal of Machine Learning and Cybernetics, 9(8), 1399-1417.
6. Yang, C., Zhang, X., & Tang, J. (2019). Scalable deep reinforcement learning for adaptive traffic signal control. IEEE Transactions on Intelligent Transportation Systems, 21(3), 1233-1243.
7. Zhan, K., & Zhang, N. (2020). Optimal distribution of streaming traffic in multi-hop wireless networks. IEEE Transactions on Mobile Computing, 19(4), 983-996.
8. Adeli, H., & Jiang, X. (2018). Integrated construction and information technologies in smart cities. Journal of Civil Engineering and Management, 24(3), 173-184.
9. Ali, A., Ammar, M., Zink, M., et al. (2018). VideoEdge: Hierarchical clustering for processing camera streams. In Proceedings of the 2nd ACM/IEEE Symposium on Edge Computing (pp. 115-131).
10. Zhang, S., Zhang, S., Chen, X., et al. (2017). Mobile cloud computing systems and cloud radio access networks (C-RAN). IEEE Transactions on Cloud Computing, 6(1), 148-160.
|
OPCFW_CODE
|
Azure News May 2021
What’s new in Windows Virtual Desktop?
Every month Microsoft update Windows Virtual Desktop (WVD) to ensure the best user experience. Some of the latest changes include:
- Use the Start VM on Connect feature (preview) in the Azure portal
- MSIX app attach is now generally available
- Updates to the Azure portal UI for WVD such as an upgrade to the Portal SDK, fixed issues, and detailed sub-status messages for session hosts
- Updates for Teams on WVD such as issue resolution and the addition of hardware acceleration for video processing of outgoing video streams
You can explore all of the latest changes here.
Azure Monitor for Windows Virtual Desktop now generally available
With Azure Monitor for Windows Virtual Desktop (WVD), you can find and troubleshoot problems in the deployment, view the status and health of host pools, diagnose user feedback and understand resource utilisation.
General availability comes with improvements such as:
- Improved data collection and new guidance to help you optimise for cost
- Updated setup experience with easier UI, expanded support for VM set-up, automated Windows Event Log setup, and more
- Relocated WVD agent warnings and errors at the top of the Host Diagnostics page to help you prioritise issues with the highest impact
- Accessibility enhancements
Azure Cost Management and Billing updates
No matter the size of your business it’s important to know what you’re spending, where, and how you can reduce those costs. This is where Azure Cost Management and Billing comes in. Here are some of the latest improvements and updates based on user feedback:
- New date picker in the cost analysis preview
- New cost analysis views for resources and reservations
- Streamlined Cost Management menu
- New Azure Cloud Services deployment model
- Limited-time free quantities offer for Azure Synapse Analytics
- Plus, much more…
So, grab a drink and have a read of all the latest improvements and updates here.
Cloud Services (extended support) now generally available
Last month Microsoft announced the general availability of Cloud Services (extended support) – a new Azure Resource Manager (ARM)-based deployment model for Azure Cloud Services.
Cloud Services (extended support) has the primary benefit of providing regional resiliency along with feature parity with Azure Cloud Services deployed using Azure Service Manager (ASM). It also offers some ARM capabilities such as role-based access and control (RBAC), tags, policy, private link support, and use of deployment templates.
To find out more, or to get started, click here.
Azure Site Recovery updates
Microsoft have announced new functionalities to Azure Site Recovery over the past month to ensure a consistently great user experience. The updates mean that Azure Site Recovery now supports:
- Cross-continental disaster recovery for 3 region pairs
- Azure Policy in public preview
- Proximity placement groups (PPGs) across hybrid as well as cloud disaster recovery scenarios
New solutions for Oracle WebLogic on Azure Virtual Machines
In April Microsoft announced a major release for Oracle WebLogic Server (WLS) on Azure Virtual Machines. The release covers common use cases for WLS on Azure, such as base image, single working instance, clustering, load balancing via App Gateway, database connectivity, integration with Azure Active Directory, caching with Oracle Coherence and consolidated logging via the ELK stack.
Upgrade your infrastructure with the latest Dv5/Ev5 Azure VMs in preview
Last month Microsoft announced the preview of Dv5-series and Ev5-series Azure Virtual Machines (VMs) for general-purpose and memory-intensive workloads. They run on the latest 3rd Gen Intel Xeon Platinum 8370C (Ice Lake) processor in a hyper-threaded configuration, providing better value for most general-purpose enterprise-class workloads. The VMs are designed to deliver up to 15 percent increased performance for many workloads and better price-to-performance than the previous Dv4 and Ev4-series VMs.
Azure Trivia is back and it’s bigger, better, and faster than ever
Are you a fan of Azure Trivia? On 26th April Microsoft kicked off this year’s Azure Trivia and they claim it’s going to be the best year yet. The rules are simple:
- Visit twitter.com/azure
- Find the relevant week’s #AzureTrivia question and click on the correct answer
- Tweet the auto-generated confirmation message from your own account
To enter you need to be 18+, live in the US, and be a software developer, IT professional, or student studying computer science, IT management or related fields.
For more information click here.
IN OTHER NEWS:
N4Stack is one of only nine UK organisations to hold this accreditation, providing validation of its capabilities, skills, and expertise in delivering services built on the Azure cloud.
Have you got 2-minutes to spare? Grab a coffee and watch our short video to find out about the key pricing considerations when deploying Windows Virtual Desktop.
|
OPCFW_CODE
|
I started to suspect a permissions issue after I wrote this. So I mounted the EFI and checked permissions were okay (didn't change anything). Then reran install and this time it worked! No idea why. I did have a reboot in between tries so maybe that cleared something for this to work. I'll...
I can mount the EFI partition, and it *does* have a config.plist file. In fact, I had to modify it to get the boot options correct. So I know it should be possible for Multibeast to mount and update this file.
I'm trying to use Multibeast to install the Realtek ALC887 codec. After attempting to install, I get a screen "The Installation Failed". One observation. It didn't matter if I had the EFI mounted or unmounted before attempting install.
Here are the details:
macOS version 10.13.5
How do I get to a prompt to run this?
I recall previously I could get to a prompt through "Boot Recovery from Recovery HD", but I'm hitting boot errors. At this point I'm trying to clean up my system and re-install everything. The system won't even recognize my newly formatted bootable USB...
Your screenshot looks quite a bit different from the one I posted. I don't think it's the same issue. I would suggest you keep Google-searching both tonymacx86 and the rest of the internet for the exact same wording as in your logs. Or just post a new thread on tonymacx86 with your issue.
Hi, thanks for your response. This helped remove that error message. But it seems I'm still stuck here. - see screen shot in attachments.
There's some messages like:
"IOUSBHostHIDDevice::handleStart: unable to open interface"
"IOUSBHostHIDDevice:: start: unable to start IOHIDDevice"
I'm upgrading from Sierra to High Sierra.
After download and installation from app store, I reboot the system.
Inside the Clover bootup screen I select "Boot macOS Install from Sierra" (should that say High Sierra???). It gets stuck during boot with "Service exited with abnormal code: 1". See...
Check out these posts:
These solved my issue!
So I was having the same issues on my hackintosh. I have about 48GB total memory, and the WindowServer would quickly creep up to 40GB consuming most of my memory.
Ever since I updated to High Sierra, I've noticed my system was very sluggish, e.g. slow typing text in various apps, slow...
I just realized this post was originally intended to deal with a boot issue when updating to High Sierra. Shortly after I posted the issue, I stumbled upon the solution (see my followup post) and was running fine for several weeks. During this time I didn't see any performance issues. Then I...
Hi @jAcL.uy ,
What exactly do you mean by lag?
Since updating from 10.13.2 to 10.13.3 I've been having what feels like sluggish overall performance (scrolling windows, flipping windows, launching programs, etc...). Is that what you're experiencing? Can you be more specific?
I had HDMI audio working on Sierra, but after upgrading to High Sierra it no longer works. I tried this procedure, but audio is still NOT working for me. I don't see any output devices (System Preferences->Sound->Output).
I have the HDMI going from my Samsung TV/Monitor connected to the HDMI...
|
OPCFW_CODE
|
Which programs can I use to visualize a JSON File?
I need a program to visualize a json response from a URL or a json file, which organizes the data so it's more human readable. Any suggestions?
possible duplicate of JSON viewer for browsing APIs
If you are looking for an interactive online app, http://jsbeautifier.org/ works great and there are many suggestions for integrating the functionality into other environments.
You can either execute the core of jsbeautifier (beautify-cl.js) in a hosted js runtime or attempt to re-engineer it in the language of your choice.
If you need to reformat JSON at runtime in .net, I can suggest JSON.Net.
Great online tools:
json.parser.online.fr (Online JSON Parser)
Excellent for detecting invalid json, and shows both json-parse and eval methods.
jsonlint.com (JSONLint - The JSON Validator)
Open-source, has excellent validation of detecting invalid json, and beautifies JSON.
web.archive.org/.../chris.photobooks.com (JSON Visualization)
Shows json as html tables and is good for detecting invalid json
jsonviewer.stack.hu (Online JSON Viewer)
Nice if you want to traverse json as a tree with properties (but bad for invalid json)
Downloadable tool built on .NET:
JSON Viewer
Has a stand-alone viewer similar to the online viewer of the same name, but also has plugins for Fiddler 2 and Visual Studio 2005
Yes! http://chris.photobooks.com/json/default.htm is very useful..
I wrote this one: https://keyserv.solutions/VizBuild/VizBuild.html It is very similar to: http://chris.photobooks.com/json/default.htm
I found the chris.photobooks.com page through the waybackmachine. Here's a link: http://web.archive.org/web/20200326113913/chris.photobooks.com/json/default.htm
http://jsonviewer.codeplex.com/ is no more?
For mac: VisualJSON on appstore
For web browser: JSONView as plugin
Firefox: JSONView
Chrome: JSONView
For Terminal: httpie
JSONView only app I found that worked with very large files
To toot my own horn, I've also written a JSON visualizer - it can even fetch a remote url and visualize the JSON it returns.
You can find it at http://json.bloople.net.
Link is broken: 500
I wrote one too but yours is a work of art!
Big fan.
I use JsonLint, a web based validator and reformatter for JSON. Upon validation, it also reformats the JSON file so that it is easier to read.
Well, to continue the trend here (a couple of years later), here is another JSON visualizer created by myself... though this one is a little bit prettier ;)
visualizer.json2html.com
|
STACK_EXCHANGE
|
I have the answer to your math dilemma; do you want it in linear feet or metric tons?
The problem here is that you are taking the ratio of the material losses of the players. You should take their absolute material instead.
Here's what you're saying:
Position 1 : 0:0
Position 2 : 5:0 = infinite
Position 3 : 5:3
Position 4 : 10:3
Here's what you should say:
Position 1 : (Bishop+Queen):(Rook+Rook) = 12:10
Position 2 : (Bishop+Queen):(Rook) = 12:5
Position 3 : Queen:Rook = 9:5
Position 4 : Queen:null = 9:0 = infinite
Hence Black shouldn't recapture with the rook, as it loses his advantage from 5/12 to 0. This also matches with additive/subtractive evaluations.
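That bookkeeping is easy to script; here is a sketch using the standard piece values (queen 9, rook 5, bishop 3) and the positions as described above:

```python
# Standard piece values; pawns and knights included for completeness.
PIECE = {"Q": 9, "R": 5, "B": 3, "N": 3, "P": 1}

def material_ratio(white, black):
    """Return (white_total, black_total) for lists of piece letters."""
    return (sum(PIECE[p] for p in white), sum(PIECE[p] for p in black))

positions = [
    (["B", "Q"], ["R", "R"]),  # Position 1: 12 : 10
    (["B", "Q"], ["R"]),       # Position 2: 12 : 5
    (["Q"], ["R"]),            # Position 3:  9 : 5
    (["Q"], []),               # Position 4:  9 : 0
]
for white, black in positions:
    print(material_ratio(white, black))
```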
The fallacy is that you can't divide by zero. This is not allowed by the laws of mathematics. It doesn't give the result of "infinite;" it gives you the result of "meaningless." If you allow the expression 1/0 in algebra, it is easy to prove that 1=2. So your "ratio" of 5:0 yields no fraction and is null and void.
How's that 1=2 thing work?
@StrategicPlay: What type of math are you studying right now?
I think he means that 1/0 = infinite ; 2/0 = infinite ; therefore 1=2.
What he is saying is wrong.
Any non-zero non-infinite value divided by zero is infinite but not all infinites are the same.
You can easily say that 2/any number will always be greater than 1/the same number because 2>1. Therefore 2/0>1/0. Although that isn't an entirely accurate proof, it explains the concept nicely.
Ummm sorry, but you're wrong.
It is true that not all infinities are equal; however, all infinities within the same aleph set are equal. The limit of 1/x as x approaches zero from the right is positive infinity, and is equal to the limit of 2/x as x approaches zero from the right, and is equal to the limit of R/x as x approaches zero from the right.
As this relates to the OP's question - the proper course would be to consider the ratio of material left on the board, if you want to work in ratios.
You all need to ask Sheldon and Leonard for help on this one.
Anyway, the mathematical problem presented here is interesting! It's really worth discussion.
nice one, wish I had said that ... I think I will ^^^^
Ummmm, even more sorry, you are both horribly wrong Kingpatzer and Hellcraft. This is surprising given your obvious talents. Possibly you have forgotten the low level stuff taught in secondary school.
Please follow the link to understand what Ricardo_Morro was referring to:
Errrrr, umm... If "any number" includes negative numbers, the initial statement is false: 2/-1 = -2, 1/-1 = -1, and -2 is not greater than -1.
So, does division by zero have the properties of division by a negative number, or a positive, considering that it is neither? Just wondering.
Division by zero is undefined in most maths. It does not = infinity (unless that's how you define it -- but it's a problematic definition that causes some rather gaping wounds in mathematical consistency).
^^^^^^^^ -- "doesn't know how to follow a link" -- OR
-- "doesn't read previous posts" --
I'll advise you to please note that I specifically stated the limit of R/x as x approaches zero, and I did not speak to division by zero specifically. Please revisit any textbook that discusses limits and the properties of infinity.
http://tutorial.math.lamar.edu/Classes/CalcI/TypesOfInfinity.aspx Note that infinity + infinity = infinity.
Your link simply isn't relevant to what I said precisely because I specifically am speaking to the right handed limit, and not presuming division by zero to be defined.
I'll advise you to please note that I specifically stated the limit of R/x as x approaches zero, and I did not speak to division by zero specifically. Please revisit any textbook that discusses limits.
Isn't relevant? I would suggest you reread the posts at least STARTING WITH THE POST BY RICARDO_MORRO which started this discussion of divide by zero, etc. to understand that you and HellCraft are so far off-base that it is laughable. None of your nonsensical tripe about limits pertains to the topic at hand - the fallacy of division by zero and its relation to the (faulty) analysis in the OP. BTW, obviously, this was part of my point, "duh, by the way, guys - you are off topic by miles".
The fact that HellCraft went astray and then you followed him is no excuse. If you don't have aything worthwhile to offer about the topic at hand, well, make your own decision I guess.
Again, what I said is accurate and correct. That you don't see it as relevant isn't really my problem.
Further, I pointed out that as the entire thing relates to the OP it's the wrong approach.
But thanks for playing.
I find your posts unpleasant to read, but I have read them.
Now, try putting in your own words the salient parts of what it is you think the link (which I read) says.
You're welcome, and let's do it again sometime.
I'll mathematically paraphrase - take the perfectly correct equation 5x = 7x, divide both sides by x. This is known as algebra, a.k.a. do the same thing to both sides to maintain the equality.
The Result! 5 = 7
Hmmm, why did this not work out right?
Answer: We divided by zero! (since we notice that only x = 0 is a solution)
Can I be of any further help?
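For the programmers in the thread, the same slip can be made concrete in a couple of lines (a toy check, nothing more):

```python
# 5x = 7x holds only at x = 0, so "divide both sides by x" divides by zero.
solutions = [x for x in range(-100, 101) if 5 * x == 7 * x]
assert solutions == [0]

# Attempting the division step at the only solution fails loudly:
try:
    result = (5 * 0) / 0
except ZeroDivisionError:
    result = None  # the step is undefined, not "infinite"
```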
Although this tread is chockablock with Georg Cantor wannabees, the net result is much closer to transfinite mindlessness, math BA versus BS notwithstanding.
Join the "gg is arrogance" thread, at least it's still going strong after 1000+ posts.
The funny thing is that I expected an actual dilemma; instead, I got some confused Sheldon wannabe who doesn't know what he is talking about and is trying desperately to sound deep and provocative.
A decent chess book will solve all of the OP's "actual dilemmas," whether real or (in this case) imagined. Simple enough?
And why are you trolling for math fights, and using lame TV metaphors?
Don't go dragging imaginary numbers into this too. Everybody knows they're not real.
|
OPCFW_CODE
|
Step 5/20. Creating and Using InfoPath Task Forms
Creating and linking the InfoPath form
(start with Step 4 solution)
Let’s start InfoPath 2007. Select Design a Form Template – Blank:
Check the option Enable browser-compatible features only.
Design the following form with two buttons:
We need to create a Data Connection; select the Design Tasks panel, select Data Sources and click on Manage Data Connections:
Add a new data connection; select Create a new connection to Submit data:
Click Next to select the hosting environment…:
Click Next and name it Submit.
On the Design Tasks panel click on Data Source, right-click on myFields and add a new field; name it Status and keep the Text data type:
Click on the Accept button, click on the Rules button and on the Add button.
We are going to add three actions:
· Setting the Status field to Accepted or Rejected,
· Submitting the data,
· Closing the form.
Click on the Add Action button, select the Set a field’s value action, select the Status field and set the Value to Accepted:
Follow the same procedure for the Reject button, but set the Value to Rejected.
On both the Accept and Reject buttons, click on the control, select Rules, select Rule 1, click on the Modify button and add a new action:
Add another action:
Don’t forget to follow the same procedure for the Reject button.
Set the Security Level of the form to Domain (menu Tools – Form Options – Security and Trust):
Publish the InfoPath form: File menu – Publish. Click OK to save the form and publish it to a network location:
Click Next, save the form in your project directory with the name ApproveReject.xsn.
Click on Next and keep the text box empty (very important):
Click Next until the Wizard closes.
With Windows Explorer, go to the form, right click on it:
Select Design and go to the File menu, select Properties:
Copy the ID to the clipboard.
Modify the manifest file (workflow.xml) in order to declare this form as the form to use to interact with the task:
In the MetaData element, create a Task0_FormURN child element:
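For example (the urn below is a placeholder; use the real form ID you copied from InfoPath's Properties dialog):

```xml
<MetaData>
  <!-- Placeholder ID: paste the form ID copied from InfoPath here -->
  <Task0_FormURN>urn:schemas-microsoft-com:office:infopath:ApproveReject:-myXSD-2008-01-01T00-00-00</Task0_FormURN>
</MetaData>
```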
Several forms can be used (see part 6 of this tutorial), so the code in the workflow refers to a form by the number in its Task<X> element name; here <X> is 0, so in the code we will refer to form 0.
In this part of the tutorial we are going to use only one type of task to interact with the workflow, so specify one task content type:
In the feature.xml file, add the following attributes to the feature element for handling the InfoPath form:
Register the InfoPath form in the feature.xml file:
Modify the install.bat file in order to copy the InfoPath form:
In the same file, uncomment the lines following the Note :
::Note: Uncomment these lines if you’ve modified your deployment xml files or IP forms
In the workflow designer, double-click on the ApproveRejectTask activity: in ApproveRejectTask_methodInvoking, set the link between the Task and TaskForm0 (which references our InfoPath form):
Now, let’s retrieve the information coming from the InfoPath form: is our expense report approved or rejected?
We can retrieve these values at the level of the workflow by using the AfterProperties property of the OnTaskChanged activity:
The type of AfterProperties is Microsoft.SharePoint.Workflow.SPWorkflowTaskProperties.
The data coming from the InfoPath form controls will be stored in its ExtendedProperties property, which is a Hashtable.
Let’s bind AfterProperties to a new field (AfterApprovRejectProps) in the workflow class:
Double-click on the WaitForApprovalRejection activity, retrieve the InfoPath field values from the Hashtable (ExtendedProperties) and change the ListItem status:
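The retrieval step can be sketched like this (a hypothetical sketch: the handler, field, and variable names follow this tutorial's setup, and the Hashtable is assumed to be keyed by the InfoPath field name, which is worth verifying in the debugger):

```csharp
// Sketch only: read the Status value submitted from the InfoPath task form.
// afterApprovRejectProps is the workflow field bound to OnTaskChanged.AfterProperties.
private void WaitForApprovalRejection_Invoked(object sender, ExternalDataEventArgs e)
{
    // ExtendedProperties is a Hashtable holding the InfoPath form fields;
    // here we assume it is keyed by the field name "Status".
    string status = (string)afterApprovRejectProps.ExtendedProperties["Status"];

    // Update the expense report item with "Accepted" or "Rejected".
    workflowProperties.Item["Status"] = status;
    workflowProperties.Item.Update();
}
```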
Rebuild the solution, call install.bat and test the workflow: click on the new task in the task list and the InfoPath form should show up; submit your choice and check the item status.
This hands-on training is the property of Redwood S.L sprl and may not be organized in class or in group without the prior written permission of Serge Luca. Should you wish to organize this hands-on training in your company or institution, please contact Serge Luca first to enter into a licence agreement. Each trainer or teacher using this hands-on training should have a licence agreement. Please ask your trainer or Serge Luca whether he or she has entered into a licence agreement with Redwood S.L sprl.
The hyperlink to this hands-on training may be placed on your website for free, on the condition that the name Serge Luca is clearly mentioned in the reference. Please send us a mail containing the link to the web page our reference is used on.
|
OPCFW_CODE
|
Unrivaled Medicine God – Chapter 2389 – Imparting Dao
Needless to say, the problems that existed for each person were different.
Such terrifyingly precise soul force control: even if he cultivated for another hundred million years, it would be impossible for him to reach it.
Ye Yuan ignored him. His technique suddenly changed.
He still smiled coldly and said, “Your Excellency saying this, you’re naturally stronger than me. I want to see what kind of trick Your Excellency can pull out of the hat with a measly little Thousand Thread Cloud Folding Hand!”
This kind of means was truly fantastical.
Being able to give the more than a thousand people present a panoramic view of the pill refinement with Divine Emperor Realm soul force.
Witchcloud suddenly started, only then returning to his senses. He smiled bitterly and said, “Understood! You’re giving this old man face. It’s not that some of their foundations are shaky, but that all of our foundations are shaky! If someone can casually refine a technique to the level of ‘Dao’, why would they worry about not reaching the level of rule?”
An Eight-star Alchemy God demonstrating a rank one refinement technique was naturally astonishing.
Witchcloud’s two eyes grew even rounder, the glimmer in them becoming brighter and brighter.
First were the rank three source powerhouses. Then the rank two source, and so on.
All of a sudden, Ye Yuan retracted his hand gestures. The myriad rays of splendor abruptly converged.
He also finally understood why he had lost to the Ye Yuan who was only rank three source.
The Eight-star Alchemy God said disdainfully, “Your Excellency’s Thousand Thread Cloud Folding Hand is stronger than mine, but what’s the use of that? What has it got to do with a shaky foundation?”
Ye Yuan smiled slightly. Curling his fingers, countless fine threads with substance shockingly formed.
But he finally realized why Ye Yuan could smash him in Alchemy Dao.
Ye Yuan nodded and said, “I believe that everyone present should know the Thousand Thread Cloud Folding Hand, right?”
Hearing Ye Yuan call his name, he laughed coldly and stepped out from the ranks to display the Thousand Thread Cloud Folding Hand once.
He was standing at the summit of Alchemy Dao to begin with. What he saw was naturally not something bystanders could compare with.
Someone immediately said unhappily, “Thousand Thread Cloud Folding Hand is a rank one refinement technique, who doesn’t know it! Lord Chief Instructor, you’re completely looking down on us!”
From small to large was easy; from large to refined was hard!
Ye Yuan really performed a miracle with the Thousand Thread Cloud Folding Hand!
Refining a rank one technique to the level of ‘Dao’!
The disparity between him and Ye Yuan was not just a tiny bit!
“Senior Witchcloud, I know that you might be somewhat displeased with me saying that their foundations are shaky. But I didn’t have the intention of targeting you when I said this. You exhausted your heart and mind for the myriad races; Junior admires you endlessly. It’s just that … we need to be stronger!” Ye Yuan looked at Witchcloud and said sincerely.
“Saw it clearly, Senior?” Ye Yuan looked at Witchcloud and said with a smile.
He did not know how many Dao Pill powerhouses appeared afterward, but Ye Yuan was confident that there should have been quite a few!
Everyone stared at Ye Yuan dumbfoundedly, as if looking at a monster.
But the feeling it gave everyone was already completely different.
Some were major, some were slight.
The Alchemy Hall was single-handedly established by him. These people were also taught by him single-handedly.
|
OPCFW_CODE
|
- This topic has 2 replies, 2 voices, and was last updated 14 years, 7 months ago by .
We are working on an upgrade from CL 5.4 to 5.7 (on Windows) and I wonder if others have some kind of guidelines for what should be tested.
So far I have made sure that all translations in 5.7 produce exactly the same messages as in 5.4. Here is a description of how I do it. You can skip to the end if you are not interested in the details.
So far I have built a pair of sites on 5.7. One is a copy of the production site from 5.4, with all outbound threads changed to file-type threads. These save messages translated by 5.7 into files. All inbound threads are the same as in 5.4, but they receive via local ports from the second site. That way I don’t need to change port numbers or any other configuration data on the receiving threads. Every receiving thread on this site has a corresponding sending thread on the second site.
The second site (although logically it should be called the 1st) receives all messages from the production 5.4 site, where each message has an extra Z segment with information like source thread and destination thread (I already had this code in place in 5.4; I use it to store all messages in an MS SQL Server database). The receiving thread on 5.7 uses a trxId proc to determine where to route the corresponding message depending on the info in that Z segment.
If the received message is a raw message from 5.4, it is routed to the corresponding sending thread (which sends via a local port to the receiving thread on the 1st site). I use a simple Tcl proc in the route details to remove the Z segment before sending the message.
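The Z-segment removal only takes a few lines of Tcl; a sketch of the idea (the segment ID ZRT and the proc name are made up for illustration, and a real Cloverleaf tps proc would wrap this in the usual args/mode dispatch):

```tcl
# Sketch: strip our custom Z segment (assumed here to be named ZRT) from the
# message before it is sent on to the receiving thread on the first site.
proc strip_z_segment { mh } {
    set segs [split [msgget $mh] \r]
    set keep {}
    foreach seg $segs {
        if {![string match "ZRT*" $seg]} {
            lappend keep $seg
        }
    }
    msgset $mh [join $keep \r]
}
```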
If the message received from 5.4 is a translated message, I route it to a different thread: a local fileset thread with a TPS OB proc that determines the file name. This thread saves messages xlated by 5.4 in separate files, one file per original outbound thread.
That way I have two sets of files for each of the 5.4 outbound threads: one xlated by 5.4, the other xlated by 5.7. After running this pair of sites for a day or a few, I stop the receiving thread from 5.4 (to keep the files in sync) and compare each pair of files, 5.4 vs. 5.7. If I find discrepancies in the translated messages, I fix the translation on 5.7, then repeat the whole process until I get the desired result (a 100% match).
So I have reached the point where I am happy with my translations (after comparing over 10,000 messages per translation).
Now I wonder what other kinds of tests would be good to run? Any suggestions and tips will be appreciated.
- The forum ‘Cloverleaf’ is closed to new topics and replies.
|
OPCFW_CODE
|
The Annual Meeting of the Association for Computational Linguistics (ACL) announced the Best Paper Awards of 2020 via Twitter on July 8.
ACL, through its special interest group SIGDAT, is also behind EMNLP, another major natural language processing conference. ACL’s Annual Meeting covers a range of research areas on computational approaches to natural language, drawing thousands of research papers from across the globe. And, like all conferences this year, ACL went remote in 2020.
The ACL 2020 conference committee narrowed down this year’s 3,088 submissions, selecting 779 papers (571 long, 208 short), for an acceptance rate of 25.2%.
Best Overall Paper went to Beyond Accuracy: Behavioral Testing of NLP Models with CheckList by Marco Tulio Ribeiro of Microsoft Research; Tongshuang Wu and Carlos Guestrin of the University of Washington; and Sameer Singh of the University of California, Irvine.
Among the two Honorable Mentions for Best Overall Paper was one that, in the words of the paper’s authors, “adds to the case for retiring BLEU as the de facto standard metric.” This sentiment echoes that of other experts who, as previously mentioned, believe that BLEU is fast becoming useless.
In fact, although still widely used in academia, BLEU’s reliance on reference text makes it impractical for industry use, as NVIDIA’s Senior Deep Learning Engineer, Chip Huyen told the SlatorCon audience last fall.
In their ACL 2020 winning paper, Tangled up in BLEU: Reevaluating the Evaluation of Automatic Machine Translation Evaluation Metrics, University of Melbourne researchers Nitika Mathur, Timothy Baldwin, and Trevor Cohn show that “current methods for judging metrics are highly sensitive to the translations used for assessment, particularly the presence of outliers, which often leads to falsely confident conclusions about a metric’s efficacy.”
The Unimelb trio found that outlier systems (or those whose quality is much higher or lower than the rest of the systems), can have “a disproportionate effect on the computed correlation of metrics,” such that “the resulting high values of correlation can […] lead to false confidence in the reliability of metrics.”
When the outliers are removed, they said, “the gap between correlation of BLEU and other ‘more powerful’ metrics (e.g., CHRF, YISI-1, and ESIM) becomes wider. In the worst case scenario, outliers introduce a high correlation when there is no association between metric and human scores for the rest of the systems. Thus, future evaluations should also measure correlations after removing outlier systems.”
It is time to retire BLEU, the authors concluded, and use instead other metrics such as CHRF, YISI-1, or ESIM because “they are more powerful in assessing empirical improvements.”
They end by saying that “human evaluation must always be the gold standard, and for continuing improvement in translation, to establish significant improvements over prior work, all automatic metrics make for inadequate substitutes.”
The other Honorable Mention for Best Overall Paper was Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks by researchers from the Allen Institute for Artificial Intelligence and the University of Washington.

After holding its 57th annual meeting in Florence, Italy, last year, ACL originally planned to hold this year’s conference in Seattle, Washington, but moved online due to the Covid-19 pandemic. ACL’s Twitter page (@aclmeeting) was abuzz on the days of the virtual conference, July 5 – 10, 2020, with live tweeters providing updates in many languages.
|
OPCFW_CODE
|
import DriverBase from '../../base/DriverBase';
import NetworkDriver, {
IncomeRequestHandler, IncomeResponseHandler,
NetworkRequest,
NetworkResponse,
NetworkStatus
} from '../../interfaces/NetworkDriver';
import IndexedEventEmitter from '../IndexedEventEmitter';
import {
COMMANDS, deserializeRequest, deserializeResponse, makeRequestId,
MESSAGE_POSITION,
REQUEST_PAYLOAD_START,
serializeRequest,
serializeResponse
} from '../networkHelpers';
import Promised from '../Promised';
import {hexNumToString, stringToUint8Array} from '../binaryHelpers';
type Timeout = NodeJS.Timeout;
enum EVENTS {
request,
response,
}
export default abstract class NetworkDriverBase<Props> extends DriverBase<Props> implements NetworkDriver {
protected events = new IndexedEventEmitter();
protected abstract write(data: Uint8Array): Promise<void>;
async request(port: number, body: Uint8Array): Promise<NetworkResponse> {
const promised = new Promised<NetworkResponse>();
const requestId: number = makeRequestId();
const request: NetworkRequest = { requestId, body };
let timeout: Timeout | undefined;
this.sendRequest(port, request)
.catch((e: Error) => {
clearTimeout(timeout as any);
!promised.isFulfilled() && promised.reject(e);
});
// listen for response
const listenIndex = this.onIncomeResponse(port, (response: NetworkResponse) => {
// do nothing if already failed or resolved; process only our own request
if (promised.isFulfilled() || response.requestId !== requestId) return;
this.removeListener(listenIndex);
clearTimeout(timeout as any);
promised.resolve(response);
});
timeout = setTimeout(() => {
if (promised.isFulfilled()) return;
this.removeListener(listenIndex);
promised.reject(
new Error(`SerialNetwork.request: request timeout exceeded on port "${port}"`)
);
}, this.config.config.requestTimeoutSec * 1000);
return promised.promise;
}
onRequest(port: number, handler: IncomeRequestHandler): number {
const wrapper = (request: NetworkRequest) => {
handler(request)
.then((response: NetworkResponse) => {
// send response and don't wait for result
this.sendResponse(port, response)
.catch(this.log.error);
})
.catch((e) => {
const response: NetworkResponse = {
requestId: request.requestId,
status: NetworkStatus.errorMessage,
body: new Uint8Array(stringToUint8Array(String(e))),
};
this.sendResponse(port, response)
.catch(this.log.error);
});
};
const eventName: string = `${EVENTS.request}${hexNumToString(port)}`;
return this.events.addListener(eventName, wrapper);
}
removeListener(handlerIndex: number): void {
this.events.removeListener(handlerIndex);
}
protected onIncomeResponse(register: number, handler: IncomeResponseHandler): number {
const eventName: string = `${EVENTS.response}${hexNumToString(register)}`;
return this.events.addListener(eventName, handler);
}
protected sendRequest(register: number, request: NetworkRequest): Promise<void> {
const data: Uint8Array = serializeRequest(register, request);
return this.write(data);
}
protected sendResponse(register: number, response: NetworkResponse): Promise<void> {
const data: Uint8Array = serializeResponse(register, response);
return this.write(data);
}
/**
* Handle income message and deserialize it.
* @param data
*/
protected incomeMessage(data: Uint8Array) {
if (!data.length || ![COMMANDS.request, COMMANDS.response].includes(data[MESSAGE_POSITION.command])) {
// skip commands that aren't ours, and empty data
return;
}
else if (data.length < REQUEST_PAYLOAD_START) {
throw new Error(`NetworkDriverBase.incomeMessage: incorrect data length: ${data.length}`);
}
const register: number = data[MESSAGE_POSITION.register];
if (data[MESSAGE_POSITION.command] === COMMANDS.request) {
const request: NetworkRequest = deserializeRequest(data);
const eventName: string = this.makeEventName(EVENTS.request, register);
this.events.emit(eventName, request);
}
else {
// response
const response: NetworkResponse = deserializeResponse(data);
const eventName: string = this.makeEventName(EVENTS.response, register);
this.events.emit(eventName, response);
}
}
protected makeEventName(eventName: EVENTS, register: number): string {
return `${eventName}${hexNumToString(register)}`;
}
}
|
STACK_EDU
|
[WIP] Introduced visuals that render themselves on the render thread
This is needed for displaying things like video output, animated images, foreign content from WritableBitmap, etc. The common thing about these usage scenarios is that they don't need the UI thread, but need a steady frame rate.
Since the render thread timer ticks independently of the UI thread, the "render time critical" visuals can update themselves even if the UI thread is completely blocked by user code.
The API looks like this:
public interface IRenderTimeCriticalVisual
{
bool HasNewFrame { get; }
void ThreadSafeRender(DrawingContext context, Size logicalSize, double scaling);
}
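A consumer of this API might look roughly like the following (a hypothetical sketch, not code from this PR; the control name and the frame-flagging scheme are illustrative):

```csharp
// Hypothetical example of a render-time-critical visual.
public class VideoOutputControl : Control, IRenderTimeCriticalVisual
{
    private volatile bool _frameReady;

    // Checked by the render loop on the render thread every tick.
    public bool HasNewFrame => _frameReady;

    // Runs on the render thread, so it must not read dependency
    // properties or any other UI-thread-only state.
    public void ThreadSafeRender(DrawingContext context, Size logicalSize, double scaling)
    {
        _frameReady = false;
        // Draw the latest decoded frame here, e.g. context.DrawImage(...).
    }

    // Called by the producer (e.g. a decoder thread) when a frame is ready.
    public void OnFrameDecoded() => _frameReady = true;
}
```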
They are forced to have a separate layer since they are supposed to update very frequently.
This is DeferredRenderer implementation. For ImmediateRenderer I'm planning to connect it to the RenderLoop's Update and trigger Invalidate calls to the OS if the last frame had IRenderTimeCriticalVisual anywhere in the graph.
Is there a feature you plan on adding soon which makes use of this?
@jmacato is currently working on GIF animations support and we are planning to run them on the render thread since they obviously don't need anything from the UI thread to continue ticking.
Actually now that I think about it, we might want to let "render time critical" visuals not occupy a separate layer if they are opaque. Might want to change the code for that.
@kekekeks i agree, but lets retain the ability to explicitly request for a render layer if it's possible
I think we need to introduce IsOpaque property. That's how Cocoa does it.
Isn't it possible to request a layer that is rendered fast enough to handle such time-critical scenarios? Avoiding a special interface would be ideal. Maybe I don't understand the real performance issue here. Isn't this approach just avoiding the heavy layouting etc. by having its own layer? If they are processed from top to bottom this should still be fast enough. Someone could even request a layer that is smaller than the screen dimensions.
@Gillibald we aren't alone on the UI thread. It can be also busy processing user's business logic. Our code can also be quite expensive. So we can't get a steady frame rate unless render thread doesn't need anything from the UI thread. The only way to do so is to allow visuals to render themselves on the render thread bypassing our scene builder entirely.
This PR introduces an interface that allows visuals to do painting on the render thread. The current implementation always creates a layer because we can't properly track rendering operations, so we don't know what we need to invalidate and re-render. If such visual is opaque, we could skip layer creation and just re-render everything on top of the visual's rect.
So ThreadSafeRender is called every RenderLoop tick on the render thread as long as the HasNewFrame is true.
Why is ThreadSafeRender needed? Can't we use IVisual.Render
Visual.Bounds is a dependency property which can't be read from the render thread. The value of Visual.Bounds might also contain a new value that doesn't match the current scene. That's why there is a separate method. It also explicitly indicates in its name that method should be thread-safe.
Are there any real-world examples that need the ability to issue render commands through a DrawingContext? Isn't a special Bitmap implementation enough for this use case? In general someone wants a surface they can write to in a performant way, or am I wrong?
@Gillibald my gif renderer is one of those use cases, sometimes one needs a much more flexible approach when it comes to rendering, besides if we ever get to have 3d matrix transforms then it'll be easier to develop them with the current API than just having a fast bitmap control
@jmacato So you use draw calls other than DrawImage(Bitmap)?
@Gillibald not yet, but i do have some valid use case for it later on like text overlay, etc
I just thought it would make sense to enable this kind of rendering just for Bitmaps so you could use all kinds of Bitmaps. Even RenderTargetBitmap. In the future you could have a 3DImage etc that kind of works the same. Everything just has to have a Bitmap implementation. Maybe I am wrong.
i think this enables that though, they are not mutually-exclusive anyway :)
I certainly have a use case for more than just bitmaps as I have a changing grid and scales over various live graph/waterfall type displays and it would be nice to use the avalonia tools to draw these. They could be done in an overlay on the ui thread but this could look jumpy at times.
I just thought it would make sense to enable this kind of rendering just for Bitmaps so you could use all kinds of Bitmaps.
I think that locking the use of this feature to bitmaps only is not a good idea, because it's impossible to know why the user wants this feature and how they plan to use it. For example, I use this feature for a game engine to draw bitmaps generated by each of the game's render passes, plus text for FPS information, logs, events, etc...
It could be heavy for the user to compute everything as a bitmap before passing it to the ThreadSafeRender method if they want to draw more than a simple bitmap.
Another use case could be that the user wants to create a custom frame-by-frame animation, a bouncing ball for example, with the set of DrawXXX methods from the DrawingContext.
Every layer in Avalonia is a RenderTargetBitmap, so it will work that way anyway. If someone wanted to issue draw commands they could use a RenderTargetBitmap. WritableBitmap and RenderTargetBitmap could implement that new logic by default. Is there any downside to this? Wouldn't WritableBitmap etc. benefit from this?
Wouldn't WritableBitmap etc benefit from this?
Yes, that's true, so it's up to the user to know exactly why this feature has to be used.
Maybe for performance reasons, or because they want control over the frame rate of what they draw, to bypass the default 60 FPS of the render loop.
I think we can't say that this feature will always be used for bitmaps, so to enable a larger set of use cases it's better to let the user decide...
Every layer in Avalonia is a RenderTargetBitmap so it will that way anyways.
As I've said, if the visual is opaque, we don't need a separate layer. I'll do this optimization later.
There appears to be an issue using FillRectangle with a brush created like so
SolidColorBrush filterBrush = new SolidColorBrush(Colors.White, 0.2);
This https://github.com/AvaloniaUI/Avalonia/blob/04745d8effd89b22102a620294f57524c048b0e2/src/Skia/Avalonia.Skia/DrawingContextImpl.cs#L495 throws a 'Call from invalid thread' exception
You need make the brush immutable as far as I understand.
Yep, using IBrush filterBrush = new SolidColorBrush(Colors.White, 0.2).ToImmutable();
works perfectly, thanks.
@ahopper You can't use types derived from AvaloniaObject on the render thread. Immutable non-visual brushes should be fine.
This appears to write over the top of all other controls (except popups).
It seems to be an issue with layer ordering, I'll try to repro it with regular layers.
No evolution on this WIP? Since the PR is pretty old, it seems that dotnet is unable to resolve the last build of this PR, and fallback to another build in which this feature is not available...
@grokys @kekekeks It's been a while, but I will repeat the question: any changes on the side of this PR? Do you need some help here? It could be useful for my purposes quite soon.
It's blocked by https://github.com/AvaloniaUI/Avalonia/issues/2244
That bug causes "critical time" visuals to render themselves over everything else.
Any updates? 🤔
closing due to inactivity.
|
GITHUB_ARCHIVE
|
''' You are given an integer, N, your task is to print an alphabet rangoli of size N. (Rangoli is a form of Indian folk art based on creation patterns.) Examples:
N = 3
----c----
--c-b-c--
c-b-a-b-c
--c-b-c--
----c----
N = 5
--------e--------
------e-d-e------
----e-d-c-d-e----
--e-d-c-b-c-d-e--
e-d-c-b-a-b-c-d-e
--e-d-c-b-c-d-e--
----e-d-c-d-e----
------e-d-e------
--------e--------
'''
def print_rangoli(size):
'''
This function implements the main logic to draw the alphabet rangoli on screen.
Input: size -- Desired size.
Output: Prints the alphabet rangoli.
'''
width = (size*4) - 3
mat = [] # Stores one half of the rangoli rows (middle row first); the other half mirrors them
for i in range(size):
line = '-'.join([str(chr(j)) for j in range(97+i, 97+size)])
#Transforms e.g. 'a-b-c' into 'c-b-a-b-c':
#line[::-1] reverses the string ('c-b-a')
#line[1:] drops the first character ('-b-c') so the middle letter isn't doubled
mat.append((line[::-1]+line[1:]).center(width, "-"))
#mat holds the rows from the middle (widest) up; print them in reverse order (top to middle), then the rows below the middle.
print('\n'.join(mat[::-1]+mat[1:]))
if __name__ == '__main__':
n = int(input())
print_rangoli(n)
|
STACK_EDU
|
Iterators/Apply-qualifiers in MathML 2.0(content)
I have one suggestion about the opportunity to qualify apply
First note that these qualifiers are used for different things:
- iterating an operation on an intensionally defined set of objects (bvar,
lowlimit, uplimit, condition and interval);
- defining the context of the operation (degree, only applicable to
moment and not to be misused for diff and partialdiff, or logbase,
only applicable to log, as I understand them).
Maybe, distinguishing them could be useful.
Meanwhile, there is a need for a general "iterate" primitive
(maybe not the best name), which would iterate any operation on variables,
the way apply does it with qualifiers (188.8.131.52). Such an operator
would have the advantage of taking care of the iteration in place of the
operator itself. As an example, it is perfectly legitimate to apply
an iteration to:
- boolean and/or;
- arithmetic plus/times. There is a special sum/product couple just for
this. But if you start refining plus/times for a particular
structure (say quaternions), will you be allowed to use it with
apply+qualifiers, or will you ALSO have to refine
the sum/product couple? This does not make extensibility easier.
- set union/intersection (OK: any boolean algebra operator);
- statistic mean;
- some kind of declared function;
- some kind of fn;
- even set/vector/list/matrix construction can be defined that way;
but none of these is stated in 184.108.40.206.
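For comparison, the one iteration the spec does bless, sum, already shows the shape that a general iterate would share with any n-ary operator (standard MathML 2.0 content markup for the sum of x_i from 1 to n):

```xml
<apply>
  <sum/>
  <bvar><ci>i</ci></bvar>
  <lowlimit><cn>1</cn></lowlimit>
  <uplimit><ci>n</ci></uplimit>
  <apply><selector/><ci>x</ci><ci>i</ci></apply>
</apply>
```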
Moreover, apart from a few of them that should be taken as
particular cases instead of the rule (viz. int, limit and diff), all these
operations can be rendered the same way(s).
In general, when it is possible to iterate on a set of values,
it is possible to use any qualifier. For instance, the examples given for
the forall could be defined with an uplimit:
<apply> <!-- the second forall example of 220.127.116.11 -->
<apply> <!-- a tautology -->
(\forall x\in\[0 10\[, \forall y\lt x, s.t. x\neq y, x-y\neq 0)
The opportunity to use one of these qualifiers depends only on the structure
of the underlying value set (ordered for up/low limit and interval). Why
restrict them to this?
It is a pity that the beginning of 18.104.22.168 evokes the presence of
an interval (just before the second example) and that this interval is not
specified with the interval construction (but with lowlimit/uplimit).
Appendix: some examples of iteration
<lambda> <!-- define the powerset of E (by iterating union) -->
<lambda> <!-- define the powerset of E (by iterating set) -->
I hope this can help.
Jérôme Euzenat
INRIA Rhône-Alpes
655, avenue de l'Europe
38330 Montbonnot St Martin
firstname.lastname@example.org
OPCFW_CODE
|
How can I programmatically set a permanent environment variable in Linux?
I am writing a little install script for some software. All it does is unpack a target tar file, and then I want to permanently set some environment variables - principally the location of the unpacked libraries and updating $PATH. Do I need to programmatically edit the .bashrc file, adding the appropriate entries to the end for example, or is there another way? What's standard practice?
The package includes a number of run scripts (20+) that all use these named environment variables, so I need to set them somehow (the variable names have been chosen such that a collision is extremely unlikely).
Why would you do this at all? If you're packaging a given piece of software, you can make your packaging wrap it in a shell wrapper that sets the needed variables only when that specific software is being run, and has no need to modify the larger system's behavior otherwise.
LSB-compliant (see the specification) practice is to create a shell script in the /etc/profile.d/ folder.
Name it after your application (and make sure that the name is unique), make sure that the name ends with .sh (you might want to add scripts for other shells as well) and export the variables you need in the script. All *.sh scripts from that directory are read at user login—the same time /etc/profile is sourced.
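A minimal sketch of such a script, assuming the software was unpacked under /opt/myapp (the name and path are made up for illustration):

```shell
# Hypothetical /etc/profile.d/myapp.sh -- names and the /opt path are
# assumptions for illustration.
MYAPP_HOME=/opt/myapp
export MYAPP_HOME

# Prepend the app's bin directory only if it is not already on PATH,
# so nested logins do not keep growing PATH.
case ":$PATH:" in
  *":$MYAPP_HOME/bin:"*) ;;
  *) PATH="$MYAPP_HOME/bin:$PATH" ;;
esac
export PATH
```

Since the directory is sourced, the script does not need the executable bit; readable by all users is enough.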
Note that this is not enforced by Bash; rather, it's an agreement of sorts.
OK, this looks like it might be the solution. Presumably the installer will need to be run as root in order to write a script there.
Is this script also read by services, for example Apache or Tomcat?
Standard practice is to install into directories already in the path and in the standard library directory, so there is no need to update these variables.
Updating .bashrc is a bit failure-prone, among other things; what if a user uses a different file or shell?
+1 correct. Users would be very unhappy if you tried to edit their .bashrc for them. If you want to install into a non-standard directory - or don't think your user will have the permission to - let them specify --install-dir=mydir and tell them what they'd need to add to their environment. There is a good example at http://golang.org/doc/install.html
Yes, I thought that editing .bashrc would be dubious for a number of reasons. The problem is the software contains a number of scripts (20+) that all use these named environment variables, so I need to set them somehow.
One way is to source them in a wrapper script that calls your scripts.
You can also generate and install a script that sets those variables. Users of your package then source that script or copy its contents to their own shell init file.
Or a wrapper script calls the var-setting script at a known name and place and then calls the initial executable.
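A sketch of the generator approach: the installer writes a wrapper that sets the variables only for the wrapped invocation, then hands off to the real script (all names and paths are assumptions for illustration):

```shell
# Hypothetical installer fragment: generate a self-contained wrapper.
cat > myapp-wrapper <<'EOF'
#!/bin/sh
# Set the package's variables for this invocation only.
MYAPP_HOME=/opt/myapp
PATH="$MYAPP_HOME/bin:$PATH"
export MYAPP_HOME PATH
# Hand off to the real script, preserving arguments.
exec "$MYAPP_HOME/libexec/real-script" "$@"
EOF
chmod +x myapp-wrapper
```

The advantage over editing .bashrc is that nothing leaks into the user's general environment, and the wrapper works regardless of which shell the user runs.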
|
STACK_EXCHANGE
|
import { round, commatize } from '../../src/utils'

describe('utils', () => {
  describe('#round()', () => {
    it('should round number with given precision', () => {
      expect(round(123.456, 2)).toBe(123.46)
      expect(round(123.454, 2)).toBe(123.45)
    })

    it('should round negative number with given precision', () => {
      expect(round(-123.456, 2)).toBe(-123.46)
      expect(round(-123.454, 2)).toBe(-123.45)
    })

    it('should round number with default precision `0`', () => {
      expect(round(123.456)).toBe(123)
      expect(round(123.454)).toBe(123)
    })

    it('should round number with fixed negative precision', () => {
      expect(round(123.456, -2)).toBe(123)
      expect(round(123.454, -2)).toBe(123)
    })
  })

  describe('#commatize()', () => {
    it('should commatize positive number', () => {
      expect(commatize(12345678)).toBe('12,345,678')
    })

    it('should not commatize bad number', () => {
      expect(commatize('12345.67.8')).toBe('12345.67.8')
      expect(commatize('a12345.678')).toBe('a12345.678')
      expect(commatize('012345.678')).toBe('012345.678')
    })

    it('should commatize negative number', () => {
      expect(commatize(-12345678)).toBe('-12,345,678')
    })

    it('should commatize decimals', () => {
      expect(commatize(1234567.890)).toBe('1,234,567.89')
    })

    it('should commatize number with specified division', () => {
      expect(commatize(12345678, { division: 4 })).toBe('1234,5678')
    })

    it('should commatize number with specified separator', () => {
      expect(commatize(12345678, { separator: '`' })).toBe('12`345`678')
    })

    it('should commatize number with specified division and separator', () => {
      expect(commatize(12345678, { division: 2, separator: '`' })).toBe('12`34`56`78')
    })
  })
})
|
STACK_EDU
|
Found a bug that appears when you are logged in as a Reporting User and haven't done the first device enroll for mobile devices.
If the reporting user for some crazy reason clicks My Network > Mobile Devices,
they are stuck on the first-enroll screen, unable to click out and unable to proceed.
A redirect like your other pages would probably be the best solution.
That's odd. A reporting user should not have access to that page. They should only be able to access the reports page. Are they able to click on other pages?
No, and they don't really have access. I mean they can see it, but they can't do anything on the screen; it just goes there and locks up. I have to restart the browser to get back in.
Right, but if they are "Reporting" users, they shouldn't even be able to see that page. Clicking anywhere should redirect them to the reports page, as that is the only page they can access. What is their role under User Accounts?
Yes that is how I was assuming a reporting user works. The account type is Reporting that is why I found it strange.
Are you able to replicate this on your install?
Here is a screen shot of the Mobile Device Screen when logged in as a reporting user.
NO other page is available; they all redirect EXCEPT Mobile Devices.
Mobile Devices instead jumps to this page, and as you can see the "Next" button is greyed out and unusable, but the "Previous" button is still available.
This is what it looks like if you hit previous
You can press "Choose" under "Yourself"; this then takes you back to the first screen with "Next" still greyed out.
At this point the user can fill out the fields for creating a new user, and the "Choose" button under it LOOKS usable but is not (thankfully). They can also type in the search box for existing users, but no results come up (thankfully).
There is also no way of closing this dialogue to get back to the main Spiceworks page, so you have to reload spiceworksinstall/dashboard, and you're back at the publicly available reports.
So as far as I can tell there is nothing that a reporting user can do here, so it's not a security problem, but if they do happen to get here (for some reason -_- e.g. "click happy" users) they get stuck and have to browse back to the main page via the address bar.
I cannot duplicate this on my end. The Reporting user can only access reports on my end, not Mobile at all. I do get to that page, but only when clicking on Mobile as an admin.
One thing to note is we do not have any mobile devices set up on this install. so the screen coming up is the "first device registration" (for lack of a better term) screen.
OK, further testing shows that they can actually send out join requests; the screen is not unusable, I just never selected a device type.
I think I kind of figured this one out, though.
I have 2 accounts: one is a reporting account, mostly for testing; the other is an admin account. They were created with different emails but have the same first name and last name. Somehow (from what I'm seeing, at least) when I click Mobile Devices it picks up my admin account.
So when I am logged on as a reporting user, I can click into the mobile device screen, select a device (say Windows Phone), then hit Next, and I am brought to the "Send an Email" page. But the profile showing up in the "Send Request" section is my admin profile, with my admin email.
So the only thing I can think of is that it's basing the access for that section off of full user names? I don't know.
The only "odd" thing I can see about my setup is that I have 2 users with the same first and last name but different access levels and different email addresses.
|
OPCFW_CODE
|
DropDownV2 not having the Dropdown Items even though passed the items
Detailed description
Describe in detail the issue you're having.
I'm passing the items prop to the DropDownV2 component, but I still can't get the list.
Is this issue related to a specific component?
DropDownV2
What did you expect to happen? What happened instead? What would you like to see changed?
Need to get the list of the items in the drop-down
What browser are you working in?
Google Chrome
What version of the Carbon Design System are you using?
6.38.1
What offering/product do you work on? Any pressing ship or release dates we should be aware of?
Steps to reproduce the issue
check it
Please create a reduced test case in code sandbox
https://codesandbox.io/s/r4zxq9pvq
Additional information
Screenshots or code
Notes
Hey, can you create a reduced test case over in Code sandbox? You can use this as a template: https://codesandbox.io/s/r4zxq9pvq
Hi
I've added the code in code sandbox.
Check it out
On Tue, Oct 9, 2018 at 1:23 AM, TJ Egan wrote:
> Hey, can you create a reduced test case over in Code sandbox? You can use
> this as a template: https://codesandbox.io/s/r4zxq9pvq
--
Thanks & Regards,
Mani Kumar
I'm not able to see anything, I just see the default template. You will need to fork it, then save and share that link
Okay, Done
Can you check it now?
The link has not changed and is still the default link. Please send the updated link
Hi
Here is the updated link:
https://codesandbox.io/s/4j2knnr7nx
Looks like you're missing the itemToString prop, just need to pass it something like
itemToString={item => (item ? item.text : "")}
This resolved your issue in CodeSandbox, so I'm going to close this issue. Feel free to reopen it if you are still having problems.
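For anyone landing here later, the shape of the fix is roughly as follows (the item shape `{ id, text }` is an assumption matching the snippet above, not something confirmed in the sandbox):

```javascript
// Minimal sketch of the itemToString fix (item shape is assumed).
const items = [
  { id: 'one', text: 'Option 1' },
  { id: 'two', text: 'Option 2' },
];

// DropDownV2 calls itemToString to get the label for each item;
// without it, object items have no usable string form to render.
const itemToString = item => (item ? item.text : '');

console.log(items.map(itemToString).join(', ')); // → "Option 1, Option 2"
```

You would then pass both props to the component, e.g. `<DropDownV2 items={items} itemToString={itemToString} />`.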
|
GITHUB_ARCHIVE
|
A couple weeks ago I presented for the 24HOP Summit Preview and I had a lot of great general questions about how Query Store works. My session title was “Why You Need Query Store” (you can watch it here) and I only had about 45 minutes. As you can probably guess – since I have a full day pre-con on the topic – I can talk about Query Store for a loooong time 🙂 The 24HOP session was really focused on getting folks to understand THE most important things that Query Store provides to show why it’s needed. I left out a lot of details about HOW Query Store works because talking through it is the fun stuff that I’ll dive into during the pre-con. I did have a good number of questions from attendees related to specific functionality, and I promised to write a post answering them. Questions* and answers are below…if you need clarification on anything, please leave a comment and I’ll follow up!
*Questions copied exactly as they were shared with me, I did not try to re-word or make any inferences about what was being asked.
1. Can I get Query Store data for a production database deployed on a client site, that I don’t actually have access to myself? Can the DBA send me something I can use in my own development environment?
A: You provide instructions to the client that explain how to enable Query Store, either through the UI or with T-SQL. If you want to view that Query Store data, the client can either send you a backup of the database or create a clone with DBCC CLONEDATABASE and share that.
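For reference, the T-SQL side of those instructions is short (the database name is a placeholder):

```sql
ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON;
ALTER DATABASE [YourDatabase] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
```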
2. If a user executes a stored procedure from ‘master’ it is not captured?
A: If you have Query Store enabled for a user database, and you execute a query against that user database from the context of the master database, it is not supposed to be captured in Query Store for that user database. But in my testing, it is. But it is not supposed to be, so there is no guarantee it will always work that way.
3. If your database is part of an AG the data you look at can be different based on the server it is running on at that time, correct?
A: I’m not quite clear what’s being asked, but I wrote a post about Query Store and Availability Groups, which will hopefully answer the question.
4. Is it easy to remove the forcing of a given plan?
A: Yes, just use the “Unforce Plan” button in the UI, or use the stored procedure sp_query_store_unforce_plan (you supply the query_id and plan_id).
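If you prefer T-SQL over the UI, it looks like this (the IDs are placeholders you would read from the Query Store views):

```sql
EXEC sys.sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;
```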
5. If you have 3+ plans how does SQL Server decide which plan to use?
A: I assume this is specific to the Automatic Plan Correction feature, and if so, it will force the last good plan (most recent plan that performed better than the current plan). More details in my Automatic Plan Correction in SQL Server post.
6. What equivalent options we have for lower versions?
A: There is an open-source tool called Open Query Store for versions prior to SQL Server 2016.
7. Why are the trace flags not on by default? given the issues with AlwaysOn and QS
A: Great question. Trace flag 7752 will be default functionality in SQL Server 2019. TF 7745 is not default functionality because, I suspect, of the potential for losing Query Store data…and SQL Server wants you to make a conscious choice about that. More details in Query Store Trace Flags.
8. How would you use Query Store to troubleshoot Views?
A: Query Store does not differentiate between a query that references a view and a query that references a table. It does not capture the object_id of the view and store that in Query Store (as happens for a stored procedure), so you have to look specifically for the view name in the query_sql_text column (within sys.query_store_query_text) to look for queries that reference the view.
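A sketch of that text search (the view name is a placeholder):

```sql
-- Find Query Store entries whose text references a given view.
SELECT q.query_id, qt.query_sql_text
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
  ON q.query_text_id = qt.query_text_id
WHERE qt.query_sql_text LIKE '%dbo.MyView%';
```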
9. Is there any way to make use of Query Store in readonly secondary AG replicas?
A: You can read data from the Query Store views on a read-only replica, but you cannot capture data in Query Store about queries executing against the read-only replica. See my post referenced in #3, and then please up-vote this request: Enable Query Store for collection on a read-only replica in an Availability Group.
10. Is it possible that a query store run from one instance to another instance for example I want check the queries of production from dev instance?
A: If you can connect to the production instance from the dev instance, and have appropriate permissions, then you can query the Query Store data on the production instance (but the data exists in the production database).
11. If I execute a parameterized query with OPTION (RECOMPILE), will Query Store have the parameter values of every execution?
A: No. The plan will have the values used for the initial execution that generated said plan, but values for every individual execution are not captured (it would generate excessive overhead to capture every execution).
12. Can Query Store supply the T-SQL to force plan?
A: The UI does not provide an option to script forcing a plan for a query, but if you are using Automatic Plan Correction, the T-SQL to force it can be found in sys.dm_db_tuning_recommendations.
13. Will there be any significant performance overhead by using query store?
14. How does it function when queries span multiple databases?
A: As alluded to in question #2, cross-database queries are tricky. You should work under the assumption that if you execute a query from Database_A, where Database_A has Query Store enabled, it will be captured. If you execute a query from Database_A that queries both Database_A and Database_B, and both databases have Query Store enabled, it will ONLY be captured in Database_A.
15. It seems to be working for me, but sometimes not
A: I would love to help you out, just not sure of the behavior you’re seeing and what your question is.
16. How do you get the full query text from inside the ‘Top Resource Consuming Queries’ windows?
A: Click on the button with the grid and magnifying glass, which says “View the query text of the selected query in a query editor window” when you hover over it.
17. Is the data will be stored in Query Store After the adhoc/SP completed or it will do while is running?
A: Once the plan has been compiled for a query, the query text and plan are sent to Query Store. When execution completes, the runtime statistics are sent to Query Store.
18. If we change the compatibility to SQL 2012 or lower, will that affect Query store?
A: No, Query Store functions in SQL Server 2016 and higher, and Azure SQL Database, regardless of your compatibility mode.
19. If we drop a SP, will that clear the history of that SP plans in the query store?
A: No, but…If you use DROP PROCEDURE syntax, then the object_id column in sys.query_store_query will no longer reference an existing object (in sys.objects). The query and plans will stay in Query Store until they are aged out based on the retention policy.
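One way to spot those orphaned entries is to look for object-scoped queries whose object_id no longer maps to anything in sys.objects:

```sql
-- Query Store entries for objects that have since been dropped.
SELECT q.query_id, q.object_id, qt.query_sql_text
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
  ON q.query_text_id = qt.query_text_id
WHERE q.object_id <> 0
  AND q.object_id NOT IN (SELECT object_id FROM sys.objects);
```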
Again, if any answers are unclear, leave a comment and I can clarify. If you are interested in learning more about Query Store I would love to see you in my full day session at the PASS Summit! It’s on Monday, November 5th, and you get more details here: Performance Tuning with Query Store in SQL Server
|
OPCFW_CODE
|
SELENIUM 3.0 CERTIFICATION TRAINING
About BitsBytez Technologies
BitsBytez is a leader in live, instructor-led interactive training that can set your career on an upward path. Our sustained effort to understand the possibilities of Selenium in the IT world has made us well placed to help interested people learn this skill set. We cater to professionals and students across the globe.
A comprehensive Selenium Certification Training will help you in mastering various concepts of Selenium from scratch. Selenium Java Training by BitsBytez is a comprehensive, job oriented, certification based learning path for QA Professionals to transform themselves into QA Automation Engineers.
This program gives you a complete package to build core competencies in the field of Selenium. Anyone who successfully completes all competencies, along with the project work, class work, and homework, stands a chance to be a successful QA Automation Engineer.
20 Nano and Mini Projects
Nano projects that you do along with Home Work and apply in Mini projects in the Classroom.
30 hours of Classroom Work
In class projects where you apply what you have learned by practicing it.
1 Capstone Project
Create a project that integrates and synthesizes what you've learned. A 360-degree learning experience.
30 Hours of Homework
You watch the lectures, prepare with the theory and complete pre-class work at your own pace.
About the Course
Selenium is the most popular tool used to automate the testing of web applications. In this course, you will learn about Selenium 3.0 and its various components such as Selenium IDE, Selenium WebDriver, and frameworks. You will learn to set up your environment so that you are ready to start using Selenium for testing your web applications. Browsers such as Chrome, Firefox, and IE are used to test web applications. In addition, you will gain experience working with different frameworks such as Data Driven, Keyword Driven, and Hybrid frameworks.
Page Object Model (POM) is a design pattern that enables you to maintain reusability and readability of the automation scripts. This course introduces you to the concept of POM, and how to implement Page Classes and Page Factory to optimize the execution of automation scripts. In addition, you learn about various third-party tools such as Jenkins, TestNG, and AutoIT to optimally use them for performing various tasks in our browsers.
Module 1 Introduction to Java
Goal – Programming is not just about learning a programming language! The essence of programming is Problem Solving. The software industry is not interested in the number of programming languages you know, it is interested in your problem-solving skills. This is the reason why all the technology giants like Google, Apple, and Microsoft focus their interviews more on evaluating your approach to solve the problem than testing your proficiency in a programming language.
In this Module, you will Learn Java in simple and easy steps starting from basic to advanced concepts with examples. You will get introduced to Core Java, its features and why it is so popular. This course is taught in a practical GOAL oriented way.
Objectives – Upon completing this Module, you should be able to: create objects; know the different operators and data types present in Java; master the essentials of object-oriented programming on the Java platform; understand the flow of a program that uses different control statements; use the Java Collections Framework (JCF), a set of classes and interfaces that implement commonly reusable collection data structures; and explain various aspects, tips, and tricks of Java exception handling.
- Understanding Selenium
- Introduction to Selenium
- Introduction to Java
- Why Java for Selenium
- Java Setup and configuration
- Installing Eclipse
- Writing your first Java Program
- Running first test script
2. Core Java
- Java Fundamentals
● History and Features of Java
● Variables, Data Types, and Operators
● Classes, Methods, and Objects
- Control Statements
● If-else
● For loop
● While loop
● Do While loop
- OOPs Concepts
● Inheritance
● Polymorphism
● Abstraction
● Encapsulation
● Interface
- Exception Handling
Module 2 Manual Testing
Goal – In this Module, get introduced to Testing, the types of testing, and the purpose of automation testing. You will also get introduced to the different methodologies followed by the testers. Compare different types of software testing, such as unit testing, integration testing, functional testing, acceptance testing, and more!
Objectives – Upon completing this Module, you should be able to: Keep yourself in the shoes of End User and then go through all the Test Cases and judge the practical value of executing all your documented Test Cases. You should be familiar with QA tools and techniques, bug tracking tool, test design and execution.
1. Introduction to Testing
● What is Testing
● Testing Principles
2. SDLC & STLC
● Different Models in STLC
3. Types of Testing
4. Test Strategy, Test Planning and Test Case Design
5. Defect Tracking in JIRA / Bugzilla
Module 3 Automation Testing
Goal – In this Module, get introduced to automation testing. You will also gain insight into the evolution of Selenium, get an overview of Selenium 3.0 and its components, and compare 2 different automation tools. Finally, set up your environment so that you can start working with Selenium WebDriver 3.0.
Objectives – Upon completing this Module, you should be able to: Define selenium, discuss the Evolution of Selenium from Selenium 1 to Selenium 2 and then to Selenium 3, state the current version of Selenium, discuss the different components of Selenium Suite, describe Selenium IDE, describe Selenium WebDriver, describe Selenium Grid and set up:- Java, Eclipse, Selenium WebDriver
- What is Automation Testing and When to go for Automation Testing
- Selenium Components
- Selenium WebDriver
- Locators and Locator Technique
- Advanced Selenium
Module 4 TestNG
Goal – TestNG is an open-source testing framework that provides more flexible and powerful tests with the help of annotations, grouping, sequencing, and parameterization. With TestNG, HTML reports can be produced, parallel testing can be performed, test cases can be prioritized, and data parameterization is possible. Cross-browser testing enables our application to work with different browsers. Learn all about TestNG in this Module.
Objectives – At the end of this Module, you should be able to: Describe the purpose of TestNG, explain reports, discuss annotations, execute scripts using TestNG, prioritize test cases, discuss cross-browser testing, illustrate the need of taking screenshots in case of test failure, illustrate how to enable/disable a particular test, explain the need of executing a test multiple times.
- Installation @Eclipse/Download – Jar Dependency
- To create and run Test Suites using TestNG
- Parallel Execution
2. Advanced TestNG Concepts
- Printing the Log Statements in TestNG Report
- TestNG results Output Folder walkthrough
Module 5 Page Object Model (POM)
Goal – Page Object Model is a design pattern to create object repository for web UI elements. Page object model includes page classes which finds the web elements of that web page and contains page methods that perform operations on those web elements.
Objectives – At the end of this lesson, you should be able to: identify the need for page object modelling, discuss page classes, express the concept of page factory.
Object repository and test cases
Module 6 Frameworks
Goal – Framework is a basic structure of any environment whether testing or designing. Selenium offers flexibility to create a testing framework that can be reused.
Objectives – You should be able to: define parameterization, discuss how to read data from excel sheet, describe different types of frameworks.
- Hybrid Testing Framework
- Data Driven Development Framework
Module 7 Interview Questions
MNC Interview Questions with Answers
1 Capstone Project and Certification
|
OPCFW_CODE
|