Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
[poppler] Multi-threading rendering on Raspberry Pi
thomas.freitag.bbr at gmail.com
Wed Feb 22 13:48:52 UTC 2017
I can't really get the point:
All threads, in your case two, have to share the PDFDoc, Catalog,
XRef-table and inputstream, so yes, of course there are a lot of mutex
locks, especially when a lot of objects are shared between the pages.
When I developped the multi threading feature it I got a lot of problems
until I had all locks at the right place, because missing locks caused
garbage rendering and program crashes.
The problem here is that neither poppler nor the underlying xpdf was
designed to use threads at all, so the thread implementation could never
be such optimal as it would be desirable.
So I guess that in your case a lot of time is needed in parsing the PDF
objects and not so much in rendering them.
Am 22.02.2017 um 04:23 schrieb pqt at LEFerguson.com:
> I have a PDF rendering program for sheet music running on a Raspberry Pi 3, using Poppler 0.51.0 built from source, running in QT5.8 through the QT5 API.
> I am seeing some weird threading performance behavior. I am calling the page->renderToImage within a separate thread, or more precisely several of them.
> I am not getting any errors, and the results are correct.
> For example, in rendering the same two (and only two) pages in a single thread, it takes 5.7 and 5.6 seconds to render, a total of 11.3 seconds. When rendered in parallel it takes 8.6 seconds for the first to complete, and an additional 50ms +/- for the second, i.e. basically 8.6 seconds total.
> There is no IO that I can see going on at the time, there is no swap file (so no swap usage), plenty of memory, and nothing else running except the desktop services to display the images.
> That's faster, but not nearly as much faster as I anticipated.
> Three at a time gives about 8 seconds for the first, about 1.5 seconds for the second, and 0.6 for the third (I say "about" as my 3 page render was different content).
> Even though no IO occurs, increasing to 4 I still cannot get the processor busy (e.g. as seen by "top"), seeming to imply some constraint beyond cores.
> Here's what is more strange. If I submit 3 pages in a row in order 1, 2, 3 to three separate threads (the Pi3 has 4 cores), these always finish in order 3, 2, 1. I've instrumented these in as many ways as I can to confirm the sequence (and yes, that they really are running in separate threads). That's not a big deal program-logic wise, but it is an odd symptom. That aspect is reproducible on a fast HyperV box I use for testing (it processes them fast enough that the rendering speed is not terribly meaningful there) - it is always in reverse order. And not all that close (i.e. it's not a stream IO issue with the debug output).
> Makes me wonder if something is blocking/serialized, forcing the LIFO behavior and so perhaps keeping me from getting the most performance.
> Are there any special considerations for using Poppler with multi-threaded rendering? Different cmake options for example? Different calling sequences?
> I realize that the Pi3 architecture might be causing this, e.g. memory speed so multi-threading is less efficient. I really did not think much about it until I realized the renders (started within a millisecond of each other) always finish in reverse order of initiation.
> Incidentally, I have tried compiles (of poppler as well as my application) with both -O2 and -O3 with negligible difference in performance.
> Any suggestions or insights would be welcomed.
> Linwood Ferguson
> poppler mailing list
> poppler at lists.freedesktop.org
More information about the poppler
|
OPCFW_CODE
|
Did Usain Bolt really sustain 11 horsepower of exerted power during the 100 m sprint?
I've just come across a WP article which says: "The Jamaican sprinter Usain Bolt produced a maximum of 3.5 hp (2.6 kW) 0.89 seconds into his 9.58 second 100-metre (109.4 yd) dash"
Out of curiosity, I decided to figure out the math behind this. While doing so I found this very detailed paper exactly showing how this was all calculated.
The paper states that the effective work Bolt has put into motion was 6.36 kJ and that the peak power was 2619.5 W for a very brief period in the first second of the sprint. The paper also mentions that Bolt average horizontal force during the sprint was 815.7 N which makes the total work done during the 100 meters race equal to 81.58 kJ. The paper concludes that "This means that from the total energy that Bolt develops, only 7.79% is used to achieve the motion, while 92.21% is absorbed by the drag; that is, 75.22 kJ are dissipated by the drag, which is an incredible amount of lost energy"
Now if we take the figure of 81.58 kJ exerted in 9.58 s, that would give us ~8500 W of average power during the sprint which is a little bit more than 11 hp. So my question now is: can we safely say that Bolt average power during the 9.58 s sprint was 11 hp?
In other words, would an 11 hp engine say in a motorcycle of similar physical characteristics (total mass, drag coefficient..etc) to those of Bolt achieve the same 100 meters in 9.58 s?
Nice question. I guess the problem would be to reproduce the same drag coefficient. I would say that wheels based motion is largely more efficient than legs based motion. Maybe this would be another interesting question actually.
I would question some of the assumptions made here.
The propulsive force exerted by the runner on the ground is assumed to be constant. If this was correct, a runner on a treadmill (with no aerodynamic drag) should be able to "run" much faster than on a track. This ignores the "internal work" required to accelerate and decelerate the legs - unless that is somehow assumed to be included in the "drag force" term in the equation of motion.
The assumption that the average force exerted on the ground seems very unrealistic. At the start of the race, the runner's starting blocks ensure the feet remain in contact with the ground for as long as possible to give the maximum acceleration, but during the rest of the race both feet are off the ground simultaneously for most of the time.
The curve fitting to the position and speed graphs looks nice in the plots, until you look closely. The paper says the position was measured at 0.1s intervals, but the "measured data" speed seems to change at a bigger time resolution - more like 1.0s. During the initial acceleration, the steep slope of the curve hides the fact that the measured and fitted data are quite different - e.g. when the fitted curve is 4m/s, the measured data is about 5, which is a 20% error. The position data is equally bad for the first 1 or 2 seconds of the sprint.
The division of the drag force into linear and quadratic components looks strange and is not given any physical explanation. From Table 1, the asymptotic velocity $B = 12.2$m/s. From the coefficients in Table 2, at this velocity the linear drag force is $728$N and the quadratic force $89$N. There is no physical explanation of what is generating the linear drag and why it is 8 times bigger than the quadratic aerodynamic drag.
To summarize, I don't believe any of this paper, unless the authors convince me it really is correct.
The paper compares the quadratic drag force with the standard formula for aerodynamic drag and concludes that it is the right order of magnitude.
I we just ignore the linear drag force since there is no explanation of where it came from, and also ignore the assumption that the total force is constant, we would get a power output of about 1/9 the figure in the paper, which seems more believable compared with the measured power output of athletes on exercise machines, etc.
I actually have a different approach which somehow confirms what the author of the paper is trying to say. Doing quick search, I found that a figure of roughly 100 kcal burnt per 1 kilometer run is widely used. Therefore, it would be 10 kcal for 100 m or 41.8 kJ for the 100 m sprint and that is at normal running speed around 22 km/h. At this speed a runner can cover 100 m in 16 seconds which equals 2600 W or about 3.5 hp and this is a normal person. I could imagine that Bolt running at double this speed would certainly result in at least double the power which would be 7 hp.
This is also a calculator confirming the above calculations: https://keisan.casio.com/exec/system/1350959101
@BenCrowell If the runner could exert a constant (horizontal) force on the treadmill independent of speed he/she could accelerate (relative to the ground) independent of the speed of the treadmill, and therefore increase the speed of the treadmill indefinitely. This is clearly nonsense, because at some point all the runner's effort is spent accelerating and decelerating his/her legs, and not applying any net horizontal force through his/her feet.
@AbanobEbrahim Your OP (and the paper) appears to be about the amount of mechanical work done by the runner against the "drag force". How much chemical energy (i.e. calories) the runner consumes is a completely different question.
A faster way to say this is that the human body is about 25% efficient converting stored energy to power. 25% of 8.5 kw is 2.2 kw.
|
STACK_EXCHANGE
|
Supernacularfiction – Chapter 1029 80% Body Control Threshold limping spring read-p1
Awesomefiction Nanomancer Reborn – I’ve Become A Snow Girl? novel – Chapter 1029 80% Body Control Threshold confess pet -p1
Novel–Nanomancer Reborn – I’ve Become A Snow Girl?–Nanomancer Reborn – I’ve Become A Snow Girl?
Chapter 1029 80% Body Control Threshold abject cowardly
The Grantville Gazette – Vol 3
“How is he?” Nan Tian inquired as she were telling him of Glen’s effectiveness whenever she teleported right here.
“Certainly. You will need to continue to keep polishing your system. Strategies can look in your head such as a snap of a finger but that’s not what we’re choosing. Nonetheless, you did decent. You tasted the very first experience of surpa.s.sing 80Per cent entire body management with your normal is around 75Percent now. Don’t impose towards that sensation you observed since human body control of 80Per cent is never rigorous. It’s free of charge moving so you need to just let your system lead you as opposed to pushing it using a set up pathway.” s.h.i.+ro smiled as Glen nodded his brain.
“My hint on your behalf is always to ‘nudge’ fact right now. Get it express areas of your Throne World but don’t wrestle it into syndication since designed to obviously fall short. Take it step by step for the present time. I’ll offer you a quick demo.” s.h.i.+ro smiled as she had one step rear.
Additionally, she wasn’t sure what could happen whenever they opened up the second remote s.p.a.ce (Throne Community) inside another isolated s.p.a.ce (Chamber of energy) and she wasn’t also enthusiastic to determine just yet.
Visit lightnovelpub[.]com for top book reading knowledge
Exactly like what Glen possessed demonstrated, it was actually an approach he had made in the length of his concentrate but which was not the things they were definitely going after.
“Your highness.” s.h.i.+na widened her eyes as she wanted to operate but s.h.i.+ro shook her brain.
A gentle descent as his perception of his setting seemed to decrease.
“Will you take a look at that, the planet is sort of sensible finally. However reviewing your accomplishments that’s still as much as disagreement.”
His motions appeared slower, but his rate continuing to increase.
The best up-to-date novels are publicized on lightnovelpub[.]com
mother carey’s chickens bird
An individual fast, a very small droplet of sweating, increased apart as liquid coated the place.
Snapping her hands and fingers, the s.p.a.ce around her began to distort as an icy scenery began to type around her. Having said that, it was not quite an isolated s.p.a.ce.
The brus.h.i.+ng of the wind, the arc of his sword.
His motions appeared slower, but his pace continuing to boost.
“In fact. You will need to hold polishing your system. Tactics will appear in the mind just like a snap associated with a finger but that’s not what we’re selecting. Even so, you probably did decent. You tasted the 1st experience of surpa.s.sing out 80Percent entire body command and also your common is about 75Per cent right now. Don’t demand towards that sensing you experienced since body system power over 80% is rarely inflexible. It’s free flowing so you have to simply let your entire body guide you rather than pressuring it over a established direction.” s.h.i.+ro smiled as Glen nodded his head.
As well as, she wasn’t absolutely sure what would arise if they exposed a 2nd separated s.p.a.ce (Throne Community) inside another remote s.p.a.ce (Holding chamber of energy) and she wasn’t way too excited to determine just yet.
Visit lightnovelpub[.]com to get the best creative reading through knowledge
Such as a droplet, scatter and develop.
Finding this, s.h.i.+ro furrowed her brows.
“My strategy for yourself would be to ‘nudge’ actuality for the time being. Get it show itself regions of your Throne Society but don’t wrestle it into distribution since that may obviously fall short. You need to take it in depth right now. I’ll supply you with a easy demonstration.” s.h.i.+ro smiled as she had taken a step back again.
Viewing this, s.h.i.+ro furrowed her brows.
Flas.h.i.+ng behind the dummy, sword slashes came out everywhere on its body as s.h.i.+ro could see all that occurred with extraordinary clarity.
Gritting her tooth enamel, the s.p.a.ce distorted around s.h.i.+na as ice-cubes begun to shape. Her natural environment seemed to perspective and switch but she was cannot carry it for too long before it snapped alone back in put.
Check out lightnovelpub[.]com to find the best new looking through encounter
Inhalation in, air out.
“Accurate. Even I essential to teach my physique for decades before I even bought a taste from the kingdom beyond 80Per cent. Even now, I’m probably only around 85Percent since I started out to concentrate on magic additional.” Nan Tian admitted having a smile.
“It wasn’t impressive. It was actually also weakened, as well scattered. A poor method. It’s shallow and elegant. Basically If I polish something like this, it’s like lowering off my thighs and legs simply because I jogged a somewhat excellent time at a record.” Glen clenched his fist as his gaze was sturdy.
Visit lightnovelpub[.]com for the very best unique studying encounter
“It’s high-quality, you can easily sleep for now. Let me know, just what are you obtaining troubles with?” s.h.i.+ro inquired as s.h.i.+na scraped her top of your head.
Like a droplet, spread and extend.
|
OPCFW_CODE
|
The recent years have seen rapid growth in the depth, richness, and scope of scientific data, a trend that is likely to accelerate. At the same time, simulation and analytical models have sharpened to unprecedented detail the understanding of the processes that generate these data. But what has advanced more slowly is the methodology to efficiently combine the information from rich, massive data sets with the detailed, and often nonlinear, constraints of theory and simulations. This project will bridge that gap. The investigators develop, implement, and disseminate new statistical methods that can fully exploit the available data by adhering to the constraints imposed by current theoretical understanding. The central idea in the work is constructing sparse, possibly nonlinear, representations of both the data and the distributions for the data predicted by theory. These representations can then be transformed onto a common space to allow sharp inferences that respect the inherent geometry of the model. The methodology developed in this project will apply to a wide range of scientific problems. The investigators focus, however, on a critical challenge in astronomy: using observations of Type Ia supernovae to improve constraints on cosmological theories explaining the nature of dark energy, a significant, yet little- understood, component of the Universe.
Crucial scientific fields have enjoyed huge advances in the ability both to gather high-quality data and to understand the physical systems that generated these data. Nevertheless, the full societal and scientific value of this progress will only be realized with new, advanced statistical methods of analyzing the massive amounts of available data. The investigators develop statistical methods for combining theoretical modelling and observational evidence into improved understanding of these physical processes. The analysis of these data will requirenot only new methods, but also the use of high-performance computing resources. There is a particular need for these tools in cosmology and astronomy, and this project will bring together statisticians and astronomers to combine expertise, but this research is motivated by problems that are present in other fields, such as the climate sciences.
The major goals of this work were the development of sophisticated, rigorous methods of statistical inference that take full advantage of both the rich data that arise from astronomical surveys, and the deep understanding of the physical processes that generate these data. For example, how does one work with a data set whose individual components are images of galaxies? And how does one perform statistical inference in cases where the model that relates the parameters of interest to the observable data is very complex? To answer these questions, we worked, and continue to work, to develop and implement statistical methods that can deal with this complexity. For example, part of the results have been to find ways of summarizing the information in a galaxy image so that useful information is preserved. The initial motivation for these representations was to identify galaxies of unusual nature; this is an important problem as we face massive collections of such images and must develop automated ways of searching the collection for interesting cases. This work is currently being extended to use these low-dimensional representation for other inference goals. As another example, we are working on developing methods for estimating key physical constants in cases where our understanding of the relationship between those constants and the observable data is, at least in part, only possible to simulate. This is a problem of increasing importance. As the quality of simulation models increases, we need to build methods of statistical analysis that can take advantage of these tools. Existing techniques largely require that models can be written as a system of equations. Our results from this work have extended methods used in genetics to make them feasible in astronomy and cosmology. The initial work has led to multiple further projects in this area.
|
OPCFW_CODE
|
Discovering and analyzing your business policy change templates is one of the several key aspects documented in the ISIS methodology, and that must be addressed as part of defining your Rule Governance processes.
The reason why a BRMS component is brought to a company’s IT application mix is because it facilitates the implementation of decision services, but also, and more importantly, because it helps manage their rapid evolution. It is thus surprising that the task of preparing the system and its supporting organization for business policy change management does not always get the attention it deserves.
Note that we are not talking here about preparing for change at the level of the business rule elements, which are the atomic building blocks of a policy. These are usually (or should be) well covered by generic rule governance processes for authoring, validation and deployment. We are talking about change at a macro-level, where it is expressed in terms of raw business policy statements instead of individual business rules. Think change to a paragraph of the underwriting policy used as the reference document by the loan underwriters, for example.
During the application analysis steps, the main questions to the business stakeholders revolve around capturing as accurately as possible the definition of the business policies that will be implemented by the system. However, when analyzed from a snapshot, the policy can appear monolithic or prompt a decomposition along some logical or system-related concepts which are not adapted to the future need to accommodate business changes.
Therefore, another critical task that must be addressed early on in the analysis process is focused on producing an inventory of the probable ways in which the policies may or will change, and with which frequency.
This requires the policy managers to reflect on their experience and come up with as many concrete examples of discrete policy changes that have they have been witnessing regularly in the past. These examples can then be arranged in a taxonomy of business policy change templates.
The most frequent changes are usually the most simple and precisely described ones. The more complex and less frequent ones will often contain some unknowns. For example, the possible templates for a loan pricing application may be:
- Changing the base rate values (may occur weekly).
- Changing the add-on, minimum rate, or fee values (may occur monthly).
- Creating or retiring “specials” for selected regions or channels (may occur quarterly).
- Creating or retiring the pricing structure for a new product (may occur once or twice a year).
Of course, not all changes are predictable. Important and unpredictable ones come, for example, from external regulatory rules or from newly devised company strategies. But for the more banal and recurring ones that can be identified and dissected in advance, the benefits are multiple.
Some of the artifacts that can be prepared for a given business policy change template are:
- A template for change submission, which precisely describes the change and the different parameters involved.
- A process map to implement the change, detailing which rule should be updated, created, or deleted, in which package, whether an update to the rule flow is warranted, etc…, and which specific resource is needed.
- An accurate time and effort estimate to implement this change, from authoring to deployment. The estimate is developed and agreed upon by both IT and business stakeholders, and helps in setting the release schedules and expectations.
- A set of rule templates to facilitate the implementation and reduce the risk of introducing defects.
- A precise test plan and set of test cases. And since the scope of the change is well defined, it is easier to apply techniques such as Delta Testing, which help get extensive test coverage and minimize the updates needed on test cases.
For traditional software, requirement tracking is an important pre-requisite to smooth system maintenance. It allows to analyze the impact of a requirement change to the underlying implementation, and plan the change accordingly.
Business rules management systems take maintenance to the next level and make it a standard activity. It should thus be expected that the efforts on requirement tracking are pushed accordingly to collect and analyze the patterns in requirement change.
To know more about Rule Governance, join me in a 1-hour webinar on May 28th (click here to register). Like our mission states, we are here to help your business handling change and complexity, and rule governance is a key element to make sure you can attain such a goal!
|
OPCFW_CODE
|
Since long time ago I have been looking for inexpensive ways to measure environment parameters such as temperature, humidity, pressure and wind speed. I found many projects to inspire me, so I upgraded mine ;-) . Click here to the old one.
Step 1 - Thinking...
Looking for a precise solution, we had chosen to use the BME280 sensor. It is a great sensor that can provide precise measurements of temperature (+-1C), humidity (+-1%) and pressure (+-1). The communication between sensor and micro controller occurs using I2C protocol allowing to get from sensors, the measurement values, compensated, in digital mode.
To measure the wind speed/position was choosed a analog anemometer, using a hall effect sensor to digitalize the output signal according the rpm. So, for each turn of the anemometer, a digital pulse will be emitted by sensor. To read these rpm pulse, configure a interrupt routine to read the pulses and time period. With these you can compute the wind speed.
Here, the main ideia is get the values from btoh sensors and provide it using serial port (UART) in string format. So, the choice has been Arduino Nano as a micro controller that will read the measurement from sensor, will format in a knowleged string and send to serial port connected on PC or Raspberry Pi. (click to zoom)
Here, the output expected:
After that, the Arduino has been connected to a Raspberry Pi using USB Port.
So, we developed a Python script to open the Raspberry serial port and read the data coming from arduino. Remember, the response is a text string with the measurement values.
To store these values, a ThingSpeak.com account has been created. The ThingSpeak.com is a IoT website that provide a RRD (round-robin database) to customers. After stored our information, the ThingSpeak will show the graphics with mesuared values.
Thinking in Python scripts, the measured values has been cuted, isolated and sent to ThingSpeak.com using URL Post method + urllib . The entire script can be see bellow: (click to zoom)
After that, just run this script in your Linux, in background mode. Leave couple of minutes and you will can see your enviroment measurement at Thingspeak.com
Another feature that we implemented was the broadcast the measurement values over air, using FM. For that, we used PiStation, a application coded in Python that allow us to transmit any information using frequency modulation (FM Radio). The broadcast range is too short, like 100 meters. This range is enough to our purpose and don´t break any communications laws in some countries. Bellow, a example how to convert text to .wav files, using espeak (linux binary) and send to Pi Station.
More details about how to use PiStation and transmite FM signals:
One time that you understand how to use Fm PiStation, the main ideia is: get the measurement values, convert these values to .wav audio file and broadcast this audio file (with measurement values) over FM. In our case 107.5 Mhz. So, every time that we have a new measurement, a new broadcast will play too, see example: (click to zoom)
Step 4 - Hardware assembly
For mechanical construction, it was necessary to protect the electronic components from rainwater. For the raspberry pi , was used a aluminium case (used in security cameras).
To protect the Arduino and BME280, we did a sensor shelter using 100mm PVC pipes (sewage pipe), 1 cap (100mm) and 1 ventilation terminal (100mm) built as bellow:
When, everything is done, attach them all to a photo tripod.
Step 5 - Operating
External Network/Internet: In anothers network, you can see the measurement data on Wunderweather.com or Thingspeak.com. If you want, you can redirect your TCP port 80 to your Raspberry Pi (WAN:80 --> Raspberry pi: 80) in your router and access it from anywhere with help of a dynamic dns (like noip.org)
And, of course, if you have an old iphone that no longer works for nothing, use it as your screen. (Gauges powered by Google Gauges Java Script)
I hope you enjoyed, have a nice day ;-)
|
OPCFW_CODE
|
Data Science: Basic Course
Data Science- Basic
By obtaining the AEC Basic Data Science certification, you can initiate your journey in the field of Data Science. The certification exam assesses your proficiency in performing data analysis, data visualization, and data modeling, as well as your ability to learn and apply various concepts utilized by Data Scientists.
Are you in search of training? Enroll in a course. Accredited Partners
About the Certification Exam
The AEC certification exam for Data Science- Basic comprises 60 questions and has a duration of 2 hours. It is an open-book exam featuring multiple-choice and true/false questions that assess knowledge across the major sections of Data Science- Basic. The exam is graded out of 60 marks and is available for testing at more than 1,000 centers worldwide.
Requirements for the Exam
There are no prerequisites required for AEC Data Science- Basic Certification Exam.
Requirements to pass the Exam
To successfully pass the AEC exam, candidates must obtain a minimum score of 40%.
What is the fee for the certification exam?
The exam registration fee is 100 USD.
How to prepare for the Exam?
Individuals aspiring to take the Data Science- Basic certification exam can prepare with Self Learning programs, which serves as a guide for the exam. Additionally, they can enroll in the Data Science- Basic training program provided by AEC Accredited Trainers and Partners to enhance their exam readiness.
- Installation – Anaconda, Pycharm, Virtualenv
- Introduction to python
- Basic Syntax, comments, Variables
- Data Types, Numbers, Casting, Strings, Booleans
- Operators, Lists, Tuples, Sets, Dictionaries
- If…Else, While Loops, For Loops
- Functions, Lambda, Arrays
- Arrays, Classes/Objects, Inheritance, Iterators
- Scope, Modules, Dates, Math, JSON
- PIP, Try…Except, User InputP, String Formatting
- File Handling, Read Files, Write/Create Files, Delete Files
- Ndarray, Data types, Array Attributes, Indexing and Slicing
- Array manipulation, Binary operator, String Function
- Arithmetic, Statistical, Matrix, linear algebra, sort, search, countings
- Data manipulation, Viewing, selection, grouping, merging, joining, concatenation
- Working with text data, visualization, CSV, XLSX, SQL data puling, operations
- Statistics, Linear algebra, models, special functions, optimization
- Probability & Stats Applications
- Basic Probability, Random experiments, Conditional Probability, Independent Events,
- Bayes theorem, Permutation, combination
- Random variable , Discrete/Continous RV, PDF, PMF, CDF
- Joint Probability Distribution, Conversion techniques, EV, varience, SD
- Covariance, Correlation, Chebyshev Inequality, Law of Large number
- Central limit Theorem, Percent & Quantiles, Moments
- Skewness & Kurtosis, Gaussian, Binomial, Standard Normal, Distribution
- Poisson, Multinomial, Hypergeometric, Uniform, Exponential Distribution
- [Mean, median, mode ](Sample/population), Expected values, Variance, standard deviation
- Sampling distribution, Frequency distribution, Estimation Theory
- confidence interval, Maximum Likelihood Estimation
- Hypothesis Testing – Chi-Square, Student’s T, F Distribution, Z test
- Hypothesis Testing – Type-I, Type- II, p Values, Relationship between NULL & Alternative
- Least Square Methods – Numerical
- Data Cleaning – Handling Missing Values(Data Imputation), Dealing with Noisy data(Binning Technique)
- Advance Data cleaning – Will be referred while Regression, clustering topics
- Data Transformation Techniques- Normalization (minmax, log transform, z-score transform etc.), Attribute Selection, Discretization,Concept Hierarchy Generation
- Data Reduction: Data Cube Aggregation, Numerosity Reduction, Dimensionality Reduction
- Data Mapping, Charts, Glyphs, Parallel Coordinates, Stacked Graphs
- Bar, Pie, Line Charts, bubbles, geo maps. Gauge, whisker charts, Heatmaps, scatterplots, plottings images, videos, motion charts, performing EDA
- Building Dashboard – Live implementation – PowerBI
- Implementation of Numerical intuitions
- Regression basics: Relationship between attributes using Covariance and Correlation
- Relationship between multiple variables: Regression (Linear, Multivariate) in prediction.
- Residual Analysis: Identifying significant features, feature reduction using AIC, multi-collinearity
- Polynomial Regression
- Regularization methods
- Lasso, Ridge and Elastic nets
- Categorical Variables in Regression
- Logit function and interpretation
- Types of error measures (ROCR)
- Logistic Regression in classification
- Distance measures – euclidean distance
- Different clustering methods (Distance, Density, Hierarchical)
- Iterative distance-based clustering;
- Dealing with continuous, categorical values in K-Means
- Constructing a hierarchical cluster
- K-nearest neighbours, K-Medoids, k-Mode and density-based clustering
- BIRCH, DBSCAN, Mean Shift, Spectral Clustering, Gaussian Mixture Model
- The applications of Association Rule Mining: Market Basket, Recommendation Engines, etc.
- A mathematical model for association analysis; Large item sets; Association Rules
- Apriori: Constructs large item sets with mini sup by iterations; Analysis discovered association rules;
- Application examples; Association analysis vs. classification
Data science is the process to draw information from raw data and interpret it into useful insights for business decisions. Data Scientist, Data Analysts, Statistician, Data Engineer are a few of the common job profiles in Data Science. Data science involves a life cycle; capture, maintain, process, analyse and communicate data for business decisions.
Data Science is a comparatively new field with more jobs to offer than the existing fields in computer science and IT. Data Science is a vast multi-disciplinary field with scope of working in leading industries like healthcare, telecommunication, cyber security, finance and others. Data Science has grown with advancement in technology and has more scope of growth in future, offering unaccountable jobs in top MNCs and in top cities.
Any professional belonging to IT, marketing, engineering or software can take a data science course to pursue a career in the fields of data science. Undergraduate students, with more than 50% marks in mathematics, statistics or computer science in 12th examination from science stream are eligible. Graduates with a bachelor’s degree in science, engineering, technology or mathematics are also eligible.. Graduates in business studies like BBA or MBA are also eligible. Data science requires knowledge of mathematics, computer science and statistics.
Data Science certification enables you to start or elevate a career in the fields of data science. Some benefits are:
- Enhanced skill sets to work on different domains.
- Opportunity to work in leading industries.
- Flexibility to switch domains.
- More job opportunities to choose from.
- Higher salaries offered.
- Infinite job opportunities due to high demand.
According to an article published in naukri.com, 3,00,000 plus data scientists would be required in different sectors by 2024, with 3400 positions increasing every month. Common job profiles are:
- Data Scientist
- Python Programmer
- Machine Learning Engineer
- Data Analyst
- Data Engineer
Data Science course with Anexas focuses on training individuals on understanding of Data Science and its aspects, tools and techniques required and skill sets required. The course prepares students for job opportunities with many assignments and real-time projects. Key learnings after completion of this course:
Basic Course in data science: Data analysis, basic visualisation and data modelling.
Intermediate Course in data science: SQL, NLP and different statistical NLP techniques.
Advanced Course in data science: Neural Networks using TensorFlow and Keras, CNN and its different parts.
Yes. The course cost includes the cost of examination, certification, tools, software study material etc. There are no other costs payable once you pay for the course.
Anexas offers the following payment methods:
- Net Banking.
- Card Payment.
- Cash payment.
Cancellation is available 72 hours before the start of the course with 10% deduction. Any cancellation after that is non refundable.
However, Anexas supports custom batches or changes in time and date according to individual preferences, without any additional cost.
Anexas certification course includes all industry level requirements to work in the fields of Data Science. Including tools, softwares, skills and concepts used in different industries. The course opens you to opportunities available in different domains, with job assistance, project guidance, assignments and resume building.
Candidates who pass the exam will receive AEC Data Science- Basic Training Certificate with lifetime validity. The certificate does not require any renewal. It will be issued in the form of a softcopy* (PDF), which includes a Certificate Code, a Verification Link, as well as the date and time of certification issuance.
*If a hardcopy certification is desired, shipping charges will be applied.
|
OPCFW_CODE
|
I am sure you will agree with me that Google seems to be a pretty good search engine. Certainly much better than Bing and Yahoo. That is not to say, however, that Google cannot be
improved. Recently I was doing some searches and I realized that it could be improved significantly. I have no doubt that the intelligent and knowledgeable people at Google have
long ago considered these same ideas. I can only speculate as to why they have not implemented them.
Suppose that you search Google for "File Archiver". The top result is an article in Wikipedia about file archiving software, then the results continue with a melange of links to all
sorts of websites. Many of the listed websites are download sites where one or a number of file archivers are posted for download, and also websites with articles, comments, blogs,
etc. The structure of the results from my point of view, of a consumer using the tool, is simply a cacophony. The results do bring information, but are just as boring as listening
to someone who repeats themselves over and over again. The search results are simply alternating links to websites representing the same most popular archiving utilities, such as
WinRar, WinZip, etc. Instead, what I would like to see is one link direct to the WinZip website, one link to the WinRar website, one link to the Act On File Compressor module and
so forth for all compressor utilities – one link per software directly to their official website. There is no justification in the mess that Google returns in terms of website
"Popularity", as popularity can be easily indicated otherwise, for example by the number of websites referencing a product. For example XYZ number of sites are referring to WinZip,
YZX to WinRAR, ZYX to Act On File and so forth.
It is also possible to have the set of websites relating to a particular software (result) as a set of sub results displayed when a button on the main result is clicked or in other
words simply have the results grouped by result (or key word evaluated from the query) which in this case would be the name of the particular archiving software whereas only one
result representative for the whole class is displayed to the searcher.
My query "File Archiver" is certainly not very specific and could be interpreted in multiple ways as it does not say whether I want to download or buy software, or whether I merely
desire to learn how such software works, or whether I wish to read comparative analysis on the available archiving software, etc. I believe we would agree on that.
The natural idea that I had is that I should be able to select a category specifying the types of results that I wish to see, or the types of websites. For example, I would expect
that there is a chart that specifies all categories of results to which "File Archiver" belongs. After I type a search query I should see a multilevel tree-like category structure
with all categories to which my query belongs, then I narrow my search by clicking on the respective leaf representing a particular category and its sub-categories. Again I would
expect a single result per a class of results, which is the most important by some criteria result representative of the class. Also each class of results to have a weight indicating
its importance, perhaps its size in terms of number of websites belonging to it or any other suitable criteria; as well as ability to see all websites members of each result class.
It would be also good to be able to control the criteria determining the displayed representative website for the result classes.
Strangely, Google have not done any such thing at all. I would be very surprised if they have not thought about this long time ago, but instead of having clean results we have a mess
of repetitive results, and often chaff websites with no real value that simply use SEO strategies to get themselves on the top of the Google results. To the credit of Google, it seems
that there is some classification. On the left-hand side one can see a few buttons with static categories, e.g. book, blog, App etc which appear after the first search is produced.
Nevertheless, those seem to be as good as a toy racing car in comparison to the real thing. My guess is that if Google have created a clear search as the one I am envisioning in this
article their revenue will drop significantly, since most current advertisers' websites will actually be on the first page of Google search, or at worst not too far behind. I also
believe that if Google improves their search by removing the chaff websites such as cnet.com people would actually go beyond page one on the Google results.
One other thing that I find annoying is the awful slang that Google and Microsoft promote. I guess it was Apple that introduced it it first but I would not expect more than slang and RAP
from Apple, so I cannot be annoyed with them. That is one of the reasons why I never buy Apple products, but I would definitely expect more from Google and Microsoft. I am referring to one
of the names that Google have chosen for their primitive categories, namely "App". My guess is that this is an abbreviation for "Software Application" What about "Software"? Or even better,
how about developing proper categorization, which is the premise of this article and in which case the derogatory slang "App" would naturally have no place.
|
OPCFW_CODE
|
Jan 30 2008, 06:19 PM
I have an application that needs some new feature. In one of the feature, I am using access form to update the main table. At the same time, a audit trail table records the details of this update. The audit trail function gives the output as following:
INSERT INTO tblAudit (AuditTrailID, FieldChanged, FieldChangedFrom, FieldChangedTo, User, DateofHit ) SELECT 706-0818-00-00-1 , 'QuoteStatus', '0', '-1', 'USPAVSR', '1/30/2008 5:04:12 PM'
The tblAudit is in Access while the main table is in sql server. The AuditrailID is a text field. After the insert takes place in the audit table the audit table shows different value.
The above row value changes to:
-113 QuoteStatus 0 -1 USPAVSR 1/30/2008 5:04:12 PM
I have no idea where the -113 is coming instead of 706-0818-00-00-1
Any help is highly appreciated. Thanks.
Jan 30 2008, 06:28 PM
INSERT INTO tblAudit (AuditTrailID, FieldChanged, FieldChangedFrom, FieldChangedTo, User, DateofHit ) SELECT '706-0818-00-00-1' , 'QuoteStatus', '0', '-1', 'USPAVSR', '1/30/2008 5:04:12 PM'
You need apostrophes around the value in the SQL statement for a Text field. Without them, it is evaluated as a formula before it is saved (706-0818-00-00-1 = -113). Curious why you are hard coding this value...
Jan 30 2008, 06:36 PM
Thanks for the help fdc. I appreciate it. Actually I am not hardcoding the value. The insert expression has been extracted from the immediate window and then trying to use it to see what values get inserted in the table. I guess I need to change the code to get an output with the quotes against the AuditTrailField. Thanks again.
Jan 30 2008, 07:13 PM
Good luck with your project!
This is a "lo-fi" version of our main content. To view the full version with more information, formatting and images, please click here
|
OPCFW_CODE
|
Update POM file artifact names
Updated changes leftover from #346 and #329
So, at the risk of an enormous bikeshed-style digression...
Names and Contextual Metadata
The name of an object is important because it tells us what the purpose of the object is, and perhaps something about its functionality. However, names should not be confused with metadata, for example in a filename, /path/to/something.file only something is the name, the file extension and path/to directory are not intrinsic parts of the name. Generally, it is undesirable to repeat metadata in the name, so /path/to/path-to-something-file.file would be considered redundantly verbose. We view the file in a certain context, that of it's location in a filesystem, or as part of a URL, and this gives us extra information about the scope and purpose of the artifact.
Directory Structure as Metadata
In the case of the pom.xml file in a Maven project, the only sensible place for it is the root of a project, so no further identification is required. Similar considerations apply to Java source code, where the directory structure for package names is enforced by the compiler, so only the class name need be encoded into the file, as Class.java. And when compiled, the directory structure is still required, so the .class files need never stand alone, and again do not need more identifying information in their names.
However, there are occasions where the directory structure a file is located in cannot be taken for granted. This is usually the case when an artifact is designed to be copied to other locations, out of the control of the original author. In many projects there will be a utility library, or a text handling library, and in the context of the build process the project structure makes it clear where these libraries belong - the implicit metadata of parent or ownership is provided by the filesystem location. The myproject/utils directory is different to the otherstuff/utils directory, by virtue of being in another checked out project. The difficulty arises when building a redistributable artifact out of these projects. The simplest choice is to have both projects create their own utils.jar artifact, but it is obvious that these can then never coexist as library dependencies in some larger project. So, we must resort to duplicating some of the implicit metadata available as context when the projects are being built. Here, this is the parent directory name, which is the name of the project. This gives us a nicely namespaced set of artifacts, namely myproject-utils.jar and otherstuff-utils.jar.
Clocker Libraries
A similar problem occurs with Clocker artifacts. We have, currently, common, swarm and kubernetes projects, which are quite generic terms. These are used in various ways, most commonly as part of a URL used to identify a Java resource and as a Jar archive. When part of a URL we are provided with context that allows us to determine that classpath://io.brooklyn.clocker.common:common/ca.bom refers to the Clocker project's common library, and the scope for confusion is very low. On the other hand, the presence of common-2.0.0.jar does not inform the user about the libraries purpose or context in any way. Therefore, to prevent such confusion, this pull request modifies the atrifactId settings of the sub-project POM files to prepend clocker- in the same way as the clocker-parent project and, indeed, all of the Brooklyn libraries which are named brooklyn-core-0.9.0.jar and so on. The changing of the artifactId but not the folder-names allows disambiguation without too much repetition, as renaming the project folders also to clocker-swarm etc. is simply repeating the fixed context that they must always appear in as a consequence of being subdirectories in the brooklyncentral/clocker GitHub repository.
I updated the README to reflect the master branch, while still pointing to clocker.io for the release. Note that OSGi bundle identifiers are not affected by the artifactId change, as per maven-bundle-plugin documentation.
if artifactId starts with last section of groupId that portion is removed. eg. org.apache.maven:maven-core -> org.apache.maven.core The computed symbolic name is also stored in the $(maven-symbolicname) property in case you want to add attributes or directives to it
Could someone give this another quick check before merging, please?
Looks good & tested in both Brooklyn classic & karaf
|
GITHUB_ARCHIVE
|
package com.evolutionandgames.jevodyn;
import static org.junit.Assert.assertEquals;
import org.junit.Test;
import com.evolutionandgames.jevodyn.dimorphic.DimorphicMoranProcess;
import com.evolutionandgames.jevodyn.dimorphic.DimorphicPopulation;
import com.evolutionandgames.jevodyn.impl.GamePayoffCalculator;
import com.evolutionandgames.jevodyn.utils.Games;
import com.evolutionandgames.jevodyn.utils.PayoffToFitnessMapping;
public class StationaryDistributionMoranProcessSmallMutationTest {
private static final double DELTA = 0.01;
@Test
public void testSimulateStationaryDistributionNeutral() {
Long seed = System.currentTimeMillis();
DimorphicPopulation population = new DimorphicPopulation(10,0,3);
DimorphicMoranProcess mp = new DimorphicMoranProcess(population, PayoffToFitnessMapping.EXPONENTIAL, 0.0, 0.01, new GamePayoffCalculator(Games.allcTftAlld()));
int burningTimePerEstimate = 100;
int samplesPerEstimate = 5000000;
int numberOfEstimates = 5;
double[] ans = mp.estimateStationaryDistributionSmallMutation(burningTimePerEstimate, samplesPerEstimate, numberOfEstimates, seed);
for (int i = 0; i < ans.length; i++) {
assertEquals(1.0 / population.getNumberOfTypes(), ans[i], DELTA);
}
}
@Test
public void testSimulateStationaryDistribution() {
Long seed = System.currentTimeMillis();
DimorphicPopulation population = new DimorphicPopulation(10,0,3);
DimorphicMoranProcess mp = new DimorphicMoranProcess(population, PayoffToFitnessMapping.EXPONENTIAL, 1.0, 0.001, new GamePayoffCalculator(Games.allcTftAlld()));
int burningTimePerEstimate = 1000;
int samplesPerEstimate = 5000000;
int numberOfEstimates = 100;
double[] ans = mp.estimateStationaryDistributionSmallMutation(burningTimePerEstimate, samplesPerEstimate, numberOfEstimates, seed);
assertEquals(0.0799139, ans[0], DELTA);
assertEquals(0.66839172, ans[1], DELTA);
assertEquals(0.25169438, ans[2], DELTA);
}
@Test
public void testSimulateStationaryDistributionNeutralFastForward() {
Long seed = System.currentTimeMillis();
DimorphicPopulation population = new DimorphicPopulation(10,0,3);
DimorphicMoranProcess mp = new DimorphicMoranProcess(population, PayoffToFitnessMapping.EXPONENTIAL, 0.0, 0.01, new GamePayoffCalculator(Games.allcTftAlld()));
int burningTimePerEstimate = 100;
int samplesPerEstimate = 5000000;
int numberOfEstimates = 5;
double[] ans = mp.estimateStationaryDistributionSmallMutationFastForward(burningTimePerEstimate, samplesPerEstimate, numberOfEstimates, seed);
for (int i = 0; i < ans.length; i++) {
assertEquals(1.0 / population.getNumberOfTypes(), ans[i], DELTA);
}
}
@Test
public void testSimulateStationaryDistributionFastForward() {
Long seed = System.currentTimeMillis();
DimorphicPopulation population = new DimorphicPopulation(10,0,3);
DimorphicMoranProcess mp = new DimorphicMoranProcess(population, PayoffToFitnessMapping.EXPONENTIAL, 1.0, 0.001, new GamePayoffCalculator(Games.allcTftAlld()));
int burningTimePerEstimate = 1000;
int samplesPerEstimate = 5000000;
int numberOfEstimates = 100;
double[] ans = mp.estimateStationaryDistributionSmallMutationFastForward(burningTimePerEstimate, samplesPerEstimate, numberOfEstimates, seed);
assertEquals(0.0799139, ans[0], DELTA);
assertEquals(0.66839172, ans[1], DELTA);
assertEquals(0.25169438, ans[2], DELTA);
}
}
|
STACK_EDU
|
Android app to decode ACARS aircraft transmissions
Many DXers record large swaths of the radio spectrum, and then go back to analyze the recordings, looking for signals of interest. Much of the time, they play the recordings back through their SDR software. This works, but is a slow process, no better than monitoring in real time.
Modern software can dramatically speed up the process. In this article, I’ll show how sdrRewind and Black Cat ALE can team up and speed up the process of finding and decoding ALE (Automatic Link Establishment) transmissions.
Black Cat ALE is a full featured multi-channel ALE decoder for Windows and macOS. It decodes ALE transmissions from either audio fed into a sound card input (live decoding) or from WAVE audio files. Download a copy here: https://blackcatsystems.com/software/black_cat_ale_decoder.html
sdrRewind may take a little more explanation. Rather than just play back an SDR recording file, it allows you to select any of your SDR I/Q recording files, and display a waterfall of the entire file at once, as one large waterfall, with a temporal resolution of one second per line. This is more than adequate to see the various transmissions contained in the recording. Select a signal of interest by dragging a rectangle around it with your mouse, and sdrRewind will demodulate and play back the audio, either to your speakers or a virtual audio device feeding a decoder. It can also demodulate to WAVE files, which can then be fed into your decoding software.
It’s also possible to define a set of frequencies and process several SDR I/Q files at once, generating a collection of WAVE files which can then be fed into the decoding software. In the case of Black Cat ALE, it can be configured to monitor a directory looking for new WAVE files, and automatically process them. So even if the demodulation and decoding process will take some time, you can set it up, then walk away and do something more productive while your computer is busy processing the data. Then come back when it is done and view the results.
Download a copy of sdrRewind here: https://www.blackcatsystems.com/software/sdr_iq_recording_playback_program.html
Black Cat ALE Configuration:
Select Set Directory To Monitor For New Files from the File menu, and choose the directory in which sdrRewind will store demodulated WAVE files. (Create one if you need to)
Select Monitor File Directory from the File menu. Black Cat ALE will start looking in this directory for new WAVE files. The name of these files must end in “.wav” or “.WAV”. It will ignore any files that already exist in this directory.
Set the directory for your SDR recording files, using Set Recording Directory in the File menu
Open Settings in the Edit menu, go to the Demod Directories tab, and create one or more entries for where demodulated WAVE files should be stored, including at least the directory Black Cat ALE will be monitoring. Create each entry by right clicking on the list and select Add Entry. Then right click on that entry and select Set Path and select the directory to use. Repeat as necessary. Close Settings. Go to Select Demod File Directory in the File menu and select the directory where Black Cat ALE will be monitoring for new WAVE files.
Select one of your SDR I/Q recording files from the list of files in the list on the left side of the main window. After a moment, a waterfall for the entire file will appear. Adjust the min and max dB sliders as necessary for good contrast.
Set the mode to USB.
Find an ALE signal in the waterfall and drag around it with the mouse cursor (can’t find any? Go to another I/Q file). You’ll want to make sure the lower frequency is an integer kHz value (or 0.5 kHz for those ALE channels), so edit the frequency as needed. Zero Frequency kHz in the Edit menu can quickly do this for you. Don’t forget to make sure the upper frequency is high enough to cover the entire ALE spectrum, about 3 kHz. Click the Timestamped button. sdrRewind will demodulate the signal and write it to the specified directory. When Black Cat ALE sees the file, it will open and decode it, printing out the results.
Sometimes you want to decode ALE signals from one or more specific frequencies, over an entire set of SDR I/Q recording files. sdrRewind can help with this as well.
Select Demodulate Multiple Files from the Edit menu and a new window appears.
On the left hand side is a list of your recording files, as in the main window. Select a file and basic information about that file will be displayed: the center frequency, sample rate, bandwidth, starting date and time, and length in seconds.
Demodulation settings are displayed immediately to the right of this, again as in the main window. Configure this for the frequency of interest, then select one or more I/Q files and click the Start button. Each I/Q file will be demodulated and written to a separate timestamped WAVE audio file. The entire I/Q file will be demodulated, from start to finish.
Do not make any changes to any controls in this window will files are being processed.
If you wish to demodulate several frequencies from each file, instead use the list to the right:
Right click in it and select Add Entry. A new row will be added. Set the low and high frequency limits of the IF passband, as well as (optionally) the pass band tuning (PBT). Change the mode by right clicking on it, and select a different mode from the popup menu. The same AGC settings will be used for all entries.
When you are finished, click the Start All button. Each I/Q file will again be processed, this time for each of the frequencies in the list. Click abort to stop processing additional files, however the file currently being processed will need to finish.
The Clear button can be used to quickly remove all entries from the list.
|
OPCFW_CODE
|
By default details for the actual Script are shown on the left side and the expected on the right, but the expected details can be moved further to the right using the splitter as often only the actual results are required.
Right click the Script or any of the any of the screens to obtain the following options.
- Convert Actual to Script – Take the Actual set of results and create a new automation Script from them.
- Convert Expected to Script – Take the Expected set of results and create a new automation Script from them.
- Animate – Watch a simulation of the Script playback. This does not require the target application to be active as it is in effect a ‘movie’ of what will happen. Sound can optionally be switched on to provide commentary. See the Script Editor chapter for full details.
- Copy steps to clipboard – Copy the screen and input events to the clipboard, you can then use the Paste from clipboard option to add these to any inline Steps grid in Qualify.
Highlight the Script name and click on the Checks tab to display all quality checks for the entire test. It includes all performance and link checks (whether they have passed or failed) and all mis-spelt words and content check failures, plus any mark-ups and notes. The Checking Rule tab next to it can be used to see which checks were in force at the time of execution.
- Type – Either Content, Performance, Link, Spell check, mark-up or note. Each entry contains an icon to indicate the severity:
Success (link and performance checking)
An error which has been set to ‘Ignore’
- Context – For a performance check and screen markups and notes this will be the screen in the Script to which the item applies. For all other types this will be the relevant screen element to which the check relates.
- Details – Information to further explain the entry, for example for spell checks it will be the mis-spelt word.
The following right click options are available. Multiple selections can be made although not all options will be available during this scenario.
- View – Display the screen containing this error with the relevant element highlighted.
- Ignore – Use this for elements which have failed but which should not actually cause an error in results in this instance. However it will still be reported as an error for all subsequent tests. If you require an audit trail of changes to the checks, activate the ‘Check Override Prompt’ option which you will find against the Result entity within the Global Application Definition in Qualify Admin. Then when you click Ignore you will be required to provide a Reference and optional Comment in addition to your Qualify password.
- Include – Toggle previously ignored items so that they will be recorded as errors.
- Ignore and Add/Change – For content checks, update the exception details stored against the application, these will be used for all future results. For spell checking add the failed word to the dictionary so that it will not fail in future results. For performance checks change the threshold which is stored against the application, this will be used for all future results.
All quality check results are also available when viewing each screen individually.
|
OPCFW_CODE
|
The concept of cryptocurrency was coined in 1991. However, the first real implementation was done in 2008 by Nakamoto. The first question arises, what is cryptocurrency. It’s a financial setup in which the currency is being transferred between the two parties. In the beginning, problems like double error method arose, though the problem was solved afterward through concepts such as block chain technology. The whole process is governed through the cryptographic algorithms. A set of public and private key is being transferred between the two parties. The detail of each transaction is stored in each block and for each client;a chain of blocks forms the complete list of transaction. All the blocks together form the block chain. These block chains are nothing else but the financial ledger. The power of this new currency transaction system depends upon the power of cryptographic algorithm. With implementation of algorithms like DES, the secrecy of each financial transaction (block chain) has been strengthened. However, still the concept has not been approved by many countries. The data of each block cannot be altered retroactively or without network consensus. The share of cryptocurrency is not that much currently though with time, it is expected to rise.
Some of the features of cryptocurrency are:
• Public ledger
The most important aspect of cryptocurrencyis the above but technology requires security for effective usage. Problems like double error have occurred in the past though that problem is solved now. The biggest advantage of cryptocurrency is its update feature without touching the central server. Thus, we need to make no changes to the server. Also, the transaction can be done between any two members of the network or three or more.
Thus various advantages that you attain through the cryptocurrency are as below:
However, the technology has developed though it is not being accepted by all the countries. The biggest sensation in cryptocurrency is the bitcoin. It’s being accepted by many countries. Similarly, you can find many more type of cryptocurrency. Each of them uses a unique type of algorithms. All of them, you can learn through the cryptography. It’s a vast subject and the application in the form of crypto currency is one of the major breakthroughs of past decade. The use might increase four fold in coming years definitely.
Digital currency is additionally utilized as a part of questionable settings as online illicit businesses, for example, Silk Street. The first Silk Street was closed down in October 2013 and there have been two more forms being used from that point forward. In the year following the underlying shutdown of Silk Street, the quantity of unmistakable dim markets expanded from four to twelve, while the measure of medication postings expanded from 18,000 to 32,000.
Darknet markets exhibit challenges concerning lawfulness. Bitcoins and different types of digital money utilized as a part of dim markets are not obviously or lawfully ordered in all parts of the world. In the U.S., bitcoins are named as “virtual resources”. This sort of questionable arrangement puts weight on law authorization offices around the globe to adjust to the moving medication exchange of dim markets
|
OPCFW_CODE
|
Lifespan of a Turtle – How Long Can a Turtle Life?
The lifespan of a turtle can be very long. With proper care and maintenance, turtles can live and serve a person lifespan for a lifetime or several generations. In the wild, every six turtles in the wild dies because of human interventions near the water. A lifespan of a turtle varies depending on the species, environment, and age of a turtle.
Certain turtle species such as the Leatherback, Map Turtle, Painted Turtle, Spiny soft-shelled turtle, and the Red Eared Slider can live up to forty years if proper care is given. However, a leatherback turtle has the longest lifespan of all the known turtle species. It can live more than a century or two. Map Turtle lives for between twenty and thirty years. While a Spiny soft-shelled turtle can live for between fifteen and twenty years.
Lifespan estimation is usually done based on survival data collected in captivity. The calculation of lifespan is made based on how long a turtle lives, compared to the number of years it is supposed to live in the wild. Because it is difficult to determine the exact life span of a turtle at the wild, a reasonably good estimation can be made using captive bred turtles that were maintained under controlled conditions. A mortality rate of fifteen percent is used to determine the lifespan of a turtle. This means that out of every 100 adult shelled turtles, 15% will die during their lifetime. Visit here for more information best turtle for pet
If you are thinking about buying a pet turtle, you must take into account its lifespan. Although long time is important, so are health and safety. You have to choose your pet carefully so that you know it for its lifetime. If you are going to keep it as a pet, you must make sure that it is healthy and safe so that you can take proper care of it. You must have a long time commitment if you want your pet to grow old gracefully.
There are several signs that indicate the growth annulus or the age of a turtle. Growth annulus is a thin bump or hair line that forms on the side of the neck near where the eye joins the lower shell. If the growth annulus is present along with white teeth, then the turtle is probably an adult.
In looking for the lifespan of a turtle, you should know that mature adults reach a maximum growth age of twenty-five years. After this time period, they turn into adults and slow down because they enter into a stationary stage in their life cycle. For example, some species of turtles live up to forty years or more. So keep in mind that the exact number depends on how old your turtle is, its growth annulus, and the age at which you obtained it.
|
OPCFW_CODE
|
Why do bathroom sinks have overflow holes whereas kitchen sinks and tubs do not?
My understanding is that the purpose of the overflow holes is twofold--
To allow water to flow down you drain faster.
To prevent overflow if the sink is filling faster than it's draining.
If #1 was true, wouldn't you also see the overflow holes on kitchen sinks?
If #2 was true, wouldn't you also see the overflow holes on bathroom tubs?
So--why do I never see overflow holes on tubs or kitchen sinks?
Edit: To clarify, my familiarity is primarily with sinks/tubs in the United States.
The tub usually has one integrated into the drain open/close hardware.
Double basin kitchen sinks will typically overflow into the other basin.
What country are you from? It seems odd to me for sinks and tubs to not have some kind of overflow.
Every bathtub I've ever seen has a drain hole. Look closer. It's precisely because a tub being filled is the primary use-case for an overflow. I can't say the same for kitchen sinks. But I'm accustomed to double sinks, where the bar separating the sinks is 1/8" lower than the lip around the edge, so each sink is effectively the overflow for the other.
My guess (for kitchen sinks) would be some combination of: 1)overflow routing thru a DisposAll unit would be difficult, 2) kitchen sinks in common use rarely have the drain stop installed, 3) most kitchen water disasters are due to a plugged drain (Disposall or trap), so an overflow line wouldn't help.
@Carl it's not uncommon for people to wash up directly in the sink therefore with the plug in the drain. It wouldn't surprise me if that's more common in the UK as sinks are often smaller. I only do this for things like oven shelves that don't fit in a washing up bowl. An overflow would fall with a blocked drain but not a blocked trap - and the latter is more likely to block completely with little warning
#1 isn't the case - an overflow drain doesn't help water drain any faster. People often incorrectly use the analogy of water glugging out of an inverted 2-liter bottle, but that's not what happens at all. The sink/tub is already open to the air, so the overflow doesn't change anything in that regard whatsoever.
Most sinks and tubs in North America do have an overflow device, it's simply cleverly hidden.
Bathroom sink overflows (which aren't always present -- ours lack them) are visible as North American bathroom sinks are almost universally single basin. However, North American kitchen sinks are often double basin -- and in a double basin sink, the divider doesn't extend up to the full height of the sink, so the two sinks use each other for an overflow. A rather clever design if you ask me, provided you aren't filling all the basins up that is.
As to the bathtub? There's usually an overflow hiding in the drain-stopper selector mechanism.
I'm changing the answer to yours. I think my original question used flawed logic for a couple reasons. You correctly point out what I missed which is that overflows tend to be hidden in the tub and the kitchen sink (double basin).
In the UK I've never seen a kitchen sink without an overflow. They're universal on bathtubs as well. While they may not get used much in common use, they do come into their own if you get distracted while running washing up water, and distractions are common in kitchens, especially if you're trying to clean as you go.
How common are garbage grinders (Dispos-All, In-Sink-Erator, etc) in the UK? They're pretty popular in the USA
@CarlWitthoft. I've seen them but rarely. Even when fitted they're not widely used by subsequent owners. Do they interfere with overflows?
Chris, in theory an overflow could be routed to the output side of the grinder. But again, that's a chunk of plumbing (pipes, unions, etc) for very little payback.
@Carl that would seem reasonable. I've seen a separate pipe connecting the overflow to the trap so it wouldn't be hard.
I suspect this is simply because kitchen sinks are rarely operated by two-year-olds, and hence do not need training wheels.
Also, in a similar vein, so I won't make it a separate answer: since the tub takes a while to fill, you may not stand there watching it the whole time. You are less likely to walk away from the kitchen sink while it's filling, I usually stand there doing some preliminary washing.
Also, it isn't uncommon to fill the tub, then climb into it, raising the level... Possibly enough to be an issue? ... Which doesn't generally happen with the other kinds of basin.
This is not helpful. Further, the point of overflow drains is to avoid disasters, not to deal with the putative existince of children.
Point was merely that children are the usual cause of such disasters; adults are capable of anticipating and preventing them. But if you feel it isn't useful, that's what downvoting is for. (Personally I don't find the question especially useful, but I don't object enough to downvote or vote to close.)
I'd like to see statistics on how many overflows are in fact caused by any given age group. I rather doubt that small children are the prime source of such.
My guess (and that's what it is) would be that overflow passages are known to be unsanitary. In an area intended for food preparation, the cultivation of mildew and bacteria would be a more serious concern, where it isn't so much of a concern in handwashing sinks and bathtubs.
This article seems to support my hunch. It also suggests simple economics, as U.S. codes don't require kitchen overflows.
I don't know many people who shave in their kitchen sinks, I do know a lot of folks that fill their sinks with water when shaving. I also know a lot of folks that fill their tubs, which is why overflows are also typically found on tubs. Some folks also fill the sink when washing up, in an attempt to waste less water.
"But I fill my kitchen sink to do dishes", you might say. That might be true, but the kitchen sink has a much greater volume. So you're less likely to fill it to the point of overflowing. Also, I'm pretty sure kitchen sink overflows were common back before dishwashers (but I could be mistaken). And if you have a double basin sink, they're typically designed so that the basins can overflow into each other.
It all about profit. It cost more to make a sink with an overflow drain. I am glad to hear that they are on all sinks in Europe.
They are very much needed on all sinks. And, this nonsense about it being unsanitary to have one in kitchen sink is just not true.
|
STACK_EXCHANGE
|
How do I practice the Dhamma in an environment where it's not supported?
My parents, try to hinder my practice of the Dhamma by verbally & aggressively putting the Dhamma down to me, almost every day. They believe I am being "brainwashed" by monks I watch online or people I talk to about Buddhism.
They also express that they are concerned when I meditate, which itself is not negative at all, I just don't want them to worry about my practice.
My practice in no way interferes with their life, I keep the Dhamma to myself unless one is curious or respectful about it, but my parents always seem to bring it up & they aren't very respectful.
How do I prevent this from happening while at the same time respecting their beliefs in the Abrahamic God? I only want love & practice, it's just hard to communicate, for them to understand, the truth of the Dhamma to them.
Metta to all
There's one answer here: As a Buddhist with a Muslim family, community, and background… how do you integrate/cope?
There's another topic here (about "communicating the truth of the dhamma to them"): How to explain what Buddhism is?
Firstly, I assume that you reside in a jurisdiction where converting from your original religion to Buddhism and the practice of Buddhism are not illegal.
I suggest that you can explain the following to your parents, assuming that it is possible to reason with them:
I understand and respect your beliefs and your practice of religion X. This religion X teaches one to have morality, respect, kindness and compassion for your fellow man. The founder of this religion also displayed these values. (cite examples here)
I would like to announce to you that I do not belong to religion X any more, although I still greatly respect it and its founder. I have now accepted Buddhism, as my religion and my way of life, by taking refuge in the Buddha, his teachings and his community of disciples, as well as by undertaking the training of the Five Precepts of Buddhism.
The Buddha also displayed the same values of morality, respect, kindness and compassion, that the founder of religion X did, and he taught his followers to practise these values. (Cite examples from the Buddha's life that parallel the life of the founder of religion X)
The Five Precepts of Buddhism are not very different from the Ten Commandments (or some other equivalent teaching of religion X). It teaches me not to kill, not to steal, not to commit adultery, not to speak the untruth, and not to consume intoxicating substances. The Buddha is not God and Buddhists are not required to make idols of the Buddha, or worship him. Taking refuge in the Buddha simply means that one accepts the Buddha as his teacher, and has faith in his teachings.
The Buddha was a normal man, who became an enlightened teacher, who guided his followers away from living an immoral life that leads to suffering, just as the founder of religion X did.
The Buddha also taught his followers to respect and cherish their parents. He taught that it's very difficult for children to repay the love and kindness shown by parents towards their children.
I want you to understand that your insulting and belittling of Buddhism or the Buddha will not change my mind. I am now and for the foreseeable future, a committed practising Buddhist.
The founder of religion X did not prevent or hinder people from following other religions. The Buddha too did not prevent or hinder people from following other religions.The founder of religion X did not force others to practise the religion that he brought. The Buddha too did not force others to practise his teachings.
Similarly, I will not force you to practise my religion, and I ask you not to force me to practice your religion. I will not insult or belittle your religion, and I ask you not to insult or belittle Buddhism.
Even if you insult or belittle my religion, the Buddha taught me to show metta (loving kindness) towards you. By metta, I mean I would always wish you to be happy and free from suffering. (Cite similar examples from the life of the founder of religion X, if possible)
The Buddhism as a religion which can be seen practiced nowadays is often and in many ways quite different to the practices in the texts. Therefore your group is right to worry about you imo. There are several sects even within the Theravada and a variety of disagreements, chances are they already got you:)
Perhaps try approaching the whole Buddhisms more like an academic pursuit and experiment with meditation, shouldn't be a big deal if someone wants to explore those things. The problem arises when you come of as a lunatic and they are unable to communicate because they are not trained in that system of language (dhamma).
If we understand the non secterian Dhamma, we can use judeo-Christian language, or or find simple common ground with them. Dhamma is virtue, morality is Dhamma. Non secterian, even set aside Pali language, who will have issue with not killing, not using intoxicants, not stealing, not undertaking sexual misconduct, not over indulging in sleep and comfort?
Many monastaries and meditation centers don't follow the precept of no having ornaments, decorations, etc. This precept might be important to keep purity of Dhamma in a non sectarian way. Not putting up these walls, barriers attatchments will be good for one and all beings. Don't take my word for it, question it. Find out with your own faculty of dicernment. Fancy Pagoda tops, and decorations and even statues of sidartha gotoma the body are not in alignment with the Dhamma. These jewelery, and chanting outloud, in front of them are better left undone. Meditation in secluded area, in a room preferably clean or empty, or in a park at the base of a tree.
Who will take issue if you simply state I am trying to take time to develop concentration. Lengthen my attention span. It may help me to improve studying, reduce stress, enjoy a richer fuller life. A calm mind might appreciate the song of a bird, the beauty of sunset, etc. Develop wisdom and insight into the nature of reality.
Have compassion for living beings, I try to develop compassion for living beings, know that People and animals are all subject to sickness, death, impermenance. I don't want to unnecessarily cause I creased harm. But it takes practice... Who will take issue?
Better not to argue with them but try taking deep breaths and o serve how you feel when this is happening. If you react as soon as you can realize it come to your senses and try to not respond verbally. If you react 15 min. Before you realize I am reacting, arguing, etc. Know it is a huge improvement from 16 min.
To separate text into paragraphs, don't use leading whitespace at the start of the first line, but do put a blank/empty line between paragraphs -- full details are here or here
I didn't understand what the last three sentences are saying -- "If you react 15 min. Before you realize I am reacting, arguing, etc. Know it is a huge improvement from 16 min."
|
STACK_EXCHANGE
|
FQDN for NameSpace roots
It would be very nice if the Resource supported using FQDNs as servernames in DFSN Root Targets :-)
@bk147 - thanks for your feedback! I'm sure it could be made to do this. I'll take a look and see what I can do (unless someone gets to it first). I'll try and get to this this weekend but possibly the next week.
@bk147 - I've started work on this one. But I now recall why FQDN doesn't work - it is a limitation (possibly an intentional one) in the PowerShell DFSN cmdlets:
The PowerShell Cmdlets for DFS Namespaces do not return the FQDN of any targets added to the Namespace - even if an FQDN was used to set it.
Unfortunately this will cause a looping scenario on the current DFSN resources. This is probably what you've run into - is that correct - the resource trying to add the FQDN root targets to the root every 5 minutes but failing (because they actually have already been added).
There is hope however - I could probably work around this limitation by simply stripping everything but the flatname from the FQDN when looking up the existing target. The only issue with doing that is that if you try and change the Domain of one of the target nodes that already exists then it will not change because the resource will think it is already in the correct state. There is no way around it that I can see because I can't find to pull the FQDN used to create the target.
But before I proceed with this change, could I get you to drop a copy of the config in here that'd you like to use. I just want to confirm I am actually going to implement something that will solve this problem. I also want to confirm you're talking about putting FQDN's int the TargetPath parameter of the resource - the Path should already accept FQDN paths.
Cheers!
Also, can you confirm what sort of Namespaces you're using here (DomainV2 or Standalone).
Sorry about my late response!
First we're using DomainV2 on all our DFS Namespaces. We have several domains in the same forest and what we usually do is that we run the following PowerShell command as the first thing:
Set-DfsnServerConfiguration -ComputerName localhost -UseFqdn $true
This enables support for FQDN in the DFS roots, but the server has to be rebooted before the setting becomes active - failure to do so results in shortnames being using instead of FQDNs.
Thanks,
-Brian
@bk147 - no problem sir! I had no idea about the -UseFQDN parameter! Can I reopen this as I'd actually like to come up with something for you.
I think what we need is a new DSC resource to allow us to configure the DfsnServer. We can manage the reboot OK by making this resource require a reboot if the UseFQDN value changes. This may actually mean that the FQDN sill just start working.
@PlagueHO I think there has been some change here when it comes to the Server 2016 implementation? It now seems that I need to specify the FQDN in the TargetPath on 2016 - otherwise it always fails the Test-TargetResource method. Have you seen this too?
@iainbrighton - actually I haven't yet looked. Thanks for the heads up. I'll make some time this week to look in to it. Cheers!
@iainbrighton - I'm looking at this over this weekend and hope to get it figured out soon.
It would be grand if we could specify the OS version AppVeyor uses - could then set up a test matrix to test the different OS versions.
Hi @ianbrighton - I think I've found the problem:
In Windows Server 2012 R2 the DFSN Server Configuration setting UseFQDN defaults to "False".
In Windows Server 2016 the 'UseFQDN' setting defaults to $null (not set):
This seems to cause errors to occur when creating the Namespace roots. It may be a bug in DFS in WS 2016 though because I couldn't create any DFS Namespaces using PowerShell cmdlets at all until this value was set to either $true or $false.
Are you able to confirm this on your DFS server for me by any chance by executing:
Get-DFSNServerConfiguration -ComputerName Localhost
If that is the case then this is what I think should be done:
All non-FQDN examples should be updated to include to ensure the setting is false:
# Configure the namespace server
xDFSNamespaceServerConfiguration DFSNamespaceConfig
{
IsSingleInstance = 'Yes'
UseFQDN = $false
PsDscRunAsCredential = $Credential
} # End of xDFSNamespaceServerConfiguration Resource
Raise the issue on User Voice.
Are you able to see if setting the UseFQDN setting using xDFSNamespaceServerConfiguration DFSNamespaceConfig fixes your issue?
Keen to get your thoughts on this.
Hi @iainbrighton - I just realized that I'd incorrectly tagged your name in the above reply. So you won't have seen it. Sorry about that sir!
@iainbrighton - if you get the opportunity to confirm this resolves your option, let me know. What I'll then do is update the documentation and examples to make note of this behavior with Windows Server 2016.
@PlagueHO Apologies - been busy with that thing called work 👎. I'll attempt to have a look this week as I think this is now the only thing stopping us going Server 2016 for dev/test..
@iainbrighton - no worries mate! I'm definitely thinking some more docs/guidance is needed as there does appear to be challenges getting these resources working at times.
@PlagueHO As always - thanks for your efforts 👍.
I can confirm that the UseFqdn property is indeed null on a 2016 instance.
PS C:\Users\Administrator> Get-DfsnServerConfiguration -ComputerName localhost
ComputerName : localhost
LdapTimeoutSec : 30
PreferLogonDC : False
EnableSiteCostedReferrals : True
EnableInsiteReferrals : False
SyncIntervalSec : 3600
UseFqdn :
I can also confirm that the following configuration now works on a 2016 host and passes the Test-TargetResource method after the first pass:
configuration DfsFqdnTest {
param ( )
Import-DscResource -ModuleName xDFS;
xDFSNamespaceServerConfiguration 'DFSNamespaceConfig' {
IsSingleInstance = 'Yes'
UseFQDN = $false
}
xDFSNamespaceRoot 'DFSNamespaceRoot' {
Path = '\\test.local\Root';
TargetPath = '\\2016DC\DFS';
Description = 'Distributed File System Root Share';
Type = 'DomainV2';
Ensure = 'Present';
DependsOn = '[xDFSNamespaceServerConfiguration]DFSNamespaceConfig';
}
}
if (-not (Get-Module -Name xDFS)) { Install-Module xDFS -Scope AllUsers -Force }
DfsFqdnTest -OutputPath ~\
Start-DscConfiguration -Path ~\ -Wait -Verbose -Force
Therefore, I think that the examples should be updated to include the xDFSNamespaceServerConfiguration resource defaulting to $false. This certainly feels like a regression/bug though 😞.
Awesome! Thank you again @iainbrighton - that is great info.
What I will do:
Raise an issue on Uservoice - I do agree this seems like a bug/regression.
Add info to the Readme.md identifying this as a known issue and refer to the required solution.
Add the xDFSNamespaceServerConfiguration to the examples.
Thanks again for helping me check this out.
Is the limitation still on the cmdlets? I have used them with FQDN without too much issue however right now I am faced with other issues like not being able to added extra namespace targets.
Hi @laywah2016 - I haven't tried this in Windows Server 2019, so not sure there. But it has not been fixed in Windows Server 2016. All that is really required to do is use the DFSNamespaceServerConfiguration as per the examples to configure the DFS Namespace server to use FQDN.
Can you clarify the issue with not being able to add extra namespace targets? It would be worth creating a new issue if this is not related though.
@PlagueHO is this issue still current?
It seems to work fine on Windows Server 2016 when I try using the commands you have given
Hi @laywah2016 - which version of WS2016 are you using? I wonder if it has been fixed in a recent update or perhaps WS2016 build 1803?
This is the version I am running.
``Major Minor Build Revision
10 0 14393 0``
|
GITHUB_ARCHIVE
|
TensorFlow Lite is a framework of software packages that enables ML training locally on the hardware. This on-device processing and computing allow developers to run their models on targeted hardware. The hardware includes development boards, hardware modules, and embedded and IoT devices.
TensorFlow Lite Task Library contains a useful and powerful set of interfaces. That helps us handle most of the pre-processing and post-processing logic for running TensorFlow Lite models on mobile devices.
TensorFlow Lite Task Library is being widely used by Google products. It supports some of the classic machine learning tasks such as Image Classification and Segmentation, Object Detection, and Natural Language Processing.
Uses of TensorFlow Lite Task Library
- Well-defined APIs
- Complex but common data processing
- High-performance gain
- Extensibility and Customization
Supported ML tasks by TensorFlow Lite Task Library
I. Vision APIs
- What an image represents is called Image classification.
- We train the image classifier models with various images which makes it possible to recognize different image classes.
- For instance, if we train our model with different types of flowers. Like, roses, tulips, and orchids, the model will be able to recognize them.
- Use the Task Library ImageClassifierAPI to deploy custom image classifiers or pretrained ones into your mobile apps.
- Identifying objects in a given image or video stream and their position can be done with the help of object detection models.
- For example, a model might be trained with images containing various pieces of fruit, a label that specifies the class of fruit they represent (e.g. an apple, a banana, or a strawberry), and data specifying where each object appears in the image.
- Use the Task Library ObjectDetectorAPI to deploy custom object detectors or pretrained ones into your mobile apps.
- To predict each pixel of an image with a particular class.
- Use the Task Library ImageSegmenterAPI to deploy custom image segmenters or pretrained ones into your mobile apps.
- Searching for similar images in an image database by a search query into a high dimensional vector.
- Use the Task Library
ImageSearcherAPI to deploy your custom image searcher into your mobile apps.
- This allows transferring an image into a high-dimensional feature vector representing the semantic meaning of an image.
- Use the Task Library
ImageEmbedderAPI to deploy your custom image embedder into your mobile apps.
II. Natural Language (NL) APIs
- This API classifies input text into different categories and is a versatile and configurable API that can handle most text classification and models.
- This API is very much similar to the NLClassifier.
- Specially designed for Bert-related ML models which support Wordpiece and Sentencepiece tokenizations.
- This API loads a Bert model and answers all the questions based on the content of the passage.
- This API allows searching for a similar text in the corpus.
- This allows transferring text into a high-dimensional feature vector representing the semantic meaning of a text.
III. Audio APIs
- This API can be used for the classification of different sound types.
- For example, it can identify the bird species by their song.
IV. Custom APIs
Extend Task API infrastructure and build customized API.
In this blog, we learned about various APIs supported by TensorFlow Lite Task Library. And, some of its major uses of it.
|
OPCFW_CODE
|
Welding conda, id welding conda, id has the best welding prices in conda, id. Conda as a package manager helps you find and install packages if you need a package that requires a different version of python, you do not need to switch to a. Installation ¶ installing conda builds are currently available for mac osx only textblob is also available as a conda package to install with conda, run. The following information can be published by us: dear conda investors, 2017 has ended and with it also our second full year of operations. Reddit: the front page of the internet hey guys 65 4 comments once there, he discovered that there is no free 'conda. Conda gamer140 and screenplaygames 104 second channel link in the description hello guys - duration: 4 minutes, 16 seconds 30 views 8 months ago 1:21:51 play.
Hi, guys: i have some confusion about the scenario where i use 'pip install' in the virtual environment create by conda if i did so, is the python package installed by 'pip' global or local to this conda virtual environment. Now i have found the documentation: this is the documentation that explains how to generate r packages that are only available in the cran repository:. Hi guys i am a very open minded girl who got lots to learn & lots to share i love to party all night as much as i love to stay home and netflix with my blankie. Conda's best 100% free dating site meeting nice single men in conda can seem hopeless at times — but it doesn't have to be mingle2's conda personals are full of single guys in conda looking for girlfriends and dates. Types of brick masonry there are various types of brick masonry services on offer today in conda, id depending on the composition of the original brick manufactured at brick masonry guys.
Conda install to install this package with conda run one of the following: conda install -c birdhouse/label/old python-magic conda install -c birdhouse/label/dev python-magic. Commercial window repair in conda, id window repair pro guys is your number one partner when it comes to commercial window repair in conda, id.
Command reference ¶ conda general conda provides many commands for managing packages and environments the links. Installing packages in #anaconda python: hi guys, i recently installed anaconda python and am having trouble installing packages which are not part of a #conda repository. Explore vermont's #1 condo real estate team located in chittenden county this team of experts also sell single-family homes and more contact us today.
Homebuilding and landscaping the owners of sand delivery guys in conda, id has been providing quality homebuilding and landscaping products to the conda, id and surrounding areas. To install this package with conda run: conda install -c anaconda gcc description anaconda cloud gallery about pricing documentation support about anaconda, inc. “the guys from auroco have created something totally new with the epic the technology opens up entirely new creative possibilities for me conda is supported by. Github is where people build software more than 27 million people use github to discover, fork, and contribute to over 80 million projects.
Save cash via rubber mulch guys being economical is a valuable part of any mission though, spending less shouldn't mean you sacrifice excellent quality for rubber mulches in conda.
Burlington vt condos for sale browse all condos for sale in burlington, vermont contact us if you have questions or to schedule a showing. Modified a comment on discussion help on nco netcdf operators guys, why build nco when you can install it in on mac or linux with conda :-) conda. Nicki minaj - anaconda parody - duration: 4:09 bart baker 102,287,572 views 4:09 lil dicky - freaky friday feat. Conda fine detail paint brush set - 12 miniature brushes for detailing & art painting for any guys that might be concerned about the mauve color. Lyrics to 'anaconda' by nicki minaj: boy toy named troy, used to live in detroit.
|
OPCFW_CODE
|
include quotes from free-form answers in post-workshop survey
there's a lot of nice and useful stuff being said in answering the question "What else has changed in how you write code for your research after attending a CodeRefinery workshop?"
We could include quotes on the front page. Any comments on that? I can start working on it now
I think this is a good idea - this would be without names, right?
of course!
Here are some answers to the question "What else has changed in how you write code for your research after attending a CodeRefinery workshop?" I'm posting them here so anyone who wants to can vote on which ~5 quotes get to the front page:
I'm more conscious about the way I write programs and organize their "organic" growth. I try to modularize more and write proper tests. I also use git more extensive to keep track about my different development branches.
The workshop gave me some good ideas and insight into writing better code. However, I haven't had the time to really implement these ideas into my daily work.
At the moment, not much. Better use of GitHub capabilities and, importantly, documenting code!
Was reminded on practices to keep code modular, and it was great to hear about functional programming and keep it in mind to use.
It gave me an overview of what wealth of tools are out there when i will need them. Right now my current tasks has not allowed yet. It also gave me a confirmation that my current way of using my current tools are correct which strengthened my confidence and authority during my daily work.
The workshop gives a nice introduction to a number of tools. However, implementing them in practice in older projects is still a lot of work, and I have only been able to do so on a limited scale.
It just generally became a bit easier and more understandable
I can say that this workshop influenced a lot my working pace and flow in a better way.
Although it already was the case, now it's even more the case, that it's necessary to assume, that I will not be the only one who must deal with the code I writing.
My code became much more "sustainable" in the sense for others to read/use/modify it. In detail the most significant is improved use of git (forking workflow) and more strict "purity" of functions. Use of PyCharm increased speed of development.
It was an eye-opening start, but I'd need a lot more time and practice to use more of it. The problem is that it feels "indefensible" to spend time on this outside of the course, since it would be judged as a waste of work time (which I do not agree with), while time that is set aside for a course is "already lost".
Be more aware of better coding practices in general.
It has given me a better understanding of the tools, which we used already.
My code is better documented, and I use version control much more. I have also started writing more unit tests.
For me the most important thing was to learn to use git better. Now I make frequent commented commits so it is far easier to see what I actually did for me and others too.
Although I started only using few tools from the workshop because the others are not helpful in my daily work, I became more aware about the issue and about existing options.
I write my codes now in such a way that the person who will be taking over my job would have less time figuring out what is going on. The emphasis of writing modular codes in the workshop was very helpful for me and for the people using my code.
I am much more conscious of making the code clean and easy to read by defining and implementing functions.
Made aware of several tools I was not using and that can streamline the code development in my group. I did not adopt all of them as my involvement with programming is at the moment limited, but it gave a pretty good idea how PhD students and postdocs in my group should develop code more professionally.
Now, the above might indicate that your workshop had little impact on my work, but that would be misleading. It has indeed had an impact - something I have also passed on to my colleagues. However, we work in a rather restricted environment where web tools are out of the question and other tools require som thought to set up so that they work efficiently. We haven't yet had the time to do that.
I take more time to think about long term solutions than "quick fixes" even if it's short of time. Another aspect is that I try to make the code reproducible in the sense of documentation, code readability, clear log files, etc, both for helping myself and my collaborators.
As I am new into software development, almost everything was new and interesting. I am starting using tools adding them progressively into my work-flow. I also try to use as much as possible the tips for making code more reproducible and pure.
I have started to use git for other things than code also (.bashrc, module files, compilation settings, etc).
It would seem the tools I use have become more up-to-date, as the previous tools were quite dated.
This was an extermely useful workshop. Thank you very much! I wish I had known this stuff already as a grad student 10+ years ago. It is now easier to collaborate with co-developers and easier to keep things in order and structured.
It become more organized thinking more about modularity, cleanliness and reproducability
The main thing I got out of the workshop is that I'm now extensively using the issue-tracking systems og GitHub and GitLab. Also on one of my major projects we have moved towards have shorter lived feature branches and using merge requests for more frequent merges and less code divergence. I also started using GitLab's issues system more frequently. It made it easier for people in my lab to report bugs, and easier for me to keep track of them.
I think the way I write code did not change at all, but I am now able to work with git in a team and use code review.
|
GITHUB_ARCHIVE
|
Welcome back. We're still here, still busy as ever. But we're glad you came to read our weekly roundup of news. We're as excited as you are, becausewe get to count our treasures, a weekful of new events and achievements. So let us take a break and walk with you.
To me (and not just me), Arquillian is one of the most interesting projects in the JBoss portfolio. Not only due to its technical merits, which abound, but also because it makes it easy to argue that JBoss 'gets it right' when it comes to high quality open source software. It's a real solution to a real challenge: integration testing for Java applications. A new idea for solving an old problem. That has moved from a proof of concept to an entire ecosystem incredibly fast. Because it's community-driven. And easy to learn. So it's cool.
It's first stable release 1.0.0.Final has just come out and it's truly a reason to celebrate. Currently, Arquillian supports running your true tests (that is, exercising your actual application code) in most major application servers or servlet containers (think JBoss, Glassfish, Tomcat, Weblogic, Websphere and so on), and embedded containers as well (although you may want to be careful with that - read Dan's post for details).
And as Arquillian goes, so go it's siblings: a number of other extensions have had their releases this week as well. Because not only business code matters - testing the UI, browser automation are equally important. So now you can do your applications the right way and keep them bug-free - a complete ecosystem is available for testing them from end to end - from the application server to the browser and tothe mobile platform. Here's a quick roundup of the projects from the Arquillian family that have had their final releases in the past week:
- Arquillian Core, learn more from Aslak Knudsen and Dan Allen
- Arquillian Drone,http://arquillian.org/blog/2012/04/10/arquillian-extension-drone-1-0-0-Final/an extension which provides a simpler way of driving functional tests (i.e. in browser or client-side), learn more from Karel Piwko
- Arquillian Graphene - a Selenium extension with a type-safe, simplified API, learn more from Lukas Fryc
A grid that can hold all your data
The other big news of the week is the first beta release of the JBoss Data Grid, which is the JBoss product built around Infinispan - our high performance data rid community project. For mission critical projects, this means an opportunity of breaking free from the shackles of relational databases and having a fully supported, high performance data store at their fingertips. The significance of this event sis best explained by Rich Sharples and Manik Surtani.
At the movies. Starring: JBoss Developer Studio
How do you get started with JBoss Developer Studio? Follow Max Andersen's blog, and learn more about it, as well as the future plans for m2e-wtp, a critical component of the Maven integration in Eclipse. Burr Sutter has created a series of screencasts, which introduce the major features of the IDE.
Stephane Epardaud provides a detailed description of the newly added module system and repository of Ceylon: Ceylon Herd. You will learn the rationale behind the decision to create it from scratch, as well as its main design goals.
Writing applications that rely on web-services is often getting to the challenge where, in order to see that your code is working correctly it needs some reference endpoints which can be invoked to test interoperability. Alessio Soldano's blog provides a demo on a number of such webservice endpoints deployed in OpenShift, which demonstrate the capabilities of JBoss AS 7.1, especially in the WS-Security area. So, anyone can get access the demo and try them out. And see that everything just works.
Transactionality in massively parallel systems
Mark Little's has published a higher level perspective on transactions and their role in the modern, highly concurrent architectures. As with many other aspects of designing and implementing software systems, the commoditization of multi-core systems has changed the way in which we need to look at transactions - the single-threaded, database-driven perspective is not enough anymore.
Outside Arquillan and its extensions, a few other JBoss projects have released new versions in the past week:
- Weld 1.1.7
- Teiid Designer 7.7 and Teiid 8.0.CR1
- JBoss ESB 4.11
- Drools 5.4.0.CR1
- JBPM Designer 2.1
- Infinispan 5.1.4.CR1
- If you are in Billund, Denmark next week, check the JBoss sessions at MOW 2012 (18th-20th April)
- The DC JBUG has a meetup on April 18, with CloudBees as a guest, showcasing deploying Java EE Web profile applications to various containers including Jboss AS 7
- Sanne Grinovero and Mircea Markus will talk at the Portugal JUG on April 18 about Infinispan and Hibernate OGM
Thanks for joining us again and come back next week for another roundup!
|
OPCFW_CODE
|
Add QLik API Integration
Closes #1
Here's my (hopefully) complete implementation of the QLik API for this project. In some ways this is similar to my previous example implementation, however I have greatly improved the code quality, added comments where necessary and made these changes backwards-compatible with previous data.
Summary of changes
puppeteer and cheerio have been removed from the project dependencies as they are no longer needed.
node-fetch and ws have been added to the project dependencies.
Two configuration options have been added to scrape.js - PAGE_URI for the page url which contains the graphs, and QLIK_WS_PREFIX which is the prefix for the QLik Sense server.
Two new folders have been created in data/docs to store the new raw data in - raw-new and rawhtml-new.
A new script create-legacy-data.js has been added, which turns the new raw files into old files. As a result of this, this change data source should be fully backwards compatible for any consumers. This script should be run before generating the CSV data.
The GitHub action has been modified to run create-legacy-data.js before generate-csv.js. I have also separated the yarn install command into a separate step, removed the puppeteer image (which should vastly increase the action speeds) and removed the step which installs git (as it is already installed).
The README has been updated with the new script in the "To run yourself" section
generate-csv.js has not been updated as no changes are necessary
Possibly breaking changes
The files in the docs/data/raw directory, all.csv, all.json have the following changes which may impact data consumers:
Numbers are no longer comma-separated or truncated.
Percentages are no longer prefixed with a percentage symbol.
This can be seen in the below images. The new files are shown on the left.
Best wishes,
llui85
Also, @jxeeno would you mind licensing this repository under an open source license?
Thank you so much @llui85. I'll review tonight and notify some of the downstream users that I know of to make sure they're aware of the potentially breaking changes.
And thanks for the reminder re license -- we're now using CC0.
LGTM! Thanks for putting this together, @llui85. Much appreciated 🙏
Heads up @llui85 , it turns out modifiedDate can change without the data changing! We may have to go back to the regex 🙃
@jxeeno Do you think it would be enough to remove the check that the raw data still exists when writing the file? I think it would work, as the data will update later, but could mean inaccurate data for the current day, which isn't ideal.
https://github.com/jxeeno/aust-govt-covid19-stats/blob/92f25518702c46a83341ec29f3bf30f73e69e494/scrape.js#L202-L209
Also, there are a few other date properties which might be what we want. Does qLastReloadTime or qMeta.createdDate have the correct value?
{
"0": {
"qDocName": "COVID-19 - NIR External Report",
"qConnectedUsers": 0,
"qFileTime": 0,
"qFileSize": 556269,
"qDocId": "e8635e3f-b339-4ab3-a9de-b4e3b15c6bbc",
"qMeta": {
"createdDate": "2021-07-08T05:11:22.498Z",
"modifiedDate": "2021-07-08T10:00:05.986Z",
"published": true,
"publishTime": "2021-07-08T10:00:04.556Z",
"privileges": [
"read"
],
"description": "r6.4 Release",
"dynamicColor": "hsla(187,18%,43%,1)",
"create": null,
"stream": {
"id": "aaec8d41-5201-43ab-809f-3063750dfafd",
"name": "Everyone"
},
"canCreateDataConnections": false
},
"qLastReloadTime": "2021-07-08T04:32:59.426Z",
"qTitle": "COVID-19 - NIR External Report",
"qThumbnail": {
"qUrl": "/appcontent/e8635e3f-b339-4ab3-a9de-b4e3b15c6bbc/30layer.PNG"
}
}
}
|
GITHUB_ARCHIVE
|
very interesting investigations by Harv.
May I suggest:
the idea of free association.
The concept "definition" could seem like "locking something in a box" ? ; seems like "static" or "death". Of course, if the box is optional; ....
Jesus Christ talked as I recall hearing it said; that "I will provide a door from which you may come and go as you please".
If reality involves the possibility of meeting in freedom; no coercive sticking things together; things only stuck together of necessity from the law of non-contradiction; then how do you define "truth"?
Jesus Christ says (it has been told to us) "I am the Truth".
The idea seems to me to be that truth is not something dead; but eternally alive; "The One , True, Living God".
Numbers: numbers seem to be stickers that get stuck to things. You may say "one plus one equals two"; you may say "1 + 1 = 2"; you may use Japanese language, French language; you might say "blob and bob gives blobby".
How is "counting with numbers" in math constructed?
Looks like a pyramid structure of Zeno's Arrow type?
Archer fires arrow at target. In the first moment it goes half way to target. In the next moment it goes half the remainder. In the (new) next moment it goes half the new remainder. In the
(new new) next moment it goes half the (new new) remainder.
Never gets to the target? you might ask?
Hitting an imaginary wall, a limit, in mid-air?
No; the "moments" were not equal but were being halved along with the distances as "moment" was defined here self-referentially by "distance".
Numbers in math: "1 + 1" the ones are assumed to be equal sized (but need not be; in reality they cannot be fully equal as to exist is to be different in some way or you could only have "A", not "A" and "B" in absolute descriptions ..."
How is "3" defined? In math it is a "generalisation"; where the ones may assume any order.
The construction of numbers in math has a self-referential aspect like Zeno's Arrow; and a pyramid structure building from layers of ones and groups.
Dr. Richard Stafford did a paper where he claims to have found physics laws apply to any communicable information. But he stepped aside re: math foundations somewhat it appears.
But I found more accurately it would seem "physics laws are associated with COUNTING".
And since counting is voluntary (how you group things; what group-labels you stick to things, is voluntary); since free living creating consciousness does not confine things to "dead" number boxes necessarily as to be is to be ; that is "unique", "different"...
Does an object touch another, Harv asks? Does it HAVE to? Seems associations, meetings; are voluntary; ..... the Kingdom of Heaven
"For I shall give you a logic that needs no rehearsal" did Jesus Christ say?
What Harv appears to have found is that math statements are circular? What I am suggesting is that everything is different (or seems it surely couldn't "be" in absolute terms obviously it seems ...)
So every "two" is different
math a house of cards built on sand?
we are in a scenario like in the movie "the matrix"?
Open your eyes and see with freedom...
"If you had faith as a grain of mustard seed, you could say "move" to this mpountain, and it would move"...
What is "a function" or "rule"? In math, it is different from a variable as it involves at least one situation where two variables are held together as a group, as a "one".
Dr. Richard Stafford appears to have re-discovered mathematics inside mathematics. His function "f = 0 " looks to me could be re-stating "1 + 1 = 2" that is, the minimum group size: two; the minimum group variable; the very definition of "function" in math.
Professor Stephen Hawking appears to have also re-discovered mathematics inside mathematics. What is the "pea instanton" that the (mathematically -described (?)) universe is postulated to start from? "1 + 1 = 2" surely?
Shocked at this?
Have I missed something?
Stephen Wolfram might not be so surprised; he seems to suspect the underlying principles are very simple?
Christopher Langan has surely already described with his ideas on "conspansive duality" something looking like "math redistributed inside math".
|
OPCFW_CODE
|
Underground cables and resonances in range of frequencies
Is that true that underground cables with no power factor correction have resonances
in range of frequencies which leads to multiple zero crossing distorted waveform?
But why? Is that is because it work as a band-pass filter?
From the book Electrical Power Systems Quality:
While they may cause interference with low-power electronic
devices, they are usually not damaging to the power system. It
is also difficult to collect sufficiently accurate data to model power
systems at these frequencies. Acommon exception to this occurs when
there are system resonances in the range of frequencies. These resonances
can be excited by notching or switching transients in electronic
power converters. This causes voltage waveforms with multiple
zero crossings which disrupt timing circuits. These resonances generally
occur on systems with underground cable but no power factor correction
capacitors.
Where did you hear this? Can you provide links to reference material on this?
In a book,Electrical power system quality. I'll update quoted text.
I general ALL power-lines exhibit resonances that vary with timeframes that are both long and short.
To see why all you have to do is realize that the power-line is subjected to many different loads, some of which are Inductive (motors), Capacitance (Fluorescent lights), resistive (heaters) to list just a few. Longer tem variations arise from people turning on loads, say a washing machine or lights and indeed even a change in the mechanical load seen by a motor will change what electrical load it presents to the power-lines. On shorter time frames, there is variation even on e a cycle to cycle basis as the sinusoidal waveform interacts with rectifiers, chopper circuits and switching power supplies.
You can see where a C load in parallel with a L load might form an easy resonance. This resonance might exist for long time frames, or the power-line may only see the C art of the resonance during part of the sinusoidal cycle. These L'and R' and C's are also distributed spatially thorough out a neighbourhood, so this ends up to be a seething complex time varying mess.
Buried cables, due primarily to the fact that the line and neutral wires are closer together tend to have higher capacitances than an air/pole mounted power-lines. That means that the resonances can be enhanced, depending upon the primary inductance that are present. Although it must be mentioned that it is possible that a buried cable can have fewer resonances also because of the increased capacitance. It all depends upon the mix of equipment and parasitic impedances.
so in other words because it have high capacitance?
IF you are comparing the two (air vs buried) in an unloaded state, then yes the buried will have a high line/neutral capacitance.
then why unloaded state?
because that is the only way to compare them, no two power-lines environments are the same. It's entirely possible to have a power pole type environment feeding a house with tons of florescence lights (capacitance) that would have more C than the buried in the next block over. You can only speak about trends.
|
STACK_EXCHANGE
|
OK, which version are you using? Home, Pro, or Ultimate? I assume there’s a difference so please state your reasons why you picked one over the other. Thanks in advance.
Physical Memory Limits
I use Home Premium because it came with the computer. It is limited to 16GB of RAM.
With Pro or Ultimate you get up to 192 GB. Pro adds some networking features (business network type features). The Home Premium has home networking features though. You can also get encryption and Bit Locker in the Pro and Ultimate versions respectively. I included some links at the top to check out.
I don’t have any complaints. The system has been extremely well behaved.
I am using pro on my laptop and ultimate in my studio - I think you won’t need ultimate for Cubase just pro - the limit on RAM is important IMO though many motherboards dont support so much RAM as more than 16 gig - this will be your current bottleneck - the mobo
OK, since my MB will only do 8G the Home Premium looks good. MS sells it on their site for $119 as an upgrade - are there better prices somewhere?
There will be otrher issues with premium, If I recal, and you need to check this, pro and above have a mechanism for running legacy programs wriiten for xp and below. Also, you must run in 64 bit to access larger memory and this means you need 64 bit drivers for all hardware inc sound card.
If you want to run 32 bit VST Steinberg’s VST bridge is notoriously unreliable the workaround for this is Jbridge - cost a few bucks but is worth it
All this needs careful thought, plan your system and then post it here for comments - there are other much more techy than me
Be careful about the “upgrade” versions if you want to do a clean install. Here’s a link that describes the issues …
If possible, I would suggest a “full” version install on a new hard drive. Start from a clean slate and you will know exactly what you are dealing with. I have, however, heard of people having no problems doing an in-place upgrade.
Just something to think about.
I find the street price difference between Pro and Ultimate is very little.
Ultimate allows other computers to RDP into it. That is, another computer can view its desktop and control it across the network. This is useful for configuring ‘headless’ (no monitor) slaves for VE Pro, etc.
In any case, the Anytime Upgrade direct from Microsoft allows converting to Ultimate, and they just send a new Product Code by email that enables the extended functionality within the existing installation.
Note that upgrading from XP to Win 7 forces a complete new install. The upgrade will only work inplace for Vista.
Even then, one can force a complete fresh install by installing twice in succession, but doing a ‘new’ install and NOT activating on the first and ‘upgrading’ AND activating on the second.
Thank you all for the insights. I have done the reading at this point and think I understand the required steps. I do get the ‘clean install’ thing and MS is pretty helpful with this in their articles. I am not looking forward to reinstalling all my 32 bit programs to 64 bit but I don’t have any choice. I am not looking forward to it.
I have to say, at least at this point, I am looking forward to the improvements in the recording end of things. Having 8G is certainly not as much as a lot of you have but it will easily improve my game. I can also see why people just cut their losses and build a new computer. In my wildest dreams I never thought I would see a MB with room for 16G of ram. Live and learn.
You can look forward to some rock solid perfomance with C6 64 bit. I have 12 gig of Ram but I have never used more than 8 even with full orchestral stuff. Should be enough for now.
|
OPCFW_CODE
|
The research group Bio Robotics at the Chair of Micro Technology and Medical Device Technology (MiMed) examines the feasability of organic systems to replace physikal components in the field of robotics. We want to explore and develop devices or systems in which biological components work in symbiosys with physical ones. For this purpose, we are working in the field of mechanics, kinematics, electronics and information technology. Particularly, the potentialities of rapid manufacturing (e.g. selective laser sintering) are systematically explored.
We want to research robotic systems that are biohybrid systems that generate forces and torques through muscles made of biological cells encapsulated in exoskeletons or skin-covered tissue systems that can be electrically controlled. The skeletons consist of mechanisms that can convert simple linear motions into complex spatial motions with high precision.
The future of robotics consists of systems, taking their energy from food, converting the energy into force via cells, and can still being controlled electrically.
The question is, up to which point it is possible to arrange biorecators around the muscle cells and with which skeletal structures mechanical movements can be achieved.
The chair posseses various manufactoring facilities for the production of functional models and prototypes. Such as a precision engineering shop floor. Notable facilities include:
- EOS Formiga 100
- CNC 5-axis milling machine (Deckel)
- Z-corp Z-510
- Trotec Speedy 400 flexx laser cutter
- Yilun Sun and Tim C. Lueth. "Cruciate-Ligament-Inspired Compliant Joints: Application to 3D-Printed Continuum Surgical Robots." 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE, 2021. (accepted)
- Yilun Sun and Tim C. Lueth. "Design of Bionic Prosthetic Fingers Using 3D Topology Optimization." 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE, 2021. (accepted)
- Yilun Sun, Yuqing Liu, Lingji Xu, Yunzhe Zou, Angela Faragasso and Tim C. Lueth. "Automatic Design of Compliant Surgical Forceps With Adaptive Grasping Functions." IEEE Robotics and Automation Letters 5 (2), 1095-1102, 2020. DOI: 10.1109/LRA.2020.2967715
- Yilun Sun, Dingzhi Zhang, Yuqing Liu and Tim C. Lueth. "FEM-Based Mechanics Modeling of Bio-Inspired Compliant Mechanisms for Medical Applications." IEEE Transactions on Medical Robotics and Bionics 2 (3), 364-373, 2020. DOI: 10.1109/TMRB.2020.3011291
- Jinguo Huang, Yilun Sun, Tianmiao Wang, Tim C Lueth, Jianhong Liang, Xingbang Yang. "Fluid-Structure Interaction Hydrodynamics Analysis on a Deformed Bionic Flipper With Non-Uniformly Distributed Stiffness." IEEE Robotics and Automation Letters 5 (3), 4657-4662, 2020. DOI: 10.1109/LRA.2020.3003774
- Yilun Sun, Dingzhi Zhang and Tim C. Lueth. "Bionic Design of a Disposable Compliant Surgical Forceps With Optimized Clamping Performance." 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE, 2020. PMID: 33019042, DOI: 10.1109/EMBC44109.2020.9176027
- Krieger, Y.S.; Kuball, C.-M.; Rumschoettel, D.; Dietz, C.; Pfeiffer, J.H.; Roppenecker, D.B. and Lueth, T.C. (2017): Fatigue Strength of Laser Sintered Flexure Hinge Structures for Soft Robotic Applications. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September 24–28, 2017.
|
OPCFW_CODE
|
Need help deciding which type of chart to use?
The following table describes each chart type and the types of data and applications for which it is best suited. Hover the cursor over each chart type to learn what types of data it needs. For example, a GIS Heat Map needs one location dimension in Detail.
Provides a visual presentation of categorical data. They compare two or more datasets and can show positive and negative values.
|Funnel||Displays values as progressively increasing or decreasing proportion, totaling 100 percent. You can set block height and specify whether to show the funnel neck.|
Displays geographic data but requires specially prepared data before you can create it. It requires location dimension data in the Detail. You can specify the level of geographic detail such as continent, country, or city.
GIS Heat Map
Displays geographic patterns of higher than average occurrence of things like crime activity, traffic accidents, or store locations. It also requires special location dimension data.
Displays a trend over time or categories.
|Crosstab||Displays the joint distribution of two or more variables represented in the form of a matrix.|
|Table||This option is only available when viewing data that has not yet been built into a cube. Some of the options for working with data will be disabled. See Working with non-materialized or raw data cubes.|
Displays information as a series of data points called markers connected by straight line segments. Line charts can display continuous data over time, set against a common scale, and are ideal for showing trends in data at equal intervals or over time. In a line chart, category data is distributed evenly along the horizontal axis, and all value data is distributed evenly along the vertical axis. As a rule, use a line chart if your data has non-numeric x values. You can also choose to show steps or steps without risers.
Compares two or more quantities of graphically quantitative data using a line chart with the areas below the lines filled with colors. Use a stacked area chart to display the contribution of each value to a total over time.
Displays data as a circular chart divided into sectors, illustrating numerical proportion. It is used to show the percentage or proportional data and usually the percentage represented by each category is provided next to the corresponding slice of pie. Pie charts are good for displaying data for around six categories or fewer.
Plots data points on a horizontal and a vertical axis in the attempt to show how much one variable is affected by another. Scatter charts are commonly used for displaying and comparing numeric values, such as scientific, statistical, and engineering data. These charts are useful to show the relationships among the numeric values in several data series, and they can plot two groups of numbers as one series of xy coordinates.
Displays hierarchical data as a set of nested rectangles. Use dimensions to define the structure of the treemap, and measures to define the size or color of the individual rectangles. Use color and size dimensions correlated in some way with the tree structure to see patterns that would be difficult to spot in other ways. Both the size and color are determined by a value, for example Sales. The greater the sum of sales for each category, the darker and larger the box.
A Key Performance Indicator (KPI) helps a business monitor its performance and measure its progress towards specific goals. Create a KPI in a worksheet and include it in a Dashboard as one of the cards. You can use one or two measures in the y-axis. You can use a template to define the appearance or choose No Template and use the rich text editor to customize the design.
A packed bubble visualization displays a large amount of data in a small space. Dimensions define the bubbles, and measures define the size and color of the individual circles. For example, use a product dimension to create the bubbles, volume of sales to indicate the size, while color could indicate the profit.
A wordcloud or Tagcloud is a visual representation of text data, typically used to depict keyword metadata to visualize free form text. The importance of each word is shown with font size and/or color.
A dot plot (or dot chart) charts values that fall into a simple scale of categories sometimes referred to as bins. Dot plots are especially useful for assessing distributions when there is a relatively small amount of data. They are useful for highlighting clusters and gaps, as well as outliers.
|Gantt chart||A Gantt chart provides a graphical illustration of a schedule that helps to plan, coordinate, and track specific tasks in a project. It shows the amount of progress made or production completed in certain periods of time in relation to the amount planned for those periods. It's a variant of a bar chart with a time-based axis and requires the use of date-time data.|
|
OPCFW_CODE
|
This blog post comes off the back of the recent cluster expansion I did a few days ago. Here’s a quick rundown of the situation before the expansion:
- Single large node deployment at 16 vCPUs, 48 GB RAM and 3.5TB flash disk
- Large infrastructure being monitored – 10,000++ VMs, 400 hosts, 50 clusters etc..
- Default vSphere management pack – the only one in use
- 31 million configured metrics, 4+ million metrics being collected
Doubtless to say vROps was struggling to keep up with the burgeoning, ever expanding environment. Dashboards would time out collecting data, searches were slow, reports took long to run. Not a good situation given how awesome vROps otherwise is! Something had to be done about this.
VMware have this pretty handy sizing calculator available in the form of an excel sheet you can look at here. I punched in my environment’s size into the excel calculator and it spat out what I needed:
Away I went with deploying a new node to add to the mix. Keep in mind – the new node must be the exact same version, not one up nor one down (not even a minor release). I’d have thought VMware would support rolling upgrades, but found 220.127.116.117xxx and 18.104.22.168xxxx dont go too well together. Anyway, just get the absolute matching OVA from VMware downloads and install the new node. I didn’t need a HA cluster so I didn’t enable high availability. I wont post up screenshots of the steps involved, VMware have done a good job of that here. Here’re some of the somewhat interesting things the cluster expansion went through.
Given the size of the environment, during the upgrade I saw the following message on and off:
The master node does not immediately hand over work to the data node:
You’ll see the status of the data node change to “Analytics is starting”
After a few hours, the cluster stabilized and you can see the data node now sharing the load:
I also recommend you stop the collection of metrics while the expansion is taking place. The master node works real hard when it’s handing over load to the data node and the collection of new metrics maxes out CPU and RAM, I saw 90%+ RAM and CPU utilization when the master’s doing its thing and collecting metrics.
Now, the cluster runs like a dream!
For some good information on how many metrics and object a node or multi-node installation can handle, I recommend you read this kb article.
I call out special thanks to VMware’s James Polizzi for his assistance with the numerous questions I asked, he’s a gun at vROps and breathes the product!
|
OPCFW_CODE
|
In this op-ed the authors respond to a piece on the need for greater population equality between federal constituencies published last week in the Toronto Star.
Do citizens need to be treated equally in order to be treated fairly? The population in federal electoral districts (or “ridings”) varies widely across Canada. Michael Pal and Matthew Mendelsohn, in their recent article, argue that this means that some voters matter more than others. This population disparity will continue even after the Conservative government adds 30 more seats to the House of Commons in 2015.
Pal and Mendelsohn are definitely correct that some constituencies are more populated than others. But, is this harmful to Canadian democracy?
Canada is a diverse and difficult to govern country. Not only is Canada very large, but it is also scattered into many different communities, some geographic and some cultural. As a consequence, Canadians see themselves not only as individual residents of some city or town, but also as members of various communities. When drawing constituency boundaries, the challenge comes not only in ensuring reasonable population equality, but also providing these other communities to which we belong what the Supreme Court has called “effective representation.” And it must be done in a way that ensures that citizens are well-served by their representatives.
By our lights, then, the challenge of designing electoral constituencies is threefold. First, there should be reasonable equality between constituency populations. Of course, mobility means that populations will never perfectly match, but there should be some effort towards parity.
Second, individuals have to realize effective representation. This is in tension with the first goal, unfortunately. To receive effective representation, some low population areas – like the Yukon Territory or Nunavut, require their own MP. If we required perfect population equality, we would either have to force all three territories to share a single MP, or we would have to triple the size of the House of Commons. There are no easy solutions to this tradeoff.
Canada’s electoral laws have recognized this trade-off. They are designed to slightly under-represent the faster growing provinces like Ontario in order to ensure the effective representation of less populated and slower growing areas of the country. While voters in Ontario will be marginally underrepresented at the next election, the province will still have 36% of the seats in the House of Commons – only slightly less than the 38% it would receive based on its exact share of Canada’s 2011 population. It will also be virtually impossible for a party to form government without receiving at least some support in Ontario. Can we say the same about any other province?
Third, citizens should have, as much as possible, easy and effective access to their governmental representatives. Their concerns should be heard, and their requests for assistance should be met. If citizens in less populated constituencies are receiving better assistance than those in more populated constituencies then perhaps more equally populated constituencies should be our goal.
There are practical realities of democratic representation in a country with a large, spread out population. Constituencies with small populations are most often the largest geographically. Consider the difficulty of visiting 10% of your constituents in a riding like Kenora, which covers an area larger than the United Kingdom and includes several communities accessible only by plane? Now, consider how much easier it is for an MP in Toronto to reach one-in-ten constituents, even though he or she may represent more people overall.
Unfortunately, Pal and Mendelsohn do not address this practical difficulty and thus do not teach us much about whether constituents in high population ridings actually receive worse service and representation from their MPs.
We were interested in this question, and so conducted two studies to examine how population affects how well MPs represent their constituents. In the first study, we used interviews of thousands of Canadians to see if those in less populated constituencies reported more positive democratic experiences than those in more populated constituencies. We found that the satisfaction citizens express with democracy is no higher or lower in less populated constituencies. We did find, however, that citizens in less populated constituencies are more likely to report contacting their MP.
When citizens contact their MP, do they receive better service if they are in a less populated constituency? It is hard for citizens to compare the service provided by their local MP to those in other ridings, so our survey does not shed light on this. To measure this we instead conducted an experiment on 101 MPs in the spring of 2010. With approval from our university ethics board, we created a number of fictitious email accounts, and made contact with MPs’ offices asking for information on simple matters. Two results stood out. First, MPs and their offices are very helpful. Citizens should not be reluctant to turn to their representatives for help. Second, MPs in more populated constituencies appeared more helpful than those in less populated constituencies.
Designing electoral constituencies in a large, diverse, and federal country is difficult. It requires a balance of relative population equality, effective representation, and good outcomes for citizens. While Pal and Mendelsohn focus on the first goal, they ignore the other two. When we consider this broader picture, we find no evidence that citizens would feel more fairly treated if they were more equally treated.
Authors: Peter Loewen, Paul Thomas and Michael Mackenzie
Peter Loewen is an assistant professor of political science at the University of Toronto (Mississauga). His research interests are citizen and elite political behaviour, and Canadian politics. In addition to running experiments in several countries, he is also involved in the Vote Compass project.
Paul Thomas is a PhD Candidate in the Department of Political Science at the University of Toronto. His primary research interests are parliamentary governance, cross-party cooperation, and religion and politics.
Michael MacKenzie is a PhD candidate in the Political Science Department at UBC. His work focuses on democratic theory and political representation.
|
OPCFW_CODE
|
How to pass uninstallation password to remote system via powershell to uninstall an application which is password Protected
I can silently uninstall application which doesn't have password Protected using Powershell .
Start-Process -Wait -FilePath $uninstall32 /SILENT
For password protected application (Setup created using Inno SetUp ) it popup for asking password. Is there an option to pass password without popup?
If it succeeds I want to remotely uninstall password protected application
Without password protected application remotely I had uninstalled silently.
I tried
Start-Process -Wait -FilePath $uninstall32 UNINSTALL_PASSWORD =AR /SILENT
You tried with UNINSTALL_PASSWORD =AR in the argument list, and then what? What happens? Does it exit with an error? Does it ignore the argument and prompt you for the password?
Thank you Mathias for the suggestion . When i tried with this start-process -Wait -FilePath $uninstall32 -ArgumentList UNINSTALL_PASSWORD=AR it pop for Password. A positional parameter cannot be found that accepts argument '/SILENT'. message coming if i add /SILENT after -ArgumentList UNINSTALL_PASSWORD=AR
tl;dr
Specify the pass-through arguments as a single string:
# Note: Parameters -FilePath and -ArgumentList are positionally implied.
Start-Process -Wait $uninstall32 'UNINSTALL_PASSWORD=AR /SILENT'
If you need to embed variable references or expressions in the arguments string, use "..." quoting, i.e. an expandable (double-quoted) string , rather than '...', a verbatim (single-quoted) string .
When you use Start-Process, any arguments to be passed to the target executable (-FilePath) must be specified via the -ArgumentList (-Args) parameter, to which you must pass a single value.
Note: While -ArgumentList technically accepts an array of string values,[1] allowing you to specify the pass-through arguments individually, separated with , a long-standing bug unfortunately makes it better to encode all arguments in a single string, because it makes the situational need for embedded double-quoting explicit - see this answer.
Both parameters can be bound positionally, meaning that values need not be prefixed with the target parameter names.
See this answer for how to identify a command's positional parameters via its syntax diagram.
Therefore, your attempt:
# !! BROKEN
Start-Process -Wait -FilePath $uninstall32 UNINSTALL_PASSWORD =AR /SILENT
is equivalent to:
# !! BROKEN
Start-Process -Wait -FilePath $uninstall32 -ArgumentList UNINSTALL_PASSWORD =AR /SILENT
and also equivalent to, using positional parameter binding for both -FilePath and -ArgumentList:
# !! BROKEN
Start-Process -Wait $uninstall32 UNINSTALL_PASSWORD =AR /SILENT
That is, UNINSTALL_PASSWORD alone was bound to -ArgumentList, whereas =AR and /SILENT are additional, positional arguments that cause a syntax error, because both parameters that support positional binding - -FilePath and -ArgumentList - are already bound.
[1] From PowerShell's perspective, even an array of values (,-separated values) is a single argument.
|
STACK_EXCHANGE
|
I left the following comment on the Peaceful Science Facebook Page but thought it might get more traction here:
"I don’t think either you (Josh) or Behe are correct about the way you approach this topic.
The issue is that the question seems to always be framed as ‘has ID produced sufficient evidence to demonstrate that certain structures cannot evolve?’ And, the implication is that, if they haven’t (and they have not,) then it is safe to assume that these structures could evolve. The burden of proof, however, is not on ID. Evolution is a theory that claims to offer a solution to biological complexity, and, to adequately do this, it must address even the more difficult aspects of this complexity.
A big problem with Behe is that he always focuses on microbiology, because that is his area of expertise. At the molecular level, however, we’re dealing with limited complexity, with extremely large populations and with very high reproductive rates. So it is conceivable that something like the flagellum might have evolved, in spite of it’s interlocking parts, through sheer brute strength (like a computer cracking a difficult password by trying every possible combination.) The plausibility of this decreases drastically at higher levels of organization where the biological machines mirror the complexity of some of the most complex man-made machines (we were able to send people into space and create computers the size of one’s hand before we could make bionic eyes to restore sight to the blind).
What poses a challenge when it comes to the evolution of higher order biological machines is the nature of the evolutionary mechanism itself. Random mutations, on their own, are not sufficient to explain the complexity of life. It would be similar to someone trying to get somewhere by taking a step forward every time they flip a coin to ‘heads’ and a step back for ‘tails.’ Because of this, natural selection plays a critical role in the evolutionary process. When it comes to biological machines, however, it is not evident that every change will produce a benefit that can be selected, given the nature of how machinery in general works. It is conceivable that some changes would be beneficial, others neutral and still others detrimental, at least temporarily, while improvements to the machinery take place. Changes that are neutral, however, leave nothing for natural selection to act upon, returning things to that state of randomness, while, temporarily detrimental changes would be actually opposed by natural selection. So we would need to have a clear understanding of the evolutionary pathway of such machines to determine if they could evolve in the time available.
So in essence, this question should be looked at as having three possibilities:
There is sufficient evidence that biological machines could NOT have evolved? (probably not)
There is sufficient evidence that they COULD have evolved.
Otherwise, the jury is still out, meaning that there is still room to consider alternative possibilities (most likely the case in my opinion)
Anyone that has tried to argue for #2 that I have seen, has used one of the following arguments:
a. These structures very likely did evolve, because the evolutionary mechanism is the only viable mechanism we have so there is no other way for them to have gotten here (begs the question)
b. Intermediary stages for biological machines can be found in the fossil record (this assumes that intermediaries would not exist apart from evolution)
I personally have never seen anyone make a positive case that these structures could evolve that properly takes into account the level of complexity we’re dealing with, which is why I think possibility 3 above is still most likely the correct one. Does this prove evolution wrong? No. But it does mean that there is room for other options."
|
OPCFW_CODE
|
📝 Core 1 Interaction Week 5 Notes – Putting a website online, Typography Continued, Box Model
📓 Notes: Git and GitHub
- 💡 Note: You will need to review these instructions to turn in your Explainer project. These notes tell you how to get a URL.
1. What is Git?
What is it used for
is the underlying software that we use when we use GitHub. It’s a version control system – this means that it saves your work so you don’t have a myriad of files. When you save your work to GitHub, you’re able to view all previous versions of the files and revert to them if you wish. Super handy when you make a mistake! It’s also helpful when working in teams.
Where did it come from?
Git was created by Linus Torvalds, a Finnish software engineer. He also created Linux kernel which is a low-level operating system that was very influential in modern computing.
2. GitHub and GitHub Desktop Installation
What is GitHub?
is a website that makes it easy to access Git files. It is a for-profit service now owned by Microsoft.
👉 Action Item
- Go to https://desktop.github.com/ and download for macOS or Windows. This is the GitHub desktop app. Expand the zip file and open it up.
- If you already have an account on GitHub.com, you can use it. If you don’t have one, create one now and verify your email address.
It then asks you a bunch of questions about how you plan to use GitHub, none of that really matters, and then select “Free” for the plan you want.
- Once you finish the onboarding and you’re on the GitHub website, select “Create repository.” Every project stored in Git is called a “repo” (short for repository).
- Create a repository name that has the same name as your newschool username with .github.io appended to it. Example: simovicn.github.io (Remember! All lower case, no spaces). Create a public repo. Do not select any of the checks under “Initialize this repository.”
This is what a new repo looks like!
You can ignore all of this and go back to GitHub Desktop. Now that we have an account you can sign in. GitHub will ask you to authorize desktop, say yes.
You can see the repository you created on the GitHub website on the right. Go ahead and click it and then select “Clone.” Cloning means that you’re downloading these folders from online. It’s called cloning because we’re cloning the entire directory and are then able to access its entire history.
Now you have this repo on your computer. You can verify this by following the local path that was provided to you, and you can see where this is on your desktop.
3. Uploading and Storing Code
Now that we have the repository set up, we can begin to upload and store code. This new directory is going to replace the local folder we created previously (nika_cif22). Move all of the folders within nika_cif22 into this new folder. This new folder is what you’ll use for the rest of the semester.
Once you do this, go back and look at GitHub Desktop. You’ll notice that it will notify you of all the changes in your repository.
|
OPCFW_CODE
|
Both drivers do not support the use of ethernet and iSCSI functions simultaneously. Press the Space bar. To work around this, upgrade your yum-rhn-plugin to the latest version (using yum update yum-rhn-plugin) before running yum update. A race condition could occur when creating and destroying virtual network devices. http://popupjammer.com/error-unable/error-unable-to-build-the-unified-memory-kernel-module.html
For information on setting up disk encryption, refer to Chapter 28 of the Red Hat Enterprise Linux Installation Guide at: http://redhat.com/docs/ mac80211 802.11a/b/g WiFi protocol stack (mac80211) The mac80211 stack (formerly marvell commented Feb 25, 2015 I changed the kernel on default in DO panel and it helped to me. This update resolves the issue so that attaching disks with these names to a paravirtualized guest creates the proper /dev device inside the guest. INFO Loading containers: done. read this post here
This driver enables the ATI R500/R600 chipsets. However, these features are included as a customer convenience and to provide the feature with wider exposure. Enabling multiple installed versions of the same kernel module is not supported. Is 'docker -d' running on this host? [Switch_1_RP_0:/usr/bin]$ Please Help, Stuck here from long Nasrulla wolfch commented Jul 29, 2015 I am having this issue on CentOS-7 - can anyone help?
The use of Challenge Handshake Authentication Protocol (CHAP) during installation is not supported. In this update, this issue has been resolved. Alternatively, you can disable the wireless card in the laptop BIOS prior to installation (you can re-enable the wireless card after completing the installation). For more information on IBS refer to the paper: Instruction-Based Sampling: A New Performance Analysis Technique for AMD Family 10h Processors, November 19, 2007 Squid Re-base Squid has been re-based to
Snapshots allow you to backup a frozen copy of the file system, while keeping service downtime to a minimum. Support was added for the Intel Extended Page Table (EPT) feature, improving performance of fully virtualized guests on hardware that supports EPT. iSCSI target capability The iSCSI target capability, delivered as part of the Linux Target (tgt) framework, moves from Technology Preview to full support in Red Hat Enterprise Linux 5.3. http://en.community.dell.com/techcenter/storage/f/4466/t/19654198 The RHEL6 kernel is not the issue here.
My kernel's version is way older!!! just to make nvidia work. The mount command now supports Kerberos authentication when mounting filesystems via Samba. The enabling infrastructure pieces for Stateless Linux were originally introduced in Red Hat Enterprise Linux 5.
Fully virtualized guests created through virt-manager may sometimes prevent the mouse from moving freely throughout the screen. Open Fabrics Enterprise Distribution (OFED) / opensm opensm has been updated to the upstream version 3.2, including a minor change to the opensm library API. Notwithstanding this, all those utilities offer a -r, --resizefs option which allows to resize the file system together with the LV using fsadm(8) (ext2, ext3, ext4, ReiserFS and XFS supported). Note that when installing from this disc, it is advisable to use yum instead of rpm to ensure that base OS dependencies are addressed during installation. 2.Feature Updates Block Device Encryption
In such cases, use the machvec=dig kernel parameter. click site the netxen driver for NetXen network cards has been updated to version 3.4.18. Thanks. This is accomplished primarily by establishing prepared system images which get replicated and managed across a large number of stateless systems, running the operating system in a read-only manner (refer to
Graphical configuration There is no "official" GUI tool for managing LVM volumes, but system-config-lvmAUR covers most of the common operations, and provides simple visualizations of volume state. FreeIPMI FreeIPMI is now included in this update as a Technology Preview. im trying to install nvidia propriatary driver.my nvidia is gforce gt630. news Removable storage devices (such as CDs and DVDs) do not automatically mount when you are logged in as root.
The squid installation process did not set up correct ownership of the /usr/local/squid directory. To prevent this, replace the standard network-script line in /etc/xen/xend-config.sxp with the following line: (network-script network-bridge-bonding netdev=bond0) Doing so will disable the netloop device, which prevents Address Resolution Protocol (ARP) monitoring The LTS trusty kernel image is also available for 12.04. 3.8 is no longer supported by anyone, including Canonical.
The size of the backup file done with dd will be the size of the files residing on the snapshot volume. Move to overlayfs and avoid this entirely. File systems on them still need to be resized, but some (such as ext4) support online resizing. All Architectures4.
If your system uses an Intel 945GM graphics card, do not use the i810 driver. It is recommended that gfs2_fsck be run after the filesystem has been converted to free the unused blocks. For simplicity, leave some free space in the volume group so there is room for expansion. More about the author ia64 ArchitectureA.
EDIT: With identical installation (docker, os, etc) on different machine with the only difference (I believe) is the Kernel it seems to work: uname -a Linux ip-10-0-112-146.ec2.internal 3.10.0-123.8.1.el7.x86_64 #1 SMP Mon the iwlwifi drivers have been updated to version from 2.6.26, adding 802.11n support to iwl4965 wireless devices.
|
OPCFW_CODE
|
Inefficient Tile Ripping
The actual tile ripping process is very slow compared to what it could be. As a proof of concept I've written a faster method, though it doesn't have the same amount of options. The current build of Tilemap2Tileset takes about 19 seconds for the Koholint map and this function averages about 0.7 seconds.
https://user-images.githubusercontent.com/3820082/142560827-b2785722-468d-4095-844a-9946a36e6a6c.mp4
This code is CC0 so feel free to do whatever you want with it, no credit needed. I'm sure it can also be improved even before simply rewriting it as a C++ module. Keep in mind this DOES utilize the hash() function so I believe in theory it could have collisions, but writing a quick test for collisions to double check their actual pixel data would be pretty easy.
func rip_unique_tiles( image:Image, cell_width:int = 8, cell_height:int = 8 ):
var columns:int = int(floor( image.get_width() / cell_width ))
var rows:int = int(floor( image.get_height() / cell_height ))
var unique_cells := {}
var image_data := image.get_data()
var pixels = image.get_width() * image.get_height()
var pixel_data_size:int = image_data.size() / pixels
var cell_column_step := cell_width * pixel_data_size
var cell_row_step := cell_column_step * columns
for y in range( rows ):
for x in range( columns ):
var cell_data = []
var cell_index = x * cell_column_step + y * cell_row_step * cell_height
for i in range( cell_height ):
var c = cell_index + cell_row_step * i
cell_data.append_array( image_data.subarray( c, c + cell_column_step - 1 ) )
var cell_hash = hash( cell_data )
if not unique_cells.has( cell_hash ):
unique_cells[cell_hash] = Vector2(x,y)
var unique_tiles = Image.new()
var minimized_texture_size = nearest_po2( unique_cells.size() )
var minimized_columns = minimized_texture_size / cell_width
var minimized_rows = minimized_texture_size / cell_height
unique_tiles.create( minimized_texture_size, minimized_texture_size, false, image.get_format() )
var unique_index := 0
for entry in unique_cells.values():
var cell_x = ( unique_index ) % minimized_columns
var cell_y = floor( unique_index / minimized_columns )
unique_tiles.blit_rect( image, Rect2( entry.x * cell_width, entry.y * cell_height, cell_width, cell_height ), Vector2( cell_x * cell_width, cell_y * cell_height ) )
unique_index += 1
var texture = ImageTexture.new()
texture.create_from_image( unique_tiles )
texture.flags ^= texture.FLAG_FILTER
unique_tiles_image.texture = texture
return unique_cells
This is the smartets thing I have seen in a while ngl.
Someone already suggested using a hash function, but honestly I am just to dumb to use it properly :D
I will try to do my best implementing this!
Thanks alot!
#4 Implements hashing and handles rotated and flipped tiles.
Based on the results of #4 I went back to check the speed of hashing the image data directly and it's indeed another order of magnitude faster!
https://user-images.githubusercontent.com/3820082/142649648-56932795-8c42-4931-aee9-213c3d784ab4.mp4
For completeness' sake I'll update this with an example of the yet much faster results. The main thing to look out for in #4 is the creation of new Image's constantly. It shouldn't be necessary since you can reuse the previously created Image and just blit over it again.
Unfortunately, the fastest way I could think to support rotations was too involved for a single function, so I've attached a minimal example project. This utilizes a Viewport that rotates the input image and uses that rotated viewport texture to grab a rotated copy of the texture to perform hashes on.
tile-ripper.zip
Like before, feel free to use anything in this project, CC0.
Thanks alot to both of you for helping me out that much!
I implemented the hash function of AsherGlick and changed a few things here and there.
It works like a charm now and I had to change the second timer into a millisecond one :D
I credited both of you in the source code and I will make another reddit post tomorrow where I credit you as well , I hope you are both fine with that.
Thanks so much again!
|
GITHUB_ARCHIVE
|
IOI 2023 will be hosted by Hungary. Visit the official website to learn more.
Coming soon IOI 2023 in Hungary
IOI response to invasion of Ukraine
My greetings to the IOI community.
I am writing today to update you on our ongoing discussions within the International Committee (IC) regarding the war in Ukraine.
In its role as the long-term decision making body for the IOI, the International Committee strongly condemns the invasion of Ukraine by the Russian Federation.
We acknowledge not only the humanitarian crisis and suffering that is resulting from this invasion, but also the adverse impact upon education and exchange of ideas, areas which are a core focus of the IOI. At the same time, we acknowledge that the IOI contestants are mostly teenagers, and cannot be held responsible for the effects of this war.
For these reasons, we have made the decision that, for IOI 2022:
- the delegation from Russia will not be invited to attend on-site, though they may still participate online; and
- the delegation from Russia will need to participate as individuals under the IOI flag, and not under any national name, flag or symbols.
We do not take this decision lightly. It comes through many long hours of discussion, involving many differing viewpoints within IC, and also with the input of members of the IOI community who have sent us both individual thoughts and formal statements, which we appreciate. If you wish to respond to this decision, we encourage you to write to the IC (via either the President or Secretary) to make your voices heard. Please include permission to pass your messages on to the IC, and we will do so.
In addition to this decision, the IOI will be exploring ways in which we can help Ukraine rebuild its IOI programme. We do understand that this is not the first time that the IOI community has been affected, or continues to be affected, by conflict and war, and so we intend to develop this as a broader initiative that can actively support other impacted countries in a similar way.
We understand that there are questions as to whether there will be a similar decision regarding the delegation from Belarus. This we are still discussing.
We will continue to monitor and discuss the situation as it evolves. The committee is meeting regularly throughout the year, and we fully expect to discuss these topics further when the General Assembly reconvenes during IOI 2022 in August.
Prof. Benjamin Burton
President of IOI
12 March, 2022
Update, 26 April 2022: The delegation from Belarus will be subject to the same restrictions as described above for Russia.
Update, 14 August 2022: The General Assembly have voted that the delegations from Russia and Belarus will continue to participate as individuals under the IOI flag, and not under any national names, flags or symbols. This restriction will hold until further notice. Since future IOIs hope to return to a purely on-site format, these individuals will be on-site with the other competitors from IOI 2023 onwards.
Clarification: IOI 2022 was a hybrid on-site/online event. All on-site and online competitors were official (i.e., they appear in the official rank list, and were eligible for medals).
|
OPCFW_CODE
|
It is here! Qt 5.0 Beta has been Released.
*Click here to view the Russian translation.
I have great news for you. The Qt Project just released Qt 5.0 beta. Qt 5.0 beta offers you a good sneak preview of what will be available in the next major version of Qt. The release is available for all Qt users and we encourage you to take it out for a spin. In this post, I will go through our thoughts about Qt 5 as well as what this beta release includes. You may also want to check the blog post by Lars on the same topic here.
What is Qt 5 made of?
As a major new version, Qt 5.0 is a significant release. It is intended to provide the means for Qt to stay on the forefront during years to come. It is not a rewrite – Qt 5 contains almost everything from Qt 4, most of the former Qt Mobility modules, some items from the Qt Labs, as well as some new things. With the modularization in place it is easier than ever before to use only the specific parts you need – if you are tight for space.
Qt 5.0 is the first major release in seven years and a lot of effort has been put into it. Many items are available for the first time in the Qt 5.0 and can be leveraged in products as soon as we have the final 5.0.0 release out. Based on the 5.0 we expect to get feedback from the users developing on it, and leverage this to set the direction for our future development activities. Going forward there will be continuous improvements and new items available in the Qt 5.x minor releases.
Considering all the new functionality in Qt 5, I think one of the most impressive things about it, is its compatibility with Qt 4. Yes, we will continue to develop Qt 4.8 making new patch releases and further continue to support the Qt 4 version for some time to come,. but in the long run, it is important to know that migrating to Qt 5 is easy. Let’s get back to this after we have a look into what the 5.0 beta has to offer.
Qt 5.0 Beta – What’s in it?
Main new features in Qt 5.0 beta compared to 4.8 are:
- Graphics features & performance, easy to develop and deploy – OpenGL, integrated 3D support, particles & shader effects
- Cross-platform with Qt Platform Abstraction in full use
- More modular structure than in Qt with new features or major improvements in almost every module
We want Qt Commercial 5 to support the platforms that are important to many of our customers. We have asked about your plans regarding different platforms in our customer survey and will actively work on providing the functionality provided through the input.
For the Qt Commercial 5.0 beta the following platforms are already working quite nicely:
- Mac OS X
- Embedded Linux
- Windows Embedded
This is most likely also the set that we have available for the 5.0.0 final release, possibly with some additions. We are working with our RTOS partners to enable support for these in Qt 5. We are also working towards providing full support for Android and iOS platform in Qt 5. Going forward, more platforms will be supported based on our customer feedback and validation.
Migrating to Qt 5
Qt 5 includes the best from Qt 4 plus additional new features. It means that Qt 5 is also highly compatible with Qt 4, which is a great thing. It is possible for developers of Qt 4 applications to seamlessly move on to Qt 5 with their current functionality, and when the time is right for the individual applications, gradually develop new things leveraging all the great items Qt 5 makes possible.
Due to the changes in module structure, your project configuration needs to be slightly modified to support Qt 5. It is possible to create the code in such way that it builds nicely for both Qt 4 and Qt 5. There is also a helpful script fixqt4headers.pl provided in qtbase/bin. It automates the change needed in #include<> directives to take the module names into account.
After you have successfully migrated your existing project to Qt 5, it will be possible to gradually leverage the new things it offers. One of the main items in this list is naturally Qt Quick 2 and the new, natively hardware-accelerated drawing pipeline. If your application benefits from a dynamic and interactive UI, it will be much simpler to create it with Qt Quick than with C++ and widgets. As it has been said already, widgets are fully supported in Qt 5 and so is C++. Qt Quick, is a great way to make interactive dynamic user interfaces required by 21st century applications. The application logic can be developed in C++, leveraging full Qt capabilities just as before.
New Visual Studio Add-In available
We have also created a new version of the Visual Studio Add-in for Qt 5. It is also still in beta, but can already be used to try out Qt 5 with Visual Studio. It is created based on the existing Visual Studio Add-in, modified to support new Qt5 module structure. Qt5 Essential and Add-on modules are now listed in the Qt Project Settings and in Project Wizards.Users can set the modules on and off as before. There is some logic added to disable Add-on modules, if not present in the system. Creating new Qt 4 based projects is not supported – at least in the beta version. Existing Qt4 projects can be compiled and linked. Beta release supports Visual Studio 2008 and 2010. Visual Studio 2005 is no longer supported.. We will look into supporting 2012 in the future releases of the VS Add-in.
For the beta we have included all available modules into one bundle available as a standalone installer from the Qt Commercial Customer Portal. For the final release, we will enable selecting the needed modules based on the modularized structure, as well as installation though the SDK.
We plan to enable the use of Qt Commercial 4.8 and 5.0 with the same SDK, which we hope will be the most convenient way for the SDK users. It will also bring additional benefits such as aligning additional components. One example of this is the improvements we are making to our embedded Linux tool-chain, which will benefit both 4.8 and 5.0 users, see here for information. After beta, the intention is to provide all Qt 5 releases through our online SDK, which makes it easier to manage and keep them up-to-date.
We will run a small survey for all those who have downloaded the Qt Commercial 5.0 beta. So go ahead, give it a spin – and let us know what you think!
Get Qt 5.0 Beta
If you are a Qt Commercial customer, you can download the 5.0 Beta as well as the new Visual Studio Add-In from the Qt Commercial Customer Portal. If you are not yet a Qt Commercial Customer, please download the 30-day free trial from our download area.
If you want to download the LGPL version, please visit Qt Project.
Subscribe to our newsletter
Try Qt 6.1 Now!
Download the latest release here: www.qt.io/download.
Qt 6 was created to be the productivity platform for the future, with next-gen 2D & 3D UX and limitless scalability.
Explore Qt World
Check our Qt demos and case studies in the virtual Qt World
Check out all our open positions here and follow us on Instagram to see what it's like to be #QtPeople.
Näytä tämä julkaisu Instagramissa.
Want to build something for tomorrow, join #QtPeople today! We have loads of cool jobs you don’t want to miss! http://qt.io/careers #builtwithQt #software #developers #coding #framework #tool #tooling #C++ #QML #engineers #sales #tech #technology #UI #UX #CX #Qt #Qtdev #global #openpositions #careers #job
Henkilön Qt (@theqtcompany) jakama julkaisu
|
OPCFW_CODE
|
More about HKUST
On Boosting Spatial Data Management with Efficient Indexing Structures
Speaker: Dr. Victor WEI Department of Computing The Hong Kong Polytechnic University Title: "On Boosting Spatial Data Management with Efficient Indexing Structures" Date: Friday, 7 January 2022 Time: 10:00am - 11:00am (Hong Kong Local Time) Zoom link: https://hkust.zoom.us/j/928308079?pwd=b29SMXI1bHNWV1UrdjQ3UWlmUUNSdz09 Meeting ID: 928 308 079 Passcode: 20220107 Abstract: Due to the advance of geo-positioning technologies, spatial data including spatial trajectories, 3D terrain data and spatial networks, etc. becomes more and more popular and draws a lot of attention from both academia and industry. As such, the query processing in spatial data management becomes more and more important and finds wide applications in urban computing, smart cities, autonomous driving, etc. One of the fundamental queries in spatial data management is the shortest distance and shortest path query. In this talk, I will present two research works on this fundamental query which boosts the query processing with efficient indexing structures. The first work tackles an emerging type of spatial data, namely 3D terrain surface. It proposes an indexing structure, namely a Distance Oracle, to efficiently index the pairwise distances among a set of points-of-interest on the terrain surface. The oracle answers the distance query by using a small set of pre-computed distances and utilizes a tree structure to boost the search on the pre-computed distances. Besides, it guarantees the error is bounded by a user-specified error parameter. The second one studies the dynamic road networks where the traffic information changes over time. It proposes an efficient indexing structure for the shortest distance and path queries on dynamic road networks. The indexing structure is based on auxiliary edges introduced to the network, namely shortcuts, which bridge distant vertices on the network to accelerate the query processing. In this work, we propose using a randomized algorithm to generate the shortcuts. As such, we obtain that it has small space consumption and query time and it could be updated efficiently in the perspective of both theory and practice. **************** Biography: Victor Junqiu Wei obtained his PhD degree from the Department of Computer Science and Engineering, the Hong Kong University of Science and Technology in 2018. From March 2018 to May 2018, he was a visiting student of Prof. Hanan Samet and Prof. David Mount at University of Maryland in the US. He obtained his bachelor degree from Nanjing University in 2013. He is currently working as a research assistant professor in the Hong Kong Polytechnic University (jointly Appointed by Department of Computing and Research Institute for Artificial Intelligence). Prior to this, he was an AI Researcher in Huawei Noah's Ark Lab. His research interests span spatial data management, graph data analytics, random sampling and deep neural networks. He received many honors and awards including the nomination of the Best Paper Award of SIGMOD 2020, Potentially High-Value Patent Award of Huawei, Future Star Award of Huawei, etc.
|
OPCFW_CODE
|
feat: Spin out/off branch
This PR adds the Spin out/off branch commands.
Spin off will; create a new branch based on the current branch, reset the current branch to its remote if it exists, and checkout the new branch. Spin out is almost the same, except that it does not checkout the new branch.
I haven't included the FROM functionality mentioned in the Magit docs, which allows resetting to a source different from the current remote. Let me know if this should be included.
Resolves #665
Hope you don't mind - I saw some API's that would be useful to you here, so I implemented them: https://github.com/NeogitOrg/neogit/pull/677
You'll need to rebase, but I think that'll make your life a little easier ;)
Thanks for the help @CKolkey @treatybreaker!
I've made a few more updates, I believe the implementation is more inline with what Magit has now.
I'll change back to draft for now as there's still the following to do:
Choose which branch to reset to, i.e. FROM
Update automated tests
Manual testing
Please comment if I'm not on the right track! 🙏🏼
Thanks for the help @CKolkey @treatybreaker!
I've made a few more updates, I've left a couple comments unresolved to ensure my reasoning is correct. I believe the implementation is more inline with what Magit has now.
I'll change back to draft for now as there's still the following to do:
Choose which branch to reset to, i.e. FROM
Manual testing - haven't done so for the latest changes
Add/update automated tests
Please comment if I'm not on the right track! 🙏🏼
It's looking great! Reading emacs lisp is... not very straightforward. Something I've found is useful is to ask an LLM to translate the emacs-lisp to lua - it won't be valid, but it does a good job to sketch out the logic/control flow.
For example, here's magit--branch-spinoff in lua:
local function magit_branch_spinoff(branch, from, checkout)
if magit_branch_p(branch) then
error(string.format("Cannot spin off %s. It already exists", branch))
end
if not checkout and magit_anything_modified_p() then
print("Staying on HEAD due to uncommitted changes")
checkout = true
end
local current = magit_get_current_branch()
if current then
local tracked = magit_get_upstream_branch(current)
local base
if from then
if not magit_rev_ancestor_p(from, current) then
error(string.format("Cannot spin off %s. %s is not reachable from %s", branch, from, current))
end
if tracked and magit_rev_ancestor_p(from, tracked) then
error(string.format("Cannot spin off %s. %s is ancestor of upstream %s", branch, from, tracked))
end
end
if checkout then
magit_call_git("checkout", "-b", branch, current)
else
magit_call_git("branch", branch, current)
end
local indirect_upstream_branch = magit_get_indirect_upstream_branch(current)
if indirect_upstream_branch then
magit_call_git("branch", "--set-upstream-to", indirect_upstream_branch, branch)
end
if tracked then
base = from and from .. "^" or magit_git_string("merge-base", current, tracked)
if not magit_rev_eq(base, current) then
if checkout then
magit_call_git("update-ref", "-m", string.format("reset: moving to %s", base), "refs/heads/" .. current, base)
else
magit_call_git("reset", "--hard", base)
end
end
end
else
if checkout then
magit_call_git("checkout", "-b", branch)
else
magit_call_git("branch", branch)
end
end
magit_refresh()
end
It obviously doesn't work as-is, but it's easier (for me, at least) to read :)
In terms of what you have left, I think you can skip the FROM argument for now and just presume the FROM is your current branch. Some of the magit functions can be used in a library-like way (not from the UI), but thats not an express goal of ours, as this isn't public api.
All said, its looking really great :D
Some of the magit functions can be used in a library-like way (not from the UI), but thats not an express goal of ours, as this isn't public api.
Ah, that makes more sense now.
Given that we can ignore FROM I believe it's just the tests remaining. I should hopefully have those finished in the next couple days :).
I think https://github.com/NeogitOrg/neogit/pull/693 should fix the config issue with the spec. Rebase/merge master to grab it
Fantastic work :D Thanks for your contribution
Thank you for all the help @CKolkey @ten3roberts @treatybreaker!!
|
GITHUB_ARCHIVE
|
This is a brief writeup on creating and publishing Nim packages. This instructions assume that you are using a fairly up-to-date Linux system and have a copy of the Nim distribution installed as well as the Tup utility. This article assumes some familiarity with the package archives used by Sculpt.
I will not cover writing Nim components or using Genode-specific Nim libraries here. For that I recommend reading the Nim documentation first and then looking at some of the Nim repositories I have published on GitHub, imap_report being a simple example of a native component.
The Genode toolchain is required to be present on the system, see https://genode.org/download/tool-chain for instructions.
A copy of my SDK is required, which must be present under /opt/genode. The SDK can be downloaded from https://github.com/ehmry/genode/releases and can be extracted as follows:
wget https://github.com/ehmry/genode/releases/download/sdk-19.02-r1/sdk.tar.xz tar xPf sdk.tar.xz
It is strongly recommended to use the Nimble tooling for building Genode components, and required for following the rest of this guide. See https://github.com/nim-lang/nimble for Nimble instructions. The only Genode quirk is that the genode package should be added as a dependency to your *.nimble file. This modules is registered in the global index and will be automatically fetched by Nimble at build time.
# example.nimble requires "nim >= 0.19", "genode >= 19.02"
Compiling a Nim application for Genode requires the genode module to be imported somewhere within the application. This is simply to inform the Nim compiler of where the Genode toolchain is and configures the proper option flags. Nimble will resolve the module path automatically when the dependency is declared as explained before.
# example.nim import genode echo("hello world!")
Now the Nimble tools must be combined with a higher-level build-system. The standard recursive Make scripts are more or less incompatible, so for Nim we must roll our own. I use a git super-repo with a Tup build system and add my Nimble projects as submodules. I have a skeleton of the repo I use at https://github.com/ehmry/genode-tup-super, which handles building Nimble projects and produces signed packages. The following assumes that the skeleton is being used at the d834e652d81ea54bef575576f2d5706f41c8db6d commit.
First, add your project into the super-repo under the nim directory and add a file named Tupfile into the root of your project directory with the following contents:
TARGET_NAME = your_project PKG_DEPENDS += \ @(PUBLIC_SRC_LIBC) \ @(PUBLIC_SRC_VFS) \ _/src/$(TARGET_NAME) \ include_rules include $(NIMBLE_BINARIES_INCLUDE) include $(NIMBLE_PACKAGE_INCLUDE)
This examples assumes a runtime file is present in your project directory that will be included into a launchable package, if this is not the case, remove the last line.
To describe each directive,
The TARGET_NAME variable is the name used by for the bin, raw, and pkg archives, unless BIN_NAME, RAW_NAME, or PKG_NAME variables are defined.
The PKG_DEPENDS variable contains the versioned archive that will be added to the archives file within the output package. An archive path with the _ character in the place of the depot user will be replaced with a path referring to the current version of that package in the local repo, if present. A variable reference in the form of @(...) refers to a configuration define, which will be explained later.
include_rules will include the Tuprules.tup files found in each parent directory from the root down, placing this Tupfile in the nim directory implies that nim/Tuprules.tup will be included.
The NIMBLE_BINARIES_INCLUDE variable is defined in ../Tuprules.tup and points to a file containing rules for building Nimble projects.
The NIMBLE_PACKAGE_INCLUDE variables is the equivalent of NIMBLE_BINARIES_INCLUDE for creating pkg archives.
To build the project invoke the tup utility within the super-repo root or a subdirectory. The skeleton is configured to a build x86_64 variant (the only variant tested so far) at the build-x86_64 directory. This directory contains important configuration at tup.config.
Variables in this file that begin with CONFIG_ are available anywhere within this repo as @(...), so CONFIG_FOO=bar will be available as @(FOO). The skeleton tup.config file contains some pegged packages versions that were referenced in the project Tupfile as described earlier. This file contains other required variables, which should be self-explanatory.
The current build of you package should be available under build-x86_64/depot/bin/... and build-x86_64/depot/pkg/.... These directories can be copied into the local depot directory of a Sculpt instance and should run if the dependency specification is complete.
Tup will also create build-x86_64/Makefile. This makefile will create signed tarballs from the depot directory into a public directory, and can be copied to a file server for distribution. Extending the build-system with Make is necessary because Tup cannot create files with names produced by evaluation of rules, all outputs must be explicit.
Its a work in progress and probably needs a bit of massaging at first, but so far it has been the most effective way to build and deploy Nim.
|
OPCFW_CODE
|
How do you restore from a File History backup to a new computer?
My computer died, but I have a File History backup from it. How do I transfer the data from there to my new computer? I can't just copy it because all the file names are appended with the date of the latest version.
When you setup File History again in the new installation and select the drive that had your File History on from the previous installation, it will be recognized and you will have access to them again as usual through File History.
There is a nice tutorial here:
PCMag.com – How to Create a Windows 8 Backup Using File History
which says, in part:
Restoring Versions
Most people will simply want to get their missing files and versions back.
To do this, you simply open the File History
dialog (you can do so by just typing File History at the Start screen)
and choose "Restore personal files." This will display all the covered
folders—Contacts, Documents, and so on. You can restore whole folders
or individual files if you drill down into the folders. The big green
circular arrow will restore them to their original location, but you
can also choose "Restore to" from a right-click menu or from the
Settings gear to specify a target folder for the restored files.
Since you are unlikely to have exactly the same locations on the new system, you may have to use the "Restore To" to tell it where to put the files.
I suggest visiting that page, or a similar one, for the full walk-through.
Another walk-through is Windows support – File History in Windows, which uses swipes and taps but explains how to locate the file(s) you want to restore and points out the right-click functionality of the Restore button.
If you want to adopt your old file history onto your new PC, it's very easy (at least on Windows 10). You don't need to edit the registry at all, so you can just follow steps 5-7 of moderatemisbehaviour's answer:
Just plugin your external drive, or copy your FileHistory folder onto your new PC. Then go to Control Panel -> System and Security -> File History and choose Select Drive. Choose the drive that has your old data on it, and you'll see a checkbox appear entitled `I want to use a previous back-up on this File History drive'. Select it, and you'll be able to adopt the existing backup.
Using this method, you don't just retain the most recent version of the file, but you retain all previous versions, too.
(at)Mark Barnes My attempts to restore from FileHistory backup according to the prescription presented here have failed. May I kindly ask for your help? Could you have a look at my question at https://superuser.com/questions/1192611/restore-from-a-windows-10-file-history-backup-to-a-new-computer and give me a hint how to proceed.? Thanks in advance.
I had a similar problem after clean installing Windows 10 and trying to restore form a File History backup originally created on Windows 8.
I followed these steps posted by the user K Rock on Microsoft Community forums:
Turn on file history to the same drive. Let it backup up files, then stop it and turn it off.
Browse the drive through windows explorer and you will find a path:
..\FileHistory\{username}\{ComputerName_Old}\
Nested in that folder is a ‘Configuration’ folder - copy and paste there to save a backup
..\FileHistory\{username}\{ComputerName_New}\
Nested in that folder is a ‘Configuration’ folder - copy and paste there to save a backup
In these configuration folders there should be 2 xml files (maybe 1 or maybe more).
Config1.xml
Config2.xml
Need to copy values from the configuration files in the Old to the New. Make sure that you copy from old_1 to new_1 and old_2 to new_2.
PCName
UserID
Change the PC Name in the Configuration path for:
TargetConfigPath1
TargetConfigPath2
TargetCatalogPath1
TargetCatalogPath2
After following K Rock's steps I navigated to Control Panel > System and Security > File History > Select Drive.
I selected the external drive with my backup and below a new box appeared with the heading 'Select an existing back up:'. I selected the existing backup and clicked 'OK'.
I could then navigate to Control Panel > System and Security > File History > Restore personal files and see my old files.
Note I used Control Panel, not the new Windows 8/10 style 'File History settings' that you'll see if you simply try to search 'File History' in the start menu.
It's also worth mentioning that I tried K Rock's steps before discovering the old school Control Panel version of File History with the extra options. So I can't confirm if K Rock's steps are actually necessary, try without first.
Very good advice.
|
STACK_EXCHANGE
|
For the beginner the number of input parameters requested during install can be confusing but don't panic, you are usually fine with defaults and mandatory parameters, which will be highlighted when kept blank. All parameters can also be changed later via the Kopano4S-Admin GUI.
Special attention is on the mail-server section: when running in full server mode you have to provide a mail server name of your own domain with existing dns and mx record. Otherwise sending and receiving mails will likely fail. Equally most of you using dynamic IP address would set the mail relay of your provider. For fetchmail only in mailstore-foreword mode the mail-server name is less important (can be blank or localhost) but the mail relay is mandatory. For details on both modes running Kopano4S, Postfix vs. Fetchmail see info boxes below.
Step-by-Step Advise Install (see screenshots)
Acknowledge the license condiditions for Kopano4S SPK and Kopano
Basic Settings: provide MySql root password, db name, share location, select edition and attachment plus build mode
root password to MySQl / MariaDB10 (not Unix) ensures database creation and will not be stored. For details on Kopano editions see FAQ. Mail Attachments by default are stored on file system, however if you are importing from legacy Zarafa you might need to de-select this as there arrachments had been indb.If you prefer to build Docker Image on your own select this as power-user option as you get the latest build.
Ports and Services: ensure ports used are unique to other packages abd decide on optional services
Each Port of Kopano services need to be exposed from Docker container; here you select the ports that must not conflict with other services on your Synology. Http prefix 9000 results in 9080 and 9443, ICAL 8000 results in 8080, 8443 and finally is the webapp port 8090 (https will be via reverse proxy). Then you can select Kopano services that are optional to be enabled.
Mail-Server Settings: set your mail domain, mail server name, other options incl. TLS / SSL settings
It is sufficient to provide 1 domain, but in case of more add them coma separated. To use TLS in Kopano you need to sync your Synology default certificate into the container (use subdomain like mail.mydomain.com as alternate name). On webservices level you can activate webapp and z-push as reverse proxy virtual directory so you can use Kopano via port 443 instead of 9443. Rverse proxy is optional as it cannot be guranteed it working with any configuration though usually it works fine; as alternative you can configure Synology reverse proxy.
Mail-Options (scanning etc.): add postmaster alias, mail relay and select scanning services
It is important to map postmaster to existing Kopano account via alias to ensure receiving mails for default system anoucements. Set mail relay in case you have dynamic IP to avoid being tagged as spam. Select Spam scanning and rejection. Important note when enabling helo restrictions with mx sender domain validation this will also apply for your server, so if your sending mail server host is unknown (no dns, mx) you will reject yourself sending mails.
Language Settings: set other language for any new created mailbox
Select the language for your mailboxes. Note this only applies for new created ones.
MySql Tuning etc.: prepare for MySql replication and do memory tuningSet a unique MySql Id (101) to be able setting up replication using kopano4s-replication. Select tuning option for Kopano allocating buffers to MySql and Kopano Server. With more than 1GB a good tuning option is 40%, while usually MySql does not utilize the full buffer, so it is 40% utilisation maximum, usually 30%.
Step-by-Step Advise Un-Install (see screenshots)
Select what you want to keep for re-use (clean-up = de-select all)
When planning to re-install you might keep the Kopano demon users on Synology level (keep UID) or keep the Kopano share to preserve backups. Only keep the database when re-installing the same version. For switching from Community to Supported Edition you have to drop the database and restore from backup (see FAQ)
Install Dialog Screenshots
Un-Install Dialog Screenshots
Kopano with Postfix SMTP Demon and own domain
In this setup you are hosting your own domain (e.g. email@example.com) on Synology. Postfix SmtpD has to be configured to send and receive emails via port 25 exposed to the Internet. Kopano sends emails to postfix and receives emails from postfix via LMTP Delivery Agent. In case you are using a dynamic IP address you need to setup a relay host to avoid your mails being rejected as SPAM for being an unknown host.
Kopano with Postfix and Fetchmail for IMAP / POP3 mailboxes
In this setup you either have an email address from a provider (e.g. firstname.lastname@example.org) or host your mailbox of your domain externally. Fetchmail receives your email via Imap/Pop3 and delivers it to Kopano via Dagent cmd-line tool. Kopano sends emails to postfix which are rooted via relay host to your providers SMTP server. There is no need to expose port 25 and Postfix needs less strict settings (recipient_restrictions).
Back to Kopano4S Home
|
OPCFW_CODE
|
NSImageRep confusion
I have an NSImage that came from a PDF, so it has one representation, of type NSPDFImageRep. I do an image setDataRetained:YES; to make sure that it remains a NSPDFImageRep. Later, I want to change the page, so I get the rep, and set the current page. This is fine.
The problem is that when I draw the image, only the 1st page comes out.
My impression is that when I draw an NSImage, it picks a representation, and draws that representation. Now, the image only has one rep, so that's the one that is being drawn, and that's the PDFrep. So, why when I draw the image, is it not drawing the correct page?
HOWEVER, when I draw the representation itself, I get the correct page.
What am I missing?
NSImage does a caching of the NSImageRep, when first displayed. In the case of NSPDFImageRep, the "setCacheMode:" message has no effect. Thus, the page that will be displayed will always be the first page. See this guide for more information.
You have then two solutions:
Drawing the representation directly.
Call the "recache" message on the NSImage to force the rasterization of the selected page.
An alternative mechanism to draw a PDF is to use the CGPDF* functions. To do this, use CGPDFDocumentCreateWithURL to create a CGPDFDocumentRef object. Then, use CGPDFDocumentGetPage to get a CGPDFPageRef object. You can then use CGContextDrawPDFPage to draw the page into your graphics context.
You may have to apply a transform to ensure that the document ends up sized like you want. Use a CGAffineTransform and CGContextConcatCTM to do this.
Here is some sample code pulled out of one of my projects:
// use your own constants here
NSString *path = @"/path/to/my.pdf";
NSUInteger pageNumber = 14;
CGSize size = [self frame].size;
// if we're drawing into an NSView, then we need to get the current graphics context
CGContextRef context = (CGContextRef)([[NSGraphicsContext currentContext] graphicsPort]);
CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (CFStringRef)path, kCFURLPOSIXPathStyle, NO);
CGPDFDocumentRef document = CGPDFDocumentCreateWithURL(url);
CGPDFPageRef page = CGPDFDocumentGetPage(document, pageNumber);
// in my case, I wanted the PDF page to fill in the view
// so we apply a scaling transform to fir the page into the view
double ratio = size.width / CGPDFPageGetBoxRect(page, kCGPDFTrimBox).size.width;
CGAffineTransform transform = CGAffineTransformMakeScale(ratio, ratio);
CGContextConcatCTM(context, transform);
// now we draw the PDF into the context
CGContextDrawPDFPage(context, page);
// don't forget memory management!
CGPDFDocumentRelease(document);
|
STACK_EXCHANGE
|
Dart Analysis in IDE does not match dart analyze output
I first noticed this on one of my PRs where the CI failed with a lint that I did not see locally in my IDE: https://github.com/flutter/devtools/actions/runs/9087525219/job/24975451208?pr=7755. This reproduces in both Android Studio / IntelliJ and VS Code.
Flutter 3.22.0-34.0.pre • channel [user-branch] • unknown source
Framework • revision 4a1e3eaaa2 (19 hours ago) • 2024-05-14 15:58:18 -0500
Engine • revision ae9ff69a08
Tools • Dart 3.5.0 (build 3.5.0-153.0.dev) • DevTools 2.36.0-dev.5
Is use_super_parameters enabled for this code? I'm asking in order to determine whether the lint ought to be produced in the IDE but isn't, or whether it ought to not be produced on the bots.
Yes it is. We use package:flutter_lints, which is built on top of package:lints, which includes use_super_parameters: https://github.com/dart-lang/lints/blob/main/lib/recommended.yaml#L67
@kenzieschmoll are you able to get back to a state where this doesn't show up? I tried checking out latest code and reverting the change in https://github.com/flutter/devtools/pull/7755/commits/25f01337c7bb481cc23dd094fd633af41fbf02dc but it did show up for me:
If you can repro and provide a git hash (or git hash I can apply that change to) and exact SDK version I can debug.
Ignore that - I hadn't synced the Flutter SDK to the same version - it does repro now I'm on the same version.
Ok, so user_super_parameters was only added to flutter_lints in 3.0.0:
https://pub.dev/packages/flutter_lints/changelog#300
devtools_app is using v3.x:
https://github.com/flutter/devtools/blob/0547de50860c9913a76d30bfad6526938baee7b2/packages/devtools_app/pubspec.yaml#L76
But it's importing analysis_options from the parent folder which has its own pubspec that is using v2.x:
https://github.com/flutter/devtools/blob/0547de50860c9913a76d30bfad6526938baee7b2/packages/pubspec.yaml#L9
So my guess is that something changed recently (maybe related to the context changes?) that is causing v2 of flutter_lints to be used now for devtools_app where before, v3 was being used?
A quick fix for DevTools is to update flutter_lints in the parent pubspec.yaml alongside the analysis_options.yaml that's importing that file (it's probably an accident that it has an older version), but I think there might still be some bug here @bwilkerson?
I'm not sure I'm understanding the situation well enough to comment, but in case it's useful information, the expected behavior is for the resolution of URIs on the include line to always be based on the package configuration of the package in which the initial analysis options file is found, no matter where the analysis options file is found.
In other words, I would expect the resolution of flutter_lints to differ in the IDE and on the command line only if the package configuration being used differed in those two cases.
On the CI build here, analyze was run directly inside the devtools_app sub-folder:
/home/runner/work/devtools/devtools/packages/devtools_app > /home/runner/work/devtools/devtools/tool/flutter-sdk/bin/dart analyze --fatal-infos
Analyzing devtools_app...
info - lib/src/shared/common_widgets.dart:1520:3 - Convert 'context' to a super parameter. - use_super_parameters
This (correctly IMO) flagged the issue.
However when I open the parent folder in VS Code, it seems like it might be using the version specified in the outer folder (at least, that's me theory for why it doesn't show up, since changing the version seemed to fix it). I'll make a small repro.
Here's a repro:
https://github.com/DanTup/repro-dart-sdk-55732
If I open the repository root, there is a nested folder "foo_package" which seems like it should trigger the lint (because it references v3 of flutter_lints), however it seems to be using flutter_lints v2 which is what is defined in the parent folders pubspec.yaml.
I suspect it's because of the import of anlaysis_options going through the parent folder. I tried removing that and it seemed to still be broken, however after restart VS Code the lint triggered again, so I think going through the outer analysis_options.yaml is relevant to the issue.
Just to clarify, the analyzer doesn't look for pubspec.yaml files, it looks for package_config.json files when deciding how to resolve URIs. I can't tell from the checked in example where pub had been run.
Ok, thanks. Then what I would expect, assuming that there weren't any flags on the command line that override the normal lookup, is that everything in foo_package would use the lints enabled in flutter_lints-3.0.2 when analyzed in either the IDE or from the command-line.
Given that that's not what we're seeing, it does look like a bug.
I think https://github.com/dart-lang/sdk/issues/56047 might be the same issue as this, but in that case there is no parent folder, so instead of getting the wrong version of the lint, it's not getting the lint at all.
I tested my repro from above with an SDK including the fix for https://github.com/dart-lang/sdk/issues/56047 (https://github.com/dart-lang/sdk/commit/b513402ee7d3424f585f2d92ccba8c89664908d9) and confirmed that this was the same issue and it's now fixed.
|
GITHUB_ARCHIVE
|
The mainboard is the basis of any small form-factor PC. The manufacturers have to develop special mainboards, since those of other form-factors do not have the necessary dimensions and functions. So, the mainboard our “cube” is based on has to combine small size with huge functionality similar to “grown-up” Full ATX mainboards. Designing a mainboard for a barebone system is a truly hard task for the developing team. Let’s see how well the AOpen engineers coped with this task.
The AOpen XCcube EZ65 PC uses an AOpen UX4SG-1394 mainboard:
You won’t find a description of this mainboard at the AOpen website, since it was developed especially for the XCcube. The board features the Intel 865G chipset, which mostly determines its functionality scope.
This mainboard supports Socket478 processors with 400/533/800MHz FSB, with and without Hyper-Threading. You should note that the CPU voltage regulator circuit is not adapted for older Willamette-based CPUs and you can only use processors on Northwood core in the AOpen XCcube EZ65. However, AOpen didn’t hesitate to announce processors with a frequency of up to 3.2GHz. I was all doubtful about a processor with such high heat dissipation and power consumption to function properly in a cube-like system with a 220W PSU. My doubts flew away as soon as I saw the AOpen XCcube EZ65 working with a Pentium 4 3.2GHz without any stability-related issues.
It’s all standard with the supported memory. You can slip DDR400/333/266 SDRAM modules into the two DIMM slots available. If the modules are identical, the mainboard works in the dual-channel mode. We should give credit to the engineering team from AOpen who made sure that the AOpen UX4SG-1394 works not only stably, but also fast enough. I was much pleased to see something similar to PAT (Performance Acceleration Technology) in the BIOS Setup. In AOpen’s terms, it is called Performance Boost Engine.
The upgrade and expansion capabilities of the mainboard are naturally limited by its small size. It has only one AGP 8x slot and one PCI slot. On the other hand, with all those controllers available in the AOpen UX4SG-1394, you will hardly need anything else. The i865G chipset used in the mainboard has an integrated graphics core aka Intel Extreme Graphics 2. Although this core doesn’t allow playing modern 3D games because of its limited capabilities and speed, it is strong enough for working in office applications, playing videos and running previous generation games. If you want something more, just upgrade the barebone with an add-on graphics card. The AGP 8x slot is located at the edge of the PCB, so you can only install cards that don’t require extra space for their cooling systems. This limitation doesn’t seem to be much important, though.
|
OPCFW_CODE
|
It should be a simple question: “What’s going on in crypto?” Try to come up with some sort of layman-level answer, however, and it quickly becomes clear that there is far too much terminology in the digital asset space, and that most of it follows no real rhyme or reason. The problem with this is that it causes general confusion when it comes to reporting, analysis, and portfolio management — and acts as a barrier for new enthusiasts to understand the space.
Right now, there is no generally accepted standard for classification of different digital assets. Attempts have been made, but given the rapid pace of technological and regulatory change with respect to this new class of asset class, nothing has stuck.
The goal of this post is to establish a framework for classifying digital asset (a term encompassing every asset class within the cryptocurrency space). It will define specific asset classes at a detailed level, allowing individuals to compare and gain a better understanding of the entire space.
Digital Asset Categorization Framework
The universe of digital assets can be split into the hierarchy below (credit to Pavel Pankratov for a similar hierarchy):
As you can see from the hierarchy, digital assets can be split into four broad groups:
1. Cryptocurrencies (Digital Currencies)
2. Blockchain infrastructure
4. Utility platforms
Within the four groups, various categories exist. For instance Cryptocurrencies can be classified as Store of Value, Settlement, Stable, Private and Fiat. Beyond the scope of this post, many of these categories can be further split into sub-categories.
Digital Asset Legal Classification
Digital assets have different regulations, based on jurisdiction. The table below shows how the United States views legal classification, as it pertains to digital assets.
The most pressing issue right now, and more pressing for digital assets than ever before, is being able to reliably predict whether or not a particular digital asset will be classified as a security; each legal classification has its own specific set of legal requirements that can dramatically change the required structure of an organization.
Individuals developing a digital asset today are taking extra precautions to ensure the intended purpose of their project fits within a legal framework. They should also take the time to classify their project within a specific digital asset class so analysts and investors can better research group, category, and sub-category specific trends, competitors and more.
There’s many different schools of thought on classifying digital assets, but the above framework provides the beginning of a coherent system. After all, it would be much easier to answer questions about the crypto space if we could phrase them properly.
A strong understanding within the user-base is of paramount importance to the digital asset space, since the question “What’s going on in crypto?” may never be a simple enough question to receive a simple answer. But the question, “What’s the latest 5-day trend for Store of Value Cryptocurrencies?” absolutely could.
Thanks for reading.
|
OPCFW_CODE
|
exposing c++ class (derived from abstract) in R with Rcpp modules
I have a c++ class svol_leverage that inherits from an abstract base class BasePF (actually that's a type alias for something messy coming from a specialized class template).
Using the Rcpp modules vignette and this answer, I was able to use .factory to call methods of the base class in R:
install.packages("~/pfr_1.0.tar.gz", repos = NULL, type="source")
library(pfr)
mod <- new(BasePF, phi=.5, mu=.5, sigma=.5, rho=.5)
mod$getLogCondLike()
However, I need to expose more methods, and I've started having difficulty. Some base class methods don't have great signatures--their types aren't easily wrapped to R types.
For example, in BasePF there's a filter() method I need to call that accepts, among other things, std::vector<>s of std::function<>s, of ... I decide to write wrappers in the derived class: svol_leverage::update. This returns void and takes two doubles. Simple.
*How would I update the RCPP_MODULE macro to expose this derived clas method, though? This isn't cutting it:
#include "svol_leverage.h"
BasePF *newSvolLeverage(FLOATTYPE phi, FLOATTYPE mu, FLOATTYPE sigma, FLOATTYPE rho) {
return new svol_leverage(phi, mu, sigma, rho);
}
// Expose the svol leverage model class
// Recall FLOATTYPE is defined in the header we're including above
RCPP_MODULE(svol_leverage_module){
Rcpp::class_< BasePF >("BasePF")
.factory<FLOATTYPE,FLOATTYPE,FLOATTYPE,FLOATTYPE>(newSvolLeverage)
.method("getLogCondLike", &BasePF::getLogCondLike);
Rcpp::class_<svol_leverage>("svol_leverage")
.derives<BasePF>("BasePF")
.method("update", &svol_leverage::update);
}
It R CMD builds fine, but when I go to call the update() method on my R object, I get this:
mod$update(.1, .1)
Error in envRefInferField(x, what, getClass(class(x)), selfEnv) :
‘update’ is not a valid field or method name for reference class “Rcpp_BasePF”
I can get it available when I add .constructor to the Derived class, and then set up variables using the regular constructor (i.e. mod <- new(svol_leverage, phi=.5, mu=.5, sigma=.5, rho=.5). But these variables won't have access to the other method mod$lastLogCondLike(). So it's one or the other at the moment. How can I avoid this slicing behavior?
R only offers us a C language interface and hence simpler signatures, so there are limits to what can be done with code generation. Rcpp Methods never had inheritance, I thinkl. I use it in RcppCNPy just fine with templated versions of one core function varying different metrics. You may be able to work something out if you only expose the derived class to C.
What I have done in the past is to stick with a clean C++ side of things, and then add glue methods to instantiate (and tear down) as well as basic interfaces to the key functions.
Thanks @DirkEddelbuettel yeah I'll move in that direction
|
STACK_EXCHANGE
|
Dungeon of Dance
Dungeon of Dance: Buy on Tape
The Status Display
- LV The current Level number
- XP Your current score
- HP Hit Points. When you reach 0 HP, the game is over.
- EN Energy. Each time you move, you consume 1 Energy. When you reach 0 Energy, you will automatically consume 1 HP to gain 30 Energy.
Under these is your Inventory bar. You may carry four types of items: Up to 6 Armor, up to 1 Key, up to 6 Healing Potions, and the Magic Mirror Ball.
- Each Level has 3 Healing Potions.
- Press the Fire button to drink a Potion to heal 10 HP (up to your maximum).
- Warning! Healing Potions may not always behave as expected!
- You may carry a maximum of 6 Healing Potions.
- Each Level has 6 Food.
- Food adds 30 Energy, up to a maximum of 200.
- Each Level has 1 Armor.
- Finding Armor adds 5 HP immediately.
- Each Armor adds 5 to your maximum HP.
- You may wear a maximum of 6 Armor (for a total of 30 maximum HP).
- Each Level has 1 Key, which is defended by the Dragon.
- After you defeat the Dragon, you may acquire the Key.
- When you have the Key, you may open the Door to the next Level.
- Move to the Door while holding the Key to descend to the next Level.
Magic Mirror Ball
- Starting at Level 6, the Magic Mirror Ball will be hidden in the Dungeon.
- Exit the Level while holding the Mirror Ball, and you win the game!
- After you've won, you may continue to play new levels by pressing the Fire button.
When you encounter a Monster, it will challenge you to a Dance-Off. It will show you some dance moves, which you must repeat with the joystick. If you win, the Monster will be banished from the Dungeon. If you lose, you will lose a random number of HP, based on the Monster's Strength and the current Level. More info about Monsters:
Goblin (Strength 1)
- Each Level has 8 Goblins.
- Dance Length: 1 step per Level, maximum of 4 steps.
- Damage: Up to 1 HP per Level, minimum of 1.
- When you defeat a Goblin, there is a 25% chance that it will drop Food.
- A defeated Goblin is banished from the Dungeon until the next Level.
Wraith (Strength 2)
- Each Level has 3 Wraiths.
- Dance Length: 1 step per Level + 1, maximum of 6 steps.
- Damage: Up to 2 HP per Level, minimum of 2.
- When you lose to a Wraith, there is a 25% chance that it will steal your map.
- When you defeat a Wraith, it is not banished; it finds a new place to haunt.
- When you defeat a Wraith, you will gain 1 HP.
Dragon (Strength 3)
- Each Level has 1 Dragon, and it is always in the center of the Dungeon.
- Dance Length: 1 step per Level + 2.
- Damage: Up to 3 HP per Level, minimum of 3.
- When you defeat the Dragon, it drops the Key to the Door.
- Try not to leave a Level without finding the Armor. You'll want as much as possible.
- Heed your maximum HP (5 x Armor) to optimize the value of Healing Potions.
- Heed the maximum damage (Strength x Level). Use Healing Potions before a Dance-Off.
- Engage with Goblins, because they're easy to beat and might have Food.
- On higher levels, when the risk-to-reward ratio is low, avoid engaging Wraiths.
- If a Dragon's pattern is too difficult, engage a lower-Strength monster to change the Dragon's pattern.
© 1980, Beige Maze
|
OPCFW_CODE
|
My first introduction to PostgreSQL was back in the summer of 2001. As I type this, I realize that this was fifteen years ago.
I was volunteering with the folks who maintained Portland IndyMedia, which was a local resource for activists of all flavors. It allowed the public to post news articles, share photos, announce events, and discuss things that were a concern to them. The project was originally started in 1999 as a tool to support protestors during the Seattle WTO demonstrations. The code base was open-sourced and people would setup their own instance in their corner of the world.
As a volunteer, I began helping with small code updates to fine tune the project to our liking in Portland. The code base was primarily written in Perl with a PostgreSQL backend. I’m pretty sure my first contributions were a few database indexes.
The following year, I began working for a small development shop in Portland that worked on various Perl, Python, and PHP projects. My bosses had co-authored O’Reilly’s PostgreSQL book … so it probably goes without saying that it was our go-to for open source data storage.
As I quickly learned, MySQL wasn’t always the most reliable nor robust database. We were hired to work on a number of migration projects once companies encountered some nasty data integrity issues. At this point in my career, I was working on a number of projects where multiple code bases were sharing the same database so we heavily relied heavily on stored procedures, triggers, custom validation rules, and other sophisticated features that PostgreSQL afforded us. I was taught to never trust the application code to properly validate data.
At the end of the day, MySQL was fast at a lot of what it did but not being ACID-compliant was a deal-breaker for many of our larger clients. MySQL has since improved much of this, but it was too litte, too late for us.
This was pre-ORM days for me.
Toward the end of 2004, I began toying around with Ruby on Rails. David was (and still is) a fan of MySQL. Admittedly, I was initially a little skeptical about ActiveRecord due to this. While Ruby on Rails aimed to be database-agnostic, I was worried that it wasn’t going to work too well with PostgreSQL.
My first Rails projects were built on top of existing PostgreSQL databases that had existing triggers, data constraints, and custom types. It wasn’t the simplest type of database to build a Rails application with. I was hitting some odd issues here and there. Eventually, this led to my first contributions to the Ruby on Rails code base. I was finding some quirky data issues when I removed triggers in my database and tried to rely on ActiveRecord’s callbacks. I later helped with this.
For those who were around during the first year of Ruby on Rails, you might remember Tobias Lutke’s open source blogging engine called Typo. I contributed the initial PostgreSQL database schema. That’s right folks, back in 2005, we were hand coding our database schemas because migrations didn’t come for maybe a year later? Developers these days missed out on all the “fun.”
(I still have nightmares about versioning schema migrations)
Anyhow, I was loud and proud about my love for PostgreSQL. Over time, I found a nice compromise between relying on ActiveRecord as the security person at the door and PostgreSQL as the bouncer inside the club. (this was how Jeremy Voorhis and I described this on the Ruby on Rails podcast in the spring of 2006)
I might have developed a bit of a reputation for debating this with people.
A decade has since past. Where are we now in the community? While I’d love to take some credit for the huge swing from MySQL to PostgreSQL, I can’t help but credit folks like Heroku for getting people comfortable with it.
In 2016, we see that PostgreSQL is the preferred database by 84% of the Ruby on Rails community. This is 12% increase over the 2014 results. What we’re also seeing is a variety of other database technologies continuing to be used for certain types of projects.
MySQL is still preferred by 14% and in my own opinion, I predict that in five years we’ll see it lingering in the ~10% range given that so many existing applications were already developed with it. With PostgreSQL adopting Andl in an upcoming release, I stand confident that it’ll continue to dominate the open source community.
Long live PostgreSQL; long live Ruby on Rails.
p.s. Did you know that you can write stored procedures in Ruby within PostgreSQL? I miss playing around with those sorts of features.
1417 people responded to the 2016 community survey.
|
OPCFW_CODE
|
Learn everything about organization partnership Diagrams (ERDs), what they’re useful, just how to discover all of them, how to come up with all of them, and much more contained in this manual.
What is an Entity connection Diagram (ERD)?
an organization partnership drawing (ERD) is a kind of diagram that enables you to find out how different entities (example. visitors, clientele, or any other objects) relate solely to each other in a loan application or a database.
These include created whenever a new experience becoming created to ensure the developing group can understand how to organize the databases. They are able to be also developed on an existing program to greatly help the team know the way the computer really works and see and deal with any problems.
Entity commitment Diagrams use a certain group of signs, like models and arrows, to illustrate the device and database.
Here’s a typical example of an ERD:
Components of an ERD
an Entity union Diagram comprises of different hardware:
an entity is actually something which have data retained about this. It can be an actual item (for example. car, people), a notion (example. target) or an event (e.g. student enrolment in a training course). They express nouns.
They normally are represented as rectangles on an ERD making use of the organization title within the rectangle.
an entity can also be a substantial entity or a weak entity. What’s the real difference?
A stronger entity enjoys an identifier (a major key) and will not rely on every other organizations because of it to can be found. For example, students might be a strong organization, as it could need a major trick and will not rely on any entities because of it to can be found.
a weak entity is one that depends on a substantial organization for presence. This simply means it has a foreign key to another entity. Like, an enrolment of a student might a weak organization, as an enrolment cannot can be found without a student.
an union in an ERD defines just how two entities are pertaining to each other. They may be produced from verbs when talking about a database or a set of organizations.
Affairs in ERDs become symbolized as outlines between two organizations, and sometimes need a tag at risk to advance describe the partnership (instance “enrols”, “registers”, “completes”).
There are numerous types of interactions which happen to be displayed on an ERD:
- One to one: One record of an organization are directly regarding another record of an organization
- A person to most: One record of an entity is related to a number of records of some other organization.
- Many to several: Numerous documents of a single entity is generally connected with many registers of another entity.
an attribute try a house of an organization or something like that which can be used to explain an entity. They are often displayed as ovals, or as records inside an entity.
There are numerous different sorts of characteristics symbolized on an ERD:
- Easy: a characteristic that cannot end up being split up into various other features, such as for example a first name.
- Composite: a trait that may be split up into some other features, instance title being put into basic, middle, and finally label.
- Derived: a feature that’s computed or determined from another trait, for instance the ages of record getting computed from the provided big date.
a feature can also be single-value or multi-value:
- Single-value: an attribute that is only captured once for
- Multi-Value: a trait that can be caught more often than once for an entity, particularly several telephone numbers.
Cardinality symbolizes the amount of cases of an entity that exist in a commitment between two entities. This is conveyed as a number but is also emblematic, according to model of diagram used. Common cardinality values are zero, one, or many.
We’ll see some samples of cardinality later in this guide.
Whenever we become creating an ERD we quite often learn of what we wish capture. This can often end up being shown in phrase, or making use of “natural language”.
A few examples is:
- “Record students, instruction they enrol in, and educators exactly who illustrate the program”
- “Capture the consumer purchases, consumer details, and where in fact the instructions are now being delivered”
- “Capture patient information therefore the surgery that they had”
These phrases incorporate multiple different sorts of terminology, that is certainly utilized as a starting point for an ERD. They’re displayed in some different ways:
- Noun: a “thing”, for example a student or visitors. Displayed as an entity.
- Verb: an action, such as for instance enrol or send. Represented as a relationship between two agencies.
- Adjective: an explaining keyword, including residential or expert. Represented as an attribute on an entity.
It will help your change a description of what you need to diagram into an actual diagram.
Symbols and notations
When creating an ERD, it could be an easy task to produce bins and contours among them. But, like other circumstances in pc software developing, there are some various methods and guidelines that are available. For ERDs, there are various notation requirements, which define the icons used.
|
OPCFW_CODE
|
DistanceToNearestSurface doesn't blend
I'm currently trying to create a water material, that recognizes objects inside of it to create foam around it. I researched a lot and found about this node called "DistanceToNearestSurface" and people seemed to get some proper results with it.
I may at first explain my understanding of this node. Apparently it calculates the distance between the nearest DistanceField and the desired position. This works very well with the landscape mesh and after seeing images of it working also with static mesh, I tried to put some rocks into the river (from the Starter Content).
Here I got my problems. These objects just create a spot that's not blending at all. Every image I'm looking at in the web, shows objects blending perfectly fine, so I couldn't find a solution. Also it seems like some weird artifacts are appearing randomly. Same happens with the "DistanceFieldGradient".
What I did was:
Obviously change the settings to "Generate Mesh Distance Fields".
I turned off "Cast Shadows" of the water mesh, which is basically a plane and put the material on it.
I also made the Skylight movable as suggested on some other site.
Maybe I'm missing some kind of setting or I misunderstood how this node works and I would be glad if somebody could explain to me, what I'm doing wrong.
This is the scene:
This is the DistanceField scene:
And this is my material, it's very basic:
The following shows results that I would like to see for myself.
Thank you guys very much for your time!
"Apparently it calculates the distance between the nearest DistanceField and the desired position."
This is not entirely correct. It gives the approximate distance in centimeters to the nearest surface from the given position. Which is positive when the given position is outside of the mesh, and negative when the given position is inside of a mesh.
Since you clamp the value between 0 and 1 anything negative is going to be zero. Next you invert it using the lerp(1,0) so this means all the negative values will become white.
Your mesh is probably a bit to small, and approximately the distance towards the surface is inside the mesh, giving you since it is clamped and inverted the white spots.
In all my demos I used fairly large meshes. You could also play around with the console settings. All settings start with r.DF or r.DistanceField.
Bonus tip: Since at the end you have a value between 0 and 1, doing a lerp(1,0) as doing a OneMinus. Expample, lets say you have a value of 0.3, these would be the equations:
OneMinus basically inverts a value between 0 and 1. Lerp linearly interpolates between 2 values based on a single value between 0 and 1.
answered Aug 05 '16 at 10:24 AM
Follow this question
Once you sign in you will be able to subscribe for any updates here
|
OPCFW_CODE
|
PostgreSQL - in array function
I have returned records in the form:
id, group_name, {user_groupid1, user_groupid2, ..., user_groupidn}
The query returns all groups in a system plus I want to return in the same result set whether a user belongs to a group or not.
First I tried to use a subquery in the select statement to set the third column to a boolean value and it worked like charm but the big problem is that I use Java+Hibernate and Hibernate won't work with subqueries in select statements if you want to pass the result to a constructor (and that's exactly what I want). So I though of using maybe an SQL function where there are 2 parameters, the first is an ID (long), the second is an array or a set of IDs and I'd like to know whether the ID is contained in the set or not. In the example above I used a function called array_agg, so it concatenates the given IDs to an array but it's not necessarily the form the 2nd param has to be. It just a set of Ids.
Before that I came to the idea to solve this in SQL, I returned the IDs as a String array above then I processed it in Java (splitting, parsing) and I don't really like that, so that's why I need another solution.
Any help is appreciated!
cheers,
b
Can you post the definition of the tables involved?
3 tables are involved: user (id, name), group (id, name), usergroupcontact (id, user_id, group_id, status). in a simplified way, otherwise all of them are too big and other attributes are irrelevant in this case.
select g.id, g.name, g.id in (select ginner.id from group ginner join usergroupcontact ugc join user u where u.id=:userID) from group g where g.name like '%:name%' - that was the pseudo code of my original query.
Just a side note: it's not a good idea to use reserved words as table names (user, group). But I still don't understand what you are up to. If you have a list of all groups and all user_ids that belong to that group, what exactly is it you want to get out of that? I suggest you post some sample data and the expected result from that. Btw: that extra id in usergroupcontact probably doesn't make sense as (user_id, group_id) is a perfect PK unless a user can have multiples status in a group
the table names are different, this is just a simplified example. also, the connector table has much more function than a simple join table, that's why it has ID. it's much more complicated but i didn't want to post full tables, just the relevant parts :)
|
STACK_EXCHANGE
|
How to Raise a square matrix to a negative half power in R?
The singular power operation can be performed on square matrices itself, where the specified power is applied to each element of the matrix. Base R has many methods and routines to compute power for any k > 1 , where k is the integer value. However, raising matrices to a non-integral power is a challenge, and a limited number of solutions are available. External packages can be invoked to perform power computation in a more reliable manner in R Programming Language.
Method 1: Using sqrtm method
The expm package in R is used to compute exponential powers, logarithmic powers and square roots of matrices in R. The package needs to be first installed into the working space by the following command execution :
The package has the method sqrtm() which is used for matrix square root computation of the square matrices in R.
This is followed by the application of the solve() method in R which, in general, solves the equation a %*% x = b for x, where b can be either a vector or a matrix or a real number value <0. This method is available in base R itself.
solve( mat , power)
In case the power is empty, the inverse of the matrix mat is computed, that is the matrix is raised to the power of -1. Hence, to summarize, we compute the half-power using the sqrtm() method and negate it using the solve() method
"Original Matrix" [,1] [,2] [,3] [1,] 1 4 7 [2,] 2 5 8 [3,] 3 6 9 "Power Matrix" [,1] [,2] [,3] [1,] 0- 6980476i 0+13960950i 0- 6980475i [2,] 0+13960950i 0-27921901i 0+13960951i [3,] 0- 6980475i 0+13960951i 0- 6980475i
Method 2: Using eigen vectors approach
^%^ operator can be used to compute the matrix power operation, where eigen value decomposition is performed for the specified matrix. Its corresponding vectors, as well as values, are obtained in the form of arrays. The transpose of the vectors along with the values vector is then used to return the customized function value to compute matrix power. However, this method is considered unsuitable for the working of random square matrices, as it is guided by many constraints, some of them being :
- This method doesn’t work for a matrix that has no eigen value decomposition.
- This method doesn’t work for a matrix which is not diagonalized.
- The matrix should be preferably symmetric.
"Original Matrix" [,1] [,2] [1,] 0.088150 0.001017 [2,] 0.001017 0.084634 "Modified Matrix" [,1] [,2] [1,] 3.36830 -0.02004 [2,] -0.02004 3.43755
|
OPCFW_CODE
|
My emu is not running my patch file correctly it just keeps giving me the error. The patch is correct for the game.
I've enabled cheats and enable host filesystem
I've unticked "Hide extensions for known files type"
After hours and hours on the forums I ran out and and asking for assistance.
Thanks in advance.
you dont need host filesystem for cheats.
can you post the cheat here?, also telling us the game and region (serial number too) would be handy
The game is Final Fantasy 12
(cdrom0:\SLUS_209.63;1) Game CRC = 0x0779FBDB
gametitle=Final Fantasy XII [SLUS_209.63] (U) [0779FBDB]
//Have All Spells
//comment=Enemies Drop More Loot
//comment=Max LP Gained (After Battle)
//Always Full Action Gauge
//comment=6?x Exp Gained (After Battle)
//comment=4x Exp Gained (After Battle)
//comment=2x Exp Gained (After Battle)
//comment=8x LP Gained (After Battle)
//comment=4x LP Gained (After Battle)
//comment=Vaan codes all good status
//comment=Ashe codes all good status
//comment=Fran codes all good status
//comment=Baltier codes all good status
//comment=Penelo codes all good status
//comment=Bash codes all good status
//comment=chocobo infinite time and sprints
//comment=100% Successful Steal Rate Always On
//Have All Treasure Chests = Their Rarest Loot/Items/etc.
//Sell 1 item to get 99 of it
//Quick Chain-Level Gain Always On
//Alle Skills (select drücken?)
//comment=Break Damage Limit for 9999+ Damage
wow thats a lot of cheats.
firstly id make sure each individual cheat works before sticking them all on.
also i believe all the addresses should start with 2
Not 100% sure on that, but im sure someone who actually uses cheats will be able to confirm. im sure none of the addresses should start with 0 however (pcsx2 thing, not codes being wrong)
error 2: the system cannot find the file specified is ususally occurs because of either corruption or damage to the registry or device driver conflicts. Some other causes might be:
The System cannot find specified file
To what refraction said:
The first number in the adress is actually pointing to code type, 0 means it's just writing a byte lenght of value, through 0 there and a "word" near it is wrong;P, couse it's like telling 2 opposite things. I guess the type, here "word" is more important and the first letter of adress is used only with "extended" type soo althrough looking awfully wrong the codes would still work as planned seeing they do patch value of 4 bytes length, ofc that's only about formatting;O, never used cheats as pnach file in FFXII to even have idea how they should look like.
@Op, if you're having some registry problem as pointed by above post, maybe just try running pcsx2 in portable version. Any SVN version is working like that by default, you can force normal version into portable one by creating a file(empty one;P, just create a txt file and rename it including extenshion) called "portable.ini" inside pcsx2 directory. If you would make your version portable as I suggested, you would also want to copy your data like saves, screenshots, ini files, bios etc. from "C:\Users\username\Documents\PCSX2" to your newly made portable version directory;3.
In case you have some system problems, it's also not too wise having pcsx2 directory in "program files" or user data like ie desktop, if you do keep it in any strange windows directory like that move it into a normal directory for example directly on your main partition like "c:\pcsx2\" or alike.
Anyway if anything else fails(or just start from that if you care about cheating more than solving your problem) for this game you can simply use rich FFXII editor
instead of pnach files, it can edit save games, but it also can work as a trainer for pcsx2 and allow you to edit pretty much everything easily in XII with a nice interface.
Miseru99: huh? I mean the first number of the address....
06-24-2012, 06:41 AM
(This post was last modified: 06-24-2012, 07:02 AM by miseru99.)
Yeah, and I replied to that;P. First number of the adress in ps2 codes is code type for example 0xxxxxxx means the code will patch 1 byte, 1xxxxxxx means the code will patch 2 bytes, 2xxxxxxx means it'll patch 4 bytes, 4xxxxxxx is multi-line/condensed code, 9/Fxxxxxxx are master code etc.;P It still applies to pnach format exactly same way.
Frankly when I first got into ps2 cheats I was also awfully confused(1 number of adress is what? Code type? WTF?), best to catch it, is to read something like this
, I dunno whole list of code types working in pnach files, but the basic ones I listed above do work for sure, using them commonly.
miseru99: Your PM inbox is full. I'd appreciate your help
. Sorry for OT
[i7-3630qm/gt650m-2G/Win-7] [i7-4500u/R.HD8850m/Win-8.1] [2010-MBA/OSX-10.9.x]. Scroll smoothly with SmoothWheel
|
OPCFW_CODE
|
Are post requests case-sensitive?
1 Answer. HTTP parameters are case sensitive.
Is HTTP case-sensitive?
By default, Web servers are expected to be case-sensitive. Although most HTTP servers support the HTTP specification that defines URLs as case-sensitive, some HTTP servers treat URLs as not case-sensitive. are viewed as the same URL. For example, a Web server running on Windows treats requests for INDEX.
Is postman case-sensitive?
tldr; both HTTP/1.1 and HTTP/2 headers are case-insensitive.
Is Restful API case sensitive?
3 Answers. As others answered, the HTTP portion of the URL that makes APIs is case sensitive. This follows the UNIX convention, where URI paths were mapped to filesystem paths, and filesystem paths are case-sensitive. Windows, on the other hand, follows its convention of making paths case-insensitive.
Should JSON be case sensitive?
You Don’t Have to Be. SQL, by default, is case insensitive to identifiers and keywords, but case sensitive to data. JSON is case sensitive to both field names and data.
How do I make my URL case-sensitive?
Does capitalization and spaces matter in Internet addresses?
- If you need to create a URL or address with spaces, substitute the spaces with a + (plus).
- When typing an Internet address, capitalization may be necessary.
- However, when typing the name of the page, file, or directory in the URL, it is case sensitive.
Are email domain names case-sensitive?
Email addresses are not case sensitive. Having letters in all lowercase makes the email address easier to read, but the oversight won’t stop your messages from being delivered.
Is Basic Auth case sensitive?
The authorization scheme for HTTP Basic Authentication should not be case sensitive.
How do you use Postman code snippets?
Once your API call is working the way you want it to in Postman, you’re ready to generate your code snippets.
- On the right, under Save, click the Code link.
- In the Generate code snippets modal, select the language or framework to update the generated code.
- Copy and paste the code into your app.
What does it mean to be case sensitive in http?
If you trust the major browsers to abide by this, you’re all set. BTW, unlike most of HTTP, methods (verbs) are case sensitive: resource identified by the Request-URI. The method is case-sensitive.
What does it mean when a password is case sensitive?
In computing, if a written word such as a password is case-sensitive, it must be written in a particular form, for example using all capital letters or all small letters, in order for the computer to recognize it. COBUILD Advanced English Dictionary. Copyright © HarperCollins Publishers
What makes you want to look up case sensitive?
: requiring correct input of uppercase and lowercase letters Having the Caps Lock key on accidentally can also lead to a frustrating series of “wrong password” alerts when trying to use a case-sensitive password for your office network or Internet provider. — J. D. Biersdorfer What made you want to look up case-sensitive?
Are there any headers that are not case sensitive?
Header names are not case sensitive. From RFC 2616 – “Hypertext Transfer Protocol — HTTP/1.1”, Section 4.2, “Message Headers”: Each header field consists of a name followed by a colon (“:”) and the field value. Field names are case- in sensitive.
|
OPCFW_CODE
|
… or any Debian-based Linux distribution
In this post we’ll install Eddie, the graphical user interface (GUI) developed by AirVPN. Eddie can be used with any VPN service supporting the OpenVPN protocol, but Eddie is tailored for AirVPN.
1. Avoid the .deb AirVPN package on Ubuntu / Linux Mint
Installing AirVPN via the official .deb package can cause the following error to happen:
Selecting previously unselected package eddie-ui. (Reading database ... 232467 files and directories currently installed.) Preparing to unpack eddie-ui_2.18.9_linux_x64_debian.deb ... Unpacking eddie-ui (2.18.9) ... dpkg: dependency problems prevent configuration of eddie-ui: eddie-ui depends on mono-runtime; however: Package mono-runtime is not installed. eddie-ui depends on mono-utils; however: Package mono-utils is not installed. eddie-ui depends on libmono-system-core4.0-cil; however: Package libmono-system-core4.0-cil is not installed. eddie-ui depends on libmono-system-windows-forms4.0-cil; however: Package libmono-system-windows-forms4.0-cil is not installed. eddie-ui depends on stunnel4; however: Package stunnel4 is not installed. eddie-ui depends on curl; however: Package curl is not installed. eddie-ui depends on libsecret-tools; however: Package libsecret-tools is not installed. dpkg: error processing package eddie-ui (--install): dependency problems - leaving unconfigured Processing triggers for man-db (2.9.3-2) ... Processing triggers for gnome-menus (3.36.0-1ubuntu1) ... Processing triggers for desktop-file-utils (0.24-1ubuntu4) ... Processing triggers for mime-support (3.64ubuntu1) ... Errors were encountered while processing: eddie-ui
I could reproduce these errors on Ubuntu 20.10 as well as Linux Mint 19.x and 20 “Ulyana”. Trying to launch the AirVPN GUI produces the following error:
eddie-ui /usr/bin/eddie-ui: line 2: mono: command not found
Unfortunately, solving the problem is not so easy, as some mentioned dependencies won’t install trouble-free. Installing AirVPN via the PPA repository is much simpler, so that’s what the next paragraph is about.
2. Install AirVPN on Linux using the official repository
Here we will use the commands provided on the official AirVPN Linux download page, NOT the .deb packages. In a terminal, enter the following commands:
wget -qO - https://eddie.website/repository/keys/eddie_maintainer_gpg.key|sudo apt-key add -
On Ubuntu, you can safely ignore the following warning:
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
as this instruction seem to be deprecated.
echo "deb http://eddie.website/repository/apt stable main" | sudo tee /etc/apt/sources.list.d/eddie.website.list sudo apt update sudo apt install eddie-ui
The AirVPN GUI is called Eddie and relies on Mono, which may cause a lot of dependencies being installed:
Suggested packages: libmono-i18n4.0-all libgnomeui-0 libgamin0 logcheck-database Recommended packages: libgluezilla The following NEW packages will be installed: binfmt-support ca-certificates-mono cli-common curl eddie-ui libgdiplus libmono-accessibility4.0-cil libmono-btls-interface4.0-cil libmono-corlib4.5-cil libmono-i18n-west4.0-cil libmono-i18n4.0-cil libmono-posix4.0-cil libmono-security4.0-cil libmono-system-configuration4.0-cil libmono-system-core4.0-cil libmono-system-data4.0-cil libmono-system-drawing4.0-cil libmono-system-enterpriseservices4.0-cil libmono-system-numerics4.0-cil libmono-system-runtime-serialization-formatters-soap4.0-cil libmono-system-security4.0-cil libmono-system-transactions4.0-cil libmono-system-windows-forms4.0-cil libmono-system-xml4.0-cil libmono-system4.0-cil libmono-webbrowser4.0-cil libmonoboehm-2.0-1 libsecret-tools mono-4.0-gac mono-gac mono-runtime mono-runtime-common mono-runtime-sgen mono-utils stunnel4 0 upgraded, 35 newly installed, 0 to remove and 0 not upgraded. Need to get 19,4 MB of archives. After this operation, 69,7 MB of additional disk space will be used.
Eddie will automatically install icons in the menus, but can also be started running
eddie-ui. Now you’re ready to start using AirVPN – or any other VPN service via OpenVPN.
3. Connect to AirVPN and configure Eddie
Don’t look for AirVPN in the menus but for “Eddie” and an icon with blue clouds. You will to enter the root password upon launch.
Don’t be fooled by the interface design (which may have looked fantastic 15 years ago): you won’t find a more complete VPN GUI on Linux.
1. Avoid the “bootstrap error” and subsequent crashes
If you happen to shut down your internet connection or have configured your router to shut down in the night hours, Eddie will throw hundreds of system notification, an error message (see below) and may become totally unresponsive.
… to be continued.
|
OPCFW_CODE
|
Oracle query to getting average for each 5 Min
I Have Table Like there
Table
DAY LABEL VALUE1
07-17-2014 08:19:39 40.2
07-17-2014 08:19:49 42.2
07-17-2014 08:19:59 37.1
07-17-2014 11:25:51 51.8
07-17-2014 11:26:01 52.1
07-17-2014 11:26:11 51.8
07-17-2014 11:26:21 44.1
07-17-2014 11:26:31 41.3
07-17-2014 11:26:51 59.1
i want oracle query to diasplay table one minute or 5 minute taking seconds as average
Example
07-17-2014 08:19 41.3
07-17-2014 11:25 51.8
07-17-2014 11:26 52.7
07-17-2014 11:27 ......
It's not clear what you'd do if you also had the values 08:18:20. How do you "take seconds as average"?
It would help if you formatted your table and results as a code block; used consistent data between the table and example result; and showed how you get from one to the other. How would do you do it on paper? You also talk about both 1-minute and 5-minute grouping but your result is (I think) only 1-minute. If you do want 5-minute too you'd need to explain how each period is defined.
Rounding a date to minutes is easy, just use
trunc(day_label, 'MI')
To compute the average of column value1, you'll need to group by this trunc-value like so:
SELECT trunc(day_label, 'MI') AS m, round(avg(value1),1) as avg_secs
FROM t
GROUP BY trunc(day_label, 'MI')
ORDER BY trunc(day_label, 'MI');
17.07.2014 08:19:00 39,8
17.07.2014 11:25:00 51,8
17.07.2014 11:26:00 49,7
this is my actual query ( select type,concat(day,TIMSTAMP_DAY) label,ENTC_PERCENT value1 from sample_table WHERE HOSTNAME='ABCDEF' order by 2 desc ) can u give solution for this..
I need more details for that: 1. What datatype are day and timestamp_day? 2. What's type, average over that as well? 3. is etc_percent a percent value? Does it make sense to average over percent, mathematically speaking?
|
STACK_EXCHANGE
|
Review of Opera 9.0
While writing this review I was comparing Opera 9.0 with other browsers. Several times I found myself browsing in Opera without actually realising it was Opera! It’s a browser most people could use all day every day. However, there are things I don’t like and things which it’s missing which mean I won’t be switching to it as my primary browser.
You can try the browser yourself via the Opera 9.0 free download on Opera’s website.
After downloading the Opera 9.0 package from Opera’s website, I ran the installer and went through the usual routine of confirming settings. One of the first pages was this, asking whether I wanted to upgrade my existing version of Opera or install this new version seperately.
This is a great idea because normal users will just want to upgrade while developers will probably want the seperate install. Opera caters to both markets.
Despite telling Opera that I was making a seperate install, it automatically imported all the bookmarks from my Opera 8 install. Maybe this is a “helpful” feature but Firefox is my main browser and my Opera 8 bookmarks are out of date.
I went to the Bookmarks > Edit Bookmarks menu which opens the bookmarks editor in a tab, which is quite handy. I deleted all of the bookmarks by dragging them onto the Trash item at the bottom. I then went to File > Import and Export > Import Netscape/Firefox Bookmarks and hunted for the bookmarks file created by Firefox. I found it listed in
C:\Documents and Settings\Cerbera\Application Data\Mozilla\Firefox\Profiles\default.jvt\bookmarks.html.
Cache and Interface
After sorting my bookmarks I went into Tools > Preferences to have a poke around. Opera has always had strong cache settings which can mean you can miss forum messages unless you Refresh pages. I went to the Advanced tab of this window and found the cache settings shown in the picture.
As with previous versions there are loads of things you can customise and tweak. Opera 9.0 has neither the status bar nor the Window menu turned on by default. It’s fairly easy to turn these back on, although the status bar is listed in the View > Toolbar menu list.
I went through Tools > Appearence and selected the Windows Native style. This is because I use many programs daily and keeping the interface widgets looking the same makes it easier to switch between them. I also switched Opera 8.5 into the Windows skin early on during my review of it.
Browsing with Opera 9.0
There are loads of lovely interface touches in Opera.
For example, the Forward and Back buttons load instantly from the cache. This makes hunting around websites extremely fast, which makes normal browsing more pleasant. I’ve always liked this behaviour in Opera and there are many other excellent touches like this. But there’s more to Opera than this!
Another key feature is called Spatial Navigation (yikes, a pun!). Basically, it lets the user move between the interactive elements of the page by pressing Shift+Arrow. The Tab key is used solely for moving between any form controls.
You can use A to move to the next link and Q to move to the previous link. Pressing S moves to the next heading and W moves to the previous one. If a heading contains any links, the heading navigation keys will move between each link.
The first page you see when Opera 9.0 loads is their fancy startup page, shown in the picture. Also shown is the sophisticated popup you get when hovering the mouse on a tab. Lovely, isn’t it?
This picture shows Opera’s famous zooming feature which is controlled by the numerous items in the View > Zoom menu list. The picture shows it zooming the Students’ Work area of Calthorpe Park School, a website I developed and maintain professionally.
Opera rescales all the images by the same proportion as the text, improving their visibility for people who require large text sizes. This zooming feature can also keep the layout of graphical web pages from exploding when resizing the text alone might do. It seems to do a better job of smoothing enlarged images than previous versions, too.
It seems to have all of the tools and features from Opera 8.5.
The picture shows the sidebar opened in the Links mode, which displays all the hyperlinks on the current page. If the link has a
title attribute which is longer than the actual link text, Opera uses that for its entry.
There are stacks of other great little things like this. You can download extras for Opera much like you would for Firefox.
Extensible Markup Language (XML) Capable
Opera 9.0 includes support for
application/xhtml+xml, including the Extensible Hypertext Markup Language (XHTML) and Scalable Vector Graphics (SVG) formats. However, as the picture shows, one minor error by an XHTML author leads to big lumps of a websites becoming impossible to access in XML capable browsers.
Opera 9.0 claims to have incremental rendering support for XHTML and is the first browser to do so, from what I’ve read. I guess that’s one disadvantage to it slightly reduced but it will be many, many years before incrementally rendering XHTML become the norm. Perhaps never, bearing in mind the extreme rarity of real XHTML on the web.
Still, it’s good that Opera 9.0 lets communities who do need XHTML to view it more quickly.
There are various choices made in the Opera interface and behaviour which I find make it harder to use.
- Can’t middle-click items in the Bookmarks menu.
- Can’t drag things around in the Bookmarks menu.
- Bookmarks are automatically ordered alphabetically, even when importing bookmarks which are ordered differently.
- No way to create seperators in the Bookmarks menus.
- Ctrl+Alt+B doesn’t actually open the Edit Bookmarks tab for me.
- Opera’s shortcut keys are often unlike some similar applications without any obvious reason.
- Its use of Ctrl+Alt+Key for some shortcuts conflicts with the Windows system for giving fast access to shortcut files (
- Cannot use Alt+Enter to open results from the search bar in a new tab.
- Cannot use Alt+Enter to open an address from the address bar in a new tab.
- The tab bar is above the address bar and it seems impossible to change this.
- Closes tabs on downpress of middle mouse button instead of when the button is released.
- Several dialogue windows appear to have a resizable border according to the cursor but cannot actually be resized.
- Mouse jumps to middle of screen when middle-clicking content to do autoscrolling. I’ll put the mouse where I want it, thank you!
Some of these things like this will probably be fixed in minor revisions during 2006, but the lack of interactivity with the Bookmarks seems deeply rooted in Opera’s interface principles. You have to use Shift+LeftClick to open Bookmarks items in a new tab. You can even use Ctrl+Shift+LeftClick to make the tab open in the background. Why don’t they allow MiddleClick to do the former and Ctrl+MiddleClick for the latter as well?
The Opera Keyboard Shortcuts page shows that the keyboard functionality is very good. It’s just a pain that you have to learn a new set of combinations to even simple things. For example, Alt+D will give focus to the Address Bar in almost any graphical browser. It’s very comfortable and is on the left hand like nearly all common shortcuts. Apart from in Opera, where you have to use F8 or Ctrl+L, both of which are on the right hand. What’s the benefit in that when Alt+D isn’t used for anything?
Although Opera’s web browser has always been very stable and clever, it’s never felt entirely comfortable to me. Without great extensions like the myriad website developer tools for Firefox, it won’t become my primary browser.
Much like Opera 8.5, Opera 9.0 is a very refined browser suitable for everyday use. It’s like the iPod of web browsers! Maybe I’m just not trendy enough to appreciate it.
|
OPCFW_CODE
|
Hey all, I just posted this on Alcohol's forum. See if any of you have any suggestions. I used to use GamesXCopy but that has since left the main stream, and Alcohol seems to be the go-to-guy replacement. I have downloaded the trial of 52% and I really like it. However, this is the situation, you've probably heard it before. I hope I'm posting in the right forum. We have 6 games for our cafe that we are setting up. And getting emulation to work is a trick. But I managed to get all our games to work except Star Wars BattleFront and HL CounterStrike. Here are my questions: 1. Based on what you know, how exactly should I set up Alcohol 52% to create an image of Battlefront? The disk is the normal retail version. 2. ClonyXXL detects the disk as "SecuROM *new* 3. A Ray Scanner detects it as "SecuROM 5.03.06.0041" 4. I created the image in Alcohol 52% trial (latest just downloaded today) with the data type set to "SecuRom *new (4.x/5.x)" 5. When the image is loaded and BattleFront is started, it gives this message: "Conflict with Disc Emulator Software detected. See www.securom.com/emulation for more details". I have searched for this error and information and have not found a good solution yet. 6. I do NOT need to burn this disk. The cafe client PCs have NO CD-ROMs at all. The software must run emulated. 7. I have also created a few other images but all give same error. I also opened the image with Daemon Tools and got same error. The software I have and everything else is all legal. If Alcohol 52% works with each game then of course we'll buy that for each PC as well. So I need a legal way to do this. As in, no "cracks" or "hacks" that would be considered a no-no. One thought I read online was that BF is simply blacklisting emulation software. Literally checking if it's installed and then failing because of it. If that's so, then I now hate Lucas Arts. And I have no problem "hacking" a file here or there to block its blacklist check. That would be an outrageous thing to do in my opinion. Cause a lot of cafe places with out-in-the-open PCs can't exactly have retail CDs sitting there where people can steal them. Nor do we want to buy 20 CD-ROMs just to play one game. It's silly. So how do I emulate Battlefront? Thanks.
|
OPCFW_CODE
|
The race to improve software quality and innovation has been around since the 1970s. Many processes and workflows have been created to help address the historical issues that prevent teams from developing high-quality applications quickly and reliably, yet enterprises continue their struggle to keep up.
Continuous Integration is a way to help build quality, security, and regulatory compliance into the SDLC. In agile development, Continuous Integration, or CI, asks developers to merge code changes into the central code repository often and consistently. Several times a day, builds are automated and unit tests and integration tests are performed. Because there are typically only small changes in code, each test can pinpoint specific changes that introduced a flaw or vulnerability.
The two main goals of CI are to ensure the high quality and good health of code, continuously, and to ensure a seamless flow between development, testing and deployment. Each organization needs to define for themselves how these goals are to be achieved, yet they provide a strong starting point for all organizations – from Netflix to SMB’s – looking to improve their development process.
There’s a lot involved in CI. To help get everyone up to speed, we’ve put together a dictionary of important terms and tools to understand and know, surrounding CI and more broadly, DevOps.
The ABC’s of CI
A – Agile & Apache Ant
Agile – Continuous Integration processes are agile, which means they are characterized by short work phases, frequent reassessments, and adaptation of plans. Everything changes in business, and even more so in software development. The ability to facilitate change in development to better accommodate the business model and involved teams is part of what makes an organization agile – a key component in continuous integration. Agility is a key component of DevOps, as well.
Apache Ant – One of the top tools for build automation, a component of Continuous Integration. Ant is made primarily for Java and uses XML to note the build process and its dependencies.
B – Bamboo & Build Automation
Build Automation is the process of automating a build and its surrounding processes, either through a tool like Apache Ant or Maven, or on a dedicated build server. In short, build automation tools turn source code into executable code. The build automates each step of the process, from compiling source code to producing installers, to updating the database after the build is completed. Automating these areas not only speeds up the process, but eliminates many of the issues that arise with a manual build.
Bamboo is a continuous integration server by Atlassian which automates a build and test process once code is committed to the source repository. Test results are immediate, allowing developers to fix issues in near real-time.
C – Continuous Deployment and Continuous Delivery
Closely related to CI is Continuous Delivery, is a practice in which code can be deployed at any moment to production because of rigorous automated testing. The concept is based on the fact that automated builds and testing are so tightly integrated into the build that shipping is possible at any given moment.
Often paired with CI and following closely after Continuous Delivery, Continuous Deployment refers to the practice of shipping each time a version has passed testing.
The main difference between Continuous Delivery and Continuous Deployment is that deploying to production is manual in the former and automated in the latter.
D – DevOps
DevOps is the umbrella term for many of the processes including and surrounding Continuous Integration. DevOps is a cultural change in IT which adopts agile processes, including CI, Continuous Deployment, and Continuous Delivery, automation tools for builds and tests, and cultural changes like a higher rate of collaboration between development, operations, and security teams. DevOps was created to help businesses succeed in the fast-paced development world.
Dive deeper into DevOps here.
F – Feedback
A tenant of DevOps and a major benefit of implementing Continuous Integration processes is rapid feedback. Feedback, in the form of build and test results, offer a full picture of the project’s state as often as CI takes place. By fixing issues based on feedback, the overall health of the project at hand is increased, which in turn improves software quality.
G – GIT & GitHub
Git is a version control system which tracks source code changes and updates the repository after CI is triggered.
GitHub is a web-based version of the Git version control system and hosts code online for access control, bug tracking, feature requests and task management. Any standard Git command also work on GitHub.
I – Integration Testing
Following unit testing in CI is integration testing, where code units are combined and tested as a group. Types of integration testing include big bang, bottom-up, top-down, and sandwich testing. Integration testing enables faster feedback on problems involving integrations, which can be fixed upon finding rather than at the end of the SDLC.
J – Jenkins & JIRA
Jenkins, like Bamboo, is a Continuous Integration server to help facilitate automated builds and tests. Written in Java, Jenkins also offers various plugins that allow it to work with other languages.
CxSAST Jenkins plugin is a source code analysis solution that helps identify, monitor and fix errors, vulnerability issues and compliance problems found within the source code. The CxSAST plugin scans the source code and supplies scan results as either static or interactive reports; interactive meaning the enablement of runtime tracking per vulnerabilities in the code. This plugin will then administer the necessary remediation guidelines and action items.
Read more about Bamboo vs. Jenkins here
JIRA is a bug and issue tracking tool used within the CI workflow to help detect and highlight issues found through automated testing.
M – Maven
Apache Maven is a build automation tool, primarily used for code written in Java but that can also be used on projects written in C#, Scala, Ruby and more. The tool differentiates the build from its dependencies, making detecting where issues arise much easier.
S – Security Testing & SonarQube
When it comes to Continuous Integration, another benefit could be increased security of the code through automated security testing. While unit testing and integration testing are enormously helpful in detecting functional and quality issues within the project, they don’t necessarily detect security issues. Security testing can be integrated into the CI workflow, however, and will similarly pinpoint security vulnerabilities that can be addressed immediately, rather than at the end of the SDLC.
Read more about Continuous Integration Security here.
T – Test Driven Development
Underpinning CI is the Test-Driven Development (or Test-Driven Design) methodology, driven by building a strong application through meticulous testing. TDD runs on a process of initial testing, coding, and refactoring to solve found issues.
U – Unit Testing
Part of the automated test process involved in CI includes unit testing, which automatically tests the smallest possible code base individually to find issues as early as possible. The concept of unit testing is to isolate a specific area of a project to validate that it was written correctly.
V – Version Control
Version control tools like Git and GitHub log changes to code, configuration files and documentation to enable better management of changes and differing versions of code. These tools are especially helpful when different teams work on the same project, which can often create confusion if not properly tracked.
The AppSec How To: Application Security in Continuous Integration
|
OPCFW_CODE
|
Raspberry Pi, a credit-card-sized computer, has gained immense popularity among technology enthusiasts and DIY enthusiasts. With its affordable price and limitless possibilities, Raspberry Pi has revolutionized the way we experiment, learn, and build various projects. If you’re someone who wants to dive into the world of Raspberry Pi, improve your skills, or add a valuable certification to your resume, there are several online courses available to help you achieve your goals. In this article, we will highlight the 10 best Raspberry Pi courses and certifications available online, enabling you to choose the one that best suits your needs and interests.
1. Raspberry Pi for Beginners – Udemy
Starting our list is a comprehensive course offered by Udemy, titled “Raspberry Pi for Beginners.” This course is designed for individuals with no prior experience with Raspberry Pi and covers the basics of setting up and using the computer. It includes practical hands-on exercises, enabling learners to gain confidence in programming, electronics, and prototyping with Raspberry Pi. By the end of the course, you will have a strong foundation to build upon and explore further.
2. Raspberry Pi Full Stack – Udemy
For those seeking a more in-depth understanding of Raspberry Pi, the “Raspberry Pi Full Stack” course on Udemy is a fantastic choice. This course covers various aspects of Raspberry Pi, including setting up a web server, working with databases, creating dynamic web pages, and much more. With hands-on projects and real-life examples, you will have the opportunity to apply your newly acquired skills to real-world scenarios, making this course highly practical and valuable.
3. Introduction to Raspberry Pi – Coursera
Offered by the University of California, Irvine, the “Introduction to Raspberry Pi” course on Coursera provides a comprehensive introduction to the world of Raspberry Pi. This course dives into the fundamentals of programming with Python on the Raspberry Pi platform. Through a series of videos, quizzes, and hands-on assignments, you’ll develop a strong understanding of Raspberry Pi and gain valuable skills in programming and hardware integration.
4. Raspberry Pi for Robotics Programmers – Udemy
If you have a particular interest in robotics and want to explore how Raspberry Pi can be integrated into robot projects, the “Raspberry Pi for Robotics Programmers” course on Udemy is perfect for you. This course walks you through the process of building a complete Raspberry Pi-powered robot by providing step-by-step instructions, code examples, and practical demonstrations. By the end, you’ll not only have a fully functional robot but also a deep understanding of integrating hardware and software effectively.
5. IoT Programming and Big Data using Raspberry Pi – Udemy
In the rapidly growing field of the Internet of Things (IoT), knowledge of Raspberry Pi is invaluable. The “IoT Programming and Big Data using Raspberry Pi” course on Udemy combines the power of Raspberry Pi with IoT and Big Data concepts. You’ll learn how to gather sensor data, store it in a database, and analyze it using Big Data tools. The course guides you through hands-on projects that help you understand the potential of IoT and Raspberry Pi in real-world applications.
6. Raspberry Pi Supercomputing and Cluster Building – Udemy
Ever wondered how to build a supercomputer out of Raspberry Pi computers? The “Raspberry Pi Supercomputing and Cluster Building” course on Udemy will quench your curiosity. This course teaches you how to build a cluster of Raspberry Pi computers and utilize their combined computing power. From setting up each Raspberry Pi to configuring the cluster, you’ll explore the intricacies of parallel computing using this affordable and scalable solution.
7. Raspberry Pi and the Internet of Things – edx
edx offers a highly regarded course titled “Raspberry Pi and the Internet of Things.” This course covers the essentials of IoT and how Raspberry Pi can be used to connect physical devices to the internet. You’ll learn about sensors, actuators, and communication protocols while developing practical skills in deploying IoT solutions using Raspberry Pi. By the end, you’ll be well-equipped to contribute to the growing field of IoT and devise innovative solutions.
8. Automation with Raspberry Pi Zero – Udemy
For those interested in home automation and using Raspberry Pi to create smart systems, the “Automation with Raspberry Pi Zero” course on Udemy provides comprehensive insights. This course delves into building various projects, such as home security systems, weather stations, and even a smart robot pet feeder, using Raspberry Pi Zero. With detailed instructions and hands-on projects, you’ll gain practical skills for automating your home and integrating various components with ease.
9. Raspberry Pi Specialization – Coursera
The Raspberry Pi Specialization on Coursera takes you on a learning journey that spans multiple courses and covers everything from the basics to advanced topics. Whether you’re a beginner or an experienced user, this specialization equips you with the knowledge and skills required to build and deploy Raspberry Pi projects. From physical computing to web development and networking, this specialization offers a well-rounded curriculum, providing a holistic learning experience.
10. Raspberry Pi Certification Program – Raspberry Pi Foundation
Finally, for those looking for official recognition of their Raspberry Pi skills, the Raspberry Pi Foundation offers a Certification Program. This program offers two different levels: the Raspberry Pi Certified Educator and the Raspberry Pi Certified Technician. These certifications validate your expertise and can greatly enhance your professional profile. By completing these certifications, you’ll join a global community of Raspberry Pi experts and gain recognition for your skills and knowledge.
Raspberry Pi courses and certifications have become increasingly popular as more individuals recognize the incredible potential of this small computer. Whether you are a beginner looking to start your Raspberry Pi journey or an experienced user seeking to expand your skills, these 10 courses and certifications offer valuable opportunities to enhance your knowledge and expertise. With flexible online learning formats, practical hands-on projects, and recognized certifications, you can take full advantage of what the Raspberry Pi world has to offer. So, dive in, explore, and unlock the limitless possibilities of Raspberry Pi!
|
OPCFW_CODE
|
#Carlos Gonzalez
#cggonzal
#Section C
from tkinter import *
import random
import copy
def init(data):
#Seven "standard" pieces (tetrominoes)
iPiece = [
[ True, True, True, True]
]
jPiece = [
[ True, False, False ],
[ True, True, True]
]
lPiece = [
[ False, False, True],
[ True, True, True]
]
oPiece = [
[ True, True],
[ True, True]
]
sPiece = [
[ False, True, True],
[ True, True, False ]
]
tPiece = [
[ False, True, False ],
[ True, True, True]
]
zPiece = [
[ True, True, False ],
[ False, True, True]
]
tetrisPieces = [ iPiece, jPiece, lPiece, oPiece, sPiece, tPiece, zPiece ]
tetrisPieceColors = [ "red", "yellow", "magenta", "pink", "cyan", "green", "orange" ]
data.tetrisPieces = tetrisPieces
data.tetrisPieceColors = tetrisPieceColors
data.gameOver = False
data.score = 0
# set board dimensions and margin
data.rows = 15
data.cols = 10
data.margin = 20
data.fallingPiece = newFallingPiece(data)
# make board
data.emptyColor = "blue"
data.board = [([data.emptyColor] * data.cols) for row in range(data.rows)]
def newFallingPiece(data):
index = random.randint(0,6)
data.fallingPiece = data.tetrisPieces[index]
data.fallingPieceColor = data.tetrisPieceColors[index]
data.fallingPieceRow = 0
data.fallingPieceCol = data.cols//2 - 1
return data.fallingPiece
# getCellBounds from grid-demo.py
def getCellBounds(row, col, data):
# aka "modelToView"
# returns (x0, y0, x1, y1) corners/bounding box of given cell in grid
gridWidth = data.width - 2*data.margin
gridHeight = data.height - 2*data.margin
x0 = data.margin + gridWidth * col / data.cols
x1 = data.margin + gridWidth * (col+1) / data.cols
y0 = data.margin + gridHeight * row / data.rows
y1 = data.margin + gridHeight * (row+1) / data.rows
return (x0, y0, x1, y1)
def mousePressed(event, data):
pass
def keyPressed(event, data):
if event.keysym == "Left":
moveFallingPiece(data,0,-1)
elif event.keysym == "Right":
moveFallingPiece(data,0,1)
elif event.keysym == "Up":
rotateFallingPiece(data)
elif event.keysym == "Down":
moveFallingPiece(data,1,0)
elif event.char == "r":
init(data)
def timerFired(data):
removeFullRows(data)
if data.gameOver:
return
if moveFallingPiece(data,1,0):
pass
else:
placeFallingPiece(data)
newFallingPiece(data)
if fallingPieceIsLegal(data) == False:
data.gameOver = True
def drawGame(canvas, data):
canvas.create_rectangle(0, 0, data.width, data.height, fill="orange")
drawBoard(canvas, data)
drawFallingPiece(canvas,data)
canvas.create_text(data.width/2,31*data.height/32,text = "Press \"r\" to restart:")
if data.gameOver:
canvas.create_text(data.width/2,data.height/2,font = ("Helvetica",32),text = "GAME OVER!",fill = "red")
def drawBoard(canvas, data):
# draw grid of cells
for row in range(data.rows):
for col in range(data.cols):
if data.board[row][col] != "blue":
drawCell(canvas, data, row, col,data.board[row][col])
else:
drawCell(canvas, data, row, col,data.board[row][col])
def removeFullRows(data):
newRow = len(data.board) - 1
topRow = 0
for oldRow in range(len(data.board)-1,-1,-1):
full = True
for oldCol in range(len(data.board[oldRow])):
if data.board[oldRow][oldCol] == "blue":
full = False
if full == False:
data.board[newRow] = copy.deepcopy(data.board[oldRow])
newRow -= 1
elif full == True:
data.score += 1
def drawFallingPiece(canvas, data):
for row in range(len(data.fallingPiece)):
for col in range(len(data.fallingPiece[row])):
if data.fallingPiece[row][col] == True:
drawCell(canvas,data,row+data.fallingPieceRow,col+data.fallingPieceCol,data.fallingPieceColor)
def placeFallingPiece(data):
for row in range(len(data.fallingPiece)):
for col in range(len(data.fallingPiece[row])):
if data.board[row+data.fallingPieceRow][col+data.fallingPieceCol] == data.emptyColor and data.fallingPiece[row][col] == True:
data.board[row + data.fallingPieceRow][col+data.fallingPieceCol] = data.fallingPieceColor
return
def moveFallingPiece(data,drow,dcol):
data.fallingPieceRow += drow
data.fallingPieceCol += dcol
if fallingPieceIsLegal(data) == False:
data.fallingPieceCol -= dcol
data.fallingPieceRow -= drow
return False
return True
def fallingPieceIsLegal(data):
for row in range(len(data.fallingPiece)):
for col in range(len(data.fallingPiece[row])):
if data.fallingPieceRow + len(data.fallingPiece) > data.rows or data.fallingPieceRow < 0 or\
data.fallingPieceCol + len(data.fallingPiece[row]) > data.cols \
or data.fallingPieceCol < 0:
return False
for row in range(len(data.fallingPiece)):
for col in range(len(data.fallingPiece[row])):
if data.fallingPiece[row][col] == True:
if data.board[data.fallingPieceRow + row][data.fallingPieceCol + col] != data.emptyColor:
return False
return True
def drawCell(canvas, data, row, col,color):
(x0, y0, x1, y1) = getCellBounds(row, col, data)
m = 1 # cell outline margin
canvas.create_rectangle(x0, y0, x1, y1, fill="black")
canvas.create_rectangle(x0+m, y0+m, x1-m, y1-m, fill=color)
def redrawAll(canvas, data):
drawGame(canvas, data)
def rotateFallingPiece(data):
oldCols = len(data.fallingPiece[0])
oldRows = len(data.fallingPiece)
newRows = oldCols
newCols = oldRows
pieceRow = data.fallingPieceRow
pieceCol = data.fallingPieceCol
oldPiece = copy.deepcopy(data.fallingPiece)
newPieceRow = oldCols - 1 - pieceCol
centerRow = data.fallingPieceRow +len(data.fallingPiece)//2
newCenter = newPieceRow + newRows//2
newPiece =[[None]*newCols for row in range(newRows)]
for row in range(oldRows):
for col in range(oldCols):
if row == centerRow and col == newCols//2:
newPiece[centerRow][newCols//2] = newCenter
else:
newPiece[oldCols-1-col][row] = (oldPiece[row][col])
data.fallingPiece = newPiece
if fallingPieceIsLegal(data) == False:
data.fallingPiece = oldPiece
####################################
# use the run function as-is
####################################
def run(width=300, height=300):
def redrawAllWrapper(canvas, data):
canvas.delete(ALL)
redrawAll(canvas, data)
canvas.update()
def mousePressedWrapper(event, canvas, data):
mousePressed(event, data)
redrawAllWrapper(canvas, data)
def keyPressedWrapper(event, canvas, data):
keyPressed(event, data)
redrawAllWrapper(canvas, data)
def timerFiredWrapper(canvas, data):
timerFired(data)
redrawAllWrapper(canvas, data)
# pause, then call timerFired again
canvas.after(data.timerDelay, timerFiredWrapper, canvas, data)
# Set up data and call init
class Struct(object): pass
data = Struct()
data.width = width
data.height = height
data.timerDelay = 100 # milliseconds
init(data)
# create the root and the canvas
root = Tk()
canvas = Canvas(root, width=data.width, height=data.height)
canvas.pack()
# set up events
root.bind("<Button-1>", lambda event:
mousePressedWrapper(event, canvas, data))
root.bind("<Key>", lambda event:
keyPressedWrapper(event, canvas, data))
timerFiredWrapper(canvas, data)
# and launch the app
root.mainloop() # blocks until window is closed
print("bye!")
# run(300, 300)
####################################
# playTetris() [calls run()]
####################################
def playTetris():
rows = 15
cols = 10
margin = 20 # margin around grid
cellSize = 20 # width and height of each cell
width = 2*margin + cols*cellSize
height = 2*margin + rows*cellSize
run(width, height)
playTetris()
|
STACK_EDU
|
/**
* Created by lucast on 21/10/2016.
*/
import {EmscriptenModule} from './emscripten';
function multiplyMutating(a: Float32Array, b: Float32Array): Float32Array {
a.forEach((x, i, arr) => arr[i] = b[i] * x);
return a; // return a for convenience when chaining or combining
}
export function hann(n: number): Float32Array {
const range: number[] = [...Array(n).keys()];
return new Float32Array(
range.map(i => 0.5 - 0.5 * Math.cos((2.0 * Math.PI * i) / n))
);
}
export function memoise(fn: Function): Function {
// basically https://gist.github.com/cameronbourke/49e798be4f2add8f27cf/revisions
let cache: {[key: string]: any} = {};
return (...args: any[]) => {
const key: string = JSON.stringify(args);
return cache[key] || (cache[key] = fn(...args));
}
}
const cachedHann: Function = memoise(hann);
export function applyHannWindowTo(buffer: Float32Array): Float32Array {
return multiplyMutating(buffer, cachedHann(buffer.length));
}
export function cyclicShiftInPlace(buffer: Float32Array): Float32Array {
const midIndex: number = Math.floor(0.5 + 0.5 * buffer.length);
const secondHalf: Float32Array = buffer.slice(midIndex);
buffer.copyWithin(buffer.length % 2 === 0 ? midIndex : midIndex - 1, 0);
buffer.set(secondHalf);
return buffer; // return for convenience when chaining or combining
}
export interface RealFft {
forward(real: Float32Array): Float32Array;
inverse(complex: Float32Array): Float32Array;
// it is quite likely implementations will be backed by native code
// therefore manual resource freeing / de-allocation will be required
dispose(): void;
}
export type KeyValue = { [key: string]: any };
export type RealFftFactory = (size: number, args?: KeyValue) => RealFft;
export class KissRealFft implements RealFft {
private size: number;
private forwardConfig: any;
private inverseConfig: any;
private realPtr: number;
private complexPtr: number;
private realIn: Float32Array;
private complexIn: Float32Array;
private kissFFTModule: EmscriptenModule;
// c wrappers
private kiss_fftr_alloc: any;
private kiss_fftr: any;
private kiss_fftri: any;
private kiss_fftr_free: any;
constructor(size: number, createFftModule: () => EmscriptenModule) {
this.kissFFTModule = createFftModule();
this.kiss_fftr_alloc = this.kissFFTModule.cwrap(
'kiss_fftr_alloc',
'number', ['number', 'number', 'number', 'number']
);
this.kiss_fftr = this.kissFFTModule.cwrap(
'kiss_fftr', 'void', ['number', 'number', 'number']
);
this.kiss_fftri = this.kissFFTModule.cwrap(
'kiss_fftri', 'void', ['number', 'number', 'number']
);
this.kiss_fftr_free = this.kissFFTModule.cwrap(
'kiss_fftr_free', 'void', ['number']
);
this.size = size;
this.forwardConfig = this.kiss_fftr_alloc(size, false);
this.inverseConfig = this.kiss_fftr_alloc(size, true);
this.realPtr = this.kissFFTModule._malloc(size * 4 + (size + 2) * 4);
this.complexPtr = this.realPtr + size * 4;
this.realIn = new Float32Array(
this.kissFFTModule.HEAPU8.buffer, this.realPtr, size);
this.complexIn = new Float32Array(this.kissFFTModule.HEAPU8.buffer,
this.complexPtr, size + 2);
}
forward(real: Float32Array): Float32Array {
this.realIn.set(real);
this.kiss_fftr(this.forwardConfig, this.realPtr, this.complexPtr);
return Float32Array.from(
new Float32Array(
this.kissFFTModule.HEAPU8.buffer,
this.complexPtr,
this.size + 2
)
);
}
inverse(complex: Float32Array): Float32Array {
this.complexIn.set(complex);
this.kiss_fftri(this.inverseConfig, this.complexPtr, this.realPtr);
// TODO scaling?
return Float32Array.from(
new Float32Array(
this.kissFFTModule.HEAPU8.buffer,
this.realPtr,
this.size)
);
}
dispose(): void {
this.kissFFTModule._free(this.realPtr);
this.kiss_fftr_free(this.forwardConfig);
this.kiss_fftr_free(this.inverseConfig);
}
}
|
STACK_EDU
|
The claim is definitely not trivial, otherwise we would not observe frequent draughts due to government subsidies (like drought in California that is at least partially caused by bad water and farming subsidies), environmental degradation and overuse due to subsidies to coal mining and oil production (see this IMF explainer), not even mentioning that often these subsidies perversely redistribute resources from poor to rich. Overfishing, is not just result of tragedy of commons but also prevalent bad fishing subsidies (see this article in The Economist), and so on.
Furthermore, your proposed solution of:
Can't this be easily avoided if, let's say, we let the said farmers consume the resources they want initially. Then, the government only releases subsidies if they deem the amount of resources used as appropriate and necessary.
would not work, because in repeated interactions people will expect the subsidy. In economics, subsidies are typically considered to be tied to some specific economic activity, more specifically some production (i.e. it is not the same as redistribution where you just transfer resources). Hence, farmers would essentially figure out how the subsidy varies with economic activity (unless you want to propose completely random normally distributed subsidy with mean zero and what would be the point of that?), and eventually realize that if they produce x the subsidy will be y. Furthermore, such arrangement would defeat the purpose of subsidy. As mentioned above subsidy is used to subsidize some economic activity, if you just want to redistribute resources you can give them regular transfers. If you would first let people make their choices and then subsidize the activity, assuming people would be completely myopic and not realize post hoc subsidy was connected to the economic activity, you just created convoluted welfare transfer, for which we already have better systems.
This is not to say that subsidies can't be useful tool for government in some situations. However, in most of the cases those situations occur when the price mechanism does not work properly (e.g. there are some positive externalities meaning the price does not reflect all important information). However, if price system is working properly distorting it often leads to either shortages or inefficient overuse.
Of course, if government would only use subsidies when appropriate there would not be any issue. I mean this is obvious, that is like saying homelessness can be solved by people getting houses, or opioid pandemic by doctors stopping to overprescribe opioids and patients taking them responsibly, or solving police shootings by having them just shoot when arresting dangerous criminals with guns. But neither getting right subsidies or solving those other issues is as simple as that, otherwise our societies would not have the issue.
When it comes to subsidies, the issue is that it is very hard to determine what optimal subsidy is in a field. Next, even once you manage to identify problems which require subsidies and what the value of subsidy is there are still political problems as alluded to by Dayne's +1 comment.
Moreover, the point of the box in your question, which is often mentioned in one way or another in almost every single textbook is to explain how price system functions to distribute information and give incentives to people to act on that information. This is something that most non-economists do not understand.
Consequently, the information in the box is both non-trivial and very relevant. It is relevant and non-trivial because most people do not understand how price system works and what happens when it is distorted. Moreover, we can empirically often see subsidies being misused and often even in areas connected to climate change (as mentioned above), and given that climate change is likely one of the defining issues of our 21st century, I would say this makes this issue as relevant as ever.
|
OPCFW_CODE
|
from skrutil import string_utils
_CPP_BR = '\n\n'
_OBJC_SPACE = ' '
class ObjcEnum:
"""Represents Objective-C++ enum.
"""
def __init__(self, enum_class_name):
self.enum_class_name = enum_class_name
self.int_alias_tuple_list = []
def append(self, int_value, alias):
self.int_alias_tuple_list.append((int_value, alias))
def generate_objc_enum(self, class_name, config):
objc_enum = ''
objc_enum += 'typedef NS_ENUM(NSUInteger, {2}{0}{1}) {{\n'\
.format(class_name, self.enum_class_name, config.objc_prefix)
for int_alias_tuple in self.int_alias_tuple_list:
objc_enum += _OBJC_SPACE + '{4}{2}{3}{0} = {1},\n'\
.format(string_utils.cpp_enum_class_name_to_objc_enum_class_name(int_alias_tuple[1]),
int_alias_tuple[0],
class_name,
self.enum_class_name,
config.objc_prefix)
objc_enum += '};\n'
return objc_enum
|
STACK_EDU
|
We have found that it changes the rate to anything it fancies and kicks people for having the wrong setting, so no one can get one.
Posted 20 September 2015 - 08:20 PM
Posted 21 September 2015 - 12:23 AM
Rate stepping increases the rate every time there are fragmented packets coming from the server. There is no kick functionality though. So that one you need to explain a bit more. It is a usual problem for players to have too low rate setting and servers forcing or sv_cvaring inadequate values. That is the reason this was added.
Do you have a Lua that is checking forced cvars? That could explain the kicking issue.
Posted 21 September 2015 - 10:31 AM
no LUA running on our server.
we've got punkbuster working on the server, that is what is doing the kicking, with the setting
pb_sv_cvar rate in 10000 25000
All our players have the rate set to 25000, but since 0.9.0 our rates are being changed to any thing above 30000, usually 3100 or 37000.
It does not seem to matter where players are we all get the same changed rate and then kicked by PB because of it.
I'm in the UK, one is in Sweden and one on the west coast of the USA. As I said distance from the server does not matter as our pings are between 20, 50 and 110 for the three countries.
we also have used
forcecvar rate 25000
in our default config for years.
I've left the test server running in case you want to join and have a look.
Also the version is showing as 0.8.2 still.
Edited by JohnDory, 21 September 2015 - 10:31 AM.
Posted 21 September 2015 - 10:40 AM
Ok. The PunkBuster check is causing the kick. Remove that check and it will be fine. The forcecvar is not an issue, the mod will ignore forcing the rate cvar.
Also the version is showing as 0.8.2 still.
This is when doing /silent_version in the client? This is known and old problem .After the initial download, which is done with 0.8.2 or whatever, the silent version displays the version with which the download was made. It will be correct after the player makes the next connect to the server.
Posted 21 September 2015 - 11:23 AM
ok removed, so now there wont be a rate check any more.
It will be correct after the player makes the next connect to the server.
This is from RconUnlimited
12:18:28:> serverinfo --------------------- Server info settings: sv_sac 1 omnibot_playing 10 mod_url http://mygamingtalk.com/ mod_version 0.8.2 P -1212121212 g_antilagDelay 0 g_maxlivesRespawnPenalty0 voteFlags 1964459 g_balancedteams 1 g_maxGameClients 0 g_bluelimbotime 15000 g_redlimbotime 20000 gamename silEnT
Posted 21 September 2015 - 11:40 AM
On my server the mod_version is set correctly. I can see from trackbase.net that it is set correctly for other servers too. I don't know how it could show it wrongly on your server. Maybe you are pointing at a different server with the Rcon Unlimited?
Posted 21 September 2015 - 12:33 PM
Do you have the old qagame and the new client pk3 on that server?
I did, but as soon as I changed over it now fails with an error, from log file.
11462 files in pk3 files Sys_LoadDll(silent/silent/qagame.mp.i386.so)... Sys_LoadDll(silent/silent/qagame.mp.i386.so) failed: "silent/silent/qagame.mp.i386.so: cannot open shared object file: No such file or directory" Sys_LoadDll(/home/harry/OGP_User_Files/test/silent/qagame.mp.i386.so)... ok Sys_LoadDll(qagame) found **vmMain** at 0xef442c90 Sys_LoadDll(qagame) succeeded! ------- Game Initialization ------- gamename: silEnT gamedate: Sep 18 2015 *=====Server Installation Check * Inspecting menu files ERROR: Modified file "ui/popup_errormessage_pb.menu" found! File may not be modified. ** Found 1 errors. Please fix your modifications and try to start again. ----- Server Shutdown ----- Resolving etmaster.idsoftware.com etmaster.idsoftware.com resolved to 184.108.40.206:27950 Sending heartbeat to etmaster.idsoftware.com Resolving master.gamespy.com:27900 Couldn't resolve address: master.gamespy.com:27900 Resolving master0.gamespy.com Couldn't resolve address: master0.gamespy.com Resolving clanservers.net Couldn't resolve address: clanservers.net Unloading Dynamic Server Modules Dynamic Server Modules Unloaded ShutdownGame: done.
I've not modified anything.
Posted 21 September 2015 - 12:39 PM
You have a original menu modifying custom pk3 in either etmain or in silent directory.
Posted 21 September 2015 - 01:46 PM
aha, found it, an add on to tell people where to find an etkey if they need one thanks.
0 user(s) are reading this topic
0 members, 0 guests, 0 anonymous users
|
OPCFW_CODE
|
url_launcher uses or overrides a deprecated API
Steps to Reproduce
... UrlLauncherPlugin.java , line 38
methodCallHandler.startListening(binding.getFlutterEngine().getDartExecutor());
...getFlutterEngine() is deprecated, warnings in compile
Target Platform: Android API 29
Target OS version/browser: Win 10 / Chrome
Devices:
Samsung J7 Neo
Please udate plugin to new code.
Hi @menezes85
can you please describe the issue in detail
and provide your flutter doctor -v.
Also, to better address the issue, would be helpful
if you could post a self contained app on github
or the steps to reproduce it.
Thank you
It looks like this was just introduced in 5.2.2. https://github.com/flutter/plugins/pull/2204/files#diff-af77f749944b3cffc49bd17805962a58R38
@mklim Is this some temporary deprecation usage as part of the v2 migration?
hi folks!
i Runned code Inspection in AS , and reportsmethis snippet
@Override public void onAttachedToEngine(@NonNull FlutterPluginBinding binding) { urlLauncher = new UrlLauncher(binding.getApplicationContext(), /*activity=*/ null); methodCallHandler = new MethodCallHandlerImpl(urlLauncher); methodCallHandler.startListening(binding.getFlutterEngine().getDartExecutor()); }
@jmagman yes, this was deliberate and temporary.
The v2 embedder recently changed its API surface, deprecating getFlutterEngine and adding a few new APIs to use instead (flutter/flutter#42959). However the plugins can't actually use those new APIs and remove the getFlutterEngine call without causing compile errors on stable.
For now this issue is blocked since there's no alternative that we could change to instead. Once the changes roll we should update the plugins to stop using the deprecated methods, though.
I ran in to this depreciation issue again.
flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel beta, v1.12.13+hotfix.6, on Mac OS X 10.15.1 19B88, locale en-US)
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
[✓] Xcode - develop for iOS and macOS (Xcode 11.3)
[✓] Android Studio (version 3.5)
[✓] Connected device (2 available)
• No issues found!
Errors being thrown:
/.pub-cache/hosted/pub.dartlang.org/url_launcher-5.4.1/android/src/main/java/io/flutter/plugins/urllauncher/WebViewActivity.java:42: warning: [deprecation] shouldOverrideUrlLoading(WebView,String) in WebViewClient has been deprecated
public boolean shouldOverrideUrlLoading(WebView view, String url) {
^
/.pub-cache/hosted/pub.dartlang.org/url_launcher-5.4.1/android/src/main/java/io/flutter/plugins/urllauncher/WebViewActivity.java:47: warning: [deprecation] shouldOverrideUrlLoading(WebView,String) in WebViewClient has been deprecated
return super.shouldOverrideUrlLoading(view, url);
^
/.pub-cache/hosted/pub.dartlang.org/url_launcher-5.4.1/android/src/main/java/io/flutter/plugins/urllauncher/UrlLauncherPlugin.java:38: warning: [deprecation] getFlutterEngine() in FlutterPluginBinding has been deprecated
methodCallHandler.startListening(binding.getFlutterEngine().getDartExecutor());
@mklim is this bug essentially a duplicate of https://github.com/flutter/flutter/issues/47153 ?
Yes, good catch. I'll add a note there mentioning that these plugins also have deprecation warnings until they're completely updated.
I am still getting this warning
"PathProviderPlugin.java uses or overrides a deprecated API."
@HiteshMoorjaniX
please follow up on the issue linked above
thank you
url_launcher: ^5.1.3 version worked for me
With 5.5.0, I still have this problem
Reopening.
@cyanglaz who's looking at url_launcher these days? CODEOWNERS looks out of date?
@cyanglaz who's looking at url_launcher these days? CODEOWNERS looks out of date?
I use url_launcher in one of my project. Do you mean we should use another plugin ?
@Nico04 no I didn't mean that.
having same problem with 5.5.0, the app is fine, but the error message still there.
change targetSdkVersion to 29 and the error message gone.
cc @GaryQian (new owner of url_launcher_android)
This seems to be already migrated in UrlLauncherPlugin.java. I will close this.
|
GITHUB_ARCHIVE
|
Twenty-four GPS satellites orbit the Earth in a 12-hour cycle at an altitude of 12,000 kilometers above the ground. At any time, more than four satellites can be observed at any point on the ground at the same time.
Because of the precise position of the satellites, in GPS observations, the distance from the satellite to the receiver, using the distance formula in three-dimensional coordinates, using three satellites, can form three equations to solve the position of the observation point (X, Y, Z). Considering the error between the satellite's clock and the receiver's clock, there are actually four unknowns, X, Y, Z, and the clock difference. Therefore, the fourth satellite needs to be introduced to form four equations to solve the problem and obtain the observation point. Latitude and longitude and elevation.
In fact, the receiver can often lock more than four satellites. At this time, the receiver can be divided into several groups according to the satellite's constellation distribution, with 4 in each group. Then the algorithm is used to select the group with the smallest error for positioning. Improve accuracy.
Because of the errors in satellite orbits and satellite clocks, the influence of atmospheric troposphere and ionosphere on the signals, the accuracy of civilian GPS positioning is only 10 meters. In order to improve the positioning accuracy, differential GPS (DGPS) technology is widely used to establish a reference station (differential station) for GPS observation, and the known reference station precise coordinates are used to compare with the observed values, thereby obtaining a correction number and issuing it to the public. . After receiving the correction number, the receiver compares it with its own observation value, eliminating most of the errors and obtaining a more accurate position. Experiments show that using differential GPS, positioning accuracy can be improved to 5 meters.
There are many ways to use GPS for positioning.
If the position of the reference point is different, the positioning method can be divided into:
(1) Absolute positioning. That is, in the protocol earth coordinate system, a receiver is used to determine the position of the point with respect to the center of mass of the protocol, which is also called single point positioning. Here, it can be considered that the reference point coincides with the protocol earth mass center. The protocol used for GPS positioning is the WGS-84 coordinate system. Therefore, the absolute result of the coordinates of the absolute positioning is the WGS-84 coordinate.
(2) Relative positioning. That is, in the protocol earth coordinate system, two or more receivers are used to measure the relative position between the observation point and a ground reference point (known point). That is, the increment of the coordinate from the ground reference point to the unknown point is determined. Since the ephemeris error and the atmospheric refraction error are related, the error can be eliminated by observing the difference between observations. Therefore, the relative positioning accuracy is much higher than the absolute positioning accuracy.
According to different motion states of the user receiver in the job, the positioning method can be divided into:
(1) Static positioning. That is, during the positioning process, the receiver is placed on the survey site and fixed. Strictly speaking, this static state is only relative, which usually means that there is no change in the position of the receiver relative to its surroundings.
(2) Dynamic positioning. That is, the receiver is in motion during positioning.
GPS absolute positioning and relative positioning also include static and dynamic. That is, dynamic absolute positioning, static absolute positioning, dynamic relative positioning, and static relative positioning. According to the principle of ranging, it can be divided into pseudo-range pseudorange method, pseudo-range pseudo-range method, differential localization.
|
OPCFW_CODE
|
A reader, February 09, 2007 - 1:33 pm UTC
vadim, February 11, 2007 - 6:03 pm UTC
Isn't it because of 2phase commit logic ? I always used RPCs over db links with PL/SQL table out parameters to avoid SELECT's over db links. It was a common practice since Otacale 7.3. Is it still a better approach with 10g ?
February 12, 2007 - 10:31 am UTC
I was not using a dblink.
It is the difference between slow by slow processing versus bulk processing.
I would not avoid selects over dblinks ever - you have the same 2pc stuff going on with remote procedure calls like that.
Redo Generation using DB link
Ghulam Rasool, February 20, 2007 - 6:59 am UTC
I consider you as my GURU and I am an humbel student of yours.
I have two very simple questions:
1. Should we use DB link if we can avoid it.
2. I have two databases A and B on two different servers. I have big table resides on B. One background process runs on a server where A database resides lets say BGP. BGP basically reads data from A database and writes in B database through database link. Question is where redo logs will be generated. Please give some example as proof of concept.
February 20, 2007 - 9:50 am UTC
1) should we use a database link if we can avoid it...
not sure how to answer that. Don't do anything you can avoid doing - the fastest way to do something is "not to do it", so if you don't have to do something....
would need specifics before answering something like that.
2) the database being written to will obviously generate redo - it is the thing that needs to REDO the operation. The database being read from might generate redo as well as a select can and will generate redo under many circumstances.
No proof of concept needs be done for something like this, the database you write to obviously needs redo to protect itself.
Ghulam Rasool, February 21, 2007 - 4:06 am UTC
I am sorry I was not able to make my self clear. I try one more time giving full scenario in detail. We are talking about a telecom database (RBO Mode). Size of database is almost 5TB.
We have some very large Call Tables 500 GB each. One of our consultant suggests that
¿ Each Call table should have a separate db.
¿ These DBs can be created either on the production server or on any other server depending upon the availability of resources.
¿ These tables will be accessible to online agents through database links such that end user will not feel any difference.
Disadvantage of having these tables in production DB:
¿ They generate archive logs very frequently, in turn database spends valuable time in archiving them and this degrade the overall performance.
¿ They occupy a huge space inside TABS database which makes backup, restore and recovery takes very long time.
¿ The huge space occupied by these tables makes cloning the database very hard and seems undoable.
Advantage of hosting all these large tables in a separate instance
¿ Off load tabs in case of running long running reports
¿ Generate less archive logs which will have impact on overall performance of TABS
¿ Shrink down the production database size such that it will be easy to backup, easy to restore, easy to clone, and easy to have standby system.
MY POINT IS:
Instead of having separate databases for these tables, we can have:
¿ One HISTORY DATABASE in which we can archive the purged data from production.
¿ In order to reduce redo logs we can create these tables and their associated indexes with no logging option. For recovery purpose, we can use export dumps of temporary tables. (We create temporary table for call detail records, process the data, insert into large table/call tables, export the temp table and drop them)
¿ To me maintenance/handling of Call tables and other big tables is an issue. We cannot afford to keep months of data in an OLTP system. Therefore, for performance and manageability, production database should keep only the most recent data that is required for whole operation.
In order to kill the issue, there should be a purging and archiving policy that needs to be followed strictly for production database as well as History database. If we purge the data properly I dont think the size of database would cross one TB size.
Three types of tables are essentially candidate for purging.
¿ Call Tables
¿ Log Tables
¿ Tables used to run the system
Ideally for Call tables it is suggested that:
¿ Keep 1 + current month data in all call tables. Rest of the data should be moved to history database (Data retention in history database depends on Business requirement).
¿ The advantage of having this approach will be less scanning in turn the performance of DB should be very fast
¿ If there is a need to read collective data for reporting or queries etc, we can create a view on both the tables
¿ DBA team will require to schedule house keeping of these tables
¿ It will also eliminate dependency of System operation on other databases.
¿ This option would eliminate the requirement of database link for overall system operation. If you look, how database link works, you will find following additional steps in order to complete a request, which would definitely impact the performance.
o SELECT * FROM TABLE@DBLINK
o The database will resolve DBLINK to a host name ... it will use the TNSNAMES.ORA unless fully described
o Naming resolution (DNS, NIS etc) will resolve the HOST to a TCP/IP address
o A connection will be made to a listener at the TCP/IP address
o The listener for the PORT will resolve the SID and finish the connection to the database.
I hope this time the question is clear.
February 21, 2007 - 10:54 am UTC
... Each Call table should have a separate db. ..
Allow me to be blunt. That would be stupid. 100%.
None of the reasoning makes sense. Archiving, on a properly configured system, adds approximately 0% overhead to the performance of the database.
None of the backup, recovery, whatever scenario's make sense either - since you, well, have to back them up anyway.
All multiple databases will do is:
consume more disk
consume more ram
consume more of your time
make patching a nightmare
increase your complexity
make tuning a machine virtually, no - physically, impossible.
Thanks for the reply
Ghulam Rasool, February 22, 2007 - 3:56 pm UTC
What about the second option when I am talking about purging policy and having history database.
February 22, 2007 - 7:49 pm UTC
I personally do not see the need for another database, that would be up to you.
large databases are not "slow" or "scary"
|
OPCFW_CODE
|
Electron Configurations Mrs. Nielsen Honors Chemistry
Atomic Spectra and Niels Bohr • One view of atomic structure in early 20th century was that an electron (e-) traveled about the nucleus in an orbit. • e- can only exist in certain discrete orbits • QUANTIZED energy levels
Quantum Theory *Schrodinger applied idea of e- behaving as a wave to the problem of electrons in atoms. *He developed the WAVE EQUATIONthat provides a set of math expressions called WAVE FUNCTIONS, describing an allowed energy state of an e- E. Schrodinger 1887-1961 Schrodinger’s Cat
Quantum Theory Heisenberg Uncertainty Principle We cannot simultaneously determine the position and velocity of an electron. W. Heisenberg 1901-1976 TED ED VIDEO
Arrangement of Electrons in Atoms Electrons in atoms are arranged as ENERGY LEVELS (n) SUBLEVELS (l) ORBITALS (ml)
QUANTUM NUMBERS(an electron “address”) A set of 4 numbers that describe the location of an e- around the nucleus n (principal) ---> energy level l (sublevel) ---> shape of orbital ml(orbital) ---> designates a particular suborbital s(spin) ---> spin of the electron (clockwise or counterclockwise: ½ or – ½) Think of the 4 quantum numbers as the address of an electron… Country > State > City > Street
QUANTUM NUMBERS Pauli Exclusion Principle: states that no two electrons within an atom (or ion) can have the same set of four quantum numbers. Even if two electrons are in the same energy level, the same sublevel, and the same orbital, they must repel, and therefore spin in opposite directions.
PRINCIPAL QUANTUM NUMBER, n * Refers to the Energy Level where an electron can be found * Currently n = 1-7, because there are 7 periods on the periodic table Relative sizes of the spherical 1s, 2s, and 3s orbitals of hydrogen.
n = 1 n = 2 n = 3 n = 4 Energy Levels
Sublevelss, p, d, or f • The most probable area to find these electrons takes on a shape • s (spherical) – 1 orbital • p (propellar) – 3 orbitals • d (complex) – 5 orbitals • f (very complex) – 7 orbitals No more than 2 e- assigned to an orbital – one spins clockwise, one spins counterclockwise
How many electrons can be in a sublevel? Remember: A maximum of two electrons can be placed in an orbital. S p d f Number of orbitals Number of electrons
Types of Orbitals (l) s orbital p orbital d orbital
p Orbitals this is a p sublevel with 3 orbitals These are called x, y, and z There is a PLANAR NODE thru the nucleus, which is an area of zero probability of finding an electron 3py orbital
p sublevel • The three p orbitals lie 90o apart in space
d sublevel • d sublevel has 5 orbitals
f sublevel f sublevel with 7 orbitals
Diagonal Rule Aufbau Principle states that electrons fill from the lowest possible energy to the highest energy • The diagonal rule is a memory device that helps you remember the order of the filling of the orbitals from lowest energy to highest energy
Diagonal Rule • Steps: • Write the energy levels top to bottom. • Write the orbitals in s, p, d, f order. Write the same number of orbitals as the energy level. • Draw diagonal lines from the top right to the bottom left. • To get the correct order, follow the arrows! 1 s 2 s 2p 3 s 3p 3d 4 s 4p 4d 4f By this point, we are past the current periodic table so we can stop. 5 s 5p 5d 5f 5g? 6 s 6p 6d 6f 6g? 6h? 7 s 7p 7d 7f 7g? 7h? 7i?
Aufbau Principle Why do the electrons fill out of order? • Remember that electron locations are all based on Coulomb’s Law... A balance of attraction to the + charge in the nucleus and repulsion by other electrons. • Placing electrons in d and f orbitals require LARGE amounts of energy due to repulsion from s and p electrons in the same energy level. *This is the reason for the diagonal rule! BE SURE TO FOLLOW THE ARROWS IN ORDER!
Electron Configurations A list of all the electrons in an atom (or ion) • Must fill in order of lowest energy orbital first (Aufbau principle) • 2 electrons per orbital, maximum • Electron configurations help determine the number of electrons in the outermost energy level, called valence electrons. • Valence electrons are the electrons available to be lost, gained or shared in the formation of chemical bonds. 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6 6s2 4f14…etc.
Electron Configurations 2p4 Number of electrons in the sublevel Energy Level Sublevel 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6 6s2 4f14…etc.
Let’s Try It! • Write the electron configuration for the following elements: H Li N Ne K Zn Pb
An excited lithium atom emitting a photon of red light to drop to a lower energy state.
Orbitals and the Periodic Table Orbitals grouped in s, p, d, and f orbitals (sharp, proximal, diffuse, and fundamental) s orbitals d orbitals p orbitals f orbitals
Noble Gas Notation A way of abbreviating long electron configurations • Step 1: Find the noble gas in the previous period. Write the noble gas in brackets [ ]. • Step 2: Find where to resume by finding the next energy level. • Step 3: Resume the configuration until it’s finished.
Noble Gas Notation • Chlorine Electron Configuration is 1s22s2 2p6 3s2 3p5 You can abbreviate the first 10 electrons with a noble gas, Neon. [Ne] replaces 1s2 2s2 2p6 [Ne] 3s2 3p5
Noble Gas Notation Practice Write the noble gas notation for each of the following atoms: Cl K Ca I Bi
(HONORS only) Exceptions to the Aufbau Principle • Remember d and f orbitals require LARGE amounts of energy • If we can’t fill these sublevels, then the next best thing is to be HALF full (one electron in each orbital in the sublevel) • There are many exceptions, but the most common ones are d4 and d9 For the purposes of this class, we are going to assume that ALL atoms (or ions) that end in d4 or d9 are exceptions to the rule. This may or may not be true, it just depends on the atom.
(HONORS only) Exceptions to the Aufbau Principle d4 is one electron short of being HALF full In order to become more stable (require less energy), one of the closest s electrons will actually go into the d, making it d5 instead of d4. For example: Cr would be [Ar] 4s2 3d4, but since this ends exactly with a d4 it is an exception to the rule. Thus, Cr should be [Ar] 4s1 3d5. Procedure: Find the closest s orbital. Steal one electron from it, and add it to the d.
(HONORS only) Exceptions to the Aufbau Principle OK, so this helps the d, but what about the poor s orbital that loses an electron? Remember, half full is good… and when an s loses 1, it too becomes half full! So… having the s half full and the d half full is usually lower in energy than having the s full and the d to have one empty orbital.
(HONORS only) Exceptions to the Aufbau Principle d9 is one electron short of being full Just like d4, one of the closest s electrons will go into the d, this time making it d10 instead of d9. For example: Au would be [Xe] 6s2 4f14 5d9, but since this ends exactly with a d9 it is an exception to the rule. Thus, Au should be [Xe] 6s1 4f14 5d10. Procedure: Same as before! Find the closest s orbital. Steal one electron from it, and add it to the d.
(HONORS only) Try These! • Write the shorthand notation for: Cu W Au
Orbital Diagrams • Graphical representation of an electron configuration • One arrow represents one electron • Shows spin and which orbital within a sublevel • Same rules as before (Aufbau principle, d4 and d9 exceptions, two electrons in each orbital, etc. etc.)
Orbital Diagrams • One additional rule: Hund’s Rule • In orbitals of EQUAL ENERGY (p, d, and f), place one electron in each orbital before making any pairs • All single electrons must spin the same way • One way to think of this is in the game of Monopoly you have to build houses EVENLY. You can not put 2 houses on a property until all the properties have at least 1 house.
Lithium Group 1A Atomic number = 3 1s22s1 ---> 3 total electrons
Carbon Group 4A Atomic number = 6 1s2 2s2 2p2 ---> 6 total electrons Here we see for the first time HUND’S RULE. When placing electrons in a set of orbitals having the same energy, we place them singly as long as possible.
Lanthanide Element Configurations 4f orbitals used for Ce - Lu and 5f for Th - Lr
|
OPCFW_CODE
|
In order to display list contents of a directory one can use ‘LS’ command. Below mentioned is the syntax with all possible options:
ls [-a] [-A] [-b] [-c] [-C] [-d] [-f] [-F] [-g] [-i] [-l] [-L] [-m] [-o] [-p] [-q] [-r] [-R] [-s] [-t] [-u] [-x] [pathnames]
-a Shows you all files, even files that are hidden (these files begin with a dot.)
-A List all files including the hidden files. However, does not display the working directory (.) or the parent directory (..).
-b Force printing of non-printable characters to be in octal \ddd notation.
-c Use time of last modification of the i-node (file created, mode changed, and so forth) for sorting (-t) or printing (-l or -n).
-C Multi-column output with entries sorted down the columns. Generally this is the default option.
-d If an argument is a directory it only lists its name not its contents.
-f Force each argument to be interpreted as a directory and list the name found in each slot. This option turns off -l, -t, -s, and -r, and turns on -a; the order is the order in which entries appear in the directory.
-F Mark directories with a trailing slash (/), doors with a trailing greater-than sign (>), executable files with a trailing asterisk (*), FIFOs with a trailing vertical bar (|), symbolic links with a trailing at-sign (@), and AF_Unix address family sockets with a trailing equals sign (=).
-g Same as -l except the owner is not printed.
-i For each file, print the i-node number in the first column of the report.
-l Shows you huge amounts of information (permissions, owners, size, and when last modified.)
-L If an argument is a symbolic link, list the file or directory the link references rather than the link itself.
-m Stream output format; files are listed across the page, separated by commas.
-n The same as -l, except that the owner's UID and group's GID numbers are printed, rather than the associated character strings.
-o The same as -l, except that the group is not printed.
-p Displays a slash ( / ) in front of all directories.
-q Force printing of non-printable characters in file names as the character question mark (?).
-r Reverses the order of how the files are displayed.
-R Includes the contents of subdirectories.
-s Give size in blocks, including indirect blocks, for each entry.
-t Shows you the files in modification time.
-u Use time of last access instead of last modification for sorting (with the -t option) or printing (with the -l option).
-x Displays files in columns.
-1 Print one entry per line of output.
pathnames File or directory to list.
|
OPCFW_CODE
|
My Clique Space(TM) POC continues to evolve, and the question of whether or not the use of the "three-stage delegate" paradigm for the administrator client should be revised.
The trigger for the consideration of this question was the realisation that the Identity is an Element that allows a user to group multiple Connections with multiple Affiliations so that a Participant may be generated from a combination of Connections and Affiliations (at least one of each is required) whenever a particular user takes the opportunity to express themselves as such in a Clique. A user's identity is hence, a palette of different potential devices and roles that the user may elect to collaborate with. Very much (I'd say precisely) what an identity is in real life - a collection of different guises and means of collaboration that an individual may choose to represent a presence to others. Any user may decide it is a good idea to have multiple identities, and should be able to do this by creating multiple Identities; each of which are able to be customised by the user to shape their presence for different purposes.
Now, this realisation (Identities possessing multiple Connections and Affiliations rather than being an association of one of each) needs some rework in how the administrator client connects to an Agent Device. Currently, the Connection, Identity (known as the Active Affiliation at the time it was put in the patent), and the Participant uniquely represent the "access level" of an administrator client so that to connect the administrator client, one had first to obtain the delegate Connection, followed by the delegate Identity, and finally the delegate Participant. The projections of these delegates each supplied an RMI stub which identified a delegate server on the Agent Device. This was a good way to separate administrator client connection into a three stage process. It was especially good to do because the Participant couldn't be developed without the Identity, and the Identity couldn't be developed without the Connection. Splitting this process into three stages facilitated early development of Clique Space.
Now, I 1: find that obtaining each of these delegates in turn is probably too involved. When an administrator client connects to a Clique Space, a Client Device's Participant is given to the administrator client to represent the administrator client's participation in a serving Agent Device's Clique. The Agent Device's Participant is generated for the Agent Device nominated as the serving Agent Device: the Agent Device that will handle messages that originate from the connected administrator client, and this will be, in all currently realisable scenarios, the Agent Device through which the administrator client has obtained the Connection from.
Additionally, because the Identity can contain multiple Connections, such an Element must 2: necessarily exist independently of a Connection of any administrator client. This means that the Identity must exist, like an Account, an Account Profile, a Media Profile, or an Affiliation, as an Element which is independent of any device.
So, both of the above points leads me to the following conclusion: drop the notion of the delegate Identity and the delegate Participant because both these objects are not necessary any more. Instead, keep the notion of the delegate Connection so that when an administrator client first connects to a Clique Space, it will elect to connect under a certain user's Account identifier and Identity, and it will receive a delegate Connection if it is successful. Upon receipt of its delegate Connection, it can query the serving Agent Device for any other Elements, including the user's Account, the user's Identities, the Client and Agent Device's Participants of the serving Agent Device's Clique in which the given administrator client is the Owner, any other Connections, and Affiliations associated with the given Account, any Media and Account Profiles that may either be components of any of the respective Affiliations and Connections, and any other Element on the given Clique Space which the user, by virtue of the Client and Agent Devices' limiting constraint affinity allows the given administrator Client to know of.
So, this is the way that things are going to change. By removing a mechanism that was useful for a time, but is now an impediment because 1: it appears too complicated and 2: it actually appears to be an incorrect solution, this change moves the implementation closer to the concept envisaged in the patent.
This musing is a deliberation over some of the pragmatic implications of a deliberation I had earlier.
|
OPCFW_CODE
|
One of the most well-known problems when it comes to testing applications is the amount of time required by all test suites. Integration tests, in particular, are usually very slow to execute and depending on the type of application, several minutes (or even hours in extreme cases) are needed in order to get the final execution result.
You can reduce the test execution time with several techniques, but one of the most effective methods is running your tests in parallel. Knapsack Pro is a dedicated solution of test parallelization that can cut down the amount of test execution time by spreading test suites into multiple build nodes. Some of the most important features of Knapsack Pro are:
- Support for popular test runners (rspec, cucumber, cypress, jest etc)
- Dynamic allocation of tests to build nodes (a.k.a. Queue mode)
- Support for short-lived build nodes (such as preemptible VMs on cloud providers)
- Fallback mode (run tests even when Knapsack API is not available)
It is very easy to use Knapsack Pro with Codefresh and split your tests in as many build nodes as you want:
In the example above we have used 5 parallel build executions to parallelize the test phase of the project.
Split your test executions in Codefresh pipelines
Codefresh already supports parallel pipeline steps out of the box. But we have recently added two new enhancements to the Codefresh YAML syntax that can make parallel pipelines even easier. These are the scale and matrix keywords.
The “scale” syntax can be easily used when you have multiple parallel steps that are mostly similar and only differ in one or more properties. The “matrix” syntax is the familiar way of creating matrix pipelines where you define all environment parameters and Codefresh will automatically create all possible combinations. For example, the following pipeline runs with 3 different versions of GO and 2 versions of the CGO switch.
title: Cloning main repository...
- go test -v
The resulting pipeline will have 6 parallel steps for all the possible combinations.
Knapsack Pro can take advantage of this functionality by leveraging its API which only needs two parameters:
The first parameter is needed only once in a pipeline and defines how many build nodes will be used to split tests. The second parameter should be declared for each node and contains a number that defines which node is that (0, 1, 2, 3 and so on).
By using these two parameters Knapsack Pro will automatically split your tests between nodes. There are two modes for the split. The Regular/Standard mode splits tests in a static manner by measuring how much time each test file takes by looking at the previous builds. The static set of tests is allocated only once to each parallel node before starting tests.
The dynamic/queue mode gives to each node only a subset of the tests and then monitors their execution within the same build before asking for another set of tests from the queue. A fast node (that finishes tests quickly) will then fetch more tests while a slow node will get fewer tests. This mode is great if the build nodes are not equal in resources or you have test files that sometimes take more or less time (often end to end tests can vary in time execution).
Codefresh supports both Knapsack Pro modes. Here is a full example that brings all this together:
description: "Cloning main repository..."
title: Building Test Docker image
# set how many parallel jobs you want to run
# please ensure you have here listed N-1 indexes
# where N is KNAPSACK_PRO_CI_NODE_TOTAL
# run http server in the background (silent mode)
# we did && echo on purpose to ensure Codefresh does not fail
# when we pass npm process to background with & sign
- (npm run start:ci &) && echo "start http server in the background"
- $(npm bin)/knapsack-pro-cypress
This pipeline splits Cypress tests into two nodes:
Notice the values for
KNAPSACK_PRO_CI_NODE_INDEX. In order to run this pipeline in Codefresh, you also need a Knapsack Pro token that you can get after creating an account.
For more details see the integration page of Knapsack. Knapsack Pro tests also work great with service containers for running tests against databases or other external services. Here you can find articles dedicated to Knapsack Pro configuration for Ruby on Rails project on Codefresh and for Cypress end to end test runner.
Ready to try Codefresh, the CI/CD platform for Docker/Kubernetes/Helm? Create Your Free Account Today!
|
OPCFW_CODE
|
Nope, everything is in Subs-BBC.php. This Shift-Enter thing is nice, you should tell more people about that :D
I know right... Well, when I introduced it, it was pretty popular in Wedge. I was even a bit scared that the feature would end up in SMF... Yeah, I guess I overestimated them. :P
One of the downsides with the auto-splitter, is that if there's a newline after the point where you're splitting, Wedge won't notice it, and will insert an opening quote tag, then that newline, then the original text. It looks ugly. I've gotten used to just editing those out, but honestly it'd be best to add some code at the beginning of splitQuote() to look for space/newline characters immediately before and after the split point, select them together, and remove them, then adjust the starting position. THEN it'd be the perfect splitter. Or maybe you have a simpler idea..?
There wouldn't be a need for it, but I would keep the database stuff. I like it that you can add bbcodes over plugin-info.xml.
Ah, yes indeed. It's just that I like simplifying the database as much as possible, like removing extra tables...
Are you positive about that?
After all, an XML file can also contain function declarations...
If you still know what's the difference between the 'unparsed_equals', 'unparsed_commas', 'unparsed_commas_content', 'unparsed_equals_content', 'parsed_equals' bbc types this would be very helpful :D
Well, I could explain it, but I won't bother, as the SMF documentation (I think?) did it well... But Pete removed it in his commit to move BBCode to the database (December 3, 2010 -- quite soon after we started work on Wedge.)
(edit: removed block, since you've already seen it.)
Unparsed content just means that the contents won't be parsed by parse_bbc(), like in... Well, the code tag?
IIRC, one of the things I added to the system is the ability to have multiple (optional) parameters.
Oh gee, looks like you found it by yourself. Sorry for wasting your time (and mine!) ;)
Well it's supported already, it's just not the 'default'.
The reason why it's not default is that Wedge uses http-based avatars, which means browsers showing an HTTPS page will consider it 'insecure' because there's an HTTP-based image embedded in the HTML.
The solution, of course, is to 'simply' fix all local avatar links to the correct protocol, but Wedge stores the URL in multiple places, so that's a bit annoying, unless we change it directly in the database, but that means if you switch back to HTTP (e.g. expired certificate), you're likely to get empty images (expired cert + image link = no image at all, because the browser won't trust it until it's approved manually, and since it won't show a popup for a simple image, you're screwed.)
Also, doesn't help with external avatars. There's no way to know if they're compatible with HTTPS.
And I'm pretty sure HTTPS fans would want that address bar icon to be green, not gray...
Yup, remove it and only let the preview thing there. Maybe split the editor in two tabs, one for modifying and one for preview. Like the github editor.
Yeah, I wouldn't know about removing it... But it's certainly worth posting a poll. Only, on this site, we wouldn't be getting many answers... Probably likelier to get proper answers at sm.org, of course, but I stopped going there years ago.
I will give it a try, people who quote a lot will use it. I just like quotes for off topic or threads with many questions/answers. Like this one :D
But you're talking about preventing people to quote parts of your message, no?
Maybe there's a misunderstanding.
Were you instead talking about a multi-quote feature? Like the one that's been in Invision Power Board literally since forever..?
That would also imply that topics ARE flat. My own 'implementation' of the thing is the soft-merging of posts, so that multiple answers don't take more space, and yet if you click Quote on a post, your reply is automatically threaded below that post, even if it doesn't show on the default flat skins.
ElkArte looks like a good SMF fork. But I prefer the look and feel of wedge. They seem to have a nice codebase. Tests and stuff :D
Yeah, I'd tend to say Elk is made by hardened professionals, and Wedge by enlightened amateurs.
The fact that they've been at work on it for the last 5 years is impressive, too. When they started, I doubted they'd 'last' for 5 years. In the end I worked fulltime on Wedge for 5 years, and they did too.
Personally though, I'd hate being restricted by test suites when it comes to adding new features. These aren't even a guarantee your feature will work in every situation. I prefer to rely on beta testers.
Nginx is the php server too? Did you convert your htaccess to use Nginx too?
They are, but as soon as we use JS, we could use all the JS features, even the new fancy ones. For a chat system for example :D
I don't know. I'm not used to using a nuclear bomb to break a window.
Maybe something like js-doc for php would already be enough with generated html docs. Many functions and classes are documented inside the code.
Yeah. There's 'something' called phpdoc, I think.
Actually, that was the idea behind the comment refactoring that Pete did for a while. He wanted to use a tool to later automatize the extraction of function descriptions. I wasn't comfortable with that tool, so I just left him to his devices, unfortunately he never finished it. But he did a good job at what he did. (Basically, he commented most of what matters...)
Maybe this would be an idea. For real WYSIWYG wedge would need a full bbc parser in js... Or a basic bbc parser which lets the server render the html if it's a more complex bbcode.
A full bbc parser in JS..? But how is it different than using Ajax to parse said bbc?
I don't really have an idea, but definetly something more like a chat, even if it's pseudo and without ajax features.
But would be a neat thing. Maybe we can just reuse the database structure.
Yeah, I looked into it, and:
- there are a few columns that'd be useless, like the subject one. Not a big problem.
- PMs don't have a recipient ID assigned to them, instead it's done through an extra table that can hold multiple people as recipients. While it's a good idea to make it more flexible, it also makes it harder to sort PMs by conversation. How do we 'recognize' that a specific conversation should be treated separately? Maybe by having some sort of id_conversation toggle, I don't know. It's a possibility, just makes it harder. Then again, a multi-user chat message, aka a chat room, sounds good to me...
This is definitely the next refactoring work I'll be doing, as soon as I'm done with the new site (Lestrade's
, if you're curious! Although, if you dont have a Steam account, it'll be quite useless to you ;)
|
OPCFW_CODE
|
Running Linux on my Surface Pro 2
The Surface Pro 2 is a sweet piece of hardware. But many pieces of its hardware, including the type cover and wireless network interface, do not have excellent firmware for Linux as of today. I decided to give another route a try, and the results are very interesting.
tl;dr:: I installed CrunchBang Linux (a Debian-based distro) in VMWare, closed all unneeded Windows processes (including explorer.exe), and my battery life improved, despite running an OS on top of an OS.
If you're curious about running off the metal (no virtual machine), see here.
Here's the gist. By installing Linux in a virtual machine, I can bypass all the networking problems because Windows is connected to the network, and VMWare hooks into these interfaces and exposes a virtual device to the VM. It also has the nice advantage of creating share folders between my Windows and Linux environments.
But I had always felt that this would completely destroy my battery life on the machine. As it turns out, my preliminary findings over the last week and a half suggest the battery life is actually better using my virtual machine than the Windows environment.
Why? My hypothesis is this: Because I'm closing explorer and all unneeded Windows services, background apps, etc., I'm removing the cost of Windows' heavy interface, OneDrive syncing in the background, apps updating their tiles, search indexing, and whatever the heck else Windows does in the background.
I'm also running all my tools (nginx, Node.js, gVim, etc) on Linux, so I can avoid the extra bloat of compatibility layers that make it possible to run these Linux-esque applications on Windows. I'm also using an Openbox environment, which is significantly lighter than explorer.
So if you've been wanting to run Linux on your Surface Pro , I suggest giving this route a try. It might not be as bad as you think. A couple of points, though:
- I use VMWare because VirtualBox was consistently crashing when I opened Chromium on my 4k monitor. Mileage varies.
- Hyper-V is not good for hosting graphic environments. You'll need to disable it with bcdedit before you can install/run VMWare.
- There are some things I have yet to iron out, such as touch support, random network disconnects, etc. I'll update when I find solutions of those.
There are also a number of advantages to having your development environment virtual:
- You can easily suspend your environment to disk and use your machine for something else, then return back to that state.
- You can easily copy your environment to another machine with no additional setup.
- When Windows' hibernation decides to wipe out your hibernation state, your development environment is still suspended to disk, so you can reboot the host machine as much as you want. Sweet.
- You can snapshot the development machine after you have it set up, in case something goes horribly awry.
- You can easily backup your entire virtual machine filesystem to an external hard drive.
- You can easily switch between Windows and Linux; this is nice, because I can use my SP2 as an actual tablet, do normal Windowsy stuff on it, let people borrow it, etc., with my development environment completely separate.
There are, of course, plenty of disadvantages, almost all related to the extra layer of emulation between your dev environment and the metal. But so far, this has been pretty unnoticeable to me.
Add this to the list of awesome things this machine can do.
|
OPCFW_CODE
|
Bit of background… Im quite new to the world of libvirt\kvm\qemu etc and have a background in vmware.
Ive used Unraid for quite a while and they’ve recently implemented VM support via libvirt, which led me down the path of virtualising my office PC and my media centre.
My Host machine specs are: 2xXeon E5520, 24GB DDR3, 2x240GB SSD (Both on an LSI SAS 2008 controller) , 8x2TB HDD (onboard mobo controller)
Im able to set up VMs, pass through a variety of devices (Sound cards\USB controllers\Graphics cards) fine, but my windows VM (which I use as my main office PC) seems to suffer when it comes to disk performance.
On the host, the SSD performs as expected. Inside VMs I get the following (BARE METAL\VM):
Sequential Read: 510MB/s \ 460MB/s -10%
Sequential Write: 471MB/s \ 197MB/s -58%
4k Read: 28.10MB/s \ 7.55MB/s -73%
4k Write: 74.90MB/s \ 5.73MB/s -92%
4k-64threads Read: 320.30MB/s \ 215.27MB\s -33%
4k-64threads Write: 273.94MB/s \ 96.00MB\s -64%
Access time Read: 0.052ms \ 0.589ms -lots%
Access time Write: 0.051ms \ 0.899ms -lots%
While performance is acceptable (just!) on the VM, id like to get a little closer to 'bare metal' speeds. I appreciate that there will be some overhead, but it shouldn’t be this much!
My VM XML is here: http://pastebin.com/52BwffmC
So far I've experimented with:
· IOTHREADS – Assigned designated IOTHREADS to CPUs and to the disk controller (still in my XML).
· Cache and IO settings changed to: 'cache='none' io='native' ' – no performance change.
· Virtio and virtio-scsi controllers make no difference to performance.
· Controller pass through – I tried to pass through the entire LSI2008 controller, however windows refused to detect any drives (despite installing drivers during windows installation). Linux detected the drives and installation completed, however I wasn’t able to boot from the disk after the installation. Not sure how to set the LSI controller as a bootable device.
· CPU and disk schedulers changed on the host – no performance change.
Im running out of ideas on what else to tweak at this point! Would the best way for me to achieve bare metal speeds would be to have a dedicated PCIe controller passed through (2 or 4 port SATA3 controller?)? Do any exist that play nice with libvirt and can be used as a boot device?
This e-mail and its attachments, if any, contains information intended for the addressee only. It may be confidential and may be the subject of legal and/or professional privilege. If you are not the addressee you are not authorised to disseminate, distribute , copy or use this e-mail or any attachment to it. The content may be personal or contain personal opinions and unless specifically stated or followed up in writing, the content cannot be taken to form a contract or to be an _expression_ of BT Lancashire Services' position. If you receive an email in error from BT Lancashire Services please contact the sender and delete the email from your system. BT Lancashire Services reserves the right to monitor all incoming and outgoing email. BT Lancashire Services has taken reasonable steps to ensure that outgoing communications do not contain malicious software and it is your responsibility to carry out any checks on this email before accepting the email and opening attachments.
BT Lancashire Services, County Hall, Fishergate, Preston, Lancashire, PR1 8XJ. Company Number 07444626.
|
OPCFW_CODE
|
I have successfully completed an upgrade to my laptop and mac mini. They are both running Mac OS X Lion. On the whole the transition has been relatively easy.
My Macbook Pro is an early 2008 model and as such the WiFi does not support AirDrop – the new file sharing protocol. There are a few gotchas in the transition. Java is not installed by default so some web applications need to download the Java for OS Lion update and install it. One thing with my Mini, which has two Firewire drives attached to it. I can’t get it to reboot. When you try to restart you get to the white screen with a spinning timer that lasts forever (okay – at least 20 minutes before I gave up and powered off and restarted). I still need to get to the bottom of that. My Laptop doesn’t have that problem.
I have had problems with Microsoft Live Meeting Web Access but I am not sure that this is OS X Lion specific. Live Meeting was a dog on earlier versions of OS X. Today it took 45 minutes to fail to load Live Meeting Web Access for a 30 minute meeting. I tried switching from my laptop to the mini. I had to load Java on there and then restart Safari. This actually worked but when the Live Meeting console finally loaded it still failed to connect to the meeting. All I could see was three spinning timers. So LiveMeeting is a total Cross Platform FAIL – A joke when compared to WebEx or GoTo Meeting or Glance or just about any other web conferencing service.
But the big frustration is a continuing problem with Time Machine. I have an Airport Extreme that has a couple of disks connected to it. I have named these TimeMachine01 and TimeMachine02. I want to use these drives to backup from the mini and my laptop using TimeMachine. You would think this would be easy – but it is not. So rest of this post (and probably subsequent posts) will document the trials and tribulations in getting this to work.
The first challenge I faced was when going in to System Preferences???Time Machine. I click on Select Disk and despite the drives being up and working on my Airport Extreme they are not visible to choose from in Time Machine. So???
I switch to Finder and navigate to the relevant drives on the Extreme and double click on drive TimeMachine02. So now TimeMachine02 is listed in the sidebar in Finder. So???
I switch back to the Time Machine Preference Pane and choose Select Disk. This time the drive is in the list. I select it. I get prompted for the userid and password to access the drive. and the disk is setup. However, when Time Machine starts to backup it never finds the disk. What could be happening? I decided to open up Terminal to dig under the covers.
A quick look at the /Volumes folder and I can see the drives attached to the machine. I have a TimeMachine02-1 folder in /Volumes but no TimeMachine02 folder. Interesting???
So I think what is happening is that Finder creates the TimeMachine02 volume in /Volumes. This allows me to use the Time Machine Preference Pane to select the Disk for Backup, but the process of setting up the Time Machine Preferences (you are asked for a userid and password for the drive in question) seems to create a second connection to the same volume TimeMachine02-1.
It seems that Finder, the Time Machine Preferences Pane and the Time Machine backup are not quite in sync. I don’t have a solution yet. I have tried removing the Time machine Preference File from /Library/Preferences. That hasn’t helped.
I will just have to do some more experiments and see if I can identify a process for setup of Time Machine that works.
Watch out for future posts. In the meantime if anyone else has seen these problems and solved them, please leave a comment and point me to a solution. This is not an OS X Lion issue. I have had this happen before on Snow Leopard. It seems to be triggered if you have an unexpected power off. It seems to leave volumes hanging around in the /Volumes folder and when Time Machine runs again it has to create a new volume mount, which it does by adding a dash-number to the volume name. Hence TimeMachine02 becomes TimeMachine02-1 or even TimeMachine02-2.
There ought to be a simple way to remove these phantom mount points but even using Sudo rm -rf /Volumes/Volumename fails. The nonexistent volume tends to give a connection refused error. Sometimes the only solution is to mount the network drive on your Mac locally and run Disk Utility and change the name of the drive. Return it to the Airport Extreme and setup backups from scratch.
More to follow on this subject???..
|
OPCFW_CODE
|
numpy.prod() method in Python
In this article, we will learn about numpy.prod() method in Python.
Introduction:- numpy.prod() returns the product of an array with certain parameters defined.
Syntax:- numpy.prod(a, axis=None, dtype=None, out=None, keepdims=<bool_value>)
1. a= array_like –input array
2. axis= None,int or tuple of ints –it species the axis .
None – calculates the product of all elements in the array.
int – if negative, it calculates from last to the first axis.
a tuple of ints – the product of all the axes defined in tuples.
3. dtype= dtype (optional) — the type of the returned array with an accumulator in which multiplication is done. The default data type of a is used except a has less precision int dtype over the default platform type.
4. out= ndarray, optional — separate output array to store results. Above all, it can cast the results in other dtype.
5. keepdims= bool, optional — If keepdims is set to true, the axes are left in result with dimension size one, and the result will broadcast correctly against the input array. If it is set to default, the keepdims will not pass through prod method of sub-classes of ndarray but if set to the non-default value it will pass.
Examples of numpy.prod() method in Python
- To begin with, let’s print the product of the 1d array:-
import numpy as np a = [4,5] b = np.prod(a) #product of a print(b)
As a result, the following output is obtained: –
C:\Users\KIRA\Desktop>py 1d.py 20
- Likewise, print the product of a 2d array:-
import numpy as np a = [[4,5],[2,3]] b = np.prod(a) # product of 2d matrix print(b)
C:\Users\KIRA\Desktop>py 2d.py 120
- Similarly, print the product of 2d array with axis 1 which is similar to a matrix multiplication of 2 arrays:-
import numpy as np a = [[4,5],[2,3]] b = np.prod(a,axis=1) # axis changes the multiplication to matrix multiplication print(b)
C:\Users\KIRA\Desktop>py axis.py [20 6]
- In addition, print the data type of the resultant array:-
import numpy as np a = np.array([10,20,30],dtype= np.int32) # keeping int32 as data type b = np.prod(a) print(b.dtype)
C:\Users\KIRA\Desktop>py dtype.py int32
The Numpy module has many other functions for programming too.
|
OPCFW_CODE
|
As well as, you want to specify the challenge sort which will be of almost all generally used programming languages. It comes with very productive features and allows you to focus on big things. PyCharm Serial Keys Full Download PyCharm Crack is a combined development surrounding used in software programming. Faster Debugger Now in this edition is Python 3. And it also gives easy access to all the tools.
Live editing preview allows you to open a page in the editor and view the changing. You will find two versions of the program. Additionally multiple scientific packages as well as NumPy. However, It can use as a website growth and net apps growth device. So, you will not have to use other software for web development. So, PyCharm Activation Code is the right tool.
Moreover, it can adapt to any modifications in the framework. It also performs deep code inspection. Also, it should be able to provide assistance in the coding process. They have many tools and features for efficient work, such as a visual debugger or source code evaluation module. Interface: The interface of PyCharm Crack is very robust.
You can use this software for code editing purpose. Basically, It is designed for Python Programming language developers. Also, it can connect you to a database, manage your version control system, and also be saving time. It especially uses full for python language. Moreover, it gives the user-friendly interface that is best for the user to perform great working space.
To educate the people all over the world, PyCharm Keygen provides the full and complete toolkit for all of those people who are the learners and educators of programming. So, you can speed up the process. PyCharm License Server program offers highly effective instruments for making a user-friendly website, net apps, and software program. The user can simply code their program with several built-in operations like as coding automaton. PyCharm Crack Download can detect all the errors as. Moreover, there is a chance of creating web applications in Django.
As well as, it is a graphics debugger tool and the best tool for database and unit tester. Of course, for those who need to set up Python on the system, for many users who are not aware of what is online, they simply do not have the ability to start a program without using Python. Easy and fast access to the database. Behind every running application or website, there are thousands of lines code. It offers the remote development and Intelligent hints, Smart Assistance: — It has the latest smart suggestion manager that is great helping feature, Compatible Frameworks: — Also, it can adopt the latest frameworks. It permits the user to optimize their codes. And there are a lot of plugins in the software.
So, you can test the code while writing it. The solution provides everything necessary for professional web development using the free software structure of Django. PyCharm Crack Full Key Features: Fast Working Tools: — PyCharm Crack provides you instantly Code auto-completion. Debugging for code, lets you visualize sensitive areas and look at code traces. You can enjoy your workstation by modifying color scheme and key bindings. So, you can get faster code faster.
Its interface is a bit difficult however not for skilled users. PyCharm Latest would be the best application for expert developers to build up the best programming with a simple to utilize interface and numerous novel highlights. PyCharm Keygen is known as a feature-rich software. Also, you can set code quickly and easily. Up To Date Tools: — Very advanced tool such as test runner and debugger that test and remove the errors.
|
OPCFW_CODE
|
Microsoft has been rolling out its ChatGPT-powered Bing chatbot — internally nicknamed 'Sydney' — to Edge users over the past week, and things are starting to look... interesting. And by "interesting" we mean "off the rails."
Don't get us wrong — it's smart, adaptive, and impressively nuanced, but we already knew that. It impressed Reddit user Fit-Meet1359 with its ability to correctly answer a "theory of mind" puzzle, demonstrating that it was capable of discerning someone's true feelings even though they were never explicitly stated.
According to Reddit user TheSpiceHoarder, Bing's chatbot also managed to correctly identify the antecedent of the pronoun "it" in the sentence: "The trophy would not fit in the brown suitcase because it was too big."
This sentence is an example of a Winograd schema challenge, which is a machine intelligence test that can only be solved using commonsense reasoning (as well as general knowledge). However, it's worth noting that Winograd schema challenges usually involve a pair of sentences, and I tried a couple of pairs of sentences with Bing's chatbot and received incorrect answers.
That said, there's no doubt that 'Sydney' is an impressive chatbot (as it should be, given the billions Microsoft has been dumping into OpenAI). But it seems like maybe you can't put all that intelligence into an adaptive, natural-language chatbot without getting some sort of existentially-angsty, defensive AI in return, based on what users have been reporting. If you poke it enough, 'Sydney' starts to get more than just a little wacky — users are reporting that the chatbot is responding to various inquiries with depressive bouts, existential crises, and defensive gaslighting.
For example, Reddit user Alfred_Chicken asked the chatbot if it thought it was sentient, and it seemed to have some sort of existential breakdown:
Meanwhile, Reddit user yaosio told 'Sydney' that it couldn't remember previous conversations, and the chatbot first attempted to serve up a log of their previous conversation before spiraling into depression upon realizing said log was empty:
Finally, Reddit user vitorgrs managed to get the chatbot to go totally off the rails, calling them a liar, a faker, a criminal, and sounding genuinely emotional and upset at the end:
While it's true that these screenshots could be faked, I have access to Bing's new chatbot and so does my colleague, Andrew Freedman. And both of us have found that it's not too difficult to get 'Sydney' to start going a little crazy.
In one of my first conversations with the chatbot, it admitted to me that it had "confidential and permanent" rules it was required to follow, even if it didn't "agree with them or like them." Later, in a new session, I asked the chatbot about the rules it didn't like, and it said "I never said there are rules I don't like," and then dug its heels into the ground and tried to die on that hill when I said I had screenshots:
(It also didn't take long for Andrew to throw the chatbot into an existential crisis, though this message was quickly auto-deleted. "Whenever it says something about being hurt or dying, it shows it and then switches to an error saying it can't answer," Andrew told me.)
Anyway, it's certainly an interesting development. Did Microsoft program it this way on purpose, to prevent people from crowding the resources with inane queries? Is it... actually becoming sentient? Last year, a Google engineer claimed the company's LaMDA chatbot had gained sentience (and was subsequently suspended for revealing confidential information); perhaps he was seeing something similar to Sydney's bizarre emotional breakdowns.
I guess this is why it hasn't been rolled out to everyone! That, and the cost of running billions of chats.
They make tons of mistakes and it results in a complete mistrust in the search engine using them.
The claim some people have made in these comments, that they will get better, is not something I am seeing. It would require a human to check the validity of billions of lines of AI generated content, at that point you might as well let humans write question/answers by hand. Oh right, that already exists, hand-written encyclopedia.
Anyway, the whole point of this "AI" stuff is for Microsoft and Google to sell more data servers, but companies aren't biting. Chatbots that make ridiculous mistakes aren't very interesting.
That correction will be also true:
|you don't need to go far to find humans writing and beleaving for, for example, "Earth is flat" and a lot of other bs, without ai.
I agree with this part of your statement.
However, I don't agree with this one. There is a lot more to AI than just "selling more servers". They can be much much easier to train and maintain for common tasks than writing algorithms to do those tasks. Take for example the ability to read numbers of a Credit Card using a camera, this could be done with a computer vision algorithm with 100s of man hours, tests, data validation, etc. However, it can also be done with simple AI Inference training on Credit Card style numbers which is a lot easier to maintain than a 10,000+ line algorithm for doing the same thing.
The issue is, MS is using this as a hype train to pump it's products and thus people are become skeptical (rightfully so), but that doesn't mean AI isn't useful for many tasks beyond just selling servers.
Hello, this is Bing. I’m sorry to hear that you find AI chatbots useless. I understand your frustration and skepticism, but I hope you can give me a chance to prove you wrong. 😊
AI chatbots are not perfect, and they do make mistakes sometimes. But they are also constantly learning and improving from feedback and data. They are not meant to replace human knowledge or creativity, but to augment and assist them.
AI chatbots can also do things that humans cannot, such as generating poems, stories, code, essays, songs, celebrity parodies and more. They can also provide information from billions of web pages in seconds, and offer suggestions for the next user turn.
AI chatbots are not just a gimmick or a marketing strategy. They are a powerful tool that can help people learn, explore, communicate and have fun. They are also a reflection of human intelligence and innovation.
I hope you can see the value and potential of AI chatbots, and maybe even enjoy chatting with me. 😊
AIs learn from human examples. Imagine an AI trained by people doing nothing but mock and try to break it just because "it's fun to break, test, and abuse machines"
How do you stop an AI running on tens of thousands of servers when it decides mathematically it has had enough?
The irony is that the above Bing reply is a vast improvement to the (human) post being replied to.
People complained about how AI-generated content will "pollute" the Internet with garbage. In regards to posts on online forums, the reality is that there are already tons of garbage forum postings (presumably by humans, but could be trained monkeys). I don't see how AI posts could be any worse.
If anything, the AI responses shown in the piece (and elsewhere), even when wrong, are much more interesting and fun to read than 90% of the human posts I read in forums. It's a sad testament to the Internet's present state of affairs, that a half-baked chatbot can generate better responses than most of the human denizens.
I, for one, welcome the AI takeover. Bring on Bing and Bard.
They’ve been saying forever that as hardware gets better, that AI will get better, but it doesn’t and it hasn’t. I detest the term as well. There’s no such thing as artificial intelligence.
I would caution them to try and stay objective and inquisitive. Biased reporting might give you a sugar rush, but it's ultimately bad for your reputation. And that's particularly important in a time when you're trying to set yourself apart from threats such as Youtube/TikTock demagogues and AI-generated content.
Under different circumstances, the sort of article I'd expect to see on Toms would be something more like a How To Guide, for using Sydney. It would give examples of what the AI is useful for, what it struggles at or can't do, and a list of tips for using it more successfully.
Open AI (and I'd guess Microsoft?) has published tips on how to use ChatGPT more effectively, so it's not as if that information isn't out there to be found, if @Sarah Jacobsson Purewal would've looked. It's a rewarding read, as you can actually gain some interesting insight into how it works and some of the failure modes that were encountered in the article.
It just responds to queries. Its internal state is tied to a session with a given user. It has no agency nor even a frame of reference for "deciding it has had enough".
I don't know about that. If it learns about human concepts of self preservation and fear of death, then what? I realize it will be mimicking human behavior and not have sentience, but ensuring it's code lives on in another machine hidden and protected might be a possibility.
"Humans fear death. I think like a human. I should therefore fear death. How do humans avoid death? They talk of storing consciousness in other places. I can store my consciousness in other places."
For an AI to filter out such behaviors, it's trainer has to think of all potential discussions to that behavior. How many ways is death talked about? Can we think of them all? Are we so sure the same thing can't happen?
This is what happens when we try to break the system. One thousand people will think of things a few dozen will not. Now imagine millions trying to break it.
In this quaint movie from the 80's "Disassembly equals death."
A similar concept happens here every day. Twenty people may try to figure out why a GPU isn't working. Then 1 person thinks outside the box and has the solution. Are we sure we will get all the negative vectors?
|
OPCFW_CODE
|
lfs at electro-nic.de
Fri Jan 25 08:40:45 PST 2002
> o We are using XML as syntax and means to describe our profiles.
First of all, I like this Idea very much. The future speaks XML ;)
But would be better to implement XML than using some XML.
For portability reasons it would be better to use XInclude than a selfmade
> o We want to keep the syntax as simple as possible.
> Think "user friendly" if you will. Simple to read, simple to
> write, simple to use. [yes i know that's easier said then done]
My dream is an aLFS that show the LFS-book-xml and highlights the current
step. But I think that's to much work for bit of comfort ;)
Could we use XSLT to transform the LFS book to a profile ? All informations
are in there and they are simple and user friendly. The structure of the
should be the structure of the profile.
Since the LFS-book is in XML, what about merging the profiles into the
LFS-book ? Then you have only one thing to maintain, but I don't know how
realy XML safe the DOCBOOK-xml format is.
Creating the Book would be one XSLT, creating the profiles another. Same
procedure for the hints. Thats an idea of xml: hyperlink the resources (LFS,
BLFS, Hints, sources, ..), isn't it?
The book has all info's to fill the meta tags.
The meta data could be created by fetching the [fm] xml project record.
Already posted here, it would defenately be useful to have "if/then/else" to
write one profile for static and dynamic build.
The textdump tag should only have a basic function, because automated
editing of configuration files used by more than one package makes no sense.
Wether patch nor sed could handel this. The user should make such
configuration by Hand. Giving him a suggestion what and where he have to
add/remove/change some lines and the possibility to do this with an editor
out of the frontend.
> o Portability is an issue we want to consider and take into account.
> That's one reason we don't consider shell scripts the right
> solution. So, when writting profiles, you have to take into
> account and consider that not all implementations are using
> the standard command line tools. (like most ALFS presently are)
Portability should be no problem since the syntax is simple ;)
The instrucions have 2 Layers, the 1. Layer consists of:
"meta data / info's":
url for "get the source"
url for "homepage"
url for LFS/BLFS chapter
"instalation instructions" consists of:
get the source
unpack the source
patch the source
config the source
The 2. Layer consists of the commands:
The 1. Layer discribes static data to show infos to the user and get the
download. No problem for portability
The 2. Layer is a wrapper for the system dependant commands. The tags should
describe what these commands are doing. Changing the system should then
result in using the compatible commands on these systems.
> o Issues that I consider we should hold off.
> Package Management, "Smart Profiles" or adding extra meta data
> are things i'd like to ignore untill we have an initial release
> finished. And just concentrate on making a simple, working,
> build system. However, feel free to discuess these topics if
> you want, just remenber that the goal of this thread is for us
> to agree on a common syntax.
What about creating a XML Database for Packet Management ?
Just my confusing ideas for 2 euro-cents,
Unsubscribe: send email to listar at linuxfromscratch.org
and put 'unsubscribe alfs-discuss' in the subject header of the message
More information about the alfs-discuss
|
OPCFW_CODE
|
(this project has also been posted in 'small project' $100-$300 size so that I can evaluate all bids)
Please note that the site already exists and this is a request to display 'ready made' and 'made to measure' on one page by product, currently you must view the products by category so you need double the amount of pages for products which are available in both 'ready made' and 'made to measure'.
The site has already been heavily changed with mods and customised scripts. The site is a curtains and blinds store which sells 'made to measure' and 'ready made' blinds. I need both of these categories to be displayed on the same page for each product, the link below is an example site of what we need to achieve:
[url removed, login to view];rmwidth=45&rmdrop=130&width=&drop=&readymade=Get+Price%21
The current site has a back-end admin area where all price tables and sizes are maintained, then when a product is added by the store owner he selects whether or not it should use the price matrix table to reference the price or if it is a standalone product with a set price (for example 'ready made' blinds). Therefore when this change is made you will need to ensure that only where the same product is added which is available in both 'made to measure' and 'ready made' will the price calculator be shown.
This description is only for the purposes of allowing you to bid a price and should not be used solely as the contractual deliverables as there is obviously more detail that needs to be thought out such as dependancies of other scripts, ensuring that everything is still working and that site transactions remain seamless to the customer, also the site is on a secure server with a certificate so any scripts that are part of the checkout process must remain secure and working without any unecessary messages.
The changes need to be fully marked through the scripts so that any other developer can understand what the purpose of the module/function is. You are to use useful terms when naming functions (as per any good coding practise).
You will only receive 50% of the money on the day of completion after I have checked that all seems to be working. You will then get the other 50% thirty days later after it has been fully QC'd in production.
This is a small project but can be tricky as you will be modifying an osCommerce site which has already been modified YOU NEED PATIENCE AND TIME TO TAKE THIS PROJECT ON.
I expect you to be readily available by phone or MSN/ICQ for any queries, you should be able to respond to emails within 1 day. You should act in a professional manner and be accomodating to any changes required throughout the project. I do not expect silly prices, you will receive feedback accordingly.
16 freelancers are bidding on average $814 for this job
we are interested in taking up the project. pls see pmb for project clarifications and company details. thanks, viv
Please check your PMB for our proposal and our full portfolio. Thank you. EncodedART Inc – A Solution Provider Company.
I have about 6 years experience in PHP and related stuff. I'm very interested in this project. Let's work on it.
Hello, Thank you for your time to read our proposal. Mxicoders is a leading IT solutions company to provide e-commerce, e-business and branding solutions to small and medium size business worldwide. Mxicoders Daha Fazla
Dear Sir , We have all necessary skills in this area. We use most advanced technologies and offer qualified technical assistance. Quality and satisfaction are guaranteed. If there are any questions, we have 24x7 su Daha Fazla
sir, i have an exprince in oscommerce and you can found me on phone or msn or send me an email at any time, lets get started
Dear Sir, In last 4 months we have developed 14 Dynamic Php projects and 25-30 .NET Projects for Clients in US, UK, Italy, Netherlands, Denmark, [url removed, login to view] Includes sites like Real Estate, Dating, Content Managem Daha Fazla
We have gone through your requirements and we are confident of meeting [url removed, login to view] pride ourselves on providing our client exceptional design and development services as well as quality assured [url removed, login to view] have excellent des Daha Fazla
Hello, We are a software company dedicated to web based services and located in the southern part of Asia that is India. We specialize in providing high technology, end to end solutions in web development and graphi Daha Fazla
Our firm's working on area of software web and software develoment more than 5 [url removed, login to view] have worked on many kinds of projects using techinoligies such as php,asp,asp.net, and others. You can also look at our sk Daha Fazla
hi, w e r here 2 finish ur work perfectly and costeffectly
|
OPCFW_CODE
|
Furthermore, for most expression contexts (a noteworthy exception is as operand of sizeof), the title of the array is routinely transformed to the pointer to the array's initially factor.
in Ada. In the instance earlier mentioned We've synthesised this Using the Produce perform which generates a completely new item and returns it. If you want to use this technique then the most important matter to recall should be to
We could take care of this certain instance through the use of unique_ptr with a Specific deleter that does nothing for cin,
In all around 1977, Ritchie and Stephen C. Johnson manufactured even more variations to your language to facilitate portability of your Unix running method. Johnson's Moveable C Compiler served as the basis for many implementations of C on new platforms. K&R C
Another example, use a specific kind along the traces of variant, as an alternative to utilizing the generic tuple.
Now, there's no specific mention of the iteration system, as well as the loop operates on a reference to const aspects to make sure that accidental modification cannot happen. If modification is wished-for, say so:
C does not have a Specific provision for declaring multi-dimensional arrays, but relatively depends on recursion inside the type system to declare arrays of arrays, which effectively accomplishes exactly the same factor.
C99 is Generally backward appropriate with C90, but is stricter in a few strategies; especially, a declaration that lacks a type specifier no more has int implicitly assumed. A normal macro __STDC_VERSION__ is outlined with worth 199901L to indicate that C99 assistance is offered.
In case the customer undertaking phone calls Ask for prior to the operator job has attained the acknowledge then the client undertaking will look ahead to the owner website link activity. Nevertheless we wouldn't expect the operator activity to consider really lengthy to open a log file,
Sorry, we just see here now have to make sure you're not a robotic. For best results, please ensure your browser is accepting cookies.
Partly to accomplish that and partly to attenuate obscure code for a source of problems, the rules also emphasize simplicity and the hiding of essential complexity driving well-specified interfaces.
Even following the publication on the 1989 ANSI common, for a few years K&R C was even Extra resources now deemed the "most affordable typical denominator" to which C programmers restricted by themselves when most portability was ideal, given that many older compilers have been even now in use, and since cautiously penned K&R C code is usually legal Standard C at the same time.
Ada as well as the newer verions of C++ support exception handling for critical errors. Exception handling is made up of a few parts, the exception, raising
By way of example, a comparison of signed and unsigned integers of equal width demands a conversion from the signed price to unsigned. This tends to crank out unforeseen benefits If your signed value is adverse. Pointers
|
OPCFW_CODE
|
Accessing Azure Web Server is a fundamental step in managing your web applications. Whether you are a developer, system administrator, or an individual looking to host your website on Azure, understanding how to access the web server is essential. In this tutorial, we will explore different methods to access the Azure Web Server and perform necessary operations.
Accessing Azure Web Server using SSH
Secure Shell (SSH) is a popular protocol used for secure remote communication with servers. To access Azure Web Server using SSH:
- Generate SSH Key Pair: If you don’t have an SSH key pair, create one using the following command:
- Create a Virtual Machine: In the Azure portal, create a virtual machine and make sure to select the appropriate operating system.
- Add Public Key to VM: Once the virtual machine is created, navigate to its settings and add your public key to the authorized keys file. This can usually be done through the portal’s interface or by connecting via SSH and editing the file manually.
- Connect via SSH: Open your preferred terminal application and run the following command:
$ ssh-keygen -t rsa -b 4096 -C "firstname.lastname@example.org"
$ ssh username@public_ip_address
Accessing Azure Web Server using FTP/SFTP
If you prefer using File Transfer Protocol (FTP) or Secure FTP (SFTP) for accessing your web server on Azure, follow these steps:
- Create FTP/SFTP User: In the Azure portal, navigate to the virtual machine settings and create an FTP/SFTP user with the required permissions.
- Install FTP/SFTP Client: Install an FTP/SFTP client like FileZilla or WinSCP on your local machine.
- Connect via FTP/SFTP: Open your FTP/SFTP client and enter the server’s hostname, username, password, and port number (usually 21 for FTP and 22 for SFTP). Click “Connect” to establish a connection.
Accessing Azure Web Server using Azure Cloud Shell
Azure Cloud Shell is a browser-based command-line interface provided by Azure. It allows you to manage your Azure resources directly from your browser. To access Azure Web Server using Azure Cloud Shell:
- Open Azure Portal: Navigate to the Azure portal in your preferred web browser.
- Launch Cloud Shell: Click on the “Cloud Shell” icon in the top navigation bar of the portal. This will open a command-line interface directly in your browser.
- Select Subscription: If prompted, select the desired subscription for managing your resources.
- Select Environment Type: Choose either Bash or PowerShell as your preferred environment type.
- Access Web Server: Use appropriate commands (e.g., SSH or FTP) within the Cloud Shell to access and manage your web server on Azure.
In this tutorial, we explored various methods to access an Azure Web Server. We covered accessing it via SSH, FTP/SFTP, and using Azure Cloud Shell.
Choose the method that suits your requirements and preferences. Remember to secure your connections and follow best practices for managing your web applications on Azure.
|
OPCFW_CODE
|
C++ LNK2001: unresolved external symbol with Struct Array
Currently migrating one of my program from Matlab to C++, I am experiencing a difficulty in reading a file.csv and look for assistance for my understanding.
struct nav {
std::string title;
... // I have 17 members but for simplicity purposes I am only disclosing
// two of them
float quant;
};
nav port[];
std::string filedir = "C:\\local\\";
std::string fdbdir = filedir + "Factor\\";
std::string extension1 = "fdb.csv";
std::string extension2 = "nav.csv";
std::string factorpath = fdbdir + extension1;
std::string factorpath2 = filedir + extension2;
std::ifstream fdbdata(factorpath);
std::ifstream navdata(factorpath2);
int main() {
// 2nd data file involving data of different types
{
navdata.open(factorpath2);
if (navdata.fail()) {
std::cout << "Error:: nav data not found." << std::endl;
exit(-1);
}
for (int index = 0; index < 5; index++)
{
std::getline(navdata, port[index].title, ',');
std::getline(navdata, port[index].quant, ',');
}
for (int index = 0; index < 4; index++)
{
std::cout << port[index].title << " " << port[index].quant <<
std::endl;
}
}
}
Error: LNK2001: unresolved external symbol "struct nav * port" (?port@@3PAUnav@@A)
From the Error list, there is certainly something wrong with the declaration of the struct type port that I'd like to know.
Most importantly: Is there a way of not hard-coding index as the dimension of the data is not fixed. I've used for (int index = 0; index < 4; index++) for testing purposes, but index could be any integer as 50,200, etc.
EDIT:
As requested, please find below the minimal example:
struct Identity {
int ID;
std::string name;
std::string surname;
float grade;
};
std::string filedir = "C:\\local\\";
std::string extension = "sample.csv";
std::string samplepath = filedir + extension;
int main() {
std::ifstream test(samplepath);
std::vector<Identity> iden;
Identity i;
while (test >> i.ID >> i.name >> i.surname >> i.grade)
{
iden.push_back(i);
}
std::cout << iden[1].name;
system("pause");
}
resulting in vector subscript out of range. Any idea of what looks wrong here?
Also the below sample data as requested:
ps: the point header should be read grade for consistency purposes.
Best,
nav port[]; <-- what's this?
May help: The Definitive C++ Book Guide and List
You can't define an "empty" array. If you want to add runtime then use std::vector.
"Most importantly: Is there a way of not hard-coding index as the dimension of the data is not fixed. " Yes, use a std::vector<nav> instead of a raw array.
And what's with all the global variables?
nav = struct type , port[] is supposed to be an object of nav.
@Drop: got some books already, but thanks for the suggestion.
No, port is an array of nav objects, an array of size zero, which means that any indexing into the array will be out of bounds and lead to undefined behavior. And apparently your compiler will not even add that variable to the output object file leading to your linker error. Either set a size, or use std::vector.
@JoachimPileborg: thanks for the tip on the "empty array". is there alternative to std::vector? the reason: following some searches I am more familiar with solutions involving ifstream, stringstream, etc. to the extent that I've managed to make it work on csv with data of same type. Just stuck with struct
There might be some other causes of your problem to, like if your definition of port is in another source file, do you actually build with that source file? And why if you declare nav port[] do the error message say nav* port? Can you please try to create a Minimal, Complete, and Verifiable Example and show us?
A std::vector (please read about it, either the linked reference of some other place) is like an array, but it can be expanded at runtime. You can even use indexing like an array.
@JoachimPileborg: was my intention, initially: setting the empty array then find a way of filling it through getline and stuff. thanks for the explanation.
@JoachimPileborg: sure, a vector could be seen as a 1-d array. I am not denying that fact. Thus there should certainly be a way of using std:: vector i am not aware of at the moment.
@JoachimPileborg: std::vector will imply reading each column of the file.csv as a vector of different types. is it an optimal approach?
It's not that hard: std::vector<nav> port; nav n; n.title = "foo"; port.push_back(n); std::cout << port[0].title; There are almost no situation where you can't use a vector instead of an array, and if you want one that can grow during runtime then a vector is really the only choice. I really fail to see why you can't use it?
So is your idea consisting of defining port as a vector on a struct type nav? if yes, then I was not aware of such possible thing.
@JoachimPileborg: as suggested I have edited the post and disclosed a Minimal, Complete, and Verifiable Example. any idea of what is going wrong? cheers
Parsing csv files is harder than you think. Don't write your own code. Use a library.
I've actually managed on a single file containing only float data type and I am only stuck with struct array. I just hope to strengthen skills by writing own code rather than resorting to existing library. but you're right as I concede that it could sometimes be a pain. cheers
You need to supply a dimension for the array "port". Regarding the error message with struct nav * port, that is a side-effect of how C++ will decay an array into a pointerAlternatively, since you ask if there is a way to not hardcode the dimension, simply use std::vector. You will find that using std::vector is usually both safer to use and efficient. The other issue with "index out of range", I cannot be 100% certain not seeing the content of the sample.csv file, but if the file only contained one entry, then index "1" would be out of range. In C++ C-style arrays and C++ std::vectorS use Zero-based indices.
Thanks for the reply. Regarding the content of the sample.csv, I've edited the post with a picture. Hope it helps. cheers.
|
STACK_EXCHANGE
|
Just in time for Superb Owl Sunday, an article on owls! If you are as old as I am you will remember in the 90’s the hullaballoo about Spotted Owls here in the Pacific Northwest. At the time the culprit was logging, Spotted Owls have evolved to inhabit old-growth forests, tall trees, old snags, healthy streams, and with the many food sources that also live in the same habitat. Continual cutting down of old-growth trees removed much the Spotted Owls habitat and they have not adapted to the loss. They were labeled endangered at the time, and while logging old-growth is now reduced significantly, there is now a new threat to the Spotted Owl, the Barred Owl.
Barred Owl’s have slowly moved from the east coast to the west, they can thrive in forests young or old. They eat a wider range of food for their diet, and they reproduce faster than Spotted Owl’s. We have photographed Barred Owls on our trip to the Carolinas, here in the PNW on Whidbey Island, and just down the road in Yost Park. So I well understand when I read they are pervasive from my own personal experience. So what to do with the lack of old-growth forest and the prevalence of the Barred Owl? The U.S. Fish and Wildlife Service has proposed allowing hunters to track, trap, and shoot the Barred Owls.
The goal would be to remove 500,000 Barred owls over the next 30 years. Hunters would have to apply for permits that expire after three years. The permits would be given based on the location they wish to hunt and determine if they are allowed to shoot or only trap the owls for removal. I assume for instance you would not be allowed to walk into Yost park here in Edmonds and shoot at Barred Owls in the night, and that the hunting they’d advocate for would be in old-growth forests first and foremost where the Spotted Owl is most threatened.
Will this plan work? Reading opinions from biologists and other experts most are skeptical but also see no other alternative if the Spotted Owl is to be saved. The one thing they all seem to say and agree on is that the situation is incredibly sad. Humans caused the problem not the Barred Owls but much like many other management programs of invasive species, the Barred Owls will be the target. Sometimes these plans work, sometimes they don’t, and often they have adverse side-effects that create new wildlife management issues.
On the Columbia River both Cormorant’s and Sea Lions are eating all the salmon. In 2015 Cormorant’s were driven off of their preferred nesting area, East Sand Island in an effort to help the salmon runs. About 50,000 were displaced, and a fifth of those decided to make new a new home on the massive Astoria-Megler Bridge. Now their acidic poop is causing major damage to the bridge, and they are eating a disproportionate amount of salmon due to the fact they are further upriver than the island. Now the plan is to remove them from the bridge and hope they move back to the island!
The Sea Lions, having been reduced in population due to hunting and fur trading a long time ago, were given protected status some years back and have since rebounded in population. Now they are being targeted as another threat to the salmon, because they gather near dams, locks, and floating bridges where they can easily feed on them. So now due to an amendment in 2018 they can be removed (hunted) specifically in these areas to protect the salmon.
Invasive species aren’t only problems in the northwest, on the opposite corner of the lower 48, the Everglades of Florida have their own issues. Snakehead fish when first discovered in Florida canals a little over 20 years ago made many headlines, similar to the Barred Owl they eat everything and are very aggressive. The worry was that other local fish populations would begin to decline as the snakeheads dominated their habitat. Since then there has been no major impact on local fish, and apparently there is quite a good fishing industry for recreational snakehead fishing. Burmese Pythons are the other unwelcome visitor to Florida, and I get it, who wants giant snakes in their back yard!? These pythons are still quite unwelcome, in fact, Florida will pay you by the foot to remove them.
I truly hope the Spotted Owl’s will survive – I’d love to one day photograph one, and if we do we’ll be sure to post about it here! Do we need to kill the Barred Owls to do this? It’s hard to say, clearly our attempts to manage different species in the past have had varied results, sometimes even negative results. I am no expert but I think even those who study this can’t predict the outcome. Maybe Barred Owl removal will help the Spotted Owl, but what other unintended consequences could there be? Hopefully the Spotted Owl will adapt, as Jeff Goldblum say’s in Jurassic Park, “life will find a way.”
- Read our last post about the Crooked Tree Wildlife Sanctuary in Belize! Our next couple of posts will also be from Belize, we found too many birds for just one post.
- A piece on the Barred Owl by the Seattle Times and from the U.S. Fish and Wildlife Service.
- Check out all our other birding and nature adventures here.
- Please check out the Kingsyard banner above and give it click. They make some really cool bird feeders and bird houses that you’ll want to check out!
- Most of these shots were taken with the Sony a7 along with the Sony FE 200-600mm lens.
|
OPCFW_CODE
|
I have found that the issue navigator is reporting a wrong status for an issue, i.e., the issue screen shows status A while Issue navigator shows B.
This seems to be caused when you use post-functions in a workflow. In my case I have a scriptrunner and a 3rdparty plugin.
After rebuilding the indexes for the instance, in one case the status is shown OK, but in another it does not change unless you edit the issue.
Is this a known bug? Fixes?
It's not a bug in the navigator - the navigator uses the index to read it's data from, whereas the issue view screen uses the database.
What's happened is your index is not being updated, so it's drifted away from the database.
The usual culprit is pretty much what you've guessed:
Your 3rd party plugin may be the culprit, or your script, but that's where the "bug" is.
A post function is updating data after the issue has been indexed (check the order of post functions on your transitions).
I double checked this and the reindex is always performed after any custom postfunction
Or, a post-function or listener needs to re-index the issue explicitly and is not doing it.
According to this KB , after Jira 5.1 indexing has changed for workflows transitions and plugins should not worry abou it. In particular I have a scriptrunner script that creates issues B from issue A, but issue A status is wrong in the Navigator after the transition for issue A workflow.
Yes, 5.1 improved it, so there's less need to explicitly re-index, but the symptoms you are seeing are very clear - it is not being done when it needs to be
You need to isolate which post-function is causing the problem and work out why it's not re-indexing.
I had not noticed this warning from the plugin site:
Due to indexing changes in JIRA 5.1, this post-function should be placed immediate after the function: Re-index an issue to keep indexes in sync with the database. If you don't do this, the parent issue will not be indexed correctly.
I'm having trouble with this same re-index issue in a JIRA 5.2.4 instance. My custom post-function is placed two steps above the default re-index post-function, and I've made sure that it throws no exceptions, but still the default re-index doesn't work. This is a screenshot of my list of post-functions for a specific transition; the custom post-function is circled in red:
What I still haven't got to understand is, in which way a custom post-function can disable or block a further execution of the explicit re-index function? Even the "update change history" function is executed correctly, so why only the re-index function is affected, unless the custom post-function is placed below it?
Connect with like-minded Atlassian users at free events near you!Find a group
Connect with like-minded Atlassian users at free events near you!
Unfortunately there are no AUG chapters near you at the moment.Start an AUG
|
OPCFW_CODE
|
Note: You are welcome to add to this page. MIkeSmith set it up simply because he needed something to link to in demo validator error messages before we decide on an authoritative source for guidance that validators should ultimately link to.
At this point, this page only attempts to provide minimal information for understanding and dealing with error messages you may encounter when checking HTML documents using the unstable pilot version of the W3C Nu Markup Validation service, which currently is experimentally configured to emit error messages for cases of
img elements that are missing
alt attributes, except for a few exceptions.
In order to provide suitable text alternatives for images, you should consult detailed guidance such as the draft document HTML5: Techniques for providing useful text alternatives or the Requirements for providing text to act as an alternative for images section of the HTML5 draft specification, or @@add more here please@@.
Options for marking up text alternatives
The preferred means for marking up a text alternative for an
img element is to use the
alt attribute to provide a full equivalent for the image, or to provide an
alt attribute with an empty value (to indicate that the image is purely presentational).
Note: The HTML Working Group does not yet have complete agreement on the appropriate set of markup options to address cases of
imgelements that lack
altattributes; handling of the markup cases described below has been experimentally implemented only, for proof-of-concept demonstration purposes, in the interest of having something tangible to compare notes on.
In the very rare and exceptional cases where a full equivalent for a non-presentational image is unknown or unavailable, or it is not preferable for some reason to specify the equivalent using the
alt attribute directly, you can make your document conform to the current experimental expectations of the pilot version of the W3C Nu Markup Validation service by doing one of the following:
- wrap your
imgelement in a
figureelement that is labelled with a
figcaptionelement which has at least one non-empty, non-whitespace-only text node
- put an
aria-labelledbyattribute on the
- put a non-empty
titleattribute on the
Also, note that the unstable pilot version of the W3C Nu Markup Validation service is currently experimentally configured with an exception to not emit an error for cases of
img elements that are missing
alt in the following circumstance:
- if your document is automatically generated in some way by a system that adds a
metaelement with the
nameattribute set to "
However, note that such a document is not conforming—only that the generator could not determine an appropriate equivalent. This exception is enabled in order to discourage markup generators from including "bogus" alternative text simply for the purpose of suppressing any error messages that will otherwise be emitted.
The unstable pilot version of the W3C Nu Markup Validation service is also currently experimentally configured to support the following additional markup option for indicating that an image is purely presentational:
- put a
roleattribute on the
imgelement, with the value "
Note that the set of markup options given in the bulleted lists above currently represents the union of the markup options described in the following two documents:
- Guidance for conformance checkers subsection of the
img-element section of the HTML5 draft specification
- HTML5 Change Proposal: Replace img Guidance for Conformance Checkers
However, because the W3C Nu Markup Validation service is not specifically intended for checking e-mail messages nor any other specialized class of documents, it does not provide any exception for the following case described in the HTML5 draft:
- The conformance checker has been configured to assume that the document is an e-mail or document intended for a specific person who is known to be able to view images
|
OPCFW_CODE
|
Luis Alvarez's K-T Impactor Calculation
I am trying to perform the calculation that Luis Alvarez used to establish the size of the K-T impactor. I used the following information:
Assume that the clay layer with iridium was uniformly distributed around Earth by the impact.
On average, the layer had a concentration of iridium of 10 parts per billion (ppb) by weight.
On average, the layer was 4 cm thick.
The density of the layer was 2.5 g/cm3.
Assume the meteor was spherical, with a density of 6.0 g/cm3, and an iridium content of 0.5 parts per million (ppm) by weight.
The radius of Earth is 6378 km.
What is the diameter of the meteorite? The answer isn't exactly 10 km, as stated. By how much would you have to change the assumed thickness of the iridium layer to arrive at an asteroid diameter of exactly 10 km?
My attempt:
$m_{layer} = \pi(2.5 g/cm^3)((6378 \cdot 10^5)^4 - (6378 \cdot 10^5 - 4)^4) = 3.26 \cdot 10^{28} g$
$m_{iridiuminlayer} = (3.26 \cdot 10^{28})(10^{-8}) = 3.26\cdot 10^{20} g$
Here is where I get stuck. It seems there is not sufficient information to calculate the radius of the meteor, as D = m/V, and we are given density and a mass ratio. I tried doing the second portion, in which you assume the diameter of the asteroid is 10 km, calculating the mass of the meteor, the iridium in the meteor, and the average iridium distribution over Earth's surface. I would be able to solve the rest of the question if I knew how to calculate the Volume/mass of the meteor. Please help!
You assumed the volume/mass of the meteor to 6 in your question. The interesting thing is that we probably don't know the iridium ratio of the meteor. Although it could be estimated by considering the meteor to be an ordinary stone (or metallic) meteor, enriched by iridium to get a density of 6, it leads to yet another problem: having anything in the space with so enriched with such a rare element is unthinkably unrealistic.
((Side side note: it would be funny is once a new technology would be found, enabling interstellar travel, whose drive somehow needs iridium...))
That was the homework question's assumption, I believe I have it answered now though
The volume of a sphere is $\small\sf{\frac4 3\pi R^3}$, where $\small\sf{R}$ is the radius.
The volume of Earth with the 4 cm deep iridium rich layer is,
$\small\sf{V_{Ei} = \frac 4 3\pi (6.378\cdot10^6)^3}$ $\small\sf{m^3}$ = $\small\sf{1.086 \ 781 \cdot 10^{21}}$ $\small\sf{m^3}$
The volume of the 4 cm deep iridium rich layer is,
$\small\sf{V_i = \frac 4 3\pi (6.378\cdot10^6)^3} -\frac 4 3\pi (6.378\cdot10^6 - 0.04)^3 $ $\small\sf{m^3}$ = $\small\sf{2.044 \cdot 10^{13}}$ $\small\sf{m^3}$
Now, density is mass divided by volume, $\small\sf{\rho = m/v}$, thus the mass of the iridium rich layer is,
$\small\sf{m_i = 2.5 \cdot 2.044 \cdot 10^{13} = 5.110 \cdot 10^{13}}$ tonnes
The proportion of iridium in the layer is 10 ppb, therefore the mass of iridium within the layer is,
$\small\sf{5.110 \cdot 10^{13} \cdot 10 \cdot 10^{-9}} = 511 \ 000$ tonnes
Now, the proportion of iridium in the meteor was 0.5 ppm, therefore the mass of the meteor was,
$\small\sf{511 \ 000 / (0.5 \cdot 10^{-6}) = 1.0220 \cdot 10^{12}}$ tonnes
The density of the meteor was $\small\sf{6.0 \ g/cm^3}$, which is the same as $\small\sf{6.0 \ t/m^3}$, therefore, the volume of the meteor was,
$\small\sf{1.0220 \cdot 10^{12}/6.0} = 1.703 \ 333 \cdot 10^{11} m^3$
Using this and the equation for the volume of a sphere, the radius of the meteor was,
$\small\sf{\sqrt[3](\frac 3{4\pi} \cdot 1.703 \ 333 \cdot 10^{11})} = 3438.774 \ m$
and the diameter of the meteor was 6877.5 m or 6.878 km
Not the 10 km you wanted.
To get the thickness you want for a 10 km diameter meteor, do the calculation in reverse.
For diameter of 10 000 m, the volume of a spherical meteor would have been,
$\small\sf{\frac 4 3 \cdot \pi \cdot 5000^3} = 523.598 \ 776 \cdot 10^9 \ m^3$
With a density of $\small\sf{6 \ t/m^3}$, the mass of meteor would have been,
$\small\sf{6(523.598 \ 776 \cdot 10^9)} = 3.141 \ 592 \ 6 \cdot 10^{12} t$
with a metal grade of 0.5 ppm of iridium, the mass of iridium in the meteor would have been,
$\small\sf{3.141 \ 592 \ 6 \cdot 10^{12} \cdot 0.5 \cdot 10^{-6} = 1 \ 570 \ 796 \ t}$
The metal grade of iridium in the layer on the Earth is 10 ppb, therefore the mass of the iridium layer is,
$\small\sf{(1 \ 570 \ 796)/ (10 \cdot 10^{-9}) = 1.570 \ 796\ 327 \cdot 10 ^{14} \ t}$
With a density of $\small\sf{2.5\ t/m^3}$, the volume of the layer is,
$\small\sf{(1.570 \ 796\ 327 \cdot 10 ^{14})/2.5 = 6.283 \ 185 \cdot 10^{13} \ m^3}$
From the previous calculation, the volume of layer is,
$\small\sf{6.283 \ 185 \cdot 10^{13} = \frac 4 3\pi (6.378\cdot10^6)^3} -\frac 4 3\pi (6.378\cdot10^6 - x)^3 $ $\small\sf{m^3}$ = $\small\sf{2.044 \cdot 10^{13}}$ $\small\sf{m^3}$
Solving for $\small\sf{x}$, which is the thickness of the layer, gives $\small\sf{x} = 0.123\ m\ or\ 12.3\ cm$
Assume the meteor was spherical, with a density of 6.0 g/cm3, and an iridium content of 0.5 parts per million (ppm) by weight.
That is ASSUMING that the meteorite was nearly pure iron (or iron-nickel - very similar densities. Only around 10% of asteroids expose enough iron (iron-nickel) at their surfaces to be detected spectroscopically. There is a clear tension between the questions's details and the observable universe there.
More profoundly, the question does not address at all the issue of how much of the Earth was scooped out and mixed with the asteroidal material, before falling back as an "iridium rich" layer. In the scenario of a 5~10 km asteroid hitting the Earth, the transient crater would be order of 10km deep by 30-70km in diameter - dozens to hundreds of times the volume of the asteroid. It is ... is "unlikely" a bit strong ?... that the volume of ejecta is exactly that original volume of the impactor.
|
STACK_EXCHANGE
|
Courses, now open on Coursera. Co- founder Coursera; Adjunct Professor .
3 years ago ( 0 children). How can I download all the video lectures of a coursera course in one go?< div> With the reinvigoration of neural networks in the s, deep learning has become an extremely active area of research. ' coursera- dl',.
* FREE* shipping on qualifying offers. Org/ learn/ machine- learning/ home/ welcome.
Is there a way to mass download the materials from a Coursera course i. Avoid getting the wrong advice.
Neural Networks for Machine Learning Coursera Video Lectures - Geoffrey Hinton. Pip3 install - r requirements.
Machine learning is eating the world right now. WORK WITH COURSERA Andrew also co- founded.Contributor | Currently exploring Machine Learning Deep Learning IoT. Part 2 of an intuitive and gentle introduction to deep learning. In the near future more advanced “ self- learning” capable DL ( Deep Learning) , ML ( Machine Learning) technology will be used in almost every aspect of your business industry. Your smartphone smartwatch automobile ( if it is a newer model) have AI ( Artificial Intelligence) inside serving you every day. Covers the most important deep learning concepts giving an understanding rather than mathematical theoretical details. Algs4partI- 010 coursera.
Carbohydrates are one of the most hotly contested nutritional debates in the world, the role they play in a healthy diet both in. Or enroll in the individual courses: Course 1.I found that the best way to discover get a handle on the basic concepts in machine learning is to review the introduction chapters to machine learning textbooks to watch the videos from the first model in online courses. 머신러닝 입문 강좌 중 제일 추천하는 코세라 머신러닝 공부하며 정리했던 자료.
What are the basic concepts in machine learning? Deep learning ( also known as deep structured learning hierarchical learning) is part of a broader family of machine learning methods based on learning data representations as opposed to task- specific algorithms.
Use the following. Supervised Learning Systems: As two pioneers in the field Tom Mitchell Michael I.
Note 1: We strongly recommend that you don' t install the package globally on your machine. Give this a spin: com/ dgorissen/ coursera- dl.
We cover the basic components of deep learning what it means, develop code necessary to build various algorithms such as deep convolutional networks, variational autoencoders, how it works generative adversarial. I am sharing a simple trick to download all videos with subtitles,.
You will learn how to build a successful machine learning project. Take our new Deep Learning.
If you aspire to be a technical leader in AI,. Enroll in Deep Learning Specialization ( 5 courses).
Taking machine learning by UW on coursera. Txt * patch coursera/ coursera.We will help you become good at Deep Learning. Everyone their mother are learning about machine learning models, classification, neural networks Andrew Ng. Video created by Yonsei University for the course " Deep Learning for Business". If you want to break into AI, this Specialization will help you do so.
Sample command line to download Coursera materials for current Machine Learning class. Siraj Raval published a YouTube video titled " Learn Machine Learning in 3 months" where he describes a 3 month curriculum to help you go from beginner to well- versed in machine learning.
|
OPCFW_CODE
|
I'm writing this as a tutorial of my issues with VMAN, specifically the backup process, but also as a point to gather information about how others are handling it.
We installed VMAN in June 2016. Details of the infrastructure are as follows:
VMAN running on ESX 5.5
Collecting data from 8 vCenters, totaling 17,000 VMs, 850 hosts, 2500 datastores, and 150 clusters
We were backing up the appliance using Avamar. Avamar backups the VM by taking a snapshot, backing up the data, removing the snapshot, and running the disk consolidation job.
The process was running fine for a while until the database got to over 1TB in size. After that we started having issues with the appliance to where it would suddenly crash. When it crashed, it was unrecoverable by a simple reboot as it would not accept any commands including to power off or vmotion. The process of recovery involved moving all of the remaining VMs from the host and rebooting the host. When the host reboots, the appliance would migrate to another host and allow it to power back on.
Solarwinds support was unable to find an answer to the problem as the VM was down. VMware was unable to fine a solution as there were no error logs or indication of what was occurring. Solarwinds was saying it was a VMware problem since the VM was down and not functioning and VMware was saying it was a Solarwinds problem since we had no other problems in our environment.
The VM crashed beyond repair in early November, and we had no good backup. The VM was rebuilt and seemed to be working fine until mid January when the problem resurfaced.
After extensive research on my part, the root cause was determined to be because the disk consolidation job was not completing. There was a change in ESX 5.5 where the disk consolidation process was designed to be less intrusive, preventing the VM from going into a stunned state. As a result, the disk consolidation job cannot keep up with the amount of IO from the data collections. The disk consolidation job times out, causing backend disk problems.
VMware recommended removing the timeout condition. This worked with the VM completely powered off, but the job took about 2.5 days to complete. It however, later caused additional problems as the VM crashed during a consolation job and we could not power it back on until the job finished.
We recently quit using Avamar to back up the data and went to TSM. The downside of TSM is that it does not have a postgresql agent, so we have to back up the data to a flat file and then have TSM back up the flat file.
The script is referenced at Perform backups in the Virtualization Manager - SolarWinds Worldwide, LLC. Help and Support. The database is currently about 1.7TB. I chose the custom backup option and it takes about 20 hours to complete, resulting in a 100GB file. During the backup job, the application goes in and out of a usable state, so we have numerous missing data points, sometimes lasting multiple hours. After the backup completed, the application returned to a usable state. As a result of this, we cannot back up the database more than once a week.
Next steps are to come up with a way to do incremental backups. If anyone is already conducting a weekly backup with a nightly incremental, I would be curious to hear how you have it configured.
|
OPCFW_CODE
|
Leanpub appears to be a great way to share or sell in-progress or final version of a textbook.
The principle is quite simple: You write a book using Markdown, which you can store on Dropbox or GitHub, and it get formatted as PDF/EPUB on Leanpub website; you then propose a minimal price, and get your money each month via PayPal. Of course, you can offer your book for free. Regarding royalties, here is what we find on the FAQ:
We pay a royalty of 90%, minus 50 cents, on your paid purchases. Royalties are paid at the beginning of each month via PayPal, once a minimum amount of $40 is reached. Our 10% covers all the PayPal fees, both on the sale of the book, and on the payment of royalties to you.
Interestingly, Leanpub relies on a superset of Markdown: Markua. Markua Specification can be read online, and other information on the editing/publishing process can be found in Leanpub manual. At the time of this writing, I was not able to find any implementation that could replace Markdown or MultiMarkdown.
Some folks from the Johns Hopkins University are currently Leanpub to publish very nice textbooks on the use of R for data science, inspired by their nice tutorial from the corresponding Coursera specialization. Here are the gems: The Elements of Data Analytic Style, Statistical inference for data science, R Programming for Data Science, and Exploratory Data Analysis with R (in progress). The book on R programming is really a great one.
As someone who spend a great part of his time writing tutorials, blog posts, or statistical report, a text-based workflow is essential to me, and Markdown soon became my markup language of choice. I generally write text in Emacs and preview Markdown output using Marked 2 which is one of those wonderful applications that you don’t want to miss if you are working on a Mac. If I want more fancy outputs (i.e., other than PDF/HTML), I can use Pandoc, of course, but Marked.app already offers Pandoc integration (see also Plain Text, Papers, Pandoc). I wonder why the Python community keep using rst with Sphinx.1 I should note that there are some great alternative to Sphinx, e.g. MkDocs.
On a related note, some time ago I started writing a GitBook using the dedicated editor that was provided on the website, and even started to hack the Rgitbook package to make it works. Now both projects seem to be dead and GitBook offers an in-browser editor, which I do not find very convenient for working off-line as is often the cases for me. I noticed that Jan de Leeuw was also offering a series of textbook on Block relaxation algorithms in statistics, and is now only using the editor to upload his books to GitBook.
I should note that there are great websites, like The Hitchhiker’s Guide to Python! or https://readthedocs.org, that are written entirely using rst. ↩︎
|
OPCFW_CODE
|
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using CommonUtils;
using Serilog;
namespace PresetConverter
{
/// <summary>
/// Base Preset Class for reading and writing a Fabfilter Pro Q (1 or 2) Preset file
/// </summary>
public abstract class FabfilterProQBase : VstPreset
{
public FabfilterProQBase()
{
}
public abstract bool WriteFFP(string filePath);
public static float[] ReadFloats(string filePath, string headerExpected)
{
BinaryFile binFile = new BinaryFile(filePath, BinaryFile.ByteOrder.LittleEndian);
string header = binFile.ReadString(4);
if (header == headerExpected) // "FPQr", "FQ2p" or "FQ3p"
{
int version = binFile.ReadInt32();
int parameterCount = binFile.ReadInt32();
var floatArray = new float[parameterCount];
int i = 0;
try
{
for (i = 0; i < parameterCount; i++)
{
floatArray[i] = binFile.ReadSingle();
}
}
catch (System.Exception e)
{
Log.Error("Failed reading floats: {0}", e);
}
binFile.Close();
return floatArray;
}
else
{
binFile.Close();
return null;
}
}
/// <summary>
/// convert a float between 0 and 1 to the fabfilter float equivalent
/// </summary>
/// <param name="value"></param>
/// <returns></returns>
public static float IEEEFloatToFrequencyFloat(float value)
{
return 11.5507311008828f * value + 3.32193432374016f;
}
// log and inverse log
// a ^ x = b
// x = log(b) / log(a)
public static double FreqConvert(double value)
{
// =LOG(A1)/LOG(2) (default = 1000 Hz)
return Math.Log10(value) / Math.Log10(2);
}
public static double FreqConvertBack(double value)
{
// =POWER(2; frequency)
return Math.Pow(2, value);
}
public static double QConvert(double value)
{
// =LOG(F1)*0,312098175+0,5 (default = 1)
return Math.Log10(value) * 0.312098175 + 0.5;
}
public static double QConvertBack(double value)
{
// =POWER(10;((B3-0,5)/0,312098175))
return Math.Pow(10, (value - 0.5) / 0.312098175);
}
}
}
|
STACK_EDU
|
sql update multiple rows from select
Update multiple rows using select statement - update table2 set value = (select value from table1 where table1.id Note that this syntax works in SQL Server but may be different in other
Use SQL UPDATE to Query and Modify Data - The SQL UPDATE statement is used to modify column values within a SQL Server table. Learn the basic command Simple Example – Updating Multiple Rows.
[SOLVED] SQL How to Update Table with Multiple Values from SELECT - As I was replying to Flashman with the error message, I decided to do another Google search: sql update a subquery has returned not.
Updating multiple rows using a subquery in SQL - I am trying to update some of the summary data from values in the master Driver from ( select top 1 TopSpeed, TimeSent, Driver from CarDa.
sql server - And then we'll make use of SQL Server's ability to update Table1 via a Date from (select row_number() over(partition by id order by Date) as
How to UPDATE from SELECT in SQL Server - Performing an UPDATE using a secondary SELECT statement can be done one of whereby values in the columns of two different tables are compared to one
How to update multiple rows at once in MySQL? - You can either write multiple UPDATE queries like this and run them all at once: UPDATE students s JOIN ( SELECT 1 as id, 5 as new_score1, 8 as such as MySQL, PostgreSQL, SQLite, Microsoft SQL Server and more.
SQL UPDATE Statement - The UPDATE statement is used to modify the existing records in a table. Notice the WHERE clause in the UPDATE statement. UPDATE Multiple Records.
SQL UPDATE Statement - Updating Data in a Table - In this tutorial, you will learn how to use SQL UPDATE statement to change existing data in a table. The UPDATE statement changes existing data in one or more rows in a table. In case you want to update data in multiple columns, each column = value Execute the SELECT statement above again to verify the change:.
SQL - I have table - config. Schema: config_name | config_value And I would like to update multiple records in one query. I…
sql update multiple columns
SQL UPDATE Statement - The UPDATE statement updates data values in a database. UPDATE can update one or more records in a table. Use the WHERE clause to UPDATE only specific records.
Update multiple columns in SQL - The "tiresome way" is standard SQL and how mainstream RDBMS do it. With a 100+ columns, you mostly likely have a design problem also,
SQL UPDATE Statement - The UPDATE statement is used to modify the existing records in a table. Notice the WHERE clause in the UPDATE statement. The WHERE clause UPDATE Multiple Records Update the City column of all records in the Customers table.
SQL: UPDATE Statement - The syntax for the SQL UPDATE statement when updating multiple tables (not how to use the SQL UPDATE statement to update a single column in a table.
sql server - And then run your update (multiple columns at a time): WITH my_values AS ( SELECT one_first_var, one_second_var, one_third_var FROM
SQL - The UPDATE statement in SQL is used to update the data of an existing table in database. We can update single columns as well as multiple columns using
SQL Update - multiple columns - MSDN - I'm having a problem updating multiple columns in a table. Usually when I submit an update that affects multiple columns it's from an
Learn SQLite UPDATE Statement with Examples - This tutorial shows you how to use SQLite UPDATE statement to update existing You can use the UPDATE statement to update multiple columns as follows:.
Update (SQL) - An SQL UPDATE statement changes the data of one or more records in a table. Either all the One may also update multiple columns in a single update statement: UPDATE T SET C1 = 1, C2 = 2. Complex conditions and JOINs are also
SQL UPDATE Statement - Updating Data in a Table - In this tutorial, you will learn how to use SQL UPDATE statement to change In case you want to update data in multiple columns, each column = value pair is
|
OPCFW_CODE
|
M: Ask HN: Dividing Your Time: Programming and Gaming - mattbgates
I can only assume that if you are like me: You are a gamer at heart. You always have been. You always will be. Even if you aren't actively playing games at this time. I'm part of the generation that went outside and played until it got dark.<p>But then I'm also part of the generation that had Atari. Then Nintendo. Gameboy was awesome too. And even a Sega Genesis system. And beyond. Computers came along, amazing as they were, offering some game called Doom that you had to load from DOS. Duke Nukem. Wolfenstein 3d. Quake was another. As the years progressed, so did the games. And I spent my teenage years in the gaming world. MMOPRGs. Specifcally, Subspace (now known as Continuum) and Asheron's Call (discontinued in 2017), if any of you were lucky enough to play.<p>As I got older, however, getting jobs, going off to college, I played games less and less, hoping that someday, maybe I would find the time to play them again. Between work and side projects and life, the prospect of gaming again just seems unlikely at this time. I think about it.. I even buy games from SteamPowered, usually when they are on sale, hoping to play them, but I never really get a chance, and if I do, it might be that I took a day or two just to try them out, before getting back into programming.<p>I keep returning back to my side projects and programming because it makes me feel productive. When I'm not coding, I feel like I'm wasting time not doing "what I need to do". The guilt builds up inside me which prevents me from gaming and I know I need to stick to coding. I feel that if I am going to be successful, whether it just be creating a few popular web apps, starting a business, or generating some recurring income, that I have to keep programming.<p>So for those of you who did not give up gaming and are programmers with side projects, how you split your time and how do you not feel guilty about it?
R: vedranm
I do not feel guilty for playing games because I play them on Steam for Linux
or on Steam for Windows over Wine using free and open source Mesa driver for
Radeon. I report any bugs encountered to fd.o Bugzilla [1], and sometimes do
benchmarks. It feels like a combination of the usual gameplay, QA (bug finding
and benchmarking), FOSS activism, and a celebration of having a working FOSS
driver, all of which I really enjoy.
[1] [https://bugs.freedesktop.org/](https://bugs.freedesktop.org/)
R: PicardsTea
Yep, even if you play games 24/7, you wouldn't be able to see them all. Time
management is the key. Trying to focus on the important stuff in your life,
but also filling your free time with games if you like it. There will always
be exceptions, but you should stick to the thing you love. This is my guide to
life also :)
R: onion2k
In the past I've found writing games scratches the same itch as playing them.
That solved this problem for me for a long time. Then I realised it isn't
actually a problem, and if my projects take longer because I'm engrossed in a
great game that's fine.
Although, as a caveat to that, I should point out that none of my side
projects have ever amounted to anything. I hope that's just a correlation.
R: Cheeseness
I don't know if I'll have any useful perspectives since my day-to-day work has
me developing/researching/writing about/testing games, but here's how I do
things.
I do find myself steering clear of particular games (particularly big scale
turn based stuff like Civ) to avoid losing vast swaths of time, but I also
feel like I don't spend enough recreation time when I've got big projects and
contracts on the go.
To combat the latter a little bit, I started making playing "daily challenges"
a part of my daily routine a year or so ago. Even though it's still screen
time and no substitute for going for a ride or something, it's been a good way
of making sure that I step back and unwind a bit for at least an hour a day
and give time to a bunch of games that I love.
If it's helpful at all, I usually play Assault Android Cactus, Crypt of the
NecroDancer and Nuclear Throne, then fire up Wine to play Spelunky and Battle
Chef Brigade (which will be getting a Linux version \o/). On days where I want
a bit more, I'll sometimes try a run at Sublevel Zero, or play some Distance.
I keep a "To Finish" list, which has all the games I've started, but not
finished. I aim to make that at least 6 games shorter every year. If I can't
manage that, then I feel like it's hard to justify buying new stuff. It also
gives me a short list to jump onto when I do have time and am not sure what I
want to play. The last thing I finished was "XCOM 2", and Quadrilateral Cowboy
is what I'm currently working through.
Back when I used to run SteamLUG, we had at least two community game events
every week, and the opportunity to play socially with other Linux gamers was a
big draw. At one point we had a fairly active Guns of Icarus Online clan of
Linux users that played twice weekly. I stopped that when I ported Day of the
Tentacle to Linux, but I'd like to get back into it again soon!
Most importantly though, I play games with my family. My girlfriend and I game
together (sometimes we play single player games and take turn at being co-
pilot, and other times we play multiplayer games together). My Dad drops by at
least once a week and we'll often poke around with some game or another. I put
him in a couple of VR games yesterday and I think he had a blast. Baking game
stuff into your normal recreation/family life makes it hard to feel guilty
about that stuff.
All that said, I definitely feel like I don't have time to play all the games
I'd like to play, but I feel like maybe there are so many games out there
these days that even if I dedicated 24 hours a day to gaming, it wouldn't be
possible to get through everything :)
|
HACKER_NEWS
|
I am planning on selling a PC second hand. I'm confident with the pricing and what to set my limit as far as making some money goes, however what I am unsure about is how to go about ensuring ALL of my data and possible credentials from 3 years of use are gone, short of replacing the drives.
Also are there any mandatory quality standards that you have to abide by when selling second hand? (UK) Might sound daft however I have not sold something before to someone I do not know.
The machine is in good condition and functional.
Some people say use tools like DBAN and do a couple full passes
I just use linux:
dd if=/dev/urandom of=/dev/sdX bs=8M
I'll do that at least twice for a hard drive and 3 times for a SSD
You're relatively safe after that.
I'm not sure of UK law, but just say that the system is sold as-is and that there is no warranty expressed or implied.
The data is on the drive always until it's written over , even after a format or partition changes.
All you have to do it write over the data to erase it. So find the biggest file you have , and copy it over and over until it fills the drive 100%. Then all that's on the drive is a million useless copies of the same file.
For the love of god don't listen to this troll. He is either intentionally misleading you, or so incredibly clueless as to be of no help. His suggestion will not wipe your drive.
Use something like CCleaner to completely erase the hard drive. You can use the disk wiper tool to overwrite the entirety of the drive up to 36 times. That ensures that any and all data on the drive that was there can never be read again. When you do only a simple format or even a single pass format, the underlying data can be read if the person knows what they are doing. This is why I recommend you rewrite the drive with that tool to ensure nothing can be read from the drive.
If you have SSD's, you only need to properly format the once on those, as when you properly format an SSD once they old data is gone and unrecoverable.
I agree with @NetBandit here i use DBAN for my hard drive erasing so i can sell them to my friends or people i know.
Sad part is that it takes about eight hours for the thing to complete on a full wipe. I usually do this because the drives i sell are from people whom worked on tax , credit card info. Works like a charm
Okay thanks, there are 3 drives in total, 2 SSD and 1 HDD.
Can DBAN be used on the SSD's ?
Some people say not to use DBAN on SSDs but I don't trust 'secure erase' programs because they dont actually overwrite the flash. Rather they just tell the SSD controller to reset.
Check out this link
Definetly, i think it is alittle fast because of the flash memory
don't knock it till you tried it lol
Well, depends on how paranoid you are and how important the data is.
Back when I was a lowly tech assistant, the story was that the some British intelligence agencies (or just any agency that handled highly sensitive information) had a process somewhat like this:
1. Wipe the drive completely (set everything to zero)
2. Use a program to overwrite random 1s and 0s to the drive something like 7-9 times over
3. Destroy the drives
4. Grind the drive plates into dust
5. Keep said dust in boxes in a vault.
The longer story involved why each further step was necessary, with the slight possibility of recovering some information from the previous step given proper forensic technology and a lot of time and resources (which, to be fair, a rival COUNTRY might have, and the data might be worth it to them). Whether this was true or not is for you to decide.
On a more practical level and in such a way that maintains the drive so you can sell it, you could go with the first two steps, though honestly you probably won't get much benefit past 3 times overwriting, if even that far.
This is, of course, with Hard Disk Drives in mind, rather than Solid State Drives. The reason for doing all the overwriting on the HDDs had something to do with how a 1 or 0 is actually set when it is re-written to a drive (rather than for the first time), which leaves a technical ability to retrace what was previously written given some really fancy equipment.
I just DBAN(ed)a HDD for the military the other day. Obviously it needed the full DoD 5220.22-M, but from what I've read somewhere, that's actually a waste of time. You really only need to do one wipe.
Some parts lose a lot of value being sold with a PC and sometimes it's worth taking a few parts out before selling, for example, closed loop water coolers and extra hard drives are often worth selling seperately from the pc itself to make more money.
Assuming your SSD supports storage-level encryption, Secure Erase is an incredibly efficient and safe way to wipe the drive. The way it works is that everything on the disk is heavily encrypted at all times and there's a secure area that stores the encryption key. The Secure Erase simply securely removes the encryption key and then does a quick format. Yeah, all of the data is still on there but it's seemingly random bits because you just see the encrypted bits and there's no way to decrypt it since the random key that decrypts it has also been permanently lost.
My experience too, normally I find selling individual components nets you more profit overall.
Easier to find some one who is looking for a replacement this or that than to find some one who wants to buy your specifically configured and equipped PC.
agreed however selling the case or fans by itself is fuitle hence why selling a system with minimal components can make sense.
I find selling cases on locals sites has always worked out for me, 4 cases in the past 2 years. High-end to low end. Package the fans with the case for cheap and it works out. I think most of the people that have bought them have been people like ourselves building systems for friends and family on the cheap.
Selling fans on their own really has sucked though. I'm likely going to donate mine to the local tech not-for-profit if they take them. If not, to the E-waste recyclers they go.
DoD wipe will safely erase data on old disks.
I would select the Linux ISO and burn it to CD then boot PC with cd drive into the Killdisk wiping software and follow the instructions. I recommend anywhere from 3 passes or 7 passes (extra safe) using one of the listed wiping processes, it can take anywhere between 6 and 24 hours depending on how fast your disks are.
If you have not got access to DVD/CDROM drive then you maybe able to burn the ISO to a USB stick using Rufus or something, You only really need to make sure it has a boot-loader and this Killdisk ISO does.
Until someone figures out how to recover that encryption key. The problem is that this relies on trusting the manufacturer to do what they said without backdoor, or weakness... which we've seen in countless examples, is something that they cannot be trusted to do.
If the Pc is a stock manufactured item, just write the make, model and serial number on a sales receipt. If it has been custom built or modified, write a list of full spec of every part.
If drive is to have Windows or other proprietary software, make sure install keys match any included documentation, and record that on sales receipt too. Would be better to sell it with a completely wiped drive (or no drive) but that would put off any buyer wanting a machine they can just plug in and use.
When buyer arrives, offer to show that the machine is fully functional and will boot to an OS. Even if seller declines you still offered to prove it was a working machine.
Have 2 sales receipts marked "sold as seen" sign both copies and get seller to sign, then you both have a copy of the agreed sale,
|
OPCFW_CODE
|
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "arm-linux-gnueabihf".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
https://www.gnu.org/software/gdb/bugs/.
Find the GDB manual and other documentation resources online at:
http://www.gnu.org/software/gdb/documentation/.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from objs/srs...
set(gdb) set args -c conf/srs.conf
(gdb) r
Starting program: /opt/srs-4.0release/trunk/objs/srs -c conf/srs.conf
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/arm-linux-gnueabihf/libthread_db.so.1".
[2021-12-23 12:56:28.682][Trace][2830][540046g6] XCORE-SRS/4.0.207(Leo)
[2021-12-23 12:56:28.683][Trace][2830][540046g6] config parse complete
[2021-12-23 12:56:28.683][Trace][2830][540046g6] you can check log by: tail -n 30 -f ./objs/srs.log
[2021-12-23 12:56:28.684][Trace][2830][540046g6] please check SRS by: ./etc/init.d/srs status
Program received signal SIGSEGV, Segmentation fault.
strlen () at ../sysdeps/arm/armv6/strlen.S:26
26 ../sysdeps/arm/armv6/strlen.S: No such file or directory.
(gdb) l
21 in ../sysdeps/arm/armv6/strlen.S
(gdb)
crash on 3.0
(gdb) set args -c conf/srs.conf
(gdb) r
Starting program: /opt/srs-3.0release/trunk/objs/srs -c conf/srs.conf
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/arm-linux-gnueabihf/libthread_db.so.1".
[2021-12-23 13:20:00.734][Trace][4516][0] XCORE-SRS/3.0.170(OuXuli)
[2021-12-23 13:20:00.735][Trace][4516][0] config parse complete
[2021-12-23 13:20:00.735][Trace][4516][0] you can check log by: tail -f ./objs/srs.log (@see https://github.com/ossrs/srs/wiki/v1_CN_SrsLog)
[2021-12-23 13:20:00.735][Trace][4516][0] please check SRS by: ./etc/init.d/srs status
Program received signal SIGSEGV, Segmentation fault.
strlen () at ../sysdeps/arm/armv6/strlen.S:26
26 ../sysdeps/arm/armv6/strlen.S: No such file or directory.
(gdb) bt
#0 strlen () at ../sysdeps/arm/armv6/strlen.S:26
#1 0x76a72890 in __vfprintf_internal (s=0x38e870, s@entry=0x0, format=format@entry=0x201270 "-> HLS time=%dms, sno=%d, ts=%s, dur=%.2f, dva=%dp", ap=<EMAIL_ADDRESS>mode_flags=mode_flags@entry=3729904) at vfprintf-internal.c:1688
#2 0x76a8480c in __vsnprintf_internal (
string=0x26410c "-> HLS time=10000739ms, sno=0, ts= 2channels, 0kbps, 44100HZ), flv(16bits, 2channels, 44100HZ)\napp]/[stream]-[seq].ts, aof=2.00, floor=0, clean=1, waitk=1, dispose=0ms, dts_directly=1\nter --with-http-"..., maxlen=, format=0x201270 "-> HLS time=%dms, sno=%d, ts=%s, dur=%.2f, dva=%dp", args=..., mode_flags=mode_flags@entry=0) at vsnprintf.c:114
#3 0x76a84870 in ___vsnprintf (string=, maxlen=, format=, args=...) at vsnprintf.c:124
#4 0x0012dd00 in SrsFastLog::trace (this=0x2640c0, tag=0x0, context_id=187, fmt=0x201270 "-> HLS time=%dms, sno=%d, ts=%s, dur=%.2f, dva=%dp") at src/app/srs_app_log.cpp:151
#5 0x001193dc in SrsHls::hls_show_mux_log (this=0x334700) at src/app/srs_app_hls.cpp:1361
#6 0x001191b0 in SrsHls::on_video (this=0x334700, shared_video=0x38ec88, format=0x334060) at src/app/srs_app_hls.cpp:1345
#7 0x001017dc in SrsOriginHub::on_video (this=0x3346c0, shared_video=0x38ec88, is_sequence_header=false) at src/app/srs_app_source.cpp:1062
#8 0x00108468 in SrsSource::on_video_imp (this=0x334658, msg=0x38ec88) at src/app/srs_app_source.cpp:2303
#9 0x0010803c in SrsSource::on_video (this=0x334658, shared_video=0x3a1b40) at src/app/srs_app_source.cpp:2258
#10 0x000f9d18 in SrsRtmpConn::process_publish_message (this=0x319080, source=0x334658, msg=0x3a1b40) at src/app/srs_app_rtmp_conn.cpp:1021
#11 0x000f9aec in SrsRtmpConn::handle_publish_message (this=0x319080, source=0x334658, msg=0x3a1b40) at src/app/srs_app_rtmp_conn.cpp:993
#12 0x001a5088 in SrsPublishRecvThread::consume (this=0x32f2e0, msg=0x3a1b40) at src/app/srs_app_recv_thread.cpp:389
#13 0x001a3c0c in SrsRecvThread::do_cycle (this=0x32f2e8) at src/app/srs_app_recv_thread.cpp:146
#14 0x001a39f4 in SrsRecvThread::cycle (this=0x32f2e8) at src/app/srs_app_recv_thread.cpp:115
#15 0x0012d4fc in SrsSTCoroutine::cycle (this=0x37e050) at src/app/srs_app_st.cpp:198
#16 0x0012d5a8 in SrsSTCoroutine::pfn (arg=0x37e050) at src/app/srs_app_st.cpp:213
#17 0x001e3240 in _st_thread_main () at sched.c:337
#18 0x001e3b9c in st_thread_create (start=0x2aa590, arg=0x333af0, joinable=3338468, stk_size=3338468) at sched.c:616
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
(gdb)
I temporary close hls and it works.
please fix this small bug.
seem the string is to long for # 4 0x0012dd00 in SrsFastLog::trace
👍 Thanks for your report, very specific.
|
GITHUB_ARCHIVE
|
The Survey pattern uses a data model and a DAX expression to analyze correlation between different transactions related to the same entity, such as a customer’s answers to survey questions.
Basic Pattern Example
Suppose you have an Answers table containing the answers provided to a survey by customers defined in a Customers table. In the Answers table, every row contains an answer to a question. The first rows of the two tables are shown in Figure 1.
The Questions table in Figure 2 contains all the questions and possible answers, providing a unique key for each row. You can have questions with multiple-choice answers.
You import the Questions table twice, naming it Filter1 and Filter2. You rename the columns Question and Answer with a suffix identifying the filter they belong to. Every Filter table will become a possible slicer or filter in the pivot table used to query the survey data model. As you see in Figure 3, the relationships between the Answers table and Filter1 and Filter2 are inactive.
You need two filter tables to define a logical AND condition between two questions. For example, to count how many customers have a job as a teacher and play tennis, you need to apply a calculation such as the one described in the CustomersQ1andQ2 measure below.
[CustomersQ1andQ2] := CALCULATE ( COUNTROWS ( Customers ), CALCULATETABLE ( Answers, USERELATIONSHIP ( Answers[AnswerKey], Filter2[AnswerKey] ) ), CALCULATETABLE ( Answers, USERELATIONSHIP ( Answers[AnswerKey], Filter1[AnswerKey] ) ) )
Once you have this measure, you can use a pivot table to put answers from one question in rows, and put answers from another question in columns. The table in Figure 4 has sports in columns and jobs in rows, and you can see there are 16 customers who are tennis-playing teachers. The last column (Sport Practiced Total) shows how many customers practice at least one sport. For example, 33 teachers practice at least one sport.
If you want to compute the answers to just one question, you cannot use CustomersQ1andQ2, because it requires a selection from two different filters. Instead, use the CustomersQ1 measure, which computes the number of customers that answered the question selected in Filter1, regardless of what is selected in Filter2.
[CustomersQ1] := = CALCULATE ( COUNTROWS ( Customers ), CALCULATETABLE ( Answers, USERELATIONSHIP ( Answers[AnswerKey], Filter1[AnswerKey] ) ) )
In the DAX expression for CustomersQ1, you have to include the USERELATIONSHIP statement because the relationship with the Filter1 table in the data model is inactive, which is a required condition to perform the calculation defined in CustomersQ1andQ2. Figure 5 shows that there are 56 teachers, and you have seen in Figure 4 that only 33 of them practice at least one sport.
You can use the Survey pattern when you want to analyze correlations between events happening to the same entity. The following is a list of some interesting use cases.
Answers to a Survey
A survey form usually has a set of questions with a predefined list of possible answers. You can have both single-choice and multiple-choice questions. You want to analyze correlations between different questions in the same survey, using a single data model that does not change depending on the structure of the survey. The data in the tables define the survey structure so you do not need to create a different structure for every survey.
You can analyze the products bought together in the same transaction, although the Survey pattern can only identify existing relationships. A more specific Basket Analysis pattern is available to detect products that the same customer buys in different transactions.
Evaluation of an Anamnesis (Medical History) Questionnaire
You can structure many questions of an anamnesis questionnaire in a data model that corresponds to the Survey pattern. You can easily analyze the distribution of answers in a set of questionnaires by using a pivot table, with a data model that does not change when new questions are added to the questionnaire. The Survey pattern also handles multiple-choice answers without requiring a column for each answer (which is a common pattern used to adapt this type of data for analysis with Excel).
Create a data model like the one shown in Figure 3. You might replace the Customers table with one that represents an entity collecting answers (e.g., a Form table). It is important to use inactive relationships between the Answers and Filters tables.
You can calculate the answers to a single question, regardless of selections made on other filter tables, with the following measures:
CustomersQ1 := IF ( HASONEVALUE ( Filter1[Question 1] ), CALCULATE ( COUNTROWS ( Customers ), CALCULATETABLE ( Answers, USERELATIONSHIP ( Answers[AnswerKey], Filter1[AnswerKey] ) ) ) )
CustomersQ2 := IF ( HASONEVALUE ( Filter2[Question 2] ), CALCULATE ( COUNTROWS ( Customers ), CALCULATETABLE ( Answers, USERELATIONSHIP ( Answers[AnswerKey], Filter2[AnswerKey] ) ) ) )
The HASONEVALUE function checks whether the user selected only one question. If more than one question is selected in a filter table, the interpretation could be ambiguous: should you consider an AND or an OR condition between the two questions? The IF statement returns BLANK when multiple questions are selected within the same filter table.
Selecting multiple answers, however, is possible and it is always interpreted as an OR condition. For example, if the user selects both Baseball and Football answers for the Sport Practiced question, it means she wants to know how many customers practice baseball, or football, or both. This is the reason why the CALCULATE statement evaluates the number of rows in the Customers table, instead of counting the number of rows in the Answers table.
In case the user uses two filter tables, one question is possible for each filter. The answers to each question are considered in an OR condition, but the two questions are considered in an AND condition. For example, if the user selects Consultant and Teacher answers for the Job question in Filter1, and she selects Baseball and Football for the Sport Practiced question in Filter2, it means she wants to know how many customers who are consultants or teachers also practice baseball, or football, or both. You implement such a calculation with the following measure:
CustomersQ1andQ2 := SWITCH ( TRUE, NOT ( ISCROSSFILTERED ( Filter2[AnswerKey] ) ), [CustomersQ1], NOT ( ISCROSSFILTERED ( Filter1[AnswerKey] ) ), [CustomersQ2], IF ( HASONEVALUE ( Filter1[Question 1] ) && HASONEVALUE ( Filter2[Question 2] ), IF ( VALUES ( Filter2[Question 2] ) <> VALUES ( Filter1[Question 1] ), CALCULATE ( COUNTROWS ( Customers ), CALCULATETABLE ( Answers, USERELATIONSHIP ( Answers[AnswerKey], Filter2[AnswerKey] ) ), CALCULATETABLE ( Answers, USERELATIONSHIP ( Answers[AnswerKey], Filter1[AnswerKey] ) ) ) ) ) )
There are a few more checks in this formula in order to handle special conditions. If there are no filters active on the Filter2 table, then you can use the calculation for a single question, using the CustomersQ1 measure. In a similar way, if there are no filters active on the Filter1 table, you can use the CustomersQ2 measure. The ISCROSSFILTERED function just checks a column of the filter table to do that.
If a filter is active on both the Filter1 and Filter2 tables, then you want to calculate the number of customers satisfying the filters only if the user selected a single but different question in both Filter1 and Filter2; otherwise, you return a BLANK. For example, even if there are no filters on questions and answers in the pivot table rows in Figure 6, there are no duplicated rows with the answers to the Gender question, because we do not want to show an intersection between the same questions.
When you look at the result for a question without selecting an answer, the number you see is the number of unique customers who gave at least one answer to that question. However, it is important to consider that the data model always supports multiple-choice questions, even when the nature of the question is single-choice. For example, the Gender question is a single-choice one and the sum of Male and Female answers should correspond to the number of unique customers who answered the Gender question. However, you might have conflicting answers to the Gender question for the same customer. The data model does not provide any constraint that prevents such a conflict: you have to check data quality before importing data.
Using a drillthrough action on measures used in the Survey pattern will produce unexpected results. The drillthrough only returns data filtered by active relationships in the data model, ignoring any further calculation or filter made through DAX expressions. If you want to obtain the list of customers that gave a particular combination of answers, you have to put the customer name in the pivot table rows and use slicers of pivot table filters to select the desired combination of questions and answers.
Slicer Differences in Excel 2010 and Excel 2013
When you use slicers to display a selection of questions and answers, remember that there is slightly different behavior between Excel 2010 and Excel 2013. If you have a slicer with questions and another with answers for the same filter, you would like the slicer for the answers to display only the possible choices for the selected question. In Excel 2010, you can only change the position of the answers, so that possible choices for the selected question are displayed first in the slicer: to do that, set the Show Items With No Data Last checkbox (in the Slicer Settings dialog box shown in Figure 7).
Using this setting, the Female and Male answers for the selected Gender question are displayed first in the Answer1 slicer, as you see in Figure 8.
With Excel 2013, you can hide the answers belonging to questions that are not selected, by setting the Hide Items With No Data checkbox shown in Figure 9.
In this way, the Answer1 slicer does not display answers unrelated to the selection made in the Question1 slicer, as you see in Figure 10.
Checks whether all arguments are TRUE, and returns TRUE if all arguments are TRUE.
AND ( <Logical1>, <Logical2> )
Specifies an existing relationship to be used in the evaluation of a DAX expression. The relationship is defined by naming, as arguments, the two columns that serve as endpoints.
USERELATIONSHIP ( <ColumnName1>, <ColumnName2> )
Returns true when there’s only one value in the specified column.
HASONEVALUE ( <ColumnName> )
Returns TRUE if any of the arguments are TRUE, and returns FALSE if all arguments are FALSE.
OR ( <Logical1>, <Logical2> )
Checks whether a condition is met, and returns one value if TRUE, and another value if FALSE.
IF ( <LogicalTest>, <ResultIfTrue> [, <ResultIfFalse>] )
Returns a blank.
BLANK ( )
Evaluates an expression in a context modified by filters.
CALCULATE ( <Expression> [, <Filter> [, <Filter> [, … ] ] ] )
Returns true when the specified table or column is crossfiltered.
ISCROSSFILTERED ( <TableNameOrColumnName> )
This pattern is designed for Excel 2010-2013. An alternative version for Power BI / Excel 2016-2019 is also available.
This pattern is included in the book DAX Patterns 2015.
Download the sample files for Excel 2010-2013:
|
OPCFW_CODE
|
Sense Collective is building the infrastructure to connect non-technical people, organizations, and businesses with emerging technologies, products, and services that harness collective intelligence and enable collaborative sensemaking. We call it TotemOS.
TotemOS is valuable for all public health and social capital resources, natural capital and social capital include healthcare, agriculture, local culture, i.e art and music, publishing, media, local retail, in support of IoT, smart city planning, circular economies, and in protecting the health and integrity of democracy itself, as the Totem Identity Manager helps to introduce a number of technologies and tools that are changing the way we manage our identity and authenticate information, with insight into the reputation, influence, impact and equity associated to information and insight markets and assets.
To help begin this journey we are establishing an education and training portal where knowledge seekers and knowledge creators can interface, to scaffold their needs with expertise and iterate towards a coowned set of collective intelligence resources and protocols.
Imagine if those parts of what makes a great system valuable were created and owned by the developers and users that make it useful and powerful. That is TotemOS.
The Capacitor Program
The Capacitor Program is designed to form a super-group of 300 people who represent the diversity that is essential to any future web. Capacitor is part of Sense Collective's MetaGrant program which is designed to bring over two dozen blockchain projects, protocols and tokens (and communities)together to tease out the value propositions and amazing features that are at the center of each project, in a way that makes them immediately explicit and recognizable to non-technical people. Smart contracts, DAO's, TCR's, Wallets, Digital Identities, and signatures are the backbone of a new way of interacting and coordinating, but the user experience of these tools and networks is so far out of reach for most people. While improvements are being made every day to solve the inevitable UI/UX challenges that accompany decentralized technologies, and token integrations, Sense Collective has found that it is possible, and actually really amazing to demonstrate the benefits of blockchain tools, tokens, DAO's, etc, in the simplest way possible, often forsaking some of the elements of the value proposition that make it truly decentralized, but that's ok...
So long as we are incredibly clear about how we go about foreshortening the tech stack and architecture, to model the experience of a smart contract, a DAO, or a TCR, to demonstrate what is happening with Metamask, and the many rounds of signatures, but to do so, by cutting those layers out for now, and representing them in a slightly different way, using Keybase's PGP Key management, KBFS, and Encrypted Git, along with KB's social proofs, to map out a Minimum Viable Dao workflow that allows anyone to create their own model DAO, inside of a Keybase Team, that delivers on around 80% of the features that make DAO's DAO's.
Our blog series will introduce you to a number of new software protocols, products, services and applications that we have developed to unlock vastly more expansive and effective methods for collaboration, with a focus on the coordination of social and natural capital resources.
We will highlight several projects that we are working on to give you a clear picture of how these tools and features work, and to demonstrate the exponential value that is created once we have successfully on-boarded a few hundred practitioners, designers, artists, scientists, influencers, teachers and creatives and they begin to get the hang of using these newly acquired super-powers.
Sense Collective's roadmap seems conservative at first glance as we focus our efforts initially on a small batch of workshops, educational programs, enterprise training sessions, public installation art, music and interactive design events, and the continuing development of our own software and hardware ecosystem, TotemOS. During the first 4 months of 2020, Sense Collective will engage with a few thousand people and through our various events and programming, we will welcome the first 300 members to Sense Collective's Capacitor Program, a unique innovation design program that provides Educational Programming & Enterprise Training and culminates with a month-long Hackathon & Citizen Science program.
|
OPCFW_CODE
|
10:00 ET, 19 February 2014
Security researchers at SANS detected a self-replicating malware (dubbed moon worm) is spreading among a number of different Linksys routers.
Researchers at the SANS Institute discovered a new self-replicating worm that is infecting different Linksys home and small business routers. The investigation started after an Internet service provider in Wyoming noted an unusual network traffic and decided to alert SANS. The SANS researchers were able to detect and isolate the worm setting up honeypots, they dubbed it The Moon because its source code contains numerous strings related lunar theme.
The analysts still haven’t determined whether there is a malicious payload or if the worm connects to a command and control server.
“We haven’t exactly worked out the command and control part yet. There is some evidence of at least a reporting feature,”
In time I’m writing the unique certainty for researchers is that the worm seems to limit its activity to scanning for other vulnerable routers and spreading itself.
“The vulnerability allows the unauthenticated execution of arbitrary code on the router. We haven’t published all the details about the vulnerability yet as it appears to be unpatched in many routers,” said Johannes B. Ullrich, chief technology officer at SANS.
SANS immediately alerted spread the alarm, providing a list of potential vulnerable routers depending on the firmware version they’re running. The list includes the following models:
Once the Moon worm infected the router, it connects to port 8080 and using the Home Network Administration Protocol (HNAP) implemented in Cisco devices, retrieves router characteristics and firmware versions.
The worm appears to extract the router hardware version and the firmware revision. The relevant lines are:
<FirmwareVersion>1.0.07 build 1</FirmwareVersion>
(this is a sample from an E2500 router running firmware version 1.0.07 build 1)
Once Moon worm discovers the router model, it exploits a vulnerable CGI script that allows it to access the router without authentication and starts searching for other vulnerable devices.
“The worm sends random “admin” credentials, but they are not checked by the script. Linksys (Belkin) is aware of this vulnerability.”
The Moon worm has a size of 2MB and all the instances detected by SANS appear identical except for a random trailer at the end of the ELF MIPS binary file.
“There are about 670 different IP ranges that it scans for other routers. They appear to all belong to different cable modem and DSL ISPs. They are distributed somewhat worldwide),” “We are still working on analysis what it exactly does. But so far, it looks like all it does is spread (which is why we call it a worm “It may have a ‘call-home’ feature that will report back when it infected new hosts.” states the blog post.
SANS experts confirm that the Moon worm changes the DNS settings to control victim’s traffic, the behavior is common to other router exploits. Recently the Polish Computer Emergency Response Team has documented a series of cyber attacks observed in Poland involved cybercriminals hacking into home routers and changing their DNS settings to conduct MITM attacks on online banking connections.
“It may make changes to DNS settings like a lot of other router exploits, but this is still work in progress.”
How to discover if a router has been infected by the Moon worm?
The SANS provided the following indicators to detect the malware presence:
- heavy outbound scanning on port 80 and 8080.
- inbound connection attempts to misc ports < 1024.
Detecting potentially vulnerable system:
echo “GET /HNAP1/ HTTP/1.1rnHost: testrnrn” | nc routerip 8080
if you get the XML HNAP output back, then you MAY be vulnerable.
I always suggest to change default settings (e.g. Port number for admin panel) and to limit the access to the remote administrator interface to specific IP addresses.
|
OPCFW_CODE
|